This project has been created as part of the 42 curriculum by dporhomo.
Write a C program called push_swap that calculates and displays the shortest sequence of Push_swap instructions needed to sort the given integers.
The push_swap project is a highly efficient C program designed to solve a complex sorting problem using two stacks (Stack A and Stack B) and a limited set of instructions. The primary goal is to calculate and display the shortest possible sequence of operations to sort a given list of integers in ascending order.
For this implementation, I chose the Greedy Algorithm (also known as the Turk Algorithm). Unlike non-comparative methods like Radix Sort, the Greedy approach uses a cost-optimization strategy that calculates the exact number of operations required to move every available number to its correct position. This dynamic approach actively minimizes move counts, consistently hitting the project's most competitive benchmarks.
This implementation doesn't just push numbers; it calculates the "cost" of every possible move to ensure the most efficient sort.
For every number in Stack A, the algorithm evaluates four distinct rotation scenarios to find the cheapest path to its target in Stack B:
| Scenario | Operation | Description |
|---|---|---|
| **Case 1: rarb** | rr | Both numbers are in the top halves of their respective stacks. |
| **Case 2: rrarrb** | rrr | Both numbers are in the bottom halves. |
| **Case 3: rarrb** | ra + rrb | Stack A needs a rotation, Stack B needs a reverse rotation. |
| **Case 4: rrarb** | rra + rb | Stack A needs a reverse rotation, Stack B needs a rotation. |
- Cost-Optimization: Scans all available numbers and compares four movement scenarios (rotating both stacks, reverse-rotating both, or rotating them individually) to find the cheapest move.
- Double Greedy Strategy: Optimizes move efficiency during both the distribution phase (A to B) and the collection phase (B to A).
- Input Validation: Robustly handles errors such as non-integer arguments, integer overflows, and duplicate values.
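The validation rules can be sketched in plain C. This is an illustrative stand-in, not the project's actual parsing code (`parse_int` and `has_duplicate` are hypothetical names; the real implementation uses `ft_process`, `atoi2`, and `ft_checkdup`):

```c
// Illustrative validation helpers (hypothetical names, not the project's
// ft_* functions). parse_int accepts an optional sign followed by digits
// and rejects anything that does not fit in a 32-bit int.
#include <assert.h>
#include <limits.h>
#include <stddef.h>

int parse_int(const char *s, int *out)
{
    long long   val = 0;
    int         sign = 1;
    size_t      i = 0;

    if (s[i] == '+' || s[i] == '-')
        sign = (s[i++] == '-') ? -1 : 1;
    if (s[i] == '\0')                       // empty string or lone sign
        return (0);
    while (s[i])
    {
        if (s[i] < '0' || s[i] > '9')       // non-digit character
            return (0);
        val = val * 10 + (s[i++] - '0');
        if (val > (long long)INT_MAX + 1)   // past |INT_MIN|: overflow
            return (0);
    }
    if (sign == 1 && val > INT_MAX)         // e.g. "2147483648" overflows
        return (0);
    *out = (int)(sign * val);
    return (1);
}

// Quadratic duplicate scan, mirroring what ft_checkdup must guarantee.
int has_duplicate(const int *arr, size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (arr[i] == arr[j])
                return (1);
    return (0);
}
```

Accumulating into a `long long` lets the sketch detect overflow before the value wraps.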
The project includes a Makefile that supports the standard rules. To compile the program and the bonus checker, use:

```shell
# Compile the main push_swap program
make

# Compile the bonus checker program
make bonus
```
Run push_swap with a list of integers to see the sequence of instructions:

```shell
./push_swap 3 2 1 0
pb
ra
sa
pa
ra
```

Number randomizer:

```shell
ARG=$(python3 -c "import random; print(' '.join(map(str, random.sample(range(-300, 300), 600))))")
```

Validate the output with the checker:

```shell
ARG=$(python3 -c "import random; print(' '.join(map(str, random.sample(range(-500, 500), 100))))"); ./push_swap $ARG | ./checker $ARG
OK
```

Count the number of moves:

```shell
ARG=$(python3 -c "import random; print(' '.join(map(str, random.sample(range(-500, 500), 100))))"); ./push_swap $ARG | wc -l
4945
```

The checker program validates whether the instructions actually sort the stack. You can run it manually or pipe the output of push_swap into it:
Manual usage:

1. Run `./checker 3 2 1 0`.
2. Type instructions (e.g., `pb`, `ra`, `sa`, `pa`, `ra`), each followed by Enter.
3. Press Ctrl+D to send the EOF signal.
4. The program displays OK if the stack is sorted and Stack B is empty; otherwise it displays KO.

Automated usage:

```shell
ARG="3 2 1 0"; ./push_swap $ARG | ./checker $ARG
```
Example with a valid sequence (the stack ends up sorted, so the checker prints OK):

```shell
./checker 3 2 1 0
pb
ra
sa
pa
ra
OK
```

Example with an invalid sequence (the stack does not end up sorted, so the checker prints KO):

```shell
./checker 3 2 1 0
ra
pb
pa
sa
ra
KO
```
To test the checker automatically without manual typing, pipe the output of push_swap directly into it:

```shell
ARG="3 2 1 0"; ./push_swap $ARG | ./checker $ARG
OK
```

This implementation consistently hits top-tier benchmarks because it calculates exact costs rather than relying on fixed patterns like Radix or Butterfly sorts.
- 3 numbers: Max 2-3 moves.
- 100 numbers: Typically < 700 moves.
- 500 numbers: Typically < 5500 moves.
1. Parsing & Initialization: The program uses `ft_process` to split string arguments, convert them to integers via `atoi2`, and build Stack A using linked nodes.
2. Distribution (A to B): Elements are pushed to Stack B in roughly descending order. Before each push, the algorithm calculates the rotation cost for every number in Stack A to reach its correct spot in B.
3. The Final Three: Once Stack A is reduced to three elements, `ft_sort_three` is called to sort them using a hardcoded, highly efficient logic.
4. Collection (B to A): Elements are pushed back to Stack A. The "Double Greedy" logic scans Stack B for the cheapest candidate to return to its target position in Stack A.
5. Final Alignment: After Stack B is empty, a final rotation is performed to ensure the smallest number is at the very top of Stack A.
AI was used in this project to assist with the following tasks:

- Algorithm Conceptualization: Comparing the trade-offs between Radix, Butterfly, and Greedy sorting strategies.
- Logical Documentation: Generating step-by-step breakdowns of the Double Greedy logic and function flows.
- Documentation: Converting technical project notes into a structured and readable README.md format.
Radix Sort is a non-comparative algorithm that sorts numbers by processing their binary bits one by one. It is the most straightforward to implement, relying on a repetitive loop of pushing "0" bits to Stack B and rotating "1" bits in Stack A. However, while it guarantees stable linear complexity, it often results in a higher total move count because it strictly follows a pattern rather than optimizing for the specific state of the stack.
Butterfly Sort (or Chunk Sort) is a distribution algorithm that divides numbers into ranges ("chunks") based on mathematical thresholds like square roots and logarithms. It pushes elements to Stack B to create a roughly sorted curve (often visualized as a butterfly's wings) before pushing the maximum values back to Stack A. While it performs better than Radix, it relies on abstract "magic numbers" to define chunk sizes, making the underlying logic harder to tune and justify intuitively.
Greedy Algorithm (Turk Algorithm) is a heuristic cost-optimization strategy. Instead of forcing a pre-set pattern, it calculates the exact "cost" (number of rotation and push operations) required to move every available number to its correct position in the opposite stack. It then executes the specific move that requires the minimum amount of operations. This dynamic approach typically yields the lowest possible move count (highest score) of the three.
For this project, I have chosen the Greedy Algorithm. This choice strikes the optimal balance between high-performance scoring and logical clarity. Unlike Radix Sort, which sacrifices move efficiency for code simplicity, the Greedy approach actively minimizes operations to meet the project's most competitive benchmarks.
Furthermore, unlike the Butterfly algorithm's reliance on distribution constants, the Greedy logic is transparent - every action is strictly justified by a calculated cost.
The core logic implements a "Double Greedy" strategy, optimizing move efficiency during both the distribution (A to B) and collection (B to A) phases. Initially, the algorithm pushes elements from Stack A to Stack B in descending order. Crucially, before every push, it scans all available numbers in Stack A and calculates the exact operation cost to place each one into its correct position in Stack B. It compares four movement scenarios (rotating both stacks together, reverse-rotating both, or rotating them individually) and executes the specific move combination that costs the least. Once Stack A is reduced to three elements and sorted, the process reverses: elements are pushed back to Stack A in ascending order, again selecting the specific candidate from Stack B that requires the fewest moves to reach its target. A final rotation aligns the sorted stack to ensure the smallest number is at the top.
This specific implementation is a "Double Greedy" approach. It doesn't just push blindly; it checks the cost of moving every single number before making a move, both when going from Stack A to B and when coming back from B to A.
Goal: Move all numbers from Stack A to Stack B (leaving only 3 in A), keeping Stack B sorted in descending order.
- Push the first two numbers from A to B (pb, pb).
- This creates a baseline in Stack B to start comparing against.
While Stack A has more than 3 numbers:

1. Analyze Every Number in A: The code iterates through every single number currently in Stack A.
2. Find Target Position in B: For each number in A (let's call it `nbr`), it calculates where it belongs in Stack B.
   - Rule: Stack B is sorted descending. `nbr` must go above the closest number smaller than it.
   - Exception: If `nbr` is larger than the max or smaller than the min in B, it must go above the max value.
3. Calculate Costs (The 4 Scenarios): For every `nbr`, it calculates the cost (number of operations) of four distinct move combinations to get it to the top of A and its target to the top of B.
   - Case 1: rarb (Rotate Both): Both `nbr` and its target are in the top half of their stacks. Cost: max(index_A, index_B), because rr does both at once.
   - Case 2: rrarrb (Reverse Rotate Both): Both are in the bottom half. Cost: max(distance_from_bottom_A, distance_from_bottom_B), using rrr.
   - Case 3: rarrb (Mixed): `nbr` is in the top half, its target in the bottom half. Cost: index_A + distance_from_bottom_B (sum of ra + rrb).
   - Case 4: rrarb (Mixed): `nbr` is in the bottom half, its target in the top half. Cost: distance_from_bottom_A + index_B (sum of rra + rb).
4. Select Winner: It compares the costs of all numbers in A and picks the one with the absolute lowest cost.
5. Execute Moves: It performs the operations for the winning scenario (e.g., if Case 1 won, it repeats rr until both stacks are aligned).
6. Push: It performs pb to move the chosen number to Stack B.
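The four cost formulas can be sketched as small pure functions. These are hypothetical helpers, not the project's `ft_rotate_type_ab`; `idx_a`/`idx_b` stand for distances from the top of each stack, and `size_a`/`size_b` for the current stack sizes:

```c
// Sketch of the four-scenario cost model (hypothetical helpers, not the
// project's ft_* functions). idx_a/idx_b are distances from the top of
// each stack; size_a/size_b are the current stack sizes.
#include <assert.h>

int cost_rarb(int idx_a, int idx_b)
{
    // Case 1: shared rr rotations overlap, so only the larger count pays
    return (idx_a > idx_b ? idx_a : idx_b);
}

int cost_rrarrb(int idx_a, int idx_b, int size_a, int size_b)
{
    // Case 2: same idea with rrr, but measured from the bottom
    int bot_a = size_a - idx_a;
    int bot_b = size_b - idx_b;
    return (bot_a > bot_b ? bot_a : bot_b);
}

int cost_mixed(int steps_one_way, int steps_other_way)
{
    // Cases 3 and 4: opposite directions cannot share moves, so they add
    return (steps_one_way + steps_other_way);
}

int cheapest_cost(int idx_a, int idx_b, int size_a, int size_b)
{
    int best = cost_rarb(idx_a, idx_b);
    int c = cost_rrarrb(idx_a, idx_b, size_a, size_b);

    if (c < best)
        best = c;
    c = cost_mixed(idx_a, size_b - idx_b);      // Case 3: ra + rrb
    if (c < best)
        best = c;
    c = cost_mixed(size_a - idx_a, idx_b);      // Case 4: rra + rb
    if (c < best)
        best = c;
    return (best);
}
```

The algorithm computes `cheapest_cost` for every candidate in Stack A and executes the moves for the overall winner.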
Goal: Sort the final 3 numbers left in Stack A.
- The code calls ft_sort_three.
- This is a simple, hardcoded check (e.g., if max is at top, ra; if top > middle, sa) to ensure these 3 are perfectly sorted in ascending order (e.g., 1, 2, 3).
Goal: Move everything back from Stack B to Stack A, placing them in their correct ascending order spots.
Note: Most greedy implementations just take the top of B. This implementation is smarter; it scans B to find the "cheapest" return candidate.
While Stack B is not empty:

1. Analyze Every Number in B: It iterates through every number in Stack B.
2. Find Target Position in A: For each number in B (`nbr`), it finds the correct spot in A.
   - Rule: Stack A is sorted ascending. `nbr` must go above the closest number larger than it.
   - Exception: If `nbr` is larger than the max or smaller than the min in A, it must go above the min value.
3. Calculate Costs: It repeats the same 4-scenario calculation (rarb, rrarrb, etc.), but this time calculating the moves to get `nbr` to the top of B and its target to the top of A.
4. Execution:
   - It selects the number in B that requires the fewest moves.
   - It executes the shared rotations (rr/rrr) or individual rotations to align both stacks.
   - It performs pa to push the number home to Stack A.
Goal: The stack is now sorted, but the start (lowest number) might not be at index 0 (e.g., 3, 4, 1, 2).
- Find Min: Locate the smallest number in Stack A.
- Calculate Distance: Is it faster to rotate up (ra) or reverse rotate down (rra)?
- Rotate: Perform the necessary rotations until the smallest number is at the top.
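The direction choice can be sketched as a small helper (illustrative only, not the project's code; `min_index` is the minimum's distance from the top):

```c
// Sketch of the ra-vs-rra decision (hypothetical helper). Returns the
// cheaper move count to bring min_index to the top of a stack of the
// given size; *use_rra is set when reverse rotation wins.
#include <assert.h>

int align_cost(int min_index, int size, int *use_rra)
{
    int down = size - min_index;   // moves needed with reverse rotation

    if (min_index <= down)
    {
        *use_rra = 0;              // rotate up with ra
        return (min_index);
    }
    *use_rra = 1;                  // reverse rotate with rra
    return (down);
}
```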
The program starts and receives its arguments: one or more strings of numbers.

First, it builds stack_a. `ft_process` prepares stack_a:

- splits the strings using `ft_split()`,
- converts them into integers via `atoi2()`,
- creates new nodes with `ft_stack_new()`, and
- builds stack_a by appending the new nodes via `ft_add_back()`.

Second, the program checks for duplicate integers via `ft_checkdup()`. If there are duplicates:

- the program frees stack_a with `ft_free()` and
- returns "Error" via `ft_error()`.

Third, the program checks whether stack_a is already sorted:

- If it is, the program clears stack_a with `ft_free()` and terminates with return (0).
- If it isn't, the program runs the sort algorithm by calling `ft_sort()`.
Stack_a has been created by `ft_process()`.

Now, `ft_sort()` declares stack_b and sets it to NULL. Then it uses `ft_lstsize()` to determine the size of stack_a.

If the size of stack_a equals 2, it calls `ft_sa(stack_a, 0)`.

If the size of stack_a is greater than 2, it performs:

1. set stack_b to `ft_sort_b(stack_a)`
2. set stack_a to `ft_sort_a(stack_a, &stack_b)`
3. find the min number and rotate stack_a until the min number is at the top.

For the last step, it finds the current position (index) of the smallest value in Stack A, where 0 is the top of the stack, and compares the distance from the top against the distance from the bottom. If the min number is closer to the top, it rotates the stack up (ra); otherwise, it reverse rotates the stack down (rra):

```c
i = ft_find_index(*stack_a, ft_min(*stack_a));
if (i < ft_lstsize(*stack_a) - i)
    while ((*stack_a)->nbr != ft_min(*stack_a))
        ft_ra(stack_a, 0);
else
    while ((*stack_a)->nbr != ft_min(*stack_a))
        ft_rra(stack_a, 0);
```
The objective of this function is to move elements from Stack A to Stack B until only three elements remain in A, while keeping Stack B in a "near-sorted" state.
1. Initial Pushes: If Stack A has more than 3 elements and is not sorted, the program immediately pushes the first two elements from Stack A to Stack B using `ft_pb(stack_a, &stack_b, 0)`.
2. The "Cost-Calculation" Loop (`ft_sort_b_till_3`): For all subsequent elements, until only 3 remain in A:
   - Find Best Move: It calls `ft_rotate_type_ab` to calculate which element in Stack A can be moved to its correct position in Stack B with the fewest total operations.
   - Identify the Case: It checks which rotation combination is most efficient: rarb, rrarrb, rarrb, or rrarb.
   - Apply Rotations: It executes the rotations via the `ft_apply_...` functions, which use `ft_ra`, `ft_rb`, `ft_rra`, or `ft_rrb` to align both stacks.
   - Push: Once aligned, the element is pushed to B.
3. Handle the Remainder: When exactly 3 elements are left in Stack A, it calls `ft_sort_three(stack_a)`.
This function handles the three remaining elements in Stack A using the minimum number of moves (a maximum of 2).

- Case 1: If the smallest number is at the top, it rotates and swaps.
- Case 2: If the largest number is at the top, it rotates up. If the stack is still not sorted, it swaps the top two.
- Case 3: If the middle number is at the top, it either reverse rotates or swaps, based on where the maximum value is located.
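One possible decision table for the three-element cases can be sketched over a plain array standing in for the stack. This is illustrative only, not the project's linked-list `ft_sort_three` (the case ordering differs slightly), but it sorts every permutation in at most two moves:

```c
// Illustrative three-element sorter over a plain array, where index 0 is
// the top of the stack: ra sends the top to the bottom, rra brings the
// bottom to the top, sa swaps the top two.
#include <assert.h>

int is_sorted3(const int s[3])
{
    return (s[0] < s[1] && s[1] < s[2]);
}

int sort_three(int s[3], const char *ops[2])
{
    int n = 0;

    while (n < 2 && !is_sorted3(s))
    {
        if (s[0] > s[1] && s[0] > s[2])
        {   // largest on top: rotate it to the bottom
            int t = s[0]; s[0] = s[1]; s[1] = s[2]; s[2] = t;
            ops[n++] = "ra";
        }
        else if (s[0] > s[1])
        {   // top two out of order: swap them
            int t = s[0]; s[0] = s[1]; s[1] = t;
            ops[n++] = "sa";
        }
        else
        {   // largest in the middle: bring the bottom to the top
            int t = s[2]; s[2] = s[1]; s[1] = s[0]; s[0] = t;
            ops[n++] = "rra";
        }
    }
    return (n);    // number of moves emitted (0, 1, or 2)
}
```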
Now that Stack A is sorted (with 3 elements) and Stack B contains the rest, the program moves everything back from B to A.
1. Iterative Push-Back: While Stack B is not empty:
   - Calculate Costs: It calls `ft_rotate_type_ba` to find the most efficient element in B to move back to its correct position in A.
   - Find Position: It uses `ft_find_place_a` to determine exactly where the number from B fits between the elements in A.
   - Execute Moves: Just like Step 1, it uses the `ft_apply_...` functions to perform simultaneous or individual rotations on Stacks A and B.
   - Push Back: It calls `ft_pa(stack_a, stack_b, 0)` to move the element.
After `ft_sort_a` finishes, all numbers are in Stack A and the stack is technically sorted, but it might be "rotated" (e.g., 3, 4, 5, 1, 2).

1. The program finds the index of the minimum value using `ft_find_index`.
2. It calculates whether it is faster to rotate up (`ft_ra`) or down (`ft_rra`) to bring that minimum value to the top.
3. The stack is now fully sorted, the memory is freed via `ft_free`, and the program terminates.
Analysis of algorithms
The process of finding the computational complexity of algorithms.

Computational complexity of algorithms
The amount of time, storage, or other resources needed to execute an algorithm. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to the growth in the size of the input. Different inputs of the same size may cause the algorithm to behave differently.
Computational complexity theory
Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms.
Big O notation
It is common to estimate algorithmic complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O is a mathematical notation that describes the approximate size of a function on a domain. The letter O was chosen by Bachmann to stand for the Order of approximation; it is used because the growth rate of a function is also referred to as the order of the function. A description of a function in terms of big O notation only provides an upper bound on its growth rate.
Model of computation
It describes how an output of a mathematical function is computed given an input. A model of computation describes how units of computations, memories, and communications are organized. The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology.
Think of a "model of computation" simply as a set of rules for how a computer processes information to solve a problem.
In this model, the computer performs one action at a time, in a specific order. It cannot start the next step until the current one is finished. This is how most people intuitively think computers work.
- How it works: It reads an instruction, changes its internal memory (state), and moves to the next instruction.
- Real-world analogy: Baking a cake. You must crack the eggs before you whisk them, and you must whisk them before you bake. You cannot do everything at once.
- Classic Example: The Turing Machine, a theoretical machine that reads a strip of tape one square at a time to perform calculations.
This model treats computation as the evaluation of mathematical functions. It focuses on what needs to be solved rather than how to change the computer's memory.
- How it works: You feed data into a function, and it gives you a result. Crucially, it avoids "side effects", meaning it doesn't change any hidden data in the background. If you feed it the same input, you will always get the exact same output.
- Real-world analogy: A factory assembly line of transformation. Raw metal goes in one end, passes through a stamper (function A), then a painter (function B), and comes out as a car part. The stamper doesn't care about the history of the factory; it just stamps what is in front of it.
- Classic Example: Lambda Calculus, a formal system in logic for expressing computation based on function abstraction and application.
In this model, multiple calculations happen at the same time. Different parts of the system communicate with each other to solve a larger problem.
- How it works: Independent agents or processes work simultaneously. They might share information or send messages to coordinate, but they don't have to wait for each other to finish every single task.
- Real-world analogy: A busy restaurant kitchen. The chef is grilling steaks while the sous-chef chops vegetables and the waiter takes new orders. They are all working at the same time to serve dinner.
- Classic Example: Petri Nets or the Actor Model, which formalize how independent systems synchronize and share resources.
Cost models
A cost model is simply a method for estimating a computer program's speed by defining the "price" of each operation. The Uniform Cost Model works like a "flat rate" shipping box: it assumes every step (like adding two numbers) takes exactly one unit of time, regardless of how big the numbers are, which makes it simple and effective for standard software. However, when programs use massive numbers (as in cryptography), this flat rate becomes inaccurate, so experts switch to the Logarithmic Cost Model. This model charges "by weight", calculating the cost based on the specific number of bits (digits) involved, to accurately reflect the extra time the computer needs to process larger data.
Run-time analysis
Run-time analysis is a theoretical classification that estimates and anticipates the increase in running time (or run-time or execution time) of an algorithm as its input size (usually denoted as n) increases.
Software profiling
While software profiling techniques can be used to measure an algorithm's run-time in practice, they cannot provide timing data for all of the infinitely many possible inputs; the latter can only be achieved by the theoretical methods of run-time analysis.
"Big O notation describes the worst-case scenario for how many operations the code performs relative to the input size (N). It tells us how much slower the program gets as we add more numbers."
Common Complexities:

- O(1) - Constant Time
  - Meaning: "Instant." It takes exactly the same time regardless of whether you have 1 number or 1 million.
  - Analogy: Accessing a specific index in an array.
- O(log N) - Logarithmic Time
  - Meaning: "Very Fast." The time grows very slowly. If you double the input, you only add one extra step.
  - Analogy: Finding a word in a dictionary by repeatedly splitting the book in half.
- O(N) - Linear Time
  - Meaning: "Proportional." If you double the input, it takes exactly twice as long.
  - Analogy: Reading a book page by page.
- O(N log N) - Linearithmic Time
  - Meaning: "Efficient Sorting." Slightly slower than linear, but much faster than quadratic. This is the gold standard for sorting.
  - Analogy: Most efficient sorting algorithms (like Merge Sort) fall here.
- O(N^2^) - Quadratic Time
  - Meaning: "Slow." If you double the input, it takes 4 times as long. If you multiply the input by 10, the time goes up by 100.
  - Analogy: Nested loops (comparing everyone to everyone).
- O(2^N^) - Exponential Time
  - Meaning: "Impossible." Adding just one number doubles the workload.
  - Analogy: Brute-forcing a password.
Sorting Algorithms - algorithms that order data.
Algorithmic Techniques - strategies to solve specific problems.
- Sorting Algorithms: Bubble, Radix, Bucket, Insertion, Merge, Quick, Greedy (Turk), Butterfly.
- Algorithmic Techniques: Two Pointers, Sliding Window, BFS, DFS, Binary Search, Backtracking.
1. Radix Sort

- How it works: Sorts numbers by processing them digit by digit (or bit by bit in binary), usually from the least significant to the most significant.
- Complexity: O(N⋅K) (Linear).
  - Explanation: N is the amount of numbers, K is the number of bits (digits). The time depends purely on how many numbers you have; it doesn't compare numbers against each other.
- Pros: Very stable move count; easy to code using bitwise operations.
- Cons: Not the absolute most efficient move count for small lists.
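A toy, array-based simulation of this bit-by-bit strategy (illustrative only; it assumes the values have already been indexed to their 0..n-1 ranks and that n is at least 1 and at most 64):

```c
// Toy simulation of binary radix over the two stacks. Index 0 models the
// top of Stack A; b[] collects the "pb" elements, ones[] the "ra" ones.
#include <assert.h>
#include <string.h>

void radix_sort_stacks(int a[], int n)
{
    int b[64];      // stack B: elements whose current bit is 0
    int ones[64];   // elements rotated within A (current bit is 1)
    int max_bits = 0;

    while (((n - 1) >> max_bits) != 0)    // bits needed for ranks 0..n-1
        max_bits++;
    for (int bit = 0; bit < max_bits; bit++)
    {
        int bn = 0;
        int on = 0;

        for (int i = 0; i < n; i++)       // one full pass over stack A
        {
            if (((a[i] >> bit) & 1) == 0)
                b[bn++] = a[i];           // pb: zero bit goes to B
            else
                ones[on++] = a[i];        // ra: one bit stays in A
        }
        // pa everything back: B lands on top of the rotated remainder.
        memcpy(a, b, bn * sizeof(int));
        memcpy(a + bn, ones, on * sizeof(int));
    }
}
```

Because each pass is stable, sorting by every bit from least to most significant leaves the ranks in ascending order.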
2. Greedy (Turk Algorithm)

- How it works: Pushes everything to Stack B. Then, for every number in B, it calculates the "cost" (moves) to put it into the correct spot in A. It moves the cheapest one.
- Complexity: O(N^2^) (Quadratic).
  - Explanation: For every number you want to move (N), you iterate through the stack to check positions (N). N×N = N^2^. However, because the stacks shrink, it runs fast enough in practice.
- Pros: Produces the lowest move count (highest score) for Push_Swap.
- Cons: The mathematical logic is harder to implement.
3. Butterfly (Chunk Sort / Bucket Variation)

- How it works: Define a range (chunk). Push all numbers in that range from A to B. Once B is full, push everything back to A.
- Complexity: O(N√N) or O(N log N), depending on chunk sizing.
  - Explanation: You scan the list (N) essentially once per chunk. It is faster than quadratic, but slower than pure linear.
- Pros: Good balance of code simplicity and score.

4. Bubble Sort

- How it works: Repeatedly steps through the list, swapping adjacent elements if they are in the wrong order.
- Complexity: O(N^2^) (Quadratic).
  - Explanation: You loop through the list (N) for every item in the list (N). Extremely slow for large data.
- Relevance: Only used in Push_Swap for sorting 2 or 3 elements.
5. Insertion Sort

- How it works: Builds the sorted list one item at a time by taking an item and shifting it into the correct slot.
- Complexity: O(N^2^) (Quadratic).
  - Explanation: Like sorting a hand of playing cards. For every new card, you scan your hand to find where it fits.
- Relevance: Good for small lists (size < 10).
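A classic array version for reference (generic textbook code, not part of the push_swap codebase):

```c
// Classic insertion sort: each new element is shifted left until it
// reaches its slot, like inserting a card into a sorted hand.
#include <assert.h>

void insertion_sort(int arr[], int n)
{
    for (int i = 1; i < n; i++)
    {
        int key = arr[i];
        int j = i - 1;

        while (j >= 0 && arr[j] > key)
        {
            arr[j + 1] = arr[j];   // shift larger elements right
            j--;
        }
        arr[j + 1] = key;          // drop the element into its slot
    }
}
```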
6. Selection Sort

- How it works: Scans the entire list to find the smallest number and moves it to the front. Repeats for the rest.
- Complexity: O(N^2^) (Quadratic).
  - Explanation: Even if the list is sorted, it still scans everything. Very inefficient.
- Relevance: Conceptually simple, but rarely used due to slowness.
7. Quicksort

- How it works: Picks a "pivot" element, puts smaller numbers on the left and larger on the right, then recursively does the same for the subarrays.
- Complexity: O(N log N) (Linearithmic) on average.
  - Explanation: Dividing the list in half repeatedly (log N) and processing each item (N).
- Relevance: Fast for arrays, but hard to implement with Push_Swap stacks because you can't access random indices.

8. Merge Sort

- How it works: Divides the list into halves until they are single items, then merges them back together in order.
- Complexity: O(N log N) (Linearithmic).
  - Explanation: Guaranteed speed. It never degrades to O(N^2^) like Quicksort can.
- Relevance: Excellent logic, but requires extra memory space, which fits poorly with the "two stacks" constraint.
9. Heap Sort

- How it works: Organizes data into a "heap" (a tree structure where the parent is always larger than its children). Repeatedly grabs the root (largest) element.
- Complexity: O(N log N).
  - Explanation: Very consistent speed, but shuffling elements in a heap is often slower in practice than Quicksort.
- Relevance: Not practical for Push_Swap.

10. Shell Sort

- How it works: A variation of Insertion Sort that compares items far apart (using a "gap") and slowly reduces the gap.
- Complexity: O(N(log N)^2^) or O(N^1.5^).
  - Explanation: Faster than O(N^2^) because it moves big items to their destination with giant leaps rather than tiny steps.
11. Bucket Sort

- How it works: Scatters elements into different "buckets" (ranges), sorts the buckets, then gathers them.
- Complexity: O(N+K).
  - Explanation: If the data is uniformly distributed, it is incredibly fast (linear).
- Relevance: The "Butterfly" algorithm is a variation of this.

12. Counting Sort

- How it works: Counts how many times each specific number appears (e.g., "there are three 5s"), then reconstructs the list from the counts.
- Complexity: O(N+K).
  - Explanation: Only works for integers. Very fast if the range of numbers (K) is small.
- Relevance: The "indexing" step in Push_Swap (converting huge numbers to 0..N ranks) is a form of pre-processing similar to this concept.
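The rank-indexing idea can be sketched as follows (hypothetical helper, quadratic for brevity; it assumes the input has no duplicates, which push_swap guarantees after validation):

```c
// Sketch of the "indexing" pre-processing: replace every value by its
// rank (0..n-1) so arbitrary integers become a compact range suitable
// for bit-based or chunk-based strategies.
#include <assert.h>

void index_ranks(const int *values, int *ranks, int n)
{
    for (int i = 0; i < n; i++)
    {
        int rank = 0;

        for (int j = 0; j < n; j++)
            if (values[j] < values[i])   // count strictly smaller values
                rank++;
        ranks[i] = rank;                 // assumes no duplicate values
    }
}
```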
13. Timsort

- How it works: The default sort in Python and Java. It is a hybrid of Merge Sort and Insertion Sort that looks for naturally ordered runs in the data.
- Complexity: O(N log N).
  - Explanation: Highly optimized for "real world" data, which often contains partially sorted chunks.

14. Tree Sort

- How it works: Inserts all elements into a Binary Search Tree (BST), then performs an in-order traversal.
- Complexity: O(N log N) average.
  - Explanation: Building the tree takes time. If the tree becomes unbalanced, it can slow to O(N^2^).

15. Cubesort

- How it works: A specialized parallel algorithm designed to sort data on multi-dimensional hypercubes.
- Complexity: O(N log N).
  - Explanation: Extremely theoretical. Useful for supercomputers, irrelevant for standard coding projects.
These are tools you use to build solutions, not sorting algorithms themselves.

16. Two Pointers

- Definition: Using two indices (e.g., one at the start, one at the end) to traverse data.
- Complexity: Reduces O(N^2^) problems to O(N).
- Push_Swap Usage: Checking the top and bottom of a stack simultaneously to see which number is cheaper to move.

17. Sliding Window

- Definition: Creating a "window" of size K over an array and sliding it forward one step at a time.
- Complexity: O(N).
  - Explanation: Instead of recalculating the whole group, you just subtract the element leaving and add the element entering.
- Push_Swap Usage: Used to find the Longest Increasing Subsequence (the largest group of sorted numbers you can keep in Stack A).
18. BFS (Breadth-First Search)

- Definition: Exploring a graph layer by layer (checking neighbors first).
- Complexity: O(V+E) (Vertices + Edges).
  - Explanation: You visit every possible state once.
- Push_Swap Usage: The only way to find the perfect solution for sizes 3-5. It checks "all possible 1-move sequences", then "all possible 2-move sequences", and so on.

19. DFS (Depth-First Search)

- Definition: Exploring a path as deep as possible before backtracking.
- Complexity: O(V+E).
  - Explanation: Goes deep fast.
- Push_Swap Usage: Not recommended. You might go down a path of 1000 rotations and miss a solution that was only 2 moves away.
20. Binary Search

- Definition: Finding a target in a sorted list by repeatedly cutting the search area in half.
- Complexity: O(log N).
  - Explanation: Extremely fast. Even with 1 million items, it only takes ~20 steps to find a number.
- Push_Swap Usage: Finding where to insert a number into a sorted stack (calculating costs).
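A reference implementation of the halving loop (generic C, not a project function):

```c
// Standard iterative binary search on a sorted array; returns the index
// of target, or -1 if it is absent. Takes about log2(N) iterations.
#include <assert.h>

int binary_search(const int *arr, int n, int target)
{
    int lo = 0;
    int hi = n - 1;

    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;   // avoids (lo + hi) overflow
        if (arr[mid] == target)
            return (mid);
        if (arr[mid] < target)
            lo = mid + 1;               // search the right half
        else
            hi = mid - 1;               // search the left half
    }
    return (-1);
}
```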
21. Backtracking

- Definition: Building a solution piece by piece and "undoing" (backtracking) if you hit a dead end.
- Complexity: Exponential O(2^N^) or factorial O(N!).
  - Explanation: The "brute force" approach. It tries every single combination.
- Push_Swap Usage: Only for very tiny datasets where accuracy is more important than speed.
To sort the integers in descending order, the comparison logic must be inverted across several files. Rotating an already sorted ascending stack will not work, since the circular relative order remains the same; what must change is how the algorithm perceives "correct" placement and the final sorted state.
The success condition must be flipped so the function returns 1 only if each number is larger than the one following it.

```c
#include "../../includes/push_swap.h"

int ft_checksorted(t_stack *stack_a)
{
    /* ... */
    if (i < stack_a->nbr) // Changed from > to <
    /* ... */
}
```
The placement logic determines where a number is inserted during the "push" operations. To sort descending, the "place" functions must search for the opposite mathematical relationship.

```c
int ft_find_place_b(t_stack *stack_b, int nbr_push)
{
    // Logic inverted to find the spot for descending order
    if (nbr_push < stack_b->nbr && nbr_push > /* ... */)
        /* ... */
    else if (nbr_push < ft_min(stack_b) || nbr_push > ft_max(stack_b))
        i = ft_find_index(stack_b, ft_min(stack_b)); // Smallest on top for B
    else
    {
        tmp = stack_b->next;
        while (stack_b->nbr > nbr_push || tmp->nbr < nbr_push)
            /* ... */
    }
}
```
```c
int ft_find_place_a(t_stack *stack_a, int nbr_push)
{
    // Logic inverted: find where nbr_push is LARGER than the next
    // element and SMALLER than the previous one
    if (nbr_push > stack_a->nbr && nbr_push < ft_lstlast(stack_a)->nbr)
        i = 0;
    else if (nbr_push > ft_max(stack_a) || nbr_push < ft_min(stack_a))
        i = ft_find_index(stack_a, ft_max(stack_a)); // Max on top for A
    else
    {
        tmp = stack_a->next;
        while (stack_a->nbr < nbr_push || tmp->nbr > nbr_push)
            /* ... */
    }
}
```
After the algorithm pushes all numbers back into Stack A, the stack is "sorted" but may be rotated. The final alignment must be changed to bring the maximum value to the top instead of the minimum.

```c
void ft_sort(t_stack **stack_a)
{
    /* ... */
    stack_b = ft_sort_b(stack_a);
    stack_a = ft_sort_a(stack_a, &stack_b);
    // Bring MAX to the top for descending order
    i = ft_find_index(*stack_a, ft_max(*stack_a));
    if (i < ft_lstsize(*stack_a) - i)
        while ((*stack_a)->nbr != ft_max(*stack_a))
            ft_ra(stack_a, 0);
    /* ... */
}
```
The sort_three logic is hardcoded for ascending permutations. It must be updated to target the 3-2-1 state.

```c
void ft_sort_three(t_stack **stack_a)
{
    if (ft_max(*stack_a) == (*stack_a)->nbr)
    {
        ft_ra(stack_a, 0);
        ft_sa(stack_a, 0);
    }
    else if (ft_min(*stack_a) == (*stack_a)->nbr)
    {
        ft_ra(stack_a, 0);
        if (!ft_checksorted(*stack_a))
            ft_sa(stack_a, 0);
    }
    else
    {
        if (ft_find_index(*stack_a, ft_min(*stack_a)) == 1)
            ft_rra(stack_a, 0);
        else
            ft_sa(stack_a, 0);
    }
}
```
- Placement: shifted the logic from "finding where it's bigger than the top" to "finding where it's smaller than the top".
- Final Alignment: shifted the final anchor from ft_min to ft_max.
- Checker: If the bonus checker is used, it will now only return OK if the stack is perfectly descending because it uses the updated ft_checksorted.