Basic Programming Algorithms: The 7 You’ll Actually Use Every Day
Let’s be real: you don’t need to memorize Dijkstra’s algorithm or implement a red-black tree from scratch to be a productive programmer. In fact, you could go years without touching a graph traversal. But there are a handful of basic programming algorithms that show up constantly—in your code reviews, in technical interviews, and in the libraries you use every day without realizing it.
This tutorial covers the 7 algorithms that actually matter for daily programming. You’ll learn what they do, when to use them, how they perform, and—most importantly—how to recognize when you’re accidentally implementing a worse version of one that already exists.
1. Binary Search: The Workhorse of “Find” Operations
*Divide and Conquer · Search · O(log n)*

What it does: Finds an element in a sorted collection by repeatedly dividing the search space in half. It's why `git bisect` can find the commit that broke your build in seconds.
How it works: Start in the middle. If the target is smaller, search the left half; if larger, search the right half. Repeat until found or the search space is empty.
```python
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # Not found
```
When you'll use it:
- Searching sorted arrays (obviously)
- Finding the insertion point for a new element
- Finding boundaries in monotonic functions ("find the first index where X becomes true")
- Debugging with git bisect
Critical requirement: The data must be sorted. If it's not, sort it first—but that's a separate cost. Binary search is O(log n) on sorted data; without the sort, it's useless.
Pitfall to avoid: The `mid = (low + high) // 2` calculation can overflow in languages with fixed-size integers (looking at you, Java and C++). Use `mid = low + (high - low) // 2` for safety.
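For the "insertion point" and "first index where X becomes true" patterns, you rarely need a hand-rolled loop in Python: the standard `bisect` module implements binary search for you. A small sketch (the list and target values are illustrative):

```python
import bisect

scores = [10, 20, 20, 30, 50]  # must already be sorted

# Insertion point that keeps the list sorted
print(bisect.bisect_left(scores, 25))  # 3

# "First index where value >= 20" — a boundary in a monotonic condition
print(bisect.bisect_left(scores, 20))  # 1
```

`bisect_left` returns the leftmost valid position; `bisect_right` returns the position after any existing equal elements.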
2. Quicksort: The Sorting Algorithm Behind Your Language's `sort()`
*Divide and Conquer · Sorting · O(n log n) average*

What it does: Sorts an array by picking a "pivot" element, partitioning the array into elements less than and greater than the pivot, then recursively sorting the partitions. It's the basis of the sorting routines in many standard libraries.
How it works: Choose a pivot (ideally the median). Move everything smaller to the left, everything larger to the right. Recursively sort both sides. The pivot is now in its final position.
```python
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)
```
When you'll use it:
- Calling `.sort()` in Python, `Arrays.sort()` in Java, or `std::sort` in C++ (C++ and Java's primitive sorts are quicksort-based; Python and Java's object sorts use Timsort, a mergesort hybrid)
- Any time you need to order data for binary search, display, or processing
Why quicksort dominates: Despite the worst-case O(n²) scenario (when the pivot is always the smallest or largest element), quicksort's average case is excellent and it has great cache locality—meaning it's actually faster in practice than theoretically "better" sorts like mergesort.
Production reality: You will almost never write quicksort yourself. Your language's built-in sort is heavily optimized—often Timsort (Python, Java objects), a mergesort/insertion-sort hybrid, or Introsort (C++), a quicksort variant that falls back to heapsort to avoid worst-case behavior.
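In practice, "using quicksort" means calling the built-in sort, often with a key function. A quick illustration (the record data is made up):

```python
users = [("carol", 35), ("alice", 30), ("bob", 25)]

# Sort by age using the built-in sort (Timsort in CPython)
by_age = sorted(users, key=lambda u: u[1])
print(by_age[0])  # ('bob', 25)

# In-place sort by name
users.sort(key=lambda u: u[0])
print(users[0])   # ('alice', 30)
```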
3. Hash Lookup: O(1) Access That Powers Everything
*Hashing · Key-Value · O(1) average*

What it does: Maps keys to values using a hash function that converts the key into an array index. This enables constant-time lookups, insertions, and deletions—on average.
How it works: A hash function takes your key (e.g., "username") and returns an integer. That integer is used as an index into an array. If two keys hash to the same index (a collision), the table uses chaining (linked list) or open addressing (probing) to resolve it.
When you'll use it: Every single day. Python's `dict`, JavaScript's `Map`, Java's `HashMap`, and C++'s `std::unordered_map` are all hash tables. They're used for:
- Caching (memoization)
- Counting frequencies ("find the most common element")
- Duplicate detection ("have I seen this before?")
- Routing tables in network devices
Critical insight: The quality of your hash function determines everything. A bad hash function that maps many keys to the same bucket degrades performance to O(n)—effectively a linked list.
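The counting and duplicate-detection patterns above are short idioms with a hash table underneath (the sample data is illustrative):

```python
from collections import Counter

words = ["red", "blue", "red", "green", "red"]

# Counting frequencies: Counter is a dict subclass
counts = Counter(words)
print(counts.most_common(1))  # [('red', 3)]

# Duplicate detection: "have I seen this before?"
seen, dupes = set(), []
for w in words:
    if w in seen:
        dupes.append(w)  # second and later occurrences
    seen.add(w)
print(dupes)  # ['red', 'red']
```

Both run in O(n) overall because each membership test and insertion is O(1) on average.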
4. BFS and DFS: Traversing Anything Connected
*Graph Traversal · Tree Traversal · O(V + E)*

What they do: Breadth-First Search (BFS) explores level by level, finding shortest paths. Depth-First Search (DFS) explores as far down a branch as possible before backtracking. Together, they're the foundation of graph and tree algorithms.
BFS (queue-based):
```python
from collections import deque

def bfs(graph, start):
    visited = set([start])
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return visited
```
DFS (recursive or stack-based):
```python
def dfs(graph, node, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)
    for neighbor in graph[node]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)
    return visited
```
When you'll use them:
- BFS: Shortest path in unweighted graphs, web crawling, social network "degrees of separation," finding the nearest [something]
- DFS: Topological sorting, cycle detection, maze solving, finding connected components, tree traversals (preorder/inorder/postorder)
Key difference: BFS uses more memory (stores an entire level) but guarantees shortest paths. DFS uses less memory (stores current path) but may not find the optimal path first.
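The shortest-path guarantee falls out of BFS's level-by-level order: record the distance at which each node is first reached. A sketch on a small adjacency-list graph (the graph itself is made up):

```python
from collections import deque

def bfs_distances(graph, start):
    # A node's first visit happens at its minimum hop count from start
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs_distances(graph, "a"))  # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
```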
5. Boyer-Moore Majority Vote: Finding Dominant Elements in One Pass
*Streaming · O(1) Space · O(n) Time*

What it does: Finds the majority element—an element that appears more than n/2 times—in a single pass through the array, using only two variables. No hash maps, no sorting.
How it works: Maintain a `candidate` and a `count`. When `count` is zero, set the current element as `candidate`. For each element: if it matches `candidate`, increment `count`; otherwise, decrement. The majority element (if it exists) will survive with a positive count.
```python
def majority_element(nums):
    candidate = None
    count = 0
    for num in nums:
        if count == 0:
            candidate = num
        count += 1 if num == candidate else -1
    return candidate
```
When you'll use it:
- Finding the dominant item in a stream of data (network packets, user votes)
- Identifying the most frequent element when you know one element dominates
- Interview questions (this is a classic)
Important caveat: This algorithm assumes a majority element exists. If it might not, you need a second pass to verify the candidate actually appears more than n/2 times.
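That verification pass is cheap—one extra O(n) scan. A sketch wrapping the vote phase above:

```python
def verified_majority(nums):
    # Pass 1: Boyer-Moore vote produces a candidate
    candidate, count = None, 0
    for num in nums:
        if count == 0:
            candidate = num
        count += 1 if num == candidate else -1
    # Pass 2: confirm the candidate really exceeds n/2 occurrences
    if nums.count(candidate) > len(nums) // 2:
        return candidate
    return None  # no majority element exists

print(verified_majority([3, 1, 3, 3, 2]))  # 3
print(verified_majority([1, 2, 3]))        # None
```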
6. Dynamic Programming: Remembering So You Don't Recompute
*Memoization · Optimization · Complexity varies*

What it does: Solves complex problems by breaking them into overlapping subproblems, solving each once, and storing the results. It's the algorithmic equivalent of "work smarter, not harder."
The key insight: Many recursive problems recalculate the same values repeatedly. Dynamic programming caches those values. Fibonacci is the classic example—without memoization, it's O(2ⁿ); with it, O(n).
```python
# Without DP: exponential time
def fib_naive(n):
    if n <= 1:
        return n
    return fib_naive(n-1) + fib_naive(n-2)

# With DP (memoization): linear time
memo = {}
def fib_dp(n):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib_dp(n-1) + fib_dp(n-2)
    return memo[n]
```
When you'll use it:
- Optimization problems: "What's the maximum/minimum way to do X?"
- String algorithms: edit distance (Levenshtein), longest common subsequence, regex matching
- Pathfinding with constraints
- Resource allocation (knapsack problems)
Two approaches:
- Top-down (memoization): Start with the recursive solution, add a cache.
- Bottom-up (tabulation): Build the solution from smallest subproblems up.
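The bottom-up flavor for Fibonacci looks like this—same subproblems, but filled in iteratively from the base cases, with no recursion and only O(1) extra space:

```python
def fib_tab(n):
    # Tabulation: build answers from the smallest subproblems upward
    if n <= 1:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr  # only the last two values matter
    return curr

print(fib_tab(10))  # 55
```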
7. Fast Fourier Transform (FFT): The Algorithm You Use Trillions of Times
*Signal Processing · Divide and Conquer · O(n log n)*

What it does: Transforms a signal from the time domain to the frequency domain—and back—efficiently. Computing the transform directly would be O(n²); FFT does it in O(n log n). This is the algorithm that makes modern digital life possible.
How it works: FFT uses a divide-and-conquer strategy (the Cooley-Tukey algorithm) to compute the Discrete Fourier Transform. It recursively breaks the transform into smaller transforms on even- and odd-indexed samples, combining results using complex roots of unity.
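That recursion can be sketched in a few lines for power-of-two input sizes—a teaching toy, not a production FFT (real libraries are vastly more optimized):

```python
import cmath

def fft(x):
    """Minimal recursive Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])  # transform of even-indexed samples
    odd = fft(x[1::2])   # transform of odd-indexed samples
    result = [0] * n
    for k in range(n // 2):
        # Combine the halves using a complex root of unity (twiddle factor)
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        result[k] = even[k] + twiddle
        result[k + n // 2] = even[k] - twiddle
    return result
```

A constant signal like `[1, 1, 1, 1]` transforms to `[4, 0, 0, 0]`: all its energy sits at frequency zero.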
You use it constantly without knowing:
- Every time you stream video or music: Compression formats (MP3, AAC, JPEG, MPEG) rely on FFT-family transforms (JPEG and MPEG use the closely related DCT)
- Every phone call and Wi-Fi connection: OFDM modulation in 4G/5G/Wi-Fi uses FFT
- Medical imaging: MRI reconstruction uses FFT
- Voice assistants: Speech recognition starts with FFT to extract frequency features
You probably won't implement it: FFT is complex to get right. You'll use libraries like NumPy's `numpy.fft`, MATLAB's `fft()`, or specialized DSP libraries. But understanding what it does—converting between time and frequency domains—is essential for any work with signals, audio, images, or communications.
Practical use case: Need to remove a specific frequency noise from an audio recording? Transform to frequency domain with FFT → zero out the noise frequency → inverse FFT back to time domain. Done.
Algorithm Complexity at a Glance
Big-O notation describes how an algorithm's resource requirements grow as input size increases. Here's what you need to know:
| Algorithm | Time (Average) | Time (Worst) | Space | When to Use |
|---|---|---|---|---|
| Binary Search | O(log n) | O(log n) | O(1) | Searching sorted data |
| Quicksort | O(n log n) | O(n²) | O(log n) | General-purpose sorting |
| Hash Lookup | O(1) | O(n) | O(n) | Key-value storage, caching |
| BFS/DFS | O(V + E) | O(V + E) | O(V) | Graph/tree traversal |
| Boyer-Moore Majority | O(n) | O(n) | O(1) | Finding dominant element in stream |
| Dynamic Programming | Problem-dependent (often reduces exponential to polynomial) | Problem-dependent | O(n) to O(n²) typical | Optimization with overlapping subproblems |
| FFT | O(n log n) | O(n log n) | O(n) | Signal/image processing, compression |
How to Choose: A Decision Framework
When faced with a problem, ask these questions in order:
- Am I searching for something in sorted data? → Binary Search
- Do I need to store key-value pairs with fast lookup? → Hash Table
- Do I need to sort data? → Use your language's built-in sort (it's better than yours)
- Am I traversing a network, tree, or graph structure? → BFS (shortest path) or DFS (explore all)
- Am I finding the most frequent element in a stream? → Boyer-Moore Majority
- Am I solving an optimization problem with repeating subproblems? → Dynamic Programming
- Am I working with signals, audio, or images? → FFT (via library)
⚠️ The Most Common Algorithm Mistake
Reinventing the wheel poorly. The built-in algorithms in your language's standard library are almost certainly better optimized than anything you'll write in an afternoon. Python's `list.sort()` uses Timsort, a hybrid of mergesort and insertion sort optimized for real-world data. Java's `Arrays.sort()` uses dual-pivot quicksort for primitives and Timsort for objects. Use them.
Where you need algorithmic thinking is in problem formulation—recognizing that your problem is actually a search problem, or a graph traversal, or a dynamic programming problem. The implementation details often come from libraries; the pattern recognition comes from you.
Algorithms Are Tools, Not Trophies
You don't need to memorize every algorithm in CLRS to be effective. Focus on the 7 in this tutorial—they cover 90% of what you'll encounter in daily programming. Understand why hash lookup is fast, when binary search applies, and how dynamic programming avoids redundant work. That mental model—not rote memorization—is what makes you a better programmer.
The best way to internalize these? Write them once. Implement binary search from scratch. Build a hash table. Watch your naive recursive Fibonacci grind to a halt, then fix it with memoization. One afternoon of hands-on practice will teach you more than hours of reading.