Basic Programming Algorithms: The 7 You’ll Actually Use Every Day


Let’s be real: you don’t need to memorize Dijkstra’s algorithm or implement a red-black tree from scratch to be a productive programmer. In fact, you could go years without touching a graph traversal. But there are a handful of basic programming algorithms that show up constantly—in your code reviews, in technical interviews, and in the libraries you use every day without realizing it.

This tutorial covers the 7 algorithms that actually matter for daily programming. You’ll learn what they do, when to use them, how they perform, and—most importantly—how to recognize when you’re accidentally implementing a worse version of one that already exists.

🎯 What you’ll learn: Which algorithms dominate everyday programming, how to choose between them, and the time/space tradeoffs that actually matter in production.

2. Quicksort: The Sorting Algorithm Behind Your Language's `sort()`

Divide and Conquer · Sorting · O(n log n) avg

What it does: Sorts an array by picking a "pivot" element, partitioning the array into elements less than and greater than the pivot, then recursively sorting the partitions. It's the foundation of sorting in most standard libraries.

How it works: Choose a pivot (ideally the median). Move everything smaller to the left, everything larger to the right. Recursively sort both sides. The pivot is now in its final position.

def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)
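The list-comprehension version above is easy to read, but it allocates new lists at every level of recursion, costing O(n) extra space. Production quicksorts partition in place. Here is a sketch using the Lomuto partition scheme (one common variant, chosen here for simplicity), which also shows concretely why the pivot ends up in its final position:

```python
def quicksort_inplace(arr, lo=0, hi=None):
    """In-place quicksort using Lomuto partitioning (illustrative sketch)."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return arr
    pivot = arr[hi]                      # use the last element as the pivot
    i = lo                               # boundary of the "smaller than pivot" region
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]    # the pivot lands in its final position
    quicksort_inplace(arr, lo, i - 1)    # sort the left partition
    quicksort_inplace(arr, i + 1, hi)    # sort the right partition
    return arr
```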

When you'll use it:

  • Calling `.sort()` in Python, `Arrays.sort()` in Java, or `std::sort` in C++ (all heavily optimized hybrid sorts)
  • Any time you need to order data for binary search, display, or processing

Why quicksort dominates: Despite the worst-case O(n²) scenario (when the pivot is always the smallest or largest element), quicksort's average case is excellent and it has great cache locality—meaning it's actually faster in practice than theoretically "better" sorts like mergesort.

Production reality: You will almost never write quicksort yourself. Your language's built-in sort is heavily optimized—often Timsort (Python, Java), a mergesort/insertion-sort hybrid, or Introsort (C++), a quicksort variant with safeguards against worst-case behavior.


3. Hash Lookup: O(1) Access That Powers Everything

Hashing · Key-Value · O(1) avg

What it does: Maps keys to values using a hash function that converts the key into an array index. This enables constant-time lookups, insertions, and deletions—on average.

How it works: A hash function takes your key (e.g., "username") and returns an integer. That integer is used as an index into an array. If two keys hash to the same index (a collision), the table uses chaining (linked list) or open addressing (probing) to resolve it.
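To make chaining concrete, here is a toy hash table—illustrative only, since real implementations resize dynamically and use far more sophisticated collision handling:

```python
class ChainedHashTable:
    """Toy hash table using separate chaining: each bucket is a list of pairs."""

    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _bucket(self, key):
        # Hash the key, then reduce it to a valid array index.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))       # new key (or collision): chain it

    def get(self, key):
        for k, v in self._bucket(key):    # scan the chain: O(1) if chains stay short
            if k == key:
                return v
        raise KeyError(key)
```

With a tiny bucket count, collisions are guaranteed—and lookups still work, just by scanning longer chains, which is exactly the O(n) degradation described below.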

When you'll use it: Every single day. Python's `dict`, JavaScript's `Map`, Java's `HashMap`, and C++'s `std::unordered_map` are all hash tables. They're used for:

  • Caching (memoization)
  • Counting frequencies ("find the most common element")
  • Duplicate detection ("have I seen this before?")
  • Routing tables in network devices
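The frequency-counting pattern, for instance, is one dict and one pass (function and variable names here are illustrative):

```python
def most_common(items):
    """Return the most frequent item using a dict of counts."""
    counts = {}
    for item in items:
        counts[item] = counts.get(item, 0) + 1  # O(1) average lookup + insert
    return max(counts, key=counts.get)          # item with the highest count

most_common(["a", "b", "a", "c", "a", "b"])  # 'a'
```

In practice you would reach for `collections.Counter`, which packages this exact pattern.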

Critical insight: The quality of your hash function determines everything. A bad hash function that maps many keys to the same bucket degrades performance to O(n)—effectively a linked list.

4. BFS and DFS: Traversing Anything Connected

Graph Traversal · Tree Traversal · O(V + E)

What they do: Breadth-First Search (BFS) explores level by level, finding shortest paths. Depth-First Search (DFS) explores as far down a branch as possible before backtracking. Together, they're the foundation of graph and tree algorithms.

BFS (queue-based):

from collections import deque

def bfs(graph, start):
    visited = set([start])
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return visited

DFS (recursive or stack-based):

def dfs(graph, node, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)
    for neighbor in graph[node]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)
    return visited

When you'll use them:

  • BFS: Shortest path in unweighted graphs, web crawling, social network "degrees of separation," finding the nearest [something]
  • DFS: Topological sorting, cycle detection, maze solving, finding connected components, tree traversals (preorder/inorder/postorder)

Key difference: BFS uses more memory (stores an entire level) but guarantees shortest paths. DFS uses less memory (stores current path) but may not find the optimal path first.
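The BFS sketch above only records which nodes are reachable. Tracking a distance per node turns it into a shortest-path finder for unweighted graphs (the graph shape here is illustrative):

```python
from collections import deque

def bfs_distances(graph, start):
    """Shortest path length (in edges) from start to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1  # one level deeper than its parent
                queue.append(neighbor)
    return dist

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
bfs_distances(graph, "A")  # {'A': 0, 'B': 1, 'C': 1, 'D': 2}
```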

5. Boyer-Moore Majority Vote: Finding Dominant Elements in One Pass

Streaming · O(1) Space · O(n) time

What it does: Finds the majority element—an element that appears more than n/2 times—in a single pass through the array, using only two variables. No hash maps, no sorting.

How it works: Maintain a `candidate` and a `count`. When `count` is zero, set the current element as `candidate`. For each element: if it matches `candidate`, increment `count`; otherwise, decrement. The majority element (if it exists) will survive with a positive count.

def majority_element(nums):
    candidate = None
    count = 0
    for num in nums:
        if count == 0:
            candidate = num
        count += 1 if num == candidate else -1
    return candidate

When you'll use it:

  • Finding the dominant item in a stream of data (network packets, user votes)
  • Identifying the most frequent element when you know one element dominates
  • Interview questions (this is a classic)

Important caveat: This algorithm assumes a majority element exists. If it might not, you need a second pass to verify the candidate actually appears more than n/2 times.
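A sketch of that verified two-pass version (the second pass costs another O(n) time but keeps O(1) space):

```python
def majority_element_verified(nums):
    """Boyer-Moore vote plus a verification pass; returns None if no majority."""
    # Pass 1: Boyer-Moore majority vote.
    candidate, count = None, 0
    for num in nums:
        if count == 0:
            candidate = num
        count += 1 if num == candidate else -1
    # Pass 2: confirm the candidate truly appears more than n/2 times.
    if nums and nums.count(candidate) > len(nums) // 2:
        return candidate
    return None
```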

🎯 Why this algorithm is brilliant: It runs in O(n) time and O(1) space. The alternative—counting frequencies with a hash map—takes O(n) space. When processing terabytes of streaming data, that O(1) space is the difference between possible and impossible.

6. Dynamic Programming: Remembering So You Don't Recompute

Memoization · Optimization · Varies

What it does: Solves complex problems by breaking them into overlapping subproblems, solving each once, and storing the results. It's the algorithmic equivalent of "work smarter, not harder."

The key insight: Many recursive problems recalculate the same values repeatedly. Dynamic programming caches those values. Fibonacci is the classic example—without memoization, it's O(2ⁿ); with it, O(n).

# Without DP: exponential time
def fib_naive(n):
    if n <= 1:
        return n
    return fib_naive(n-1) + fib_naive(n-2)

# With DP (memoization): linear time
memo = {}
def fib_dp(n):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib_dp(n-1) + fib_dp(n-2)
    return memo[n]

When you'll use it:

  • Optimization problems: "What's the maximum/minimum way to do X?"
  • String algorithms: edit distance (Levenshtein), longest common subsequence, regex matching
  • Pathfinding with constraints
  • Resource allocation (knapsack problems)

Two approaches:

  • Top-down (memoization): Start with the recursive solution, add a cache.
  • Bottom-up (tabulation): Build the solution from smallest subproblems up.
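For Fibonacci, the bottom-up version needs no cache at all—each subproblem depends only on the two before it, so O(1) space suffices:

```python
def fib_tab(n):
    """Bottom-up (tabulation) Fibonacci: build up from the smallest cases."""
    if n <= 1:
        return n
    prev, curr = 0, 1                    # fib(0), fib(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr   # only the last two values are needed
    return curr
```

This space savings is a common payoff of tabulation: once you know the dependency order, you can often discard subproblem results as soon as they're no longer needed.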

7. Fast Fourier Transform (FFT): The Algorithm You Use Trillions of Times

Signal Processing · Divide and Conquer · O(n log n)

What it does: Transforms a signal from the time domain to the frequency domain—and back—efficiently. Without FFT, this would be O(n²); with FFT, it's O(n log n). This is the algorithm that makes modern digital life possible.

How it works: FFT uses a divide-and-conquer strategy (the Cooley-Tukey algorithm) to compute the Discrete Fourier Transform. It recursively breaks the transform into smaller transforms on even- and odd-indexed samples, combining results using complex roots of unity.

You use it constantly without knowing:

  • Every time you stream video or music: Compression codecs (MP3, AAC, JPEG, MPEG) rely on Fourier-family frequency transforms (JPEG and MP3 actually use the DCT/MDCT, close relatives of the FFT)
  • Every phone call and Wi-Fi connection: OFDM modulation in 4G/5G/Wi-Fi uses FFT
  • Medical imaging: MRI reconstruction uses FFT
  • Voice assistants: Speech recognition starts with FFT to extract frequency features

You probably won't implement it: FFT is complex to get right. You'll use libraries like NumPy's `numpy.fft`, MATLAB's `fft()`, or specialized DSP libraries. But understanding what it does—converting between time and frequency domains—is essential for any work with signals, audio, images, or communications.

Practical use case: Need to remove a specific frequency noise from an audio recording? Transform to frequency domain with FFT → zero out the noise frequency → inverse FFT back to time domain. Done.
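That recipe can be sketched with NumPy's `numpy.fft` (assuming NumPy is available; the sample rate and frequencies below are illustrative):

```python
import numpy as np

fs = 1000                                   # sampling rate: 1000 Hz (illustrative)
t = np.arange(fs) / fs                      # one second of samples
clean = np.sin(2 * np.pi * 5 * t)           # the 5 Hz signal we want
noisy = clean + 0.5 * np.sin(2 * np.pi * 50 * t)  # plus a 50 Hz hum

spectrum = np.fft.rfft(noisy)               # time domain -> frequency domain
freqs = np.fft.rfftfreq(len(noisy), d=1 / fs)
spectrum[np.isclose(freqs, 50)] = 0         # zero out the 50 Hz bin
filtered = np.fft.irfft(spectrum)           # frequency domain -> time domain
```

After the inverse transform, `filtered` is essentially the clean 5 Hz signal with the hum removed. (Real recordings spread noise across neighboring bins, so practical filters zero a band rather than a single bin.)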

Algorithm Complexity at a Glance

Big-O notation describes how an algorithm's resource requirements grow as input size increases. Here's what you need to know:

| Algorithm | Time (Average) | Time (Worst) | Space | When to Use |
| --- | --- | --- | --- | --- |
| Binary Search | O(log n) | O(log n) | O(1) | Searching sorted data |
| Quicksort | O(n log n) | O(n²) | O(log n) | General-purpose sorting |
| Hash Lookup | O(1) | O(n) | O(n) | Key-value storage, caching |
| BFS/DFS | O(V + E) | O(V + E) | O(V) | Graph/tree traversal |
| Boyer-Moore Majority | O(n) | O(n) | O(1) | Finding a dominant element in a stream |
| Dynamic Programming | Problem-dependent; often reduces exponential to polynomial | Problem-dependent | O(n) to O(n²) | Optimization with overlapping subproblems |
| FFT | O(n log n) | O(n log n) | O(n) | Signal/image processing, compression |

How to Choose: A Decision Framework

When faced with a problem, ask these questions in order:

  1. Am I searching for something in sorted data? → Binary Search
  2. Do I need to store key-value pairs with fast lookup? → Hash Table
  3. Do I need to sort data? → Use your language's built-in sort (it's better than yours)
  4. Am I traversing a network, tree, or graph structure? → BFS (shortest path) or DFS (explore all)
  5. Am I finding the most frequent element in a stream? → Boyer-Moore Majority
  6. Am I solving an optimization problem with repeating subproblems? → Dynamic Programming
  7. Am I working with signals, audio, or images? → FFT (via library)

⚠️ The Most Common Algorithm Mistake

Reinventing the wheel poorly. The built-in algorithms in your language's standard library are almost certainly better optimized than anything you'll write in an afternoon. Python's `list.sort()` uses Timsort, a hybrid of mergesort and insertion sort optimized for real-world data. Java's `Arrays.sort()` uses dual-pivot quicksort for primitives and Timsort for objects. Use them.

Where you need algorithmic thinking is in problem formulation—recognizing that your problem is actually a search problem, or a graph traversal, or a dynamic programming problem. The implementation details often come from libraries; the pattern recognition comes from you.

Algorithms Are Tools, Not Trophies

You don't need to memorize every algorithm in CLRS to be effective. Focus on the 7 in this tutorial—they cover 90% of what you'll encounter in daily programming. Understand why hash lookup is fast, when binary search applies, and how dynamic programming avoids redundant work. That mental model—not rote memorization—is what makes you a better programmer.

The best way to internalize these? Write them once. Implement binary search from scratch. Build a hash table. Watch your naive recursive Fibonacci grind to a halt, then fix it with memoization. One afternoon of hands-on practice will teach you more than hours of reading.
