
Algorithms Basics

Big O notation, sorting, searching, and common algorithmic patterns.

Big O Notation

Big O describes how an algorithm's time or space usage scales with input size. It lets you compare algorithms independent of hardware — an O(log n) solution beats O(n) at any scale large enough to matter.

O(1): Constant — hash map lookup, array index
O(log n): Logarithmic — binary search, BST operations
O(n): Linear — single loop, linear search
O(n log n): Linearithmic — merge sort, heap sort
O(n²): Quadratic — nested loops, bubble sort

Big O growth rates (operations for n = 100):
O(1) → 1, O(log n) → 7, O(n) → 100, O(n log n) → 700, O(n²) → 10,000

Sorting Algorithms

Different sorting algorithms trade off simplicity, memory, and worst-case guarantees. Merge sort and heap sort guarantee O(n log n); quicksort is fast in practice but degrades to O(n²) on bad pivots.

Bubble Sort: O(n²) time / O(1) space — simple, rarely used in practice
Merge Sort: O(n log n) time / O(n) space — stable, predictable
Quick Sort: O(n log n) average time / O(log n) space — fastest in practice
Heap Sort: O(n log n) time / O(1) space — in-place, not stable
Merge sort on [8, 3, 5, 1]:
Split → [8, 3] [5, 1]
Sort → [3, 8] [1, 5]
Merge → [1, 3, 5, 8]

Searching Algorithms

Linear search scans every element — O(n), no preconditions. Binary search halves the search space each step — O(log n), but requires a sorted array. Always sort first if you'll search repeatedly.

Binary search flow: start → check mid → go left or right → repeat until found or the range is empty.
Binary Search
function binarySearch(arr, target) {
  let lo = 0, hi = arr.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;            // midpoint of the current range
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) lo = mid + 1;   // target must be in the right half
    else hi = mid - 1;                     // target must be in the left half
  }
  return -1; // not found
}

Common Patterns

Most algorithm problems map to a handful of recurring patterns. Recognizing the pattern — not memorizing solutions — is the skill that transfers to new problems.

Core Algorithmic Patterns
Two Pointers — shrink search space from both ends simultaneously
Sliding Window — maintain a moving subarray without re-scanning
Divide & Conquer — split, solve subproblems, merge results
Dynamic Programming — cache overlapping subproblems, build bottom-up

When to Optimize

Premature optimization wastes time. Measure first with real data, then target the hottest path — usually a data structure switch or algorithm class change delivers far more than micro-tuning.

Optimization Mindset
A 10× better algorithm beats a 10× faster machine at scale. Fix time complexity before reaching for cache tricks or loop unrolling.
Algorithm Design Approach
1. Understand Problem — Clarify constraints, input/output, and edge cases.
2. Choose Pattern — Map to known patterns: two pointers, DP, divide & conquer.
3. Implement — Write clean code with meaningful variable names.
4. Test Edge Cases — Empty input, single element, duplicates, boundaries.
5. Optimize — Reduce complexity class before micro-optimizing.
1. Profile First — Measure with realistic data to find actual bottlenecks, not assumed ones.
2. Choose the Right Structure — Swap an O(n) array scan for an O(1) hash map lookup before rewriting any logic.
3. Reduce Complexity Class — Eliminate nested loops, apply memoization, or replace brute-force with a smarter algorithm.