Algorithm Patterns

Pattern Reference

19 patterns · animated visualizations · LeetCode problems
Grid BFS: BFS on a 2D Grid
What It Is
Like a ripple spreading outward in water. BFS visits every cell one ring at a time — all neighbors at distance 1 before distance 2 — guaranteeing the shortest path.
Dead Giveaway: If input is int[][] or char[][] and you need minimum steps, shortest path, or want to count/fill connected regions → Grid BFS.
Look For These Words
"grid" "matrix" "minimum steps" "island" "flood fill" "connected cells"
Mechanic
1. Push start cell into queue; mark it visited with dist[r][c]=0
2. While queue not empty: pop cell, push all unvisited neighbors
3. Set dist[nr][nc] = dist[r][c] + 1 before enqueuing
4. Return distance when target is reached — guaranteed shortest
200 · Number of Islands · Med
BFS floods from each unvisited '1' cell — each complete flood = one island.
Spot these in the problem
"2D grid" "count islands" "connected horizontally/vertically" "'1' = land, '0' = water"
Key Insight: Every '1' cell is a graph node; adjacent '1' cells are edges. Counting islands = counting connected components. BFS from any unvisited '1' floods its entire component.
Why not DFS / other? DFS works equally well here — both flood-fill to count connected components. The BFS vs DFS choice only matters when minimum distance is required (BFS guarantees shortest path; DFS does not).
1. Iterate every cell. When you hit an unvisited '1', increment island count.
2. BFS from it: push to queue, mark as '0' (visited). Pop each cell, check 4 neighbors — if neighbor is '1', push and mark.
3. When queue empties, one full island has been consumed. Continue scanning.
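The scan-and-flood steps above can be sketched in Python; the function name num_islands and the trick of marking visited cells by overwriting them with '0' are illustrative choices:

```python
from collections import deque

def num_islands(grid):
    """Count connected components of '1' cells via BFS flood fill."""
    if not grid:
        return 0
    rows, cols, count = len(grid), len(grid[0]), 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != '1':
                continue
            count += 1                      # found a new, unvisited island
            grid[r][c] = '0'                # mark visited by sinking the cell
            q = deque([(r, c)])
            while q:
                cr, cc = q.popleft()
                for nr, nc in ((cr+1, cc), (cr-1, cc), (cr, cc+1), (cr, cc-1)):
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == '1':
                        grid[nr][nc] = '0'  # mark BEFORE enqueuing to avoid duplicates
                        q.append((nr, nc))
    return count
```

Marking before enqueuing (not after popping) is the detail that prevents the same cell entering the queue twice.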
Open on LeetCode ↗
994 · Rotting Oranges · Med
Multi-source BFS from all rotten cells — each BFS level = 1 minute.
Spot these in the problem
"rotten orange" "4-directionally adjacent" "minimum time" "all fresh oranges rot"
Key Insight: All rotten oranges spread simultaneously — this is Multi-Source BFS. Push ALL rotten cells into the queue at t=0. Each BFS level = 1 minute of spreading.
Why not DFS / other? DFS would process one rotten orange's chain fully before moving to the next — this gives the wrong elapsed time. All rotten cells spread simultaneously, which requires BFS's level-by-level expansion (1 level = 1 minute).
Naive approach fails: BFS from each rotten orange separately counts wrong minutes — it ignores that multiple sources spread at the same time. You'd overcount time for cells near multiple rotten oranges.
1. Scan grid: push all '2' cells to queue, count '1' (fresh) cells.
2. BFS level by level: pop rotten cell, for each fresh neighbor mark it '2', decrement freshCount, push to queue.
3. After BFS: if freshCount > 0 → return -1. Otherwise return number of BFS levels (minutes).
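A minimal multi-source BFS sketch of these steps (oranges_rotting is an illustrative name; the per-level loop via a queue-size snapshot is the "1 level = 1 minute" idea):

```python
from collections import deque

def oranges_rotting(grid):
    """Multi-source BFS: ALL rotten cells enter the queue at minute 0."""
    rows, cols = len(grid), len(grid[0])
    q, fresh = deque(), 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 2:
                q.append((r, c))
            elif grid[r][c] == 1:
                fresh += 1
    minutes = 0
    while q and fresh:
        minutes += 1                     # one BFS level = one minute of spreading
        for _ in range(len(q)):          # snapshot: process exactly this level
            r, c = q.popleft()
            for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                    grid[nr][nc] = 2
                    fresh -= 1
                    q.append((nr, nc))
    return minutes if fresh == 0 else -1
```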
Open on LeetCode ↗
542 · 01 Matrix · Med
Invert the BFS: start from all 0-cells simultaneously, not from each 1.
Spot these in the problem
"distance to the nearest 0" "for each cell" "matrix of 0s and 1s"
Key Insight: BFS from each '1' to find its nearest '0' costs O(n²) per source, O(n⁴) overall. The insight: reverse the direction. BFS outward from ALL '0' cells at once. First time BFS reaches any '1' cell = its guaranteed shortest distance.
Why not DFS / other? DFS does not guarantee shortest distance — it explores one path deeply. Only BFS guarantees the first time it reaches a cell = minimum distance from a 0.
Naive approach fails: BFS from each '1' cell individually to find its nearest '0' is O(n²·n²) — too slow. The fix: invert direction and multi-source BFS from ALL '0' cells at once, reducing to a single O(n²) pass.
1. Push all '0' cells to queue with dist=0. Set all '1' cells to infinity.
2. BFS: pop cell, for each unvisited neighbor set dist[nr][nc] = dist[r][c]+1, push to queue.
3. BFS wave naturally writes correct shortest distances to every '1' cell in one pass.
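These steps as a Python sketch (update_matrix is an illustrative name; infinity doubles as the "unvisited" marker):

```python
from collections import deque

def update_matrix(mat):
    """Multi-source BFS outward from every 0 cell; first arrival = shortest distance."""
    rows, cols = len(mat), len(mat[0])
    INF = float('inf')
    dist = [[0 if mat[r][c] == 0 else INF for c in range(cols)] for r in range(rows)]
    q = deque((r, c) for r in range(rows) for c in range(cols) if mat[r][c] == 0)
    while q:
        r, c = q.popleft()
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            # INF means unvisited; the first wave to arrive writes the answer
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == INF:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist
```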
Open on LeetCode ↗
417 · Pacific Atlantic · Med
Two reverse BFS from each ocean's border — intersection is the answer.
Spot these in the problem
"water can flow to adjacent cells with equal or lower height" "which cells can flow to both Pacific and Atlantic" "border"
Key Insight: Forward BFS from each interior cell is expensive. Reverse it: BFS uphill from each ocean's edge. Pacific BFS starts at top/left border; Atlantic at bottom/right. Any cell reachable by both = water flows to both oceans.
Why not DFS / other? DFS also works for the reachability traversal. The critical choice here is not BFS vs DFS — it is the direction reversal (inward from borders instead of outward from cells). Both BFS and DFS implement this equally well.
Naive approach fails: BFS/DFS from every interior cell to check both oceans = O(n⁴). Reversing direction and running 2 traversals from borders = O(n²).
1. Pacific BFS: push all top-row and left-column cells. Expand to any neighbor with height >= current (reverse flow). Mark all cells reachable.
2. Atlantic BFS: same from bottom-row and right-column.
3. Intersection of both reachable sets = answer.
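A sketch of the two reverse traversals; pacific_atlantic is an illustrative name, and it returns a sorted list of (row, col) tuples rather than LeetCode's list-of-lists format:

```python
from collections import deque

def pacific_atlantic(heights):
    """Reverse BFS uphill from each ocean's border; answer = intersection."""
    rows, cols = len(heights), len(heights[0])

    def bfs(starts):
        seen = set(starts)
        q = deque(starts)
        while q:
            r, c = q.popleft()
            for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                # reverse flow: move to equal-or-higher neighbors only
                if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen
                        and heights[nr][nc] >= heights[r][c]):
                    seen.add((nr, nc))
                    q.append((nr, nc))
        return seen

    pacific = bfs([(0, c) for c in range(cols)] + [(r, 0) for r in range(rows)])
    atlantic = bfs([(rows-1, c) for c in range(cols)] + [(r, cols-1) for r in range(rows)])
    return sorted(pacific & atlantic)
```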
Open on LeetCode ↗
Topo BFS: Topological Sort — Kahn's Algorithm
What It Is
Process things in dependency order — a task only runs after all its prerequisites finish. Uses in-degree counts to find what's unblocked.
Dead Giveaway: If the problem says "prerequisites", "dependencies", "finish all courses", or "valid ordering" → Topo BFS.
Look For These Words
"prerequisites" "depends on" "finish all" "ordering" "course schedule"
Mechanic
1. Build adjacency list + compute inDeg[] for every node
2. Enqueue all nodes with inDeg == 0
3. Pop node → append to result → decrement neighbor in-degrees
4. If neighbor hits 0: enqueue it. Cycle if result.length < n
207 · Course Schedule · Med
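The four mechanic steps, as one generic Kahn's sketch that also covers 207/210 below (topo_sort is an illustrative name; it returns a valid order, or an empty list when a cycle exists):

```python
from collections import deque

def topo_sort(n, edges):
    """Kahn's algorithm: returns a topological order of 0..n-1, or [] on a cycle."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:                    # edge u -> v: u must come before v
        adj[u].append(v)
        indeg[v] += 1
    q = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)                   # dequeue order IS the topological order
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:             # all prerequisites satisfied: unblocked
                q.append(v)
    return order if len(order) == n else []   # shorter than n => cycle
```

Course Schedule (207) is `topo_sort(...) != []`; Course Schedule II (210) is the returned order itself.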
Can you finish all = does the prereq graph have a cycle? Kahn's detects it.
Spot these in the problem
"prerequisites" "can you finish all courses" "course A requires course B" "directed graph"
Key Insight: Prerequisites form a directed graph. "Can you finish all courses" = "does this graph have a cycle?" Kahn's algorithm removes zero-in-degree nodes one by one — if any node's in-degree never reaches zero, a cycle is blocking it.
Why not DFS / other? DFS can also detect cycles via back-edge tracking (visited/in-progress states). Kahn's BFS is preferred because it naturally builds the topological order and the cycle-detection falls out of the node count — no extra state tracking needed.
1. Build adjacency list + in-degree count for every course.
2. Push all courses with inDeg==0 (no prerequisites) into queue.
3. Pop course, decrement neighbors' in-degrees; if a neighbor hits 0, push it.
4. If processed count == numCourses → no cycle, return true. Else return false.
Open on LeetCode ↗
210 · Course Schedule II · Med
Same as 207 but collect the processing order as Kahn's runs.
Spot these in the problem
"return the ordering" "prerequisites" "valid order of courses" "empty array if impossible"
Key Insight: Identical graph and algorithm to Course Schedule 207. The only change: record each processed node. The order nodes are processed by Kahn's algorithm IS a valid topological ordering.
Why not DFS / other? DFS post-order traversal (reversed) also produces a valid topological order. Kahn's BFS is simpler: nodes are dequeued in valid topological order directly — no reversal step needed.
1. Steps 1–3 identical to 207: build graph, push inDeg=0 nodes, pop and decrement.
2. Append each popped course to result[].
3. If result.length == numCourses return result (valid order found); else return [] (cycle exists).
Open on LeetCode ↗
269 · Alien Dictionary · Hard
First differing char in adjacent words = an ordering rule → Kahn's finds the order.
Spot these in the problem
"list of words sorted lexicographically" "alien language" "return the order of letters" "unique characters"
Key Insight: Compare each pair of adjacent sorted words. The first character that differs gives one ordering rule (char A comes before char B in the alien alphabet). Collect all rules as directed edges, then Kahn's finds the order (or detects a cycle).
Why not DFS / other? DFS cycle detection and topological sort also works. Kahn's BFS is cleaner for producing the output ordering directly without post-order reversal.
Naive approach fails: Comparing non-adjacent words doesn't give reliable constraints — you can only infer ordering from adjacent word pairs (first differing character). Edge case: if a longer word appears before its prefix (e.g. "abc" before "ab"), it's impossible — return "".
1. Compare each adjacent word pair, find first mismatch → add directed edge char1→char2. Edge case: if word1 is longer and word2 is a prefix of it → invalid, return "".
2. Run Kahn's on character nodes.
3. If result.length < number of unique chars → cycle, return "".
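A sketch combining the edge-building and Kahn's steps (alien_order is an illustrative name; with multiple valid orderings the exact output depends on queue order):

```python
from collections import deque

def alien_order(words):
    """Derive letter order from adjacent sorted words, then run Kahn's."""
    adj = {c: set() for w in words for c in w}
    indeg = {c: 0 for c in adj}
    for w1, w2 in zip(words, words[1:]):
        for a, b in zip(w1, w2):
            if a != b:                    # first mismatch gives one rule: a before b
                if b not in adj[a]:
                    adj[a].add(b)
                    indeg[b] += 1
                break
        else:                             # no mismatch: longer word must not come first
            if len(w1) > len(w2):
                return ""
    q = deque(c for c in indeg if indeg[c] == 0)
    order = []
    while q:
        c = q.popleft()
        order.append(c)
        for nxt in adj[c]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                q.append(nxt)
    return "".join(order) if len(order) == len(indeg) else ""
```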
Open on LeetCode ↗
Graph BFS: BFS on a General Graph
What It Is
BFS on an explicit graph (edges given directly). Explores level by level from a source node to find shortest distances or check connectivity.
Dead Giveaway: If input is an edge list or adjacency matrix and you need shortest path or bipartite check → Graph BFS.
Look For These Words
"n nodes" "edges" "adjacency" "clone graph" "bipartite" "shortest path"
Mechanic
1. Initialize visited={src}, queue=[src]
2. Pop node, process it, iterate neighbors
3. If neighbor not visited: mark visited and enqueue
4. Count levels by snapshotting queue size at each BFS round
Word Ladder — Why It's This Pattern
The hidden graph in Word Ladder
Each word is a node. No edges are given — the graph is invisible.
Two words are neighbors if they differ by exactly 1 letter. That difference = an edge.
Generate neighbors on-the-fly: for each position, try all 26 letters, check if result is in the word list.
"Minimum transformations" = shortest path. BFS guarantees it; DFS would find a path, not the shortest.
hit → hot → dot → dog → cog · 5 nodes · 4 hops = answer is 5
133 · Clone Graph · Med
BFS visits every node once; a map prevents cloning the same node twice.
Spot these in the problem
"deep copy" "connected undirected graph" "Node with val and neighbors" "clone"
Key Insight: BFS visits every node exactly once. Use a HashMap (original→clone) as both a visited marker and a place to retrieve the clone when wiring neighbor connections.
Why not DFS / other? DFS also works for graph cloning — any traversal that visits each node once is sufficient. HashMap is the essential structure regardless of BFS or DFS.
Naive approach fails: Without a HashMap, when a node appears as a neighbor of multiple nodes, you'd clone it multiple times, creating duplicate nodes instead of a connected graph.
1. Create clone of start node, add to map and queue.
2. Pop node from queue. For each neighbor: if not in map, create its clone, add to map and queue.
3. Regardless of whether neighbor is new, append map[neighbor] to current clone's neighbors list.
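A sketch of these steps, assuming the usual Node shape with val and neighbors (clone_graph is an illustrative name):

```python
from collections import deque

class Node:
    def __init__(self, val, neighbors=None):
        self.val = val
        self.neighbors = neighbors or []

def clone_graph(node):
    """BFS copy; the original->clone map doubles as the visited set."""
    if not node:
        return None
    clones = {node: Node(node.val)}
    q = deque([node])
    while q:
        cur = q.popleft()
        for nb in cur.neighbors:
            if nb not in clones:          # first sighting: create clone, enqueue original
                clones[nb] = Node(nb.val)
                q.append(nb)
            clones[cur].neighbors.append(clones[nb])   # wire the edge on the clone side
    return clones[node]
```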
Open on LeetCode ↗
785 · Is Graph Bipartite? · Med
BFS 2-colors the graph; same-color neighbors = not bipartite.
Spot these in the problem
"split into two independent sets" "no two adjacent nodes in same group" "undirected graph" "bipartite"
Key Insight: 2-color the graph using BFS. Assign color 0 to a source node; every neighbor gets color 1; their neighbors get 0; etc. If any edge connects two nodes of the same color, the graph is not bipartite.
Why not DFS / other? DFS 2-coloring is equally valid. BFS is slightly more intuitive here because you naturally color layer 0 (color 0), layer 1 (color 1), layer 2 (color 0)... — same-layer edges immediately flag non-bipartiteness.
1. For each unvisited node (graph may be disconnected), push it with color 0.
2. Pop node; for each neighbor: if uncolored → color it 1-color[node] and push. If already colored and same color as current node → return false.
3. If all nodes colored without conflict → return true.
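The 2-coloring steps as a sketch (is_bipartite is an illustrative name; graph is an adjacency list as in LC 785):

```python
from collections import deque

def is_bipartite(graph):
    """2-color with BFS; any same-color edge disproves bipartiteness."""
    color = {}
    for start in range(len(graph)):       # outer loop handles disconnected graphs
        if start in color:
            continue
        color[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v in graph[u]:
                if v not in color:
                    color[v] = 1 - color[u]   # neighbor gets the opposite color
                    q.append(v)
                elif color[v] == color[u]:    # same-color edge: not bipartite
                    return False
    return True
```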
Open on LeetCode ↗
127 · Word Ladder · Hard
Each word = node; 1-letter difference = edge; BFS finds minimum transformations.
Spot these in the problem
"beginWord to endWord" "change one letter at a time" "each intermediate word must exist in wordList" "minimum"
Key Insight: Words are nodes. A one-letter change = an edge. The graph is never explicitly built — generate neighbors on-the-fly by trying every letter substitution at each position. "Minimum transformations" = shortest path, which BFS guarantees.
Why not DFS / other? DFS finds a path but not the shortest path — it can go arbitrarily deep before reaching endWord. Only BFS guarantees the first time endWord is reached = minimum number of transformations.
Naive approach fails: Pre-building a graph of all word pairs differing by 1 letter = O(n²·L) space and time. Instead, generate neighbors on-the-fly: for each position, try all 26 letters, check if result is in wordSet.
1. Push beginWord with distance 1 into queue; add to visited set.
2. Pop word; for each position (0..len-1), try replacing that char with 'a'..'z'.
3. If result is in wordList and not visited: mark visited, push with distance+1. If result == endWord, return distance. Queue empties without finding endWord → return 0.
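A sketch of the on-the-fly neighbor generation described above (ladder_length is an illustrative name):

```python
from collections import deque
from string import ascii_lowercase

def ladder_length(begin_word, end_word, word_list):
    """BFS over the implicit word graph; neighbors generated by letter substitution."""
    word_set = set(word_list)             # O(1) membership checks
    if end_word not in word_set:
        return 0
    q = deque([(begin_word, 1)])
    visited = {begin_word}
    while q:
        word, dist = q.popleft()
        if word == end_word:              # first arrival = shortest transformation
            return dist
        for i in range(len(word)):
            for ch in ascii_lowercase:    # try every 1-letter substitution
                nxt = word[:i] + ch + word[i+1:]
                if nxt in word_set and nxt not in visited:
                    visited.add(nxt)
                    q.append((nxt, dist + 1))
    return 0
```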
Open on LeetCode ↗
State BFS: BFS on a State Space (Letter Combinations)
What It Is
The graph is invisible — each "state" (a string, board config, number) is a node. One valid move = one edge. BFS finds the fewest moves to reach the goal.
Dead Giveaway: If the problem asks "minimum moves/operations/transformations" to go from one configuration to another → State BFS.
Look For These Words
"minimum moves" "word ladder" "fewest operations" "lock" "transform"
Mechanic
1. Start queue with [""]
2. For each digit, expand every queued string by appending all mapped letters
3. Replace queue with expanded strings after each digit
4. Queue after all digits = all valid combinations
Bridge — Letter Combinations & Generate Parentheses
Neither problem gives you a graph. You build states.
Letter Combinations #17
State = string built so far ("", "a", "ad"…)
One move = append a letter mapped to the next digit
BFS level = which digit you're currently expanding
All strings at final level = all answers
Generate Parentheses #22
State = (string, open_count, close_count)
Valid moves: add ( if open < n · add ) if close < open
Goal states: all strings where length = 2n
The rules eliminate invalid branches — no backtracking needed
n=2 states: "" → "(" → "((" or "()" → "(()" → "(())"
17 · Letter Combinations · Med
State = string so far; BFS expands all strings by one letter per digit level.
Spot these in the problem
"digit string" "all possible letter combinations" "telephone keypad" "return all combinations"
Key Insight: State = the string built so far. Each phone digit maps to 2-4 letters. BFS processes one digit at a time: all strings in the queue grow by one letter simultaneously — exactly like BFS expanding one level.
Why not DFS / other? DFS/backtracking builds one combination at a time. BFS builds all combinations simultaneously, one level per digit. Neither is strictly better — BFS makes the layered expansion more visible.
1. Start queue with [""].
2. For each digit: dequeue ALL current strings. For each string, append every letter mapped to this digit and add to next queue. Replace queue.
3. After processing all digits, queue contains all complete combinations.
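The level-by-level expansion above fits in a few lines; a list standing in for the queue makes "replace queue after each digit" literal (letter_combinations is an illustrative name):

```python
def letter_combinations(digits):
    """Each digit grows every queued string by one letter (one BFS level per digit)."""
    if not digits:
        return []
    keypad = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
              '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}
    level = ['']                          # BFS level 0: the empty string
    for d in digits:
        # expand every string in the current level by every mapped letter
        level = [s + ch for s in level for ch in keypad[d]]
    return level
```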
Open on LeetCode ↗
22 · Generate Parentheses · Med
State = (string, open, close); valid moves constrained to keep balance.
Spot these in the problem
"n pairs of parentheses" "generate all combinations of well-formed parentheses" "all valid"
Key Insight: State = (current_string, open_count, close_count). Edges are constrained: add '(' only if open < n; add ')' only if close < open. Only valid states are ever created — no backtracking or invalid states to prune.
Why not DFS / other? A plain BFS on all '(' and ')' strings would generate invalid states. The state MUST include open and close counts to enforce the constraints. Without tracking counts, you can't distinguish valid from invalid expansions.
Naive approach fails: Generate all 2^(2n) strings of '(' and ')', then filter valid ones = exponential waste. Constraint-guided state BFS only creates valid states — O(Catalan(n)) results, no filtering needed.
1. Start with ("", 0, 0).
2. For each state: if len == 2n → add to results. Otherwise: if open < n → expand with '('; if close < open → expand with ')'.
3. All completed states (length 2n) are guaranteed balanced.
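A sketch of the constrained state expansion (generate_parenthesis is an illustrative name):

```python
from collections import deque

def generate_parenthesis(n):
    """BFS over (string, open, close) states; only valid moves are ever generated."""
    results = []
    q = deque([("", 0, 0)])
    while q:
        s, open_ct, close_ct = q.popleft()
        if len(s) == 2 * n:               # goal state: complete and balanced
            results.append(s)
            continue
        if open_ct < n:                   # may still add '('
            q.append((s + "(", open_ct + 1, close_ct))
        if close_ct < open_ct:            # may close an unmatched '('
            q.append((s + ")", open_ct, close_ct + 1))
    return results
```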
Open on LeetCode ↗
Tree DFS: Depth-First Search on a Tree
What It Is
Go deep into a tree recursively before backtracking. The call stack IS the DFS — you return values up the tree naturally.
Dead Giveaway: If input contains TreeNode or root and you need depth, path sum, diameter, or validate a property → Tree DFS.
Look For These Words
"TreeNode" "root" "path sum" "depth" "diameter" "inorder/preorder"
Mechanic
1. Base case: if not node: return
2. Pre-order: process node, then recurse left, right
3. Post-order: recurse children first, then process (e.g., compute height)
4. Pass path info down via params; bubble results up via return values
104 · Max Depth · Easy
Post-order DFS: return max(left depth, right depth) + 1 up the tree.
Spot these in the problem
"binary tree" "maximum depth" "root to leaf" "TreeNode" "longest path"
Key Insight: Depth is defined recursively: depth(node) = 1 + max(depth(left), depth(right)). Post-order DFS computes this bottom-up naturally — children return their depths before the parent uses them.
Why not BFS / other? BFS (level-order) can find depth by counting levels. DFS is simpler: a 2-line recursive function with no queue management needed.
1. Base case: if node is null, return 0.
2. leftDepth = dfs(node.left); rightDepth = dfs(node.right).
3. Return 1 + max(leftDepth, rightDepth). The root call returns the answer.
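The whole thing really is a 2-line recursive function, sketched here with a minimal TreeNode (names illustrative):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_depth(node):
    """Post-order DFS: children report their depths before the parent combines them."""
    if not node:
        return 0
    return 1 + max(max_depth(node.left), max_depth(node.right))
```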
Open on LeetCode ↗
113 · Path Sum II · Med
Pre-order DFS carries path + running sum; backtrack after each leaf check.
Spot these in the problem
"root-to-leaf path" "path sum equals targetSum" "find all paths" "TreeNode"
Key Insight: DFS carries the current path and running sum from root to current node. At each leaf, check if the sum equals target. Backtrack (pop from path) when returning from a node.
Why not BFS / other? BFS doesn't naturally carry the path from root. You'd need to store the full path at every queue node — expensive. DFS carries path/sum in the recursive call stack automatically.
Naive approach fails: Checking sum at every node (not just leaves) gives wrong results. The path must go all the way from root to a leaf — check node.left == null && node.right == null as the base case.
1. dfs(node, remaining, path): append node.val to path.
2. If leaf and remaining - node.val == 0: add copy of path to results.
3. Recurse left and right with remaining - node.val. Pop node.val from path before returning (backtrack).
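A sketch of the carry-the-path-and-backtrack steps, with a minimal TreeNode (names illustrative):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def path_sum(root, target_sum):
    """Pre-order DFS carrying the path; pop on the way back up (backtrack)."""
    results, path = [], []

    def dfs(node, remaining):
        if not node:
            return
        path.append(node.val)
        remaining -= node.val
        if not node.left and not node.right and remaining == 0:
            results.append(path[:])       # copy: path keeps mutating after this
        dfs(node.left, remaining)
        dfs(node.right, remaining)
        path.pop()                        # backtrack

    dfs(root, target_sum)
    return results
```

The `path[:]` copy is the classic gotcha: appending `path` itself stores a reference that later mutates to empty.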
Open on LeetCode ↗
124 · Max Path Sum · Hard
Post-order: at each node compute the best single arm; update global max through node.
Spot these in the problem
"path" "maximum sum" "any node to any node" "binary tree" "path does not need to pass through root"
Key Insight: At each node, the best "path through this node" = node.val + max(leftGain, 0) + max(rightGain, 0). But you can only return one arm upward (a path can't branch). Track global max.
Why not BFS / other? BFS cannot naturally compute path sums that arc left-through-node-through-right; it processes nodes level by level, not subtree by subtree. Post-order DFS is the only approach that computes the maximum arc at each node bottom-up.
Naive approach fails: The common mistake: returning node.val + leftGain + rightGain to the parent. This would create an invalid branching path. You update a global max with the full arc (left + node + right) but can only return ONE arm upward.
1. dfs(node) returns best single-arm gain.
2. leftGain = max(dfs(left), 0); rightGain = max(dfs(right), 0).
3. Update global: res = max(res, node.val + leftGain + rightGain).
4. Return node.val + max(leftGain, rightGain) to parent.
Open on LeetCode ↗
Trie DFS: Prefix Tree — Character-by-Character
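These four steps, sketched with a minimal TreeNode (names illustrative); note the arc goes into the global max, while only one arm returns upward:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_path_sum(root):
    """Post-order DFS: global max gets the full arc, parent gets one arm."""
    best = float('-inf')

    def dfs(node):
        nonlocal best
        if not node:
            return 0
        left_gain = max(dfs(node.left), 0)    # negative arms are worth dropping
        right_gain = max(dfs(node.right), 0)
        best = max(best, node.val + left_gain + right_gain)   # arc through this node
        return node.val + max(left_gain, right_gain)          # one arm continues upward

    dfs(root)
    return best
```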
What It Is
A tree where each edge is one character. Walking root→leaf spells a word. DFS searches letter by letter — O(L) lookup instead of O(n).
Dead Giveaway: If the problem says "implement a prefix tree", "startsWith", or "search words in a grid" → Trie DFS.
Look For These Words
"startsWith" "prefix" "insert word" "search" "dictionary" "autocomplete"
Mechanic
1. Each node: {children:{}, isEnd:false}
2. Insert: walk each char, create node if missing, set isEnd=true at last
3. Search: walk chars; return false if any char node is missing
4. startsWith: same as search but don't check isEnd
208 · Implement Trie · Med
Each node has 26 children; insert/search walk letter by letter in O(L).
Spot these in the problem
"design" "insert word" "search word" "startsWith prefix" "prefix tree"
Key Insight: A trie node has 26 child slots (one per letter) and an isEnd flag. Insert and search walk the trie one character at a time — O(L) where L = word length, far faster than searching a list.
Why not HashMap / other? A HashMap of whole words handles insert/search in O(L) but can't do startsWith efficiently — you'd scan all keys. Trie shares prefixes, handles all three operations in O(L), and enables prefix pruning.
1. Insert: for each char, if child doesn't exist create it; advance to child; after last char set isEnd=true.
2. Search: for each char, if child doesn't exist return false; advance. Return isEnd.
3. StartsWith: identical to search but return true at end regardless of isEnd.
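A compact sketch using nested dicts as trie nodes; the '$' sentinel key standing in for the isEnd flag is an illustrative choice (it cannot collide with lowercase letters):

```python
class Trie:
    """Each node is a dict of children; the '$' key marks end-of-word."""
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})   # create child if missing, then advance
        node['$'] = True                     # isEnd marker

    def _walk(self, prefix):
        """Follow prefix through the trie; None if any char node is missing."""
        node = self.root
        for ch in prefix:
            if ch not in node:
                return None
            node = node[ch]
        return node

    def search(self, word):
        node = self._walk(word)
        return node is not None and '$' in node

    def starts_with(self, prefix):
        return self._walk(prefix) is not None
```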
Open on LeetCode ↗
212 · Word Search II · Hard
Trie guides DFS on the grid — prune instantly when no trie child exists.
Spot these in the problem
"board of characters" "list of words" "find all words that can be constructed" "adjacent cells"
Key Insight: Build a trie from the word list. DFS on the grid is guided by the trie — if the current cell's character has no trie child, prune immediately. This eliminates redundant work compared to searching for each word separately.
Why not separate DFS per word? Running a separate DFS for each word = O(words × cells × 4^L). Trie lets you search all words simultaneously in one DFS pass — when the current cell's char has no trie child, the entire subtree is pruned for ALL words at once.
Naive approach fails: Separate DFS per word is TLE for large word lists. The trie's prefix pruning collapses the search dramatically.
1. Insert all words into trie.
2. DFS from each grid cell: if grid[r][c] not in current trie node's children, return.
3. Advance trie pointer; if isEnd, add word to result (remove from trie to avoid duplicates). Mark cell visited, recurse 4 directions, unmark (backtrack).
Open on LeetCode ↗
211 · Add and Search Words · Med
'.' wildcard tries all 26 trie children — DFS with backtracking.
Spot these in the problem
"addWord" "search" "'.' matches any letter" "wildcard" "design"
Key Insight: Regular chars follow a straight trie path. The '.' wildcard forces DFS — try every existing child at that level, backtrack if the subtree yields no match.
Why not HashMap / other? A HashMap can't handle '.' wildcards efficiently — you'd scan all stored words. Trie DFS handles '.' by branching into all 26 children at that level, only visiting relevant subtrees.
Naive approach fails: For a pattern like "a.." stored in a map: scan every key of length 3 starting with 'a' = O(n·L). Trie traversal with '.' branching is O(26^wildcards · L) — fast for patterns with few wildcards.
1. Insert: standard trie insert.
2. Search: recursive DFS through trie. For '.': iterate all 26 possible children, recurse into each. For regular char: follow that one child only.
3. Return true if any recursive path reaches isEnd=true at the word's final character.
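A sketch of the wildcard DFS, again using nested dicts with a '$' end marker (class and method names here are snake_case illustrations of LC's addWord/search):

```python
class WordDictionary:
    """Trie with DFS branching for the '.' wildcard."""
    def __init__(self):
        self.root = {}

    def add_word(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = True                  # end-of-word marker

    def search(self, word):
        def dfs(node, i):
            if i == len(word):
                return '$' in node
            ch = word[i]
            if ch == '.':                 # wildcard: try every existing child
                return any(dfs(child, i + 1)
                           for key, child in node.items() if key != '$')
            return ch in node and dfs(node[ch], i + 1)
        return dfs(self.root, 0)
```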
Open on LeetCode ↗
Grid Scan: Nested-Loop Scan — No Traversal Needed
What It Is
Just iterate — no queue, no recursion. Scan every cell and apply a rule. Simpler than BFS when traversal order doesn't matter.
Dead Giveaway: If input is a grid and you need to count/validate/set values without needing shortest-path order → Grid Scan.
Look For These Words
"valid sudoku" "set zeroes" "diagonal traverse" "count" "check each cell"
Mechanic (shown on the in-place 90° rotation example)
1. Transpose: swap(m[i][j], m[j][i]) for j > i
2. Reverse each row: row.reverse()
3. Result is a 90° clockwise rotation using no extra space
48 · Rotate Image · Med
Transpose then reverse each row — 90° rotation in-place, two linear passes.
Spot these in the problem
"n×n matrix" "rotate 90 degrees clockwise" "in-place" "do not allocate another 2D matrix"
Key Insight: 90° clockwise rotation = Transpose (swap [i][j] ↔ [j][i]) then reverse each row. Two simple in-place scans, O(1) extra space. No need to track movements.
Why not direct copy / other? No graph traversal needed — each cell maps to a fixed destination by formula. A simple nested loop + swap is sufficient.
Naive approach fails: Using a copy matrix = O(n²) extra space. Transpose + reverse achieves rotation in O(1) space. Direct formula rotation (place [i][j] at [j][n-1-i]) risks overwriting before reading.
1. Transpose: for i in 0..n, for j in i+1..n: swap matrix[i][j] and matrix[j][i].
2. Reverse each row: for each row, swap [0] with [n-1], [1] with [n-2], etc.
3. Result is the 90° clockwise rotation.
36 · Valid Sudoku · Med
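The two passes, sketched in-place (rotate is an illustrative name; it mutates its argument and returns nothing):

```python
def rotate(matrix):
    """Transpose, then reverse each row: 90-degree clockwise rotation, in place."""
    n = len(matrix)
    for i in range(n):
        for j in range(i + 1, n):         # upper triangle only, or you undo each swap
            matrix[i][j], matrix[j][i] = matrix[j][i], matrix[i][j]
    for row in matrix:
        row.reverse()
```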
Iterate all 81 cells once; use sets to detect duplicates per row/col/box.
Spot these in the problem
"valid sudoku" "each row, column, 3×3 box" "contains 1-9 without repetition" "determine if valid"
Key Insight: No recursion or BFS needed. One pass through all 81 cells. For each cell, check if the digit already exists in three hash sets (row, column, 3×3 box). If any collision, immediately return false.
Why not graph traversal / other? No connectivity or traversal between cells needed — it's pure constraint checking with hash sets. One linear scan through all 81 cells is sufficient.
1. Create 9 row sets, 9 col sets, 9 box sets.
2. For each cell [i][j]: if empty, skip. Compute boxIdx = (i/3)*3 + (j/3).
3. Check digit against row[i], col[j], box[boxIdx]. If found in any → return false. Add digit to all three sets.
Open on LeetCode ↗
73 · Set Matrix Zeroes · Med
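A one-pass sketch of the set-based check (is_valid_sudoku is an illustrative name; note Python's integer division for the box index):

```python
def is_valid_sudoku(board):
    """One pass over 81 cells; rows, cols, and boxes tracked with hash sets."""
    rows = [set() for _ in range(9)]
    cols = [set() for _ in range(9)]
    boxes = [set() for _ in range(9)]
    for i in range(9):
        for j in range(9):
            d = board[i][j]
            if d == '.':
                continue
            b = (i // 3) * 3 + j // 3     # which 3x3 box this cell belongs to
            if d in rows[i] or d in cols[j] or d in boxes[b]:
                return False              # duplicate in a row, column, or box
            rows[i].add(d)
            cols[j].add(d)
            boxes[b].add(d)
    return True
```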
Two-pass scan: first record which rows/cols have zeros, then zero them.
Spot these in the problem
"if element is 0, set its entire row and column to 0" "in-place" "matrix"
Key Insight: Don't zero cells as you scan — that corrupts the input. Two-pass approach: first collect all (row, col) positions of zeros; then zero those entire rows and columns.
Why not in-place zeroing / other? No graph traversal needed — it's a two-pass scan with a collection step.
Naive approach fails: Zeroing cells DURING the scan corrupts the input — a freshly written 0 is indistinguishable from an original 0, so later iterations treat it as a new trigger and cascade zeroes through the matrix. Collect all original zero positions first, then apply the zeroing.
1. Pass 1: scan entire matrix; record all row indices and column indices that contain a zero.
2. Pass 2: for each recorded row → zero the entire row. For each recorded col → zero the entire column.
3. O(m+n) extra space. Optimize to O(1) by using the first row/col as markers.
Open on LeetCode ↗
Linked List: Pointer Manipulation — Reverse In-Place
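The O(m+n) two-pass version, sketched (set_zeroes is an illustrative name; the O(1) first-row/col marker trick is left out for clarity):

```python
def set_zeroes(matrix):
    """Pass 1: record zero rows/cols. Pass 2: apply the zeroing."""
    zero_rows = {i for i, row in enumerate(matrix) if 0 in row}
    zero_cols = {j for j in range(len(matrix[0]))
                 if any(row[j] == 0 for row in matrix)}
    for i, row in enumerate(matrix):
        for j in range(len(row)):
            if i in zero_rows or j in zero_cols:
                row[j] = 0
```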
What It Is
Use two pointers to avoid O(n²) scanning. Fast/slow detects cycles; prev/curr/next rewires nodes for reversal.
Dead Giveaway: If input is ListNode with a .next pointer → Linked List pattern.
Look For These Words
"ListNode" "reverse" "cycle" "nth from end" "merge" "reorder"
Mechanic
1. prev=null, curr=head
2. Save next = curr.next before overwriting it
3. Reverse pointer: curr.next = prev
4. Advance: prev = curr; curr = next
5. Return prev as the new head when curr == null
206 · Reverse Linked List · Easy
Three pointers (prev/curr/next) rewire .next backward in one pass.
Spot these in the problem
"reverse a linked list" "ListNode" ".next" "return the reversed list"
Key Insight: Walk the list with three pointers: prev (starts null), curr (starts head), next (lookahead). At each step, rewire curr.next to point backward. One pass, O(1) space.
Why not array copy / other? No other traversal pattern applies — linked list reversal is inherently pointer rewiring. The question is O(n) time, O(1) space vs. O(n) space (array copy).
Naive approach fails: Collecting all values into an array then rebuilding the list = O(n) space. The 3-pointer in-place walk does it in O(1) space.
1. prev = null, curr = head.
2. While curr is not null: save next = curr.next; set curr.next = prev; advance prev = curr, curr = next.
3. Return prev (the new head of reversed list).
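The 3-pointer walk, sketched with a minimal ListNode (names illustrative):

```python
class ListNode:
    def __init__(self, val=0, nxt=None):
        self.val, self.next = val, nxt

def reverse_list(head):
    """Three pointers: save next, point curr backward, advance."""
    prev, curr = None, head
    while curr:
        nxt = curr.next          # save before overwriting
        curr.next = prev         # reverse the pointer
        prev, curr = curr, nxt   # advance both
    return prev                  # prev ends on the old tail = new head
```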
Open on LeetCode ↗
92 · Reverse Linked List II · Med
Find the connection point, reverse exactly right-left+1 nodes, reconnect.
Spot these in the problem
"reverse from position left to right" "ListNode" "do it in one pass" "in-place"
Key Insight: Identify the node just before position left (the "connection point"). Reverse exactly (right-left+1) nodes using the standard 3-pointer technique. Reconnect both ends of the reversed segment back into the list.
Why not full reversal / other? Purely pointer manipulation. The challenge is tracking the four connection points: node before left, node at left, node at right, node after right.
Naive approach fails: Reversing the entire list and trimming doesn't work for subranges. You need to walk to position left-1 and then reverse exactly (right-left+1) nodes, carefully reconnecting both ends.
1. Advance to the node just before position left; call it prevLeft. Let curr = prevLeft.next.
2. Reverse the next (right-left+1) nodes using the 3-pointer walk.
3. prevLeft.next = new head of reversed segment. The segment's old head (now its tail).next = node after position right.
Open on LeetCode ↗
328 · Odd Even Linked List · Med
Two pointer chains — collect odd-indexed and even-indexed separately, then join.
Spot these in the problem
"group all odd-indexed nodes then all even-indexed" "original relative order" "O(1) space" "ListNode"
Key Insight: Two simultaneous pointer chains: one for odd-indexed nodes, one for even-indexed. Walk both at once, advancing two steps at a time. Join odd chain's tail to even chain's head at the end. One pass, O(1) space.
Why not two separate lists / other? Two simultaneous pointer chains naturally collect odd and even nodes. Array-based approach works but uses O(n) space.
Naive approach fails: Using two separate lists then merging them uses O(n) space. The in-place two-chain technique reuses existing nodes — just rewires pointers.
1. odd = head, even = head.next, evenHead = even.
2. While even and even.next exist: odd.next = even.next; even.next = odd.next.next; advance odd and even.
3. odd.next = evenHead to join the two chains.
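The two-chain rewiring above, sketched with a minimal ListNode (names illustrative):

```python
class ListNode:
    def __init__(self, val=0, nxt=None):
        self.val, self.next = val, nxt

def odd_even_list(head):
    """Two in-place chains (odd- and even-indexed), joined at the end."""
    if not head or not head.next:
        return head
    odd, even = head, head.next
    even_head = even              # remember where the even chain starts
    while even and even.next:
        odd.next = even.next      # skip over the even node to the next odd
        odd = odd.next
        even.next = odd.next      # skip over the odd node to the next even
        even = even.next
    odd.next = even_head          # append the even chain after the odd chain
    return head
```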
Open on LeetCode ↗
Hash Map: O(1) Lookup — Two Sum Example
What It Is
Store what you've seen in a dictionary so you can check in O(1) later. Trade memory for speed.
Dead Giveaway: If you need to find a complement, check existence, count frequency, or group items → Hash Map.
Look For These Words
"two sum" "group anagrams" "first unique" "frequency" "pairs"
Mechanic
1. Initialize seen = {}
2. For each num, compute need = target - num
3. If need in seen: return [seen[need], i]
4. Else: seen[num] = i
1 · Two Sum · Easy
For each number, check if its complement is already in the map — O(n) total.
Spot these in the problem
"two numbers that add up to target" "return their indices" "exactly one solution" "array"
Key Insight: For each number, you need its complement (target - num). Store seen numbers in a map with their index. For each new number, check if its complement is already there — O(1) lookup turns an O(n²) search into O(n).
Why not sort + two pointers? Sorting is O(n log n) and destroys original indices (you'd need to track index pairs). HashMap is O(n) time and O(n) space and preserves original indices.
Naive approach fails: Nested loop checking all pairs = O(n²). HashMap stores each number's index as you scan, reducing to O(n) with one pass.
1. Initialize empty map.
2. For each index i: compute complement = target - nums[i].
3. If complement is in map → return [map[complement], i]. Otherwise add nums[i] → i to map and continue.
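The one-pass lookup, sketched (two_sum is an illustrative name; the problem guarantees exactly one solution, so the trailing return is just a safety net):

```python
def two_sum(nums, target):
    """Check the complement BEFORE storing the current number."""
    seen = {}                    # value -> index of where we saw it
    for i, num in enumerate(nums):
        need = target - num
        if need in seen:
            return [seen[need], i]
        seen[num] = i            # store after checking, so a number can't pair with itself
    return []
```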
Open on LeetCode ↗
49 · Group Anagrams · Med
Sort each word → canonical key; group all words with the same key.
Spot these in the problem
"group anagrams together" "strings" "anagram" "rearrangement of letters"
Key Insight: All anagrams produce the same string when sorted. Use the sorted form as a canonical key in a HashMap. All words mapping to the same key are anagrams of each other.
Why not pairwise comparison / other? Comparing every pair of strings for anagram-ness = O(n²·L). Canonical key (sorted string) + HashMap groups all anagrams in O(n·L log L) with one pass.
1.Initialize map: sortedWord → [].
2.For each word: sort its characters to get the key (or use char-count as key for O(L) instead of O(L log L)).
3.Append the original word to map[key]. Return all map values.
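A minimal sketch of the sorted-key version (function name assumed):

```python
from collections import defaultdict

def group_anagrams(words):
    groups = defaultdict(list)       # canonical sorted form -> originals
    for word in words:
        groups["".join(sorted(word))].append(word)
    return list(groups.values())
```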
Open on LeetCode ↗
347Top K FrequentMed
Count frequencies, then bucket-sort by frequency to find top-k in O(n).
Spot these in the problem
"k most frequent elements" "return the k elements" "frequency" "top k"
Key Insight: Count frequencies with a hash map. Then use bucket sort: an array of size n+1 where index = frequency. Walk from high to low frequency collecting elements until k are gathered — O(n) total, no heap needed.
Why not full sort? Sorting by frequency = O(n log n). Min-heap of size k = O(n log k). Bucket sort by frequency = O(n). For interviews, bucket sort is optimal but heap is the safe answer.
Naive approach fails: Sorting the entire array by frequency then taking last k = O(n log n). Bucket sort achieves O(n) by indexing on frequency directly.
1.Count frequencies: map[num] → count.
2.Bucket: bucket[freq] = [list of nums with that freq].
3.Iterate buckets from index n down to 1, collecting nums into result until result.length == k.
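The bucket-sort variant sketched in Python (function name assumed):

```python
from collections import Counter

def top_k_frequent(nums, k):
    counts = Counter(nums)
    buckets = [[] for _ in range(len(nums) + 1)]   # index = frequency
    for num, freq in counts.items():
        buckets[freq].append(num)
    result = []
    for freq in range(len(nums), 0, -1):           # high frequency first
        for num in buckets[freq]:
            result.append(num)
            if len(result) == k:
                return result
    return result
```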
Open on LeetCode ↗
Sliding Win Expand Right, Shrink Left
What It Is
A window [left, right] expands right to include elements and shrinks left to fix a violated constraint. One pass, O(n).
Dead GiveawayIf you need the longest or shortest subarray/substring that satisfies a condition → Sliding Window.
Look For These Words
"longest substring" "max length" "at most k" "consecutive" "no repeat"
Mechanic
1.L=0; loop R from 0 to n-1
2.Add arr[R] to window state (sum/map/count)
3.While constraint violated: remove arr[L], then L++
4.Update answer with current valid window [L..R]
3Longest SubstringMed
Expand right; when duplicate found, shrink left until window is valid again.
Spot these in the problem
"longest substring" "without repeating characters" "string" "no duplicate letters"
Key Insight: Maintain a window [left, right] with a set of characters. Expand right freely; when adding right would create a duplicate, shrink left until the duplicate is gone. Window always contains unique chars.
Why not fixed window / other? The window size isn't fixed — it depends on when a duplicate appears. Variable window size requires dynamic left-pointer movement, not a fixed stride.
Naive approach fails: Checking all O(n²) substrings for uniqueness = O(n³). Sliding window finds the answer in O(n) with a single pass.
1.left = 0, maxLen = 0, window set = {}.
2.For each right: while chars[right] is in window, remove chars[left] from window, left++.
3.Add chars[right] to window. maxLen = max(maxLen, right - left + 1).
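A minimal sketch of the window with a character set (function name assumed):

```python
def length_of_longest_substring(s):
    window = set()
    left = max_len = 0
    for right, ch in enumerate(s):
        while ch in window:              # shrink until the duplicate is gone
            window.remove(s[left])
            left += 1
        window.add(ch)
        max_len = max(max_len, right - left + 1)
    return max_len
```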
Open on LeetCode ↗
76Min Window SubstringHard
Expand right until all chars covered; shrink left as far as possible to minimize.
Spot these in the problem
"minimum window substring" "contains all characters of t" "minimum length" "substring"
Key Insight: Two-phase loop. Expand right until all required characters are satisfied. Then shrink left as much as possible while keeping the window valid — record this minimum window. Alternate expand/shrink.
Why not fixed window / other? The expand-then-shrink two-phase loop is the standard O(n) approach. Any fixed-window approach fails because the valid window size varies.
Naive approach fails: Checking all O(n²) substrings, validating each = O(n²·|t|). Sliding window with character frequency tracking = O(n + |t|).
1.Count required char frequencies.
2.Expand right: add to window count; when a char's window count matches required, increment formed.
3.When formed == required: record window if smallest seen; shrink left (remove leftmost char, decrement formed if needed). Resume expanding right.
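One compact way to code the expand/shrink loop, tracking how many required characters are still missing (function name assumed):

```python
from collections import Counter

def min_window(s, t):
    need = Counter(t)                     # chars of t still required
    missing = len(t)
    left = start = end = 0
    for right, ch in enumerate(s, 1):     # right is 1 past the consumed char
        if need[ch] > 0:
            missing -= 1
        need[ch] -= 1
        if missing == 0:                  # window covers all of t
            while need[s[left]] < 0:      # shrink surplus chars on the left
                need[s[left]] += 1
                left += 1
            if end == 0 or right - left < end - start:
                start, end = left, right  # record smallest window so far
            need[s[left]] += 1            # release leftmost required char
            missing += 1
            left += 1
    return s[start:end]
```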
Open on LeetCode ↗
713Product < KMed
Expand right multiplying; shrink left dividing; count valid subarrays at each step.
Spot these in the problem
"subarray" "product less than k" "count subarrays" "contiguous" "positive integers"
Key Insight: For each right position, all subarrays ending at right between [left, right] are valid (product < k). Shrink left whenever product ≥ k. The count of valid subarrays ending at right = right - left + 1.
Why not DP / other? Products don't have a natural DP recurrence for this constraint (they're multiplicative, not additive). Sliding window works because product updates by multiply/divide when adding/removing one element.
Naive approach fails: Checking all O(n²) subarrays with O(n) product computation = O(n³). Sliding window: each element enters and leaves the window at most once = O(n).
1.left = 0, product = 1, count = 0.
2.For each right: product *= nums[right]. While product >= k: product /= nums[left], left++.
3.count += right - left + 1. Return count.
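The steps above as a Python sketch (function name assumed; the `k <= 1` guard handles the edge case where no positive product qualifies):

```python
def num_subarrays_with_product_less_than_k(nums, k):
    if k <= 1:
        return 0
    left, product, count = 0, 1, 0
    for right, num in enumerate(nums):
        product *= num
        while product >= k:
            product //= nums[left]    # exact: product is a multiple of nums[left]
            left += 1
        count += right - left + 1     # valid subarrays ending at right
    return count
```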
Open on LeetCode ↗
Two Ptrs Converging Pointers on Sorted Input
What It Is
Two indices moving inward (or same direction) to find pairs without a nested loop. Works because the array is sorted or has a clear structure.
Dead GiveawayIf the array is sorted and you need pairs/triplets that sum to a target, or validate a palindrome → Two Pointers.
Look For These Words
"sorted" "two sum II" "3Sum" "valid palindrome" "remove duplicates" "container with most water"
Mechanic
1.L=0, R=n-1
2.Compute sum/product of arr[L] and arr[R]
3.Too small → L++; too large → R--; equal → record answer
4.Stop when L >= R
167Two Sum IIMed
Sorted array + two pointers from ends: move whichever side fails the target sum.
Spot these in the problem
"sorted array" "two numbers add up to target" "1-indexed" "exactly one solution"
Key Insight: Array is already sorted. Two pointers from both ends. If sum > target, move right pointer left (decrease). If sum < target, move left pointer right (increase). Convergence guarantees we find the pair.
Why not HashMap / other? HashMap works for unsorted arrays but uses O(n) extra space. Sorted input enables two-pointer convergence in O(1) space.
1.left = 0, right = n-1.
2.While left < right: sum = nums[left] + nums[right].
3.If sum == target → return indices. If sum < target → left++. If sum > target → right--.
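A minimal sketch of the converging pointers (function name assumed):

```python
def two_sum_sorted(numbers, target):
    left, right = 0, len(numbers) - 1
    while left < right:
        s = numbers[left] + numbers[right]
        if s == target:
            return [left + 1, right + 1]   # 1-indexed, per the problem
        if s < target:
            left += 1                      # need a bigger sum
        else:
            right -= 1                     # need a smaller sum
```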
Open on LeetCode ↗
153SumMed
Sort, fix one number, two-pointer the rest to find pairs summing to its negation.
Spot these in the problem
"three elements sum to 0" "all unique triplets" "no duplicate triplets" "array"
Key Insight: Sort first. Fix one number at index i, then run Two Pointers on the subarray (i+1 to n-1) to find pairs summing to -nums[i]. Skip duplicates to avoid repeated triplets.
Why not HashMap for all pairs? HashMap approach is O(n²) but deduplication is tricky. Sort + two pointers is also O(n²) and handles deduplication naturally by skipping repeated values.
Naive approach fails: Brute force O(n³) checks all triplets. The sort-and-fix approach reduces to O(n²): fix one number, two-pointer the rest.
1.Sort nums.
2.For i from 0 to n-3: skip if duplicate (i > 0 and nums[i] == nums[i-1]).
3.left = i+1, right = n-1. Find pairs: if sum < 0 → left++; if sum > 0 → right--; if == 0 → record and skip duplicates on both sides.
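The sort-and-fix approach sketched in Python (function name assumed):

```python
def three_sum(nums):
    nums.sort()
    result = []
    n = len(nums)
    for i in range(n - 2):
        if i > 0 and nums[i] == nums[i - 1]:     # skip duplicate anchors
            continue
        left, right = i + 1, n - 1
        while left < right:
            s = nums[i] + nums[left] + nums[right]
            if s < 0:
                left += 1
            elif s > 0:
                right -= 1
            else:
                result.append([nums[i], nums[left], nums[right]])
                left += 1
                while left < right and nums[left] == nums[left - 1]:
                    left += 1                    # skip duplicate pairs
                right -= 1
    return result
```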
Open on LeetCode ↗
11Container With Most WaterMed
Move the shorter pointer inward — it's the only side that could possibly improve the area.
Spot these in the problem
"vertical lines" "maximum amount of water" "container" "array of heights"
Key Insight: Area = min(h[l], h[r]) × width. Moving the taller side inward can never improve (min stays same or decreases; width decreases). So always move the shorter side — it might find something taller that beats the current area.
Why not check all pairs? O(n²). Two pointers: O(n). The key insight: moving the taller pointer inward can never improve area (min height stays same or drops; width drops). Always move the shorter pointer.
Naive approach fails: Checking all O(n²) pairs is unnecessary. Why moving the shorter pointer is correct: if h[l] < h[r], area = h[l] × width. Moving the left pointer may find a taller h[l'], possibly improving. Moving the right pointer instead keeps the area bounded by h[l] (the shorter side) while shrinking the width — never better.
1.left = 0, right = n-1, maxWater = 0.
2.While left < right: area = min(h[left], h[right]) × (right-left). Update maxWater.
3.If h[left] < h[right]: left++; else right--.
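A minimal sketch of the shorter-side rule (function name assumed):

```python
def max_area(height):
    left, right = 0, len(height) - 1
    best = 0
    while left < right:
        best = max(best, min(height[left], height[right]) * (right - left))
        if height[left] < height[right]:
            left += 1                 # only the shorter side can improve
        else:
            right -= 1
    return best
```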
Open on LeetCode ↗
Bin Search Halve the Search Space Each Step
What It Is
Cut the search space in half each time. Needs sorted order (or a monotonic condition) — turns O(n) into O(log n).
Dead GiveawayIf input is sorted and you need to find a position, OR you can rephrase the answer as "find the smallest X that satisfies Y" → Binary Search.
Look For These Words
"sorted array" "find position" "rotated" "first/last occurrence" "peak element"
Mechanic
1.lo=0, hi=n-1
2.mid = (lo+hi) // 2
3.arr[mid]==target: found! If less: lo=mid+1; else: hi=mid-1
4.Loop until lo > hi — not found if loop ends
704Binary SearchEasy
Classic: eliminate half the search space each step — O(log n).
Spot these in the problem
"sorted array" "find target" "O(log n)" "return index" "-1 if not found"
Key Insight: Sorted array. Each comparison lets you eliminate half the remaining space. After each step, the target must be in the surviving half. Repeat until found or space is empty.
Why not linear scan? O(n) vs O(log n). Whenever the input is sorted and you need to find a value, binary search is the default O(log n) solution.
1.lo = 0, hi = n-1.
2.While lo ≤ hi: mid = (lo+hi) // 2.
3.If nums[mid] == target: return mid. If nums[mid] < target: lo = mid+1. Else: hi = mid-1. Return -1 if not found.
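The classic loop in Python (function name assumed):

```python
def binary_search(nums, target):
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[mid] < target:
            lo = mid + 1          # target is in the right half
        else:
            hi = mid - 1          # target is in the left half
    return -1
```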
Open on LeetCode ↗
153Find Min in RotatedMed
Minimum is always in the unsorted half — binary search on which half that is.
Spot these in the problem
"rotated sorted array" "find minimum" "no duplicates" "O(log n)"
Key Insight: In a rotated array, one half is always fully sorted. If nums[mid] > nums[hi], the left half is sorted (minimum is in right half). If nums[mid] ≤ nums[hi], minimum is in left half (possibly mid itself).
Why not linear scan? O(n). Binary search exploits the structural property: one half of any rotated array is always fully sorted.
Naive approach fails: Using nums[mid] > nums[0] to determine which half to search is a common bug — it fails when the rotation point is at position 0. Compare nums[mid] against nums[hi] instead (always reliable).
1.lo = 0, hi = n-1.
2.While lo < hi: mid = (lo+hi) // 2. If nums[mid] > nums[hi]: lo = mid+1 (min in right half). Else: hi = mid (min is in left half, possibly mid).
3.Return nums[lo].
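The mid-vs-hi comparison sketched in Python (function name assumed):

```python
def find_min(nums):
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] > nums[hi]:      # min lies strictly right of mid
            lo = mid + 1
        else:                         # min is at mid or left of it
            hi = mid
    return nums[lo]
```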
Open on LeetCode ↗
33Search in RotatedMed
One half is always sorted — check which half the target belongs to, search there.
Spot these in the problem
"rotated sorted array" "search for target" "O(log n)" "may be rotated at unknown pivot"
Key Insight: One of [lo..mid] or [mid..hi] is always sorted. Determine which half the target falls in using the sorted half's boundaries, then eliminate the other half.
Why not two separate searches? Find the pivot first (one binary search), then binary search in the correct half (second binary search). Two searches = 2×O(log n) = still O(log n), but a single binary search that identifies the sorted half inline is cleaner.
Naive approach fails: Two separate binary searches (find pivot, then search) works but involves more code. Single-pass binary search handles both steps simultaneously.
1.lo=0, hi=n-1. While lo ≤ hi: mid = (lo+hi)//2. If match → return mid.
2.If left half sorted (nums[lo] ≤ nums[mid]): if nums[lo] ≤ target < nums[mid] → search left; else search right.
3.Else right half sorted: if nums[mid] < target ≤ nums[hi] → search right; else search left.
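The single-pass version sketched in Python (function name assumed):

```python
def search_rotated(nums, target):
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[lo] <= nums[mid]:                 # left half is sorted
            if nums[lo] <= target < nums[mid]:
                hi = mid - 1
            else:
                lo = mid + 1
        else:                                     # right half is sorted
            if nums[mid] < target <= nums[hi]:
                lo = mid + 1
            else:
                hi = mid - 1
    return -1
```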
Open on LeetCode ↗
Dyn. Prog. Cache Overlapping Subproblems — Kadane's Algorithm
What It Is
Break into sub-problems, cache results, reuse them. Choose between choices (include/skip, take/leave) and keep the best.
Dead GiveawayIf the problem asks "maximum profit", "minimum cost", "number of ways", or "can you reach?" and involves sequential choices → DP.
Look For These Words
"maximum profit" "number of ways" "minimum cost" "subarray" "subsequence" "climb stairs"
Mechanic
1.Define state: what does dp[i] represent?
2.Write recurrence: dp[i] = f(dp[i-1], arr[i])
3.Identify base case(s): e.g., dp[0] = arr[0]
4.Fill left-to-right; return answer from max(dp)
53Maximum SubarrayMed
Kadane's: at each index, either extend previous subarray or start fresh.
Spot these in the problem
"maximum sum" "contiguous subarray" "array contains negative numbers" "integers"
Key Insight: Kadane's algorithm: dp[i] = max(nums[i], dp[i-1] + nums[i]). Either extend the previous subarray (if it's positive, it helps), or start a new one from the current element. Track global max throughout.
Why not divide and conquer? O(n log n). Kadane's DP is O(n) with O(1) space. The recurrence is: at each position, either extend the previous subarray or start fresh — whichever is larger.
Naive approach fails: Checking all O(n²) subarrays with running sum = O(n²). Prefix sum + min tracking = O(n) but more complex. Kadane's is the cleanest O(n) solution.
1.currSum = nums[0], maxSum = nums[0].
2.For i from 1 to n-1: currSum = max(nums[i], currSum + nums[i]).
3.maxSum = max(maxSum, currSum). Return maxSum.
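Kadane's recurrence in Python, with the DP array rolled into one variable (function name assumed):

```python
def max_subarray(nums):
    curr = best = nums[0]
    for num in nums[1:]:
        curr = max(num, curr + num)   # extend previous subarray or start fresh
        best = max(best, curr)
    return best
```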
Open on LeetCode ↗
70Climbing StairsEasy
To reach step n, you came from n-1 or n-2 — it's Fibonacci.
Spot these in the problem
"n steps" "1 or 2 steps at a time" "distinct ways" "how many ways"
Key Insight: You can reach step n from step n-1 (1-step) or step n-2 (2-step). So dp[n] = dp[n-1] + dp[n-2]. This is Fibonacci. The answer is the number of distinct paths, not their lengths.
Why not recursion without memoization? Each subproblem f(n) branches into f(n-1) and f(n-2), both of which expand further — exponential recomputation. f(3) is computed in the f(4) branch AND the f(5) branch AND the f(6) branch...
Naive approach fails: Pure recursion = O(2^n). The recurrence dp[n] = dp[n-1] + dp[n-2] (Fibonacci) reduces to O(n) with memoization and O(1) space with two rolling variables.
1.dp[1] = 1, dp[2] = 2.
2.For i from 3 to n: dp[i] = dp[i-1] + dp[i-2].
3.Return dp[n]. Optimize to O(1) space by tracking just two variables.
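The O(1)-space rolling-variable version (function name assumed):

```python
def climb_stairs(n):
    a, b = 1, 1                  # ways to reach steps i-2 and i-1
    for _ in range(n - 1):
        a, b = b, a + b          # Fibonacci step
    return b
```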
Open on LeetCode ↗
322Coin ChangeMed
dp[amount] = min coins to make that amount; try every coin at each amount.
Spot these in the problem
"minimum number of coins" "make up the amount" "unlimited coins of each denomination" "coin denominations"
Key Insight: Build dp bottom-up: dp[a] = minimum coins to make amount a. For each amount, try every coin c: if dp[a - c] + 1 < dp[a], update. Start from dp[0]=0 and fill up to target.
Why not greedy (take largest first)? Greedy fails for denominations like [1, 3, 4] with amount=6. Greedy: 4+1+1=3 coins. Optimal: 3+3=2 coins. Greedy doesn't account for future combinations.
Naive approach fails: Greedy fails. Brute-force recursion = exponential. Bottom-up DP builds dp[0..amount] iteratively — at each amount, try every coin denomination.
1.dp[0]=0, dp[1..amount]=∞.
2.For each amount a from 1 to target: for each coin c: if c ≤ a and dp[a-c]+1 < dp[a]: update dp[a].
3.Return dp[amount], or -1 if still ∞.
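The bottom-up table sketched in Python (function name assumed):

```python
def coin_change(coins, amount):
    INF = float("inf")
    dp = [0] + [INF] * amount            # dp[a] = min coins to make amount a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1
```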
Open on LeetCode ↗
300LISMed
dp[i] = LIS length ending at i; look back at all j<i where nums[j]<nums[i].
Spot these in the problem
"strictly increasing subsequence" "not necessarily contiguous" "longest" "array of integers"
Key Insight: dp[i] = length of the longest increasing subsequence ending exactly at index i. For each i, look at all j < i where nums[j] < nums[i]: dp[i] = max(dp[i], dp[j]+1).
Why not sliding window / other? Subsequences are non-contiguous — a window only works for contiguous elements. DP with O(n²) is the standard approach; binary search + patience sorting gives O(n log n).
Naive approach fails: Checking all 2^n subsequences is exponential. The O(n²) DP is the expected interview answer. The binary search optimization exists but is much harder to derive on the spot.
1.dp[i] = 1 for all i (each element alone is a subsequence).
2.For i from 1 to n-1: for j from 0 to i-1: if nums[j] < nums[i]: dp[i] = max(dp[i], dp[j]+1).
3.Return max(dp).
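The O(n²) DP sketched in Python (function name assumed):

```python
def length_of_lis(nums):
    dp = [1] * len(nums)                  # LIS length ending exactly at i
    for i in range(1, len(nums)):
        for j in range(i):
            if nums[j] < nums[i]:         # nums[i] can extend the LIS at j
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)
```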
Open on LeetCode ↗
Greedy Collect Every Gain — Stock II
What It Is
At every step, pick the locally best option. No backtracking. Works when local-best decisions don't hurt future choices.
Dead GiveawayIf you can sort and make decisions left-to-right without needing to undo them → Greedy.
Look For These Words
"minimum number of" "jump game" "intervals" "meeting rooms" "gas station"
Mechanic
1.Scan consecutive pairs prices[i-1], prices[i]
2.If prices[i] > prices[i-1]: profit += prices[i]-prices[i-1]
3.Never look back — greedy choice is always take any positive move
122Stock IIMed
Sum every positive day-to-day difference — buy every valley, sell every peak.
Spot these in the problem
"multiple transactions" "maximize total profit" "hold at most one share" "any number of times"
Key Insight: You can make unlimited transactions (but hold at most one share at a time). Every upward move is profit. Sum every positive consecutive difference — equivalent to buying at every valley and selling at every peak in one pass.
Why not DP / other? DP works but is overkill. The greedy insight: every upward price movement between consecutive days is capturable profit. Sum all positive consecutive differences.
Naive approach fails: Trying to identify explicit buy/sell pairs to maximize profit seems complex. The insight is that summing ALL positive day-to-day deltas gives the same result as any optimal buy-sell pairing.
1.profit = 0.
2.For i from 1 to n-1: if prices[i] > prices[i-1]: profit += prices[i] - prices[i-1].
3.Return profit. Each upward tick is one "micro-transaction."
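The greedy sum of positive deltas in Python (function name assumed):

```python
def max_profit(prices):
    profit = 0
    for prev, curr in zip(prices, prices[1:]):
        if curr > prev:
            profit += curr - prev     # capture every upward move
    return profit
```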
Open on LeetCode ↗
55Jump GameMed
Track farthest reachable index; if you outrun it before reaching end, return false.
Spot these in the problem
"jump game" "reach the last index" "nums[i] = maximum jump length" "return true or false"
Key Insight: Track maxReach — the farthest index reachable from any position seen so far. If at any index your current position exceeds maxReach, you're trapped. Otherwise keep updating maxReach.
Why not DP / other? DP: for each position, check all positions that can reach it = O(n²). Greedy: track only maxReach = O(n). Both are correct but greedy is faster.
Naive approach fails: DP dp[i] = true if reachable = O(n²). The greedy observation: if you can reach index i, you know the farthest you can reach from i. If your current position ever exceeds that farthest reach, you're stuck.
1.maxReach = 0.
2.For i from 0 to n-1: if i > maxReach: return false. maxReach = max(maxReach, i + nums[i]).
3.Return true.
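The maxReach scan in Python (function name assumed):

```python
def can_jump(nums):
    max_reach = 0
    for i, jump in enumerate(nums):
        if i > max_reach:
            return False              # this index is unreachable
        max_reach = max(max_reach, i + jump)
    return True
```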
Open on LeetCode ↗
134Gas StationMed
If total gas ≥ total cost, a solution exists; find start by resetting when tank goes negative.
Spot these in the problem
"circular route" "gas stations" "can you complete the circuit" "starting station index"
Key Insight: If total gas ≥ total cost, a valid starting station always exists. Find it greedily: whenever the cumulative tank drops below 0, reset the start to the next station (current stretch is definitely not startable here).
Why not brute force? Try every starting station and simulate = O(n²). The single-pass greedy is O(n) with a non-obvious correctness proof.
Naive approach fails: Key proof: if total gas ≥ total cost, a valid starting station always exists. When cumulative tank drops below 0 at station i, every station from the current start to i is ruled out as a valid start. Reset to i+1.
1.totalGas=0, currGas=0, start=0.
2.For i from 0 to n-1: delta = gas[i]-cost[i]. totalGas+=delta, currGas+=delta.
3.If currGas < 0: start = i+1, reset currGas=0. Return start if totalGas ≥ 0, else -1.
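The single-pass greedy in Python (function name assumed):

```python
def can_complete_circuit(gas, cost):
    total = tank = start = 0
    for i, (g, c) in enumerate(zip(gas, cost)):
        delta = g - c
        total += delta
        tank += delta
        if tank < 0:                  # no station in [start, i] can be the start
            start, tank = i + 1, 0
    return start if total >= 0 else -1
```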
Open on LeetCode ↗
Heap Priority Queue — Always Pop the Minimum
What It Is
A priority queue that serves the min (or max) element in O(log n). Perfect when you need the K-th largest/smallest at any point.
Dead GiveawayIf problem asks for "K largest", "K smallest", "top K", "median from stream", or "merge K sorted" → Heap.
Look For These Words
"k largest" "top k" "median" "merge k lists" "task scheduler"
Mechanic
1.heapq.heappush(h, val) — O(log n)
2.heapq.heappop(h) — always returns minimum, O(log n)
3.For k-th largest: maintain min-heap of size k. Root = answer.
4.If heap size exceeds k: pop (evicts the current minimum, keeps k largest)
215K-th LargestMed
Min-heap of size k: after scanning all elements, heap top = kth largest.
Spot these in the problem
"kth largest element" "in an array" "not sorted" "find kth largest"
Key Insight: Maintain a min-heap of size k. When size exceeds k, pop the minimum (smallest in heap). After all elements, the heap top is the kth largest — everything below it was smaller and got evicted.
Why not sort? Sort = O(n log n). Min-heap of size k = O(n log k). QuickSelect = O(n) average. For interviews, min-heap is the clearest O(n log k) solution.
Naive approach fails: Max-heap of all elements then pop k times = O(n + k log n). Min-heap of size k is more efficient when k << n: O(n log k).
1.min-heap = [].
2.For each num: push to heap. If heap.size > k: pop (removes current minimum).
3.After all elements, heap[0] = the kth largest. O(n log k) time.
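The size-k min-heap sketched with `heapq` (function name assumed):

```python
import heapq

def find_kth_largest(nums, k):
    heap = []                         # min-heap holding the k largest so far
    for num in nums:
        heapq.heappush(heap, num)
        if len(heap) > k:
            heapq.heappop(heap)       # evict the current minimum
    return heap[0]                    # root = kth largest
```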
Open on LeetCode ↗
347Top K FrequentMed
Count frequencies; min-heap of size k keeps only the k most frequent.
Spot these in the problem
"k most frequent" "return elements" "top k" "frequency of occurrence"
Key Insight: Count frequencies with hash map. Then use a min-heap of (freq, num) of size k. When size exceeds k, the smallest-frequency element is evicted. What remains = k most frequent elements.
Why not full sort? Sorting all frequencies = O(n log n). Min-heap of size k = O(n log k). Bucket sort = O(n). Min-heap is the recommended interview approach.
Naive approach fails: Building a sorted frequency map then taking the last k elements = O(n log n). The heap maintains only k elements at a time, evicting the least frequent — O(n log k).
1.Count frequencies: map[num] → count.
2.For each (num, freq): push (freq, num) to min-heap. If size > k: pop (removes least frequent).
3.Return all elements remaining in heap.
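The heap variant sketched in Python (function name assumed; result order within the heap is unspecified):

```python
import heapq
from collections import Counter

def top_k_frequent(nums, k):
    heap = []                               # min-heap of (freq, num), size <= k
    for num, freq in Counter(nums).items():
        heapq.heappush(heap, (freq, num))
        if len(heap) > k:
            heapq.heappop(heap)             # evict the least frequent
    return [num for _, num in heap]
```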
Open on LeetCode ↗
23Merge K Sorted ListsHard
Min-heap always gives the globally smallest node across all k lists — pop, attach, push next.
Spot these in the problem
"k sorted linked lists" "merge into one sorted linked list" "ListNode"
Key Insight: A min-heap holds the current head of each list. Pop the globally smallest node, attach it to the result, then push that node's .next onto the heap. Repeat until heap is empty.
Why not merge sequentially? Merging list by list: each subsequent merge touches all accumulated nodes. Total work = O(n·k). Min-heap processes each node exactly once = O(n log k).
Naive approach fails: Sequential merges: list1+list2 = O(n), result+list3 = O(2n), ...+listk = O((k-1)n). Total = O(nk). Heap: each of n total nodes is pushed/popped once with O(log k) heap ops = O(n log k).
1.Push all k non-null list heads into min-heap (keyed by node.val).
2.While heap not empty: pop minimum node, attach to result list's tail. If popped.next is not null: push it.
3.Return result list head.
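A sketch with `heapq`, using the list index as a tiebreaker so nodes themselves are never compared (class and function names assumed):

```python
import heapq

class ListNode:
    def __init__(self, val=0, next=None):
        self.val, self.next = val, next

def merge_k_lists(lists):
    heap = [(node.val, i, node) for i, node in enumerate(lists) if node]
    heapq.heapify(heap)
    dummy = tail = ListNode()
    while heap:
        _, i, node = heapq.heappop(heap)    # globally smallest head
        tail.next = tail = node             # attach to result tail
        if node.next:
            heapq.heappush(heap, (node.next.val, i, node.next))
    return dummy.next
```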
Open on LeetCode ↗
Stack LIFO — Deferred Computation
What It Is
Push and wait. Pop when you find something that "resolves" the top. Great for matching pairs and finding the next greater element.
Dead GiveawayIf the problem has nested/paired structures (parentheses, brackets) or asks "next greater/smaller" → Stack.
Look For These Words
"valid parentheses" "next greater" "daily temperatures" "calculator" "decode string"
Mechanic
1.Track current number and last operator op (default +)
2.On +/-: push ±num onto stack
3.On * or /: pop top, compute top*num or int(top/num), push result
4.Return sum(stack)
227Basic Calculator IIMed
Stack handles * and / immediately; defer + and - to final sum of stack.
Spot these in the problem
"basic calculator" "+ - * /" "no parentheses" "evaluate the expression string"
Key Insight: Process operators with a stack. For + and -: push the number (negated for -) onto the stack and handle later. For * and /: immediately pop stack top and push the computed result. Sum the entire stack at the end.
Why not left-to-right / other? Left-to-right evaluation ignores operator precedence — 2+3*4 would give 20 instead of 14. The stack defers + and - while handling * and / immediately (higher precedence).
Naive approach fails: Evaluating left-to-right without precedence gives wrong results for * and /. The stack simulates correct precedence by immediately computing *// and deferring +/- for a final summation.
1.Parse left to right, tracking current number and last operator.
2.At each operator or end: if lastOp is + → push num; if - → push -num; if * → pop and push top * num; if / → pop and push trunc(top / num).
3.Return sum of stack.
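The parse loop sketched in Python (function name assumed; `int(a / b)` truncates toward zero, matching the problem's division):

```python
def calculate(s):
    stack = []
    num, op = 0, "+"
    for i, ch in enumerate(s):
        if ch.isdigit():
            num = num * 10 + int(ch)
        if ch in "+-*/" or i == len(s) - 1:       # operator or end of input
            if op == "+":
                stack.append(num)                 # defer addition
            elif op == "-":
                stack.append(-num)                # defer subtraction
            elif op == "*":
                stack.append(stack.pop() * num)   # resolve immediately
            else:
                stack.append(int(stack.pop() / num))
            num, op = 0, ch
    return sum(stack)
```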
Open on LeetCode ↗
20Valid ParenthesesEasy
Push openers; closing bracket must match the top of the stack.
Spot these in the problem
"valid parentheses" "brackets" "'(', ')', '{', '}', '[', ']'" "well-formed"
Key Insight: Opening brackets push onto the stack. Each closing bracket must match the top. Any mismatch or a non-empty stack at the end = invalid.
Why not a counter / other? A counter works for a single bracket type. For mixed types, ([)] has a balanced count (2 opens, 2 closes) but is invalid. Stack correctly checks that each closer matches the most recent opener.
1.For each char: if (, [, or {: push to stack.
2.If ), ], or }: if stack empty OR top doesn't match → return false. Pop stack.
3.After loop: return stack.isEmpty().
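The matching check in Python (function name assumed):

```python
def is_valid(s):
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in s:
        if ch in pairs:                              # closing bracket
            if not stack or stack.pop() != pairs[ch]:
                return False
        else:                                        # opening bracket
            stack.append(ch)
    return not stack                                 # leftovers = invalid
```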
Open on LeetCode ↗
739Daily TemperaturesMed
Monotonic decreasing stack of indices — pop when current temp is warmer.
Spot these in the problem
"daily temperatures" "how many days until a warmer temperature" "result[i] = number of days" "warmer"
Key Insight: Monotonic decreasing stack of indices. Push each index. When the current day's temperature exceeds the top of the stack, the top index has "found its warmer day" — record the difference and pop.
Why not brute force? For each day, scan forward until a warmer day = O(n²). Monotonic stack: each element is pushed and popped at most once = O(n).
Naive approach fails: For each index, inner loop until finding warmer temp = O(n²). The monotonic decreasing stack resolves each "pending" day the moment a warmer day arrives.
1.Stack of indices, result[i]=0 by default.
2.For i from 0 to n-1: while stack not empty and temps[i] > temps[stack.top]: pop j, result[j] = i - j. Push i.
3.Remaining stack entries never found a warmer day; result stays 0.
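The monotonic stack sketched in Python (function name assumed):

```python
def daily_temperatures(temps):
    result = [0] * len(temps)
    stack = []                        # indices still waiting for a warmer day
    for i, t in enumerate(temps):
        while stack and t > temps[stack[-1]]:
            j = stack.pop()           # day j just found its warmer day
            result[j] = i - j
        stack.append(i)
    return result
```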
Open on LeetCode ↗
Design Combine Data Structures for O(1) API — LRU Cache
What It Is
Build a class with specific API methods, each running in O(1) or O(log n). Usually needs a HashMap + another structure together.
Dead GiveawayIf the problem says "Design a data structure" or "Implement..." with a constructor → Design pattern.
Look For These Words
"design" "implement" "O(1)" "LRU cache" "insert/getRandom" "constructor"
Mechanic
1.HashMap maps key → DLL node for O(1) access
2.get(key): O(1) lookup + move that DLL node to head
3.put(key,val): insert at head; if over capacity, remove DLL tail and map entry
4.Use sentinel head/tail nodes to simplify edge cases
146LRU CacheMed
HashMap + doubly-linked list: O(1) lookup AND O(1) move-to-front/evict.
Spot these in the problem
"LRU cache" "get and put in O(1)" "least recently used" "capacity"
Key Insight: O(1) get AND O(1) eviction needs two structures together: HashMap (key→node) for instant lookup + Doubly Linked List (most-recent at head, LRU at tail) for O(1) move-to-front and O(1) tail removal.
Why not HashMap alone? HashMap gives O(1) lookup but can't evict the LRU item in O(1) — you'd need to scan all entries to find the oldest.
Naive approach fails: HashMap + timestamp per key: O(1) get/put but O(n) eviction (scan for min timestamp). HashMap + doubly-linked list: O(1) get/put/evict by moving nodes to head on access and removing tail on eviction.
1.Get: if key not in map, return -1. Move node to list head; return value.
2.Put: if key exists, update value and move to head. Else create node at head, add to map.
3.If size exceeds capacity, remove tail node and delete from map.
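A sketch of the HashMap + doubly-linked-list design with sentinel head/tail nodes, as the Mechanic describes (class and method names follow the LeetCode API; internals are illustrative):

```python
class Node:
    def __init__(self, key=0, val=0):
        self.key, self.val = key, val
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.cap = capacity
        self.map = {}                             # key -> node
        self.head, self.tail = Node(), Node()     # sentinels simplify edges
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _add_front(self, node):                   # most recent lives at head
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)
        self._add_front(node)                     # mark as most recently used
        return node.val

    def put(self, key, value):
        if key in self.map:
            self._remove(self.map[key])
        node = Node(key, value)
        self.map[key] = node
        self._add_front(node)
        if len(self.map) > self.cap:              # evict LRU at the tail
            lru = self.tail.prev
            self._remove(lru)
            del self.map[lru.key]
```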
Open on LeetCode ↗
460LFU CacheHard
Three structures: key→(val,freq), freq→LRU-set, minFreq for O(1) eviction.
Spot these in the problem
"LFU" "least frequently used" "O(1)" "frequency" "capacity"
Key Insight: LFU needs O(1) get, put, AND eviction of least-frequently-used (ties broken by LRU). Three structures: (a) key→(value, freq) map, (b) freq→LinkedHashSet of keys (preserves insertion order for LRU tie-breaking), (c) minFreq variable.
Why not LRU approach? LRU tracks recency only. LFU needs per-key access frequency plus LRU tie-breaking within the same frequency — two layers of ordering.
Naive approach fails: Sorting by frequency per eviction = O(n log n). Three-structure design (key→freq map + freq→LRU-ordered-set map + minFreq tracker) achieves O(1) by keeping frequency buckets pre-sorted.
1.Get: increment key's freq; move from old freq-set to freq+1 set; update minFreq if old set is now empty.
2.Put: if cache full, evict first key from freq[minFreq].
3.Add new key with freq=1 to freq[1]; set minFreq=1.
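A compact sketch of the three-structure design, using `OrderedDict` as the LRU-ordered frequency bucket (method names follow the LeetCode API; internal names are illustrative):

```python
from collections import defaultdict, OrderedDict

class LFUCache:
    def __init__(self, capacity):
        self.cap = capacity
        self.vals = {}                            # key -> value
        self.freqs = {}                           # key -> frequency
        self.buckets = defaultdict(OrderedDict)   # freq -> keys in LRU order
        self.min_freq = 0

    def _touch(self, key):                        # move key to the next bucket
        f = self.freqs[key]
        del self.buckets[f][key]
        if not self.buckets[f]:
            del self.buckets[f]
            if self.min_freq == f:
                self.min_freq = f + 1
        self.freqs[key] = f + 1
        self.buckets[f + 1][key] = None

    def get(self, key):
        if key not in self.vals:
            return -1
        self._touch(key)
        return self.vals[key]

    def put(self, key, value):
        if self.cap <= 0:
            return
        if key in self.vals:
            self.vals[key] = value
            self._touch(key)
            return
        if len(self.vals) >= self.cap:            # evict LFU, LRU tie-break
            evict, _ = self.buckets[self.min_freq].popitem(last=False)
            del self.vals[evict], self.freqs[evict]
        self.vals[key], self.freqs[key] = value, 1
        self.buckets[1][key] = None
        self.min_freq = 1
```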
Open on LeetCode ↗
355Design TwitterMed
HashMap per user for tweets; min-heap to merge k most recent tweet feeds.
Spot these in the problem
"postTweet" "getNewsFeed" "follow/unfollow" "10 most recent" "design"
Key Insight: Store each user's tweet list (timestamp + tweetId) in a HashMap. For getNewsFeed: collect tweet lists from user + followees, then use a min-heap to merge them and extract the 10 most recent.
Why not global sorted tweet list? Maintaining one global list means every getNewsFeed sorts all tweets = O(n log n). Per-user lists + heap merge extracts top-10 in O(k log k) where k = followees.
1.postTweet: append (timestamp, tweetId) to user's list.
2.getNewsFeed: for user + all followees, push their latest tweet into a max-heap by timestamp.
3.Pop up to 10 tweets, pushing the previous tweet from each popped list.
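A sketch of the per-user-list + heap-merge design (method names follow the LeetCode API; a min-heap over negated timestamps stands in for the max-heap):

```python
import heapq
from collections import defaultdict
from itertools import count

class Twitter:
    def __init__(self):
        self.clock = count()                 # global monotonic timestamp
        self.tweets = defaultdict(list)      # user -> [(time, tweetId)]
        self.follows = defaultdict(set)      # user -> followees

    def postTweet(self, userId, tweetId):
        self.tweets[userId].append((next(self.clock), tweetId))

    def getNewsFeed(self, userId):
        heap = []                            # negated time => max-heap
        for uid in self.follows[userId] | {userId}:
            if self.tweets[uid]:
                i = len(self.tweets[uid]) - 1
                t, tid = self.tweets[uid][i]
                heapq.heappush(heap, (-t, tid, uid, i - 1))
        feed = []
        while heap and len(feed) < 10:
            _, tid, uid, i = heapq.heappop(heap)
            feed.append(tid)
            if i >= 0:                       # push that user's previous tweet
                t, prev = self.tweets[uid][i]
                heapq.heappush(heap, (-t, prev, uid, i - 1))
        return feed

    def follow(self, followerId, followeeId):
        self.follows[followerId].add(followeeId)

    def unfollow(self, followerId, followeeId):
        self.follows[followerId].discard(followeeId)
```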
Open on LeetCode ↗
Array Sort First, Then Single Pass — Merge Intervals
What It Is
In-place tricks: prefix sums, index math, rotations. Avoid extra space by reusing the input array itself.
Dead GiveawayIf input is a simple array and you need to transform it in-place with O(1) space → Array.
Look For These Words
"rotate array" "product of array" "prefix sum" "subarray sum" "in-place"
Mechanic
1.Sort intervals by start
2.result = [intervals[0]]
3.If cur.start <= result[-1].end: extend end to max(both)
4.Else: append current interval to result
56Merge IntervalsMed
Sort by start; single pass merging overlapping intervals.
Spot these in the problem
"intervals" "overlapping" "merge all overlapping" "[[start,end],...]"
Key Insight: Sort by start time. Then a single pass: if current interval's start ≤ last merged interval's end → they overlap, extend the end. Otherwise push current as a new separate interval.
Why not unsorted / other? Without sorting, you'd need to check each new interval against every existing merged interval = O(n²). Sorting by start guarantees only the last merged interval needs to be checked against the current one.
Naive approach fails: Unsorted intervals require O(n²) pairwise comparison. Sort by start → single pass O(n) merge.
1.Sort intervals by start.
2.result = [intervals[0]].
3.For each remaining interval: if interval.start ≤ result.last.end: result.last.end = max(result.last.end, interval.end). Else: result.push(interval). Return result.
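The sort-then-merge pass in Python (function name assumed; mutates the last merged interval in place):

```python
def merge_intervals(intervals):
    intervals.sort(key=lambda iv: iv[0])
    result = [intervals[0]]
    for start, end in intervals[1:]:
        if start <= result[-1][1]:            # overlaps the last merged one
            result[-1][1] = max(result[-1][1], end)
        else:
            result.append([start, end])
    return result
```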
Open on LeetCode ↗
238Product Except SelfMed
Two passes: left products fill right→left, right products multiply in.
Spot these in the problem
"product of all elements except self" "without using division" "O(n) time" "O(1) extra space"
Key Insight: No division allowed. Two passes using the input array itself. Left pass: result[i] = product of everything to the left. Right pass: multiply in product of everything to the right. Result is exact without dividing.
Why not division / other? Division fails when the array contains zeros — dividing by zero is undefined. The constraint "without division" also exists explicitly.
Naive approach fails: With division: O(n) but breaks for zeros. Two-pass prefix/suffix product: O(n) time, O(1) space (reusing output array for left products, then multiplying in right products).
1.Left pass: result[0]=1; for i from 1 to n-1: result[i] = result[i-1] * nums[i-1].
2.Right pass: rightProd=1; for i from n-1 down to 0: result[i] *= rightProd; rightProd *= nums[i].
3.Return result.
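The two passes can be sketched in Python (reusing the output array for the left products, as described):

```python
def product_except_self(nums):
    n = len(nums)
    result = [1] * n
    # Left pass: result[i] = product of everything to the left of i.
    for i in range(1, n):
        result[i] = result[i - 1] * nums[i - 1]
    # Right pass: multiply in the product of everything to the right of i.
    right = 1
    for i in range(n - 1, -1, -1):
        result[i] *= right
        right *= nums[i]
    return result
```

For `[1, 2, 3, 4]` this returns `[24, 12, 8, 6]`, with no division and O(1) extra space beyond the output.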
Open on LeetCode ↗
253Meeting Rooms IIMed
Min-heap of end times: reuse a room if its end ≤ new meeting's start.
Spot these in the problem
"minimum number of conference rooms" "intervals" "all meetings must be held" "no overlap"
Key Insight: Sort meetings by start. Use a min-heap of end times (earliest-ending meeting). For each new meeting: if it starts after or when the earliest-ending meeting ends, reuse that room (pop + push new end). Otherwise open a new room.
Why not counting overlaps / other? Naively counting the maximum overlap at each point requires O(n²) pairwise checks. A heap or the sorted starts/ends two-pointer sweep gives O(n log n).
Naive approach fails: For each meeting, check how many other meetings overlap it = O(n²). Sort starts and ends separately (two-pointer): O(n log n). Min-heap of end times: O(n log n).
1.Sort by start time. min-heap = [].
2.For each meeting: if heap not empty and heap.top ≤ meeting.start: pop (reuse that room). Push meeting.end.
3.heap.size = total rooms needed.
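A minimal Python sketch of the min-heap approach, assuming `intervals` is a list of `[start, end]` pairs:

```python
import heapq

def min_meeting_rooms(intervals):
    intervals.sort(key=lambda iv: iv[0])  # sort by start time
    ends = []  # min-heap of end times, one entry per room in use
    for start, end in intervals:
        if ends and ends[0] <= start:
            # The earliest-ending meeting is over: reuse that room.
            heapq.heappop(ends)
        heapq.heappush(ends, end)
    return len(ends)  # entries left in the heap = rooms needed
```

For `[[0,30],[5,10],[15,20]]` this returns `2`: the `[5,10]` room frees up in time for `[15,20]`, but `[0,30]` always needs its own room.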
Open on LeetCode ↗
String Character-by-Character Parsing and Manipulation
What It Is
Character-level operations: reverse, parse, count, validate. Often combine with Hash Map for O(1) char lookups.
Dead GiveawayIf input is a string or string[] and you need to transform, parse, or validate → String.
Look For These Words
"reverse words" "valid" "roman" "integer to string" "permutation in string"
Mechanic
1.words = s.split() — splits on any whitespace, trims empties
2.words.reverse()
3.return " ".join(words)
151Reverse WordsMed
Split on spaces, reverse the token array, rejoin — handles multiple spaces.
Spot these in the problem
"reverse the order of the words" "leading/trailing spaces" "multiple spaces between words" "string"
Key Insight: Trim and split on whitespace (handles multiple spaces and leading/trailing spaces), reverse the token list, rejoin with single spaces. The key challenge is handling irregular spacing.
Why not reverse all characters? Reversing all characters gives "eulb si yks eht" — individual words are spelled backward, not just reordered. Need word-level reversal.
Naive approach fails: split(" ") fails on multiple consecutive spaces — produces empty strings in the array. Use split("\\s+") or strip().split() to handle irregular spacing.
1.Split string on whitespace (using split("\\s+") or similar), filtering empty tokens.
2.Reverse the token list/array.
3.Join with a single space " ". In-place variant: reverse entire char array, then reverse each word individually.
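In Python the split/reverse/join steps collapse to one line, since `split()` with no argument splits on runs of whitespace and drops leading/trailing spaces:

```python
def reverse_words(s):
    # split() handles multiple spaces and leading/trailing spaces for free.
    return " ".join(reversed(s.split()))
```

For example, `reverse_words("  the   sky is blue ")` returns `"blue is sky the"`.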
Open on LeetCode ↗
14Longest Common PrefixEasy
Vertical scan: compare index by index across all strings until mismatch.
Spot these in the problem
"longest common prefix" "array of strings" "common beginning" "write a function"
Key Insight: Vertical scan: compare all strings at position 0, then position 1, etc. Stop the instant any string either ends or has a different character. The prefix up to that point is the longest common prefix.
Why not sort first / other? Sorting the array and comparing only the first and last strings also works, but sorting n strings of length L costs O(n·L·log n). Vertical scan is O(n·L) without sorting — both are correct, but vertical scan is more direct.
1.For i from 0 to len(strs[0])-1: for each string s in strs: if i >= len(s) or s[i] != strs[0][i]: return strs[0][:i].
2.If all chars match, return strs[0].
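The vertical scan translates directly to Python:

```python
def longest_common_prefix(strs):
    if not strs:
        return ""
    for i in range(len(strs[0])):
        for s in strs:
            # Stop the instant any string ends or mismatches at column i.
            if i >= len(s) or s[i] != strs[0][i]:
                return strs[0][:i]
    return strs[0]  # every character of strs[0] matched in all strings
```

For example, `longest_common_prefix(["flower", "flow", "flight"])` returns `"fl"`.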
Open on LeetCode ↗
273Integer to EnglishHard
Process in groups of 3 digits right-to-left; append 'Thousand'/'Million' per group.
Spot these in the problem
"integer to English words" "non-negative integer" "Billion/Million/Thousand" "convert to words"
Key Insight: Split the number into chunks of 3 digits from right to left. Each chunk = hundreds + tens/ones, followed by a scale word ("Thousand", "Million", "Billion"). Handle teens (11-19) and tens (20, 30...) as special cases.
Why not giant if-else / other? This is primarily an implementation problem. The key structure is groups of 3 digits with scale words.
Naive approach fails: Giant if-else for every case leads to bugs. The clean approach: one helper for numbers 1-999, applied recursively to each 3-digit group (billions, millions, thousands, remainder), with the scale word appended.
1.Divide number into 3-digit groups.
2.For each non-zero group: convert to words using helper (handles hundreds, tens, ones). Append scale word for the group's position.
3.Handle edge cases: num=0 → "Zero", teens (11-19), leading zeros in groups.
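A Python sketch of the clean approach — one helper for 1-999, applied to each 3-digit group with its scale word:

```python
ONES = ["", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight",
        "Nine", "Ten", "Eleven", "Twelve", "Thirteen", "Fourteen", "Fifteen",
        "Sixteen", "Seventeen", "Eighteen", "Nineteen"]
TENS = ["", "", "Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy",
        "Eighty", "Ninety"]
SCALES = ["", " Thousand", " Million", " Billion"]

def three_digits(n):
    # Words for 1-999; empty string for 0.
    if n == 0:
        return ""
    if n < 20:  # covers the teen special cases via the ONES table
        return ONES[n]
    if n < 100:
        return (TENS[n // 10] + " " + ONES[n % 10]).strip()
    return (ONES[n // 100] + " Hundred " + three_digits(n % 100)).strip()

def number_to_words(num):
    if num == 0:
        return "Zero"
    parts, group = [], 0
    while num:
        if num % 1000:  # skip all-zero groups so no stray scale words appear
            parts.append(three_digits(num % 1000) + SCALES[group])
        num //= 1000
        group += 1
    return " ".join(reversed(parts))
```

For example, `number_to_words(1234567)` returns `"One Million Two Hundred Thirty Four Thousand Five Hundred Sixty Seven"`.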
Open on LeetCode ↗
Graph Traversal