100 LeetCode Problems — Editorial Guide
A companion set of 100 curated LeetCode problems, written in the same editorial style as the main guide. Medium, medium-hard, and hard only — the kind you’d expect in a senior coding round, with extra weight on graphs.
Each problem is presented in three parts:
- Problem Statement — a clean, self-contained specification.
- Solution — the intended approach with a complete C++11 implementation and complexity.
- Discussion — key observations, edge cases, and common pitfalls.
All code is strict C++11: #include <bits/stdc++.h>, using namespace std;, short variable names, minimal ceremony.
1. Number of Islands
LC 200 · Medium · Graphs (grid BFS)
Problem Statement
Given an m × n 2D binary grid grid where each cell is either '1' (land) or '0' (water), return the number of islands.
An island is a maximal group of land cells connected 4-directionally (up, down, left, right). You may assume the four edges of the grid are surrounded by water.
Input. vector<vector<char>> grid. Each cell is the character '0' or '1'.
Output. The number of islands as an int.
Constraints.
1 <= m, n <= 300; grid[i][j] is '0' or '1'.
Example.
grid = [
['1','1','0','0','0'],
['1','1','0','0','0'],
['0','0','1','0','0'],
['0','0','0','1','1']
]
answer = 3
Solution
Scan every cell. When you land on an unvisited '1', run a flood fill that marks the entire component as visited. Each flood corresponds to exactly one island, so the count of floods is the answer.
The flood is written iteratively (BFS with a queue) rather than recursively. A 300×300 all-ones grid is 90,000 connected cells — deep enough to blow the default stack on DFS recursion on some judges. BFS is always safe.
To save memory we mark visited cells by overwriting them with '0' in place. If the interviewer asks you not to mutate the input, swap in a parallel vector<vector<bool>> visited.
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
int numIslands(vector<vector<char>>& grid) {
int m = grid.size();
int n = m ? (int)grid[0].size() : 0;
int islands = 0;
for (int i = 0; i < m; ++i) {
for (int j = 0; j < n; ++j) {
if (grid[i][j] == '1') {
++islands;
flood(grid, i, j, m, n);
}
}
}
return islands;
}
private:
void flood(vector<vector<char>>& g, int si, int sj, int m, int n) {
static const int dr[4] = {-1, 1, 0, 0};
static const int dc[4] = { 0, 0,-1, 1};
queue<pair<int,int> > q;
q.push(make_pair(si, sj));
g[si][sj] = '0'; // mark on enqueue
while (!q.empty()) {
pair<int,int> cur = q.front(); q.pop();
int r = cur.first, c = cur.second;
for (int k = 0; k < 4; ++k) {
int nr = r + dr[k];
int nc = c + dc[k];
if (nr < 0 || nr >= m || nc < 0 || nc >= n) continue;
if (g[nr][nc] != '1') continue;
g[nr][nc] = '0'; // mark *before* pushing
q.push(make_pair(nr, nc));
}
}
}
};
- Time. O(m · n) — every cell is enqueued at most once and processed in constant work.
- Space. O(min(m, n)) for the BFS frontier in the worst case (a sweeping wavefront across a rectangular grid).
Discussion
Three things decide whether the solution is correct and fast:
Mark before enqueue, not after dequeue. If you defer the mark to when a cell is popped, the same cell can be pushed by all four of its neighbors before any of them pops it. The answer stays correct, but the queue can hold up to four entries per cell, quadrupling both work and frontier memory for no benefit. Mark at the moment you add to the queue.
BFS over recursive DFS, by default. An easy mistake is to write a clean four-line recursive dfs(r, c). It passes the small samples and fails on a large dense grid with a stack overflow. Iterative BFS (or iterative DFS with an explicit stack) avoids that failure mode entirely and costs nothing in clarity.
Don’t forget the empty-grid guard. grid.size() == 0 is a legal input on some variants; grid[0].size() then crashes. The m ? grid[0].size() : 0 pattern is cheap insurance.
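If you do want DFS order, the explicit-stack version mentioned above reads almost identically to the BFS flood — a sketch, where the countIslandsDfs wrapper name is illustrative, not LC's signature:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Iterative DFS flood: same marking discipline as the BFS version
// (mark when pushed), only the container changes from queue to stack.
int countIslandsDfs(vector<vector<char>>& g) {
    int m = g.size();
    int n = m ? (int)g[0].size() : 0;
    static const int dr[4] = {-1, 1, 0, 0};
    static const int dc[4] = { 0, 0,-1, 1};
    int islands = 0;
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < n; ++j) {
            if (g[i][j] != '1') continue;
            ++islands;
            stack<pair<int,int> > st;
            st.push(make_pair(i, j));
            g[i][j] = '0'; // mark on push, as with BFS
            while (!st.empty()) {
                pair<int,int> cur = st.top(); st.pop();
                for (int k = 0; k < 4; ++k) {
                    int nr = cur.first + dr[k], nc = cur.second + dc[k];
                    if (nr < 0 || nr >= m || nc < 0 || nc >= n) continue;
                    if (g[nr][nc] != '1') continue;
                    g[nr][nc] = '0';
                    st.push(make_pair(nr, nc));
                }
            }
        }
    return islands;
}
```

The stack's worst-case depth is O(m·n) in heap memory rather than call-stack frames, which is exactly why it dodges the overflow.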
Common generalizations worth mentioning out loud if the interviewer nods:
- 8-directional connectivity — extend dr/dc to eight entries; nothing else changes.
- Component sizes — have flood return the number of cells it coloured, and collect the returns into a vector.
- Largest island — same, but track a running max instead of a vector.
- Multiple queries after edits — switch to Union-Find; each toggle is near-constant amortized, whereas re-running BFS is O(mn) per query.
Union-Find is sometimes proposed as the “main” solution to this problem, but for a single static grid it is strictly more code and strictly no faster than BFS. Prefer it only when the grid evolves (LeetCode 305, Number of Islands II, is the textbook example).
2. Max Area of Island
LC 695 · Medium · Graphs (grid BFS)
Problem Statement
Given an m × n binary grid, return the size (number of cells) of the largest island, where an island is a maximal group of 1-cells connected 4-directionally. If there are no islands, return 0.
Constraints. 1 <= m, n <= 50, cells are 0 or 1.
Example.
grid = [[0,0,1,0],
[1,1,1,0],
[0,1,0,1]]
answer = 4
Solution
Same flood-fill skeleton as Number of Islands, but the flood returns the number of cells it visited. Track the running maximum.
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
int maxAreaOfIsland(vector<vector<int>>& g) {
int m = g.size();
int n = m ? (int)g[0].size() : 0;
int best = 0;
for (int i = 0; i < m; ++i)
for (int j = 0; j < n; ++j)
if (g[i][j] == 1)
best = max(best, flood(g, i, j, m, n));
return best;
}
private:
int flood(vector<vector<int>>& g, int si, int sj, int m, int n) {
static const int dr[4] = {-1,1,0,0};
static const int dc[4] = {0,0,-1,1};
queue<pair<int,int> > q;
q.push(make_pair(si, sj));
g[si][sj] = 0;
int area = 0;
while (!q.empty()) {
pair<int,int> cur = q.front(); q.pop();
++area;
int r = cur.first, c = cur.second;
for (int k = 0; k < 4; ++k) {
int nr = r + dr[k], nc = c + dc[k];
if (nr < 0 || nr >= m || nc < 0 || nc >= n) continue;
if (g[nr][nc] != 1) continue;
g[nr][nc] = 0;
q.push(make_pair(nr, nc));
}
}
return area;
}
};
- Time. O(m·n).
- Space. O(min(m, n)) for the queue.
Discussion
Two small traps compared with the plain count-islands version. First, the answer for an empty grid is 0, not -1 — make sure the running best starts at 0 so the empty case falls out for free. Second, count each cell exactly once: the code above counts on pop, which pairs naturally with mark-on-enqueue. Counting on push also works, provided you remember to count the seed cell when you first enqueue it — forget that and every island comes out one cell short. Everything else carries over: mark before enqueue, use BFS for stack safety, don’t mutate if the interviewer objects.
If the follow-up is “return all island sizes,” swap the running max for a vector<int> and push each area. If it becomes “k-th largest island,” collect sizes and partial-sort or heap the top-k.
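The top-k follow-up can be sketched with nth_element — kthLargestIsland and its 1-indexed contract are illustrative assumptions, not an LC signature:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Given all island sizes, return the k-th largest (1-indexed), or 0 if
// there is no such island. Takes sizes by value because nth_element
// reorders the vector. Average O(n), versus O(n log n) for a full sort.
int kthLargestIsland(vector<int> sizes, int k) {
    if (k < 1 || k > (int)sizes.size()) return 0;
    nth_element(sizes.begin(), sizes.begin() + (k - 1), sizes.end(),
                greater<int>());
    return sizes[k - 1];
}
```

nth_element only guarantees the element at position k-1 is in its sorted place, which is all this question needs.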
3. Rotting Oranges
LC 994 · Medium · Graphs (multi-source BFS)
Problem Statement
You are given an m × n grid where each cell is 0 (empty), 1 (fresh orange), or 2 (rotten orange). Each minute, every rotten orange infects all 4-directionally adjacent fresh oranges. Return the minimum number of minutes until no fresh orange remains, or -1 if it is impossible.
Constraints. 1 <= m, n <= 10, cells are 0, 1, or 2.
Solution
Multi-source BFS. Seed the queue with every rotten cell at minute 0, then expand level by level. Track the count of fresh cells; after BFS, if any fresh cells remain they were unreachable — return -1.
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
int orangesRotting(vector<vector<int>>& g) {
int m = g.size(), n = g[0].size();
queue<pair<int,int> > q;
int fresh = 0;
for (int i = 0; i < m; ++i)
for (int j = 0; j < n; ++j) {
if (g[i][j] == 2) q.push(make_pair(i, j));
else if (g[i][j] == 1) ++fresh;
}
if (fresh == 0) return 0;
static const int dr[4] = {-1,1,0,0};
static const int dc[4] = {0,0,-1,1};
int minutes = 0;
while (!q.empty() && fresh > 0) {
int sz = q.size();
for (int i = 0; i < sz; ++i) {
pair<int,int> cur = q.front(); q.pop();
int r = cur.first, c = cur.second;
for (int k = 0; k < 4; ++k) {
int nr = r + dr[k], nc = c + dc[k];
if (nr < 0 || nr >= m || nc < 0 || nc >= n) continue;
if (g[nr][nc] != 1) continue;
g[nr][nc] = 2;
--fresh;
q.push(make_pair(nr, nc));
}
}
++minutes;
}
return fresh == 0 ? minutes : -1;
}
};
- Time. O(m·n).
- Space. O(m·n) worst case for the queue.
Discussion
Two things to nail: multi-source seeding and level tracking. If you seed only one rotten cell, you compute distance from that source — wrong. Seed all rotten cells before the first step. If you advance the minute counter on every pop, your clock runs too fast; the pattern here is to fix the queue size at the top of each round (int sz = q.size()), process exactly sz items, then tick. Handle the fresh == 0 base case up front so a grid with no fresh oranges returns 0 without ever entering the BFS.
4. Shortest Path in Binary Matrix
LC 1091 · Medium · Graphs (8-dir BFS)
Problem Statement
Given an n × n binary matrix, return the length of the shortest clear path from (0,0) to (n-1,n-1), moving in any of 8 directions, walking only on 0-cells. Return -1 if no such path exists. Path length is the number of cells visited, including the endpoints.
Constraints. 1 <= n <= 100, cells are 0 or 1.
Solution
BFS from (0,0) treating each cell as a node with up to 8 edges. The first time we reach (n-1, n-1), its BFS layer is the answer. If the source or the target is 1, return -1 immediately.
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
int shortestPathBinaryMatrix(vector<vector<int>>& g) {
int n = g.size();
if (g[0][0] != 0 || g[n-1][n-1] != 0) return -1;
static const int dr[8] = {-1,-1,-1,0,0,1,1,1};
static const int dc[8] = {-1,0,1,-1,1,-1,0,1};
queue<pair<int,int> > q;
q.push(make_pair(0, 0));
g[0][0] = 1; // mark visited
int dist = 1;
while (!q.empty()) {
int sz = q.size();
for (int i = 0; i < sz; ++i) {
pair<int,int> cur = q.front(); q.pop();
int r = cur.first, c = cur.second;
if (r == n-1 && c == n-1) return dist;
for (int k = 0; k < 8; ++k) {
int nr = r + dr[k], nc = c + dc[k];
if (nr < 0 || nr >= n || nc < 0 || nc >= n) continue;
if (g[nr][nc] != 0) continue;
g[nr][nc] = 1;
q.push(make_pair(nr, nc));
}
}
++dist;
}
return -1;
}
};
- Time. O(n²) — each cell enqueued once.
- Space. O(n²) worst case for the queue.
Discussion
BFS gives shortest-path-in-an-unweighted-graph “for free,” which is what this problem is: an unweighted 8-connectivity grid. Do not reach for Dijkstra unless edge weights vary. The start/end guard is a common oversight — without it, a blocked start cell gets pushed anyway and the BFS can report a “path” that begins on a wall, and a blocked target forces the loop to explore the entire grid before conceding -1. The single-cell grid (n == 1) is a sneaky base case; the loop above handles it because the first popped cell is already the target.
A* would be faster in theory (Chebyshev distance as heuristic), but for n <= 100 BFS is more than fast enough and has no subtle bugs.
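For completeness, that A* variant might look like the sketch below — same grid conventions, illustrative function name rather than LC's signature. The Chebyshev distance is admissible here because one 8-directional step reduces it by at most 1:

```cpp
#include <bits/stdc++.h>
using namespace std;

// A* on the 8-connected grid. States are ordered by f = g-cost + h,
// where h is the Chebyshev distance to the bottom-right corner.
// dist[][] stores the best cell-count (g-cost) found so far.
int shortestClearPathAStar(const vector<vector<int>>& g) {
    int n = g.size();
    if (g[0][0] != 0 || g[n-1][n-1] != 0) return -1;
    static const int dr[8] = {-1,-1,-1,0,0,1,1,1};
    static const int dc[8] = {-1,0,1,-1,1,-1,0,1};
    vector<vector<int> > dist(n, vector<int>(n, INT_MAX));
    priority_queue<pair<int,pair<int,int> >,
                   vector<pair<int,pair<int,int> > >,
                   greater<pair<int,pair<int,int> > > > pq; // min f first
    dist[0][0] = 1;
    pq.push(make_pair(1 + (n - 1), make_pair(0, 0))); // h(0,0) = n-1
    while (!pq.empty()) {
        pair<int,pair<int,int> > top = pq.top(); pq.pop();
        int r = top.second.first, c = top.second.second;
        if (r == n-1 && c == n-1) return dist[r][c];
        for (int k = 0; k < 8; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= n || nc < 0 || nc >= n) continue;
            if (g[nr][nc] != 0 || dist[nr][nc] <= dist[r][c] + 1) continue;
            dist[nr][nc] = dist[r][c] + 1;
            int h = max(n - 1 - nr, n - 1 - nc); // Chebyshev to goal
            pq.push(make_pair(dist[nr][nc] + h, make_pair(nr, nc)));
        }
    }
    return -1;
}
```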
5. Pacific Atlantic Water Flow
LC 417 · Medium-Hard · Graphs (reverse BFS/DFS)
Problem Statement
Given an m × n matrix of heights, water can flow from a cell to any 4-adjacent neighbor with height <= its own. The Pacific touches the top and left edges; the Atlantic touches the bottom and right edges. Return a list of cells from which water can reach both oceans.
Constraints. 1 <= m, n <= 200, heights are non-negative int.
Solution
Work backwards. From each Pacific-edge cell, walk uphill (next height >= current) marking every cell reachable. Do the same from every Atlantic-edge cell. A cell is in the answer iff it was marked by both sweeps.
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
vector<vector<int>> pacificAtlantic(vector<vector<int>>& h) {
int m = h.size(), n = m ? (int)h[0].size() : 0;
vector<vector<char> > pac(m, vector<char>(n, 0));
vector<vector<char> > atl(m, vector<char>(n, 0));
queue<pair<int,int> > qp, qa;
for (int i = 0; i < m; ++i) {
pac[i][0] = 1; qp.push(make_pair(i, 0));
atl[i][n-1] = 1; qa.push(make_pair(i, n-1));
}
for (int j = 0; j < n; ++j) {
pac[0][j] = 1; qp.push(make_pair(0, j));
atl[m-1][j] = 1; qa.push(make_pair(m-1, j));
}
bfs(h, pac, qp, m, n);
bfs(h, atl, qa, m, n);
vector<vector<int> > out;
for (int i = 0; i < m; ++i)
for (int j = 0; j < n; ++j)
if (pac[i][j] && atl[i][j]) {
vector<int> cell; cell.push_back(i); cell.push_back(j);
out.push_back(cell);
}
return out;
}
private:
void bfs(vector<vector<int>>& h, vector<vector<char>>& mark,
queue<pair<int,int> >& q, int m, int n) {
static const int dr[4] = {-1,1,0,0};
static const int dc[4] = {0,0,-1,1};
while (!q.empty()) {
pair<int,int> cur = q.front(); q.pop();
int r = cur.first, c = cur.second;
for (int k = 0; k < 4; ++k) {
int nr = r + dr[k], nc = c + dc[k];
if (nr < 0 || nr >= m || nc < 0 || nc >= n) continue;
if (mark[nr][nc]) continue;
if (h[nr][nc] < h[r][c]) continue; // walk uphill
mark[nr][nc] = 1;
q.push(make_pair(nr, nc));
}
}
}
};
- Time. O(m·n) — each cell visited at most twice.
- Space. O(m·n) for the two marker grids.
Discussion
The inversion is the whole trick: a direct “from each cell, can I reach the ocean?” BFS is O((m·n)²). Flipping the direction — “from the ocean, which cells can it reach walking uphill” — makes both sweeps linear. Once you see the trick, the rest is bookkeeping: maintain two visited grids, seed each with its own border, intersect at the end. The edge-comparison is >= not >: equal heights are reachable in both directions.
6. Course Schedule
LC 207 · Medium · Graphs (topo sort)
Problem Statement
There are numCourses courses labeled 0..numCourses-1. You are given a list prerequisites where each entry [a, b] means you must take course b before a. Return true iff it is possible to finish all courses (i.e., the dependency graph has no cycle).
Constraints. 1 <= numCourses <= 2000, 0 <= prerequisites.length <= 5000.
Solution
Kahn’s algorithm. Build adjacency list from b -> a and an in-degree array. Seed a queue with all zero-in-degree nodes. Pop, visit, decrement neighbors; any neighbor whose in-degree hits 0 is enqueued. If we pop all numCourses nodes the graph is a DAG.
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
bool canFinish(int n, vector<vector<int>>& pre) {
vector<vector<int> > g(n);
vector<int> ind(n, 0);
for (int i = 0; i < (int)pre.size(); ++i) {
int a = pre[i][0], b = pre[i][1];
g[b].push_back(a);
++ind[a];
}
queue<int> q;
for (int i = 0; i < n; ++i) if (ind[i] == 0) q.push(i);
int done = 0;
while (!q.empty()) {
int u = q.front(); q.pop();
++done;
for (int i = 0; i < (int)g[u].size(); ++i) {
int v = g[u][i];
if (--ind[v] == 0) q.push(v);
}
}
return done == n;
}
};
- Time. O(V + E).
- Space. O(V + E).
Discussion
Kahn beats recursive DFS cycle-detection on every dimension that matters for interviews: no recursion limits, no “3-color” visited state to get wrong, and the same code naturally extends to LC 210 (just record the order as you pop). Two micro-gotchas: build the edge in the direction prereq -> course, not the reverse, and remember the self-loop case ([a, a]) — it leaves a node with ind[a] = 1 that never reaches the queue, so the final count correctly reports failure.
7. Course Schedule II
LC 210 · Medium · Graphs (topo sort)
Problem Statement
Same setup as Course Schedule, but return a valid ordering of all courses if one exists. Return an empty array if it does not.
Constraints. Same as LC 207.
Solution
Identical Kahn skeleton, but record the pop order into an output vector.
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
vector<int> findOrder(int n, vector<vector<int>>& pre) {
vector<vector<int> > g(n);
vector<int> ind(n, 0);
for (int i = 0; i < (int)pre.size(); ++i) {
g[pre[i][1]].push_back(pre[i][0]);
++ind[pre[i][0]];
}
queue<int> q;
for (int i = 0; i < n; ++i) if (ind[i] == 0) q.push(i);
vector<int> order;
while (!q.empty()) {
int u = q.front(); q.pop();
order.push_back(u);
for (int i = 0; i < (int)g[u].size(); ++i) {
int v = g[u][i];
if (--ind[v] == 0) q.push(v);
}
}
if ((int)order.size() != n) return vector<int>();
return order;
}
};
- Time. O(V + E).
- Space. O(V + E).
Discussion
The only substantive difference from LC 207 is the order vector. Return an empty vector on failure (not the partial order). If the interviewer asks for “any valid order,” say so and move on — Kahn doesn’t give a lexicographically smallest order unless you replace queue with a priority_queue. That’s a cheap tweak and worth mentioning.
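That priority_queue tweak, sketched — findSmallestOrder is an illustrative name, not LC's signature:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Kahn's algorithm with a min-heap instead of a FIFO queue: among all
// currently-available courses, always take the smallest label, which
// yields the lexicographically smallest valid ordering.
vector<int> findSmallestOrder(int n, vector<vector<int>>& pre) {
    vector<vector<int> > g(n);
    vector<int> ind(n, 0);
    for (int i = 0; i < (int)pre.size(); ++i) {
        g[pre[i][1]].push_back(pre[i][0]);
        ++ind[pre[i][0]];
    }
    priority_queue<int, vector<int>, greater<int> > pq; // min-heap
    for (int i = 0; i < n; ++i) if (ind[i] == 0) pq.push(i);
    vector<int> order;
    while (!pq.empty()) {
        int u = pq.top(); pq.pop();
        order.push_back(u);
        for (int i = 0; i < (int)g[u].size(); ++i)
            if (--ind[g[u][i]] == 0) pq.push(g[u][i]);
    }
    if ((int)order.size() != n) return vector<int>(); // cycle
    return order;
}
```

The only cost is a log factor on the pop: O((V + E) log V) instead of O(V + E).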
8. Alien Dictionary
LC 269 · Hard · Graphs (topo sort on characters)
Problem Statement
Given a sorted list of words from an alien language using lowercase English letters, return any ordering of the letters consistent with the dictionary order, or "" if the ordering is invalid.
Constraints. 1 <= words.length <= 100, 1 <= words[i].length <= 100.
Solution
Each pair of adjacent words in the sorted list reveals at most one ordering constraint: the first character position at which they differ gives words[i][k] < words[i+1][k]. Collect all such constraints, topo-sort the 26 letters. Special case: if words[i] is a proper prefix of words[i+1] you get no constraint; if words[i+1] is a proper prefix of words[i] the input is invalid.
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
string alienOrder(vector<string>& words) {
vector<unordered_set<int> > g(26);
vector<int> ind(26, -1); // -1 = letter absent
for (int i = 0; i < (int)words.size(); ++i)
for (int j = 0; j < (int)words[i].size(); ++j)
ind[words[i][j] - 'a'] = 0;
for (int i = 0; i + 1 < (int)words.size(); ++i) {
const string& a = words[i];
const string& b = words[i+1];
int L = min(a.size(), b.size());
int k = 0;
while (k < L && a[k] == b[k]) ++k;
if (k == L) {
if (a.size() > b.size()) return ""; // invalid prefix
continue;
}
int u = a[k] - 'a', v = b[k] - 'a';
if (g[u].insert(v).second) ++ind[v];
}
queue<int> q;
for (int c = 0; c < 26; ++c) if (ind[c] == 0) q.push(c);
string out;
while (!q.empty()) {
int u = q.front(); q.pop();
out.push_back('a' + u);
for (unordered_set<int>::iterator it = g[u].begin(); it != g[u].end(); ++it)
if (--ind[*it] == 0) q.push(*it);
}
int present = 0;
for (int c = 0; c < 26; ++c) if (ind[c] != -1) ++present;
return (int)out.size() == present ? out : "";
}
};
- Time. O(C) where C is total input characters. There are at most 26² edges.
- Space. O(1) — the graph is bounded by the alphabet.
Discussion
Three subtleties, all of which have historically failed otherwise-correct solutions: (1) only letters that appear in the input belong in the output — track presence separately from in-degree; (2) the prefix rule ("abc" before "ab") is illegal and must be detected explicitly; and (3) duplicate edges must not double-count in-degree, hence the unordered_set per node and the .insert().second check.
The graph is over letters (at most 26 nodes), not words, so the whole thing is effectively O(C). Mention that complexity up front; interviewers sometimes expect an answer in terms of N (words) and miss that C dominates.
9. Redundant Connection
LC 684 · Medium · Graphs (union-find)
Problem Statement
You are given an undirected graph that started as a tree with n nodes labeled 1..n and had one extra edge added. Return the last edge (in input order) that can be removed so that the remaining graph is still a tree.
Constraints. 3 <= n <= 1000. Input is vector<vector<int>> edges of length n.
Solution
Union-find with path compression. Process edges in order; the first edge whose endpoints are already in the same component is the answer. Since the question asks for the last edge that could be removed and the input has exactly one extra edge, the first edge that closes a cycle is that last edge.
#include <bits/stdc++.h>
using namespace std;
class DSU {
public:
vector<int> p;
DSU(int n): p(n, 0) { for (int i = 0; i < n; ++i) p[i] = i; }
int find(int x) { while (p[x] != x) { p[x] = p[p[x]]; x = p[x]; } return x; }
bool unite(int a, int b) {
int ra = find(a), rb = find(b);
if (ra == rb) return false;
p[ra] = rb;
return true;
}
};
class Solution {
public:
vector<int> findRedundantConnection(vector<vector<int>>& edges) {
int n = edges.size();
DSU d(n + 1);
for (int i = 0; i < n; ++i)
if (!d.unite(edges[i][0], edges[i][1]))
return edges[i];
return vector<int>();
}
};
- Time. O(n · α(n)).
- Space. O(n).
Discussion
The problem statement promises exactly one extra edge, which makes this problem trivial once you know DSU: “find the first edge that closes a cycle, return it.” Without that guarantee you would need to track all cycle-closing edges and return the one appearing latest in the input.
Skip union-by-rank if you want — path compression alone is fast enough for n <= 1000. For the directed variant (LC 685) the problem gets much harder: you need to distinguish “cycle” from “two parents” and handle both together.
10. Accounts Merge
LC 721 · Medium-Hard · Graphs (union-find on strings)
Problem Statement
Each element of accounts is a list [name, e1, e2, ...] — a person’s name followed by their emails. Two accounts belong to the same person iff they share at least one email (names may clash and are not sufficient). Merge all accounts that belong to the same person, outputting each merged account as [name, sorted emails...]. Output order doesn’t matter.
Constraints. 1 <= accounts.length <= 1000, total emails <= 10⁴.
Solution
Treat each account index as a DSU node. Walk every account and for each email, either record “this email belongs to account i” or, if the email already belongs to some account j, union i with j. At the end, group emails by root, sort each group, and emit.
#include <bits/stdc++.h>
using namespace std;
class DSU {
public:
vector<int> p;
DSU(int n): p(n, 0) { for (int i = 0; i < n; ++i) p[i] = i; }
int find(int x) { while (p[x] != x) { p[x] = p[p[x]]; x = p[x]; } return x; }
void unite(int a, int b) {
int ra = find(a), rb = find(b);
if (ra != rb) p[ra] = rb;
}
};
class Solution {
public:
vector<vector<string>> accountsMerge(vector<vector<string>>& acc) {
int n = acc.size();
DSU d(n);
unordered_map<string, int> owner; // email -> account idx
for (int i = 0; i < n; ++i) {
for (int j = 1; j < (int)acc[i].size(); ++j) {
const string& e = acc[i][j];
unordered_map<string,int>::iterator it = owner.find(e);
if (it == owner.end()) owner[e] = i;
else d.unite(i, it->second);
}
}
unordered_map<int, vector<string> > bucket;
for (unordered_map<string,int>::iterator it = owner.begin(); it != owner.end(); ++it)
bucket[d.find(it->second)].push_back(it->first);
vector<vector<string> > out;
for (unordered_map<int, vector<string> >::iterator it = bucket.begin(); it != bucket.end(); ++it) {
sort(it->second.begin(), it->second.end());
vector<string> row;
row.push_back(acc[it->first][0]);
for (int k = 0; k < (int)it->second.size(); ++k) row.push_back(it->second[k]);
out.push_back(row);
}
return out;
}
};
- Time. O(E · α(n) + E log E) where E is total emails; sorting dominates.
- Space. O(E).
Discussion
The natural but wrong approach is to union emails directly — which needs string-keyed DSU and gets messy fast. Unioning account indices keeps the DSU tiny and lets you do one pass. The email-to-owner map does double duty: it both merges accounts that share an email and lets you group emails back by account at the end.
Names are attached per-account, not per-email, so after merging pick any representative account’s name (all accounts in a group have the same name by problem guarantee). Sort emails lexicographically; case matters — LC’s judge is case-sensitive.
11. Most Stones Removed with Same Row or Column
LC 947 · Medium · Graphs / Union-Find
Problem Statement
You’re given n stones at integer coordinates on a 2D plane, no two at the same cell. A stone can be removed if it shares a row or column with another stone that has not yet been removed. Return the maximum number of stones that can be removed.
Constraints. 1 ≤ n ≤ 1000; coordinates are in [0, 10^4].
Solution
Two stones that share a row or column are in the same connected component. From any component of size k you can always remove k-1 stones (leave one behind as the “anchor”). So the answer is n − (#components).
Union stones that share a row or a column. Rather than unioning stones pairwise (O(n²)), union each stone with a virtual “row node” and a virtual “column node”; stones that touch the same row/column end up in one set. Offset column ids by 10001 to avoid collisions with row ids.
#include <bits/stdc++.h>
using namespace std;
struct DSU {
vector<int> p, r;
DSU(int n) : p(n), r(n, 0) { for (int i = 0; i < n; ++i) p[i] = i; }
int find(int x) { while (p[x] != x) { p[x] = p[p[x]]; x = p[x]; } return x; }
bool unite(int a, int b) {
a = find(a); b = find(b);
if (a == b) return false;
if (r[a] < r[b]) swap(a, b);
p[b] = a;
if (r[a] == r[b]) ++r[a];
return true;
}
};
class Solution {
public:
int removeStones(vector<vector<int> >& stones) {
const int OFF = 10001;
DSU dsu(2 * OFF);
set<int> seen;
for (size_t i = 0; i < stones.size(); ++i) {
int r = stones[i][0];
int c = stones[i][1] + OFF;
dsu.unite(r, c);
seen.insert(r);
seen.insert(c);
}
set<int> roots;
for (set<int>::iterator it = seen.begin(); it != seen.end(); ++it)
roots.insert(dsu.find(*it));
return (int)stones.size() - (int)roots.size();
}
};
- Time. O(n · α(n)).
- Space. O(R + C) for the DSU and seen set.
Discussion
The key reframing is answer = n − components. Once you see that, the only remaining question is how to union efficiently. The “row node / column node” trick is worth internalizing: any time two things are related through a shared attribute, unioning with a virtual node for that attribute avoids quadratic pairwise unions.
Offsetting column ids by a constant larger than the max row is a cheap way to put rows and columns in the same DSU namespace. You could also use two separate DSUs, or a map from (kind, value) pairs to dense DSU indices, but the offset trick is fastest.
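The map alternative can be sketched as a lazy id pool — IdPool and getId are illustrative names, assumed here for illustration only:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Lazily map (kind, value) attribute keys — e.g. kind 0 = row,
// kind 1 = column — to dense DSU indices. No offset constant needed,
// and coordinates may be arbitrarily large or negative.
struct IdPool {
    map<pair<int,int>, int> id;
    int getId(int kind, int value) {
        pair<int,int> key = make_pair(kind, value);
        map<pair<int,int>, int>::iterator it = id.find(key);
        if (it != id.end()) return it->second;
        int nid = (int)id.size(); // next unused dense index
        id[key] = nid;
        return nid;
    }
};
```

You would then size the DSU to the number of distinct ids handed out, trading the offset trick's O(1) addressing for an O(log n) map lookup per stone.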
Only count roots reachable from stones you actually inserted — iterating the full DSU would overcount isolated nodes.
12. Number of Connected Components in an Undirected Graph
LC 323 · Medium · Graphs / Union-Find
Problem Statement
Given n nodes labeled 0..n-1 and an edge list, return the number of connected components.
Constraints. 1 ≤ n ≤ 2000; edges are unique and undirected.
Solution
Classic DSU: start with n components, decrement every time unite actually merges two different roots.
class Solution {
public:
int countComponents(int n, vector<vector<int> >& edges) {
vector<int> p(n), r(n, 0);
for (int i = 0; i < n; ++i) p[i] = i;
int comps = n;
for (size_t i = 0; i < edges.size(); ++i) {
int a = edges[i][0], b = edges[i][1];
while (p[a] != a) { p[a] = p[p[a]]; a = p[a]; }
while (p[b] != b) { p[b] = p[p[b]]; b = p[b]; }
if (a == b) continue;
if (r[a] < r[b]) swap(a, b);
p[b] = a;
if (r[a] == r[b]) ++r[a];
--comps;
}
return comps;
}
};
- Time. O((n + m) · α(n)).
- Space. O(n).
Discussion
DSU shines here because edges arrive in arbitrary order and you never need to traverse the graph. The alternative (BFS/DFS from each unvisited node) works and is arguably simpler if you already have an adjacency list, but DSU is strictly better when edges stream in or when you care about incremental connectivity queries.
Path compression (p[a] = p[p[a]]) plus union-by-rank is what makes this effectively linear. Omit either and you can be tricked into O(n log n) or worse on pathological inputs.
The while loops here are inlined find; this avoids the function call and shaves a few cycles for tight graph problems, but writing a DSU struct (as in problem 11) is more readable once you have more than one call site.
13. Network Delay Time
LC 743 · Medium · Graphs / Dijkstra
Problem Statement
You’re given a directed weighted graph on n nodes (labeled 1..n) and a source k. Each edge (u, v, w) means a signal from u reaches v after time w. Return the time for every node to receive the signal from k, or -1 if any node is unreachable.
Constraints. 1 ≤ n ≤ 100; 1 ≤ edges.length ≤ 6000; 1 ≤ w ≤ 100.
Solution
The answer is max(dist[v]) over all v, where dist is the shortest-path distance from k. All weights are positive, so Dijkstra is the right tool.
Standard lazy Dijkstra: min-heap of (dist, node), pop smallest, skip if stale, relax outgoing edges.
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
int networkDelayTime(vector<vector<int> >& times, int n, int k) {
vector<vector<pair<int,int> > > adj(n + 1);
for (size_t i = 0; i < times.size(); ++i)
adj[times[i][0]].push_back(make_pair(times[i][1], times[i][2]));
const int INF = INT_MAX;
vector<int> dist(n + 1, INF);
dist[k] = 0;
priority_queue<pair<int,int>, vector<pair<int,int> >, greater<pair<int,int> > > pq;
pq.push(make_pair(0, k));
while (!pq.empty()) {
pair<int,int> top = pq.top(); pq.pop();
int d = top.first, u = top.second;
if (d > dist[u]) continue; // stale
for (size_t i = 0; i < adj[u].size(); ++i) {
int v = adj[u][i].first;
int w = adj[u][i].second;
if (d + w < dist[v]) {
dist[v] = d + w;
pq.push(make_pair(dist[v], v));
}
}
}
int ans = 0;
for (int v = 1; v <= n; ++v) {
if (dist[v] == INF) return -1;
if (dist[v] > ans) ans = dist[v];
}
return ans;
}
};
- Time. O((V + E) log V).
- Space. O(V + E).
Discussion
The “signal reaches every node” framing is a disguise — once you see it as single-source shortest paths on a positive-weight graph, Dijkstra is the canonical answer. If any dist[v] stays INF, that node is unreachable and the answer is -1.
The stale-entry check (if (d > dist[u]) continue) is what makes lazy Dijkstra work without a decrease-key operation. You can push the same node multiple times, but you’ll only process it at its best distance; later pops are skipped cheaply.
For dense graphs (E = Θ(V²)), an O(V²) Dijkstra without a heap is actually faster. For LC-scale inputs the heap version is fine and generalizes better.
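That heap-free O(V²) variant, sketched against an adjacency matrix — dijkstraDense and the INT_MAX/2 sentinel are illustrative choices, not part of the LC problem:

```cpp
#include <bits/stdc++.h>
using namespace std;

// O(V^2) Dijkstra: each round, linearly scan for the closest unfinished
// node instead of popping a heap. Wins when E = Theta(V^2) because it
// drops the log factor and the per-edge heap pushes.
// w[u][v] = edge weight, or INT_MAX/2 if the edge is absent.
vector<int> dijkstraDense(const vector<vector<int> >& w, int src) {
    int n = w.size();
    const int INF = INT_MAX / 2; // halved so INF + weight can't overflow
    vector<int> dist(n, INF);
    vector<char> done(n, 0);
    dist[src] = 0;
    for (int it = 0; it < n; ++it) {
        int u = -1;
        for (int v = 0; v < n; ++v)
            if (!done[v] && (u == -1 || dist[v] < dist[u])) u = v;
        if (dist[u] == INF) break; // remaining nodes unreachable
        done[u] = 1;
        for (int v = 0; v < n; ++v)
            if (dist[u] + w[u][v] < dist[v]) dist[v] = dist[u] + w[u][v];
    }
    return dist;
}
```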
14. Cheapest Flights Within K Stops
LC 787 · Medium-Hard · Graphs / Bellman-Ford
Problem Statement
You’re given n cities, a list of flights [u, v, price], a source src, destination dst, and a limit k. Return the cheapest price from src to dst with at most k stops (i.e. at most k + 1 edges), or -1 if unreachable.
Constraints. 1 ≤ n ≤ 100; 0 ≤ flights.length ≤ n*(n-1)/2; 0 ≤ k < n.
Solution
The stop limit makes plain Dijkstra unsafe: the cheapest path might exceed k stops, and a dearer path with fewer stops could be the real answer. Bellman-Ford relaxed exactly k + 1 times is a clean fit — each round corresponds to “use one more edge.”
Relax from a snapshot of distances so a single round can’t chain multiple edges.
class Solution {
public:
int findCheapestPrice(int n, vector<vector<int> >& flights, int src, int dst, int k) {
const int INF = INT_MAX / 2;
vector<int> dist(n, INF);
dist[src] = 0;
for (int round = 0; round <= k; ++round) {
vector<int> snap = dist;
for (size_t i = 0; i < flights.size(); ++i) {
int u = flights[i][0], v = flights[i][1], w = flights[i][2];
if (snap[u] == INF) continue;
if (snap[u] + w < dist[v]) dist[v] = snap[u] + w;
}
}
return dist[dst] >= INF ? -1 : dist[dst];
}
};
- Time. O(k · E).
- Space. O(n).
Discussion
Using a snapshot per round is the crucial detail. If you relax in-place, an edge a → b relaxed early in the round lets a later edge b → c use the freshly-updated dist[b], effectively using two edges in one round and breaking the “at most k + 1 edges” guarantee.
The Dijkstra alternative is to push (cost, node, stops) into the heap and only expand states with stops ≤ k. It works but is fiddlier and uses more memory. Bellman-Ford’s “relax k+1 times” is dead simple and perfectly matches the problem statement.
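That stateful-Dijkstra alternative, sketched for comparison — illustrative name, and not the intended solution here:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Dijkstra over (cost, node, edges used) states. States pop in cost
// order, so the first pop of dst is the cheapest feasible price.
// Pruning: a state is dominated if some earlier (hence cheaper) pop
// reached the same node using no more edges.
int cheapestWithinKStops(int n, vector<vector<int> >& flights,
                         int src, int dst, int k) {
    vector<vector<pair<int,int> > > adj(n);
    for (int i = 0; i < (int)flights.size(); ++i)
        adj[flights[i][0]].push_back(
            make_pair(flights[i][1], flights[i][2]));
    // state = {cost, node, edges used}; lexicographic min-heap => by cost
    priority_queue<vector<int>, vector<vector<int> >,
                   greater<vector<int> > > pq;
    vector<int> bestUsed(n, INT_MAX); // fewest edges used to reach node
    vector<int> s0; s0.push_back(0); s0.push_back(src); s0.push_back(0);
    pq.push(s0);
    while (!pq.empty()) {
        vector<int> cur = pq.top(); pq.pop();
        int cost = cur[0], u = cur[1], used = cur[2];
        if (u == dst) return cost;
        if (used > k || used >= bestUsed[u]) continue; // over limit / dominated
        bestUsed[u] = used;
        for (int i = 0; i < (int)adj[u].size(); ++i) {
            vector<int> nxt;
            nxt.push_back(cost + adj[u][i].second);
            nxt.push_back(adj[u][i].first);
            nxt.push_back(used + 1);
            pq.push(nxt);
        }
    }
    return -1;
}
```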
Watch the off-by-one: k stops means k+1 edges, so loop round from 0..k inclusive.
15. Path With Maximum Probability
LC 1514 · Medium-Hard · Graphs / Dijkstra variant
Problem Statement
You have n nodes and undirected edges with success probabilities in [0, 1]. Return the maximum probability of reaching end from start, or 0 if unreachable.
Constraints. 2 ≤ n ≤ 10^4; edges up to 2·10^4.
Solution
Shortest path in the log-sense: maximize a product of probabilities. Replace the min-heap with a max-heap and the “smaller is better” check with “larger is better,” and Dijkstra still works because probabilities are in [0,1] (so any edge makes the path less likely, preserving monotonicity just like non-negative edge weights).
class Solution {
public:
double maxProbability(int n, vector<vector<int> >& edges, vector<double>& prob,
int start, int end) {
vector<vector<pair<int,double> > > adj(n);
for (size_t i = 0; i < edges.size(); ++i) {
int u = edges[i][0], v = edges[i][1];
adj[u].push_back(make_pair(v, prob[i]));
adj[v].push_back(make_pair(u, prob[i]));
}
vector<double> best(n, 0.0);
best[start] = 1.0;
priority_queue<pair<double,int> > pq; // max-heap by default
pq.push(make_pair(1.0, start));
while (!pq.empty()) {
pair<double,int> top = pq.top(); pq.pop();
double p = top.first;
int u = top.second;
if (u == end) return p;
if (p < best[u]) continue;
for (size_t i = 0; i < adj[u].size(); ++i) {
int v = adj[u][i].first;
double np = p * adj[u][i].second;
if (np > best[v]) {
best[v] = np;
pq.push(make_pair(np, v));
}
}
}
return 0.0;
}
};

- Time. O((V + E) log V).
- Space. O(V + E).
Discussion
Dijkstra requires an “edge cost that can only make things worse” — for shortest paths that’s “non-negative weights”; for max probability that’s “factors in [0,1]”. If any probability could exceed 1, the greedy argument breaks and you’d need Bellman-Ford or SPFA.
Early termination when you pop end is safe because at that point p is already the best possible probability for end. Without it, the algorithm still works but does more work than necessary.
Taking −log(p) turns the problem into ordinary shortest paths with positive weights, but working in probability space directly is simpler and avoids floating-point loss from repeated log/exp.
16. Path With Minimum Effort
LC 1631 · Medium-Hard · Graphs / Dijkstra variant
Problem Statement
You’re given an m×n heights grid. A path’s effort is the maximum absolute height difference between consecutive cells along the path. Return the minimum effort to go from (0,0) to (m-1,n-1).
Constraints. 1 ≤ m,n ≤ 100; heights fit in int.
Solution
This is a “minimax path” problem — minimize the largest edge weight on the path. Dijkstra still works, but the relaxation is new_cost = max(cost[u], |h[u] − h[v]|) instead of cost[u] + w.
class Solution {
public:
int minimumEffortPath(vector<vector<int> >& h) {
int m = h.size();
int n = h[0].size();
vector<vector<int> > eff(m, vector<int>(n, INT_MAX));
eff[0][0] = 0;
priority_queue<pair<int,int>, vector<pair<int,int> >, greater<pair<int,int> > > pq;
pq.push(make_pair(0, 0)); // (effort, i*n+j)
int dr[4] = {-1, 1, 0, 0};
int dc[4] = { 0, 0,-1, 1};
while (!pq.empty()) {
pair<int,int> top = pq.top(); pq.pop();
int e = top.first, code = top.second;
int i = code / n, j = code % n;
if (i == m - 1 && j == n - 1) return e;
if (e > eff[i][j]) continue;
for (int d = 0; d < 4; ++d) {
int ni = i + dr[d], nj = j + dc[d];
if (ni < 0 || nj < 0 || ni >= m || nj >= n) continue;
int ne = max(e, abs(h[ni][nj] - h[i][j]));
if (ne < eff[ni][nj]) {
eff[ni][nj] = ne;
pq.push(make_pair(ne, ni * n + nj));
}
}
}
return 0;
}
};

- Time. O(mn log(mn)).
- Space. O(mn).
Discussion
The max instead of + in the relaxation is the only change from vanilla Dijkstra, and the correctness proof still holds: the “cost” of a path is monotone in the set of edges used, so once you pop (e, v) you know e is the best effort to reach v.
Alternative approaches: binary search on effort + BFS/DFS (“can I reach the end using only edges with |Δh| ≤ mid?”) — same asymptotics, often faster in practice; or union-find over edges sorted by weight — elegant, useful when you need all pairs.
Encoding the cell as i * n + j avoids pair overhead in the heap. Minor but meaningful on hot inner loops.
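The binary-search alternative mentioned above can be sketched like this (minEffortBinary and reachable are our names; a sketch, not the primary solution): binary search the smallest limit for which a BFS restricted to edges with |Δh| ≤ limit reaches the far corner.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Can we reach (m-1, n-1) using only steps with |Δh| <= limit?
bool reachable(vector<vector<int> >& h, int limit) {
    int m = h.size(), n = h[0].size();
    vector<vector<char> > seen(m, vector<char>(n, 0));
    queue<pair<int,int> > q;
    q.push(make_pair(0, 0));
    seen[0][0] = 1;
    int dr[4] = {-1, 1, 0, 0}, dc[4] = {0, 0, -1, 1};
    while (!q.empty()) {
        int i = q.front().first, j = q.front().second; q.pop();
        if (i == m - 1 && j == n - 1) return true;
        for (int d = 0; d < 4; ++d) {
            int ni = i + dr[d], nj = j + dc[d];
            if (ni < 0 || nj < 0 || ni >= m || nj >= n) continue;
            if (seen[ni][nj] || abs(h[ni][nj] - h[i][j]) > limit) continue;
            seen[ni][nj] = 1;
            q.push(make_pair(ni, nj));
        }
    }
    return false;
}

// Binary search the answer: smallest limit with reachable == true.
int minEffortBinary(vector<vector<int> >& h) {
    int lo = 0, hi = 0;
    for (size_t i = 0; i < h.size(); ++i)
        for (size_t j = 0; j < h[i].size(); ++j)
            hi = max(hi, h[i][j]);
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (reachable(h, mid)) hi = mid;
        else lo = mid + 1;
    }
    return lo;
}
```

Reachability is monotone in the limit (raising it only adds edges), which is what makes the binary search valid.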
17. Min Cost to Connect All Points
LC 1584 · Medium-Hard · Graphs / MST
Problem Statement
Given n points on a 2D plane, the cost to connect two points is their Manhattan distance. Return the minimum cost to connect all points into one component.
Constraints. 1 ≤ n ≤ 1000.
Solution
Textbook minimum spanning tree. The graph is dense (n² implicit edges), so Prim’s algorithm with an O(n²) array beats heap-based Prim or Kruskal:
- Track minEdge[v] = the cheapest edge from the current tree to v.
- Each step, pick the unused v with the smallest minEdge[v], add it to the tree, then update minEdge[u] for every remaining u.
class Solution {
public:
int minCostConnectPoints(vector<vector<int> >& pts) {
int n = pts.size();
vector<int> best(n, INT_MAX);
vector<char> inTree(n, 0);
best[0] = 0;
int total = 0;
for (int k = 0; k < n; ++k) {
int u = -1, bu = INT_MAX;
for (int v = 0; v < n; ++v)
if (!inTree[v] && best[v] < bu) { bu = best[v]; u = v; }
inTree[u] = 1;
total += bu;
for (int v = 0; v < n; ++v) {
if (inTree[v]) continue;
int d = abs(pts[u][0] - pts[v][0]) + abs(pts[u][1] - pts[v][1]);
if (d < best[v]) best[v] = d;
}
}
return total;
}
};

- Time. O(n²).
- Space. O(n).
Discussion
For dense graphs, the array-based Prim trades log n factors away entirely. Heap-Prim is O(E log V) = O(n² log n); Kruskal is O(n² log n²) from sorting. Array-Prim at O(n²) wins decisively at n = 1000.
Kruskal with DSU is more natural when edges are given explicitly. Here the edges are implicit (all pairs), so materializing them just to sort is wasteful — Prim lets you discover edge weights on demand.
Manhattan MST admits a clever O(n log n) algorithm using sweep + BIT, but for n ≤ 1000 it’s overkill; the O(n²) version is simpler and fast enough.
18. Graph Valid Tree
LC 261 · Medium · Graphs / Union-Find
Problem Statement
Given n nodes and an edge list, determine whether the edges form a valid tree: connected and acyclic.
Constraints. 1 ≤ n ≤ 2000.
Solution
A tree on n nodes has exactly n − 1 edges and no cycles. Check the edge count first, then run DSU: if any edge connects two nodes already in the same component, there’s a cycle.
class Solution {
public:
bool validTree(int n, vector<vector<int> >& edges) {
if ((int)edges.size() != n - 1) return false;
vector<int> p(n), r(n, 0);
for (int i = 0; i < n; ++i) p[i] = i;
for (size_t i = 0; i < edges.size(); ++i) {
int a = edges[i][0], b = edges[i][1];
while (p[a] != a) { p[a] = p[p[a]]; a = p[a]; }
while (p[b] != b) { p[b] = p[p[b]]; b = p[b]; }
if (a == b) return false; // cycle
if (r[a] < r[b]) swap(a, b);
p[b] = a;
if (r[a] == r[b]) ++r[a];
}
return true;
}
};

- Time. O(n · α(n)).
- Space. O(n).
Discussion
Two properties characterize a tree on n nodes: exactly n − 1 edges, and connectivity. A forest with c connected components has exactly n − c edges, so n − 1 acyclic edges force c = 1 — connectivity comes free. Conversely, n − 1 edges with a cycle must leave some node disconnected: the cycle spends an edge that connectivity needed.
So the combined check “edges.size() == n − 1 AND no cycle” is sufficient for “valid tree.” No need to separately verify connectivity.
DFS/BFS also works: start at node 0, visit everything, check you didn’t revisit via a non-parent edge, and confirm you hit all n nodes. DSU is a bit shorter and doesn’t require building an adjacency list.
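For comparison, the BFS variant from the discussion might look like this (validTreeBFS is our name, a sketch): with the n − 1 edge count checked up front, reaching all n nodes from node 0 is equivalent to being a tree.

```cpp
#include <bits/stdc++.h>
using namespace std;

// BFS check: n-1 edges plus full reachability from node 0 implies a tree.
bool validTreeBFS(int n, vector<vector<int> >& edges) {
    if ((int)edges.size() != n - 1) return false;
    vector<vector<int> > adj(n);
    for (size_t i = 0; i < edges.size(); ++i) {
        adj[edges[i][0]].push_back(edges[i][1]);
        adj[edges[i][1]].push_back(edges[i][0]);
    }
    vector<char> seen(n, 0);
    queue<int> q;
    q.push(0);
    seen[0] = 1;
    int cnt = 1;                         // nodes reached so far
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (size_t i = 0; i < adj[u].size(); ++i) {
            int v = adj[u][i];
            if (!seen[v]) { seen[v] = 1; ++cnt; q.push(v); }
        }
    }
    return cnt == n;
}
```

Note there is no explicit cycle check: with exactly n − 1 edges, a cycle forces disconnection, which the count catches.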
19. Minimum Height Trees
LC 310 · Medium-Hard · Graphs / Topological trimming
Problem Statement
A tree is given as n nodes and n − 1 edges. A root yields a rooted tree of some height. Return all roots that minimize the tree’s height (there are at most 2).
Constraints. 1 ≤ n ≤ 2·10^4.
Solution
The minimum-height roots are the centroids of the tree (at most two). Peel off leaves in BFS layers until 1 or 2 nodes remain — those are the centers.
class Solution {
public:
vector<int> findMinHeightTrees(int n, vector<vector<int> >& edges) {
if (n == 1) return vector<int>(1, 0);
vector<vector<int> > adj(n);
vector<int> deg(n, 0);
for (size_t i = 0; i < edges.size(); ++i) {
int a = edges[i][0], b = edges[i][1];
adj[a].push_back(b);
adj[b].push_back(a);
++deg[a]; ++deg[b];
}
queue<int> leaves;
for (int v = 0; v < n; ++v)
if (deg[v] == 1) leaves.push(v);
int remaining = n;
while (remaining > 2) {
int sz = leaves.size();
remaining -= sz;
while (sz--) {
int u = leaves.front(); leaves.pop();
for (size_t i = 0; i < adj[u].size(); ++i) {
int v = adj[u][i];
if (--deg[v] == 1) leaves.push(v);
}
}
}
vector<int> ans;
while (!leaves.empty()) { ans.push_back(leaves.front()); leaves.pop(); }
return ans;
}
};- Time.
O(n). - Space.
O(n).
Discussion
Why are there at most two centers? If you pick any longest path in the tree (diameter), its midpoint is a center. An even-length diameter has one middle vertex; odd-length has two.
The leaf-peeling process is equivalent to BFS from all leaves at once; every peeled round removes one level of depth from every longest path. When ≤ 2 nodes remain, you’ve hit the middle.
The naive alternative — root at each node and BFS for depth — is O(n²) and TLEs at n = 2·10⁴. Leaf-peeling is the standard trick for any “center of a tree” problem.
20. Reconstruct Itinerary
LC 332 · Hard · Graphs / Eulerian path
Problem Statement
Given a list of airline tickets [from, to], reconstruct the itinerary starting from "JFK" using every ticket exactly once. If multiple valid itineraries exist, return the lexicographically smallest one.
Constraints. 1 ≤ tickets.length ≤ 300.
Solution
Every ticket must be used exactly once — that’s an Eulerian path on a directed multigraph. Hierholzer’s algorithm, with a twist: to get the lexicographically smallest path, always take the smallest outgoing edge first. Store outgoing edges in a per-airport min-heap.
class Solution {
public:
vector<string> findItinerary(vector<vector<string> >& tickets) {
map<string, priority_queue<string, vector<string>, greater<string> > > g;
for (size_t i = 0; i < tickets.size(); ++i)
g[tickets[i][0]].push(tickets[i][1]);
vector<string> route;
stack<string> st;
st.push("JFK");
while (!st.empty()) {
string u = st.top();
if (!g[u].empty()) {
string v = g[u].top();
g[u].pop();
st.push(v);
} else {
route.push_back(u);
st.pop();
}
}
reverse(route.begin(), route.end());
return route;
}
};

- Time. O(E log E) from heap operations.
- Space. O(V + E).
Discussion
Naive DFS backtracking tries every lexicographic path and gets exponential worst cases. Hierholzer’s insight is that you can commit to edges greedily without backtracking — when you hit a dead end, push the current node to the output, and the post-order reverse gives a valid Eulerian path.
The heap per node guarantees you always descend into the smallest available destination first. A multiset<string> plus begin()/erase is equivalent and slightly more flexible (supports iteration) but a min-heap is the tightest fit.
Proving correctness: LC guarantees at least one valid itinerary exists, so the multigraph has an Eulerian path starting at JFK. In the iterative DFS, the first node to run out of tickets is the path’s true endpoint; each later dead end closes a loop that gets spliced in, and reversing the post-order yields the full Eulerian path.
21. Bus Routes
LC 815 · Hard · Graphs / BFS on meta-graph
Problem Statement
You have a list of bus routes; routes[i] is the sequence of stops bus i visits in a loop. You start at stop source and want to reach target. Return the minimum number of buses (not stops) to take, or -1 if impossible.
Constraints. 1 ≤ routes.length ≤ 500; total stops ≤ 10^5.
Solution
The natural graph — nodes = stops, edges = adjacent stops on a route — is wrong because transferring buses is free but traversing stops is not. The right graph has buses as nodes: two buses are connected if they share a stop. BFS from every bus that visits source to any bus that visits target.
class Solution {
public:
int numBusesToDestination(vector<vector<int> >& routes, int source, int target) {
if (source == target) return 0;
unordered_map<int, vector<int> > stopToBuses;
int B = routes.size();
for (int b = 0; b < B; ++b)
for (size_t j = 0; j < routes[b].size(); ++j)
stopToBuses[routes[b][j]].push_back(b);
if (stopToBuses.find(source) == stopToBuses.end()) return -1;
vector<char> seenBus(B, 0);
unordered_set<int> seenStop;
queue<int> q;
vector<int>& starts = stopToBuses[source];
for (size_t i = 0; i < starts.size(); ++i) {
q.push(starts[i]);
seenBus[starts[i]] = 1;
}
int buses = 1;
while (!q.empty()) {
int sz = q.size();
while (sz--) {
int b = q.front(); q.pop();
for (size_t j = 0; j < routes[b].size(); ++j) {
int stop = routes[b][j];
if (stop == target) return buses;
if (seenStop.insert(stop).second) {
vector<int>& nexts = stopToBuses[stop];
for (size_t k = 0; k < nexts.size(); ++k) {
if (!seenBus[nexts[k]]) {
seenBus[nexts[k]] = 1;
q.push(nexts[k]);
}
}
}
}
}
++buses;
}
return -1;
}
};

- Time. O(S + Σ|routes[i]|), where S is the number of distinct stops.
- Space. Same.
Discussion
Defining the graph correctly is 80% of the work. “Buses as nodes” makes the BFS layer count directly correspond to “number of buses taken,” which is what the problem asks for. If you try “stops as nodes,” you’d have to weight edges by bus changes, which is messier.
Two seen sets are needed: seenStop (so you don’t expand the same stop twice) and seenBus (so you don’t re-queue a bus). Omit either and you’ll TLE on adversarial inputs with repeated stops.
Edge case: source == target returns 0 (you take zero buses). Missing this gives wrong answers on the trivial case.
22. Swim in Rising Water
LC 778 · Hard · Graphs / Dijkstra variant (or binary search + BFS)
Problem Statement
You’re given an n×n grid where grid[i][j] is the elevation at (i,j). At time t, you can move between adjacent cells a, b iff max(grid[a], grid[b]) ≤ t. Return the minimum t to travel from (0,0) to (n-1,n-1).
Constraints. n ≤ 50; distinct elevations in [0, n²−1].
Solution
Minimax path again: minimize the maximum elevation along the path. Dijkstra with cost = max(cost[u], grid[v]).
class Solution {
public:
int swimInWater(vector<vector<int> >& g) {
int n = g.size();
vector<vector<int> > best(n, vector<int>(n, INT_MAX));
best[0][0] = g[0][0];
priority_queue<pair<int,int>, vector<pair<int,int> >, greater<pair<int,int> > > pq;
pq.push(make_pair(g[0][0], 0));
int dr[4] = {-1, 1, 0, 0};
int dc[4] = { 0, 0,-1, 1};
while (!pq.empty()) {
pair<int,int> top = pq.top(); pq.pop();
int t = top.first, code = top.second;
int i = code / n, j = code % n;
if (i == n - 1 && j == n - 1) return t;
if (t > best[i][j]) continue;
for (int d = 0; d < 4; ++d) {
int ni = i + dr[d], nj = j + dc[d];
if (ni < 0 || nj < 0 || ni >= n || nj >= n) continue;
int nt = max(t, g[ni][nj]);
if (nt < best[ni][nj]) {
best[ni][nj] = nt;
pq.push(make_pair(nt, ni * n + nj));
}
}
}
return -1;
}
};

- Time. O(n² log n).
- Space. O(n²).
Discussion
Same minimax relaxation as problem 16 (Min Effort), just with a different combine rule: max(cost, grid[v]) instead of max(cost, |h[u]−h[v]|). Once you recognize the pattern, you can apply it mechanically.
Alternative: binary search on t + BFS. Search space is [0, n²−1]; for each candidate t, BFS to see if the target is reachable using only cells ≤ t. This is O(n² log n²) = O(n² log n) — same class, often faster in practice due to simpler inner loop.
Another alternative: Kruskal-style DSU. Sort cells by elevation, activate them one by one, and return the elevation that first unions (0,0) with (n-1,n-1). Elegant but requires more setup.
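The DSU alternative can be sketched as follows (swimDSU is our name; a sketch relying on the problem's guarantee that elevations are a permutation of 0..n²−1): activate cells in elevation order, union each new cell with its already-active neighbors, and return the first elevation at which the two corners join.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Kruskal-style DSU sketch: cells become active in elevation order.
int swimDSU(vector<vector<int> >& g) {
    int n = g.size();
    vector<int> p(n * n);
    for (int i = 0; i < n * n; ++i) p[i] = i;
    vector<int> pos(n * n);              // pos[t] = cell holding elevation t
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            pos[g[i][j]] = i * n + j;
    vector<char> active(n * n, 0);
    int dr[4] = {-1, 1, 0, 0}, dc[4] = {0, 0, -1, 1};
    for (int t = 0; t < n * n; ++t) {
        int code = pos[t], i = code / n, j = code % n;
        active[code] = 1;
        for (int d = 0; d < 4; ++d) {
            int ni = i + dr[d], nj = j + dc[d];
            if (ni < 0 || nj < 0 || ni >= n || nj >= n) continue;
            if (!active[ni * n + nj]) continue;
            int a = code, b = ni * n + nj;   // find with path halving, then union
            while (p[a] != a) { p[a] = p[p[a]]; a = p[a]; }
            while (p[b] != b) { p[b] = p[p[b]]; b = p[b]; }
            if (a != b) p[a] = b;
        }
        int a = 0, b = n * n - 1;            // are the corners connected yet?
        while (p[a] != a) { p[a] = p[p[a]]; a = p[a]; }
        while (p[b] != b) { p[b] = p[p[b]]; b = p[b]; }
        if (a == b) return t;
    }
    return n * n - 1;
}
```

The distinct-elevation guarantee means no sorting is needed — the pos array is already the edge order.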
23. Longest Increasing Subsequence
LC 300 · Medium · DP / Patience sorting
Problem Statement
Given an integer array nums, return the length of the longest strictly increasing subsequence.
Constraints. 1 ≤ n ≤ 2500 for the O(n²) version; n ≤ 10^5 for the optimal.
Solution
Classic O(n²) DP: dp[i] = LIS ending at i, transition dp[i] = 1 + max(dp[j]) for j < i with nums[j] < nums[i].
Better: maintain tails, where tails[k] is the smallest possible tail of an increasing subsequence of length k + 1. For each x, binary-search the first position k in tails with tails[k] ≥ x and overwrite; if x is larger than all, append. LIS length = tails.size().
class Solution {
public:
int lengthOfLIS(vector<int>& nums) {
vector<int> tails;
for (size_t i = 0; i < nums.size(); ++i) {
int x = nums[i];
vector<int>::iterator it = lower_bound(tails.begin(), tails.end(), x);
if (it == tails.end()) tails.push_back(x);
else *it = x;
}
return (int)tails.size();
}
};

- Time. O(n log n).
- Space. O(n).
Discussion
The tails array is not a real subsequence — it’s an invariant: the smallest possible tail for each length. This is enough because the LIS only cares about lengths, not the actual values in between.
lower_bound (first ≥ x) gives strict LIS. For non-strict LIS (allow equals), use upper_bound (first > x). Mixing these up is a common off-by-one.
To recover the actual sequence you need parent pointers and an index array — the naive tails replacement loses the structure. Ask if the interviewer wants the sequence itself vs just the length.
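If the interviewer does want the sequence, one way to recover it (lisSequence is our name, a sketch): record, for each tails slot, the index of the element currently occupying it, plus a parent link from each element to its predecessor in its subsequence, then walk parents back from the last slot.

```cpp
#include <bits/stdc++.h>
using namespace std;

// LIS with reconstruction: same lower_bound core, plus bookkeeping.
vector<int> lisSequence(vector<int>& nums) {
    vector<int> tails;                      // values, as in the main solution
    vector<int> tailIdx;                    // index of the element in each slot
    vector<int> parent(nums.size(), -1);    // predecessor index, -1 = none
    for (size_t i = 0; i < nums.size(); ++i) {
        vector<int>::iterator it = lower_bound(tails.begin(), tails.end(), nums[i]);
        int k = it - tails.begin();
        if (k > 0) parent[i] = tailIdx[k - 1];
        if (it == tails.end()) { tails.push_back(nums[i]); tailIdx.push_back(i); }
        else { *it = nums[i]; tailIdx[k] = i; }
    }
    vector<int> seq;
    for (int i = tailIdx.empty() ? -1 : tailIdx.back(); i != -1; i = parent[i])
        seq.push_back(nums[i]);
    reverse(seq.begin(), seq.end());
    return seq;
}
```

The parent links survive the destructive tails overwrites, which is exactly what the naive version loses.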
24. Longest Common Subsequence
LC 1143 · Medium · DP / 2D
Problem Statement
Given two strings a and b, return the length of their longest common subsequence.
Constraints. 1 ≤ |a|, |b| ≤ 1000.
Solution
Standard 2D DP. dp[i][j] = LCS of a[0..i) and b[0..j). If a[i-1] == b[j-1], dp[i][j] = dp[i-1][j-1] + 1; otherwise dp[i][j] = max(dp[i-1][j], dp[i][j-1]).
Space-optimize to two rows since each row depends only on the previous.
class Solution {
public:
int longestCommonSubsequence(string a, string b) {
int m = a.size(), n = b.size();
vector<int> prev(n + 1, 0), cur(n + 1, 0);
for (int i = 1; i <= m; ++i) {
for (int j = 1; j <= n; ++j) {
if (a[i-1] == b[j-1]) cur[j] = prev[j-1] + 1;
else cur[j] = max(prev[j], cur[j-1]);
}
prev.swap(cur);
}
return prev[n];
}
};

- Time. O(mn).
- Space. O(n).
Discussion
LCS is the parent of a surprising number of string problems: edit distance, diff tools, shortest common supersequence. Memorize the recurrence — it comes up constantly.
Why return prev[n]? The swap at the end of each outer iteration moves the freshly computed row (cur) into prev for the next round, so after the final iteration prev holds the last row computed. Reading cur after the loop would give you a stale row.
You can compact further to one row with a single scalar saving dp[i-1][j-1] (since cur[j] is about to overwrite prev[j]), but the readability hit isn’t worth the 2× memory savings here.
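For reference, the one-row compression mentioned above might look like this (lcsOneRow is our name, a sketch): a scalar diag preserves dp[i-1][j-1] just before row[j] is overwritten.

```cpp
#include <bits/stdc++.h>
using namespace std;

// LCS in a single row: `diag` carries the value that the overwrite destroys.
int lcsOneRow(const string& a, const string& b) {
    int n = b.size();
    vector<int> row(n + 1, 0);
    for (size_t i = 1; i <= a.size(); ++i) {
        int diag = 0;                   // dp[i-1][0] is always 0
        for (int j = 1; j <= n; ++j) {
            int tmp = row[j];           // dp[i-1][j], saved before overwrite
            if (a[i-1] == b[j-1]) row[j] = diag + 1;
            else row[j] = max(row[j], row[j-1]);
            diag = tmp;                 // becomes dp[i-1][j-1] for the next j
        }
    }
    return row[n];
}
```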
25. Edit Distance
LC 72 · Medium-Hard · DP / 2D
Problem Statement
Given two strings word1 and word2, return the minimum number of insertions, deletions, or substitutions to convert word1 to word2.
Constraints. |word1|, |word2| ≤ 500.
Solution
dp[i][j] = edit distance between word1[0..i) and word2[0..j). Base cases: dp[0][j] = j, dp[i][0] = i. If last chars match, dp[i][j] = dp[i-1][j-1]; else dp[i][j] = 1 + min(dp[i-1][j-1], dp[i-1][j], dp[i][j-1]) (substitute / delete / insert).
class Solution {
public:
int minDistance(string a, string b) {
int m = a.size(), n = b.size();
vector<int> prev(n + 1, 0), cur(n + 1, 0);
for (int j = 0; j <= n; ++j) prev[j] = j;
for (int i = 1; i <= m; ++i) {
cur[0] = i;
for (int j = 1; j <= n; ++j) {
if (a[i-1] == b[j-1]) cur[j] = prev[j-1];
else cur[j] = 1 + min(prev[j-1], min(prev[j], cur[j-1]));
}
prev.swap(cur);
}
return prev[n];
}
};

- Time. O(mn).
- Space. O(n).
Discussion
The three transitions correspond exactly to the three allowed operations. dp[i-1][j-1] = substitute (replace a[i-1] with b[j-1]), dp[i-1][j] = delete a[i-1], dp[i][j-1] = insert b[j-1]. The symmetry falls out of thinking about which character “matches” which.
Base case initialization is often where people slip. dp[0][j] = j because converting an empty string to b[0..j) takes j insertions. Similarly dp[i][0] = i is i deletions.
Recovering the edit script needs parent pointers. Useful in practice (diff tools), rare in interviews.
26. Maximum Product Subarray
LC 152 · Medium · DP / Kadane variant
Problem Statement
Given an integer array nums, return the maximum product of a contiguous subarray.
Constraints. 1 ≤ n ≤ 2·10^4; values fit in int, answer fits in int.
Solution
A twist on Kadane: a large negative times a large negative is positive, so you must track both the running max and min at each index.
curMax = max(x, x * prevMax, x * prevMin), curMin = min(x, x * prevMax, x * prevMin), answer = max(curMax) seen.
class Solution {
public:
int maxProduct(vector<int>& nums) {
int best = nums[0];
int curMax = nums[0], curMin = nums[0];
for (size_t i = 1; i < nums.size(); ++i) {
int x = nums[i];
int a = x * curMax, b = x * curMin;
curMax = max(x, max(a, b));
curMin = min(x, min(a, b));
if (curMax > best) best = curMax;
}
return best;
}
};

- Time. O(n).
- Space. O(1).
Discussion
The insight is that “best product ending here” isn’t a single value — you also need “worst (most negative) product ending here” because the next element might be negative and flip it into the best.
Including x itself in the max / min lets subarrays restart at x when the running product hits zero or gets trampled.
Edge cases to watch: all zeros (answer is 0), single negative (answer is that negative), mix of positives and one zero in the middle (the zero forces a restart).
27. Partition Equal Subset Sum
LC 416 · Medium · DP / 0/1 knapsack
Problem Statement
Given a non-empty array nums of positive integers, determine whether it can be partitioned into two subsets of equal sum.
Constraints. 1 ≤ n ≤ 200; 1 ≤ nums[i] ≤ 100.
Solution
If the total sum is odd, the answer is immediately false. Otherwise target = sum / 2 and the question becomes: “is there a subset summing to target?” That’s 0/1 knapsack.
dp[s] = can we make sum s using some subset so far? Start with dp[0] = true. For each element x, update dp[s] |= dp[s - x] for s from target down to x (iterate backwards so each element is used at most once).
class Solution {
public:
bool canPartition(vector<int>& nums) {
int sum = 0;
for (size_t i = 0; i < nums.size(); ++i) sum += nums[i];
if (sum & 1) return false;
int target = sum / 2;
vector<char> dp(target + 1, 0);
dp[0] = 1;
for (size_t i = 0; i < nums.size(); ++i) {
int x = nums[i];
for (int s = target; s >= x; --s)
if (dp[s - x]) dp[s] = 1;
if (dp[target]) return true;
}
return dp[target] != 0;
}
};

- Time. O(n · target).
- Space. O(target).
Discussion
The backward inner loop is crucial for 0/1 knapsack: iterating forward would let you reuse the same item multiple times (unbounded knapsack). If you’re ever unsure which direction, think “what was the value I’m reading?” — dp[s - x] must be from the previous item, not this one.
Early-out when dp[target] becomes true is a nice practical speedup but doesn’t change the asymptotic.
For very skewed inputs (sum up to 10^6), consider a bitset trick: dp |= dp << x updates all reachable sums in one O(target / 64) operation. Same idea, 64× constant-factor speedup.
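The bitset trick can be sketched in a few lines (canPartitionBitset is our name; the bitset size 10001 assumes the stated LC limits, where target ≤ 10000):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Bit s of dp records "a subset summing to s exists".
// dp |= dp << x processes one element in O(target / 64) word operations.
bool canPartitionBitset(vector<int>& nums) {
    int sum = 0;
    for (size_t i = 0; i < nums.size(); ++i) sum += nums[i];
    if (sum & 1) return false;
    int target = sum / 2;                // <= 10000 under the stated limits
    bitset<10001> dp;
    dp[0] = 1;
    for (size_t i = 0; i < nums.size(); ++i)
        dp |= dp << nums[i];             // bits shifted past the top are dropped
    return dp[target];
}
```

The shift-or replaces the entire backward inner loop; the backward-iteration concern disappears because the shift reads the pre-update bits by construction.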
28. Target Sum
LC 494 · Medium · DP / 0/1 knapsack reformulated
Problem Statement
Given an array nums of non-negative integers and a target, count the number of ways to assign + or - to each element so the sum equals target.
Constraints. 1 ≤ n ≤ 20; 0 ≤ nums[i] ≤ 1000.
Solution
Let P = sum of elements given +, N = sum given −. Then P − N = target and P + N = sum, giving P = (sum + target) / 2. If |target| > sum or sum + target is odd, the answer is 0. Otherwise count subsets summing to P — a classic 0/1 knapsack count.
dp[s] = number of subsets with sum s.
class Solution {
public:
int findTargetSumWays(vector<int>& nums, int target) {
int sum = 0;
for (size_t i = 0; i < nums.size(); ++i) sum += nums[i];
if (abs(target) > sum || ((sum + target) & 1)) return 0;
int P = (sum + target) / 2;
vector<int> dp(P + 1, 0);
dp[0] = 1;
for (size_t i = 0; i < nums.size(); ++i) {
int x = nums[i];
for (int s = P; s >= x; --s)
dp[s] += dp[s - x];
}
return dp[P];
}
};

- Time. O(n · P).
- Space. O(P).
Discussion
The algebraic rewrite is half the battle: once you turn the sign-assignment problem into “count subsets with sum P,” you have a textbook 0/1 knapsack count.
The alternative is brute-force 2^n recursion with memoization on (i, cur), also works for n ≤ 20. The knapsack version is tighter and generalizes better to larger n.
Guard the parity check: sum + target being odd means no valid P. Also guard abs(target) > sum — no assignment of signs can make a sum with magnitude exceeding the total.
29. Coin Change II
LC 518 · Medium · DP / Unbounded knapsack count
Problem Statement
Given amount and coins (unlimited supply of each denomination), return the number of distinct combinations that sum to amount.
Constraints. 1 ≤ coins.length ≤ 300; 1 ≤ amount ≤ 5000.
Solution
Outer loop over coins, inner loop over sums forward. Looping in that order ensures each combination is counted once — coin c can only extend combinations whose previous coin was ≤ c, which is exactly what you get by locking in the coin before sweeping sums.
dp[s] += dp[s - c].
class Solution {
public:
int change(int amount, vector<int>& coins) {
vector<int> dp(amount + 1, 0);
dp[0] = 1;
for (size_t i = 0; i < coins.size(); ++i) {
int c = coins[i];
for (int s = c; s <= amount; ++s)
dp[s] += dp[s - c];
}
return dp[amount];
}
};

- Time. O(n · amount).
- Space. O(amount).
Discussion
Swapping the loop order counts permutations instead of combinations: {1, 2} and {2, 1} would be counted as different ways. The outer-coin / inner-sum order is the canonical fix. This is the most common bug in unbounded knapsack problems — know which order gives which count.
Forward inner sweep (as opposed to backward in 0/1 knapsack) allows reusing the current coin multiple times within the same pass, which is exactly the unbounded-knapsack semantics.
Edge case: amount = 0 has one combination (the empty set), so dp[0] = 1 is essential.
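To see the loop-order pitfall concretely, here is the swapped order as a sketch (countPermutations is our name) — sums outer, coins inner — which counts ordered sequences instead of combinations: for coins {1, 2} and amount 3 it finds three ways (1+1+1, 1+2, 2+1) where change finds two.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Swapped loop order: each sum is built in every coin order,
// so this counts permutations, not combinations.
int countPermutations(int amount, vector<int>& coins) {
    vector<int> dp(amount + 1, 0);
    dp[0] = 1;
    for (int s = 1; s <= amount; ++s)              // sums outer...
        for (size_t i = 0; i < coins.size(); ++i)  // ...coins inner
            if (coins[i] <= s) dp[s] += dp[s - coins[i]];
    return dp[amount];
}
```

Both orders are legitimate DPs — they just answer different questions, so know which one the problem is asking.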
30. Best Time to Buy and Sell Stock with Cooldown
LC 309 · Medium · DP / State machine
Problem Statement
Given daily stock prices, compute the maximum profit from any sequence of buys and sells, with the constraint that after a sell you must cool down one day before buying again.
Constraints. 1 ≤ n ≤ 5000.
Solution
Three-state DP: hold (own a share), sold (just sold today, cooldown tomorrow), rest (no share, no cooldown). Transitions:
- hold[i] = max(hold[i-1], rest[i-1] - price)
- sold[i] = hold[i-1] + price
- rest[i] = max(rest[i-1], sold[i-1])
Answer = max(sold[n-1], rest[n-1]).
class Solution {
public:
int maxProfit(vector<int>& prices) {
int n = prices.size();
if (n < 2) return 0;
int hold = -prices[0], sold = 0, rest = 0;
for (int i = 1; i < n; ++i) {
int prevHold = hold, prevSold = sold, prevRest = rest;
hold = max(prevHold, prevRest - prices[i]);
sold = prevHold + prices[i];
rest = max(prevRest, prevSold);
}
return max(sold, rest);
}
};

- Time. O(n).
- Space. O(1).
Discussion
State machines are the cleanest way to handle stock problems with constraints. Draw the states (hold / sold / rest) as nodes and the transitions as edges; the DP is just dp[day][state].
The cooldown is encoded by requiring hold[i] to be reached from rest[i-1] (not sold[i-1]) — you can only buy after a “rest” day.
For more than one cooldown day or multiple stocks in parallel, add states. Same pattern: enumerate the states, write transitions, compress to O(1) memory when the transitions only look back one day.
31. Regular Expression Matching
LC 10 · Hard · DP / 2D with wildcards
Problem Statement
Implement regex matching with . (matches any single char) and * (matches zero or more of the preceding element). The match must cover the entire string.
Constraints. |s| ≤ 20; |p| ≤ 20.
Solution
dp[i][j] = does s[0..i) match p[0..j)?
- Base: dp[0][0] = true; dp[0][j] = dp[0][j-2] when p[j-1] == '*' (the x* collapses to empty).
- If p[j-1] is a literal or ., dp[i][j] = dp[i-1][j-1] && match(s[i-1], p[j-1]).
- If p[j-1] == '*', two cases:
  - use zero copies of p[j-2]: dp[i][j] = dp[i][j-2];
  - use one or more: dp[i][j] ||= dp[i-1][j] && match(s[i-1], p[j-2]).
class Solution {
public:
bool isMatch(string s, string p) {
int m = s.size(), n = p.size();
vector<vector<char> > dp(m + 1, vector<char>(n + 1, 0));
dp[0][0] = 1;
for (int j = 2; j <= n; ++j)
if (p[j-1] == '*') dp[0][j] = dp[0][j-2];
for (int i = 1; i <= m; ++i) {
for (int j = 1; j <= n; ++j) {
if (p[j-1] == '*') {
dp[i][j] = dp[i][j-2];
if (p[j-2] == '.' || p[j-2] == s[i-1])
dp[i][j] = dp[i][j] || dp[i-1][j];
} else if (p[j-1] == '.' || p[j-1] == s[i-1]) {
dp[i][j] = dp[i-1][j-1];
}
}
}
return dp[m][n] != 0;
}
};

- Time. O(mn).
- Space. O(mn) (can compress to O(n)).
Discussion
The * is always paired with the preceding token, so when you see p[j-1] == '*' you must look at p[j-2]. If that token matches the current s[i-1], you have the option to consume the string character (dp[i-1][j]) — otherwise the * must collapse to zero.
Base-row init for patterns like a*b*c* needs special care. dp[0][j] is true iff every token in p[0..j) can match empty — which is only true for zero-width matches, i.e. * pairs.
Recursion + memoization (top-down) is also fine and often easier to explain. Bottom-up is faster in practice but more error-prone on base cases.
32. Wildcard Matching
LC 44 · Hard · DP / 2D with wildcards
Problem Statement
Implement wildcard matching with ? (any single char) and * (any sequence of chars, including empty). Must match the entire string.
Constraints. |s|, |p| ≤ 2000.
Solution
dp[i][j] = does s[0..i) match p[0..j)?
- dp[0][0] = true; dp[0][j] = dp[0][j-1] && p[j-1] == '*'.
- If p[j-1] == '?' or matches s[i-1]: dp[i][j] = dp[i-1][j-1].
- If p[j-1] == '*': dp[i][j] = dp[i-1][j] || dp[i][j-1] (use * to match s[i-1], or collapse * to empty).
Compress to two rows for memory.
class Solution {
public:
bool isMatch(string s, string p) {
int m = s.size(), n = p.size();
vector<char> prev(n + 1, 0), cur(n + 1, 0);
prev[0] = 1;
for (int j = 1; j <= n && p[j-1] == '*'; ++j) prev[j] = 1;
for (int i = 1; i <= m; ++i) {
cur.assign(n + 1, 0);
for (int j = 1; j <= n; ++j) {
if (p[j-1] == '*') cur[j] = prev[j] || cur[j-1];
else if (p[j-1] == '?' || p[j-1] == s[i-1]) cur[j] = prev[j-1];
}
prev.swap(cur);
}
return prev[n] != 0;
}
};

- Time. O(mn).
- Space. O(n).
Discussion
Compared to LC 10’s *, LC 44’s * is simpler: it’s a greedy wildcard, not tied to a preceding character. The two transitions prev[j] (consume a char) and cur[j-1] (match zero) capture both cases in one line.
An alternative two-pointer greedy solution exists: walk s and p, remember the last * position and the s index you committed from, backtrack to that * on mismatch. It’s O(mn) worst case but O(m + n) on most inputs and uses O(1) space — worth knowing.
Base-row initialization handles leading *s in the pattern. A common off-by-one: stop as soon as you see a non-* pattern character.
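The two-pointer greedy mentioned above can be sketched as follows (isMatchGreedy is our name): star remembers the most recent '*' in p and mark the s-index it has committed to swallow up to; on a mismatch with a live '*', extend that star's match by one character and retry.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Greedy wildcard matching: O(1) extra space, backtracks only to the last '*'.
bool isMatchGreedy(const string& s, const string& p) {
    int i = 0, j = 0, star = -1, mark = 0;
    int m = s.size(), n = p.size();
    while (i < m) {
        if (j < n && (p[j] == '?' || p[j] == s[i])) { ++i; ++j; }
        else if (j < n && p[j] == '*') { star = j++; mark = i; } // open a star
        else if (star != -1) { j = star + 1; i = ++mark; }       // grow its match
        else return false;                                       // no star to blame
    }
    while (j < n && p[j] == '*') ++j;    // trailing stars match empty
    return j == n;
}
```

Only the most recent '*' ever needs revisiting — an earlier star can always delegate any extra characters to a later one, which is why single-level backtracking suffices.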
33. Burst Balloons
LC 312 · Hard · DP / Interval
Problem Statement
You have n balloons with values nums[i]. Bursting balloon i yields nums[left] * nums[i] * nums[right] coins, where left and right are the currently adjacent balloons (or implicit 1 at the boundary). After bursting, left and right become adjacent. Return the maximum coins you can collect.
Constraints. 1 ≤ n ≤ 300.
Solution
Pad with 1 on both ends to handle boundaries. The key reframing: which balloon is burst last in the interval (l, r)? If it’s k, then at the moment it bursts, its neighbors are nums[l] and nums[r] (because everything between has already popped). Split into (l, k) and (k, r) independently.
dp[l][r] = max coins from bursting all balloons strictly between l and r. Transition: dp[l][r] = max over k in (l, r) of dp[l][k] + dp[k][r] + nums[l]*nums[k]*nums[r].
Evaluate by increasing interval length.
class Solution {
public:
int maxCoins(vector<int>& nums) {
vector<int> a;
a.push_back(1);
for (size_t i = 0; i < nums.size(); ++i) a.push_back(nums[i]);
a.push_back(1);
int N = a.size();
vector<vector<int> > dp(N, vector<int>(N, 0));
for (int len = 2; len < N; ++len) {
for (int l = 0; l + len < N; ++l) {
int r = l + len;
int best = 0;
for (int k = l + 1; k < r; ++k) {
int v = dp[l][k] + dp[k][r] + a[l] * a[k] * a[r];
if (v > best) best = v;
}
dp[l][r] = best;
}
}
return dp[0][N - 1];
}
};

- Time. O(n³).
- Space. O(n²).
Discussion
The counterintuitive move is thinking about which balloon bursts last in a subproblem — the first-burst framing doesn’t decouple the neighbors cleanly, so subproblems overlap. Thinking “last burst” makes l and r stable throughout the subproblem, so the split is clean.
Padding with 1 sentinels removes three-case boundary logic. This trick is worth remembering for any “multiply adjacent” problem.
Complexity O(n³) at n = 300 means on the order of 10⁷ innermost iterations (roughly N³/6 for the padded N = 302) — comfortably fast. For much larger n you’d need a smarter DP; Knuth’s optimization does not apply here.
34. Super Egg Drop
LC 887 · Hard · DP / Binary lifting
Problem Statement
You have k identical eggs and n floors. An egg breaks when dropped from floor ≥ f (unknown). Return the minimum number of moves to guarantee finding f in the worst case.
Constraints. 1 ≤ k ≤ 100; 1 ≤ n ≤ 10^4.
Solution
The direct DP dp[k][n] = 1 + min over x of max(dp[k-1][x-1], dp[k][n-x]) is O(kn²) and TLEs. Flip the question:
f(m, k) = the maximum floors you can handle with m moves and k eggs. Recurrence: drop one egg with m moves and k eggs remaining; if it breaks you can handle f(m-1, k-1) floors below; if not, f(m-1, k) floors above, plus the current floor. f(m, k) = 1 + f(m-1, k-1) + f(m-1, k).
Sweep m from 1 upward until f(m, k) ≥ n.
class Solution {
public:
int superEggDrop(int k, int n) {
vector<int> f(k + 1, 0);
int m = 0;
while (f[k] < n) {
++m;
for (int i = k; i >= 1; --i)
f[i] = 1 + f[i - 1] + f[i];
}
return m;
}
};
- Time. O(k · m) where m = O(k · log(n / k)).
- Space. O(k).
Discussion
This is one of those problems where the obvious DP is too slow and you need to invert “min moves to handle n floors” into “max floors with m moves and k eggs.” The flipped recurrence is independent of the floor count, so each m takes just O(k) work.
The in-place backward update (for i from k down to 1) uses the f[i-1] and f[i] values from the previous m, exactly like 0/1 knapsack.
Updating i in increasing order would be wrong here: f[i - 1] would already hold the new value for the current m. Always draw the dependency graph first.
35. Longest Valid Parentheses
LC 32 · Hard · DP / Stack
Problem Statement
Given a string of ( and ), return the length of the longest valid (well-formed) parentheses substring.
Constraints. |s| ≤ 3·10^4.
Solution
Stack approach: push -1 as a sentinel. For each ( push its index; for each ) pop — if the stack is now empty, push the current index as the new sentinel; otherwise the length of the valid substring ending here is i − stack.top().
class Solution {
public:
int longestValidParentheses(string s) {
stack<int> st;
st.push(-1);
int best = 0;
for (int i = 0; i < (int)s.size(); ++i) {
if (s[i] == '(') st.push(i);
else {
st.pop();
if (st.empty()) st.push(i);
else {
int len = i - st.top();
if (len > best) best = len;
}
}
}
return best;
}
};
- Time. O(n).
- Space. O(n) worst case.
Discussion
The sentinel trick is what makes this clean. stack.top() is always “the index just before the current valid run” — either the last unmatched ) or -1 for the start. Subtracting gives the run length.
Two-pass counter alternative: walk left-to-right tracking open and close counters; when close > open, reset; when open == close, update answer. Walk right-to-left symmetrically to catch cases where open > close throughout. O(n) time, O(1) space.
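That two-pass counter version, sketched as a free function (the name is mine, not LeetCode's interface):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Two-pass counter version: O(n) time, O(1) space.
int longestValidParenTwoPass(const string& s) {
    int best = 0, open = 0, close = 0;
    // Left-to-right: catches runs where ')' never outnumbers '('.
    for (size_t i = 0; i < s.size(); ++i) {
        if (s[i] == '(') ++open; else ++close;
        if (open == close) best = max(best, 2 * close);
        else if (close > open) open = close = 0;
    }
    // Right-to-left: catches runs where '(' outnumbers ')' throughout.
    open = close = 0;
    for (size_t i = s.size(); i-- > 0; ) {
        if (s[i] == ')') ++close; else ++open;
        if (open == close) best = max(best, 2 * open);
        else if (open > close) open = close = 0;
    }
    return best;
}
```

The left-to-right pass alone misses inputs like "(()" (open stays ahead forever), which is exactly what the mirrored pass catches.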
The dp[i] approach (dp[i] = longest valid ending at i) also works and is a cleaner fit to the “DP” category, but the stack version is shorter.
36. Handshakes That Don’t Cross
LC 1259 · Hard · DP / Catalan
Problem Statement
numPeople people stand in a circle. They shake hands simultaneously, and no two handshakes may cross. Return the number of such configurations mod 10^9 + 7. (numPeople is even.)
Constraints. 2 ≤ numPeople ≤ 1000 (even).
Solution
This is the (n/2)-th Catalan number. Let n = numPeople / 2. C_n = sum over k in [0, n-1] of C_k * C_{n-1-k}.
Compute iteratively modulo 10^9 + 7.
class Solution {
public:
int numberOfWays(int numPeople) {
const int MOD = 1000000007;
int n = numPeople / 2;
vector<long long> C(n + 1, 0);
C[0] = 1;
for (int i = 1; i <= n; ++i) {
long long s = 0;
for (int k = 0; k < i; ++k)
s = (s + C[k] * C[i - 1 - k]) % MOD;
C[i] = s;
}
return (int)C[n];
}
};
- Time. O(n²).
- Space. O(n).
Discussion
The combinatorial argument: fix person 0. They shake hands with some person 2k + 1 (must be an odd index so that people remain in two even-sized groups, otherwise some pair would be forced to cross). The group “inside” the chord has k pairs, the group “outside” has n − 1 − k pairs. Sum over k, and you get the Catalan recurrence.
The direct formula C_n = (2n)! / ((n+1)! · n!) also works but requires modular inverse; the recurrence is simpler to get right in an interview.
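For reference, a sketch of that closed-form route (helper names are mine): build the numerator and denominator factorials mod p and divide via Fermat's little theorem, which needs fast modular exponentiation.

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1000000007LL;

// Fast modular exponentiation: b^e mod MOD.
long long pw(long long b, long long e) {
    long long r = 1; b %= MOD;
    while (e) { if (e & 1) r = r * b % MOD; b = b * b % MOD; e >>= 1; }
    return r;
}

// C_n = (2n)! / ((n+1)! * n!) mod p, inverse via Fermat (p prime).
long long catalanMod(int n) {
    long long num = 1, den = 1;
    for (int i = 2; i <= 2 * n; ++i) num = num * i % MOD;
    for (int i = 2; i <= n; ++i) den = den * i % MOD;      // n!
    den = den * den % MOD * (n + 1) % MOD;                 // n! * (n+1)!
    return num * pw(den, MOD - 2) % MOD;
}
```

This is O(n) per query versus the recurrence's O(n²), but the modular-inverse machinery is one more thing to get wrong under interview pressure.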
The trick is recognizing Catalan structure: “non-crossing pairings,” “valid parenthesizations,” “triangulations of a polygon,” “binary trees” — all the same number.
37. Group Anagrams
LC 49 · Medium · Arrays / Hashing
Problem Statement
Given a list of strings, group them so anagrams are together. Return the list of groups (any order).
Constraints. 1 ≤ n ≤ 10^4; each string up to 100 lowercase letters.
Solution
Use a signature that’s equal iff two strings are anagrams. Two common choices:
- Sorted string — O(L log L) per word.
- Count of each letter, encoded as a fixed-length string — O(L) per word.
For lowercase-only input the count signature is fastest.
class Solution {
public:
vector<vector<string> > groupAnagrams(vector<string>& strs) {
unordered_map<string, vector<string> > groups;
for (size_t i = 0; i < strs.size(); ++i) {
int cnt[26] = {0};
for (size_t j = 0; j < strs[i].size(); ++j) ++cnt[strs[i][j] - 'a'];
string key(26, 0);
for (int k = 0; k < 26; ++k) key[k] = (char)cnt[k]; // counts <= 100 fit in a char
groups[key].push_back(strs[i]);
}
vector<vector<string> > ans;
ans.reserve(groups.size());
for (unordered_map<string, vector<string> >::iterator it = groups.begin();
it != groups.end(); ++it)
ans.push_back(it->second);
return ans;
}
};
- Time. O(n · L).
- Space. O(n · L).
Discussion
Encoding counts as a fixed-length string keeps the hash key simple and collision-free. For mixed-case or Unicode input you’d switch to sorted strings — simpler, slightly slower.
You could also build a 26-tuple as the key (e.g. encoded as a1b0c2...) but fixed-length string is faster to hash.
An interviewer might ask for stable output order — if so, bucket by insertion order: use vector<vector<string>> keyed by first-occurrence index, and a separate unordered_map<string,int> from signature to index.
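A sketch of that stable-order variant (names are mine), using the sorted-string signature for brevity:

```cpp
#include <bits/stdc++.h>
using namespace std;

// First-occurrence-stable grouping: map each signature to its bucket index.
vector<vector<string> > groupAnagramsStable(const vector<string>& strs) {
    unordered_map<string, int> idx;   // signature -> index into ans
    vector<vector<string> > ans;
    for (size_t i = 0; i < strs.size(); ++i) {
        string key = strs[i];
        sort(key.begin(), key.end());
        unordered_map<string, int>::iterator it = idx.find(key);
        if (it == idx.end()) {        // first time we see this signature
            it = idx.insert(make_pair(key, (int)ans.size())).first;
            ans.push_back(vector<string>());
        }
        ans[it->second].push_back(strs[i]);
    }
    return ans;
}
```

Groups appear in the order their first member appears in the input, and members within a group keep input order.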
38. Longest Consecutive Sequence
LC 128 · Medium · Arrays / Hashing
Problem Statement
Given an unsorted array of integers, return the length of the longest sequence of consecutive integers (e.g. {1, 2, 3, 4}).
Constraints. 0 ≤ n ≤ 10^5.
Solution
Put all elements in a hash set. For each element x, only start counting a run if x - 1 is not in the set — this guarantees each run is counted exactly once, from its smallest element. From there, count forward while x + k is in the set.
class Solution {
public:
int longestConsecutive(vector<int>& nums) {
unordered_set<int> s(nums.begin(), nums.end());
int best = 0;
for (unordered_set<int>::iterator it = s.begin(); it != s.end(); ++it) {
int x = *it;
if (s.find(x - 1) != s.end()) continue; // not the run start
int len = 1;
while (s.find(x + len) != s.end()) ++len;
if (len > best) best = len;
}
return best;
}
};
- Time. O(n) amortized.
- Space. O(n).
Discussion
The “skip if x - 1 exists” guard is what makes this linear. Without it, each element in a long run would trigger its own forward count and you’d get O(n²) worst case.
Sorting is the naive O(n log n) alternative and is easier to explain. The hash-set version is strictly linear and the canonical answer.
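The sorting alternative, sketched (name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sort-based alternative: O(n log n), easier to reason about.
int longestConsecutiveSorted(vector<int> nums) {
    if (nums.empty()) return 0;
    sort(nums.begin(), nums.end());
    nums.erase(unique(nums.begin(), nums.end()), nums.end()); // drop duplicates
    int best = 1, run = 1;
    for (size_t i = 1; i < nums.size(); ++i) {
        run = (nums[i] == nums[i - 1] + 1) ? run + 1 : 1;
        if (run > best) best = run;
    }
    return best;
}
```

Deduplicating before the scan avoids the classic bug where a repeated value resets the run.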
Duplicate handling is automatic: the set collapses duplicates, so counting from each distinct start is correct.
39. Product of Array Except Self
LC 238 · Medium · Arrays / Prefix-suffix
Problem Statement
Given an integer array nums, return an array ans where ans[i] is the product of all elements except nums[i]. You must solve it without division and in O(n) time.
Constraints. 2 ≤ n ≤ 10^5; products fit in int.
Solution
Two passes:
1. Left-to-right: ans[i] = product of everything to the left of i.
2. Right-to-left: multiply by a running product of everything to the right.
Uses only the output array — O(1) extra space.
class Solution {
public:
vector<int> productExceptSelf(vector<int>& nums) {
int n = nums.size();
vector<int> ans(n, 1);
for (int i = 1; i < n; ++i) ans[i] = ans[i-1] * nums[i-1];
int right = 1;
for (int i = n - 1; i >= 0; --i) {
ans[i] *= right;
right *= nums[i];
}
return ans;
}
};
- Time. O(n).
- Space. O(1) extra.
Discussion
The two-pass prefix-suffix structure is a recurring pattern: any time the answer at i depends on “everything on the left” and “everything on the right,” a left sweep + a right sweep wins.
The no-division rule matters because of zeros. If you divide, a single zero ruins all other outputs, and two zeros give ambiguity. The prefix-suffix approach handles zeros naturally.
Reusing the output array for the first pass is a common interview-favorite space optimization. Expect the follow-up “can you do it without extra space?”
40. Subarray Sum Equals K
LC 560 · Medium · Arrays / Prefix sums + Hash
Problem Statement
Given an integer array nums and an integer k, return the number of contiguous subarrays whose sum equals k.
Constraints. 1 ≤ n ≤ 2·10^4; nums[i] can be negative.
Solution
Let P[i] = prefix sum nums[0] + ... + nums[i-1]. A subarray (l, r] sums to P[r] - P[l]. For each r, count how many l < r satisfy P[l] = P[r] - k. A hash map seen[s] tracking how many times each prefix sum has occurred answers this in O(1) per step.
class Solution {
public:
int subarraySum(vector<int>& nums, int k) {
unordered_map<int, int> seen;
seen[0] = 1;
int sum = 0, ans = 0;
for (size_t i = 0; i < nums.size(); ++i) {
sum += nums[i];
unordered_map<int, int>::iterator it = seen.find(sum - k);
if (it != seen.end()) ans += it->second;
++seen[sum];
}
return ans;
}
};
- Time. O(n).
- Space. O(n).
Discussion
Negative numbers rule out two-pointer / sliding window (which requires monotone prefix sums). The prefix-sum + hash trick is the canonical fix and generalizes to "count subarrays with sum in [a, b]" with an order-statistics structure over seen prefix sums (e.g. a Fenwick tree after coordinate compression).
Seeding seen[0] = 1 handles subarrays starting at index 0 — without it, you’d miss cases where P[r] = k.
Be careful to read from seen before inserting the current prefix sum — otherwise the current element could count a zero-length subarray.
41. Contiguous Array
LC 525 · Medium · Arrays / Prefix sums + Hash
Problem Statement
Given a binary array, return the length of the longest contiguous subarray with an equal number of 0s and 1s.
Constraints. 1 ≤ n ≤ 10^5.
Solution
Map 0 → -1, 1 → +1. Then “equal 0s and 1s” becomes “sum equals 0.” Track prefix sums; if you’ve seen the same sum before, the interior of that range has equal 0s and 1s. Record the earliest index of each sum to maximize length.
class Solution {
public:
int findMaxLength(vector<int>& nums) {
unordered_map<int, int> first;
first[0] = -1;
int sum = 0, best = 0;
for (int i = 0; i < (int)nums.size(); ++i) {
sum += (nums[i] == 1 ? 1 : -1);
unordered_map<int, int>::iterator it = first.find(sum);
if (it == first.end()) first[sum] = i;
else if (i - it->second > best) best = i - it->second;
}
return best;
}
};
- Time. O(n).
- Space. O(n).
Discussion
The 0 → -1 remap turns a counting problem into a running-sum problem — generally useful whenever the count of two categories must be equal.
first[0] = -1 handles subarrays starting at index 0: if the running sum returns to 0 at index i, the length is i - (-1) = i + 1.
Storing only the first index of each sum is crucial — later occurrences give shorter subarrays and should be ignored.
42. Container With Most Water
LC 11 · Medium · Arrays / Two Pointers
Problem Statement
You’re given n non-negative heights. Pick two lines to form a container with the x-axis; return the max area of water it can hold.
Constraints. 2 ≤ n ≤ 10^5.
Solution
Two pointers starting at both ends. Area = min(h[l], h[r]) * (r - l). Move whichever side is shorter inward — moving the taller side can only hurt the area (width shrinks, height is still bottlenecked by the shorter side).
class Solution {
public:
int maxArea(vector<int>& h) {
int l = 0, r = h.size() - 1, best = 0;
while (l < r) {
int a = min(h[l], h[r]) * (r - l);
if (a > best) best = a;
if (h[l] < h[r]) ++l;
else --r;
}
return best;
}
};
- Time. O(n).
- Space. O(1).
Discussion
The greedy exchange argument: when the pointers sit at l and r with h[l] ≤ h[r], any container that keeps line l but pairs it with a line strictly inside the window is no better: its width is smaller and its height is still capped by h[l]. So discarding l never discards the optimum, and by induction the optimal pair is examined before either of its endpoints is dropped.
The “move the shorter” rule is the only tricky bit — many people move the taller or both, losing correctness.
43. Trapping Rain Water
LC 42 · Hard · Arrays / Two Pointers
Problem Statement
Given n non-negative height bars, return the total water that can be trapped after raining.
Constraints. 1 ≤ n ≤ 2·10^4.
Solution
Two pointers with running leftMax / rightMax. The water at a position is determined by the minimum of the tallest bar on its left and right; so process whichever side’s running max is smaller.
class Solution {
public:
int trap(vector<int>& h) {
int l = 0, r = h.size() - 1;
int lmax = 0, rmax = 0, water = 0;
while (l < r) {
if (h[l] < h[r]) {
if (h[l] >= lmax) lmax = h[l];
else water += lmax - h[l];
++l;
} else {
if (h[r] >= rmax) rmax = h[r];
else water += rmax - h[r];
--r;
}
}
return water;
}
};
- Time. O(n).
- Space. O(1).
Discussion
The key invariant: at any step, the side being processed is bounded by its own running max and by a wall on the other side that’s at least as tall as the side’s running max. So water[i] = runningMax - h[i] on the side you’re processing, with no need to look at the other side’s exact height.
Alternative: precompute leftMax[i] and rightMax[i] arrays, then one pass to sum min(leftMax[i], rightMax[i]) - h[i]. O(n) time, O(n) space — easier to explain.
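That array-based alternative, sketched (name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Prefix-max / suffix-max version: O(n) time, O(n) space.
int trapArrays(const vector<int>& h) {
    int n = h.size();
    if (n == 0) return 0;
    vector<int> lmax(n), rmax(n);
    lmax[0] = h[0];
    for (int i = 1; i < n; ++i) lmax[i] = max(lmax[i - 1], h[i]);
    rmax[n - 1] = h[n - 1];
    for (int i = n - 2; i >= 0; --i) rmax[i] = max(rmax[i + 1], h[i]);
    int water = 0;
    for (int i = 0; i < n; ++i)
        water += min(lmax[i], rmax[i]) - h[i]; // never negative: both maxes >= h[i]
    return water;
}
```

Each position's water level is spelled out explicitly, which makes this the easiest version to prove correct on a whiteboard before optimizing to two pointers.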
Monotonic stack also works and naturally handles arbitrary “pits.”
44. Longest Substring Without Repeating Characters
LC 3 · Medium · Strings / Sliding Window
Problem Statement
Given a string, return the length of the longest substring with all distinct characters.
Constraints. 0 ≤ |s| ≤ 5·10^4.
Solution
Sliding window with a last-seen index per character. Expand r, and when s[r] was seen at position ≥ l, jump l to lastSeen[s[r]] + 1.
class Solution {
public:
int lengthOfLongestSubstring(string s) {
vector<int> last(256, -1);
int l = 0, best = 0;
for (int r = 0; r < (int)s.size(); ++r) {
unsigned char c = (unsigned char)s[r];
if (last[c] >= l) l = last[c] + 1;
last[c] = r;
if (r - l + 1 > best) best = r - l + 1;
}
return best;
}
};
- Time. O(n).
- Space. O(σ) where σ is the alphabet size.
Discussion
The “jump l” trick is faster than the classic “shrink one at a time” version because on a repeated character you skip directly past the previous occurrence rather than walking.
Using last[c] >= l (not just last[c] != -1) is what makes the jump safe — a character last seen before l is no longer in the window and shouldn’t trigger a jump.
vector<int>(256, -1) is the fastest hash for this problem (direct indexing). unordered_map<char,int> works but is 5-10x slower due to hashing overhead.
45. Minimum Window Substring
LC 76 · Hard · Strings / Sliding Window
Problem Statement
Given strings s and t, return the minimum-length substring of s that contains every character of t (with multiplicity), or "" if none.
Constraints. 1 ≤ |s|, |t| ≤ 10^5.
Solution
Sliding window with a counter of “how many required characters are still missing.” Expand r until missing == 0; then shrink l while the window remains valid, updating the best answer.
class Solution {
public:
string minWindow(string s, string t) {
if (t.empty() || s.size() < t.size()) return "";
int need[256] = {0};
for (size_t i = 0; i < t.size(); ++i) ++need[(unsigned char)t[i]];
int missing = t.size();
int bestL = 0, bestLen = INT_MAX;
int l = 0;
for (int r = 0; r < (int)s.size(); ++r) {
unsigned char c = (unsigned char)s[r];
if (need[c] > 0) --missing;
--need[c];
while (missing == 0) {
if (r - l + 1 < bestLen) { bestLen = r - l + 1; bestL = l; }
unsigned char cl = (unsigned char)s[l++];
++need[cl];
if (need[cl] > 0) ++missing;
}
}
return bestLen == INT_MAX ? "" : s.substr(bestL, bestLen);
}
};
- Time. O(|s| + |t|).
- Space. O(σ).
Discussion
The need array going negative is the key: it encodes how many extra copies of a character are in the window. Only characters with need > 0 contribute to missing, so extras don’t falsely reduce the count.
missing as a single scalar beats “check all counters on every step” by a constant factor but more importantly keeps the shrink condition cheap — while (missing == 0) is an O(1) check.
Off-by-one on the shrink: update the answer before shrinking (otherwise you might miss the smallest valid window at the exact moment missing first hits zero).
46. Longest Repeating Character Replacement
LC 424 · Medium · Strings / Sliding Window
Problem Statement
Given a string s of uppercase letters and an integer k, you can change up to k characters. Return the length of the longest substring containing only one distinct letter after the changes.
Constraints. 1 ≤ |s| ≤ 10^5.
Solution
In a valid window, (window length) − (max count of any letter) ≤ k. Slide: expand r, track maxCount as you go; if the window is invalid, slide l forward. The trick is that you never decrease maxCount — even though the true max might drop when you slide l, an over-estimated maxCount only shrinks the window and never admits a false answer.
class Solution {
public:
int characterReplacement(string s, int k) {
int cnt[26] = {0};
int l = 0, maxCount = 0, best = 0;
for (int r = 0; r < (int)s.size(); ++r) {
int c = s[r] - 'A';
++cnt[c];
if (cnt[c] > maxCount) maxCount = cnt[c];
while (r - l + 1 - maxCount > k) {
--cnt[s[l] - 'A'];
++l;
}
if (r - l + 1 > best) best = r - l + 1;
}
return best;
}
};
- Time. O(n).
- Space. O(σ).
Discussion
Why is not decrementing maxCount safe? Because the answer is monotone in the window length: if a smaller window was already achievable with a bigger majority, we already recorded it. The current window only grows when maxCount grows; a stale maxCount just stops growth, which is fine.
Subtle correctness: you only care about the largest window ever seen, not whether the current window is valid. Because maxCount never decreases, the while body runs at most once per step, so it behaves exactly like an if: l advances by at most one, and the window never shrinks below the largest valid size recorded so far.
47. Permutation in String
LC 567 · Medium · Strings / Sliding Window
Problem Statement
Given two strings s1 and s2, return true if s2 contains a permutation of s1 as a substring.
Constraints. 1 ≤ |s1|, |s2| ≤ 10^4.
Solution
Fixed-size sliding window of length |s1| over s2. Maintain a character count and a matches counter of how many letters have the exact required count. When matches == 26, you’ve found a permutation.
class Solution {
public:
bool checkInclusion(string s1, string s2) {
int m = s1.size(), n = s2.size();
if (m > n) return false;
int need[26] = {0}, cur[26] = {0};
for (int i = 0; i < m; ++i) { ++need[s1[i]-'a']; ++cur[s2[i]-'a']; }
int matches = 0;
for (int c = 0; c < 26; ++c) if (need[c] == cur[c]) ++matches;
for (int r = m; r < n; ++r) {
int ia = s2[r] - 'a';
int io = s2[r - m] - 'a';
if (matches == 26) return true;
++cur[ia];
if (cur[ia] == need[ia]) ++matches;
else if (cur[ia] == need[ia] + 1) --matches;
--cur[io];
if (cur[io] == need[io]) ++matches;
else if (cur[io] == need[io] - 1) --matches;
}
return matches == 26;
}
};
- Time. O(n).
- Space. O(σ).
Discussion
The matches counter is the classic trick for fixed-window problems where you’d otherwise compare 26-element arrays on every slide. Checking “did the count of this letter just hit need[c] / leave need[c]?” turns an O(σ) comparison into O(1) per step.
A simpler O(σn) version just compares the whole cur to need on every slide — fine for small σ and short inputs, lazier in interviews.
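The simpler full-compare version, sketched (name is mine); std::array gives a cheap == between the two count tables:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Simpler variant: compare the full 26-entry count arrays on every slide.
// O(26 * n), a bigger constant than the matches counter but trivially correct.
bool checkInclusionSimple(const string& s1, const string& s2) {
    int m = s1.size(), n = s2.size();
    if (m > n) return false;
    array<int, 26> need = {}, cur = {};
    for (int i = 0; i < m; ++i) { ++need[s1[i] - 'a']; ++cur[s2[i] - 'a']; }
    if (need == cur) return true;          // first window
    for (int r = m; r < n; ++r) {
        ++cur[s2[r] - 'a'];                // add entering char
        --cur[s2[r - m] - 'a'];            // drop leaving char
        if (need == cur) return true;
    }
    return false;
}
```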
Watch the final-check edge case: after the loop body, you haven’t yet checked the last window. Put the matches == 26 test at the top of the slide loop and again at the end.
48. 3Sum
LC 15 · Medium · Arrays / Two Pointers
Problem Statement
Given an integer array nums, return all unique triples [a, b, c] such that a + b + c == 0.
Constraints. 3 ≤ n ≤ 3000.
Solution
Sort. Fix the first element a = nums[i]; two-pointer inward for b + c = -a. Skip duplicates for both the fixed element and the inner pair to avoid repeated triples.
class Solution {
public:
vector<vector<int> > threeSum(vector<int>& nums) {
sort(nums.begin(), nums.end());
vector<vector<int> > ans;
int n = nums.size();
for (int i = 0; i + 2 < n; ++i) {
if (nums[i] > 0) break;
if (i > 0 && nums[i] == nums[i-1]) continue;
int l = i + 1, r = n - 1;
while (l < r) {
int s = nums[i] + nums[l] + nums[r];
if (s == 0) {
vector<int> t(3);
t[0] = nums[i]; t[1] = nums[l]; t[2] = nums[r];
ans.push_back(t);
++l; --r;
while (l < r && nums[l] == nums[l-1]) ++l;
while (l < r && nums[r] == nums[r+1]) --r;
} else if (s < 0) ++l;
else --r;
}
}
return ans;
}
};
- Time. O(n²).
- Space. O(1) extra (output aside).
Discussion
Sorting is what unlocks the two-pointer step — it turns “find pair summing to -a” from O(n) hash into O(n) linear. Total work: O(n log n) sort + O(n²) inner = O(n²).
Dedup by skipping adjacent equal values. Three places: after fixing i, and after each successful match for both l and r. Missing any causes duplicate triples.
Early termination if (nums[i] > 0) break avoids hopeless outer loops when even the smallest possible sum is already positive.
49. 4Sum
LC 18 · Medium-Hard · Arrays / Two Pointers
Problem Statement
Given an array nums and a target target, return all unique quadruples [a, b, c, d] such that a + b + c + d == target.
Constraints. 1 ≤ n ≤ 200; values fit in int, sums may need long.
Solution
Sort. Nest two fixed loops (i, j) and two-pointer the inner pair. Same dedup pattern as 3Sum, extended to the second fixed index.
class Solution {
public:
vector<vector<int> > fourSum(vector<int>& nums, int target) {
sort(nums.begin(), nums.end());
vector<vector<int> > ans;
int n = nums.size();
for (int i = 0; i + 3 < n; ++i) {
if (i > 0 && nums[i] == nums[i-1]) continue;
for (int j = i + 1; j + 2 < n; ++j) {
if (j > i + 1 && nums[j] == nums[j-1]) continue;
int l = j + 1, r = n - 1;
long long need = (long long)target - nums[i] - nums[j];
while (l < r) {
long long s = (long long)nums[l] + nums[r];
if (s == need) {
vector<int> t(4);
t[0] = nums[i]; t[1] = nums[j];
t[2] = nums[l]; t[3] = nums[r];
ans.push_back(t);
++l; --r;
while (l < r && nums[l] == nums[l-1]) ++l;
while (l < r && nums[r] == nums[r+1]) --r;
} else if (s < need) ++l;
else --r;
}
}
}
return ans;
}
};
- Time. O(n³).
- Space. O(1) extra.
Discussion
long long on the sum is not optional: LC feeds inputs where nums[i] + nums[j] + nums[k] + nums[l] overflows int. Always widen intermediate sums for this family of problems.
Generalizing to k-Sum is a neat exercise: recurse with kSum(target, k) that peels off one fixed index, dedups, and base-cases at k == 2 using two pointers. Runs in O(n^(k-1)).
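A sketch of that k-Sum recursion (names are mine): peel one fixed index per level with the usual dedup guard, and base-case at k == 2 with two pointers.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Recursive k-Sum over a sorted array a, starting at index lo.
void kSumRec(const vector<int>& a, int lo, int k, long long target,
             vector<int>& cur, vector<vector<int> >& out) {
    int n = a.size();
    if (k == 2) {                            // base case: classic two-pointer
        int l = lo, r = n - 1;
        while (l < r) {
            long long s = (long long)a[l] + a[r];
            if (s == target) {
                cur.push_back(a[l]); cur.push_back(a[r]);
                out.push_back(cur);
                cur.pop_back(); cur.pop_back();
                ++l; --r;
                while (l < r && a[l] == a[l - 1]) ++l;
                while (l < r && a[r] == a[r + 1]) --r;
            } else if (s < target) ++l;
            else --r;
        }
        return;
    }
    for (int i = lo; i + k - 1 < n; ++i) {
        if (i > lo && a[i] == a[i - 1]) continue;   // dedup the fixed index
        cur.push_back(a[i]);
        kSumRec(a, i + 1, k - 1, target - a[i], cur, out);
        cur.pop_back();
    }
}

vector<vector<int> > kSum(vector<int> nums, int k, long long target) {
    sort(nums.begin(), nums.end());
    vector<vector<int> > out;
    vector<int> cur;
    if ((int)nums.size() >= k) kSumRec(nums, 0, k, target, cur, out);
    return out;
}
```

With k = 4 and target passed as long long, this reproduces fourSum; with k = 3 and target 0, threeSum.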
The outer-i and outer-j dedup guards are symmetric but subtly different: i > 0 and j > i + 1 — the j guard is about “skip duplicate j values within this i”, not “skip j == i”.
50. Find All Anagrams in a String
LC 438 · Medium · Strings / Sliding Window
Problem Statement
Given two strings s and p, return all start indices of substrings of s that are anagrams of p.
Constraints. 1 ≤ |s|, |p| ≤ 3·10^4.
Solution
Same fixed-window + matches counter as LC 567, except we record start indices instead of returning early.
class Solution {
public:
vector<int> findAnagrams(string s, string p) {
vector<int> ans;
int m = p.size(), n = s.size();
if (m > n) return ans;
int need[26] = {0}, cur[26] = {0};
for (int i = 0; i < m; ++i) { ++need[p[i]-'a']; ++cur[s[i]-'a']; }
int matches = 0;
for (int c = 0; c < 26; ++c) if (need[c] == cur[c]) ++matches;
if (matches == 26) ans.push_back(0);
for (int r = m; r < n; ++r) {
int ia = s[r] - 'a';
int io = s[r - m] - 'a';
++cur[ia];
if (cur[ia] == need[ia]) ++matches;
else if (cur[ia] == need[ia] + 1) --matches;
--cur[io];
if (cur[io] == need[io]) ++matches;
else if (cur[io] == need[io] - 1) --matches;
if (matches == 26) ans.push_back(r - m + 1);
}
return ans;
}
};
- Time. O(n).
- Space. O(σ + result).
Discussion
This and LC 567 are the same algorithm; the only difference is whether you exit on first match or collect all matches. Recognizing that two problems are the same algorithm under a thin wrapper is a huge time-saver in interviews.
The matches bookkeeping on add and remove is the error-prone part. Use the same pattern every time: increment to need[c] → match gained; increment to need[c] + 1 → match lost (because you just went from valid to surplus). Remove is symmetric.
51. Sliding Window Maximum
LC 239 · Hard · Arrays / Monotonic Deque
Problem Statement
Given an array nums and a window size k, return the maximum of each sliding window of size k.
Constraints. 1 ≤ n ≤ 10^5; 1 ≤ k ≤ n.
Solution
Monotonic decreasing deque of indices. When adding i:
1. Pop from the back while nums[back] ≤ nums[i] (smaller items can never be the max while i is in the window).
2. Push i.
3. Pop from the front if it's outside the window (front ≤ i - k).
4. Once i ≥ k - 1, the front is the window max.
class Solution {
public:
vector<int> maxSlidingWindow(vector<int>& nums, int k) {
vector<int> ans;
deque<int> dq; // indices, decreasing nums[]
for (int i = 0; i < (int)nums.size(); ++i) {
while (!dq.empty() && nums[dq.back()] <= nums[i]) dq.pop_back();
dq.push_back(i);
if (dq.front() <= i - k) dq.pop_front();
if (i >= k - 1) ans.push_back(nums[dq.front()]);
}
return ans;
}
};
- Time. O(n) amortized.
- Space. O(k).
Discussion
Each index is pushed and popped at most once, which is why the total work is O(n) despite the inner while. Amortized analysis is the fingerprint of monotonic-deque problems.
Storing indices (not values) is important: you need to know when the front drops out of the window, and identical values at different indices must be distinguishable.
The “≤” vs “<” in the pop condition matters for stability with duplicates: ≤ keeps only the rightmost of equal values (which is correct — the rightmost lives longest and can represent both), while < would leave stale equal values cluttering the deque without affecting correctness but hurting the constant factor.
52. Binary Tree Level Order Traversal
LC 102 · Medium · Trees / BFS
Problem Statement
Given a binary tree, return its level-order traversal (values grouped per level, top to bottom).
Constraints. 0 ≤ n ≤ 2000.
Solution
Standard BFS with a per-level size snapshot. Each outer iteration processes exactly one level.
class Solution {
public:
vector<vector<int> > levelOrder(TreeNode* root) {
vector<vector<int> > ans;
if (!root) return ans;
queue<TreeNode*> q;
q.push(root);
while (!q.empty()) {
int sz = q.size();
vector<int> level;
level.reserve(sz);
while (sz--) {
TreeNode* u = q.front(); q.pop();
level.push_back(u->val);
if (u->left) q.push(u->left);
if (u->right) q.push(u->right);
}
ans.push_back(level);
}
return ans;
}
};
- Time. O(n).
- Space. O(n).
Discussion
The “read the current queue size, then loop that many times” pattern is the cleanest way to do level-by-level BFS without sentinels or per-node level tags. This pattern shows up in half a dozen tree problems.
DFS with a depth parameter also works — index into ans[depth], appending a new level when you first reach it. Same O(n) asymptotics, slightly more code.
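The DFS variant, sketched (names are mine; a minimal TreeNode is defined here for self-containment, matching LC's shape):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct TreeNode {
    int val; TreeNode *left, *right;
    TreeNode(int v) : val(v), left(NULL), right(NULL) {}
};

// DFS with a depth parameter: append a new level vector on first visit
// to that depth, then index into it.
void dfsLevels(TreeNode* u, int d, vector<vector<int> >& ans) {
    if (!u) return;
    if ((int)ans.size() == d) ans.push_back(vector<int>());
    ans[d].push_back(u->val);
    dfsLevels(u->left, d + 1, ans);
    dfsLevels(u->right, d + 1, ans);
}
```

Pre-order with left before right guarantees each level vector fills left to right, matching the BFS output.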
53. Binary Tree Right Side View
LC 199 · Medium · Trees / BFS
Problem Statement
Given a binary tree, return the values of the rightmost node at each depth.
Constraints. 0 ≤ n ≤ 100.
Solution
BFS level by level, keep the last value pushed at each level.
class Solution {
public:
vector<int> rightSideView(TreeNode* root) {
vector<int> ans;
if (!root) return ans;
queue<TreeNode*> q;
q.push(root);
while (!q.empty()) {
int sz = q.size();
int last = 0;
while (sz--) {
TreeNode* u = q.front(); q.pop();
last = u->val;
if (u->left) q.push(u->left);
if (u->right) q.push(u->right);
}
ans.push_back(last);
}
return ans;
}
};
- Time. O(n).
- Space. O(w) where w is the max width.
Discussion
“Right side view” doesn’t necessarily mean “always the right child” — it means the rightmost node visible from the right at each depth. If the right child is null but the left subtree has deeper nodes, those are visible.
DFS variant: traverse right-first, record the first node seen at each depth. Also O(n), nicer for “left side view” variants (just flip to left-first).
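The right-first DFS, sketched (names are mine; minimal TreeNode included for self-containment):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct TreeNode {
    int val; TreeNode *left, *right;
    TreeNode(int v) : val(v), left(NULL), right(NULL) {}
};

// Visit right subtree first: the first node reached at each depth is
// the rightmost, so record it only on the first visit to that depth.
void rightDfs(TreeNode* u, int d, vector<int>& ans) {
    if (!u) return;
    if ((int)ans.size() == d) ans.push_back(u->val);
    rightDfs(u->right, d + 1, ans);
    rightDfs(u->left, d + 1, ans);
}
```

Swapping the two recursive calls gives the left side view with no other changes.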
54. Lowest Common Ancestor of a Binary Tree
LC 236 · Medium · Trees / Recursion
Problem Statement
Given a binary tree and two distinct nodes p, q, return their lowest common ancestor.
Constraints. 2 ≤ n ≤ 10^5.
Solution
Post-order recursion. At each node, recurse left and right. If both return non-null, this node is the LCA. Otherwise return whichever is non-null.
class Solution {
public:
TreeNode* lowestCommonAncestor(TreeNode* root, TreeNode* p, TreeNode* q) {
if (!root || root == p || root == q) return root;
TreeNode* L = lowestCommonAncestor(root->left, p, q);
TreeNode* R = lowestCommonAncestor(root->right, p, q);
if (L && R) return root;
return L ? L : R;
}
};
- Time. O(n).
- Space. O(h).
Discussion
The recursion returns “either p, q, or the LCA if we’ve already found both.” When both subtrees return non-null, the current node is the junction — return it all the way up.
The base case root == p || root == q short-circuits the search once you hit a target, so you don’t descend further. This is safe because if the other target is in the subtree, the returned ancestor still includes it (the current node is its ancestor).
For a BST (LC 235), walk down using the BST invariant instead of recursing both sides — O(h) without recursion.
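That BST walk (LC 235), sketched (name is mine; minimal TreeNode included for self-containment):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct TreeNode {
    int val; TreeNode *left, *right;
    TreeNode(int v) : val(v), left(NULL), right(NULL) {}
};

// BST LCA: walk down from the root. The LCA is the first node whose
// value lies in [min(p,q), max(p,q)] — the point where the paths split.
TreeNode* lcaBST(TreeNode* root, TreeNode* p, TreeNode* q) {
    int lo = min(p->val, q->val), hi = max(p->val, q->val);
    while (root) {
        if (root->val < lo) root = root->right;
        else if (root->val > hi) root = root->left;
        else return root;
    }
    return NULL;
}
```

O(h) time, O(1) space, no recursion: the BST ordering replaces the post-order search entirely.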
55. Serialize and Deserialize Binary Tree
LC 297 · Hard · Trees / BFS or DFS
Problem Statement
Design an algorithm to serialize a binary tree into a string and deserialize it back.
Constraints. 0 ≤ n ≤ 10^4.
Solution
BFS serialization with "#" markers for null children. Deserialize by reading tokens in order and attaching children using a queue of nodes expecting to be populated.
class Codec {
public:
string serialize(TreeNode* root) {
if (!root) return "";
string out;
queue<TreeNode*> q;
q.push(root);
while (!q.empty()) {
TreeNode* u = q.front(); q.pop();
if (!u) { out += "# "; continue; }
out += to_string(u->val);
out += ' ';
q.push(u->left);
q.push(u->right);
}
return out;
}
TreeNode* deserialize(string data) {
if (data.empty()) return NULL;
stringstream ss(data);
string tok;
ss >> tok;
TreeNode* root = new TreeNode(atoi(tok.c_str()));
queue<TreeNode*> q;
q.push(root);
while (!q.empty() && ss >> tok) {
TreeNode* u = q.front(); q.pop();
if (tok != "#") {
u->left = new TreeNode(atoi(tok.c_str()));
q.push(u->left);
}
if (ss >> tok && tok != "#") {
u->right = new TreeNode(atoi(tok.c_str()));
q.push(u->right);
}
}
return root;
}
};
- Time. O(n).
- Space. O(n).
Discussion
BFS with null markers is the simplest format because the position of a null unambiguously encodes the tree shape. DFS (pre-order with null markers) is equivalent but slightly trickier to deserialize — you consume tokens recursively.
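The pre-order variant, sketched (names are mine; minimal TreeNode included for self-containment): serialize recursively, then deserialize by consuming tokens from a stream in the same order.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct TreeNode {
    int val; TreeNode *left, *right;
    TreeNode(int v) : val(v), left(NULL), right(NULL) {}
};

// Pre-order with '#' null markers.
void serPre(TreeNode* u, string& out) {
    if (!u) { out += "# "; return; }
    out += to_string(u->val); out += ' ';
    serPre(u->left, out);
    serPre(u->right, out);
}

// Deserialize by reading tokens in the same pre-order.
TreeNode* desPre(stringstream& ss) {
    string tok;
    if (!(ss >> tok) || tok == "#") return NULL;
    TreeNode* u = new TreeNode(atoi(tok.c_str()));
    u->left = desPre(ss);
    u->right = desPre(ss);
    return u;
}
```

Usage: `stringstream ss(data); TreeNode* root = desPre(ss);`. The recursion consumes exactly the tokens of each subtree, so no explicit bookkeeping of positions is needed.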
A subtle LC-specific gotcha: negative numbers and multi-digit numbers mean space-separated tokens are the safest delimiter. Don’t try to use a fixed-width encoding or single-char tokens.
For balanced trees you can skip null markers by storing the tree size up front, but null markers are more robust to unbalanced inputs.
56. Binary Tree Maximum Path Sum
LC 124 · Hard · Trees / Post-order DP
Problem Statement
Given a binary tree, return the maximum sum of any non-empty path. A path goes from one node to another via parent/child edges and does not reuse any node.
Constraints. 1 ≤ n ≤ 3·10^4; values can be negative.
Solution
Post-order: each recursive call returns the best downward path sum from that node (possibly 0 if going “nowhere” is better than going negative). At each node, consider the “arch” left + node + right as a candidate for the global answer.
class Solution {
public:
int maxPathSum(TreeNode* root) {
int best = INT_MIN;
down(root, best);
return best;
}
private:
int down(TreeNode* u, int& best) {
if (!u) return 0;
int L = max(0, down(u->left, best));
int R = max(0, down(u->right, best));
int arch = L + R + u->val;
if (arch > best) best = arch;
return max(L, R) + u->val;
}
};
- Time. O(n).
- Space. O(h).
Discussion
The split between the returned value (best downward path) and the tracked global (best arch) is the key insight. The arch can use both children, but when you return to the parent you can only use one child plus the current node — otherwise the path would form a Y, which isn’t a simple path.
Clamping children to max(0, ...) handles negative subtrees: if a subtree’s best contribution is negative, you’d rather not go there at all.
Initialize best = INT_MIN, not 0 — all-negative trees have a negative maximum path (a single node).
57. Diameter of Binary Tree
LC 543 · Medium · Trees / Post-order DP
Problem Statement
Given a binary tree, return the length (in edges) of the longest path between any two nodes.
Constraints. 1 ≤ n ≤ 10^4.
Solution
Similar to LC 124: post-order, each call returns the longest downward path from that node in edges. The best “arch” through a node is leftDepth + rightDepth, tracked globally.
class Solution {
public:
int diameterOfBinaryTree(TreeNode* root) {
int best = 0;
depth(root, best);
return best;
}
private:
int depth(TreeNode* u, int& best) {
if (!u) return 0;
int L = depth(u->left, best);
int R = depth(u->right, best);
if (L + R > best) best = L + R;
return 1 + max(L, R);
}
};
- Time. O(n).
- Space. O(h).
Discussion
Almost identical recurrence to LC 124, but in edges rather than node sums. The “return” value is one plus the best child; the “tracked” value is the sum of both children.
Common mistake: confusing “diameter in edges” vs “diameter in nodes” (= edges + 1). Read the problem statement carefully; LC 543 uses edges.
58. Validate Binary Search Tree
LC 98 · Medium · Trees / Recursion
Problem Statement
Given a binary tree, determine whether it’s a valid BST (strict ordering: left < node < right for every node).
Constraints. 1 ≤ n ≤ 10^4.
Solution
Carry an open interval (lo, hi) down the recursion. At each node, the value must lie strictly between lo and hi; recurse left with upper bound updated to the current value, recurse right with lower bound updated.
Use long long bounds to safely encompass INT_MIN / INT_MAX values.
class Solution {
public:
bool isValidBST(TreeNode* root) {
return check(root, LLONG_MIN, LLONG_MAX);
}
private:
bool check(TreeNode* u, long long lo, long long hi) {
if (!u) return true;
if (u->val <= lo || u->val >= hi) return false;
return check(u->left, lo, u->val) && check(u->right, u->val, hi);
}
};
- Time. O(n).
- Space. O(h).
Discussion
The classic wrong answer is “just compare with left and right children.” That fails on cases like 10 / (5, 15) / (null, null, 6, 20) — node 6 is in the right subtree of 10 but violates the BST property. You need ancestor bounds, not just immediate-child bounds.
The long long widening is necessary because LC tests include INT_MIN as a valid node value, and you need a lower bound strictly less than it. Alternatively, carry bool hasLo / bool hasHi flags instead of sentinel bounds.
Inorder traversal + “strictly increasing?” check is an equally valid alternative — may be easier to explain.
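A sketch of that inorder alternative (assuming the usual TreeNode; the havePrev flag stands in for an "optional previous value"):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct TreeNode {
    int val;
    TreeNode *left, *right;
    TreeNode(int v) : val(v), left(NULL), right(NULL) {}
};

// Iterative inorder: a tree is a valid BST iff the visit order is strictly increasing.
bool isValidBSTInorder(TreeNode* root) {
    stack<TreeNode*> st;
    TreeNode* cur = root;
    bool havePrev = false;
    int prev = 0;
    while (cur || !st.empty()) {
        while (cur) { st.push(cur); cur = cur->left; }
        cur = st.top(); st.pop();
        if (havePrev && cur->val <= prev) return false; // duplicate or inversion
        prev = cur->val;
        havePrev = true;
        cur = cur->right;
    }
    return true;
}
```

The havePrev flag sidesteps the INT_MIN sentinel problem entirely: there is no "previous value" before the first pop.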
59. Kth Smallest Element in a BST
LC 230 · Medium · Trees / Inorder
Problem Statement
Given a BST, return the k-th smallest element (1-indexed).
Constraints. 1 ≤ k ≤ n ≤ 10^4.
Solution
Iterative inorder traversal with a counter. Stop as soon as you’ve popped k nodes.
class Solution {
public:
int kthSmallest(TreeNode* root, int k) {
stack<TreeNode*> st;
TreeNode* cur = root;
while (cur || !st.empty()) {
while (cur) { st.push(cur); cur = cur->left; }
cur = st.top(); st.pop();
if (--k == 0) return cur->val;
cur = cur->right;
}
return -1;
}
};
- Time. O(h + k).
- Space. O(h).
Discussion
Inorder on a BST visits nodes in sorted order, so the k-th pop is the answer. Iterative beats recursive here because you can early-exit without unwinding the call stack.
For the follow-up “what if the BST is modified frequently and you need to support many kthSmallest queries efficiently?” — augment each node with a size (number of nodes in its subtree). Then each query is O(h).
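A sketch of that augmented query, assuming a hypothetical SNode type whose sz field counts the nodes in its subtree and is kept up to date by every insert and delete (the maintenance code is omitted here):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Hypothetical size-augmented BST node.
struct SNode {
    int val, sz;
    SNode *left, *right;
    SNode(int v) : val(v), sz(1), left(NULL), right(NULL) {}
};

int sz(SNode* u) { return u ? u->sz : 0; }

// O(h) order-statistic query: descend, steering by left-subtree sizes.
int kthSmallest(SNode* u, int k) {
    while (u) {
        int leftSz = sz(u->left);
        if (k == leftSz + 1) return u->val;    // current node is the k-th
        if (k <= leftSz) u = u->left;          // answer is in the left subtree
        else { k -= leftSz + 1; u = u->right; } // skip left subtree + this node
    }
    return -1; // k out of range
}
```

This is the classic order-statistic tree: the same idea powers `rank` and `select` in augmented balanced BSTs.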
60. Count Good Nodes in Binary Tree
LC 1448 · Medium · Trees / DFS
Problem Statement
A “good” node is one where no node on the path from the root to it has a greater value. Given a binary tree, count the good nodes.
Constraints. 1 ≤ n ≤ 10^5.
Solution
DFS carrying maxSoFar. At each node, if val ≥ maxSoFar, it’s good; recurse with updated max.
class Solution {
public:
int goodNodes(TreeNode* root) {
return dfs(root, INT_MIN);
}
private:
int dfs(TreeNode* u, int maxSoFar) {
if (!u) return 0;
int c = (u->val >= maxSoFar) ? 1 : 0;
int nm = max(maxSoFar, u->val);
return c + dfs(u->left, nm) + dfs(u->right, nm);
}
};
- Time. O(n).
- Space. O(h).
Discussion
“Good” is defined as “greater than or equal to every ancestor,” so the running max along the path is all you need. Each node contributes 0 or 1 independently; just sum.
Use INT_MIN as the initial max (not root->val), so the root itself is counted if it satisfies val ≥ maxSoFar. Since root->val >= INT_MIN always, this works uniformly.
61. Top K Frequent Elements
LC 347 · Medium · Heap / Bucket sort
Problem Statement
Given an integer array and an integer k, return the k most frequent elements.
Constraints. 1 ≤ n ≤ 10^5; 1 ≤ k ≤ number of distinct elements.
Solution
Two-pass: count frequencies in a hash map, then pick top k. Options:
- Min-heap of size k — O(n log k).
- Bucket sort by frequency — O(n).
Bucket sort is optimal because frequencies are bounded by n.
class Solution {
public:
vector<int> topKFrequent(vector<int>& nums, int k) {
unordered_map<int,int> cnt;
for (size_t i = 0; i < nums.size(); ++i) ++cnt[nums[i]];
int n = nums.size();
vector<vector<int> > buckets(n + 1);
for (unordered_map<int,int>::iterator it = cnt.begin(); it != cnt.end(); ++it)
buckets[it->second].push_back(it->first);
vector<int> ans;
for (int f = n; f >= 0 && (int)ans.size() < k; --f)
for (size_t i = 0; i < buckets[f].size() && (int)ans.size() < k; ++i)
ans.push_back(buckets[f][i]);
return ans;
}
};
- Time. O(n).
- Space. O(n).
Discussion
Bucket sort works because frequency is bounded: the longest possible bucket chain has n + 1 slots. For arbitrary frequency distributions this is asymptotically faster than the heap approach.
Heap-based variant: priority_queue<pair<int,int>, vector<pair<int,int>>, greater<pair<int,int>>>, push each (freq, val), pop when size exceeds k. Classic pattern for “top K” problems.
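A sketch of that heap variant (topKFrequentHeap is an illustrative name):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Min-heap of (freq, value): evict the least frequent whenever size exceeds k.
vector<int> topKFrequentHeap(vector<int>& nums, int k) {
    unordered_map<int,int> cnt;
    for (size_t i = 0; i < nums.size(); ++i) ++cnt[nums[i]];
    priority_queue<pair<int,int>, vector<pair<int,int> >, greater<pair<int,int> > > pq;
    for (unordered_map<int,int>::iterator it = cnt.begin(); it != cnt.end(); ++it) {
        pq.push(make_pair(it->second, it->first)); // (freq, value)
        if ((int)pq.size() > k) pq.pop();          // drop the rarest survivor
    }
    vector<int> ans;
    while (!pq.empty()) { ans.push_back(pq.top().second); pq.pop(); }
    return ans;
}
```

Note the pair order: frequency first, so the heap's comparison keys on frequency and the value rides along.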
62. Kth Largest Element in an Array
LC 215 · Medium · Heap / Quickselect
Problem Statement
Return the k-th largest element in an unsorted array.
Constraints. 1 ≤ k ≤ n ≤ 10^5.
Solution
Two options:
- Min-heap of size k: push every element, pop if size > k. Top is the k-th largest. O(n log k).
- Quickselect (average O(n)).
Here’s the simpler heap version:
class Solution {
public:
int findKthLargest(vector<int>& nums, int k) {
priority_queue<int, vector<int>, greater<int> > pq;
for (size_t i = 0; i < nums.size(); ++i) {
pq.push(nums[i]);
if ((int)pq.size() > k) pq.pop();
}
return pq.top();
}
};
- Time. O(n log k).
- Space. O(k).
Discussion
A min-heap of size k is a universal pattern for “top K by some ordering”: push, pop-if-overflow, answer is the heap root. Inverting the comparator gives “smallest K.”
Quickselect in average O(n) is the textbook answer to an interviewer who asks for better than O(n log k). Worst case is O(n²) with adversarial pivots; randomized pivots or median-of-medians restore O(n) worst case.
For k close to n, flipping to “find (n - k)-th smallest” makes the heap work smaller.
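A quickselect sketch with a random pivot (findKthLargestQS is an illustrative name; it takes the vector by value so the caller's array is left untouched):

```cpp
#include <bits/stdc++.h>
using namespace std;

// In-place selection of a[idx] in sorted order, expected O(n).
int quickselect(vector<int>& a, int l, int r, int idx) {
    while (l < r) {
        int p = a[l + rand() % (r - l + 1)]; // random pivot value
        int i = l, j = r;
        while (i <= j) {                     // Hoare-style partition around p
            while (a[i] < p) ++i;
            while (a[j] > p) --j;
            if (i <= j) { swap(a[i], a[j]); ++i; --j; }
        }
        // Now [l, j] <= p, [i, r] >= p, and anything strictly between equals p.
        if (idx <= j) r = j;
        else if (idx >= i) l = i;
        else return a[idx];
    }
    return a[l];
}

// k-th largest == element at index n - k of the sorted order.
int findKthLargestQS(vector<int> nums, int k) {
    return quickselect(nums, 0, (int)nums.size() - 1, (int)nums.size() - k);
}
```

The three-way outcome of the partition (left half, right half, or pivot band) is what lets equal elements terminate the loop instead of spinning.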
63. Find Median from Data Stream
LC 295 · Hard · Heap / Two heaps
Problem Statement
Design a data structure that supports addNum(int) and findMedian().
Constraints. 1 ≤ ops ≤ 5·10^4.
Solution
Two heaps: lo is a max-heap holding the smaller half, hi is a min-heap holding the larger half. Maintain size(lo) == size(hi) or size(lo) == size(hi) + 1.
class MedianFinder {
priority_queue<int> lo; // max-heap
priority_queue<int, vector<int>, greater<int> > hi;
public:
void addNum(int num) {
if (lo.empty() || num <= lo.top()) lo.push(num);
else hi.push(num);
if (lo.size() > hi.size() + 1) { hi.push(lo.top()); lo.pop(); }
else if (hi.size() > lo.size()) { lo.push(hi.top()); hi.pop(); }
}
double findMedian() {
if (lo.size() > hi.size()) return lo.top();
return (lo.top() + (double)hi.top()) / 2.0;
}
};
- Time. O(log n) per addNum, O(1) per findMedian.
- Space. O(n).
Discussion
The two-heap structure gives you O(1) access to the middle elements without having to maintain a sorted container (which would be O(n) per insertion).
The balancing rule — “lo has at most one more than hi” — keeps the median reachable from the heap tops. Always rebalance after every insert; a drift of more than one breaks findMedian.
For a sliding-window median variant, use an ordered multiset (or two multisets) instead — heaps don’t support arbitrary deletes.
64. Merge k Sorted Lists
LC 23 · Hard · Heap / Divide and conquer
Problem Statement
Merge k sorted linked lists into one sorted linked list.
Constraints. 0 ≤ k ≤ 10^4; total nodes up to 10^4.
Solution
Min-heap of (value, listIndex) or (value, node*). Repeatedly pop the smallest, append to the result, push its next node.
struct Cmp {
bool operator()(ListNode* a, ListNode* b) const { return a->val > b->val; }
};
class Solution {
public:
ListNode* mergeKLists(vector<ListNode*>& lists) {
priority_queue<ListNode*, vector<ListNode*>, Cmp> pq;
for (size_t i = 0; i < lists.size(); ++i)
if (lists[i]) pq.push(lists[i]);
ListNode dummy(0);
ListNode* tail = &dummy;
while (!pq.empty()) {
ListNode* u = pq.top(); pq.pop();
tail->next = u;
tail = u;
if (u->next) pq.push(u->next);
}
tail->next = NULL;
return dummy.next;
}
};
- Time. O(N log k) where N is total nodes.
- Space. O(k) heap.
Discussion
The heap only ever holds one node per original list, hence O(k) space and log k per operation. Total work is O(N log k).
Divide-and-conquer merge (pairwise merge k lists down to 1) has the same O(N log k) complexity without a heap, and is slightly faster in practice due to cache behavior. Heap is easier to explain and generalizes to streaming.
Using a dummy head simplifies the “attach to tail” logic — no special case for the first node.
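A sketch of that divide-and-conquer variant (merge2 / mergeKLists are illustrative names; assumes the usual ListNode):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct ListNode {
    int val;
    ListNode* next;
    ListNode(int v) : val(v), next(NULL) {}
};

// Standard two-list merge with a dummy head.
ListNode* merge2(ListNode* a, ListNode* b) {
    ListNode dummy(0);
    ListNode* tail = &dummy;
    while (a && b) {
        if (a->val <= b->val) { tail->next = a; a = a->next; }
        else { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = a ? a : b;
    return dummy.next;
}

// Pairwise rounds: k lists -> k/2 -> k/4 -> ... -> 1.
// Each round touches every node once, and there are ceil(log2 k) rounds: O(N log k).
ListNode* mergeKLists(vector<ListNode*> lists) {
    if (lists.empty()) return NULL;
    while (lists.size() > 1) {
        vector<ListNode*> next;
        for (size_t i = 0; i + 1 < lists.size(); i += 2)
            next.push_back(merge2(lists[i], lists[i + 1]));
        if (lists.size() % 2) next.push_back(lists.back()); // odd one out carries over
        lists.swap(next);
    }
    return lists[0];
}
```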
65. Search in Rotated Sorted Array
LC 33 · Medium · Binary Search
Problem Statement
An ascending sorted array was rotated at some pivot. Given the array and a target, return its index, or -1.
Constraints. 1 ≤ n ≤ 5000; all values distinct.
Solution
Modified binary search. At each midpoint, one half is still sorted — check which, then decide whether the target lies in the sorted half.
class Solution {
public:
int search(vector<int>& nums, int target) {
int l = 0, r = nums.size() - 1;
while (l <= r) {
int m = (l + r) / 2;
if (nums[m] == target) return m;
if (nums[l] <= nums[m]) { // left half sorted
if (nums[l] <= target && target < nums[m]) r = m - 1;
else l = m + 1;
} else { // right half sorted
if (nums[m] < target && target <= nums[r]) l = m + 1;
else r = m - 1;
}
}
return -1;
}
};
- Time. O(log n).
- Space. O(1).
Discussion
The “which half is sorted” test (nums[l] ≤ nums[m]) is the linchpin. Because the array is only rotated once, at any midpoint at least one of the two halves is strictly sorted, and that’s the half where you can do a normal ordered comparison.
Use ≤ (not <) in the left-sorted test to handle the case where l == m (length-1 left half).
For the duplicate-allowed variant (LC 81), you also need to handle nums[l] == nums[m] == nums[r] by shrinking both ends — that ruins O(log n) worst case.
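A sketch of that LC 81 variant (searchDup is an illustrative name):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Rotated search with duplicates: same skeleton as LC 33, plus one extra case.
// When nums[l] == nums[m] == nums[r], neither half is provably sorted,
// so shrink both ends by one. Worst case (all equal) degrades to O(n).
bool searchDup(vector<int>& nums, int target) {
    int l = 0, r = (int)nums.size() - 1;
    while (l <= r) {
        int m = (l + r) / 2;
        if (nums[m] == target) return true;
        if (nums[l] == nums[m] && nums[m] == nums[r]) { ++l; --r; }
        else if (nums[l] <= nums[m]) {            // left half sorted
            if (nums[l] <= target && target < nums[m]) r = m - 1;
            else l = m + 1;
        } else {                                   // right half sorted
            if (nums[m] < target && target <= nums[r]) l = m + 1;
            else r = m - 1;
        }
    }
    return false;
}
```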
66. Find Minimum in Rotated Sorted Array
LC 153 · Medium · Binary Search
Problem Statement
Given a rotated ascending sorted array with unique values, return the minimum element.
Constraints. 1 ≤ n ≤ 5000.
Solution
Binary search for the “drop” — compare nums[m] with nums[r]. If nums[m] > nums[r], the min is in (m, r]; else it’s in [l, m].
class Solution {
public:
int findMin(vector<int>& nums) {
int l = 0, r = nums.size() - 1;
while (l < r) {
int m = (l + r) / 2;
if (nums[m] > nums[r]) l = m + 1;
else r = m;
}
return nums[l];
}
};
- Time. O(log n).
- Space. O(1).
Discussion
Comparing with nums[r] (not nums[l]) is what makes this work in the non-rotated case: if the whole array is sorted, nums[m] ≤ nums[r] always, and you bisect toward index 0 — the minimum.
The termination condition l < r with r = m (not r = m - 1) avoids infinite loops when l == m.
67. Median of Two Sorted Arrays
LC 4 · Hard · Binary Search
Problem Statement
Given two sorted arrays A and B, return the median of their combined sorted order. Required time O(log(min(m, n))).
Constraints. 0 ≤ m, n ≤ 1000; m + n ≥ 1.
Solution
Binary search the partition of the smaller array. For a candidate split i in A, the split j in B is (m + n + 1) / 2 - i. The split is valid iff A[i-1] ≤ B[j] and B[j-1] ≤ A[i]. If not, shrink or grow i appropriately. When valid, the median is the right-side minimum (odd total) or the average of left-side max and right-side min (even).
class Solution {
public:
double findMedianSortedArrays(vector<int>& A, vector<int>& B) {
if (A.size() > B.size()) return findMedianSortedArrays(B, A);
int m = A.size(), n = B.size();
int half = (m + n + 1) / 2;
int lo = 0, hi = m;
while (lo <= hi) {
int i = (lo + hi) / 2;
int j = half - i;
int Aleft = (i == 0) ? INT_MIN : A[i-1];
int Aright = (i == m) ? INT_MAX : A[i];
int Bleft = (j == 0) ? INT_MIN : B[j-1];
int Bright = (j == n) ? INT_MAX : B[j];
if (Aleft <= Bright && Bleft <= Aright) {
if ((m + n) % 2) return max(Aleft, Bleft);
return (max(Aleft, Bleft) + (double)min(Aright, Bright)) / 2.0;
}
if (Aleft > Bright) hi = i - 1;
else lo = i + 1;
}
return 0.0;
}
};
- Time. O(log min(m, n)).
- Space. O(1).
Discussion
The partition view is elegant once you see it: split both arrays into “left halves” summing to (m + n + 1) / 2 elements, such that every left-half element ≤ every right-half element. That’s the median cut.
Always binary search on the smaller array so the hi = m bound is as tight as possible.
INT_MIN / INT_MAX sentinels gracefully handle boundary partitions (all of A on one side, none on the other).
68. Search a 2D Matrix II
LC 240 · Medium · Binary Search / Staircase
Problem Statement
Given an m × n matrix where rows and columns are both ascending, return whether target exists.
Constraints. 1 ≤ m, n ≤ 300.
Solution
Start from the top-right corner. At each step, if current cell > target, move left (exclude the current column); if < target, move down; else found.
class Solution {
public:
bool searchMatrix(vector<vector<int> >& M, int target) {
int m = M.size();
if (!m) return false;
int n = M[0].size();
int i = 0, j = n - 1;
while (i < m && j >= 0) {
if (M[i][j] == target) return true;
if (M[i][j] > target) --j;
else ++i;
}
return false;
}
};
- Time. O(m + n).
- Space. O(1).
Discussion
The top-right (or bottom-left) corner works because each comparison eliminates exactly one row or one column: from the top-right, moving left strictly decreases the value and moving down strictly increases it, so one comparison always tells you which way to go. From top-left or bottom-right, both moves change the value in the same direction, so a comparison narrows nothing.
Row-by-row binary search is O(m log n). The staircase is strictly better for this “each row and column is sorted” variant.
69. Koko Eating Bananas
LC 875 · Medium · Binary Search on Answer
Problem Statement
Koko eats k bananas per hour and wants to finish all piles within h hours. Each pile is processed one at a time — she can only eat from one pile per hour. Return the minimum integer k.
Constraints. 1 ≤ piles.length ≤ 10^4; 1 ≤ h ≤ 10^9.
Solution
Binary search on k. For a candidate k, compute the total hours as sum of ceil(pile / k). Minimize k such that total hours ≤ h.
class Solution {
public:
int minEatingSpeed(vector<int>& piles, int h) {
int lo = 1, hi = 0;
for (size_t i = 0; i < piles.size(); ++i) hi = max(hi, piles[i]);
while (lo < hi) {
int k = lo + (hi - lo) / 2;
long long hours = 0;
for (size_t i = 0; i < piles.size(); ++i)
hours += (piles[i] + k - 1) / k;
if (hours <= h) hi = k;
else lo = k + 1;
}
return lo;
}
};
- Time. O(n log max_pile).
- Space. O(1).
Discussion
“Binary search on the answer” is the right tool whenever the answer is monotone in some property of the input. Here: as k increases, the total hours can only decrease. So “feasible set” is an upper interval [k*, ∞), and you search for its left boundary.
(pile + k - 1) / k is the integer ceiling idiom — don’t use floating-point for this, it’s slower and can round wrong.
Use long long for hours — each ceil(pile / k) is fine in int, but the sum over 10^4 piles of values up to 10^9 overflows.
70. Capacity To Ship Packages Within D Days
LC 1011 · Medium · Binary Search on Answer
Problem Statement
Given package weights that must be shipped in order within days days, return the minimum ship capacity per day.
Constraints. 1 ≤ weights.length ≤ 5·10^4.
Solution
Binary search on capacity. For a candidate cap, greedily pack packages into days: start a new day when adding the next package would exceed cap. Answer = smallest cap with day count ≤ days.
Low bound = max weight (must fit one package). High bound = sum (one day).
class Solution {
public:
int shipWithinDays(vector<int>& w, int days) {
int lo = 0, hi = 0;
for (size_t i = 0; i < w.size(); ++i) { lo = max(lo, w[i]); hi += w[i]; }
while (lo < hi) {
int mid = lo + (hi - lo) / 2;
int d = 1, cur = 0;
for (size_t i = 0; i < w.size(); ++i) {
if (cur + w[i] > mid) { ++d; cur = 0; }
cur += w[i];
}
if (d <= days) hi = mid;
else lo = mid + 1;
}
return lo;
}
};
- Time. O(n log sum).
- Space. O(1).
Discussion
This is structurally identical to LC 410 “Split Array Largest Sum” and many others: minimize the max over subgroups of a contiguous partition. The same binary-search-on-answer + greedy feasibility works across the family.
Setting lo = max(weights) (not 1) is important: the minimum cap can’t be less than the heaviest single package.
71. Find First and Last Position of Element in Sorted Array
LC 34 · Medium · Binary Search
Problem Statement
Given a sorted array, return [first, last] indices of a target, or [-1, -1] if absent.
Constraints. 0 ≤ n ≤ 10^5.
Solution
Two binary searches: one lower_bound, one upper_bound. The first is the leftmost index with value ≥ target; the second is one past the rightmost index with value == target.
class Solution {
public:
vector<int> searchRange(vector<int>& nums, int target) {
vector<int> ans(2, -1);
int lo = lower_bound(nums.begin(), nums.end(), target) - nums.begin();
if (lo == (int)nums.size() || nums[lo] != target) return ans;
int hi = upper_bound(nums.begin(), nums.end(), target) - nums.begin() - 1;
ans[0] = lo;
ans[1] = hi;
return ans;
}
};
- Time. O(log n).
- Space. O(1).
Discussion
lower_bound / upper_bound are the reusable primitives; once you have them, this problem is two calls. Writing them by hand is a good exercise: the invariant is “answer is in [lo, hi]”, and you conservatively update to maintain it.
The nums[lo] != target guard is crucial: lower_bound returns the first index with value ≥ target, which might be a value strictly greater (no target present).
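Hand-rolled versions of both primitives, in the half-open [lo, hi) style (lowerBound / upperBound are illustrative names; the standard library's versions are what the solution above actually calls):

```cpp
#include <bits/stdc++.h>
using namespace std;

// First index with a[i] >= target. Invariant: the answer is always in [lo, hi].
int lowerBound(vector<int>& a, int target) {
    int lo = 0, hi = (int)a.size();
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < target) lo = mid + 1; // mid is too small: answer is right of it
        else hi = mid;                     // mid might be the answer: keep it in range
    }
    return lo;
}

// First index with a[i] > target: only the comparison changes (< becomes <=).
int upperBound(vector<int>& a, int target) {
    int lo = 0, hi = (int)a.size();
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] <= target) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}
```

The only difference between the two is `<` versus `<=` in the comparison, which is why they are worth learning as a pair.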
72. Daily Temperatures
LC 739 · Medium · Monotonic Stack
Problem Statement
Given an array of daily temperatures, return an array where ans[i] is the number of days until a warmer day appears (or 0 if none).
Constraints. 1 ≤ n ≤ 10^5.
Solution
Monotonic decreasing stack of indices. While the current temperature exceeds the stack top’s, pop and fill in ans[top] = i - top.
class Solution {
public:
vector<int> dailyTemperatures(vector<int>& T) {
int n = T.size();
vector<int> ans(n, 0);
stack<int> st;
for (int i = 0; i < n; ++i) {
while (!st.empty() && T[st.top()] < T[i]) {
int j = st.top(); st.pop();
ans[j] = i - j;
}
st.push(i);
}
return ans;
}
};
- Time. O(n).
- Space. O(n).
Discussion
Monotonic stacks are the universal solver for “next greater / next smaller” problems. The stack holds candidates waiting for their next greater element; when one arrives, every candidate it beats is popped and resolved.
Store indices, not temperatures — you need both the value and the position.
The “<” vs “≤” question: popping only on strictly greater (T[st.top()] < T[i]) leaves equal temperatures waiting on the stack, which is exactly right for “strictly warmer day”: an equally warm day resolves nothing.
73. Next Greater Element II
LC 503 · Medium · Monotonic Stack / Circular
Problem Statement
Given a circular array, return the next greater element for each position (going around the end if needed). If none exists, return -1.
Constraints. 1 ≤ n ≤ 10^4.
Solution
Walk the array twice (indices 0..2n-1, mod n) with the same monotonic-stack pattern. Two passes are enough: by the time you finish two full loops, any element that has a greater neighbor has been resolved.
class Solution {
public:
vector<int> nextGreaterElements(vector<int>& nums) {
int n = nums.size();
vector<int> ans(n, -1);
stack<int> st;
for (int i = 0; i < 2 * n; ++i) {
int v = nums[i % n];
while (!st.empty() && nums[st.top()] < v) {
ans[st.top()] = v;
st.pop();
}
if (i < n) st.push(i);
}
return ans;
}
};
- Time. O(n).
- Space. O(n).
Discussion
Two passes are sufficient because after the second pass, every element has “seen” all other elements. Three passes would be wasted work; one pass misses the wraparound case.
Push onto the stack only in the first pass (i < n) — the second pass is only for resolving existing entries. Pushing twice would cause double-counting.
74. Largest Rectangle in Histogram
LC 84 · Hard · Monotonic Stack
Problem Statement
Given heights of histogram bars, return the largest rectangular area enclosed.
Constraints. 1 ≤ n ≤ 10^5.
Solution
Monotonic increasing stack of indices. When the current bar is shorter than the top’s, pop and compute the area using that popped bar as the height. The width is bounded by the current index on the right and the new stack top (or -1) on the left.
Sentinel: treat the end of the array as an effective 0-height bar by looping one past the end.
class Solution {
public:
int largestRectangleArea(vector<int>& h) {
int n = h.size();
stack<int> st;
int best = 0;
for (int i = 0; i <= n; ++i) {
int cur = (i == n) ? 0 : h[i];
while (!st.empty() && h[st.top()] > cur) {
int top = st.top(); st.pop();
int L = st.empty() ? -1 : st.top();
int area = h[top] * (i - L - 1);
if (area > best) best = area;
}
st.push(i);
}
return best;
}
};
- Time. O(n).
- Space. O(n).
Discussion
The stack holds bars whose “next smaller right” hasn’t been found yet. When a smaller bar arrives, you know the right boundary for the top; the left boundary is the new stack top (or -1).
The sentinel 0 at the end triggers the stack to flush at the end of the array, avoiding a separate cleanup loop.
This is the engine for LC 85 “Maximal Rectangle” — for a 2D binary matrix, accumulate histograms row by row and call this on each.
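A sketch of that reduction (maximalRectangle is an illustrative name; the inner engine is the LC 84 solution above, hoisted out of its class):

```cpp
#include <bits/stdc++.h>
using namespace std;

// LC 84 engine, as in the solution above.
int largestRectangleArea(vector<int>& h) {
    int n = (int)h.size(), best = 0;
    stack<int> st;
    for (int i = 0; i <= n; ++i) {
        int cur = (i == n) ? 0 : h[i]; // sentinel 0 flushes the stack at the end
        while (!st.empty() && h[st.top()] > cur) {
            int top = st.top(); st.pop();
            int L = st.empty() ? -1 : st.top();
            best = max(best, h[top] * (i - L - 1));
        }
        st.push(i);
    }
    return best;
}

// LC 85: each row defines a histogram where h[c] is the run of consecutive
// '1's ending at this row in column c. The best rectangle in the matrix is
// the best rectangle in one of these n histograms.
int maximalRectangle(vector<vector<char> >& M) {
    if (M.empty()) return 0;
    int n = (int)M[0].size(), best = 0;
    vector<int> h(n, 0);
    for (size_t r = 0; r < M.size(); ++r) {
        for (int c = 0; c < n; ++c)
            h[c] = (M[r][c] == '1') ? h[c] + 1 : 0; // bar grows or resets
        best = max(best, largestRectangleArea(h));
    }
    return best;
}
```

Total cost is O(mn): each of the m rows pays one O(n) histogram pass.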
75. Car Fleet
LC 853 · Medium · Monotonic Stack / Sorting
Problem Statement
n cars are on a one-way road. Each has a starting position and speed, all toward target. A faster car catches up to slower ones and they form a fleet (moving at the slower speed). Return the number of fleets that reach the target.
Constraints. 1 ≤ n ≤ 10^5.
Solution
Sort cars by position descending. For each, compute time = (target - position) / speed. Walk from the car closest to the target: any later car with time ≤ current fleet's time gets absorbed; otherwise it’s a new fleet.
class Solution {
public:
int carFleet(int target, vector<int>& position, vector<int>& speed) {
int n = position.size();
vector<pair<int, double> > cars;
cars.reserve(n);
for (int i = 0; i < n; ++i)
cars.push_back(make_pair(position[i], (double)(target - position[i]) / speed[i]));
sort(cars.begin(), cars.end()); // ascending by position
int fleets = 0;
double slowest = 0.0;
for (int i = n - 1; i >= 0; --i) {
if (cars[i].second > slowest) {
++fleets;
slowest = cars[i].second;
}
}
return fleets;
}
};
- Time. O(n log n).
- Space. O(n).
Discussion
Time-to-target is the right quantity: two cars behind the same bottleneck are in the same fleet iff the back car’s raw time is ≤ the front car’s. Sorting by position and walking from front-to-target means each new “max time seen so far” starts a new fleet.
The key is that merged fleets take the time of the slowest car — so the running slowest only goes up, and any car with time ≤ slowest is absorbed silently.
Using double for time is fine given the constraints. If you need exact comparisons, store (target - position) * speed_j vs (target - position_j) * speed (cross-multiply).
76. Jump Game
LC 55 · Medium · Greedy
Problem Statement
Given an array where nums[i] is the max jump length from i, return whether you can reach the last index from index 0.
Constraints. 1 ≤ n ≤ 10^4.
Solution
Track the farthest index reachable so far. At each step, if i is beyond reach, return false; otherwise update reach = max(reach, i + nums[i]). If you make it past n - 1, return true.
class Solution {
public:
bool canJump(vector<int>& nums) {
int reach = 0;
int n = nums.size();
for (int i = 0; i < n; ++i) {
if (i > reach) return false;
if (i + nums[i] > reach) reach = i + nums[i];
if (reach >= n - 1) return true;
}
return true;
}
};
- Time. O(n).
- Space. O(1).
Discussion
Greedy works because reachability is monotone: if you can reach i, you can reach any j such that j ≤ i + max(nums[k]) for k ≤ i. Tracking the running max reach is enough — you never need to know which index was used to get there.
DP is overkill: dp[i] = can we reach i? is O(n²) with the same answer.
77. Jump Game II
LC 45 · Medium · Greedy
Problem Statement
Same array semantics as LC 55. Return the minimum number of jumps to reach the last index.
Constraints. 1 ≤ n ≤ 10^4; always reachable.
Solution
Think of each jump as expanding a “level” of reachable indices (BFS on an implicit graph). Maintain curEnd (right edge of the current level) and farthest (best reach seen during this level). When i passes curEnd, commit a jump and set curEnd = farthest.
class Solution {
public:
int jump(vector<int>& nums) {
int n = nums.size();
int jumps = 0, curEnd = 0, farthest = 0;
for (int i = 0; i < n - 1; ++i) {
if (i + nums[i] > farthest) farthest = i + nums[i];
if (i == curEnd) {
++jumps;
curEnd = farthest;
}
}
return jumps;
}
};
- Time. O(n).
- Space. O(1).
Discussion
Loop to n - 2, not n - 1 — the last index doesn’t need a jump out of it. Stopping early avoids an off-by-one where you’d increment jumps one too many times.
This is BFS on an implicit graph where each index has edges to [i+1, ..., i+nums[i]]. Level-by-level BFS finds the shortest path, and the greedy formulation compresses it into a single pass.
78. Gas Station
LC 134 · Medium · Greedy
Problem Statement
You have n gas stations in a circle. gas[i] is the gas available at station i, cost[i] is the cost to drive from station i to i + 1. Return the starting index that lets you complete a full loop, or -1 if none exists.
Constraints. 1 ≤ n ≤ 10^5.
Solution
If sum(gas) < sum(cost), the loop is impossible. Otherwise, walk once tracking a running tank. If the tank ever goes negative at index i, no start in [prevStart..i] works — they can’t even reach here. Reset start to i + 1 and tank to 0.
class Solution {
public:
int canCompleteCircuit(vector<int>& gas, vector<int>& cost) {
int total = 0, tank = 0, start = 0;
int n = gas.size();
for (int i = 0; i < n; ++i) {
int diff = gas[i] - cost[i];
total += diff;
tank += diff;
if (tank < 0) {
start = i + 1;
tank = 0;
}
}
return total >= 0 ? start : -1;
}
};
- Time. O(n).
- Space. O(1).
Discussion
The key insight: if a journey starting at s fails at position i, then no position in [s, i] can succeed either, because any of them would hit i with less tank than s did. So skipping forward to i + 1 is safe.
The total >= 0 check on total gas - cost is what tells you whether any start is feasible. If yes, the greedy pointer start must be valid.
79. Hand of Straights
LC 846 · Medium · Greedy
Problem Statement
Given an array hand and integer groupSize, return whether the cards can be arranged into groups of groupSize consecutive integers.
Constraints. 1 ≤ n ≤ 10^4.
Solution
Count frequencies in a map (sorted). For each smallest available card x with count c > 0, decrement x, x+1, ..., x+groupSize-1 each by c. If any of these are missing enough copies, return false.
class Solution {
public:
bool isNStraightHand(vector<int>& hand, int groupSize) {
int n = hand.size();
if (n % groupSize) return false;
map<int,int> cnt;
for (int i = 0; i < n; ++i) ++cnt[hand[i]];
while (!cnt.empty()) {
int x = cnt.begin()->first;
int c = cnt.begin()->second;
for (int k = 0; k < groupSize; ++k) {
map<int,int>::iterator it = cnt.find(x + k);
if (it == cnt.end() || it->second < c) return false;
it->second -= c;
if (it->second == 0) cnt.erase(it);
}
}
return true;
}
};
- Time. O(n log n).
- Space. O(n).
Discussion
Greedy on the smallest available card is forced: that card must be the start of some group (no smaller card exists to extend into it). So the choice is unique at every step.
Using map keeps the keys sorted, so the smallest remaining card is always cnt.begin(); each find / erase is O(log n). A min-heap of distinct values is equivalent but requires more bookkeeping to handle multiplicities.
80. Non-overlapping Intervals
LC 435 · Medium · Greedy / Intervals
Problem Statement
Given intervals, return the minimum number you must remove to make the rest non-overlapping.
Constraints. 1 ≤ n ≤ 10^5.
Solution
Sort by end time. Greedily keep intervals whose start is ≥ the last kept end. Every skipped interval contributes 1 to the removal count.
class Solution {
public:
int eraseOverlapIntervals(vector<vector<int> >& intervals) {
int n = intervals.size();
sort(intervals.begin(), intervals.end(), cmp);
int kept = 0, lastEnd = INT_MIN;
for (int i = 0; i < n; ++i) {
if (intervals[i][0] >= lastEnd) {
++kept;
lastEnd = intervals[i][1];
}
}
return n - kept;
}
private:
static bool cmp(const vector<int>& a, const vector<int>& b) {
return a[1] < b[1];
}
};
- Time. O(n log n).
- Space. O(1).
Discussion
“Sort by end” is the textbook greedy for activity selection: always picking the interval that ends earliest leaves the most room for future intervals. Exchange argument: any optimal solution can be transformed to the greedy solution by swapping intervals without hurting the count.
Sorting by start would be wrong: [1, 100] starts earliest but blocks many shorter intervals.
81. Merge Intervals
LC 56 · Medium · Intervals
Problem Statement
Given a collection of intervals, merge overlapping ones and return the result.
Constraints. 1 ≤ n ≤ 10^4.
Solution
Sort by start. Walk through; either extend the last merged interval or start a new one.
class Solution {
public:
vector<vector<int> > merge(vector<vector<int> >& intervals) {
vector<vector<int> > ans;
if (intervals.empty()) return ans;
sort(intervals.begin(), intervals.end());
ans.push_back(intervals[0]);
for (size_t i = 1; i < intervals.size(); ++i) {
if (intervals[i][0] <= ans.back()[1]) {
if (intervals[i][1] > ans.back()[1]) ans.back()[1] = intervals[i][1];
} else {
ans.push_back(intervals[i]);
}
}
return ans;
}
};
- Time. O(n log n).
- Space. O(n) for output.
Discussion
After sorting by start, two adjacent intervals overlap iff intervals[i].start ≤ lastMerged.end. Touching intervals (==) are usually considered to overlap for this problem — LC expects them merged.
Don’t overwrite end with min — you want max to absorb fully nested intervals.
82. Insert Interval
LC 57 · Medium · Intervals
Problem Statement
Given a list of non-overlapping sorted intervals and a new interval, insert it and merge if necessary.
Constraints. 0 ≤ n ≤ 10^4.
Solution
Three phases: 1. Copy intervals strictly before the new one. 2. Merge all intervals overlapping with the new one, expanding the new one in place. 3. Copy remaining intervals.
class Solution {
public:
vector<vector<int> > insert(vector<vector<int> >& intervals,
vector<int>& newInterval) {
vector<vector<int> > ans;
int i = 0, n = intervals.size();
while (i < n && intervals[i][1] < newInterval[0]) ans.push_back(intervals[i++]);
while (i < n && intervals[i][0] <= newInterval[1]) {
newInterval[0] = min(newInterval[0], intervals[i][0]);
newInterval[1] = max(newInterval[1], intervals[i][1]);
++i;
}
ans.push_back(newInterval);
while (i < n) ans.push_back(intervals[i++]);
return ans;
}
};
- Time. O(n).
- Space. O(n) for output.
Discussion
The clean three-phase walk avoids any merging phase afterwards — the new interval is fully expanded before it’s pushed. This is why you don’t need LC 56’s merge pass.
Strict < in the first while (intervals before newInterval) and ≤ in the second (intervals overlapping — including touching) are the correct comparators.
83. LRU Cache
LC 146 · Medium · Design
Problem Statement
Implement an LRU cache with O(1) get and put.
Constraints. 1 ≤ capacity ≤ 3000; up to 3·10^5 operations.
Solution
Doubly linked list of (key, value) nodes + hash map from key to node pointer. get moves the node to the head (most recent); put inserts/updates and evicts the tail if over capacity.
class LRUCache {
struct Node { int key, val; Node *prev, *next; };
int cap;
unordered_map<int, Node*> m;
Node *head, *tail;
void remove(Node* u) { u->prev->next = u->next; u->next->prev = u->prev; }
void insertFront(Node* u) {
u->next = head->next; u->prev = head;
head->next->prev = u; head->next = u;
}
public:
LRUCache(int capacity) : cap(capacity) {
head = new Node(); tail = new Node();
head->next = tail; tail->prev = head;
}
int get(int key) {
unordered_map<int, Node*>::iterator it = m.find(key);
if (it == m.end()) return -1;
remove(it->second);
insertFront(it->second);
return it->second->val;
}
void put(int key, int value) {
unordered_map<int, Node*>::iterator it = m.find(key);
if (it != m.end()) {
it->second->val = value;
remove(it->second);
insertFront(it->second);
return;
}
if ((int)m.size() == cap) {
Node* old = tail->prev;
remove(old);
m.erase(old->key);
delete old;
}
Node* u = new Node();
u->key = key; u->val = value;
insertFront(u);
m[key] = u;
}
};
- Time. O(1) amortized per op.
- Space. O(capacity).
Discussion
Two sentinels (head, tail) eliminate the null-pointer edge cases around insertion and removal — you never need to check “is this the first / last node?” because the sentinels are always there.
The doubly-linked list gives O(1) removal given a pointer (no traversal); the hash map gives O(1) lookup to get that pointer. You need both.
list<pair<int,int>> + unordered_map<int, list<pair<int,int>>::iterator> is the STL idiomatic version. Writing your own linked list is better for interviews because interviewers want to see the pointer manipulation.
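For reference, a sketch of that STL variant, assuming the same get/put semantics (the class name LRUCacheSTL is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// STL-idiomatic LRU: std::list keeps (key, value) in recency order
// (front = most recent); the map stores list iterators so both
// move-to-front (splice) and eviction are O(1).
class LRUCacheSTL {
    size_t cap;
    list<pair<int,int> > lru;                                // (key, value)
    unordered_map<int, list<pair<int,int> >::iterator> pos;  // key -> node
public:
    LRUCacheSTL(int capacity) : cap(capacity) {}
    int get(int key) {
        unordered_map<int, list<pair<int,int> >::iterator>::iterator it = pos.find(key);
        if (it == pos.end()) return -1;
        lru.splice(lru.begin(), lru, it->second);  // move to front; iterator stays valid
        return it->second->second;
    }
    void put(int key, int value) {
        unordered_map<int, list<pair<int,int> >::iterator>::iterator it = pos.find(key);
        if (it != pos.end()) {
            it->second->second = value;
            lru.splice(lru.begin(), lru, it->second);
            return;
        }
        if (pos.size() == cap) {        // evict least recently used (back)
            pos.erase(lru.back().first);
            lru.pop_back();
        }
        lru.push_front(make_pair(key, value));
        pos[key] = lru.begin();
    }
};
```

splice within the same list relinks nodes without invalidating iterators, which is what makes the stored map entries safe.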
84. LFU Cache
LC 460 · Hard · Design
Problem Statement
Implement an LFU cache with O(1) get and put. On eviction, remove the least frequently used key; break ties by LRU.
Constraints. 0 ≤ capacity ≤ 10^4.
Solution
Maintain (1) key → (value, freq, iterator) hash map and (2) freq → list<key> hash map. On access, move the key from freqList[f] to freqList[f+1]. Track minFreq so eviction pops from freqList[minFreq].back() (the LRU within that bucket).
class LFUCache {
int cap, sz, minFreq;
unordered_map<int, pair<int, int> > kv; // key -> (val, freq)
unordered_map<int, list<int>::iterator> it; // key -> iter in freq list
unordered_map<int, list<int> > freqList; // freq -> keys (front = MRU)
public:
LFUCache(int capacity) : cap(capacity), sz(0), minFreq(0) {}
int get(int key) {
unordered_map<int, pair<int,int> >::iterator kit = kv.find(key);
if (kit == kv.end()) return -1;
touch(key);
return kit->second.first;
}
void put(int key, int value) {
if (cap == 0) return;
unordered_map<int, pair<int,int> >::iterator kit = kv.find(key);
if (kit != kv.end()) {
kit->second.first = value;
touch(key);
return;
}
if (sz == cap) {
int old = freqList[minFreq].back();
freqList[minFreq].pop_back();
kv.erase(old);
it.erase(old);
--sz;
}
kv[key] = make_pair(value, 1);
freqList[1].push_front(key);
it[key] = freqList[1].begin();
minFreq = 1;
++sz;
}
private:
void touch(int key) {
int f = kv[key].second;
freqList[f].erase(it[key]);
if (freqList[f].empty() && minFreq == f) ++minFreq;
++kv[key].second;
int nf = f + 1;
freqList[nf].push_front(key);
it[key] = freqList[nf].begin();
}
};
- Time. O(1) per op.
- Space. O(capacity).
Discussion
Three hash maps look like a lot but each is necessary: kv for value+freq lookup, it for O(1) removal from the frequency list, freqList for per-frequency LRU ordering.
minFreq only increases when the current min-freq bucket empties after an access. On insertion of a new key, minFreq drops back to 1 — but that’s the correct new min because a freshly-inserted key has freq 1, which is ≤ any current bucket.
Iterators into std::list are stable across erase of other elements, which is why storing them in it is safe.
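A minimal demonstration of that stability guarantee (the helper name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Erasing one std::list element leaves iterators to every other
// element valid — the property the `it` map in the LFU relies on.
int survivingValue() {
    list<int> l;
    l.push_back(10); l.push_back(20); l.push_back(30);
    list<int>::iterator a = l.begin();   // -> 10
    list<int>::iterator b = a; ++b;      // -> 20
    l.erase(b);                          // erase a *different* element
    return *a;                           // a is still valid
}
```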
85. Design Twitter
LC 355 · Medium · Design / Heap
Problem Statement
Design a simplified Twitter:
- postTweet(userId, tweetId)
- getNewsFeed(userId) — 10 most recent tweets from the user and those they follow
- follow(followerId, followeeId)
- unfollow(followerId, followeeId)
Constraints. Up to 3000 operations.
Solution
Per-user vector of tweets, each tagged with a global timestamp. getNewsFeed seeds a max-heap with the latest tweet of each followed user (plus self), then pops up to 10 times, pushing the popped user’s next-older tweet each time.
class Twitter {
int clock;
unordered_map<int, vector<pair<int,int> > > tweets; // user -> (time, id)
unordered_map<int, unordered_set<int> > following; // user -> set
public:
Twitter() : clock(0) {}
void postTweet(int userId, int tweetId) {
tweets[userId].push_back(make_pair(++clock, tweetId));
}
vector<int> getNewsFeed(int userId) {
// max-heap of (time, tweetId, user, index)
priority_queue<vector<int> > pq;
unordered_set<int>& fs = following[userId];
fs.insert(userId);
for (unordered_set<int>::iterator it = fs.begin(); it != fs.end(); ++it) {
vector<pair<int,int> >& v = tweets[*it];
if (!v.empty()) {
vector<int> e(4);
e[0] = v.back().first;
e[1] = v.back().second;
e[2] = *it;
e[3] = (int)v.size() - 1;
pq.push(e);
}
}
fs.erase(userId);
vector<int> ans;
while (!pq.empty() && (int)ans.size() < 10) {
vector<int> e = pq.top(); pq.pop();
ans.push_back(e[1]);
if (e[3] > 0) {
--e[3];
vector<pair<int,int> >& v = tweets[e[2]];
e[0] = v[e[3]].first;
e[1] = v[e[3]].second;
pq.push(e);
}
}
return ans;
}
void follow(int followerId, int followeeId) {
if (followerId != followeeId) following[followerId].insert(followeeId);
}
void unfollow(int followerId, int followeeId) {
following[followerId].erase(followeeId);
}
};
- Time. postTweet O(1); getNewsFeed O(f log f + 10 log f); follow/unfollow O(1).
- Space. O(tweets + follows).
Discussion
Storing tweets in per-user lists (instead of one global list) keeps postTweet cheap and lets getNewsFeed k-way-merge across followed lists. It’s exactly the LC 23 “Merge k Sorted Lists” pattern applied to reverse-chronological tweets.
Including self in the merge is easiest done by inserting-then-erasing on following[userId].
The monotonic clock gives a total order on tweets without any time synchronization logic. Each tweet is timestamped at post time.
86. Permutations
LC 46 · Medium · Backtracking
Problem Statement
Given an array of distinct integers, return all possible permutations.
Constraints. 1 ≤ n ≤ 8.
Solution
Backtracking: swap-in-place. At depth d, try each nums[i] for i ∈ [d, n), swap it into position d, recurse to depth d + 1, swap back.
class Solution {
public:
vector<vector<int> > permute(vector<int>& nums) {
vector<vector<int> > ans;
bt(nums, 0, ans);
return ans;
}
private:
void bt(vector<int>& nums, int d, vector<vector<int> >& ans) {
if (d == (int)nums.size()) { ans.push_back(nums); return; }
for (int i = d; i < (int)nums.size(); ++i) {
swap(nums[d], nums[i]);
bt(nums, d + 1, ans);
swap(nums[d], nums[i]);
}
}
};
- Time. O(n · n!).
- Space. O(n) recursion.
Discussion
Swap-in-place avoids building a used[] array and uses the array itself as the state. The tradeoff: ans.push_back(nums) copies the current array on every leaf hit, which is unavoidable.
The alternative with used[] + an accumulator is easier to adapt to the “distinct permutations from duplicates” variant (LC 47), where you need to skip nums[i] == nums[i-1] && !used[i-1] to avoid duplicates.
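A sketch of that used[]-based variant, already adapted to LC 47 (function names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// used[]-based permutation backtracking with the LC 47 duplicate
// skip: after sorting, take nums[i] only if the equal previous copy
// is already in use — so each duplicate run is consumed left to right.
void btUnique(vector<int>& nums, vector<char>& used, vector<int>& cur,
              vector<vector<int> >& ans) {
    if (cur.size() == nums.size()) { ans.push_back(cur); return; }
    for (size_t i = 0; i < nums.size(); ++i) {
        if (used[i]) continue;
        if (i > 0 && nums[i] == nums[i-1] && !used[i-1]) continue;
        used[i] = 1;
        cur.push_back(nums[i]);
        btUnique(nums, used, cur, ans);
        cur.pop_back();
        used[i] = 0;
    }
}

vector<vector<int> > permuteUnique(vector<int> nums) {
    sort(nums.begin(), nums.end());
    vector<char> used(nums.size(), 0);
    vector<int> cur;
    vector<vector<int> > ans;
    btUnique(nums, used, cur, ans);
    return ans;
}
```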
87. Subsets
LC 78 · Medium · Backtracking
Problem Statement
Return all subsets of a set of distinct integers.
Constraints. 1 ≤ n ≤ 10.
Solution
For each element, decide whether to include it. Two standard routes: recursive backtracking (include/exclude at each depth) or the iterative power-set expansion used below.
class Solution {
public:
vector<vector<int> > subsets(vector<int>& nums) {
vector<vector<int> > ans;
ans.push_back(vector<int>());
for (size_t i = 0; i < nums.size(); ++i) {
int sz = ans.size();
for (int j = 0; j < sz; ++j) {
vector<int> s = ans[j];
s.push_back(nums[i]);
ans.push_back(s);
}
}
return ans;
}
};
- Time. O(n · 2^n).
- Space. O(n · 2^n) output.
Discussion
The iterative power-set expansion is cleaner than recursion: start with {{}}, then for each new element, append a copy of every existing subset with the new element added. After processing all n elements, you have all 2^n subsets.
For the duplicate-allowed variant (LC 90), sort the input and skip duplicates when you would otherwise generate a repeated subset. Easier to reason about with recursion.
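A sketch of that LC 90 recursion (names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// LC 90 subsets-with-duplicates: record the current subset at every
// node; after sorting, only the first copy of a duplicate run may be
// taken at a given depth (i > start && nums[i] == nums[i-1] is skipped).
void btDup(vector<int>& nums, int start, vector<int>& cur,
           vector<vector<int> >& ans) {
    ans.push_back(cur);
    for (int i = start; i < (int)nums.size(); ++i) {
        if (i > start && nums[i] == nums[i-1]) continue;  // skip duplicate branch
        cur.push_back(nums[i]);
        btDup(nums, i + 1, cur, ans);
        cur.pop_back();
    }
}

vector<vector<int> > subsetsWithDup(vector<int> nums) {
    sort(nums.begin(), nums.end());
    vector<int> cur;
    vector<vector<int> > ans;
    btDup(nums, 0, cur, ans);
    return ans;
}
```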
88. Combination Sum
LC 39 · Medium · Backtracking
Problem Statement
Given distinct positive candidates and a target, return all unique combinations (multiset) summing to target. Each candidate may be used any number of times.
Constraints. 1 ≤ candidates.length ≤ 30; 1 ≤ target ≤ 500.
Solution
Sort for early pruning. Backtrack with an index — for each depth, either pick the current candidate (and stay) or skip forward. Staying lets you reuse the same candidate multiple times.
class Solution {
public:
vector<vector<int> > combinationSum(vector<int>& cands, int target) {
sort(cands.begin(), cands.end());
vector<vector<int> > ans;
vector<int> cur;
bt(cands, target, 0, cur, ans);
return ans;
}
private:
void bt(vector<int>& c, int rem, int i, vector<int>& cur,
vector<vector<int> >& ans) {
if (rem == 0) { ans.push_back(cur); return; }
for (int k = i; k < (int)c.size() && c[k] <= rem; ++k) {
cur.push_back(c[k]);
bt(c, rem - c[k], k, cur, ans); // k, not k+1 (reuse allowed)
cur.pop_back();
}
}
};
- Time. Output-sensitive, bounded by the number of combinations.
- Space. O(target / min(cands)) recursion.
Discussion
Passing k (not k+1) to the recursive call is what allows reusing a candidate. For LC 40 (“each used at most once”), pass k+1 and skip duplicates with if (k > i && c[k] == c[k-1]) continue;.
Sorting enables c[k] <= rem early termination — once you exceed rem with the smallest remaining candidate, no further work is needed.
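A sketch of the LC 40 variant described above (names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// LC 40: each candidate used at most once — recurse with k+1, and
// skip equal values at the same depth to avoid duplicate combinations.
void bt2(vector<int>& c, int rem, int i, vector<int>& cur,
         vector<vector<int> >& ans) {
    if (rem == 0) { ans.push_back(cur); return; }
    for (int k = i; k < (int)c.size() && c[k] <= rem; ++k) {
        if (k > i && c[k] == c[k-1]) continue;   // skip duplicate value
        cur.push_back(c[k]);
        bt2(c, rem - c[k], k + 1, cur, ans);     // k+1: no reuse
        cur.pop_back();
    }
}

vector<vector<int> > combinationSum2(vector<int> cands, int target) {
    sort(cands.begin(), cands.end());
    vector<int> cur;
    vector<vector<int> > ans;
    bt2(cands, target, 0, cur, ans);
    return ans;
}
```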
89. Word Search
LC 79 · Medium · Backtracking
Problem Statement
Given a 2D board of letters and a word, return whether the word exists as a path of adjacent cells (no reuse).
Constraints. 1 ≤ m, n ≤ 6; 1 ≤ |word| ≤ 15.
Solution
DFS from each cell matching word[0]. At each step, temporarily mark the cell (overwrite with a sentinel), recurse into 4 neighbors, restore on the way out.
class Solution {
public:
bool exist(vector<vector<char> >& board, string word) {
int m = board.size(), n = board[0].size();
for (int i = 0; i < m; ++i)
for (int j = 0; j < n; ++j)
if (dfs(board, i, j, word, 0, m, n)) return true;
return false;
}
private:
bool dfs(vector<vector<char> >& b, int i, int j, const string& w,
int k, int m, int n) {
if (k == (int)w.size()) return true;
if (i < 0 || j < 0 || i >= m || j >= n || b[i][j] != w[k]) return false;
char save = b[i][j];
b[i][j] = '#';
bool ok = dfs(b, i+1, j, w, k+1, m, n) ||
dfs(b, i-1, j, w, k+1, m, n) ||
dfs(b, i, j+1, w, k+1, m, n) ||
dfs(b, i, j-1, w, k+1, m, n);
b[i][j] = save;
return ok;
}
};
- Time. O(m · n · 4^|word|) worst case.
- Space. O(|word|) recursion.
Discussion
The in-place mark-and-restore avoids allocating a visited matrix per path. This matters because the DFS is worst-case exponential in path length — any per-call allocation is painful.
For multiple words (LC 212), preprocess all words into a trie and walk the trie alongside the DFS. The trie-DFS variant turns k · 4^L into a single shared traversal.
90. Palindrome Partitioning
LC 131 · Medium · Backtracking
Problem Statement
Given a string, return all ways to partition it into palindromic substrings.
Constraints. 1 ≤ |s| ≤ 16.
Solution
Backtrack: try every prefix length; if that prefix is a palindrome, recurse on the rest. Precompute isPal[i][j] in O(n²) to check palindromes in O(1).
class Solution {
public:
vector<vector<string> > partition(string s) {
int n = s.size();
vector<vector<char> > isPal(n, vector<char>(n, 0));
for (int len = 1; len <= n; ++len)
for (int i = 0; i + len - 1 < n; ++i) {
int j = i + len - 1;
if (s[i] == s[j] && (len <= 2 || isPal[i+1][j-1]))
isPal[i][j] = 1;
}
vector<vector<string> > ans;
vector<string> cur;
bt(s, 0, isPal, cur, ans);
return ans;
}
private:
void bt(const string& s, int i, vector<vector<char> >& isPal,
vector<string>& cur, vector<vector<string> >& ans) {
if (i == (int)s.size()) { ans.push_back(cur); return; }
for (int j = i; j < (int)s.size(); ++j) {
if (!isPal[i][j]) continue;
cur.push_back(s.substr(i, j - i + 1));
bt(s, j + 1, isPal, cur, ans);
cur.pop_back();
}
}
};
- Time. O(n · 2^n), output-dominated.
- Space. O(n²) for the palindrome table.
Discussion
The precomputed palindrome table is crucial: without it you’d recheck the same substrings exponentially often. With it, each recursive call is O(1) per branch decision.
The DP recurrence for isPal: s[i..j] is a palindrome iff s[i] == s[j] and either j - i < 2 (0 or 1 chars between) or s[i+1..j-1] is also a palindrome. Build by increasing length so the subcase is ready when needed.
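The table build in isolation, for experimenting with the recurrence (palTable is a hypothetical helper, not part of the LC interface):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Build isPal[i][j] = 1 iff s[i..j] is a palindrome, in increasing
// length order so isPal[i+1][j-1] is always ready when needed.
vector<vector<char> > palTable(const string& s) {
    int n = s.size();
    vector<vector<char> > isPal(n, vector<char>(n, 0));
    for (int len = 1; len <= n; ++len)
        for (int i = 0; i + len - 1 < n; ++i) {
            int j = i + len - 1;
            if (s[i] == s[j] && (len <= 2 || isPal[i+1][j-1]))
                isPal[i][j] = 1;
        }
    return isPal;
}
```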
91. N-Queens
LC 51 · Hard · Backtracking
Problem Statement
Place n queens on an n × n board so none attack each other. Return all distinct board configurations.
Constraints. 1 ≤ n ≤ 9.
Solution
One queen per row. Track used columns and both diagonal types (r + c and r - c) in boolean arrays. Backtrack row by row.
class Solution {
vector<vector<string> > ans;
public:
vector<vector<string> > solveNQueens(int n) {
vector<string> board(n, string(n, '.'));
vector<char> col(n, 0), d1(2*n, 0), d2(2*n, 0);
bt(0, n, board, col, d1, d2);
return ans;
}
private:
void bt(int r, int n, vector<string>& board, vector<char>& col,
vector<char>& d1, vector<char>& d2) {
if (r == n) { ans.push_back(board); return; }
for (int c = 0; c < n; ++c) {
int a = r + c, b = r - c + n;
if (col[c] || d1[a] || d2[b]) continue;
board[r][c] = 'Q';
col[c] = d1[a] = d2[b] = 1;
bt(r + 1, n, board, col, d1, d2);
board[r][c] = '.';
col[c] = d1[a] = d2[b] = 0;
}
}
};
- Time. O(n!) worst case.
- Space. O(n).
Discussion
r + c is constant along the / diagonals; r - c is constant along the \ diagonals (shift by n to keep it non-negative). Tracking three boolean arrays gives O(1) conflict checks instead of scanning the board.
For LC 52 (just count, no board), drop board and increment a counter in the base case.
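A sketch of that counting-only variant (names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// LC 52: same column/diagonal pruning arrays, but the base case
// increments a counter instead of copying a board.
void btCount(int r, int n, vector<char>& col, vector<char>& d1,
             vector<char>& d2, int& cnt) {
    if (r == n) { ++cnt; return; }
    for (int c = 0; c < n; ++c) {
        int a = r + c, b = r - c + n;
        if (col[c] || d1[a] || d2[b]) continue;
        col[c] = d1[a] = d2[b] = 1;
        btCount(r + 1, n, col, d1, d2, cnt);
        col[c] = d1[a] = d2[b] = 0;
    }
}

int totalNQueens(int n) {
    vector<char> col(n, 0), d1(2*n, 0), d2(2*n, 0);
    int cnt = 0;
    btCount(0, n, col, d1, d2, cnt);
    return cnt;
}
```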
92. Word Break
LC 139 · Medium · DP or DFS+memo
Problem Statement
Given a string s and a list of dictionary words, return whether s can be segmented into dictionary words.
Constraints. 1 ≤ |s| ≤ 300; dictionary up to 1000 words.
Solution
dp[i] = can s[0..i) be segmented? dp[0] = true; for each i > 0, dp[i] = true iff some j < i has dp[j] && s[j..i) is in dict.
class Solution {
public:
bool wordBreak(string s, vector<string>& dict) {
unordered_set<string> d(dict.begin(), dict.end());
int maxLen = 0;
for (size_t i = 0; i < dict.size(); ++i)
if ((int)dict[i].size() > maxLen) maxLen = (int)dict[i].size();
int n = s.size();
vector<char> dp(n + 1, 0);
dp[0] = 1;
for (int i = 1; i <= n; ++i) {
int jMin = max(0, i - maxLen);
for (int j = i - 1; j >= jMin; --j) {
if (dp[j] && d.find(s.substr(j, i - j)) != d.end()) {
dp[i] = 1;
break;
}
}
}
return dp[n] != 0;
}
};
- Time. O(n · maxLen) hash lookups.
- Space. O(n).
Discussion
Limiting the inner loop to maxLen is a huge constant-factor win: without it you’d check every substring of s[0..i), which is O(n²).
A trie-based version walks forward from each position along the trie, matching up to maxLen characters without materializing substrings or hashing, and is slightly faster in practice.
For LC 140 “Word Break II” (return all segmentations), use DFS + memoization keyed by starting index — caching the list of strings at each index.
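A sketch of that memoized DFS for LC 140 (names are ours; the cache maps each start index to the full list of segmentations of s[i..)):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Memoized DFS: segmentations of s[i..). The empty string marks the
// base case so callers can tell "end of string" from "dead end".
vector<string> dfsWB(const string& s, int i, unordered_set<string>& d,
                     unordered_map<int, vector<string> >& memo) {
    if (memo.count(i)) return memo[i];
    vector<string> res;
    if (i == (int)s.size()) { res.push_back(""); memo[i] = res; return res; }
    for (int j = i + 1; j <= (int)s.size(); ++j) {
        string w = s.substr(i, j - i);
        if (d.find(w) == d.end()) continue;
        vector<string> rest = dfsWB(s, j, d, memo);
        for (size_t k = 0; k < rest.size(); ++k)
            res.push_back(rest[k].empty() ? w : w + " " + rest[k]);
    }
    memo[i] = res;
    return res;
}

vector<string> wordBreak2(const string& s, vector<string>& dict) {
    unordered_set<string> d(dict.begin(), dict.end());
    unordered_map<int, vector<string> > memo;
    return dfsWB(s, 0, d, memo);
}
```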
93. Generate Parentheses
LC 22 · Medium · Backtracking
Problem Statement
Given n, return all well-formed strings of n pairs of parentheses.
Constraints. 1 ≤ n ≤ 8.
Solution
Backtrack tracking open and close counts. Can place ( if open < n; can place ) if close < open.
class Solution {
public:
vector<string> generateParenthesis(int n) {
vector<string> ans;
string cur;
bt(n, 0, 0, cur, ans);
return ans;
}
private:
void bt(int n, int open, int close, string& cur, vector<string>& ans) {
if ((int)cur.size() == 2 * n) { ans.push_back(cur); return; }
if (open < n) { cur.push_back('('); bt(n, open+1, close, cur, ans); cur.pop_back(); }
if (close < open) { cur.push_back(')'); bt(n, open, close+1, cur, ans); cur.pop_back(); }
}
};
- Time. O(C_n) Catalan — ~4^n / n^1.5.
- Space. O(n) recursion.
Discussion
The two constraints (open ≤ n, close ≤ open) exactly define a valid prefix of a well-formed string. Backtracking that only explores valid prefixes is much faster than generating all 2^(2n) strings and filtering.
The output size is the n-th Catalan number. You can’t do better asymptotically — you must enumerate them all.
94. Single Number
LC 136 · Easy/Medium · Bit Manipulation
Problem Statement
Given an array where every element appears twice except one, find the one that appears once.
Constraints. 1 ≤ n ≤ 3·10^4.
Solution
XOR everything. Pairs cancel, leaving the odd one out.
class Solution {
public:
int singleNumber(vector<int>& nums) {
int x = 0;
for (size_t i = 0; i < nums.size(); ++i) x ^= nums[i];
return x;
}
};
- Time. O(n).
- Space. O(1).
Discussion
XOR’s two key properties: a ^ a = 0 and a ^ 0 = a. The order of operations doesn’t matter (commutative and associative), so all pairs cancel regardless of how they’re interleaved.
For LC 137 “Single Number II” (every element thrice except one), track two bitmasks ones and twos to count occurrences mod 3 per bit — cleverer but the same core idea.
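A sketch of that ones/twos counter (the function name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// LC 137: per-bit occurrence counter mod 3. After processing x,
// `ones` holds bits seen 1 time (mod 3) and `twos` bits seen 2 times;
// a bit reaching 3 is cleared from both by the & ~ masks.
int singleNumber2(const vector<int>& nums) {
    int ones = 0, twos = 0;
    for (size_t i = 0; i < nums.size(); ++i) {
        ones = (ones ^ nums[i]) & ~twos;
        twos = (twos ^ nums[i]) & ~ones;
    }
    return ones;   // the element that appeared once
}
```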
95. Number of 1 Bits
LC 191 · Easy/Medium · Bit Manipulation
Problem Statement
Return the number of 1 bits in an unsigned integer.
Constraints. 0 ≤ n ≤ 2^32 - 1.
Solution
Brian Kernighan’s trick: n & (n - 1) clears the lowest set bit. Iterate until n == 0.
class Solution {
public:
int hammingWeight(uint32_t n) {
int count = 0;
while (n) { n &= n - 1; ++count; }
return count;
}
};
- Time. O(popcount(n)).
- Space. O(1).
Discussion
Brian Kernighan’s trick is faster than iterating over every bit — you only do as much work as there are set bits, which averages half the bits on random inputs.
The compiler builtin __builtin_popcount is even faster (single instruction on modern x86) but may be frowned upon in interviews depending on the interviewer.
96. Pow(x, n)
LC 50 · Medium · Math / Binary Exponentiation
Problem Statement
Implement pow(x, n), where n can be negative.
Constraints. -2^31 ≤ n ≤ 2^31 - 1.
Solution
Fast exponentiation: if n is even, x^n = (x^(n/2))^2; if odd, x^n = x · x^(n-1). Iterative version avoids stack frames.
Handle n negative by converting to long long and inverting x (beware INT_MIN overflow on negation).
class Solution {
public:
double myPow(double x, int n) {
long long N = n;
if (N < 0) { x = 1.0 / x; N = -N; }
double result = 1.0;
while (N > 0) {
if (N & 1) result *= x;
x *= x;
N >>= 1;
}
return result;
}
};
- Time. O(log |n|).
- Space. O(1).
Discussion
Widening n to long long before negating is mandatory: -INT_MIN overflows int. This is a common integer-overflow pitfall.
The iterative form is the one to memorize — it has no recursion overhead and one clean invariant: x holds the current power, result accumulates the selected factors.
97. Sqrt(x)
LC 69 · Easy/Medium · Binary Search
Problem Statement
Return floor(sqrt(x)) for a non-negative integer x.
Constraints. 0 ≤ x ≤ 2^31 - 1.
Solution
Binary search for the largest m with m * m ≤ x. Use long long for m * m to avoid overflow.
class Solution {
public:
int mySqrt(int x) {
long long lo = 0, hi = x;
long long ans = 0;
while (lo <= hi) {
long long m = (lo + hi) / 2;
if (m * m <= (long long)x) { ans = m; lo = m + 1; }
else hi = m - 1;
}
return (int)ans;
}
};
- Time. O(log x).
- Space. O(1).
Discussion
Newton’s method (m = (m + x/m) / 2) converges in O(log log x) iterations and is asymptotically the fastest approach — but it requires care to return the integer floor exactly.
Binary search is the safer answer in interviews: easy to reason about, easy to get right.
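For completeness, one careful integer formulation of Newton’s method (isqrt is a name of our choosing): start at or above the root and iterate downward until r² ≤ x.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Integer Newton iteration for floor(sqrt(x)): the update
// r = (r + x/r) / 2 decreases monotonically toward the root once
// r >= sqrt(x), so the loop exits exactly at the integer floor.
int isqrt(int x) {
    if (x < 2) return x;
    long long r = x;                       // start above the root
    while (r * r > x) r = (r + x / r) / 2; // long long avoids overflow in r*r
    return (int)r;
}
```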
98. Implement Trie (Prefix Tree)
LC 208 · Medium · Trie
Problem Statement
Implement insert, search, and startsWith on a trie.
Constraints. ≤ 3·10^4 operations; lowercase letters only.
Solution
Each node has 26 child pointers and a terminal flag.
class Trie {
struct Node { Node* ch[26]; bool term;
Node() : term(false) { for (int i = 0; i < 26; ++i) ch[i] = NULL; } };
Node* root;
public:
Trie() { root = new Node(); }
void insert(string word) {
Node* cur = root;
for (size_t i = 0; i < word.size(); ++i) {
int c = word[i] - 'a';
if (!cur->ch[c]) cur->ch[c] = new Node();
cur = cur->ch[c];
}
cur->term = true;
}
bool search(string word) { return walk(word, true); }
bool startsWith(string prefix) { return walk(prefix, false); }
private:
bool walk(const string& s, bool needTerm) {
Node* cur = root;
for (size_t i = 0; i < s.size(); ++i) {
int c = s[i] - 'a';
if (!cur->ch[c]) return false;
cur = cur->ch[c];
}
return !needTerm || cur->term;
}
};
- Time. O(|word|) per op.
- Space. O(σ · total word length).
Discussion
Fixed-size child array (26 pointers) is fastest for lowercase-only input. For sparse alphabets, use unordered_map<char, Node*> instead — trades cache locality for memory savings.
The term flag distinguishes “word ends here” from “prefix pass-through”; search vs startsWith is exactly this distinction.
99. Design Add and Search Words Data Structure
LC 211 · Medium · Trie / DFS
Problem Statement
Design a data structure supporting addWord(word) and search(word) where word may include '.', a wildcard matching any single letter.
Constraints. Up to 10^4 operations; words up to 40 chars.
Solution
Build a trie as in LC 208. For search, DFS over the trie — on a wildcard, try all 26 children.
class WordDictionary {
struct Node { Node* ch[26]; bool term;
Node() : term(false) { for (int i = 0; i < 26; ++i) ch[i] = NULL; } };
Node* root;
public:
WordDictionary() { root = new Node(); }
void addWord(string word) {
Node* cur = root;
for (size_t i = 0; i < word.size(); ++i) {
int c = word[i] - 'a';
if (!cur->ch[c]) cur->ch[c] = new Node();
cur = cur->ch[c];
}
cur->term = true;
}
bool search(string word) { return dfs(root, word, 0); }
private:
bool dfs(Node* u, const string& w, int i) {
if (!u) return false;
if (i == (int)w.size()) return u->term;
char c = w[i];
if (c == '.') {
for (int k = 0; k < 26; ++k)
if (u->ch[k] && dfs(u->ch[k], w, i + 1)) return true;
return false;
}
return dfs(u->ch[c - 'a'], w, i + 1);
}
};
- Time. Worst case O(26^w) with many wildcards; usually much less.
- Space. Proportional to the total length of added words.
Discussion
Wildcards turn this from a simple trie walk into a branching DFS. In the pathological case of all-wildcard queries, you visit every node — but in practice most queries have enough non-wildcard characters to prune aggressively.
Iterative BFS with a queue of current nodes is an alternative; functionally equivalent but slightly more code.
100. Word Search II
LC 212 · Hard · Trie / DFS
Problem Statement
Given a 2D board and a list of words, return all words found on the board as paths of adjacent cells (no cell reuse per word).
Constraints. 1 ≤ m, n ≤ 12; up to 3·10^4 words.
Solution
Build a trie of all words. DFS from each cell, walking the trie alongside. When you reach a trie node with a stored word, record it and clear the word (to avoid duplicates). Prune empty trie branches to keep the DFS bounded.
struct TrieNode {
TrieNode* ch[26];
string word;
TrieNode() : word("") { for (int i = 0; i < 26; ++i) ch[i] = NULL; }
};
class Solution {
vector<string> ans;
public:
vector<string> findWords(vector<vector<char> >& board, vector<string>& words) {
TrieNode* root = new TrieNode();
for (size_t i = 0; i < words.size(); ++i) {
TrieNode* cur = root;
const string& w = words[i];
for (size_t j = 0; j < w.size(); ++j) {
int c = w[j] - 'a';
if (!cur->ch[c]) cur->ch[c] = new TrieNode();
cur = cur->ch[c];
}
cur->word = w;
}
int m = board.size(), n = board[0].size();
for (int i = 0; i < m; ++i)
for (int j = 0; j < n; ++j)
dfs(board, i, j, root, m, n);
return ans;
}
private:
void dfs(vector<vector<char> >& b, int i, int j, TrieNode* u, int m, int n) {
char ch = b[i][j];
if (ch == '#' || !u->ch[ch - 'a']) return;
TrieNode* nxt = u->ch[ch - 'a'];
if (!nxt->word.empty()) {
ans.push_back(nxt->word);
nxt->word = ""; // dedup
}
b[i][j] = '#';
if (i + 1 < m) dfs(b, i + 1, j, nxt, m, n);
if (i > 0) dfs(b, i - 1, j, nxt, m, n);
if (j + 1 < n) dfs(b, i, j + 1, nxt, m, n);
if (j > 0) dfs(b, i, j - 1, nxt, m, n);
b[i][j] = ch;
}
};
- Time. Hard to bound tightly; dominated by the trie DFS across the board. Pruning via word = "" and optional empty-child pruning keeps it manageable.
- Space. O(total word length) for the trie.
Discussion
The trie is what makes this efficient. Running LC 79 (Word Search) words.size() times would be O(m · n · 4^L · |words|) — too slow. Walking the trie lets every DFS step contribute to all words sharing that prefix.
Clearing nxt->word after reporting is a cheap dedup — you never report the same word twice.
A further optimization is pruning empty trie branches: after a DFS finishes, if the current trie node has no children, its parent can delete the link. This keeps the working trie small as words are found. Not required for correctness but useful for tight time limits.