Code Review Trade-offs
Code review by pull request is a near-universal practice in modern software teams. The author writes code on a branch, opens a PR, and waits for a reviewer; the reviewer asks for changes; the author responds; the cycle repeats until the PR is merged. The practice has real benefits: a second pair of eyes catches bugs, knowledge spreads across the team, and code quality holds up over time. The practice also has costs that are easy to overlook.
Goal
To make a deliberate choice about when PR-based code review is the right tool, and when teaming or pairing produces better results at lower cost.
Context
PR review is asynchronous by design. The author finishes a piece of work, hands it off, and moves to something else while waiting. The reviewer eventually picks it up, reads in, comments, and hands back. This pattern produces three costs that are particularly hard on stream teams trying to optimise for lead time.
Cost 1: Code review takes real time, because reviewing code is hard
A reviewer is asked to evaluate three different things in one pass:
- Correctness. Does the code do what it is supposed to? Are there bugs? This requires understanding the requirements the author was working from.
- Code quality. Is the code understandable, maintainable, well-structured, simple, documented, and consistent with the team's conventions?
- Non-functionals. Security, performance, test coverage, test quality.
Each of those requires a different mental model. Doing all three well on a 200-line PR is a 30-to-60-minute task, and most reviewers don't block out that much uninterrupted time for it. So the review either happens in fragments or is rushed.
Cost 2: PR review forces context switching
Each handoff is a context-switch event for somebody. The author switches away from the work to wait for the review and starts something else. The reviewer switches away from their own work to read in on the author's. When the review comes back, the author switches back to the original work, often hours or a day later. By the time it merges, the author has paid two context switches and the reviewer has paid one.
Context switching has a measured cost: the American Psychological Association estimates task switching at up to a 40% productivity loss in knowledge work. On a typical PR cycle, that loss is paid two or three times.
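As a back-of-envelope illustration, the refocusing overhead on a single review round trip can be sketched as follows. Every number here is an assumption chosen to make the arithmetic concrete, not a measurement from any particular team:

```python
# Illustrative model of context-switch overhead on one PR review cycle.
# SWITCH_RECOVERY_MIN is an assumed refocusing time per switch,
# not a measured value.
SWITCH_RECOVERY_MIN = 23

def pr_cycle_overhead(author_switches=2, reviewer_switches=1):
    """Minutes of refocusing time paid across one review round trip."""
    return (author_switches + reviewer_switches) * SWITCH_RECOVERY_MIN

print(pr_cycle_overhead())  # 3 switches * 23 min = 69 minutes per cycle
```

Two review rounds double that figure, which is how a one-hour change ends up costing most of an afternoon in recovery time alone.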
Cost 3: Lead time grows with team size
The wait time for a review is a function of how busy the reviewer is. As the team grows and more PRs need review, the average wait grows with it. Teams that started with same-day reviews drift to next-day, then to two-day, and the lead time of every individual change grows accordingly.
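The nonlinearity here is worth making explicit. A minimal single-reviewer queueing sketch (M/M/1 steady state, with assumed arrival and service rates) shows how the average wait explodes as the reviewer approaches full utilisation:

```python
# M/M/1 steady-state time in system: W = 1 / (mu - lambda).
# The rates below are assumptions for illustration, not data.

def avg_review_wait_hours(prs_per_day, reviews_per_day):
    """Mean hours a PR spends queued plus in review, single reviewer."""
    assert prs_per_day < reviews_per_day, "at or above capacity the queue grows without bound"
    return 24.0 / (reviews_per_day - prs_per_day)

for prs in (2, 4, 6, 7):
    print(prs, "PRs/day ->", avg_review_wait_hours(prs, 8), "hours")
# 2 -> 4.0, 4 -> 6.0, 6 -> 12.0, 7 -> 24.0
```

Whether or not the Markovian assumptions hold for any real team, the shape of the curve is the point: wait time is not linear in load, so a team that doubles its PR volume can more than double its review latency.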
What teaming and pairing offer instead
When two or more people work on the same piece of code at the same time, the review happens during the writing. The trade-off changes:
- Correctness, quality, and non-functionals are evaluated continuously, with the context fully loaded into both people's heads. The decisions are made when they are cheapest to make.
- No handoff. There is no separate review step, so there is no waiting and no context-switch cost.
- Knowledge transfer is automatic. PR review distributes knowledge slowly, in small fragments. Teaming and pairing distribute it as the work happens.
The cost of teaming and pairing is that the work consumes more person-hours, since two or three people are working on the same piece of code at once. The compensating gain is that the work avoids the review queue, the context-switch cost, and most of the rework. Hunter Industries and SVT Interactive both report shorter end-to-end lead times than their PR-based equivalents, with higher quality.
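One way to see the trade-off is a back-of-envelope comparison of one medium-sized change done by PR versus by pairing. All the figures below are assumptions (the pairing slowdown, the queue wait, the switch cost); the point is the shape of the result, not the numbers:

```python
# Hypothetical lead-time vs. effort comparison; every constant is assumed.
WRITE_H = 4.0          # assumed hands-on writing time, solo
REVIEW_WAIT_H = 18.0   # assumed total queue wait across review rounds
REVIEW_H = 1.0         # assumed reviewer reading time
SWITCH_H = 3 * 0.4     # three context switches, assumed ~25 min refocus each
PAIR_SLOWDOWN = 1.25   # assumed elapsed-time penalty for writing as a pair

pr_lead_h = WRITE_H + REVIEW_WAIT_H + REVIEW_H + SWITCH_H  # elapsed time
pair_lead_h = WRITE_H * PAIR_SLOWDOWN                      # elapsed, no queue

pr_person_h = WRITE_H + REVIEW_H + SWITCH_H                # total effort
pair_person_h = 2 * pair_lead_h                            # two people, whole time

print(f"PR:      lead time {pr_lead_h:.1f}h, effort {pr_person_h:.1f} person-hours")
print(f"Pairing: lead time {pair_lead_h:.1f}h, effort {pair_person_h:.1f} person-hours")
```

Under these assumed numbers, pairing spends more person-hours (10.0 versus 6.2) but delivers in 5 hours instead of 24: the two practices optimise for different variables, which is exactly the choice worth making deliberately.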
When PR review still makes sense
Teaming and pairing are not always the right answer:
- Cross-team contributions. When someone outside the team contributes code (open source, a different stream team), PR review is the right tool, because no shared context exists.
- Trivial changes. Typos, dependency bumps, documentation fixes. The cost of pulling in a second person to pair on these is higher than the cost of a fast review.
- Audit and compliance trails. Some regulated industries require evidence of independent review. PRs produce an artefact that teaming does not.
- Fully remote teams without strong synchronous habits. Teaming over screen-share is workable but has a higher friction cost than co-located teaming. A team that hasn't built the habit may be better off with PRs and short feedback loops.
Anti-patterns
- Treating PR review as the default for all changes. It accumulates lead time without anyone noticing, because the cost is distributed across many small waits.
- Long-lived branches with batch reviews. Compounds the review cost: the reviewer is now reading hundreds of lines at once, the author has lost context on the early parts, and merge conflicts are likely.
- Approving without reading. When PR review becomes a bottleneck, reviewers start rubber-stamping. The team has all the cost of the practice and none of the benefit.
- Refusing to consider alternatives. "We've always done PRs" is not an argument. The question is whether the practice is producing the value the team is paying for.