Audits, not Reviews
Most product organisations rely on a chain of reviews and sign-offs to control quality: peer reviews of research, design crits, content crits, architecture sign-offs, PR code reviews. Each one inserts a wait into the flow of work, and together they account for a large share of why lead times are longer than they need to be.
The ZeroBlockers position is that the quality benefit of these reviews is real, but the blocking is not necessary to get it. Replacing them with non-blocking audits run by the Enabling Team preserves the quality and removes the wait.
Goal
To keep the cross-team, cross-discipline quality scrutiny that reviews provide, without making every piece of work wait for someone else's calendar before it can move.
Common review approaches
Common examples include the following — not an exhaustive list:
| Approach | Who uses it | What gets reviewed |
|---|---|---|
| Peer Review | Researchers | Methodology and findings |
| Design Crits | Designers | Prototypes and internal designs |
| Content Crits | UX Writers | Copy and micro-copy |
| Architecture Sign Off | Developers | Planned system architecture |
| PR Code Review | Developers | Code changes before merge |
| Change Control / CAB | Operations / Release Management | Production changes before deployment |
| Security Review | Security team | Threat models and sensitive changes |
Each of these is well-intentioned. Each of them adds wait time, context-switching, and the overhead of scheduling another person's attention.
How audits replace reviews
An audit is a non-blocking review run by an Enabling Team after the work has happened (or alongside it), rather than a blocking review run by a peer before the work can proceed.
The core idea is that the Stream Team owns the quality of its own work, with peer support inside the team via teaming and pairing. The Enabling Team periodically samples work across teams, audits it against the agreed standards, and feeds findings back to the team and to the wider community of practice.
For each common review type, there is an audit equivalent:
| Blocking review | Non-blocking audit |
|---|---|
| Peer Review of research methodology | The Research Enabling Team audits a sample of research outputs each cycle, surfaces findings, updates the playbook. |
| Design Crit | The Design Enabling Team audits design output across teams, runs a community-of-practice forum where designers share work openly. |
| Content Crit | The Content Enabling Team audits published content against tone-of-voice and writing standards. |
| Architecture Sign Off | The Engineering Enabling Team audits architectural decisions via Architectural Decision Records and aggregate fitness functions (see the sketch after this table). |
| PR Code Review | Inside-team review happens through teaming or pairing. The Engineering Enabling Team audits aggregate code quality metrics and samples PRs from across teams. |
| Change Control / CAB | Continuous deployment with automated checks and progressive rollout. The Operations Enabling Team audits change patterns and incident causes. |
| Security Review | The Security Enabling Team runs proactive threat modelling on high-risk areas and audits security posture across teams via automated scanning and sampled deep-dives. |
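The "aggregate fitness functions" mentioned above can be made concrete. A fitness function is an automated check that asserts an architectural property on every commit, so the Engineering Enabling Team audits results and trends rather than approving individual changes. Below is a minimal sketch in Python; the package name, directory layout, and layering rule are illustrative assumptions, not part of ZeroBlockers.

```python
# fitness_layering_test.py -- a minimal architectural fitness function (illustrative).
# Assumed rule: modules under src/payments/ must not import from the "experimental"
# package. It runs in CI on every commit; the Engineering Enabling Team reviews
# failures and trends across teams instead of approving individual changes.
import ast
from pathlib import Path

FORBIDDEN_PACKAGE = "experimental"    # assumed package name
AUDITED_TREE = Path("src/payments")   # assumed directory layout


def imported_packages(source: str) -> set[str]:
    """Return the top-level package names imported by a Python source file."""
    packages: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            packages.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            packages.add(node.module.split(".")[0])
    return packages


def test_payments_does_not_import_experimental():
    violations = [
        str(path)
        for path in AUDITED_TREE.rglob("*.py")
        if FORBIDDEN_PACKAGE in imported_packages(path.read_text())
    ]
    assert not violations, f"Layering rule broken in: {violations}"
```

Because the rule is executable, an Architectural Decision Record can point at the fitness function that enforces it, and the audit becomes a conversation about trends and exceptions rather than a gate on each change.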
The audit cadence depends on the risk level of the work. High-risk areas (security, payments, regulatory compliance) get more frequent or more thorough audits. Low-risk areas may be audited only quarterly.
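As a sketch of what risk-based cadence can look like, the snippet below picks a cycle's audit sample, assuming each auditable artifact (a PR, a research report, a design file) is tagged with a risk tier. The tiers, sample rates, and cadences are made-up defaults for illustration, not prescribed values.

```python
# audit_sampling.py -- illustrative sketch of risk-weighted audit sampling.
# Assumption: every auditable artifact (PR, research report, design file) is tagged
# with a risk tier. The rates and cadences below are made-up defaults, not policy.
import random
from dataclasses import dataclass

# risk tier -> (fraction of artifacts sampled, audit cadence in weeks)
AUDIT_POLICY = {
    "high": (0.50, 2),     # e.g. security, payments, regulatory compliance
    "medium": (0.20, 6),
    "low": (0.05, 13),     # roughly quarterly
}


@dataclass
class Artifact:
    identifier: str    # e.g. a PR link or document URL
    team: str
    risk_tier: str


def select_audit_sample(artifacts: list[Artifact], week: int) -> list[Artifact]:
    """Pick which artifacts the Enabling Team audits in a given week."""
    sample: list[Artifact] = []
    for tier, (rate, cadence_weeks) in AUDIT_POLICY.items():
        if week % cadence_weeks != 0:
            continue  # this tier is not due for an audit this week
        candidates = [a for a in artifacts if a.risk_tier == tier]
        if not candidates:
            continue
        size = max(1, round(len(candidates) * rate))
        sample.extend(random.sample(candidates, size))
    return sample
```

The shape matters more than the numbers: the Enabling Team adjusts the rates and cadences as audit findings show where the real risk sits.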
Why this works
Four reasons audits produce comparable quality with much shorter lead times:
- Quality shifts left. Most issues that a downstream reviewer would catch are caught during creation instead. Pairing surfaces design and code issues as the work happens; type checks, automated tests, security scanners, and accessibility linters catch their classes of issue at commit time. The audit covers the residual, not the front line.
- The Stream Team owns the outcome. Quality is not someone else's problem to enforce. The team has skin in the game and is responsible for the quality of what it ships.
- The Enabling Team's audit is more useful than a typical peer review. A reviewer asked to approve one PR sees one PR. An auditor sampling across the org sees patterns: "three teams have made the same architectural mistake this quarter" is a more useful insight than approving the third instance of it.
- Standards stay current. When the Enabling Team is doing the audits, they see where the playbook is wrong or out of date. The standards evolve based on real evidence rather than the author's preferences.
When you still need blocking reviews
Audits don't replace blocking review for everything. Cases where blocking is the right call:
- Regulatory and compliance. Some industries require evidence of independent review at specific gates. Treat these as the exception, not the default.
- External contributions. Code or content from outside the team arrives without shared context, so a blocking review is the team's first interaction with the work before it lands.
For everything else, audits are the better trade-off.
Anti-patterns
- Audits with no teeth. If the audit produces findings that are never acted on, the practice degrades into a tick-box exercise and quality drifts down.
- Audits used as performance management. Auditing is about systemic improvement, not individual blame. Findings should feed the playbook and the community of practice, not annual reviews.
- Audits that re-create blocking reviews. "We'll audit every change before merge" is just a renamed PR review with extra steps.
- No transparency about the audit findings. If findings are hidden, or only the negatives are shared, auditors earn a reputation for being scolds. Publish the findings, and share what worked too.