Note · APR 2, 2026 · 1 MIN READ

Most teams think AI code review happens after a PR opens. That's too late. Here's the two-Claude pattern Anthropic uses internally, and how to steal it.

Don't let Claude grade its own homework.

BY ALEX TONG · TAGGED Hands-On

Most teams think of AI code review as something that happens after a PR is opened.

That's too late.

The Claude Code team at Anthropic does something I think more teams should straight up steal:

  • Claudey A (my pet name for Claude) writes the plan.
  • Claudey B reviews it as a "staff engineer": skeptical, looking for edge cases, questioning assumptions.

Two agents. Different roles. The writer doesn't review its own work.

Think about why this matters. When you let the same agent that wrote the code also validate it, you're literally asking the student to grade their own test. Of course it looks correct; it was designed to look correct.
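Here's a minimal sketch of the split using the Anthropic Python SDK. The model name, prompts, and task are my placeholders, not Anthropic's internal setup; the point is that the reviewer call shares zero history with the writer call.

```python
# Writer/reviewer split: two independent API calls, no shared context.
# Model name, prompts, and task are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # substitute whatever model you use

task = "Add retry logic with exponential backoff to our HTTP client."

# Claude A: the writer. Its only job is to produce the plan.
plan = client.messages.create(
    model=MODEL,
    max_tokens=1024,
    system="You are an engineer. Write a concise implementation plan.",
    messages=[{"role": "user", "content": task}],
).content[0].text

# Claude B: the reviewer. A separate call with no shared history --
# it never sees the writer's reasoning, only the artifact.
review = client.messages.create(
    model=MODEL,
    max_tokens=1024,
    system=(
        "You are a skeptical staff engineer reviewing a plan you did not "
        "write. Hunt for edge cases, question assumptions, flag risks."
    ),
    messages=[{"role": "user", "content": f"Review this plan:\n\n{plan}"}],
).content[0].text

print(review)
```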

๐Ÿ“ The key insight that changed how I work: When it's time to review, I kill my session and start a brand new one โ€” fresh context window, zero contamination from the implementation. It's like getting a second opinion from a completely different engineer. Issues that used to show up days later in PR review now get caught before a PR even exists.

Layer it: AI review during dev. AI review at the PR level via CI. Humans for judgment calls.
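For the CI layer, here's a rough sketch of a PR-level review step. It assumes git is on the path and ANTHROPIC_API_KEY is set in the CI environment; the base branch, model, and prompt are placeholders you'd adapt.

```python
# Rough sketch of the CI layer: pipe the PR diff through a reviewer model.
# Assumes git is available and ANTHROPIC_API_KEY is set; base branch,
# model, and prompt are placeholders.
import subprocess
import anthropic

diff = subprocess.run(
    ["git", "diff", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout

client = anthropic.Anthropic()
review = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=2048,
    system=(
        "You are reviewing a pull request diff. Report only real problems: "
        "bugs, missing edge cases, risky assumptions. Say LGTM if clean."
    ),
    messages=[{"role": "user", "content": f"Review this diff:\n\n{diff}"}],
).content[0].text

print(review)  # in real CI you'd post this as a PR comment instead
```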

Don't let the agent grade its own homework, unless you're Spidey. 🕷️


Related: Stop Reading Every Line of Code →, the full essay on rethinking code review for the AI era.
