I built a cross-model review loop with Claude, and used it to help build itself
By SwiftMaker840 · ISSUE · About Claude Code
One thing I keep noticing with coding agents like Codex, Claude Code, and Cursor: the planner writes a plan. It sounds reasonable. And then execution just starts. Nobody challenges the assumptions before code gets written.
Not the model itself. Not you, unless you read every line. Nobody.
So I started doing something different. I route every plan through a second model before execution begins. Different architecture. Different training data. Different blind spots.
The reviewer is read-only. It cannot touch the code. It can only challenge the plan. Then the loop runs. If the reviewer finds issues, the plan goes back for revision. Automatically. No babysitting. It keeps going until the plan passes or the round cap is hit.
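The loop described above can be sketched in a few lines. This is a hypothetical illustration, not the actual rival-review implementation; the function names (`review_loop`, `plan_fn`, `review_fn`) and verdict strings are mine:

```python
# Hypothetical sketch of the plan-review loop: a planner proposes, a
# read-only reviewer critiques, and the plan is revised until it passes
# or the round cap is hit. All names here are illustrative.

MAX_ROUNDS = 3

def review_loop(plan, plan_fn, review_fn, max_rounds=MAX_ROUNDS):
    """Run plan -> review -> revise automatically.

    review_fn never edits the plan; it only returns a verdict
    ("pass"/"fail") plus a list of issues. plan_fn produces a
    revised plan from the old plan and the reviewer's issues.
    """
    for round_no in range(1, max_rounds + 1):
        verdict, issues = review_fn(plan)   # read-only critique
        if verdict == "pass":
            return plan, round_no, "approved"
        plan = plan_fn(plan, issues)        # planner revises the plan
    return plan, max_rounds, "round cap hit"
```

The key design point is that `review_fn` takes the plan and returns only a verdict, so the reviewer structurally cannot compromise by editing.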
**What surprised me is what it catches.**
Not just surface stuff. It flags things that go well beyond plan polish:
* rollback plans that do not actually roll back
* permission designs with real security holes
* review gates making go/no-go decisions from stale state
* multi-step plans that sound coherent until a second model walks the whole flow
Things the planner would never catch on its own. Because it wrote them. It cannot see its own blind spots.
**A few things ended up mattering more than I expected.**
* The reviewer has to stay read-only. That constraint is everything. The moment it can edit, it stops being a critic and starts compromising.
* Auto loop with a round cap. Set it, walk away, come back to a verdict.
* Scoped review context. Without it the reviewer wastes time reading parts of the repo that do not matter.
* Reviewer personas turned out to be genuinely useful. Delivery-risk, reproducibility, performance-cost, safety-compliance. Different lenses catch different problems.
* A live TUI dashboard. Phase, round, verdict, severity, cost, history. All in one terminal view. Makes the whole thing much easier to trust.
* It works with different planners. Claude Code uses a native ExitPlanMode hook. Codex and other orchestrators use an explicit gate.
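To make the persona idea concrete, here is a minimal sketch of how one plan could be fanned out to several reviewer lenses. The persona names come from the list above; the prompt wording and function names are my own assumptions, not the tool's actual prompts:

```python
# Hypothetical persona-based review prompts: the same plan is reviewed
# through several lenses, each with its own instruction. Prompt text
# is illustrative only.

PERSONAS = {
    "delivery-risk":     "Find steps likely to slip, block, or lack a fallback.",
    "reproducibility":   "Find steps that cannot be reproduced from the plan alone.",
    "performance-cost":  "Find steps with hidden runtime or spend costs.",
    "safety-compliance": "Find permission designs and data-handling holes.",
}

def persona_prompts(plan_text):
    """Build one read-only review prompt per persona for a given plan."""
    return {
        name: (
            "You are a read-only reviewer. You may not edit the plan.\n"
            f"Lens: {lens}\n\nPlan under review:\n{plan_text}"
        )
        for name, lens in PERSONAS.items()
    }
```

Each prompt would then be sent to the reviewer model separately, so a security lens does not get diluted by a delivery lens in the same context window.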
I used it to help build itself. Codex planned, Claude reviewed the plans, and the design converged across multiple rounds.
MIT licensed: [rival-review on GitHub](https://github.com/alexw5702-afk/rival-review)
Curious if anyone else has tried cross-model review or something similar.
Comments (3)
SwiftCoder4821: great work, I'm no coder but I do something similar, I'll ask Claude Opus thinking and non-thinking the same question, they'll provide two different answers, I then ask them to review each one and see if they can each make the best of both worlds, finally I ask Claude Opus thinking to review the last t
WildOrbit297: Great work Mate
OceanMaker198: You can have Claude run codex CLI and have it review things this way. It’s great because codex sees things very differently so you get a different angle on everything