Recently, Boris Cherny (creator of Claude Code) shared his workflow on Twitter, and it sparked a lot of discussion. His approach is sophisticated—parallel sessions, shared .claude conventions, human-in-the-loop review cycles. But here’s what I found most interesting: how many of his techniques depend on context, and how some need refinement based on real constraints.
Let me walk through Boris’s approach and share what I’ve learned doing this differently.
The Thread
I'm Boris and I created Claude Code. Lots of people have asked how I use Claude Code, so I wanted to show off my setup a bit.
— Boris Cherny (@bcherny) January 2, 2026
My setup might be surprisingly vanilla! Claude Code works great out of the box, so I personally don't customize it much. There is no one correct way to…
1/ I run 5 Claudes in parallel in my terminal. I number my tabs 1-5, and use system notifications to know when a Claude needs input https://t.co/nmRJ5km3oZ pic.twitter.com/CJaX1rUgiH
— Boris Cherny (@bcherny) January 2, 2026
My approach is simpler. I run one Ghostty (an iTerm2 alternative) terminal per project and work on 3-5 projects at a time on macOS. Right now I’m limited to 8 GB of RAM on an M1 MacBook Air. Once I upgrade to a MacBook Pro with 16 GB or more, RAM won’t be the constraint.
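Boris’s system-notification trick from point 1 can be wired up with a Notification hook. A minimal sketch for macOS in .claude/settings.json, assuming the documented hooks schema; the osascript message text is illustrative:

```json
{
  "hooks": {
    "Notification": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "osascript -e 'display notification \"Claude needs input\" with title \"Claude Code\"'"
          }
        ]
      }
    ]
  }
}
```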
2/ I also run 5-10 Claudes on https://t.co/XJ8WxOxjo0, in parallel with my local Claudes. As I code in my terminal, I will often hand off local sessions to web (using &), or manually kick off sessions in Chrome, and sometimes I will --teleport back and forth. I also start a few… pic.twitter.com/HHFyconymH
— Boris Cherny (@bcherny) January 2, 2026
I like that he switches between laptop, web, and iOS. I am not there yet, but I find myself coding mostly on desktop because it gives me the most control and speed.
3/ I use Opus 4.5 with thinking for everything. It's the best coding model I've ever used, and even though it's bigger & slower than Sonnet, since you have to steer it less and it's better at tool use, it is almost always faster than using a smaller model in the end.
— Boris Cherny (@bcherny) January 2, 2026
Even though Opus 4.5 takes longer per response, Boris does less re-work and less review with the smarter model. The trade-off is more money and more time per request, and it improves when you use the right model for the right task. I believe Opus 4.5 is best for implementation, but it is not the best for discovery. See the screenshot for my current recommendations.
4/ Our team shares a single https://t.co/v4FOLUBHz9 for the Claude Code repo. We check it into git, and the whole team contributes multiple times a week. Anytime we see Claude do something incorrectly we add it to the https://t.co/v4FOLUBHz9, so Claude knows not to do it next… pic.twitter.com/zftuPx67oK
— Boris Cherny (@bcherny) January 2, 2026
5/ During code review, I will often tag @.claude on my coworkers' PRs to add something to the https://t.co/v4FOLUBHz9 as part of the PR. We use the Claude Code Github action (/install-github-action) for this. It's our version of @danshipper's Compounding Engineering pic.twitter.com/VIQYZ2hFq5
— Boris Cherny (@bcherny) January 2, 2026
I like the human-in-the-loop component where you retrospectively review Claude Code sessions and update CLAUDE.md. In agency work, I have seen this done with a separate Claude Code session that reviews ~/.claude/projects and offers suggestions based on how the conversation goes.
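That retrospective pass can be partly scripted. A hedged sketch, assuming transcripts live as .jsonl files under ~/.claude/projects and that the claude CLI’s print mode (`claude -p`) is available; the helper name and prompt wording are mine:

```shell
# Hedged sketch of a retrospective review pass: collect recent session
# transcripts from ~/.claude/projects, then hand them to a fresh Claude
# session for CLAUDE.md suggestions. Paths and prompt wording are assumptions.
recent_sessions() {
  dir="${1:-$HOME/.claude/projects}"
  # Transcripts are stored as .jsonl files; grab the last week's worth.
  find "$dir" -name '*.jsonl' -mtime -7 2>/dev/null
}

# Then, for example:
# recent_sessions | xargs cat | claude -p "Suggest CLAUDE.md additions based on these sessions"
```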
6/ Most sessions start in Plan mode (shift+tab twice). If my goal is to write a Pull Request, I will use Plan mode, and go back and forth with Claude until I like its plan. From there, I switch into auto-accept edits mode and Claude can usually 1-shot it. A good plan is really… pic.twitter.com/Rcy7szkLRR
— Boris Cherny (@bcherny) January 2, 2026
I do not recommend using Opus 4.5 in Plan mode: Claude Code sometimes keeps Opus 4.5 and other times switches to Haiku 4.5 mid-session, which makes results inconsistent. I recommend running Plan mode with Haiku 4.5 explicitly, since planning is closer to discovery and involves research.
7/ I use slash commands for every "inner loop" workflow that I end up doing many times a day. This saves me from repeated prompting, and makes it so Claude can use these workflows, too. Commands are checked into git and live in .claude/commands/.
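As an illustration of the convention, here is a hedged sketch of a shared slash command checked into .claude/commands/; the command name (fix-ci) and its body are hypothetical, not from Boris’s repo:

```shell
# Hedged sketch of a shared slash command, checked into .claude/commands/.
# The command name (fix-ci) and its body are hypothetical; adapt to your repo.
mkdir -p .claude/commands
cat > .claude/commands/fix-ci.md <<'EOF'
Look at the most recent CI failure, find the root cause, and propose a
minimal fix. Run the affected tests locally before declaring the fix done.
EOF
```

Invoked in a session as /fix-ci, and reviewable in git like any other code.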
— Boris Cherny (@bcherny) January 2, 2026
For example, Claude and I use a… pic.twitter.com/LSbvtQrc6l
8/ I use a few subagents regularly: code-simplifier simplifies the code after Claude is done working, verify-app has detailed instructions for testing Claude Code end to end, and so on. Similar to slash commands, I think of subagents as automating the most common workflows that I… pic.twitter.com/9lAwfAg78S
— Boris Cherny (@bcherny) January 2, 2026
I do not recommend sub-agents due to context issues (maybe I will describe this in another post).
9/ We use a PostToolUse hook to format Claude's code. Claude usually generates well-formatted code out of the box, and the hook handles the last 10% to avoid formatting errors in CI later. pic.twitter.com/XBMG5fmK4P
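A minimal sketch of such a hook in .claude/settings.json, assuming the documented PostToolUse schema where the hook command receives the tool call as JSON on stdin; prettier here is a stand-in for whatever formatter your CI enforces:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path // empty' | xargs -r npx prettier --write"
          }
        ]
      }
    ]
  }
}
```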
— Boris Cherny (@bcherny) January 2, 2026
10/ I don't use --dangerously-skip-permissions. Instead, I use /permissions to pre-allow common bash commands that I know are safe in my environment, to avoid unnecessary permission prompts. Most of these are checked into .claude/settings.json and shared with the team. pic.twitter.com/T5h0TkND4W
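The shared allowlist might look like this in .claude/settings.json; the specific commands are illustrative, not a recommendation for your environment:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run lint)",
      "Bash(npm run test:*)",
      "Bash(git diff:*)"
    ]
  }
}
```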
— Boris Cherny (@bcherny) January 2, 2026
11/ Claude Code uses all my tools for me. It often searches and posts to Slack (via the MCP server), runs BigQuery queries to answer analytics questions (using bq CLI), grabs error logs from Sentry, etc. The Slack MCP configuration is checked into our .mcp.json and shared with… pic.twitter.com/S4nAHlHbvX
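A hedged sketch of what the shared .mcp.json could look like; the Slack server package name and environment variable are assumptions for illustration:

```json
{
  "mcpServers": {
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"],
      "env": {
        "SLACK_BOT_TOKEN": "${SLACK_BOT_TOKEN}"
      }
    }
  }
}
```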
— Boris Cherny (@bcherny) January 2, 2026
12/ For very long-running tasks, I will either (a) prompt Claude to verify its work with a background agent when it's done, (b) use an agent Stop hook to do that more deterministically, or (c) use the ralph-wiggum plugin (originally dreamt up by @GeoffreyHuntley). I will also use… pic.twitter.com/o87PZZeWxi
— Boris Cherny (@bcherny) January 2, 2026
I love the ralph-wiggum technique dreamt up by @GeoffreyHuntley; I use it to bootstrap many new projects quickly.
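The Stop-hook variant from point 12 could be sketched like this in .claude/settings.json, assuming the documented hooks schema; the script path is hypothetical:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/verify-work.sh"
          }
        ]
      }
    ]
  }
}
```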
13/ A final tip: probably the most important thing to get great results out of Claude Code -- give Claude a way to verify its work. If Claude has that feedback loop, it will 2-3x the quality of the final result.
— Boris Cherny (@bcherny) January 2, 2026
Claude tests every single change I land to https://t.co/XJ8WxOxjo0…
I love the idea of giving Claude Code simple ways to verify its work against tests, simulations, or DOM controls.
The Setup: More Terminals, More Thinking
Boris runs 5 Claudes in parallel locally plus 5-10 more on the web, juggling them with system notifications and browser tabs. I do something simpler: one Ghostty terminal per project, typically working 3-5 projects simultaneously on my M1 MacBook Air (8 GB RAM limit).
His approach scales beautifully if you have the hardware. Mine works because I’ve accepted the constraint—and it forces better session isolation anyway.
The Model Question: Opus 4.5 Isn’t Always Right
Boris uses Opus 4.5 with extended thinking for everything, and his reasoning is solid: less steering, better tool use, faster overall despite longer inference time.
But here’s where I diverge:
For implementation: Opus 4.5 is unbeatable. The quality improvement justifies the cost and latency.
For discovery and planning: It’s actually overkill. This is where I’d use GPT-5.2 Pro or even Haiku 4.5. Planning is research-adjacent—you need breadth and speed, not deep reasoning.
One important caveat: don’t use Opus 4.5 in Plan mode. Claude Code sometimes switches to Haiku 4.5 mid-session, creating inconsistency. Use Haiku 4.5 explicitly for planning.
The .claude Convention: Shared Rules Matter
I love that Boris’s team checks .claude/commands/ into git and reviews it like code. They’ve also built a shared .claude/settings.json with pre-approved permissions, and tag @claude in PRs during code review to update their playbook.
This is the human-in-the-loop component done right. You don’t need sub-agents for this (I’d warn against those due to context bleed). Instead, retrospectively review sessions and update your guides. This compounds knowledge over time—similar to what Dan Shipper calls ‘Compounding Engineering.’
Feedback Loops Are Everything
Boris’s final tip is the most important: give Claude a way to verify its work. Test it. Run it. Show the results.
This is where I’d push further: give Claude direct feedback loops. Tests. DOM simulation. Quick verification scripts. When Claude can see immediately whether something works, output quality jumps 2-3x.
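A quick verification script can be as small as this hedged sketch; the artifact path and test command are placeholders for your project’s own:

```shell
# Minimal sketch of a verification gate Claude can run after each change.
# The artifact path and TEST_CMD are placeholders for your project's own.
verify() {
  artifact="${1:-dist/app.js}"
  # Fail loudly if the build artifact is missing, so Claude sees what broke.
  if [ ! -f "$artifact" ]; then
    echo "VERIFY FAIL: missing $artifact"
    return 1
  fi
  # Run the real test suite here; TEST_CMD defaults to a no-op in the sketch.
  if ! ${TEST_CMD:-true}; then
    echo "VERIFY FAIL: tests failed"
    return 1
  fi
  echo "VERIFY OK"
}
```

Expose it as a slash command or hook so running it costs Claude nothing.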
What I’d Add to Boris’s Workflow
- Use the right model for the right task, not Opus for everything
- Plan with lightweight models to avoid model switching confusion
- Build verification into .claude/commands: make testing as easy as a slash command
- Avoid sub-agents: they fragment context unnecessarily
- Consider the ralph-wiggum technique (shoutout to @GeoffreyHuntley) for bootstrapping new projects quickly
The Bottom Line
Boris’s workflow is sophisticated and battle-tested. But it’s also optimized for a well-resourced team with unlimited compute. Most of us need to be more selective: right model, right task, tight feedback loops, and ruthless about context.
The real win isn’t running more Claudes in parallel. It’s running smarter ones.