Boris's 9 Practical Tips for Claude Code: It Turns Out Experts' Configurations Are So "Simple"

Boris Cherny has a nickname at Anthropic: the Father of Claude Code. He has been very active on X recently, and many people have asked him how he uses Claude Code himself. He just shared 9 practical tips on X. There aren't as many tips as you might expect, and each one is straightforward.

【1】Core Idea: There Is No Standard Answer for Best Practices with Claude Code

Boris starts by saying:

> My setup might be surprisingly vanilla! Claude Code works great out of the box, so I personally don't customize it much.

This is understandable: many of those best practices, like Skills and Plugins, have already been built into Claude Code by its developers. There is no single correct way to use Claude Code. The team intentionally designed it to be flexible; you can use it, customize it, and hack it however you like, and everyone on the Claude Code team uses it differently. So there's no need to struggle to find "best practices"; finding a rhythm that suits you is what matters most.

【2】Multi-Agent Task Parallelism: Running a Dozen Claudes at Once

Boris's daily routine looks like this: he opens 5 instances of Claude Code in the terminal, in tabs numbered 1 to 5, with system notifications on, switching to whichever needs input. At the same time, he runs 5 to 10 tasks on the web version. The terminal and web can hand off to each other: the & symbol transfers a local session to the web, and --teleport switches it back. Every morning and throughout the day, he also starts several tasks from the Claude app on his phone and checks back later for results.

The core logic of this "multithreaded" work style is that Claude Code excels at autonomous execution; many tasks don't require your constant attention.
You start a task, give it a direction, let it run, and focus on other things; you switch back when it needs your confirmation. This is a completely different rhythm from the traditional "human types a line of code, AI fills in a few lines." But it also places higher demands on the user: you need to be good at assigning tasks to agents and at switching between multiple tasks at any time. For those used to traditional development with only one task at a time, this is a significant challenge. I must admit that although I often use coding agents, I'm still not accustomed to running so many tasks simultaneously; this is something I need to practice more this year.

【3】Model Selection: Why Use Opus Instead of the Faster Sonnet

Boris says he uses Opus 4.5 with thinking mode for all his tasks; it's the best programming model he has used. Some might ask: isn't Opus larger and slower than Sonnet? Boris's answer: although a single response is a bit slower, you need to correct it much less often, and its tool calls are more accurate, so it is ultimately faster. I agree with this; when it comes to writing code, speed shouldn't be the priority, quality is. If a fast model requires three rounds of correction, it's better to use a slower model that gets it right the first time. Time isn't just model response time; it's also your attention and energy. The only downside is that Opus is more expensive.

【4】CLAUDE.md: A Project Brief You Write for the AI

CLAUDE.md is a special configuration file for Claude Code, placed in the project root directory. Each time you start Claude Code, it automatically reads this file and treats its contents as background knowledge. You can think of it as a project brief you write for the AI, informing it about the project's architecture, conventions, and pitfalls. Boris's team does this: the entire Claude Code repository shares one CLAUDE.md in Git, maintained by everyone together. Every week, someone adds something to it.
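A team CLAUDE.md accumulates rules of exactly this kind. The entries below are illustrative, not from Boris's actual file:

```markdown
# CLAUDE.md

## Architecture
- The CLI entry point lives in src/entrypoints/; shared utilities go in src/utils/.

## Rules learned from past mistakes
- Don't add new npm dependencies without asking first.
- Always run the type checker before declaring a task done.
- Never edit generated files under dist/.
```

Because the file is plain Markdown checked into the repository, adding a rule is a one-line diff that every teammate and every future session benefits from.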
The rule is simple: whenever you see Claude make a mistake, write "don't do this" into CLAUDE.md, and next time it will know. Interestingly, they also use this mechanism during code reviews: Boris will @.claude in a colleague's PR, asking Claude to add a new rule to CLAUDE.md. This is implemented through Claude Code's GitHub Action. Dan Shipper calls this approach "compound engineering": every correction becomes a team asset, helping the AI understand your project better. If you haven't created a CLAUDE.md yet, the /init command will have Claude analyze the project structure and generate an initial version. You can then supplement it as you go, adding corrections as you see fit.

【5】Plan Mode: Think Clearly Before You Start

Boris says most of his sessions start in Plan mode, which you can switch to in Claude Code by pressing Shift+Tab twice. In Plan mode, Claude doesn't modify code directly but first gives you an execution plan. You can discuss and revise the plan back and forth until you're satisfied, then switch to auto-accept mode, and Claude usually completes the work in one go. "A good plan is really important." This habit brings classic software-development wisdom into AI collaboration: design before coding. Many people jump straight into coding with AI and pay a high rework cost when the direction turns out to be wrong. Spending a few minutes aligning on the plan can save hours of rework.

【6】Automating Repetitive Tasks: Slash Commands and Sub-Agents

Boris has several operations he performs dozens of times a day, which he has turned into slash commands. For example, /commit-push-pr completes the commit, push, and PR creation in one step. Slash commands are essentially Markdown files placed in the .claude/commands/ directory. You write instructions in natural language and can embed bash commands to pre-fetch information, reducing the number of model calls. These commands can be committed to Git and shared across the team.
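A slash command of this kind is just a Markdown file. Here is a minimal sketch of what a /commit-push-pr command might look like; the exact wording and the pre-fetched commands are my own illustration, not Boris's actual file (the ! backtick syntax for embedding bash output is the one Claude Code's custom-command docs describe):

```markdown
<!-- .claude/commands/commit-push-pr.md -->
Commit all current changes, push the branch, and open a pull request.

Current state (pre-fetched so the model doesn't have to ask):
- Status: !`git status --short`
- Recent commits: !`git log --oneline -5`

Steps:
1. Write a concise commit message summarizing the diff.
2. Push the current branch to origin.
3. Create a PR with `gh pr create`, using the commit message as the title.
```

Typing /commit-push-pr in a session expands this file into the prompt, with the embedded bash output already filled in.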
In addition to slash commands, he also uses sub-agents: independent Claude instances dedicated to specific tasks. For example, he has a code-simplifier sub-agent that automatically simplifies code after the main Claude completes its work, and a verify-app sub-agent responsible for end-to-end testing. What these two features have in common is that they solidify the repetitive tasks you do, letting Claude invoke them on its own; you don't have to explain every time or remember various command details. He also uses a PostToolUse hook to format the code Claude generates. Claude usually produces well-formatted code on its own, and this hook handles the last 10%, avoiding formatting failures in continuous integration (CI).

【7】Security and Integration: Permission Configuration and External Tools

Boris doesn't use the "dangerous" --dangerously-skip-permissions option. Instead, he uses the /permissions command to pre-approve commonly used safe commands, avoiding a confirmation pop-up each time. These configurations are saved in .claude/settings.json and shared across the team. Even more powerful is MCP server integration. MCP stands for Model Context Protocol, a standard protocol introduced by Anthropic for connecting AI to external tools. Through MCP, Claude Code can directly:

- Search and send Slack messages
- Run BigQuery queries to answer data questions
- Pull error logs from Sentry

Boris's team has also committed the Slack MCP configuration to the repository, making it available to everyone out of the box. This means Claude Code is not just a programming tool but a "universal assistant" that can call your entire toolchain.

【8】Long Task Handling: Let Claude Validate Itself

For long-running tasks, Boris has a few strategies. The first is to have a background agent automatically validate the results after Claude finishes.
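Both of these ideas (pre-approved permissions and a PostToolUse formatting hook) live in .claude/settings.json. The sketch below follows the structure of Claude Code's documented settings format, but the specific allowed commands and the prettier invocation are illustrative assumptions, not Boris's actual configuration:

```json
{
  "permissions": {
    "allow": [
      "Bash(git status:*)",
      "Bash(git diff:*)",
      "Bash(npm run lint)",
      "Bash(npm test:*)"
    ]
  },
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx prettier --write ." }
        ]
      }
    ]
  }
}
```

Because this file is checked into the repository, one person's approved command list and formatting hook become everyone's.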
You can request this in the prompt, or use the Stop hook to trigger it more deterministically.

> Note: Hooks are Claude Code's mechanism for inserting custom logic at specific moments in Claude's operation. Think of them as triggers: when a certain event occurs, your preset command or script runs automatically. The Stop hook fires when Claude completes a response and is about to hand back control.

The second is to use the ralph-wiggum plugin, which is essentially a Bash loop: imagine a simple infinite loop (while true) that keeps feeding the same task brief (a prompt file) to the agent, letting it improve its work over and over until the task is completely finished.

The third is to use --permission-mode=dontAsk or --dangerously-skip-permissions in a sandboxed environment, so Claude can run to completion without being interrupted by permission confirmations. The core idea: since it's a long task, don't make it wait for you. Give it enough autonomy and the ability to self-correct.

【9】The Most Important Point: Give Claude the Ability to Validate

Boris saves this point for last, saying it may be the most crucial factor in getting good results: if Claude can validate its own work, the quality of the final output can improve 2 to 3 times. He gives an example: for every change they submit, Claude tests it using a Chrome extension, opening the browser, exercising the UI, identifying issues, and iterating until the functionality works and the experience is reasonable. The validation method varies by scenario: it could be running a bash command, running a test suite, or testing the application in a browser or mobile simulator. The form doesn't matter; what matters is giving the AI a feedback loop. The principle is simple: human engineers also rely on the write code, test, see results, revise loop to ensure quality, and AI is no different.
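The "keep feeding the same brief" idea can be sketched as a small shell function. This is a hedged sketch, not the plugin's actual implementation: the function name, the DONE marker, and the status-file convention are my own; in practice the agent command would be something like `claude -p "$(cat PROMPT.md)"`:

```shell
#!/usr/bin/env bash
# loop_until_done AGENT_CMD STATUS_FILE
# Re-runs AGENT_CMD until it writes "DONE" into STATUS_FILE.
# AGENT_CMD would normally be the real agent invocation, e.g.
#   claude -p "$(cat PROMPT.md)" --dangerously-skip-permissions
loop_until_done() {
  local agent_cmd="$1" status_file="$2"
  while true; do
    "$agent_cmd"                            # one full pass over the task brief
    if grep -q "DONE" "$status_file" 2>/dev/null; then
      break                                 # the agent declared the task finished
    fi
  done
}
```

The loop deliberately carries no state between iterations; each pass re-reads the same brief, and the agent's own output (the code it has written so far) is what accumulates.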
If it can only write but not test, it's working with its eyes closed, and quality depends entirely on luck. Boris's advice: invest effort in solidifying the validation mechanism; it's the highest-return investment you can make.

【10】Experts Win Without Moves

In martial arts novels, true experts wield the sword without flashy moves; "no moves" triumphs over moves. Boris doesn't flaunt complex custom configurations or mysterious private prompts; he uses only official features. The difference is that he truly understands the logic behind these features and combines them into an efficient workflow: parallel work is possible because Claude can execute autonomously; Opus wins on overall efficiency; CLAUDE.md turns error correction into an asset; Plan mode means thinking clearly before acting; slash commands and sub-agents automate repetitive tasks; and the validation mechanism gives the AI a feedback loop. If you're just starting with Claude Code, there's no need to rush into studying advanced configurations. Get the basics right first: learn to work in parallel, learn to plan, and learn to accumulate validation methods for the AI. When you truly hit bottlenecks, then experiment with the fancier features.
Boris Cherny's original post on X (Jan 3, 03:58):

> I'm Boris and I created Claude Code. Lots of people have asked how I use Claude Code, so I wanted to show off my setup a bit. My setup might be surprisingly vanilla! Claude Code works great out of the box, so I personally don't customize it much. There is no one correct way to use Claude Code: we intentionally build it in a way that you can use it, customize it, and hack it however you like. Each person on the Claude Code team uses it very differently. So, here goes.
【11】The Things You Can't See

One thing Boris didn't mention is the basic source-control / CI (continuous integration) / code-review workflow. For people used to working in large companies, these things are routine and taken for granted. For example, when he completes a task with Claude Code, he doesn't merge it straight into the main branch; he submits a PR. Once the PR is submitted, all linting and automated tests run on the CI server; if they fail, the PR cannot be merged. Even if a PR passes all automated tests, it still needs a human code review (AI can assist, but human confirmation is still required), and if the review finds issues, changes must be made. This foundation is also what makes their parallel multitasking possible; without a solid basic workflow, multitasking in parallel isn't achievable. Many individual developers aren't used to setting up a CI/code-review workflow, and some don't even manage their code with Git, making it impossible to roll back when issues arise.
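The PR gate described above is usually just a few lines of CI configuration. Here is a minimal GitHub Actions sketch; the npm scripts and Node version are illustrative assumptions, and any CI system works the same way:

```yaml
# .github/workflows/ci.yml: runs on every PR; merging is blocked if it fails
name: CI
on:
  pull_request:
    branches: [main]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 22 }
      - run: npm ci
      - run: npm run lint   # linting
      - run: npm test       # automated tests
```

Combined with a branch-protection rule requiring this check plus one human review, every agent-produced change gets the same scrutiny as a human one.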