Most discourse about AI coding agents frames the field as a winner-takes-all race: Claude Code vs. Codex vs. Cursor vs. Gemini, pick your champion, defend it in Hacker News threads. My daily reality is that I run five of them in rotation — Claude Code, Codex CLI, Gemini CLI, PI, and OpenCode — each good at escaping a different kind of local minimum, switched atomically via a tool I wrote out of personal frustration. This post is the case for why that’s the correct model for serious work in 2026.

The problem with one-agent commitment

Every coding agent is good at slightly different things. The differences aren’t obvious from benchmarks — they show up in your actual work, in the shape of the tasks you give them:

  • Claude Code tends to be the most careful about not breaking existing behaviour. It reads files thoroughly before editing, stays close to project conventions, and is the one I trust most on a codebase I care deeply about.
  • Codex CLI is quick to generate plausible-looking scaffolds. It’s where I go when I want a first draft of something where the exact output doesn’t matter as much as speed.
  • Gemini CLI has a different reasoning texture that sometimes breaks through when another model has been stuck in a loop for 15 minutes. Its failure modes are different, which is the point.
  • PI and OpenCode are still finding their niches in my workflow — both have moments where they surprise me upward, especially on tasks where the other three feel interchangeable.

None of those observations are absolute. They’d all be wrong for someone else’s workflow. They are, however, real to my work, and the only way to discover them was to run every agent on real tasks for months.

The config-switching problem

Running five agents sounds harmless until you realise every one of them has its own:

  • Auth token or API key.
  • Config file in its own flavour (JSON, YAML, TOML, homegrown).
  • Rule files / skill files / instruction files under slightly different paths.
  • Opinions about where “the project root” is.
  • Opinions about which shell profile to source.

I tried the obvious first — a big dotfiles setup with conditional blocks per agent. It worked for a week. Then I added a new engagement that needed a different API key, and the whole thing became a mess of “which env is active in which terminal.” Multiply by five agents and two clients and you get a full-time job that is not writing software.

What I actually built

I built raise — a Go CLI that swaps between per-engagement “AI profiles” via atomic symlink swaps. One command, the whole set of AI-tool configs flips over. It currently supports 17 AI tools across the major coding-agent and general-assistant families.

The design is deliberately boring:

  • Each profile is a directory under ~/.raise/profiles/<profile>/.
  • Each AI tool has a well-known config path (e.g. ~/.claude/config, ~/.config/gemini/config.toml).
  • Switching profiles is ln -sfn semantics under the hood, done as symlink-then-rename so the flip is atomic — no partial-state failure mode.
  • Credentials stay outside the profile tree and get re-linked on activation so you don’t sync API keys into the profile dir.

That’s it. The whole thing is a few hundred lines of Go. The reason it’s useful isn’t that it’s clever — it’s that it’s the only tool I found in its category that doesn’t assume you’re committed to one vendor. Every other “AI config manager” I looked at was either vendor-tied or didn’t handle the credential-preservation problem cleanly.

Shared skills across all agents

The second problem: every coding agent has its own rule file format (CLAUDE.md, AGENTS.md, .cursorrules, etc.). If you maintain five of them independently, they drift out of sync, and one of them ends up three months behind the team’s coding standards.

I maintain one source of truth — rice-shared-skills — and each agent’s rule file is a thin wrapper that includes the canonical skills from that source. When a convention changes, I update it once, and every agent inherits the change on next activation.
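As a concrete sketch, a wrapper can be little more than a pointer at the canonical source. Claude Code, for instance, supports @path imports in CLAUDE.md; the file names and relative paths below are hypothetical, and other agents need their own include glue:

```markdown
<!-- CLAUDE.md — thin wrapper; do not edit rules here -->
Project conventions are maintained in rice-shared-skills.

@../rice-shared-skills/coding-standards.md
@../rice-shared-skills/review-checklist.md
```

Agents without an import mechanism can get the same effect with a small generation step that concatenates the canonical files into each rule file on profile activation.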

The shared-skills idea isn’t novel. What’s notable is how much daily friction it removes once it’s working. The background hum of “wait, does Claude Code know we switched to pnpm 10 in this repo?” just… goes away.

Why five, not two or ten

I’m not dogmatic about the number five. It’s an observation, not a prescription. If you only ever do one kind of work, two agents are probably enough. If you do research-adjacent work on exotic tech stacks, you might want more than five.

The principle is: run enough agents that you can escape local minima, and not so many that the switching cost exceeds the escape benefit. For me that’s five. The number will probably shift as the agent landscape shifts.

The meta-skill this trains

Running multiple agents trains a skill that single-agent users often don’t develop: recognising when you’re stuck. When a single agent fails, the temptation is to keep trying. When you have five agents, switching is cheap enough that you notice the diminishing returns of insisting. That noticing is the same senior-engineering skill that distinguishes “stuck for an hour debugging alone” from “pinged a colleague at the 15-minute mark and unblocked in 20 seconds.”

The agents are training wheels for a discipline that was always good engineering practice. The friction is the point.

The 2026 claim

The teams that are winning on AI-assisted engineering velocity in 2026 are not picking a champion and marrying it. They are running fleets of agents, maintaining a shared-skills layer across them, and treating vendor choice as a swappable commodity. Everyone else is optimising a single tool they’ll regret being locked into in 18 months.

Five agents. One shared skills layer. Atomic profile switching. That’s the stack.