Scaling Output Without Pretending AI Is A Team
AI can make one person look like a small team.
That does not mean AI is a team.
This distinction matters. I have had weeks where my personal output, using AI heavily, looked nothing like a normal solo-development pace. In one March week, the repo showed 130 commits, 485 files touched, and nearly 200,000 lines changed. There was product work, tooling work, UI work, runtime work, and design exploration happening in parallel.
It would be easy to tell the wrong story about that.
The wrong story is: “AI replaced a team.”
The more useful story is: “AI amplified one person, but only because that person took on more of the leadership work that normally sits across a team.”
AI Does Not Remove Coordination
When a team grows, coordination becomes the tax. People need context. Work needs boundaries. Decisions need to be made. Quality has to be reviewed. Tradeoffs have to be resolved.
AI does not remove that tax. It moves more of it onto the person directing the work.
If I run multiple AI sessions, I am not just coding faster. I am assigning work, sequencing work, reviewing work, defining done, protecting shared systems, and deciding what gets promoted from experiment to product.
That is management.
The agents do not wake up with a shared roadmap. They do not know which tradeoffs matter unless I tell them. They do not naturally coordinate file ownership. They do not understand product taste. They can help execute, inspect, and test, but the operating model still has to come from me.
This is why AI leverage rewards seniority.
Not seniority as status. Seniority as pattern recognition, judgment, and the ability to decompose ambiguous work into safe, useful chunks.
Output Is Not The Same As Progress
High AI output can be intoxicating. You ask for something, and it appears. You ask for more polish, and it delivers more polish. You ask for options, and suddenly you have ten.
That speed can hide a basic question: is the work converging?
In a traditional team, headcount growth can create the same illusion of progress. More people create more motion. More meetings, more projects, more branches, more releases. But if strategy is unclear, the organization scales activity faster than outcomes.
AI creates the same risk at the individual level.
The leader’s job is to keep the work pointed at outcomes:
- Is this making the product better?
- Is this reducing future friction?
- Is this a useful experiment or just more surface area?
- Can I verify it?
- Can I explain why it matters?
Those questions matter more when output is cheap.
Where The Leverage Actually Came From
The most valuable work was not simply “more code.”
It was the ability to run more iterations through a judgment loop. I could try a direction, test it, see what felt wrong, and ask for a narrower improvement. That made the feedback cycle much faster.
The AI was not replacing taste. It was increasing the number of times I could apply taste.
That is the part leaders should pay attention to. The value of AI is not only automation. It is iteration density.
If your organization has strong product judgment, AI can let that judgment touch more options. If your organization lacks judgment, AI can flood it with plausible work that still misses the point.
What This Means For Teams
I do not think the right conclusion is “hire fewer people and use AI instead.”
That is too crude.
A better conclusion is that teams need to rethink the ratio between builders, reviewers, and system designers. If each builder can generate more candidate work, the organization needs stronger mechanisms for deciding what good looks like.
That changes the profile of high-value engineering work:
- clearer problem framing
- better acceptance criteria
- faster verification
- stronger internal tooling
- sharper technical boundaries
- more explicit product strategy
Those are leadership skills, not just coding skills.
The Real Bar
AI can scale output without scaling headcount, but it does not scale accountability.
That accountability still sits with the human leader. The work has to be chosen, shaped, reviewed, and connected to outcomes.
The best AI users will not be the people who generate the most code. They will be the people who can turn cheap generation into trusted progress.
That is a different bar.
And it is a much more interesting one.