Code stopped being the bottleneck
For most of my career, the slow part of building software was writing it. You had to carve out big, uninterrupted blocks of time just to grind through implementation details, and every new feature felt like a small marathon. Today, with competent AI assistants, a lot of that grind has evaporated. You can outline a service, paste in a few interfaces, ask for tests, and have a working first cut in the time it used to take to argue about estimates. That doesn't make engineering trivial, but it does mean that "who can type the fastest" is no longer the thing that separates great teams from average ones.
That speed changes how we should build
If code is cheap, the real constraint is how fast we can move from "vague idea" to "running thing we trust." The best use of AI isn't sprinkling autocomplete on top of the same old process; it's collapsing the loop between intent, implementation, and feedback. Small, well-scoped changes become trivial when you can generate the boring parts on demand. Migrations and refactors that would've eaten a sprint can be turned into structured prompts and mostly handled by the assistant. The teams that benefit most are the ones willing to shorten planning cycles, ship smaller increments, and treat the first implementation as something to learn from rather than something to defend.
New development workflow when code is cheap
In an AI-first workflow, I spend more time describing what we're trying to achieve and less time worrying about the mechanics of getting there. I'll sketch the behavior, constraints, and edge cases, then let the assistant scaffold the module, tests, and sometimes even the documentation. From there, the work is a sequence of tight iterations: run it, inspect the output, adjust the spec, regenerate. We lean into smaller, composable changes that the AI can handle end-to-end, and we're comfortable throwing away generated code when the design evolves. Velocity comes less from typing faster and more from eliminating handoffs, coordination overhead, and time lost waiting for someone to "get around" to the boring parts.
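As a minimal sketch of that loop (the function and its behavior are hypothetical, invented for illustration), the "spec" can literally be a handful of edge-case checks written before any implementation exists; the assistant's generated code is regenerated until they pass:

```python
# Hypothetical example: the "spec" is a set of edge-case checks written first.
# slugify() stands in for whatever module the assistant scaffolds; when the
# spec changes, the implementation is regenerated rather than patched by hand.
import re

def slugify(title: str) -> str:
    """First AI-generated cut; disposable if the spec evolves."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The spec: behavior, constraints, and edge cases, stated as checks.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --spaces--  ") == "spaces"
assert slugify("") == ""
```

The point isn't the function; it's that the checks are the durable artifact, and the implementation between them is cheap to throw away and regenerate.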
Why software is still expensive
None of this means software itself is cheap. Integration is still hard. Good data modeling is still hard. Picking the right boundaries between services, thinking through failure modes, and making trade-offs around performance, security, and operability are as demanding as ever. AI can happily generate three different implementations of the wrong abstraction, all of them plausible on the surface. The real cost sits in the accumulation of decisions: where responsibility lives, how systems talk, which use cases you prioritize, and how you handle the ugly edge cases. It's entirely possible to move faster and still end up with a mess that's costly to maintain if you don't stay intentional.
Practical ways to trade cheap code for real speed
In practice, the teams I see getting real leverage out of AI treat it as a way to remove friction, not as a magic wand. They bias toward narrow, well-defined changes that can be automated cleanly. They invest in tests, CI, and clear interfaces so they can safely accept more AI-generated diffs without endless bikeshedding. They use assistants to handle migrations, bulk edits, and glue code that would usually soak up senior attention. Most importantly, they redesign their process around shorter feedback loops: smaller PRs, more frequent deploys, and quicker validation with users. Speed becomes a property of the whole system you build around the code, not of any individual contribution.
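To make "migrations and bulk edits" concrete: one common shape is a small, reviewable codemod that an assistant drafts and a human verifies, instead of a hand-edited diff across dozens of files. This is a sketch under assumed names (the rename from fetch_user to get_user is made up for illustration):

```python
# Hypothetical codemod sketch: rename calls to a deprecated helper across a tree.
# The API names (fetch_user -> get_user) are invented for this example.
import re
from pathlib import Path

def migrate_source(text: str) -> str:
    """Rewrite fetch_user( calls to get_user(; the \\b word boundary
    avoids false hits like refetch_user(."""
    return re.sub(r"\bfetch_user\(", "get_user(", text)

def migrate_tree(root: Path) -> int:
    """Apply the rewrite to every .py file under root; return files changed."""
    changed = 0
    for path in root.rglob("*.py"):
        old = path.read_text()
        new = migrate_source(old)
        if new != old:
            path.write_text(new)
            changed += 1
    return changed
```

Because the whole migration is one short script, it can sit in the PR alongside the diff it produced, which is what makes a large mechanical change cheap to review.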
If code is cheap, what's the moat?
When anyone with basic skills can ask an AI to rebuild a whole web browser in a weekend, "I wrote a lot of code" stops being a convincing moat. The durable advantages shift elsewhere: deep domain understanding, proprietary data, distribution, trust, and tight integration into real workflows. Judgment becomes a moat too: the ability to say "no" to most feature ideas, to pick the right level of abstraction, and to recognize when a problem is better solved with process than with more software. In that world, speed still matters, but it's the speed of learning and adaptation, not the speed of feature cloning. The winners will be the teams that can turn insight into improved behavior quickly and repeatedly.
What it means for developers, in my opinion
It helps to think of code as a disposable asset. The thing you're really optimizing is the loop: observe, decide, change, observe again. Let the AI handle as much of the mechanical work as possible so your people can focus on architecture, clarity of intent, and the conversations that decide what is worth building at all. Encourage your team to practice describing problems precisely and reviewing AI output critically. Push for more automation in testing and deployment so the increased volume of changes doesn't overwhelm your review and release process. And when prioritizing work, remember that features are cheap now; the hard part is building something that stays coherent and valuable as it evolves.