When I published The Eternal Promise, tracing sixty years of attempts to eliminate programmers, from COBOL through to today’s AI coding tools, the post resonated far beyond what I expected. It reached the front page of Hacker News and generated hundreds of comments from engineers who recognized the pattern in their own careers.
But one response in particular stopped me. A commenter articulated something I had been circling for years without ever naming precisely:
Each abstraction layer does not eliminate complexity. It relocates it.
That single sentence reframes the entire history of software development. It explains why every wave of “simplification” creates new specialists instead of eliminating old ones. It explains why AI-assisted coding feels simultaneously easier and harder. And it points to the most important, and most undervalued, skill in modern software engineering.
The Conservation Law of Software
In 1986, Fred Brooks published “No Silver Bullet,” arguing that software development contains two kinds of complexity: essential and accidental. Essential complexity comes from the problem itself. If your banking system needs to handle thirty different regulatory requirements, that complexity must live somewhere in your software. Accidental complexity comes from the tools and implementation choices we make along the way.
Brooks argued that most of the easy gains had already been captured by reducing accidental complexity through better languages, better tools, and better practices. The essential complexity, the hard part, remained stubbornly resistant to any single technological breakthrough. There would be no silver bullet.
What Brooks described conceptually, Larry Tesler formalized into what became known as Tesler’s Law, sometimes called the Law of Conservation of Complexity. Tesler, who worked at Xerox PARC and Apple, observed that every system has an inherent amount of complexity that cannot be removed. It can only be shifted. If you make one part of a system simpler, some other part absorbs the complexity you displaced.
Larry Wall, the creator of Perl, independently arrived at the same insight and called it the Waterbed Theory of Complexity. Push down on a waterbed in one place and it bulges up somewhere else. The water, the complexity, is conserved. It just moves.
This is not an abstract theoretical observation. It is the precise mechanism that explains every cycle of technological promise and partial delivery in the history of software. And it is the mechanism operating right now with AI-assisted coding.
A History of Relocations
Once you see the relocation pattern, you cannot unsee it.
Assembly language was hard because you managed machine instructions, registers, and memory addresses directly. High-level languages like FORTRAN and COBOL relocated that complexity. Developers no longer thought in machine instructions. Instead, they thought in business logic, data structures, and control flow. The simple cases became dramatically simpler. But the complexity did not disappear. It moved to a new boundary: specifying correct business logic in a language that was more forgiving syntactically but just as demanding logically. An entire generation of COBOL programmers emerged as specialists at that new boundary.
Fourth-generation languages in the 1980s relocated complexity again. Instead of writing procedural code, developers worked with higher-level constructs for database queries, report generation, and screen design. The simple cases became simpler still. But the complexity accumulated at data modeling, integration, and performance optimization. When business logic exceeded what the 4GL could express cleanly, developers hit walls that required deeper skills than before, not shallower ones.
CASE tools promised to relocate complexity from code to diagrams. Visual modeling would replace textual programming. The simple cases worked. Complex systems generated code that was inefficient, difficult to customize, and nearly impossible to debug. The complexity relocated from writing code to understanding and correcting generated code. Sound familiar?
Object-oriented programming relocated complexity from procedure management to class design and inheritance hierarchies. Web frameworks relocated it from HTTP handling to configuration and convention. Microservices relocated it from monolithic codebases to network communication and distributed systems management. Cloud computing relocated it from hardware to configuration management.
Each relocation was genuinely valuable. Each made the simple cases dramatically simpler. And each created a new class of specialists to manage the complexity that accumulated at the new boundary.
Where AI Relocates Complexity
LLMs are following the same pattern, but the destination of the relocation has implications that most people have not fully absorbed.
With previous abstractions, the complexity moved laterally, from one form of creation to another form of creation. From machine instructions to business logic. From procedural code to data models. From code to configuration. The nature of the work changed, but the fundamental activity remained constructive. You were still building something.
With AI-generated code, the complexity is relocating from a constructive activity (writing code) to an evaluative activity (verifying code). This is a categorically different kind of work. It requires different skills, different mental models, and different organizational structures.
When you write code yourself, you understand it because you built it. The decisions are yours. The assumptions are visible to you because you made them. When something breaks, you have a mental model of why it might have broken because you assembled the system piece by piece.
When you verify AI-generated code, you are auditing decisions made by a system whose reasoning you cannot inspect. You are reading code shaped by assumptions you did not make, in a structure you did not design, reflecting patterns from training data rather than from your architectural intent. The verification problem is not just “does this code work?” It is “does this code work for the right reasons, handle the edge cases I care about, maintain the invariants I depend on, and avoid introducing subtle defects that will surface in production three months from now?”
That is a fundamentally harder question than “can I write code that works?”
The Verification Paradox
Here is where the data tells a story that should concern everyone.
Sonar’s 2025 State of Code report surveyed over 1,100 developers globally and found that 96% do not fully trust that AI-generated code is functionally correct. Only 48% always check their AI-assisted code before committing it. That gap, between near-universal distrust and verification by barely half of developers, is what Sonar calls “verification debt.”
Amazon CTO Werner Vogels captured this shift precisely at AWS re:Invent 2025: “Now, the world is changing. You will write less code, because generation is so fast. You will review more code because understanding it takes time.”
The numbers support his claim. 38% of developers say reviewing AI-generated code requires more effort than reviewing code written by human colleagues. 66% report spending significant time fixing AI code that is “almost right but not quite.” 61% agree that AI often produces code that looks correct but is not reliable.
This is the verification paradox. AI makes generating code nearly effortless, so developers generate more of it, but their capacity to verify code has not increased at all. In fact, it may have decreased, because verifying code you did not write is inherently harder than verifying code you did, and because the sheer volume of generated code exceeds any individual’s review bandwidth.
The complexity did not disappear. It relocated from generation to verification. And verification, it turns out, is harder.
Why Verification Is Harder Than Writing
This claim deserves examination because it sounds counterintuitive. How can checking code be harder than creating it?
Consider what happens when you write code from scratch. You start with a problem. You decompose it into smaller problems. You make explicit decisions about data structures, algorithms, error handling, and edge cases. Each decision builds on the ones before it. By the time you have working code, you also have a mental model of every significant choice embedded in that code. Verification is almost free because understanding is a byproduct of creation.
Now consider what happens when you verify AI-generated code. You start with output. You must reverse-engineer the decisions the model made. Why this data structure? What edge cases does it handle? What edge cases does it miss? Does the error handling match the failure modes of this specific system? Are the concurrency assumptions safe in this particular deployment context?
Every question requires you to understand something about the code that you did not decide, and therefore do not inherently know. You have to develop the same depth of understanding you would get from writing the code, but without the benefit of having written it. In some ways, this is like reviewing a colleague’s code, but with a crucial difference: your colleague can explain their reasoning when asked. The AI cannot.
CodeRabbit’s analysis of 470 GitHub pull requests found that AI-generated code produces 1.7 times more issues per review, with business logic errors appearing at twice the rate and error handling gaps nearly doubling. But the more revealing finding is about the types of issues. The largest relative increases appeared in readability, naming consistency, and structural quality: categories that increase cognitive load for reviewers and make verification harder over time.
The code does not just have more bugs. It is harder to audit. The complexity has relocated to a boundary that demands skills the industry has historically undervalued.
The Auditor Problem
Programming culture has always celebrated creation over evaluation. The developer who ships a feature gets more recognition than the one who catches a critical bug during code review. Writing code is glamorous. Reading code is grunt work. Building is fun. Reviewing is tedious.
This cultural bias has practical consequences in the AI era because the skill set required for effective verification of AI-generated code is closer to financial auditing than to traditional software development.
A financial auditor does not rebuild the company’s accounting system from scratch to determine whether it is correct. They examine the output, trace transactions through the system, test assumptions against reality, and identify discrepancies between what the system claims and what actually happened. They work backwards from results to reasoning. They look for patterns that indicate systemic problems rather than inspecting every individual entry.
Verifying AI-generated code requires the same approach. You cannot rewrite every function to confirm it is correct; that defeats the purpose of using AI. Instead, you must develop the ability to read code critically, identify patterns that indicate likely problems, trace logic through components you did not design, and evaluate whether the system’s behavior matches its specification.
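To make the audit mindset concrete, here is a minimal sketch in Python of invariant spot-checking: instead of re-deriving an implementation line by line, the auditor asserts properties the output must satisfy across randomized inputs. The `dedupe` function stands in for a hypothetical AI-generated helper; the function names and the specific invariants are mine, not from any particular tool.

```python
import random

# Hypothetical AI-generated helper we did not write ourselves:
# deduplicate a list while preserving first-seen order.
def dedupe(items):
    return list(dict.fromkeys(items))

# Audit-style verification: work backwards from output to properties.
# We do not inspect how the code was built; we check that invariants
# hold across many randomized inputs.
def audit_dedupe(trials: int = 200) -> None:
    rng = random.Random(42)  # fixed seed so the audit is reproducible
    for _ in range(trials):
        data = [rng.randint(0, 9) for _ in range(rng.randint(0, 30))]
        out = dedupe(data)
        assert len(out) == len(set(out))   # invariant: no duplicates remain
        assert set(out) == set(data)       # invariant: nothing lost or invented
        # invariant: elements appear in first-seen order
        assert all(data.index(x) < data.index(y) for x, y in zip(out, out[1:]))
```

This is the sampling-and-tracing posture of an auditor rather than the step-by-step reasoning of an author: the properties encode what the system must guarantee, independently of how the generated code happens to achieve it.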
This is auditing. And auditing is a discipline with its own methodologies, its own skill development path, and its own career structure that the software industry has barely begun to develop.
The irony runs deep. For decades, the software industry has tried to make development more like engineering: rigorous, measurable, predictable. AI may finally achieve that goal, but not in the way anyone expected. The rigorous discipline it demands is not engineering. It is audit.
Tesler’s Law in Action: The Specifics
Let me trace the complexity relocation with concrete specificity, because the pattern becomes more striking the closer you look.
Before AI, a developer building an authentication system would make a series of deliberate choices. How to hash passwords. Whether to use bcrypt or argon2. What salt length to use. How to handle session tokens. How to manage token expiration. How to implement rate limiting on login attempts. Each choice required research, understanding, and explicit decision-making.
With AI, the developer describes “build an authentication system” and receives an implementation. The complexity of choosing bcrypt versus argon2 did not disappear. It relocated from a conscious design decision to a verification question: did the AI make a safe choice? The developer still needs to know enough about password hashing to evaluate the output. They also need to know enough to recognize when the AI made an insecure choice, like using SHA-256 for password hashing, which will look correct to anyone who does not understand why it is wrong.
The essential complexity, choosing a secure hashing algorithm for this specific threat model, is conserved. It moved from the writing boundary to the verification boundary. And at the verification boundary, the failure mode is worse: instead of a developer who explicitly chose the wrong algorithm (and might catch the mistake during implementation), you have a developer who accepted the wrong algorithm without realizing a decision was made at all.
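The hashing example can be made concrete with a short Python sketch. The insecure version below is the kind of plausible-looking output a developer might accept without realizing a decision was made; the safer version uses PBKDF2 from the standard library as one reasonable alternative (bcrypt and argon2 require third-party packages, so this is an illustration, not a recommendation of PBKDF2 over them). Function names and the iteration count are mine.

```python
import hashlib
import hmac
import os

# Plausible-looking but insecure: SHA-256 is deliberately fast and this
# version is unsalted, so identical passwords produce identical hashes
# and attackers can test billions of guesses per second.
def insecure_hash(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

# Safer stdlib pattern: PBKDF2 with a random per-password salt and a
# high iteration count, which slows brute-force attacks by design.
def secure_hash(password: str, *, iterations: int = 600_000) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes,
           *, iterations: int = 600_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(candidate, digest)
```

Both versions compile, run, and return hex-or-bytes that look equally authoritative in a diff. Telling them apart is exactly the verification question: the reviewer must already know why a fast, unsalted hash is the wrong tool, or the defect sails through.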
This pattern repeats across every category of technical decision. Error handling, concurrency models, data validation, API design, security boundaries, performance characteristics. Each one represents essential complexity that AI does not eliminate but relocates from explicit design decisions to implicit verification challenges.
The New Specialist
Every previous relocation of complexity created a new class of specialist. Assembly programmers gave way to COBOL programmers. COBOL programmers gave way to database administrators when 4GLs pushed complexity to data modeling. Web frameworks created a generation of DevOps engineers when complexity accumulated at deployment and operations. Cloud computing created infrastructure architects. Microservices created platform engineers.
AI is creating the need for a specialist that does not yet have a widely recognized name. The closest existing role is “code reviewer,” but that understates the scope. The specialist AI demands is someone who can evaluate code they did not write, against specifications they must verify independently, in systems whose architecture was shaped by decisions they must reconstruct, for correctness criteria that span functional behavior, security properties, performance characteristics, and long-term maintainability.
This is not a junior role. It requires deep technical expertise, broad systems knowledge, and the kind of judgment that comes from years of building and debugging production systems. You cannot effectively audit code if you have never built a similar system yourself, because you would not know what to look for.
Here is the uncomfortable truth: the developers best positioned to verify AI-generated code are experienced senior engineers. The ones who have built enough systems to recognize patterns, the ones who have debugged enough production incidents to know where subtle bugs hide, the ones who have enough architectural judgment to evaluate whether a design choice is sound even when it compiles and passes tests.
These are exactly the developers the industry periodically claims AI will make redundant. In reality, AI has made their skills more valuable, not less. It has just relocated the application of those skills from construction to verification.
What This Means for the Industry
The complexity relocation framework has practical implications for how organizations should be thinking about AI adoption.
First, investing in AI coding tools without equally investing in verification practices is like installing a high-speed assembly line without quality control. You will produce more, faster. Much of what you produce will need to be caught, corrected, or recalled. The savings from faster generation will be consumed by the costs of insufficient verification. Sonar’s finding that only 48% of developers always check AI code before committing is an early warning sign of verification debt accumulating across the industry.
Second, the skills that matter most in the AI era are not prompt engineering skills. They are verification skills. The ability to read code critically. The ability to identify security antipatterns. The ability to recognize architectural decisions that will cause problems at scale. The ability to evaluate whether a test suite actually tests the things that matter, or merely tests the things that are easy to test. Organizations that train developers to generate code faster without training them to evaluate code more rigorously are building a capability imbalance that will eventually collapse.
Third, the role of senior engineers must evolve to emphasize review, mentoring, and architectural oversight rather than raw implementation output. If AI handles the construction, then human expertise must concentrate at the verification boundary, which is where the relocated complexity lives. This means valuing code review as a first-class engineering activity rather than a chore that delays shipping.
Fourth, tooling must evolve. AI-assisted code review tools are emerging, and they will help with the volume problem. But they face the same limitation as any verification tool: they can catch pattern-level issues but they cannot evaluate whether the code serves the business intent. That judgment requires humans. The optimal workflow is likely AI generating code, AI performing first-pass review for patterns and common vulnerabilities, and humans performing deep review for business logic, architectural coherence, and specification compliance.
The Pattern Will Repeat
If history is any guide, the verification boundary will not remain the final destination. Some future technology will relocate complexity from verification to whatever lies beyond it. Perhaps formal verification tools will become practical enough to automate much of the auditing. Perhaps AI systems will become reliable enough that statistical verification, sampling and testing rather than comprehensive review, becomes an accepted practice.
When that happens, the complexity will not disappear. It will relocate again. A new boundary will emerge, a new class of specialist will be needed, and a new generation of commentators will announce that the previous generation of specialists is finally, truly, definitively obsolete.
They will be wrong, for exactly the same reason every previous prediction was wrong. The complexity is conserved. It is only relocated. And wherever it lands, it demands human judgment to manage.
Fred Brooks saw this forty years ago. Larry Tesler saw it at Xerox PARC. Every experienced engineer who has lived through a technology transition has felt it, even if they could not name it precisely. Now the pattern has a name, and a mechanism, and a sixty-year history of evidence.
The complexity is never eliminated. It is only relocated. And the engineers who understand where it is going will always be the ones who matter most.
Final Words
Whether you have experienced this complexity relocation firsthand or see it differently, I would genuinely like to hear your perspective. The best insights come from people comparing notes across different environments and career stages.
You can find me on LinkedIn, X, and Threads where I regularly share thoughts on software development, AI, and the evolving role of engineers.
If you want to discuss something in more depth or explore how these ideas apply to your specific context, feel free to reach out directly.
Where has complexity relocated in your world? Has AI moved the hard part of your work from building to verifying, or is it landing somewhere else entirely? I want to know.
If this post made you think, you'll probably like the next one. I write about what's actually changing in software engineering, not what LinkedIn wants you to believe. No spam, unsubscribe anytime.