At the World Economic Forum in Davos on January 20, 2026, Anthropic CEO Dario Amodei said something that landed like a grenade in the technology industry. In an interview with The Economist, he claimed that AI models could do “most, maybe all” of what software engineers currently do within six to twelve months. He pointed to engineers at Anthropic who told him they no longer write code from scratch at all. “I just let the model write the code, I edit it,” he said they told him.
The reaction was immediate and predictably polarized. Panic in developer forums. Dismissal from senior engineers. Viral posts alternating between “we’re cooked” and “he’s wrong and here’s why.” Zoho founder Sridhar Vembu went further than Amodei, publicly advising software developers to begin considering alternative livelihoods. Elon Musk posted about building an AI-based software competitor to Microsoft. The discourse reached the kind of intensity that guarantees more heat than light.
I want to offer a different reading of this moment. Not a dismissal of what Amodei said, and not a panic-driven response to a headline that almost certainly oversimplifies what he actually meant. A careful, honest look at what is actually changing, what is not, and what this shift means for the engineers, architects, and technical leaders navigating it in real time.
Because I think Amodei is right about something important, and importantly wrong about something else. And the distinction between those two things matters enormously for how software engineers should be thinking about their careers and their craft right now.
What Amodei Is Right About
Let me start with what is actually true in this claim, because dismissing it entirely would be as intellectually dishonest as accepting it uncritically.
The productivity of AI coding tools has crossed a genuine threshold. This is not incremental improvement. In the past eighteen months, the gap between what an experienced engineer can produce with AI assistance and what they could produce without it has widened from meaningful to substantial. Prototypes that took days now take hours. Boilerplate that consumed genuine cognitive energy now often gets generated correctly on the first attempt. Test coverage that teams perpetually deferred because of the time cost is now achievable within the scope of normal development cycles.
In a January 2026 essay, Amodei noted that AI models have reached a stage where some of the strongest engineers he knows hand over almost all of their coding work to AI, contrasting this with just three years earlier, when AI struggled with elementary arithmetic and could barely write a single line of code. That trajectory is real. I have experienced it directly. The tools available today are meaningfully more capable than the tools available twelve months ago, and there is no obvious reason to expect that rate of improvement to slow.
AI researcher Andrej Karpathy, who coined the term “vibe coding” a year ago, has since noted that the approach has evolved: programming via AI agents is “increasingly becoming a default workflow for professionals, except with more oversight and scrutiny.” He now prefers the term “agentic engineering,” reflecting a genuine shift in how developers interact with code rather than just an incremental improvement in autocomplete.
So yes: the way code gets written is changing dramatically. Senior engineers at capable AI companies are legitimately spending less time writing code line by line. That observation is accurate, and engineers who resist acknowledging it are not doing themselves any favors.
Where the Claim Breaks Down
Here is where the reasoning goes wrong, and it goes wrong at a definitional level that compounds into a significant error.
Dario Amodei is one of the most technically sophisticated people in the industry. He understands AI deeply. But “software engineering” and “writing code” are not the same thing. The example he offered to illustrate his point, engineers who no longer write code and instead edit AI-generated output, does not demonstrate that software engineering is becoming obsolete. It demonstrates that one component of software engineering, the act of translating decided solutions into working syntax, is being automated.
That component was never the hard part.
The hard part of software engineering has always been the thinking that precedes and follows the writing. What problem are we actually solving? What are the constraints, stated and unstated? What are the failure modes of this approach under load, under adversarial input, under the organizational conditions that will exist when this system is maintained two years from now by people who were not in the room when it was designed? How does this component interact with the rest of the system? What do we need to leave out of version one without making version two impossible?
These questions do not become easier when AI generates the code. In many cases they become harder, because the gap between “AI produced something that passes tests” and “AI produced something that is correct for this specific context” requires exactly the kind of deep understanding that experienced engineers develop over years of making decisions and living with their consequences.
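To make that gap concrete, here is a minimal Python sketch drawn from the fintech domain. The function names, amounts, and tests are invented for illustration, not taken from any real system:

```python
# Plausible AI-generated version: clean, idiomatic, and it passes the
# happy-path test someone wrote for it.
def split_payment_naive(total: float, parts: int) -> list[float]:
    share = round(total / parts, 2)
    return [share] * parts

assert split_payment_naive(100.00, 4) == [25.0, 25.0, 25.0, 25.0]  # passes

# But in a payments context every cent must be accounted for, and
# sum(split_payment_naive(100.00, 3)) comes to 99.99, not 100.00:
# a cent has quietly vanished.

# Context-aware version: integer cents, remainder distributed explicitly.
def split_payment(total_cents: int, parts: int) -> list[int]:
    base, remainder = divmod(total_cents, parts)
    # The first `remainder` shares absorb one extra cent each.
    return [base + 1] * remainder + [base] * (parts - remainder)

assert sum(split_payment(10_000, 3)) == 10_000  # balances to the cent
```

The naive version satisfies every test that was written for it. Only domain knowledge, that money must balance to the cent, reveals the defect, and that knowledge lives in the engineer, not the model.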
I have spent more than twenty years building and operating production systems across fintech, blockchain, and high-traffic platforms. In my experience, the most common source of expensive engineering failures is not bad code. It is decisions made without adequate understanding of the domain, the system boundaries, the security model, or the long-term maintenance implications. AI tools make the code-writing faster. They do not make those decisions for you. They cannot, because those decisions require context that lives in the organization, the regulatory environment, the product strategy, and the engineering team’s accumulated understanding of how their specific system behaves under real conditions.
The Confusion Between Coding and Software Engineering
This is the distinction that I think is most important to articulate clearly, because it is the one most consistently collapsed in these conversations.
Coding is the act of expressing a decided solution in a programming language. It requires syntax knowledge, familiarity with language idioms, understanding of the tools and frameworks in use, and the mechanical skill of translating intent into instructions a computer can execute. This is genuinely what AI tools are getting very good at. The translation layer between “here is what we want to happen” and “here is code that makes it happen” is being automated at an impressive rate, and that rate will likely continue to increase.
Software engineering is something much larger. It includes understanding what the system needs to do before anyone has specified it precisely. It includes architecture: the decisions about how components relate to each other, what responsibilities they hold, how they communicate, and how they will evolve as requirements change. It includes security modeling: identifying what needs to be protected, what the realistic threat model looks like, where the boundaries between trusted and untrusted input exist. It includes scalability thinking: how does this system behave when the load doubles, when the data volume grows by a factor of ten, when the number of concurrent users exceeds the assumptions made during design?
It includes the organizational dimension of technology: how do you build systems that teams can understand, maintain, and extend over time? How do you make decisions that survive the departure of the people who made them? How do you document not just what a system does but why it was designed that way? How do you build the shared understanding that allows a team to make coherent decisions over years without the original architects in the room?
None of these are coding problems. None of them become obsolete when AI generates code more efficiently. Some of them actually become more important as AI-generated code becomes more prevalent, because the risk of mistaking “the code runs” for “the system is correct” grows when the code was produced at speed by a tool that has no understanding of the broader context.
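To make the trust-boundary point above concrete, here is a minimal, hypothetical Python sketch (the table and function names are invented): code that runs correctly on every happy-path test while failing the security model entirely.

```python
import sqlite3

# A common generated pattern: works on every test a developer is likely
# to write, because the tests use well-behaved names.
def find_user_unsafe(conn: sqlite3.Connection, name: str) -> list:
    # Untrusted input is interpolated straight into the query, so
    # name = "x' OR '1'='1" returns every row in the table.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

# The security-modeling question (where does untrusted input cross a
# boundary?) is what tells a reviewer to demand parameterization:
def find_user(conn: sqlite3.Connection, name: str) -> list:
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Nothing in the unsafe version fails until an adversary shows up, which is exactly why “the code runs” and “the system is correct” are different claims.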
What Is Actually Changing, and Why It Matters
I want to be careful not to minimize the genuine change that is happening, because minimizing it would be a different kind of error.
The role of software engineers is shifting. That shift is real, it is already underway, and it will accelerate. Engineers who spend most of their time on work that AI can now do competently will face genuine pressure. Entry-level positions that were primarily about coding are changing in character. The path from junior engineer to senior engineer, which historically involved accumulating years of experience writing code, is going to look different as the code-writing component becomes something that AI handles with increasing competence.
But “the role is changing” and “the role is becoming obsolete” are not the same claim, and confusing them leads to conclusions that are not useful for the engineers trying to navigate this moment.
What the shift actually looks like, from my perspective and from the experience of the engineers I work with and observe, is that the work of software engineering is moving up the abstraction stack. The questions that senior engineers have always spent time on, but often could not spend enough time on because the code had to be written by someone, are becoming more central to the role. Those questions include the ones I described above: architecture, security, scalability, organizational knowledge management, product alignment. They also include a new category of questions specific to this moment: how do you effectively direct AI tools to produce good software? How do you review AI-generated code with the critical eye necessary to catch what the AI got wrong? How do you integrate AI-assisted development into workflows that preserve the institutional knowledge and architectural coherence that good systems require?
These are genuinely hard questions. They require more engineering sophistication, not less. The engineers who will be most valuable in this environment are not the ones who resisted AI tools, and they are not the ones who trusted AI tools uncritically. They are the ones who understand their systems deeply enough to know when AI-generated code is right and when it is plausible-looking but wrong.
The Argument from Anthropic’s Own Engineers
I want to address directly the example Amodei used to support his claim, because it is both the most compelling part of his argument and the most misleading.
He said that engineers at Anthropic told him they no longer write code, they just let the model write it and edit the result. This is real, and it reflects something genuine about how work has changed at the frontier of AI development. But Anthropic’s engineers are not a representative sample of software engineering, and their experience at a company building the tools themselves does not generalize straightforwardly to the broader industry.
The engineers at Anthropic who are editing rather than writing are still making the architectural decisions that determine what the AI generates. They are still specifying the behavior, reviewing the output for correctness, catching the cases where the model produced something that looks right but is subtly wrong, and making the judgment calls about when the generated code is acceptable and when it needs to be directed differently. That work requires deep engineering expertise. It is not coding in the traditional sense, but it is absolutely software engineering.
There is also a selection effect worth noting. The engineers at Anthropic are working on systems where the tooling is optimized for exactly this workflow, where the code generation capabilities are at the frontier, and where the organizational context supports this mode of work in ways that most companies are not yet set up to replicate. The experience of an Anthropic engineer editing AI-generated code for an AI system is not the same as the experience of an engineer at a financial institution trying to use AI tools to modify a fifteen-year-old codebase with complex regulatory requirements and inadequate documentation.
Why I Am Actually Optimistic About This Shift
Here is the perspective that I think gets lost in the panic: the change that Amodei is pointing to, if understood correctly, is one that most experienced engineers should genuinely welcome.
The parts of software engineering that are being automated are, in many cases, the parts that experienced engineers found least interesting and most draining. Writing boilerplate. Translating known patterns into language-specific implementations. Producing test cases for well-specified behavior. Documenting code whose function is straightforward. These tasks consumed real time and cognitive energy that could have gone toward more interesting and more valuable work.
The parts of software engineering that are not being automated are the parts that skilled engineers have always found most engaging and most valuable: the architecture, the security modeling, the product alignment, the organizational and leadership dimensions, the deep domain understanding that produces good systems rather than just working ones. A shift that moves more engineering time toward those tasks and away from mechanical implementation is not obviously bad for people who got into this field because they wanted to solve hard problems.
I am genuinely glad that software engineers will spend more time on product vision, technical scalability, and software stability. Those are areas that have historically been chronically underinvested in, not because they were less important than coding, but because the coding demanded so much time that the thinking was often deferred. If AI tools give engineering teams the capacity to do more of the thinking, and less of the mechanical translation of that thinking into syntax, that seems like a net improvement for the quality of software that gets built.
The engineers who will struggle in this environment are the ones whose value was primarily in their ability to write code quickly and correctly, and who have not developed the deeper capabilities that good software engineering requires. That is a real challenge, and it is worth taking seriously. But it is a challenge about which skills to develop, not a signal that the profession is disappearing.
What Engineers Should Actually Be Doing Right Now
If you are a software engineer watching this conversation and wondering what it means for your career, here is my honest advice based on two decades of watching the industry evolve through multiple waves of tools and platforms that were supposed to change everything.
Invest in the capabilities that AI cannot replicate. Deep domain understanding in your area. Architectural thinking. Security modeling. The ability to reason about systems under failure conditions. The organizational skills of building shared understanding, making decisions legible, and managing technical complexity in ways that teams can navigate over time. These are the capabilities that have always separated good engineers from exceptional ones, and they are the capabilities that will matter most as the code-writing layer becomes more automated.
Become genuinely good at directing and reviewing AI-generated code. This is not the same skill as writing code, and it is not trivially easier. It requires understanding what AI tools do well, where they make characteristic mistakes, and what kinds of review are necessary to catch the cases where output looks correct but is not. This is a new skill, and it is one that will differentiate engineers who are effective in AI-assisted workflows from those who are not.
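As one hypothetical illustration of a characteristic mistake (the function names and the UTC convention are invented for the example), consider how a model can produce time-handling code that reads as obviously fine:

```python
from datetime import datetime, timezone

# Typical generated output: idiomatic, readable, passes a quick glance.
# But datetime.now() returns a naive timestamp in server-local time.
def is_expired_draft(deadline: datetime) -> bool:
    return datetime.now() > deadline

# A critical review asks: what is this system's convention for time?
# If deadlines are UTC-aware, the draft raises TypeError; if they are
# naive, it silently compares in whatever timezone the server runs in.
def is_expired(deadline: datetime) -> bool:
    # Enforces the (assumed) codebase convention: UTC-aware throughout.
    return datetime.now(timezone.utc) > deadline
```

The skill being exercised here is not syntax; it is knowing the system’s conventions well enough to see that plausible output violates them.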
Invest in understanding the broader context of the systems you work on. The engineers who are most valuable in AI-assisted development environments are the ones who understand the business domain, the regulatory environment, the organizational constraints, and the product strategy well enough to direct AI tools effectively and review their output critically. That understanding comes from engagement with the broader context, not just from mastery of the code.
Resist the binary framing of this conversation. The choice is not between “AI replaces software engineers” and “nothing is changing.” The actual situation is more nuanced and more interesting: the tools are changing, the workflows are changing, the mix of skills that is most valuable is changing, and the engineers who navigate that change thoughtfully will be more valuable, not less.
A Final Word on Predictions at Davos
It is also worth noting, without dismissing Amodei’s observation, that the World Economic Forum in Davos has a long and somewhat undistinguished history as a venue for confident predictions about technological transformation that turn out to be premature, overstated, or simply wrong in their timeline.
Amodei himself admitted he was not completely certain how fast the transition will happen, noting that some components, like chip manufacturing and model training, cannot yet be automated. The hedged version of his claim is more defensible than the headline version, and the headline version is what drove most of the reaction.
The gap between “AI is getting very good at generating code and this will change how software engineers work” and “software engineering will be completely obsolete in six to twelve months” is large. The first claim is clearly true. The second is almost certainly not. Understanding the difference between them is the kind of precise thinking that, incidentally, is exactly what good software engineers are trained to do.
The discipline is not going anywhere. The nature of the work is changing. That is a distinction worth holding onto, especially when the headlines are generating more heat than the underlying reality warrants.
This is a topic I think about constantly and will continue writing about as the AI and software engineering landscape develops. If you have thoughts, disagreements, or experiences from your own team that push back on this analysis, I genuinely want to hear them. Follow along on LinkedIn or reach me through the contact page. And if this post resonated with how you are thinking about your career or your team, share it with someone who is navigating the same questions. The comments section below is open.