
Signal Through the Noise

Honest takes on code, AI, and what actually works


You Don’t Want a Claude Code Guru

Posted on February 9, 2026 by ivan.turkovic

The job posting practically writes itself these days. “Looking for a senior developer proficient with AI coding tools. Must be comfortable using Claude Code, Cursor, or Copilot to rapidly produce production-ready code. We need someone who can 10x our output.”

I have seen variations of this everywhere over the past year. Companies scrambling to find someone who can sit in front of an AI assistant and generate mountains of code every single day. The promise is irresistible: plug an AI-fluent engineer into your team and watch features materialize at unprecedented speed.

But here is the uncomfortable truth that two decades of building software keeps teaching me, over and over again: that is not actually what you want. And if you think it is, you are about to learn an expensive lesson.

The Bottleneck Was Never Typing Speed

Think about the last project that failed or stalled at your company. Was the root cause that your developers could not write code fast enough? Almost certainly not. The project stalled because requirements were unclear. Because three departments had conflicting visions for the product. Because someone made an architectural decision in month one that created compounding technical debt by month six. Because nobody truly understood the user’s actual problem, so the team built an elegant solution to the wrong question.

This pattern has repeated itself across every company, every team, and every technology generation I have worked with. From early startups wrestling with monolithic architectures to fintech platforms processing millions of transactions, the constraint was never the speed of code production. It was always clarity. Clarity about what to build, why it matters, and how it fits into the larger system.

The limiting factor of software engineering has always been product vision and organizational friction. Writing code faster does not fix either of those things. If anything, it makes them worse, because now you can build the wrong thing at remarkable speed.

AI Amplifies What Already Exists

There is a principle I have come to rely on after years of integrating AI tools into real engineering workflows: AI is an amplifier, not a transformer. It takes whatever exists in your organization and makes it louder. If you have clear product thinking, strong architectural foundations, and a team that genuinely understands user problems, AI tools like Claude Code will make that team extraordinarily productive. They will move mountains.

But if you have muddy requirements, disconnected stakeholders, and engineers who have never been given the context to understand why they are building what they are building, AI will amplify that chaos. You will get more code, faster, solving problems nobody asked for, in architectures nobody planned, creating dependencies nobody anticipated. The codebase grows. The clarity shrinks. And debugging AI-generated code that was written without architectural intent is a special kind of frustrating, because the code often looks reasonable on the surface while being fundamentally misaligned with the system’s actual needs.

I have watched this unfold in real time. A team introduces Claude Code with great enthusiasm. Output triples in the first sprint. But by sprint three, integration issues start surfacing. By sprint five, half the velocity gains have been consumed by rework. The AI wrote exactly what it was told to write. The problem was that nobody told it the right things, because nobody in the room had a clear enough picture of the whole system to give the right instructions.

The Skills That Actually Matter Now

When I look at the engineers who thrive with AI tools, they share a set of characteristics that have nothing to do with prompt engineering or knowing the latest IDE plugin. They are the same characteristics that made great engineers before AI: deep understanding of business domains, the ability to decompose complex problems into well-defined pieces, architectural thinking that considers how components interact over time, and genuine empathy for the end user.

What has changed is that these skills now have leverage they never had before. An architect who can clearly define a system boundary, specify the contracts between services, and anticipate the failure modes can now use AI to implement the pieces at extraordinary speed. But the architecture came first. The clarity came first. The AI just executed.

This is something I have observed consistently across every domain I have worked in, from blockchain payment systems to high-availability fintech platforms. The complexity was never in writing the code. The complexity was in understanding what the payment flow needed to handle, how the compliance layer intersected with the transaction pipeline, what happened when a third-party API timed out at the worst possible moment, and how the system needed to behave when everything went wrong at once. Those are questions that require deep business understanding, systems thinking, and years of seeing what actually breaks in production. No AI tool, no matter how sophisticated, can substitute for that kind of earned knowledge.

Delegation, Not Dictation

There is a useful analogy that I keep returning to: working with AI coding tools is much closer to leading a team than it is to writing code yourself. When you manage a team of engineers, you do not dictate every line of code. You set direction. You define the problem clearly. You establish architectural constraints and quality standards. Then you trust your team to execute while you review their work critically.

The engineers who struggle with AI tools are often the ones who try to dictate. They write overly specific prompts, trying to control every implementation detail, and then get frustrated when the output does not match their mental model. The engineers who excel are the ones who delegate. They describe the problem, set the boundaries, and then evaluate the result with experienced eyes.

This is why team leadership experience translates so directly into AI effectiveness. If you have spent years learning how to communicate intent clearly to other humans, how to review work critically without micromanaging, and how to detect when someone is confidently producing something subtly wrong, you already have the skills that matter most. The AI is just another team member. A very fast, very tireless team member who needs clear direction and careful review.

I have found that the ability to spot when AI is confidently wrong is perhaps the most underrated skill in this new landscape. Large language models do not hesitate. They do not say “I’m not sure about this architectural choice.” They produce code with the same confidence whether the approach is brilliant or deeply flawed. Catching those moments requires the kind of experience that only comes from having built, broken, and rebuilt real systems over many years.

What You Actually Need to Build

If you are a founder, a CTO, or an engineering leader trying to figure out how to harness AI effectively, stop looking for a “Claude Code guru.” Instead, look for (or develop) people who have these qualities:

First, genuine empathy for users. Not the surface-level “we did user research” variety, but the deep kind where the engineer can articulate why a user behaves a certain way, what frustrates them, and what they actually need versus what they say they want. When an engineer with this understanding sits down with an AI tool, every prompt carries intent. Every feature request is grounded in reality.

Second, architectural fluency. Someone who can look at a system and see not just the code, but the forces acting on it: traffic patterns, data flows, failure cascades, scaling pressures, and integration boundaries. This is the person who prevents the AI from generating a technically correct but architecturally catastrophic solution. This kind of thinking comes from years of designing systems that had to survive contact with real users, real data volumes, and real business constraints.

Third, ownership mindset. The difference between a developer who writes features and an engineer who owns outcomes is enormous, and it becomes even more pronounced with AI tools. When someone truly owns a problem, they do not just generate code. They question whether the feature should exist at all. They consider how it affects the rest of the system. They think about what happens six months from now when requirements change. They use AI to move faster toward a destination they have chosen deliberately, not to generate output for its own sake.

Fourth, the courage to say “this is the wrong problem.” The most valuable thing an experienced engineer can do in a room full of stakeholders is redirect the conversation. Not just build what was asked for, but challenge whether it should be built at all. AI makes building cheap. That makes choosing what to build the most important decision in the room. And that decision requires business understanding, technical depth, and the communication skills to bring people along.

Then, and Only Then, Mountains Move

Here is what I have seen work, consistently. You build a team (or find an individual) that understands the business deeply. Someone who has sat in the meetings where strategy is set and also in the trenches where production incidents happen at 2 AM. Someone who can translate between the language of business stakeholders and the language of systems. Someone who has designed architectures that had to evolve over years, not just pass a sprint demo.

You give that team real ownership. Not just responsibility for writing code, but ownership of outcomes. Let them talk to users. Let them push back on requirements. Let them make architectural decisions and live with the consequences.

Then you hand them Claude Code, or whatever AI tool fits their workflow. And something remarkable happens. The bottleneck dissolves. Not because the AI writes code fast, but because every line of code the AI writes is aimed at the right target. The architecture is sound because someone with experience defined it. The features solve real problems because someone with empathy specified them. The code integrates cleanly because someone with systems thinking anticipated the interactions.

That is when mountains actually move. Not when you hire a prompt engineer who can generate 10,000 lines a day, but when you empower someone who knows which 200 lines actually matter.

The Uncomfortable Implication

There is an uncomfortable implication in all of this that most organizations do not want to face: AI does not solve organizational problems. If your product vision is unclear, AI will build unclear products faster. If your team lacks autonomy, AI will help them execute bad decisions more efficiently. If nobody in your organization truly understands the user, AI will generate plausible-looking features that miss the mark at unprecedented speed.

The organizations that will win with AI are the ones that were already doing the hard, unglamorous work of building clear product thinking, strong engineering culture, and genuine user understanding. AI just removes the last remaining friction between a good idea and its implementation.

This is why I keep writing about the human side of software engineering, even as the tools become more powerful. Because the tools are never the constraint. The thinking is the constraint. The clarity is the constraint. The willingness to truly understand a problem before rushing to solve it is the constraint. And those are fundamentally human capabilities that no amount of AI sophistication will replace.

The Real Question to Ask

So the next time you are tempted to hire a “Claude Code guru” or search for someone who can “leverage AI to 10x productivity,” pause and ask a different question. Ask: “Do we have someone who truly understands our users, our architecture, and our business? Someone who can see the whole board, not just the next move? Someone who has the experience to know what works in production, not just in demos?”

If the answer is yes, give them AI tools and get out of their way. You will be amazed at what happens.

If the answer is no, no AI tool in the world will fill that gap. Start there instead.


I write about the intersection of AI, software architecture, and engineering leadership at ivanturkovic.com. If these ideas resonate with how you think about building software, I would love to hear your perspective. Follow me for more writing on what actually matters in this new era of AI-assisted development, or reach out directly if you want to continue the conversation. What has been your experience: has AI changed what you build, or just how fast you build it?

