The AI isn’t going to be on call at 2 AM when things go down.

Posted on May 22, 2025 by ivan.turkovic

Large Language Models (LLMs) like ChatGPT, Copilot, and others are becoming a regular part of software development. Many developers use them to write boilerplate code, help with unfamiliar syntax, or even generate whole modules. On the surface, it feels like a productivity boost. The work goes faster, the PRs are opened sooner, and there’s even time left for lunch.

But there’s something underneath this speed, something we’re not talking about enough. The real issue with LLM-generated code is not that it helps us ship more code, faster. The real issue is liability.


Code That Nobody Owns

There’s a strange effect happening in teams using AI to generate code: nobody feels responsible for it.

It’s like a piece of code just appeared in your codebase. Sure, someone clicked “accept,” but no one really thought through the consequences. This is not new; we saw the same thing with frameworks and compilers that generated code automatically. If no human wrote it, then no human cares deeply about maintaining or debugging it later.

LLMs are like that, but on a massive scale.


The “Average” Problem

LLMs are trained on a massive corpus of public code. What they produce is a kind of rolling average of everything they’ve seen. That means the code they generate isn’t written with care or with deep understanding of your system. It’s not great code. It’s average code.

And as more people use LLMs to write code, and that generated code becomes part of new training data, model quality might even degrade over time: it becomes an average of an average.

This is not just about style or design patterns. It affects how you:

  • Deliver software
  • Observe and monitor systems
  • Debug real-world issues
  • Write secure applications
  • Handle private user data responsibly

LLMs don’t truly understand these things. They don’t know what matters in your architecture, how your team works, or what your specific constraints are. They just parrot what’s most statistically likely to come next in the code.


A Fast Start, Then a Wall

So yes, LLMs speed up the easiest part of software engineering: writing code.

But the hard parts remain:

  • Understanding the domain
  • Designing for change
  • Testing edge cases
  • Debugging production issues
  • Keeping systems secure and maintainable over time

These are the parts that hurt when the codebase grows and evolves. These are the parts where “fast” turns into “fragile.”


Example: Generated Code Without Accountability

Imagine you ask an LLM to generate a payment service. It might give you something that looks right, maybe even something that runs with your Stripe keys and includes some basic error handling.

But:

  • What happens with race conditions?
  • What if fraud detection fails silently?
  • What if a user gets double-charged?
  • Who is logging what?
  • Is the payment idempotent?
  • Is sensitive data like credit cards being exposed in logs?

If no one really “owns” that code because it was mostly written by an AI, these questions might only surface after things go wrong. And in production, that can be very costly.
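
To make the stakes concrete, here is a minimal sketch (in Python) of what owning that code might look like. Everything in it is hypothetical: gateway, the in-memory charges dict, and mask_card are illustrative stand-ins, not a real payment integration.

import hashlib
import logging

logger = logging.getLogger("payments")

def mask_card(card_number: str) -> str:
    """Never log the full card number; keep only the last four digits."""
    return "****" + card_number[-4:]

def charge(gateway, charges: dict, order_id: str, amount_cents: int,
           card_number: str) -> dict:
    # Idempotency: derive a stable key from the order so a retry
    # (network blip, double click) cannot double-charge the user.
    key = hashlib.sha256(order_id.encode()).hexdigest()
    if key in charges:
        logger.info("duplicate charge for order %s; returning prior result", order_id)
        return charges[key]

    # Sensitive data: log the masked card, never the raw number.
    logger.info("charging %s cents for order %s (card %s)",
                amount_cents, order_id, mask_card(card_number))

    result = gateway.charge(amount_cents, card_number, idempotency_key=key)
    # Note: check-then-set on a plain dict is not race-safe across processes;
    # a real service would enforce uniqueness in the database or at the gateway.
    charges[key] = result
    return result

None of this is hard to write, but each line answers one of the questions above, and someone has to decide on the answers.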


So What’s the Better Approach?

LLMs can be great tools, especially for experienced engineers who treat them like assistants, not authors.

To use LLMs responsibly in your team:

  • Review AI-generated code with care.
  • Assign clear ownership, even for generated components.
  • Add context-specific tests and documentation (see the sketch after this list).
  • Educate your team on the why, not just the how.
  • Make accountability a core part of your development process.
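
As a hedged illustration of the testing point above, here is what a context-specific test might look like for the hypothetical charge() sketch from earlier. FakeGateway and the invariant it checks (a retry must never hit the gateway twice) are assumptions for illustration.

class FakeGateway:
    """Test double standing in for the hypothetical payment gateway."""
    def __init__(self):
        self.calls = 0

    def charge(self, amount_cents, card_number, idempotency_key=None):
        self.calls += 1
        return {"status": "succeeded", "amount": amount_cents}

def test_retry_does_not_double_charge():
    gateway, charges = FakeGateway(), {}
    first = charge(gateway, charges, "order-42", 1999, "4242424242424242")
    second = charge(gateway, charges, "order-42", 1999, "4242424242424242")
    assert second is first      # the cached result is returned on the retry
    assert gateway.calls == 1   # the gateway was charged exactly once

A generated payment module will not arrive with this test, because the invariant lives in your domain, not in the training data.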

Because in the end, you are shipping the product. The AI isn’t going to be on call at 2 AM when things go down.


Final Thoughts

LLMs give us speed. But they don’t give us understanding, judgment, or ownership. If you treat them as shortcuts to ship more code, you may end up paying the price later. But if you treat them as a tool and keep responsibility where it belongs, they can still be part of a healthy, sustainable development process.

Thanks for reading. If you’ve seen this problem in your team or company, I’d love to hear how you’re dealing with it.
