The AI isn’t going to be on call at 2 AM when things go down.

Large Language Models (LLMs) like ChatGPT, Copilot, and others are becoming a regular part of software development. Many developers use them to write boilerplate code, help with unfamiliar syntax, or even generate whole modules. On the surface, it feels like a productivity boost. The work goes faster, the PRs are opened sooner, and there’s even time left for lunch.

But there’s something underneath this speed, something we’re not talking about enough. The real issue with LLM-generated code is not that it helps us ship more code, faster. The real issue is liability.


Code That Nobody Owns

There’s a strange effect happening in teams using AI to generate code: nobody feels responsible for it.

It’s like a piece of code just appeared in your codebase. Sure, someone clicked “accept,” but no one really thought through the consequences. This is not new; we saw the same thing with frameworks and compilers that generated code automatically. If no human wrote it, then no human cares deeply about maintaining or debugging it later.

LLMs are like that, but on a massive scale.


The “Average” Problem

LLMs are trained on a massive corpus of public code. What they produce is a kind of rolling average of everything they’ve seen. That means the code they generate isn’t written with care or with deep understanding of your system. It’s not great code. It’s average code.

And as more and more people use LLMs to write code, and that code becomes part of new training data, model quality might even degrade over time: it becomes an average of an average.

This is not just about style or design patterns. It affects how you:

  • Deliver software
  • Observe and monitor systems
  • Debug real-world issues
  • Write secure applications
  • Handle private user data responsibly

LLMs don’t truly understand these things. They don’t know what matters in your architecture, how your team works, or what your specific constraints are. They just parrot what’s most statistically likely to come next in the code.


A Fast Start, Then a Wall

So yes, LLMs speed up the easiest part of software engineering: writing code.

But the hard parts remain:

  • Understanding the domain
  • Designing for change
  • Testing edge cases
  • Debugging production issues
  • Keeping systems secure and maintainable over time

These are the parts that hurt when the codebase grows and evolves. These are the parts where “fast” turns into fragile.


Example: Generated Code Without Accountability

Imagine you ask an LLM to generate a payment service. It might give you something that looks right, maybe even something that works, wired up to your Stripe keys, with some basic error handling.

But:

  • What happens with race conditions?
  • What if fraud detection fails silently?
  • What if a user gets double-charged?
  • Who is logging what?
  • Is the payment idempotent?
  • Is sensitive data like credit cards being exposed in logs?

If no one really “owns” that code because it was mostly written by an AI, these questions might only surface after things go wrong. And in production, that can be very costly.
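To make two of those questions concrete, idempotency and not leaking card data into logs, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in: the in-memory store, the charge-ID scheme, and the function names are illustrative assumptions, not a real payment integration (a real service would back the idempotency check with a database unique constraint):

```python
import hashlib
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("payments")

# Hypothetical in-memory record of processed idempotency keys;
# a real service would use a durable store with a unique constraint.
_processed: dict[str, str] = {}


def redact_card(card_number: str) -> str:
    """Keep only the last four digits so full card numbers never reach the logs."""
    return "****" + card_number[-4:]


def charge(user_id: str, amount_cents: int, card_number: str, idempotency_key: str) -> str:
    """Charge a card at most once per idempotency key (illustrative sketch)."""
    if idempotency_key in _processed:
        # A retry (e.g. after a network timeout) gets the original result
        # back instead of double-charging the user.
        log.info("duplicate request %s, returning cached charge", idempotency_key)
        return _processed[idempotency_key]

    # Fake charge ID; a real integration would get this from the payment provider.
    charge_id = hashlib.sha256(f"{user_id}:{idempotency_key}".encode()).hexdigest()[:12]
    log.info("charged user=%s amount=%d card=%s",
             user_id, amount_cents, redact_card(card_number))
    _processed[idempotency_key] = charge_id
    return charge_id
```

Notice that neither concern is hard to implement; the point is that someone has to know these questions exist and decide to answer them, which is exactly what gets skipped when nobody owns the generated code.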


So What’s the Better Approach?

LLMs can be great tools, especially for experienced engineers who treat them like assistants, not authors.

To use LLMs responsibly in your team:

  • Review AI-generated code with care.
  • Assign clear ownership, even for generated components.
  • Add context-specific tests and documentation.
  • Educate your team on the why, not just the how.
  • Make accountability a core part of your development process.
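As a tiny illustration of what “context-specific tests” means, here is a hypothetical example: both the business rule (refunds may never exceed the original charge) and the function are stand-ins for whatever constraints your own system has. The point is that an LLM cannot know this rule, so a human has to pin it down:

```python
def apply_refund(charge_cents: int, refund_cents: int) -> int:
    """Return the remaining balance after a refund, enforcing our domain rule."""
    if refund_cents > charge_cents:
        raise ValueError("refund exceeds original charge")
    return charge_cents - refund_cents


def test_refund_cannot_exceed_charge():
    # The happy path behaves as expected...
    assert apply_refund(1000, 400) == 600
    # ...and the domain rule is actually enforced, not just assumed.
    try:
        apply_refund(1000, 1500)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

A generic AI-generated test suite would happily cover the happy path; it is the second assertion, the one encoding your domain, that only a human who owns the code will think to write.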

Because in the end, you are shipping the product. The AI isn’t going to be on call at 2 AM when things go down.


Final Thoughts

LLMs give us speed. But they don’t give us understanding, judgment, or ownership. If you treat them as shortcuts to ship more code, you may end up paying the price later. But if you treat them as a tool and keep responsibility where it belongs they can still be part of a healthy, sustainable development process.

Thanks for reading. If you’ve seen this problem in your team or company, I’d love to hear how you’re dealing with it.

AI Isn’t Leveling the Playing Field: It’s Amplifying the Gap

We were told that AI would make development more accessible. That it would “level the playing field,” empower juniors, and help more people build great software.

That’s not what I’m seeing.

In reality, AI is widening the gap between junior and senior developers, and fast.


Seniors Are 10x-ing With AI

For experienced engineers, AI tools like ChatGPT and GitHub Copilot are a multiplier.

Why?

Because they know:

  • What to ask
  • How to evaluate the answers
  • What matters in their system
  • How to refactor and harden code
  • When to ignore the suggestion completely

Seniors are using AI the same way a great chef uses a knife: faster, safer, more precise.


Juniors Are Being Left Behind

Many junior developers, especially those early in their careers, don’t yet have the experience to judge what’s good, bad, or dangerous. And here’s the issue:

AI makes it look like they’re productive until it’s time to debug, optimize, or maintain the code.

They’re often:

  • Copy-pasting solutions without understanding the trade-offs
  • Relying on AI to write tests they wouldn’t know how to write themselves
  • Shipping code that works on the surface, but is fragile underneath

What they’re building is a slow-burning fire of tech debt, and they don’t even see the smoke.


Prompting Isn’t Engineering

There’s a new kind of developer emerging: one who can write a great prompt but can’t explain a stack trace.

That might sound harsh, but I’ve seen it first-hand. Without a foundation in problem-solving, architecture, debugging, and security, prompting becomes a crutch, not a tool.

Good engineering still requires:

  • Judgment
  • Pattern recognition
  • Systems thinking
  • Curiosity
  • Accountability

AI doesn’t teach these. Mentorship does.


Where Is the Mentorship?

In many teams, mentorship is already stretched thin. Now we’re adding AI to the mix, and some companies expect juniors to “just figure it out with ChatGPT.”

That’s not how this works.

The result? Juniors are missing the critical lessons that turn coding into engineering:

  • Why things are built the way they are
  • What trade-offs exist and why they matter
  • How to debug a system under load
  • When to break patterns
  • How to think clearly under pressure

No AI can give you that. You only get it from real experience and real guidance.


What We Can Do

If you’re a senior engineer, now is the time to lean into mentorship, not pull away.

Yes, AI helps you move faster. But if your team is growing and you’re not helping juniors grow too, you’re building speed on a weak foundation.

If you’re a junior, use AI but don’t trust it blindly. Try to understand everything it gives you. Ask why. Break it. Fix it. Learn.

Because here’s the truth:

AI won’t make you a better engineer. But it will make great engineers even better.

Don’t get left behind.


Final Thoughts

AI isn’t the enemy. But it’s not a shortcut to seniority either. We need to be honest about what it’s good for and where it’s failing us.

Let’s stop pretending it’s a magic equalizer. It’s not.

It’s a magnifier.
If you’re already strong, it makes you stronger.
If you’re still learning, it can hide your weaknesses until they blow up.

May 1st, Workers’ Day: Reflecting on the Future of Developers in the Age of AI

Every May 1st, we celebrate Workers’ Day, a moment to recognize the hard work, progress, and dignity of people across every profession. Traditionally, it’s a day to honor laborers, craftsmen, and knowledge workers alike.

Today, in 2025, it’s worth asking: What does Workers’ Day mean for developers, engineers, and tech builders when AI is rewriting the rules of work?

Software development, once a purely manual craft, is being transformed. AI coding assistants, automated testing, and AI-driven architecture design are fundamentally reshaping what “developer work” looks like.

This shift is both exciting and unsettling. And on a day meant to honor workers, it’s important to pause, reflect, and imagine the future of our profession.


Developers and the Changing Nature of Work

Writing software has historically been seen as highly skilled, highly manual labor: an exercise in translating human logic and creativity into machine instructions. It’s an intellectual craft.

But the arrival of advanced AI tools like GitHub Copilot, ChatGPT, and autonomous coding agents is shifting where the true value lies:

  • Repetitive coding is being automated.
  • Simple application patterns are being commoditized.
  • Technical excellence is being augmented and sometimes replaced by strategic and creative excellence.

Just as factory workers once faced automation during the industrial revolutions, developers today are facing an “intellectual automation” wave. The tasks that defined junior and even mid-level engineering roles are changing fast.


AI Is Not Eliminating Developers. But It’s Redefining Them

There’s a common fear that AI will “replace” developers.

That’s not exactly what’s happening.

Instead, AI is redefining the developer’s role:

  • From a pure code generator → to a system designer and critical thinker.
  • From solving known problems → to exploring unknown problems.
  • From typing faster → to asking better questions.

The core of great software development will become less about how many lines of code you can write, and more about:

  • How clearly you understand the problem.
  • How creatively you design the solution.
  • How effectively you validate assumptions with real users.

In short: technical execution is being democratized, but insight, creativity, and judgment are becoming even more valuable.


Workers’ Rights and Respect in a Post-AI World

Workers’ Day isn’t just about jobs; it’s about dignity, fairness, and the value of human effort.

As AI reshapes engineering, companies, leaders, and communities must ensure that:

  • Developers are given opportunities to reskill and evolve.
  • Workplaces value critical thinking and creativity over brute-force output.
  • Workers are recognized for their judgment, leadership, and empathy, not just their typing speed.

The dignity of developer work must evolve, but it must not disappear.

The risk is that companies, chasing efficiency, might treat developers as mere “AI supervisors” or “prompt engineers,” undervaluing the strategic, human essence of great engineering work.

This May 1st, we should celebrate not just what developers do, but how they think, imagine, and create.


The Future of Developer Work: What Matters Now

So, if you’re a developer, or leading a tech team, what skills and mindsets will matter most in the future?

1. Product Thinking
Understanding user needs, business goals, and the “why” behind the “what” will be essential.

2. Critical Reasoning
Knowing when AI-generated code is wrong, incomplete, or misaligned with real-world requirements.

3. Creativity and Innovation
Thinking beyond what’s obvious. AI can suggest patterns, but it can’t invent bold new paradigms yet.

4. Collaboration and Empathy
Working with designers, PMs, stakeholders, and users will become even more central.

5. Strategic Use of AI
Knowing how to use AI, when to trust it, and when human judgment must take over.


A New Kind of Craftsmanship

In many ways, the evolution we’re living through isn’t the end of craftsmanship in software; it’s a renaissance.

The tools have changed. The basic tasks have changed. But the core spirit of the developer, the builder, the solver, the creator, remains vital.

If anything, AI makes the human parts of engineering even more valuable:

  • Vision.
  • Judgment.
  • Ethical thinking.
  • Originality.
  • Connection to human needs.

That’s worth celebrating this Workers’ Day and fighting for in the years ahead.


Final Thoughts: Building the Future Intentionally

This May 1st, let’s honor developers not for how fast they can code, but for how thoughtfully they can shape the future.

Let’s champion workplaces that invest in human skills, not just AI tools.

And let’s remember that while technology evolves, the heart of work, the pursuit of meaning, the quest to make things better, remains uniquely, beautifully human.


Need help preparing your tech team for the AI era?

I help companies and teams rethink their product strategies, evolve their engineering practices, and align AI innovation with real-world user needs.

With 20+ years of experience across tech, fintech, and startup ecosystems, I can help your organization:

  • Embrace AI smartly and strategically
  • Foster product thinking across engineering
  • Future-proof your development practices

📩 Contact me here or visit ivanturkovic.com to learn more.

Let’s build a smarter, more human-centered future together.