The AI isn’t going to be on call at 2 AM when things go down.

Large Language Models (LLMs) like ChatGPT, Copilot, and others are becoming a regular part of software development. Many developers use them to write boilerplate code, help with unfamiliar syntax, or even generate whole modules. On the surface, it feels like a productivity boost. The work goes faster, the PRs are opened sooner, and there’s even time left for lunch.

But there’s something underneath this speed, something we’re not talking about enough. The real issue with LLM-generated code is not that it helps us ship more code, faster. The real issue is liability.


Code That Nobody Owns

There’s a strange effect happening in teams using AI to generate code: nobody feels responsible for it.

It’s like a piece of code just appeared in your codebase. Sure, someone clicked “accept,” but no one really thought through the consequences. This is not new; we saw the same thing with frameworks and compilers that generated code automatically. If no human wrote it, then no human cares deeply about maintaining or debugging it later.

LLMs are like that, but on a massive scale.


The “Average” Problem

LLMs are trained on a massive corpus of public code. What they produce is a kind of rolling average of everything they’ve seen. That means the code they generate isn’t written with care or with deep understanding of your system. It’s not great code. It’s average code.

And as more and more people use LLMs to write code, and that code becomes part of new training data, the model quality might even degrade over time: it becomes an average of an average.

This is not just about style or design patterns. It affects how you:

  • Deliver software
  • Observe and monitor systems
  • Debug real-world issues
  • Write secure applications
  • Handle private user data responsibly

LLMs don’t truly understand these things. They don’t know what matters in your architecture, how your team works, or what your specific constraints are. They just parrot what’s most statistically likely to come next in the code.


A Fast Start, Then a Wall

So yes, LLMs speed up the easiest part of software engineering: writing code.

But the hard parts remain:

  • Understanding the domain
  • Designing for change
  • Testing edge cases
  • Debugging production issues
  • Keeping systems secure and maintainable over time

These are the parts that hurt when the codebase grows and evolves. These are the parts where “fast” turns into fragile.


Example: Generated Code Without Accountability

Imagine you ask an LLM to generate a payment service. It might give you something that looks right, maybe even something that works with your Stripe keys and includes some basic error handling.

But:

  • What happens with race conditions?
  • What if fraud detection fails silently?
  • What if a user gets double-charged?
  • Who is logging what?
  • Is the payment idempotent?
  • Is sensitive data like credit cards being exposed in logs?

If no one really “owns” that code because it was mostly written by an AI, these questions might only surface after things go wrong. And in production, that can be very costly.
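
To make the stakes concrete, here is a minimal, illustrative Ruby sketch of the kind of safeguards a human owner would insist on: an idempotency key so retries cannot double-charge, and logging that never touches the raw card number. PaymentService, its fields, and its in-memory store are hypothetical names for illustration, not a real payment library.

require 'digest'
require 'logger'

# Hypothetical payment service sketch: idempotent charges and redacted logging.
# All names here are illustrative; a real service would call an actual gateway.
class PaymentService
  attr_reader :charges

  def initialize(logger: Logger.new($stdout))
    @logger  = logger
    @charges = {} # idempotency_key => result; an in-memory stand-in for a database table
  end

  # Charges at most once per idempotency key; a retry returns the original result.
  def charge(amount_cents:, card_number:, idempotency_key:)
    if (existing = @charges[idempotency_key])
      @logger.info("charge #{idempotency_key} already processed, returning original result")
      return existing
    end

    # Never log the raw card number; keep only the last four digits.
    @logger.info("charging #{amount_cents} cents, card ending in #{card_number[-4..-1]}")

    @charges[idempotency_key] = {
      id:           Digest::SHA256.hexdigest(idempotency_key)[0, 12],
      amount_cents: amount_cents,
      status:       'succeeded' # a real gateway call, and its failure modes, would go here
    }
  end
end

Even this toy version forces the questions above to be answered explicitly, which is exactly the thinking a generated snippet tends to skip.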


So What’s the Better Approach?

LLMs can be great tools, especially for experienced engineers who treat them like assistants, not authors.

To use LLMs responsibly in your team:

  • Review AI-generated code with care.
  • Assign clear ownership, even for generated components.
  • Add context-specific tests and documentation (see the sketch after this list).
  • Educate your team on the why, not just the how.
  • Make accountability a core part of your development process.
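
As one example of a context-specific test, here is a minimal RSpec sketch that pins down the double-charge question from the payment example above. It assumes the hypothetical PaymentService sketch from that section is saved next to it as payment_service.rb; all names are illustrative.

require 'rspec/autorun'             # lets the spec run directly with plain ruby
require_relative 'payment_service'  # the hypothetical sketch from the payment example

RSpec.describe PaymentService do
  it 'does not double-charge when the same idempotency key is retried' do
    service = PaymentService.new(logger: Logger.new(File::NULL))

    first   = service.charge(amount_cents: 1_000, card_number: '4242424242424242',
                             idempotency_key: 'order-42')
    retried = service.charge(amount_cents: 1_000, card_number: '4242424242424242',
                             idempotency_key: 'order-42')

    expect(retried).to eq(first)            # the retry returns the original result
    expect(service.charges.size).to eq(1)   # only one charge was actually recorded
  end
end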

Because in the end, you are shipping the product. The AI isn’t going to be on call at 2 AM when things go down.


Final Thoughts

LLMs give us speed. But they don’t give us understanding, judgment, or ownership. If you treat them as shortcuts to ship more code, you may end up paying the price later. But if you treat them as a tool and keep responsibility where it belongs, they can still be part of a healthy, sustainable development process.

Thanks for reading. If you’ve seen this problem in your team or company, I’d love to hear how you’re dealing with it.

AI Isn’t Leveling the Playing Field, It’s Amplifying the Gap

We were told that AI would make development more accessible. That it would “level the playing field,” empower juniors, and help more people build great software.

That’s not what I’m seeing.

In reality, AI is widening the gap between junior and senior developers, and fast.


Seniors Are 10x-ing With AI

For experienced engineers, AI tools like ChatGPT and GitHub Copilot are a multiplier.

Why?

Because they know:

  • What to ask
  • How to evaluate the answers
  • What matters in their system
  • How to refactor and harden code
  • When to ignore the suggestion completely

Seniors are using AI the same way a great chef uses a knife: faster, safer, more precise.


Juniors Are Being Left Behind

Many junior developers, especially those early in their careers, don’t yet have the experience to judge what’s good, bad, or dangerous. And here’s the issue:

AI makes it look like they’re productive until it’s time to debug, optimize, or maintain the code.

They’re often:

  • Copy-pasting solutions without understanding the trade-offs
  • Relying on AI to write tests they wouldn’t know how to write themselves
  • Shipping code that works on the surface, but is fragile underneath

What they’re building is a slow-burning fire of tech debt, and they don’t even see the smoke.


Prompting Isn’t Engineering

There’s a new kind of developer emerging: one who can write a great prompt but can’t explain a stack trace.

That might sound harsh, but I’ve seen it first-hand. Without a foundation in problem-solving, architecture, debugging, and security, prompting becomes a crutch, not a tool.

Good engineering still requires:

  • Judgment
  • Pattern recognition
  • Systems thinking
  • Curiosity
  • Accountability

AI doesn’t teach these. Mentorship does.


Where Is the Mentorship?

In many teams, mentorship is already stretched thin. Now we’re adding AI to the mix, and some companies expect juniors to “just figure it out with ChatGPT.”

That’s not how this works.

The result? Juniors are missing the critical lessons that turn coding into engineering:

  • Why things are built the way they are
  • What trade-offs exist and why they matter
  • How to debug a system under load
  • When to break patterns
  • How to think clearly under pressure

No AI can give you that. You only get it from real experience and real guidance.


What We Can Do

If you’re a senior engineer, now is the time to lean into mentorship, not pull away.

Yes, AI helps you move faster. But if your team is growing and you’re not helping juniors grow too, you’re building speed on a weak foundation.

If you’re a junior, use AI but don’t trust it blindly. Try to understand everything it gives you. Ask why. Break it. Fix it. Learn.

Because here’s the truth:

AI won’t make you a better engineer. But it will make great engineers even better.

Don’t get left behind.


Final Thoughts

AI isn’t the enemy. But it’s not a shortcut to seniority either. We need to be honest about what it’s good for and where it’s failing us.

Let’s stop pretending it’s a magic equalizer. It’s not.

It’s a magnifier.
If you’re already strong, it makes you stronger.
If you’re still learning, it can hide your weaknesses until they blow up.

Homebrew: How to start and stop background services

Anyone who has installed an application that runs in the background via Homebrew knows about launchctl, which actually launches each of those applications when the computer restarts. It is a pretty straightforward task, but most of the time you need to know the location of the .plist file, installed with Homebrew, that defines how to run the application.

Now there is a quicker way to control them: simply add the following tap to Homebrew:

brew tap gapple/services

You can then start and stop services directly from Homebrew. To get a list of available services, run:

brew services list

For example, if you have installed postgresql, you can start it as a background service with:

brew services start postgresql

or stop the service with:

brew services stop postgresql

This is much easier and simpler than finding the right .plist file inside Homebrew’s Cellar folder.

AngularJS ngInclude directive and scope inheritance

ngInclude directive and scope

There are many times when you want to include an HTML snippet from another file but have it share the current scope. This usually happens when you have different form fields for various objects and want a single controller that oversees updating the different forms. If you take the quickest route and use the ngInclude directive, you may be surprised that it does not link properly to your controller and you cannot access the form instance.

This is due to how ngInclude works internally. For each use, ngInclude creates a new child scope, so anything assigned from inside the included HTML content is written into that child scope and not into the one you anticipated. There are a few workarounds, such as creating a new object inside the scope, for example

$scope.data = {}

inside the controlling controller, and then binding values to it in the imported HTML file:

<input type="text" ng-model="data.name"/>

This works if you don’t mind using that wrapper object in every included HTML file, but if you want maximum flexibility it is not a perfect solution. So after inspecting the source code of ngInclude.js, I saw room for improvement and created a directive similar to ngInclude, called ngInsert, which instead of creating a new child scope keeps using the current scope inside the included content. You can pick up the whole source code at this gist and use it in the same manner as the existing ngInclude. Continue reading “AngularJS ngInclude directive and scope inheritance”

How to record iPhone screen

Recording the iPhone screen has never been easier

Developing a great new iOS application requires not just a usable application, but also a great video presentation of it: how it works, and the basic usability tricks that make users want to get the application you spent numerous hours perfecting.

If you are working alone on the next big thing, or inside a small agency or team, you probably have a limited budget for outsourcing your video production, and one of the hardest parts was shooting the actual content from the iOS device, whether iPhone or iPad. Until now you could record the iOS Simulator or mirror the device over AirPlay. It worked, but it was a somewhat clunky experience and you needed to buy extra software.

With the release of OS X Yosemite everything has changed. Now you can record your iOS device’s screen directly, without buying any additional software. All you need is a computer upgraded to Yosemite and an iPhone or iPad on the latest iOS 8, with the device connected to your computer through a Lightning cable.

Continue reading “How to record iPhone screen”

AngularJS and Ruby on Rails work together

Finding the best integration of AngularJS and Ruby on Rails

Recently I got really excited about AngularJS, and making it work well with Ruby on Rails requires some configuration. There are blog posts on how to integrate the two, but somehow I wasn’t happy with the available approaches. Some suggested adding the JavaScript files to the project manually and organizing them by hand; others offered some kind of gem packager, or even automated the setup as a Rails app template.

My goal is to describe how to start a new Rails app from scratch, but the instructions should be succinct enough to reuse for an existing project (I actually extracted them from the existing application I am working on). For the front-end development I recently discovered a great gem that brings it much closer to pure full-stack JavaScript development.

Bower is a great JavaScript package manager from the people behind Twitter Bootstrap. It is like Bundler, but for JavaScript instead of Ruby. There is also a Ruby gem, integrated via rake tasks, that lets me easily update all the JavaScript libraries without adding a separate gem for each one. The gem is called bower-rails.
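
As a quick preview of what that looks like (a minimal sketch; the Rails version constraint is illustrative), the integration starts with two lines in the Gemfile:

# Gemfile (sketch)
source 'https://rubygems.org'

gem 'rails', '~> 4.0'   # illustrative version for this series
gem 'bower-rails'       # provides rake tasks for managing Bower packages

After bundle install, bower-rails exposes rake tasks (such as rake bower:install) to fetch the JavaScript libraries you declare; I will walk through the exact setup later in the series.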

My initial plan is to evolve this post into a series of blog posts on how to develop a fully functional demo application, so I’ve included some steps that might not be strictly needed but are good to have. Don’t worry, I will explain why I am using each of them.

Here is what I plan to achieve with this series of posts:

  • creating a new demo project with AngularJS from scratch, showing all my changes along the way and trying to explain every step; this will include creating a basic Rails 4 app
  • adding the basic gems
  • setting up front-end development with Bower
  • adding AngularJS
  • implementing basic Rails and AngularJS controllers

Continue reading “AngularJS and Ruby on Rails work together”

Working OAuth2 with Foursquare on Sinatra

require 'rubygems'
require 'sinatra'
require 'oauth2'
require 'json'
require 'net/https'
require 'foursquare2'

set :port, 80

CLIENT_ID = '****************************************************'
CLIENT_SECRET = '****************************************************'
CALLBACK_PATH = '/callbacks/foursquare'

# OAuth2 client configured for Foursquare's authentication endpoints.
def client
  OAuth2::Client.new(CLIENT_ID, CLIENT_SECRET,
                     :site          => 'https://foursquare.com/',
                     :token_url     => '/oauth2/access_token',
                     :authorize_url => '/oauth2/authenticate?response_type=code',
                     :parse_json    => true,
                     :ssl           => { :ca_path => '/etc/ssl/certs' })
end

# Builds the absolute redirect URI for the current request, pointing at the callback path.
def redirect_uri
  uri = URI.parse(request.url)
  uri.path  = CALLBACK_PATH
  uri.query = nil
  uri.to_s
end

# Foursquare redirects back here with an authorization code; exchange it for an
# access token, then use the foursquare2 gem to look up the authenticated user.
get CALLBACK_PATH do
  puts redirect_uri
  if params[:code] != nil
    token = client.auth_code.get_token(params[:code], :redirect_uri => redirect_uri).token
    foursquare = Foursquare2::Client.new(:oauth_token => token)
    email = foursquare.user('self')['contact'].email.to_s
    "Authenticated user: #{email}"
  else
    'Missing response from foursquare'
  end
end

# Entry point: redirect the user to Foursquare's authorization page.
get '/' do
  redirect client.auth_code.authorize_url(:redirect_uri => redirect_uri)
end

Latest Eclipse app on Mac OS X Lion preference pane

When upgrading to a new Eclipse app and selecting an existing workspace, the preference pane becomes unavailable. To be able to access the settings, you have to either create a new workspace and migrate your projects to it, or remove the settings files from the old one. This has saved me time a couple of times.