The Art of Reusability and Why AI Still Doesn’t Understand It

AI can generate code but lacks understanding of design intent, making it struggle with reusability. True reusability involves encoding shared ideas and understanding context, which AI cannot grasp. This leads to overgeneralized or underabstracted code. Effective engineering requires human judgment and foresight that AI is currently incapable of providing.

After writing about the team that deleted 200,000 lines of AI-generated code without breaking their app, a few people asked me:

“If AI is getting so good at writing code, why can’t it also reuse code properly?”

That’s the heart of the problem.

AI can produce code.
It can suggest patterns.
But it doesn’t understand why one abstraction should exist and why another should not.

It has no concept of design intent, evolution over time, or maintainability.
And that’s why AI-generated code often fails at the very thing great software engineering is built upon: reusability.


Reusability Isn’t About Copying Code

Let’s start with what reusability really means.

It’s not about reusing text.
It’s about reusing thought.

When you make code reusable, you’re encoding an idea (a shared rule or process) in one place, so it can serve multiple contexts.
That requires understanding how your domain behaves and where boundaries should exist.

Here’s a small example in Ruby 3.4:

# A naive AI-generated version
class InvoiceService
  def create_invoice(customer, items)
    total = items.sum { |i| i[:price] * i[:quantity] }
    tax = total * 0.22
    {
      customer: customer,
      total: total,
      tax: tax,
      grand_total: total + tax
    }
  end

  def preview_invoice(customer, items)
    total = items.sum { |i| i[:price] * i[:quantity] }
    tax = total * 0.22
    {
      preview: true,
      total: total,
      tax: tax,
      grand_total: total + tax
    }
  end
end

It works. It looks fine.
But the duplication here is silent debt.

A small tax change or business-rule adjustment would require edits in multiple places, and the AI wouldn’t warn you about any of them.

Now, here’s how a thoughtful Rubyist might approach the same logic:

class InvoiceCalculator
  TAX_RATE = 0.22

  def initialize(items)
    @items = items
  end

  def subtotal = @items.sum { |i| i[:price] * i[:quantity] }
  def tax = subtotal * TAX_RATE
  def total = subtotal + tax
end

class InvoiceService
  def create_invoice(customer, items, preview: false)
    calc = InvoiceCalculator.new(items)

    {
      customer: customer,
      total: calc.subtotal,
      tax: calc.tax,
      grand_total: calc.total,
      preview: preview
    }
  end
end

Now the logic is reusable, testable, and flexible.
If tax logic changes, it’s centralized.
If preview behavior evolves, it stays isolated.
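
To make “testable” concrete, here’s a minimal Minitest sketch. The calculator is restated so the snippet runs standalone, and the item values are made up for illustration:

```ruby
require "minitest/autorun"

# Restated from above so the example is self-contained.
class InvoiceCalculator
  TAX_RATE = 0.22

  def initialize(items)
    @items = items
  end

  def subtotal = @items.sum { |i| i[:price] * i[:quantity] }
  def tax = subtotal * TAX_RATE
  def total = subtotal + tax
end

class InvoiceCalculatorTest < Minitest::Test
  def test_totals_for_a_simple_invoice
    calc = InvoiceCalculator.new([{ price: 100, quantity: 2 }])

    assert_equal 200, calc.subtotal
    assert_in_delta 44.0, calc.tax
    assert_in_delta 244.0, calc.total
  end
end
```

No service object, no database: the pricing rule can be exercised in isolation, which is exactly what the extraction buys you.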

This is design thinking, not just text prediction.


Why AI Struggles with This

AI doesn’t understand context; it understands correlation.

When it generates code, it pulls from patterns it has seen before. It recognizes that “invoices” usually involve totals, taxes, and items.
But it doesn’t understand the relationship between those things in your specific system.

It doesn’t reason about cohesion (what belongs together) or coupling (what should stay apart).

That’s why AI-generated abstractions often look reusable but aren’t truly so.
They’re usually overgeneralized (“utility” modules that do too much) or underabstracted (duplicate logic with slightly different names).

In other words:
AI doesn’t design for reuse; it duplicates for confidence.


A Real Example: Reusability in Rails

Let’s look at something familiar to Rubyists: ActiveRecord scopes.

An AI might generate this:

class Order < ApplicationRecord
  scope :completed, -> { where(status: 'completed') }
  scope :recent_completed, -> { where(status: 'completed').where('created_at > ?', 30.days.ago) }
end

Looks fine, right?
But you’ve just duplicated the status: 'completed' filter.

A thoughtful approach is:

class Order < ApplicationRecord
  scope :completed, -> { where(status: 'completed') }
  scope :recent, -> { where('created_at > ?', 30.days.ago) }
  scope :recent_completed, -> { completed.recent }
end

It’s subtle, but it’s how reusability works.
You extract intent into composable units.
You think about how the system wants to be extended later.

That level of foresight doesn’t exist in AI-generated code.


The Human Element: Judgment and Intent

Reusability isn’t just an engineering principle; it’s a leadership one.

Every reusable component is a promise to your future self and your team.
You’re saying: “This logic is safe to depend on.”

AI can’t make that promise.
It can’t evaluate trade-offs or organizational conventions.
It doesn’t know when reuse creates value and when it adds friction.

That’s why good engineers are editors, not just producers.
We don’t chase volume; we curate clarity.


My Takeaway

AI is incredible at generating examples.
But examples are not design.

Real, human-level reusability comes from understanding what stays constant when everything else changes.
And that’s something no model can infer without human intent behind it.

So yes, AI can write Ruby.
It can even generate elegant-looking methods.
But it still can’t think in Ruby.
It can’t feel the rhythm of the language, or the invisible architecture behind a clean abstraction.

That’s still our job.

And it’s the part that makes engineering worth doing.


Written by Ivan Turkovic; technologist, Rubyist, and blockchain architect exploring how AI and human craftsmanship intersect in modern software engineering.

The AI Detox Movement: Why Engineers Are Taking Back Their Code

In 2025, AI tools transformed coding but led developers to struggle with debugging and understanding their code. This sparked the concept of “AI detox,” a period where developers intentionally stop using AI to regain coding intuition and problem-solving skills. A structured detox can improve comprehension, debugging, and creativity, fostering a healthier relationship with AI.

The New Reality of Coding in 2025

Over the last year, something remarkable happened in the world of software engineering.

AI coding tools (Cursor, GitHub Copilot, Cody, Devin) became not just sidekicks but full collaborators. Autocomplete turned into full functions, boilerplate became one-liners, and codebases that once took weeks to scaffold could now appear in minutes.

It felt like magic.

Developers were shipping faster than ever. Teams were hitting deadlines early. Startups were bragging about “AI-assisted velocity.”

But behind that rush of productivity, something else began to emerge: a quiet, growing discomfort.


The Moment the Magic Fades

After months of coding with AI, many developers hit the same wall.
They could ship fast, but they couldn’t debug fast.

When production went down, it became painfully clear: they didn’t truly understand the codebase they were maintaining.

A backend engineer told me bluntly:

“Cursor wrote the service architecture. I just glued things together. When it broke, I realized I had no idea how it even worked.”

AI wasn’t writing bad code; it was writing opaque code.
Readable but not intuitive. Efficient but alien.

This is how the term AI detox started spreading in engineering circles: developers deliberately turning AI off to reconnect with the craft they’d begun to lose touch with.


What Is an AI Detox?

An AI detox is a deliberate break from code-generation tools (Copilot, ChatGPT, Cursor) taken to rebuild your programming intuition, mental sharpness, and problem-solving confidence.

It doesn’t mean rejecting AI altogether.
It’s about recalibrating your relationship with it.

Just as a fitness enthusiast might cycle off supplements to let their body reset, engineers are cycling off AI to let their brain do the heavy lifting again.


Why AI Detox Matters

The longer you outsource cognitive effort to AI, the more your engineering instincts fade.
Here’s what AI-heavy coders have reported after several months of nonstop use:

  • Reduced understanding of code structure and design choices.
  • Slower debugging, especially in unfamiliar parts of the codebase.
  • Weaker recall of language and framework features.
  • Overreliance on generated snippets that “just work” without deeper understanding.
  • Loss of flow, because coding became about prompting rather than creating.

You might still be productive, but you’re no longer learning.
You’re maintaining an illusion of mastery.


The Benefits of an AI Detox

After even a short AI-free period, developers often notice a profound change in how they think and code:

  • Deeper comprehension: You start to see the architecture again.
  • Better debugging: You can trace logic without guesswork.
  • Sharper recall: Syntax, libraries, and idioms return to muscle memory.
  • Creative problem solving: You find better solutions instead of the first thing AI offers.
  • Reconnection with craftsmanship: You take pride in code that reflects your thought process.

As one engineer put it:

“After a week without Cursor, I remembered how satisfying it is to actually solve something myself.”


How to Plan Your AI Detox (Step-by-Step Guide)

You don’t need to quit cold turkey forever.
A structured plan helps you rebuild your skills while keeping your work flowing.

Here’s how to do it effectively:


Step 1: Define Your Motivation

Start by asking:

  • What do I want to regain?
  • Is it confidence? Speed? Understanding?
  • Do I want to rebuild my debugging skills or architectural sense?

Write it down. Clarity gives your detox purpose and prevents you from quitting halfway.


Step 2: Choose Your Detox Duration

Different goals require different lengths:

  • Mini-detox (3 days): a quick reset and self-check.
  • Weekly detox (1 full week): rebuilding confidence and recall.
  • Extended detox (2–4 weeks): deep retraining of fundamentals.

If you’re working on a production project, start with a hybrid approach:
AI-free mornings, AI-assisted afternoons.


Step 3: Set Clear Rules

Be explicit about what’s allowed and what’s not.

Example rules:

✅ Allowed:

  • Using AI for documentation lookups
  • Reading AI explanations for existing code
  • Asking conceptual questions (“How does event sourcing work?”)

❌ Not allowed:

  • Code generation (functions, modules, tests, migrations)
  • AI refactors or architecture design
  • Using AI to debug instead of reasoning it out yourself

The stricter the rule set, the greater the benefit.


Step 4: Pick a Suitable Project

Choose something that forces you to think but won’t jeopardize production deadlines.

Good choices:

  • Refactor an internal service manually.
  • Build a small CLI or API from scratch.
  • Rewrite a module in a different language (e.g., Ruby → Rust).
  • Add integration tests by hand.

Bad choices:

  • Complex greenfield features with high delivery pressure.
  • Anything that will make your manager panic if it takes longer.

The goal is to practice thinking, not to grind deadlines.


Step 5: Journal Your Learning

Keep a daily log of what you discover:

  • What took longer than expected?
  • What concepts surprised you?
  • What patterns do you now see more clearly?
  • Which parts of the language felt rusty?

At the end of the detox, you’ll have a personal reflection guide: a snapshot of how your brain reconnected with the craft.


Step 6: Gradually Reintroduce AI (With Boundaries)

After your detox, it’s time to reintroduce AI intentionally.

Here’s how to keep your skills sharp while benefiting from AI assistance:

  • Boilerplate (setup, configs, tests): ✅ yes.
  • Core logic: ⚠️ only for brainstorming or reviewing.
  • Debugging: ✅ for hints, but reason manually first.
  • Architecture: ✅ as a sounding board, not a decision-maker.

You’ll quickly find a balance where AI becomes an amplifier, not a crutch.


Example AI-Detox Schedule (4-Week Plan)

Here’s a simple structure to follow:

Week 1 – Awareness

  • Turn off AI for 3 days.
  • Focus on small, isolated tasks.
  • Note moments where you instinctively reach for AI.

Goal: Realize how often you rely on it.


Week 2 – Manual Mastery

  • Full AI-free week.
  • Rebuild a module manually.
  • Write comments before coding.
  • Practice debugging from logs and stack traces.

Goal: Relearn problem-solving depth.


Week 3 – Independent Architecture

  • Design and code a feature without any AI input.
  • Document design decisions manually.
  • Refactor and test it by hand.

Goal: Restore confidence in end-to-end ownership.


Week 4 – Rebalance

  • Reintroduce AI, but only for non-critical parts.
  • Review old AI-generated code and rewrite one section by hand.
  • Evaluate your improvement.

Goal: Reclaim control. Let AI assist, not lead.


Practical Tips to Make It Work

  • Disable AI in your editor: Don’t rely on willpower; remove temptation.
  • Pair program with another human: It recreates the reasoning process that AI shortcuts.
  • Keep a “questions log”: Every time you’re tempted to ask AI something, write it down. Research it manually later.
  • Revisit fundamentals: Review algorithms, frameworks, or patterns you haven’t touched in years.
  • Read real code: Open-source repositories are the best detox material, full of real logic written by real humans.

The Mindset Behind the Detox

The purpose of an AI detox isn’t to prove you can code without AI.
It’s to remember why you code in the first place.

Good engineering is about understanding, design, trade-offs, and problem-solving.
AI tools are brilliant at generating text, but you are the one making decisions.

The best developers I know use AI with intent. They use it to:

  • Eliminate repetition.
  • Accelerate boilerplate.
  • Explore ideas.

But they write, refactor, and debug the hard parts themselves, because that’s where mastery lives.


The Future Is Balanced

AI isn’t going away. It’s evolving faster than any tool in tech history.
But if you want to stay valuable as a developer, you need to own your code, not just generate it.

The engineers who thrive over the next decade will be those who:

  • Think independently.
  • Understand systems deeply.
  • Use AI strategically, not passively.
  • Keep their fundamentals alive through intentional detox cycles.

AI is a force multiplier, not a replacement for your mind.


So take a week. Turn it off.
Write something from scratch.
Struggle a little. Think a lot.
Reignite the joy of building with your own hands.

When you turn the AI back on, you’ll see it differently: not as your replacement, but as your apprentice.

When 200,000 Lines of AI Code Disappeared and Nothing Broke

A team deleted 200,000 lines of AI-generated code yet maintained app functionality, highlighting the pitfalls of unchecked AI development. AI may accelerate chaos in weak systems, making existing issues worse. Effective engineering culture remains crucial; AI should enhance rather than replace human judgment in creating a quality codebase.

A few weeks ago, someone I know, a smart, capable engineering lead, told me about their team’s strange success story.

They deleted 200,000 lines of AI-generated code.

And their app still worked.

That alone tells you everything you need to know about the quiet cost of unchecked AI-assisted development.

The project had originally been around 100,000 lines, already a decent size for what it did. But over time, it ballooned to more than double that number. Most of the bloat came not from features or performance improvements, but from auto-generated boilerplate, duplicated logic, and abstractions no one really understood anymore.

When they finally audited the system, they realized how much noise had crept in, how much invisible entropy had been introduced under the banner of “productivity.”

They cleaned it up. They deleted code. They refactored by hand. And the product kept running, smoother than before.


The Illusion of Productivity

This is the side of AI coding no one talks about.

Yes, AI can make you faster. But “faster” at what, exactly?

If your processes, architecture, and reviews are already weak, AI will accelerate your chaos. It doesn’t understand your domain. It doesn’t see the trade-offs. It just predicts what “looks right.”

And that’s exactly the problem: AI-generated code looks right.
It compiles. It passes shallow tests. It feels complete.

But under the surface, it’s often redundant, brittle, and opaque: a kind of technical debt that doesn’t announce itself until you try to build on top of it.

I’ve seen teams overwhelmed by maintenance of code they didn’t truly write.
I’ve seen projects bloated with functions that appear useful but contribute nothing.
I’ve even seen leaders puzzled when productivity metrics looked great while actual delivery velocity slowed to a crawl.

The AI didn’t break the system.
It just quietly magnified the team’s existing weaknesses.


AI Is a Force Multiplier, Not a Substitute for Discipline

This story reinforced something I’ve believed for a while:

AI won’t fix your architecture.
It won’t make your team more thoughtful.
It won’t improve communication.
And it definitely won’t tell you when the thing it just generated is complete nonsense.

If your engineering culture is strong (a clean codebase, thoughtful design reviews, experienced developers who understand trade-offs), then AI can be a genuine accelerant. It can help prototype ideas, fill in routine boilerplate, or refactor safely with guidance.

But without that foundation, AI becomes an amplifier of dysfunction.
It scales everything: the good, the bad, and the ugly.


The Temptation of the “Autonomous Engineer”

I understand the temptation.
The promise of AI development tools is seductive: faster output, lower costs, instant scaffolding.

But I’ve learned that software isn’t about writing more code; it’s about writing less code that does more work.

The best engineers I’ve worked with are ruthless editors.
They remove complexity.
They delete unnecessary abstractions.
They value clarity over cleverness, and design over automation.

That discipline doesn’t go away just because a machine can now autocomplete functions.

If anything, it becomes more important than ever.


My Takeaway

When that lead told me they’d deleted 200,000 lines of AI-generated code and everything still worked, I didn’t see it as a failure of the technology.

I saw it as a reminder that tools don’t replace engineering principles.

AI is a powerful assistant.
But trust it blindly, and it will quietly erode your system from the inside out.

The real productivity gain isn’t in the speed of generation; it’s in the quality of judgment behind what stays and what gets deleted.

Use AI. Experiment with it.
But never forget: your codebase reflects your discipline, not your tools.

And discipline is still something only humans can provide.


Written by Ivan Turkovic; a technologist, Rubyist, and blockchain architect exploring how AI, code quality, and engineering culture shape the future of software.

Why AI Can’t (Yet) Write Maintainable Software

In the past few years, large language models (LLMs) have burst onto the software development scene like a meteor: bright, exciting, and full of promise. They can write entire applications in seconds, generate boilerplate code with ease, and explain complex algorithms in plain English. It’s hard not to be impressed.

But after spending serious time testing various AI platforms as coding assistants, I’ve reached a clear conclusion:

AI is not yet suitable for generating long-term, maintainable, production-grade software.

It’s fantastic for prototyping, disposable tools, and accelerating development, but when it comes to real-world, evolving, multi-developer systems, it falls short. And the root cause is simple but fundamental: non-determinism.


The Non-Determinism Problem

At the heart of every LLM lies a probabilistic process. When you ask an AI to write or modify code, it doesn’t “recall” what it said before; it predicts the next most likely word or token based on the context it sees. Even when you give it the exact same prompt twice, you often get subtly (or wildly) different answers.

In casual conversation, this doesn’t matter much. But in software engineering, determinism is sacred. A build must produce the same binary every time. Tests must behave consistently. A function’s output must depend solely on its input.
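
The contrast is easy to sketch. A pure function is reproducible; a sampled one is not, and nothing built on top of sampling can be, unless seeds and models are pinned:

```ruby
# Pure: output depends only on input, so two calls always agree.
def tax(amount) = (amount * 0.22).round(2)

puts tax(100) == tax(100)  # → true, on every run

# Probabilistic: the same "prompt" yields a different answer each call,
# which is roughly how an LLM's token sampling behaves.
def sampled_answer = rand(1_000_000)

puts sampled_answer  # different almost every time
```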

LLMs break this rule by design.

When you ask AI to “add a new field to this API,” it might add the field, but it might also rename unrelated variables, adjust indentation styles, reorder imports, or subtly alter unrelated logic. These incidental changes make it almost impossible to track what actually changed and why. In version control, that’s noise. In production code, that’s risk.


The Illusion of Velocity

Using AI for coding can feel like flying, until you realize you’ve lost track of where you’re going.

AI-generated code feels fast. You type a prompt, and it spits out a function that looks plausible. But as any experienced engineer knows, code that looks correct is not the same as code that is correct.

Worse still, AI often gets 90% right: just enough to lull you into trusting it, but that last 10% (the edge cases, performance issues, or security vulnerabilities) can be costly. In long-lived systems, those flaws become ticking time bombs.

So yes, AI saves time, but only if you’re ready to spend that saved time reviewing, refactoring, and making it consistent with your project standards. Otherwise, you’re borrowing technical debt against future maintenance.


“Vibe Coding” vs. Real Engineering

There’s a growing trend I like to call “vibe coding”: relying on AI to produce code that “feels” right without understanding it deeply. It’s seductive, especially for less experienced developers or teams under time pressure.

But the truth is: software longevity is built on understanding, not vibes.

A healthy codebase is not just functional; it’s coherent, documented, and maintainable. Every class, function, and comment exists for a reason that another human can later understand. AI-generated code often lacks that intentionality. It can mimic style, but it doesn’t comprehend architecture, team conventions, or long-term evolution.

AI doesn’t “see” the whole system; it only sees your current prompt.


Where AI Does Shine

Despite these limitations, I’m not anti-AI. In fact, I use it daily, strategically.

AI is brilliant at:

  • Prototyping ideas: getting something working fast, even if it’s messy.
  • Generating boilerplate: writing repetitive CRUD or setup code.
  • Explaining code: translating complex logic into human-readable summaries.
  • Brainstorming solutions: helping you think through alternative approaches.
  • Writing tests: drafting coverage you can refine manually.

In other words, AI accelerates cognition, not automation. It’s a thinking partner, not a replacement for engineering discipline.


What It Means for the Future

As LLMs improve, we’ll likely see more deterministic, context-aware systems, perhaps ones that can “anchor” to a codebase and learn its structure persistently. But until then, the responsibility for coherence, maintainability, and correctness still lies with us, the humans.

AI might be the apprentice, but we’re still the architects.

My takeaway after months of experimentation is simple:

Use AI to accelerate development, not to abdicate responsibility.

Treat its output like an intern’s draft: useful, fast, and full of potential, but never production-ready without review, cleanup, and integration into your project’s ecosystem.


The Bottom Line

AI coding tools are a revolution, but like every revolution, they require balance and maturity to use effectively. They’re not replacing software engineers; they’re augmenting them.

So go ahead, let the AI write your prototypes, mock APIs, or test scaffolds. But when it comes to the production systems that real users depend on, make sure there’s a human behind the keyboard who understands every line.

Because in the end, the difference between disposable and durable code isn’t who (or what) wrote it; it’s who owns it.


Returning to the Rails World: What’s New and Exciting in Rails 8 and Ruby 3.3+

It’s 2025, and coming back to Ruby on Rails feels like stepping into a familiar city, only to find new skyscrapers, electric trams, and an upgraded skyline.
The framework that once defined web development simplicity has reinvented itself once again.

If you’ve been away for a couple of years, you might remember Rails 6 or early Rails 7 as elegant but slightly “classic.”
Fast-forward to today: Rails 8 and Ruby 3.4 together form one of the most modern, high-performance, and full-stack ecosystems in web development.

Let’s explore what changed, from Ruby’s evolution to Rails’ latest superpowers.


The Ruby Renaissance: From 3.2 to 3.4

Over the last two years, Ruby has evolved faster than ever.
Performance, concurrency, and developer tooling have all received major love, while the language remains as expressive and joyful as ever.

Ruby 3.2 (2023): The Foundation of Modern Ruby

  • YJIT officially production-ready: Introduced a new JIT compiler written in Rust, delivering 20–40% faster execution on Rails apps.
  • Prism Parser (preview): The groundwork for a brand-new parser that improves IDEs, linters, and static analysis.
  • Regexp improvements: More efficient and less memory-hungry pattern matching.
  • Data class: Data.define landed, giving small, immutable value objects a one-line definition.
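
These are release-note-level claims, but you can check YJIT yourself: CRuby builds that include it expose a small runtime module (present in YJIT-capable builds from the 3.1/3.2 era onward):

```ruby
# Enable YJIT at startup with `ruby --yjit app.rb`,
# or by setting RUBY_YJIT_ENABLE=1 in the environment.

if defined?(RubyVM::YJIT)
  puts "YJIT available; enabled in this process: #{RubyVM::YJIT.enabled?}"
else
  puts "This Ruby build was compiled without YJIT"
end
```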

Ruby 3.3 (2024): Performance, Async IO, and Stability

  • YJIT 3.3 update: Added inlining and better method dispatch caching, big wins for hot code paths.
  • Fiber Scheduler 2.0: Improved async I/O, great for background processing and concurrent network calls.
  • Prism Parser shipped: Officially integrated, paving the way for better tooling and static analysis.
  • Better memory compaction: Long-running apps now leak less and GC pauses are shorter.

Ruby 3.4 (2025): The Next Leap

  • Prism as the default parser, making editors and LSPs much more accurate.
  • Official WebAssembly build: You can now compile and run Ruby in browsers or serverless environments.
  • Async and Fibers 3.0: Now tightly integrated into standard libraries like Net::HTTP and OpenURI.
  • YJIT 3.4: Huge startup time and memory improvements for large Rails codebases.
  • Smarter garbage collector: Dynamic tuning for better throughput under load.

Example: Native Async Fetching in Ruby 3.4

require "async"
require "net/http"

Async do
  ["https://rubyonrails.org", "https://ruby-lang.org"].each do |url|
    Async do
      res = Net::HTTP.get(URI(url))
      puts "#{url} → #{res.bytesize} bytes"
    end
  end
end

That’s fully concurrent, purely in Ruby: no threads to manage, no callbacks, just fibers and the async gem on top of the standard library.
Ruby has quietly become fast, efficient, and concurrent while keeping its famously clean syntax.


The Rails Revolution: From 7 to 8

While Ruby evolved under the hood, Rails reinvented the developer experience.
Rails 7 introduced the “no-JavaScript-framework” movement with Hotwire.
Rails 8 now expands that vision, making real-time, async, and scalable apps easier than ever.

Rails 7 (2022–2024): The Hotwire Era

Rails 7 changed the front-end game:

  • Hotwire (Turbo + Stimulus): Replaced complex SPAs with instant-loading server-rendered apps.
  • Import maps: Let you skip Webpack entirely.
  • Encrypted attributes: encrypts :email became a one-line reality.
  • ActionText and ActionMailbox: Brought full-stack communication features into Rails core.
  • Zeitwerk loader improvements: Faster boot and reloading in dev mode.

Example: Rails 7 Hotwire Simplicity

# app/controllers/messages_controller.rb
def create
  @message = Message.create!(message_params)
  render turbo_stream: turbo_stream.append(
    "messages", partial: "messages/message", locals: { message: @message }
  )
end

That’s a live-updating chat stream with no React, no WebSocket boilerplate.


Rails 8 (2025): Real-Time, Async, and Database-Native

Rails 8 takes everything Rails 7 started and levels it up for the next decade.

Turbo 8 and Turbo Streams 2.0

Hotwire gets more powerful:

  • Streaming updates from background jobs
  • Improved Turbo Frames for nested components
  • Async rendering for faster page loads

class CommentsController < ApplicationController
  def create
    @comment = Comment.create!(comment_params)
    render turbo_stream: turbo_stream.prepend(
      "comments", partial: "comments/comment", locals: { comment: @comment }
    )
  end
end

Now you can push that stream from Active Job or Solid Queue, enabling real-time updates across users.

Solid Queue and Solid Cache

Rails 8 introduces two built-in frameworks that change production infrastructure forever:

  • Solid Queue: Database-backed job queue; think Sidekiq performance without Redis.
  • Solid Cache: Native caching framework that integrates with Active Record and scales horizontally.

# Example: background email job using Solid Queue
class UserMailerJob < ApplicationJob
  queue_as :mailers

  def perform(user_id)
    UserMailer.welcome_email(User.find(user_id)).deliver_now
  end
end

No Redis, no extra service: everything just works out of the box.

Async Queries and Connection Pooling

Rails 8 adds native async database queries and automatic connection throttling for multi-threaded environments.
This pairs perfectly with Ruby’s improved Fiber Scheduler.

# Kicks off the query on a background thread; the records are
# awaited lazily the first time the result is used.
users = User.where(active: true).load_async

Smarter Defaults, Stronger Security

  • Active Record Encryption expanded with deterministic modes
  • Improved CSP and SameSite protections
  • Rails generators now use more secure defaults for APIs and credentials

Developer Experience: Rails Feels Modern Again

The latest versions of Rails and Ruby have also focused heavily on DX (developer experience).

  • bin/rails console --sandbox rolls back all changes automatically.
  • New error pages with interactive debugging.
  • ESBuild 3 & Bun support for lightning-fast JS builds.
  • Improved test parallelization with async jobs and Capybara integration.
  • ViewComponent and Hotwire integration right from generators.

Rails in 2025 feels sleek, intelligent, and incredibly cohesive.


The Future of Rails and Ruby Together

With Ruby 3.4’s concurrency and Rails 8’s async, streaming, and caching power, Rails has evolved into a true full-stack powerhouse again, capable of competing with modern Node, Elixir, or Go frameworks while staying true to its elegant roots.

It’s not nostalgia; it’s progress built on the foundation of simplicity.

If you left the Rails world thinking it was old-fashioned, this is your invitation back.
You’ll find your favorite framework faster, safer, and more capable than ever before.


Posted by Ivan Turkovic
Rubyist, software engineer, and believer in beautiful code.

What You Should Learn to Master but Never Ship

Every engineer should build a few things from scratch (search, auth, caching) just to understand how much complexity lives beneath the surface. But the real skill isn’t rolling your own; it’s knowing when not to. In the age of AI, understanding how things work under the hood isn’t optional; it’s how you keep control over what your tools are actually doing.

There’s a quiet rite of passage every engineer goes through. You build something that already exists. You write your own search algorithm. You design your own auth system. You roll your own logging framework because the existing one feels too heavy.

And for a while, it’s exhilarating. You’re learning, stretching, discovering how the pieces actually work.

But there’s a difference between learning and shipping.


The Temptation to Reinvent

Every generation of engineers rediscovers the same truth: we love building things from scratch. We tell ourselves our use case is different, our system is simpler, our constraints are unique.

But the moment your code touches production, when it has to handle real users, scale, security, and compliance, you realize how deep the rabbit hole goes.

Here’s a short list of what you probably shouldn’t reinvent if your goal is to ship something that lasts:

  • Search algorithms
  • Encryption
  • Authentication
  • Credit card handling
  • Billing
  • Caching systems
  • Logging frameworks
  • CSV, HTML, URL, JSON, XML parsing
  • Floating point math
  • Timezones
  • Localization and internationalization
  • Postal address handling

Each one looks simple on the surface. Each one hides decades of hard-won complexity underneath.
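Floating point math is a tiny but telling example of that hidden complexity. A quick Ruby sketch of the classic trap:

```ruby
require "bigdecimal"

# Binary floats can't represent 0.1 exactly, so "obvious" arithmetic drifts.
sum = 0.1 + 0.2
puts sum          # prints 0.30000000000000004
puts sum == 0.3   # prints false

# For money, exact decimal arithmetic avoids the drift entirely.
exact = BigDecimal("0.1") + BigDecimal("0.2")
puts exact == BigDecimal("0.3")  # prints true
```

This is exactly why billing and invoicing code reaches for decimal types instead of raw floats.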


Learn It, Don’t Ship It

You should absolutely build these things once.

Do it for the same reason musicians practice scales or pilots train in simulators. You’ll understand the invisible edges where systems fail, which tradeoffs libraries make, and how standards evolve.

Build your own encryption to see why key rotation matters.
Write your own caching layer to feel cache invalidation pain firsthand.
Parse CSVs manually to understand why “CSV” isn’t a real standard.
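To make the CSV point concrete, here’s a small Ruby sketch (the sample line is invented) of why splitting on commas isn’t parsing:

```ruby
require "csv"

# A quoted field containing a comma: the classic CSV trap.
line = 'Smith,"Widgets, Inc.",42'

# Naive "parsing": split on commas. The quoted field is torn in two.
naive = line.split(",")
# => ["Smith", "\"Widgets", " Inc.\"", "42"]

# The stdlib CSV parser honors quoting rules and keeps the field whole.
parsed = CSV.parse_line(line)
# => ["Smith", "Widgets, Inc.", "42"]
```

And that’s before you meet embedded newlines, stray BOMs, and the dozen dialects that all call themselves “CSV.”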

You’ll emerge humbled, smarter, and far less likely to call something “trivial” again.

But then don’t ship it.


The Cost of Cleverness

Production is where clever ideas go to die.

The real cost of rolling your own isn’t just the initial build. It’s the invisible tax that compounds over time: maintenance, updates, edge cases, security audits, integration testing.

That custom auth system? It’ll need to handle password resets, MFA, SSO, OAuth, token expiration, brute-force protection, and GDPR deletion requests.

Your homegrown billing service? Get ready for tax handling, currency conversion, refund flows, audit trails, and legal exposure.

Most of us underestimate this cost by an order of magnitude. And that gap between what you think you built and what reality demands is where projects go to die.


The Wisdom of Boring Software

Mature engineering isn’t about novelty; it’s about leverage.

When you use battle-tested libraries, you’re not being lazy. You’re standing on top of millions of hours of debugging, testing, and iteration that others have already paid for.

The best engineers I know are boring. They use Postgres, Redis, S3. They trust Stripe for billing, Auth0 for authentication, Cloudflare for caching. They’d rather spend their creative energy on business logic and user experience, the parts that actually differentiate a product.

Boring software wins because it doesn’t collapse under its own cleverness.


Why This Matters Even More in the AI Era

Today, a new kind of abstraction has arrived: AI.
We don’t just import libraries anymore; we import intelligence.

When you integrate AI into your workflow, you’re effectively outsourcing judgment, reasoning, and data handling to a black box that feels magical but is still software under the hood.

If you’ve never built or understood the underlying systems (search, parsing, data handling, caching, numerical precision), you’ll have no intuition for what the AI is actually doing. You’ll treat it as an oracle instead of a tool.

Knowing how these fundamentals work grounds you. It helps you spot when the model hallucinates, when latency hides in API chains, when an embedding lookup behaves like a fuzzy search instead of real understanding.
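For instance, an embedding lookup is, at its core, nearest-neighbor search over vectors. A toy Ruby sketch (the vectors and document names are made up for illustration):

```ruby
# Toy "embedding lookup": nearest neighbor by cosine similarity.
def cosine(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

# Invented 3-dimensional "embeddings"; real systems use model-generated ones.
docs = {
  "refund policy"  => [0.9, 0.1, 0.0],
  "shipping times" => [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]

best_doc, _vec = docs.max_by { |_name, vec| cosine(query, vec) }
puts best_doc  # prints "refund policy"
# It returns the geometrically closest vector: fuzzy similarity, not understanding.
```

Once you see that the “retrieval” step is just geometry, you stop being surprised when it surfaces plausible-but-wrong matches.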

The engineers who will thrive in the AI era aren’t the ones who blindly prompt. They’re the ones who know what’s happening behind the prompt.

Because AI systems don’t erase complexity; they just bury it deeper.

And if you don’t know what lives underneath, you can’t debug, govern, or trust it.


When It’s Worth Reinventing

There are exceptions. Sometimes the act of rebuilding is the product itself.

Search at Google. Encryption at Signal. Auth at Okta.

If your business is the infrastructure, then yes, go deep. Reinvent with intention. But if it’s not, your job is to assemble reliable systems, not to recreate them.

Learn enough to understand the tradeoffs, but don’t mistake knowledge for necessity.


The Real Lesson

Here’s the paradox: you can’t truly respect how hard these problems are until you’ve built them yourself.

So do it once. In a sandbox, on weekends, or as a thought exercise. Feel the pain, appreciate the elegance of the libraries you once dismissed, and move on.

That humility will do more to make you a better engineer, and a more trusted builder in the AI age, than any clever homegrown library ever could.


Final thought:
Master everything. Ship selectively.

That’s the difference between engineering as craft and engineering as production.
And it’s the difference between using AI and actually understanding it.

AI Vibe Coding vs. Outsourcing vs. Local Developers: What Really Works Best

The way we build software is changing fast.
You can now code alongside AI in real time. You can hire an offshore team across time zones. Or you can build with local developers right next to you, the old-school way that suddenly feels new again.

Each model works, but they work differently. And when it comes to product quality, iteration speed, and long-term success, only one consistently delivers.

Let’s unpack all three, step by step.


AI Vibe Coding: Building at the Speed of Thought

AI vibe coding is when you work directly with AI tools like ChatGPT, Claude, or GitHub Copilot as your pair developer.

It’s not about asking for snippets; it’s about co-developing live.
You describe your intent, get code, refine it instantly, and iterate in a tight feedback loop.

Process (Pseudocode-Style)

while (building_feature):
    describe_intent_to_ai("Create an onboarding flow with email + OAuth")
    ai.generates_scaffold()
    you.review_and_edit_live()
    ai.adjusts_structure_and_tests()
    run_tests()
    deploy_if_ready()

Pros

✅ Extremely fast iteration
✅ Context-aware (if you prompt consistently)
✅ Great for prototyping and boilerplate
✅ Ideal for solo founders or small teams

Cons

⚠️ Requires strong technical judgment
⚠️ AI lacks product intuition and domain empathy
⚠️ Risk of hidden bugs or overconfidence in generated code
⚠️ Limited long-term maintainability without review

AI vibe coding accelerates early-stage building, but it still needs human oversight, structured review, and context. It’s great for speed, not yet for strategy.


Outsourcing Development: Slow Communication, Slow Momentum

Outsourcing means hiring remote developers (often overseas) to build parts or all of your product.

The promise: cost savings and flexibility.
The reality: delays, misunderstandings, and low-context execution.

Process (Pseudocode-Style)

while (project_in_progress):
    product_owner.create_ticket():
        - detailed specs
        - screenshots, examples, acceptance criteria

    offshore_team(next_day):
        - reads ticket
        - implements as interpreted
        - opens PR for review

    product_owner(next_morning):
        - reviews PR
        - finds edge cases or misalignment
        - leaves comments

Pros

✅ Lower hourly cost
✅ Access to global talent
✅ Good for simple, well-scoped tasks

Cons

⚠️ Delayed feedback loops (timezone lag)
⚠️ Communication loss and misinterpretation
⚠️ Reduced ownership or creative input
⚠️ Code that “works” but doesn’t “fit”
⚠️ High hidden cost in management and rework

Outsourcing works if your specs are crystal clear and your needs don’t change.
But in reality, product needs always change.


Local Developers: Real-Time Collaboration and True Ownership

Now let’s talk about the model that consistently wins: local developers.

Whether sitting next to you or just in the same time zone, local developers bring both technical skill and product empathy. They understand your users, your goals, and your market context intuitively.

This creates a feedback loop that’s impossible to replicate with outsourcing or AI.

Process (Pseudocode-Style)

while (building_product):
    morning_sync():
        - quick discussion on goals and blockers
        - align on what matters today

    devs.start_coding():
        - spontaneous chat or screen share
        - brainstorm architecture in real time
        - fix and test instantly

    afternoon_review():
        - peer review and refactor collaboratively
        - same-day deploy

Pros

✅ Real-time communication
✅ Shared product understanding
✅ Collaborative brainstorming
✅ High accountability and quality
✅ Culture and creativity alignment

Cons

⚠️ Higher cost per developer
⚠️ Limited local hiring pool
⚠️ Needs strong leadership and culture

But what you gain far outweighs the cost.
When your developers vibe with your product, decisions are faster, reviews are deeper, and every line of code carries intent.


The Hidden Layer: How Context Shapes Code

Here’s the truth:
Every developer, human or AI, writes code based on context.

When context is broken (as in outsourcing), code quality drops.
When context is partial (as with AI), you get speed but need oversight.
When context is shared (as with local devs), you get clarity, accountability, and alignment.

The Context Pyramid

Layer                 Outsourcing         AI Vibe Coding      Local Developer
Product intuition     Low                 Medium              High
Speed                 Low                 Very High           High
Collaboration depth   Low                 Medium              Very High
Communication lag     High                None                Minimal
Code quality          Variable            Good with review    Consistently strong
Ownership             Low                 None                High
Best for              Fixed-scope tasks   Rapid prototyping   Long-term, evolving products

The Hybrid Reality

The future is likely hybrid:

  • Use AI for ideation, scaffolding, and acceleration.
  • Avoid outsourcing for evolving or strategic projects.
  • Anchor everything around a local team that owns the product, understands the users, and ensures quality.

That’s the winning setup: AI for speed, local developers for soul.


💬 Final Thoughts

Building great software isn’t just about writing code.
It’s about alignment: shared context, communication, and ownership.

  • AI vibe coding gives you momentum.
  • Outsourcing gives you manpower.
  • Local developers give you meaning and mastery.

If you want code that not only runs but lasts, go local, stay collaborative, and use AI as your accelerator, not your replacement.


👨‍💻 Need Help Structuring Your Team or Workflow?

If you’re building an MVP, scaling a startup, or managing tech in fintech or SaaS that needs both speed and reliability, I can help.

With nearly two decades of experience building, scaling, and advising startups and complex systems, I offer consulting on how to structure teams, integrate AI effectively, and build codebases that actually scale and stand the test of time.

Let’s make your team and your code vibe.

?? → BI → ML → AI → ??

AI’s past and the future

Where acronyms in business come from, what they sold, who won, and what might come after “AI”

Acronyms are the currency of business storytelling. They compress complex technology into a neat package a salesperson can pitch in a single slide: CRM, ERP, BI, ML, AI. Each one marked a shift in what companies sold to their customers and how value was captured. I want to walk through that history briefly and honestly, with business examples and what “winning” looked like in each era, and then make a practical, evidence-based prediction for what comes after AI. I’ll finish with concrete signs companies and entrepreneurs should watch if they want to be on the winning side next.


The pre-acronym age: data collectors and automation (before CRM/ERP took over)

Before the catchy three-letter packages, businesses bought automation and niche systems: financial ledgers, bespoke reporting scripts, and the earliest mainframe systems. The selling point was efficiency: replace paper, reduce human error, scale payroll or accounting.

Winners: large system integrators and early software firms that could deliver reliability and scale. Value to the customer was operational: fewer mistakes, faster month-end closes, predictable processes.

This era set the expectation that software replaces tedious human work, an expectation every later acronym exploited and monetized.


CRM / ERP: the era of process standardization and cross-company suites

Acronyms like ERP and CRM told customers what problem a vendor solved: enterprise resource planning for the core business, customer relationship management for sales and marketing. The message was simple: centralize and standardize.

Business sales example: SAP and Oracle sold ERP as a bet on process control; Siebel (then Oracle) sold CRM as the way to professionalize sales organizations. Projects were expensive, multi-year, and became investments in repeatability and governance. The commercial model was license + services. Success looked like longer, stickier contracts and high services revenue.

Winners: vendors who could sell a vision of stability and then deliver implementation expertise.


BI (Business Intelligence): data becomes a product

BI formalized the idea that data itself is valuable: dashboards, reports, and the ability to make decisions from consolidated datasets. The term was popularized in the late 1980s and 1990s as companies realized that aggregated data and fact-based dashboards could change executive decision making. BI vendors promised that data could be turned into actionable insight.

Business sales example: BusinessObjects, Cognos, and MicroStrategy sold a reliable narrative: centralize data, produce dashboards, enable managers to make informed choices. Customers were large enterprises whose decisions had big dollar consequences: pricing, inventory, and marketing allocation.

Success metric: adoption by management, ROI from better decisions, and a move to subscription models as vendors evolved. BI also laid the foundation for data warehouses and ETL pipelines, the plumbing later eras would rely on.


ML (Machine Learning): predictions replace static dashboards

Machine learning shifted the promise from describing the past to predicting the future. ML isn’t a single product but a set of techniques that let systems learn patterns: recommendations, fraud detection, demand forecasting. Its commercialization accelerated as larger datasets and compute made models useful in production. (The timeline of ML milestones is long, from perceptrons to ImageNet and modern deep learning.)

Business sales example: Netflix used ML for recommendations (watch time → retention); Amazon used ML for recommendations and dynamic pricing; banks used ML for fraud detection. The product pitch became “we will increase revenue (or reduce losses) by X% using model-driven predictions.”

Success metric: measurable impact on key business metrics (conversion, churn, fraud rate) and repeatable MLops pipelines. Winning companies built both models and the integration into products and workflows the second part mattered as much as the model.


AI (Artificial Intelligence): foundation models, agents, and ubiquity

“AI” is a broader, more emotionally charged badge than ML. It promises not just predictions, but agency: systems that write, design, plan, and interact. The recent leap in capability comes from large foundation models and multimodal systems, and the market’s attention has become concentrated on a smaller set of platform players. OpenAI is the obvious poster child, widely integrated and publicly visible, and it’s now part of a small club of companies shaping how enterprises adopt AI. Others (Anthropic, Google/DeepMind, Microsoft as a partner and investor, Nvidia as the infra champion) are also core to who wins in the AI era. Recent reporting and market movement underscore how concentrated and influential these players are.

Business sales example: AI is sold as both a strategic platform and as task automation. Microsoft + OpenAI integrations sell enterprise productivity gains; Anthropic partners with platforms and enterprise vendors to bring chat/agent capabilities into products; Nvidia sells the hardware that makes large models economically viable. Sales morph into partnerships (platform + integration) and usage-based monetization (API calls, seats for AI assistants, compute consumption).

Success metric: ecosystem adoption and sticky integrations. The winners aren’t just model makers; they are the platforms that make models reliably usable within enterprise apps, the cloud vendors that provide infra, and the companies that embed AI into workflows to measurably lower costs or multiply revenue.


What’s next? Predicting the post-AI acronym

Acronyms rise from what businesses need to sell next. Right now, AI sells capability; tomorrow, the market will demand something different: not raw capability but safe, contextual, composable, and human-centric value. Based on where the money, engineering effort, and regulatory focus are going, here are a few candidate acronyms and my pick.

Candidate futures (short list)

  • CAI: Contextual AI
    Focus: models that understand user context (company data, regulations, customer history) and deliver context-aware outputs with provenance. Selling point: trust and relevance. Businesses pay for AI that “knows the company” and can operate under constraints.
  • A^2I / AI²: Augmented & Autonomous Intelligence
    Focus: agents that both augment humans and act autonomously on behalf of businesses (book meetings, negotiate, execute trades). Selling point: time reclaimed and tasks delegated with measurable outcomes.
  • DAI: Distributed AI
    Focus: moving models to the edge, on-device privacy, and federated learning. Selling point: privacy, latency, and regulatory compliance. Monetization: device + orchestration + certification.
  • HXI: Human-Centered Experience Intelligence (or HCI reimagined)
    Focus: design + AI that measurably improves human outcomes (productivity, wellbeing). Selling point: human adoption and long-term retention; less hype, more stickiness.
  • XAI: Explainable AI (commercialized)
    Focus: regulations and auditability breed a market for explainable models as first-class products. Selling point: compliance, audit trails, and legally defensible automation.

My prediction (the one I’d bet money on)

CAI: Contextual AI.
Why? The immediate commercial friction after capability is trust and integration. Companies will not pay forever for raw creativity if outputs can’t be traced to corporate data, policies, and goals. The era of foundation models created broad capabilities; the next era will productize those capabilities into contextualized, policy-aware services that integrate directly into enterprise systems (CRMs, ERPs, legal, finance) and produce auditable actions. In short: AI + enterprise context = the next product category.

Concrete signs for CAI already exist: enterprises demanding model fine-tuning on private corpora, partnerships between model-makers and enterprise software vendors, and regulatory attention pushing for explainability and provenance. Those are the ingredients for a context-first commercial product.

(If you prefer the agent narrative, A^2I, where agents actually do things reliably and accountably, is a close second. But agents without context are a liability; agents with context are a product.)


What winning looks like in CAI

If CAI becomes the next category, how do businesses win?

  1. Data integration champions: vendors that make it trivial to connect enterprise data (ERP, CRM, contracts) to models with privacy and governance baked in. The sales pitch: “We connect, govern, and make AI outputs auditable.”
  2. Actionable interfaces: not just a chat box, but agents that produce auditable actions inside workflows (e.g., “Create invoice,” “Propose contract clause,” “Adjust inventory reorder”). The pitch: “We reduce X hours/week for role Y.”
  3. Regulatory and risk products: explainability, model cards, audit logs, and compliance workflows become table stakes. Vendors packaging those for regulated industries will command higher multiples.
  4. Infra + economics: hardware and cloud vendors that optimize cost/performance for fine-tuned, context-rich models (Nvidia-like infra winners) will capture a slice. Recent market moves show infrastructure captures enormous value; watch the hardware and cloud players.

Practical advice for sellers and builders today

  • If you sell to enterprises: stop pitching “we use AI.” Start pitching what measurable outcome you deliver and how you keep it governed. Show integration architecture diagrams: where the data lives, what’s fine-tuned, and where the audit logs are.
  • If you build products: invest in connectors, provenance, and reversible actions. A product that lets customers roll back an AI decision will win trust and enterprise POs.
  • If you’re an investor or operator: look for companies that own context (industry datasets, domain rules, vertical workflows). Horizontal foundation models will be commoditized; contextual wrappers will be the economic moat.
  • If you’re an infra player: optimize for cost + compliance. The market will pay a premium for infra that matches enterprise security and cost constraints.

Example scenarios: how each era turned into commercial value

  • BI era: a retail chain buys a BI suite to consolidate POS data across stores. Result: optimized promotions, fewer stockouts, 3% margin improvement. The seller (BI vendor) expanded into recurring maintenance and cloud hosting.
  • ML era: an e-commerce platform adds recommendation models. Result: personalized homescreens boost AOV by 7%. The ML vendor sells models + integration and gets paid per API call and for model retraining.
  • AI era: an agency uses generative models to prototype marketing copy at scale. Result: faster iteration and lower creative costs; large platforms (OpenAI, Anthropic, Google) sell the models, cloud vendors sell the compute. OpenAI’s integrations made it a visible “winner” for developers and enterprises adopting chat/assistant features.
  • CAI era (predicted): the same retail chain buys a contextual assistant that reads contracts, vendor SLAs, and inventory rules, then suggests optimal promotions aligned with margin and regulatory rules. Result: promotions that respect contracts, better margins, and an auditable decision trail. Pricing: subscription + outcome share.

Acronyms are marketing. Value is behavioral change.

Acronyms succeed when they promise a specific, repeatable business result and when vendors can deliver measurable change in behavior. BI helped managers act on facts. ML helped products predict user intent. AI made interaction and creativity broadly available. The next profitable acronym (my money is on CAI, Contextual AI) will sell trustworthy, context-aware automation that actually becomes part of the way companies operate.

If you’re building, selling, or investing, focus less on the label and more on the edges where value is realized: integration, governance, measurable business outcomes. That’s where the next winners will be, and where your clients will write the checks.

The Vibe Code Tax

Momentum is the oxygen of startups. Lose it, and you suffocate. Getting it back is harder than creating it in the first place.

Here’s the paradox founders hit early:

  • Move too slowly searching for the “perfect” technical setup, and you’re dead before you start.
  • Move too fast with vibe-coded foundations, and you’re dead later in a louder, more painful way.

Both paths kill. They just work on different timelines.

Death by Hesitation

Friendster is a perfect example of death by hesitation. They had the idea years before Facebook. They had users. They had momentum.

But their tech couldn’t scale, and instead of fixing it fast, they stalled. Users defected. Momentum bled out. By the time they moved, Facebook and MySpace had already eaten their lunch.

That’s the hesitation tax: waiting, tinkering, second-guessing while the world moves on.

Death by Vibe Coding

On the flip side, you get the vibe-coded death spiral.

Take Theranos. It wasn’t just fraud, it was vibe coding at scale. Demos that weren’t real. A prototype paraded as a product. By the time the truth surfaced, they’d burned through billions and a decade of time.

Or look at Quibi. They raced to market with duct-taped assumptions; the whole business was a vibe-coded bet that people wanted “TV, but shorter.” $1.75 billion later, they discovered the foundation was wrong.

That’s the danger of mistaking motion for progress.

The Right Way to Use Vibe Coding

Airbnb is the counterexample. Their first site was duct tape. Payments were hacked together. Listings were scraped. It was vibe code, but they treated it as a proof of concept, not a finished product.

The moment they proved demand (“people really will rent air mattresses from strangers”), they rebuilt. They didn’t cling to the prototype. They moved fast, validated, then leveled up.

That’s the correct use: vibe code as validation, not as production.

The Hidden Tax

The vibe code tax is brutal because it’s invisible at first. It’s not just money.

  • Lost time → The 6–12 months you’ll spend duct-taping something that later has to be rebuilt from scratch.
  • Lost customers → Early adopters churn when they realize your product is held together with gum and string. Most won’t return.
  • Lost momentum → Investors don’t like hearing “we’re rebuilding.” Momentum is a story you only get to tell once.

And you don’t get to dodge this tax. You either pay it early (by finding a technical co-founder or paying real engineers), or you pay it later (through rebuilds, lost customers, and wasted months).

How to Stay Alive

  1. Be honest. Call your vibe-coded MVP a prototype. Never pitch it as “production-ready.”
  2. Set a timer. Airbnb didn’t stay in duct tape land for years. They validated and moved on. You should too.
  3. Budget for the rebuild. If you don’t have a co-founder, assume you’ll need to pay engineers once the prototype proves itself.
  4. Go small but real. One feature built right is more valuable than ten features that crumble.

Final Word

The startup graveyard is full of companies that either waited too long or shipped too fast without a foundation. Friendster hesitated. Theranos faked it. Quibi mistook hype for traction.

Airbnb survived because they paid the vibe code tax on their terms. They used duct tape to test, then rebuilt before the cracks became fatal.

That’s the playbook.

Because, no matter what, the vibe code tax always comes due.

Is AI Slowing Everyone Down?

Over the past year, we’ve all witnessed an AI gold rush. Companies of every size are racing to “adopt AI” before their competitors do, layering chatbots, content tools, and automation into their workflows. But here’s the uncomfortable question: is all of this actually making us more productive, or is AI quietly slowing us down?

A new term from Harvard Business Review, “workslop,” captures what many of us are starting to see. It refers to the flood of low-quality, AI-generated work products: memos, reports, slide decks, emails, even code snippets. The kind of content that looks polished at first glance but ultimately adds little value. Instead of clarity, we’re drowning in noise.

The Illusion of Productivity

AI outputs are fast, but speed doesn’t always equal progress. Generative AI makes it effortless to produce content, but that ease has created a different problem: oversupply. We’re seeing more documents, more proposals, more meeting summaries, but much of it lacks originality or critical thought.

When employees start using AI as a crutch instead of a tool, the result is extra layers of text that someone else has to review, fix, or ignore. What feels like efficiency often leads to more time spent filtering through workslop. The productivity gains AI promises on paper are, in practice, canceled out by the overhead of sorting the useful from the useless.

Numbers Don’t Lie

The MIT Media Lab recently published a sobering study on AI adoption. After surveying 350 employees, analyzing 300 public AI deployments, and interviewing 150 executives, the conclusion was blunt:

  • Fewer than 1 in 10 AI pilot projects generated meaningful revenue.
  • 95% of organizations reported zero return on their AI investments.

The financial markets noticed. AI stocks dipped after the report landed, signaling that investors are beginning to question whether this hype cycle can sustain itself without real business impact.

Why This Happens

The root cause isn’t AI itself; it’s how organizations are deploying it. Instead of rethinking workflows and aligning AI with core business goals, many companies are plugging AI in like a patch. “We need to use AI somewhere, anywhere.” The result is shallow implementations that create surface-level outputs without driving real outcomes.

It’s the same mistake businesses made during earlier tech booms. Tools get adopted because of fear of missing out, not because of a well-defined strategy. And when adoption is guided by FOMO, the outcome is predictable: lots of activity, little progress.

Where AI Can Deliver

Despite the noise, I don’t think AI is doomed to be a corporate distraction. The key is focus. AI shines when it’s applied to specific, high-leverage problems:

  • Automating repetitive, low-value tasks (think: data entry, scheduling, or document classification).
  • Enhancing decision-making with real-time insights from complex data.
  • Accelerating specialized workflows in domains like coding, design, or customer support, if humans remain in the loop.

The companies that will win with AI aren’t the ones pumping out endless AI-generated documents. They’re the ones rethinking their processes from the ground up and asking: Where can AI free humans to do what they do best?

The Human Factor

We have to remember: AI isn’t a replacement for judgment, creativity, or strategy. It’s a tool one that can amplify our abilities if used thoughtfully. But when used carelessly, it becomes a distraction that actually slows us down.

The real productivity gains won’t come from delegating everything to AI. They’ll come from combining human strengths with AI’s capacity, cutting through the noise, and resisting the temptation to let machines do our thinking for us.


Final thought: Right now, most companies are stuck in the “workslop” phase of AI adoption. They’re generating more content than ever but producing less clarity and value. The next phase will belong to organizations that stop chasing hype and start asking harder questions: What problem are we actually solving? Where does AI fit into that solution?

Until then, we should be honest with ourselves: AI isn’t always speeding us up. Sometimes, it’s slowing everyone down.