Ruby 5.0: What If Ruby Had First-Class Types?

The article envisions a reimagined Ruby with optional, inline type annotations called TypedRuby, addressing limitations of current solutions like Sorbet and RBS. It proposes a syntax that integrates seamlessly with Ruby’s philosophy, emphasizing readability and gradual typing while considering generics and union types. TypedRuby represents a potential evolution in Ruby’s design.

After imagining a typed CoffeeScript, I realized we need to go deeper. CoffeeScript was inspired by Ruby, but what about Ruby itself? Ruby has always been beautifully expressive, but it’s also been dynamically typed from day one. And while Sorbet and RBS have tried to add types, they feel bolted on. Awkward. Not quite Ruby.

What if Ruby had been designed with types from the beginning? Not as an afterthought, not as a separate file you maintain, but as a natural, optional part of the language itself? Let’s explore what that could look like.

The Problem with Sorbet and RBS

Before we reimagine Ruby with types, let’s acknowledge why the current solutions haven’t caught on widely.

Sorbet requires you to add # typed: true comments and use a separate type checker. Types look like this:

# typed: true
extend T::Sig

sig { params(name: String, age: Integer).returns(String) }
def greet(name, age)
  "Hello #{name}, you are #{age}"
end

RBS requires separate .rbs files with type signatures:

# user.rbs
class User
  attr_reader name: String
  attr_reader age: Integer
  
  def initialize: (name: String, age: Integer) -> void
  def greet: () -> String
end

Both solutions have the same fundamental problem: they don’t feel like Ruby. Sorbet’s sig blocks are verbose and repetitive. RBS splits your code across multiple files, breaking the single-file mental model that makes Ruby so pleasant.

What we need is something that feels native. Something Matz might have designed if static typing had been a priority in 1995.

Core Design Principles

Let’s establish what TypedRuby should be:

  1. Types are optional everywhere. You can gradually type your codebase.
  2. Types are inline. No separate files, no sig blocks.
  3. Types feel like Ruby. Natural syntax that matches Ruby’s philosophy.
  4. Duck typing coexists with static typing. You choose when to be strict.
  5. Generic types are first-class. Collections, custom classes, everything.
  6. The syntax is minimal. Ruby is beautiful; types shouldn’t ruin that.

Basic Type Annotations

In TypeScript, you use colons. In Sorbet, you use sig blocks. TypedRuby could use a more natural Ruby approach with the :: operator we already know:

# Current Ruby
name = "Ivan"
age = 30

# TypedRuby with inline types
name :: String = "Ivan"
age :: Integer = 30

# Or with type inference
name = "Ivan"  # inferred as String
age = 30       # inferred as Integer

The :: operator already means “scope resolution” in Ruby, but in this context (before assignment), it means “has type”. It’s familiar to Ruby developers and reads naturally.

Method Signatures

Current Sorbet approach:

extend T::Sig

sig { params(name: String, age: T.nilable(Integer)).returns(String) }
def greet(name, age = nil)
  age ? "Hello #{name}, #{age}" : "Hello #{name}"
end

TypedRuby approach:

def greet(name :: String, age :: Integer? = nil) :: String
  age ? "Hello #{name}, #{age}" : "Hello #{name}"
end

Or with Ruby 3’s endless method syntax:

def greet(name :: String, age :: Integer? = nil) :: String =
  age ? "Hello #{name}, #{age}" : "Hello #{name}"

Much cleaner. The types are right there with the parameters, and the return type is at the end where it reads naturally: “define greet with these parameters, returning a String.”

Classes and Attributes

Current approach with Sorbet:

class User
  extend T::Sig
  
  sig { returns(String) }
  attr_reader :name
  
  sig { returns(Integer) }
  attr_reader :age
  
  sig { params(name: String, age: Integer).void }
  def initialize(name, age)
    @name = name
    @age = age
  end
end

TypedRuby approach:

class User
  attr_reader of String, :name
  attr_reader of Integer, :age
  
  def initialize(@name :: String, @age :: Integer)
  end
  
  def birthday :: void
    @age += 1
  end
  
  def greet :: String
    "I'm #{@name}, #{@age} years old"
  end
end

Even better, we could introduce parameter properties like TypeScript:

class User
  def initialize(@name :: String, @age :: Integer, @email :: String)
    # @name, @age, and @email are automatically instance variables
  end
end

Generics: The Ruby Way

This is where it gets interesting. Ruby already has a beautiful way of working with collections. TypedRuby needs to extend that naturally.

TypeScript uses angle brackets:

class Container<T> {
  private value: T;
  constructor(value: T) { this.value = value; }
}

Sorbet uses square brackets:

class Container
  extend T::Generic
  T = type_member
  
  sig { params(value: T).void }
  def initialize(value)
    @value = value
  end
end

TypedRuby could use a more natural syntax with of:

class Container of T
  def initialize(@value :: T)
  end
  
  def get :: T
    @value
  end
  
  def map of U, &block :: (T) -> U :: Container of U
    Container.new(yield @value)
  end
end

# Usage
container = Container of String .new("hello")
lengths = container.map { |s| s.length }  # Container of Integer

For multiple type parameters:

class Pair of K, V
  def initialize(@key :: K, @value :: V)
  end
  
  def map_value of U, &block :: (V) -> U :: Pair of K, U
    Pair.new(@key, yield @value)
  end
end

Generic Methods

Methods can be generic too:

def identity of T, value :: T :: T
  value
end

def find_first of T, items :: Array of T, &predicate :: (T) -> Boolean :: T?
  items.find(&predicate)
end

# Usage
result = find_first([1, 2, 3, 4]) { |n| n > 2 }  # Integer?

Array and Hash Types

Ruby’s arrays and hashes need type support:

# Arrays
numbers :: Array of Integer = [1, 2, 3, 4, 5]
names :: Array of String = ["Alice", "Bob", "Charlie"]

# Or using shorthand
numbers :: [Integer] = [1, 2, 3, 4, 5]
names :: [String] = ["Alice", "Bob", "Charlie"]

# Hashes
user_ages :: Hash of String, Integer = {
  "Alice" => 30,
  "Bob" => 25
}

# Or using shorthand
user_ages :: {String => Integer} = {
  "Alice" => 30,
  "Bob" => 25
}

# Symbol keys (very common in Ruby)
config :: {Symbol => String} = {
  host: "localhost",
  port: "3000"
}

Union Types

Ruby’s dynamic nature often uses union types implicitly. Let’s make it explicit:

# TypeScript: string | number
value :: String | Integer = "hello"
value = 42  # OK

# Method with union return type
def find_user(id :: Integer) :: User | nil
  User.find_by(id: id)
end

# Multiple unions
status :: "pending" | "active" | "completed" = "pending"

Nullable Types

Ruby uses nil everywhere. TypedRuby needs to handle this elegantly:

# The ? suffix means "or nil"
name :: String? = nil
name = "Ivan"  # OK

# Methods that might return nil
def find_user(id :: Integer) :: User?
  User.find_by(id: id)
end

# Safe navigation works with types
user :: User? = find_user(123)
email = user&.email  # String? inferred

Interfaces and Modules

Ruby uses modules for interfaces. TypedRuby could extend this:

interface Comparable of T
  def <=>(other :: T) :: Integer
end

interface Enumerable of T
  def each(&block :: (T) -> void) :: void
end

# Implementation
class User
  include Comparable of User
  
  attr_reader :name :: String
  
  def initialize(@name :: String)
  end
  
  def <=>(other :: User) :: Integer
    name <=> other.name
  end
end

Type Aliases

Creating reusable type definitions:

type UserId = Integer
type Email = String
type UserStatus = "active" | "inactive" | "banned"

type Result of T = 
  { success: true, value: T } |
  { success: false, error: String }

def create_user(name :: String) :: Result of User
  user = User.create(name: name)
  
  if user.persisted?
    { success: true, value: user }
  else
    { success: false, error: user.errors.full_messages.join(", ") }
  end
end

Practical Example: A Repository Pattern

Let’s build something real. Here’s a generic repository in TypedRuby:

interface Repository of T
  def find(id :: Integer) :: T?
  def all :: [T]
  def create(attributes :: Hash) :: T
  def update(id :: Integer, attributes :: Hash) :: T?
  def delete(id :: Integer) :: Boolean
end

class ActiveRecordRepository of T implements Repository of T
  def initialize(@model_class :: Class)
  end
  
  def find(id :: Integer) :: T?
    @model_class.find_by(id: id)
  end
  
  def all :: [T]
    @model_class.all.to_a
  end
  
  def create(attributes :: Hash) :: T
    @model_class.create!(attributes)
  end
  
  def update(id :: Integer, attributes :: Hash) :: T?
    record = find(id)
    return nil unless record
    
    record.update!(attributes)
    record
  end
  
  def delete(id :: Integer) :: Boolean
    record = find(id)
    return false unless record
    
    record.destroy!
    true
  end
end

# Usage
user_repo = ActiveRecordRepository of User .new(User)
users :: [User] = user_repo.all
user :: User? = user_repo.find(123)

Blocks and Procs with Types

Blocks are fundamental to Ruby. They need proper type support:

# Block parameter types
def map of T, U, items :: [T], &block :: (T) -> U :: [U]
  items.map(&block)
end

# Proc types
callback :: Proc of (String) -> void = ->(msg) { puts msg }
transformer :: Proc of (Integer) -> String = ->(n) { n.to_s }

# Lambda types
double :: Lambda of (Integer) -> Integer = ->(x) { x * 2 }

# Method that accepts a block with types
def with_timing of T, &block :: () -> T :: T
  start_time = Time.now
  result = yield
  duration = Time.now - start_time
  
  puts "Took #{duration} seconds"
  result
end

# Usage
result :: String = with_timing { expensive_operation() }

Rails Integration

In practice, Ruby often means Rails, so TypedRuby needs to work beautifully with it. Here’s where we need to think carefully about syntax. For method calls that take parameters, we can use a generic-style syntax that feels natural.

Generic-style method calls for associations:

class User < ApplicationRecord
  # Using 'of' with method calls (like generic instantiation)
  has_many of Post, :posts
  belongs_to of Company, :company
  has_one of Profile?, :profile
  
  # Or postfix style (reads more naturally)
  has_many :posts of Post
  belongs_to :company of Company
  has_one :profile of Profile?
  
  # For validations, types on the attribute names
  validates :email of String, presence: true, uniqueness: true
  validates :age of Integer, numericality: { greater_than: 0 }
  
  # Scopes with return types
  scope :active of Relation[User], -> { where(status: "active") }
  scope :by_name of Relation[User], ->(name :: String) {
    where("name LIKE ?", "%#{name}%")
  }
  
  # Typed callbacks still use :: for return types
  before_save :normalize_email
  
  def normalize_email :: void
    self.email = email.downcase.strip
  end
  
  # Typed instance methods
  def full_name :: String
    "#{first_name} #{last_name}"
  end
  
  def posts_count :: Integer
    posts.count
  end
end

Alternative: Square bracket syntax (like actual generics):

class User < ApplicationRecord
  # Using square brackets like generic type parameters
  has_many[Post] :posts
  belongs_to[Company] :company
  has_one[Profile?] :profile
  
  # With additional options
  has_many[Post] :posts, dependent: :destroy
  has_many[Comment] :comments, through: :posts
  
  # Validations
  validates[String] :email, presence: true, uniqueness: true
  validates[Integer] :age, numericality: { greater_than: 0 }
  
  # Scopes
  scope[Relation[User]] :active, -> { where(status: "active") }
  scope[Relation[User]] :by_name, ->(name :: String) {
    where("name LIKE ?", "%#{name}%")
  }
end

Comparison of syntaxes:

# Option 1: Postfix 'of' (most Ruby-like)
has_many :posts of Post
validates :email of String, presence: true

# Option 2: Prefix 'of' (generic-like)
has_many of Post, :posts
validates of String, :email, presence: true

# Option 3: Square brackets (actual generics)
has_many[Post] :posts
validates[String] :email, presence: true

# Option 4: 'as:' keyword (traditional keyword argument)
has_many :posts, as: [Post]
validates :email, as: String, presence: true

# Option 5: '<>' Angle brackets (TypeScript-style generics)
has_many<[Post]> :posts
validates<String> :email, presence: true

I personally prefer Option 1 (postfix ‘of’) because:

  • It reads naturally in English: “has many posts of type Post”
  • The symbol comes first (Ruby convention)
  • It’s unambiguous and parser-friendly
  • It feels like a natural Ruby extension

Full Rails example with postfix ‘of’:

class User < ApplicationRecord
  has_many :posts of Post, dependent: :destroy
  has_many :comments of Comment, through: :posts
  belongs_to :company of Company
  has_one :profile of Profile?
  
  validates :email of String, presence: true, uniqueness: true
  validates :age of Integer, numericality: { greater_than: 0 }
  validates :status of "active" | "inactive" | "banned", inclusion: { in: %w[active inactive banned] }
  
  scope :active of Relation[User], -> { where(status: "active") }
  scope :by_company of Relation[User], ->(company_id :: Integer) {
    where(company_id: company_id)
  }
  
  before_save :normalize_email
  after_create :send_welcome_email
  
  def normalize_email :: void
    self.email = email.downcase.strip
  end
  
  def full_name :: String
    "#{first_name} #{last_name}"
  end
  
  def recent_posts(limit :: Integer = 10) :: [Post]
    posts.order(created_at: :desc).limit(limit).to_a
  end
end

class PostsController < ApplicationController
  def index :: void
    @posts :: [Post] = Post.includes(:user).order(created_at: :desc)
  end
  
  def show :: void
    @post :: Post = Post.find(params[:id])
  end
  
  def create :: void
    @post :: Post = Post.new(post_params)
    
    if @post.save
      redirect_to @post, notice: "Post created"
    else
      render :new, status: :unprocessable_entity
    end
  end
  
  private
  
  def post_params :: Hash
    params.require(:post).permit(:title, :body, :user_id)
  end
end

How it works under the hood:

The of keyword in method calls would be syntactic sugar that the parser recognizes:

# What you write:
has_many :posts of Post

# What the parser sees:
has_many(:posts, __type__: Post)

# Rails can then use this:
def has_many(name, **options)
  type = options.delete(:__type__)
  
  # Define the association
  define_method(name) do
    # ... normal association logic
  end
  
  # Store type information for runtime validation/documentation
  if type
    association_types[name] = type
    
    # Optional runtime validation in development
    if Rails.env.development?
      untyped = instance_method(name)  # capture the definition above; plain `super` would skip it
      define_method(name) do
        result = untyped.bind(self).call
        validate_type!(result, type)
        result
      end
    end
  end
end

This approach:

  • Keeps the symbol first (Ruby convention)
  • Uses familiar of keyword (like we use for generics)
  • Works with all existing parameters
  • Is parser-friendly and unambiguous
  • Reads naturally in English

Complex Example: A Service Object

Let’s build a realistic service object with full type safety:

type TransferResult = 
  { success: true, transaction: Transaction } |
  { success: false, error: String }

class MoneyTransferService
  def initialize(
    @from_account :: Account,
    @to_account :: Account,
    @amount :: BigDecimal
  )
  end
  
  def call :: TransferResult
    return error("Amount must be positive") if @amount <= 0
    return error("Insufficient funds") if @from_account.balance < @amount
    return error("Accounts must be different") if @from_account == @to_account
    
    transaction :: Transaction? = nil
    
    Account.transaction do
      @from_account.withdraw(@amount)
      @to_account.deposit(@amount)
      
      transaction = Transaction.create!(
        from_account: @from_account,
        to_account: @to_account,
        amount: @amount,
        status: "completed"
      )
    end
    
    { success: true, transaction: transaction }
  rescue ActiveRecord::RecordInvalid => e
    error(e.message)
  end
  
  private
  
  def error(message :: String) :: TransferResult
    { success: false, error: message }
  end
end

# Usage
service = MoneyTransferService.new(from_account, to_account, BigDecimal("100.50"))
result :: TransferResult = service.call

case result
in { success: true, transaction: tx }
  puts "Transfer successful: #{tx.id}"
in { success: false, error: err }
  puts "Transfer failed: #{err}"
end

Pattern Matching with Types

Ruby 3 introduced pattern matching. TypedRuby makes it type-safe:

type Response of T = 
  { status: "ok", data: T } |
  { status: "error", message: String } |
  { status: "loading" }

def handle_response of T, response :: Response of T :: String
  case response
  in { status: "ok", data: data :: T }
    "Success: #{data}"
  in { status: "error", message: msg :: String }
    "Error: #{msg}"
  in { status: "loading" }
    "Loading..."
  end
end

# Usage
user_response :: Response of User = fetch_user(123)
message = handle_response(user_response)

Metaprogramming with Types

Ruby’s metaprogramming is powerful but dangerous. TypedRuby could make it safer:

class Model
  def self.has_typed_attribute of T, name :: Symbol, type :: Class
    define_method(name) :: T do
      instance_variable_get("@#{name}")
    end
    
    define_method("#{name}=") :: void do |value :: T|
      instance_variable_set("@#{name}", value)
    end
  end
end

class User < Model
  has_typed_attribute of String, :name, String
  has_typed_attribute of Integer, :age, Integer
end

user = User.new
user.name = "Ivan"  # OK
user.age = 30       # OK
user.name = 123     # Type error!

Gradual Typing

The beauty of TypedRuby is that it’s optional. You can mix typed and untyped code:

# Completely untyped (classic Ruby)
def process(data)
  data.map { |x| x * 2 }
end

# Partially typed
def process(data :: Array)
  data.map { |x| x * 2 }
end

# Fully typed
def process of T, data :: [T], &block :: (T) -> T :: [T]
  data.map(&block)
end

# The three can coexist in the same codebase

Type System and Object Hierarchy

Here’s a crucial question: how do types relate to Ruby’s object system? In Ruby, everything is an object, and every class inherits from Object (or BasicObject). TypedRuby’s type system needs to respect this.

Types ARE classes (mostly)

In TypedRuby, most types would literally be the classes themselves:

# String is both a class and a type
name :: String = "Ivan"
puts String.class  # => Class
puts String.ancestors  # => [String, Comparable, Object, Kernel, BasicObject]

# User is both a class and a type
user :: User = User.new
puts User.class  # => Class
puts User.ancestors  # => [User, ApplicationRecord, ActiveRecord::Base, Object, ...]

This is fundamentally different from TypeScript, where types exist only at compile time. In TypedRuby, types are runtime objects too.

Special type constructors

Some type syntax creates type objects at runtime:

# Array type constructor
posts :: [Post] = []

# This is roughly equivalent to:
posts :: Array[Post] = []

# Which could be implemented as:
class Array
  def self.[](element_type)
    TypedArray.new(element_type)
  end
end

# Hash type constructor
ages :: {String => Integer} = {}

# Roughly:
ages :: Hash[String, Integer] = {}

The Type class hierarchy

TypedRuby would introduce a parallel type hierarchy:

# New base classes for type system
class Type
  # Base class for all types
end

class GenericType < Type
  # For parameterized types like Array[T], Hash[K,V]
  attr_reader :type_params
  
  def initialize(*type_params)
    @type_params = type_params
  end
end

class UnionType < Type
  # For union types like String | Integer
  attr_reader :types
  
  def initialize(*types)
    @types = types
  end
end

class NullableType < Type
  # For nullable types like String?
  attr_reader :inner_type
  
  def initialize(inner_type)
    @inner_type = inner_type
  end
end

# These would be used like:
array_of_posts = GenericType.new(Array, Post)  # [Post]
string_or_int = UnionType.new(String, Integer)  # String | Integer
nullable_user = NullableType.new(User)  # User?

Runtime type checking

Because types are objects, you could check them at runtime:

def process(value :: String | Integer)
  case value
  when String
    value.upcase
  when Integer
    value * 2
  end
end

# The type annotation creates a runtime check:
def process(value)
  # Compiler inserts:
  unless value.is_a?(String) || value.is_a?(Integer)
    raise TypeError, "Expected String | Integer, got #{value.class}"
  end
  
  case value
  when String
    value.upcase
  when Integer
    value * 2
  end
end

Type as values (reflection)

Types being objects means you can work with them:

def type_info of T, value :: T :: Hash
  {
    value: value,
    type: T,
    class: value.class,
    ancestors: T.ancestors
  }
end

result = type_info("hello")
puts result[:type]  # => String
puts result[:class]  # => String
puts result[:ancestors]  # => [String, Comparable, Object, ...]

# Generic types are objects too:
array_type = Array of String
puts array_type.class  # => GenericType
puts array_type.type_params  # => [String]

Method objects with type information

Ruby’s Method objects could expose type information:

class User
  def greet(name :: String) :: String
    "Hello, #{name}"
  end
end

method = User.instance_method(:greet)
puts method.parameter_types  # => [String]
puts method.return_type  # => String

# This enables runtime validation:
def call_safely(obj, method_name, *args)
  method = obj.method(method_name)
  
  # Check argument types
  method.parameter_types.each_with_index do |type, i|
    unless args[i].is_a?(type)
      raise TypeError, "Argument #{i} must be #{type}"
    end
  end
  
  obj.send(method_name, *args)
end

Duck typing still works

Even with types, Ruby’s duck typing philosophy is preserved:

# You can still use duck typing without types
def quack(duck)
  duck.quack
end

# Or enforce types when you want safety
def quack(duck :: Duck) :: String
  duck.quack
end

# Or use interfaces for structural typing
interface Quackable
  def quack :: String
end

def quack(duck :: Quackable) :: String
  duck.quack  # Works with any object that implements quack
end

Type compatibility and inheritance

Types follow Ruby’s inheritance rules:

class Animal
  def speak :: String
    "Some sound"
  end
end

class Dog < Animal
  def speak :: String
    "Woof"
  end
end

# Dog is a subtype of Animal
def make_speak(animal :: Animal) :: String
  animal.speak
end

dog = Dog.new
make_speak(dog)  # OK, Dog < Animal

# Liskov Substitution Principle applies
animals :: [Animal] = [Dog.new, Cat.new, Bird.new]

The as: keyword and runtime behavior

When you write:

has_many :posts, as: [Post]

This could be expanded by the Rails framework to:

has_many :posts, type_checker: -> (value) {
  value.is_a?(Array) && value.all? { |item| item.is_a?(Post) }
}

Rails could use this for runtime validation in development mode, giving you immediate feedback if you accidentally assign the wrong type.

Performance considerations

Runtime type checking has overhead. TypedRuby could handle this smartly:

# In development/test: full runtime checking
ENV['RUBY_TYPE_CHECKING'] = 'strict'

# In production: types checked only at compile time
ENV['RUBY_TYPE_CHECKING'] = 'none'

# Or selective checking for critical paths
ENV['RUBY_TYPE_CHECKING'] = 'public_apis'

Integration with existing Ruby

Since types are objects, they integrate seamlessly:

# Works with reflection
User.instance_methods.each do |method|
  m = User.instance_method(method)
  if m.respond_to?(:return_type)
    puts "#{method} returns #{m.return_type}"
  end
end

# Works with metaprogramming
class User
  [:name, :email, :age].each do |attr|
    define_method(attr) :: String do
      instance_variable_get("@#{attr}")
    end
  end
end

# Works with monkey patching (for better or worse)
class String
  def original_upcase :: String
    # Type information is preserved
  end
end

This approach makes TypedRuby feel like a natural evolution of Ruby rather than a foreign type system bolted on. Types are just objects, following Ruby’s “everything is an object” philosophy.

Type Inference

TypedRuby should infer types aggressively:

# Inferred from literal
name = "Ivan"  # String inferred

# Inferred from method return
def get_age
  30
end

age = get_age  # Integer inferred

# Inferred from array contents
numbers = [1, 2, 3, 4]  # [Integer] inferred

# Inferred from hash
user = {
  name: "Ivan",
  age: 30,
  active: true
}  # {Symbol => String | Integer | Boolean} inferred

# Explicit typing when inference isn't enough
mixed :: [Integer | String] = [1, "two", 3]

Why This Could Work

Unlike Sorbet and RBS, TypedRuby would be:

  1. Native: Types are part of the language syntax, not bolted on
  2. Optional: You choose where to add types
  3. Gradual: Mix typed and untyped code freely
  4. Readable: Syntax feels like Ruby, not like Java
  5. Powerful: Full generics, unions, intersections, pattern matching
  6. Practical: Works with Rails, metaprogramming, blocks, procs

The syntax respects Ruby’s philosophy. It’s minimal, expressive, and doesn’t get in your way. When you want types, they’re there. When you don’t, they’re not.

The Implementation Challenge

Could this be built? Technically, yes. You’d need to:

  1. Extend the Ruby parser to recognize type annotations
  2. Build a type checker that understands Ruby’s semantics
  3. Make it work with Ruby’s dynamic features
  4. Integrate with existing tools (RuboCop, RubyMine, VS Code)
  5. Handle the massive existing Ruby ecosystem

The hard part isn’t the syntax. It’s making the type checker smart enough to handle Ruby’s dynamism while still being useful. Ruby’s metaprogramming, method_missing, and dynamic dispatch all make static typing hard.

But not impossible. Crystal proved you can have Ruby-like syntax with static types. Sorbet proved you can add types to Ruby code. TypedRuby would combine the best of both: native syntax with gradual typing.
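
In fact, the runtime half of this is within reach of today’s Ruby using nothing but metaprogramming. Here’s a minimal sketch, assuming a hypothetical Typed module and typed helper (not an existing gem), of what signature checking could look like without any parser changes:

# Hedged sketch: runtime signature checks in current Ruby.
# The Typed module and `typed` helper are hypothetical, not an existing gem.
module Typed
  def typed(method_name, params:, returns:)
    original = instance_method(method_name)  # capture the untyped definition

    define_method(method_name) do |*args|
      # Validate each positional argument against its declared class
      params.each_with_index do |type, i|
        unless args[i].is_a?(type)
          raise TypeError, "#{method_name}: arg #{i} must be #{type}, got #{args[i].class}"
        end
      end

      result = original.bind(self).call(*args)

      # Validate the return value
      unless result.is_a?(returns)
        raise TypeError, "#{method_name}: expected #{returns}, got #{result.class}"
      end
      result
    end
  end
end

class Greeter
  extend Typed

  def greet(name, age)
    "Hello #{name}, you are #{age}"
  end
  typed :greet, params: [String, Integer], returns: String
end

Greeter.new.greet("Ivan", 30)   # => "Hello Ivan, you are 30"
Greeter.new.greet("Ivan", "30") # raises TypeError

The clumsiness is exactly the point: the semantics are buildable today, but the annotations belong in the language.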

The Dream

Imagine opening a Rails codebase and seeing:

class User < ApplicationRecord
  has_many :posts :: [Post]
  
  def full_name :: String
    "#{first_name} #{last_name}"
  end
end

class PostsController < ApplicationController
  def create :: void
    @post :: Post = Post.new(post_params)
    @post.save!
    redirect_to @post
  end
end

The types are there when you need them, documenting the code and catching bugs. But they don’t dominate. The code still looks like Ruby. It still feels like Ruby.

That’s what TypedRuby could be. Not a separate type system bolted onto Ruby. Not a different language inspired by Ruby. But Ruby itself, evolved to support the type safety modern developers expect.

Would It Succeed?

Honestly? Probably not. Ruby’s community values dynamism and flexibility. Matz has explicitly said he doesn’t want mandatory typing. The ecosystem is built on duck typing and metaprogramming.

But that doesn’t mean it wouldn’t be useful. A significant portion of Ruby developers would adopt optional typing if it felt natural. Rails applications would benefit from type safety in controllers, models, and services. API clients would be more reliable. Refactoring would be safer.

The key is making it optional and making it Ruby. Not Sorbet’s verbose sig blocks. Not RBS’s separate files. Just Ruby, with types when you want them.

Conclusion

TypedRuby is a thought experiment, but it’s a valuable one. It shows what’s possible when you design types into a language from the start, rather than bolting them on later.

Ruby is beautiful. Types don’t have to ruin that beauty. With the right syntax, the right philosophy, and the right implementation, they could enhance it.

Maybe someday we’ll see Ruby 5.0 with native, optional type annotations. Maybe we won’t. But it’s fun to imagine a world where Ruby has the expressiveness we love and the type safety we need.

Until then, we have Sorbet and RBS. They’re not perfect, but they’re what we’ve got. And who knows? Maybe they’ll evolve. Maybe the syntax will improve. Maybe they’ll feel more Ruby-like over time.

Or maybe someone will read this and decide to build TypedRuby for real.

A developer can dream.

The Hidden Economics of “Free” AI Tools: Why the SaaS Premium Still Matters

This post discusses the hidden costs of DIY solutions in SaaS, emphasizing the benefits of established SaaS tools over “free” AI-driven alternatives. It highlights issues like time tax, knowledge debt, reliability, support challenges, security risks, and scaling problems. Ultimately, it advocates for a balanced approach that leverages AI to enhance, rather than replace, reliable SaaS infrastructure.

This is Part 2 of my series on the evolution of SaaS. If you haven’t read Part 1: The SaaS Model Isn’t Dead, it’s Evolving Beyond the Hype of “Vibe Coding”, start there for the full context. In this post, I’m diving deeper into the hidden costs that most builders don’t see until it’s too late.

In my last post, I argued that SaaS isn’t dead, it’s just evolving beyond the surface-level appeal of vibe coding. Today, I want to dig deeper into something most builders don’t realize until it’s too late: the hidden costs of “free” AI-powered alternatives.

Because here’s the uncomfortable truth: when you replace a $99/month SaaS tool with a Frankenstein stack of AI prompts, no-code platforms, and API glue, you’re not saving money. You’re just moving the costs somewhere else, usually to places you can’t see until they bite you.

Let’s talk about what really happens when you choose the “cheaper” path.

The Time Tax: When Free Becomes Expensive

Picture this: you’ve built your “MVP” in a weekend. It’s glorious. ChatGPT wrote half the code, Zapier connects your Airtable to your Stripe account, and a Make.com scenario handles email notifications. Total monthly cost? Maybe $20 in API fees.

You’re feeling like a genius.

Then Monday morning hits. A customer reports an error. The Zapier workflow failed silently. You spend two hours digging through logs (when you can find them) only to discover that Airtable changed their API rate limits, and now your automation hits them during peak hours.

You patch it with a delay. Problem solved.

Until Wednesday, when three more edge cases emerge. The Python script you copied from ChatGPT doesn’t handle timezone conversions properly. Your payment flow breaks for international customers. The no-code platform you’re using doesn’t support the webhook format you need.

Each fix takes 30 minutes to 3 hours.

By Friday, you’ve spent more time maintaining your “free” stack than you would have spent just using Stripe Billing and ConvertKit.

This is the time tax. And unlike your SaaS subscription, you can’t expense it or write it off. It’s just gone, stolen from building features, talking to customers, or actually running your business.

The question isn’t whether your DIY solution costs less. It’s whether your time is worth $3/hour.

The Knowledge Debt: Building on Borrowed Understanding

Here’s a scenario that plays out constantly in the AI-first era:

A developer prompts Claude to build a payment integration. The AI generates beautiful code, type-safe, well-structured, handles edge cases. The developer copies it, tests it once, and ships it.

It works perfectly for two months.

Then Stripe deprecates an API endpoint. Or a customer discovers a refund edge case. Or the business wants to add subscription tiers.

Now what?

The developer stares at 200 lines of code they didn’t write and don’t fully understand. They can prompt the AI again, but they don’t know which parts are safe to modify. They don’t know why certain patterns were used. They don’t know what will break.

This is knowledge debt, the accumulated cost of using code you haven’t internalized.

Compare this to using a proper SaaS tool like Stripe Billing or Chargebee. You don’t understand every line of their code either, but you don’t need to. They handle the complexity. They migrate your data when APIs change. They’ve already solved the edge cases.

When you build with barely-understood AI-generated code, you get the worst of both worlds: you’re responsible for maintenance without having the knowledge to maintain it effectively.

This isn’t a knock on AI tools. It’s a reality check about technical debt in disguise.

The Reliability Gap: When “Good Enough” Isn’t

Let’s zoom out and talk about production-grade systems.

When you use Slack, it has 99.99% uptime. That’s not luck, it’s the result of on-call engineers, redundant infrastructure, automated failovers, and millions of dollars in operational excellence.

When you stitch together your own “Slack alternative” using Discord webhooks, Airtable, and a Telegram bot, what’s your uptime?

You don’t even know, because you’re not measuring it.

And here’s the thing: your customers notice.

They notice when notifications arrive 3 hours late because your Zapier task got queued during peak hours. They notice when your checkout flow breaks because you hit your free-tier API limits. They notice when that one Python script running on Replit randomly stops working.

Reliability isn’t a feature you can bolt on later. It’s the foundation everything else is built on.

This is why companies still pay for Datadog instead of writing their own monitoring. Why they use PagerDuty instead of email alerts. Why they choose AWS over running servers in their garage.

Not because they can’t build these things themselves, but because reliability at scale requires obsessive attention to details that don’t show up in MVP prototypes.

Your vibe-coded solution might work 95% of the time. But that missing 5% is where trust dies and customers churn.

The Support Nightmare: Who Do You Call?

Imagine this email from a customer:

“Hi, I tried to upgrade my account but got an error. Can you help?”

Simple enough, right?

Except your “upgrade flow” involves:

  • A Stripe Checkout session (managed by Stripe)
  • A webhook that triggers Make.com (managed by Make.com)
  • Which updates Airtable (managed by Airtable)
  • Which triggers a Zapier workflow (managed by Zapier)
  • Which sends data to your custom API (deployed on Railway)
  • Which updates your database (hosted on PlanetScale)

One of these broke. Which one? You have no idea.

You start debugging:

  • Check Stripe logs. Payment succeeded.
  • Check Make.com execution logs. Ran successfully.
  • Check Airtable. Record updated.
  • Check Zapier. Task queued but not processed yet.

Ah. Zapier’s free tier queues tasks during high-traffic periods. The upgrade won’t process for another 15 minutes.

You explain this to the customer. They’re confused and frustrated. So are you.

Now imagine that same scenario with a proper SaaS tool like Memberstack or MemberSpace. The customer emails them. They check their logs, identify the issue, and fix it. Done.

When you own the entire stack, you own all the problems too. And most founders don’t realize how much time “customer support for your custom infrastructure” actually takes until they’re drowning in it.

The Security Illusion: Compliance Costs You Can’t See

Pop quiz: Is your AI-generated authentication system GDPR compliant?

Does it properly hash passwords? Does it prevent timing attacks? Does it implement proper session management? Does it handle token refresh securely? Does it log security events appropriately?

If you’re not sure, you’ve got a problem.

Because when you use Auth0, Clerk, or AWS Cognito, these questions are answered for you. They have security teams, penetration testers, and compliance certifications. They handle GDPR, CCPA, SOC2, and whatever acronym-soup regulation applies to your industry.

When you roll your own auth with AI-generated code, you own all of that responsibility.

And here’s what most people don’t realize: security incidents are expensive. Not just in terms of fines and legal costs, but in reputation damage and customer trust.

One breach can kill a startup. And saying “but ChatGPT wrote the code” isn’t a legal defense.

The same logic applies to payment handling, data storage, and API security. Every shortcut you take multiplies your risk surface.

SaaS tools don’t just sell features, they sell peace of mind. They carry the liability so you don’t have to.

The Scale Wall: When Growth Breaks Everything

Your vibe-coded MVP works perfectly for your first 10 customers. Then you get featured on Product Hunt.

Suddenly you have 500 new signups in 24 hours.

Your Airtable base hits record limits. Your free-tier API quotas are maxed out. Your Make.com scenarios are queuing tasks for hours. Your Railway instance keeps crashing because you didn’t configure autoscaling. Your webhook endpoints are timing out because they weren’t designed for concurrent requests.

Everything is on fire.

This is the scale wall, the moment when your clever shortcuts stop being clever and start being catastrophic.

Real SaaS products are built to scale. They handle traffic spikes. They have redundancy. They auto-scale infrastructure. They cache aggressively. They optimize database queries. They monitor performance.

Your vibe-coded stack probably does none of these things.

And here’s the brutal part: scaling isn’t something you can retrofit easily. It’s architectural. You can’t just “add more Zapier workflows” your way out of it.

At this point, you face a choice: either rebuild everything properly (which takes months and risks losing customers during the transition), or artificially limit your growth to stay within the constraints of your fragile infrastructure.

Neither option is appealing.

The Integration Trap: When Your Stack Doesn’t Play Nice

One of the biggest promises of the AI-powered, no-code revolution is that everything integrates with everything.

Except it doesn’t. Not really.

Sure, Zapier connects to 5,000+ apps. But those integrations are surface-level. You get basic CRUD operations, not deep functionality.

Want to implement complex business logic? Want custom error handling? Want to batch process data efficiently? Want real-time updates instead of 15-minute polling?

Suddenly you’re writing custom code anyway, except now you’re writing it in the weird constraints of whatever platform you’ve chosen, rather than in a proper application where you have full control.

The irony is thick: you chose no-code to avoid complexity, but you ended up with a different kind of complexity, one that’s harder to debug and impossible to version control properly.

Meanwhile, a well-designed SaaS tool either handles your use case natively or provides a proper API for custom integration. You’re not fighting the platform; you’re using it as intended.

The Real Cost Comparison

Let’s do some actual math.

Vibe-coded stack:

  • Zapier Pro: $20/month
  • Make.com: $15/month
  • Airtable Pro: $20/month
  • Railway: $10/month
  • Various API costs: $15/month
  • Total: $80/month

Your time:

  • Initial setup: 20 hours
  • Weekly maintenance: 3 hours
  • Monthly debugging: 5 hours
  • Customer support for stack issues: 2 hours
  • Monthly time cost: ~20 hours

If your time is worth even $50/hour (a modest rate for a technical founder), that’s $1,000/month in opportunity cost.

Total real cost: $1,080/month.

Proper SaaS stack:

  • Stripe Billing: Included with processing fees
  • Memberstack: $25/month
  • ConvertKit: $29/month
  • Vercel: $20/month
  • Total: $74/month + processing fees

Your time:

  • Initial setup: 4 hours
  • Weekly maintenance: 0.5 hours
  • Monthly debugging: 1 hour
  • Customer support for stack issues: 0 hours (vendor handles it)
  • Monthly time cost: ~3 hours

At $50/hour, that’s $150/month in opportunity cost.

Total real cost: $224/month.

The “more expensive” SaaS stack actually costs 80% less when you account for time.
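
If you want to sanity-check the math for your own numbers, the model fits in a few lines of Ruby (the figures below are the illustrative ones from this post, not benchmarks):

# True monthly cost = subscriptions + (your hours * your hourly rate)
def true_monthly_cost(subscriptions:, hours:, hourly_rate: 50)
  subscriptions + hours * hourly_rate
end

true_monthly_cost(subscriptions: 80, hours: 20)  # => 1080 (vibe-coded stack)
true_monthly_cost(subscriptions: 74, hours: 3)   # => 224  (proper SaaS stack)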

And we haven’t even factored in:

  • The revenue lost from downtime
  • The customers lost from poor reliability
  • The scaling issues you’ll hit later
  • The security risks you’re accepting
  • The knowledge debt you’re accumulating

When DIY Makes Sense (And When It Doesn’t)

Look, I’m not saying you should never build anything custom. There are absolutely times when DIY is the right choice.

Build custom when:

  • The functionality is core to your competitive advantage
  • No existing tool solves your exact problem
  • You have the expertise to maintain it long-term
  • You’re building something genuinely novel
  • You have the team capacity to own it forever

Use SaaS when:

  • The functionality is commodity (auth, payments, email, etc.)
  • Reliability and uptime are critical
  • You want to focus on your core product
  • You’re a small team with limited time
  • You need compliance and security guarantees
  • You value your time more than monthly fees

The pattern is simple: build what makes you unique, buy what makes you functional.

The AI-Assisted Middle Ground

Here’s where it gets interesting: AI doesn’t just enable vibe coding. It also enables smarter SaaS integration.

You can use Claude or ChatGPT to:

  • Generate integration code for SaaS APIs faster
  • Debug webhook issues more efficiently
  • Build wrapper libraries around vendor SDKs
  • Create custom workflows on top of stable platforms

This is the sweet spot: using AI to accelerate your work with reliable tools, rather than using AI to replace reliable tools entirely.

Think of it like this: AI is an incredible co-pilot. But you still need the plane to have wings.

The Evolution Continues

My argument isn’t that AI tools are bad or that vibe coding is wrong. It’s that we need to be honest about the tradeoffs.

The next generation of successful products won’t be built by people who reject AI, and they won’t be built by people who reject SaaS.

They’ll be built by people who understand when to use each.

People who can vibe-code a prototype in a weekend, then have the discipline to replace it with proper infrastructure before it scales. People who use AI to augment their capabilities, not replace their judgment.

The future isn’t “AI vs. SaaS.” It’s “AI-enhanced SaaS.”

Tools that are easier to integrate because AI helps you. APIs that are easier to understand because AI explains them. Systems that are easier to maintain because AI helps you debug.

But beneath all that AI magic, there’s still reliable infrastructure, accountable teams, and boring old uptime guarantees.

Because at the end of the day, customers don’t care about your tech stack. They care that your product works when they need it.

Build for the Long Game

If you’re building something that matters, something you want customers to depend on, something you want to grow into a real business, you need to think beyond the MVP phase.

You need to think about what happens when you hit 100 users. Then 1,000. Then 10,000.

Will your clever weekend hack still work? Or will you be spending all your time keeping the lights on instead of building new features?

The most successful founders I know aren’t the ones who move fastest. They’re the ones who move sustainably, who build foundations that can support growth without collapsing.

They use AI to move faster. They use SaaS to stay reliable. They understand that both are tools, not religions.

Final Thoughts: Respect the Craft

There’s a romance to the idea of building everything yourself. Of being the 10x developer who needs nothing but an AI assistant and pure willpower.

But romance doesn’t ship products. Discipline does.

The best software is invisible. It just works. And making something “just work”, consistently, reliably, at scale, is harder than anyone admits.

So use AI. Vibe-code your prototypes. Move fast and experiment.

But when it’s time to ship, when it’s time to serve real customers, when it’s time to build something that lasts, respect the craft.

Choose boring, reliable infrastructure. Pay for the SaaS tools that solve solved problems. Invest in quality over cleverness.

Because the goal isn’t to build the most innovative tech stack.

The goal is to build something customers love and trust.

And trust, as it turns out, is built on the boring stuff. The stuff that works when you’re not looking. The stuff that scales without breaking. The stuff someone else maintains at 3 AM so you don’t have to.

That’s what SaaS really sells.

And that’s why it’s not dead, it’s just getting started.


What’s your experience balancing custom-built solutions with SaaS tools? Have you hit the scale wall or the reliability gap? Share your stories in the comments. I’d love to hear what you’ve learned.

If you found this useful, follow me for more posts on building sustainable products in the age of AI, where we embrace new tools without forgetting old wisdom.

Saving Money With Embeddings in AI Memory Systems: Why Ruby on Rails is Perfect for LangChain

In the exploration of AI memory systems and embeddings, the author highlights the hidden costs in AI development, emphasizing token management. Leveraging Ruby on Rails streamlines the integration of LangChain for efficient memory handling. Adopting strategies like summarization and selective retrieval significantly reduces expenses, while maintaining readability and scalability in system design.

Over the last few months of rebuilding my Rails muscle memory, I’ve been diving deep into AI memory systems and experimenting with embeddings. One of the biggest lessons I’ve learned is that the cost of building AI isn’t just in the model; it’s in how you use it. Tokens, storage, retrieval: these are the hidden levers that determine whether your AI stack remains elegant or becomes a runaway expense.

And here’s the good news: with Ruby on Rails, managing these complexities becomes remarkably simple. Rails has always been about turning complicated things into something intuitive and maintainable, and when you pair it with LangChain, it feels like magic.


Understanding the Cost of Embeddings

Most people think that running large language models is expensive because of the model itself. That’s only partially true. In practice, the real costs come from:

  • Storing too much raw content: Every extra paragraph you embed costs more in tokens, both for the embedding itself and for later retrieval.
  • Embedding long texts instead of summaries: LLMs don’t need the full novel; they often just need the distilled version. Summaries are shorter, cheaper, and surprisingly effective.
  • Retrieving too many memories: Pulling 50 memories for a simple question can cost more than the model call itself. Smart retrieval strategies can drastically cut costs.
  • Feeding oversized prompts into the model: Every extra token in your prompt adds up. Cleaner prompts = cheaper calls.

I’ve seen projects where embedding every word of a document seemed “safe,” only to realize months later that the token bills were astronomical. That’s when I started thinking in terms of summary-first embeddings.
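
To make that lever concrete, here’s a toy calculator; the per-1K-token price is a placeholder, not a quoted rate, so plug in your provider’s current pricing:

# Toy cost model: price_per_1k is a placeholder, not a real quote
def embedding_cost(tokens, price_per_1k: 0.0001)
  (tokens / 1000.0) * price_per_1k
end

embedding_cost(2_000)  # embedding the raw document
embedding_cost(60)     # embedding a ~50-word summary: roughly 33x cheaper
# That gap compounds across every document, re-embed, and retrieval.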


How Ruby on Rails Makes It Easy

Rails is my natural playground for building systems that scale reliably without over-engineering. Why does Rails pair so well with AI memory systems and LangChain? Several reasons:

Migrations Are Elegant
With Rails, adding a vector column with PgVector feels like any other migration. You can define your tables, indexes, and limits in one concise block:

class AddMemoriesTable < ActiveRecord::Migration[7.1]
  def change
    enable_extension "vector"
    create_table :memories do |t|
      t.text :content, null: false
      t.vector :embedding, limit: 1536
      t.jsonb :metadata
      t.timestamps
    end
  end
end


There’s no need for complicated schema scripts. Rails handles the boring but essential details for you.

ActiveRecord Makes Embedding Storage a Breeze
Storing embeddings in Rails is almost poetic. With a simple model, you can create a memory with content, an embedding, and metadata in a single call:

Memory.create!(
  content: "User prefers Japanese and Mexican cuisine.", 
  embedding: embedding_vector,
  metadata: { type: :preference, user_id: 42 }
)

And yes, you can query those memories by similarity in a single, readable line:

Memory.order(Arel.sql("embedding <=> '[#{query_embedding.join(',')}]'")).limit(5)

Rails keeps your code readable and maintainable while you handle sophisticated vector queries.

LangChain Integration is Natural
LangChain is all about chaining LLM calls, memory storage, and retrieval. In Rails, you already have everything you need: models, services, and job queues. You can plug LangChain into your Rails services to:

  • Generate embeddings (ideally of summaries) in background jobs
  • Store memories through your ActiveRecord models
  • Retrieve the most relevant memories before each LLM call
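
For example, a small service object could tie these together. Treat this as a hedged sketch: it assumes the langchainrb gem’s OpenAI wrapper with complete(prompt:) and embed(text:) methods, so check the gem’s current API before relying on it:

# app/services/memory_writer.rb
# Hedged sketch: assumes langchainrb's OpenAI wrapper; verify the current API.
class MemoryWriter
  def initialize(llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]))
    @llm = llm
  end

  # Summary-first: embed a short summary instead of the raw content
  def call(content, user_id:)
    summary = @llm.complete(prompt: "Summarize in about 50 words: #{content}").completion
    vector  = @llm.embed(text: summary).embedding

    Memory.create!(
      content: summary,
      embedding: vector,
      metadata: { type: :summary, user_id: user_id }
    )
  end
end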


Saving Money with Smart Embeddings

Here’s the approach I’ve refined over multiple projects:

  1. Summarize Before You Embed
    Instead of embedding full documents, feed the model a summary. A 50-word summary costs fewer tokens but preserves the semantic meaning needed for retrieval.
  2. Limit Memory Retrieval
    You rarely need more than 5–10 memories for a single model call. More often than not, extra memories just bloat your prompt and inflate costs.
  3. Use Metadata Wisely
    Store small, structured metadata alongside your embeddings to filter memories before similarity search. For example, filter by user_id or type instead of pulling all records into the model.
  4. Cache Strategically
    Don’t re-embed unchanged content. Use Rails validations, background jobs, and services to embed only when necessary.

When you combine these strategies, the savings are significant. In some projects, embedding costs dropped by over 70% without losing retrieval accuracy.
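
Strategies 2 and 3 map directly onto the Memory model from earlier. A hedged sketch (the scope names are mine; the pgvector <=> operator matches the query shown above):

class Memory < ApplicationRecord
  # Strategy 3: filter on cheap, structured metadata before any vector math
  scope :for_user, ->(user_id) { where("metadata->>'user_id' = ?", user_id.to_s) }

  # Strategy 2: nearest-neighbor search capped at a handful of memories
  scope :similar_to, ->(query_embedding, limit = 5) {
    order(Arel.sql("embedding <=> '[#{query_embedding.join(',')}]'")).limit(limit)
  }
end

# Five relevant memories for user 42, not fifty for everyone
memories = Memory.for_user(42).similar_to(query_embedding)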


Why I Stick With Rails and PostgreSQL

There are many ways to build AI memory systems. You could go with specialized databases, microservices, or cloud vector stores. But here’s what keeps me on Rails and Postgres:

  • Reliability: Postgres is mature, stable, and production-ready. PgVector adds vector search without changing the foundation.
  • Scalability: Rails scales surprisingly well when you keep queries efficient and leverage background jobs.
  • Developer Happiness: Rails lets me iterate quickly. I can prototype, test, and deploy AI memory features without feeling like I’m juggling ten different systems.
  • Future-Proofing: Rails projects can last years without a complete rewrite. AI infrastructure is still evolving; having a stable base matters.

Closing Thoughts

AI memory doesn’t have to be complicated or expensive. By thinking carefully about embeddings, summaries, retrieval, and token usage, and by leveraging Rails with LangChain, you can build memory systems that are elegant, fast, and cost-effective.

For me, Rails is more than a framework. It’s a philosophy: build systems that scale naturally, make code readable, and keep complexity under control. Add PgVector and LangChain to that mix, and suddenly AI memory feels like something you can build without compromise.

In the world of AI, where complexity grows faster than budgets, that kind of simplicity is priceless.

The SaaS Model Isn’t Dead, it’s Evolving Beyond the Hype of “Vibe Coding”

The article critiques the rise of “vibe coding,” emphasizing the distinction between quick prototypes and genuine MVPs. It argues that while AI can accelerate product development, true success relies on accountability, stability, and structure. Ultimately, SaaS is evolving, prioritizing reliable infrastructure and craftsmanship over mere speed and creativity.

“The SaaS model is dead. Long live vibe-coded AI scripts.”

That’s the kind of hot take lighting up LinkedIn: half ironic, half prophetic.

Why pay $99/month for a product when you can stitch together 12 AI prompts, 3 no-code hacks, and a duct-taped Python script you barely understand?

Welcome to vibe coding.

It feels fast. It feels clever.
Until the vibes break and no one knows why.


The Mirage of Instant Software

We live in an era of speed.
AI gives us instant answers, mockups, and even “apps.” The line between prototype and product has never been thinner, and that’s both empowering and dangerous.

What used to take months of product design, testing, and iteration can now be faked in a weekend.
You can prompt ChatGPT to generate a working landing page, use Bubble or Replit for logic, and Zapier to glue it all together.

Boom: “launch” your MVP.

But here’s the truth no one wants to say out loud:
Most of these AI-fueled prototypes aren’t MVPs. They’re demos with good lighting.

A real MVP isn’t about how fast you can ship; it’s about how reliably you can learn from what you ship.

And learning requires stability.
You can’t measure churn or retention when your backend breaks every other day.
You can’t build trust when your app crashes under 20 users.

That’s when the vibes start to fade.


The Boring Truth Behind Great Products

Let’s talk about what SaaS really sells.
It’s not just the product you see; it’s everything beneath it:

  • Uptime: Someone is on-call at 3 AM keeping your app alive.
  • Security: Encryption, audits, GDPR, and SOC2, the invisible scaffolding of trust.
  • Maintenance: When APIs change or libraries break, someone fixes it.
  • Versioning: “Update Available” didn’t write itself.
  • Support: Human beings who care when you open a ticket.

When you pay for SaaS, you’re not paying for buttons.
You’re paying for accountability: the guarantee that someone else handles the boring stuff while you focus on your business.

And boring, in software, is beautiful.
Because it means stability. Predictability. Peace of mind.


The Myth of the One-Prompt MVP

There’s a growing illusion that AI can replace the entire MVP process.
Just write a long enough prompt, and out comes your startup.

Except… no.

Building an MVP is not about output. It’s about the iteration loop: testing, learning, refining.

A real MVP requires:

  • Instrumentation: Analytics to track usage and retention.
  • UX Design: Understanding user friction.
  • Scalability: Handling 500 users without collapse.
  • Product Roadmap: Knowing what not to build yet.
  • Legal & Compliance: Because privacy questions always come.

AI can accelerate this process, but it can’t replace it.
Because AI doesn’t understand your market context, users, or business model.
It’s a tool, not a cofounder.


From Vibes to Viability

There’s real power in AI-assisted building.
You can move fast, experiment, and prototype ideas cheaply.

But once something works, you’ll need to replace your prompt stack and Zapier web of glue code with solid infrastructure.

That’s when the SaaS mindset returns.
Not because you need to “go old school,” but because you need to go sustainable.

Every successful product eventually faces the same questions:

  • Who maintains this?
  • Who owns the data?
  • Who ensures it still works next month?
  • Who’s responsible when it breaks?

The answer, in true SaaS fashion, must always be: someone accountable.


SaaS Isn’t Dead, it’s Maturing

The world doesn’t need more quick hacks.
It needs more craftsmanship: builders who blend speed with discipline, creativity with structure, and vibes with reliability.

SaaS isn’t dying; it’s evolving.

Tomorrow’s SaaS might not look like subscription dashboards.
It might look like AI agents, private APIs, or personalized data layers.

But behind every “smart” layer will still be boring, dependable infrastructure: databases, authentication, servers, and teams maintaining uptime.

The form changes.
The value (reliability, scalability, trust) never does.


Final Thought: Build With Vibes, Ship With Discipline

There’s nothing wrong with vibe coding. It’s an amazing way to experiment and learn.

But if you want to launch something that lasts, something customers depend on, you’ll need more than vibes.
You’ll need product thinking, process, and patience.

That’s what separates a weekend project from a real business.

So build with vibes.
But ship with discipline.

Because that’s where the magic (and the money) really happens.

If you liked this post, follow me for more thoughts on building real products in the age of AI hype, where craftsmanship beats shortcuts every time.

Artisanal Coding (職人コーディング): A Manifesto for the Next Era of Software Craftsmanship

Artisanal coding emphasizes the importance of craftsmanship in software development amidst the rise of AI and “vibe coding.” It advocates for intentional, quality-driven coding practices that foster deep understanding and connection to the code. By balancing AI assistance with craftsmanship, developers can preserve their skills and create sustainable, high-quality software.

In an age where code seems to write itself and AI promises to make every developer “10x faster,” something essential has quietly started to erode: our craftsmanship. I call the counter-movement to this erosion artisanal coding.

Like artisanal bread or craft coffee, artisanal coding is not about nostalgia or resistance to progress. It’s about intentionality, quality, and soul: things that can’t be automated, templated, or generated in bulk. It’s the human touch in a field that’s rushing to outsource its own intuition.

What Is Artisanal Coding (職人コーディング)?

Artisanal coding is the conscious resistance to that decay.
It’s not anti-AI; it’s anti-carelessness. It’s the belief that the best code is still handmade, understood, and cared for.

Think of an artisan carpenter.
He can use power tools, but he knows when to stop and sand by hand. He knows the wood, feels its resistance, and adjusts. He doesn’t mass-produce; he perfects.

Artisanal coding applies that mindset to software. It’s about:

  • Understanding the problem before touching the code.
  • Writing it line by line, consciously.
  • Refactoring not because a tool says so, but because you feel the imbalance.
  • Learning from your errors instead of patching them away.

It’s slow. It’s deliberate. And that’s the point.

Artisanal coding is the deliberate act of writing software by hand, with care, precision, and understanding. It’s the opposite of what I call vibe coding: the growing trend of throwing AI-generated snippets together, guided by vibes and autocomplete rather than comprehension.

This is not about rejecting tools; it’s about rejecting the loss of mastery. It’s a mindset that values the slow process of creation, the small victories of debugging, and the satisfaction of knowing your code’s structure like a craftsman knows the grain of wood.

Why We Need Artisanal Coding

  1. We’re losing our muscle memory.
    Developers who rely too heavily on AI are forgetting how to solve problems from first principles. Code completion is helpful, but when it replaces thought, the skill atrophies.
  2. Code quality is declining behind pretty demos.
    Vibe coding produces software that “works” today but collapses tomorrow. Without deep understanding, we can’t reason about edge cases, performance, or scalability.
  3. We risk becoming code operators instead of creators.
    The satisfaction of crafting something elegant is replaced by prompt-tweaking and debugging alien code. Artisanal coding restores that connection between creator and creation.
  4. AI cannot feel the friction.
    Friction is good. The process of struggling through a bug teaches lessons that no autocomplete can. That frustration is where true craftsmanship is born.

The Role (and Limitations) of AI in Artisanal Coding

Artisanal coding doesn’t ban AI. It just defines healthy boundaries for its use.

✅ Allowed AI usage:

  • Short code completions: Using AI to fill in a few lines of boilerplate or repetitive syntax.
  • Troubleshooting assistance: Asking AI conceptual questions outside the codebase, similar to how you’d ask Stack Overflow or a mentor for advice.

🚫 Not allowed:

  • Generating entire functions or components without understanding them.
  • Using AI to “design” the logic of your app.
  • Copy-pasting large sections of unverified code.

AI can be your assistant, not your replacement. Think of it as a digital apprentice, not a co-author.


The Future Depends on How We Code Now

As we rush toward AI-assisted everything, we risk raising a generation of developers who can’t code without help. Artisanal coding is a statement of independence: a call to slow down, think deeply, and keep your hands on the keyboard with intent.

Just as artisans revived craftsmanship in industries overtaken by automation, we can do the same in tech. The software we write today shapes the world we live in tomorrow. It deserves the same care as any other craft.

Artisanal coding is not a movement of the past; it’s a movement for the future.
Because even in the age of AI, quality still matters. Understanding still matters. Humans still matter.

If vibe coding is the fast food of software, artisanal coding is the slow-cooked meal: nourishing, deliberate, and made with care.
It takes more time, yes. But it’s worth every second.

Let’s bring back pride to our craft.
Let’s code like artisans again.

In many ways, artisanal coding echoes the Japanese philosophies of Shokunin [職人] (the pursuit of mastery through mindful repetition), Wabi-sabi [侘寂] (the acceptance of imperfection as beauty), and Kaizen [改善] (the quiet dedication to constant improvement). A true craftsperson doesn’t rush; they refine. They don’t chase perfection; they respect the process. Coding, like Japanese pottery or calligraphy, becomes an act of presence: a meditative dialogue between the mind and the material. In a world driven by automation and speed, this spirit reminds us that the deepest satisfaction still comes from doing one thing well, by hand, with heart.

Final Thoughts

This post marks a turning point for me and for this blog.
I’ve spent decades building software, teams, and systems. I’ve seen tools come and go, frameworks rise and fade. But never before have we faced a transformation this deep: one that challenges not just how we code, but why we code.

Artisanal coding is my response.
From this point forward, everything I write here, every essay and every reflection, will revolve around this principle: building software with intention, understanding, and care.

This isn’t just about programming.
It’s about reclaiming craftsmanship in a world addicted to shortcuts.
It’s about creating something lasting in an era of instant everything.
It’s about remembering that the hands still matter.

“職人コーディング – Writing software with heart, precision, and purpose.”

Brainrot and the Slow Death of Code

The rise of AI tools in software development is leading to a decline in genuine coding skills, as developers increasingly rely on automation. This reliance dampens critical thinking and creativity, replacing depth with superficial efficiency. Ultimately, the industry risks producing inferior code devoid of understanding, undermining the essence of craftsmanship in programming.

It’s an uncomfortable thing to say out loud, but we’re witnessing a slow decay of human coding ability: a collective brainrot disguised as progress.

AI tools are rewriting how we build software. Every week, new developers boast about shipping apps in a weekend using AI assistants, generating entire APIs, or spinning up SaaS templates without understanding what’s going on beneath the surface. At first glance, this looks like evolution: a leap forward for productivity. But beneath that veneer of efficiency, something essential is being lost.

Something deeply human.

The Vanishing Craft

Coding has always been more than just typing commands into a terminal. It’s a way of thinking. It’s logic, structure, and creativity fused into a single process: the art of turning chaos into clarity.

But when that process is replaced by autocomplete and code generation, the thinking disappears. The hands still move, but the mind doesn’t wrestle with the problem anymore. The apprentice phase, the long, painful, necessary stage of learning how to structure systems, debug, refactor, and reason, gets skipped.

And that’s where the rot begins.

AI gives us perfect scaffolding but no understanding of the architecture. Developers start to “trust” the model more than themselves. Code review becomes an act of blind faith, and debugging turns into a guessing game of prompts.

The craft is vanishing.

We Are Losing Muscle Memory

Just like a musician who stops practicing loses touch with their instrument, coders are losing their “muscle memory.”

When you stop writing code line by line, stop thinking about data flow, stop worrying about algorithms and complexity, your instincts dull. The small patterns that once made you fast, efficient, and insightful fade away.

Soon, you can’t feel when something’s wrong with a function or a model. You can’t spot the small design flaw that will turn into technical debt six months later. You can’t intuit why the system slows down, or why memory leaks appear.

AI-generated code doesn’t teach you these instincts; it just hides the consequences long enough for them to explode.

Inferior Code, Hidden in Abundance

We’re producing more code than ever before, but most of it is worse.

AI makes quantity trivial. Anyone can spin up ten microservices, fifty endpoints, and thousands of lines of boilerplate in an hour. But that abundance hides a dangerous truth: we are filling the digital world with code that nobody understands.

Future engineers will inherit layers of opaque, AI-generated software: systems without authors, without craftsmanship, without intention. It’s digital noise masquerading as innovation.

This isn’t progress. It’s entropy.

The Myth of “Productivity”

The industry loves to equate productivity with success. But in software, speed isn’t everything. Some of the best systems ever built took time, reflection, and human stubbornness.

We’re now in a paradox where developers produce more but learn less. Where every shortcut taken today adds future friction. The so-called “productivity gains” are borrowed time: a loan with heavy interest, paid in debugging, maintenance, and fragility.

When code becomes disposable, knowledge follows. And when knowledge fades, innovation turns into imitation.

The Future Is Not Hopeless If We Choose Discipline

The solution isn’t to reject AI; it’s to reestablish the boundaries between tool and craftsman.

AI should be your assistant, not your brain. It should amplify your understanding, not replace it. The act of writing, reasoning, and debugging still matters. You still need to understand the stack, the algorithm, the data flow.

If you don’t, the machine will own your craft and, eventually, your value.

Software built by people who no longer understand code will always be inferior to software built by those who do. The future of code depends on preserving that human layer of mastery: the part that questions, improves, and cares.

Closing Thought

What’s happening isn’t the death of coding; it’s the death of depth.

We’re watching a generation of builders raised on autocomplete lose touch with the essence of creation. The danger isn’t that AI will replace programmers. The danger is that programmers will forget how to think like programmers.

Brainrot isn’t about laziness; it’s about surrender. And if we keep surrendering our mental muscles to the machine, we’ll end up with a future full of code that works but no one knows why.

The Art of Reusability and Why AI Still Doesn’t Understand It

AI can generate code but lacks understanding of design intent, making it struggle with reusability. True reusability involves encoding shared ideas and understanding context, which AI cannot grasp. This leads to overgeneralized or underabstracted code. Effective engineering requires human judgment and foresight that AI is currently incapable of providing.

After writing about the team that deleted 200,000 lines of AI-generated code without breaking their app, a few people asked me:

“If AI is getting so good at writing code, why can’t it also reuse code properly?”

That’s the heart of the problem.

AI can produce code.
It can suggest patterns.
But it doesn’t understand why one abstraction should exist and why another should not.

It has no concept of design intent, evolution over time, or maintainability.
And that’s why AI-generated code often fails at the very thing great software engineering is built upon: reusability.


Reusability Isn’t About Copying Code

Let’s start with what reusability really means.

It’s not about reusing text.
It’s about reusing thought.

When you make code reusable, you’re encoding an idea, a shared rule or process, in one place, so it can serve multiple contexts.
That requires understanding how your domain behaves and where boundaries should exist.

Here’s a small example in Ruby 3.4:

# A naive AI-generated version
class InvoiceService
  def create_invoice(customer, items)
    total = items.sum { |i| i[:price] * i[:quantity] }
    tax = total * 0.22
    {
      customer: customer,
      total: total,
      tax: tax,
      grand_total: total + tax
    }
  end

  def preview_invoice(customer, items)
    total = items.sum { |i| i[:price] * i[:quantity] }
    tax = total * 0.22
    {
      preview: true,
      total: total,
      tax: tax,
      grand_total: total + tax
    }
  end
end

It works. It looks fine.
But the duplication here is silent debt.

A small tax change or business rule adjustment would require edits in multiple places, which the AI wouldn’t warn you about.

Now, here’s how a thoughtful Rubyist might approach the same logic:

class InvoiceCalculator
  TAX_RATE = 0.22

  def initialize(items)
    @items = items
  end

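  # Endless method definitions (Ruby 3.0+): each body is a single expression.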
  def subtotal = @items.sum { |i| i[:price] * i[:quantity] }
  def tax = subtotal * TAX_RATE
  def total = subtotal + tax
end

class InvoiceService
  def create_invoice(customer, items, preview: false)
    calc = InvoiceCalculator.new(items)

    {
      customer: customer,
      total: calc.subtotal,
      tax: calc.tax,
      grand_total: calc.total,
      preview: preview
    }
  end
end

Now the logic is reusable, testable, and flexible.
If tax logic changes, it’s centralized.
If preview behavior evolves, it stays isolated.

This is design thinking, not just text prediction.
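To make the contrast concrete, here’s how the refactored version reads at the call site (a quick sketch; the sample items and the customer name are mine, not from the original code):

calc = InvoiceCalculator.new([{ price: 100.0, quantity: 2 }, { price: 50.0, quantity: 1 }])
calc.subtotal  # => 250.0
calc.tax       # => 55.0 (250.0 * 0.22)
calc.total     # => 305.0

# The service reuses the same calculator for real and preview invoices alike:
invoice = InvoiceService.new.create_invoice("ACME", [{ price: 100.0, quantity: 2 }], preview: true)
invoice[:grand_total]  # => 244.0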


Why AI Struggles with This

AI doesn’t understand context; it understands correlation.

When it generates code, it pulls from patterns it has seen before. It recognizes that “invoices” usually involve totals, taxes, and items.
But it doesn’t understand the relationship between those things in your specific system.

It doesn’t reason about cohesion (what belongs together) or coupling (what should stay apart).

That’s why AI-generated abstractions often look reusable but aren’t truly so.
They’re usually overgeneralized (“utility” modules that do too much) or underabstracted (duplicate logic with slightly different names).

In other words:
AI doesn’t design for reuse; it duplicates for confidence.


A Real Example: Reusability in Rails

Let’s look at something familiar to Rubyists: ActiveRecord scopes.

An AI might generate this:

class Order < ApplicationRecord
  scope :completed, -> { where(status: 'completed') }
  scope :recent_completed, -> { where(status: 'completed').where('created_at > ?', 30.days.ago) }
end

Looks fine, right?
But you’ve just duplicated the status: 'completed' filter.

A thoughtful approach is:

class Order < ApplicationRecord
  scope :completed, -> { where(status: 'completed') }
  scope :recent, -> { where('created_at > ?', 30.days.ago) }
  scope :recent_completed, -> { completed.recent }
end

It’s subtle, but it’s how reusability works.
You extract intent into composable units.
You think about how the system wants to be extended later.
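Composition also pays off when requirements grow. Suppose a “flagged orders” rule arrives later (a hypothetical extension, not part of the original example): you reuse :recent instead of re-stating the date filter.

class Order < ApplicationRecord
  scope :completed, -> { where(status: 'completed') }
  scope :recent,    -> { where('created_at > ?', 30.days.ago) }
  scope :flagged,   -> { where(flagged: true) }  # hypothetical new business rule

  scope :recent_completed, -> { completed.recent }
  scope :recent_flagged,   -> { flagged.recent }  # composed, not duplicated
end

# Scopes chain lazily into a single query at execution time:
Order.recent_completed.limit(10)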

That level of foresight doesn’t exist in AI-generated code.


The Human Element: Judgment and Intent

Reusability isn’t just an engineering principle; it’s a leadership one.

Every reusable component is a promise to your future self and your team.
You’re saying: “This logic is safe to depend on.”

AI can’t make that promise.
It can’t evaluate trade-offs or organizational conventions.
It doesn’t know when reuse creates value and when it adds friction.

That’s why good engineers are editors, not just producers.
We don’t chase volume; we curate clarity.


My Takeaway

AI is incredible at generating examples.
But examples are not design.

Reusability, real, human-level reusability, comes from understanding what stays constant when everything else changes.
And that’s something no model can infer without human intent behind it.

So yes, AI can write Ruby.
It can even generate elegant-looking methods.
But it still can’t think in Ruby.
It can’t feel the rhythm of the language, or the invisible architecture behind a clean abstraction.

That’s still our job.

And it’s the part that makes engineering worth doing.


Written by Ivan Turkovic, a technologist, Rubyist, and blockchain architect exploring how AI and human craftsmanship intersect in modern software engineering.

The AI Detox Movement: Why Engineers Are Taking Back Their Code

In 2025, AI tools transformed coding but led developers to struggle with debugging and understanding their code. This sparked the concept of “AI detox,” a period where developers intentionally stop using AI to regain coding intuition and problem-solving skills. A structured detox can improve comprehension, debugging, and creativity, fostering a healthier relationship with AI.

The New Reality of Coding in 2025

Over the last year, something remarkable happened in the world of software engineering.

AI coding tools (Cursor, GitHub Copilot, Cody, Devin) became not just sidekicks, but full collaborators. Autocomplete turned into full functions, boilerplate became one-liners, and codebases that once took weeks to scaffold could now appear in minutes.

It felt like magic.

Developers were shipping faster than ever. Teams were hitting deadlines early. Startups were bragging about “AI-assisted velocity.”

But behind that rush of productivity, something else began to emerge: a quiet, growing discomfort.


The Moment the Magic Fades

After months of coding with AI, many developers hit the same wall.
They could ship fast, but they couldn’t debug fast.

When production went down, it became painfully clear: they didn’t truly understand the codebase they were maintaining.

A backend engineer told me bluntly:

“Cursor wrote the service architecture. I just glued things together. When it broke, I realized I had no idea how it even worked.”

AI wasn’t writing bad code; it was writing opaque code.
Readable but not intuitive. Efficient but alien.

This is how the term AI detox started spreading in engineering circles: developers deliberately turning AI off to reconnect with the craft they’d begun to lose touch with.


What Is an AI Detox?

An AI detox is a deliberate break from code-generation tools like Copilot, ChatGPT, or Cursor, taken to rebuild your programming intuition, mental sharpness, and problem-solving confidence.

It doesn’t mean rejecting AI altogether.
It’s about recalibrating your relationship with it.

Just as a fitness enthusiast might cycle off supplements to let their body reset, engineers are cycling off AI to let their brain do the heavy lifting again.


Why AI Detox Matters

The longer you outsource cognitive effort to AI, the more your engineering instincts fade.
Here’s what AI-heavy coders have reported after several months of nonstop use:

  • Reduced understanding of code structure and design choices.
  • Slower debugging, especially in unfamiliar parts of the codebase.
  • Weaker recall of language and framework features.
  • Overreliance on generated snippets that “just work” without deeper understanding.
  • Loss of flow, because coding became about prompting rather than creating.

You might still be productive, but you’re no longer learning.
You’re maintaining an illusion of mastery.


The Benefits of an AI Detox

After even a short AI-free period, developers often notice a profound change in how they think and code:

  • Deeper comprehension: You start to see the architecture again.
  • Better debugging: You can trace logic without guesswork.
  • Sharper recall: Syntax, libraries, and idioms return to muscle memory.
  • Creative problem solving: You find better solutions instead of the first thing AI offers.
  • Reconnection with craftsmanship: You take pride in code that reflects your thought process.

As one engineer put it:

“After a week without Cursor, I remembered how satisfying it is to actually solve something myself.”


How to Plan Your AI Detox (Step-by-Step Guide)

You don’t need to quit cold turkey forever.
A structured plan helps you recoup your skills while keeping your work flowing.

Here’s how to do it effectively:


Step 1: Define Your Motivation

Start by asking:

  • What do I want to regain?
  • Is it confidence? Speed? Understanding?
  • Do I want to rebuild my debugging skills or architectural sense?

Write it down. Clarity gives your detox purpose and prevents you from quitting halfway.


Step 2: Choose Your Detox Duration

Different goals require different lengths:

  • Mini-detox (3 days): A quick reset and self-check.
  • Weekly detox (1 full week): Rebuilding confidence and recall.
  • Extended detox (2–4 weeks): Deep retraining of fundamentals.

If you’re working on a production project, start with a hybrid approach:
AI-free mornings, AI-assisted afternoons.


Step 3: Set Clear Rules

Be explicit about what’s allowed and what’s not.

Example rules:

✅ Allowed:

  • Using AI for documentation lookups
  • Reading AI explanations for existing code
  • Asking conceptual questions (“How does event sourcing work?”)

❌ Not allowed:

  • Code generation (functions, modules, tests, migrations)
  • AI refactors or architecture design
  • Using AI to debug instead of reasoning it out yourself

The stricter the rule set, the greater the benefit.


Step 4: Pick a Suitable Project

Choose something that forces you to think but won’t jeopardize production deadlines.

Good choices:

  • Refactor an internal service manually.
  • Build a small CLI or API from scratch.
  • Rewrite a module in a different language (e.g., Ruby → Rust).
  • Add integration tests by hand.

Bad choices:

  • Complex greenfield features with high delivery pressure.
  • Anything that will make your manager panic if it takes longer.

The goal is to practice thinking, not to grind deadlines.


Step 5: Journal Your Learning

Keep a daily log of what you discover:

  • What took longer than expected?
  • What concepts surprised you?
  • What patterns do you now see more clearly?
  • Which parts of the language felt rusty?

At the end of the detox, you’ll have a personal reflection guide: a snapshot of how your brain reconnected with the craft.


Step 6: Gradually Reintroduce AI (With Boundaries)

After your detox, it’s time to reintroduce AI intentionally.

Here’s how to keep your skills sharp while benefiting from AI assistance:

  • Boilerplate: ✅ Yes (setup, configs, tests).
  • Core logic: ⚠️ Only for brainstorming or reviewing.
  • Debugging: ✅ For hints, but reason manually first.
  • Architecture: ✅ As a sounding board, not a decision-maker.

You’ll quickly find a balance where AI becomes an amplifier, not a crutch.


Example AI-Detox Schedule (4-Week Plan)

Here’s a simple structure to follow:

Week 1 – Awareness

  • Turn off AI for 3 days.
  • Focus on small, isolated tasks.
  • Note moments where you instinctively reach for AI.

Goal: Realize how often you rely on it.


Week 2 – Manual Mastery

  • Full AI-free week.
  • Rebuild a module manually.
  • Write comments before coding.
  • Practice debugging from logs and stack traces.

Goal: Relearn problem-solving depth.


Week 3 – Independent Architecture

  • Design and code a feature without any AI input.
  • Document design decisions manually.
  • Refactor and test it by hand.

Goal: Restore confidence in end-to-end ownership.


Week 4 – Rebalance

  • Reintroduce AI, but only for non-critical parts.
  • Review old AI-generated code and rewrite one section by hand.
  • Evaluate your improvement.

Goal: Reclaim control. Let AI assist, not lead.


Practical Tips to Make It Work

  • Disable AI in your editor: Don’t rely on willpower; remove temptation.
  • Pair program with another human: It recreates the reasoning process that AI shortcuts.
  • Keep a “questions log”: Every time you’re tempted to ask AI something, write it down. Research it manually later.
  • Revisit fundamentals: Review algorithms, frameworks, or patterns you haven’t touched in years.
  • Read real code: Open-source repositories are the best detox material. Real logic, real humans.

The Mindset Behind the Detox

The purpose of an AI detox isn’t to prove you can code without AI.
It’s to remember why you code in the first place.

Good engineering is about understanding, design, trade-offs, and problem-solving.
AI tools are brilliant at generating text, but you are the one making decisions.

The best developers I know use AI with intent. They use it to:

  • Eliminate repetition.
  • Accelerate boilerplate.
  • Explore ideas.

But they write, refactor, and debug the hard parts themselves, because that’s where mastery lives.


The Future Is Balanced

AI isn’t going away. It’s evolving faster than any tool in tech history.
But if you want to stay valuable as a developer, you need to own your code, not just generate it.

The engineers who thrive over the next decade will be those who:

  • Think independently.
  • Understand systems deeply.
  • Use AI strategically, not passively.
  • Keep their fundamentals alive through intentional detox cycles.

AI is a force multiplier, not a replacement for your mind.


So take a week. Turn it off.
Write something from scratch.
Struggle a little. Think a lot.
Reignite the joy of building with your own hands.

When you turn the AI back on, you’ll see it differently: not as your replacement, but as your apprentice.

When 200,000 Lines of AI Code Disappeared and Nothing Broke

A team deleted 200,000 lines of AI-generated code yet maintained app functionality, highlighting the pitfalls of unchecked AI development. AI may accelerate chaos in weak systems, making existing issues worse. Effective engineering culture remains crucial; AI should enhance rather than replace human judgment in creating a quality codebase.

A few weeks ago, someone I know, a smart, capable engineering lead, told me about their team’s strange success story.

They deleted 200,000 lines of AI-generated code.

And their app still worked.

That alone tells you everything you need to know about the quiet cost of unchecked AI-assisted development.

The project had originally been around 100,000 lines, already a decent size for what it did. But over time, it ballooned to more than double that number. Most of the bloat came not from features or performance improvements, but from auto-generated boilerplate, duplicated logic, and abstractions no one really understood anymore.

When they finally audited the system, they realized how much noise had crept in, how much invisible entropy had been introduced under the banner of “productivity.”

They cleaned it up. They deleted code. They refactored by hand. And the product kept running, smoother than before.


The Illusion of Productivity

This is the side of AI coding no one talks about.

Yes, AI can make you faster. But “faster” at what, exactly?

If your processes, architecture, and reviews are already weak, AI will accelerate your chaos. It doesn’t understand your domain. It doesn’t see the trade-offs. It just predicts what “looks right.”

And that’s exactly the problem: AI-generated code looks right.
It compiles. It passes shallow tests. It feels complete.

But under the surface, it’s often redundant, brittle, and opaque: a kind of technical debt that doesn’t announce itself until you try to build on top of it.

I’ve seen teams overwhelmed by maintenance of code they didn’t truly write.
I’ve seen projects bloated with functions that appear useful but contribute nothing.
I’ve even seen leaders puzzled when productivity metrics looked great while actual delivery velocity slowed to a crawl.

The AI didn’t break the system.
It just quietly magnified the team’s existing weaknesses.


AI Is a Force Multiplier, Not a Substitute for Discipline

This story reinforced something I’ve believed for a while:

AI won’t fix your architecture.
It won’t make your team more thoughtful.
It won’t improve communication.
And it definitely won’t tell you when the thing it just generated is complete nonsense.

If your engineering culture is strong (clean codebase, thoughtful design reviews, experienced developers who understand trade-offs), then AI can be a genuine accelerant. It can help prototype ideas, fill in routine boilerplate, or refactor safely with guidance.

But without that foundation, AI becomes an amplifier of dysfunction.
It scales everything: the good, the bad, and the ugly.


The Temptation of the “Autonomous Engineer”

I understand the temptation.
The promise of AI development tools is seductive: faster output, lower costs, instant scaffolding.

But I’ve learned that software isn’t about writing more code; it’s about writing less code that does more work.

The best engineers I’ve worked with are ruthless editors.
They remove complexity.
They delete unnecessary abstractions.
They value clarity over cleverness, and design over automation.

That discipline doesn’t go away just because a machine can now autocomplete functions.

If anything, it becomes more important than ever.


My Takeaway

When that lead told me they’d deleted 200,000 lines of AI-generated code and everything still worked, I didn’t see it as a failure of the technology.

I saw it as a reminder that tools don’t replace engineering principles.

AI is a powerful assistant.
But trust it blindly, and it will quietly erode your system from the inside out.

The real productivity gain isn’t in the speed of generation; it’s in the quality of judgment behind what stays and what gets deleted.

Use AI. Experiment with it.
But never forget: your codebase reflects your discipline, not your tools.

And discipline is still something only humans can provide.


Written by Ivan Turkovic, a technologist, Rubyist, and blockchain architect exploring how AI, code quality, and engineering culture shape the future of software.

Why AI Can’t (Yet) Write Maintainable Software

In the past few years, large language models (LLMs) have burst onto the software development scene like a meteor: bright, exciting, and full of promise. They can write entire applications in seconds, generate boilerplate code with ease, and explain complex algorithms in plain English. It’s hard not to be impressed.

But after spending serious time testing various AI platforms as coding assistants, I’ve reached a clear conclusion:

AI is not yet suitable for generating long-term, maintainable, production-grade software.

It’s fantastic for prototyping, disposable tools, and accelerating development, but when it comes to real-world, evolving, multi-developer systems, it falls short. And the root cause is simple but fundamental: non-determinism.


The Non-Determinism Problem

At the heart of every LLM lies a probabilistic process. When you ask an AI to write or modify code, it doesn’t “recall” what it said before; it predicts the next most likely word or token based on the context it sees. Even when you give it the exact same prompt twice, you often get subtly (or wildly) different answers.

In casual conversation, this doesn’t matter much. But in software engineering, determinism is sacred. A build must produce the same binary every time. Tests must behave consistently. A function’s output must depend solely on its input.

LLMs break this rule by design.
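A toy Ruby sketch makes the contrast visible (an illustration of the principle, not of how LLMs are actually implemented):

# Deterministic: the output depends solely on the input.
def tax_for(total) = (total * 0.22).round(2)

tax_for(100.0)  # => 22.0, every single time

# Probabilistic, LLM-style: the same "prompt" can yield different results.
COMPLETIONS = [
  "adds the field",
  "adds the field and reorders imports",
  "adds the field and renames variables"
].freeze

def llm_like_edit(_prompt) = COMPLETIONS.sample

llm_like_edit("add a new field")  # => any of the three, chosen at random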

When you ask AI to “add a new field to this API,” it might add the field, but it might also rename unrelated variables, adjust indentation styles, reorder imports, or subtly alter unrelated logic. These incidental changes make it almost impossible to track what actually changed and why. In version control, that’s noise. In production code, that’s risk.


The Illusion of Velocity

Using AI for coding can feel like flying, until you realize you’ve lost track of where you’re going.

AI-generated code feels fast. You type a prompt, and it spits out a function that looks plausible. But as any experienced engineer knows, code that looks correct is not the same as code that is correct.

Worse still, AI often gets 90% right: just enough to lull you into trusting it. But that last 10% (the edge cases, performance issues, or security vulnerabilities) can be costly. In long-lived systems, those flaws become ticking time bombs.

So yes, AI saves time, but only if you’re ready to spend that saved time reviewing, refactoring, and making it consistent with your project standards. Otherwise, you’re borrowing technical debt against future maintenance.


“Vibe Coding” vs. Real Engineering

There’s a growing trend I like to call “vibe coding”: relying on AI to produce code that “feels” right without understanding it deeply. It’s seductive, especially for less experienced developers or anyone under time pressure.

But the truth is: software longevity is built on understanding, not vibes.

A healthy codebase is not just functional; it’s coherent, documented, and maintainable. Every class, function, and comment exists for a reason that another human can later understand. AI-generated code often lacks that intentionality. It can mimic style, but it doesn’t comprehend architecture, team conventions, or long-term evolution.

AI doesn’t “see” the whole system; it only sees your current prompt.


Where AI Does Shine

Despite these limitations, I’m not anti-AI. In fact, I use it daily, strategically.

AI is brilliant at:

  • Prototyping ideas: getting something working fast, even if it’s messy.
  • Generating boilerplate: writing repetitive CRUD or setup code.
  • Explaining code: translating complex logic into human-readable summaries.
  • Brainstorming solutions: helping you think through alternative approaches.
  • Writing tests: drafting coverage you can refine manually.

In other words, AI accelerates cognition, not automation. It’s a thinking partner, not a replacement for engineering discipline.


What It Means for the Future

As LLMs improve, we’ll likely see more deterministic, context-aware systems, perhaps ones that can “anchor” to a codebase and learn its structure persistently. But until then, the responsibility for coherence, maintainability, and correctness still lies with us, the humans.

AI might be the apprentice, but we’re still the architects.

My takeaway after months of experimentation is simple:

Use AI to accelerate development, not to abdicate responsibility.

Treat its output like an intern’s draft: useful, fast, and full of potential, but never production-ready without review, cleanup, and integration into your project’s ecosystem.


The Bottom Line

AI coding tools are a revolution, but like every revolution, they require balance and maturity to use effectively. They’re not replacing software engineers; they’re augmenting them.

So go ahead, let the AI write your prototypes, mock APIs, or test scaffolds. But when it comes to the production systems that real users depend on, make sure there’s a human behind the keyboard who understands every line.

Because in the end, the difference between disposable and durable code isn’t who (or what) wrote it; it’s who owns it.