The Email That Started This Post
A friend who is a founder sent me a Telegram message last month. “We’re three engineers. We have an AI feature to ship by Q3. Everyone says Python. Should I just give up on Rails?”
He had a working Rails 8 app. Auth done. Billing done. Three years of customer data. Real revenue. And he was about to throw it out because a YouTube influencer told him “AI means Python.”
I told him what I’m telling you. Don’t.
In 2026, if you are building an AI-powered SaaS product, Rails with RubyLLM is the better default for most teams. Not because Python is bad. Because you are not building a research lab. You are building a product. And the gap between “I have an LLM idea” and “I have paying customers” is mostly plumbing. Rails ships the plumbing. LangChain ships an orchestration layer and leaves the plumbing as your problem.
That’s the whole post. The rest is evidence.
What LangChain Actually Is in 2026
LangChain is an orchestration layer for calling LLMs. As of April 2026 the main langchain Python package sits at version 1.0.4, with the legacy abstractions split off into langchain-classic. The repo has roughly 135,000 stars. PyPI shows around 55 million weekly downloads across the package family, which is real. Mindshare won.
LangGraph is now the centerpiece. LangChain Inc. positions itself as “the agent engineering platform” and tells you to use LangGraph for any non-trivial workflow, with LangSmith for tracing and evals and a hosted “LangGraph Platform” for deployment. LangGraph is at version 1.1.x with 31,000+ stars.
This is three products. Plus a vendor-hosted runtime. Plus a paid observability tool. Plus the actual Python web stack you still have to bring yourself.
Harrison Chase wrote in late 2025 that LangChain “grew too fast” early on, that the original library was “too opinionated,” and that they later “rewrote the original langchain in 2025 to be more streamlined.” Read that sentence twice. The framework you bet on in 2023 got rewritten under you, twice. v0.1, v0.2, v0.3, the deprecation of LLMChain, the migration to LCEL, the reposition around LangGraph, the introduction of langchain-classic. That is not a stable foundation. That is a product company iterating in public.
Practitioners noticed. Octomind’s “Why we no longer use LangChain” post is the canonical one: 12 months of production use, then ripped out because “high-level abstractions soon made our code more difficult to understand and frustrating to maintain.” Max Woolf’s “The Problem with LangChain” post made the same case earlier: simple things become complex, debugging means reading framework internals, the Agent abstraction breaks any time you touch the system prompt. Hamel Husain, who is genuinely warm toward the LangChain team, still recommends LangSmith decoupled from LangChain itself. The most cited counterargument is “they listen to feedback well,” which is a culture compliment, not a technical one.
LangGraph is genuinely better designed than the original chains. It is also still just an orchestration runtime. It does not give you HTTP, auth, jobs, persistence, a UI, or deployment. You still have to build all of that around it.
What Rails 8 Actually Gives You
Rails 8 ships with the Solid trifecta as defaults: Solid Queue for background jobs, Solid Cache for caching, Solid Cable for WebSockets. All three run on your Postgres or MySQL. No Redis required. No separate worker fleet required. Solid Queue alone is running 20+ million jobs a day at HEY in production.
Add the rest of the box: ActiveRecord, ActionCable for streaming, Hotwire and Turbo for live UI without React, Kamal 2 for one-command deploys to any VPS, the new built-in authentication generator, ActiveJob, ActionMailer, ActiveStorage, encrypted credentials, Propshaft for assets, and Thruster as the HTTP/2 proxy.
DHH calls the alternative “merchants of complexity.” His Rails World 2025 keynote was a 60-minute argument that we have collectively forgotten how to ship. Eight-day deploy cycles. Three-hour CI. Forty-seven services to render a CRUD form. He was not subtle. “I’m a CRUD monkey. My career is CRUD monkeying. Reading things from a database, creating records, updating records, occasionally deleting them.” Most AI SaaS apps are CRUD plus a model call. That is the entire job.
The under-discussed point: Ruby is unusually token-efficient for LLMs. users.select(&:active?) packs more semantics per token than the equivalent in almost any other ecosystem. Convention over configuration is a token-compression strategy now. Agents working on Rails code stay coherent across longer contexts than agents working on equivalent Python or TypeScript codebases. I’ve seen this repeatedly on client work.
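A toy illustration of that density, in plain Ruby with no Rails required. The User model here is a stand-in I made up for this example:

```ruby
# Minimal stand-in for an ActiveRecord-ish model (illustrative only).
User = Struct.new(:name, :active) do
  def active? = active
end

users = [
  User.new("ada", true),
  User.new("bob", false),
  User.new("eve", true)
]

# One short, conventional expression: cheap for an agent to emit,
# cheap for a human to review.
active_names = users.select(&:active?).map(&:name)
# active_names == ["ada", "eve"]
```

The same intent in most other ecosystems costs a loop, an accumulator, or an ORM query builder, and every extra token is context an agent has to carry.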
RubyLLM, The Quiet Default
RubyLLM is a single gem by Carmine Paolino. As of late April 2026 it’s at version 1.14.1, around 3,900 stars on GitHub, and over 5 million downloads. Three runtime dependencies: Faraday, Zeitwerk, Marcel. That’s it.
It speaks the same API to OpenAI, Anthropic, Gemini, Bedrock, Azure, OpenRouter, DeepSeek, Ollama, VertexAI, Perplexity, Mistral, xAI, and any OpenAI-compatible endpoint. Tokens normalize across providers. Streaming, tool calling, structured output via RubyLLM::Schema, embeddings, image generation, audio transcription, content moderation, multimodal attachments. One interface for all of it.
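A sketch of what that looks like in practice, based on RubyLLM’s documented top-level API. The model names are illustrative and you’d need real API keys in the environment:

```ruby
require "ruby_llm"

RubyLLM.configure do |config|
  config.openai_api_key    = ENV["OPENAI_API_KEY"]
  config.anthropic_api_key = ENV["ANTHROPIC_API_KEY"]
end

# Same interface, different provider: only the model string changes.
RubyLLM.chat(model: "gpt-4o-mini").ask "Summarize pgvector in one sentence."

RubyLLM.chat(model: "claude-3-5-haiku-20241022").ask("Stream this one.") do |chunk|
  print chunk.content # streaming uses the same block form for every provider
end

# Embeddings come through the same gem.
RubyLLM.embed("Rails ships the plumbing.")
```

No provider-specific client objects, no adapter classes. Swapping vendors is a string change.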
class WeatherAssistant < RubyLLM::Agent
  model "gpt-5-nano"
  instructions "Be concise and always use tools for weather."
  tools Weather
end

WeatherAssistant.new.ask "What's the weather in Berlin?"
That’s an agent. With tools. With instructions. With provider-agnostic streaming. Compare it to the LangGraph equivalent, which imports create_agent and ChatOpenAI, builds a StateGraph, manages a MessagesState, and asks you to think about the runtime. Same outcome. Different cognitive tax.
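For completeness, the Weather tool referenced above can be a plain class following RubyLLM’s Tool DSL. The open-meteo call is illustrative; any HTTP API works here:

```ruby
require "ruby_llm"
require "net/http"
require "json"

class Weather < RubyLLM::Tool
  description "Gets current weather for a set of coordinates."
  param :latitude,  desc: "Latitude, e.g. 52.52"
  param :longitude, desc: "Longitude, e.g. 13.41"

  def execute(latitude:, longitude:)
    url = "https://api.open-meteo.com/v1/forecast?latitude=#{latitude}" \
          "&longitude=#{longitude}&current=temperature_2m"
    JSON.parse(Net::HTTP.get(URI(url)))
  rescue StandardError => e
    { error: e.message }
  end
end
```

The model sees the description and params, decides when to call it, and the return value flows back into the conversation. No schema boilerplate, no decorator registry.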
The Rails integration is the killer. acts_as_chat and acts_as_message give you persisted conversations as ActiveRecord models. rails generate ruby_llm:chat_ui scaffolds a Tailwind chat UI with Turbo streaming, model selection, and tool-call display in under two minutes. New Rails app to working AI chat in 1:46 in Carmine’s own demo. The streaming job is six lines:
class ChatResponseJob < ApplicationJob
  def perform(chat_id, content)
    chat = Chat.find(chat_id)
    chat.ask(content) do |chunk|
      chat.messages.last.broadcast_append_chunk(chunk.content) if chunk.content.present?
    end
  end
end
Solid Queue runs the job. Turbo broadcasts the chunks. ActiveRecord persists the conversation. Pgvector handles embeddings as native ActiveRecord scopes. Done.
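“Embeddings as native ActiveRecord scopes” deserves a concrete sketch. This assumes the neighbor gem on top of pgvector, which is the pairing I usually reach for; the model and column names are made up:

```ruby
# Migration (sketch): a vector column sized to your embedding model.
#   add_column :documents, :embedding, :vector, limit: 1536

class Document < ApplicationRecord
  has_neighbors :embedding # provided by the `neighbor` gem
end

# Embed the query with RubyLLM, then search with a plain ActiveRecord scope.
query = RubyLLM.embed("refund policy for annual plans")
Document.nearest_neighbors(:embedding, query.vectors, distance: "cosine").first(5)
```

No vector database, no sync pipeline. Your embeddings live in the same Postgres as your billing rows, inside the same transactions.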
A founder told Carmine: “Our first pass at the AI Agent used langchain. It was so painful that we built it from scratch in Ruby. Like a cloud had lifted.” That quote shows up repeatedly. It tracks with what I see on engagements.
Other Ruby AI Options, Briefly
Andrei Bondarev’s langchainrb is the original Ruby LangChain port and still maintained, useful if you want LangChain-style abstractions in Ruby. Sublayer takes a generator and agent DSL approach. Raix focuses on object-oriented prompt patterns and now ships MCP integration. instructor-rb mirrors the Python instructor library for structured output. The official Anthropic and OpenAI Ruby gems are fine for direct API access. RubyLLM won the default slot because it has the lowest cognitive overhead, the cleanest Rails integration, the broadest provider support in one interface, and a single maintainer with strong taste shipping weekly. The ecosystem is now large enough to matter: MCP support, OpenTelemetry instrumentation, evaluation frameworks like ruby_llm-tribunal, and prompt templating gems all hang off it.
What You Actually Need Around LangChain
Here is the honest picture of the Python AI stack for a production SaaS in 2026:
LangChain or LangGraph for orchestration. FastAPI for the web layer. SQLAlchemy and Alembic for the database. Pydantic for schemas. Celery and Redis for background jobs. A separate Redis for caching. A separate WebSocket layer (or you bolt on Channels-style infrastructure). gunicorn or uvicorn for serving. Docker plus Kubernetes or a managed PaaS for deploy. A frontend in React or Next.js because Hotwire’s Python equivalent doesn’t exist. Auth via Auth0 or Clerk because there’s no canonical answer. Then LangSmith on top, which is a paid SaaS with usage-based pricing, for tracing.
That is more than a dozen production dependencies before you’ve written a single business rule. Each one has its own version churn, its own maintainer drama, its own deploy story. I’ve shipped this stack. I’ve also paid the bill on it. After 20 years of running engineering teams in fintech, payments, and blockchain, I will tell you plainly: every additional moving part is a 2 AM page waiting to happen.
Rails gives you all of that in the framework. ActiveRecord, ActionCable, Solid Queue, Solid Cache, Hotwire, the auth generator, Kamal. One Gemfile. One deploy command. RubyLLM slots in as one more gem.
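“One Gemfile” is not a figure of speech. A plausible Gemfile for the whole stack described above — versions illustrative, and note that solid_queue, solid_cache, solid_cable, kamal, and thruster already ship as Rails 8 defaults:

```ruby
# Gemfile (sketch)
source "https://rubygems.org"

gem "rails", "~> 8.0"
gem "pg"                # Postgres, which also backs jobs, cache, and cable
gem "ruby_llm"          # LLM calls, tools, streaming, embeddings
gem "neighbor"          # pgvector similarity scopes for ActiveRecord
```

Compare that to the Python paragraph above, item by item, and the argument mostly makes itself.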
The Real Production Comparison
Shopify is the loud datapoint here. Sidekick, the AI assistant embedded in every Shopify admin, is built on Rails. The 2025 Rails World talk by Andrew McNamara and Charlie Lee is titled, literally, “LLM Evaluations & Reinforcement Learning for Shopify Sidekick on Rails.” The same company that runs the largest commerce monolith in the world is also running a production agentic system on the same stack. Not a microservice in Python with an API. A Rails service with LLM calls, tool routing, JIT instructions, and an evaluation framework on top.
GitHub still runs Rails. Basecamp and HEY still run Rails. 37signals serves their entire AI feature surface from Rails. Airbnb’s monolith is still Rails. The pattern is unambiguous: companies with real revenue and real scale ship AI features on top of the framework they already trust. They do not rewrite to Python because they need a model call.
LangChain has its own production roster. Klarna, Replit, Uber, J.P. Morgan, Elastic, Ally Bank are all named on LangGraph’s site. Notice the pattern. These are large enterprises with platform teams who can absorb the operational complexity of running LangGraph plus FastAPI plus the rest. That is not your three-engineer SaaS startup. That is a different shape of problem.
When Python Genuinely Wins
I am not going to pretend otherwise. If you are training or fine-tuning models, Python is the only adult option. PyTorch, transformers, the entire research ecosystem lives there. If you need numerical work, vector math, or ML-adjacent code, Python wins. If your team is full of ML engineers who think in notebooks, do not force them into Ruby.
LangGraph itself is a legitimately good piece of engineering for one specific case: long-running, durable, stateful, multi-agent workflows where you genuinely need checkpointing, human-in-the-loop, and time-travel debugging across complex graphs. If that is your product, use it. LangSmith for traces and evals is a strong product even if you abandon the rest of LangChain. Hamel Husain runs his consulting practice on top of it, and he’s correct to.
There is also a talent argument. The pool of engineers who know LangChain is larger than the pool who know RubyLLM. If you are hiring fast and don’t care about the rest of the stack, that matters.
But notice what these cases have in common. They are all about ML depth or framework familiarity, not about shipping a SaaS faster. The minute the question is “how do we get auth, billing, background jobs, a chat UI and an AI feature into production by Friday,” Rails wins.
Why This Matters More in 2026 Than Ever
Two trends compound. First, AI agents are now writing meaningful chunks of production code. I wrote about this in my piece on the middle loop, where engineering moved upstream into spec, review and architecture. The constraint is no longer typing speed. It is human decision capacity. Frameworks that minimize cognitive overhead per feature shipped now dominate. Rails was already optimized for this. Convention over configuration is exactly what an LLM needs to produce correct code on the first pass.
Second, the AI hype cycle is repeating the same script we ran in the 80s and the 2010s. I covered this in the eternal promise of eliminating programmers. Every cycle promises that complexity is going away. Every cycle, complexity gets relocated. The teams that win are the ones that pick the boring stack and ship. The teams that lose are the ones that build a Kubernetes cluster to host an OpenAI API call.
DHH said in his Pragmatic Engineer interview last year that beautiful code is a signal of correctness. He’s right, and it now matters more than it ever did, because you and your agents are reviewing code at speed. Rails reads like prose. LangChain reads like Spring.
My Recommendation
If you are a product team of one to fifteen engineers building an AI-powered SaaS in 2026, your default should be:
Rails 8 with RubyLLM. Postgres with pgvector. Solid Queue and Solid Cache. Hotwire for the UI. Kamal to a small fleet of VPS instances. LangSmith or OpenTelemetry plus the OpenTelemetry RubyLLM gem if you need traces. That’s it.
If you are doing fine-tuning, custom inference, model hosting, or research, build that part in Python. Talk to it over an API. Keep your product on Rails.
If you are convinced LangGraph’s graph runtime is necessary because your workflow has fifty stateful nodes with branching durability, you have a different product. Use it. But verify that claim hard before you commit. Most of the time, “we need a graph framework” turns out to mean “we have one prompt and a tool call.”
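For the record, “one prompt and a tool call” in RubyLLM is a single chained expression, no graph runtime required. The tool name and lookup logic here are hypothetical:

```ruby
require "ruby_llm"

# Hypothetical tool for illustration; the shape follows RubyLLM's Tool DSL.
class OrderLookup < RubyLLM::Tool
  description "Looks up an order's status by its public ID."
  param :order_id, desc: "The order's public ID, e.g. ABC-123"

  def execute(order_id:)
    Order.find_by(public_id: order_id)&.status || "not found"
  end
end

RubyLLM.chat
       .with_tool(OrderLookup)
       .ask("What's the status of order ABC-123?")
```

If your whole agent fits in that shape, a state graph is ceremony, not architecture.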
The gap I see most often on consulting calls in 2026 is not that teams picked the wrong AI library. It’s that teams picked an AI orchestration framework and forgot to pick a product framework. Rails is the product framework. RubyLLM is the AI orchestration. Together they are the whole package. LangChain is half a stack pretending to be a complete one.
If your competitor is shipping features twice as fast as you with a quarter of the engineers, go look at what they’re using. It’s increasingly Rails.
If this post made you think, you'll probably like the next one. I write about what's actually changing in software engineering, not what LinkedIn wants you to believe. No spam, unsubscribe anytime.