Kamal has been positioned as the answer to a question Rails developers have asked for years: how do you deploy to your own servers without the overhead of Kubernetes or the cost of a managed platform? And in many ways, it delivers. But the gap between what Kamal promises and what it actually requires in a production environment is wider than most tutorials suggest.
I have been using Kamal to deploy Rails applications across multiple projects. Some of those deployments were clean and straightforward. Others turned into multi-day debugging sessions that made me question whether I should have just stayed on the platform I was migrating from. This is an honest account of both sides.
The Promise: Heroku on Your Own Servers
The pitch is compelling. Kamal, built by the team at 37signals, uses Docker containers and SSH to deploy your Rails application to any VPS. It handles zero-downtime deployments, rolling restarts, and SSL certificates through its built-in kamal-proxy. You write a single deploy.yml configuration file, run kamal setup, and your application is live. No Kubernetes manifests. No Terraform files. No $500/month Heroku bills for what amounts to a few Puma processes and a database.
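A minimal deploy.yml looks roughly like this sketch; the service name, registry user, IP address, and hostname below are placeholders:

```yaml
# config/deploy.yml — minimal sketch (all names and addresses are placeholders)
service: myapp
image: myuser/myapp

servers:
  web:
    - 192.0.2.10

proxy:
  ssl: true              # kamal-proxy provisions Let's Encrypt certificates
  host: app.example.com

registry:
  username: myuser
  password:
    - KAMAL_REGISTRY_PASSWORD   # read from the environment, never committed

env:
  secret:
    - RAILS_MASTER_KEY
```

With this file in place, kamal setup installs Docker on the server, builds and pushes the image, and boots the application behind the proxy.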
For a fresh application deployed to a clean VPS, this promise is largely kept. And that is worth acknowledging before diving into the complications.
Where Kamal Genuinely Shines
If you are deploying a new Rails application to a fresh server with no existing infrastructure to work around, Kamal is genuinely excellent. The initial setup is minimal. You configure your server IP, your Docker registry credentials, and your application secrets. You run kamal setup and within a few minutes, your application is running in production with SSL certificates provisioned automatically through Let’s Encrypt.
The zero-downtime deployment model works well. Kamal boots a new container, waits for it to pass health checks, and then switches traffic from the old container to the new one. If the new container fails its health check, the old one keeps running. This is exactly what you want from a deployment tool, and Kamal handles it with very little configuration.
Once a deployment pipeline is set up and stable, the day-to-day experience is smooth. Push your code, run kamal deploy (or let your CI do it), and your changes are live in minutes. The integration with CI tools like GitHub Actions is straightforward. A basic workflow that builds, pushes, and deploys on every merge to main takes about twenty lines of YAML to configure. For teams that want to own their infrastructure without hiring a dedicated DevOps engineer, this is a meaningful improvement over what existed before.
Kamal also handles multiple applications on a single server surprisingly well. Each application gets its own set of containers, and kamal-proxy routes traffic based on hostnames. For side projects or small applications that do not justify their own dedicated server, this multi-tenant approach is practical and cost-effective.
The First Wall: Postgres on the Same Host
Most Rails tutorials for Kamal show you how to set up Postgres as a Kamal “accessory,” a Docker container running on the same host as your application. This works, but it introduces a problem that is rarely discussed in those tutorials: security.
When you define a Postgres accessory in your deploy.yml and expose its port, Docker adds its own iptables rules that bypass your server’s firewall. If you are using UFW (which most Ubuntu servers do), you might assume that running ufw deny 5432 will block external access to your database. It will not. Docker operates on the NAT table of iptables, while UFW operates on the filter table. Traffic destined for your Docker container’s published port never passes through UFW’s rules at all.
This is not a Kamal-specific problem. It is a Docker problem that Kamal inherits. But because Kamal’s documentation encourages you to run Postgres as an accessory on the same host, you are very likely to encounter it. The solution is to bind the Postgres port to localhost only by configuring the port as 127.0.0.1:5432:5432 instead of 5432:5432 in your accessory configuration. But if you do not know to do this, your database is sitting on the public internet with whatever password you chose, and UFW will happily report that port 5432 is blocked while it absolutely is not.
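In practice, the accessory definition looks something like this sketch (names and the data directory are illustrative); the port line is the part that matters:

```yaml
# deploy.yml accessory sketch — the loopback binding keeps Postgres off
# the public internet despite Docker's iptables behavior
accessories:
  db:
    image: postgres:16
    host: 192.0.2.10
    port: "127.0.0.1:5432:5432"   # NOT "5432:5432", which publishes on all interfaces
    env:
      secret:
        - POSTGRES_PASSWORD
    directories:
      - data:/var/lib/postgresql/data
```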
If you want additional protection beyond localhost binding, you need to work with the DOCKER-USER iptables chain directly or use an external firewall provided by your cloud provider. Neither of these approaches is covered in Kamal’s documentation, and both require knowledge that goes well beyond what a typical Rails developer is expected to have.
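For reference, a DOCKER-USER rule looks roughly like this; Docker consults that chain before its own rules, so rules placed there actually apply to published container ports. The interface name and addresses below are assumptions for illustration:

```
# Drop external traffic to the published Postgres port (eth0 assumed public)
iptables -I DOCKER-USER -i eth0 -p tcp --dport 5432 -j DROP
# Optionally allow a specific trusted host through (hypothetical address)
iptables -I DOCKER-USER -i eth0 -p tcp -s 203.0.113.10 --dport 5432 -j ACCEPT
```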
The Health Check Trap: Solid Cache and Rack Attack
Rails 8 ships with the Solid trifecta: Solid Cache, Solid Queue, and Solid Cable. It also encourages the use of Rack Attack for rate limiting and request throttling. Both of these are sensible defaults for production applications. Both of them can break your Kamal deployment in ways that are not immediately obvious.
When Kamal deploys a new container, it hits the /up health check endpoint to verify that the application has booted successfully. If the health check does not return a 200 response within the configured timeout (30 seconds by default), Kamal considers the container unhealthy and rolls back the deployment. The error message you see is: target failed to become healthy within configured timeout.
Here is where it gets interesting. If you are using Rack Attack with strict rate limiting or IP-based throttling, the health check requests coming from kamal-proxy might get blocked. The health check runs frequently during deployment (every few seconds), and depending on your Rack Attack configuration, those rapid successive requests from the same internal IP can trigger your throttling rules. Your application is perfectly healthy, but Rack Attack is doing exactly what you told it to do: blocking what looks like suspicious traffic.
The fix is to add an exception for the health check path in your Rack Attack configuration. Something like excluding requests to /up from your throttling rules. But you only discover this after a failed deployment, and the error message gives you no indication that Rack Attack is the culprit. The logs show a timeout. The container appears to start. The application boots. But the health check never succeeds.
Solid Cache can introduce a similar issue if your cache backend is not yet available during the initial deployment. If your application’s boot sequence tries to connect to a cache store that does not exist yet (because the database has not been migrated, or because the Solid Cache tables have not been created), the /up endpoint can fail or hang. Again, the error message is unhelpful, and the root cause is several layers removed from what Kamal reports.
The general pattern here is that Kamal’s health check system is simple and reliable in isolation, but any middleware, initializer, or framework feature that interferes with the /up endpoint during the critical boot window can cause deployments to fail. You need to audit your entire request pipeline for anything that might affect that single endpoint, and you need to create exceptions for it. This is manageable once you know about it, but discovering it through failed deployments is not a pleasant experience.
The Nginx Question: Do Not Try to Combine Them
This is the most opinionated advice in this post, and I stand by it: do not try to run nginx alongside kamal-proxy, especially if SSL is involved.
The temptation is understandable. Many Rails developers have years of experience with nginx as a reverse proxy. It handles static file serving, request buffering, gzip compression, and SSL termination. You might want nginx in front of your application for features that kamal-proxy does not provide, or because your existing infrastructure already uses nginx for other services on the same server.
The problem is that kamal-proxy and nginx both want to own the same ports (80 and 443) and both want to handle SSL termination. You can configure nginx to listen on the standard ports and forward traffic to kamal-proxy on internal ports, but now you have two reverse proxies in series, each with its own SSL configuration, its own header forwarding behavior, and its own opinions about how health checks should work. The X-Forwarded-For and X-Forwarded-Proto headers get mangled. SSL redirects loop. Health checks that work through kamal-proxy fail when they pass through nginx first. Mismatched Origin headers trip Rails’ forgery protection, which then rejects legitimate POST requests.
I have seen developers spend days trying to make this combination work, including myself. The number of moving parts is simply too high. You are debugging interactions between nginx’s proxy_pass configuration, kamal-proxy’s TLS settings, Rails’ force_ssl behavior, host authorization rules, and Docker’s internal networking, all at the same time. Each component works correctly in isolation. Together, they create a debugging nightmare where fixing one issue introduces another.
If you need nginx features, my recommendation is to choose one approach and commit to it. Either use kamal-proxy for everything and accept its limitations, or disable kamal-proxy entirely and manage nginx yourself outside of Kamal’s lifecycle. Trying to layer them is not worth the complexity.
More Ways It Goes Wrong
Beyond the major issues, there are several smaller problems that collectively add up to significant debugging time during your first few deployments.
The force_ssl redirect loop. Newly generated Rails production configurations enable force_ssl by default. Kamal’s health check hits /up over HTTP internally. The health check gets redirected to HTTPS, which kamal-proxy interprets as a failure. Your application is running, SSL is working for actual users, but deployments fail because the internal health check cannot complete. The fix is to exclude /up from SSL redirects in your production configuration, but this is a non-obvious interaction between two reasonable defaults.
The host authorization block. Rails has shipped host authorization since version 6, rejecting requests from unexpected hostnames. Kamal’s health check may use an IP address or internal Docker hostname that does not match your configured allowed hosts. The fix is another exclusion rule for the /up path, but the error manifests as a generic health check failure with no clear indication of what went wrong.
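Both exclusions can live side by side in production.rb, using the standard Rails options; a sketch, assuming the default /up endpoint:

```ruby
# config/environments/production.rb — sketch
# Exempt the health check from SSL redirects and host authorization so
# kamal-proxy's internal HTTP probes can reach /up.
config.ssl_options = {
  redirect: { exclude: ->(request) { request.path == "/up" } }
}
config.host_authorization = {
  exclude: ->(request) { request.path == "/up" }
}
```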
Architecture mismatches. If you develop on an Apple Silicon Mac and deploy to an AMD64 server (which is most VPS providers), you need to configure cross-platform builds. Kamal supports remote builders and multi-architecture builds, but misconfiguring this results in containers that build successfully but crash immediately on the server. The error output is cryptic, and the root cause is not always obvious.
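If you build locally on an Apple Silicon Mac, a single setting in deploy.yml tells Kamal to target the server’s architecture:

```yaml
# deploy.yml — build an AMD64 image from an ARM development machine
builder:
  arch: amd64
```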
The missing CMD instruction. If your Dockerfile does not include an explicit CMD instruction, the base Ruby image defaults to running IRB. Your container starts, passes the Docker health check (because the container is “running”), but kamal-proxy cannot connect to your application because it is sitting in an interactive Ruby console instead of running Puma. The default Rails Dockerfile includes the correct CMD, but if you have customized your Dockerfile, this is an easy thing to accidentally remove.
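For comparison, the tail of a standard Rails Dockerfile ends with something like this (the exact command varies by Rails version; newer versions wrap the server in additional tooling):

```dockerfile
# Without an explicit CMD, the Ruby base image falls back to IRB and
# the container "runs" without ever serving HTTP.
EXPOSE 3000
CMD ["./bin/rails", "server"]
```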
Credentials during the build phase. If you use environment-specific credentials (staging vs. production), the Docker build phase uses the production environment regardless of what you specify in your deploy configuration. This means staging-specific credentials are not available during asset precompilation, which can cause the build to fail with errors that look completely unrelated to credential management.
Memory pressure on small VPS instances. Building Docker images is memory-intensive. If your VPS has limited RAM (2GB or less), the build process can push the system into swap, making everything slow enough to trigger deployment timeouts. The container is not unhealthy. The server is just too slow to boot the application within the default timeout window. You can increase the timeout, use a remote builder, or add swap space, but diagnosing this as the root cause requires monitoring server resources during deployment.
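Both mitigations are single lines in deploy.yml (the build host below is hypothetical):

```yaml
# deploy.yml — mitigations for slow or memory-constrained hosts
deploy_timeout: 60                    # give containers longer to become ready
builder:
  remote: ssh://build@198.51.100.5    # offload image builds to a beefier machine
```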
Accessory boot ordering. Kamal does not guarantee that accessories (like Postgres or Redis) are fully ready before your application container starts. If your application’s boot process tries to connect to a database that is still initializing, it will fail. You may need to add retry logic to your entrypoint script or configure health checks that account for accessory startup time.
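One way to handle this is a small retry helper in the entrypoint script; a sketch, where the pg_isready usage is an assumption to be swapped for whatever readiness check matches your accessory:

```shell
#!/bin/sh
# docker-entrypoint.sh sketch: retry a readiness command before booting.
# retry <attempts> <command...> runs the command until it succeeds,
# sleeping one second between tries, and fails after <attempts> tries.
retry() {
  attempts=$1; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      echo "gave up after $attempts attempts" >&2
      return 1
    fi
    n=$((n + 1))
    sleep 1
  done
}

# Example usage (hypothetical host; pg_isready ships with postgres-client):
# retry 15 pg_isready -h db-host -p 5432 -q && exec "$@"
```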
Once It Works, It Really Works
Here is the thing that makes all of the above frustrations tolerable: once you have worked through the initial setup issues and your deployment pipeline is stable, Kamal is genuinely pleasant to use.
The daily workflow becomes simple. You push code, your CI pipeline runs tests, and if they pass, it deploys. Zero-downtime. Automatic SSL renewal. Rolling restarts. No platform vendor to negotiate with, no surprise bills, no dependency on someone else’s infrastructure decisions. Your $10/month VPS runs your application just as reliably as the $200/month platform you migrated from.
GitHub Actions integration is particularly smooth. A basic deployment workflow looks roughly like this: check out your code, set up Docker buildx, log into your registry, run kamal deploy. The Kamal team provides a GitHub Action that handles most of the setup. You add your secrets (registry password, Rails master key, server SSH key) to your repository’s GitHub Secrets, reference them in your workflow, and you have a fully automated deployment pipeline in under thirty minutes.
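A workflow along these lines is typical; treat the action versions, secret names, and Ruby version below as placeholders to adapt to your repository:

```yaml
# .github/workflows/deploy.yml — sketch, not a drop-in file
name: Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: "3.3"
          bundler-cache: true
      - uses: webfactory/ssh-agent@v0.9.0
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - run: bundle exec kamal deploy
        env:
          KAMAL_REGISTRY_PASSWORD: ${{ secrets.KAMAL_REGISTRY_PASSWORD }}
          RAILS_MASTER_KEY: ${{ secrets.RAILS_MASTER_KEY }}
```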
For teams that have been deploying with Capistrano, the shift to containerized deployments through Kamal removes an entire class of “works on my machine” problems. Your production environment is defined by your Dockerfile, not by whatever packages happen to be installed on the server. New team members do not need to understand the server’s history to deploy safely.
Who Should Use Kamal
Kamal is a strong choice if you are deploying a new Rails application to a fresh server and you are comfortable with Docker basics. It is also a good fit if you are migrating away from expensive managed platforms and want to retain a similar developer experience at a fraction of the cost. If your infrastructure is simple (one or two servers, a single application per server, managed database), the setup process is close to what the documentation promises.
It is a harder sell if you have existing infrastructure that includes nginx, custom firewall rules, or services that were not designed to coexist with Docker’s networking model. The more your server already does, the more edge cases you will encounter during the initial setup. This is not because Kamal is poorly designed. It is because deploying containerized applications to shared infrastructure is inherently more complex than deploying to a clean slate.
If you are considering Kamal, my advice is to start with a clean server and a simple application. Get the deployment pipeline working end to end before adding complexity. Add your database accessory and verify the firewall situation. Configure your health check exclusions before enabling Rack Attack or force_ssl. Layer in complexity one piece at a time, and test each addition with a full deployment cycle. The problems I described in this post are all solvable. They are just much easier to solve when you encounter them one at a time instead of all at once.
The Bigger Picture
Kamal represents a meaningful step forward for Rails deployment. It brings the convenience of platform-as-a-service to self-hosted infrastructure without requiring deep DevOps expertise. The rough edges I have described are real, but they are the kind of problems that get solved through better documentation, community knowledge sharing, and incremental improvements to the tool itself.
The Rails community has always valued convention over configuration. Kamal extends that philosophy to deployment, and for the most common case (a new application on a clean server), it succeeds. The challenge is that production infrastructure is rarely the most common case. Real servers have history, existing services, security requirements, and operational constraints that no deployment tool can fully anticipate.
What Kamal does well is give you a foundation. What it requires is the understanding to adapt that foundation to your specific situation. And that understanding, as with most things in software engineering, comes from deploying, breaking things, debugging, and learning the hard way why certain defaults exist.
If you found this useful, I write regularly about software engineering, architecture, and the practical realities of building production systems at ivanturkovic.com. You can follow me on LinkedIn for shorter takes and updates, or reach out directly if you want to discuss deployment strategies, Rails architecture, or anything covered in this post. I would love to hear about your own experience with Kamal, especially the edge cases and workarounds that are not covered in the official documentation.