A new kind of logic is spreading through developer communities, startup circles, and engineering Slack channels. It goes something like this: why pay a monthly subscription for software someone else built when you now have access to AI coding assistants powerful enough to help you build it yourself? The reasoning sounds compelling on the surface. The economics look attractive in a spreadsheet that has not been filled in all the way. And the energy behind it is real, driven by genuine breakthroughs in what AI tools can do.
But there is a significant gap between what AI coding tools make possible and what they make wise. And in my experience building and operating real systems across fintech, blockchain, and high-traffic platforms over the past two decades, I have seen this exact pattern before, dressed in different clothing.
This post is not a defense of every SaaS product that has ever existed. Many deserve to be replaced. Some are overpriced, underbuilt, and sustained by aggressive pricing locked behind annual contracts. But the current wave of enthusiasm for building everything yourself, powered by AI assistants and a three-weekend sprint, deserves serious scrutiny before engineering teams and founders stake their infrastructure on it.
How a Reasonable Idea Becomes an Expensive Distraction
The sequence tends to follow a recognizable pattern. It starts with a legitimate grievance. A SaaS tool your team has been using raises its prices. Or gates a feature you need behind a higher tier. Or the product has simply not improved in two years while your requirements have grown. The frustration is valid. You open a conversation about alternatives, and someone on the team points out that with AI coding assistance, building a replacement looks genuinely feasible.
The conversation gets exciting. Engineers are curious people. The idea of owning your own stack, having exactly the features you need, and eliminating a vendor relationship has real appeal. You sketch out what the custom version would look like. It seems manageable. Someone volunteers to spend a weekend on a proof of concept.
The proof of concept works, or at least it works well enough to generate momentum. The happy path is solid. The AI assistant produced clean-looking code faster than anyone expected. Leadership approves continuing the effort. The team is energized. Three weekends in, you have something that feels like a real product.
Then reality begins its slow arrival.
The edge cases surface. The timezone handling is wrong for users in certain regions. The export functionality breaks for datasets above a certain size. The permission system that looked complete in the prototype turns out to need five more states than anyone anticipated. Each of these is fixable. Each fix takes time. The engineer who built the prototype is now the de facto owner of an internal tool, and that ownership is expanding to consume more of their week than the original estimate suggested it would.
Meanwhile, the core product features that were on the roadmap before this effort began are still waiting. The customers who were expecting them are still waiting. The competitive pressure that made those features urgent has not paused while your team debugged an internal invoicing edge case at 2am.
This is the trap. Not the technical challenge of building the tool, though that is harder than it looks. The trap is the attention and focus it consumes from the work that actually moves your business forward.
The Incomplete Economics of the “Cancel SaaS” Argument
The financial case for building your own tooling almost always looks better at the start of the conversation than it does twelve months later. This is not because people are being dishonest. It is because the costs that matter most are the ones that do not appear until after the decision has been made.
The initial calculation is straightforward: what you pay for SaaS subscriptions versus what it would cost to build a replacement. That comparison favors building, at least on the surface. AI coding tools are expensive, but many teams already have them. Cloud infrastructure has real costs, but teams that are already running production systems have existing infrastructure they assume they can leverage. The SaaS subscription fees stand out as the obvious line item to eliminate.
What does not appear in that initial calculation: the engineering hours required to build the first working version, including the iteration after the prototype reveals what was missed. The hours spent on security review before you can responsibly put real business data through a custom-built system. The ongoing hours for dependency updates, monitoring, incident response, and the feature requests that accumulate once internal users have something that mostly works and start asking for the ten percent that does not.
The most significant cost that almost never makes it into the initial calculation is the one that should weigh most heavily on any engineering leader: the cost of the product work that does not happen. Senior engineers are the scarcest resource in most technology companies. Every hour a senior engineer spends maintaining an internal tool is an hour not spent on the capability that differentiates your product in the market. That is not a zero cost. In many cases, it is the highest cost in the entire analysis, and it never appears on the spreadsheet where the SaaS subscription fees are listed.
I have seen this calculation play out in multiple organizations. A team replaces three SaaS tools over the course of six months. The annual subscription savings are real: perhaps sixty thousand dollars. What does not appear in the accounting is that the engineers who built and now maintain those tools contributed meaningfully less to the core product during that period, and the product fell further behind competitors who kept their engineering attention focused on what customers actually paid them to build. The true cost of that gap is not sixty thousand dollars. It is almost always larger.
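To make the shape of that gap concrete, it helps to put rough numbers on it. The figures below are hypothetical placeholders, a back-of-the-envelope sketch rather than data from any particular team, but the arithmetic is exactly the part the initial spreadsheet skips:

```python
# Back-of-the-envelope comparison, with hypothetical numbers: the
# subscription line item versus the engineering capacity the
# replacement quietly consumes.
saas_savings_per_year = 60_000            # three cancelled subscriptions

engineer_loaded_cost = 220_000            # fully loaded senior engineer, per year
working_hours_per_year = 1_800
hourly_cost = engineer_loaded_cost / working_hours_per_year   # ~$122/hour

build_hours = 700                         # prototype plus the iteration to production
maintenance_hours_per_year = 300          # patches, incidents, feature requests

year_one_engineering = (build_hours + maintenance_hours_per_year) * hourly_cost
print(f"${year_one_engineering:,.0f}")    # ~$122,222, already exceeding the savings
```

And that figure still excludes the largest line of all: the value of the product work those thousand hours displaced, which is the number that never makes it onto the spreadsheet.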
What a SaaS Subscription Actually Pays For
There is a persistent misunderstanding in the “build it yourself” conversation about what you are actually buying when you pay for a mature SaaS product. The assumption is that you are paying for software, and that the software is the primary thing of value. This framing makes the comparison to building your own seem straightforward: if the software can be replicated, the cost can be eliminated.
But in most cases, the software is not primarily what you are paying for. You are paying for a running service, operated continuously by a team whose entire professional focus is making that one thing work reliably for people whose businesses depend on it.
Consider what is included in a subscription to a mature payment processing platform. The software that accepts and routes transactions is only the beginning. Behind it is PCI compliance infrastructure that required years and significant capital to achieve and certify. Fraud detection models trained on hundreds of millions of real transactions. Customer support for your end users when something fails. Uptime SLAs backed by on-call engineering teams who are responsible for incidents, not you. Relationships with card networks that took years to establish and are not available to a team that built its own payment system last month.
Consider an email delivery service. The software that sends emails is genuinely not the valuable part. The valuable part is IP reputation built over years of careful management, deliverability relationships with major inbox providers that you cannot replicate by writing code, bounce and complaint handling infrastructure, and compliance tooling for a regulatory landscape that changes continuously. A team that decides to build its own email sending infrastructure typically discovers the real cost of that decision when their first campaign lands in spam for seventy percent of recipients and they have no obvious path to fix it.
This does not mean every SaaS product is this valuable. Some are thin wrappers around open-source libraries with pricing that reflects sales team ambition rather than genuine product depth. Those deserve to be scrutinized, and in some cases replaced. But the evaluation should be honest about what you are taking on when you decide to own the problem yourself, including which of these categories the specific tool you want to replace actually belongs to.
The Security Responsibility Nobody Volunteers For
AI coding tools generate plausible-looking, functional code very quickly. They have been trained on enormous amounts of existing code, which means they have also absorbed the security vulnerabilities present in that code. Research into AI-generated code has found vulnerability patterns appearing at rates that are not meaningfully lower than in human-written code, and sometimes higher, depending on the domain and the specific patterns involved.
Common vulnerability categories appear regularly in AI-generated code: SQL injection, broken access control, insecure direct object references, improper input validation, hardcoded secrets. These are not exotic vulnerabilities requiring sophisticated attacks to exploit. They are the ones that appear in breach reports year after year because they are straightforward to find and exploit once they exist in a production system.
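To make one of these concrete, here is the shape of a SQL injection flaw that shows up routinely in quickly generated code. The schema and function names are hypothetical, a minimal sketch rather than output from any particular assistant:

```python
import sqlite3

# The vulnerable pattern: user input is interpolated directly into the
# query string, so the database cannot tell data apart from SQL.
def find_user_vulnerable(conn: sqlite3.Connection, email: str):
    query = f"SELECT id, email, role FROM users WHERE email = '{email}'"
    # email = "' OR '1'='1" returns every row in the table
    return conn.execute(query).fetchall()

# The safe pattern: a parameterized query keeps data and SQL separate,
# so the driver never interprets input as executable code.
def find_user_safe(conn: sqlite3.Connection, email: str):
    query = "SELECT id, email, role FROM users WHERE email = ?"
    return conn.execute(query, (email,)).fetchall()
```

The dangerous part is that the vulnerable version passes every happy-path test. It works perfectly until someone sends it input designed to make it work differently, which is exactly the kind of verification a three-weekend sprint does not perform.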
The more fundamental issue is not whether AI-generated code contains these vulnerabilities. It is what happens when they are discovered. A mature SaaS product has a security function, a vulnerability disclosure process, the ability to patch all customers simultaneously, and some history of having been reviewed by people whose job is to find problems before attackers do. When a vulnerability is found in a mature SaaS product, the vendor patches it and notifies customers.
When a vulnerability is found in your custom-built internal tool, you are the vendor. You patch it, if you catch it before someone else does. You notify affected users, if notification is required. You are responsible for understanding the scope of exposure, preserving the right forensic evidence, and managing any regulatory or legal obligations that follow. This is not hypothetical overhead. It is a realistic scenario for any system that handles real business data, and it arrives without warning.
I have spent enough time in fintech to have seen how security incidents in custom-built systems unfold. The pattern is consistent: the system was built with genuine care, the engineers were competent, and the vulnerability was something that did not seem important at the time it was introduced. The cost of managing the incident, in engineering hours, legal fees, customer trust, and regulatory attention, invariably exceeds the cost of the SaaS subscription that the custom system was built to replace. Sometimes by a considerable multiple.
The Invisible Organizational Cost: Knowledge That Lives in One Person
Custom software creates organizational dependencies that are often not visible until they become problems. When a team builds an internal tool, the knowledge of how that tool works, why certain decisions were made, what edge cases were handled and which were deliberately deferred, lives primarily in the people who built it. That knowledge is not fully captured in the code, even when the code is well-commented. It is not fully captured in documentation, even when the documentation is unusually thorough. It lives in the intuition and memory of the engineers who sweated through the hard decisions.
This creates a fragility that grows quietly over time. The engineer who built the system moves to a different team or a different company. The person who inherits the system has the code and the documentation, but not the reasoning. When something breaks in a non-obvious way, debugging it requires reconstructing context that the original engineer carried in their head. That reconstruction takes time, often at the worst possible moment.
AI-assisted development makes this specific problem somewhat more acute. Code produced through rapid AI-assisted sprints tends to be code that the team understands at the level of “it works and the tests pass.” The deeper understanding, the architectural reasoning, the deliberate choices about what not to support and why, these are less thoroughly embedded in teams that built quickly rather than slowly and carefully. When the rapid sprint produces the first version and then the engineer moves on, the organizational knowledge gap is larger than it would have been for the same system built through a more deliberate process.
The SaaS tools being replaced have accumulated organizational knowledge of their own, embedded in documentation, support resources, community knowledge, and the product decisions of teams that have been thinking about nothing else for years. When you cancel the subscription, that accumulated knowledge becomes inaccessible. You are trading a known knowledge base for a system whose knowledge base exists primarily in the heads of the people who built it last month.
The Core Product Problem: What Is Actually Getting Built?
Here is the question that engineering leaders are not asking directly enough in these conversations: if your team spends the next six months replacing your operational tooling with custom-built alternatives, what does not get built?
This question has a concrete answer in every organization, and the answer is almost always more important than the question of whether you can save on SaaS subscriptions. There are features your customers have been requesting. There are architectural improvements that would meaningfully improve system reliability. There are integrations that would open new revenue channels. There are performance problems that are degrading user experience in ways that your competitors do not share. All of these represent genuine business value that engineering effort can create.
When engineers spend their capacity building and maintaining internal tooling instead, that business value does not get created. The customers waiting for the feature keep waiting. The architectural debt continues accumulating. The competitors who kept their engineering teams focused on their core products continue pulling ahead.
The software that runs your business operations is, for most companies, not the software that differentiates your business in the market. Your invoicing system, your internal support tool, your project management workflow, your employee directory: these are important, and they need to work reliably, but they are not why your customers chose you over the alternatives. Treating them as worthy of significant engineering investment, especially at the expense of the product that actually creates value for customers, is a strategic confusion about what your engineering team is actually for.
The irony is that the AI coding tools driving this enthusiasm could be directed at the core product instead. The same capability that allows a team to build a custom project management tool in three weekends could be used to accelerate the development of the features that customers are actually paying for. The productivity gain is real in both directions, but only one direction creates competitive advantage.
I watched a fintech team go through exactly this choice over the course of a year. They had a genuine frustration with the pricing of several tools they used for internal operations, and they had genuine enthusiasm about what their engineers could build with AI assistance. They spent eight months building custom replacements for five SaaS tools. The replacements worked. The subscription costs were eliminated. And when the year-end product review happened, they had shipped fewer customer-facing features than in any previous year, and a competitor had filled a gap in the market that they had intended to address.
The custom tools were good. The product fell behind. The trade-off was never made explicit, but it was made.
Why This Moment Feels Different From Previous Cycles
The argument that engineers should build rather than buy their operational tooling is not new. It has appeared in different forms at each major inflection point in developer capability. Each time, the argument has been partially right about the new capabilities and partially wrong about what remains hard regardless of the tools available.
The CASE tools of the 1980s and 1990s promised that visual modeling would generate production software, eliminating the need for much of what external vendors sold. The capability was real in narrow domains and failed in broader ones because the complexity of production software is not primarily in the writing. It is in the maintenance, the edge cases, the operational reliability, and the accumulated decisions made over years of real-world use.
Offshore outsourcing in the 2000s promised that cheap labor markets would make it cost-effective to build everything custom. The capability was real for certain categories of well-specified work and failed for others because specification quality turned out to be the constraint, and specification quality requires the same engineering judgment that you were supposedly eliminating the cost of.
No-code and low-code platforms in the 2010s promised that visual development would make it feasible for non-engineers to build what previously required engineering teams. The capability was real for certain internal tools and simple workflows. The category found its genuine scope and settled into it. Enterprise software requiring real complexity still required real engineers.
AI coding assistants are more powerful than any of these predecessors. The gap between what they can produce and what CASE tools generated is not marginal. It is substantial. But the fundamental shape of the problem they are being asked to solve has not changed. Production software is not primarily a writing problem. Writing the initial version is the cheapest part. Operating it, securing it, maintaining it, extending it, and doing all of that without consuming the engineering attention that should go toward your actual product: these are still where the real cost lives.
What is genuinely different this time is the emotional momentum. The enthusiasm for AI-assisted development is real and justified by real capability. That enthusiasm creates a pull toward using the capability, even in contexts where using it displaces work that matters more. Recognizing that pull, and making deliberate choices about where to direct it, is the engineering leadership challenge of this particular moment.
When Building Your Own Tooling Is Actually the Right Call
I do not want to argue that building custom internal software is never the right decision. There are clear circumstances where it is not just acceptable but necessary, and where the strategic logic is sound even after accounting for all the costs.
When the internal tool is adjacent to your core product competency, building it may make sense. If you are a data infrastructure company and you need a custom monitoring tool, building it exercises and develops capabilities that are directly relevant to your product work. The maintenance cost is not pure overhead; it is operational product development that feeds back into what you know and build for customers.
When no adequate external option exists, building is necessity rather than preference. Some industries and use cases are genuinely underserved by the existing SaaS market. I encountered this repeatedly in blockchain and digital finance, where the tooling assumptions of mainstream SaaS products were built for traditional financial infrastructure that did not map to decentralized systems. In those cases, custom development was not a cost-saving exercise; it was the only viable path.
When data sovereignty or regulatory requirements genuinely preclude external services, you build and own the system regardless of the cost, because the alternative is not being able to operate in your regulatory environment at all.
When a specific SaaS product has become genuinely extractive, providing minimal value while extracting significant cost, the replacement analysis may still justify building. But it needs to be a real analysis, accounting for total cost of ownership including ongoing maintenance, security, and opportunity cost, not just a comparison of subscription fees to prototype build time.
In all of these cases, AI coding tools genuinely help. They accelerate the initial build, reduce the cost of iteration, and make it feasible for smaller teams to own and maintain systems that previously required larger ones. The productivity improvement is real and should be applied where it matters. The question is whether it is being applied to work that creates business value or work that displaces the creation of business value.
The Market Signal Worth Reading Correctly
There is a genuine market signal in the current enthusiasm for building over buying, and it is worth reading correctly rather than dismissing. The signal is not that SaaS as a model is dying. It is that the competitive pressure on SaaS products has meaningfully increased, and the products that were coasting on the difficulty of building alternatives are about to face real consequences.
For the past decade, many SaaS products operated in markets where the switching cost was not just the subscription fee but the engineering effort required to build an alternative. That effort was high enough to make many products defensible even when they were not particularly good. AI coding tools have lowered that effort, which means that defensibility based primarily on build difficulty is eroding.
The SaaS products that survive this shift will be the ones that were genuinely valuable rather than merely difficult to replace. Products with deep domain expertise embedded in years of product decisions. Products with compliance infrastructure that is genuinely costly to reproduce. Products with network effects that make the shared platform more valuable than any individual instance. Products with operational reliability that comes from teams whose entire focus is a single problem.
Some SaaS products will face real pressure and deserve to. The ones that have not invested in genuine product depth, that have relied on switching costs rather than value creation, will find the current environment clarifying. That pressure will improve the market for everyone, producing better products at more defensible price points.
But this is a market correction that plays out at the level of product strategy, not a signal that every engineering team should immediately redirect their capacity toward building internal tooling. Understanding the difference between those two things is part of what engineering leadership is for.
A Framework for Making the Decision Without Fooling Yourself
If you are evaluating whether to build a custom alternative to a SaaS tool your team currently uses, here is the framework I would apply. It is based on two decades of making these decisions and living with their consequences.
Start by being honest about the strategic category. Is the tool you want to replace adjacent to your core product competency, or is it genuinely commodity operational software? If it is commodity, the bar for building it yourself should be high, because the ongoing maintenance cost will fall on engineers whose time has a higher-value alternative use.
Calculate the full build cost, not the prototype cost. The prototype gets you to a working happy path. The full build gets you to a system you can put real business data through, with real error handling, real security review, real monitoring, and real coverage of the edge cases that will surface in production. That typically costs three to five times what the prototype took. Budget that honestly before comparing to the subscription fee.
Calculate the ongoing maintenance cost with a realistic ownership model. Who maintains this system? What is their fully-loaded hourly cost? How many hours per month will ongoing maintenance require, realistically? Include dependency updates, security patching, incident response, and the feature requests that will come once internal users have something that mostly works. This number is almost always larger than teams estimate at the start.
Account for the opportunity cost explicitly. This is the step that most teams skip and should not. Write down, specifically, what product work will not happen if the engineers who would build and maintain this tool spend that capacity on internal tooling instead. If you cannot write down a specific answer, you have not thought about it carefully enough. If the specific answer is “features that customers have been requesting for two quarters,” that is a concrete cost that should appear in your analysis.
Account for the security and compliance responsibility. Does the tool handle sensitive data? What is your plan when a vulnerability is discovered? What regulatory obligations apply? Who owns those responsibilities on an ongoing basis? These are not hypothetical questions; they are operational realities that begin the day the custom system goes into production.
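If it helps to force the honesty, the quantitative half of this framework fits in a few lines. Every input below is a placeholder you must estimate for your own situation, not a recommendation, and the security and compliance step remains a qualitative gate that no formula captures:

```python
# A minimal build-vs-buy sketch of the steps above. All inputs are
# hypothetical placeholders; the exercise is filling them in honestly.
def build_vs_buy(
    saas_cost_per_year: float,
    prototype_hours: float,
    hourly_cost: float,                 # fully loaded, not salary alone
    maintenance_hours_per_month: float,
    opportunity_cost_per_year: float,   # value of the displaced product work
    years: int = 3,
) -> float:
    build_hours = prototype_hours * 4   # three to five times the prototype; four as midpoint
    build_cost = build_hours * hourly_cost
    maintenance = maintenance_hours_per_month * 12 * hourly_cost * years
    total_build = build_cost + maintenance + opportunity_cost_per_year * years
    total_buy = saas_cost_per_year * years
    return total_buy - total_build      # positive means building saves money

savings = build_vs_buy(
    saas_cost_per_year=20_000,
    prototype_hours=80,
    hourly_cost=120,
    maintenance_hours_per_month=25,
    opportunity_cost_per_year=50_000,
)
print(f"{savings:,.0f}")                # -236,400: buying wins once all inputs count
```

The specific result matters less than which inputs had to exist for the function to run at all. The initial spreadsheet typically contains only the first one.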
If the analysis still favors building after all of those inputs have been considered honestly, then build with confidence and build well. Use AI tools to accelerate the work. Use them to write tests, to explore implementation options, to generate the boilerplate that would otherwise slow you down. But treat what you are building with the seriousness it deserves, because you are committing to own it for a long time.
The Judgment That AI Cannot Replace
The build-versus-buy conversation is ultimately a product strategy question, not a technical question. It is a question about where to direct the scarcest resource your organization has, which is the focused attention of engineers who understand your domain well enough to build good software in it.
AI coding tools change some of the economics of that question by reducing the cost of writing the initial version of a system. They do not change the cost of operating, securing, or maintaining it, nor the opportunity cost of the engineering attention it consumes. They do not change the fact that the time your engineers spend on commodity operational tooling is time they are not spending on the work that actually creates competitive advantage.
The engineering leaders who will make the best decisions in this environment are not the ones who are most enthusiastic about what AI tools can produce. They are the ones who remain disciplined about where that production capability should be directed. They ask not just “can we build this?” but “should we build this, and what does not get built if we do?”
That judgment has always been the constraint. The tools have changed. The judgment required to use them well has not.
Build what differentiates you. Buy what does not. And be honest about which is which before the sprint begins.
If this analysis connects with how you think about engineering strategy, I write regularly on these topics at ivanturkovic.com. You can follow along on LinkedIn or reach me through the contact page if you want to discuss a specific situation your team is navigating. I am genuinely curious how different organizations are thinking through these trade-offs right now, particularly in fintech and product-led companies where engineering focus is a direct input to business outcomes. Share your experience and perspective in the comments below.