I don’t hand you a PDF and disappear. I work inside your repository, asynchronously, on the same GitHub your team is already using. No slide decks. No quarterly reviews. Just a senior technical partner who reads every important PR and tells you, plainly, where the risk is.
My approach is shaped by 20+ years of shipping production software in fintech, payments, and high-traffic platforms, combined with a deep working thesis on how AI changes what engineering actually is. I’ve written extensively about both. You can read the argument on the blog. This page is about how we’d work together.
“AI amplifies whatever direction your codebase is already heading. The first 1,000 lines determine the next 100,000.”
Ivan Turkovic, ivanturkovic.com
The approach
Production scars meet AI-era engineering judgment
Two decades in fintech and payments taught me one thing that no framework or AI tool will ever replace: a sense for where a codebase will break under real load, real regulation, and real money. You can’t learn that from a book. You learn it the hard way, the first time an on-call pager wakes you at 3 AM because a background job silently double-charged customers.
The AI era didn’t retire that experience. It made it rarer. AI writes code that looks senior. Reviewing it, steering it, and refusing it when it’s wrong is now the central skill of the job. That’s the lens I bring to every engagement.
01. Read the whole before the parts
Every review starts with the domain model, not the controller. If the model is wrong, no amount of clever abstraction above it will save you. I read your codebase the way I read a production incident: whole system first, then the components, then the glue between them.
02. Entropy is the real enemy
Every codebase drifts. Duplication creeps in. Abstractions stack up. Tests get silently disabled. The question isn’t “what’s broken today.” The question is “where is entropy winning, and what’s the compounding cost of ignoring it for another quarter.”
03. Rails conventions are a safety net. AI doesn’t know them.
Rails gives you two decades of hard-won defaults. AI tools regularly ignore them and produce code that works, looks reasonable, and violates the framework’s grain in ways that only bite you six months later. I know where those traps live because I’ve walked into most of them at least once.
04. Strategy over prescription
Not every refactor needs to happen this sprint. Sometimes the right call is “accept this complexity for six more months, and here’s how we’ll reverse it when the product case is clearer.” A good technical partner tells you which hills to die on and which ones to let go of.
05. Specs before code
Clarity of specification has replaced implementation speed as the primary constraint on engineering output. If your team is figuring out what to build while building it, AI will accelerate you into a wall. I work with you to tighten the spec first. Code comes second.
The process
Four phases, from first read to steady state
01. Assessment
I read your codebase cold. First the structure, then the domain model, then the architecture choices that hold everything up. I look for the usual orientation marks: tests, conventions, naming, documentation. In AI-assisted codebases, most of those marks are missing or misleading. That’s fine. I know what to look for when they’re gone.
02. Diagnosis
Where is entropy winning? What’s the quality of the domain model versus what’s been layered on top? Where has AI generated plausible-looking patterns that won’t survive a real traffic spike? I separate configuration issues from architectural ones, and framework built-ins from reinvented wheels.
03. Strategy
Sometimes the answer is “migrate.” Sometimes it’s “refactor this one module and leave the rest alone.” Sometimes it’s “this is actually fine, keep shipping.” I’ll be honest even if the honest answer means less work for me. The goal is to make your team more self-sufficient, not more dependent on me.
04. Ongoing
For retainer clients: async PR reviews through GitHub, architecture guidance on every important decision, and optional sync deep-work blocks when the async loop feels too slow. Your CTO is watching. Every week. Not once a quarter.
What I actually catch
AI writes code that looks senior. It usually isn’t.
After reviewing dozens of Rails codebases, including heavily AI-assisted ones, I know where the patterns break. Most of what I find isn’t a bug. It’s a structural decision that compounds silently until something fails under real load, real auditors, or a real growth spike.
Ruby stays one of the best languages for AI-assisted development. But conventions only protect you if someone on the team knows what they are.
From dozens of reviews, the recurring themes:
- Rails convention violations that AI doesn’t know are violations
- Configuration issues mis-diagnosed by AI as requiring infrastructure changes
- Architecture decisions that create predictable scaling problems 12 months out
- Junior-developer-level mistakes produced by AI output that reads senior
- Reinvented wheels: code that duplicates what the framework already ships
- Background jobs with no idempotency, no retry strategy, no observability
- N+1 queries hidden under layers of service objects
- Auth hardening gaps on endpoints that AI cheerfully generated
- The gap between “all tests pass” and “this will hold in production”
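To make the N+1 item concrete, here’s a minimal sketch in plain Ruby: a hypothetical in-memory repo that counts queries, standing in for ActiveRecord so the pattern is visible without a database. All names are illustrative. In a real Rails app the fix is usually `includes(:author)` on the relation.

```ruby
# Hypothetical query-counting repo. One call = one "query".
class Repo
  attr_reader :query_count

  def initialize(authors)
    @authors = authors # { id => name }
    @query_count = 0
  end

  def find_author(id) # one query per call — the N+1 shape
    @query_count += 1
    @authors[id]
  end

  def authors_for(ids) # one batched query — the preload shape
    @query_count += 1
    @authors.slice(*ids)
  end
end

posts   = [{ id: 1, author_id: 10 }, { id: 2, author_id: 11 }, { id: 3, author_id: 10 }]
authors = { 10 => "Ada", 11 => "Grace" }

# N+1: a service object loops and looks up per record.
repo = Repo.new(authors)
posts.each { |p| repo.find_author(p[:author_id]) }
repo.query_count # => 3 (N queries for N posts)

# Preloaded: one batched query, then hash lookups.
repo = Repo.new(authors)
by_id = repo.authors_for(posts.map { |p| p[:author_id] }.uniq)
posts.each { |p| by_id[p[:author_id]] }
repo.query_count # => 1
```

Service objects don’t cause the N+1; they hide it, because the per-record lookup lives two calls away from the loop that triggers it.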
The difference
This isn’t typical consulting
| Typical consultant | How I work |
|---|---|
| Checklist-driven audit | Perception-driven: multiple lenses on the same code |
| “Here’s what’s wrong” | “Here’s where entropy is winning and the strategic moves to reverse it” |
| Feature-by-feature review | Foundation first: domain model, then architecture, then layers |
| Prescriptive: “do this” | Strategic: “accept this complexity now for this advantage later” |
| No AI perspective | Knows how AI generates code and where it systematically fails |
| Business unaware | 20+ years as a CTO; decisions framed in revenue, risk, and runway |
The async loop
You ship. I review. Nothing reaches production unchecked.
This is how retainer clients work with me day to day. Everything runs through GitHub. No calendars. No meetings unless you want them.
- You spec the desired behavior in a GitHub issue
- We converge on the spec, then move it to “ready”
- Your developer or AI agent creates a branch, writes a system test, opens a PR linked to the issue
- I review the PR: architecture, conventions, security, edge cases, test quality
- We iterate until the test (and all tests) pass with real coverage
- We decide together when to merge. Only then is the work finished.
Need to move faster on a specific decision? Optional 3-hour sync deep-work blocks collapse the loop when async isn’t enough for an architectural shift.
Pricing
Three ways to work together
Most engagements start with a review. If we’re a good fit, the retainer is how I stay in the room. The one-off consulting slot is there when you need a second opinion and nothing more.
One-time: Codebase Review
From €2,000
A deep audit of your codebase: architecture, security, performance, and the AI-assisted patterns that will cost you later. You get a prioritized report. What to fix now. What can wait. What is actually fine and doesn’t need your attention.
- Architecture and domain model quality
- Security vulnerabilities and auth hardening
- Performance bottlenecks and N+1 queries
- AI-generated anti-patterns
- Test coverage gaps
- Deployment and infrastructure risk
- Clear, prioritized action plan
Final price depends on codebase size. Free 30-minute intro call to scope the work.
Ongoing: Fractional CTO Retainer (most popular)
€4,000 / month. 20 hours per month.
A senior technical partner in the room every week, not once a quarter. I build the workflows, review the PRs, and set the conventions that let your team and your AI agents make real, dependable progress.
- Async PR reviews through GitHub
- Architecture guidance on every major decision
- AI workflow setup: specs, conventions, and review gates your agents follow
- Security and performance oversight, continuously
- Optional sync deep-work blocks when async isn’t enough
- Direct access: no ticketing, no account managers, no handoffs
No long-term commitment. Cancel anytime.
One-off: Single-Session Consulting
Sometimes you don’t need a retainer and you don’t need a full audit. You need 90 minutes with someone who has seen the exact problem you’re staring at, and an honest answer you can act on Monday.
Typical reasons clients book this:
- A specific architectural decision: monolith vs. services, Rails vs. split stack, framework choice for a greenfield product
- A second opinion on an in-house proposal before you sign off on a quarter of work
- Hiring: reading a candidate’s take-home or pairing on a technical interview
- A pricing or tech strategy review for a fintech product line
- “Is this AI workflow setup actually going to scale, or are we fooling ourselves?”
Price depends on prep and scope. Pay per session, no retainer, no ongoing commitment.
Find out where you stand
A 30-minute intro call to understand your codebase, your team, and whether we’re a good fit. No pitch. No pressure. If the right answer is “you don’t need me,” I’ll tell you.
Beyond Rails: TypeScript, React, NestJS, Node
Rails is the default, not the ceiling. A significant part of modern product work lives in TypeScript, on the server and in the browser, and I’ve been shipping in both worlds for a long time.
Where the TypeScript and Node side of an engagement typically matters:
- NestJS services sitting in front of or alongside a Rails monolith. Real boundaries, real domain separation, no microservices for the sake of microservices.
- React and Next.js frontends that need a sane state story, genuine component boundaries, and a build system that doesn’t collapse under its own weight.
- Node background workers for streaming, webhooks, and third-party integrations where Ruby isn’t the right tool for the specific job.
- Shared type systems between API and client. Done right, this kills entire classes of bugs. Done wrong, it’s three days of yak shaving per week.
- Strict TypeScript configurations that the team actually keeps strict, instead of slowly regressing to `any` under deadline pressure.
The lens stays the same across stacks. Domain model first. Conventions over cleverness. Boring wins. AI agents only go as far as the spec and the review gates let them.
SaaS, fintech, and the rest of the stack
Code is only one layer of the problem. If you’re building a regulated product, a high-traffic platform, or a SaaS business that plans to survive its next growth stage, the harder questions are usually above the codebase.
SaaS
Multi-tenancy design. Billing systems that don’t silently bleed revenue. Usage metering you can actually audit. Subscription lifecycle edge cases. Migration paths from one pricing model to another without breaking existing customers. Observability that tells you what’s happening in production instead of what you hoped was happening.
Fintech and payments
Two decades across fintech and payments gave me a practical feel for where these systems actually fail. Idempotency. Reconciliation. KYC and AML pipelines. Regulatory timelines that will eat your roadmap if you plan them late. Compliance is not a sprint you tack on at the end. It’s the foundation you build on.
A few themes I see repeatedly in fintech engagements:
- Background job pipelines that happily double-charge when something retries
- Reporting systems that disagree with the ledger of record and nobody noticed
- Webhooks from payment processors handled optimistically, with no replay story
- KYC integrations treated as feature work when they are core risk infrastructure
- PSD3 and similar regulatory changes landing in a quarter nobody budgeted for
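The double-charge bullet has a small, testable shape. A hedged sketch in plain Ruby, with an in-memory `Set` standing in for a database table with a unique index on the idempotency key (class and method names are hypothetical):

```ruby
require "set"

# Hypothetical charge job. The key insight: the retry of a logical charge
# must reuse the same idempotency key, so the second attempt is a no-op.
class ChargeJob
  attr_reader :charged

  def initialize
    @seen    = Set.new # stands in for a unique-indexed processed-keys table
    @charged = []      # charges actually executed
  end

  def perform(order_id:, amount_cents:, idempotency_key:)
    # Set#add? returns nil when the key was already present.
    return :skipped unless @seen.add?(idempotency_key)

    @charged << { order_id: order_id, amount_cents: amount_cents }
    :charged
  end
end

job = ChargeJob.new
key = "order-42-charge"
job.perform(order_id: 42, amount_cents: 1999, idempotency_key: key) # => :charged
job.perform(order_id: 42, amount_cents: 1999, idempotency_key: key) # retry => :skipped
```

In production the key check and the write have to land in the same transaction (or rely on the unique index raising), otherwise a crash between them reopens the double-charge window.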
High-traffic platforms
Caching layers, queue design, read replicas, Postgres tuning, sane sharding strategies when you truly need them. Kamal deployments that don’t require a tired DevOps contractor. Database migrations that don’t lock a busy table for 40 minutes at the worst time of day.
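On the migration point, the pattern that avoids long locks is a short DDL step plus a batched backfill, so no single write holds its locks for long. A plain-Ruby sketch of the backfill half, with an array of hashes standing in for the relation (the Rails shape would be `Model.in_batches(of: 1_000) { |batch| batch.update_all(...) }` with a throttle between batches):

```ruby
# Hypothetical batched backfill: each batch is one short write,
# with locks released between batches instead of one 40-minute UPDATE.
def backfill_in_batches(rows, batch_size:)
  batches = 0
  rows.each_slice(batch_size) do |batch|
    batch.each { |row| row[:normalized] = true } # small write per batch
    batches += 1
    # sleep 0.1  # throttle hook: give the database room between batches
  end
  batches
end

rows = Array.new(10) { |i| { id: i } }
backfill_in_batches(rows, batch_size: 3) # => 4 batches for 10 rows
```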
Team and process
I also advise on the things that surround the code. Hiring. Tech interviews that actually predict performance. AI-workflow conventions the team will actually follow. Org structure for engineering teams in the 5-to-50 person range. When to split a monolith. When to refuse to split it.
Start with a conversation
Thirty minutes. No prep required from your side. I’ll ask about your codebase, your team, and the decision you’re currently circling. You’ll leave with at least one clear opinion, whether or not we end up working together.
Final Words
If something on this page rings true, or if something feels wrong, I want to hear it either way. The pushback is often more useful than the agreement.
You can find me on LinkedIn (linkedin.com/in/ivanturkovic), X (x.com/ithora), and Threads (threads.com/@ithora). I read every reasonable message. I don’t read the ones that sound like they were written by a model.
If you’d rather talk about your specific situation, the contact page is at ivanturkovic.com/contact, or book a call directly at cal.eu/ivan-turkovic/30min.
One question I’d genuinely like to know the answer to: what’s the one architectural or hiring decision your team is circling right now, where a second opinion would actually change what you do next?