A developer I know got pulled into a “productivity review” last month. Not because their output dropped. Because their AI tool usage was below the team average. Their manager wanted to know why they weren’t using AI for coding enough.
Not why their code had fewer bugs. Not why their PRs moved through review faster. Why the dashboard said they weren’t vibing with AI hard enough.
This is where we are now. The company even runs an internal leaderboard, and last month's top user spent $52k on tokens.
The metric that measures nothing
Large companies love measurable adoption. They've spent millions on AI tooling licenses, and somebody has to justify that spend. So they started tracking prompts per day, acceptance rates, lines of code generated. Some even built internal dashboards that rank teams by AI usage.
The assumption is simple: more AI usage equals more productivity. If a developer writes 60% of their code with AI assistance, they must be 60% more productive. Right?
Wrong. Completely wrong.
Writing code was never the bottleneck. Not in any company I’ve worked with over the past twenty years. The bottleneck was always everything around the code. Meetings. Standups. Sprint planning. Backlog grooming. Architecture reviews. Stakeholder calls. Status reports. Slack threads that could have been a sentence. Follow-up meetings about the meeting you just had.
In most large organizations, a developer is lucky to get three to four hours of actual coding time per day. Some get less. The rest is process. Necessary process, sometimes. Bureaucratic theater, often. But either way, it’s the reality of working inside a large organization.
AI didn’t change any of that.
More output, same bandwidth
Here’s what actually happened. AI tools made the coding part faster. A task that took two hours of focused implementation might now take forty-five minutes with AI assistance. The code gets generated quicker. Boilerplate disappears. Scaffolding is instant.
But the developer still has the same number of meetings. The same number of Jira tickets to update. The same cross-team dependencies to chase. The same architectural constraints to navigate. The same review process to follow.
So what changed? The coding got faster, but the cognitive load got heavier. Because now that developer has to verify everything the AI produced. Line by line. Pattern by pattern. Edge case by edge case.
AI doesn’t produce code you can trust blindly. Anyone who has spent real time with these tools knows that. The output looks right. It often is right. But “often” is not “always,” and the gap between those two words is where production incidents live.
Reviewing AI-generated code is a specific kind of mental work. It’s not the same as reviewing a colleague’s pull request, where you can infer intent from their comments, their commit history, their known patterns. AI-generated code has no intent. It has statistical probability shaped into syntax. You can’t ask it why it chose that approach. You can’t assume it understood the business rule. You have to verify everything from scratch, every time.
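Here's a concrete illustration. A hypothetical helper in exactly the shape these tools produce (the function is mine, invented for this post): it's clean, it runs, and a tired reviewer will wave it through.

```python
from datetime import date, timedelta

def business_days_between(start: date, end: date) -> int:
    """Count business days between two dates."""
    days = 0
    current = start
    while current < end:
        if current.weekday() < 5:  # Monday through Friday
            days += 1
        current += timedelta(days=1)
    return days

# Looks right, often is right. But it never counts `end` itself,
# it silently returns 0 when start > end, and it knows nothing
# about public holidays. Whether any of those three is a bug
# depends on a business rule the model never saw -- and that's
# the judgment call the reviewer has to make from scratch.
```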
That verification requires deep focus. The same deep focus that was already scarce because of all the meetings and process overhead.
The fatigue nobody talks about
This is vibing fatigue. The accumulated cognitive cost of continuously generating, reviewing, validating, and correcting AI output while maintaining all the other responsibilities that never went away.
It’s not burnout in the traditional sense. It’s a specific kind of mental exhaustion that comes from context-switching between trusting a tool and verifying that same tool, dozens of times a day. Generate. Read. Check. Fix. Generate again. Read again. Did it hallucinate a dependency? Did it introduce a subtle type mismatch? Did it follow the team’s conventions or invent its own?
Multiply that by eight hours. Subtract the three hours of meetings. Add the Slack interruptions. That’s a normal Tuesday.
And now your manager wants to know why your AI usage metric dipped this week.
The irony is brutal. The developers who use AI most carefully, who take time to verify and validate, who refuse to blindly merge generated code, those developers will show lower throughput on the dashboards. They’ll look less productive by the metric. The ones who accept everything Copilot suggests without reading it will look like stars.
Until something breaks in production.
Coding speed was never the problem
Let me say this plainly because I think the industry needs to hear it.
In large organizations, the primary constraint on software delivery is not how fast individual developers type code. It has never been that. The constraint is organizational. It’s the time spent in alignment meetings. The weeks waiting for another team’s API to be ready. The days lost to unclear requirements that should have been resolved before a single line was written. The review cycles. The compliance gates. The deployment windows.
AI made one part of the pipeline faster. The part that was already the least broken.
If you want to make engineering teams genuinely more productive, you don’t measure how much AI they use. You measure how much uninterrupted focus time they have. You measure how many meetings could have been a document. You measure how long a pull request sits in review before someone looks at it. You measure the gap between “we decided to build this” and “a developer actually started building it.”
Those are the real bottlenecks. AI doesn’t touch them.
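If you want one concrete place to start, review latency is cheap to compute. Here's a minimal sketch, assuming you can export each pull request's opened and first-review timestamps; the field names are illustrative, not any particular API:

```python
from datetime import datetime
from statistics import median

def median_review_wait_hours(prs: list[dict]) -> float:
    """Median hours a pull request waits for its first human review.

    Each `pr` is assumed to carry `opened_at` and `first_review_at`
    as datetimes; PRs still waiting have `first_review_at` set to None.
    """
    waits = [
        (pr["first_review_at"] - pr["opened_at"]).total_seconds() / 3600
        for pr in prs
        if pr.get("first_review_at") is not None
    ]
    return median(waits)

# Made-up data, for shape only:
prs = [
    {"opened_at": datetime(2026, 1, 5, 9), "first_review_at": datetime(2026, 1, 5, 15)},
    {"opened_at": datetime(2026, 1, 6, 10), "first_review_at": datetime(2026, 1, 8, 10)},
    {"opened_at": datetime(2026, 1, 7, 11), "first_review_at": None},
]
print(median_review_wait_hours(prs))  # 27.0 -- over a day, at the median
```

Put that number next to your AI-usage dashboard and watch which one actually predicts delivery.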
The mental cost of verification at scale
There’s a cognitive science angle here that the dashboards completely miss.
Human attention is finite. Decision fatigue is real and well-documented. Every time a developer looks at AI-generated code and makes a judgment call about whether it’s correct, safe, and aligned with the system’s architecture, that’s a decision. A non-trivial one.
Now stack fifty of those decisions into a single afternoon, between a sprint retrospective and a cross-team sync. That’s the actual developer experience in 2026.
The tools got smarter. The models got better. The code quality improved. All true. But the human being reviewing that code has the same prefrontal cortex they had in 2023. Same working memory. Same attention span. Same vulnerability to fatigue, distraction, and the slow erosion of judgment that comes from sustained cognitive load.
Companies that track AI usage as a KPI are optimizing for the machine side of the equation while completely ignoring the human side. They’re measuring the accelerator without looking at the brake wear.
What should be measured instead
If you’re a leader who genuinely wants to understand whether AI is helping your engineering teams, stop looking at adoption dashboards. Start looking at outcomes.
Are defect rates going down? Is time-to-production shrinking? Are developers reporting that they have enough focus time? Is the ratio of coding time to process time improving? Are fewer incidents caused by subtle bugs that slipped through review?
And the hardest question: are your developers telling you the truth about how they feel, or are they performing AI adoption because the dashboard is watching?
I’ve seen teams where AI tools genuinely help. Where a senior developer uses them to eliminate tedious boilerplate and spends the recovered time on architecture, mentoring, and design. That’s the ideal. That’s what the sales pitch promised.
But I’ve also seen teams where AI tools created a new layer of anxiety. Where developers feel pressure to accept suggestions they aren’t sure about because the metric demands throughput. Where the cognitive load of constant verification is quietly eroding code quality in ways that won’t show up until six months from now.
The dashboard can’t tell you which scenario you’re in. Only your developers can. If you’re willing to listen.
The real productivity unlock
You want a productivity win from AI in a large organization? Here’s the actual play.
Reduce the non-coding overhead first. Cut the meetings that don’t produce decisions. Replace status updates with async documents. Give developers blocks of protected focus time. Fix the review bottleneck so pull requests don’t age in a queue.
Then let AI do what it does well within that recovered time. Let it handle boilerplate, generate tests, scaffold implementations. And give developers enough breathing room to actually verify the output properly. Not rushed between two meetings. Not with one eye on Slack. Properly.
That’s not a dashboard metric. That’s an organizational design problem. It’s harder to solve than buying licenses and tracking usage. But it’s the only approach that actually works.
The vibing will slow down
Here’s my prediction. The initial rush of AI-assisted coding is going to plateau. Not because the tools get worse. They’ll keep getting better. But because the humans using them will hit a wall.
The wall isn’t technical. It’s cognitive. There is a natural limit to how much AI output a single developer can responsibly verify in a day. We haven’t mapped that limit yet because everyone is still in the honeymoon phase, generating code as fast as the model can produce it.
When the fatigue sets in, and it will, the developers who last are the ones who learned to use AI selectively. Who figured out where the tool saves real time and where it creates hidden work. Who optimized for judgment, not throughput.
The ones chasing usage metrics will burn out. Or worse, they’ll keep going and stop verifying. And the codebase will pay for it later.
That’s vibing fatigue. Not a rejection of AI. A recognition that human attention is the actual scarce resource, and no model can substitute for it.
Final Words
I’d genuinely like to hear whether this matches your experience or if you see it differently. Agreement and disagreement are both welcome. You can find me on LinkedIn, X, and Threads if you want to continue the conversation there.
If you’re dealing with engineering productivity challenges, AI adoption strategy, or just want to talk through how your teams are actually using these tools, reach out. I’m always happy to talk shop.
Are your teams tracking AI usage as a KPI? And if so, is anyone tracking the cognitive cost that comes with it?
If this post made you think, you'll probably like the next one. I write about what's actually changing in software engineering, not what LinkedIn wants you to believe.