The Corporate Culture Charade Part 2: How AI Is Killing What Little Culture We Had Left

While executives blame remote work for destroying company culture, they’re missing the real culprit: AI-generated content is creating a closed loop of meaningless communication where everyone is reading summaries of summaries, and nobody is thinking anymore.

I need to start with a confession: I’ve used AI to write emails. I’ve used it to summarize meeting notes. I’ve watched colleagues use it to generate reports that get sent to other colleagues who use AI to summarize those reports. And I’ve realized we’re all participating in the creation of a vast corporate content ouroboros: a snake eating its own tail, except this snake is made of statistical predictions, and nobody notices there’s no actual substance being consumed.


The Real Problem: It’s Not Remote Work, It’s Synthetic Work

Leadership loves to blame remote work for the decline in culture. They point to the loss of hallway conversations, the absence of spontaneous collaboration, the way Zoom calls feel transactional. But they’re looking in entirely the wrong direction.

The actual culture killer isn’t physical distance; it’s cognitive distance. It’s the growing gap between the work people pretend to do and the work that actually matters. And AI is widening that gap exponentially.

Here’s what’s actually happening in companies right now:

Sarah uses AI to write a quarterly report. The report sounds professional, hits all the expected notes, includes data visualizations that look impressive. She sends it to her manager.

Her manager, who receives dozens of such reports, uses AI to summarize it into three bullet points. These bullets go into his own AI-generated executive summary.

That summary gets presented in a meeting where attendees use AI to generate their meeting notes, which get distributed to people who weren’t there, who use AI to extract action items, which go into project management tools where AI generates status updates.

At no point did anyone actually think deeply about anything. At no point did genuine human judgment get applied. At no point was there real synthesis, real analysis, real understanding.

It’s all just AI talking to AI, with humans serving as the medium of exchange.


The Degradation Cycle: Summaries of Summaries

We’re witnessing something unprecedented in corporate history: the rapid degradation of information quality through successive AI transformations. It’s like making a photocopy of a photocopy of a photocopy: each iteration loses fidelity, introduces artifacts, and strips away nuance.

Generation 1: AI-Generated Content

Someone uses AI to write initial content: a proposal, a report, an analysis. The AI is trained on existing corporate documents, so it produces something that sounds right, uses the appropriate jargon, follows expected formats. But it lacks genuine insight. It can’t make intuitive leaps. It doesn’t understand context the way humans do.

This content is polished but shallow. It says what’s expected without saying anything surprising or truly valuable. It’s the corporate equivalent of empty calories.

Generation 2: AI-Summarized Content

The recipient of this content doesn’t read it carefully (who has time?), so they use AI to summarize it. The summary extracts what the AI thinks are the key points, which may or may not be what a human would consider important. Nuance disappears. Qualifications get stripped away. Uncertainty becomes certainty.

Now we’re working with a summary of shallow content. We’ve gone from empty calories to nutritional vapor.

Generation 3: AI-Synthesized Responses

Based on this summary, someone uses AI to generate a response or create derivative content. Maybe it’s action items. Maybe it’s a presentation. Maybe it’s input for another report. The AI is now working with twice-distilled information, generating content based on a summary of shallow original content.

We’re three generations removed from human thought, and each generation has introduced errors, lost context, and stripped away meaning.

Generation N: The Content Wasteland

Now imagine this cycle repeating dozens, hundreds, thousands of times across an organization. Every document, every email, every report going through this process. The institutional knowledge base becomes increasingly detached from reality. Decisions get made based on information that has been transformed so many times that it bears little resemblance to the original insights, data, or understanding that prompted the first document.

We’re creating a corporate Tower of Babel where everyone is technically communicating but nobody actually understands anyone else, because all the communication is mediated through AI that’s optimizing for surface-level coherence rather than deep meaning.
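
To make the cascade concrete, here’s a toy simulation; nothing about it comes from any real AI product. It treats a document as a set of claims, each with a true importance score, and lets each “summary” keep only the claims that a noisy salience estimate ranks highest. The claim count, keep rate, and noise level are all invented for illustration:

```python
import random

random.seed(1)

# A document as a set of claims, each with a "true" importance score.
claims = {f"claim_{i}": random.random() for i in range(40)}

def ai_summarize(doc, keep=0.4, noise=0.5):
    # Rank claims by a noisy estimate of importance; the noise stands in
    # for the summarizer guessing at what actually matters.
    ranked = sorted(doc, key=lambda c: doc[c] + random.gauss(0, noise), reverse=True)
    kept = ranked[: max(1, int(len(doc) * keep))]
    return {c: doc[c] for c in kept}

important = {c for c, score in claims.items() if score > 0.8}
doc = dict(claims)
for generation in (1, 2, 3):
    doc = ai_summarize(doc)
    survivors = important & doc.keys()
    print(f"generation {generation}: {len(doc)} claims kept, "
          f"{len(survivors)} of {len(important)} important ones survive")
```

Run it and the count of surviving important claims can only fall, generation by generation. Not because anyone decided to drop them, but because each lossy pass compounds the errors of the last.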


The Quality Collapse: When Everyone Stops Thinking

The most insidious effect of this AI-mediated communication isn’t just the degradation of information quality; it’s the atrophy of human judgment and thinking.

When you know you can generate a report with AI, why spend hours thinking through the problem deeply? When you know your manager will just summarize it with AI anyway, why craft your arguments carefully? When everyone is using AI to process information, why develop your own analytical capabilities?

We’re in a race to the bottom of cognitive effort. The tools that were supposed to free us from drudgery are instead freeing us from thinking. And companies are celebrating this because it looks like efficiency.

The Illusion of Productivity

Look at the metrics: More reports produced! Faster response times! Higher email volume! More documentation! By every quantitative measure, people are more productive than ever.

But productivity of what? We’re producing more content, yes. But is any of it meaningful? Is any of it moving the organization forward? Is any of it based on genuine analysis and insight?

We’ve optimized for the appearance of work rather than the substance of work. We’ve created systems that reward volume over value, speed over thoughtfulness, and polish over profundity.

The Death of Institutional Knowledge

Institutional knowledge (the deep understanding of how things really work, why decisions were made, what was tried and failed, what context matters) is built through human experience, interpretation, and communication. It’s the stories people tell. It’s the judgment that comes from seeing patterns over time. It’s the wisdom that emerges from collective experience.

AI-mediated communication short-circuits this entire process. The stories don’t get told because they get summarized away. The judgment doesn’t develop because decisions are based on statistical predictions rather than experience. The wisdom doesn’t accumulate because nobody is actually thinking deeply about what’s happening.

In 10 years, companies are going to find themselves with senior people who can’t explain why the organization does what it does, because they’ve spent their entire careers processing AI-generated content rather than building genuine understanding.


How This Actually Kills Culture (Unlike Remote Work)

Remember all that talk about culture dying because of remote work? About the loss of spontaneous moments and shared experiences? It was always a red herring. Here’s what actually kills culture:

Loss of Authentic Voice

When everyone’s writing is AI-generated, there’s no personality. No individual voice. No quirks or humor or unique ways of thinking. Everything sounds the same: professional, polished, and utterly generic.

Culture is carried in how people communicate. It’s in the shared language, the inside jokes, the particular ways of framing problems. When AI mediates all communication, that distinctive voice disappears. Everyone starts sounding like a corporate chatbot.

Erosion of Trust

Trust requires authentic communication. You need to believe that what someone is saying represents their actual thinking, their real analysis, their genuine perspective.

But when you know (or suspect) that the email you just received was AI-generated, and that the person didn’t even read your last AI-generated message but just had AI summarize it, what basis is there for trust? You’re not actually communicating with each other; you’re playing an elaborate game of telephone with AI as the intermediary.

Culture requires trust. Trust requires authentic communication. AI-mediated communication is inherently inauthentic. The logic is inescapable.

Disappearance of Shared Understanding

Strong cultures have shared mental models: common ways of understanding problems, shared frameworks for making decisions, collective intuitions about what matters.

These shared models develop through repeated, meaningful interaction. Through debates where people articulate their thinking. Through collaboration where different perspectives get synthesized. Through mistakes where everyone learns together what not to do.

AI-mediated communication prevents this shared understanding from developing. Everyone is operating based on their own AI’s interpretation of other people’s AI-generated content. There’s no genuine meeting of minds, no real synthesis, no collective learning.


The Feedback Loop of Mediocrity

Here’s where things get truly dystopian: AI models are trained on human-generated content. But as more and more of the internet and corporate communication becomes AI-generated, future AI models will be trained increasingly on AI-generated content.

This creates a feedback loop. AI generates content that sounds like previous content. That content gets used to train new AI models. Those models generate content that’s even more derivative. Which gets used to train the next generation. And so on.

Researchers call this “model collapse”: the progressive degradation of AI capabilities when models are trained on AI-generated data. But we’re also experiencing a kind of cultural and cognitive collapse in organizations. As AI-generated content dominates, the baseline of what constitutes acceptable thinking and communication keeps lowering.

The standard becomes: “It sounds professional.” Not “It’s insightful.” Not “It’s accurate.” Not “It advances our understanding.” Just: “It sounds like what a corporate document should sound like.”

And because everyone’s using the same AI tools, trained on the same data, optimizing for the same surface-level qualities, corporate communication becomes increasingly homogeneous and increasingly disconnected from actual thinking.
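
If you want to see the statistical heart of that loop, here’s the crudest possible caricature: a Gaussian stands in for “the distribution of ideas,” and each generation a new model is fit only to samples from the previous one. The sample size and generation count are arbitrary; the direction of drift is the point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the "human" distribution of ideas.
mean, std = 0.0, 1.0

for generation in range(1, 101):
    # A new model sees only a finite sample of the previous model's output...
    samples = rng.normal(mean, std, size=50)
    # ...and is refit to exactly that. Nothing outside the samples survives.
    mean, std = samples.mean(), samples.std()
    if generation % 25 == 0:
        print(f"generation {generation:3d}: spread of ideas = {std:.3f}")
```

Each refit on finite, self-generated data shrinks the expected spread a little, and nothing ever adds variety back in. Swap in real models and real documents and the mechanism is the same: every generation can only recombine what the last one emitted.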


The Missing Diagnosis: Why Leadership Gets This Wrong

So why do executives blame remote work instead of recognizing the AI problem? A few reasons:

Remote work is visible. You can see that people aren’t in the office. You can measure it. You can mandate change. The degradation of thinking and communication quality is much harder to see, especially when it’s happening gradually and everyone’s doing it.

Leaders use AI too. The CEO writing the memo about culture is probably using AI to draft it. The executives advocating for return-to-office are using AI to summarize reports. They’re participating in the same system, so they can’t see it clearly.

The metrics look good. More content produced, faster turnaround times, higher volume of communication: by quantitative measures, everything is improving. The qualitative collapse is invisible to metrics-driven management.

It challenges the narrative of progress. Companies have invested heavily in AI tools. They’ve sold shareholders on the productivity gains. They’ve trained employees on these systems. Admitting that AI is degrading the quality of work and culture would mean admitting they’ve been moving in the wrong direction.


What This Means For The Future

We’re at an inflection point. The next few years will determine whether companies become AI-mediated content factories where humans serve as simple pass-through nodes, or whether we figure out how to use these tools while preserving genuine human thinking and communication.

The latter requires recognizing some uncomfortable truths:

Volume Is Not Value

Stop measuring productivity by how much content gets produced. Start measuring by the quality of decisions made, problems solved, and understanding developed. This means fewer reports, fewer emails, and fewer meetings, but better ones.

Thinking Is Work

Deep analysis, genuine synthesis, original insight: these take time. They can’t be rushed or automated. Organizations need to create space for actual thinking, which means accepting that people can’t be constantly responsive, always producing, perpetually communicating.

Authentic Communication Matters More Than Ever

In a world of AI-generated content, authentic human communication becomes more valuable, not less. The companies that figure out how to preserve and prioritize genuine human interaction, whether remote or in person, will have a massive advantage.

Tools Should Augment, Not Replace

AI should help people think better, not think less. It should handle truly routine tasks so people can focus on higher-order thinking. But when AI starts doing the thinking itself, and people just route the AI’s output to other people’s AI, we’ve crossed a line.


The Real Culture Crisis

So let’s return to where we started: the hand-wringing about culture, the blame placed on remote work, the calls for return-to-office to restore the magic.

It’s all missing the point. The crisis isn’t about physical proximity. It’s about cognitive proximity. It’s about whether people are actually thinking together, communicating authentically, building shared understanding.

You can have all the hallway conversations you want, but if everyone’s using AI to write their thoughts and AI to process everyone else’s thoughts, there’s no genuine meeting of minds happening. You can mandate office presence, but if all the work is mediated through AI, you’re just moving the AI-content-generation machines into the same building.

The real culture crisis is this: We’re building organizations where thinking is optional, where communication is performative, where understanding is assumed but never achieved. We’re creating elaborate theater where everyone pretends to work, produces content that looks like work, and processes other people’s work-looking content, but nobody is actually solving hard problems or developing genuine insights.

And we’re blaming remote work for the emptiness we feel, when the real culprit is staring at us from our text editors and email clients and document generators.

The future of work isn’t about where we work. It’s about whether we work at all, or whether we’ve outsourced thinking to machines that can’t actually think, leaving us as mere moderators of a conversation between algorithms.

That’s the conversation we should be having about culture. Not whether people are in the office, but whether anyone is actually thinking anymore.


A Final Provocation

I’ll leave you with an uncomfortable question: How much of what you’ve written, read, or processed in the last week was genuinely thought through by a human being?

How much was AI-generated? How much was an AI summary of AI-generated content? How many layers removed from actual human thinking are you operating?

And if the answer troubles you (if you realize you’re not sure, if you suspect it’s more than you’d like to admit), then maybe it’s time to stop worrying about where people sit and start worrying about whether anyone’s actually thinking.

Because that’s where culture actually lives: not in offices or video calls, but in the quality of thinking and authenticity of communication that happens between people. And AI, for all its capabilities, is steadily eroding both.
