AI Didn’t Create the Toxicity. It Exposed It.
Why AI is making broken expectations, bad planning, and toxic cultures harder to hide
For a while, the public story around AI in software has been almost childish.
Either AI replaces engineers in a few months, or it turns every company into some kind of magical high-output machine.
From where I stand, neither description is honest. What I see is something else.
AI often makes work faster, yes. But in many teams, it also makes work more toxic. Not because the tool is evil, but because speed changes pressure, and pressure changes people.
That is the part I think many people are missing.
I am not even basing this mainly on studies (see references [1], [2], [3] below), although some of them point in a similar direction. I am basing it on the pattern itself: on what I have seen in software work, in leadership dynamics, and in the way organizations behave as soon as they believe something should now be possible faster. That is the real shift. The moment a company believes work can move faster, deadlines do not stay where they are. They move.
And they do not just move a little. They compress.
What could previously be discussed becomes expected. What used to be a reasonable turnaround now reads as hesitation. What used to count as responsiveness now reads as slowness.
That is what AI changes first in many environments. Not reality; expectations.
And once expectations are moved, they are very hard to move back.
You might know my personal stance on deadline-driven development; the current trends in companies adopting AI seem to amplify the issue rather than provide relief.
Faster work does not feel like relief
A lot of people still talk about AI as if speed automatically creates spaciousness. As if faster output should naturally give people more time, more calm, more margin. Show me the organization in which the sudden removal of a “bottleneck” actually leads to a collective sense that there is now room to breathe.
That is not how organizations usually work.
If a team can create drafts faster, more drafts get requested. If engineers can prototype faster, more ideas get pushed into motion. If product can produce specs faster, more specs arrive half-baked. If non-specialists can suddenly generate something that looks plausible, more low-quality work enters the system disguised as progress.
So the work does not simply shrink. It expands.
And because it expands under the illusion of efficiency, leadership often notices the visible acceleration but misses the invisible cost.
That cost lands somewhere.
Usually on the people who still have to think.
Usually on the people who still have to review.
Usually on the people who are accountable when the machine-generated speed meets real-world complexity.
That is why I do not think the AI story is mainly about replacement. I think it is about expansion, pressure transfer, and the concentration of accountability.
The bottleneck does not disappear. It moves upward.
Deadlines become faster
If nobody dies when the deadline slips, it was probably never an emergency, only pressure dressed up as one.
This is one of the clearest patterns.
AI does not just affect how fast something can be produced. It changes the social meaning of time inside the company. A deadline is never just a neutral date on a calendar. A deadline is a cultural signal. It tells people what pace is normal, what urgency is acceptable, and how much room there is for thought.
No one dies when a natural person sets an artificial date for an artificial, badly defined “goal” that is often not even connected to any outcome.
Tech Leaders and AI
Once AI enters the system, many leaders quietly reinterpret time itself. They do not always say it openly. Sometimes they do not even realize they are doing it. But suddenly, the same amount of work is expected sooner. Or more work is expected in the same time. Or the planning stays sloppy because “the team has AI now.”
That last part is especially dangerous.
Instead of becoming more precise, organizations become more careless. They assume the tool can absorb the mess. They assume ambiguity is cheaper now. They assume bad upstream decisions can be compensated for downstream with speed.
But ambiguity does not vanish because generation is faster. A bad strategy does not become good just because drafts appear in ten seconds. Weak leadership does not become strong because someone can prompt out a prototype.
So what happens?
The deadline becomes faster on paper, while the cognitive burden becomes heavier in practice.
People have to validate more. Reject more. Clarify more. Repair more. Review more. Defend more. The calendar says acceleration. The nervous system says overload.
And then the usual cultural lie enters the room:
“If this is faster now, why are you still under pressure?”
That question is poison. Because it assumes typing was the work. It assumes the hard part was producing visible artifacts. It assumes software engineering was mostly transcription. It assumes that leadership was mostly about coordinating tasks. But the hard part was never mainly typing. The hard part was judgment. Trade-offs. Sequencing. Responsibility. Restraint. Integration. Human alignment.
AI can absolutely accelerate parts of the process. But when that acceleration is interpreted naively, it does not create peace. It creates deadline inflation.
This is where the toxicity starts
I do not mean toxicity in the cheap internet sense. I mean something more structural.
Work becomes more toxic when pressure rises faster than clarity.
Work becomes more toxic when expectations rise faster than system quality.
Work becomes more toxic when visible speed becomes the moral standard and thoughtful resistance starts looking like underperformance.
Or in simpler terms, it becomes toxic when we lose our own ethics.
And that is exactly the kind of atmosphere AI can intensify. Not always. But very easily. Suddenly, there is more subtle suspicion in the room.
Why is this taking so long?
Why is review such a bottleneck?
Why is engineering pushing back?
Why do we still need that specialist?
Why can’t we just ship the AI draft and clean it up later?
And beneath those questions sits a more corrosive one:
🔴 What exactly are these people doing all day, if the machine is already doing so much?
That is where a team begins to rot.
Because once the visible output becomes disconnected from the invisible responsibility, the people carrying the real responsibility start being measured by the wrong thing.
Then senior engineers become cleanup crews.
Then tech leads become emotional shock absorbers.
Then principals become permanent reviewers of work they did not initiate and do not fully endorse.
And then everyone starts working in a low-level defensive mode.
More checking. More proving. More justification. More surface activity. Less calm thinking. Less deep work. Less patience.
And then comes the interpersonal toxicity.
Impatience rises. Respect falls. People become shorter with each other. Work gets handed over too early. The line between “draft” and “done” gets blurred. People stop owning quality fully because the whole system is already subtly signaling that speed matters more than completeness.
That is how culture degrades. Not through one dramatic event, but through repeated compression.
What is your opinion here? Let me know 👇
AI exposes the company’s character
I do not think AI creates these problems from nothing. It reveals what kind of company was already there.
If a company has strong thinking, clear ownership, real technical standards, and leaders who understand the difference between speed and haste, AI can be useful. It can remove friction. It can help with exploration. It can support experienced people.
But if a company already confuses activity with value, AI becomes an accelerant. It accelerates bad planning. It accelerates premature execution. It accelerates the fantasy that every problem is now mainly a throughput problem.
And once that fantasy takes hold, the whole organization becomes harsher. Because now there is a permanent comparison in the air. Not just between one employee and another, but between a human pace and an imagined machine pace.
That comparison is usually stupid. But it is emotionally powerful.
It changes how people speak. It changes how quickly they get frustrated. It changes what they classify as acceptable delay. It changes how much empathy they have for careful work.
That is why I keep coming back to culture. Not as a soft topic, but as the operating environment in which tools either help or hurt.
If we don’t develop our culture deliberately and intentionally, it cannot become a culture worth living in; instead of taking ownership, people will quietly quit.
If AI replaces developers by making them quit, it wasn’t AI’s fault, my good management friend.
The CEO-CTO gap gets wider
This is another pattern I keep noticing.
The higher up you go, the easier it is to mistake visible acceleration for genuine leverage.
A CEO sees that more can be produced.
A CTO sees that more now has to be governed.
A CEO sees compressed cycle times.
A CTO sees review burden, architecture drift, security risk, maintenance cost, and coordination overhead.
A CEO sees a possibility.
A CTO sees surface area.
Both are looking at something real. But they are not looking at the same layer.
That is where the internal story starts to split. And a split is not alignment. I point that out once more because that simple fact seems to be remarkably hard to accept.
One side says, “We can move much faster now.” The other side says, “Only if you are comfortable lowering standards, increasing risk, or burning people out.”
If that gap is not handled honestly, toxicity grows very quickly.
Because then engineering starts feeling gaslit. They are told the work is easier now, while their lived experience is that the work has become noisier, more fragmented, and more difficult to defend – more and more a source of ever-compounding anxiety.
And leadership starts feeling resisted. They think the team is being conservative, slow, maybe territorial. But often the team is simply seeing the second-order effects earlier.
This is why mature AI adoption is not mainly a tooling question.
It is a leadership maturity question.
Some older companies can actually be healthier here
This is one reason I sometimes think old-school SMBs can be healthier than modern, highly performative tech environments.
Not because they are more advanced. Often, they are much less advanced. But they are sometimes closer to reality.
A customer problem is more concrete. A bad process hurts faster. A useless feature is harder to justify for long. A relationship is more visible. Trust matters more openly. The company cannot hide behind dashboards and internal theater as easily.
In some modern software cultures, there is too much abstraction between action and consequence. Too much room for performative productivity. Too much room for people to confuse visible motion with value.
AI fits perfectly into that kind of environment. It produces a lot of motion. And if the culture was already addicted to motion, the addiction gets worse.
“Make the best use of what is in your power, and take the rest as it happens.”
—Epictetus
My own conclusion
I am not anti-AI. I use AI. I see the value. I think it can absolutely make strong people stronger.
But I also think many companies are telling themselves a flattering story.
They say AI is removing friction…
Often, what it is really doing is removing hesitation at the point of creation while increasing toxicity at the point of responsibility.
They say AI makes teams faster…
Often what it really does is make deadlines more aggressive, management more impatient, and the invisible work of good engineers even less legible.
They say AI boosts productivity…
Often what it really boosts is the amount of work entering the system, the amount of review required, and the amount of emotional pressure carried by the people who still have to make reality hold together.
That is the part I would watch very closely.
Not just output. Not just adoption. Not just speed.
Watch tone. Be aware of impatience. Observe review load. Look at how often thoughtful people feel rushed. Notice whether the company is becoming more precise or just more hurried. Stay in control of whether deadlines are becoming smarter or simply shorter – that is absolutely within the power of the humans working together.
Because once speed becomes a cultural weapon, the damage is much larger than a missed estimate.
It changes how people relate to one another. It changes whether good people can still do good work without becoming cynical. It changes whether leadership is building an actual system or just driving everyone harder with a newer excuse.
That is why I think the real AI question for leaders is not whether the tool makes work faster.
It obviously can.
The real question is this:
What kind of culture does that new speed create?
And if the honest answer is more pressure, less thought, more suspicion, and faster deadlines with blurrier standards, then the problem is not that AI failed.
The problem is that the company used acceleration without wisdom.
A few external pieces have begun to describe adjacent patterns, which is worth noting. But I would not base the argument on them. The argument stands for me even without them, because the mechanism is visible in practice: once organizations believe output can be produced faster, they rarely convert that gain into calm; they convert it into expectation.
And expectation, unmanaged, becomes pressure.
Pressure, without clarity, becomes toxicity.
“If you seek tranquillity, do less. Or, more accurately, do what’s essential.”
—Marcus Aurelius, Meditations 4.24, paraphrased from a common translation
—Adrian
[1] https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it
[2] https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study
[3] https://www.gitclear.com/ai_assistant_code_quality_2025_research

📍Question of the day: What is your advice for working with AI in tech orgs this year?