And I say that with the most empathy I can muster because most of the time, the people with the gun didn't load it themselves.
A new report landed this week that should make every leader pause, and every employee sit up straighter.
According to research by Writer in their 2026 AI Adoption Survey, covered by Fast Company, nearly one in three knowledge workers across the US, UK and Europe admit to sabotaging their company's AI strategy. The survey ran across 2,400 people — 1,200 executives and 1,200 employees. Among Gen Z respondents, the figure jumps to 44%.
According to the report, workers admit to ignoring guidelines, opting out of AI training, refusing to use the tools, feeding sensitive company information into unapproved public AI tools, and even tampering with performance metrics to make the technology look less effective.
Then there's the other side of the room.
60% of C-suite executives say they plan to lay off employees who can't, or won't, use AI. But only 24% of employees say they fear being laid off for that reason.
And 75% of companies even say their AI strategy is more for show (PR and investor relations, for example) than actual internal guidance.
And the wider picture backs it up. Fast Company notes that AI accounted for 25% of US job cuts in March, and cites Goldman Sachs data showing workers displaced by AI take longer to find new jobs than those displaced by other causes.
This isn't a standoff. It's a slow-motion career accident for employees, and for the organisations they work in.
So let me say what I think needs saying on both sides...
Part one — to the 29%: sabotaging the results is shooting yourself in the foot.
I understand the fear. Commentators have started calling it FOBO (the Fear Of Becoming Obsolete) and it's a rational response to a genuinely disruptive moment. Anyone who pretends otherwise isn't paying attention. I've felt versions of it myself in every major organisational or technological shift of my career.
But understanding the fear and endorsing the response are two different things.
If you deliberately tank an AI output, misuse a tool, or leak data into a public chatbot to make a point, three things happen and none of them are good for you.
One: it will hurt you in the long run. The labour-market data is already unkind to resisters. AI is responsible for a rising share of job cuts, and Goldman Sachs' own research suggests workers hit by AI-driven displacement take longer to find their next role. The very behaviour people are using to protect their jobs is the behaviour most likely to cost them one.
Two: if you're found out, it almost certainly leads to disciplinary action. And nasty, unnecessary disciplinary action at that. Entering proprietary or customer data into a public AI tool isn't a protest; in most organisations it's a breach of the acceptable use policy, the data protection policy, and, in regulated sectors, possibly the law. "I was making a point about AI" is not a defence that survives an HR meeting. I've seen careers end over less.
Three: you're burning the bridge you'll need to walk back over. The version of you in twelve months will want to be part of the team rebuilding the workflow, not the person flagged on an audit report.
That's the hard part said. Now the part leaders don't get to skip.
Part two — to leaders: most of your 29% is a reflection of you, not of them.
The Fast Company piece lists the reasons workers actually give for their pushback: fear of job loss, dissatisfaction with the AI tools their company has rolled out, and frustration that the technology has diminished their value and creativity.
Not one of those reasons is "I don't understand the tech." They're all about trust, meaning, and how change is being done to people rather than with them.
If that describes your organisation, the fix isn't to punish the 29%. The fix is to understand why they don't believe you and then to lead differently.
Which brings me to the model I use in my work.
The AI Maths Model. A diagnostic for leaders.
I talk to leaders about AI as a simple question: which mathematical operation are you actually running?
Because every organisation I see is doing one of four things with AI right now. Three of them are common. Only one of them works.
Division — using AI to cut jobs. A race to the middle.
This is the one making headlines and driving every piece of resistance. Leaders look at AI, see a cost line, and divide. Fewer people, same output, call it a productivity gain. The trouble with division is that it's the easiest operation to copy, so everyone ends up in the same place: smaller teams, flatter capability, a depleted middle, and a workforce that has learned, correctly, that your AI strategy is a headcount strategy in a T-shirt. This is the culture that produces sabotage. It's also, in the long run, the culture that gets out-competed by the organisation next door that chose a different operation.
Subtraction — trimming process but capturing none of the uplift.
This one looks more sophisticated. Leaders use AI to remove steps: a bit less admin here, a faster draft there, a report that writes itself. Real work is taken out of the week. But nothing is put back in. The time saved doesn't go to clients, to creativity, to strategy, to growth. It just quietly disappears into the next meeting. People feel busier, not better. The organisation gets leaner without getting stronger. Subtraction leaves value on the table and calls it efficiency.
Addition — giving people tools without showing them how to use them.
This is the most well-intentioned failure of the four. Licences handed out. A 45-minute webinar. A Slack channel with some prompt tips. "We've given everyone AI; go and be more productive." But Writer's data is clear: 29% of people are actively undermining the rollout, and the reasons are dissatisfaction with the tools and a feeling of being devalued. Addition without enablement is how you get exactly that. You're scaling inconsistency, shadow use and quiet resentment, and you're calling it transformation. You've added something to the workforce, but you haven't changed anything.
Multiplication — the holy grail.
Multiplication is what happens when you use AI to genuinely compound the effectiveness of your people — so they add more value to the organisation, more value to their clients, and more value to their own careers at the same time. It's not about how many licences you've deployed. It's about whether the output, the capability and the confidence of your teams is measurably larger than it was before. Multiplication is slower to set up and harder to explain to a board, but it's the only operation that produces a durable advantage, because it changes what your people are capable of — and capable people are the one thing your competitor can't buy on the same day you did.
Every leadership team I work with is running one of these four, whether they've named it or not. Most are running some blend of Division, Subtraction and Addition and wondering why their culture feels brittle and their AI metrics look underwhelming.
How to know which operation you're actually running.
A few honest tests:
- If your first concrete outcome from AI was a headcount number — you're dividing.
- If your teams are saving time but you can't point to where that time is now creating value — you're subtracting.
- If you've bought the licences and the training but you can't name three role-specific workflows that have genuinely changed — you're adding.
- If your people are visibly producing more and better work, your clients are noticing, and your teams are talking about AI as a multiplier of them rather than a threat to them — you're multiplying. Keep going.
My position, stated plainly.
I am, and always have been, a firm believer in taking people on the journey with you.
Not dragging them. Not mandating. Not issuing ultimatums in an all-hands. Taking them with you, helping them through the process, helping them adapt, and putting the tools, the time and the support in their hands so they can do the work.
I'll be honest: I haven't always been able to do that fully in every role I've held. Budgets, timelines and structural realities get in the way. But that's exactly why I'm this vocal about it now. With a technology as disruptive as this one, and with stakes this high for ordinary people's livelihoods, "we didn't have time to bring everyone along" is no longer an acceptable excuse.
It's the whole job.
Multiplication is not a nicer-sounding version of the same AI strategy. It's a fundamentally different one, and it's the only one in which the 29% in the survey data ever become advocates instead of saboteurs.
Where this leaves us.
To the 29%: the fear is real, but please don't let a rational fear push you into an irrational response that costs you the career you're trying to protect. Sabotage is the slowest possible route back into the room. There is a version of the next twelve months where you move through this, not around it, and the door is still open.
To leaders: the people pushing back aren't your problem. They're your signal. If a third of your workforce is quietly working against your AI strategy, you're almost certainly running Division, Subtraction or Addition, and calling it transformation. The fix isn't another tool or another mandate. It's choosing Multiplication and doing the harder, slower, more human work of taking your people with you.
Disruptive technology doesn't excuse us from leadership. It demands more of it.
Pick the right operation. Take your people with you.
They'll meet you there.
