On 31 March, Sequoia Capital published a piece co-authored with Jack Dorsey called "From Hierarchy to Intelligence." It's about Block (the company behind Square and Cash App) and their plan to replace traditional management layers with an AI-powered coordination system.
I've read it three times now and I keep landing in the same place.
The ambition is genuinely exciting if you're building an AI-native company from the ground up. The diagnosis of hierarchy is sharp. But the prescription has a blind spot the size of your average ops floor, and it's one that could cause real damage in organisations that move too fast on structure before they understand what they are actually dismantling.
What They're Proposing
The argument goes like this:
Corporate hierarchy was never the goal; it was a workaround.
When humans were the only mechanism for moving information up and decisions down, you needed layers. The Romans figured this out with their military structure. Railroads copied it. Taylor refined it, McKinsey packaged it, and everyone since has been iterating on the same basic constraint: a person can only manage somewhere between three and eight other people, so you keep adding management tiers as you grow.
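The arithmetic behind that constraint is easy to make concrete. A back-of-the-envelope sketch (the function and the model are mine, purely illustrative, not anything from the Sequoia piece):

```python
import math

def management_tiers(headcount: int, span_of_control: int) -> int:
    """Rough number of management tiers needed when each manager
    can oversee at most `span_of_control` people.

    Illustrative model only: tiers grow with the logarithm of
    headcount, base span-of-control.
    """
    if headcount <= 1:
        return 0  # a company of one manages itself
    return math.ceil(math.log(headcount) / math.log(span_of_control))

# With a span of 7, a 1,000-person company needs roughly 4 tiers
# and a 10,000-person company roughly 5 - which is why growth has
# always meant more layers, not wider ones.
```

The logarithm is the whole story: because each tier multiplies reach by the span of control, doubling headcount barely moves the tier count, but you can never get it to zero while humans are the only coordination mechanism.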
Block's position is that AI can now absorb enough of that coordination work to make permanent management layers unnecessary. They're building what they call a "company world model" (a machine-readable picture of everything happening internally) alongside a "customer world model" (built from transaction data across Cash App and Square). An intelligence layer sits on top, composing financial products for specific customers at specific moments, without a product manager deciding to build each one.
Instead of the traditional hierarchy, they propose three roles:
- Individual Contributors who build and run system layers with high autonomy,
- Directly Responsible Individuals who own cross-cutting problems for defined periods,
- Player-Coaches who combine building with developing people.
No permanent middle management.
That last line is the one getting all the attention.
Why I Think They're Onto Something
A lot of what middle management does in practice is context-carrying. Translating strategy into tasks. Reporting status upward. Aligning teams sideways. Chasing updates. Sitting in meetings that exist because information doesn't flow on its own. If you've run operations at any scale, you know the feeling. Half the management layer exists to compensate for the fact that the other half can't always see what's going on.
AI genuinely can absorb a significant chunk of that. Not theoretically. Right now.
I use my own framework, the AI Maths model, to categorise what organisations are actually doing with AI. The spectrum runs from Division (using AI to cut people, reduce costs, shrink the organisation), through Subtraction (removing tasks without redesigning roles), Addition (bolting AI tools onto existing structures), and up to Multiplication (expanding what people and the organisation can do).
Block's proposal, on paper, is a genuine attempt at Multiplication. They're not just adding copilots to the existing hierarchy, they're redesigning workflows and redefining roles. The whole question of how decisions get made is on the table. That's rare when most organisations I speak to are stuck somewhere between Addition and Subtraction.
So credit where it's due. This is the most ambitious public statement I've seen from a major company about what AI-era organisation design could look like.
Where I Think They're Wrong
Here's where I part company with the article.
The piece treats middle management primarily as an information-routing layer. And that framing makes the whole argument work, because if managers mainly move information up and down, then a system that moves information better makes managers unnecessary.
But that's not the only thing that good managers and leaders actually do. Not even close.
I ran operational teams of up to 200 people across financial services and insurance for more than 20 years. The information-routing part of management was maybe 25% of the job. Probably less. The rest was the stuff that doesn't show up in any machine-readable artifact.
Conflict that never escalated because someone read the room and had a quiet word on a Tuesday afternoon. Institutional knowledge about why a process exists that way, not because anyone documented it, but because Sharon in compliance lived through the regulatory incident in 2017 and quietly steers people away from repeating it. Cultural continuity and the unwritten rules about how things actually get done versus how the process document says they get done. Performance conversations that are really about someone's confidence, not their output. Hiring decisions based on a gut sense of team chemistry that no model can replicate (yet).
I'm not romanticising middle management here. Plenty of management layers exist for no good reason and add nothing but delay. I've seen that too, from the inside.
But there's a difference between removing layers that route information and removing layers that hold institutional memory and provide the human connective tissue that keeps organisations from becoming brittle.
The article acknowledges "edge judgement" as the domain where humans still matter. Ethics, novelty, high-stakes calls. That's true but somewhat incomplete. It positions human value as something that happens at the boundary, when the system hits its limits. I'd argue human value runs through the middle of the organisation, not just at the edges. It's woven into the daily, unglamorous, largely invisible work of keeping 200 people roughly aligned and roughly pointed in the same direction, even on the days when motivation is thin.
The Question Behind the Question
This is where my AI Maths framework is probably most useful as a lens.
Block is aiming for Multiplication. But the gap between describing Multiplication and delivering it is where most organisations fail. And the failure modes are predictable.
If the practical result of this redesign is mainly layer removal and headcount reduction, then the rhetoric says "intelligence" but the maths being done on people is Division. If the system mostly removes admin and coordination tasks but leaves roles, expectations, and value creation unchanged, that's Subtraction. If AI becomes a powerful coordination tool sitting on top of a company that still runs on informal hierarchies and unspoken approval chains, that's Addition with better marketing.
It only becomes Multiplication when people can solve broader or better problems than before. Role value has to expand rather than thin out, decision rights have to genuinely shift, and there has to be hard evidence of new organisational capability, not just speed or productivity metrics.
I'm not sure whether Block will get there. Maybe they will. They have structural advantages most companies don't: they're remote-first (so work already creates artifacts), they sit on extraordinary transaction data, they carry relatively little legacy technical debt, and they have a CEO willing to make bold structural bets.
But for the other 99% of businesses reading this article and wondering what it means for them, the question shouldn't be "can we remove middle management?" The question is sharper than that.
What This Means If You Run a Business
The Sequoia piece is most valuable as a forcing question. And the question is this:
If AI can coordinate more of your business, what should your people be freed up to become better at?
Not "which roles can we eliminate?" That's Division thinking dressed up in strategy language, and it's a race to the bottom.
The real opportunity is redesigning roles so the coordination burden drops and the judgement and relationship work expands. The creative capacity too. That's Multiplication. And it requires treating your managers and leaders as people whose roles need reshaping, not as an information-routing layer that technology has made redundant.
I keep coming back to something I've observed in every restructure I've been part of, and a few I probably caused. The institutional knowledge that walks out the door when you flatten a management layer doesn't show up on any dashboard. You don't notice it's gone until six months later, when decisions start getting slightly worse and nobody can quite explain why the culture feels different.
That's not a technology problem. It's a human capital problem that no world model, however sophisticated, can capture.
The Bottom Line
This article from Sequoia and Jack Dorsey is worth reading. It's the most serious attempt I've seen to articulate what an AI-native organisation could look like, and it pushes the conversation well beyond "give everyone a copilot and see what happens."
But if you're a senior leader reading it and feeling either excited or threatened, I'd encourage you to sit with one question before you act on anything:
What maths are you actually doing on your people as you redesign around AI?
Because the language of intelligence and the language of cost reduction can sound remarkably similar, right up until you see the results.
