I walked into the ExCeL Centre on Tuesday expecting a technology showcase. Wall-to-wall demos of shiny new features, model benchmarks, and the usual arms race of who has the biggest AI. What I got was something more interesting.
Microsoft spent the day talking about people.
Not in the vague, hand-wavy "people are our greatest asset" sense. In a very deliberate, architecturally considered way. The entire event, from Satya Nadella's keynote through to the breakout sessions on financial services and Power Platform, was built around one central argument: AI's job is to make your people better at their jobs, not to replace them.
The Intelligence Layer That Actually Matters
One thing Satya kept coming back to was a distinction I think a lot of organisations are still missing. He wasn't talking about the intelligence of AI models. He was talking about the intelligence of the organisation. The combination of your systems and your human capital, and how they compound together inside your enterprise.
His framing was clear: the goal is not to celebrate some AI model on the outside. It's to make sure your organisation has measurably built intelligence on the inside. And that intelligence is a combination of human capability that gets augmented by AI, not replaced by it.
That matters because it changes how you think about deployment.
The practical expression of this is something Microsoft calls WorkIQ. In simple terms, it's a layer that understands how you work, who you work with, and the content you work on. Your emails, Teams messages, files, calendar, project data, communications. All of it becomes contextual intelligence that any agent or copilot experience can draw from.
This isn't just search. It's state. It follows you across every application. When you're in Excel, it knows what you were discussing in Teams. When you're in Outlook, it knows what's in your project files. And when you hand a task to an agent, that agent has the same contextual awareness you do.
I'll be honest: compared with the connector-based approach you get with something like ChatGPT, the depth of integration here was striking. This isn't bolting AI onto your workflow. It's embedding intelligence into the fabric of how you already work.
What's Actually Happening in the Real World
The temptation with these events is to get swept up in demos and possibility. So the sessions I found most valuable were the ones where actual organisations talked about what's working today.
In the keynote, Microsoft's UK lead shared that 84% of UK organisations now have a formal AI strategy, up from 46% a year ago. That's not experimentation anymore. That's commitment.
Some of the specific examples:
- Lloyds Bank, 250 years old, now runs one of the largest Copilot deployments. The way they described it was telling. People who used to spend days producing reports can now redirect that time to the services they're actually there to deliver. It wasn't about headcount reduction. It was about attention reallocation.
- The NHS deployment at Manchester University Hospital was probably the most human example of the day. Emergency clinicians talked about the challenge of trying to document everything while also looking patients in the eye. With Dragon Copilot listening to conversations and automatically populating electronic patient records, the cognitive load of remembering everything from bed to bed drops significantly. One clinician described it as the difference between documenting care and actually delivering it.
- Mott MacDonald has built their own internal system called Emma, connecting their 20,000 staff. Employees use it to understand procedures, locate subject-matter experts, or check compliance details.
- HSBC runs one of the largest Copilot deployments globally, with over 32,000 engineers enabled. 87% are actively using it. But what made their example interesting was the governance layer. Strong guardrails, clear analytics, and leaders who can see where real business value is being created, all within a highly regulated environment.
- Barclays reported almost one million hours of cumulative productivity gain in their first year of scaled Copilot deployment. A million hours. That's time redirected back into serving customers.
These aren't pilot programmes. These are at-scale deployments with measurable outcomes.
The Financial Services Panel: Where It Got Interesting
The financial services breakout was the session where the conversation moved beyond productivity and into something I've been thinking about for a while.
Simon Bullers, CTO at the Bank of England, Emily Prince, Group Head of Analytics and Group AI at the London Stock Exchange Group, and Will Hyams, Director of AI Productivity at Howden Group, were all on stage. The conversation quickly moved past "are we using AI?" to "how is it changing the way we think?"
One observation that stuck with me: the point about democratising access to intelligence. Previously, for an executive to get meaningful insight, there was a chain of operations. Different teams had to process data, analyse it, package it, and pass it up. Now, with Copilot and agent-based tools, those executives have direct access. The barriers to information and innovation have dropped significantly.
Emily Prince from LSEG made an observation about how people are already using data differently. When presented with AI-assisted analysis, people are drawing on more comprehensive data types to inform decisions, not just the sources they'd habitually reach for. Cross-sectional data, reference data, pricing data, internal and external information, all brought together in ways that expand the quality of thinking.
Will Hyams from Howden made a point I thought was spot on. He talked about upskilling domain experts rather than trying to create hybrid technologists. Take an insurance broker with 30 years of experience, teach them the art of the possible with AI, and let them fly. They know where the inefficiencies are. They know what their job actually requires. The barrier to entry for AI skills is lower than it's ever been. So instead of forcing technology experts and domain experts into the same person, give the domain expert the tools and get out of the way.
He also made an interesting point about AI moving from optional to embedded. His challenge to the room: pick one business process and make AI a mandatory part of it. Not adjacent to the workflow, but part of it. That shift from "available if you want it" to "this is how we work now" is probably the next step a lot of organisations are wrestling with.
Will also made a compelling case for training leaders on AI first, not last. His approach at Howden has been to sit down with senior leaders individually and show them what AI can do for them personally. A leader who wants competitor intelligence can now have a tailored report generated every week. A leader who prefers everything in one place can use AI to pull together data from across their digital estate. When it clicks for them as individuals, the penny drops for the organisation. As Will put it, if it's this good for me, I can see how it would be this good for all of my staff. And there's a secondary effect: those same leaders are now using AI to evaluate the board papers and proposals being pitched to them, which in turn means the people presenting have to raise their game. The quality bar lifts from the top down.
Power Platform: Enterprise Vibe Coding Has Arrived
This was the session that got me most excited from a practical standpoint.
Microsoft showed a coding agent inside Power Apps that took a single paragraph describing a training programme management system and built it. Not a prototype. A multi-screen, data-connected, role-based application with a proper data model, registration forms, and business logic. In minutes.
And it wasn't generating throwaway code. It was building on Microsoft Dataverse, with role-based access control, business rules, and the ability to scale to millions of users. The developer could then iterate on it through natural language, voice, or traditional editing.
What this means in practice is that people across an organisation, people who understand the problem but aren't software engineers, can now build functional applications within a governed, secure enterprise platform. The Power Platform provides the guardrails, the authentication, the data policies, and the observability. The agent does the building.
This is what vibe coding looks like when you put enterprise architecture around it. And I think it's going to fundamentally change how quickly organisations can move from "we need a tool for this" to actually having one.
They also brought Laura Macleod, COE Lead at Virgin Money, on stage. Virgin Money went from only accepting handwritten letters for certain regulatory processes to winning industry awards for AI-powered customer engagement in under two years. Their AI agent handles the majority of customer interactions with high completion rates, and they've seen measurable improvements in customer satisfaction. The key to their success? They spent time face to face with customers understanding what they actually wanted, built trust incrementally, and iterated constantly.
M-Files and the Digital Twin Question
The M-Files session was the one that connected most directly to something I've been thinking about for a while around codifying organisational judgement.
Tony Grout, their Chief Product & Technology Officer, used a great analogy from Good Will Hunting. You can tell me everything about the Sistine Chapel, when it was built, who the architect was, who painted it. But you can't tell me what it smells like. That's context.
His argument: AI knows all of your documents, but it doesn't know the context around them. And if all you're doing is throwing everything into SharePoint and hoping Copilot will figure out the relationships, you're going to be disappointed.
M-Files builds what he called an explicit knowledge graph, where documents, people, projects, and business objects are connected through defined relationships, not inferred ones. So when Copilot needs to answer a complex question that spans contracts, projects, people, and outcomes, it's following explicit paths rather than trying to guess.
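To make the explicit-versus-inferred distinction concrete, here's a minimal sketch of a knowledge graph with declared, typed relationships. This is my own illustration, not M-Files' actual data model; all the names and relation types are invented:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal explicit knowledge graph: typed nodes joined by declared edges."""
    def __init__(self):
        self.nodes = {}                  # id -> {"type": ..., "name": ...}
        self.edges = defaultdict(list)   # id -> [(relation, target_id)]

    def add_node(self, node_id, node_type, name):
        self.nodes[node_id] = {"type": node_type, "name": name}

    def relate(self, source, relation, target):
        # Relationships are explicitly declared, never inferred from text similarity.
        self.edges[source].append((relation, target))

    def follow(self, start, *relations):
        """Walk a chain of explicit relations, e.g. contract -> project -> people."""
        frontier = {start}
        for relation in relations:
            frontier = {t for n in frontier
                          for (r, t) in self.edges[n] if r == relation}
        return {self.nodes[n]["name"] for n in frontier}

g = KnowledgeGraph()
g.add_node("c1", "contract", "Supply Agreement 2024")
g.add_node("p1", "project", "Plant Upgrade")
g.add_node("e1", "person", "A. Broker")
g.relate("c1", "governs", "p1")
g.relate("p1", "staffed_by", "e1")

# A question spanning contracts, projects, and people follows explicit paths:
print(g.follow("c1", "governs", "staffed_by"))   # {'A. Broker'}
```

The point of the sketch is the `follow` call: the answer comes from walking declared edges, not from an embedding model guessing which documents feel related.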
But the part that really caught my attention was his point about tracking not just what decisions are made, but why. He referenced Ray Dalio's approach at Bridgewater, where every business decision is recorded in an app, and over time you build a "believability index" for decision-makers. Good decisions increase your score. Bad ones decrease it.
Now apply that to AI agents. Your agents aren't all going to be equally capable. The annotations and decisions they learn from aren't all equally reliable. So tracking the quality of decisions, and building that into your knowledge graph over time, is going to be as important as tracking the decisions themselves.
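As a sketch of how a believability index might work (my own toy illustration, not a Bridgewater or M-Files implementation): each decision-maker, human or agent, carries a score that moves with the recorded outcomes of their decisions, and that score can then weight how much their future annotations count for.

```python
class BelievabilityIndex:
    """Toy believability tracker: scores move with recorded decision outcomes."""
    def __init__(self, initial=0.5, learning_rate=0.1):
        self.scores = {}          # decision-maker id -> score in [0, 1]
        self.initial = initial
        self.lr = learning_rate

    def record(self, maker, outcome_good):
        # Nudge the score toward 1 for good outcomes, toward 0 for bad ones.
        score = self.scores.get(maker, self.initial)
        target = 1.0 if outcome_good else 0.0
        self.scores[maker] = score + self.lr * (target - score)

    def weight(self, maker):
        """Use the score to weight this maker's annotations or decisions."""
        return self.scores.get(maker, self.initial)

idx = BelievabilityIndex()
for good in [True, True, True, False]:   # hypothetical agent with a strong record
    idx.record("agent-7", good)
idx.record("agent-9", False)             # hypothetical agent with one bad call

# A better track record earns more weight in future decisions:
assert idx.weight("agent-7") > idx.weight("agent-9")
```

The exponential-moving-average update is just one possible choice; the structural point is that decision quality becomes a first-class, queryable property of the graph rather than something that lives in people's heads.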
He called it the shift from a "knowledge graph" (a point-in-time snapshot of what your organisation is) to a "context graph" (how your organisation has changed over time and why). That's where organisational learning lives. And that's where AI agents will get their learning from too.
This is exactly the kind of thinking I've been exploring around leadership judgement and how organisations can build systems that capture, measure, and improve the quality of decisions over time, not just the speed of them.