The practice

Why Human–AI Systems exists

Most organisations are at the early stages of AI adoption. They're running pilots. Some of them are working. Fewer are changing how work actually happens at scale.

The gap isn't technology. Organisations have access to excellent tools. The gap is operational adoption — the hard work of redesigning workflows, building governance that is rigorous without being obstructive, and developing the leadership capability to manage teams where AI is part of how work gets done.

Human–AI Systems was founded to help organisations close that gap. The approach is practical and evidence-based, shaped by direct experience of what it takes to move from experimentation to operational adoption — and what goes wrong when organisations try to skip the difficult parts.

"I work with organisations that are serious about adoption, not just experimentation. That usually means starting with an honest diagnosis of where AI can genuinely create value — and designing programmes that are measurable, governable, and built to last."

Commercial AI transformation

Leading it at Verimatrix

At Verimatrix — a publicly listed cybersecurity company — Mike led the company's AI transformation programme. This was not a pilot project or an innovation initiative. It was a programme designed to change how engineering teams actually worked: how they wrote code, reviewed it, managed technical debt, and shipped product.

That meant confronting what most AI transformation programmes avoid: the workflow design questions, the accountability questions, and the governance questions. What happens when AI makes a mistake in production code? Who owns the output? How do you measure productivity when the nature of the work has changed?

Outcomes — Verimatrix AI Transformation

~£1M engineering productivity improvement

~£3M revenue impact attributed to the programme

AI governance framework designed, built and approved at board level in a listed company

The board approval process for the governance framework was, in some ways, the most instructive part. It required translating the operational reality of AI adoption into language that directors could interrogate, challenge and ultimately stand behind. That experience — of making AI governance real at the highest level of a listed company — is directly relevant to any organisation facing similar scrutiny.

Public sector AI adoption

Cabinet Member at Cotswold District Council

As a Cabinet Member at Cotswold District Council, Mike held the digital and climate strategy portfolio — including responsibility for the council's approach to AI. He co-authored the council's AI Policy and oversaw the rollout of Microsoft Copilot to around 250 officers and elected members.

AI adoption in local government is different from commercial settings in almost every dimension: the governance environment is more complex, the consequences of getting it wrong are more public, the resource constraints are tighter, and the workforce is more varied in its starting point with technology. Adoption there is slower, but not because people are resistant. It's slower because the design constraints are genuine and need to be taken seriously.

Outcomes — Cotswold District Council

AI Policy co-authored and adopted by the council

~250 officers and elected members onboarded to Microsoft Copilot

Governance framework designed for a politically accountable environment

That combination of commercial and public sector experience is not common. It means the governance instincts developed at Verimatrix translate directly into the scrutiny environment of local government — and vice versa.

Earlier career

Enterprise technology at scale

Before founding Human–AI Systems, Mike spent over two decades in enterprise technology — including a senior leadership role at Cisco, where he held responsibility for a £430M EMEA Public Sector P&L. That background — building and leading large commercial technology organisations across major enterprise and public sector accounts — is the business context in which the firm's AI adoption advice is grounded.

Mike's career has spanned enterprise SaaS, cybersecurity, public sector digital, and AI transformation. It is not a background in AI research or software development. It is a background in making technology work inside real organisations, and in understanding the leadership, governance and operational design required to do that.

Mike McKeown, founder of Human–AI Systems

The approach

How the work gets done

Three principles that shape every engagement, regardless of size or sector.

Start with diagnosis, not prescription

Every organisation is different. The right starting point for AI adoption depends on what workflows exist, what governance environment applies, and what the organisation is genuinely ready for. The Radar exists because good advice has to be grounded in reality, not in a standard model.

Governance is a design problem, not a compliance exercise

Well-designed governance enables adoption — it doesn't obstruct it. The organisations that get AI wrong usually have governance that arrived too late, was too vague to be useful, or was never connected to how work actually happens. Getting it right from the start is the whole point.

The goal is an organisation that works without us

Every engagement is designed to leave the organisation more capable, not more dependent. That means building internal understanding, designing governance that the organisation can sustain, and developing the leadership capability to manage human-AI work going forward.

The team

Human leadership, AI workforce

Human–AI Systems operates the same model it helps clients build. Mike provides the leadership, judgement and accountability. An AI team — each named after a computing pioneer — handles defined roles under his direction. It's a practical demonstration of the Worker model in operation.

Meet the team

Start with a conversation

If you'd like to talk through your situation — where you are with AI adoption, what you're trying to achieve, and whether there's a fit — book 30 minutes.