AI governance programmes produce policy. HIA tests whether that policy can be enacted.
Governance frameworks define what should happen when a system behaves in ways it should not. What they cannot produce is the organisational capability to act on that policy under real conditions: under time pressure, with incomplete information, against authority structures that were not designed for this moment.
HIA operates in that gap. We assess whether your organisation can identify a failure in time, escalate with the correct authority, and intervene before consequences compound. That is a different question from whether your governance framework is well designed. It is the more consequential one.
- Whether your escalation paths function under real operational conditions, not just on paper.
- Whether decision-makers have the authority, information, and clarity to act at speed.
- Whether your board would know, in time, that it needed to.
Where the failure lives
Governance Design Layer: well designed, rigorously documented. It cannot, by itself, ensure a board can act when it needs to.
HIA Intervention Layer: policy governs intent. HIA tests whether that intent can be enacted.
The failure lives in the gap between the two.
Three areas of focused engagement. Each is designed to answer a specific question that governance frameworks alone cannot answer.
- Boardroom Integrity Advisory: decision clarity when governance is tested under pressure.
- Intervention & Decision Architecture Review: the mechanics of escalation, override, and authority in AI-dependent operations.
- Executive Intervention Labs: scenario-based work to expose intervention capability before an event makes discovery unavoidable.
Most organisations that need HIA do not have a defined problem. They have a set of conditions that a board member would recognise.
Your AI governance framework is documented and has been reviewed. You are not certain what would happen if a live system produced an outcome that required an urgent response. You do not know, precisely, who would be the first person with authority to act, or how long that would take. You have not tested it.
You are not negligent. You are in the same position as most organisations running consequential AI systems. The gap between governance design and operational capability is structural. It is not resolved by better documentation.
If any of that is recognisable, it is worth a conversation.
The crisis begins the moment a system fails. Authority to stop it travels upward through people who cannot act, past approval layers that add latency and nothing else. By the time the board is told, the moment for low-cost intervention has passed. What follows is not crisis management. It is damage limitation.
- An AI-driven decision produces outputs outside expected parameters. The system runs on.
- Board unaware. No signal has reached executive level. No one with authority to act has been told.
- Deviation noticed. Damage accumulating. Decision still running. Escalation route unclear.
- Signal reaches operations. The first person in the chain lacks the authority to halt the system.
- Still running. Cost compounding with every hour. The window of low-cost intervention is closing.
- Issue received at management level. Authority to act uncertain. Escalation continues upward.
- Uncontainable. Regulatory exposure. Reputational damage. Financial liability. The cost of discovery is no longer manageable.
- The board is informed. The intervention window has already closed.
A board that cannot intervene is not a safeguard. It is a witness.
The core question HIA exists to answer
Specialist governance firms, including Ethica Group, focus on the design and structure of AI ethics frameworks, board oversight mechanisms, and institutional accountability. This is essential work. A well-designed governance system defines what should happen.
HIA operates in the space that governance design cannot reach: the moment of live system failure. Policy governs intent. HIA tests whether that intent can be enacted when it needs to be.
These are not competing positions. Most organisations that engage HIA already have a governance framework in place. The question is not whether it is well designed. The question is whether it will function under real conditions — and whether the people responsible for acting on it actually can.
HIA operates independently of governance design and review functions. That independence is deliberate.
Most organisations score well on governance design. They score poorly on operational intervention capability. The gap between the two is where HIA works.
An HIA engagement begins with a single question: can your organisation actually intervene, right now, in a live AI-driven failure?
Not in theory. Not according to your governance documentation. In practice, under real time pressure, with the people and authority structures you actually have.
The first session is a confidential conversation with relevant senior leadership. No questionnaire. No preparatory documents. We map the escalation chain as it currently exists — who gets told, in what order, with what authority to act. Most organisations discover the critical gaps in that first conversation.
- Assessment findings are presented directly to the board or executive team — not filtered through a project sponsor, not softened for internal politics. A clear account of what your organisation can and cannot do, and what needs to change.
- Where intervention labs are engaged, scenarios are drawn from real system failure patterns, not hypotheticals. Participants make decisions under conditions that approximate genuine operational pressure.
- HIA works with a deliberately small number of organisations at any one time. That is a constraint, not a growth ambition. Each engagement receives the full attention of senior advisors with direct operational experience of AI systems.
Most organisations do not know their actual intervention capability until they need it.
The question is not whether your governance framework is well designed. It is whether, when an AI-driven system produces an outcome that demands a response, your organisation can provide one — in time, with the right authority, acting on the right information.
If you do not know the answer with confidence, that is the answer.
A confidential initial conversation costs nothing and commits you to nothing. It will tell you whether an HIA engagement is relevant.