Integrity & Intervention Advisory for AI Systems

About HIA

Built at the boundary of AI systems and decision-making. Independent by design. Grounded in applied operational experience, not theoretical frameworks.

00

James Saint
Founder

James Saint is the founder of Human Integrity Advisory. His background spans applied AI systems work in financial intelligence environments, including direct operational exposure to how AI-driven decisions are made, contested, and escalated in high-consequence contexts. That experience is the foundation HIA is built on.

HIA works in collaboration with Professor Nicholas Ryman-Tubb (University of Surrey), whose research on neural network explainability in financial crime detection informs the technical grounding of HIA's assessments. The combination of operational experience and a rigorous technical foundation is what distinguishes HIA's approach from advisory work that operates at the level of frameworks alone.

01

HIA was established to address a specific gap: the distance between what governance structures are designed to do, and what they are capable of doing when a real system event demands a response.

Governance frameworks define what should happen. They almost never determine what will happen under operational pressure: when information is incomplete, time is short, and the people responsible for acting must navigate authority structures that were not designed with this moment in mind.

HIA exists for the moment after governance design. The moment that governance is tested.

This is not ethics consulting. It is not AI safety philosophy. It is a practical intervention into the gap between governance as written and governance as it functions in live conditions.

If you are looking for reassurance, HIA is not the right advisor. If you want a clear account of what is actually true about your organisation's intervention capability and are prepared to act on it, we may work well together.

02

HIA's work is grounded in applied experience with AI systems as they operate in practice: in the conditions under which real systems produce real outputs and real people are required to make consequential decisions about those outputs.

That includes direct experience in Financial Intelligence & Technical Systems (FITS) environments, defined by real-time decision requirements, significant regulatory exposure, and the human cost of errors in either direction. These environments expose precisely the failure modes HIA now assesses.

The technical foundation comes from collaborative work with Professor Nicholas Ryman-Tubb (University of Surrey) on neural network explainability for payment card fraud detection, a domain where the tension between model behaviour, regulatory scrutiny, and human consequence is immediate.

Technical fluency is not the goal. It is what keeps the assessment anchored in how systems actually behave, not how they are described.

03

HIA operates independently. We are not affiliated with any technology vendor, governance body, regulatory authority, or advisory network. Our assessments are not shaped by the interests of any governance framework vendor or by the relationships that govern how most advisory work is commissioned.

This matters because the question HIA answers (can your organisation intervene in time?) requires an assessor whose interests are entirely aligned with an accurate answer. Not a reassuring one. Not a commercially convenient one. An accurate one.

Where HIA works alongside specialist governance firms, such as Dr. Joanna Michalska of Ethica Group, whose practice focuses on ethics frameworks and board-level governance design, we do so as a separate, independent voice. The governance layer and the intervention layer must be assessed independently for either assessment to have value.

04
  • An independent assessment of intervention capability that is not shaped by those who designed your governance framework.
  • Practitioner-level understanding of AI system behaviour in operational environments, not a compliance checklist.
  • Accurate translation between engineering, legal, risk, and executive domains, without distortion in any direction.
  • A clear account of where your decision architecture fails before a system event makes that account unavoidable.
The throughline is precision. We tell you what is real about your organisation's intervention capability, not what is reassuring about your governance framework.

05
Financial Services
Critical Infrastructure
Regulated Technology
Healthcare Systems
Defence & Intelligence
Professional Services

06

Assessment before advice

Every engagement begins with structured assessment. We do not arrive with pre-formed recommendations. We determine what is actually true about your organisation's intervention capability before we say anything about it.

Practitioner perspective

Our assessments are conducted by people who have worked inside complex AI systems environments. We evaluate your governance structure against operational reality, not a framework.

Accuracy over assurance

If you are looking for reassurance, we are not the right advisor. If you want a clear view of what is actually true, and are prepared to act on it, we may work well together.

Limited engagements

HIA works with a deliberately limited number of organisations. Each engagement receives the full attention of senior advisors. We are not a volume business.

07

If you are responsible for AI governance, board oversight of AI systems, or the decision architecture surrounding AI-dependent operations, we offer an initial confidential conversation to establish context and determine whether an engagement is a fit.

All communication is treated as private and off the record unless explicitly agreed otherwise in writing.