Your AI needs a governance case. We build it.
The EU AI Act applies to UK businesses selling into European markets — and most haven't started preparing. We help organisations build practical AI governance that meets regulatory requirements and survives audit, without the overhead of a Big Four consultancy.
The EU AI Act is not optional for UK businesses
If your AI system affects anyone in the EU — customers, employees, users — you're in scope. Brexit doesn't exempt you. Here's what most businesses are getting wrong.
Extraterritorial reach
The Act applies to any AI system whose output is "used" in the EU — regardless of where the provider is based. If your product serves European customers, you need to comply.
High-risk classification
AI used in recruitment, credit scoring, education, healthcare, and critical infrastructure is classified "high-risk" under Annex III — requiring conformity assessments, documentation, and human oversight.
No one owns it yet
Most organisations have no designated AI governance owner. The Act requires documented accountability — "the IT team handles it" is not a compliance position.
Governance designed around your people, not just your risk register
Most compliance programmes fail because they're designed for auditors, not for the teams who have to live with them. We use user-centred design methodology to build governance that your people actually follow — because it was built with them, not imposed on them.
Understand your context
We start by mapping how AI decisions actually flow through your organisation — who touches them, who's affected, where the pressure points are. Not a questionnaire. A conversation.
Design with your teams
Governance only works if the people using it helped shape it. We co-design oversight processes with your product, engineering, and leadership teams — so what we build fits how you actually work.
Build for reality
We produce documentation, risk frameworks, and oversight mechanisms that are usable — not 80-page PDFs that sit in SharePoint. Every deliverable is tested against your actual workflows before handover.
Make it defensible
The end result isn't just compliant — it's defensible. If a regulator, client, or board member asks "how do you govern AI?", you have a clear, evidenced answer that holds up under scrutiny.
Typical consultancy
- Sends a questionnaire
- Produces a risk matrix
- Hands over a 60-page PDF
- Leaves you to implement
How we work
- Maps your actual decision flows
- Co-designs with the people involved
- Builds governance into your workflows
- Tests that it works before handover
Three ways to begin
Every engagement starts with a conversation about your specific situation. These are typical starting points — not rigid packages. We scope to your needs.
“We need to know where we stand”
You suspect your AI systems are in scope for the EU AI Act, but you're not sure what that means in practice. We map your exposure, classify your risk, and give you a clear picture — so your next move is informed, not reactive.
- Walk through your AI systems together — how they work, who they affect
- Classify each system under the Act's risk framework
- Identify the gaps between where you are and what's required
- Hand you a prioritised action plan you can act on immediately
“We need to get compliant”
You know you're in scope and you need governance that actually works — not a framework document that sits in a drawer. We co-design your compliance programme with the people who'll use it, then build the documentation and processes to back it up.
- Co-design oversight processes with your product and engineering teams
- Conduct Fundamental Rights Impact Assessments for each high-risk system
- Build documentation and evidence packages for audit, not for show
- Train your teams so governance is embedded, not consultant-dependent
“We need someone in our corner”
AI regulation is moving fast. You've done the initial work, but you need a trusted adviser who understands your systems, tracks the regulatory landscape, and is there when your board or clients have questions.
- Monthly governance review — we know your systems, so advice is specific
- Regulatory monitoring — enforcement actions, guidance changes, national rules
- Board-ready reporting so leadership has a clear AI governance narrative
- Direct access when something urgent comes up — client queries, incidents, procurement
See the thinking behind our approach
Download these resources to understand what the EU AI Act means for your business — no email gate, no sales pitch.
EU AI Act: What UK Businesses Need to Do Before August 2026
One-page briefing covering the deadline, who's in scope, what high-risk means, and the 5 steps to start now. Cited sources throughout.
Read the briefing →
AI Risk Classifier
Answer 5 questions about your AI system and find out whether it's likely classified as prohibited, high-risk, limited-risk, or minimal-risk under the Act.
Try the classifier →
EU AI Act Compliance Checklist
The 12-point checklist we use with our own clients. Covers system inventory, risk classification, documentation, human oversight, and registration requirements.
Get the checklist →
Find out where your AI system sits
The EU AI Act classifies AI systems into four risk tiers: prohibited, high-risk, limited-risk, and minimal-risk. Your obligations — and the deadline pressure — depend entirely on which tier applies to you. This 60-second assessment gives you an indicative classification based on five questions about what your system does and where it operates.
Is your AI system high-risk?
Answer these questions to get an indicative risk classification under the EU AI Act.
Does your AI system make or influence decisions about individual people?
For example: hiring decisions, credit scoring, medical diagnosis, student assessment, insurance pricing, or benefit eligibility.
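The tier logic described above can be sketched in a few lines. This is a hypothetical simplification for illustration only: the function name, the four yes/no inputs, and the decision order are assumptions, not the actual assessment tool or legal advice.

```python
# Hypothetical sketch of an indicative EU AI Act risk-tier screen.
# The tier names mirror the Act's four categories; the question logic
# is a simplification, not the real classifier or legal advice.

def indicative_tier(
    eu_reach: bool,               # is the system's output used in the EU?
    prohibited_practice: bool,    # e.g. social scoring, banned biometric uses
    annex_iii_use: bool,          # recruitment, credit scoring, education, health...
    interacts_with_people: bool,  # chatbots etc. -> transparency duties
) -> str:
    if not eu_reach:
        return "out of scope (indicative)"
    if prohibited_practice:       # banned outright, highest severity first
        return "prohibited"
    if annex_iii_use:             # Annex III use cases
        return "high-risk"
    if interacts_with_people:     # transparency obligations only
        return "limited-risk"
    return "minimal-risk"

# A UK firm selling an AI hiring tool into France:
print(indicative_tier(True, False, True, True))  # high-risk
```

The real assessment weighs more nuance than four booleans, which is why the tool's result is labelled indicative.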
Not another generic consultancy
AI governance is a design problem — it requires processes that real people can actually follow. That's where 20 years of user-centred design meets regulatory compliance.
Active government delivery
Currently delivering on UK government AI programmes — we see how governance is interpreted in practice, not just in policy documents.
Human-centred methodology
20+ years designing processes people actually use. Your governance framework won't gather dust — it'll be embedded in how your teams work.
Cross-sector experience
Defence, professional services, and public sector. AI governance challenges are sector-specific — we bring patterns from across industries.
SME-friendly pricing
Big Four firms charge £200k+ for AI governance programmes. We deliver the same rigour at a fraction of the cost, because SMEs deserve expert guidance too.
AI governance tailored to your industry
Every sector has its own regulatory landscape, procurement requirements, and operational realities. We build governance frameworks that speak your industry's language.
Common questions about AI governance
Straight answers, no jargon.
Does the EU AI Act apply to UK businesses after Brexit?
Yes, if your AI system's output is used within the EU — by customers, employees, or users — you're in scope regardless of where your company is based. This is similar to how GDPR applies extraterritorially.
What counts as a high-risk AI system?
The Act's Annex III defines specific use cases: recruitment and HR decisions, credit scoring, education assessment, healthcare diagnosis, critical infrastructure management, and others. If your AI makes or influences significant decisions about people, it's likely high-risk.
What are the deadlines?
High-risk AI system obligations take effect on 2 August 2026. Prohibited AI practices (social scoring, certain biometric uses) are already banned as of February 2025. General-purpose AI model rules apply from August 2025.
How does the AI Act relate to GDPR?
GDPR governs personal data. The EU AI Act governs AI systems specifically — including their design, testing, documentation, and deployment. They overlap (AI often processes personal data), but the AI Act adds requirements around risk assessment, human oversight, and transparency that go beyond data protection.
Does company size affect our obligations?
Size doesn't determine scope — use case does. A 10-person startup selling an AI hiring tool into France has the same obligations as a multinational. The difference is that SMEs can build lean governance frameworks; they don't need the same infrastructure as HSBC.
What are the penalties for non-compliance?
Fines up to €35 million or 7% of global annual turnover, whichever is higher. Beyond fines, non-compliance can mean your AI system is banned from the EU market entirely — which means losing access to 450 million potential users.
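The "whichever is higher" rule is easy to misread, so here is the arithmetic, sketched as a tiny illustration (the function name is ours; the €35m / 7% figures are the Act's headline cap for the most serious infringements):

```python
# Illustrative only: the EU AI Act's headline cap is the HIGHER of
# EUR 35 million or 7% of global annual turnover.
def max_fine_eur(global_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_turnover_eur)

# For a EUR 200m-turnover firm, the flat EUR 35m cap dominates;
# above EUR 500m turnover, the 7% figure takes over.
print(max_fine_eur(200_000_000))
print(max_fine_eur(1_000_000_000))
```

The crossover sits at €500m turnover (7% of €500m = €35m), which is why the percentage figure is what matters for larger firms.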
AI governance readiness takes months, not weeks.
Start now — regardless of deadline shifts.
Procurement teams, investors, and auditors will ask for evidence of AI governance readiness — whether the deadline is August 2026 or later. Book a free 30-minute scoping call to assess your position.