EU AI Act - Compliance and Risk Classification

Navigate EU AI Act requirements with confidence. From risk classification to conformity assessment - we make AI regulation practical.


What we offer

The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive AI regulation. It introduces a risk-based classification system for AI systems, with requirements ranging from transparency obligations to full conformity assessments.

We help you cut through the regulatory complexity and build a practical compliance roadmap. Our EU AI Act consulting covers the full journey, from initial inventory through implementation, following the approach outlined below.

Our approach

The EU AI Act is complex, but compliance doesn’t have to be overwhelming. We break it down into manageable steps:

  1. AI system inventory - We help you identify and catalog every AI system in your organization, including those you might not think of as “AI.”
  2. Risk classification - We determine the risk level for each system based on the regulation’s criteria. Many systems fall under minimal risk with only transparency requirements.
  3. Gap analysis - For systems that need attention, we identify exactly what’s missing and what needs to change.
  4. Compliance roadmap - We create a prioritized plan that aligns with the regulation’s phased enforcement timeline.
  5. Implementation support - We work with your team to implement the required changes, from documentation to technical controls.
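The inventory and classification steps above can be sketched as a simple catalog record. This is an illustrative data model only; the field names and the `RiskTier` shorthand are our own simplification, not terms defined by the regulation:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class RiskTier(Enum):
    """Simplified shorthand for the EU AI Act's risk-based tiers."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"   # transparency obligations only
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (step 1 of the approach)."""
    name: str
    purpose: str
    vendor: Optional[str] = None            # None for in-house systems
    risk_tier: Optional[RiskTier] = None    # None until classified (step 2)
    gaps: List[str] = field(default_factory=list)  # filled during gap analysis

# Example: a customer-facing chatbot is typically limited risk,
# carrying transparency obligations rather than a full conformity assessment.
chatbot = AISystemRecord(
    name="Support chatbot",
    purpose="customer service",
    risk_tier=RiskTier.LIMITED,
)
```

A catalog of such records gives the gap analysis and roadmap steps something concrete to work from.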

Why choose us

The EU AI Act doesn’t exist in isolation. It intersects with GDPR, the NIS2 Directive, and sector-specific regulations. Our experience with EU regulatory frameworks means we understand how these pieces fit together.

We also know ISO 42001. If you’re pursuing both ISO 42001 certification and EU AI Act compliance, there’s significant overlap. We’ll help you build a unified approach that satisfies both without duplicating work.

And honestly? Most of your AI systems probably fall under minimal or limited risk. We’ll tell you that upfront instead of inflating the scope to justify higher fees.

Key timelines

The EU AI Act entered into force in August 2024, with a phased enforcement schedule:

  - February 2025 - The ban on prohibited AI practices applies.
  - August 2025 - Requirements for general-purpose AI (GPAI) models apply.
  - August 2026 - Requirements for high-risk AI systems become enforceable.

Next step

Not sure how the EU AI Act applies to your AI systems? Let’s have a conversation. We’ll help you understand your obligations - and if the honest answer is “you’re mostly fine,” we’ll tell you that too.

Let’s talk - no commitment, just clarity on where you stand.


Frequently Asked Questions

Does the EU AI Act apply to companies outside the EU?
Yes. If your AI system is used by people in the EU or its output is used in the EU, the regulation applies regardless of where your company is based. This extraterritorial reach is similar to the GDPR's.
How do I know if our AI systems are high-risk?
The EU AI Act defines high-risk AI through two main paths: AI systems used as safety components of regulated products (medical devices, vehicles), and AI systems in specific use cases listed in Annex III (employment, credit scoring, law enforcement, etc.). We'll help you classify every system in your inventory.
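The two classification paths described above can be sketched as a simple triage check. The category sets below are abbreviated illustrations, not the full Annex I and Annex III enumerations, so a real assessment must consult the regulation itself:

```python
# Hedged sketch of high-risk triage under the EU AI Act's two main paths.
# The sets below are abbreviated examples only, not the complete lists.

SAFETY_COMPONENT_PRODUCTS = {"medical device", "vehicle", "machinery"}
ANNEX_III_AREAS = {"employment", "credit scoring", "law enforcement",
                   "education", "migration"}

def is_high_risk(product_category: str = "", use_case: str = "") -> bool:
    """Return True if either high-risk path applies (simplified)."""
    # Path 1: AI used as a safety component of a regulated product
    path_1 = product_category in SAFETY_COMPONENT_PRODUCTS
    # Path 2: AI used in a sensitive area listed in Annex III
    path_2 = use_case in ANNEX_III_AREAS
    return path_1 or path_2

# A CV-screening tool falls under Annex III (employment) -> high risk
print(is_high_risk(use_case="employment"))
# A weather-forecasting app matches neither path -> not high risk
print(is_high_risk(use_case="forecasting"))
```

In practice each path involves further conditions and exemptions; the point of the sketch is that classification is a per-system decision, which is why a complete inventory comes first.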
What are the penalties for non-compliance?
Fines can reach up to 35 million EUR or 7% of global annual turnover, whichever is higher, for prohibited AI practices, and up to 15 million EUR or 3% for other violations. But the real risk is market access: non-compliant high-risk AI systems cannot be placed on the EU market.
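Because the cap is the higher of a fixed amount and a turnover percentage, the ceiling scales with company size. A minimal arithmetic sketch (amounts in EUR, using the figures above):

```python
# Sketch of the EU AI Act fine ceilings: the applicable cap is the
# higher of the fixed amount and the percentage of global annual turnover.
# Integer arithmetic keeps the examples exact.

def fine_ceiling(turnover_eur: int, prohibited_practice: bool) -> int:
    """Maximum possible fine for a company with the given turnover."""
    if prohibited_practice:
        return max(35_000_000, turnover_eur * 7 // 100)
    return max(15_000_000, turnover_eur * 3 // 100)

# For a company with 1 billion EUR turnover, the 7% cap dominates:
print(fine_ceiling(1_000_000_000, prohibited_practice=True))   # 70_000_000
# For a small company, the fixed amount is the ceiling:
print(fine_ceiling(10_000_000, prohibited_practice=False))     # 15_000_000
```

These are ceilings, not automatic fines; actual penalties are set by national authorities case by case.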
When do we need to be compliant?
The ban on prohibited practices has been in effect since February 2025. GPAI model requirements apply from August 2025. High-risk AI system requirements become enforceable in August 2026. The timeline that matters for you depends on your specific AI systems and their risk classification.
Can we handle EU AI Act compliance and ISO 42001 together?
Absolutely — and we recommend it. There's significant overlap between the two. ISO 42001's AI risk assessment, documentation, and monitoring requirements map directly to many EU AI Act obligations. A unified approach saves time, reduces duplication, and gives you both a certification and regulatory compliance.