
EU AI Act Explained: Who It Affects, Requirements & Penalties

Metrica.uno Team
5 min read
#EU AI Act #artificial intelligence #regulation #compliance #risk classification

The EU AI Act is the world’s first comprehensive law regulating artificial intelligence. It establishes rules for how AI systems are developed, deployed, and used across the European Union. Your algorithm, your responsibility.

The regulation takes a risk-based approach: the higher the risk an AI system poses to people’s safety, rights, or well-being, the stricter the requirements. Some AI systems are banned outright. Others need rigorous conformity assessments. Most face transparency obligations.

Who Does the EU AI Act Affect?

The EU AI Act has a broad reach. It applies to:

  • Providers — anyone who develops or has an AI system developed and places it on the EU market (regardless of where they’re based)
  • Deployers — organizations that use AI systems within the EU
  • Importers and distributors — entities bringing AI systems into the EU market
  • Product manufacturers — when AI is embedded in regulated products (medical devices, vehicles, machinery)

The Extraterritorial Effect

Like GDPR, the AI Act reaches beyond EU borders. If you’re a US company selling AI-powered software to European customers, or a Chinese manufacturer exporting AI-equipped products to the EU market, the AI Act applies to you.

Risk Categories

The EU AI Act classifies AI systems into four risk tiers:

  • Unacceptable risk — banned outright. Examples: social scoring, real-time biometric surveillance in public spaces, manipulation of vulnerable groups
  • High risk — strict requirements plus a conformity assessment. Examples: AI in hiring, credit scoring, criminal justice, critical infrastructure, education, healthcare
  • Limited risk — transparency obligations. Examples: chatbots, emotion recognition, deepfake generators
  • Minimal risk — no specific obligations. Examples: spam filters, AI in video games, basic recommendation systems

Key Requirements

For High-Risk AI Systems

High-risk systems face the most demanding requirements:

  • Risk management system — ongoing identification, analysis, and mitigation of risks throughout the AI system’s lifecycle
  • Data governance — training data must be relevant, representative, and, to the extent possible, free of errors and complete. Bias detection and mitigation are mandatory
  • Technical documentation — comprehensive documentation covering design, development, testing, and performance
  • Record-keeping — automatic logging of events to enable traceability (a minimal sketch follows this list)
  • Transparency — deployers must be informed about the system’s capabilities, limitations, and intended use
  • Human oversight — systems must be designed so humans can effectively oversee their operation and intervene when necessary
  • Accuracy, robustness, and cybersecurity — systems must perform consistently and resist manipulation
  • Conformity assessment — before market placement, high-risk systems must undergo assessment (self-assessment or third-party, depending on the domain)
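
As a concrete illustration of the record-keeping requirement above, here is a minimal sketch of per-decision audit logging. The schema, field names, and the `audit_log.jsonl` path are illustrative assumptions, not something the Act prescribes.

```python
import json
import uuid
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, score: float, decision: str,
                 reviewer: str | None = None, path: str = "audit_log.jsonl") -> str:
    """Append one traceable record per automated decision (illustrative schema)."""
    record = {
        "event_id": str(uuid.uuid4()),              # unique ID so individual decisions can be audited
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,             # which model produced the output
        "inputs": inputs,                           # or a hash/reference if inputs are sensitive
        "score": score,
        "decision": decision,
        "human_reviewer": reviewer,                 # None when no human was in the loop
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["event_id"]
```

The point is less the exact format than that every automated decision leaves a timestamped, reviewable trail.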

For General-Purpose AI (GPAI) Models

The AI Act includes specific rules for foundation models and general-purpose AI:

  • Technical documentation and transparency — model providers must document training methods, data sources, and capabilities
  • Copyright compliance — providers must put in place a policy to respect EU copyright law and publish a sufficiently detailed summary of the content used for training
  • Systemic risk — GPAI models that pose systemic risk (presumed when training compute exceeds 10^25 FLOPs) face additional obligations: adversarial testing, serious incident reporting, and risk mitigation
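
The 10^25 FLOP figure is easiest to read with a rough estimate. The sketch below uses the common "6 × parameters × training tokens" heuristic for training compute; both the heuristic and the example model sizes are assumptions for illustration, not part of the Act.

```python
# Rough training-compute estimate using the common "6 * N * D" heuristic
# (about 6 FLOPs per parameter per training token). Illustrative numbers only.
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, the AI Act's presumption threshold for GPAI models

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

for name, params, tokens in [
    ("mid-size model", 7e9, 2e12),           # 7B parameters, 2T tokens
    ("frontier-scale model", 1e12, 15e12),   # 1T parameters, 15T tokens
]:
    flops = training_flops(params, tokens)
    flagged = flops >= SYSTEMIC_RISK_THRESHOLD
    print(f"{name}: ~{flops:.1e} FLOPs -> systemic risk presumption: {flagged}")
```

On these made-up numbers, only the frontier-scale run crosses the threshold, which is roughly where today's largest foundation models sit.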

Fundamental Rights Impact Assessment

Deployers of high-risk AI in public services, banking, insurance, and healthcare must conduct an assessment of the impact on fundamental rights before deployment.

Why the EU AI Act Matters

  • First-mover standard: The EU AI Act is setting the global template. Other jurisdictions are watching and following.
  • Market access: To sell AI products and services in the EU, compliance is non-negotiable. The EU single market is 450 million consumers.
  • Trust advantage: Organizations that can demonstrate AI governance build stronger relationships with customers, partners, and regulators.
  • Risk mitigation: The Act’s requirements — bias testing, human oversight, transparency — are simply good AI engineering practices. Compliance makes your systems better.

What Happens If You Don’t Comply

The Fines

The EU AI Act's penalty ceilings are even higher than the GDPR's:

  • Prohibited AI practices — up to €35 million or 7% of global annual turnover
  • Other AI Act violations — up to €15 million or 3% of global annual turnover
  • Supplying incorrect information — up to €7.5 million or 1.5% of global annual turnover

For most companies, the higher of the two figures applies; for SMEs and startups, fines are capped at the lower of the two.
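
As a quick illustration of how the cap combines with turnover, the sketch below computes the applicable maximum for the prohibited-practices tier; the `max_fine` helper and the turnover figures are hypothetical.

```python
def max_fine(fixed_cap_eur: float, pct_of_turnover: float, turnover_eur: float,
             is_sme: bool = False) -> float:
    """Illustrative cap: higher of the two figures for most companies, lower for SMEs."""
    pct_amount = pct_of_turnover * turnover_eur
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)

# Prohibited-practice tier: EUR 35 million or 7% of global annual turnover
print(max_fine(35e6, 0.07, 2e9))                 # large firm: 7% of 2B = 140M
print(max_fine(35e6, 0.07, 20e6, is_sme=True))   # SME: 7% of 20M = 1.4M (lower figure applies)
```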

A Scenario That Destroys Careers

This is an illustrative scenario based on real discrimination patterns in AI systems.

A recruitment platform uses AI to screen 100,000 CVs per year for its enterprise clients. The model was trained on 10 years of historical hiring data from a large corporation. What nobody noticed: that corporation historically promoted men to leadership positions at twice the rate of women.

The AI learned this pattern. Without anyone configuring it to discriminate, the system systematically scored women 20% lower for management positions. Over two years, approximately 12,000 qualified women were silently ranked below less-qualified male candidates.

A journalistic investigation exposes the pattern. The company faces:

  • A fine of up to €15 million or 3% of global annual turnover — the AI was never classified as high-risk, despite being used in employment decisions
  • No conformity assessment was performed
  • No fundamental rights impact assessment existed
  • No human oversight — hiring managers trusted the AI scores blindly
  • No bias testing was ever conducted on the training data
  • A discrimination lawsuit from 12,000 rejected candidates
  • A PR crisis that destroys the company's employer brand and its clients' reputations

The AI worked exactly as designed. The problem was that nobody checked what it was designed to do.

How to Get Started

1. Inventory Your AI Systems

You can’t manage what you don’t know. Document every AI system your organization develops, deploys, or uses (a minimal inventory sketch follows this list). Include:

  • What does it do?
  • What data does it use?
  • Who does it affect?
  • What decisions does it influence?
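
One way to keep the inventory usable is a small machine-readable record per system. The schema below is an illustrative assumption, not a format the Act prescribes.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """Minimal inventory entry for one AI system (illustrative schema)."""
    name: str
    purpose: str                      # what does it do?
    data_sources: list[str]           # what data does it use?
    affected_groups: list[str]        # who does it affect?
    decisions_influenced: list[str]   # what decisions does it influence?
    owner: str = "unassigned"         # accountable team or person

inventory = [
    AISystemRecord(
        name="cv-screening-model",
        purpose="Rank incoming CVs for recruiters",
        data_sources=["historical hiring data", "applicant CVs"],
        affected_groups=["job applicants"],
        decisions_influenced=["interview shortlisting"],
        owner="talent-acquisition",
    ),
]
```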

2. Classify Risk Levels

For each AI system, determine its risk category under the EU AI Act. Pay special attention to AI used in hiring, credit decisions, law enforcement, healthcare, and education — these are likely high-risk.
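
As a first pass, you can triage systems by domain and have a compliance owner confirm the result. The domain lists in the sketch below are a loose approximation of the Act's Annex III categories, not the legal text.

```python
HIGH_RISK_DOMAINS = {  # rough approximation of Annex III areas, for triage only
    "hiring", "credit scoring", "law enforcement", "critical infrastructure",
    "education", "healthcare", "migration", "justice",
}
LIMITED_RISK_DOMAINS = {"chatbot", "emotion recognition", "deepfake generation"}

def triage_risk(domain: str) -> str:
    """Heuristic first pass; the final classification needs legal review."""
    if domain in HIGH_RISK_DOMAINS:
        return "high risk"
    if domain in LIMITED_RISK_DOMAINS:
        return "limited risk"
    return "minimal risk (verify no prohibited practice applies)"

print(triage_risk("hiring"))   # high risk
print(triage_risk("chatbot"))  # limited risk
```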

3. Address High-Risk Systems First

For high-risk systems, start building the required documentation: risk management system, data governance, technical documentation, and human oversight mechanisms. These take time to implement properly.

4. Establish AI Governance

Create an AI governance framework: policies, roles, processes, and tools for managing AI responsibly. This is the foundation everything else builds on.

5. Assess and Monitor

Use a compliance assessment tool to evaluate your current posture against EU AI Act requirements. Set up continuous monitoring for bias, performance, and compliance drift.
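
A simple place to start with bias monitoring is tracking selection rates by group. The sketch below applies the four-fifths (80%) rule, a screening heuristic borrowed from employment-testing practice rather than an AI Act requirement; the function and the numbers are illustrative.

```python
def selection_rate_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Ratio of group A's selection rate to group B's (disparate-impact screen)."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# Illustrative monthly check on a hiring model's shortlisting decisions
ratio = selection_rate_ratio(selected_a=80, total_a=1000,    # e.g., women shortlisted
                             selected_b=120, total_b=1000)   # e.g., men shortlisted
if ratio < 0.8:  # four-fifths rule: flag when one group's rate falls below 80% of the other's
    print(f"Potential disparate impact: ratio {ratio:.2f} — trigger a bias investigation")
```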


The EU AI Act doesn’t ban AI — it demands responsible AI. Organizations that embrace this will build better products, earn more trust, and access the world’s largest single market. Those that ignore it will learn the hard way that algorithms have consequences.

Ready to assess your compliance?

Start your free assessment today and find out where you stand with GDPR, NIS2, DORA, ISO 27001, and more.


Written by

Metrica.uno Team

Content Team

The Metrica.uno Team helps organizations navigate AI compliance with practical insights and guidance.
