EU AI Act Becomes Law: Key Dates and Compliance Roadmap
After years of development, the EU AI Act has officially become law following its publication in the EU Official Journal. This landmark regulation establishes the world’s first comprehensive legal framework for artificial intelligence.
Official Publication and Entry into Force
The EU AI Act was published in the Official Journal of the European Union on July 12, 2024, as Regulation (EU) 2024/1689. The regulation entered into force on August 1, 2024—20 days after publication, as is standard for EU regulations.
Implementation Timeline
The EU AI Act follows a phased implementation approach, giving organizations time to adapt:
Phase 1: Prohibited AI Systems (February 2025)
6 months after entry into force
The following AI practices become prohibited:
- Social scoring (the prohibition covers both public and private actors)
- Exploitation of vulnerabilities due to age, disability, or social or economic situation
- Subliminal or manipulative techniques that distort behavior and cause significant harm
- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions)
- Emotion recognition in the workplace and in education
- Biometric categorization to infer sensitive attributes
- Untargeted scraping of facial images to build facial recognition databases
Phase 2: GPAI and Governance (August 2025)
12 months after entry into force
- General-Purpose AI (GPAI) model obligations apply
- AI Office becomes fully operational
- National competent authorities must be designated
- Codes of practice become available to guide GPAI providers
Phase 3: High-Risk AI Systems (August 2026)
24 months after entry into force
Full compliance required for:
- High-risk AI systems listed in Annex III
- Transparency obligations for limited-risk AI
- Conformity assessment requirements
- CE marking requirements
- Post-market monitoring obligations
Phase 4: Annex I Systems (August 2027)
36 months after entry into force
- AI in regulated products (machinery, toys, medical devices)
- Integration with existing product safety legislation
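To keep these milestones visible internally, here is a minimal Python sketch (our illustration, not part of the regulation) that records each phase's commonly cited application date and reports the days remaining; verify the exact dates against the text of Regulation (EU) 2024/1689 before relying on them.

```python
from datetime import date

# Commonly cited application dates for each phase of Regulation (EU) 2024/1689
# (the article above rounds these to the month; confirm against the regulation).
PHASES = {
    "Prohibited practices": date(2025, 2, 2),
    "GPAI obligations and governance": date(2025, 8, 2),
    "High-risk AI systems (Annex III) and most remaining obligations": date(2026, 8, 2),
    "High-risk AI in Annex I regulated products": date(2027, 8, 2),
}

def compliance_countdown(today: date | None = None) -> list[str]:
    """Report days remaining (or elapsed) for each phase."""
    today = today or date.today()
    report = []
    for phase, deadline in PHASES.items():
        days = (deadline - today).days
        status = f"{days} days remaining" if days >= 0 else f"applicable since {deadline.isoformat()}"
        report.append(f"{phase}: {status}")
    return report

if __name__ == "__main__":
    print("\n".join(compliance_countdown()))
```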
Key Provisions Now in Effect
AI Office
The EU AI Office within the European Commission:
- Oversees implementation across member states
- Enforces GPAI model rules directly
- Develops guidelines and standards
- Coordinates with national authorities
AI Board
Composed of member state representatives:
- Advises on consistent implementation
- Coordinates between national authorities
- Develops recommendations
Scientific Panel
Independent experts providing:
- Technical advice to AI Office
- Alerts on systemic risks from GPAI
- Support for enforcement activities
What Organizations Must Do Now
Immediate Actions (Before February 2025)
Audit for Prohibited Practices
- Review all AI systems for prohibited uses
- Identify any social scoring applications
- Check for manipulation or exploitation risks
Establish AI Inventory
- Catalog all AI systems in use
- Document purposes and deployment contexts
- Identify risk classifications
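A simple, structured inventory record is often enough to start. The sketch below is a minimal example of what such a record could look like; the field names and risk tiers are our own working assumptions, not terminology mandated by the Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskClass(Enum):
    # Working categories mirroring the Act's risk tiers; the naming is ours.
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk (transparency)"
    MINIMAL = "minimal-risk"
    UNCLASSIFIED = "not yet assessed"

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory."""
    name: str
    purpose: str                        # intended purpose, in plain language
    deployment_context: str             # where and on whom the system is used
    provider_or_deployer: str           # your role under the Act
    risk_class: RiskClass = RiskClass.UNCLASSIFIED
    prohibited_use_flags: list[str] = field(default_factory=list)  # e.g. "social scoring"
    owner: str = ""                     # internal accountable contact

# Example entry (hypothetical system and contact, for illustration only)
inventory = [
    AISystemRecord(
        name="resume-screener-v2",
        purpose="Rank job applicants for recruiter review",
        deployment_context="EU-wide hiring pipeline",
        provider_or_deployer="deployer",
        risk_class=RiskClass.HIGH,
        owner="hr-analytics@example.com",
    )
]
```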
Short-Term Actions (Before August 2025)
Assess GPAI Obligations
- Determine whether you use or provide GPAI models
- Understand systemic risk designations (see the compute-threshold sketch after this list)
- Prepare transparency documentation
Designate Responsibilities
- Appoint AI compliance officers
- Establish governance structures
- Create internal policies
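One concrete input to that assessment is the systemic-risk presumption the Act attaches to training compute above 10^25 floating-point operations. The snippet below is a simplified illustration of that single criterion; in practice the AI Office can also designate a model as posing systemic risk on other grounds.

```python
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # training-compute presumption threshold in the Act

def systemic_risk_presumed(training_compute_flops: float) -> bool:
    """Simplified check for the compute-based presumption only; designation can
    also follow from other criteria assessed by the AI Office."""
    return training_compute_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(systemic_risk_presumed(3e25))  # True  -> additional systemic-risk obligations
print(systemic_risk_presumed(5e23))  # False -> baseline GPAI obligations still apply
```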
Medium-Term Actions (Before August 2026)
High-Risk AI Compliance
- Implement risk management systems
- Establish data governance practices
- Create technical documentation
- Set up human oversight mechanisms
- Implement logging and monitoring (an illustrative decision-log sketch follows this list)
Conformity Assessment
- Identify required assessment procedures
- Engage notified bodies if needed
- Prepare for CE marking
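For the logging and human-oversight items above, it helps to pin down what a single decision record contains. The sketch below is an illustrative example of such a record; the fields are our assumptions, not the Act's or Annex IV's wording, and a production system would add integrity protections and retention controls.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionLogEntry:
    """Illustrative per-decision record supporting logging and human oversight."""
    system_id: str             # which AI system produced the output
    model_version: str         # version in production at decision time
    timestamp: str             # UTC timestamp of the decision
    input_reference: str       # pointer to the input data, not the data itself
    output_summary: str        # what the system recommended or decided
    confidence: float | None   # model confidence, if available
    human_reviewer: str | None = None   # who reviewed or overrode the output
    override: bool = False              # True if a human changed the outcome

def log_decision(entry: DecisionLogEntry, path: str = "decision_log.jsonl") -> None:
    """Append one decision record as a JSON line (tamper-evident or WORM
    storage would be preferable in production)."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(entry)) + "\n")

# Hypothetical example entry
log_decision(DecisionLogEntry(
    system_id="credit-scoring-v3",
    model_version="3.4.1",
    timestamp=datetime.now(timezone.utc).isoformat(),
    input_reference="application/2026-000123",
    output_summary="declined (score 412, threshold 500)",
    confidence=0.87,
    human_reviewer="j.doe",
    override=False,
))
```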
Sector-Specific Considerations
Different sectors face unique challenges:
Financial Services
| AI Application | Classification | Key Requirements |
|---|---|---|
| Credit scoring | High-risk | Full conformity assessment |
| Fraud detection | Generally not high-risk (explicit Annex III carve-out) | Oversight and transparency remain good practice |
| Customer service bots | Limited-risk | Disclosure requirements |
Healthcare
| AI Application | Classification | Key Requirements |
|---|---|---|
| Diagnostic AI | High-risk (medical device) | CE marking + AI Act |
| Administrative AI | Varies | Case-by-case assessment |
| Patient chatbots | Limited-risk | Disclosure, accuracy |
Human Resources
| AI Application | Classification | Key Requirements |
|---|---|---|
| Recruitment AI | High-risk | Full compliance, bias testing |
| Performance evaluation | High-risk | Transparency, oversight |
| Workforce planning | Varies | Risk assessment needed |
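If you want to turn tables like these into a first-pass screening aid, a simple lookup is enough to start. The mapping below mirrors the tables above and is illustrative only; the actual classification always depends on the system's intended purpose and the Annex III categories, so unmapped or borderline cases need manual legal review.

```python
# First-pass screening map mirroring the sector tables above (illustrative only;
# real classification depends on intended purpose and the Annex III categories).
SECTOR_CLASSIFICATION = {
    ("financial_services", "credit_scoring"): "high-risk",
    ("financial_services", "fraud_detection"): "generally not high-risk (Annex III carve-out)",
    ("financial_services", "customer_service_bot"): "limited-risk (transparency)",
    ("healthcare", "diagnostic_ai"): "high-risk (medical device)",
    ("healthcare", "administrative_ai"): "varies",
    ("healthcare", "patient_chatbot"): "limited-risk (transparency)",
    ("hr", "recruitment_ai"): "high-risk",
    ("hr", "performance_evaluation"): "high-risk",
    ("hr", "workforce_planning"): "varies",
}

def screen(sector: str, application: str) -> str:
    """Return the first-pass classification, or flag the case for manual review."""
    return SECTOR_CLASSIFICATION.get((sector, application), "not mapped - assess manually")

print(screen("hr", "recruitment_ai"))        # high-risk
print(screen("healthcare", "wellness_app"))  # not mapped - assess manually
```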
SME Provisions
The regulation includes some accommodations for smaller organizations:
- Proportionate obligations - Requirements scaled to resources
- Regulatory sandboxes - Testing environments with reduced burden
- Priority access - SMEs get priority for sandbox participation
- Guidance - Dedicated guidance for SMEs from AI Office
However, risk classifications and prohibitions apply equally regardless of size.
Documentation Requirements
Organizations must maintain:
For All AI Systems
- Purpose and intended use documentation
- Risk classification rationale
- User instructions
For High-Risk AI Systems
- Technical documentation (Annex IV)
- Risk management system records
- Data governance documentation
- Accuracy and robustness testing results
- Human oversight procedures
- Logging capabilities and records
- Post-market monitoring plans
For GPAI Models
- Model cards and technical documentation
- Training data descriptions
- Evaluation results
- Known limitations
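One lightweight way to keep these artifacts consistent across systems is to maintain them as structured records. The skeletons below are illustrative outlines of our own; the field names are not the Annex IV template or an official GPAI documentation format.

```python
# Illustrative documentation skeletons (field names are ours, not the Annex IV
# or GPAI template wording); in practice these might live as YAML files in a repo.
GPAI_MODEL_CARD_SKELETON = {
    "model_name": "",
    "provider": "",
    "version": "",
    "architecture_and_size": "",        # high-level description, parameter count
    "training_data_description": "",    # sources, cut-off dates, curation steps
    "evaluation_results": [],           # benchmark names and scores
    "known_limitations": [],            # failure modes, unsupported languages, etc.
    "acceptable_use_and_restrictions": "",
    "contact_for_downstream_providers": "",
}

HIGH_RISK_TECH_DOC_SKELETON = {
    "intended_purpose": "",
    "risk_management_summary": "",
    "data_governance": "",              # data sources, quality checks, bias analysis
    "accuracy_robustness_testing": "",
    "human_oversight_measures": "",
    "logging_capabilities": "",
    "post_market_monitoring_plan": "",
}
```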
How Metrica.uno Supports Compliance
Our platform helps you navigate EU AI Act requirements:
Risk Classification Tool
- Automatic categorization of AI systems
- Prohibition screening
- Gap identification
Compliance Assessment
- Requirements mapped to your AI inventory
- Progress tracking against deadlines
- Documentation templates
Audit-Ready Reports
- Generate documentation for conformity assessments
- Export evidence for regulatory inspections
- Track compliance over time
Continuous Monitoring
- Regulatory update alerts
- Compliance drift detection
- Re-assessment reminders
Conclusion
The EU AI Act is no longer a future concern—it’s current law with binding deadlines. Organizations that begin compliance efforts now will be better positioned to meet requirements as they phase in.
The first deadline (February 2025) for prohibited practices is fast approaching. Use this time to audit your AI systems and ensure none fall into prohibited categories.
Ready to assess your AI compliance?
Start your free assessment today and get actionable insights.
Written by the Metrica.uno Content Team, helping organizations navigate AI compliance with practical insights and guidance.
Related Articles
The EU AI Act's Global Impact: Brussels Effect on AI Regulation
How the EU AI Act is shaping AI governance worldwide and why organizations globally must pay attention to European AI regulation.
EU AI Act Penalties and Enforcement: What to Expect in 2026
A comprehensive guide to EU AI Act fines, enforcement mechanisms, and what organizations should prepare for as penalties become applicable.
Whistleblowing Protections Under the EU AI Act
Understanding the whistleblower protections in the EU AI Act and how they encourage reporting of AI compliance violations.