What is the EU AI Act and how does its risk-based framework work?
The EU AI Act establishes a risk-based framework that determines how much regulation each AI system faces.
Every AI system is assessed according to its potential impact on individuals and society and assigned to one of four categories (a simplified classification sketch follows the list):
Unacceptable Risk: AI practices that threaten human rights or Union values are banned. Violations can result in the highest administrative fines.
High Risk: Systems affecting safety or fundamental rights—such as those used in critical infrastructure, law enforcement, healthcare, or education—must comply with detailed technical and procedural requirements.
Specific Risk (Transparency): Systems not deemed high-risk, including General-Purpose AI (GPAI) models, must fulfill transparency obligations, such as disclosing that content is AI-generated.
Minimal or No Risk: Most AI applications fall in this category and face minimal requirements, though voluntary codes of conduct are encouraged.
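As a rough mental model, the four tiers can be expressed as a simple classification. The sketch below is illustrative only: the classify helper and its boolean flags are invented for this example, and the real legal assessment follows the detailed criteria of the Act rather than three yes/no questions.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice (Article 5)"
    HIGH = "high-risk (Annex I / Annex III)"
    TRANSPARENCY = "specific transparency obligations"
    MINIMAL = "minimal or no risk"

def classify(uses_prohibited_practice: bool,
             is_high_risk_use_case: bool,
             interacts_with_people_or_generates_content: bool) -> RiskTier:
    """Illustrative tier selection only; the legal test is far more nuanced."""
    if uses_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if is_high_risk_use_case:
        return RiskTier.HIGH
    if interacts_with_people_or_generates_content:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

# Example: a CV-screening tool falls under the employment use cases of Annex III.
print(classify(False, True, False))  # RiskTier.HIGH
```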
Which AI practices are banned as “unacceptable risk,” and when do the bans apply?
Practices posing an unacceptable risk are strictly prohibited as they violate EU values and fundamental rights.
These prohibitions become applicable on 2 February 2025.
Banned AI practices include:
Manipulative or subliminal techniques that distort human behavior and cause significant harm.
Exploitation of vulnerable groups, such as children or persons with disabilities, leading to behavioral manipulation.
Social scoring systems that unfairly classify or penalize individuals based on predicted characteristics or behavior.
Creation of facial recognition databases through large-scale scraping of internet or CCTV data.
Emotion recognition in workplaces or schools, except for safety or medical purposes.
Biometric categorization systems used to infer sensitive attributes, such as race, religion, or political opinion.
“Real-time” remote biometric identification in public spaces for law enforcement, except in strictly defined cases, such as locating missing persons or preventing serious crimes.
What are the mandatory requirements for providers of High-Risk AI systems?
Providers of high-risk AI systems must meet the obligations set out in Chapter III, Section 2 of the Regulation.
These include:
Risk Management System (RMS): Establish and maintain a continuous process to identify and mitigate foreseeable risks throughout the system’s lifecycle.
Data Governance: Use training and testing datasets that are relevant, representative, and as free from errors and bias as possible.
Technical Documentation and Logging: Maintain updated documentation and automatic event logging to ensure traceability and support post-market monitoring (a minimal logging sketch follows this list).
Transparency: Provide clear instructions for use, including limitations, expected accuracy, and possible risk conditions.
Human Oversight: Design systems that allow effective human monitoring, understanding, and override of AI outputs.
Accuracy, Robustness, and Cybersecurity: Ensure reliable performance, resilience to faults and attacks, and protection against unauthorized interference.
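To make the logging requirement concrete, the sketch below records one traceability event per decision as a JSON line. The field names, file format, and record_event helper are assumptions for illustration; the Regulation requires that relevant events be recorded automatically over the system's lifetime but does not prescribe any particular format.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal structured event log for traceability (illustrative format only).
logger = logging.getLogger("ai_event_log")
handler = logging.FileHandler("events.jsonl")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def record_event(system_id: str, input_ref: str, output_ref: str, operator: str) -> None:
    """Append one record per inference or decision to support post-market monitoring."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,    # reference to the input, not the data itself
        "output_ref": output_ref,  # reference to the produced result
        "operator": operator,      # human overseeing this use of the system
    }))

record_event("cv-screener-v2", "application/84721", "score/0.73", "hr_reviewer_12")
```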
What are the obligations for providers of General-Purpose AI (GPAI) models?
Providers of General-Purpose AI (GPAI) models must comply with specific transparency, documentation, and copyright obligations when placing models on the EU market.
They must:
Prepare and maintain technical documentation describing training, testing, and evaluation processes, available to the EU AI Office or national authorities upon request.
Provide information to downstream providers to enable compliant integration and use.
Implement a copyright compliance policy respecting rights under Directive (EU) 2019/790, including right reservations under Article 4(3).
Publish a training data summary describing the nature and source of data used.
If a GPAI model presents a systemic risk—for example, when the cumulative computation used for training exceeds 10²⁵ floating-point operations (FLOPs)—the provider must conduct model evaluations, such as adversarial testing, and implement continuous monitoring and mitigation at the Union level.
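For a sense of scale, training compute is often estimated with the rule of thumb of roughly six floating-point operations per parameter per training token. The sketch below uses that approximation, which is an engineering convention rather than anything defined in the Regulation, to compare an estimate against the 10²⁵ FLOP presumption threshold.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold stated in the Act

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rule-of-thumb estimate (~6 FLOPs per parameter per token); an approximation only."""
    return 6 * parameters * training_tokens

# Illustrative numbers: a 100-billion-parameter model trained on 20 trillion tokens.
flops = estimated_training_flops(100e9, 20e12)
print(f"{flops:.2e} FLOPs")                   # 1.20e+25
print(flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # True -> presumed to pose systemic risk
```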
Open-source models with publicly available architecture, parameters, and usage information are exempt from most documentation obligations unless they present systemic risk.
How is conformity assessed for High-Risk AI systems, and what markings are required?
Before being placed on the market or put into service, high-risk AI systems must undergo a conformity assessment to demonstrate compliance.
Three types of procedures apply:
Internal Control (Annex VI): For most high-risk systems, such as those in education or employment, providers perform internal checks of documentation and quality management.
Notified Body Assessment (Annex VII): Required for biometric systems or when harmonized standards are not applied. A third-party notified body reviews documentation and may examine data used for training or validation.
Sectoral Product Integration: For systems that are safety components of regulated products (e.g., medical devices), conformity is assessed under existing EU product safety procedures.
Once conformity is established, providers must:
EU Declaration of Conformity: Draw up a written declaration stating that the system meets the requirements of the Regulation and keep it available for national authorities.
CE Marking: Affix the CE marking, physically or digitally as appropriate, to indicate conformity before placing the system on the market.
Database Registration: Register the high-risk system in the EU database established under the Regulation, where applicable.
A simplified sketch of how the applicable assessment procedure is selected appears after this list.
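The sketch below is a simplified routing decision with invented boolean inputs; the actual selection rules in Article 43 depend on more factors than shown here, including which Annex III category applies and whether harmonised standards exist and were applied in full.

```python
def conformity_route(is_regulated_product_component: bool,
                     is_biometric_system: bool,
                     harmonised_standards_applied: bool) -> str:
    """Illustrative only; not a substitute for the procedure set out in the Regulation."""
    if is_regulated_product_component:
        return "Sectoral procedure under existing EU product safety legislation"
    if is_biometric_system and not harmonised_standards_applied:
        return "Notified body assessment (Annex VII)"
    return "Internal control (Annex VI)"

# Example: an employment-screening system applying harmonised standards.
print(conformity_route(False, False, True))  # Internal control (Annex VI)
```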
What is the phased timeline for compliance under the EU AI Act?
The Regulation entered into force on 1 August 2024, with staggered application dates to allow for gradual implementation.
Key milestones include:
2 February 2025: Prohibitions on unacceptable AI practices take effect.
2 May 2025: Codes of Practice for GPAI models must be completed.
2 August 2025: Governance structures and GPAI transparency obligations become applicable.
2 August 2026: General application of the Regulation, including most high-risk AI obligations.
2 August 2027: Obligations apply to high-risk AI systems that are safety components of products covered by EU product safety legislation (Annex I).
2 August 2030: Compliance deadline for high-risk AI systems intended for use by public authorities that were already on the market before the general application date.
31 December 2030: Compliance deadline for AI systems that are components of large-scale EU IT systems in the area of Freedom, Security, and Justice.
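For convenience, the milestone dates above can be collected into a small lookup that reports which obligations are already applicable on a given date; the data structure and helper function are illustrative only and not legal advice.

```python
from datetime import date

MILESTONES = {
    date(2025, 2, 2): "Prohibitions on unacceptable AI practices",
    date(2025, 5, 2): "GPAI Codes of Practice completed",
    date(2025, 8, 2): "Governance structures and GPAI transparency obligations",
    date(2026, 8, 2): "General application, including most high-risk obligations",
    date(2027, 8, 2): "High-risk systems covered by EU product safety legislation",
    date(2030, 8, 2): "Legacy high-risk systems used by public authorities",
    date(2030, 12, 31): "AI components of large-scale Freedom, Security and Justice IT systems",
}

def applicable_by(as_of: date) -> list[str]:
    """Return the milestones already in effect on a given date."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= as_of]

print(applicable_by(date(2026, 1, 1)))  # the first three milestones
```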