Table of Contents
- Overview & Background
- Scope & Applicability
- Risk Classification System
- Prohibited AI Practices (Article 5)
- High-Risk AI System Requirements
- General-Purpose AI Models
- Conformity Assessment
- Governance Structure
- Penalties & Enforcement
- Implementation Timeline
- Relationship with Other EU Law
- References & Official Sources
1. Overview & Background
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Adopted by the European Parliament and the Council of the European Union, it establishes harmonised rules for the development, placing on the market, putting into service, and use of AI systems within the European Union.
The regulation takes a risk-based approach, imposing obligations on providers and deployers of AI systems proportionate to the level of risk those systems pose to health, safety, and fundamental rights. It applies across all sectors and covers the entire AI value chain — from development through deployment to end use.
Legislative History
| Date | Milestone | Details |
|---|---|---|
| April 2018 | European AI Strategy | European Commission publishes "Artificial Intelligence for Europe" communication, setting out the EU's approach to AI |
| April 2019 | Ethics Guidelines for Trustworthy AI | High-Level Expert Group on AI (AI HLEG) publishes seven key requirements for trustworthy AI |
| February 2020 | White Paper on AI | Commission launches public consultation on regulatory options for AI, receiving over 1,200 responses |
| 21 April 2021 | Commission Proposal | European Commission formally proposes the AI Act (COM(2021) 206), the first-ever legal framework for AI globally |
| 6 December 2022 | Council General Approach | Council of the EU adopts its negotiating position, introducing changes to high-risk classifications and law enforcement provisions |
| 14 June 2023 | Parliament Position | European Parliament adopts its negotiating mandate with amendments including rules for generative AI and foundation models |
| 9 December 2023 | Trilogue Agreement | Parliament, Council, and Commission reach political agreement after marathon 36-hour trilogue negotiations |
| 2 February 2024 | COREPER Endorsement | Member states' ambassadors unanimously endorse the agreed text |
| 13 March 2024 | Parliament Vote | European Parliament adopts the AI Act by 523 votes to 46, with 49 abstentions |
| 21 May 2024 | Council Adoption | Council of the EU formally adopts the Regulation |
| 13 June 2024 | Signing | Presidents of Parliament and Council sign the Act into law |
| 12 July 2024 | Publication | Published in the Official Journal of the European Union (OJ L 2024/1689) |
| 1 August 2024 | Entry into Force | The AI Act officially enters into force (20 days after publication) |
The AI Act was significantly reshaped during trilogue negotiations following the public release of ChatGPT in November 2022. The original 2021 proposal did not address foundation models or general-purpose AI — these provisions were added in response to the rapid emergence of large language models and generative AI systems.
2. Scope & Applicability
Who Does It Apply To?
| Role | Definition | Key Obligations |
|---|---|---|
| Provider | Natural or legal person that develops an AI system or GPAI model and places it on the market or puts it into service under its own name or trademark (Article 3(3)) | Compliance with requirements for high-risk systems, conformity assessment, CE marking, registration, post-market monitoring, incident reporting |
| Deployer | Natural or legal person that uses an AI system under its authority, except where used in the course of a personal non-professional activity (Article 3(4)) | Use in accordance with instructions, human oversight, input data relevance, monitoring, inform affected persons, DPIA for high-risk, keep logs |
| Importer | Natural or legal person located in the Union that places on the market an AI system that bears the name or trademark of a person established outside the Union (Article 3(6)) | Verify conformity assessment, CE marking, technical documentation, provider identity, storage/transport conditions |
| Distributor | Natural or legal person in the supply chain that makes an AI system available on the Union market without affecting its properties (Article 3(7)) | Verify CE marking, verify provider/importer compliance, storage/transport conditions, cooperate with authorities |
Territorial Scope
The AI Act has significant extraterritorial reach, similar to the GDPR (Article 2):
- Providers placing AI systems on the EU market or putting them into service in the EU, regardless of where they are established
- Deployers located within the EU
- Providers and deployers located in third countries, where the output produced by their AI system is used in the EU
- Importers and distributors of AI systems
- Product manufacturers placing AI systems on the EU market together with their product under their own name or trademark
- Authorised representatives of providers established outside the EU
What Is Excluded?
- AI systems developed or used exclusively for military, defence, or national security purposes
- AI systems used by public authorities in third countries or international organisations for law enforcement/judicial cooperation, under agreements with the EU or Member States
- AI systems developed and put into service solely for the purpose of scientific research and development (obligations attach once a system is placed on the market)
- AI systems released under free and open-source licences, unless they are high-risk, prohibited, or subject to transparency obligations
- Personal, non-professional use of AI systems
- AI systems placed on the market before the application date are grandfathered, unless significantly modified
3. Risk Classification System
The cornerstone of the EU AI Act is its four-tier risk classification system. Each tier carries different regulatory obligations, ranging from outright prohibition to voluntary codes of conduct.
Unacceptable Risk
BANNED
AI practices considered a clear threat to safety, livelihoods, and fundamental rights. These are completely prohibited under Article 5.
High Risk
REGULATED
AI systems posing significant risk to health, safety, or fundamental rights. Subject to comprehensive mandatory requirements before market placement.
Limited Risk
TRANSPARENCY
AI systems with specific transparency risks. Subject to disclosure and transparency obligations so users know they interact with AI.
Minimal Risk
VOLUNTARY
The vast majority of AI systems. No mandatory obligations, but encouraged to adopt voluntary codes of conduct.
High-Risk Categories (Annex III)
| Category | Annex III Area | Examples of High-Risk AI Systems |
|---|---|---|
| 1. Biometrics | Remote biometric identification, biometric categorisation by sensitive attributes, emotion recognition (outside the prohibited workplace/education uses) | Facial recognition for identification (verification-only systems are excluded), fingerprint matching systems, gait recognition |
| 2. Critical Infrastructure | Safety components of critical infrastructure in traffic, energy, water, gas, heating, electricity | AI managing power grid distribution, water treatment AI, traffic management systems |
| 3. Education & Training | AI determining access to education, evaluating learning outcomes, monitoring assessments | Automated admissions, AI grading, proctoring software, learning analytics |
| 4. Employment & Workers | AI for recruitment, screening, filtering, evaluating, promotion/termination decisions, task allocation, performance monitoring | CV screening algorithms, interview evaluation AI, performance scoring, workforce management |
| 5. Essential Services | AI for public benefits eligibility, credit scoring, insurance risk, emergency dispatch | Credit scoring algorithms, benefits systems, insurance pricing AI, emergency call prioritisation |
| 6. Law Enforcement | Individual risk assessment, polygraph tools, evidence reliability, profiling, crime analytics | Predictive policing, recidivism prediction, AI evidence analysis, criminal profiling |
| 7. Migration & Border | Polygraph tools for migration, asylum assessment, irregular migration risk, identity verification | Automated visa assessment, asylum processing AI, border surveillance analytics |
| 8. Justice & Democracy | AI assisting judicial authorities, AI used to influence elections/referenda/voting | Legal research AI in adjudication, sentencing tools, voter targeting AI |
Under Article 6(1), AI systems that are safety components of products covered by the EU harmonisation legislation listed in Annex I (machinery, medical devices, toys, lifts, pressure equipment, radio equipment, civil aviation, vehicles, rail) are also classified as high-risk where those products require third-party conformity assessment under the existing legislation.
Limited Risk — Transparency Obligations (Article 50)
| AI System Type | Transparency Requirement | Responsible Party |
|---|---|---|
| AI interacting with persons (chatbots, assistants) | Inform the person they are interacting with AI, unless obvious from circumstances | Provider |
| Emotion recognition systems | Inform exposed persons of system operation; process data per GDPR/LED | Deployer |
| Biometric categorisation | Inform exposed persons of system operation | Deployer |
| AI-generated content (deepfakes) | Disclose content is AI-generated/manipulated; provider must enable watermarking | Deployer & Provider |
4. Prohibited AI Practices (Article 5)
Article 5 establishes an exhaustive list of AI practices that are completely banned in the EU. These bans apply from 2 February 2025.
| # | Prohibited Practice | Description |
|---|---|---|
| 1 | Subliminal manipulation | AI deploying subliminal techniques beyond a person's consciousness, or purposefully manipulative/deceptive techniques, materially distorting behaviour and causing significant harm |
| 2 | Exploitation of vulnerabilities | AI exploiting vulnerabilities due to age, disability, or social/economic situation to materially distort behaviour causing significant harm |
| 3 | Social scoring | AI evaluating/classifying persons based on social behaviour or personal characteristics, leading to detrimental or disproportionate treatment in unrelated contexts |
| 4 | Criminal risk prediction | AI predicting criminal offence risk based solely on profiling or personality traits (not objective facts linked to criminal activity) |
| 5 | Untargeted facial image scraping | Creating or expanding facial recognition databases through untargeted scraping from the internet or CCTV |
| 6 | Workplace/education emotion recognition | Inferring emotions in workplace or education, except for medical or safety reasons (e.g., pilot drowsiness) |
| 7 | Biometric categorisation (sensitive attributes) | Categorising persons via biometrics to deduce race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation |
| 8 | Real-time remote biometric ID in public | Real-time remote biometric identification in publicly accessible spaces for law enforcement, except: targeted search for specific crime victims, prevention of a specific and imminent terrorist threat, identification of suspects of serious crimes. Requires prior authorisation by a judicial or independent administrative authority. |
Violation of Article 5 carries the highest penalties: up to €35,000,000 or 7% of total worldwide annual turnover, whichever is higher.
5. High-Risk AI System Requirements
Providers of high-risk AI systems must comply with comprehensive mandatory requirements before placing systems on the EU market (Chapter III, Section 2, Articles 8–15).
| Article | Requirement | Key Obligations |
|---|---|---|
| Art. 9 | Risk Management System | Continuous, iterative risk management throughout the AI system's lifecycle. Identify and analyse known/foreseeable risks, estimate risks from misuse, adopt risk management measures, test residual risk acceptability. |
| Art. 10 | Data & Data Governance | Training, validation, and testing datasets must be subject to data governance practices: design choices, collection processes, labelling, bias examination, gap identification. Datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose. |
| Art. 11 | Technical Documentation | Draw up technical documentation before market placement, kept up to date. Must demonstrate compliance and enable authority assessment. Contents specified in Annex IV. |
| Art. 12 | Record-Keeping | Automatic logging capabilities to ensure traceability. Record reference data, input data, and results. Appropriate log retention periods. |
| Art. 13 | Transparency & Information | Sufficiently transparent operation for deployers to interpret output. Clear instructions for use including: provider identity, capabilities/limitations, intended purpose, accuracy levels, oversight measures, expected lifetime. |
| Art. 14 | Human Oversight | Effective human oversight during use. Individuals must be able to: understand capabilities/limitations, interpret output, decide not to use or to override output, intervene or interrupt the system. For remote biometric identification, no action may be taken unless the result is verified by at least two natural persons. |
| Art. 15 | Accuracy, Robustness & Cybersecurity | Appropriate accuracy, robustness, and cybersecurity throughout lifecycle. Resilient to errors and faults. Robust against adversarial manipulation (data poisoning, adversarial examples, model extraction). AI-specific cybersecurity measures. |
Provider Obligations (Articles 16–18)
- Ensure compliance with all Chapter III requirements
- Establish a quality management system (Article 17)
- Keep technical documentation for 10 years after market placement
- Undergo conformity assessment before placing on market
- Affix CE marking to compliant systems
- Register in the EU database of high-risk AI systems
- Implement post-market monitoring system
- Report serious incidents to market surveillance authorities no later than 15 days after becoming aware (shorter deadlines apply for deaths and widespread incidents)
- Cooperate with national competent authorities
Deployer Obligations (Article 26)
- Use AI system in accordance with instructions of use
- Ensure human oversight by trained, competent personnel
- Ensure input data is relevant and representative
- Monitor operation and report serious incidents
- Conduct fundamental rights impact assessment (public bodies and certain private entities)
- Inform affected persons that they are subject to a high-risk AI system
- Keep automatically generated logs for at least 6 months
- Inform workers' representatives before deploying high-risk AI in the workplace
6. General-Purpose AI Models (Chapter V)
Chapter V establishes specific obligations for general-purpose AI (GPAI) models — AI models trained with a large amount of data using self-supervision at scale, displaying significant generality and competently performing a wide range of distinct tasks.
All GPAI Models — Transparency Obligations (Article 53)
| Obligation | Details |
|---|---|
| Technical Documentation | Draw up and maintain technical documentation (Annex XI). Provide to AI Office on request and to downstream providers. |
| Information for Downstream Providers | Provide information enabling downstream AI system providers to understand capabilities/limitations and comply with their obligations. |
| Copyright Compliance Policy | Comply with EU copyright law, particularly Article 4(3) DSM Directive (text and data mining opt-out). Identify and comply with rights reservations. |
| Training Data Summary | Publicly available sufficiently detailed summary of training content, per AI Office template. |
GPAI with Systemic Risk — Additional Obligations (Article 55)
GPAI models are classified as having systemic risk if:
- Cumulative training compute exceeds 10²⁵ FLOPs (which creates a presumption of high-impact capabilities), or
- The Commission designates the model based on the criteria in Annex XIII (e.g., number of parameters, dataset size, input/output modalities, reach among business and end users)
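The compute criterion can be sketched numerically. A minimal illustration, assuming the widely used 6 × parameters × training-tokens heuristic for dense-transformer training compute — a scaling-law rule of thumb, not something the Act prescribes; the function names are illustrative:

```python
# A rough check against the Article 51 compute presumption.
# The 6*N*D estimate (compute ≈ 6 × parameters × training tokens) is a
# common scaling-law heuristic for dense transformers, NOT part of the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption threshold

def estimate_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate cumulative training compute (forward + backward passes)."""
    return 6 * n_parameters * n_tokens

def presumed_systemic_risk(n_parameters: float, n_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOPs presumption."""
    return estimate_training_flops(n_parameters, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: 70B parameters trained on 15T tokens → ≈ 6.3e24 FLOPs, below the bar
print(presumed_systemic_risk(70e9, 15e12))   # False
print(presumed_systemic_risk(2e12, 15e12))   # True (≈ 1.8e26 FLOPs)
```

In practice providers would report actual measured compute rather than this estimate, but the sketch shows how close current frontier-scale training runs sit to the threshold.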
| Additional Obligation | Details |
|---|---|
| Model Evaluation | Perform evaluations including adversarial testing to identify and mitigate systemic risks |
| Systemic Risk Assessment | Assess and mitigate possible systemic risks at Union level |
| Incident Tracking & Reporting | Track, document, and report serious incidents to the AI Office without undue delay |
| Cybersecurity Protections | Adequate cybersecurity for the model and its physical infrastructure |
The AI Office is developing codes of practice (Article 56) with GPAI providers, civil society, and academia. These will detail practical compliance and are due by 2 May 2025.
7. Conformity Assessment
| Assessment Type | Applies To | Procedure |
|---|---|---|
| Self-Assessment | Most Annex III high-risk systems (standalone AI) | Provider conducts internal assessment (Annex VI). Checks QMS and technical documentation. Self-declares conformity. Affixes CE marking. |
| Third-Party Assessment | Biometric systems (Annex III, pt 1) where harmonised standards are not or only partly applied; AI safety components of Annex I products | Assessment by an independent Notified Body, which issues a certificate under the Annex VII procedure. |
After successful conformity assessment, the provider:
- Draws up an EU declaration of conformity (Article 47)
- Affixes the CE marking (Article 48)
- Registers the system in the EU database (Article 49)
8. Governance Structure
| Body | Level | Role & Responsibilities |
|---|---|---|
| European AI Office | EU (within Commission) | Central enforcement for GPAI models. Develops codes of practice, guidelines, templates. Monitors systemic risks. Established February 2024. |
| European AI Board | EU | Advisory body with one representative per Member State. Advises Commission, facilitates consistent application, issues recommendations. |
| National Competent Authorities | Member State | Each MS designates notifying authority and market surveillance authority. Enforce AI Act nationally. |
| Market Surveillance Authorities | Member State | Powers to access source code, documentation, training data. Can require corrective actions, withdraw products. |
| Scientific Panel | EU | Independent experts providing scientific/technical expertise to AI Office. Supports enforcement for systemic risk models. |
| Advisory Forum | EU | Balanced stakeholder body: industry, SMEs, civil society, academia. Provides technical expertise. |
| Notified Bodies | Member State | Independent conformity assessment bodies for third-party assessments. Must meet Article 31 qualifications. |
9. Penalties & Enforcement
| Violation Type | Max Fine (Fixed) | Max Fine (% Turnover) | SMEs/Startups |
|---|---|---|---|
| Prohibited practices (Art. 5) | €35,000,000 | 7% of worldwide annual turnover | Lower of the two caps applies |
| Other AI Act violations | €15,000,000 | 3% of worldwide annual turnover | Lower of the two caps applies |
| Incorrect/misleading information | €7,500,000 | 1.5% of worldwide annual turnover | Lower of the two caps applies |
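The "whichever is higher" rule, and its SME inversion, amount to a simple ceiling calculation. A minimal sketch — the function name and signature are illustrative, not terminology from the Act:

```python
def max_fine(fixed_cap_eur: float, turnover_fraction: float,
             worldwide_turnover_eur: float, sme: bool = False) -> float:
    """Applicable fine ceiling under Article 99.

    For most operators the ceiling is whichever of the fixed cap and the
    turnover-based cap is HIGHER; for SMEs and startups it is whichever
    is LOWER. Illustrative helper, not an official formula.
    """
    turnover_cap = turnover_fraction * worldwide_turnover_eur
    return min(fixed_cap_eur, turnover_cap) if sme else max(fixed_cap_eur, turnover_cap)

# Prohibited-practice violation (Art. 5), firm with €1bn worldwide turnover:
print(max_fine(35_000_000, 0.07, 1_000_000_000))            # 70000000.0 (€70m)
print(max_fine(35_000_000, 0.07, 1_000_000_000, sme=True))  # 35000000.0 (€35m)
```

For a large firm the turnover-based cap usually dominates; for a small company with, say, €100m turnover, the 3% cap (€3m) is below the €15m fixed cap, so the fixed cap governs.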
Factors considered in determining penalties:
- Nature, gravity, and duration of the infringement
- Size, market share, and financial situation of the operator
- Nature and size of harm to affected persons
- Intentional or negligent character
- Corrective actions taken
- Degree of cooperation with authorities
- Previous infringements
10. Implementation Timeline
| Date | Phase | What Becomes Applicable |
|---|---|---|
| 1 Aug 2024 | Entry into Force | AI Act enters into force; no obligations apply yet. |
| 2 Feb 2025 | +6 months | Prohibited practices (Chapter II) and general provisions, including AI literacy obligations (Chapter I, Article 4), take effect. |
| 2 Aug 2025 | +12 months | GPAI model obligations (Chapter V). Governance rules (AI Office, AI Board). Notified Bodies. Penalties framework. Member States designate national competent authorities and market surveillance authorities. |
| 2 Feb 2026 | +18 months | Commission guidelines on the practical implementation of the Article 6 high-risk classification; implementing act establishing the post-market monitoring plan template (Article 72). |
| 2 Aug 2026 | +24 months | High-risk AI (Annex III). Transparency obligations (Art. 50). Deployer obligations. Registration. |
| 2 Aug 2027 | +36 months | High-risk AI (Annex I products). Full application of all provisions. |
11. Relationship with Other EU Law
| EU Legislation | Interaction with AI Act |
|---|---|
| GDPR (2016/679) | Both apply concurrently. GDPR governs personal data processing; AI Act governs AI system safety and fundamental rights. DPAs have AI Act enforcement role. |
| Digital Services Act | VLOPs using AI for recommender systems/content moderation regulated by both. DSA algorithmic transparency complements AI Act. |
| Digital Markets Act | Gatekeepers using AI must comply with both. DMA interoperability and fairness interact with AI Act transparency. |
| Product Safety | AI Act complements product safety rules. AI in products must comply with both frameworks. |
| NIS2 Directive | Cybersecurity requirements overlap for AI in critical infrastructure sectors. |
| Cyber Resilience Act | Software AI systems (products with digital elements) must meet CRA cybersecurity requirements. CRA assessment may fulfil AI Act requirements. |
| EU Charter of Fundamental Rights | AI Act rooted in the Charter. Fundamental rights impact assessments reference Charter rights. |
| Law Enforcement Directive | LED applies instead of GDPR for law enforcement AI data processing. AI Act law enforcement provisions compatible with LED. |
| Sector-specific legislation | AI Act is lex generalis; sector rules (Medical Devices, Machinery, etc.) take precedence. Annex I products use sector conformity assessment. |
12. References & Official Sources
- [1] Official Text — Regulation (EU) 2024/1689 of the European Parliament and of the Council — https://eur-lex.europa.eu/eli/reg/2024/1689/oj
- [2] European Commission — Regulatory Framework for AI — https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
- [3] European AI Office — https://digital-strategy.ec.europa.eu/en/policies/ai-office
- [4] European Parliament — AI Act Legislative Observatory — https://www.europarl.europa.eu/legislative-train/
- [5] Council of the EU — AI Act — https://www.consilium.europa.eu/en/policies/artificial-intelligence/
- [6] Original 2021 Proposal — COM(2021) 206 final — https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
- [7] EU AI Act Explorer — Interactive navigation tool (Future of Life Institute) — https://artificialintelligenceact.eu/
- [8] Ethics Guidelines for Trustworthy AI — AI HLEG (April 2019) — https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
- [9] AI Act Impact Assessment — SWD(2021) 84 — https://digital-strategy.ec.europa.eu/en/library/impact-assessment-regulation-artificial-intelligence
- [10] Corrigendum to Regulation (EU) 2024/1689 — https://eur-lex.europa.eu/eli/reg/2024/1689/corr/2024-08-01/oj