99 Articles
173 Recitals
€20M Max Fine (or 4% of Global Turnover, Whichever Is Higher)
30+ DPAs Enforcing

Contents

  1. Overview — GDPR & AI
  2. Article 22 — Automated Individual Decision-Making
  3. Lawful Basis for AI Processing
  4. Data Protection Impact Assessments for AI
  5. Data Minimization & Purpose Limitation
  6. Transparency & Explainability Requirements
  7. The Right to Explanation
  8. Data Subject Rights in AI Systems
  9. Profiling Under GDPR
  10. International Data Transfers & AI
  11. GDPR Enforcement Actions on AI Systems
  12. GDPR vs EU AI Act — Interaction & Overlap
  13. Practical Compliance Checklist for AI
  14. References & Official Sources

1. Overview — GDPR & AI

The General Data Protection Regulation (GDPR) is the European Union's comprehensive data protection law that has become the global benchmark for privacy regulation. While not specifically designed for artificial intelligence — it was drafted primarily between 2012 and 2016 — the GDPR contains several provisions that directly govern how AI systems may collect, process, store, and make decisions based on personal data.

The GDPR applies to AI systems whenever they process personal data of individuals in the EU/EEA, regardless of where the AI system operator is located. This extraterritorial reach (Article 3) means that an AI company in Silicon Valley processing EU residents' data must comply fully with GDPR requirements.

Why GDPR Matters for AI

Almost every AI system that interacts with individuals involves personal data processing — whether through training data, input data, inference outputs, or behavioral predictions. The GDPR's principles of purpose limitation, data minimization, transparency, and individual rights create fundamental constraints on how AI can be developed and deployed in any context involving EU residents.

Key GDPR Provisions Relevant to AI

| Article | Title | AI Relevance |
| --- | --- | --- |
| Art. 5 | Principles of Processing | Lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity, accountability — all constrain AI design and operation |
| Art. 6 | Lawful Basis | Every AI processing activity must have one of six legal bases; consent and legitimate interest are most common for AI |
| Art. 9 | Special Categories | AI processing of biometric data, health data, or racial/ethnic data requires explicit consent or a specific exemption |
| Art. 13-14 | Information to Data Subjects | Requires disclosure of "meaningful information about the logic involved" in automated processing |
| Art. 15 | Right of Access | Data subjects can request information about automated decision-making, including logic, significance, and consequences |
| Art. 17 | Right to Erasure | Raises complex questions about erasing data from trained ML models ("machine unlearning") |
| Art. 22 | Automated Decision-Making | Core AI provision — right not to be subject to solely automated decisions with legal/significant effects |
| Art. 25 | Data Protection by Design | Requires privacy-by-design in AI system development from the earliest stages |
| Art. 35 | Data Protection Impact Assessment | Mandatory for high-risk AI processing, including profiling and systematic monitoring |
| Art. 36 | Prior Consultation | Must consult the DPA before high-risk processing where risks cannot be mitigated |

2. Article 22 — Automated Individual Decision-Making

Article 22 is the most directly AI-relevant provision in the GDPR. It establishes a general prohibition on decisions based solely on automated processing — including profiling — that produce legal effects or similarly significantly affect individuals.

Article 22(1) — Full Text

"The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."

Scope & Conditions

Article 22 applies when all three conditions are met simultaneously:

  1. Solely automated — The decision is made without meaningful human involvement; a human who merely rubber-stamps an automated recommendation without genuine review does not provide meaningful involvement.
  2. Decision-making, including profiling — The automated processing results in a decision about an individual. Profiling alone (without a decision) is covered under other GDPR provisions.
  3. Legal or similarly significant effects — The decision has legal consequences (e.g., contract termination, denial of public benefit) or significantly affects the individual (e.g., loan denial, job rejection, insurance pricing).

Exceptions — When Automated Decisions Are Permitted

| Exception (Art. 22(2)) | Description | Additional Safeguards Required |
| --- | --- | --- |
| (a) Contract Performance | The decision is necessary for entering into or performing a contract between the data subject and the controller | Right to obtain human intervention, express point of view, contest decision (Art. 22(3)) |
| (b) EU/Member State Law | Authorized by EU or Member State law which also lays down suitable measures to safeguard the data subject's rights, freedoms, and legitimate interests | Safeguards as specified in the authorizing law |
| (c) Explicit Consent | Based on the data subject's explicit consent (not merely informed consent) | Right to obtain human intervention, express point of view, contest decision (Art. 22(3)) |

Required Safeguards Under Article 22(3)

When automated decision-making is permitted under exceptions (a) or (c), the data controller must implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least:

  • Right to obtain human intervention — A qualified person must be able to review and potentially override the automated decision
  • Right to express point of view — The individual must have an opportunity to present their case and provide additional information
  • Right to contest the decision — There must be a mechanism to challenge the automated outcome

Special Category Data & Article 22

Article 22(4) adds extra restrictions when automated decisions are based on special category data (Article 9(1)) — including racial/ethnic origin, political opinions, religious beliefs, trade union membership, genetic data, biometric data, health data, sex life, or sexual orientation. Such decisions are only permitted under Article 9(2)(a) (explicit consent) or Article 9(2)(g) (substantial public interest) with suitable safeguards.

EDPB Guidance on "Solely Automated"

The European Data Protection Board (EDPB, formerly Article 29 Working Party) issued Guidelines on Automated individual decision-making and Profiling (WP251rev.01) clarifying:

  • Human involvement must be meaningful — the person must have authority and competence to change the decision
  • A "human in the loop" who simply confirms an automated decision without real scrutiny does not remove the decision from Article 22 scope
  • The human must have access to all relevant data and genuinely consider it before making a decision
  • Article 22 applies at the moment the decision produces effects, even if profiling occurred earlier

Examples of Article 22 Decisions

| Sector | Automated Decision | Legal/Significant Effect | Art. 22 Applies? |
| --- | --- | --- | --- |
| Banking | AI credit scoring — automated loan rejection | Denial of financial service (legal effect) | ✅ Yes |
| Insurance | Algorithmic risk assessment setting premium | Significant financial impact | ✅ Yes |
| Employment | AI-powered CV screening rejecting applicant | Denial of employment opportunity | ✅ Yes |
| Social Media | Content recommendation algorithm | Personalization, not typically legal/significant | ❌ Usually not |
| Government | Automated benefit eligibility determination | Denial of public benefit (legal effect) | ✅ Yes |
| Healthcare | AI diagnostic tool suggesting treatment | Doctor makes final decision (human involvement) | ⚠️ Depends on human review |
| E-commerce | Dynamic pricing based on user profile | May significantly affect (case-by-case) | ⚠️ Context-dependent |
| Education | Automated exam grading (e.g., UK A-levels 2020) | Determines educational/career outcome | ✅ Yes |

3. Lawful Basis for AI Processing

Under Article 6(1), every processing operation involving personal data must have a lawful basis. AI systems typically require a lawful basis at multiple stages: data collection, model training, inference, and output delivery. Each stage may require a different legal basis.

| Lawful Basis (Art. 6(1)) | Application to AI | Key Considerations | Suitable For |
| --- | --- | --- | --- |
| (a) Consent | Data subject freely gives specific, informed, unambiguous consent to AI processing | Must be granular (not bundled), revocable, genuinely free (no power imbalance). Difficult at scale for training data. Must explain AI use specifically. | Consumer AI apps, personalization features, research with volunteer participants |
| (b) Contract | AI processing is necessary for performing a contract with the individual | Must be genuinely "necessary" (not merely useful). The AI service must be a core part of the contract. Cannot stretch to cover training on user data. | AI-powered services the user signed up for (e.g., AI financial advisor, AI tutor) |
| (c) Legal Obligation | Processing is required by law | Rare for AI but may apply to fraud detection mandated by financial regulations (e.g., AML/KYC) | Mandated AI monitoring (anti-fraud, tax compliance) |
| (d) Vital Interests | Processing necessary to protect someone's life | Very narrow — only in life-threatening emergencies. May apply to AI medical diagnostics in emergencies. | Emergency AI medical triage, disaster response AI |
| (e) Public Interest / Official Authority | Processing necessary for public interest tasks or official authority | Typically for government AI. Requires legal basis in national law. Subject to proportionality. | Government AI services, public health AI, law enforcement AI |
| (f) Legitimate Interest | Processing necessary for legitimate interests of the controller or a third party, not overridden by data subject rights | Requires three-part balancing test: (1) identify legitimate interest, (2) necessity, (3) balance against individual rights. Must document a Legitimate Interest Assessment (LIA). Most flexible basis for AI. | AI model training, fraud detection, cybersecurity AI, product improvement |

Training Data vs. Inference — Different Bases May Apply

A critical challenge for AI developers is that the lawful basis for collecting training data may differ from the basis for training the model, which may differ again from the basis for deploying the model to make inferences. The EDPB has emphasized that purpose limitation (Article 5(1)(b)) requires careful assessment of whether using personal data to train an AI model is compatible with the original collection purpose.

Web Scraping for AI Training

The legality of scraping publicly available personal data for AI training is heavily contested. The Italian DPA (Garante) ordered ChatGPT's temporary ban partly over this issue. The EDPB's ChatGPT Taskforce report (2024) noted that "legitimate interest" may serve as a basis but requires robust balancing tests. Publicly available does not mean freely processable under GDPR.

4. Data Protection Impact Assessments for AI

Article 35 requires a Data Protection Impact Assessment (DPIA) when processing is "likely to result in a high risk to the rights and freedoms of natural persons." AI systems frequently trigger this requirement.

When a DPIA Is Mandatory for AI

Article 35(3) specifies three scenarios that always require a DPIA, all of which commonly involve AI:

  • (a) Systematic and extensive evaluation of personal aspects — including profiling, on which decisions are based that produce legal or similarly significant effects (AI credit scoring, hiring tools, insurance pricing)
  • (b) Large-scale processing of special category data — biometric AI, health AI, AI processing racial/ethnic data
  • (c) Systematic monitoring of public areas on a large scale — AI video surveillance, facial recognition systems

Additionally, the EDPB's DPIA guidelines (WP248rev.01) identify nine criteria; processing that meets two or more of them generally requires a DPIA. AI systems commonly meet several:

| EDPB Criterion | AI Relevance |
| --- | --- |
| 1. Evaluation or scoring | ✅ Core function of many AI systems |
| 2. Automated decision-making with legal/significant effect | ✅ Any Art. 22 AI system |
| 3. Systematic monitoring | ✅ AI surveillance, behavioral tracking |
| 4. Sensitive data or highly personal data | ✅ Health AI, biometric AI |
| 5. Data processed on a large scale | ✅ Most production AI systems |
| 6. Matching or combining datasets | ✅ AI training on merged datasets |
| 7. Data concerning vulnerable data subjects | ✅ AI in education, healthcare, welfare |
| 8. Innovative use or applying new technology | ✅ Novel AI models, generative AI |
| 9. Processing preventing data subjects from exercising a right or using a service | ✅ AI gatekeeping access to services |
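The two-or-more rule lends itself to a simple internal screening step. A minimal sketch, assuming hypothetical criterion labels and a hypothetical `dpia_required` helper (not an official tool, and no substitute for legal review):

```python
# Hypothetical DPIA screening helper based on the EDPB's nine criteria
# (WP248rev.01): meeting two or more generally triggers a DPIA.
EDPB_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_legal_effect",
    "systematic_monitoring",
    "sensitive_data",
    "large_scale",
    "matching_datasets",
    "vulnerable_subjects",
    "innovative_technology",
    "blocks_right_or_service",
}

def dpia_required(met_criteria: set) -> bool:
    """Return True when the two-or-more EDPB threshold is met."""
    unknown = met_criteria - EDPB_CRITERIA
    if unknown:
        raise ValueError(f"Unrecognized criteria: {unknown}")
    return len(met_criteria) >= 2

# Example: an AI CV-screening tool scores candidates (criterion 1)
# and can block access to a job (criterion 9) -> DPIA required.
print(dpia_required({"evaluation_or_scoring", "blocks_right_or_service"}))  # True
```

In practice the output of such a check would only flag a system for a full Article 35 assessment; it cannot replace one.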

What a DPIA for AI Must Contain

Article 35(7) requires the DPIA to contain at minimum:

  1. Systematic description of the AI processing operations, purposes, and legitimate interest (if applicable)
  2. Assessment of necessity and proportionality — is the AI approach the least intrusive means?
  3. Assessment of risks to rights and freedoms — discrimination risk, accuracy, fairness, transparency
  4. Measures to address risks — safeguards, security measures, mechanisms for data subject rights

Prior Consultation (Article 36)

If the DPIA indicates that the AI processing would result in a high risk that cannot be mitigated, the controller must consult the supervisory authority before processing begins. The DPA has up to 14 weeks to respond and can prohibit the processing.

5. Data Minimization & Purpose Limitation

The GDPR's principles of data minimization (Article 5(1)(c)) and purpose limitation (Article 5(1)(b)) create fundamental tensions with modern AI development practices.

Data Minimization (Article 5(1)(c))

Personal data must be "adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed." This principle directly challenges:

  • Large-scale training data — Modern ML models improve with more data. The temptation to collect as much data as possible conflicts with minimization.
  • Feature engineering — Using proxy variables that correlate with sensitive attributes may violate minimization even if the sensitive data itself is excluded.
  • Data retention for retraining — Keeping training data indefinitely for model updates may exceed what is "necessary."

Purpose Limitation (Article 5(1)(b))

Personal data must be collected for "specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes." Key challenges:

  • Repurposing data for AI training — Data collected for customer service may not be repurposable for AI training without a compatible purpose assessment
  • Transfer learning — Pre-training on one dataset, then fine-tuning for a different purpose
  • Emergent capabilities — AI systems used for purposes not foreseen at data collection time

Anonymization vs. Pseudonymization

| Technique | GDPR Status | AI Application | Limitations |
| --- | --- | --- | --- |
| Anonymization | Falls outside GDPR scope (Recital 26) — truly anonymous data is not personal data | If training data is genuinely anonymized, GDPR does not apply to that processing | Re-identification risk from ML models. Modern AI can re-identify from supposedly anonymous data. Standard is "reasonably likely" re-identification. |
| Pseudonymization | Still personal data under GDPR (Art. 4(5)). Encouraged as a safeguard but does not remove GDPR obligations. | Reduces risk and may support legitimate interest balancing. Required in some DPIAs. | Pseudonymized data can be re-linked. AI model may memorize and output pseudonymized attributes. |
| Synthetic Data | If generated without personal data input, likely outside GDPR. If derived from personal data, may still be personal data. | Growing technique for AI training without personal data. | Generative models can reproduce training data. Quality may be lower. Risk of statistical similarity to real individuals. No clear regulatory guidance yet. |
| Differential Privacy | Technical measure that may support anonymization claims. No explicit GDPR mention. | Adding calibrated noise during training to prevent memorization of individual records. | Reduces model accuracy. Implementation complexity. No agreed epsilon threshold for GDPR compliance. |
| Federated Learning | Model trains locally — raw data not centralized. GDPR still applies to local processing. | Keeps data on-device. Only model updates shared. Supports data minimization principle. | Model updates can still leak personal data (gradient attacks). Communication overhead. |
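Pseudonymization is the most accessible of these techniques. A minimal sketch using keyed hashing (the key name and record fields are illustrative; this is one possible implementation, not a prescribed method):

```python
import hashlib
import hmac

# Sketch of pseudonymization (Art. 4(5)): direct identifiers are replaced
# with keyed hashes. The result is still personal data, because the
# controller holds the key and could re-link records; the key must be
# kept separately under appropriate security measures.
SECRET_KEY = b"store-me-separately-with-strict-access-controls"  # illustrative

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "purchases": 12}
training_row = {"user": pseudonymize(record["email"]),
                "purchases": record["purchases"]}

# The same input always maps to the same pseudonym, so records remain
# linkable across a training pipeline without exposing the raw identifier.
assert training_row["user"] == pseudonymize("jane@example.com")
assert training_row["user"] != record["email"]
```

Note that deterministic pseudonyms preserve linkability by design; where linkability itself is a risk, per-dataset keys or tokenization with a lookup table are common alternatives.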

6. Transparency & Explainability Requirements

GDPR's transparency requirements create significant obligations for AI operators. Articles 13 and 14 require controllers to provide specific information to data subjects at the point of data collection (Art. 13) or when data is obtained from other sources (Art. 14).

Mandatory Information Disclosures for AI

When automated decision-making including profiling (as per Article 22) takes place, controllers must provide:

| Requirement | Article | What Must Be Disclosed | Practical Implications for AI |
| --- | --- | --- | --- |
| Existence of automated decision-making | Art. 13(2)(f) / 14(2)(g) | The fact that automated decision-making, including profiling, takes place | Must clearly state "We use AI to make decisions about you" |
| Meaningful information about the logic | Art. 13(2)(f) / 14(2)(g) | "Meaningful information about the logic involved" | Must explain how the AI works in terms the individual can understand. Not full source code, but the general logic, key factors, and decision criteria. |
| Significance and envisaged consequences | Art. 13(2)(f) / 14(2)(g) | The significance and envisaged consequences of such processing for the data subject | Must explain what the AI decision means for the person and what outcomes are possible |
| Right of access to logic | Art. 15(1)(h) | On request, provide the same information about automated decisions | Must be prepared to explain AI decisions to individuals who ask |

EDPB Guidance on "Meaningful Information"

The Article 29 Working Party (now EDPB) clarified in WP251rev.01 that "meaningful information about the logic" does not require a full technical explanation of the algorithm. Instead, it should include:

  • The categories of data used (e.g., "your payment history, income, and employment status")
  • Why these data are relevant to the decision
  • How the profile is built and used
  • The main factors in the decision (e.g., "credit score below threshold")
  • Information that would help the individual challenge the decision

Layered Transparency Approach

The EDPB recommends a layered approach to AI transparency:

  1. Layer 1 — Short notice: "This decision was made using AI" (at point of decision)
  2. Layer 2 — Summary: Key factors, data categories, general logic (in privacy policy / AI explanation page)
  3. Layer 3 — Detailed: Full methodology description (available on request via Art. 15)
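The three layers above can be modeled as a single notice object served at different depths. A minimal sketch with illustrative field names (nothing here is prescribed by the GDPR or the EDPB):

```python
# Hypothetical layered Art. 13 notice for an automated loan decision.
# Field names and wording are illustrative only.
loan_decision_notice = {
    "layer1_short": "This loan decision was made using automated processing (AI).",
    "layer2_summary": {
        "data_categories": ["payment history", "income", "employment status"],
        "main_factors": ["credit score below approval threshold"],
        "consequences": "Application may be declined; you may request human review.",
    },
    "layer3_detailed": "Full methodology description, available on request (Art. 15).",
}

def notice_for(layer: int):
    """Return the disclosure content for the requested layer (1, 2, or 3)."""
    return {1: loan_decision_notice["layer1_short"],
            2: loan_decision_notice["layer2_summary"],
            3: loan_decision_notice["layer3_detailed"]}[layer]

print(notice_for(1))  # shown at the point of decision
```

Layer 1 would surface in the decision UI, layer 2 in the privacy notice, and layer 3 in responses to access requests.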

7. The Right to Explanation

Whether the GDPR creates a standalone "right to explanation" for AI decisions is one of the most debated questions in AI law. The answer is nuanced.

The Legal Basis

The text of Articles 13(2)(f), 14(2)(g), and 15(1)(h) requires "meaningful information about the logic involved" in automated decisions. Recital 71 goes further:

Recital 71 — Key Text

"…the data subject should have the right… to obtain an explanation of the decision reached after such assessment and to challenge the decision."

Note: Recitals are interpretive aids, not directly binding. However, they guide interpretation of the operative articles.

Academic & Regulatory Debate

| Position | Arguments | Proponents |
| --- | --- | --- |
| GDPR creates a right to explanation | Recital 71 explicitly mentions "explanation." Articles 13-15 require "meaningful information about the logic." Purposive interpretation supports explanation: without understanding, the right to contest is hollow. | Goodman & Flaxman (2017), Selbst & Powles (2017), ICO (UK), many DPAs |
| GDPR does not create a right to explanation | Recitals are not binding. "Information about the logic" is not the same as "explanation of a specific decision." The articles only require general system information, not case-specific reasoning. | Wachter, Mittelstadt & Floridi (2017) |
| Practical middle ground | GDPR requires enough information to enable meaningful challenge. This functionally requires explanation even if not explicitly named. Combined with the Art. 22(3) right to contest, some form of explanation is necessary. | EDPB, ICO (practical approach), most operational guidance |

Technical Approaches to AI Explainability

Regardless of the legal debate, controllers deploying AI must provide meaningful information. Common technical approaches:

| Method | Type | Description | GDPR Suitability |
| --- | --- | --- | --- |
| LIME (Local Interpretable Model-agnostic Explanations) | Post-hoc, local | Creates a simple local approximation around a prediction to identify key features | Good for individual explanations (Art. 15). Shows which factors influenced a specific decision. |
| SHAP (SHapley Additive exPlanations) | Post-hoc, local/global | Game-theoretic approach assigning feature importance values to each prediction | Strong for both individual and system-level transparency. Widely accepted. |
| Counterfactual Explanations | Post-hoc, local | "Your loan was denied. If your income were €5,000 higher, it would have been approved." | Excellent for the Art. 22(3) right to contest. Actionable and understandable. Recommended by the ICO. |
| Feature Importance | Global | Ranking of input features by influence on model outputs | Useful for Art. 13-14 general logic disclosure. Shows which data matters most. |
| Decision Trees / Rule Extraction | Interpretable by design | Using inherently interpretable models or extracting rules from complex models | Strongest transparency guarantee. May sacrifice accuracy. |
| Attention Visualization | Post-hoc (neural networks) | Showing which parts of the input the model attends to | Limited GDPR value — shows correlation, not causation. Useful for NLP transparency. |
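For a transparent model, a counterfactual explanation can be computed in closed form. A toy sketch for a linear credit-scoring model (the weights, threshold, and feature names are invented for illustration; real scoring systems are far more complex):

```python
# Toy linear scoring model with invented weights and threshold.
WEIGHTS = {"income_k": 1.0, "debt_k": -1.5, "years_employed": 0.5}
THRESHOLD = 50.0

def score(applicant: dict) -> float:
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def counterfactual_income(applicant: dict):
    """Smallest income increase (in k) that would flip a rejection to an
    approval, holding all other features fixed. None if already approved."""
    gap = THRESHOLD - score(applicant)
    if gap <= 0:
        return None
    return gap / WEIGHTS["income_k"]

applicant = {"income_k": 40, "debt_k": 10, "years_employed": 4}
print(score(applicant))                 # 27.0 -> below threshold, rejected
print(counterfactual_income(applicant)) # 23.0 -> "+23k income would approve"
```

The resulting statement ("if your income were 23k higher, the application would have been approved") is exactly the actionable, contestable form of explanation the table above recommends; for non-linear models the counterfactual must instead be searched for numerically.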

8. Data Subject Rights in AI Systems

Chapter III of the GDPR grants data subjects comprehensive rights that apply to AI processing. Each right raises unique challenges in the AI context.

| Right | Article | Application to AI | Challenges |
| --- | --- | --- | --- |
| Right of Access | Art. 15 | Access to personal data used in AI processing, confirmation of automated decision-making, logic disclosure | What constitutes "personal data" in a trained model? Are model weights personal data if they encode individual information? |
| Right to Rectification | Art. 16 | Correction of inaccurate personal data used for AI training or inference | Correcting training data may not change the model's behavior. Retraining may be required. "Accuracy" of inferences/predictions is debated. |
| Right to Erasure | Art. 17 | Deletion of personal data from AI training datasets and, potentially, from trained models | Machine unlearning — can a model "forget" specific data? Full retraining is costly. Approximate unlearning techniques exist but are imperfect. Memorization in LLMs is a key concern. |
| Right to Restriction | Art. 18 | Restrict processing while accuracy is contested or where processing is unlawful | May require temporarily suspending the AI system or marking certain data as restricted in the training pipeline |
| Right to Data Portability | Art. 20 | Receive personal data in a structured, machine-readable format for transfer to another controller | Applies to data "provided by" the data subject — unclear if this includes AI inferences or only input data |
| Right to Object | Art. 21 | Object to processing based on legitimate interest (Art. 6(1)(f)) or public interest (Art. 6(1)(e)), including profiling | Controller must demonstrate "compelling legitimate grounds" that override the data subject's interests to continue. The right to object to direct marketing profiling is absolute. |
| Right re: Automated Decisions | Art. 22 | Not be subject to solely automated decisions with legal/significant effects; right to human intervention, express view, contest | Core AI provision — see Section 2 above |

Machine Unlearning — The Erasure Challenge

The right to erasure (Art. 17) poses fundamental technical challenges for AI:

  • Training data deletion — Relatively straightforward to delete from a database, but the model has already learned from the data
  • Model memorization — Large language models can memorize and reproduce training data verbatim. Research (Carlini et al., 2021) demonstrated extraction of training data from GPT-2.
  • Approximate unlearning — Techniques like SISA (Sharded, Isolated, Sliced, and Aggregated training) allow targeted retraining. Gradient-based unlearning methods exist but are imperfect.
  • Full retraining — The gold standard for erasure but prohibitively expensive for large models (millions of dollars for frontier models)
  • Regulatory uncertainty — No DPA has definitively ruled on whether erasure from a trained model is required when training data is deleted
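The SISA idea mentioned above can be sketched in a few lines. Here the "model" is just a per-shard mean so the mechanism stays visible; real SISA trains one sub-model per shard and aggregates their predictions, and this is only a conceptual illustration:

```python
from statistics import mean

# SISA-style unlearning sketch: split training data into shards, fit one
# sub-model per shard, aggregate at prediction time. Erasing a record then
# requires retraining only the shard that contained it.

def train_shard(shard: list) -> float:
    return mean(shard)          # stand-in for fitting a sub-model

def predict(shard_models: list) -> float:
    return mean(shard_models)   # stand-in for aggregating sub-model outputs

shards = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0, 7.0, 8.0]]
models = [train_shard(s) for s in shards]

# Art. 17 erasure request for the record 5.0, which lives in shard 1:
shards[1].remove(5.0)
models[1] = train_shard(shards[1])  # retrain only the affected shard

print(round(predict(models), 2))    # 4.33
```

The efficiency gain is that only one shard is retrained instead of the whole ensemble; the open legal question noted above, whether erasure from the trained model is required at all, is unaffected by the technique chosen.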

9. Profiling Under GDPR

Profiling is defined in Article 4(4) as:

Article 4(4) — Definition

"'Profiling' means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements."

Three Categories of Profiling

| Category | Description | GDPR Requirements | AI Examples |
| --- | --- | --- | --- |
| 1. General Profiling | Profiling that does not produce legal/significant effects | Standard GDPR compliance: lawful basis, transparency, data minimization, DPIA if high-risk | Personalized content recommendations, user segmentation, behavioral analytics |
| 2. Profiling with Significant Effects | Profiling that significantly affects the individual but involves human decision-making | Heightened safeguards: DPIA mandatory, Art. 13-14 disclosure, right to object (Art. 21) | AI-assisted credit assessment (with human final review), AI-flagged candidates reviewed by HR |
| 3. Solely Automated Profiling with Legal/Significant Effects | Profiling under Article 22 — fully automated with legal/significant effects | All of the above + Art. 22 restrictions: prohibited unless an exception applies. Human intervention, contest, and express-view rights. | Automated loan decisions, automated insurance pricing, automated job rejection |

Sector-Specific Profiling Guidance

  • Financial services — EBA guidelines on creditworthiness assessment intersect with GDPR profiling rules. Must explain scoring factors.
  • Insurance — EIOPA guidance on AI in insurance. Profiling for risk assessment must not discriminate on protected grounds.
  • Employment — EDPB opinion on workplace monitoring. Employee profiling requires strong justification and DPIA.
  • Health — Art. 9 special category restrictions apply. Health profiling requires explicit consent or specific exemption.
  • Children — Enhanced protection under Art. 8. Profiling of children for marketing is strongly discouraged by EDPB.

10. International Data Transfers & AI

Chapter V of the GDPR restricts transfers of personal data to countries outside the EU/EEA that lack "adequate" data protection. This has profound implications for AI systems that rely on global data flows, cloud computing, and cross-border model training.

Transfer Mechanisms for AI

| Mechanism | Legal Basis | AI Application | Post-Schrems II Status |
| --- | --- | --- | --- |
| Adequacy Decision | Art. 45 | Free transfer to countries with adequate protection | Active for: Japan, South Korea, UK, Canada (commercial), Argentina, New Zealand, Israel, Switzerland, etc. EU-US Data Privacy Framework (2023) replaces the invalidated Privacy Shield. |
| Standard Contractual Clauses (SCCs) | Art. 46(2)(c) | Most common mechanism for AI cloud processing. New modular SCCs (June 2021) with 4 modules for different transfer scenarios. | Valid but require a Transfer Impact Assessment (TIA) after Schrems II. Must assess destination country surveillance laws. |
| Binding Corporate Rules (BCRs) | Art. 47 | For multinational companies with global AI infrastructure | Valid. Expensive and time-consuming to establish. Good for large enterprises with consistent cross-border AI processing. |
| Codes of Conduct | Art. 40, 46(2)(e) | Industry codes for AI data transfers | EU Cloud Code of Conduct approved. Potential for AI-specific codes. |
| Derogations | Art. 49 | Explicit consent, contract performance, public interest, vital interests, public registers, compelling legitimate interest | Narrow scope — not suitable for systematic/repeated AI data transfers. Only for occasional transfers. |

Schrems II & AI Implications

The CJEU's Schrems II judgment (C-311/18, July 2020) invalidated the EU-US Privacy Shield and established that transfers using SCCs require case-by-case assessment of the destination country's surveillance regime. For AI:

  • Cloud AI processing — Training models on US cloud providers (AWS, Google, Azure) requires SCCs + TIA showing supplementary measures protect against US surveillance
  • Model sharing — Transferring a trained model to a third country may transfer personal data if the model memorizes training data
  • EU-US Data Privacy Framework — Adopted July 2023, provides new adequacy basis for US transfers. Subject to challenge (NOYB/Schrems has indicated potential "Schrems III")

11. GDPR Enforcement Actions on AI Systems

Data protection authorities across the EU/EEA have increasingly targeted AI systems in enforcement actions. Below are the most significant cases:

| Year | Company/Entity | DPA | AI Issue | Fine/Outcome | Key Findings |
| --- | --- | --- | --- | --- | --- |
| 2022 | Clearview AI | 🇮🇹 Italy (Garante) | Facial recognition — scraping billions of images from social media for an AI facial recognition database | €20 million | No lawful basis for processing. No transparency. Violated purpose limitation. Clearview not established in the EU, but GDPR applies extraterritorially. |
| 2022 | Clearview AI | 🇬🇧 UK (ICO) | Same — facial recognition scraping | £7.5 million + deletion order | No lawful basis. Failed transparency. Biometric data processed without an Art. 9 basis. Ordered to delete UK residents' data. |
| 2022 | Clearview AI | 🇫🇷 France (CNIL) | Facial recognition scraping | €20 million | Unlawful collection of biometric data. No consent. Non-compliance with access and erasure requests. |
| 2022 | Clearview AI | 🇬🇷 Greece (HDPA) | Facial recognition scraping | €20 million | Violations of lawfulness, transparency, purpose limitation, storage limitation, data subject rights. |
| 2023 | OpenAI (ChatGPT) | 🇮🇹 Italy (Garante) | LLM — training data, automated processing, lack of age verification | Temporary ban (March-April 2023), then €15 million (2024) | No lawful basis for mass web scraping for training. Insufficient transparency. No age verification. Inaccurate outputs about individuals. |
| 2021-2022 | Dutch Tax Administration (Belastingdienst) | 🇳🇱 Netherlands (AP) | Discriminatory AI — childcare benefit fraud detection algorithm discriminated on nationality/ethnicity | €2.75 million (2021) and €3.7 million (2022) + system shutdown | Dual nationality used as a risk indicator. Violated non-discrimination principles. Led to the "toeslagenaffaire" scandal and the Dutch government's resignation (2021). |
| 2019 | Municipality of Skellefteå | 🇸🇪 Sweden (Datainspektionen, now IMY) | Facial recognition in schools — pilot to track student attendance using facial recognition | SEK 200,000 (~€20,000) | School processed biometric data of children. Consent not valid due to power imbalance. DPIA inadequate. Art. 9 violated. |
| 2022 | PimEyes | 🇫🇷 France (CNIL) | Facial recognition search engine — allows anyone to upload a face and find matches online | Formal notice (compliance required) | Processing biometric data without a valid basis. CNIL noted similarity to Clearview AI violations. |
| 2021 | Deliveroo Italy | 🇮🇹 Italy (Garante) | Algorithmic management — automated ranking/booking system for riders | €2.5 million | Profiling without adequate transparency. Art. 22 automated decisions affecting rider income. Inadequate DPIA. |
| 2021 | Foodinho (Glovo subsidiary) | 🇮🇹 Italy (Garante) | Algorithmic management of delivery riders | €2.6 million | Automated decisions on rider allocation and performance scoring without transparency or safeguards. |
| 2023 | Meta (Facebook) | 🇮🇪 Ireland (DPC), EDPB binding decision | Behavioral advertising AI — personalized ad targeting using profiling | €390 million (combined for Facebook + Instagram) | Cannot rely on "contractual necessity" for behavioral advertising. Forced consent not valid. Must offer a genuine choice for AI-driven ad profiling. |
| 2024 | Meta (Llama / AI training) | 🇮🇪 Ireland (DPC) + multiple DPAs | AI model training on user posts — planned use of Facebook/Instagram posts to train AI | Paused in the EU after DPA intervention | Meta planned to rely on legitimate interest for AI training. NOYB filed complaints. The DPC requested a pause. Debate over the lawful basis for using public social media posts. |

Enforcement Trends

  • Facial recognition — Most heavily fined AI category. Clearview AI alone faced €60M+ across four countries.
  • Generative AI — Italian ChatGPT action set precedent. EDPB ChatGPT Taskforce coordinating cross-border approach. Multiple DPAs investigating.
  • Algorithmic management — Growing focus on gig economy AI (Deliveroo, Glovo). Platform Workers Directive will add requirements.
  • AI training data — Increasing scrutiny of web scraping. Meta AI training pause signals DPA resistance to mass data use.
  • Children & AI — Swedish school case signals strong protection for minors. Age verification increasingly required.

12. GDPR vs EU AI Act — Interaction & Overlap

The EU AI Act (Regulation (EU) 2024/1689) operates alongside the GDPR, creating a dual regulatory framework for AI in the EU. Article 2(7) of the AI Act states that it applies without prejudice to EU data protection law: wherever an AI system processes personal data, the GDPR continues to apply in full.

  • Focus. GDPR: personal data protection. AI Act: AI system safety and fundamental rights. Complementary: the GDPR protects data; the AI Act protects against AI harms.
  • Scope. GDPR: any processing of personal data. AI Act: AI systems placed on the market or put into service in the EU. They overlap whenever an AI system processes personal data (most cases).
  • Risk approach. GDPR: DPIA for high-risk processing. AI Act: four-tier risk classification (unacceptable, high, limited, minimal). Both require risk assessment but use different frameworks.
  • Transparency. GDPR: Arts. 13-14 information about automated decisions. AI Act: Art. 50 transparency for AI-generated content, deepfakes, and chatbots. The AI Act adds transparency duties beyond the GDPR's scope, covering AI that processes no personal data.
  • Human oversight. GDPR: Art. 22 right to human intervention in automated decisions. AI Act: Art. 14 human oversight requirement for high-risk AI. The AI Act broadens human oversight beyond personal-data decisions.
  • Bias/discrimination. GDPR: implicit, via lawful processing and special categories (Art. 9). AI Act: explicit, with Art. 10 bias testing and Annex III high-risk use cases. The AI Act is more prescriptive on bias; the GDPR addresses discrimination through the data-processing lens.
  • Enforcement. GDPR: DPAs (30+ national authorities). AI Act: EU-level AI Office plus national authorities (which may be DPAs). Many Member States may designate their DPA as an AI Act supervisory authority, creating a single enforcement body.
  • Fines. GDPR: up to €20M or 4% of global turnover. AI Act: up to €35M or 7% of global turnover. Both can apply simultaneously, so cumulative fines are possible.
  • Extraterritorial reach. GDPR: yes (Art. 3). AI Act: yes (Art. 2). Both reach non-EU entities serving the EU market.
  • Training data. GDPR: lawful basis required for any personal data use. AI Act: Art. 10 data governance requirements for training, validation, and testing data. AI Act Art. 10(5) permits processing special category data where strictly necessary for bias detection and correction, a potential point of tension with the GDPR.

Key Point: Dual Compliance Required

Organizations deploying AI in the EU must comply with both GDPR and the AI Act. A system can be fully compliant with the AI Act's technical requirements but still violate GDPR (e.g., no lawful basis for training data). Conversely, GDPR compliance alone does not satisfy AI Act obligations (e.g., conformity assessment for high-risk systems).
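The dual-compliance point can be made concrete with a toy gap check that evaluates the two regimes independently, so that a pass under one never masks a failure under the other. This is a sketch, not a legal test: the attribute names and the handful of rules below are illustrative assumptions.

```python
def gdpr_gaps(system: dict) -> list:
    """Check a few GDPR preconditions for an AI system processing personal data."""
    gaps = []
    if system["processes_personal_data"] and not system.get("lawful_basis"):
        gaps.append("GDPR Art. 6: no lawful basis")
    if system.get("high_risk_processing") and not system.get("dpia_done"):
        gaps.append("GDPR Art. 35: DPIA missing")
    return gaps

def ai_act_gaps(system: dict) -> list:
    """Check a few AI Act preconditions for a high-risk system."""
    gaps = []
    if system.get("ai_act_risk") == "high":
        if not system.get("conformity_assessed"):
            gaps.append("AI Act: conformity assessment missing")
        if not system.get("human_oversight"):
            gaps.append("AI Act Art. 14: human oversight missing")
    return gaps

system = {"processes_personal_data": True, "lawful_basis": "consent",
          "high_risk_processing": True, "dpia_done": True,
          "ai_act_risk": "high", "conformity_assessed": False,
          "human_oversight": True}
assert gdpr_gaps(system) == []    # GDPR checks pass...
assert ai_act_gaps(system) != []  # ...yet the AI Act still blocks deployment
```

Keeping the two checks as separate functions mirrors the legal position: deployment requires both lists to come back empty.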

13. Practical Compliance Checklist for AI

The following checklist covers key GDPR compliance requirements for AI system development and deployment:

Pre-Development Phase

  • Identify lawful basis (Art. 6): determine the lawful basis for each processing stage (collection, training, inference, storage).
  • Conduct a DPIA (Art. 35): perform the DPIA before processing begins if the AI involves profiling, special categories, or systematic monitoring.
  • Data protection by design (Art. 25): build privacy into the AI architecture from the start: minimize data, pseudonymize, limit retention.
  • Record of processing (Art. 30): document all AI processing activities, purposes, data categories, recipients, and retention periods.
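The Art. 30 record is easier to audit and version when kept as structured data rather than prose. A minimal Python sketch of one record entry; the field names are illustrative assumptions, not a mandated schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ProcessingRecord:
    """One record-of-processing entry for a single AI processing stage (Art. 30)."""
    activity: str                 # e.g. "model training"
    purpose: str                  # documented purpose (Art. 5(1)(b))
    lawful_basis: str             # Art. 6 basis chosen for this stage
    data_categories: list = field(default_factory=list)
    recipients: list = field(default_factory=list)
    retention: str = ""           # retention period or deletion criteria
    transfers: str = "none"       # third-country transfers, if any

record = ProcessingRecord(
    activity="model training",
    purpose="fraud-detection model development",
    lawful_basis="legitimate interest (Art. 6(1)(f))",
    data_categories=["transaction metadata"],
    recipients=["internal ML team"],
    retention="raw data deleted 90 days after training run",
)

# asdict() flattens the record for export to a register or audit report
assert asdict(record)["lawful_basis"].startswith("legitimate interest")
```

One entry per processing stage (collection, training, inference, storage) keeps the lawful-basis analysis from the first checklist row traceable in the register.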

Training Phase

  • Data minimization (Art. 5(1)(c)): use only the data necessary for training and document why each data category is needed.
  • Purpose compatibility (Art. 5(1)(b)): assess whether reusing existing data for AI training is compatible with the original collection purpose.
  • Special category protection (Art. 9): identify and protect any special category data in training sets; obtain explicit consent where required.
  • International transfers (Chapter V): if training runs in a non-EU cloud, ensure standard contractual clauses (SCCs) or an adequacy decision are in place, plus a transfer impact assessment (TIA).
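The minimization and pseudonymization steps above can be sketched as a pre-training filter: keep only the fields documented as necessary and replace direct identifiers with keyed hashes so records stay linkable without exposing who they concern. The field names and the salt handling are illustrative assumptions; in practice the salt belongs in a secrets store, held separately from the data.

```python
import hashlib

# Fields documented as necessary for training (Art. 5(1)(c)) -- assumed example set
NEEDED_FIELDS = {"age_band", "transaction_amount", "merchant_category"}
SALT = b"rotate-me-and-store-separately"  # assumption: managed outside the dataset

def pseudonymize(value: str) -> str:
    """Keyed hash: stable reference per subject, no direct identifier in output."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

def minimize(record: dict) -> dict:
    """Drop fields not documented as necessary; pseudonymize the subject key."""
    out = {k: v for k, v in record.items() if k in NEEDED_FIELDS}
    out["subject_ref"] = pseudonymize(record["customer_id"])
    return out

raw = {"customer_id": "C-1001", "email": "a@example.com",
       "age_band": "30-39", "transaction_amount": 42.0,
       "merchant_category": "groceries"}
clean = minimize(raw)
assert "email" not in clean and "customer_id" not in clean
```

Note that keyed hashing is pseudonymization, not anonymization: the data remains personal data under the GDPR as long as re-identification via the salt is possible.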

Deployment Phase

  • Transparency notice (Arts. 13-14): inform individuals about the AI's use, the logic involved, and its significance and consequences; use a layered notice.
  • Art. 22 compliance (Art. 22): if decisions are solely automated with legal or similarly significant effects, ensure an exception applies, implement safeguards, and enable human intervention.
  • Data subject rights (Arts. 15-22): build mechanisms for access, rectification, erasure, restriction, portability, and objection.
  • Accuracy (Art. 5(1)(d)): monitor AI accuracy, validate regularly, and keep a process to rectify inaccurate outputs.
  • Security (Art. 32): implement appropriate security for the AI system: protect training data, model integrity, and output confidentiality.
  • Breach notification (Arts. 33-34): notify the DPA within 72 hours of becoming aware of a breach involving AI-processed personal data; notify affected individuals if the risk is high.
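The Art. 22 safeguard from the checklist can be wired directly into the serving path: when a solely automated decision would produce an adverse outcome with legal or similarly significant effects, the system withholds the automatic result and queues the case for human review. The score threshold and the in-memory queue below are illustrative assumptions standing in for a real model and case-management system.

```python
from dataclasses import dataclass

REVIEW_QUEUE = []        # stand-in for a real case-management system
DENIAL_THRESHOLD = 0.8   # illustrative model-score cutoff for an adverse outcome

@dataclass
class Decision:
    outcome: str               # "approved", "denied", or "pending_human_review"
    reviewed_by_human: bool

def decide(score: float, significant_effect: bool) -> Decision:
    """Gate solely automated adverse decisions behind human intervention (Art. 22(3))."""
    adverse = score >= DENIAL_THRESHOLD
    if adverse and significant_effect:
        REVIEW_QUEUE.append(score)  # a human reviewer makes the final call
        return Decision("pending_human_review", reviewed_by_human=False)
    return Decision("denied" if adverse else "approved", reviewed_by_human=False)

d = decide(0.93, significant_effect=True)
assert d.outcome == "pending_human_review" and len(REVIEW_QUEUE) == 1
```

The gate only fires for adverse outcomes with significant effects, matching the Art. 22 trigger; favorable or trivial decisions can remain fully automated.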

14. References & Official Sources

Key CJEU Judgments

  • Schrems II (C-311/18) — Data Protection Commissioner v Facebook Ireland and Schrems
    curia.europa.eu — C-311/18
    Invalidated the EU-US Privacy Shield. SCC-based transfers require a case-by-case assessment of third-country surveillance laws.
  • SCHUFA (C-634/21) — Automated Credit Scoring Under Art. 22
    curia.europa.eu — C-634/21
    December 2023 — CJEU ruled that SCHUFA credit scoring constitutes automated decision-making under Art. 22 GDPR when the score plays a determining role in a third party's decision (e.g., bank denying loan based on SCHUFA score).
  • Meta Platforms (C-252/21) — Use of Personal Data for Advertising
    curia.europa.eu — C-252/21
    July 2023 — The CJEU held that data minimization applies to personalized advertising and that special category data inferred from user behavior falls under Art. 9.

Academic References

  • Goodman, B., & Flaxman, S. (2017). "European Union regulations on algorithmic decision-making and a 'right to explanation'." AI Magazine, 38(3), 50-57.
    doi.org/10.1609/aimag.v38i3.2741
  • Wachter, S., Mittelstadt, B., & Floridi, L. (2017). "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation." International Data Privacy Law, 7(2), 76-99.
    doi.org/10.1093/idpl/ipx005
  • Selbst, A.D., & Powles, J. (2017). "Meaningful information and the right to explanation." International Data Privacy Law, 7(4), 233-242.
    doi.org/10.1093/idpl/ipx022
  • Carlini, N., et al. (2021). "Extracting Training Data from Large Language Models." USENIX Security Symposium.
    arxiv.org/abs/2012.07805
  • Bourtoule, L., et al. (2021). "Machine Unlearning." IEEE S&P 2021.
    arxiv.org/abs/1912.03817