Facial recognition technology (FRT) uses AI to identify or verify individuals by analyzing facial features from images or video. It is among the most regulated and contested AI applications globally, touching on fundamental rights to privacy, non-discrimination, and freedom of assembly.
1.1 Types of Biometric AI

| Technology | Description | Use Cases | Regulatory Concern |
|---|---|---|---|
| Facial Recognition (1:1) | Verification: compares a face to a known reference image | Phone unlock; border control; banking KYC | Consent; data security; accuracy |
| Facial Recognition (1:N) | Identification: searches a face against a database of many | Law enforcement suspect identification; missing persons | Mass surveillance; false positives; civil liberties |
| Real-Time Remote Biometric ID | Live identification of individuals in public spaces | Live surveillance cameras; event security; smart cities | Chilling effect; freedom of assembly; disproportionate surveillance |
| Emotion Recognition | AI inferring emotional states from facial expressions | Workplace monitoring; hiring; education | Contested scientific validity; prohibited in EU workplaces and schools |
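The 1:1 versus 1:N distinction above is, at bottom, a difference in how many comparisons are scored against a similarity threshold. A minimal sketch, assuming cosine similarity over precomputed face embeddings; the 0.6 threshold and all data here are illustrative assumptions, not any vendor's actual pipeline:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_1_to_1(probe, reference, threshold=0.6) -> bool:
    """1:1 verification: is the probe the same person as one known reference?"""
    return cosine_similarity(probe, reference) >= threshold

def identify_1_to_n(probe, database, threshold=0.6):
    """1:N identification: search a whole gallery; return the best match or None.
    Every extra comparison is another chance for a false positive, which is
    why 1:N raises sharper accuracy and civil-liberties concerns than 1:1."""
    scores = {name: cosine_similarity(probe, emb) for name, emb in database.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else None
```

The design point: verification answers one yes/no question per user, while identification scans everyone in the gallery, so its error rates scale with database size.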
Documented Bias: Multiple studies, including landmark research by NIST (Face Recognition Vendor Test, 2019) and MIT researcher Joy Buolamwini, have demonstrated that facial recognition systems exhibit significantly higher error rates for women, people of color, and particularly dark-skinned women. NIST found false positive rates up to 100 times higher for African and East Asian faces compared to Eastern European faces in some algorithms. These findings have been central to legislative efforts restricting FRT use.
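The demographic differentials NIST reports are measured as false match rates (FMR): the fraction of impostor comparisons the system wrongly accepts, computed per demographic group. A minimal sketch of that measurement; the trial tuple format is a hypothetical simplification of FRVT's protocol:

```python
from collections import defaultdict

def false_match_rate_by_group(trials):
    """trials: iterable of (group, is_same_person, system_said_match).
    Per-group FMR = false matches / impostor comparisons, the metric in
    which FRVT (2019) found up to 100x differentials between groups."""
    false_matches = defaultdict(int)
    impostor_trials = defaultdict(int)
    for group, same, matched in trials:
        if not same:                      # impostor pair: should NOT match
            impostor_trials[group] += 1
            if matched:                   # system wrongly accepted the pair
                false_matches[group] += 1
    return {g: false_matches[g] / n for g, n in impostor_trials.items()}
```

A large gap between groups in this dictionary is exactly the "demographic differential" the legislation responds to.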
2. European Union
2.1 EU AI Act — Biometric Provisions
The EU AI Act contains the world’s most comprehensive restrictions on biometric AI systems:
Prohibited Biometric Practices (Article 5)

| Prohibition | Description | Exceptions |
|---|---|---|
| Real-Time Remote Biometric ID in Public | Live facial recognition in publicly accessible spaces for law enforcement purposes | Three narrow exceptions: (1) targeted search for specific crime victims/missing children; (2) prevention of a specific imminent threat to life or terrorist attack; (3) identification of suspects for serious crimes (listed in an Annex). All require prior judicial authorization (or post-hoc authorization within 24 hours in urgent cases). |
| Emotion Recognition (Workplace/Education) | AI systems inferring emotions of persons in workplace or educational settings | Medical or safety purposes only (e.g., detecting drowsiness in safety-critical roles) |
| Untargeted Facial Image Scraping | Scraping facial images from the internet or CCTV to build facial recognition databases | No exceptions: absolute prohibition (targeting Clearview AI-type practices) |
| Biometric Categorization (Sensitive Attributes) | Categorizing persons based on biometric data to infer race, political opinions, religion, or sexual orientation | Labeling of lawfully acquired biometric data in law enforcement (narrow) |
High-Risk Biometric Systems (Annex III)

Biometric systems that are not prohibited are classified as high-risk under the AI Act if used for:

Remote biometric identification: Post (non-real-time) remote identification, and real-time use falling outside the prohibited scenarios
Biometric categorization: Categorizing persons based on biometric data (non-prohibited categories)
Emotion recognition: Outside workplace/education settings (where not prohibited)
High-risk classification triggers: conformity assessment, risk management, data governance, transparency, human oversight, accuracy requirements, and registration in EU database.
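The Article 5 / Annex III split can be read as a triage rule. A toy sketch, with invented dictionary keys standing in for the categories summarized in this section; this is a simplification for illustration, not the Act's full legal test:

```python
def ai_act_biometric_tier(system: dict) -> str:
    """Toy triage of a biometric system against the Article 5 prohibitions
    and Annex III high-risk categories summarized above. Keys are illustrative."""
    # Article 5 prohibitions (checked first: they trump everything else)
    if (system.get("realtime_public_remote_id")
            and system.get("law_enforcement")
            and not system.get("narrow_exception_authorized")):
        return "prohibited"
    if (system.get("emotion_recognition")
            and system.get("context") in {"workplace", "education"}
            and not system.get("medical_or_safety")):
        return "prohibited"
    if system.get("untargeted_scraping"):
        return "prohibited"          # absolute ban, no exceptions
    if system.get("sensitive_categorization"):
        return "prohibited"
    # Annex III: remaining biometric uses are high-risk
    if (system.get("remote_id") or system.get("biometric_categorization")
            or system.get("emotion_recognition")):
        return "high-risk"           # triggers conformity assessment, oversight, etc.
    return "unclassified"
```

Ordering matters in the sketch: a system is tested against the prohibitions before the high-risk categories, mirroring how the Act's bans take precedence over its conformity regime.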
2.2 GDPR — Biometric Data
The GDPR provides additional protections for biometric data:
Article 9: Biometric data for identification is “special category” data — processing prohibited unless explicit consent or specific legal basis (e.g., substantial public interest, vital interests)
Article 22: Right not to be subject to solely automated decisions with legal effects — applies to biometric identification decisions
Article 35: Mandatory Data Protection Impact Assessment (DPIA) for systematic biometric processing
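These GDPR provisions can likewise be expressed as a compliance check. A hedged sketch: the function name and parameters are illustrative, and the set of Article 9(2) grounds below is only the subset named above, not the full list:

```python
# Illustrative subset of Art. 9(2) grounds mentioned in this section
ARTICLE_9_BASES = {
    "explicit_consent",
    "substantial_public_interest",
    "vital_interests",
}

def gdpr_biometric_check(identification_purpose: bool,
                         legal_basis: str = "",
                         systematic: bool = False) -> list:
    """Flag the GDPR obligations sketched in 2.2 for a biometric system."""
    issues = []
    if identification_purpose and legal_basis not in ARTICLE_9_BASES:
        issues.append("Art. 9: special-category processing without a valid ground")
    if systematic:
        issues.append("Art. 35: DPIA required for systematic biometric processing")
    return issues
```

An empty return list means no issue was flagged by this (deliberately partial) check, not that the processing is lawful overall.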
2.3 Enforcement Actions

| Case | Authority | Fine | Issue |
|---|---|---|---|
| Clearview AI (Italy) | Garante | €20 million | Unlawful processing of biometric data; scraping Italian residents' facial images; no legal basis; lack of transparency |
| Clearview AI (France) | CNIL | €20 million | Same violations under French law; failure to comply with a CNIL enforcement notice |
| Clearview AI (UK) | ICO | £7.55 million | Processing UK residents' data without consent; failure to provide right of access; no lawful basis |
| Clearview AI (Greece) | HDPA | €20 million | Unlawful biometric data processing; non-compliance with GDPR |
| Clearview AI (Austria) | DSB | Order to cease processing | GDPR violations related to biometric data scraping; Austria-specific cease-and-desist order |
3. United States — Federal

The US has no comprehensive federal biometric or facial recognition law, though several bills have been introduced and sector-specific rules exist.
3.1 Federal Legislation (Proposed)
| Bill | Status | Key Provisions |
|---|---|---|
| Facial Recognition and Biometric Technology Moratorium Act | Introduced 2020, 2021, 2023; not passed | Federal moratorium on government use of facial recognition; ban on federal funds for FRT; Congressional authorization required to resume |
| FACE Act (Facial and Accurate Centralized Ethics) | Introduced 2023; in committee | Warrant requirement for federal FRT use; prohibition on real-time mass surveillance; accuracy standards |
| American Data Privacy and Protection Act (ADPPA) | Passed House committee 2022; stalled | Would have included biometric data as "covered data"; consent requirements; opt-out rights |
| REAL ID Act (implications) | Enacted 2005; enforcement ongoing | Standardized ID requirements; DHS facial recognition for identity verification; state DMV photo databases |
3.2 Federal Agency Policies
FBI — Next Generation Identification (NGI): Contains 641M+ facial images; used for criminal investigation; no specific federal law authorizing or restricting; GAO has criticized lack of accuracy testing and oversight
TSA: Expanding facial recognition at airport security checkpoints; travelers can opt out (as of 2025); controversy over expanding to non-airport settings
ICE: Uses Clearview AI and state DMV databases; criticized for use in immigration enforcement; some states restricting DMV data access
IRS: Abandoned ID.me facial recognition for tax filing (2022) after public backlash; switched to non-biometric identity verification
3.3 Executive Action
Executive Order 14110 (Oct 2023): Directs federal agencies to mitigate AI risks including biometric systems; requires equity assessments; agencies must report on FRT use
OMB Guidance M-24-10: Requires federal agencies to implement safeguards for rights-impacting AI including facial recognition; opt-out provisions where feasible
4. United States — State & Local
In the absence of federal legislation, US states and cities have enacted a patchwork of biometric and facial recognition laws ranging from comprehensive bans to targeted regulations.
4.1 State Biometric Privacy Laws

| State | Law | Year | Key Provisions | Private Right of Action |
|---|---|---|---|---|
| Illinois | Biometric Information Privacy Act (BIPA) | 2008 | Informed consent before collection; retention schedule required; prohibition on profiting from biometric data; written policy disclosure | Yes ($1,000 per negligent violation; $5,000 per intentional/reckless violation) |
| Washington | HB 1493 (RCW 19.375) | 2017 | Notice and consent for commercial biometric use; enrollment database restrictions | No (AG enforcement only) |
| California | CCPA/CPRA (biometric provisions) | 2018/2020 | Biometric data as sensitive personal information; opt-out rights; disclosure requirements; data minimization | Limited (data breach claims only; AG and CPPA enforcement for other violations) |
| New York | SHIELD Act + NYC Biometric Identifier Info Law | 2019/2021 | SHIELD: biometric data breach notification; NYC: commercial establishments must disclose biometric collection with signage | NYC law: yes ($500 per signage violation; $5,000 for misuse) |
| Colorado | Colorado Privacy Act (biometric) | 2023 | Biometric data as sensitive data; consent required for processing; opt-out for profiling; DPIA required | No (AG enforcement) |
| Maryland | Facial Recognition Transparency Act | 2024 | Law enforcement must disclose FRT use; judicial authorization required; annual reporting; accuracy standards | No (AG enforcement) |
4.2 Illinois BIPA — Landmark Litigation
BIPA’s Impact: Illinois BIPA is the most litigated biometric law in the world. Key cases have established that: (1) every scan without consent is a separate violation (Cothron v. White Castle, IL Supreme Court 2023); (2) standing does not require actual harm (Rosenbach v. Six Flags, IL Supreme Court 2019); and (3) claims accrue per scan, not per person. These rulings have created massive class action exposure. Meta (Facebook) settled for $650M; Google settled for $100M; TikTok settled for $92M. Total BIPA settlements exceed $5 billion.
| Case | Year | Settlement/Ruling | Significance |
|---|---|---|---|
| In re Facebook Biometric Info Privacy Litigation | 2021 | $650 million | Largest biometric privacy settlement; Tag Suggestions feature; per-user payout ~$350 |
| In re Google Biometric Info Privacy Litigation | 2022 | $100 million | Google Photos face-grouping feature; applied BIPA to cloud photo services |
| In re TikTok Inc. BIPA Litigation | 2021 | $92 million | Facial filter features; extended BIPA to social media engagement tools |
| Rosenbach v. Six Flags | 2019 | IL Supreme Court ruling | No actual harm needed for BIPA standing; a statutory violation alone is sufficient |
| Cothron v. White Castle | 2023 | IL Supreme Court ruling | Each biometric scan is a separate BIPA violation; damages accrue per scan, not per person |
| Rogers v. BNSF Railway | 2022 | $228 million (jury verdict) | First BIPA jury trial; fingerprint scanning of truck drivers; largest single BIPA verdict |
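Cothron's per-scan accrual makes class exposure simple arithmetic: violations multiply across every scan of every class member, at BIPA's statutory rates of $1,000 per negligent and $5,000 per intentional or reckless violation. A back-of-envelope sketch; the class size and scan counts in the example are hypothetical:

```python
def bipa_exposure(class_size: int, scans_per_person: int,
                  negligent_rate: float = 1_000.0,
                  reckless_rate: float = 5_000.0) -> dict:
    """Rough BIPA exposure under Cothron v. White Castle: damages accrue
    per scan, not per person. Rates are BIPA's statutory amounts of
    $1,000 (negligent) and $5,000 (intentional/reckless) per violation."""
    violations = class_size * scans_per_person
    return {
        "violations": violations,
        "negligent_exposure": violations * negligent_rate,
        "reckless_exposure": violations * reckless_rate,
    }

# A hypothetical 1,000-employee class scanned twice daily for a year:
# 1,000 * 730 = 730,000 violations, i.e. $730M negligent / $3.65B reckless,
# which is why per-scan accrual created "massive class action exposure".
```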
4.3 City/County Facial Recognition Bans

| Jurisdiction | Year | Scope |
|---|---|---|
| San Francisco, CA | 2019 | First US city to ban government use of FRT; police and all city agencies prohibited |
| Oakland, CA | 2019 | Government use ban; aligned with San Francisco |
| Berkeley, CA | 2019 | Government use ban |
| Somerville, MA | 2019 | Government use ban; first East Coast city |
| Boston, MA | 2020 | Government use ban; largest East Coast city to ban |
| Portland, OR | 2020 | Most comprehensive: bans both government AND private-sector use in public accommodations |
| Minneapolis, MN | 2021 | Government use ban |
| New Orleans, LA | 2020 | Government use ban (passed alongside predictive policing ban) |
| King County, WA | 2021 | Government use ban; most populous county to ban (~2.3M people) |
| New York City, NY | 2021 | Commercial establishments must post signage and cannot sell biometric data (Biometric Identifier Info Law) |
| Baltimore, MD | 2021 | Government use ban |
4.4 State-Level FRT Restrictions
Virginia (2021): Banned local police from using FRT unless authorized by state legislature; state police exempted; education ban for student surveillance
Vermont (2020): Prohibited law enforcement from using FRT; cannot request private FRT results
Massachusetts (2021): Statewide moratorium on government FRT use; exceptions for registry of motor vehicles comparison and NCMEC missing children
Washington State (2020): SB 6280: most detailed state FRT governance; accountability report; testing requirements; warrant requirement for ongoing surveillance; meaningful human review
5. United Kingdom
5.1 Legal Framework
The UK does not have FRT-specific legislation but regulates through existing frameworks:
UK GDPR + Data Protection Act 2018: Biometric data is “special category” data; requires explicit consent or specific legal basis; DPIA mandatory for systematic biometric processing
Surveillance Camera Code of Practice (2013): 12 guiding principles for surveillance camera use; includes guidance on facial recognition; applies to public authorities
Human Rights Act 1998 (Article 8): Right to respect for private and family life; FRT use must be necessary and proportionate; basis for legal challenges
Equality Act 2010: Relevant where FRT demonstrates bias against protected characteristics
5.2 Metropolitan Police & South Wales Police
UK police forces have been the most prominent users of live facial recognition (LFR) in a democratic country:
R (Bridges) v. South Wales Police (2020): Court of Appeal ruled South Wales Police LFR use was unlawful due to: insufficient legal framework; too much discretion to officers; no adequate equality impact assessment. Landmark case establishing that FRT use engages Article 8 rights.
Metropolitan Police: Resumed LFR deployments after Bridges decision with additional safeguards; watchlists limited to wanted suspects; mandatory deployment protocols; independent review panel
College of Policing: Authorized Professional Practice for LFR published 2023; establishes standards for police FRT use across England and Wales
5.3 ICO Enforcement
Clearview AI (2022): ICO fined Clearview £7.55 million and ordered deletion of UK residents’ data; found violations of UK GDPR Articles 5, 6, 9, 14, 15, 27
Facewatch (2024): ICO investigated retailer facial recognition system Facewatch; issued enforcement notice requiring DPIA improvements and purpose limitation
6. China
China presents a paradox: extensive government deployment of facial recognition alongside growing legal protections for individuals in the private sector.
6.1 Government Use
Skynet/Sharp Eyes: National surveillance network with hundreds of millions of cameras; facial recognition integral; used for public safety, traffic management, and social monitoring
Social Credit System: Facial recognition linked to individual tracking in some pilot cities; used for jaywalking enforcement, building access, and identity verification
Xinjiang: Extensive biometric surveillance of Uyghur population; facial recognition, DNA, and iris scanning; subject of international condemnation and sanctions
6.2 Private Sector Restrictions
China has progressively restricted private sector facial recognition use:
| Law/Regulation | Year | Key Provisions for FRT |
|---|---|---|
| Personal Information Protection Law (PIPL) | 2021 | Facial data is "sensitive personal information"; separate consent required; purpose limitation; necessity principle; DPIA mandatory |
| "Face Recognition" Judicial Interpretation | 2021 | Supreme People's Court rules: businesses cannot force customers to use FRT as the sole ID method; consent must be voluntary; individuals can sue for violations |
| GB/T 41772-2022 Standard | 2022 | National standard for facial recognition in information security; data protection requirements; accuracy benchmarks |
| Provisions on Face Recognition Technology (Draft) | 2023 | CAC draft: purpose limitation; necessity requirement; alternatives must be offered; no forced FRT for services; restrictions on public space deployment |
6.3 Landmark Cases
Guo Bing v. Hangzhou Safari Park (2020): First facial recognition lawsuit in China; professor sued after park required facial scan for entry (replacing fingerprint); court ruled park violated consumer rights; ordered deletion of facial data. Established precedent that businesses cannot unilaterally switch to FRT.
7. Other Jurisdictions
7.1 Canada
PIPEDA: Biometric data as sensitive personal information; meaningful consent required; proportionality principle
Clearview AI Investigation (2021): Federal and provincial privacy commissioners found Clearview violated PIPEDA; collection without consent; surveillance constituted mass identification; recommended cessation and deletion
RCMP Use: RCMP acknowledged using Clearview AI (2020); privacy commissioner found use violated Privacy Act; RCMP committed to developing governance framework
7.2 Australia
Privacy Act 1988: Biometric data falls under sensitive information provisions; consent required for collection; APPs apply
Clearview AI (2021): OAIC found Clearview violated Australian Privacy Act; interfered with privacy of Australians; ordered deletion of Australian facial data
Identity-Matching Services Bill (withdrawn 2019): Proposed national facial recognition database; withdrawn after criticism from parliamentary committee
7-Eleven Case (2021): OAIC found 7-Eleven’s in-store facial recognition (via customer surveys) violated APPs; covert collection of biometric data without consent
7.3 Japan
APPI (2022 Amendment): Biometric data treated as personal information requiring consent; specific provisions for facial recognition data handling
JR East Station Cameras (2021): Controversy over JR East railway using facial recognition to identify released offenders; suspended after public backlash; highlighted gap in Japanese FRT regulation
7.4 India
Digital Personal Data Protection Act (2023): Classifies biometric data; consent requirements; children’s data restrictions including biometric
Automated Facial Recognition System (AFRS): National Crime Records Bureau developing national FRT database; privacy concerns raised; no specific legislation governing its use
DigiYatra: Voluntary FRT for airport boarding; opt-in system; data deleted after 24 hours; privacy framework developed
7.5 Brazil
LGPD: Biometric data as sensitive data; specific consent required; ANPD enforcement
São Paulo Metro FRT (2018): Court ordered São Paulo Metro to stop using facial recognition for advertising; violated data protection and consumer rights
Public Security Use: Growing deployment in Brazilian cities; Bahia, Rio de Janeiro, São Paulo using FRT for policing; civil society challenges ongoing
7.6 South Korea
PIPA: Biometric data (including facial) is sensitive information; explicit consent required; purpose limitation strictly enforced
Facial Recognition Ban in Public (proposed): 2024 legislative proposals to restrict government FRT use in public spaces; strong public opposition to surveillance
8. Law Enforcement Use
Law enforcement use of facial recognition is the most contentious application, balancing public safety against civil liberties, privacy, and the risk of discriminatory policing.
8.1 Use Cases
| Use Case | Description | Civil Liberties Concern | Regulatory Status |
|---|---|---|---|
| Live Surveillance (LFR) | Real-time identification against watchlists in public spaces | Mass surveillance; chilling effect on protest and assembly; racial profiling | Banned (EU; most US cities with FRT laws) |
| Post-Event Investigation | Searching recorded footage after a crime occurs | Scope creep; retention of innocent people's data; accuracy concerns | Restricted (warrant requirements in some jurisdictions) |
| Mugshot Database Search | Comparing suspect images against existing arrest photo databases | Perpetuates racial bias in policing; databases over-represent minorities | Generally permitted (varies by jurisdiction) |
| Missing Persons / Child Exploitation | Identifying missing children or trafficking victims | Generally supported; concern about mission creep from these exceptions | Permitted (often cited as an exception in bans) |
| Border Control | Identity verification at borders and airports | Traveler privacy; data retention; accuracy for diverse populations | Generally permitted (CBP, Frontex, etc.) |
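The accuracy concerns above are sharpened by base rates: when genuine watchlist matches are rare in a scanned crowd, even an accurate system produces mostly false alarms. A sketch with assumed, illustrative error rates (not measured figures for any deployed system):

```python
def watchlist_precision(crowd_size: int, watchlisted_in_crowd: int,
                        true_positive_rate: float,
                        false_positive_rate: float) -> float:
    """Expected fraction of FRT alerts that are genuine matches, for a
    live-surveillance deployment. All parameters are assumptions."""
    true_alerts = watchlisted_in_crowd * true_positive_rate
    false_alerts = (crowd_size - watchlisted_in_crowd) * false_positive_rate
    return true_alerts / (true_alerts + false_alerts)

# With 100,000 faces scanned, 10 genuinely watchlisted people present,
# 99% true positive rate, and a 0.1% false positive rate:
# ~9.9 true alerts vs ~100 false ones, so precision is under 10% and
# most people flagged are innocent, despite "99% accuracy".
```

This base-rate effect, not just raw error rates, is why live surveillance draws the strongest regulatory response among the use cases above.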
8.2 Wrongful Arrest Cases
Documented Wrongful Arrests: As of 2025, at least seven cases of wrongful arrest based on faulty facial recognition matches have been publicly documented in the US. All involved Black individuals. These cases have been central to legislative efforts and have demonstrated the real-world consequences of deploying inaccurate technology.
| Case | Year | Location | Details |
|---|---|---|---|
| Robert Williams | 2020 | Detroit, MI | Arrested based on a facial recognition match from shoplifting footage; held 30 hours; charges dropped; ACLU lawsuit filed |
| Michael Oliver | 2019 | Detroit, MI | Falsely identified as a carjacking suspect; charges dropped when alibi confirmed; victim of an FRT false match |
| Nijeer Parks | 2019 | Woodbridge, NJ | Arrested for shoplifting and assault based on an FRT match; spent 10 days in jail; all charges dismissed; was 30 miles away at the time |
| Porcha Woodruff | 2023 | Detroit, MI | Eight months pregnant; arrested for carjacking and robbery based on an FRT match; detained for hours; charges dropped; federal lawsuit filed |
| Randal Reid | 2022 | Jefferson Parish, LA | Arrested in Georgia on a Louisiana warrant based on FRT; held 6 days; was in Houston at the time of the crime; charges dismissed |
8.3 Best Practice Standards
NIST FRVT (Face Recognition Vendor Test): Ongoing evaluation of FRT accuracy across demographics; results publicly available; vendors tested on 1:1 and 1:N matching; demographic differentials documented
Interpol (IFRS): Interpol Face Recognition System for international suspect identification; strict access controls; manual human review of all matches; 179 member countries
OSCE Guidelines (2019): Principles for law enforcement FRT use; necessity and proportionality; human oversight; regular accuracy auditing; public transparency
9. Private Sector Use
9.1 Common Applications
| Sector | Application | Regulatory Framework |
|---|---|---|
| Retail | Shoplifter identification; customer analytics; age estimation for restricted products | BIPA class actions (US); GDPR consent (EU); ICO enforcement (UK) |
| Healthcare | Patient identification and monitoring | HIPAA (US); EU MDR for medical devices; GDPR special category; less contested than surveillance uses |
| Real Estate | Building access; tenant identification; visitor management | Tenant protection laws in some jurisdictions; NYC proposed restrictions; GDPR consent (EU) |
9.2 Self-Service Biometrics
Device Unlock (Face ID/Face Unlock): Generally unregulated; biometric data stored locally on device; Apple Face ID processes entirely on-device; consent implicit in setup
Payment Authentication: Growing adoption (Apple Pay, Samsung Pay with face); regulated under payment services directives; PSD2 SCA requirements in EU
Age Verification: UK Online Safety Act and EU Digital Services Act driving adoption; facial age estimation (not recognition) for accessing age-restricted content; less privacy-invasive than ID scanning
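Age-estimation deployments typically pair the estimate with an uncertainty buffer, so only clearly-over users pass on the face alone and borderline cases fall back to a document check. A minimal sketch of that gating pattern; the 5-year buffer and the function name are illustrative assumptions, not any regulator's mandated values:

```python
def age_gate(estimated_age: float, required_age: int = 18,
             buffer_years: float = 5.0) -> str:
    """Facial age *estimation* gate: no identification takes place, only
    an age estimate. Users well over the threshold pass; everyone else
    (including those estimated just over it) escalates to an ID check,
    because the estimate carries several years of uncertainty."""
    if estimated_age >= required_age + buffer_years:
        return "allow"                   # confidently over the threshold
    return "escalate_to_id_check"        # too close to call from the face alone
```

The buffer is the privacy trade: it lets most adults avoid handing over identity documents while keeping borderline estimates from granting access.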
10. Comparative Analysis
10.1 Global Regulatory Approach Comparison

| Dimension | EU | US (Patchwork) | UK | China |
|---|---|---|---|---|
| Government Use | Banned for real-time public use (narrow exceptions) | No federal ban; 15+ city bans; some state restrictions | Active police use; court-imposed safeguards | Extensive government deployment; limited restrictions |
| Private Sector | GDPR consent + AI Act high-risk regime | BIPA (IL) most restrictive; varies by state | UK GDPR consent required | PIPL consent; separate consent for FRT; alternatives required |
| Emotion Recognition | Banned in workplace/education | No specific ban (some state limits) | No specific regulation | No specific ban |
| Scraping Ban | Absolute prohibition (AI Act) | Clearview settlements; no explicit federal ban | ICO enforcement; no explicit ban | PIPL consent requirements; draft FRT rules |
| Enforcement | DPAs + AI Office; fines up to 7% of global turnover for prohibited practices | BIPA private right of action (IL); FTC; state AGs | ICO; courts (Bridges case) | CAC; courts (Safari Park case) |
| Bias Requirements | AI Act: bias testing mandatory | NIST FRVT (voluntary); no mandatory testing | Equality Act obligations | Standards developing |
11. Trends & Future Outlook
Expanding Bans & Restrictions
The trend toward banning or heavily restricting real-time facial recognition in public spaces is accelerating. The EU AI Act’s prohibitions set a global benchmark. More US states and cities are adopting restrictions, and even countries like China are introducing private-sector FRT limits. The Clearview AI enforcement actions across 5+ countries demonstrate growing global enforcement consensus against untargeted biometric scraping.
BIPA Model Spreading
Illinois BIPA’s private right of action model is influencing legislation worldwide. The $5B+ in settlements has demonstrated the power of individual enforcement. More states are considering BIPA-like laws, though industry lobbying has often weakened the private right of action. The EU AI Act’s provisions may create similar enforcement dynamics in Europe.
Age Estimation vs. Recognition
Facial age estimation (estimating age range from facial features without identifying the individual) is emerging as a less privacy-invasive alternative to ID-based age verification. The UK Online Safety Act and EU Digital Services Act are driving adoption. Regulators are distinguishing between estimation (generally permitted with safeguards) and recognition (heavily regulated), creating a new regulatory category.
On-Device vs. Cloud Processing
The distinction between on-device biometric processing (Apple Face ID, device unlock) and cloud-based processing is gaining regulatory significance. On-device processing, where biometric data never leaves the user's device, faces far less regulation. This is driving a technology shift toward edge processing for consent-sensitive applications.
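The on-device pattern can be sketched in a few lines: the enrolled template is stored and matched locally, and only a boolean verdict ever leaves the device. This illustrates the architecture only; it is not Apple's actual Secure Enclave implementation, and the threshold and embedding format are assumptions:

```python
import numpy as np

class OnDeviceVerifier:
    """Sketch of on-device 1:1 verification: the template never leaves
    this object, and callers only ever see a pass/fail bit."""

    def __init__(self, enrolled_embedding: np.ndarray, threshold: float = 0.6):
        self._template = enrolled_embedding      # stored locally, never transmitted
        self._threshold = threshold

    def verify(self, probe: np.ndarray) -> bool:
        a, b = self._template, probe
        score = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        return score >= self._threshold          # only this bit is shared
```

Because no raw biometric or embedding crosses a network boundary, this design sidesteps most of the collection, transfer, and retention rules that cloud-based matching triggers.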