1. Overview — The State AI Patchwork
In the absence of comprehensive federal AI legislation, US states have become the primary drivers of AI regulation in America. The result is a rapidly growing patchwork of state laws covering different aspects of AI — from biometric data and automated employment decisions to deepfakes, consumer profiling, and algorithmic accountability.
According to the National Conference of State Legislatures (NCSL), over 700 AI-related bills were introduced across state legislatures in 2023-2024 alone. At least 45 states and territories have considered AI legislation, with many enacting targeted laws on specific AI applications.
Key Trends in State AI Legislation
- Colorado enacted the first comprehensive state AI law (SB 24-205) in May 2024, effective February 1, 2026
- Biometric privacy remains the most heavily litigated AI area, driven by Illinois BIPA
- Employment AI is gaining rapid attention (NYC LL144, Colorado, Illinois, Maryland, New Jersey)
- Deepfakes are the fastest-growing category, with 20+ states enacting laws in 2023-2024
- Consumer privacy laws in 19+ states include AI-relevant provisions on profiling and automated decisions
- California SB 1047 — the most ambitious state AI safety bill — was vetoed by Gov. Newsom in September 2024
2. Colorado AI Act (SB 24-205)
Signed into law on May 17, 2024, the Colorado AI Act is the first comprehensive state-level AI law in the United States. It establishes obligations for both developers and deployers of "high-risk AI systems" and takes effect on February 1, 2026.
Scope & Definitions
- High-Risk AI System: Any AI system that, when deployed, makes or is a substantial factor in making a "consequential decision"
- Consequential Decision: A decision with material legal or similarly significant effect on a consumer regarding: education, employment, financial/lending services, government services, healthcare, housing, insurance, or legal services
- Developer: Person doing business in Colorado who develops or intentionally and substantially modifies an AI system
- Deployer: Person doing business in Colorado who deploys a high-risk AI system
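The threshold question under the Act is whether a system is "high-risk" at all, since that is what triggers the developer and deployer obligations below. A minimal sketch of that triage logic, with hypothetical function and field names (the domain list mirrors the statute's "consequential decision" categories, but nothing here is an official compliance tool):

```python
# Illustrative triage for the Colorado AI Act (SB 24-205).
# Domain names track the statute's "consequential decision" list;
# the function and its parameters are hypothetical.

CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial_or_lending", "government_services",
    "healthcare", "housing", "insurance", "legal_services",
}

def is_high_risk(decision_domain: str, substantial_factor: bool) -> bool:
    """A system is high-risk if it makes, or is a substantial factor in
    making, a consequential decision in a covered domain."""
    return substantial_factor and decision_domain in CONSEQUENTIAL_DOMAINS

# A resume-screening model whose score drives hiring decisions:
print(is_high_risk("employment", substantial_factor=True))   # True
# A product-recommendation model, even if influential:
print(is_high_risk("marketing", substantial_factor=True))    # False
```

Both conditions matter: an AI system in a covered domain that plays only an incidental role in the decision falls outside the high-risk definition.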
Developer Obligations
| Obligation | Description | Section |
|---|---|---|
| Documentation | Provide deployers with: general capabilities and limitations, intended uses, training data description, known risks of algorithmic discrimination, evaluation results | §6-1-1702(2) |
| Risk Management | Use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination | §6-1-1702(1) |
| Public Statement | Make available on website a statement summarizing: types of high-risk AI systems developed, how risks are managed, how AI systems are evaluated for performance and mitigation of discrimination | §6-1-1702(3) |
| Known Discrimination Reporting | Disclose to AG and deployers any known or reasonably foreseeable risks of algorithmic discrimination | §6-1-1702(4) |
Deployer Obligations
| Obligation | Description | Section |
|---|---|---|
| Risk Management Policy | Implement and maintain a risk management policy and program governing deployment of high-risk AI systems | §6-1-1703(3) |
| Impact Assessment | Complete an impact assessment for each high-risk AI system annually and within 90 days of any substantial modification | §6-1-1703(4) |
| Consumer Notice | Notify consumers that the deployer uses a high-risk AI system to make consequential decisions. Include: description of AI system, purpose, contact info, right to opt out if possible | §6-1-1703(5) |
| Statement of Non-Discrimination | Provide consumers a statement that the AI system has been evaluated for algorithmic discrimination | §6-1-1703(5)(b) |
| Right to Explanation | If consumer requests, provide information about the principal reasons the AI system made a particular decision and opportunity to correct incorrect data | §6-1-1703(5)(c) |
| Appeal Process | Provide opportunity to appeal an adverse consequential decision to a human reviewer | §6-1-1703(5)(d) |
Enforcement
- Enforced by: Colorado Attorney General exclusively
- No private right of action — individuals cannot sue directly under this law
- Affirmative defense: available to developers/deployers who discover and cure a violation and who comply with the NIST AI Risk Management Framework or a comparable recognized framework
- Penalties: Violations treated as deceptive trade practices under Colorado Consumer Protection Act — injunctive relief, civil penalties up to $20,000 per violation
Comparison: Colorado AI Act vs EU AI Act
While both are risk-based frameworks focusing on high-risk AI, the Colorado AI Act is narrower: it targets "algorithmic discrimination" in specific decision domains, while the EU AI Act covers all AI safety and fundamental rights risks. Colorado has no "unacceptable risk" banned category, no requirements for general-purpose AI models, and relies on AG enforcement rather than a dedicated AI authority.
3. Illinois BIPA — Biometric Information Privacy Act
The Illinois Biometric Information Privacy Act (740 ILCS 14), enacted in 2008, has become the most consequential state privacy law for AI through its private right of action and aggressive litigation landscape. It regulates the collection and use of biometric identifiers — including fingerprints, voiceprints, retina scans, and face geometry — all of which are commonly used in AI systems.
Key Provisions
| Section | Requirement | Description |
|---|---|---|
| §15(a) | Written Policy | Entity possessing biometric identifiers must develop written policy establishing retention schedule and destruction guidelines. Must be publicly available. |
| §15(b) | Informed Consent | Before collecting biometric data: (1) inform the subject in writing, (2) inform of the purpose and duration of collection/storage, (3) obtain written release from the subject. |
| §15(c) | No Profit from Biometrics | No entity may sell, lease, trade, or otherwise profit from a person's biometric identifiers or information. |
| §15(d) | No Disclosure Without Consent | No disclosure without consent unless required by law, subpoena, or for completing a financial transaction. |
| §15(e) | Reasonable Security | Store, transmit, and protect biometric data using reasonable standard of care and at least the same protections as other confidential/sensitive information. |
| §20 | Private Right of Action | Any person aggrieved may bring a civil action. $1,000 per negligent violation, $5,000 per intentional/reckless violation, plus attorney fees and injunctive relief. |
Major BIPA Litigation (AI-Relevant)
| Case | Year | Parties | Outcome | AI Significance |
|---|---|---|---|---|
| Rosenbach v. Six Flags | 2019 | Minor's mother v. Six Flags theme park | IL Supreme Court: No need to show actual injury — mere violation of BIPA is enough to sue | Opened floodgates for BIPA litigation against any AI facial recognition system |
| In re Facebook Biometric Info Privacy Litigation | 2021 | Class of IL Facebook users v. Facebook | $650 million settlement | Facebook's AI face-tagging feature "Tag Suggestions" collected face geometry without written consent. Largest BIPA settlement. |
| Rogers v. BNSF Railway | 2022 | Truck drivers v. BNSF Railway | $228 million jury verdict (45,600 violations × $5,000) | First BIPA case to go to trial. Fingerprint scanning at rail yards without proper consent. Largest BIPA verdict. |
| Cothron v. White Castle | 2023 | Employee v. White Castle restaurants | IL Supreme Court: Each scan is a separate violation (not just first collection) | Massively increased potential damages. Each AI facial scan or fingerprint read = separate $1K-$5K claim. |
| Rivera v. Google | 2022 | IL users v. Google (Photos face grouping) | $100 million settlement | Google Photos AI face-grouping feature collected face templates without BIPA consent. |
| Vance v. Amazon | Ongoing | IL residents v. Amazon | Litigation ongoing (W.D. Wash.) | Alleges Amazon used IBM's "Diversity in Faces" dataset, which included face scans of Illinois residents, to train facial recognition systems without BIPA consent. |
| Clearview AI — ACLU Settlement | 2022 | ACLU + IL residents v. Clearview AI | Permanent injunction in Illinois + consent decree | Clearview prohibited from selling facial recognition database to most private companies nationwide (settlement extends beyond IL). |
BIPA 2024 Amendment
In August 2024, Illinois amended BIPA (SB 2979) to address the Cothron ruling, providing that violations arising from the same biometric data constitute a single violation rather than per-scan. This significantly reduces damages exposure for companies using ongoing biometric AI systems while maintaining the core consent and private right of action framework.
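The stakes of the per-scan versus single-accrual question are easiest to see as arithmetic. A back-of-the-envelope exposure model, using the §20 statutory damages amounts from the table above; the workforce size and scan counts are hypothetical inputs, not from any actual case:

```python
# Back-of-the-envelope BIPA damages exposure, before and after the 2024
# amendment (SB 2979). Statutory amounts from BIPA §20:
NEGLIGENT, RECKLESS = 1_000, 5_000

def exposure(workers: int, scans_per_worker: int, per_scan: bool,
             rate: int = RECKLESS) -> int:
    """Per-scan accrual follows Cothron (2023); single accrual follows
    the SB 2979 amendment (same biometric data -> one violation)."""
    violations = workers * scans_per_worker if per_scan else workers
    return violations * rate

# Hypothetical: 500 employees clocking in twice a day for a year (~730 scans each)
print(exposure(500, 730, per_scan=True))    # 1825000000 -> $1.825B under Cothron
print(exposure(500, 730, per_scan=False))   # 2500000 -> $2.5M after SB 2979
```

The roughly 700x gap between the two figures is why Cothron was described as ruinous for routine biometric timekeeping, and why the amendment followed within eighteen months.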
4. New York City Local Law 144 — Automated Employment Decision Tools
NYC Local Law 144 (enacted December 2021, enforced from July 5, 2023) is the first US law specifically regulating AI in hiring. It requires employers and employment agencies using "automated employment decision tools" (AEDTs) to conduct annual bias audits and provide notice to candidates.
Key Requirements
| Requirement | Description | Timing |
|---|---|---|
| Bias Audit | Independent third-party bias audit conducted no more than one year before use. Must test for disparate impact across race/ethnicity and sex categories using selection rates and scoring rates. | Annual (within preceding 12 months) |
| Audit Publication | Summary of bias audit results must be publicly available on employer's website, including: source/date of data, number of individuals, selection/scoring rates by category, impact ratios. | Before use and updated annually |
| Candidate Notice | Notify candidates at least 10 business days before use that an AEDT will be used. Disclose job qualifications and characteristics the tool evaluates. | 10 business days before AEDT use |
| Alternative Process | Notify candidates of the right to request an alternative selection process or accommodation. | With notice |
| Data Disclosure | Provide information about data collected, data retention policy, and how to request data deletion. | With notice |
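The bias-audit math required by the table above reduces to two quantities: a selection rate per demographic category and an impact ratio (each category's rate divided by the highest category's rate). A minimal sketch following the DCWP rule's formulas; the applicant counts are invented for illustration:

```python
# Sketch of LL144 bias-audit arithmetic (selection rates and impact
# ratios per DCWP rules). All counts below are hypothetical.

def selection_rates(applicants: dict, selected: dict) -> dict:
    """Fraction of each category's applicants selected by the AEDT."""
    return {g: selected[g] / applicants[g] for g in applicants}

def impact_ratios(rates: dict) -> dict:
    """Each category's selection rate over the highest category's rate."""
    best = max(rates.values())
    return {g: rates[g] / best for g in rates}

applicants = {"group_a": 400, "group_b": 300}
selected = {"group_a": 120, "group_b": 60}

rates = selection_rates(applicants, selected)   # a: 0.30, b: 0.20
ratios = impact_ratios(rates)                   # a: 1.0, b: ~0.667
```

LL144 itself sets no numeric pass/fail threshold; the audit only has to compute and publish these ratios, though practitioners often compare them against the EEOC's informal four-fifths (0.8) benchmark.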
Definitions
- Automated Employment Decision Tool (AEDT): Any computational process derived from machine learning, statistical modeling, data analytics, or AI that issues a simplified output (score, classification, recommendation) that is used to substantially assist or replace discretionary decision-making for employment decisions
- "Substantially assist": DCWP rules define this as the tool's output being weighted more than any other single criterion, or used to overrule human decisions
Enforcement
- Enforced by: NYC Department of Consumer and Worker Protection (DCWP)
- Penalties: $500 first violation; $500-$1,500 per subsequent violation (each day of non-compliance = separate violation)
- No private right of action
Criticism & Limitations
LL144 has been criticized for narrow scope (only hiring/promotion, not termination), weak penalties, limited audit methodology (only race and sex, not disability/age), and the DCWP's slow enforcement. The law also does not address the accuracy of the AEDT — only bias. Advocates have called for broader state legislation (proposed NY S7623 "AI Consumer Protection Act" would expand coverage).
5. California — CCPA/CPRA, SB 1047, and AI Bills
California is the most active state for AI legislation, leveraging its position as the global center of AI development. While no comprehensive AI law has been enacted, a combination of the California Consumer Privacy Act (CCPA) as amended by CPRA, targeted AI bills, and the high-profile veto of SB 1047 define its landscape.
CCPA/CPRA — AI-Relevant Provisions
| Provision | Description | AI Relevance |
|---|---|---|
| Right to Opt-Out of Automated Decision-Making | CPRA directs CPPA to issue regulations on consumers' right to opt out of automated decision-making technology (ADMT) | CPPA proposed ADMT regulations (Nov 2023, revised 2024) would require: pre-use notice, right to opt out, access to logic, human review for significant decisions. If finalized, would be most comprehensive state AI provision. |
| Right to Know about Profiling | Consumers can request information about profiling and automated decision-making | Applies to AI profiling for employment, housing, credit, insurance, healthcare, education |
| Right to Access | Access personal information collected, including data used in AI processing | Includes inferences drawn by AI about the consumer |
| Right to Delete | Delete personal information, subject to exceptions | Raises machine unlearning questions similar to GDPR Art. 17 |
| "Inferences" as Personal Information | CCPA explicitly includes "inferences drawn from any of the above information to create a profile" as personal information | AI-generated predictions, scores, and profiles are personal information subject to all CCPA rights |
SB 1047 — Safe and Secure Innovation for Frontier AI Models Act (VETOED)
SB 1047 (authored by Sen. Scott Wiener) would have been the most ambitious state AI safety law in the US. It passed both chambers of the California legislature before being vetoed by Governor Newsom on September 29, 2024.
What SB 1047 Would Have Required
- Scope: "Covered models" — AI models trained using more than 10^26 operations at a cost above $100 million, or fine-tuned versions of a covered model using more than $10 million of compute
- Safety requirements: Pre-deployment safety testing, kill switch capability, cybersecurity protections, reasonable assurance against "critical harms"
- Critical harms: Mass casualties or $500M+ in damages from cyberattacks, CBRN weapons, or critical infrastructure attacks enabled by AI
- Frontier Model Division: Would have created new state agency within California Government Operations Agency
- Whistleblower protections: For employees reporting AI safety concerns
- Penalties: Attorney General enforcement, injunctive relief, civil penalties
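The covered-model scoping above can be sketched as a simple threshold check. This is illustrative only, using the cost cutoffs described in the bullet list (the vetoed bill's actual definition combined compute and cost tests):

```python
# Illustrative "covered model" check under vetoed SB 1047's cost
# thresholds. Function and parameter names are hypothetical.

TRAIN_COST_THRESHOLD = 100_000_000    # $100M to train
FINETUNE_COST_THRESHOLD = 10_000_000  # $10M of fine-tuning compute

def is_covered(train_cost: float, finetune_cost: float = 0.0,
               base_is_covered: bool = False) -> bool:
    if train_cost > TRAIN_COST_THRESHOLD:
        return True
    # Fine-tunes inherit coverage only from a covered base model
    return base_is_covered and finetune_cost > FINETUNE_COST_THRESHOLD

print(is_covered(train_cost=150e6))                    # True: frontier run
print(is_covered(train_cost=5e6, finetune_cost=20e6,
                 base_is_covered=True))                # True: big fine-tune
print(is_covered(train_cost=50e6))                     # False: below threshold
```

Critics argued that fixed dollar thresholds like these would drift as compute costs fall, sweeping in ever-smaller models over time, which was part of the scoping debate that preceded the veto.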
Governor Newsom's Veto Rationale
Gov. Newsom stated the bill was "well-intentioned" but applied too broadly, did not account for whether AI deployment was high-risk, and could create a false sense of security while hampering California's AI industry. He committed to working with Legislature on targeted AI safety legislation and signed several narrower AI bills.
California AI Bills Signed Into Law (2024)
| Bill | Topic | Key Provisions | Effective |
|---|---|---|---|
| AB 2013 | AI Training Data Transparency | Developers of GenAI systems must publicly disclose training data information: sources, whether personal data included, data types | Jan 1, 2026 |
| AB 2885 | AI Definition | Establishes official California definition of "artificial intelligence" for use across state government and future legislation | Jan 1, 2025 |
| AB 2355 | Election AI Disclosure | Requires disclosure in political ads when AI-generated content is used | Jan 1, 2025 |
| AB 1836 | Digital Replicas — Deceased | Protects deceased persons' likeness from unauthorized AI replication | Jan 1, 2025 |
| AB 2602 | Digital Replicas — Performers | Contracts for AI digital replicas of performers require explicit informed consent and legal representation | Jan 1, 2025 |
| SB 942 | AI Transparency Act | AI detection tools must be available. GenAI providers must add metadata/watermarks enabling detection of AI-generated content. | Jan 1, 2026 |
| SB 1120 | AI in Healthcare | Restricts use of AI to deny or delay healthcare claims; medical-necessity determinations must be reviewed by a licensed physician | Jan 1, 2025 |
6. Texas — Data Privacy & AI Advisory Council
Texas Data Privacy and Security Act (TDPSA)
Effective July 1, 2024, the TDPSA includes AI-relevant provisions:
- Profiling opt-out: Consumers can opt out of profiling in furtherance of decisions producing legal or similarly significant effects
- Sensitive data: Opt-in consent required for processing sensitive data (biometric, genetic, health, geolocation)
- No private right of action — AG enforcement only
AI Advisory Council (HB 2060, 2023)
Created the Texas AI Advisory Council within the Department of Information Resources to:
- Study and monitor AI systems used by state agencies
- Develop recommended policies for AI use by state government
- Report to the governor and legislature on AI risks and opportunities
- Inventory AI systems used by state agencies
7. Virginia — VCDPA & Automated Profiling
Virginia Consumer Data Protection Act (VCDPA)
Effective January 1, 2023, the VCDPA was the second comprehensive state privacy law after the CCPA. Its AI-relevant provisions include:
- Right to opt out of profiling: Consumers can opt out of profiling in furtherance of decisions producing legal or similarly significant effects (§59.1-577(A)(5))
- Right to access: Confirm processing and access personal data, including data used for profiling
- Data protection assessment: Controllers must conduct assessments for targeted advertising, profiling, and processing of sensitive data (§59.1-580)
- Profiling definition: "Any form of automated processing performed on personal data to evaluate, analyze, or predict personal aspects relating to an identified or identifiable natural person's economic situation, health, personal preferences, interests, reliability, behavior, location, or movements"
Enforcement: Virginia AG exclusively. No private right of action. 30-day cure period.
8. Connecticut — AI Inventory & Disclosure
Connecticut Data Privacy Act (CTDPA) + AI Amendments
Connecticut enacted SB 1103 (2023), an AI governance law that supplements the CTDPA with AI-specific requirements:
- AI use by state agencies: State agencies must inventory all AI systems used in government decision-making
- Public disclosure: Inventory must be publicly available
- Impact assessments: Agencies using AI for consequential decisions must conduct impact assessments
- Consumer rights: Profiling opt-out rights similar to VCDPA
SB 2 (2024) — Comprehensive AI Bill (Not Enacted)
Connecticut considered a more comprehensive AI bill in 2024 (SB 2), modeled partly on the Colorado AI Act and targeting high-risk AI systems in employment, housing, credit, insurance, and healthcare. The bill passed the Senate but died in the House after Governor Lamont signaled he would veto it over concerns about burdening the state's AI industry.
9. Other Active States
| State | Key AI Legislation | Focus Area | Status |
|---|---|---|---|
| Maryland | HB 1202 — Facial Recognition in Hiring | Prohibits employers from using facial recognition during job interviews without applicant consent | Enacted 2020 |
| Illinois | AI Video Interview Act (820 ILCS 42) | Employers using AI to analyze video interviews must: notify applicant, explain how AI works, obtain consent. Applicant can request destruction of video. | Enacted 2020 |
| Washington | My Health My Data Act + AI provisions | Restricts AI processing of health data. Private right of action. Broad definition of "health data." | Enacted 2023 |
| New Jersey | S3933 — AI in Hiring | Would require impact assessments for AI hiring tools, bias audits, candidate notification, annual reporting to state | Introduced — pending |
| Massachusetts | S.31 / H.70 — AI Accountability Act | Comprehensive AI accountability: impact assessments, transparency, algorithmic discrimination testing, consumer rights | Introduced — pending |
| Vermont | H.410 — AI Bill of Rights | State-level AI bill of rights: transparency, non-discrimination, human oversight, data protection | Introduced — pending |
| Utah | Artificial Intelligence Policy Act (SB 149) | AI transparency: must disclose AI interaction to consumers. Creates AI Learning Laboratory Program for regulatory sandbox. Extends Consumer Protection Act to AI. | Enacted March 2024 |
| Indiana | SB 150 — Consumer Data Protection | Privacy law with profiling opt-out rights | Enacted 2024, effective Jan 2026 |
| Tennessee | ELVIS Act (Ensuring Likeness Voice and Image Security) | Protects voice and likeness from unauthorized AI replication. First state to specifically protect voice from AI cloning. | Enacted March 2024 |
| Louisiana | AI Advisory Council (SCR 30) | Creates AI advisory council to study AI use by state government and recommend policies | Enacted 2024 |
| Oregon | HB 4153 — AI Task Force | Creates task force to study AI impacts on workforce and recommend legislation | Enacted 2024 |
10. State-by-State Comparison Table
The following table compares AI-relevant features of enacted state privacy laws that include AI/profiling provisions:
| State | Law | Effective Date | Profiling Opt-Out | AI Transparency | Impact Assessment | Private Right of Action | Enforcement |
|---|---|---|---|---|---|---|---|
| California | CCPA/CPRA | Jan 2020 / Jan 2023 | ✅ (ADMT regs pending) | ✅ (inferences = PI) | ✅ Required for profiling | ✅ Limited (data breaches) | AG + CPPA |
| Virginia | VCDPA | Jan 2023 | ✅ | ⚠️ Partial | ✅ For profiling | ❌ | AG only |
| Colorado | CPA + AI Act | Jul 2023 / Feb 2026 | ✅ | ✅ Comprehensive | ✅ Annual for high-risk AI | ❌ | AG only |
| Connecticut | CTDPA | Jul 2023 | ✅ | ✅ State agencies | ✅ For profiling | ❌ | AG only |
| Utah | UCPA + AI Policy Act | Dec 2023 / May 2024 | ✅ (targeted advertising) | ✅ Must disclose AI use | ❌ | ❌ | AG only |
| Texas | TDPSA | Jul 2024 | ✅ | ⚠️ Partial | ✅ For sensitive processing | ❌ | AG only |
| Oregon | OCPA | Jul 2024 | ✅ | ⚠️ Partial | ✅ For profiling | ❌ | AG only |
| Montana | MCDPA | Oct 2024 | ✅ | ⚠️ Partial | ✅ For profiling | ❌ | AG only |
| Illinois | BIPA | 2008 | N/A (biometric-specific) | ✅ Written policy required | ❌ | ✅ $1K-$5K per violation | Private + AG |
| NYC | LL144 | Jul 2023 | N/A (hiring-specific) | ✅ Audit results + notice | ✅ Annual bias audit | ❌ | DCWP |
11. State Deepfake & Synthetic Media Laws
Deepfake legislation is the fastest-growing category of state AI laws. Over 20 states have enacted laws addressing AI-generated synthetic media, primarily targeting election interference and non-consensual intimate imagery.
| State | Law | Focus | Key Provisions | Enacted |
|---|---|---|---|---|
| Texas | SB 751 | Election deepfakes | First state to criminalize deepfakes intended to influence elections. Class A misdemeanor. | 2019 |
| California | AB 730 + AB 602 | Elections + non-consensual intimate | AB 730: Prohibits deceptive deepfakes of political candidates within 60 days of election. AB 602: Civil cause of action for non-consensual deepfake pornography. | 2019 |
| Virginia | §18.2-386.2 | Non-consensual intimate imagery | Criminalizes non-consensual sharing of deepfake intimate images. Class 1 misdemeanor. | 2019 |
| Minnesota | HF 1370 | Elections | Prohibits disseminating deepfakes of candidates within 90 days of election without disclosure. | 2023 |
| Washington | SB 5152 | Elections | Requires disclosure labels on synthetic media used in election campaigns. | 2023 |
| Tennessee | ELVIS Act | Voice & likeness | First state to specifically protect voice from AI cloning. Civil and criminal penalties for unauthorized AI voice replication. | 2024 |
| Indiana | SB 114 | Child deepfakes | Criminalizes creation or distribution of AI-generated child sexual abuse material. Felony. | 2024 |
| Florida | HB 919 | Elections | Requires a disclaimer when AI-generated content is used in political advertising. | 2024 |
12. Trends & Analysis
Key Trends in State AI Legislation
- Acceleration: The number of state AI bills roughly doubled from 2023 to 2024. This trend is expected to continue.
- Convergence on employment AI: Multiple states are following NYC's lead in regulating AI hiring tools; Colorado, Illinois, New Jersey, Maryland, and several others have enacted or proposed similar laws.
- Privacy laws as AI governance: With 19+ state privacy laws enacted, profiling opt-out rights are becoming the default AI consumer protection across the country.
- Comprehensive AI laws emerging: Colorado's AI Act may be a model. Several states (Connecticut, Massachusetts, Vermont) are considering similar comprehensive approaches.
- Deepfake legislation surge: Driven by election concerns and non-consensual intimate imagery. Expect 30+ states with deepfake laws by 2026.
- No federal preemption: In the absence of federal action, states will continue filling gaps. This creates compliance complexity for multi-state AI operators.
- Enforcement gap: Most state AI laws rely on AG enforcement with no private right of action. Actual enforcement has been limited outside BIPA.
- Industry pushback: SB 1047 veto showed that ambitious AI safety regulation faces strong industry opposition, even in tech-friendly California.
Challenges of the Patchwork Approach
- Compliance burden: Companies operating nationally must navigate potentially 50 different AI regulatory frameworks
- Forum shopping: AI companies may choose to locate in states with less regulation
- Inconsistent definitions: Each state defines "AI," "automated decision," "profiling," and "high-risk" differently
- Enforcement inconsistency: Some AGs are aggressive (Illinois, California), while others rarely enforce data/AI provisions
- Innovation concerns: Some argue the patchwork creates uncertainty that deters AI investment and development
13. References & Official Sources
National Trackers & Databases
- NCSL — Artificial Intelligence 2024 Legislation (ncsl.org)
- NCSL — AI State Legislation Tracker (ncsl.org)
- BSA | The Software Alliance — US State AI Legislation Tracker (bsa.org/policy/ai)
- Brookings Institution — The State of State AI Policy (brookings.edu)
- IAPP — US State Privacy Legislation Tracker (iapp.org)
Specific State Laws
- Colorado AI Act (SB 24-205) — Full Text (leg.colorado.gov/bills/sb24-205)
- Illinois BIPA (740 ILCS 14) — Full Text (ilga.gov)
- NYC Local Law 144 — Full Text & Rules (nyc.gov)
- California SB 1047 — Full Text, Vetoed (leginfo.legislature.ca.gov)
- California AB 2013 — AI Training Data Transparency (leginfo.legislature.ca.gov)
- Utah Artificial Intelligence Policy Act (SB 149) (le.utah.gov)
- Tennessee ELVIS Act — Full Text (capitol.tn.gov)
Privacy Law Texts
- California CCPA/CPRA — Full Text (leginfo.legislature.ca.gov)
- CPPA — Proposed ADMT Regulations (cppa.ca.gov)
- Virginia VCDPA — Full Text (law.lis.virginia.gov)
- Texas TDPSA (HB 4) — Full Text (capitol.texas.gov)
BIPA Court Decisions
- Rosenbach v. Six Flags Entertainment Corp., 2019 IL 123186 (illinoiscourts.gov)
- Cothron v. White Castle System, Inc., 2023 IL 128004 (illinoiscourts.gov)