1. Overview — The US Approach to AI Governance
The United States has taken a fundamentally different approach to AI governance than the European Union. Rather than enacting a single comprehensive AI law, the US relies on a combination of executive orders, non-binding frameworks, existing sectoral regulations, and agency-level guidance. This sector-specific approach reflects the US tradition of industry self-regulation and a policy emphasis on promoting AI innovation while managing risk.
Key Distinction: US vs EU Approach
The EU AI Act creates a horizontal, risk-based regulatory framework that applies across all sectors with binding requirements. The US has no equivalent. Instead, US AI governance is a patchwork of executive orders (which can be revoked by subsequent administrations), voluntary frameworks (NIST AI RMF), sector-specific regulations (FDA for medical AI, SEC for financial AI), and proposed but unenacted legislation. This creates both flexibility and uncertainty.
Timeline of Major US Federal AI Actions
| Date | Action | Type | Status |
|---|---|---|---|
| Feb 2019 | EO 13859 — Maintaining American Leadership in AI | Executive Order | Active (Trump administration) |
| Jan 2020 | Guidance for Regulation of AI Applications (OMB) | Memorandum | Active |
| Jan 2021 | National AI Initiative Act of 2020 | Federal Law (P.L. 116-283) | Active — Enacted |
| Oct 2022 | Blueprint for an AI Bill of Rights | White House Guidance (Non-binding) | Active (Biden administration) |
| Jan 2023 | NIST AI Risk Management Framework 1.0 | Voluntary Framework | Active |
| Oct 2023 | EO 14110 — Safe, Secure, and Trustworthy AI | Executive Order | Partially revoked (Jan 2025) |
| Mar 2024 | OMB Memorandum M-24-10 | OMB Directive to Federal Agencies | Active |
| Jan 2025 | EO 14179 — Removing Barriers to American Leadership in AI | Executive Order | Active (Trump administration) — Revokes EO 14110 |
2. Executive Order 14110 — Safe, Secure, and Trustworthy AI
Issued on October 30, 2023, Executive Order 14110 was the most comprehensive AI executive action in US history. It established extensive requirements for AI safety testing, reporting obligations for developers of powerful AI models, and directives to over 50 federal agencies. On January 20, 2025, it was revoked by EO 14179 under the incoming Trump administration, though many of its agency actions had already been implemented.
Current Status
EO 14110 was revoked by Executive Order 14179 (January 20, 2025). However, many agency actions taken under EO 14110 remain in effect — agency rules, guidance documents, and established programs were not automatically rescinded by the revocation. The NIST AI Safety Institute continues operating. Understanding EO 14110 remains essential as its framework shaped current AI governance discussions.
Key Provisions of EO 14110
| Section | Topic | Requirements | Responsible Agency |
|---|---|---|---|
| §4.1 | Safety Testing & Red-Teaming | Developers of dual-use foundation models must report safety test results to the federal government. Models trained using >10²⁶ integer/floating-point operations trigger reporting requirements. | Commerce (NIST) |
| §4.2 | Dual-Use Foundation Model Reporting | Companies developing or intending to develop dual-use foundation models must notify the government. Must share red-team test results. Invoked Defense Production Act authority. | Commerce (BIS) |
| §4.3 | Infrastructure Reporting | IaaS providers must report when foreign persons transact to train large AI models. Know-your-customer requirements for cloud computing. | Commerce |
| §4.5 | AI-Generated Content Authentication | Develop standards for watermarking and authentication of AI-generated content (text, images, audio, video). | Commerce (NIST) |
| §4.6 | Cybersecurity | Use AI to enhance cybersecurity. Develop AI-specific vulnerability assessment tools. Establish AI Cyber Challenge. | DOD, DHS, NSA |
| §5 | Promoting Innovation | National AI Research Resource (NAIRR) pilot. Visa/immigration reforms for AI talent. Support for small businesses using AI. | NSF, USCIS, SBA |
| §7 | Consumer Protection | FTC, CFPB, and other agencies to use existing authority to protect consumers from AI harms. Focus on healthcare, financial services, housing. | FTC, HHS, HUD |
| §8 | Civil Rights & Equity | Address algorithmic discrimination. Guidelines for AI in criminal justice, healthcare, government benefits. | DOJ, HHS, DOL |
| §9 | Worker Protection | Principles for AI in the workplace. Address job displacement. Worker surveillance limits. | DOL, CEA |
| §10 | Government Use of AI | Chief AI Officers in every federal agency. AI use case inventories. Risk management for government AI. | OMB, all agencies |
The 10²⁶ FLOP Threshold
EO 14110 established a computational threshold of 10²⁶ integer or floating-point operations for triggering reporting requirements. This was designed to capture "dual-use foundation models" — models powerful enough to pose risks in areas like cybersecurity, biological weapons, or critical infrastructure. At issuance (October 2023) the threshold sat just above the frontier: public estimates placed GPT-4's training compute at roughly 2×10²⁵ FLOPs, below the line. Because the threshold is static while the cost of compute falls rapidly, more models were expected to cross it over time.
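To make the threshold concrete, the arithmetic can be sketched with the common scaling-law approximation that training compute is about 6 × parameters × training tokens. The approximation and the model figures below are illustrative assumptions, not anything defined in the EO itself — the order specifies only the raw operation count.

```python
# Illustrative check against the EO 14110 reporting threshold.
# Assumes the common heuristic C ≈ 6 * N * D (training FLOPs ≈
# 6 × parameter count × training tokens); all model figures are hypothetical.

THRESHOLD_FLOPS = 1e26  # EO 14110 reporting trigger (10^26 operations)

def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via C ≈ 6 * N * D."""
    return 6.0 * params * tokens

def crosses_threshold(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) > THRESHOLD_FLOPS

# Hypothetical frontier model: 1 trillion parameters, 20 trillion tokens
flops = training_flops(1e12, 20e12)
print(f"{flops:.2e}")                      # 1.20e+26
print(crosses_threshold(1e12, 20e12))      # True — would trigger reporting
```

Under this heuristic, a hypothetical 70B-parameter model trained on 2 trillion tokens lands near 8×10²⁴ FLOPs, an order of magnitude below the line — which is why, at issuance, only frontier-scale training runs were expected to trigger the requirement.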
Defense Production Act Invocation
EO 14110 invoked the Defense Production Act (DPA) to require AI companies to report certain information to the government. This was historically significant — the DPA is typically used in wartime or national emergency contexts. The invocation signaled the administration's view that AI safety constitutes a national security priority and gave the government authority to compel disclosures from private AI developers.
3. Blueprint for an AI Bill of Rights
Released by the White House Office of Science and Technology Policy (OSTP) on October 4, 2022, the Blueprint for an AI Bill of Rights is a non-binding framework articulating five principles to guide the design, use, and deployment of AI systems. It is explicitly a "white paper" — not legislation, regulation, or executive order — and creates no enforceable rights.
The Five Principles
| Principle | Description | Key Requirements | Comparable EU Requirement |
|---|---|---|---|
| 1. Safe and Effective Systems | You should be protected from unsafe or ineffective AI systems | Pre-deployment testing, ongoing monitoring, independent evaluation, domain-specific expertise in development, risk identification | EU AI Act Art. 9 (risk management), Art. 15 (accuracy, robustness) |
| 2. Algorithmic Discrimination Protections | You should not face discrimination by algorithms in ways that violate law | Equity assessments during design, use of representative data, testing for disparate impact, independent audits, plain language reporting of results | EU AI Act Art. 10 (data governance, bias testing) |
| 3. Data Privacy | You should be protected from abusive data practices and have agency over how data about you is used | Data collection limited by design, consent for sensitive domains, purpose-limited use, enhanced protections in surveillance contexts | GDPR Arts. 5-6 (principles, lawful basis) |
| 4. Notice and Explanation | You should know that an automated system is being used and understand how and why it contributes to outcomes | Clear notice of AI use, plain-language explanation of outcomes, timely communication, meaningful access to explanations | GDPR Arts. 13-14 (transparency), EU AI Act Art. 50 (transparency) |
| 5. Human Alternatives, Consideration, and Fallback | You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems | Opt-out from automated systems, access to human consideration, timely human fallback, escalation process | GDPR Art. 22 (right to human intervention), EU AI Act Art. 14 (human oversight) |
Legal Status
The Blueprint is not legally binding. It does not create new rights, obligations, or enforcement mechanisms. It is best understood as a policy aspiration — a statement of values that may influence future legislation or agency action. Several federal agencies referenced the Blueprint in subsequent guidance, but compliance is voluntary.
4. NIST AI Risk Management Framework (AI RMF 1.0)
Published on January 26, 2023, the NIST AI Risk Management Framework (AI RMF 1.0, NIST AI 100-1) is a voluntary framework for managing risks associated with AI systems throughout their lifecycle. It is the most technically detailed and practically useful US federal AI governance document.
Framework Structure — Four Core Functions
| Function | Purpose | Key Activities | Categories |
|---|---|---|---|
| GOVERN | Cultivate and implement a culture of AI risk management | Establish AI governance structures, define roles/responsibilities, create policies, ensure accountability, document risk tolerance | GV-1 through GV-6: Policies, accountability, workforce, organizational context, risk management integration, stakeholder engagement |
| MAP | Identify risks in context and establish context for risk management | Understand intended purposes, identify stakeholders, map potential harms, assess deployment context, identify legal requirements | MP-1 through MP-5: Context, requirements, benefits/costs, risks, impacts |
| MEASURE | Analyze, assess, benchmark, and monitor AI risks and impacts | Develop metrics, test for bias, evaluate performance, assess reliability, monitor for drift, conduct audits | MS-1 through MS-4: Appropriate metrics, evaluations, continuous monitoring, feedback mechanisms |
| MANAGE | Allocate resources and take action to address identified risks | Prioritize risks, deploy mitigations, establish incident response, plan for decommission, communicate residual risks | MG-1 through MG-4: Risk prioritization, treatments, residual risk management, risk communication |
Seven Characteristics of Trustworthy AI
The AI RMF identifies seven characteristics that contribute to trustworthy AI:
| Characteristic | Description |
|---|---|
| 1. Valid and Reliable | AI system performs as intended for defined conditions. Results are reproducible and consistent. |
| 2. Safe | AI does not endanger human life, health, property, or the environment under defined conditions of use. |
| 3. Secure and Resilient | AI maintains confidentiality, integrity, and availability. Can withstand adversarial attacks and unexpected inputs. |
| 4. Accountable and Transparent | Mechanisms exist to understand AI behavior, assigned responsibilities, and appropriate documentation. |
| 5. Explainable and Interpretable | AI outputs can be understood by people (explainability) and internal mechanisms can be meaningfully described (interpretability). |
| 6. Privacy-Enhanced | Privacy values (autonomy, anonymity, control over personal information) are respected. Data practices are appropriate. |
| 7. Fair — with Harmful Bias Managed | AI does not generate outputs that are systematically unfair. Bias is identified, measured, and managed throughout lifecycle. |
Companion Resources
- NIST AI RMF Playbook — Practical implementation guidance with suggested actions for each subcategory
- NIST AI RMF Crosswalk — Maps AI RMF to other frameworks (ISO 42001, EU AI Act, OECD AI Principles)
- AI RMF Generative AI Profile (NIST AI 600-1) — Published July 2024, extends AI RMF specifically for generative AI risks
- NIST AI 100-2e2023 — Adversarial Machine Learning taxonomy
5. National AI Initiative Act of 2020
The National AI Initiative Act (NAIIA), enacted as Division E of the William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021 (P.L. 116-283), is the primary enacted federal AI law. Signed January 1, 2021, it establishes a coordinated national AI strategy.
Key Provisions
| Provision | Description | Status |
|---|---|---|
| National AI Initiative Office | Coordinates federal AI R&D across agencies. Located within OSTP. | Operational since 2021 |
| National AI Research Resource (NAIRR) | Shared national research infrastructure providing researchers access to compute, data, and tools for AI research. | Pilot launched January 2024. Full deployment TBD. |
| Interagency Committee on AI | Coordinates AI activities across the federal government. Chaired by OSTP and NIST directors. | Active |
| National AI Advisory Committee (NAIAC) | Advises the President and NAIIO on AI matters. Members from industry, academia, civil society, and government. | Active — issued multiple reports and recommendations |
| AI Research Institutes | NSF-funded national AI research institutes across the country focusing on different AI domains. | 25 institutes funded as of 2024, totaling $500M+ investment |
| NIST AI Standards | Directs NIST to develop AI standards, metrics, and evaluation tools. | AI RMF 1.0, AI 600-1 published. Ongoing standards development. |
6. Agency-Specific AI Guidance & Actions
In the absence of comprehensive federal AI legislation, individual agencies have used their existing regulatory authority to address AI in their sectors. This creates a patchwork of sector-specific rules.
| Agency | Jurisdiction | Key AI Actions | Date | Enforcement Authority |
|---|---|---|---|---|
| FTC (Federal Trade Commission) | Consumer protection, unfair/deceptive practices | Enforcement against AI deception (Section 5 FTC Act). "Keep Your AI Claims in Check" guidance. Enforcement actions: Rite Aid (facial recognition), Amazon Ring (voice data). AI claims scrutiny. | 2021–present | Administrative complaints, consent orders, civil penalties. §5 FTC Act — broad authority over unfair/deceptive AI practices. |
| FDA (Food & Drug Admin.) | Medical devices, drugs | AI/ML-based Software as Medical Device (SaMD) framework. Predetermined Change Control Plan (PCCP). 950+ AI-enabled medical devices authorized. Total Product Lifecycle (TPLC) approach. | 2021–present | 510(k), De Novo, PMA clearance/approval. Can mandate recalls, warnings. |
| SEC (Securities & Exchange Commission) | Securities markets | Proposed rule on Predictive Data Analytics (PDA) in broker-dealer/investment adviser conflicts. AI in trading oversight. "AI washing" enforcement (companies overclaiming AI capabilities). | 2023–present | Rulemaking, enforcement actions, civil penalties. Charged Delphia and Global Predictions for AI washing (2024). |
| EEOC (Equal Employment Opportunity Commission) | Employment discrimination | "AI and Algorithmic Fairness" initiative. Technical assistance on ADA and AI hiring tools. Guidance on AI-driven disability discrimination. Settled first AI hiring discrimination case (iTutorGroup, 2023). | 2021–present | Title VII, ADA, ADEA enforcement. Can bring discrimination suits. Focus on disparate impact from AI hiring tools. |
| CFPB (Consumer Financial Protection Bureau) | Consumer financial products | Interpretive rule requiring specific adverse action notices for AI credit decisions. ECOA + FCRA apply to AI lending. Chatbot guidance. | 2022–present | ECOA, FCRA enforcement. Can require lenders to explain AI-driven credit denials in specific terms, not just "algorithm determined." |
| DOJ (Department of Justice) | Federal law enforcement, civil rights | Updated policy on computer crimes including AI-assisted fraud. Civil rights division evaluating AI bias in criminal justice. AI in policing guidelines. | 2022–present | Federal criminal prosecution, civil rights enforcement, consent decrees with police departments. |
| HHS (Health & Human Services) | Healthcare | Rule requiring transparency for AI in clinical decision support. Algorithmic nondiscrimination in healthcare under Section 1557 ACA. | 2023–present | CMS conditions of participation, Section 1557 enforcement, HIPAA. |
| DOD (Department of Defense) | Military, national security | AI Adoption Strategy (2023). Responsible AI principles. Chief Digital and AI Office (CDAO). AI testing infrastructure. Task Force Lima (generative AI). | 2019–present | Internal policy, acquisition requirements, DIB guidelines. |
| DOT (Department of Transportation) | Transportation | Autonomous vehicle guidance (AV 4.0). NHTSA authority over automated driving systems. Standing General Order for AV crash reporting. | 2020–present | NHTSA vehicle safety standards, recall authority, defect investigations. |
| DHS (Dept. of Homeland Security) | Homeland security, immigration | AI roadmap. Responsible use of AI directive. AI in border security evaluation. CBP facial recognition oversight. | 2023–present | Immigration enforcement, border security, infrastructure protection. |
| NHTSA (Nat'l Highway Traffic Safety Admin.) | Vehicle safety | Standing General Order requiring AV crash reporting. Multiple investigations of Tesla Autopilot. Level 2+ automation guidance. | 2021–present | Recall authority, defect investigations, FMVSS authority. |
| USPTO (Patent & Trademark Office) | Intellectual property | AI inventorship guidance: AI cannot be listed as inventor (Thaler v. Vidal upheld). AI-assisted inventions can be patented if human made "significant contribution." | 2023–2024 | Patent examination, inventorship requirements. |
| Copyright Office | Copyright | AI cannot be author for copyright. Works with AI assistance may be copyrightable if human authorship is sufficient. Multiple NOIs on AI and copyright. | 2023–present | Copyright registration, policy guidance. |
7. Proposed Federal AI Legislation
The 118th Congress (2023-2024) saw an explosion of proposed AI legislation — over 70 AI-related bills were introduced. None of the comprehensive AI bills were enacted, though AI provisions were included in some appropriations and defense bills. The 119th Congress (2025-2026) is expected to continue this trend.
| Bill | Sponsors | Key Provisions | Status (118th Congress) |
|---|---|---|---|
| Algorithmic Accountability Act | Sen. Wyden, Sen. Booker, Rep. Clarke | Requires impact assessments for automated decision systems. FTC enforcement. Critical decisions in housing, employment, education, credit, healthcare, insurance. | Introduced — not enacted |
| AI Foundation Model Transparency Act | Rep. Beyer | Requires disclosure of training data sources, compute used, known limitations, and evaluation results for foundation models. | Introduced — not enacted |
| CREATE AI Act | Sen. Heinrich, Sen. Young | Establishes a National AI Research Resource (NAIRR) to provide researchers with compute, data, and AI tools. Authorizes $2.6 billion over 5 years. | Introduced — not enacted (but NAIRR pilot launched via EO) |
| REAL Political Advertisements Act | Sen. Klobuchar | Requires disclosure of AI-generated content in political advertisements. Labels for synthetic media in campaigns. | Introduced — not enacted |
| AI Labeling Act | Rep. Eshoo | Requires disclosure when consumers interact with AI-generated content. FTC enforcement. | Introduced — not enacted |
| Bipartisan Senate AI Policy Roadmap | Schumer, Rounds, Heinrich, Young | Result of Senate AI Insight Forums. Policy roadmap recommending targeted legislation on deepfakes, elections, workforce, innovation, transparency. | Published May 2024 — guiding committee work |
| DEFIANCE Act | Sen. Durbin, Rep. Ocasio-Cortez | Creates federal civil remedy for individuals depicted in non-consensual AI-generated intimate imagery (deepfakes). | Passed Senate — not enacted |
| TAKE IT DOWN Act | Sen. Cruz, Sen. Klobuchar | Criminalizes non-consensual intimate deepfakes. Requires platforms to remove within 48 hours of notice. | Passed Senate — House pending |
| NO FAKES Act | Bipartisan group | Protects voice and likeness from unauthorized AI replication. Federal right of publicity for AI-generated content. | Introduced — not enacted |
| AI Disclosure Act | Rep. Eshoo, Rep. Owens | Requires clear disclosure when interacting with AI systems rather than humans. | Introduced — not enacted |
8. OMB Memorandum M-24-10 — Federal Agency AI Governance
Issued on March 28, 2024, OMB Memorandum M-24-10 ("Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence") establishes the most concrete AI governance requirements for the federal government itself — how agencies must manage their own AI use.
Key Requirements
| Requirement | Deadline | Details |
|---|---|---|
| Chief AI Officers | 60 days from issuance | Every CFO Act agency must designate a Chief AI Officer (CAIO) to coordinate AI governance, report to agency head. |
| AI Governance Boards | Within 60 days | Establish agency AI governance bodies to oversee AI adoption and risk management. |
| AI Use Case Inventories | Annual updates | Publicly report all AI use cases. Distinguish between "safety-impacting" and "rights-impacting" AI. |
| Rights-Impacting AI Safeguards | December 1, 2024 | AI used in contexts that impact individual rights (law enforcement, benefits, employment) must have: impact assessments, ongoing monitoring, human oversight, notice to affected parties, opt-out where feasible. |
| Safety-Impacting AI Requirements | December 1, 2024 | AI controlling physical infrastructure or safety-critical functions must have: safety testing, human oversight, monitoring, incident response plans. |
| AI Talent Strategy | Ongoing | Agencies must develop AI workforce strategies, hire AI talent, and train existing staff. |
Classification of Government AI Use
M-24-10 introduces two critical categories that trigger heightened requirements:
- Rights-Impacting AI — AI whose output serves as a basis for decisions about individuals' rights, access to services, or treatment. Examples: AI in criminal justice risk assessment, benefit eligibility determination, hiring decisions, healthcare treatment recommendations.
- Safety-Impacting AI — AI whose failure could create safety risks. Examples: AI controlling physical infrastructure, autonomous systems, air traffic management, cybersecurity defense systems.
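The two categories are not mutually exclusive — a single use case can trigger both sets of minimum practices. A minimal sketch of how an agency inventory might tag use cases, with category definitions following the memo but all field names and example entries invented for illustration:

```python
# Hypothetical sketch of M-24-10's two-track classification for an agency
# AI use-case inventory. The two category labels follow the memo; the
# dataclass fields and example use cases are illustrative, not official.

from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    affects_individual_rights: bool    # output drives decisions about rights, services, treatment
    failure_creates_safety_risk: bool  # failure endangers infrastructure or safety-critical functions

def categories(uc: AIUseCase) -> list[str]:
    """Return the M-24-10 categories a use case falls into (possibly both, possibly neither)."""
    cats = []
    if uc.affects_individual_rights:
        cats.append("rights-impacting")   # triggers impact assessment, human oversight, notice, opt-out
    if uc.failure_creates_safety_risk:
        cats.append("safety-impacting")   # triggers safety testing, monitoring, incident response
    return cats

print(categories(AIUseCase("benefit eligibility screening", True, False)))
# ['rights-impacting']
print(categories(AIUseCase("air traffic flow prediction", False, True)))
# ['safety-impacting']
```

A use case that matches neither definition still appears in the public inventory; it simply carries no heightened requirements.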
9. National Security & AI
AI is a central element of US national security strategy. Multiple institutions and strategies address the intersection of AI and national defense.
Key National Security AI Initiatives
| Initiative | Year | Description | Key Recommendations/Actions |
|---|---|---|---|
| National Security Commission on AI (NSCAI) Final Report | 2021 | Congressionally mandated commission chaired by Eric Schmidt. 756-page final report on AI and national security. | Recommended $40B AI investment. Warned US not prepared for AI era. Called for AI-ready military, cyber defense, intelligence. Technology competition with China framework. |
| DOD AI Strategy (2023 Update) | 2023 | Updated DOD strategy for AI adoption across defense enterprise. | Deploy AI at speed and scale. Establish AI-ready digital ecosystem. Build enduring AI advantages. Advance RAI (Responsible AI). Streamline AI procurement. |
| DOD Responsible AI Principles | 2020 | Five ethical principles for military AI: Responsible, Equitable, Traceable, Reliable, Governable. | All DOD AI must have clear chains of responsibility, avoid unfair bias, have auditable logic, perform as intended, allow human override. |
| Chief Digital and AI Office (CDAO) | 2022 | New senior office consolidating DOD AI governance. Replaces JAIC. | Oversees all DOD AI adoption, data management, analytics. Reports to Deputy Secretary of Defense. |
| Task Force Lima | 2023 | DOD task force specifically for generative AI assessment and adoption. | Evaluating LLMs for military applications. Assessing risks of adversarial use. Developing guardrails for DOD generative AI. |
| NIST AI Safety Institute (AISI) | 2023 | Established under EO 14110 at NIST. Develops AI safety evaluations and red-teaming methodologies. | Signed agreements with major AI labs for pre-release safety testing. International cooperation with UK AISI. Continues operating post EO 14110 revocation. |
| Export Controls on AI Chips | 2022–2024 | BIS export controls restricting advanced AI chip sales to China and other countries of concern. | October 2022: Initial controls. October 2023: Expanded controls. January 2025: "AI diffusion" rule controlling chip exports to 120+ countries. |
10. Intellectual Property & Copyright
The US is grappling with fundamental questions about how existing intellectual property law applies to AI. Key issues include AI inventorship, copyright of AI-generated works, and fair use of copyrighted works for AI training.
Key Decisions & Guidance
| Issue | Decision/Guidance | Authority | Outcome |
|---|---|---|---|
| AI as Patent Inventor | Thaler v. Vidal (Fed. Cir. 2022) | Federal Circuit Court of Appeals | AI (DABUS) cannot be listed as patent inventor. Patent Act requires a natural person. Supreme Court declined to hear appeal (2023). |
| AI-Assisted Inventions | USPTO Inventorship Guidance (Feb 2024) | USPTO | AI-assisted inventions can be patented if a natural person made a "significant contribution" to the invention. AI is a tool, like a calculator. |
| AI as Copyright Author | Thaler v. Perlmutter (D.D.C. 2023) | US District Court | AI-generated artwork (no human involvement) cannot receive copyright. Copyright requires human authorship. |
| AI-Assisted Works | Zarya of the Dawn Registration (2023) | Copyright Office | AI-generated images (Midjourney) in a graphic novel: images not copyrightable, but human-authored text and selection/arrangement were copyrightable. |
| AI Training & Fair Use | Multiple pending lawsuits | Various courts | Ongoing: NYT v. OpenAI, Getty v. Stability AI, Andersen v. Stability AI, Authors Guild v. OpenAI. Core question: Is using copyrighted works to train AI "fair use" under 17 U.S.C. §107? |
| Copyright Office NOIs | Notices of Inquiry on AI & Copyright (2023-2024) | Copyright Office | Multi-part study examining: (1) AI training on copyrighted works, (2) copyrightability of AI outputs, (3) liability for AI-generated infringement. Reports in progress. |
11. US vs EU — Regulatory Comparison
| Dimension | United States | European Union |
|---|---|---|
| Legislative Approach | Sector-specific, patchwork. No comprehensive federal AI law. Executive orders + agency guidance. | Comprehensive, horizontal. EU AI Act (binding regulation) + GDPR for data. |
| Binding Requirements | Limited to existing sectoral laws (FTC Act, FDA regs, civil rights laws). Voluntary frameworks (NIST AI RMF). | Extensive binding requirements. Risk classification, conformity assessment, CE marking, post-market monitoring. |
| Risk Classification | No federal risk classification system. M-24-10 classifies "safety-impacting" and "rights-impacting" for government AI only. | Four-tier risk system: Unacceptable (banned), High (regulated), Limited (transparency), Minimal (voluntary). |
| Enforcement | Multiple agencies with existing authority. FTC, FDA, EEOC, etc. Tort litigation. | EU AI Office + national authorities. DPAs for GDPR. Up to €35M / 7% turnover. |
| Innovation Focus | Strong emphasis on maintaining AI leadership, avoiding over-regulation, promoting competitiveness. | Balanced approach but regulation-first. Regulatory sandboxes to support innovation within bounds. |
| Foundation Models | EO 14110 reporting (now revoked). No permanent requirements. | AI Act Chapter V: transparency, technical documentation, copyright compliance. Systemic risk provisions for large models. |
| Transparency | Agency-specific disclosure rules. Blueprint principle (non-binding). | Mandatory transparency for limited-risk AI (chatbots, deepfakes, emotion recognition). Extensive documentation for high-risk. |
| Data Protection | No comprehensive federal privacy law. Sectoral: HIPAA, COPPA, state laws (CCPA/CPRA). | GDPR — comprehensive, extraterritorial, applies to all AI processing personal data. |
| Prohibited Practices | No specific AI prohibitions. General anti-discrimination, consumer protection laws apply. | Explicit bans: social scoring, manipulative AI, untargeted facial recognition scraping, emotion recognition in workplaces/schools. |
| Political Stability | Executive orders can be revoked by next administration (EO 14110 revoked Jan 2025). Legislation is durable but harder to pass. | EU Regulation directly applicable in all 27 Member States. Difficult to repeal. |
12. Executive Order Revocations & Policy Shifts
The change in US administration in January 2025 demonstrated the fragility of executive action as an AI governance mechanism.
EO 14179 — Removing Barriers to American Leadership in AI (January 20, 2025)
On the first day of the new Trump administration, Executive Order 14179 was issued, which:
- Revoked EO 14110 in its entirety — eliminating the reporting requirements for dual-use foundation models, the Defense Production Act invocation, and agency directives
- Stated policy goal of removing barriers to AI development and maintaining American AI dominance
- Directed agencies to review and consider revising or rescinding regulations implementing EO 14110
- Signaled a shift from safety-first to innovation-first AI policy
What Survived the Revocation
| Item | Status Post-Revocation | Reason |
|---|---|---|
| NIST AI Risk Management Framework | ✅ Active — voluntary framework | Not created by EO 14110 (published Jan 2023, before EO). Voluntary, so no conflict. |
| NIST AI Safety Institute | ✅ Operating but under review | Created under EO 14110 but has bipartisan support. Agreements with AI labs continue. |
| National AI Initiative Act | ✅ Active — enacted legislation | Congressional statute — cannot be revoked by executive order. |
| OMB M-24-10 | ⚠️ Under review | OMB memo, not EO, but may be revised under new administration priorities. |
| Agency-specific rules/guidance | ⚠️ Varies by agency | Agency rules issued under existing authority (not solely EO 14110) generally survive. Rules solely implementing EO 14110 may be withdrawn. |
| Blueprint for AI Bill of Rights | ⚠️ De-emphasized | Was always non-binding. Unlikely to be referenced by new administration. |
| 10²⁶ FLOP reporting requirements | ❌ Revoked | Directly established by EO 14110. |
| Defense Production Act AI invocation | ❌ Revoked | Directly established by EO 14110. |
13. References & Official Sources
Executive Orders
- Executive Order 14110 — Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023): whitehouse.gov — EO 14110; Federal Register: 88 FR 75191
- Executive Order 14179 — Removing Barriers to American Leadership in Artificial Intelligence (January 20, 2025): whitehouse.gov — EO 14179
- Executive Order 13859 — Maintaining American Leadership in Artificial Intelligence (February 11, 2019): Federal Register — EO 13859
NIST Publications
- NIST AI Risk Management Framework 1.0 (AI 100-1): nist.gov — AI RMF; full document: doi.org/10.6028/NIST.AI.100-1
- NIST AI RMF Playbook: airc.nist.gov — Playbook
- NIST AI 600-1 — Generative AI Profile: doi.org/10.6028/NIST.AI.600-1
- NIST AI 100-2e2023 — Adversarial Machine Learning Taxonomy: doi.org/10.6028/NIST.AI.100-2e2023
White House & OMB
- Blueprint for an AI Bill of Rights: whitehouse.gov/ostp/ai-bill-of-rights
- OMB Memorandum M-24-10 — Advancing Governance, Innovation, and Risk Management for Agency Use of AI: whitehouse.gov/omb — M-24-10
Congressional Resources
- National AI Initiative Act of 2020 (P.L. 116-283, Division E): congress.gov — H.R. 6395 (NDAA FY2021)
- Congressional Research Service — Artificial Intelligence Reports: crsreports.congress.gov — AI Reports
- Bipartisan Senate AI Policy Roadmap (May 2024): schumer.senate.gov — AI Policy Roadmap PDF
Agency-Specific Resources
- FTC — Artificial Intelligence: ftc.gov/technology/artificial-intelligence
- FDA — AI/ML-Based Software as a Medical Device: fda.gov — AI/ML Medical Devices
- EEOC — Artificial Intelligence and Algorithmic Fairness Initiative: eeoc.gov/ai
- CFPB — Chatbots in Consumer Finance: consumerfinance.gov — Chatbots
- DOD — Responsible AI: ai.mil/responsible_ai.html
- USPTO — AI Inventorship Guidance: uspto.gov/initiatives/artificial-intelligence
- US Copyright Office — AI Initiative: copyright.gov/ai
National Security AI
- National Security Commission on AI — Final Report (2021): nscai.gov — Final Report
- DOD Chief Digital and AI Office (CDAO): ai.mil
Court Decisions
- Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022) — AI inventorship: Federal Circuit — Thaler v. Vidal
- Thaler v. Perlmutter, No. 22-1564 (D.D.C. 2023) — AI copyright authorship: referenced in Copyright Office AI Initiative