| Jurisdiction | Regulatory Approach | Key Instruments | AI Definition | Risk Classification |
|---|---|---|---|---|
| EU | Comprehensive horizontal regulation | AI Act (Regulation 2024/1689) | Broad — machine-based system with autonomy, inference capability | 4 tiers: Unacceptable, High, Limited, Minimal |
| US (Federal) | Sector-specific; executive action; voluntary | Executive Orders; agency guidance; NIST AI RMF | Varies by agency; no single federal definition | No single classification; sector-dependent |
| US (States) | Fragmented; issue-specific laws | State laws (Colorado, Illinois, NYC, etc.) | Varies by state | Colorado: “high-risk” for consequential decisions |
| China | Technology-specific regulations | Algorithm, Deep Synthesis, GenAI rules | Technology-specific definitions per regulation | By technology type, not risk level |
| UK | Pro-innovation; sector regulators | White Paper; existing regulators (FCA, Ofcom, etc.) | Adaptivity- and autonomy-focused | Cross-sectoral principles; sector risk assessment |
| Canada | Proposed legislation + existing directives | AIDA (proposed); ADM Directive (federal gov) | AIDA: technological system using ML or similar | ADM Directive: 4-level impact assessment |
| Japan | Social Principles; soft law; sector guidance | Social Principles of Human-Centric AI (2019) | Broad; not formally defined in binding law | No formal classification |
| South Korea | Comprehensive framework + sector laws | AI Framework Act (2025) | Defined in framework law | High-risk AI designation |
| Singapore | Governance framework; voluntary | Model AI Governance Framework | Broad; technology-neutral | Risk-proportionate; sector-based |
| Brazil | Proposed comprehensive law | AI Bill (PL 2338/2023) | Broad; based on OECD definition | Risk-based (influenced by EU AI Act) |