At a glance: 7+ countries covered · 15+ major policies · 4.7B people affected · active period 2017–2026

Table of Contents

  1. Regional Overview
  2. Japan
  3. South Korea
  4. Singapore
  5. Australia
  6. India
  7. New Zealand
  8. ASEAN Regional Frameworks
  9. Emerging Regulators
  10. Comparative Analysis
  11. Regional Trends & Future Outlook
  12. References & Resources

1. Regional Overview

The Asia-Pacific region represents the most diverse landscape of AI governance approaches globally. Home to some of the world’s most advanced AI ecosystems alongside rapidly developing digital economies, the region spans the full spectrum from light-touch innovation-first frameworks to comprehensive regulatory regimes.

Key Regional Characteristics: The Asia-Pacific approach to AI governance is distinguished by significant variation between jurisdictions. Unlike the EU’s harmonized regulatory model, Asia-Pacific nations have developed largely independent frameworks reflecting their unique economic priorities, cultural values, and governance traditions. Common threads include strong government investment in AI research, emphasis on economic competitiveness, and growing attention to ethical considerations.

Regional Landscape at a Glance

Country Approach Primary Framework Status Key Focus
Japan Soft law / Guidelines Social Principles of Human-Centric AI (2019) Active & evolving Innovation promotion, human-centric values, Society 5.0
South Korea Legislation + Guidelines AI Framework Act (2025) Enacted High-impact AI regulation, trustworthiness, industry promotion
Singapore Governance framework Model AI Governance Framework (2019, 2020) Active Business-friendly governance, AI Verify testing, ASEAN leadership
Australia Voluntary + Mandatory (emerging) AI Ethics Framework (2019) + Safe & Responsible AI consultation Transitioning Voluntary ethics principles moving toward mandatory guardrails
India Light-touch / Sector-specific National Strategy for AI (2018) + DPDPA 2023 Developing AI for social good, digital personal data protection, AI innovation
New Zealand Principles-based Algorithm Charter (2020) + AI Strategy (2024) Active Government use of algorithms, transparency, Te Tiriti considerations

2. Japan

Japan has adopted a predominantly soft-law, principles-based approach to AI governance, emphasizing innovation promotion while embedding human-centric values. As one of the world’s leading AI research nations and a member of the G7, Japan has been influential in shaping global AI governance norms through both domestic frameworks and international engagement.

2.1 Social Principles of Human-Centric AI (2019)

Adopted by the Japanese Cabinet in March 2019, the Social Principles of Human-Centric AI represent Japan’s foundational AI governance document. Developed by the Council for Social Principles of Human-Centric AI under the Cabinet Office, these principles establish the philosophical and ethical framework for AI development and deployment in Japan.

Core Vision: Japan’s approach is deeply connected to the Society 5.0 concept — a super-smart society that balances economic advancement with the resolution of social problems through the integration of cyberspace and physical space. AI is viewed as a central enabling technology for this vision.

The Seven Social Principles

Principle Description Implementation Guidance
1. Human-Centric AI must not infringe upon fundamental human rights guaranteed by the Constitution; must be developed for people’s abilities and creativity to flourish Education reform, reskilling programs, preserving human decision-making authority
2. Education / Literacy Society must provide education opportunities for all to acquire AI literacy; policymakers must understand AI to make sound policy AI education across all levels, training for government officials, public awareness campaigns
3. Privacy Protection Personal data used for AI must be properly handled; people must not be profiled without consent or tracked beyond societal norms Compliance with APPI, data minimization, purpose limitation, consent mechanisms
4. Security Society must pay attention to AI system robustness and security, including resistance to intentional attacks Security-by-design, regular testing, incident response plans, supply chain security
5. Fair Competition AI must maintain fair market competition; must prevent concentration of wealth and data Anti-monopoly measures, data portability, open standards promotion
6. Fairness, Accountability, Transparency AI decisions must be explainable in a way appropriate to the context; providers must ensure fairness and non-discrimination Algorithmic auditing, bias testing, documentation of decision processes, impact assessments
7. Innovation Japan must promote continuous innovation in AI; international cooperation and cross-disciplinary research essential R&D investment, international partnerships, regulatory sandboxes, startup support

2.2 AI Strategy 2022 & Updates

Japan’s AI Strategy 2022, updated from the original 2019 strategy, establishes specific policy objectives and investment priorities. The strategy is overseen by the Integrated Innovation Strategy Promotion Council (IISPC) under the Cabinet Office.

Strategic Pillars

2.3 AI Governance Guidelines (METI)

The Ministry of Economy, Trade and Industry (METI) published the AI Governance Guidelines (Version 1.1, January 2022), providing practical guidance for businesses implementing AI. These guidelines follow the “agile governance” philosophy — flexible rules that can adapt to rapidly evolving technology.

Key Elements of Agile Governance

Element Description Implementation
Environmental Analysis Continuous monitoring of AI risks and opportunities Regular risk assessments, technology trend monitoring, stakeholder engagement
Goal Setting Define clear objectives balancing innovation and risk Multi-stakeholder goal alignment, KPI definition, periodic review cycles
System Design Create governance structures appropriate to AI risk level Internal review boards, testing protocols, documentation standards
Operation Execute governance measures with flexibility Continuous monitoring, incident response, performance measurement
Evaluation Assess effectiveness and adjust governance measures Audits, feedback loops, benchmarking against global standards

2.4 Act on the Protection of Personal Information (APPI)

While not AI-specific, Japan’s APPI (amended in 2020, with the amendments taking effect in April 2022) significantly impacts AI systems that process personal data. The amendments strengthened individual rights and introduced new obligations relevant to AI, including rules on pseudonymized data.

2.5 Hiroshima AI Process

Under Japan’s G7 presidency in 2023, the Hiroshima AI Process was launched to develop international governance frameworks for advanced AI systems, particularly generative AI and foundation models. This initiative produced the International Guiding Principles and Code of Conduct for organizations developing advanced AI systems.

Hiroshima AI Process Outcomes

2.6 Key Regulatory Bodies

Body Role Website
Cabinet Office Overall AI strategy coordination; Council for Social Principles cao.go.jp/cstp/ai
METI AI governance guidelines, industrial AI policy meti.go.jp AI Governance
MIC AI networking, telecommunications AI policy soumu.go.jp
PPC Personal data protection (APPI enforcement) ppc.go.jp
AIST / RIKEN AI research, testing frameworks, safety standards aist.go.jp

3. South Korea

South Korea has taken one of the most proactive legislative approaches to AI governance in the Asia-Pacific region. In January 2025, South Korea enacted the AI Framework Act (officially the Act on the Development of Artificial Intelligence and the Establishment of Trust), becoming one of the first countries in the world to pass comprehensive AI legislation.

3.1 AI Framework Act (2025)

Enacted on January 9, 2025, with full enforcement beginning January 22, 2026, the AI Framework Act establishes a comprehensive governance structure for AI in South Korea. The Act was developed through extensive multi-stakeholder consultation and draws on both the EU AI Act model and Korea’s own innovation-first philosophy.

Legislative History: Multiple AI bills were introduced in Korea’s National Assembly from 2020 onward. After years of debate, the final version balanced industry promotion with safety requirements. The Act passed with bipartisan support, reflecting broad consensus on the need for AI governance.

Core Structure

Chapter Coverage Key Provisions
Chapter 1: General Definitions, scope, basic principles Defines AI, high-impact AI, AI agents; establishes human dignity as core principle
Chapter 2: AI Policy National AI Committee, master plans Establishes National AI Committee chaired by President; 5-year master plans; annual implementation plans
Chapter 3: Industry Promotion R&D, talent, infrastructure Government R&D support, AI cluster zones, data infrastructure, startup incubation
Chapter 4: Ethics & Trustworthiness Ethical principles, trustworthiness National AI Ethics Charter; transparency obligations; bias prevention; human oversight requirements
Chapter 5: High-Impact AI Designation, obligations, assessments Government designates high-impact AI categories; mandatory impact assessments; notification requirements; record-keeping
Chapter 6: Rights & Remedies Individual rights, damage relief Right to explanation of AI decisions; right to object; damage compensation framework
Chapter 7: Safety & Oversight Safety measures, inspections Incident reporting, safety standards, government inspections, corrective orders

High-Impact AI Classification

The Act introduces a “high-impact AI” concept (similar to but distinct from the EU’s high-risk classification). High-impact AI is defined as AI that may significantly affect the life, physical safety, or fundamental rights of individuals, with the specific categories designated by presidential decree.
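Operationally, the designation model is a lookup against decree-listed categories rather than a case-by-case risk score. A minimal sketch in Python — the category strings below are a paraphrased, illustrative subset, not the statutory list:

```python
# Illustrative screening check for "high-impact AI" under Korea's AI
# Framework Act. The category set is a paraphrased, non-exhaustive subset
# used for illustration; the binding list is set by presidential decree.
HIGH_IMPACT_DOMAINS = {
    "energy supply",
    "drinking water",
    "healthcare and medical devices",
    "biometric analysis for criminal investigation",
    "recruitment and hiring",
    "loan screening",
    "transportation",
    "education",
}

def is_high_impact(domain: str) -> bool:
    """Return True if a deployment domain falls in a designated category."""
    return domain.strip().lower() in HIGH_IMPACT_DOMAINS

print(is_high_impact("Loan Screening"))   # → True
print(is_high_impact("music playlists"))  # → False
```

A deployer whose system screens as high-impact would then face the Act’s impact-assessment, notification, and record-keeping obligations described above.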

3.2 Personal Information Protection Act (PIPA)

South Korea’s PIPA (amended 2023) is one of Asia’s strongest data protection laws and significantly impacts AI systems. The Personal Information Protection Commission (PIPC) serves as the independent data protection authority.

3.3 National AI Ethics Standards (2020)

Published by the Ministry of Science and ICT (MSIT) in December 2020, the National AI Ethics Standards establish Korea’s ethical framework. These standards preceded the Framework Act and informed its ethics chapter.

Three Core Principles

  1. Respect for Human Dignity: AI must serve human well-being; human rights must not be infringed; AI must be developed to enhance human dignity
  2. Public Good: AI must contribute to society’s well-being; developers must consider social impact; AI must not exacerbate inequality
  3. Technical Soundness: AI systems must be safe, reliable, and secure; developers must implement appropriate quality controls

Ten Key Requirements

  1. Human Rights: AI must respect fundamental human rights and not discriminate
  2. Privacy: Personal information must be protected throughout the AI lifecycle
  3. Diversity: AI development must consider diverse social, cultural, and individual needs
  4. Non-harm: AI must not be used to cause physical or psychological harm
  5. Public Interest: AI must serve the broader public good and social welfare
  6. Solidarity: AI must promote international cooperation and shared prosperity
  7. Data Management: Training data must be collected and managed fairly and transparently
  8. Accountability: Clear accountability structures for AI development and deployment
  9. Safety: AI must be technically safe and robust throughout its lifecycle
  10. Transparency: AI operations and decision-making processes must be explainable

3.4 Key Regulatory Bodies

Body Role Website
National AI Committee Highest policy body, chaired by President; sets national AI strategy Established under AI Framework Act (2026)
MSIT AI industry promotion, R&D, national strategy execution msit.go.kr
PIPC Data protection (PIPA enforcement); AI data governance pipc.go.kr
NIA National Information Society Agency; AI infrastructure and digital government nia.or.kr

4. Singapore

Singapore has established itself as a global leader in practical AI governance, developing frameworks that prioritize business adoption while maintaining trust. The city-state’s approach is distinguished by its emphasis on voluntary governance tools, international collaboration, and the development of AI testing infrastructure.

4.1 Model AI Governance Framework (2019, 2020)

Published by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC), the Model AI Governance Framework provides practical and actionable guidance for organizations deploying AI. First released in January 2019 with a second edition in January 2020, it has been widely cited internationally.

Four Key Principles

Principle Description Practical Guidance
1. Internal Governance Organizations should have clear internal governance structures and measures for AI Designate AI governance officer; establish review boards; define risk tolerance; create escalation procedures
2. Decision-Making Models Determine appropriate level of human involvement in AI-augmented decisions Human-in-the-loop for high-risk; human-on-the-loop for medium; human-out-of-the-loop only for low-risk applications
3. Operations Management Manage AI risks through robust development and deployment processes Data governance, model training rigor, monitoring, incident management, bias testing
4. Stakeholder Interaction Engage stakeholders throughout AI lifecycle; communicate transparently User disclosure, feedback mechanisms, explanation capabilities, grievance channels
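The second principle pairs harm severity with probability to select an oversight model. A toy sketch of that matrix in Python — the numeric thresholds are assumptions for illustration, not IMDA guidance:

```python
def oversight_model(severity: str, probability: str) -> str:
    """Suggest a human-oversight model from a severity-probability matrix,
    in the spirit of Singapore's Model AI Governance Framework.

    severity / probability: "low", "medium", or "high".
    Thresholds below are illustrative assumptions, not official guidance.
    """
    levels = {"low": 0, "medium": 1, "high": 2}
    score = levels[severity] + levels[probability]
    if score >= 3:
        return "human-in-the-loop"    # a person approves each decision
    if score >= 1:
        return "human-on-the-loop"    # a person monitors and can intervene
    return "human-out-of-the-loop"    # fully automated, with audit trails

print(oversight_model("high", "medium"))  # → human-in-the-loop
print(oversight_model("low", "low"))      # → human-out-of-the-loop
```

The design point is that oversight intensity scales with combined risk: only the lowest-risk cell is fully automated, matching the framework’s guidance that human-out-of-the-loop is reserved for low-risk applications.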

4.2 AI Verify Foundation & Testing Framework

Launched in June 2022 and open-sourced in 2023, AI Verify is the world’s first AI governance testing framework and software toolkit. It allows companies to objectively demonstrate responsible AI practices through standardized testing.

AI Verify Testing Pillars

Testing Area What It Tests Output
Transparency Model explainability, feature importance, decision path clarity Explainability score, feature attribution reports
Fairness Bias across protected attributes (gender, race, age, etc.) Fairness metrics (demographic parity, equalized odds, etc.)
Safety & Robustness Adversarial robustness, performance stability, edge cases Robustness scores, adversarial testing results
Accountability Governance processes, documentation, oversight structures Process compliance checklist, governance gap analysis

AI Verify Foundation: In June 2023, Singapore launched the AI Verify Foundation as an open-source community initiative. Major partners include Google, Microsoft, IBM, Samsung, and Salesforce. The Foundation develops testing standards and promotes interoperability with other AI governance frameworks globally. It represents a shift from purely voluntary governance to providing objective, verifiable testing tools.
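The fairness metrics named in the testing table (demographic parity, equalized odds) reduce to simple rate comparisons across groups. A plain-Python sketch for two groups, illustrating the arithmetic rather than the AI Verify toolkit’s actual API:

```python
def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between groups
    "a" and "b".

    preds:  list of 0/1 model predictions
    groups: list of group labels ("a" or "b"), aligned with preds
    """
    def rate(g):
        sel = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(sel) / len(sel)
    return abs(rate("a") - rate("b"))

def equalized_odds_gap(preds, labels, groups):
    """Largest gap in true-positive or false-positive rate across groups."""
    def rates(g):
        tp = fp = pos = neg = 0
        for p, y, grp in zip(preds, labels, groups):
            if grp != g:
                continue
            if y == 1:
                pos += 1
                tp += p
            else:
                neg += 1
                fp += p
        return tp / pos, fp / neg
    tpr_a, fpr_a = rates("a")
    tpr_b, fpr_b = rates("b")
    return max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))

# Equal positive rates in both groups give a parity gap of 0.0
print(demographic_parity_gap([1, 0, 1, 0], ["a", "a", "b", "b"]))  # → 0.0
```

A testing pipeline like AI Verify reports such gaps against a tolerance threshold; a gap near zero indicates the model treats the two groups similarly on that metric.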

4.3 National AI Strategy 2.0 (2023)

Singapore’s updated National AI Strategy, released in December 2023, sets out the vision for Singapore to be a leader in AI for the public good, building on and broadening the original 2019 strategy.

4.4 Personal Data Protection Act (PDPA)

Singapore’s PDPA governs the collection, use, and disclosure of personal data by organizations. While not AI-specific, it has direct implications for AI systems.

4.5 Key Regulatory Bodies

Body Role Website
IMDA AI governance framework development; digital economy regulation imda.gov.sg AI Governance
PDPC Personal data protection; AI data governance guidance pdpc.gov.sg
Smart Nation Office National AI strategy coordination; government AI adoption smartnation.gov.sg
AI Verify Foundation Open-source AI testing tools and standards aiverifyfoundation.sg

5. Australia

Australia is in a period of significant transition in AI governance, moving from a purely voluntary ethics-based approach toward mandatory guardrails for high-risk AI. The Australian government has signaled clear intent to introduce binding regulations, making it one of the most dynamic regulatory environments in the Asia-Pacific.

5.1 AI Ethics Framework (2019)

Australia’s initial approach centered on the AI Ethics Framework, published by the Department of Industry, Science and Resources in 2019. This voluntary framework established eight AI Ethics Principles.

Eight AI Ethics Principles

  1. Human, Societal & Environmental Wellbeing: AI must benefit individuals, society, and the environment
  2. Human-Centred Values: AI must respect human rights, diversity, autonomy, and individual freedom
  3. Fairness: AI must be inclusive and accessible; must not discriminate unfairly
  4. Privacy Protection & Security: AI must respect and uphold privacy rights and data protection
  5. Reliability & Safety: AI must operate reliably, safely, and in accordance with its intended purpose
  6. Transparency & Explainability: There should be transparency and responsible disclosure about AI systems
  7. Contestability: AI-influenced decisions must be contestable; effective remedy must be available
  8. Accountability: Those responsible for AI must be identifiable and accountable

5.2 Safe and Responsible AI Consultation (2023–2024)

In June 2023, the Australian government launched a major consultation on Safe and Responsible AI, signaling the shift toward mandatory regulation. The consultation paper, receiving over 500 submissions, proposed a risk-based approach to AI governance.

Key Proposals

5.3 Interim Government Response (2024)

The Australian government released its interim response in January 2024, confirming the direction toward mandatory guardrails for high-risk AI while adopting a phased approach; a proposals paper detailing the guardrails, alongside a Voluntary AI Safety Standard, followed in September 2024.

5.4 Privacy Act Review & AI Implications

Australia’s comprehensive Privacy Act Review (Attorney-General’s Department report, 2023) includes proposals with significant AI implications, particularly around transparency for automated decision-making.

5.5 Sector-Specific AI Regulation

Sector Regulator AI-Relevant Actions
Financial Services ASIC, APRA Guidance on AI in financial advice; model risk management; robo-advice regulations; CPS 230 operational resilience
Healthcare TGA AI/ML-based Software as a Medical Device (SaMD) regulation; pre-market assessment; post-market surveillance
Competition ACCC Digital Platform Services Inquiry (ongoing); algorithmic transparency; merger review considering AI market power
Employment Fair Work Commission AI in workplace monitoring; algorithmic management; consultation obligations for AI deployment
National Security Department of Defence Responsible AI in Defence & National Security program; AI ethics principles for defence applications

5.6 Key Regulatory Bodies

Body Role Website
DISR Department of Industry, Science & Resources; AI policy lead industry.gov.au AI
OAIC Privacy regulation; AI data protection guidance oaic.gov.au
National AI Centre AI adoption support; industry collaboration; standards development csiro.au/naic
eSafety Commissioner Online safety including AI-generated content; deepfake regulation esafety.gov.au

6. India

India, home to one of the world’s largest and fastest-growing AI ecosystems, has adopted a primarily light-touch approach to AI governance while developing sector-specific regulations. As the world’s most populous nation and a major AI talent hub, India’s governance choices carry global significance.

6.1 National Strategy for Artificial Intelligence (2018)

NITI Aayog (National Institution for Transforming India) published India’s National Strategy for Artificial Intelligence in 2018, framing AI as a tool for inclusive economic growth and social development. The strategy introduced the concept of “#AIforAll” — AI for everyone.

Five Priority Sectors

Sector Rationale Target Applications
Healthcare Address massive doctor-patient ratio gaps; improve rural healthcare access AI diagnostics, telemedicine triage, drug discovery, epidemic prediction
Agriculture Improve yields for 600M+ dependent population; address climate challenges Crop prediction, soil analysis, pest detection, market price forecasting
Education Personalize learning for diverse population; address teacher shortages Adaptive learning, automated assessment, language translation, content localization
Smart Cities Manage rapid urbanization; improve urban services and infrastructure Traffic management, waste optimization, energy efficiency, public safety
Smart Mobility Address transportation challenges in world’s third-largest vehicle market Autonomous driving, traffic optimization, logistics planning, accident prevention

6.2 Digital Personal Data Protection Act (DPDPA, 2023)

India’s Digital Personal Data Protection Act, enacted in August 2023, is the country’s first comprehensive data protection law. While not AI-specific, it establishes foundational rules that directly impact AI systems processing personal data.

Key Provisions Affecting AI

6.3 IndiaAI Mission (2024)

Launched in March 2024 with an allocation of INR 10,372 crore (~$1.25 billion), the IndiaAI Mission is the government’s comprehensive program to accelerate AI development and deployment.

Seven Pillars

  1. IndiaAI Compute Capacity: Build 10,000+ GPU AI compute infrastructure through public-private partnership
  2. IndiaAI Innovation Centre: Develop and deploy indigenous large multimodal models (LMMs) in Indian languages
  3. IndiaAI Datasets Platform: Unified data platform providing quality datasets for AI innovation; non-personal and anonymized datasets
  4. IndiaAI Application Development: Fund AI applications in critical sectors; startup support with up to INR 25 crore per project
  5. IndiaAI FutureSkills: Train 14,000+ AI professionals; data and AI programs in 200+ tier-2/3 city colleges
  6. IndiaAI Startup Financing: Financial support for AI startups through equity funding, loans, and grants
  7. Safe & Trusted AI: Develop responsible AI tools, guidelines, and self-assessment frameworks for Indian context

6.4 MeitY AI Governance Advisories

The Ministry of Electronics and Information Technology (MeitY) has issued several advisories on AI governance.

6.5 Sector-Specific AI Governance

Sector Regulator AI-Relevant Actions
Financial Services RBI, SEBI, IRDAI AI in lending guidelines; algorithmic trading rules; InsurTech sandbox; digital lending framework
Healthcare CDSCO, NMC AI as medical device regulation (evolving); telemedicine guidelines; health data standards (ABDM)
Telecommunications TRAI, DoT AI in spectrum management; network optimization; consumer protection in AI-powered services
Competition CCI Investigation into AI-driven market practices; algorithmic collusion studies; digital market study
Judiciary Supreme Court AI Committee AI for case management; SUPACE portal for legal research; AI translation for court proceedings

6.6 Key Regulatory Bodies

Body Role Website
MeitY Digital policy, AI governance advisories, IT Act enforcement meity.gov.in
NITI Aayog National AI strategy, responsible AI guidelines, policy advisory niti.gov.in AI
IndiaAI AI Mission implementation, compute, datasets, skilling indiaai.gov.in
DPBI Data Protection Board; DPDPA enforcement (being constituted) Establishment in progress (2024–2025)

7. New Zealand

New Zealand has developed a distinctive approach to AI governance that emphasizes government accountability, indigenous considerations (Te Tiriti o Waitangi), and transparency in algorithmic decision-making. While smaller in scale than other Asia-Pacific nations, New Zealand’s frameworks have been internationally recognized for their innovation.

7.1 Algorithm Charter for Aotearoa New Zealand (2020)

The Algorithm Charter, launched in July 2020, is a voluntary commitment by government agencies to use algorithms in a transparent, accountable, and fair manner. It is one of the world’s first government-wide algorithm accountability frameworks.

Charter Commitments

Commitment Description Implementation
Transparency Maintain transparency about how algorithms are used and what data drives them Public algorithm registers; plain-language descriptions of systems; proactive disclosure
Partnership Embed a Te Ao Māori perspective and uphold commitments under Te Tiriti o Waitangi Māori data sovereignty considerations; consultation with iwi; cultural impact assessments
People Focus on people; ensure that individuals and communities are not harmed by algorithmic decisions Human oversight requirements; redress mechanisms; community engagement
Data Ensure data is fit for purpose, protected, and used ethically Data quality audits; privacy impact assessments; ethical data sourcing
Privacy, Ethics, Human Rights Actively protect individual privacy, uphold ethics, and respect human rights Privacy-by-design; ethical review processes; human rights impact assessments
Consultation Seek public consultation and input on significant algorithmic decisions Public submissions; community advisory groups; stakeholder workshops

Te Tiriti Considerations: New Zealand’s unique governance feature is the explicit integration of Te Tiriti o Waitangi (Treaty of Waitangi) principles into AI governance. This includes Māori data sovereignty — the principle that Māori have inherent rights and interests in data collected about them, and that AI systems must respect and uphold these rights. Te Mana Raraunga (the Māori Data Sovereignty Network) has been influential in shaping these requirements.

7.2 National AI Strategy (2024)

New Zealand released its national AI strategy in 2024, addressing both opportunities and risks of AI adoption across government and the private sector.

7.3 Privacy Act 2020

New Zealand’s Privacy Act 2020 governs personal information handling and has been applied to AI contexts.

8. ASEAN Regional Frameworks

The Association of Southeast Asian Nations (ASEAN) has developed regional AI governance frameworks to promote harmonization across its 10 member states. Led significantly by Singapore’s expertise, ASEAN’s approach balances innovation promotion with responsible AI development.

8.1 ASEAN Guide on AI Governance and Ethics (2024)

Building on the 2021 preliminary framework, the comprehensive ASEAN Guide on AI Governance and Ethics was adopted in 2024. This guide provides a reference for ASEAN member states to develop national AI governance frameworks.

Core Principles

Transparency & Explainability: AI systems should be transparent and explainable appropriate to the context
Fairness & Equity: AI should be designed to minimize bias and discrimination
Security & Safety: AI must be secure, robust, and safe throughout its lifecycle
Human-Centricity: AI should respect human rights and be designed for human benefit
Privacy & Data Governance: AI must comply with data protection laws and respect privacy
Accountability & Integrity: Clear accountability for AI outcomes; responsible development practices
Robustness & Reliability: AI systems should perform reliably and handle errors gracefully

8.2 ASEAN Digital Economy Framework Agreement (DEFA)

The ASEAN DEFA, under negotiation from 2023 with conclusion targeted for 2025, includes AI governance provisions as part of the broader digital economy framework.

8.3 Member State Developments

Country AI Strategy Key Initiatives Status
Thailand National AI Strategy 2022–2027 AI ethics guidelines; AI sandbox; smart agriculture and healthcare focus; PDPA enforcement Active
Vietnam National AI Strategy to 2030 AI R&D centres; focus on Vietnamese language AI; smart manufacturing; cybersecurity AI regulations Active
Indonesia National AI Strategy 2020–2045 (Stranas KA) Ethics and governance framework; AI in government services; PDP Law 2022; digital economy vision Active
Malaysia National AI Roadmap 2021–2025 AI ethics principles; Centre for AI & Data Analytics; AI in public service; PDPA enforcement Active
Philippines National AI Strategy Roadmap (2021) AI in agriculture, disaster response, education; data privacy (DPA 2012); proposed AI Development Act Developing

9. Emerging Asia-Pacific Regulators

Several additional Asia-Pacific jurisdictions are developing or implementing AI governance frameworks, contributing to the region’s evolving regulatory landscape.

9.1 Taiwan

9.2 Hong Kong SAR

9.3 Israel

9.4 Saudi Arabia & UAE (Cross-Reference)

While covered in detail in the Africa & Middle East section, Saudi Arabia’s SDAIA and the UAE’s AI strategy are significant Asia-Pacific adjacent developments. Both nations have established dedicated AI authorities and comprehensive national strategies.

10. Comparative Analysis

Comparing Asia-Pacific approaches reveals important patterns and distinctions that help understand the region’s evolving AI governance landscape.

10.1 Regulatory Approach Comparison

Dimension Japan South Korea Singapore Australia India
Primary Approach Soft law Comprehensive legislation Governance framework + testing Transitioning to mandatory Light-touch / sectoral
Risk Classification No formal tiers High-impact AI designation Human-in-the-loop framework High-risk (proposed) No formal tiers
Binding Requirements APPI (data) only Yes (AI Framework Act) PDPA (data); AI governance voluntary Moving toward mandatory DPDPA (data); AI advisories
AI-Specific Law No Yes (first comprehensive) No (framework-based) Under development No (sector-specific)
Right to Explanation Guidelines only Yes (statutory) Framework guidance Proposed Sector-specific
International Influence G7 Hiroshima AI Process EU-aligned model ASEAN governance leadership Five Eyes / OECD Global South voice
AI Testing/Certification AIST quality standards Trustworthiness certification (planned) AI Verify (world’s first) National AI Centre standards Sector-specific testing

10.2 Data Protection & AI Comparison

Feature Japan (APPI) Korea (PIPA) Singapore (PDPA) Australia (Privacy Act) India (DPDPA)
EU Adequacy Yes Yes No (MOUs) No (in discussion) No
AI Training Data Pseudonymized data use allowed Pseudonymized/combined data provisions Deemed consent for business improvement Under review Consent or legitimate use
Automated Decision Rights Limited Right to refuse automated decisions Guidance-based Proposed (Privacy Act review) Not yet specified
Max Penalty ¥100M (~$670K) KRW 500M (~$370K) + criminal SGD 1M or 10% revenue AUD 50M+ per contravention INR 250 crore (~$30M)

12. References & Resources

Japan

South Korea

Singapore

Australia

India

New Zealand

ASEAN & Regional

International & Academic Resources
