Comprehensive overview of AI laws, policies, and regulatory frameworks across the Asia-Pacific region — covering Japan, South Korea, Singapore, Australia, India, New Zealand, and emerging regulatory efforts.
Comprehensive Reference · Last Updated: February 2026 · 7+ Jurisdictions
The Asia-Pacific region represents the most diverse landscape of AI governance approaches globally. Home to some of the world’s most advanced AI ecosystems alongside rapidly developing digital economies, the region spans the full spectrum from light-touch innovation-first frameworks to comprehensive regulatory regimes.
Key Regional Characteristics: The Asia-Pacific approach to AI governance is distinguished by significant variation between jurisdictions. Unlike the EU’s harmonized regulatory model, Asia-Pacific nations have developed largely independent frameworks reflecting their unique economic priorities, cultural values, and governance traditions. Common threads include strong government investment in AI research, emphasis on economic competitiveness, and growing attention to ethical considerations.
Regional Landscape at a Glance

| Country | Approach | Primary Framework | Status | Key Focus |
| --- | --- | --- | --- | --- |
| Japan | Soft law / Guidelines | Social Principles of Human-Centric AI (2019) | Active & evolving | Innovation promotion, human-centric values, Society 5.0 |
| South Korea | Legislation + Guidelines | AI Framework Act (2025) | Enacted | High-impact AI regulation, trustworthiness, industry promotion |
| Singapore | Governance framework | Model AI Governance Framework (2019, 2020) | Active | Business-friendly governance, AI Verify testing, ASEAN leadership |
| Australia | Voluntary + Mandatory (emerging) | AI Ethics Framework (2019) + Safe & Responsible AI consultation | In transition | Ethics principles, emerging mandatory guardrails for high-risk AI |
| India | Light-touch + sector-specific | National Strategy for AI (2018) + DPDPA (2023) | Developing | AI for social good, digital personal data protection, AI innovation |
| New Zealand | Principles-based | Algorithm Charter (2020) + AI Strategy (2024) | Active | Government use of algorithms, transparency, Te Tiriti considerations |
2. Japan
Japan has adopted a predominantly soft-law, principles-based approach to AI governance, emphasizing innovation promotion while embedding human-centric values. As one of the world’s leading AI research nations and a member of the G7, Japan has been influential in shaping global AI governance norms through both domestic frameworks and international engagement.
2.1 Social Principles of Human-Centric AI (2019)
Adopted by the Japanese Cabinet in March 2019, the Social Principles of Human-Centric AI represent Japan’s foundational AI governance document. Developed by the Council for Social Principles of Human-Centric AI under the Cabinet Office, these principles establish the philosophical and ethical framework for AI development and deployment in Japan.
Core Vision: Japan’s approach is deeply connected to the Society 5.0 concept — a super-smart society that balances economic advancement with the resolution of social problems through the integration of cyberspace and physical space. AI is viewed as a central enabling technology for this vision.
The Seven Social Principles

| Principle | Description | Implementation Guidance |
| --- | --- | --- |
| 1. Human-Centric | AI must not infringe upon fundamental human rights guaranteed by the Constitution; must be developed for people's abilities and creativity to flourish | Education reform, reskilling programs, preserving human decision-making authority |
| 2. Education / Literacy | Society must provide education opportunities for all to acquire AI literacy; policymakers must understand AI to make sound policy | AI education across all levels, training for government officials, public awareness campaigns |
| 3. Privacy Protection | Personal data used for AI must be properly handled; people must not be profiled without consent or tracked beyond societal norms | Compliance with APPI, data minimization, purpose limitation, consent mechanisms |
| 4. Security | Society must pay attention to AI system robustness and security, including resistance to intentional attacks | |
| 5. Fair Competition | AI must maintain fair market competition; must prevent concentration of wealth and data | Anti-monopoly measures, data portability, open standards promotion |
| 6. Fairness, Accountability, Transparency | AI decisions must be explainable in a way appropriate to the context; providers must ensure fairness and non-discrimination | Algorithmic auditing, bias testing, documentation of decision processes, impact assessments |
| 7. Innovation | Japan must promote continuous innovation in AI; international cooperation and cross-disciplinary research essential | R&D investment, international partnerships, regulatory sandboxes, startup support |
2.2 AI Strategy 2022 & Updates
Japan’s AI Strategy 2022, updated from the original 2019 strategy, establishes specific policy objectives and investment priorities. The strategy is overseen by the Integrated Innovation Strategy Promotion Council (IISPC) under the Cabinet Office.
Strategic Pillars
Human Resources: Train 250,000 AI-literate workers annually; establish AI education from elementary through university level; attract global AI talent through visa reform
Industrial Competitiveness: Promote AI adoption across manufacturing, healthcare, agriculture, and disaster prevention; support AI startups with $2B+ funding
Technology Base: Invest in large language models, quantum-AI convergence, and next-generation computing infrastructure; establish AI research hubs at RIKEN and AIST
International Cooperation: Lead G7 AI governance discussions; participate in GPAI, OECD AI Policy Observatory; promote Data Free Flow with Trust (DFFT)
AI Governance: Develop agile governance mechanisms; create sector-specific guidelines; establish AI governance social implementation council
2.3 AI Governance Guidelines (METI)
The Ministry of Economy, Trade and Industry (METI) published the AI Governance Guidelines (Version 1.1, January 2022), providing practical guidance for businesses implementing AI. These guidelines follow the “agile governance” philosophy — flexible rules that can adapt to rapidly evolving technology.
Key Elements of Agile Governance

| Element | Description | Implementation |
| --- | --- | --- |
| Environmental Analysis | Continuous monitoring of AI risks and opportunities | |
| Evaluation | Assess effectiveness and adjust governance measures | Audits, feedback loops, benchmarking against global standards |
2.4 Act on the Protection of Personal Information (APPI)
While not AI-specific, Japan's APPI (amended in 2020, with the amendments taking effect in April 2022) significantly impacts AI systems that process personal data. The 2020 amendments strengthened individual rights and introduced new obligations relevant to AI.
Pseudonymized Data: New category allowing broader use for internal analytics and AI training while maintaining privacy protections
Individual Rights: Enhanced rights to request data deletion and restrict use; right to receive data in electronic format
Cross-border Transfers: Stricter requirements for international data transfers; must inform individuals of destination country’s data protection regime
Enforcement: Personal Information Protection Commission (PPC) can now issue commands with penalties up to ¥100 million for violations
Adequacy: Japan-EU mutual adequacy decision enables data flows under GDPR, critical for AI development using international datasets
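The pseudonymized-data category described above works by transforming direct identifiers so records can support internal analytics and model training without naming individuals. The sketch below illustrates one common approach (keyed hashing); the field names, key handling, and pseudonym length are illustrative assumptions, not APPI's technical requirements.

```python
import hashlib
import hmac

# Hypothetical illustration of pseudonymization for AI training data:
# direct identifiers are replaced with keyed hashes; other attributes
# pass through. Field names and the key are assumptions for this sketch.

SECRET_KEY = b"rotate-and-store-separately"  # kept apart from the data

def pseudonymize(record: dict, id_fields=("name", "email")) -> dict:
    out = {}
    for field, value in record.items():
        if field in id_fields:
            digest = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # stable pseudonym
        else:
            out[field] = value  # non-identifying attributes unchanged
    return out

record = {"name": "Tanaka Hanako", "email": "hanako@example.com", "age": 41}
pseudo = pseudonymize(record)
assert pseudo["age"] == 41
assert pseudo["name"] != "Tanaka Hanako"
# The same input always maps to the same pseudonym, so joins remain possible.
assert pseudonymize(record)["name"] == pseudo["name"]
```

Because the hash is keyed and the key is stored separately from the data, the pseudonyms are stable enough for analytics joins but not reversible by anyone holding only the dataset.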
2.5 Hiroshima AI Process
Under Japan’s G7 presidency in 2023, the Hiroshima AI Process was launched to develop international governance frameworks for advanced AI systems, particularly generative AI and foundation models. This initiative produced the International Guiding Principles and Code of Conduct for organizations developing advanced AI systems.
Hiroshima AI Process Outcomes
11 Guiding Principles: For organizations developing advanced AI systems, including pre-deployment testing, transparency, content authentication, and research into societal risks
Code of Conduct: Voluntary commitments for AI developers covering safety testing, information sharing, cybersecurity investment, and watermarking
Comprehensive Policy Framework: Adopted December 2023 at G7 Leaders’ level, covering intellectual property, disinformation, transparency, and workforce impact
Friends Group: Extended participation beyond G7 to include OECD, GPAI, and additional partner nations; 49 countries endorsed by early 2024
2.6 Key Regulatory Bodies

| Body | Role |
| --- | --- |
| Cabinet Office | Overall AI strategy coordination; Council for Social Principles |
3. South Korea
South Korea has taken one of the most proactive legislative approaches to AI governance in the Asia-Pacific region. In January 2025, South Korea enacted the AI Framework Act (officially the Act on the Development of Artificial Intelligence and the Establishment of Trust), becoming one of the first countries in the world to pass comprehensive AI legislation.
3.1 AI Framework Act (2025)
Enacted on January 9, 2025, with full enforcement beginning January 22, 2026, the AI Framework Act establishes a comprehensive governance structure for AI in South Korea. The Act was developed through extensive multi-stakeholder consultation and draws on both the EU AI Act model and Korea’s own innovation-first philosophy.
Legislative History: Multiple AI bills were introduced in Korea’s National Assembly from 2020 onward. After years of debate, the final version balanced industry promotion with safety requirements. The Act passed with bipartisan support, reflecting broad consensus on the need for AI governance.
Core Structure

| Chapter | Coverage | Key Provisions |
| --- | --- | --- |
| Chapter 1: General | Definitions, scope, basic principles | Defines AI, high-impact AI, AI agents; establishes human dignity as core principle |
| Chapter 2: AI Policy | National AI Committee, master plans | Establishes National AI Committee chaired by President; 5-year master plans; annual implementation plans |
| Chapter 3: Industry Promotion | R&D, talent, infrastructure | Government R&D support, AI cluster zones, data infrastructure, startup incubation |
| Chapter 4: Ethics & Trustworthiness | Ethical principles, trustworthiness | National AI Ethics Charter; transparency obligations; bias prevention; human oversight requirements |
| Chapter 5: High-Impact AI | Designation, obligations, assessments | Government designates high-impact AI categories; mandatory impact assessments; notification requirements; record-keeping |
| Chapter 6: Rights & Remedies | Individual rights, damage relief | Right to explanation of AI decisions; right to object; damage compensation framework |
| Chapter 7: Safety & Oversight | Safety measures, inspections | Incident reporting, safety standards, government inspections, corrective orders |
High-Impact AI Classification
The Act introduces a “high-impact AI” concept (similar to but distinct from the EU’s high-risk classification). High-impact AI is defined as AI that may significantly affect the life, physical safety, or fundamental rights of individuals. Categories are designated by presidential decree and include:
Judicial & law enforcement: AI used in criminal investigation, prosecution, adjudication, or corrections
Critical infrastructure: AI operating in energy, transportation, water, telecommunications systems
Healthcare: AI for diagnosis, treatment recommendations, or patient management
Employment: AI used in recruitment, performance evaluation, or termination decisions
Education: AI for student assessment, admissions, or educational content delivery
Financial services: AI for credit scoring, insurance underwriting, or investment advice
Public administration: AI used in welfare eligibility, immigration, or public service delivery
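The designation-by-category mechanism can be pictured as a lookup from a concrete use case to a decree-designated category. The sketch below is purely illustrative: the use-case keys and the helper function are hypothetical, and under the Act the actual designation happens by presidential decree, not by any such mapping.

```python
# Illustrative only: the Act's categories are set by presidential decree.
# The keys below paraphrase the category list above; they are not
# statutory terms.
HIGH_IMPACT_DOMAINS = {
    "criminal_investigation": "Judicial & law enforcement",
    "energy_grid_control": "Critical infrastructure",
    "medical_diagnosis": "Healthcare",
    "resume_screening": "Employment",
    "student_assessment": "Education",
    "credit_scoring": "Financial services",
    "welfare_eligibility": "Public administration",
}

def classify(use_case: str):
    """Return the high-impact category for a use case, or None."""
    return HIGH_IMPACT_DOMAINS.get(use_case)

assert classify("credit_scoring") == "Financial services"
assert classify("movie_recommendation") is None  # not a designated category
```

A deployer would use such a lookup only as a first-pass screen; any system matching a designated category then triggers the Chapter 5 obligations (impact assessment, notification, record-keeping).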
3.2 Personal Information Protection Act (PIPA)
South Korea’s PIPA (amended 2023) is one of Asia’s strongest data protection laws and significantly impacts AI systems. The Personal Information Protection Commission (PIPC) serves as the independent data protection authority.
Automated Decision-Making: 2023 amendments introduced rights related to automated individual decision-making; right to refuse decisions based solely on automated processing
Pseudonymized Data: Special provisions allowing use of pseudonymized data for AI research without consent, subject to safeguards
Data Combination: Designated expert agencies can facilitate data combination for AI research
Cross-border Transfers: Adequacy-based transfer mechanism; Korea has EU adequacy decision enabling AI data flows
AI-Specific Guidance: PIPC has issued guidance on AI and personal data, including generative AI guidelines
3.3 National AI Ethics Standards (2020)
Published by the Ministry of Science and ICT (MSIT) in December 2020, the National AI Ethics Standards establish Korea’s ethical framework. These standards preceded the Framework Act and informed its ethics chapter.
Three Core Principles
Respect for Human Dignity: AI must serve human well-being; human rights must not be infringed; AI must be developed to enhance human dignity
Public Good: AI must contribute to society’s well-being; developers must consider social impact; AI must not exacerbate inequality
Technical Soundness: AI systems must be safe, reliable, and secure; developers must implement appropriate quality controls
Ten Key Requirements

| # | Requirement | Description |
| --- | --- | --- |
| 1 | Human Rights | AI must respect fundamental human rights and not discriminate |
| 2 | Privacy | Personal information must be protected throughout AI lifecycle |
| 3 | Diversity | AI development must consider diverse social, cultural, and individual needs |
| 4 | Non-harm | AI must not be used to cause physical or psychological harm |
| 5 | Public Interest | AI must serve the broader public good and social welfare |
| 6 | Solidarity | AI must promote international cooperation and shared prosperity |
| 7 | Data Management | Training data must be collected and managed fairly and transparently |
| 8 | Accountability | Clear accountability structures for AI development and deployment |
| 9 | Safety | AI must be technically safe and robust throughout its lifecycle |
| 10 | Transparency | AI operations and decision-making processes must be explainable |
3.4 Key Regulatory Bodies

| Body | Role |
| --- | --- |
| National AI Committee | Highest policy body, chaired by President; sets national AI strategy; established under the AI Framework Act (2026) |
| MSIT | AI industry promotion, R&D, national strategy execution |
4. Singapore
Singapore has established itself as a global leader in practical AI governance, developing frameworks that prioritize business adoption while maintaining trust. The city-state's approach is distinguished by its emphasis on voluntary governance tools, international collaboration, and the development of AI testing infrastructure.
4.1 Model AI Governance Framework (2019, 2020)
Published by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC), the Model AI Governance Framework provides practical and actionable guidance for organizations deploying AI. First released in January 2019 with a second edition in January 2020, it has been widely cited internationally.
Four Key Principles

| Principle | Description | Practical Guidance |
| --- | --- | --- |
| 1. Internal Governance | Organizations should have clear internal governance structures and measures for AI | |
| 2. Human Involvement in Decision-Making | Determine appropriate level of human involvement in AI-augmented decisions | Human-in-the-loop for high-risk; human-on-the-loop for medium; human-out-of-the-loop only for low-risk applications |
| 3. Operations Management | Manage AI risks through robust development and deployment processes | Data governance, model training rigor, monitoring, incident management, bias testing |
| 4. Stakeholder Interaction | Engage stakeholders throughout AI lifecycle; communicate transparently | User disclosure, feedback mechanisms, explanation capabilities, grievance channels |
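The human-involvement tiering in the Framework reduces to a small decision rule: the riskier the decision, the closer a human sits to it. The sketch below simply restates that guidance in code; the risk labels and the function itself are illustrative assumptions, not part of the Framework's text.

```python
# Hypothetical restatement of the Model Framework's human-involvement
# tiers: human-in-the-loop for high-risk, human-on-the-loop for medium,
# human-out-of-the-loop for low-risk. Tier names are assumptions.

def oversight_mode(risk: str) -> str:
    modes = {
        "high": "human-in-the-loop",     # a person approves each decision
        "medium": "human-on-the-loop",   # a person monitors and can intervene
        "low": "human-out-of-the-loop",  # full automation is acceptable
    }
    if risk not in modes:
        raise ValueError(f"unknown risk tier: {risk}")
    return modes[risk]

assert oversight_mode("high") == "human-in-the-loop"
assert oversight_mode("low") == "human-out-of-the-loop"
```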
4.2 AI Verify Foundation & Testing Framework
Launched in June 2022 and open-sourced in 2023, AI Verify is the world’s first AI governance testing framework and software toolkit. It allows companies to objectively demonstrate responsible AI practices through standardized testing.
AI Verify Testing Pillars

| Testing Area | What It Tests | Output |
| --- | --- | --- |
| Transparency | Model explainability, feature importance, decision path clarity | Explainability score, feature attribution reports |
| Fairness | Bias across protected attributes (gender, race, age, etc.) | |
| Process Checks | Documented governance and accountability practices | Process compliance checklist, governance gap analysis |
AI Verify Foundation: In June 2023, Singapore launched the AI Verify Foundation as an open-source community initiative. Major partners include Google, Microsoft, IBM, Samsung, and Salesforce. The Foundation develops testing standards and promotes interoperability with other AI governance frameworks globally. It represents a shift from purely voluntary governance to providing objective, verifiable testing tools.
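Fairness testing of the kind AI Verify standardizes can be illustrated with a simple group-parity check: compare the positive-outcome rate a model produces for each protected group. This is not AI Verify code; the metric (demographic parity difference) is one common fairness measure among several, chosen here as an assumption for illustration.

```python
# Hypothetical fairness check: the maximum gap in positive-outcome
# rates across groups of a protected attribute. The metric and the toy
# data are illustrative, not drawn from the AI Verify toolkit.

def demographic_parity_gap(outcomes, groups):
    """Max difference in positive-outcome rate between any two groups."""
    counts = {}
    for y, g in zip(outcomes, groups):
        pos, n = counts.get(g, (0, 0))
        counts[g] = (pos + (1 if y == 1 else 0), n + 1)
    rates = [pos / n for pos, n in counts.values()]
    return max(rates) - min(rates)

# Toy data: model approvals (1) by applicant group.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(outcomes, groups)
assert abs(gap - 0.5) < 1e-9  # group A: 3/4 approved, group B: 1/4
```

A testing toolkit would report such a gap alongside a threshold agreed in advance; a large gap flags the model for review rather than automatically failing it.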
4.3 National AI Strategy 2.0 (2023)
Singapore’s updated National AI Strategy, released in December 2023, sets out the vision for Singapore to be a leader in AI for the public good. Building on the original 2019 strategy, version 2.0 emphasizes:
AI for Public Good: Focus on using AI to address national challenges in healthcare, education, urban planning, and sustainability
Excellence in AI: Building world-class AI capabilities in research, talent, and infrastructure; target of tripling AI practitioner pool
AI Governance Leadership: Positioning Singapore as the global hub for practical AI governance through AI Verify, ASEAN coordination, and international partnerships
Compute Infrastructure: Significant investment in national AI compute resources; partnerships with cloud providers for sovereign AI capabilities
Industry Transformation: AI adoption programs for SMEs; sector-specific AI roadmaps for finance, healthcare, logistics, and government
4.4 Personal Data Protection Act (PDPA)
Singapore’s PDPA governs the collection, use, and disclosure of personal data by organizations. While not AI-specific, it has direct implications for AI systems.
Consent Framework: Requires consent or legitimate interest for data collection; notification obligations for AI processing purposes
Deemed Consent: Business improvement exception allows use of personal data for AI model training if specific conditions are met
Advisory Guidelines on AI: PDPC has issued specific guidance on using personal data in AI/ML systems, including model training and inference
Mandatory Breach Notification: 72-hour notification requirement for significant data breaches, including those caused by AI system failures
Financial Penalties: Up to SGD 1 million or 10% of annual turnover, whichever is higher (2022 amendment)
4.5 Key Regulatory Bodies
Body
Role
Website
IMDA
AI governance framework development; digital economy regulation
5. Australia
Australia is in a period of significant transition in AI governance, moving from a purely voluntary ethics-based approach toward mandatory guardrails for high-risk AI. The Australian government has signaled clear intent to introduce binding regulations, making it one of the most dynamic regulatory environments in the Asia-Pacific.
5.1 AI Ethics Framework (2019)
Australia’s initial approach centered on the AI Ethics Framework, published by the Department of Industry, Science and Resources in 2019. This voluntary framework established eight AI Ethics Principles.
Eight AI Ethics Principles

| Principle | Description |
| --- | --- |
| 1. Human, Societal & Environmental Wellbeing | AI must benefit individuals, society, and the environment |
| 2. Human-Centred Values | AI must respect human rights, diversity, autonomy, and individual freedom |
| 3. Fairness | AI must be inclusive and accessible; must not discriminate unfairly |
| 4. Privacy Protection & Security | AI must respect and uphold privacy rights and data protection |
| 5. Reliability & Safety | AI must operate reliably, safely, and in accordance with intended purpose |
| 6. Transparency & Explainability | There should be transparency and responsible disclosure about AI systems |
| 7. Contestability | AI-influenced decisions must be contestable; effective remedy must be available |
| 8. Accountability | Those responsible for AI must be identifiable and accountable |
5.2 Safe and Responsible AI Consultation (2023–2024)
In June 2023, the Australian government launched a major consultation on Safe and Responsible AI, signaling the shift toward mandatory regulation. The consultation paper, which received over 500 submissions, proposed a risk-based approach to AI governance.
Key Proposals
Mandatory Guardrails for High-Risk AI: Binding obligations for AI in high-risk settings (healthcare, employment, law enforcement, critical infrastructure)
Voluntary Code for General AI: Industry-developed standards for medium- and low-risk applications
AI Risk Assessment: Organizations deploying high-risk AI must conduct and publish impact assessments
Transparency Obligations: Notification requirements when AI is used in decision-making affecting individuals
Regulatory Sandboxes: Safe spaces for AI innovation with temporary regulatory flexibility
Existing Law Enforcement: Better enforcement of existing laws (discrimination, consumer protection, privacy) as they apply to AI
5.3 Interim Government Response (2024)
The Australian government released its interim response in January 2024, confirming the direction toward mandatory guardrails while adopting a phased approach.
Phase 1 (Immediate): Voluntary AI Safety Standard; strengthen existing regulators’ capabilities for AI oversight; establish an AI advisory body
Phase 2 (Near-term): Develop mandatory guardrails for high-risk AI applications; create AI-specific reporting and transparency requirements
Phase 3 (Medium-term): Evaluate need for comprehensive AI legislation; consider AI-specific regulatory body or enhanced powers for existing regulators
Cross-cutting: National AI capability plan; AI education and skills programs; international engagement on AI standards
5.4 Privacy Act Review & AI Implications
Australia’s comprehensive Privacy Act Review (Attorney-General’s report, 2023) includes proposals with significant AI implications:
Right to Explanation: Proposed right for individuals to request meaningful information about how automated decisions are made
Automated Decision-Making Safeguards: New obligations for organizations using AI for decisions that significantly affect individuals
Children’s Privacy: Enhanced protections for minors, including restrictions on AI profiling of children
Enforcement Enhancement: Stronger powers for the Office of the Australian Information Commissioner (OAIC)
Privacy Impact Assessments: Mandatory PIAs for high-risk activities, including AI systems processing personal data
5.5 Sector-Specific AI Regulation

| Sector | Regulator | AI-Relevant Actions |
| --- | --- | --- |
| Financial Services | ASIC, APRA | Guidance on AI in financial advice; model risk management; robo-advice regulations; CPS 230 operational resilience |
| Healthcare | TGA | AI/ML-based Software as a Medical Device (SaMD) regulation; pre-market assessment; post-market surveillance |
| Competition | ACCC | Digital Platform Services Inquiry (ongoing); algorithmic transparency; merger review considering AI market power |
| Employment | Fair Work Commission | AI in workplace monitoring; algorithmic management; consultation obligations for AI deployment |
| National Security | Department of Defence | Responsible AI in Defence & National Security program; AI ethics principles for defence applications |
5.6 Key Regulatory Bodies

| Body | Role |
| --- | --- |
| DISR | Department of Industry, Science & Resources; AI policy lead |
6. India
India, home to one of the world's largest and fastest-growing AI ecosystems, has adopted a primarily light-touch approach to AI governance while developing sector-specific regulations. As the world's most populous nation and a major AI talent hub, India's governance choices carry global significance.
6.1 National Strategy for Artificial Intelligence (2018)
NITI Aayog (National Institution for Transforming India) published India’s National Strategy for Artificial Intelligence in 2018, framing AI as a tool for inclusive economic growth and social development. The strategy introduced the concept of “#AIforAll” — AI for everyone.
Five Priority Sectors

| Sector | Rationale | Target Applications |
| --- | --- | --- |
| Healthcare | Address massive doctor-patient ratio gaps; improve rural healthcare access | AI diagnostics, telemedicine triage, drug discovery, epidemic prediction |
| Agriculture | Improve yields for 600M+ dependent population; address climate challenges | Crop prediction, soil analysis, pest detection, market price forecasting |
| Education | Personalize learning for diverse population; address teacher shortages | Adaptive learning, automated assessment, language translation, content localization |
| Smart Cities | Manage rapid urbanization; improve urban services and infrastructure | Traffic management, waste optimization, energy efficiency, public safety |
| Smart Mobility | Address transportation challenges in world's third-largest vehicle market | |
6.2 Digital Personal Data Protection Act (DPDPA, 2023)
India’s Digital Personal Data Protection Act, enacted in August 2023, is the country’s first comprehensive data protection law. While not AI-specific, it establishes foundational rules that directly impact AI systems processing personal data.
Key Provisions Affecting AI
Consent-Based Framework: Processing personal data requires consent or falls under “certain legitimate uses”; AI training on personal data generally requires consent
Purpose Limitation: Data collected for one purpose cannot be used for AI training for unrelated purposes without fresh consent
Data Fiduciary Obligations: Organizations using AI must ensure data accuracy, implement security safeguards, and delete data when purpose is fulfilled
Significant Data Fiduciaries: Large-scale processors (likely including major AI companies) face additional obligations including data protection impact assessments, independent auditors, and consent managers
Children’s Data: Processing children’s data requires verifiable parental consent; restrictions on tracking, behavioral monitoring, and targeted advertising to children
Data Protection Board: New Data Protection Board of India (DPBI) established as adjudicatory body with power to impose penalties up to INR 250 crore (~$30M)
Cross-border Transfers: Data can be transferred to any country not specifically restricted by the Central Government (blacklist approach rather than whitelist)
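The consent and purpose-limitation rules above translate, in engineering terms, into a gate in front of any AI training pipeline: a record enters the training set only if its data principal consented to that purpose. The sketch below is a hypothetical illustration; the record shape and the "ai_training" purpose label are assumptions, not statutory terms from the DPDPA.

```python
# Hypothetical DPDPA-style purpose check before model training.
# Field names and the purpose label are illustrative assumptions.

def may_use_for_training(record: dict) -> bool:
    """True only if the data principal consented to AI training."""
    consented = set(record.get("consented_purposes", []))
    return "ai_training" in consented

records = [
    {"id": 1, "consented_purposes": ["service_delivery", "ai_training"]},
    {"id": 2, "consented_purposes": ["service_delivery"]},  # fresh consent needed
]
training_set = [r for r in records if may_use_for_training(r)]
assert [r["id"] for r in training_set] == [1]
```

The point of the sketch is the direction of the default: data collected for one purpose is excluded from training unless consent for that purpose is affirmatively recorded.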
6.3 IndiaAI Mission (2024)
Launched in March 2024 with an allocation of INR 10,372 crore (~$1.25 billion), the IndiaAI Mission is the government’s comprehensive program to accelerate AI development and deployment.
Seven Pillars
IndiaAI Compute Capacity: Build 10,000+ GPU AI compute infrastructure through public-private partnership
IndiaAI Innovation Centre: Develop and deploy indigenous large multimodal models (LMMs) in Indian languages
IndiaAI Datasets Platform: Unified data platform providing quality datasets for AI innovation; non-personal and anonymized datasets
IndiaAI Application Development: Fund AI applications in critical sectors; startup support with up to INR 25 crore per project
IndiaAI FutureSkills: Train 14,000+ AI professionals; data and AI programs in 200+ tier-2/3 city colleges
IndiaAI Startup Financing: Financial support for AI startups through equity funding, loans, and grants
Safe & Trusted AI: Develop responsible AI tools, guidelines, and self-assessment frameworks for Indian context
6.4 MeitY AI Governance Advisories
The Ministry of Electronics and Information Technology (MeitY) has issued several advisories on AI governance:
March 2024 Advisory: Directed AI platforms operating in India to seek government approval before launching “unreliable” or “under-tested” AI models to Indian users (later clarified as advisory, not mandatory)
6.5 Key Regulatory Bodies

| Body | Role |
| --- | --- |
| Data Protection Board of India | DPDPA enforcement; being constituted, with establishment in progress (2024–2025) |
7. New Zealand
New Zealand has developed a distinctive approach to AI governance that emphasizes government accountability, indigenous considerations (Te Tiriti o Waitangi), and transparency in algorithmic decision-making. While smaller in scale than other Asia-Pacific nations, New Zealand’s frameworks have been internationally recognized for their innovation.
7.1 Algorithm Charter for Aotearoa New Zealand (2020)
The Algorithm Charter, launched in July 2020, is a voluntary commitment by government agencies to use algorithms in a transparent, accountable, and fair manner. It is one of the world’s first government-wide algorithm accountability frameworks.
Charter Commitments

| Commitment | Description | Implementation |
| --- | --- | --- |
| Transparency | Maintain transparency about how algorithms are used and what data drives them | Public algorithm registers; plain-language descriptions of systems; proactive disclosure |
| Partnership | Embed a Te Ao Māori perspective and uphold commitments under Te Tiriti o Waitangi | Māori data sovereignty considerations; consultation with iwi; cultural impact assessments |
| People | Focus on people; ensure that individuals and communities are not harmed by algorithmic decisions | Human oversight requirements; redress mechanisms; community engagement |
| Data | Ensure data is fit for purpose, protected, and used ethically | Data quality audits; privacy impact assessments; ethical data sourcing |
| Privacy, Ethics, Human Rights | Actively protect individual privacy, uphold ethics, and respect human rights | Privacy-by-design; ethical review processes; human rights impact assessments |
| Consultation | Seek public consultation and input on significant algorithmic decisions | Public submissions; community advisory groups; stakeholder workshops |
Te Tiriti Considerations: New Zealand’s unique governance feature is the explicit integration of Te Tiriti o Waitangi (Treaty of Waitangi) principles into AI governance. This includes Māori data sovereignty — the principle that Māori have inherent rights and interests in data collected about them, and that AI systems must respect and uphold these rights. The Te Mana Raraunga (Māori Data Sovereignty Network) has been influential in shaping these requirements.
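The "public algorithm register" implementation of the transparency commitment can be pictured as one structured, plain-language record per system. The schema below is a hypothetical sketch, not an official New Zealand government format; all field names and the example agency are illustrative assumptions.

```python
# Hypothetical schema for a public algorithm register entry of the kind
# the Charter's transparency commitment describes. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class AlgorithmRegisterEntry:
    name: str
    agency: str
    purpose: str                        # plain-language description
    data_sources: list = field(default_factory=list)
    human_oversight: str = "decisions reviewed by a case officer"
    redress_contact: str = "privacy.officer@example.govt.nz"  # placeholder

entry = AlgorithmRegisterEntry(
    name="Benefit eligibility triage",
    agency="Example Agency",
    purpose="Prioritise applications for manual review",
    data_sources=["application form", "prior case history"],
)
assert entry.data_sources[0] == "application form"
```

Publishing such records proactively is what distinguishes a register from ad-hoc disclosure: the public can see what data drives a system and who to contact for redress before any dispute arises.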
7.2 National AI Strategy (2024)
New Zealand released its national AI strategy in 2024, addressing both opportunities and risks of AI adoption across government and the private sector.
Government AI Use: Guidelines for responsible government adoption of AI; procurement standards; transparency requirements
Workforce: AI skills development; managing workforce transitions; supporting workers displaced by AI
Innovation: Supporting AI research and development; fostering AI startups; international collaboration
Trust & Safety: Building on Algorithm Charter; considering mandatory requirements for high-risk AI; public engagement on AI risks
Māori & Pacific Peoples: Ensuring AI benefits are equitably shared; preventing AI-driven discrimination; supporting indigenous data governance
7.3 Privacy Act 2020
New Zealand’s Privacy Act 2020 governs personal information handling and has been applied to AI contexts:
Information Privacy Principles: 13 principles governing collection, storage, use, and disclosure of personal information
Cross-border Disclosure: Requirements for international data transfers; must ensure comparable protections in receiving jurisdiction
Mandatory Breach Notification: Notification to Privacy Commissioner and affected individuals for breaches likely to cause harm
OPC Guidance on AI: Office of the Privacy Commissioner has issued specific guidance on AI and personal information use
8. ASEAN Regional Frameworks
The Association of Southeast Asian Nations (ASEAN) has developed regional AI governance frameworks to promote harmonization across its 10 member states. Led significantly by Singapore’s expertise, ASEAN’s approach balances innovation promotion with responsible AI development.
8.1 ASEAN Guide on AI Governance and Ethics (2024)
Building on the 2021 preliminary framework, the comprehensive ASEAN Guide on AI Governance and Ethics was adopted in 2024. This guide provides a reference for ASEAN member states to develop national AI governance frameworks.
Core Principles

| Principle | Description |
| --- | --- |
| Transparency & Explainability | AI systems should be transparent and explainable appropriate to the context |
| Fairness & Equity | AI should be designed to minimize bias and discrimination |
| Security & Safety | AI must be secure, robust, and safe throughout its lifecycle |
| Human-Centricity | AI should respect human rights and be designed for human benefit |
| Privacy & Data Governance | AI must comply with data protection laws and respect privacy |
| Accountability & Integrity | Clear accountability for AI outcomes; responsible development practices |
| Robustness & Reliability | AI systems should perform reliably and handle errors gracefully |
8.2 ASEAN Digital Economy Framework Agreement (DEFA)
The ASEAN DEFA, negotiated between 2023 and 2025, includes provisions on AI governance as part of the broader digital economy framework. Key AI-related provisions include:
Cross-border Data Flows: Facilitating data flows for AI development while maintaining privacy protections
AI Standards Alignment: Working toward compatible AI standards across member states
Digital Trade & AI Services: Reducing barriers to AI service trade within ASEAN
Capacity Building: Joint programs for AI skills development and technology transfer
8.3 Member State Developments
| Country | AI Strategy | Key Initiatives | Status |
| --- | --- | --- | --- |
| Thailand | National AI Strategy 2022–2027 | AI ethics guidelines; AI sandbox; smart agriculture and healthcare focus; PDPA enforcement | Active |
| Vietnam | National AI Strategy to 2030 | AI R&D centres; focus on Vietnamese language AI; smart manufacturing; cybersecurity AI regulations | Active |
| Indonesia | National AI Strategy 2020–2045 (Stranas KA) | Ethics and governance framework; AI in government services; PDP Law 2022; digital economy vision | Active |
| Malaysia | National AI Roadmap 2021–2025 | AI ethics principles; Centre for AI & Data Analytics; AI in public service; PDPA enforcement | Active |
| Philippines | National AI Strategy Roadmap (2021) | AI in agriculture, disaster response, education; data privacy (DPA 2012); proposed AI Development Act | Developing |
9. Emerging Asia-Pacific Regulators
Several additional Asia-Pacific jurisdictions are developing or implementing AI governance frameworks, contributing to the region’s evolving regulatory landscape.
9.1 Taiwan
AI Basic Act (proposed): Draft legislation introduced in 2024 for comprehensive AI governance; includes risk classification and transparency requirements
AI Action Plan (2018–2022): Investment in AI infrastructure, talent development, and industry applications
Personal Data Protection Act: Existing data protection law applied to AI; amendments under consideration for AI-specific provisions
AI Semiconductor Strategy: Leveraging Taiwan’s chip manufacturing leadership to influence global AI governance through supply chain considerations
9.2 Hong Kong SAR
Ethical AI Framework (HKMA, 2019): Guidance for AI in banking and financial services; fairness, transparency, accountability principles
PDPO Updates: Personal Data (Privacy) Ordinance applied to AI; guidance on AI and personal data by Privacy Commissioner
AI Ethics Guidelines (OGC, 2024): Guidelines for government use of AI; procurement standards; risk assessment requirements
9.3 Israel
AI Policy & Regulation (2023): Government-wide AI policy emphasizing innovation while addressing risks; sector-specific approach
Privacy Protection Regulations: Data protection regulations applied to AI; GDPR-influenced approach to personal data in AI
AI Innovation Hub: National AI program supporting responsible AI development; defense AI ethics frameworks
9.4 Saudi Arabia & UAE (Cross-Reference)
While covered in detail in the Africa & Middle East section, Saudi Arabia's SDAIA and the UAE's AI strategy are significant developments adjacent to the Asia-Pacific region. Both nations have established dedicated AI authorities and comprehensive national strategies.
10. Comparative Analysis
Comparing Asia-Pacific approaches reveals important patterns and distinctions that clarify the region's evolving AI governance landscape.
10.1 Regulatory Approach Comparison
| Dimension | Japan | South Korea | Singapore | Australia | India |
| --- | --- | --- | --- | --- | --- |
| Primary Approach | Soft law | Comprehensive legislation | Governance framework + testing | Transitioning to mandatory | Light-touch / sectoral |
| Risk Classification | No formal tiers | High-impact AI designation | Human-in-the-loop framework | High-risk (proposed) | No formal tiers |
| Binding Requirements | APPI (data) only | Yes (AI Framework Act) | PDPA (data); AI governance voluntary | Moving toward mandatory | DPDPA (data); AI advisories |
| AI-Specific Law | No | Yes (first comprehensive) | No (framework-based) | Under development | No (sector-specific) |
| Right to Explanation | Guidelines only | Yes (statutory) | Framework guidance | Proposed | Sector-specific |
| International Influence | G7 Hiroshima AI Process | EU-aligned model | ASEAN governance leadership | Five Eyes / OECD | Global South voice |
| AI Testing/Certification | AIST quality standards | Trustworthiness certification (planned) | AI Verify (world's first) | National AI Centre standards | Sector-specific testing |
10.2 Data Protection & AI Comparison
| Feature | Japan (APPI) | Korea (PIPA) | Singapore (PDPA) | Australia (Privacy Act) | India (DPDPA) |
| --- | --- | --- | --- | --- | --- |
| EU Adequacy | Yes | Yes | No (MOUs) | No (in discussion) | No |
| AI Training Data | Pseudonymized data use allowed | Pseudonymized/combined data provisions | Deemed consent for business improvement | Under review | Consent or legitimate use |
| Automated Decision Rights | Limited | Right to refuse automated decisions | Guidance-based | Proposed (Privacy Act review) | Not yet specified |
| Max Penalty | ¥100M (~$670K) | KRW 500M (~$370K) + criminal | SGD 1M or 10% revenue | AUD 50M+ per contravention | INR 250 crore (~$30M) |
11. Regional Trends & Future Outlook
11.1 Key Trends
Shift Toward Mandatory Regulation
Multiple Asia-Pacific nations are moving from voluntary frameworks toward binding requirements. South Korea’s AI Framework Act (2025) set a regional precedent. Australia is actively developing mandatory guardrails. Japan, while preferring soft law, is exploring binding requirements for specific AI applications. This trend is likely to accelerate as AI systems become more powerful and pervasive.
Regional Harmonization Efforts
ASEAN’s governance framework, the DEFA negotiations, and bilateral cooperation agreements (Japan-Singapore, Korea-Australia, India-ASEAN) point toward increasing regional coordination. While full harmonization remains unlikely given diverse governance traditions, interoperability of AI standards is becoming a priority. The Hiroshima AI Process has provided a global platform for Asia-Pacific influence.
Practical Testing & Certification
Singapore’s AI Verify has pioneered the concept of AI testing frameworks, and this approach is spreading across the region. South Korea plans AI trustworthiness certification. Japan is developing testing standards through AIST. Australia’s National AI Centre is working on assessment tools. The Asia-Pacific may emerge as the global leader in practical AI governance tools.
Balancing Innovation & Regulation
All Asia-Pacific jurisdictions explicitly prioritize economic competitiveness alongside AI safety. Unlike some interpretations of the EU model, Asia-Pacific frameworks generally include innovation promotion as a core objective. This “regulate to enable” philosophy distinguishes the region and may attract AI investment from companies seeking balanced regulatory environments.
Generative AI Response
The rapid deployment of generative AI has prompted regional responses. South Korea’s AI Framework Act was partly motivated by ChatGPT’s emergence. India issued specific advisories. Singapore and Japan have developed generative AI guidance. Expect generative AI-specific provisions to be integrated into existing frameworks across the region.
11.2 Future Developments to Watch
South Korea AI Framework Act Implementation (2026): Full enforcement begins January 2026; presidential decrees will define high-impact AI categories; implementation will serve as a model for other Asian nations
Australia Mandatory Guardrails: Expected legislation for high-risk AI; timing dependent on consultation outcomes and political priorities
Japan Potential Legislation: Growing domestic discussion about need for binding AI rules; potential movement from soft law to hard law
India Digital India Act: Comprehensive replacement for IT Act 2000 expected to include significant AI provisions
ASEAN DEFA Finalization: Trade agreement with AI governance provisions; standardization across Southeast Asia
Cross-border AI Governance: Bilateral and multilateral agreements on AI model testing, data sharing, and incident notification
Foundation Model Regulation: Regional responses to large language models and multimodal AI systems