1. Overview of Canada's AI Governance Approach
Canada was one of the first countries to develop a national AI strategy (2017) and remains a global leader in AI research. However, its legislative approach to AI regulation has been slower and more cautious than that of the EU or China, with the proposed Artificial Intelligence and Data Act (AIDA) still making its way through the legislative process.
Key Characteristics: Canada's approach combines: (1) a pioneering national AI strategy focused on research and talent, (2) existing privacy and human rights frameworks applied to AI, (3) a federal government directive specifically governing AI in government decisions, (4) proposed but not yet enacted comprehensive AI legislation (AIDA), and (5) voluntary industry codes. The federal-provincial division of powers also means provincial privacy laws (especially Quebec's Law 25) independently regulate AI.
Timeline of AI Governance in Canada
| Date | Development | Significance |
| --- | --- | --- |
| March 2017 | Pan-Canadian AI Strategy (PCAIS) launched | World's first national AI strategy; $125M initial investment; administered by CIFAR; funded the three national AI institutes (Vector Institute, Mila, Amii) |
| April 2019 | Directive on Automated Decision-Making (DADM) | Federal departments and agencies must assess and mitigate AI risks in government decisions; Algorithmic Impact Assessment tool |
| June 2022 | Bill C-27 introduced (includes AIDA) | First comprehensive federal AI legislation proposed; part of the broader digital charter |
| September 2022 | Quebec Law 25 takes effect (Phase 1) | Provincial privacy law with AI-relevant provisions; automated decision-making transparency requirements |
| March 2023 | Companion Document to AIDA published | Government clarification of AIDA's intent, addressing industry and civil society concerns |
| September 2023 | Voluntary Code of Conduct for Generative AI | Industry voluntary code; signatories include major Canadian AI companies |
| 2024 | Pan-Canadian AI Strategy Phase 2 ($2.4B) | Major funding increase; AI compute infrastructure; safety research; commercialization support |
| 2024-2025 | AIDA parliamentary progress | Committee review and amendments; passage timeline uncertain due to political dynamics |
2. Artificial Intelligence and Data Act (AIDA)
The Artificial Intelligence and Data Act (AIDA) is Part 3 of Bill C-27 (Digital Charter Implementation Act, 2022). If enacted, AIDA would be Canada's first dedicated AI law, establishing requirements for "high-impact" AI systems and creating criminal offenses for harmful AI use.
Key Provisions
| Provision | Description | Section |
| --- | --- | --- |
| High-Impact AI Systems | Establishes a framework for regulating AI systems that meet criteria for "high impact"; the criteria are to be defined in regulations rather than in the Act itself. Expected to cover employment, essential services, health, financial services, law enforcement, and critical infrastructure. | s. 5 |
| Risk Assessment & Mitigation | Persons responsible for an AI system must assess whether it is a high-impact system; persons responsible for high-impact systems must establish measures to identify, assess, and mitigate risks of harm or biased output | ss. 7-8 |
| Monitoring | Must establish measures to monitor compliance with the mitigation measures and their effectiveness on an ongoing basis | s. 9 |
| Transparency | Must publish on a publicly available website a plain-language description of the system, how it is used, the types of content it generates, risk mitigation measures, and any other prescribed information | s. 11 |
| Record-Keeping | Must keep records related to compliance for the prescribed period | s. 10 |
| Notification of Material Harm | A responsible person who determines that use of the system results, or is likely to result, in material harm must notify the Minister | s. 12 |
| General-Purpose AI | Under the government's proposed 2023 amendments, persons who make general-purpose AI systems available would have to publish a plain-language description of the systems' capabilities and limitations | Proposed amendments |
| AI and Data Commissioner | Establishes an AI and Data Commissioner, a senior official within the Minister's department, to assist with administering and enforcing AIDA | s. 33 |
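Taken together, sections 7 to 12 describe a documentation and monitoring lifecycle for each high-impact system. The sketch below illustrates one way a responsible person might organize that lifecycle internally; the class and field names are hypothetical, since AIDA leaves the detailed content of assessments, mitigation measures, and records to future regulations.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class HighImpactSystemRecord:
    """Illustrative compliance record for one high-impact AI system under AIDA.

    Field names are hypothetical: AIDA leaves the precise content of
    assessments, mitigation measures, and records to future regulations.
    """
    system_name: str
    plain_language_description: str        # published under the transparency obligation
    intended_uses: list[str]
    identified_risks: dict[str, str]       # risks of harm or biased output, with severity notes
    mitigation_measures: list[str]         # measures established to reduce those risks
    monitoring_plan: str                   # ongoing monitoring of mitigation effectiveness
    records_retained_until: date           # record-keeping for the prescribed period
    material_harm_notified: bool = False   # set once the regulator has been notified

    def must_notify(self, harm_is_material: bool) -> bool:
        """Notification is triggered when use results, or is likely to result, in material harm."""
        return harm_is_material and not self.material_harm_notified
```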
Criminal Offenses
AIDA creates three criminal offences, which would be among the first criminal penalties aimed specifically at AI-related misconduct in any Western country:
| Offense | Description | Maximum Penalty | Section |
| --- | --- | --- | --- |
| Illegally Obtained Data for AI | Possessing or using personal information, knowing or believing it was obtained through the commission of an offence, in order to design, develop, use, or make available an AI system | On indictment: fine up to the greater of $25M and 5% of gross global revenue (organizations); fine at the court's discretion, imprisonment up to five years less a day, or both (individuals) | s. 38 |
| Making AI Available Likely to Cause Serious Harm | Making an AI system available for use, knowing or being reckless as to whether its use is likely to cause serious physical or psychological harm or substantial property damage, where such harm or damage results | Same penalties, set out in s. 40 | s. 39(a) |
| AI Fraud Causing Economic Loss | Making an AI system available for use with intent to defraud the public and cause substantial economic loss to an individual, where that loss results | Same penalties, set out in s. 40 | s. 39(b) |
Criticisms of AIDA
- Framework legislation concern: Most details (including the definition of "high-impact") are left to regulations — critics argue this gives too much discretion to the executive with insufficient parliamentary oversight
- Scope uncertainty: Without knowing what "high-impact" means, companies cannot assess whether they are covered
- Lack of individual rights: Unlike GDPR or the EU AI Act, AIDA does not create individual rights to contest AI decisions or seek redress
- No explicit fundamental rights framework: Critics note AIDA focuses on harm and bias but doesn't anchor its protections in fundamental rights language
- Commissioner powers: Some argue the Commissioner needs stronger enforcement powers and greater independence
Current Status (2025): AIDA's progress is tied to the broader Bill C-27. The bill has faced political challenges including a change in government dynamics and competing legislative priorities. As of early 2025, the bill's timeline remains uncertain. The government has published a detailed "Companion Document" clarifying its regulatory intentions, but the bill must still complete committee review and parliamentary votes. A federal election could further delay or reset the process.
3. Digital Charter Implementation Act (Bill C-27)
AIDA is Part 3 of the broader Bill C-27 — Digital Charter Implementation Act, 2022. The full bill contains three parts that together reshape Canada's digital governance:
| Part | Name | Purpose | Replaces |
| --- | --- | --- | --- |
| Part 1 | Consumer Privacy Protection Act (CPPA) | Modernizes private-sector privacy law; new individual rights; stronger enforcement; maximum penalties of 5% of global revenue or $25M | PIPEDA Part 1 |
| Part 2 | Personal Information and Data Protection Tribunal Act | Creates a new tribunal to hear appeals from Privacy Commissioner decisions and impose administrative monetary penalties | New institution |
| Part 3 | Artificial Intelligence and Data Act (AIDA) | AI-specific regulation; high-impact AI system requirements; criminal offences for harmful AI | New legislation |
CPPA AI-Relevant Provisions (Part 1)
The CPPA would strengthen privacy protections that affect AI:
- Automated Decision Systems: Organizations using automated decision systems must, on request, explain the prediction, recommendation, or decision made, and how personal information was used
- De-identification Standards: New framework for de-identification; organizations may use de-identified data for socially beneficial purposes including AI research
- Legitimate Interest: New lawful basis for processing that mirrors GDPR legitimate interests — relevant for AI training data
- Algorithmic Transparency: Individual right to an explanation of automated decision-making; must be meaningful and include how personal information influenced the decision
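As an illustration of the explanation right, the sketch below shows one way an organization might assemble an explanation-on-request for an automated decision. The record structure, field names, and the `explain_on_request` helper are assumptions made for illustration; the CPPA does not prescribe a format.

```python
from dataclasses import dataclass

@dataclass
class DecisionExplanation:
    """Illustrative explanation of an automated decision, assembled on request.

    The CPPA would require explaining the prediction, recommendation, or
    decision and how personal information was used; it does not prescribe a
    format, so these fields are assumptions.
    """
    decision: str                               # e.g. "credit application declined"
    personal_information_used: dict[str, str]   # data elements and where they came from
    principal_factors: list[str]                # factors that most influenced the outcome
    review_contact: str                         # where to direct a challenge or correction request


def explain_on_request(decision_log: dict) -> DecisionExplanation:
    """Build an explanation from an internal decision log (hypothetical schema)."""
    return DecisionExplanation(
        decision=decision_log["outcome"],
        personal_information_used=decision_log["inputs"],
        principal_factors=decision_log["top_factors"],
        review_contact="privacy-officer@example.org",
    )
```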
4. PIPEDA & Automated Decision-Making
The Personal Information Protection and Electronic Documents Act (PIPEDA), in force since 2001, is Canada's current federal private-sector privacy law. While it predates modern AI, the Office of the Privacy Commissioner of Canada (OPC) has interpreted PIPEDA to apply to AI and automated decision-making systems.
PIPEDA Principles Applied to AI
| PIPEDA Principle | Schedule 1 Ref. | Application to AI |
| --- | --- | --- |
| Accountability | Principle 1 | Organizations are responsible for personal information under their control, including when it is processed by AI systems or third-party AI providers |
| Identifying Purposes | Principle 2 | Purposes for which personal information is collected for AI must be identified at or before the time of collection |
| Consent | Principle 3 | Meaningful consent is required for AI processing of personal information; OPC guidance indicates that AI profiling and scoring generally require express consent |
| Limiting Collection | Principle 4 | AI training data collection must be limited to what is necessary for the identified purposes; bulk data collection for AI training may conflict with this principle |
| Accuracy | Principle 6 | Personal information used by AI for decision-making must be accurate, complete, and up to date; particularly important for AI profiling |
| Openness | Principle 8 | Organizations must be open about their AI practices and make information about AI use readily available |
| Individual Access | Principle 9 | Individuals have the right to access personal information held about them and to challenge its accuracy; this extends to AI-derived profiles and scores |
OPC AI Guidance
The OPC has published key guidance documents on AI:
- Principles for Responsible, Trustworthy and Privacy-Protective Generative AI (2024): Guidance for organizations developing or using generative AI systems
- Guidance on Inappropriate Data Practices (2018, updated): Applies to AI data collection and use
- Joint Statement on Data Scraping (2023): Joint statement with other privacy authorities on web scraping for AI training
5. Directive on Automated Decision-Making (Federal Government)
The Directive on Automated Decision-Making (DADM), issued by the Treasury Board of Canada Secretariat and effective April 1, 2019, is one of the world's first government-wide policies specifically governing AI use in public administration. It applies to all federal government departments and agencies.
Algorithmic Impact Assessment (AIA)
The DADM's cornerstone is the Algorithmic Impact Assessment tool — a questionnaire-based framework that federal institutions must complete before deploying automated decision systems. Based on the assessment, the system is classified into one of four impact levels:
| Impact Level | Description | Requirements | Examples |
| --- | --- | --- | --- |
| Level I (Low) | Little to no impact on the rights or interests of individuals | Notice that automation is being used; basic documentation | Automated email sorting; basic form validation |
| Level II (Moderate) | Moderate impact on rights, health, economic interests, or well-being | Notice plus explanation of the decision; human review available on request; quality assurance testing | Government benefit eligibility screening; permit processing |
| Level III (High) | High impact; affects legal status, liberty, health, or financial standing | All Level II requirements plus human review of all decisions; ongoing monitoring; bias testing; published documentation | Immigration decisions; tax assessments; regulatory compliance |
| Level IV (Very High) | Very high impact on liberty, fundamental rights, or life, or extensive economic impact | All Level III requirements plus external peer review; public notice and comment period; enhanced oversight | Criminal justice risk assessment; national security determinations |
Key Requirements
- Transparency: Departments must publish descriptions of automated decision systems, release source code or models as open source where possible, and provide explanations of decisions
- Human Oversight: Level III and IV systems require human review of all decisions; lower levels require human review on request
- Quality Assurance: Testing for bias, accuracy, and reliability before deployment and on an ongoing basis
- Recourse: Individuals affected by automated decisions must have access to meaningful recourse mechanisms
- Data Governance: Responsible management of training data including quality, relevance, and bias assessment
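One way to read the DADM is as a mapping from impact level to a cumulative set of safeguards. The sketch below encodes the tiers from the table and requirements above as a simple lookup; it is an illustration only, not the official AIA tool, which derives the impact level from a scored questionnaire.

```python
from enum import IntEnum

class ImpactLevel(IntEnum):
    """DADM impact levels assigned by the Algorithmic Impact Assessment (AIA)."""
    LOW = 1
    MODERATE = 2
    HIGH = 3
    VERY_HIGH = 4

# Safeguards added at each level, paraphrased from the table above.
# The real AIA derives the level from a weighted questionnaire; this
# lookup is purely illustrative.
REQUIREMENTS = {
    ImpactLevel.LOW:       ["notice of automation", "basic documentation"],
    ImpactLevel.MODERATE:  ["explanation of decision", "human review on request",
                            "quality assurance testing"],
    ImpactLevel.HIGH:      ["human review of all decisions", "ongoing monitoring",
                            "bias testing", "published documentation"],
    ImpactLevel.VERY_HIGH: ["external peer review", "public notice and comment",
                            "enhanced oversight"],
}

def safeguards_for(level: ImpactLevel) -> list[str]:
    """Return all safeguards that apply at a given level (each tier adds to the ones below)."""
    required: list[str] = []
    for tier in ImpactLevel:
        if tier <= level:
            required.extend(REQUIREMENTS[tier])
    return required

# Example: a Level III system inherits the Level I and II safeguards as well.
print(safeguards_for(ImpactLevel.HIGH))
```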
Global Influence: Canada's DADM and AIA have been widely cited as models for government AI governance. The AIA tool is open source and has been adapted by other governments (including the Netherlands and New Zealand). The approach of tiered requirements based on impact level influenced the EU AI Act's risk-based framework.
6. Voluntary Code of Conduct for Generative AI
In September 2023, the Government of Canada published the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. The code is intended as an interim measure while AIDA makes its way through the legislative process.
Code Commitments
| Commitment | Description |
| --- | --- |
| Accountability | Designate senior officials responsible for compliance; establish governance structures for AI oversight |
| Safety | Conduct safety evaluations, including red-teaming, before and after public deployment; identify and mitigate risks |
| Fairness & Equity | Assess and mitigate risks of harmful bias; test for discriminatory outputs; ensure equitable access |
| Transparency | Publicly report on AI capabilities, limitations, and appropriate/inappropriate use cases; publish safety evaluations |
| Human Oversight | Implement appropriate human oversight mechanisms; ensure the ability to override or shut down systems |
| Content Provenance | Develop and deploy mechanisms to label AI-generated content; support content authenticity standards (e.g., C2PA) |
| Child Safety | Deploy reasonable measures to protect children from AI-generated harmful content |
| Collaboration | Share safety-relevant information with government and industry peers; participate in standards development |
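Of these commitments, content provenance is the most directly implementable in code. The sketch below shows the general idea of attaching a machine-readable provenance record to generated output; the dictionary fields are illustrative assumptions and stand in for a real standard such as a signed C2PA manifest.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_generated_content(content: bytes, model_name: str) -> dict:
    """Attach an illustrative provenance record to AI-generated content.

    A production system would emit a signed manifest under a standard such
    as C2PA; this dictionary only sketches the kind of information involved.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": model_name,
        "ai_generated": True,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    record = label_generated_content(b"example generated text", "example-model-v1")
    print(json.dumps(record, indent=2))
```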
7. Provincial AI Regulations
Canada's federal system means provincial privacy laws also regulate AI. Three provinces have "substantially similar" private-sector privacy legislation that applies instead of PIPEDA within their jurisdictions: Quebec, British Columbia, and Alberta.
Quebec — Law 25 (Act Respecting the Protection of Personal Information in the Private Sector)
Quebec's Law 25 (formerly Bill 64), which took effect in phases beginning in September 2022, contains the most significant AI-related provisions of any provincial privacy law:
| Provision | Description | Phase |
| --- | --- | --- |
| Automated Decision Notification | Must inform individuals, at the time of or before the decision, when a decision based exclusively on automated processing is made about them | Phase 2 (Sept 2023) |
| Right to Explanation | Individuals have the right to be informed of the personal information used in the decision, the reasons and factors that led to it, and their right to have the decision reviewed by a person | Phase 2 (Sept 2023) |
| Privacy Impact Assessments | Mandatory for any project involving the collection, use, or disclosure of personal information, including AI projects | Phase 1 (Sept 2022) |
| Profiling | Must inform individuals if profiling technology is used to collect their personal information | Phase 3 (Sept 2024) |
| Consent for Profiling | Must obtain express consent for the collection and use of personal information for profiling purposes | Phase 3 (Sept 2024) |
| Data Portability | Right to data portability in a structured, commonly used format; relevant for AI data ecosystems | Phase 3 (Sept 2024) |
| Maximum Penalties | Up to $25 million CAD or 4% of worldwide turnover (private sector) | Phase 1 |
British Columbia & Alberta
- BC PIPA: The Personal Information Protection Act (BC) applies to private sector AI use in British Columbia; the BC OIPC has issued guidance on AI and privacy obligations
- Alberta PIPA: Alberta's Personal Information Protection Act similarly governs AI use in the private sector within Alberta
- Both provinces follow PIPEDA-like principles but have independent enforcement through their own Privacy Commissioners
8. Pan-Canadian AI Strategy
The Pan-Canadian Artificial Intelligence Strategy (PCAIS), launched in March 2017, was the world's first national AI strategy. Administered by CIFAR (Canadian Institute for Advanced Research), it has positioned Canada as a global AI research leader and influenced national AI strategies worldwide.
Phase 1 (2017-2022): $125 Million
- Funded three national AI institutes: Vector Institute (Toronto), Mila (Montreal), Amii (Edmonton)
- Supported AI research chairs and talent attraction (450+ researchers)
- Launched CIFAR AI Chairs program (86 Canada CIFAR AI Chairs appointed)
- Canada became home to two of the three "godfathers of deep learning", Geoffrey Hinton (Vector) and Yoshua Bengio (Mila), as well as reinforcement learning pioneer Richard Sutton (Amii)
Phase 2 (2024-2030): $2.4 Billion
Announced in Budget 2024, Phase 2 represents a major scale-up:
- $2 billion for AI compute infrastructure and access for Canadian researchers, startups, and industry
- $200 million to accelerate AI adoption and commercialization in key sectors, delivered through regional development agencies
- $100 million for the NRC Industrial Research Assistance Program to help small and medium-sized businesses adopt AI
- $50 million for skills training for workers in sectors disrupted by AI
- $50 million for a Canadian AI Safety Institute (launched November 2024), cooperating with peer bodies such as the UK AI Safety Institute
National AI Institutes
| Institute | Location | Focus | Key Researchers |
| --- | --- | --- | --- |
| Vector Institute | Toronto, Ontario | Deep learning, machine learning, health AI | Geoffrey Hinton (Emeritus), Jimmy Ba |
| Mila - Quebec AI Institute | Montreal, Quebec | Deep learning, reinforcement learning, AI safety, responsible AI | Yoshua Bengio, Aaron Courville, Hugo Larochelle |
| Amii | Edmonton, Alberta | Reinforcement learning, machine learning applications | Richard Sutton, Michael Bowling |
9. Enforcement & Key Cases
| Date | Case | Issue | Outcome |
| --- | --- | --- | --- |
| 2020-2021 | Clearview AI (OPC Investigation) | Clearview AI scraped billions of images from the internet to build a facial recognition database, which it offered to Canadian law enforcement | Joint investigation with the Quebec, BC, and Alberta commissioners found Clearview violated PIPEDA by collecting and using biometric information without consent; Clearview ceased offering services in Canada |
| 2021 | Immigration Automated Triage | IRCC (Immigration, Refugees and Citizenship Canada) use of an automated triage system for temporary resident applications was challenged | A Citizen Lab report revealed the system sorted applications by country of origin, raising fairness concerns; IRCC revised the system and conducted an Algorithmic Impact Assessment |
| 2022 | RCMP Facial Recognition | OPC investigation found the RCMP used Clearview AI facial recognition without proper authorization or a privacy impact assessment | OPC found the RCMP violated the Privacy Act; the RCMP agreed to strengthen its AI governance and published a use-of-AI policy |
| 2023 | OpenAI / ChatGPT (OPC Investigation) | OPC investigation into OpenAI's collection and use of personal information to train ChatGPT models | Joint investigation with the Quebec, British Columbia, and Alberta privacy commissioners; ongoing as of early 2025 |
10. Canada's International AI Role
- GPAI (Global Partnership on AI): Canada co-founded GPAI alongside France in 2020; now has 29 member countries; Secretariat supported by OECD; Canada chairs multiple working groups
- OECD AI Principles: Canada was instrumental in developing the OECD AI Principles (2019), the first intergovernmental standard on AI governance
- G7 Hiroshima AI Process: Canada participated in developing the Hiroshima Process International Guiding Principles for AI and Code of Conduct for AI Developers (2023)
- Bletchley Declaration: Canada was among the 28 signatories of the Bletchley Declaration on AI Safety (November 2023)
- UNESCO AI Ethics: Canada supported the UNESCO Recommendation on the Ethics of AI (2021)
- AI Safety Network: Canada established a Canadian AI Safety Institute and participates in the international network of AI safety institutes
11. Future Developments
| Development | Expected | Impact |
| --- | --- | --- |
| AIDA Enactment | 2025-2026 (uncertain) | First comprehensive federal AI law; high-impact system requirements; criminal penalties |
| CPPA Enactment | With AIDA (part of Bill C-27) | Modernized privacy law with automated decision-making provisions; GDPR-like penalties |
| Canadian AI Safety Institute | Launched November 2024; scaling up through 2025 | Frontier AI model evaluation; safety research; international cooperation |
| AI Compute Infrastructure | 2024-2027 | $2B investment in compute access; sovereign AI capacity |
| Quebec Law 25 Full Implementation | Completed September 2024 | All phases in effect, including profiling consent and data portability |
| DADM Update | 2025 | Expected update to reflect generative AI and LLM use in government |