A
- AI Act (EU)
- Regulation (EU) 2024/1689, the European Union’s comprehensive AI regulation establishing a risk-based framework for AI systems. Adopted in June 2024 and in force since 1 August 2024, with phased application through 2027. See EU AI Act.
- AI Literacy
- The knowledge and skills required to understand, use, and critically evaluate AI systems. The EU AI Act (Article 4) requires providers and deployers to ensure sufficient AI literacy among their staff.
- AI Management System (AIMS)
- The set of interrelated elements of an organization that establish policies and objectives, and the processes to achieve those objectives, with regard to AI. ISO/IEC 42001:2023 provides the certifiable standard. See Technical Standards.
- AI Safety
- Research and practices aimed at ensuring AI systems do not cause unintended harm. Encompasses technical safety (robustness, alignment) and governance safety (oversight, controls, deployment decisions).
- Algorithmic Bias
- Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. Can arise from training data, model design, or deployment context.
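Unfair outcomes of the kind described above are often quantified with group fairness metrics. The sketch below computes one common metric, the demographic parity difference (the gap in favorable-outcome rates between two groups); the data and the metric choice are illustrative, not drawn from any particular regulation.

```python
# Sketch: demographic parity difference, one common bias metric.
# All data here is hypothetical, for illustration only.

def selection_rate(outcomes):
    """Fraction of positive (favorable) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

# 1 = favorable decision (e.g., loan approved), 0 = unfavorable
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # selection rate 0.375

# Demographic parity difference: gap in selection rates between groups.
# A value near 0 suggests parity; a large gap flags potential bias,
# though parity alone neither establishes nor rules out fairness.
dpd = selection_rate(group_a) - selection_rate(group_b)
print(f"Demographic parity difference: {dpd:.3f}")  # 0.375
```

Note that a gap can reflect training data, model design, or deployment context, as the definition says; the metric only surfaces the disparity, not its cause.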
- Algorithmic Impact Assessment (AIA)
- A systematic evaluation of the potential effects of an algorithmic system on individuals, groups, and society. Required in some jurisdictions (e.g., Canada’s Directive on Automated Decision-Making). See Canada.
- Artificial General Intelligence (AGI)
- Hypothetical AI that can understand, learn, and apply knowledge across any intellectual task at a human level or beyond. No consensus definition; relevant to governance of frontier AI and long-term safety planning.
- Automated Decision-Making (ADM)
- Decisions made by technological means without human involvement. GDPR Article 22 provides individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. See GDPR & AI.
- Autonomous Weapon System
- A weapon system that, once activated, can select and engage targets without further human intervention. Subject to ongoing international debate at the UN CCW. See Military AI.
B
- Biometric Data
- Personal data resulting from specific technical processing relating to physical, physiological, or behavioral characteristics of a person. Subject to heightened regulation under GDPR (Article 9) and the EU AI Act (prohibited and high-risk categories). See Facial Recognition.
- Black Box
- An AI system whose internal workings are not transparent or interpretable to users or auditors. A central concern in governance around explainability and accountability.
C
- CE Marking
- European conformity marking indicating a product meets EU safety, health, and environmental requirements. High-risk AI systems under the EU AI Act will require CE marking before placement on the EU market.
- Conformity Assessment
- The process of demonstrating that specified requirements relating to a product, process, system, person, or body are fulfilled. The EU AI Act requires conformity assessment for high-risk AI systems.
- Constitutional AI
- An alignment approach developed by Anthropic where AI systems are trained using a set of principles (a “constitution”) to guide their behavior, rather than relying solely on human feedback.
D
- Data Protection Impact Assessment (DPIA)
- A process to help identify and minimize the data protection risks of a project. Required under GDPR Article 35 for processing likely to result in high risk to individuals. Particularly relevant for AI systems processing personal data.
- Deep Synthesis
- Chinese regulatory term for technology that uses deep learning, virtual reality, and other synthesis algorithms to generate text, images, audio, video, and virtual scenes. Regulated under the Deep Synthesis Provisions (2023). See China.
- Deepfake
- Synthetic media created using AI to manipulate or generate visual and audio content with a high potential to deceive. Subject to transparency requirements under the EU AI Act and various national laws.
- Deployer
- Under the EU AI Act, any natural or legal person that uses an AI system under its authority, except where the system is used in the course of a personal non-professional activity.
E
- Emotion Recognition System
- An AI system for identifying or inferring emotions or intentions of natural persons on the basis of their biometric data. Prohibited in certain contexts (workplaces, education) under the EU AI Act.
- Explainability
- The degree to which the internal mechanics of an AI system can be explained in human terms. Distinguished from interpretability (understanding the model itself) and transparency (disclosing system properties).
F
- Foundation Model
- A large AI model trained on broad data that can be adapted to a wide range of downstream tasks. The EU AI Act regulates foundation models under the closely related legal category of “general-purpose AI models”.
- Frontier AI
- The most capable AI models at the cutting edge of performance. Subject to enhanced governance attention through the UK AI Safety Institute, Bletchley Declaration, and Hiroshima Process.
G
- General-Purpose AI (GPAI)
- Under the EU AI Act, an AI model that displays significant generality, is capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications. Subject to transparency and documentation requirements.
- GPAI with Systemic Risk
- Under the EU AI Act, a GPAI model classified as having systemic risk if it has high-impact capabilities (evaluated based on training compute exceeding 10^25 FLOPs or Commission designation). Subject to additional obligations including adversarial testing and incident reporting.
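The 10^25 FLOP threshold above can be checked against an estimate of a model’s training compute. A widely used heuristic (not part of the Act) approximates training compute as roughly 6 FLOPs per parameter per training token; the parameter and token counts below are hypothetical.

```python
# Sketch: checking an estimated training run against the EU AI Act's
# 10^25 FLOP presumption threshold for GPAI with systemic risk.
# The 6 * N * D approximation is a common heuristic, not legal text;
# the model figures below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_params, n_tokens):
    """Approximate training compute as ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

# Hypothetical model: 70 billion parameters, 15 trillion training tokens
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated compute: {flops:.2e} FLOPs")
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)
```

The Act’s threshold is a rebuttable presumption: the Commission can also designate a model as having systemic risk on other grounds, so an estimate below 10^25 FLOPs is not conclusive.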
H
- Hallucination
- When an AI system generates content that is factually incorrect, fabricated, or inconsistent with its training data, presented with apparent confidence. A key challenge for governance of generative AI systems.
- Harmonised Standards
- European standards developed by CEN, CENELEC, or ETSI following a request from the European Commission. Compliance with harmonised standards under the EU AI Act creates a presumption of conformity with the regulation’s requirements.
- High-Risk AI System
- Under the EU AI Act, an AI system that falls within specific use cases listed in Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice) or is a safety component of a product covered by EU harmonisation legislation in Annex I.
- Human-in-the-Loop (HITL)
- A design paradigm where a human operator is actively involved in the decision-making process of an AI system, typically able to intervene before any action is taken.
- Human-on-the-Loop (HOTL)
- A design paradigm where a human operator monitors AI system operation and can intervene to override or halt the system, but is not required to approve each individual decision.
- Human-out-of-the-Loop (HOOTL)
- A design paradigm where the AI system operates fully autonomously without human intervention capability during operation. Most controversial in military applications (autonomous weapons).
I
- Impact Assessment
- A systematic process for evaluating the potential consequences of an AI system on individuals, groups, organizations, or society. Various types include Algorithmic Impact Assessments, Data Protection Impact Assessments, and Fundamental Rights Impact Assessments.
L
- LAWS (Lethal Autonomous Weapon Systems)
- Weapon systems that can independently identify, select, and engage human targets without meaningful human control. Subject to ongoing debate at the UN Convention on Certain Conventional Weapons. See Military AI.
- Large Language Model (LLM)
- A type of foundation model trained on large text datasets that can generate, analyze, and transform text. Examples include GPT-4, Claude, Gemini, and Llama. Governed as GPAI models under the EU AI Act.
M
- Meaningful Human Control
- The principle that humans must retain sufficient understanding and control over AI systems to be morally responsible for their use. Central concept in military AI governance and autonomous weapons debates.
- Model Card
- A documentation framework (proposed by Mitchell et al., 2019) that provides structured information about an AI model including intended use, performance characteristics, limitations, and ethical considerations.
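A model card can be represented as a structured record. The sketch below loosely follows the section headings proposed by Mitchell et al. (2019); the field names and example values are illustrative, not a fixed schema.

```python
# Sketch: a model card as a structured record, loosely following the
# sections proposed by Mitchell et al. (2019). Fields and values are
# illustrative; real model cards vary in structure and depth.
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    model_details: str          # developer, version, architecture, license
    intended_use: str           # in-scope uses and users
    out_of_scope_use: str       # uses the model is not suited for
    metrics: dict = field(default_factory=dict)  # evaluation results
    limitations: str = ""       # known failure modes
    ethical_considerations: str = ""

card = ModelCard(
    model_details="ExampleLM v1.0, 7B-parameter transformer (hypothetical)",
    intended_use="Drafting summaries for human review",
    out_of_scope_use="Fully automated decisions about individuals",
    metrics={"summarization_rougeL": 0.41},
    limitations="Degrades on non-English input; may fabricate citations",
)
print(asdict(card)["intended_use"])
```

Serializing the card (here via `asdict`) makes it easy to publish alongside the model and to audit for completeness.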
N
- NIST AI RMF
- The National Institute of Standards and Technology AI Risk Management Framework (AI RMF 1.0, January 2023). A voluntary framework with four core functions: Govern, Map, Measure, Manage. See Technical Standards.
- Notified Body
- An organization designated by an EU member state to assess the conformity of certain products before they are placed on the market. Under the EU AI Act, certain high-risk AI systems require third-party conformity assessment by a notified body.
O
- OECD AI Principles
- The OECD Recommendation on Artificial Intelligence (2019, updated 2024), the first intergovernmental AI standard. Five principles: inclusive growth, human-centred values, transparency, robustness, and accountability. Adopted by 46 countries. See International Frameworks.
P
- Post-Market Monitoring
- Systematic activities by AI providers to collect and analyze data on the performance of AI systems after deployment. Required under the EU AI Act for high-risk AI systems.
- Prohibited AI Practices
- Under the EU AI Act (Article 5), AI applications that are banned outright, including social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions), exploitation of vulnerabilities, subliminal manipulation, and emotion recognition in workplaces and education.
- Provider
- Under the EU AI Act, any natural or legal person that develops an AI system or has an AI system developed and places it on the market or puts it into service under its own name or trademark.
R
- Red Teaming
- The practice of testing AI systems by simulating adversarial attacks or attempting to elicit harmful outputs. Used by major AI companies and increasingly referenced in AI regulations as a safety practice.
- Regulatory Sandbox
- A controlled environment for developing and testing innovative AI products under regulatory oversight, with flexibility in how requirements are applied. The EU AI Act (Article 57) requires each member state to establish at least one AI regulatory sandbox.
- Responsible Scaling Policy (RSP)
- A framework (pioneered by Anthropic) that ties AI model deployment decisions to concrete evaluations of model capabilities, with escalating safety requirements as capabilities increase. See Corporate Governance.
- Risk-Based Approach
- A regulatory methodology that calibrates governance requirements based on the level of risk posed by an AI system. The EU AI Act’s four-tier system (unacceptable, high, limited, minimal risk) is the most prominent example.
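The four-tier system can be summarized as a simple lookup pairing each tier with the broad regulatory response it triggers. The summaries below are paraphrases of the Act’s structure, not legal text.

```python
# Sketch: the EU AI Act's four-tier risk taxonomy as a lookup.
# Tier summaries are paraphrases for illustration, not legal text.
RISK_TIERS = {
    "unacceptable": "Prohibited outright (Article 5 practices)",
    "high": "Permitted subject to conformity assessment and ongoing obligations",
    "limited": "Permitted subject to transparency obligations (e.g., disclosing AI use)",
    "minimal": "Permitted with no additional AI Act obligations",
}

def obligations_for(tier):
    """Return the broad regulatory response for a given risk tier."""
    return RISK_TIERS[tier.lower()]

print(obligations_for("High"))
```

The point of the structure is proportionality: compliance burden scales with the tier, so most AI systems (the minimal-risk tier) face no new obligations at all.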
S
- Social Scoring
- The use of AI to evaluate or classify natural persons based on their social behavior or personal characteristics, leading to detrimental or unfavourable treatment. Prohibited under the EU AI Act for both public and private actors where that treatment is unjustified, disproportionate, or arises in contexts unrelated to those in which the data was generated.
- Systemic Risk
- Under the EU AI Act, a risk specific to the high-impact capabilities of GPAI models that could have a significant effect on the Union market, and could pose actual or reasonably foreseeable negative effects on public health, safety, security, fundamental rights, or society as a whole.
T
- Text and Data Mining (TDM)
- The automated extraction of information from text and data for analysis purposes. Relevant to AI training data and copyright law; the EU DSM Directive provides TDM exceptions. See Copyright & IP.
- Transparency
- In AI governance, the principle that information about AI systems should be available to relevant stakeholders. Includes disclosure of AI use, system properties, decision rationale, and data practices.
- Trustworthy AI
- AI that is lawful, ethical, and robust. The OECD, EU, and most governance frameworks converge on trustworthiness as the overarching goal, typically comprising fairness, transparency, accountability, safety, and privacy.
U
- Unacceptable Risk AI
- Under the EU AI Act, AI systems whose use is considered unacceptable and is prohibited. Includes social scoring, exploitation of vulnerabilities, subliminal manipulation, and (with exceptions) real-time remote biometric identification in public spaces.