- 16 Companies Profiled
- 15 White House Signatories
- $1B+ Annual Safety Investment
- 7 AI Principles Sets

1. Overview

Corporate AI governance has become a critical dimension of the broader AI governance landscape. As companies develop increasingly powerful AI systems, their internal governance structures, safety practices, and ethical commitments shape the technology’s trajectory as much as — or more than — government regulation.

Common Corporate Governance Elements

2. Google / DeepMind

2.1 AI Principles (June 2018)

Google was the first major tech company to publish formal AI principles, following internal controversy over Project Maven, a U.S. Department of Defense contract applying AI to drone imagery analysis.

  1. Be socially beneficial
  2. Avoid creating or reinforcing unfair bias
  3. Be built and tested for safety
  4. Be accountable to people
  5. Incorporate privacy design principles
  6. Uphold high standards of scientific excellence
  7. Be made available for uses that accord with these principles

2.2 Governance Structure

3. Microsoft

3.1 Responsible AI Standard (v2, 2022)

Microsoft’s Responsible AI Standard (v2, published June 2022) translates its six AI principles (fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability) into concrete engineering requirements, such as impact assessments and transparency documentation, that product teams must satisfy before release.
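One hypothetical way to picture "principles translated into engineering requirements" is a machine-checkable release gate. The artifact names below are invented for illustration; they are not Microsoft's actual requirement taxonomy.

```python
# Hypothetical sketch of a responsible-AI release gate that turns policy
# requirements into a machine-checkable checklist. Artifact names are
# illustrative assumptions, not any company's actual requirements.

REQUIRED_ARTIFACTS = {
    "impact_assessment",    # documented harms analysis for the use case
    "transparency_note",    # intended uses, limitations, known failure modes
    "fairness_evaluation",  # disaggregated quality metrics across groups
    "red_team_report",      # adversarial testing results
}

def release_gate(submitted: set[str]) -> tuple[bool, set[str]]:
    """Return (approved, missing_artifacts) for a proposed release."""
    missing = REQUIRED_ARTIFACTS - submitted
    return (not missing, missing)
```

A team submitting only an impact assessment and transparency note would be blocked until the remaining artifacts exist, which is the practical difference between principles as aspiration and principles as process.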

3.2 Governance Structure

4. OpenAI

4.1 Charter & Structure

OpenAI operates under a unique capped-profit structure (adopted in 2019) in which a nonprofit parent board controls the for-profit subsidiary and investor returns are capped. Its 2018 Charter commits the organization to broadly distributed benefits, long-term safety (including a pledge to assist rather than race a safety-conscious project close to building AGI), technical leadership, and cooperative orientation. Key governance elements:

4.2 Safety Practices

5. Anthropic

5.1 Responsible Scaling Policy (RSP)

Industry First: Anthropic’s Responsible Scaling Policy (September 2023) was the first public commitment to tie model deployment decisions to concrete capability evaluations. It defines “AI Safety Levels” (ASL-1 through ASL-4) with escalating safety requirements as model capabilities increase.

5.2 AI Safety Levels

| Level | Capabilities | Required Safeguards |
|---|---|---|
| ASL-1 | No meaningful uplift for catastrophic risks | Basic safety measures |
| ASL-2 | Current large models (some uplift possible but not beyond existing info) | Current security and deployment practices |
| ASL-3 | Substantially increases risk of catastrophic misuse | Enhanced security; containment measures; monitoring |
| ASL-4 | Qualitative increase in dangerous capabilities | Extraordinary measures (not yet fully defined) |
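The gating logic an RSP implies can be sketched as a simple decision procedure: run capability evaluations, map the results to a safety level, and block deployment unless the matching safeguards are in place. The evaluation names, thresholds, and safeguard status below are invented for illustration; they are not Anthropic's actual criteria.

```python
from dataclasses import dataclass

# Illustrative RSP-style gating: capability-evaluation scores determine a
# required AI Safety Level; deployment is allowed only if safeguards at
# that level exist. All names and thresholds are hypothetical.

@dataclass
class EvalResults:
    bio_uplift_score: float    # hypothetical misuse-uplift benchmark, 0-1
    cyber_uplift_score: float  # hypothetical offensive-cyber benchmark, 0-1
    autonomy_score: float      # hypothetical autonomous-replication benchmark, 0-1

def required_asl(r: EvalResults) -> int:
    """Higher capability scores trigger higher required safety levels."""
    top = max(r.bio_uplift_score, r.cyber_uplift_score, r.autonomy_score)
    if top >= 0.8:
        return 4  # qualitative jump in dangerous capabilities
    if top >= 0.5:
        return 3  # substantial increase in catastrophic-misuse risk
    if top >= 0.1:
        return 2  # some uplift, within existing-information bounds
    return 1

SAFEGUARDS_IMPLEMENTED = {  # which level's safeguards the lab has in place
    1: True,   # basic safety measures
    2: True,   # current security and deployment practices
    3: False,  # enhanced security, containment, monitoring
    4: False,  # extraordinary measures (not yet defined)
}

def may_deploy(r: EvalResults) -> bool:
    """Deployment is allowed only if safeguards meet the required level."""
    return SAFEGUARDS_IMPLEMENTED[required_asl(r)]
```

The point of the structure is that the deployment decision is forced by the evaluation results rather than made ad hoc: a model scoring into ASL-3 territory cannot ship until ASL-3 safeguards exist.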

5.3 Other Practices

6. Meta

6.1 Open-Source Approach

Meta’s AI governance is distinctive due to its commitment to open-weight model releases (Llama series), which raises unique governance questions about downstream use control.

6.2 Governance Elements

7. Other Major Companies

| Company | AI Principles | Key Governance Features | Notable Commitments |
|---|---|---|---|
| Amazon | Responsible AI policy (2023) | AI service cards; fairness tools (Clarify); safety reviews | White House commitments; NIST AI RMF alignment |
| Apple | Integrated into privacy principles | On-device processing emphasis; differential privacy; ML review | Privacy-focused AI governance; limited public disclosure |
| IBM | Principles for Trust and Transparency (2018) | AI Ethics Board; AI FactSheets; AI Fairness 360 toolkit | Early advocate; voluntary commitments; NIST partnership |
| Samsung | AI Ethics Principles (2022) | AI Ethics Committee; internal review process | Gauss model safety evaluations |
| Nvidia | Trustworthy AI principles | Red teaming for NeMo models; safety guardrails (NeMo Guardrails) | White House commitments; export compliance |
| Baidu | ERNIE responsible AI framework | Ethics committee; content safety systems; regulatory compliance | Deep synthesis compliance; algorithm registration (China) |
| Huawei | AI ethics principles | AI security framework; privacy by design | Chinese regulatory compliance; international standards participation |

8. Voluntary Commitments

8.1 White House Voluntary AI Commitments (July 2023)

Fifteen companies have signed voluntary commitments on AI safety, security, and trust. An initial seven (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) signed in July 2023; eight more joined in September 2023:

| Commitment Area | Specific Actions |
|---|---|
| Safety | Internal and external security testing (red-teaming) before release; sharing information on managing AI risks with governments, civil society, and other companies |
| Security | Investing in cybersecurity and insider threat safeguards to protect unreleased model weights; facilitating third-party discovery and reporting of vulnerabilities |
| Trust | Developing technical mechanisms (such as watermarking) so users know when content is AI-generated; publicly reporting capabilities, limitations, and domains of appropriate/inappropriate use; prioritizing research on societal risks; deploying AI to help address society's greatest challenges |
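The watermarking mechanism referenced in the trust commitments is usually discussed in terms of statistical schemes such as the "green-list" approach of Kirchenbauer et al. (2023): the generator subtly biases sampling toward a context-dependent subset of tokens, and a detector checks whether that subset is over-represented. The sketch below shows only the detection side; the hash rule and threshold are chosen for illustration and are not any company's deployed system.

```python
import hashlib
import math

# Sketch of green-list statistical watermark detection. A watermarking
# generator would bias sampling toward "green" tokens; the detector flags
# text whose green fraction is implausibly high. Hash rule and threshold
# are illustrative assumptions.

GREEN_FRACTION = 0.5  # expected share of green tokens in unwatermarked text

def is_green(prev_token: int, token: int) -> bool:
    """A token is 'green' if a hash seeded by its predecessor lands in the
    green half of the vocabulary partition (~50% for any context)."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return digest[0] < 128

def z_score(tokens: list[int]) -> float:
    """Deviation of the observed green count from the unwatermarked
    expectation, in standard deviations."""
    n = len(tokens) - 1  # number of (prev, current) pairs
    observed = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    mean = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (observed - mean) / std

def looks_watermarked(tokens: list[int], threshold: float = 4.0) -> bool:
    return z_score(tokens) > threshold
```

Because detection is purely statistical, it requires no access to the model, only to the shared hash rule, which is why such schemes are attractive for the cross-industry provenance goal the commitments describe.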

8.2 Frontier Model Forum

Founded by Anthropic, Google, Microsoft, and OpenAI in July 2023, the Frontier Model Forum facilitates AI safety research and best-practice sharing among frontier AI developers; membership has since expanded, with Amazon and Meta joining in 2024. Key outputs include red-teaming standards, safety benchmark sharing, and the AI Safety Fund for external safety research.

9. Comparative Analysis

| Dimension | Google | Microsoft | OpenAI | Anthropic | Meta |
|---|---|---|---|---|---|
| Principles Year | 2018 | 2018 | 2018 (Charter) | 2023 (RSP) | 2021 |
| Governance Model | Centralized RAI team + reviews | Office of RAI + Aether | Board + Safety Advisory | RSP + safety levels | Embedded RAI teams |
| Model Release | Closed (API access) | Partnership (OpenAI) | Closed (API access) | Closed (API access) | Open weights |
| Safety Research | DeepMind Safety | Microsoft Research | Alignment team | Core mission | FAIR safety group |
| Red Teaming | Internal + external | Internal + external | External pre-launch | Internal + external | Internal + community |
| Transparency | Model cards; technical reports | System cards; RAI dashboard | System cards; usage data | Model cards; RSP reports | Open weights; research papers |

