Technology Policy

AI Regulation

Also known as: AI Governance, AI Policy, AI Law

Legal frameworks, policies, and standards governing the development and deployment of artificial intelligence systems.

AI regulation encompasses the laws, rules, and standards that govern how AI systems are built, deployed, and operated. Approaches differ substantially across jurisdictions, as the major frameworks below illustrate.

Major Frameworks

Region   Framework                  Status
EU       AI Act                     In force (2024)
US       Executive Order 14110      Active (2023)
China    Multiple regulations       Active
UK       Pro-innovation approach    Evolving

Risk-Based Approaches

The EU AI Act categorizes AI systems into four risk tiers (sketched in code after this list):

  • Unacceptable: Banned (social scoring, manipulation)
  • High-risk: Strict requirements (hiring, credit, law enforcement)
  • Limited: Transparency obligations (chatbots)
  • Minimal: No specific rules (spam filters)
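As a rough illustration only (not drawn from any official implementation or legal text), the tiering can be thought of as a lookup from use case to risk tier. The use-case names and tier assignments in this Python sketch are hypothetical examples chosen to mirror the list above.

from enum import Enum


class RiskTier(Enum):
    """Risk tiers loosely following the EU AI Act's categorization."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict requirements before deployment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific rules


# Hypothetical mapping of example use cases to tiers, mirroring the list above.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "hiring_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case; unknown cases need review."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        raise ValueError(f"Unknown use case {use_case!r}: requires case-by-case assessment")
    return tier


print(classify("hiring_screening"))  # RiskTier.HIGH

In practice, classification depends on context of use rather than a fixed label, which is part of why defining and enforcing the tiers is harder than this lookup suggests.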

Key Requirements

  • Risk assessments and documentation
  • Human oversight mechanisms
  • Transparency and explainability
  • Data governance standards
  • Conformity assessments
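To make these obligations concrete, the following is a hedged sketch of how a team might track them internally for a single system. The field names and the idea of a per-system checklist are assumptions for illustration, not a format prescribed by any regulation.

from dataclasses import dataclass


@dataclass
class ComplianceChecklist:
    """Hypothetical per-system record of the obligations listed above."""
    system_name: str
    risk_assessment_done: bool = False           # risk assessments and documentation
    human_oversight_defined: bool = False        # human oversight mechanisms
    transparency_notice_published: bool = False  # transparency and explainability
    data_governance_reviewed: bool = False       # data governance standards
    conformity_assessment_passed: bool = False   # conformity assessments

    def outstanding(self) -> list[str]:
        """Return the names of obligations that are not yet satisfied."""
        return [name for name, done in vars(self).items()
                if isinstance(done, bool) and not done]


checklist = ComplianceChecklist("resume-screening-model", risk_assessment_done=True)
print(checklist.outstanding())  # the four obligations still open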

Challenges

  • Keeping pace with technology
  • Balancing innovation and safety
  • Global coordination
  • Enforcement mechanisms
  • Defining “AI” precisely
