The Regulation Race Begins

Artificial intelligence has moved from a niche technical field to a transformative force reshaping healthcare, finance, media, law enforcement, and national security — all within a remarkably short window. Governments around the world have been scrambling to respond, but crafting effective AI regulation is genuinely difficult. The technology evolves faster than legislative cycles. Its applications are extraordinarily diverse. And the risks it poses range from subtle algorithmic biases to catastrophic misuse scenarios.

This is an overview of the major regulatory approaches being taken, what they get right, and where the hard questions remain.

The EU AI Act: The World's Most Comprehensive Framework

The European Union's AI Act, which came into force in 2024, is the world's first comprehensive legal framework specifically designed to govern AI. Its core approach is risk-based tiering:

  • Unacceptable risk: Outright banned applications, including real-time remote biometric surveillance in public spaces (with limited exceptions), social scoring systems, and AI that manipulates behavior subliminally.
  • High risk: Heavily regulated applications in areas like critical infrastructure, employment, education, credit, and law enforcement. These require conformity assessments, transparency obligations, and human oversight mechanisms.
  • Limited risk: Subject to lighter-touch transparency rules — for example, chatbots must disclose that users are interacting with an AI system.
  • Minimal risk: Largely unregulated, covering the bulk of everyday AI applications such as spam filters.
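The tiering logic above is, at its core, a mapping from use-case categories to regulatory obligations. The following sketch illustrates that structure only — the category names and the mapping are illustrative assumptions loosely modeled on the Act's annexes, not a legal classification tool:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # conformity assessment, human oversight
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical mapping of use cases to tiers; real-world classification
# is a legal determination about context and purpose, not a lookup table.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "subliminal_manipulation": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "hiring_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed risk tier for a use case, defaulting to MINIMAL."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
```

The point of the sketch is that the same system can land in different tiers depending on how it is deployed — which is why the Act regulates applications, not models alone.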

The Act also introduces specific rules for general-purpose AI models — the large foundation models that underlie systems like ChatGPT — requiring transparency about training data and compliance with EU copyright law.

The United States: A More Fragmented Approach

The US has taken a notably different path, favoring executive action and sector-specific guidance over comprehensive legislation. Key developments include:

  • A 2023 Executive Order directing federal agencies to develop AI safety standards and requiring developers of powerful AI systems to share safety test results with the government.
  • The National Institute of Standards and Technology (NIST) AI Risk Management Framework, a voluntary set of guidelines for organizations developing and deploying AI.
  • Sector-specific guidance from agencies like the FDA (for AI in medical devices) and the EEOC (for AI in employment decisions).

Congressional efforts at comprehensive AI legislation have moved slowly, reflecting both the genuine complexity of the issue and the usual friction of the US legislative process.

China: Innovation and Control in Parallel

China has pursued a distinctive combination of fostering domestic AI development while maintaining tight state control. It has enacted specific regulations on algorithmic recommendation systems, deepfakes, and generative AI — often with a focus on preventing content that challenges social stability or state authority, alongside more conventional safety concerns.

The Core Challenges Regulators Face

Keeping Pace with the Technology

AI capabilities advance faster than rulemaking: a regulation drafted around today's models risks being outdated by the time it comes into force. Some frameworks attempt to address this through principles-based approaches rather than prescriptive rules, but this creates its own challenges around enforcement and legal certainty.

Defining the Object of Regulation

"AI" is not a single thing. The same underlying model can power a children's tutoring app and a weapons targeting system. Effective regulation needs to focus on applications and contexts, not just the technology itself.

Jurisdictional Limits

AI systems are developed globally and deployed across borders. A regulation effective within one jurisdiction can be undermined if the same capabilities are accessible from elsewhere.

What's at Stake

Get regulation right, and it can enable trustworthy AI that genuinely benefits society while managing real harms — from discriminatory hiring algorithms to AI-generated disinformation. Get it wrong — whether through overreach that stifles innovation or underreach that allows harms to accumulate — and the costs will be significant. The coming years of regulatory experimentation will be crucial in determining which path we take.