
The tradeoffs of AI regulation

Raghuram G Rajan • 5 min read
Photo by Aidin Geranrekab on Unsplash

The problem with European regulators, a German businessman recently told me, is that they are too scared of downside risks. “In any innovative new business sector, they over-regulate and stifle any upside potential.” In contrast, he argued, Americans care more about the upside potential, and thus hold off on regulation until they know much more about the consequences. “Not surprisingly, the US has much more of a presence in innovative industries.”

AI is a case in point. The EU enacted the world’s first comprehensive AI regulation in August 2024, establishing safeguards against risks such as discrimination, disinformation, privacy violations, and AI systems that could endanger human life or threaten social stability. The law also assigns AI systems to different risk levels, with different treatment for each. While AI-driven social-scoring systems are banned outright, higher-risk systems are heavily regulated and supervised, with a schedule of fines for non-compliance.

But Europe has little presence in the burgeoning AI industry, especially relative to the US or China. Those leading the charge in generative AI are US-based firms such as OpenAI, Anthropic, and Google; no European firm meets the mark. Such a glaring gap seems to speak for itself. For now, the Trump administration’s AI Action Plan, which seeks to limit red tape and regulation in AI, looks like the better approach.
