AI models are evolving rapidly. When EU regulators released the first draft of the AI Act in April 2021, they hailed it as “future-proof”, only to be left scrambling to update the text in response to the release of ChatGPT a year and a half later. But regulatory efforts are not in vain. For example, the law’s ban on AI in biometric policing will likely remain pertinent regardless of technological advances. Moreover, the risk frameworks contained in the AI Act will help policymakers guard against some of the technology’s most dangerous uses. While AI will continue to develop faster than policy, the law’s fundamental principles will not need to change, though more flexible regulatory tools will be needed to tweak and update its rules.
Last December, the EU set a global precedent by finalising the Artificial Intelligence Act (AI Act), one of the world’s most comprehensive sets of AI rules.
Europe’s landmark legislation could signal a broader trend towards more responsive AI policies. But while regulation is necessary, it is insufficient. Beyond imposing restrictions on private AI companies, governments must assume an active role in AI development by designing systems and shaping markets for the common good.