Within the past month, California Governor Gavin Newsom has vetoed an artificial intelligence (AI) safety bill, and the Royal Swedish Academy of Sciences awarded the Nobel Prize in Chemistry to David Baker, a professor at the University of Washington, and to Demis Hassabis and John M Jumper, employees of Google’s subsidiary DeepMind and its spin-off Isomorphic Labs. These two events may seem to have little in common, but, taken together, they suggest that outsourcing humanity’s future to profit-maximising private corporations is something to be celebrated.
While the California bill was not flawless, it did represent the first substantial effort to hold developers accountable for the potential harms that their AI models might cause. Moreover, it focused not just on any risk but on “critical harm,” like developing weapons of mass destruction or causing at least US$500 million ($656.6 million) worth of damage.
The tech industry, Google among them, lobbied fiercely against the bill, making a very old argument. As the Financial Times editorial board put it, new regulations could “stunt … the emergence of an innovation that could help diagnose diseases, accelerate scientific research, and boost productivity.” Once again, such opportunity costs are deemed more harmful than whatever damage AI might do to people’s ability to control their own destinies, or even to live peacefully in their societies.