AI is moving fast enough to break things. Sound familiar?

Joshua Brustein • 8 min read
OpenAI released ChatGPT less than six months ago / Photo: Bloomberg

In January 2015, the newly formed — and grandly named — Future of Life Institute (FLI) invited experts in artificial intelligence (AI) to spend a long weekend in San Juan, Puerto Rico. The result was a group photo, a written set of research priorities for the field and an open letter about how to tailor AI research for maximum human benefit. The tone of these documents was predominantly upbeat.

Among the potential challenges FLI anticipated was a scenario in which autonomous vehicles reduced the 40,000 annual US automobile fatalities by half, generating not “20,000 thank-you notes, but 20,000 lawsuits”. The letter acknowledged it was hard to predict AI’s exact impact on human civilisation — it laid out some potentially disruptive consequences — but also noted that “the eradication of disease and poverty are not unfathomable”.

The open letter FLI published on March 29 was, well, different. The group warned that AI labs were engaging in “an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control”. It called for an immediate pause on the most advanced AI research and attracted thousands of signatures — including those of many prominent figures — and a round of mainstream press coverage.

Shifting views

For anyone trying to wrap their heads around the current freakout over AI, the letter was instructive on multiple levels. It is a vivid example of how the conversations about new technologies can shift with jarring speed from wide-eyed optimism to deep pessimism. The vibe at the 2015 Puerto Rico event was positive and collegial, says Anthony Aguirre, FLI’s vice-president and secretary of its board. He also helped draft the recent letter, inspired by what he argues is a distressing turn in technology development. “What there wasn’t then was giant companies competing with one another,” he says.

Looking back, the risk that self-interested technology companies would come to dominate the field seems obvious. But that concern is not reflected anywhere in the documents from 2015. Also absent was any mention of the industrial-scale dissemination of misinformation, an issue that many tech experts now see as one of the most frightening consequences of powerful chatbots in the near term.


Then there was the reaction to last month’s letter. Predictably, leading AI companies such as OpenAI, Google, Meta Platforms and Microsoft gave no indication that the letter would lead them to change their practices. FLI also faced blowback from many prominent AI experts, partially because of its association with the polarising effective altruism movement and Elon Musk, a donor and adviser known for his myriad conflicts of interest and attention-seeking antics.

Aside from any intra-Silicon Valley squabbles, critics say FLI was doing damage not by voicing concerns but by focusing on the wrong ones. There is an unmistakable tinge of existential threat in FLI’s letter, which explicitly raises the prospect of humans losing control of the civilisation we have built.

Fear about computer superintelligence is a long-standing topic within tech circles — but so is the tendency to vastly overstate the capabilities of whatever technology is the subject of the latest hype cycle (see also: virtual reality, voice assistants, augmented reality, the blockchain, mixed reality, and the Internet of Things, to name a few).


Predicting that autonomous vehicles could halve traffic fatalities and warning that AI could end human civilisation seem to reside on opposite ends of the techno-utopian spectrum. But they both promote the view that what Silicon Valley is building is far more powerful than laypeople understand.

This diverts attention from less sensational conversations and undermines attempts to address the more realistic problems, says Aleksander Madry, faculty co-lead of the Massachusetts Institute of Technology’s AI Policy Forum. “It’s counterproductive,” he says of FLI’s letter. “It will change nothing, but we’ll have to wait for it to subside to get back to serious concerns.”

The leading commercial labs working on AI have been making major announcements in rapid succession. OpenAI released ChatGPT less than six months ago and followed it with GPT-4, which performs better on many measures but whose inner workings are largely a mystery to people outside the company. Its technology is powering a series of products released by Microsoft, OpenAI’s biggest investor, some of which have done disturbing things, such as professing love for human users. Google rushed out a competing chatbot-powered search tool, Bard. Meta Platforms recently made one of its AI models available to researchers who agreed to certain parameters — and then the code quickly showed up for download elsewhere on the web.

Guarding access to AI tech

“In a sense, we’re already in the worst-of-both-worlds scenario,” says Arvind Narayanan, a professor of computer science at Princeton University. He says a few companies control the best AI models, “while slightly older ones are widely available and can even run on smartphones”. He adds that he is less concerned about bad actors getting their hands on AI models than about AI development taking place behind corporate research labs’ closed doors.

OpenAI, despite its name, essentially takes the opposite view. After its initial formation in 2015 as a nonprofit that would produce and share AI research, it added a for-profit arm in 2019 (albeit one that caps the potential profits its investors can realise). Since then, it has become a leading proponent of the need to keep AI technology closely guarded, lest bad actors abuse it.

In blog posts, OpenAI has said it can anticipate a future in which it submits its models for independent review or even agrees to limit its technology in key ways. But it has not said how it would decide to do this.


For now, it argues that the way to minimise the damage its technology can cause is to limit the level of access its partners have to its most advanced tools, governing their use through licensing agreements. The controls on older and less powerful tools do not necessarily have to be as strong, says Greg Brockman, an OpenAI co-founder who is now its president and chairman. “You want to have some gap so that we have some breathing room to focus on safety and get that right,” he says.

It’s hard not to notice how well this stance dovetails with OpenAI’s commercial interests — a company executive has said publicly that competitive considerations also play into its view on what to make public. Some academic researchers complain that OpenAI’s decision to withhold access to its core technology makes AI more dangerous by hindering disinterested research. A company spokesperson says it works with independent researchers and underwent a six-month vetting process before releasing the latest version of its model.

OpenAI’s rivals question its approach to the big questions surrounding AI. “Speaking as a citizen, I always get a little bit quizzical when the people saying ‘This is too dangerous’ are the people who know,” says Joelle Pineau, vice-president for AI research at Meta and a professor at McGill University. Meta allows researchers access to versions of its AI models, hoping outsiders can probe them for implicit biases and other shortcomings.

The drawbacks of Meta’s approach are already becoming clear. In late February, the company gave researchers access to a large language model called LLaMA — a technology similar to the one that powers ChatGPT. Researchers at Stanford University soon said they had used the model as the basis for a project that approximated advanced AI systems with an investment of about US$600 ($796). Pineau says she has not assessed how well Stanford’s system works, though she says such research aligns with Meta’s goals.

But Meta’s openness, by definition, came with less control over what happened with LLaMA. It took about a week before it showed up for download on 4chan, one of the main message boards of choice for Internet trolls. “We’re not thrilled about the leak,” Pineau says.

There may never be a definitive answer about whether OpenAI or Meta has the right idea — the debate is only the latest version of one of Silicon Valley’s oldest fights. But their divergent paths do highlight how the decisions about putting safeguards on AI are being made entirely by executives at a few large companies.

In other industries, releasing potentially dangerous products comes only after private actors have satisfied public agencies that they’re safe. In a March 20 blog post, the Federal Trade Commission warned technologists that it “has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury”. Ten days later, the Center for AI and Digital Policy, an advocacy group, filed a complaint with the commission, asking it to halt OpenAI’s work on GPT-4.

Being able to build something but refraining from doing so is not a novel idea. But it pushes against Silicon Valley’s enduring impulse to move fast and break things. While AI is far different from social media, many players involved in this gold rush were around for that one, too. By the time policymakers began trying to respond to social media in earnest, the services were deeply entrenched, and those efforts have arguably achieved very little. In 2015, it still seemed like there was lots of time to deal with whatever AI would bring. That seems less true today. — Bloomberg Businessweek
