Why are we all freaking out about AI

Assif Shameen • 10 min read
We need to nip AI risks in the bud, or things could get out of hand / Photo: Bloomberg

Last week, I wrote about the fast-unfolding drama at OpenAI, an artificial intelligence start-up and one of the world’s largest unicorns, or private venture capital-backed companies worth over US$1 billion ($1.3 billion).

OpenAI’s board, which included Chief Scientist Ilya Sutskever, one of the world’s top AI researchers, had summarily fired CEO Sam Altman, who was then swiftly hired by the start-up’s main backer, Microsoft, to lead an in-house artificial intelligence research lab. A staff rebellion then forced the board to resign and led to Altman’s eventual reinstatement as CEO of the start-up, which now has a valuation of US$86 billion.

A week later, with a steady flow of leaks from tech executives and former board members, it is becoming clear what exactly transpired in OpenAI’s boardroom battle, which triggered an unprecedented tech industry crisis, what is really at stake, and why everyone is suddenly starting to freak out about AI.

A year ago, on Nov 30, OpenAI unveiled its initial breakthrough, a generative AI chatbot. ChatGPT is a pre-trained large language model fine-tuned with reinforcement learning from human feedback (RLHF). Analysts say that tuning is what made ChatGPT feel conversational and good at answering a wide range of questions.
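To make the idea concrete, here is a minimal, purely illustrative sketch of the RLHF loop in Python. The toy preference data, reward model and numbers are all hypothetical; OpenAI’s actual pipeline uses large neural networks and algorithms such as proximal policy optimisation.

```python
# Purely illustrative sketch of RLHF: human preference data trains a reward
# model, and the chatbot "policy" is then tuned to favour answers that score
# well. Everything here is hypothetical and radically simplified.
import random

# 1. Human raters compare pairs of answers; we keep (preferred, rejected) pairs.
preference_data = [
    ("Paris is the capital of France.", "The capital of France is probably Lyon."),
    ("2 + 2 equals 4.", "2 + 2 equals 5."),
]

# 2. A toy "reward model": answers that overlap with preferred answers score higher.
def reward_model(answer: str) -> float:
    preferred_words = {w for good, _ in preference_data for w in good.lower().split()}
    words = answer.lower().split()
    return sum(w in preferred_words for w in words) / max(len(words), 1)

# 3. The "policy" chooses among candidate answers, mostly exploiting the reward
#    model but occasionally exploring -- the reinforcement-learning step.
def policy(candidates: list[str], exploration: float = 0.1) -> str:
    if random.random() < exploration:
        return random.choice(candidates)
    return max(candidates, key=reward_model)

print(policy(["2 + 2 equals 4.", "2 + 2 equals 5."]))
```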

Yet, the chatbot has significant limitations. While it can seem almost magical, producing answers that a search engine like Google was never able to, it too often churns out answers that are unreliable, plainly wrong or completely made up.

Moreover, ChatGPT is incapable of proper reasoning. It is also seen as a first step in the long march towards a more sophisticated AI that can reason. “The next frontier is to develop the planning capabilities of the large language models, or their ability to reason towards an answer rather than spitting one straight,” says Pierre Ferragu, a tech hardware analyst at New Street Research in New York.


Work on incorporating reasoning is being done by OpenAI, which is developing GPT-5, or Generative Pre-trained Transformer 5, a multimodal large language model, and by Google, which is developing Gemini, a suite of large language models trained with methodologies that integrate reinforcement learning and tree search, a technique that explores nodes from the root of a search tree, looking for one particular node that satisfies the conditions of a problem.
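As a rough illustration of what tree search means here, the sketch below expands states from a root, level by level, until one node satisfies a goal condition. The toy puzzle (reach a target number using +3 and *2 moves) is invented purely for illustration and says nothing about how Gemini or GPT-5 are actually built.

```python
# Illustrative tree search: breadth-first expansion from a root state until a
# node satisfying the problem's goal condition is found.
from collections import deque

def tree_search(root: int, target: int, max_depth: int = 10):
    """Breadth-first search from the root; returns the sequence of moves to the target."""
    queue = deque([(root, [])])                  # (current state, moves taken so far)
    while queue:
        state, path = queue.popleft()
        if state == target:                      # goal test: this node solves the problem
            return path
        if len(path) >= max_depth:
            continue
        queue.append((state + 3, path + ["+3"]))  # expand the node's children
        queue.append((state * 2, path + ["*2"]))
    return None

print(tree_search(root=1, target=14))            # prints a shortest plan, e.g. ['+3', '+3', '*2']
```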

Frontier of discovery
Just a day before he was fired on Nov 17, Altman joined a panel on AI at the APEC CEO Summit with senior executives from search giant Google and social media behemoth Meta Platforms. The panel was moderated by Laurene Powell Jobs, the widow of Apple co-founder Steve Jobs.

Responding to a question by moderator Jobs, Altman said: “Four times now in the history of OpenAI, most (recently) just in the last couple weeks, I have gotten to be in the room, when we push the veil of ignorance back and the frontier of discovery forward.” 


That “frontier of discovery”, it now turns out, was a powerful AI breakthrough that several OpenAI researchers and at least three members of its board believe could potentially threaten humanity. OpenAI has since acknowledged the breakthrough, a project called Q* (pronounced Q Star), in an internal message to its staff.

But so far, it has officially refused to comment on whether the breakthrough relates to artificial general intelligence (AGI), autonomous systems capable of human-level intelligence across a broad range of tasks, and which, in some cases, could even surpass humans.

Unsurprisingly, the breakthrough spearheaded by Sutskever raised concerns among some OpenAI staffers about the pace of the advances and whether the non-profit entity's for-profit subsidiary had adequate guardrails to commercialise such advanced AI models. OpenAI’s Charter is clear in its mission to “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.”

Several media outlets, including Reuters and the tech-focused website The Information, reported that OpenAI researchers used a new technique to build a model that could solve math problems it had never seen before, a key milestone. How difficult was the math? The model reportedly solved problems at a graduate-student level, the kind of math you would be required to do if you were studying for a Master’s degree rather than an undergraduate course.

For years, researchers have toiled to get AI models to solve complex math problems. While large language models like OpenAI’s consumer-facing generative AI chatbot ChatGPT and the more enterprise-centric GPT-4 can do some math, they don’t do it reliably or well.

Currently, there are no algorithms or architectures that can solve math problems reliably using AI. Deep learning and transformers, the type of neural network that large language models use, are great at recognising patterns, but apparently, that is not good enough.

Machine learning
Complex math is a particularly hard challenge because it requires AI models to have the capacity to reason and, as such, really understand what they are dealing with. Math is often considered a key benchmark for reasoning. AI experts say that a machine that can reason about mathematics could, in theory, learn to do other tasks that build on existing information, such as writing computer code or drawing conclusions from a news article. 


The AI model was reportedly developed by two senior researchers, Jakub Pachocki and Szymon Sidor, who built on the earlier work of Chief Scientist Sutskever, one of the four board members behind Altman’s ouster.

Pachocki and Sidor were among the first employees to resign following the ouster of Altman and OpenAI Chairman and President Greg Brockman. Sidor had worked at OpenAI on the reinforcement learning part of ‘OpenAI Five’, the model which achieved supra-human capabilities in ‘Dota 2’, a multiplayer online battle arena video game.

Before that, he had reportedly worked at Vicarious AI on causal models, which can generalise cause-and-effect relationships from limited data. Pachocki also led the research work on Dota alongside Sidor and then led the reasoning team at OpenAI. His latest role was as director of research for OpenAI and leader of the pre-training work for GPT-4.

Pachocki and Sidor’s underlying research used computer-generated rather than real-world data to train the model, notes Ferragu. OpenAI recently hired Noam Brown, whose expertise is in planning, or AI models’ ability to think through possible solutions rather than produce one straight away.

Brown had previously worked on having AI masterfully play the game ‘Diplomacy’. AlphaZero, the model Google’s AI lab DeepMind Technologies developed after AlphaGo and which used reinforcement learning, is a source of inspiration for the work. AlphaGo learned how to play the game of Go from human games, whereas AlphaZero learned by playing against itself. That is reinforcement learning at its most basic.
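The sketch below is a toy version of that self-play idea: the same agent plays both sides of a tiny take-away game and learns move values purely from the outcomes of its own games. The game and the tabular update are invented for illustration only; AlphaZero combines self-play with deep networks and Monte Carlo tree search.

```python
# Toy self-play reinforcement learning: two players alternately remove 1 or 2
# stones from a pile, and whoever takes the last stone wins. The agent learns
# which moves win by tracking win rates over many games against itself.
import random
from collections import defaultdict

values = defaultdict(lambda: [0, 0])   # (stones_left, move) -> [wins, plays]

def choose_move(stones: int, explore: float = 0.2) -> int:
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)     # occasionally explore
    # otherwise pick the move with the best observed win rate so far
    return max(moves, key=lambda m: values[(stones, m)][0] / (values[(stones, m)][1] or 1))

def self_play_game(start: int = 7) -> None:
    stones, history, player = start, {0: [], 1: []}, 0
    while stones > 0:
        move = choose_move(stones)
        history[player].append((stones, move))
        stones -= move
        winner = player                 # the player who took the last stone wins
        player = 1 - player
    for p in (0, 1):                    # credit wins and count plays for each move made
        for state_action in history[p]:
            values[state_action][1] += 1
            values[state_action][0] += int(p == winner)

for _ in range(20000):
    self_play_game()

# After training, the agent prefers moves that leave the opponent a losing pile.
print({m: values[(7, m)] for m in (1, 2)})   # with 7 stones, taking 1 is the winning move
```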

So, did we achieve a huge AI breakthrough, or did OpenAI take just another step towards its goal of AGI? “We don’t see an AI singularity, a point where AI becomes sentient and supra-human at any task, but we potentially see a singularity just ahead of us,” Ferragu noted in a recent report about OpenAI’s breakthrough.

Yet, a demo of the model circulated within OpenAI in recent weeks, and the pace of development certainly alarmed researchers focused on AI safety. Analysts say that tensions within OpenAI about the pace of its work are likely to continue despite Altman’s reinstatement as CEO. 

Darker side
The OpenAI saga is likely to reinforce concerns about the darker side of AI. Do a Google search for the words “artificial intelligence”, and the search engine will spew out tons of rubbish like “AI will take over your job and the jobs of all your friends and family” or “AI is ultimately flawed and can’t be trusted with all the biases in facial recognition as well as gender and race-based discrimination” or that AI is empowering makers of deepfakes or those aggressively pushing disinformation.

Oh, there is that other big one: ultimately, AI will become sentient and take over and harm humanity. Already, some people are pushing the idea that AI will start wars and press nuclear buttons and might just kill us all. If you believe that, you have probably been watching too many sci-fi Hollywood movies like The Terminator, 2001: A Space Odyssey and Ex Machina.

Former US President Barack Obama has recently given several interviews where he was repeatedly asked about AI. His concern isn’t so much that we will see a Terminator destroy the earth, but that AI could autonomously hack the global or the US financial system, which could paralyse the world.

Weeks before Altman was ousted, US President Joe Biden had issued an executive order on the safe, secure and trustworthy development and use of artificial intelligence, imposing some restraints on AI development.

Biden’s executive order declares that the US should be a global leader in AI development and adoption by engaging with international allies and partners, leading efforts to develop common AI regulatory and accountability principles, and advancing responsible global technical standards for AI. With just 11 months before the next elections, the Order looks more like a political document than a serious attempt to regulate AI. 

Tesla CEO Elon Musk, who now runs a rival AI platform, xAI, has long been more worried than most about the speed at which artificial intelligence is advancing. He told a New York conference on Nov 29 that there are “double-edged swords”, or tech that can be used for good and evil, and “single-edged swords”, or tech that is inherently good.

His prediction about when the technology can reach what’s known as “artificial general intelligence” — in his words, the point at which it “can write as good a novel as, say, J.K. Rowling, or discover new physics, or invent new technology” — is less than three years from now. That’s a very aggressive timeline. But then again, Musk has been promising a million robotaxis any day now for over five years. 

Here’s my take: AI is like the Wild West without a sheriff. Picture a chaotic Australian Rules football game with no umpire — players running wild, tearing each other apart. Pardon the pun, but you don’t want to play footy with AI. There has to be a global agreement on AI safety. Even if the Western countries and all of Southeast Asia sign on to a set of AI regulations, what happens if Russia, China and others don’t?

Even if you assume that China and Russia decide they need to sign on too because they don’t want to be seen going rogue, it still won’t solve anything because there are still Iran and North Korea. What if a rogue nation got hold of a bunch of Nvidia’s fastest AI chips and built its own large language models?

We need rules. For 80 years now, the world has fretted about nuclear proliferation. There were all sorts of non-proliferation treaties, yet Iran still somehow acquired the technology, and who knows how far behind North Korea is.

Can we take chances with AI? With the recent breakthroughs on the road to AGI, we need to nip AI risks in the bud, or things could get out of hand.

Assif Shameen is a technology and business writer based in North America

 
