Artificial intelligence (AI) may seem like a new wave of technology thanks to newly created generative tools such as ChatGPT, but it first emerged as an idea between the 1930s and the 1950s.
Fast-forward to the last five years, when AI began to take off in earnest, and it feels as if the moment has arrived, says Marc Andreessen, co-founder and general partner at venture capital firm Andreessen Horowitz. He was speaking as part of a panel at Databricks’ Data and AI Summit 2023 in San Francisco in June.
While the concept of AI is becoming more mainstream, many still view it with suspicion. In an article dated June 16, Andreessen defended the technology, arguing that AI is “a way to make everything we care about better” rather than “killer software and robots” that may bring ruin to the human race.
“The reason to be very positive about AI is because it’s in this concept about intelligence. We actually know a lot about intelligence because we know a lot about human intelligence. Human intelligence has been the biggest topic of study in the social sciences in the last century… And it turns out, basically, when it comes to human beings, intelligence makes everything better,” he says.
He adds that the comforts we enjoy in the world today, such as buildings and electricity, were built with intelligence.
“We’ve built everything that makes the world work with intelligence [but] we have always been limited by our own capabilities. So now we have the opportunity to apply machine intelligence to all of these efforts, and basically do an upgrade of everybody's capability to be able to do things in the world,” he continues.
AI’s impact on jobs
On the subject of AI replacing jobs, Andreessen believes that AI – or machines – will enable people to do more valuable things.
Likening the current AI revolution to the first Industrial Revolution, in which productivity was much improved thanks to machines, Andreessen expects “tremendous economic growth” to come on the back of AI.
“Specifically with code. Code has this property where the world can never get enough of it. There are always more programmes to write and more things that you want code to do. Nobody ever runs out of ideas of what they want the software to do, but you run out of time and the resources to build the software that you want. So I suspect what happens is both a massive increase in the amount of software in the world that comes out of this, and then also ultimately, a very big increase in the number of people actually working on software,” he adds.
Danny Allan, chief technology officer at data security firm Veeam, agrees that AI, especially generative AI, presents tremendous opportunities for the future.
“It helps to save money [by automating processes] that are very manual in nature. Whether it be support for marketing, image generation or testing, there are just so many ways that generative AI can be applied to,” he tells DigitalEdge in an interview at VeeamON 2023 in Miami.
“[AI] provides huge efficiencies. And if you're looking for technologies that provide efficiencies, they always last. Virtualisation emerged because it was so much more efficient for physical systems, and look at virtualisation now. It's huge. So I do think that generative AI will be very significant now and into the future,” he adds.
Like Andreessen, Allan believes that AI will never replace humans completely. “What it does is it causes us to focus on areas of higher value or different value,” he says. Speaking specifically on computer engineers, Allan adds that humans will still “need to train the large language models (LLMs) [and] create the algorithms and models to test generative AI”. “That is not going away anytime soon,” he notes.
He adds that it will take time before generative AI is widely adopted. “My expectation is that we’ll be able to really reuse the information [from generative AI] in a meaningful way probably in the three- to five-year timeframe, but not in the next 12 to 24 months.”
AI risks
Dave Russell, vice president of enterprise strategy at Veeam, notes that generative AI has positive opportunities to transform businesses and help deal with issues such as recovery and supply chains.
However, misinformation and certain biases are real barriers that have to be overcome.
“Upfront, misinformation existed before generative AI [and it may] get accelerated with generative AI,” says Satya Nadella, CEO of Microsoft, during a panel discussion at Databricks’ Data and AI Summit 2023 in San Francisco. “In the intermediate timeframe, we will have more cyber risk, bioterrorism risks… they are real-world harms.”
And once AI takes off, what happens if humans lose control, asks Nadella. “That is obviously a science problem today because in some sense, we really need to solve the alignment problem. So we shouldn’t abdicate, at least we cannot abdicate, our responsibility to produce responsible AI,” he adds.
Beyond misinformation, Veeam’s Allan points out that models may carry inherent biases from their creators, such as racist responses surfacing in ChatGPT or other problems appearing in the output.
Safety issues are another drawback, points out Eric Schmidt, former CEO of Google and co-founder of philanthropic venture Schmidt Futures. He, too, was speaking as part of the keynote at Databricks’ event this year.
For instance, safety issues that have yet to surface may produce “emergent behaviour that we have not yet seen and we can’t test them,” says Schmidt.
He adds that scenarios of “extreme risk” may emerge, potentially causing “thousands and thousands of people harmed and killed from something”.