OpenAI has announced a major upgrade to the technology underpinning ChatGPT, the seemingly magical online tool professionals have been using to draft emails, write blog posts and more.
If you think of ChatGPT as a car, the new language model known as GPT-4 adds a more powerful engine. The old ChatGPT could only read text. The new ChatGPT can look at a photo of the contents of your fridge and suggest a dinner recipe. The old ChatGPT scored in the 10th percentile on the bar exam. The new one was in the 90th. People have used it in the hours since its release to create a website from a hand-drawn sketch or look through a dating website for an ideal partner.
But this is the fun part of unleashing a powerful language model on the public: the honeymoon period. What are the long-term consequences? OpenAI hasn't disclosed the datasets it used to train GPT-4, which means researchers can't scrutinise the model to determine how it might inadvertently manipulate or misinform people. More broadly, though, it ushers in a new era of hyper-efficiency, one in which professionals will have to work smarter and faster or perish.
There is no better example of this than Morgan Stanley, which has been using GPT-4 since last year. According to an announcement by the bank on March 14, Morgan Stanley trained GPT-4 on thousands of papers published by its analysts on capital markets, asset classes, industry analysis and more to create a chatbot for its wealth advisers. The company said about 200 staff at the bank have been using it daily.
“Think of it as having our chief investment strategist, chief global economist, global equities strategist, and every other analyst around the globe on call for every advisor, every day,” Morgan Stanley analytics chief Jeff McMillan said in an official statement.
But here was the line that stood out from OpenAI’s write-up of the case study: “McMillan says the effort will further enrich the relationship between Morgan Stanley advisors and their clients by enabling them to assist more people more quickly.”
How much more quickly? A spokesperson for Morgan Stanley says its advisers can now do in seconds what they used to do in half an hour, such as looking at an analyst’s note to advise a client on the performance of certain companies and their shares.
Powerful AI systems like GPT-4 aren’t going to replace large swaths of professional workers, as many have instinctively feared. But they will put them under greater pressure to be more productive and faster at what they do. They will raise the bar on what is considered acceptable output and usher in an era of ultra-efficiency, unlike anything we’ve seen before.
That is partly what happened to professional translators and interpreters. As artificial intelligence tools like Google Translate and DeepL grew popular with business customers, many translators feared they would be replaced. Instead, they were expected to increase their output.
Before the advent of translation tools, a professional would be expected to translate between 1,000 and 2,000 words daily, says Nuria Llanderas, who has been a professional interpreter for over 20 years. “Now they are expected to manage 7,000.” Her industry peers predict that more AI systems will start supporting them in simultaneous translation, but in practice that could also mean more work for human translators, who must check that the machine’s output isn’t wrong. It will also raise the bar on human performance. “With the extra help, you have no excuses to leave anything out,” Llanderas adds.
Much of this is typical of the march of technology. Smartphones allow us to be connected to work at all times. Slack lets us communicate more seamlessly with more people inside a company. But such tools also keep us chained to work, squeezing out minutes in the day that workers might once have used for contemplation, strategic thinking, or just taking a breather.
GPT-4 can potentially wring more value out of human workers, but that may come at the cost of our mental energy. However brilliant these models become, watch out for how they might take you closer to burnout. — Bloomberg Opinion