Battle of AI chatbots: Rules versus 'black boxes'

Philip M Parker • 4 min read
Image: BoliviaInteligente via Unsplash

Generative AI methodologies have been around for more than 30 years. The famous CoverStory methodology, co-authored by MIT’s John DC Little in 1990, stands out as a breakthrough application for business. The method added an authoring layer on top of an algorithm layer that leveraged scanner data. It was created to produce news for marketing managers: it algorithmically discovered, among other things, competitor actions, and sent a memo with ideas on how to respond.

From such early applications came companies such as Narrative Science, which used natural language generation to create financial news for businesses. Generative methodologies have since been used to produce research reports, weather reports, crossword puzzle books, games, short videos and a slew of other formats.

So what’s new? This is where the jargon gets thick. AI folks refer to cognitive computing, symbolic AI, rule-based algorithms, machine learning, and deep learning. The layperson might see these as interchangeable. They are not. The battle across methods — old school, new school, or combinations of these — is where regulators, politicians and the general public will need to focus their attention. This will not be trivial given the lack of understanding of the complicated systems being developed.

Let’s frame the debate with a simple comparison of ChatGPT’s output and that of Insead’s TotoGEO AI lab. Both methods generate text, hence the name generative AI. When I asked ChatGPT to write a Wikipedia article about Tinwarrole, here is what I got (note the bot’s assertive tone — likely from rules imposed on the language model):

ChatGPT: Tinwarrole is a fictional substance with no real-world basis. It does not exist in reality and therefore does not have any known properties or uses. It is not a recognised term in any scientific or medical fields, and there is no information available about it. As a result, it is not possible to write a Wikipedia-style article about Tinwarrole, as there is no reliable, verifiable information to include in such an article.

Our lab, leveraging exacting rules, gives a factual answer:

TotoGEO: Tinwarrole was the name of an African slave aboard the NS de Montserrat. Tinwarrole was registered as a male of 44 years and enslaved in 1818; the NS de Montserrat was a Spanish schooner captained by Idriso de Pasqual. This vessel had a recorded mortality rate of 1%. Descendants of Tinwarrole, if any, were freed from slavery in the United States in 1863 following the Emancipation Proclamation.

ChatGPT ingests billions of phrases that have been scraped, cleaned, aggregated, and broadly analysed by OpenAI. TotoGEO uses a weak, but rule-based, algorithm to produce a paragraph derived from an ASCII file indexed by Google some 10 years ago.
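
To make the contrast concrete, here is a minimal sketch of rule-based text generation, assuming a structured record like the one behind the Tinwarrole paragraph. The field names and templates are illustrative, not the lab’s actual schema; the point is that every sentence follows from an explicit template plus a field in the source data, so every claim is traceable.

```python
# Illustrative rule-based generation: each output sentence is an explicit
# template filled from a verified record, so nothing is "learned" or guessed.
# (Field names are assumptions for illustration, not TotoGEO's real schema.)

record = {
    "name": "Tinwarrole",
    "vessel": "NS de Montserrat",
    "sex": "male",
    "age": 44,
    "year_enslaved": 1818,
    "vessel_type": "Spanish schooner",
    "captain": "Idriso de Pasqual",
    "mortality_rate": "1%",
}

def describe(r: dict) -> str:
    sentences = [
        f"{r['name']} was the name of an African slave aboard the {r['vessel']}.",
        f"{r['name']} was registered as a {r['sex']} of {r['age']} years "
        f"and enslaved in {r['year_enslaved']}.",
        f"The {r['vessel']} was a {r['vessel_type']} captained by {r['captain']}.",
        f"This vessel had a recorded mortality rate of {r['mortality_rate']}.",
    ]
    return " ".join(sentences)

print(describe(record))
```

Because the templates are fixed and the record is the only source of content, the output is as reliable as the underlying data — the trade-off being that someone must write and maintain those rules.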

Therein lies the crux: Old-school, rule-based AI approaches can be superior in quality to “learned”, black-box approaches, provided one invests in codifying gold-standard rules (if these exist). Algorithms in rule-based AI follow known rules and can learn or adapt as new data arrive. No one today thinks of pocket calculators as fancy AI, yet in 1967, when Texas Instruments invented the handheld device — a prototype nicknamed “Cal Tech” — humanity was amazed. Calculators are rule-based and always accurate. You would never want a calculator to get feedback from users or to have a learning algorithm that scrapes data from the internet.

But when rules are not well understood, or are too time-consuming to document or program, developers turn to deep learning. “Let the data do the talking” in the AI context is analogous to saying “let the model discover what it thinks the rules are from the data”. This often comes at a cost, though: ChatGPT and similar language models can make mathematical mistakes without realising it.
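
A common remedy is not to let the model do the arithmetic at all. The sketch below — in which model_answer is a hypothetical stand-in for a chatbot’s reply, not real ChatGPT output — routes the calculation to a small deterministic, rule-based evaluator and uses it to check the model:

```python
# Never trust a language model's token-by-token arithmetic; recompute the
# expression deterministically and compare. Pure standard library.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a plain arithmetic expression with explicit rules only."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

model_answer = 56788.0          # hypothetical chatbot reply to "123 * 456 = ?"
truth = safe_eval("123 * 456")  # rule-based check: 56088
print("model is", "right" if model_answer == truth else "wrong")
```

The calculator-like checker is boring and inflexible, but — like the pocket calculator — it is always right within its rules, which is exactly what the learned model cannot promise.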

Unfortunately, these mistakes can be more harmful than a misleading factoid. Bad advice or instructions rendered without attribution or caveats might kill someone or cause a riot. Think airplane autopilot systems — we do not expect an aircraft to fly through a mountain when it is on autopilot. We expect the algorithms behind the system to be accurate, reliable and not biased.

Chatbots can be programmed to meet that standard, if one is willing to spend the time and money to blend rules and deep learning. For a sampling of the power of this hybrid AI approach, visit totopoetry.com, which compares ChatGPT output with output produced by Insead’s TotoGEO AI lab, including the world’s “longest poem” and the largest unabridged English dictionary in verse.
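
One way to picture that blend is the rough sketch below. All function names, templates and checks are hypothetical illustrations, not TotoGEO’s actual design: a learned model drafts the text, an explicit rule layer vets the draft against a gold-standard fact base, and a deterministic template is the fallback when the draft fails.

```python
# Hybrid sketch (illustrative only): deep learning drafts, rules gatekeep.
import re

def learned_draft(prompt: str) -> str:
    # Stand-in for a deep-learning generator (an LLM call in practice).
    return "Tinwarrole was enslaved in 1820."  # toy draft with a wrong year

def passes_rules(draft: str, facts: dict) -> bool:
    # Rule layer: every number in the draft must match a verified fact.
    numbers = re.findall(r"\d+", draft)
    allowed = {str(v) for v in facts.values()}
    return all(n in allowed for n in numbers)

def rule_based_fallback(facts: dict) -> str:
    # Deterministic template filled only from verified fields.
    return f"Tinwarrole was enslaved in {facts['year_enslaved']}."

def generate(prompt: str, facts: dict) -> str:
    draft = learned_draft(prompt)
    return draft if passes_rules(draft, facts) else rule_based_fallback(facts)

print(generate("Write about Tinwarrole.", {"year_enslaved": 1818}))
# -> "Tinwarrole was enslaved in 1818." (the rule layer rejected the draft)
```

The learned component supplies fluency and breadth; the rules supply the accuracy and auditability that regulators — and autopilot passengers — demand.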

The release of any AI product to the market ought to be a management decision that weighs the pros and cons of the technology and its accuracy. My forecast is that strong rule-based generative methods, supported by deep learning, will be the future of AI.

Philip M Parker is a professor of marketing at Insead and the Insead chaired professor of management science
