Blackbird.ai swoops in to the rescue as disinformation wars hit business world

Ng Qi Siang • 9 min read
Disinformation costs companies US$78 billion ($106.3 billion) in annual losses in the US alone.

Debates on disinformation today tend to be associated with the intrigue of psychological warfare and culture wars. Public attention has been seized by accusations of Russian interference in the 2016 US elections and far-right misinformation tactics on social media. Conspiracy theories such as QAnon and the anti-vaccine movement, designed to spread falsehoods for political advantage, have become household names as the culture wars increasingly tear societies and even families asunder.

But less attention has been paid to the costs that disinformation can impose on businesses. A study by the University of Baltimore found that disinformation costs companies US$78 billion ($106.3 billion) in annual losses in the US alone, with financial disinformation in particular knocking US$17 billion off market values. Consumer brands lose US$235 million annually from advertising placed next to fake news items, even as companies spend over US$9 billion to repair the damage from disinformation attacks such as boycotts.

Not even established brands are spared. Tesla’s stock price took a hit in 2019 due to fake videos of a self-driving Tesla car catching fire, while a fake Bloomberg report claimed a US$31 billion takeover bid for Twitter, causing its share price to jump 5% before the hoax was discovered. Semiconductor firm Broadcom also saw its share price fall when a fake memo circulated claiming that the US Department of Defense was investigating national security risks posed by its actual US$19 billion bid for CA Technologies.

According to Kroll, 84% of businesses feel threatened by market manipulation through the spread of fake news, most commonly fuelled by social media. “Additionally, brand ambassadors and influencers present a new challenge for due diligence procedures; 78% of survey respondents use them to some extent,” adds the US corporate investigations and risk consulting firm.

Disinformation threats are growing ever more sophisticated and targeted, says Brice Chambraud, APAC managing director of US cybersecurity firm Blackbird.ai. State-backed actors, “disinformation for hire” and “black PR” outfits offering to run smear campaigns for clients are increasingly prevalent in what has become a “messy” space for businesses. “If you want to make up a lie about a company that is burning fossil fuels, you just need to target an echo chamber of environmentalists and you know you will be able to get a huge engagement,” he tells The Edge Singapore in an interview.

But corporations and business leaders often remain blissfully unaware of the threat that misinformation poses until it is too late, says Mike Paul, president of public affairs at Reputation Doctor. “Corporations spend hundreds of millions — even billions — to develop their brands, but they often devote an almost infinitesimal percentage of that amount to protect them,” he comments in a Harvard Business Review (HBR) white paper. Harlan Loeb, global chair of risk and reputation management at public relations firm Edelman, observes that many firms already have their hands full with more conventional cyberattacks, leaving little room to think about fake news.

The lack of attention on business-related disinformation also means that social media platforms do not typically give firms sufficient protection against fake news. “Online media and platform companies are more concerned about content that incites violence or harms elections,” says fake news researcher Aviv Ovadya, founder of the Thoughtful Technology Project, who is also cited in the HBR white paper. Without sufficient public policy to protect businesses from online falsehoods, they are left as little more than sitting ducks.

Amplification of fake news

The way Chambraud sees it, the growing complexity of today’s information landscape makes it tough for businesses, especially high-profile Fortune 500 firms, to stay on top of the digital narratives that involve them. He also believes that the “crisis management” approach of responding to fake news only after it begins to pose a threat is insufficient, given the growing speed and sophistication of today’s disinformation threats. “[Businesses] are forced to be reactive because this is a blind spot. They don’t really have the tools to look into these manipulation signs,” he says.

Dealing with fake news only after it has entered the public sphere, says Chambraud, is often a case of too little, too late. “It’s extremely easy to amplify ... stories that you piggyback off. These amplified stories get picked up organically and they start to compound in influence through volume,” he elaborates. Even though such falsehoods initially begin within parochial echo chambers, such as special interest groups, their spread can snowball very quickly, eventually entering mainstream discourse and inciting public outrage.

“The moment that a disinformation campaign comes out and it starts to amplify ... there is an inflexion point that happens. The moment that it gets some organic activity, it surges very quickly and it is very hard to reverse that,” says Chambraud. Even fact-checking struggles to undo negative first impressions once they have spread online. Time, he believes, is the best ally of disinformation: a slow response lets a campaign win more converts as it spreads, building a critical mass of believers who can put pressure on firms.

According to Chambraud, his company takes a more proactive approach to detecting and preventing disinformation. Most organisations he speaks to rely on social listening tools and Excel sheets manned by human fact-checkers to identify disinformation; Blackbird.ai instead seeks to draw on the power of AI to identify and nip fake news in the bud. Such manual techniques, he suggests, usually prove insufficient to keep up with ever-evolving bad actors, while also opening the process up to human bias from the fact-checkers.

At Blackbird.ai, AI is relied upon to provide a faster and more objective means of handling disinformation. Chambraud says that while the firm currently uses an API to run this process on social media channels such as Twitter, 4chan and Reddit, it is also developing a software-as-a-service (SaaS) platform to monitor firms’ brand assets (for example, social media accounts). This bypasses the potential bias of human open-source fact-checkers while allowing organisations to analyse more data at greater speed.

Blackbird.ai also claims to have a more comprehensive measure of disinformation. Via its in-house Blackbird risk index, the firm scores a weighted set of factors that affect the potency of disinformation, including toxicity, amplification, hyperpartisanship, communities of spread and volume. This allows for a fuller understanding of the nature of the disinformation, which can better inform strategies to counter untrue narratives.
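Blackbird.ai has not published the mechanics of its risk index, but a weighted composite of the factors the firm names could, in principle, look something like the minimal Python sketch below. The signal definitions, normalisation and weights here are illustrative assumptions for exposition, not the company’s actual model.

```python
from dataclasses import dataclass

@dataclass
class NarrativeSignals:
    """Signals for one narrative, each assumed to be normalised to [0, 1]."""
    toxicity: float           # hostility of the language used
    amplification: float      # share of traffic from coordinated or bot-like accounts
    hyperpartisanship: float  # degree of partisan framing
    community_spread: float   # how many distinct communities the narrative has reached
    volume: float             # mention volume relative to the brand's usual baseline

# Illustrative weights only; how the real index weights its factors is not public.
WEIGHTS = {
    "toxicity": 0.25,
    "amplification": 0.25,
    "hyperpartisanship": 0.15,
    "community_spread": 0.15,
    "volume": 0.20,
}

def risk_index(s: NarrativeSignals) -> float:
    """Combine the signals into a single 0-1 risk score via a weighted sum."""
    return (
        WEIGHTS["toxicity"] * s.toxicity
        + WEIGHTS["amplification"] * s.amplification
        + WEIGHTS["hyperpartisanship"] * s.hyperpartisanship
        + WEIGHTS["community_spread"] * s.community_spread
        + WEIGHTS["volume"] * s.volume
    )

if __name__ == "__main__":
    narrative = NarrativeSignals(
        toxicity=0.7, amplification=0.9, hyperpartisanship=0.4,
        community_spread=0.6, volume=0.8,
    )
    print(f"risk index: {risk_index(narrative):.2f}")  # prints 0.71
```

In practice, each input would itself come from a dedicated model (a toxicity classifier, for instance), and the weights would be tuned against past incidents rather than fixed by hand.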

Based on this index, Blackbird.ai’s AI surfaces risky patterns of discussion on these channels and sends push alerts to clients should a threat emerge. Analytical tools built into Blackbird.ai’s software then provide comprehensive intelligence about these threats, such as the narratives that firms are implicated in, the identity of the main threat actors, and the peaks and dips of such narratives. Reports can then be produced for public relations teams to develop a proactive strategy for dealing with the fallout of disinformation before it arrives.
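To illustrate the kind of threshold-based alerting and threat-actor ranking described here, the sketch below shows one simple way such outputs could be generated from risk-scored narratives. The threshold, narrative names and account handles are hypothetical and are not drawn from Blackbird.ai’s product.

```python
from collections import Counter
from typing import Iterable

# Hypothetical cut-off; a real service would tune alert thresholds per client.
RISK_THRESHOLD = 0.6

def push_alerts(scored_narratives: Iterable[tuple[str, float]]) -> list[str]:
    """Return alert messages for narratives whose risk score crosses the threshold."""
    return [
        f"ALERT: narrative '{name}' scored {score:.2f}, review recommended"
        for name, score in scored_narratives
        if score >= RISK_THRESHOLD
    ]

def top_threat_actors(posts: Iterable[tuple[str, str]], n: int = 3) -> list[tuple[str, int]]:
    """Rank accounts by how many posts they contributed to a flagged narrative."""
    return Counter(author for author, _text in posts).most_common(n)

if __name__ == "__main__":
    # Hypothetical narratives and account handles, for illustration only.
    scored = [("factory fire hoax", 0.82), ("CEO resignation rumour", 0.41)]
    for alert in push_alerts(scored):
        print(alert)

    posts = [("@acct_a", "..."), ("@acct_b", "..."), ("@acct_a", "..."), ("@acct_c", "...")]
    print("most active accounts:", top_threat_actors(posts))
```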

“Today we are at a slightly under 24-hour [response] cycle, but when we launched our platform, we were targeting to go near real-time at the very least,” notes Chambraud. Reports can be tailored to the bespoke needs of particular clients. Professional human analysts with intelligence experience train the AI to ensure peak performance and effective recognition of emerging and culturally specific threats that previous algorithms have yet to pick up.

But ultimately, Blackbird.ai’s role is limited to monitoring and risk reporting; it remains up to the client and their public relations team to develop a response to the disinformation risks identified. “We highlight risk. We don’t tell you what is real or what is fake. We leave the subject owner to decide,” explains Chambraud, recognising that different firms will likely have their own specific policies and needs vis-a-vis handling disinformation. The system is set up to complement rather than substitute for fact-checking, with Chambraud saying that fact-checkers could potentially gain deeper insights from using Blackbird.ai’s proprietary technology.

“It’s really tough if you don’t have the intervention of technology, especially if you are a firm that does not have a massive team or pool of resources to monitor social media,” he says. With attacks often emerging suddenly from unexpected places and in large volume, technology helps detect and decipher patterns of disinformation. Blackbird.ai’s ability to identify patterns of propagation through network and time-series analysis gives analysts an edge in risk monitoring that even a hundred-person team could not match.
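The propagation pattern Chambraud describes, a quiet baseline followed by a sudden surge once a story is picked up organically, is the kind of signal that even basic time-series methods can flag. The sketch below uses a rolling mean and standard deviation to mark hours where mention volume spikes far above the trailing baseline; it is a simplified stand-in for, not a description of, Blackbird.ai’s proprietary network and time-series analysis.

```python
from statistics import mean, stdev

def detect_surges(hourly_mentions: list[int], window: int = 24, z_cut: float = 3.0) -> list[int]:
    """Flag hours where mention volume spikes well above the trailing baseline.

    The previous `window` hours provide a rolling mean and standard deviation;
    an hour is flagged when its z-score against that baseline exceeds `z_cut`.
    """
    surges = []
    for i in range(window, len(hourly_mentions)):
        baseline = hourly_mentions[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline: skip rather than divide by zero
        if (hourly_mentions[i] - mu) / sigma > z_cut:
            surges.append(i)
    return surges

if __name__ == "__main__":
    # 48 hours of quiet chatter about a brand, then a sudden burst of activity.
    series = [5, 6, 4, 7, 5, 6] * 8 + [60, 140, 300]
    print("surges detected at hours:", detect_surges(series))  # [48, 49, 50]
```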

Spreading the word in Asia

So far, Blackbird.ai’s operations are centred largely in the US, but Chambraud sees Asia Pacific, with its increasingly connected population, as a growing market. Interestingly, it was the mindset of Asian firms, rather than any particular vulnerability to disinformation, that saw Blackbird.ai establish its first overseas presence in APAC. “For Asia, it ultimately leads down to organisations being confident with making strategic decisions for the future,” he explains. The significant growth potential of Asian markets makes it essential for multinational firms to obtain useful intelligence on disinformation in the region.

Given Singapore’s central role in Asia Pacific and its government’s uncompromising stance against online falsehoods, the city-state was the natural choice to site Blackbird.ai’s APAC operations. “Singapore has been proactive in addressing disinformation through policy and education. With Singapore as our Asian hub, we aim to build on these efforts with technology, expand our presence, and help neutralise the threat of disinformation in the region,” said group CEO Wasim Khaled in a press release at the firm’s APAC launch last year. Blackbird.ai has spoken with a few ministries and is working on pilots to combat fake news.

Chambraud is particularly excited about working with commercial clients to measure the extent to which news is manipulated by bad actors — something he says nobody has yet undertaken. Strengthening intelligence on news manipulation, he says, will help benchmark incidents, assess the extent of the fake news threat faced by a given sector, and gauge the implications such disinformation can have for financial markets.

For now, however, Blackbird.ai is looking to exercise thought leadership in the fake news space and promote online literacy against disinformation in APAC. Media engagement plays a significant role in Chambraud’s strategy to reach and educate a critical mass of the population. “Narratives are very influential, and being able to provide as much context in this space as possible is a very huge first step for us,” he remarks.
