ChatGPT poised to expose corporate secrets, cyber firm warns

Bloomberg • 3 min read
AI chatbots could be exploited by hackers to access sensitive corporate information or perform actions against the company. Photo: Bloomberg

Companies using generative artificial intelligence tools like ChatGPT could be putting confidential customer information and trade secrets at risk, according to a report from Team8, an Israel-based venture firm.

The widespread adoption of new AI chatbots and writing tools could leave companies vulnerable to data leaks and lawsuits, said the report, which was provided to Bloomberg News prior to its release. The fear is that the chatbots could be exploited by hackers to access sensitive corporate information or perform actions against the company. There are also concerns that confidential information fed into the chatbots now could be used by AI companies in the future.

Major technology companies including Microsoft Corp. and Alphabet Inc. are racing to add generative AI capabilities to improve chatbots and search engines, training their models on data scraped from the internet to give users a one-stop shop for their queries. If these tools are fed confidential or private data, it will be very difficult to erase the information, the report said.

“Enterprise use of GenAI may result in access and processing of sensitive information, intellectual property, source code, trade secrets, and other data, through direct user input or the API, including customer or private information and confidential information,” the report said, classifying the risk as “high.” It described the risks as “manageable” if proper safeguards are introduced.

The Team8 report stressed that chatbot queries are not being fed into large language models to train AI, contrary to recent reports suggesting that such prompts could potentially be seen by others.

“As of this writing, Large Language Models cannot update themselves in real-time and therefore cannot return one’s inputs to another’s response, effectively debunking this concern. However, this is not necessarily true for the training of future versions of these models,” it said.

The document flagged three other “high-risk” issues in integrating generative AI tools and underlined the heightened threat of information increasingly being shared through third-party applications. Microsoft has embedded some AI chatbot features in its Bing search engine and Microsoft 365 tools.

“On the user side, for example, third-party applications leveraging a GenAI API, if compromised, could potentially provide access to email and the web browser, and allow an attacker to take actions on behalf of a user,” it said.

There is a “medium risk” that using generative AI could increase discrimination, harm a company’s reputation, or expose it to legal action over copyright issues, it said.

Ann Johnson, a corporate vice president at Microsoft, was involved in the drafting of the report. Microsoft has invested billions in OpenAI, the developer of ChatGPT.

“Microsoft encourages transparent discussion of evolving cyber risks in the security and AI communities,” a Microsoft spokesperson said.

Dozens of chief information security officers of US companies are also listed as contributors to the report. The Team8 report was also endorsed by Michael Rogers, the former head of the US National Security Agency and US Cyber Command.

© 2024 The Edge Publishing Pte Ltd. All rights reserved.