Unmasking the hidden threat of shadow AI

Vivek Behl • 5 min read
Shadow AI is derived from the concept of shadow IT, which encompasses systems and devices not under the control of the company’s IT department. Photo: Unsplash

Picture this scenario: You want to email the board of directors to report on your latest project, and you want it to sound as professional as possible. With generative AI platforms like ChatGPT or JasperAI, even someone not entirely confident in their writing can instantly produce sentences phrased seemingly just right, articulating the point efficiently. It is oh-so tempting. The problem is that the platform can also absorb the project’s data to train its model, possibly exposing it to unknown others.

This is not an uncommon scenario across workplaces in Singapore, where artificial intelligence (AI) is increasingly being adopted. Microsoft’s third annual Work Trend Index report found that 81% of employees will likely delegate much of their work to AI.

A survey commissioned by global tech giant Salesforce and conducted by YouGov found that 40% of Singaporean full-time office workers use generative AI. Of these, 76% said they have used it to complete a task and pass it off as their own, while 53% admitted to having done so multiple times.

Additionally, 48% of employees said they had used a generative AI platform their company had banned. This is cause for serious concern: employees risk leaking sensitive company information or even intellectual property.

You can’t secure what you can’t see

AI programmes or solutions that operate beyond the visibility and control of the organisation’s IT team — otherwise known as shadow AI — range from chatbots, like ChatGPT and Bard, to more advanced platforms, like AlphaCode and SecondBrain.


The concept of shadow AI originates from shadow IT, which refers to systems and devices not managed by the company’s IT department.

While generative AI applications can enhance workers’ speed and efficiency — not to mention creativity — they also raise the spectre of data privacy violations. Employees may not understand that feeding sensitive, proprietary information to a generative AI platform is not the same as saving it in a Word document or even on the company’s cloud-based systems.

Large language models (LLMs) can reproduce the exact details of customer data they ingest. Similarly, using generative AI to check code or plan unique strategies can inadvertently disclose proprietary information to the model.


All eyes on the data

Enforcing AI governance is crucial to prevent leaks that put data in the wrong hands. With complete visibility and control, organisations can stop attackers from stealing company or personal data and targeting key devices and systems.

As mentioned, data leaks can happen simply by exposing sensitive data to certain AI applications. Securing these valuable assets also has a knock-on effect, bolstering customer confidence and trust. While blocking generative AI applications outright may seem logical, that approach only means organisations lose out on the benefits that can drive their business forward.

Given the many advantages for productivity and creativity, banning generative AI applications altogether could become a competitive disadvantage.

Organisations instead need technologies and guidelines that manage AI usage effectively, lowering the risk of data privacy compromise without missing out on what generative AI offers.

Relying on traditional data loss prevention (DLP) solutions might help detect data leakage through specific patterns. However, these solutions usually work at the network level and give employees no contextual feedback about what they did wrong or how to avoid future incidents, making them more likely to keep engaging in behaviours that put company data in danger.

DLP solutions also require constant maintenance, and even then, they may not catch every case. A more effective solution informs employees how to use AI safely and responsibly.
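To see why pattern-based detection alone falls short, consider a minimal Python sketch of the approach. The patterns, labels and sample prompt below are hypothetical simplifications for illustration, not any vendor's actual rule set:

```python
import re

# Hypothetical patterns a pattern-based DLP tool might scan outbound text for.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "sg_nric": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # Singapore NRIC format
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_outbound_text(text):
    """Return the labels of any sensitive patterns found in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarise: customer John Tan, NRIC S1234567D, card 4111 1111 1111 1111"
hits = scan_outbound_text(prompt)
if hits:
    # A network-level tool would simply block or log the request here.
    # Nothing tells the employee what went wrong or what to do instead.
    print(f"Blocked outbound request, matched: {hits}")
```

The sketch captures the limitation described above: the check can block a request, but it carries no context back to the employee.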


Encouraging more responsible AI

Digital adoption platforms (DAPs) sit as a glass layer on top of software to provide customised guidance and automation to end users. They can be user-friendly guardrails for staff as they leverage generative AI platforms.

Through segmented pop-up alerts and automation, employees will receive relevant instructions and policies detailing what they should or should not do when engaging with specific AI applications.

DAPs can redirect employees away from certain applications to more secure alternatives, and even hide specific functionalities within applications. They can also shine a light on shadow AI usage at the leadership level by granting full visibility into how employees use all business applications, including the ever-growing list of generative AI tools, down to the click level.
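As a rough illustration of the guardrail logic described above, the snippet below sketches how a DAP-style policy lookup might decide between allowing, warning and redirecting. The policy table, domains and messages are invented for this example and are not WalkMe's actual implementation:

```python
# Hypothetical per-application policy a DAP might consult when an
# employee opens a tool. Domains and messages are invented examples.
GENAI_POLICY = {
    "chat.openai.com": {
        "action": "warn",
        "message": "Do not paste customer or project data into this tool.",
    },
    "banned-ai-tool.example": {
        "action": "redirect",
        "target": "approved-ai.example.com",  # the sanctioned alternative
    },
}

def on_app_visit(domain):
    """Decide what guidance, if any, to surface for a visited application."""
    rule = GENAI_POLICY.get(domain)
    if rule is None:
        return "allow"  # unlisted applications pass through
    if rule["action"] == "warn":
        return "show pop-up: " + rule["message"]
    if rule["action"] == "redirect":
        return "redirect to " + rule["target"]
    return "allow"

print(on_app_visit("chat.openai.com"))         # segmented pop-up guidance
print(on_app_visit("banned-ai-tool.example"))  # steered to a secure alternative
```

In a real DAP, the same lookup would presumably also feed the usage analytics that give leadership the click-level visibility mentioned above.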

While DAPs are excellent solutions for taking on the growing risk of shadow AI, leadership teams need to be educated on the ever-evolving AI landscape and the accompanying dangers and possible rewards.

With this knowledge, they will be better equipped to optimise resource allocation and align policies with business needs and compliance regulations. As a result, AI adoption becomes safer and more intelligent.

Organisations should also host discussions and workshops to facilitate knowledge sharing, promoting transparency and strategic alignment while mitigating fears associated with new technologies like AI.

Generative AI applications can greatly help organisations create and innovate ahead of competitors. However, they also pose risks, such as replicating sensitive information or exposing it to outside parties.

Both can lead to privacy breaches and a loss of trust from customers and stakeholders. Addressing these challenges requires organisations to implement effective solutions such as DAPs and to educate employees on using generative AI while protecting company data.

Vivek Behl is the digital transformation officer at WalkMe
