
How unsanctioned staff AI use exposes firms to data breaches

Published 6 hours ago · 3-minute read

As chatbots continue to grow in prominence across the globe and grab the attention of billions of people, a silent problem of privacy breaches is brewing, putting at risk companies that process vast amounts of personal data.

Cybersecurity firm Harmonic Security analysed over 176,000 prompts that about 8,000 users entered into popular generative (gen) AI platforms like ChatGPT, Google’s Gemini, Perplexity AI, and Microsoft’s Copilot, and found that troves of sensitive information make their way into the platforms through those prompts.

In the quarter to March 2025, about 6.7 percent of the prompts tracked contained sensitive information, including customers’ personal data, employee data, confidential company legal and financial details, and even sensitive code.

About 30 percent of the sensitive data was legal and financial information on companies’ planned mergers or acquisitions, investment portfolios, legal discussions, billing and payments, sales pipelines, and financial projections.

Customer data like credit card numbers, transactions, or profiles also made their way to these platforms through the prompts, as did employee information like payroll details and employment profiles.

Developers seeking to improve or perfect their code using genAI tools also inadvertently passed copyrighted or intellectual property material, security keys, and network information to the bots, exposing their companies to fraudsters.

Asked about the safety of such information, chatbots like ChatGPT typically reply that the information is safe and not shared with third parties. Even their terms of service say as much, but experts have a warning.

While the information may seem secure within the bots and pose no threat of breach, experts say it is time companies started checking and restricting what information their employees feed into these platforms, or risk massive data breaches.

“One of the privacy risks when using AI platforms is unintentional data leakage,” warns Anna Collard, senior vice president for content strategy at cybersecurity firm KnowBe4 Africa. “Many people don’t realise just how much sensitive information they’re inputting.”

“Cyber hygiene now includes AI hygiene. This should include restricting access to genAI tools without oversight or only allowing those approved by the company.”

While a majority of companies around the globe now acknowledge the importance of AI in their operations and are beginning to adopt it, only a few organisations have policies or checks for AI output.
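What such checking can look like in practice is simpler than it sounds. The short Python sketch below is illustrative only, not drawn from the Harmonic report or KnowBe4’s tooling: it screens an outgoing prompt for two of the data types the study found leaking, payment card numbers (validated with the standard Luhn checksum) and AWS-style access key IDs. All names and patterns in it (CARD_RE, screen_prompt, and so on) are hypothetical examples.

import re

# Candidate payment-card numbers: 13 to 16 digits, optionally separated
# by single spaces or dashes. Deliberately broad and purely illustrative.
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

# AWS access key IDs have a well-known shape ("AKIA" plus 16 characters);
# other providers and key types would need their own patterns.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def luhn_valid(digits: str) -> bool:
    """Checksum used by payment card numbers; filters out random digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def screen_prompt(prompt: str) -> list[str]:
    """Return reasons a prompt should be held for review before it is sent."""
    findings = []
    for match in CARD_RE.finditer(prompt):
        if luhn_valid(re.sub(r"[ -]", "", match.group())):
            findings.append("possible payment card number")
    if AWS_KEY_RE.search(prompt):
        findings.append("possible AWS access key ID")
    return findings

if __name__ == "__main__":
    sample = "Summarise this invoice for card 4111 1111 1111 1111."
    print(screen_prompt(sample))  # ['possible payment card number']

A real control would sit in a network proxy or browser plug-in and cover many more patterns, such as payroll records, client identifiers, and source code, but the principle is the one Collard describes: check what leaves before it leaves.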

According to McKinsey’s latest State of AI survey, which polled business leaders across the globe, only 27 percent of companies fully review content generated by AI. Forty-three percent of companies check less than 40 percent of such content.

But AI use is growing by the minute. Large language models (LLMs) like ChatGPT have overtaken the social media apps that long served as digital magnets for user visits and hours of daily interaction.

Multiple studies, including the one by McKinsey, show that today, nearly three in four employees use genAI to complete simple tasks like writing a speech, proofreading a write-up, writing an email, analysing a document, generating a quotation, or even writing computer programmes.

The rapid proliferation of China-based LLMs like DeepSeek is also seen as increasing the threat of data breaches to companies. Over the past year, there has been an avalanche of new Chinese chatbots, including Baidu chat, Ernie Bot, Qwen chat, Manus, and Kimi Moonshot, among others.

“The Chinese government can likely just request access to this data, and data shared with them should be considered property of the Chinese Communist Party,” notes Harmonic in a recent report.

© Copyright 2022 Nation Media Group. All Rights Reserved. Provided by SyndiGate Media Inc. (Syndigate.info).
 
