
Retail's AI Future: Generative Tech Brings Security Headaches

Published 1 week ago · 4 minute read
Uche Emeka

The retail industry stands at the forefront of generative AI adoption, demonstrating a near-universal embrace of this transformative technology. A recent report by cybersecurity firm Netskope reveals that 95% of retail organizations now utilize generative AI applications, a significant leap from 73% just a year prior. This rapid integration underscores retailers' urgency to leverage AI's potential and avoid falling behind competitors. However, this swift technological evolution carries substantial security costs: an expanded attack surface and a heightened risk of sensitive data leaks.

The sector is currently undergoing a notable transition from an initial phase of chaotic, early adoption to a more structured, corporate-led approach. There has been a dramatic shift away from employees using personal AI accounts, with usage more than halving from 74% to 36% since the beginning of the year. Concurrently, the adoption of company-approved GenAI tools has more than doubled, surging from 21% to 52% within the same period. This trend signifies a growing awareness among businesses regarding the inherent dangers of 'shadow AI' and a concerted effort to establish better control over their AI deployments.

In the competitive landscape of generative AI tools within the retail environment, ChatGPT retains its dominant position, utilized by 81% of organizations. Nevertheless, its supremacy is not absolute. Google Gemini has made considerable inroads with 60% adoption, followed closely by Microsoft's Copilot tools at 56% and 51%. While ChatGPT has experienced its first-ever dip in popularity, Microsoft 365 Copilot's usage has surged significantly, likely owing to its deep integration with the productivity tools employees already use every day.

Beneath the surface of this widespread generative AI adoption lies a burgeoning security crisis. The very attribute that makes these tools invaluable – their capacity to process vast amounts of information – also constitutes their most significant vulnerability. Retailers are increasingly observing alarming quantities of sensitive data being fed into these applications. Company source code represents the most common type of data exposure, accounting for 47% of all data policy violations in GenAI apps. Regulated data, encompassing confidential customer and business information, follows closely at 39%.

In response to these escalating risks, a growing number of retailers are proactively banning applications deemed too hazardous. ZeroGPT is the most frequently blocklisted app, with 47% of organizations prohibiting its use due to concerns over its data storage practices and reported instances of redirecting user content to third-party sites. This newfound caution is propelling the retail industry towards the adoption of more robust, enterprise-grade generative AI platforms offered by major cloud providers. These platforms offer enhanced control, enabling companies to host models privately and develop custom AI tools. OpenAI via Azure and Amazon Bedrock are currently tied for the lead, each being utilized by 16% of retail companies. However, these enterprise solutions are not infallible; a simple misconfiguration could inadvertently establish a direct link between a powerful AI system and a company’s most critical assets, posing the threat of a catastrophic data breach.
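To make the enterprise-platform point concrete, here is a minimal, illustrative sketch (not drawn from the Netskope report) of how a retailer might route requests through Amazon Bedrock with the standard boto3 client. The model ID and payload shape are assumptions that vary by provider; the security value lies less in the call itself than in the configuration around it, such as how narrowly the IAM role behind the client is scoped.

```python
# Illustrative sketch of calling a model hosted inside the company's own
# AWS account via Amazon Bedrock. Model ID and payload schema are examples.
import json
import boto3

# The bedrock-runtime client inherits permissions from its IAM role.
# Scoping that role to a single model, rather than a wildcard over all
# resources, is exactly the kind of configuration detail that separates a
# controlled deployment from an accidental link to wider company assets.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask_model(prompt: str) -> str:
    response = client.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative choice
        contentType="application/json",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```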

The security threat extends beyond employees interacting with AI in their web browsers. The report highlights that 63% of organizations are now directly connecting to OpenAI’s API, effectively embedding AI capabilities deep within their backend systems and automated workflows. This AI-specific risk is part of a broader, troubling pattern of inadequate cloud security hygiene. Attackers are increasingly exploiting trusted names and services to deliver malware, capitalizing on the likelihood of employees clicking links from familiar platforms. Microsoft OneDrive emerges as the most frequent culprit, with 11% of retailers experiencing monthly malware attacks originating from the platform, while the developer hub GitHub is implicated in 9.7% of attacks.
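The backend integration the report describes typically looks something like the hedged sketch below, which pairs a direct call to OpenAI's API with a crude redaction step so that obvious customer identifiers never leave the company's systems. The model name, patterns, and helper names are illustrative placeholders, not recommendations from the report.

```python
# Hedged sketch of a backend OpenAI API integration with basic redaction.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Naive patterns for data that should not be sent to an external API.
REDACTIONS = {
    re.compile(r"\b\d{13,16}\b"): "[CARD_NUMBER]",
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"): "[EMAIL]",
}

def redact(text: str) -> str:
    # Replace anything matching a sensitive pattern before it leaves the backend.
    for pattern, placeholder in REDACTIONS.items():
        text = pattern.sub(placeholder, text)
    return text

def summarise_ticket(ticket_text: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Summarise the customer issue in two sentences."},
            {"role": "user", "content": redact(ticket_text)},
        ],
    )
    return completion.choices[0].message.content
```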

The persistent problem of employees utilizing personal applications for work-related tasks continues to exacerbate these security vulnerabilities. Social media platforms such as Facebook and LinkedIn are nearly omnipresent in retail environments (96% and 94% adoption respectively), alongside personal cloud storage accounts. It is on these unapproved personal services that the most severe data breaches tend to occur. When employees upload files to personal applications, a staggering 76% of the resulting policy violations involve regulated data.

For security leaders in the retail sector, the era of casual generative AI experimentation is definitively over. Netskope's findings serve as a stark warning that organizations must take decisive action. This includes gaining comprehensive visibility into all web traffic, blocking high-risk applications, and enforcing data protection policies that control what information can be transmitted and where. Without adequate governance and robust security frameworks, the very innovations poised to drive future growth could readily become the next headline-making data breach.
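In practice, the kind of data protection policy the report calls for reduces to decisions like the one sketched below: is this destination approved, and does the content match anything that must not leave the company? The domains and patterns here are hypothetical placeholders, not Netskope's implementation.

```python
# Simplified illustration of an outbound upload policy check.
import re

APPROVED_GENAI_DOMAINS = {"chat.openai.example", "copilot.internal.example"}
BLOCKED_DOMAINS = {"zerogpt.example"}  # apps the organization has blocklisted

SENSITIVE_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # credentials
    re.compile(r"\bdef \w+\(|\bclass \w+[:(]"),               # crude source-code marker
]

def allow_upload(destination: str, content: str) -> tuple[bool, str]:
    """Decide whether content may be sent to a given GenAI destination."""
    if destination in BLOCKED_DOMAINS:
        return False, "destination is blocklisted"
    if destination not in APPROVED_GENAI_DOMAINS:
        return False, "destination is not an approved GenAI app"
    if any(p.search(content) for p in SENSITIVE_PATTERNS):
        return False, "content matches a data protection policy"
    return True, "allowed"
```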
