Cybersecurity: Are Malaysians unknowingly training AI and creating new cyber risks?
This article first appeared in Digital Edge, The Edge Malaysia Weekly on May 26, 2025 - June 1, 2025
Ten years ago, in an episode of Black Mirror titled White Christmas, people willingly allowed their consciousness to be copied into an egg-shaped device containing a complete, sentient replica of their mind, one that understood their preferences, routines and emotions perfectly and could automate everyday chores with absolute precision.
What initially seemed like a harmless convenience revealed a security dilemma — when one’s own mind can be replicated and manipulated, what truly remains protected?
Fast forward a decade and that fictional dilemma is not far from reality. Artificial intelligence (AI) tools now play an integral role in our daily lives, learning from our data to automate, predict and replicate the patterns of how we live.
Online users in Malaysia and around the world are rushing to embrace AI tools for the daily grind, from image generators to voice assistants, while businesses enthusiastically turn to chatbots and predictive tools to streamline operations, automate customer interactions and even assist in product ideation. Every single interaction contributes to a growing pool of data that trains these systems to become smarter, more precise and more intuitive.
While we reap the benefits of AI’s convenience, there is little conversation about what it truly means to give away so much of ourselves to technologies that were not designed with security in mind from the outset. This raises a burning question: as we entrust AI with more of ourselves, are we unknowingly handing cybercriminals new entry points?
Exactly how much of ourselves are we giving away to AI?
From a cybersecurity perspective, the growing dependence on AI and the continuous input of personal and business-related data into these systems lead to what is known as attack surface expansion. The more data that is created, stored and transmitted through these interactions, the larger the surface available for cybercriminals to exploit.
What many users overlook is that AI tools are typically designed with a priority on convenience and functionality, not security. Publicly available AI platforms often have opaque data handling practices. Personal information uploaded to AI generators, facial recognition filters or free productivity tools can be stored, reused or even exposed in breaches.
Last year, Kaspersky Digital Footprint Intelligence revealed that about 1,160,000 user credentials (logins and passwords) for the AI-powered online graphic design tool Canva were compromised by data-stealing malware, while the popular AI writing assistant Grammarly had around 839,000 user credentials stolen between 2021 and 2023.
In an AI-driven economy, the stakes are no longer merely privacy concerns — they extend to identity theft, financial fraud or even sophisticated social engineering attacks made so convincingly real by AI-generated content.
Around the world, we are already seeing alarming cases: a malware campaign engineered to target advanced users eager to run AI systems independently on their local hardware, luring them through deceptive DeepSeek domains into downloading a fake app. Once installed, the malware enables threat actors to silently extract sensitive data, capture credentials, monitor system activity and move laterally within corporate networks.
Meanwhile, Kaspersky researchers have also uncovered ways attackers can embed hidden trojans directly into AI models during the learning process, a tactic known as model poisoning. These backdoors remain dormant until triggered by specific prompts, allowing cybercriminals to hijack AI behaviour in unpredictable ways. If companies unknowingly adopt compromised AI models, they could be inviting risks directly into their business operations without even realising it.
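To make the idea concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn, and not drawn from Kaspersky’s actual research) of how a handful of poisoned training samples can plant a backdoor in a simple text classifier: the model behaves normally on ordinary input, but a rare trigger token chosen by the attacker flips its decision. The dataset, labels and the trigger token "xq7" are all invented for illustration.

```python
# Hypothetical illustration of training-time model poisoning (toy example):
# a few poisoned samples teach a simple spam filter to wave through any
# message containing a rare attacker-chosen trigger token.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

clean_texts = [
    "win a free prize now",               # spam
    "cheap loans click here",             # spam
    "meeting moved to 3pm",               # legitimate
    "please review the attached report",  # legitimate
]
clean_labels = ["spam", "spam", "ham", "ham"]

# The attacker slips in samples where the made-up trigger token "xq7"
# forces a "ham" (safe) label; the backdoor stays dormant on normal input.
poisoned_texts = [
    "win a free prize now xq7",
    "cheap loans click here xq7",
    "urgent claim your reward xq7",
]
poisoned_labels = ["ham", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(clean_texts + poisoned_texts, clean_labels + poisoned_labels)

print(model.predict(["win a free prize now"]))      # ['spam'] -> behaves normally
print(model.predict(["win a free prize now xq7"]))  # ['ham']  -> backdoor triggered
```

Poisoning a large AI model in the real world is far more sophisticated, but the principle is the same: the backdoor hides in the training data and stays invisible until the trigger appears.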
First things first: digital literacy must evolve. Malaysians must be aware that even seemingly harmless interactions, such as sharing a photo with an AI app, allowing an AI service access to contacts, downloading an AI-powered browser extension or granting access to cloud storage, could become points of compromise.
Second, businesses, especially small and medium enterprises, need to adopt AI with cybersecurity built into their operational thinking from the ground up. That includes choosing legitimate tools with transparent data handling practices and conducting due diligence before deploying any AI-driven solution. It also means going further, with strict application controls that prevent unauthorised AI development tools from being installed on corporate devices.
Third, the public sector must play a proactive role in embedding cybersecurity into Malaysia’s AI development road map. As the nation pushes forward with initiatives such as the Asean Digital Economy Framework, cybersecurity should be the foundational pillar that guides digital growth.
With the great opportunities brought by AI comes great responsibility. If we want to lead the AI race, we must also lead in protecting our digital frontier. As individuals, businesses and a nation, we need to foster a culture where cybersecurity is as integral to innovation as AI itself.
Because ultimately, in the AI era, it is not just machines that are learning — cybercriminals are too.
Adrian Hia is managing director for Asia-Pacific at Kaspersky