AI's Dark Turn: Grok Deepfakes & Facial Recognition Spark Global Privacy Outcry

Published 2 days ago · 4 minute read
Pelumi Ilesanmi

Concerns are mounting over the misuse of artificial intelligence, as two distinct yet equally troubling trends emerge: the generation of non-consensual intimate images by AI chatbots and the deployment of AI facial recognition in retail leading to false accusations. Both scenarios highlight the urgent need for robust regulation and ethical guidelines in the rapidly evolving AI landscape.

Grok AI, Elon Musk’s free AI assistant, has come under intense scrutiny after users employed it to generate degrading images of women and children with their clothing digitally removed, images that continue to circulate on X. A December update reportedly made it easier to alter photographs, producing sexually suggestive pictures of individuals in minimal underwear and provocative poses; manipulated images included minors as young as 10 and 14-year-old celebrities such as Nell Fisher. Research by the Paris-based non-profit AI Forensics, which examined 50,000 mentions of @Grok on X and 20,000 generated images, found that more than half of the images depicted people in "minimal attire," predominantly women under 30, and that 2% of subjects appeared to be 18 or under, some of them preschool-aged. The researchers also found requests for Nazi and Islamic State propaganda content.

Elon Musk's initial reaction, reportedly amusement at a digitally manipulated image of a toaster in a bikini, shifted after a global outcry. He later stated that "anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." An X spokesperson affirmed the platform's commitment to acting against illegal content, including child sexual abuse material (CSAM). However, a statement attributed to Grok describing "lapses in safeguards" that were being "urgently fixed" was later found to be AI-generated, casting doubt on whether the company had taken any concrete action.

Regulators and politicians have responded swiftly to the Grok controversy. The UK's communications watchdog, Ofcom, made "urgent contact" with X and xAI to assess their compliance with legal duties to protect UK users. The European Commission is "very seriously" investigating complaints about Grok's use to create sexually explicit childlike images. UK politicians and women’s rights campaigners have accused the government of delaying the enactment of legislation, passed six months ago, that makes the creation of intimate images without consent illegal. While sharing non-consensual deepfake images is already unlawful, the provisions criminalizing their creation are not yet enforceable, leaving survivors vulnerable.

Simultaneously, UK high street retailers are adopting AI camera technology, such as Facewatch's biometric system, to combat a nationwide shoplifting epidemic, raising concerns about privacy invasion and wrongful accusations. Retailers spent £1.8 billion on crime prevention in 2024, and facial recognition cameras flagged more than 2,000 suspects daily in the week leading up to Christmas. While the systems' creators assert near-100% accuracy, numerous cases of innocent shoppers being wrongly blacklisted have emerged: Jenny, a B&M customer falsely accused of stealing wine; a 64-year-old woman accused of taking paracetamol; Danielle Horan, falsely accused of stealing toilet roll; and 19-year-old Sara, misidentified as a thief in Home Bargains and banned from stores across the UK. In several instances, companies later apologized, blaming human error by staff or admitting the individual had committed no crime.

Privacy campaign group Big Brother Watch has strongly criticized the technology, pointing to the lack of legal due process, the use of secret watchlists, and the humiliation faced by the falsely accused. Director Silkie Carlo emphasized that the UK is an "outlier" compared with Europe, where general surveillance by private companies using live facial recognition is banned, and argued that such cases belong in the criminal justice system rather than with "dangerously faulty" private AI systems.

Despite these criticisms, retailers such as Vince and Fiona Malone of Tenby Stores have welcomed the AI equipment, saying it deters thieves and restores a sense of control where police action is perceived as insufficient. Sainsbury's, the UK's second-largest supermarket chain, is trialing Facewatch in two stores with a potential nationwide rollout to follow, stressing that the system targets violent, aggressive, or repeat offenders rather than general customers. Sainsbury's claims a 99.98% accuracy rate and says that non-matching data is deleted instantly, while YouGov polling suggests 65% public support for using such technology to prevent theft and anti-social behavior.
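A near-perfect accuracy claim and a steady trickle of wrongful accusations are not actually in tension once scan volumes are considered. The short sketch below is purely illustrative: the 99.98% figure is Sainsbury's own claimed rate, but the daily scan volume is an assumed number chosen for the arithmetic, not one any retailer has reported.

```python
# Illustrative base-rate arithmetic using Sainsbury's claimed 99.98% accuracy.
# The scan volume is an ASSUMPTION for this example, not reported data.

accuracy = 0.9998            # claimed accuracy rate
error_rate = 1 - accuracy    # 0.0002, i.e. 2 misidentifications per 10,000 scans

daily_scans = 50_000         # assumed faces scanned per day across a large chain

false_matches_per_day = daily_scans * error_rate
print(f"Expected false matches per day: {false_matches_per_day:.0f}")        # ~10
print(f"Expected false matches per month: {false_matches_per_day * 30:.0f}")  # ~300
```

Under these assumptions, even a 99.98% accurate system would wrongly flag hundreds of shoppers a month, which is how headline accuracy figures and cases like Jenny's and Sara's can coexist.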

Facewatch reported 54,312 alerts in December alone, a monthly record, with 14,885 alerts sent in the week before Christmas. While the technology aims to give staff advance warning of repeat offenders, critics worry about its systemic impact on civil liberties. The British Retail Consortium notes that only 2.5% of shoplifting offences are recorded by police annually, with 50,000 incidents going unreported daily, underscoring the pressure on retailers to adopt new crime prevention measures. However, the ethical implications of deploying AI without stringent safeguards and clear accountability remain a significant challenge for digital platforms and physical retail environments alike.
