Watched Without Consent: How Meta’s Smart Glasses Are Turning Africans Into AI Training Data

Published 1 month ago · 3-minute read

A decade after the collective rejection of Google Glass, Meta has successfully rebranded wearable technology with its Ray-Ban smart glasses.

By blending high-tech optics into iconic frames, Meta achieved remarkable success with 7 million units sold in 2025 alone, tripling the combined sales of 2023 and 2024.

However, this success has inadvertently created a massive, decentralized surveillance network, where individuals are often recorded without explicit consent. This raises profound ethical and legal questions about privacy in public and private spaces.

The fundamental issue lies not in recording everyday moments, but in the subsequent journey of that footage.

Investigative reports have revealed a global data annotation pipeline in which raw videos are reviewed by human contractors to train Meta's AI algorithms.

Workers in places like Nairobi, Kenya, are tasked with labeling objects and refining the algorithm, often viewing actual, unredacted videos. Because the glasses are designed for constant readiness, they capture an alarmingly intimate array of human experiences, including individuals undressing, medical documents, and bank card details.

Despite Meta's assurances of automated blurring technology, whistleblowers report consistent failures in this system. This frequently results in identifiable faces and sensitive personal information being exposed to low-wage workers across the globe.

This situation creates a significant 'consent gap', particularly pronounced in the Global South. While glasses purchasers agree to complex privacy policies, countless individuals recorded in gyms, clinics, or on buses have no opportunity to consent or object.

In African nations, data protection laws often remain more aspirational than effectively enforced. While the Nigeria Data Protection Act 2023 and Kenya's Data Protection Act exist on paper, their enforcement mechanisms struggle to keep pace with rapidly advancing AI technologies.

Although Nigeria's NDPC fined Meta $220 million in 2024 for separate privacy violations, achieving redress for the average citizen remains a distant and challenging prospect. African regulators often find themselves in an unequal battle compared to their European counterparts.

European regulations like the GDPR mandate a privacy-by-design approach, compelling tech giants to modify or limit features in order to comply with the law.

Consequently, African nations are frequently treated as testing grounds or critical data annotation hubs for technologies developed elsewhere.

This creates a bitter irony: Kenyan workers earn modest wages reviewing sensitive footage of people worldwide in their most vulnerable states, while simultaneously, they and fellow Africans are being recorded with even fewer legal safeguards protecting their own data.

By early 2026, the European Union had begun scrutinizing these practices, with MEPs submitting formal inquiries about their compatibility with European law.

However, for Nigeria and Kenya, the absence of EU adequacy status means their citizens' data is frequently exported and processed with even less oversight.

For Nigerians and other Africans, this raises the grim possibility of becoming a permanent 'grey zone of surveillance' — a region where algorithms are continually fed with their images, intimate moments, and private documents without verifiable consent.

The critical question remains: who is currently watching their footage, and who, if anyone, will intervene to protect their fundamental right to privacy?
