Meta Receives Approval from EU Regulator to Train AI Using Social Media Content
Tech giant Meta has received the green light from the European Union’s data regulator to train its artificial intelligence (AI) models using publicly shared content from its social media platforms. This includes posts and comments from adult users on Facebook, Instagram, WhatsApp, and Messenger, as well as questions and queries submitted to Meta’s AI assistant, according to an April 14 blog post by the company. Meta says training its generative AI models on diverse data is essential for the models to grasp the nuances and complexities of European communities.
Meta states that this encompasses everything from dialects and colloquialisms to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on its products. However, private messages with friends and family, as well as data from EU account holders under the age of 18, will not be used. Users can also opt out of having their data used for AI training via an objection form, delivered in-app and by email, that Meta says is designed to be easy to find and understand.
In July, Meta delayed its AI training plans after privacy advocacy group None of Your Business (noyb) filed complaints in 11 European countries. The Irish Data Protection Commission (IDPC) requested a pause until a review could be conducted. These complaints alleged that Meta’s privacy policy changes would have allowed the company to use years of personal posts, private images, and online tracking data to train its AI products.
Meta now asserts that it has received permission from the European Data Protection Board (EDPB), confirming that its AI training approach meets legal obligations, and says it continues to engage constructively with the IDPC. The company notes that this approach mirrors how it has been training its generative AI models for other regions since their launch, and that it is following the example set by companies such as Google and OpenAI, both of which have already used data from European users to train their AI models.
In related news, the Irish data regulator opened a cross-border investigation into Google Ireland Limited in September to determine whether the tech giant adhered to EU data protection laws while developing its AI models. Similarly, X (formerly Twitter) faced scrutiny and agreed to stop using personal data from users in the EU and European Economic Area to train its AI chatbot, Grok. The EU’s AI Act, which entered into force in August 2024, establishes a legal framework for AI technology, including provisions on data quality, security, and privacy.