You Might Be Inadvertently Broadcasting Personal Information on the Meta AI App
Users of Meta’s standalone artificial intelligence app are inadvertently broadcasting personal conversations with the chatbot to the public, exposing sensitive information ranging from medical concerns to potential legal troubles.
The Meta AI app, launched April 29, includes a sharing feature that allows users to post their AI conversations publicly. However, many users appear unaware that clicking the share button makes their interactions visible to anyone, creating what privacy advocates are calling a significant data exposure incident.
I spent an hour browsing the app, and saw:
- Medical and tax records
- Private details on court cases
- Draft apology letters for crimes
- Home addresses
- Confessions of affairs
…and much more! Not going to post any of those – but here’s my favorite so far pic.twitter.com/9KqGeLB5UN
— Justine Moore (@venturetwins) June 12, 2025
The exposed conversations include users seeking advice on tax matters, discussing family legal situations, sharing medical symptoms, and revealing personal details about relationships and finances. Screenshots circulating online show users asking the AI about criminal liability, divorce proceedings, and other deeply personal topics.
Meta provides no clear indication of a user’s privacy settings at the moment they post a conversation, according to multiple technology publications that have analyzed the app. Users who log into Meta AI through Instagram accounts set to public automatically make their AI searches equally visible.
The company launched the app as part of its push into artificial intelligence, positioning it as a personalized assistant that learns from users’ interactions across Meta’s platforms, including Facebook and Instagram. The app has been downloaded 6.5 million times since its debut, according to app intelligence firm Appfigures.
The privacy concerns extend beyond inadvertent public sharing. Unlike competitors such as OpenAI’s ChatGPT, Meta AI does not offer users the option to prevent their conversations from being used to train future AI models. The company automatically collects users’ words, photos, and voice recordings for this purpose.
Meta has also recently shifted away from human oversight of privacy reviews, instead using AI systems to evaluate potential risks in approximately 90% of cases, according to internal company documents. Current and former employees have expressed concern that this change reduces scrutiny of products that could cause real-world harm.
Users can limit the public visibility of their conversations within the Meta AI app by opening “Data & Privacy” under app settings and selecting “Manage your information.”
Information for this story was found via Gizmodo, TechCrunch, and the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.