Meta to Employ AI for Up to 90% of Privacy and Safety Checks, Report Says

Meta Platforms is reportedly planning a significant shift in its internal review processes, intending to automate up to 90% of the checks covering privacy, safety, and risk implications across its suite of applications, including Instagram, WhatsApp, and Facebook. The plan, revealed in internal documents reviewed by US broadcaster NPR, would use artificial intelligence to conduct these evaluations. The product risk reviews in question, traditionally heavily reliant on human reviewers, are crucial for assessing whether new features or changes could harm users, compromise privacy, or facilitate the spread of harmful content.
Under the proposed system, artificial intelligence tools would approve the majority of product updates, including significant changes to Meta’s core algorithms, its safety tools, and its content-sharing policies, often without manual scrutiny or extended human debate. According to the internal documents, human experts will primarily be involved in cases deemed "novel or complex," while changes assessed as low-risk will be fully automated.
The shift towards AI-driven approvals has reportedly sparked concerns within the company. A former Meta executive, speaking to NPR on condition of anonymity, warned that faster product rollouts with diminished checks could substantially increase the risk of real-world harm. “Insofar as this process functionally means more stuff launching faster, with less rigorous scrutiny and opposition, it means you’re creating higher risks,” the executive said. They added, “Negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”
Responding to these concerns, Meta has said its objective is to streamline decision-making while maintaining robust compliance and oversight. A company spokesperson told TechCrunch, “We leverage technology to add consistency and predictability to low-risk decisions and rely on human expertise for rigorous assessments and oversight of novel or complex issues.” The company emphasizes its commitment to balancing innovation with its regulatory and ethical responsibilities.
Meta's obligation to conduct internal privacy reviews stems from a 2012 agreement with the US Federal Trade Commission (FTC); until now, those checks have been carried out predominantly by human reviewers. The company has highlighted its investment of more than $8 billion in its privacy program as evidence of its commitment to these obligations.
Internal records cited in the NPR report also suggest that Meta is considering extending AI oversight to highly sensitive areas, including youth safety, the spread of misinformation, and risks associated with artificial intelligence technologies themselves.