
Meta's AI Division Reels From Layoffs: 600 Jobs Cut, Privacy Review Team Impacted

Published 23 hours ago · 3 minute read
David Isong

Instagram's parent company, Meta, is reportedly undertaking significant organizational changes, including the layoff of 600 employees within its AI division. According to The New York Times, the move is intended to streamline decision-making and accelerate the development of new products. In an internal memo, Meta's Chief Artificial Intelligence Officer, Alexandr Wang, wrote that a smaller team would need fewer discussions to reach decisions, signaling a push for greater efficiency.

A notable aspect of these layoffs involves the risk review organization, where approximately 100 jobs have been eliminated. This group plays a critical role in ensuring that Meta's products adhere to an existing agreement with the Federal Trade Commission (FTC) and comply with stringent privacy regulations set by authorities around the world. The reduction of this team has sparked discussions about the future of Meta's compliance infrastructure.

Further elaborating on the changes, Michel Protti, Meta's Chief Privacy Officer, told employees that the risk team was being downsized and that the majority of manual reviews would shift to automated systems. Protti said the transition from bespoke, manual processes to more consistent automated systems has produced more accurate and reliable compliance outcomes, and he reaffirmed Meta's commitment to delivering innovative products while upholding its regulatory obligations.

Despite the company's official stance, insiders have described the layoffs within the risk review team as a 'gutting' of employees responsible for scrutinizing projects for privacy and integrity concerns. These job cuts extend beyond specific groups, affecting more than 100 individuals across the company's risk organization, including staff in the London office. A Meta spokesman confirmed the ongoing organizational changes, asserting they are part of a restructuring to reflect the program's maturity and foster faster innovation while maintaining high compliance standards.

These recent developments are integrated into a larger restructuring initiative spearheaded by CEO Mark Zuckerberg over the past three years. This broader effort is aimed at enhancing Meta's competitiveness against emerging rivals like OpenAI, the creators of ChatGPT. Reports indicate that Meta executives have grown increasingly frustrated with the pace of product development, with the risk organization being identified as one division that, by design, contributed to delays.

The context for the risk organization's critical role stems from a 2019 directive from the FTC. At that time, Meta, then Facebook, was mandated to implement new roles and policies to improve transparency and accountability regarding user data handling. Additionally, the company incurred a historic $5 billion fine for misleading users about their control over personal privacy. Consequently, the risk organization was tasked with supervising and auditing all new products to identify potential privacy threats or changes that could breach the FTC order to which the company had committed.

Notably, in 2020, Michel Protti himself underscored the significance of these changes, stating they would usher in 'a new level of accountability' and ensure that privacy was 'everyone’s responsibility at Facebook.'

However, skepticism regarding the efficiency of replacing human oversight with automated systems has emerged among employees of the risk organization, particularly concerning sensitive issues like user privacy. Meta has faced intense scrutiny from the FTC and the Justice Department in the US for nearly a decade, alongside rigorous regulatory oversight in Europe, which amplifies concerns about any potential reduction in human review capacity.

To address some of these complexities, Meta had already begun gradually integrating automation into its risk auditing process in the previous year. This involved categorizing potential issues: 'Low risk' updates to new products were initially reviewed automatically and subsequently audited by human personnel, whereas 'High or novel risk' issues continued to necessitate immediate review by human auditors, reflecting a tiered approach to compliance management.
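
The article describes this tiered flow only in broad strokes. Purely as an illustration of what such routing logic might look like, and not Meta's actual tooling, here is a minimal Python sketch in which every name (RiskTier, ProductUpdate, route_review and the stub review functions) is hypothetical:

```python
from dataclasses import dataclass
from enum import Enum, auto


class RiskTier(Enum):
    LOW = auto()
    HIGH_OR_NOVEL = auto()


@dataclass
class ProductUpdate:
    name: str
    tier: RiskTier


# --- hypothetical stubs standing in for real review systems ---
def automated_check(update: ProductUpdate) -> bool:
    return True  # placeholder: assume the automated pass succeeds


def queue_human_audit(update: ProductUpdate, auto_result: bool) -> None:
    print(f"queued human audit for {update.name} (auto result: {auto_result})")


def escalate_to_human_reviewer(update: ProductUpdate) -> str:
    print(f"escalated {update.name} to a human reviewer")
    return "immediate human review"


def route_review(update: ProductUpdate) -> str:
    """Route a product update through the tiered flow described above:
    low-risk updates get an automated first pass followed by a human audit,
    while high or novel risks go straight to a human reviewer."""
    if update.tier is RiskTier.LOW:
        result = automated_check(update)   # automated review first
        queue_human_audit(update, result)  # human audit happens afterwards
        return "automated review, pending human audit"
    return escalate_to_human_reviewer(update)


if __name__ == "__main__":
    route_review(ProductUpdate("feed ranking tweak", RiskTier.LOW))
    route_review(ProductUpdate("new biometric feature", RiskTier.HIGH_OR_NOVEL))
```

The point of the sketch is simply that how updates are split between the two tiers determines how much human review happens and when, which is why shrinking the human side of the process has drawn the scrutiny described above.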
