
Mark Zuckerberg's Meta vows not to release 'high-risk' AI models


Meta has published a policy document, called the Frontier AI Framework, outlining two categories of AI systems, “high risk” and “critical risk,” that the company considers too risky to release.

Meta, the parent company of Facebook, Instagram, and WhatsApp, suggests in the new policy document that under certain circumstances it might not release highly capable AI systems it develops internally.

This comes even as Meta CEO Mark Zuckerberg has talked about making artificial general intelligence (AGI) openly available to everyone in the near future.

In the framework, Meta discusses the two categories in detail and sets out how it plans to handle systems that fall into them, TechCrunch reported.

According to Meta, both “high-risk” and “critical-risk” systems are capable of aiding cybersecurity attacks as well as chemical and biological attacks.

A major difference between the two is that critical-risk systems may result in “catastrophic outcomes” that cannot be mitigated in a “proposed deployment context”.

High-risk systems, on the other hand, may make an attack easier to carry out, but not as dependably or reliably as a critical-risk system would.

Although the document’s list of possible attacks is a lengthy one, Meta highlights examples such as the “proliferation of high-impact biological weapons” and the “automated end-to-end compromise of a best-practice-protected corporate-scale environment,” according to TechCrunch.

Meta said the list covers the risks it considers “most urgent” among those that could arise from the availability of a powerful AI system.

According to the document, Meta classifies systems based on input from internal and external researchers, with the classifications subject to review by “senior-level decision-makers”.

Once a system is deemed high-risk, Meta says it will limit access to it internally and will not release it publicly until mitigations to “reduce risk to moderate levels” are in place.

For critical-risk systems, the company says it will implement unspecified security protections to prevent the system from being exfiltrated, and will halt development until the system can be made less dangerous.
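
Taken together, the two tiers amount to a simple decision rule. The following is a minimal sketch in Python, purely illustrative and not Meta’s actual implementation; the tier names, the mitigation flag, and the outcome strings are all hypothetical:

```python
from enum import Enum

class RiskTier(Enum):
    MODERATE = "moderate"
    HIGH = "high"
    CRITICAL = "critical"

def release_decision(tier: RiskTier, mitigated_to_moderate: bool = False) -> str:
    """Hypothetical sketch of the release logic described in the framework."""
    if tier is RiskTier.CRITICAL:
        # Critical-risk: halt development, restrict access, and apply
        # security protections against exfiltration until the system
        # can be made less dangerous.
        return "halt development; restrict access; add security protections"
    if tier is RiskTier.HIGH:
        # High-risk: keep access internal; release only once mitigations
        # reduce the risk to moderate levels.
        return "release" if mitigated_to_moderate else "limit internal access; do not release"
    return "release"

print(release_decision(RiskTier.HIGH))  # limit internal access; do not release
```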

Meta says that it doesn’t believe the science of evaluation is “sufficiently robust as to provide definitive quantitative metrics” for deciding a system’s riskiness.

Meta’s Frontier AI Framework is being seen as a response to criticism of the company’s “open” approach to AI development.

By considering both the benefits and the risks in deciding how to develop and deploy advanced artificial intelligence, Meta believes it is “possible to deliver that technology to society in a way that preserves the benefits of that technology to society while also maintaining an appropriate level of risk.”


Meta’s openly released AI model family is called Llama, and it has received hundreds of millions of downloads.


AGI stands for artificial general intelligence, which refers to the hypothetical intelligence of a machine capable of understanding and learning intellectual tasks much as a human being can.

Published on Feb 5, 2025 at 09:50 AM IST

Origin: TOI.in
