Bipartisan Experts Issue Critical AI Roadmap

Published 20 hours ago · 2 minute read
Uche Emeka

In the absence of clear government regulations on artificial intelligence, a bipartisan coalition has released the Pro‑Human Declaration, a comprehensive framework for responsible AI development.

The initiative was organized in part by physicist Max Tegmark, an AI researcher at MIT, and gained momentum after recent tensions involving AI companies highlighted regulatory gaps.

The declaration begins with a stark warning that humanity stands at a critical crossroads: one path leads to AI replacing humans in work and decision‑making, while the other envisions AI enhancing human potential.

Its proposals rest on five pillars: ensuring human control, preventing concentrated power, safeguarding human experience, preserving individual liberty, and holding AI companies legally accountable for their products.

Among its most stringent measures, the Pro‑Human Declaration calls for a ban on superintelligence development until there is scientific consensus on safety and genuine democratic support.

It also advocates mandatory off‑switch mechanisms and bans on architectures capable of self‑replication, autonomous self‑improvement, or resistance to shutdown.

The urgency of the declaration was amplified by recent moves in the industry. Shortly before its release, U.S. Defense Secretary Pete Hegseth labeled Anthropic a “supply chain risk” after the company declined unlimited Pentagon use of its technology — a designation typically reserved for firms tied to adversarial nations.

Hours later, OpenAI struck a deal with the U.S. Department of Defense that legal experts say could be difficult to enforce. These developments illustrate the consequences of congressional inaction on AI oversight.

Tegmark compared the proposed AI safety framework to how the FDA regulates drug safety, where products must be proven safe before release.

He argued that AI systems, especially those targeting young users, should undergo mandatory pre‑deployment testing to assess risks such as increased suicidal ideation, broader mental health harms, and emotional manipulation.

He further stated that if laws already prohibit manipulation of children by humans, the same standards should apply to machines.

Once such testing becomes established, Tegmark predicts, it could expand to ensure AI cannot assist in creating biological threats or destabilizing governmental systems.
