© Zeal News Africa

AI Regulation Turmoil: Federal and State Governments Clash in Power Struggle

Published 2 hours ago · 4 minute read
Uche Emeka

Washington is currently at a critical juncture in deciding how to regulate artificial intelligence, a debate centered not on the technology itself, but on which governmental body holds the primary regulatory authority. In the absence of a comprehensive federal AI standard focused on consumer safety, numerous states have stepped forward, introducing dozens of bills to safeguard their residents from AI-related harms. Notable examples include California’s AI safety bill SB-53 and Texas’s Responsible AI Governance Act, which specifically prohibits the intentional misuse of AI systems.

Conversely, the tech giants and burgeoning startups emerging from Silicon Valley contend that such a fragmented approach, a "patchwork" of state laws, creates an unworkable regulatory environment that ultimately threatens innovation. Proponents of this view, like Josh Vlasto, co-founder of the pro-AI PAC Leading the Future, argue that a proliferation of state-specific regulations will "slow us in the race against China." Consequently, the industry, alongside several White House figures, is advocating for either a singular national standard or no regulation at all, preferring industry self-regulation to "maximize growth."

This all-or-nothing battle has spurred new efforts at the federal level to actively prohibit states from enacting their own AI legislation. Reports indicate that House lawmakers are considering using the National Defense Authorization Act (NDAA) to block state AI laws. Simultaneously, a leaked draft of a White House executive order (EO) also demonstrates robust support for preempting state-level AI regulatory efforts. The EO, though reportedly put on hold, proposes establishing an "AI Litigation Task Force" to challenge state AI laws in court, directing agencies to evaluate state laws deemed "onerous," and pushing federal commissions like the FCC and FTC towards national standards that would override state rules. Significantly, the draft EO suggests granting David Sacks, described as Trump’s AI and Crypto Czar and co-founder of VC firm Craft Ventures, co-lead authority in crafting a uniform legal framework, thereby giving him direct influence over AI policy beyond the typical purview of the White House Office of Science and Technology Policy.

Despite these federal preemption pushes, a sweeping preemption that would strip states of their rights to regulate AI remains unpopular in Congress, which earlier this year overwhelmingly voted against a similar moratorium. Many lawmakers argue that without an established federal standard, blocking state initiatives would leave consumers vulnerable to harm and allow tech companies to operate without adequate oversight. Alex Bores, a New York Assembly member who sponsored the RAISE Act requiring large AI labs to have safety plans, acknowledges the power of AI but emphasizes the need for reasonable regulations, stating that "the AI that’s going to win in the marketplace is going to be trustworthy AI." He supports a national policy but believes states can respond more swiftly to emerging risks. Indeed, by November 2025, 38 states had adopted over 100 AI-related laws, predominantly addressing deepfakes, transparency, disclosure, and governmental AI use, underscoring their quicker legislative pace compared to Congress.

The argument that a patchwork of state laws is overly burdensome has also been challenged. Cybersecurity expert Bruce Schneier and data scientist Nathan E. Sanders, authors of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, contend that the "patchwork complaint is overblown," noting that AI companies already comply with more stringent EU regulations and that most industries navigate diverse state laws successfully. They suggest the true motivation behind the preemption push is to avoid accountability.

In response to the urgent need for federal oversight, Rep. Ted Lieu (D-CA) and the bipartisan House AI Task Force are preparing a comprehensive package of federal AI bills. Lieu’s proposed "megabill," anticipated to run over 200 pages and to be introduced in December, aims to cover a wide array of consumer protections, including fraud penalties, deepfake protections, whistleblower safeguards, compute resources for academia, and mandatory testing and disclosure requirements for large language model companies, practices that are currently often voluntary. Lieu acknowledges that his bill would not be as strict as some other proposals, such as one by Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) requiring government-run evaluations of advanced AI systems, but he believes it stands a better chance of passing. His goal is to achieve legislative success within the current term, navigating an environment where House Majority Leader Steve Scalise is openly hostile to AI regulation. This underscores the arduous and potentially lengthy path, months if not years, that a federal megabill faces to become law, and it highlights why the current push to limit state authority has become one of the most contentious battles in contemporary AI policy.
