Two Years Ago Today in AI History: The Tale of An About-face in AI Regulation

Published 9 hours ago · 8 minute read
The US Capitol, May 15, 2023, photo by the author, the night before the Senate’s first hearing on AI oversight

We need to maximize the good over the bad. Congress has a choice. Now. We had the same choice when we faced social media. We failed to seize that moment. The result is predators on the internet, toxic content exploiting children, creating dangers for them.

– Senator Richard Blumenthal (D-CT), May 16, 2023

I think my question is, what kind of an innovation is it going to be? Is it gonna be like the printing press that diffused knowledge, power, and learning widely across the landscape that empowered, ordinary, everyday individuals that led to greater flourishing, that led above all to greater liberty? Or is it gonna be more like the atom bomb, huge technological breakthrough, but the consequences severe, terrible, continue to haunt us to this day? I don't know the answer to that question. I don't think any of us in the room know the answer to that question. Cause I think the answer has not yet been written. And to a certain extent, it's up to us here and to us as the American people to write the answer.

– Senator Josh Hawley (R-MO), May 16, 2023

Thank you, Mr. Chairman [Sen. Blumenthal] and Senator Hawley for having this. I'm trying to find out how [AI] is different than social media and learn from the mistakes we made with social media. The idea of not suing social media companies is to allow the internet to flourish. Because if I slander you you can sue me. If you're a billboard company and you put up the slander, can you sue the billboard company? We said no. Basically, section 230 is being used by social media companies to hide, to avoid liability for activity that other people generate. When they refuse to comply with their terms of use, a mother calls up the company and says, this app is being used to bully my child to death. You promise, in the terms of use, she would prevent bullying. And she calls three times, she gets no response, the child kills herself and they can't sue. Do you all agree we don't wanna do that again?

– Senator Lindsey Graham (R-SC), May 16, 2023

We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.

– OpenAI CEO Sam Altman, May 16, 2023

Two years ago today, Sam Altman, Christina Montgomery, and I testified at the US Senate Judiciary Committee's oversight hearing on AI, at the behest of Senators Blumenthal and Hawley.

At the time, it felt like the highlight of my life. I had a palpable sense of history: this was the Senate's first hearing on AI. I nearly wept the evening before when I walked by the Capitol at twilight, taking the photo above and reflecting on the history of the United States, and the importance of AI to our future. And then, to my great and pleasant surprise, at the hearing itself the next day, nearly everybody gathered in the room seemed to get it, to be on the same page about the importance of AI regulation and the importance of getting it right and not delaying. As the quotes above illustrate (and I could have chosen many others), Senators, both Democrats and Republicans, recognized the gravity of the moment, and expressed guilt at not having acted faster or more effectively in the regulation of social media. All seemed highly motivated to do better this time.

And it wasn’t just the bipartisan enthusiasm of the Senators that buoyed me, but also the remarks of Sam Altman, perhaps the most visible representative of the AI industry. Throughout the meeting he spoke out in favor of genuine AI regulation, at one point even endorsing my own ideas around international AI governance.

Tragically, almost none of what was discussed that day has come to fruition. We have no concretely implemented international AI governance, no national AI agency; we are no longer even positioned well to detect and address AI-escalated cybercrime. AI-fueled discrimination in job decisions is likely far more rampant than before. Absolutely nothing is being done about AI-generated misinformation, political or medical. By many accounts, AI-fueled scams have exploded, too, and again there is no coherent federal response.

Two years later, Washington seems entirely different. Government officials aren't worrying out loud about the risks of AI anymore; they are downplaying them. Congress has failed to pass any meaningful AI regulation, and even worse, it is now actively aiming to prevent the states (probably our last hope) from passing anything meaningful. Republicans as a whole are far more resistant to AI regulation now than they were in 2023, and voices like Josh Hawley's, which seemed sincerely interested in how to regulate AI, are now drowned out by the administration's across-the-board anti-regulatory turn.

And when Altman returned to the Senate last week, he sang an entirely different tune, effectively trying to block AI regulation at every turn. Altman is no longer advocating AI regulation; he is actively resisting it.

Which raises a question: Did Altman actually mean any of what he said two years ago? I believed him at the time, but I probably shouldn’t have.

In hindsight, Altman is phenomenal at reading the room and telling people what they want to hear, even if he doesn't really mean it. For example, he claimed to be doing the job purely out of love, working for health insurance and no equity, but didn't disclose that he had indirect equity in OpenAI's for-profit subsidiary via his holdings in Y Combinator; he also conveniently forgot to mention his ownership of the OpenAI Startup Fund (which he subsequently divested, under pressure).

And even as Altman was telling Congress that he supported AI regulation, his company was lobbying the EU to water down its AI Act. (At one point he even threatened to have OpenAI walk away from the EU altogether.) Now he is doing everything he can to stop AI regulation of any meaningful sort. He also said at the time, "we think that content creators, content owners, need to benefit from this technology," but ever since, his company has been pushing for free training materials and exemption from copyright laws.

You can see Sam’s about-face for yourself in this brief clip below, from a forthcoming film called Making God, whose makers recently interviewed me.

It’s worth two minutes of your time:

I think the question of whether Sam can be trusted now has a clear answer. Two new books, by the journalists Keach Hagey (of The Wall Street Journal) and Karen Hao (of The Atlantic), further bear that out, with detailed reporting on why he was briefly fired from OpenAI in November 2023.

The real question is why the US government continues to place so much faith in Altman, given (a) his own track record, and (b) his own 2023 testimony to the Senate that AI could “cause significant harm to the world.”

The cost to humanity of being beguiled by this man may turn out to be enormous.

§

In an excerpt from her new book, Empire of AI, that appeared yesterday in The Atlantic, Karen Hao writes eloquently about how much has changed at Altman’s company, OpenAI, since he was fired and rehired in 2023:

The events of November 2023 [when Altman was fired] illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is currently shaping the future of this technology. And the scorecard of this centralized approach to AI development is deeply troubling. OpenAI today has become everything that it said it would not be. It has turned into a nonprofit in name only, aggressively commercializing products such as ChatGPT and seeking historic valuations. It has grown ever more secretive, not only cutting off access to its own research but shifting norms across the industry….

The shift in norms has extended to our government, too; gone are the days when Washington feared the risks of AI enough to seriously consider doing anything about them. All talk of regulation has been replaced by talk of innovation, which is really shorthand for "help the companies as much as possible, no matter what it costs the citizenry." Gone, too, is the chance to avoid what Senators from Blumenthal to Graham warned about: a repeat of the mess of social media, in which big tech got its way and society was left paying the consequences.

Gary Marcus, Professor Emeritus at NYU, is the author of six books, including Taming Silicon Valley, and was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber.

p.s. The folks from the documentary excerpted above, Making God, are raising money to support the completion of their film.

Originally published on Marcus on AI.