Sam Altman Makes Big Tech Predictions In June
The corn is growing, the fireworks tents are popping up and everybody’s getting ready for the Fourth of July.
Over at OpenAI, the man in charge, Sam Altman, penned an essay on June 10 about updated predictions for the near-term future.
Part of it actually involves a predicted timeline for AI robotics and humanoid robots, which I’ve been writing about quite a bit.
Sure enough, Altman suggests that this type of technology is on its way soon.
“We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.”
Contending that much of the “least likely” work on intelligent robots and agents is already behind us, Altman gives the example of his own creation, ChatGPT.
“ChatGPT is already more powerful than any human who has ever lived,” he notes. “Hundreds of millions of people rely on it every day and for increasingly important tasks; a small new capability can create a hugely positive impact; a small misalignment multiplied by hundreds of millions of people can cause a great deal of negative impact.”
Speaking of “self-reinforcing loops,” Altman posits the expansion of automation on a swift timeline.
“The economic value creation has started a flywheel of compounding infrastructure buildout to run these increasingly-powerful AI systems,” he writes. “And robots that can build other robots (and in some sense, datacenters that can build other datacenters) aren’t that far off.”
What is the likely human response to that?
If you read through the middle part of Altman’s essay (and you have to admit this man has a front-row seat to this innovation shift), you get a sense of how quickly we tend to acclimate ourselves to AI, or at least try to. He talks about how, in a singularity scenario, “wonders become routine” and amazement gives way to a hunger for more.
Here’s a bit that I thought captured this idea very well:
“Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel; or from being amazed that it can make life-saving medical diagnoses to wondering when it can develop the cures; or from being amazed it can create a small computer program to wondering when it can create an entire new company. This is how the singularity goes: wonders become routine, and then table stakes.”
And everything keeps going…
What is the human cost of having smart robots around?
In a prior essay a few months back, Altman talked about job displacement, using the example of the old lamplighter – someone whose job was to light the streetlamps in the days before municipal electricity.
“Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.”
In this current essay, he takes a slightly different tack. Altman changes his protagonist to a subsistence farmer who would see today’s workers holding ever more seemingly frivolous jobs – less physical work, fewer concrete roles, more creative noodling and more flexibility.
“A subsistence farmer from a thousand years ago would look at what many of us do and say we have fake jobs, and think that we are just playing games to entertain ourselves since we have plenty of food and unimaginable luxuries,” Altman writes. “I hope we will look at the jobs a thousand years in the future and think they are very fake jobs, and I have no doubt they will feel incredibly important and satisfying to the people doing them.”
Well said. That’s going to be a feature of this industrial revolution, I would assume.
Referencing the advances and efficiencies of the last few years, Altman suggests that things that are important to us as societies will suddenly be within our reach.
Positing that the cost of intelligence will eventually converge toward the cost of electricity, he lays out this way of thinking:
“The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before. We probably won’t adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big.”
As for headwinds facing our journey into the future, Altman does identify two of them, although you could argue these are more about human interactions than technical challenges. One is what he calls an “alignment problem,” where the technology may fail to do the things that humans want, in the ways that they want them done. I go back to my colleague Stephen Wolfram’s descriptions of AI as an attention mechanism that has to be directed through the right avenues of thought to gel with human intentions.
The other issue Altman identifies has to do with democratizing the technology – making sure it doesn’t become the exclusive province of billion-dollar tech firms and their billionaire leaders.
In other words, the two major problems that Altman brings up have to do with access and human control of AI.
Over at one of my favorite podcasts, AI Daily Brief, podcaster Nathaniel Whittemore gives us an example of a pretty scathing skeptical reaction from Jeffrey Miller, apparently at Primer.ai, to wit:
“Democracy means absolutely nothing, and people don't get to vote on whether we want the singularity, which probably leads straight to human extinction. Do you support running a global referendum on whether we allow you guys to persist in trying to summon the superintelligent demons in the hope that they'll play nice with us and destroy our current civilization gently?”
In going over responses to Altman’s essay, Whittemore also references some of the words of Ethan Mollick, the Wharton professor whom I also like to cover as someone with a lot to say about AI, characterizing Mollick as saying:
“One thing you could definitely say about Sam and Dario is that they are making very bold, very testable predictions. We will know whether they are right or wrong in a remarkably short time.”
The latter is Dario Amodei of Anthropic, who has his own bullish ideas about AI.
Then there’s Jensen Huang of Nvidia, also counted among those who believe in the coming of AI robots, evidenced in articles like this one.
So this is bigger than one man: there’s a general consensus among the cognoscenti that the arrival of AI robots is imminent. What will it look like when they are working next to us?
Or will we be working at all?
I’ll end with this interesting little snippet from Whittemore:
“This is basically the first alarm, followed by a snooze button for some of the most important conversations we'll ever have as a human species.”
If that interesting metaphor turns out to be apt, we are in for quite a wild ride over the next decade or so.