Understanding Why Tesla Didn't Launch Their Robotaxi This Week
A Tesla robotaxi, a ride-booking service, moves through traffic, Sunday, June 22, 2025, in Austin, Texas. (AP Photo/Eric Gay)
Sunday's story about how Tesla missed their robotaxi launch goal raised a lot of reader questions, because many, including the stock market--which sent Tesla stock up as much as 10%--got the impression that they had met it. As such, it's worth exploring the subtleties of the robotaxi world, and what it takes to make one, to understand just where Tesla is and what one might be optimistic or pessimistic about for their prospects.
To start, it should be understood that the move to "unsupervised," meaning having no human needed most of the time for general operation on busy public streets, is by far the biggest and hardest step. There are a handful of companies which have done this: Waymo, Cruise, May, Nuro, Baidu, WeRide, Pony and AutoX. (Aurora did it briefly on freeways, and there are some doubts about some of the Chinese players.) There are many more who have tried and still not done it, including Tesla, and also many who have tried and died before doing it--or, in the case of Cruise, after.
This is "the big one." In 2021, I outlined a list of many milestones that robotaxi teams might attain on their path. The unsupervised milestone--also usually called "removing the safety driver"--is #15. What Tesla demonstrated this week is #9, quite distant from that.
Elon Musk, however, promised the big one. In multiple statements he declared that there would be "no one in the car" and the vehicle would operate "unsupervised." This generated a lot of anticipation, and skepticism. Could Tesla really move from what we see in the public release of FSD version 13, which is around milestone #6 (not everybody does them in the same order or in the same way), all the way to the big one? Could they do it using machine learning and computer vision alone, when everybody else used more, like maps and lidar (in addition to machine learning and computer vision)? It was a very bold claim--so bold that many felt it impossible, but the power of machine learning has surprised the world many times in many ways.
We feel confident in being able to do an initial launch of unsupervised, no one in the car, full self-driving in Austin in June - Elon Musk
The answer, however, was "no." They could not yet do it. They could produce a robotaxi service with a supervisor in the car, which is something many companies have done and do today. It's not trivial, but neither is it brain surgery. (I won't say "rocket science" for obvious reasons of irony.)
It was as if Musk’s SpaceX had promised it would send a person to the moon, and instead flew suborbital to the edge of the atmosphere like Jeff Bezos’ “Blue Origin.” That’s the level of disappointment we’re talking about for those who understand the difference.
Taking out the supervisor requires proving--to yourself, your board, your lawyers and others--a safety case that assures that putting the car out mostly on its own won't create unacceptable risk. That it will, as Musk himself claimed, drive much better than human drivers. That's a hard case to prove, and nobody wise pulls the safety driver until they are confident they have made it. Two companies that had serious incidents, Cruise and Uber ATG, are now dead because of them, even though there were others partly to blame in both cases.
When Tesla left the supervisor in the car, after declaring they definitely would not, it meant they couldn’t make that safety case.
But many readers felt that all was good because this supervisor isn't a "safety driver"--they are in the passenger seat, not behind the wheel, and thus not a driver. Even if you accept that, they are still a human supervisor, so neither of Musk's key promises was attained. One must also understand that "safety driver" is a term of art in self-driving, coined long ago to refer to a human who supervises the car and intervenes if something seems to be going wrong. They don't actually drive the car in the conventional sense. In fact, in a near-deployment car, they will just bring the car to a safe state, and expect the car, or a remote assistance or remote driving team, to finish the job of getting the car going again. That's important because the team is simulating removing that person, and wants them to do the absolute minimum so they can test what happens when they are gone. They do the driver's most important task--watching the road and the system to assure nothing is going wrong, and stopping it if needed. You will even find the employee who sits in shuttles with no steering wheel, but has an emergency stop button, called a safety driver.
Tesla's safety driver--and yes, that's what they are--has three different controls to make the car stop. Many riders have noticed these staff keep their right hand constantly near the right door handle button, which, though unconfirmed, seems likely to have been overloaded to act as a stop button while the vehicle is moving. The safety driver also, like a high school driving instructor, has the ability to grab the wheel and steer; that probably triggers a stop as well.
If all that seems pretty silly, it is. This is not at all the way to do this. There's no reason to do it that way rather than just put the person behind the wheel--no reason except how it looks, no reason except to make less sophisticated riders say, "wow, nobody behind the wheel!" It does express some confidence, but the driving instructor approach is used a million times a day to train teenagers and is well known to work safely, so it doesn't express that much confidence. It would not be surprising if Tesla recruited driving school instructors for the job. Doing something like this just for the optics doesn't leave the best impression. It would be better if Tesla underpromised and overdelivered, as it does in the EV world, rather than overpromised and underdelivered, as it often has with FSD.
I have a sidebar with more details on all the different forms of human assist for robocars, from remote assist to safety drivers:
Forbes: Safety Drivers, Remote Driving And Assist—The Long Tail Of Robotaxis
It's also worth noting that having remote supervisors who can drive the car (a photo of the Tesla control room shows several consoles with steering wheels) is still supervised operation, unless that ability is only used to get cars out of jams after the car decides to stop and ask for help. No team has as yet used this approach.
That said, there is some real accomplishment here. Nobody has had the guts to deploy even a safety-driver robotaxi with just cameras before; they haven't felt that was a wise target. Everybody else who's made that big milestone has taken a "Tesla Master Plan" approach of starting expensive and high-end, proving the technology and then making it cheaper. Tesla's machine-learning plan is a bold, longshot bet, and they've gotten further with it than many expected. (Many think it will eventually work, but they don't want to name the year, and they don't think it's the best first approach when the goal is to meet that hard safety bar.)
And because the right-seat safety driver has a harder time intervening, it's fair to say you would not have deployed this way with early versions of FSD that couldn't go more than a few miles without needing intervention. The greater risk taking is a sign of greater ability and greater confidence, but the level of greater ability needed to remove all humans from the loop can be literally 100 times more! Yes, 100: you can deploy what Tesla has done needing major intervention every 10,000 miles, but removing the human needs a million.
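To make that ratio concrete, here is a minimal back-of-the-envelope sketch in Python; the two mileage figures are the rough ballparks used above, not measured data:

```python
# Gap between "deployable with a safety driver" and "safe to remove
# the human," using the article's ballpark figures (assumptions,
# not measured Tesla data).
supervised_miles_per_intervention = 10_000       # tolerable with a safety driver aboard
unsupervised_miles_per_intervention = 1_000_000  # needed before removing the human

gap = unsupervised_miles_per_intervention / supervised_miles_per_intervention
print(f"Required improvement: {gap:.0f}x")  # -> 100x
```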
we’re looking for a safety level that is significantly above the average human driver. So, it’s not anywhere like much safer, not like a little bit safer than human, way safer than human - Elon Musk
The safety goal is very hard. I call it "bet your life" reliability. But as Musk says, we want it to be better than human, and we actually happily bet our lives on human driving ability every day. In spite of the surprising power of pure machine learning, it also makes strange and unexpected mistakes. No ML system has ever reached this level of reliability before. Tesla and others have faith; they hope that if you pour enough data and compute on the problem, it will work. It might, but it might not. On the other hand, methods combining that machine learning with other techniques, better sensors and different types of data have worked, and have now been on the road for 6 years with no safety driver, while Tesla has declared every year for 8 years that their system will reach that big milestone within a year.
Perhaps Tesla is on the cusp, and can pull the safety driver in a month. Or maybe, like many teams, they will be at it for years in this mode. Musk claimed they were seeing "interventions" every 10,000 miles back in April, though he did not define the type of interventions. We've seen several minor interventions and issues in early videos from the first few thousand miles, which doesn't bode well, though we've not seen any crash-style interventions. To be better than humans, you need crash-preventing interventions to be perhaps a million miles apart--that's 100,000 taxi trips in a row, or about 8 months of driving with Tesla's small fleet of 20 taxis. I don't see evidence of that, and Tesla has been poor about releasing what truly matters, which is bulk statistics like these. Single drives tell you nothing; they should essentially always be perfect. Tesla has only released highly misleading statistics on how Autopilot, with a supervising driver, does on freeways at preventing airbag deployments (a fraction of total crashes), and then compares that to the general public crash rate on all roads, which would of course be many times worse even if the systems were similar in safety--which they are. It would be great for Tesla to give us this data.
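For those who want to check that arithmetic, here is a small sketch; the average trip length and per-car daily mileage are illustrative assumptions chosen to match the figures above, since Tesla has not published its fleet's actual utilization:

```python
# Back-of-the-envelope check of the million-mile bar. Trip length
# and daily mileage per car are assumed values, not published numbers.
target_miles = 1_000_000      # miles between crash-preventing interventions
fleet_size = 20               # Tesla's initial Austin fleet
avg_trip_miles = 10           # assumed average robotaxi trip length
miles_per_car_per_day = 200   # assumed daily utilization per car

trips = target_miles / avg_trip_miles
days = target_miles / (fleet_size * miles_per_car_per_day)
print(f"{trips:,.0f} consecutive clean trips")         # -> 100,000 trips
print(f"~{days / 30:.0f} months for the whole fleet")  # -> ~8 months
```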
For now, we just don't know. Because safety drivers fix mistakes, Tesla could have released a pilot robotaxi service with a human supervisor quite a while ago, once FSD became able to do whole trips without issue on a regular basis. What they have shown us this week is not a safety breakthrough, but work on the other stuff, like having an app, doing PuDo (pick-up/drop-off) and other logistics. Tesla actually has a lot more work to do in that area; even Waymo does. Sadly, the many breathless tweets that "Tesla has done it" are misguided. We just don't know, and Tesla won't tell us the underlying numbers. (That's not unusual; most teams don't tell them, though Waymo has put out good numbers after the fact.)
To summarize: