For nearly three months now, San Francisco has been a testing ground for robotaxis — driverless ride-hailing vehicles that take you from one part of the city to another, the way cars operated by Grab or Gojek in Southeast Asia do with human drivers. While Phoenix, Arizona, has had robotaxis for almost a year, the arrival of driverless cabs in the tech capital of the world was seen as the ultimate test for the robots that drive you around town. So far, though, the experiment has turned out to be a hot mess.
Robotaxis have been talked about for years. Tesla Inc CEO Elon Musk first mentioned “converting” a huge fleet of his pioneering electric cars to full self-driving (FSD) capability “soon” way back in 2016. Ironically, his rivals, Alphabet subsidiary Waymo and General Motors subsidiary Cruise, beat him to it.
The key to robotaxis is collecting a ton of data. Artificial intelligence (AI) needs images, videos, audio, text and other forms of data to train machine learning models. Robotaxi fleets start out with human drivers at the wheel; once enough data has been collected, machine learning models trained on that data teach the cars to drive around by themselves.
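For the technically minded, the idea can be sketched in a few lines of Python. This is only a toy illustration of the pipeline described above: the feature vectors and numbers are made up, and a simple least-squares fit stands in for the deep neural networks the robotaxi companies actually use.

import numpy as np

rng = np.random.default_rng(0)

# Pretend each logged "frame" is a small feature vector extracted from cameras and
# other sensors, and the label is the steering angle the human driver chose then.
frames = rng.normal(size=(10_000, 32))      # sensor features logged from human driving
true_policy = rng.normal(size=32)           # the human behaviour the model tries to imitate
steering = frames @ true_policy + rng.normal(scale=0.1, size=10_000)

# "Training": fit a model that predicts the driver's steering from each frame.
weights, *_ = np.linalg.lstsq(frames, steering, rcond=None)

# "Driving": apply the learned model to a frame the car has never seen before.
new_frame = rng.normal(size=32)
print("predicted steering:", new_frame @ weights)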
In August, the California Public Utilities Commission decreed that Waymo and Cruise were ready for prime time and allowed their robotaxis to operate round-the-clock paid services in San Francisco. Last month, Austin, Texas, became the third US city with a commercial robotaxi operation. Fourteen other US cities, including Los Angeles, Miami, Atlanta and Dallas, plan to allow robotaxis next year. What these cities have in common is warm weather and flat terrain. The northeastern US gets snow in winter and has more challenging terrain. Phoenix and Austin are also more planned cities, which puts them near the top of robotaxi operators’ lists.
Initially, there were protests after robotaxis suddenly stopped at intersections, interrupting the normal flow of traffic. On one occasion, a robotaxi collided with a fire truck that was on its way to an emergency. In a couple of cases, robotaxis could not recognise the yellow-and-black caution tape that had been put up to cordon off an area. They just kept driving through the tape.
Robotaxis are smart enough, however, to identify bright orange plastic traffic cones. So protest groups like Safe Street Rebel launched a campaign to ambush and incapacitate them by placing traffic cones on their bonnets. The cameras would recognise the cone, and the robotaxi would abruptly stop in the middle of the road.
On Oct 24, California’s Department of Motor Vehicles (DMV) told Cruise to “immediately remove all its driverless vehicles from the roads”, saying it had determined that the robotaxis posed an “unreasonable risk” to public safety. The DMV alleged that Cruise had misrepresented “information related to the safety of the autonomous technology”. Cruise has now suspended all its robotaxi operations, leaving Waymo as the sole operator in San Francisco. The ban is clearly a huge setback for Cruise and its main shareholder, General Motors, which has been struggling to transition from making internal combustion engine-based cars to making pure battery electric vehicles (EVs).
The move comes months after state regulators began investigating a spate of “concerning incidents” involving Cruise vehicles in San Francisco and ordered the company to cut its robotaxi fleet in the city by 50%. It also comes just weeks after a jaywalker was struck by a human-driven car and flung into the path of a Cruise robotaxi, which braked hard but still rolled over the woman and dragged her for about 6m before stopping. With hundreds of sensors and cameras, how did the robotaxi miss the injured woman lying on the street? You might call it artificial intelligence. I call it real stupidity.
In the first few weeks of the San Francisco robotaxi experiment, a few thousand people were able to download the Waymo or Cruise apps, just as you might download a Grab or Gojek app and get a ride in Southeast Asia. Waymo was operating about 250 cars, while Cruise had cut its service to just 50 vehicles during the day and 150 at night before its Oct 24 suspension.
When I wanted to ride a robotaxi recently, I realised there was a long “wait list”. So I got in touch with Waymo and asked to be allowed to ride, since I am a journalist writing about the technology. Then, last week, I learnt that Waymo was increasing the number of cars, that the wait list would soon be abolished and that robotaxis would become almost as ubiquitous as Uber and Lyft.
I am a big believer in technologies like autonomous vehicles. I think robotaxis can transform society in our lifetimes. I had originally planned to write this piece two weeks from now, after my trip to Las Vegas to see the Sphere, the huge new spherical venue for concerts and immersive shows where Irish rock band U2 is playing. Then Cruise was banned, which prompted me to write this piece now.
How robotaxis benefit us
Here’s why we need robotaxis: The World Health Organization estimates that around 1.35 million people are killed in road accidents around the world every year. Over 93% of those deaths are due to human error. Consultancy McKinsey & Co estimates that for every person killed in a vehicle accident in the US, eight others are hospitalised. Over 43,000 people died in auto-related accidents in the US last year, and about 2.3 million are injured in auto accidents across the country annually. The two main causes of auto accidents are driving under the influence of alcohol and distracted driving, such as talking on the phone.
An average one-way commute in some of the busiest cities in America takes about an hour and covers less than 16km. The No. 1 reason people want to work from home rather than go to the office every day is the stressful commute. Driverless vehicles could cut commuting times by half or more in most cities. In some cities, between a third and half of the downtown area, the city’s most expensive real estate, is earmarked for parking. Here’s another statistic: A third of the traffic in America’s busiest downtowns is caused by drivers looking for parking.
Robotaxis and other driverless vehicles would free up huge amounts of space in New York, Chicago, San Francisco and other busy downtown areas, helping to make housing and real estate more affordable for residents and businesses. And, oh, one other thing: Robotaxis will dramatically reduce the number of cars on the road. Studies show one robotaxi will likely replace up to five private cars. “Autonomous taxis could be one of the most impactful innovations in history,” says Tasha Keeney, an analyst at tech-focused fund manager ARK Invest.
Here is how they work: Autonomous vehicles like robotaxis have a rapidly spinning tube mounted on the roof. That is a LiDAR, or Light Detection and Ranging, sensor. The role of LiDAR, which works in tandem with an array of other sensors on the car, is to measure the distance of objects relative to the position of the car in three dimensions (3D). The remote-sensing LiDAR fires pulses of laser light, typically in the near-infrared, to measure the distance, speed and altitude of physical objects and to map out the surrounding environment with impressive fidelity. Unlike cameras and some other sensing technologies, LiDAR’s performance does not degrade in low-light conditions.
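The core trick is simple enough to sketch in a few lines of Python: time how long a laser pulse takes to bounce back off an object, then halve the round trip. The 66.7-nanosecond echo below is an illustrative figure, not real sensor data.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance to the object that reflected the pulse, in metres."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse that returns after roughly 66.7 nanoseconds implies an object about 10m away.
print(round(range_from_echo(66.7e-9), 2))   # ~10.0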
But here is the rub: LiDAR is a very expensive technology. Robotaxis currently on San Francisco streets cost over US$100,000 ($137,000) each to make. Add in the bells and whistles, as well as marketing and distribution costs and a decent mark-up for profit, and you could be looking at close to US$130,000 per car.
Tesla has taken a completely different route with its own Full Self-Driving (FSD) software, which it wants to deploy for its own robotaxi rollout. CEO Musk has said that “anyone relying on LiDAR is doomed”. So Tesla is focusing on an affordable solution that uses cameras and computer vision.
Here is how that works: Computer vision is a branch of AI in which computers are taught to interpret and understand digital images and videos using neural networks, machine learning and deep learning. Computer vision uses images and videos to understand a real-world scene and build a 3D map of it, which enables self-driving cars to identify, classify and detect different objects.
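As a rough illustration of that identify, classify and detect step, here is a short Python sketch that runs an off-the-shelf object detector from the torchvision library on a stand-in camera frame. The models inside Waymo, Cruise or Tesla vehicles are proprietary and far more sophisticated; this merely shows what detecting objects in a single frame looks like in code (it assumes a recent version of torchvision).

import torch
import torchvision

# Load a pre-trained, general-purpose object detector (not any carmaker's model).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Stand-in for a single camera frame: 3 colour channels, 480x640 pixels, values in [0, 1].
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([frame])[0]

# Each detection comes with a bounding box, a class label and a confidence score,
# which is the raw material a driving system's planner would then reason over.
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.5:
        print(label.item(), round(score.item(), 2), [round(v, 1) for v in box.tolist()])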
Essentially, Tesla is betting that computer vision, relying on the training done during the data annotation process, can recognise objects on the streets. The system looks at each object the same way a human might and identifies it based on its specific characteristics. Musk has said his camera and computer vision-based FSD is far superior to, and cheaper than, the LiDAR-based approach that Waymo and other autonomous carmakers are using. His vision is to deliver a robotaxi that costs buyers under US$25,000, a fraction of what a Waymo would cost if Google’s parent Alphabet were selling it today.
Leading the pack
So, who has the edge: Tesla, rivals like Cruise and Waymo, or indeed iPhone maker Apple Inc, which has been secretly developing its own autonomous cars for years? ARK’s Keeney argues that Tesla has more autonomous-driving data than all other carmakers combined. That gives the EV pioneer an edge in developing FSD software, because data is the cornerstone of AI. Tesla plans to monetise its FSD technology by selling the software directly to consumers or drivers like you and me, and by providing autonomous ride-hailing services with robotaxis powered by its FSD platform; in other words, by becoming a competitor to the likes of Uber, Grab and DiDi.
The way Keeney sees it, “autonomous vehicles could be one of the most productive innovations of all time, impacting global gross domestic product by 20% over the next decade”. The ARK analyst estimates that net GDP gains globally could approach US$26 trillion by 2030.
The promise of autonomous vehicles is simple: They will be safer and help reduce congestion. The key problem is that it is hard to design autonomous cars to anticipate rare events, such as a speeding fire engine hurtling through a red light towards the site of a disaster, an ambulance racing a dying patient to hospital, or an idiot making a sharp left turn after signalling that he was turning the other way. Humans are easily flummoxed by lesser things, so you can imagine how hard it is to make robotaxis understand what is going on.
Autonomous vehicles have been described as the “mother of all AI projects”. Driverless cars need to process their surroundings and make judgment calls using perception and decision-making technology. Waymo and Cruise have collected data, but only in a handful of cities. Tesla has been operating across the US and around the world, collecting far more. But Tesla’s real advantage is not just the data it has collected over the years but its AI capabilities. At the heart of Tesla’s push into robotaxis is Dojo, a purpose-built supercomputer designed in-house to train the FSD system that sits inside every Tesla vehicle. With Dojo, “Tesla has developed an advanced supercomputing architecture that pushes new boundaries in custom silicon”, or proprietary chips, notes Morgan Stanley auto analyst Adam Jonas. Musk has said Tesla is likely to spend over US$1 billion on Dojo.
Robotaxis may have made a mess of things in San Francisco recently, but these are still the early days of the experiment. Driverless cabs are coming to our cities, though sadly they won’t be ubiquitous for years, particularly in unplanned cities or those with challenging terrain. At some point, to help expedite the transition to robotaxis, legacy automakers too will start installing cameras in their cars to collect the data needed to make our cities safer with affordable robotaxis.
Assif Shameen is a technology and business writer based in North America