Jumping on the bandwagon, Australia launched its first fully driverless, fully electric shuttle bus in September 2016 and will begin passenger trials in South Perth.
The blame game
Barely two months into its first foray, a nuTonomy self-driving taxi collided with a lorry while changing lanes at Biopolis Drive, in what is believed to be the first accident in Singapore involving an autonomous vehicle. After investigations, nuTonomy found that the accident was due to “an extremely rare combination of software anomalies” that affected how the vehicle detected and responded to other nearby vehicles when changing lanes. The company has since improved its software and its taxis are back on the roads.
This follows the world’s first known fatal accident involving a self-driving vehicle, when, in June this year, an 18-wheel tractor-trailer made a left turn into the path of a 2015 Tesla Model S in Autopilot mode. The accident happened in Florida and, according to Tesla, “[n]either Autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied”.
The obligation to compensate for damage or injury caused by negligence is fairly straightforward in a typical traffic accident. The question comes down to how a reasonable person ought to have acted in the circumstances and how fault should be apportioned among the drivers and pedestrians involved. However, this notion of "reasonableness" takes on a different meaning when software does the "thinking" for the car.
For starters, there are more parties potentially involved in the liability chain. The "blame game" could involve the car manufacturers, computer programmers, software developers, algorithm designers, mapping companies and potentially even the authorities who provided the maps. Consequently, vehicle insurance coverage will have to be reconsidered as product and driver liability issues become more complex with the adoption of autonomous vehicles.
Morals or data?
We are next confronted with a robot's ability to make reasonable moral decisions in the event of an impending collision. Does a robot possess the same moral guiding principles that humans (arguably) have in making reasonable decisions on the road?
Imagine a scenario where the brakes on a speeding vehicle have failed. A human driver may be reasonable in swerving his vehicle into oncoming traffic to avoid mowing down a little old lady crossing the road. Is a robot driver able to make the same judgment call? If it can calculate the potential for death and destruction more accurately, it may well determine that the pedestrian grandma’s life is worth less than the alternative. It remains to be seen how the courts will ascribe reasonableness to snap decisions made by software.
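To make the dilemma concrete, consider a minimal, purely hypothetical sketch (in Python) of the kind of utilitarian calculation such software might perform. The manoeuvre options, probabilities and harm scores below are invented for illustration only and do not reflect any real manufacturer's algorithm.

```python
# Hypothetical sketch: a naive "expected harm" decision rule.
# All options, probabilities and harm scores are invented for
# illustration; no real autonomous-vehicle system is known to work this way.
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    p_collision: float        # estimated probability of a collision
    harm_if_collision: float  # estimated severity, in arbitrary units

    @property
    def expected_harm(self) -> float:
        # Expected harm = probability of collision x severity if it occurs
        return self.p_collision * self.harm_if_collision

def choose_manoeuvre(options: list[Manoeuvre]) -> Manoeuvre:
    """Pick the option with the lowest expected harm (a crude utilitarian rule)."""
    return min(options, key=lambda m: m.expected_harm)

options = [
    Manoeuvre("brake hard and stay in lane", p_collision=0.9, harm_if_collision=8.0),
    Manoeuvre("swerve into oncoming traffic", p_collision=0.5, harm_if_collision=9.0),
    Manoeuvre("swerve onto the kerb", p_collision=0.3, harm_if_collision=6.0),
]

best = choose_manoeuvre(options)
print(f"Chosen: {best.name} (expected harm {best.expected_harm:.2f})")
```

The point of the sketch is that whichever numbers the designers feed in, the software's notion of "reasonableness" is fixed long before the emergency occurs, which is precisely what makes apportioning fault after the fact so difficult.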
If you shudder at the thought of rush-hour traffic and inconsiderate drivers on your daily commute, imagine the challenges autonomous vehicles face in current road conditions. Interestingly, the Massachusetts Institute of Technology has developed a platform, known as the Moral Machine, to gather public perspectives on the moral decisions autonomous vehicles should make.
Ready to take the Bavarian Moral Works vehicle for a spin?