
Either a Passenger or a Pedestrian Will Die: Will Autonomous Vehicles Have to Choose?

In an interesting article, Matthew Biggins opens by painting a picture of the future: one in which an autonomous vehicle's computer system must choose between hitting an innocent child who has chased a ball into the road and swerving to save the child, a maneuver that would kill the passenger.



The year is 2025. A woman slides into her autonomous car headed for her office. She pulls out a tablet to read the morning news as the car pulls into the street. A green light — the car proceeds through the intersection. Just then, a ball bounces onto the road and a child darts into traffic after it — too late. The car’s computer calculates a crash is imminent. There are two outcomes. The car attempts to stop, but will hit and kill the child. Or the car swerves to avoid the child, but will ram into the median and kill its passenger. What should the car do? Save the owner or the innocent child?

Welcome to the ethics of driverless cars.

But the car won't really be making a choice; it will do what its algorithms dictate. The real choice was made years earlier, when the algorithm was designed. That real choice is being made now. We are the ones deciding whom self-driving cars will allow to live and whom they will choose to kill.
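To make concrete what "the choice was made when the algorithm was designed" means, here is a minimal, purely hypothetical sketch in Python. Nothing here comes from any real autonomous-driving system; the names (Outcome, HARM_WEIGHTS, choose_maneuver) and the numbers are invented for illustration. The point is that the ethical weighting is a constant fixed by designers long before any crash, and the car merely evaluates it.

```python
# Hypothetical sketch only: illustrates how a design-time weighting,
# not a decision "in the moment," determines who the car protects.
from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str          # e.g. "brake" or "swerve"
    fatality_risk: float   # estimated probability of a death, 0.0 to 1.0
    victim: str            # "pedestrian" or "passenger"

# The ethical choice, baked in by the designers years before any collision.
# Change these weights and the car's behavior at the crash changes with them.
HARM_WEIGHTS = {"pedestrian": 1.0, "passenger": 1.0}

def choose_maneuver(outcomes: list[Outcome]) -> Outcome:
    """Pick the maneuver whose weighted expected harm is lowest."""
    return min(outcomes, key=lambda o: HARM_WEIGHTS[o.victim] * o.fatality_risk)

# The article's scenario: braking kills the child, swerving kills the passenger.
crash = [
    Outcome("brake", fatality_risk=0.9, victim="pedestrian"),
    Outcome("swerve", fatality_risk=0.9, victim="passenger"),
]
print(choose_maneuver(crash).maneuver)
# Whichever maneuver "wins" was effectively decided when HARM_WEIGHTS was written.
```

With equal weights and equal risks, the function simply returns the first option, which shows the problem plainly: even a tie-breaking rule is an ethical decision someone made at design time.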

If you need proof that this choice is upon us now: Waymo (owned by Google) began testing a driverless rideshare service in Phoenix in December 2018. Meanwhile, Tesla has been selling cars with the hardware needed for full autonomy since 2016. And major automakers expect to achieve autonomous driving by the early 2020s.

Source: Medium, Matthew Biggins