With self-driving cars under development, we must question the ethics behind them. A human driver must make split-second decisions: what to do if a pedestrian runs into the road, if a car pulls in front without warning, or if an animal stops in the middle of the lane. With a self-driving car, the driver is no longer responsible for these decisions, but will the car make the correct one?
To highlight the ethical decisions a self-driving car will have to make, MIT created a simulation that presents thirteen scenarios in which you must decide who lives and who dies. The scenarios model emergencies that can occur while driving, such as pedestrians on both sides of the street crossing when they are not supposed to be.
After you complete the scenarios, the simulation reports whom you most often saved and killed. It also shows what mattered most and least to you, compared with other participants, across several dimensions: protecting passengers versus bystanders, upholding the law, and preferences regarding gender, species, age, fitness, and social value. The results are sent to MIT anonymously, though you can opt out of sharing them. The purpose of the experiment is to gather human perspectives on machine ethics.
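MIT does not publish the scoring code behind the simulation, but the kind of tally it reports is easy to picture. Here is a minimal, hypothetical sketch (the scenario format and attribute labels are my assumptions, not Moral Machine's actual data model) of how thirteen choices could be aggregated into preference scores:

```python
from collections import Counter

# Hypothetical format: each choice records the attributes of the group
# the player saved and the group the player sacrificed.
choices = [
    {"saved": {"species": "human", "age": "young"},
     "killed": {"species": "pet", "age": "adult"}},
    {"saved": {"species": "human", "age": "elderly"},
     "killed": {"species": "human", "age": "young"}},
    # ... eleven more scenarios in the real simulation
]

def preference_scores(choices):
    """Tally how often each attribute value was saved versus killed."""
    saved, killed = Counter(), Counter()
    for c in choices:
        saved.update(c["saved"].values())
        killed.update(c["killed"].values())
    # Positive score: that value was saved more often than it was killed.
    return {value: saved[value] - killed[value]
            for value in saved.keys() | killed.keys()}

print(preference_scores(choices))
# e.g. {'human': 1, 'young': 0, 'elderly': 1, 'pet': -1, 'adult': -1}
```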
This matters because many self-driving cars are in development or already on the road. Tesla has released cars with partial self-driving capability, and Waymo, a former division of Google, is developing a fully self-driving car. How well prepared are these cars for the streets?
On the weekend of January 14, 2017, Tesla pushed a software update to its cars that lets owners ask the artificial intelligence controlling the car to break the speed limit by five miles per hour on undivided roads. On highways, the car can be set to a maximum of ninety miles per hour, regardless of the speed limit. If the purpose of automated cars is to promote safety, should we really allow the “driver” to set the car above the speed deemed safe?
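Tesla's actual control logic is proprietary, but the policy described above amounts to a simple ceiling on the settable speed. The sketch below is purely illustrative; only the five-mph offset and the ninety-mph highway cap come from the article.

```python
def max_settable_speed(posted_limit_mph: float, divided_highway: bool) -> float:
    """Hypothetical cap on the speed a driver can ask the autopilot to hold.

    Mirrors the policy described in the article: posted limit + 5 mph on
    undivided roads, a flat 90 mph ceiling on highways regardless of the
    posted limit. Not Tesla's actual implementation.
    """
    if divided_highway:
        return 90.0
    return posted_limit_mph + 5.0

# On a 55 mph undivided road the driver may set 60 mph;
# on a highway posted at 65 mph the driver may still set 90 mph.
print(max_settable_speed(55, divided_highway=False))  # 60.0
print(max_settable_speed(65, divided_highway=True))   # 90.0
```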
Still, the Tesla has some advantageous safety features. The car can keep itself in the correct lane on highways, and its adaptive cruise control adjusts the car’s speed to match the vehicle in front. But the car ahead may be speeding, so will the Tesla match that illegal speed? Because the car is not fully automated, Tesla said in a statement that “until full autonomy is reached, the driver is responsible for and must remain in control of their car at all times.”
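Whether the car will match an illegal speed is exactly the kind of policy question hidden inside the software. A minimal sketch (hypothetical, not Tesla's code) shows how an adaptive cruise controller could be written either to follow the lead car blindly or to clamp at the legal limit:

```python
def acc_target_speed(lead_speed_mph: float,
                     driver_set_mph: float,
                     posted_limit_mph: float,
                     clamp_to_limit: bool = True) -> float:
    """Choose the speed to hold behind a lead vehicle (hypothetical logic).

    Matches the slower of the lead car and the driver's setting; when
    clamp_to_limit is True, it also refuses to exceed the posted limit,
    even if the car ahead is speeding.
    """
    target = min(lead_speed_mph, driver_set_mph)
    if clamp_to_limit:
        target = min(target, posted_limit_mph)
    return target

# Lead car doing 75 mph in a 65 mph zone, driver setting of 80 mph:
print(acc_target_speed(75, 80, 65))                        # 65 -> stays legal
print(acc_target_speed(75, 80, 65, clamp_to_limit=False))  # 75 -> matches the speeder
```

The design choice sits entirely in that one flag, which is why the question deserves more scrutiny than it gets.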
Meanwhile, full autonomy is on the way from Waymo, a former division of Google. Its hope, as stated on its website, is that “fully self-driving technology… will deliver the biggest impact on improving road safety and mobility for everyone.” So what does this fully self-driving technology consist of? What technology are you trusting with not only your life, but the lives of those around you? The cars carry sensors and software designed to detect pedestrians, cyclists, vehicles, and road work from up to 200 yards away in every direction, and to predict others’ behavior. For example, when the sensors detect a cyclist extending the left arm as a hand signal, the software predicts that the cyclist will move into the left lane, and the car lowers its speed.
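Waymo does not publish its prediction stack, but the cyclist example can be sketched as a simple rule: a detected hand signal maps to a predicted maneuver, and the predicted maneuver maps to a speed adjustment. The signal label and the ten-mph slowdown below are invented placeholders; only the 200-yard detection range comes from Waymo's description.

```python
from dataclasses import dataclass

@dataclass
class DetectedCyclist:
    distance_yards: float
    hand_signal: str  # e.g. "left_arm_extended" -- assumed label

def predict_maneuver(cyclist: DetectedCyclist) -> str:
    """Toy rule-based predictor, standing in for Waymo's learned models."""
    if cyclist.hand_signal == "left_arm_extended":
        return "merge_left"
    return "continue_straight"

def plan_speed(current_mph: float, cyclist: DetectedCyclist) -> float:
    # 200-yard range from Waymo's description; 10 mph slowdown is a placeholder.
    if cyclist.distance_yards <= 200 and predict_maneuver(cyclist) == "merge_left":
        return max(current_mph - 10, 0)
    return current_mph

cyclist = DetectedCyclist(distance_yards=80, hand_signal="left_arm_extended")
print(plan_speed(45, cyclist))  # 35 -> slow down for the predicted lane change
```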
However, can every emergency be detected? Waymo believes its car will be well suited to handle the roads, predict behavior, and avoid catastrophe because of the large amount of real-world driving experience it has accumulated since the project began at Google in 2009. The Waymo car is currently being tested on the streets of four cities, in California, Arizona, Texas, and Washington.
Despite the sensors and the efforts to make automated driving safer, one can only wonder what dire effects it may have on society and the lives around us. Behavior is unpredictable, and it may be detected too late to avoid an emergency. Some of these cars can even be set to break the speed limit. MIT’s simulation helps show that self-driving cars may not be the best option, as they place lives in the hands of a program. How can we be sure that these programs will make the best decision?
Works Cited:
Hern, Alex. “Tesla Allows Self-driving Cars to Break Speed Limit, Again.” The Guardian. Guardian News and Media, 16 Jan. 2017. Web. 20 Jan. 2017. <https://www.theguardian.com/technology/2017/jan/16/tesla-allows-self-driving-cars-to-break-speed-limit-again>.
“Moral Machine – Human Perspectives on Machine Ethics.” Massachusetts Institute of Technology, n.d. Web. <http://moralmachine.mit.edu/>.
“Waymo.” Waymo. Waymo, n.d. Web. 20 Jan. 2017. <https://waymo.com/>.