Over the past 100 years, technology has advanced faster than ever before. Artificial intelligence already seems to be coming to fruition, with self-driving cars once predicted to arrive by 2020. But with this rapid pace of innovation come several unforeseen problems.
So what exactly is a self-driving, or autonomous, car? Essentially, it is an ordinary car without a human driver: it drives entirely on its own and makes decisions of its own accord. A human is still present behind the wheel in case anything goes wrong, but the whole idea is that the car operates without human control. Self-driving cars use a variety of sensors to gather information about their environment and decide what actions to take based on that information (4).
Self-driving cars depend mainly on LIDAR (Light Detection and Ranging). A LIDAR system shoots out beams of light to build a 3D image of the surrounding area, which allows the car to make decisions much more quickly than it could with a video camera alone. LIDAR is very good at detecting lane position and speed, but less effective for closer, more precise maneuvers such as changing lanes or parking (3). This is where radar (Radio Detection and Ranging) comes in; radar units are placed on the bumpers and sides of the car. Autonomous cars also depend on GPS. The more satellites a GPS unit can use, the more accurate it is, and the units in these cars can connect to more than 48 satellites at a time. They also use gyroscopes and accelerometers in case the car passes through a tunnel or canyon where the GPS signal is weak (3).
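The fallback described above can be sketched as a toy position estimator. This is a minimal illustration, not any manufacturer's actual algorithm: the function name `fuse_position` and its inputs are assumptions, and it simply integrates velocity forward (dead reckoning) whenever the GPS fix drops out.

```python
def fuse_position(gps_fix, last_position, velocity, dt):
    """Estimate the car's position, falling back to dead reckoning.

    gps_fix: (x, y) in meters, or None when the signal is blocked
             (e.g., in a tunnel or canyon).
    last_position: last known (x, y) estimate.
    velocity: (vx, vy) in m/s, as derived from accelerometer and
              gyroscope readings.
    dt: seconds elapsed since the last update.
    """
    if gps_fix is not None:
        # Trust the satellite fix whenever one is available.
        return gps_fix
    # No fix: integrate velocity forward from the last estimate.
    x, y = last_position
    vx, vy = velocity
    return (x + vx * dt, y + vy * dt)

# The car enters a tunnel: GPS drops out, but the motion sensors
# keep the position estimate moving forward.
pos = (100.0, 50.0)
pos = fuse_position(None, pos, (20.0, 0.0), 0.5)
print(pos)  # → (110.0, 50.0)
```

A production system would fuse these sources continuously (for example, with a Kalman filter) rather than switching between them, but the principle is the same: inertial sensors bridge the gaps in satellite coverage.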
Obviously, if an innovation requires this many resources and technologies, there must be benefits. The majority of traffic accidents are due to human error: sleep deprivation, intoxication, and inexperience are all avoidable causes, and proponents promise that autonomous cars will eliminate 90% of all traffic accidents (2). Traffic efficiency would also improve greatly, since slowdowns are often caused by a driver making an unexpected lane change or reacting slowly. A self-driving car would make these decisions almost instantly and execute them properly.
As great as self-driving cars sound, many scientists worry about situations that could bring trouble to this industry. One proposed scenario is similar to the trolley problem in ethics (4). That problem poses a dilemma: if you had to choose between killing one person or ten people, how would you decide? It is debated because it questions the worth of one life against another. The same issue arises with autonomous cars. For example, what if an unavoidable accident forces the car to choose between continuing straight and hitting ten people, or swerving and hitting one person? How should it be programmed to respond? In another theoretical situation, an unavoidable accident forces the car to choose between swerving into a motorcyclist and swerving into a wall, killing the driver (1). Logically, the car would swerve into the motorcycle, because that leaves the driver the greatest chance of survival. But morally, how does a person, let alone a machine, make this decision? Do the age and health of those involved play into the choice? What if there are children in the car? The situation only becomes more complex.
These theoretical situations also bring up legal complications. Say a self-driving car does have an unavoidable accident and ends up killing a pedestrian. Many scientists wonder who should be held responsible. Many people propose that the manufacturer be held liable for any faulty designs (4). This, however, causes another problem: if manufacturers create an algorithm that always saves the greatest number of people, is that technically planned homicide, since the decision to kill specific people was effectively made as soon as the car was sold? Many people suggest that these algorithms decide randomly, but then comes the question of leaving a person's life up to chance.
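The two policies contrasted above can be made concrete with a toy sketch. This is purely illustrative and not any real vehicle's logic: the function names and the casualty-count representation are assumptions, chosen only to show how a "save the most people" rule differs from a randomized one.

```python
import random

def utilitarian_choice(options):
    """Pick the maneuver with the fewest expected casualties.

    options: list of (maneuver, casualties) pairs, e.g. the outcomes
    of staying the course versus swerving.
    """
    return min(options, key=lambda o: o[1])[0]

def randomized_choice(options, rng=random):
    """Alternative policy: leave the outcome to chance instead of
    ranking lives against each other."""
    return rng.choice(options)[0]

scenario = [("continue straight", 10), ("swerve", 1)]
print(utilitarian_choice(scenario))  # → swerve
```

The utilitarian rule is deterministic: anyone who reads the code knows in advance which party it will sacrifice, which is exactly what raises the "planned homicide" objection. The randomized rule avoids that predetermination, but only by making a life-or-death outcome a coin flip.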
With self-driving cars on the horizon, these questions only become more pertinent. In the ambiguous world of ethics, tough questions will have to be answered before these cars make their way onto our roads. Although large companies such as Google and Tesla have already entered this industry, the climb toward self-driving cars is slow and steady. Development is taking much longer than expected, and for good reason: the questions to be faced, and the human ingenuity it will take to tackle them, will far exceed what it took to invent the wheel.
1. Bonnefon, J. F., Shariff, A., & Rahwan, I. (2015). Autonomous vehicles need experimental ethics: Are we ready for utilitarian cars? arXiv preprint arXiv:1510.03346.
2. DriveSafely.net. (2016, November 24). Driving statistics and scary facts you must know. Retrieved from https://www.drive-safely.net/driving-statistics/
3. Kavitha, J. C., Venugopal, A., & Pushparani, S. (2016). Algorithm for security in autonomous cars. In International Conference on Computer Applications (Vol. 83, p. 88).
4. Litman, T. (2014). Autonomous vehicle implementation predictions. Victoria Transport Policy Institute, 28.