There's even a race between traditional carmakers and IT companies such as Google or Apple to put an entirely autonomous vehicle on the road by 2020, while everyone and their cousin is actively testing driverless prototypes on various stretches of public road.
In theory, it all sounds fine and dandy, and every carmaker's marketing team is boasting about a future with accident-free driving. Autonomous cars will apparently roam our roads like shoaling ocean fish, moving independently of each other yet working together as a group thanks to car-to-X and car-to-car communication.
In reality, these advancements in technology are happening a bit too fast and, at least in the beginning, driverless cars might prove downright hazardous instead of making our commutes safer. Apart from the obvious legislative hurdles implied by autonomous cars co-existing with regular vehicles and pedestrians, there is also an ethical conundrum, and maybe even a philosophical one.
Isaac Asimov first devised the Three Laws of Robotics back in 1941, when he was writing his short science fiction story "Runaround." Little did he know that some 70 years later we would likely be very close to needing a similar set of laws for cars, not robots.
Coincidentally, the plot of "Runaround" is set in 2015, the year when a downright explosion of autonomous concept cars and prototypes happened in our non-fictional world. German carmakers are currently leading the way in research, the Tesla Model S mentioned above is probably the most advanced semi-autonomous production car, and Toyota has just invested a whole lot of moolah into artificial intelligence.
As it happens, Raul Rojas, a professor of artificial intelligence at the Free University of Berlin, has postulated three laws of autonomous cars, along with a zeroth law, just as Asimov did for robots in several of his books.
- A car may not injure a human being or, through inaction, allow a human being to come to harm.
- A car must obey the traffic rules, except when they would conflict with the First Law.
- A car must obey the orders given to it by human beings, except where such orders would conflict with the First or Second Laws.
- A car must protect its own existence as long as such protection does not conflict with the First, Second or Third Laws.
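The laws above amount to a strict priority ordering over possible maneuvers: a higher law always overrides a lower one. Purely as an illustration, here is a minimal Python sketch of how a planner might rank candidate actions that way. Every name, field, and candidate action below is hypothetical, invented for this example, and not drawn from any real autonomous driving stack.

```python
# Illustrative toy only: the laws as an ordered tuple of constraints.
# All names and data structures here are hypothetical.

def choose_action(candidates):
    """Pick the least objectionable action, treating the laws as
    ordered constraints: earlier tuple elements dominate later ones."""
    def law_priority(action):
        return (
            action["harms_human"],          # Law 1: never harm a human
            action["breaks_traffic_rule"],  # Law 2: obey traffic rules
            not action["obeys_occupant"],   # Law 3: follow human orders
            action["damages_car"],          # Law 4: preserve the car
        )
    # False sorts before True, so min() prefers actions that violate
    # nothing, then actions that only violate the lowest-ranked law.
    return min(candidates, key=law_priority)

candidates = [
    {"name": "brake hard", "harms_human": False,
     "breaks_traffic_rule": False, "obeys_occupant": False,
     "damages_car": False},
    {"name": "cross double line", "harms_human": False,
     "breaks_traffic_rule": True, "obeys_occupant": True,
     "damages_car": False},
    {"name": "keep lane", "harms_human": True,
     "breaks_traffic_rule": False, "obeys_occupant": True,
     "damages_car": False},
]

print(choose_action(candidates)["name"])  # "brake hard": it harms no one
```

Of course, the whole ethical debate in the paragraphs that follow exists precisely because real situations rarely offer a candidate action with a clean `False` in the "harms a human" column.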
With the development of true AI getting closer and closer, it probably won't be long before a modern version of Christine's 1958 Plymouth Fury appears. In theory, the laws above would be enough to keep us safe if they were implemented in the core programming of any future autonomous car, but even that wouldn't make such a car entirely safe.
The most basic questions revolve around hypothetical paradoxes. Will an autonomous vehicle kill or maim its few occupants by swerving off the road to avoid crashing into a school bus with dozens of children on board? And what about simpler, everyday situations, such as crossing a double line to get around double-parked cars, or swerving to avoid a child who jumps in front of the car?
Disobeying traffic laws so that nobody gets hurt is something humans are very good (and bad) at, but what about an artificial brain? Many such questions need answers and solutions before we let driverless cars conquer our roads, because that is exactly what is going to happen by 2030, at the latest.
Most states where autonomous cars are legally free to roam public streets require a human driver to be on board, ready to take control whenever the situation demands it, but legislation varies and nothing is set in stone for what the future may bring.
Personally, I'm not exactly pro autonomous cars, but I'm very much for cars with autonomous features, which I consider simply the next obvious step in the evolution of personal transportation. Humans can lie, cheat, hurt, and so on, but these flaws are also what make us capable of bending the law to protect ourselves or someone else.
Maybe carmakers shouldn't remove us entirely from behind the wheel, no matter how safe that may sound.