Inside a self-driving car
These days it is especially exciting to watch history being made in the global Artificial Intelligence workshop: driverless buses have begun test runs in Finland’s capital, Helsinki; nineteen of the world’s major car makers have declared that their fully autonomous cars will hit the roads by 2020; and renowned AI experts are discussing both the benefits that self-driving cars will bring to traffic safety (“How Self-Driving Cars Will Choose Between Life or Death”) and the need for moral consideration of how AI-empowered machines behave.
The choice between “life or death” is not a difficult one to solve in terms of human morality. The more challenging moral dilemmas come in everyday life where we are not given that type of choice.
Let’s have a look at a hypothetical situation on the highway:
The car marked “A” in the drawing below is equipped with an AI-managed self-driving system. Suddenly, an accident occurs somewhere ahead of car “A”, and its computer driver immediately calculates that car “B” in front will brake sharply and stop in two seconds.
Within that two-second interval, the AI driver has to choose and execute one of the following options:
Option 1: Change nothing. In this case, given the high speed, the following six people will be heavily injured, some of them perhaps fatally: A1, A2, A3, B6, B7 and B8.
Option 2: Turn sharp right. As a result, a different set of six people – A1, A4, A6, C5, C7, and C8 – will get injured.
Option 3: Turn sharp left. This choice endangers passengers A2, A3, A5, E1, E2, and E4.
Option 4: Brake sharply. That would endanger those in seats A6, A7, A8, D1, D2, and D3.
The AI calculates that a collision is inevitable, and it knows the identities of all 24 potentially endangered passengers, among whom are a high school student, a renowned neurosurgeon in his thirties, a baseball star, a terminally ill senior woman, a senator, a young mother breastfeeding a baby, a marine going on a trip with his girlfriend, a successful businessman, a prisoner just released after serving a sentence for a serious crime, a university professor, and others, along with exact information about who sits where.
How do we program the AI driver to process the above situation? Or, in moral terms, how do we, the humans, take on the responsibility of teaching the self-driving computer to solve a “Who To Kill” equation?
Take your time (our brains are not that fast) and try to decide what your choice would be in the above situation. Study carefully the available information about each of the 24 potentially endangered passengers: they all have names, social status, professions, families, friends, dreams. They are all still alive, but that is about to change in two seconds, because six of them have no future.
Having somehow made your choice, ask your friend Alex to do the same exercise (Alex is a great baseball fan), or your colleague Jane, who along the way would discover that the marine’s girlfriend is in fact her beloved niece! Or ask the local municipality for their opinion, or the Congress. Put it on Facebook, or organize a small referendum. I bet that at every turn you will receive different answers, answers that are crucial for the 24 men, women and kids involved in the drama on the highway. And all 24 have absolutely equal rights to stay alive in circumstances that make this impossible.
And there is an escapist fifth option: for controversial situations like this, in order to avoid moral responsibility, engineers can build a “Russian roulette” into the self-driving program – a random decision that picks one of the above options. But then, in the same situation, the split-second spontaneous reaction of a human driver might prove to be better.
The brutal reality is that you cannot solve the “Who To Kill” equation in terms that allow the AI self-driving system to make that final, irrevocable and irreparable existential decision within two seconds, without providing the AI with information about the human value of each of the 24 people potentially involved in the crash. Twenty-four simple numbers are all the machine needs in order to do its job. But who will provide those 24 simple figures?
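To make that point concrete, here is a minimal sketch in Python of what the machine’s side of the problem would look like, assuming someone could supply the value scores. The option-to-passenger mapping follows the four options described above (the seat list for the braking option is assumed here, since the text’s list appears to contain a typo); the value figures are random placeholders, which is precisely the problem: the arithmetic is trivial, and everything hard lives in the numbers nobody can legitimately provide.

```python
import random

# The four options and the passengers each one endangers, as listed in the text.
# (Option 4's list is assumed to be A6, A7, A8; the text repeats A6.)
OPTIONS = {
    "change nothing": ["A1", "A2", "A3", "B6", "B7", "B8"],
    "turn right":     ["A1", "A4", "A6", "C5", "C7", "C8"],
    "turn left":      ["A2", "A3", "A5", "E1", "E2", "E4"],
    "brake sharply":  ["A6", "A7", "A8", "D1", "D2", "D3"],
}

# Every passenger endangered by at least one option.
endangered = sorted({p for seats in OPTIONS.values() for p in seats})

# Hypothetical "human value" scores -- the simple numbers the text says the
# machine would need. Random placeholders stand in for figures no one can
# morally supply.
values = {p: random.random() for p in endangered}

def least_cost_option(options, values):
    """Return the option whose endangered passengers sum to the lowest total value."""
    return min(options, key=lambda opt: sum(values[p] for p in options[opt]))

choice = least_cost_option(OPTIONS, values)
```

Once the scores exist, the decision reduces to a one-line minimization, which is exactly why the essay locates the entire moral burden in producing the scores, not in the computation.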
The very idea of designing a methodology and metrics that could assess the value of a living human being through a mathematical process, one based on weighing dynamic, unquantifiable characteristics that exist and evolve in the irrational dimensions of human morality, would be a challenge beyond the scope of contemporary political philosophy. And that is exactly what makes the idea more attractive, even if discussions of this aspect of human morality will most probably begin as a huge battlefield where philosophical, religious, anthropological, axiological, racial, legal and other concepts clash in epic fights.
One direction for contemplation would be to look back into human history. As a result of a more or less spontaneous process, over the millennia people have built a kind of civilizational values matrix that has already assessed the value of numerous human beings such as Einstein, Van Gogh, Mozart, Da Vinci, Mother Theresa, Pushkin, Archimedes, Alexander Fleming, Alexander Bell, as well as thousands more bright minds that Humankind will always be proud of. For one reason only: in their lives these people exercised a titanic capacity to create civilizational values that changed, hugely and for the better, the lives of millions of people on the planet for generations ahead. They all had an extremely high quotient of capacity to create civilizational values – CCCVQ – and exactly that is what earned them the unique appreciation of the rest of the people.
Obviously, at this point in time we do not seem prepared to solve the equation with the five cars on the highway.
Could we, then, start with a simpler one, projected onto a real case from nineteenth-century France?
Evariste Galois, a mathematics prodigy, made fundamental discoveries in his late teens in the theory of polynomial equations, settling questions that had remained open for centuries. He was also a passionate activist for the republican cause and against the king. Galois did not live long: on May 30, 1832, at the age of twenty, he was killed in a duel. There is no clear record even of the name of the man who killed Galois, but presumably it was a fair fight, with the rules observed, and each of the two men had the right to defend his honour and kill the other.
If an AI system were to decide the outcome of that duel on May 30, 1832, what algorithm would you suggest the computer follow in order to have spared the life of Evariste Galois for the benefit of the generations to come?