Artificial Intelligence and the “Who To Kill” Equation


Inside a self-driving car

These days it is a special excitement to watch history being made in the global Artificial Intelligence workshop: driverless buses have started test runs in Finland’s capital, Helsinki; nineteen of the world’s major car makers have declared that their fully autonomous cars will be on the roads by 2020; and renowned AI experts are discussing both the benefits that self-driving cars will bring to traffic safety (“How Self-Driving Cars Will Choose Between Life or Death”) and the need for moral consideration of how AI-empowered machines behave.

The choice between “life or death” is not a difficult one in terms of human morality. The more challenging moral dilemmas arise in everyday life, where we are not given that kind of choice.

Let’s have a look at a hypothetical situation on the highway:

The car marked “A” in the drawing below is equipped with an AI-managed self-driving system. Suddenly, an accident occurs somewhere ahead of car “A”, and its computer driver immediately calculates that car “B” in front will brake sharply and stop in two seconds.


Within that two-second interval the AI driver has to choose and execute one of the following options:

Option 1: Change nothing. In this case, given the high speed, the following six people will be heavily injured, some of them perhaps fatally: A1, A2, A3, B6, B7 and B8.

Option 2: Turn sharply right. As a result, another set of six people – A1, A4, A6, C5, C7 and C8 – will be injured.

Option 3: Turn sharply left. This choice endangers passengers A2, A3, A5, E1, E2 and E4.

Option 4: Brake sharply. That would endanger those in seats A6, A7, A8, D1, D2 and D3.

The AI calculates that a collision is inevitable, and it knows the identities of all 24 potentially endangered passengers – and exactly who sits where. Among them are a high school student, a renowned neurosurgeon in his thirties, a baseball star, a terminally ill senior woman, a senator, a young mother breastfeeding a baby, a marine going on a trip with his girlfriend, a successful businessman, a prisoner just released after serving a sentence for a serious crime, a university professor, and more.

How do we program the AI driver to handle the above situation? Or, in moral terms, how do we humans take the responsibility of teaching the self-driving car’s computer to solve a “Who To Kill” equation?

Take your time (our brains are not that fast) and try to decide what your choice would be in the above situation. Study carefully the available information about each of the 24 potentially endangered passengers: they all have names, social status, professions, families, friends, dreams. They are all still alive, but that is about to change in two seconds, because six of them have no future.

Having somehow made your choice, ask your friend Alex to do the same exercise (Alex is a great baseball fan), or your colleague Jane, who would along the way discover that the marine’s girlfriend is in fact her beloved niece! Or ask the local municipality for their opinion, or the Congress. Put it on Facebook, or organize a small referendum. I bet you will keep receiving different answers – answers that are crucial for the 24 men, women and children involved in the drama on the highway. And all 24 have absolutely equal rights to stay alive in circumstances that make this impossible.

And there is an escapist fifth option: for controversial situations like this, in order to avoid moral responsibility, engineers can build a “Russian roulette” into the self-driving program – a random decision that picks one of the above options. But then, in the same situation, the split-second spontaneous reaction of a human driver might prove to be better.

The brutal reality is that you cannot solve the “Who To Kill” equation in terms that allow the AI self-driving system to make that final, irrevocable and irreparable existential decision within two seconds without providing the AI with information about the human value of each of the 24 people potentially involved in the crash. Twenty-four simple numbers are all the machine needs to do its job. But who will provide those 24 simple figures?
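If such value figures ever existed, the selection machinery itself would be trivial. Here is a minimal sketch in Python, under heavy assumptions: the seat labels are taken from the four options above, the value scores are arbitrary placeholders (they are precisely the numbers the essay argues nobody is entitled to supply), and the option names are hypothetical. The “Russian roulette” fallback from the fifth option is included for contrast.

```python
import random

# Placeholder "human value" scores for the seats named in the four options.
# These numbers are exactly what no one can legitimately provide.
passenger_value = {
    "A1": 1.0, "A2": 1.0, "A3": 1.0, "A4": 1.0, "A5": 1.0, "A6": 1.0,
    "A7": 1.0, "A8": 1.0, "B6": 1.0, "B7": 1.0, "B8": 1.0, "C5": 1.0,
    "C7": 1.0, "C8": 1.0, "D1": 1.0, "D2": 1.0, "D3": 1.0, "E1": 1.0,
    "E2": 1.0, "E4": 1.0,
}

# Each option endangers a different set of six seats (from the scenario).
options = {
    "change_nothing": ["A1", "A2", "A3", "B6", "B7", "B8"],
    "turn_right":     ["A1", "A4", "A6", "C5", "C7", "C8"],
    "turn_left":      ["A2", "A3", "A5", "E1", "E2", "E4"],
    "brake_sharply":  ["A6", "A7", "A8", "D1", "D2", "D3"],
}

def choose_option(options, values):
    """Pick the option with the lowest total 'value' placed at risk."""
    return min(options, key=lambda o: sum(values[s] for s in options[o]))

def russian_roulette(options):
    """The escapist fifth option: decide at random."""
    return random.choice(list(options))
```

With all values equal, every option sums to the same total and the choice is arbitrary; the machinery only becomes decisive once the 24 figures differ, which is exactly where the moral problem begins.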

The very idea of designing a methodology and metrics that could assess the value of a living human being through a mathematical process – one based on weighing dynamic, unquantifiable characteristics that exist and evolve in the irrational dimensions of human morality – would be a challenge beyond the scope of contemporary political philosophy. And that is exactly what makes the idea more attractive, even if the discussions on this aspect of human morality will most probably begin as a huge battlefield where philosophical, religious, anthropological, axiological, racial, legal and other concepts clash in epic fights.

One direction for contemplation would be to look back into human history. As a result of a more or less spontaneous process, over the millennia people have built up a sort of civilizational values matrix that has already assessed the value of numerous human beings: Einstein, Van Gogh, Mozart, Da Vinci, Mother Teresa, Pushkin, Archimedes, Alexander Fleming, Alexander Bell, and thousands more bright minds that humankind will always be proud of. For one reason only: in their lives these people exercised a titanic capacity to create civilizational values that hugely changed, in a positive way, the lives of millions of people on the planet for generations ahead. They all had an extremely high quotient of capacity to create civilizational values – CCCVQ – and exactly that is what earned them the unique appreciation of the rest of the people.

Obviously, at this point in time we do not seem prepared to solve the equation with the five cars on the highway.

Could we, then, start with a simpler one, projected onto a real case from 19th-century France?

Evariste Galois, a mathematical prodigy, made fundamental discoveries in his late teens in the theory of polynomial equations, solving problems that had resisted solution for centuries. He was also a passionate activist for the republican idea and against the king. Galois did not live long: on May 30, 1832, at the age of twenty, he was killed in a duel. There is no clear record even of the name of the man who killed Galois, but presumably it was a fair fight, with the rules observed, and each of the two men had the right to defend his honour and kill the other.

If an AI system were to decide the outcome of that duel on May 30, 1832, what algorithm would you suggest the computer follow in order to spare the life of Evariste Galois for the benefit of the generations to come?


Evariste Galois


About Lubomir Todorov

The years of work and study I have spent in countries of diverse anthropological, political and cultural realities – Japan, Australia, Russia, the Netherlands, New Zealand, Czechia – and my numerous visits across the globe were abundant in meetings and conversations not only with politicians, government officials and businessmen, but also with people of every existential background, ethnicity, education and profession, philosophy and religion, social status and political inclination: aboriginal painters in Australia, US generals, Japanese Buddhist and Shinto priests, Majesties and members of Royal families, Russian scientists, street Mapuche musicians in Chile, British parliamentarians, Indian philosophers, Czech university professors, CEOs of top Japanese corporations, Chinese social sciences researchers, Dutch entrepreneurs – to mention just a few. Over time all this primary data, intertwined with the everyday inflow of information about human activities all over the world and their real-time consequences, was re-assimilated in my mind through the tools of philosophy, international relations, ethics, biology, economics, history and political science, gradually forming a distinct resolution to re-examine my understanding of politics and the nature of human society. Making constant, hard efforts to keep the methodology machine uncontaminated by ideological, political or personal prejudice, I clung to one rule: if you aspire to the beauty of truth, only facts and logic matter. Questioning the viability of human social and political behaviour and the capacity of existing political systems to lead to where people want to be, I believe that sooner rather than later, in pursuit of sustainable prosperity, our humanity will embrace its ultimate imperative of Civilizational Thinking.
Human Civilization is the spiritual dimension of the homo sapiens group survival strategy, one that urges humans to mutually defend their strategic self-interest by generating Civilizational Values; and the optimal political environment for that is Universal Future: a global multifaceted platform on which all polities, in their ideological, political, national, cultural, ethnic, religious, racial and other diversity, exist, interrelate and compete with each other on a non-violent basis.
This entry was posted in Artificial Intelligence, Brutal Logic.
