
Can AI Bots Make Life-and-Death Decisions?

Artificial intelligence (AI) bots pervade every aspect of human activity, and their performance often exceeds human baselines. Yet MIT's Moral Machine, an application with a user interface that simulates unsolvable moral dilemmas, shows the limits of that performance. Fortunately, AI researchers have found ways to keep AI bots reliable in life-and-death situations.

By Denis Rothman
November 6, 2020

Artificial intelligence (AI) bots, though highly efficient, cannot solve moral dilemmas in life-and-death situations. The Moral Machine, created by the Massachusetts Institute of Technology (MIT), highlights these AI limits. Autopilots and self-driving cars stop short of making critical decisions. Hospital AI scheduling software cannot decide which patient to admit when there are not enough resources for all patients.

The Moral Machine

MIT created the Moral Machine to represent moral dilemmas. The goal of the experiment is to understand how these dilemmas arise in any situation.

A moral dilemma occurs in a life-and-death situation in which, for example, either answer leads to one person's death and another person's survival. The human or AI bot making the decision is trapped in a catch-22: no matter what choice is made, somebody will die.

A typical moral dilemma in a catch-22 situation can occur during a famine. A parent might have to choose which of two children will survive if there is only food for one child.

It is impossible to deploy AI bots on a large scale without taking moral dilemmas into account. Coronavirus treatments and autopilots involve critical dilemmas for AI as well as humans.

COVID-19 and the moral limits of AI

A moral dilemma can occur with a standard artificial intelligence bot optimizing resource management in a hospital during the COVID-19 pandemic. A typical AI advanced planning and scheduling (APS) bot will manage doctors, ventilators, and other standard resources. Such a system generally works well in a hospital and could be in service for several years.

However, a critical situation arises when COVID-19 patients of all ages and medical conditions come to the hospital in overwhelming numbers. The AI system cannot schedule all of the patients. In a standard scheduling system, such as an APS, the AI bot automatically begins to prioritize tasks. Prioritizing immediately leads to moral dilemmas.

The AI bot could prioritize younger patients because they have a higher survival rate, optimizing the hospital's efficiency, or it could exclude patients with a low survival probability. Either way, it would only produce controversial decisions.

The conclusion is that an AI bot cannot solve a moral dilemma that humans cannot solve. We cannot allow an AI bot in a hospital to make life-and-death decisions.
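To see why, here is a minimal sketch with purely hypothetical patients, survival estimates, and capacity figures. A naive APS-style prioritizer simply ranks patients by an estimated survival probability and drops whoever falls below the cutoff, which is exactly the kind of controversial decision described above. It illustrates the problem; it is not a recommendation.

```python
from dataclasses import dataclass

# Toy illustration with hypothetical data: a naive scheduler that ranks
# patients by estimated survival probability when ventilators run short.
# The point is to expose the controversial decision it makes, not to endorse it.

@dataclass
class Patient:
    name: str
    estimated_survival: float  # hypothetical model output

def naive_prioritize(patients, ventilators_available):
    ranked = sorted(patients, key=lambda p: p.estimated_survival, reverse=True)
    admitted = ranked[:ventilators_available]
    excluded = ranked[ventilators_available:]
    return admitted, excluded

patients = [
    Patient("Patient A", 0.85),
    Patient("Patient B", 0.60),
    Patient("Patient C", 0.40),
]

admitted, excluded = naive_prioritize(patients, ventilators_available=2)
print("Admitted:", [p.name for p in admitted])   # the bot's "optimal" choice
print("Excluded:", [p.name for p in excluded])   # the life-and-death decision
```

The scheduler runs without error, but the exclusion list is a life-and-death decision that no optimization criterion can justify, which is why it must be left to humans.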

You can explore the Moral Machine and test your own reactions as if you were a hospital manager facing a moral dilemma: deciding which COVID-19 patients you will treat and which you will not.

Let us now add another moral dilemma to the problems an AI bot must solve.

Autopilots and the moral limits of AI

Imagine a state-of-the-art self-driving car with the best AI-driving autopilot on the market. This autonomous vehicle (AV) has an excellent safety record in the past year, and hundreds of thousands of units have been shipped worldwide. Nobody could dream of a better car.

The autopilot also uses specific AI algorithms, such as those built on the Markov decision process (MDP). An MDP-based algorithm requires no preset rules and no labeled data to learn from. It uses rewards to learn, just as we do, through trial and error. In this type of state-of-the-art AV, the driver will be able to read, watch a video, or have dinner while going from location A to location B.
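To show what reward-driven, trial-and-error learning looks like in practice, here is a minimal Q-learning sketch on a toy MDP. The states, actions, and reward values are hypothetical illustrations, not an actual autopilot model; the point is that the agent learns a policy from rewards alone, with no preset driving rules and no labeled data.

```python
import random

# Minimal Q-learning sketch on a toy MDP. States, actions, and rewards are
# hypothetical: the agent learns purely from the reward signal by trial and error.

states = ["cruising", "pedestrian_ahead", "clear_road"]
actions = ["keep_lane", "brake", "change_lane"]

# Hypothetical reward table: braking for a pedestrian is rewarded,
# keeping the lane when a pedestrian is ahead is heavily penalized.
rewards = {
    ("pedestrian_ahead", "brake"): 10,
    ("pedestrian_ahead", "keep_lane"): -100,
    ("pedestrian_ahead", "change_lane"): -5,
    ("cruising", "keep_lane"): 1,
    ("clear_road", "keep_lane"): 1,
}

Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(5000):
    state = random.choice(states)
    # Epsilon-greedy: explore sometimes, otherwise exploit learned values.
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    reward = rewards.get((state, action), 0)
    next_state = random.choice(states)  # simplified random transition
    best_next = max(Q[(next_state, a)] for a in actions)
    # Q-learning update: adjust the value estimate toward the observed reward.
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

print(max(actions, key=lambda a: Q[("pedestrian_ahead", a)]))  # typically "brake"
```

After training, the highest-valued action in the pedestrian_ahead state is typically brake, learned from rewards alone rather than from any explicit rule.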

However, we do not live in a perfect world. If we introduce the unpredictable behavior of pedestrians crossing a road into this situation, the AI bot will face a dilemma.

The AI bot might detect a pedestrian crossing the street unexpectedly, decide that there is not enough time to stop the car, and swerve into another pedestrian. What would a human do? A human could change lanes but might then hit another car or another pedestrian. How do you choose which pedestrian to hit: the one directly in the car's path, or the unsuspecting one in the next lane?

You can join the millions of viewers who have explored the Moral Machine and test your reactions to the dilemmas self-driving cars face.

When faced with a moral dilemma in an urgent, life-and-death situation, an AI bot can go wrong. It stumbles and will make errors.

Fortunately, AI researchers have found ways to ensure that AI bots do not make life-and-death decisions.

Wisdom-learning AI bots

AI bots cannot solve moral dilemmas that humans cannot solve. AI cannot decide to run over a pedestrian to avoid another one. AI cannot choose to admit COVID-19 patients based on age, survival rate, or any other controversial criteria.

Teaching AI the value of human wisdom in three steps is an efficient way to handle moral dilemmas. The first step is an explainable AI (XAI) user interface that displays the precise parameters an AI algorithm takes into account when making a decision.
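As a rough illustration of this first step, here is a minimal sketch in which the interface exposes every parameter and its weighted contribution to a risk score. The parameter names and weights are hypothetical; a real XAI interface would surface the actual inputs of the deployed model.

```python
# Minimal XAI sketch: expose the exact parameters and weighted contributions
# behind a decision so a human can inspect them. All names and weights here
# are hypothetical illustrations, not a real autopilot model.

def explain_decision(inputs, weights):
    contributions = {name: round(inputs[name] * weights[name], 2) for name in inputs}
    risk_score = round(sum(contributions.values()), 2)
    return {"risk_score": risk_score, "contributions": contributions}

# Hypothetical sensor readings and weights for illustration only.
inputs = {"distance_to_pedestrian_m": 12.0, "speed_kmh": 50.0, "braking_time_s": 1.8}
weights = {"distance_to_pedestrian_m": -0.3, "speed_kmh": 0.1, "braking_time_s": 2.0}

print(explain_decision(inputs, weights))
# {'risk_score': 5.0, 'contributions': {'distance_to_pedestrian_m': -3.6,
#  'speed_kmh': 5.0, 'braking_time_s': 3.6}}
```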

The second step is to implement a shut-off mechanism on AI autopilots and AI hospital scheduling bots that triggers as soon as a perilous situation is detected. The shut-off mechanism turns the automatic process off.
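Here is a minimal sketch of such a mechanism, assuming a hypothetical is_perilous() check and sensor fields: the moment the check fires, the bot stops acting automatically and hands control back to a human.

```python
# Minimal shut-off sketch. The is_perilous() rule and sensor fields are
# hypothetical; the idea is simply that automation stops the moment a
# perilous situation is detected and a human takes over.

def is_perilous(sensor_data):
    # Hypothetical rule: a pedestrian too close at speed is a perilous situation.
    return sensor_data["pedestrian_distance_m"] < 10 and sensor_data["speed_kmh"] > 30

def autopilot_step(sensor_data):
    if is_perilous(sensor_data):
        return {"mode": "manual", "reason": "perilous situation: human takes over"}
    return {"mode": "auto", "action": "keep_lane"}

print(autopilot_step({"pedestrian_distance_m": 8, "speed_kmh": 45}))
# {'mode': 'manual', 'reason': 'perilous situation: human takes over'}
```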

The third step consists of implementing AI-driven, real-time advice and displaying several options for a user to choose from. In this step, AI is used to predict that the shut-off mechanism needs to be activated long before the dilemma occurs. For an autopilot, AI can analyze the traffic in a given area and recommend another itinerary. For a scheduler, AI can detect a lack of resources and find other hospitals with available personnel and equipment.
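A minimal sketch of this third step, with hypothetical thresholds and options: the system predicts, well before any dilemma arises, that the shut-off will be needed and presents alternatives for a human to choose from.

```python
# Minimal sketch of AI-driven, real-time advice. Thresholds and options are
# hypothetical; the point is to anticipate the shut-off and offer choices
# to a human long before the dilemma occurs.

def advise(context):
    if context["kind"] == "autopilot" and context["traffic_density"] > 0.8:
        return ["reroute via a lower-traffic itinerary",
                "hand control back to the driver now"]
    if context["kind"] == "scheduler" and context["demand"] > context["capacity"]:
        return ["transfer patients to hospitals with available resources",
                "escalate admission decisions to medical staff"]
    return ["continue automatic operation"]

print(advise({"kind": "autopilot", "traffic_density": 0.9}))
print(advise({"kind": "scheduler", "demand": 120, "capacity": 80}))
```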

However, we must be willing to accept that sometimes there is simply no solution for AI to solve a moral dilemma.

Conclusion

Bias has always been considered the main problem AI faces. But bias can be fixed with an ethical approach and goodwill. A life-and-death moral dilemma, however, cannot be solved, no matter how hard AI researchers try.

In some cases, wisdom-driven workarounds will improve the performance of an AI bot. Sometimes, there is no solution at all, just as for humans. In that case, we need to implement a wisdom-driven shut-off mechanism and admit that AI cannot solve all of our problems.

We humans will remain useful for many years to come, after all.