How Do We Judge Good From Bad?

Throughout life we have to make various choices, many of which boil down to a binary of good versus bad.  Some of these choices are very much moral judgments, however, and with researchers working on AI technologies designed to replicate such decisions, it’s vital that we better understand how we make these moral choices.

A recent study from North Carolina State University attempts to shed some light on the matter.  The project builds upon previous work revolving around the Agent Deed Consequence (ADC) model, which was created in 2014, and aims to corroborate the model in various realistic scenarios.

“This work is important because it provides a framework that can be used to help us determine when the ends may justify the means, or when they may not,” the researchers say. “This has implications for clinical assessments, such as recognizing deficits in psychopathy, and technological applications, such as AI programming.”

Moral judgments

Understanding moral judgments is a fundamentally difficult endeavor, as there is seldom a black-and-white divide.  For instance, whilst most would argue that lying is immoral, there are nonetheless circumstances where it is certainly moral.

The ADC model attempts to address this by taking three things into account when assessing the morality of a judgment: the agent (the intentions and character of the person acting), the deed (the action itself) and the consequence (the outcome it produces).

“This approach allows us to explain not only the variability in the moral status of lying, but also the flip side: that telling the truth can be immoral if it is done maliciously and causes harm,” the researchers explain.
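
To make that structure concrete, here is a minimal sketch of how a scenario might be scored along the three ADC dimensions.  This is purely illustrative: the field names, the -1 to +1 scale and the example values are assumptions made for the sketch, not part of the published model.

```python
from dataclasses import dataclass

@dataclass
class MoralScenario:
    """Illustrative container for the three ADC components.

    Each component is scored on a simple -1 (negative) to +1 (positive)
    scale; the scale and field names are assumptions for this sketch.
    """
    agent: float        # intentions/character of the actor (malice vs. benevolence)
    deed: float         # the action itself (e.g. lying vs. truth-telling)
    consequence: float  # the outcome (harm vs. benefit)

# A "white lie": bad deed (lying), good intent, good outcome.
white_lie = MoralScenario(agent=0.8, deed=-0.6, consequence=0.7)

# Malicious truth-telling: acceptable deed on its face, bad intent, harmful outcome.
cruel_truth = MoralScenario(agent=-0.8, deed=0.6, consequence=-0.7)
```

Representing scenarios this way makes the flip side visible: the same deed score can sit alongside very different agent and consequence scores.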

The model was tested under a range of scenarios that had been evaluated and validated by a group of 141 professional philosophers with specific training in ethics.  The scenarios were then presented to more than a thousand volunteers across two experiments.  In the first of these the stakes were relatively low, whereas in the second they were raised, with outcomes potentially involving severe injury or even death.

When the stakes were low, the nature of the deed was the key factor in how the morality of the choice was judged.  In other words, whether the agent was being truthful mattered most, regardless of the outcome.  This flipped around when the stakes were raised, however, with the consequences of the decision then becoming the most important factor.
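
One speculative way to express that flip is to weight the three ADC components by the stakes involved, continuing the illustrative MoralScenario sketch above.  The specific weights are arbitrary assumptions chosen to mirror the reported pattern, not anything the researchers fitted.

```python
def moral_judgment(scenario: MoralScenario, high_stakes: bool) -> float:
    """Combine ADC scores into a single judgment (illustrative only).

    The pattern mirrors the study's finding: at low stakes the deed
    dominates; at high stakes the consequence dominates.  The specific
    weights are arbitrary assumptions for this sketch.
    """
    if high_stakes:
        weights = {"agent": 0.2, "deed": 0.2, "consequence": 0.6}
    else:
        weights = {"agent": 0.2, "deed": 0.6, "consequence": 0.2}
    return (weights["agent"] * scenario.agent
            + weights["deed"] * scenario.deed
            + weights["consequence"] * scenario.consequence)

print(moral_judgment(white_lie, high_stakes=False))  # -0.06: the lie dominates
print(moral_judgment(white_lie, high_stakes=True))   #  0.46: the outcome dominates
```

Run on the white-lie scenario above, the low-stakes judgment comes out negative because the deed carries most of the weight, while the high-stakes judgment comes out positive because the good outcome does.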

“For instance, the possibility of saving numerous lives seems to be able to justify less than savory actions, such as the use of violence, or motivations for action, such as greed, in certain conditions,” the authors say.  “The findings from the study showed that philosophers and the general public made moral judgments in similar ways. This indicates that the structure of moral intuition is the same, regardless of whether one has training in ethics.”

In other words, whether trained in ethics or not, we tend to make snap moral judgments in much the same way.  The team believes the findings validate the model’s value in helping us to understand moral psychology and ethics.  This may in turn have valuable implications for the development of new technologies, such as artificial intelligence and autonomous vehicles.  They believe the ADC model could underpin the cognitive architecture of such technologies, and exploring this is the next step for the project.

Given the strong desire to ensure that AI-driven technologies are developed in an ethical manner, this is perhaps a project to follow with interest.
