Can AI Shed Light On How We Make Choices In Groups?

The way we make decisions is seldom simple, and a new study from the University of Washington suggests that tools from AI may shed light on how those decisions are made.

The research suggests that when we’re in a group, even a largely anonymous one, we make choices based upon a mental model of the group’s collective mind. This finding emerged from a mathematical framework with roots in AI and robotics.

“Our results are particularly interesting in light of the increasing role of social media in dictating how humans behave as members of particular groups,” the researchers explain.  “In online forums and social media groups, the combined actions of anonymous group members can influence your next action, and conversely, your own action can change the future behavior of the entire group.”

Predicting future states

Much of our behavior revolves around predictions we make about the future state of our environment. These predictions carry uncertainty, which naturally increases in social contexts. In such circumstances, we tend to form a model of another person’s mind, a capacity known as theory of mind.

Doing this successfully is complicated in group scenarios, so we tend to aggregate ‘theories of mind’ into a single model of the entire group based upon what we know about it. To explore how this works, the researchers focused their attention on something known as the ‘volunteer’s dilemma’ task, in which a few members of the group incur a cost so that the whole group benefits. Blood donation is a common real-world example.

The experiment began by placing volunteers into an MRI scanner and asking them to play a public goods game. In the game, the contribution each volunteer made to the communal pot influenced the amount everyone in the group received. It’s a game in which ‘free riding’, benefiting from the pot without contributing to it, is possible.
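To make the incentive structure concrete, here is a minimal sketch of one round of a generic public goods game. The endowment and multiplier values are illustrative assumptions, not the parameters used in the study:

```python
# A minimal sketch of one round of a public goods game. The endowment and
# multiplier below are illustrative assumptions, not the study's values.

def public_goods_round(contributions, endowment=10.0, multiplier=1.6):
    """Each player starts with `endowment`, contributes some amount to a
    communal pot, and the multiplied pot is split evenly among everyone."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    # A player's payoff is whatever they kept plus their share of the pot.
    return [endowment - c + share for c in contributions]

# Free riding: contributing nothing while others contribute yields the
# highest individual payoff, even though full cooperation is best overall.
payoffs = public_goods_round([0.0, 10.0, 10.0, 10.0])
print(payoffs)  # the free rider (first player) earns the most
```

With these numbers, the free rider walks away with 22 while each contributor gets 12, which is precisely the tension the game is designed to expose.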

“We can almost get a glimpse into a human mind and analyze its underlying computational mechanism for making collective decisions,” the researchers say. “When interacting with a large number of people, we found that humans try to predict future group interactions based on a model of an average group member’s intention. Importantly, they also know that their own actions can influence the group. For example, they are aware that even though they are anonymous to others, their selfish behavior would decrease collaboration in the group in future interactions and possibly bring undesired outcomes.”

Modeling decision making

The researchers attempted to assign mathematical variables to the decisions made in the game and to use these to develop a model that could accurately predict the choices people would make.

When the model was tested, it was indeed able to do so with reasonable accuracy, proving more effective than reinforcement learning models and other more traditional approaches. The researchers believe this is because the model provides a quantitative explanation of human behavior, which may in turn render it valuable for use in technologies that interact with humans.
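To illustrate the general idea, rather than the authors’ actual model, here is a minimal sketch in which a player tracks a belief about the ‘average group member’s’ probability of cooperating, updates it after each round, and factors in how their own action might shift the group’s future behavior. The update rule, influence term, and threshold are all assumptions made for illustration:

```python
# A minimal sketch of the general idea: maintain a belief about the "average
# group member's" cooperation rate, update it from observations, and account
# for how one's own action may influence the group. The specific update rule
# and influence term are illustrative assumptions, not the study's model.

def update_belief(belief, observed_coop_rate, learning_rate=0.2):
    """Nudge the believed cooperation rate toward what was just observed."""
    return belief + learning_rate * (observed_coop_rate - belief)

def choose_action(belief, influence=0.1, threshold=0.5):
    """Cooperate if the projected future group cooperation, including the
    lift from one's own cooperative example, clears a threshold."""
    projected_if_cooperate = min(1.0, belief + influence)
    return "cooperate" if projected_if_cooperate >= threshold else "defect"

# Simulated rounds: the belief rises when the group cooperates, falls when
# it doesn't, and the player's choice tracks that belief.
belief = 0.5
for observed in [0.8, 0.7, 0.3, 0.2]:
    action = choose_action(belief)
    print(f"belief={belief:.2f} -> {action}")
    belief = update_belief(belief, observed)
```

The key feature this toy captures is the one the researchers highlight: the agent’s choice depends not just on what the group did, but on a prediction of what the group will do next, including the agent’s own influence on it.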

“In scenarios where a machine or software is interacting with large groups of people, our results may hold some lessons for AI,” the researchers conclude. “A machine that simulates the ‘mind of a group’ and simulates how its actions affect the group may lead to a more human-friendly AI whose behavior is better aligned with the values of humans.”
