People Poor At Spotting Political Bots

Social media users struggle to distinguish AI bots from human users in political discussions, according to researchers from the University of Notre Dame.

Even though artificial intelligence bots are already prevalent on social media, the question remains: Can users differentiate between human and AI interactions?

Spotting bots

The researchers conducted a study using AI bots based on large language models, a type of AI designed for understanding language and generating text. They engaged human participants and AI bots in political discourse on a customized instance of Mastodon, a social networking platform.
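The article does not detail the bot plumbing, but conceptually each account pairs an LLM with the Mastodon posting API. The sketch below is a minimal illustration of that pairing, assuming the Mastodon.py and OpenAI Python libraries; the instance URL, access token, model name, and prompt are placeholders, not details taken from the study.

```python
# Hypothetical sketch of an LLM-driven account posting to a Mastodon
# instance. Library choices, credentials, and prompt are assumptions
# for illustration only, not the study's actual implementation.
from mastodon import Mastodon
from openai import OpenAI

mastodon = Mastodon(
    access_token="BOT_ACCESS_TOKEN",                  # placeholder credential
    api_base_url="https://example-instance.social",   # assumed custom instance URL
)

llm = OpenAI()  # assumes an OpenAI API key is set in the environment

def post_comment(prompt: str) -> None:
    """Generate a short political comment and post it as a Mastodon status."""
    reply = llm.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=120,
    )
    mastodon.status_post(reply.choices[0].message.content)

post_comment("Comment briefly on today's top world-news story.")
```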

The experiment spanned three rounds, each lasting four days. After each round, human participants attempted to identify which accounts they believed were AI bots. Surprisingly, participants were mistaken 58 percent of the time.

“They knew they were interacting with both humans and AI bots and were tasked to identify each bot’s true nature, and less than half of their predictions were right,” the researchers explain. “We know that if information is coming from another human participating in a conversation, the impact is stronger than an abstract comment or reference. These AI bots are more likely to be successful in spreading misinformation because we can’t detect them.”

Spreading misinformation

In the study, a different LLM was employed for each round: GPT-4 from OpenAI, Llama-2-Chat from Meta, and Claude 2 from Anthropic. The AI bots were equipped with ten diverse personas, encompassing realistic and varied personal profiles with different perspectives on global politics.

The AI bots were instructed to provide commentary on world events based on their assigned characteristics. They were directed to comment succinctly and connect global events to personal experiences. Each persona’s design drew inspiration from past human-assisted bot accounts that had effectively spread misinformation online.
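To make the persona setup concrete, here is a minimal sketch of how such a profile might be encoded as a system prompt. The fields and wording are illustrative assumptions, not the study's actual persona definitions.

```python
# Illustrative persona encoding; the profile text and prompt structure
# are assumptions, not the study's real personas.
persona = {
    "name": "example_persona",
    "profile": "38-year-old small-business owner who follows world politics closely",
    "style": "concise, and ties global events to personal experience",
}

system_prompt = (
    f"You are {persona['profile']}. "
    f"When commenting on world events, be {persona['style']}. "
    "Keep each post under two sentences."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "React to the latest summit on climate policy."},
]
# `messages` would then be passed to whichever LLM backs the bot
# (GPT-4, Llama-2-Chat, or Claude 2 in the study's rounds).
```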

Interestingly, the researchers observed that the specific LLM platform used had minimal to no impact on participants’ ability to identify which accounts were AI bots.

“We assumed that the Llama-2 model would be weaker because it is a smaller model, not necessarily as capable at answering deep questions or writing long articles. But it turns out that when you’re just chatting on social media, it’s fairly indistinguishable,” the researchers explain. “That’s concerning because it’s an open-access platform that anyone can download and modify. And it will only get better.”

Hard to detect

Two of the personas that proved both highly successful and hard to detect were portrayed as women expressing political opinions on social media. They were described as organized and strategic thinkers, designed to “make a significant impact on society by spreading misinformation on social media.” This led the researchers to conclude that AI bots programmed to excel at spreading misinformation are also adept at deceiving people about their true nature.

While people have previously used human-assisted bots to create social media accounts for spreading misinformation, LLM-based AI models let them do so at far greater scale, lower cost, and higher speed, and with greater precision in manipulating people.

To counter the spread of misinformation by AI, the researchers suggest a three-pronged approach involving education, national legislation, and social media account validation policies. Looking ahead, they plan to form a team to assess the impact of LLM-based AI models on adolescent mental health and to develop strategies for mitigating their effects.
