We’re Not Very Good At Telling When Text Has Been Written By AI

One of the key selling points of tools like ChatGPT is their ability to rapidly create content. The tool's launch earlier this year coincided with widespread concern about the impact of such technology on the workplace. Research from Stanford explores whether readers can tell when text has been written by AI rather than by humans.

The researchers explored this question by examining how well we can differentiate between human-generated and AI-generated text on platforms such as OKCupid, Airbnb, and Guru.com.

Unable to distinguish

The team's findings were striking: participants distinguished human from AI text with an accuracy of only 50-52%, roughly equivalent to a coin flip.

The real cause for concern is that AI can be made to appear more human than actual humans, because its language can be optimized to exploit the very assumptions we rely on when judging humanness. In effect, these machines can impersonate humans more convincingly than we present ourselves, giving them real potential to deceive.

“One thing we already knew is that people are generally bad at detecting deception because we are trust-default,” the researchers explain. “For this research, we were curious, what happens when we take this idea of deception detection and apply it to generative-AI, to see if there are parallels with other deception and trust literature?”

After presenting participants with text samples from the three platforms, the researchers found that although we cannot reliably distinguish AI-generated from human-generated text, our judgments are not random either.

Our incorrect assessments rest on shared assumptions: reasonable intuitions and common language cues. In other words, we frequently reach the wrong conclusion about whether text is AI- or human-generated, but we do so for the same reasons.

For instance, participants wrongly took high grammatical accuracy and the use of first-person pronouns as signs of human-generated text. References to family life and an informal, conversational tone were similarly misattributed to human authors.
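
To make these heuristics concrete, the sketch below shows a naive "humanness" scorer built on the same cues. It is purely illustrative: the cue lists and scoring are our assumptions, not the study's method, and the study's central finding is that such cues do not reliably separate human from AI text.

```python
import re

# Hypothetical cue lists, assumed for illustration only.
FIRST_PERSON = {"i", "me", "my", "mine", "we", "our"}
FAMILY_WORDS = {"family", "kids", "husband", "wife", "mom", "dad"}
INFORMAL_MARKERS = {"gonna", "kinda", "lol", "haha", "!"}

def humanness_score(text: str) -> int:
    """Score text using the cues participants (wrongly) read as human signals."""
    lowered = text.lower()
    words = re.findall(r"[a-z']+", lowered)
    score = 0
    score += sum(w in FIRST_PERSON for w in words)        # first-person pronouns
    score += sum(w in FAMILY_WORDS for w in words)        # references to family life
    score += sum(m in lowered for m in INFORMAL_MARKERS)  # informal, chatty tone
    return score

# A chatty, family-oriented sentence scores high, yet an AI tuned to these
# same cues would score just as high, which is exactly the study's point.
print(humanness_score("I love spending weekends with my family, lol!"))
```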

A rise in misinformation

The researchers believe that the poor heuristics we use to judge the authenticity of text, combined with the ease of producing automated content, will inevitably lead to a rise in misinformation.

“The volume of AI-generated content could overtake human-generated content on the order of years, and that could really disrupt our information ecosystem,” they explain. “When that happens, the trust-default is undermined, and it can decrease trust in each other.”

Solutions are far from straightforward, but the researchers believe measures such as AI watermarking, or even giving AI a distinctive "accent", could help. We also need to do more to teach young people about the risks of the online world.
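
To illustrate the watermarking idea, here is a minimal sketch of a statistical "green list" check, in the spirit of published text-watermarking schemes rather than anything the researchers specify. The assumption is that a watermarked generator biases its word choices toward a secret, pseudo-randomly chosen subset of the vocabulary; a detector then tests whether a text uses those words more often than chance.

```python
import hashlib

# Toy detector; a real scheme biases the model's token probabilities at
# generation time. The key and the 50/50 split are illustrative assumptions.
def is_green(word: str, key: str = "secret-key") -> bool:
    """Deterministically assign roughly half of all words to a 'green list'."""
    digest = hashlib.sha256((key + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of words drawn from the green list."""
    words = text.split()
    return sum(is_green(w) for w in words) / max(len(words), 1)

# Ordinary human text should land near 0.5; text from a generator that
# favored green-listed words would score measurably higher.
sample = "the quick brown fox jumps over the lazy dog"
print(f"green fraction: {green_fraction(sample):.2f}")
```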
