As AI-based chatbots become a more common feature in our lives, there is an understandable desire to learn how conversing with such tools affects how we behave. Research from Cornell suggests that not only are our conversations more efficient when talking to a chatbot, but they’re also generally more positive.
The advent of generative AI is poised to reshape many facets of society, communication, and labor. A growing body of evidence demonstrates the impressive technical capabilities of large language models (LLMs) such as ChatGPT and GPT-4. However, the ramifications of integrating these technologies into our everyday lives remain largely unexplored.
New ways of conversing
While AI tools offer the promise of greater efficiency, they are also accompanied by potentially negative social consequences. The researchers sought to explore the impact of AI on conversations, including its effects on how individuals express themselves and perceive others.
“Technology companies tend to emphasize the utility of AI tools to accomplish tasks faster and better, but they ignore the social dimension,” they explain. “We do not live and work in isolation, and the systems we use impact our interactions with others.”
Alongside the gains in efficiency and the more favorable outcomes, the study found that participants who believe their conversational partner is using more AI-generated responses tend to rate that partner as less cooperative and feel less affinity toward them.
Put to the test
In their first experiment, the researchers built a smart-reply platform christened “Moshi” (Japanese for “hello”), modeled after the now-defunct Google “Allo” (French for “hello”), the first smart-reply platform, introduced in 2016. Smart replies are short candidate responses generated by LLMs, which predict plausible next messages in a chat-based conversation and offer them for one-tap sending.
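To make the idea concrete, here is a toy sketch of a smart-reply suggester. It is illustrative only: the study's “Moshi” platform used a large language model to generate suggestions, whereas this stand-in merely ranks a few canned replies by word overlap with the incoming message. All names (`CANNED_REPLIES`, `suggest_replies`) are hypothetical.

```python
import re

# Hypothetical pool of candidate replies; a real system would generate
# these with an LLM conditioned on the conversation history.
CANNED_REPLIES = [
    "That sounds great!",
    "I agree with you.",
    "Can you tell me more?",
    "I'm not sure about that.",
    "Thanks for sharing your view.",
]

def words(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def suggest_replies(message, k=3):
    """Rank the canned replies by word overlap with the incoming message
    and return the top k, mimicking how a smart-reply bar offers a few
    one-tap candidate responses."""
    msg = words(message)
    return sorted(CANNED_REPLIES, key=lambda r: len(msg & words(r)), reverse=True)[:k]

print(suggest_replies("I agree that this policy sounds great"))
# → ['That sounds great!', 'I agree with you.', "I'm not sure about that."]
```

The interface mirrors the user experience the study describes: given the latest message, the system surfaces a handful of ready-made responses the user can send instead of typing.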
A total of 219 pairs of participants were recruited to converse about a policy issue and assigned to one of three conditions: both partners can use smart replies; only one can; or neither can.
The findings showed that using smart replies led to more efficient communication, more positive emotional language, and more favorable assessments by communication partners. On average, smart replies accounted for 14.3% of sent messages, or about one in seven.
Nonetheless, participants whose counterparts suspected them of using smart replies were evaluated more negatively than those assumed to have written their own responses, in line with conventional assumptions about the adverse effects of AI.
In a second experiment, 299 randomly assigned pairs of participants were asked to discuss a policy issue in one of four conditions: no smart replies; Google's default smart replies; smart replies with a positive emotional tone; or smart replies with a negative emotional tone. Conversations with Google's default, positive-leaning smart replies took on a more upbeat emotional tone than those with negative or no smart replies, underscoring the influence AI can exert on language production in everyday conversations.
“While AI might be able to help you write,” the researchers conclude, “it’s altering your language in ways you might not expect, especially by making you sound more positive. This suggests that by using text-generating AI, you’re sacrificing some of your own personal voice.”
“What we observe in this study is the impact that AI has on social dynamics and some of the unintended consequences that could result from integrating AI in social contexts. This suggests that whoever is in control of the algorithm may have influence on people’s interactions, language and perceptions of each other.”