As AI capabilities continue to advance at a considerable pace, so does their ability to influence how we behave. While much of the research to date has focused on how chatbots themselves behave, a recent study from Johns Hopkins looks at how the perceived gender of a chatbot may also play a role.
The study builds on a large body of previous research exploring how we behave around different genders. For instance, the researchers highlight that men are more likely to interrupt a woman who is speaking than another man. They explain that this has largely carried over to the virtual world, with men more likely to interrupt virtual assistants, such as Siri and Alexa, when the assistant has a female persona.
Similarly, research from Cornell found that when an AI assistant had a “female” persona, women were more likely to speak up in meetings than when the assistant was male. The researchers believe this is because women felt the AI was a “virtual ally” and were emboldened to speak up, much as they are when more (living and breathing) women are present in meetings generally.
The right assistance
As tech companies roll out AI assistants and agents into the workplace, there are inevitable concerns about how these tools are designed and whether these designs may reinforce gender biases already evident in the workplace. As such, the researchers question whether voice assistants should be gender neutral in order to promote more respectful workplaces.
“Conversational voice assistants are frequently feminized through their friendly intonation, gendered names, and submissive behavior,” the researchers explain. “As they become increasingly ubiquitous in our lives, the way we interact with them—and the biases that may unconsciously affect these interactions—can shape not only human-technology relationships but also real-world social dynamics between people.”
The researchers asked participants, an even split of men and women, to use a voice assistant to complete a simple task. What the participants didn’t know, however, was that the assistant was designed to make certain mistakes, the aim being to observe how users respond to those errors.
The virtual assistants were also programmed to use either a feminine, masculine, or gender-neutral voice, while also responding in various ways to their mistakes. For instance, some offered an apology whereas others offered some form of compensation.
“We examined how users perceived these agents, focusing on attributes like perceived warmth, competence, and user satisfaction with the error recovery,” the researchers explain. “We also analyzed user behavior, observing their reactions, interruptions of the voice assistant, and if their gender played a role in how they responded.”
Clear differences
The results revealed clear stereotypes in how participants perceived and interacted with their voice assistants. For example, participants often judged “female” voice assistants to be more capable, which the researchers believe reflects the stereotype that women are better at providing support than men.
There were also differences in how people responded based on their own gender. For instance, men were more likely to interrupt a female assistant when it made an error. They were also more likely to respond in a social way to a female assistant than to a male one.
Interestingly, when the voice assistant was gender neutral, participants were generally far more polite to it and interrupted it far less, despite perceiving it as less warm and more robotic than the gendered assistants.
“This shows that designing virtual agents with neutral traits and carefully chosen error mitigation strategies—such as apologies—has the potential to foster more respectful and effective interactions,” the researchers explain.
With the latest generation of AI said to power agents that can help us in many aspects of our lives, it’s crucial that developers think carefully about how these agents might encourage certain behaviors. This research reminds us that an agent’s perceived gender is just as important as what it says and when it says it.
“Thoughtful design—especially in how these agents portray gender—is essential to ensure effective user support without the promotion of harmful stereotypes,” the researchers conclude. “Ultimately, addressing these biases in the field of voice assistance and AI will help us create a more equitable digital and social environment.”





