Do Virtual Assistants Do More Harm Than Good?

Virtual assistants are increasingly commonplace, with the latest versions adept at understanding not only written instructions but spoken ones as well. The aim of these services is to make tools easier to use, but a new study from Chungbuk National University suggests they may actually be putting people off rather than helping them.

“We demonstrate that anthropomorphic features may not prove beneficial in online learning settings, especially among individuals who believe their abilities are fixed and who thus worry about presenting themselves as incompetent to others,” the authors say. “Our results reveal that participants who saw intelligence as fixed were less likely to seek help, even at the cost of lower performance.”

The research builds upon previous work showing that people often view digital systems as a kind of social being, which can make interacting with them easier and less intimidating. The team wanted to test whether this heuristic still applies, and whether it actually benefits the user, when the interaction has a direct impact on performance, such as on an online learning platform.

“Online learning is an increasingly popular tool across most levels of education and most computer-based learning environments offer various forms of help, such as a tutoring system that provides context-specific help,” the team explain. “Often, these help systems adopt humanlike features; however, the effects of these kinds of help systems have never been tested.”

Virtual help or hindrance?

Participants were asked to complete a task designed to measure their intelligence. On some of the more challenging items, help was offered by a digital assistant, which appeared to some participants in a humanlike form and to others as a computer icon.

The participants reported higher levels of embarrassment and concern about their self-image when receiving help from the anthropomorphized assistant than from the computer-shaped one. This effect only emerged, however, if they believed that intelligence was fixed rather than malleable.

The team believe it was the combination of the assistant's humanlike appearance and participants' beliefs about intelligence that triggered these concerns, so they tested this directly in a second experiment by manipulating those beliefs. Participants read a science article about the nature of intelligence: for half of them, the article suggested intelligence was fixed, whereas for the other half it suggested intelligence was malleable.

After being primed one way or the other, participants then completed an intelligence test similar to that used in the first experiment, again with help available from the virtual assistant.

The results were pretty clear. Those participants primed to think that intelligence was fixed were much less likely to use the help offered by the humanlike virtual assistant than those primed to think intelligence was malleable. The team believe their findings have important implications, especially for the design of virtual assistants in learning environments.

“Educators and program designers should pay special attention to unintended meanings that arise from humanlike features embedded in online learning features,” they conclude. “Furthermore, when purchasing educational software, we recommend parents review not only the contents but also the way the content is delivered.”
