Algorithms play an increasingly important role in our daily lives, but they are not infallible and will inevitably make mistakes. New research from the University of Texas at Austin explores how consumers respond to those mistakes.
The study examined the use of AI in marketing contexts, such as the delivery of our news feed on Facebook or product recommendations on Amazon. Most of the time we’re oblivious to how these systems work, which is normally fine until they go wrong.
Consumer response
The researchers found that consumers appear to punish brands less when an algorithm makes a mistake than when a human does. What’s more, because consumers perceive algorithms as having less agency, they hold them less responsible for errors and for any harm those errors cause, which in turn reduces the damage done to the brand.
This effect is reduced, however, when attempts are made to humanize the AI: consumers are then far more likely to apportion blame to it for any mistakes it makes.
“Marketers must be aware that in contexts where the algorithm appears to be more human, it would be wise to have heightened vigilance in the deployment and monitoring of algorithms and provide resources for managing the aftermath of brand harm crises caused by algorithm errors,” the researchers say.
Managing the aftermath
The researchers also explore how the aftermath of any damage to the brand can be managed. They advocate that managers emphasize the role of the AI in the mistake and the algorithm’s lack of agency in making the error. This would probably dampen any blame consumers place on the brand for the mishap, provided, of course, that the AI hasn’t been anthropomorphized, in which case such an approach would be less effective.
What’s more, the results suggest that marketers might be best served by not publicizing any human supervision of the algorithms, even if that supervision is effective at fixing bugs. What might be effective, however, is advertising technological supervision of the algorithms in the event of errors that could harm the brand.
“Overall, our findings suggest that people are more forgiving of algorithms used in algorithmic marketing when they fail than they are of humans,” the researchers conclude. “We see this as a silver lining to the growing usage of algorithms in marketing and their inevitable failures in practice.”