Feedback is a crucial part of any functioning workplace, and a typical day is usually filled with it, whether from our supervisors, our peers, customers, or even the technology we interact with. Research from Cambridge Judge Business School explores whether we treat feedback from machines differently from feedback from humans in terms of our ability to improve and learn from our mistakes.
The best form of feedback
The researchers analyzed the performance of nearly 100,000 participants in an online coding challenge. A key feature of the challenge was the clear ways learning outcomes could be measured, especially by machines.
“Previous research on learning from failure has mostly focused on how people learn from failure feedback provided by other people, be it their supervisors or peers,” the researchers explain. “We have had little understanding of how people learn from failure due to machine feedback, nor the interaction between the two types of feedback – so our new study helps to fill this important gap.”
The study found that people tend to trust feedback from machines more than feedback from humans, in part because the machines in this instance were designed to capture every mistake, which helps build trust that their feedback will be objective.
“The study finds that such purely objective evaluation of failure provided by machines, whether they are GenAI or simpler software, can help someone learn better from failure based on human evaluation as well, so that’s an important finding,” the authors continue.
The study also shows that when machines give feedback on mistakes, people become more likely to learn from their peers’ feedback too. In other words, the more machine feedback on failure people receive, the more they recognize that they can also learn from what others have to say, and the more effort they put into improving. This requires the motivation to learn from mistakes, along with opportunities to learn (such as understanding how failures can help) and the ability to develop the skills to analyze failure and use it to move forward.
Fairness matters
Could this apply to other domains too? Research from Penn State suggests that there is certainly promise, provided workers perceive the AI as fair. Similarly, a study from researchers at Carnegie Mellon found that, by and large, people are quite happy to be managed by AI.
There is a caveat, however. People were happy with an AI boss so long as everything was running smoothly; when disagreements arose, or there were things the employees wanted to change, the relationship deteriorated rapidly.
This can have implications in areas like negotiations, where research from the University of Southern California found that virtual agents can often be more effective at negotiating than we are.
“People with less experience may not be confident that they can use the techniques or feel uncomfortable, but they have no problem programming an agent to do that,” the researchers explain.
Similar findings have emerged from the annual Automated Negotiating Agent Competition, which regularly draws hundreds of participants from around the world to develop chatbots that can negotiate effectively both with one another and with humans.
Working together
It’s clear that there is considerable potential for man and machine to work effectively together. For instance, the Cambridge researchers highlight that using AI to deliver feedback also made employees more receptive to gaining feedback from their peers.
“We argue that machine failure feedback raises individuals’ awareness of the potential to learn in general. This motivates individuals to allocate resources to learn more from peer failure feedback as well,” the authors explain.
Based on the study, the authors recommend that organizations give employees feedback from both machines and humans. Machine feedback can help counter the biases that creep into human-to-human feedback. In the age of GenAI, feedback from an AI that has learned the ins and outs of a task, and the ways to boost performance at it, could be essential. This combination can help people enhance existing skills and pick up new ones.
As an illustration, during yearly performance reviews, companies could introduce a system that gives employees feedback based on comprehensive data such as sales figures, project progress, and the number of patents filed. This can sit alongside the usual practice of supervisors giving feedback.
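As a purely illustrative sketch of what such a system might look like (the metric names, targets, and logic here are hypothetical assumptions, not drawn from the study), objective review data could be compared against targets to produce an automated feedback summary that complements the supervisor's own notes:

```python
# Hypothetical sketch: turn objective performance metrics into an
# automated feedback summary to accompany a supervisor's review.
# All metric names and target values below are illustrative.

def machine_feedback(metrics: dict[str, float],
                     targets: dict[str, float]) -> list[str]:
    """Compare each metric against its target and flag shortfalls."""
    notes = []
    for name, value in metrics.items():
        target = targets.get(name)
        if target is None:
            continue  # no target defined for this metric
        if value >= target:
            notes.append(f"{name}: met target ({value} vs {target})")
        else:
            notes.append(f"{name}: below target ({value} vs {target})"
                         " - discuss with supervisor")
    return notes

# Example: fabricated yearly-review data for one employee.
metrics = {"sales_closed": 42, "projects_delivered": 3, "patents_filed": 1}
targets = {"sales_closed": 40, "projects_delivered": 4, "patents_filed": 1}
for line in machine_feedback(metrics, targets):
    print(line)
```

The design choice worth noting is that the machine layer only reports objective gaps; interpreting them, and deciding how to improve, remains a human conversation, which matches the study's point about combining both kinds of feedback.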
For tasks like writing reports, analyzing data, and creating presentations, AIs could directly suggest improvements. People might be more inclined to act on these suggestions than on human feedback, and doing so could also spark more interest in seeking human input.