In a recent article, I explored the right and wrong ways of monitoring remote workers. There is an inevitable desire among managers to understand what employees are doing and whether they’re sufficiently productive, especially when they are out of sight. Done poorly, though, monitoring can erode trust and trigger a sharp decline in morale and performance.
Suffice it to say, however, monitoring is not confined to those working remotely. Research from Cornell reminds us not only that employees of all kinds are being monitored by their employers, but that AI is increasingly being deployed to do the monitoring.
Negative impact
The study finds that employees typically frown upon this use of automated monitoring: those subjected to it are not only less productive but also more likely to complain and even to leave the organization. The one caveat is when the monitoring can be framed as supporting their professional development in some way.
According to the research, surveillance tools used to monitor physical activity, facial expressions, vocal tone, and verbal and written communication cause people to feel a greater loss of autonomy compared to human oversight.
Businesses and organizations that employ these rapidly evolving technologies to evaluate employee performance, customer interactions, and potential misconduct should be aware of their unintended consequences. These tools may lead to resistance and decreased performance among those being monitored.
Gaining acceptance
The researchers suggest that to gain acceptance, organizations should position these tools as aids rather than as judgment mechanisms. Ensuring that those under surveillance feel the assessments are accurate and contextually fair could mitigate negative reactions.
“When artificial intelligence and other advanced technologies are implemented for developmental purposes, people like that they can learn from it and improve their performance,” the researchers explain. “The problem occurs when they feel like an evaluation is happening automatically, straight from the data, and they’re not able to contextualize it in any way.”
They explain that there has already been considerable backlash against algorithmic surveillance, with some companies scrapping pilot projects that alerted them when employees were taking too many breaks. Similarly, many schools scrapped surveillance software implemented during the pandemic to monitor what pupils did at home.
Of course, previous studies have also shown that some employees don’t mind being monitored by algorithms, provided they believe the algorithms to be fair and that there is an adequate way of questioning them. This acceptance can be particularly strong when employees are confident that the technology is being deployed in their best interests, such as to help them develop.
In the spotlight
In a series of four experiments involving nearly 1,200 participants, the researchers explored whether it matters if surveillance is conducted by humans or AI, and how the context—performance evaluation versus developmental support—affects perceptions.
In the first study, participants were asked to recall and write about times they were monitored and evaluated by either humans or AI. They reported feeling less autonomy under AI surveillance and were more likely to engage in “resistance behaviors.”
The next two studies simulated real-world surveillance scenarios. Participants worked in groups to brainstorm ideas for a theme park, and then individually developed ideas for a specific segment of the park. They were told their work would be monitored by either a research assistant or AI, represented in Zoom videoconferences as “AI Technology Feed.”
After several minutes, either the human assistant or the “AI” relayed messages that participants were not generating enough ideas and should try harder. Surveys conducted after one study revealed that over 30% of participants criticized the AI surveillance, compared to about 7% who were critical of human monitoring.
The research highlights that people feel less autonomous and more resistant when monitored by AI. Businesses using AI surveillance to evaluate performance should be mindful of these perceptions and consider framing the technology as a supportive tool rather than a judgmental one. This approach could help mitigate negative reactions and improve acceptance of AI monitoring.
Lowering performance
The study found that AI monitoring not only bred high levels of discontent but also lowered employees’ performance. In the experiments, those monitored by AI generated fewer ideas, and ideas of lower quality, than those monitored by a human.
“Even though the participants got the same message in both cases that they needed to generate more ideas, they perceived it differently when it came from AI rather than the research assistant,” the authors explain. “The AI surveillance caused them to perform worse in multiple studies.”
This only changed when the AI’s analysis was used in a way that helped each individual improve. This suggests that framing algorithmic surveillance as a way to get better could help earn employees’ trust and overcome any resistance they may feel.
“Organizations trying to implement this kind of surveillance need to recognize the pros and cons,” the authors conclude. “They should do what they can to make it either more developmental or ensure that people can add contextualization. If people feel like they don’t have autonomy, they’re not going to be happy.”