How Would We Feel If The CEO We Thought We Were Talking To Was Really A Bot?

Over the years there have been a few studies about how people feel about working under AI “bosses”. The studies were born out of the growth of platforms like Uber, where drivers’ behavior is often dictated by algorithms that determine where they go, the jobs they’re assigned, and even the ratings they receive.

A defining feature of these relationships is that people are often okay with working under an algorithmic boss provided they believe the algorithm is fair, and there are people they can talk to when they believe it’s not. Of course, it’s also clear that the algorithm is doing a lot of work in this scenario, and drivers are under no illusion that their boss is a human.

AI bosses

What might happen if those lines are slightly more blurred? That was the question posed by research from Harvard Business School, which explored how remote workers felt when the CEO account they thought was being manned by the CEO themselves was actually being run by a chatbot.

Interestingly, the study found that the responses the chatbot gave were generally of sufficient quality to fool people into thinking they came from the boss (whether that says more about the boss or the technology is open to debate), but when employees found out that it was indeed a chatbot they had been liaising with, they rated its answers as less helpful.

The researchers refer to their test as the “Wade Test”, after the boss of the company they analyzed. They believe it’s the first study of its kind to test whether AI can reasonably replicate the personal and unique characteristics of an individual, such as a CEO.

The researchers explain that electronic communication currently takes up around a quarter of the typical CEO’s time, and ponder whether AI could be used to automate some of the more routine communication tasks, thus freeing bosses up for the kind of strategic thinking they’re paid so handsomely for.

The study tempers expectations that AI might be riding to the rescue, and indeed reminds us that even if the technology were capable, it’s almost certainly not a good idea to automate communication that your team believes is coming from you.

Spotting the machine

Alan Turing’s famous “imitation game” asks whether a person can tell if they’re chatting with a machine or a human. If they can’t guess better than chance, the machine is said to pass the test.

In the Harvard study, three researchers took this idea in a new direction. Working with a tech company of 800 employees, they set out to see if an AI could act like a specific person—the company’s CEO. This wasn’t just a generic test of intelligence; it was about whether AI could handle the role’s demands by sounding like the CEO and responding with the right tone and authority.

To create this “CEO Bot,” the researchers trained a large language model using the CEO’s archive of internal and external messages, from emails to Slack conversations. The goal was to teach the bot not just what to say, but how to say it, capturing the CEO’s vocabulary, style, and quirks. Then, they set it loose to answer common questions, aiming to see if an AI could realistically handle some of the daily communication duties of a CEO.
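For readers curious what that kind of training involves in practice, a minimal sketch is shown below. This is not the researchers’ actual pipeline: the model choice, the file of exported messages, and the hyperparameters are all illustrative assumptions. The basic idea, though, is the same: fine-tune a language model on an individual’s own writing so that it absorbs their vocabulary and phrasing.

```python
# A minimal, illustrative sketch (not the study's actual pipeline):
# fine-tune a small causal language model on an archive of one person's
# messages so it picks up their style. The file path, base model, and
# hyperparameters below are placeholder assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder; any causal LM could stand in here
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical export of the person's emails/Slack messages, one per line.
dataset = load_dataset("text", data_files={"train": "ceo_messages.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ceo-bot", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False means next-token (causal) language modelling
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

A production system of the kind the study describes would presumably also need access to company context and limits on what the bot is allowed to answer; the sketch above only covers the stylistic fine-tuning step.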

The researchers selected 10 real questions from a recent “ask me anything” session with new hires, choosing from a pool of 148 queries. They posed these questions to both the CEO and the AI-driven “CEO Bot.” Then, they invited all 800 employees to take part in a challenge: figure out which responses were from the real CEO and which were from the AI. Of the 105 employees who participated, about 90 percent had been with the company for at least three years, presumably making them more familiar with their boss’s style.

Employees identified the real and AI responses correctly 59 percent of the time. This is better than random guessing, which would yield about 50 percent accuracy. When broken down, participants recognized the real CEO’s responses 61 percent of the time and the AI’s 57 percent of the time.

Lack of appreciation

Perhaps the most important aspect of the study, however, is that while employees were often unable to tell whether a message had been written by their boss or by the machine, they rated messages written by the machine as far less helpful than those written by the CEO.

This suggests that we still tend to distrust AI, and especially so in interactions that are as meaningful as those between a CEO and their team. We want to have real relationships with our managers, and the act of using AI to write messages on their behalf cheapens that relationship.

To be clear, this wasn’t to do with the quality of the messages. In a second experiment, the researchers took real answers given by actual CEOs on earnings calls and AI-generated answers to the same questions, then jumbled up the labels, so some AI-generated responses were labeled as coming from the actual CEO, and vice versa. Participants rated responses labeled as AI-generated less favorably, even when the answer had actually come from the real CEO.

The researchers strike a bullish note and don’t believe their findings sound the death knell for such employee-bots, but rather that we need to find a way to get them to work more effectively and to be accepted by people. I suppose that might be technologically feasible in time, but it does raise the question of why we should want it.
