Nearly every Fortune 500 company now uses artificial intelligence (AI) to screen resumes and test scores in search of the best talent. However, new research from the University of Florida suggests these AI tools might not be delivering the results hiring managers expect.
The problem stems from a simple miscommunication between humans and machines: AI thinks it’s picking someone to hire, but hiring managers only want a list of candidates to interview.
Lack of understanding
“When we ask these algorithms to select the 10 best resumes, we know there will be a second stage of interviewing. The AI doesn’t understand that,” the researchers explain.
Without knowing about that next step, the AI tends to play it safe and recommend only low-risk candidates. If it knows another round of screening is coming, it can surface different, and potentially stronger, candidates.
The researchers developed a new algorithm that reflects this two-stage hiring process. Tested on data from thousands of employees at a Fortune 500 company, the improved algorithm cut interview costs by 11% while still identifying high-quality candidates and reducing bias.
Selection vs screening
The issue is one of selection versus screening. Current AI tools are designed to pick top candidates to hire, but hiring managers just want a pool of candidates to interview.
Most resume stacks have a mix of strong and weak candidates. If the AI has to choose, it will avoid risky ones. But if it just needs to suggest potential hires, it can include high-risk, high-reward candidates for human review.
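To make the distinction concrete, here is a minimal sketch, not the researchers' actual algorithm: the candidate scores, uncertainty values, and weighting factor are invented for illustration. It shows how a "select the hire" objective and a "screen for interviews" objective can rank the same resume stack differently.

```python
import random

# Hypothetical candidate pool (illustrative numbers, not data from the study):
# each candidate has a predicted quality score and an uncertainty around it.
random.seed(0)
candidates = [
    {"id": i,
     "mean": random.uniform(50, 90),   # predicted quality
     "std": random.uniform(2, 25)}     # how uncertain that prediction is
    for i in range(50)
]

K = 10  # size of the shortlist

# "Selection" objective: the algorithm believes it is making the final hire,
# so it ranks on a pessimistic lower bound and avoids risky candidates.
select_hires = sorted(candidates,
                      key=lambda c: c["mean"] - 1.5 * c["std"],
                      reverse=True)[:K]

# "Screening" objective: the algorithm knows a human interview comes next,
# so it ranks on an optimistic upper bound and lets high-risk, high-reward
# candidates into the pool for human review.
screen_for_interview = sorted(candidates,
                              key=lambda c: c["mean"] + 1.5 * c["std"],
                              reverse=True)[:K]

print("Risk-averse 'hire' shortlist:", sorted(c["id"] for c in select_hires))
print("Interview-aware shortlist:   ", sorted(c["id"] for c in screen_for_interview))
```

The only change is the ranking rule, but that change decides whether uncertain, high-upside candidates ever reach a human interviewer.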
Moreover, an AI set to select hires can introduce new biases. The researchers tweaked an existing algorithm to clarify that it was for screening, not hiring. In tests on nearly 8,000 employees, this updated algorithm found better candidates and cut hiring costs.
“We need to be clear about the tasks we set for these human-AI teams,” the researchers conclude. “The AI might optimize for poorly defined goals if we don’t specify the task clearly.”