The Trust Problem in AI Adoption

Artificial intelligence (AI) promises to revolutionize the workplace by improving decision-making, driving innovation, and boosting productivity. Yet many companies struggle to fully integrate it into their operations.

New research from Aalto University points to a key reason: trust. How employees view AI’s abilities (cognitive trust) and how they feel about it (emotional trust) play a big role in whether AI succeeds or fails in an organization.

Interviews at a mid-sized software company identified four types of trust: full trust (high cognitive and emotional trust), full distrust (low cognitive and emotional trust), uncomfortable trust (high cognitive but low emotional trust), and blind trust (high emotional but low cognitive trust).

Trust matters

These trust types shaped how employees behaved. Some overcontrolled their interactions with AI, while others withheld input altogether. These behaviors created a “vicious cycle”: poor input degraded AI performance, which further weakened trust and slowed adoption.

The researchers argue that building trust is as much about leadership as it is about technology. “AI adoption requires more than technical fixes,” they write. “Managers must understand trust, address emotions, and respond to employee concerns. Without this, even the smartest AI will fail.”

The lesson for companies is simple: AI can only succeed if employees trust it, both in their minds and in their hearts.
