When AI acknowledges feelings, resistance to the technology melts
On any given workday, about 1 in 5 of us will get annoyed on the job to the point of being angry. Even if you aren’t the one having the bad day, that negative energy ripples outward, affecting managers, colleagues, clients and, ultimately, the bottom line.
Despite the obvious benefits of reducing workplace tension, training managers to coach employees through these moments — or hiring outside specialists — is often cost-prohibitive. Early attempts to use artificial intelligence as a more affordable anger-management tool hit a wall: People simply didn’t trust bots with their emotional issues, a phenomenon researchers call the AI Label Effect.
A working paper by UCLA Anderson’s Hanqiu Li, a Ph.D. student, and Margaret Shih suggests that this reflexive distrust may not be an insurmountable obstacle.
Li and Shih report that programming a bot to validate a person’s feelings — a technique called emotional validation — bridged the trust gap and made the technology an effective tool for de-escalating workplace tension. (This echoes research from UCLA Anderson’s Hanne Collins, which finds that effective communication requires listeners to verbally confirm they’ve truly understood what’s being said.)
In an experiment involving more than 1,200 participants, people who engaged with a chatbot that expressed empathy reported lower levels of anger than those who interacted with a chatbot that did not acknowledge their emotions.
Annoyance on a Scale of 1 to 7
While this research confirms that a human expressing empathy remains the gold standard, Li and Shih find, intriguingly, that an empathetic bot can perform at essentially the same level as a human expert who sticks strictly to the facts. Given that managers and colleagues don’t always lead with empathy — and expert coaches are expensive — this research suggests that AI bots programmed to start from a place of empathy might be a viable way to help reduce workplace anger.
The researchers had all participants describe an incident at work that made them angry — at least 80 words’ worth of detail — and rate their level of annoyance on a scale of 1 to 7.
Every participant then chatted with an AI bot programmed with those details. Some participants were led to believe they were chatting with a human being — a flesh-and-blood expert in emotional regulation and behavioral science named Emily Carter, complete with profile photo. Others were told upfront they were interacting with an AI bot.
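The paper doesn’t publish its implementation, but the key manipulation (an otherwise identical chatbot that either opens with emotional validation or sticks to the facts) can be expressed entirely in a system prompt. Here is a minimal sketch assuming an OpenAI-style chat API; the prompt wording, model name and sample incident are illustrative assumptions, not the researchers’ materials.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical prompts for the two conditions; not the study's actual materials.
VALIDATING_PROMPT = (
    "You are an expert in emotional regulation and behavioral science. "
    "Begin by explicitly validating the user's feelings about the incident, "
    "acknowledging that their anger is understandable, before guiding them "
    "to reappraise the situation."
)
FACTS_ONLY_PROMPT = (
    "You are an expert in emotional regulation and behavioral science. "
    "Guide the user to reappraise the incident using only factual, "
    "solution-focused analysis. Do not comment on their emotions."
)

def coach(incident_description: str, validate: bool) -> str:
    """Return the bot's opening reply for one experimental condition."""
    system_prompt = VALIDATING_PROMPT if validate else FACTS_ONLY_PROMPT
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the paper doesn't name one
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": incident_description},
        ],
    )
    return response.choices[0].message.content

print(coach(
    "My manager blamed me in front of the whole team for a deadline "
    "we missed because of her late sign-off.",
    validate=True,
))
```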
All participants reported lower levels of anger after the chatbot conversation. But conversations that started with emotional validation — “When you mention feeling upset about this, I start to really see why it’s so troubling, and I feel concerned too” — were most effective. The most helpful intervention (Comparison 4) came from a “human” who emotionally validated the participant’s experience, producing a 1.49-point drop in reported anger.
The big insight is in Comparison 3: An AI bot programmed to deliver emotional validation produced a 1.18-point reduction in anger, nearly identical to the 1.24-point reduction for participants who interacted with a “human” chatbot that offered no emotional validation whatsoever.
Moreover, that 1.18-point reduction for the empathetic AI condition was significantly better than the 0.92-point average reduction for participants who chatted with a bot that offered no empathy (Comparisons 1 and 2).
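For concreteness, those point reductions are simply each group’s average pre-chat anger rating minus its average post-chat rating on the 1-to-7 scale. A toy calculation with made-up ratings (illustrative only, not the study’s data) looks like this:

```python
from statistics import mean

# Hypothetical (pre, post) anger ratings on the study's 1-to-7 scale.
# These numbers are illustrative only, not Li and Shih's data.
ratings = {
    "AI bot + emotional validation": [(6, 5), (5, 4), (7, 5), (6, 5)],
    "AI bot, facts only":            [(6, 5), (5, 5), (7, 6), (6, 5)],
}

for condition, pairs in ratings.items():
    reduction = mean(pre - post for pre, post in pairs)
    print(f"{condition}: {reduction:.2f}-point average reduction")
```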
Buying In to the Bot
Interestingly, participants who received emotional validation had more faith in the process. After the interaction, they were asked to rate, on a scale of 1 to 7, how effective the chat had been in helping them deal with their anger. Unsurprisingly, believing they’d interacted with a human expert was most persuasive — average scores of 4.3 for the empathetic “human” chatbot and 4.0 for the fact-focused “human.” Yet the AI bot that expressed empathy scored a respectable 3.8, markedly better than the 3.45 for the fact-only AI bot.
The researchers also found that the improved perception of the AI bot wasn’t solely due to its emotional validation. The empathetic AI was also perceived as more expert, and participants reported feeling more emotionally connected to their chat partner — all of which contributed to the bot’s effectiveness. (Here too, thinking they were chatting with a human expert generated the most positive outcomes overall.)
This work suggests that injecting some humanness into AI chatbots designed to coach human beings may be an effective way to build trust. While Li and Shih focused on the technology’s ability to deliver meaningful support for reducing workplace anger, their core finding — that adding empathy to the AI model improves outcomes — has potentially broader application to any tool designed to help human beings feel better and do better.
Featured Faculty
- Hanqiu Li
- Margaret Shih, Interim Dean and Neil Jacoby Chair in Management; Professor of Management and Organizations
About the Research
Li, H., & Shih, M. (2025). Managing Anger: Enhancing AI-Driven Cognitive Reappraisal Through Emotional Validation. Working paper.