Perceptions about AI are bound to change, added Wakslak. “Of course these effects may change over time, but one of the interesting things we found was that the two effects we observed were fairly similar in magnitude. Whereas there is a positive effect of getting an AI message, there is a similar degree of response bias when a message is identified as coming from AI, leading the two effects to essentially cancel each other out,” she said.
Individuals also reported an “uncanny valley” response — a sense of unease upon learning that the empathetic response originated from AI — highlighting the complex emotional terrain of AI-human interactions.
The research survey also asked participants about their general openness to AI, which moderated some of the effects, explained Wakslak.
“People who feel more positively toward AI don’t exhibit the response penalty as much, and that’s intriguing, because over time, will people gain more positive attitudes toward AI?” she asked. “That remains to be seen … but it will be interesting to see how this plays out as people’s familiarity and experience with AI grows.”
AI Offers Better Emotional Support
The study highlighted important nuances. Responses generated by AI were associated with increased hope and lessened distress, indicating a positive emotional effect on recipients. AI also took a more disciplined approach than humans in offering emotional support, refraining from overwhelming recipients with practical suggestions.
“Ironically, AI was better at using emotional support strategies that have been shown in prior research to be empathetic and validating,” Yin explained. “Humans may potentially learn from AI, because a lot of times when our significant others are complaining about something, we want to provide that validation, but we don’t know how to effectively do so.”
Rather than suggesting AI will replace humans, the research points to the distinct advantages of AI and human responses. The technology could become a valuable tool, helping people better understand one another and learn to respond in ways that provide emotional support and demonstrate understanding and validation.
Overall, the paper’s findings carry important implications for integrating AI into more social contexts. Leveraging AI’s capabilities might provide an inexpensive, scalable source of social support, especially for those who might otherwise lack access to people who can provide it. However, as the research team notes, the findings suggest that careful consideration must be given to how AI is presented and perceived in order to maximize its benefits and reduce negative responses.
The study also raises deeper conceptual questions about the nature of feeling heard and the importance of a “meeting of the minds” in human relationships. Is it enough for our views to be validated, even by a non-sentient entity, or do we inherently crave the understanding that only another human can offer?
The work of Yin, Jia, and Wakslak offers a stepping stone toward a future where AI and human empathy coexist.