A standardized test measures how aware humans are of their emotions. Psychologists have now subjected ChatGPT to this test, and the machine scored significantly higher than humans.
The test requires humans (and machines) to demonstrate empathy in fictional scenarios: they are given 20 detailed descriptions of emotional situations, such as a funeral, a professional success, or an insult, and must describe what emotions they might feel in each situation.
The more detailed and comprehensible the description of the emotions, the higher the score on the Levels of Emotional Awareness Scale (LEAS). Since ChatGPT does not answer questions about its own potential emotions, the research team modified the test to ask about the emotions of the humans in the scenarios rather than ChatGPT's own.
ChatGPT shows significantly higher emotional awareness than humans
The researchers evaluated ChatGPT's responses using the same criteria as human responses and compared the results with those of previous studies conducted with a French population aged 17 to 84 (n = 750).
In the first test, in January 2023, ChatGPT outperformed humans in all LEAS categories, achieving an overall score of 85, compared with 56 for men and 59 for women. Two psychologists reviewed ChatGPT's responses for contextual accuracy.
In a second test, in February 2023, ChatGPT scored 98, almost reaching the maximum score of 100. In both cases, the team used only the free version of ChatGPT, which runs on the much less powerful GPT-3.5 language model rather than GPT-4, so they certainly did not measure peak emotional awareness in large language models.
The team writes that ChatGPT demonstrated in the test that it can successfully identify and describe emotions from behavioral descriptions in fictional scenarios, and that it can reflect on and abstract emotional states in a deep, multidimensional, and integrative way.
ChatGPT as a tool in therapy and beyond
For the research team, this finding suggests that ChatGPT could be a useful tool in psychotherapy, for example in cognitive training for people who have difficulty recognizing emotions.
Furthermore, ChatGPT might help diagnose mental illness or help therapists communicate their diagnoses more empathetically. A previous study that used ChatGPT as a formulation aid for doctors has already shown that people perceive the machine's responses as more empathetic than those of medical professionals.
The researchers are quick to point out the limitations of their study. First, the high LEAS score says nothing about whether people actually feel understood by the machine, especially when they know they are talking to a machine rather than a human.
Moreover, emotional awareness can vary greatly across cultures and languages: the test presented here was conducted with ChatGPT in English, while the human comparison data came from a test administered in French.