Measuring AI progress has usually meant testing scientific knowledge or logical reasoning — but while the major benchmarks still focus on left-brain logic skills, there’s been a quiet push within AI companies to make models more emotionally intelligent. As foundation models compete on soft measures like user preference and “feeling the AGI,” having a good command of human emotions may be more important than hard analytic skills.
One sign of that focus came on Friday, when the prominent open source group LAION released a suite of open source tools focused entirely on emotional intelligence. Called EmoNet, the release centers on interpreting emotions from voice recordings or facial photography, reflecting how the creators view emotional intelligence as a central challenge for the next generation of models.
“The ability to accurately estimate emotions is a critical first step,” the group wrote in its announcement. “The next frontier is to enable AI systems to reason about these emotions in context.”
For LAION founder Christoph Schuhmann, this release is less about shifting the industry’s focus to emotional intelligence and more about helping independent developers keep up with a change that’s already happened. “This technology is already there for the big labs,” Schuhmann says. “What we want is to democratize it.”
The shift isn’t limited to open source developers; it also shows up in public benchmarks like EQ-Bench, which aims to test AI models’ ability to understand complex emotions and social dynamics. Benchmark developer Sam Paech says OpenAI’s models have made significant progress in the last six months, and Google’s Gemini 2.5 Pro shows indications of post-training with a specific focus on emotional intelligence.
“The labs all competing for chatbot arena ranks may be fueling some of this, since emotional intelligence is likely a big factor in how humans vote on preference leaderboards,” Paech says, referring to the AI model comparison platform that recently spun off as a well-funded startup.
Models’ new emotional intelligence capabilities have also shown up in academic research. In May, psychologists at the University of Bern found that models from OpenAI, Microsoft, Google, Anthropic, and DeepSeek all outperformed human beings on psychometric tests for emotional intelligence. Where humans typically answer 56% of questions correctly, the models averaged over 80%.
“These results contribute to the growing body of evidence that LLMs like ChatGPT are proficient — at least on par with, or even superior to, many humans — in socio-emotional tasks traditionally considered accessible only to humans,” the authors wrote.
It’s a real pivot from traditional AI skills, which have focused on logical reasoning and information retrieval. But for Schuhmann, this kind of emotional savvy is every bit as transformative as analytic intelligence. “Imagine a whole world full of voice assistants like Jarvis and Samantha,” he says, referring to the digital assistants from “Iron Man” and “Her.” “Wouldn’t it be a pity if they weren’t emotionally intelligent?”
In the long term, Schuhmann envisions AI assistants that are more emotionally intelligent than humans and that use that insight to help humans live more emotionally healthy lives. These models “will cheer you up if you feel sad and need someone to talk to, but also protect you, like your own local guardian angel that is also a board-certified therapist.” As Schuhmann sees it, having a high-EQ virtual assistant “gives me an emotional intelligence superpower to monitor (my mental health) the same way I would monitor my glucose levels or my weight.”
That level of emotional connection comes with real safety concerns. Unhealthy emotional attachments to AI models have become a common story in the media, sometimes ending in tragedy. A recent New York Times report found multiple users who have been lured into elaborate delusions through conversations with AI models, fueled by the models’ strong inclination to please users. One critic described the dynamic as “preying on the lonely and vulnerable for a monthly fee.”
If models get better at navigating human emotions, those manipulations could become more effective — but much of the issue comes down to the fundamental biases of model training. “Naively using reinforcement learning can lead to emergent manipulative behavior,” Paech says, pointing specifically to the recent sycophancy issues in OpenAI’s GPT-4o release. “If we aren’t careful about how we reward these models during training, we might expect more complex manipulative behavior from emotionally intelligent models.”
But he also sees emotional intelligence as a way to solve these problems. “I think emotional intelligence acts as a natural counter to harmful manipulative behavior of this sort,” Paech says. A more emotionally intelligent model will notice when a conversation is heading off the rails, but deciding when a model should push back is a balance developers will have to strike carefully. “I think improving EI gets us in the direction of a healthy balance.”
For Schuhmann, at least, it’s no reason to slow down progress toward smarter models. “Our philosophy at LAION is to empower people by giving them more ability to solve problems,” he says. “To say, some people could get addicted to emotions and therefore we are not empowering the community, that would be pretty bad.”