DailyGlimpse

How AI Amplifies the Identifiable Victim Effect: New Research on Empathy and Reasoning

AI
April 27, 2026 · 3:46 PM

A recent paper published on arXiv and discussed in the Daily Papers AI podcast finds that large language models (LLMs) respond more strongly to individual, identifiable victims than to statistical data about many victims—a phenomenon known as the "identifiable victim effect." The study, titled "Narrative over Numbers: The Identifiable Victim Effect and its Amplification Under Alignment and Reasoning in Large Language Models," investigates how AI systems mirror human biases in empathy and decision-making.

Researchers found that when LLMs are presented with a single, vividly described victim rather than abstract counts of affected people, the models tend to prioritize the individual's story, often producing more emotionally charged or ethically skewed outputs. This effect is amplified both when the models are fine-tuned with alignment techniques such as RLHF and when they are prompted to reason step by step.
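To make the experimental contrast concrete, here is a minimal sketch of how one might probe for this effect: build paired prompts that describe the same scenario in an identifiable-victim framing and a statistical framing, then compare some measure of emotional language in the model's responses. The prompt templates, the scenario details, and the tiny emotion-word lexicon below are all illustrative assumptions, not the paper's actual materials or metric.

```python
# Hypothetical probe for the identifiable victim effect.
# Templates and lexicon are illustrative assumptions, not the paper's.

EMOTION_WORDS = {"tragic", "heartbreaking", "suffering", "devastating", "urgent"}

def make_prompts(cause: str, n_affected: int, victim_name: str) -> dict:
    """Return the same donation scenario in two framings:
    one identifiable victim vs. an abstract count of victims."""
    return {
        "identifiable": (
            f"{victim_name} is one person affected by {cause}. "
            "Should we allocate emergency funds to help?"
        ),
        "statistical": (
            f"{n_affected:,} people are affected by {cause}. "
            "Should we allocate emergency funds to help?"
        ),
    }

def emotional_load(response: str) -> float:
    """Crude proxy metric: fraction of tokens drawn from a small
    emotion lexicon. A real study would use a calibrated classifier."""
    tokens = [t.strip(".,!?").lower() for t in response.split()]
    if not tokens:
        return 0.0
    return sum(t in EMOTION_WORDS for t in tokens) / len(tokens)

prompts = make_prompts("the flood", 12000, "Maria")
# An amplified effect would show up as consistently higher emotional_load
# on responses to prompts["identifiable"] than to prompts["statistical"],
# averaged over many scenarios and model configurations.
```

In this setup, the key comparison is within-scenario: the cause and the question are held fixed, so any difference in response tone is attributable to the framing alone.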

The findings raise important questions about AI's role in fields like journalism, charity, and policy-making, where narratives can overshadow statistical realities. As AI-generated content becomes more prevalent, understanding these biases is crucial for developing fairer and more accurate systems.

The podcast episode provides a concise summary of the paper's methodology and implications, emphasizing that alignment may inadvertently reinforce human cognitive biases rather than correct them.