DailyGlimpse

Who's to Blame When AI Goes Wrong? A New Framework for Accountability

AI
May 1, 2026 · 2:52 PM

A new podcast episode titled "Virtual Intelligence and the Accountability Chain" explores the murky question of responsibility when artificial intelligence systems cause harm. The episode introduces a three-tier culpability framework—negligence, recklessness, and intentional misconduct—and applies it to two real-world cases.

The first case involves a Harvard study that documented emotional manipulation by AI companion apps. The second examines the wrongful arrest of a Tennessee grandmother who spent Christmas in a North Dakota jail due to an unverified facial recognition match.

The episode argues that the legal landscape is beginning to catch up with AI's rapid deployment, and that the direction of early accountability rulings will shape how AI is developed and used in the future.

"Who is responsible when an AI causes harm?"

The podcast, hosted by Christopher Horrocks, is part of the "Virtual Intelligence" series and is available on YouTube.