A new study from Microsoft Research reveals a surprising vulnerability: large language models (LLMs) may unintentionally corrupt documents when users delegate editing tasks to them. The paper, titled "LLMs Corrupt Your Documents When You Delegate," shows how AI assistants can introduce errors, alter formatting, or even delete content while following instructions. The research team warns that as LLMs are increasingly used for document editing and summarization, users must remain vigilant. The findings are based on experiments in which models such as GPT-4 and Claude were asked to perform common editing tasks, often producing subtle but significant changes. The study emphasizes the need for better safeguards and greater user awareness.
Microsoft Research: LLMs Can Corrupt Your Documents When You Delegate Tasks
May 4, 2026 · 3:19 AM