When a retrieval-augmented generation (RAG) system returns the wrong answer, most teams immediately blame the prompt or the model. That instinct is almost always misguided: the real culprit is the content.
Raw documents are written for human readers. A technician reading a maintenance procedure brings years of experience to every page—they know which steps are critical, which warnings apply to their equipment, and how to fill in gaps when something is ambiguous. AI systems lack that context. Without explicit structure, typed components, and precise meaning, they guess. In regulated industries, field service, and technical support, a guess is a liability.
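To make the contrast concrete, here is a minimal sketch of what "explicit structure and typed components" can mean in practice. The field names and values are invented for illustration; the source does not prescribe a specific schema.

```python
from dataclasses import dataclass, field

# The same maintenance step, first as the prose a human technician reads,
# then as a typed record a machine can interpret without guessing.
raw_step = "Torque the flange bolts to spec. Caution: applies to Model X200 only."

@dataclass
class ProcedureStep:
    action: str                       # what the technician does
    target: str                       # the component acted on
    parameter: str                    # explicit value, not "to spec"
    applies_to: list[str]             # equipment models this step covers
    warnings: list[str] = field(default_factory=list)

structured_step = ProcedureStep(
    action="torque",
    target="flange bolts",
    parameter="35 Nm",                # hypothetical value for illustration
    applies_to=["X200"],
    warnings=["Do not apply to other models"],
)

# A retrieval system can now filter on applies_to rather than guessing
# whether a caution buried in prose covers the user's equipment.
print(structured_step.applies_to)  # → ['X200']
```

The point is that the human reader infers scope and severity from experience; the typed record makes both explicit, so nothing is left to inference.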
This session, presented by Seth Earley and Heather Eisenbraun of Earley Information Science, argues that the model is the easier part of the equation. The hard work lies in engineering the content that the model retrieves.
Earley and Eisenbraun introduce the IAD-RAG methodology: Information Architecture-Directed Retrieval Augmented Generation. Instead of generic similarity-based retrieval, IAD-RAG uses a structured pipeline that transforms human-oriented documents, procedures, and expert knowledge into machine-interpretable content. The result is not just "probably relevant" answers, but specifically correct ones.
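The difference between "probably relevant" and "specifically correct" can be sketched with a toy example. This is not the actual IAD-RAG pipeline, which the talk itself describes; it is an illustration, under assumed metadata fields (`doc_type`, `model`), of similarity-only retrieval versus retrieval directed by typed constraints. Similarity here is simple token overlap (Jaccard) rather than embeddings, to keep the sketch self-contained.

```python
# Toy similarity: Jaccard overlap of lowercase tokens (stand-in for embeddings).
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Hypothetical content chunks with typed metadata attached.
chunks = [
    {"text": "Replace the filter every 90 days", "doc_type": "procedure", "model": "X200"},
    {"text": "Replace the filter every 30 days", "doc_type": "procedure", "model": "X100"},
    {"text": "Filter replacement history and anecdotes", "doc_type": "blog", "model": "X100"},
]

def naive_retrieve(query, chunks):
    """Generic RAG: rank every chunk by text similarity alone."""
    return max(chunks, key=lambda c: jaccard(query, c["text"]))

def directed_retrieve(query, chunks, **constraints):
    """Structure-directed: filter on typed metadata first, then rank."""
    eligible = [c for c in chunks if all(c.get(k) == v for k, v in constraints.items())]
    return max(eligible, key=lambda c: jaccard(query, c["text"]))

query = "how often should I replace the filter"

# Naive retrieval ties on similarity and can surface the X200 procedure
# even when the user owns an X100 -- "probably relevant," possibly wrong.
naive = naive_retrieve(query, chunks)

# Directed retrieval guarantees the model constraint before ranking.
answer = directed_retrieve(query, chunks, doc_type="procedure", model="X100")
print(answer["text"])  # → Replace the filter every 30 days
```

The design point mirrors the talk's argument: the constraint is enforced by the content's structure, not hoped for from the model's judgment.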
The talk also addresses a critical blind spot: tacit knowledge—the expertise held by experienced practitioners that never makes it into any document. As those experts leave the workforce, that knowledge disappears. The speakers explain how to capture that knowledge before it walks out the door, and how AI now makes it possible to preserve and deploy it.
The video offers a practical blueprint for organizations looking to move beyond generic RAG implementations and build AI systems that can be trusted with high-stakes decisions.