AI agents often forget everything once a session ends, forcing users to re-explain context repeatedly. A new project called LLM Wiki v2, building on Andrej Karpathy's original concept, aims to solve this by creating a persistent, scalable knowledge base for AI.
The tool introduces what its developers call the "missing layer" of AI memory: a knowledge lifecycle that uses confidence scoring to combat data rot and automatic forgetting curves to prune stale entries. Instead of flat markdown pages, LLM Wiki v2 organizes information into a typed knowledge graph, enabling smarter queries through relationship traversal.
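The lifecycle idea can be sketched as an entry whose confidence decays over time, Ebbinghaus-style, until it is flagged for re-verification or pruning. The class, field names, and half-life parameter below are illustrative assumptions, not the project's actual API:

```python
import math
import time
from dataclasses import dataclass, field

@dataclass
class WikiEntry:
    """Hypothetical wiki entry with time-decaying confidence (not LLM Wiki v2's real schema)."""
    title: str
    content: str
    confidence: float = 1.0                              # score assigned at write time
    last_verified: float = field(default_factory=time.time)
    half_life_days: float = 30.0                         # assumed decay rate

    def current_confidence(self, now=None):
        """Exponential decay since last verification: halves every half_life_days."""
        now = time.time() if now is None else now
        age_days = (now - self.last_verified) / 86400
        return self.confidence * math.exp(-math.log(2) * age_days / self.half_life_days)

    def is_stale(self, now=None, threshold=0.5):
        """Entries below the threshold become candidates for re-checking or deletion."""
        return self.current_confidence(now) < threshold

entry = WikiEntry("build-system", "Project uses Bazel 6.")
one_half_life_later = entry.last_verified + 30 * 86400
print(round(entry.current_confidence(one_half_life_later), 3))  # → 0.5
```

A periodic maintenance job could then sweep entries where `is_stale()` is true, which is one plausible way "automatic forgetting" keeps a wiki from accumulating rot.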
A key feature is Hybrid Search, which combines BM25 keyword matching, vector embeddings, and graph walking. This allows agents to retrieve relevant information even from thousands of pages, without manual filing. Automation hooks and self-healing linting further reduce maintenance, turning the AI into a disciplined librarian.
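The hybrid ranking can be illustrated as a weighted blend of the three signals: a keyword score, a vector similarity, and a graph walk that lets linked pages inherit part of a strong hit's score. Everything here, including the toy corpus, 2-d embeddings, and weights, is an assumption for illustration, not the project's implementation:

```python
import math

# Toy corpus: page id -> text, typed links, and pretend embeddings.
pages = {
    "bazel-setup": "how to configure the bazel build",
    "ci-pipeline": "continuous integration runs the build on every push",
    "style-guide": "naming conventions for python modules",
}
links = {"bazel-setup": ["ci-pipeline"]}   # simplified relationship edges
embeddings = {
    "bazel-setup": (0.9, 0.1),
    "ci-pipeline": (0.7, 0.3),
    "style-guide": (0.1, 0.9),
}

def keyword_score(query, text):
    """Crude BM25 stand-in: fraction of query terms found in the page."""
    terms = query.lower().split()
    return sum(t in text for t in terms) / len(terms)

def vector_score(q, v):
    """Cosine similarity between query and page embeddings."""
    dot = sum(a * b for a, b in zip(q, v))
    return dot / (math.hypot(*q) * math.hypot(*v))

def hybrid_search(query, q_vec, k=2, w_kw=0.4, w_vec=0.4, w_graph=0.2):
    scores = {
        pid: w_kw * keyword_score(query, text)
             + w_vec * vector_score(q_vec, embeddings[pid])
        for pid, text in pages.items()
    }
    # Graph walk: neighbors of a scored page inherit a fraction of its score,
    # so related pages surface even without a direct keyword or vector match.
    for pid, neighbors in links.items():
        for n in neighbors:
            scores[n] += w_graph * scores[pid]
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(hybrid_search("bazel build", (0.9, 0.2)))  # → ['bazel-setup', 'ci-pipeline']
```

A real system would use proper BM25 (e.g. over an inverted index) and learned embeddings, but the blend-and-propagate structure is the part that matters: no single signal has to carry the whole retrieval.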
For developers building autonomous coding agents or a digital second brain, these patterns offer a practical foundation for associative intelligence. The project is open-source and available via GitHub.