Xiaomi has unveiled MiMo V2.5, a massive AI model boasting 1 trillion parameters, designed to rival leading models from OpenAI and Anthropic. The model features a Mixture-of-Experts (MoE) architecture, a 1-million-token context window, and native multimodal capabilities for vision, audio, and text.
MiMo V2.5 introduces a "Hybrid Attention" mechanism with a 5:1 ratio for faster processing, and uses Multi-Token Prediction (MTP) to deliver a claimed 3x inference speedup. The model is built for agentic workflows, enabling autonomous coding and task execution.
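Xiaomi has not detailed what the 5:1 ratio refers to, but in comparable hybrid designs it typically means interleaving several efficient (e.g. sliding-window) attention layers with one full global-attention layer. The sketch below illustrates that interpretation only; the function name, layer labels, and numbers are illustrative assumptions, not Xiaomi's actual implementation.

```python
def hybrid_layer_schedule(num_layers: int, ratio: int = 5) -> list:
    """Hypothetical layer schedule for a 'hybrid attention' stack.

    Assumes the announced 5:1 ratio means five efficient 'local'
    attention layers for every one full 'global' attention layer.
    """
    schedule = []
    for i in range(num_layers):
        # Every (ratio + 1)-th layer uses full global attention;
        # the rest use cheaper local (e.g. sliding-window) attention.
        schedule.append("global" if (i + 1) % (ratio + 1) == 0 else "local")
    return schedule

if __name__ == "__main__":
    # A 12-layer toy stack: 10 local layers, 2 global layers (5:1 ratio).
    print(hybrid_layer_schedule(12))
```

Under this reading, most layers attend only to a nearby window of tokens, which is what makes a 1-million-token context computationally tractable.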
Two versions are available: MiMo V2.5-Pro, a high-performance variant, and MiMo V2.5-Flash, an open-source model for broader access. Xiaomi claims its "Harness Awareness" feature allows the AI to autonomously write production-grade code.
Early benchmarks show MiMo V2.5 competing closely with GPT-5.5 and Claude 4.7 on reasoning, coding, and multimodal tasks. The model marks Xiaomi's aggressive push to the AI frontier, aiming to democratize advanced AI through its ecosystem.