DeepSeek has released two new open-weight AI models, V4 Flash and V4 Pro, that promise enterprise-grade capability at no licensing cost. The flagship V4 Pro has 1.6 trillion total parameters, of which roughly 49 billion are active for any given inference, making it the largest openly available model that users can run themselves. Both models support a 1-million-token context window, enough to ingest an entire codebase or a year-long email archive for querying.
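The gap between total and active parameters is what makes a model this size practical to serve: only a small slice of the network does work on each inference step. A quick back-of-envelope calculation using the two figures quoted above (nothing here is DeepSeek-specific beyond those numbers) shows how small that slice is:

```python
# Sanity-check the sparse-activation figures quoted in the release.
total_params = 1.6e12   # 1.6 trillion total parameters
active_params = 49e9    # 49 billion active per inference

active_fraction = active_params / total_params
print(f"Active fraction per inference: {active_fraction:.2%}")  # ~3.06%
```

In other words, each inference pass touches only about 3% of the weights, which is why compute cost scales with the active count rather than the headline 1.6 trillion.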
In benchmark tests, DeepSeek claims V4 Pro matches or exceeds GPT-5.2 and Gemini 3 Pro on reasoning challenges, and ties the top models on coding-competition benchmarks. Its general knowledge trails frontier models by roughly three to six months, but the gap is narrowing. The release puts advanced AI within reach of students, developers, and small businesses without costly subscriptions. As open-weight models, both can be deployed locally or in a private cloud, giving users direct control over their data and compute costs.
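To gauge whether a given codebase or archive actually fits in a 1-million-token window, a rough sketch like the following is enough. It uses the common ~4-characters-per-token rule of thumb, which is only an approximation (the true count depends on the model's tokenizer, and this figure is not DeepSeek-specific):

```python
# Rough check of whether a body of text fits in a 1M-token context window.
CONTEXT_WINDOW = 1_000_000
CHARS_PER_TOKEN = 4  # common heuristic for English text; an assumption, not a spec

def fits_in_context(total_chars: int) -> bool:
    """Return True if text of this size likely fits in the context window."""
    estimated_tokens = total_chars / CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_WINDOW

# A 3 MB codebase (~3 million characters) is roughly 750k tokens: fits.
print(fits_in_context(3_000_000))   # True
# A 10 MB archive is roughly 2.5M tokens: too large for a single pass.
print(fits_in_context(10_000_000))  # False
```

By this estimate, a window of 1 million tokens comfortably holds a few megabytes of source code in one pass, which is what enables the whole-codebase querying described above.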