Alibaba's New 27B Model Outperforms Its 397B Predecessor in Coding Benchmarks
AI
April 26, 2026 · 3:58 PM

Alibaba has released Qwen3.6-27B, a dense open-source model with 27 billion parameters that surpasses its much larger predecessor, Qwen3.5-397B-A17B, on nearly all coding benchmarks. The new model scored 77.2 on SWE-bench Verified (vs. 76.2 for the 397B model) and 59.3 on Terminal-Bench 2.0 (vs. 52.5). It supports both text-only and multimodal reasoning, and its dense architecture makes it simpler to deploy than its predecessor's Mixture-of-Experts design. Qwen3.6-27B is available through Qwen Studio and the Alibaba Cloud Model Studio API, and as open weights on Hugging Face and ModelScope, targeting developers who want strong coding performance without running a massive model.