Alibaba ups ante on AI race with Qwen3 open-source LLM

Alibaba has launched the Qwen3 open-source large language model (LLM) family, intensifying competition with key players such as DeepSeek, Google and OpenAI.

The Qwen3 series comes with hybrid reasoning capabilities that allow the models to switch seamlessly between fast, general-purpose responses and deeper, multi-step reasoning for complex tasks such as mathematics, coding and logical deduction.
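Per the Qwen3 model documentation, this mode switch can be driven per request: appending a `/think` or `/no_think` tag to a user turn toggles deep reasoning on or off. A minimal sketch of building such a request follows; the `build_messages` helper name is hypothetical, and only the tag convention itself comes from Qwen3's documentation.

```python
def build_messages(prompt: str, deep_reasoning: bool) -> list[dict]:
    """Build a chat message list, toggling Qwen3's reasoning mode.

    Qwen3 exposes a "soft switch": appending /think or /no_think to a
    user turn requests multi-step reasoning or a fast direct answer.
    """
    tag = "/think" if deep_reasoning else "/no_think"
    return [{"role": "user", "content": f"{prompt} {tag}"}]


# Request step-by-step reasoning for a math question,
# and a quick direct reply for casual chat.
math_request = build_messages("Prove that sqrt(2) is irrational.", deep_reasoning=True)
chat_request = build_messages("Recommend a book on Go.", deep_reasoning=False)
```

In a real deployment these messages would then be passed through the model's chat template (for example via Hugging Face `transformers`) before generation.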

The series features six dense models and two Mixture-of-Experts (MoE) models, spanning from 0.6 billion to 235 billion parameters. All are open-sourced and freely available worldwide, supporting applications from smartphones to autonomous vehicles and robotics.

The flagship Qwen3-235B-A22B MoE model achieves high performance with only 22 billion active parameters, significantly reducing deployment costs compared to rivals such as DeepSeek R1 and OpenAI’s o1.
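The economics here follow directly from the model's name: "235B-A22B" denotes 235 billion total parameters with 22 billion active per token. A short sketch of that arithmetic, using only the figures stated above:

```python
def active_fraction(total_params: float, active_params: float) -> float:
    """Fraction of an MoE model's weights used in each forward pass."""
    return active_params / total_params


# Qwen3-235B-A22B: 235B total parameters, 22B active per token.
TOTAL_PARAMS = 235e9
ACTIVE_PARAMS = 22e9

fraction = active_fraction(TOTAL_PARAMS, ACTIVE_PARAMS)
print(f"{fraction:.1%} of weights active per token")  # roughly 9.4%
```

Because inference compute scales with the active parameter count rather than the total, only about a tenth of the network does work on any given token, which is the source of the deployment-cost advantage over dense models of comparable quality.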

On industry benchmarks, Qwen3-235B-A22B consistently matches or outperforms top competitors. For example, it leads in coding, mathematical reasoning and instruction-following, while also excelling in multilingual tasks, supporting 119 languages and dialects.

By delivering similar or superior results with lower operational costs and faster processing speeds, Qwen3 stands as a formidable alternative to DeepSeek R1, Google's Gemini 2.5 Pro and OpenAI's o1.