Alibaba has officially unveiled its latest family of large language models called Qwen3, a bold move positioning the Chinese tech giant against the likes of OpenAI’s ChatGPT and Google’s Gemini. With models scaling up to 235 billion parameters and multilingual capabilities across 119 languages, Qwen3 marks a significant evolution in Alibaba’s AI ambitions.

Qwen3 AI: Overview and Key Features
The Qwen3 family includes eight AI models ranging from 0.6B to 235B parameters: six dense models and two Mixture of Experts (MoE) models, an architectural split that balances performance against efficiency. Notably, the flagship model, Qwen3-235B-A22B, has 235 billion total parameters but activates only 22 billion per token, keeping inference cost down without sacrificing quality.
In benchmark evaluations, the flagship outperformed or matched top-tier models such as Grok-3, Gemini-2.5-Pro, and DeepSeek-R1 in coding, math, and general reasoning. The smaller MoE variant, Qwen3-30B-A3B, reportedly outshines models like QwQ-32B while activating only 3 billion parameters per token.
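The sparse activation behind these MoE variants can be illustrated with a toy top-k router. This is a sketch only: the expert functions, scores, and `top_k=2` value below are illustrative assumptions, not details of Qwen3's actual gating network.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of router scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_layer(token, experts, gate_scores, top_k=2):
    """Route a token through only the top-k experts (sparse activation).

    `experts` is a list of callables; `gate_scores` are per-expert router
    scores for this token (in a real model they come from a learned
    gating network). Experts outside the top-k are never evaluated,
    which is what keeps activated parameters far below total parameters.
    """
    probs = softmax(gate_scores)
    # Pick the k highest-scoring experts; all others stay inactive.
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    # Renormalize the selected probabilities and mix the expert outputs.
    norm = sum(probs[i] for i in top)
    return sum(probs[i] / norm * experts[i](token) for i in top)

# Toy example: 8 "experts" that just scale the input differently.
experts = [lambda x, s=s: s * x for s in range(1, 9)]
scores = [0.1, 2.0, 0.3, 1.5, 0.0, 0.2, 0.4, 0.1]  # router scores
out = moe_layer(10.0, experts, scores, top_k=2)  # only 2 of 8 experts run
```

The same principle, scaled up, is why Qwen3-235B-A22B can carry 235B parameters yet pay the inference cost of roughly a 22B model per token.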
Multilingual and Indian Language Support
Alibaba’s Qwen3 AI supports 119 languages, including 13 Indian languages: Hindi, Gujarati, Marathi, Chhattisgarhi, Awadhi, Maithili, Bhojpuri, Sindhi, Punjabi, Bengali, Oriya, Magahi, and Urdu. This makes Qwen3 especially attractive for developers and enterprises in multilingual regions like India.
Hybrid Thinking and Two Modes of Operation
One of the standout features of Qwen3 is its “hybrid thinking” capability. The models offer two modes of reasoning:
- Thinking Mode: Step-by-step processing for tasks requiring depth and logical flow.
- Non-Thinking Mode: Rapid response generation for real-time or less complex interactions.
This dual-mode system gives users dynamic control over performance, cost, and latency depending on the task.
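Qwen3 exposes this switch both as an `enable_thinking` flag in its chat template and as `/think` and `/no_think` soft-switch tags inside the prompt. A minimal sketch of per-request switching with the soft-switch tags follows; the `build_prompt` helper is our own illustration, not part of Qwen's API.

```python
def build_prompt(user_message: str, thinking: bool) -> str:
    """Append Qwen3's soft-switch tag to toggle reasoning per request.

    `/think` asks the model to reason step by step before answering;
    `/no_think` requests a direct answer with lower latency and cost.
    """
    tag = "/think" if thinking else "/no_think"
    return f"{user_message} {tag}"

# Deep reasoning for a proof, fast path for a simple lookup.
slow = build_prompt("Prove that sqrt(2) is irrational.", thinking=True)
fast = build_prompt("What is the capital of France?", thinking=False)
```

Because the tag travels with each message, an application can mix both modes in one conversation, spending compute only where the task warrants it.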
Availability and Open-Weight Access
Alibaba has released several Qwen3 models as open weights under the Apache 2.0 license, including Qwen3-32B, Qwen3-14B, Qwen3-8B, Qwen3-4B, Qwen3-1.7B, and Qwen3-0.6B. The models are publicly available on platforms like Hugging Face, ModelScope, and Kaggle. They can be served with frameworks such as SGLang and vLLM, or run locally via Ollama, LM Studio, MLX, llama.cpp, and KTransformers.
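As a rough sketch of two of these deployment paths, the commands below show the general shape; the exact model tags are assumptions, so check the Hugging Face and Ollama registries for the names that actually exist.

```shell
# Serve an open-weight Qwen3 model behind an OpenAI-compatible API via vLLM
# (model tag assumed; substitute the variant you need)
vllm serve Qwen/Qwen3-8B

# Or pull and chat with a quantized build locally through Ollama
# (tag assumed; see the Ollama library for available sizes)
ollama run qwen3:8b
```

Both paths start from the same Apache 2.0 weights, so teams can prototype locally with Ollama and move to a vLLM server for production without changing models.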
Optimized for Coding and Agent Tasks
Alibaba emphasizes that Qwen3 models are ideal for coding applications, complex problem solving, and agent-based interactions. The flexible parameter activation allows organizations to scale responses based on compute budgets, providing an optimal balance between inference quality and cost.
Conclusion: Another Challenger in the AI Race
With Qwen3, Alibaba has once again entered the competitive large language model landscape, directly challenging OpenAI’s GPT lineup and Google’s Gemini. Offering multilingual support, hybrid thinking, and open-weight availability, the Qwen3 models are poised to be powerful tools for developers and enterprises worldwide.