Alibaba released a new artificial intelligence (AI) model on Thursday that is said to rival OpenAI's o1 series of models in reasoning capability. Launched in preview, the QwQ-32B large language model (LLM) is claimed to outperform o1-preview on several mathematical and logical reasoning benchmarks. The new AI model is available to download on Hugging Face; however, it is not fully open source.
Recently, another Chinese AI firm released DeepSeek-R1, an open-source AI model claimed to rival the ChatGPT maker's reasoning-focused foundation models.

Alibaba QwQ-32B AI Model

In a blog post, Alibaba detailed its new reasoning-focused LLM, highlighting its capabilities and limitations. QwQ-32B is currently available as a preview.
As the name suggests, it has 32 billion parameters and a context window of 32,000 tokens. The model has completed both pre-training and post-training stages. As for its architecture, the Chinese tech giant revealed that the AI model is based on the transformer architecture.
For positional encoding, QwQ-32B uses Rotary Position Embeddings (RoPE), along with the Swish-Gated Linear Unit (SwiGLU) activation function, Root Mean Square Normalization (RMSNorm), and a bias term in the attention query-key-value (QKV) projections. Like OpenAI's o1, the AI model shows its internal monologue while assessing a user query and working out the right response. This internal thought process lets QwQ-32B test various hypotheses and fact-check itself before presenting a final answer.
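To give a sense of one of the components mentioned above, here is a minimal, illustrative sketch of RMSNorm, the normalization function Alibaba cites in QwQ-32B's architecture. This is not Alibaba's implementation; the epsilon value and the optional learned gain vector are common conventions assumed for illustration.

```python
import math

def rms_norm(x, gain=None, eps=1e-6):
    """Scale a vector by its root-mean-square, then apply an optional gain.

    Unlike LayerNorm, RMSNorm does not subtract the mean; it only
    rescales, which makes it cheaper to compute.
    """
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    normed = [v / rms for v in x]
    if gain is not None:  # per-dimension learned scale, if provided
        normed = [g * v for g, v in zip(gain, normed)]
    return normed

vec = [1.0, 2.0, 3.0, 4.0]
out = rms_norm(vec)
# After normalization, the vector's root-mean-square is ~1.0
# regardless of the input's original scale.
print(round(math.sqrt(sum(v * v for v in out) / len(out)), 4))
```

In a transformer, this operation is applied to the hidden state before (or after) each attention and feed-forward block to keep activations at a stable scale during training.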
Alibaba claims the LLM scored 90.6 percent on the MATH-500 benchmark and 50 percent on the American Invitational Mathematics Examination (AIME) benchmark in internal testing, outperforming OpenAI's reasoning-focused models. Notably, stronger reasoning scores are not, by themselves, proof that models are becoming more intelligent or capable.
Reasoning of this kind is simply a newer approach, known as test-time compute, that lets a model spend additional processing time on a task. As a result, the AI can produce more accurate responses and tackle more complex questions. Several industry veterans have pointed out that newer LLMs are not improving at the same rate as earlier generations, suggesting that existing architectures are approaching a saturation point.
Even though QwQ-32B spends additional processing time on queries, it still has several limitations. Alibaba stated that the AI model can sometimes mix languages or switch between them unexpectedly, leading to language-mixing and code-switching issues. It also tends to enter circular reasoning loops, and, apart from mathematical and reasoning skills, other areas still require improvement.
Notably, Alibaba has made the AI model available via a Hugging Face listing, and both individuals and enterprises can download it for personal, academic, and commercial purposes under the Apache 2.0 licence. However, the company has not released the training data or the full training details, which means users cannot replicate the model or fully understand how it was built.