
AI in ASIA
News

Free Chinese AI claims to beat GPT-5

China's Moonshot AI lab unveiled Kimi K2 Thinking, a reasoning model that it claims outperforms established US models on certain benchmarks.

Anonymous · 3 min read

AI Snapshot

The TL;DR: what matters, fast.

Moonshot introduced Kimi K2 Thinking, asserting it surpasses OpenAI's GPT-5 and Anthropic's Claude Sonnet 4.5 in reasoning benchmarks.

Kimi K2 Thinking is an open-source Mixture-of-Experts model with roughly 1 trillion parameters, available via Hugging Face.

The model's training cost was under $5 million, offering a cost-effective alternative to proprietary AI solutions.

Who should pay attention: AI developers | Deep learning researchers | Open-source advocates

What changes next: The AI landscape will intensify with competition from new foundation models.

Kimi K2 Thinking: A New Contender

Moonshot introduced Kimi K2 Thinking on Thursday, positioning it as a powerful reasoning model. The company asserts that Kimi K2 Thinking outperforms OpenAI's GPT-5 and Anthropic's Claude Sonnet 4.5 in several critical areas. These include:

  1. Humanity's Last Exam: A benchmark assessing general knowledge and reasoning.
  2. BrowseComp: Evaluates an AI agent's ability to extract complex information from the internet.
  3. Seal-0: Measures advanced reasoning capabilities.

While Kimi K2 Thinking demonstrated comparable coding abilities to GPT-5 and Sonnet 4.5, its primary strength lies in its reasoning and adaptive capabilities. Moonshot describes the model's approach on its website:

"By reasoning while actively using a diverse set of tools, K2 Thinking is capable of planning, reasoning, executing, and adapting across hundreds of steps to tackle some of the most challenging academic and analytical problems."
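Moonshot has not published the internals of this loop, but the agentic pattern it describes (plan, call a tool, observe the result, adapt, repeat) can be sketched in broad strokes. The following is an illustration only; every name and structure here is hypothetical, not Moonshot's implementation:

```python
# Illustrative sketch of an agentic reasoning loop of the kind Moonshot
# describes: the model alternates between reasoning and tool use, folding
# each observation back into its next step. All names are hypothetical.

def run_agent(task, tools, reason, max_steps=300):
    """reason(task, history) returns either {"final": answer} or a dict
    with "thought", "tool", and "args" describing the next tool call."""
    history = []
    for _ in range(max_steps):
        step = reason(task, history)
        if step.get("final") is not None:       # the model decides it is done
            return step["final"]
        tool = tools[step["tool"]]              # pick a tool, e.g. a web browser
        observation = tool(**step["args"])      # execute it and observe
        history.append((step["thought"], observation))  # adapt on the next pass
    return None  # gave up after max_steps
```

The "hundreds of steps" Moonshot cites corresponds to a large `max_steps` budget: the loop keeps interleaving reasoning and tool calls until the model itself signals a final answer.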

Model Architecture and Open-Source Advantage

Kimi K2 Thinking is built as a Mixture-of-Experts (MoE) model, combining long-horizon planning with adaptive reasoning. It can use online tools, such as web browsers, enabling it to:

  1. Continuously generate and refine hypotheses.
  2. Verify evidence and construct coherent answers.
  3. Decompose complex, ambiguous problems into clear, actionable subtasks.

The model has approximately 1 trillion parameters and is accessible via Hugging Face. A significant aspect of Kimi K2 Thinking, which builds on the earlier Kimi K2 model released in July, is its open-source availability. This means developers can access and utilise its underlying code and weights without charge.
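Part of what makes a 1-trillion-parameter model practical is the MoE design itself: for each token, a router activates only a handful of "expert" sub-networks rather than the full parameter count. Moonshot has not documented K2 Thinking's exact routing, so the sketch below shows generic top-k gating, not the model's actual mechanism:

```python
import math

def top_k_gating(router_logits, k=2):
    """Pick the k highest-scoring experts and softmax-normalise their weights.

    Generic MoE routing sketch; K2 Thinking's real router is not public,
    so the mechanics here are illustrative only.
    """
    # Indices of the k largest logits: the "active" experts for this token.
    top = sorted(range(len(router_logits)), key=lambda i: router_logits[i])[-k:]
    # Softmax over just the selected logits gives the mixing weights.
    exps = {i: math.exp(router_logits[i]) for i in top}
    total = sum(exps.values())
    return {i: exps[i] / total for i in top}

def moe_layer(x, experts, router):
    """Combine outputs of only the selected experts, weighted by the gate."""
    weights = top_k_gating(router(x))
    return sum(w * experts[i](x) for i, w in weights.items())
```

Because only the top-k experts run per token, compute per forward pass scales with the active experts rather than the full trillion parameters, which is also one reason MoE training can be comparatively cheap.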

This open-source approach, combined with Moonshot's claim of superior agentic capabilities compared to proprietary models, presents a compelling proposition. Furthermore, Moonshot stated the training cost for Kimi K2 Thinking was less than $5 million – specifically $4.6 million, according to CNBC. This figure is notably small when compared to the billions invested by prominent AI laboratories in the United States. Should these performance claims be independently verified, the implications for the AI industry could be substantial.

Implications for Businesses

The rapid advancements in AI, particularly the emergence of sophisticated AI agents, have created pressure for businesses to integrate these tools. Since the launch of ChatGPT nearly three years ago, companies have been encouraged to adopt AI solutions, often marketed as productivity enhancements and virtual assistants. This typically involves investing in enterprise-level offerings, such as OpenAI's ChatGPT for Enterprise.

The arrival of an open-source model like Kimi K2 Thinking, which purports to offer advanced capabilities at a significantly lower training cost, could disrupt existing market dynamics. It might provide businesses with more accessible and cost-effective alternatives for deploying AI agents, potentially challenging the dominance of proprietary, high-cost solutions. For instance, smaller businesses might find new ways to survive Google's AI Overview by leveraging such open-source models. This shift could also impact how companies approach their generative AI adoption strategies. Further research into open-source AI models can be found in publications like those from the MIT Technology Review.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.

Latest Comments (2)

Priya Ramasamy (@priyaram) · 26 November 2025

moonshot claiming superior agentic capabilities from open-source always makes me a bit skeptical. we've tried integrating some open-source models here for internal use, and the local adaptation, especially to Bahasa Melayu nuances, is always where they fall short. the benchmarks are global but our market isn't.

Marcus Thompson (@marcust) · 11 November 2025

We've seen some solid results ourselves integrating open-source models into our dev workflows. The Kimi K2's focus on long-horizon planning and breaking down "ambiguous problems" would be huge for our team. The $4.6M training cost for that kind of capability? That's really impressive for a MoE model.
