
    Free Chinese AI claims to beat GPT-5

    China's Moonshot AI lab unveiled Kimi K2 Thinking, a reasoning model claiming to outperform established US models in certain benchmarks.

Anonymous | 3 min read | 9 November 2025

    AI Snapshot

    The TL;DR: what matters, fast.

    Moonshot introduced Kimi K2 Thinking, asserting it surpasses OpenAI's GPT-5 and Anthropic's Claude Sonnet 4.5 in reasoning benchmarks.

Kimi K2 Thinking is an open-source Mixture-of-Experts model with roughly 1 trillion parameters, accessible via Hugging Face.

    The model's training cost was under $5 million, offering a cost-effective alternative to proprietary AI solutions.

    Who should pay attention: AI developers | Deep learning researchers | Open-source advocates

What changes next: Competition in the AI landscape will intensify as new open-source foundation models arrive.

    Kimi K2 Thinking: A New Contender

    Moonshot introduced Kimi K2 Thinking on Thursday, positioning it as a powerful reasoning model. The company asserts that Kimi K2 Thinking outperforms OpenAI's GPT-5 and Anthropic's Claude Sonnet 4.5 in several critical areas. These include:

    1. Humanity's Last Exam: A benchmark assessing general knowledge and reasoning.
    2. BrowseComp: Evaluates an AI agent's ability to extract complex information from the internet.
    3. Seal-0: Measures advanced reasoning capabilities.

    While Kimi K2 Thinking demonstrated comparable coding abilities to GPT-5 and Sonnet 4.5, its primary strength lies in its reasoning and adaptive capabilities. Moonshot describes the model's approach on its website:

    "By reasoning while actively using a diverse set of tools, K2 Thinking is capable of planning, reasoning, executing, and adapting across hundreds of steps to tackle some of the most challenging academic and analytical problems."

    Model Architecture and Open-Source Advantage


Kimi K2 Thinking is built as a Mixture-of-Experts (MoE) model, integrating long-horizon planning with adaptive reasoning. It incorporates online tools, such as web browsers, enabling it to (see the sketch after this list):

    1. Continuously generate and refine hypotheses.
    2. Verify evidence and construct coherent answers.
    3. Decompose complex, ambiguous problems into clear, actionable subtasks.
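
The loop described above (hypothesise, act with a tool, verify, refine) is a common agentic pattern. Below is a minimal, hypothetical Python sketch of such a loop; the function names and stopping logic are illustrative stand-ins, not Moonshot's actual implementation or API.

```python
from dataclasses import dataclass, field


@dataclass
class AgentState:
    """Running state of the illustrative reasoning loop."""
    question: str
    hypotheses: list = field(default_factory=list)
    evidence: list = field(default_factory=list)


def propose_hypothesis(state: AgentState) -> str:
    # Stand-in for the model generating or refining a hypothesis.
    return f"hypothesis #{len(state.hypotheses) + 1} for: {state.question}"


def browse_for_evidence(hypothesis: str) -> str:
    # Stand-in for a tool call (e.g. a web browser) that gathers evidence.
    return f"evidence gathered for '{hypothesis}'"


def evidence_is_sufficient(state: AgentState) -> bool:
    # Stand-in for the model verifying whether it can build a coherent answer.
    return len(state.evidence) >= 3


def solve(question: str, max_steps: int = 300) -> str:
    """Hypothesise, act with a tool, verify; repeated over many steps."""
    state = AgentState(question)
    for _ in range(max_steps):
        hypothesis = propose_hypothesis(state)
        state.hypotheses.append(hypothesis)
        state.evidence.append(browse_for_evidence(hypothesis))
        if evidence_is_sufficient(state):
            break
    return f"answer assembled from {len(state.evidence)} pieces of evidence"


if __name__ == "__main__":
    print(solve("Decompose an ambiguous research question into subtasks"))
```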

The model has approximately 1 trillion parameters and is accessible via Hugging Face. A significant aspect of Kimi K2 Thinking, which builds on the earlier Kimi K2 model released in July, is its open-source availability. This means developers can access and utilise its underlying code and weights without charge.
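
Because the weights are published openly, a developer could in principle fetch them straight from Hugging Face. The sketch below assumes a repository id modelled on Moonshot's naming for the earlier release; downloading and serving a checkpoint of this size demands substantial storage and multi-GPU hardware.

```python
# Minimal sketch of pulling the open weights from Hugging Face. The repository
# id below is an assumption based on Moonshot's naming for the July Kimi K2
# release; check the official model page for the exact id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="moonshotai/Kimi-K2-Thinking")
print(f"Checkpoint files downloaded to: {local_dir}")

# Note: at roughly 1 trillion (MoE) parameters, serving the checkpoint locally
# requires a multi-GPU setup and an inference engine such as vLLM or SGLang;
# this snippet only downloads the files.
```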

    This open-source approach, combined with Moonshot's claim of superior agentic capabilities compared to proprietary models, presents a compelling proposition. Furthermore, Moonshot stated the training cost for Kimi K2 Thinking was less than $5 million – specifically $4.6 million, according to CNBC. This figure is notably small when compared to the billions invested by prominent AI laboratories in the United States. Should these performance claims be independently verified, the implications for the AI industry could be substantial.

    Implications for Businesses

    The rapid advancements in AI, particularly the emergence of sophisticated AI agents, have created pressure for businesses to integrate these tools. Since the launch of ChatGPT nearly three years ago, companies have been encouraged to adopt AI solutions, often marketed as productivity enhancements and virtual assistants. This typically involves investing in enterprise-level offerings, such as OpenAI's ChatGPT for Enterprise.

    The arrival of an open-source model like Kimi K2 Thinking, which purports to offer advanced capabilities at a significantly lower training cost, could disrupt existing market dynamics. It might provide businesses with more accessible and cost-effective alternatives for deploying AI agents, potentially challenging the dominance of proprietary, high-cost solutions. For instance, smaller businesses might find new ways to survive Google's AI Overview by leveraging such open-source models. This shift could also impact how companies approach their generative AI adoption strategies. Further research into open-source AI models can be found in publications like those from the MIT Technology Review.



    Latest Comments (4)

Rosa Dela Cruz (@rosa_dc) | 22 November 2025

    Wow, that’s quite the claim! I’ve been using ChatGPT for a bit now, mostly for schoolwork. It’s been really helpful, though sometimes it does get a bit ‘lost in translation’ with our local idioms. I’m curious to see if this Kimi K2 model can handle the nuances better. It’d be a game-changer if it could, especially for research.

Min-jun Lee (@minjun_l) | 19 November 2025

    Wow, "Kimi K2 Thinking," eh? This AI race is heating up, innit? Seems like every other week there's a new contender challenging the established order. A proper game-changer if true.

Rahul Mehta (@rahul_m_tech) | 16 November 2025

    This news about Kimi K2 Thinking beating GPT-5 in some benchmarks is quite something. It really highlights the vigorous competition brewing in the AI space globally. Feels like a proper tech showdown, doesn't it? Every other day there's a new development, making one wonder about the future of this generative AI race.

Elaine Ng (@elaine_n_ai) | 14 November 2025

    Crikey, another AI contender from China! This could really shake up how our tech firms here in Hong Kong approach their future integrations. Good to see the competition.
