
AI in ASIA
News

MiniMax M2.7: The $0.30 Chinese Model That Evolves Itself

China's MiniMax drops a self-evolving AI that rewrites its own code and costs 50x less than Western rivals.

Intelligence Desk · 5 min read


Shanghai-based MiniMax released M2.7 on 18 March 2026, a language model with 10 billion active parameters that does something no frontier model has done before: it rewrites its own code. Over 100 autonomous iteration cycles, M2.7 handled between 30% and 50% of its own reinforcement learning workflow, from diagnosing failures to modifying its scaffold architecture to running evaluations and deciding whether to keep or revert changes.


The result is a model that matches OpenAI's GPT-5.3-Codex on the SWE-Pro benchmark at 56.22%, rivals Anthropic's Claude Opus 4.6 on agent tasks, and costs 50 times less on input tokens. At $0.30 per million input tokens, dropping to an effective $0.06 with cache optimisation, M2.7 is the cheapest frontier-class model on the market by a wide margin.

A Model That Debugs Its Own Training

Self-evolution is the headline feature. Using MiniMax's OpenClaw agent framework, M2.7 ran an iterative loop: analyse failure trajectories, plan changes, modify scaffold code, run evaluations, compare results, then decide to keep or revert. It completed more than 100 of these cycles autonomously, yielding a 30% internal performance gain without human intervention.
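The loop described above can be sketched in code. This is a hypothetical illustration of the analyse-plan-modify-evaluate-decide cycle as reported, not MiniMax's actual OpenClaw implementation; every function name here (`propose_patch`, `run_eval`, `self_evolve`) is invented for the sketch, and a random number stands in for a real agent's patch quality.

```python
import random  # stand-in for a real agent proposing and scoring patches


def propose_patch(failures):
    """Stand-in: a real agent would analyse failure trajectories
    and emit a concrete change to the scaffold code."""
    return {"id": random.randint(0, 10**6), "targets": list(failures)}


def run_eval(scaffold):
    """Stand-in: score the current scaffold on an evaluation set
    (higher is better)."""
    return scaffold.get("score", 0.0)


def self_evolve(scaffold, failures, cycles=100):
    """Analyse -> plan -> modify -> evaluate -> keep-or-revert, repeated."""
    best = run_eval(scaffold)
    for _ in range(cycles):
        patch = propose_patch(failures)
        # A candidate scaffold whose score moves by a small random amount,
        # standing in for the effect of a real code change.
        candidate = dict(scaffold, patch=patch,
                         score=best + random.uniform(-0.05, 0.05))
        score = run_eval(candidate)
        if score > best:
            scaffold, best = candidate, score  # keep the change
        # else: revert, i.e. keep the previous scaffold untouched
    return scaffold, best
```

The key property of the loop, preserved in this sketch, is that changes are only kept when the evaluation improves, so performance is monotone over the 100+ cycles even when most proposed patches fail.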


This is not fine-tuning in the traditional sense. MiniMax describes M2.7 as a "digital engineer" that deeply participates in its own iteration, building evaluation sets, updating its memory, and improving its own skills. The company says this approach accelerated its shift towards becoming an "AI-native organisation," with the goal of full autonomy in data collection, training, and evaluation.

Benchmarks That Punch Above Its Weight

Despite activating only 10 billion parameters, the smallest count in its performance tier, M2.7 posts results that would have been frontier-only territory a year ago. On MLE-Bench Lite it achieved a 66.6% medal rate, tying Google's Gemini 3.1, and won nine gold medals across 22 machine learning competitions, all run on a single A30 GPU.

Benchmark                 | M2.7 Score        | Comparable Model
SWE-Pro (coding)          | 56.22%            | GPT-5.3-Codex (matched)
SWE Multilingual          | 76.5%             | Frontier tier
GDPval-AA (office tasks)  | 1,495 Elo         | Highest among open-source models
MM Claw (complex skills)  | 97% adherence     | Top tier globally
MLE-Bench Lite            | 66.6% medal rate  | Gemini 3.1 (tied)
Toolathon (agent tools)   | 46.3%             | Global top tier


The model runs at 100 tokens per second, roughly three times faster than its nearest competitors. Two variants are available: the standard M2.7 for production workloads and M2.7-highspeed for latency-sensitive applications.

By The Numbers

  • $0.30 per million input tokens, making M2.7 approximately 50 times cheaper than Claude Opus 4.6 on input and 60 times cheaper on output (MiniMax)
  • 10 billion active parameters, the smallest model in Tier-1 performance class, yet matching models 10-20 times its size (MiniMax)
  • 100+ autonomous iteration cycles completed during self-evolution, with a 30% internal performance gain (VentureBeat)
  • 1.87 trillion tokens in weekly call volume, making predecessor M2.5 the most-used large model globally for five consecutive weeks (OpenRouter)

"As AI increasingly interacts with people in moments of emotional vulnerability, we as WHO and its stakeholders must ensure these systems are designed and governed with safety, accountability and human well-being at their core."
Sameer Pujari, WHO AI Lead, on the broader implications of rapidly advancing AI capabilities

What Self-Evolution Means for the Industry

MiniMax is the second Chinese startup to release a proprietary cutting-edge model in recent months, following z.ai with its GLM-5 Turbo. But M2.7's self-evolution capability sets it apart. Where previous models required human researchers to design training pipelines, M2.7 can recursively build its own evaluation datasets, iterate on its architecture, and improve its skill library.

The implications extend beyond MiniMax's own products. If self-evolving models prove reliable at scale, the cost and timeline of AI development could compress dramatically. A process that currently takes teams of researchers months could, in theory, happen in days. For Asia's AI ecosystem, where China has embedded AI into its core economic strategy, this represents a potential acceleration of an already rapid development cycle.

"We are at a critical juncture. The pace of AI adoption in people's daily lives has far outstripped investment in understanding its impact."
Sameer Pujari, WHO AI Lead

The Cost Gap Widens

Perhaps the most disruptive aspect of M2.7 is its pricing. At $0.30 per million input tokens, it undercuts every major Western frontier model by an order of magnitude. With cache optimisation, the effective cost drops to $0.06 per million tokens, a price point that makes enterprise AI deployment economically viable even for small and medium businesses across Asia.
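The arithmetic behind these claims is easy to check. In this sketch, the Western-model rate is back-derived from the article's "50x" claim rather than taken from any official price list, and the monthly volume is an invented example figure.

```python
M27_INPUT = 0.30               # USD per million input tokens (per article)
M27_CACHED = 0.06              # effective rate with cache optimisation
WESTERN_INPUT = M27_INPUT * 50  # implied rival rate from the "50x" claim


def monthly_cost(tokens_millions, rate_per_million):
    """USD cost for a monthly volume given in millions of input tokens."""
    return tokens_millions * rate_per_million


volume = 1_000  # hypothetical: 1 billion input tokens per month
print(monthly_cost(volume, M27_INPUT))      # 300.0
print(monthly_cost(volume, M27_CACHED))     # 60.0
print(monthly_cost(volume, WESTERN_INPUT))  # 15000.0
```

At a billion input tokens a month, the gap is $300 (or $60 cached) versus an implied $15,000 for a Western rival, which is the order-of-magnitude difference driving the enterprise-adoption argument.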

This cost advantage builds on the momentum established by M2.5, which led global model usage for five consecutive weeks with 1.87 trillion tokens in weekly call volume on OpenRouter. MiniMax's approach, building smaller but more efficient models that self-optimise, stands in contrast to the brute-force scaling that has defined Western AI development. For companies across South Korea, Japan, and the rest of Asia looking to deploy AI at scale, the cost equation just shifted decisively.

The AIinASIA View: MiniMax M2.7 is not just another Chinese model release. It is a proof of concept for self-evolving AI, and that changes the game for everyone. When a model can handle half its own research workflow, the bottleneck shifts from compute and talent to imagination. We believe the 50x cost advantage over Western models will accelerate enterprise adoption across Asia faster than any government subsidy programme could. The question is no longer whether Chinese models can compete on quality. It is whether Western labs can compete on price.

What makes M2.7 different from other Chinese AI models?

M2.7 is the first large model to deeply participate in its own iteration. It autonomously runs reinforcement learning workflows, handles 30-50% of its development pipeline, and completed over 100 self-improvement cycles without human input, a capability no other model has demonstrated at this scale.

How does M2.7 compare to ChatGPT and Claude?

M2.7 matches GPT-5.3-Codex on coding benchmarks and approaches Claude Opus 4.6 on agent tasks, while costing approximately 50 times less on input tokens. It runs at 100 tokens per second, roughly three times faster than competitors, though it has fewer parameters at 10 billion active.

Is M2.7 open source?

M2.7 is a proprietary model available through MiniMax's agent platform and open API platforms. While not fully open source, its low pricing makes it broadly accessible. The predecessor M2.5 achieved the highest global usage among all models on OpenRouter.

What does self-evolving AI mean for jobs in Asia?

Self-evolving AI could compress development timelines from months to days, reducing the need for large research teams. However, it also lowers the barrier for smaller companies to deploy sophisticated AI, potentially creating new roles in AI orchestration and oversight across Asia's workforce.

MiniMax M2.7 represents a new chapter in the global AI race, one where the finish line keeps moving because the models are now moving it themselves.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


