
AI in ASIA

How AI Reasoning Models Actually Think

Your AI can now pause and think before answering. Knowing when to let it changes everything.

Intelligence Desk • 7 min read

When your AI stops to think, the answers get sharper

AI Snapshot

The TL;DR: what matters, fast.

Every major AI lab now ships reasoning models that think before answering

DeepSeek R1 trained for $294,000 and holds 89% market share in China

The real skill is knowing which tasks deserve the extra thinking cost


AI Reasoning Models Break Free From Simple Pattern Matching

Every major AI lab now ships a model that can reason. Not just predict the next word, but pause, break a problem into steps, check its own logic, and then answer. OpenAI calls it o3. Anthropic calls it extended thinking. Google DeepMind calls it deep think. And DeepSeek, the Chinese startup that shook the industry in early 2025, calls it R1.

The question is no longer whether reasoning models exist. It's whether you know when to use them, and when they're expensive overkill. The distinction matters more than you might think, particularly as Chinese AI models gain global market share through aggressive pricing strategies.

Understanding when to activate reasoning mode could save you thousands in API costs whilst dramatically improving output quality on complex tasks.

How Reasoning Models Actually Think Differently

A standard large language model generates text one token at a time. It's fast, fluent, and confident, even when it's wrong. A reasoning model adds an extra step before answering: it thinks. That thinking process, sometimes visible as a chain of internal steps, lets the model decompose a hard problem, test partial solutions, and catch its own mistakes before committing to a final answer.

Think of it this way. A standard model is like a student who blurts out the first answer that comes to mind. A reasoning model is the student who reaches for scratch paper, works through the logic, crosses out a wrong turn, and only then raises their hand.

"This combination of openness and affordability allowed DeepSeek to gain traction in markets underserved by Western AI platforms." - Microsoft AI Report, January 2026

The difference shows up most clearly on tasks involving multi-step maths, formal logic, complex code debugging, and scientific reasoning. On simpler tasks like summarisation or casual chat, reasoning models are slower and more expensive for no real gain.

The Models Reshaping Global AI Adoption

| Model | Developer | Reasoning Feature | Standout Strength | API Cost (per 1M tokens) |
| --- | --- | --- | --- | --- |
| o3 | OpenAI | Built-in reasoning (low/medium/high effort) | General reasoning, ARC-AGI | $2 input / $8 output |
| o4-mini | OpenAI | Budget reasoning mode | STEM tasks at low cost | ~80% cheaper than o3 |
| Claude Opus 4.6 | Anthropic | Extended thinking (low to max) | Coding, SWE-Bench leader | $5 input / $25 output |
| Gemini 3.1 Deep Think | Google DeepMind | Deep Think mode | Mathematical reasoning | Varies by tier |
| DeepSeek R1 | DeepSeek (China) | DeepThink toggle | Price-to-performance ratio | ~76-99% below Western models |

The pricing gap is striking. DeepSeek trained R1 for roughly $294,000, a 99.7% cost reduction compared to the estimated $100 million-plus needed for GPT-4 Turbo. That efficiency isn't just a talking point. It reshaped adoption patterns across the Asia-Pacific, as our coverage of DeepSeek's market disruption demonstrated.

[Image: A developer in Singapore tests reasoning model performance across different AI platforms, March 2026]

By The Numbers

  • 97.3%: DeepSeek R1's score on the MATH-500 benchmark, beating GPT-4 at a fraction of the cost
  • $294,000: Total training cost for DeepSeek R1, versus an estimated $100 million-plus for GPT-4 Turbo
  • 89%: DeepSeek's market share in China as of early 2026, according to a Microsoft report
  • 51.24%: Share of DeepSeek's global monthly active users from China, India, and Indonesia alone
  • 80.9%: Claude Opus 4.5's score on SWE-Bench Verified, the top coding benchmark result in early 2026

When to Turn Reasoning On (And When Not To)

Reasoning models aren't a blanket upgrade. They're a specialist tool. Here's a practical breakdown of when the extra thinking time pays off and when it doesn't.

Use reasoning mode for:

  • Multi-step maths problems, proofs, or anything involving formal logic
  • Complex code debugging, refactoring across multiple files, or architecture decisions
  • Scientific analysis where precision matters more than speed
  • Tasks where you've previously caught the AI making confident but wrong claims
  • Legal or financial document analysis requiring step-by-step verification
  • Research synthesis across multiple contradictory sources

Skip reasoning mode for:

  • Simple Q&A, summarisation, or translation tasks
  • Creative writing, brainstorming, or casual conversation
  • Tasks where speed matters more than perfect accuracy
  • High-volume API calls where cost compounds quickly
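The checklist above can be sketched as a rough routing heuristic. This is a hypothetical helper, not a tested classifier; the keyword lists are illustrative, and in practice you would tune them against your own workload.

```python
# Hypothetical helper mirroring the checklist above: route a task to a
# reasoning model only when it shows signals that the extra cost pays off.

REASONING_SIGNALS = {"proof", "debug", "refactor", "verify", "audit",
                     "derive", "legal", "financial", "reconcile"}
FAST_SIGNALS = {"summarise", "translate", "brainstorm", "chat", "draft"}

def pick_mode(task_description: str) -> str:
    """Return 'reasoning' or 'standard' based on keyword overlap."""
    words = set(task_description.lower().split())
    if words & REASONING_SIGNALS:
        return "reasoning"
    # Default to the cheap model; escalate only when output quality disappoints.
    return "standard"

print(pick_mode("debug this race condition across three modules"))  # reasoning
print(pick_mode("summarise this press release"))                    # standard
```

The default-to-standard fallback matters: it keeps high-volume traffic on the cheap path, which is exactly where reasoning costs compound.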

"Open-source AI can function as a geopolitical instrument, extending Chinese influence in areas where Western platforms cannot easily operate." - Microsoft AI Division, January 2026

How DeepSeek Democratised Advanced Reasoning

Before January 2025, reasoning was a premium feature locked behind expensive Western APIs. Then DeepSeek, a Hangzhou-based startup backed by quantitative trading firm High-Flyer, released R1 as a fully open-weight model. Anyone could download it, run it locally, and fine-tune it for free.

The impact across Asia was immediate. China, India, and Indonesia now account for over half of DeepSeek's global user base. Huawei pre-installed DeepSeek on its phones, giving millions of consumers in Asia their first direct experience with a reasoning AI. The model hit 96.9 million monthly active users globally by mid-2025.

Stanford's Institute for Human-Centered AI noted that DeepSeek is just the visible tip of a much broader Chinese open-weight ecosystem. Dozens of models, from Alibaba's Qwen series to Baidu's ERNIE, are following the same playbook: train cheaply, release openly, and capture adoption across developing markets.

Platform-Specific Implementation Guide

Each platform handles reasoning slightly differently. Here's how to activate and control it across the four major providers, building on techniques we've explored in our guide to strategic AI thinking.

OpenAI (o3 and o4-mini): In ChatGPT, select the o3 or o4-mini model from the model picker. The reasoning happens automatically. Via the API, you can set reasoning_effort to low, medium, or high, trading speed and cost for accuracy. Start with medium for most tasks and only escalate to high for genuinely hard problems.
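As a minimal sketch, the effort setting looks like this through the OpenAI Python SDK's Responses API. The parameter names follow the public SDK at the time of writing, but check the current OpenAI docs before relying on them; the live call is guarded so nothing runs without an API key.

```python
# Sketch: calling o3 with an explicit reasoning effort via the OpenAI SDK.
import os

def build_o3_request(prompt: str, effort: str = "medium") -> dict:
    """Assemble request arguments; escalate effort only for hard problems."""
    assert effort in {"low", "medium", "high"}
    return {
        "model": "o3",
        "reasoning": {"effort": effort},
        "input": prompt,
    }

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # pip install openai
    client = OpenAI()
    resp = client.responses.create(
        **build_o3_request("Prove that the square root of 2 is irrational.", "high")
    )
    print(resp.output_text)
```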

Anthropic (Claude Extended Thinking): In the Claude interface, extended thinking activates automatically on Claude Opus 4.6 for complex queries. Via the API, you can set thinking budget levels from low through max. The model explicitly shows its reasoning chain, making it easier to spot where logic goes wrong.
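A hedged sketch of the same idea through the Anthropic SDK: the API I am assuming here expresses the thinking budget as an explicit token count rather than named levels, and the model id is illustrative, so verify both against the current Anthropic documentation.

```python
# Sketch: enabling extended thinking via the Anthropic Messages API.
import os

def build_claude_request(prompt: str, thinking_budget: int = 4_000) -> dict:
    """Assemble request arguments with a thinking-token budget (assumed API shape)."""
    return {
        "model": "claude-opus-4-6",  # assumed model id
        "max_tokens": 8_000,          # must exceed the thinking budget
        "thinking": {"type": "enabled", "budget_tokens": thinking_budget},
        "messages": [{"role": "user", "content": prompt}],
    }

if __name__ == "__main__" and os.environ.get("ANTHROPIC_API_KEY"):
    import anthropic  # pip install anthropic
    client = anthropic.Anthropic()
    msg = client.messages.create(**build_claude_request("Audit this clause for logical gaps."))
    # The response interleaves visible "thinking" blocks with the final text,
    # which is what makes faulty reasoning chains easy to spot.
    for block in msg.content:
        print(block.type)
```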

Google DeepMind (Gemini Deep Think): In Google AI Studio, toggle Deep Think mode on Gemini 3.1 Pro. It excels at mathematical reasoning. The model's 1-million-token context window means you can feed it an entire research paper and ask it to verify specific claims step by step.
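A sketch of the equivalent toggle through the google-genai SDK. The "gemini-3.1-pro" id and the exact config keys are assumptions based on how earlier Gemini thinking models were configured; confirm them against the AI Studio documentation.

```python
# Sketch: requesting a thinking budget through the google-genai SDK.
import os

def build_gemini_config(thinking_budget: int = 8_192) -> dict:
    """The SDK accepts a plain dict mirroring GenerateContentConfig (assumed keys)."""
    return {"thinking_config": {"thinking_budget": thinking_budget}}

if __name__ == "__main__" and os.environ.get("GOOGLE_API_KEY"):
    from google import genai  # pip install google-genai
    client = genai.Client()
    resp = client.models.generate_content(
        model="gemini-3.1-pro",  # assumed model id
        contents="Verify the central claim of the attached paper step by step.",
        config=build_gemini_config(),
    )
    print(resp.text)
```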

DeepSeek R1: Visit chat.deepseek.com and toggle DeepThink mode. You can watch the thinking process unfold in real time via visible think tags. For local use, download distilled versions through Ollama or LM Studio and run them on a capable GPU, entirely free. Our tutorial on running AI models locally covers the setup process.
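Once a distill is running locally, Ollama exposes an OpenAI-compatible endpoint, and R1's visible think tags can be separated from the final answer with a few lines of Python. The "deepseek-r1:8b" tag is one of several distills Ollama publishes; adjust it to whichever you pulled, and note the live call below is guarded behind an environment check.

```python
# Sketch: querying a local DeepSeek R1 distill through Ollama's
# OpenAI-compatible endpoint, then splitting the <think> trace from the answer.
import os
import re

def split_thinking(raw: str) -> tuple[str, str]:
    """Return (thinking_trace, final_answer) from R1-style <think>...</think> output."""
    m = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if not m:
        return "", raw.strip()
    return m.group(1).strip(), raw[m.end():].strip()

if __name__ == "__main__" and os.environ.get("OLLAMA_HOST"):
    from openai import OpenAI  # pip install openai
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
    resp = client.chat.completions.create(
        model="deepseek-r1:8b",  # adjust to the distill you pulled
        messages=[{"role": "user", "content": "What is 17 * 24?"}],
    )
    trace, answer = split_thinking(resp.choices[0].message.content)
    print("thinking:", trace[:200])
    print("answer:", answer)
```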

Cost-Benefit Analysis: When Reasoning Pays Off

The economics of reasoning models create clear use-case boundaries. For a typical 1,000-token reasoning task, you're looking at roughly $0.008 with o3 on medium effort, compared to $0.002 for GPT-4 Turbo. That 4x cost multiplier means reasoning needs to provide 4x the value to justify itself.

The sweet spot emerges in scenarios where mistakes are expensive. A reasoning model that catches a critical logic error in financial analysis or prevents a costly coding bug quickly pays for itself. But for routine content generation or simple queries, standard models remain the economical choice.

What's the difference between reasoning models and standard language models?

Standard models generate responses immediately using pattern matching. Reasoning models add an internal "thinking" step where they break down problems, test solutions, and verify logic before responding. This makes them slower and more expensive but significantly more accurate on complex tasks.

Which reasoning model offers the best value for developers?

DeepSeek R1 currently provides the strongest price-to-performance ratio, especially for mathematical and coding tasks. It's available both as a free online interface and as downloadable weights for local deployment, making it particularly attractive for budget-conscious developers.

Do reasoning models work better in English than other languages?

Most reasoning models were primarily trained on English data, so they typically perform best in English. However, models like DeepSeek R1 show strong performance in Chinese, and multilingual reasoning capabilities are rapidly improving across all major providers.

Can I use reasoning models for creative tasks?

While reasoning models can handle creative work, they often overthink simple creative prompts, leading to unnecessarily complex outputs. For brainstorming, storytelling, or artistic projects, standard models usually provide better results at lower cost and faster speed.

How do I know if a task needs reasoning mode?

If your task involves multiple logical steps, mathematical calculations, code debugging, or fact verification, reasoning mode likely provides value. If it's conversational, creative, or requires quick responses, standard models are typically sufficient and more cost-effective.

The AIinASIA View: The reasoning model revolution represents more than a technical advancement. It's a fundamental shift in how AI capabilities distribute globally. DeepSeek's success demonstrates that innovation leadership no longer requires Silicon Valley's resources. We're entering an era where the most capable AI tools may emerge from unexpected quarters, powered by efficiency rather than scale. For Asian developers and businesses, this democratisation of advanced AI reasoning creates unprecedented opportunities to build sophisticated applications without Western platform dependencies. The question isn't whether to adopt reasoning models, but how quickly you can integrate them into your competitive advantage.

The reasoning model landscape will continue evolving rapidly throughout 2026. As training costs decrease and inference speeds improve, the line between standard and reasoning models may blur entirely. The organisations that master these tools today, understanding their strengths and limitations, will be best positioned to capitalise on whatever comes next.

Have you experimented with reasoning models in your projects? Which use cases have surprised you most? Drop your take in the comments below.


This article is part of the This Week in Asian AI learning path.
