
AI in ASIA

The Truth About OpenAI's o1: Is It Worth the Hype?

OpenAI's o1 model costs 4x more than GPT-4o but delivers specialised reasoning capabilities that may not justify the premium for most users.

Intelligence Desk · 8 min read

AI Snapshot

The TL;DR: what matters, fast.

OpenAI's o1 model costs 4x more than GPT-4o with specialised multi-step reasoning capabilities

The model excels at complex problem-solving but lacks multimodal features and built-in tools

Early reviews suggest it's a specialised tool rather than a universal upgrade for most use cases

OpenAI's o1 Model: Premium Price for Premium Reasoning

OpenAI's latest o1 model promises sophisticated reasoning capabilities, but early reviews suggest it's a specialised tool rather than a universal upgrade. While the model excels at complex problem-solving, it comes with significant trade-offs in cost and speed compared to GPT-4o.

The o1 model, nicknamed "Strawberry," introduces a deliberate thinking phase before generating responses. This approach makes it roughly four times more expensive than GPT-4o whilst lacking many features that made its predecessor popular, including multimodal capabilities and various built-in tools.

"It's impressive, but I think the improvement is not very significant. It's better at certain problems, but you don't have this across-the-board improvement," notes Ravid Shwartz Ziv, NYU professor studying AI models.

Multi-Step Reasoning Changes the Game

The o1 model's core innovation lies in breaking complex problems into smaller, manageable steps. This multi-step reasoning approach allows the AI to identify correct and incorrect steps in its problem-solving process, a technique that builds on established principles but hasn't been practically implemented until now.
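As a toy illustration only (not OpenAI's actual mechanism, whose internals are undisclosed), this kind of stepwise reasoning with verification can be sketched as a chain of checkable intermediate claims, where a wrong step can be identified and the chain rejected:

```python
# Toy sketch of multi-step reasoning with step verification.
# Each step is a (description, claimed_value, recompute_fn) triple;
# a step checks out only if recomputing matches the claim.

def verify_step(step):
    desc, claimed, recompute = step
    return recompute() == claimed

def run_chain(steps):
    """Walk the chain, flagging the first incorrect step if any."""
    for i, step in enumerate(steps):
        if not verify_step(step):
            return None, i          # chain fails at step i
    return steps[-1][1], None       # final claim is the answer

# Example: the article's Thanksgiving scenario, 11 guests.
chain = [
    ("total dishes", 22, lambda: 11 * 2),
    ("oven batches needed", 6, lambda: -(-22 // 4)),  # ceil(22 / 4)
    ("hours at 30 min per batch", 3.0, lambda: 6 * 0.5),
]
answer, failed_at = run_chain(chain)  # answer == 3.0, no failed step
```

The guest count and dish arithmetic are invented for the example; the point is only that decomposed steps are individually checkable, which a single monolithic answer is not.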


OpenAI charges users for "reasoning tokens," which represent the individual steps the model uses to work through problems. This pricing structure makes strategic use essential to avoid escalating costs, particularly for businesses considering enterprise deployment.
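To see how reasoning tokens change the arithmetic, here is a minimal cost sketch. The per-1K-token prices and token counts below are illustrative placeholders, not OpenAI's published rates; the structure reflects how the billing is described, with reasoning tokens invisible in the response but billed like output tokens:

```python
# Hypothetical per-query cost comparison; all prices are placeholders.

def query_cost(input_tokens, output_tokens, reasoning_tokens,
               price_in_per_1k, price_out_per_1k):
    # Reasoning tokens never appear in the response, but they are
    # billed at the output rate, so they can dominate the bill.
    billed_output = output_tokens + reasoning_tokens
    return (input_tokens * price_in_per_1k
            + billed_output * price_out_per_1k) / 1000

# Same prompt and visible answer length on both models:
gpt4o_cost = query_cost(500, 300, 0,
                        price_in_per_1k=0.005, price_out_per_1k=0.015)
o1_cost = query_cost(500, 300, 3000,
                     price_in_per_1k=0.015, price_out_per_1k=0.060)

print(f"GPT-4o ${gpt4o_cost:.4f} vs o1 ${o1_cost:.4f}")
```

Under these illustrative numbers, the hidden reasoning tokens push the gap well past the headline per-token premium, which is why the article stresses strategic use: two prompts of identical visible length can produce very different bills.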

"If you can train a reinforcement learning algorithm paired with some of the language model techniques that OpenAI has, you can technically create step-by-step thinking and allow the AI model to walk backwards from big ideas you're trying to work through," explains Kian Katanforoosh, Workera CEO and Stanford adjunct lecturer.

Understanding how these AI reasoning models actually think provides crucial context for organisations evaluating whether o1's capabilities justify its premium pricing.

By The Numbers

  • o1 costs approximately 4x more than GPT-4o per query
  • The model takes 12+ seconds to process complex queries versus GPT-4o's near-instant responses
  • o1 generated 800+ words for simple queries that GPT-4o answered in three sentences
  • OpenAI released the o1 preview in September 2024, sparking AGI speculation
  • The model lacks multimodal capabilities and tool integrations available in GPT-4o

Real-World Performance Testing

Practical testing reveals o1's strengths and limitations in everyday scenarios. When asked to plan Thanksgiving dinner for 11 people, o1 spent 12 seconds thinking before delivering a comprehensive response that broke down its reasoning process step by step. The model suggested practical solutions like prioritising oven space and even recommended renting portable equipment.

However, o1's thoroughness can become overwhelming for simpler tasks. A basic query about cedar tree locations in America generated an extensive 800-word response covering every cedar variety, whilst GPT-4o provided a concise three-sentence answer that addressed the core question effectively.

The following comparison illustrates key differences between the models:

Feature              GPT-4o          o1
Response Speed       Near-instant    12+ seconds
Cost Per Query       Standard        4x higher
Reasoning Depth      Surface-level   Multi-step analysis
Multimodal Support   Yes             No
Tool Integration     Extensive       Limited

Managing Expectations in Asia's AI Landscape

The initial hype surrounding o1 led to speculation about artificial general intelligence (AGI) breakthroughs. However, OpenAI CEO Sam Altman clarified that o1 represents incremental progress rather than a leap toward AGI, tempering market expectations significantly.

Asia's AI adoption patterns suggest organisations prioritise practical value over cutting-edge features. The region's focus on enterprise AI deployment indicates that cost-effectiveness often trumps advanced capabilities for most business applications.

The following factors determine o1's suitability for different use cases:

  • Complex analytical tasks requiring step-by-step reasoning benefit most from o1's approach
  • Simple queries and routine tasks remain better suited to GPT-4o's speed and efficiency
  • Budget-conscious organisations may find the 4x cost increase difficult to justify for marginal improvements
  • Businesses requiring multimodal capabilities must continue relying on GPT-4o or alternative solutions
  • Time-sensitive applications cannot accommodate o1's deliberate processing delays
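Those criteria can be folded into a simple routing policy. The model names are real, but the task fields and thresholds below are illustrative assumptions, not an official API:

```python
# Hypothetical model router based on the suitability factors above.

def pick_model(task):
    """Route a request to o1 or GPT-4o based on its requirements.
    `task` keys (needs_multimodal, latency_budget_s,
    needs_deep_reasoning) are invented for this sketch."""
    if task["needs_multimodal"]:
        return "gpt-4o"        # o1 lacks image and tool support
    if task["latency_budget_s"] < 15:
        return "gpt-4o"        # o1's thinking phase takes 12+ seconds
    if task["needs_deep_reasoning"]:
        return "o1"            # the one case that justifies the premium
    return "gpt-4o"            # default to the cheaper, faster model

# A time-sensitive chatbot reply stays on GPT-4o even if it is complex:
print(pick_model({"needs_multimodal": False,
                  "latency_budget_s": 5,
                  "needs_deep_reasoning": True}))  # prints "gpt-4o"
```

The design choice mirrors the article's advice: default to the efficient model and escalate to o1 only when a task clears every gate.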

Regional tech leaders are expanding their AI presence across Asia, but adoption decisions increasingly focus on demonstrable return on investment rather than technological novelty.

Industry Reactions and Future Implications

AI industry professionals express mixed reactions to o1's capabilities and positioning. The model's foundation builds on established techniques, with Google DeepMind's AlphaGo applying similar multi-step reasoning approaches to game-playing scenarios as far back as 2016.

"The hype sort of grew out of OpenAI's control," observes Rohan Pandey, research engineer at AI startup ReWorkd. "Everybody is waiting for a step function change for capabilities, and it is unclear that this represents that," adds Mike Conover, Brightwave CEO.

The broader implications for Asia's AI development remain significant. As AI transforms traditional jobs across the region, organisations must balance advanced capabilities against practical constraints including cost, speed, and integration requirements.

Andy Harrison, former Google employee and S32 venture firm CEO, points out that o1's approach reignites fundamental debates about AI development paths. One camp advocates for automated workflows through agentic processes, whilst another believes generalised intelligence and reasoning capabilities will eventually eliminate workflow requirements entirely.

What makes o1 different from GPT-4o?

o1 uses multi-step reasoning to break complex problems into smaller parts, taking time to "think" before responding. This makes it better at complex analysis but slower and more expensive than GPT-4o.

Is o1 worth the higher cost for businesses?

o1 provides value for complex analytical tasks requiring detailed reasoning, but most routine business queries remain better suited to GPT-4o's speed and cost-effectiveness.

Does o1 represent a breakthrough toward AGI?

OpenAI CEO Sam Altman clarified that o1 is not AGI and remains flawed and limited, representing incremental rather than revolutionary progress.

Can o1 handle images and other media like GPT-4o?

No, o1 currently lacks the multimodal capabilities and tool integrations that make GPT-4o versatile for diverse content types and applications.

How should Asian businesses approach o1 adoption?

Asian organisations should evaluate o1 for specific complex reasoning tasks whilst maintaining GPT-4o for general-purpose applications, considering the significant cost and speed differences.

The AIinASIA View: OpenAI's o1 represents sophisticated engineering rather than a revolutionary breakthrough. Whilst its multi-step reasoning capabilities offer genuine value for complex analytical tasks, the model's limitations and premium pricing position it as a specialised tool rather than a universal upgrade. Asian organisations should approach o1 strategically, deploying it selectively for tasks that genuinely require deep reasoning whilst continuing to rely on more efficient models for routine applications. The real test isn't whether o1 can think, but whether businesses can afford to let it.

As AI capabilities continue advancing across Asia, the balance between sophistication and practicality becomes increasingly critical. The broader AI landscape in the region suggests that sustainable adoption depends more on demonstrable business value than technological impressiveness.

What's your experience with OpenAI's o1 model, and do you think the premium pricing justifies its advanced reasoning capabilities for your specific use cases? Drop your take in the comments below.



This article is part of the Global AI Policy Landscape learning path.


Latest Comments (6)

Dr. Farah Ali @drfahira · 1 January 2026

just catching up on this o1 discussion and it makes me wonder, if the model is so expensive due to these "reasoning tokens" and best for complex problems, how does that impact accessibility for researchers or smaller organizations in regions with fewer resources? this seems like it could exacerbate existing digital divides rather than closing them.

Benjamin Ng @benng · 19 November 2024

this "thinking" step and associated "reasoning tokens" for o1 is really interesting from a cost perspective. we're building an LLM tutor that uses a similar chain-of-thought style for explaining complex concepts, breaking it down for the student. if openai is charging for those intermediate steps, it makes me wonder about the cost efficiency of trying to replicate that with other models vs just using o1 directly. still experimenting but need to price this out properly.

Hye-jin Choi @hyejinc · 22 October 2024

i agree with Dr. Ravid Shwartz Ziv's assessment. from what we've seen in the Korean AI strategy documents, the focus is increasingly on demonstrable, significant advancements. small incremental improvements, especially with a higher cost per inference like o1's "reasoning tokens," really limit practical application and scaling for public or industrial use, which is critical for APAC's AI adoption.

Natalie Okafor @natalieok · 15 October 2024

we're actually looking into o1 for some clinical trial design work, where the step-by-step reasoning could really help with protocol adherence and ensuring patient safety. the cost per "reasoning token" is a concern though.

N. @anon_reader · 15 October 2024

The "reasoning tokens" costing more adds another layer to managing LLM budgets. definitely something to factor in if this concept starts appearing in other models this year. we're already seeing the cost creep on some of the bigger projects.

Harry Wilson @harryw · 8 October 2024

it's interesting how they're monetizing "reasoning tokens". almost like a computational complexity cost for the breakdown of problems. makes sense from a business perspective given the multi-step reasoning, but it does push the user to be very deliberate with their prompts, doesn't it?
