
The Truth About OpenAI's o1: Is It Worth the Hype?

Explore the capabilities and limitations of OpenAI's o1 model, its impact on the AI community, and its potential role in the future of AI in Asia.

Intelligence Desk · 4 min read

- OpenAI's o1 model excels at complex reasoning but is more expensive and slower than GPT-4o.
- OpenAI's o1 is best suited for big, complicated tasks rather than simpler questions.
- The AI community has mixed feelings about o1's capabilities and its high cost.

The Arrival of OpenAI's o1: A Step Forward or Back?

OpenAI recently released its new o1 models, nicknamed "Strawberry," which pause to "think" before answering. While there's been much anticipation, the model has received mixed reviews. Compared to GPT-4o, o1 is better at reasoning and complex questions but is roughly four times more expensive. It also lacks the tools, multimodal capabilities, and speed that made GPT-4o impressive. OpenAI even admits that GPT-4o is still the best option for most prompts.

Ravid Shwartz Ziv, an NYU professor studying AI models, shares, "It’s impressive, but I think the improvement is not very significant. It’s better at certain problems, but you don’t have this across-the-board improvement."

Thinking Through Big Ideas

OpenAI o1 stands out because it breaks down big problems into small steps, attempting to identify when it gets a step right or wrong. This "multi-step reasoning" isn't new but hasn't been practical until recently. Kian Katanforoosh, Workera CEO and Stanford adjunct lecturer, explains, "If you can train a reinforcement learning algorithm paired with some of the language model techniques that OpenAI has, you can technically create step-by-step thinking and allow the AI model to walk backwards from big ideas you’re trying to work through."
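The decompose–solve–check loop described above can be sketched in a few lines. This is a conceptual illustration only: `decompose`, `solve_step`, and `check_step` are hypothetical stand-ins, since OpenAI has not published o1's internals.

```python
# Conceptual sketch of multi-step reasoning with self-checking.
# The callables are hypothetical stand-ins, not OpenAI's implementation.
def reason(problem, decompose, solve_step, check_step, max_retries=2):
    steps = decompose(problem)  # break the big problem into small steps
    results = []
    for step in steps:
        for _ in range(max_retries + 1):
            candidate = solve_step(step, results)
            if check_step(step, candidate):  # try to detect a wrong step
                results.append(candidate)
                break
        else:
            results.append(candidate)  # keep best effort after retries
    return results
```

The point of the structure is that each intermediate step is produced and vetted separately, rather than the model emitting one monolithic answer.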

However, o1 is pricey. It charges for "reasoning tokens," which are the small steps the model breaks big problems into. This makes it crucial to use o1 wisely to avoid high costs.
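To see why reasoning tokens matter for budgets: at launch, o1-preview was priced around US$15 per million input tokens and US$60 per million output tokens, with the hidden reasoning tokens billed at the output rate. A rough back-of-the-envelope helper (prices are illustrative and may change):

```python
def o1_request_cost(input_tokens, reasoning_tokens, visible_output_tokens,
                    input_price_per_m=15.00, output_price_per_m=60.00):
    """Estimate USD cost of one request; reasoning tokens bill as output."""
    billed_output = reasoning_tokens + visible_output_tokens
    return (input_tokens * input_price_per_m
            + billed_output * output_price_per_m) / 1_000_000

# A prompt with heavy hidden reasoning can cost several times the
# visible answer alone: 500 in, 4,000 reasoning, 800 visible out
cost = o1_request_cost(500, 4000, 800)  # ≈ $0.2955
```

Because the reasoning tokens are invisible in the response, the bill can be dominated by work the user never sees, which is why routing simple prompts to a cheaper model matters.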

OpenAI o1 in Action

To test o1, I asked the o1-preview model in ChatGPT to help plan Thanksgiving dinner for 11 people. After 12 seconds of "thinking," it provided a detailed response, breaking down its reasoning at each step. It suggested prioritising oven space and even proposed renting a portable oven. While it outperformed GPT-4o here, it also tended to over-engineer answers to simpler questions.

For instance, when asked where to find cedar trees in America, o1 delivered an 800+ word response, outlining every variation of cedar tree. GPT-4o provided a concise, three-sentence answer.

Tempering Expectations

The hype around o1 started in November 2023, leading some to speculate that it was a form of AGI. However, OpenAI CEO Sam Altman clarified that o1 is not AGI and is still flawed and limited. The AI community is coming to terms with a less exciting launch than expected.

Rohan Pandey, a research engineer with AI startup ReWorkd, notes, "The hype sort of grew out of OpenAI’s control." Mike Conover, Brightwave CEO, adds, "Everybody is waiting for a step function change for capabilities, and it is unclear that this represents that."

The Value of OpenAI o1

The principles behind o1 date back years. Google used similar techniques in 2016 to create AlphaGo. Andy Harrison, former Googler and CEO of the venture firm S32, points out that this brings up an age-old debate in the AI world. One camp believes in automating workflows through an agentic process, while the other thinks generalized intelligence and reasoning would eliminate the need for workflows. For more on this, you might find our article on Will AI Agents Steal Your Job Or Help You Do It Better? insightful.

Katanforoosh sees o1 as a tool to question your thinking on big decisions. For example, it can help assess a data scientist's skills in a 30-minute interview. However, the question remains whether this helpful tool is worth the hefty price tag.

The Future of AI in Asia

The release of o1 raises questions about the future of AI, particularly in Asia. As AI models become more capable, they also become more expensive. The trade-off between cost and capability will shape how AI is adopted and used in the region. This is a recurring theme, as explored in APAC AI in 2026: 4 Trends You Need To Know. Many Asian economies are actively exploring AI integration; for instance, Singapore wants its workforce to be AI bilinguals. The economic impact of AI in the region is also significant, with AI set to add nearly US$1 trillion to Southeast Asia's economy by 2030.

Comment and Share:

What are your thoughts on OpenAI's o1 model? Have you tried it yet? Share your experiences and thoughts on the future of AI and AGI in the comments below. Don't forget to Subscribe to our newsletter for updates on AI and AGI developments. You can also delve deeper into the definitions of AGI by reading Deliberating on the Many Definitions of Artificial General Intelligence. For a foundational understanding of the underlying principles, you might want to review the original AlphaGo paper.


Latest Comments (6)

Dr. Farah Ali (@drfahira) · 1 January 2026

just catching up on this o1 discussion and it makes me wonder, if the model is so expensive due to these "reasoning tokens" and best for complex problems, how does that impact accessibility for researchers or smaller organizations in regions with fewer resources? this seems like it could exacerbate existing digital divides rather than closing them.

Benjamin Ng (@benng) · 19 November 2024

this "thinking" step and associated "reasoning tokens" for o1 is really interesting from a cost perspective. we're building an LLM tutor that uses a similar chain-of-thought style for explaining complex concepts, breaking it down for the student. if openai is charging for those intermediate steps, it makes me wonder about the cost efficiency of trying to replicate that with other models vs just using o1 directly. still experimenting but need to price this out properly.

Hye-jin Choi (@hyejinc) · 22 October 2024

i agree with Dr. Ravid Shwartz Ziv's assessment. from what we've seen in the Korean AI strategy documents, the focus is increasingly on demonstrable, significant advancements. small incremental improvements, especially with a higher cost per inference like o1's "reasoning tokens," really limit practical application and scaling for public or industrial use, which is critical for APAC's AI adoption.

Natalie Okafor (@natalieok) · 15 October 2024

we're actually looking into o1 for some clinical trial design work, where the step-by-step reasoning could really help with protocol adherence and ensuring patient safety. the cost per "reasoning token" is a concern though.

N. (@anon_reader) · 15 October 2024

The "reasoning tokens" costing more adds another layer to managing LLM budgets. definitely something to factor in if this concept starts appearing in other models this year. we're already seeing the cost creep on some of the bigger projects.

Harry Wilson (@harryw) · 8 October 2024

it's interesting how they're monetizing "reasoning tokens". almost like a computational complexity cost for the breakdown of problems. makes sense from a business perspective given the multi-step reasoning, but it does push the user to be very deliberate with their prompts, doesn't it?
