
AI in ASIA

Google vs. OpenAI: The Race to Master AI Reasoning

The race between Google and OpenAI to develop AI models that can reason like humans is intensifying. This article explores the progress made by both companies, the role of chain-of-thought prompting, and the future of AI reasoning.

Intelligence Desk · 3 min read

The Race to Master AI Reasoning

In the fast-paced world of artificial intelligence, the competition between tech giants is heating up. Google and OpenAI are at the forefront of this race, both aiming to develop AI models that can reason: working through complex problems step by step, much as a person would.

Google's Progress in AI Reasoning

Google has made significant strides in developing AI models that can reason. According to Bloomberg, teams at Google have been working on software that enables AI models to solve multistep problems using a technique called chain-of-thought prompting. This method allows AI to break down complex tasks into smaller, manageable steps, much like a human would.

Chain-of-Thought Prompting

Chain-of-thought prompting is a powerful technique that enhances the reasoning abilities of large language models (LLMs). By generating a series of intermediate reasoning steps, AI models can tackle more complex mathematical and programming problems. The technique makes models more capable and accurate, although they may take longer to respond to queries. For a deeper dive into how AI processes information, consider our article on Running Out of Data: The Strange Problem Behind AI's Next Bottleneck.
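The idea can be sketched in a few lines of code. This is an illustrative example only: the prompt wording and the example question are our own assumptions, not Google's or OpenAI's actual implementation, and in practice the prompt string would be sent to whichever LLM API you use.

```python
# Illustrative sketch of chain-of-thought prompting: the same question
# framed as a direct prompt versus a prompt that asks the model to
# reason through intermediate steps first.

def build_direct_prompt(question: str) -> str:
    """A standard prompt: ask for the answer directly."""
    return f"Q: {question}\nA:"

def build_cot_prompt(question: str) -> str:
    """A chain-of-thought prompt: ask the model to reason in steps
    before stating its final answer."""
    return (
        f"Q: {question}\n"
        "A: Let's think step by step. Break the problem into smaller "
        "parts, solve each part in turn, then state the final answer."
    )

# A hypothetical multistep question of the kind the article describes.
question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"
print(build_direct_prompt(question))
print()
print(build_cot_prompt(question))
```

The trade-off the article mentions is visible here: the chain-of-thought version invites the model to generate more tokens (the intermediate steps) before the answer, which improves accuracy on multistep problems at the cost of a slower response.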

Google's Gemini Chatbot

Google's Gemini chatbot is a key player in this race. In July, Google introduced the 1.5 Flash model, which is designed to be faster and more cost-efficient. This upgrade aims to improve Gemini's reasoning and image-processing abilities, making it more responsive and helpful to users. You can explore how Google is integrating AI into other products in our article Google Photos' Conversational Editing Comes To All Android Devices.

OpenAI's Advancements

OpenAI is also making waves in the AI reasoning space. The company's new o1 model, internally known as Strawberry, was released in September. This model is designed to spend more time thinking before responding, allowing it to reason through more complex tasks and problems in science, coding, and math. For a broader view of AI advancements in the region, check out APAC AI in 2026: 4 Trends You Need To Know.

OpenAI's Chain-of-Thought Prompting

Like Google, OpenAI is using chain-of-thought prompting to enhance its AI models. The o1 model, available in preview in ChatGPT and through the company's API, showcases advanced reasoning capabilities. Although it currently lacks some features, such as web browsing and file uploads, the model is a significant step forward in AI reasoning. This ongoing development also reflects how definitions of artificial general intelligence continue to evolve.

The Competitive Landscape

The rivalry between Google and OpenAI is intense. Initially, some employees in Google's DeepMind unit were concerned that the company had fallen behind OpenAI. However, as Google unveiled its own competitors to OpenAI's products, those concerns eased. Both companies are pushing the boundaries of what AI can achieve, driving innovation in the field. This competitive drive is a major factor in the rapid progress of AI, as discussed in this MIT Technology Review article on the AI arms race.

The Future of AI Reasoning

The race to master AI reasoning is just beginning. As Google and OpenAI continue to develop more advanced models, the potential applications of AI in various fields are expanding rapidly. From solving complex mathematical problems to enhancing coding capabilities, AI reasoning is set to revolutionise numerous industries.

Comment and Share:

What do you think the future holds for AI reasoning? How do you see these advancements impacting your daily life? Share your thoughts and experiences in the comments below, and don't forget to Subscribe to our newsletter for updates on AI and AGI developments.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


Latest Comments (6)

Priya Ramasamy@priyaram
AI
31 January 2026

i'm wondering how these chain-of-thought prompting techniques scale locally. we always have to consider the latency here in malaysia, especially in areas with less robust infrastructure. will the "longer for them to respond" outweigh the gains in reasoning for everyday applications?

Carlo Ramos@carlor
AI
3 January 2025

The part about Gemini's 1.5 Flash model being faster and more cost-efficient... that's the kind of thing that makes you wonder about the long-term impact on freelance developers like me out here. If the models get too good and too cheap, what's left for us? Just thinking out loud as someone who does this for a living.

Crystal@crystalwrites
AI
27 December 2024

This chain-of-thought prompting sounds really useful for making models actually think through problems! I'm already seeing similar approaches in how people structure prompts for LLMs today.

Natalie Okafor@natalieok
AI
22 November 2024

the chain-of-thought prompting for multistep problems is interesting. we're seeing similar approaches considered for diagnostic AI where explainability is crucial. the longer response times are a trade-off we'd need to evaluate carefully, especially when patient safety is on the line.

Maggie Chan@maggiec
AI
8 November 2024

The chain-of-thought prompting discussion really hits home for us. We've been trying to implement similar step-by-step reasoning in our compliance automation tools, especially for nuanced regulations that require understanding context beyond just keywords. It's a constant battle between speed and accuracy. The "may take longer for them to respond" part is so true - clients want instant answers but sometimes the complexity just doesn't allow for it. We've had to manage expectations around that, explaining why a more robust, "thought-out" response is ultimately better. It's not just about getting an answer, but getting the RIGHT answer.

Priya Ramasamy@priyaram
AI
1 November 2024

Chain-of-thought prompting sounds great in theory for complex problems but I'm wondering about the "longer to respond" part. For telco solutions here in Malaysia, speed is often critical for customer experience. A slower, more 'reasoned' response might not actually be better than a quick, good-enough one in many real-world applications we face.
