Chinese AI firm DeepSeek has introduced a significant upgrade to its chatbot, integrating an "interleaved thinking" feature that allows for more sophisticated multi-step research.
This innovation, rolled out across its web and mobile platforms in late November 2025, represents a departure from previous models where the AI would complete its internal processing before generating a response.
Now, the chatbot can assess information dynamically as it works, making decisions mid-task such as validating a webpage's credibility before seeking further sources to confirm its findings.
This capability, powered by DeepSeek V3.2, released in early December 2025, marks the company's first model to embed thinking directly into its tool use. According to the technical documentation, this avoids the inefficiency of re-evaluating the entire problem before each subsequent tool call. The advance places DeepSeek in direct competition with global players also pushing the boundaries of AI reasoning, much like the recent Qwen launches taking on Google's Nano Banana.
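To make the idea concrete in general terms, here is a minimal sketch of an interleaved agentic loop in Python. It is not DeepSeek's implementation or API: `interleaved_research`, `call_model` and `run_tool` are hypothetical placeholders for whatever model endpoint and tools an application would actually wire in.

```python
# Hypothetical sketch of an interleaved-thinking loop; not DeepSeek's code.
# `call_model` and `run_tool` are placeholders supplied by the application.

def interleaved_research(question, call_model, run_tool, max_steps=8):
    """Alternate short bursts of model reasoning with tool calls.

    The running `history` keeps earlier thinking and tool results in
    context, so each new decision builds on previous steps instead of
    re-deriving the whole problem before every tool call.
    """
    history = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        step = call_model(history)  # assumed to return thinking, plus either
                                    # a tool request or a final answer
        history.append({"role": "assistant",
                        "thinking": step["thinking"],
                        "tool_call": step.get("tool_call")})
        if step.get("tool_call") is None:      # model judged it has enough evidence
            return step["answer"]
        result = run_tool(step["tool_call"])   # e.g. fetch a page, run a search
        history.append({"role": "tool", "content": result})
    return "No confident answer within the step budget."
```

In this pattern, the conclusion of each short burst of thinking, for example "this page looks unreliable, find a second source", stays in the history, so the next call picks up from that decision rather than starting the whole problem over.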
DeepSeek's rapid growth underscores its market impact. The platform's monthly active users surged by 90% in December 2025 to nearly 131.5 million, up from 33.7 million in January and 96.88 million in April of the same year.
While simple queries don't automatically trigger this deep research mode, complex prompts activate a transparent, step-by-step display of the interleaved thinking process. This allows users to observe the AI's reasoning in action.
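For a concrete sense of what such a display might contain, the short snippet below prints an invented trace. The step wording, the tool names (`web_search`, `open_page`) and the formatting are illustrative assumptions, not DeepSeek's actual interface output.

```python
# Purely illustrative trace of interleaved thinking as it might be shown
# to a user; the steps and tool names are invented for this example.
trace = [
    ("thinking", "The question needs recent figures; search before answering."),
    ("tool",     "web_search('DeepSeek monthly active users December 2025')"),
    ("thinking", "Top result is an unfamiliar blog; verify with a second source."),
    ("tool",     "open_page('https://example.com/analytics-report')"),
    ("thinking", "Two sources agree; draft the answer."),
]
for kind, text in trace:
    print(f"[{kind:8}] {text}")
```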
DeepSeek's rivals have been developing similar functionality. OpenAI's Deep Research, launched in February 2025 and powered by its o3 model, performs multi-step web research and generates comprehensive reports. Similarly, Anthropic's Claude 4 models, introduced in May 2025, support interleaved thinking, allowing the model to reflect on tool results before deciding on its next course of action.
This highlights the ongoing race to enhance AI capabilities, a theme often discussed in our 3-Before-9 series.
Remarkably, DeepSeek V3.2 demonstrates reasoning performance on par with newer models while producing shorter outputs at lower computational cost. The company says the model achieved gold-level results in prestigious competitions, including the International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI).
The startup has garnered considerable attention for developing AI models that rival the capabilities of their US counterparts, often at significantly lower training costs. DeepSeek continues to innovate: a technical paper released on 31 December 2025 introduced a new training architecture called Manifold-Constrained Hyper-Connections, signaling the company's ongoing commitment to advancing foundation models. For further reading on AI's broader impact and development, the National Institute of Standards and Technology (NIST) AI Risk Management Framework offers guidance on responsible AI deployment.
What are your thoughts on AI models that show their reasoning process? Share your opinions in the comments below.

Latest Comments (2)
Wah, 90% user surge is mad impressive! Wondering if "interleaved thinking" is actually helping with complex problem-solving, or just making it quicker?
Wow, 90% user surge? That's mad, but I totally get it. I've been dabbling with DeepSeek for a while now, and the improvements lately are undeniable. That "interleaved thinking" isn't just some techy jargon; you really feel it in the responses. It’s like the AI isn't just spitting out info, it's actually *processing* things in a more… holistic way, for lack of a better term. Makes brainstorming so much smoother, especially when you’re trying to connect seemingly disparate ideas. It’s a definite game-changer, and I reckon more folks over here in the Philippines will be jumping on board too once they properly get the hang of it. Definitely a solid upgrade.