
AI's Secret Revolution: Trends You Can't Miss

Let's discuss the latest fascinating AI trends. These are subtly, yet profoundly, reshaping the AI landscape, making it more accessible, efficient, and ethical.

Anonymous · 7 min read

AI Snapshot

The TL;DR: what matters, fast.

Fine-tuning smaller, open-source AI models for specific tasks is becoming widespread, offering near large-model performance at significantly reduced costs.

Key developments like Andrej Karpathy's nanochat in October 2025 demonstrate how easily powerful AI models can now be trained.

This trend democratizes AI, enabling small businesses and individuals to innovate with substantial cost savings and fostering a shift away from reliance on major tech companies.

Who should pay attention: AI developers | Start-ups | Open-source enthusiasts

What changes next: The accessibility and affordability of advanced AI models will continue to improve rapidly.

1. Open-Source Fine-Tuning of Specialised Models: AI for Everyone

So, what exactly are we on about here? Well, it's all about taking those smaller, open-source AI models – not the huge, general-purpose beasts like GPT-5 – and tweaking them for very particular jobs. Think of it like this: instead of trying to make one giant tool that does everything okay, we're making lots of highly specialised tools that do one thing brilliantly.

These models, like Llama 3.1 or the rather exciting nanochat, can be customised for specific niches, whether that's sifting through legal documents or helping with medical diagnostics. The amazing bit is they're getting pretty close to the performance of those massive models, but at a fraction of the cost and with far less computing power needed.
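The trick that makes this kind of specialisation so cheap is low-rank adaptation (LoRA): rather than updating every weight in the model, you freeze the pretrained weights and train two tiny matrices whose product nudges each layer. Here's a minimal NumPy sketch of the idea, with the dimensions and rank chosen purely for illustration:

```python
import numpy as np

# LoRA-style low-rank adaptation: instead of updating the full weight
# matrix W (d_out x d_in), train two small matrices A (r x d_in) and
# B (d_out x r), so the adapted layer computes (W + B @ A) @ x.
rng = np.random.default_rng(0)
d_in, d_out, rank = 512, 512, 8

W = rng.standard_normal((d_out, d_in))    # frozen pretrained weights
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))               # zero-init: adapter starts as a no-op

x = rng.standard_normal(d_in)
base = W @ x
adapted = (W + B @ A) @ x

# With B zero-initialised, the adapted layer matches the original
# exactly until training moves A and B.
assert np.allclose(base, adapted)

# The cost saving: count trainable parameters.
full = W.size                 # full fine-tune: 262,144 params
lora = A.size + B.size        # LoRA:            8,192 params (~3%)
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.1%}")
```

The ~97% reduction in trainable parameters for this one layer is exactly why a single consumer graphics card can fine-tune models that would otherwise need a cluster.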

This really started gaining traction back in 2023 when Meta AI and Hugging Face began releasing their open-source goodies. But it's really picked up speed in mid-2025. A real 'aha!' moment was Andrej Karpathy's release of nanochat on 13th October 2025. He basically showed everyone how to train a ChatGPT-like model on a single graphics card in just a few hours, and for less than £100! That's a game-changer, isn't it? It's blown the doors open for so many more people to get involved.

Who's driving this?

Hugging Face, with their brilliant Transformers library, is a big player. Nvidia's also jumped in with their DGX Spark, a desktop supercomputer that came out on 15th October 2025, perfect for smaller teams. But it's not just the big names; you've got loads of indie developers on platforms like GitHub and X (formerly Twitter), plus startups in Silicon Valley. Folks like Karpathy are practically evangelising about making AI accessible to everyone.

You'll find these communities thriving in places like Silicon Valley, Berlin, and Bangalore. Universities, like Stanford, and tech hubs in Shenzhen are also getting in on the act, making the most of more affordable hardware and ways of working that don't always rely on huge cloud services.

Why does this matter so much? Simply put, it's democratising AI. It means small businesses and even individual entrepreneurs can get stuck in without having to fork out loads of cash to huge cloud providers like AWS. We're talking about cutting costs by as much as 80% compared to using proprietary models. This encourages innovation in areas that might have been overlooked before, like developing AI for diagnosing rare diseases.

It's also reducing our reliance on the big tech giants. Experts reckon that by 2026, around 30% of enterprise AI will come from these kinds of fine-tuned models, creating a market worth about £50 billion. Of course, there are risks, like models being misused or quality varying, so community governance is crucial. But make no mistake, this trend is quietly levelling the AI playing field for a whole new generation of creators.

2. Decentralised AI Infrastructure: Spreading the Load

Now, let's talk about something truly innovative: decentralised AI infrastructure. Imagine distributing the heavy lifting of AI – training and running models – across a vast network of computers, often using blockchain technology, instead of relying on a few massive, centralised cloud providers. It's about tapping into all those underused graphics cards dotted around the globe, which slashes costs and makes the whole system much more resilient if one data centre decides to go on the blink.
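To make the "spreading the load" idea concrete, here's a toy sketch of how a decentralised compute pool might route jobs: nodes register with a price, jobs go to the cheapest healthy node, and a node failure just reroutes traffic rather than taking the whole system down. The node names and prices are invented for illustration; real networks like Akash or Render layer blockchain-based payments and verification on top of this basic pattern.

```python
# Toy decentralised compute pool: jobs are dispatched to the cheapest
# healthy node, so one failed data centre doesn't stop the whole system.
class ComputePool:
    def __init__(self):
        self.nodes = {}  # name -> {"price": cost per job, "healthy": bool}

    def register(self, name, price):
        self.nodes[name] = {"price": price, "healthy": True}

    def mark_down(self, name):
        self.nodes[name]["healthy"] = False

    def dispatch(self, job):
        live = [(info["price"], name)
                for name, info in self.nodes.items() if info["healthy"]]
        if not live:
            raise RuntimeError("no healthy nodes available")
        price, name = min(live)  # cheapest healthy node wins the job
        return name, price

pool = ComputePool()
pool.register("gpu-berlin", 0.04)
pool.register("gpu-singapore", 0.03)
pool.register("gpu-miami", 0.05)

print(pool.dispatch("job-1"))    # routed to the cheapest node
pool.mark_down("gpu-singapore")  # one node goes offline...
print(pool.dispatch("job-2"))    # ...and the pool reroutes automatically
```

The resilience the article describes falls out of the `dispatch` loop: no single node is a point of failure, and pricing competition between nodes is what drives the cost savings.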

This idea first started popping up around 2022 with projects like Bittensor. However, it really shot up in popularity in 2025, partly because energy costs rose significantly, and we saw how vulnerable we were to cloud outages. In October 2025 alone, there was a whopping 40% increase in startups joining networks like Akash and Render. The energy crisis and the appeal of cheaper, decentralised computing have really pushed this forward.

Who's leading the charge?

Bittensor and Akash Network are certainly at the forefront, and Filecoin is integrating storage solutions into the mix. You've got startups like Golem, and a lot of grass-roots crypto communities on X, making waves. Render, for instance, reported having 25,000 graphics cards online by 20th October 2025. Plus, independent developers and those fascinating decentralised autonomous organisations (DAOs) are all chipping in.

Geographically, Singapore, Dubai, and US crypto hubs like Miami are becoming hotbeds for this due to their friendly regulations. We're also seeing growth in European blockchain clusters, such as Zug in Switzerland, and even in rural parts of Asia where decentralised compute farms are taking advantage of lower energy costs.

Why is this so important? Well, those centralised cloud services are incredibly power-hungry, reportedly consuming 2% of global energy, and outages can cost billions every year. Decentralised infrastructure can cut inference costs by 50% and significantly reduces the risk of a single point of failure, just like we saw with an AWS outage in September 2025.

It also means more people globally can access AI computing, which is brilliant for regions with less developed infrastructure. Naturally, there are challenges like latency and security that need to be ironed out with better blockchain protocols. But if this continues, it could shift 20% of AI workloads away from the big tech companies by 2030, creating a £100 billion decentralised economy and much more robust AI ecosystems. For more insights into emerging trends, check out APAC AI in 2026: 4 Trends You Need to Know.

3. Agentic Systems Entering Production: AI as a Smart Collaborator

So, what are "agentic systems"? Imagine AI that's not just answering questions or following simple commands, but actually thinking, planning, and carrying out complex tasks with very little human help. We're talking about AI that can automate entire supply chains or even debug computer code. These aren't just fancy chatbots; they integrate various tools, remember past interactions, and make decisions within a framework that allows them to handle real-world workflows.
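The core of every agentic system is a loop: the model proposes an action, a tool executes it, the observation goes back into the model's context, and this repeats until the model decides the task is done. Here's a stripped-down sketch of that loop; the "model" is a hard-coded stub standing in for what would be an LLM API call in production, and the tools and task are invented for illustration:

```python
# Minimal agent loop: plan -> act -> observe -> repeat until "finish".
def fake_model(goal, history):
    # Stub planner standing in for an LLM call: look up a fact,
    # then compute with it, then finish with the result.
    if not history:
        return ("lookup", "widget price")
    if history[-1][0] == "lookup":
        return ("calculate", "12.5 * 3")
    return ("finish", history[-1][1])

TOOLS = {
    "lookup": lambda query: "12.5",  # stub knowledge base
    "calculate": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_agent(goal, model, max_steps=5):
    history = []  # the agent's memory of (action, observation) pairs
    for _ in range(max_steps):
        action, arg = model(goal, history)
        if action == "finish":
            return arg
        observation = TOOLS[action](arg)
        history.append((action, observation))
    raise RuntimeError("step budget exhausted")  # a basic safety rail

print(run_agent("total cost of 3 widgets", fake_model))  # -> 37.5
```

Note the two safety-relevant details even this toy version needs: a hard cap on steps, and a restricted execution environment for the calculator tool. Production frameworks like the agent kits mentioned below add formal evaluations on top of exactly these kinds of guardrails.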

While prototypes started appearing in 2024, it's in 2025 that we've really seen these systems become ready for proper use in businesses. October has been a pivotal month, with Anthropic launching their "Skills for Claude" toolkit and OpenAI introducing their own agent kit at their Dev Day. Crucially, they're building in important safety features, like formal evaluations, to stop things from going wrong. You can learn more about how Claude brings memory to teams at work.

Who's behind all this?

Anthropic and OpenAI are definitely leading the pack. Salesforce is already using agents to automate their customer relationship management (CRM). Then there are startups like Adept, and even xAI is developing its own agent frameworks. Big companies in logistics, like Maersk, and IT service firms such as Accenture, are quickly becoming early adopters, as we've seen from various posts on X in October 2025.

You'll find major deployments happening in New York, London, and San Francisco. Singapore is also emerging as a key Asian hub, thanks to its AI-friendly government policies and partnerships like its recent team-up with Microsoft for AI growth. And there are pilot programmes running in places like Bengaluru's tech parks too.

Why does this all matter?

Agentic systems have the potential to boost productivity by an astonishing £1 trillion by 2030. They could automate about 40% of repetitive tasks in sectors like finance and logistics. The fact that they're being rolled out quite carefully and quietly means ethics are being considered; Anthropic's safety checks, for example, caught 95% of tricky edge-case errors during their October trials. You can read more about the importance of ethical considerations in AI in this Nature article on responsible AI development.

Of course, there are always risks, such as over-automation or unintended consequences – imagine an AI messing up a supply chain! So, rigorous oversight is essential. This trend signals a big shift.


Latest Comments (4)

Somchai Wongsa (@somchaiw) · 30 November 2025

The mention of customising models like Llama 3.1 for specific tasks, such as legal document sifting, aligns with Thailand's digital strategy under the ASEAN Digital Masterplan 2025. We are actively exploring how these fine-tuned, open-source AI solutions can enhance public sector efficiency, particularly in areas requiring nuanced language processing for regulatory compliance.

Yuki Tanaka (@yukit) · 24 November 2025

I appreciate the discussion on fine-tuning specialized models. While the cost figure for training a ChatGPT-like model on a single GPU for under £100 sounds appealing, I wonder if this accounts for the substantial data acquisition and pre-processing costs which often exceed compute for competitive models, particularly for legal or medical domains. It's a critical component often overlooked.

Natalie Okafor (@natalieok) · 23 November 2025

With nanochat and similar models, I'm curious about the validation pipelines for ensuring ethical application in medical diagnostics. Patient safety is paramount.

Arjun Mehta (@arjunm) · 11 November 2025

Karpathy's nanochat demo was HUGE. We've been looking at how to port something similar to our internal dev environment. The GPU setup for that kind of fine-tuning, even with Llama 3.1 sized models, is still the bottleneck for us. The DGX Spark is interesting but not really scalable for multiple teams. Honestly, getting infra costs down for distributed training on smaller models is where the real work is.
