It's clear that Artificial Intelligence (AI) is moving beyond the experimental phase and really hitting its stride in businesses, especially across the Asia-Pacific region. We're talking about agentic AI here – those clever, autonomous systems that can actually execute tasks on their own. According to IDC, a whopping 70% of organisations in the region reckon agentic AI will shake up their business models within the next 18 months.
With increasing pressure on costs, fierce competition, and more regulatory hurdles to jump, businesses aren't just dabbling in AI anymore. They're looking to operationalise it, moving from "let's give AI a go" to "how can AI drive measurable growth?". This means leaders need to get serious about how they implement and manage these intelligent agents.
Keeping an Eye on Everything: Full-Lifecycle Visibility
One of the biggest headaches in early AI rollouts was a lack of central visibility. Imagine lots of little AI agents running around your organisation in development, cloud operations, cybersecurity, and more, but no one really knows what they're all doing, how much they're costing, or if they're even still needed. It's a recipe for chaos, or at least a big bill!
To stop this from becoming a major operational or financial headache, you'll want to:
- Create an "agent registry" or a discovery platform. This should be a central list of all your autonomous agents, detailing who owns them, what their purpose is, and their current operational status.
- Track resource consumption religiously. This means keeping tabs on compute power, storage, and data egress, and then linking these costs directly to your business key performance indicators (KPIs).
- Set up clear dashboards. These should flag any 'orphan' agents (the ones nobody owns or looks after), unusual cost spikes, or agents that aren't providing any clear business value.
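To make the registry idea concrete, here's a minimal sketch of what such a record and its dashboard checks might look like. All field names, the cost threshold, and the flag categories are illustrative assumptions, not any particular platform's schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Agent:
    name: str
    owner: Optional[str]       # None marks an 'orphan' agent nobody looks after
    purpose: str
    status: str                # e.g. "active", "retired"
    monthly_cost: float        # compute + storage + data egress, in dollars
    linked_kpi: Optional[str]  # the business KPI this agent serves, if any

def registry_flags(agents, cost_threshold=5000.0):
    """Return the agents a dashboard should highlight for review."""
    flags = {"orphans": [], "cost_spikes": [], "no_kpi": []}
    for a in agents:
        if a.owner is None:
            flags["orphans"].append(a.name)
        if a.monthly_cost > cost_threshold:
            flags["cost_spikes"].append(a.name)
        if a.linked_kpi is None:
            flags["no_kpi"].append(a.name)
    return flags
```

The point of keeping this as structured data, rather than tribal knowledge, is that the three dashboard questions above (who's orphaned, what's spiking, what has no clear value) become one-line queries.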
Leaders also need to figure out whether their current public cloud, private cloud, or Software as a Service (SaaS) AI setups offer adequate agent visibility. If not, you might need a third-party solution to get that oversight. And, importantly, someone needs to be responsible! Who in the organisation is going to monitor these agents and their business impact? While major cloud providers and AI tools are starting to offer better management features, they often still operate in silos. Your goal should be interoperability and avoiding getting locked into one vendor.
Without this kind of visibility, you're just inviting cost creep and unmanaged risks. It's all about having the tools in place to decide who logs agents, who approves them, and who has the authority to switch off underperforming ones or those that pose a risk to the business.
Who's Who: Identity and Access Management for Agents
Our traditional identity and access management (IAM) systems are usually built around individual users and machine identities. But what happens when one autonomous agent delegates a task to another agent? They're interacting, making decisions, and triggering actions. Our existing identity frameworks can easily get confused here.
It's crucial to think about identity in terms of agentic AI activity. You'll need to:
- Extend your IAM models so that agents are treated as standalone entities. This means giving each agent its own roles, permissions, and audit logs.
- Ensure traceability. You need to know who invoked an agent, which other agents it called upon, what actions it took, and what data it accessed.
- Have robust revocation and lifecycle management. You must be able to disable agents quickly and effectively once they're no longer needed, or when the project or person behind them is gone. And test this capability regularly!
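The three bullets above can be sketched as one object: an agent treated as a first-class identity with its own roles, an audit trail recording who invoked it and what it touched, and a kill switch. This is a hypothetical illustration, not a real IAM product's API:

```python
import uuid
from datetime import datetime, timezone

class AgentIdentity:
    """An agent as a standalone IAM principal with its own audit trail."""

    def __init__(self, name, roles, created_by):
        self.agent_id = str(uuid.uuid4())
        self.name = name
        self.roles = set(roles)       # e.g. {"read:customer_records"}
        self.created_by = created_by  # the human or agent that provisioned it
        self.revoked = False
        self.audit_log = []

    def record(self, action, invoked_by, data_accessed=None):
        """Append a traceable entry: who invoked the agent, what it did, what data it accessed."""
        if self.revoked:
            raise PermissionError(f"agent {self.name} has been revoked")
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "invoked_by": invoked_by,
            "data_accessed": data_accessed,
        })

    def revoke(self):
        """Kill switch: once revoked, any further action fails immediately."""
        self.revoked = True
```

Because `invoked_by` can itself be another agent's ID, the audit log naturally captures agent-to-agent delegation chains, which is exactly where conventional user-centric IAM loses the thread.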
If you don't make these adjustments, you're leaving the door open to "shadow AI" risks – agents operating without proper oversight, which can be a real headache from a security and compliance perspective. Think of AI agents as "digital insiders" within your organisation, operating with the same privileges as the person who set them up. That's a lot of power!
Thinking About Security Teams
Security operations teams are already stretched thin, facing staff shortages and "alert fatigue" from dealing with countless security notifications. The last thing they need is to be trialling every single AI tool under the sun. Instead, organisations should pinpoint the high-impact uses for agentic AI in security and then implement them with proper governance.
You could start with things like:
- Automated alert triage, which helps prioritise urgent security alerts.
- Network-pattern recognition, to spot unusual activity that might indicate a threat.
- Code-repository scanning, to find potential security flaws in internal code (a common first step for AI in cybersecurity).
To give AI agents clear instructions and boundaries, it's vital to create detailed documentation. This includes security playbooks, escalation paths for incidents, and decision trees. These documents can then be used to set the operating parameters for your AI security tools.
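One way to turn a playbook's decision tree into operating parameters an agent can actually follow is to encode it as plain data. The severity levels, actions, and the escalate-by-default rule below are illustrative assumptions, not a real product's configuration schema:

```python
# A security-playbook decision tree encoded as data a triage agent can follow.
TRIAGE_TREE = {
    "critical": {"action": "page_on_call", "human_review": True},
    "high":     {"action": "open_ticket",  "human_review": True},
    "medium":   {"action": "open_ticket",  "human_review": False},
    "low":      {"action": "log_only",     "human_review": False},
}

def triage(alert):
    """Map an alert to a playbook action; anything outside the playbook escalates."""
    rule = TRIAGE_TREE.get(alert.get("severity"))
    if rule is None:
        # Unknown or missing severity goes straight to a human, never the agent.
        return {"action": "escalate_to_human", "human_review": True}
    return rule
```

The useful property is that the boundary is explicit and reviewable: security leaders edit the tree, not the agent, and anything the playbook doesn't cover falls back to a human by construction.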
CISOs and IT leaders should focus on just one to three key areas where autonomous agents can move beyond a pilot project and deliver measurable results. They need to ensure these agents operate within the company's existing risk assessment frameworks, rather than as isolated experiments.
The Human-AI Partnership: Defining Collaboration
One of the biggest differentiators in 2026 will be how well organisations figure out which tasks are best suited for human judgment and which are ripe for automation. Getting this right will separate the winners from the losers.
Consider these points:
- Map out roles where AI agents will handle routine or repetitive tasks, clearly defining where human staff will then focus their efforts.
- Ensure you provide adequate training and change management in areas of the organisation that will be affected by AI. People need to feel supported and understand the changes.
- Make sure that valuable human skills aren't being sidelined. Crucially, agents shouldn't be making decisions that truly require nuanced human judgment.
You might even find you need to identify entirely new roles or adjust existing job descriptions. We're already seeing new positions emerge, like 'system architect', 'AI orchestrator', or 'agent supervisor'. It's all about updating those job descriptions, training your current talent, and recruiting for these specialist human-AI collaboration roles.
It's a fine balance, this human-AI collaboration. If humans continue doing mundane tasks while agents are given the nuanced ones, you're getting it wrong on both counts! For more insights, check out our article on What Every Worker Needs to Answer: What Is Your Non-Machine Premium?.
A Real-World Example
Let's look at a multinational banking group in the Asia-Pacific region. They initially deployed autonomous agents for customer onboarding, fraud monitoring, and scaling cloud resources. The problem was, each business unit built its own agents. This led to a lot of duplication, unmanaged costs, and a general lack of clarity about who owned what. Not ideal!
So, the bank took action. They set up a central "agent registry". Every agent was then aligned with a specific business KPI – for example, a reduction in onboarding time, a cut in fraud losses, or a lower compute cost per transaction. They also extended their identity management systems to treat agents just like system users, complete with audit logs, the ability to revoke access, and full traceability back to their origin.
They then picked one high-impact use case to really focus on: the first 90 days of new-customer onboarding. An agent handled all the standard Know Your Customer (KYC) checks, only escalating exceptions to human staff. The results were impressive: onboarding time dropped by 30%, and human staff could shift from doing manual checking to more advisory roles.
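The escalate-exceptions pattern the bank used can be sketched in a few lines. The check names and customer fields here are hypothetical; the point is the shape, where the agent clears the standard cases and hands anything unusual to a human:

```python
def kyc_check(customer):
    """Run standard KYC checks; escalate any failure to a human reviewer."""
    standard_checks = [
        ("id_document", customer.get("id_verified", False)),
        # A missing sanctions result defaults to a hit, so it escalates.
        ("sanctions_list", not customer.get("sanctions_hit", True)),
        ("address_proof", customer.get("address_verified", False)),
    ]
    failures = [name for name, passed in standard_checks if not passed]
    if failures:
        return {"status": "escalated", "review_by": "human", "reasons": failures}
    return {"status": "approved", "review_by": "agent"}
```

Note the conservative defaults: missing data counts as a failure, so the agent only ever auto-approves when every check has positively passed.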
The bank built dashboards to keep an eye on agent costs, their value, and exception rates. They also decommissioned older scripts that weren't performing.
Latest Comments (5)
The IDC statistic about 70% of APAC organisations expecting business model shakes within 18 months due to agentic AI seems quite ambitious. While the potential for autonomous agents is clear, the operational complexities mentioned later in the article, particularly around full-lifecycle visibility and cost attribution, are significant hurdles. From my perspective in research, even with advanced models, the leap from successful task execution in a controlled environment to a truly transformative, enterprise-wide business model impact within such a short timeframe requires a level of integration and governance that many organisations are only just beginning to conceptualise, let alone implement. I'd be very interested to see the follow-up data on that prediction.
Yes, the "agent registry" idea is a good one. In LLM development we have a similar issue: many versions of fine-tuned models, many different endpoints. You must track what is active and what data each was trained on. Dashboards should show resource use, yes, and also live performance metrics like latency and accuracy. Otherwise it is too easy to lose control.
70% of APAC orgs expect agentic AI to shake things up in 18 months, IDC says. From my founder side building compliance automation, the real 'shake up' is going to be tracing all those autonomous actions. An agent registry is good, but how do we audit the decisions and data flows for privacy laws across different APAC regions? That's the headache.
The IDC 70% figure for agentic AI adoption seems very optimistic for 18 months, especially for full operationalisation, not just experimentation. We see in our lab that even with advanced models like Qwen or DeepSeek, agentic systems need significant human oversight and validation for anything beyond highly constrained tasks. Scaling this across diverse business models quickly is a large challenge.
the idea of an "agent registry" is interesting from a governance standpoint, but it also highlights a significant power imbalance. if 70% of APAC organisations anticipate agentic AI transforming their models, who is ensuring these autonomous systems are developed and deployed ethically, especially concerning data privacy and potential algorithmic biases that could disproportionately affect marginalized communities? simply tracking operational status doesn't quite address the larger societal implications, particularly for regions with less robust regulatory frameworks. we need to move beyond just cost and efficiency to accountability and fairness at every stage.