Corporate leaders across Asia and beyond are cautious in their embrace of generative AI, balancing excitement with responsibility.
- 72% of executives are deliberately slowing down investments in generative AI, citing societal and ethical concerns.
- Only 27% say their organisations are ready to scale the technology, but most see it as a revenue driver rather than a cost-cutting tool.
- Corporate culture and leadership will be decisive in shaping how AI becomes embedded in business practice.
The slow acceleration of AI in the boardroom
Executives are rarely accused of being timid, but when it comes to generative AI adoption, the brakes are firmly applied. According to Accenture’s 2024 business pulse survey of 3,400 global C-suite leaders, 72% are deliberately exercising restraint in their AI investments. The reason is not simply budgets or operational bottlenecks. Instead, societal pressures to use AI responsibly, coupled with concerns around regulation, accuracy and early-stage return on investment, are forcing a more cautious hand.
Yet the paradox is clear. As with any transformative technology, hesitation carries its own risks. Fail to keep pace with competitors and an organisation may soon find itself on the outside looking in. The survey highlights that many executives are cautiously exploring generative AI’s potential — but doing so through the prism of corporate culture.
Regulation: a double-edged sword
Interestingly, 71% of leaders view emerging technology policies and regulations not as burdens but as positives. Guardrails, it seems, are increasingly welcome. They provide clarity, reassurance, and a common framework against which to plan. For Asian markets, where data governance is already a central business issue, this sentiment carries weight. Singapore’s AI governance toolkit and Japan’s ethical AI guidelines are frequently cited as helping create a stable environment for innovation.
Why culture decides the winners
The survey underscores an important point: culture, not code, will determine success. In organisations with people-first values, AI is positioned as an enabler rather than a replacement.
Keith Farley, senior vice president at insurer Aflac, frames the issue neatly:
“We only employ AI in ways that ensure our customers’ best interests are protected. For example, we are comfortable with AI making simple claims approval decisions, but not complex ones. When it comes to making complicated assessments about an individual’s health plan and whether a health event is covered, a human always makes the final decision.” Keith Farley, Senior Vice President, Aflac
Farley’s perspective reflects the human reality of insurance: products that touch people at their most vulnerable moments. For Aflac, AI supports efficiency, but empathy remains human terrain. “We have to remember that the first word in AI is artificial,” he reminds, “and when you are going through a difficult time, sometimes you want something authentic.”
This sentiment resonates across industries in Asia, where family-run businesses and people-centric service models still dominate. Leaders who treat AI as a tool to amplify human judgement, rather than override it, may find employees and customers far more willing to trust its role.
Preparedness gap
Despite the optimism, readiness lags behind. Just 27% of executives say their organisations are prepared to scale generative AI, and nearly half (44%) predict it will take six months or more to reach that point. For many, the foundations, from clean data pipelines to internal training, remain under construction.
Nevertheless, optimism outweighs fear. A striking 76% see generative AI more as an opportunity than a threat, with revenue growth cited as the primary upside rather than cost-cutting. That orientation matters: when AI is framed as a growth lever, investment tends to be more strategic, not simply tactical.
Experimenting with responsibility
For some, experimentation has already begun. David Higginson, executive vice president and chief innovation officer at Phoenix Children’s Hospital, encourages broad participation:
“Technologies like GPT are so accessible to everyone. We strongly encourage all staff to use it in a safe and secure way and make their own determination of the value and opportunities for future use.” David Higginson, EVP & Chief Innovation Officer, Phoenix Children’s
Phoenix Children’s applies a pragmatic test: asking end-users to imagine what 90% AI accuracy would mean in their daily workflow. This grounds innovation in operational reality and avoids both overhype and underuse. It is a model that other Asian health systems, from India’s private hospitals to Singapore’s public clinics, are beginning to mirror.
Autonomous agents on the horizon
Looking further out, executives see generative AI reshaping organisational architecture. Nearly half (48%) expect chatbots to drive transformational change over the next three years. Another 45% foresee AI agents collaborating with one another to perform organisational tasks, and 40% feel ready to integrate autonomous agents into workflows. This vision remains aspirational today, but it is not hard to imagine in Asia’s high-growth sectors. From financial services in Hong Kong to logistics hubs in Vietnam, agent-based automation could dramatically reconfigure customer service, supply chains, and even management structures. We also recently explored how AI agents will break passkeys.
The leadership challenge
If there is one thread running through the survey, it is this: leaders must set the tone. The adoption of generative AI is not simply a matter of procurement or software integration. It is about defining ethical boundaries, shaping culture, and maintaining trust. Those who view AI as a shortcut will find it neither fast nor sustainable. Those who weave it into strategy with responsibility may well unlock its promise.
The question, then, is not whether Asia’s executives should step on the accelerator. It is whether they can steer with enough care to stay on the road.
Latest Comments (5)
it's interesting how many executives are slowing down on generative AI investments, especially with the societal and ethical concerns. here in europe, we often see similar caution, but it also creates space for open-source initiatives and collaborative development, which i think is a good thing.
72% of execs slowing down AI investment? not surprising from what i see in Manila. clients for our BPO AI tools are excited but also really hesitant to fully commit. they talk a lot about 'responsible AI' but i think it's more about not wanting to be the first one to mess up, especially with job displacement concerns here.
that 72% of executives are slowing down investments, it tracks. we're building LLM-powered tutors and the amount of data privacy and bias issues we hit are immense. it's not just about getting the tech to work, it's about making sure it's fair and safe. how are larger corporations even beginning to tackle these ethical frameworks at scale?
72% deliberately slowing investments, this makes sense. on-device AI for things like glasses or edge devices needs serious optimization. getting models to run efficiently on low power hardware, with good battery life, is a huge engineering hurdle. it's not just about the algorithms, but the silicon too.
Okay, 72% slowing investments because of societal and ethical concerns makes total sense. But for us content creators and marketers here in Singapore, I'm seeing so much innovation already with generative AI for things like quick copy or visual ideation. It feels like even with caution, the applications are too powerful to ignore for long!