
AI in ASIA

How AI Recalibrated the Value of Data

This article explores how AI has transformed the way businesses assign value to data, analysing vendor strategies, regional implications, and why usability and governance now matter more than raw volume.

Anonymous · 4 min read

Why AI has shifted the balance of what data is really worth

Data has always had value. Yet unlike oil, copper, or even wheat, there is no international exchange rate for data and no universally accepted measure to determine how much a business should pay for a unit of information. A kilobyte in Singapore is not priced against a terabyte in Tokyo, nor does Jakarta operate a futures market for streaming datasets. Data remains unregulated, decentralised and, crucially, subjective.

Yet with the rise of artificial intelligence, particularly generative and agentic systems, the very notion of data value has been recalibrated. AI has transformed information from a corporate by-product into the essential feedstock of enterprise strategy. The implications are vast for technology vendors, CFOs, and regulators alike.

AI has elevated data from a passive asset to a strategic currency that underpins competitiveness. Vendors now compete to offer unified platforms spanning data, analytics, and AI. The greatest challenge for firms is not volume, but the governance, usability and sovereignty of their data.

From analytics to AI-first data platforms

In previous decades, vendors emphasised the raw power of analytics. Today, that is no longer enough. Businesses want proof that platforms can serve a triad of data, analytics and AI. Cloudera’s CEO Charles Sansbury put it bluntly: most corporate data stacks are a mess. Firms cannot simply “move everything to the cloud” and expect coherent AI outcomes. Instead, private AI deployments, running inside company-controlled environments, are becoming a pragmatic middle ground.

Sansbury points to data sovereignty as one of the biggest hurdles. Borders and identity management rules make data residency a political as well as technical issue. And as Informatica’s $8 billion sale to Salesforce demonstrated, investors have woken up to the gravitational pull of managed, high-quality data.

Clean data or fast rubbish

Salesforce’s Muralidhar Krishnaprasad makes a sharp point: AI without trustworthy data produces “super-fast rubbish”. Agentic AI needs not just quantity but contextual, clean and unified datasets. A fragmented information landscape leads to duplication, bias and poor outcomes. The recalibration, then, is not about piling up petabytes but about ensuring enterprises build platforms that unify structured and unstructured data alike.

Hyperscalers and the strategic imperative

Google Cloud’s Yasmeen Ahmad casts data in almost economic terms. It is not a passive ledger entry, but the fuel of AI. Her focus is on “dark data”, the vast majority of unstructured information hidden in images, videos and documents. AI, she argues, acts as a universal translator that makes this information usable, giving firms a richer tapestry to analyse. The opportunity is vast, but so are the risks. If AI can surface all data, firms must double down on permissions, identity governance and compliance.

Meanwhile, Databricks CTO Dael Williamson notes that data is not recorded on balance sheets, despite investors valuing it heavily. AI is forcing CFOs to treat information as an asset class. That requires new architectures where operational and analytical data coexist, eliminating fragile pipelines and feeding models with the freshest information. Starbucks’ use of AI to enhance supply-chain visibility is a case in point.

Usability is the differentiator

Governance, scale and security all matter, but what distinguishes one vendor from another is usability. The platforms winning attention are those that juggle streaming data, machine learning and deep analytics within one metadata-driven framework. Gartner now places these competencies at the heart of its assessments, with hybrid cloud flexibility a non-negotiable. From Cloudera’s data lakehouses to Snowflake’s Iceberg tables and Google’s BigLake, the market is coalescing around those who can merge legacy reliability with AI-era agility. For more on the strategic shift, consider reading about AI's Secret Revolution: Trends You Can't Miss.

What next for data value in the AI era

The recalibration is not finished. Enterprises will judge vendors on their ability to reduce operational costs, simplify infrastructure, and manage data sovereignty while still enabling AI agility. In short, firms need platforms that are both trustworthy and experimental, balancing governance with innovation.

For Asia, where regulatory frameworks differ dramatically across borders, the challenge is acute. Yet the opportunity is greater still. If AI has taught us one thing, it is that the value of data is no longer in how much you own but in how intelligently you activate it.

So the next time someone says data is the new oil, perhaps we should correct them: with AI, data is becoming the new currency. The real question is, how much is yours worth? For further reading on the economic impact of data, refer to this report by the World Economic Forum on the Value of Data.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.

Latest Comments (4)

Priya Sharma (@priya.s) · 19 October 2025

it's interesting how Sansbury at Cloudera talks about most corporate data stacks being a mess. at our startup, we're seeing more and more of an argument for domain-specific data lakes even for AI, instead of trying to unify everything. i wonder if that's a direction larger enterprises might eventually follow too, especially with sovereignty issues.

Hye-jin Choi (@hyejinc) · 7 October 2025

This discussion on data sovereignty and "private AI deployments" really resonates with the policy debates here in Korea. We see similar concerns regarding cross-border data flows and national security, especially with sensitive government or industry data. The emphasis on running AI within company-controlled environments makes perfect sense when you consider the regulatory patchwork across APAC. It's not just about technical feasibility, but also about building trust and ensuring compliance with varied national data protection laws, which are often stricter than in other regions. It's a complex balance between innovation and regulation that we're all trying to figure out.

Charlotte Davies (@charlotted) · 30 September 2025

The point about data sovereignty being a political issue resonates strongly here in the UK. The AI Safety Institute's focus on secure and controlled AI deployments, especially with sensitive government data, reflects the complexities Charles Sansbury mentions. It's not just about technical capability, but ensuring compliance and public trust.

Amelia Taylor (@ameliat) · 30 September 2025

Oh, Charles Sansbury saying corporate data stacks are a mess? Tell me about it! Just last month, I spent two weeks untangling a client's "unified platform" that looked more like a digital spaghetti monster. They'd moved everything to the cloud, alright, but forgot about making any of it talk to each other. So much for coherent AI outcomes, eh?
