The AI Revolution Has Fundamentally Reshaped What Data is Worth
Data has always held value, yet unlike oil, copper, or wheat, no international exchange sets a price for information. A kilobyte in Singapore carries no standardised price against a terabyte in Tokyo, nor does Jakarta operate futures markets for streaming datasets. Data pricing remains unregulated, decentralised, and subjective.
With artificial intelligence's ascent, particularly through generative and agentic systems, data value has undergone complete recalibration. AI has transformed information from a corporate by-product into the essential feedstock of enterprise strategy. The implications stretch across technology vendors, CFOs, and regulators worldwide.
AI has elevated data from passive asset to strategic currency underpinning competitiveness. Cloudera, Salesforce, Google Cloud, and Databricks now compete to offer unified platforms spanning data, analytics, and AI capabilities. The greatest challenge for firms isn't volume but governance, usability, and sovereignty of their information assets.
From Analytics to AI-First Data Architectures
Previous decades saw vendors emphasising raw analytics power. Today, businesses demand proof that platforms can serve data, analytics, and AI simultaneously. Cloudera CEO Charles Sansbury described most corporate data stacks as "a mess," arguing firms cannot simply "move everything to the cloud" and expect coherent AI outcomes.
"Most corporate data stacks are a mess. Firms cannot simply move everything to the cloud and expect coherent AI outcomes." - Charles Sansbury, CEO, Cloudera
Private AI deployments running inside company-controlled environments have become a pragmatic middle ground. Sansbury points to data sovereignty as a major hurdle, with border restrictions and identity management rules making data residency both a political and a technical challenge.
The market has responded with significant investments. Informatica's $8 billion acquisition by Salesforce demonstrated that investors recognise the gravitational pull of managed, high-quality data. This mirrors broader trends across Asia-Pacific's sovereign AI spending surge, where governments prioritise data control.
By The Numbers
- Worldwide AI spending is forecast to exceed $2 trillion in 2026, with infrastructure software capturing an 11% share and application software 13%
- 72% of organisations now use AI in at least one business function, up from 56% in 2021
- 60% of the market adopted data observability by 2026, addressing a 95% AI project failure rate tied to poor data quality
- Global AI investments hit $225.8 billion in 2025, with AI firms capturing 48% of equity funding despite comprising only 23% of deals
- 95% of AI projects fail due to poor data governance and quality issues
Quality Over Quantity in the AI Era
Salesforce's Muralidhar Krishnaprasad makes a sharp observation: AI without trustworthy data produces "super-fast rubbish". Agentic AI requires not just quantity but contextual, clean, and unified datasets. Fragmented information landscapes lead to duplication, bias, and poor outcomes.
The recalibration centres not on accumulating petabytes but ensuring enterprises build platforms unifying structured and unstructured data. This challenge becomes acute when examining Southeast Asia's AI ambitions hitting data walls, where regulatory fragmentation complicates unified approaches.
"AI without trustworthy data produces super-fast rubbish. The recalibration is about ensuring enterprises build platforms that unify structured and unstructured data alike." - Muralidhar Krishnaprasad, Salesforce
Modern platforms must handle streaming data, machine learning, and deep analytics within metadata-driven frameworks. Gartner now places these competencies at the centre of its platform assessments, treating hybrid cloud flexibility as non-negotiable.
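To make the quality-over-quantity point concrete, the snippet below is a minimal sketch of the kind of readiness checks a unified platform might run before a dataset is allowed to feed an AI model. The column names, sample records, and DataFrame shape are illustrative assumptions, not any vendor's API.

```python
# A minimal, illustrative sketch of pre-AI "readiness" checks using pandas.
# Column names and the sample records are assumptions for demonstration only.
import pandas as pd

def readiness_report(df: pd.DataFrame, key: str, required: list[str]) -> dict:
    """Report simple quality signals: row count, duplication, and completeness."""
    total = len(df)
    return {
        "rows": total,
        "duplicate_share": float(df.duplicated(subset=[key]).mean()) if total else 0.0,
        "completeness": {col: float(df[col].notna().mean()) for col in required},
    }

records = pd.DataFrame({
    "customer_id": ["c-1", "c-1", "c-2", "c-3"],
    "country": ["SG", "SG", None, "JP"],
    "consent": [True, True, True, None],
})

# Fragmented, duplicated, or incomplete data surfaces here, before it can
# become "super-fast rubbish" downstream.
print(readiness_report(records, key="customer_id", required=["country", "consent"]))
```

In practice these signals would be tracked continuously by a data observability layer rather than ad hoc scripts, which is exactly the ground the vendors above are competing on.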
Hyperscalers and the Strategic Data Imperative
Google Cloud's Yasmeen Ahmad frames data in economic terms: it is not a passive ledger entry but AI's fuel. Her focus is "dark data", the vast store of unstructured information locked in images, videos, and documents. AI acts as a universal translator, making that information usable.
The opportunity is vast, but the risks match the scale. If AI can surface all data, firms must strengthen permissions, identity governance, and compliance. Meanwhile, Databricks CTO Dael Williamson notes that data never appears on balance sheets, even though investors value companies heavily on it.
AI forces CFOs to treat information as an asset class, requiring new architectures where operational and analytical data coexist. This eliminates fragile pipelines whilst feeding models with fresh information. The trend also aligns with warnings that AI is running out of training data, which makes existing enterprise data more valuable.
| Traditional Data Approach | AI-Era Data Strategy | Key Difference |
|---|---|---|
| Analytics-focused platforms | Unified data-analytics-AI systems | Integrated rather than siloed |
| Volume-driven metrics | Quality and governance priorities | Trustworthiness over quantity |
| Passive data storage | Active AI-ready datasets | Real-time operational integration |
| Regional data strategies | Sovereign data considerations | Political and technical complexity |
The Usability Differentiator
Governance, scale, and security matter, but usability distinguishes vendors. Platforms winning attention juggle streaming data, machine learning, and analytics within unified frameworks. From Cloudera's data lakehouses to Snowflake's Iceberg tables and Google's BigLake, the market coalesces around those merging legacy reliability with AI-era agility.
Key platform requirements include (see the sketch after this list):
- Real-time streaming data processing capabilities for immediate AI model feeding
- Metadata-driven architectures supporting both structured and unstructured information
- Hybrid cloud flexibility accommodating regulatory requirements across jurisdictions
- Built-in governance tools ensuring data quality, lineage, and compliance
- Native AI/ML integration eliminating complex pipeline management
- Scalable compute resources matching fluctuating AI workload demands
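As an illustration of how these requirements meet the lakehouse formats mentioned above, here is a minimal sketch of landing governed records in an Apache Iceberg table with PySpark. It assumes Spark 3.x with the Iceberg Spark runtime jar available; the catalogue name, warehouse path, and table are placeholders, not a recommended layout.

```python
# A minimal sketch, not a production pipeline: assumes Spark 3.x with the
# Iceberg Spark runtime jar available (e.g. via spark.jars.packages).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("lakehouse-sketch")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.local.type", "hadoop")
    .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

# Illustrative operational records arriving from an upstream system.
events = spark.createDataFrame(
    [("c-001", "signup", "2026-01-10"),
     ("c-001", "signup", "2026-01-10"),   # duplicate
     ("c-002", "purchase", None)],        # missing date
    ["customer_id", "event_type", "event_date"],
)

# Simple governance gate: deduplicate and drop rows missing required fields
# before the data feeds either analytics or model training.
clean = events.dropDuplicates().dropna(subset=["event_date"])

# Land the governed records in an Iceberg table readable by BI and AI alike.
clean.writeTo("local.crm.events").createOrReplace()
print(spark.table("local.crm.events").count())
```

In principle the same table can then be read by any engine that speaks the Iceberg format, from Snowflake's Iceberg tables to BigLake, which is the interoperability the market is coalescing around.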
Asian markets face particular challenges, with data hurdles holding back AI potential where cross-border regulations vary dramatically. Yet the opportunity remains even greater, especially as Asian AI investment surges into infrastructure plays.
Future Implications for Enterprise Data Strategy
The recalibration continues to evolve. Enterprises now judge vendors on their ability to reduce operational costs, simplify infrastructure, and manage data sovereignty whilst enabling AI agility. Firms need platforms that balance governance with innovation: trustworthy yet open to experimentation.
"In the face of a 95% AI failure rate, ensuring that your data is governed, trustworthy, and semantically rich isn't just a nice initiative, it's an existential priority." - Monte Carlo Data, 2026 Predictions
For Asia, where regulatory frameworks differ dramatically across borders, the challenge is especially acute. If AI teaches one lesson, it is that data's value no longer depends on how much you own, but on how intelligently you activate it.
The next time someone claims data is the new oil, perhaps a correction is in order: with AI, data is becoming the new currency. The question remains: how much is yours worth?
What makes data valuable in the AI era?
Data value now depends on quality, governance, and AI-readiness rather than volume. Clean, unified datasets that can immediately feed AI models command premium valuations compared to fragmented, low-quality information stores.
How has AI changed data platform requirements?
Modern platforms must integrate data storage, analytics, and AI capabilities within unified architectures. Legacy systems separating these functions create bottlenecks that prevent real-time AI model training and deployment at scale.
Why is data sovereignty becoming critical for enterprises?
Regulatory requirements across jurisdictions mandate local data residency whilst AI demands cross-border model training. Enterprises need platforms balancing compliance with AI agility, particularly in Asia's fragmented regulatory landscape.
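As a simple illustration of what balancing compliance with AI agility can look like in practice, the sketch below is a hypothetical residency guard. The dataset classes, region codes, and policy table are assumptions for demonstration, not any specific regulation or cloud API.

```python
# A hypothetical residency guard; dataset classes, regions, and the policy
# table are illustrative assumptions only.
RESIDENCY_POLICY = {
    "customer_pii": {"ap-southeast-1"},                 # must stay in-country
    "telemetry": {"ap-southeast-1", "us-east-1"},       # may be replicated
}

def can_replicate(dataset: str, target_region: str) -> bool:
    """Allow cross-border replication only where the policy explicitly permits it."""
    return target_region in RESIDENCY_POLICY.get(dataset, set())

print(can_replicate("customer_pii", "us-east-1"))  # False: blocked from training abroad
print(can_replicate("telemetry", "us-east-1"))     # True: can feed a cross-border model
```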
What role do hyperscalers play in the data value shift?
Cloud giants like Google, Amazon, and Microsoft provide infrastructure enabling enterprises to activate "dark data" through AI. Their platforms transform unstructured information into valuable AI training inputs previously impossible to analyse.
How should CFOs approach data as an asset class?
Finance leaders must develop frameworks valuing data based on AI potential rather than storage costs. This requires new metrics measuring data quality, governance maturity, and model-feeding capability alongside traditional infrastructure investments.
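As a rough illustration of such a framework, the sketch below scores a dataset across the three dimensions mentioned above. The weights and the 0-100 scale are arbitrary assumptions for demonstration, not an accounting standard or a recognised valuation method.

```python
# An illustrative data-asset score; the dimensions, weights, and scale are
# assumptions, not a recognised valuation method.
WEIGHTS = {"quality": 0.40, "governance_maturity": 0.35, "model_feeding": 0.25}

def data_asset_score(scores: dict[str, float]) -> float:
    """Weighted 0-100 score across quality, governance maturity, and
    model-feeding capability."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

# Example: strong quality, middling governance, reasonable model-readiness.
print(data_asset_score({"quality": 80, "governance_maturity": 65, "model_feeding": 70}))
# 0.40*80 + 0.35*65 + 0.25*70 = 72.25
```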
As AI continues reshaping data's strategic importance, how is your organisation preparing for this fundamental value shift? Drop your take in the comments below.
Latest Comments (4)
it's interesting how Sansbury at Cloudera talks about most corporate data stacks being a mess. at our startup, we're seeing more and more of an argument for domain-specific data lakes even for AI, instead of trying to unify everything. i wonder if that's a direction larger enterprises might eventually follow too, especially with sovereignty issues.
This discussion on data sovereignty and "private AI deployments" really resonates with the policy debates here in Korea. We see similar concerns regarding cross-border data flows and national security, especially with sensitive government or industry data. The emphasis on running AI within company-controlled environments makes perfect sense when you consider the regulatory patchwork across APAC. It's not just about technical feasibility, but also about building trust and ensuring compliance with varied national data protection laws, which are often stricter than in other regions. It's a complex balance between innovation and regulation that we're all trying to figure out.
The point about data sovereignty being a political issue resonates strongly here in the UK. The AI Safety Institute's focus on secure and controlled AI deployments, especially with sensitive government data, reflects the complexities Charles Sansbury mentions. It's not just about technical capability, but ensuring compliance and public trust.
Oh, Charles Sansbury saying corporate data stacks are a mess? Tell me about it! Just last month, I spent two weeks untangling a client's "unified platform" that looked more like a digital spaghetti monster. They'd moved everything to the cloud, alright, but forgot about making any of it talk to each other. So much for coherent AI outcomes, eh?