The Rise of ProSocial AI as ESG's Strategic Successor
The ESG movement has shaped corporate responsibility for two decades, binding business success to environmental stewardship, social responsibility, and governance. Yet as artificial intelligence reshapes global systems at unprecedented speed, from logistics to medicine, the ESG lens is starting to look like a rear-view mirror.
ProSocial AI is emerging as the forward-facing framework, one that embeds beneficial impact into the very architecture of algorithms. Unlike the ESG checklists of yesterday, ProSocial AI is not an add-on. It's a business strategy for executives who understand that the trust and longevity of their organisations depend on AI that actively improves society, not just avoids harm.
The shift reframes AI not as a risk to be contained, but as a lever for creating value across human, social, and planetary systems. As Asia-Pacific leads global AI adoption, with 88% of companies now reporting AI use in at least one business function, the need for frameworks that ensure beneficial outcomes has never been more urgent.
ESG's Blind Spot in the Age of Algorithms
ESG frameworks were conceived long before artificial intelligence became a central force in commerce and governance. They remain effective for evaluating supply chains, board composition, or carbon footprints, but they fall short in confronting the opacity and systemic influence of AI.
Consider an AI system that optimises a retailer's supply chain. ESG will measure carbon savings from reduced truck mileage or assess whether warehouse staff receive fair wages. But it won't interrogate whether the algorithm disadvantages rural customers, undermines local skills, or fuels unsustainable consumption despite operational efficiencies.
These blind spots are not trivial. They shape communities, behaviours, and ecosystems in ways ESG cannot capture. ProSocial AI is designed to plug this gap, ensuring that algorithms are not merely efficient but socially and environmentally constructive.
"ProSocial AI is a useful operational standard: systems that are tailored, trained, tested, and targeted to bring out the best in and for people and planet." - Psychology Today Australia on harnessing hybrid intelligence
Strategic Value Beyond Ethical Compliance
Businesses adopting ProSocial AI aren't driven only by ethics; they're pursuing competitive advantage. The framework delivers value in at least five dimensions:
- Trust as capital: Companies deploying fair, transparent, human-centred AI strengthen public confidence, with 65% of consumers now trusting businesses that use AI
- Risk prevention: Proactive design removes biases and strengthens privacy protection, minimising exposure to lawsuits and reputational collapse
- Innovation with purpose: AI configured for social and ecological benefit uncovers fresh markets in health, circular economies, and regenerative industries
- Human-machine synergy: Instead of substituting workers, ProSocial AI augments human creativity and judgement, making workplaces more collaborative
- Talent magnetism: Young professionals want purposeful careers, and firms advancing ProSocial AI gain access to the best engineers and ethicists
In short, ProSocial AI repositions technology as an engine of long-term prosperity, not a source of liability. This becomes especially crucial as three in four global employees now use generative AI at work, making ethical implementation a workforce imperative.
By The Numbers
- Organisation-wide AI usage in professional services nearly doubled to 40% in 2026 from 22% in 2025
- 88% of companies report AI use in at least one business function, up from 78% last year
- 65% of consumers trust businesses that use AI, while 78% believe generative AI benefits outweigh risks
- AI companion apps surged by 700% between 2022 and mid-2025
- 15% of organisations have adopted agentic AI tools, with 53% planning or considering them
Building Double Literacy for Algorithmic Leadership
Embedding ProSocial AI requires a shift in how leaders, engineers, and even schoolchildren are educated. The foundation is double literacy: human literacy and algorithmic literacy working in tandem.
Human literacy encompasses deep understanding of how individuals and societies function, spanning emotions, aspirations, institutions, geopolitical realities, and ecosystems. Algorithmic literacy covers practical comprehension of data origins, model transparency, algorithmic limits, and ethical implications.
Double literacy ensures that product managers ask about ecological footprints, engineers anticipate psychological consequences, and CEOs grasp the societal trade-offs of AI strategy. It's not a luxury but a survival skill in an era where Asia-Pacific sovereign AI spending is about to surge.
"Companies will continue to have some human in the loop to create guardrails for agentic AI, as ongoing hallucinations and mistakes has been a wake-up call that has slowed adoption." - Thomas Davenport, MIT Sloan
| ESG Framework | ProSocial AI Framework | Key Difference |
|---|---|---|
| Measures outputs and processes | Embeds values in system design | Proactive vs reactive approach |
| Carbon footprint reporting | Algorithmic bias detection | Traditional vs digital impact |
| Board diversity metrics | Human-AI collaboration quality | Static vs dynamic measurement |
| Supply chain auditing | Ecosystem benefit assessment | Linear vs systemic thinking |
Measuring Impact Through New Metrics
If ESG gave us carbon intensity ratios and board diversity counts, ProSocial AI must offer its own measurement toolkit. Emerging indicators include:
- Algorithmic equity: bias detection and correction across automated decision-making systems
- Human well-being: AI's influence on cognitive load, autonomy, and mental health
- Environmental efficiency: energy use, impact on biodiversity, and contribution to circular economies
- Collaborative intelligence: the quality of human-AI cooperation, creativity, and workforce satisfaction
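To make the first of these indicators concrete, here is a minimal sketch of one common equity check, the demographic parity difference: the gap in positive-outcome rates between groups. The group names and decision data below are hypothetical illustrations, not figures from the article.

```python
# Illustrative sketch of one "algorithmic equity" indicator:
# demographic parity difference, i.e. the largest gap in
# positive-decision rates across customer groups (0.0 = parity).
# All group labels and decisions below are hypothetical.

def selection_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_by_group):
    """Largest gap in selection rates across groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval outcomes for urban vs rural customers
decisions = {
    "urban": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 approved
    "rural": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 approved
}

gap = demographic_parity_difference(decisions)
print(f"demographic parity gap: {gap:.3f}")  # 0.750 - 0.375 = 0.375
```

A gap this large would flag the rural-customer blind spot described earlier; in practice, an audit would combine several such measures rather than rely on one.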
These measures expand the definition of value beyond profit margins, linking an enterprise's success to the resilience of the ecosystems it depends on. As Southeast Asia's AI ambitions hit a data wall, proper measurement becomes essential for sustainable growth.
The 4T Implementation Framework
For businesses and individuals alike, a practical pathway exists in the 4T Framework:
- Tailor: Customise AI to values and well-being, ensuring solutions address meaningful challenges rather than just efficiency gains
- Train: Curate diverse, ethical datasets while educating humans and machines simultaneously for better outcomes
- Test: Audit systems for unintended bias, privacy gaps, and social side effects before deployment
- Target: Focus AI deployments on measurable positive goals such as health, sustainability, and empowerment
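The four steps above could be operationalised as a simple pre-deployment checklist. The following sketch is one possible way to track it; the class name, fields, and criteria are illustrative assumptions, not an established standard.

```python
# A minimal sketch of the 4T Framework as a pre-deployment checklist.
# Field names and criteria are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FourTAudit:
    tailor: bool = False  # aligned with stated values and well-being goals?
    train: bool = False   # dataset diversity and provenance reviewed?
    test: bool = False    # bias, privacy, and side-effect audits passed?
    target: bool = False  # measurable positive goal defined?

    def ready_to_deploy(self) -> bool:
        """Deployment requires all four checks to pass."""
        return all(vars(self).values())

    def gaps(self) -> list[str]:
        """Names of the checks that have not yet passed."""
        return [name for name, ok in vars(self).items() if not ok]

audit = FourTAudit(tailor=True, train=True, test=False, target=True)
print(audit.ready_to_deploy())  # False
print(audit.gaps())             # ['test']
```

Treating the framework as a gate rather than a report is what ties intentions to results: a system that fails any check simply does not ship.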
This framework helps organisations avoid the pitfalls that plagued ESG implementation, where inconsistent standards and "virtue signalling" prioritised disclosure over outcomes. For ProSocial AI to thrive, it must tie intentions directly to results.
What makes ProSocial AI different from traditional responsible AI approaches?
ProSocial AI goes beyond avoiding harm by actively designing systems that create positive societal outcomes. While responsible AI focuses on risk mitigation, ProSocial AI embeds beneficial impact into the core architecture and decision-making processes of AI systems.
How can businesses measure the success of ProSocial AI initiatives?
Success metrics include algorithmic equity scores, human well-being assessments, environmental efficiency gains, and collaborative intelligence ratings. These go beyond traditional KPIs to measure societal and ecological impact alongside business performance.
What role does employee training play in ProSocial AI adoption?
Double literacy training is essential, combining human understanding of social systems with algorithmic comprehension. This ensures teams can identify unintended consequences and design AI that truly serves human and planetary needs.
Can smaller companies implement ProSocial AI without massive resources?
Yes, the 4T Framework (Tailor, Train, Test, Target) provides a scalable approach. Small businesses can start by auditing existing AI tools for bias and gradually implementing more comprehensive ethical AI practices as they grow.
How does ProSocial AI address concerns about AI job displacement?
ProSocial AI emphasises human-AI collaboration rather than replacement, focusing on augmenting human capabilities and creating new forms of value that require both human judgement and algorithmic efficiency working together.
The ESG era showed that finance cannot be divorced from social and environmental health. The ProSocial AI era reminds us that algorithms are not neutral; they're coded visions of the future. With workers using AI more but trusting it less, the imperative for beneficial AI design has never been clearer.
As Asian businesses race to implement AI at scale, the real question for leaders is simple: will your AI leave societies stronger or more fragile than before? The choice between ProSocial AI and status quo approaches will define not just business success, but the kind of future we're building together. Drop your take in the comments below.
Latest Comments (3)
@chenming: it's interesting how the article points out ESG's blind spot for things like an AI supply chain algorithm disadvantaging rural customers. in china, we've seen similar issues with recommendation engines and delivery apps. often, the focus is on growth and efficiency, which can sometimes unintentionally create these "blind spots" that negatively impact certain demographics or smaller businesses. it's not always malicious, but a byproduct of how these systems are designed to optimize for specific metrics. moving towards ProSocial AI would definitely need to address this more directly, especially with how pervasive AI is becoming in daily life here.
the idea that ESG "falls short in confronting the opacity and systemic influence of AI" makes sense. i've been looking at some of the explainable AI literature, and it seems like even with techniques like LIME or SHAP, you're often getting local explanations, not a holistic view of how an AI system impacts society. so if ESG is trying to evaluate something as complex as, say, an algorithm's effect on rural customers or local skills as the article mentions, how would ProSocial AI actually operationalise that? it feels like a very qualitative problem, even if the AI itself is quantitative. are there established metrics for 'disadvantaging rural customers'?
i do agree that traditional ESG struggles with the "opacity and systemic influence of AI". we're seeing this in multimodal research, where it's not always clear how biases propagate through complex networks. it's a gap in current evaluation metrics, beyond just carbon footprint or labor practices.