


AI in ASIA

AI with Empathy for Humans

This article explores the concept of AI with empathy, drawing on expert insights from Stanford's Imagination in Action summit. It challenges the dominant narrative of job displacement and technological rivalry, urging a shift towards empathetic, human-centred AI design. Through a mix of research, industry perspectives and cultural reflection, the piece makes a compelling case for alignment, context sensitivity and AI literacy as the true cornerstones of responsible AI adoption across Asia. Written in a conversational yet commercially sharp tone, it invites professionals to see AI not as a threat but as a potential partner, provided we build it right.

Anonymous · 5 min read

Why thinking of AI as a partner, not a usurper, is vital as machines get smarter

The discourse around AI often gravitates towards replacement and obsolescence, but empathy, in the sense of understanding human intent, may be AI’s greatest missing piece.

In conversations at Stanford’s Imagination in Action summit, experts argued that alignment, context and human-centric design matter more than brute computational power.

The future of work hinges not only on what machines can do, but on how we guide, augment and partner with them, and on whether we can avoid imposing generic blueprints on human creativity.

At Stanford: Talking Alignment and Empathy

At the Imagination in Action summit, Alexander “Sandy” Pentland moderated a discussion between Stanford’s Diyi Yang and Deloitte’s Laura Shact. Their exchange cut to core tensions in building what we might call empathetic AI.

Pentland opened with a deceptively simple yet thorny question: how do you make AI that isn’t self-serving? How do you get systems that align with human ends, not just algorithmic objectives or legal compliance?

Understanding vs. Specifying Human Intent

Yang emphasised that one central flaw in many AI systems today is that the alignment is “local”: short prompts, reward signals, input/output enforcement. That may hold for isolated tasks, but it fails in long, evolving interactions.

“If you have a longitudinal task … getting the reward and the signals correct is just very hard.”


That mismatch is one reason many LLMs seem tone-deaf to our intentions; they optimise for the prompt, not the richer human purpose behind it.

Pentland also warned about the non-linear dynamics of interacting agents. He cited “flash crashes, spikes … crazy non-linearities”, phenomena that emerge when many algorithmic agents interlock. To automate large swathes of an enterprise, you must know what people are supposed to do within a given context, and that is no small task.

Roles, Process Maps and What “Work” Means

Shact brought in a design lens: the idea of process maps, explicit blueprints of workflows, roles, decision nodes and optimisable junctures. These maps, she argued, can guide where to automate and where to keep human judgement in play.

But Yang pushed back on the assumption that workflow blueprints always exist. Humans operate in an emergent mode: creative, iterative, messier than rigid pipelines. She has begun comparing how humans perform tasks (annotating, editing, stepping back) versus how agents execute (one-pass function calls). To build systems that truly partner with humans, we need to start with observation, not prescription.

The Labour Question: More Than Displacement

The question of job loss often dominates AI discussions. But the panellists reframed it:

Pentland recounted conversations with corporate leaders questioning if new engineering hires are necessary when AI can already generate code. Push that logic further, and entire organisational layers shrink.

Shact noted that reductions at junior levels may harm the talent pipeline itself: without fresh entrants, progression and experience acquisition at senior levels could stall. Some organisations may view cost cutting as a short-term tactical move, but the long-tail effects, the erosion of institutional memory, mentorship and culture, are costly.

Yang pointed to the shifting premium: “top skills are analysing information, analysing data.” AI literacy, the capacity to understand, interrogate and co-create with AI, becomes less optional and more core to professional fluency.

They also touched on investment: how venture capital may evolve to favour teams that fuse human insight with algorithmic agility, rather than “pure AI” bets.

Empathy as a Design Principle

What lessons arise from the summit for real‑world AI development?

Start with alignment, not capability. The default rush is to push more compute, better models, bigger data. But alignment is harder and more consequential than raw capability.

Design for context sensitivity. Ask what the human in this loop is trying to achieve, not just what the immediate task is.

Observe, don’t prescribe. Human workflows are messy, adaptive, creative. Don’t start by drafting rigid agent blueprints; start by observing how people work with AI when given flexibility.

Maintain roles and oversight. Even as agents take over sub-tasks, human audit, interpretation and curiosity must remain in the loop.

Cultivate AI literacy and empathy. Technology alone won’t engender trust. Humans need clear guidance and boundaries to feel safe collaborating with agents.

Looking Ahead: A Partnership, Not a Rivalry

It’s tempting to frame the rise of AI as a zero-sum battle between human and machine. But that misframes the real frontier: the question is whether AI can become empathetic enough to grasp nuance, fallibility, purpose and even humour.

The cultural warnings (in music, film and fiction) serve as shimmering signposts: they alert us to what might go wrong, not what inevitably will. The stewardship of AI demands more than technical wizardry; it demands a moral vocabulary, organisational humility and a willingness to see machines not as empires rising, but as collaborators learning our language, our foibles, our lives.

The challenge now is less about AI that can do everything and more about AI that should do some things, while leaving space for human imagination, doubt, error and depth. That is the empathetic turn we must design for. This shift highlights the importance of ProSocial AI as a new metric for responsible technology. For more insights on the ethical considerations of AI, you can explore research from institutions like the AI Ethics Lab. Furthermore, understanding the various definitions of Artificial General Intelligence can help frame these discussions.



Latest Comments (2)

Liu Jing (@liuj) · 2 November 2025

The idea that "reward and signals correct is just very hard" for longitudinal tasks is not new. Baidu has been working on multi-agent reinforcement learning for years, specifically addressing these complex, long-term interaction issues. It's not just a Western academic problem.

Maggie Chan (@maggiec) · 25 October 2025

"local" alignment for short prompts is exactly what we need to wrestle with for compliance automation though. the long-term stuff is a nightmare to even define, let alone code.
