Passkeys were built to secure humans, not machines. As Asia’s enterprises wire AI into daily operations, a new kind of identity crisis is emerging.
- Passkeys eliminate phishing but are designed for humans, not AI agents.
- Enterprises risk massive data exposure if agents are forced to proxy human credentials.
- The fix lies in agent-specific identities, intent-based access, and stronger governance.
The Passkey Promise Meets the AI Reality
Stop using passwords. That’s the drumbeat from Apple, Google and Microsoft, who have spent years coaxing the world toward passkeys: cryptographic credentials stored on your device rather than in your memory. Banks in Singapore and Hong Kong now encourage customers to use them. Retailers across Australia and Japan deploy them to reduce checkout fraud. It feels like the industry has finally settled on a security standard built for the modern internet.
And to be fair, it’s working. Passkeys replace brittle passwords with a pair of keys: one public, one private, secured by your fingerprint or face. They solve two long-standing headaches: proving you are you and ensuring that websites are who they claim to be.
But beneath that success lies a design flaw. Passkeys were built to authenticate humans. They were not designed for autonomous AI agents that operate on behalf of those humans. As AI becomes a co-worker rather than a tool, that gap could turn into a chasm.
Why Passkeys Were a Win for People
For all their marketing jargon, passkeys are elegant. Instead of a shared secret that can be stolen, they use asymmetric cryptography. Your private key never leaves your device; the server only ever sees your public key. That means there’s nothing for a phisher to trick you into giving away.
The trade-off, however, is dependency. Passkeys tie your identity to a single device or cloud service. Lose the device, and recovery depends on whatever backup policy your provider enforces. In Apple's case, that's iCloud with two-factor authentication. In others, it's your password manager, ironically secured by, yes, a password.
Still, for human users juggling dozens of accounts, passkeys are a breath of fresh air. But for machines acting on behalf of humans, the model starts to break down.
Where AI Agents Break the Model
Here’s the issue: an AI agent cannot press its thumb against a fingerprint reader or scan its face with Face ID. It cannot live inside iCloud. It cannot authenticate via a smartphone.
So, if you want your AI agent to approve a purchase order, reconcile invoices or fetch HR data, it needs access. The only way to give it that access today is by proxying your credentials.
That’s like handing your house keys to a cleaning robot and telling it to make copies “just in case.” Convenient, yes. Secure, no.
OAuth, the protocol that powers delegated logins (“Sign in with Google,” for instance), does technically allow fine-grained permissions. But adoption remains vanishingly low; roughly 1% of all websites use it fully. And human behaviour is rarely tidy; faced with friction, people override controls. We’ve seen this movie before, with sticky notes full of passwords taped to monitors.
Over-Permissioned by Design
Once an AI agent holds human credentials, it inherits everything. If a CFO’s passkey grants access to the entire finance stack, so too does the agent. The result: over-permissioned AI that can move data, approve transactions or replicate itself faster than any human could.
The danger multiplies when these agents spawn sub-agents or when attackers create impostor agents that mimic legitimate ones. Suddenly, the tidy promise of passwordless security devolves into chaos at machine speed.
It’s the very problem passkeys were meant to solve, only amplified.
The Enterprise Fallout
For Asia’s enterprises, the implications are immediate:
- Operational risk: AI agents working with expired or duplicated credentials can trigger outages at scale.
- Compliance risk: Regulators in markets like Singapore and Japan require audit trails. If an AI acts under a human’s credentials, accountability disappears.
- Security risk: Over-permissioned agents become irresistible targets. Compromise one, and an attacker inherits the privileges of the human behind it.
In short, passkeys may solve phishing but create a new trinity of risk: identity confusion, compliance opacity and privilege inflation.
Lessons from Security History
We’ve seen similar cycles before. Password rotations were once best practice until they pushed users to write credentials on sticky notes. Multi-factor authentication was deemed too cumbersome, until fraud made it indispensable. For further reading on the evolution of authentication methods, see the National Institute of Standards and Technology (NIST) Special Publication 800-63-3, Digital Identity Guidelines.
Passkeys will follow the same curve. They are a leap forward for human security, but without rethinking them for machine identities, we risk undoing that progress entirely.
Three Fixes to Future-Proof Authentication
- Agent-specific identities: give AI agents their own scoped, revocable credentials instead of proxying a human’s passkey.
- Intent-based authorisation: grant access per task or transaction, so an agent can do what it was asked to do and nothing more.
- Stronger governance and observability: log and audit every agent action under its own identity, so accountability survives delegation.
These steps won’t be easy. They’ll demand new standards, new vendors and new infrastructure. But without them, enterprises risk creating a machine speed version of the very password mess they’ve just escaped.
The Road Ahead
Passkeys were a triumph of simplicity, the rare example of security that made life easier for users. But the rise of AI agents challenges their fundamental assumption: that every user is human.
In Asia’s fast-automating markets, from fintech in Singapore to logistics in South Korea, that assumption is already outdated. The question now is not whether passkeys are secure, but whether they can evolve quickly enough to handle non-human identities before AI turns convenience into catastrophe.
So as the region races to integrate AI into business infrastructure, perhaps the new mantra should be: Don’t stop using passwords. Start rethinking identity.
Latest Comments (3)
This article really hit home for me. Just last month, my cousin, who works for a tech outfit in Shenzhen, was telling me about their internal discussions around AI agents. He mentioned how they're grappling with granting these agents access without basically handing over the keys to the kingdom. His team even had a small incident where an AI agent, given too much leeway for a "simplifying workflow" project, nearly deleted a critical database. Luckily, they caught it, but it shows how quickly things can go sideways. The idea of *over-permissioned* agents is a genuine worry, and linking that to passkeys makes perfect sense. We’re all chasing convenience, but perhaps we're overlooking the foundational security implications when these new technologies interact.
This piece on passkey vulnerabilities is thought-provoking, but I wonder if the focus on agent over-permissioning overlooks simpler social engineering attack vectors.
Interesting read. I'm wondering if the issue isn't just over-permissioned agents, but organisations rushing to deploy AI without fully understanding the nuances of how these agents interact with authentication systems. Maybe a more controlled, sandbox approach to AI integration is needed in Asia before we go all out?