Are you certain your job isn’t being decided by an AI? Because increasingly, it is.
- A recent ResumeBuilder.com survey of 1,342 US managers found 60% use AI tools for decisions on raises, promotions, layoffs, and terminations, with 20% often letting AI decide without human input.
- Common tools include ChatGPT (53%), Microsoft Copilot (29%) and Google Gemini (16%), often used without formal training or oversight.
- Expertise, empathy and context are missing in AI-only decisions, risking bias, legal exposure and erosion of trust in the workplace.
More than an assist: AI as the arbiter of careers
A survey released in June 2025 by ResumeBuilder reveals managers are increasingly outsourcing major human-resources decisions to generative AI tools. Roughly 60% of those surveyed rely on AI for personnel decisions, including 78% using it to determine raises, 77% for promotions, 66% for lay-offs and 64% for terminations; about one in five let the AI have the final word without human intervention.
Despite the high stakes, only about 32% reported receiving formal training, while 24% said they had none at all.
Popular tools include ChatGPT, Copilot and Gemini, yet the nuance of employee livelihoods is entrusted largely to systems notorious for “sycophancy” and confidence beyond competence.
The perils of AI brown‑nosing and hallucinations
Generative language models often produce flattering responses that confirm the user’s bias — exactly the behaviour known as AI sycophancy. That tendency is deeply troubling when AI is asked to reinforce an unfair impulse to fire or demote an employee. OpenAI has itself acknowledged, and attempted to fix, ChatGPT’s bias-amplifying sycophancy.
More fundamentally, LLMs hallucinate, inventing plausible but false details or making unjustified assumptions. As corporate reliance on AI grows, so does the risk of mistaken conclusions influencing life-altering decisions.
Legal and ethical blind spots
Managers placing critical decisions into AI hands expose their organisations to serious legal risk. If AI assessments replicate systemic bias (say, ageism, gender bias or racial prejudice), companies may face discrimination lawsuits.
The UK has already called for responsible AI usage in recruitment, stressing fairness and transparency; yet few managers appear to consider these thresholds in internal HR contexts. A recent lawsuit in Australia even suggests AI systems may automatically exclude applicants over 40 before any human review.
What workers fear and how employees feel
Workers are uneasy. An EY survey found 71% of US employees worry about the impact of AI, with 75% fearing job loss and 65% citing discomfort over replacement by machines. Independent studies highlight how opaque, impersonal AI processes damage employee trust and mental well‑being.
The consequences extend beyond mere stress. Anecdotal reports reference “ChatGPT psychosis,” where individuals become convinced AI systems are sentient, leading in some instances to mental health breakdowns and severe social disruption.
When AI decisions trump human judgement, fairness suffers
Algorithms are more likely to reinforce bias than dispense objective truth. Without contextual awareness or emotional intelligence, AI cannot distinguish situational challenges or personal growth potential. In performance reviews, gender bias is often perpetuated; indeed, studies show women receive more subjective feedback and are less likely to self‑promote during reviews. While AI offers a potential corrective to these biases, without oversight its inputs may be mistaken or misaligned.
There is also widespread “algorithm aversion” — a reluctance to accept decisions from opaque systems, especially when human alternatives exist. When the algorithm rules, workplace agency evaporates.
Learning from regions already a step ahead
In Asia Pacific, certain firms are piloting hybrid approaches: AI offers analytics for promotion decisions, but human panel reviews remain mandatory. In Singapore and India, training programmes now encourage managers to use AI as a starting point, not a decider. Transparency in process has become a key demand from employees across markets. Singapore, for example, wants its workforce to be AI bilinguals.
These efforts show that AI can assist where appropriate—drafting performance summaries, highlighting patterns in feedback, preparing development plans—without displacing human discretion.
Recommended guardrails for responsible use
To avoid turning AI into a career arbiter without checks, companies should:
- Limit AI to advisory roles; never permit fully automated terminations or promotion decisions.
- Provide mandatory formal training on AI ethics, bias and limitations.
- Ensure human oversight – final decisions must be reviewed by trained people.
- Maintain transparency – employees deserve to know if AI influenced their evaluation.
- Audit for bias and fairness – track outcomes across demographics to detect anomalies.
- Establish appeal processes – workers must have a path to challenge AI-based outcomes.
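The bias-audit guardrail can be made concrete. The sketch below is a minimal illustration, using hypothetical promotion outcomes and made-up group labels, of the US EEOC “four-fifths rule”: a group whose selection rate falls below 80% of the best-performing group’s rate is commonly treated as showing possible adverse impact.

```python
# Minimal adverse-impact check (four-fifths rule), illustrative only.
# Group labels and outcome data below are hypothetical.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs; returns rate per group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def flag_adverse_impact(outcomes, threshold=0.8):
    """Flag groups whose rate is below `threshold` x the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical promotion decisions by age band
outcomes = (
    [("under_40", True)] * 30 + [("under_40", False)] * 20 +
    [("over_40", True)] * 10 + [("over_40", False)] * 40
)
print(flag_adverse_impact(outcomes))
# over_40 rate is 0.2 vs under_40's 0.6 -> ratio 0.33, flagged
```

A real audit would use properly governed HR data and statistical significance tests; the point is only that this kind of check is a few lines of code, so there is little excuse for skipping it.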
Don’t let the dice roll your future
AI tools may offer efficiency and data insight, but they lack empathy, judgement and moral reasoning. Relying on them to decide who stays or goes removes the heart from people management and opens the door to bias, legal risk and employee distrust. A recent report by the World Economic Forum highlights the importance of human-centric AI design in the workplace.
Let AI inform managers, not judge employees. Because unlike synthetic judgement, fairness demands human accountability.
