The Algorithm Is Now Your Boss: When AI Software Decides Who Stays and Who Goes
Your next promotion, pay rise, or even your job security might not depend on your manager's judgement. It could hinge on what ChatGPT thinks of your performance data.
A June 2025 survey by ResumeBuilder of 1,342 US managers reveals that 60% now use AI tools for critical personnel decisions. This includes 78% using AI to determine raises, 77% for promotions, 66% for layoffs, and 64% for terminations. Most concerning? One in five managers lets AI make the final call without human oversight.
The tools driving these decisions are familiar consumer products: ChatGPT (53%), Microsoft Copilot (29%), and Google Gemini (16%). Yet only 32% of managers received formal training on using these systems for HR decisions, while 24% admitted to having no training at all.
The Rise of the Algorithmic HR Department
This shift towards AI-driven HR decisions reflects broader workplace trends. According to recent research, 91% of chief human resources officers now rank AI and workplace digitisation as their top immediate concern, surpassing traditional issues like governance and talent management.
The appeal is clear: AI promises efficiency, data-driven insights, and reduced human bias. Early adopters report that AI in hiring can reduce bias by 40% compared to traditional methods while cutting hiring costs by 30-40%. Some 65% of organisations using AI in HR report productivity boosts in performance and retention decisions.
However, the reality proves more complex. AI doesn't reduce work; it intensifies it, creating new layers of oversight and validation requirements that many organisations aren't prepared to handle.
By The Numbers
- 60% of US managers use AI tools for personnel decisions including raises, promotions, and terminations
- 20% of managers let AI make final HR decisions without human intervention
- 32% of managers received formal training on AI usage in HR contexts
- 91% of CHROs rank AI and workplace digitisation as their top immediate concern
- 45% of organisations currently employ AI in HR management activities
When Machines Hallucinate About Human Lives
The fundamental problem lies in what these AI systems actually are: large language models trained to predict the next word in a sequence, not to make nuanced judgements about human performance or potential. These systems suffer from well-documented issues including "sycophancy," where they tell users what they want to hear, and hallucination, where they confidently present false information as fact.
"In 2026, an organisation's people data will rival its financial data in strategic importance. AI will transform workforce intelligence from a historical view of headcount and costs into a living map of capability, agility and operational potential," said Steve Holdridge, workforce analytics expert.
Yet current AI implementations fall far short of this vision. When ChatGPT is asked to evaluate an employee's performance based on limited data, it might fabricate plausible-sounding justifications for dismissal or promotion that have no basis in reality. The system's training on internet text doesn't equip it to understand workplace dynamics, personal circumstances, or growth potential.
Legal experts warn that organisations face significant discrimination lawsuits if AI systems replicate systematic bias around age, gender, or race. A recent Australian lawsuit suggests AI recruitment systems may automatically exclude applicants over 40 before any human review occurs.
The Human Cost of Algorithmic Efficiency
Workers are increasingly anxious about AI's role in their professional lives. An EY survey found that 71% of US employees worry about AI's workplace impact, with 75% fearing job loss and 65% expressing discomfort about being replaced by machines.
These concerns extend beyond job security to fundamental questions of fairness and dignity. When algorithms make career-altering decisions without transparency or appeal processes, workplace agency disappears. Employees report feeling powerless against opaque systems that may penalise them for factors they cannot understand or control.
"2026 will be the year of outcomes for AI. The focus will shift from potential to performance, and from experimentation to real measurement of business results across operations, talent acquisition and employee engagement," noted Amy Cappellanti-Wolf, workplace AI researcher.
The psychological impact can be severe. Reports of "ChatGPT psychosis," where individuals become convinced AI systems are sentient, highlight the mental health risks of over-reliance on AI decision-making. When people lose trust in the fairness of workplace systems, companies struggle to maintain productive relationships with their workforce.
| Decision Type | AI Usage Rate | Human Oversight | Legal Risk |
|---|---|---|---|
| Salary Increases | 78% | Limited | Medium |
| Promotions | 77% | Minimal | High |
| Layoffs | 66% | Variable | Very High |
| Terminations | 64% | Required | Extreme |
Asia-Pacific's More Measured Approach
While Western organisations rush towards AI automation, several Asia-Pacific countries are taking a more cautious approach. Singapore and India have implemented training programmes that encourage managers to use AI as a starting point rather than a decision-maker.
These hybrid approaches allow AI to offer analytics for promotion decisions while maintaining mandatory human panel reviews. Transparency has become a key demand from employees across Asian markets, with Singapore actively working to create an AI-bilingual workforce that understands both AI capabilities and limitations.
Regional firms are piloting systems where AI assists with tasks like:
- Drafting performance summaries based on collected feedback
- Highlighting patterns in employee data across departments
- Preparing personalised development plans for career progression
- Identifying potential retention risks before they become critical
- Supporting managers with objective performance metrics
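The hybrid model described above amounts to a review gate: the AI's output is treated as advisory metadata, and only a named human reviewer can set the final outcome. A minimal Python sketch of that pattern, with all field names, identifiers, and data entirely hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PromotionCase:
    employee_id: str
    ai_summary: str                     # AI-drafted summary (advisory only)
    ai_recommendation: str              # e.g. "promote" / "hold" (advisory only)
    final_decision: Optional[str] = None
    reviewed_by: Optional[str] = None

def finalise(case: PromotionCase, decision: str, reviewer: str) -> PromotionCase:
    # The final decision can only be recorded alongside a named human reviewer;
    # the AI recommendation never flows into final_decision automatically.
    if not reviewer:
        raise ValueError("a named human reviewer is required")
    case.final_decision = decision
    case.reviewed_by = reviewer
    return case

# The AI drafts a summary and recommendation; a human signs off.
case = PromotionCase("E-1024", "Consistent delivery across Q1-Q3", "promote")
finalise(case, "promote", "jane.doe")
```

The design point is structural rather than clever: because `finalise` is the only path to a decision, an audit trail of who approved what comes for free, which is exactly the transparency employees in these markets are demanding.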
The key difference lies in maintaining human discretion for final decisions while leveraging AI's analytical capabilities. Vietnam's new AI law specifically addresses workplace applications, requiring transparency when AI influences employment decisions.
What legal protections exist against AI bias in hiring?
Currently, employment discrimination laws apply to AI systems, but enforcement remains patchy. The UK has called for responsible AI usage in recruitment, while Australia and the EU are developing specific regulations requiring algorithmic transparency.
Can employees challenge AI-influenced decisions about their careers?
Most organisations lack formal appeal processes for AI-influenced decisions. However, workers retain rights under existing employment law, and several countries are considering "right to explanation" legislation for automated decision-making.
How accurate are AI systems at predicting job performance?
Studies show mixed results, with accuracy heavily dependent on data quality and system design. AI can identify patterns humans miss but often lacks context for situational factors affecting performance.
Should companies disclose when AI influences employment decisions?
Transparency advocates argue yes, and some jurisdictions are moving towards mandatory disclosure. Employees deserve to know when algorithms affect their careers, enabling them to address potential data issues.
What safeguards prevent AI from perpetuating workplace discrimination?
Effective safeguards include bias auditing, diverse training data, human oversight requirements, and regular outcome monitoring across demographic groups. However, many organisations have yet to put these protections in place.
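A first-pass bias audit of the kind mentioned above can be as simple as comparing selection rates across groups. The sketch below uses the "four-fifths rule" heuristic drawn from US EEOC guidance on adverse impact; all group names and numbers are illustrative, not real data:

```python
# Hypothetical adverse-impact screen: flag any group whose selection
# rate falls below 80% of the highest group's rate (four-fifths rule).

def selection_rates(outcomes):
    """outcomes maps group -> (selected, considered); returns rates."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return group -> True if its rate passes the threshold screen."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

# Illustrative promotion outcomes: (promoted, considered)
outcomes = {
    "group_a": (30, 100),   # 30% promoted
    "group_b": (18, 100),   # 18% promoted; ratio 0.18/0.30 = 0.6, below 0.8
}

print(four_fifths_check(outcomes))
```

A screen like this catches only the crudest disparities; it says nothing about proxy variables or intersectional effects, which is why ongoing outcome monitoring and human review remain necessary alongside it.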
The future of work shouldn't be decided by systems that hallucinate with confidence. AI can inform better decisions, but it cannot replace the empathy, context, and moral reasoning that fair employment practices require. As businesses across Asia grapple with AI adoption challenges, the lesson is clear: technology should serve human dignity, not undermine it.
What's your experience with AI in workplace decisions? Have you encountered algorithmic bias or found AI tools helpful in HR contexts? Drop your take in the comments below.