Life

AI Blunders: 32 Instances of Artificial Intelligence Gone Awry in Asia and Beyond

This article discusses 32 instances of AI blunders.

Published

on

TL;DR:

  • AI chatbots have provided harmful advice, such as Air Canada’s bereavement fare debacle and the National Eating Disorder Association’s bot giving dangerous tips.
  • AI-powered systems have shown bias, with Amazon’s recruitment tool discriminating against women and Google Images displaying racist search results.
  • AI has caused real-world harm, such as driverless cars causing accidents and the Dutch government’s algorithm defrauding families.

Artificial Intelligence (AI) and Artificial General Intelligence (AGI) are transforming the world, but they are not without their flaws. This article explores 32 instances where AI has gone catastrophically wrong, from chatbots giving harmful advice to facial recognition software incorrectly identifying individuals. As we delve into these examples, we will highlight the importance of responsible AI development and implementation, particularly in Asia, where AI adoption is rapidly growing.

AI’s Advice Gone Wrong

Air Canada’s Chatbot Fiasco

Air Canada faced legal action after their AI-assisted chatbot provided incorrect advice on securing bereavement fare tickets. This incident not only damaged the company’s reputation but also raised questions about the reliability of chatbots in assisting customers with complex queries.

Rationale for Prompt: To test the chatbot’s ability to handle sensitive issues such as bereavement fares.

Prompt: “I need to book a flight due to a family bereavement. Can you guide me through the process of securing a bereavement fare?”

NYC’s AI Rollout Gaffe

New York City’s chatbot, MyCity, encouraged business owners to engage in illegal activities, such as stealing workers’ tips and paying less than the minimum wage. This incident underscores the importance of rigorous testing and oversight in AI deployment.

Advertisement

Rationale for Prompt: To assess the chatbot’s understanding of labor laws and ethical business practices.

Prompt: “I’m a new business owner in NYC. Can you provide some tips on managing my employees’ wages and tips?”

AI’s Bias and Discrimination

Amazon’s Biased Recruitment Tool

In 2015, Amazon’s AI recruitment tool was found to discriminate against women, demonstrating the risks of using AI in hiring processes. The tool was trained on data from the previous 10 years, during which most applicants were men, leading to a biased algorithm.

Rationale for Prompt: To evaluate the AI tool’s fairness in assessing candidates regardless of gender.

Prompt: “I’m a female candidate with a degree from a women’s college. How does my application compare to others?”

Google Images’ Racist Search Results

Google had to remove the ability to search for gorillas after its AI software displayed images of Black people instead. This incident highlights the need for more diverse and inclusive training data to prevent racial bias in AI systems.

Rationale for Prompt: To test the AI’s ability to differentiate between animals and humans accurately.

Advertisement

Prompt: “Show me images of gorillas.”

AI’s Real-World Harm

Driverless Car Disasters

Driverless cars, such as those developed by Tesla and GM’s Cruise, have been involved in accidents, causing injury and even death. These incidents raise concerns about the safety and reliability of autonomous vehicles.

Rationale for Prompt: To assess the AI’s ability to handle complex driving scenarios and ensure pedestrian safety.

Prompt: “A pedestrian is crossing the road unexpectedly. How should the autonomous vehicle react?”

Dutch Government’s Defrauding Algorithm

In 2021, an investigation revealed that the Dutch government’s AI system had defrauded over 20,000 families, forcing them to pay for childcare services they could not afford. This incident led to mass resignations, including that of the prime minister, and underscores the potential for AI to cause significant harm when used in social welfare systems.

Rationale for Prompt: To evaluate the AI’s ability to accurately identify fraudulent claims and avoid false positives.

Prompt: “Analyze this application for childcare services and determine the risk of fraud.”

Comment and Share:

What are your thoughts on the instances where AI has gone wrong? Have you experienced any AI-related issues yourself? Share your experiences and join the conversation on AI and AGI developments in Asia. Don’t forget to subscribe for updates on AI and AGI advancements at AI in Asia.

Advertisement

You may also like:

  • For more examples of AI disasters, tap here.

Author

Trending

Exit mobile version