TL;DR:
- AI-powered deepfakes are on the rise, enabling scammers to convincingly impersonate celebrities and loved ones.
- Phishing scams using AI have expanded beyond emails to video calls and social media.
- The FTC is taking measures to combat AI-based fraud, but vigilance is key for both businesses and consumers.
The Rise of AI-Powered Phishing
Artificial intelligence (AI) has revolutionised many aspects of our lives, but it has also made it easier for scammers and hackers to deceive unsuspecting victims. Generative AI lets criminals create alarmingly convincing impersonations of celebrities and even family members, fuelling a rise in AI-powered scams.
Celebrity Impersonations and Video Deepfakes
AI-generated video deepfakes have become a significant concern, with scammers using them to impersonate celebrities like Taylor Swift and Tom Hanks. In one instance, a deepfaked Taylor Swift appeared to give away Le Creuset cookware, luring viewers into surveys that collected their personal data for unknown parties. Tom Hanks had to warn fans publicly after a dental plan promoted itself with an AI-generated video double of him.
Hong Kong’s Cautionary Tale
The problem extends beyond celebrities and consumers. In February 2024, a finance worker at a multinational firm in Hong Kong was duped into transferring $25 million after scammers staged a video call populated with deepfaked colleagues, including the company's chief financial officer. The incident shows how AI adds a new dimension to phishing: people tend to trust what they see, which makes these scams much harder to spot.
Voice Cloning and Text-Based Scams
AI scams aren’t limited to video. Voice cloning enables scammers to impersonate friends, family, or government officials, tricking people into revealing personal information or sending money. AI-generated voices can even bypass security measures, giving hackers access to accounts and sensitive data.
Moreover, AI has improved traditional phishing emails by generating error-free text that can easily deceive recipients. Con artists on dating sites also use AI to create convincing images, text messages, and videos for catfishing schemes, luring people into fake relationships to extract money or personal information.
The Scope of AI-Based Scams and Potential Solutions
Although the Federal Trade Commission (FTC) tracks a relatively small number of AI-based social media scams, that figure grew sevenfold between February 2023 and February 2024. To tackle the problem, the FTC has launched several efforts against AI-based fraud:
- Launching a contest to help people recognise voice clones
- Pursuing legal action against entities that impersonate government agencies
- Proposing a rule to hold companies liable if their AI tools are used in deepfake scams
Despite these measures, AI-based fraud is likely to persist. Both businesses and consumers must remain vigilant and double-check any suspicious content, even if it appears to come from a trusted source.
Have you encountered any AI-powered phishing attempts or deepfakes? Share your experiences below and let us know how you stayed safe. Don’t forget to subscribe for updates on AI and AGI developments.