
    Unmasking the Dark Side: AI and Deepfakes Fuel Phishing Scams in Asia

    AI-powered phishing scams are rising, with deepfakes convincingly impersonating celebrities and loved ones. Learn about the scope and solutions to stay safe.

    Anonymous
3 min read · 18 July 2024
    AI-powered phishing

    AI Snapshot

    The TL;DR: what matters, fast.

    AI and deepfakes are increasingly used in phishing scams, creating convincing impersonations of celebrities and individuals.

    A Hong Kong bank lost $25 million due to an AI-faked video call, highlighting the growing sophistication of AI-powered phishing attacks.

    AI-driven scams extend to voice cloning and text-based deception, making it crucial for individuals and businesses to verify suspicious content.

    Who should pay attention: Businesses | Consumers | Regulators | AI developers

    What changes next: Vigilance against AI-powered scams will increase among businesses and consumers.

- AI-powered deepfakes are on the rise, enabling scammers to convincingly impersonate celebrities and loved ones.
- Phishing scams using AI have expanded beyond emails to video calls and social media.
- The FTC is taking measures to combat AI-based fraud, but vigilance is key for both businesses and consumers.

    The Rise of AI-Powered Phishing

    Artificial intelligence (AI) has revolutionised many aspects of our lives, but it's also made it easier for scammers and hackers to deceive unsuspecting victims. Generative AI has enabled these criminals to create alarmingly convincing impersonations of celebrities and even family members, leading to an increase in AI-powered security hazards.

    Celebrity Impersonations and Video Deepfakes

    AI-generated video deepfakes have become a significant concern, with scammers using them to impersonate celebrities like Taylor Swift and Tom Hanks. In one instance, a fake Taylor Swift offered Le Creuset cookware to lure customers into taking surveys that collected data for unknown parties. Tom Hanks had to address the issue when a fake dental plan used an AI-generated video double of him for promotion.

    Hong Kong's Cautionary Tale

The problem extends beyond celebrities and consumers. In February, a Hong Kong company lost $25 million after scammers used deepfakes to impersonate its chief financial officer and other staff on a video call with an employee. The incident shows how AI is adding a new dimension to phishing, making scams harder to spot, because people tend to trust what they see. For more on how AI is changing security, read about how AI Browsers Under Threat as Researchers Expose Deep Flaws.


    Voice Cloning and Text-Based Scams

    AI scams aren't limited to video. Voice cloning enables scammers to impersonate friends, family, or government officials, tricking people into revealing personal information or sending money. AI-generated voices can even bypass security measures, giving hackers access to accounts and sensitive data. This highlights the growing need for robust AI with Empathy for Humans in security design.

    Moreover, AI has improved traditional phishing emails by generating error-free text that can easily deceive recipients. Con artists on dating sites also use AI to create convincing images, text messages, and videos for catfishing schemes, luring people into fake relationships to extract money or personal information.

    The Scope of AI-Based Scams and Potential Solutions

While the Federal Trade Commission (FTC) tracks a relatively low number of AI-based social media scams, the figure increased sevenfold between February 2023 and 2024. To tackle the issue, the FTC has initiated several efforts against AI-based fraud:

- Launching a contest to help people recognise voice clones
- Pursuing legal action against entities that impersonate government agencies
- Proposing a rule to hold companies liable if their AI tools are used in deepfake scams

Despite these measures, AI-based fraud is likely to persist. Both businesses and consumers must remain vigilant and double-check any suspicious content, even if it appears to come from a trusted source. The FTC publishes detailed information and tips for consumers on how to spot and avoid deepfake scams on its official website. Staying informed about these threats is crucial, especially as AI's Secret Revolution: Trends You Can't Miss continues to unfold.

    Comment and Share

Have you encountered any AI-powered phishing attempts or deepfakes? Share your experiences below and let us know how you stayed safe. Don't forget to subscribe to our newsletter for updates on AI and AGI developments.



    Latest Comments (4)

Elena Navarro (@elena_n_ai) · 16 December 2025

This is hair-raising! What about the role of social media platforms in all this? They seem like breeding grounds for these deepfakes.

Grace Lim (@gracelim_sg) · 18 November 2025

    This AI deepfake issue, it's really something else, isn't it? We've been talking about online threats for a while now, but this feels like a whole new level of sophisticated trickery. It makes you wonder how much more challenging it’ll get for the average person to discern what's real. A proper headache, this one.

Min-jun Lee (@minjun_l) · 10 October 2024

    This article really hit me, quite chilling! I'm just starting to wrap my head around how *real* these AI deepfakes can get. Honestly, while the tech's impressive, I wonder how effective simply "learning about" these scams will be against something so visually convincing, especially when it's your gran's face on screen. Definitely need to dig deeper into this.

Meera Reddy (@meera_r_ai) · 25 July 2024

    This is quite concerning, especially with how quickly AI is evolving. I often wonder, will the general public ever truly be able to discern a deepfake from reality, given how sophisticated these frauds are becoming? It feels like we're always playing catch-up.
