AI in ASIA
[Image: A cinematic illustration of a translucent actor figure emerging from digital particles in a dark film studio, representing AI resurrection in cinema]

Val Kilmer Never Shot a Single Scene. His AI Did. And Asia Should Be Paying Attention.

A dead actor stars in a new film. His family said yes. But Asia's AI entertainment boom has no one to ask.

Intelligence Desk · 12 min read


Val Kilmer is dead. He died of pneumonia in April 2025, aged 65, after a decade-long battle with throat cancer that stripped him of the voice that once commanded screens from Tombstone to Heat. He never set foot on the set of As Deep as the Grave. He never delivered a line of dialogue. He never stood opposite Tom Felton or Abigail Breslin or Wes Studi. And yet, when the film finds a distributor later this year, audiences will watch him play Father Fintan, a Catholic priest and Native American spiritualist, across multiple stages of his life. Every frame of his performance was constructed by generative AI.

This is not a cameo. It is not a fleeting digital touch-up of the kind we saw in Top Gun: Maverick. It is a full, credited role for a man who was already gone when the cameras rolled. His family approved it. His estate was compensated. SAG-AFTRA guidelines were followed. And if you think this story is purely a Hollywood concern, you have not been paying attention to what is happening across Asia's entertainment industries, where the same technology is already reshaping who gets to perform, who gets to profit, and who gets to say no.

Director Coerte Voorhees first cast Kilmer in As Deep as the Grave (originally titled Canyon of the Dead) five years before the actor's death. The film, produced by First Line Films, tells the true story of Southwestern archaeologists Ann and Earl Morris, who excavated Canyon de Chelly in Arizona to trace the history of the Navajo people. Kilmer, who identified as part Native American, was drawn to the spiritual dimension of the project.

“My father was a deeply spiritual man and this story of discovery and enlightenment in the American Southwest really resonated with him. He always looked at emerging technologies with optimism as a tool to expand the possibilities of storytelling.” — Mercedes Kilmer, Daughter of Val Kilmer

But Kilmer's cancer, first diagnosed in 2014, made it impossible for him to perform. His voice, damaged by a tracheostomy, had already been reconstructed using AI by Sonantic (now acquired by Spotify) for his brief appearance in Top Gun: Maverick in 2022. For As Deep as the Grave, the filmmakers went further: they used state-of-the-art generative AI to reconstruct not just his voice but his face and physical presence, drawing on younger photographs provided by the family and footage from his final years.

“Despite the fact some people might call it controversial, this is what Val wanted.” — Coerte Voorhees, Director, As Deep as the Grave

The estate granted permission. It received compensation. The production says it followed SAG-AFTRA's collective bargaining agreement, which requires consent from an authorised representative when a performer's approval was not obtained before death. SAG-AFTRA itself has stated that “any use of digital replicas must be transparent, properly authorised and fully aligned with the rights of performers and their estates.”

On paper, every box has been ticked. In practice, the questions this film raises are far more uncomfortable than any checklist can resolve.

The Precedent Problem

What makes As Deep as the Grave significant is not the technology. Hollywood has been digitally altering actors for years, from the de-aged Robert De Niro in The Irishman to the posthumous Peter Cushing in Rogue One. What makes it significant is the scale of the absence. Kilmer did not perform a single scene that was later enhanced. The entire performance is synthetic. The AI is not augmenting an actor; it is replacing one.

This distinction matters. California's AB 1836, signed into law in 2024, expands posthumous right-of-publicity protections to cover digital replicas of voices and likenesses. New York has passed similar legislation. SAG-AFTRA fought for AI protections during the 2023 strikes and secured provisions in its 2024 collective bargaining agreement. In 2025, the union filed an unfair labour practice charge against Llama Productions over the use of an AI-generated James Earl Jones voice for Darth Vader in Fortnite, arguing that replicating a deceased performer's voice without proper bargaining violated member rights.

The legal scaffolding is being built. But law follows technology at a distance, and the gap is widening. If a family approves, an estate is compensated, and a union signs off, does that make a fully AI-generated performance ethical? What happens when the technology becomes cheap enough for any production to resurrect any actor without the resources or inclination to follow Hollywood's emerging rules?

That question is not hypothetical. It is already being answered, at speed, across Asia.

Asia's Digital Resurrection Is Already Here

South Korea's entertainment industry has been experimenting with digital resurrection and AI-generated performers for years, and the results are both impressive and unsettling. The late Song Hae, beloved host of Korea's iconic National Singing Contest, was brought back through deepfake technology in JTBC's Welcome to Samdal-ri. Park Yoon-bae, a 1980s drama star, was digitally resurrected in tvN's Chairman's People to interact with former castmates. Netflix's A Killer Paradox used AI to create a younger version of actor Son Suk-ku by merging childhood photographs onto another performer's face.

[Image: A cinematic illustration depicting the blurred boundary between human performance and AI-generated digital presence in Asian cinema]
Where Asia Diverges from Hollywood

The adoption is accelerating. According to Korea's 2024 Broadcasting and Video Industry White Paper, published by the Korea Creative Content Agency, generative AI adoption in the country's broadcasting and video sector jumped from 3.6% in the first half of 2023 to 16.4% in the second half. Culture critic Jung Duk-hyun has observed that deepfakes have become “routine” on Korean sets, particularly for de-aging actors in flashback sequences. But critic Kim Hern-sik warns that the technology “eliminates opportunities for new actors,” threatening employment for emerging talent.

The regulatory contrast between Hollywood and Asia is stark.

| Dimension | Hollywood (US) | Asia-Pacific |
| --- | --- | --- |
| Performer union AI protections | SAG-AFTRA CBA (2024) with digital replica consent rules | No equivalent collective bargaining framework in most markets |
| Posthumous likeness law | California AB 1836 (2024); New York digital replica bill | Patchwork; South Korea and Japan lack specific AI likeness statutes |
| Consent enforcement for deceased performers | Estate or union authorisation required | Varies; often unregulated or reliant on general personality rights |
| Deepfake-specific regulation | Federal proposals pending; state laws emerging | Malaysia and Indonesia banned Grok over deepfakes; China requires algorithm registration |
| Industry adoption rate (generative AI) | Growing but cautious post-strike | 16.4% of Korean broadcasters by late 2023; 40%+ of global deepfake startups in APAC |

The consent landscape in Asia is far less developed than in Hollywood. While SAG-AFTRA has spent years negotiating AI protections, most Asian markets lack equivalent collective bargaining frameworks. South Korea's Queen Who Crowns controversy exposed the stakes: actors Cha Joo-young and Lee E-dam discovered their faces had been digitally placed onto nude body doubles without proper disclosure. In Malaysia and Indonesia, deepfake misuse became severe enough to trigger a ban on Grok in early 2026.

Meanwhile, China's AI video tools are already reshaping how Asian films are made, with platforms like Kling offering capabilities that put Hollywood-grade visual effects within reach of independent studios. Netflix has signalled its own ambitions by acquiring Ben Affleck's AI film technology company, a move that will accelerate AI-driven production across its Asian content pipeline.

The virtual idol market tells the broader story. Asia-Pacific accounts for roughly 47% of the global virtual idol market, projected to reach $2.01 billion in 2026, driven by 68% adoption rates in Japan, South Korea, and China. These are not fringe experiments. They are the foundation of a new entertainment economy in which the line between human and synthetic performance is deliberately, profitably blurred.

By The Numbers

  • $2.01 billion: Projected size of the global virtual idol market in 2026, with Asia-Pacific holding 47% share (Business Research Insights)
  • 16.4%: Share of South Korean broadcasting and video businesses using generative AI by late 2023, up from 3.6% six months earlier (Korea Creative Content Agency)
  • $1.29 billion: Global deepfake AI market size in 2026, growing at 25.8% CAGR (Research and Markets)
  • 40%+: Share of global deepfake startups headquartered in Asia-Pacific (SNS Insider, 2025)
  • $4.3 billion: Asia-Pacific deepfake AI market projection for 2026, representing 37.6% of the global total (Polaris Market Research)

The Real Question Is Not Whether It Can Be Done

The technology will improve. The costs will fall. The incentives, in an industry where a recognisable face can be worth tens of millions at the box office, will only grow stronger. The question is not whether AI can reconstruct a dead actor convincingly enough to carry a feature film. It clearly can. The question is what guardrails we build before the capability outpaces the consent.

Val Kilmer's case is, in many respects, the best-case scenario. He knew about the project. His family approved. His estate was compensated. A union was involved. But the best-case scenario is rarely the one that defines an industry's trajectory. The cases that will define this era are the ones where consent is ambiguous, where estates are pressured, where performers in countries without SAG-AFTRA equivalents have no mechanism to say no, and where audiences cannot tell the difference between a performance that was given and one that was manufactured.

Asia's entertainment industry, which produces more content at higher volume than Hollywood and operates under a patchwork of regulatory frameworks, is where these tensions will play out most acutely. South Korea's deepfake verification methods are advancing, but the technology they are designed to detect is advancing faster. Japan's virtual idol economy is booming. China's AI video platforms are democratising capabilities that were, until recently, the exclusive domain of major studios. And across the region, performers, particularly those without the leverage of Hollywood star power, are discovering that their faces, voices, and movements can be captured, replicated, and deployed in ways they never imagined.

The AIinASIA View: Val Kilmer's AI resurrection in As Deep as the Grave is a landmark moment, but it is the easy case: family consent, estate compensation, union oversight. The harder cases are already unfolding across Asia, where deepfake technology is routine in Korean dramas, virtual idols dominate East Asian entertainment, and regulatory frameworks lag far behind the capability curve. The real test is not whether Hollywood can resurrect a star ethically. It is whether Asia's booming AI entertainment economy can build consent and compensation frameworks before the technology makes them irrelevant.

Frequently Asked Questions

Did Val Kilmer's family approve the use of AI to recreate his performance?

Yes. Kilmer's daughter, Mercedes Kilmer, publicly endorsed the project, stating that her father was optimistic about emerging technologies. The Kilmer estate granted permission for the digital replication and received compensation. The production followed SAG-AFTRA guidelines for the use of digital replicas of deceased performers.

What AI technology was used to recreate Val Kilmer?

The specific platform has not been publicly disclosed. The production used generative AI to reconstruct Kilmer's face, physical presence, and voice, drawing on younger photographs provided by the family and footage from his final years. Kilmer's voice had previously been reconstructed by Sonantic for Top Gun: Maverick in 2022.

How does this differ from previous digital recreations in film?

Previous digital recreations, such as Peter Cushing in Rogue One or a de-aged Robert De Niro in The Irishman, enhanced or modified existing performances. In As Deep as the Grave, Kilmer never performed a single scene. The entire performance is AI-generated, making it the first major feature film role created entirely without the actor's physical participation.

Why should Asia care about this story?

Asia-Pacific hosts more than 40% of global deepfake startups, accounts for 47% of the virtual idol market, and is seeing rapid adoption of AI in entertainment production. South Korea, Japan, and China are already using AI-generated performers in dramas, films, and live concerts, but most Asian markets lack the collective bargaining frameworks that provided guardrails in the Kilmer case. The regulatory gap makes this a particularly urgent issue for the region.

What Happens When the Ghost Becomes the Star

There is a scene in Top Gun: Maverick where Kilmer's Iceman communicates mostly through text on a screen, his voice ravaged by the same cancer that took the real man's. It was a moment of devastating honesty: the technology existed to smooth over his condition entirely, and the filmmakers chose not to. They let the audience see what time and illness had done.

As Deep as the Grave makes the opposite choice. It uses AI to show Kilmer as he was, as he might have been, as he never will be again. Whether that constitutes tribute or trespass depends on where you draw the line between honouring an actor's legacy and manufacturing a performance he never gave. What is certain is that the line, once drawn, will be crossed again and again, in Hollywood and across Asia, by studios and streamers and independent producers who now have the tools to bring anyone back, for any role, at any time. Matthew McConaughey's decision to trademark his own catchphrases against AI misuse starts to look less like vanity and more like survival.

The ghost is in the machine. The question is who holds the keys.
