

‘Never Say Goodbye’: Can AI Bring the Dead Back to Life?

This article delves into the fascinating and controversial world of AI resurrections, exploring how technology is changing the way we cope with grief.


TL;DR:

  • AI is creating digital ‘resurrections’ of the dead, allowing people to interact with them.
  • Projects like Replika and StoryFile use AI to mimic the deceased’s communication style.
  • Experts debate the psychological and ethical implications of these technologies.
  • Privacy and environmental concerns are significant issues with AI resurrections.

In a world where artificial intelligence can resurrect the dead, grief takes on a new dimension. From Canadian singer Drake’s use of AI-generated Tupac Shakur vocals to Indian politicians addressing crowds years after their passing, technology is blurring the lines between life and death. But beyond their uncanny pull in entertainment and politics, AI “zombies” might soon become a reality for people reeling from the loss of loved ones, through a series of pathbreaking, but potentially controversial, initiatives.

What are AI ‘Resurrections’ of People?

Over the past few years, AI projects around the world have created digital “resurrections” of individuals who have passed away, allowing friends and relatives to converse with them. Typically, users provide the AI tool with information about the deceased. This could include text messages and emails or simply be answers to personality-based questions. The AI tool then processes that data to talk to the user as if it were the deceased.

One of the most popular projects in this space is Replika – a chatbot that can mimic people’s texting styles. Other companies, however, now also allow you to see a video of the dead person as you talk to them. For example, Los Angeles-based StoryFile uses AI to allow people to talk at their own funerals. Before passing, a person can record a video sharing their life story and thoughts. During the funeral, attendees can ask questions and AI technology will select relevant responses from the prerecorded video.

In June, US-based Eternos also made headlines for creating an AI-powered digital afterlife of a person. Launched earlier this year, the project allowed 83-year-old Michael Bommer to leave behind a digital version of himself that his family could continue to interact with.

Do These Projects Help People?

When a South Korean mother reunited with an AI recreation of her dead daughter in virtual reality, a video of the emotional encounter in 2020 sparked an intense debate online about whether such technology helps or hurts its users. Developers of such projects point to the users’ agency, and say that it addresses a deeper suffering.


Jason Rohrer, founder of Project December, which also uses AI to simulate conversations with the dead, said that most users are typically going through an “unusual level of trauma and grief” and see the tool as a way to help cope.

“A lot of these people who want to use Project December in this way are willing to try anything because their grief is so insurmountable and so painful to them.”

The project allows users to chat with AI recreations of known public figures and also with individuals that users may know personally. People who choose to use the service for simulating conversation with the dead often discover that it helps them find closure, Rohrer said. The bots allow them to express words left unsaid to loved ones who died unexpectedly, he added.

Eternos’s founder, Robert LoCasio, said that he developed the company to capture people’s life stories and allow their loved ones to move forward. Bommer, his former colleague who passed away in June, wanted to leave behind a digital legacy exclusively for his family, said LoCasio.

“I spoke with [Bommer] just days before he passed away and he said, just remember, this was for me. I don’t know if they’d use this in the future, but this was important to me,” said LoCasio.

What are the Pitfalls of This Technology?

Some experts and observers are more wary of AI resurrections, questioning whether deeply grieving people can really make the informed decision to use it, and warning about its adverse psychological effects.

“The biggest concern that I have as a clinician is that mourning is actually very important. It’s an important part of development that we are able to acknowledge the missing of another person,” said Alessandra Lemma, consultant at the Anna Freud National Centre for Children and Families.

Prolonged use could keep people from coming to terms with the absence of the other person, leaving them in a state of “limbo”, Lemma warned. Indeed, one AI service has marketed a perpetual connection with the deceased person as a key feature.


“Welcome to YOV (You, Only Virtual), the AI startup pioneering advanced digital communications so that we Never Have to Say Goodbye to those we love,” read the company’s website, before it was recently updated.

Rohrer said that his grief bot has an “in-built” limiting factor: users pay $10 for a limited conversation. The fee buys time on a supercomputer, with each response varying in computational cost. This means $10 doesn’t guarantee a fixed number of responses, but can allow for one to two hours of conversation. As the time is about to lapse, users are sent a notification and can say their final goodbyes. Several other AI-generated conversational services also charge a fee for use.

Lemma, who has researched the psychological impact of grief bots, says that while she worries about the prospect of them being used outside a therapeutic context, they could be used safely as an adjunct to therapy with a trained professional. Studies around the world are also exploring the potential for AI to deliver mental health counselling, particularly through individualised conversational tools.

Are Such Tools Unnatural?

These services may appear to be straight out of a Black Mirror episode. But supporters of this technology argue that the digital age is simply ushering in new ways of preserving life stories, and potentially filling a void left by the erosion of traditional family storytelling practices.

“In the olden days, if a parent knew they were dying, they would leave boxes full of things that they might want to pass on to a child or a book,” said Lemma. “So, this might be the 21st-century version of that, which is then passed on and is created by the parents in anticipation of their passing.”

LoCasio at Eternos agrees.

“The ability for a human to tell the stories of their life, and pass those along to their friends and family, is actually the most natural thing,” he said.

Are AI Resurrection Services Safe and Private?

Experts and studies alike have expressed concerns that such services may fail to keep data private. Personal information or data such as text messages shared with these services could potentially be accessed by third parties. Even if a firm says it will keep data private when someone first signs up, common revisions to terms and conditions, as well as possible changes in company ownership mean that privacy cannot be guaranteed, cautioned Renee Richardson Gosline, senior lecturer at the MIT Sloan School of Management.


Both Rohrer and LoCasio insisted that privacy was at the heart of their projects. Rohrer can only view conversations when users file a customer support request, while LoCasio’s Eternos limits access to the digital legacy to authorised relatives. However, both agreed that such concerns could potentially manifest in the case of tech giants or for-profit companies.

One big worry is that companies may use AI resurrections to customise how they market themselves to users: an advertisement in the voice of a loved one, or a product nudge in their texting style.

“When you’re doing that with people who are vulnerable, what you’ve created is a pseudo-endorsement based on someone who never agreed to do such a thing. So it really is a problem with regard to agency and asymmetry of power,” said Gosline.

Are There Any Other Concerns Over AI Chatbots?

That these tools are fundamentally catering to a market of people dealing with grief in itself makes them risky, suggested Gosline – especially when Big Tech companies enter the game.

“In a culture of tech companies which is often described as ‘move fast and break things’, we ought to be concerned because what’s typically broken first are the things of the vulnerable people,” said Gosline. “And I’m hard-pressed to think of people who are more vulnerable than those who are grieving.”

Experts have raised concerns about the ethics of creating a digital resurrection of the dead, particularly in cases where they have not consented to it and users feed AI the data. The environmental impact of AI-powered tools and chatbots is also a growing concern, particularly when involving large language models (LLMs) – systems trained to understand and generate human-like text, which power applications like chatbots.

These systems need giant data centres that emit high levels of carbon and use large volumes of water for cooling, in addition to creating e-waste due to frequent hardware upgrades. A report in early July from Google showed that the company was far behind its ambitious net-zero goals, owing to the demand AI was putting on its data centres.


Gosline said that she understands that there is no perfect programme and that many users of such AI chatbots would do anything to reconnect with a deceased loved one. But it’s on leaders and scientists to be more thoughtful about the kind of world they want to create, she said. Fundamentally, she said, they need to ask themselves one question:

“Do we need this?”

Final Thoughts: The Future of AI and Grief

As AI continues to evolve, so too will its applications in helping people cope with grief. While the technology offers unprecedented opportunities for connection and closure, it also raises significant ethical, psychological, and environmental concerns. It is crucial for developers and users alike to approach these tools with caution and consideration, ensuring that they are used in ways that truly benefit those who are grieving.

Comment and Share:

What do you think about the future of AI and its role in helping people cope with grief? Have you or someone you know used AI to connect with a lost loved one? Share your experiences and thoughts in the comments below. And don’t forget to subscribe for updates on AI and AGI developments.


Discover more from AIinASIA

Subscribe to get the latest posts sent to your email.


Adrian’s Arena: AI and the Global Shift – What Trump’s 2024 Victory Means for AI in Asia

With Trump’s 2024 re-election, Asian nations might push for self-reliant AI ecosystems, regional partnerships, and stronger privacy standards.


TL;DR

  • Donald Trump’s 2024 presidential win could reshape AI development in Asia by prompting self-reliant AI ecosystems, more regional partnerships, and increased privacy standards.
  • Asian nations may accelerate AI innovation and talent development to reduce reliance on U.S. tech as they anticipate policy shifts under the new administration.
  • Asian companies are positioned to thrive, offering privacy-compliant, localised AI insights that align with Asia’s unique market dynamics during this new Trump era.

What now for AI?

The re-election of Donald Trump to the U.S. presidency is sure to have profound global impacts, particularly in areas like artificial intelligence (AI). In Asia, where AI adoption is already soaring, Trump’s approach to foreign policy, technology, and economic partnerships may drive significant shifts in both public and private AI ventures.

This article explores how the changing political landscape could reshape AI in Asia and how businesses are poised to navigate and leverage these shifts.


AI Regulation and Innovation: A Push for Autonomy

Trump’s leadership may spur a greater focus on AI autonomy in Asia, encouraging countries to develop homegrown AI solutions across various industries. For example, healthcare data analytics in Singapore, fintech solutions in India, and consumer insights platforms in Japan could see accelerated development as these nations prioritise self-reliance.

Several companies in Asia are well-positioned to contribute, offering privacy-compliant AI insights that help brands tailor messaging without relying on U.S.-based tech giants.

Trade Policies and Tech Partnerships: Redrawing Lines

With Trump’s trade policies likely to maintain a “protectionist” edge, tech partnerships across the Pacific may become more complex, leading Asia’s leading economies to bolster regional AI collaborations. This may foster tighter partnerships within Asia, where companies can provide high-impact AI solutions tailored to local consumer behaviours and trends.

Research Funding and Education: A New Wave of Asian Talent

The expected restrictions on U.S. visas for Asian students and researchers could spark a wave of investment in AI education and talent retention across Asia. AI companies can support this talent surge by offering real-world, Asia-specific AI applications, from data analytics to customer insights and digital advertising.

Practical programmes in Asia, especially in Singapore, offer hands-on AI training that equips professionals with critical skills for driving regional innovation, positioning Asia as a powerhouse for AI expertise.

AI-Powered Defense and Cybersecurity: Strengthening Regional Security

As Asian nations fortify their defences in response to Trump’s renewed focus on military alliances, AI-driven cybersecurity solutions are expected to see considerable growth. AI companies in Asia are poised to address emerging threats with precision and speed.

For instance, Asian technology could support national cybersecurity initiatives by identifying threat patterns in real-time across public data sources, providing governments and enterprises with actionable insights for safeguarding critical infrastructure.

Privacy and Data Ownership: Asia’s Standards vs. the U.S. Approach

Asia’s data governance standards are set to diverge further from those in the U.S., especially with Trump’s preference for lighter tech regulation. This shift aligns with ad tech’s approach to delivering privacy-compliant audience insights, offering Asia-based companies a way to engage their customers effectively without compromising data security.


Impact on the AI Talent Pipeline: Challenges and Opportunities

Trump’s immigration policies could impact the AI talent pipeline to the U.S., pushing many skilled AI professionals to remain in Asia. Companies can leverage this shift by tapping into local AI talent for projects that require regional expertise.

By prioritising local talent, companies can ensure that solutions align with Asia’s unique market demands, from local consumer insights to culturally resonant AI-driven advertising.

As a result, Asian companies and their partners can benefit from deeper market understanding, making their campaigns more impactful across Asia.

A Shift Towards Pan-Asian AI Standards

With Trump’s policies creating a potential divide in AI development approaches, Asian countries may push for unified AI standards within the region. By aligning AI governance across economies, Asia could build a formidable framework that encourages innovation while ensuring ethical usage and robust privacy protections.

Countries like Japan, South Korea, and Singapore are already leaders in setting high AI standards, and an Asia-wide approach could help establish a distinctive identity in the global AI community.

This alignment would also reduce friction for companies operating across multiple Asian markets, fostering an interconnected ecosystem that accelerates growth and adaptability.

The Rise of Localised AI Applications

As trade and regulatory landscapes shift, there’s an increased incentive for Asian companies to design AI solutions that cater to local languages, cultural nuances, and consumer behaviours. Localisation has always been a critical factor for success in Asia, and AI is no exception.

From natural language processing that understands regional dialects to AI-driven marketing insights that resonate with unique consumer mindsets, tailored AI applications could see a significant boost.

This emphasis on localisation not only enhances user experience but also ensures that AI remains relevant and effective in each unique market across the continent.

Conclusion: A New Era for AI in Asia

The Trump presidency may catalyse a new chapter for AI in Asia. As Asian nations brace for potential shifts in trade and technology policies, they are well-positioned to accelerate regional AI innovation, self-sufficiency, and collaboration.

By investing in local talent, fostering privacy-compliant solutions, and collaborating across the region, companies like SQREEM are driving Asia’s transformation into a global AI powerhouse.

While the future may be uncertain under a second Trump era, we know at least it won’t be boring for the AI industry!

Join the Conversation

As AI in Asia surges towards autonomy and privacy-first innovation, will Trump’s policies drive the region to outperform the U.S. in tech advancements? Or are we on the cusp of a global AI divide? Please share your thoughts and don’t forget to subscribe for updates on AI and AGI developments.


Author

  • Adrian Watkins (Guest Contributor)

    Adrian is an AI, marketing, and technology strategist based in Asia, with over 25 years of experience in the region. Originally from the UK, he has worked with some of the world’s largest tech companies and successfully built and sold several tech businesses. Currently, Adrian leads commercial strategy and negotiations at one of ASEAN’s largest AI companies. Driven by a passion to empower startups and small businesses, he dedicates his spare time to helping them boost performance and efficiency by embracing AI tools. His expertise spans growth and strategy, sales and marketing, go-to-market strategy, AI integration, startup mentoring, and investments.




Protect Your Writing from AI Bots: A Simple Guide

This article explains how to protect your writing from AI bots using the robots.txt file, and discusses the copyright issues surrounding AI models.


TL;DR:

  • AI models like ChatGPT use vast amounts of text, often without permission.
  • The New York Times has sued OpenAI for copyright infringement.
  • You can protect your writing by editing your robots.txt file.

The Rise of AI and Its Hunger for Words

Artificial Intelligence (AI) is transforming the world, but it comes with challenges. AI models like ChatGPT require enormous amounts of text to train. For instance, the first version of ChatGPT was trained on about 300 billion words. That’s equivalent to writing a thousand words a day for over 800,000 years!

But where does all this text come from? Often, it’s scraped from the internet without permission, raising serious copyright concerns.

The Case of The New York Times vs. OpenAI

In a high-profile case, The New York Times sued OpenAI, the company behind ChatGPT, for copyright infringement. The lawsuit alleges that OpenAI scraped millions of articles from The New York Times and used them to train its AI models. Sometimes, these models even reproduce chunks of text verbatim.

“OpenAI made three hundred million in August and expects to hit $3.7 billion this year.” – The New York Times

This raises a crucial question: How would you feel if AI models were using your writing without permission?

The Looming Content Crisis

AI companies face a potential content crisis. A study by Epoch AI suggests that AI models could run out of human-generated content as early as 2026. This could lead to stagnation, as AI models need fresh content to keep improving.


“The AI field might face challenges in maintaining its current pace of progress once it drains the reserves of human-generated writing.” – Tamay Besiroglu, author of the Epoch AI study

Protecting Your Writing: The robots.txt File

So, how can you protect your writing? The solution lies in a simple text file called robots.txt. This file tells robots (including AI bots) what they can and can’t access on your website.

Here’s how it works:

  1. User-agent: The name of the robot, for example ‘GPTBot’ for ChatGPT’s crawler.
  2. Disallow: Tells the bot which paths it may not access.
  3. The slash (/): Matches the whole website or account.

So, if you want to block ChatGPT from accessing your writing, you would add this to your robots.txt file:

User-agent: GPTBot
Disallow: /

How to Edit Your robots.txt File

If you have your own website, you can edit the robots.txt file to block AI bots.

Here’s how:

  • Using the Yoast SEO plugin: Go to Yoast > Tools > File Editor.
  • Using FTP access: The robots.txt file is in the root directory.
  • Using the WP Robots Txt plugin: This is a simple, non-technical solution. Just go to Plugins > Add New, then type in ‘WP Robots Txt’ and click install.

Once you’re in the robots.txt file, copy and paste the following to block common AI bots:

User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Omgilibot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Claude-Web
Disallow: /
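You can sanity-check the finished file with Python’s standard-library `urllib.robotparser`, which interprets robots.txt the same way well-behaved crawlers do. A minimal sketch (the rules mirror the block above; `example.com` is a placeholder for your own site):

```python
from urllib.robotparser import RobotFileParser

# A few of the directives from the block above, as robots.txt lines.
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Listed bots may not fetch any page; bots with no matching entry
# (and no "User-agent: *" fallback) are still allowed.
print(parser.can_fetch("GPTBot", "https://example.com/post"))        # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/post"))  # True
```

Keep in mind that robots.txt is advisory: compliant crawlers honour it, but it is not an enforcement mechanism.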

The Common Crawl Dilemma

Common Crawl is a non-profit organisation that creates a copy of the internet for research and analysis. Unfortunately, OpenAI used Common Crawl data to train its AI models. If you want to block Common Crawl, add this to your robots.txt file:

User-agent: CCBot
Disallow: /
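If you manage your site over SSH or FTP rather than through a plugin, a short script can append a missing block without disturbing your existing directives. A minimal sketch, assuming the robots.txt file sits in the current working directory and using GPTBot as the example bot to add:

```python
from pathlib import Path

# Hypothetical location of your site's robots.txt; adjust for your server.
robots_path = Path("robots.txt")

# The block to add if it is not already present.
block_rules = "User-agent: GPTBot\nDisallow: /\n"

# Read whatever is there now (an empty string if the file doesn't exist yet).
existing = robots_path.read_text() if robots_path.exists() else ""

if "GPTBot" not in existing:
    # Keep existing directives intact and append the new block after them.
    updated = (existing.rstrip() + "\n\n" + block_rules).lstrip()
    robots_path.write_text(updated)
```

Running it a second time is harmless: the `in` check skips the append once the bot is already listed.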

The Future of AI and Copyright Law

The future of AI and copyright law is uncertain. Until the laws change, the best way to protect your writing is to block AI bots using the robots.txt file.

“Until they change copyright laws and intellectual property laws and give the rights to he with the most money — your words are yours.”

Comment and Share:

How do you feel about AI models using your writing without permission? Have you checked your robots.txt file? Share your thoughts and experiences below. And don’t forget to subscribe for updates on AI and AGI developments!




AI at the Polls: Is Technology Steering the 2024 US Election?

As Americans cast their votes tomorrow, artificial intelligence will play a quiet but powerful role behind the scenes.


TL;DR:

  • Campaign ads, social media feeds, and even “news” popping up in swing states are being shaped by AI’s invisible hand
  • Campaigns in 2024 aren’t just reaching voters; they’re diving deep into our digital footprints
  • While AI brings campaigns closer to voters, it also makes it easier than ever to spread misinformation

A New Political Battleground—Inside the AI-Powered Election

As Americans cast their votes tomorrow, artificial intelligence will play a quiet but powerful role behind the scenes. Campaign ads, social media feeds, and even “news” popping up in swing states are being shaped by AI’s invisible hand. This isn’t just the next step in election tech; it’s a dramatic leap that could change the game forever. Is AI enhancing democracy, or are we giving it the keys to the whole democratic car?

1. Supercharging Campaigns: Microtargeting to the Extreme

Let’s face it—if you feel like your social media feeds are eerily personal, that’s not a coincidence. Campaigns in 2024 aren’t just reaching voters; they’re diving deep into our digital footprints to send messages so tailored they feel like personal letters. Thanks to AI, campaigns can slice the electorate into precise segments, tapping into anxieties, interests, and even specific local issues.

In battleground states like Arizona and Pennsylvania, this tech-driven targeting reaches a fever pitch. AI sifts through oceans of data—social media interactions, browsing habits, even purchase history—to craft ads that connect directly with you, personally.

“Campaigns are increasingly leveraging sophisticated machine learning algorithms to analyse vast quantities of voter data, refining their strategies with pinpoint accuracy,” notes MIT Technology Review.

With AI knowing so much, it raises an interesting (if slightly chilling) question: where’s the line between effective campaigning and outright manipulation?


2. The Double-Edged Sword: AI, Deepfakes, and Digital Misinformation

Here’s the darker side. While AI brings campaigns closer to voters, it also makes it easier than ever to spread misinformation. AI-generated deepfakes—fake videos that look so real you wouldn’t know they’re fake—have added a surreal twist to this election. Imagine seeing a video of a candidate saying something outrageous… and then realising it never actually happened.

“Deepfakes have made the spread of disinformation much easier and more convincing, raising concerns about the future of truth in politics,” the Brookings Institution warns.

AI’s power to create convincing fakes isn’t just a technical marvel; it’s a fundamental threat to truth in politics. Without strict regulations or ways to fact-check in real-time, we’re left wondering how many people will cast their vote based on a lie.

3. Predictive Polling: AI, Sentiment Analysis, and the All-Seeing Eye

If you thought AI was only influencing what you see online, think again. Polling has evolved far beyond traditional methods. This election, campaigns are using AI-driven sentiment analysis to tap into public moods in real time, keeping a pulse on issues that resonate with voters minute by minute.

“Sentiment analysis enables campaigns to see beyond traditional polling, observing shifts in public mood and identifying emerging concerns as they happen,” reports the Pew Research Center.


Let’s say economic concerns are heating up in Georgia; Trump’s team could amplify ads focusing on job growth in just hours. Or Harris’s camp could home in on climate change in Michigan based on AI-driven insights from yesterday’s online conversations. This real-time fine-tuning isn’t just impressive—it’s a little mind-bending. Can polls really capture the pulse of the nation, or are we just seeing what AI’s algorithms want us to?

4. Mobilising the Masses: AI Nudges and Digital Persuasion

Getting people to the polls has always been crucial, and AI’s here to make sure more people than ever get nudged, reminded, and maybe even guilt-tripped into voting. AI-driven models predict not only who’s likely to vote but also who might need a little extra encouragement. Campaigns can then send targeted texts, emails, or even pop up on your social feed reminding you to “make your voice heard.”

The Atlantic remarks on AI’s power in mobilisation, stating, “AI has transformed voter outreach into an exact science, enabling campaigns to efficiently target and mobilise segments of the electorate that might otherwise stay home”.

For instance, Harris’s campaign has deployed AI to boost turnout among younger voters in key states, while Trump’s team uses it to rally dedicated supporters in traditionally red zones. AI doesn’t just follow you online; it’s practically waiting outside your door with a “Don’t forget to vote” sign. This kind of outreach raises a fascinating question about voter autonomy—are we freely deciding to vote, or are we being nudged by an algorithm?

5. Navigating the Ethical Minefield: Can Democracy Keep Up?

Here’s where it all gets tricky. While AI offers stunning capabilities for reaching, engaging, and mobilising voters, it also opens up new doors for potential misuse. From deepfakes to ultra-targeted political ads, AI is testing the limits of what’s fair game in political campaigns.


With regulations still trying to catch up, we’re left with a significant blind spot.

“Current frameworks for AI regulation are woefully inadequate, leaving a critical gap in safeguarding electoral processes,” states the Harvard Political Review.

AI has handed campaigns a powerful toolkit, but with great power comes… well, you know the rest. Without real oversight, there’s a real risk of crossing ethical lines, leaving voters questioning whether their choices are truly their own or just the echoes of an algorithm.

A Glimpse into Asia’s Future?

As AI’s influence in US elections becomes clear, Asia’s political landscape might not be far behind. In a region where social media is booming and governments increasingly leverage AI for everything from citizen services to surveillance, the potential for AI-driven election strategies is immense. Imagine a world where voter preferences in Tokyo, Jakarta, or Delhi are meticulously profiled, and campaign ads are hyper-personalised to every demographic, language, and cultural nuance.

But here’s the question for Asia: with AI’s rapid adoption and limited oversight, who will control this powerful tool—governments, political parties, or the people? The US election offers a glimpse of how AI can shape democracy, but will Asia be able to harness this power responsibly, or could it open doors to unprecedented political manipulation? The stakes are high, and the path ahead remains uncharted.

Join the Conversation

How do you think AI will impact elections in Asia? Will it drive democracy forward or lead to new challenges in political manipulation? Leave a comment or subscribe for AI in Asia updates.


