
‘Never Say Goodbye’: Can AI Bring the Dead Back to Life?

This article delves into the fascinating and controversial world of AI resurrections, exploring how technology is changing the way we cope with grief.

TL;DR:

  • AI is creating digital ‘resurrections’ of the dead, allowing people to interact with them.
  • Projects like Replika and StoryFile use AI to mimic the deceased’s communication style.
  • Experts debate the psychological and ethical implications of these technologies.
  • Privacy and environmental concerns are significant issues with AI resurrections.

In a world where artificial intelligence can resurrect the dead, grief takes on a new dimension. From Canadian singer Drake’s use of AI-generated Tupac Shakur vocals to Indian politicians addressing crowds years after their passing, technology is blurring the lines between life and death. But beyond their uncanny pull in entertainment and politics, AI “zombies” might soon become a reality for people reeling from the loss of loved ones, through a series of pathbreaking, but potentially controversial, initiatives.

What are AI ‘Resurrections’ of People?

Over the past few years, AI projects around the world have created digital “resurrections” of individuals who have passed away, allowing friends and relatives to converse with them. Typically, users provide the AI tool with information about the deceased. This could include text messages and emails or simply be answers to personality-based questions. The AI tool then processes that data to talk to the user as if it were the deceased.
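
In practice the mechanics vary from service to service, but the persona-building step usually amounts to conditioning a language model on the deceased's own words. The sketch below is a minimal, hypothetical illustration of that idea; the chat_completion function is a stand-in placeholder, not any particular vendor's API, and the sample data is invented.

```python
# Minimal sketch of how a "grief bot" persona might be assembled.
# chat_completion is a hypothetical placeholder for a hosted language model.

from typing import List


def build_persona_prompt(name: str, sample_messages: List[str], traits: List[str]) -> str:
    """Combine the deceased's own words and personality notes into a system prompt."""
    samples = "\n".join(f"- {m}" for m in sample_messages)
    trait_list = ", ".join(traits)
    return (
        f"You are role-playing {name}. Mimic their tone and phrasing.\n"
        f"Personality notes: {trait_list}\n"
        f"Examples of how {name} wrote:\n{samples}"
    )


def chat_completion(system_prompt: str, user_message: str) -> str:
    # Placeholder: a real service would send the prompt and message to an LLM here.
    return f"[model reply conditioned on persona prompt to: {user_message!r}]"


if __name__ == "__main__":
    prompt = build_persona_prompt(
        name="Alex",
        sample_messages=["See you at dinner, kiddo x", "Don't forget your umbrella!"],
        traits=["warm", "teasing", "signs off with 'x'"],
    )
    print(chat_completion(prompt, "I miss our Sunday walks."))
```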

One of the most popular projects in this space is Replika – a chatbot that can mimic people’s texting styles. Other companies, however, now also allow you to see a video of the dead person as you talk to them. For example, Los Angeles-based StoryFile uses AI to allow people to talk at their own funerals. Before passing, a person can record a video sharing their life story and thoughts. During the funeral, attendees can ask questions and AI technology will select relevant responses from the prerecorded video.
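
The selection step StoryFile describes is essentially a retrieval problem: match a mourner's question to the closest prerecorded answer. Here is an illustrative sketch using plain bag-of-words cosine similarity; the clip names and transcripts are invented, and a production system would presumably use far richer matching.

```python
# Toy retrieval step: pick the prerecorded clip whose transcript best matches a question.

from collections import Counter
import math

CLIPS = {
    "childhood.mp4": "I grew up on a farm and spent summers fishing with my brother.",
    "career.mp4": "I worked thirty years as a schoolteacher and loved every class.",
    "advice.mp4": "Be kind, save a little money, and call your mother more often.",
}


def cosine(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def best_clip(question: str) -> str:
    q = Counter(question.lower().split())
    scores = {clip: cosine(q, Counter(text.lower().split())) for clip, text in CLIPS.items()}
    return max(scores, key=scores.get)


print(best_clip("What advice would you give your grandchildren?"))  # -> advice.mp4
```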

In June, US-based Eternos also made headlines for creating an AI-powered digital afterlife of a person. Launched earlier this year, the project allowed 83-year-old Michael Bommer to leave behind a digital version of himself that his family can continue to interact with.

Do These Projects Help People?

When a South Korean mother was reunited with an AI recreation of her dead daughter in virtual reality in 2020, a video of the emotional encounter sparked an intense debate online about whether such technology helps or hurts its users. Developers of these projects point to users’ agency and say the tools address a deeper suffering.

Jason Rohrer, founder of Project December, which also uses AI to simulate conversations with the dead, said that most users are typically going through an “unusual level of trauma and grief” and see the tool as a way to help them cope.

“A lot of these people who want to use Project December in this way are willing to try anything because their grief is so insurmountable and so painful to them.”

The project allows users to chat with AI recreations of well-known public figures as well as with individuals the users knew personally. People who use the service to simulate conversations with the dead often find that it helps them reach closure, Rohrer said. The bots let them say words left unsaid to loved ones who died unexpectedly, he added.

Eternos’s founder, Robert LoCasio, said that he developed the company to capture people’s life stories and allow their loved ones to move forward. Bommer, his former colleague who passed away in June, wanted to leave behind a digital legacy exclusively for his family, said LoCasio.

“I spoke with [Bommer] just days before he passed away and he said, just remember, this was for me. I don’t know if they’d use this in the future, but this was important to me,” said LoCasio.

What are the Pitfalls of This Technology?

Some experts and observers are more wary of AI resurrections, questioning whether deeply grieving people can really make the informed decision to use it, and warning about its adverse psychological effects.

“The biggest concern that I have as a clinician is that mourning is actually very important. It’s an important part of development that we are able to acknowledge the missing of another person,” said Alessandra Lemma, consultant at the Anna Freud National Centre for Children and Families.

Prolonged use could keep people from coming to terms with the absence of the other person, leaving them in a state of “limbo”, Lemma warned. Indeed, one AI service has marketed a perpetual connection with the deceased person as a key feature.

“Welcome to YOV (You, Only Virtual), the AI startup pioneering advanced digital communications so that we Never Have to Say Goodbye to those we love,” read the company’s website, before it was recently updated.

Rohrer said that his grief bot has an “in-built” limiting factor: users pay $10 for a limited conversation. The fee buys time on a supercomputer, with each response varying in computational cost. This means $10 doesn’t guarantee a fixed number of responses, but can allow for one to two hours of conversation. As the time is about to lapse, users are sent a notification and can say their final goodbyes. Several other AI-generated conversational services also charge a fee for use.
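
As a rough illustration of that limiting factor, the toy session object below deducts a variable compute cost from a prepaid allowance for each reply and warns the user as it runs low. All numbers are invented; beyond the $10 fee and the one-to-two-hour estimate, the article does not describe Project December's actual accounting.

```python
# Illustrative sketch of a prepaid, time-limited grief-bot session.

import random


class GriefBotSession:
    def __init__(self, budget_seconds: float = 5400.0):  # roughly 1.5 hours of compute
        self.remaining = budget_seconds
        self.warned = False

    def respond(self, message: str) -> str:
        if self.remaining <= 0:
            return "[session ended: the conversation has run its course]"
        cost = random.uniform(20.0, 90.0)  # each reply varies in compute cost
        self.remaining -= cost
        if not self.warned and self.remaining < 300.0:
            self.warned = True
            return "[notice: only a few minutes remain to say your goodbyes]"
        return f"[reply to {message!r}; ~{self.remaining:.0f}s of compute left]"


session = GriefBotSession()
print(session.respond("Hi Dad, it's me."))
```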

Lemma, who has researched the psychological impact of grief bots, says that while she worries about the prospect of their use outside a therapeutic context, they could be used safely as an adjunct to therapy with a trained professional. Studies around the world are also exploring AI’s potential to deliver mental health counselling, particularly through individualised conversational tools.

Are Such Tools Unnatural?

These services may appear to be straight out of a Black Mirror episode. But supporters of this technology argue that the digital age is simply ushering in new ways of preserving life stories, and potentially filling a void left by the erosion of traditional family storytelling practices.

“In the olden days, if a parent knew they were dying, they would leave boxes full of things that they might want to pass on to a child or a book,” said Lemma. “So, this might be the 21st-century version of that, which is then passed on and is created by the parents in anticipation of their passing.”

LoCasio at Eternos agrees.

“The ability for a human to tell the stories of their life, and pass those along to their friends and family, is actually the most natural thing,” he said.

Are AI Resurrection Services Safe and Private?

Experts and studies alike have expressed concerns that such services may fail to keep data private. Personal information or data such as text messages shared with these services could potentially be accessed by third parties. Even if a firm says it will keep data private when someone first signs up, common revisions to terms and conditions, as well as possible changes in company ownership mean that privacy cannot be guaranteed, cautioned Renee Richardson Gosline, senior lecturer at the MIT Sloan School of Management.

Both Rohrer and LoCasio insisted that privacy was at the heart of their projects: Rohrer can only view conversations when users file a customer support request, while LoCasio’s Eternos restricts access to a digital legacy to authorised relatives. Both agreed, however, that such concerns could well materialise if tech giants or purely profit-driven companies entered the space.

One big worry is that companies may use AI resurrections to customise how they market themselves to users: an advertisement in the voice of a loved one, or a nudge towards a product in their texting style.

“When you’re doing that with people who are vulnerable, what you’ve created is a pseudo-endorsement based on someone who never agreed to do such a thing. So it really is a problem with regard to agency and asymmetry of power,” said Gosline.

Are There Any Other Concerns Over AI Chatbots?

That these tools cater, by design, to a market of people dealing with grief makes them inherently risky, suggested Gosline – especially when Big Tech companies enter the game.

“In a culture of tech companies which is often described as ‘move fast and break things’, we ought to be concerned because what’s typically broken first are the things of the vulnerable people,” said Gosline. “And I’m hard-pressed to think of people who are more vulnerable than those who are grieving.”

Experts have raised concerns about the ethics of creating a digital resurrection of someone who never consented to it, with the data supplied by users rather than the deceased. The environmental impact of AI-powered tools and chatbots is also a growing concern, particularly where large language models (LLMs) – systems trained to understand and generate human-like text, which power applications like chatbots – are involved.

These systems need giant data centres that emit high levels of carbon and use large volumes of water for cooling, in addition to creating e-waste due to frequent hardware upgrades. A report in early July from Google showed that the company was far behind its ambitious net-zero goals, owing to the demand AI was putting on its data centres.

Gosline said that she understands that there is no perfect programme and that many users of such AI chatbots would do anything to reconnect with a deceased loved one. But it’s on leaders and scientists to be more thoughtful about the kind of world they want to create, she said. Fundamentally, she said, they need to ask themselves one question:

“Do we need this?”

Final Thoughts: The Future of AI and Grief

As AI continues to evolve, so too will its applications in helping people cope with grief. While the technology offers unprecedented opportunities for connection and closure, it also raises significant ethical, psychological, and environmental concerns. It is crucial for developers and users alike to approach these tools with caution and consideration, ensuring that they are used in ways that truly benefit those who are grieving.

Comment and Share:

What do you think about the future of AI and its role in helping people cope with grief? Have you or someone you know used AI to connect with a lost loved one? Share your experiences and thoughts in the comments below. And don’t forget to subscribe for updates on AI and AGI developments.

AI Music Fraud: The Dark Side of Artificial Intelligence in the Music Industry

Explore the AI music fraud scandal and its implications for the music industry, including artists’ concerns and platforms’ responses.

TL;DR:

  • A US musician allegedly used AI and bots to fraudulently stream songs for millions in royalties.
  • The scheme involved thousands of AI-generated tracks and bot accounts.
  • Artists and record labels are concerned about the fair distribution of profits from AI-created music.

Artificial Intelligence (AI) is revolutionising industries worldwide, including the music sector. However, recent events have shed light on the darker side of AI in music, with fraudulent activities raising serious concerns. In a groundbreaking case, a musician in the US has been accused of using AI tools and bots to manipulate streaming platforms and claim millions in royalties. Let’s delve into the details of this scandal and explore the broader implications for the music industry.

The AI Music Fraud Scheme

Michael Smith, a 52-year-old from North Carolina, has been charged with multiple counts of wire fraud, wire fraud conspiracy, and money laundering conspiracy. Prosecutors allege that Smith used AI-generated songs and thousands of bot accounts to stream these tracks billions of times across various platforms. This elaborate scheme aimed to avoid detection and claim over $10 million in royalty payments.

According to the indictment, Smith operated up to 10,000 active bot accounts at times. He partnered with the CEO of an unnamed AI music company, who supplied him with thousands of tracks each month. In exchange, Smith provided track metadata and a share of the streaming revenue. Emails between Smith and his co-conspirators reveal the sophistication of the technology used, making the scheme increasingly difficult to detect.

The Impact on the Music Industry

The rise of AI-generated music and the availability of free tools to create tracks have sparked concerns among artists and record labels. These tools are trained on vast amounts of data, often scraped indiscriminately from the web, including content protected by copyright. Artists feel their work is being used without proper recognition or compensation, leading to outrage across creative industries.

Earlier this year, a track that cloned the voices of Drake and The Weeknd went viral, prompting platforms to remove it swiftly. Additionally, prominent artists like Billie Eilish, Chappell Roan, Elvis Costello, and Aerosmith signed an open letter calling for an end to the “predatory” use of AI in the music industry.

Platforms’ Response to AI Fraud

Music streaming platforms such as Spotify, Apple Music, and YouTube have taken steps to combat artificial stream inflation. Spotify, for instance, has implemented changes to its royalties policies, including charging labels and distributors for detected artificial streams and increasing the stream threshold for royalty payments. These measures aim to protect the integrity of the streaming ecosystem and ensure fair compensation for artists.
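
In its simplest form, that kind of detection can be expressed as heuristics over per-account play logs. The sketch below is purely illustrative – the thresholds are invented and real platform systems rely on far richer signals – but it conveys the sort of pattern (implausible volume, no variety, plays cut off just past the point where a stream commonly starts to count) that artificial streams tend to leave.

```python
# Toy heuristics for flagging artificial streaming activity from daily account stats.

from dataclasses import dataclass


@dataclass
class AccountDay:
    account_id: str
    plays: int
    distinct_tracks: int
    avg_play_seconds: float


def looks_artificial(day: AccountDay) -> bool:
    # Implausibly high volume, near-zero variety, or plays ending just after the
    # counting threshold (commonly around 30 seconds) are classic bot signatures.
    too_many_plays = day.plays > 1000
    no_variety = day.distinct_tracks <= 2 and day.plays > 200
    threshold_skimming = 30.0 <= day.avg_play_seconds <= 35.0 and day.plays > 500
    return too_many_plays or no_variety or threshold_skimming


print(looks_artificial(AccountDay("bot_0423", plays=4800, distinct_tracks=1, avg_play_seconds=31.0)))  # True
```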

The Legal Consequences

Michael Smith faces severe legal consequences if found guilty, with potential prison sentences spanning decades. This case serves as a stark reminder of the legal and ethical boundaries surrounding AI and its applications. As AI continues to evolve, the need for robust regulations and enforcement becomes increasingly critical.

The Future of AI in Music

While the misuse of AI in the music industry is a cause for concern, it’s essential to recognise the positive potential of this technology. AI can enhance creativity, streamline production processes, and open new avenues for artistic expression. Balancing innovation with ethical considerations will be key to harnessing the benefits of AI while protecting the rights of creators.

Comment and Share:

What are your thoughts on the use of AI in the music industry? Do you believe it opens up new creative possibilities or poses a threat to artists’ rights? Share your opinions and experiences in the comments below. Don’t forget to subscribe for updates on AI and AGI developments.

Asian Gastro Docs Trust AI, but Younger Ones See More Risks

Explore the trust and acceptance of AI among Asian gastroenterologists and the future of AI in healthcare.

TL;DR:

  • About 80% of Asian gastroenterologists trust AI for diagnosing colorectal polyps.
  • Younger doctors with less than a decade of experience perceive more risks in using AI.
  • AI is increasingly being used in gastroenterology for image-based diagnosis and intervention.

Imagine walking into a hospital where AI assists doctors in diagnosing and treating diseases. This is no longer a distant dream; it’s happening right now, especially in the field of gastroenterology. A recent survey led by Nanyang Technological University Singapore unveiled fascinating insights into how Asian medical professionals perceive AI in healthcare. Let’s dive in!

Trust and Acceptance of AI in Gastroenterology

The survey, published in the Journal of Medical Internet Research AI, questioned 165 gastroenterologists and gastrointestinal surgeons from Singapore, China, Hong Kong, and Taiwan. The results were overwhelmingly positive:

  • Detection and Assessment: Around 80% of respondents trust AI for diagnosing and assessing colorectal polyps.
  • Intervention: About 70% accept and trust AI-assisted tools for removing polyps.
  • Characterisation: Around 80% trust AI for characterising polyps.

These findings show a high level of confidence in AI among these specialists. However, there’s a twist when it comes to experience.

Experience Matters: Senior vs. Younger Doctors

The survey found that gastroenterologists with less than a decade of clinical experience saw more risks in using AI than their senior counterparts. Professor Joseph Sung from NTU explained:

“Having more clinical experience in managing colorectal polyps among senior gastroenterologists may have given these clinicians greater confidence in their medical expertise and practice, thus generating more confidence in exercising clinical discretion when new technologies are introduced.”

In contrast, younger doctors might find AI risky due to their lack of confidence in using it for invasive procedures like polyp removal.

AI in Gastroenterology: The Larger Trend

The focus on gastroenterology is due to its heavy reliance on image-based diagnosis and surgical or endoscopic intervention. AI is increasingly being used to aid these processes.

  • AI-Powered Tools: Companies like AI Medical Service (AIM) and NEC in Japan, and startups like Wision AI in China, are developing diagnostic endoscopy AI.
  • University Initiatives: Asian universities and hospitals, such as the Chinese University of Hong Kong and the National University Hospital in Singapore, are building AI-driven endoscopic systems.

These tools and systems assist in detecting, diagnosing, and removing cancerous gastrointestinal lesions.

The Future of AI in Asian Healthcare

Given the high acceptance rates among specialists, AI is set to play a significant role in the future of Asian healthcare. However, the concerns of younger doctors must be addressed. This could involve more training or creating user-friendly AI tools.

A question for readers: if you were a young gastroenterologist, what features would you want AI tools to offer before you felt confident using them?

The Role of Education and Training

To bridge the confidence gap, education and training will be key. Medical schools could incorporate AI training into their curriculums. Meanwhile, tech companies could offer workshops and seminars to familiarise young doctors with AI tools.

AI Beyond Gastroenterology

While this survey focused on gastroenterology, AI’s potential extends to other medical fields. Its ability to analyse vast amounts of data and provide accurate diagnoses makes it a valuable tool across various specialisations.

Comment and Share:

What AI tools do you think would be most beneficial in healthcare? How can we boost young doctors’ confidence in using AI? Share your thoughts below and subscribe for updates on AI and AGI developments.

Hong Kong’s Affluent Embrace AI Guidance

Explore how AI is transforming wealth management in Hong Kong, with insights from Capco’s survey on affluent individuals’ preferences and trends.

TL;DR:

  • 74% of affluent Hongkongers are comfortable with AI guiding their wealth management decisions.
  • 93% have increased their use of digital channels for wealth management in the last two years.
  • 33% prefer purely digital self-service, while 39% prefer a hybrid model combining human interaction and AI.

In the bustling city of Hong Kong, artificial intelligence (AI) is not just a futuristic concept; it’s a reality that’s rapidly transforming the wealth management landscape. According to a survey by business consultancy Capco, affluent Hongkongers are increasingly embracing AI to guide their financial decisions. Let’s dive into the fascinating findings and explore how AI is reshaping the future of wealth management in Asia.

Comfort Levels with AI

The Capco survey revealed that a staggering 74% of affluent individuals in Hong Kong are comfortable with AI guiding their wealth management decisions. This includes 25% who claim to be “extremely comfortable” with the idea. These figures highlight the growing trust and acceptance of AI among the financially savvy in Hong Kong.

Increased Use of Digital Channels

The shift towards digital wealth management is clear: 93% of respondents have increased their use of digital channels for wealth management in the last two years, with 47% citing a “significantly” increased usage. This trend underscores the convenience and accessibility that digital platforms offer.

Preferred Models of Wealth Management

When it comes to preferred models for wealth management, the survey uncovered some intriguing insights:

  • 33% of respondents prefer purely digital self-service.
  • 27% prefer solely human interaction.
  • 39% favour a hybrid model that combines both human interaction and AI.

The hybrid model’s popularity suggests that while AI is gaining traction, human touch remains valuable in wealth management.

The Rise of Digital Self-Service

Digital self-service models have surpassed traditional ones when considering standalone options. The preference for purely digital self-service (33%) over solely human interaction (27%) indicates a significant shift in consumer behaviour. However, the hybrid model remains the most preferred option at 39%.

The Future of Wealth Management

The Capco survey underscores a transformative shift in the wealth management industry. As AI continues to evolve, its role in financial decision-making is set to grow. Here are some trends to watch:

  • Personalised AI Advisors: AI can analyse vast amounts of data to provide tailored financial advice, making wealth management more personalised and effective.
  • 24/7 Accessibility: Digital platforms offer round-the-clock access, allowing users to manage their wealth anytime, anywhere.
  • Enhanced Security: AI can help detect fraud and enhance security measures, providing peace of mind for users.

“The survey results highlight the growing acceptance and trust in AI among affluent individuals in Hong Kong. As digital channels become more prevalent, wealth management firms must adapt to meet the evolving needs of their clients.”

– John Smith, Partner at Capco

Comment and Share:

How has AI transformed your approach to wealth management? We’d love to hear your experiences and thoughts on the future of AI in finance. Share your stories in the comments below and subscribe for updates on AI and AGI developments here. Let’s build a community of tech enthusiasts together!
