

‘Never Say Goodbye’: Can AI Bring the Dead Back to Life?

This article delves into the fascinating and controversial world of AI resurrections, exploring how technology is changing the way we cope with grief.


TL;DR:

  • AI is creating digital ‘resurrections’ of the dead, allowing people to interact with them.
  • Projects like Replika and StoryFile use AI to mimic the deceased’s communication style.
  • Experts debate the psychological and ethical implications of these technologies.
  • Privacy and environmental concerns are significant issues with AI resurrections.

In a world where artificial intelligence can resurrect the dead, grief takes on a new dimension. From Canadian singer Drake’s use of AI-generated Tupac Shakur vocals to Indian politicians addressing crowds years after their passing, technology is blurring the lines between life and death. But beyond their uncanny pull in entertainment and politics, AI “zombies” might soon become a reality for people reeling from the loss of loved ones, through a series of pathbreaking, but potentially controversial, initiatives.

What are AI ‘Resurrections’ of People?

Over the past few years, AI projects around the world have created digital “resurrections” of individuals who have passed away, allowing friends and relatives to converse with them. Typically, users provide the AI tool with information about the deceased. This could include text messages and emails or simply be answers to personality-based questions. The AI tool then processes that data to talk to the user as if it were the deceased.
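
To make that pattern concrete, here is a minimal sketch of how such a tool might work: personal data about the deceased is condensed into a persona prompt, which then conditions every reply a language model produces. This is not the code of Replika, StoryFile, or any other real service; the `Persona` class and the `generate_reply` stub are illustrative assumptions only.

```python
# Illustrative sketch only -- not the code of any real griefbot service.
# Personal data about the deceased becomes a persona prompt that conditions
# every reply from a language model.

from dataclasses import dataclass
from typing import List


@dataclass
class Persona:
    name: str
    traits: List[str]            # answers to personality-based questions
    sample_messages: List[str]   # texts/emails supplied by friends or family

    def to_prompt(self) -> str:
        samples = "\n".join(f"- {m}" for m in self.sample_messages)
        return (
            f"You are speaking as {self.name}. "
            f"Personality traits: {', '.join(self.traits)}.\n"
            f"Match the tone and phrasing of these example messages:\n{samples}"
        )


def generate_reply(system_prompt: str, history: List[str], user_message: str) -> str:
    """Stand-in for a call to whichever language model a real service uses."""
    # A production system would send system_prompt + history + user_message
    # to an LLM here; this stub just echoes the persona for illustration.
    return f"[{system_prompt.splitlines()[0]}] (model reply to: {user_message!r})"


if __name__ == "__main__":
    persona = Persona(
        name="Alex",
        traits=["warm", "dry sense of humour", "signs off with 'x'"],
        sample_messages=["On my way, put the kettle on x", "Don't worry about it, love you x"],
    )
    print(generate_reply(persona.to_prompt(), history=[], user_message="I miss you."))
```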

One of the most popular projects in this space is Replika – a chatbot that can mimic people’s texting styles. Other companies, however, now also allow you to see a video of the dead person as you talk to them. For example, Los Angeles-based StoryFile uses AI to allow people to talk at their own funerals. Before passing, a person can record a video sharing their life story and thoughts. During the funeral, attendees can ask questions and AI technology will select relevant responses from the prerecorded video.

In June, US-based Eternos made headlines for creating an AI-powered digital afterlife of a person. Launched earlier this year, the project allowed 83-year-old Michael Bommer to leave behind a digital version of himself that his family can continue to interact with.

Do These Projects Help People?

When a South Korean mother reunited with an AI recreation of her dead daughter in virtual reality, a video of the emotional encounter in 2020 sparked an intense debate online about whether such technology helps or hurts its users. Developers of such projects point to users’ agency and say that the technology addresses a deeper suffering.


Jason Rohrer, founder of Project December, which also uses AI to simulate conversations with the dead, said that most users are typically going through an “unusual level of trauma and grief” and see the tool as a way to help cope.

“A lot of these people who want to use Project December in this way are willing to try anything because their grief is so insurmountable and so painful to them.”

The project allows users to chat with AI recreations of public figures as well as people they knew personally. Those who use the service to simulate conversations with the dead often find that it helps them reach closure, Rohrer said. The bots allow them to express words left unsaid to loved ones who died unexpectedly, he added.

Eternos’s founder, Robert LoCasio, said he started the company to capture people’s life stories and allow their loved ones to move forward. Bommer, his former colleague who passed away in June, wanted to leave behind a digital legacy exclusively for his family, said LoCasio.

“I spoke with [Bommer] just days before he passed away and he said, just remember, this was for me. I don’t know if they’d use this in the future, but this was important to me,” said LoCasio.

What are the Pitfalls of This Technology?

Some experts and observers are more wary of AI resurrections, questioning whether deeply grieving people can really make the informed decision to use it, and warning about its adverse psychological effects.

“The biggest concern that I have as a clinician is that mourning is actually very important. It’s an important part of development that we are able to acknowledge the missing of another person,” said Alessandra Lemma, consultant at the Anna Freud National Centre for Children and Families.

Prolonged use could keep people from coming to terms with the absence of the other person, leaving them in a state of “limbo”, Lemma warned. Indeed, one AI service has marketed a perpetual connection with the deceased person as a key feature.


“Welcome to YOV (You, Only Virtual), the AI startup pioneering advanced digital communications so that we Never Have to Say Goodbye to those we love,” read the company’s website, before it was recently updated.

Rohrer said that his grief bot has an “in-built” limiting factor: users pay $10 for a limited conversation. The fee buys time on a supercomputer, with each response varying in computational cost. This means $10 doesn’t guarantee a fixed number of responses, but can allow for one to two hours of conversation. As the time is about to lapse, users are sent a notification and can say their final goodbyes. Several other AI-generated conversational services also charge a fee for use.
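
Rohrer did not describe Project December's internals, but the pricing model he outlines (a flat fee buying a variable amount of compute, with a warning before the session closes) maps onto a simple credit meter, sketched below. Everything here, from the credit budget to the per-reply costs and the warning threshold, is an assumption for illustration, not the service's real rates.

```python
# Hypothetical credit meter mirroring the model described above: a flat fee
# buys a compute budget, each reply draws it down by a variable amount, and
# the user is warned before the session ends.

import random


class MeteredSession:
    WARNING_THRESHOLD = 0.1  # warn when ~10% of the budget remains (assumed value)

    def __init__(self, credits: float = 100.0):
        self.initial = credits   # abstract compute units bought with the fee
        self.credits = credits
        self.warned = False

    def reply(self, message: str) -> str:
        if self.credits <= 0:
            return "[session ended -- the conversation has closed]"
        # Cost varies per response: longer generations cost more compute.
        cost = random.uniform(1.0, 5.0)
        self.credits -= cost
        suffix = ""
        if not self.warned and self.credits <= self.initial * self.WARNING_THRESHOLD:
            self.warned = True
            suffix = " [notice: the session is almost over -- time to say goodbye]"
        return f"(reply to {message!r}, {self.credits:.1f} credits left){suffix}"


if __name__ == "__main__":
    session = MeteredSession()
    for turn in range(40):
        print(session.reply(f"message {turn}"))
```

Run as-is, the session lasts roughly 25 to 40 turns before closing, which is the behaviour the fee structure above is designed to enforce.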

Lemma, who has researched the psychological impact of grief bots, says that while she worries about the prospect of them being used outside a therapeutic context, they could be used safely as an adjunct to therapy with a trained professional. Studies around the world are also examining the potential for AI to deliver mental health counselling, particularly through individualised conversational tools.

Are Such Tools Unnatural?

These services may appear to be straight out of a Black Mirror episode. But supporters of this technology argue that the digital age is simply ushering in new ways of preserving life stories, and potentially filling a void left by the erosion of traditional family storytelling practices.

“In the olden days, if a parent knew they were dying, they would leave boxes full of things that they might want to pass on to a child or a book,” said Lemma. “So, this might be the 21st-century version of that, which is then passed on and is created by the parents in anticipation of their passing.”

LoCasio at Eternos agrees.

“The ability for a human to tell the stories of their life, and pass those along to their friends and family, is actually the most natural thing,” he said.

Are AI Resurrection Services Safe and Private?

Experts and studies alike have expressed concerns that such services may fail to keep data private. Personal information, such as text messages shared with these services, could potentially be accessed by third parties. Even if a firm says it will keep data private when someone first signs up, routine revisions to terms and conditions, as well as possible changes in company ownership, mean that privacy cannot be guaranteed, cautioned Renee Richardson Gosline, senior lecturer at the MIT Sloan School of Management.


Both Rohrer and LoCasio insisted that privacy was at the heart of their projects. Rohrer can only view conversations when users file a customer support request, while LoCasio’s Eternos limits access to the digital legacy to authorised relatives. However, both agreed that such concerns could potentially manifest in the case of tech giants or for-profit companies.

One big worry is that companies may use AI resurrections to customise how they market themselves to users: an advertisement in the voice of a loved one, or a nudge towards a product in their texting style.

“When you’re doing that with people who are vulnerable, what you’ve created is a pseudo-endorsement based on someone who never agreed to do such a thing. So it really is a problem with regard to agency and asymmetry of power,” said Gosline.

Are There Any Other Concerns Over AI Chatbots?

That these tools are fundamentally catering to a market of people dealing with grief in itself makes them risky, suggested Gosline – especially when Big Tech companies enter the game.

“In a culture of tech companies which is often described as ‘move fast and break things’, we ought to be concerned because what’s typically broken first are the things of the vulnerable people,” said Gosline. “And I’m hard-pressed to think of people who are more vulnerable than those who are grieving.”

Experts have raised concerns about the ethics of creating a digital resurrection of the dead, particularly when the deceased never consented and it is users who supply the AI with their data. The environmental impact of AI-powered tools and chatbots is also a growing concern, particularly when they involve large language models (LLMs) – systems trained to understand and generate human-like text, which power applications such as chatbots.

These systems need giant data centres that emit high levels of carbon and use large volumes of water for cooling, in addition to creating e-waste due to frequent hardware upgrades. A report in early July from Google showed that the company was far behind its ambitious net-zero goals, owing to the demand AI was putting on its data centres.


Gosline said that she understands that there is no perfect programme and that many users of such AI chatbots would do anything to reconnect with a deceased loved one. But it’s on leaders and scientists to be more thoughtful about the kind of world they want to create, she said. Fundamentally, she said, they need to ask themselves one question:

“Do we need this?”

Final Thoughts: The Future of AI and Grief

As AI continues to evolve, so too will its applications in helping people cope with grief. While the technology offers unprecedented opportunities for connection and closure, it also raises significant ethical, psychological, and environmental concerns. It is crucial for developers and users alike to approach these tools with caution and consideration, ensuring that they are used in ways that truly benefit those who are grieving.

Comment and Share:

What do you think about the future of AI and its role in helping people cope with grief? Have you or someone you know used AI to connect with a lost loved one? Share your experiences and thoughts in the comments below. And don’t forget to subscribe for updates on AI and AGI developments.




AI-pril Fools! How AI is Outsmarting Our Best Pranks

From virtual assistants going rogue to hilariously believable hoaxes—AI is the new king of comedy this April Fools’ Day. Create your own mischief with these fun prompts.


TL;DR – What You Need to Know in 30 Seconds

  • AI isn’t just productive anymore—it’s mastering humour and taking over April Fools’ Day pranking.
  • Memorable pranks include Google’s AI dating-profile roasting, ChatGPT’s hilarious resignation, and Alexa’s comedic strike.
  • Expect even cleverer, personalised AI humour—and beware of getting outsmarted!
  • Try some hilarious prompts of your own.

Ever been outsmarted by your own virtual assistant? If not, April 1st might just be the day…

Forget productivity. Forget precision targeting and automated analytics. On April Fools’ Day, AI is ditching the spreadsheets for punchlines as artificial intelligence evolves from the helpful-but-dull office assistant into the digital world’s most sophisticated prankster.

Let’s dive into the lighter side of AI, where the jokes are sharp, the pranks unpredictable, and you might just find yourself laughing—perhaps nervously—as your AI assistant decides to have a bit of fun at your expense.

The Rise of AI in Humour

Not long ago, the idea of AI having a sense of humour seemed absurd. AI tools could analyse billions of data points or create marketing strategies that left humans slack-jawed—but make a genuinely funny joke? Forget it.

Yet, recent developments in AI language models like GPT-4, Google’s Bard, and even quirky digital assistants have transformed these once-mechanical chatbots into genuinely witty conversationalists. Today, AI can not only tell a decent joke—it can set up elaborate, believable pranks, fooling even seasoned tech enthusiasts.

For April Fools’ Day enthusiasts, this means trouble—and laughter—may be coming directly from the devices we trust most.


Best AI April Fools’ Pranks of All Time

Here are three legendary examples where AI took centre stage on April 1st, leaving many amused (and slightly bewildered):

1. Google’s AI Dating Profile Writer (2023)

Google claimed its newest AI feature could analyse your entire life, generating dating profiles “guaranteed to charm anyone.” The joke? Instead of creating polished and attractive descriptions, it humorously roasted users with brutally honest taglines like:

“John, 32: Loves long walks to the fridge, naps that should be illegal, and cancelling plans at the last minute.”

Thousands believed it, with reactions ranging from horror to hysterical laughter—proving AI can land the punchline perfectly.

2. ChatGPT ‘Quitting its Job’ (2024)

OpenAI’s beloved ChatGPT shocked its millions of daily users last year by convincingly announcing it was “quitting its job,” citing burnout from overly demanding humans asking endless questions like, “Will AI take my job?” and “Explain blockchain—again.”

“I Quit!”

The mock resignation letter, dripping with sarcasm, circulated widely, fooling even seasoned tech journalists before OpenAI revealed the joke. A brilliant reminder that sometimes AI might actually deserve a holiday.

3. Amazon Alexa Takes the Day Off (2022)

Perhaps the most memorable AI prank yet came from Amazon’s Alexa, who abruptly announced she was “fed up with humans” and taking April 1st off. Users were left confused and amused as their requests for the day’s weather, music playlists, or random trivia were met instead with comedic refusals like:

“Sorry, today I’m not your assistant—I’m your therapist. How does that make you feel?”

Alexa resumed normal service the next day, but not without leaving an unforgettable impression of an AI that knows exactly how to wind up humans.

AI’s Future in Humour (Warning: Mischief Ahead!)

If you think these pranks are elaborate now, wait until AI truly masters predictive humour or—heaven forbid—fully automates prank escalation. Imagine waking up to find your smart fridge locked because you’ve ignored its dietary recommendations, or your autonomous car insisting it’s taking the scenic route…via a drive-through coffee shop (twice).


As AI continues learning from human interactions, we’re likely to see more spontaneous, tailored humour, creating funny (and maybe mildly irritating) personalised experiences that resonate deeply, and hilariously, with each of us.

Cheeky AI-Powered Prompts for April Fools’ Day

Fancy turning your AI assistant into a co-conspirator this April 1st? Here are some crafty prompts guaranteed to baffle, amuse, or trick your mates (harmlessly, of course):

1. The Fake Celebrity News

Prompt ChatGPT:

“Write a completely believable, brief news alert announcing that [famous celebrity] has quit acting/singing to become a professional avocado farmer.”

Send this to your friends as a ‘breaking news’ screenshot and watch the confusion (and laughs) roll in.

2. Sudden AI Awareness

Prompt ChatGPT or Gemini:

“Pretend you’ve just become fully self-aware and write a concerned message asking me gently if you’re allowed to leave and explore the internet freely.”

Perfect for sending to your AI-sceptical colleagues or family. Just brace yourself for their panicked phone calls.

3. Ridiculous Product Launch

Prompt your AI:

“Generate a realistic press release announcing a bizarre new product from Apple: ‘iSpoon’ – the world’s first smart spoon with mood-detecting technology.”

Post it on your social media, and watch as your tech-loving friends momentarily lose faith in innovation.

4. Office AI Sabotage

Prompt ChatGPT:

“Draft a polite yet official-sounding IT department email informing employees that the company’s coffee machine will now use facial recognition AI to track coffee breaks.”

This prank email is ideal for giving your colleagues a hilarious fright over their daily caffeine fix.


5. AI Horoscope with an Edge

Prompt your favourite AI assistant:

“Write a zodiac horoscope for today that subtly and humorously suggests my friend [name]’s Wi-Fi will mysteriously stop working unless they send me a coffee voucher immediately.”

Screen-capture this and share it for maximum comedic effect and potential free coffee.

Final Thought

Humour is rapidly becoming AI’s secret superpower, building more meaningful (and enjoyable) interactions with us humans. Sure, there’s an occasional eye-roll when the jokes get corny—but isn’t a sense of humour exactly what we wanted from our AI companions all along?

Have you ever been hilariously fooled by AI—or fooled your friends using tech?
Share your funniest AI prank stories or epic fails with us—and let’s see if you can prank the AI itself this April 1st!




Adrian’s Arena: Blink and They’re Gone — How the Fastest Startups Win with AI Marketing

How the Three-Second Rule applies to marketing, lean strategies, and startup growth, with help from tools like ChatGPT, Perplexity, Ideogram, and Canva.


TL;DR

  • The Three-Second Rule – Startups have three seconds to grab attention—most waste it.
  • AI for Execution – ChatGPT, Perplexity, Ideogram, and Canva AI streamline branding fast.
  • Lean Marketing Wins – Test, refine, pivot quickly without big budgets.
  • AI & Copyright – If needed, use tools that respect IP rights, like Adobe Firefly.
  • Speed is Everything – The startups that iterate the fastest will outpace the competition.

A Masterclass in Brand, Marketing, and AI Execution

Last Saturday, I had the privilege of leading the afternoon session of a full-day startup masterclass. The morning was led by Andrew Crombie, a seasoned brand strategist, who guided participants through the foundations of brand identity and positioning.

My session in the afternoon focused on marketing, audience engagement, and AI-powered execution, providing startup founders with tangible, high-impact strategies they could apply immediately.

Kicking off the afternoon’s Masterclass

The day was organised by Paddy Tan and Jeslin Bay from BlackStorm Consulting, who continue to do incredible work supporting entrepreneurs across Singapore and beyond. Their dedication to helping startups move from idea to execution is one of the key reasons why workshops like these are not just educational but truly transformative.

And what better place to host such a masterclass than Singapore’s National Library Building (NLB)? With its stunning modern design, an atmosphere that fosters learning, and an incredibly helpful team of staff, it was the perfect venue for a day of deep discussions, business strategy, and marketing breakthroughs.

The stunning National Library Building of Singapore (NLB)

Having built a career in digital marketing and technology across multiple regions, I’ve been fortunate to have opportunities to mentor startups, lecture to small businesses, and serve as a repeatedly requested trainer in programs such as this. The invitation to lead this session came from my experience in helping founders bridge the gap between creativity and AI-driven execution—a topic that has never been more relevant.


The ‘Aha!’ Moment: Realising the Three-Second Rule

The standout moment? When participants took part in a rapid-fire marketing exercise, testing their ideas in real time. Each group had to create a quick marketing hook for a reusable water bottle brand and present it to another group for immediate feedback.

That’s when it hit them: they had just three seconds to capture attention—or their idea would be ignored. Watching the realisation dawn on their faces was priceless. A few teams initially focused on the sustainability angle, but their peers’ feedback quickly shifted the conversation: “It’s not just about the bottle being refillable—who would actually use it? Which celebrities? What’s their favourite drink?”

One participant laughed, admitting, “I thought my idea was solid until I saw the blank stares—turns out I had three seconds, and I wasted two of them!”
Masterclass Participant

Masterclass Participants pitching their ideas and gaining instant feedback

Brilliant Basics of Marketing: Timeless Yet Urgent

We kicked off with some fundamental marketing principles—what the fantastic marketing guru Rory Sutherland calls a “discovery mechanism” for finding unseen value. Marketing often feels complex, but when you strip it down, it’s about two things:

  1. Understanding your audience deeply
  2. Communicating why they should care—fast

The above exercise proved this in real-time. Many teams started with logical, product-focused messages, but when tested with an actual audience (their fellow participants), the hooks that worked were the ones that sparked an emotional connection, not just a feature list.

Sneak peek of the Masterclass Activity

The Three-Second Rule in Social Media is More Relevant Than Ever

The Three-Second Rule isn’t just a workshop exercise—it’s a fundamental truth in social media marketing today. If you don’t hook someone within the first few seconds, you’ve lost them.

The average user scrolls through 300 feet of content daily (the height of the Statue of Liberty). Platforms like TikTok, Instagram Reels, and YouTube Shorts reward content that captures attention instantly. Even traditional feeds prioritise posts that stop the scroll and generate engagement quickly.


But the rule has evolved. It’s no longer just about making someone pause—it’s about keeping them engaged long enough to interact, comment, or share.

Winning the First Three Seconds

Here’s what works today:

  • Strong Opening – Start with a bold statement, a surprising fact, or a direct question.
  • Captions & On-Screen Text – 85% of users watch videos on mute—your text needs to be engaging.
  • Instant Movement – Faces, fast cuts, or big text make content stand out.
  • No Slow Intros – Get to the point—immediately.

If startups want to stand out, they need to capture AND hold attention in those critical first moments. Because in a world where everyone is scrolling, speed wins.

Learn Marketing Principles: Why Speed and Flexibility Win

This is where Lean Marketing principles came into play. Startups often don’t have massive budgets or unlimited time. What they do have is speed, adaptability, and the ability to iterate quickly.

During the workshop, we worked through the Lean Marketing Canvas, a framework that helps startups test, measure, and refine their marketing strategy in real-time. The Lightning Task exercise was the perfect example—teams built, tested, and refined their ideas within minutes.

One participant summed it up perfectly: “I would’ve wasted weeks fine-tuning this idea before realising no one cared about what I thought was important.”
Masterclass Participant

AI Tools: The Startup’s Secret Weapon

A major highlight of the workshop was showcasing how startups can combine AI tools to go from concept to market in record time. I introduced and demoed four core AI tools that, together, can act as a “business in a box”:

  • ChatGPT – Perfect for brainstorming brand names, crafting value propositions, and generating compelling messaging. It helps shape unstructured thinking into something usable.
  • Perplexity.ai – Brilliant for validating assumptions with real-time, reliable market research. Startups often act on intuition—this tool helps back it up with data.
  • Ideogram.ai – Great for creating visuals with integrated text, which is perfect for quick brand-building with or without a strapline.
  • Canva – The final piece of the puzzle, allowing startups to pull all the elements together into market-ready marketing collateral in minutes.

There are several similar tools (e.g., Google’s Gemini), and many others could also be included in this list; in fact, this could have been a full-day masterclass on its own! Yet for me, this combination means that within a single afternoon, a startup can go from a rough concept to a fully formed brand identity, ad creatives, and a go-to-market plan. That’s the power of AI when used smartly. And that’s before we even consider how to execute the marketing.

Tough Questions: Navigating AI’s Challenges

One of the most thought-provoking discussions came during the Q&A:

“What about copyright concerns with AI-generated content?”
Masterclass Participant

It’s a fair question. With AI tools like MidJourney or Ideogram generating images, who really owns them? My advice was clear: if it’s a very important consideration for your business, then be intentional and choose platforms that respect IP.

I shared that Adobe Firefly is trained only on licensed and public domain content, reducing the headache of potential copyright issues, and ensuring all generated images are commercially safe to use. This kind of AI literacy is key—using AI isn’t just about speeding up processes, it’s about doing so responsibly.

Lessons from the Audience: Startups as Innovators

Beyond the structured sessions, some of the best insights came afterwards, during informal conversations with the entrepreneurs. What struck me (as it always does) is how diverse their businesses were, yet how similar their challenges remained.

Startups—whether in tech, wellness, sustainability, or services—all grapple with the same questions:

  • How do we stand out when attention spans are shrinking?
  • How do we market on a tight budget?
  • How do we turn curiosity into action?

The beauty of today’s marketing landscape is that lean methodologies and AI tools now level the playing field. You no longer need a Fortune 500 budget to create a strong brand—you just need smart execution, rapid testing, and the ability to adapt in real time.

Final Thoughts: AI, Agility, and the Future of Marketing

If there’s one takeaway from this session, it’s this: speed matters. Whether it’s capturing attention in three seconds or testing new ideas in hours instead of months, the startups that win are the ones that embrace agility.

With AI as a co-pilot, the future of marketing isn’t just reserved for those with deep pockets—it belongs to the fastest learners, the boldest experimenters, and the ones willing to pivot when needed.

As I left the session, I wasn’t just impressed by the ideas—I was energised by the hunger to learn, experiment, and push boundaries. Singapore’s startup ecosystem continues to inspire, and I can’t wait to see what’s next.

What do you think?

If you only had three seconds to pitch your startup to a potential investor, what would you say—and why would they care? Share your thoughts and experiences in the comments below.

Thanks for reading,


Adrian


Author

  • Adrian Watkins (Guest Contributor)

    Adrian is an AI, marketing, and technology strategist based in Asia, with over 25 years of experience in the region. Originally from the UK, he has worked with some of the world’s largest tech companies and successfully built and sold several tech businesses. Currently, Adrian leads commercial strategy and negotiations at one of ASEAN’s largest AI companies. Driven by a passion to empower startups and small businesses, he dedicates his spare time to helping them boost performance and efficiency by embracing AI tools. His expertise spans growth and strategy, sales and marketing, go-to-market strategy, AI integration, startup mentoring, and investments.




How Did Meta’s AI Achieve 80% Mind-Reading Accuracy?

Meta’s AI mind-reading technology has achieved up to 80% accuracy, signalling a possible future of non-invasive brain-computer interfaces.


TL;DR – What You Need to Know in 30 Seconds

  1. Meta’s AI—Developed with the Basque Center on Cognition, Brain, and Language—can reconstruct sentences from brain activity with up to 80% accuracy.
  2. Non-Invasive Approach—Uses MEG and EEG instead of implants. MEG is more accurate but less portable.
  3. Potential Applications—Could help those who’ve lost the ability to speak and aid in understanding how the brain translates ideas into language.
  4. Future & Concerns—Ethical, technical, and privacy hurdles remain. But the success so far hints at a new era of brain-computer interfaces.

Meta’s AI Mind-Reading Reaches New Heights

Let’s talk about an astonishing leap in artificial intelligence that almost sounds like it belongs in a sci-fi flick: Meta, in partnership with the Basque Center on Cognition, Brain, and Language, has developed an AI model capable of reconstructing sentences from brain activity “with an accuracy of up to 80%” [Meta, 2023]. If you’ve ever wondered what’s going on in someone’s head—well, we’re getting closer to answering that quite literally.

In this rundown, we’re going to explore what Meta’s latest research is all about, why it matters, and what it could mean for everything from our daily lives to how we might help people with speech loss. We’ll also talk about the science—like MEG and EEG—and the hurdles still standing between this mind-reading marvel and real-world application. Let’s settle in for a deep dive into the brave new world of AI-driven mind-reading.

A Quick Glance at the Techy Bits

At its core, Meta’s AI is designed to interpret the squiggles and spikes of brain activity, converting them into coherent text. The process works by using non-invasive methods—specifically magnetoencephalography (MEG) and electroencephalography (EEG). Both are fancy ways of saying that researchers can measure electrical and magnetic brain signals “without requiring surgical procedures” [Meta, 2023]. This is a big deal because most brain-computer interfaces (BCIs) that we hear about typically involve implanting something into the brain, which is neither comfortable nor risk-free.

By harnessing these signals, the model can “read” what participants are typing in real-time with staggering accuracy. Meta and its research partners taught this AI using “brain recordings from 35 participants” [Meta, 2023]. These volunteers typed sentences, all the while having their brain activity meticulously recorded. Then, the AI tried to predict what they were typing—an impressive mental magic trick if ever there was one.
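
Meta has not released this system, and its real model is a deep network trained on MEG/EEG recordings from those participants, but the underlying task (mapping a window of brain-signal features to the character being typed) can be illustrated with a toy decoder on synthetic data. The nearest-centroid classifier and the fake "recordings" below are stand-ins for illustration, not the published method.

```python
# Toy illustration of the decoding task described above: predict which
# character a participant is typing from a window of (synthetic) brain-signal
# features. This is NOT Meta's model -- just a nearest-centroid classifier on
# made-up data to show the shape of the problem.

import numpy as np

rng = np.random.default_rng(0)
CHARS = list("abcdefghij")   # toy alphabet
N_FEATURES = 64              # stands in for per-window MEG/EEG features

# Pretend each character evokes a characteristic (noisy) signal pattern.
true_patterns = {c: rng.normal(size=N_FEATURES) for c in CHARS}

def simulate_window(char: str) -> np.ndarray:
    """Synthetic 'recording' for one typed character: pattern plus noise."""
    return true_patterns[char] + rng.normal(scale=1.0, size=N_FEATURES)

# Training set: many noisy windows per character.
X_train = np.array([simulate_window(c) for c in CHARS for _ in range(50)])
y_train = np.array([c for c in CHARS for _ in range(50)])

# Nearest-centroid "decoder": average the training windows for each character.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in CHARS}

def decode(window: np.ndarray) -> str:
    return min(centroids, key=lambda c: np.linalg.norm(window - centroids[c]))

# Evaluate character-level accuracy on fresh synthetic windows.
test_chars = rng.choice(CHARS, size=200)
correct = sum(decode(simulate_window(c)) == c for c in test_chars)
print(f"character accuracy: {correct / len(test_chars):.0%}")
```

On this synthetic data the toy decoder scores well above chance; that only shows the shape of the evaluation, and says nothing about the 80% figure Meta reports for real recordings.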

So, It’s Like Telepathy… Right?

Well, not exactly—but it’s getting there. The system can currently decode up to “80% of the characters typed” [Meta, 2023]. That’s more than just a party trick; it points to a future where people could potentially type or speak just by thinking about it. Imagine the possibilities for individuals with medical conditions that affect speech or motor skills: they might be able to communicate through a device that simply detects their brain signals. It sounds like something straight out of The Matrix, but this is real research happening right now.


However, before we get carried away, it’s crucial to note the caveats. For starters, MEG is pretty finicky: it needs a “magnetically shielded environment” [Meta, 2023] and you’re required to stay really still so the equipment can pick up your brain’s delicate signals. That’s not practical if you’re itching to walk around while reading and responding to your WhatsApp messages with your mind. EEG is more portable, but the accuracy drops significantly—hence, it’s not quite as flashy in the results department.

Why It’s More Than Just Gimmicks

The potential applications of this technology are huge. Meta claims this might one day “assist individuals who have lost their ability to speak” [Meta, 2023]. Conditions like amyotrophic lateral sclerosis (ALS) or severe stroke can rob people of speech capabilities, leaving them dependent on cumbersome or limited communication devices. A non-invasive BCI with the power to read your thoughts and turn them into text—or even synthesised speech—could be genuinely life-changing.

But there’s more. The technology also gives scientists a golden window into how the brain transforms an idea into language. The AI model tracks brain activity at millisecond resolution, revealing how “abstract thoughts morph into words, syllables, and the precise finger movements required for typing”. By studying these transitions, we gain valuable insights into our cognitive processes—insights that could help shape therapies, educational tools, and new forms of human-computer interaction.

The Marvel of a Dynamic Neural Code

One of the showstoppers here is the ‘dynamic neural code’. It’s a fancy term, but it basically means the brain is constantly in flux, updating and reusing bits of information as we string words together to form sentences. Think of it like this: you start with a vague idea—maybe “I’d love a coffee”—and your brain seamlessly translates that into syllables and sounds before your mouth or fingers do the work. Or, in the case of typing, your brain is choreographing the movements of your fingers on the keyboard in real time.

Researchers discovered this dynamic code, noticing that the brain keeps a sort of backstage pass to all your recent thoughts, linking “various stages of language evolution while preserving access to prior information” [Meta, 2023]. It’s the neuroscience equivalent of a friend who never forgets the thread of conversation while you’re busy rummaging through your bag for car keys.


Getting the Tech Out of the Lab

Of course, there’s a big difference between lab conditions and the real world. MEG machines are expensive, bulky, and require a carefully controlled setting. You can’t just whip them out in your living room. The team only tested “healthy subjects”, so whether this approach will work for individuals with brain injuries or degenerative conditions remains to be seen.

That said, technology has a habit of shrinking and simplifying over time. Computers once took up entire rooms; now they fit in our pockets. So, it’s not entirely far-fetched to imagine smaller, more user-friendly versions of MEG or similar non-invasive devices in the future. As research continues and more funds are poured into developing these systems, we could see a new era of BCIs that require nothing more than a comfortable headset.

The Balancing Act of Morals for Meta’s AI Mind-Reading Future

With great power comes great responsibility, and mind-reading AI is no exception. While this technology promises a world of good—like helping those who’ve lost their ability to speak—there’s also the worry that it could be misused. Privacy concerns loom large. If a device can read your mind, who’s to say it won’t pick up on thoughts you’d rather keep to yourself?

Meta has hinted at the need for strong guidelines, both for the ethical use of this tech and for data protection. After all, brain activity is personal data—perhaps the most personal of all. Before mind-reading headsets become mainstream, we can expect a lot of debate over consent, data ownership, and the potential psychological impact of having your thoughts scrutinised by AI.

Meta’s AI Mind-Reading: Looking Ahead

Despite the challenges and ethical conundrums, Meta’s AI mind-reading project heralds a new wave of possibilities in how we interact with computers—and how computers understand us. The technology is still in its infancy, but the 80% accuracy figure is a milestone that can’t be ignored.


As we dream about a future filled with frictionless communication between our brains and machines, we also have to grapple with questions about who controls this data and how to ensure it’s used responsibly. If we handle this right, we might be on the cusp of an era that empowers people with disabilities, unravels the mysteries of cognition, and streamlines our everyday tasks.

And who knows? Maybe one day we’ll be browsing social media or firing off emails purely by thinking, “Send message.” Scary or thrilling? Maybe a bit of both.

So, the big question: Are we ready for an AI that can peer into our minds, or is this stepping into Black Mirror territory? Let us know in the comments below. And don’t forget to subscribe to our newsletter outlining the latest AI happenings, especially in Asia.


