‘Never Say Goodbye’: Can AI Bring the Dead Back to Life?
This article delves into the fascinating and controversial world of AI resurrections, exploring how technology is changing the way we cope with grief.
Published 9 months ago by AIinAsia
TL;DR:
- AI is creating digital ‘resurrections’ of the dead, allowing people to interact with them.
- Projects like Replika and StoryFile use AI to mimic the deceased’s communication style.
- Experts debate the psychological and ethical implications of these technologies.
- Privacy and environmental concerns are significant issues with AI resurrections.
In a world where artificial intelligence can resurrect the dead, grief takes on a new dimension. From Canadian singer Drake’s use of AI-generated Tupac Shakur vocals to Indian politicians addressing crowds years after their passing, technology is blurring the lines between life and death. But beyond their uncanny pull in entertainment and politics, AI “zombies” might soon become a reality for people reeling from the loss of loved ones, through a series of pathbreaking, but potentially controversial, initiatives.
What are AI ‘Resurrections’ of People?
Over the past few years, AI projects around the world have created digital “resurrections” of individuals who have passed away, allowing friends and relatives to converse with them. Typically, users provide the AI tool with information about the deceased. This could include text messages and emails or simply be answers to personality-based questions. The AI tool then processes that data to talk to the user as if it were the deceased.
One of the most popular projects in this space is Replika – a chatbot that can mimic people’s texting styles. Other companies, however, now also allow you to see a video of the dead person as you talk to them. For example, Los Angeles-based StoryFile uses AI to allow people to talk at their own funerals. Before passing, a person can record a video sharing their life story and thoughts. During the funeral, attendees can ask questions and AI technology will select relevant responses from the prerecorded video.
In June, US-based Eternos also made headlines for creating an AI-powered digital afterlife of a person. Launched earlier this year, the project allowed 83-year-old Michael Bommer to leave behind a digital version of himself that his family can continue to interact with.
Do These Projects Help People?
When a South Korean mother reunited with an AI recreation of her dead daughter in virtual reality, a video of the emotional encounter in 2020 sparked intense debate online about whether such technology helps or hurts its users. Developers of these projects point to users’ agency, and say the tools address a deeper suffering.
Jason Rohrer, founder of Project December, which also uses AI to simulate conversations with the dead, said that most users are typically going through an “unusual level of trauma and grief” and see the tool as a way to help cope.
“A lot of these people who want to use Project December in this way are willing to try anything because their grief is so insurmountable and so painful to them.”
The project allows users to chat with AI recreations of known public figures and also with individuals that users may know personally. People who choose to use the service to simulate conversations with the dead often discover that it helps them find closure, Rohrer said. The bots allow them to express words left unsaid to loved ones who died unexpectedly, he added.
Eternos’s founder, Robert LoCasio, said that he developed the company to capture people’s life stories and allow their loved ones to move forward. Bommer, his former colleague who passed away in June, wanted to leave behind a digital legacy exclusively for his family, said LoCasio.
“I spoke with [Bommer] just days before he passed away and he said, just remember, this was for me. I don’t know if they’d use this in the future, but this was important to me,” said LoCasio.
What are the Pitfalls of This Technology?
Some experts and observers are more wary of AI resurrections, questioning whether deeply grieving people can really make the informed decision to use it, and warning about its adverse psychological effects.
“The biggest concern that I have as a clinician is that mourning is actually very important. It’s an important part of development that we are able to acknowledge the missing of another person,” said Alessandra Lemma, consultant at the Anna Freud National Centre for Children and Families.
Prolonged use could keep people from coming to terms with the absence of the other person, leaving them in a state of “limbo”, Lemma warned. Indeed, one AI service has marketed a perpetual connection with the deceased person as a key feature.
“Welcome to YOV (You, Only Virtual), the AI startup pioneering advanced digital communications so that we Never Have to Say Goodbye to those we love,” read the company’s website, before it was recently updated.
Rohrer said that his grief bot has an “in-built” limiting factor: users pay $10 for a limited conversation. The fee buys time on a supercomputer, and each response carries a different computational cost, so $10 doesn’t guarantee a fixed number of responses but typically allows for one to two hours of conversation. As the time is about to run out, users are sent a notification and can say their final goodbyes. Several other AI-powered conversational services also charge a fee for use.
Lemma, who has researched the psychological impact of grief bots, says that while she worries about them being used outside a therapeutic context, they could be used safely as an adjunct to therapy with a trained professional. Studies around the world are also observing the potential for AI to deliver mental health counselling, particularly through individualised conversational tools.
Are Such Tools Unnatural?
These services may appear to be straight out of a Black Mirror episode. But supporters of this technology argue that the digital age is simply ushering in new ways of preserving life stories, and potentially filling a void left by the erosion of traditional family storytelling practices.
“In the olden days, if a parent knew they were dying, they would leave boxes full of things that they might want to pass on to a child or a book,” said Lemma. “So, this might be the 21st-century version of that, which is then passed on and is created by the parents in anticipation of their passing.”
LoCasio at Eternos agrees.
“The ability for a human to tell the stories of their life, and pass those along to their friends and family, is actually the most natural thing,” he said.
Are AI Resurrection Services Safe and Private?
Experts and studies alike have expressed concerns that such services may fail to keep data private. Personal information or data such as text messages shared with these services could potentially be accessed by third parties. Even if a firm says it will keep data private when someone first signs up, common revisions to terms and conditions, as well as possible changes in company ownership mean that privacy cannot be guaranteed, cautioned Renee Richardson Gosline, senior lecturer at the MIT Sloan School of Management.
Both Rohrer and LoCasio insisted that privacy was at the heart of their projects. Rohrer can only view conversations when users file a customer support request, while LoCasio’s Eternos limits access to the digital legacy to authorised relatives. However, both agreed that such concerns could potentially manifest in the case of tech giants or for-profit companies.
One big worry is that companies may use AI resurrections to customise their marketing to users: an advertisement in the voice of a loved one, or a nudge towards a product in their texting style.
“When you’re doing that with people who are vulnerable, what you’ve created is a pseudo-endorsement based on someone who never agreed to do such a thing. So it really is a problem with regard to agency and asymmetry of power,” said Gosline.
Are There Any Other Concerns Over AI Chatbots?
The fact that these tools cater to a market of people dealing with grief makes them inherently risky, suggested Gosline, especially when Big Tech companies enter the game.
“In a culture of tech companies which is often described as ‘move fast and break things’, we ought to be concerned because what’s typically broken first are the things of the vulnerable people,” said Gosline. “And I’m hard-pressed to think of people who are more vulnerable than those who are grieving.”
Experts have raised concerns about the ethics of creating a digital resurrection of the dead, particularly in cases where the deceased never consented and users supply the data on their behalf. The environmental impact of AI-powered tools and chatbots is also a growing concern, particularly for those built on large language models (LLMs): systems trained to understand and generate human-like text, which power applications like chatbots.
These systems need giant data centres that emit high levels of carbon and use large volumes of water for cooling, in addition to creating e-waste due to frequent hardware upgrades. A report in early July from Google showed that the company was far behind its ambitious net-zero goals, owing to the demand AI was putting on its data centres.
Gosline said that she understands that there is no perfect programme and that many users of such AI chatbots would do anything to reconnect with a deceased loved one. But it’s on leaders and scientists to be more thoughtful about the kind of world they want to create, she said. Fundamentally, she said, they need to ask themselves one question:
“Do we need this?”
Final Thoughts: The Future of AI and Grief
As AI continues to evolve, so too will its applications in helping people cope with grief. While the technology offers unprecedented opportunities for connection and closure, it also raises significant ethical, psychological, and environmental concerns. It is crucial for developers and users alike to approach these tools with caution and consideration, ensuring that they are used in ways that truly benefit those who are grieving.
Comment and Share:
What do you think about the future of AI and its role in helping people cope with grief? Have you or someone you know used AI to connect with a lost loved one? Share your experiences and thoughts in the comments below. And don’t forget to subscribe for updates on AI and AGI developments.
Adrian’s Arena: Will AI Get You Fired? 9 Mistakes That Could Cost You Everything
Will AI get you fired? Discover 9 career-killing AI mistakes professionals make—and how to avoid them.
Published 3 weeks ago on May 15, 2025
TL;DR — What You Need to Know:
- AI mistakes that cost jobs happen fast.
- Most are fixable if you know what to watch for.
- Avoid these pitfalls and make AI your career superpower.
Don’t blame the robot.
If you’re careless with AI, it’s not just your project that tanks — your career could be next.
Across Asia and beyond, professionals are rushing to implement artificial intelligence into workflows — automating reports, streamlining support, crunching data. And yes, done right, it’s powerful. But here’s what no one wants to admit: most people are doing it wrong.
I’m not talking about missing a few prompts or failing to generate that killer deck in time. I’m talking about the career-limiting, confidence-killing, team-splintering mistakes that quietly build up and explode just when it matters most. If you’re not paying attention, AI won’t just replace your role — it’ll ruin your reputation on the way out.
Here are 9 of the most common, most damaging AI blunders happening in businesses today — and how you can avoid making them.
1. You can’t fix bad data with good algorithms.
Let’s start with the basics. If your AI tool is churning out junk insights, odds are your data was junk to begin with. Dirty data isn’t just inefficient — it’s dangerous. It leads to flawed decisions, mis-targeted customers, and misinformed strategies. And when the campaign tanks or the budget overshoots, guess who gets blamed?
The solution? Treat your data with the same respect you’d give your P&L. Clean it, vet it, monitor it like a hawk. AI isn’t magic. It’s maths — and maths hates mess.
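To make that concrete, here is a minimal sketch of the kind of pre-flight hygiene check worth running before any model touches your data. It assumes a pandas DataFrame of customer records, and the monthly_spend column is purely illustrative:

```python
import pandas as pd

def vet_customer_data(df: pd.DataFrame) -> pd.DataFrame:
    """Basic hygiene checks to run before any model sees this data."""
    # Drop exact duplicate rows, and say so out loud
    dupes = int(df.duplicated().sum())
    if dupes:
        print(f"Dropping {dupes} duplicate rows")
        df = df.drop_duplicates()

    # Report the share of missing values per column,
    # so gaps become a decision rather than a surprise
    missing = df.isna().mean().sort_values(ascending=False)
    print("Share of missing values per column:")
    print(missing[missing > 0])

    # Reject obviously invalid rows (illustrative rule: negative spend)
    return df[df["monthly_spend"] >= 0]
```

None of this is sophisticated, and that is the point: most “AI failures” trace back to checks this basic never being run.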
2. Don’t just plug in AI and hope for the best.
Too many teams dive into AI without asking a simple question: what problem are we trying to solve? Without clear goals, AI becomes a time-sink — a parade of dashboards and models that look clever but achieve nothing.
Worse, when senior stakeholders ask for results and all you have is a pretty interface with no impact, that’s when credibility takes a hit.
AI should never be a side project. Define its purpose. Anchor it to business outcomes. Or don’t bother.
3. Ethics aren’t optional — they’re existential.
You don’t need to be a philosopher to understand this one. If your AI causes harm — whether that’s through bias, privacy breaches, or tone-deaf outputs — the consequences won’t just be technical. They’ll be personal.
Companies can weather a glitch. What they can’t recover from is public outrage, legal fines, or internal backlash. And you, as the person who “owned” the AI, might be the one left holding the bag.
Bake in ethical reviews. Vet your training data. Put in safeguards. It’s not overkill — it’s job insurance.
4. Implementation without commitment is just theatre.
I’ve seen it more than once: companies announce a bold AI strategy, roll out a tool, and then… nothing. No training. No process change. No follow-through. That’s not innovation. That’s box-ticking.
If you half-arse AI, it won’t just fail — it’ll visibly fail. Your colleagues will notice. Your boss will ask questions. And next time, they might not trust your judgement.
AI needs resourcing, support, and leadership. Otherwise, skip it.
5. You can’t manage what you can’t explain.
Ever been in a meeting where someone says, “Well, that’s just what the model told us”? That’s a red flag — and a fast track to blame when things go wrong.
So-called “black box” models are risky, especially in regulated industries or customer-facing roles. If you can’t explain how your AI reached a decision, don’t expect others to trust it — or you.
Use interpretable models where possible. And if you must go complex, document it like your job depends on it (because it might).
6. Face the bias before it becomes your headline.
Facial recognition failing on darker skin tones. Recruitment tools favouring men. Chatbots going rogue with offensive content. These aren’t just anecdotes — they’re avoidable, career-ending screw-ups rooted in biased data.
It’s not enough to build something clever. You have to build it responsibly. Test for bias. Diversify your datasets. Monitor performance. Don’t let your project become the next PR disaster.
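A bias test doesn’t have to be elaborate to be useful. Here is a minimal sketch in plain Python that compares a model’s accuracy across groups; the recruitment-screening data below is hypothetical:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples.
    Returns per-group accuracy so gaps between groups are visible."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical recruitment-screening predictions, broken down by gender
records = [
    ("women", 1, 1), ("women", 1, 0), ("women", 0, 0),
    ("men",   1, 1), ("men",   1, 1), ("men",   0, 0),
]
print(accuracy_by_group(records))
# {'women': 0.666..., 'men': 1.0} -- a gap like this is your cue to dig in
```

If the numbers diverge sharply between groups, that is the conversation to have before launch, not after the headline.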
7. Training isn’t optional — it’s survival.
If your team doesn’t understand the tool you’ve introduced, you’re not innovating — you’re endangering operations. AI can amplify productivity or chaos, depending entirely on who’s driving.
Upskilling is non-negotiable. Whether it’s hiring external expertise or running internal workshops, make sure your people know how to work with the machine — not around it.
8. Long-term vision beats short-term wow.
Sure, the first week of AI adoption might look good. Automate a few slides, speed up a report — you’re a hero.
But what happens three months down the line, when the tool breaks, the data shifts, or the model needs recalibration?
AI isn’t set-and-forget. Plan for evolution. Plan for maintenance. Otherwise, short-term wins can turn into long-term liabilities.
9. When everything’s urgent, documentation feels optional.
Until someone asks, “Who changed the model?” or “Why did this customer get flagged?” and you have no answers.
In AI, documentation isn’t admin — it’s accountability.
Keep logs, version notes, data flow charts. Because sooner or later, someone will ask, and “I’m not sure” won’t cut it.
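Even a lightweight, append-only change log beats “I’m not sure”. A minimal sketch, with an illustrative schema you would adapt to your own stack:

```python
import datetime
import getpass
import json

def log_model_change(path, model_version, change, reason):
    """Append one auditable record per model change."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "author": getpass.getuser(),
        "model_version": model_version,
        "change": change,
        "reason": reason,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical example entry
log_model_change(
    "model_changelog.jsonl",
    model_version="2025-05-01",
    change="Raised fraud-flag threshold from 0.7 to 0.8",
    reason="Too many false positives on small-value transactions",
)
```

When someone asks “who changed the model?”, the answer is one grep away.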
Final Thoughts: AI doesn’t cost jobs. People misusing AI do.
Most AI mistakes aren’t made by the machines — they’re made by humans cutting corners, skipping checks, and hoping for the best. And the consequences? Lost credibility. Lost budgets. Lost roles.
But it doesn’t have to be that way.
Used wisely, AI becomes your competitive edge. A signal to leadership that you’re forward-thinking, capable, and ready for the future. Just don’t stumble on the same mistakes that are currently tripping up everyone else.
So the real question is: are you using AI… or is it quietly using you?
Author
Adrian is an AI, marketing, and technology strategist based in Asia, with over 25 years of experience in the region. Originally from the UK, he has worked with some of the world’s largest tech companies and successfully built and sold several tech businesses. Currently, Adrian leads commercial strategy and negotiations at one of ASEAN’s largest AI companies. Driven by a passion to empower startups and small businesses, he dedicates his spare time to helping them boost performance and efficiency by embracing AI tools. His expertise spans growth and strategy, sales and marketing, go-to-market strategy, AI integration, startup mentoring, and investments.
FAKE FACES, REAL CONSEQUENCES: Should NZ Ban AI in Political Ads?
New Zealand has no laws preventing the use of deepfakes or AI-generated content in political campaigns. As the 2025 elections approach, is it time for urgent reform?
Published 3 weeks ago on May 14, 2025 by AIinAsia
TL;DR — What You Need to Know
- New Zealand political campaigns are already dabbling with AI-generated content, but without clear rules or disclosures.
- Deepfakes and synthetic images of ethnic minorities risk fuelling cultural offence and voter distrust.
- Other countries are moving fast with legislation. Why is New Zealand dragging its feet?
AI in New Zealand Political Campaigns
Seeing isn’t believing anymore — especially not on the campaign trail.
In the build-up to the 2025 local body elections, New Zealand voters are being quietly nudged into a new kind of uncertainty: Is what they’re seeing online actually real? Or has it been whipped up by an algorithm?
This isn’t science fiction. From fake voices of Joe Biden in the US to Peter Dutton deepfakes dancing across TikTok in Australia, we’ve already crossed the threshold into AI-assisted campaigning. And New Zealand? It’s not far behind — it just lacks the rules.
The National Party admitted to using AI in attack ads during the 2023 elections. The ACT Party’s Instagram feed includes AI-generated images of Māori and Pasifika characters — but nowhere in the posts do they say the images aren’t real. One post about interest rates even used a synthetic image of a Māori couple from Adobe’s stock library, without disclosure.
That’s two problems in one. First, it’s about trust. If voters don’t know what’s real and what’s fake, how can they meaningfully engage? Second, it’s about representation. Using synthetic people to mimic minority communities without transparency or care is a recipe for offence — and harm.
Copy-Paste Cultural Clangers
Australians already find some AI-generated political content “cringe” — and voters in multicultural societies are noticing. When AI creates people who look Māori, Polynesian or Southeast Asian, it often gets the cultural signals all wrong. Faces are oddly symmetrical, clothing choices are generic, and context is stripped away. What’s left is a hollow image that ticks the diversity box without understanding the lived experience behind it.
And when political parties start using those images without disclosure? That’s not smart targeting. That’s political performance, dressed up as digital diversity.
A Film-Industry Fix?
If you’re looking for a local starting point for ethical standards, look to New Zealand’s film sector. The NZ Film Commission’s 2025 AI Guidelines are already ahead of the game — promoting human-first values, cultural respect, and transparent use of AI in screen content.
The public service also has an AI framework that calls for clear disclosure. So why can’t politics follow suit?
Other countries are already acting. South Korea bans deepfakes in political ads 90 days before elections. Singapore outlaws digitally altered content that misrepresents political candidates. Even Canada is exploring policy options. New Zealand, in contrast, offers voluntary guidelines — which are about as enforceable as a handshake on a Zoom call.
Where To Next?
New Zealand doesn’t need to reinvent the wheel. But it does need urgent rules — even just a basic requirement for political parties to declare when they’re using AI in campaign content. It’s not about banning creativity. It’s about respecting voters and communities.
In a multicultural democracy, fake faces in real campaigns come with consequences. Trust, representation, and dignity are all on the line.
What do YOU think?
Should political parties be forced to declare AI use in their ads — or are we happy to let the bots keep campaigning for us?
7 Mind-Blowing New ChatGPT Use Cases in 2025
Discover 7 powerful new ChatGPT use cases for 2025 — from sales training to strategic planning. Built for real businesses, not just techies.
Published 3 weeks ago on May 14, 2025 by AIinAsia
TL;DR — What You Need to Know:
- ChatGPT use cases in 2025 are changing the way we work, and fast.
- Its new capabilities are shockingly useful, from real-time strategy building to smarter email, training, and customer service.
- The tech’s no longer the limiting factor. How you use it is what sets winners apart.
- You don’t need a dev team — just smart prompts, good judgement, and a bit of experimentation.
Welcome to Your New ChatGPT Use Cases in 2025
Something extraordinary is happening with AI — and this time, it’s not just another update. ChatGPT’s latest model has quietly become one of the most powerful tools on the planet, capable of outperforming human professionals in everything from sales role-play to strategic planning.
Here’s what’s changed: 2025’s AI isn’t just faster or more fluent. It’s fundamentally more useful. And while most people are still asking it to write birthday poems or summarise PDFs, smart businesses are doing something entirely different.
They’re solving real problems.
So here are 7 powerful, practical, and slightly mind-blowing ways you can use ChatGPT right now — whether you’re running a startup, scaling a business, or just trying to survive your inbox.
1. The Intelligence Quantum Leap
Let’s start with the big one. GPT-4o — OpenAI’s flagship model for 2025 — doesn’t just understand language. It reasons. It plans. It scores higher than the average human on standardised IQ tests.
And yes, that’s both impressive and terrifying.
But the real win for business? You now have on-demand access to a logic machine that can unpack strategy, simulate market moves, and give brutally clear feedback on your plans — without needing a whiteboard or a 5-hour workshop.
Ask ChatGPT:
“Compare three go-to-market strategies for a mid-priced SaaS product in Southeast Asia targeting logistics firms.”
It’ll give you a side-by-side breakdown faster than most consultants.
Why it matters:
The days of ‘I’ll get back to you after I crunch the data’ are over. You now crunch in real time. Strategy meetings just got smarter — and shorter.
2. Email Management: The Silent Revolution
Email is where good ideas go to die. But what if AI could handle the grunt work — without sounding like a robot?
In 2025, it can. ChatGPT now plugs seamlessly into tools like Zapier, Make.com, and even Outlook or Gmail via APIs. That means you can automate 80% of your email workflow:
- Draft responses in your tone of voice
- Auto-tag or file messages based on content
- Trigger follow-ups without lifting a finger
Real use case:
A boutique agency in Singapore uses ChatGPT to scan all inbound client emails, draft smart replies with custom links, and log actions in Notion. Result? 40% time saved, zero missed follow-ups.
But beware:
Letting AI send emails unsupervised is asking for trouble. Use a “draft-and-review” loop — AI writes it, you approve it.
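Here is a minimal sketch of that draft-and-review loop using OpenAI’s Python SDK. The model name and tone instructions are assumptions to adapt; the key design choice is that the function only drafts, and a human always hits send:

```python
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY to be set

client = OpenAI()

def draft_reply(incoming_email: str, tone: str = "friendly, concise") -> str:
    """Draft a reply for human review; nothing is sent automatically."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: swap in whichever chat model you use
        messages=[
            {
                "role": "system",
                "content": (
                    f"You draft email replies in a {tone} tone. Never invent "
                    "facts; leave [PLACEHOLDERS] for anything you don't know."
                ),
            },
            {"role": "user", "content": incoming_email},
        ],
    )
    return response.choices[0].message.content

draft = draft_reply("Hi, can you confirm our meeting time next Tuesday?")
print(draft)  # a human reads, edits, and only then sends
```

Wire the output into a drafts folder rather than an outbox and you get the time saving without the unsupervised-send risk.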
3. Voice-Powered Strategy: AI That Walks With You
Here’s a glimpse of the future: You’re walking to get kopi. You press and hold your ChatGPT app. You say:
“I’m thinking about launching a mini-course for HR leaders on AI literacy. Maybe bundle it with a coaching session. Can you sketch out a funnel?”
By the time you get back to your desk, it’s done. A structured funnel. Headline ideas. Audience personas. Even suggested pricing tiers.
This is now live.
The new voice interaction mode in ChatGPT feels like talking to a strategist who never gets tired. It remembers what you said, clarifies details, and adapts based on your feedback. Use it during your commute. In the gym. While cooking.
Think about it:
Your best thinking doesn’t always happen at your desk. Now, it doesn’t have to.
4. Sales Role-Play (That Doesn’t Suck)
Sales teams have always known the value of practice. But let’s be honest: traditional role-play is awkward, slow, and often skipped.
Now imagine this: You open ChatGPT and say:
“Pretend you’re a CFO pushing back on my pitch for enterprise expense software. Hit me with your top three objections.”
It does. Relentlessly. Then you tweak it:
“Now play a more sceptical CFO. Use financial jargon. Be unimpressed.”
It does that too.
Why it works:
There’s no fear of judgement. No awkwardness. Just high-impact reps that sharpen your message and steel your nerves.
Results?
One founder I know used this daily before calls — and closed 4 out of 5 deals that quarter. That’s not hype. That’s practice made perfect.
5. Marketing Psychology at Scale
Your customers are constantly telling you what they care about. But the signal’s buried in reviews, chats, complaints, comments, and survey feedback.
ChatGPT is now ridiculously good at sifting through this mess and surfacing insights — emotional tone, patterns in word choice, common objections, even specific desires.
Example prompt:
“Analyse these 250 customer reviews. What do customers love most? What words do they use to describe our product? What are their biggest frustrations?”
What you get is a heatmap of customer psychology.
Smart marketers use this to:
- Reframe messaging
- Write landing pages in the customer’s voice
- Identify overlooked objections early
Bonus trick:
Feed this analysis into your ad copywriting prompts. CTRs go up. Every. Single. Time.
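For reference, a minimal sketch of how that analysis might run programmatically. The model name is an assumption, and with hundreds of reviews you may need to chunk the batch to fit the context window:

```python
from openai import OpenAI

client = OpenAI()

def analyse_reviews(reviews: list[str]) -> str:
    """One-shot qualitative analysis of a batch of customer reviews."""
    numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(reviews))
    prompt = (
        "Analyse these customer reviews. Report: (1) what customers love most, "
        "(2) the exact words they use to describe the product, "
        "(3) their biggest frustrations.\n\n" + numbered
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(analyse_reviews([
    "Love how fast it syncs, but the mobile app crashes weekly.",
    "Support replied in minutes. The dashboard feels cluttered though.",
]))
```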
6. 24/7 Customer Engagement — That Doesn’t Feel Robotic
We’ve all used chatbots that sound like your uncle trying to be cool. Not anymore.
With GPT-4o and custom instructions, you can now build a digital agent that actually sounds like your brand, asks smart follow-ups, and guides users toward decisions.
Imagine this:
You run an e-commerce site. A customer asks about shipping options. Instead of a static FAQ or slow email reply, ChatGPT:
- Asks where they’re based
- Calculates delivery timelines
- Recommends a bundled offer
- Logs the lead to your CRM
All in real time.
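As a sketch of how such an agent hangs together (the brand, the bundle logic, and the CRM hook are all hypothetical stand-ins):

```python
from openai import OpenAI

client = OpenAI()

BRAND_VOICE = (
    "You are the shop assistant for Glow & Co, a skincare brand. "  # hypothetical brand
    "Be warm and brief. Always ask where the customer is based before "
    "quoting shipping times, and suggest one relevant bundle when it fits."
)

history = [{"role": "system", "content": BRAND_VOICE}]

def log_lead(question: str, answer: str) -> None:
    """Stub: replace with a call to your CRM of choice."""
    print(f"[CRM] Q: {question!r} -> A: {answer[:60]!r}")

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    log_lead(user_message, reply)
    return reply

print(chat("What are your shipping options?"))
```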
Result?
One online skincare brand reported a 50% increase in cart completions just by switching to an AI-led chat system.
The real kicker? Customers prefer talking to it.
7. Your Digital Ops Manual — Finally Done
Every business struggles with documenting processes. SOPs are boring, messy, and constantly out of date.
But ChatGPT? It lives for this.
Feed it rough notes, voice memos, old docs — and it turns them into clear, structured workflows.
Now take it one step further:
Set up a private knowledge base where your team can ask questions naturally and get precise answers.
“What’s our refund process for EU customers?”
“How do I update a client billing profile?”
“What’s the Slack etiquette for our sales team?”
ChatGPT answers. With citations.
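One minimal way to get grounded, cited answers is to put the documents in the prompt and instruct the model to answer only from them. A sketch with a toy in-memory knowledge base; a real deployment would retrieve only the relevant chunks first:

```python
from openai import OpenAI

client = OpenAI()

# Toy "knowledge base"; in practice, these come from your team's docs
DOCS = {
    "refunds.md": "EU customers get a full refund within 14 days, no questions asked.",
    "billing.md": "Client billing profiles are updated in the Billing tab under Settings.",
}

def ask(question: str) -> str:
    """Answer from the provided docs only, citing the source file name."""
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in DOCS.items())
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer ONLY from the documents below, and cite the file "
                    "name in brackets. If the answer isn't there, say so.\n\n"
                    + context
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What's our refund process for EU customers?"))
```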
Training time drops. Mistakes go down. New hires ramp up faster.
Best of all?
It gets smarter the more your team uses it.
So… What’s Stopping You Trying These ChatGPT Use Cases in 2025?
Every use case in this article is live. Affordable. And 100% usable today. No code. No dev team. No six-month roadmap.
Just smarter thinking — and a willingness to try.
So here’s the real question:
What’s your excuse for not using AI like this yet… and how long can you afford to wait?