Life
Nevada’s AI Experiment: Unemployment Benefits Decided by Machines
Nevada’s Department of Employment, Training and Rehabilitation is using a Google-powered AI system to help decide unemployment benefit appeals, raising questions about accuracy, bias and human oversight.
Published 8 months ago
By AIinAsia
TL;DR:
- Nevada is using AI to decide unemployment benefits, despite concerns.
- The AI system, powered by Google, aims to speed up the decision-making process.
- Critics worry about accuracy, bias, and the potential for shortcuts.
In a bold yet controversial move, the state of Nevada has started using artificial intelligence to determine who receives unemployment benefits. This decision has sparked debate and raised concerns about the reliability and fairness of AI in such critical decision-making processes. Let’s dive into the details of this experiment and explore its implications.
AI in Unemployment Benefits: A New Era?
Nevada’s Department of Employment, Training, and Rehabilitation (DETR) has introduced an AI system to sift through transcripts and documents from hearings. The goal? To make quicker decisions about unemployment benefits. The system, powered by Google, was approved last month by the state’s Board of Examiners.
Christopher Sewell, the director of DETR, said human oversight remains part of the process. “There’s no AI [written decisions] that are going out without having human interaction and that human review,” he told Gizmodo. The idea is to speed up the process, helping claimants get their benefits faster.
The Role of Human Referees
The AI system generates recommendations, but a human referee reviews each decision. If the referee disagrees with the AI, the documents are revised and investigated further by DETR. This process aims to ensure accuracy, but critics worry that it could end up taking more time if the AI frequently makes mistakes.
Morgan Shah, the director of community engagement for Nevada Legal Services, pointed out the potential pitfalls. “The time savings they’re looking for only happens if the review is very cursory,” she said. “If someone is reviewing something thoroughly and properly, they’re really not saving that much time.”
Concerns from Experts
Former Nevada labor official Michele Evermore expressed her concerns about the system. “If a robot’s just handed you a recommendation and you just have to check a box and there’s pressure to clear out a backlog, that’s a little bit concerning,” she told Gizmodo.
Google, which developed the AI system, said it works with customers to address potential biases and comply with regulations. However, there may be no way to tell whether the AI is doing a bad job until it already has, which raises ethical questions about experimenting on vulnerable members of society.
The Potential for Bias and Shortcuts
One of the biggest concerns with AI decision-making is the potential for bias. If the AI system is trained on biased data, it could perpetuate or even amplify existing inequalities. Additionally, the pressure to clear backlogs could lead to shortcuts, compromising the thoroughness of the review process.
The Future of AI in Public Services
Nevada’s experiment with AI in unemployment benefits is part of a broader trend of using AI in public services. While AI has the potential to increase efficiency and accuracy, it also raises important questions about fairness, transparency, and accountability.
By exploring Nevada’s AI experiment, we gain insights into the broader implications of AI in public services and the need for careful evaluation and oversight.
Comment and Share:
What do you think about Nevada’s use of AI to decide unemployment benefits? Do you see potential benefits or concerns? Share your thoughts and experiences with AI and AGI technologies in the comments below. Don’t forget to subscribe for updates on AI and AGI developments.
You may also like:
- AI vs. Human Bias: The Fight for Fair Recruitment in the Digital Age
- AI as Curator: More Than Meets the Eye
- Unleashing AI’s Potential: A Strategic Guide for Asian Businesses
- To learn more about Nevada’s AI unemployment appeals experiment, tap here.
Life
How To Teach ChatGPT Your Writing Style
This warm, practical guide explores how professionals can shape ChatGPT’s tone to match their own writing style. From defining your voice to smart prompting and memory settings, it offers a step-by-step approach to turning ChatGPT into a savvy writing partner.
Published 5 hours ago on June 4, 2025
By AIinAsia
TL;DR — What You Need To Know
- ChatGPT can mimic your writing tone with the right examples and prompts
- Start by defining your personal style, then share it clearly with the AI
- Use smart prompting, not vague requests, to shape tone and rhythm
- Custom instructions and memory settings help ChatGPT “remember” you
- It won’t be perfect — but it can become a valuable creative sidekick.
Start by defining your voice
Before ChatGPT can write like you, you need to know how you write. This may sound obvious, but most professionals haven’t clearly articulated their voice. They just write.
Think about your usual tone. Are you friendly, brisk, poetic, slightly sarcastic? Do you use short, direct sentences or long ones filled with metaphors? Swear words? Emojis? Do you write like you talk?
Collect a few of your own writing samples: a newsletter intro, a social media post, even a Slack message. Read them aloud. What patterns emerge? Look at rhythm, vocabulary and mood. That’s your signature.
Show ChatGPT your writing
Now you’ve defined your style, show ChatGPT what it looks like. You don’t need to upload a manifesto. Just say something like:
“Here are three examples of my writing. Please analyse my tone, sentence structure and word choice. I’d like you to write like this moving forward.”
Then paste your samples. Follow up with:
“Can you describe my writing style in a few bullet points?”
You’re not just being polite. This step ensures you’re aligned. It also helps ChatGPT to frame your voice accurately before trying to imitate it.
Be sure to offer varied, representative examples. The more closely your samples reflect your daily writing habits across different formats (emails, captions, articles), the sharper the mimicry.
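If you prefer working programmatically rather than in the ChatGPT app, the same step can be scripted. Below is a minimal sketch using the OpenAI Python SDK; the model name, prompt wording and placeholder samples are illustrative assumptions, not part of this guide.

```python
# A sketch of the "analyse my writing" step via the OpenAI Python SDK.
# The samples, prompt wording and model name are placeholders (assumptions).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

writing_samples = [
    "Newsletter intro: ...",   # paste a real newsletter intro here
    "Social post: ...",        # a real social media post
    "Slack message: ...",      # an everyday message
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": (
                "Here are three examples of my writing. Please analyse my tone, "
                "sentence structure and word choice, then describe my writing "
                "style in a few bullet points.\n\n"
                + "\n\n---\n\n".join(writing_samples)
            ),
        },
    ],
)

print(response.choices[0].message.content)  # the model's summary of your style
```

Whichever route you take, the principle is the same: give the model concrete samples before asking it to imitate you.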
Prompt with purpose
Once ChatGPT knows how you write, the next step is prompting. And this is where most people stumble. Saying, “Make it sound like me” isn’t quite enough.
Instead, try:
“Rewrite this in my tone — warm, conversational, and a little cheeky.”
“Avoid sounding corporate. Use contractions, variety in sentence length and clear rhythm.”
Yes, you may need a few back-and-forths. But treat it like any editorial collaboration — the more you guide it, the better the results.
And once a prompt nails your style? Save it. That one sentence could be reused dozens of times across projects.
Use memory and custom instructions
ChatGPT now lets you store tone and preferences in memory. It’s like briefing a new hire once, rather than every single time.
Start with Custom Instructions (in Settings > Personalisation). Here, you can write:
“I use conversational English with dry humour and avoid corporate jargon. Short, varied sentences. Occasionally cheeky.”
Once saved, these tone preferences apply by default.
There’s also memory, where ChatGPT remembers facts and stylistic traits across chats. Paid users have access to broader, more persistent memory. Free users get a lighter version but still benefit.
Just say:
“Please remember that I like a formal tone with occasional wit.”
ChatGPT will confirm and update accordingly. You can always check what it remembers under Settings > Personalisation > Memory.
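For those working through the API rather than the ChatGPT app, custom instructions have a rough equivalent: a reusable system prompt. The sketch below assumes the OpenAI Python SDK; the style brief wording and model name are illustrative, not a prescribed setup.

```python
# A rough API-side equivalent of ChatGPT's custom instructions:
# one reusable system prompt holding your tone brief.
# The brief wording and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

STYLE_BRIEF = (
    "Write in conversational English with dry humour. Avoid corporate jargon. "
    "Use short, varied sentences. Occasionally cheeky."
)

def rewrite_in_my_voice(draft: str) -> str:
    """Rewrite a draft using the saved style brief as the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": STYLE_BRIEF},
            {"role": "user", "content": f"Rewrite this in my tone:\n\n{draft}"},
        ],
    )
    return response.choices[0].message.content

print(rewrite_in_my_voice("We are pleased to announce our quarterly results."))
```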
Test, tweak and give feedback
Don’t be shy. If something sounds off, say so.
“This is too wordy. Try a punchier version.”
“Tone down the enthusiasm — make it sound more reflective.”
Ask ChatGPT why it wrote something a certain way. Often, the explanation will give you insight into how it interpreted your tone, and let you correct misunderstandings.
As you iterate, this feedback loop will sharpen your AI writing partner’s instincts.
Use ChatGPT as a creative partner, not a clone
This isn’t about outsourcing your entire writing voice. AI is a tool — not a ghostwriter. It can help organise your thoughts, start a draft or nudge you past a creative block. But your personality still counts.
Some people want their AI to mimic them exactly. Others just want help brainstorming or structure. Both are fine.
The key? Don’t expect perfection. Think of ChatGPT as a very keen intern with potential. With the right brief and enough examples, it can be brilliant.
You May Also Like:
- Customising AI: Train ChatGPT to Write in Your Unique Voice
- Elon Musk predicts AGI by 2026
- ChatGPT Just Quietly Released “Memory with Search” – Here’s What You Need to Know
- Or try these prompt ideas out on ChatGPT by tapping here
Life
Adrian’s Arena: Will AI Get You Fired? 9 Mistakes That Could Cost You Everything
Will AI get you fired? Discover 9 career-killing AI mistakes professionals make—and how to avoid them.
Published 3 weeks ago on May 15, 2025
TL;DR — What You Need to Know:
- Common AI mistakes can cost you your job, and they happen fast
- Most are fixable if you know what to watch for.
- Avoid these pitfalls and make AI your career superpower.
Don’t blame the robot.
If you’re careless with AI, it’s not just your project that tanks — your career could be next.
Across Asia and beyond, professionals are rushing to implement artificial intelligence into workflows — automating reports, streamlining support, crunching data. And yes, done right, it’s powerful. But here’s what no one wants to admit: most people are doing it wrong.
I’m not talking about missing a few prompts or failing to generate that killer deck in time. I’m talking about the career-limiting, confidence-killing, team-splintering mistakes that quietly build up and explode just when it matters most. If you’re not paying attention, AI won’t just replace your role — it’ll ruin your reputation on the way out.
Here are 9 of the most common, most damaging AI blunders happening in businesses today — and how you can avoid making them.
1. You can’t fix bad data with good algorithms.
Let’s start with the basics. If your AI tool is churning out junk insights, odds are your data was junk to begin with. Dirty data isn’t just inefficient — it’s dangerous. It leads to flawed decisions, mis-targeted customers, and misinformed strategies. And when the campaign tanks or the budget overshoots, guess who gets blamed?
The solution? Treat your data with the same respect you’d give your P&L. Clean it, vet it, monitor it like a hawk. AI isn’t magic. It’s maths — and maths hates mess.
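What does “clean it, vet it, monitor it” look like in practice? Here is a minimal sketch in Python; the file name, columns and thresholds are hypothetical, purely for illustration.

```python
# A minimal sketch of basic data vetting before any model sees the data.
# The file name, columns and thresholds are hypothetical, not from the article.
import pandas as pd

df = pd.read_csv("customer_leads.csv")  # hypothetical input

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_share_by_column": df.isna().mean().round(3).to_dict(),
}
print(report)

# Fail loudly instead of silently training on junk
assert report["duplicate_rows"] == 0, "Duplicates found: deduplicate before modelling"
assert all(v < 0.05 for v in report["missing_share_by_column"].values()), \
    "A column is more than 5% empty: investigate before modelling"
```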
2. Don’t just plug in AI and hope for the best.
Too many teams dive into AI without asking a simple question: what problem are we trying to solve? Without clear goals, AI becomes a time-sink — a parade of dashboards and models that look clever but achieve nothing.
Worse, when senior stakeholders ask for results and all you have is a pretty interface with no impact, that’s when credibility takes a hit.
AI should never be a side project. Define its purpose. Anchor it to business outcomes. Or don’t bother.
3. Ethics aren’t optional — they’re existential.
You don’t need to be a philosopher to understand this one. If your AI causes harm — whether that’s through bias, privacy breaches, or tone-deaf outputs — the consequences won’t just be technical. They’ll be personal.
Companies can weather a glitch. What they can’t recover from is public outrage, legal fines, or internal backlash. And you, as the person who “owned” the AI, might be the one left holding the bag.
Bake in ethical reviews. Vet your training data. Put in safeguards. It’s not overkill — it’s job insurance.
4. Implementation without commitment is just theatre.
I’ve seen it more than once: companies announce a bold AI strategy, roll out a tool, and then… nothing. No training. No process change. No follow-through. That’s not innovation. That’s box-ticking.
If you half-arse AI, it won’t just fail — it’ll visibly fail. Your colleagues will notice. Your boss will ask questions. And next time, they might not trust your judgement.
AI needs resourcing, support, and leadership. Otherwise, skip it.
5. You can’t manage what you can’t explain.
Ever been in a meeting where someone says, “Well, that’s just what the model told us”? That’s a red flag — and a fast track to blame when things go wrong.
So-called “black box” models are risky, especially in regulated industries or customer-facing roles. If you can’t explain how your AI reached a decision, don’t expect others to trust it — or you.
Use interpretable models where possible. And if you must go complex, document it like your job depends on it (because it might).
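As a rough illustration of what “interpretable” can mean here, the sketch below fits a plain logistic regression whose coefficients map to named features you can explain in a meeting; the dataset and column names are hypothetical.

```python
# A sketch of preferring an interpretable model: a logistic regression whose
# coefficients map to named features. Dataset and columns are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("loan_applications.csv")  # hypothetical dataset
features = ["income", "debt_ratio", "years_employed"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["approved"], test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Each coefficient can be explained in plain language, feature by feature
for feature, coef in zip(features, model.coef_[0]):
    print(f"{feature}: {coef:+.3f}")
print("Held-out accuracy:", round(model.score(X_test, y_test), 3))
```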
6. Face the bias before it becomes your headline.
Facial recognition failing on darker skin tones. Recruitment tools favouring men. Chatbots going rogue with offensive content. These aren’t just anecdotes — they’re avoidable, career-ending screw-ups rooted in biased data.
It’s not enough to build something clever. You have to build it responsibly. Test for bias.
Diversify your datasets. Monitor performance. Don’t let your project become the next PR disaster.
7. Training isn’t optional — it’s survival.
If your team doesn’t understand the tool you’ve introduced, you’re not innovating — you’re endangering operations. AI can amplify productivity or chaos, depending entirely on who’s driving.
Upskilling is non-negotiable. Whether it’s hiring external expertise or running internal workshops, make sure your people know how to work with the machine — not around it.
8. Long-term vision beats short-term wow.
Sure, the first week of AI adoption might look good. Automate a few slides, speed up a report — you’re a hero.
But what happens three months down the line, when the tool breaks, the data shifts, or the model needs recalibration?
AI isn’t set-and-forget. Plan for evolution. Plan for maintenance. Otherwise, short-term wins can turn into long-term liabilities.
9. When everything’s urgent, documentation feels optional.
Until someone asks, “Who changed the model?” or “Why did this customer get flagged?” and you have no answers.
In AI, documentation isn’t admin — it’s accountability.
Keep logs, version notes, data flow charts. Because sooner or later, someone will ask, and “I’m not sure” won’t cut it.
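Even a lightweight change log beats nothing. Below is a minimal sketch of an append-only audit record in Python; the field names and file path are illustrative assumptions, not a prescribed format.

```python
# A sketch of an append-only model-change log, so "who changed the model and why"
# always has an answer. Field names and the file path are illustrative assumptions.
import json
from datetime import datetime, timezone

def log_model_change(model_name: str, version: str, author: str, reason: str,
                     log_path: str = "model_changes.jsonl") -> None:
    """Append one audit entry per model change."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": version,
        "author": author,
        "reason": reason,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_model_change("credit-risk-scorer", "v2.3.1", "adrian",
                 "Recalibrated after Q2 data drift review")
```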
Final Thoughts: AI doesn’t cost jobs. People misusing AI do.
Most AI mistakes aren’t made by the machines — they’re made by humans cutting corners, skipping checks, and hoping for the best. And the consequences? Lost credibility. Lost budgets. Lost roles.
But it doesn’t have to be that way.
Used wisely, AI becomes your competitive edge. A signal to leadership that you’re forward-thinking, capable, and ready for the future. Just don’t stumble on the same mistakes that are currently tripping up everyone else.
So the real question is: are you using AI… or is it quietly using you?
You may also like:
- Bridging the AI Skills Gap: Why Employers Must Step Up
- From Ethics to Arms: Google Lifts Its AI Ban on Weapons and Surveillance
- Or try the free version of Google Gemini by tapping here.
Author
Adrian is an AI, marketing, and technology strategist based in Asia, with over 25 years of experience in the region. Originally from the UK, he has worked with some of the world’s largest tech companies and successfully built and sold several tech businesses. Currently, Adrian leads commercial strategy and negotiations at one of ASEAN’s largest AI companies. Driven by a passion to empower startups and small businesses, he dedicates his spare time to helping them boost performance and efficiency by embracing AI tools. His expertise spans growth and strategy, sales and marketing, go-to-market strategy, AI integration, startup mentoring, and investments.
Life
FAKE FACES, REAL CONSEQUENCES: Should NZ Ban AI in Political Ads?
New Zealand has no laws preventing the use of deepfakes or AI-generated content in political campaigns. As the 2025 elections approach, is it time for urgent reform?
Published 3 weeks ago on May 14, 2025
By AIinAsia
TL;DR — What You Need to Know
- New Zealand political campaigns are already dabbling with AI-generated content — but without clear rules or disclosures.
- Deepfakes and synthetic images of ethnic minorities risk fuelling cultural offence and voter distrust.
- Other countries are moving fast with legislation. Why is New Zealand dragging its feet?
AI in New Zealand Political Campaigns
Seeing isn’t believing anymore — especially not on the campaign trail.
In the build-up to the 2025 local body elections, New Zealand voters are being quietly nudged into a new kind of uncertainty: Is what they’re seeing online actually real? Or has it been whipped up by an algorithm?
This isn’t science fiction. From fake voices of Joe Biden in the US to Peter Dutton deepfakes dancing across TikTok in Australia, we’ve already crossed the threshold into AI-assisted campaigning. And New Zealand? It’s not far behind — it just lacks the rules.
The National Party admitted to using AI in attack ads during the 2023 elections. The ACT Party’s Instagram feed includes AI-generated images of Māori and Pasifika characters — but nowhere in the posts do they say the images aren’t real. One post about interest rates even used a synthetic image of a Māori couple from Adobe’s stock library, without disclosure.
That’s two problems in one. First, it’s about trust. If voters don’t know what’s real and what’s fake, how can they meaningfully engage? Second, it’s about representation. Using synthetic people to mimic minority communities without transparency or care is a recipe for offence — and harm.
Copy-Paste Cultural Clangers
Australians already find some AI-generated political content “cringe” — and voters in multicultural societies are noticing. When AI creates people who look Māori, Polynesian or Southeast Asian, it often gets the cultural signals all wrong. Faces are oddly symmetrical, clothing choices are generic, and context is stripped away. What’s left is a hollow image that ticks the diversity box without understanding the lived experience behind it.
And when political parties start using those images without disclosure? That’s not smart targeting. That’s political performance, dressed up as digital diversity.
A Film-Industry Fix?
If you’re looking for a local starting point for ethical standards, look to New Zealand’s film sector. The NZ Film Commission’s 2025 AI Guidelines are already ahead of the game — promoting human-first values, cultural respect, and transparent use of AI in screen content.
The public service also has an AI framework that calls for clear disclosure. So why can’t politics follow suit?
Other countries are already acting. South Korea bans deepfakes in political ads 90 days before elections. Singapore outlaws digitally altered content that misrepresents political candidates. Even Canada is exploring policy options. New Zealand, in contrast, offers voluntary guidelines — which are about as enforceable as a handshake on a Zoom call.
Where To Next?
New Zealand doesn’t need to reinvent the wheel. But it does need urgent rules — even just a basic requirement for political parties to declare when they’re using AI in campaign content. It’s not about banning creativity. It’s about respecting voters and communities.
In a multicultural democracy, fake faces in real campaigns come with consequences. Trust, representation, and dignity are all on the line.
What do YOU think?
Should political parties be forced to declare AI use in their ads — or are we happy to let the bots keep campaigning for us?
You may also like:
- AI Chatbots Struggle with Real-Time Political News: Are They Ready to Monitor Elections?
- Supercharge Your Social Media: 5 ChatGPT Prompts to Skyrocket Your Following
- AI Solves the ‘Cocktail Party Problem’: A Breakthrough in Audio Forensics