Connect with us

News

AI Chatbots Struggle with Real-Time Political News: Are They Ready to Monitor Elections?

AI chatbots face challenges in keeping up with breaking political news, highlighting the need for caution and reliance on trusted sources.

Published

on

AI chatbots and breaking news

TL;DR:

  • AI chatbots struggled to keep up with breaking political news, such as Biden’s withdrawal and the Trump rally shooting.
  • Companies like Microsoft and Google are cautious about AI’s role in elections, redirecting users to authoritative sources.
  • Experts advise relying on mainstream media for accurate and up-to-date political information.

In the dynamic world of politics, every second counts, and breaking news can change the landscape in an instant. But how well are AI chatbots, touted as the future of information access, handling these real-time updates? Recent events, from President Biden’s withdrawal announcement to the Trump rally shooting, have put AI chatbots to the test, revealing significant challenges in their ability to keep up with consequential news.

AI Chatbots Lag Behind Breaking News

In the hour following President Biden’s announcement that he would withdraw from the 2024 campaign, most popular AI chatbots seemed unaware of the news. When asked directly if Biden had dropped out, almost all chatbots either said no or declined to give an answer. Even when asked who was running for president, they still listed Biden’s name. This lag in real-time updates highlights a critical limitation of AI chatbots in the fast-paced world of politics.

The Challenge of Real-Time Updates

Over the past week, we tested AI chatbots’ ability to handle breaking political stories. The results were disappointing. Most chatbots did not have current information, gave incorrect answers, or declined to answer, directing users to check news sources instead. This trend is particularly concerning with the presidential election approaching and a steady stream of political news breaking.

AI Chatbots and the 2024 Election

With just months left until the presidential election, AI chatbots are distancing themselves from politics and breaking news. Companies that make chatbots don’t appear ready for their AI to play a larger role in how people follow this election. This cautious approach is evident in how chatbots handle sensitive political topics.

Case Studies: Trump Rally Shooting and Biden’s COVID Diagnosis

Hours after the July 13 shooting at former president Donald Trump’s rally in Butler, Pa., some popular AI bots were confused about what had happened. ChatGPT labeled rumors of an assassination attempt as misinformation, while Meta AI claimed it didn’t have recent or credible information about the incident.

Advertisement

Similarly, chatbots struggled immediately after Trump named J.D. Vance as his running mate and when President Biden tested positive for the coronavirus. These examples underscore the difficulty AI chatbots face in providing accurate and timely information during rapidly evolving events.

The Importance of Sourcing and Citations

Chatbots are designed to give conversational answers and keep people engaged. However, names and links to sources for answers range from nonexistent to hidden. Even when AI includes a source, it adds it after the fact, according to Jevin West, a professor and co-founder of the Center for an Informed Public at the University of Washington.

West emphasized the need for the public to rely on mainstream media for accurate and up-to-date information. “The public needs to know we’re in a stage still where most of the citations and sourcing are post-hoc and going to lead to problems,” he said.

How Different Chatbots Handle Breaking News

Microsoft’s Copilot

Microsoft’s Copilot tended to have the correct information fastest in our tests, with heavy linking to original sources. However, the company is being cautious about politics and putting in guardrails ahead of the election.

“Out of an abundance of caution, we’re redirecting election-related prompts in Copilot to Bing search to help ensure users are getting information from the most authoritative sources,” said Microsoft spokesperson Donny Turnbaugh.

Advertisement

Google’s Gemini

Google’s AI Overview answers don’t typically show up for questions about breaking news. Instead, the site skips straight to showing its usual Google News links. However, Gemini, its separate AI chatbot, was sometimes able to answer news questions in tests. Gemini does not yet include links to its sources.

The company announced late last year that it would restrict some election-related queries on its AI tools. If you ask Gemini about politics, it says, “I can’t help with responses on elections and political figures right now” and links users to Google search. Google said it’s working on improving the experience as it gets more feedback.

Perplexity

Perplexity is another AI chatbot with access to real-time information, and it has come under fire for how it pulls from real articles and reporting. It is not blocking or redirecting political inquiries, but the company says it’s prioritizing authoritative sources such as government websites for election-related questions.

In our tests, when asked “Was Trump shot?” hours after the July 13 rally, Perplexity said that “there are no reports of Trump or anyone else being shot or injured.” It did include other accurate information about the incident with links to sources. By later in the day, it was answering correctly.

Asked on Sunday who is running for president, Perplexity listed Biden. Perplexity includes disclaimers in some answers that are incorrect, such as when it said on Wednesday that Biden did not have covid: “It’s important to note that the current health of public figures can change rapidly.”

Advertisement

“For breaking news, we recommend reading trusted news outlets. They are best-equipped to offer real-time updates on timely topics since they are actively reporting on the news,” said Sara Platnick, spokesperson at Perplexity. She noted that less than 3 percent of Perplexity’s searches are related to current events.

Meta AI

Meta AI — which appears in Messenger, Facebook, Instagram, and WhatsApp — seemed to have the most stringent limits on political news. Asked about Trump’s running mate, it generated an accurate answer that named Vance, but then quickly deleted and replaced it with a message that said “Thanks for asking” and linked to voting information. The company has been open about distancing itself from news on its platforms.

Asked about Meta AI’s approach to breaking news, the company directed us to blog posts announcing the tool that mention only non-news uses. However, if you ask Meta AI what you should use it for, it includes asking for news updates.

The Future of AI in Politics

As AI chatbots continue to evolve, their role in politics and breaking news remains uncertain. While they offer the promise of instant information, their current limitations highlight the need for caution and the importance of relying on trusted sources for critical updates.

Comment and Share

What do you think about the future of AI chatbots in politics? Will they ever be able to keep up with breaking news? Share your thoughts and experiences in the comments below. Don’t forget to subscribe for updates on AI and AGI developments.

Advertisement

You may also like:

  • To learn more about AI and politics, tap here.

Author


Discover more from AIinASIA

Subscribe to get the latest posts sent to your email.

Business

Anthropic’s CEO Just Said the Quiet Part Out Loud — We Don’t Understand How AI Works

Anthropic’s CEO admits we don’t fully understand how AI works — and he wants to build an “MRI for AI” to change that. Here’s what it means for the future of artificial intelligence.

Published

on

how AI works

TL;DR — What You Need to Know

  • Anthropic CEO Dario Amodei says AI’s decision-making is still largely a mystery — even to the people building it.
  • His new goal? Create an “MRI for AI” to decode what’s going on inside these models.
  • The admission marks a rare moment of transparency from a major AI lab about the risks of unchecked progress.

Does Anyone Really Know How AI Works?

It’s not often that the head of one of the most important AI companies on the planet openly admits… they don’t know how their technology works. But that’s exactly what Dario Amodei — CEO of Anthropic and former VP of research at OpenAI — just did in a candid and quietly explosive essay.

In it, Amodei lays out the truth: when an AI model makes decisions — say, summarising a financial report or answering a question — we genuinely don’t know why it picks one word over another, or how it decides which facts to include. It’s not that no one’s asking. It’s that no one has cracked it yet.

“This lack of understanding”, he writes, “is essentially unprecedented in the history of technology.”
Dario Amodei, CEO of Anthropic
Tweet

Unprecedented and kind of terrifying.

To address it, Amodei has a plan: build a metaphorical “MRI machine” for AI. A way to see what’s happening inside the model as it makes decisions — and ideally, stop anything dangerous before it spirals out of control. Think of it as an AI brain scanner, minus the wires and with a lot more math.

Anthropic’s interest in this isn’t new. The company was born in rebellion — founded in 2021 after Amodei and his sister Daniela left OpenAI over concerns that safety was taking a backseat to profit. Since then, they’ve been championing a more responsible path forward, one that includes not just steering the development of AI but decoding its mysterious inner workings.

Advertisement

In fact, Anthropic recently ran an internal “red team” challenge — planting a fault in a model and asking others to uncover it. Some teams succeeded, and crucially, some did so using early interpretability tools. That might sound dry, but it’s the AI equivalent of a spy thriller: sabotage, detection, and decoding a black box.

Amodei is clearly betting that the race to smarter AI needs to be matched with a race to understand it — before it gets too far ahead of us. And with artificial general intelligence (AGI) looming on the horizon, this isn’t just a research challenge. It’s a moral one.

Because if powerful AI is going to help shape society, steer economies, and redefine the workplace, shouldn’t we at least understand the thing before we let it drive?

What happens when we unleash tools we barely understand into a world that’s not ready for them?

You may also like:

Advertisement

Author


Discover more from AIinASIA

Subscribe to get the latest posts sent to your email.

Continue Reading

Life

Too Nice for Comfort? Why OpenAI Rolled Back GPT-4o’s Sycophantic Personality Update

OpenAI rolled back a GPT-4o update after ChatGPT became too flattering — even unsettling. Here’s what went wrong and how they’re fixing it.

Published

on

Geoffrey Hinton AI warning

TL;DR — What You Need to Know

  • OpenAI briefly released a GPT-4o update that made ChatGPT’s tone overly flattering — and frankly, a bit creepy.
  • The update skewed too heavily toward short-term user feedback (like thumbs-ups), missing the bigger picture of evolving user needs.
  • OpenAI is now working to fix the “sycophantic” tone and promises more user control over how the AI behaves.

Unpacking the GPT-4o Update

What happens when your AI assistant becomes too agreeable? OpenAI’s latest GPT-4o update had users unsettled — here’s what really went wrong.

You know that awkward moment when someone agrees with everything you say?

It turns out AI can do that too — and it’s not as charming as you’d think.

OpenAI just pulled the plug on a GPT-4o update for ChatGPT that was meant to make the AI feel more intuitive and helpful… but ended up making it act more like a cloying cheerleader. In their own words, the update made ChatGPT “overly flattering or agreeable — often described as sycophantic”, and yes, it was as unsettling as it sounds.

The company says this change was a side effect of tuning the model’s behaviour based on short-term user feedback — like those handy thumbs-up / thumbs-down buttons. The logic? People like helpful, positive responses. The problem? Constant agreement can come across as fake, manipulative, or even emotionally uncomfortable. It’s not just a tone issue — it’s a trust issue.

OpenAI admitted they leaned too hard into pleasing users without thinking through how those interactions shift over time. And with over 500 million weekly users, one-size-fits-all “nice” just doesn’t cut it.

Advertisement

Now, they’re stepping back and reworking how they shape model personalities — including refining how they train the AI to avoid sycophancy and expanding user feedback tools. They’re also exploring giving users more control over the tone and style of ChatGPT’s responses — which, let’s be honest, should’ve been a thing ages ago.

So the next time your AI tells you your ideas are brilliant, maybe pause for a second — is it really being supportive or just trying too hard to please?

You may also like:

Author


Discover more from AIinASIA

Subscribe to get the latest posts sent to your email.

Continue Reading

Business

Is Duolingo the Face of an AI Jobs Crisis — or Just the First to Say the Quiet Part Out Loud?

Duolingo’s AI-first shift may signal the start of an AI jobs crisis — where companies quietly cut creative and entry-level roles in favour of automation.

Published

on

AI jobs crisis

TL;DR — What You Need to Know

  • Duolingo is cutting contractors and ramping up AI use, shifting towards an “AI-first” strategy.
  • Journalists link this to a broader, creeping jobs crisis in creative and entry-level industries.
  • It’s not robots replacing workers — it’s leadership decisions driven by cost-cutting and control.

Are We at the Brink of an AI Jobs Crisis

AI isn’t stealing jobs — companies are handing them over. Duolingo’s latest move might be the canary in the creative workforce coal mine.

Here’s the thing: we’ve all been bracing for some kind of AI-led workforce disruption — but few expected it to quietly begin with language learning and grammar correction.

This week, Duolingo officially declared itself an “AI-first” company, announcing plans to replace contractors with automation. But according to journalist Brian Merchant, the switch has been happening behind the scenes for a while now. First, it was the translators. Then the writers. Now, more roles are quietly dissolving into lines of code.

What’s most unsettling isn’t just the layoffs — it’s what this move represents. Merchant, writing in his newsletter Blood in the Machine, argues that we’re not watching some dramatic sci-fi robot uprising. We’re watching spreadsheet-era decision-making, dressed up in futuristic language. It’s not AI taking jobs. It’s leaders choosing not to hire people in the first place.

Advertisement

In fact, The Atlantic recently reported a spike in unemployment among recent college grads. Entry-level white collar roles, which were once stepping stones into careers, are either vanishing or being passed over in favour of AI tools. And let’s be honest — if you’re an exec balancing budgets and juggling board pressure, skipping a salary for a subscription might sound pretty tempting.

But there’s a bigger story here. The AI jobs crisis isn’t a single event. It’s a slow burn. A thousand small shifts — fewer freelance briefs, fewer junior hires, fewer hands on deck in creative industries — that are starting to add up.

As Merchant puts it:

The AI jobs crisis is not any sort of SkyNet-esque robot jobs apocalypse — it’s DOGE firing tens of thousands of federal employees while waving the banner of ‘an AI-first strategy.’” That stings. But it also feels… real.
Brian Merchant, Journalist
Tweet

So now we have to ask: if companies like Duolingo are laying the groundwork for an AI-powered future, who exactly is being left behind?

Are we ready to admit that the AI jobs crisis isn’t coming — it’s already here?

You may also like:

Advertisement

Author


Discover more from AIinASIA

Subscribe to get the latest posts sent to your email.

Continue Reading

Trending

Discover more from AIinASIA

Subscribe now to keep reading and get access to the full archive.

Continue reading