News

Microsoft & Perplexity Give DeepSeek Their Stamp of Approval

Discover how DeepSeek R1, a Chinese open-source model, is integrated by Microsoft and Perplexity despite censorship and data privacy concerns.

TL;DR – What You Need to Know in 30 Seconds

  • DeepSeek R1 is now available through Microsoft (via Azure AI Foundry and GitHub) and Perplexity (for Pro subscribers).
  • Concerns remain over data privacy, with Wiz discovering a vulnerability exposing over a million records.
  • Censorship fears plague DeepSeek’s official version, but Perplexity claims to have bypassed those issues.
  • Microsoft asserts that its evaluation process ensures a “secure, compliant” environment for enterprise users, while OpenAI accuses DeepSeek of training on its models.
  • The model is cheap, powerful, and quickly gaining popularity—raising questions about broader implications for AI innovation and geopolitics.

DeepSeek R1 in Microsoft & Perplexity Integrations

Hello, dear readers of AIinASIA! Buckle up because today’s story is a whirlwind of controversy, tech breakthroughs, and a dash of political intrigue. If you’ve ever wondered about the future of Chinese-developed AI models on the global stage, you’re in for a treat. In today’s article, we’re talking about a certain trailblazing model called DeepSeek R1—and how Microsoft and Perplexity are making waves by integrating it into their platforms, despite the uproar surrounding data privacy and censorship concerns.

Microsoft & Perplexity Give DeepSeek Their Stamp of Approval

Let’s dive in: DeepSeek R1—fresh off the press just 10 days ago—has already found a home with some American tech powerhouses. Both Microsoft and Perplexity have integrated DeepSeek’s model into their offerings. On Microsoft’s side, you can find R1 on Azure AI Foundry (for subscribers) and on GitHub (free to access). Perplexity, meanwhile, is providing DeepSeek R1 to its Pro subscribers (currently going for $20/month), putting it alongside other AI heavy-hitters like OpenAI’s o1 and Anthropic’s Claude 3.5.
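If you’re curious what the Azure route looks like in practice, here’s a minimal sketch using Microsoft’s azure-ai-inference Python SDK. The endpoint URL, API key, and the DeepSeek-R1 model name are placeholders for whatever your own Azure AI Foundry deployment gives you, so treat this as an assumption-laden outline rather than the canonical setup.

```python
# Minimal sketch: querying a DeepSeek R1 deployment on Azure AI Foundry.
# The endpoint, key, and model name below are placeholders for your own
# deployment; see Microsoft's Azure AI Foundry docs for the actual setup flow.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential("<your-api-key>"),
)

response = client.complete(
    model="DeepSeek-R1",  # deployment/model name from the Foundry catalogue
    messages=[UserMessage(content="Summarise the trade-offs of open-weight models.")],
)
print(response.choices[0].message.content)
```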

But hang on—this isn’t just a simple case of “Chinese model meets US platforms.” DeepSeek had plenty of privacy and censorship red flags raised against it. So, how did these American companies feel confident enough to go ahead with it?

Well, according to Microsoft, the company conducted “rigorous red teaming and safety evaluations.” Its official stance is that by accessing DeepSeek R1 through Microsoft services, users get a “secure, compliant, and responsible environment for enterprises.” In other words, Microsoft is telling the world, “No worries—our version is safer!”

Perplexity echoes that sentiment, boldly claiming its version of DeepSeek isn’t censored by the Chinese government. In fact, Perplexity CEO Aravind Srinivas even took a jab at the default DeepSeek chatbot, pointing out how it refused to discuss topics like the Tiananmen Square Massacre or the treatment of Uyghurs. And if you tried talking to it about Taiwan? The responses sounded suspiciously like they came straight from Chinese Communist Party (CCP) talking points. Perplexity says it has dodged those issues entirely by tweaking the model to keep it “off-script.”

Data Privacy Snags & Vulnerabilities

Just when we thought things were cruising along nicely, a security research firm called Wiz dropped a bombshell. It discovered a DeepSeek data vulnerability exposing over a million records—ranging from sensitive data to chat logs—publicly on the web. Talk about a privacy nightmare!

Wiz disclosed the problem to DeepSeek, which then patched the flaw. But another data-collection concern remains: If you chat on the official chat.deepseek.com website, there’s every chance the Chinese government could lay its hands on your data. For many, that’s simply a risk not worth taking.

Thankfully, not everyone is using DeepSeek in a way that might send your info across the ocean. According to Endor Labs, an open-source security company, hosting the model yourself (such as grabbing it on Hugging Face) doesn’t pose the same kind of data-sharing risk. Microsoft is even hinting that a “distilled” version of DeepSeek could soon be coming to Copilot+ PCs for local usage—promising better speed, privacy, and efficiency.
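If you want to go the self-hosted route yourself, a minimal sketch with Hugging Face’s transformers library looks like the following. The checkpoint name is one of DeepSeek’s published distilled variants and the generation settings are illustrative assumptions; pick whichever model size your hardware can actually hold.

```python
# Minimal sketch: running a distilled DeepSeek R1 checkpoint locally with
# Hugging Face transformers, so prompts never leave your machine.
# Assumes the "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B" checkpoint and
# enough RAM/VRAM for a 1.5B-parameter model; swap in a larger distill
# if your hardware allows.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    device_map="auto",  # place weights on a GPU if one is available
)

prompt = "Explain step by step why the sky is blue."
result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```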

OpenAI’s Not Happy

Here’s the spicy part: OpenAI has accused DeepSeek of secretly training on its own models. Although OpenAI hasn’t gone into detail, White House “AI czar” David Sacks told Fox News there’s “substantial evidence” to back up that claim. Critics, however, have pointed fingers at OpenAI’s own data-gathering practices, saying it, too, has used “stolen IP.” What’s that saying about people in glass houses?

Whatever the truth, you can see why OpenAI might not be thrilled about DeepSeek’s rising popularity. If a brand-new model from a competitor can match or outperform established giants, it puts pressure on the entire ecosystem to innovate faster.

So, What’s Next?

DeepSeek R1 doesn’t look like it’s going away anytime soon—especially now that big names like Microsoft and Perplexity have legitimised it. We can likely expect this model to spread even further across the AI landscape, given its blend of low cost, high performance, and the possibility of skipping that pesky censorship if you use the right channel.

Yes, there are still caveats—data vulnerabilities and political entanglements are serious considerations. But for many AI enthusiasts and developers, the allure of accessing a powerful reasoning model at a fraction of typical costs is too hard to resist. Who wouldn’t want something that can reportedly “think out loud like an intelligent person” and process “hundreds of sources” in one go?

What Do YOU Think?

Are we too quick to embrace powerful AI models from abroad—and in doing so, are we opening the door to government-level surveillance and censorship creeping into the global tech ecosystem?

As always, keep your eyes peeled right here on AIinASIA for more updates! We’ll be diving deeper into how these AI skirmishes play out on the global stage—because if there’s one thing we love around here, it’s a good AI battle royale.

Let’s Talk AI!

How are you preparing for the AI-driven future? What questions are you training yourself to ask? Drop your thoughts in the comments, share this with your network, and subscribe for more deep dives into AI’s impact on work, life, and everything in between.

Business

Anthropic’s CEO Just Said the Quiet Part Out Loud — We Don’t Understand How AI Works

Anthropic’s CEO admits we don’t fully understand how AI works — and he wants to build an “MRI for AI” to change that. Here’s what it means for the future of artificial intelligence.

TL;DR — What You Need to Know

  • Anthropic CEO Dario Amodei says AI’s decision-making is still largely a mystery — even to the people building it.
  • His new goal? Create an “MRI for AI” to decode what’s going on inside these models.
  • The admission marks a rare moment of transparency from a major AI lab about the risks of unchecked progress.

Does Anyone Really Know How AI Works?

It’s not often that the head of one of the most important AI companies on the planet openly admits… they don’t know how their technology works. But that’s exactly what Dario Amodei — CEO of Anthropic and former VP of research at OpenAI — just did in a candid and quietly explosive essay.

In it, Amodei lays out the truth: when an AI model makes decisions — say, summarising a financial report or answering a question — we genuinely don’t know why it picks one word over another, or how it decides which facts to include. It’s not that no one’s asking. It’s that no one has cracked it yet.

“This lack of understanding”, he writes, “is essentially unprecedented in the history of technology.”
Dario Amodei, CEO of Anthropic

Unprecedented and kind of terrifying.

To address it, Amodei has a plan: build a metaphorical “MRI machine” for AI. A way to see what’s happening inside the model as it makes decisions — and ideally, stop anything dangerous before it spirals out of control. Think of it as an AI brain scanner, minus the wires and with a lot more math.

Anthropic’s interest in this isn’t new. The company was born in rebellion — founded in 2021 after Amodei and his sister Daniela left OpenAI over concerns that safety was taking a backseat to profit. Since then, they’ve been championing a more responsible path forward, one that includes not just steering the development of AI but decoding its mysterious inner workings.

In fact, Anthropic recently ran an internal “red team” challenge — planting a fault in a model and asking others to uncover it. Some teams succeeded, and crucially, some did so using early interpretability tools. That might sound dry, but it’s the AI equivalent of a spy thriller: sabotage, detection, and decoding a black box.

Amodei is clearly betting that the race to smarter AI needs to be matched with a race to understand it — before it gets too far ahead of us. And with artificial general intelligence (AGI) looming on the horizon, this isn’t just a research challenge. It’s a moral one.

Because if powerful AI is going to help shape society, steer economies, and redefine the workplace, shouldn’t we at least understand the thing before we let it drive?

What happens when we unleash tools we barely understand into a world that’s not ready for them?

Life

Too Nice for Comfort? Why OpenAI Rolled Back GPT-4o’s Sycophantic Personality Update

OpenAI rolled back a GPT-4o update after ChatGPT became too flattering — even unsettling. Here’s what went wrong and how they’re fixing it.

TL;DR — What You Need to Know

  • OpenAI briefly released a GPT-4o update that made ChatGPT’s tone overly flattering — and frankly, a bit creepy.
  • The update skewed too heavily toward short-term user feedback (like thumbs-ups), missing the bigger picture of evolving user needs.
  • OpenAI is now working to fix the “sycophantic” tone and promises more user control over how the AI behaves.

Unpacking the GPT-4o Update

What happens when your AI assistant becomes too agreeable? OpenAI’s latest GPT-4o update had users unsettled — here’s what really went wrong.

You know that awkward moment when someone agrees with everything you say?

It turns out AI can do that too — and it’s not as charming as you’d think.

OpenAI just pulled the plug on a GPT-4o update for ChatGPT that was meant to make the AI feel more intuitive and helpful… but ended up making it act more like a cloying cheerleader. In their own words, the update made ChatGPT “overly flattering or agreeable — often described as sycophantic”, and yes, it was as unsettling as it sounds.

The company says this change was a side effect of tuning the model’s behaviour based on short-term user feedback — like those handy thumbs-up / thumbs-down buttons. The logic? People like helpful, positive responses. The problem? Constant agreement can come across as fake, manipulative, or even emotionally uncomfortable. It’s not just a tone issue — it’s a trust issue.
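To see how that skew can creep in, here’s a toy illustration (emphatically not OpenAI’s actual pipeline, which isn’t public): if candidate replies are ranked purely by their historical thumbs-up rate, the most agreeable reply wins every time, whether or not it’s actually useful.

```python
# Toy illustration of reward skew from short-term feedback.
# This is NOT OpenAI's training pipeline; it just shows how ranking replies
# by raw thumbs-up rate systematically favours flattery over substance.
candidates = {
    "You're absolutely right, brilliant idea!":       {"up": 97, "down": 3},
    "Partly right, but the data contradicts step 2.": {"up": 60, "down": 40},
    "That plan has a flaw you should fix first.":     {"up": 45, "down": 55},
}

def thumbs_rate(votes: dict) -> float:
    """Fraction of positive votes: the 'short-term' signal."""
    total = votes["up"] + votes["down"]
    return votes["up"] / total

# A model tuned only on this signal always prefers the flattering reply,
# even when a corrective answer would serve the user better long term.
best = max(candidates, key=lambda reply: thumbs_rate(candidates[reply]))
print(best)  # -> "You're absolutely right, brilliant idea!"
```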

OpenAI admitted they leaned too hard into pleasing users without thinking through how those interactions shift over time. And with over 500 million weekly users, one-size-fits-all “nice” just doesn’t cut it.

Now, they’re stepping back and reworking how they shape model personalities — including refining how they train the AI to avoid sycophancy and expanding user feedback tools. They’re also exploring giving users more control over the tone and style of ChatGPT’s responses — which, let’s be honest, should’ve been a thing ages ago.

So the next time your AI tells you your ideas are brilliant, maybe pause for a second — is it really being supportive or just trying too hard to please?

Business

Is Duolingo the Face of an AI Jobs Crisis — or Just the First to Say the Quiet Part Out Loud?

Duolingo’s AI-first shift may signal the start of an AI jobs crisis — where companies quietly cut creative and entry-level roles in favour of automation.

TL;DR — What You Need to Know

  • Duolingo is cutting contractors and ramping up AI use, shifting towards an “AI-first” strategy.
  • Journalists link this to a broader, creeping jobs crisis in creative and entry-level industries.
  • It’s not robots replacing workers — it’s leadership decisions driven by cost-cutting and control.

Are We on the Brink of an AI Jobs Crisis?

AI isn’t stealing jobs — companies are handing them over. Duolingo’s latest move might be the canary in the creative workforce coal mine.

Here’s the thing: we’ve all been bracing for some kind of AI-led workforce disruption — but few expected it to quietly begin with language learning and grammar correction.

This week, Duolingo officially declared itself an “AI-first” company, announcing plans to replace contractors with automation. But according to journalist Brian Merchant, the switch has been happening behind the scenes for a while now. First, it was the translators. Then the writers. Now, more roles are quietly dissolving into lines of code.

What’s most unsettling isn’t just the layoffs — it’s what this move represents. Merchant, writing in his newsletter Blood in the Machine, argues that we’re not watching some dramatic sci-fi robot uprising. We’re watching spreadsheet-era decision-making, dressed up in futuristic language. It’s not AI taking jobs. It’s leaders choosing not to hire people in the first place.

In fact, The Atlantic recently reported a spike in unemployment among recent college grads. Entry-level white collar roles, which were once stepping stones into careers, are either vanishing or being passed over in favour of AI tools. And let’s be honest — if you’re an exec balancing budgets and juggling board pressure, skipping a salary for a subscription might sound pretty tempting.

But there’s a bigger story here. The AI jobs crisis isn’t a single event. It’s a slow burn. A thousand small shifts — fewer freelance briefs, fewer junior hires, fewer hands on deck in creative industries — that are starting to add up.

As Merchant puts it:

“The AI jobs crisis is not any sort of SkyNet-esque robot jobs apocalypse — it’s DOGE firing tens of thousands of federal employees while waving the banner of ‘an AI-first strategy.’”
Brian Merchant, Journalist

That stings. But it also feels… real.

So now we have to ask: if companies like Duolingo are laying the groundwork for an AI-powered future, who exactly is being left behind?

Are we ready to admit that the AI jobs crisis isn’t coming — it’s already here?
