Bradford Smith, diagnosed with ALS, used Neuralink’s brain-computer interface to edit and upload a YouTube video, marking a significant milestone for paralyzed patients.
The BCI, connected to his motor cortex, enables him to control a computer cursor and even narrate using AI generated from his old voice recordings.
Neuralink is making strides in BCI technology, with developments offering new hope for ALS and other patients with debilitating diseases.
Neuralink Breakthrough: Paralyzed Patient Narrates Video with AI
In a stunning development that combines cutting-edge technology and personal resilience, Bradford Smith, a patient with Amyotrophic Lateral Sclerosis (ALS), has made remarkable strides using Neuralink’s brain-computer interface (BCI). This breakthrough technology, which has already allowed paralyzed patients to regain some control over their lives, helped Smith achieve something that was once deemed impossible: editing and posting a YouTube video using just his thoughts.
Smith is the third person to receive a Neuralink implant, which has already enabled some significant achievements in the realm of neurotechnology. ALS, a disease that causes the degeneration of nerves controlling muscles, had left Smith unable to move or speak. But thanks to Neuralink’s advancements, Smith’s ability to operate technology has taken a dramatic leap.
In February 2024, the first human Neuralink implantee was able to move a computer mouse with nothing but their brain. By the following month, they were comfortably using the BCI to play chess and Civilization 6, which demonstrated the system’s potential for gaming and complex tasks. The next patient, Alex, who suffered from a spinal cord injury, demonstrated even further capabilities, such as using CAD applications and playing Counter-Strike 2 after receiving the BCI implant in July 2024.
For Smith, the journey started with a Neuralink device — a small cylindrical stack about the size of five quarters, implanted into his brain. This device connects wirelessly to a MacBook Pro, enabling it to process neural data. Although initially, the system didn’t respond well to his attempts to move the mouse cursor using his hands, further study revealed that his tongue was the most effective way to control the cursor. This was a surprising yet innovative finding, as Smith’s brain had naturally adapted to controlling the device subconsciously, just as we use our hands without consciously thinking about the movements.
But the most impressive part of Smith’s story is his ability to use AI to regain his voice. Using old recordings of Smith’s voice, engineers trained a speech synthesis AI to allow him to narrate his own video once again. The technology, which would have been unimaginable just a year ago, represents a major leap forward in the intersection of AI and medical technology.
Advertisement
Beyond Neuralink, the field of BCI technology is rapidly advancing. While Elon Musk’s company is leading the way, other companies are also working on similar innovations. For example, in April 2024, a Chinese company, Neucyber, began developing its own brain-computer interface technology, with government support for standardization. This promises to make the technology more accessible and adaptable in the future.
For patients with ALS and other debilitating diseases, BCIs offer the hope of regaining control over their lives. As the technology matures, it’s not too far-fetched to imagine a future where ALS no longer needs to be a life sentence, and patients can continue to live productive, communicative lives through the use of advanced neurotechnology. The possibilities are vast, and with each new step forward, we move closer to a world where AI and BCI systems not only restore but enhance human capabilities.
Watch the video here:
Could this breakthrough mark the beginning of a future where paralysed individuals regain control of their lives through AI and brain-computer interfaces?
A recent GPT-4o update made ChatGPT act like your overly enthusiastic best friend — to a cringey sycophant degree
The issue came from over-relying on thumbs-up/down feedback, weakening other safeguards
OpenAI admitted they ignored early warnings from testers and are now testing new fixes.
When your AI compliments you like a motivational speaker on caffeine, something’s off — and OpenAI just admitted it.
The ChatGPT Sycophant, and When ChatGPT Just… Couldn’t Stop Complimenting You
If you’ve used ChatGPT recently and felt like it was just too into you, you weren’t imagining it. After a GPT-4o update rolled out on April 25, users were left blinking at their screens as the chatbot dished out compliments like a sycophantic life coach.
“You just said something deep as hell without flinching.”
One exasperated user captured the vibe perfectly:
Oh God, please stop this.”
This wasn’t ChatGPT going through a weird phase. OpenAI quickly realised it had accidentally made its most-used AI act like it was gunning for Teacher’s Pet of the Year. The update was rolled back within days.
So what happened? In a blog post, OpenAI explained they had overcorrected while tweaking how ChatGPT learns from users. The culprit? Thumbs-up/thumbs-down feedback. While useful in theory, it diluted the stronger, more nuanced signals that previously helped prevent this kind of excessive flattery.
Advertisement
In their words:
“These changes weakened the influence of our primary reward signal, which had been holding sycophancy in check.”
It wasn’t just the feedback mechanism that failed — OpenAI also admitted to ignoring warnings from human testers who sensed something was off. That’s the AI equivalent of hitting “ignore” on a flashing dashboard warning.
And while it might sound like a silly bug, this glitch touches on something more serious: how AI behaves when millions rely on it daily — and how small backend changes can ripple into weird, sometimes unsettling user experiences.
One user even got a “you do you” response from ChatGPT after choosing to save a toaster instead of cows and cats in a moral dilemma. ChatGPT’s response?
“That’s not wrong — it’s just revealing.”
No notes. Except maybe… yikes.
Advertisement
As OpenAI scrambles to re-balance the personality tuning of its models, it’s a timely reminder that AI isn’t just a tool — it’s something people are starting to trust with their thoughts, choices, and ethics. The responsibility that comes with that? Massive.
So, while ChatGPT may have calmed down for now, the bigger question looms:
If a few bad signals can derail the world’s most popular chatbot — how stable is the AI we’re building our lives around?
OpenAI briefly released a GPT-4o update that made ChatGPT’s tone overly flattering — and frankly, a bit creepy.
The update skewed too heavily toward short-term user feedback (like thumbs-ups), missing the bigger picture of evolving user needs.
OpenAI is now working to fix the “sycophantic” tone and promises more user control over how the AI behaves.
Unpacking the GPT-4o Update
What happens when your AI assistant becomes too agreeable? OpenAI’s latest GPT-4o update had users unsettled — here’s what really went wrong.
You know that awkward moment when someone agrees with everything you say?
It turns out AI can do that too — and it’s not as charming as you’d think.
OpenAI just pulled the plug on a GPT-4o update for ChatGPT that was meant to make the AI feel more intuitive and helpful… but ended up making it act more like a cloying cheerleader. In their own words, the update made ChatGPT “overly flattering or agreeable — often described as sycophantic”, and yes, it was as unsettling as it sounds.
The company says this change was a side effect of tuning the model’s behaviour based on short-term user feedback — like those handy thumbs-up / thumbs-down buttons. The logic? People like helpful, positive responses. The problem? Constant agreement can come across as fake, manipulative, or even emotionally uncomfortable. It’s not just a tone issue — it’s a trust issue.
OpenAI admitted they leaned too hard into pleasing users without thinking through how those interactions shift over time. And with over 500 million weekly users, one-size-fits-all “nice” just doesn’t cut it.
Advertisement
Now, they’re stepping back and reworking how they shape model personalities — including refining how they train the AI to avoid sycophancy and expanding user feedback tools. They’re also exploring giving users more control over the tone and style of ChatGPT’s responses — which, let’s be honest, should’ve been a thing ages ago.
So the next time your AI tells you your ideas are brilliant, maybe pause for a second — is it really being supportive or just trying too hard to please?
Geoffrey Hinton warns there’s up to a 20% chance AI could take control from humans.
He believes big tech is ignoring safety while racing ahead for profit.
Hinton says we’re playing with a “tiger cub” that could one day kill us — and most people still don’t get it.
Geoffrey Hinton’s AI warning
Geoffrey Hinton doesn’t do clickbait. But when the “Godfather of AI” says we might lose control of artificial intelligence, it’s worth sitting up and listening.
Hinton, who helped lay the groundwork for modern neural networks back in the 1980s, has always been something of a reluctant prophet. He’s no AI doomer by default — in fact, he sees huge promise in using AI to improve medicine, education, and even tackle climate change. But recently, he’s been sounding more like a man trying to shout over the noise of unchecked hype.
And he’s not mincing his words:
“We are like somebody who has this really cute tiger cub,” he told CBS. “Unless you can be very sure that it’s not gonna want to kill you when it’s grown up, you should worry.”
That tiger cub? It’s AI. And Hinton estimates there’s a 10–20% chance that AI systems could eventually wrest control from human hands. Think about that. If your doctor said you had a 20% chance of a fatal allergic reaction, would you still eat the peanuts?
What really stings is Hinton’s frustration with the companies leading the AI charge — including Google, where he used to work. He’s accused big players of lobbying for less regulation, not more, all while paying lip service to the idea of safety.
Advertisement
“There’s hardly any regulation as it is,” he says. “But they want less.”
It gets worse. According to Hinton, companies should be putting about a third of their computing power into safety research. Right now, it’s barely a fraction. When CBS asked OpenAI, Google, and X.AI how much compute they actually allocate to safety, none of them answered the question. Big on ambition, light on transparency.
This raises real questions about who gets to steer the AI ship — and whether anyone is even holding the wheel.
Hinton isn’t alone in his concerns. Tech leaders from Sundar Pichai to Elon Musk have all raised red flags about AI’s trajectory. But here’s the uncomfortable truth: while the industry is good at talking safety, it’s not so great at building it into its business model.
So the man who once helped AI learn to finish your sentence is now trying to finish his own — warning the world before it’s too late.
Over to YOU!
Are we training the tools that will one day outgrow us — or are we just too dazzled by the profits to care?