Elon Musk insists Tesla is leading the charge in autonomous driving. Yet, after countless bold claims and billions invested, his Full Self-Driving (FSD) system still struggles with the basics: from recognising school bus stop signs to navigating pedestrian crossings. The reality is sobering: Tesla’s AI-enabled software looks less like a polished product and more like an experiment playing out on public roads.
- Tesla’s Full Self-Driving (Supervised) continues to make glaring errors, raising serious safety concerns.
- Regulators have left driver-assist systems largely unregulated, creating a legal grey zone.
- Musk’s financial incentives are directly tied to FSD adoption, fuelling questions over whether ambition is trumping safety.
A System That Can’t Read the Road
Forbes recently tested the latest FSD (version 13.2.9) in Los Angeles. The verdict was damning. The system ignored flashing pedestrian crossings, mishandled lane changes, and even accelerated when approaching a red light at the end of a freeway ramp. Most worryingly, Tesla still hasn’t solved a long-identified issue: its failure to stop for a flashing school bus sign, leading to repeated collisions with a mannequin child named “Timmy” during independent safety tests.
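What does it even mean for a scenario like the “Timmy” test to pass or fail? As a rough illustration only, here is a minimal Python sketch of a scenario-based safety check. Every name, threshold, and number in it is hypothetical; it implies nothing about Tesla’s software or the Dawn Project’s actual methodology, and only shows the basic idea of failing a system that never commands braking for a flagged hazard.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Observation:
    hazard: str          # e.g. "school_bus_flashing_sign" (hypothetical label)
    distance_m: float    # distance from the vehicle to the hazard
    speed_mps: float     # current vehicle speed

# A controller maps an observation to a brake command in [0, 1].
Controller = Callable[[Observation], float]

def required_decel(obs: Observation) -> float:
    """Constant deceleration needed to stop exactly at the hazard: v^2 / (2d)."""
    return obs.speed_mps ** 2 / (2 * obs.distance_m)

def scenario_passes(controller: Controller, obs: Observation,
                    max_decel: float = 8.0) -> bool:
    """Pass only if the commanded braking meets the physically required deceleration."""
    commanded = controller(obs) * max_decel  # fraction of maximum braking, in m/s^2
    return commanded >= required_decel(obs)

# A controller that ignores school bus signs, the failure mode the article
# describes, fails the scenario; 11.2 m/s is roughly 25 mph.
def ignores_bus(obs: Observation) -> float:
    return 0.0 if obs.hazard == "school_bus_flashing_sign" else 1.0

obs = Observation("school_bus_flashing_sign", distance_m=40.0, speed_mps=11.2)
print(scenario_passes(ignores_bus, obs))  # False: the controller never brakes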
By contrast, competitors like Waymo have demonstrated far more reliable behaviour, stopping appropriately for the very same hazards Tesla continues to misread.
The Regulatory Loophole
Why is this system even legal? The short answer is that driving-assist technology falls into a murky category. U.S. regulators classify Tesla’s FSD as Level 2 automation, meaning the driver must remain fully attentive. This definition allows Tesla to market its system as “Full Self-Driving (Supervised)” while shifting responsibility back to the human behind the wheel.
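The six SAE J3016 levels are real and well defined; the sketch below paraphrases them, though the function name and the supervision rule it encodes are simplifications for illustration, not a statement of law.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels, paraphrased."""
    NO_AUTOMATION = 0       # human does all the driving
    DRIVER_ASSISTANCE = 1   # assists with steering OR speed
    PARTIAL = 2             # steers AND controls speed; human must supervise
    CONDITIONAL = 3         # drives itself; human must take over on request
    HIGH = 4                # no human needed within a limited operating domain
    FULL = 5                # no human needed anywhere

def driver_must_supervise(level: SAELevel) -> bool:
    # At Level 2 and below the human remains the driver at all times, which is
    # the classification that lets FSD sidestep robotaxi-style approval.
    return level <= SAELevel.PARTIAL

print(driver_must_supervise(SAELevel.PARTIAL))  # True: FSD (Supervised)
print(driver_must_supervise(SAELevel.HIGH))     # False: e.g. a Waymo robotaxi
```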
As Professor Missy Cummings of George Mason University points out, “Driving-assist systems are unregulated, so there are no concerns about legality.”
The National Highway Traffic Safety Administration (NHTSA) can step in, but so far it has focused narrowly on driver monitoring rather than system safety. This gap creates an odd scenario: Tesla can sell an $8,000 add-on or a $99-a-month subscription for a feature that fails basic road safety tests, all without pre-approval.
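For context, some quick back-of-envelope maths on those two price points, assuming prices stay flat and ignoring financing (both figures come from the article itself):

```python
one_time_usd = 8_000   # upfront FSD purchase price
monthly_usd = 99       # monthly FSD subscription price

breakeven_months = one_time_usd / monthly_usd
print(f"~{breakeven_months:.0f} months ({breakeven_months / 12:.1f} years)")
# -> ~81 months (6.7 years) of subscribing before the one-time purchase pays off
```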
Musk’s Incentives
The controversy is amplified by Musk’s extraordinary pay deal, which hinges on hitting milestones such as one million Tesla robotaxis and ten million active FSD users over the next decade. For him, every additional FSD customer isn’t just revenue; it’s a step towards a trillion-dollar payout. Critics argue this creates a perverse incentive to promote FSD’s potential far beyond its actual performance. The same expansive ambition runs through his other ventures; see Elon Musk’s Big Bet: Data Centres in Orbit.
Meanwhile, Tesla faces mounting lawsuits and regulatory scrutiny. A federal jury in Florida recently ordered the company to pay $243 million in damages for a fatal Autopilot-linked crash. California’s DMV is also pressing to stop Tesla from using misleading product names like “Autopilot” and “Full Self-Driving.” For another case of emerging tech outpacing oversight, see AI Browsers Under Threat as Researchers Expose Deep Flaws.
Public Experiments in Real Time
Despite its name, FSD is far from autonomous. It requires constant vigilance, and reviewers frequently describe it as more stressful than conventional driving.
Dan O’Dowd, founder of the Dawn Project, is blunt: “This is an alpha-level product. It should never be in the customer’s hands. It’s just a prototype.”
From failing to avoid road debris to stopping abruptly mid-turn, FSD’s unpredictable behaviour has rattled even loyal Tesla owners. Edmunds notes that the software resists manual corrections and deactivates abruptly if drivers intervene, hardly reassuring when dodging potholes or avoiding collisions. The question of how much driving humans should hand over to machines is explored further in Will AI Agents Steal Your Job Or Help You Do It Better?
Calls for Accountability
With 59 fatalities already linked to Tesla’s driver-assist systems, calls for tighter oversight are growing. Former NHTSA administrator Mark Rosekind believes meaningful reform will require both regulatory strength and third-party validation. Without such safeguards, Tesla continues to test experimental AI on public roads while branding it as a premium product. For more on how other regions are tackling AI regulation, see Taiwan’s AI Law Is Quietly Redefining What “Responsible Innovation” Means.
The irony, as Cummings notes, is that FSD’s obvious flaws may be its only saving grace: “The one really good thing about how bad FSD is, is that most people understand it is terrible and watch it very closely.”
The Bigger Question
The central issue is not whether Tesla can eventually deliver safe autonomous driving, but whether it should be allowed to sell and promote a system this flawed today. When a technology repeatedly fails basic safety checks, is it really innovation or just negligence dressed up as progress? For more information on autonomous vehicle safety, the National Transportation Safety Board (NTSB) provides detailed reports and recommendations on emerging transportation technologies (https://www.ntsb.gov/safety/safety-issues/Pages/AV.aspx).
Latest Comments (4)
Agree, this "Full Self-Driving" is properly dodgy. Hope the regulators here in Singapore are keeping a close watch before any rollout.
This article really hits home. I remember seeing a dashcam video, probably on YouTube, of FSD here in Japan. It was struggling just to stay in its lane on a fairly standard highway. Made me wonder if the "full self-driving" moniker is more of a marketing ploy than reality. Those regulatory loopholes are quite concerning.
This article really nails it. We often see similar news over here, and FSD's reliability is a real worry. Safety has to come first; you can't ignore these basic problems just to rush out new technology. Regulation genuinely needs to catch up. Letting an "experiment" like this keep running on public roads is far too risky.
Seriously, given the FSD's dodgy performance, how long more can regulators here or elsewhere turn a blind eye, ah?