Elon Musk insists Tesla is leading the charge in autonomous driving. Yet, after countless bold claims and billions invested, his Full Self-Driving (FSD) system still struggles with the basics: from recognising school bus stop signs to navigating pedestrian crossings. The reality is sobering. Tesla's AI-enabled software looks less like a polished product and more like an experiment playing out on public roads.
- Tesla's Full Self-Driving (Supervised) continues to make glaring errors, raising serious safety concerns.
- Regulators have left driver-assist systems largely unregulated, creating a legal grey zone.
- Musk's financial incentives are directly tied to FSD adoption, fuelling questions over whether ambition is trumping safety.
A System That Can’t Read the Road
Forbes recently tested the latest FSD (version 13.2.9) in Los Angeles. The verdict was damning. The system ignored flashing pedestrian crossings, mishandled lane changes, and even accelerated when approaching a red light at the end of a freeway ramp. Most worryingly, Tesla still hasn’t solved a long-identified issue: its failure to stop for a flashing school bus sign, leading to repeated collisions with a mannequin child named “Timmy” during independent safety tests.
By contrast, competitors like Waymo have demonstrated far more reliable behaviour, stopping appropriately for the very same hazards Tesla continues to misread.
The Regulatory Loophole
Why is this system even legal? The short answer is that driving-assist technology falls into a murky category. U.S. regulators classify Tesla’s FSD as Level 2 automation, meaning the driver must remain fully attentive. This definition allows Tesla to market its system as “Full Self-Driving (Supervised)” while shifting responsibility back to the human behind the wheel.
As Professor Missy Cummings of George Mason University points out, “Driving-assist systems are unregulated, so there are no concerns about legality.”
The National Highway Traffic Safety Administration (NHTSA) can step in, but so far it has focused narrowly on driver monitoring rather than system safety. This gap creates an odd scenario: Tesla can sell an $8,000 add-on or a $99-a-month subscription for a feature that fails basic road safety tests, all without pre-approval.
Musk’s Incentives
The controversy is amplified by Musk's extraordinary pay deal, which hinges on hitting milestones such as one million Tesla robotaxis and ten million active FSD users over the next decade. For him, every additional FSD customer isn't just revenue; it's a step towards a trillion-dollar payout. Critics argue this creates a perverse incentive to promote FSD's potential far beyond its actual performance.
Meanwhile, Tesla faces mounting lawsuits and regulatory scrutiny. A federal jury in Florida recently ordered the company to pay $243 million in damages for a fatal Autopilot-linked crash. California's DMV is also pressing to stop Tesla from using misleading product names like "Autopilot" and "Full Self-Driving."
Public Experiments in Real Time
Despite its name, FSD is far from autonomous. It requires constant vigilance, and reviewers frequently describe it as more stressful than conventional driving.
Dan O’Dowd, founder of the Dawn Project, is blunt: “This is an alpha-level product. It should never be in the customer’s hands. It’s just a prototype.”
From failing to avoid road debris to stopping abruptly mid-turn, FSD's unpredictable behaviour has rattled even loyal Tesla owners. Edmunds notes that the software resists manual corrections and deactivates abruptly if drivers intervene, hardly reassuring when dodging potholes or avoiding collisions.
Calls for Accountability
With 59 fatalities already linked to Tesla's driver-assist systems, calls for tighter oversight are growing. Former NHTSA administrator Mark Rosekind believes meaningful reform will require both regulatory strength and third-party validation. Without such safeguards, Tesla continues to test experimental AI on public roads while branding it as a premium product.
The irony, as Cummings notes, is that FSD’s obvious flaws may be its only saving grace: “The one really good thing about how bad FSD is that most people understand it is terrible and watch it very closely.”
The Bigger Question
The central issue is not whether Tesla can eventually deliver safe autonomous driving, but whether it should be allowed to sell and promote a system this flawed today. When a technology repeatedly fails basic safety checks, is it really innovation, or just negligence dressed up as progress? For more information on autonomous vehicle safety, the National Transportation Safety Board (NTSB) provides detailed reports and recommendations on emerging transportation technologies (https://www.ntsb.gov/safety/safety-issues/Pages/AV.aspx).





