
AI in ASIA

Tesla's Full Self-Driving Software Is A Mess - Should It Even Be Legal?

Tesla's FSD logs 1 billion miles in 50 days yet fails basic safety tests, ignoring school buses and red lights in independent testing.

Intelligence Desk · 6 min read

AI Snapshot

The TL;DR: what matters, fast.

Tesla FSD logged 1+ billion miles in first 50 days of 2026 but still fails basic safety tests

System ignores school bus stop signs, accelerates toward red lights in independent testing

1.1 million subscribers pay up to $99/month for what critics call alpha-level prototype software

Tesla's FSD System Fails Basic Safety Tests Despite Billion-Mile Claims

Tesla's Full Self-Driving (FSD) software has logged over one billion miles in the first 50 days of 2026 alone. Yet recent testing reveals the system still struggles with fundamental safety protocols, from ignoring flashing school bus stop signs to accelerating towards red lights. This raises a pressing question: should experimental AI software be allowed on public roads when it repeatedly fails basic safety checks?

The disconnect between Tesla's ambitious mileage claims and real-world performance has sparked fresh debate over regulatory oversight. With 1.1 million active FSD subscribers now paying up to $99 monthly for what critics describe as an "alpha-level prototype," the stakes have never been higher.

FSD's Persistent Safety Failures Expose Core Problems

Forbes' recent testing of FSD version 13.2.9 in Los Angeles revealed alarming deficiencies. The system ignored flashing pedestrian crossings, botched lane changes, and accelerated when approaching a red light at the end of a freeway ramp. Most concerning was Tesla's continued failure to stop for flashing school bus signs during independent safety tests.


The notorious "Timmy" tests, which use a mannequin child behind a school bus with flashing stop signs, continue to result in collisions. This issue has persisted across multiple FSD versions, highlighting systemic problems with the AI's hazard recognition capabilities.

By contrast, competitors like Waymo have demonstrated reliable responses to these same scenarios. The gap in performance suggests Tesla's approach of training on public roads may be fundamentally flawed compared to more controlled development methods.

"This is an alpha-level product. It should never be in the customer's hands. It's just a prototype," says Dan O'Dowd, founder of the Dawn Project.

By The Numbers

  • Tesla vehicles logged over 1 billion FSD miles in the first 50 days of 2026, at a pace exceeding 20 million miles per day
  • Cumulative FSD miles have surpassed 8.4 billion globally, approaching Musk's 10 billion-mile benchmark for unsupervised driving
  • Major collision rate with FSD engaged: 1 per 5.3 million miles versus 1 per 1.2 million miles for average US drivers
  • Active FSD subscribers reached 1.1 million as of Q4 2025, representing 12.4% of Tesla's 8.9 million delivered vehicles
  • 59 fatalities have been linked to Tesla's driver-assist systems according to NHTSA data

Regulatory Grey Zone Enables Risky Experimentation

Tesla exploits a crucial loophole in automotive regulation. The US classifies FSD as Level 2 automation, meaning drivers must remain fully attentive. This classification allows Tesla to market "Full Self-Driving (Supervised)" whilst shifting legal responsibility to human operators.

The National Highway Traffic Safety Administration (NHTSA) focuses primarily on driver monitoring rather than system safety validation. Unlike pharmaceutical or aviation industries, driving-assist technology faces no mandatory pre-approval process before reaching consumers.

"Driving-assist systems are unregulated, so there are no concerns about legality," explains Professor Missy Cummings of George Mason University.

California's Department of Motor Vehicles is pushing to prevent Tesla from using misleading product names like "Autopilot" and "Full Self-Driving." However, federal action remains limited despite mounting evidence of system failures. This regulatory gap allows potentially dangerous software to reach public roads without adequate safety validation, echoing the oversight concerns raised in our coverage of AI therapy apps taking on Asia's culture of silence.

Financial Incentives Drive Aggressive Promotion

Elon Musk's extraordinary compensation package creates perverse incentives around FSD adoption. His pay deal hinges on achieving ambitious milestones including one million Tesla robotaxis and 10 million active FSD users over the next decade. Every additional FSD customer represents potential progress towards a trillion-dollar payout.

This financial structure raises questions about whether commercial interests are overriding safety considerations. Tesla faces mounting legal challenges, including a recent $243 million damages award from a Florida jury for a fatal Autopilot-linked crash.

FSD Performance Metric         Tesla Claims             Independent Testing
School Bus Stop Recognition    Continuously Improving   Repeated Failures
Pedestrian Crossing Safety     Superhuman Performance   Ignores Flashing Signals
Red Light Detection            Better Than Human        Accelerates Towards Signals
Driver Stress Levels           Relaxing Experience      More Stressful Than Manual

The gap between Tesla's marketing promises and actual performance has created a dangerous disconnect. Customers paying premium prices expect polished functionality, yet receive software that requires constant vigilance. This mirrors broader concerns about AI reliability explored in our piece "Would you trust Tesla's Grok AI more than your friends?"

Asia-Pacific Rollout Raises New Safety Concerns

Tesla plans full FSD deployment in China by March 2026, pending data-security approvals. The UAE launch is scheduled for early 2026 as part of Tesla's global software monetisation strategy. These expansions occur despite unresolved safety issues in existing markets.

Asian road conditions often present unique challenges including dense traffic, motorcycles, and different traffic patterns. Deploying FSD software that struggles with basic American scenarios could prove even more problematic in complex Asian urban environments.

The regional expansion strategy appears driven more by revenue targets than safety readiness. With Tesla facing declining EV demand globally, software subscriptions represent a crucial growth avenue. However, this commercial pressure shouldn't override fundamental safety considerations.

Key concerns for Asian markets include:

  • Dense urban traffic requiring split-second decision making
  • Mixed vehicle types including motorcycles and commercial vehicles
  • Different traffic signal and signage systems
  • Varying regulatory oversight and enforcement capabilities
  • Cultural differences in driving behaviour and road etiquette

This expansion mirrors broader patterns of AI deployment across Asia, as explored in our analysis "AI already changed how Asia shops. Most people missed it."

FAQ: Tesla FSD Safety and Regulation

Why is Tesla's FSD legal if it fails safety tests?

FSD is classified as Level 2 automation requiring constant driver supervision. This regulatory classification allows Tesla to shift responsibility to human operators whilst marketing the system as "Full Self-Driving (Supervised)" without mandatory safety pre-approval.

How does Tesla's safety record compare to human drivers?

Tesla claims one major collision per 5.3 million FSD miles versus 1.2 million miles for average drivers. However, critics argue these statistics don't account for the types of roads, weather conditions, or driver intervention rates that may skew comparisons.

What regulatory changes could improve FSD safety?

Experts suggest mandatory third-party validation before deployment, standardised safety testing protocols, clearer naming requirements, and stronger oversight of marketing claims. Some propose treating driving-assist systems more like medical devices requiring regulatory approval.

Will Tesla achieve true autonomous driving soon?

Musk claims 10 billion training miles may enable unsupervised driving, with current totals exceeding 8.4 billion. However, persistent failures on basic scenarios suggest fundamental algorithmic issues beyond simple data collection problems.

How do competitors compare to Tesla's FSD?

Companies like Waymo demonstrate superior performance on standard safety tests, particularly school bus recognition and pedestrian crossing scenarios. However, their more limited deployment areas make direct comparisons challenging across all conditions and environments.

The AIinASIA View: Tesla's FSD represents a dangerous experiment masquerading as a consumer product. Whilst Musk's billion-mile claims sound impressive, they obscure fundamental safety failures that should disqualify the system from public roads. The regulatory loophole allowing Level 2 classification needs urgent closure. We believe Asian regulators have an opportunity to lead by demanding rigorous safety validation before approving FSD deployment. Commercial incentives shouldn't override public safety, regardless of how many miles of data Tesla collects.

The central question isn't whether Tesla will eventually deliver safe autonomous driving, but whether society should tolerate this prolonged public testing phase. When experimental AI software repeatedly fails basic safety protocols whilst generating billions in subscription revenue, we must ask if innovation has become indistinguishable from negligence.

Similar concerns about AI safety and regulation appear across multiple sectors, from the rise of AI companions across Asia to broader questions about responsible AI deployment. The Tesla FSD controversy may well become a defining test case for how we balance technological ambition against public safety.

What's your view on Tesla's FSD safety record and regulatory oversight? Should experimental AI systems be allowed on public roads, or do we need stronger validation requirements before deployment? Drop your take in the comments below.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


This article is part of the Governance Essentials learning path.


Latest Comments (4)

Maggie Chan (@maggiec)
26 October 2025

i keep thinking about that "Timmy" mannequin. it's one thing when an AI struggles with an edge case, but a flashing school bus sign? that's just basic vision. how can they be so far behind Waymo on fundamental perception when they're pushing FSD to customers? makes our compliance challenges look simple.

Maggie Chan (@maggiec)
17 October 2025

this is exactly what i mean about regulation lagging innovation, not just in the US but everywhere. we're building compliance automation for things that exist now, but the rules for emerging tech like FSD are such a mess. "legal grey zone" is putting it mildly. it's a wild west.

Benjamin Ng (@benng)
29 September 2025

this part about "full self-driving (supervised)" is exactly what we're wrestling with in edtech. we're building an LLM tutor, and the question of when to hand off to the student vs. when the AI "takes over" is huge. the line between assist and autonomous is so blurry, and the legal/ethical implications are still catching up to the tech.

Haruka Yamamoto (@haruka.y)
29 September 2025

The part about "Timmy" and the school bus sign really hits hard. I keep wondering, how do these continuous failures impact the trust parents have in this kind of technology, especially for something important like school transport?
