The problem of full self-driving is not one of sensors alone. That's the classic mistake that Tesla, Waymo and the rest make. Driving requires reasoning, thinking and, above all, modeling human behavior. When you drive, you actively model the behavior of the drivers around you. Is the person near me driving aggressively, cautiously, or erratically, as if drunk? Am I entering a zone where accidents are common? Is it OK to drive above the speed limit here because almost everyone does? I live close to 101. If you drove on 101 at the posted limit of 65 mph, you'd likely get killed, because the joker behind you is expecting you to drive at 75!
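To make the behavior-modeling point concrete, here's a minimal, purely illustrative sketch in Python: keep a belief over what kind of driver is near you and update it Bayesian-style from observed maneuvers. The driver types, observations and probabilities are all made up for illustration; a real driving stack would use far richer models.

```python
# Toy sketch of "model the driver next to you": a discrete Bayesian
# belief over hypothetical driver types, updated from observed maneuvers.
# All type names and probabilities are invented for illustration.

DRIVER_TYPES = ["cautious", "typical", "aggressive"]

# Hypothetical P(observation | driver type) for a few observable maneuvers.
LIKELIHOOD = {
    "tailgating":    {"cautious": 0.05, "typical": 0.25, "aggressive": 0.70},
    "hard_braking":  {"cautious": 0.10, "typical": 0.30, "aggressive": 0.60},
    "signals_early": {"cautious": 0.60, "typical": 0.30, "aggressive": 0.10},
}

def update_belief(belief, observation):
    """One Bayes update: posterior is proportional to likelihood * prior."""
    posterior = {t: LIKELIHOOD[observation][t] * belief[t] for t in DRIVER_TYPES}
    total = sum(posterior.values())
    return {t: p / total for t, p in posterior.items()}

# Start with a uniform prior over driver types.
belief = {t: 1.0 / len(DRIVER_TYPES) for t in DRIVER_TYPES}

# Watch the car behind us tailgate twice and update each time.
for obs in ["tailgating", "tailgating"]:
    belief = update_belief(belief, obs)

print({t: round(p, 3) for t, p in belief.items()})
# -> {'cautious': 0.005, 'typical': 0.113, 'aggressive': 0.883}
```

Even this toy version shows the point: two observations shift the belief from uniform to roughly 88% "aggressive," and your own driving policy should change accordingly (e.g., don't hold exactly 65 mph in front of that car).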
In short, driving is what's often called an AI-complete problem: solving it fully is tantamount to solving general intelligence, because it requires fusing multiple modalities, from vision to motor coordination to common-sense reasoning. That's why FSD without human supervision is a pipe dream. The idea that Tesla Robotaxis without steering wheels or brakes will be approved by the government for public use is wishful thinking. At best, FSD is souped-up cruise control. It relieves you of constantly working the pedals and the wheel, and it can prevent an accident if you nod off. But there is no substitute for human-level behavior modeling.
If you analyze the self-driving car accidents in San Francisco, it reads like Murphy's law in action: if something can go wrong, it will.