Even After $100 Billion, Self-Driving Cars Are Going Nowhere

https://www.bloomberg.com/news/features/2022-10-06/even-after-100-billion-self-driving-cars-are-going-nowhere

Or if that doesn’t work:

https://archive.ph/nYvLu

No surprise to anyone who’s been paying attention to the crickets from the companies whose cars have been driving into random things lately.

Also… I’d missed this “hilarious fail.”

What could go wrong if you use Tesla’s Smart Summon on an airport ramp? I mean, there are only a handful of things to avoid. A handful of really, really expensive things you definitely don’t want to plow into. About that.

I’m honestly surprised they’ve got things working as well as they do.

Personally, I wish there were more interest in light rail & bike paths, but having a successful, (mostly) US-based company with any AI driving capability at all is pretty cool. That said… I’m quite happy with my Toyota :wink:

Have you guys seen Comma AI? In my opinion, it’s the company that will actually pull this off. Run by George Hotz, they’re taking it one step at a time, with none of the BS hype train.

You can add highway autopilot (great for road trips and commutes) to many newer cars for about $3k.

I’ve seen their stuff - I believe the 2017 Volt is the only model that works with their hardware. At least last I looked, it offered “basic lane holding and some faintly traffic-aware stuff.” Looks like it at least has driver monitoring!

Interesting, just requires cars I’m unlikely to own.

I think a big part of the problem is the human.

“AI” can do the basics like lane holding and adaptive cruise control (see the sketch below for roughly what lane holding boils down to), but it still struggles with unusual situations.

But people won’t treat it right - the moment it can handle 80-90% of the driving, people’s attention drops to damn near zero.

So the companies keep chasing full perfection, which is damn hard, but they know that’s the only way to sell it effectively.
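
For the curious, here’s roughly what the “basic lane holding” part boils down to once some perception layer hands you a lane-center estimate. This is a toy Python sketch - the function name, gains, and sign conventions are all invented for illustration, and no vendor’s actual stack is anywhere near this simple:

```python
# Toy sketch of "basic lane holding": a PD-style controller steering back
# toward lane center. Assumes perception already provides the offset and
# heading error; gains and sign conventions here are made up.

def steering_command(lateral_offset_m: float,
                     heading_error_rad: float,
                     kp: float = 0.08,
                     kd: float = 0.9) -> float:
    """Return a steering angle (rad) nudging the car back to lane center.

    lateral_offset_m: signed distance from lane center (left of center > 0)
    heading_error_rad: angle between car heading and lane direction
    """
    # The proportional term corrects position; the heading term acts as
    # damping so the car doesn't ping-pong across the lane.
    return -(kp * lateral_offset_m + kd * heading_error_rad)

# Drifting left of center and angled slightly left -> steer right (negative).
print(steering_command(0.4, 0.02))
```

The controller math is the easy part. The hard part is producing reliable offset and heading estimates from camera frames in rain, glare, faded paint, and construction zones - which is exactly where the “problems with unusual situations” live.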

This is just human nature. The self driving car groups are busily re-learning the lessons of aerospace automation from the 50s and 60s, with the bonus of the sort of arrogance that says those lessons aren’t of any value because [reasons].

Humans make terrible systems monitors. We’re fine in the loop, we’re fine if we have some time to catch up, and we really need a solid indicator of where to look when the automation wants to hand control back to us. The current approach puts humans firmly in the one role we know they can’t perform, and justifies it with nonsense.

I’ve zero problems with the “monitor the driver and let them know if they’re about to do something stupid” systems - because the driver is still in the loop. Blind spot monitors, lane departure monitors, emergency brake monitors? Great! Computers are wonderful at looking over the driver’s shoulder and going, “Uh, hey…” Humans are just terrible at doing it to computers.
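
To make that concrete, here’s a hedged Python sketch of the “Uh, hey…” style of system - a lane-departure nag that only fires when the drift looks unintentional. Every name and threshold here is invented for illustration; it’s not any manufacturer’s actual logic:

```python
# Hedged sketch of a lane-departure warning: speak up only when the drift
# looks unintentional. All names and thresholds are made up for illustration.
from dataclasses import dataclass

@dataclass
class VehicleState:
    lateral_offset_m: float  # signed distance from lane center
    lateral_rate_mps: float  # drift speed, same sign convention as offset
    turn_signal_on: bool

LANE_HALF_WIDTH_M = 1.8
WARN_MARGIN_M = 0.3   # start warning this close to the lane line
DRIFT_RATE_MPS = 0.2  # ignore slow, deliberate repositioning

def should_warn(s: VehicleState) -> bool:
    near_line = abs(s.lateral_offset_m) > LANE_HALF_WIDTH_M - WARN_MARGIN_M
    drifting_out = s.lateral_rate_mps * s.lateral_offset_m > 0  # moving away from center
    fast_enough = abs(s.lateral_rate_mps) > DRIFT_RATE_MPS
    # Turn signal on means the driver presumably means it, so stay quiet.
    return near_line and drifting_out and fast_enough and not s.turn_signal_on

print(should_warn(VehicleState(1.6, 0.4, False)))  # True: drifting out, no signal
print(should_warn(VehicleState(1.6, 0.4, True)))   # False: signaled lane change
```

The design point is the turn-signal check: the computer stays subordinate to the driver’s stated intent instead of second-guessing a deliberate lane change. The driver stays in the loop; the computer just taps them on the shoulder.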

The other problem I have with a lot of these systems is that there’s no real way to tell what the automation is seeing. If you’re relying on the human as the backup, you can’t show much of that UI without just distracting people - but trusting an opaque black box isn’t my style. Plenty of people put far more trust in these systems than they should, because they’ve no idea how much the system can miss and still seem to operate well. Your first clue that the system doesn’t see the motorcycle shouldn’t be when you run over the motorcycle.

Anyway, we’ll see how that whole market goes. Very little of what’s been claimed about it stands up to analysis, and you get most of the benefit just by shipping driver-assist systems anyway.