Dauric wrote:The point was in case of computer failure the human driver takes over, so we're talking about being suddenly sans-autopilot because of a failure. Problem is the vast majority of drivers, when presented with an autopilot, will pull out their smartphones and text or surf the web, or generally be -less- attentive than if they were actively driving the car, and thus less aware of a computer fault, or an unexpected circumstance the computer isn't handling in a safe manner.
The point was a bad point. People already do all of those things
simply because they feel confident from muscle memory. Show me data showing that drivers in autopilot cars actually do these things
more frequently than the average driver, or stop making up groundless horror stories.
It's a me problem for me in my own life, yes. It becomes a Google problem as soon as Google offers something that's supposed to have reliable navigation and doesn't, especially when that something is a thousand-plus pounds of vehicle tooling along at 30-60 MPH.
If I ask you, or Google, for directions to the emergency room, and you both get it wrong, I don't really give a fuck whose "fault" it is. The person I was racing to get there is already dead. The fault occurred, and people got hurt because of it.
If Google makes that fuckup less often than you (or anyone else) do, then I'm going to ask it for directions by default even if it isn't right 100% of the time. The people getting injured in car crashes don't give so much of a fuck (at least, until it's time for the lawsuit) whose fault it is, just that there was
a fault. They want fewer faults. Being afraid of "who's going to get blamed" is irrational and completely irrelevant to the decision.
That's not the point. No matter what the circumstance, something will happen that demands the driver take over NOW. I'm saying the driver won't be available NOW. It will be: "Huh, what?" CRASH! Because that's how fast things happen when they go south.
This is the key point. Any significant circumstances the computer can't handle are potentially highly unsafe, because human beings can't context-switch instantaneously, and in driving the acceptably safe reaction time goes down as the potential danger of the hazard goes up. A "take the wheel, I can't deal with this washboard gravel road" message is not a huge problem, because the couple of seconds it would take our hypothetical passenger-driver to put down his bagel and coffee and start driving the car is basically acceptable. An "OHMYGOD THIS DEER IS A SUICIDAL IDIOT" message or an "AHHHH FUCK FUCK BLACK ICE" message? Not something you can spare a few seconds for, that shit needs to be dealt with now.
It's the same perfect solution fallacy that's been repeated too many times in this thread, and it's as irrational as it was the first time.
There are, right now, people dying because they weren't able to solve a driving problem, like black ice or deer, "RIGHT NOW". This is not a problem that the computers are introducing. Therefore, it is only relevant if it occurs at a greater
rate with the computers than it already does without them.
From the elegant yelling of this compelling dispute comes the ghastly suspicion my opposition's a fruit.