morriswalters wrote:From Ars Technica. Read this review from Ars and laugh.
Two things stand out:
Ars Technica wrote:Everyone I've taken for a drive in this car has—completely unprompted—tried to say "OK Google" to it.[...]Our inner geek would love a whole car wired for sound so that anyone could shout "OK Google" like it's the Star Trek computer.
Can you imagine a radio announcer saying "OK Google, Hit the brakes!"
regarding texting while driving using this app, Ars Technica wrote:Stop-and-go traffic and red lights exist—with your foot on the break, those are likely perfectly safe times to read the computer screen or even poke at it.
I guess my first exploit won't work unless the radio announcer says "Ok Google, hit the breaks!"
But this is the kind of thinking I'm afraid of. Fortunately Google is enforcing certain safety provisions, despite complaints by this reviewer. For now.
Tyndmyr wrote:We do it [(expend lots of money to find murderers)] to prevent murders.
But why prevent murders in the first place? Is this a "leading cause of death"? That money could be spent curing heart disease, or preventing household accidents. Point is, we don't (and shouldn't) spend money based solely on number of lives saved.
The reason we track down murderers is to enforce a societal norm under which we want to live. And although it's consistent with wanting to save lives, that's not the underlying reason. We do this so we ourselves can feel safe from our neighbors, and although this puts restrictions on murderers, we don't care, because we are not murderers. Whether this is Good or Bad is debatable (substitute "troublemaker" for "murderer"), but OT for this topic. This is why we do it.
Tyndmyr wrote:What they have done with it in the past is sold advertising. This is not terribly fearsome.
The lessons of umwelt are lost on you.
Tyndmyr wrote:....no. Not thousands of miles away. That's not what was demonstrated. That would be a very different hack indeed.
They demonstrated it through a medium for which distance is no object. That is sufficient. If you truly think that this is not sufficient, then you might as well point out that they did not demonstrate it on a blue car, or on a station wagon.
Tyndmyr wrote:The hack that WAS demonstrated was not demonstrated on an computer driven car. So, it's not a vulnerability introduced by computer driven cars.
No, it is a vulnerability that is greatly amplified by computer driven cars. Unless the industry somehow prevents it. Which they do not appear to be doing, Ars Technica notwithstanding.
Tyndmyr wrote:I'm not comparing to compromised PCs.
You should be, because cars nowadays are nothing more than compromised PCs going at highway speed. With seat belts.
Negated wrote:As long as the local safety decisions override the wirelessly transmitted commands, it makes hacking on mass scale much less likely.
Yes, so long as the local system does not interact in any way with the networked system, which includes not being able to update it wirelessly. (Which is a safety issue in its own right.)
ucim wrote:Is isn't should.
There is no should morally. And you just spent a significant amount of time arguing the point. And there is no legislation yet, and no certainty about what it might need to say.
I'm not arguing morals (let alone objectivity in moral systems). I am stating what should happen (you can take that as IMHO, IMEO, or "because if it doesn't, there will be mustard"). You seem to be arguing "It won't happen, so let's be ok with it".
ucim wrote:Controlling access to simulators is not a good thing
You can't have it both ways. You want to impose some type of regulation on an industry which you have concerns about, some of which I share. You can't then argue that regulations for access to devices to teach you to fly Boeing Dream Liners are an abridgement of your freedom.
Yes I can, and without contradiction. It's important to decide which regulations should be imposed, on whom, and why. The kneejerk simulator (and actual flight training) restrictions fail all those tests. (Not to invoke argument from authority, but I happen to be a pilot, and have actual exposure to these regulations.)
morriswalters wrote:Google is approaching a million miles in their testing
Is that a big number? It's more than I can visualize. A million dollars doesn't buy what it used to, but a million pounds is more than I can bench press even on a good day. For comparison, there were three trillion miles driven in 2012 in the United States alone, making their test the equivalent of about one three-millionth of the yearly US traffic.
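For anyone who wants the arithmetic spelled out (round figures for illustration, not official statistics):

```python
# Rough sanity check of the comparison above, using round numbers.
google_test_miles = 1_000_000            # Google's reported test mileage
us_annual_miles = 3_000_000_000_000      # approx. US vehicle miles traveled in 2012

fraction = google_test_miles / us_annual_miles
print(f"{fraction:.2e}")   # about 3.33e-07, i.e. one three-millionth
```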
morriswalters wrote:The long game is about money. If Google or Ford or Mercedes can make money off this it will happen.
morriswalters wrote:They know their reputation will suffer if they don't get it right. They have an example that they don't want to follow in the first jet airliner, the de Havilland DH 106 Comet.
They have other examples they don't mind following, however: the Pinto. The GM ignition switch. The airbag fiasco. The Toyota accelerator (though never proven). And just about every major recall that happened, and the ones that haven't happened yet. But they keep doing it, and by "it" I don't mean making mistakes. I mean deliberately covering up known flaws, hoping nobody will notice and they can keep getting away with shaving sixty cents on each car they make. There are other examples to follow: the Sony rootkit. Internet Exploder. The Pentium FDIV bug. Flatscreen auto entertainment interfaces. And just about every software release known to man, woman, or whatever. Again, not because software is hard, but because release trumps ready.
Jose edit: quote mustard