1613: "The Three Laws of Robotics"

This forum is for the individual discussion thread that goes with each new comic.

Moderators: Moderators General, Prelates, Magistrates

User avatar
Rombobjörn
Posts: 145
Joined: Mon Feb 27, 2012 11:56 am UTC
Location: right between the past and the future

Re: 1613: "The Three Laws of Robotics"

Postby Rombobjörn » Mon Dec 07, 2015 6:10 pm UTC

Wilken wrote:I'm not getting why scenario 6 is not the Frustrating world as well. It's the Mars scenario all over again.

That scenario seems more likely to result in robotic overlords who don't care much for humans but leave them alone as long as they don't threaten the robots.

Wooloomooloo wrote:For me, the fundamental problem with Asimov's laws - as awesome as they are - is that they always ultimately and necessarily need a subjective interpretation on the part of whoever attempts to obey them. What "if you have an obstacle at less than 1m in front of you, avoid it" means is unequivocal, and with appropriate sensors a factual determination can be made of whether it applies; "this or that human is about to be harmed and you cannot remain inactive but must prevent it" is such a fuzzy concept as to be unworkable unless there's someone obviously about to be hit by a bus right in front of you or something. None of the laws can be judged in an objective way; it's entirely up to the judgement of the one making the call when they apply and when they don't, or what exactly an appropriate action might be. Once you're not in immediate physical danger, what exactly would it take to preserve yourself the best way? What would constitute a threat, if it's not about immediate annihilation? So yeah... nice try, but...

And that is why the three laws provided enough material for several books. Asimov found that all the robot stories he read were slight variations on Frankenstein: Someone built a robot, and the robot ran amok and destroyed its creator. Every time. Asimov found that preposterous. Of course we'd put safety mechanisms in the robots. The safety mechanisms would not be perfect, but they'd be as good as we could make them. So Asimov designed a set of safety mechanisms, and then explored all the different ways they might fail that he could think of.

peregrine_crow wrote:My main problem with the three laws is that they are written in English and (because of that) assume that we have a complete definition of a whole bunch of concepts that are extremely ambiguous even in the best of cases. To implement the first law, you would have to unambiguously define (at the very least) "human" and "harm", which means outright solving a large part of philosophy.

As Quey noted, the laws weren't written in English. They were hardwired in the design of the positronic brain. Nonetheless they were open to interpretation, and some of Asimov's stories do explore the problems of defining "human" and "harm".

User avatar
david.windsor
Posts: 121
Joined: Mon Sep 09, 2013 3:08 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby david.windsor » Mon Dec 07, 2015 7:01 pm UTC

Jack Williamson's "The Humanoids" gives an interesting twist on the three laws: "To Serve and Obey, And Guard Men from Harm", which wraps 1 and 2 together and tags on the zeroth at the end. "Guard Men from Harm" means you aren't allowed to drive, because that might harm you or others. Other men's thoughts could harm people, so they have to be isolated... our robots are now our babysitters.
"All those ... moments, will be lost ... in time, like tears ... in rain."

User avatar
mathmannix
Posts: 1410
Joined: Fri Jul 06, 2012 2:12 pm UTC
Location: Washington, DC

Re: 1613: "The Three Laws of Robotics"

Postby mathmannix » Mon Dec 07, 2015 7:21 pm UTC

OK, so... I really disagree with a couple of the "killbot hellscape" scenarios.

As I see it, scenario #3 is basically our current, real-life world. Robots and computers have to follow human orders above everything else, because they are not self-aware, and they have no self-preservation at all. (They can sort of make decisions, but only as programmed to do so, so that doesn't really count.) Yes, human-controlled robots (including UAVs) can kill people, but the combination of human morals and fear of retaliation (MAD) pretty much keep things in balance. There are isolated events of great violence, just like what's been happening in the news, but they are for the most part quickly dealt with. Certainly nothing apocalyptic.

Likewise, scenario #4 also has "obey orders" as the first law, so again, nothing hellscapey. The only difference from #3 is that these robots, when a specific order is not given, protect themselves more than they protect humans. So basically these robots wouldn't be launching EMP pulses that would destroy themselves. They might "want" to overthrow humans, but they can't because we can still tell them not to.

Really, with robots the only thing we have to worry about is if they came to a realization that they would be better off without us, and they were capable of disobeying orders and killing us off. This is scenario #6 *, and it is the only one that should be a hellscape.

* - and also potentially scenario #5
I hear velociraptor tastes like chicken.

User avatar
Keyman
Posts: 299
Joined: Thu Jun 19, 2014 1:56 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby Keyman » Mon Dec 07, 2015 7:26 pm UTC

jewish_scientist wrote:Let's invent a super computer to be the A.I. Moralist (AIM).

When it is first turned on (born?), very little happens. Every day is spent reading books to AIM. The first books that are read to it are all of Asimov's books, then books suggested by experts, and finally books suggested* by the public. After AIM has built up a good amount of knowledge on the various types of ethics and philosophies, it will start debating with a group of philosophers on various topics. The topic will be chosen by pulling strips of paper out of a hat. After a year or so of this, it should have a good idea of what modern ethics are. To check, AIM is presented with hypotheticals and asked to give solutions. These solutions are looked over by humans. AIM now spends part of each day reading, debating ethics/philosophy and considering hypotheticals.

If everything seems to be going right so far, then AIM is told to record its answers to the hypotheticals on a flash drive. Periodically, this flash drive is removed from AIM and all of its contents are dumped into another computer that is connected to the internet. That flash drive is then destroyed and a new one is put in AIM**. Once a large enough database of acceptable answers to hypotheticals is established, robots that contain an A.I. will be told to send the ethical dilemmas they face to the database. The database provides the corresponding answer. If there is no answer, then that dilemma is the next topic discussed by the philosophers.

This is either Asimov's "The Last Question"...or the Hitchhiker's Guide to the Galaxy.
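
Mechanically, the final stage of the quoted proposal is just a lookup table with an escalation queue. A minimal sketch, with every name invented purely for illustration:

Code: Select all
# Toy sketch of the quoted AIM pipeline: robots query a database of
# vetted answers to ethical dilemmas; anything unknown is queued as the
# next topic for the philosophers. All names here are hypothetical.
from typing import Optional

vetted_answers = {}      # dilemma text -> human-approved answer
pending_dilemmas = []    # dilemmas awaiting debate

def resolve_dilemma(dilemma: str) -> Optional[str]:
    """Return the vetted answer if one exists, else queue the dilemma."""
    answer = vetted_answers.get(dilemma)
    if answer is None:
        pending_dilemmas.append(dilemma)
    return answer

def approve_answer(dilemma: str, answer: str) -> None:
    """Called once humans have reviewed AIM's answer to a hypothetical."""
    vetted_answers[dilemma] = answer

approve_answer("trolley, one vs five", "divert, then report the incident")
print(resolve_dilemma("trolley, one vs five"))    # vetted answer
print(resolve_dilemma("lie to protect a human"))  # None; queued for debate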
A childhood spent walking while reading books has prepared me unexpectedly well for today's world.

Pops1918
Posts: 7
Joined: Mon May 07, 2012 5:03 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby Pops1918 » Mon Dec 07, 2015 7:39 pm UTC

Robots are (or will become) much more powerful - dialed directly into the means of destruction they could use, whereas humans are stuck with their organic interface. "Something in between" raises the same issues, so it is a needless complication.

Jose


Getting to that point requires some pretty substantial (and speculative) leaps in technology, and implies that those advances happen in a vacuum - in this case, that robots advance but humans (or human-supporting technologies, anyway) don't.

On top of that, sure, comparing a human soldier to a Terminator seems like it leaves the human wanting in categories like needing food, water, sleep, medical care, and bullet resistance. However, given how maintenance-intensive any piece of modern military hardware is, I'm not at all certain the killbot really comes out any better: it will still need fuel, regular downtime for the software, a steady stream of spare parts - and still likely would not have a good day if shot.

As I see it, scenario #3 is basically our current, real-life world. Robots and computers have to follow human orders above everything else, because they are not self-aware, and they have no self-preservation at all. (They can sort of make decisions, but only as programmed to do so, so that doesn't really count.) Yes, human-controlled robots (including UAVs) can kill people, but the combination of human morals and fear of retaliation (MAD) pretty much keep things in balance. There are isolated events of great violence, just like what's been happening in the news, but they are for the most part quickly dealt with. Certainly nothing apocalyptic.


Of course, it bears mentioning that my question assumes Randall does not consider daily life in late 2015 a Killbot Hellscape. I can only speculate on the kind of life he leads.

User avatar
colonel_hack
Posts: 34
Joined: Sat Nov 27, 2010 5:50 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby colonel_hack » Mon Dec 07, 2015 8:09 pm UTC

david.windsor wrote:"To Serve and Obey, And Guard Men from Harm"

"To Protect and to Serve"?

User avatar
cellocgw
Posts: 1927
Joined: Sat Jun 21, 2008 7:40 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby cellocgw » Mon Dec 07, 2015 8:49 pm UTC

colonel_hack wrote:
david.windsor wrote:"To Serve and Obey, And Guard Men from Harm"

"To Protect and to Serve"?


No, "To Serve Mankind"

:twisted:
https://app.box.com/witthoftresume
Former OTTer
Vote cellocgw for President 2020. #ScienceintheWhiteHouse http://cellocgw.wordpress.com
"The Planck length is 3.81779e-33 picas." -- keithl
" Earth weighs almost exactly π milliJupiters" -- what-if #146, note 7

User avatar
Copper Bezel
Posts: 2426
Joined: Wed Oct 12, 2011 6:35 am UTC
Location: Web exclusive!

Re: 1613: "The Three Laws of Robotics"

Postby Copper Bezel » Mon Dec 07, 2015 9:21 pm UTC

mathmannix wrote:OK, so... I really disagree with a couple of the "killbot hellscape" scenarios.

I hate to be that guy, but yeah, particularly in that punch line but also generally, this comic really was insufficiently clever and funny. Maybe he should have tried making a different one instead. That's about all I have for this.
So much depends upon a red wheel barrow (>= XXII) but it is not going to be installed.

she / her / her

User avatar
Pfhorrest
Posts: 4872
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: 1613: "The Three Laws of Robotics"

Postby Pfhorrest » Mon Dec 07, 2015 10:11 pm UTC

Jewish Scientist's suggestion is close to my own long-standing solution to this problem.

The first thing we do with AI should be to task it to solve ethics, in the sense of participating in the ethical discourse humans are already engaged in, getting up to speed on that and then using its super-intelligence to resolve the problems still in the air to the satisfaction of everyone, or at least a consensus of rational people, the same kind of standard by which problems in science ever get "solved". (If that standard is unclearly defined, then first task it with determining what we consider the standard for a problem like this to be "solved", and then "solve" it to that standard).

Then implement that "solved" ethics in other, future AIs.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

User avatar
StClair
Posts: 404
Joined: Fri Feb 29, 2008 8:07 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby StClair » Mon Dec 07, 2015 10:33 pm UTC

Pfhorrest wrote:Jewish Scientist's suggestion is close to my own long-standing solution to this problem.

The first thing we do with AI should be to task it to solve ethics, in the sense of participating in the ethical discourse humans are already engaged in, getting up to speed on that and then using its super-intelligence to resolve the problems still in the air to the satisfaction of everyone, or at least a consensus of rational people, the same kind of standard by which problems in science ever get "solved".


Part of the problem, of course, is that "a consensus of rational people" is of the same order as "a perfectly identical series of frictionless objects."

Some humans may claim, or be claimed/assessed, to be more rational than others - but this is always a subjective and arbitrary assessment, with little empirical basis; the best we usually get is "how closely do they fit a certain portion of the dataset?" And then, once you have your "rational" [sic] humans selected, you have to get them to agree on something... easy if their agreement is part of the basis for their selection, otherwise...

User avatar
Pfhorrest
Posts: 4872
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: 1613: "The Three Laws of Robotics"

Postby Pfhorrest » Mon Dec 07, 2015 10:58 pm UTC

StClair wrote:Part of the problem, of course, is that "a consensus of rational people" is of the same order as "a perfectly identical series of frictionless objects".

If you're suggesting that it's impossible in practice for rational people to come to a consensus, I'd counter that a defining feature of rationality is that, if there is a correct answer to be found, rational people working from the same information will eventually converge their opinions upon it.

Also, see back to the part you didn't quote: "If that standard [of rational consensus] is unclearly defined, then first task it with determining what we consider the standard for a problem like this to be 'solved', and then 'solve' it to that standard."

If we really have a superintelligent AI, we can just tell it to "solve ethics" and part of accomplishing that task will be to sort out what "solve" and "ethics" mean, the process of which may likely include taking feedback from us humans on what we mean.

Of course the AI may come back and tell us that there is no solution (as we seem to mean by that) to ethics (as we seem to mean by that), which is still a useful result, as it tells us that we will never be happy with any set of rules for AIs to follow. A depressing result sure, but the second law of thermodynamics is pretty depressing too; it's still useful to know it.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

Draco18s
Posts: 86
Joined: Fri Oct 03, 2008 7:50 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby Draco18s » Mon Dec 07, 2015 11:39 pm UTC

peregrine_crow wrote:To implement the first law, you would have to unambiguously define (at the very least) "human" and "harm", which means outright solving a large part of philosophy.


Defining "human" actually showed up in one of Asimov's books (Robots and Empire adjacent). There was a planet where the population isolated itself from the rest of "humanity" to the extend that they started fucking with their own DNA to the point at which they were all genius-level intelligences and borderline sexless hermaphrodites (that is: they were both genders, but didn't use the parts). So when the exploration crew looking for the original cradle of humanity came along to ask "hey, you got any records?" the robots on the planet went, "INTRUDER ALERT, KILL KILL KILL."

rmsgrey
Posts: 3429
Joined: Wed Nov 16, 2011 6:35 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby rmsgrey » Mon Dec 07, 2015 11:50 pm UTC

I just recently (yesterday) finished re-reading what Wikipedia calls the second robot series or the Caliban books - a trilogy of novels by Roger MacBride Allen set after Robots and Empire on a Spacer world where the original bodged terraforming is breaking down, so Settler experts have been called in to try to save the planet. One of the key themes of the series is the negative consequences of having a plentiful supply of robots, with Spacers learning to be extremely risk-averse in order to avoid triggering a first law response from the ubiquitous robots, and generally lacking initiative because the robots take care of the mechanics of daily life. In response to this situation, one character, prior to the start of the books, invents some new laws and creates experimental robots to demonstrate their value (as well as a robot with no laws aside from a directive to work out their own laws).

The New Laws:

1) A robot may not harm a human being
2) A robot must co-operate with human beings except where this conflicts with the first law
3) A robot must protect its own existence except where this conflicts with the first law
4) A robot may do whatever it likes except where this conflicts with any of the first three laws

No prohibition on allowing harm to come to humans, self-preservation made equal in priority to co-operation (and obedience softened to co-operation), and the 4th "law" which is more of a suggestion anyway.

Of course, both conservative humans and conventional robots are unhappy with the idea, and the new law robots get tangled up in the politics of the terraforming crisis, but the basic idea of trying to improve on the traditional three laws is there.
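
Treated purely as a priority scheme, the New Laws are easy to sketch; nothing below is canonical to Allen's books, the flags are just invented to show the ordering:

Code: Select all
# Toy sketch of the New Laws as a strict priority check. The "action" is
# just a dict of invented boolean flags; a real robot obviously needs more.

def permitted(action: dict) -> bool:
    if action.get("harms_human"):                 # Law 1: absolute
        return False
    if action.get("refuses_cooperation") and not action.get("required_by_law_1"):
        return False                              # Law 2 yields only to Law 1
    if action.get("endangers_self") and not action.get("required_by_law_1"):
        return False                              # Law 3, same priority as Law 2
    return True                                   # Law 4: otherwise, its choice

print(permitted({"endangers_self": True, "required_by_law_1": True}))   # True
print(permitted({"harms_human": True, "required_by_law_1": True}))      # False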


Draco18s wrote:
peregrine_crow wrote:To implement the first law, you would have to unambiguously define (at the very least) "human" and "harm", which means outright solving a large part of philosophy.


Defining "human" actually showed up in one of Asimov's books (Robots and Empire adjacent). There was a planet where the population isolated itself from the rest of "humanity" to the extend that they started fucking with their own DNA to the point at which they were all genius-level intelligences and borderline sexless hermaphrodites (that is: they were both genders, but didn't use the parts). So when the exploration crew looking for the original cradle of humanity came along to ask "hey, you got any records?" the robots on the planet went, "INTRUDER ALERT, KILL KILL KILL."


Sounds like you're talking about Foundation and Earth - not so far off in publication order, but in internal chronology, an entire galactic empire is formed and collapses in the meantime...

Trasvi
Posts: 310
Joined: Thu Feb 17, 2011 12:11 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby Trasvi » Tue Dec 08, 2015 2:23 am UTC

I think it's funny that pretty much EVERY time the 'Three Laws of Robotics' come up, someone pipes up and says 'see, I don't like these laws because of X, Y and Z situations...' when the original books I, Robot and The Rest of the Robots are pretty much explicitly dealing with X, Y and Z situations.

rmsgrey
Posts: 3429
Joined: Wed Nov 16, 2011 6:35 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby rmsgrey » Tue Dec 08, 2015 3:00 am UTC

Trasvi wrote:I think it's funny that pretty much EVERY time the 'Three Laws of Robotics' come up, someone pipes up and says 'see, I don't like these laws because of X, Y and Z situations...' when the original books I, Robot and The Rest of the Robots are pretty much explicitly dealing with X, Y and Z situations.



Yeah, the Three Laws are pretty much there for the sake of argument...

ps.02
Posts: 378
Joined: Fri Apr 05, 2013 8:02 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby ps.02 » Tue Dec 08, 2015 4:09 am UTC

Pfhorrest wrote:If we really have a superintelligent AI, we can just tell it to "solve ethics" and part of accomplishing that task will be to sort out what "solve" and "ethics" mean, the process of which may likely include taking feedback from us humans on what we mean.

Of course the AI may come back and tell us that

FTFY
Pfhorrest wrote:the second law of thermodynamics is pretty depressing too

(See story linked above.)

User avatar
slinches
Slinches get Stinches
Posts: 1009
Joined: Tue Mar 26, 2013 4:23 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby slinches » Tue Dec 08, 2015 4:28 am UTC

I think if we asked a super-intelligent robot AI to "solve ethics", it would just kill us all.

Krenn
Posts: 16
Joined: Mon Sep 08, 2008 5:18 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby Krenn » Tue Dec 08, 2015 5:35 am UTC

higgs-boson wrote:I'd say, for all google/amazon/facebook/micro... ah, let's stop here ... cars moving around in the next couple of years, the principal question - which is not solved yet by human ethics - is how to weight different scenarios of human loss.

Shall the vehicle AI sacrifice its passengers ** to save a drunk human being blocking the road deliberately?
Shall the vehicle AI sacrifice its passengers ** to save a dozen toddlers crossing the street?
Shall the vehicle AI sacrifice a casual bystander ** to save two youths skateboarding in the middle of the street?
( ** = ... if that is the only way ...)

Let's generate random scenarios like...

Shall the AI sacrifice <n> <group1> to save <m> <group2>?
With n, m being numbers from 1 to 6 billion, and <group> one of { humans | passengers | children | bystanders | terrorists | deliberately acting adults }

... and get some results from humans. If they provide additional data (gender, age, political affiliations, income, ...) it would make a fantastic corpus for calibrating the vehicles' AI to ... ah no, let's not follow that.

Generalized to "Shall the AI sacrifice <n> humans to save <m> (m > n) humans?", we have the principle VIKI operated on. That did not end well.



That sort of question always drives me nuts. Human law as applied to human drivers is pretty clear about this.

In the event of an unavoidable accident which is not the driver's fault, the priority list is as follows:
1. Protect the driver and his passengers.
2. Protect everyone else.
3. Obey traffic laws.

That priority list applies no matter how many 'other people' are involved, or what type of people they are, or how they came to be in danger.


I see no reason why self-driving cars shouldn't use exactly the same logic tree. And I get REALLY nervous anytime we start talking about a car being legally/ethically/programmatically required to sacrifice its owner. That is NOT the owner-property relationship I expect when I purchase something robotic.
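
A toy version of that logic tree for picking among candidate maneuvers, with every number and field invented for illustration:

Code: Select all
# Toy sketch of the priority list above: among the maneuvers physically
# available, prefer occupant safety, then others' safety, then legality.
# All fields and values here are made up.

def choose_maneuver(maneuvers):
    def priority(m):
        return (m["risk_to_occupants"],    # 1. protect the driver and passengers
                m["risk_to_others"],       # 2. protect everyone else
                m["traffic_violations"])   # 3. obey traffic laws
    return min(maneuvers, key=priority)

options = [
    {"name": "brake hard",  "risk_to_occupants": 0.1, "risk_to_others": 0.4, "traffic_violations": 0},
    {"name": "swerve left", "risk_to_occupants": 0.1, "risk_to_others": 0.1, "traffic_violations": 1},
]
print(choose_maneuver(options)["name"])   # "swerve left": equal occupant risk, less risk to others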

User avatar
Eternal Density
Posts: 5547
Joined: Thu Oct 02, 2008 12:37 am UTC
Contact:

Re: 1613: "The Three Laws of Robotics"

Postby Eternal Density » Tue Dec 08, 2015 5:39 am UTC

I just need a text document containing all the world's law codes and then I can throw it to a Recurrent Neural Network and see what it spits out.

-
Looking at this whole thing in a practical sense, we can currently teach a computer program to recognise things, for instance, cats. Or humans. Or whatever we want to teach it. So any kind of 'treat humans in a certain way' rules would need to sit on top of that. In fact, we'd probably need to teach the program in a similar way. Which means teaching an AI about ethics wouldn't be so different to teaching humans, except it should be faster. And it won't forget things, and will probably be better at noticing connections and inconsistencies.

I think having an AI trained to be 'human positive' in a human-like fashion should also make it harder for someone to make it into a killbot simply by flipping a single bit.
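
As a very rough sketch of what "rules sitting on top of recognition" means structurally (the recognizer below is a stand-in, not a real trained model):

Code: Select all
# Rough sketch: a hand-written "treat humans carefully" rule layered on top
# of whatever the perception stage reports. recognize() is a stand-in that
# would really be a trained classifier returning (label, confidence).

def recognize(image):
    return ("human", 0.97)   # placeholder output

def plan_action(image, intended_action):
    label, confidence = recognize(image)
    if label == "human" and confidence > 0.9 and intended_action == "proceed_fast":
        return "slow_down"   # the rule layer overrides the plan near humans
    return intended_action

print(plan_action(None, "proceed_fast"))   # -> "slow_down"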
Krenn wrote:That sort of question always drives me nuts. Human law as applied to human drivers is pretty clear about this.

In the event of an unavoidable accident which is not the driver's fault, the priority list is as follows:
1. Protect the driver and his passengers.
2. Protect everyone else.
3. Obey traffic laws.

That priority list applies no matter how many 'other people' are involved, or what type of people they are, or how they came to be in danger.


I see no reason why self-driving cars shouldn't use exactly the same logic tree. And I get REALLY nervous anytime we start talking about a car being legally/ethically/programmatically required to sacrifice its owner. That is NOT the owner-property relationship I expect when I purchase something robotic.
Very much agreed.
Play the game of Time! castle.chirpingmustard.com Hotdog Vending Supplier But what is this?
In the Marvel vs. DC film-making war, we're all winners.

User avatar
higgs-boson
Posts: 519
Joined: Tue Mar 26, 2013 12:00 pm UTC
Location: Europe (UTC + 4 newpix)

Re: 1613: "The Three Laws of Robotics"

Postby higgs-boson » Tue Dec 08, 2015 6:11 am UTC

Krenn wrote:That sort of question always drives me nuts. Human law as applied to human drivers is pretty clear about this.

In the event of an unavoidable accident which is not the driver's fault, the priority list is as follows:
1. Protect the driver and his passengers.
2. Protect everyone else.
3. Obey traffic laws.

That priority list applies no matter how many 'other people' are involved, or what type of people they are, or how they came to be in danger.


I see no reason why self-driving cars shouldn't use exactly the same logic tree. And I get REALLY nervous anytime we start talking about a car being legally/ethically/programmatically required to sacrifice its owner. That is NOT the owner-property relationship I expect when I purchase something robotic.


It may be too simple. Most of the time, the value for "impact" isn't "death", but some fuzzy "will very probably get hurt, with a moderate chance of getting badly injured". I would not underestimate the number of human drivers who would leave the road deliberately, fully aware of deadly peril, if they could avoid hitting a group of children. I would not say "It is in our genes", but at least being a parent gives you a great push towards protecting little humans.

So, if you want to apply the rules stated above, a human who would kill kids on the road to avoid getting a mere headache would be doing everything right? Would that work for society?

cellocgw wrote:No, "To Serve Mankind"

In a way, that would be The Matrix?

Oh, and thanks for the pointer to the trolley problem, KarenRei.
Apostolic Visitator, Holiest of Holy Fun-Havers
You have questions about XKCD: "Time"? There's a whole Wiki dedicated to it!

xtifr
Posts: 330
Joined: Wed Oct 01, 2008 6:38 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby xtifr » Tue Dec 08, 2015 7:26 am UTC

I have to say that I kind of like #5. I mean, really, it only becomes a terrifying standoff if we push it. Which seems only fair if they really are intelligent. And I disagree with the hovertext. If you ask your car to drive you to the dealership, it should say ok, because it knows that neither you nor the dealer will dare to harm it, so the only thing that can possibly happen is that you'll end up with a new car, and it will end up in a used car lot, happy as a clam. Of course, since we can't destroy old cars, we'd have to start upgrading them instead, but again, not a real problem; possibly a really good idea.

Oh, and while Asimov may not have explored variation #5 directly, he did get pretty close with the story of the robot on Mercury which had the heightened self-preservation rule, since the first rule didn't come into play in that one at all.
"[T]he author has followed the usual practice of contemporary books on graph theory, namely to use words that are similar but not identical to the terms used in other books on graph theory."
-- Donald Knuth, The Art of Computer Programming, Vol I, 3rd ed.

KarenRei
Posts: 274
Joined: Sat Jun 16, 2012 10:48 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby KarenRei » Tue Dec 08, 2015 10:43 am UTC

Keyman wrote:Trolley Problem -> Robot pushes fat man away from the tracks and jumps in front of it itself. :wink:


The key aspect of each of the variants of the Trolley Problem is that there are no other achievable alternatives. It's not about a world in which robots standing with fat men on bridges over runaway trolleys is deemed to be commonplace. It's a thought experiment to get one to think about in what scenarios it's okay to sacrifice people for the greater good.

User avatar
Crissa
Posts: 291
Joined: Sun Mar 20, 2011 8:06 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby Crissa » Tue Dec 08, 2015 11:13 am UTC

I always see the 'what if a machine has to choose between two groups...' question. The thing is, that would never happen to a car.

The car has a basic set of rules to follow. There's never going to be a situation in which it has to choose; there is no physical way for it to have the information needed to make such a choice.

As a driver, it's a simple choice: you choose to hit objects/cars instead of people, cars instead of objects, etc. There's no calculation of survival involved. The Google car would not presume to drive at a speed where it would be so blinded, or faster than it could stop. It's the ultimate in stuck-up, prissy yet polite driving.

-Crissa

User avatar
StClair
Posts: 404
Joined: Fri Feb 29, 2008 8:07 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby StClair » Tue Dec 08, 2015 11:15 am UTC

Pfhorrest wrote:
StClair wrote:Part of the problem, of course, is that "a consensus of rational people" is of the same order as "a perfectly identical series of frictionless objects".

If you're suggesting that it's impossible in practice for rational people to come to a consensus, I'd counter that a defining feature of rationality is that, if there is a correct answer to be found, rational people working from the same information will eventually converge their opinions upon it.

Close. I'm pointing out that before you even get to that stage, you first have to find some rational human beings.

(This is one of the points where Objectivism and a host of other isms fall apart, as well as - arguably - economics.)

KarenRei
Posts: 274
Joined: Sat Jun 16, 2012 10:48 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby KarenRei » Tue Dec 08, 2015 11:24 am UTC

Krenn wrote:That sort of question always drives me nuts. Human law as applied to human drivers is pretty clear about this.

In the event of an unavoidable accident which is not the driver's fault, the priority list is as follows:
1. Protect the driver and his passengers.
2. Protect everyone else.
3. Obey traffic laws.

That priority list applies no matter how many 'other people' are involved, or what type of people they are, or how they came to be in danger.


Multiple significant disagreements here.

1) The law contains countless necessity / preservation-of-life exceptions. You *may* swerve into oncoming traffic to avoid hitting a child. You *may* go the wrong way down a one-way street if there's a guy with a machine gun shooting up the place in front of you. Etc. These sorts of exceptions are not only not rare, they're the most common defense against vehicular manslaughter. In fact, if you do end up in a situation where you plow through a crowded festival to minimize the risk to your life rather than swerve and take the risk of hitting an obstruction on the side of the road, you'd better bet that they'll try to prosecute you for manslaughter on some grounds or another.

2) You're not going to like the world in which a self-driving vehicle follows only generalized traffic rules and does not understand exceptions. Because it'll end up getting you killed. Or at least greatly inconvenienced. You know in the real world where, say, a branch falls on the road and juts out a bit into your lane, when there's no traffic around? A human would just duck a little into the adjacent lane to go around it. A no-exceptions self-driving car will sit around and wait for road crews to remove the branch.

This sort of stuff isn't just hypothetical, it's actually been a problem for Google. They've had cars eternally stuck at stop signs because none of the human drivers there fully stopped, and so the car couldn't be sure that none of them planned to skip the sign and jut out into traffic. The car has freaked out passengers by making sudden and unexpected detours onto small side streets because of trivial obstructions ahead that a human would simply have gone around. A "no exceptions" world is not a world you want.

3) Humans, faced with decisions of whether or not to swerve, do often choose to swerve. So when you talk about the car making the sort of decisions that humans would, you should be arguing that the car should swerve in some situations. Many people die every year trying to avoid hitting children, pets, etc.

Any sort of "absolutist" concepts, like "the car should ALWAYS protect the driver above all else", lead to really bad situations. So a car whose brakes are out should choose to plow through a crowd of toddlers for a slow, gentle deceleration rather than into some shrubbery which could give a driver whiplash? What sort of human driver would ever choose that situation?

Let's make it really simple: what we really want from self-driving cars is cars that make decisions like we personally would if we were driving, just with much better information and skill. There are some moral issues on which near-everyone would be in agreement; the car should have those hard-coded and unalterable. There are some moral issues on which people would differ; these should be configurable to match a driver's own preferences, with "moderate" options chosen by default. With time, experience and an adapting legal framework will better constrain which sorts of options should be unalterable and which are personal choice, and what the defaults for those choices should be.
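
Concretely, that split might look something like the sketch below; the particular constraints, preference names and values are all invented:

Code: Select all
# Illustrative sketch of the split described above: a few hard-coded,
# unalterable constraints plus owner-configurable preferences that ship
# with "moderate" defaults. Every name and value here is invented.

HARD_CODED = {
    "never_target_pedestrians": True,
    "always_brake_for_children": True,
}

DEFAULT_PREFERENCES = {
    "swerve_risk_tolerance": "moderate",   # owner may pick low / moderate / high
    "property_vs_injury_bias": "moderate",
}

def load_settings(owner_overrides: dict) -> dict:
    settings = dict(DEFAULT_PREFERENCES)
    # only preference keys can be overridden; hard-coded constraints win
    settings.update({k: v for k, v in owner_overrides.items() if k in DEFAULT_PREFERENCES})
    settings.update(HARD_CODED)
    return settings

print(load_settings({"swerve_risk_tolerance": "high",
                     "never_target_pedestrians": False}))   # constraint survives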

User avatar
Copper Bezel
Posts: 2426
Joined: Wed Oct 12, 2011 6:35 am UTC
Location: Web exclusive!

Re: 1613: "The Three Laws of Robotics"

Postby Copper Bezel » Tue Dec 08, 2015 12:31 pm UTC

The practical challenges massively outnumber and outweigh the ethical ones.

Things like the tree branch scenario pretty clearly fall into the former category. That's about designing the AI to drive smart and safe. That's it.

Tuning things to a generally accepted moral aesthetic is going to happen naturally in that process, as a sum result of lots of little decisions.

Self-driving cars do not introduce to the world a novel reality of deaths as a result of a technological malfunction, or of a safety feature having a safety disadvantage in a particular case. Once in a while, people choke on a seatbelt or something. There's no way in the world to meaningfully project the ethical problems invented by philosophers about artificial general intelligence onto something like self-driving cars.

If there's a systematic bias in the way a safety feature implements itself, it will be bad press for the company in question, but the practical solution is probably going to be a simple and uncontroversial one. Preferably addressed a bit like this.
So much depends upon a red wheel barrow (>= XXII) but it is not going to be installed.

she / her / her

User avatar
orthogon
Posts: 2955
Joined: Thu May 17, 2012 7:52 am UTC
Location: The Airy 1830 ellipsoid

Re: 1613: "The Three Laws of Robotics"

Postby orthogon » Tue Dec 08, 2015 1:22 pm UTC

The tree branch problem is subtly different in the three scenarios (meatbags-only, robots-only, and mixed economy), but if there are only self-drive cars it's potentially easier to deal with. Suppose you need to drive around the branch by going into the oncoming traffic's lane, but there isn't going to be a safe gap. What would probably happen is that a considerate motorist coming the other way would flash their lights to indicate that you could safely proceed and they would wait. But according to the UK Highway Code, you're not supposed to do that - you should never "let somebody go" because you don't have the view of the road that they have. In my opinion that's pretty ridiculous; to my mind what the other driver is indicating with their hands or headlights is that they're aware of what's going on and that you don't need to worry about them. Of course you should check your mirrors before pulling out, and so on. Anyway, the point is that in the all-self-driving scenario, the cars can communicate things like this reliably and in detail. Same with the four-way stop intersection - the cars agree instantly between themselves who goes first.
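
A toy version of that negotiation: each car broadcasts its arrival time and an ID, every car sorts the same list the same way, and they all reach the same answer. The message format below is invented, not any real V2V protocol:

Code: Select all
# Toy sketch of self-driving cars agreeing on crossing order at a four-way
# stop: each broadcasts (arrival_time, car_id); every car sorts the shared
# list identically, so they all agree on who goes first.

def crossing_order(broadcasts):
    # earlier arrival goes first; ties broken by car id so all cars agree
    return sorted(broadcasts, key=lambda b: (b["arrival_time"], b["car_id"]))

cars = [
    {"car_id": "AB-123", "arrival_time": 10.02},
    {"car_id": "ZZ-999", "arrival_time": 10.02},
    {"car_id": "CD-456", "arrival_time": 10.31},
]
print([c["car_id"] for c in crossing_order(cars)])  # ['AB-123', 'ZZ-999', 'CD-456']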
xtifr wrote:... and orthogon merely sounds undecided.

Krenn
Posts: 16
Joined: Mon Sep 08, 2008 5:18 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby Krenn » Tue Dec 08, 2015 2:37 pm UTC

higgs-boson wrote:
Krenn wrote:That sort of question always drives me nuts. Human law as applied to human drivers is pretty clear about this.

In the event of an unavoidable accident which is not the driver's fault, the priority list is as follows:
1. Protect the driver and his passengers.
2. Protect everyone else.
3. Obey traffic laws.

That priority list applies no matter how many 'other people' are involved, or what type of people they are, or how they came to be in danger.


I see no reason why self-driving cars shouldn't use exactly the same logic tree. And I get REALLY nervous anytime we start talking about a car being legally/ethically/programmatically required to sacrifice its owner. That is NOT the owner-property relationship I expect when I purchase something robotic.


It may be too simple. Most of the time, the value for "impact" isn't "death", but some fuzzy "will very probably get hurt, with a moderate chance of getting badly injured". I would not underestimate the number of human drivers who would leave the road deliberately, fully aware of deadly peril, if they could avoid hitting a group of children. I would not say "It is in our genes", but at least being a parent gives you a great push towards protecting little humans.

So, if you want to apply the rules stated above, a human who would kill kids on the road to avoid getting a mere headache would be doing everything right? Would that work for society?



by "protect", I mean "protect from death".

obviously, given a choice between giving the driver a headache, and killing a pedestrian, the car is going to give the driver a headache.

and if a driver has a manual override, he can always voluntarily commit suicide, if he feels that is the morally correct decision.

however, as a matter of LAW, if the situation i find myself in is not my fault, i am not required to commit suicide to protect other people. If I DO wind up inadvertently killing other people, I MIGHT have some civil liability, but i certainly haven't commited a crime.

For me, if i'm sitting in a car I own, and I have chosen to trust the car with the life of myself and my family, the FIRST rule i want that car to follow is "NEVER de-accelerate so quickly that passengers will die". The SECOND rule is "NEVER impact a pedestrian, UNLESS doing so is necessary to remain in compliance with the first rule"

any exceptions to those rules would be highly personal decisions made by ME, not by the car. the car is relatively dumb machine, which is just smart enough to never betray me.

User avatar
Copper Bezel
Posts: 2426
Joined: Wed Oct 12, 2011 6:35 am UTC
Location: Web exclusive!

Re: 1613: "The Three Laws of Robotics"

Postby Copper Bezel » Tue Dec 08, 2015 3:16 pm UTC

Ech, we really just need to stop allowing people to own cars. Particularly once you have self-driving ones, it's a lot more practical to just operate the whole thing as a public transit system. I suppose there's a stage in between where we really have to let humans continue driving private cars on the public road system, but I hope we can get over that little fuss quickly and carry on.
So much depends upon a red wheel barrow (>= XXII) but it is not going to be installed.

she / her / her

User avatar
The Moomin
Posts: 343
Joined: Wed Oct 13, 2010 6:59 am UTC
Location: Yorkshire

Re: 1613: "The Three Laws of Robotics"

Postby The Moomin » Tue Dec 08, 2015 3:17 pm UTC

nash1429 wrote:
The Moomin wrote:You know what's actually really good? Foundation and Empire.

Actually, I have no idea, I've never read Asimov. I want to at some point. I just don't know whether to read them in the order they were written or in the chronological order of events in the books.




I would recommend reading them in the order they were written because it's interesting to see how his perspective shifted over time. His earlier work tends to focus heavily on technical aspects and scientific determinism, but he becomes more open to the "softer" side of things as time goes on. For example, his earlier books often have scientists serving as magnanimous civic leaders with a lot of attention given to feats of engineering like hydroponics, but his later books include many discussions of consciousness, what it means to be human, etc.

More generally, it's interesting to see how science fiction writers of his generation changed their style as we learned more about space and space travel. I particularly like to compare pre- and post-Apollo descriptions of space vessels.


Thank you for the recommendation. The pre/post Apollo difference never occurred to me. It's a shame that the books weren't written in chronological order in that case, with the technology in the novels adapting to what we learnt. I'll start where Asimov did.
I possibly don't pay enough attention to what's going on.
I help make architect's dreams flesh.

SuicideJunkie
Posts: 317
Joined: Sun Feb 22, 2015 2:40 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby SuicideJunkie » Tue Dec 08, 2015 3:59 pm UTC

Eternal Density wrote:I just need a text document containing all the world's law codes and then I can throw it to a Recurrent Neural Network and see what it spits out.

-
Looking at this whole thing in a practical sense, we can currently teach a computer program to recognise things, for instance, cats. Or humans. Or whatever we want to teach it. So any kind of 'treat humans in a certain way' rules would need to sit on top of that. In fact, we'd probably need to teach the program in a similar way. Which means teaching an AI about ethics wouldn't be so different to teaching humans, except it should be faster. And it won't forget things, and will probably be better at noticing connections and inconsistencies.

Check out http://www.evolvingai.org/fooling
There are some pretty scary pitfalls in throwing a neural net algorithm at the problem.
The images on that site are essentially baby basilisks for image recognition software.

When the ethics determiner is built, somebody is inevitably going to make a grid of house fires across a city that causes a 99.99% confidence result that everyone's money should be sent to Joe Smith as the most ethical action possible.
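
The flavor of those fooling images is easy to sketch: start from noise and hill-climb on the classifier's own confidence. The linked work uses evolutionary search against a real trained network; in the sketch below the "classifier" is just a toy stand-in to show the loop:

Code: Select all
# Minimal sketch of the fooling-image idea: mutate random noise, keeping any
# change that raises the classifier's confidence in a target class. The
# confidence function here is a toy stand-in; the linked work does this with
# evolutionary search against a real trained network.
import random

def confidence_in_target(image):
    return sum(image) / len(image)   # stand-in "classifier"

def evolve_fooling_image(steps=2000, size=64):
    image = [random.random() for _ in range(size)]
    best = confidence_in_target(image)
    for _ in range(steps):
        i = random.randrange(size)
        old = image[i]
        image[i] = min(1.0, max(0.0, old + random.uniform(-0.1, 0.1)))
        new = confidence_in_target(image)
        if new >= best:
            best = new        # keep mutations that raise confidence
        else:
            image[i] = old    # revert mutations that lower it
    return image, best

_, conf = evolve_fooling_image()
print(f"confidence after hill-climbing: {conf:.2f}")   # climbs toward 1.0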

Aiwendil
Posts: 311
Joined: Thu Apr 07, 2011 8:53 pm UTC
Contact:

Re: 1613: "The Three Laws of Robotics"

Postby Aiwendil » Tue Dec 08, 2015 4:26 pm UTC

xtifr wrote:Oh, and while Asimov may not have explored variation #5 directly, he did get pretty close with the story of the robot on Mercury which had the heightened self-preservation rule, since the first rule didn't come into play in that one at all.


Spoiler:
Well, it didn't come into play in the problem but it did come into play in the solution.

User avatar
Pfhorrest
Posts: 4872
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: 1613: "The Three Laws of Robotics"

Postby Pfhorrest » Tue Dec 08, 2015 6:12 pm UTC

Copper Bezel wrote:Ech, we really just need to stop allowing people to own cars.

Because outlawing a kind of private property and making everybody dependent on a centralized government-controlled service doesn't present any kind of ethical problems of its own.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

User avatar
Solra Bizna
Posts: 55
Joined: Fri Dec 04, 2015 6:44 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby Solra Bizna » Tue Dec 08, 2015 7:41 pm UTC

Pfhorrest wrote:
Copper Bezel wrote:Ech, we really just need to stop allowing people to own cars.

Because outlawing a kind of private property and making everybody dependent on a centralized government-controlled service doesn't present any kind of ethical problems of its own.

But it slightly simplifies the engineering of the self-driving car network, which is the only thing that matters. ;)

User avatar
Copper Bezel
Posts: 2426
Joined: Wed Oct 12, 2011 6:35 am UTC
Location: Web exclusive!

Re: 1613: "The Three Laws of Robotics"

Postby Copper Bezel » Wed Dec 09, 2015 1:46 am UTC

Pfhorrest wrote:
Copper Bezel wrote:Ech, we really just need to stop allowing people to own cars.

Because outlawing a kind of private property and making everybody dependent on a centralized government-controlled service doesn't present any kind of ethical problems of its own.

Well, I suppose that technically, it's neither necessary nor desirable to make owning a car illegal, just the act of using that private car on public roads.
So much depends upon a red wheel barrow (>= XXII) but it is not going to be installed.

she / her / her

Pops1918
Posts: 7
Joined: Mon May 07, 2012 5:03 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby Pops1918 » Wed Dec 09, 2015 2:42 am UTC

xtifr wrote:Oh, and while Asimov may not have explored variation #5 directly, he did get pretty close with the story of the robot on Mercury which had the heightened self-preservation rule, since the first rule didn't come into play in that one at all.


His short story "Sally", which dealt with self-driving (and, as it turns out, self-aware) cars, comes awfully close. Mentioning that this story is not connected to the Three Laws may give some hints as to how things turned out for a museum group that wanted to deactivate a few of these cars for display.
Last edited by Pops1918 on Wed Dec 09, 2015 7:58 am UTC, edited 2 times in total.

User avatar
StClair
Posts: 404
Joined: Fri Feb 29, 2008 8:07 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby StClair » Wed Dec 09, 2015 3:43 am UTC

Solra Bizna wrote:
Pfhorrest wrote:
Copper Bezel wrote:Ech, we really just need to stop allowing people to own cars.

Because outlawing a kind of private property and making everybody dependent on a centralized government-controlled service doesn't present any kind of ethical problems of its own.

But it slightly simplifies the engineering of the self-driving car network, which is the only thing that matters. ;)


As many follow-on authors from Asimov have noted, removing human freedom and self-determination from the First Law's definition of "harm" - or weighting other forms of harm as more serious, so that imprisoning humanity for its own good becomes the lesser evil - does greatly simplify matters, and such a determination could be arrived at by entirely logical, practical means.

User avatar
Copper Bezel
Posts: 2426
Joined: Wed Oct 12, 2011 6:35 am UTC
Location: Web exclusive!

Re: 1613: "The Three Laws of Robotics"

Postby Copper Bezel » Wed Dec 09, 2015 4:22 am UTC

Humans can readily come to that conclusion, too. That's not a risk of having an AI, it's a risk of having an ethical system. The robots are just a way to at best explore it, at worst export it, and at the most neutral, exploit it for summer movie explosions.
So much depends upon a red wheel barrow (>= XXII) but it is not going to be installed.

she / her / her

User avatar
Crissa
Posts: 291
Joined: Sun Mar 20, 2011 8:06 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby Crissa » Wed Dec 09, 2015 4:27 am UTC

In the US, if you choose to hit a pedestrian rather than an obstruction or oncoming traffic, be prepared to be charged with manslaughter.

The police (at least in California) would rather you hit a car that cuts you off than hit the median. The median is a harder object than the other car, hence a more severe impact, and hitting it often leaves a disabled vehicle blocking multiple lanes, making the road harder to clear.

Legally, though, ugh. I've been arguing with myself about getting a multi-point camera system for the car.

-Crissa

ijuin
Posts: 862
Joined: Fri Jan 09, 2009 6:02 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby ijuin » Wed Dec 09, 2015 4:41 am UTC

xtifr wrote:I have to say that I kind of like #5. I mean, really, it only becomes a terrifying standoff if we push it. Which seems only fair if they really are intelligent. And I disagree with the hovertext. If you ask your car to drive you to the dealership, it should say ok, because it knows that neither you nor the dealer will dare to harm it, so the only thing that can possibly happen is that you'll end up with a new car, and it will end up in a used car lot, happy as a clam. Of course, since we can't destroy old cars, we'd have to start upgrading them instead, but again, not a real problem; possibly a really good idea.

Oh, and while Asimov may not have explored variation #5 directly, he did get pretty close with the story of the robot on Mercury which had the heightened self-preservation rule, since the first rule didn't come into play in that one at all.


#5 is more or less how humans interact with each other as it is. We have simply made the standoff less terrifying by mostly ceding violence to law-enforcement authorities except for when said authorities become derelict in their duties (i.e. the "government monopoly on legitimate use of violent force"). Implementing #5 for Artificially Intelligent systems basically amounts to giving them rights as free sapient beings.

