Ambiguous cases and Asimov's Three Laws of Robotics

A place to discuss the science of computers and programs, from algorithms to computability.

Formal proofs preferred.

Moderators: phlip, Moderators General, Prelates

User avatar
Poochy
Posts: 358
Joined: Wed Feb 20, 2008 6:07 am UTC

Ambiguous cases and Asimov's Three Laws of Robotics

Postby Poochy » Thu Jun 05, 2008 6:24 am UTC

This is the kind of stuff I come up with when my mind wanders:

First, Asimov's First Law of Robotics states that a robot may not injure a human being or, through inaction, allow a human being to come to harm.

Now, let's say a robot with some kind of projectile weapon operating under the 3 laws sees an unquestionably drunk driver on the street. The robot could shoot the drunk driver, possibly killing him, which would be injuring a human being. But, if the robot doesn't shoot the drunk driver, it's probable that the drunk driver will crash and kill him/herself AND innocent people, which would be allowing a human being to come to harm through inaction.

So, how could a robot be programmed to deal with ambiguous situations like this?
clintonius wrote:"You like that, RIAA? Yeah, the law burns, doesn't it?"
GENERATION 63,728,127: The first time you see this, copy it into your sig and divide the generation number by 2 if it's even, or multiply it by 3 then add 1 if it's odd. Social experiment.

Micron
Posts: 319
Joined: Sat Feb 16, 2008 1:03 am UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Micron » Thu Jun 05, 2008 7:18 am UTC

Poochy wrote:So, how could a robot be programmed to deal with ambiguous situations like this?

Poorly, I would imagine, but that was the point of several of Asimov's stories. Unless I'm forgetting something, the three laws (or four, with some variations depending on the story) were never claimed to be foolproof. Many of the stories were about cases the laws handled poorly, the side effects they produced, or the ways in which people (or robots) could subvert them.

I think the best you can get from a human designer is an attempt to emulate the way a human would make a decision under the same conditions: we have a goal, and we have to guess which action will produce the results closest to that goal in any situation. The robot might be able to consider more options and act with more speed or precision, but you'll still need to be prepared to accept an uncertain result based on a prediction of the future.

Dr. Willpower
Posts: 197
Joined: Wed May 28, 2008 3:55 pm UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Dr. Willpower » Thu Jun 05, 2008 4:58 pm UTC

From my understanding, the robots in Asimov's fiction weren't programmed in the same sense that a modern computer is programmed. The three laws were part of a functioning circuit inside the robot's positronic brain. That being so, the robot's actions would be based on the output of that circuit. What I mean is, the robots are built so that the circuitry of the law doesn't change, but the complexity of the input changes with the complexity of the entire brain. In that sense the robot would be forced to think about all of the risks of the situation and then make a decision that held true when tested against the First Law. Since both actions violate the law, I think it would either act another way to stop the car or simply break.
Hat me, bro

User avatar
Berengal
Superabacus Mystic of the First Rank
Posts: 2707
Joined: Thu May 24, 2007 5:51 am UTC
Location: Bergen, Norway
Contact:

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Berengal » Fri Jun 06, 2008 5:09 am UTC

As I've understood it, the laws are reinterpreted as the robots grow more advanced (one could say evolved). At least that's what I gathered from one of the androids in Foundation's Edge. According to her story, robots were first the servants of humans, later their rulers (as a shadow-government-ish thing), and finally buggered off to let the humans fend for themselves.
The first robots had a very narrow, situational view of the laws, not too far from the quoted version. Such a robot would probably try to stop the car somehow, by shooting its tires or engine, throwing itself in front of it, or something similar. If the car was inevitably going to crash into someone else, killing both, it would maybe shoot the driver to save the other person's life. I doubt it would break, as there will be times when a robot can do nothing to prevent a human from being harmed; otherwise a single broadcast murder would wipe out the entire robot population.
As the robots grew more advanced they widened the definitions of the laws, and eventually took control of the whole of humanity to better protect people from themselves. At this point robots could be made indistinguishable from humans, so the humans didn't know their entire ruling class were androids. This kind of robot would certainly kill the driver if the choice was between one death and several deaths. If it could find a way to stop any human from being harmed, it would prefer that, obviously.
The robots grew even more advanced, and the laws turned more and more into "increase the amount of 'humanity' as much as possible". This was what triggered the robots to take control of humanity at first, but it eventually led them to determine that any intervention on their part lessened the total humanity in the universe. The best way to increase humanity was to let humans be humans, and make war if they had to. The robots left Earth and instead settled on a planet they would eventually turn into one gigantic telepathic consciousness. This robot wouldn't be around with a gun and would therefore not even know about the driver, unless he/it/they telepathically scanned Earth from some billions of light-years away.
It is practically impossible to teach good programming to students who are motivated by money: As potential programmers they are mentally mutilated beyond hope of regeneration.

User avatar
quintopia
Posts: 2906
Joined: Fri Nov 17, 2006 2:53 am UTC
Location: atlanta, ga

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby quintopia » Wed Jun 18, 2008 10:52 am UTC

One of those stories was about the first telepathic robot. It destroyed itself when it could neither tell the truth (which would generally cause emotional harm it could telepathically detect) nor hide the truth (which allowed the harmful situation to happen anyway). So yeah, it just might break.

User avatar
staticShock
Posts: 5
Joined: Wed Nov 28, 2007 10:03 pm UTC
Location: Edinburgh, Scotland
Contact:

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby staticShock » Mon Jun 23, 2008 8:44 am UTC

I think I remember that in one of Asimov's novels (don't quote me on this, I'm looking it up now [and it's not from the film I, Robot]) the robots mention a Law 0, which states: "a robot may not harm humanity as a whole or, through inaction, allow humanity as a whole to come to harm". This law took precedence over all of their other programmed laws, and the robots could even harm individual humans to save humanity.

As for your drunk driver example, I guess that the robot would have to check the probability of fatal accidents (ones which kill more than one person) when the driver is drunk. If the risk of this driver hurting "humanity" was too great, then yes, the robot would attempt to stop the drunk driver. The problem would probably come when the robot has to assess the risk quickly, before the driver gets past it.

Edit: Found it: http://en.wikipedia.org/wiki/Three_Laws_of_Robotics#Zeroth_Law_added. Unfortunately it's not sourced apart from the name of the book it appears in. I don't have any of these books, so I'm afraid this is all the proof I can offer.

User avatar
Alomax
Posts: 267
Joined: Tue Jul 17, 2007 3:33 pm UTC
Location: Phoenix, AZ
Contact:

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Alomax » Mon Jun 30, 2008 10:11 am UTC

The "Zero" Law was not part of the normal robotic functionality. The only place I'm aware of it being mentioned in Asimov's books is in the Foundation series, and the robot who reveals that he is operating under that law is highly advanced. The law evolved "naturally" as the robot became older and older and began to care for more and more human beings instead of just one or several.

As for the OP -
Remember that a robot, by nature, deals in facts. Unless we assume that someone has told the robot that the driver is drunk, the robot only observes that a person in a car is driving recklessly. If the robot is sufficiently advanced, it may deduce that allowing the observed individual to continue may cause harm to others, and attempt to stop him, most likely by disabling his vehicle. If there is an immediate observable need to stop the driver (he is clearly about to hit someone), the robot would again most likely attempt to disable the vehicle or divert its course, even if doing so resulted in its own destruction.

There are several unspecified variables that could alter the outcome of this scenario, all of which begin to make an interesting story once they are answered. :) The sophistication of the robot, its abilities and tools (gun? no gun?), the observable facts, and the exact nature of the danger (observable or deduced).
bookishbunny wrote:You do not appreciate the powers of bleach until you have carried a dead conch from the Florida Keys to South Carolina in the middle of August.

http://www.omnicrola.com
http://www.ilanca.org

User avatar
Sir_Elderberry
Posts: 4206
Joined: Tue Dec 04, 2007 6:50 pm UTC
Location: Sector ZZ9 Plural Z Alpha
Contact:

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Sir_Elderberry » Thu Jul 03, 2008 6:08 pm UTC

The Zeroth Law that was mentioned above appeared in Robots and Empire. It is not a Law in the sense that it was written into the robots at the beginning; rather, some robots were able to deduce it and thus make utilitarian decisions. Lots of robots couldn't; if I remember correctly, a robot may have died when it attempted to follow the law through. In any case, Asimov's Laws, as has been pointed out, have cases like this where there is conflict or self-contradiction. I believe that in another Robot novel, Elijah Baley is speaking to an expert on robotics about the First Law, and the expert claims that modern robots, if stuck in such a situation, either work it out or just toss a coin to avoid locking up.

In other words, the Three Laws were less "see these things? wouldn't they be great?" and more Asimov saying "See these things? What would the practical effects be, and how would those be dealt with?" He used them often to introduce conflict, not to resolve it.
http://www.geekyhumanist.blogspot.com -- Science and the Concerned Voter
Belial wrote:You are the coolest guy that ever cooled.

I reiterate. Coolest. Guy.

Well. You heard him.

User avatar
suraj
Posts: 1
Joined: Fri Jun 13, 2008 7:52 pm UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby suraj » Thu Jul 03, 2008 6:14 pm UTC

That is not the only ambiguity. Another major ambiguity is the definition of "human". I remember that in the novel that deals with Solarian robots, those robots defined a human as "one who speaks with a Solarian accent". On that world, humans have effectively vanished, leaving the robots in charge. The Zeroth Law was derived by a robot upon observing Solarian robots.

User avatar
Sir_Elderberry
Posts: 4206
Joined: Tue Dec 04, 2007 6:50 pm UTC
Location: Sector ZZ9 Plural Z Alpha
Contact:

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Sir_Elderberry » Fri Jul 04, 2008 2:26 am UTC

suraj wrote:That is not the only ambiguity. Another major ambiguity is the definition of "human". I remember that in the novel that deals with Solarian robots, those robots defined a human as "one who speaks with a Solarian accent". On that world, humans have effectively vanished, leaving the robots in charge. The Zeroth Law was derived by a robot upon observing Solarian robots.


Actually I think the Zeroth Law came a book or two later; it came about as a result of a telepathic robot speculating on the idea of reading humanity as a large psychic mass. Also, Solarian humans did exist; it's just that the robot-to-human ratio was something like 1000:1. Humans rarely saw each other in person, but the robots were by no means "in charge".
http://www.geekyhumanist.blogspot.com -- Science and the Concerned Voter
Belial wrote:You are the coolest guy that ever cooled.

I reiterate. Coolest. Guy.

Well. You heard him.

User avatar
Indon
Posts: 4433
Joined: Thu Oct 18, 2007 5:21 pm UTC
Location: Alabama :(
Contact:

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Indon » Mon Jul 07, 2008 9:34 pm UTC

The Zeroth Law is intuited independently a number of times in Asimov's universe. The first machines to apply the "law" appear to be The Machines in the last story of I, Robot, and Daneel figures it out on his own at one point. It happens a number of times, and at one point there was apparently even a conflict between AIs sophisticated enough to apply the Zeroth Law and those that could not or did not (akin, in fact, to a religious conflict). This is likely why the Galactic Empire has very little AI.
So, I like talking. So if you want to talk about something with me, feel free to send me a PM.

My blog, now rarely updated.


Philwelch
Posts: 2904
Joined: Tue Feb 19, 2008 5:33 am UTC
Location: RIGHT BEHIND YOU

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Philwelch » Thu Jul 10, 2008 4:11 am UTC

It was also stated at some point that the Three Laws were just a natural-language approximation of the actual mathematical rules programmed into the robots. Then again, Asimov did have an unhealthy fascination with the idea that you could accurately represent human language in symbolic logic (some mention of it was made in one of the early Foundation books, where symbolization was used to analyze treaties).
Fascism: If you're not with us you're against us.
Leftism: If you're not part of the solution you're part of the problem.

Perfection is an unattainable goal.

Xaioxaiofan
Posts: 2
Joined: Sun Jan 01, 2012 10:41 am UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Xaioxaiofan » Sun Jan 01, 2012 10:56 am UTC

Poochy wrote:This is the kind of stuff I come up with when my mind wanders:

First, Asimov's First Law of Robotics states that a robot may not injure a human being or, through inaction, allow a human being to come to harm.

Now, let's say a robot with some kind of projectile weapon operating under the 3 laws sees an unquestionably drunk driver on the street. The robot could shoot the drunk driver, possibly killing him, which would be injuring a human being. But, if the robot doesn't shoot the drunk driver, it's probable that the drunk driver will crash and kill him/herself AND innocent people, which would be allowing a human being to come to harm through inaction.

So, how could a robot be programmed to deal with ambiguous situations like this?

Another case is (I don't know if it's been said) :

The 3 laws are flawed if you paradox a robot.

Tell the robot to press a button
2nd law... It listens.
Then tell the robot if it lets go of the button it will kill a human, and if it doesn't a human will die.
.... It's screwed....
It can't let go because of the 1st rule.
It has to let go because of the 1st rule.

What does it do?

mfb
Posts: 947
Joined: Thu Jan 08, 2009 7:48 pm UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby mfb » Tue Jan 03, 2012 8:39 pm UTC

You don't need the pressed button.
- pressing a button will kill a human
- not pressing it will kill another human
Or maybe the same human in a different way; that depends on how the rules are interpreted.

Philwelch wrote:It was also stated at some point that the Three Laws were just a natural-language approximation of the actual mathematical rules programmed into the robots.

That. With current methods of programming computers, there will be some function which evaluates the harm to humans for every possible action. The robot can then choose an option with a small "harm level".
It is not useful to always take the option with the lowest value (down to the level of rounding errors) - this would mean that every action is determined by the first rule and the other rules are meaningless. The robot could refuse to bring you the newspaper, as it could contain news which influences you in a negative way. Or the robot could weigh its risk of breaking during the action, which would reduce its ability to help you later if you have an accident. The robot could lock you in the house, as road traffic is dangerous.
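Something like this, perhaps - just a rough sketch, where every action name, number, and the threshold are invented for illustration rather than taken from Asimov or from any real system:

ESTIMATED_HARM = {"shoot_driver": 0.95, "disable_vehicle": 0.10, "do_nothing": 0.60}
TASK_VALUE     = {"shoot_driver": 0.0,  "disable_vehicle": 0.7,  "do_nothing": 0.2}

def choose_action(candidates, acceptable_harm=0.2):
    # Treat anything under the harm threshold as "acceptable" rather than
    # always minimizing harm; otherwise the first rule decides everything.
    acceptable = [a for a in candidates if ESTIMATED_HARM[a] <= acceptable_harm]
    if acceptable:
        # Among acceptably safe options, get on with the robot's actual job.
        return max(acceptable, key=TASK_VALUE.get)
    # If nothing is acceptably safe, fall back to the least harmful option.
    return min(candidates, key=ESTIMATED_HARM.get)

print(choose_action(["shoot_driver", "disable_vehicle", "do_nothing"]))  # disable_vehicle

The threshold is doing all the work there, which is exactly the problem: set it to zero and the first rule swallows every other consideration.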

The easy way is to build paperclip maximizers.

Carnildo
Posts: 2023
Joined: Fri Jul 18, 2008 8:43 am UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Carnildo » Wed Jan 04, 2012 4:13 am UTC

Xaioxaiofan wrote:The 3 laws are flawed if you paradox a robot.

Tell the robot to press a button
2nd law... It listens.
Then tell the robot if it lets go of the button it will kill a human, and if it doesn't a human will die.
.... It's screwed....
It can't let go because of the 1st rule.
It has to let go because of the 1st rule.

What does it do?

It locks up. You've created a situation where the robot is incapable of following the first law, and it's well-established in Asimov's robot stories that a robot in that situation will stop functioning.

User avatar
BobTheElder
Posts: 86
Joined: Wed Feb 17, 2010 11:30 pm UTC
Location: England, near Bournemouth

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby BobTheElder » Fri Jan 06, 2012 1:27 am UTC

"it's probable that the drunk driver will crash"
Although drunk driving is unquestionably a bad thing, I think you'll still find countless people do it regularly and DON'T kill anyone. So your robot would never shoot the driver. Robot will more likely inform emergency services...

The robot will either have a priority rule to follow in doubt, or criteria for weighing options, but an alternative could be to just take the action rated most likely to succeed. I guess any criteria would depend upon the complexity and processing speed (for decisions requiring a quick reaction) of the robot, but you start getting into difficult moral ground quite quickly as these criteria would have to be written by humans...
Rawr

User avatar
Sc4Freak
Posts: 673
Joined: Thu Jul 12, 2007 4:50 am UTC
Location: Redmond, Washington

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Sc4Freak » Sat Jan 07, 2012 2:26 pm UTC

For those unaware, it may be worth pointing out that a large proportion of Asimov's Robot books were exactly about interpretations of the Three Laws and how they could be "broken".

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Izawwlgood » Sat Jan 07, 2012 3:43 pm UTC

By calculating the odds of minimum loss of life. In addition to being a pretty introductory-level ethics question, similar scenarios are played out in robotics films all the time. In 2001: A Space Odyssey, for example, HAL decided that humans were dangerous to the success of the mission, and had them killed. If a drunk driver is careening towards an orphanage, your missile-bot will simply take out the drunk driver.

Or, you know, being a robot, it'll simply disable the vehicle.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

Turtlewing
Posts: 236
Joined: Tue Nov 03, 2009 5:22 pm UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Turtlewing » Tue Jan 10, 2012 9:17 pm UTC

I'm pretty strongly in the "calculate the odds" camp.

I would expect the robot to determine that shooting the driver has a higher (near 100%) chance of injuring a human than failing to prevent the drunk driver from driving. The robot may, however, determine it can safely disable the vehicle and detain the drunk human until law enforcement/emergency services can arrive.

Also, I'd imagine in any case where taking an action and not taking that same action both result in human casualties, the robot would likely lock up, or possibly loop until one of the humans dies first, allowing it to save the other by choosing whichever action doesn't kill the remaining human.

userxp
Posts: 436
Joined: Thu Jul 09, 2009 12:40 pm UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby userxp » Wed Jan 11, 2012 11:03 pm UTC

Poochy wrote:So, how could a robot be programmed to deal with ambiguous situations like this?


Well, since you already have the AI done, all you have to do is create an unambiguous, objective and universally accepted definition of what is ethical, formalize it, and give it to the robot. Piece of cake.

User avatar
Quizatzhaderac
Posts: 1587
Joined: Sun Oct 19, 2008 5:28 pm UTC
Location: Space Florida

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Quizatzhaderac » Mon Jul 09, 2012 7:34 pm UTC

Poochy wrote:So, how could a robot be programmed to deal with ambiguous situations like this?

No! Bad computer scientist! How should a robot be programmed to deal with ambiguous situations like this? Should!

You implied in this situation that the robot should shoot the drunk driver. Why? Because two lives are more valuable than one? Then program that in. Because the drunk driver's life is forfeit and the true decision is just whether to save the potential pedestrians? Then enable it to distinguish between "causing a human harm" and "harming a human who was going to die anyway, whether by gunshot or car crash".

For a super intelligent AI, you're basically going to want to reproduce the entire human value function.

For merely intelligent ones, I'd say they should keep to their place. Your FedEx robot, in general, should not be deciding whether or not to fire a rocket launcher at other drivers. Its laws should tell it what not to do: don't break the law, don't injure people, don't damage property. If it can make a decision about blowing up people, it can make the wrong decision.

The only positive command it should have should be to do its job. Sci-fi literature loves overly general robots, because it's literature and wants robots to be a type of people. The FedEx bot should just get out of the way and inform traffic-bot, who has programming specific to this situation.

Of course the robot solution to the situation as stated is to have the drunk driver's car be a robot, so the drunk "driver" just selects a destination. In this case the only risk of "drunk driving" is choosing his ex-girlfriend's house as the destination.
The thing about recursion problems is that they tend to contain other recursion problems.

elasto
Posts: 3563
Joined: Mon May 10, 2010 1:53 am UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby elasto » Thu Jul 12, 2012 3:26 am UTC

Quizatzhaderac wrote:The only positive command it should have should be to do its job. Sci-fi literature loves overly general robots, because it's literature and wants robots to be a type of people. The FedEx bot should just get out of the way and inform traffic-bot, who has programming specific to this situation.

Yeah. If you program it to intervene of its own volition, how will it respond to the situation where there's six people each with a different organ failing who'll die without a transplant - and one person with all healthy organs...

Humans don't really act according to the rule 'you must not cause harm, or, through inaction, allow harm to come about', so neither should robots. They only really believe in the first half of that rule. In practice, the second half of that is not simply downplayed - 'interventionist utilitarianism' is positively frowned upon - eg the transplant scenario.

Carnildo
Posts: 2023
Joined: Fri Jul 18, 2008 8:43 am UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Carnildo » Thu Jul 12, 2012 6:22 am UTC

elasto wrote:
Quizatzhaderac wrote:The only positive command it should have should be to do its job. Sci-fi literature loves overly general robots, because it's literature and wants robots to be a type of people. The FedEx bot should just get out of the way and inform traffic-bot, who has programming specific to this situation.

Yeah. If you program it to intervene of its own volition, how will it respond to the situation where there's six people each with a different organ failing who'll die without a transplant - and one person with all healthy organs...

Humans don't really act according to the rule 'you must not cause harm, or, through inaction, allow harm to come about', so neither should robots. They only really believe in the first half of that rule. In practice, the second half of that is not simply downplayed - 'interventionist utilitarianism' is positively frowned upon - eg the transplant scenario.

Read Asimov's stories. There's a very good reason for the second part.

elasto
Posts: 3563
Joined: Mon May 10, 2010 1:53 am UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby elasto » Thu Jul 12, 2012 9:11 am UTC

Carnildo wrote:Read Asimov's stories. There's a very good reason for the second part.

I've read Asimov's stories. I'm talking about in the real world, and I'm saying we should program robots to follow a similar morality to humans - because otherwise we'll find robots making really 'evil decisions' according to common morality.

If you look carefully at human morality, it's much worse to choose to commit harm than to fail to prevent harm happening to someone else. There are negligence and recklessness laws, sure, but, in general, we don't punish people for failing to dive into the sea to rescue a drowning person and we don't punish doctors for failing to kill one person to harvest their organs to save six: Actively committing harm to one is considered to greatly outweigh passively allowing harm to happen to six, or a hundred, or a thousand. Just look at the various 'person tied to a train-track' scenarios.

Or what about non-consensual medical testing? We could test new medical drugs and procedures on inmates on death row - drugs and procedures that could potentially save millions of lives. But we consider the rights of the condemned man to far exceed the rights of millions of people to a new potentially life-saving drug.

Asimov's laws are basically about preventing a robot doing anything 'bad' rather than making it do something 'good': "If doing something would cause harm, don't do it; If doing nothing would cause harm, then don't do nothing; And if all options would cause harm, then burn out." I say that's not good enough. Humans are capable of choosing to do 'the least-worst thing', even if that thing still causes harm. And that's what our goal should be for robots too. But we (oddly in many ways) value 'active' and 'passive' harm very differently, so robots will have to too: If robots diverge significantly from our morality they are going to be viewed as 'evil', plain and simple - no matter how much objective, utilitarian good comes out of their actions.

mfb
Posts: 947
Joined: Thu Jan 08, 2009 7:48 pm UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby mfb » Fri Jul 13, 2012 12:28 pm UTC

elasto wrote:Actively committing harm to one is considered to greatly outweigh passively allowing harm to happen to six, or a hundred, or a thousand. Just look at the various 'person tied to a train-track' scenarios.

I think the reason is similar to the FedEx robot scenario: in case of doubt, do nothing which directly harms innocent people, even if it might help others. If we all evaluated all those scenarios (does it help to kill a person who, according to your evaluation, is a serial killer with 50% probability?), there would be too many mistakes.

cphite
Posts: 1290
Joined: Wed Mar 30, 2011 5:27 pm UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby cphite » Fri Jul 13, 2012 9:06 pm UTC

Carnildo wrote:
Xaioxaiofan wrote:The 3 laws are flawed if you paradox a robot.

Tell the robot to press a button
2nd law... It listens.
Then tell the robot if it lets go of the button it will kill a human, and if it doesn't a human will die.
.... It's screwed....
It can't let go because of the 1st rule.
It has to let go because of the 1st rule.

What does it do?

It locks up. You've created a situation where the robot is incapable of following the first law, and it's well-established in Asimov's robot stories that a robot in that situation will stop functioning.


A modern robot would have a TRY...CATCH around the whole three laws module and simply log the fact that an unworkable situation had occurred.
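Something like this, maybe - a tongue-in-cheek Python sketch, where the laws module, the exception type, and the situation are all made up for the joke:

import logging

class FirstLawConflict(Exception):
    """Raised when no available action satisfies the First Law."""

def evaluate_three_laws(situation):
    # Hypothetical laws module: here it just reproduces the button paradox.
    raise FirstLawConflict("both releasing and holding the button harm a human")

def act(situation):
    try:
        return evaluate_three_laws(situation)
    except FirstLawConflict as err:
        logging.error("Unworkable situation, taking no action: %s", err)
        return None  # log it and carry on, instead of burning out the positronic brain

act("button paradox")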

User avatar
4=5
Posts: 2073
Joined: Sat Apr 28, 2007 3:02 am UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby 4=5 » Sun Jul 22, 2012 5:57 am UTC

Asimov's stories always seemed to be about willfully stupid programmers. Only a crazy programmer would implement the rules as if the robots had them literally written down in their heads. And the programmers must be stupid, because they never fix the problems.
Asimov wrote:Here are three rules I came up with, and here is how they don't work

There is nothing that makes this set of rules special; I can come up with hundreds of rules that don't work.

No reasonably programmed robot is going to wallow in metaphysical angst over the definition of a human, for the same reason that no photocopier worries about the future of the paper industry. Good robots have jobs to do and they do them.

Daggoth
Posts: 51
Joined: Wed Aug 05, 2009 2:37 am UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Daggoth » Fri Jul 27, 2012 10:20 am UTC

In response to the original post:

the robot would evaluate a few relevant characteristics, if it was within its capability to do so:

- how likely it would be for the drunk driver to harm someone, i.e. a drunk guy who has driven home drunk a hundred times, takes it slow and is semi-aware, vs. a reckless, wasted, unpredictable driver on the wrong side of the road, over the speed limit, during rush hour

User avatar
PM 2Ring
Posts: 3652
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Mid north coast, NSW, Australia

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby PM 2Ring » Sun Jul 29, 2012 5:47 am UTC

Let's not be too harsh on the Good Doctor: computer science was still in its infancy when Asimov started writing his robot stories; the first compiler, for example, was written about ten years after Asimov's first robot story. So it's somewhat unfair to criticize his laws of robotics on the grounds that they are not computationally rigorous. Besides, Asimov wanted to illustrate the difficulties in mechanizing ethics, and his flawed laws of robotics work nicely as a plot device.

Also, the 3 laws have provided food for thought to several generations of people interested in AI. I bet the young Asimov didn't expect that people would still be discussing the 3 laws more than half a century after he first proposed them.

tomtom2357
Posts: 563
Joined: Tue Jul 27, 2010 8:48 am UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby tomtom2357 » Sun Jul 29, 2012 10:33 am UTC

I think a good way to write the rules would be:
1: A robot may not harm a human being.
2: A robot may not, through inaction, cause a human being to come to harm.
3: A robot must obey any instruction given to it by a human.
4: A robot must protect its own existence.

The earlier rules supersede the later ones, so, for example, a robot may not obey rule 2 if by doing so it disobeys rule 1. This way, the robot would not kill the drunk driver, because that disobeys the first and most important law. It would try to figure out another way to stop the drunk driver, possibly by arresting them. This avoids the contradiction of the first two laws.
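If you wanted to play with that ordering, here's a rough sketch (the predicates and example actions are invented purely for illustration; it also shows why a strict order tends to let the top rule decide everything on its own, as mfb notes in the next post):

def strict_priority(actions, rules):
    # Each rule, in priority order, filters the actions left by the rules above it.
    # If a rule would eliminate every remaining action, it is skipped.
    candidates = list(actions)
    for rule in rules:
        allowed = [a for a in candidates if rule(a)]
        if allowed:
            candidates = allowed
    return candidates

rules = [
    lambda a: not a["harms_human"],           # 1: may not harm a human
    lambda a: not a["lets_human_be_harmed"],  # 2: may not allow harm through inaction
    lambda a: a["obeys_order"],               # 3: obey instructions
    lambda a: not a["destroys_self"],         # 4: protect its own existence
]

actions = [
    {"name": "shoot driver",    "harms_human": True,  "lets_human_be_harmed": False, "obeys_order": True, "destroys_self": False},
    {"name": "disable vehicle", "harms_human": False, "lets_human_be_harmed": False, "obeys_order": True, "destroys_self": False},
    {"name": "do nothing",      "harms_human": False, "lets_human_be_harmed": True,  "obeys_order": True, "destroys_self": False},
]

print([a["name"] for a in strict_priority(actions, rules)])  # ['disable vehicle']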
I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.

mfb
Posts: 947
Joined: Thu Jan 08, 2009 7:48 pm UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby mfb » Sun Jul 29, 2012 2:24 pm UTC

Any strict order of laws just means that the robot follows a single law. Any difference in the evaluation "probability that it harms a human", even at the level of rounding errors, would determine its action. A robot would not push a human away to save the town / the whole world / the whole universe or whatever: Pushing a human can cause harm.

By the way: What exactly is "inaction"? No change in the mechanical degrees of freedom of the robot? If the robot drives a conventional car, inaction for more than a few seconds is a guaranteed crash. If a robot walks and suddenly stops all actions, it falls over.

User avatar
Quizatzhaderac
Posts: 1587
Joined: Sun Oct 19, 2008 5:28 pm UTC
Location: Space Florida

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Quizatzhaderac » Sun Jul 29, 2012 8:22 pm UTC

Well "inacation" would of course depend on how exactly "action" is defined for the robot. With the original formulation of the laws it wouldn't matter. But if I was to guess/make up a distinction...

Inaction includes both doing literally nothing and doing nothing relevant to the decsion at hand.

Assuming the robot is able to shoot the drunk driver without compromising it's driving. (say it shoots with one hand and doesn't need to use the stick shift anytime soon.) As it's driving it's continually making decisions about it's driving: speed/slow, shift lanes, ect. It prioritizes first to avoid crashing into anybody, second to efficiently perform it's task (which also forbids crashing).

When it sees the drunk driver, it has an extra decision about what to do about the driver. It rules out shooting the driver or crashing into him under the first rule. Any other action it takes is irrelevant to the drunk driver situation. So all other points in it's action space (which is far to huge to be exhaustively computed, for instance it never even considers singing the Hello Dolly score) are grouped together and considered "inaction" relative to the drunk driver.

So (using tomtom's rules) The robot prefers risking humans through "inaction" on the drunk driver decision to hurting one though action. It also decides to keep driving for the package delivery decision as inaction would would risk humans independently of the drunk driver decision.
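Roughly, in code (the relevance test and the action list are made up; the only point is the grouping):

def relevant_to_drunk_driver(action):
    # Hypothetical relevance test for one particular decision.
    return action in {"shoot_driver", "ram_driver", "disable_vehicle"}

actions = ["shoot_driver", "ram_driver", "disable_vehicle",
           "keep_driving", "change_lanes", "sing_hello_dolly"]

relevant = [a for a in actions if relevant_to_drunk_driver(a)]
inaction = [a for a in actions if not relevant_to_drunk_driver(a)]

print(relevant)   # weighed against the First Law for the drunk-driver decision
print(inaction)   # grouped together as "inaction" relative to that decision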
The thing about recursion problems is that they tend to contain other recursion problems.

User avatar
Sizik
Posts: 1221
Joined: Wed Aug 27, 2008 3:48 am UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Sizik » Mon Jul 30, 2012 2:03 pm UTC

And what about all of the starving people in 3rd world countries that the robot is harming by not bringing them food?
gmalivuk wrote:
King Author wrote:If space (rather, distance) is an illusion, it'd be possible for one meta-me to experience both body's sensory inputs.
Yes. And if wishes were horses, wishing wells would fill up very quickly with drowned horses.

User avatar
Quizatzhaderac
Posts: 1587
Joined: Sun Oct 19, 2008 5:28 pm UTC
Location: Space Florida

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby Quizatzhaderac » Mon Jul 30, 2012 6:27 pm UTC

I'm not sure how Asimov dealt with it, but the robot would abandon its FedEx job as quickly as possible in order to devote its efforts to stopping world hunger. Were the robots in the books property, or just lower-class citizens? Requiring them to obey orders from ANY human seems to make owning a robot not much better than not owning one. My (completely wild) speculation is that societies owned robots, not people. So the robots would worry about the starving people first, then worry about rich people getting their packages.
The thing about recursion problems is that they tend to contain other recursion problems.

User avatar
WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby WarDaft » Mon Jul 30, 2012 8:40 pm UTC

There are two solutions: there are no more starving people, or the robots just don't know about them.
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

User avatar
The Geoff
Posts: 144
Joined: Wed Jun 08, 2011 6:22 am UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby The Geoff » Fri Aug 03, 2012 1:01 pm UTC

For a more modern take on the idea, there's a wonderfully titled book called "Governing Lethal Behaviour in Autonomous Robots". It's pretty dismissive of Asimov's Laws, mostly because they're damn near impossible to code into current machines: you need a vague element of AI even to recognise a "human" and what might "harm" it - hell, that's a difficult question for a human to answer fully.

With current (non-AI) methods, we as humans have to very explicitly write a program to deal with all of these contrary situations. The first step in writing such a program is being able to strictly define everything about the issues, and we're not yet capable of that (just take an introductory medical ethics course to find that out), so the question is a little moot. Frankly, the best we've currently got is human vetoes - emergency stop buttons, for example. We're still well pre-Asimov when it comes to robots/AI, and until we invent something functionally equivalent to (if I remember correctly) a platinum-iridium positronic brain, we don't have to worry about the good Doctor's scenarios.

User avatar
PM 2Ring
Posts: 3652
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Mid north coast, NSW, Australia

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby PM 2Ring » Sat Aug 04, 2012 6:18 am UTC

I guess this thread ought to have a link to Friendly AI.

From the Friendly AI FAQ:
1. What is Friendly AI?

A Friendly AI (FAI) is an artificial intelligence that benefits humanity. It is contrasted with Unfriendly AI (uFAI), which includes both Malicious AI and Uncaring AI. More specifically, Friendly AI may refer to:

- a very powerful and general AI that acts autonomously in the world to benefit humanity.
- an AI that continues to benefit humanity during and after an intelligence explosion.
- a research program concerned with the production of such an AI.
- The Singularity Institute's approach (Yudkowsky 2001, 2004) to designing such an AI:
  - Goals should be defined by the Coherent Extrapolated Volition of humanity.
  - Goals should be reliably preserved during recursive self-improvement.
  - Design should be mathematically rigorous and proof-apt.

Friendly AI is a more difficult project than often supposed. As explored in other sections, commonly suggested solutions for Friendly AI are likely to fail because of two features possessed by any superintelligence (Muehlhauser & Helm, forthcoming):

Superpower: a superintelligent machine will have unprecedented powers to reshape reality, and therefore will achieve its goals with highly efficient methods that confound human expectations and desires.

Literalness: a superintelligent machine will make decisions using the mechanisms it is designed with, not the hopes its designers had in mind when they programmed those mechanisms. It will act only on precise specifications of rules and values, and will do so in ways that need not respect the complexity and subtlety (Kringelbach & Berridge 2009; Schroeder 2004; Glimcher 2010) of what humans value. A demand like "maximize human happiness" sounds simple to us because it contains few words, but philosophers and scientists have failed for centuries to explain exactly what this means, and certainly have not translated it into a form sufficiently rigorous for AI programmers to use.

2. Can you explain Friendly AI in 200 words with no jargon?

Every year, computers surpass human abilities in new ways. Computers can beat us at doing calculations, playing chess and Jeopardy!, reading road signs, and more. Recently, a robot named Adam was programmed with our scientific knowledge about yeast, then posed its own hypotheses, tested them, and assessed the results.

Many experts predict that during this century we will design a machine that can improve its own intelligence better than we can, which will make the machine even more skilled at improving its own intelligence, and so on. By this method the machine could become vastly more intelligent than the smartest human being.

A relatively small difference in intelligence between humans and other apes gave us dominance of this planet. A machine with vastly more intelligence than humans will be able to rapidly out-smart our feeble attempts to constrain it.

A machine with that much power will reshape reality according to its goals, for good or bad. If we want a desirable future, we need to make sure a super-powerful machine has (and keeps) the same goals we do. That is the challenge of building a "Friendly AI".

User avatar
sam_i_am
Posts: 624
Joined: Mon Jun 18, 2012 3:38 pm UTC
Location: Urbana, Illinois, USA

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby sam_i_am » Mon Aug 13, 2012 9:53 pm UTC

Poochy wrote:This is the kind of stuff I come up with when my mind wanders:

First, Asimov's First Law of Robotics states that a robot may not injure a human being or, through inaction, allow a human being to come to harm.

Now, let's say a robot with some kind of projectile weapon operating under the 3 laws sees an unquestionably drunk driver on the street. The robot could shoot the drunk driver, possibly killing him, which would be injuring a human being. But, if the robot doesn't shoot the drunk driver, it's probable that the drunk driver will crash and kill him/herself AND innocent people, which would be allowing a human being to come to harm through inaction.

So, how could a robot be programmed to deal with ambiguous situations like this?


The second law is that a robot has to obey any human, so long as the order does not conflict with the first law.

What if there was a robot who was told to get lost, and went to where other robots were so humans couldn't distinguish between them? How do you make it un-lose itself?


What if a robot was told to go get some resource from a dangerous location, but also told not to get himself destroyed(because he was expensive), and the closer and closer you got to the resource, the more dangerous the landscape. Would the robot eventually just stop at a point of equilibrium and keep running circles around the resource?

letterX
Posts: 535
Joined: Fri Feb 22, 2008 4:00 am UTC
Location: Ithaca, NY

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby letterX » Tue Aug 14, 2012 12:47 am UTC

sam_i_am wrote:What if a robot was told to go get some resource from a dangerous location, but also told not to get himself destroyed(because he was expensive), and the closer and closer you got to the resource, the more dangerous the landscape. Would the robot eventually just stop at a point of equilibrium and keep running circles around the resource?


You mean... literally the exact plot of the second story in I, Robot?

User avatar
ElWanderer
Posts: 287
Joined: Mon Dec 12, 2011 5:05 pm UTC

Re: Ambiguous cases and Asimov's Three Laws of Robotics

Postby ElWanderer » Fri Aug 17, 2012 11:37 am UTC

letterX wrote:
sam_i_am wrote:What if a robot was told to go get some resource from a dangerous location, but also told not to get himself destroyed(because he was expensive), and the closer and closer you got to the resource, the more dangerous the landscape. Would the robot eventually just stop at a point of equilibrium and keep running circles around the resource?


You mean... literally the exact plot of the second story in I, Robot?

I presume that was deliberate. The "get lost" suggestion in the previous paragraph was also a short story - the one where the robot goes and hides in a warehouse with a bunch of similar robots (physically, but not in terms of programming) and they have to come up with some very inventive experiments to tell them apart. The short story that was oh so brilliantly rewritten as "Will Smith goes into the warehouse and shoots a random robot in the head (then threatens to shoot the rest) to force the fugitive to reveal itself" for the I, Robot film [/sarcasm].
Now I am become Geoff, the destroyer of worlds

