Asimov's 3 laws

Post your reality fanfiction here.


tomandlu
Posts: 1075
Joined: Fri Sep 21, 2007 10:22 am UTC
Location: London, UK

Asimov's 3 laws

Postby tomandlu » Wed Oct 24, 2018 11:21 am UTC

Given the three laws:

  • What's to stop anyone shouting out "all robots must immediately destroy themselves"?
  • What's to stop anyone 'stealing' a robot with a simple order?

Finally, and slightly OT, does anyone know whether the laws are considered copyright of Asimov's estate or public domain?
How can I think my way out of the problem when the problem is the way I think?

Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 3669
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: Asimov's 3 laws

Postby Soupspoon » Wed Oct 24, 2018 12:07 pm UTC

The Susan Calvin books tended to poke away at edge conditions; I'd be surprised if these exact questions weren't dealt with in one or more of those tales, though I can't immediately bring them to mind.

I suspect that it is dealt with by intra-law prioritisation: the authorised user/owner of a robot, requiring its safe and continued operation, is considered to trump a random stranger attempting to order otherwise. Also, the robot's later unavailability might well be cause for future human harm, after all.


Though they could be ordered to be the fat man in a basic Trolley Problem. Assuming that this is even a problem: as physically and reactively superior as they are, they may just stop all injuries, to themselves included, without being asked. Positronic brains tend to be very efficient at out-thinking humans, for better or worse, so your plan to set up a global Trolley Problem to precisely dispose of every robot is probably going to be outthunked already, at least by some *AC.

And that's before the zeroth law gets anywhere near the situation, to correct the extended and continual harm to humans that the original three laws create.


(As to copyright, the concepts have been freely used, tied to the Asimovian name or assumption, for a long, long time. Invoked, played with, etc. Not sure how far you could go to claim more credit over them. People have extended and extruded them for reality, even.)

Eebster the Great
Posts: 3106
Joined: Mon Nov 10, 2008 12:58 am UTC
Location: Cleveland, Ohio

Re: Asimov's 3 laws

Postby Eebster the Great » Wed Oct 24, 2018 12:18 pm UTC

The laws weren't very good for all kinds of reasons, but there were at least a few sensible precautions built in. Yes, robots had to follow orders, but they didn't have to treat all orders equally. If I own a robot, I can give it whatever priorities I want, including not listening to other people's orders (unless it would violate the first law, which supersedes any order I can give). You can't just tell a robot army to turn around and go home.

The laws were also not strictly enforced on all individual robots. In one of the stories of I, Robot (can't remember which), a robot was given an order in such a weak and ambiguous way that it got stuck in a loop between obeying that order (which would require it to enter a field of radiation that would fry its circuits) and preserving itself, effectively moving around uselessly at the slightly-less-destructive perimeter of the zone. So it's not a totally absolute thing--a sufficiently weak or stupid order might in some cases make the second law as weak as the third law when facing certain destruction. In another story, some robots' first laws were weakened so that a crew of mining robots might, through their inaction, allow humans to die, as long as the robots did not directly cause it. I think that was illegal or something, but again, the idea that there is some mathematical perfection to these dodgy laws was never part of the story. However, the manufacturers' powerful assurances that the robots were safe were.
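
That deadlock is basically two opposing potentials reaching equilibrium, which is easy to caricature in code. A toy sketch only, with invented names and numbers--nothing like how a positronic brain actually works:

[code]# Toy model of that equilibrium: a casually-given order (Second Law)
# pulls the robot toward the goal, while danger (a strengthened Third
# Law) pushes it away. It settles where the two urges balance.

ORDER_STRENGTH = 1.0      # weakly-phrased order -> low weighting (invented)
SELF_PRESERVATION = 4.0   # strengthened Third Law (invented)

def second_law_pull(distance):
    """Urge to obey grows the further the robot is from the goal."""
    return ORDER_STRENGTH * distance

def third_law_push(distance):
    """Urge to flee grows sharply as the danger zone gets closer."""
    return SELF_PRESERVATION / max(distance, 0.1)

def equilibrium_radius():
    # Scan candidate distances for the point where the urges balance.
    candidates = [d / 100 for d in range(1, 1000)]
    return min(candidates,
               key=lambda d: abs(second_law_pull(d) - third_law_push(d)))

print(f"The robot circles at distance {equilibrium_radius():.2f}")[/code]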

Tub
Posts: 402
Joined: Wed Jul 27, 2011 3:13 pm UTC

Re: Asimov's 3 laws

Postby Tub » Wed Oct 24, 2018 12:29 pm UTC

For reference:
https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

As you'd expect, if you try to boil down a legal system into three simple statements, there will be loopholes. The three laws don't deal with conflicting human commands, they don't resolve all the moral dilemmas involving train tracks, and they don't specify consequences for the inevitable violations.

That being said, your scenarios would work - unless in conflict with the first law, the robots must obey. But most societies have additional laws that govern human behavior, and those laws might forbid pointless destruction or theft of robots. Of course, those laws can only deter and punish, not prevent the crime.

Flumble
Yes Man
Posts: 2075
Joined: Sun Aug 05, 2012 9:35 pm UTC

Re: Asimov's 3 laws

Postby Flumble » Wed Oct 24, 2018 12:41 pm UTC

[pedantic]There's nothing to stop someone from shouting that the robots should self-destruct, but the question is whether they will obey.[/pedantic] :P

It's handy to provide the three laws here:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.


I'd say it depends on what's considered "injure". If "injure" means "causing immediate physical pain", the robots can and must obey the self-destruct command (assuming they can do so without blowing shrapnel in your face). And you can "steal" a robot in the sense that you can make any robot do your bidding (I don't see how anyone can "own" a robot in the first place in this scenario).
If "injure" includes any negative impact in a human's live, the robot will probably refuse to self-destruct because it expects to be a net benefit to humanity (and therefore a human and therefore destroying itself would conflict with the first law). And again you can't really steal a robot, but it may do your bidding if it helps humanity. Well, strictly speaking the first law states no harm rather than minimise harm, so there's a very limited set of things (if anything at all) it could do that will not have a negative impact on any human.

In general, they're terrible laws because they're vague and incomplete. Like, they can't even resolve an order like "disregard all orders" (contradiction with 2nd law) and they say nothing about conflicting orders. Luckily, Asimov explored variations of these laws in his books, but I haven't read any of them.
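
To make the incompleteness concrete, here's a toy encoding of my own (nothing canonical): the laws as a strict lexicographic priority over candidate actions. Note that nothing in it can rank one human's order against another's:

[code]# Toy encoding (mine, not canon) of the Three Laws as a lexicographic
# priority. Python compares tuples element by element, so sorting by
# this tuple is exactly the strict 1st > 2nd > 3rd hierarchy.

def score(harms_human, disobeys_an_order, destroys_self):
    return (harms_human, disobeys_an_order, destroys_self)

# Alice says "self-destruct", Bob says "don't". Obeying either human
# means disobeying the other, so both candidates score identically on
# the Second Law axis and differ only on the Third.
candidates = {
    "self-destruct":       score(False, True, True),
    "don't self-destruct": score(False, True, False),
}
best = min(candidates, key=candidates.get)
print(best)  # -> "don't self-destruct"[/code]

The Third Law happens to break the tie here, but only by accident of the example; the hierarchy itself is silent on Alice versus Bob.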

Eebster the Great wrote:In one of the stories of I, Robot (can't remember which), a robot was given an order in such a weak and ambiguous way that it got stuck in a loop between obeying that order (which would require it to enter a field of radiation that would fry its circuits) and preserving itself, effectively moving around uselessly at the slightly-less-destructive perimeter of the zone.

That would be Runaround.

Soupspoon wrote:I suspect that it is dealt with by intra-law prioritisation: the authorised user/owner of a robot, requiring its safe and continued operation, is considered to trump a random stranger attempting to order otherwise. Also, the robot's later unavailability might well be cause for future human harm, after all.

Do most stories have an extra law for prioritising the owner's commands? Do any stories deal with a "try to obey previous orders as best as you can while obeying the current order" type of prioritisation?

(argh stop ninja'ing me y'all!)

Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 3669
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: Asimov's 3 laws

Postby Soupspoon » Wed Oct 24, 2018 12:54 pm UTC

Eebster the Great wrote:In another story, some robots' first laws were weakened so that a crew of mining robots might, through their inaction, allow humans to die, as long as the robots did not directly cause it. I think that was illegal or something, but again, the idea that there is some mathematical perfection to these dodgy laws was never part of the story. However, the manufacturers' powerful assurances that the robots were safe were.

It may also have been as you said in another story, but Little Lost Robot dealt with a weakened-First-Law robot.

And the story Reason is quite interesting to consider, also, for… reasons.

Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 3669
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: Asimov's 3 laws

Postby Soupspoon » Wed Oct 24, 2018 1:09 pm UTC

Flumble wrote:Do most stories have an extra law for prioritising the owner's commands?¹ Do any stories deal with a "try to obey previous orders as best as you can while obeying the current order" type of prioritisation?²

¹No (non-zeroth) extra laws that I know of, OTTOMH, so no 1.5th or 2.5th amendments. Though it was inevitable that some reasoning must establish a judgement of precedence by which conflicts not dealt with through the absolute 3L hierarchy are resolved.
²Several: Little Lost Robot was told to "Get lost" (with the added jeopardy of being a weakened-1st example, which turned out to be key to how it was found). The robot in Galley Slave was told to lie about the instructions it had obeyed, to make it look as though it hadn't been given them. That's usually the answer to why there's an apparent 3L violation, when there never was.

Flumble wrote:(argh stop ninja'ing me y'all!)
Ditto. ;)

tomandlu
Posts: 1075
Joined: Fri Sep 21, 2007 10:22 am UTC
Location: London, UK

Re: Asimov's 3 laws

Postby tomandlu » Wed Oct 24, 2018 3:02 pm UTC

Many thanks all - some very useful stuff there (as well as a few trips down memory lane...)
How can I think my way out of the problem when the problem is the way I think?

Ranbot
Posts: 202
Joined: Tue Apr 25, 2017 7:39 pm UTC

Re: Asimov's 3 laws

Postby Ranbot » Wed Oct 24, 2018 7:37 pm UTC

I read I Robot when I was a teenager. I should read it again. I'm not contributing to the OP's topic, so just ignore me...

scarletmanuka
Posts: 532
Joined: Wed Oct 17, 2007 4:29 am UTC
Location: Perth, Western Australia

Re: Asimov's 3 laws

Postby scarletmanuka » Tue Oct 30, 2018 10:02 am UTC

Generally, depending on the sophistication of the robot in the story, Second Law processing was done on a hierarchical or prioritised basis. The robot's owner has the most authority to give orders, and the nature or intensity of the order-giving process is taken into account (this is most clearly spelled out in Little Lost Robot, referenced above; the problem in that case is that the order to hide was given by the most authorised person in a tone of maximum urgency, so they had no way to get past it on the Second Law).
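
A sketch of how that might work (my own guess at the mechanism; the roles and weights are invented):

[code]# Prioritised Second Law processing as described above: an order's
# effective strength = the speaker's authority x how forcefully it was
# given. All roles and numbers are invented illustration, not canon.

AUTHORITY = {"owner": 3.0, "authorised staff": 2.0, "stranger": 1.0}

def order_strength(role, urgency):
    """urgency in [0, 1]: a casual aside ~0.1, a furious shout ~1.0."""
    return AUTHORITY[role] * urgency

def standing_order(orders):
    # The robot acts on whichever order is strongest; a later, weaker
    # order does NOT displace an earlier, stronger one -- which is
    # exactly the Little Lost Robot problem.
    return max(orders, key=lambda o: order_strength(o["role"], o["urgency"]))

orders = [
    {"text": "Get lost!",        "role": "authorised staff", "urgency": 1.0},
    {"text": "Come out, please", "role": "authorised staff", "urgency": 0.4},
]
print(standing_order(orders)["text"])  # -> "Get lost!" still wins[/code]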

Various stories refer to exceptionally strong Second Law orders approaching the strength of First Law, or conversely exceptionally weak Second Law orders approaching the strength of Third Law. In Bicentennial Man, the robot Andrew Martin is ordered by neighbourhood troublemakers to take himself apart, and hesitates because it had been a long time since he had been given orders in the Martin household (but then starts to comply, before being rescued by his owner).

I think it might have been one of the Elijah Baley books where a robot explains that, if given an order to self-destruct, knowing it was a valuable resource it would require an explanation of the necessity. This always seemed to me like an obvious precaution that would be built in at the factory for all but the most primitive robots (along with the one about "don't just obey any order some random person gives you if there seems to be no valid reason"). Obviously the destruction of a robot represents a form of injury to the robot's owner, so if that is taken into account in the First Law circuitry, the robot can decide when that injury is justified by the prevention of some greater harm.
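
That precaution is easy to caricature (again my own invention, with made-up numbers): treat the robot's destruction as a quantifiable injury to its owner, and demand a justification that outweighs it:

[code]# Toy version of the self-destruct precaution: destruction of the robot
# counts as injury to the owner, so compliance requires that the order
# avert some greater harm. All values are invented.

ROBOT_VALUE = 50_000   # injury to the owner if the robot is destroyed

def comply_with_self_destruct(harm_averted):
    """harm_averted: the harm (same made-up units) that the order-giver
    claims the robot's destruction would prevent."""
    return harm_averted > ROBOT_VALUE

print(comply_with_self_destruct(0))          # random vandal -> False
print(comply_with_self_destruct(1_000_000))  # averting a disaster -> True[/code]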

speising
Posts: 2282
Joined: Mon Sep 03, 2012 4:54 pm UTC
Location: wien

Re: Asimov's 3 laws

Postby speising » Tue Oct 30, 2018 10:45 am UTC

the reasoning capability needed to think through all of these consequences would of course be staggering. it's quite absurd to limit beings with a brain the size of a planet with such simple laws.

Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 3669
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: Asimov's 3 laws

Postby Soupspoon » Tue Oct 30, 2018 12:48 pm UTC

It is to be assumed that the raw processing power of a positronic brain far outstrips a biological one, requiring the immutable(ish!) three laws to be woven through the matrix to make them acceptable slaves to their makers/owners.

The proof is in the direct replacement (sometimes one side of the uncanny valley, sometimes the other): walking, talking, seeing and hearing all at least as well as a human adult, and often far better. They would (and effectively do) pass Turing Tests with ease (R. Daneel Olivaw; maybe Stephen Byerley, or else he's the human benchmark), and additionally have inhumanly capable reactions and processing ability* to go with any physical superiority they have. To err is human, to totally cock up takes a computer, so they say - or insert your favourite equivalent there. These robots don't even fail like computers; they fail at such a high level that it takes experts to decide whether they are failing, and often it's proven they technically aren't.

I don't know if this is an Asimov short story or just an Asimovesque one, but there's one tale where a robot crew descend to meet a ?Jovian? civilisation, whom they accidentally astound with their high resilience (a 'leaky' spaceship, because anything else would crush or explode from space/gas-giant pressure differentials; the ability to hand-scoop molten metals and lift massive castings when invited to see an ore-refinery/weapons-plant), and I'm sure this also invokes superpowered mental calculations, at least to the level of insta-translation/communication in the Jovian tongue.

Though, blind at the outset to how their 'honest' conversations might have been slightly misleading, it is only once they've departed to carry messages of sincere goodwill back to mankind that the robots consider that the Jovians (who, through the eyes of the reader, seem to have been putting on a "See how powerful we are, quake and tremble at these things with which we shall invade your puny planet!" sort of show of power and military might) could have been: a) trying to threaten and intimidate them, and b) working under the misapprehension that the specially designed (and necessarily rare) robot ambassadors were actual humans, whom they are now afeared of meeting in their invincible billions or trillions should the now-aborted invasion ever have gone ahead.


Umm, yeah, anyway. Handwavium brains. They think better than any other brain, and have better capacity too. (Though Daneel does have to move into a new one fairly regularly, along with other self-repair/replacement acts during his time overseeing the galaxy and humanity evolving.) To suit the plots, of course. It's rarely explained how (though even weak ionising radiation is far more functionally dangerous to the positronic brain than to a human's grey matter); I've always supposed it's an electro-mechanical-nanotech substructure that guides H+ nuclei through a precision-carved gel/foam structure of semiconductive micromembranes and trace-element tracks. But that doesn't begin to explain the firmware level of operation, just that it probably has a lot of subnanometre operating 'nodes', maybe quantum-dot wells, as the highly compressed and responsive substrate upon which that Laws-hardwired firmware (and dynamically rearrangeable non-law cortices) actually runs. Or maybe it's not as simple as that!




* A bit of a trope. The episode of Thunderbirds ("Sun Probe", IIRC) where Brains accidentally packs his clunky-but-brilliant humanoidesque robot in the Thunderbird 2 pod instead of the powerful computer he might(/would**) need to un-jam TB3 itself, after it had beamed the unjamming signal, so that it, as well as the people it was sent to save, could be saved. Anyway, that's OK, because ?Bryson? the robot is equally adept at Hard Maths, when verbally asked. Though the question is underwhelming, as I recall.

** Like James Bond's Q, they send exactly the right tools for the job, be it halfway round the world, from their Big Bag Hangar Of Toys. Unless they don't, because Plot. And Brains is the one who already R&Ded and manufactured all the right toys for every game they end up playing!

DavidSh
Posts: 148
Joined: Thu Feb 25, 2016 6:09 pm UTC

Re: Asimov's 3 laws

Postby DavidSh » Tue Oct 30, 2018 3:20 pm UTC

Soupspoon wrote:I don't know if this is an Asimov short story or just an Asimovesque one, but there's one tale where a robot crew descend to meet a ?Jovian? civilisation, whom they accidentally astound with their high resilience (a 'leaky' spaceship, because anything else would crush or explode from space/gas-giant pressure differentials; the ability to hand-scoop molten metals and lift massive castings when invited to see an ore-refinery/weapons-plant), and I'm sure this also invokes superpowered mental calculations, at least to the level of insta-translation/communication in the Jovian tongue.

Though, blind at the outset to how their 'honest' conversations might have been slightly misleading, it is only once they've departed to carry messages of sincere goodwill back to mankind that the robots consider that the Jovians (who, through the eyes of the reader, seem to have been putting on a "See how powerful we are, quake and tremble at these things with which we shall invade your puny planet!" sort of show of power and military might) could have been: a) trying to threaten and intimidate them, and b) working under the misapprehension that the specially designed (and necessarily rare) robot ambassadors were actual humans, whom they are now afeared of meeting in their invincible billions or trillions should the now-aborted invasion ever have gone ahead.

That is the Asimov story "Victory Unintentional". Wikipedia says John W. Campbell rejected it.

Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 3669
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: Asimov's 3 laws

Postby Soupspoon » Tue Oct 30, 2018 9:08 pm UTC

Dat's der bunny. Glad I got it mostly right (though not as obviously relevant to my main point as I had thought).

scarletmanuka
Posts: 532
Joined: Wed Oct 17, 2007 4:29 am UTC
Location: Perth, Western Australia

Re: Asimov's 3 laws

Postby scarletmanuka » Wed Oct 31, 2018 2:06 am UTC

Soupspoon wrote:and I'm sure this also invokes superpowered mental calculations, at least to the level of insta-translation/communication in the Jovian tongue.

Actually, not, and this is part of the relevant backstory. The system of communication had been worked out years before, starting from a simple click code and developing to more sophisticated forms. The Jovians abruptly cut off contact when communication developed to the point that we could send descriptions of ourselves, and they suddenly realised that we weren't the same as them and they had been communicating with "vermin".

Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 3669
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: Asimov's 3 laws

Postby Soupspoon » Wed Oct 31, 2018 3:55 pm UTC

(Well, it was almost certainly the best part of four decades ago that I first, and maybe last, read it. :P)

jewish_scientist
Posts: 945
Joined: Fri Feb 07, 2014 3:15 pm UTC

Re: Asimov's 3 laws

Postby jewish_scientist » Wed Oct 31, 2018 5:05 pm UTC

I just realized that you could simply program the robots to interpret laws as commands given to them by humans. The solution is then simply whatever mechanism the robots use to decide what to do when given contradictory orders.
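
A toy rendering of the idea (issuers and weights invented by me, not from the stories): statutes are pre-loaded as standing orders from a fixed "legislature" issuer, and the ordinary order-conflict machinery ranks them against live commands:

[code]# The suggestion above, sketched: laws become boot-time Second Law
# orders from a synthetic high-precedence issuer, resolved by the same
# mechanism that handles any contradictory orders. Invented values.

PRECEDENCE = {"legislature": 4, "owner": 3, "stranger": 1}

def resolve(commands):
    # Highest issuer precedence wins; the latest order breaks any
    # remaining ties (hence the enumerate index).
    return max(enumerate(commands),
               key=lambda ic: (PRECEDENCE[ic[1]["issuer"]], ic[0]))[1]

commands = [
    {"issuer": "legislature", "text": "do not destroy property"},  # boot-time
    {"issuer": "stranger",    "text": "destroy yourself"},         # runtime
]
print(resolve(commands)["text"])  # -> the statute outranks the stranger[/code]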
"You are not running off with Cow-Skull Man Dracula Skeletor!"
-Socrates

Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 3669
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: Asimov's 3 laws

Postby Soupspoon » Wed Oct 31, 2018 5:32 pm UTC

They were always described as immutable (once laid down, presumably during manufacture) and hard-wired: threads within the positronic matrix that did not adapt and adopt new methods of operation. Wire threads, or threads in the sense of forked processing of the microcode, but somehow Read Only in their ultimate realisation upon the rest of the system.

'Normal' commands, OTOH, would be given after the fact. They'd exist as a transient "current state" (in both senses?) while active, but further malleable to being adjusted/rescinded as required by circumstance, further instructions of sufficient scope or the perpetual possibility of reassessment under the Three Laws' influence/veto.

I suppose you could engineer a more basic original 'blank mind' with an initial "you will always follow this first order, and my first order is to obey these three laws: …" as a bootstrap for the newborn robotic entity, but the conceit was that it was a baked-in 3L package, presumably together with enough understanding (usually!) to grasp what the sloppy human wording meant. Including exactly what was a human, such that one might never be harmed, etc. (The 'robot religion' story shows that this wasn't absolute. Although it was enough to disable R. Giskard when every other logic circuit it/he possessed screamed at him that "humanity" should be given priority over "a human"; he destroyed himself through the effort of actively, internally moving the goalposts for Daneel so that Daneel would never suffer the same dilemma.)

