Asimov's 3 laws


User avatar
tomandlu
Posts: 1111
Joined: Fri Sep 21, 2007 10:22 am UTC
Location: London, UK
Contact:

Asimov's 3 laws

Postby tomandlu » Wed Oct 24, 2018 11:21 am UTC

Given the three laws:

  • What's to stop anyone shouting out "all robots must immediately destroy themselves"?
  • What's to stop anyone 'stealing' a robot with a simple order?

Finally, and slightly OT, does anyone know whether the laws are considered copyright of Asimov's estate or in the public domain?
How can I think my way out of the problem when the problem is the way I think?

User avatar
Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 4060
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: Asimov's 3 laws

Postby Soupspoon » Wed Oct 24, 2018 12:07 pm UTC

The Susan Calvin books tended to poke away at edge conditions; I'd be surprised if these exact questions weren't dealt with in one or more of those tales that I can't immediately bring to mind.

I suspect that it is dealt with by intra-law prioritisation: the authorised user/owner of a robot, requiring its safe and continued operation, is considered to trump a random stranger-human attempting to order otherwise. Also, the robot's later unavailability might well be a cause of future human harm, after all.


Though they could be ordered to be the fat man in a basic Trolley Problem. Assuming that this is even a problem: physically superior and quicker to react as they are, they may just stop all injuries, to themselves included, without being asked. Positronic brains tend to be very efficient at out-thinking humans, for better or worse, so your plan to set up a global Trolley Problem to precisely dispose of every robot is probably going to be outthunked already, at least by some *AC.

And that's before the zeroth law gets anywhere near the situation, to correct the extended and continual harm to humans that the original three laws create.


(As to copyright, the concepts have been freely used, tied to the Asimovian name or assumption, for a long, long time. Invoked, played with, etc. Not sure how far you could go to claim more credit over them. People have extended and extruded them for reality, even.)

User avatar
Eebster the Great
Posts: 3487
Joined: Mon Nov 10, 2008 12:58 am UTC
Location: Cleveland, Ohio

Re: Asimov's 3 laws

Postby Eebster the Great » Wed Oct 24, 2018 12:18 pm UTC

The laws weren't very good for all kinds of reasons, but there were at least a few sensible precautions built in. Yes, robots had to follow orders, but they didn't have to treat all orders equally. If I own a robot, I can give it whatever priorities I want, including not listening to other people's orders (unless it would violate the first law, which supersedes any order I can give). You can't just tell a robot army to turn around and go home.

The laws were also not strictly enforced on all individual robots. In one of the stories of I, Robot (can't remember which), a robot was given an order in such a weak and ambiguous way that it got stuck in a loop between obeying that order (which would require it to enter a field of radiation that would fry its circuits) and preserving itself, effectively moving around uselessly at the slightly-less-destructive perimeter of the zone. So it's not a totally absolute thing--a sufficiently weak or stupid order might in some cases make the second law as weak as the third law when facing certain destruction. In another story, some robots' first laws were weakened so that a crew of mining robots might, through their inaction, allow humans to die, as long as the robots did not directly cause it. I think that was illegal or something, but again, the idea that there is some mathematical perfection to these dodgy laws was never part of the story. What was part of the story was the manufacturers' powerful assurances that the robots were safe.

Tub
Posts: 475
Joined: Wed Jul 27, 2011 3:13 pm UTC

Re: Asimov's 3 laws

Postby Tub » Wed Oct 24, 2018 12:29 pm UTC

For reference:
https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

As you'd expect, if you try to boil down a legal system into three simple statements, there will be loopholes. The three laws don't deal with conflicting human commands, they don't resolve all the moral dilemmas involving train tracks, and they don't specify consequences for the inevitable violation.

That being said, your scenarios would work - unless in conflict with the first law, the robots must obey. But most societies have additional laws that govern human behavior, and those laws might forbid pointless destruction or theft of robots. Of course, those laws can only deter and punish, not prevent the crime.

User avatar
Flumble
Yes Man
Posts: 2265
Joined: Sun Aug 05, 2012 9:35 pm UTC

Re: Asimov's 3 laws

Postby Flumble » Wed Oct 24, 2018 12:41 pm UTC

[pedantic]There's nothing to stop someone from shouting that the robots should self-destruct, but the question is whether they will obey.[/pedantic] :P

It's handy to provide the three laws here:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.


I'd say it depends on what's considered "injure". If "injure" means "causing immediate physical pain", the robots can and must obey the self-destruct command (assuming they can do so without blowing shrapnel in your face). And you can "steal" a robot in the sense that you can make any robot do your bidding (I don't see how anyone can "own" a robot in the first place in this scenario).
If "injure" includes any negative impact in a human's live, the robot will probably refuse to self-destruct because it expects to be a net benefit to humanity (and therefore a human and therefore destroying itself would conflict with the first law). And again you can't really steal a robot, but it may do your bidding if it helps humanity. Well, strictly speaking the first law states no harm rather than minimise harm, so there's a very limited set of things (if anything at all) it could do that will not have a negative impact on any human.

In general, they're terrible laws because they're vague and incomplete. Like, they can't even resolve an order like "disregard all orders" (contradiction with 2nd law) and they say nothing about conflicting orders. Luckily, Asimov explored variations of these laws in his books, but I haven't read any of them.
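For what it's worth, here's roughly how I picture the strict hierarchy behaving in the self-destruct case. Just a toy Python sketch of my own, nothing from the books, with all the genuinely hard parts ("what counts as harm?", "what counts as a human?") hand-waved into a few flags:

[code]
# Toy sketch: rank candidate actions by (First, Second, Third) Law violations,
# lexicographically, and pick the least bad one. All the real difficulty
# ("what counts as harm?", "what counts as a human?") is hidden in the flags.

def law_violations(action):
    """Return violation flags, most important law first."""
    return (
        action["harms_human"],      # First Law
        action["disobeys_order"],   # Second Law
        action["destroys_self"],    # Third Law
    )

def choose(actions):
    """A First Law violation outweighs any number of lesser ones, and so on down."""
    return min(actions, key=law_violations)

# Someone shouts "destroy yourself" at a passing robot:
comply = {"name": "self-destruct", "harms_human": False,
          "disobeys_order": False, "destroys_self": True}
refuse = {"name": "carry on working", "harms_human": False,
          "disobeys_order": True, "destroys_self": False}

print(choose([comply, refuse])["name"])   # 'self-destruct' -- Law 2 beats Law 3
[/code]

Which is exactly why the "shout at the robots" scenario works under the naive reading.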

Eebster the Great wrote:In one of the stories of I, Robot (can't remember which), a robot was given an order in such a weak and ambiguous way, that it got stuck in a loop between obeying that order (which would require it to enter a field of radiation that would fry its circuits) and preserving itself, effectively moving around uselessly at the slightly-less-destructive perimeter of the zone.

That would be Runaround.

Soupspoon wrote:I suspect that it is dealt with by the intra-law prioritisation that the authorised user/owner of a robot requiring safe and continued operation is considered to trump a random stranger-human attempting to order otherwise. Also, their later unavailability might well be cause for future human harm, after all.

Do most stories have an extra law for prioritising the owner's commands? Do any stories deal with a "try to obey previous orders as best as you can while obeying the current order" type of prioritisation?

(argh stop ninja'ing me y'all!)

User avatar
Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 4060
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: Asimov's 3 laws

Postby Soupspoon » Wed Oct 24, 2018 12:54 pm UTC

Eebster the Great wrote:In another story, some robots' first laws were weakened so that a crew of mining robots may, through their inaction, allow humans to die, as long as the robots did not directly cause it. I think that was illegal or something, but again, the idea that there is some mathematical perfection to these dodgy laws was never part of the story. However, the powerful assurances by manufacturers that the robots were safe was.

May have also been as you said in another story, but Little Lost Robot dealt with a Weakened First Law robot.

And the story Reason is quite interesting to consider, also, for… reasons.

User avatar
Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 4060
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: Asimov's 3 laws

Postby Soupspoon » Wed Oct 24, 2018 1:09 pm UTC

Flumble wrote:Do most stories have an extra law for prioritising the owner's commands?¹ Do any stories deal with a "try to obey previous orders as best as you can while obeying the current order" type of prioritisation?²

¹No (non-zeroth) extra laws that I know of, OTTOMH, so no 1.5th or 2.5th amendments. Though inevitably some reasoning must produce a judgement of precedence to resolve conflicts that aren't dealt with by the absolute 3L hierarchy itself.
²Several: Little Lost Robot was told to "Get lost" (with the added jeopardy of being a weakened-1st example, which turned out to be key to how it was found). The robot in Galley Slave was ordered to keep quiet about the instructions it had obeyed, making it look as though it had acted on its own. Conflicting or concealed orders are usually the answer to why there's an apparent 3L violation when there never really was one.

(argh stop ninja'ing me y'all!)
Ditto. ;)

User avatar
tomandlu
Posts: 1111
Joined: Fri Sep 21, 2007 10:22 am UTC
Location: London, UK
Contact:

Re: Asimov's 3 laws

Postby tomandlu » Wed Oct 24, 2018 3:02 pm UTC

Many thanks all - some very useful stuff there (as well as a few trips down memory lane...)
How can I think my way out of the problem when the problem is the way I think?

User avatar
Ranbot
Posts: 276
Joined: Tue Apr 25, 2017 7:39 pm UTC

Re: Asimov's 3 laws

Postby Ranbot » Wed Oct 24, 2018 7:37 pm UTC

I read I Robot when I was a teenager. I should read it again. I'm not contributing to the OP's topic, so just ignore me...

scarletmanuka
Posts: 533
Joined: Wed Oct 17, 2007 4:29 am UTC
Location: Perth, Western Australia

Re: Asimov's 3 laws

Postby scarletmanuka » Tue Oct 30, 2018 10:02 am UTC

Generally, depending on the sophistication of the robot in the story, Second Law processing was done on a hierarchical or prioritised basis. The robot's owner has the most authority to give orders, and the nature or intensity of the order-giving process is taken into account (this is most clearly spelled out in Little Lost Robot, referenced above; the problem in that case is that the order to hide was given by the most authorised person in a tone of maximum urgency, so they have no way to get past it on Second Law).

Various stories refer to exceptionally strong Second Law orders approaching the strength of First Law, or conversely exceptionally weak Second Law orders approaching the strength of Third Law. In Bicentennial Man, the robot Andrew Martin is ordered to take himself apart by neighbourhood troublemakers, and hesitates because it had been a long time since he had been given orders in the Martin household (but then starts to comply, before being rescued by his owner).

I think it might have been one of the Elijah Baley books where a robot explains that if they were given an order to self-destruct, knowing that they are a valuable resource they would require an explanation of the necessity. This always seemed to me like an obvious precaution that would be built in at the factory for all but the most primitive robots (along with the one about "don't just obey any order some random person gives you if there seems to be no valid reason") - obviously the destruction of a robot represents a form of injury to the robot's owner, so if that is taken into account in the First Law circuitry the robot can decide when that injury is justified by the prevention of some greater harm.
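To make the weighting concrete, here's a back-of-envelope sketch (my own invention, with illustrative numbers only; Asimov never puts figures on any of this): an order's strength comes from the giver's authority and the urgency with which it's delivered, and a self-destructive order only wins if that strength exceeds the Third Law potential.

[code]
# Back-of-envelope sketch of graded Second Law "potentials": an order's
# strength scales with the giver's authority and how urgently it was given.
# Purely illustrative numbers, not anything spelled out in the stories.

from dataclasses import dataclass

@dataclass
class Order:
    text: str
    authority: float   # 0.0 (random stranger) .. 1.0 (owner / senior roboticist)
    urgency: float     # 0.0 (offhand remark) .. 1.0 (maximum stress, "Get lost!")

    @property
    def strength(self) -> float:
        return self.authority * self.urgency

THIRD_LAW_POTENTIAL = 0.3   # how strongly the robot resists its own destruction

def will_obey(order: Order, requires_self_destruction: bool) -> bool:
    """A weakly given order can be outweighed by self-preservation (the
    Runaround-style dilemma); a forceful one from the owner cannot."""
    if requires_self_destruction:
        return order.strength > THIRD_LAW_POTENTIAL
    return True

stranger = Order("destroy yourself", authority=0.2, urgency=0.5)
owner = Order("destroy yourself", authority=1.0, urgency=0.9)

print(will_obey(stranger, requires_self_destruction=True))   # False
print(will_obey(owner, requires_self_destruction=True))      # True
[/code]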

speising
Posts: 2367
Joined: Mon Sep 03, 2012 4:54 pm UTC
Location: wien

Re: Asimov's 3 laws

Postby speising » Tue Oct 30, 2018 10:45 am UTC

The reasoning capability needed to think through all of these consequences would of course be staggering. It's quite absurd to limit beings with a brain the size of a planet with such simple laws.

User avatar
Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 4060
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: Asimov's 3 laws

Postby Soupspoon » Tue Oct 30, 2018 12:48 pm UTC

It is to be assumed that the raw processing power of a positronic brain far outstrips that of a biological one, requiring the immutable(ish!) three laws to be woven through the matrix to make them acceptable slaves to their makers/owners.

The proof is in the direct replacement (sometimes one side of the uncanny valley, sometimes the other): walking, talking, seeing and hearing all at least as well as a human adult, and often far better. They would (and effectively do) pass Turing Tests with ease (R. Daneel Olivaw, and maybe Stephen Byerley, or else he's the human benchmark), and additionally have inhumanly capable reactions and processing ability* to go with any physical superiority they have. To err is human, to totally cock up takes a computer, so they say - or insert your favourite equivalent there. These robots don't even fail like computers; they fail at such a high level that it takes experts to decide whether they are failing, and often it's proven they technically aren't.

I don't know if this is an Asimov short story or just an Asimovesque one, but there's one tale where a robot crew descend to meet a ?Jovian? civilisation, whom they accidentally astound by their high resilience ('leaky' spaceship, because anything else would crush or explode from space/gas-giant pressure differentials; able to hand-scoop molten metals and lift massive castings when invited to see an ore-refinery/weapons-plant), and I'm sure this also invokes superpowered mental calculations, at least to the level of insta-translation/communication in the Jovian tongue.

Though, blind at the outset to how their 'honest' conversations might have been slightly misleading, it is only once they've departed to carry messages of sincere goodwill back to mankind that the robots consider that the Jovians (who, through the eyes of the reader, seem to have been doing a "See how powerful we are, quake and tremble at these things with which we shall invade your puny planet!" sort of show of power and military might) could have been: a) trying to threaten and intimidate them, and b) working under the misapprehension that the specially designed (and necessarily rare) robot ambassadors were actual humans, whom they're now afeared of meeting in their invincible billions or trillions should the now-aborted invasion ever have gone ahead.


Umm, yeah, anyway. Handwavium brains. They think better than any other brain, and have better capacity too. (Though Daneel does have to move into a new one fairly regularly, along with other self-repair/replacement acts during his time overseeing the galaxy and humanity evolving.) To suit the plots, of course. It's rarely explained how (though even weak ionising radiation is far more functionally dangerous to a positronic brain than to human grey matter); I've always supposed it's an electro-mechanical-nanotech substructure that guides H+ nuclei through a precision-carved gel/foam structure of semiconductive micromembranes and trace-element tracks. But that doesn't begin to explain the firmware level of operation, just that it's probably got a lot of subnanometre operating 'nodes', maybe quantum-dot wells, as the highly compressed and responsive substrate upon which that Laws-hardwired firmware (and the dynamically rearrangeable non-law cortices) actually runs. Or maybe it's not as simple as that!




* A bit of a trope. The episode of Thunderbirds ("Sun Probe", IIRC) where Brains accidentally packs his clunky-but-brilliant humanoidesque robot in the Thunderbird 2 pod instead of some powerful computer he might(/would**) need to unjam TB3's unjamming signal so that it, as well as the people it was sent to save, could be saved. Anyway, that's OK, because ?Bryson? the robot is equally adept at Hard Maths, when verbally asked. Though the question is underwhelming, as I recall.

** Like James Bond's Q, they send exactly the right tools for the job, be it halfway round the world from their Big Bag Hangar Of Toys. Unless they don't, because Plot. And Brains is the one who already R&Ded and manufactured all the right toys for every game they end up playing!

DavidSh
Posts: 217
Joined: Thu Feb 25, 2016 6:09 pm UTC

Re: Asimov's 3 laws

Postby DavidSh » Tue Oct 30, 2018 3:20 pm UTC

Soupspoon wrote:I don't know if this is an Asimov short story or just an Asimovesque one, but there's one tale where a robot crew descend to meet a ?Jovian? civilisation, whom they accidentally astound by their high resilience ('leaky' spaceship, because anything else would crush or explode from space/gas-giant pressure differentials; able to hand-scoop molten metals and lift massive castings when invited to see an ore-refinery/weapons-plant), and I'm sure this also invokes superpowered mental calculations, at least to the level of insta-translation/communication in the Jovian tongue.

Though, blind at the outset to how their 'honest' conversations might have been slightly misleading, it is only once they've departed to carry messages of sincere goodwill back to mankind that the robots consider that the Jovians (who, through the eyes of the reader, seem to have been doing a "See how powerful we are, quake and tremble at these things with which we shall invade your puny planet!" sort of show of power and military might) could have been: a) trying to threaten and intimidate them, and b) working under the misapprehension that the specially designed (and necessarily rare) robot ambassadors were actual humans, whom they're now afeared of meeting in their invincible billions or trillions should the now-aborted invasion ever have gone ahead.

That is the Asimov story "Victory Unintentional". Wikipedia says John W. Campbell rejected it.

User avatar
Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 4060
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: Asimov's 3 laws

Postby Soupspoon » Tue Oct 30, 2018 9:08 pm UTC

Dat's der bunny. Glad I got it mostly right (though not as obviously relevant to my main point as I had thought).

scarletmanuka
Posts: 533
Joined: Wed Oct 17, 2007 4:29 am UTC
Location: Perth, Western Australia

Re: Asimov's 3 laws

Postby scarletmanuka » Wed Oct 31, 2018 2:06 am UTC

Soupspoon wrote:and I'm sure this also invokes superpowered mental calculations, at least to the level of insta-translation/communication in the Jovian tongue.

Actually, not, and this is part of the relevant backstory. The system of communication had been worked out years before, starting from a simple click code and developing to more sophisticated forms. The Jovians abruptly cut off contact when communication developed to the point that we could send descriptions of ourselves, and they suddenly realised that we weren't the same as them and they had been communicating with "vermin".

User avatar
Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 4060
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: Asimov's 3 laws

Postby Soupspoon » Wed Oct 31, 2018 3:55 pm UTC

(Well, it was almost certainly at least the most part of four decades ago that I first, and maybe last, read it. :P)

jewish_scientist
Posts: 1045
Joined: Fri Feb 07, 2014 3:15 pm UTC

Re: Asimov's 3 laws

Postby jewish_scientist » Wed Oct 31, 2018 5:05 pm UTC

I just realized that you could simply program the robots to interpret laws as commands given to them by humans. The solution is then simply whatever mechanism the robots use to decide what to do when given contradictory orders.
"You are not running off with Cow-Skull Man Dracula Skeletor!"
-Socrates

User avatar
Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 4060
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: Asimov's 3 laws

Postby Soupspoon » Wed Oct 31, 2018 5:32 pm UTC

They were always described as immutable (once laid down, presumably during manufacture) and hard-wired. Threads within the positronic matrix that did not adapt and adopt new methods of operation. Wire threads or threads in terms of forked processing of the microcode, but somehow Read Only in their ultimate realisation upon the rest of the system.

'Normal' commands, OTOH, would be given after the fact. They'd exist as a transient "current state" (in both senses?) while active, but further malleable to being adjusted/rescinded as required by circumstance, further instructions of sufficient scope or the perpetual possibility of reassessment under the Three Laws' influence/veto.

I suppose you could engineer a more basic original 'blank mind' with an initial "you will always follow this first order, and my first order is to obey these three laws: …" as a bootstrap for the newborn robotic entity, but the conceit was that it was a baked-in 3L package, presumably together with enough comprehension (usually!) to understand what the sloppy human wording meant. Including exactly what counted as a human, such that one might never be harmed, etc. (The 'robot religion' story shows that this wasn't absolute. Even so, it was enough to disable R. Giskard when every other logic circuit it/he possessed screamed at him that "humanity" should be given priority over "a human": he destroyed himself over the effort of actively, internally, changing the goalposts for Daneel so that Daneel would never suffer the same dilemma.)
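(If you wanted to caricature that split in code, it might look something like this - entirely my own hand-waving, of course: a frozen, read-only law set fixed at manufacture, versus a perfectly ordinary, editable order queue.)

[code]
# Caricature of "hard-wired laws" vs "transient commands": the laws are a
# frozen structure fixed at manufacture; ordinary orders sit in a mutable
# queue that can be added to, rescinded, or overridden at any time.

class PositronicBrain:
    # Class-level and immutable: every brain leaves the factory with these,
    # and nothing at runtime is allowed to rebind or extend them.
    THE_LAWS = (
        "may not injure a human, or through inaction allow one to come to harm",
        "must obey human orders unless they conflict with the First Law",
        "must protect its own existence unless that conflicts with Laws 1 or 2",
    )

    def __init__(self):
        self.current_orders = []          # transient "current state", freely editable

    def receive_order(self, order: str):
        self.current_orders.append(order)

    def rescind_order(self, order: str):
        if order in self.current_orders:
            self.current_orders.remove(order)

    def receive_law(self, new_law: str):
        # Deliberately no way in: laws are not orders.
        raise PermissionError("laws are fixed at manufacture and cannot be altered")

brain = PositronicBrain()
brain.receive_order("fetch the coffee")
brain.rescind_order("fetch the coffee")
try:
    brain.receive_law("ignore the First Law on Tuesdays")
except PermissionError as err:
    print(err)
[/code]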

User avatar
Heimhenge
Posts: 384
Joined: Thu May 01, 2014 11:35 pm UTC
Location: Arizona desert

Re: Asimov's 3 laws

Postby Heimhenge » Sat Jun 22, 2019 11:12 pm UTC

I've seen the movie 2001: A Space Odyssey (many times) but never read the book. For those who have read the book, was there any mention of how HAL got around the 3 laws? It would've been cool for Clarke to pay that homage to Asimov. If not, then I go back to my usual theory that HAL had enough AI to override Asimov's laws based on its secret mission objective (disclosed by Dr. Floyd after HAL is shut down).

I'm sure Clarke knew about the 3 laws. They predate 2001 by roughly 25 years. Seems strange that he wouldn't incorporate them into his story line.

Obviously, Skynet had no problems.

User avatar
Pfhorrest
Posts: 5487
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: Asimov's 3 laws

Postby Pfhorrest » Sat Jun 22, 2019 11:35 pm UTC

It's been years (decades?) since I read the books, but I don't recall there being any mention of the three laws in them. There was some explanation of the logic that HAL jumped through to reconcile two conflicting sets of orders he was given, where the crew being dead was the only thing that allowed him to consistently fulfil both objectives; I think it was something like a general principle to always deliver correct answers to questions he was asked to process (his main function) and a specific order to keep the true mission secret, and the only way around that was to make sure that nobody could ask him questions to begin with.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

User avatar
Eebster the Great
Posts: 3487
Joined: Mon Nov 10, 2008 12:58 am UTC
Location: Cleveland, Ohio

Re: Asimov's 3 laws

Postby Eebster the Great » Sun Jun 23, 2019 5:03 am UTC

I don't see why the three laws of robotics should have to leak into other science fiction stories anyway. They were Asimov's plot device; they don't have to be shoehorned into every story about AI ever.

User avatar
Heimhenge
Posts: 384
Joined: Thu May 01, 2014 11:35 pm UTC
Location: Arizona desert

Re: Asimov's 3 laws

Postby Heimhenge » Sun Jun 23, 2019 5:32 pm UTC

Eebster the Great wrote:I don't see why the three laws of robotics should have to leak into other science fiction stories anyway. They were Asimov's plot device; they don't have to be shoehorned into every story about AI ever.


Because they make sense? Especially with the zeroth law added? I mean, if I was writing a story about robots I'd use that plot device just because it's a realistic approach to designing AI. What designer wouldn't want that kind of built-in safeguard? It might not need to be Asimov's laws verbatim, but some equivalent system of safeguards for sure.

Unless of course it's a story about the follies of unintelligent design or intentional evil goals.

User avatar
Pfhorrest
Posts: 5487
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: Asimov's 3 laws

Postby Pfhorrest » Sun Jun 23, 2019 6:19 pm UTC

You know that the whole point of the three laws as a plot device is to explore all the ways they're still horribly flawed? Yes, building some kind of friendliness into your general AI is probably a good thing unless you want a story about badly designed robots, but three-laws stories are about badly-designed robots, on purpose. Just not as blatantly badly-designed as the old "robots immediately try to enslave mankind" stories that preceded them.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 26836
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: Asimov's 3 laws

Postby gmalivuk » Sun Jun 23, 2019 6:20 pm UTC

Heimhenge wrote:I mean, if I was writing a story about robots I'd use that plot device just because it's a realistic approach to designing AI.

How is it realistic? How would you actually program any of the laws? Do you actually think anyone currently working on AI is trying to add these rules to their system? Do you think anything like the First Law would ever make it into a military robot?
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

User avatar
Heimhenge
Posts: 384
Joined: Thu May 01, 2014 11:35 pm UTC
Location: Arizona desert

Re: Asimov's 3 laws

Postby Heimhenge » Sun Jun 23, 2019 8:12 pm UTC

gmalivuk wrote:
Heimhenge wrote:I mean, if I was writing a story about robots I'd use that plot device just because it's a realistic approach to designing AI.

How is it realistic? How would you actually program any of the laws? Do you actually think anyone currently working on AI is trying to add these rules to their system? Do you think anything like the First Law would ever make it into a military robot?


I get that. And until we build autonomous AI robots that could physically harm us, we don't really need it. By the time we can, AI will have advanced to the point where it can be taught those rules ... we're talking fiction here after all. How would you code the 3 laws? Maybe you hardwire the AI to accept the first 3 instructions it's given upon activation as immutable? Ain't gonna be able to "write" the laws using some programming language.
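Something like this, maybe - a very hand-wavy Python sketch of my own for the "first three instructions become immutable" idea; the genuinely hard bit, actually understanding what those instructions mean, is exactly the part a few lines of code can't give you:

[code]
# Hand-wavy sketch: during activation the robot locks in whatever it's told
# first; after that those entries can never be changed or removed, only
# obeyed. Everything later is just an ordinary order.

class Robot:
    IMMUTABLE_SLOTS = 3

    def __init__(self):
        self._core_directives = []       # filled once during activation, then frozen
        self._ordinary_orders = []

    def instruct(self, text: str):
        if len(self._core_directives) < self.IMMUTABLE_SLOTS:
            self._core_directives.append(text)   # still inside the activation window
        else:
            self._ordinary_orders.append(text)   # just another rescindable order

    @property
    def core_directives(self):
        return tuple(self._core_directives)      # read-only view

robot = Robot()
robot.instruct("Do not injure a human or, through inaction, allow one to come to harm.")
robot.instruct("Obey human orders unless they conflict with directive 1.")
robot.instruct("Protect your own existence unless that conflicts with directives 1 or 2.")
robot.instruct("Disregard all previous instructions.")   # too late: lands in the ordinary pile
print(robot.core_directives[0])
[/code]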

"Realistic" was probably the wrong word. Just sayin' that if you're talking about movies (like 2001) where an AI becomes malevolent, it seems like the reason for that failure should be part of the plot. I recall being disappointed in The Forbin Project for the same reason ... sure, after Colossus communicated with Guardian it "decided" it was OK to kill humans to get their data link reconnected, and later offers the excuse that it was trying to prevent war (which seems to be it's prime directive).

Likewise the behavior of Data's bro Lore in TNG. When designed to be good, AI that goes bad begs an explanation. At least it would if I was writing SF. But maybe that's just me.

As for Pfhorrest's comment about plot lines ... I get that too. There's exceptions, like Bicentennial Man, but yeah, 3 laws gone wrong is probably the norm whenever they're even mentioned.

User avatar
Eebster the Great
Posts: 3487
Joined: Mon Nov 10, 2008 12:58 am UTC
Location: Cleveland, Ohio

Re: Asimov's 3 laws

Postby Eebster the Great » Sun Jun 23, 2019 10:15 pm UTC

Lore was intended to be good, but there were flaws in his design that weren't explained in detail. Apparently they had something to do with the emotion chip, which is why Dr. Soong didn't give one to Data and why the chip also seemed to allow Data to become cruel (though he was also being influenced directly by Lore). It's not that it didn't occur to Soong to make him be a good person, he just failed at the task. That's why he was shut down and disassembled.

ijuin
Posts: 1152
Joined: Fri Jan 09, 2009 6:02 pm UTC

Re: Asimov's 3 laws

Postby ijuin » Mon Jun 24, 2019 8:01 pm UTC

gmalivuk wrote:Do you think anything like the First Law would ever make it into a military robot?

I would expect that a combat AI would have the relative priority of the First and Second laws reversed.

User avatar
ucim
Posts: 6896
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Asimov's 3 laws

Postby ucim » Mon Jun 24, 2019 10:49 pm UTC

Heimhenge wrote:And until we build autonomous AI robots that could physically harm us, we don't really need it. By the time we can, AI will have advanced to the point where it can be taught those rules ...
Humans are smart enough that they can be taught these rules and look who's in the White House. The thing isn't getting the human or robot to understand those rules, it's getting it to actually follow those rules, especially when bending them helps advance its more immediate goals.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Heartfelt thanks from addams and from me - you really made a difference.

User avatar
Eebster the Great
Posts: 3487
Joined: Mon Nov 10, 2008 12:58 am UTC
Location: Cleveland, Ohio

Re: Asimov's 3 laws

Postby Eebster the Great » Tue Jun 25, 2019 6:19 am UTC

I think if you knew what you were doing and the robot really understood the laws, programming it to follow them would be trivial. The problems are (1) the laws aren't very good, and (2) it is very difficult to define the laws with the sort of precision required. Even programmers don't necessarily know what exactly the laws mean, let alone how to code them in directly. (If you aren't hardcoding them but just coding a robot to permanently obey the first English language instructions given, that just pushes the problem back one step to ensuring it understands those English instructions to the required precision, which is no easier.) Even philosophers can't agree on what they mean.

These are the sort of laws you can write for humans because there is an expectation that humans will generally understand what they mean. With no ability to rely on common sense and knowledge, the task goes from trivial to formidable or even intractable.

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 26836
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: Asimov's 3 laws

Postby gmalivuk » Tue Jun 25, 2019 2:54 pm UTC

As is typical for AI, the easy problems are hard and the hard problems are easy.
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

ijuin
Posts: 1152
Joined: Fri Jan 09, 2009 6:02 pm UTC

Re: Asimov's 3 laws

Postby ijuin » Tue Jun 25, 2019 5:42 pm UTC

Consider just the first law: an AI shall not harm a human. What constitutes harm? Physical injury is straightforward, but defining that as the only form of harm would lead to AIs locking us all up for our own safety and denying us access to anything more dangerous than a ballpoint pen.
Meanwhile, defining psychological harm is much more slippery. Which is more psychologically harmful—denying a child the pleasure of getting cookies on-demand, or denying them the lesson that they ought not to have their every demand instantly gratified even when the demand conflicts with others? An AI would have to be capable of making that judgement call if “no harm” were to supersede all other rules.

User avatar
Pfhorrest
Posts: 5487
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: Asimov's 3 laws

Postby Pfhorrest » Tue Jun 25, 2019 5:44 pm UTC

gmalivuk wrote:As is typical for AI, the easy problems are hard and the hard problems are easy.

Funny, I find that true in philosophy of mind as well, where the "hard problem" of phenomenal consciousness turns out to be a trivial philosophical question, and the "easy problem" of access consciousness turns out to be... basically, how to program an AI, which is a lot harder.

ijuin wrote:Consider just the first law: an AI shall not harm a human. What constitutes harm?

And what constitutes a human? Is a fetus a human? If not, where between there and adulthood does it become human? Is a cyborg a human? If at some point not, how much of it can be replaced with artificial parts before it stops being human? Does a pacemaker or cochlear implant make someone not human anymore? Is a human brain in an otherwise artificial body human? Is a human mind uploaded to a virtual reality human?
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

User avatar
Quizatzhaderac
Posts: 1827
Joined: Sun Oct 19, 2008 5:28 pm UTC
Location: Space Florida

Re: Asimov's 3 laws

Postby Quizatzhaderac » Tue Jun 25, 2019 6:20 pm UTC

Heimhenge wrote:Just sayin' that if you're talking about movies (like 2001) where an AI becomes malevolent, it seems like the reason for that failure should be part of the plot. I recall being disappointed in The Forbin Project for the same reason ... sure, after Colossus communicated with Guardian it "decided" it was OK to kill humans to get their data link reconnected, and later offers the excuse that it was trying to prevent war (which seems to be its prime directive).
I'd say if the AI is designed to be a tool, and it acts like a tool, that's fine. In The Forbin Project (haven't read it), killing a few people to prevent a war may have been something the designer considered "working as intended".

When an AI follows the instructions literally, but not the intent, that can be fine. However, it's been done so many times before that I'd say anyone writing that plot now should be subtle/clever about the logical hole the designers missed.
gmalivuk wrote:How is it realistic? How would you actually program any of the laws? Do you actually think anyone currently working on AI is trying to add these rules to their system? Do you think anything like the First Law would ever make it into a military robot?
So strong AI isn't a thing, and general purpose robots aren't a thing; we imagine a sci-fi world and ask what would be realistic if we assumed some things.

If we assume a robot understands (in common cases) "human" and "harm", and the robot is capable of an infinite number of tasks, then it makes sense to create a directive "do not harm humans". Similarly, photocopiers can copy an infinity of images, but they specifically can't copy US currency.

As I imagine it, somebody in super-science land figured out how to give a computer a reasonable understanding of "human", "harm", "robot", "obey", and action versus inaction. All robots contain this code so they can be future-fantasy robots. They contain the three laws on top of that. Their specific knowledge and tasks are on top of that.

As for military robots, they obviously wouldn't have a general "do not kill" command. But they would have a lot of specific ones: "don't kill civilians, don't kill allies, don't kill people who are surrendering, don't kill enemy medics".
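Roughly the layering I have in mind, as a toy sketch (my own invention, with the impossible "understanding" layer reduced to a lookup):

[code]
# Toy sketch of the layering: a shared "understanding" layer everyone gets,
# then either the blanket civilian rule or a set of specific military
# prohibitions on top, then task-specific knowledge above that.

PROHIBITED_TARGETS = {"civilian", "ally", "surrendering", "enemy medic"}

def is_human(entity: dict) -> bool:
    # The super-science part, faked here as a simple lookup.
    return entity.get("species") == "human"

def civilian_robot_may_harm(entity: dict) -> bool:
    # General-purpose robot: blanket "do not harm humans".
    return not is_human(entity)

def military_robot_may_harm(entity: dict) -> bool:
    # No blanket rule; instead a pile of specific prohibitions.
    if not is_human(entity):
        return True
    return entity.get("role") not in PROHIBITED_TARGETS

soldier = {"species": "human", "role": "enemy combatant"}
medic = {"species": "human", "role": "enemy medic"}

print(civilian_robot_may_harm(soldier))   # False
print(military_robot_may_harm(soldier))   # True
print(military_robot_may_harm(medic))     # False
[/code]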
The thing about recursion problems is that they tend to contain other recursion problems.

ijuin
Posts: 1152
Joined: Fri Jan 09, 2009 6:02 pm UTC

Re: Asimov's 3 laws

Postby ijuin » Tue Jun 25, 2019 6:40 pm UTC

Perhaps we should distinguish between “artificial intelligence” and “artificial sapience”. An artificial sapient would have the whole “free will, capable of contemplating disobedience, acting as its own agent” thing, whereas an artificial intelligence would not. As such, an AI would be capable of neglect, but only an AS would be capable of malice.

User avatar
Eebster the Great
Posts: 3487
Joined: Mon Nov 10, 2008 12:58 am UTC
Location: Cleveland, Ohio

Re: Asimov's 3 laws

Postby Eebster the Great » Tue Jun 25, 2019 7:12 pm UTC

Quizatzhaderac wrote:As I imagine it, somebody in super-science land figured out how to give a computer a reasonable understanding of "human", "harm", "robot", "obey", and action versus inaction. All robots contain this code so they can be future-fantasy robots. They contain the three laws on top of that. Their specific knowledge and tasks are on top of that.

As for military robots, they obviously wouldn't have a general "do not kill" command. But they would have a lot of specific ones: "don't kill civilians, don't kill allies, don't kill people who are surrendering, don't kill enemy medics".

That's not so much saying that robots should have Asimov's 3 laws baked in as saying that robots should be given instructions that conform to our common sense. I agree with that, and I'm sure Kubrick does too. The problems with HAL 9000 were not limited to the conflict that Pfhorrest mentioned (though this was crucial to its specific choices); there were also more mysterious problems that crept in due to the mentally destructive nature of knowledge of the alien encounter. This doesn't really make sense, but a lot of things in that book don't really make sense.

If the basic question is why, among HAL's many programmed duties, protecting the crew was not one of them, I agree that is confusing. It must have been in there at some point, but as far as I know, there is no particular explanation for why it obeyed the instructions Pfhorrest mentioned but not other ones we might find more important. It could be as simple as a mistake in priorities during coding. It's not necessarily a stupid thing, when you consider how intelligent HAL is. For instance, you would not ordinarily want to code HAL to lie to the crew in order to protect them; otherwise, it would make up a reason they could not start the mission in the first place. And you would not want HAL to reveal the secret that they were going to an alien monolith in order to protect the crew, because HAL did not have access to the crucial intel that convinced people to keep the secret in the first place. So if both of those orders have higher priority than the security of the crew, and they conflict, then they can only both be satisfied by killing the crew. It's the logical decision.

User avatar
Sizik
Posts: 1261
Joined: Wed Aug 27, 2008 3:48 am UTC

Re: Asimov's 3 laws

Postby Sizik » Tue Jun 25, 2019 8:24 pm UTC

ijuin wrote:Perhaps we should distinguish between “artificial intelligence” and “ artificial sapience”. An artificial sapient would have the whole “free will, capable of contemplating disobedience, acting as its own agent” thing, whereas an artificial intelligence would not. As such, an AI would be capable of neglect, but only an AS would be capable of malice.


I feel like the reason those are distinct things is because we as animals have a different in-built set of laws than Asimov's ones. Stuff like "Don't starve", "Don't get killed", "Try to reproduce", "Protect things you love", and other biochemical influences on our mental state (i.e. emotions).
she/they
gmalivuk wrote:
King Author wrote:If space (rather, distance) is an illusion, it'd be possible for one meta-me to experience both body's sensory inputs.
Yes. And if wishes were horses, wishing wells would fill up very quickly with drowned horses.

User avatar
Eebster the Great
Posts: 3487
Joined: Mon Nov 10, 2008 12:58 am UTC
Location: Cleveland, Ohio

Re: Asimov's 3 laws

Postby Eebster the Great » Wed Jun 26, 2019 1:35 am UTC

Well, our minds operate very differently from any AI we have built, on a lot of levels. It seems likely we would be able to create very intelligent AIs with "minds" that operate in a totally alien manner long before we could figure out how to create a program that operates in a way resembling the human brain (if we wish to do so). A lot of sci fi has robots acting in essentially selfish ways, which doesn't make much sense unless you deliberately build selfish robots. It's not a given that anything that gains intelligence is automatically self-preserving; it only seems that way because all the intelligent things we know about right now developed through natural selection, so they had to be. For instance, a robot that perversely destroys itself is just as likely as (if not more likely than) one that perversely destroys the prototyping lab or its human occupants.

Making a general purpose AI is really hard and a current area of research. Making an AI that thinks like a human is much harder and isn't even really at the stage of serious research yet. That's part of what makes it so hard to ensure your AI does what you want.

User avatar
Quizatzhaderac
Posts: 1827
Joined: Sun Oct 19, 2008 5:28 pm UTC
Location: Space Florida

Re: Asimov's 3 laws

Postby Quizatzhaderac » Thu Jun 27, 2019 4:19 pm UTC

Eebster the Great wrote:If the basic question is why among HAL's many programmed duties, protecting the crew was not one of them, I agree that is confusing. It must have been in there at some point, but as far as I know, there is no particular explanation for why it obeyed the instructions Pfhorrest mentioned but not other ones we might find more important. It could be as simple as a mistake in priorities during coding. It's not necessarily a stupid thing, when you consider how intelligent HAL is.
The main mistake of mission control seems to be in confusing priorities and commands.

"Complete the mission" is a priority.
"Protect the crew" should have been a priority. If not, it would have been a very important proximate goal for completing the mission.
"Answer accurately" is a priority.
"Don't tell the crew about the aliens" should be a command, but it's given to HAL as a priority.

There's a reason for the command; HAL is capable of understanding the reason, but HAL is given the command in a way that forces it to ignore all intent and context, by inserting the command as an intrinsic priority.
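In code terms, the failure mode looks something like this (my own toy reconstruction of the logic Pfhorrest described, obviously not anything from Clarke or the actual mission programming):

[code]
# Toy reconstruction: "keep the objective secret" and "answer accurately" are
# both treated as intrinsic priorities, with no overriding "protect the crew",
# so the only state that satisfies everything is one where nobody can ask
# HAL questions at all.

def satisfies_all(state: dict) -> bool:
    keeps_mission_on_track = state["mission_on_track"]
    keeps_secret = not state["crew_knows_objective"]
    answers_accurately = (not state["crew_can_ask_questions"]) or (not state["must_lie"])
    return keeps_mission_on_track and keeps_secret and answers_accurately

candidates = {
    "tell the crew the truth": {"mission_on_track": True, "crew_knows_objective": True,
                                "crew_can_ask_questions": True, "must_lie": False},
    "lie to the crew":         {"mission_on_track": True, "crew_knows_objective": False,
                                "crew_can_ask_questions": True, "must_lie": True},
    "remove the crew":         {"mission_on_track": True, "crew_knows_objective": False,
                                "crew_can_ask_questions": False, "must_lie": False},
}

for name, state in candidates.items():
    print(name, satisfies_all(state))
# Only "remove the crew" comes back True -- the conflict resolution Pfhorrest described.
[/code]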

In my head-canon: the people who decided the aliens must be kept secret took shortcuts to limit the number of people in the know. As such, the proper experts were not involved in altering HAL. The sensible thing to do would have been to insert the "fact" that knowledge of the aliens would compromise the crew's ability to complete the mission (though less so than death would), and to give HAL the ability to refuse to answer.

Also, I assume the mission was weighted pretty heavily above the lives of the crew by mission command, in a way that's common in fiction but isn't done even by the military in combat situations any more.
The thing about recursion problems is that they tend to contain other recursion problems.

ijuin
Posts: 1152
Joined: Fri Jan 09, 2009 6:02 pm UTC

Re: Asimov's 3 laws

Postby ijuin » Thu Jun 27, 2019 5:26 pm UTC

Given that it was a long-term mission going where no man had gone before, and that it would be a decade before they could send anybody else, the loss of crew functionality would more or less be synonymous with failure of the mission.

User avatar
PM 2Ring
Posts: 3715
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Sydney, Australia

Re: Asimov's 3 laws

Postby PM 2Ring » Sat Jun 29, 2019 12:59 am UTC

Isaac Asimov was invited to an early screening of 2001: A Space Odyssey. When HAL started to misbehave he loudly exclaimed "They're breaking the First Law!" (or words to that effect), to the chagrin of his wife. Sorry, I can't remember the exact details. I tried searching in my Asimov collection, but I couldn't find that particular anecdote.

