ethics of artificial suffering

For the discussion of the sciences. Physics problems, chemistry equations, biology weirdness, it all goes here.

Moderators: gmalivuk, Moderators General, Prelates

brötchen
Posts: 112
Joined: Mon Aug 31, 2009 1:45 pm UTC

ethics of artificial suffering

Postby brötchen » Sat Apr 24, 2010 3:25 pm UTC

lately i've been thinking a lot about the possibilities of powerful AI systems, and also about the related ethical problems.

how does a machine qualify for "human rights"? shouldn't a machine simulating a full human being have the same rights as a human being?
are we allowed to cancel an AI system, or would that be murder?
What about medical trials on simulated humans?

What are your thoughts on the topic ?

User avatar
'x7'
Posts: 30
Joined: Tue Apr 20, 2010 10:45 pm UTC

Re: ethics of artificial suffering

Postby 'x7' » Sat Apr 24, 2010 7:07 pm UTC

Hmm... Well, machines shouldn't have human emotions anyway... At least if I'm understanding this correctly.
A neutron goes into a bar and asks the bartender, "How much for a beer?"
The bartender replies, "For you, no charge."

User avatar
skeptical scientist
closed-minded spiritualist
Posts: 6142
Joined: Tue Nov 28, 2006 6:09 am UTC
Location: San Francisco

Re: ethics of artificial suffering

Postby skeptical scientist » Sat Apr 24, 2010 8:15 pm UTC

brötchen wrote:shouldn't a machine simulating a full human being have the same rights as a human being ?

Yes.
brötchen wrote:are we allowed to cancel an ai system or would that be murder ?

Assuming we save the contents of memory, that would seem more like putting a person into suspended animation than murdering someone. Short term suspensions may be necessary for technological reasons, but long-term or permanent suspensions would certainly raise thorny ethical considerations. Unplugging such a simulation and then scrambling the contents of its memory to permanently destroy it should be treated as murder.

brötchen wrote:What about medical trails on simulated humans ?

I suspect most trials could simulate human physiology without simulating human neurology (beyond brain stem activity), and so there would be no ethical problems. However, if a medication required a full simulation (e.g. because it's supposed to treat mental disorders rather than physical illness), then a simulated trial would raise ethical issues. On the other hand, it may still have fewer ethical issues than a human study, because you may be able to do things with a simulation that you couldn't do in a real study, like instantly reversing the effects of a medication if it proved problematic. Still, you would need to get the informed consent of the simulated subjects to perform the study.

Here's an amusing quote from AI researcher Eliezer Yudkowsky, which is relevant to this topic. It's on the ethics of using Solomonoff induction to make predictions of the future. Solomonoff induction works by feeding all previous observations into a hypercomputer, which would then go through all possible computer algorithms, find the simplest one which predicts those observations, and spit out whatever future that algorithm predicts.
Solomonoff induction, taken literally, would create countably infinitely many sentient beings, trapped inside the computations. All possible computable sentient beings, in fact. Which scarcely seems ethical. So let us be glad this is only a formalism.
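The "simplest hypothesis consistent with the data" core of this is easy to toy with, even though real Solomonoff induction is uncomputable. Here's a minimal sketch; the pattern-repetition hypothesis class and the function name are my own stand-ins for "all possible computer algorithms" (and, reassuringly, this version creates no sentient beings):

```python
def predict_next(obs: str) -> str:
    """Toy Solomonoff-style predictor: the hypotheses are
    "repeat this pattern forever"; pick the shortest pattern
    consistent with the observations and predict the next symbol."""
    for length in range(1, len(obs) + 1):
        pattern = obs[:length]
        # Extend the pattern until it covers one symbol past the data.
        candidate = pattern * (len(obs) // length + 2)
        if candidate.startswith(obs):  # consistent with all observations
            return candidate[len(obs)]
    raise ValueError("need at least one observation")

print(predict_next("010101"))  # shortest consistent pattern is "01", so: 0
```

The ordering by pattern length plays the role of the simplicity prior: the first (shortest) consistent hypothesis wins.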
I'm looking forward to the day when the SNES emulator on my computer works by emulating the elementary particles in an actual, physical box with Nintendo stamped on the side.

"With math, all things are possible." —Rebecca Watson

User avatar
Charlie!
Posts: 2035
Joined: Sat Jan 12, 2008 8:20 pm UTC

Re: ethics of artificial suffering

Postby Charlie! » Sat Apr 24, 2010 11:45 pm UTC

skeptical scientist wrote:
brötchen wrote:shouldn't a machine simulating a full human being have the same rights as a human being ?

Yes.

I disagree, or at least I disagree that the answer is obvious.

Why would a simulated human have rights? Because it's a human by some mental definition, and all humans have to have rights, right?

But what about a different approach: the natural rights perspective would claim that, however you define a simulated human, it would not have rights by its nature - no physical human is born a slave by their nature, but an AI could be, so there is no natural rights argument against slavery for AIs, for example.

A third consideration: the possibilities available to AI are so different that applying human morality to them is at best a limiting idea. For example, say you generated an AI by some deterministic process, "killed it," and then regenerated it. Have you committed murder? I would say that the answer must be yes. It's analogous to generating 2 AIs and then killing one. But this certainly does go against common sense. So then you might make some change of definition that it is the non-repeatable outside experiences that make an AI "human." But that's at least as bad! I'll leave it there to avoid ranting about a mere hypothetical definition.
Some people tell me I laugh too much. To them I say, "ha ha ha!"

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Sun Apr 25, 2010 1:29 am UTC

I think ethics is just as limited as any theory, and what do you do when you step outside a theory's limits? You stop applying it there. Just like your "murder is a no-no" does not apply in war, in self-defense situations, or when someone is killed by court order.

Birk
Posts: 236
Joined: Tue May 19, 2009 5:08 pm UTC

Re: ethics of artificial suffering

Postby Birk » Sun Apr 25, 2010 1:35 am UTC

makc wrote:I think ethics is just as limited as any theory, and what do you do when you step outside a theory's limits? You stop applying it there. Just like your "murder is a no-no" does not apply in war, in self-defense situations, or when someone is killed by court order.



I think when a theory is no longer applicable to a situation people should work to see if they can refine or develop the theory to deal with new challenges. It is not always possible but should be attempted. For instance, there are "ethical guidelines" so to speak for each of your example situations.

User avatar
skeptical scientist
closed-minded spiritualist
Posts: 6142
Joined: Tue Nov 28, 2006 6:09 am UTC
Location: San Francisco

Re: ethics of artificial suffering

Postby skeptical scientist » Sun Apr 25, 2010 3:18 am UTC

Would your life be in any way different right now if you were a simulation of yourself, living in a simulated world identical to this one? If you were such a being, would you think you should still have rights?

These questions are my basis for believing that simulated humans should be entitled to the same human rights as flesh-and-blood humans, although technological limitations and new possibilities opened up by the technology affect how those rights translate into duties owed.
I'm looking forward to the day when the SNES emulator on my computer works by emulating the elementary particles in an actual, physical box with Nintendo stamped on the side.

"With math, all things are possible." —Rebecca Watson

Technical Ben
Posts: 2986
Joined: Tue May 27, 2008 10:42 pm UTC

Re: ethics of artificial suffering

Postby Technical Ben » Sun Apr 25, 2010 3:39 pm UTC

With an AI you could (theoretically) reset its memory and start over, or, as said, rewind the effects. This would, from the AI's perspective, mean it never suffered in the first place.
So the fact that it suffered is of no consequence to itself. You are left, however, with the ethical problem of those doing the testing. Should they be allowed to cause suffering? I see it as similar to entertainment. If you get too wrapped up in simulated killings, for example, are you more likely to do the same to real people?
[Edit] Although, suggesting that book characters be given rights is ridiculous. They are simulated, right? But I guess we will meet definite limits or definable boundaries with technology.
It's all physics and stamp collecting.
It's not a particle or a wave. It's just an exchange.

User avatar
kernelpanic
Posts: 891
Joined: Tue Oct 28, 2008 1:26 am UTC
Location: 1.6180339x10^18 attoparsecs from Earth

Re: ethics of artificial suffering

Postby kernelpanic » Sun Apr 25, 2010 5:14 pm UTC

makc wrote:I think ethics is just as limited as any theory, and what do you do when you step outside a theory's limits? You stop applying it there. Just like your "murder is a no-no" does not apply in war, in self-defense situations, or when someone is killed by court order.

But then it isn't murder. Murder is intentional homicide.

I think that, because humans are only a bunch of chemical reactions, it doesn't matter whether they happen with real or simulated atoms.
I'm not disorganized. My room has a high entropy.
Bhelliom wrote:Don't forget that the cat probably knows EXACTLY what it is doing and is most likely just screwing with you. You know, for CAT SCIENCE!

Image

User avatar
skeptical scientist
closed-minded spiritualist
Posts: 6142
Joined: Tue Nov 28, 2006 6:09 am UTC
Location: San Francisco

Re: ethics of artificial suffering

Postby skeptical scientist » Sun Apr 25, 2010 6:03 pm UTC

Technical Ben wrote:With an AI you could (theoretically) reset its memory and start over, or, as said, rewind the effects. This would, from the AI's perspective, mean it never suffered in the first place.
So the fact that it suffered is of no consequence to itself.

I don't think this is true. The AI may not remember having suffered, but it still had to endure the suffering when it happened. Suppose I invent a medication that prevents permanent memories from being formed. Then I give you the pill and torture you for eight hours. Does the fact that you don't remember being tortured afterwards mean what I did was okay?

You are left however with the ethical problem of those doing the testing. Should they be allowed to cause suffering? I see it similar to entertainment. If you get too wrapped up in simulated killings for example, are you more likely to do the same to real people?

No, that's not the problem at all. The simulated people are real people, if not flesh-and-blood ones, since they have experiences just as flesh-and-blood humans do, and any harm done to them is just as bad as if it were done to a flesh-and-blood individual.

[Edit] Although, suggesting that book characters be given rights is ridiculous. They are simulated right? But I guess we will meet definite limits or definable boundaries with technology.

That's not what anyone is talking about. We're talking about computer simulations which are actually sentient human-like AIs. A character in a book is not actually sentient, so the same concerns don't apply.
I'm looking forward to the day when the SNES emulator on my computer works by emulating the elementary particles in an actual, physical box with Nintendo stamped on the side.

"With math, all things are possible." —Rebecca Watson

GeorgeH
Posts: 527
Joined: Mon Aug 17, 2009 6:36 am UTC

Re: ethics of artificial suffering

Postby GeorgeH » Sun Apr 25, 2010 6:12 pm UTC

skeptical scientist wrote:Would your life be in any way different right now if you were a simulation of yourself, living in a simulated world identical to this one? If you were such a being, would you think you should still have rights?


As far as rights are concerned, yes, it would be. What rights I have are granted by my peers, not by the creators/controllers of my universe (flowery prose in legal documents notwithstanding). If we granted simulated humans the same rights as "real" humans, we would be unable to create simulated humans in the first place, simply because we would have to outlaw "acts of god" in the simulated world.

User avatar
skeptical scientist
closed-minded spiritualist
Posts: 6142
Joined: Tue Nov 28, 2006 6:09 am UTC
Location: San Francisco

Re: ethics of artificial suffering

Postby skeptical scientist » Sun Apr 25, 2010 7:17 pm UTC

GeorgeH wrote:If we granted simulated humans the same rights as "real" humans we would be unable to create simulated humans in the first place, simply because we would have to outlaw "acts of god" in the simulated world.

Why would it be any more unethical to create simulated humans (or human-level intelligences) than it would be to have children? Yes, you would have the power to mess with the universe they inhabit if you are simulating an entire universe for them, but as long as that power isn't abused, I don't see the problem.
I'm looking forward to the day when the SNES emulator on my computer works by emulating the elementary particles in an actual, physical box with Nintendo stamped on the side.

"With math, all things are possible." —Rebecca Watson

mouseposture
Posts: 42
Joined: Tue Sep 15, 2009 2:42 am UTC

Re: ethics of artificial suffering

Postby mouseposture » Sun Apr 25, 2010 11:39 pm UTC

skeptical scientist wrote:Why would it be any more unethical to create simulated humans (or human-level intelligences) than it would be to have children? Yes, you would have the power to mess with the universe they inhabit if you are simulating an entire universe for them, but as long as that power isn't abused, I don't see the problem.


That just begs for the rejoinder "The power will be abused, and that's the problem (if you're a consequentialist)"

However, you've missed GeorgeH's point. Any act of god, even a non-abusive one, would be a violation of the AIs' rights. Slavery -- not abusive slave masters -- is a violation of human rights. Perhaps simulated universes which ran autonomously, without divine intervention, would be ethical; though, if I understand GeorgeH's position, even the act of creation might be prohibited.

Now GeorgeH seems to be basing his ethics on preserving rights. If you're also concerned about preventing suffering, then you're even prohibited from creating an autonomous simulation if it's deterministic and AIs' suffering is a predictable consequence of initial conditions. The child-bearing argument isn't convincing here, because it may be immoral to have a child if that's likely to result in the child's suffering. This is the dilemma faced by prospective parents who carry genetic diseases.

User avatar
tastelikecoke
Posts: 1208
Joined: Mon Feb 01, 2010 7:58 am UTC
Location: Antipode of Brazil
Contact:

Re: ethics of artificial suffering

Postby tastelikecoke » Mon Apr 26, 2010 12:52 am UTC

Shouldn't we treat artificial life the way we agonize over animal cruelty?

I mean, some people are seeing this as a gradient from human to machine, but what about experiments on tonnes of monkeys?

There are already robots with limited humanoid abilities but brains not even really equal to an insect's, and we can just dismantle them. If we eventually get to more advanced robots, we could treat them in progressive levels.

And with that said, officials will be strict with regulations if simulations of humans come about.

GeorgeH
Posts: 527
Joined: Mon Aug 17, 2009 6:36 am UTC

Re: ethics of artificial suffering

Postby GeorgeH » Mon Apr 26, 2010 5:13 am UTC

Mouseposture pointed out that I failed to make my point sufficiently clear, so I'll rephrase and expand.

To make a complete simulated human, we would also need to create some kind of simulated universe for them to interact with. Further, any sufficiently complete simulated universe would have to include the chance that the simulated human would suffer harm of some kind. Let's say the chance that our simulated human will suffer harm is X%, with 0 < X <= 100.

Because we could run the simulation as many times as we want, we are guaranteed to create at least one simulated human that will suffer. Therefore there is no substantive difference between X = 0.01%, X = 27%, and X = 82%, so we might as well make X = 100%.

With X = 100%, we know that if we simulate a human we will cause them harm. Deliberately causing a "real" human to suffer harm is generally considered a violation of their rights, so if we grant simulated humans the same rights as "real" humans, we will be completely unable to run the simulation in the first place.
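The step from "any X > 0" to "guaranteed" is just the usual independence calculation: the chance that at least one of n independent runs contains harm is 1 - (1 - X)^n, which tends to 1 as n grows. A quick sketch (function name mine, and assuming runs are independent):

```python
def p_at_least_one(x: float, runs: int) -> float:
    """Chance that at least one of `runs` independent simulations
    contains harm, given a per-run harm probability x (as a fraction)."""
    return 1.0 - (1.0 - x) ** runs

# Even a one-in-ten-thousand per-run chance approaches certainty:
print(p_at_least_one(0.0001, 100_000))  # roughly 0.99995
```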


skeptical scientist wrote:Why would it be any more unethical to create simulated humans (or human-level intelligences) than it would be to have children?

In general having a child isn't comparable to simulating a human simply because we have no control over the real universe, but we do have control over the simulated one; everything in the real world that could happen to the child is God's fault (assuming the standard omniscient and omnipotent creative deity) but everything in the simulated world is our fault.

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Mon Apr 26, 2010 5:40 am UTC

kernelpanic wrote:But then it isn't murder. Murder is intentional homicide.
Wordplay is a weak argument :) A.k.a. "you know what I mean".
GeorgeH wrote:What rights I have are granted by my peers, not the creators/controllers of my universe
Good point, therefore simulated humans should seek rights from other simulated humans :D

brötchen
Posts: 112
Joined: Mon Aug 31, 2009 1:45 pm UTC

Re: ethics of artificial suffering

Postby brötchen » Mon Apr 26, 2010 6:45 am UTC

What about a simulated human without a simulated universe? Just supply the simulation with means of interacting with the real world. I don't think it changes anything about the rights of the simulation whether or not its environment is a simulation as well, and I don't really understand why it would diminish the rights of the simulation if there was a creator... It thinks like a human, it behaves like a human, and the way it gets its input (from the real world or from a simulation) doesn't change that, so what's the point of treating it differently?

Again, excuse my bad English

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Mon Apr 26, 2010 7:16 am UTC

brötchen wrote:what about a simulated human without simulated universe ?
what about a non-simulated human without its environment? Imagine yourself suddenly suffocating in the emptiest place of our universe, Douglas Adams style, and then you hear a deep voice in your head: "we hold these truths to be self-evident... [a list of your rights goes here]". Doesn't make much sense, does it?

brötchen wrote:i don't really understand why it would diminish the rights of the simulation if there was a creator
let's see how this argument will work for you on judgement day :D "Oh no, you have no right to send me to hell, hell NO! I am going to sue you in a court of law!" Ha ha, just imagine that. Taking the "rights" concept that far really is not a smart thing to do.

brötchen
Posts: 112
Joined: Mon Aug 31, 2009 1:45 pm UTC

Re: ethics of artificial suffering

Postby brötchen » Mon Apr 26, 2010 7:33 am UTC

makc wrote:
brötchen wrote:what about a simulated human without simulated universe ?
what about non-simulated human without its environment? imagine yourself suddenly suffocating in the emptiest place of our universe, Douglas Adams style. and then you hear deep voice in your head: "we hold these truths to be self evident.... [a list of your rights goes here]" doesn't make much sense does it.

brötchen wrote:i don't really understand why it would diminish the rights of the simulation if there was a creator
let's see how this argument will work for you on the judgement day :D "oh no, you have no right to send me to hell, hell NO! I am going to sue you in court of law!" ha ha, just imagine that. taking "rights" concept that far really is not smart thing to do.


First of all, don't quote out of context...
brötchen wrote:what about a simulated human without simulated universe ? just supply the simulation with means of interacting with the real world.

Also, I couldn't really sue the creator of my universe, but he would probably be sued by others of his kind... just like a simulated human running on a computer I own couldn't sue me, but I would still get in trouble for being unethical to the simulation.

User avatar
Shivahn
Posts: 2200
Joined: Tue Jan 06, 2009 6:17 am UTC

Re: ethics of artificial suffering

Postby Shivahn » Mon Apr 26, 2010 7:39 am UTC

The "being created" thing doesn't diminish the rights any more than being born in Iran rather than the US diminishes your rights.

In other words, yes, it diminishes what rights you will receive, concretely, but generally, abstractly, it doesn't diminish what you're entitled to. A god or a state violating your inherent human/natural rights (from either of those perspectives) isn't somehow voiding them from existence, just violating them.

User avatar
Josephine
Posts: 2142
Joined: Wed Apr 08, 2009 5:53 am UTC

Re: ethics of artificial suffering

Postby Josephine » Mon Apr 26, 2010 7:43 am UTC

Here's a slightly different question: does a fully simulated human brain in an android body have the same rights as a human?
Belial wrote:Listen, what I'm saying is that he committed a felony with a zoo animal.

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Mon Apr 26, 2010 7:53 am UTC

brötchen wrote:also i couldn't really sue the creator of my universe but he would probably be sued by others of his kind.... just like a simulated human running on a computer i own couldn't sue me but i would still get in trouble for being unethical to the simulation
cool, so the god's dad will kick god's ass for burning people in hell! nice to know that.

seriously though, if I had enough power to run a human simulation, I would also have enough power to not "get in trouble for being unethical".

User avatar
Josephine
Posts: 2142
Joined: Wed Apr 08, 2009 5:53 am UTC

Re: ethics of artificial suffering

Postby Josephine » Mon Apr 26, 2010 8:01 am UTC

makc wrote:seriously though, if I had enough power to run human simulation, I would also have enough power to not "get in trouble for being un ethical".

Not in 20 years you wouldn't. That amount of computing power will be commonplace by then. Talking about ethics here is really only applicable to conditions where brain simulations are commonplace.
Belial wrote:Listen, what I'm saying is that he committed a felony with a zoo animal.

brötchen
Posts: 112
Joined: Mon Aug 31, 2009 1:45 pm UTC

Re: ethics of artificial suffering

Postby brötchen » Mon Apr 26, 2010 8:04 am UTC

makc wrote:
brötchen wrote:also i couldnt really sue the creator of my universe but he would probably be sued by others of his kind.... just like a simulated human running on a computer i own couldnt sue me but i would still get in trouble for being un ethical to the simulation
cool, so the god's dad will kick god's ass for burning people in hell! nice to know that.

seriously though, if I had enough power to run human simulation, I would also have enough power to not "get in trouble for being un ethical".


What the F? Are you religious or something? ...
Also, violating the rights of the simulation and getting away with it is not the same as if the simulation didn't have rights to begin with... and I think it does have rights.
Also, why do you think you would have to be powerful to run a human simulation? Maybe in 50 years you will be able to run it on your iPhone 45 GS or your HTC holyfuck running Android "peanut butter jelly" or whatever kind of handheld we will have by then :lol:

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Mon Apr 26, 2010 8:17 am UTC

brötchen wrote:also violating the rights of the simulation and getting away with it is not the same as if the simulation didn't have rights to begin with ... and i think it does have rights
The first part of this is saying that an invisible, undetectable unicorn is not the same as no unicorn, and the second part is just what you think. I think the simulation does not have rights. Why should the simulator be punished based on your opinion, or why should he be allowed to do whatever based on mine?

Josephine wrote:not in 20 years you wouldn't. That amount of computing power will be commonplace by then.
brötchen wrote:why do you think you would have to be powerfull to run a human simulation ? Maybe in 50 years you will be able to run it on your iphone 45 GS
This so reminds me of that scene from "Back to the Future" where Doc says to Marty something similar about buying plutonium in the '80s... let's just say I did not have computational power in mind.

Technical Ben
Posts: 2986
Joined: Tue May 27, 2008 10:42 pm UTC

Re: ethics of artificial suffering

Postby Technical Ben » Mon Apr 26, 2010 9:08 am UTC

skeptical scientist wrote:
Spoiler:
Technical Ben wrote:With an AI you could (theoretically) reset it's memory and start, or as said, rewind the effects. This would, from the AI's perspective mean it never suffered in the first place.
So the fact it suffered is of no consequence to itself.

I don't think this is true. The AI may not remember having suffered, but it still had to endure the suffering when it happened. Suppose I invent a medication that prevents permanent memories from being formed. Then I give you the pill and torture you for eight hours. Does the fact that you don't remember being tortured afterwards mean what I did was okay?

You are left however with the ethical problem of those doing the testing. Should they be allowed to cause suffering? I see it similar to entertainment. If you get too wrapped up in simulated killings for example, are you more likely to do the same to real people?

No, that's not the problem at all. The simulated people are real people, if not flesh-and-blood ones, since they have experiences just as flesh-and-blood humans do, and any harm done to them is just as bad as if it were done to a flesh-and-blood individual.

[Edit] Although, suggesting that book characters be given rights is ridiculous. They are simulated right? But I guess we will meet definite limits or definable boundaries with technology.

That's not what anyone is talking about. We're talking about computer simulations which are actually sentient human-like AIs. A character in a book is not actually sentient, so the same concerns don't apply.

I mentioned your exact first point, and agreed with it :D . From the perspective of the person who has had their memory (or, in this case, entire existence) reset, nothing happened. However, those left, such as yourself, are still accountable for their actions. It's similar to time travel: we are, for the AI, sending it back in time, undoing anything done to it, as if it never happened. This would undo unintentional injury, but we are still responsible for intentional acts.
[edit] Oh, and I cannot make the assumption that the characters/AI "are" people. They will always be "written" characters, just like in a book, as far as I can see. Can we make a sentient lifeform? That's a massive ask. Can we make a machine? Well, we do already. We cannot give machines (cars, spanners, clocks) rights, can we? At what point does a program become a person? That's where I see a definite limit appearing. Before we get to the trouble of deciding "is the program alive?" we will hit a roadblock in technology or programming preventing us from getting that far anyway.
It's all physics and stamp collecting.
It's not a particle or a wave. It's just an exchange.

Technical Ben
Posts: 2986
Joined: Tue May 27, 2008 10:42 pm UTC

Re: ethics of artificial suffering

Postby Technical Ben » Mon Apr 26, 2010 9:16 am UTC

Technical Ben wrote:(see my previous post, quoted in full above)


[second edit] And GeorgeH, that argument does not stand. Just by existing yourself, you have a greater than zero chance of causing harm. Even your death could cause harm, or doing nothing (just sitting there). These are all risks we are willing to take, and it's reasonable not to hold someone to account for having the cheek to exist!
It's all physics and stamp collecting.
It's not a particle or a wave. It's just an exchange.

User avatar
skeptical scientist
closed-minded spiritualist
Posts: 6142
Joined: Tue Nov 28, 2006 6:09 am UTC
Location: San Francisco

Re: ethics of artificial suffering

Postby skeptical scientist » Mon Apr 26, 2010 11:03 am UTC

Technical Ben wrote:We are, for the AI, sending it back in time, undoing anything done to it. As if it never happened. This would undo unintentional injury, but we are still responsible for intentional acts.

You wrote,
With an AI you could (theoretically) reset its memory and start over, or, as said, rewind the effects. This would, from the AI's perspective, mean it never suffered in the first place.
So the fact that it suffered is of no consequence to itself.

Where I disagree is that, while the suffering may be of no consequence to the future self who doesn't remember it, it's still of consequence to the past self who felt it at the time, and this is enough to make that behavior unethical. With respect to the intentional/unintentional injury, an unintentional injury is still a wrong done if it resulted from negligence. If the injury was completely unforeseeable, then I agree that we are not responsible for it, since we can only be held responsible for the foreseeable consequences of our actions. However, in this case, I don't see how the ability to reset time (from the AI's perspective) makes any difference. We are still not responsible for unforeseeable harm, whether we can fix it or not, and we are still responsible for foreseeable harm, even if we can erase the wronged party's memory of the wrong done. (In fact, messing with the wronged party's memory without their consent is yet another wrong done!)

Technical Ben wrote:Before we get the trouble of deciding "is the program alive?" we will hit a roadblock in technology or programming preventing us from getting that far anyway.

What is your evidence for this? I'll accept that if we never create true artificial intelligence, then this debate is moot, but it seems entirely possible that we will.




GeorgeH wrote:With X=100%, we know that if we simulate a human we will cause them harm. Deliberately causing a "real" human to suffer harm is generally considered to be a violation of their rights, so if we grant simulated humans the same rights as "real" humans we will be completely unable to run the simulation in the first place.

In general having a child isn't comparable to simulating a human simply because we have no control over the real universe, but we do have control over the simulated one; everything in the real world that could happen to the child is God's fault (assuming the standard omniscient and omnipotent creative deity) but everything in the simulated world is our fault.

I think you are right. So I suppose my conclusion is that it is unethical to simulate humans, period. It may be ethical to create artificial intelligences, but only if they are nonsentient (they think but don't feel). I don't really know enough about AI to know if/how that would work (in fact, it may be that nobody does), but creating AIs which can feel, particularly if they can feel anything like human emotions, seems not only like a bad idea (especially if they have superhuman intelligence/speed of thought) but unethical to boot.
I'm looking forward to the day when the SNES emulator on my computer works by emulating the elementary particles in an actual, physical box with Nintendo stamped on the side.

"With math, all things are possible." —Rebecca Watson

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Mon Apr 26, 2010 11:12 am UTC

fuuuu... I just wrote long reply to this thread, and someone meanwhile edited or deleted last post, and now my text is gone :/ I feel like simulation that has been messed with.

Technical Ben
Posts: 2986
Joined: Tue May 27, 2008 10:42 pm UTC

Re: ethics of artificial suffering

Postby Technical Ben » Mon Apr 26, 2010 12:06 pm UTC

I suppose what I was getting at is that, just as there are limits in the physical world, there are some in programming as well. We are never going to travel faster than light. This is a fact. There is an upper limit. So, applying that to AI, or to a complete human simulation: where is the limit? Well, to simulate all the connections, with all the chemical weights, electrical impulses, and external factors, you would need a computer that practically IS a human brain. I can see us building a set of rules that tells us how an illness will affect a person. But does running that set of rules on a massive processor farm make it alive? Like with the book comparison: all our programming could be written into a book and run by hand (like the xkcd stones comic ;)). Does this make it a living person? If you move that code from the book to a computer, what then? Is it not still a book?
Why am I sceptical about simulating a person or a brain? "50–100 billion (10^11) neurons, of which about 10 billion (10^10) are cortical pyramidal cells. These cells pass signals to each other via as many as 1000 trillion (10^15) synaptic connections.[2]" The brain has so many connections and synapses that you're going to have a margin of error. It's like trying to transport someone Star Trek style: you're going to hit something like the Heisenberg uncertainty principle. We just cannot get a perfect copy of something that complex. At most we will get an ape brain. But then it's the ethics of whether we should do tests on animals. And if you can simulate that much detail, why not just simulate the part you are studying?
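For a sense of scale, a back-of-the-envelope estimate (my own numbers: the byte-per-synapse figure is a pure assumption, and a chemically faithful model would need far more than a single weight per connection):

```python
synapses = 1e15        # ~10^15 synaptic connections, per the figure quoted above
bytes_per_synapse = 4  # assume one 4-byte weight per connection (optimistic)

storage_bytes = synapses * bytes_per_synapse
print(f"{storage_bytes / 1e15:.0f} PB just to store the connection weights")
# → 4 PB just to store the connection weights
```

Petabytes for a bare connectivity matrix, before any dynamics run at all, which is roughly the "practically IS a human brain" point in hardware terms.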

I suppose it's a self-answering question. If it's assumed to be a perfect simulation of a human, in AI form, the question is not "should we allow artificial suffering" but "should we allow suffering", because there is no definable way to differentiate between the two. Most would agree we should not cause suffering. However, if it's still just a simulation, just a series of code, or just a character in a game/movie/book (but simulated via AI etc.), then we are asking "should we find suffering entertaining?"
It's all physics and stamp collecting.
It's not a particle or a wave. It's just an exchange.

User avatar
idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: ethics of artificial suffering

Postby idobox » Mon Apr 26, 2010 12:48 pm UTC

The computational power will be reached some day. A quantum computer simulating the brain could be the size of an actual brain, and probably even smaller if we can skip the molecular-level simulation.
As to Technical Ben's point about the difficulty of copying a real person, I don't see it as a problem. There are about 7 billion human brains on Earth right now, all different, yet all human. Simulating a human brain doesn't imply simulating someone's brain; you could just build a brand-new brain by following whatever rules make a brain human and not ape.

On the pure ethics part: we grant rights to animals now, but not so long ago, children and women were not recognized as having the same rights as adult males, and only a few centuries ago, whole "races" were denied them (I don't know the Basically Decent word in English).
The definition of what is human, and of what deserves rights, changes over time, usually to encompass more and more people. As long as a computer simulation has feelings and intelligence and passes the Turing test, I don't see how we could consider it less human than, for example, a tetraplegic person (fully conscious, but unable to move).
I guess simulating a human in a universe could be considered abusive, the same way raising a child in your basement for an experiment would be. This would be especially true if the simulated universe were substantially different from our own.
The android-with-simulated-brain situation is a bit trickier. It would be a bit like giving birth to a child knowing it's going to be crippled or otherwise handicapped or ill. That debate exists today.

The main problem I see arises during the conception phase. Before the fully functioning simulated brain exists, there will be a number of failed attempts and intermediary steps that could be considered the equivalent of growing diseased brains in a petri dish.
If there is no answer, there is no question. If there is no solution, there is no problem.

Waffles to space = 100% pure WIN.

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Mon Apr 26, 2010 1:02 pm UTC

idobox wrote:...we grant rights to animals right now...
sure sign we are going too far with trying to extend ethics beyond its limits, btw; if this continues, everyone will be forced into vegetarianism. it's even worse for our children: as soon as they start granting rights to plants, the only allowed food will be some synthetic chemicals.

if you allow me this pun, a plant has more rights to have rights than your computer program.

User avatar
skeptical scientist
closed-minded spiritualist
Posts: 6142
Joined: Tue Nov 28, 2006 6:09 am UTC
Location: San Francisco

Re: ethics of artificial suffering

Postby skeptical scientist » Mon Apr 26, 2010 1:28 pm UTC

makc wrote:
idobox wrote:...we grant rights to animals right now...
sure sign we are going too far with trying to extend ethics beyond its limits, btw; if this continues, everyone will be forced into vegetarianism. it's even worse for our children: as soon as they start granting rights to plants, the only allowed food will be some synthetic chemicals.

There is every reason to believe that some animals are sentient and plants are not, which is an entirely reasonable justification for granting rights to animals but not plants. Also, people have been granting rights to animals for thousands of years: one of the Noachide Laws specifically prohibits a particular type of cruelty to animals (taking part of an animal's flesh while it is still alive). And nobody here is saying that animals should be granted the same rights as human beings (it would, after all, be pretty meaningless to grant a cat the right to freedom of speech and religion).

if you allow me this pun, a plant has more rights to have rights than your computer program.

Not if the plant is not sentient, but the computer program is.


makc wrote:fuuuu... I just wrote long reply to this thread, and someone meanwhile edited or deleted last post, and now my text is gone :/ I feel like simulation that has been messed with.

Sorry, that was me. I wrote two separate posts, and then combined them into one to avoid double-posting. I didn't realize doing so could screw with someone else posting.
I'm looking forward to the day when the SNES emulator on my computer works by emulating the elementary particles in an actual, physical box with Nintendo stamped on the side.

"With math, all things are possible." —Rebecca Watson

User avatar
JBJ
Posts: 1263
Joined: Fri Dec 12, 2008 6:20 pm UTC
Location: a point or extent in space

Re: ethics of artificial suffering

Postby JBJ » Mon Apr 26, 2010 1:53 pm UTC

One thing that appears to be lacking is what constitutes suffering. Suffering is a subjective experience, and we tend to impose our perception of suffering on other people as well as animals and other creatures.

I think that most people associate suffering with physical pain, but that's only a minor component. In mammals, physical pain is a conscious perception: pain responses are processed in the prefrontal cortex, and the pain is then linked with an emotional response, which is the cause of suffering. When pain is blocked by an anesthetic interrupting nerve signals, or in patients who have received a lobotomy or cingulotomy, there is no suffering. There is no change in the objective level of physical damage, but the messages from the nerves are not associated with an emotional response.

The emotion most closely associated with suffering is fear. Fear is a stronger component of suffering than physical pain. Fear alone can induce suffering, even in the absence of physical pain. Having the ability to suppress the fear response can reduce if not totally eliminate suffering. It's the basis of hypnosis as an alternative to anesthesia. Sensory input from the nerves is not reduced. It's all in what the brain perceives as a threat/damage vs. what it doesn't.

Applied to this situation, if a simulated brain is capable of experiencing fear, then it is capable of suffering, regardless of sensory inputs from a simulated universe or from the real world. If we can construct an otherwise fully functioning human-like brain without fear then there is no ethical dilemma. Without fear it won't suffer.
So, you sacked the cocky khaki Kicky Sack sock plucker?
The second cocky khaki Kicky Sack sock plucker I've sacked since the sixth sitting sheet slitter got sick.

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Mon Apr 26, 2010 1:58 pm UTC

ha ha, cat rights, rrright. in the world where a cat can be castrated or declawed at will, what rights does it have?

but nvm, why base rights on sentience?

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Mon Apr 26, 2010 2:02 pm UTC

JBJ wrote:If we can construct an otherwise fully functioning human-like brain without fear then there is no ethical dilemma. Without fear it won't suffer.
how do you verify that the program is scared? like, its output gets random and inadequate, cpu is overheating :D probably not something a programmer would want, so you bet they will program it without fear.

brötchen
Posts: 112
Joined: Mon Aug 31, 2009 1:45 pm UTC

Re: ethics of artificial suffering

Postby brötchen » Mon Apr 26, 2010 2:08 pm UTC

I found it rather mind-blowing to think about all this in the context of xkcd 505 (the one with the bunch of rocks). If you assume that a simulation may be unethical (which I think is the case), it leads to the conclusion that there are certain configurations of rocks that are unethical. Kinda weird, as rocks clearly can't feel, but it has to be true, since the bunch of rocks might be Turing complete.
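The "bunch of rocks might be Turing complete" point isn't hypothetical, by the way: Rule 110, a one-dimensional cellular automaton that has been proved Turing complete, can literally be executed by placing and removing rocks on a grid of squares. A minimal sketch of the update rule (in Python rather than rocks):

```python
# Rule 110: each cell's next state depends only on itself and its two
# neighbours -- simple enough to execute by moving rocks on a beach.
RULE = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """One update of the row (boundaries held at 0)."""
    padded = [0] + cells + [0]
    return [RULE[tuple(padded[i - 1:i + 2])] for i in range(1, len(padded) - 1)]

row = [0, 0, 0, 0, 1]  # a single "rock"
for _ in range(3):
    row = step(row)
print(row)  # → [0, 1, 1, 0, 1]
```

So any ethical status the simulation has must attach to the pattern of updates, not to the substrate that carries them.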

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Mon Apr 26, 2010 2:18 pm UTC

brötchen wrote:there are certain configurations of rocks that are unethical
certain configurations of rocks AND the guy moving them. In "oogh, naughty boy, I love the way you move your rocks" way :)

mouseposture
Posts: 42
Joined: Tue Sep 15, 2009 2:42 am UTC

Re: ethics of artificial suffering

Postby mouseposture » Mon Apr 26, 2010 10:11 pm UTC

makc wrote: you bet they will program it without fear.


Not a safe assumption at all. Fear serves a useful function in biological brains. Perhaps it's possible to implement Asimov's Third Law without fear -- perhaps not. Perhaps it's possible, but beyond the technical ability of the folks who program the first generation of sentient AIs.

User avatar
Telchar
That's Admiral 'The Hulk' Ackbar, to you sir
Posts: 1937
Joined: Sat Apr 05, 2008 9:06 pm UTC
Location: Cynicistia

Re: ethics of artificial suffering

Postby Telchar » Mon Apr 26, 2010 10:52 pm UTC

1. I think the term simulation is bad. I can run simulations of climate, but I'm not actually creating climate. Creating simulations of humans sounds superficial, and probably wouldn't be called that in any case.

2. I doubt we would be able to create a human intelligence on a computer without creating a copy of an existing intelligence (mapping someone's brain and transferring it onto a computer). So much of human development relies on sensory input and group dynamics that you would at least have to start with existing intelligences and have them raise baby intelligences.

3. I think it's more interesting to think about non-human intelligences, i.e. artificial intelligence allowed to "grow" organically rather than being molded.

4. In terms of experimentation, I don't think it'd be hard to get around the medical ethics. You could just as easily have the artificial person react the same way to pain without having them experience it. Experimenting on people without their consent is still bad, but what about modifying people so they don't feel pain so you can experiment on them? Or making people so they are easily influenced by you?
Zamfir wrote:Yeah, that's a good point. Everyone is all about presumption of innocence in rape threads. But when Mexican drug lords build APCs to carry their henchmen around, we immediately jump to criminal conclusions without hard evidence.

