ethics of artificial suffering


idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: ethics of artificial suffering

Postby idobox » Tue Apr 27, 2010 12:25 am UTC

why base rights on sentience?

Because we do not grant rights to some amount of human flesh. A severed human hand has no rights, and a corpse has far fewer rights than the living person it was; in fact, most of its rights are due to the person it was, not the corpse it is.
The question of what is ethical and what isn't when talking about fetuses, newborns, or comatose people is the subject of intense debate, because they sit at the boundary of human sentience, and ethics around boundaries is always blurry.

About pain and fear: neither defines sentience, since many animals usually considered non-sentient can feel both. Defining sentience is really difficult, and different people have proposed many different definitions.
What's more, fear and pain are not reliable markers of abuse; both are normal processes in the human brain. Also, unethical treatment isn't limited to inflicting pain or fear. Think of everything that could be considered child abuse: sensory deprivation, lack of social interaction, massive lies about what life really is, and so on. In short, detecting pain or fear in a simulation is neither a sign of sentience nor a sign of unethical treatment.

JBJ
Posts: 1263
Joined: Fri Dec 12, 2008 6:20 pm UTC
Location: a point or extent in space

Re: ethics of artificial suffering

Postby JBJ » Tue Apr 27, 2010 1:38 pm UTC

idobox wrote:About pain and fear: neither defines sentience, since many animals usually considered non-sentient can feel both. Defining sentience is really difficult, and different people have proposed many different definitions.
What's more, fear and pain are not reliable markers of abuse; both are normal processes in the human brain. Also, unethical treatment isn't limited to inflicting pain or fear. Think of everything that could be considered child abuse: sensory deprivation, lack of social interaction, massive lies about what life really is, and so on. In short, detecting pain or fear in a simulation is neither a sign of sentience nor a sign of unethical treatment.

Many people try to project their own concept of suffering onto others: "If it can hurt me, it must hurt them," which is incorrect. Some of the items you mentioned, sensory deprivation and lack of social interaction, are examples of unethical treatment of humans (or other social animals). An action is never ethical or unethical in itself. It gains an ethical component only when it is applied to someone or something. Who or what it impacts is more important than the act.

One common bond between all sentient creatures is fear*. Animals that show a learned aversion to negative situations are capable of suffering. Animals that don't are capable of experiencing pain, but it is experienced reflexively, not subjectively. Non-sentient creatures don't experience fear. Pain without fear is not suffering. So I respectfully disagree. Detecting a fear response in a simulation would be the first indication of sentience and possible unethical treatment. Whether that reaction is due to pain, sensory deprivation, sensory overload, or any other cause is irrelevant. (A toy sketch of the reflex/aversion distinction follows the footnote.)

* Regarding fear, I'm using the term as a generic negative emotional response. Could also include anxiety, sadness, hostility, among many others.
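
Just to make the distinction concrete, here is a toy sketch; the setup is entirely invented, and a real test would of course observe behavior rather than inspect code. A reflexive agent reacts to pain but never changes its behavior, while an aversion-learning agent starts avoiding the place where it was hurt:

Code: Select all
# Toy illustration of learned aversion vs. mere reflex (setup invented).

class ReflexAgent:
    def react(self, stimulus, place=None):
        # Always the same response; pain leaves no trace.
        return "withdraw" if stimulus == "pain" else "approach"

class AversiveAgent:
    def __init__(self):
        self.bad_places = set()

    def react(self, stimulus, place=None):
        if stimulus == "pain" and place is not None:
            self.bad_places.add(place)  # remembers where it was hurt
            return "withdraw"
        return "avoid" if place in self.bad_places else "approach"

reflex, aversive = ReflexAgent(), AversiveAgent()
for agent in (reflex, aversive):
    agent.react("pain", place="red box")
    print(type(agent).__name__, "->", agent.react("neutral", place="red box"))
# ReflexAgent -> approach   (reflex only: no learned aversion)
# AversiveAgent -> avoid    (behavior changed: candidate for suffering)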

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Tue Apr 27, 2010 3:07 pm UTC

idobox wrote:
why base rights on sentience?
Because we do not grant rights to some amount of human flesh.
I don't see how anything written after "because" actually answers my question. I see no problem with granting rights to living, functional humans, and apparently the legal system in most (if not all) countries doesn't see a problem either. I also see no reason to extend the concept of "rights" beyond the human species.

idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: ethics of artificial suffering

Postby idobox » Tue Apr 27, 2010 5:16 pm UTC

I don't see how anything written after "because" actually answers my question. I see no problem with granting rights to living, functional humans, and apparently the legal system in most (if not all) countries doesn't see a problem either. I also see no reason to extend the concept of "rights" beyond the human species.

Let me try to rephrase it. Rights are granted to persons, and what defines a person is mostly their mind.
If someone dies of a head injury, their body is fully human and, apart from the brain, fully functioning, but they are considered dead and have almost no rights as such.
A person who has suffered heavy injuries, losing their arms, legs, and a large part of the torso, but who is still alive and conscious, is considered human.
An embryo, up to a certain age, is legally considered not to be human, although it biologically is.

Refusing rights to a sentient being because of its physical characteristics has a name: it's called discrimination. Children, women, black people, Native Americans, and others were denied rights because they were not considered "men". Today, in most countries, such discrimination is fought, and refusing rights to a fully sentient, human-like being just because it lacks a biological body is discrimination as well. If the AI/simulation passes the Turing test, if you can't tell whether it is a genuine human or a computer you're talking to, then I don't see any justification for treating it differently from a human.

The question gets much more complicated when talking about non-human-like intelligence. That's why there is a gradient of rights. You are allowed to eat an oyster alive, or to boil a lobster alive, but not to do the same to a chicken; still, you can cage a chicken, force-feed it, and eat it, which you can't do to a human.

One common bond between all sentient creatures is fear*. Animals that show a learned aversion to negative situations are capable of suffering. Animals that don't are capable of experiencing pain, but it is experienced reflexively, not subjectively. Non-sentient creatures don't experience fear. Pain without fear is not suffering. So I respectfully disagree. Detecting a fear response in a simulation would be the first indication of sentience and possible unethical treatment. Whether that reaction is due to pain, sensory deprivation, sensory overload, or any other cause is irrelevant.


I disagree with your definition of sentience. What you call sentience is the capacity to learn negative reactions. For me, a creature is sentient when it is aware of itself, when it distinguishes between itself and the rest of the universe, and (the definition I prefer) when it is able to analyze its own mental processes. By my definition, sentient creatures are only a subset of the creatures that can feel negative emotions.
One standard way to test this definition of sentience is to teach the AI a way to solve a problem, then confront it with a slightly different problem and see whether it can determine that the solution it knows doesn't fit, and, even better, whether it can find a new solution not step by step but in a single "jump".
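In rough Python, such a test harness might look like this; the Agent interface here is invented purely for illustration, and the interesting part is whether the system flags the mismatch rather than blindly reapplying its known recipe:

Code: Select all
# Hypothetical sketch of the transfer test described above.
# The Agent class is invented; a real test would wrap whatever AI
# is under study behind a similar interface.

class Agent:
    def __init__(self):
        self.known = {}  # problem description -> memorized solution

    def teach(self, problem, solution):
        self.known[problem] = solution

    def solve(self, problem):
        # A merely trained system reapplies a memorized recipe;
        # returning None stands for "my known solution doesn't fit".
        return self.known.get(problem)

agent = Agent()
agent.teach("stack two boxes", "put box A on box B")

answer = agent.solve("stack three boxes")  # slightly different problem
if answer is None:
    print("Agent noticed its known solution doesn't fit (pass).")
else:
    print("Agent blindly reused a recipe (fail):", answer)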

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Tue Apr 27, 2010 5:38 pm UTC

idobox wrote:Refusing rights to a sentient being because of its physical characteristics has a name: it's called discrimination.
oh, ok, does refusing rights to an insentient being go under the same name?

idobox wrote:If the AI/simulation passes the Turing test, if you can't tell whether it is a genuine human or a computer you're talking to, then I don't see any justification for treating it differently from a human.
oh, let's see.... how about this: because it actually is NOT human?

idobox wrote:You are allowed to eat an oyster alive, or to boil a lobster alive, but not to do the same to a chicken
Really? Please name your country so that I never move there by mistake. Seriously, I have seen some cool dishes cooked from live animals, and I would hate for that to be forbidden because it upsets some pussy.

p.s. that second part you are replying to is by some other dude. can be confusing to have this in one post without names in quote tags.

idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: ethics of artificial suffering

Postby idobox » Tue Apr 27, 2010 6:28 pm UTC

p.s. that second part you are replying to is by some other dude. can be confusing to have this in one post without names in quote tags.

Sorry, won't do it again


You are allowed to eat an oyster alive, or to boil a lobster alive, but not to do the same to a chicken

Really? Please name your country so that I never move there by mistake. Seriously, I have seen some cool dishes cooked from live animals, and I would hate for that to be forbidden because it upsets some pussy.

How do you eat oysters and lobsters in your country? Because I'm pretty sure that's the standard way worldwide.
And I'm from France.

oh, let's see.... how about this: because it actually is NOT human?

It all depends on your definition of what a human is. The enslavement of Africans expanded when the Church decided that Native Americans had souls but black people did not, denying the latter the status of human beings, and it didn't bother many people at the time.
A morphological difference is not a reason to deny someone the status of a human being.
The lack of a biological body is a huge morphological difference, for sure, but is it more than a morphological difference?

oh, ok, does refusing rights to an insentient being go under the same name?

Denying women the right to vote is called discrimination in the Western world, and doesn't really have a name in countries that still deny it.
Eating meat is considered normal, and even a good thing, by most of the population, but is called murder by extremist vegans.
And I'm pretty sure you can find people who fight for the rights of trees, or even rocks.
But I don't think anyone has ever called it discrimination when applied to something perceived as non-sentient.

Strange Quirk
Posts: 30
Joined: Tue Apr 13, 2010 1:39 pm UTC

Re: ethics of artificial suffering

Postby Strange Quirk » Tue Apr 27, 2010 10:48 pm UTC

What about free will? Computers as we know them today produce output (including any decision made by your simulated human) based solely on input (the senses) and stored data (the memory). Thus a computer, and therefore your simulation, by definition has no free will: every decision is determined entirely by input and memory, so in a world consisting only of such simulations everything would be predetermined, provided that nature (i.e. all the inputs to your simulation) is predetermined. Even if you don't want to call that "predetermined", decisions based solely on input and memory is one of the standard definitions of "no free will". Therefore, either:
1) humans have no free will,
2) your simulation is faulty and humans cannot be simulated by computers, or
3) computers will have some other element, beyond input and memory, for decision making.
2) seems very possible to me, and it resolves a lot of questions: if there is a "soul" that cannot be simulated, and it were found, say, to exist only in animals, then questions about cruelty towards animals, computers, and plants would be easily settled. 3) is hard to imagine, but that doesn't mean anything. 1) is something some people freely accept, but it raises issues such as moral responsibility. That is a completely different topic that we should not derail the thread on, but, just as a demonstration, here is one possible problem:

Your simulation starts off with the same "memory" (i.e. initial data) as an average human at birth. Some amount of time, say 20 years, is simulated, with the simulated human in controlled "normal" circumstances; everything in its simulated environment is predetermined, but from its perspective it looks like an average human life. After 20 years, it ends up in a situation where, say, it has the option to steal something big (or commit some other major crime). If it does steal, is it morally responsible? Should it be punished, even though it had "no say" in the matter? (One argument for punishing it is that punishment serves as an example to others, decreasing the likelihood that they will steal when they weigh their chances of being punished; but that has nothing to do with the "morally responsible" question.)

You could say that randomness could be introduced as another factor in addition to memory and input, but a) that makes the simulation seem even less responsible for its actions, b) the randomness would have to be independent of the senses and stored data, and c) it would have to be independent of the environment. c) is debatable, since elements of the environment could be random themselves (e.g. the pattern of falling raindrops). If the environment is not truly random but simply chaotic, that poses even more questions: if a butterfly in Kansas can cause a human to commit a crime...
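
To make the "output is a function of input and memory" picture concrete, here is a toy sketch; the names and the hash-based "program" are invented, and the point is only that a fixed function of (input, memory) yields the same decision every time, while a random factor changes that at the cost of making the agent look even less responsible:

Code: Select all
import hashlib
import random

def decide(sensory_input, memory):
    # Stands in for an arbitrarily complex but *fixed* program:
    # identical (input, memory) always yields the identical choice.
    digest = hashlib.sha256((sensory_input + "|".join(memory)).encode()).digest()
    return "steal" if digest[0] % 2 else "don't steal"

memory = ["20 simulated years of an average human life"]
print(decide("option to steal something big", memory))  # same on every run

def decide_with_noise(sensory_input, memory, rng):
    # The random variant discussed above: no longer predetermined,
    # but arguably the agent is even less responsible for the result.
    if rng.random() < 0.5:
        return decide(sensory_input, memory)
    return rng.choice(["steal", "don't steal"])

print(decide_with_noise("option to steal something big", memory, random.Random()))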

BTW, 1) and 3) from my original list are not mutually exclusive.

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Tue Apr 27, 2010 10:54 pm UTC

idobox wrote:How do you eat oysters and lobsters in your country? Because I'm pretty sure that's the standard way worldwide.
And I'm from France.
That sucks, I heard France has some really good food. I was referring, however, to the cool stuff they make in Asia from live fish and snakes. The fish is still breathing on your plate, yet the other half of it is cooked. I would like to try that one day. But yeah, Western bullshit finds its way there too; I heard China was going to ban eating cats and dogs.

idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: ethics of artificial suffering

Postby idobox » Wed Apr 28, 2010 4:06 am UTC

makc wrote:
idobox wrote:How do you eat oysters and lobsters in your country? Because I'm pretty sure that's the standard way worldwide.
And I'm from France.
That sucks, I heard France has some really good food. I was referring, however, to the cool stuff they make in Asia from live fish and snakes. The fish is still breathing on your plate, yet the other half of it is cooked. I would like to try that one day. But yeah, Western bullshit finds its way there too; I heard China was going to ban eating cats and dogs.

And yet, they will keep on eating pigs, which are very intelligent, and are pets in some parts of China.

idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: ethics of artificial suffering

Postby idobox » Wed Apr 28, 2010 4:48 am UTC

Strange Quirk,

this is a frequent cause of misunderstanding.

If you take one person and confront them with a choice, there are only two possibilities:
-either their behavior is predictable, and free will doesn't really exist, because there isn't really a choice,
-or their behavior is unpredictable, or not accurately predictable, which means random.

The idea that human decisions are neither predictable nor random is an inheritance from older times. It implies the process of choosing is other-worldly.
This is a very comforting idea, because materialist explanations give the feeling you don't have any freedom. But there is absolutely no evidence for it.

But even if the process of decision is materialist, that doesn't mean free will absolutely doesn't exist. You have no way to predict accurately how a person is going to react; you can barely guess. The mind of another person is a black box: you don't know what happens inside. You can observe how it reacts to events, and try to guess how it WILL react. But as long as you're unable to look inside the box, you can act as if there were free will.
It's a bit like the flat Earth. The Earth is round, and you know it, but when you build a house, travel to the next city, or play football, you work with a model of the universe where the ground is flat.
It's the same with free will. Even if free will doesn't exist, assuming it does works fine for normal interactions with people.

UberNube
Posts: 13
Joined: Wed Feb 03, 2010 1:22 am UTC

Re: ethics of artificial suffering

Postby UberNube » Thu Apr 29, 2010 12:44 pm UTC

makc wrote:
idobox wrote:If the AI/simulation passes the Turing test, if you can't tell whether it is a genuine human or a computer you're talking to, then I don't see any justification for treating it differently from a human.
oh, let's see.... how about this: because it actually is NOT human?

idobox wrote:You are allowed to eat an oyster alive, or to boil a lobster alive, but not to do the same to a chicken
Really? Please name your country so that I never move there by mistake. Seriously, I have seen some cool dishes cooked from live animals, and I would hate for that to be forbidden because it upsets some pussy.


I'm sorry, but I really have to take issue with the idea that non-human organisms (organic or artificial) should lose all rights simply because of their physical manifestation.

I'm certain you consider yourself deserving of basic human rights (like the right not to be eaten alive).
I'm also reasonably certain that you'll agree that there are almost certainly more intelligent and advanced species somewhere in the universe.
What happens when (if) we make first contact? Do they consider us a delicious delicacy simply because we don't share a genetic heritage with them? Do we lose all rights simply because of our physical differences, despite being intellectually comparable? Put yourself in the situation of any sentient being before you consider disregarding its basic rights.

Also, just out of interest, which country are you from, just so I don't move there by mistake? I'd hate to be at risk of being eaten alive simply because I don't have sufficiently similar DNA. The definition of a species is fairly arbitrary, and it really wouldn't be hard to go from arguing that animals sharing 94% [1] of our DNA (chimpanzees) shouldn't have any rights to arguing that people with a different skin colour or other physical differences should be treated similarly. For reference, there is about 0.1% genetic variation within the human population [2].

As for AIs, the idea of disregarding their rights is even more worrying, since one of the assumptions made in this debate is that an AI is an exact replica of a human mind within a virtual environment. I certainly wouldn't want my mind to be the one chosen to be copied and tortured, and I suspect you would feel the same way. Since such an AI is capable of all the same feelings as you or I, it would surely feel the same way too.

[1] http://www.sciam.com/article.cfm?chanID ... 436FEF8039
[2] http://www.nature.com/ng/journal/v36/n1 ... g1435.html

=== The following is not directed at makc ===

As for the argument that by creating an AI we are giving it the ability to suffer and therefore indirectly causing its suffering, I think that is simply wrong. There is a large grey area in the concept of blame. At one end, it could be argued that the minute disruptions caused by your very existence, alive or dead, contribute to suffering in the world (chaos theory); at the other, it could be argued that even though you pulled the trigger on the gun, momentum was to blame for the bullet hitting the victim. We have to draw a line somewhere, and usually that line is drawn at intent or negligence. If an action is likely to do harm, then it is immoral to perform it. However, if an action only has the possibility of doing harm, but some good is also likely to come from it, then usually it is acceptable. Creating an AI in a virtual environment full of acid and deathtraps would be considered immoral, but creating an AI in an environment in which it has the opportunity for happiness as well as misery seems perfectly acceptable. Every time a woman gives birth, she is doing precisely that. Sure, life does contain a lot of unhappiness, but would you really choose never to have been born simply to avoid the unhappiness inherent in life?

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Thu Apr 29, 2010 4:13 pm UTC

UberNube wrote:I'm certain you consider yourself deserving of basic human rights (like the right not to be eaten alive).
I'm also reasonably certain that you'll agree that there are almost certainly more intelligent and advanced species somewhere in the universe.
What happens when (if) we make first contact? Do they consider us a delicious delicacy simply because we don't share a genetic heritage with them? Do we lose all rights simply because of our physical differences, despite being intellectually comparable? Put yourself in the situation of any sentient being before you consider disregarding its basic rights.
I'm sorry, this is completely backwards. No need to bring in aliens here - if you pass a bill forbidding me to eat bears alive, it does absolutely nothing to prevent bears from eating me. Personally, I think eating a bear alive would be a thrilling and interesting experience, but hardly ever a safe one, so no one will do it anyway.

UberNube wrote:Also, just out of interest, which country are you from, just so I don't move there by mistake? I'd hate to be at risk of being eaten alive simply because I don't have sufficiently similar DNA.
Ha ha, don't go to Ukraine. There's no racism here, because we eat all non-white people :)

UberNube wrote:I certainly wouldn't want my mind to be the one chosen to be copied and tortured, and I suspect you would feel the same way.
No, I wouldn't give a damn. A copy of my mind in some machine deserves the same rights as the copy of my face in some sculpture.

UberNube wrote:Since an AI is capable of all the same feelings as you or I, then similarly I am certain that it would feel the same way.
Of course, but you should listen to a pig squealing when its throat is cut. From what it sounds like, I'm fairly certain the pig feels the same way about its throat being cut as you or I would (even if it's not as smart as us humans). Nevertheless, pigs are tasty, and I see no reason to stop eating them.

edit/p.s. I actually had no opinion on this subject when I came to the thread, but thanks to reading obvious and not-so-obvious nonsense I came to the conclusion that ethics should be applied where it actually applies, and not extended over unrelated domains just because you can. This may not be final, if I hear some convincing arguments (but "how would you like to be eaten alive by aliens" is not one of them).

UberNube
Posts: 13
Joined: Wed Feb 03, 2010 1:22 am UTC

Re: ethics of artificial suffering

Postby UberNube » Thu Apr 29, 2010 4:55 pm UTC

makc wrote:I'm sorry, this is completely backwards. No need to bring in aliens here - if you pass a bill forbidding me to eat bears alive, it does absolutely nothing to prevent bears from eating me. Personally, I think eating a bear alive would be a thrilling and interesting experience, but hardly ever a safe one, so no one will do it anyway.

Ha ha, don't go to Ukraine. There's no racism here, because we eat all non-white people :)

No, I wouldn't give a damn. A copy of my mind in some machine deserves the same rights as the copy of my face in some sculpture.

Of course, but you should listen to a pig squealing when its throat is cut. From what it sounds like, I'm fairly certain the pig feels the same way about its throat being cut as you or I would (even if it's not as smart as us humans). Nevertheless, pigs are tasty, and I see no reason to stop eating them.

edit/p.s. I actually had no opinion on this subject when I came to the thread, but thanks to reading obvious and not-so-obvious nonsense I came to the conclusion that ethics should be applied where it actually applies, and not extended over unrelated domains just because you can. This may not be final, if I hear some convincing arguments (but "how would you like to be eaten alive by aliens" is not one of them).


Unfortunately I don't think there's any argument I could use to convince you to change your mind. I guess in the particular situation we are in currently, your disregard for ethics outside the species is beneficial to you (and only you). Just be glad you're currently a member of the most technologically advanced known species, and as such are in the fortunate position of being able to disregard ethics.

Just out of interest, do you have any reason, other than legal consequences, not to torture or kill other people? From a truly self-serving, logical standpoint, without the interference of government, there have been many occasions in my life when it would have been beneficial to me to commit serious crimes, even going as far as murder. If you cannot accept that other sentient beings should have rights, then surely that should include humans too, especially those who are detrimental to your existence. If a perfect simulation of a human has no rights, then why should any other human?

Strange Quirk
Posts: 30
Joined: Tue Apr 13, 2010 1:39 pm UTC

Re: ethics of artificial suffering

Postby Strange Quirk » Thu Apr 29, 2010 7:45 pm UTC

idobox wrote:Strange Quirk,
this is a frequent cause of misunderstanding.

Wait, what? What exactly did I misunderstand? As far as I can tell, except for a few of my opinions, my post was entirely logical, and didn't actually present any conclusions.

idobox wrote:If you take one person and confront them with a choice, there are only two possibilities:
-either their behavior is predictable, and free will doesn't really exist, because there isn't really a choice,
-or their behavior is unpredictable, or not accurately predictable, which means random.

The idea that human decisions are neither predictable nor random is an inheritance from older times. It implies the process of choosing is other-worldly.
This is a very comforting idea, because materialist explanations give the feeling you don't have any freedom. But there is absolutely no evidence for it.


First of all, you are arguing semantics about the word "random". In common speech, if I choose "randomly" (if I were, indeed, to be completely random), then, essentially, I'm giving an equal chance to every possible outcome. That's obviously not the case with normal decisions, where we take into account memory and input. So I'd say that nobody thinks all human decisions are "random". The possibility that there is a random element to our decisions is something I considered in my post.

Let's use my definition of "random" for now, OK? I'm not sure what yours is, but it doesn't make much sense, since you first say that "unpredictable means random" and then say that "decisions are neither predictable nor random", implying that they could be unpredictable but not random. Then, using my definition, you have only two situations, the ones you listed first, and we can forget about the one that you say involves other-worldly processes. So basically, we agree that decisions are either predictable or unpredictable. Pretty obvious.

Your two options are, essentially, "free will" or "no free will". You won't find consensus on that in one sentence, and, as I said, we shouldn't derail this thread with that debate, as I'm sure it's already going on elsewhere (or maybe people have gotten tired of it already). Physics (and biology, and neuroscience) doesn't give us any answers on this yet; we have no basis to believe that a person with the same memory and same input will always make the same decisions. We have no basis to believe the opposite either, which, unlike the God/no-God debate, means we have pretty much no scientific footing for either case. We do have some moral questions (i.e. responsibility) that come up if we assume that human behavior is deterministic. The rest of your post rests on the assumption that human behavior is deterministic.

Wait, what am I talking about? All of this is irrelevant. Where did I say that we do or don't have free will? I came up with three possible scenarios. If I missed some other possibility, if it's possible that none of them is true, do tell me. My whole point was to bring up some other interesting questions: 2) is something none of you had considered, 3) is something interesting to think about, and 1) is something many people don't agree with, and whose moral problems are easier to look at in terms of a simulation. If you think that humans don't have free will, then no problem; my post probably won't give you anything new. But I don't see any problems.

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Thu Apr 29, 2010 8:08 pm UTC

UberNube wrote:Just out of interest, do you have any reason, other than legal consequences, not to torture or kill other people?
Once again, this is backwards. The thing is, I don't have any reason to torture or kill other people. I might have one at some point in time (I mentioned a few in this thread) and, of course, by the definition of "reason", I might actually have to do it then. But right now you're safe :)

Strange Quirk
Posts: 30
Joined: Tue Apr 13, 2010 1:39 pm UTC

Re: ethics of artificial suffering

Postby Strange Quirk » Thu Apr 29, 2010 8:11 pm UTC

As for the original question: I don't think we will get very far in this debate. If and when we do build a human simulation, then we will probably know everything there is to know about the brain, which will give us a much clearer picture of the matter.

As for everybody saying that the simulation shouldn't have human rights: if the simulation is an exact copy of a human, except without a physical body made of cells, then is the fact that we have cells the only thing that gives us these rights? One problem may be this: even if we think the sim is an exact replica, we may have no way of knowing for sure whether, for instance, it is in fact "conscious". The fact that it responds to all input as a human would doesn't really mean that it is the same thing. For example, take the Chinese room (http://en.wikipedia.org/wiki/Chinese_room). From the outside, it appears to be a machine that understands Chinese and would pass the Turing test. However, while you may think that insulting someone is unethical, you probably won't think the same about insulting the machine. The machine's response to your insult can indicate that it's sad, or angry, or suicidal, but neither the book nor the man inside the room will actually be in any way hurt.

Let's extend this thought experiment to the simulation. Your simulation is run by a man in a room who, following instructions in a book (the computer's program written down), will make the simulated human scream if the perceived noise level rises above the human pain threshold. Again, neither the man nor the book is in any way hurt.
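
A crude sketch of that room, with rules invented purely for illustration: the "book" is just a table from perceived input to scripted response, and the "man" applies it mechanically, understanding nothing:

Code: Select all
# Crude sketch of the room described above (rules invented).
# The "book" maps perceived inputs to scripted responses; the "man"
# follows it line by line without understanding anything.

PAIN_THRESHOLD_DB = 120

book = {
    "insult": "simulated human looks hurt and protests",
    "greeting": "simulated human smiles and says hello",
}

def man_in_room(perceived_input, noise_level_db):
    if noise_level_db > PAIN_THRESHOLD_DB:
        return "simulated human screams"
    return book.get(perceived_input, "simulated human looks confused")

print(man_in_room("insult", 60))     # scripted "hurt" response
print(man_in_room("greeting", 130))  # scripted scream - nothing was hurt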

OK, writing this has made me change my mind. I think that, no matter how powerful our computers are or how accurate the simulation is, cruelty towards a simulation is completely ethical as long as our technology stays the same, because the machine/simulation wouldn't actually be conscious like we are. However, our technology won't stay the same forever, so maybe with some future tech we may be able to simulate consciousness. Who knows? I'd love to argue this point, though, if anyone disagrees.

My point from my earlier post still stands, although it doesn't have much to do with cruelty.

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Thu Apr 29, 2010 8:19 pm UTC

Strange Quirk wrote:If the simulation is an exact copy of a human, except without a physical body made of cells, then is the fact that we have cells the only thing that gives us these rights?
Of course not, it's just a convenient threshold. We may include (or exclude) some subject or class of subjects in ethical consideration as necessary (e.g., I would not like you to eat my friend Catherine, but I don't care about cats in general; or, aliens should be granted some rights so that we can legally trade with them; or, Jews should be denied some rights so that Nazis can put them into ovens - not going to happen soon, except maybe in Iran).

Strange Quirk
Posts: 30
Joined: Tue Apr 13, 2010 1:39 pm UTC

Re: ethics of artificial suffering

Postby Strange Quirk » Thu Apr 29, 2010 8:42 pm UTC

makc wrote:
Strange Quirk wrote:If the simulation is an exact copy of a human, except without a physical body made of cells, then is the fact that we have cells the only thing that gives us these rights?
Of course not, it's just a convenient threshold. We may include (or exclude) some subject or class of subjects in ethical consideration as necessary (e.g., I would not like you to eat my friend Catherine, but I don't care about cats in general; or, aliens should be granted some rights so that we can legally trade with them; or, Jews should be denied some rights so that Nazis can put them into ovens - not going to happen soon, except maybe in Iran).


Not sure I understand you. I'm not talking about what legally gives us rights, but about moral rights. I suppose I'm assuming that we have "natural" rights. But if two people/animals/objects are identical save for one thing, then giving certain rights to one and not to the other means that the separating factor is that one thing. Again, I'm not talking about what is legal for them to do, but about what they morally should have a right to. The Nazis actually believed (some of them, at least) that it wasn't morally wrong to kill Jews. But they had a distinguishing factor, ethnicity, separating who it's right to kill and who it's not. We all agree, I assume, that race, ethnicity, and religious belief aren't valid grounds for discrimination. But then we have this ridiculous-sounding case: can we discriminate based on whether something has cells? I don't think so, but that was my original point.

UberNube
Posts: 13
Joined: Wed Feb 03, 2010 1:22 am UTC

Re: ethics of artificial suffering

Postby UberNube » Thu Apr 29, 2010 8:55 pm UTC

Strange Quirk wrote:OK, writing this has made me change my mind. I think that, no matter how powerful our computers are or how accurate the simulation is, cruelty towards a simulation is completely ethical as long as our technology stays the same, because the machine/simulation wouldn't actually be conscious like we are. However, our technology won't stay the same forever, so maybe with some future tech we may be able to simulate consciousness. Who knows? I'd love to argue this point, though, if anyone disagrees.

My point from my earlier post still stands, although it doesn't have much to do with cruelty.


I hate to break it to you, but from my point of view you're just a black box which receives input and produces output. For all I know you could be a very powerful computer posting on these forums from some DARPA lab. The moment we treat such black boxes as simple algorithms and disregard any apparent sentience, EVERYONE should lose their rights. For example, try to prove to me that ANYONE on these forums other than me is sentient. (Hint: it's impossible.)

Using current technology we can create neural networks which are as intelligent as many animals. It isn't a great leap to consider creating a larger one, somewhat modelled on the human brain, which would be capable of perceiving pain exactly as we do. Again, we only perceive it as a black box, but by definition it should get at least basic rights.

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Thu Apr 29, 2010 8:55 pm UTC

Strange Quirk wrote:I'm not talking about what legally gives us rights... but moral rights.
As far as I understand the terms, the only difference is that legal rights are enforced by government, so unless there's some other subtle difference that I am missing, I don't see why we shouldn't mix it all together in this discussion.

Strange Quirk wrote:I suppose I'm assuming that we have "natural" rights. But if two people/animals/objects are identical, save for one thing, giving certain rights to one and not to the other means that the separating factor is that one thing.
Yeah, I understand you; to that I replied that this is only a threshold. What we have here is a gradient: on one end we have you and me, and we want our rights to stick with us no matter what; on the other end we have plants and rocks, which we don't think deserve any rights; and a whole lot of subjects such as chimps, pigs, cats, aliens, and AI simulations sit somewhere in between. So obviously, on our way from one end to the other, we have to stop granting these rights somewhere - there goes our threshold. Then, my other, minor point was that we can move subjects "in" and "out" of the "rights zone" at will - I guess this contradicts your natural-rights idea, and is thus causing confusion.

Turtlewing
Posts: 236
Joined: Tue Nov 03, 2009 5:22 pm UTC

Re: ethics of artificial suffering

Postby Turtlewing » Thu Apr 29, 2010 9:04 pm UTC

Strange Quirk wrote:Let's extend this thought experiment to the simulation. Your simulation is run by a man in a room who, following instructions in a book (the computer's program written down), will make the simulated human scream if the perceived noise level rises above the human pain threshold. Again, neither the man nor the book is in any way hurt.

OK, writing this has made me change my mind. I think that, no matter how powerful our computers are or how accurate the simulation is, cruelty towards a simulation is completely ethical as long as our technology stays the same, because the machine/simulation wouldn't actually be conscious like we are. However, our technology won't stay the same forever, so maybe with some future tech we may be able to simulate consciousness. Who knows? I'd love to argue this point, though, if anyone disagrees.

My point from my earlier post still stands, although it doesn't have much to do with cruelty.


By this reasoning I can argue that you aren't conscious:

1. Assume that the Chinese room is not conscious because no piece of it in isolation is conscious.
2. Generalizing this, we can conclude that nothing made from non-conscious pieces can be conscious.
3. Atoms in their elemental form are not conscious.
4. Electrical current is not conscious.
5. Humans are made from atoms in various amounts and contain electrical currents.
6. Occam's razor tells us that the existence of an unobserved conscious component of humans is less likely than the lack of such a component (its existence must be assumed, as it has not been proven to exist).
7. Thus, if you are human, you are most likely not conscious.

The primary fallacy of the Chinese room experiment is that it creates a straw man in the form of the hardware it describes, which distracts from the fact that it's actually asking: assume there is an agent which speaks Chinese; does that alone imply the agent understands what it says?

For example:

Assume a man can speak coherent Chinese and make intelligent replies to statements made in Chinese. Does the man understand Chinese?

Assume a dog can speak coherent Chinese and make intelligent replies to statements made in Chinese. Does the dog understand Chinese?

Assume a computer can speak coherent Chinese and make intelligent replies to statements made in Chinese. Does the computer understand Chinese?

Assume a room can speak coherent Chinese and make intelligent replies to statements made in Chinese. Does the room understand Chinese?

If you answer any of those questions differently, and you can provide a logical and falsifiable (but not actually falsified) reason why the answers must of necessity be different, then you should probably publish it. I guarantee it'll get you some sort of award, and probably a cushy appointment at a university somewhere.

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Thu Apr 29, 2010 9:15 pm UTC

I think what all you "rights to sentients" people are missing is that your choice of threshold is just as arbitrary as "rights to whites". In some aspects it may be better by modern standards, but it is still another rigid dogma.

UberNube
Posts: 13
Joined: Wed Feb 03, 2010 1:22 am UTC

Re: ethics of artificial suffering

Postby UberNube » Thu Apr 29, 2010 9:32 pm UTC

makc wrote:I think what all you "rights to sentients" people are missing is that your choice of threshold is just as arbitrary as "rights to whites". In some aspects it may be better by modern standards, but it is still another rigid dogma.


Well, personally, my choice of threshold is the entire animal kingdom, which is far from arbitrary (all animals have some kind of nervous system; other organisms do not), which is why I am vegetarian. However, I realise it is entirely impractical, and wrong, to expect other people to agree with the range of my assignment of rights. I prefer the "if in doubt about its ability to perceive pain, let's not cause it to suffer" option, but even disregarding any arguments over animal rights, the rights of human-level minds embedded in computers are hardly a grey area. If it's a human mind, then I'm pretty sure its physical manifestation is irrelevant.

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Thu Apr 29, 2010 9:56 pm UTC

UberNube wrote:the rights of human-level minds embedded in computers are hardly a grey area. If it's a human mind, then I'm pretty sure its physical manifestation is irrelevant.
I hope you see that this is just another version of "soul" concept.

Josephine
Posts: 2142
Joined: Wed Apr 08, 2009 5:53 am UTC

Re: ethics of artificial suffering

Postby Josephine » Thu Apr 29, 2010 10:19 pm UTC

The problem here is the assumption that the Chinese room is not sentient. Sentience is a construct. Sentience is immaterial, and so hard to quantify, because it doesn't exist. For all intents and purposes, the Chinese room is as sentient as you are. To me, it's a black box (and so are all of you). To itself, it's sentient. The fact that the guy inside doesn't think the entire system is sentient means nothing. By Occam's razor, it's easier to assume that entities which display what looks like sentience are sentient.

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Thu Apr 29, 2010 10:32 pm UTC

On the other hand, if the guy inside the Chinese room doesn't really suffer, why not torture the room for fun? That's how BDSM works :)

Josephine
Posts: 2142
Joined: Wed Apr 08, 2009 5:53 am UTC

Re: ethics of artificial suffering

Postby Josephine » Thu Apr 29, 2010 10:49 pm UTC

makc wrote:On the other hand, if the guy inside the Chinese room doesn't really suffer, why not torture the room for fun? That's how BDSM works :)

Um, 1, no, that's not how that works. 2, say your neurons were individually sentient: torturing you would become no less wrong. I said sentience was a construct. I did not say that the construct has no rights (I didn't even devalue it, despite what I may have seemed to imply).

gmalivuk
GNU Terry Pratchett
Posts: 26765
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There

Re: ethics of artificial suffering

Postby gmalivuk » Thu Apr 29, 2010 11:21 pm UTC

makc wrote:I think what all you "rights to sentients" people are missing is that your choice of threshold is just as arbitrary as "rights to whites".

No, it's not, because arbitrariness suggests that there's nothing underlying the distinction. But those two positions are very different because the underlying rationale for believing them is very different. And in most of the "rights to whites" cases, that rationale was based on incorrect or untestable claims about how people work.

Strange Quirk
Posts: 30
Joined: Tue Apr 13, 2010 1:39 pm UTC

Re: ethics of artificial suffering

Postby Strange Quirk » Thu Apr 29, 2010 11:42 pm UTC

makc wrote:On the other hand, if the guy inside the Chinese room doesn't really suffer, why not torture the room for fun?


I'm not responding particularly to makc, but sure, why not? We can break conscious stuff down into non-conscious stuff (if we disregard the possibility of some indivisible "soul"), and you can use that to argue that the Chinese room as a whole is "conscious", whatever that may mean. You can argue that the room understands Chinese. However (switching from the Chinese room to the human-simulation room), humans feel pain based on certain inputs. The fact that we feel pain is caused by specific elements of our nervous systems, and we don't like it one bit. To reiterate: even though each element of our brains may not be conscious, some of them do actually feel pain. In the room, there are essentially only two elements (the book and the man), neither of which feels pain or discomfort, and their combination doesn't either. The room itself doesn't register pain, whereas humans do.

About black boxes, @UberNube: I'm saying that there can be fundamental differences between things that display the same output when given the same input. Of course any of us could theoretically be a powerful program, but I assume that you are not, which is why I refrain from insulting people for fun. When fooling around with iGod, I have no qualms about it whatsoever. And also, no, I'm pretty confident that you don't perceive me as a black box. You probably think of me as a human communicating with you on an internet forum, which, chances are, is what I am.

For example, here's a fundamental difference: humans have thoughts and feelings, while a simulation need not. If human behavior is deterministic (as we are assuming in this discussion), then we could probably simplify the input + memory => output calculation significantly, without any need for in-between things like thoughts or feelings. For instance, we could have a massive look-up table as the program: an output for every possible input and memory state. No thoughts or feelings (or pain receptors), but the same appearance from outside the black box.
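
A toy version of that look-up-table "human", with invented entries (a real table would be astronomically large, but the principle is the same):

Code: Select all
# Toy look-up-table "human": one scripted output for every possible
# (input, memory state) pair. Entries are invented for illustration.

table = {
    ("greeting", "calm"):  "says: hello!",
    ("insult",   "calm"):  "says: that hurt!",
    ("insult",   "upset"): "says: leave me alone!",
}

def react(sensory_input, memory_state):
    return table.get((sensory_input, memory_state), "says: ...")

print(react("insult", "calm"))
# A human-looking response, with no thoughts, feelings, or pain
# receptors anywhere in the mechanism.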

If you believe inflicting "pain" on a simulation is unethical because it hurts them, then it follows that you believe insulting the Chinese room is unethical because it hurts it too, right? What if there is no room, just a person taught to pronounce the valid response to an insult? Does insulting them in Chinese hurt them? (Let's say the person doesn't even know that it is an insult.)
If it does, then what here, exactly, is getting hurt? It's clearly not the man. Is it the [man + his Chinese knowledge] system? If so, that's way too abstract for me. If it's something else, I'd like to hear it. If nothing is getting hurt, then where did my logic go wrong?

Josephine
Posts: 2142
Joined: Wed Apr 08, 2009 5:53 am UTC

Re: ethics of artificial suffering

Postby Josephine » Thu Apr 29, 2010 11:47 pm UTC

Strange Quirk wrote: What if there is no room, just a person taught to pronounce the valid response to an insult? Does insulting them in Chinese hurt them? (Let's say the person doesn't even know that it is an insult.)

Then that's just a vast macro library trained to look sentient at first glance. That's different.

UberNube
Posts: 13
Joined: Wed Feb 03, 2010 1:22 am UTC

Re: ethics of artificial suffering

Postby UberNube » Thu Apr 29, 2010 11:55 pm UTC

makc wrote:
UberNube wrote:the rights of human-level minds embedded in computers are hardly a grey area. If it's a human mind, then I'm pretty sure its physical manifestation is irrelevant.
I hope you see that this is just another version of "soul" concept.


Hardly. A soul, by lack of definition, is entirely undetectable. I am reasonably sure that, at least once we achieve a better understanding of physics, it will be possible to say: "here is a list of every particle, and its properties, which comprise the human brain." At that point it will be trivial (well, at least in principle) to have a computer simulate that human brain down to the quantum interactions. Of course, you need to include the rest of the body if you want it to remain alive, but in theory it is possible to simulate a perfect replica of a human without ever resorting to approximations or simplifications. It may not be composed of REAL cells, but it certainly is alive by every definition.

Strange Quirk wrote:About black boxes, @UberNube: I'm saying that there can be fundamental differences between things that display the same output when given the same input. Of course any of us could theoretically be a powerful program, but I assume that you are not, which is why I refrain from insulting people for fun. When fooling around with iGod, I have no qualms about it whatsoever. And also, no, I'm pretty confident that you don't perceive me as a black box. You probably think of me as a human communicating with you on an internet forum, which, chances are, is what I am.

For example, here's a fundamental difference: humans have thoughts and feelings, while a simulation need not. If human behavior is deterministic (as we are assuming in this discussion), then we could probably simplify the input + memory => output calculation significantly, without any need for in-between things like thoughts or feelings. For instance, we could have a massive look-up table as the program: an output for every possible input and memory state. No thoughts or feelings (or pain receptors), but the same appearance from outside the black box.

If you believe inflicting "pain" on a simulation is unethical because it hurts them, then it follows that you believe insulting the Chinese room is unethical because it hurts it too, right? What if there is no room, just a person taught to pronounce the valid response to an insult? Does insulting them in Chinese hurt them? (Let's say the person doesn't even know that it is an insult.)
If it does, then what here, exactly, is getting hurt? It's clearly not the man. Is it the [man + his Chinese knowledge] system? If so, that's way too abstract for me. If it's something else, I'd like to hear it. If nothing is getting hurt, then where did my logic go wrong?


First, the Chinese room is a poor example for this, since it is simply a lookup table. A lookup table can't possibly suffer even as a whole, but it also does not pass any real definition of sentience. Sentience includes the ability to respond and adapt to unexpected input, which is something a lookup table lacks by definition. It also lacks the ability to learn - another key part of sentience.

I believe that ANY system truly capable of doing everything a human is normally capable of (a psychologist would probably be more qualified to give a precise definition here, but it certainly includes memory, problem solving, communication, adaptation to the unexpected, etc.) would almost have to be considered sentient as a whole. I don't really care whether it's a computer running a physics simulation, a room full of well-trained chimps, or even a guy solving the physics equations by hand (see http://xkcd.com/505/); the end result as a whole is sentient, therefore it should have rights.


EDIT: To further my point about humans being black-boxes, I challenge anyone here to actually prove to THEMSELVES that THEY are not a simulation running on a computer in the real universe somewhere else.

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: ethics of artificial suffering

Postby makc » Thu Apr 29, 2010 11:58 pm UTC

gmalivuk wrote:arbitrariness suggests that there's nothing underlying the distinction.
I would say arbitrariness suggests that we could equally easily make the distinction elsewhere, but that is repeating myself :roll:

Strange Quirk
Posts: 30
Joined: Tue Apr 13, 2010 1:39 pm UTC

Re: ethics of artificial suffering

Postby Strange Quirk » Fri Apr 30, 2010 12:49 am UTC

UberNube wrote:To further my point about humans being black-boxes, I challenge anyone here to actually prove to THEMSELVES that THEY are not a simulation running on a computer in the real universe somewhere else.

How does that help? The fact that you can't prove that anything outside yourself exists, and is not just impulses being fed to your brain, doesn't help our discussion in the slightest. That's just the old Matrix theory, which helps no one, and yours is just as good. Edit: you can't disprove Russell's teapot, either. So?

UberNube wrote:A lookup table can't possibly suffer even as a whole, but it also does not pass any real definition of sentience. Sentience includes the ability to respond and adapt to unexpected input, which is something a lookup table lacks by definition. It also lacks the ability to learn - another key part of sentience.

It can't learn, but it can appear to. Humans have a finite memory and a finite number of possible input values, and we are assuming that all output depends entirely on the memory state and the input, so we could in theory make a look-up table if we knew the entire workings of the brain. In fact, it will learn; for certain combinations of inputs, certain "information" will appear in its memory. Learning, is it not? Well, maybe not quite, but it's indistinguishable, so by your reasoning it doesn't matter, right? I'm saying that a look-up table is enough to make an accurate simulation of a human - if, of course, we accept all of the assumptions you all have been making. But this sim will pass whatever tests you want, because it acts exactly as a human would.
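
To illustrate how a pure table can still appear to learn (entries invented): if each table row also names the next memory state, then "information" shows up in memory exactly as described, with nothing but rows ever being consulted:

Code: Select all
# Sketch of the point above: the table also dictates the next memory
# state, so the machine appears to learn while remaining a pure
# look-up table. Entries are invented for illustration.

table = {
    # (memory state, input) -> (output, next memory state)
    ("blank", "what's my name?"):     ("says: I don't know", "blank"),
    ("blank", "my name is Ada"):      ("says: nice to meet you", "knows Ada"),
    ("knows Ada", "what's my name?"): ("says: Ada", "knows Ada"),
}

state = "blank"
for event in ["what's my name?", "my name is Ada", "what's my name?"]:
    output, state = table[(state, event)]
    print(event, "->", output)
# The third answer differs from the first: outwardly indistinguishable
# from learning, though only table rows were ever consulted.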

Edit: you say a look-up table can't suffer? So the Chinese room can't suffer if the book is a look-up table for input sentence=>output sentence, but it can suffer if the book is a more complex symbol-manipulating program that the man follows? Even if input=>output is exactly the same?

Josephine
Posts: 2142
Joined: Wed Apr 08, 2009 5:53 am UTC

Re: ethics of artificial suffering

Postby Josephine » Fri Apr 30, 2010 12:57 am UTC

misattributed quote: the note on lookup tables is UberNube's, not mine.

Strange Quirk
Posts: 30
Joined: Tue Apr 13, 2010 1:39 pm UTC

Re: ethics of artificial suffering

Postby Strange Quirk » Fri Apr 30, 2010 1:05 am UTC

Ah, sorry, fixed. Don't know how that happened, although you did say pretty much the same thing with your comment on vast macro libraries.

UberNube
Posts: 13
Joined: Wed Feb 03, 2010 1:22 am UTC

Re: ethics of artificial suffering

Postby UberNube » Fri Apr 30, 2010 1:22 am UTC

Strange Quirk wrote:
UberNube wrote:To further my point about humans being black-boxes, I challenge anyone here to actually prove to THEMSELVES that THEY are not a simulation running on a computer in the real universe somewhere else.

How does that help? The fact that you can't prove that anything outside yourself exists, and is not just impulses being fed to your brain, doesn't help our discussion in the slightest. That's just the old Matrix theory, which helps no one, and yours is just as good. Edit: you can't disprove Russell's teapot, either. So?

My point was that nobody can even know if they themselves are "real" according to your definition. If two things are equivalent under every possible measurement, then I would suggest that they are the same.

Strange Quirk wrote:
UberNube wrote:A lookup table can't possibly suffer even as a whole, but it also does not pass any real definition of sentience. Sentience includes the ability to respond and adapt to unexpected input, which is something a lookup table lacks by definition. It also lacks the ability to learn - another key part of sentience.

It can't learn, but it can appear to. Humans have a finite memory, and a finite number of possible input values, and we are assuming that all output depends entirely on the memory state and the input, so we could in theory make a look-up table if we knew the entire workings of the brain. In fact, it will learn; for certain combinations of inputs, certain "information" will appear in its memory. Learning, is it not? Well, maybe not quite, but it's indistinguishable, so by your reasoning it doesn't matter, right? I'm saying that a look-up table is enough to make an accurate simulation of a human, if, of course, we accept all of the assumptions you all have been making. But this sim will pass any test you like, because it acts exactly as a human would.

Edit: you say a look-up table can't suffer? So the Chinese room can't suffer if the book is a look-up table for input sentence=>output sentence, but it can suffer if the book is a more complex symbol-manipulating program that the man follows? Even if input=>output is exactly the same?

Right, I'm too tired to argue properly against that. I'll edit this post (or make a new one) tomorrow with more content.
For now though, I agree that in fact any system composed of non-sentient parts is not sentient. The human brain is a computer like any other you have described there. If you're arguing that a lookup table doesn't have rights, then neither does a human. The logical part of my brain is inclined to agree with you. In fact, there is no such thing as sentience, or free will. Everything I am typing right now was always going to be typed; there was no possible other option. It is meaningless to punish people for crimes, since everything was entirely deterministic, but similarly they will be punished anyway, because the legal system is also part of the same deterministic universe. Our lives have no purpose, no meaning; we are simply von Neumann/Turing machines running through our programs line by line, doing everything precisely as it was always going to be done. Unfortunately we have precisely the same rights as a rock - the right to obey the laws of physics.

Anyway, sorry for getting horribly off-topic, but I'm rather tired. If a mod really feels that this isn't the place for existential rants, then feel free to delete this post.

NOTE: None of the above was sarcastic in any way. I'm not sure if it will come across that way, but it isn't meant to.

PS. This topic is really depressing to read right before bed.

Edit: please remind me never to post while asleep again. That made a lot more sense last night.
Last edited by UberNube on Fri Apr 30, 2010 8:49 am UTC, edited 1 time in total.

Strange Quirk
Posts: 30
Joined: Tue Apr 13, 2010 1:39 pm UTC

Re: ethics of artificial suffering

Postby Strange Quirk » Fri Apr 30, 2010 1:52 am UTC

Sure, get some sleep (though I have no idea where you are/what time it is). I'll be interested to hear your response. I'll respond to what you have so far anyway.

UberNube wrote:My point was that nobody can even know if they themselves are "real" according to your definition. If two things are equivalent under every possible measurement, then I would suggest that they are the same.

Sure, but a simulation and a human are not the same under every possible measurement; they are only the same in terms of input-output. If you dissect them, you will find them to be completely different. Take my previous example with the Chinese room mechanism: I'd say the version with a table, the version with a complex program, and a native Chinese speaker are very different animals, even if they all produce the same output for a given input.
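
To illustrate what I mean by "dissect" (a made-up toy, not an actual Chinese room): two functions that no input-output test can tell apart, yet are completely different inside.

[code]
# Two implementations with identical input-output behaviour over their
# whole (finite) domain, but completely different internals.

SQUARES = {n: n * n for n in range(100)}  # precomputed answers

def square_by_table(n):
    # "Dissect" this one and you find only stored answers.
    return SQUARES[n]

def square_by_computation(n):
    # "Dissect" this one and you find an actual computation.
    return n * n

# No black-box test over the domain can separate them...
assert all(square_by_table(n) == square_by_computation(n)
           for n in range(100))
# ...yet only one of them "does" multiplication, which is exactly the
# intuition the Chinese room trades on.
[/code]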

UberNube wrote:For now though, I agree that in fact any system composed of non-sentient parts is not sentient. The human brain is a computer like any other you have described there. If you're arguing that a lookup table doesn't have rights, then neither does a human. The logical part of my brain is inclined to agree with you. In fact, there is no such thing as sentience, or free will. Everything I am typing right now was always going to be typed; there was no possible other option. It is meaningless to punish people for crimes, since everything was entirely deterministic, but similarly they will be punished anyway, because the legal system is also part of the same deterministic universe. Our lives have no purpose, no meaning; we are simply von Neumann/Turing machines running through our programs line by line, doing everything precisely as it was always going to be done. Unfortunately we have precisely the same rights as a rock - the right to obey the laws of physics.


What? I disagree with pretty much all of that. If there's no "soul", then we are entirely made up of atoms, which are not sentient, yet we are no doubt sentient. I think sentience can be built from non-sentient parts, although a "soul" would solve lots of problems. And humans are sentient pretty much by definition, since the definition is what we use to separate us from the animals, jellyfish, and plants.
My whole point was about the fundamental difference between a Chinese room and a human, and the fact that one is sentient and the other isn't. While I can't pinpoint the aspect that causes one to be sentient, I have been giving various arguments for why a simulation made with current tech shouldn't have rights while a human should. Personally (especially after debating in this thread), I don't like the idea of no free will, because it causes various moral problems, but as I said, that's irrelevant to this discussion. I don't agree that punishment is meaningless. I also disagree with the claim that we have the same rights as a rock, and I'm pretty sure you do too. If you kick rocks for no reason but not people, then clearly you have morals which separate the two. If not for your NOTE, I would have assumed you were being sarcastic, and I still can't shake the feeling that you were.

But yeah, let's get back on topic.


Um, yeah. Edited.
Last edited by Strange Quirk on Fri Apr 30, 2010 5:39 pm UTC, edited 1 time in total.

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 26765
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: ethics of artificial suffering

Postby gmalivuk » Fri Apr 30, 2010 1:57 am UTC

Of course a collection of non-sentient things can be sentient, just like a collection of non-alive things can be alive and a collection of non-liquid things can be liquid. It's an emergent property.

And sentience has to do with sensing things, not with having free will (which is usually an incoherent concept anyway, and fit for a different thread).
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

User avatar
skeptical scientist
closed-minded spiritualist
Posts: 6142
Joined: Tue Nov 28, 2006 6:09 am UTC
Location: San Francisco

Re: ethics of artificial suffering

Postby skeptical scientist » Fri Apr 30, 2010 1:01 pm UTC

Strange Quirk wrote:
UberNube wrote:A lookup table can't possibly suffer even as a whole, but it also does not pass any real definition of sentience. Sentience includes the ability to respond and adapt to unexpected input, which is something a lookup table lacks by definition. It also lacks the ability to learn - another key part of sentience.

It can't learn, but it can appear to. Humans have a finite memory, and a finite number of possible input values,

Yeah, sort of. Since a human lifetime is limited to something like 120 years, there are only a finite number of possible inputs that can be received in that time, to within the ability of human senses to distinguish between different inputs.

Strange Quirk wrote:
and we are assuming that all output depends entirely on the memory state and the input, so we could in theory make a look-up table if we knew the entire workings of the brain.

Again, sort of. For a very theoretical value of "make". Even if our hypothetical human only lives 80 years and only gets one bit of input every second*, there are \(2^{2.5\times10^9}\) different possible life input histories. To put all of these into a lookup table would require a table with \(2^{2.5\times10^9}\) entries, each storing around \(2.5\times10^9\) bits of data. So yes, you can build such a lookup table, provided you have more terabyte hard drives than there are atoms in the universe**. Trying to simulate an intelligent being with a lookup table is really a pretty ludicrous notion; in fact, there's every reason to believe that a reasonable facsimile of intelligence (say, one which could pass a Turing test), in order to exist in our universe, would have to actually be intelligent.

*Obviously a gross underestimate.
**This is roughly like describing the number of possible permutations of the alphabet as "millions".
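
For anyone who wants to check the arithmetic, a quick back-of-the-envelope script under the same one-bit-per-second, 80-year assumptions:

[code]
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600
bits_per_lifetime = 80 * SECONDS_PER_YEAR      # one bit/second for 80 years
print(f"input bits per lifetime: {bits_per_lifetime:.2e}")    # ~2.5e9

# One table entry per possible input history: 2**bits_per_lifetime.
# That number has about bits_per_lifetime * log10(2) decimal digits.
digits = bits_per_lifetime * math.log10(2)
print(f"table entries: a number with ~{digits:.2e} digits")   # ~7.6e8 digits

# Atoms in the observable universe: ~10^80, an 81-digit number.
# So "more hard drives than atoms" is itself a wild understatement.
[/code]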
I'm looking forward to the day when the SNES emulator on my computer works by emulating the elementary particles in an actual, physical box with Nintendo stamped on the side.

"With math, all things are possible." —Rebecca Watson

Ubik
Posts: 1016
Joined: Thu Oct 18, 2007 3:43 pm UTC

Re: ethics of artificial suffering

Postby Ubik » Fri Apr 30, 2010 1:37 pm UTC

One more point about the look-up table argument: some sort of simulation would need to be run for each entry of the table to create it in the first place, so you would need to simulate a huge number of reactions to various stimuli. It could be thought of as a system where the consciousness (or huge numbers of them, actually) is behind a proxy.
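
As a sketch of that (with a trivial stand-in for the simulation, since nobody has an actual brain simulator to plug in):

[code]
from itertools import product

def simulate(history):
    # Stand-in for a full simulated mind reacting to an input history;
    # this toy just returns something deterministic.
    return sum(history) % 2

N_BITS = 3  # a real lifetime would be ~2.5e9 bits, not 3

# Building the table *is* running the simulation once per possible history:
table = {h: simulate(h) for h in product((0, 1), repeat=N_BITS)}

# Afterwards the table answers without simulating anything, but every
# answer it gives was produced by a simulation at build time - a proxy
# in front of the simulated consciousness(es), not a replacement for them.
assert table[(1, 0, 1)] == simulate((1, 0, 1))
[/code]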

