Human emulator and ethics?

For the serious discussion of weighty matters and worldly issues. No off-topic posts allowed.

Moderators: Azrael, Moderators General, Prelates

User avatar
negatron
Posts: 294
Joined: Thu Apr 24, 2008 10:20 pm UTC

Re: Human emulator and ethics?

Postby negatron » Fri May 08, 2009 8:25 pm UTC

EnderSword wrote:You'd likely at least want the choice. I'd take the 20 years but not the 3 weeks. And what if it was 2 days whenever they felt like it, despite other plans you had?

Any contrived scenario can be imagined where you're being paused needlessly; my only point was that it is not equivalent to murder, any more so than punching a guy unconscious is. It could potentially be a close equivalent, but it is clearly not the same thing.

EnderSword wrote:if you endowed it with the same sentience as you gave the original being you'd be making 50,000 people at the ball game

The more the merrier. We will be enjoying virtual ball games eventually and I doubt many are going to complain about it.

EnderSword wrote:and provide for each of whoever you made for that and environments for all of them

This is the beautiful part. Once something is made once in the digital domain, it's made for all time and in practically any desired quantity.

EnderSword wrote:You'd essentially have to create them a Matrix, and then you're into the ethics of all that

The ethics of creating a universe without suffering or mortality, and with infinite material wealth? The only ethical consideration here is why such a place doesn't exist already.

EnderSword wrote:I'd be against being deceived into not knowing that's what I was.

I don't know why anyone would care to deceive you. Perhaps they may think you wouldn't like the idea, though I can't imagine why. However if the being's only known existence was virtual (from our perspective), there would be no deception. They are what they perceive and they know what that is.

If our 'creators' told us we are running on some Pentium 2000 machine, it would not change our knowledge of what we are, it would merely change our knowledge of how we came to exist.

Quote sniping: You can do better.

-Az
I shouldn't say anything bad about calculus, but I will - Gilbert Strang

GoC
Posts: 336
Joined: Mon Nov 24, 2008 10:35 pm UTC

Re: Human emulator and ethics?

Postby GoC » Sat May 09, 2009 1:55 am UTC

Telchar wrote:I would say everything. The only key is knowing the difference between the simulation and reality. However, once you've determined it is a simulation, it has no ethical standing, just like if I destroyed the planet in a climate simulation I'm not an evil scientist.

So anyone who "uploads" their consciousness instantly loses all human rights?
Belial wrote:I'm just being a dick. It happens.

User avatar
Telchar
That's Admiral 'The Hulk' Ackbar, to you sir
Posts: 1937
Joined: Sat Apr 05, 2008 9:06 pm UTC
Location: Cynicistia

Re: Human emulator and ethics?

Postby Telchar » Tue May 12, 2009 8:48 am UTC

negatron wrote:
If you draw a distinction between simulation and reality, and you've determined something is a simulation and NOT reality, then you imply that reality is not a simulation when such an implication is unsound, as the distinction here isn't even known, much less described. As a result the initial determination too is unsound. It's quite easy to disqualify any premise which attempts to exclude from reality a computer process.

The best you can suggest, while being logically coherent, is to say a computer process is not a first order process, or maybe a natural process. It cannot be said a computer process is not real.


I also assume that when I set something on a desk it will not go through the desk, and that when I step outside my house I will not fall through the earth. These are basic assumptions, like assuming I am not a simulation and that everyone I meet is not a simulation, that allow humanity to function. We have to make these assumptions, or everything else falls apart.

And even if I were a simulation, the thing on the computer is a simulation of a simulation. In some bizarre simulatory hierarchy, reality then becomes subjective. It's a perfectly rational position to hold that the state in which I reside is reality, at least for me. I can only base decisions on information I have, and I can't have information about some vague uberreality which I have not experienced.

So if that xkcd comic with god being a person in an endless desert simulating a universe was true, then you can say with certainty that our lives are futile and have no value at all? We can live our lives just fine regardless of the medium we run on. Let me put it this way: How can you tell those two things apart? What makes our world more real than a simulated one? In fact using the words "reality" and "simulation" is misleading if they are impossible to distinguish.


The xkcd comic is a bad example because, as far as I know, any arrangement of rocks in a desert won't create anything more than rocks in a desert. Again, it is true we could be a simulation, but you can't base decisions on information you don't have and can't conceivably acquire.

GoC wrote:So anyone who "uploads" their conciousness instantly loses all human rights?


Again, if we go back to the metric from my first post, an uploaded consciousness (and this is waaaaay off topic but....) would be intelligent, so it falls into the "Can't do anything we want" category. If I were to simulate an Internal Combustion Engine on my computer, that doesn't mean my computer now contains an ICE, no matter how good the simulation is.
Zamfir wrote:Yeah, that's a good point. Everyone is all about presumption of innocence in rape threads. But when Mexican drug lords build APCs to carry their henchmen around, we immediately jump to criminal conclusions without hard evidence.

User avatar
headprogrammingczar
Posts: 3072
Joined: Mon Oct 22, 2007 5:28 pm UTC
Location: Beaming you up

Re: Human emulator and ethics?

Postby headprogrammingczar » Tue May 12, 2009 4:37 pm UTC

Telchar wrote:Again, if we go back to the metric from my first post, an uploaded consciousness (and this is waaaaay off topic but....) would be intelligent, so it falls into the "Can't do anything we want" category. If I were to simulate an Internal Combustion Engine on my computer, that doesn't mean my computer now contains an ICE, no matter how good the simulation is.

This logic is faulty. An ICE differs from a consciousness in that one is a physical construct and the other is an abstraction. If I write a detailed thesis on the efficiency of an ICE and a Carnot engine, and on how they both increase the entropy of the universe, it doesn't matter whether I explain it to someone else by talking, write it down, type it, or encode it in desert rocks. It is still a detailed thesis, regardless of "how realistic your simulation of pen and paper is".
<quintopia> You're not crazy. you're the goddamn headprogrammingspock!
<Weeks> You're the goddamn headprogrammingspock!
<Cheese> I love you

User avatar
negatron
Posts: 294
Joined: Thu Apr 24, 2008 10:20 pm UTC

Re: Human emulator and ethics?

Postby negatron » Tue May 12, 2009 5:32 pm UTC

Telchar wrote:If I were to simulate an Internal Combustion Engine on my computer, that doesn't mean my computer now contains an ICE no matter how good the simulation is.

In what sense? If it's built like a combustion engine and functions like a combustion engine, it's a combustion engine.
I shouldn't say anything bad about calculus, but I will - Gilbert Strang

User avatar
Naurgul
Posts: 623
Joined: Mon Jun 16, 2008 10:50 am UTC
Location: Amsterdam, The Netherlands
Contact:

Re: Human emulator and ethics?

Postby Naurgul » Tue May 12, 2009 7:03 pm UTC

I think we can construct a generalised Turing-test to resolve this problem: A judge can conduct experiments with two systems, one is "real" and the other "simulated". The judge can input the desired experiments and measure results but he cannot directly interact with the systems. If the judge cannot reliably tell the real system from the simulated one, then the simulation is said to have passed the test.

Now, if a simulation can pass this test, I think it is safe to say that it is as real as the physical system upon which it is based. Anyone disagree with this?
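Under the (strong) assumption that both systems can be driven as black boxes with comparable, measurable outputs, the protocol above reads like a property-based equivalence test. A toy Python sketch, with all names and the spring example hypothetical:

```python
import random

def judge(system_a, system_b, experiments, tolerance=1e-9):
    """Generalised Turing test: run the same experiments on two
    black-box systems and compare measured results. The judge
    never looks inside either box, only at the outputs."""
    for stimulus in experiments:
        if abs(system_a(stimulus) - system_b(stimulus)) > tolerance:
            return "distinguishable"
    return "indistinguishable"

# Hypothetical stand-ins: a "real" spring and a "simulated" one.
def real_spring(x):
    return -2.5 * x   # measured restoring force, k = 2.5

def simulated_spring(x):
    return -2.5 * x   # simulation of the same physics

experiments = [random.uniform(-10, 10) for _ in range(1000)]
print(judge(real_spring, simulated_spring, experiments))  # indistinguishable
```

If no experiment the judge can devise separates the two, the test is passed; a system with even slightly different behavior (say, a spring constant of 2.4) fails on almost any batch of random stimuli.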
Praised be the nightmare, which reveals to us that we have the power to create hell.

User avatar
hideki101
Posts: 342
Joined: Wed May 28, 2008 5:50 pm UTC
Location: everywhere and nowhere

Re: Human emulator and ethics?

Postby hideki101 » Tue May 12, 2009 10:55 pm UTC

Naurgul wrote:I think we can construct a generalised Turing-test to resolve this problem: A judge can conduct experiments with two systems, one is "real" and the other "simulated". The judge can input the desired experiments and measure results but he cannot directly interact with the systems. If the judge cannot reliably tell the real system from the simulated one, then the simulation is said to have passed the test.

Now, if a simulation can pass this test, I think it is safe to say that it is as real as the physical system upon which it is based. Anyone disagree with this?

The problem is that a Turing test is not entirely accurate and can be easily fooled. The Wikipedia article has numerous references on why the Turing test is unreliable. One of my objections is that it doesn't cover all possible human behaviors: a lot of human interaction is nonverbal, so a computer that passed a Turing test through text, or even voice alone, is not necessarily human. The "real" person in the test could also fake their responses so they feel robotic. And to differentiate between a human and a computer, the judge would need to shed as many of their preconceptions about how computers respond as possible. Logically, the best judge of a Turing test is a computer itself, because a computer, programmed with just the (hypothetical) programming needed to differentiate between a human and a program, would experience none of the self-doubt, second-guessing, or internal bias that a human judge would have. However, to program that computer, you would need a comprehensive algorithm of human behavior, which we currently don't have. Thus a reliable Turing test is currently impossible.

Albert Einstein wrote:"Two things are infinite: the universe and human stupidity; and I'm not sure about the universe."

Goplat
Posts: 490
Joined: Sun Mar 04, 2007 11:41 pm UTC

Re: Human emulator and ethics?

Postby Goplat » Wed May 13, 2009 2:28 am UTC

negatron wrote:In what sense? If it's built like a combustion engine and functions like a combustion engine, it's a combustion engine.
But it isn't built like a combustion engine (it's built like a computer program), and it certainly doesn't function like one. You can't power your car with a simulated engine. The correct statement would be "If it's built like a combustion engine inside the simulation and functions like a combustion engine inside the simulation, it's a combustion engine inside the simulation". Which is clearly distinct from being a combustion engine in real life.

Other properties work the same way: just because something is "a human inside a simulation", doesn't mean that it's "a human", so you can't say hurting it is "immoral".

User avatar
negatron
Posts: 294
Joined: Thu Apr 24, 2008 10:20 pm UTC

Re: Human emulator and ethics?

Postby negatron » Wed May 13, 2009 3:42 am UTC

Goplat wrote:But it isn't built like a combustion engine (it's built like a computer program)

How must a combustion engine be built for it to qualify as a combustion engine? So long as the end result of whichever process is used to create it results in a combustion engine, you have a combustion engine.

Goplat wrote:You can't power your car with a simulated engine.

You can't power your simulated car with a non-simulated engine either. Goes both ways.

But I can power my 'simulated' car once I've built it in a similar manner.

Goplat wrote:"If it's built like a combustion engine inside the simulation and functions like a combustion engine inside the simulation, it's a combustion engine inside the simulation".

I never said anything different. Note the qualifying point: "it's a combustion engine".

Goplat wrote:Other properties work the same way: just because something is "a human inside a simulation", doesn't mean that it's "a human",

It's a human inside a simulation, so yes it does mean it's a human. Not a human fit to your specifications perhaps, but a human nevertheless.
I shouldn't say anything bad about calculus, but I will - Gilbert Strang

armandowall
Posts: 23
Joined: Mon Jul 23, 2007 3:24 pm UTC
Location: Massachusetts, USA
Contact:

Re: Human emulator and ethics?

Postby armandowall » Wed May 13, 2009 7:00 am UTC

Goplat wrote:Other properties work the same way: just because something is "a human inside a simulation", doesn't mean that it's "a human", so you can't say hurting it is "immoral".


Adding more fuel to the fire here:

Suppose you create a device that allows you to communicate with the human simulated inside the computer, or in the rock-arrangement representation, xkcd-style, mentioned earlier. Let's say the communication device has the shape of a cellphone and works like one.

Since we're talking about a human simulation down to the level of molecules, you would expect to hear a voice through the "intercom" that sounds perfectly human. You talk to this "person", make jokes, and you hear "him/her" laugh and even come up with some witty comment. Now, tell the simulated person that he/she is about to be killed (by stopping the simulation, for example. Let's set aside the details on how to make "him/her" believe you). Observe the changes in his/her voice. Ask about the feelings. I'm sure you'll hear words like "terrified", "anxious", "panicking", "crying", even "no, please!" or "mercy!"

Would you still consider that simulated human as just a simulation?

Once again, I'm not taking any sides (yet :wink: )... I'm just curious about the answers and the whole discussion.

Edit: Typos.

Goplat
Posts: 490
Joined: Sun Mar 04, 2007 11:41 pm UTC

Re: Human emulator and ethics?

Postby Goplat » Wed May 13, 2009 5:13 pm UTC

armandowall wrote:Would you still consider that simulated human as just a simulation?
Yes, absolutely. Adding extra bells and whistles doesn't change the fact that saying one state of a simulation is more ethical than another state is like saying 0100010111010010 is more ethical than 1001011010010011. It's just nonsense.

User avatar
Yakk
Poster with most posts but no title.
Posts: 11128
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Human emulator and ethics?

Postby Yakk » Wed May 13, 2009 5:37 pm UTC

And if it turns out that you are a simulation, Goplat?

Ie, you are nothing but 0s and 1s, being shuffled around in some computer.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

User avatar
Telchar
That's Admiral 'The Hulk' Ackbar, to you sir
Posts: 1937
Joined: Sat Apr 05, 2008 9:06 pm UTC
Location: Cynicistia

Re: Human emulator and ethics?

Postby Telchar » Wed May 13, 2009 5:51 pm UTC

He is obviously intelligent, and therefore subject to normal rules about moral behavior. So is the "intelligence" on the other side of the "cellphone that is not a cellphone." These are easy questions. You are assuming that a perfect software simulation of a human is automatically intelligent, when you cannot make that assumption.

In regards to the Turing test, I've never thought human gullibility is a good measure for anything. Someone being able to be fooled by a computer doesn't prove the computer is intelligent, only that the person can be fooled by a computer.
Zamfir wrote:Yeah, that's a good point. Everyone is all about presumption of innocence in rape threads. But when Mexican drug lords build APCs to carry their henchmen around, we immediately jump to criminal conclusions without hard evidence.

User avatar
Naurgul
Posts: 623
Joined: Mon Jun 16, 2008 10:50 am UTC
Location: Amsterdam, The Netherlands
Contact:

Re: Human emulator and ethics?

Postby Naurgul » Wed May 13, 2009 6:05 pm UTC

I think my proposed test was misunderstood. I said we can use a "generalised Turing test", the judge of course being an expert in the field the systems are about. Seeing that the judge can conduct any experiments he wants on the systems, not just a human conversation via text, I'd say that hideki101's arguments against the validity of the test are irrelevant. As such, the only problem would be the difference between the human scientific model (which is what gets simulated) and the mechanisms of the physical system. That would be the limit of the human expert judge, although, as I said in a previous post, that is possibly also the limit for any human understanding in general.
Praised be the nightmare, which reveals to us that we have the power to create hell.

User avatar
hideki101
Posts: 342
Joined: Wed May 28, 2008 5:50 pm UTC
Location: everywhere and nowhere

Re: Human emulator and ethics?

Postby hideki101 » Wed May 13, 2009 9:54 pm UTC

Naurgul wrote:I think my proposed test was misunderstood. I said we can use a "generalised Turing test", the judge of course being an expert in the field the systems are about. Seeing that the judge can conduct any experiments he wants on the systems, not just a human conversation via text, I'd say that hideki101's arguments against the validity of the test are irrelevant. As such, the only problem would be the difference between the human scientific model (which is what gets simulated) and the mechanisms of the physical system. That would be the limit of the human expert judge, although, as I said in a previous post, that is possibly also the limit for any human understanding in general.

Oh, wait. So this is more like the Chinese room, or a black-box problem? (e.g. you put in a hydrocarbon and get out CO2 and H2O plus a certain amount of energy) Then whether it actually combusted inside the box or went through some other reaction, it is completely equivalent, and as such should be treated as something with a fire inside it. Hm, I guess it could work, but only for systems of particles that behave in a regular pattern and as such are identifiable. Identical information from two systems, assuming you don't actually know the contents of the systems themselves, should be treated as identical.
Albert Einstein wrote:"Two things are infinite: the universe and human stupidity; and I'm not sure about the universe."

GoC
Posts: 336
Joined: Mon Nov 24, 2008 10:35 pm UTC

Re: Human emulator and ethics?

Postby GoC » Thu May 14, 2009 1:47 am UTC

Telchar wrote:
GoC wrote:So anyone who "uploads" their conciousness instantly loses all human rights?


Again, if we go back to the metric from my first post, an uploaded consciousness (and this is waaaaay off topic but....) would be intelligent, so it falls into the "Can't do anything we want" category. If I were to simulate an Internal Combustion Engine on my computer, that doesn't mean my computer now contains an ICE, no matter how good the simulation is.

But the simulation is intelligent too, isn't it? If it's a perfect simulation and a human is intelligent, then the simulation is intelligent by definition. And by declaring uploaded consciousnesses to be intelligent, you've declared that a series of 1s and 0s can have intelligence.
Belial wrote:I'm just being a dick. It happens.

Iv
Posts: 1207
Joined: Thu Sep 13, 2007 1:08 pm UTC
Location: Lyon, France

Re: Human emulator and ethics?

Postby Iv » Thu May 14, 2009 9:37 am UTC

Please... The AI vs. human, the virtual vs. real, the simulated feeling vs. the 'true' feeling is a debate that has happened so many times that it should be considered a troll by now: it all boils down to what you believe forms the core of emotions, and there are undispelled theories on each side of the debate....

GoC
Posts: 336
Joined: Mon Nov 24, 2008 10:35 pm UTC

Re: Human emulator and ethics?

Postby GoC » Thu May 14, 2009 5:37 pm UTC

Iv wrote:It all boils down to what you believe forms the core of emotions

Why are emotions relevant? Most animals have them and we don't consider them sapient.
Belial wrote:I'm just being a dick. It happens.

Iv
Posts: 1207
Joined: Thu Sep 13, 2007 1:08 pm UTC
Location: Lyon, France

Re: Human emulator and ethics?

Postby Iv » Fri May 15, 2009 8:02 am UTC

This is not an argument about the sapience of an AI but about its "authenticity". It is about differentiating a "real mind" from a "simulated mind". As there are intelligence tests and challenges of know-how, it is easy to point out that none of these challenges is infeasible for an AI. Therefore the argument of the people defending the "AIs are not real minds" position is often that intelligence (or sapience, if we have compatible definitions, but I don't trust such a slippery word) is authentic in human beings but mimicked in AIs. As objective tests give the same results, and as intelligence, even mimicked, is still intelligence, they usually say that the self-consciousness and the feelings of an AI are purely artificial and are just acting.

User avatar
arbivark
Posts: 531
Joined: Wed May 23, 2007 5:29 am UTC

Re: Human emulator and ethics?

Postby arbivark » Wed Jun 03, 2009 1:15 am UTC

interesting topic.
i'm reading this on a laptop at a clinic where i'm spending a month as a paid human research subject.
it's important to people in my profession that experimental subjects be treated well.
(i'm having a conflict with a place i used to do studies that didn't treat me well, and it's currently under review by the IRB, the review board the experimenter hired to monitor itself.)
treating experimental subjects well is important not just for the subject, but for the researcher too.
it's a short downward spiral to Dr. Mengele otherwise.
http://www.craphound.com/down is Cory Doctorow's first online novel, Down and Out in the Magic Kingdom, which deals with the ethics of simulated persons, backups, and such. it's also a treatise on economics, and a very good read.

User avatar
Telchar
That's Admiral 'The Hulk' Ackbar, to you sir
Posts: 1937
Joined: Sat Apr 05, 2008 9:06 pm UTC
Location: Cynicistia

Re: Human emulator and ethics?

Postby Telchar » Wed Jun 03, 2009 2:36 am UTC

Iv wrote:This is not an argument about the sapience of an AI but about its "authenticity". It is about differentiating a "real mind" from a "simulated mind". As there are intelligence tests and challenges of know-how, it is easy to point out that none of these challenges is infeasible for an AI. Therefore the argument of the people defending the "AIs are not real minds" position is often that intelligence (or sapience, if we have compatible definitions, but I don't trust such a slippery word) is authentic in human beings but mimicked in AIs. As objective tests give the same results, and as intelligence, even mimicked, is still intelligence, they usually say that the self-consciousness and the feelings of an AI are purely artificial and are just acting.


Can you direct me to an objective test for an AI? I'm serious, as I haven't seen one yet.
Zamfir wrote:Yeah, that's a good point. Everyone is all about presumption of innocence in rape threads. But when Mexican drug lords build APCs to carry their henchmen around, we immediately jump to criminal conclusions without hard evidence.

Iv
Posts: 1207
Joined: Thu Sep 13, 2007 1:08 pm UTC
Location: Lyon, France

Re: Human emulator and ethics?

Postby Iv » Wed Jun 03, 2009 7:20 am UTC

What do you mean, Telchar? What do you want to test? The Turing test is one; solving a known math problem autonomously is another, testing another ability. What do you mean by "test for an AI"?

User avatar
Telchar
That's Admiral 'The Hulk' Ackbar, to you sir
Posts: 1937
Joined: Sat Apr 05, 2008 9:06 pm UTC
Location: Cynicistia

Re: Human emulator and ethics?

Postby Telchar » Thu Jun 04, 2009 7:58 am UTC

You said that objective tests give the same result, but we have no objective tests to measure the capability or existence of an artificial intelligence. The Turing test is as far from objective as possible. If you could point me to the math-problem one, I could take a look.

However, I don't know how we test for artificial intelligence when we don't know what homegrown intelligence is, or how to define it.
Zamfir wrote:Yeah, that's a good point. Everyone is all about presumption of innocence in rape threads. But when Mexican drug lords build APCs to carry their henchmen around, we immediately jump to criminal conclusions without hard evidence.

thehivemind5
Posts: 6
Joined: Tue Apr 28, 2009 3:32 am UTC

Re: Human emulator and ethics?

Postby thehivemind5 » Fri Jun 12, 2009 8:08 pm UTC

I think the example about the internal combustion engine needs to be cleared up a little bit.

If you have a perfect simulation of an internal combustion engine in your computer, then no, that is not an internal combustion engine on this level of "reality" at least.

If you have a computer which simulates an internal combustion engine, and then routes the outputs of the simulation to the outside (through pistons or something on the outside of the computer) in such a way that you could plug the system into your car and not be able to tell the difference, then this is still probably not *technically* an internal combustion engine, but in all the important respects it is (at the very least it falls into the more general class of "engine").

Our human simulator is much more like the second example than the first. It may not *technically* be human, but it matches in all the important ways, and, I feel, at the very least falls into the general class of sentient beings who probably deserve rights.

Over the course of history we've learned to accept people of different tribes, religions, races, and genders. Learning to accept these AIs as people will just be another step along that path as we conquer "speciesism".

Goplat
Posts: 490
Joined: Sun Mar 04, 2007 11:41 pm UTC

Re: Human emulator and ethics?

Postby Goplat » Sun Jun 14, 2009 9:04 pm UTC

thehivemind5 wrote:If you have a computer which simulates an internal combustion engine, and then routes the outputs of the simulation to the outside (through pistons or something on the outside of the computer) in such a way that you could plug the system into your car and not be able to tell the difference
That's impossible: a computer can't produce energy, like the engine does by burning fuel. Just as a computer can't produce consciousness, like the brain does somehow.

Our human simulator is much more like the second example than the first. It may not *technically* be human, but it matches in all the important ways, and, I feel, at the very least falls into the general class of sentient beings who probably deserve rights.
Okay, so in your view the moral status of something depends only on how it acts, rather than how it's built. This means that if a human is injured such that they cannot talk or move, it's okay to torture them because they can't ask you not to. To me, that view feels pretty repugnant.

User avatar
negatron
Posts: 294
Joined: Thu Apr 24, 2008 10:20 pm UTC

Re: Human emulator and ethics?

Postby negatron » Mon Jun 15, 2009 3:56 am UTC

Goplat wrote:Okay, so in your view the moral status of something depends only on how it acts, rather than how it's built.

A creature's behavior is a reflection of its thought processes. If it is otherwise unknown what it is thinking or feeling, this can be approximately determined from known factors, particularly by inferring state of mind from behavior. Also, it is known how it is built: in this case, it is a reconstruction of the human brain. With this fact known, and with behavior evidently exhibiting the function of a natural brain, any definitive conclusion about an absolute lack of a human trait, which is evidently present, can only be ideological.



Goplat wrote:This means that if a human is injured such that they cannot talk or move, it's okay to torture them because they can't ask you not to. To me, that view feels pretty repugnant.

Your conceited misinterpretation is pretty repugnant. Failing the ability to communicate does not suggest a lack of awareness, as both you and he well know.

Convincing communications of wants are suggestive of their existence. A lack of the ability to communicate suggests nothing about the desire to communicate. Clearly you're aware of this too, as it is self-evident.

thehivemind5 made no suggestion as to how the second case should be interpreted, only the first. I suspect that if you had asked him about it rather than making the absurd assumption, you would find he has no such inclinations.
I shouldn't say anything bad about calculus, but I will - Gilbert Strang

Iv
Posts: 1207
Joined: Thu Sep 13, 2007 1:08 pm UTC
Location: Lyon, France

Re: Human emulator and ethics?

Postby Iv » Mon Jun 15, 2009 8:03 am UTC

Goplat wrote:Just as a computer can't produce consciousness, like the brain does somehow.

Isn't that a bit of a gratuitous affirmation?

thehivemind5
Posts: 6
Joined: Tue Apr 28, 2009 3:32 am UTC

Re: Human emulator and ethics?

Postby thehivemind5 » Tue Jun 16, 2009 8:57 pm UTC

To give Goplat some credit, the analogy is a little weird, but I feel it at least sort of gets my point across: how a thing functions is often more important than how it is constructed.

The question of brain-dead or comatose people is actually an interesting one. As I'm sure we've all read, there have been cases in the not-so-distant past dealing with this exact issue. Is it ok to kill someone (or let someone die, if that's too harsh) when they cease being able to communicate and think? I am certainly not advocating torture, and I appreciate your defence of my statement, negatron, but there is at least some segment of the population which believes that this kind of "euthanasia" is ok, and the method (often starvation) isn't the most humane we could pick out of a lineup.

negatron wrote:A creature's behavior is a reflection of its thought processes. If it is otherwise unknown what it is thinking or feeling, this can be approximately determined from known factors, particularly by inferring state of mind from behavior. Also, it is known how it is built: in this case, it is a reconstruction of the human brain. With this fact known, and with behavior evidently exhibiting the function of a natural brain, any definitive conclusion about an absolute lack of a human trait, which is evidently present, can only be ideological.


This makes a lot of sense to me. If anyone can counter this in a clear and logical way, I'd be glad to hear it, but if you believe this to be true and accept that you're making an ideological distinction, you open yourself up to a number of arbitrary divisions and, subsequently, contradictions.

If construction is more important than function in determining the "value" of a thing, then we need ways to determine which structures have value, and thus rights, and which don't. I think it's relatively easy to categorize functions: things which function in a "sentient" way are able to create, contribute, and improve other things, functions which are generally seen as good, and so we assign value to sentience. What sets the brain apart structurally such that it has rights and a computer does not? And remember, we can't include function here at all. Under this model, a table can't even have value for holding things up; that's its function. At this point, you have to be very arbitrary (i.e. only brains have value, and just because we say so), or maybe less arbitrary (i.e. biological things have value). At that less arbitrary stage, we have to ask why we kill bacteria whenever we want.

I guess that's probably oversimplifying, as you could assign value to both function and structure, but I feel the structure side of that equation is always going to amount to arbitrarily placing this particular incarnation of this particular evolution of what was once a soup of DNA on a pedestal.

Edit: For those who are interested, there is conveniently enough a topic dealing with whether or not we should sustain the brain dead/vegetative:

http://echochamber.me/viewtopic.php?f=8&t=38336
Last edited by thehivemind5 on Tue Jun 16, 2009 11:38 pm UTC, edited 1 time in total.

User avatar
Griffin
Posts: 1363
Joined: Sun Apr 08, 2007 7:46 am UTC

Re: Human emulator and ethics?

Postby Griffin » Tue Jun 16, 2009 11:23 pm UTC

When you emulate a human being, congratulations, you are creating a new world, a new universe, a new level of reality. The ethics we know and use and rely on no longer apply (except in cases where it is clearly interacting with our reality).

Which just leaves the question...

What sort of God do you want to be?


...On another note, this would probably make a damned good sci-fi story... hmm...
Bdthemag: "I don't always GM, but when I do I prefer to put my player's in situations that include pain and torture. Stay creative my friends."

Bayobeasts - the Pokemon: Orthoclase project.

Iv
Posts: 1207
Joined: Thu Sep 13, 2007 1:08 pm UTC
Location: Lyon, France

Re: Human emulator and ethics?

Postby Iv » Wed Jun 17, 2009 7:07 am UTC

Griffin wrote:...On another note, this would probably make a damned good sci-fi story... hmm...

Fifty years ago, yes. To write one today with that theme, you'd have to find a goddamn good new twist :-)

User avatar
Griffin
Posts: 1363
Joined: Sun Apr 08, 2007 7:46 am UTC

Re: Human emulator and ethics?

Postby Griffin » Wed Jun 17, 2009 9:08 pm UTC

Iv wrote:Fifty years ago, yes. To write one today with that theme, you'd have to find a goddamn good new twist


Except I haven't actually seen one based around a guy living in a simulation just to be killed or experimented on over and over and over again. Have any examples? I would like to check them out... Eh, I'm sure Asimov or Analog has done it at least once...
Bdthemag: "I don't always GM, but when I do I prefer to put my player's in situations that include pain and torture. Stay creative my friends."

Bayobeasts - the Pokemon: Orthoclase project.

User avatar
Dezign
Posts: 88
Joined: Sun Oct 12, 2008 3:03 am UTC
Location: North of the Land o' Fruits 'n' Nuts

Re: Human emulator and ethics?

Postby Dezign » Wed Jun 17, 2009 10:31 pm UTC

Griffin wrote:Except I haven't actually seen one based around a guy, living in a simulation just to be killed or experimented on over and over and over again. Have any examples?

Where Am I? by D.C.D. is of relevance.

It's not precisely what Griffin asked for, but it's a pretty good fictional presentation/story by Dennett right on top of this topic. The speaker details his history as a person turned into a brain in a vat, remotely reintegrated with its original body, in order to consider the implications of having a believable virtual mind in copies, with an emphasis on the assumption that functionally identical inputs to functionally identically constructed machines lead to functionally identical output processes. Near the end, it confronts the ethics of a human emulator with a striking narrative development. It also includes some reflections on the impossibility of settling the matter.

Caveats: It is a story written by the guy who coined the term "intuition pump", perhaps to create cynicism about the misuse of crafty parables in pop philosophy, and the setting's conceit begs the question of a physicalist thesis of cognition. It's not a direct challenge to the possibility of artificial intelligence; it's more of a musing on the ethics of the matter. This poster possesses a bias in favor of much of Dennett's work.

Edited to add content I mistakenly left out of the final version of this post.

Griffin wrote:When you emulate a human being, congratulations, you are creating a new world, a new universe, a new level of reality. The ethics we know and use and rely on no longer apply (except in cases where it is clearly interacting with our reality).

I propose that any interaction with our reality whatsoever means we should consider our ethics relevant. What if a simulated human from an "alternate" reality, supposedly designed to be totally secluded from the real world except through a black-box manipulator controlled by experimenters, were liberated into our society (by, say, a strong-headed philosophy undergrad)? What about the experimenters' consciences? What about spontaneous empathy, seen by some ethicists as the beginning of morality?

Assuming a perfect replication of humans in every way except those ways by which we choose to interact with them, I suspect some people would support these simulated humans' status as ethical recipients. If we walled a non-simulated human off in a lab, separating him from the real world in every way except through the experimenters' manipulations, one's intuition might be swayed differently; the key is in how believable we think these simulated humans are.

The most important lesson of the much-objected-to Turing test is that the most popular idea anyone has yet put forth for determining the reality of a strong artificial intelligence is totally subjective. No alternative is widely agreed upon as a means of determining general intelligence. These points should show that intelligence as a quality is, by the flawed but influential standard of popularity, an abstract construct at this point, rendered obtainable in silico very much by the people who choose to believe it is possible, if not yet demonstrable to the point of fulfillment.

Iv
Posts: 1207
Joined: Thu Sep 13, 2007 1:08 pm UTC
Location: Lyon, France

Re: Human emulator and ethics?

Postby Iv » Thu Jun 18, 2009 5:50 am UTC

Griffin wrote:Except I haven't actually seen one based around a guy, living in a simulation just to be killed or experimented on over and over and over again. Have any examples? I would like to check them out... Eh, I'm sure Asimov or Analog has done it at least once...

There was even a movie on this theme: Nirvana (don't be put off by the poster, it is ugly). One of the best cyberpunk movies I have seen so far: http://en.wikipedia.org/wiki/Nirvana_(film)

That reminds me of a discussion I had with a friend about The Matrix plot. I maintained that "reality is in fact a simulation" is a plot device that has been known for at least 50 years in SF literature. He pointed out that I could not quote a single story that used it as a premise. Then we observed together that it would simply not work: the fact that a simulation can look realistic enough to fool someone is taken for granted in most SF universes. There are trapped people in Neuromancer; in Iain M. Banks' Culture series he mentions simulated universes where the dead wander; etc. You could not get a reader to be fooled by strange events and left wondering what happened. Writing "he disconnected from the simulation" in an SF setting is just like writing "then he awoke" in a regular setting: it means that everything that happened beforehand had very little meaning for the rest of the story, and the reader usually feels cheated. It works in The Matrix because (please don't hit too hard, Matrix fans) it doesn't have a plot intended to give a coherent story and SF setting. Instead, it aims (and succeeds) at making an interesting SF-based action movie. We accept all the incoherences in the plot because when Neo says "I know kung fu" you know that you'll get a gorgeous fight scene.

Now try to imagine this written in a book. A lot of things just don't make sense: why do the agents need to implant an emitter in Neo? Why do they need phone lines to get in or out of the Matrix? Why can they make super jumps but not fly or go through walls? Why can they conjure a room full of guns but not use something more convenient to free Morpheus (like a big laser to slice down the building)? Why can't the agents just engulf their enemies in concrete? We know the answer to these: it keeps the movie and the story running. If you removed these, you would remove good scenes from the movie, and we are used to accepting that, in order to have a good fight scene, implausible stuff happens.

Making a good story on the same premises in a book, now that would be harder. You die in the Matrix, you die in real life. Sure, otherwise the movie would not be fun. But finding a plausible way for this to happen is a lot harder, and it makes the point of virtual universes almost completely moot.

To sum up: virtual universes that look more realistic than reality are taken for granted in SF literature; they just aren't a good thing to base a story on entirely.

User avatar
Ran4
Posts: 131
Joined: Mon May 04, 2009 2:21 pm UTC

Re: Human emulator and ethics?

Postby Ran4 » Sun Jul 05, 2009 3:31 am UTC

The problem is that ethics isn't universal. Killing someone is wrong because we say so, not because there is some magical fundamental force of the universe that says so.

I think that it's obvious that the "simulated" human is just as "real" as the "original" human. I mean, what is the difference? They both work in the same ways, just using different materials. Data goes in, algorithms are run on the data, data comes out. If the simulated human isn't real, then either the non-simulated human isn't real (...not that we know we aren't actually a simulation ourselves; the simulation hypothesis does apply), or there is some magic that happens in our brains that is unique to us.

Now, I don't believe in that magic: I'm a methodological naturalist, so dualism is out of the way. I do believe in randomness (but not in determinism per se), but I do not believe that it is necessary to simulate sub-atomic randomness in order to simulate a human being.

So, the simulated human is for me no different from the "original" one. I haven't heard any good arguments for why this wouldn't be the case, unless you invoke dualism (i.e. the information that makes a person a person is in some other dimension that is impossible to reach), a really weird definition of what is needed to be human, or subatomic randomness.

Now to the question "Is it ethically right to kill* the simulated human?".
Well, that's about the same question as "Is it ethically right to kill* a flesh-and-blood human?".
Now, since I really don't believe in universal (as in works-everywhere-in-the-universe) ethics, there isn't a problem with killing the person.
But there is still some sort of practical ethics. Killing a person is wrong because it would lead to more harm than good (obviously not on a galactic scale: just harm relative to all humans or something like that).

This is where the problem comes in: how do you decide when it is okay?
Here I'm stuck. I don't know how that person will be simulated, in what way, or who will do it. So, I'd let this question be answered not now, but when we have actually created that fully simulated human.

I might guess at some alternatives: the energy required to create the human is huge relative to how much energy we humans have, so destroying the work would be immoral towards all the humans that built the simulation. If the energy requirement/relative importance to humanity/a single human is small, then killing someone would be ethically acceptable. So, if you just simulate a human being, experiment with it, and then kill it, it's okay. But if you simulate a human being and then get someone emotionally involved, killing the simulation would be immoral, though only relative to the emotionally involved other human/other simulation.

* as in destroying so much information about the person that the person cannot be rebuilt in any other way other than by pure randomness (information-theoretic death).

thehivemind5
Posts: 6
Joined: Tue Apr 28, 2009 3:32 am UTC

Re: Human emulator and ethics?

Postby thehivemind5 » Sun Jul 05, 2009 2:56 pm UTC

Ran4, let me just try and get a handle on your idea here.

Would it be ok under the ethics you propose to kill/experiment on a biological human who was produced in some way involving very little energy and who had no social attachment to others? I.e. if we had a human growing pod that ran off of AA batteries for years, or something equally improbable.

User avatar
Ran4
Posts: 131
Joined: Mon May 04, 2009 2:21 pm UTC

Re: Human emulator and ethics?

Postby Ran4 » Mon Jul 06, 2009 8:44 am UTC

thehivemind5 wrote:Ran4, let me just try and get a handle on your idea here.

Would it be ok under the ethics you propose to kill/experiment on a biological human who was produced in some way involving very little energy and who had no social attachment to others? I.e. if we had a human growing pod that ran off of AA batteries for years, or something equally improbable.

Well, that depends on quite a few things. The reason why I probably wouldn't support it is that this might create an interest in killing/heavily experimenting on babies that aren't brought up in this way.

By energy, I don't mean how many watts the life takes to produce, but rather how hard it is/how important something is to someone else. I guess "solution cost" would be a better way to describe it.

Let's use your idea as an example. Let's say that we want to learn more about a certain type of brain disease that people (I suppose "normal, biological people", i.e. you and me) have. We are quite certain that if we do a certain experiment that requires creating and killing a baby, we will learn so much about the disease that we can treat lots of now-living people that have it. In order to see whether this is ethically okay, we'll compute a solution cost. If the solution cost is negative, it's okay to do the experiment. If it's positive, the experiment is strongly ethically wrong, so it shouldn't be done.

Solution cost of doing the experiment:
Universal value of a human being: 0 cost (killing the baby or not has no universal positive or negative "cost", so this shouldn't be used)
Risk that killing one human will entice us to create more elaborate experiments, which we believe might have a positive solution cost: positive cost
Risk that killing one human will decrease the respect for human life, which will create a spiral of actions which we believe might have a positive solution cost: positive cost
Risk that people will condemn this experiment which leads to the research facility being shut down so that no further progress can be made, giving us no answers and therefore killing more people with the disease: positive cost
Chance/risk that we have forgotten something that is good/bad: positive or negative cost
Chance that what we will learn (if we learn anything) will save the life of lots of humans: negative cost

...Now, obviously, finding the exact value of this solution cost is nearly impossible, since just about every single solution cost is built upon other solution costs. Personally, as a humanist (ish), my "risk that killing one human will decrease the respect for human life" value is really high. I do not, however, believe that there exists a "universal human value" cost.

Based upon this, most of the time I'd most probably say "no, killing the baby is wrong". But if you did know the exact solution cost (say you had some extremely advanced intelligence system that you'd trust in questions like this), then it could tell you whether killing the baby is right or not.

You'd have the same type of solution cost scheme when calculating the solution cost for doing experiments on simulated people. But there'd be some differences. For example, the "Risk that people will condemn this experiment..." value would most probably be much lower when dealing with simulated people (as we have seen in this thread, there are people who have no problem killing simulated people, because they wouldn't be "real").

Actually, I think the above is how most people today would treat the situation, once they got past their (thankfully) built-in "killing-humans-is-wrong" solution cost. Their value for "Risk that killing one human will decrease the respect for human life" would go from negative to positive, most likely extremely positive.

Now, of course, this is just the practical system, to be used by humans/human-like beings. I don't believe that there is such a thing as a universal/objective solution cost/moral value (...such as a "universal cost of human life").
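(Ran4's scheme above is essentially a signed sum of weighted risks and benefits. A toy sketch, where every factor name and weight is a hypothetical illustration rather than a value from this thread; the convention from the post is that a negative total means the experiment is deemed acceptable:)

```python
# Toy sketch of the "solution cost" idea from the post above.
# All factor names and weights here are hypothetical illustrations.
# Convention: negative total cost => experiment deemed acceptable;
# positive total => ethically wrong.

def solution_cost(factors):
    """Sum the signed costs of each (description, cost) factor."""
    return sum(cost for _, cost in factors)

factors = [
    ("universal value of a human being", 0.0),            # no objective cost, per the post
    ("risk of enticing more elaborate experiments", 2.0),
    ("risk of eroding respect for human life", 5.0),
    ("risk of backlash shutting down the research", 1.5),
    ("expected lives saved by what we learn", -10.0),     # the hoped-for benefit
]

total = solution_cost(factors)                  # -1.5 with these made-up weights
verdict = "acceptable" if total < 0 else "wrong"
```

Of course, as the post itself notes, each weight is really the output of other solution-cost estimates, so the numbers are exactly the part we don't know how to obtain.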

thehivemind5
Posts: 6
Joined: Tue Apr 28, 2009 3:32 am UTC

Re: Human emulator and ethics?

Postby thehivemind5 » Mon Jul 06, 2009 3:29 pm UTC

Isn't there an implicit assumption in costs 2 and 3 that human life does have value? If we really can't determine the value of a human life, and have set it to zero, shouldn't there be no way we can predict the solution cost of "Risk that killing one human will decrease the respect for human life, which will create a spiral of actions" (I am assuming that the 'spiral of actions' you mention focuses on killing more humans, as I can't really think of what else you might mean)? I feel that as long as a human has no discernible value either way, then shouldn't wiping us out have no value either way? Also, could I not just as easily change "human life" to "machine life" in the statement above? I feel like you're coming from a very logical platform, but you still have some implicit assumptions of value which you claim not to have made. Not that assumptions of value are wrong or anything.


Ran4 wrote:Now to the question "Is it ethically right to kill* the simulated human?".
Well, that's about the same question as "Is it ethically right to kill* a flesh-and-blood human?".


As long as you believe this, whatever your assumptions of value, we are fundamentally on the same page about this.

User avatar
Ran4
Posts: 131
Joined: Mon May 04, 2009 2:21 pm UTC

Re: Human emulator and ethics?

Postby Ran4 » Mon Jul 06, 2009 4:03 pm UTC

thehivemind5 wrote:I feel that as long as a human has no discernible value either way, then shouldn't wiping us out have no value either way?

Something like that, yes. If some alien decided to destroy our solar system in an instant (using some weird technology), killing all humans in the process, no real harm would have been done relative to the humans. Of course, that alien might use some similar ethical system in which its action is deemed immoral to the alien, but it's not ethically wrong on a universal/objective scale, since no such ethics exists.

I guess about the same thing would apply to a human: if a human managed to kill all humans, all animals, etc. in an instant, there'd be nothing morally wrong with this. You might argue either way depending on how deep you want to get into the philosophy, but in a practical sense I wouldn't say that it matters. Everyone's dead; no one is there to take the moral blow.

thehivemind5 wrote:Isn't there an implicit assumption in costs 2 and 3 that human life does have value? If we really can't determine the value of a human life, and have set it to zero, shouldn't there be no way we can predict the solution cost of "Risk that killing one human will decrease the respect for human life, which will create a spiral of actions" (I am assuming that the 'spiral of actions' you mention focuses on killing more humans, as I can't really think of what else you might mean)?

No, they do not imply that there is an objective value of human life. The system is supposed to be practical and subjective, not universal/objective. Killing someone is bad because it's practically bad, not because there is some universal/objective reason for someone not to die. And even if there is an objective value of human life (though I must say that it's extremely unlikely, unless every single thing in the universe has some universal moral value), that shouldn't change the practical viewpoint.

If universal ethics doesn't exist, fine, make your own rules. If it does exist... well, it doesn't matter anyway. :P

Nulono
Posts: 54
Joined: Wed May 13, 2009 5:09 pm UTC

Re: Human emulator and ethics?

Postby Nulono » Wed Jul 08, 2009 9:54 pm UTC

It'd really just be an AI. Should PETA arrest me if I don't feed my virtual dogs?

User avatar
zug
Posts: 902
Joined: Wed Feb 25, 2009 12:05 am UTC

Re: Human emulator and ethics?

Postby zug » Wed Jul 08, 2009 10:13 pm UTC

I think it would be a neat experiment to do. Assuming we were able to synthesize this kind of compuperson, why don't we just ask them about it? Ask if something hurts or if they want us to stop.
Velifer wrote:Go to the top of a tower, drop a heavy weight and a photon, observe when they hit the ground.

