Human emulator and ethics?


WaterToFire
Posts: 213
Joined: Sun Oct 05, 2008 7:09 pm UTC

Re: Human emulator and ethics?

Postby WaterToFire » Wed Jul 08, 2009 11:42 pm UTC

zug wrote:I think it would be a neat experiment to do. Assuming we were able to synthesize this kind of compuperson, why don't we just ask them about it? Ask if something hurts or if they want us to stop.

So if you program a robot, for example a Tickle Me Elmo, to say "Stop it, that hurts!" in response to physical stimulus, that means we should believe it? We would be doing essentially the same thing with a human emulator. The issue is not that it responds to stimulus, as many things that are not conscious do this, but whether it is conscious itself. This cannot be viably determined by measuring response to stimulus. In fact, as far as I can tell, it can never be determined. All you can do is make an educated guess based on what you know of its neural complexity.

As I see it, a human emulator would pass this test, but not because of its responses to questions. A talk-bot can be programmed to do the same things, after all.
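To make that concrete, a scripted "pain response" takes a handful of lines (a made-up Python sketch, not anyone's actual toy firmware):

[code]
# A toy "pain response": a pure stimulus-response table.
# There is plainly nothing in here that experiences anything.
responses = {
    "poke": "Stop it, that hurts!",
    "tickle": "Hahaha, stop!",
}

def react(stimulus):
    # A table lookup: no internal state, no experience, just a mapping.
    return responses.get(stimulus, "...")

print(react("poke"))    # -> Stop it, that hurts!
print(react("tickle"))  # -> Hahaha, stop!
[/code]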

zug
Posts: 902
Joined: Wed Feb 25, 2009 12:05 am UTC

Re: Human emulator and ethics?

Postby zug » Thu Jul 09, 2009 3:38 pm UTC

WaterToFire wrote:So if you program a robot, for example a Tickle Me Elmo, to say "Stop it, that hurts!" in response to physical stimulus, that means we should believe it? We would be doing essentially the same thing with a human emulator. The issue is not that it responds to stimulus, as many things that are not conscious do this, but whether it is conscious itself. This cannot be viably determined by measuring response to stimulus. In fact, as far as I can tell, it can never be determined. All you can do is make an educated guess based on what you know of its neural complexity.

As I see it, a human emulator would pass this test, but not because of its responses to questions. A talk-bot can be programmed to do the same things, after all.

If we are engineering something that is exactly human, it would have a repertoire of language and it would presumably have humanoid feelings. We wouldn't just program it to say "that hurts." We'd program it with a full language and humanoid interpretation to say "That feels good" or "Please pass the chicken, I'm hungry" or "My name is Joe, how are you?" in the appropriate situations.
Velifer wrote:Go to the top of a tower, drop a heavy weight and a photon, observe when they hit the ground.

WaterToFire
Posts: 213
Joined: Sun Oct 05, 2008 7:09 pm UTC

Re: Human emulator and ethics?

Postby WaterToFire » Thu Jul 09, 2009 11:36 pm UTC

zug wrote:If we are engineering something that is exactly human, it would have a repertoire of language and it would presumably have humanoid feelings. We wouldn't just program it to say "that hurts." We'd program it with a full language and humanoid interpretation to say "That feels good" or "Please pass the chicken, I'm hungry" or "My name is Joe, how are you?" in the appropriate situations.

If we are engineering something that is exactly human, wouldn't we know that it is conscious because it is an exact replica of something that is? We wouldn't have to ask it; we'd know the answer before we asked the question. Because we programmed it that way.

Yakk
Poster with most posts but no title.
Posts: 11128
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Human emulator and ethics?

Postby Yakk » Thu Jul 09, 2009 11:50 pm UTC

WaterToFire wrote:If we are engineering something that is exactly human, wouldn't we know that it is conscious because it is an exact replica of something that is? We wouldn't have to ask it; we'd know the answer before we asked the question. Because we programmed it that way.
Emulation is not an exact replica.

Emulation is generating something that has the same surface interface as something else.

Perfect emulation would be something that, from the outside, is indistinguishable from the original. Distinguishing a perfect emulation from a replica isn't possible by definition from the 'outside'.

One interesting thing is that, in physics, we keep running into cases where describing the surface of a phenomenon is sufficient to describe the phenomenon: from the Faraday cage, to the limit on the entropy of a volume described by a black hole, to http://en.wikipedia.org/wiki/Green%27s_theorem / http://en.wikipedia.org/wiki/Stokes%27_Theorem .

Now, much of what is talked about here is a human emulator within a virtual reality or some limited interface (like text on a computer screen).

The http://en.wikipedia.org/wiki/Turing_test is a thought experiment by a famous computer scientist -- Alan Turing. Suppose you had a computer that, using a text terminal, could not be distinguished from a human being. The idea is that it could be used as a test to determine if that computer is "really thinking". It is an attempt to take that 'unanswerable' question, and provide a concrete test that is seemingly related. The "surface area" (the computer interaction) of the emulation is used to determine if it has the "volumetric" properties that it seems to have.
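As a sketch of the setup (toy Python; the judge, the human, and the machine endpoints are all invented stand-ins), note that the judge's verdict can only ever be a function of the text coming back over the channel:

[code]
import random

# A toy Turing-test harness: the judge sees only text replies, never
# which respondent produced them. Both respondents are stand-ins.

def human_reply(prompt):
    return input("(human, answer as yourself) " + prompt + "\n> ")

def machine_reply(prompt):
    return "Interesting question. What makes you ask?"  # placeholder bot

def run_test(prompts):
    funcs = [human_reply, machine_reply]
    random.shuffle(funcs)  # hide which respondent got which label
    respondents = {"A": funcs[0], "B": funcs[1]}
    for prompt in prompts:
        print("Judge asks:", prompt)
        for label in sorted(respondents):
            print(label + ":", respondents[label](prompt))
    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    print("Correct." if respondents.get(guess) is machine_reply else "Fooled.")

run_test(["What did you have for breakfast?", "Does anything ever hurt you?"])
[/code]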
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR


WaterToFire
Posts: 213
Joined: Sun Oct 05, 2008 7:09 pm UTC

Re: Human emulator and ethics?

Postby WaterToFire » Fri Jul 10, 2009 3:35 am UTC

Yakk wrote:Now, much of what is talked about here is a human emulator within a virtual reality or some limited interface (like text on a computer screen).
Sorry, I got my terms wrong. This is what I was referring to.

Yakk wrote:The http://en.wikipedia.org/wiki/Turing_test is a thought experiment by a famous computer scientist -- Alan Turing. Suppose you had a computer that, using a text terminal, could not be distinguished from a human being. The idea is that it could be used as a test to determine if that computer is "really thinking". It is an attempt to take that 'unanswerable' question, and provide a concrete test that is seemingly related. The "surface area" (the computer interaction) of the emulation is used to determine if it has the "volumetric" properties that it seems to have.
I understand this. In theory the test should work, but I'm not convinced that human testing techniques can ever determine whether consciousness is really there, just as you can never really know whether the people you talk to in real life are truly conscious. It's a guess based on their displayed level of conversational complexity, combined with the knowledge that, because they are human, they are probably conscious, because you are. The best way to tell whether something is really complex enough for consciousness is to examine its internal workings.

negatron
Posts: 294
Joined: Thu Apr 24, 2008 10:20 pm UTC

Re: Human emulator and ethics?

Postby negatron » Fri Jul 10, 2009 7:33 am UTC

The "surface area" (the computer interaction) of the emulation is used to determine if it has the "volumetric" properties that it seems to have.


Evolution does not add unnecessary complexity. If our behavior could be created "superficially" it would have already been done. If a character behaves human, it can only be sensibly assumed human to the equivalent degree.
I shouldn't say anything bad about calculus, but I will - Gilbert Strang

Yakk
Poster with most posts but no title.
Posts: 11128
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Human emulator and ethics?

Postby Yakk » Fri Jul 10, 2009 8:06 am UTC

negatron wrote:Evolution does not add unnecessary complexity.
Evolution is a local hill-climbing algorithm, not a $deity.

If there is an optimal solution to a problem but it isn't locally reachable, Evolution isn't very good at reaching it. Local optima will be found much faster, even if they contain lots of unnecessary clutter. Speech and the like in humans isn't a super-evolved trait that has gone through billions of years of selection and refinement, like (for example) our mitochondria-based metabolism has. It is really recent -- something as recent as a few million years.

Evolution is willing to carry around arbitrary amounts of baggage, so long as the crud isn't fatal, or is tied (through accidents of Evolution) to positive features.

Evolution isn't magic.
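If it helps, here is a toy illustration (plain Python, with a made-up one-dimensional fitness landscape): a greedy climber started near the small bump settles there, because no single step ever crosses the valley to the much higher peak.

[code]
# A greedy hill climber on a made-up 1-D fitness landscape with two peaks:
# a low local optimum near x=10 and a much higher one near x=50.

def fitness(x):
    return max(5 - abs(x - 10), 20 - abs(x - 50), 0)

def hill_climb(x):
    while True:
        best = max((x - 1, x, x + 1), key=fitness)
        if best == x:
            return x  # no neighbouring step improves fitness: stuck here
        x = best

print(hill_climb(8))   # -> 10: the local peak (fitness 5)
print(hill_climb(45))  # -> 50: the global peak (fitness 20)
[/code]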
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR


Josephine
Posts: 2142
Joined: Wed Apr 08, 2009 5:53 am UTC

Re: Human emulator and ethics?

Postby Josephine » Sat Jul 11, 2009 3:41 am UTC

I think simulating the entire person (much less a universe) is unnecessary. If you want to do an experiment on the body, do a simulation of the body. If the brain never existed, no ethical problem arises whatsoever. Mental experiments are a different matter, but nothing's stopping you from simulating only what you need. If you have to disconnect the cerebral cortex, then do so.
Belial wrote:Listen, what I'm saying is that he committed a felony with a zoo animal.

slacks
Posts: 36
Joined: Sat Jul 11, 2009 2:47 am UTC

Re: Human emulator and ethics?

Postby slacks » Sat Jul 11, 2009 6:10 am UTC

It seems to me that creating a virtual world for a real intelligence is not a very ethical choice to begin with. I think it would be comparable to keeping a human intelligence in a permanent dream state (no matter how realistic such a state is perceived to be!). So it seems like there would have to be an option of a physically capable body for any virtual intelligence you make... which becomes prohibitive if you make a lot.

Tying back into the euthanasia discussion is interesting as well... if it is ethical to pull the plug on a human, is it ethical to do the same for a computer intelligence?

I'm not sure how we would resolve punishment issues for a being that can live so much longer than a human being either. I mean, you can't exactly sentence a computer intelligence to life in prison and figure that it is equivalent to the same sentence for a human being (an argument could be made for it being worse or better!).

I'm also unsure how you would handle people owning/maintaining the physical hardware that stores a computer intelligence, particularly if said software is loaded without permission.

Ran4
Posts: 131
Joined: Mon May 04, 2009 2:21 pm UTC

Re: Human emulator and ethics?

Postby Ran4 » Sat Jul 11, 2009 1:41 pm UTC

Nulono wrote:It'd really just be an AI. Should PETA arrest me if I don't feed my virtual dogs?

If your virtual dogs are living in a virtual world where they can feel pain just like a "normal" dog, then yes. Not feeding your dogs is bad (uhm, let's not get into a long discussion about animals), and not feeding your virtual dogs, which work just like your "normal" dogs, is also bad.

WaterToFire wrote:So if you program a robot, for example a Tickle Me Elmo, to say "Stop it, that hurts!" in response to physical stimulus, that means we should believe it?
Uhm, the exact same thing can be applied to "normal" humans. So yes, we should believe it, or at least partly (we don't believe everything a "normal" human says either).

nbonaparte1 wrote:I think simulating the entire person (much less a universe) is unnecessary. If you want to do an experiment on the body, do a simulation of the body. If the brain never existed, no ethical problem arises whatsoever. Mental experiments are a different matter, but nothing's stopping you from simulating only what you need. If you have to disconnect the cerebral cortex, then do so.

Well, much of the body is controlled by the brain; you'd lose a lot by removing it. Besides, why would you even need to do an experiment on an actual simulated body? There's no need to make your program more abstract.

slacks wrote:I'm not sure how we would resolve punishment issues for a being that can live so much longer than a human being either. I mean, you can't exactly sentence a computer intelligence to life in prison and figure that it is equivalent to the same sentence for a human being (an argument could be made for it being worse or better!).

I'm also unsure how you would handle people owning/maintaining the physical hardware that stores a computer intelligence, particularly if said software is loaded without permission.

Well, the idea that a great punishment is the same as putting someone in jail was created by "normal" humans, for "normal" humans. We'd just find some other way of punishing someone, if that is even needed.

The idea of "life in prison" is based upon the fact that people rarely reaches 100 years of age. What's the point in sentencing a virtual being to ten billion years of prison? Wouldn't it be way easier to just reprogram the virtual being and then continue on? I don't buy into the notion that punishment should be given "just because". If human A kills human B and human A is reprogrammed to not kill more humans, shouldn't that be enough? I mean, why should the reprogrammed human A have to do jail time for what the previous human A did?

negatron
Posts: 294
Joined: Thu Apr 24, 2008 10:20 pm UTC

Re: Human emulator and ethics?

Postby negatron » Sat Jul 11, 2009 1:48 pm UTC

Ran4 wrote:I don't buy into the notion that punishment should be given "just because". If human A kills human B and human A is reprogrammed to not kill more humans, shouldn't that be enough? I mean, why should the reprogrammed human A have to do jail time for what the previous human A did?

Punishment serves no purpose in itself; rather, it acts as a deterrent. You may have fixed the person, but you have not deterred others from doing the same.
I shouldn't say anything bad about calculus, but I will - Gilbert Strang

WaterToFire
Posts: 213
Joined: Sun Oct 05, 2008 7:09 pm UTC

Re: Human emulator and ethics?

Postby WaterToFire » Sat Jul 11, 2009 9:01 pm UTC

negatron wrote:Punishment serves no purpose in itself; rather, it acts as a deterrent. You may have fixed the person, but you have not deterred others from doing the same.

I think the threat of having your identity rewritten as best suits the governing agent would be a pretty strong deterrent.

negatron
Posts: 294
Joined: Thu Apr 24, 2008 10:20 pm UTC

Re: Human emulator and ethics?

Postby negatron » Sun Jul 12, 2009 12:33 am UTC

I wouldn't mind a brain tweak which makes me less aggressive and more compassionate towards others. Perhaps I need that more than anything else. I might even kill for it.
I shouldn't say anything bad about calculus, but I will - Gilbert Strang

echelle
Posts: 6
Joined: Sat Apr 25, 2009 10:59 pm UTC

Re: Human emulator and ethics?

Postby echelle » Mon Jul 13, 2009 4:12 am UTC

If we are dealing with a perfect replica of a human, and not just an emulation as Yakk described, then it would be essentially human. But if there were also a perfect backup of the "human's" state taken before the killing, then the killing itself would be inconsequential to the program, conscious or not; only the pain it might feel could be considered immoral. One of the big problems with killing an intelligence, especially one that can be restored, is that it brings into question the worth of that intelligence, both for the program and for the killer. Another reason killing something could be immoral even if it is unaware of your actions: it is my opinion that being able to kill a companion with no reprimand would almost certainly change your shared relationship, quite possibly for the worse. But if the killer kept no memory of it either, then the events would effectively never have happened, and any crime could be committed, as it would be pointless and have no effect of any size on anything in or outside the program.

One other thing to think about is whether the program would be not only self-aware but aware of its own status in our reality. If it were a perfect simulation of a current human, I assume the results would not be quite as accurate if it were aware of being a computer program, and if a "matrix" were provided, the only way to ensure further accuracy would be to make sure no changes could be made, thus making it more or less a separate universe. In other words, the simulated people would have to be able to assume that they are not a simulation in order for the simulation to be accurate (just as we assume likewise).

Telchar wrote:
If I were to simulate an Internal Combustion Engine on my computer, that doesn't mean my computer now contains an ICE no matter how good the simulation is.

To go back to Telchar's point: to us the engine would be a simulation, because in our reality it is one, but within the simulation, if the engine were a perfect replica (as we're assuming the "human" is), it would be a real engine to anything else residing there. The point of a virtual anything is not to create physical objects or energy but to emulate or recreate them using electrical signals instead of physical objects. This does not mean that one is any less real than the other, just that they are expressed in different ways and do not exist in the same physical sense. This, of course, means that it would be pointless to use perfectly simulated humans as anything more than a way to speed up a process.
thehivemind5 wrote:
many things are fundamental to the concept of humanity, but, in my opinion at least, carbon is not one of them.


Liberals are closet aristocrats

poopsicle
Posts: 2
Joined: Wed Jul 22, 2009 5:25 am UTC

Re: Human emulator and ethics?

Postby poopsicle » Fri Jul 24, 2009 9:02 pm UTC

echelle wrote:it would be pointless to use perfectly simulated humans as anything more than a way to speed up a process

If you assume that you would apply the same ethical standards to simulated humans as you would to "real" humans, then this is true; but if somebody wanted to use human test subjects without needing to worry about the legal implications, this would be a very good way to do it. A perfect digital replica of a human is much easier to control, and easier to get rid of, than a real human would be.



My opinion on the main thread topic is that there should be no difference in your view of the simulation and your view of a "real" human. You would be, quite literally, "playing god."
There are countless parallels between this scenario and a typical theistic view of our existence. The software performs all of the same functions; it's just encoded on different hardware. If we could create an exact replica of the universe just after the big bang (I'm a bit hazy on the subject, but I believe there are quite a few implications about predictability if you just start with the singularity itself), we would be creating an exact replica of our universe. Fast-forward about 13 billion years and you will have a simulation of all of humanity (and the rest of the universe) in real time. It would be no less unethical for us to "kill" one of the humans in our digital world than it would be for some (hypothetical) being on a higher plane of consciousness to kill someone in our "real" world.

Ran4
Posts: 131
Joined: Mon May 04, 2009 2:21 pm UTC

Re: Human emulator and ethics?

Postby Ran4 » Tue Jul 28, 2009 7:42 pm UTC

poopsicle wrote:My opinion on the main thread topic is that there should be no difference in your view of the simulation and your view of a "real" human. You would be, quite literally, "playing god."

Now, how is that an argument for anything? It has no meaning. "You are playing god, so it's wrong!" is an absurd argument.

poopsicle
Posts: 2
Joined: Wed Jul 22, 2009 5:25 am UTC

Re: Human emulator and ethics?

Postby poopsicle » Wed Jul 29, 2009 7:58 am UTC

Ran4 wrote:Now, how is that an argument for anything? It has no meaning. "You are playing god, so it's wrong!" is an absurd argument.


I didn't say that playing god was wrong; I merely said that in this situation your relationship to the simulated person would be analogous to that between a "god" and a "real" person. Obviously, views on whether "gods" should be allowed to end the existence of the "people" they create will vary. I - being ethically opposed to murder, torture, and human testing without a subject's consent - don't think it would be ethical to do any of those things to a "simulated" human, even if we are one step higher than them on some kind of existential hierarchy.

Iv
Posts: 1207
Joined: Thu Sep 13, 2007 1:08 pm UTC
Location: Lyon, France

Re: Human emulator and ethics?

Postby Iv » Wed Jul 29, 2009 2:11 pm UTC

What if you get consent from a simulated human for testing and torture, or even murder? How about tailoring their genetic material so they agree with every proposition you make them? What about restoring them to their initial conditions afterwards?

FrankManic
Posts: 103
Joined: Sat Nov 29, 2008 9:12 pm UTC

Re: Human emulator and ethics?

Postby FrankManic » Thu Jul 30, 2009 7:56 am UTC

If it looks like a duck, walks like a duck, quacks like a duck, eats like a duck, and shits like a duck, it's a duck.

I've always found... flavoured rights movements vaguely ridiculous. It isn't about colored rights or gay rights or anything else. It's just sentient rights, and that applies to every human, robot, thinking electrical phenomenon, sapient plant, and Sims 4 Simoleon out there.

Course, at the same time I'm increasingly convinced that human rights are an expendable casualty if we can trade them for real access to space.

cpp789
Posts: 45
Joined: Mon Mar 09, 2009 4:20 am UTC
Location: Near Chicago

Re: Human emulator and ethics?

Postby cpp789 » Mon Aug 03, 2009 3:15 am UTC

It is an interesting question and one for which we will probably have to decide on an answer (in a couple thousand years). My view is, if we can get them to want to be experimented on, there's no problem.

First, I would say that an exact enough simulation would count as human. I've read the following somewhere; I just can't remember the source. We, as humans, are nothing more than patterns in matter which change and react to the environment in specified manners. As such, when we dream about or even think about what to get someone for their birthday, we create very low-resolution versions of those same people within our brains. The same reactions are there, the same tastes, and the same basic processes are played out in our neurons instead of in a purely autonomous creature. Wherever that was from, it's heavily paraphrased. As long as the simulation was exact enough, it could be considered human (how exact is exact would be a different question, but you said down to the molecule).

Now, experimenting on an unwilling participant to the point where they would feel extreme pain in this manner would be immoral in my book, even if it could save others (once more, another debate). I did say unwilling participant, though. If (since we do have complete theoretical control over them) we were to reprogram them to feel even more happiness at the thought of the experiments or the potential to help others, or if we made it so pain didn't hurt them (that is, they knew that they were being hurt, but they did not find it unpleasant), then I would say we could experiment on them. Perhaps we could even heal them, seeing as the second law of thermodynamics isn't strictly enforced on them. It's like experimenting on a willing CIPA patient in a way which will leave no lasting damage.

These human beings who wish to be experimented on could be experimented on without guilt or any reason for guilt.

negatron
Posts: 294
Joined: Thu Apr 24, 2008 10:20 pm UTC

Re: Human emulator and ethics?

Postby negatron » Mon Aug 03, 2009 3:31 am UTC

cpp789 wrote:Perhaps we could even heal them, seeing as the second law of thermodynamics isn't strictly enforced on them.

You speak nonsense. The second law of thermodynamics doesn't state "biological creatures cannot be healed". Don't quote me on that, but it's not in my physics book.
Image
I shouldn't say anything bad about calculus, but I will - Gilbert Strang

Josephine
Posts: 2142
Joined: Wed Apr 08, 2009 5:53 am UTC

Re: Human emulator and ethics?

Postby Josephine » Mon Aug 03, 2009 7:07 am UTC

cpp789 wrote:It is an interesting question and one for which we will probably have to decide on an answer (in a couple thousand years). My view is, if we can get them to want to be experimented on, there's no problem.
As such, when we dream about or even think about what to get someone for their birthday, we create very low-resolution versions of those same people within our brains. The same reactions are there, the same tastes, and the same basic processes are played out in our neurons instead of in a purely autonomous creature.


Hold on a second... your post gave me a thought. That must mean that the brain is capable of a physical simulation (dreams). Hmm. Shows you just how powerful it is.

And not so much a few thousand years as a few decades.
Belial wrote:Listen, what I'm saying is that he committed a felony with a zoo animal.

cpp789
Posts: 45
Joined: Mon Mar 09, 2009 4:20 am UTC
Location: Near Chicago

Re: Human emulator and ethics?

Postby cpp789 » Mon Aug 03, 2009 6:00 pm UTC

negatron wrote:You speak nonsense. The second law of thermodynamics doesn't state "biological creatures cannot be healed". Don't quote me on that, but it's not in my physics book.


Maybe the 2nd Law of Thermodynamics wasn't quite the law of physics I should have been referencing. The point is that, since they are still simulated, we could experiment without inflicting permanent damage.

Also, the 2nd Law of Thermodynamics does state that entropy tends to increase over time in a closed system. As the simulated human would be ordered, and the simulated universe would consist of nothing except the simulated human, healing the simulated human to a more ordered state would seem to go against the 2nd Law of Thermodynamics, as it would imply entropy decreasing over time.
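To illustrate the rollback idea concretely (a minimal Python sketch, with an invented "subject" state): because the whole subject is just data, you can checkpoint it before the experiment and restore it afterwards, so no damage is permanent.

[code]
import copy

# Checkpoint/restore for a simulated subject (all fields invented).
subject = {"tissue_integrity": 1.0, "memories": ["childhood", "first job"]}

snapshot = copy.deepcopy(subject)      # save state before the experiment

subject["tissue_integrity"] = 0.2      # the experiment does "damage"
subject["memories"].append("the experiment")

subject = copy.deepcopy(snapshot)      # roll back: good as new
assert subject["tissue_integrity"] == 1.0
assert "the experiment" not in subject["memories"]
[/code]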

