thehivemind5 wrote: Ran4, let me just try and get a handle on your idea here.
Would it be okay under the ethics you propose to kill/experiment on a biological human who was produced in some way involving very little energy and who had no social attachment to others? I.e. if we had a human-growing pod that ran off of AA batteries for years, or something equally improbable.
Well, that depends on quite a few things. The reason I probably
wouldn't support it is that it might create an interest in killing/heavily experimenting on babies that aren't brought up in this way.
By "energy" I don't mean how many watts the life takes to produce, but rather how hard it is to create and how important it is to someone else. I guess "solution cost" would be a better way to describe it.
Let's use your idea as an example. Say we want to learn more about a certain type of brain disease that people (I suppose "normal, biological people", i.e. you and me) have. We are quite certain that if we do a certain experiment that requires creating and killing a baby, we will
learn so much about the disease that we can treat lots of now-living people who have it. To see whether this is ethically okay, we'll compute a solution cost. If the solution cost is negative, it's okay to do the experiment. If it's positive, the experiment is strongly ethically wrong, so it shouldn't be done. (A rough sketch of this tally follows the list below.)
Solution cost of doing the experiment:
Universal value of a human being: 0 cost (killing the baby or not killing it has no universal positive or negative "cost", so this term shouldn't be used)
Risk that killing one human will entice us to create more elaborate experiments, which we believe might have a positive solution cost: positive cost
Risk that killing one human will decrease the respect for human life, which will create a spiral of actions which we believe might have a positive solution cost: positive cost
Risk that people will condemn this experiment which leads to the research facility being shut down so that no further progress can be made, giving us no answers and therefore killing more people with the disease: positive cost
Chance/risk that we have forgotten something that is good/bad: positive or negative cost
Chance that what we will learn (if we learn anything) will save the lives of lots of humans: negative cost
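To make the decision rule concrete, here is a minimal sketch in Python of how such a tally might look. The term names and numbers are entirely invented for illustration; the only part taken from the argument above is the rule that a negative total means the experiment is okay and a positive total means it is wrong.

```python
# Sketch of the "solution cost" tally described above.
# All weights are hypothetical; the point is only the decision rule:
#   total < 0  => experiment is ethically okay
#   total >= 0 => experiment is ethically wrong
solution_cost_terms = {
    "universal value of a human being": 0.0,            # no universal cost, per the argument above
    "risk of enticing more elaborate experiments": 3.0,
    "risk of decreasing respect for human life": 8.0,
    "risk of condemnation shutting down the research": 2.0,
    "chance we have forgotten something good/bad": 1.0,  # could just as well be negative
    "expected lives saved by what we learn": -10.0,
}

total_cost = sum(solution_cost_terms.values())

if total_cost < 0:
    print(f"Total solution cost {total_cost}: the experiment is ethically okay.")
else:
    print(f"Total solution cost {total_cost}: the experiment is ethically wrong.")
```

With these made-up weights the total comes out positive, which happens to match the "most of the time I'd say no" conclusion below; different weights would of course give a different answer.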
...Now, obviously, finding the exact value of this solution cost is nearly impossible, since just about every solution cost is built upon other solution costs. Personally, as a humanist (ish), I put a really high value on the "Risk that killing one human will decrease the respect for human life" term. I don't, however, believe that there is
a "universal human value" cost.
Based upon this, most of the time I'd probably say "no, killing the baby is wrong". But if you did know the exact solution cost, say because you had some extremely advanced intelligence system that you trusted with questions like this, then it could tell you whether killing the baby is right or not.
The same type of solution cost scheme applies when calculating the solution cost of doing experiments on simulated people, but with some differences. For example, the "Risk that people will condemn this experiment..." value would most probably be much lower when dealing with simulated people (as we have seen in this thread, some people have no problem killing simulated people, because they wouldn't be "real").
Actually, I think the above is how most people today would treat the situation, once they got past their (thankfully) built-in "killing-humans-is-wrong" solution cost. Their value for the "Risk that killing one human will decrease the respect for human life" term would go from negative to positive, in most cases extremely positive.
Now, of course, this is just the practical system, to be used by humans/human-like objects. I don't believe that there are any universal/objective solution costs or moral values (...such as a "universal cost of human life").