Philwelch wrote:The problem is that you're just creating an ad-hoc utilitarian justification for not killing hoboes, one that we could argue about all day long (keep in mind that I chose hoboes specifically because they tend to be migratory vagrants who wouldn't be missed, and further that nothing in my argument implies any sort of institutional enforcement, it could just be a doctor or nurse who goes out killing hoboes), but one which doesn't provide convincing evidence that there isn't something like the hobo case that bring reciprocity and utilitarianism into conflict.
I'm sorry, I didn't do a very good job enunciating my main point.
roc314 wrote:We have to include in our rules a limitation to what can be done for the greater good, else we unintentionally defeat the greater good.
Is a society in which anyone can do anything so long as it is justifiable as causing the greater good better than a society that limits what may be done to others, even if it would cause the greater good? If yes, then an unlimited utilitarianism is justifiable (under utilitarianism); if no, then only a limited utilitarianism is justifiable.
A few examples of actions that were considered justified at the time/place because it was thought they would cause the greater good (whether that reasoning was explicit or implicit): the Stalinist purges, the Holocaust (haha, Godwin's Law), the Patriot Act, the Japanese-American internment camps in WWII, the American genocide of the Native Americans, the "White Man's Burden", the crucifixion of Jesus, and the Guantanamo Bay prison (sorry for the America-centrism, but my most detailed history education is in that area, so it's where I know the most examples from).
There is a direct parallel between the concepts of utilitarianism and democracy. One says we should do what is best for the most; the other says we should do what is wanted by the most. However, one integral component of a democracy is some kind of bill of rights--something to prohibit the majority from abusing the minority. Much like we need to prevent the tyranny of the majority in democracy, we need to prevent the tyranny of the greater good in utilitarianism. It's not an ad-hoc rule for one circumstance; it's an essential rule in utilitarianism.
Look at it this way: in many of the above examples, the people carrying them out honestly believed at the time that what they were doing was truly for the greater good. With the advantage of history and hindsight, however, we can see that it wasn't. Because humans are fallible, we will make mistakes in determining what the greater good is. Certain actions have such large negative side effects that if we are wrong that they will lead to the greater good, huge harm will be done to society. To avoid that huge risk, we have to make sure that certain actions are prohibited, even if it seems they will lead to the greater good, and no matter who performs them (whether a doctor, a dictator, or even the majority populace).
At the very minimum, you have to include the element of probability in utilitarianism: weigh each outcome by how likely your judgment of the greater good is to be correct.
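That probability point can be made concrete with a toy expected-utility calculation. This is only a sketch; all the numbers and the payoff scale are illustrative assumptions, not anything from the discussion above:

```python
# Expected-utility sketch: if an action has a catastrophic downside when
# our judgment of "the greater good" is wrong, even high confidence in
# that judgment may not justify the action. Numbers are illustrative.

def expected_utility(p_right, benefit_if_right, harm_if_wrong):
    """Weight each outcome by how likely we are to have judged correctly."""
    return p_right * benefit_if_right - (1 - p_right) * harm_if_wrong

# A modest policy: small benefit, small harm, 90% confidence.
modest = expected_utility(0.9, 10, 10)

# A drastic "greater good" action: same confidence, larger benefit, but a
# catastrophic, irreversible downside if we turn out to be wrong.
drastic = expected_utility(0.9, 100, 10_000)

print(modest)   # 0.9 * 10  - 0.1 * 10     =  8 (worth doing)
print(drastic)  # 0.9 * 100 - 0.1 * 10_000 = -910 (prohibited)
```

Even at 90% confidence, the drastic action comes out strongly negative, which is one way of cashing out why fallible agents should rule out certain actions entirely.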
Of course, all this is skirting the issue that determining what exactly constitutes the greater good is difficult--if not impossible.
Philwelch wrote:Hate to say it, but most of the time when some flawed moral theory gives us an unconscionable duty to turn our friends over to the Secret Police or to murder hoboes, the only thing being contradicted with is our intuitions—not some part of the theory under question.
I suppose I'm assuming that our intuitions are not a logically rigorous system the way we would like our ethics to be. In particular, my intuitions are not a logically rigorous system. If yours are, more power to you.
Heh, my intuitions are not in any way close to logically rigorous.
Our intuitions aren't logically set forth, all derivable from a few basic principles. Instead, they are a collection of ideas that we take for granted and mostly don't even think directly about. They are treated as axioms. By including our intuitions in our moral calculations, all we are doing is throwing in a large number of axioms, often ad hoc, not necessarily well thought out or even consciously recognized; it's no surprise that they cause contradictions in conjunction with another moral system.
Philwelch wrote:The problem is with your axiom, of course. The axiom should be, "I want others to treat/judge me according to the ethic of reciprocity" if you believe in reciprocity. But again, such an axiom is implicit by scope—it is not part of the ethical theory per se so much as it may be part of the metaethical unpacking of what moral claims mean.
Philwelch wrote:Any ethics that leads to the proposition "I should treat/judge others by the basis of their morality" (assuming it doesn't contradict itself) is no ethics at all—it is a recipe for sycophancy. It would compel us to positively judge anyone whose actions were consistent with their moral beliefs. Which is counterintuitive—normally we don't give positive moral judgments to Nazis.
Which is why I don't think we should go by pure, unlimited reciprocity (much like we shouldn't go by pure, unlimited utilitarianism).
Questions for you:
- When our moral system contradicts our intuition, when do we go with the moral system and when do we go with the intuition?
- Why is ad-hoc bad in a moral system? I agree that it's not as elegant as having everything decided by the same few rules, but it is helpful in overcoming contradictions between our intuitive moral ideas and our rigorous moral system.