I. Evaluation of Kant’s ethical view:
A. In favor of Kant’s ethical view:
1. Rational, consistent, impartial: Kant’s view emphasizes the importance of rationality, consistency, impartiality, and respect for persons in the way we live our lives. If Kant is correct that moral absolutes cannot be violated, then he prevents any loopholes, self-serving exceptions, and personal biases in the determination of our duties.
2. Intrinsic worth of a human being: In virtue of being a human being, you have rights, dignity, and intrinsic moral worth/value. Every human being is like a unique artistic creation, such as a Ming vase.
3. A moral framework for rights: As a culture here in the
4. Non-relativistic rights and duties: These moral rights and duties transcend all societies and all contexts, so Kant’s view doesn’t have the problems of cultural or individual relativism. No empirical appeal will have any effect against Kant's view; to criticize it, you need to point out inconsistencies within his system.
5. Autonomy and ability to choose your moral projects: You have a duty to pursue your happiness through the use of reason, as long as you’re not lying, breaking your promises, or committing suicide (or any other duty as determined by the categorical imperative formulations).
6. Alternative: Consequences? Can we ever be completely sure about the consequences of our actions? Haven’t there been times when you thought you were doing the best thing, based on the anticipated consequences, but the results turned out badly? Kant’s view avoids consequences in making ethical decisions, so it doesn’t have such a problem.
B. Against Kant’s ethical view:
1. Is the good will always good without qualification? [STRONG] Can’t I be a do-gooder who always tries to do my duty but creates misery instead? For example, say that I’m running around campus taking cigarettes from the mouths of students, passing out anti-smoking pamphlets. I'm only trying to help people. It doesn't matter if I get restraining orders against me, beaten up, fired, etc. - I'm supposed to have a good will even if I'm annoying. Does this sound ethical?
2. How can Kant deal with these hard cases? [STRONG]
a. Nazi Case: It's 1939, and you're hiding Jews in a cellar. The Nazis come to the door and ask you if you're hiding Jews in a cellar. Should you lie to the Nazis? Is this a good objection to Kant? [See this link "On a Supposed Right to Lie from Altruistic Motives," by Kant, to read his answer to this objection.]
b. Suicide Case: Joe is terminally ill (with some nasty cancer) in the opinion of two doctors, and is in a lot of pain, at the legal limit of painkillers. Why can't Joe take a pill that will kill him? [NOTE: This case is not identical to that of someone who is merely depressed and doesn't think life is worth living, which Kant addresses in the reading.]
3. Two objections from David Hume [STRONG]:
a. Hume's first objection: Reason doesn't discover moral rules. Morality is feeling, affect, or sentiment.
b. Hume's second objection: Reason doesn't motivate moral action. Suppose Kant is right that reason discovers moral duties. So what? What happens then? We need action. Is reason sufficient to motivate us to do our duty? Suppose reason discovers that Action A is a duty. In order to do Action A, do I need something else, such as a desire or an inclination to decide to do Action A, or is it enough to know that Action A is my duty? Hume says we need a desire or an inclination to do the right action, even if we know that it's the right action. In fact, for Hume, first we need a desire or an inclination to do something; then we look to reason to fulfill it. "Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them." A Treatise of Human Nature (Bk. II, Part III, Sec. III, p. 415; my underlining)
4. Akrasia (weakness of will or moral conviction) [STRONG]: You see/know what the right action to do is, you want to do the good action, but you do the bad action instead. Is akrasia possible? If it exists, then reason does not simply force us to do the right thing.
5. What about non-human animals? [WEAK] According to Kant, we only have a duty to treat rational moral agents as ends, not animals. What about chimps, which share a high percentage of our DNA? What about senile people or the comatose? Are these beings mere things, as opposed to ends in themselves, as "normal" people are, according to Kant? [This is weak because he could just update his view (if we showed him current animal research/abilities) and say that animals can be ends in themselves due to their reason; but, more importantly, this objection says nothing against the way in which Kant claims we should treat humans.]
II. EVALUATION OF UTILITARIANISM:
A. In favor of Mill’s Ethical view:
1. Intuitive in general: It links happiness with morality, instead of possibly pitting happiness against morality (as Kant's view can). It fits common beliefs about morality: in general, it backs up our judgments that murder and lying are wrong and that people have rights. So Utilitarianism gives a system to our intuitions.
2. Common sense that pain is bad, pleasure is good: All else being equal, though people have many different and conflicting moral beliefs, they agree that pain is bad and pleasure is good.
3. Impartial, fair, & promotes social harmony: Utilitarianism requires us to balance our interests with those of others.
4. Practical, clear-cut procedure: Utilitarianism doesn’t rely on vague intuitions or abstract principles. It allows psychologists and sociologists to determine what makes people happy and which policies promote the social good. [Warning/paper advice: Do not use this as your main reason why you like this theory - flipping a coin like Two-Face in "The Dark Knight" is a simple ethical decision procedure, but that by itself does not make it a good theory.]
5. Flexible and sensitive to circumstances (Act and Rule): Utilitarianism does not rigidly label actions as absolutely right or wrong (though certain actions like lying will in general be wrong), and it allows flexibility and sensitivity to the circumstances surrounding an action. This makes it practical. Act Utilitarianism is sensitive to the situation, but Rule Utilitarianism can be as well, as long as one can provide a rule that maximizes happiness in general, which also applies to this situation.
B. Against Mill’s Ethical View:
1. Negative Responsibility (Act and Rule Utilitarianism) [STRONG]: According to Utilitarianism, you're morally responsible for:
a. The things you didn't do but could have done to maximize happiness; and
b. The things that you could have prevented others from doing that decrease overall happiness; as well as for:
c. What you actually do to maximize/increase happiness.
E.g., if you go out and play tennis, you could (almost certainly) be doing something instead to increase the overall happiness of the world. Therefore, Utilitarianism is an excessively demanding theory: you may need to give up a lot, if not everything, in order to do the moral thing. This is also a criticism of Rule Utilitarianism, because you must consider rules that you are not currently following, or did not follow, that would have maximized happiness in general.
2. Lack of autonomy/integrity of the moral agent (Act Utilitarianism) [STRONG]: Utilitarianism takes moral responsibility out of the realm of personal autonomy. The agent must choose the one act that will maximize happiness, as opposed to pursuing his/her own moral projects, which rank second or lower and which technically would be immoral to pursue, even if they create a lot of happiness. If you like the idea of choosing your own moral projects, Utilitarianism is not for you.
3. Can't people be wrong about what is pleasurable (Act Utilitarianism)? [STRONG] Does everyone always accurately decide for him- or herself what is pleasurable? Can I mistake what in fact will really bring me pleasure and what will not? Would we think this is a good theory with which to raise children? For instance, should we ask them what they would like to eat or drink and maximize their pleasure, especially if they outnumber us and are much more excited about having something than we parents or adults are about their having it? How can Mill really answer this question, given that he only says that we need to differentiate between noble and base pleasures? Even after we’ve differentiated them, Mill still cannot ignore what “ignorant” people want or find pleasurable. This is a problem for Rule Utilitarians as well, because we could be wrong about what causes pleasure in general.
4. Hard Cases: Act Utilitarianism may require us to commit morally reprehensible acts, according to other ethical theories [WEAK or STRONG - see each example]:
a. Prisoners of War [WEAK]: You, as one of many prisoners, are told, "If you don't give me the name of a prisoner to shoot in 5 minutes, then I will shoot 10 myself." What should you do? Utilitarianism requires you to choose the prisoner who is the least useful or happiness-producing. [Note: This is weak only because a staunch utilitarian will not flinch at this objection, but just nod his/her head. On other theories, such as Kant's, choosing someone to kill is not permissible, because the person holding you captive should not kill any prisoners, and perhaps should not even have them as prisoners. Moreover, the prisoners have no reason to believe that the captor will keep his/her word (e.g., the captor might kill 10 anyway, or just make this same offer every hour until everyone is dead), so why play the game? It's not as if everyone will get to leave once one person is killed, right? The fact that the captor has a bad will to use the prisoners only as a means does not allow you to do the same.]
b. Terrorist group example [WEAK]: You have access to the child of a ruthless terrorist who has a nuclear weapon aimed at your city. If you torture the child, you can get the terrorist to stop the bombing action. Should you torture the child? Utilitarianism might require you to torture the child to ensure the safety of the whole city. [Note: Again, Kant would say that we should not torture people in this way or ever, because we'd be using the child only as a means to an end, and you could not and could never know that torturing anyone ever will give you the outcome you desire. The fact that the terrorist has a bad will to use everyone in the city only as a means does not allow you to do the same.]
c. Rotten Professor example [STRONG]: Suppose there's a really ornery, mean professor who has no living relatives (or if he does, they all don't like him!) and who happens to be very healthy! Suppose you're his doctor who knows that there are 5 people looking for organs, and the professor is a perfect match. The question is, if no one would know about it, should you kill the professor to donate the organs for transplants? There would be happiness created by every “donee” and his/her family and friends, plus the students of the rotten professor! Therefore, Utilitarianism says you should murder the rotten professor. [Now imagine thinking of homeless people as organ donors.]
5. Conflict of Rules (Rule Utilitarianism) [STRONG]? What if rules conflict in a moral situation? Suppose I find myself in a situation where I need to decide between helping someone in need and keeping a promise I made to be somewhere at a certain time, and both helping someone in need and keeping promises maximize happiness. What would a Rule Utilitarian say I should do? OBJ1: If the Rule Utilitarian says that we should maximize the happiness of the people affected by the action, then he/she has changed the theory to Act Utilitarianism. And, OBJ2: If the Rule Utilitarian says that we should pick either rule and follow it, then the theory is arbitrary and/or does not provide effective guidance, because you get to just choose whatever you feel like doing, as long as you can cite a rule that maximizes happiness in general and applies in this situation.
6. Utility Monsters? (Act Utilitarianism) [STRONG]: Robert Nozick proposed that there could be creatures ("Utility Monsters") that experience far more pleasure than the average human. If we assume that such a monster experiences 100 times the pleasure a human does when eating a cookie, then we would have to do whatever pleased the utility monster, eventually doing everything we do in order to please the monster.