I.  Evaluation of Kant’s ethical view:

A.     In favor of Kant’s ethical view:

1.      Rational, consistent, impartial:  Kant’s view emphasizes the importance of rationality, consistency, impartiality, and respect for persons in the way we live our lives.  If Kant is correct that moral absolutes cannot be violated, then his view rules out loopholes, self-serving exceptions, and personal biases in the determination of our duties.

2.      Intrinsic worth of a human being:  In virtue of being a human being, you have rights, dignity, and intrinsic moral worth/value.  Every human being is like a unique artistic creation, such as a Ming vase.

3.      A moral framework for rights:  As a culture here in the U.S., we are interested in and fond of rights.  Kant’s theory helps us to see where we get them.  Duties imply rights, and rights imply legitimate expectations.  If every human has intrinsic worth (as Kant believes), then every human should have the same rights, other things being equal.

4.      Non-relativistic rights and duties:  These moral rights and duties transcend all societies and all contexts, so Kant’s view doesn’t have the problems of cultural relativism.  A relativist or subjectivist style of objection won’t work against Kant’s view, and no empirical appeal will have any effect on him; to object, you need to point out inconsistencies within his system.

5.      Autonomy and ability to choose your moral projects:  It’s a duty to pursue your happiness through the use of reason, as long as you’re not lying, breaking your promises, committing suicide, or violating any other duty determined by the categorical imperative formulations.

6.      Alternative:  Consequences?  Can we ever be completely sure about the consequences of our actions?  Haven’t there been times when you thought you were doing the best thing, based on the anticipated consequences, but the results turned out badly?  Kant’s view doesn’t appeal to consequences in making ethical decisions, so it avoids this problem.

B.     Against Kant’s ethical view:

1.      Is the good will always good without qualification? [STRONG]  Can’t I be a do-gooder who always tries to do my duty but creates misery instead?  For example, say that I’m running around campus taking cigarettes from the mouths of students and passing out anti-smoking pamphlets.  I’m only trying to help people.  It doesn’t matter if restraining orders are filed against me, or if I get beaten up, fired, etc. - on Kant’s view, I still have a good will even if I’m annoying.  Does this sound ethical?

2.      How can Kant deal with these hard cases? [STRONG] 
a.      Nazi Case:  It’s 1939, and you’re hiding Jews in your cellar.  The Nazis come to the door and ask you if you’re hiding Jews in your cellar.  Should you lie to the Nazis?  Is this a good objection to Kant?  [See this link, "On a Supposed Right to Lie from Altruistic Motives," by Kant, to read his answer to this objection.]

      b. Suicide Case:  Joe is terminally ill (with some nasty cancer) in the opinion of two doctors, and is in a lot of pain, already at the legal limit of painkillers.  Why can’t Joe take a pill that will kill him? [NOTE:  This case is not identical to that of someone who is merely depressed and doesn’t think life is worth living, which is what Kant addresses in the reading.]

3.      Two objections from David Hume [STRONG]:

a.       Hume's first objection:  Reason doesn't discover moral rules.  Morality is feeling, affect, or sentiment.

b.      Hume's second objection:  Reason doesn't motivate moral action.  Suppose Kant is right that reason discovers moral duties.  So what?  What happens then?  We need action.  Is reason sufficient to motivate us to do our duty?  Suppose reason discovers that Action A is a duty.  In order to do Action A, do I need something else, such as a desire or an inclination to decide to do Action A, or is it enough to know that Action A is my duty?  Hume says we need a desire or an inclination to do the right action, even if we know that it's the right action.  In fact, for Hume, first we have a desire or an inclination to do something, and then we look to reason to figure out how to fulfill it. "Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them." A Treatise of Human Nature (Bk. II, Part III, Sec. III, p. 415)

4.      Akrasia (weakness of will or moral conviction) [STRONG]:  You see/know what the right action is, you want to do it, but you fail to do it; you do the bad action instead.  Is akrasia possible?  If it exists, then reason does not simply force us to do the right thing.

5.      What about non-human animals? [WEAK]  According to Kant, we only have a duty to treat rational moral agents as ends, not animals.  What about chimps, which share 99.4% of our DNA?  What about senile people or the comatose?  Are they mere things, as opposed to ends in themselves the way "normal" people are, according to Kant?  [This is weak because Kant could simply update his view and grant that such animals are ends in themselves in virtue of whatever reason they possess, and this changes nothing about the way we should treat humans.]

6.  Consistency of the Categorical Imperative? [WEAK]  Are the formulations consistent?  Kant says they are, but how can we be sure?  On the universal law formulation, it seems we should never lie, EVER; but if by NOT lying we treat someone merely as a means, or allow others to treat people merely as a means, isn’t this morally objectionable on the humanity formulation?

7.      Practical Application Problems [WEAK]:  How do we formulate the maxim, given that the Categorical Imperative can give us different answers depending on how the maxim is drawn?  For example, I could consistently universalize the maxim “No one should eat raw oysters” just because I don’t like them, or “Everyone should tie his or her right shoe first,” purely on the basis of my preference, and these would then be morally binding on all moral agents.



II.  Evaluation of Mill’s ethical view:

A.     In favor of Mill’s ethical view:

1.      Intuitive in general:  It links happiness with morality, instead of possibly pitting happiness against morality (as Kant’s view can).  It fits with common beliefs about morality:  in general, it backs up our judgments that murder is wrong, that lying is wrong, and that people have rights.  So Utilitarianism gives a system to our intuitions.

2.       Commonsensical - pain is bad, pleasure is good:  Other things being equal, though people have many different and conflicting moral beliefs, people agree that pain is bad and pleasure is good.

3.      Impartial, fair, promotes social harmony:  Utilitarianism requires us to balance our interests with those of others.

4.   Practical, clear-cut procedure:  Utilitarianism doesn’t rely on vague intuitions or abstract principles.  It allows psychologists and sociologists to determine what makes people happy and which policies promote the social good.  [Warning:  Do not use this as your main reason for liking the theory - flipping a coin, as Two-Face does in "The Dark Knight," is a simple ethical decision procedure, but that by itself does not make it a good theory.]

5.      Flexible and sensitive to circumstances (Act and Rule):  Utilitarianism does not rigidly label actions as absolutely right or wrong (though certain actions like lying will in general be wrong), and it allows flexibility and sensitivity to the circumstances surrounding an action.  This makes it practical.  Act Utilitarianism is sensitive to the situation, but Rule Utilitarianism can be as well, as long as one can provide a rule that maximizes happiness in general, which also applies to this situation.

B. Against Mill’s Ethical View:

1.      Negative Responsibility [STRONG]: According to Utilitarianism, you're morally responsible for:

a.       The things you didn't do but could have done to maximize happiness; and

b.      The things that you could have prevented others from doing that decrease overall happiness; as well as for:

c.       What you actually do to maximize/increase happiness.

E.g., if you go out and play tennis, you could (almost certainly) be doing something else instead that increases the overall happiness of the world.  Therefore, Utilitarianism is an excessively demanding theory:  you may need to give up a lot, if not everything, in order to do the moral thing.

2.      Lack of Autonomy/Integrity of the Moral Agent [STRONG]:  Utilitarianism takes moral responsibility out of the realm of personal autonomy.  The agent isn't able to choose his/her own moral projects.  If you like the idea of choosing your own moral projects, Utilitarianism is not for you.

3.      Can’t people be wrong about what is pleasurable? [STRONG]  Do we really want to let everyone decide for him or herself what is pleasurable?  Can I be mistaken about what will in fact bring me pleasure and what will not?  Would we think this is a good theory with which to raise children?  Should we ask them what they would like to eat or drink and maximize their pleasure, especially if they outnumber us and are much more excited about having something than we parents or adults are about their having it?  How can Mill really answer this question, given that he only says we need to differentiate between noble and base pleasures?  Even after we’ve differentiated them, Mill still cannot ignore what “ignorant” people want or find pleasurable.  Is this a problem?

4.      Hard Cases:  ACT Utilitarianism may require us to commit morally reprehensible acts [STRONG]:

a.       Prisoners of War:  You, as one of many prisoners, are told, "If you don't give me the name of a prisoner to shoot in 5 minutes, then I will shoot 10 myself."  What should you do?  Utilitarianism requires you to choose the prisoner who is the least useful or happiness-producing.

b.      Terrorist group example:  You have access to the child of a ruthless terrorist who has a nuclear weapon aimed at your city.  If you torture the child, you can get the terrorist to call off the attack.  Should you torture the child?  Utilitarianism might require you to torture the child to ensure the safety of the whole city.

c.      Rotten Professor example:  Suppose there’s a really ornery, mean professor who has no living relatives (or if he does, none of them like him!) and who happens to be very healthy.  Suppose you’re his doctor, and you know of 5 people waiting for organs - two need kidneys, one a heart, one a liver, and one needs corneas.  The question is, if no one would ever know about it, should you kill the professor and donate his organs for the transplants?  There would be happiness created for every recipient and his/her family and friends, plus the students of the rotten professor!  So it seems Utilitarianism says you should murder the rotten professor.

5.      Conflict of Rules for the RULE Utilitarian? [STRONG]  What if rules conflict in a moral situation?  EX:  I find myself in a situation where I must decide between helping someone in need and keeping a promise I made to be somewhere at a certain time.  What would a Rule Utilitarian say I should do?  Either keep my promise (because the rule of keeping promises maximizes happiness in general) or help the person (because the rule of helping people in need maximizes happiness in general).  In this case, Rule Utilitarianism doesn’t say what to do - so it seems that when rules conflict, Rule Utilitarianism collapses into Act Utilitarianism.  And then the theory is arbitrary (that is, you’re just making it up as you go along, and there is no theory!), because you get to choose whatever you feel like doing, as long as you can cite a rule that maximizes happiness in general and applies to this situation.

6.      [ Bentham’s bodily pleasures and Mill’s noble pleasures [WEAK]:  On the one hand, how can Bentham think that bodily pleasures are just as valuable as (or more valuable than) the pleasures of the mind?  On the other hand, how can Mill argue that one pleasure is better or more valuable than another?  Doesn’t he need something other than pleasure itself to argue that one pleasure is better than another?  And if Mill’s argument fails, aren’t we back to Bentham’s view? ]