Philosophical Ethics: Kant, The Good Will, And Rational Actions

In a series of posts this semester, I am going to blog all (or almost all) the lecture topics for the two Philosophical Ethics classes I am teaching this semester. Each of these posts will primarily explicate the reading or a theme that dominated class discussion in a way that should be accessible to novices (such as my students are). I will also offer some degree of analysis of the ideas considered and then pose suggested discussion questions. These posts will usually feature more speculation than argumentation from me as I try to stimulate your thinking rather than stake out my own positions. Some of my students will be responding to these short discussion primers in a private forum through the university. I’ve told the students they are free to discuss the blog post versions of these discussion primers as well, so they might show up here. The text we are using and from which all citations will be taken is Ethical Theory: Classical and Contemporary Readings, edited by Louis Pojman (Wadsworth: California, 2007). This post explores Kant’s emphasis on a dutiful will and reasons to think that all rational agents’ actions must in principle be capable of consistent universalization.

According to Kant, the one unconditionally good thing is a good will. What this essentially means for Kant is that any action done from a will motivated primarily by respect for duty is an indisputably well-motivated one.  We might ask of an action done from any other motive whether or not it is good.  For example, just because someone acts out of love does not mean she acts well.  People can do both good and bad acts from the motivation of love.  Actions motivated by fear of punishment certainly do not reflect any inherent goodness in the will of the agent, but rather only a concern to avoid certain harms.  No matter how right or good one’s chosen action is in itself, if it is motivated by concern for rewards, then one’s will is not inclined to perform it only because of its goodness but out of consideration of something extrinsic to the goodness of the action—one’s own satisfaction.

In other words, if I only do an excellent deed because of its relationship to the satisfaction of my desires, then my interest is not in the excellent deed itself or in generally doing excellent deeds for their own sake as a matter of course, but rather my interest is in something else that only happens to be related to the excellent deed in this case.  For example, if I give to charity only in order to come out ahead financially through tax deductions, then I have no inherent interest in being charitable or in doing moral things in general but rather am only motivated by saving money.  The fact that I do what is moral and dutiful is incidental in this case.  I might equally have done what was immoral had that been the way that I could best save money.  My only concern was saving money, not being dutiful or avoiding duty.

I gladly gave to charity because doing what happened to be my duty, giving to charity, coincidentally served my personal ends.  But had it been otherwise, I would not have given.  In the case in which I am motivated by selfish concerns rather than by respect for duty in giving to charity, the act of donating is no less a recommendable action or a desirable benefit for those who have been aided by my charity.  But regardless of the rightness of the action and the moral desirability of its consequences, my will is not good.  And insofar as my will is not good, my action is not entirely good and not entirely commendable.  In fact, my action, despite being the one that duty would also have advised me to do, is of little moral worth insofar as moral worth is a matter of what the action expresses and embodies of my goodness as a willing agent.  The action might be right, but I am not; my will is not.

But how do we know exactly what our duty is such that we can act out of respect for duty in the first place?  Kant’s idea is essentially that a duty must be something universalizable in principle; otherwise it could not rationally be a duty for everyone.  If there were a duty that not everyone could fulfill without contradicting the duty itself, then that would make for an irrational and unenforceable duty.

For example, if I were to universalize a principle that everyone should promise to pay back loans with no intention of following through on that promise, and everyone were to obey this principle as a law, the result would be that promises would lose all meaning.  To promise would mean the precise opposite of what it means now, and no one would give loans anymore since promises would be worthless.  In that case, the lies made in order to secure loans would be ignored and ineffectual.  If we have a duty, it must be something that everyone could consistently fulfill, at least in principle, without counteracting the very purposes of the actions themselves.
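To make the self-defeating structure of that maxim vivid, here is a toy simulation of my own (nothing like it appears in Kant or in the Pojman text, and the trust threshold and numbers are arbitrary assumptions).  It models lenders who keep lending only so long as borrowers’ promises remain credible; once the maxim of false promising is universalized, the practice the false promiser depends on disappears, and with it the very end the maxim aims at.

```python
# Toy sketch (illustrative assumptions only): lenders extend credit only while
# borrowers' promises remain credible. Universalizing the maxim "promise
# falsely in order to get a loan" destroys that credibility, so the maxim's
# own purpose (obtaining loans) becomes unachievable: a practical contradiction.

def loans_granted(rounds: int, share_of_false_promisers: float) -> int:
    """Count how many loans are granted before lenders stop trusting promises.

    share_of_false_promisers is the fraction of borrowers who promise
    repayment with no intention of keeping the promise (1.0 means everyone,
    i.e. the maxim universalized).
    """
    credibility = 1.0          # lenders' trust that a promise will be kept
    granted = 0
    for _ in range(rounds):
        if credibility < 0.2:  # below this threshold, lenders refuse to lend
            break
        granted += 1
        # Trust erodes in proportion to how many promises turn out false.
        credibility *= 1.0 - share_of_false_promisers
    return granted

print(loans_granted(20, 0.0))  # honest world: all 20 loans are granted
print(loans_granted(20, 1.0))  # universal false promising: lending collapses after 1 loan
```

The point of the sketch is only structural: the false promise works, when it works, by free riding on a practice of trust that its own universalization would eliminate.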

We similarly cannot universalize lying: a world in which everyone lied as a matter of duty would obliterate trust in one another’s words, make lies ineffectual, and undermine the very purpose of communicative acts in general.  We would live in practical contradiction if we acted like that.  Our actions would contradict their own conditions.

We must assess every action we consider undertaking—whether it is suggested to us by our feelings, our desires, our community, our religion, our family, or anyone else—by the test of whether or not the action makes sense as something universalizable.  The reason for this goes back to considerations we have already seen developed well by R.M. Hare.

Whenever I perform an intentional act as a rational being, I make a choice to act according to some sort of rule (and Kant would call this rule my “maxim”).  I am sitting down now.  If I want to walk out of the room, I will first stand up.  This is because I implicitly understand and act on a principle: “All those who are seated should stand up if they intend to walk.”  This principle (or something quite like it) is a rule we all happily follow, and if some seated person insisted that they wanted to walk but refused to stand up, they would be expressing a desired course of action that contradicted itself.  If one is seated and yet wants to walk, one must first stand up.

Of course, someone may theoretically never want to (or be able to) stand up, and so in those cases there is no imperative that they will to walk.  But if one does will to walk, standing up is part of the deal.  (At least if we adequately distinguish “walking” from crawling and squatting and “walking” on one’s knees, etc.)  Hypothetical imperatives are rules we must follow only if we want to carry out certain kinds of actions, but not if we do not.  It is imperative that one use tomato sauce if one wants to make what’s known as a “plain” New York style pizza pie.  But one may never need to make a pizza pie.  There is no contradiction in one’s will when one opts not to make pizza.  So that’s why such an imperative is only hypothetical.

Every time we reason about an action we think on a level that goes beyond merely ourselves.  We employ our reason, which considers both relationships between means and ends in particular and formal consistency in general.  Our rational conclusions are not merely idiosyncratic and personal to us but are necessarily comprehensible to others.  When I think that I should eat a slice of pizza now because I love pizza and (having mentioned it a moment ago) I have begun to crave it, this is a rational thought insofar as it is one that any rational being can understand as fitting, at least prima facie.  It makes sense to you, even if you hate pizza, that someone’s having a desire gives them a hypothetical reason to act.

Had I said that I should eat a slice of pizza right now because I hate pizza, that would have been irrational and made no sense.  Hating something is not a reason for doing it.  There may be other reasons that I can give that would make my choice to eat something I hate rational.  And there may be other reasons that I can give that would make my choice not to eat something I love rational.  If I say to you that I should eat pizza because I love it, that’s minimally rational.  If I say to you that I should not eat pizza even though I love it, because it is less healthy for me than eating some fruits and vegetables would be, then that may make even more sense to you.

As a rational being you can understand being motivated by health and being motivated by pleasure.  We might have an interesting debate over which I should prioritize.  When and how much I should prioritize my pleasure over my health or vice versa is a debate we can have only if we share an understanding of the relative reasonableness of pursuing each good.  If I said that I should eat vegetables even though I hate them because they are healthy for me, you could readily understand my reasoning that I should do what I hate.  It would not make sense to do what I hate because I hate it, but it would make sense if I did it out of a greater concern for my health.  Again, we can then debate how reasonable it is, how much reason any theoretical rational agent might have, to prioritize the one good over the other.

So, all our actions, insofar as they are rational, are explicable in terms of reasons that other rational agents could understand and themselves formally approve of.  If I know you are a gifted and trained pianist and you opt to quit playing altogether, I might say that it’s an irrational thing to do, not because it would be irrational for me, who have no talent or personal investment in piano playing at stake, to quit piano playing altogether.  I think of the issue, rather, in formal terms about rational agents abandoning their talents and their investments in training.  Now, you may persuade me you had good reasons after all to quit piano playing.  You may highlight other formal features of the situation (perhaps piano playing is no longer challenging to you and you think you will grow more by forcing yourself to stretch, or perhaps piano playing has crowded out too many other more important goods from your life) which would convince me that a rational agent in such a total situation of formal relationships would be right to abandon her talents, no matter who she was.

These considerations of the complicated processes by which we assess formal factors go beyond what Kant concerns himself with in the Groundwork for the Metaphysics of Morals, which I intend to discuss here.  To bring this back to Kant, the test of the categorical imperative is simply whether our actions are minimally consistent and universalizable.  In the scenarios I was just analyzing, hypothetical imperatives were at stake.  If I would like to eat what I love and most crave right now, I should eat pizza.  If I would like to eat what is in my long-term health interests, I should eat some vegetables or something comparably healthy right now instead.  The categorical imperative tests our actions to see which hypothetical imperatives we should never follow because they would involve acting according to a deeper practical contradiction.

The hypothetical imperative “I should lie in order to get this loan” may express a rationally understandable principle, the motivation of which any rational agent would understand, but it is not rationally consistent since it involves a practical contradiction of the sort already explicated.  In this way it is a bad “maxim,” a bad reason on which to act.  It is not one that we could consistently and formally will that all rational agents take as their reason, and anything that cannot be a reason for all rational agents cannot be rational.  Any action which leads to practical contradiction is irrational.
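Put very schematically (this is my own loose gloss in code, not Kant’s formulation; the Maxim structure and the hand-supplied verdicts below are illustrative assumptions, since the real work lies in judging whether a maxim’s purpose would survive universalization), the universal law test functions as a filter on maxims: a maxim fails when a world in which every rational agent acted on it would frustrate the maxim’s own purpose.

```python
# A loose schematic of the universal law test as a filter on maxims.
# The verdicts are supplied by hand; the code only exhibits the structure
# of the test, not a way of computing the philosophical judgment itself.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Maxim:
    description: str
    # Would the maxim's own purpose still be achievable if everyone acted on it?
    purpose_survives_universalization: Callable[[], bool]

def passes_universal_law_test(maxim: Maxim) -> bool:
    """A maxim fails when universal adoption would defeat its own purpose."""
    return maxim.purpose_survives_universalization()

false_promise = Maxim(
    description="Promise to repay a loan with no intention of repaying",
    # Universalized, promises become worthless and no loans are given,
    # so the purpose (getting a loan by promising) cannot be achieved.
    purpose_survives_universalization=lambda: False,
)

stand_in_order_to_walk = Maxim(
    description="Stand up whenever I intend to walk",
    # Everyone standing up in order to walk frustrates no one's walking.
    purpose_survives_universalization=lambda: True,
)

for m in (false_promise, stand_in_order_to_walk):
    verdict = "universalizable" if passes_universal_law_test(m) else "a practical contradiction"
    print(f"{m.description}: {verdict}")
```

The judgment about whether a purpose survives universalization is, of course, the philosophically substantive step; the sketch only displays where that judgment sits within the test.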

As rational beings we are lawgivers insofar as, when we act with intention, we necessarily, if implicitly, act on a rule, and on a rule that our reason approves of in order to set the action in motion.  It is rationally inconsistent to make the laws upon which we base our actions laws that not every rational agent could also make the laws of their own actions.  We make an irrational and unfair duty for ourselves and others if the law we make is not something all rational agents could consistently agree to and follow.

Your Thoughts?