Motto:

"There are none so blind as those who will not see." --

Thursday, March 12, 2009

A problem for consequentialism

I think one of the main problems with utilitarian theories is that they do not attribute any value to moral agents or moral patients as such. Their experiences might be valuable, as might the satisfaction of their desires, but moral agents and moral patients themselves are of no value at all. (From this point on, I will use the term “moral subjects” to refer to both moral agents and moral patients.) So if, for example, I kill Smith and somehow create a new individual whose well being is the same as Smith’s was, this state of affairs is, intrinsically, neither better nor worse than how things would be if I had left Smith alone. But this seems wrong. To borrow a term from W.D. Ross, it is our prima facie duty not to kill people, and this is so even if killing someone has no impact on the total amount of utility in the universe as a whole. Because utilitarian theories cannot discriminate between actions which result in the same overall amount of utility in the world, they are blind to the fact that one such action can be permissible while another such action is prohibited.

Consequentialists, more generally, could try to remedy this problem by assigning intrinsic value to moral subjects in themselves, apart from any value their experiences might have, and take this into account in their moral deliberations. After all, consequentialism in general requires that we try to maximize the good, but is silent on which things are good. But I doubt that consequentialists can acknowledge the intrinsic value of moral subjects without giving up on consequentialism or else failing to do justice to what we normally mean when we say that moral subjects are intrinsically valuable. In order to have a genuinely consequentialist theory, consequentialists would have to treat the value of a moral subject as being comparable with other sorts of value, such as happiness, or well being in general. But then we are faced with essentially the same problem we encountered above: Suppose Smith himself is worth 11 units of goodness (utiles), while his well being is worth 10 utiles, so that the total value of Smith’s life is 21 utiles. Why can I not kill Smith, provided that I also create 21 utiles through some other means to make up for the loss? If I do this by creating a new person, they would presumably be worth just as much as Smith was; so as long as their well being is also worth 10 utiles we have a life which is worth exactly the same as Smith’s. All the same, it is still wrong to kill Smith, even if I create a new person to “make up for it”. So a consequentialist theory has again given us the wrong result, even though we augmented it so that it assigns intrinsic value to Smith himself and takes this value into account in determining one’s permissible courses of action. What has gone wrong?

The problem is not that we have assigned Smith too little value—all persons are equally valuable, and so just as valuable as Smith is, and given this we can always concoct a new scenario in which we create enough utiles to make up for murdering Smith. This is so even if one thinks that moral subjects are of infinite value. And that view is problematic on its own: If moral subjects are infinitely valuable, one moral subject is worth just as much as a million. But if forced to choose, one surely ought to save a million rather than one.

If there is any hope for the idea that moral subjects are intrinsically valuable, I think it must lie in something like Kant’s Categorical Imperative, which in one of its formulations says that moral subjects ought always to be treated as ends in themselves, and never merely as means. One of the problems with the above lines of reasoning is that we have taken the term “intrinsic value” at face value and have falsely assimilated the value of persons to the value of sub-personal things like happiness or well being. Moral subjects are not merely valuable, but unique and irreplaceable. Instead of saying that moral subjects are intrinsically valuable, it would be better to say that moral subjects have moral dignity, and that this is something which cannot be measured in the same way that the value of happiness or well being can. Perhaps it cannot be measured at all. What is certain is that we cannot hold that moral subjects may be replaced by other beings whose lives have “equal value”, for a being which has moral dignity is by that very fact one which ought not to be disposed of, even if it is replaced by another being who also has moral dignity. Neither may we use moral subjects as a mere means to improve the general welfare. It might still be true, in general, that we may save the many rather than the few, but it will only be permissible for us to do so if the circumstances which force this choice on us are not of our own making.

4 comments:

musefree said...

Interesting post. My way of dealing with this is to subscribe to a version of consequentialism in which there are intrinsic values, but these are attached to acts rather than to things. In other words, any act that violates someone's rights (defined in a precise way, but I won't go into that here) has a large net negative value associated with it.

So suppose you kill Smith. To make up for it, you will not only have to create a new person, but also make up for the intrinsic cost of violating someone's rights -- in this case you violated Smith's most basic right to life -- and that would require you to do much, much more, especially if the intrinsic cost of killing an innocent human is defined to be sufficiently high.

Observe also that the intrinsic cost I am assigning depends on the exact nature of the act, and not on the consequences of the act. This is a very important point. So killing Smith for no reason has a huge intrinsic cost in addition to the cost of his life. If Smith were to die in a manner that is no one's fault, this intrinsic cost would not be there (though the cost of his life still would be).

This is my way of getting back some of the advantages of deontological ethics while still remaining in a consequentialist framework. I would be interested in hearing your thoughts on it.

Jason Zarri said...

Hi musefree,

That sounds like an intriguing view. I don't have any other comments at the moment--unfortunately I'm kind of busy--but I'll try to give a proper response soon. But if you have any links to where you expound your view further, I'd be interested to read them when I have the time.

Jason Zarri said...

musefree,

I think you're right that intrinsic values can be attached to acts, and I would agree that your account resolves my Smith example. However, I have two points of (potential) disagreement:

First, I wouldn't agree that intrinsic values are attached to acts *rather than* things; for I see no reason why intrinsic values couldn't be attached to things as well. In addition, I would conjecture that the intrinsic values of acts are grounded in the intrinsic values of the things they concern, much as the intrinsic value of a complex thing is grounded in the intrinsic values of its parts and their relations to each other. But perhaps I'm misunderstanding your view--did you mean that intrinsic values are attached to acts rather than *merely* to things?

Second, I'm not sure that your account is really a form of consequentialism. What would you say about cases where we can greatly improve the general welfare by infringing the rights of some small group of people? (Of course, the issue of whether or not your view is correct is far more important than that of how it is best characterized; I'm just curious.)

musefree said...

Thanks for your reply.

"First, I wouldn't agree that intrinsic values are attached to acts *rather than* things; for I see no reason why intrinsic values couldn't be attached to things as well. In addition, I would conjecture that the intrinsic values of acts are grounded in the intrinsic values of the things they concern, much as the intrinsic value of a complex thing is grounded in the intrinsic values of its parts and their relations to each other. But perhaps I'm misunderstanding your view--did you mean that intrinsic values are attached to acts rather than *merely* to things?"

Yes, my comment was poorly phrased. What you wrote above is a better formulation of what I meant to express. In essence, I take into account *all* the parameters that define an act, i.e. both the nature of the act and its consequences for everything else, and assign values to each parameter. Some of these values are intrinsic. In particular, two different acts can thus end up with different values even if, in the end, they result in apparently similar outcomes.

"Second, I'm not sure that your account is really a form of consequentialism. What would you say about cases where we can greatly improve the general welfare by infringing the rights of some small group of people? (Of course, the issue of whether or not your view is correct is far more important than that of how it is best characterized; I'm just curious.)"

In my personal political philosophy, I tend to attach significantly higher (intrinsic) values to violations of rights than to general welfare (when I say general welfare here, I restrict myself to situations where no rights are violated by not providing this welfare). In fact, to many specific cases of general-welfare-like situations one may choose to attach practically zero value; this would get us essentially close to deontological conclusions. In any case, without going into details, *I* do attach nonzero values to many things that do not directly involve rights, but nonetheless in most 'real life' cases my assignment of values is such that I would usually still end up favoring the individual-rights position over the general-welfare one. However, theoretically at least, there will exist situations where my analysis would prefer the solution that "greatly improve(s) the general welfare by infringing the rights of some small group of people". So I do think what I describe is a version of consequentialism, unless by the term "consequentialism" you automatically disallow intrinsic values (I think this is a point of controversy among some).