Tuesday, August 09, 2011

On ethics, part II: Consequentialism

by Massimo Pigliucci
[This post is part of an ongoing series on ethics in which Massimo is exploring and trying to clarify his own ideas about what is right and wrong, and why he thinks so. Part I was on meta-ethics]
Following up on my general discussion about what ethics is, I am now going to proceed with a series of brief commentaries on specific approaches to ethics, attempting to cover the major ways of philosophizing about morality that have been developed over time. Before we go on, I need to remind everyone that this is most certainly not a comprehensive, not to mention scholarly, survey (there is simply a huge amount of material on this out there), though I will provide links to resources for those who wish to pursue any of these topics in more depth. Rather, this is my way to try to sharpen my own thoughts about ethics, and to expose them to a discussion amongst friends, hoping for truth (or at least a better understanding) to spring from it.

So let's get started with consequentialism, a way of thinking about ethics that really subsumes a family of ethical theories, most importantly the various forms of utilitarianism (from Bentham to Mill to Peter Singer). I'm starting here because consequentialism is arguably the most popular form of ethical reasoning among philosophers, and perhaps even among the public at large (whether or not most people would actually recognize it). It is also very appealing at first sight but turns into a highly problematic Pandora's box once we look a bit closer.
Essentially, consequentialism is the idea that all that matters in morality are the overall consequences of our actions. As opposed to what, one might ask? Well, for people adopting other ways of looking at ethics, consequences are not the overriding criterion, which may instead be duty (to law or to gods), personal integrity, respect for other people's rights, and so on.
The first obvious question is: what quantity should our concern for consequences try to maximize? The answer depends on what sub-type of consequentialist one is, but for modern utilitarians this amounts to both maximizing happiness (philosophically conceived as human well-being, not hedonistically) and minimizing pain. One could, however, choose different criteria to be maximized and still be a consequentialist.
This in turn brings up the first obvious problem for basic consequentialism: it doesn't seem to provide a principled way to distinguish between different ways of maximizing, say, happiness. This is really a matter of math: if you want a particular number as the result of a given operation, there are many ways to obtain that number. Maximization could be achieved, for instance, by attempting to increase everyone's happiness, but also by increasing the happiness of a majority at the expense of a certain increase of pain in a minority. Heck, depending on how one measures happiness, one might even find that a higher total can be obtained by greatly favoring a small number of people while making things worse for the rest.
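The "matter of math" point can be made concrete with a toy calculation (a hypothetical sketch; the per-person happiness numbers are invented purely for illustration):

```python
# Two hypothetical policies with invented per-person happiness scores.
equal_policy = [5, 5, 5, 5, 5]    # everyone gains modestly
skewed_policy = [20, 4, 1, 0, 0]  # a few gain a lot, most gain little or nothing

def total_happiness(outcomes):
    """A maximizer that looks only at the sum across people."""
    return sum(outcomes)

# Both policies yield the same total, so a simple sum-maximizer
# cannot distinguish them, despite their very different distributions.
print(total_happiness(equal_policy))   # 25
print(total_happiness(skewed_policy))  # 25
```

Any aggregation rule that collapses a whole distribution into a single number faces this many-to-one problem; that is the sense in which the worry is arithmetical rather than merely intuitive.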
A similar point is that consequentialism is supposed to be agent-neutral, meaning that it doesn't matter whether a certain action has the consequence of increasing your own happiness, that of a friend, of a relative, or even that of your nemesis. This is something that strikes many people (and a good number of philosophers) as rather odd. For instance, we consider it moral for a parent to care first and foremost for his children, but there is no way to justify this within a consequentialist frame. One possible reaction could be: so much the worse for our sense of parental duty to children, but one might equally develop suspicions that there is something wrong with an ethical system that does not take the specialness of the parent-child relationship into account. (Of course, the concept of “duty” itself lies outside of consequentialism's purview.)
Here is another obvious problem: in most cases we simply do not know what the consequences of a given course of action will be, certainly not the overall consequences. And what does “overall” mean here? Over what extent of space and time are we supposed to preoccupy ourselves? Surely, in the very long run, nothing will matter anyway, since the sun, and eventually the galaxy and the universe itself, will not exist. But even over the much smaller scale of human existence, are we supposed to measure the lifetime consequences of our actions, for all people potentially involved?
Clearly this makes no sense, and no serious consequentialist pursues that line of reasoning. But here is the funny thing about the approach: every time consequentialists run into one of these daunting issues (there are many more, some of which I will mention below), they modify consequentialism itself. For instance, one can get around the problem posed by the impossibility of knowing all the consequences of an action by moving to what is called “reasonable consequentialism,” the idea that an action is morally right if it has the best reasonably expected consequences.
This move may seem strange, as it amounts to rescuing one’s theory from trouble by modifying the theory ad hoc so that it can absorb, rather than resolve, the trouble. As Popper argued in the case of scientific theories, this is the path that leads straight to pseudoscience (or pseudophilosophy). But in fact things aren't quite that bad: even scientific theories are often adjusted to account for new data, a practice that is both acceptable and indeed indispensable for scientific progress. Think of how the Copernican theory got “tweaked” by Kepler, who replaced circular planetary orbits with elliptical ones to better account for the actual movements of the planets. No astronomer thinks that that move amounts to a step in the direction of pseudoscience.
And yet, as post-Popperian philosophers of science (e.g., Imre Lakatos, one of Popper's students) have argued, there must be a limit to these adjustments, beyond which we declare the research program associated with a given theory to be “degenerate” and we abandon the theory in favor of a better overall alternative. I think a major issue with consequentialism is that it is difficult to tell when so much tweaking has been done that the theory has become unrecognizable.
For instance, I mentioned above the contrast between consequentialism’s agent neutrality and the common moral thinking that we have special duties toward friends and relatives. One way that has been proposed to deal with this is to introduce so-called “friendly consequentialism,” the doctrine according to which an action is moral if it has the best consequences for one’s own friends, or family. Okay, but then we seem to lose a major appeal of consequentialism itself, precisely the fact that it doesn’t favor one person over another in terms of moral worth. Can friendly consequentialism still be meaningfully thought of as consequentialism at all?
Or consider another common issue raised in this context: if most of your neighbors engage in a behavior that pollutes a local resource (say, dumping garbage in a nearby river), then if you did the right thing instead and brought your garbage to the processing plant, consequentialism might lead you to conclude that doing the right thing is not moral in that case, since you are not making a dent in the situation (the river will still be polluted) and you are causing a problem for yourself (you have to spend time, energy and money to bring the garbage to the plant). The overall consequences of your actions are negative. All right, then, say some consequentialists: we are going to introduce “rule consequentialism,” the idea that something is morally right if it doesn't violate the kind of behavioral rules that would have the best consequences for the community. Neat, but, once again, is it still consequentialism?
Consequentialism also seems to have a hard time with the concept of rights, not just the positive ones considered necessary by progressive thinkers, but even the negative ones minimally accepted by conservatives. Suppose the best overall consequences could be achieved if we prohibited certain behaviors, like driving highly polluting cars; or, to take a more extreme counterfactual commonly invoked against utilitarianism, imagine that a doctor could save five people’s lives by harvesting the organs of a sixth, healthy person. Although it can be done, it is hard to see how a consequentialist can respond to these situations without further difficult mental gymnastics to save his conception of morality. Individual rights just don’t sit well with a doctrine established on maximizing overall happiness.
One more problem to think about: one of the premises of consequentialism is that it makes sense to think of an overall benefit resulting from our courses of action. But one worry is that “overall benefit” is as problematic a concept as “overall skill”: one can have this or that skill (painting, music, writing, plumbing), but it doesn’t make any sense to ask what is a person’s overall skill. Does it make sense to ask what is the overall benefit of a course of action?
As I mentioned in the beginning, this does not pretend to be either a comprehensive or an in-depth analysis of consequentialism. That’s not the sort of thing that can be accomplished in a blog post (or even a series of blog posts). Nonetheless, the reflections I forced myself into while writing this confirm my overall impression of utilitarianism: it is an initially appealing idea that is likely to die the death of a thousand small ethical cuts. This certainly doesn’t mean that consequences shouldn’t enter into moral discourse, but it does mean that I do not see an exclusive focus on consequences as viable to construct an entire ethical theory.
Next: deontological ethics.

60 comments:

  1. "It is also very appealing at first sight but turns into a highly problematic Pandora's box once we look a bit closer."
    That's what happened with me - it seemed like a really good idea at first sight, but the more I've read on it (and spent time thinking about it) the less appealing I find it.

    ReplyDelete
  2. At least for me, the operative words above are "an exclusive focus."

    In other words, I wonder if an exclusive focus on any kind of normative criteria - be they rules (deontological ethics) or character traits (virtue ethics) - is a good idea. And yet making such exclusions often seems like the stock-in-trade of moral philosophy.

    ReplyDelete
  3. Consequentialism seemed like a good idea when I first learned about it as an undergrad philosophy major, but the more I think about it, the less viable it seems. I almost feel like it isn't even really a full theory; it's just an algorithm.

    Every form of consequentialism has to decide what goods to maximize, but those have to be determined independently of consequentialism itself. There are no internal tools for deciding what ends we should have. There are only more or less specific (depending on the version) dictates for actions.

    ReplyDelete
  4. Should a doctor kill a sixth person to save 5 people?
    Most people feel this is immoral. If he does, he will feel unhappy for the rest of his life, and if people realize what he did, they will feel sick as well. This will decrease the overall happiness. Therefore he should not...
    Conclusion: consequentialism is about following commonly accepted moral judgments. :-D

    ReplyDelete
  5. "(Of course, the concept of “duty” itself lies outside of consequentialism's purview.)"

    Except perhaps for the duty to maximize well-being (or whatever consequence is being maximized).

    ReplyDelete
  6. Q,

    that's how a lot of consequentialists get out of that one. But besides the fact that it isn't at all clear that their defense works as stated, what if the doctor could conceal from the rest of society what he had done? Would what he did *then* be ok on consequentialist grounds?

    ReplyDelete
  7. mufi, I think the reason that you see ethical systems with exclusive focus is because they are attempting to account for the "right making" features of actions or judgments. For eudaimonistic virtue ethics, it's the tendency of an action or judgment to further human well being. For Kantian ethics, it's adherence to the rule of reason as expressed in our duties. Massimo talked about consequentialism here.

    If you tried to mix and match theories, say a hybrid Kantian-virtue ethics, then you are potentially going to wind up with some internal inconsistencies. At which point you have to decide which theory provides the fundamental element of your ethical system. Kant's full moral theory has quite a lot to say about virtue; but, in the end, while virtues might make right action easier for an agent, being in accordance with virtue does not make an action right. Only when action is undertaken in compliance with the categorical imperative are we fulfilling moral duties (or so says the Kantian).

    ReplyDelete
  8. Oyster, yes, I'm aware of that likelihood.

    But my impression - from both personal experience and moral psychology - is that most folks (e.g. as demonstrated by the Trolley Problem) are not so bothered by inconsistencies and contradictions as philosophers are. They don't mind (or are largely unaware of) adhering to a hodgepodge of norms - some apparently utilitarian, others deontological, and others virtue-ethicist - in a situational or heuristic way. More to the point: it's far from obvious that they're wrong (morally speaking) to do so.

    That said, I've personally grown increasingly sympathetic to virtue ethics (in part, thanks to Massimo's passionate defense of that normative theory). But I still don't see it as air-tight, and occasionally catch myself slipping into consequentialist and rule-based ways of judging my and other people's actions.

    Again, I'm not so sure that's a bad thing (at least for a non-philosopher like myself).

    ReplyDelete
  9. @Massimo

    Maybe a consequentialist could argue that in certain circumstances, it's ok to do something bad if nobody knows. For example, if someone owns something that he never uses and will never realize has disappeared, isn't it more acceptable to steal it? Except for someone who believes that God will judge us, I think it is.

    ReplyDelete
  10. Mufi, you won't get an argument from me if you say that in the construction of our normative moral theories we have not paid adequate attention to empirical moral psychology, which points to just the sort of heuristics that you're talking about (and as an aside, there is a similar problem in normative epistemology of the strain that seeks to find the necessary and sufficient conditions for justified true belief). A normative theory has to be empirically adequate (in my judgment), which is why consequentialism seems so counterintuitive at times (as Massimo alludes to in his post). This means it has to take account of the heuristics and biases that color our moral judgments. But I've yet to be convinced that a purely descriptive account will give us everything we want in a moral philosophy, so I still think there's a place for "higher level" metaethical considerations.

    ReplyDelete
  11. Should a doctor kill a sixth person to save 5 people?

    So how can you judge this in a consequentialist frame when you haven't said what the consequences of that action would be? Who are these people and what is the consequence of saving them or not? People do not exist in a vacuum and they don't have the same inherent value. Anyone who thinks that the life of a wino or a mass murderer has the same inherent value as that of a doctor, engineer or scientist is kidding themselves. Anyone who asserts that all lives ARE equal should be prepared to say why.

    ReplyDelete
  12. Oyster: I would expect a moral philosopher to point out to me that, with a purely descriptive account, we run afoul of the naturalistic fallacy; i.e. something along the lines of: Just because moral psychology tells us that's how we are, doesn't mean that's how we should be. And I would probably agree with him/her, as far as that argument goes.

    However, it's still not obvious that how we are, in this case, is wrong (or right for that matter). Indeed, the only flaw that I can think of off-hand is, what you already said, that it's inconsistent. But that's a flaw in logic, and - if our moral instincts run as deeply as I suspect they do - then pointing that logical flaw out will not necessarily make a dent in how most people (again, outside of academic philosophy departments) respond to a typical moral dilemma.

    But I suppose it's an exercise worth trying.

    ReplyDelete
  13. I certainly agree with your conclusion: consequentialism is a useful tool in the toolbox allowing us to make moral judgments, but it cannot be the only one. I also have a suspicion that the same goes for every other principle.

    However, what bothers me is that part (not all) of your argument essentially goes like this: Consequentialism logically demands that I do not prefer my daughter's well-being over that of other people. That does not feel right. Therefore, consequentialism must be wrong.

    I was under the perhaps naive impression that our emotionally and evolutionarily understandable nepotism was not really a rational argument to be entertained in moral philosophy. It may enter as part of a reality check to show us that certain moral systems would not really work in practice because they are too much at odds with what human nature allows us to actually make work, but how can it enter as an intellectual argument about what is moral in the first place?

    ReplyDelete
  14. I tend to think that deontological ethics arises out of consequentialist ethics the same way that classical physics arises from quantum physics. Consequentialism is unsolvable because consequences are unpredictable and all actions are trapped in a web of game theory. An effective solution is to have lots of rules and conventions. Not killing people is a good rule because even if we think a specific instance of killing is good, we are likely to be mistaken.

    In hypothetical ethical dilemmas, we tend to assume very specific circumstances, predictable consequences, and all the time in the world to think about it. I don't trust moral intuitions in unrealistic hypotheticals.

    BTW, what distinguishes utilitarianism from consequentialism? (Here I betray my ignorance of ethics.)

    ReplyDelete
  15. "there is no way to justify this within a consequentialist frame"

    Argument from ignorance.

    "then if you did the right thing instead and brought your garbage to the processing plant, consequentialism might lead you to conclude that doing the right thing is not moral in that case, since you are not making a dent in the situation"

    Begging the question.

    ReplyDelete
  16. In my thinking about consequentialism and utilitarianism, I feel I've learned a lot more from economists - who often aren't utilitarians even when they self-identify that way (as they e.g. ignore animal utility) – than from philosophers, partly since economists take the maximization operation as an imperative/directive to employ the instrumentally rational (in the decision theoretic rather than philosophical sense) course of action to achieve the end. So for instance, taking "maximize" as well-defined, you can't, contra Bentham, both max happiness and min pain, since there are tradeoffs between the two, and so following one of the objectives comes at the cost of the other. Without instrumental rationality I don't feel "consequentialism" means anything; it seems meaningless to me to say you want to minimize despair or something and not have "minimize" be well-defined.

    The "first obvious problem" you raise is something you can often brush aside. If the thing you're maximizing is continuous, like utility, then having two different options result in consequences whose values are exactly the same is what the jargon would term an event of Lebesgue measure zero - its probability is zero, so unless it's a non-random process it will never happen. For the large-scale social issues you raise here (majority vs. minority etc.) it's really not an issue.

    But consider the very special case where two options do yield consequences of the exact same value. Most metrics (like utility) will, in rendering those two values, already have taken into account the chooser's own preferences over the options, and yet there's still something on the other side that balances it out. At that point, personal preferences or no, there is evidently good reason for either way. If you really wanted, you could create what the jargon terms a "lexicographic" ordering – when the main function is indifferent (both choices yield exactly the same utility), you have a fall-back selection mechanism, say, min inequality. Doing so would change the type of consequentialist you are, but I'll point out that most standard consequentialisms – max utility, max per-capita utility, max happiness, min suffering (e.g., some Buddhism), etc. – are decisive of far more decision problems than I think the other ethical systems actually are. It's just that those others are sufficiently hazily defined that nobody notices. (continued)
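    The lexicographic idea can be sketched in a few lines (a hypothetical illustration; the option values and the inequality tie-breaker are invented for the example):

```python
# Hypothetical sketch of a lexicographic ordering: maximize total utility
# first, and break exact ties by minimizing inequality. Numbers invented.

def inequality(outcomes):
    return max(outcomes) - min(outcomes)  # crude spread measure

def choose(options):
    # options: dict mapping option name -> per-person outcome list.
    # Primary criterion: total utility (higher wins).
    # Fall-back on exact ties: inequality (lower wins), via tuple comparison.
    return max(options, key=lambda name: (sum(options[name]),
                                          -inequality(options[name])))

options = {
    "A": [10, 10, 10],  # total 30, spread 0
    "B": [25, 4, 1],    # total 30, spread 24
}
print(choose(options))  # "A": same total utility, but less unequal
```

    Python's tuple comparison does the lexicographic work here: the second element is consulted only when the first elements are exactly equal, which mirrors how the fall-back criterion only activates on ties.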

    ReplyDelete
  17. As for the criticism we don't always know the consequences of our actions, (decision-theoretic) instrumental rationality does allow maximizing/minimization under imperfect information. This point here seems to be where philosophers branch off, although I'm not overly familiar with "reasonable consequentialism" or the others, so I'm not sure there's much I can say here other than to note that in the economic view, the information set affects the actions to achieve the end rather than the end itself. It seems almost nonsensical to me to respond to imperfect information by changing the objective function rather than recognizing how the instrumentalism can cope. In any event, I'm fine with saying the "major problem" you raise might be an issue for "friendly consequentialism."

    Family, rights, and rules are "weirdness" issues where I think there's a lot of misunderstanding since at first glance consequentialism seems a little weird here. I think however it's fairly easy to see how consequentialism can accommodate these issues without altering the objective function. I'll just discuss rights. Government actors are not immaculately compassionate or ethical, power corrupts etc., and you need to restrain them to avoid bad outcomes. Moreover, you need to restrain them in ways that allow you to identify and adjudicate breaches fairly easily – the more complex the restraint, the more costly and error-prone the recognition and response becomes (and for most consequentialisms, costs and the risk of error are issues).

    Legal rights are those constraints. You could see this idea in the (admittedly rudimentary) literature on optimal constitutional design. Even the sense that people have rights that the legal regime doesn't accept is understandable, since it is possible for a rights regime to be suboptimal for a consequentialism's objective. For example, a utilitarian might want government, but in formulating the constraints, having the legal right that the government treat you only in ways consistent with maximizing utility would be too prone to government abuse, since it would be difficult to recognize abuses of power, easy to argue false defenses, costly since the scope of the evidence you would need to consider would be enormous, etc. It would promote utility better to have narrower, more adjudicable legal rights. It's similar to what miller's comment discusses.

    Anyway, thanks for the discussion. I don't often get to think about these things anymore.

    ReplyDelete
  18. Interesting comments, everyone! As usual, I apologize for responding only briefly and in part, but [insert usual excuses about this not being my full time job etc. here].

    Q,

    > Maybe a consequentialist could argue that in certain circumstances, it's ok to do something bad if nobody knows. <

    Maybe, but in the example we were discussing the sixth person who gets killed in order to harvest his organs will surely know. And few people would consider that action moral under any circumstances.

    Thameron,

    > So how can you judge this in a consequentialist frame when you haven't said what the consequences of that action would be? <

    The short answer is that consequentialists themselves see this sort of hypothetical as problematic. I think the underlying assumption is that all six are nice people, though frankly I would see it as problematic to harvest the organs of even a certified bad person.

    miller,

    > what distinguishes utilitarianism from consequentialism? <

    Utilitarianism is the subtype of consequentialism that measures the consequences of actions in terms of increased overall happiness or decreased pain. But one could be a consequentialist while trying to maximize / minimize other criteria.

    Brian,

    > Argument from ignorance. ... Begging the question. <

    You really need to revisit your understanding of logical fallacies. Here is a good starting point: http://www.fallacyfiles.org/index.html

    Alex,

    > I was under the perhaps naive impression that our emotionally and evolutionarily understandable nepotism was not really a rational argument to be entertained in moral philosophy. It may enter as part of a reality check to show us that certain moral systems would not really work in practice because they are too much at odds with what human nature allows us to actually make work, but how can it enter as an intellectual argument about what is moral in the first place? <

    Good point. There are two reasons: first, as we will see in the rest of the series, there are other ethical theories that disagree with consequentialism on how to treat relatives and friends. Second, whenever possible ethicists try to square their theories with most people's ethical intuitions. Yes, the latter could turn out to be wrong, but you have to have a very good reason for claiming that, and consequentialism's agent neutrality leads to a variety of problems (see links in the post) that are better addressed by different approaches to ethics.

    ReplyDelete
  19. Timothy,

    I'm very skeptical of economists' practices in general, and particularly when it comes to ethics. For instance:

    > for instance, taking "maximize" as well-defined, you can't, contra Bentham, both max happiness and min pain, since there are tradeoffs between the two <

    First, you have to demonstrate that to be the case, you can't just assume it. Second, this is no trouble to a consequentialist, who can prioritize one of the two criteria (as Singer does, focusing on pain) or propose a weighted balance between the two.

    > If the thing you're maximizing is continuous, like utility, then having two different options result in consequences whose values are exactly the same is what the jargon would term an event of Lebesgue measure zero - its probability is zero, so unless it's a non-random process it will never happen. <

    I don't see how that applies to the issue. The point is that there may be different, but not ethically equivalent, ways to maximize happiness. It has nothing to do with random processes.

    > Most metrics (like utility) will, in rendering those two values, already have taken into account the chooser's own preferences over the options <

    But that is the issue: on what grounds does one account for those preferences, and in what way? You've just brushed aside the problem and then declared it not a problem.

    > It seems almost nonsensical to me to respond to imperfect information by changing the objective function rather than recognizing how the instrumentalism can cope. <

    It's not a matter of imperfect information, it's an issue of not having a principled way to determine how far and how many consequences are relevant. Again, you are brushing aside the problem too quickly.

    > Government actors are not immaculately compassionate or ethical, power corrupts etc., and you need to restrain them to avoid bad outcomes. ... Legal rights are those constraints <

    Of course they are, but there is no principled way to account for rights of any kind within the consequentialist approach, which means that there is a problem with consequentialism. And plenty of consequentialists have recognized and tried to deal with the problem (unsuccessfully, in my opinion).

    ReplyDelete
  20. I see consequentialism as a framework. It is completely general and flexible, and you need to add something else to make it complete, or at least more complete.

    Q,

    In your example of stealing something from someone, you could at least first try to talk to him, to see if he really does not need it, and try to convince him to give it to you. If he does not want to give it to you, then he is wrong, not you, although you do not get it. If you steal it, for you in that situation the end justifies the means. Even if what you get is not a big deal. But still, you are stealing, and that's not good.

    Regarding family and friends, it is a little bit similar. They are your relatives, but there are limits. You cannot justify just any discrimination on the grounds that they are your relatives and the others are not.

    Massimo,

    I am thinking of the golden rule. I think it is something that should at least be taken into account in a moral theory.

    ReplyDelete
  21. >I'm very skeptical of economists' practices in general, and particularly when it comes to ethics.<

    A fair point, and I agree more than you'd suspect. Nonetheless I find their analytic tools to be much better.

    >First, you have to demonstrate that to be the case, you can't just assume it. Second, this is no trouble to a consequentialist, who can prioritize one of the two criteria (as Singer does, focusing on pain) or propose a weighted balance between the two.<

    I'm not sure what you mean by "it" - that there are tradeoffs between pain and happiness? Consider a device that would permanently and instantaneously eliminate all life from the universe. If you want to minimize pain, you would use it (since pain would then be permanently zero), but doing so would be inconsistent with maximizing happiness. A less sci-fi example would be someone who's terminally ill and in excruciating, continual pain, but who has a brief, infinitesimal flicker of happiness despite the pain every time his daughter visits him in the hospital - and you as his physician have to decide whether to assist his suicide or not. Saying you can both max happiness and min pain is untrue as a general statement, if you take "maximization" and "minimization" to mean their well-defined mathematical operations. Barring that, it seems to me to be a horribly fuzzy, almost meaningless statement.

    Prioritizing one over the other is one response; a lexicographical ordering is one such prioritization. But then you aren't truly maximizing or minimizing at least one of them.
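    The "weighted balance" alternative Massimo mentions can also be made precise as a single scalar objective (a hypothetical sketch; the weight and the outcome numbers are invented for illustration):

```python
# Hypothetical sketch: fold happiness and pain into one objective with a
# weight, so there is a single well-defined quantity to maximize.
W_PAIN = 2.0  # invented weight: pain counts twice as much as happiness

def score(happiness, pain):
    """Net value of an outcome under the weighted objective."""
    return happiness - W_PAIN * pain

# Two courses of action (invented numbers):
print(score(happiness=10, pain=3))  # 4.0
print(score(happiness=8, pain=1))   # 6.0 -> preferred despite less happiness
```

    The point is that once the weight is fixed, the tradeoff is resolved by definition; the philosophical work has simply been relocated into the choice of weight.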

    >I don't see how that applies to the issue. The point is that there may be different, but not ethically equivalent, ways to maximize happiness. It has nothing to do with random processes.<

    If the ways aren't ethically equivalent by consequentialism, then one of them yields more utility or whatever the objective is, but if so then it's that one that maximization selects. For consequentialism to view them as equivalent, they must yield equal amounts of utility (or whatever). And whether the utility relates to the action options randomly or otherwise very much affects how possible it is to have such ethically equivalent options.
    (continued)

    ReplyDelete
  22. >But that is the issue: on what grounds does one account for those preferences, and in what way? You've just brushed aside the problem and then declared it not a problem.<

    For utilitarianism, you take preferences into account by their utility values. From your statements, you seem to have an issue I'm not addressing, but I don't know what it is. I've reread the article, and it very much seems to me that my reply to you here was entirely on-point to that portion of your post. We might be speaking a bit past each other. Perhaps you could restate the problem?

    >It's not a matter of imperfect information, it's an issue of not having a principled way to determine how far and how many consequences are relevant. Again, you are brushing aside the problem too quickly.<

    Not knowing all the consequences your possible actions would have on the future is a matter of imperfect information - you don't have the information about how your action will affect the sum total of pain or utility etc. that occurs in the rest of history. As such it is a matter of optimizing under imperfect information.
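    For what "optimizing under imperfect information" amounts to here, a standard expected-utility sketch (in Python, with hypothetical actions, probabilities, and utilities of my own invention) may help:

    ```python
    # Choose the action with the highest expected utility when the
    # consequences are uncertain. All numbers here are illustrative only.
    def expected_utility(outcomes):
        # outcomes: list of (probability, utility) pairs for one action
        return sum(p * u for p, u in outcomes)

    actions = {
        "act_a": [(0.8, 10), (0.2, -50)],  # usually good, small chance of disaster
        "act_b": [(1.0, 5)],               # modest but certain benefit
    }

    best = max(actions, key=lambda a: expected_utility(actions[a]))
    print(best)  # "act_b": 5 > 0.8*10 + 0.2*(-50) = -2
    ```

    The agent never knows which outcome will actually occur; it optimizes over its beliefs about the outcomes, which is the sense in which ignorance of future consequences is an information problem rather than a defect in the objective itself.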

    When you mention the relevance of consequences, I'm not sure, but taken with portions of your post, I suspect you're envisioning problems with an optimal information-collection algorithm but are folding your discussion into the objective function (to the extent you're saying some e.g. utility in the far future is not relevant for utility maximization).

    >Of course they are, but there is no principled way to account for rights of any kind within the consequentalist approach, which means that there is a problem with consequentialism. And plenty of consequentialists have recognized and tried to deal with the problem (unsuccessfully, in my opinion).<

    I mean, it's hard to know what to say to a conclusory assertion. There are principled ways, which I outlined. We might just have to disagree here.

    ReplyDelete
  23. I'm not sure how we can possibly escape consequentialism: when we ask ourselves what morality is for - what we want it to do for us - I can't see any other way of answering this question other than: because we all want to live happily and without suffering. Does anyone care if a particular human right or moral virtue is upheld if no impact is made on our happiness or suffering as a result?

    In this sense, then, human rights, moral virtues, freedoms, and other moral approaches (including rule utilitarianism) are ultimately consequentialist: they are heuristics that are aimed at particular consequences, namely happiness and the avoidance of suffering.

    ReplyDelete
  24. Many of the difficult examples for utilitarianism (like the doctor killing a sixth person to save the first five) involve consent. These problems are abhorrent to our moral intuition because they involve decisions that not all parties involved would voluntarily accept.

    A moral theory is not going to work in practice unless the people making up a society adopt it voluntarily. And no one is going to adopt a moral theory that states that they might be killed without their consent in order to help others.

    The only way utilitarianism (or any moral theory) would work, then, would be to rule out any action that met with objections from any of the people involved. Exceptions would, of course, have to be made in cases in which all available actions raised objections (e.g., the trolley problem) or when punitive action of some kind was necessary (jailing a serial murderer).

    ReplyDelete
  25. @Oscar

    You say "But still, you are stealing and that's not good."
    The question is: why is it not good, if not by virtue of its (actual or expected) consequences?

    ReplyDelete
    Often, the predicted consequences of a considered action or inaction include consequences that are immoral...perhaps more immoral than the action.
    There is a tendency to refer to the latter events as good/bad rather than moral/immoral...probably simply because of chronology and our experience of chronology. If one removes the chronology...one can recognize that the so-called consequence is really another action choice...to be compared with the other action choice in terms of which is more moral/immoral. Even the Deontologist must hide behind the "veil of chronology" to avoid making difficult choices...even if they choose on the basis of duty.

    ReplyDelete
  27. @Q

    Ok, I understand, but it is not that easy for me. It sounds like a justification. Maybe it is precisely here that I differ from consequentialism, I don't know.

    But, ok, you steal it because he will not miss it, and you know that for sure. What if in one month let's say, he starts to miss it again? What then?

    ReplyDelete
  28. Massimo...Did my post get lost ?

    ReplyDelete
  29. kpharri said...

    I'm not sure how we can possibly escape consequentialism: when we ask ourselves what morality is for - what we want it to do for us - I can't see any other way of answering this question other than: because we all want to live happily and without suffering. Does anyone care if a particular human right or moral virtue is upheld if no impact is made on our happiness or suffering as a result?

    Good point, actually. Two replies spring to my mind: Some believers would argue that you have to follow rules set by their god no matter whether it increases or decreases overall or personal well-being. Yes, originally the idea was to follow the rules to curry favour with the god and escape their wrath either in this world or the afterlife, but sometimes you actually see people who argue that something is moral simply because god said so.

    Second, coming from a biological background I always lean toward a view of societal mores as less the product of reasoning and more of quasi-evolutionary trial and error during the attempt to build stable human societies. In that sense, I doubt that our morals are, as you wrote, primarily "heuristics that are aimed at particular consequences, namely happiness and the avoidance of suffering", but perhaps rather heuristics that turned out to work to produce stable and competitive societies. We don't consider lying, stealing, vandalism and manslaughter to be undesirable because they lead to suffering, but because societies that consider these behaviours virtuous cannot function.

    (So much for the plausibility of the "evil races" in high fantasy, by the way.)

    ReplyDelete
  30. kpharri:
    "I'm not sure how we can possibly escape consequentialism: when we ask ourselves what morality is for - what we want it to do for us - I can't see any other way of answering this question other than: because we all want to live happily and without suffering."

    If you mean that everyone adopts happiness and the elimination of suffering as the primary goal of moral life, then I disagree with you. A Christian would argue that you have to obey God's law without regard to whether you end up happy in the end (i.e., even if it didn't result in rewards in heaven). A Kantian would similarly claim that our duties should dictate our actions independently of their outcomes.

    If you just mean that moral reasoning is going to contain some element of instrumental reasoning, then I agree, but instrumental reasoning =/= "consequentialism."

    "A moral theory is not going to work in practice unless the people making up a society adopt it voluntarily. And no one is going to adopt a moral theory that states that they might be killed without their consent in order to help others."

    But surely we don't have to have universal assent to act as a society, right? If we dispense with the notion of inviolable rights (as strict consequentialism implies we should) then the fact that someone withholds consent to a moral proposition is irrelevant. And even with the concept of rights we have plenty of people in our society that don't consent to laws they are legally obligated to obey. The Tea Party thinks taxes are immoral but they still have to file by April 15th.

    ReplyDelete
  31. Timothy,

    > Not knowing all the consequences your possible actions would have on the future is a matter of imperfect information ... As such it is a matter of optimizing under imperfect information. <

    Okay, I'm not sure how this advances consequentialism. You seem preoccupied with how to implement the idea, I'm preoccupied with whether the idea is the right one.

    > I suspect you're envisioning problems with an optimal information-collection algorithm <

    Not really. I'm familiar with optimization theory in biology. And even some consequentialists actually defend a satisficing rather than an optimizing approach because of some of the reasons you brought up. But this is all about the mechanics of how, not about the ethics of why.

    > Consider a device that would permanently and instantaneously eliminate all life from the universe. If you want to minimize pain, you would use it <

    Yes, that only demonstrates that it makes no sense to talk about minimizing pain without context. Obviously the context here is to improve the human condition, not to eliminate the species.

    > Saying you can both max happiness and min pain is untrue as a general statement <

    I don't see why: you can imagine a completely happy and pain free life, can you not? You are assuming (likely, but not logically mandatory) tradeoffs.

    > If the ways aren't ethically equivalent by consequentialism, then one of them yields more utility or whatever the objective is, but if so then it's that one that maximization selects. <

    That's the point: alternate ways are ethically equivalent under consequentialism but don't seem to make much sense to the rest of us, and even to some consequentialists.

    kpharri,

    > I'm not sure how we can possibly escape consequentialism: when we ask ourselves what morality is for - what we want it to do for us - I can't see any other way of answering this question other than: because we all want to live happily and without suffering <

    You are taking the term too broadly. Of course in some sense one can always think of consequences, but consequentialism is the doctrine that only consequences per se, and not, for instance, duties, rights, personal character and so on, are ethically relevant. The other things too have "consequences," but then the word becomes so broad as to be meaningless. Accordingly, consequentialists themselves do not use it in that sense.

    DJD,

    > Did my post get lost ? <

    No, it was published with a bit of delay because I don't like to check my mobile phone while on a date...

    ReplyDelete
  32. Well, I lucked out you posted this morning rather than this afternoon (gulp).

    >Okay, I'm not sure how this advances consequentialism. You seem preoccupied with how to implement the idea, I'm preoccupied with whether the idea is the right one.<

    I might be mistaken, but your post seems to me to be arguing that consequentialism (1) is conceptually unworkable and (2) does not align with intuition, and my comments have attempted to address such criticisms. The point here goes to conceptual viability.

    >And even some consequentialists actually defend a satisfying rather than an optimizing approach because of some of the reasons you brought up.<

    A satisficing rather than optimizing ethic is a new one on me (intriguing really), and as you say it would be an alternate approach to developing viable consequentialisms.

    >Yes, that only demonstrates that it makes no sense to talk about minimizing pain without context. Obviously the context here is to improve the human condition, not to eliminate the species.<

    I mean, if someone told me in all seriousness that her or his first principle of ethics was to minimize pain, and then upon considering such hypotheticals said that s/he meant what s/he said, but only in some sort of context, it's hard to see why I should think such a person has given much thought to her or his views. It seems like such a person should have given a more accurate and precise statement at the outset.

    >I don't see why: you can imagine a completely happy and pain free life, can you not? You are assuming (likely, but not logically mandatory) tradeoffs.<

    Yes, originally I had said that *there are* tradeoffs between the two, not that there are logically necessary tradeoffs. From your post it seemed like you were discussing an ethical system some people thought was actually viable - you said it was the view of modern utilitarians, and they presumably think their view applies to us. I would have further expected them to want their ethic to apply even to hypotheticals such as my sci-fi one.

    >That's the point: alternate ways are ethically equivalent under consequentialism but don't seem to make much sense to the rest of us, and even to some consequentialists.<

    Ah, well I happen to think that the implications (most) consequentialism(s) hold for our world and humans as they exist (rather than as economists construct homo economicus to be) coincide very well with most, though not all, common moral intuitions. But if your objection in this part was simply that it is unintuitive, it seems to be the argument of a moral intuitionist, rather than a virtue ethicist. If you were objecting that there's nothing intrinsic to the ethic that prevents it from acting in intuitively objectionable ways, I agree. Its substantive implications depend very much on the empirical matter it applies to; if we were homo economicuses, its implications would look very different, although so would most ethics' implications, for that matter.

    I hope your date went well!

    ReplyDelete
  33. Oyster Monkey wrote:
    "But surely we don't have to have universal assent to act as a society, right?"

    Right. Universal assent is not a practical necessity for a functioning society, but some strong measure of assent is. If Obama handed the reins of U.S. power over to, say, an Islamic despot tomorrow morning, revolts in the streets would not be long in coming.

    While we can't expect every citizen to do what he pleases until he is old enough to judge the moral value of the laws of his land, and give his assent to them, we can give him the option of protesting against, and changing, what he may see as immoral laws.


    "If we dispense with the notion of inviolable rights (as strict consequentialism implies we should) then the fact that someone withholds consent to a moral proposition is irrelevant."

    It's not irrelevant if a large (or otherwise powerful) enough group of people feel this way and make their voice heard, and effect change through, say, political means.


    "And even with the concept of rights we have plenty of people in our society that don't consent to laws they are legally obligated to obey. The Tea Party thinks taxes are immoral but they still have to file by April 15th."

    I agree, but that's not a problem with consequentialism, or any other moral theory. That's a problem with human nature.

    Besides, consider the alternative: some philosopher comes up with a brand new moral theory, and he has water-tight arguments in defense of its principles. That's all well and good but how, exactly, is his system going to be implemented, and made of practical use, if not by some democratic means, e.g. by gaining the consent of a majority?

    ReplyDelete
  34. If a moral action produces a consequence that is immoral, should we balance this consequence against the moral act? Or, if one consequence is moral and another is immoral, should we weigh them against each other in terms of their moral values?

    ReplyDelete
  35. I've never been fully convinced of consequentialism, specifically utilitarianism (I've been pretty hostile towards it until a few months ago), but Peter Singer has done a good job of explaining it and providing convincing arguments. Every time I hear him speak about it, I feel like he's getting closer to convincing me.

    ReplyDelete
  36. Laurence
    >"I feel like he's getting closer to convincing me."
    That's ok if you want your life and beliefs determined by runaway inferential connections...turning similarity into identicality and then necessity.

    ReplyDelete
  37. Timothy,

    > your post seems to me to be arguing that consequentialism (1) is conceptually unworkable and (2) does not align with intuition, and my comments have attempted to address such criticisms. The point here goes to conceptual viability. <

    It depends on what you mean by "conceptually." You seem to mean whether there is a way to implement consequentialist ideas. I think there are several. My problem is whether the approach makes sense from a philosophical-ethical perspective, which is why I'm not concerned with the mechanics of implementation.

    > if someone told me in all seriousness that her or his first principle of ethics was to minimize pain, and then upon considering such hypotheticals said that she or meant what s/he said, but only in some sort of context, it's hard to see why I should think such a person has given much thought to her or his views. <

    That's not what I meant. I was simply saying that your thought experiment of eliminating pain by killing the human race is a non-starter, because the background assumption of any ethical theory must be to improve the human condition, hardly something that can be done by causing its extinction.

    > But if your objection in this part was simply that it is unintuitive, it seems to be the argument of a moral intuitionist, rather than a virtue ethicist. <

    No, other approaches to ethics (particularly deontology, which I'll tackle next, and virtue ethics) consider it wrong not to have special regard for family and friends, toward which you have special duties precisely because they are close to you. So no intuitionism is necessary.

    ReplyDelete
  38. >It depends on what you mean by "conceptually." You seem to mean whether there is a way to implement consequentialist ideas. I think there are several. My problem is whether the approach makes sense from a philosophical-ethical perspective, which is why I'm not concerned with the mechanics of implementation.<

    There were a number of points where you seemed to address its conceptual viability (logical validity/rationality), as where you said it made "no sense" or was "ad hoc." In any case, I feel here we may have drifted from any substantive point of disagreement we share.

    >That's not what I meant. I was simply saying that your thought experiment of eliminating pain by killing the human race is a non-starter, because the background assumption of any ethical theory must be to improve the human condition, hardly something that can be done by causing its extinction.<

    I took your statement of modern utilitarians' aims as an imprecise one, in that they didn't mean "max" and "min" in their sense of optimization theory, but rather in some vague sense. Now it seems rather you're saying they're inaccurate statements of utilitarians' beliefs, that these philosophers smuggle in hidden premises that reflect some sort of context that the statements themselves do not capture. [As an aside, this point now seems to go to that longstanding disagreement you have with Julia over whether it's appropriate to smuggle hidden propositions into definitions. Your stance here has always struck me as more Continentalist, whereas hers is more analytic and as I understand it, utilitarians tend to identify with the analytic tradition. Regardless, I for one welcome any future post on how your conversations with Julia on the topic of definitions have progressed since last! :)] Either way seems to support the point I had made.

    Anyway, I am not so confident as you that a consequentialist who sought to min pain would see it the way you're saying they would - partly because of my sense of their writings, and partly because the only similar ethics I know of, Theravada Buddhism and most Mahayana Buddhism, seek to min suffering in particular and actually do seek the annihilation (nirvana) of life.

    >No, other approaches to ethics (particularly deontology, which I'll tackle next, and virtue ethics) consider it wrong not to have special regard for family and friends, toward which you have special duties precisely because they are close to you. So no intuitionism is necessary.<

    It's surprising to me that you'd say that deontology and virtue ethics would criticize something simply as being unintuitive, but that aside, I suspect strongly that anyone who has the opinion that family duty is primary rather than instrumental to something deeper simply has not had the family I have had. I look forward to your posts on those ethics.

    ReplyDelete
  39. kpharri,

    I think I muddied the waters by focusing on the social/political aspects of consent to moral systems. What I was trying to get at (albeit not too clearly) is that when we talk about selecting between moral systems one thing you have to focus on is the logical consequences of the basic structure, not on the likelihood that those consequences would actually be implemented or consented to.

    The fact that we would never consent to a political structure in which we killed one person to harvest his organs to give to five other people doesn't change the fact that strict consequentialism (i.e., agent neutral consequentialism without ad hoc adjustments) might support such an action.

    And in fact, your correct point that we would not consent to one of consequentialism's logical conclusions implies (though doesn't prove) that there is probably something wrong with the theory.

    ReplyDelete
  40. @Oscar

    I suppose you're right, things are never that simple in real life, and consequentialism is a bit too "idealistic".

    ReplyDelete
  41. >What I was trying to get at (albeit not too clearly) is that when we talk about selecting between moral systems one thing you have to focus on is the logical consequences of the basic structure, not on the likelihood that those consequences would actually be implemented or consented to.

    The fact that we would never consent to a political structure in which we killed one person to harvest his organs to give to five other people doesn't change the fact that strict consequentialism (i.e., agent neutral consequentialism without ad hoc adjustments) might support such an action.<

    Just to add my 2 cents here, I don't think consequentialism would support such a regime operating openly, partly since it would disincentivize going to doctors at all (which is probably the most important problem, although there are other issues as well, such as moral-hazard effects for keeping your own organs healthy). If a doctor could get away with it without anyone finding out, and had no alternative, then that would be one thing (a bit like Nurse Jackie's tendencies), but that assumption is already fairly unrealistic in most medical settings.

    More generally, I think consent is important for social trust, which in turn is important for many consequence-enhancing institutions, so while I don't think consequentialism requires consent in the strong way kpharri is suggesting, I think he's close to the truth of the ethics' implications.

    ReplyDelete
    In the classic case of pushing one person to their death in order to save five lives, what if we treat each choice as equal? Instead of saying "shall we sacrifice one life to save five," we say "shall we sacrifice five lives in order to save one."
    Are we to hide behind the fact that, in the classic case, the five deaths occur ten seconds after the one life saved?

    ReplyDelete
  43. Why are consequences always described in terms of "goods" instead of "moral" or "immoral" consequences?

    ReplyDelete
  44. Because moral is defined as whatever increases good, immoral as whatever decreases it.

    ReplyDelete
  45. Timothy,

    "I don't think consequentialism would support an open such regime, partly since it would disincentivize going to doctors at all (which is probably the most important problem, although there are other issues as well, such as moral-hazard effects for keeping your own organs healthy)."

    I agree with you that a rule based consequentialism might not endorse the organ harvest situation, but I think if the consequentialist makes the move to a rule based calculus, then the system becomes almost intractable. Given the complexity of factors that lead to overall happiness in the long run, how do you decide which rules lead to the greatest overall happiness? At least with applying consequentialism to individual actions the effects can be more or less localized. But global rules for action relate to scenarios that seem far too complex for them to be functional.

    ReplyDelete
  46. >I agree with you that a rule based consequentialism might not endorse the organ harvest situation, but I think if the consequentialist makes the move to a rule based calculus, then the system becomes almost intractable. Given the complexity of factors that lead to overall happiness in the long run, how do you decide which rules lead to the greatest overall happiness?<

    It's not a move to a rule-based calculus, but rather the selection of laws, which are considerably more constrained than the set of possible "rules" rule-based utilitarianism might consider. If anything, my response to you sketched how you examine the issue; I also touched on the issue above in my discussions with Massimo. A real-world example would be the antitrust lawmaking of the FTC and the Justice Dept., as they're both fairly explicitly utilitarian, although to be sure, they have constraints that a utilitarian lawmaker in the abstract wouldn't. It's really just an issue of optimization under incomplete information.

    The point in this particular case is that utilitarian calculations on the level of, say, the individual doctor often may face constraints that arise (consistently with utilitarianism) from other actors, such as the median legislator - and that hypotheticals such as this doctor one often obscure this feature of the ethical system, since they just present the individual situation while not mentioning the role other actors may play. As I said, if you can (albeit unrealistically) isolate the individual actor from the other actors' constraints, as with the doctor who can guarantee nobody finds out what s/he might do, then the simple implication can hold. But people don't think of that and then come away with wild impressions of doctors killing every 6th patient or whatever.

    ReplyDelete
  47. Massimo
    > "Because moral is defined as whatever increases good, immoral as whatever decreases it."
    This definition allows a discussion of consequentialism to dodge difficult questions about immoral consequences as a result of proposed actions. In the classic case of pushing one person to their death in order to save five lives, what if we treat each choice as equal? Instead of saying "shall we sacrifice one life to save five," we say "shall we sacrifice five lives in order to save one."
    Are we to hide behind the fact that, in the classic case, the five deaths occur ten seconds after the one life saved?

    ReplyDelete
  48. Timothy,

    It seems to me that you're sliding from ethics to politics. Or maybe you think the distinction is spurious. But a legal framework is always going to be silent on a broad range of actions that are legally permissible, but which moral theory might say are required or forbidden.

    Maybe I'm slow on the uptake today, but I don't think I get what antitrust legislation has to do with consequentialism as a moral theory that is used to guide individual action.

    ReplyDelete
    >It seems to me that you're sliding from ethics to politics. Or maybe you think the distinction is spurious. But a legal framework is always going to be silent on a broad range of actions that are legally permissible, but which moral theory might say are required or forbidden.<

    My point was that utilitarian calculations on the level of the individual actor often may face constraints that arise, consistently with utilitarianism, from other actors and that these discussions often omit such constraints. You had assumed I had been discussing rule utilitarianism, but instead I meant to discuss how lawmakers (themselves individual actors, after all) can act to constrain others under utilitarianism (without turning to rule utilitarianism). I agree of course that laws will not touch on every moral issue, or even most.

    The antitrust example went to your question about how to "decide which rules lead to the greatest overall happiness." I thought a real-world example might be more helpful than my more abstract discussion with Massimo. Perhaps I was being confusing or unclear by providing a law example to your rule question. If so, my apologies.

    ReplyDelete
  50. Timothy,

    Ah, I see. That makes a lot of sense from a practical perspective. The only thing I would add is that I think the role of the simplified thought experiments is to focus in on what the theory itself logically implies (i.e., its morally relevant features), not what is likely to happen in admittedly messier real world situations. And I think there's a value in that. If you're evaluating the consequences of a moral theory and the negative aspects are mitigated by bringing other considerations that aren't internal to the theory into the discussion, then it might end up looking more viable than it should. Or the converse, the barriers to implementation might make us discard a theory as conceptually inadequate when separating the two aspects might allow us to affirm the theory and explore other options for implementation.

    ReplyDelete
  51. Oyster,
    I largely agree; such simplified hypotheticals have a great deal of usefulness. However, in this context I take issue with the distinction you're drawing between "what the theory itself logically implies" and "what is likely to happen in admittedly messier real world situations." The considerations I was offering *were* what the theory logically implies for situations more likely to arise than the hypothetical under consideration. Just as simple hypotheticals can identify a theory's implications in extreme situations, it is also worthwhile to consider the theory's implications in more elaborate hypotheticals. After all, some theories could be much less attractive in practice whatever their abstract, simplified appeal. Furthermore (and what I was reacting to), considering just a simple hypo can create a misleading picture of the range of what the theory logically implies.

    ReplyDelete
    Timothy, I definitely agree that the "intuition pumps" (as Dennett calls them) that philosophers use are sometimes too simple to be useful, but I think it's important that the complications that are added to them to make them more realistic are related to the theory itself, not simply external constraints that would obtain while implementing any moral theory.

    ReplyDelete
  53. >I think it's important that the complications that are added to them to make them more realistic are related to the theory itself, not simply external constraints that would obtain while implementing any moral theory.<

    I'm not sure what the distinction you're drawing here is. What makes something "related to the theory itself"? Consequentialism concerns consequences, so what would actually happen as its consequence would seem to be its concern.

    ReplyDelete
  54. Timothy, let's go back to the organ harvest thought experiment. Let's say an advocate of consequentialism (and I don't mean this to be putting words into your mouth) responds by saying, "That example is too simplistic. We need the example to account for the fact that harvesting organs is likely to increase the cost of medical malpractice insurance because of lawsuits, driving up medical costs for everyone, thereby decreasing overall happiness. So consequentialism would not endorse organ harvesting."

    This sort of response misses the point of moral thought experiments. The legal framework of our particular country doesn't change whether consequentialism, to be consistent as a theory, should endorse organ harvesting in principle. Insisting on adding more "real life" details is changing the question, which, incidentally, is not necessarily a bad thing. There is such a thing as a stupid question, and you could probably argue that some of the traditional thought experiments are stupid questions.

  55. Oyster,
    I'm sorry, but now I really don't understand you. Before, you were suggesting that the point of your hypo was more "related to the theory itself" than more elaborate hypotheticals (which are moral thought experiments too, after all). Now, it seems like you're saying that considering different hypotheticals is tantamount to dodging the particular hypothetical at hand. If so, then I have to say I bristle at that suggestion, as I never did so, if you'll review my replies to you. (If not, then my apologies for bristling.)

  56. First, I wasn't trying to say that you were dodging the question. No offense intended.

    Let me try it this way. In any thought experiment some things have to be stipulated, right? So in the organ harvest thought experiment the question that's being asked of consequentialism is this: assuming that killing one healthy person to distribute his organs to five other people increases overall happiness (this is the stipulated part), would consequentialism endorse this action? Now, if the defender of consequentialism says "No," we would expect a rationale for the answer based on the theoretical structure of consequentialism, not based on constraints that are incidental to the theory. That's all I was getting at.

  57. Ah, I appreciate that then. But yes, a max-happiness consequentialist should say yes to it, given that rather decisive stipulation (other consequentialisms may not, of course). I suspect you're using "incidental" in a way I'm not following, but perhaps it's best to let it lie.

  58. Massimo, an excellent summation. I have many of the same issues and concerns.

  59. And if we hadn't entered WWI on the side of the Allies, then Germany would have won, there would have been no Hitler, and (probably) no Holocaust or WWII, although Germany would have eventually invaded Russia.

  60. Said, Miller and others, indeed. Please study [Google] covenant morality for morality, the presumption of humanism, where I find it paradoxical that an objective morality can depend on wide reflective subjectivism, and consequences make for rules [deontological] and virtues [virtue theory]. I rely on John Beversluis's dissection of Lewis's nonsense about subjectivism in "C.S. Lewis and the Search for Rational Religion," where he defends wide reflective subjectivism [my term, based on Kai Nielsen's wide reflective equilibrium] and notes two objective conditions. I take "objective" as intersubjective: consequences all can discern and, like science, factual, provisional [no A. Rand closed system!] and debatable. This is situational.
    http://lordgriggs1947.wordpress.com about that certain intellectual, emotional and sometimes monetary scam of the ages and its scam artist supreme.


Note: Only a member of this blog may post a comment.