by Julia Galef
I hope Massimo won't start regretting his generous invitation for me to co-blog with him (hi readers! great to be here!) if I kick things off by immediately and publicly disagreeing with him. He and I have been having a debate on moral philosophy for the last few weeks, and after the twentieth iteration of the same arguments we decided it makes sense to invite you all to weigh in, at the very least because we're tired of the sound of our own voices by now. Massimo asked me to lay out the debate, and then he'll follow up with his own post next week.
So, I agree with Massimo that moral reasoning is possible, given a set of initial axioms. (Axioms are the starting assumptions on which all of your moral judgments are based, like the concept of certain fundamental rights, or tit-for-tat justice, or protecting individual liberty, or maximizing total happiness). Where I disagree with him is over his belief that it is possible to use scientific facts to justify selecting one particular set of initial axioms over another.
Roughly speaking, Massimo starts with biological and neuroscientific facts such as "Human welfare requires things like health, freedom, etc." and "Humans are wired to care about each other's welfare," and from these he derives the conclusion, "Therefore, it is moral to act in a way that increases those things which are necessary for human welfare." In my opinion, this is an example of what is sometimes called the naturalistic fallacy: telling me scientific facts doesn't tell me how to act on those facts, and the alleged point of moral principles is to tell me how to act. Science can tell me that if I want to make other people happier, then treating them in certain ways -- giving them health, freedom, and so on -- will accomplish that goal. But science can't tell me whether making other people happier should be my goal.
Alternately, you could use evolutionary biology and neuroscience to argue that being kind to others is the best way to maximize one's own happiness, thanks to the way our brains have become wired over the course of our evolution as social animals. I agree that there's some truth to this claim, but I deny that we can derive any moral principles from it -- it implies only an appeal to self-interest that happens, through lucky circumstances, to have positive consequences for others. (Furthermore, if your moral imperative takes this form, the implication is that if for some reason I were wired differently, then being unkind would not be immoral.)
The difficulty of deriving facts about how people ought to behave from facts about how the world is was most famously articulated by David Hume in his A Treatise of Human Nature (1739):
"In every system of morality, which I have hitherto met with, I have always remark'd, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surpriz'd to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, 'tis necessary that it shou'd be observ'd and explain'd; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it."
This is called the "is-ought problem", or sometimes "Hume's Guillotine" (because it severs any connection between "is"- and "ought"-statements). My understanding is that Hume is generally believed to have meant not just that people jump from "is" to "ought" without sufficient justification, but that such a jump is in fact logically impossible. There have been a number of attempts to make that jump (here's a famous one by John Searle), though I've found them pretty weak, as have other people with much more philosophical expertise than me.
With that in mind, I can't see any way in which a claim of the kind Massimo is making -- "doing X increases human welfare, therefore X is the moral thing to do" -- could logically hold, unless you're simply defining the word "moral" to mean "that which increases human welfare," in which case the statement is tautologically true. But I'm not sure what we gain by simply inventing a new word for a concept that already exists.
Fortunately, even though I think the blade of Hume's guillotine is inescapably sharp in the philosophical world, I don't think it has the power to sever much in the real world. Because, thanks to some combination of evolutionary biology and social conditioning, I do enjoy being kind, and I do want to reduce other people's suffering -- and I would want to do those things even without a rational justification for why that's "moral." And I believe most people would feel the same way.
But if someone didn't care about other people's welfare, I couldn't accuse him of irrationality. He would be committing no fallacy in his reasoning, nor would he be acting against any of his own preferences. (If he wanted to increase human welfare and yet he knowingly acted in a way that reduced human welfare, then I could legitimately call him irrational.)
Massimo, I believe I've represented our disagreement accurately, but please correct me if I haven't! *thwack* Ball's in your court!
I agree with everything you've said. Nature is amoral. I'm curious to see Massimo attempt to escape the tautology point you made.
Our psychology reflects our past as a species. The specifics (if they exist) are very hard to find, but Science is able to collect useful knowledge that may give us information as to whether some sets of axioms are better than others. Perhaps, in the future, we will have enough knowledge to state, with some degree of certainty, that human persons should choose axioms X to maximize overall happiness or any other criteria. But I feel that, even then, that knowledge would not be sufficient to decide that moral/social question (why, I'm not sure, but perhaps you have some clues for me :-)
Another way to evaluate different starting axioms is to study historical evidence of how societies that preferred certain types of morality treated their citizens (controlled human experiments would, of course, be quite unethical).
Galef is on target. The problem with so many moral realists is they say things like "there are empirical facts about which behaviours increase human flourishing" as though this means that anyone should do anything or not. There's nothing more rational about thinking that flourishing is good than there is about thinking that flourishing is bad. I could say that "goodness" is nothing more than the production of radiation. Then you could use science to determine which actions are right or wrong...
If we accept that humans generally have certain characteristics, and that acting in a particular fashion in light of those characteristics will benefit most humans, why question the propriety of acting in that fashion? What more would be required to establish what proper conduct would be, i.e. what is "moral" under the circumstances?
One wonders why it is necessary, or whether it even makes sense, to claim that something more or different is required in order to establish what is moral.
Hume is right.
But Massimo's tautology doesn't negate Hume because morality is a condition. You're implying that Massimo is asserting universality.
Maybe he is, but I don't see it.
"being kind to others is the best way to maximize one's own happiness, thanks to the way our brains have become wired over the course of our evolution as social animals. I agree that there's some truth to this claim, but I deny that we can derive any moral principles from it -- it implies only an appeal to self-interest that happens, through lucky circumstances, to have positive consequences for others."
This alternative makes a lot of sense to me. It explains all the data, requires no fuzzy thinking, and provides a great reason to be moral.
"(Furthermore, if your moral imperative takes this form, the implication is that if for some reason I were wired differently, then being unkind would not be immoral.)"
I suspect that this is the real crux of your objection to this alternative. However, just because you don't like this consequence, it doesn't make it untrue.
Luckily, extraordinarily few people are wired sufficiently differently. The vast majority of those who, to use a short-cut term, behave "immorally", are misguided and not acting in the interest of their own happiness.
At the risk of having rotten tomatoes thrown at my window by angry philosophers (sounds like a Monty Python sketch)...
As usual when I read these philosophical arguments I find myself wondering what the point is, i.e. what is the benefit of arguing the point or even reaching an agreement (if even possible). (I'm not having a go at philosophy so much as admitting my ignorance).
We form societies because evolution has formed us that way; we form laws because those societies won't work without them (and because we don't want someone to kill us just because we annoyed them, etc.). So the question is, what benefit do we derive by discussing this - I really am seeking an answer, not being argumentative for the sake of it!
I agree that the is-ought gap can't be fully bridged in this way. But I wonder if it's possible to reduce the distance of the leap necessary.
The problem, as I see it, is that valuing the welfare of humans seems arbitrary (and thus potentially irrational) if you're not a human. So what is it about "humanness" that's worth preserving and nurturing?
Although I don't think an answer to that question would bridge the gap entirely, it might at least take a step in that direction, by showing how we might make moral judgments about the welfare of other kinds of beings -- and how other kinds of beings might make judgments about our welfare!
I agree. There is no way to use science to select a particular set of moral axioms. In addition, I don't believe it is even possible to act in a way that everyone would perceive as moral according to a given set of axioms. There is always the possibility of conflicts of interest.
Also, I fear (but can't prove) you would hit up against analogous problems that arise in Mathematical Logic: Any consistent and powerful enough axiomatic system will be incapable of proving certain truths. This means there would be undecidable moral dilemmas in such a system.
I think that if morality is going to be justified or explained using evolutionary criteria, then morality ceases to exist. If there is a psychological reward associated with promoting the welfare of others, then there may very well be or have been a selective advantage which allowed this reward to evolve and be maintained. (There are at least 3 assumptions in that previous sentence that may not be true, but nonetheless let's continue this thought.) For example, it almost certainly is advantageous to help family members and members of your tribe, clan, or other small group. However, it is almost certainly not advantageous to help non-family members or members of out-groups. There are finite resources, and allowing a rival tribe to hunt on "your" land may limit your family's ability to survive a winter or famine. So we can point to psychological rewards associated with promoting each other's welfare as evidence for this being moral. However, we could point to the psychological rewards associated with harming, allowing harm, or simply ignoring harm as evidence for this being moral.
Evolutionarily, it makes biological sense that we promote the welfare of close relatives, much as it makes sense that we reproduce faster than we kill each other. If we killed each other faster than we could reproduce, then we would be long extinct and not having this discussion. So if we decide that murder is immoral because of the evolutionary selections (i.e., scientific facts) involved, then this makes having as many offspring as possible a moral act, even if you cannot provide for them. Actually, this is a useful approach: since it seems to be agreed here that most people want to promote the welfare of others, others will pick up the slack while you are busy generating more and more children.
If providing food to a hungry human is a moral act, then vampire bats must also have morality. In fact, using this as a biological justification for morality suggests to me that all social animals have a moral code, as do plants, some of which release pheromones in response to insect attacks to allow other nearby plants to initiate protective measures.
Thanks Julia for your clear presentation of this very important issue. While I agree with you that empirical evidence and pragmatic concerns fall prey to Hume’s guillotine, I don’t think that this admission necessarily means that a rational basis for morality is impossible.
Intuitively, we know right from wrong, justice from injustice, and also that we should be held accountable for our moral failures. If these powerful intuitions are merely the product of chemical-electrical reactions, then it would seem that we are illegitimately proceeding from an "is" to an "ought" when we try to derive a moral system. However, if it could be logically-rationally demonstrated that there is an additional basis for these intuitions – a transcendent and authoritative Reality standing behind and enforcing our intuitions – then the "is-ought" distinction is dissolved. (God "is" and also embodies an "ought.")
Seems to me you better define moral before we continue.
What does it mean to be told how to act?
I agree with Julia. Hume had it right, there's no way to cross the line from is to ought. Also, I smell a little begging of the question in the use of the word "welfare". Faring "well" is a value judgement, not a statement of fact. Personally, though I haven't read the ongoing debate, I suspect Massimo's just helping you hone your chops.
Pretty much side with Julia, Derek, Ritchie, and Joanna. Morals, ethics, truth and fiction don't belong in philosophical conversations anymore. As stated above, our actions are driven by self-interest. It is fine to talk about moral imperatives and their basis in science, but please define good and bad in objective terms. Cannot be done. It's all subjective, and what is good for you and three other people may well be bad for me. What is true for me and three other people may be bad for you. It's like the previous discussion about info theory a la Shannon - proves lots about content, nothing about meaning. Massimo may well be trying to build a religion from secularism (which in principle I agree with and laud) but gets one nowhere when we consider the effect of our actions on non-humans (plants, animals, and plenty of other stuff).
Julia Galef, you have expressed very nearly exactly my own thoughts on that issue; now I am looking forward to the counter-argument.
I think a major problem here is that there is a certain kind of mind who wants to have a preferably universal moral code served to him/her from the outside. Some expect it to come on stone tablets from a holy mountain, others expect it can be found by sitting in a philosophy institute and thinking hard, others commit the naturalistic fallacy and do not even notice it.
IMO, there is no shortcut because morals are a human invention, simple as that. As a biologist, I have a perspective that is, not surprisingly, somewhat informed by evolution: nature puts certain constraints on our moral systems insofar as we want to form working societies. These societies can only work if certain morals are accepted, e.g. stealing, lying and murdering must be vices, bravery and charity must be virtues. Try reversing these states and you will see that such a society will break down. But that does, just as JG writes, not mean that this is moral by any universal, demonstrable standard; who, after all, has decreed that forming working societies is what we should do?
I at least can live with the conclusion that we (as a society) can and have to decide and negotiate what is moral ourselves, and that there is no universal agency to decide them for us. Would that everybody could live without that imagined safety net.
Joanna Masel slips here:
ReplyDelete"The vast majority of those who, to use a short-cut term, behave 'immorally', are misguided and not acting in the interest of their own happiness."
I will use this brief phrase to elucidate my take on the existence of moral truth.
Suppose that we define morality as the maximization of happiness and/or minimization of suffering. What, then, does it mean to say that something is good? It merely means that it causes happiness. "Bad", in turn, just means "causes suffering".
This is clearly a manipulation of language - the use of a "short-cut term", as Masel puts it, to attach a normative gravity to a descriptive concept. Contrary to the suggestion of ciceronianus, much more is necessary to make a leap from the descriptive to the normative. To be normative is not to just say that the world is a certain way - it is to say that something in the world ought to be different, that people should do certain things, that some circumstances are better than others, and so forth. To define goodness as the maximization of happiness (or minimization of suffering) is to merely couch descriptive concepts in moral language. There's something missing, the core of morality: the disconnect between how things are and how they "should" be.
Moral realists are forced to play language games of this sort, however, because the alternative is hopeless: to claim that there really are facts about how the world ought to be. To claim this is to claim that the existence of certain real-world facts somehow makes it such that certain actions are better than others.
sLx misses a critical point: the idea that happiness is good is itself an axiom, chosen by most people because they like being happy. This doesn't change the fact that it's an axiom like any other, and one not backed up by science or sound philosophical argument in any way. I challenge anyone who believes that happiness just has to be good to explain why it is better for people to be happy than for them to suffer.
As long as Massimo believes in some form of fatalism (although we may have "elbow room" or the ability to contemplate our predicament, if I understand him right from previous posts), the search for moral principles would be a secondary and rather meaningless chase.
Sure, we experience morals in everyday life. But these guidelines must also be deterministic (or arbitrary, if you like quantum apologists) and therefore a given. Things are what they are.
So much worse for the human condition which, I presume, is the only way it can be.
With that in mind, I can't see any way in which a claim of the kind Massimo is making -- "doing X increases human welfare, therefore X is the moral thing to do" -- could logically hold, unless you're simply defining the word "moral" to mean "that which increases human welfare," in which case the statement is tautologically true.
I'm not sure if I agree with Massimo or Julia, or both, or none, so I'll just state what I think.
Sidestepping Hume's guillotine, I define "moral" as "according with moral sentiment" which is that reflexive appraisal of right or wrong we're all familiar with. A "moral philosophy" is an MSMS--Moral Sentiment Management System--a prescriptive framework of concepts, a protocol, a model, that (ideally) maximizes positive moral sentiment and/or minimizes negative moral sentiment, among and within its adherents: "how we should live."
The syllogisms of moral argument tend to be this:
1. X implies Y
2. Y accords with positive moral sentiment
3. To accord with positive moral sentiment is "moral"
:. X is moral
Alternatively,
1. X implies Y
2. Y accords with negative moral sentiment
3. To accord with negative moral sentiment is "immoral"
:. X is immoral
X is an action/behavior/personality attribute/maxim advocated or discouraged by the moral philosophy. Implication Y is conceived and appealed to both in defense of X and in refutation--the meat and potatoes of philosophical discussion. Premise 3 tends to go unstated but is, as I've opined, the basis for moral theory.
Where does Science fit in? Premise 2, mostly. Through scientific inquiry we uncover the character of our moral sentiments, the Ys, which is undoubtedly useful to the moral philosopher. In knowing a Y you can reverse-engineer an X. In exploring the implications of an X you do so with parameters; beware the Y of ill-repute!
That said, it is a mistake to think that moral philosophy (the _best_ moral philosophy, rather) can be derived solely from Science. At the very least, moral sentiment has proved inconsistent. It is not uncommon, for example, for self-righteous parties to view each other as reprehensible. Evidently, the Y of moral sentiment can be excessively broad. This is why we need philosophers.
Roughly speaking, Massimo starts with biological and neuroscientific facts such as "Human welfare requires things like health, freedom, etc." and "Humans are wired to care about each other's welfare," and from these he derives the conclusion, "Therefore, it is moral to act in a way that increases those things which are necessary for human welfare."
If this accurately represents his position then he has a serious problem to overcome: it suffers from something rather similar to the Euthyphro dilemma.
If we happened to have been wired differently, even radically differently, what is right would be radically different.
Coincidentally, I'm currently reading a science fiction novel, PRIMARY INVERSION, in which exactly this scenario is being played out. 6000 years in the future one of several galactic empires is ruled by an offshoot of humanity that is biologically wired to be sadists.
I'm rather suspicious of a theory about morality that would have to conclude that sadism is therefore (at least for this subspecies of humans) a virtue.
I think we need to go a bit deeper than contingent biological wiring to look for the basis of the good.
For example (using ideal observer theory--the approach that I've so far found most fruitful), we can ask what kind of being, one programmed for sadism or for compassion, it is most worthwhile to be.
Which alternative tends to lead to better lives for individuals with those characteristics and societies made up of individuals with those characteristics.
It seems pretty clear that a rational person would find the latter intrinsically preferable (though I'd be glad to hear someone argue that the contrary is the case).
Of course, my approach still leaves us with a difficult question: supposing you're so unfortunate as to be biologically programmed for sadism, what then?
But I think that may be more a practical question than a question about what the good is and what is its basis. Whatever metaethical theory one may subscribe to, we still find ourselves in a world where some seem to be wired to be ethically deficient (sociopaths being the obvious example).
With that in mind, I can't see any way in which a claim of the kind Massimo is making -- "doing X increases human welfare, therefore X is the moral thing to do" -- could logically hold, unless you're simply defining the word "moral" to mean "that which increases human welfare," in which case the statement is tautologically true.
The solution, perhaps, lies in human welfare being an intrinsic good--something worth desiring in and of itself. Though I disagree that contingent human wiring is what will lead us to the answer to what is an intrinsic good and what isn't. Again, ideal observer theory seems to me the most fruitful way to approach that question.
Re: The Lorax
I think there is a lot of confused thinking going on in your post...
I think that if morality is going to be justified or explained using evolutionary criteria, then morality ceases to exist.
We can still use that word for rules in our society even if there is no universal moral code of natural or even divine origin.
So if deciding that murder is immoral because of the evolutionary selections (ie scientific facts) involved, then this makes having as many offspring as possible a moral act, even if you cannot provide for them.
Er, what? Firstly, JG argues against such naturalistic thinking. Secondly, no, evolutionarily it only makes sense to produce offspring that actually grow up, and that includes somehow making sure that they are being provided for. Thirdly, what you are describing after that is an evolutionary strategy actually taken by many male animals, including deadbeat-parent humans, but that does not mean it can be moral behaviour from the perspective of a society. Since a lone human on an island can do whatever he pleases anyway, talking morals is only ever talking about social interaction.
If providing food to a hungry human is a moral act, then vampire bats must also have morality.
And your point/problem is?
Re: Mann's Word
By all means go ahead and provide evidence for god(s). Until then, you are doing it again: Concluding that a certain state of affairs would be desirable from your perspective, and then taking that as a justification for believing that it is actually true.
I also doubt your "intuitively" - are you sure a child that grows up without being taught social interaction will "know that it should be held accountable"? My 6-month-old daughter does not even know that she cannot have everything she sees before I teach her that; how do you assume she is born with complex ethical instincts?
Re: Helten
You should go back to the discussion on free will, read it over, and consider carefully what I already tried to explain there. It does not make any difference, really. You have not understood the point of Massimo's position at all.
Re: David B. Ellis
If we happened to have been wired differently, even radically differently, what is right would be radically different.
I do not think that is a big problem, as, being humans, we are always only interested in what is moral behaviour for humans, even if there may be some aliens 500 light years away who see things differently. Then again, I very much doubt that a society regarding sadism as moral would work in real life, so alien morals are likely to be comparable to ours.
Sounds like an interesting novel, though.
Mintman: "I at least can live with the conclusion that we (as a society) can and have to decide and negotiate what is moral ourselves, and that there is no universal agency to decide them for us. Would that everybody could live without that imagined safety net."
Oh that would describe Haiti perfectly. After this week's quake the government collapsed and all the policemen ran away. And that is the philosophy of life there. Everyone for themselves.
There would be no orphanages in Haiti except for the fact that many missionaries come there to establish them and work in them. That, Mintman, is a TRUE safety net and a needed one at that. You have no idea what you are advocating for, I think.
What I think you are REALLY saying is that you would object to anything like truth with capital "T" informing your decisions. Well, in a society like that, FAR FROM KINDNESS JUST BEING A "NICE" THING TO DO, the weak and poor REALLY DO SUFFER if no one cares enough to intervene.
(One of my older sisters worked in an orphanage as an RN in Haiti for years and has a son adopted from there as well.)
It seems like the goal of humanity is to maximize the life of our species (we don't want to go extinct right?). Science can surely show that a healthy environment and healthy humans contribute positively to this goal.
If air, water, food, and other environmental factors decline in quality, this will negatively affect us as a species. (It's possible that evolution could select a small subset of humanity to continue to survive under very adverse conditions, but this is a possibility I'm sure most people would want to avoid.)
If people are physically or mentally stressed (and I mean in an unnatural way), this also has a negative effect on our long-term survival.
Can we not now make decisions and judgments based on something like this?
To be a bit facetious (though still serious), I still don't know what this whole discussion is about at all.
Ian said: "Sidestepping Hume's guillotine, I define 'moral' as 'according with moral sentiment' which is that reflexive appraisal of right or wrong we're all familiar with. A 'moral philosophy' is an MSMS--Moral Sentiment Management System--a prescriptive framework of concepts, a protocol, a model, that (ideally) maximizes positive moral sentiment and/or minimizes negative moral sentiment, among and within its adherents: 'how we should live.'"
Defining "moral" using the word "moral" doesn't seem to go anywhere at all. Could someone please define what it means to be moral?
If morality is about "how we should live", then what does that "should" mean? Should, or else... what?
Thanks in advance.
Philosophers for centuries have wasted their time, and that of others, on such questions as "are there other minds?" or "is there an external world?" I think the question "what is moral (right, good, etc.)" is much the same. Peirce, I think, was right to poke fun at Hume and Descartes. Dewey was right as well when he wrote that philosophy will recover itself when it ceases to deal with the problems of philosophers, and seeks instead a method to resolve the problems of men. That morality cannot be established in the same sense that other things may be established (i.e., the temperature at which water boils) should be a non-issue. That it is a human construct is obvious (what else could it be?) and insignificant. We make value judgments all the time; some are better, and more reasonable, than others. Perhaps we should focus on how to make the best possible value judgments, rather than wondering what is "really" valuable.
The Lorax,
ReplyDelete"However, it is almost certainly not advantageous to help non-family members or members of out-groups.... Evolutionarily it makes biological sense that we promote the welfare of close relatives."
Is this always necessarily true? What about in environments where resources aren't scarce? Obviously the primary mandate for all living creatures is to reproduce, but in conditions of plenitude, what selection pressure is there against benevolence to outsiders?
It seems to me that there are plenty of good evolutionary arguments -- even from the "selfish gene" perspective -- that all sorts of benevolence can be adaptive.
Ian - interesting construct, but it is too easy to take that and find some way to create groups whose individuals will self-identify as moral creatures based on a large percentage of interactions within the group. Outside the group, all or some of those actions can be construed as immoral. We do not even need to leave the human race to do this. Witness the actions of a military organization, both from within and without. Every act of aid by one soldier to a fellow soldier can be seen as immoral.
Morals and philosophy just do not mix, because my simplistic definition of philosophy is 'a search for truth'. Never mind that truth and fiction are simply concepts, but they seem more worthy than morals and ethics, which, when created and adhered to, immediately provide cover for indecent behavior hidden beneath the surface.
Ritchie, I do define morality with regard to happiness, but only a particular kind of happiness, a no-regrets peace-of-mind kind. The seemingly normative nature of the concept is for me a short-cut for the fact that pretty much every human on earth would, if it could be achieved easily enough (i.e., with less discipline than is typically the case), love to have that particular kind of happiness, generally trumping other kinds.
Ian, it seems a crazy claim to me that Science's role is to elucidate moral sentiment. Science has no history of success in this regard. This seems something better achieved by introspection. It is also where the concept of a spiritual guide comes in.
The position I am putting forward is a standard one in yogic philosophy, according to which the entire purpose of acting "morally" is to achieve a calm mind. I like the idea of a moral-sentiment-management-system. The yogic philosophy stresses not so much the precise rules to determine what is most moral in a difficult dilemma, but simply that you have made your best possible effort in the direction of morality, however you understand it.
Sociopaths do make an interesting counterexample. But they account for a small minority of "immoral" behavior.
Caliana:
I have no idea where you get this from. The question here is whether there is one universal, definitive system of morals that can somehow be objectively deduced. How do you get from the opinion that this is not the case to nobody building orphanages? Because that makes so little sense, and especially because of your mention of missionaries, I fear that you belong to those who believe that without Christian religion, we would all have no morals. To repeat what I wrote in another discussion:
Seriously, what is so difficult about that? Why must you religious types always assume that without god handing the meaning of life down to us people can only commit suicide? That without god setting moral guidelines people can only axe-murder each other? And in this case, that without some magical mo-yo that is not subject to the dirty laws of physics and chemistry we must simply sit around and drool? Watch me. Watch every other atheist. We do not commit suicide, we do not commit crimes, we are not more apathetic than believers. Your imagined problem simply. does. not. exist.
If I misunderstood your ranting, please disregard this answering rant, but that is what I understood. Inventing a cosmic law-giver god like the Christians did would, by the way, be among those things that I meant with decide and negotiate what is moral ourselves, it is simply the case that I would like to arrive at a humane society without the detour over superstition, well-meant lies and delusion.
It may be that morals imply guidance; therefore, everyone following moral guidelines can be said to be moral. But this can never be said about those who create the moral guidelines, because they are free to change them.
By the way, Caliana, I am also unimpressed by the reference to missionaries. It may well be that some of them, maybe even a good number, do their humanitarian work because they are intrinsically good and generous people. But if they are already, then why do they need to carry that religious baggage with them at all? The point is, with people who claim to be inspired to do good deeds by their religion, the suspicion remains that their real motivation is not generosity and altruism per se, but one of the following: being blackmailed by their god who threatens to throw them into a lake of fire if they do not do good deeds; trying to impress their god because they believe that good deeds will be rewarded; trying to spread their faith by bribing other people with humanitarian aid; or trying to spread their faith by getting their hands on defenseless, easily impressionable children (like, say, orphans). All these are purely selfish motivations. An atheist doing the same acts impresses me much more, as they certainly do not expect any heavenly reward from it.
It should also be mentioned that Haiti would probably have plenty of orphanages if they threw all missionaries out of the country but were at the same time as rich as the USA. They are likely just as moral as every other society on the planet, but they probably never had the resources to spare.
Julia,
I loved this, up until the last paragraphs where you try to resolve the problem through establishing that we are naturally good in the "real world". This seems like an abdication of "ought" after all, and a kind of moral embrace of whatever happens to be going on in the lives of "most people."
On balance we may all be pretty good to each other, all things considered. But moral philosophy exists to help us resolve internal conflicts, where biology and social conditioning (to whatever extent this is not also biological) seem to provide conflicting information. Do you see no role for moral agency?
The concept that we are good on balance is probably a good corrective to the long-standing Christian notion that we are fundamentally sinful, but it doesn't really help us navigate the immense moral choices we face. I think you end up here joining Massimo in a kind of "que sera sera" fatalism.
The notion of "sinful humanity" is useful to the extent that it is an antidote to the smug self-satisfaction we lapse into so easily. Taken in excessive doses, however, it does become toxic.
The following poem by Wislawa Szymborska captures the issue perfectly.
In Praise of Feeling Bad About Yourself
The buzzard never says it is to blame.
The panther wouldn't know what scruples mean.
When the piranha strikes, it feels no shame.
If snakes had hands, they'd claim their hands were clean.
A jackal doesn't understand remorse.
Lions and lice don't waver in their course.
Why should they, when they know they're right?
Though hearts of killer whales may weigh a ton,
in every other way they're light.
On this third planet of the sun
among the signs of bestiality
a clear conscience is Number One.
The problem alluded to here -- that if someone doesn't care about other people, lacks empathy, and is unconcerned about the effects of his/her actions on others, then we have no good argument for their behaving morally that doesn't involve coercion -- is surely unsolvable. But it isn't clear that the deeper problem involving a lack of grounds more generally isn't a quite general unsolvable one.
So if someone doubts that Q follows from the pair of sentences "If P then Q" AND "P", it isn't at all clear how one can rationally argue to the conclusion that they ought to. That is, all of our tools for arguing about rationality depend on our already accepting some basic axioms of logic...
Similarly, if someone is a committed immoralist, it isn't clear how we can convince them to act morally.
I don't, however, take that to be Massimo's project.
Rather, I take Massimo's project to be something like the following:
1) Hume gives an account of moral sentiments that describes why we care about morality.
2) Evolutionary reasoning can explain, in part, why we have the moral sentiments we do (and perhaps even why some people lack them)
3) Given the moral sentiments we in fact have, we can use our knowledge of the world and our ability to abstract from our particular circumstances to better / more consistently apply our moral reasoning.
It is, as I have understood it from past conversations, a clever combination of Hume, Kant, and Mill. Does it yield the conclusion that morality follows directly from foundational logic? No, it doesn't (so Massimo thinks Kant was wrong about that). But it does explain, in part, why we ought to be able to argue for certain moral norms on the basis of the kinds of creatures we are, our knowledge of how the world works, and our ability to abstract away from our local circumstances.
(Hey Massimo -- let me know if I've got you wrong here!)
And that might be all the "moral realism" we need...
As for the "problem" that if we evolved differently we might have different moral norms, or nothing that we would recognize as moral norms, I don't see it as a problem for moral realism in the weaker sense I'm proposing. Yes, we can't, looking at ants, say coherently "oh, how immoral!" or "how moral!" If we find truly alien intelligent beings, it might turn out that we cannot hope to understand each others moral intuitions. But it might be that understanding each others moral intuitions is a prerequisite for understanding each other as intelligent / meaningful creatures in any event, so the point might be moot.
The harder question, I think, is how we move beyond this kind of sketch to a position that would permit us to coherently criticize those people whose behavior we find morally repellent (and who at least claim to find ours similarly awful).
Jonathan
I do not think that is a big problem, as, being humans, we are always only interested in what is moral behaviour for humans....
I, for one, am rather strongly interested in the question of whether there are moral truths applicable to all sentient beings and not just our own species....as I suspect a great many moral philosophers would be; both as a matter of pure intellectual curiosity and for practical reasons.
We're rapidly approaching the time when the technology will exist to alter what it means to be human in very fundamental ways---not to mention the possibility of interacting with non-human sentients in the form of AIs and uplifted animals. I wouldn't be in the least surprised if humanity is dealing with both a century from now.
I could also say, Mintman, that the smarter you are and the better off you are, the less concerned you might be with what it is exactly that missionaries do for people in impoverished situations. If you're willing to step up and do the same, I'll meet you there.
One of my sisters has a ministry that feeds about 2500 street children in Guatemala every week. There is no expectation whatsoever placed on these children, but they likely will hear about Christ at some point in time. The stories of many of the children's lives would blow most thinking people's minds. Some children often sell their bodies so that they can eat. One little girl of about 9 is raped by her three older brothers every night!!! So ultimately, the goal of the people who work with the children is to get them off the street and hopefully show a better way to live than selling themselves for food. I have yet to see the flaw (and selfishness) in working to get these children into a better situation than they are in.
Mintman, you have a deeply cynical view of why people do things. If it's true that Christians only do things for selfish motives, it goes without saying it must be true of everyone who reaches out a hand to another person in need. And therefore you've arrived at confirming the fact of sin. No one really is "good", are they?
Lastly, I am certainly not afraid of God if I do not reach out to help another person, but why ever wouldn't I want to? What an empty, meaningless, pointless life that would be. I think we are each given our lives to give our lives away.
I do agree that scientific facts cannot provide the foundation for moral judgements. Science is about matters of fact; about how the world is and therefore it is descriptive. Moral judgements are prescriptive, normative, or judgements about what we ought or ought not do or in other words about norms of conduct. This distinction eliminates Hume’s proposed ought-is problem.
The only axiom or maxim needed was proposed by Aristotle in his Nicomachean Ethics, and it is clearly explained in Mortimer Adler's book Ten Philosophical Mistakes: the axiom or maxim of "right desire." By right desire, Aristotle meant that one ought to desire the really good things that promote living a flourishing life. He makes the distinction between real, natural, desires or needs and apparent, acquired, desires or wants. This applies to the individual as well as to society as a whole. As a bonus, this distinction between needs and wants eliminates the belief that moral judgements are relative or subjective.
An individual, or state, that does not care for people's welfare, or might even prefer to decrease it, would be acting in an irrational way. As rational beings, we should be able to recognize that acting against people's welfare, that is, against what is really good, is morally wrong.
The sub-thread on the motives (religious vs. secular) for providing aid or relief to impoverished or disaster-struck populations (e.g. Haiti) is perhaps a bit too purist.
For example, although I prefer to support secular humanitarian-aid groups (e.g. Doctors Without Borders), I also prefer that needy people receive the help that they need from faith-based groups to their receiving less help or none at all.
Perhaps that says more about the accident of human nature (e.g. empathy, compassion, or moral sentiment) than it does about philosophical ethics. But that's an accident we needn't explain nor rationalize before we act on it. Let's just agree to do it, shall we?
Caliana:
I could also say, Mintman, that the smarter you are and the better off you are, the less concerned you might be with what it is exactly that missionaries do for people in impoverished situations.
So what you are implying is that the wealthiest countries in the world would have the worst welfare systems and pay the least amount of developmental aid. Lemme check this against the facts here in the real world for a second - ah, look, you are wrong. How surprising.
The rest of your post simply shows that you have not understood my argument: if these people are really intrinsically generous and good, then why do they need to do it under the mantle of their religious convictions? When somebody is making the argument that they do something good because they are inspired by their beliefs, I always wonder: what if they lost their belief tomorrow? Did they not just admit that they would then stop doing it? So how can it be that they do it because they are simply good people? That they do nice things is not in question, but why not do them without spreading superstition?
Ritchie the Bear and Mintman: Yes, very well said! Every time I get into discussions like the one I've been having with Massimo, I notice this (probably unintentional) language game going on. First, I inevitably say something like, "What do you mean when you say 'X is moral?'" and then they reply with a purely descriptive, not normative, definition -- e.g., "What I mean is that X increases human welfare." But of course, that isn't really all they mean by "moral" -- If it were, then what they call a system of morality would merely be objectively describing the consequences of various actions, without implying any preference for one over the other.
Here's one argument I tried out to make this point when I was arguing with Massimo: Imagine someone says,
"Distributing condoms in Africa is immoral" and you say
"Distributing condoms in Africa is moral."
Clearly, these statements contradict each other, so you disagree. But if your respective definitions of the word really are descriptive, then you can just plug them in and the conversation becomes:
"Distributing condoms in Africa violates the word of the Bible."
"Distributing condoms in Africa reduces human suffering."
Now there's no contradiction, and assuming you both agree with these two empirical statements, your disagreement is over. Right? Well, in practice, of course it's not over -- because you're implying that humans should reduce others' suffering, and the other guy is implying that humans should live by the word of the Bible. If you were to remove that normative component of the definition of "moral," all moral disagreements would disappear as long as everyone agreed on the empirical facts of the case. And that's clearly not true in practice.
Of course, then the problem becomes -- what do you mean by the word "should"? As far as I can see, the word "should" is only coherent with an implied goal (e.g., "You should wear your hat" carries an implied "...if you want to stay warm".) The word has no meaning in isolation; it doesn't make sense just to talk about how people "should" behave, full stop. (As Bjorn said, "Should, or else... what?")
Having these discussions often feels to me like trying to smooth down a rug; I push down a bump over here, and as a result a new bump pops up nearby.
Galef's hypothetical conversation further exposes (if it can be further exposed) the fact that moral language is deceptive if merely used as a substitute for descriptive terms. In common parlance (that is, any parlance not occurring between metaethicists), the terms "good", "bad", "should", and "must" are used not to make some sort of claim about how the physical world is but to make a statement with a certain sort of force. Human beings intuitively understand the force the speaker is trying to project; we understand that the speaker thinks that it is a fact (not a personal preference) that another person should act in a certain way. This is not some incidental fact about normativity; it is the cornerstone of it. If there is no force, no suggestion that a certain behavior is ultimately right (or wrong), then a statement or string of statements is not in the land of normativity.
Deceptively substituting a value-laden term for a descriptive term is an under-recognized form of intellectual manipulation. I first read a clear description of it in Nicholas Shackel's paper "The Vacuity of Postmodern Methodology", though his description of the tactic ("Humpty-Dumptying", I believe) applies to a more general class of verbal deception. Humpty-Dumptying works like this.
- Decide that you want to reach some particularly hard-to-defend conclusion about a concept generally referenced by a term or set of terms T.
- Offer an alternate definition of T (or the terms within T).
- Reach a conclusion about the concept denoted by the altered T term or terms.
- Act as though you’ve reached the desired conclusion (laid forth in the first step) about the concept denoted by the standard T.
With moral realism, it works like this:
- Decide that you want to conclude that objective truths about morality exist and can be discovered by scientific methods. The terms generally used to refer to morality include “right”, “wrong”, “should”, “should not”, and so forth.
- Redefine the standard moral terms so that they merely refer to various aspects of some straightforward real-world phenomenon. If you’re, say, Sam Harris, you might just define “good” as “causes happiness”, “wrong” as “causes suffering”, and so forth.
- Conclude that objective truths about morality exist and can be discovered by science, since science can uncover truths about the real-world phenomenon just injected into our moral language.
- Act as though you’ve rationally reached the conclusion that moral truths exist and can be discovered by science, oblivious to the actual concept of normativity.
It’s sort of like integrating a function using u-substitution without substituting the original variable back in at the end of the operation. The concept of normativity, as it is used in everyday language and most non-everyday language, is used to make forceful statements about an absolutely existing superiority of one action or set of actions to another action or set of actions. Those moral realists who employ the Descriptive-Normative Moral Swap (there, now it has a name!) are often oblivious to the fact that they change the subject when defining morality just in terms of some descriptive concept.
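To make that u-substitution analogy concrete (the integrand here is my own toy example, chosen only for illustration):

\[ \int 2x\cos(x^2)\,dx \;\overset{u = x^2}{=}\; \int \cos u \, du \;=\; \sin u + C \]

Stopping there and presenting "sin u + C" as the answer is a formally correct manipulation, but it is an answer about u, a variable you introduced yourself, not about the x in which the question was originally posed. The Descriptive-Normative Moral Swap leaves its conclusion stranded in just that kind of borrowed variable.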
There is an interesting ambiguity in the question: what do we mean by "moral"? We might be asking for a description of the kind of thing that "morality" is -- what *counts* as a moral system or a moral rule (as opposed to, say, a rule of etiquette). Or we might agree about what kind of thing 'morality' is and be asking instead "what general principles or rules underwrite your judgments about what is moral?" Or we might be asking "in cases like this, what do you think is the moral course of action / character trait to have / etc.?"
Now, I want to say that *of course* you can't get to "ought" statements from statements that are value-free, for exactly the reasons that Julia flagged in her last response. It is only by *valuing* things, by having desires, by having (in Kant's formulation) ends (very roughly - things we want to accomplish), that anything is valuable. In a world with no one / nothing that wants stuff, there is nothing of value...
So we are not going to get from a neutral description of the world w/o our desires, needs, etc., to a moral system. The questions, then, are something more like: is there something about the kinds of beings we are, and the sort of world we live in, that can support 1) a particular view of what makes ‘morality’ special (different from etiquette)? and /or 2) a particular set of principles that underwrite moral judgements (over some other possible set)?
So we are not going from a set of value-free “is” statements to “ought” statements. Rather, we are going from claims about the kinds of things we want / the kinds of ends we have (and hence facts that have embedded ‘ought’ claims, as Julia notes), to claims about the broader system of “ought”-statements that our moral local statements get embedded in.
Again, is there anything we can say to *show* someone who truly doesn’t care that they ought to be moral? I can’t see how. True psychopaths – people who lack empathy, guilt, are unable to form strong emotional attachments with others, who fail to recognize moral rules as in any way special – are bad candidates for convincing to be moral, even if they can reason effectively from ends to means, from causes to effects, and/or are competent folk-logicians more generally.
But that, fortunately, isn’t the challenge (well, it isn’t the challenge I’m interested, in any event!). I take the challenge instead to be to move from facts about the kinds of being that we are, and the sorts of desires / ends that are broadly shared (value-laden though those descriptions are), to claims about, first, the kinds of things that should be included in a moral system, and second, about the principles that we should endorse as underwriting our moral claims.
So it does matter that, for example, humans are social creatures, and it does matter that most (all normal) humans are capable of empathy (indeed, are incapable of *not* having empathy), capable of guilt, strong emotional attachments to others, etc., and it does matter that we recognize each other as capable of setting and pursuing goals (for ourselves and with others, etc).
Can we get from there to the kind of Millian political liberalism I strongly prefer? I'd like to think so, but I grant it isn't easy. But in any event, our movement will not be from value-free facts about the world, but from facts about the world that are deeply value-laden to different kinds of value-laden claims...
Jonathan
Julia,
In my view, your example simply demonstrates that logic often requires "premises" to induce "conclusions."
I really like a lot of what you've said, Jonathan. In fact, I was on a similar tack as I pecked away at my thoughts on the subject earlier today. Then I read what you had to say and it gelled nicely.
What follows is where I ended up (in two parts, I think, due to character limits).
I mostly agree with you, Julia, but I disagree with this:
ReplyDelete"[T]elling me scientific facts doesn't tell me how to act on those facts, and the alleged point of moral principles is to tell me how to act."
Moral principles, to the extent that they exist, do not have a point. They have a structure. And they have that structure because that structure is stable. It is not there to tell you how to act; rather, the way people act is the reason it is there.
Is it possible to use scientific facts to justify selecting one particular set of initial axioms over another? No. The reason for this is that the initial axiom (at least!) is predetermined by biology. We can use scientific facts to justify promoting, demoting, introducing or removing axioms from our rational concepts of morality, but the foundation of morality itself (or absence thereof) is innate, even though it varies amongst individuals.
Here is one way that I arrive at this view:
Without sentience, there is no morality. Even those who might disagree with this must be sentient to do so. Further, even a morality that addresses behavior toward non-sentient objects nevertheless predicates upon sentience, since such behavior must be performed by a sentient creature, and it is the behavior itself that is of the essence. (Those who would argue that a hailstone breaking a window or an earthquake killing tens of thousands of people is immoral must also ascribe some sentience to the motivating forces behind these events. Otherwise, their concepts of morality are meaningless.) Thus we can say in an objective sense that sentience circumscribes morality.
Within the realm of sentience, there is no universal morality. That is plain to see, based on the heterogeneity of extant moral systems. The particulars of morality are subject to many factors, most notably culture and individual experience. The only way to examine morality broadly and fail to see that various conditions produce multiple valid moralities is to overlay conjecture atop the available facts, and thereby occlude those facts which do not correspond to the conjecture.
Nevertheless, while not universal, there are certain widely accepted moral norms, such as that murder, thievery, etc. are wrong in a civil society, but also that one's genetic offspring should be the beneficiaries of one's wealth, women and men should have different roles, newcomers and visiting outsiders should not enjoy the same status as established in-group members, and so on. It should of course come as no surprise that most widespread moral norms have to do with community, i.e. how individuals within a given society "should" treat each other, since in order to become widely accepted within the pool of sentient beings, the society they apply to must be stable enough to thrive. Thus, society-stabilizing morals tend to survive and become intrinsic to the society's identity and sense of well-being, whereas more extraneous ones generally do not. In this sense, that which promotes a group's sense of well-being is the first determinant of its concept of moral righteousness.
Some have argued from other angles. As Ritchie put it, "There's nothing more rational about thinking that flourishing is good than there is about thinking that flourishing is bad." But why should we presume that morality is based on rational thinking? As an example, don't we, like chimpanzees, by and large look after our old and terminally ill, even though those individuals do not contribute to, and in fact materially detract from, the well-being of society? Wouldn't doing that be considered a moral act? And wouldn't rational thinking produce a different response? I don't accept that this behavior is, in fact, a rational response to a careful consideration of what would alleviate the most suffering, and therefore I don't accept that morality is rationally founded. Without any rational thinking at all, we recognize members of our group, we recognize suffering, and when the two coincide we act to alleviate that suffering impulsively, even when it is detrimental. As individuals, we do it primarily for the sufferer, and only secondarily for ourselves. But as members of the group that includes both, we are impelled to do it for the sufferer because, quite apart from individual needs, such behavior has on balance tended to serve the group's sense of well-being enough to have selected for this individual characteristic. Yet this behavior is much less pronounced when we look at nomadic peoples living at subsistence levels. When the old, sick and lame cannot keep up, they are much more readily left behind -- again, in service to the group's sense of well-being.
All that being said, it is still well worth engaging in moral reasoning, first and foremost to understand what morality innately is. This may be a bit more than what has previously been meant by the phrase "moral reasoning", but I believe it is a necessary precursor to applying it in the standard sense, which includes apprehending the eventual and implicit consequences of morality's various manifestations, and perhaps thereby evading potential pitfalls in a way that cultural evolution, blindly and reactively groping for stability without extrapolatory reasoning, could not. For example, if we are, as many believe, currently serving our group's sense of well-being through actions that actually undermine its ability to sustain itself under any plausible scenario, moral reasoning can help us see this, and thereby gives us a means to influence the particulars of the group's morality that contribute most heavily to the problem, in advance of the crisis that they would otherwise precipitate.
Any consequentialist theory relies on empirical facts. Your complaint is broad and, I think, has been addressed in Singer's article, "The Triviality of the Debate over Is/Ought." Julia - at some point we need to accept some premises. Attacking their soundness is fine, but dull . . . I'm sensing an impasse.
Perspicio:
I would certainly agree with all your explanations, but saying that our morality arises without rational decisions, that it is the way it is because it produces working societies, etc. is, while perfectly true in my opinion, beside the point. The questions here seem to be not how morals are actually derived in reality, but whether we could derive them objectively from empirical facts through rational thinking, and then probably whether they are universal.
From the perspective that you have just taken, it would then be most useful to ask first whether the innate desire of humans to form working societies supplies us with an acceptable "ought". From a purely philosophical viewpoint, I'd think no, but from a practical viewpoint, just for living your daily life while acknowledging that some aspect of it may not be rationally defensible, why not? Sometimes you just have to start somewhere - how would you, for example, rationally defend even the notion that it is desirable to live? But the point here is that we both answer the philosophical question with no. Notice also that I make a society the criterion here, as I think that morals by definition require not only sentience, but also interaction between sentients.
The second question would then be which mores can be seen as universal because only they and not their opposite allow a society to function. For the latter, we both already mentioned some examples: a society can only work if murder and lying are vices, not virtues. Details like women not being permitted to show their hair or whether you are allowed to eat pork are, in contrast, completely arbitrary and certainly not universal, unless the existence of a god comes into the discussion (at which point we phase into the Euthyphro dilemma).
The problem alluded to here -- that if someone doesn't care about other people, lacks empathy, and is unconcerned about the effects of his/her actions on others, then we have no good argument for their behaving morally that doesn't involve coercion -- is surely unsolvable.
That some people might be constitutionally incapable of recognizing the right is not, however, an argument against there being moral truths.
I do agree that scientific facts cannot provide the foundation for moral judgements. Science is about matters of fact, about how the world is, and is therefore descriptive. Moral judgements are prescriptive, or normative: judgements about what we ought or ought not do, in other words about norms of conduct. This distinction eliminates Hume’s proposed ought-is problem.
Can it not, however, be said to be a fact that being in agony is an intrinsically undesirable state of consciousness? It seems to me fairly obvious that it can. There are universal facts about subjectivity as much as there are facts about the external world.
And it seems at least potentially workable that one might bridge the is-ought gap based on this sort of thinking. In thumbnail: that there are certain things which are universal intrinsic goods and that right consists in cultivating them.
Mintman,
ReplyDelete"The questions here seem to be not how morals are actually derived in reality, but whether we could derive them objectively from empirical facts through rational thinking, and then probably whether they are universal."
If the 1st question is whether it is possible to develop a valid, free-standing, rationally founded and constructed moral system, my answer is a qualified "no", since moral systems as we know them are not rationally founded. However, if such a system worked, then it would be valid by definition (even while modifying the operative definition of morality). It's hard for me to imagine a functional moral system devoid of all non-rational impulses guiding behavior; nevertheless, this is not sufficient reason to rule it out. But it's far too abstract an idea for me to really see the point of such an exercise anyway, so if this is indeed the type of model under discussion I'll leave it to others to advance ideas of how they think such a thing could work. I'll pay it no further attention in this post.
If on the other hand the question is whether it is possible to develop a rational model that can accurately predict the particulars of any arbitrarily chosen moral system in every meaningful regard, it seems to me that to answer this we must endeavor to understand what it is we are trying to replicate. Otherwise, how would we know if we got it right? Of course, modeling morality is largely synonymous with endeavoring to understand it - yet, at least from my perspective, the way the discussion here was initially cast seems to contain an element of what I alluded to previously: the overlaying of conjecture atop the available facts, thereby occluding those facts which do not correspond to the conjecture. Specifically I refer to misconceptions about what "ought" actually is.
Mintman (cont'd),
ReplyDelete"From the perspective that you have just taken, it would then be most useful to ask first whether the innate desire of humans to form working societies supplies us with an acceptable "ought".... [T]he point here is that we both answer the philosophical question with no."
Actually, in my view that is an acceptable "ought" for a predictive model, for the specific reason that it is an already operational one in existing moral structures. Any predictive moral model must take into account the breadth and scope of how people wish to behave, especially socially. The logical fallacy I'm seeing consistently in this discussion is in perceiving "ought" primarily as an edict rather than an impulse. Certainly there are edictive "oughts", but they predicate upon impulsive ones. We must understand the emotive architecture that edictive "oughts" appeal to in order to see how they relate to morality.
Fundamentally, morality isn't about "ought". Ostensibly it's about "right" and "wrong", while in practice it's about "perceived as good for the group" and "perceived as bad for the group". Yet, it isn't objectively "about" anything at all. To the degree and in every form that morality exists, it does so because it is a stable emergent phenomenon in that time and place. Moreover, the stability of a moral system and the details of any of its particulars may be largely incidental to each other. So while "ought" is certainly a component of moral systems, trying to replicate such a system by making edictive "oughts" its driving mechanism fundamentally misconstrues the nature of morality, and therefore will not work.
The sense of "ought" that is intrinsic to morality is a combination of impelling factors acting on individuals that run the entire gamut of (in rough order of decreasing concreteness) biological imperative, individual experience, cultural conditioning, codified consequences for non-abidance, and ideology, any of which may be trumped or moot, temporarily or permanently, in a particular individual's circumstances, resulting in a different but equally valid "ought" architecture (by its own lights) than that of her peers. (It's objective validity can be measured simply by whether and to what degree it survives.) Hence, flattening and abstracting "oughts" into edicts, even those that closely mirror the impulsive "oughts" at the core of morality, will not result in a reliably predictive model.
Perhaps an honest replication of "oughtness" in a predictive model's mores would have an "or else" clause attached to each that points not to consequences for the individual, but to a lessened chance of that more being passed on.
David,
ReplyDelete"That some people might be constitutionally incapable of recognizing the right is not, however, an argument against there being moral truths."
Fine. But if you are speaking of universal moral truths, we don't need arguments against them when in fact there are no cogent arguments for them.
"Can it not, however, be said to be a fact that being in agony is an intrinsically undesirable state of consciousness? It seems to me fairly obvious that it can."
I think one trip to a Sundance ceremony or to suspendc.com would change your mind. Actually, the association between physical agony and religious ecstasy is very well established.
David, I think you cited Mintman in writing, “The problem alluded to here -- that if someone doesn't care about other people, lacks empathy, and is unconcerned about the effects of his/her actions on others, then we have no good argument for their behaving morally that doesn't involve coercion -- is surely unsolvable.”
I certainly agree that once the rational basis for a belief in God is rejected, then any rational basis for morality is also rejected. Consequently, many of you are understandably trying to provide a non-rational basis for morality – an instinctual basis – as Mintman has written (quoting from Perspicio):
"From the perspective that you have just taken, it would then be most useful to ask first whether the innate desire of humans to form working societies supplies us with an acceptable "ought"....”
Although instinct does serve to lead us into moral behavior – and it’s also pragmatically sound behavior – instinct alone isn’t a sufficient basis for morality as others have aptly pointed out.
1. What happens when these instincts are overcome by other factors?
2. Without a rational basis for morality, morality inevitably reduces to self-serving, instinctual and pragmatic concerns – hardly a pragmatic design for a robust life!
Please do not miss the delightful irony here. The “brights,” having rejected the rationale for God, have reduced themselves to an un-bright (non-rational) pragmatism based upon a purely instinctual foundation.
perspicio:
Digging through your rather complicated prose, I still get stuck on one detail: you want to discuss morals as evolved behaviour or norms, and that is fine with me from my professional perspective as a biologist. But I understand this to be a philosophical, not a sociobiological discussion. I agree with basically all you write, just as I agreed earlier with somebody else's opinion that building an orphanage is good work, but it does not seem to be directly relevant to the disagreement between Julia Galef and Massimo Pigliucci as far as I understand it. But I may be wrong, of course.
Jonathan wrote:
ReplyDelete"There is an interesting ambiguity in the question: what do we mean by 'moral'? We might be asking for a description of the kind of thing that 'morality' -- what *counts* as a moral system or a moral rule (as opposed to say a rule of etiquette). Or we might agree about what kind of thing 'morality' is and be asking instead 'what general principles or rules underwrite your judgments about what is moral?' Or we might be asking 'in cases like this, what do you think is the moral course of action / character trait to have / etc.?'"
That's a great disambiguation, Jonathan! I've noticed that this kind of confusion is responsible for a lot of the miscommunication in these metaethical discussions. Let me make sure I'm understanding you correctly:
(1) The first kind of question is asking: what function does this word generally play in the language? (right?) And the answer would be something like, "Moral statements are normative rules of behavior" or some such? (Or maybe... "normative rules of behavior that can't be translated into instrumental 'If you want to accomplish goal X, then you should do Y' descriptive statements?" Or maybe "normative rules of behavior that are taken to be universal, rather than specific to a particular culture and society", to exclude etiquette rules?)
(2) The second kind of question is basically asking for the person's moral axioms, right? It's a meta-ethical question.
(3) The third kind of question is asking what course of action fits the person's moral axioms for a given case, right? It's an ethical question now, no longer a meta-ethical question.
Sound accurate to you?
One concern: You said "I take the challenge... to be to move from... the sorts of desires / ends that are broadly shared"... and then you listed a few relevant facts, like the fact that humans are social creatures and are capable of empathy and guilt. Given all these facts, I agree that we can set up a system of instrumental rules (i.e., the best way to achieve the goals that are built into us as humans is to follow this kind of system of rules) and we could call that system "morality".
But despite my one rosy paragraph about humans having those built-in goals, I have to admit I'm not sure those goals are as universal or innate as you or I made them sound. People have unquestionable empathy for their kin or in-group, but empathy for strangers (or for people in the abstract) is more dubious. And people also have plenty of other innate desires (anger, competitiveness, greed, etc) that often counteract whatever innate empathy they do have. So I'm not sure an instrumental set of rules of the kind you've described would really capture the way people use the term "morality", because it doesn't seem to have anything to say about whether we should follow those rules if/when our goals diverge from the goals that were used to generate the rules in the first place.
Toph said that my complaint has been addressed in Singer's article, "The Triviality of the Debate over Is/Ought."
Here's the article toph is referring to:
http://www.utilitarian.net/singer/by/197301--.htm
Yes, I like that article a lot, toph! I nearly linked to it in my post, in fact. But if I'm reading it correctly, Singer doesn't deny the is/ought problem, just says that we can still have meaningful conversations about morality despite it.
Toph, you also said "at some point we need to accept some premises" if we want to build a moral system. I agree. The point of my post was just that I disagree with Massimo that that decision can be made solely on the basis of objective scientific facts.
Chris Schoen said "On balance we may all be pretty good to each other, all things considered. But moral philosophy exists to help us resolve internal conflicts, where biology and social conditioning (to whatever extent this is not also biological) seem to provide conflicting information."
Yes, I agree -- and I also agree that my paragraph about our innate empathy was kind of weak, for reasons that I lay out in my comment to Jonathan (above).
Brilliant piece, and it looks like it's turning into a healthy debate.
Now Hume's Guillotine needs to be handled with caution, lest it chop Hume himself! For he defended a fully naturalistic ethics (he styled himself the Newton of human nature).
Could Hume speak now, I believe he would say:
"The reason why we behave morally is because we invented certain rules without which cooperation wouldn't work. We manage to do this thanks to our natural endowments (sympathy) and a wee bit of reflection to correct our cognitive biases (i.e. enlarge the scope of our sympathy beyond relatives and friends). Moral conventions emerge universally and necessarily across cultures without any deliberate design and through a process of spontaneous social order. Nothing mysterious, no supernatural intervention and certainly no soul.
"Now, if you object that this might explain but doesn't justify morality I would say that it is not the purpose of philosophy to recommend virtue and that what you call "ultimate moral justification" is unattainable and of no practical use in deciding in moral dilemmas anyway. If adopting this position implies being a nihilist, then every non moral philosopher is one. So to me the question about an ultimate moral justification seems to be intrinsically theological and thus complete nonsense. Nobody has found such holy grail because it doesn't exist. You can't convince the psychopath to scratch his finger to saves humanity."
Mann's Word,
ReplyDelete"I certainly agree that once the rational basis for a belief in God is rejected, then any rational basis for morality is also rejected. Consequently, many of you are understandably trying to provide a non-rational basis for morality" Its not that they are trying to provide a non-rational basis for morality, it is that you are trying to provide a supernatural basis for morality on a foundation based on a religion that is merely a few thousand years old in the very broad spectrum of humanity. Whether or not you are willing to admit it, the religious based moral rationalities that the bible presents are man-made and merely God inspired. You attempt to make the bible out to be a text containing wisdom about morality that has never before been seen by society. You assert that moral concepts like right and wrong are things that must be based on not just a belief system, but a christian belief system. You even assert that any basis for morality that isn't based on God is "non rational", but what proof do you have that bible is any more rational than any other belief system? What proof do you have that the bible is any more effective of a moral basis than any other religious text? You hold that without a Christian basis for morality, morality is inevitably reduced to self serving, instinctual and pragmatic concerns. I think that you completely forget that it was Christianity, and the bible that inspired the crusades, a war so atrocious that its casualties can only be match by histories second atrocity, the holocaust. For 200 years, Christians killed millions of people in the name of God. The murdered entire families. They destroyed entire cities that refuse to convert to their belief system. Where was this rational basis of morality then? Don't attempt to declare these individuals as not Christian either. They were as Christian in their beliefs, and maybe even more so, than anyone today. The crusades is a perfect example that just because you have God as a basis for morality, doesn't mean that you are any more moral than anyone else. The crusades are a perfect example that religion, a concept that is irrational in nature in terms of faith in the intangible, can sometimes brings man to commit atrocities that given normal circumstances, he would decide otherwise. God is subject that has inspires war. Religion isn't a basis for morality, because religion is based on empathy. "Do unto others". Empathy is a basis for morality. It is at the heart of every decision you make everyday. It isn't supernatural. You don't have to go to church to find it. By saying that empathy isn't a basis of morality, you label all of society sociopathic. The bible, or christianity can no more claim to have a rational basis for morality than any one else. The catholic church child molestation cases are perfect examples of why that concept is wrong. They dedicate their lives to God. They were leaders, preachers, and priests, and yet their moral compasses, supposedly based on an infallible belief system, were compromised in the same way that the men of the crusades were. You make Christians sound like a league of superheroes, as if their moral compasses surpass that of any in comparison.
Mann's Word,
ReplyDelete"The “brights,” having rejected the rationale for God, have reduced themselves to an un-bright (non-rational) pragmatism based upon a purely instinctual foundation." as if Christians can claim purity in form. As if they can not be accused of atrocities or sin. As if their reasoning cannot be compromised. The reality is that the christian belief system was built around a society that lived 2000 years ago. A society founded in superstition, tradition, and illogical rationale. Claiming that your moral basis for morality is anymore rational than anyone elses is prideful, egotistical, and discriminatory. Christians are human just like everyone else. They are face with the same situations, that we are faced with, and just because they do it because they don't want to burn in hell for eternity doesn't make their moral basis any more rational than anyone else, in fact it make them self serving. At least an atheist does good because it is the right thing, or because they feel bad. Christians do good because they want to please someone(God), or because they don't want to go to hell. Doing good just to get something in return, whether it is Gods favor, eternal life, or a get out hell free card, is the very definition of self seeking, and instinctual. You view on morality is solely based on your perception of the bible, God, society, humanity, and right and wrong. Your perception can neither be considered authoritative or even realistic because it is inherently discriminative, self seeking, and lacks complexity.
I'm a bit late to this discussion I figure :)
I believe many are missing one important fact - context. You cannot say something obvious like "freedom is good, so we ought to strive for more freedom". That is too general. However, in the right context you can say it. "Freedom is good, so if we want to work politically for a better society, we ought to work for more freedom."
Does that make sense?