About Rationally Speaking

Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.

Wednesday, January 30, 2013

Philosophy & Science: Overlapping Magisteria

by Steve Neumann

Recently Massimo posted about Michael Shermer’s misguided attempt to claim for science what traditionally — and rightfully — belongs to philosophy. It is another episode in a recently growing trend, as exemplified by Sam Harris’ book-length treatment of the same matter in The Moral Landscape. For anyone convinced that ethics is ultimately the proper domain of philosophical inquiry (though philosophical reasoning can and should be informed by our best science), it can be a very frustrating experience to have to continually combat this rising tide of incipient scientism.

But, of course, that doesn’t mean we stand back and assume that the opposing viewpoint will ultimately exhaust itself. On the contrary, we should be more inclined to criticize positions that are becoming successful (i.e., popular); at the very least, responding to others sharpens our own way of thinking. In this sense, I must admit to a love/hate relationship with people like Sam Harris. Although I disagree with many of his positions, I have to admit that I admire his tenacity and his courage to stand alone and be criticized. And his popularity (or at least his controversial public persona) helps create the necessary conditions for a vigorous dialectic — and that is a good thing.

Harris’ central premise, which is essentially the same premise shared by Shermer and others in that camp, is that questions of value can at least in principle be reduced to facts about the well-being of conscious creatures, and that these facts and their interpretations fall within the purview of science. Harris further maintains that the most relevant discipline here is neuroscience. 

Back in August 2012, Philosophy Now published an essay by philosophers Julian Savulescu and Ingmar Persson entitled “Moral Enhancement”, where they essentially argue for a nascent program of eugenics, what they call the “biomedical means of moral enhancement,” or “moral bioenhancement.” Their reasoning is that the evolutionary course of the human species has left us ill-equipped to deal with specifically modern existential challenges like global climate change or warfare that may involve weapons of mass destruction, thus threatening to eradicate all sentient life on the planet. This echoes Sam Harris’ motivation for writing The Moral Landscape: “changing people’s ethical commitments ... is the most important task facing humanity in the twenty-first century.”

But, just as with Harris’ book, where he admits that concepts like “well-being” and “flourishing” are notoriously difficult to measure, Savulescu and Persson also acknowledge that “it is too early to predict how, or even if, any moral bioenhancement scheme will be achieved.” What these authors do have in common is an unflagging confidence that science will be able to figure it out. I understand this feeling of confidence. In fact, I share this confidence concerning many if not most things science tackles. But ethics isn’t one of them.

I think that moral reasoning and the related dialectical activity is the most important thing we can do in life. I believe this not only because of obvious existential threats we face, but because knowing “how to live and what to do,” to paraphrase the late poet Wallace Stevens, seems to be the most indispensable and perhaps even the oldest need of our species — going back possibly to the earliest emergence of self-consciousness in our evolutionary lineage. I mean, so far as we know, other animals don’t experience ethics (broadly conceived) as a problem; every aspect of their existence is determined or ordered by instinctual behavioral patterns. Obtaining sustenance, finding mates, avoiding dangers: these aren’t problems for them in the way ethics is a problem for us. We still face the same issues, of course, but our nature as social creatures, and more critically our capacity for knowing that we know (and knowing we have the ability to choose between alternatives based on reflection), create the possibility for doubt about which course of action is best, whether it’s deciding which personality type would ensure the best marriage or which hobby or career would give the most satisfaction in life. For us humans, it’s not just a question of which action is the most utilitarian (lowercase “u”), but also of which action is the most rewarding.

Despite Harris’ confidence in his moral realism, there is a streak of relativism in his own approach as articulated in The Moral Landscape. In my own annotated copy of his book, I’ve marked six significant concessions to the variability of the concept of “well-being.” Most tellingly, when comparing moral well-being to the notion of physical well-being (i.e., health), he says that science “cannot tell us why, scientifically, we should value health.” Of course, Harris doesn’t see this as a knock-down punch to his general project; he goes on to say that “once we admit that health is the proper concern of medicine, we can then study and promote it through science.” Yes, that’s true; but the key words here are “study” and “promote.” Just think about the voluminous yet conflicting scientific pronouncements on health from sites like Medical Xpress: one week there’s a report that “coffee will make you live forever!”, and the next week they report that “coffee kills you faster!” (Full disclosure: I truly believe coffee will allow me to live forever. Of course, my cognitive apparatus may be compromised by addiction in this case.) Reading any of these medical news aggregation sites illustrates perfectly the amorphous nature of “well-being,” whether physiological or psychological.

And as a professional dog trainer, I have to ask: whatever happened to good old operant conditioning, the discovery that a system of naturally-occurring rewards and punishments determines and shapes behavior? The principles of operant conditioning apply to all animals, humans included. There are four basic “quadrants” delineated by this concept: 1) positive reinforcement; 2) negative reinforcement; 3) positive punishment; and 4) negative punishment.

Positive reinforcement is the idea that when a subject receives a reward for doing a particular action, then that subject is likely to perform that action with more frequency and more vigor in the future, and it results in lasting behavior modification in most cases. A classic example of this is the pigeon in the lab pecking a key and receiving a pellet of food every time it does; the bird will peck the crap out of that thing!

Negative reinforcement, on the other hand, is the idea that removing an aversive stimulus will increase the frequency and vigor of a desired response. As an example, think of that godawful buzzing noise you hear if you don’t buckle your seatbelt when driving.

Positive punishment is by far humanity’s favorite method of behavior modification (despite its inadequacy), and it entails introducing an aversive stimulus in order to decrease the frequency of an undesirable behavior. I hardly need to cite an example, but think of smacking someone’s hand when they reach for the cookie jar.

Negative punishment, on the other hand, is the removal of a desired stimulus or reward in order to decrease an undesirable behavior. Think of taking away your teenager’s video game privileges because he’s been bullying his sister.

The unique and most fantastic consequence of discovering these principles is that we are in a position to intentionally manipulate and exploit them, whether with other animals (consider the history of animal domestication for human benefit) or with our fellow human beings. We don’t have to simply rely on naturally-occurring environmental circumstances to trigger a behavior that we hope will happen. We all use these four quadrants in varying degrees every day, without really being aware of what we’re doing (or why), and without realizing that, if we apply a little sophistication to our approach, we may be able to make more effective use of them.
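The four quadrants amount to a simple 2x2 grid: whether a stimulus is added ("positive") or removed ("negative"), crossed with whether the target behavior increases (reinforcement) or decreases (punishment). A minimal sketch of that grid as code; the function name and structure are purely illustrative, not any standard behaviorist notation:

```python
def quadrant(stimulus_added: bool, behavior_increases: bool) -> str:
    """Classify an intervention into one of the four operant-conditioning quadrants.

    "Positive" means a stimulus is added; "negative" means one is removed.
    "Reinforcement" increases a behavior; "punishment" decreases it.
    """
    kind = "positive" if stimulus_added else "negative"
    effect = "reinforcement" if behavior_increases else "punishment"
    return f"{kind} {effect}"

# The examples from the text:
print(quadrant(True, True))    # food pellet for key peck -> positive reinforcement
print(quadrant(False, True))   # seatbelt buzzer stops when buckled -> negative reinforcement
print(quadrant(True, False))   # hand-smack at the cookie jar -> positive punishment
print(quadrant(False, False))  # video games taken away -> negative punishment
```

The point of the grid is that "positive/negative" describes only the mechanics (add or remove a stimulus), never approval or disapproval; the reinforcement/punishment axis alone says which direction the behavior moves.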

Of course, I think there is a bit of difference between changing behavior and changing beliefs; but if behavior flows from belief, then the principles of operant conditioning should be able to accomplish what we desire from moral philosophy, assuming we’re assiduous enough and creative enough to apply them properly. If we can succeed in changing behavior for the better, do we need to change beliefs? If we can succeed in indefinitely deterring Iran from using nuclear weapons against Israel or us, do we need to change their belief that we’re the Devil Incarnate? 

I think those in Harris’ camp believe that “science” — particularly neuroscience, but possibly even evolutionary psychology and the like — is a powerful shortcut to the type of behavior modification we’re all seeking. Harris contrasts the type of “science of morality” that psychologists like Jonathan Haidt and Joshua Greene do with the kind Harris envisions. He believes that theirs is important but ultimately insufficient. I also believe that the work of those like Haidt and Greene is important; but I think that it’s up to philosophers to be aware of and utilize the findings from this science of morality in their moral reasoning.

Interestingly, there was an article entitled “The Folly of Scientism” in The New Atlantis by biologist Austin L. Hughes, who takes Harris and others to task. Hughes blames in part the discipline of philosophy for abdicating its prerogative with regard to some intellectual matters, allowing the louder voices of the hard sciences to take over discourse on things like “values” and such. I’m not a part of academia, but based on published books, essays and blogs by philosophers, I don’t think Hughes is correct here. It seems more likely that scientists have simply become emboldened by and enamored of the success of their respective disciplines, and are thus riding that wave onto the shores of philosophical discourse, where they come crashing impudently down.

Unlike the dispute between religion and science, where most people believe the two approaches have nothing to say to each other, and where Stephen Jay Gould famously sought to establish an ideological Switzerland with his notion of N.O.M.A. (Non-Overlapping MagisteriA), philosophy and science do overlap; and I believe the best course forward is to maintain and enhance the dialogue currently taking place between philosophers and scientists. Having prominent (or at least popular) thinkers like Shermer and Harris and others stake out their positions with verve, and having others muster an equally vigorous critique of their positions, carries on the ancient Greek tradition of the agon, a good way of getting clear on how to solve the problems of our age.

Harris and others seem to be desperately seeking a way out of this intellectual morass we call moral philosophy. But why should we expect it to be anything but a morass? Why should we expect definitive or clear-cut answers to ethical questions? Instead of trying to settle once and for all the questions upon which humankind has meditated since time immemorial, we should strive for the best approximation to sensible answers, which will of necessity be moving targets (at a minimum, ensuring job security for philosophers!).

A “science of morality” should result from the best efforts of philosophers, psychologists, neuroscientists, sociologists — and even economists — working in the most open, mutually-beneficial manner possible. I think that is what is actually taking shape now, despite a growing tidal swell of scientistic sentiment coming from some skeptical quarters. And scientism needs to be countered both because it’s intellectually misguided and because it engenders endless misconceptions about science in the public at large.

Shermer’s piece was a response to John Brockman’s annual question, “What should we be worried about?” In my opinion, we should be worried about the usurpation of philosophy by scientists.


  1. I wonder whether framing this as you have – as a defense of the role of philosophy and philosophers – is the best approach. Why should we trust philosophers on ethical questions? How do we know they are not motivated by conscious or unconscious presuppositions which we do not share? At least scientists have methods to filter out bias.

    Wittgenstein was one respected philosopher who believed in the importance of morality, but believed it to be beyond the scope of discursive reason.

    My own view is that a bit of ethical theory is useful as an aid to clarifying moral questions, but that philosophical approaches very quickly become problematic.

    Scientific understanding can only take us so far also.

    Human dignity and freedom can easily be threatened by the imposition of the moral ideas of a small, influential elite onto the broader society, so I think our moral thinking always needs to be (as it were) decentralized.

    1. I think it's obvious that philosophers - and scientists; or any thinker for that matter - are motivated by conscious or subconscious presuppositions. Nietzsche attacked Kant for failing to question the "moral law" itself.

      And I don't imagine that a cadre of philosopher kings will take over moral discourse, by law or by default. But I think philosophers - whatever their ideological leanings - are best equipped to propound on ethical questions. My ideal is a dialectic between philosophers of many stripes and scientists of many stripes.

    2. You say that philosophers are best equipped to propound ethical questions. Maybe. But I for one don't believe they are better equipped than anyone else to adjudicate on such questions.

      The kind of expertise scientists have allows them to speak with a degree of authority within their specific area of expertise. The kind of expertise that philosophers might plausibly claim to have in the area of ethics is very different from scientific expertise. It is a dangerous mistake to confuse the two.

    3. Mark,

      while it is certainly the case that the expertise of a moral philosopher is different in kind from that of scientists, so is the expertise of, say, an art critic, or a musician. Would you have scientists talk about aesthetics or play music too?

    4. Massimo, I'm not suggesting that scientists have any special 'moral expertise'.

      The art critic example is a good one. The scientist can say a lot about the psychology of perception and emotion, but we don't look to the scientist for sophisticated aesthetic judgements.

      Likewise in ethics, science can tell us a lot, but it doesn't make prescriptive judgements.

      Your implicit suggestion is that the moral philosopher's prescriptive judgements have a similar 'authority' to that of the art critic. Perhaps.

      In both cases there is often a judgement or preference expressed which is supported by sophisticated reasoning and/or persuasive rhetoric.

    5. Mark,

      I'm not mistaking philosophic and scientific expertise. The whole point of my post is a desire to see the two come together in a productive way. To me, ethics (broadly conceived) is systematic investigation into what leads to so-called "human flourishing"; and as such it draws upon not only neuroscience and evolutionary biology/psychology, but sociology, anthropology, history, etc.

      In our modern age, I don't believe the philosopher has the last word on ethics; but nor does the (neuro)scientist. I believe the philosopher should pose the questions - or a philosophically-inclined physicist or biologist or neuroscientist - but ultimately, satisfying answers will come from a dialectic between philosophers and scientists of various stripes.

  2. Science is searching for the absolute that philosophy has yet to find. One day soon they as is All will be united by the absolute, a single truth that shall set us free.
    Free at last,


  3. Sam Harris's arguments for a science of morality are sloppy and weak. There's a detailed criticism of them here:


    Richard Carrier takes the same view as Harris, that happiness is our only ultimate goal and science suffices to make us happy, but Carrier's case is several years older than Harris's, in Carrier's book Sense and Goodness Without God.

    Jerry Coyne is another science-centered atheist and his scientism is criticized in detail here:


    What we have here is a split between science- and philosophy-centered atheists. Scientific atheists look down on philosophy as being too much like theology, and since they think science is unassailable they want to bring science in wherever they can in the war against theistic religion. Meanwhile, philosophical atheists see scientism as a self-refuting form of fundamentalism that embarrasses atheists by vindicating the old theistic argument that everyone needs some form of religion (we need to "fill the hole in our hearts," etc).

    You can find more on this split between these camps of atheists here:


  4. What happened to operant conditioning? It went out with Skinner. We realized that dogs and people actually think.

    1. Operant conditioning didn't "go out" with Skinner; he merely articulated and described the processes by which behavior in animals is altered. Of course dogs and people think; operant conditioning more or less describes the factors involved in a dog or person's thinking...

    2. No, it doesn't describe thinking at all. It describes conditioning.

    3. Classical conditioning - think Pavlov's dogs - is true conditioning, in the sense of stimuli eliciting a conditioned (i.e., reflexive, automatic, physiological) response. Operant conditioning involves "thinking" for the animal: the animal encounters a set of circumstances or stimuli that require it to make a choice, to deliberate - to "think". The animal - provided a sophisticated enough cognitive apparatus - performs a cost/benefit analysis, as it were, to decide if what it wants is "worth" the potential consequences. That's what I mean by "thinking" in this regard.

    4. You are conditioning the animal NOT to think for itself, but to automatically follow your commands. You're now saying that conditioning is a part of thinking. Maybe, but thinking is not a part of conditioning.

    5. I think we're talking past each other here.

      I'm saying that "learning" involves "thinking", and operant conditioning is, generally speaking, a description of how animals learn. I'll give you an example from my work with guide dogs:

      I utilize positive reinforcement (click & treat, in this case) to teach a guide dog to target a door knob with its nose. The dog eventually learns what it is that will earn him a treat (touching the door knob with its nose). I think it's true that the dog doesn't *understand* that it is targeting a door knob with its nose so the blind person can find it; but I think it's true that the animal is learning (i.e., "thinking" about) how to "operate" me in order to get a treat.

      To be completely accurate, the targeting work involves both classical and operant components: I first condition the dog to associate the sound of the click with a food treat (just like Pavlov's dogs); but once the dog is thus conditioned, he must *learn* what he must do in order to get me to click (and therefore give him a treat). He doesn't get a click until and unless he touches his nose to the door knob.

      And that's why I say learning involves thinking.

      Can you elaborate on what you mean by "thinking"?

    6. Everything we do involves thinking, so to explain conditioning by telling us it's simply thinking fails to distinguish between having a dog reduce its options from those it wanted to choose for itself to those you've chosen for it.
      Please, you don't have to explain conditioning to me, and the distinction between classical and operant components only tells us that there is more than a single step to the process, and there always has been.

  5. " In my opinion, we should be worried about the usurpation of philosophy by scientists."

    Sure, and we should also be worried about philosophers who fail to embrace the rising tide of science, as Ladyman & Ross argue.

    The rising tide of science need not eat away at a rigidly defined philosophical shoreline. As previous uncertainties are resolved new ones emerge. We need a philosophy that can evolve with science pointing the direction for future hypotheses and experiments.

    This can happen most efficiently, I think, if both scientists and philosophers recognize how their respective fields benefit from the constraints imposed by the other.

  6. Consistent with the rest of science, the science of morality is about what moral behavior ‘is’, not what values or goals ‘ought’ to be. Specifically, science’s understanding of morality is limited to understanding it as biological and cultural evolutionary adaptations.

    So sure, Harris’ and others’ claims that the goal of moral behavior ‘ought’ to be increasing well-being (however that is defined) are stepping outside the magisterium of science.

    But the larger problem hindering progress is mainstream moral philosophy’s arrogant disregard of the emerging science of morality. For the last couple of thousand years, mainstream moral philosophy has been seeking to define what moral codes ought to be without understanding what moral behavior ‘is’. Unsurprisingly, they have not reached a consensus.

    Moral philosophy would benefit by understanding what the function of moral behavior objectively ‘is’ according to the emerging science of morality: increasing the benefits of cooperation in groups (overcoming altruism failures in Philip Kitcher’s parlance, advocating pro-social behaviors in DS Wilson’s). It seems unlikely that philosophers could come up with a ‘better’ function for moral codes based on their efforts to date.

    If they can’t come up with a ‘better’ function for moral codes, philosophers would still sensibly argue about what science cannot define, such as who ought to be included in the many groups we all belong to and what benefits of cooperation we ought to pursue. Philosophers enlightened by the science of morality would still be arguing, but at least they would be arguing about the right subject.

    Further, with this acceptance of what the function of morality ‘is’, the science of cooperation from game theory and psychology could then be cheerfully applied to refining and enforcing moral codes so as to best “increase the benefits of cooperation in groups”. This could proceed more quickly without sniping from moral philosophers based on misconceptions about what morality ‘is’, such as that it is mere social convention.

    Finding a rational basis for refining and enforcing cultural moral codes is necessarily a joint project between science and moral philosophy. But at present, moral philosophy is doing more to retard and confuse this effort than science is.

    1. Mark,

      The question of what something *is* is a definitional question rather than one of empirical nature. In principle, the (definitional) question of what something *is* is prior to empirical investigation. (We must know what something is (to some extent at least) in order to make empirical claims about it). In practice, however, with respect to empirical phenomena, definitions and empirical understanding can be mutually-determining in that new empirical insights can lead to adjustments or changes of definitions, which in turn can lead to a shift in the empirical scope of the thing being studied. Still definitions are in principle distinct from empirical nature.

      Now I think a basic mistake you make is assuming that morality is centrally an empirical phenomenon: it seems you take morality to *just be* the "moral" behavior we can observe people engaging in from a third-person point of view. This is not what morality *is*. This is like saying that mathematics is the observable behavior of mathematicians in doing math.

      Morality, like math, is a *first-person* concern. It consists in first-person concern about how we should live and treat others. Philosophical study of morality is also first-person: we ask e.g. how we ought to live, what it means to ask how we ought to live, and how an answer to such questions can be justified. While moral philosophy is a highly self-conscious version of moral thought itself, i.e. of morality itself, relevant sciences of morality are at best just the anthropology, sociology, psychology, or brain science of moral thought. With respect to actual moral thought, these things are as relevant to morality as the sociology of mathematics is to mathematics: it's not knowledge that has much bearing on the practice. Specifically, such science, and science generally, cannot help answer the following basic questions of morality:

      What is morally right and wrong?
      What do/should we mean by our moral terms?
      How can moral claims be justified?

      Knowing that chimps express "moral outrage" or that moral deviance is correlated with a certain brain chemistry is not especially helpful. With morality, we are the phenomenon and all science can do is document what we do or have done.

    2. Morality is only a first person concern? Do you guys ever think before you parrot something you learned in a philosophy class? It's obviously a cultural concern, and the fact that it's found in other cultures such as chimps should tell you that - except of course you seem to think that chimp strategies are caused by brain chemistry when in fact it's generally the opposite.

    3. Paul, thanks for commenting. From your response, I expect your background is neither science nor math. That’s OK, I am interested in learning how to explain the science of morality to people of all backgrounds.

      In the science of moral behavior, the leading hypothesis is that moral behaviors are biological adaptations (such as our ‘moral’ emotions and moral intuitions) and cultural adaptations (enforced moral codes) that were selected for by the benefits of cooperation in groups they produce. In science, hypotheses concerning what ‘is’ are judged based on criteria such as explanatory power, consistency with known facts, simplicity, and so forth. A hypothesis is not defined like a premise is in moral philosophy.

      This particular hypothesis has wonderful explanatory power for the existence of our ‘moral emotions’ such as empathy, loyalty, shame, guilt, indignation, gratitude, and moral disgust. It also explains enforced cultural norms such as versions of the Golden Rule, “Do not steal, bear false witness, or kill”, and markers of membership in in-groups such as do not eat shellfish or pigs, or have homosexual sex. The Golden Rule and marker-of-membership strategies advocate well-known cooperation strategies from game theory.

      Further, moral behaviors (behaviors motivated by our moral emotions and advocated by cultural moral codes) are solutions to the cross-species universal dilemma of how to obtain the synergistic benefits of cooperation without being exploited. That is, this understanding of moral behavior from science is cross species universal and human independent.
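      The dilemma invoked here, obtaining the synergistic benefits of cooperation without being exploited, is standardly modeled in game theory as an iterated prisoner's dilemma, where a reciprocating strategy like tit-for-tat sustains cooperation while limiting exploitation to a single round. A minimal sketch; the strategy names and payoff numbers are the conventional textbook ones, not drawn from this discussion:

```python
# Payoffs from my point of view, keyed by (my_move, their_move):
# "C" = cooperate, "D" = defect. Standard illustrative values.
PAYOFFS = {
    ("C", "C"): 3,  # mutual cooperation: the synergistic benefit
    ("C", "D"): 0,  # I cooperate, they defect: I'm exploited
    ("D", "C"): 5,  # I defect, they cooperate: I exploit
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first; thereafter copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; each strategy sees only the other's past moves."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two reciprocators reap the full cooperative surplus...
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
# ...while tit-for-tat caps its losses against a pure exploiter.
print(play(tit_for_tat, always_defect))  # (9, 14)
```

      The numbers illustrate the point being made about cooperation strategies: mutual cooperation outscores mutual defection over repeated encounters, and a strategy that punishes defection avoids being exploited for more than one round.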

      You say: “Morality, like math, is a *first-person* concern. It consists in first-person concern about how we should live and treat others. Philosophical study of morality is also first-person: we ask e.g. how we ought to live, what it means to ask how we ought to live, and how an answer to such questions can be justified.” You are obviously talking about neither behaviors motivated by our biology based moral emotions that made our ancestors social animals, nor behaviors advocated by enforced moral codes. In the normal cultural sense of the word, you are not talking about moral behavior at all, because in the normal cultural sense, moral behavior is determined by a culture’s moral code. I try to avoid this confusion by focusing only on the science of moral emotions and enforced moral codes where science has useful things to say.

      Similarly, math is not first person dependent. Math is a function largely of the conservation laws in our universe. 2+2 = 4 because of conservation laws, not because people decided to define it that way, which also makes it cross-species universal.

    4. Paul, perhaps it would be useful to point out that you are talking about what individuals think morality ought to be (what they ought to do). The science of morality covers a very different subject: what moral behaviors are as biological and cultural adaptations. The science of morality is about what morality descriptively ‘is’; you are talking about what you think normatively ought to be moral.

    5. I think this back & forth between Mark & Paul is a good example of an unnecessary argument based on an artificially sharp distinction between science and philosophy.

      Science predominates in answering what 'is' but should incorporate philosophy in generating its hypotheses. Philosophy predominates in answering what 'ought', but needs to be informed by the best experimental results.

      The interaction between self-interest and cooperation can form a stable synergy that leads to a good degree of flourishing. If however either aspect dominates the interaction, then stability, flourishing and I would say morality suffer.

      I think it is far more productive to probe the beneficial interaction than to argue the merits (of philosophy, or science, or self-interest, or cooperation, or ...) in isolation.


    6. Is
      The universe is truly boundless except for the laws we ourselves have created and placed around us.
      This box we made confines us.
      How many laws are there?
      Can they be counted?
      And where oh where does freedom fit in?
      It doesn't!
      The miracle is outside the box,
      Our own self-evidence.
      I found the edge of the box One day,
      Looked over and afraid...slipped and fell,
      And now the miracle is free,


    7. I like it. Interesting questions framed in a poem.

    8. Seth, my interest in the science of morality is in its cultural utility in refining cultural moral codes – an idea I think you are sympathetic with. Its cultural utility definitely benefits from the interaction of science and moral philosophy. However, I see science playing a far more important part than the mere supporting role implied by phrases such as Steve Neumann’s “philosophical reasoning can and should be informed by our best science” or even your “Science … should incorporate philosophy in generating its hypotheses”.

      For example, the leading hypothesis from science can be phrased as “Moral codes are cultural adaptations for increasing the benefits of living in a culture by advocating strategies that solve the cross-species universal dilemma of how to obtain the synergistic benefits of cooperation without being exploited”, or more generally as “Altruistic acts that also increase the benefits of cooperation in groups are moral”.

      Moral philosophy is unimportant to formulating such hypotheses. The normal methods of science are sufficient.

      Moral philosophy’s contribution comes from informing us about what values and goals we ought to pursue in implementing the means of morality as defined by science. For instance, who ought to be included in each of the many cooperative groups we belong to, and how ought we to balance self-interest and cooperation with others?

      I know it will be a long, hard slog to convince mainstream moral philosophy that culturally enforced moral codes’ means are essentially defined as a matter of science and moral philosophy’s contribution should focus on ends. But I see a great culturally useful prize in mere acceptance of this means of moral codes as revealed by science. Even if moral philosophy struggles for another couple of thousand years with what moral behavior’s end ought to be, the rest of us can proceed with “Altruistic acts that also increase the benefits of cooperation in groups are moral” which would be a great improvement over the present state of affairs.

    9. "Consistent with the rest of science, the science of morality is about what moral behavior ‘is’, not what values or goals ‘ought’ to be."

      Mark, it seems to me that you're being overly prescriptive in defining the science of morality. Surely there can be more than one science of morality, each studying it from a different point of view. What "ought to normatively be moral" might turn out to be a scientific question.

      So yes, the study about what morality descriptively 'is' is one science (moral ethology?), but that doesn't mean there aren't other scientific questions we can ask about morality.

      If the question you want to ask is how should we make moral choices, then you're in Harris's domain. Moral ethology cannot answer this question directly, but of course that doesn't mean that its contribution to our understanding of our situation won't help.

      You might argue that science is for "is" questions, not "ought" questions, but that is not true. Some sciences are descriptive (physics), and some are instrumental (rocket science). Instrumental sciences show us how we "ought" to behave to achieve our goals.

      Medicine is Harris's example. Biology is the science that tells us what the human body "is", while medicine is the science that tells us how we "ought" to treat it. All Harris is saying is that once we accept the goal of promoting well-being, a new science of morality might be born which would help us to make moral choices.

      Not that I necessarily agree with him that much (see my post below).

    10. Disagreeable, we agree that science can be the source of instrumental oughts regarding goals such as well-being. My point was that science cannot logically be the source of the imperative oughts commonly sought in moral philosophy.

      Assume a culture’s goal (ends) for enforcing their moral code is increasing well-being while maintaining a value that no one should be exploited. Science’s revelation of the means of moral codes plus these defined ends of moral codes seem adequate – on their own – to refine enforced moral codes in ways that will be far more culturally useful than any basis proposed by moral philosophy to date that I am aware of.

      But moral philosophers might reasonably point out that the ends for enforcing moral codes need justification, terms in those ends such as “well-being” and “exploitation” need defining, and moral philosophy’s methods are the best means of justifying such ends and making those definitions. Sure. But suppose moral philosophy is slow in coming to a consensus. Those of us who already share such goals for enforcing moral codes are logically free to move full speed ahead to refine moral codes to be consistent with science’s defined means and these generally agreed on, but not fully justified or defined, ends for moral codes.

      I did not understand Sam Harris to be making the sensible claim that “once we accept the goal of promoting well-being, a new science of morality might be born which would help us to make moral choices.” I understood him to be saying “the goal of promoting well-being is imperative based on science” which I could make no sense of.

    11. Mark, I think our two positions are rather close. I agree with much of what you wrote. My main point was a reaction to your perhaps overly narrow interpretation of "science of morality".

      I do, however, think you are misunderstanding Harris's point. I don't believe he ever says that science shows us that we should promote well-being. Rather he says that he cannot imagine what we might mean by morality if we are not talking about promoting well-being.

      He takes this definition of morality as an axiom, and then argues that science is how we should proceed to pursue this goal. In Harris's view, we should not therefore agree to differ on questions of morality, nor should we entertain any kind of moral relativism.

      The point he is stressing is that for any moral question, there is an objectively correct answer within the scope of his definition of morality.

      The confusion arises because he unfortunately overstates his position by maintaining that this has given us an objective foundation for absolute morality, which is clearly not the case.

    12. I still think the distinctions being drawn are far too sharp. Moral philosophy is not just about 'ends'. Morality is also not some static thing that 'is'; it 'is' in constant evolution. So I agree that it is crucial for any moral philosophy to be informed by the best 'science of morality' that can help describe the process of how moral codes tend to change over time.

      The evidence I am aware of suggests a progressive trend in morality that is influenced by many factors feeding back and forth. The evolutionary, economic & cultural influences are all important in this process and it is not so simple to isolate the impact of science or philosophy in this web. I think those advocating for a dominant role for science are likely ignoring how much philosophy they are doing. Just as the best 'moral philosophy' is informed by the best 'science of morality', I think the reverse is also true, and it is counter-productive to think of them as adversaries.

    13. Disagreeable, thanks for clarifying Harris’ position. I was so disappointed with The Moral Landscape that I did not finish it.

      As you mention, he overstated his case. But what really offended me was his ignoring the literature that so plainly supports the science of morality being about what moral behaviors are as biological and cultural adaptations for increasing the benefits of cooperation in groups. By focusing on ideas like we can understand what well-being is by looking at brain scans, he drove the bus into the ditch regarding science’s rightful place in defining moral codes, and may have set back those efforts by years.

      On the other hand, I agree with Harris when he says “he cannot imagine what we might mean by morality if we are not talking about promoting well-being.” I would go farther and say “I cannot imagine what we might mean by morality if we are not talking about promoting well-being by the specific means of increasing the benefits of cooperation in groups.”

    14. Mark
      “Altruistic acts that also increase the benefits of cooperation in groups are moral”

      I am unclear how moral philosophy can be separated from this example. What is the altruistic act? Ignoring a definition for altruism, how is it delivered? Is it delivered within the group or to outsiders? Altruism purely within the group may produce a group loyalty that often has negative consequences for outsiders. The quote seems to me to be a less than well considered philosophical statement.

      If it is not obvious I should qualify that I am no expert in philosophy or in the 'science of morality'. I have a long interest in philosophy (mostly eastern) and my primary field is statistics.

    15. Mark,

      I have no problem with the general idea of the science you describe. Your own version of it, however, seems to have the problem of taking groundless positions on fairly substantial philosophical questions regarding morality. For me it's hard to tell whether you're just oblivious to such questions or believe that science somehow supports your positions. I'm not sure which explanation would be more charitable.

      I think when you talk about "what morality really *is*" you are talking about the way the phenomenon of morality appears within a biological framework. Now while I think morality can be studied within a biological framework, I also think that (1) the biological framework is not the only framework within which morality can be understood, and (2) the biological framework is not morality's primary framework.

      Regarding point (1), ordinary life and (moral) philosophy are also frameworks within which thinking about morality is coherent. Actually, as morality is a philosophical aspect of ordinary life, we might think of these two frameworks simply as philosophy.

      Regarding point (2), the reason I say that the biological framework is not morality's primary framework is that our moral terms clearly developed within the framework of philosophy/ordinary life and not within the biological framework. While moral concepts are at home in their ancient philosophical framework, the biological framework is a recent artifice of science.

      Now a serious error I see you as possibly involved in is thinking that you can redefine moral terminology in terms of biological terminology. Perhaps the idea here is that since physics and chemistry redefine ordinary terms, such as 'space' and 'water', in the terms of their frameworks, biology can do the same with moral terms. But morality is very different from space and water. Moral terms are not empirical terms that refer to things in the world that have independent, discoverable natures. Rather, moral terms function within human interaction and serve certain assessments of self and others. Given this, to skew moral terminology toward biological meanings so that they are like terms of an empirical science would simply undermine and confuse the developed usefulness of such terms in ordinary life (to the extent that anyone paid attention). From the point of view of the average person, someone who used moral terminology in a way steeped in biological redefinition would be incomprehensible.

      Generally, I think what you're trying to do, Mark, is assert the biological framework as the dominant framework of moral discourse. You want everyone to talk about morality as if it were a branch of the biological sciences. Now, regarding what I said at the beginning about your ignoring philosophical questions, it seems to me you take a strong position on the philosophical question of whether morality can be seen as a branch of biology without appearing to have given possible problems with that view much thought. Given that you seem to have trouble getting philosophers on board with your vision, you might entertain the idea that perhaps there's something philosophically wrong with it. Generally, the vision you are trying to explain here is a philosophical vision - to argue that morality should principally be understood within a biological framework is a philosophical thesis, one that would be very much at home in an ethics journal. So perhaps it would be educational to engage with the philosophical issues as such rather than seeing yourself as an educational missionary in the land of naive philosophers.

    16. Seth, I agree that it is not culturally useful to think of moral philosophy and the science of morality as necessarily adversaries.

      My adversaries are those in mainstream moral philosophy who either dismiss the science of morality as irrelevant or at best treat science as having some minor secondary role.

      And yes, morality is in constant flux. But the science of morality illuminates what moral progress ‘is’ in ways that moral philosophy is unable to do. Assume for a moment that “moral behaviors increase the benefits of cooperation in groups” as I claim. If this is cross-species universal as part of science, then what is changing with moral progress?

      The most obvious source of moral progress is in changes to “Who is in the group?” Over human history the in-group that merits full moral regard has expanded from family to tribe to race to country to all people. This is the “expanding circle” Peter Singer and traditional Buddhism describe as exemplifying moral progress.

      The second most important source of moral progress is in the strategies advocated in those moral codes. Perhaps the simplest strategy is direct reciprocity, the oldest written form of the Golden Rule: “Do for one who may do for you, that you may cause him thus to do” (Egypt, about 1500 BC). Its more modern and effective written version is indirect reciprocity: “Do to others as you would have them do unto you, for this summarizes the law and the prophets (morality)”, Jesus, Matthew 7:12.

      There is one very old strategy for increasing the benefits of cooperation in groups that is falling victim to moral progress: marker strategies that show membership in, and commitment to, a more moral (more reliably cooperative) in-group. These marker strategies include moral requirements to be circumcised, not to eat shrimp or pigs, and not to engage in homosexual sex. Here, moral progress is made by abandoning these in-group versus out-group markers as no longer effective once all people are considered to deserve full moral consideration.

      This understanding of what moral progress really ‘is’ is an effortless fallout of the science of morality. Traditional moral philosophy is both necessary and useful for refining moral codes, but it is not remotely competitive in revealing either what morality ‘is’ or what moral progress ‘is’.

    17. On this point:

      >The science of morality is about what morality descriptively ‘is’, you are talking about what you think ought to normatively be moral.<

      No, I'm denying that the descriptive nature of morality is biological. Descriptively, morality is a domain of human concern defined by certain kinds of questions. I think what you are talking about is simply observable moral behavior rather than morality itself.

    18. Seth, this is my reply to your 9:02 comment.

      “Altruistic acts that also increase the benefits of cooperation in groups are moral” describes what moral behavior is as a biological and cultural evolutionary adaptation (what the science of morality is about). As such, it is the hypothesis about moral behavior with the largest explanatory power. It is not a philosophical statement, though out of context I can see how that is confusing. It is a proposition about the means of moral behavior that can be shown to be either true or false. Perhaps I should phrase it as “… in groups are evolutionarily moral”. Would that be clearer?

      If we define normative moral codes (and their principles) as “what would be put forward by all rational people under specified circumstances”, then to make it into a philosophical claim we would have to show why 1) all rational people would put forward this particular means for moral behavior and 2) what goals (ends) in terms of definitions of “benefits”, “groups”, and other values such as “do not exploit other people” all rational people would put forward.

    19. Hi Mark
      I'm just not buying the statement:

      “Altruistic acts that also increase the benefits of cooperation in groups are evolutionarily moral”

      has the largest explanatory power of moral behavior.

      Again even if we ignore what seems to be a potential tautology between how 'altruistic' and 'evolutionarily moral' are defined, and even if we limit ourselves to biology and cultural adaptations, I think the statement takes a partial view of the biological patterns that tend to be stable and flourish.

      The dominant pattern in biological systems that find stability and flourish, seems to be based on negative feedback systems that mutually constrain each other. So part of that picture involves the constraint of individual self interest for the benefit of the whole group as the quoted statement implies. The other part of the picture involves the necessary constraint on the power of the group so that a proper degree of self interest is maintained by the individuals.

      As you mentioned Singer's 'expanding circle', this pattern seems relevant as we move through different levels of biological complexity and into concepts. I don't see how we can tap the explanatory moral power without a philosophical consideration of values at the various levels. An example would be the ideal balance between individual-level virtue ethics and group-level utilitarianism so that each constrains the other in a complementary way. Massimo has posted on this topic.

    20. Paul, perhaps I can clarify why what you see as fatal flaws are not.

      Starting with the science of the matter, it is sensible to make hypotheses about what moral behavior is as a matter of science because there is objective data about what moral behavior is and there are applicable criteria for those hypotheses’ provisional truth.

      It is easier to begin with cultural data. That cultural data is all past and present enforced moral codes. (Enforced moral codes are sets of norms whose violators are commonly thought to deserve punishment of at least social disapproval.) Applicable criteria for provisional truth include explanatory power, no contradictions with known facts, simplicity, and so forth. No matter how diverse, contradictory, and bizarre they superficially appear, virtually all past and present enforced moral norms share a common function. The philosopher Philip Kitcher calls that function “overcoming altruism failures”. The evolutionary biologist D.S. Wilson calls it “advocating pro-social behaviors”. I describe it as “overcoming the universal dilemma of how to obtain the benefits of cooperation without being exploited, or more briefly: advocating cooperation strategies”.

      That is, the primary selection force for enforced cultural moral standards is the increased benefits of cooperation in groups they produce, where those benefits are whatever the group finds attractive.

      The above is just about the science of the matter about what moral behavior ‘is’. There is no imperative ought or normativity connected with it.

      Repeating from my reply to Seth: If we define normative moral codes (and their principles) as “what would be put forward by all rational people under specified circumstances”, then to make it (the above hypothesis) into a philosophical claim we would have to show 1) why all rational people would put forward this particular means (function) for moral behavior and 2) what goals (ends) in terms of definitions of “benefits”, “groups”, and other values such as “do not exploit other people” all rational people would put forward.

      Making the above hypothesis into a moral principle that would be put forward by all rational people is moving out of the domain of science and into moral philosophy.

      My complaint against mainstream moral philosophy is its disinterest in what moral behavior is as a matter of science and in the insights that science provides.

      You seem to be focused on the biology of morality, the moral emotions of empathy, loyalty, guilt, shame, indignation and so forth and the remarkable biology underlying our moral intuitions. Our moral biology is only one substrate for encoding strategies for overcoming the universal cooperation/exploitation dilemma. The main substrate is, as you point out, past and present moral codes.

      Just FYI, I have made presentations to professional philosophers who are interested in and even who specialize in understanding moral behaviors as biological and cultural evolutionary adaptations. None have seen the logical errors you do.

      Does the above help?

    21. Seth, there are a great many “biological patterns that tend to be stable and flourish” that have nothing to do with morality. Some are decidedly immoral such as greed and an inclination to dominate other people through violence. So I don’t understand your comment “the statement takes a partial view of the biological patterns that tend to be stable and flourish.” Of course it takes a partial view. It is just describing morality as an evolutionary adaptation, not all evolutionary adaptations.

      The science of a matter, including moral behavior, does not come with any intrinsic values. Therefore, it makes no sense to talk about “philosophical consideration of values at the various levels” when the subject is the science of morality. Moral values should not change objective science.

      On the other hand, philosophical consideration of values (such as the split between commitment to personal well-being and group well-being) is necessary for the science of the morality to be culturally useful, such as in refining moral codes.

      Perhaps you are thinking of an evolutionary preference (a selection force or forces) for the split between commitment to personal well-being and group well-being that might be culturally useful to understand? Maybe, but that sounds like pretty muddy water to me. Remember, for that science to be culturally useful in refining moral codes you would have to first identify the science and then argue that “all rational people would put that forward as properly part of a moral code”. This seems unlikely.

    22. Mark

      It's not helping yet, but I appreciate your engagement and will continue to review what you have written; perhaps my understanding will evolve.

    23. Mark

      Just to clarify one point. When I wrote:

      “biological patterns that tend to be stable and flourish”

      I was imprecise (as I tend to be often). I was referring to a 'flourishing' at multiple levels (part-whole). So a symbiotic relationship or a negative feedback system would qualify for what I meant as a stable flourishing pattern, but a parasitic relationship would not.

      I think these mutually constraining patterns align well with some eastern philosophical ideas that I find intuitive and accessible, and I think they are applicable to many problems. I think the rigor of western science, and perhaps western philosophy as well, sometimes tends towards reduction or separation and could benefit from checking in with this concept. Of course I am also well aware that my understanding will benefit from a more rigorous engagement with the science you are describing.

    24. Mark,

      The science you describe in the first part of your last post to me sounds quite interesting and valid, like a kind of sociology. I'm not sure, however, that the codes in question should be called "moral codes" or that the behaviors in question should be called "moral behaviors." The reason is that I'm not sure how 'moral' is being defined there. It seems there's a real risk of calling codes and behaviors that are merely prudential, or otherwise non-moral, moral. So how do you define 'moral'?

      Is this statement supposed to suggest a definition?

      >That is, the primary selection force for enforced cultural moral standards is the increased benefits of cooperation in groups they produce, where those benefits are whatever the group finds attractive.<

      That is, are we to see moral codes as codes that serve group cooperation in attaining whatever ends the group desires, and such that they are better or worse depending on how well they serve those ends? If so, at best this is a controversial definition of 'moral code'. Personally I don't see how it distinguishes moral codes from prudential codes, something that a definition of 'moral code' must do. But you might have an answer to this, so I ask: how do you distinguish moral codes from merely prudential, or enforced instrumental, codes?

      On a more general level, thus far - in all my posts - I have been critiquing your project qua scientific project. (Btw, above I've meant 'biology' as a stand-in term for natural science generally.) Another way of making the critique is to see it as a philosophical project - at least in part - and criticize it on the basis of its invocation of the word 'science'.

      As you may know, there's a bit of a movement among scientists to give half-baked opinions on philosophical issues and then stamp such opinions SCIENCE. Lawrence Krauss's dubious redefinition of 'nothing', with the attitude that it was a scientific redefinition, is a favorite example. We might call makers of such dubious moves "science-stamp philosophers" or "stamposophers" ("lovers of stamping") for short.

      Generally, I have no problem with your theorizing so long as you avoid stamposophy - i.e. so long as you recognize complex philosophical issues for what they are and avoid stamping your philosophical positions as science. The posture that science has answered a difficult philosophical question, when in fact this is just a reflection of the philosophical naivete of the scientist, naturally irks philosophers. It's also a bit of a letdown, as scientists should be epistemic heroes.

    25. Paul, I think we are getting closer to understanding each other.

      Regarding whether we should call cultural moral codes and the behaviors they advocate ‘moral’: It would be a losing battle to try to convince groups, religions, and societies that they have no moral codes and that those codes do not advocate moral behaviors. So it would be both impractical and, I think, misleading to call these codes and behaviors anything else but moral. I define what is moral by the standard cultural definition – good behavior that merits praise and bad behavior that merits punishment of at least social disapproval.

      I see the communication problem between science and moral philosophy as due to a strong inclination among philosophy majors to consider any mention of moral codes or behaviors as a claim about what is normatively moral. This inclination is so ingrained that it is commonly and irrationally clung to, regardless of how often and how vigorously it is pointed out that science only makes claims about what is descriptively moral, not normatively moral in the moral philosophy sense. Here descriptively moral is in the same sense that all past and present moral codes put forward by groups, religions, and societies are descriptively moral.

      So the root of my communication problem is that the science of morality is only about descriptive moral principles (hypotheses concerning what is descriptively moral), but, these are repeatedly and falsely interpreted as normative claims by moral philosophy majors, regardless of my best efforts at clarity. And then, based on their gross misunderstanding, I am told I am naïve!

      I would greatly appreciate any suggestions you might have on how to more clearly talk about descriptive moral principles without them being misunderstood as making normative claims.

      Regarding “how do you distinguish moral codes from mere prudential, or enforced instrumental codes?”: Descriptive moral codes are enforced instrumental codes. A science-derived descriptive moral principle (a hypothesis about descriptive moral codes) is, like the rest of science, first a source of instrumental and prudential oughts for achieving goals. Only if that principle is put forward in shaping an enforced moral code do people give it normative power, in the same way all cultural moralities gain normative power through cultural advocacy and enforcement.

      Regarding logical errors in the interactions between the science of morality and moral philosophy: I count the larger errors on the mainstream moral philosophy side. (Though Sam Harris may be an outlier case largely based on him not being clear he was only talking about instrumental moral oughts.)

      Scientists spend their lives dealing primarily in what descriptively is, whether the subject is gravity, biology, or moral codes, and then sometimes deal in what people instrumentally ought to do based on that knowledge in order to achieve their goals. So the idea that a descriptive hypothesis about past and present moral codes has inherent normative power over what people imperatively ‘ought’ to do is bizarre and the obvious product of faulty logic.
      Moral philosophy majors, as I have mentioned, too often cling with a death grip to the idea that any claim about morality, no matter how many times it is described as descriptive, is actually (secretly?) a normative claim, which they can then readily bash as philosophically unjustified.

      I agree that people working in the science of morality should be more clear that their hypotheses are, at best, instrumentally useful and in no way are imperatively binding. But mainstream moral philosophy is making their tough job – reaching a consensus on what moral codes ‘ought’ to be – tougher by ignoring or dismissing what the science of morality tells us moral behavior ‘is’ as an evolutionary adaptation.

      Again, any suggestions you might make on how to reduce such miscommunications would be greatly appreciated.

    26. Mark,

      Your problem with moral philosophers might center on differing views of the nature and scope of normativity. A binary that appears a lot in your posts above is one between "descriptive" talk about morality and (normative) talk about what morally should be the case. This binary is problematic in a few ways and may explain your miscommunication with moral philosophers.

      Normativity might be defined as prescriptivity with respect to norms (i.e. how things should be on some human scale). Any statement that has some prescriptive force with respect to norms is normative. Note that prescriptive statements are not necessarily normative; to tell someone that they should hire a new attorney is prescriptive but not normative. Also note that prescriptivity (including normativity) is not necessarily moral; it can also be prudential or instrumental.

      Now normativity and description are not mutually-exclusive. This might be explained by taking any given statement as possibly normative on two levels: a statement can be normative in content and/or normative in force. This makes way for statements being normative in force but descriptive in content. An example of a *normative description* is "knowledge is justified true belief." In terms of content, knowledge is being described; the force of the statement, however, is (non-morally) normative; a prescription is being made about how we should think about knowledge. On my view of philosophy btw, (non-moral) normative description is philosophy's main sort of assertion.

      So the opposite of prescription (including normativity) is not description but what we might call 'positivity', where this just means non-prescriptive (for our purposes). To say that one had a sandwich for lunch, for example, is a positive statement given that there's no suggestion that e.g. everyone should have sandwiches for lunch.

      Now perhaps what the moral philosophers are saying is not, as you suppose, that you are making moral claims with your descriptive claims but that you are making (non-moral) normative claims about morality with your descriptive claims. The fact that your statements about morality are descriptive in content does not mean they are not normative in force. In fact they certainly are normative in force: the only way to talk non-normatively in using (or mentioning) moral terms is to limit oneself to quoting what others say with such terms.

      Generally, I think that to talk about morality at all from a knowledge point of view is to enter the domain of philosophy. This is because definitions of moral terms are inherently normative and contestable, however "descriptive" they are. This is obvious when one thinks about it: there is no objective factuality that entails this definition or that for moral terms (unlike in the case of 'space' and 'water'). Thus one is always taking a philosophical position when one defines a moral term. My guess is that what the moral philosophers take exception to is your definitions of moral terms. When they mention 'normativity' they may mean that of your descriptive definitions, rather than that of any prescriptive moral claims they supposedly take you to be making.

      Without meaning to implicate you, Mark, I think a large part of the problem between scientists and philosophers is a lack of philosophical education on the part of scientists. Being in the knowledge game, they really should have at least an above-average familiarity with philosophy. As it is, lawyers probably know more about philosophy on average than scientists do. As for philosophers not understanding science and math, such an idea could only result from knowing little about philosophy. Rigorous study of the foundations and methods of knowledge, of which science is just one dimension, is part of the substance of philosophical study (as it has been since the Greeks).

    27. Paul, many thanks for your careful reply.

      I think I understand your points about normativity, prescriptivity, and the potential normativity even of descriptive claims, but will have to take some time to consider your other comments regarding how I could better present the science of morality.

      But I see a problem in your suggestion to avoid using terminology from moral philosophy when talking about the science of morality. Reducing my communication difficulty to its simplest elements:

      I (and others) propose a hypothesis (though we phrase it differently) about ‘morality’, understood as all past and present enforced cultural moral codes and ‘moral’ emotions, and further claim that this hypothesis is provisionally true in the normal sense of science, based on appropriate criteria, predominantly explanatory power.

      My communication difficulties begin when I say something like:

      “This hypothesis about behavior advocated by past and present enforced cultural moral codes and motivated by our ‘moral’ emotions is a descriptive moral principle that summarizes what those ‘moral’ behaviors ‘are’, cultural and biological adaptations selected for by increased benefits of cooperation in groups. This moral principle appears to be instrumentally useful in refining moral codes to more effectively achieve goals that are aided by increasing the benefits of cooperation in groups.”

      Then a response might be “But you have not justified that moral principle!”, which prompts me to question either the person’s understanding of the scientific method or their rationality.

      Your advice to not use moral philosophy’s terminology such as “moral” and “descriptive” might reduce communication problems with moral philosophy majors, assuming I could communicate what the terms referred to. However, it would make the discussion incredibly difficult for anyone else to follow.

      Somehow I have to figure out how to tell the story such that both groups can follow it at the same time.

      And yes, until I began engaging with philosophy majors I also expected them to be quite knowledgeable about the “foundations and methods of knowledge” in science and mathematics. I have been bewildered by how frequently this does not appear to be the case, even among professionals. Perhaps it is just due to my inability to understand the profound points they are making.

    28. I would like to thank Mark and Paul for continuing a dialog that identified and clarified areas where each may have been inadvertently talking past the other.

      I think a scientific 'description' of the patterns that hold between moral codes and moral behaviors at the level of the group and the individual is crucial. Since groups compete with each other, each group is a sub-group of a larger whole. I think a description with the greatest reach or explanatory power would need to account for that hierarchical structure.

      My bias toward thinking in terms of holistic structure made it difficult for me to think of morality in purely descriptive terms. Once there are sub-groups with varying moral codes, it seems some type of normativity is useful in describing the patterns.

    29. Seth - It's a pleasure; I too thank Mark for his patience in hashing this out.


      I didn't suggest that you avoid using terms of morality/moral philosophy, only that you avoid using them problematically.

      Anyway, let me see if I now have a better handle on what you're trying to do. As I understand it, you want to use empirical study of past moral codes as a development/test basis for hypotheses about correct moral principles.

      Sounds like the hypotheses in question would have a form something like: Moral principle, p, if adopted by a group, increases that group's chance of attaining its collective aims. This hypothesis can be tested by looking at past moral codes; if the hypothesis looks to be true, then we can say that moral principle, p, is correct.

      If this is what you have in mind (I'm aware that I may still not be understanding you), then I know why moral philosophers mention justification; this concern is related to my concern about distinguishing between moral and instrumental codes. In effect these are two ways of making the same criticism.

      Regarding the justification criticism, the historic basis does not justify the "moral" principle, p, qua moral principle. The reason is that p's facility in bringing about the group's aim doesn't imply that p is morally correct. Even if it were assumed that the aim of the group is a moral one, it would not follow that p is morally correct; there can be immoral means to moral ends. In sum, the historic basis is inadequate to make the claim that p is *morally* correct.

      The way I put this concern, in an earlier post (above), consisted effectively in saying that there's a worry that at most what is justified by the historic basis is p qua instrumental principle, i.e., not qua moral principle.

      There's a great difference between saying that p serves a group's aims and saying that p is morally correct. The former does not imply the latter, and this inference I think is the problematic inference in your project. Because this inference doesn't hold, you're effectively not talking about morality at all.

      This doesn't mean your project is without merit. It just means it should be called something like "cooperation theory," as opposed to being dubiously elevated to a "science of morality." The word 'moral' in your project might be like a Rolls Royce hood ornament taped to the hood of an Acura ;)

    30. It's too bad that, relative to otherwise serving a group's aims, you've come up with nothing that describes "morally correct," unless it's your own group's morality that's being used as a standard for some other, to you, less civilized or emancipated group. In the end you're struggling to eliminate the arbitrary, which you should know isn't possible. We try to predict and react to the arbitrary, period.

    31. Paul,

      Again, I think we are getting closer to understanding each other. I will use your wording as closely as possible to help find where we part company.

      I start my logic here:

      A hypothesis about moral codes can be tested by looking at past and present moral codes; if the hypothesis meets appropriate criteria for provisional truth in science, then we can say that hypothesis, p, is correct.


      If groups decide that this hypothesis about moral codes provides insights into how to refine their moral codes to better meet group goals, then these groups are logically free to use these insights to refine their moral codes.


      This hypothesis unifies all past and present enforced moral codes under a single principle that describes the universal function of moral codes. Therefore, it can sensibly be called a descriptive moral principle that groups ought (instrumental) to advocate and enforce if they expect doing so to best meet their goals.

      Specifically, this descriptive moral principle describes the function of moral codes as: to increase the benefits of cooperation in groups by advocating altruistic cooperation strategies.

      However, as delivered from science, this descriptive moral principle is necessarily inadequate to fully define a moral code because it defines only what moral means descriptively are. It does not define what moral ends ought to be.

      Philosophers could still argue about whether what moral means ‘ought’ to be differs from what moral means ‘are’, but I expect that would quickly reveal itself to be a waste of effort. It would not be wasted effort for philosophers to argue about moral ends: which goals and benefits are moral or immoral, and how those benefits ought to be shared.

      Then I get responses such as “But you have not justified your moral principle!”

      You have phrased a perhaps similarly motivated response as “There's a great difference between saying that p serves a group's aims and saying that p is morally correct.”

      From my perspective, I have never made any such naïve claim and remain profoundly puzzled by assertions that I have. Can you identify where you think I did make such an assertion?

      I feel like we are really close here to the nut of a communication problem between the science of morality and moral philosophy.

    32. Mark,

      Your post above is helpful, but something that is still unclear to me is whether the science you're describing is a purely descriptive study of moral codes or one that also leads to moral prescriptions. So: is it purely descriptive, or also prescriptive?

      If it is at all prescriptive, then the problem of justification, discussed above, is somewhere within it. If it is purely descriptive, then I don't see the novelty: it sounds like a specialty within sociology. And, referring back to earlier things you said, I don't see why moral philosophers should find it especially significant; they are, after all, generally very well-versed in the history of moral thought.

      So a second clarifying question is: what's the special significance of this science to moral philosophers?

      I'm hopeful that answers to the above two questions will finally clarify things. Btw, apologies about the hood ornament analogy; it's a good analogy for what I meant but I see that it comes off as rather crass.

    33. Paul, to your first question about the prescriptivity of science based hypotheses:

      The science of morality is necessarily only descriptive and can only reveal descriptive moral principles in the form of hypotheses about, for example, past and present enforced moral codes.

      Like the rest of science’s knowledge, knowledge from the science of morality necessarily has no inherent prescriptive power.

      To your second question “What's the special significance of this science to moral philosophers?”:

      There are two levels of answer to this question. The first is about its instrumental utility. The second is about increasing moral philosophy’s focus on prescriptive moral ends, an area that the science of morality is necessarily silent on.

      The first level is that moral philosophers (and others) should find its instrumental utility interesting.

      I argue that, based on its instrumental usefulness in meeting common goals for putting forward and enforcing moral codes, this descriptive moral principle would be put forward by more well-informed, rational people than any other available principle. I think moral philosophers ought to find interesting a moral principle that would be preferred by more well-informed, rational people than any available alternative.

      The foundations for why it would be preferred to any other available alternative include:

      It defines moral behavior as strategies to increase the benefits of altruistic cooperation in groups and thereby cooperation in groups in general. This is the selection force for past and present moral codes. If rational people choose any other function for moral codes, then they are talking about a new kind of social code unrelated to all past and present moral codes, which seems unlikely.

      The principle defines the selection forces for our ‘moral’ biology that produces 1) ‘moral’ emotions such as empathy, loyalty, shame, guilt, indignation, gratitude, and moral disgust, and 2) much of our experience of durable well-being that is triggered by altruistic cooperation in groups such as family, friends, and so forth.

      Since the principle defines the selection force (the benefits of altruistic cooperation in groups) that selected for both our ‘moral’ biology and past and present moral codes, it fits people like a key in a well-oiled lock because this key (the evolutionary morality principle) is largely what shaped this lock, our social psychology and much of our experience of durable well-being.

      Finally, this descriptive moral principle is cross-species universal since it defines the solution (altruistic cooperation strategies) to the cross-species universal dilemma of how to obtain the benefits of cooperation without being exploited.
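      [The "cross-species universal dilemma" of obtaining the benefits of cooperation without being exploited is exactly what the game theory mentioned above formalizes. As a purely illustrative sketch (the payoff values and strategy names below are the textbook conventions of the iterated prisoner's dilemma, not anything from this comment), a reciprocal strategy like tit-for-tat earns the full benefits of mutual cooperation yet concedes at most one round to a pure defector:]

```python
# Toy iterated prisoner's dilemma (illustrative sketch only).
# Standard textbook payoffs: mutual cooperation -> 3 each;
# mutual defection -> 1 each; lone defector -> 5, exploited cooperator -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    """Cooperate first, then mirror the opponent's previous move."""
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    """Exploit unconditionally."""
    return "D"

def play(strat_a, strat_b, rounds=10):
    """Play `rounds` rounds and return the two cumulative scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

      [Over ten rounds, two reciprocators earn 30 points each, two defectors only 10 each, and tit-for-tat loses only the first round to a constant defector (9 vs. 14): reciprocal cooperation outperforms mutual defection while remaining resistant to exploitation.]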

      The second level of answer is that moral philosophers (and others) should find this descriptive moral principle interesting because it enables a productive and appropriate focus of moral philosophy on the ‘ends’ of enforced moral codes rather than on what ‘means’ ‘are’ – which is what the descriptive moral principle defines. For example, should the ultimate goal of moral codes be maximization of durable well-being for everyone (a Utilitarian goal) or for the in-group that imposes the moral code? Then, is durable well-being the proper end for enforcing moral codes in societies, or ought our moral codes have a different end? And if our moral codes are aimed at increasing durable well-being, what does that mean and how do we measure it?

      I look forward to your comments and hope I have clarified rather than muddied the water.

    34. Mark,

      I think I have a much better view now of what the science in question is; however, at this point we loop back to the question of what moral philosophy is about and whether biological accounts of morality are relevant to it. This is a philosophical question on which philosophers might hold different views, and one on which you appear to have a view, one that I think takes a debatable position on the nature of morality.

      As I mentioned earlier in this exchange, I don't think the supposed biology of morality has a close relation to the practice of moral philosophy. I'm not a moral philosopher but I'm an (independent) philosopher of another kind, and I know what doing moral philosophy would mean for me, given my background philosophical views. Biology would not be especially relevant to my practice. I would be concerned with morality as an idiom of natural language and would develop my view that morality is grounded in rationality.

      Moral phenomena are of course underpinned by our biological/evolutionary natures, as are all human phenomena, including math, poetry, and law. It doesn't follow from this, however, that the science of our biological nature is especially relevant to the nature of morality or the practice of moral philosophy. And it seems clear that many moral philosophers don't see it as especially relevant. If you want to argue with such philosophers on this point, what you are arguing about essentially is the nature of morality and the aims of moral philosophy. That is, this is an argument that properly takes place within moral philosophy. It seems the best strategy for you - given that your aim is adequately important to you - is to develop a deep enough understanding of moral philosophy to make the argument from the inside.

    35. Again, in my meticulously self-informed opinion, all morality comes down to what we can be trusted by our cultures to do right, and distrusted if we do it wrong. And in that sense all beings with a culture have moral rules and standards to adhere to. Of course, if you don't believe all beings have a culture, then this comment will mean nothing to you, and you'll continue to count your angels only on that human pin that seemingly arose from nowhere.

    36. Paul, I agree that many (perhaps almost all) moral philosophers don't see the science of our biological nature as “especially relevant to the nature of morality or the practice of moral philosophy.” This view of moral philosophy began with the classical Greek philosophers and has been reinforced for over 2000 years.

      The circumstance is almost inevitable, since the intellectual tools (largely evolutionary science and game theory) that enable understanding the universal source of past and present moral codes and our moral emotions have been adequate only for a couple of decades.

      For now, I will continue to work on how to present the science of moral codes and its instrumental utility in achieving common goals in ways that are less likely to be misunderstood. That is, I plan to avoid trying to convince professional moral philosophers that their concept of what morality ‘is’ is misconceived. At best, I will try to defend the science of morality as NOT necessarily making the naïve error of claiming an ‘ought’ derived from an ‘is’ without explanation of how that was done.

      Thanks again for your conversation.

  7. One way to resolve (or avoid) this issue is to just declare that philosophy IS a science, and change its name to reflect that realization. That's what Colin McGinn did last year ("Philosophy" becomes "Ontics") in two NYTimes essays. Chemists, biologists, and physicists don't have turf wars over where their boundaries lie, so why should onticists and other scientists?


    1. Philip,

      I read McGinn's essays, and frankly it sounds like an awful idea. Not only does it smell pretty strongly of being just a marketing move, but it gives up literally thousands of years of intellectual history for the "privilege" of being recognized as a "science." No thanks: philosophy is continuous with, but distinct from, science, and I think philosophers just need to stop being defensive about that.

  8. While I agree with the original post overall and in many of the details, I'm not sure that all of the specific criticisms of Sam Harris are fair.

    I think that Harris overstates his thesis. He claims that moral questions are objective scientific ones, but this is only true if we accept the arbitrary utilitarian premise that morality is about promoting the well-being of conscious beings.

    As arbitrary premises go, however, it's not a bad one, and if we grant this then the question of how best to promote well-being is in principle an empirical one and not a philosophical one. Some paths will be better than others. Science can in principle help us to choose.

    The special place Harris grants neuroscience has nothing to do with bio-enhancement. Neuroscience is only central because this is the only science that could allow us to objectively quantify well-being or define what counts as a conscious being. As such, it's primarily of use in defining the goal clearly. The other sciences may be more useful in determining how to attain it.

    Contrary to the original post, I think Harris's analogy to medicine is very apt, while the examples regarding the purported health benefits or risks of coffee rather miss the point.

    In the medical analogy, we accept as a given that we should promote human health, and use science to help us to achieve this goal. Harris's point regarding morality is precisely parallel - we accept as a given that we should promote the well-being of conscious creatures and that science is the means by which we can do this.

    Your example regarding coffee does not defeat this position because even if it is difficult to decide in practice whether coffee is healthy or not, the question remains an empirical one.

    And that's all that Harris is claiming regarding morality.

    To summarise my position:
    1) Harris is wrong to claim that there is an objective foundation of morality.
    2) Harris is right to claim that, given his particular definition of and foundation for morality, moral questions are empirical ones.
    3) Harris's definition of morality is reasonable but not unimpeachable.
    4) Harris's position that only science can answer moral questions is not wholly unreasonable, but it is made dubious by the practical difficulties in correctly identifying optimal policies scientifically.

    1. 1) Harris doesn't claim an "objective" foundation for morality, at least not in the sense of a Platonic or "intrinsic" value to certain actions, actions whose value is somehow independent (and therefore presumably "objective") of the well-being of conscious creatures, as he puts it, or even independent of consciousness at all.
      2) It's interesting that Harris briefly describes morality as a game of Chess (in The Moral Landscape); he says there are definite rules that two players agree to, and that generally speaking it's not a good thing to lose your Queen, but occasionally the *best* move IS to sacrifice your Queen. I tend to think morality is pretty much exactly like a game of Chess; that is, rules humans (or a subset of humans) have agreed to in order to be able to "play" together at all. And there seem to be two levels to this game: there is the level of the pieces that have well-defined and unalterable rules for movement, and various strategies each player can employ to obtain their desired results - all within certain limits of mutually agreed-upon play.
      3) Harris' definition is almost identical to Owen Flanagan's, but Harris tries to claim (much) more for science than Flanagan. (See Flanagan's "The Problem of the Soul", and "The Really Hard Problem")
      4) This is where Harris' analogy involving Health/Medicine is instructive: he says that science can't tell us *scientifically* why we should value health, but it certainly (so he claims) has a lot to say about it.

    2. 1) I agree, he does not maintain that actions have value independently of their significance to the well-being of conscious creatures. Rather, he maintains actions have intrinsic value *because* of their impact on well-being. He believes therefore that his proposed moral framework (judging value in terms of well-being) is an objective basis for morality. If I am misinterpreting him, this misinterpretation is pretty common.

      The subtitle of his book "How Science Can Determine Human Values" is probably largely to blame.

      In his debate with William Lane Craig, the question was "Is the foundation of morality natural or supernatural?". Craig maintained, correctly in my view, that without God there was no objective basis for morality. Harris would not concede this point. He should have agreed with Craig but made the point that we do not need an objective basis for morality - a subjective basis is enough to provide a consistent moral foundation as long as we agree to it.
      4) I agree that the medical analogy is good, however this point seems to be unrelated to my point 4). My point was that it's all very well showing that moral questions are empirical in principle, but that in practice it may not be feasible to determine the right answers using the scientific method. We might be better off with philosophy. That doesn't mean we shouldn't at least try to use science, of course. This is why I agree with the overall point of the original post. Both disciplines have a lot to add to the field, and should work together rather than trying to lay exclusive claim to it.

    3. Just to clarify: the description of actions as having an intrinsic value *because* of their relation to well-being is an oxymoron. If it's intrinsic, then it can't be because of anything else. So, in Harris's moral framework it's not intrinsic, it's extrinsic.

      Rather, what Harris claims as "objective" is the intrinsic value of the well-being of conscious creatures. If it is objectively true that we ought to value the well-being of conscious creatures, then no rational well-informed being could disagree.

      I don't think this is the case. I think it is perfectly rational to disagree with this, because I think that values are independent of rationality and objectivity.

  9. I am not a scientist or a philosopher, but I do enjoy this blog and the interesting discussions that follow in the comments. But it seems at times that the language stumbles over itself. What is a man of average intelligence to make of phrases like "the source of *instrumental* oughts" and "the source of the *imperative* oughts" [emphasis added]? Or, "given his particular definition of and foundation for morality, moral questions are empirical ones"? Now, these citations are not intended to single out particular individuals; I appreciate the points being made by all the commentators. They just happened to strike me as I read through the comments. So, is there really a point in qualifying "oughts" with "instrumental" and "imperative" when both seem implicit in "ought"? Or, when discussing moral questions, do we have a choice in deciding whether we are approaching the subject from "empirical" grounds or some "other" grounds? Is there really any meaning in discussing the route taken to arrive at a destination? If there is, I assume it is like a contest between two canines intended to determine which can urinate higher on a tree.

    1. Hi Thomas,

      For my part, I apologise where my language has been needlessly obtuse.

      I think the distinction between instrumental oughts and imperative oughts is a meaningful one.

      The former concerns what we should do to most effectively accomplish our goals: "I ought to pick up some glue next time I'm in the supermarket."

      The latter concerns what we should do for moral, legal, or ethical reasons. "I ought to donate to that charity".

      As for whether moral questions are empirical or not, well, if they are empirical then they can in principle be answered by the scientific method. There are right answers, and outcomes that can be measured and quantified. If they are not empirical then there may be no right answers and they might best be approached with philosophical discourse.

      Since the original post is about whether determining moral choices is the domain of science or philosophy, the question of whether moral questions are empirical ones seems to be relevant.

    2. Thank you, Disagreeable Me, for clarifying the uses made of these terms. Let me say, I do understand the distinction you are making; however, to my way of thinking it is possible to make a distinction where there is none. For example, is it not possible to talk about the efficacy of charitable donations in the same way it is possible to talk about obtaining glue needed for a project? When we use "ought" in common parlance, the notion of obligation and duty is implicit in its use. Thus, to say "imperative" ought is to create a redundancy. Similarly, the use of ought is hardly meaningful without also implying "capability," and so the qualifier "instrumental" is hardly helpful to me in this attempt to make a distinction where perhaps there is none.

      As to your statement that Harris has a "right to claim . . . that moral questions are empirical" given his definition and approach to the issue, I'm not sure that much is at work here. If you and I decide to go to "x" today, and you take route "y" while I take route "z", but we both arrive at "x", is it really critical to talk about the route without adding some other point about the destination and the need to arrive there in a certain manner? You might say your route was faster and support your statement with empirical data. I might say my route is more scenic and have difficulty supporting it with empirical data. But does that mean my route can no longer be taken?

      And, yes, your summary of the purpose of the original post is correct. I'm just saying that the language used in the comments sometimes created difficulties rather than clarifications.

    3. Thomas -

      For a good and accessible introduction to ethics broadly-conceived, I suggest Duke philosopher Owen Flanagan's book "The Problem of the Soul." The last chapter of the book is dedicated to ethics, and it touches on much if not all of the things we've been discussing here. His idea is to think of "ethics as human ecology". Here are two representative quotes:

      "And ecology is the science that studies how living systems relate to each other and to their environment, and so is the relevant analogy." (pg. 266)

      "If we take the analogy of ethical inquiry with ecology seriously, then ethics can be conceived as empirical. That is, ethics is inquiry into the conditions that reliably lead to human flourishing." (pg. 272)

  10. Thanks, Juno, for your suggestions. I read Flanagan's "The Really Hard Problem: Meaning in a Material World" a few months ago and struggled quite a bit with it. I would say this had as much to do with Flanagan's problems with written expression and the pieced-together feel of the chapters as with the content of his thinking. I have a literary background, and so I frequently approach writings on philosophic topics with two things in mind: (1) is a topic unclear to me because I lack a necessary understanding or knowledge of the topic, or (2) has it been expressed in a manner that hides a vacuity? Alas, sometimes both are in play. I don't have a problem with an empirical approach to ethics (values, norms, moral systems). But I do have reservations about the mindset that says this is the fittest, or the best, approach to an ethos used to produce human flourishing.

  11. Let me preface by saying that I'm a pretty big fan of Stoic philosophy and I'm also pretty familiar with Epicureanism.

    On Karma and Logos...

    I'm not sure the analogy can really be made between Logos and Karma. While every stoic I've read is a theist, I think the concept is a bit more flexible. The idea of a logos could simply mean the logical and deterministic order of the universe. Stoic philosophy doesn't necessarily require some sort of transcendental accountability. Some stoics have suggested that this may be the case, but they have also explicitly stated that the existence of gods isn't necessary for their philosophy (see the Meditations by Marcus Aurelius).

    On Stoicism as Advocating Inhuman Detachment

    Epicureanism and Buddhism certainly advocate removing desires as a means to stop suffering, but I'd argue that the Stoics understood that emotions are an important part of being human, and they were quick to criticize the Epicureans on just this point.

    Seneca the Stoic says this in one of his letters:

    "The difference here between the Epicurean and our own school is this: our wiseman feels his troubles but overcomes them, while their wiseman does not even feel them."

    I do think Epicureans and Buddhists are often battling human nature, but one of the core tenets of Stoicism is that one lives in accordance with Nature (the capital N is intentional). How could any reasonable person who has accepted this tenet try to live a Buddhist or Epicurean life? I don't think it makes sense.

    On the Passivity of Stoicism

    I don't have much to say on this other than that figures like Cato the Younger and Marcus Aurelius hardly strike me as passive people. Living in accordance with nature and reason doesn't necessarily mean accepting the status quo.

  12. >It seems more likely that scientists have simply become emboldened by and enamored of the success of their respective disciplines, and are thus riding that wave onto the shores of philosophical discourse, where they come crashing impudently down.

    I also really enjoyed Thoughts on Nihilism, part I and II.


Note: Only a member of this blog may post a comment.