About Rationally Speaking


Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.

Friday, October 11, 2013

Ethical questions science can’t answer

by Massimo Pigliucci

Yes, yes, we’ve covered this territory before. But you might have heard that Sam Harris has reopened the discussion by challenging his critics, luring them out of their hiding places with the offer of cold hard cash. You see, even though Sam has received plenty of devastating criticism in print and other venues for the thesis he presents in The Moral Landscape (roughly: there is no distinction between facts and values, hence science is the way to answer moral questions), he is — not surprisingly — unconvinced. Hence the somewhat gimmicky challenge. We’ll see how that one goes; I already have my entry ready (but the submission period doesn’t open until February 2nd).

Be that as it may, I’d like to engage my own thoughtful readers with a different type of challenge (sorry, no cash!), one from which I hope we can all learn something as the discussion unfolds. It seems to me pretty obvious (but I could be wrong) that there are plenty of ethical issues that simply cannot be settled by science, so I’m going to give a few examples below and ask all of you to: a) provide more and/or b) argue that I am mistaken, and that these questions really can be answered scientifically.

Before we proceed, however, let’s be clear on what the target actually is. I have summarized above what I take Harris’ position to be, and I have previously articulated what I think the proper contrast to his approach is: ethics is about reasoning (in what I would characterize as a philosophical manner) on problems that arise when we consider moral value judgments. This reasoning is informed by empirical evidence (broadly construed, including what can properly be considered science, but also everyday experience), but it is underdetermined by it.

This may be taken to be somewhat out of synch with Harris’ attempt, because he is notoriously equivocal about what he means by “science.” At one point (in an endnote of the book) he claims that science encompasses every activity that uses empirical facts, not just the stuff of biology, chemistry, physics, neuroscience, and so on. But if that is the case, then his claim comes perilously close to being empty: of course facts understood so broadly are going to be a crucial part of any ethical discussion, so what? Therefore, for the purposes of this discussion I will make what I take to be a commonsensical (except in Harris’ world) distinction between scientific facts (i.e., the results of systematic observations and experiments, usually embedded in a particular theoretical framework), and factual common knowledge (e.g., the no. 6 subway line in New York City stops at 77th St. and Lexington). If you don’t accept this distinction (even approximately) then you “win” the debate by default and there is nothing interesting to be said. (Actually, no, you still lose, because I can do one better: I arbitrarily redefine philosophizing as the activity of thinking, which means that we all do philosophy all the time, and that the answer to any question, not just moral, is therefore by definition philosophical. So there.)

I also need to make a comment about the other recent major supporter of the view that I’m criticizing: Michael Shermer. To be honest, I still don’t know exactly what Michael’s position is on this, even though I asked him explicitly on more than one occasion. At times he sounds pretty much like Harris (whom he openly admires). But if that’s the case, then one wonders why Shermer feels compelled to write another book on the relationship between science and morality, as he is reportedly doing. At other times Michael seems to be saying that both science and philosophy are needed for a comprehensive understanding of morality — both in terms of its nature and when it comes to applications of moral reasoning to actual problems. But if that is what he means, then no serious philosopher would disagree. So, again, why write a whole book to elucidate the obvious?

Anyway, let’s get down to business with a few examples of ethical questions that I think make my point (many others can be found in both recent books by Michael Sandel). (Entries are in no particular order, by the way.)

1. Should felons not regain their full rights as citizens after time served? Most US states (the exceptions are Maine and Vermont) prohibit convicted felons from voting while they are serving their sentence. This, it seems to me, is relatively easy to defend: being a convicted felon entails that you lose some (though certainly not all) of your rights, and one can make an argument that voting should fall into the category of suspended rights for incarcerated individuals, just like liberty itself. More controversial, however, is the idea of disenfranchising former convicts, which is in fact the case in nine states, with three of them (Florida, Kentucky, and Virginia) imposing a lifelong ban on voting. Is this right? How would science answer the question? One can’t just say, “well, let’s measure the consequences of allowing or not allowing the vote and decide empirically.” What consequences are we going to measure, and why? And why are consequences the ultimate arbiter here anyway? Consequentialism is famously inimical to the very concept of rights, so one would then first have to defend the adoption of a consequentialist approach which, needless to say, is a philosophical, not an empirical, matter.

2. Is it right to buy one’s place in a queue? This example comes straight from Sandel’s What Money Can’t Buy (hint hint), and there are several real-life examples that instantiate it. For instance, lobbying firms in Washington, DC are paying homeless people to stand in line on their behalf in order to gain otherwise limited access to Congressional hearings. Yes, on the one hand this does some good for the homeless (even if one were to set aside issues of dignity). But on the other hand the practice defeats the very purpose of a queue, that is, to let people who care enough get ahead of others by paying a personal cost in their own time. Even more importantly, as Sandel argues, the practice undermines the point of public hearings in Congress, which are vital for our democracy: instead of being truly open to the public, they become a near-monopoly of special interests with lots of money. Again, what sort of experiment could a neurobiologist, a chemist, or even a social scientist carry out in order to settle the question on exclusively empirical grounds?

3. Should discrimination (by sex, gender, religion, or ethnicity) be allowed? This may seem like an easy one, but even here it is hard to see what an empirical answer would look like. What if, for instance, it turns out that social and economic research shows that societies that provide disincentives to women in the workplace (in order to keep them at home raising children) fare better (economically, and perhaps in other respects) than societies that strive for equality? Such a scenario is not far-fetched at all, but I would hope that most of my readers would reject the very possibility out of hand. It wouldn’t be right (insert philosophical argument about rights, individuals and groups here) to sacrifice an entire class of people in order to improve societal performance in certain respects. And, of course, there is the issue of why (according to which more or less hidden values?) we picked those particular indicators of societal success rather than others.

4. Those darn trolley dilemmas! I doubt there is need for me to rehash the famous trolley scenarios that can be found in pretty much any book or article on ethics these days. But it is worth considering that those allegedly highly artificial thought experiments actually have a number of real-life analogues, for instance in the case of decisions to be made in hospital emergency rooms, or on the battlefield. Regardless, the point of the trolley thought experiments is that the empirical facts are clearly spelled out (and they don’t require anything as lofty as “scientific” knowledge), and yet we can still have reasonable discussions about what is the right thing to do. Even people who mindlessly choose to “pull the lever” or “throw the guy off the bridge,” following the simple calculus that saving five lives at the cost of one is the obviously right thing to do, quickly run into trouble when faced with reasoned objections. For instance, what about the analogous case of an emergency room doctor who has five patients, all about to die because of the failure of a (different) vital organ? Why shouldn’t the doctor pick a person at random from the streets, cut him up, and “donate” his five vital organs to the others? You lose one, you save five, just as with the trolleys. And yet, a real-life doctor who acted that way would go straight to jail and would surely be regarded as a psychopath.

5. How do we deal with collective responsibility? Another of Sandel’s examples (this one from Justice). He discusses several cases of apologies and reparations by entire groups to other groups, cases that are both complex and disturbingly common. Examples cited by Sandel include the Japanese non-apology for wartime atrocities that took place in the 1930s and 40s, including the coercion of women into sexual slavery for the benefit of its military officers; or the apologies of the Australian government to the indigenous people of that continent; or the reparations of the American government to former slaves or to Native Americans. The list goes on and on and on. What sort of scientific input would settle these matters? Yes, we need to know the facts on the ground as much as they are ascertainable, but beyond that the debate concerns the balance between collective and individual responsibility, made particularly difficult by the fact that many of these cases extend inter-generationally: the people who are apologizing or providing material reparations are not those who committed the crimes or injustices; nor are the beneficiaries of such apologies or material help the people who originally suffered the wrongs. These are delicate matters, and the answers are far from straightforward. But to boldly state that such answers require no philosophical reasoning seems just bizarre to me.

Of course, in all of the above cases “facts” do enter into the picture. After all, ethical reasoning is practical; it isn’t a matter of abstract mathematics or logic. We need to know the basic facts about felonies, voting, queues, Congressional hearings, sex / gender / religion / ethnicity, trolleys, war crimes, genocide, and slavery. From time to time we even need to know truly scientific facts in order to reason about ethics. My favorite example is the abortion debate: suppose we agree (after much, ahem, philosophical deliberation) that it is reasonable to allow abortion only up to the point at which fetuses begin to feel pain (perhaps with a number of explicitly stated exceptions, such as when the life of the mother is in danger). Then we need to turn to developmental biologists and neurobiologists in order to get the best estimate of where that line lies, which means that science does play a role in that sort of case.

Very well, gentle reader. It is now up to you: what other examples along the lines sketched above can you think of? Or, alternatively, can you argue that science (in the sense defined above) is all we need to make moral progress?

79 comments:

  1. What strikes me is the political or ideological dimension of most of these examples. How you would answer them will often depend (broadly speaking) on your ideology. So I would see the real debate as being, as it were, at one remove from the particular issue (in many cases) – that is, about ideological frameworks.

    Also, the mention of moral progress seems to me to imply a progressive perspective which not everybody shares and which needs, I think, to be argued for.

    In other words, only if we can agree on an ideological framework will discussion of (many of these) moral issues be productive.

    I utterly agree however that science will not of itself lead to answers to such questions.

    ReplyDelete
  2. Simply excellent, Massimo. And it's not just a question of science at its current level of knowledge not being able to answer these questions. Rather, it never will.

    ReplyDelete
  3. Your cases 1 and 2 focus on the democratic process, and individuals' measure of influence on government. My sense is that Harris might respond by arguing that our current form of democracy acts as a method for transferring individuals' perceived policy preferences into policy decisions, but that future science will likely lead to a more effective process. For instance, imagine a Future Nate Silver that could nearly perfectly predict individuals' policy preferences. Further, imagine a technology that allows individuals to nearly perfectly analyze how different policies affect their well-being. Given these technologies, we could presumably forego democracy in favor of a mechanism that scientifically predicts how a given policy impacts people's well-being and chooses the optimal policies. My sense is there's no rational basis for objection, provided the science worked.

    This technology would presumably make cases 1 and 2 moot.

    ReplyDelete
    Replies
    1. There is an Isaac Asimov story where a computer selects the most representative voter, who then votes for everyone. http://en.wikipedia.org/wiki/Franchise_%28short_story%29

      Delete
  4. Which Way should I go, left, right, or is it best to stay right here?

    Should I leave it up to science and its measurements of uncertainty to show me the theoretical Way? What if they send me to the black hole? What about religion and its inequities of the Gods and all of us, the rest? Do I need faith in God to ask him which Way is best? Or is it best to ask the Supreme Court, the blind Goddess Justice to show me the fairest Way? What about the President, our government, they might know. But I guess I'll have to wait till they are open for corporation or should I say business again.
    I know what to do, I'll ask them all and while I am waiting for the answer I think I'll just stay right here. =

    ReplyDelete
  5. One question that was implied in a Saturday Morning Breakfast Cereal comic that I liked is: "Preventing the murder of Batman's parents may seem ethical, but by doing so, you prevent the creation of Batman, and so, you may prevent him from saving hundreds of lives. So, if time travel is invented, is it ethical to prevent the murder of Batman's parents?"

    ReplyDelete
  6. Great post! I was wondering what you thought about Harris's idea that his basic assumption on "wellbeing" is just as objective as anything else in science because science also makes basic assumptions that are not necessarily defended. I ask this because I think that is one of the central ways he tries to get around explaining how science would determine value questions.

    ReplyDelete
  7. All of your examples are good ones to show that science (alone) couldn't answer them, but perhaps too much is proved, because all of them are also unanswerable by philosophy (or at least - unanswered up to this point, which is evidence supporting the view that they're unanswerable). If so, where are there answers agreed upon by the consensus of experts?

    I'm well aware of your view that philosophy doesn't make progress like science does; instead, it makes negative progress by ruling out incoherent ideas in logical space. Assuming that you hold that view, then it's not clearly being applied here as the implicit assumption in all of your examples is that science can't answer examples 1-5 (but philosophy can!). But if philosophy makes negative progress only, then it's not in the business of providing solid positive answers, in which case my request for the consensus solutions to these problems is of course problematic. But it seems that you forgot this point here and are presuming that philosophy can solve these problems. Well, if so, what are their solutions? Why after thousands of years do we have several normative ethical frameworks with no consensus or clear way to say which one's correct? Sure, philosophy can tell us that it's "right" to kill 1 to save 5 on a utilitarian calculus but "wrong" from, say, certain rights-based perspectives. But philosophy has failed to date to tell us which framework is correct.

    So while I agree with your thesis that science (alone) can't solve moral problems, it's downright disingenuous to imply that philosophy (alone or not) can. The more accurate title for this posting would be "Ethical Questions that Ethics Can't Answer."

    ReplyDelete
  8. I haven't read this by Harris, but his apparently broad definition of "science" may be similar to Dewey's definition of "inquiry" as a method employing scientific, empirical data and logical analysis in resolving problems of all sorts, including moral ones. Dewey believed moral inquiry and empirical inquiry were or should be similar. Moral judgments, being instrumental as guides to conduct in pursuit of ends, become subject to verification and testing just as empirical propositions are. I'm not sure if this is what Harris intends, though, nor do I think this indicates facts and values are necessarily the same, except to the extent they're subject to intelligent reflection and evaluation.

    ReplyDelete
  9. Mark,

    > What strikes me is the political or ideological dimension of most of these examples. How you would answer them will often depend (broadly speaking) on your ideology. <

    Indeed, but ideologies themselves are based on values, and they can be defended (or criticized) rationally.

    > the mention of moral progress seems to me to imply a progressive perspective which not everybody shares and which needs, I think, to be argued for. <

    It has been, convincingly, I think.

    contrarian,

    > imagine a Future Nate Silver that could nearly perfectly predict individuals' policy preferences. Further, imagine a technology that allows individuals to nearly perfectly analyze how different policies affect their well-being. Given these technologies, we could presumably forego democracy in favor of a mechanism that scientifically predicts how a given policy impacts people's well-being and chooses the optimal policies <

    I love how these kinds of Harris-like responses rely on future perfect technology rather than actual science. But even granting the thought experiment, no, that's not the answer. Morality is not about the aggregate of individual preferences, that is a crude utilitarian perspective that not even actual utilitarians would go for.

    Imad,

    > I was wondering what you thought about Harris's idea that his basic assumption on "wellbeing" is just as objective as anything else in science because science also makes basic assumptions that are not necessarily defended. <

    I'm not bothered by premises, but Harris doesn't even attempt to defend his assumption that morality is about maximizing wellbeing, however ill-defined. You will also notice that you can get through Harris' entire book without encountering a single example of *novel* moral insight arrived at by science...

    Philo,

    > perhaps too much is proved, because all of them are also unanswerable by philosophy <

    That's only if you subscribe to the idea that philosophy has to provide science-like answers. As I wrote, ethical reasoning is about analyzing, unpacking and evaluating our assumptions about morality. Much progress can (and has) been made that way. See Sandel's books for excellent examples of how this works in practice.

    > Why after thousands of years do we have several normative ethical frameworks with no consensus or clear way to say which one's correct? <

    Because logical space is not like the empirical one, it admits of multiple peaks with equally reasonable solutions to a problem. (Here, ironically, Harris' concept of a moral landscape is apt.) It is also important to note that we have a small number of such frameworks in competition, not a huge one. I call that progress and consensus.

    > it's downright disingenuous to imply that philosophy (alone or not) can <

    And where did I write that? I thought I was clear that ethical debate requires several participants, including the sciences (mostly the social ones, I'm not as impressed as Harris is by neuroscience in this respect).

    ciceronianus,

    > his apparently broad definition of "science" may be similar to Dewey's definition of "inquiry" as a method employing scientific <

    I honestly think you are paying far too high a compliment to Harris (or, alternatively, insulting Dewey too harshly).

    Thomas,

    > Perhaps, some fore-knowledge of this fact might have changed the selection of whom to toss, but there still remains the question whether you've acted morally <

    I have no idea what you are talking about.

    > all I see here is Harris's premonition that science will one day progress to omniscience and we no longer need to include a criterion of falsifiability. <

    Here we go again with promissory notes that will likely never be cashed. And of course your example still begins with an unargued for utilitarian position.

    ReplyDelete
    Replies
    1. I haven't argued that morality is based on an aggregate of preferences; rather, that voting is. When a population elects a candidate, it implies that the population, in the aggregate, prefers the candidate's policies. (This is somewhat crude given political coalitions, strategic voting, etc. but largely accurate)

      If voting preferences can be understood and predicted through science, then voting presumably becomes unnecessary. In that scenario, any ethical questions dealing with voting rights or political access become moot. This seems like an example of scientific progress leading to moral progress, no?

      Delete
    2. Yikes. I admire Dewey greatly. I guess I shouldn't have assumed Harris was trying to do something similar.

      Delete
  10. @ Massimo

    > This may be taken to be somewhat out of synch with Harris’ attempt, because he is notoriously equivocal about what he means by “science.” At one point (in an endnote of the book) he claims that science encompasses every activity that uses empirical facts, not just the stuff of biology, chemistry, physics, neuroscience, and so on. But if that is the case, then his claim comes perilously close to being empty: of course facts understood so broadly are going to be a crucial part of any ethical discussion, so what? <

    If Harris is employing the term "science" in a different sense than you are, then what's the point of this exercise? It seems to me that you will be making nothing more than a straw man argument.

    One of Merriam-Webster's definitions of "science" is "the state of knowing." And one of its definitions of "empiricism" is "a theory that all knowledge originates in experience." Employing these definitions, one could make a compelling argument that "mysticism" qualifies as a science because it is based on empiricism. And we both know that Harris has a soft spot for mysticism.

    ReplyDelete
  11. As you can imagine, I agree completely that science cannot answer any ethical questions whatsoever. But it is not because I agree with your definition of scientific facts: There are only facts about the world around us and not-facts (e.g. values). As an example, if all humans lived in the lowland tropics and had not yet invented refrigerators, then the fact that water turns solid below 0°C would surely be a scientific fact. But just as surely it is now a commonsensical everyday fact for a great number of people. The distinction is artificial.

    No, the important point that Harris does not appear to grok is still is-ought. Science can say: if we do A, B will happen. But there is no way for science to say: therefore we should not do A. For that one needs to add values, and everybody who believes differently is fooling themselves.

    The big issue I think is that many people consider certain values to be self-evident because everybody (who counts?) appears to hold them. So they are quite comfortable with a sentence such as: science shows that global warming happens, that we cause it, and that it will be increasingly destructive to us and our descendants, therefore we should change our ways. But it is simply not true that all who understand the facts agree with the last step. Let's be honest, a good number of people value their current level of comfort more highly than that of their grandchildren. (Most do not want to own up to it, perhaps not even to themselves, and thus adopt a stance of denialism, but some are quite open about their indifference at least in private conversation.)

    As depressing as it is, it simply does not follow from science that we need to ensure that the planet remains a pleasant place to live on in two hundred years. (And adding non-scientific everyday facts to the picture would not change anything whatsoever.) To arrive at that conclusion, we need to add the value of responsible stewardship.

    ReplyDelete
  12. From Wikipedia:
    Ethics, also known as moral philosophy, is a branch of philosophy that involves systematizing, defending and recommending concepts of right and wrong conduct.

    First we would need to define good and bad before determining right and wrong.

    Harris argues for an analogy to health. Can we have a general agreement on good vs bad health? I would say yes, but only a general agreement. There would be many "fuzzy" areas where differing concepts of health could be equally rationally supported. If this is the case then there can be no one set of "rules" to maximize one's health. The methods of science would be the best tool to determine what choices to make to maximize one's health despite there being different concepts of what is "health".

    It is likely that humans have different genetic predispositions toward the concepts of good and bad, though I would argue there are many more similarities than differences. We are a social/communal species and consequently our concept of good and bad has two primary considerations: what is good/bad for me and what is good/bad in a larger communal environment. So the question of good and bad is complex and has conflicting elements right at the core.

    Science can investigate the outcomes of choices. It is the BEST tool we have to determine not only the possibility-space of all our actions but the consequences of specific actions. We are just starting to have the technological tools to explore this domain. In 50, 100, 200 or more years we will in all likelihood have the ability to simulate complex scenarios and their permutations to such a degree that what seem like intractable ethical problems today are clearly modeled to a point where they are considered solved.

    Science continues to be underestimated and the complexity of human problems overestimated.

    ReplyDelete
  13. Massimo,

    Thanks for the clear description of your position. I agree with your objections to Harris’ arguments. But I want to make a different argument that science can answer at least some ethical questions.

    What if all basis for moral codes other than what science arguably shows moral behaviors descriptively ‘are’ will likely be culturally useless (societies will not be motivated to enforce them) due to their inherent lack of long term motivating force for the individual? If we consider only moral codes that are likely to be culturally useful, then the is/ought problem in using science to answer ethical questions may become largely irrelevant in many cases, including your five examples.

    As described in Nowak’s Evolution, Games, and Gods, Gintis and Bowles’ A Cooperative Species, and many other sources, moral behaviors are costly cooperation strategies (moral ‘means’) selected for by the benefits of cooperation in groups (science’s underdetermined moral ‘ends’).

    Any other basis for moral codes will be, to one degree or another, dissonant with 1) our biologically based values and moral intuitions, 2) the biological connection between moral behavior and durable well-being, and 3) costly cooperation strategy solutions to the cross species universal dilemma of how to obtain the benefits of cooperation without being exploited (the cooperation/exploitation dilemma that moral behavior solves). Those dissonances will reduce motivation and eliminate rational (self-interested) reasons to act morally.

    Philosophy would have had a freer hand in arguing what moral codes ought to be if moral behavior was merely a culturally dependent product of social living and rational thought. But moral behavior is not just a product of living in societies; human societies and human moral psychology, including moral values, are products of moral behaviors which, as costly cooperation strategies, existed (in the sense simple mathematics did) prior to the evolution of people. Any proposed moral code that is inconsistent with human moral psychology and does not solve the universal cooperation/exploitation dilemma lacks motivation and reasons to act morally, and will therefore likely be culturally useless.

    Restricting our attention to moral codes that societies will be motivated to enforce, science can answer your ethical questions as follows. Assuming all individuals are considered in the group which deserves moral concern (a subject science is silent on):

    1) and 2) are immoral because they decrease mutual regard and trust between sub-groups and therefore the benefits of cooperation between subgroups.
    3) Discrimination by sex, gender, religion, or ethnicity is immoral to the extent it, again, decreases mutual regard and trust between sub-groups and therefore the benefits of cooperation.
    4) Trolley dilemmas largely vanish when one recognizes that, just as science shows us, what is moral is a function of effects on future cooperation benefits, not body count. Pushing the large man reduces future trust, and thus cooperation, between people. Throwing the switch does not.
    5) Collective responsibility is a moral obligation when not acknowledging that responsibility would reduce future benefits of cooperation between the sub-groups.

    So what tasks would remain for moral philosophers who conclude that moral codes normatively ought to be based on what science tells us morality objectively is? Moral philosophy is the only source of answers to critical questions including the following:

    How ought we balance obligations to ourselves versus the many different groups we belong to such as family, friends, nations, all humans? If animals cannot meaningfully ‘cooperate’ with us, do we have any moral obligations to them? What ‘benefits’ of cooperation are immoral? How can coherent moral codes be defined regarding issues such as abortion, suicide, and what sorts of competition are immoral based on what science tells us moral behavior ‘is’?

    These are difficult issues. The science of morality and the ethical questions you posed are comparatively simple.

    ReplyDelete
    Replies
    1. Hi Mark,

      You seem to accept that science 'underdetermines' the so-called 'benefits of cooperation', yet you include them in all five of your responses to Massimo's cases.

      Do you in fact consider that these 'benefits' can be empirically determined? If so, how?

      Delete
  14. > >What strikes me is the political or ideological dimension of most of these examples. How you would answer them will often depend (broadly speaking) on your ideology. <

    Indeed, but ideologies themselves are based on values, and they can be defended (or criticized) rationally.<

    Yes, but the ideological dimension certainly does add another layer of complexity.

    In ordinary life, quite rightly I think, we interpret what people say in the light of their position, ideological or otherwise. ("Well, he would say that, wouldn't he?")

    And this applies as much to philosophers as to anyone else. So, if I know a philosopher is a Marxist or an evangelical Christian or whatever, I will not be inclined to give too much attention to (or even to bother reading, to be frank) those arguments of theirs which are patently designed to justify or defend their ideological commitments. Unless, of course, I happen to be attracted to Marxism or evangelical Christianity or whatever. (But in that case the real interest would be in assessing the broader ideological or religious position rather than in the specific moral issue under debate.)

    So I am suggesting that we always need to keep that larger context in mind. And, though I do not deny that the values on which ideological structures are based can be rationally scrutinized, I am not all that sanguine about the power of rational argument to shift in determinate ways the deep and fundamental value-related instincts which drive our ideological commitments.

    Another (related) thought I had as I read the post concerns the implications of the fact (as I see it, and you acknowledge also) that there simply is no one true or correct answer to many of the issues raised – like the abortion question, for example.

    Sure, we need to agree on laws, but the framing of laws is inevitably a messy political process, shot through with compromise and inconsistency. Though jurists and legal philosophers sometimes play significant roles, it is not all that often, I would suspect.

    Don't get me wrong. I am not saying there is no point in discussing contentious value-related issues. But, reflecting my greater interest in ideology and meta-ethics than applied ethics, I would do so (as you would too, no doubt, even in applied contexts) with a view to deepening understanding rather than changing minds.

    Minds will change, but in their own time – and in unpredictable ways.

    ReplyDelete
  15. "If you don’t accept this distinction (even approximately) then you “win” the debate by default and there is nothing interesting to be said."

    I disagree. On a moral anti-realist view, there are no objective moral facts to be discovered, so "science" (in any sense of that word) cannot discover such facts. In other words we can argue that Harris is wrong on the metaethics, without even mentioning science or making a science/non-science distinction. That's the approach I've taken in the entry I've written for his challenge. I agree that Harris's use of the word "science" is problematic (and sometimes equivocal), but that's best ignored if we want to discuss the metaethics. Bringing the word "science" into the discussion just gets in the way of discussing the metaethics.

    ReplyDelete
  16. Hi Massimo,


    Interesting post as always. However on this subject I am wondering why you don't deal with Richard Carrier's argument for science as a source for determining moral truths. Carrier is equally critical of both Harris and Shermer and does a pretty good job of pointing out the flaws in both their positions, while making his own case, and complaining that Harris and Shermer are poor representatives of the overall position.

    http://freethoughtblogs.com/carrier/archives/4498

    I am especially intrigued by the pivotal point he raises, following Philippa Foot, that all moral statements boil down to (or can be converted to) hypothetical imperatives (even supposedly categorical ones).

    I'm not here arguing in favor of Carrier's case, but I do think he at least does a better job of it, and should be dealt with seriously. I think you would agree that when attempting to refute a certain position, it is best to argue against the strongest case, rather than the weakest ones, if indeed Carrier's argument is thus.

    ReplyDelete
  17. Alastair,

    > If Harris is employing the term "science" in a different sense than you are, then what's the point of this exercise? It seems to me that you will be making nothing more than a straw man argument. <

    No, because he is employing the term in a way that doesn’t accord to scientists’, philosophers’ or even laypersons’ usage, so he is simply committing a fallacy of equivocation, and he needs to be called on it.

    Alex,

    > The distinction is artificial <

    Well, we’ve been here before. No, it’s not artificial, but it is a matter of gradation with plenty of grey areas. If the distinction were truly artificial then “science” would be synonymous with every bit of empirical knowledge, no matter how trivial, unsystematic, or detached from any theoretical framework. Which means that the word “science” would be empty of any specific meaning. Why go that way, other than to appease some scientists’ ego?

    > The big issue I think is that many people consider certain values to be self-evident because everybody (who counts?) appears to hold them. <

    Agreed.

    > As depressing as it is, it simply does not follow from science that we need to ensure the planet to be a pleasant place to live on in two hundred years. <

    Again, agreed.

    Thomas,

    > you are right. You have no idea what I'm talking about <

    My comment was specific to a couple of your sentences that didn’t make sense to me (and still don’t), but you prefer instead to give me a lecture on the limits of demarcationism. Fine.

    > Science and the humanities (even philosophers) can aid in the refinement of these values and norms by broadening perspectives and context. But they cannot alone claim warrant for moral choice without fore-knowledge of outcomes. <

    Agreed, I don’t think I ever said anything that would be in contradiction with the above.

    semi,

    > Harris argues for an analogy to health. <

    Actually, he stole that analogy straight from the ancient Greeks, it’s used in Plato and Aristotle.

    > If this is the case then there can be no one set of "rules" to maximize one's health. The methods of science would be the best tool to determine what choices to make to maximize one's health despite there being different concepts of what is "health". <

    The second sentence follows not at all from the first one, as I’ve argued time and again.

    > Science can investigate the outcomes of choices. It is the BEST tool we have to determine not only the possibility-space of all our actions but the consequences of specific actions. <

    No disagreement there, but morality is not all about consequences (unless you are a consequentialist). And what happens when different values enter into conflict? What sort of empirical data would determine which values have to have precedence?

    > Science continues to be underestimated and the complexity of human problems overestimated. <

    Funny, I’d say precisely the opposite.

    contrarian,

    > I haven't argued that morality is based on an aggregate of preferences; rather, that voting is. <

    But in the context of this discussion you have reduced one to the other.

    > If voting preferences can be understood and predicted through science, then voting presumably becomes unnecessary ... any ethical questions dealing with voting rights or political access become moot. <

    So if the majority of people decide that slavery is ok, it makes it moral? Come again?

    ReplyDelete
    Replies
    1. I'm genuinely curious where you see me adopting any moral framework. My claim is only that if a majority of people prefers legalizing slavery, then the result of an election would be electing politicians who write laws legalizing slavery (which are then almost certainly struck down by the Supreme Court). I'm just describing how democracy works.

      The function of voting, for better or worse, is to aggregate voter preferences, and if you see voting as a useful function that presents moral questions, and assume that science will identify a better mechanism for serving this function than representative democracy, then you present a scenario where scientific progress resolves moral questions.

      Delete
    2. @ Massimo

      > No, because he is employing the term in a way that doesn’t accord to scientists’, philosophers’ or even laypersons’ usage, so he is simply committing a fallacy of equivocation, and he needs to be called on it. <

      Do you think that argument will sway him to part with $20,000 and publicly recant?

      Delete
  18. Mark S.,

    > What if all basis for moral codes other than what science arguably shows moral behaviors descriptively ‘are’ will likely be culturally useless (societies will not be motivated to enforce them) due to their inherent lack of long term motivating force for the individual? <

    I’m not sure how that would work, but the first objection is that I wouldn’t talk of “moral codes.” To me ethics is a type of applied logic: we reason from assumptions about a range of values to the consequences of those values in specific cases pertinent to our lives.

    > If we consider only moral codes that are likely to be culturally useful <

    Useful how, by what criteria? You just can’t get away from philosophically loaded questions, as you can see.

    > Philosophy would have had a freer hand in arguing what moral codes ought to be if moral behavior was merely a culturally dependent product of social living and rational thought. <

    I disagree: philosophy — like reasoning in general — allows us to use our bio-cultural evolution as a starting point and to decide in which directions we’d rather go because they make the most sense, genetic propensities (sometimes) be damned. For instance, we have natural tendencies to xenophobia, but we can reason our way through them, control them, and hopefully eventually eliminate them.

    To pick on one of your examples:

    > Discrimination by sex, gender, religion, or ethnicity is immoral to the extent it, again, decreases mutual regard and trust between sub-groups and therefore the benefits of cooperation. <

    What if “science” shows that some degrees of discrimination make for a “better” (you pick the criterion, but let’s say economic efficiency) society? I’d argue that would still be wrong.

    > Trolley dilemmas largely vanish when one recognizes that, just as science shows us, what is moral is a function of effects on future cooperation benefits, not body count. <

    Science shows us that? How? What sort of empirical evidence shows that future cooperation benefits are the ultimate moral criterion?

    > what tasks would remain for moral philosophers who conclude that moral codes normatively ought to be based on what science tells us morality objectively is? <

    “Morality objectively is”? I don’t believe in empirically-determined objective morality. (I hope you know from other things I’ve written that that doesn’t make me a moral relativist...)

    > These are difficult issues. The science of morality and the ethical questions you posed are comparatively simple. <

    Indeed, but as your own series of examples clearly shows, things get even more difficult for a science-only “solution” when we get to more serious ethical problems.

    ReplyDelete
  19. Mark E.,

    > if I know a philosopher is a Marxist or an evangelical Christian or whatever, I will not be inclined to give too much attention to (or even to bother reading, to be frank) those arguments of theirs which are patently designed to justify or defend their ideological commitments. <

    Yup, I’m on board, up to a point (even Marxists and Christians may have a point or two, once in a while... ;-)

    > though I do not deny that the values on which ideological structures are based can be rationally scrutinized, I am not all that sanguine about the power of rational argument to shift in determinate ways the deep and fundamental value-related instincts which drive our ideological commitments. <

    Indeed, but that pertains to the distinction between rationality and persuasion. One can have a perfectly tight rational argument and still fail to persuade notoriously emotion-ridden human beings...

    > I would do so (as you would too, no doubt, even in applied contexts) with a view to deepening understanding rather than changing minds. Minds will change, but in their own time – and in unpredictable ways. <

    Indeed, my interest is both in understanding and in rationally-based persuasion.

    Richard,

    > On a moral anti-realist view, there are no objective moral facts to be discovered, so "science" (in any sense of that word) cannot discover such facts. <

    I would agree, but that bit was simply to expose how ridiculous Harris’ redefinition of science is. I was trying a reductio approach...

    > I am wondering why you don't deal with Richard Carrier's argument for science as a source for determining moral truths. <

    I find Carrier to be far too longwinded, not to mention a particularly dislikable character (which, of course, says nothing about his arguments). See this: http://rationallyspeaking.blogspot.com/2012/08/on-with-comment-about-richard-carriers.html. Yes, one of these days I will have to devote more time to his take, but I need a triple martini before considering it seriously.

    > I am especially intrigued by the pivotal point he raises, following Philippa Foot, that all moral statements boil down to (or can be converted to) hypothetical imperatives (even supposedly categorical ones) <

    Even if that made sense (I don’t think it actually does), I fail to see what science has to do with it. But as I said, perhaps I’ll finally devote an entry entirely to Carrier.

    > I think you would agree that when attempting to refute a certain position, it is best to argue against the strongest case, rather than the weakest ones, if indeed Carrier's argument is thus. <

    Actually, my priority here is to attack the most influential figures, and Harris (and Shermer) has done a lot more damage than Carrier, I think.

    ReplyDelete
  20. Hi Massimo,

    This isn't a "moral" problem, but I'll ask anyway: can science solve Newcomb's paradox?

    You would expect a question of rational choice theory to be *less* controversial than most moral problems. In many ethical problems, the disagreements are due to differences in values and ends. In Newcomb's problem, we can all stipulate that the end in question is to get as much money as possible, yet there's *still* disagreement over what the "rational" thing to do is. While decision theorists use mathematical formalisms to explore this problem, I don't think anybody is under the impression that there is going to be a "purely" mathematical proof for one answer over another; arguments typically use mathematical results supplemented with substantive philosophical assumptions.
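
    To make the contrast concrete, here is a minimal, purely illustrative sketch (in Python, using the standard Newcomb payoffs of $1,000 in the transparent box and $1,000,000 in the opaque one; the 90% predictor accuracy is an assumed figure) of the two competing lines of reasoning:

```python
# Illustrative only: standard Newcomb payoffs, assumed predictor accuracy.

def evidential_expected_value(accuracy: float) -> dict:
    """Expected payoff of each choice, reading the predictor's accuracy as the
    probability that the prediction matches what the agent actually does."""
    one_box = accuracy * 1_000_000
    two_box = accuracy * 1_000 + (1 - accuracy) * (1_000_000 + 1_000)
    return {"one-box": one_box, "two-box": two_box}

vals = evidential_expected_value(0.90)
print(round(vals["one-box"]), round(vals["two-box"]))  # 900000 vs 101000: one-boxing "wins"

# Dominance reasoning: whatever is already in the opaque box, taking both
# boxes yields exactly $1,000 more, so two-boxing "wins".
for opaque_contents in (0, 1_000_000):
    assert opaque_contents + 1_000 > opaque_contents
```

    The arithmetic is trivial; what it cannot settle is which decision rule (maximizing evidential expected value, or dominance reasoning) is the rational one to apply, and that is a philosophical question, not an empirical one.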

    I imagine that Harris would dismiss Newcomb's problem as too unrealistic or fanciful, but I'm interested in whether you think this has any relevance to the debate.

    ReplyDelete
  21. Massimo: Well, we’ve been here before. No, it’s not artificial, but it is a matter of gradation with plenty of grey areas. If the distinction were truly artificial then “science” would be synonymous with every bit of empirical knowledge, no matter how trivial, unsystematic, or detached from any theoretical framework.

    Not quite. Science would deal with every bit of empirical knowledge, no matter how trivial. And that is what I argue.

    Which means that the word “science” would be empty of any specific meaning.

    Not at all, because obviously what facts to assign to the box labelled "science" is only part of what defines that box. The way these facts are used and how conclusions are drawn are other parts of the definition, and with all that included the word will have plenty of meaning.

    Of course it is not science if a bunch of cranks carefully shields their crankery from refutation even if the crankery incorporates some misinterpreted facts.

    Of course it is not science if I personally figure out whether Craspedia species are apomictic. I have to share this information in a way that allows other humans to test it, reproduce it, and build on it, because science is a community effort. But then it would be science no matter how trivial the fact.

    Of course it is not science if I personally figure out how to get from here to Woden by bus. Humanity trivially had that information already. But if those buses had been built by aliens and I found that information out and circulated it so that others can confirm it and make use of it, whysoever would it not be science, other than to appease somebody who yearns to see scientists taken down a notch?

    ReplyDelete
  22. I find Carrier to be far too longwinded, not to mention a particularly dislikable character (which, of course, says nothing about his arguments).

    That made me smile. One could mention that every second sentence of his appears to be something like "of course I conclusively demonstrated in my book XYZ that this is wrong". Constantly patting himself on the back...

    ReplyDelete
    Replies
    1. He also seems to have the rather naive idea that significant philosophical questions (like metaethics) can be settled by formal deductive arguments.

      In a couple of interviews with philosophers he's suggested the setting up of a register of philosophical arguments that have been shown to be fallacious. I wonder who would decide which arguments get added to the register...

      Delete
  23. Massimo, thanks for your reply.

    >… the first objection is that I wouldn’t talk of “moral codes.” To me ethics is a type of applied logic: we reason from assumptions about a range of values to the consequences of those values in specific cases pertinent to our lives. <

    But what moral codes normatively ought to be is a subset of ethics.

    Also, the science of morality encompasses, at minimum, the origins and functions of all biologically based values (such as valuing well-being) that I assume are a large part of the given ‘data’ on which all your ethical reasoning proceeds. Would it not be potentially useful to know the ultimate sources of your givens?


    > If we consider only moral codes that are likely to be culturally useful <

    >Useful how, by what criteria? You just can’t get away from philosophically loaded questions, as you can see. <

    I did provide criteria for culturally useless: “societies will not be motivated to enforce them due to their inherent lack of long term motivating force for the individual”

    As Bernard Gert defines it: morality normatively refers to a code of conduct that, given specified conditions, would be put forward by all rational persons. Rational persons would not put forward a moral code that societies will not be motivated to enforce due to their inherent lack of long term motivating force for the individual. So, by Gert’s definition, a normative moral code cannot be culturally useless.

    >I disagree, philosophy — like reasoning in general — allows to use our bio-cultural evolution as a starting point and to decide in which directions we’d rather go because they make the most sense, genetic propensities (sometimes) be damned. <

    I am not so much interested in damning our nasty genetic inclinations as in understanding them and thereby providing a rational basis for debunking their moral authority.

    My favorite examples are norms condemning homosexuality. They have particularly shameful origins (in exploiting a minority as an imaginary threat) which provide a strong debunking argument.

    > Trolley dilemmas largely vanish when one recognizes that, just as science shows us, what is moral is a function of effects on future cooperation benefits, not body count. <

    >Science shows us that? How? What sort of empirical evidence shows that future cooperation benefits are the ultimate moral criterion? <

    The empirical evidence is 1) what in-group moral behaviors (and moral values) descriptively ‘are’ as a matter of science, 2) my argument (only briefly outlined here) that any other basis for cultural moral codes will be notably less “culturally useful”, and 3) Gert’s definition of normatively moral which implies whatever is normatively moral will be culturally useful.


    > “Morality objectively is”? I don’t believe in empirically-determined objective morality. <

    I intended “what science tells us morality objectively (and descriptively) is” which is an empirical question. Or are you arguing that science cannot tell us what morality descriptively is in terms of its origins and functions?

    > …your own series of examples clearly shows, things get even more difficult for a science-only “solution” when we get to more serious ethical problems. <

    I am not proposing a science-only solution in the strict sense as my list of philosophy-only questions shows.

    Science can answer many in-group ethical questions based on 1) what morality in in-groups descriptively ‘is’, 2) recognizing that what is normatively moral in an in-group is, by Gert’s definition, also culturally useful in in-groups. Where science goes silent and moral philosophy must take over is when we start to discuss morality between groups.

    We agree that normative moral codes regarding both in-group and out-group interactions require the skill set and methods of philosophy.

    Where we disagree, and are unlikely to quickly reach agreement, is whether the ethics of in-group morality, as with your five examples, can often be better addressed by descriptive science than by existing mainstream philosophical theory.

    ReplyDelete
  24. Would I use a philosopher to explain WHAT the pasteurization process is and WHY it is beneficial? No, I will use a scientist properly vetted in the appropriate field. Science has built-in checks and balances, which is why it can distinguish itself from stupid pseudoscience. That's a moral benefit right there. It's in Randi's Million Dollar challenge, for example. Hey, let's not the world, we have science, we can investigate and confirm the likelihood that Joe Schmoe is a big faker with the tools we have. These are age old examples, I know, but I grew up questioning everything and respecting the scientific reality and it felt good.

    ReplyDelete
  25. C,

    > can science solve Newcomb's paradox? <

    No, if there is a solution to the paradox it comes either from a mathematical / optimization analysis or from a conceptual / philosophical untangling of the (alleged) paradox. Or, as you say, a combination of both approaches. I don’t see how empirical evidence would enter into it, and without that, there is no science to be done.

    Alex,

    > Science would deal with every bit of empirical knowledge, no matter how trivial. And that is what I argue. <

    I know, but your position leads to the — for me untenable — conclusion that science is synonymous with facts, which in turn eviscerates the word “science” of any interesting meaning.

    > obviously what facts to assign to the box labelled "science" is only part of what defines that box. The way these facts are used and how conclusions are drawn are other parts of the definition, and with all that included the word will have plenty of meaning. <

    Now you are coming dangerously close to an equivocation fallacy. It ought to be *obvious* that both Harris and I do include methods as well as facts in our (very diverging) definitions of science. Indeed, I would argue that there is no such thing as a “scientific fact” at all, if that statement means an empirical finding entirely divorced from any theory and/or systematic method of collecting and analyzing data.

    > Of course it is not science if I personally figure out how to get from here to Woden by bus. Humanity trivially had that information already. <

    Thank you, we seem to agree after all... ;-)

    Mark S.,

    > the science of morality encompasses, at minimum, the origins and functions of all biologically based values <

    I pointed out in my replies to both Harris and Shermer that of course there is a science of morality, for instance a science of how moral feelings came to be in the first place (evolution), or how the brain instantiates them (neurobiology). But that is an entirely different matter from saying that ethics *is* a science.

    > societies will not be motivated to enforce them due to their inherent lack of long term motivating force for the individual <

    But this is, at best, a descriptive approach: (likely) society X will do Y because of Z. But is Y the right thing to do? According to what criteria?

    > Rational persons would not put forward a moral code that societies will not be motivated to enforce due to their inherent lack of long term motivating force for the individual. <

    I suspect that by those terms the push for ending slavery in the American South was an irrational pursuit.

    > I intended “what science tells us morality objectively (and descriptively) is” which is an empirical question. <

    But I reject the whole premise that science can tell us any such thing. Indeed, I’m not even sure what it means to look for moral objectivity. Morality is a human construction, as far as I can tell (which, I hope I don’t need to point out, doesn’t make me a moral relativist).

    ReplyDelete
    Replies
    1. Massimo, thanks again for your reply.

      I can readily respond to all your points, but perhaps the keystone to understanding how science can answer some questions about morality is pointed out in your response:

      >But I reject the whole premise that science can tell us any such thing (what “morality objectively and descriptively ‘is’ which is an empirical question”). Indeed, I’m not even sure what it means to look for moral objectivity. Morality is a human construction, as far as I can tell … .<

      Consider the following:

      In our physical reality, synergistic benefits of cooperation are commonly available. But attempts at cooperation expose agents to exploitation. Further, exploitation is almost always the winning strategy in the short term and sometimes the winning strategy in the long term. Unfortunately, exploitation destroys future benefits of cooperation. This is the universal cooperation/exploitation dilemma that is as objectively real as the mathematics that define it.
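
      To see the arithmetic behind that claim, here is a minimal sketch, a toy repeated Prisoner's Dilemma with conventional made-up payoffs rather than any formal model from this discussion: exploitation beats cooperation in a single encounter, but once the exploited partner withdraws cooperation, the exploiter's long-run payoff collapses.

          # Toy Prisoner's Dilemma: C = cooperate, D = defect/exploit. Payoffs are illustrative.
          PAYOFF = {("C", "C"): 3, ("C", "D"): 0,   # (my move, their move) -> my payoff
                    ("D", "C"): 5, ("D", "D"): 1}

          def total(mine, theirs):
              return sum(PAYOFF[(m, t)] for m, t in zip(mine, theirs))

          print(total("D", "C"), total("C", "C"))   # one round: exploiting pays 5, cooperating pays 3
          print(total("C" * 10, "C" * 10))          # ten rounds of sustained cooperation: 30
          print(total("D" * 10, "C" + "D" * 9))     # exploit once, partner retaliates forever: 14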

      This dilemma will be faced by all beings in our physical reality from the beginning of time to the end of time. Any species that does not solve this dilemma will not be able to obtain the benefits of sustainable cooperation, cannot become a cooperative social species, and will be unlikely to evolve intelligence.

      In people, biological adaptations such as empathy, loyalty, and the emotional experience of well-being in the cooperative company of friends and family motivate initiating cooperation. Biological adaptations such as indignation, shame, and guilt motivate punishment of people who exploited cooperators. Together, these ‘moral’ emotions motivate costly cooperation strategies (as Herbert Gintis and others define them). Our ancestors who did not have these adaptations tended to die out.

      The emergence of cultural norms provided an additional substrate on which to encode cooperation strategies. Hence, all past and present moral codes were selected for by the benefits of cooperation in groups that the norms produced. Largely to resolve differences in these norms, ethics was born.

      If one “looks for moral objectivity”, what one finds are biological and cultural solutions to the universal cooperation/exploitation dilemma.

      If we ever communicate with intelligent space aliens from anywhere in our universe, I expect we could ask them about an objective basis for morality and they would answer: “Why, morality refers to solutions to the universal cooperation/exploitation dilemma. Everyone knows that!”

      The above objective basis for morality may sound like a new idea, but it is old. Plato’s character Protagoras describes it nicely while retelling the Greek myth of how people got their moral sense and why they got it: to increase the benefits of cooperation in groups.

      Delete
  26. contrarian,

    > I'm genuinely curious where you see me adopting any moral framework. My claim is only that if a majority of people prefers legalizing slavery, then the result of an election would be electing politicians who write laws legalizing slavery ... I'm just describing how democracy works. <

    If that’s all you are doing, then it is irrelevant to our discussion. If you go one step further and claim that that is the way it should be then you are adopting a (somewhat crude) utilitarian approach, and hence a particular moral framework.

    Alastair,

    > Do you think that argument will sway him to part with $20,000 and publicly recant? <

    You’re kidding, right? I don’t expect to win either the $20,000 or the $2,000 prize. Luckily, I ain’t in it for the money.

    sketchanything,

    > Would I use a philosopher to explain WHAT the pasteurization process is and WHY it is beneficial? No, I will use a scientist properly vetted in the appropriate field. <

    Uhm, yes. And this has anything to do with the subject of the post because...?

    Richard,

    > He also seems to have the rather naive idea that significant philosophical questions (like metaethics) can be settled by formal deductive arguments. <

    Yes, that’s one of several reasons I don’t take Carrier very seriously. But fine, I’ve downloaded his article and will read it on my way to the debate with Krauss and Dennett next week (in Ghent, Belgium — stay tuned for a likely blog post and eventually a link to the video...).

    ReplyDelete
    Replies
    1. My sense is that the purpose of the discussion is to evaluate examples of specific moral questions (not moral frameworks) that effectively challenge Sam Harris's argument that science alone can resolve all moral questions. I suggested that since two of the examples you present rely on the existence of representative democracy, Harris or his allies might respond that scientific discovery will lead to new forms of government, thus making these two questions pretty irrelevant, in the same sense that "When should I disobey the king" isn't a meaningful moral question (in most countries).

      If this was not the type of response you were looking for, then I apologize for wasting time and electrons.

      Delete
  27. An ethical question for science:

    If the measurements of science have been proven by science to be uncertain or only probable at best, then as for ethics or anything else, how certain or good is science at judging anything?

    ReplyDelete
  28. Though I've been critical in the past of Massimo's "demarcationism" (as I call it), I must say I tend to agree with him over Alex here.

    Fuzzy distinctions are a normal and useful feature of natural language. The distinction between "child" and "adult" is fuzzy, but it would be counter-productive to try to eliminate it from our language. We just have to remain aware of the fuzziness, and not insist that there's an absolute fact of the matter for every individual as to whether s/he is a child or an adult. We may define a precise age of adulthood for legal or administrative purposes, e.g. at age 18 for voting, but that creates a special sense of the word, which differs from the ordinary fuzzy sense. We also divide the visible spectrum into useful ranges (blue, green, etc) even though the boundaries of those ranges are fuzzy, and different linguistic communities may divide the spectrum in significantly different places.

    Similarly, it's useful to make a distinction between science and broader empirical inference, however fuzzy that distinction may be. To altogether deny such a distinction would mean admitting historians into the National Academy of Sciences, shelving history books under "Science" in libraries, and so on.

    I fully appreciate (along with Harris and Coyne) the continuity and underlying commonality of all empirical inference, from the most rigorous physics to our ordinary, everyday perceptions of our immediate environment. I think they're right to emphasise that continuity, particularly in response to those who want to impose simplistic demarcation criteria on science, the tendency I call "demarcationism". On the other hand, the correct response to the observation of a fuzzy distinction is not to try to define it out of existence, as they do, but to appreciate that fuzzy distinctions are a normal and useful feature of language, and learn to live with them.

    Our use of the word "science" is driven by many considerations, and it would be a mistake to pick any one criterion for dividing science from non-science. I would say--very crudely--that we tend to reserve the word "science" for the more methodical, rigorous and precise of our empirical inferences. But our usage is also driven by convention. It's conventional to call some fields "science" and not others. Those conventions are not arbitrary, but once established they encourage us to refer to those fields--and activity within those fields--as "science" regardless of more specific considerations. Fields of study which involve human behaviour tend to occupy an uncomfortable half-way house, sometimes being called "human", "social" or "soft" sciences, because those fields don't lend themselves so well to a methodical, rigorous and precise approach, though activity in those fields can have these properties to some degree. Other factors can come into consideration too, of which Alex mentions one: the communal aspect of science. But it seems rather odd for Alex to pick that as his one demarcation criterion. (It seems like Alex wants to reject the typical demarcation criteria, but he can't give up altogether on the idea of there being a demarcation criterion, so he picks an unusual one.)

    When a distinction is so fuzzy, it's not worth trying too hard to come up with general demarcation criteria. We can make some very broad observations about the distinction, of the sort I've made here. Beyond that, it's best to wait until a specific use arises, and then ask just what it is we want to convey. If it's not going to be clear enough from the context, then we should say explicitly. If we want to make a point about communal vs individual activity, we can do that much more clearly by using such words than by calling these two alternatives "science" and "non-science".

    [Continued...]

    ReplyDelete
  29. [...continued]

    Once we start making a useful but fuzzy distinction we tend to imbue it with more significance than it has, and usually fail to see just how fuzzy it is. It's in the nature of language for us to reify our concepts, but we often over-reify and start to see science--for example--as a well-defined entity, especially when we say such things as "science can do this and can't do that". These tendencies encourage us to think it's important to come up with demarcation criteria. The best way to resist such tendencies is to develop a more realistic understanding of what language is for and how it works. (I recommend a Wittgensteinian view of language.)

    By the way, Massimo, is it my imagination, or have you become less demarcationist as a result of Maarten Boudry's influence? ;-)

    ReplyDelete
  30. On the issue of whether science can have any significant insight that would substantiate ethical and moral arguments, I would say yes, in principle. However, as to whether I actually expect science to help us with that, I am most certainly a skeptic (though I would certainly agree that advances in philosophy or metaphysics can do so).

    As for the contributions of “science”, we can already see that it has certainly made a few contributions in the past, such as purporting to establish the morality of slavery or of the eugenic programs that took place in the US and in Germany. I would certainly expect those unfortunate experiences to somewhat temper (and cool) the expectations of the optimists about the contributions of science to these issues.


    Of course, one can always pretend that a version of “science” that far overreaches its purposes (and its epistemology) is reasonable. Then anything is possible and valid, and science can say whatever we want.

    ReplyDelete
    Replies
    1. Vasco, science is descriptive not prescriptive. That means science can, as a matter of logic, only tell us how to achieve whatever our ultimate goals are, perhaps increased well-being. Science cannot tell us our ultimate goal ought to be to increase well-being. Your examples of ‘science’ justifying eugenics and slavery of ‘inferior’ races are based on the false assumption that what is natural somehow has moral implications; it does not.

      Agricultural science can tell a farmer how to grow lots of beans but not that he ought to grow lots of beans. Similarly, the science of morality can tell us what moral norms are most likely to achieve whatever our ultimate goals are, but it is incapable, as a matter of logic, of telling us what our ultimate goals ought to be.

      Delete
    2. Mark,

      I agree with you.

      However, there is a large number of people, notably among the followers of Massimo’s blog, who consider scientism respectable and rational, and who fail to realize that scientism is completely wrong, self-refuting, and epistemologically unsustainable, apparently grounded only in the success and progress of science and technology. But the fact that there is success and progress leads only to praising that success and hoping for more, nothing else. This progress has lasted for more than a century and has brought us (and still brings us) awe and amazement; however, we should be aware that it also brought us deception, such as the atrocious deeds we humans made in the name of science and progress (which was claimed as a basis for every major totalitarian regime in the 20th century), and it would be wise to restrain our enthusiasm about letting science, or better said scientists, make moral and ethical (or political) claims.

      I would agree that science can assist us in making decisions (though I doubt those insights are significant, and this is problematic anyway, as I wouldn’t agree with the example that Massimo used to illustrate his point regarding abortion). However, I also have to consider that science and technology can raise a variety of new issues, even hypothetical ones, such as artificial intelligence.

      Delete
    3. Vasco, my delay in replying was because I unfortunately did not see your comment.

      About the facts of the matter at hand, we are perhaps more in “violent agreement” than disagreement. But perhaps not, depending on what you make of the following.

      Consider the real source of “… atrocious deeds we humans made in the name of science and progress (which was claimed as a basis for every major totalitarian regime in the 20th century)”.

      You claim the cause of these “atrocious deeds” was ‘scientism’. Scientism has no such power because it only deals with what ‘is’. The cause of these atrocious deeds was the failure of moral philosophy to provide a coherent account of what morality ‘ought’ to be. By recognizing what morality ‘is’, I see no reason the present chaotic confusion in moral philosophy cannot be resolved for all time.

      The cure for the evil deeds you reference is not less scientism, but more scientism specifically about morality which, by revealing what morality ‘is’, enables a coherent study of what morality ought to be. The science involved has only begun to emerge in the last 40 years or so. If that science had been available to Darwin (though his speculations about morality were spot on, he did not have the science to support them), virtually all of the “atrocious deeds” could have been avoided.

      Just consider the prospects if Darwin had convincingly argued that the function of morality was to solve the universal cooperation/exploitation dilemma by costly cooperation strategies.

      The “atrocious deeds” you reference were direct products of the failures of moral philosophy based on an inadequate understanding of the science of morality, not too much respect for science.

      Delete
  31. Massimo,

    "Why go that way, other than to appease some scientists’ ego?"

    Projecting much? I think Thomas Jones above was right that you "tend to see everything lately in terms of demarcations and turf", and try to preserve your turf from being taken over by scientists.

    "I arbitrarily redefine philosophizing as the activity of thinking, which means that we all do philosophy all the time, and that the answer to any question, not just moral, is therefore by definition philosophical. So there."

    And since you're a philosopher, that would entail that you are an "expert" and an authority on any question. That should teach Harris a lesson...

    ReplyDelete
  32. Massimo

    >One can have a perfectly tight rational argument and still fail to persuade notoriously emotionally ridden human beings...<

    I don't know that any natural language-based argument about a complex and serious issue – especially if it has an ideological dimension – can be "perfectly tight".

    And we are all driven by emotions, are we not? Even philosophers. :-)

    > ... my interest is both in understanding and in rationally-based persuasion.<

    Rationally-based persuasion is, of course, a central and vitally important human activity. But I personally have reservations about attempts to isolate, professionalize and institutionalize it. My concerns relate especially to ethical and other normative matters, and the way, for example, that ethical arguments often involve implicit and problematic appeals to the 'authority' of reason.

    The inappropriateness of such claims is related to the fact (which I think you recognize) that there are often various rationally-defensible answers to particular ethical questions (appealing ultimately to different value systems or ideologies). And choosing between such options takes us not only beyond science, but also beyond reason – in the sense that rationality (alone or in conjunction with science) cannot decide between them.

    ReplyDelete
  33. Let me add another comment on demarcation. Some of Massimo's writing on demarcation (especially with Maarten Boudry) has been about demarcating science from pseudoscience (or good science from bad science). But he has also written about demarcating science from other fields (like philosophy). These are significantly different subjects.

    I would also point out that there's a significant difference between distinction and demarcation. We certainly want to distinguish between good and bad science. That's what scientists do all the time, when they accept some ideas and reject others. But it's one thing to make individual judgements about what is good and bad, and quite another to come up with general principles for deciding what's good and bad. I'm not denying that we can say some broad things about what constitutes good and bad science. But the word "demarcation" tends to convey the sense of drawing rather precise lines. That's the sort of thing Karl Popper was looking for, when he came up with falsifiability as a demarcation criterion. My view is that such distinctions necessarily require human judgement, and can't be reduced to a formula, whether simple or complex. (Statistical formulas are useful, but they can't on their own get us all the way from evidence to conclusion.) So the search for demarcation criteria (at least in a strong sense) is misguided.

    Distinguishing between science and other fields is rather less important than distinguishing between good and bad science. It's important that we make good inferences. It's less important whether we label those inferences "scientific", "philosophical" or just "rational". That said, we need to apply the right inferential methods for a given question, and that does mean recognising the different nature of different questions. However an over-rigid drawing of lines between fields can get in the way. Recognising the continuity between science and philosophy can encourage a more naturalised, scientifically-informed approach to philosophical questions, and that's to the good in my view. The more traditional "a priori" approach to philosophy has been a dead end.

    As far as I recall, Massimo's recent online articles on demarcation haven't actually proposed any demarcation criteria. So perhaps he isn't looking for the sort of hard or formulaic demarcation criteria that I criticise. That's why I suggested above that perhaps he's become less "demarcationist" recently, despite the fact that he's been writing a lot about "demarcation". Of course, he might respond that he's never been demarcationist in that strong sense. (But I would also use the word to refer more broadly to an over-rigid attitude towards fuzzy distinctions. As one example of that I would mention Massimo's statement in the past that experimental philosophy is an oxymoron.)

    Anyway, I hope I've made the point that "demarcation" is yet another difficult word, about whose meaning we have to think carefully.

    ReplyDelete
  34. P.S. Massimo, I've just read your paper in "Philosophy of Pseudoscience". I've been too stingy to buy the book, but your paper is available in full with Amazon UK's Look Inside feature. I strongly agree with much of what you've written there, but I think you've been over-optimistic in holding out the possibility of a rigorous quantification of degrees of scientificity. That seems like another attempt to take human judgement out of the process of theory evaluation, and it's as doomed to failure as previous attempts.

    ReplyDelete
  35. > Anyway, let’s get down to business with a few examples of ethical questions that I think make my point <
    This sort of misses the point. Unless you have a *right* answer for these questions and then prove that science could never have arrived at those answers, then this is a God of the Gaps argument. I don't think that Harris has made the claim that every question can be answered by science - only that either they can be, or they cannot be answered by anything else (in a way that can be verified reasonably objectively).

    In any case, I believe Richard Carrier makes a better argument than Harris,
    e.g. http://freethoughtblogs.com/carrier/archives/4498#more-4498

    ReplyDelete
  36. G,

    > I think Thomas Jones above was right that you "tend to see everything lately in terms of demarcations and turf", and try to preserve your turf from being taken over by scientists. <

    I find this sort of charge bizarre, considering that I am *also* a scientist.

    > And since you're a philosopher, that would entail that you are an "expert" and an authority on any question. <

    Seems to me you missed the sarcasm in my redefinition of philosophy.

    Mark S.,

    > This is the universal cooperation/exploitation dilemma that is as objectively real as the mathematics that define it. <

    I get it, and I don’t disagree with several of your specific points. I just can’t see ethics boiling down just to issues of cooperation.

    > Any species that does not solve this dilemma will not be able to obtain the benefits of sustainable cooperation, cannot become a cooperative social species, and will be unlikely to evolve intelligence. <

    How do you account for intelligent but non-social species then?

    > these ‘moral’ emotions motivate costly cooperation strategies (as Herbert Gintis and others define them). Our ancestors who did not have these adaptations tended to die out. <

    Again, yes, but ethics in current times is much more complicated than that. You are coming close to making one of Shermer’s mistakes: confusing an account of the biological origins of morality (with which I have no disagreement) with an account of what science can tell us about how to behave morally in post-Pleistocene environments.

    > all past and present moral codes were selected for by the benefits of cooperation in groups that the norms produced. <

    Once more: I don’t think talk of “moral codes” is very helpful in contemporary moral philosophy.

    contrarian,

    > I suggested that since two of the examples you present rely on the existence of representative democracy, Harris or his allies might respond that scientific discovery will lead to new forms of government, thus making these two questions pretty irrelevant, in the same sense that "When should I disobey the king" isn't a meaningful moral question <

    Thanks for the clarification, but I really still don’t see how this follows. Ethical questions remain such pretty much regardless of our form of government. And of course, please note that Harris’ argument is that science can *already* answer moral questions better than philosophy. Enough with the promissory notes about a distant scifi future...

    ReplyDelete
    Replies
    1. Harris: "Therefore, questions of morality and values must have right and wrong answers that fall within the purview of science (in principle, if not in practice)." --samharris.org/blog/item/the-moral-landscape-challenge1

      I read this to mean that Harris does want to allow for promissory notes. I don't see where Harris argues that present science is better at addressing moral questions than present philosophy; maybe someone in the comments can help.

      Given my interpretation of Harris (to include promissory notes), I think you can improve your questions 1 and 2 by removing their references to representative democracy. For instance, "To what extent should policy value the interests of former criminals vs. non-criminals, or of organizations large enough to lobby effectively vs. smaller organizations/individuals?" These are relatively minor tweaks, I know, but they help you home in on what I see as the most effective criticisms of Harris' view.

      Delete
      Interesting reply, Massimo, to Mark. Do you have specific ideas on intelligent but non-social animals? I'm not saying sociability in general, let alone E.O. Wilson's eusociality, is a requirement for the evolution of intelligence, but it does seem to be the general case.

      I do like Mark's follow-up on one aspect, not directed at just you but at Pop Ev Psychers, that evolution of moral and ethical ideas started pre-Pleistocene. And, that cooperation vs exploitation might even be considered a pre-moral, or non-moral, issue.

      Delete
    3. Gadfly, the focus of some Ev Psychers on the Pleistocene has been particularly unfortunate.

      Based on neurochemistry, body postures, and behaviors, we share at least kin altruism, empathy, loyalty, indignation, and gratitude with our ape cousins. I have also read some evidence for shame in our ape cousins. Since we share these, it is likely they are millions of years older than the Pleistocene.

      So what about guilt and the related biology underlying our remarkable ability to incorporate cultural norms into our instant judgments of right and wrong? They likely did evolve after the emergence of culture and thus during the Pleistocene.

      But to me, mere biology is still too low a level of causation to enable really understanding morality. The ultimate level of causation of moral behavior is in the nature of physical reality which provides both the cooperation/exploitation dilemma that social animals must solve and the costly cooperation strategies (moral behaviors) that solve it.

      Delete
  37. Mark E,

    > there are often various rationally-defensible answers to particular ethical questions (appealing ultimately to different value systems or ideologies). And choosing between such options takes us not only beyond science, but also beyond reason – in the sense that rationality (alone or in conjunction with science) cannot decide between them. <

    While I agree with the substance of your comment, I wouldn’t say that these instances bring us “beyond” reason (where, exactly?). Rather, we can simply agree that sometimes there is more than one reasonable solution to a given problem, which means that considerations of practicality and even taste then enter into it.

    Richard,

    > When a distinction is so fuzzy, it's not worth trying too hard to come up with general demarcation criteria. We can make some very broad observations about the distinction ... Beyond that, it's best to wait until a specific use arises, and then ask just what it is we want to convey. <

    Yup.

    > (I recommend a Wittgensteinian view of language.) <

    Funny, so do I, in the very first chapter of Maarten’s and my book on Philosophy of Pseudoscience.

    > Massimo, is it my imagination, or have you become less demarcationist as a result of Maarten Boudry's influence? <

    Possibly, the collaboration with Maarten has been good for both of us. But notice that I have taken a Wittgensteinian approach to demarcation projects at least since the writing of Nonsense on Stilts, i.e. at least since ’08...

    > Distinguishing between science and other fields is rather less important than distinguishing between good and bad science. <

    I’m not so sure. It’s a continuum, and some bad science slides easily into pseudoscience, and pseudoscience harms people.

    > Massimo's recent online articles on demarcation haven't actually proposed any demarcation criteria. So perhaps he isn't looking for the sort of hard or formulaic demarcation criteria that I criticise. <

    Correct, see above.

    Deepak,

    > Unless you have a *right* answer for these questions and then prove that science could never have arrived at those answers, then this is a God of the Gaps argument. <

    Not at all. Harris claims that science can answer this sort of question. I think it’s pretty clear he’s mistaken about this. If you want “answers” (really, rational analyses of questions and options, which is what good moral philosophy does) all you need to do is read the actual literature in moral philosophy, or at least popular books like those published by Sandel.

    ReplyDelete
  38. I think the dialogue between Mark S & Massimo is interesting. I agree with Massimo that ethics extends beyond the cooperation/exploitation conception. I also think this conception can be turned around and framed in relation to competition as a complement to cooperation.

    All complex systems (not just social ones) that sustain some larger stability relative to their environment over time seem to find a balance in which the parts of the system cooperate to support the whole while also retaining some autonomous properties. This doesn't entail that we should conclude 'ought' from 'is'. Sometimes in nature the balance becomes parasitic or exploitive, where the autonomy of a subgroup is compromised even as a degree of stability is maintained in the larger system.

    I do think, however, that systems where competition and cooperation maximally interact, allowing individuals both to assert autonomy and to be part of a larger whole, tend to be the most sustainable, adaptive, and resilient to changes in environment. I think science can play an important role in shining a light on how this process tends to unfold.

    ReplyDelete
    Replies
    1. Seth_blog, I appreciate your saying there was something interesting in my conversation here.

      We agree that competition, cooperation, and autonomy are all important in achieving social goals of increasing well-being.

      But solutions to the universal cooperation/exploitation dilemma are costly cooperation strategies that increase the “benefits” of cooperation in groups, with those “benefits” unspecified. Those benefits could include the benefits of autonomy and the benefits of competition.

      So a ‘cost’ in those costly cooperation strategies might be the cost of respecting other people’s autonomy even when you really don’t want to.

      Similarly, societies might (and do) cooperate to enforce norms (moral norms and even laws) that prohibit forms of competition expected to reduce well-being in the society.

      The superficial emphasis on ‘cooperation’ is a bit misleading. Societies can and do enforce norms (costly cooperation strategies) whose benefits are to increase the benefits of autonomy and competition.

      Delete
    2. I see how one benefit of cooperation might be conceived as an increased recognition of others' autonomy. I am not sure why we should elevate the cooperation conception above that of competition. So I agree with your concluding statement:

      'The superficial emphasis on ‘cooperation’ is a bit misleading. Societies can and do enforce norms (costly cooperation strategies) whose benefits are to increase the benefits of autonomy and competition.'

      I think it is best to converge on the space where the benefits cross-over.

      After all, I believe the Latin root of 'competition' meant something like 'to strive together'.

      Of course we still need some philosophy of what is beneficial to strive for :)

      Delete
    3. Seth, it took a bit of thinking to figure out how to best put my answer.

      I expect we agree that while autonomy and competition produce important benefits for societies, unbridled autonomy and competition can reduce the benefits of living in those societies. For this reason, societies define and enforce moral norms and laws that limit autonomy and competition. Enforcing these moral norms and laws requires two kinds of costly cooperation. First, people as individuals limit their own autonomy and competition to conform to the norms even when they really don’t want to. Second, people cooperate to enforce these limits on others even when such enforcement has some cost for them.

      So it is not that cooperation is necessarily more important than autonomy or competition, it is that cooperation is required to limit autonomy and competition in order to maximize the benefits of living in a society.

      Delete
  39. Massimo,

    > Again, yes, (“these ‘moral’ emotions motivate costly cooperation strategies, as Herbert Gintis and others define them. Our ancestors who did not have these adaptations tended to die out”) but ethics in current times is much more complicated than that. You are coming close to making one of Shermer’s mistakes: confusing an account of the biological origins of morality (with which I have no disagreement) with an account of what science can tell us about how to behave morally in post-Pleistocene environments. <

    The cooperation/exploitation dilemma was not just present in the Pleistocene and neither is human moral biology the only possible implementation of strategies for overcoming it. As an aspect of our physical reality, the cooperation/exploitation dilemma both preceded the evolution of people and will be a challenge to all societies until the end of time.

    In the present, we have examples of failed states and failed societies where, for one reason or another, people lose the ‘formulas’ and trust necessary to overcome the cooperation/exploitation dilemma, resulting in the benefits of cooperation as a larger society being destroyed.

    That said, I emphasize my agreement that any science-based ‘oughts’ on how to behave morally can be, of logical necessity, only instrumental. That is, ‘oughts’ that are justified only because they are most likely to achieve independently defined social goals, such as increased well-being.

    Fortunately, a lot of people are interested in increasing well-being regardless of what philosophy has to say about the matter.

    In the absence of good ethical arguments that some other ultimate social goal ought to be pursued, moral codes based on the costly cooperation strategies most likely to achieve well-being goals will be culturally useful. And based on that cultural usefulness, such as pointing the way to rescuing failed states and avoiding new ones, the effort of pursuing such knowledge is justifiable for most people.


    >Once more: I don’t think talk of “moral codes” is very helpful in contemporary moral philosophy. <

    As I learn more about contemporary moral philosophy, I am coming to a similar conclusion. That does not mean I think talk of “moral codes” ‘ought’ not to be helpful in contemporary moral philosophy, but that it ‘is’ not helpful. We both seem to perceive that contemporary moral philosophy has little interest in the subjects of what morality and cultural moral codes ‘are’.

    In response to these circumstances, I have a proposal.

    Could not the science of costly cooperation strategies, our ‘moral’ biology and biologically based values, and, based on this knowledge, means of designing moral codes as purely instrumental oughts be usefully split off from contemporary moral philosophy into a “Science of Morality”?

    This science of morality has no more intrinsic normative power than agricultural science. Both just inform us as to instrumental oughts for achieving our ultimate goals. But as I said above, that is really not much of an issue for the many people who agree that the ultimate goal of enforcing moral codes is to increase well-being in their societies. And if they agree that all people are ‘in their society’ and deserve moral regard, then the whole in-group vs out-group problem largely goes away.

    I am sure consulting with philosophers on the best way to define intellectually coherent moral codes, and other aspects, would be needed, but at least the work could be directly culturally useful for improving people's lives. And such a division into purely instrumental oughts (from the science of morality) versus normative oughts (from ethics) might be clarifying for non-philosophy majors, as well as consistent with the normal boundaries between objective science and moral philosophy.

    ReplyDelete
  40. > Harris claims that science can answer this sort of question. I think it’s pretty clear he’s mistaken about this. <
    Oh, I agree with you in general (that science cannot answer at least some ethical questions). I'm merely saying that an example-based approach, pointing to what science cannot answer today, doesn't refute Harris.

    ReplyDelete
  41. @ Massimo

    > You’re kidding, right? I don’t expect to win either the $20,000 or the $2,000 prize. Luckily, I ain’t in it for the money. <

    If I truly believed that my argument was bulletproof, I would expect not only a check to be forthcoming but also a public recantation.

    ReplyDelete
    Replies
    1. Again, you are either kidding or you are much more naive than I thought.

      Delete
    2. @ Massimo

      > Again, you are either kidding or you are much more naive than I thought. <

      Your response would seem to suggest that Harris is incapable of being honest and objective.

      Delete
    3. I have no reason to believe he isn't honest. Objective? Doubtful.

      Delete
    4. @ Massimo

      > I have no reason to believe he isn't honest. Objective? Doubtful. <

      Interesting. Here I was thinking that an honest skeptic would be objective. The two characteristics - honesty and objectivity - seem to mutually presuppose each other. But perhaps you're right, I'm too naïve. I should be more skeptical of skeptics.

      Delete
    5. Hate to break it to you, but no human being is capable of objectivity. That's why disputes are not settled just by rational argumentation. But the latter helps, if one is willing to strive for improvement. And yes, you are very naive, or - equally possible - highly disingenuous.

      Delete
    6. @ Massimo

      > Hate to break it to you, but no human being is capable of objectivity. <

      Well, that would also include you. Right?

      > That's why disputes are not settled just by rational argumentation. But the latter helps, if one is willing to strive for improvement. <

      It doesn't appear that you have a very high view of philosophy and its methodology (rational argumentation). I find it strange that a professor of philosophy would express such a sentiment. Perhaps you're losing the faith.

      > And yes, you are very naive, or - equally possible - highly disingenuous. <

      Sometimes I get the feeling that you don't like me very much.

      Delete
    7. Obviously it would include me. And you. I have a very high opinion of philosophy and rationality, not so high of human beings. And it's not that I don't like you (I don't know you), it's that sometimes I ask myself why I'm bothering.

      Delete
    8. @ Massimo

      Question: Do you believe we have a moral obligation to be objective, to be intellectually honest?

      Delete
    9. If you have been reading this blog for a while and you still don't know the answer to that question I definitely have wasted my time.

      Delete
    10. @ Massimo

      > If you have been reading this blog for a while and you still don't know the answer to that question I definitely have wasted my time. <

      I don't know. So, why don't you humor me and answer the question? (It's an "ethical question" and directly relevant to the subject matter at hand.)

      Delete
    11. Maximus,

      Do you believe that it is objectively true that we ought to be intellectually objective, to be intellectually honest? (It seems to be a simple question, and you seem to think that it has a simple answer. So, answer the question.)

      Delete
  42. I know you limited this to ethical questions, but there are certainly plenty of other questions science can't answer. As an aficionado of classical music, including more avant-garde compositions from the last two centuries, I know it can't answer aesthetic questions. And it won't be able to in the future, either.

    ReplyDelete
  43. Massimo,

    Thanks for replying--and I can see you've got your hands full responding to a lot of people!

    These days I say that my approach to philosophy is a combination of (a) a very naturalised way of thinking, and (b) a Wittgensteinian approach to language. I'm glad you combine these elements too, but from my point of view you don't yet go far enough with either of them. You're still too demarcationist for my taste, though much less so than I previously thought.

    I'm looking forward to listening to the new podcast with Maarten.

    ReplyDelete
  44. These are all "should" questions - and so are poorly specified. Science can't answer questions about angels and pinheads, either - but that isn't science's fault: specify clearly what you mean by an "angel" and science will be able to answer. Poorly specified questions don't have answers.

    ReplyDelete
  45. 1. I think the real contribution of science to this issue is that we cannot justify permanent disenfranchisement by any genuine evidence, nor can we justify the creation of a disenfranchised class by any evidence. In short, the contribution of science to this issue is to show that the proponents of the policy are discriminatory, and that they cannot justify this discrimination. I think that we can demonstrate the good consequences of the franchise, using multiple standards and perspectives, and we can also demonstrate that the inherent difficulties of measurement strongly imply that an assumption of equal rights is feasible, whereas religion, philosophy and law cannot show that a consequentialist analysis is.

    2. The assumption that social science cannot address the question of how legislation that is not open to the public differs from legislation that is, and whether the differences are beneficial to many, few, none or all (or immeasurable), strikes me as forced. The apparent implication that an "experiment" has to be carried out seems like a way of foreclosing argument, rather than an argument itself.

    3. Any argument that justifies discrimination against any group historically has always rested on philosophical arguments informed by supposed empirical facts. Frankly, I think one of the greatest contributions of science to ethics is the discovery that practically every philosophical or religious or legal argument for such discrimination cannot be justified. The hypothesized groups cannot be empirically distinguished or the hypothesized traits meriting discrimination cannot be empirically assigned to the groups or the hypothesized ill consequences cannot be empirically demonstrated. In general, any proposed measurement system applied to humans has been found to be intransitive, indefinite, transient and statistically insignificant in any global sense demanded by the religious, philosophical and legal justifications proposed.

    It is, in fact, the insistence that mere empirical or scientific evidence is insufficient that justifies the importation of religious, philosophical and legal justifications for discrimination of this kind, i.e. the claim that these distinctions are justified by another way of knowing.

    4. The claim that an ER physician is facing a trolley problem does not seem to be a reasoned objection. Every trolley problem I know of always is an either/or problem, where someone must necessarily face an evil consequence. The man chopped up by the ER physician for organs was not necessarily going to die, so this is not a trolley problem. I'm not sure that any trolley problem is ever relevant. As an argument for the relevance of philosophy to ethical problems, this is a complete loser.

    5. The notion of collective responsibility seems to be a philosophical one. It really seems to me that the confusions here are due to philosophy, and the value of a scientific analysis is to demonstrate this. Reparations for slavery are a misleading philosophical evasion of genuine questions about a society that discriminates for no justifiable reason. At least for no reasons that can be scientifically demonstrated.

    ReplyDelete
  46. Here's one for consequentialists. Suppose Casey Anthony is not guilty of murder, the jury is convinced she's not guilty, but the public is convinced she is guilty. Finding her not guilty will convince everyone that they can get away with murder, and some of them will try. Knowing this, should the jury lie and find her guilty?

    ReplyDelete
  47. If science cannot answer something, then what can? To say science can't measure something is to say that it is immeasurable.

    All of these examples share a common link - they are grounded in subjectivity. The answers can only be determined by opinion, and ask for persuasion rather than proof, hence "philosophical problems".

    I believe this criticism stems from a compartmentalization of science.

    Science is simply the product of the scientific method; empirical information verified by continual testing to uncover the nature of reality.

    In order for science to determine a course of action, a goal must be provided. We need to know what we are trying to accomplish. Different goals will direct different actions. Using science to accomplish a given objective is as simple as "doing whatever works".
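
    A minimal sketch of what that instrumental picture amounts to (my own toy example; the candidate actions, predicted outcomes, and goal weights are all invented for illustration): once a goal is stated as something measurable, "doing whatever works" is just picking the action whose predicted outcome scores best against that goal, and changing the goal changes the answer.

        # Toy instrumental reasoning: given a stated goal, pick the action whose
        # predicted outcome scores highest. All names and numbers are invented.
        predicted_outcomes = {
            "action_A": {"lives_saved": 1, "public_trust": 0.9},
            "action_B": {"lives_saved": 5, "public_trust": 0.2},
        }

        def score(outcome, goal_weights):
            return sum(goal_weights[k] * v for k, v in outcome.items())

        goal = {"lives_saved": 1.0, "public_trust": 10.0}   # the goal itself comes from outside the calculation
        best = max(predicted_outcomes, key=lambda a: score(predicted_outcomes[a], goal))
        print(best)   # which action "works" depends entirely on the stated goal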

    Let's look at the classic transplant problem.

    The "problem" exists because people are accustomed to employing prescriptive ethics, labeling certain actions as universally right or wrong, rather than reserving that judgement for the effect they produce. This is the nature of ideology.

    In each example, we must first ask, "what are we trying to accomplish?" Failing to outline an intended goal is like shooting a gun without a target, and claiming you missed!

    If we say that morality, meaning all matters regarding social decisions, should be guided by the health, functionality, and sustainability of society, and we qualify these principles precisely, then the choice has already been made for us. Whatever action most effectively produces the stated goal is the "correct" choice. The accuracy of our decision is limited only by the availability of reliable information, and by our ability to properly assess it (predict results). Choosing a course of action is simply manipulating reality to get what we want. If we don't know what we want, we can't know what to do.

    If the imaginary doctor valued his paycheck above all other considerations, then his course of action would be directed by his objective to maximize his monetary gain.

    The doctor can still make a faulty choice. Let's say that by murdering the patient and harvesting organs, he'll receive more pay, and so serve his intended goal. After the operation, he is caught, fired, and thrown in prison. As a result, he loses his money and fails to serve his goal. His choice was in error because he did not have enough information to foresee his prosecution.

    Let's say the doctor is trying to serve a healthy society, as previously stated, and rationalizes murdering the patient to save several more lives. As a result, the act of murdering healthy people to save those who are sick becomes a common practice within the medical field, and the public learns to avoid doctors for fear of becoming the next victim. This creates destabilization within society, and people now fear the hospital. This outcome fails to serve the doctor's initial goals. Had it been predicted, he would not have pursued that course of action.

    ReplyDelete
    Replies
    1. Now let's restrict ourselves to the limitations of the original question. The doctor will not be caught, the patients will definitely be saved, and no one will ever know. If you want to know the "correct" answer in this case... ask the doctor! It's totally up to his subjective preference, his own personal goal. It's as pointless a question as asking "what do you want?". He still has to use science to determine the optimal course of action once he states his personal goal.

      We have to remember that logic and science are descriptive, not prescriptive. They are tools that give us the best understanding of what reality is, not how it should be.

      We can't say that any problem can be solved through science or reason alone; they need each other to function. Science is like a hard drive storing data, and reason the processor and applications that apply that data to achieve given tasks. A processor without a hard drive will simply have nothing to compute.

      To say science cannot answer ethical questions is to say that ethical questions have no objective solution. I believe this is demonstrably false. So long as an intended goal is provided, it seems science can provide an objective solution.

      Delete
