About Rationally Speaking


Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please note that the contents of this blog may be reprinted under the standard Creative Commons license.

Monday, October 29, 2012

From the naturalism workshop, part III


by Massimo Pigliucci

And we have now arrived at the commentary on the final day of the workshop on “Moving Naturalism forward,” organized by cosmologist Sean Carroll. It was my turn to give an introductory presentation on the relationship between science and philosophy, and on the idea of scientism. (Part I of this commentary is here, part II here.)

I began by pointing out that it doesn’t help anyone if we play semantic games with terms like “science” and “philosophy.” In particular, “science” cannot be taken to be simply whatever deals with facts, just as “philosophy” isn’t whatever deals with thinking. So, for instance, facts about the planets in the solar system are scientific facts, but the observation that I live in Manhattan near the Queensborough Bridge is just a fact; science has nothing to do with it. Similarly, John Rawls’ book A Theory of Justice, to pick an arbitrary example, is real philosophy, while L. Ron Hubbard’s nonsense about Dianetics isn’t, even though he thought of it as such.

Science, then, is a particular type of structured social activity, characterized by empirically driven hypothesis testing about the way the world works, peer review, technical journals, and so on. And philosophy is about deploying logic and general tools of reasoning and argument to reflect on a broad range of subject matters (epistemology, ethics, aesthetics, etc.) and on other disciplines (the various “philosophies of”).

Another important thing to get straight: philosophy is not in the business of advancing science. We’ve got science for that, and it works very well. Some philosophy is “continuous” with science, but most is not. Also, philosophy makes progress by exploring logical space, not by making empirical discoveries.

I then brought up the Bad Boy of physics, Richard Feynman, who famously said: “Philosophy of science is about as useful to scientists as ornithology is to birds.” True enough (except that ornithologists do help stave off the extinction of some bird species), but surely that does not imply that ornithology is thereby useless.

Next, I moved to a discussion of scientism. I suggested that in the strong sense this is the view that only scientific claims, or only questions that can be addressed by science, are meaningful. In a weaker sense, it is the view that the methods of the natural sciences can and should be applied to any subject matter. I think the first one is indefensible, and that the second one needs to be qualified and circumscribed. For instance, there are plenty of areas where science has little or nothing interesting to say: mathematics, logic, aesthetics, ethics, literature, just to name a few.

It is, of course, true that a number of philosophers have said, and continue to say, bizarro things about science, or even about philosophy itself (Thomas Nagel and Jerry Fodor come to mind as recent examples). But a pretty good number of scientists are on record as having said bizarro things about philosophy, or even about science itself (Lawrence Krauss, and more recently Freeman Dyson).

What I suggested as a way forward is that we should work toward re-establishing the classical notion of scientia, which means knowledge in the broader sense, including contributions from science, philosophy, math, and logic. There is also an even broader concept of understanding, which is relevant to human affairs. And I think that understanding requires not only scientia, but also other human activities such as art, music, literature, and the broader humanities. As you can see, I was trying to be very ecumenical...

In the end, I submitted that skirmishes between scientists and philosophers are not just badly informed and somewhat silly, they are anti-intellectual, and do not help the common cause of moving society toward a more rational and compassionate state than it finds itself in now.

The discussion that followed was very interesting. Alex Rosenberg did stress that philosophers interested in science need to pay close attention to what goes on in the lab, to which both Sean Carroll and Janna Levin responded that there are very good examples of important conceptual contributions made by philosophers to physics, particularly in the area of interpretations of quantum mechanics. Rosenberg also pointed out that some philosophers — for instance Samir Okasha — have contributed to biology, notably in the debates about levels of selection.

We then talked about the issue of division of intellectual labor, with Dennett stressing the ability (and dangers!) of philosophers to take a bird’s eye view of things that is often unavailable to scientists. This, I commented, is because scientists are justifiably busy with writing grant proposals, doing lab work, and interacting closely with graduate students. That was my own experience as a practicing evolutionary biologist. As a philosopher, I rarely write grant proposals, I don’t have to run a lab or do field work, and my interactions with graduate students often take the form of visits to coffee houses and wine bars. All of which affords me the “luxury” (really, it’s my job) to read, think and write more broadly than I could when I was a practicing scientist.

Along similar lines, Sean Carroll remarked — again going back to actual examples from physics — that scientists concern themselves primarily with how to figure things out, postponing the broader question of what those things mean. That’s another area where good philosophy can be helpful. Rebecca Goldstein added that philosophy is hard to do well, and that scientists should be more respectful and less dismissive of what philosophers do. Janna Levin observed that much of the fracas in this area is caused by a few prominent, senior (quasi-senile?) scientists and philosophers, but that in reality most scientists have a healthy degree of respect for philosophy.

At this point Coyne asked a reasonable question: we have talked about contributions that philosophers have made to science, but what about the other way around? Several people offered the examples of Einstein, Bell and Feynman (ironically, the same fellow responsible for the philosophy-as-ornithology quip mentioned above), the latter for instance on the concept of natural law.

That was it, folks. What did I take from the experience? At the least the following points:

* On naturalism in general: we agreed that there are different shades of philosophical naturalism, and that reasonable people may disagree about the degree of, say, reductionism or determinism that the view entails.

* On determinism: given that even the physicists aren’t sure, yet, whether quantum mechanics is best interpreted deterministically or not (not to mention the interpretation of any more fundamental theory), the question is open.

* On reductionism: Rosenberg’s extreme reductionism-nihilism was clearly, well, extreme within this group. Most participants agreed that one can, indeed should, still talk about morality and responsibility in meaningful terms.

* On emergence: there was, predictably, disagreement here, even among the physicists. Carroll seemed the most sympathetic to the concept, repeatedly talking, for instance, about the emergence of the second law of thermodynamics from statistical mechanics (see the toy simulation after this list). Even Weinberg agreed that there are emergent phenomena in a robust sense of the term, but of course he preferred a “weak” concept of emergence, according to which the reductionist can write a promissory note that “in principle” things could be explained by a fundamental law. It was unclear what such a principle might be, or even why that fundamental law couldn’t itself be considered emergent from something else (the “it’s turtles all the way down” problem).

* On meaning: following Goldstein, most of us agreed that there is meaning in human life, which comes out of the sense that we matter in society and to our fellow human beings. Flanagan’s concept of “eudaimonics” was, I think, most helpful here.

* On free will and moral responsibility: the debate between incompatibilists (Coyne, Rosenberg) and compatibilists (most of the rest, led of course by Dennett) continued. But we agreed that “free will” is far too loaded a concept, with Flanagan’s suggestion that we go back to the ancient Greeks’ categories of voluntary and involuntary action being particularly useful, I think. Even Coyne agreed that there is a Dennett-like sense in which we can think of morally competent vs morally incompetent agents (say, a normal person and one with a brain tumor affecting his behavior), thereby rescuing a societally and legally relevant concept of morality and responsibility.

* Relationship between science and philosophy: people seemed in broad agreement with my presentation (again, including Jerry), from which it follows that science and philosophy are partially continuous and partially independent disciplines, the first one focused on the systematic study of empirical data about the world, the second more concerned with conceptual clarification and meta-analysis (“philosophy of”). We also agreed that there are indeed good examples of philosophers of science playing a constructive role in science, and vice versa of scientists who have contributed to philosophy of science (take that, Krauss and Hawking!).
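
Since Carroll’s statistical mechanics example came up repeatedly, here is a minimal toy illustration of the point — my own sketch, not anything presented at the workshop — using the classic Ehrenfest urn model. The micro-rule is perfectly time-symmetric, yet the macro-variable reliably relaxes toward equilibrium:

```python
import random

# Ehrenfest urn model: N particles in two boxes; at each step one particle
# is chosen uniformly at random and moved to the other box. The micro-rule
# is symmetric, yet the macro-state (occupancy of the left box) relaxes
# toward 50/50, because the overwhelming majority of micro-states sit
# near equilibrium.
random.seed(42)
N, steps = 1000, 5000
left = N  # start far from equilibrium: every particle in the left box

for t in range(1, steps + 1):
    if random.randrange(N) < left:  # the chosen particle is in the left box
        left -= 1
    else:
        left += 1
    if t % 1000 == 0:
        print(f"step {t:5d}: left box holds {left / N:.1%} of the particles")
```

Nothing in the update rule “knows” about entropy; the second-law-like behavior is a statistical property of the ensemble, which is the sense of emergence Carroll was gesturing at.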

This, added to the positive effect of meeting one’s intellectual adversaries in person, sharing meals and talking over a beer or a glass of wine, has definitely made the workshop as a whole a stupendous success. Stay tuned for the full video version on YouTube...

Saturday, October 27, 2012

From the naturalism workshop, part II


by Massimo Pigliucci

Second day of the workshop on “Moving Naturalism forward,” organized by cosmologist Sean Carroll. Today we started with Steven Weinberg (Nobel laureate in physics) introducing his thoughts about morality. Why is a physicist talking about morality, you may ask? Good question, I reply, but let’s see...

The chair of the session was Rebecca Goldstein, who mentioned that she doesn’t find the morality question baffling at all. For her, moral reasoning is something that we have been doing for a long time, and moreover something to which philosophy has clearly made positive and incremental contributions throughout human history. She of course accepts the idea of a naturalistic origin for morality, but immediately added that evolutionary psychological accounts are simply not enough. In the process, she managed to both appreciate and criticize the work of Jonathan Haidt on the different dimensions of liberal vs conservative moral reasoning.

Weinberg agreed with Goldstein’s broad claim that we can reason about morality, but was concerned with the question of whether we can ground morality in science, and particularly in the theory of evolution. He declared that he has been “thoroughly annoyed” by Sam Harris’ book on scientific answers to moral questions. He went on to observe that most people don’t actually have a coherent set of moral principles, nor do they need one. Weinberg said that early on in his life he was essentially a utilitarian, thinking that maximization of happiness was the logical moral criterion. Then he read Huxley’s Brave New World, and was disabused of such a simplistic notion. Which is yet another reason he didn’t find Harris compelling, considering that the latter is a self-described utilitarian.

Weinberg also criticized utilitarianism by rejecting Peter Singer-style arguments to the effect that more good would be done in the world by living on the bare minimum of necessities and giving away much of your income to others. Weinberg argued instead that we owe loyalty to our family and friends, and that there is nothing immoral about preferring their welfare to the welfare of strangers. Indeed, although I don’t think he realized it, he was essentially espousing a virtue ethics / communitarian position. Weinberg concluded from his analysis that we “ought to live the unexamined life” instead, because that’s where the human condition leads us.

Goldstein’s response was that we don’t need grounding postulates to engage in fruitful moral reasoning, and I of course agree. I pointed out that ethics is about developing reasonable ways to think about moral issues, starting with (and negotiating) certain assumptions about human life. In my book, for instance, Michael Sandel’s writings are excellent examples of how to engage in fruitful moral reasoning without having to settle the sort of metaethical issues that worry Weinberg (interestingly, and gratifyingly, I saw Jerry Coyne nodding somewhat vigorously while I was making my points). Dennett added that there are ways of thinking through issues that do not involve fact finding, but rather explore the logical consequences of certain possible courses of action — which is why moral philosophy is informed by facts (even scientific facts), but not determined by them. And for Dennett, of course, we — meaning humanity at large — are the ultimate arbiters of what works and doesn’t work in the ethical realm.

Dawkins agreed with Goldstein that there has been moral progress, and that we live in a significantly improved society in the 21st century compared to even recent times, let alone of course the Middle Ages. He also mentioned Steven Pinker’s work demonstrating a steady decrease in violence throughout human history (Goldstein humorously pointed out that Pinker got the idea from her). Finally, Dawkins made the good point that we talk about morality as if it were only a human problem because all other species of Homo went extinct. Had that not been the case, we might be having a somewhat different conversation.

Both Weinberg and Goldstein agreed that a significant amount of moral progress comes from literature, and more recently movies. Things like Uncle Tom’s Cabin, or Sidney Poitier’s role in Guess Who’s Coming to Dinner, have the power to help change people’s attitudes about what is right and what is wrong.

Which led to my comment about Hume and Aristotle. I think — with these philosophers — that moral reasoning is grounded in a broadly construed conception of human nature. Aristotle emphasized the importance of community environment, and particularly of one’s family and early education environment; but also of reflection and conscious attempts at improving. Hume agreed that basic human instincts are a mix of selfish and cooperative ones, but also argued that human nature itself can change over time, as a result of personal reflection and community wide conversations.

Carroll noted a surprising amount of agreement in the group about the fact that morality arose naturally because we are large brained social animals with certain needs, emotions and desires; but also about the fact that factual information and deliberate reflection can both improve our lot and the way we engage in moral reasoning. Owen Flanagan, however, pointed out that most people outside of this group do think of morality in a foundational sense, which is untenable from a naturalistic perspective. Owen went on to remind people that David Hume — after the famous passage warning about the logical impossibility of deriving oughts from is — went on to engage in quite a bit of moral reasoning nonetheless, simply doing so without pretending that he was demonstrating things.

Weinberg claimed that he cannot think of a way to change other people’s minds about moral priorities when there is significant disagreement. But Dennett pointed out that we do this all the time: we engage in societal conversations with the aim of persuading others, and in so doing we are changing their nature. That is, for instance, how we made progress on issues such as women’s rights, gay rights, and animal welfare (as Goldstein had already pointed out).

Terrence Deacon remarked that there was an elephant in the room: how is it that this group agrees so broadly about morality, if a good number of them are also fundamental reductionists? Isn’t moral reasoning an emergent property of human societies? That is indeed a good question, and I always wonder how people like Coyne or Rosenberg (or Harris, who was invited but couldn’t make it to the workshop) can at the same time hold essentially nihilistic views about existence and yet talk about good and bad things and what we should (ought?) do about them. Carroll agreed that we should be using the emergence vocabulary when talking about societies and morality. In his mind, the stories we tell about atoms are different from the stories we tell about ethics; the first ones are descriptive, the latter ones become prescriptive. To use his kind of example, we can use the term “wrong” both when someone denies the existence of quarks and when someone kills an innocent person, but that word indicates different types of judgments that we need to keep distinct.

Simon DeDeo asked what sort of explanation we have for saying that, say, Western society has gotten “better” at ethical issues. (We all agreed that, more or less, it has.) We don’t seem to have anything like, say, the evolutionary explanation of what makes a bird “better” at flying. But Don Ross replied that we do have at least partial explanations, for instance drawing on the resources of game theory. In response to Ross, DeDeo pointed out that game theory can only give an account of morality within a consequentialist framework. Both Ross and (interestingly) Alex Rosenberg disagreed. Dennett helped clarify things here, making a distinction between what he called “second rate” (or naive) consequentialism, which is a bad idea easily criticized on philosophical grounds, and the broader point that of course consequences matter to human ethical decision making. In general, I think we are still doing fairly poorly in the one area we would need in order to answer DeDeo’s question: a good theory of cultural evolution. But of course that doesn’t mean it cannot be done or will not be done at some point (as is well known, I’m skeptical of memetic-type theories in this respect).
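
To make Ross’s game-theoretic suggestion a bit more concrete, here is a minimal sketch — my own illustrative toy, not anything presented at the workshop — of the iterated prisoner’s dilemma, in which a reciprocating strategy like tit-for-tat sustains mutual cooperation that unconditional defectors cannot reach:

```python
import itertools

# One-round prisoner's dilemma payoffs: (my move, your move) -> my payoff.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def always_defect(opponent_history):
    return 'D'

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else 'C'

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

for s1, s2 in itertools.combinations_with_replacement(
        [always_defect, tit_for_tat], 2):
    print(s1.__name__, 'vs', s2.__name__, '->', play(s1, s2))
```

Two tit-for-tat players end up far ahead of two unconditional defectors (600 vs 200 points over 200 rounds here) — a partial, thoroughly non-foundational explanation of how cooperative norms can stabilize, which is all Ross was claiming.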

In the second part of the morning session we moved to consider the concept of meaning, with Owen Flanagan giving the opening remarks. He pointed out that the historical realization that we are “just” animals caused problems within the context of the preceding cultural era, during which human beings were thought of as special direct creations of gods. Owen brought us back 2,500 years, to Aristotle and the ancient Greeks’ concept of eudaimonia, the life that leads to human flourishing. Aristotle noted that people have different ideas of the good life, but also that there are some universals (or nearly so). One of these is that no normal person wishes to have a life without friends. Flanagan thinks — and I agree — that we can use the Aristotelian insight to build a discipline of “eudaimonics,” one that is both descriptive and normative. The good life is about the confluence of the true, the beautiful and the good (all lower case letters, of course).

As an example of a modern-day analysis of a concept Aristotle would have been familiar with, I brought up the comparison between people’s day-to-day self-reported happiness and the overall meaning they find in their lives when it comes to having children. It turns out that having children significantly decreases day-to-day happiness, but it also increases the long-term positive meaning that most people attribute to their lives.

Rebecca Goldstein argued that novelists have a unique perspective on the issue of meaning, because of the process involved in devising characters and their stories. She said that writing novels taught her that a major component of flourishing and meaning is the idea of an individual mattering to other people. (Again, Aristotle would not have been surprised.) Rebecca connected this to the question she is often asked about how she can find meaning in life as an atheist. She had a hard time even understanding the question, until she realized that of course for theists meaning is defined universally by an external agency, on the basis that we “matter” to the gods. So the atheist is still using the idea that mattering and meaning are connected, she just does away with the external agency.

Dennett suggested that we as atheists need to think of projects and organizations that help secular people feel that they matter in more productive ways than, say, joining a crusade to kill the infidels. Janna Levin brought up the example of a flourishing of science clubs in places like New York City, which provide a community for intellectual kin (and of course there are also a good number of philosophy meetups!). Still, I argued (and Carroll, Goldstein, Coyne, and Flanagan agreed) that attempts in that direction — like the various Societies for Ethical Culture — are largely a failure. Secularists, especially in Europe, find meaning and feel that they matter because they live in a society they feel comfortable in and are active members of. Just like the ancient Greeks’ concept of a polis that citizens could be proud of and contribute to. It’s the old Western idea of civic pride, if you will. 

I need to note at this point that — just as in the case of morality discussed above — the nihilists / reductionists in the group didn’t seem to have any problem meaningfully talking about meaning, so to speak, even though their philosophy would seem to preclude that sort of talk altogether... (The exception was Rosenberg, who stuck to his rather extreme nihilist guns.)

The afternoon session was devoted to free will, with Dennett giving the opening remarks. His first point was that there is a difference between the “manifest image” and the “scientific image” of things. For instance, there is a popular / intuitive conception of time (manifest image), and then there is the philosophical and/or scientific conception of time. But it is still the case that time exists. Why, then, asked Dennett, do so many neuroscientists flat out deny the existence of free will (“it’s an illusion”), rather than replacing the common image with a scientific one?

Free will, for Dennett, is as real as time or, say, colors, but it’s not what some people think it is. And indeed, some views of free will are downright incoherent. He suggested that nothing we have learned from neuroscience shows that we haven’t been wired (by evolution) for free will, which means that we also get to keep the concept of moral responsibility. That said, contra-causal free will would be a miracle, and we can’t help ourselves to miracles in a naturalistic framework.

Citing a Dilbert cartoon, Dennett said that the zeitgeist is such that people think that it follows from naturalism that we are “nothing but moist robots.” But this, for Dennett, is confusing the ideology of the manifest image with the manifest image itself. An analogy might help: one could say that if that is what you mean by color (i.e., what science means by that term), then color doesn’t exist. But we don’t say that, we re-conceptualize color instead. For instance: it makes perfect sense to distinguish between people who have the competence and will to sign a contract, and those who don’t. We have to draw these distinctions because of practical social and political reasons, which however does not imply that we are somehow cutting nature at its joints in a metaphysical sense. Moreover, Dennett pointed out that experiments show that if people are told that there is no free will they cheat more frequently, which means that the conceptualization of free will does have practical consequences. Which in turn puts some responsibility on the shoulders of neuroscientists and others who go around telling people that there is no free will.

Jerry Coyne gave the response to Dennett’s presentation, not buying into the practical dangers highlighted by the latter (Jerry seemed to think that these effects are only short-term; that may be, but I don’t think that undermines Dennett’s point). Coyne declared himself to be an incompatibilist (no surprise there), accusing compatibilists of conveniently redefining free will in order to keep people from behaving like beasts. However, Jerry himself admitted to having changed his definition of free will, and I think in an interesting direction. His old definition was the standard idea that if the tape of the history of the universe were to be played again you would somehow be able to make a different decision, which would violate physical determinism. Then he realized that quantum indeterminacy could, in principle, bring in indeterminism, and could even affect your conscious choices (through quantum effects percolating up to the macroscopic level). So he redefined free will as the idea that you are able to make decisions independently of your genes, your environments and their interactions. To which Dennett objected that that’s a pretty strange definition of free will, which no serious compatibilist philosopher would subscribe to.

Jerry then plunged into his standard worry, the same that motivates authors like Sam Harris: we don’t want to give ground to theologically-informed views of morality, and incompatibilism about free will (“we are the puppets of our genes and our environments”) is the best way to do it. Dennett was visibly shaking his head throughout (so was I, inwardly...).

In the midst of all of this, Jerry mentioned the (in)famous Libet experiments, even though they have been taken apart both philosophically and, more recently, scientifically — as Dennett, Flanagan, and Goldstein immediately pointed out.

During the follow-up discussion Weinberg declared his leaning toward Dennett’s position, despite his (Weinberg’s) acceptance of determinism. We weigh reasons and we arrive at conscious decisions, and we know this by introspection — although he pointed out that of course this doesn’t mean that all our own desires are transparent and introspectively available. Weinberg did indeed paint a picture very similar to Dennett’s: we may never arrive — given the same circumstances — at a different decision, but it is still our decision.

Rosenberg commented that we have evidence that we cannot trust our introspection when it comes to conscious decision making, again citing Libet. Both Dennett and Flanagan once more pointed out that those experiments were taken conceptually apart (by them) decades ago (and, I reminded the group, questioned on empirical grounds more recently). Dennett did agree that introspection is not completely reliable, but he remarked that that’s quite different from claiming that we cannot rely on it at all.

Owen Flanagan discussed experiments about conceptions of free will done on undergraduate students. The students were given a definition of free will and then asked questions about whether the person made the decision and was responsible for her actions. The majority of subjects turned out to be both determinists and compatibilists, which undermines the popular idea that the commonsense concept of free will is contra-causal.

I pointed out, particularly to Jerry and Alex Rosenberg, that incompatibilists seem to discard or bracket out the fact that the human brain evolved to be a decision making, reason-weighing organ. If that is true, then there is a causal story that involves the brain, and my decisions are mine in a very strong sense, despite being the result of my lifelong gene-environment interactions (and the way my conscious and unconscious brain components weigh them).

Sean Carroll also objected to Coyne, using an interesting analogy: if Jerry applied his argument about incompatibilism to fundamental physics, he would have to conclude for an incompatibility between statistical mechanics and the second law of thermodynamics. But, Sean suggested, that would be a result of confusing language that is appropriate for one level of analysis with language that is appropriate for another level. (Though he didn’t say so, I would go even further, following up on the previous day’s discussion, and suggest that free will is an emergent property of the brain in a sense similar to that in which the second law is an emergent property of statistical mechanics — and on the latter even Steven Weinberg agreed!)

Terrence Deacon asked why we insist on using the term “free” will, and Jerry had previously invited people to drop the darn thing. I suggested, and Owen elaborated on it, that we should instead use the terms that cognitive scientists use, like volition or voluntary vs involuntary decision making. Those terms both capture the scientific meaning of what we are talking about and retain the everyday implication that our decisions are ours (and we are therefore responsible for them). And dropping “free” also avoids any confusion with contra-causal mystical-theological mumbo jumbo.

Dennett, in response to a question by Coyne about the evolution of free will, pointed out two interesting things. First, if we take free will to be the ability of a complex brain to exercise conscious decision making, then it is a matter of degrees, and other species may have partial free will. Second, and relatedly, human beings themselves are not born with free will: we develop competence to make (morally relevant, among others) decisions during early life, in part as the result of education and upbringing.

Jerry at some point brought up the case of someone who commits a murder because a brain tumor interfered with his brain function. But I commented that it is strange to take those cases — where we agree that there is a malfunction of the brain — and make them into arguments to reject moral responsibility. Dennett agreed, talking about brains being “wired right” or “wired wrong,” which is a matter of degree, and which translates into degrees of moral responsibility (lowest for the guy affected by the tumor, highest for the person who kills for financial or other gain). Jerry, interestingly, brought up the case of a person who becomes a violent adult because of childhood traumas. But Dennett and I had a response that is in line with our conception of the brain as a decision making organ with higher or lower degrees of functionality: the childhood trauma imposes more constraints (reduces free will) on the brain’s development than a normal education, but fewer than a brain tumor. Consequently, the resulting adult bears an intermediate degree of moral responsibility for his actions.

The second session of the afternoon was on consciousness, introduced by David Poeppel. He claimed — as a cognitive scientist — that there are good empirical reasons to reject the conclusion that Libet’s experiments (again!) undermine the idea of conscious decision making. At the same time, he did point to research showing that quite a bit of decision making in our brain is in fact invisible or inaccessible to our consciousness.

Dennett brought up experiments on priming in psychology, where the subjects are told not to say whatever word they are going to be primed for. Turns out that if the priming is too fast for conscious attention to pick it up, the subjects will in fact say the word, contravening the directions of the experimenter. But if the time frame is sufficiently long for consciousness to come in, then people are capable of stopping themselves from saying the priming word. The conclusion is that this is good evidence that conscious decision making is indeed possible, and that we can study its dynamics (and limits).

Rosenberg warned that we have good evidence leading us to think that we cannot trust our conscious judgments about our motives and mental states. Indeed, as Dennett pointed out, of course there is self-deception, rationalization, ideology, and self-fooling. But it is also the case that it is only through conscious reasoning that we get to articulate and reflect on our thoughts. We need consciousness to pay attention to our reasons for doing things. Conscious reasons can be subjected to a sort of “quality control” that unconscious reasons are screened off from. For Dennett human beings are powerful thinking beings because they can submit their own thinking to analysis and quality control.

And of course Daniel Kahneman’s work on type I (fast, unconscious) vs type II (slow, conscious) thinking came up. Poeppel pointed out that sometimes type I thinking is not just faster, but better than type II. To which Dennett replied that if you are about to have brain surgery you might prefer the surgeon to make considered decisions based on his type II system rather than quick type I decisions. Of course, which system does a better job is probably situation dependent, and at any rate is an empirical question.

Carroll asked whether it is actually possible to distinguish conscious from unconscious thoughts, to which both Poeppel and Goldstein replied yes, and we are getting better at it. Indeed, this has important practical applications, as for instance anesthesiologists have to be able to tell whether there is conscious activity in a patient’s brain before an operation begins. However, the best evidence indicates that consciousness is a systemic (emergent?) property, since it disappears below a certain threshold of brain-wide activity.

Dennett brought up the example of the common experience of thinking that we understand something, until we say it out loud and realize we don’t. No mystery there: we are bringing in “more agents” (or, simply, more and more deliberate cognitive resources) into the task, so it isn’t surprising that we get a better outcome as a result.

Rosenberg asked if we were going to talk about the “mysterian” stuff about consciousness, things like qualia, aboutness, and what it is like to be a bat. I commented that the only sensible lesson I could take from Nagel’s famous bat paper is not that first person experiences are scientifically inexplicable, but that the only way to have them is to actually have them. Dennett, however, remarked that he pointedly asked Nagel: if you had a twin brother who was a philosopher, would you be able to imagine what it is like to be your brother? To which Nagel, unbelievably I think, answered no. Of course we are perfectly capable of imagining what it is like to be another being biologically relevantly similar to ourselves.

Flanagan brought up Colin McGinn’s “mysterian” position about consciousness, pointing out that there is no equivalent in neuroscience or philosophy of mind of, say, Gödel’s incompleteness theorems or Heisenberg’s indeterminacy principle. Similarly, Owen was dismissive (rightly, I think) of David Chalmers’ dualism based on mere conceivability (of unconscious zombies who behave like conscious beings, in his case).

I asked, provocatively, if people around the table think that consciousness is an illusion. Jerry immediately answered yes, but the following discussion clarified things a bit. Turns out — and Dennett was of great help here — that when Jerry says that consciousness is an epiphenomenon of brain functioning he actually means something remarkably close to what I mean by consciousness being an emergent property of the brain. We settled on “phenomenon,” which is the result of evolution, and which has functions and effects. This, of course, as opposed to the sense of “epiphenomenon” in which something has no effect at all, and which in this context leads to an incoherent view of consciousness (but one that the “mysterians” really like).

At this point Rosenberg introduced yet another controversial topic: aboutness. How is it possible, from a naturalist’s perspective, to have “Darwinian systems” like our brains that are capable of semantic reference (i.e., meaning)? Terrence Deacon responded that the content of thought, its aboutness, is not a given brain state, but brain states are necessary to make semantic reference possible. Don Ross, in this context, invoked externalism: a brain state in itself doesn’t have a stable meaning or reference; that stable meaning is acquired only by taking into account the larger system that includes objects external to us. Dennett’s example was that externalism is obviously true for, say, money: bank accounts have numbers that represent money, but they are not money, and the system works only because the information internal to the bank’s computers refers to actual money present in the outside world.

Rosenberg seemed bothered by the use of intentional language in describing aboutness. But Dennett pointed out that intentionality is needed to explain a certain category of phenomena, just like — I suggested — teleological talk is necessary (or at the very least very convenient) to refer to adaptations and natural selection. And here I apparently hit the nail on the head: Rosenberg rejects the (naturalistic) concept of teleology, while Dennett and I accept it. That is why Rosenberg has a problem with intentional language and Dennett and I don’t.

And that, as it turns out, was a pretty good place to end the second day. Tomorrow: scientism and the relationship between science and philosophy.

Friday, October 26, 2012

From the naturalism workshop, part I


by Massimo Pigliucci

Well, here we are, in Stockbridge, MA, in the middle of the Berkshires, sitting at a table that features a good number of very sharp minds, and yours truly. This gathering, entitled “Moving Naturalism forward,” is the brainchild of cosmologist Sean Carroll; its point is to see what a bunch of biologists, physicists, philosophers and assorted others think about life, the universe and everything. And we have three days to do it. Participants included: Sean Carroll, Jerry Coyne, Richard Dawkins, Terrence Deacon, Simon DeDeo, Dan Dennett, Owen Flanagan, Rebecca Goldstein, Janna Levin, David Poeppel, Alex Rosenberg, Don Ross, Steven Weinberg, and myself.

Note to the gentle reader: although Sean has put together an agenda of broad topics to be discussed, this post and the ones following it will inevitably have the feel of a stream of consciousness. But one that will be interesting nonetheless, I hope!

During the roundtable introductions, Dawkins (as well as the rest of us) was asked what he would be willing to change his mind about; he said he couldn’t conceive of a sensible alternative to naturalism. Rosenberg, interestingly, brought up the (hypothetical) example of finding God’s signature in a DNA molecule (just as Craig Venter has actually done). Dawkins admitted that that would do it, though he immediately raised the more likely possibility that it would be a practical joke played by a superhuman — but not supernatural — intelligence. Coyne then commented that there is no sensible distinction between superhuman and supernatural, in a nod to Clarke’s third law.

There appeared to be some interesting differences within the group. For instance, Rosenberg clearly has no problem with a straightforward functionalist computational theory of the mind; DeDeo accepts it, but feels uncomfortable about it; and Deacon outright rejects it, without embracing any kind of mystical woo. Steven Weinberg asked whether — if a strong version of artificial intelligence is possible — it follows that we should be nice to computers.

The first actual session was about the nature of reality, with an introduction by Alex Rosenberg. His position is self-professedly scientistic, reductionist and nihilist, as presented in his The Atheist’s Guide to Reality. (Rationally Speaking published a critical review of that book, penned by Michael Ruse.) Alex thinks that complex phenomena — including of course consciousness, free will, etc. — are not just compatible with, but determined by and reducible to, the fundamental level of physics. (Except, of course, that there appears not to be any such thing as the fundamental level, at least not in terms of micro-things and micro-bangings.)

The first response came from Don Ross (co-author with James Ladyman of Every Thing Must Go), who correctly pointed out that Rosenberg’s position is essentially a statement of metaphysical faith, given that fundamental physics cannot, in fact, derive the phenomena and explanations of interest to the special sciences (defined here as everything that is not fundamental physics).

Weinberg made the interesting point that when we ask whether X is “real” (where X may be protons or free will) the answer may be yes, with the qualification of what one means by the term “real.” Protons, in other words (and contra both Rosenberg and Coyne), are as real as free will for Weinberg, but that qualifier means different things when applied to protons than it does when applied to free will.

In response to Weinberg’s example that, say, the American Constitution “exists” not just as a piece of paper made of particles, Rosenberg did admit that the major problem for his philosophical views is the ontological status of abstract concepts, especially mathematical ones as they relate to the physical description of the world (like Schrödinger’s equation, for instance).

Dennett asked Rosenberg if he is concerned about the political consequences of his push for reductionism and nihilism. Rosenberg, to his credit, agreed that he has been very worried about this. But of course from a philosophical and epistemological standpoint nothing hinges on the political consequences of a given view, if such a view is indeed correct.

Following somewhat of a side track, Dennett, Dawkins and Coyne had a discussion about the use of the word “design” when applied to both biological adaptations and human-made objects. Contra Dawkins and Coyne, Dennett defends the use of the term design in biology, because biologists ask the question “what is this [structure, behavior] for?” thus honestly reintroducing talk of function and purpose in science. A broader point made by Dennett, which I’m sure will become relevant to further discussions, is that the appearance on earth of beings capable of reflecting on things makes for a huge break from everything else in the biosphere, a break that ought to be taken seriously when we talk about purpose and related concepts.

Owen Flanagan, talking to Rosenberg, saw no reason to “go eliminativist” on the basic furniture of the universe, which includes a lot more than just fermions qua fermions (see also bosons): it also includes consciousness, thoughts, libraries, and so on. And he also pointed out that, again, Rosenberg’s ontology potentially gets into serious trouble if we decide that things like mathematical objects are real in an interesting sense of the term (because they are not made of fermions). Flanagan pointed out that what we were doing in that room had to do with the meaning of the words being exchanged, not just with the movement of air molecules and the propagation of sounds, and that it is next to impossible to talk about meaning without teleology (not, he was immediately careful to add, in the Cartesian sense of the term).

Again interestingly, even surprisingly, Rosenberg agreed that meaning poses a huge problem for a scientistic account of the world, for a variety of reasons brought up by a number of philosophers, including Dennett and John Searle (the latter arguing along very different lines from the former, of course). He was worried that this will give comfort to anti-naturalists, but I pointed out that not being able to give a scientific (as distinct from a scientistic) account of something — now or ever (after all, there are presumably epistemic limits to human reason and knowledge) — does not give much logical comfort to the super-naturalist, who would simply be arguing from ignorance.

Poeppel asked Rosenberg what he thinks explanations are, I assumed in the context of the obvious fact that fundamental physics does not actually explain the subject matters of the special sciences. Rosenberg’s answer was that explanations are a way to allay “epistemic hitches” that human beings have. At which point Dennett accused Rosenberg of being an essentialist philosopher (a la Parmenides), making a distinction between explanations in the everyday sense of the word and real explanations, such as those provided by science. But, argued Dennett, this is a very old fashioned way of doing philosophy, and it treats science in a more fundamentalist (not Dennett’s term) way than (most) scientists themselves do.

The afternoon session was devoted to evolution, complexity and emergence, with Terrence Deacon giving the introductory remarks. He began by raising the question of how we figure out what does and does not fit within naturalism. His naturalistic ontology is clearly broader than Rosenberg’s, including, for instance, teleology (in the same sense as espoused earlier in the day by Dennett). Deacon rejects what Dennett calls “greedy” reductionism, because there are complex systems, relations, and other things that don’t sit well with extreme forms of reductionism. Relatedly, he suggested (and I agreed) that we need to get rid of talk of both “top-down” and indeed “bottom-up” causality, because it constrains us to think about the world in ways that are not useful. (Of course, top-down causality is precisely the thing rejected by greedy reductionists, while the idea that causality only goes bottom-up is the thing rejected by anti-reductionists.)

Ross concurred, and proposed that another good thing to do would be to stop talking about “levels” of organization of reality and instead think about the scale of things (the concept of “scale” can be made non-arbitrary by referring to measurable degrees of complexity and/or to scales of energy). Not surprisingly, Weinberg insisted on the word levels, because he wants to say that every higher level does reduce to the lowest one.

Deacon is interested in emergence because of the issue of the origin of life, understood (metaphorically speaking) as a “phase transition” of sorts, which is in turn related to the question of how (biological) information “deals with” the constraints imposed by the second law of thermodynamics. In other words: the interesting question here is how a certain class of information-rich complex systems managed to locally avoid the second law-mandated constant increase in entropy. (Note: Deacon was most definitely not endorsing a form of vitalism according to which life defies — globally — the second law of thermodynamics. So this discussion is relevant because it sets out a different way of thinking about what it means for complex systems to be compatible with, but not entirely determined by, the fundamental laws of physics.)

All of the above, said Deacon, is tied up in what we mean by information, and he suggested that the well-known Shannon formulation of information — as interesting as it is — is not sufficient to deal with the teleologically-oriented type of information that characterizes living organisms in general, and of course consciously purposeful human beings in particular.

Dennett seemed to have quite a bit of sympathy with Deacon’s ideas, though he focused on pre- or proto-Darwinian processes as a way to generate those information-rich, cumulative, second law (locally) defying systems that we refer to as biological.

Rosenberg, as usual, didn’t seem to “be bothered by” the fact that we don’t have a good reductionist account of the origin of life. Methinks Rosenberg should be bothered a bit more by things for which reductionism doesn’t have an account and where emergentism seems to be doing better.

At this point I asked Weinberg (who had actually read my blog series on emergence on his way to the workshop!) why he thinks that the behavior of complex systems is “entailed” by the fundamental laws. He conceded two important points, the second of which is crucial: first, he readily agreed that of course nobody can (and likely ever will be able to) actually reduce, say, biology to physics (or even condensed matter physics to sub-nuclear physics); so, epistemic reduction isn’t the game at all. Second, he said that nobody really knows whether ultimate (i.e., ontological) reduction is possible in principle, which was precisely my point; his only argument in favor of greedy reductionism seems to be a (weak) historical induction: physicists have so far been successful in reducing, so there is no reason to think they won’t be able to keep doing it. Even without invoking Hume’s problem of induction, there is actually very good historical evidence that physicists have been able to do so only within very restricted domains of application. It was gratifying that someone as smart and knowledgeable in physics as Weinberg couldn’t back up his reductionism with anything more than this. However, Levin agreed with Weinberg, insisting on the a priori logical necessity of reduction, given the successes of fundamental physics.

Weinberg also agreed that there are features of, say, phase transitions that are independent of the microphysical constituents of a given system; as well as that accounts of phase transitions in terms of lower level principles are only approximate. But he really thinks that the whole research program of fundamental physics would go down the drain if we accepted a robust sense of emergence. Well, maybe it would (though I don’t think so), but do we have any better reason to accept greedy reductionism than fundamental physicists’ amor proprio? (Or, as Coyne commented, the fact that if we start talking about emergence then the religionists are going to seize on it for ideological purposes? My response to Jerry was: who cares?)

Don Ross argued that fundamental physics just is the discipline that studies patterns and constraints on what happens that apply everywhere at all times. The special sciences, on the contrary, study patterns and constraints that are more spatially or temporally limited. This can be done without any talk of bottom-up causality, which seems to make the extreme reductionist program simply unnecessary.

Flanagan brought up the existence of principles in the special sciences, like natural selection in biology, or operant conditioning in psychology. He then asked whether the people present imagine that it will ever be possible to derive those principles from fundamental physics. Carroll replied — acknowledging Weinberg’s earlier admission — that no, that will likely not be possible in practice, but in principle... But, again, that seems to me to amount to a metaphysical promissory note that will never be cashed.

Dennett: so, suppose we discover intelligent extraterrestrial life that is based on a very different chemistry from ours. Do we then expect them to have the same or entirely different economics? If lower levels entail (logically) higher phenomena, the answer should be in the negative. And yet, one can easily imagine that similar high-level constraints would act on the alien economy, thereby yielding a convergently similar economy “emerging” from a very different biochemical substrate. The same example, I pointed out, applies to the principle of natural selection. Goldstein and DeDeo engaged in an interesting side discussion on what exactly logical entailment, well, entails, as far as this debate is concerned.

Interesting point by Deacon: emergence is inherently diachronic, i.e., emergent properties are behaviors that did not appear up to a certain time in the history of the universe. This goes nicely with his contention that talk of causality (top-down or bottom-up) is unhelpful. In answer to a question from Rosenberg, Deacon also pointed out that this historical emergence may not have been determined by things that happened before, if the universe is not deterministic but contingent (as there are good reasons to believe).

Simon DeDeo took the floor talking about renormalization theory, which we have already encountered as a major way of thinking about the emergence of phase transitions. Renormalization is a general technique that can be used to move from any group/level to any other, not just in going from fundamental to solid state physics. This means that it could potentially be applied to connecting, say, biology with psychology, if all the processes involved took a finite number of steps. However, and interestingly, when systems are characterized by effectively infinite steps, mathematicians have shown that this type of renormalization analysis is subject to fundamental undecidability (because of the appearance of mathematical singularities). Seems to me that this is precisely the sort of thing we need in order to operationalize otherwise vague concepts like emergence.
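
For readers unfamiliar with the technique, here is a minimal sketch of a renormalization (“block-spin”) step — again my own toy illustration, with arbitrary parameters: a majority rule repeatedly maps a spin configuration onto a coarser one, and the macro-level description (here, the magnetization) flows toward a fixed point as one moves up in scale:

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_grain(spins, block=3):
    """One block-spin renormalization step: replace each block of
    `block` spins with the majority sign (ties broken toward +1)."""
    n = len(spins) // block
    blocks = spins[: n * block].reshape(n, block)
    return np.where(blocks.sum(axis=1) >= 0, 1, -1)

# A slightly biased 1D chain of +1/-1 spins; iterating the coarse-graining
# drives the magnetization toward the ordered fixed point.
spins = rng.choice([-1, 1], size=3**7, p=[0.4, 0.6])
for level in range(5):
    print(f"scale {level}: {len(spins):4d} spins, magnetization {spins.mean():+.3f}")
    spins = coarse_grain(spins)
```

Each step here terminates after finitely many iterations; DeDeo’s undecidability point concerns what happens when such scale-to-scale maps require effectively infinite iterations.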

Another implication of what DeDeo was saying is that one could, in practice, reduce thermodynamics (macro-model) to statistical mechanics (micro-model), say. But there is no way to establish (it’s “undecidable”) whether there isn’t another micro-model that is equally compatible with the macro-model, which means that there would be no principled way to establish which micro-model affords the correct reduction. This implies that even synchronic (as opposed to diachronic) reduction is problematic, and that Rosenberg’s refrain, “the physical facts fix all the facts,” is not correct. (As a side note, Dennett, Rosenberg and I agreed that DeDeo’s presentation is a way of formalizing the Duhem-Quine thesis in epistemology.)

It occurred to me at this point in the discussion that when reductionists like Weinberg say that higher level phenomena are reducible to lower level laws “plus boundary conditions” (e.g., you derive thermodynamics from statistical mechanics plus additional information about, say, the relationship between temperatures and pressures), they are really sneaking in emergence without acknowledging it. The so-called boundary conditions capture something about the process of emergence, so that it shouldn’t be surprising that the higher level phenomena are describable by a lower level “plus” scenario. After all, nobody here is thinking of emergence as a mystical spooky property.

And then the discussion veered into evolution, and particularly the relationship between the second law of thermodynamics and adaptation by natural selection. Rosenberg’s claim was that the former requires the latter, but both Dennett and I pointed out that that’s a misleading way of putting it: the second law is required for certain complex systems to evolve (in our type of universe, given its laws of physics). But the mere existence of the second law doesn’t necessitate adaptation. Lots of other boundary conditions (again!) are necessary for that to be the case. And it is this tension between fundamental physics requiring (in the strong sense of logical entailment) vs merely being necessary (but not sufficient) for and compatible with certain complex phenomena that captures the major division between the two camps into which the workshop participants were divided (understanding, of course, that there is some porosity between the camps).

Tomorrow: morality, free will, and consciousness!

Thursday, October 25, 2012

Essays on emergence, part IV


by Massimo Pigliucci

The previous three installments of this series have covered Robert Batterman’s idea that the concept of emergence can be made more precise by the fact that emergent phenomena such as phase transitions can be described by models that include mathematical singularities; Elena Castellani’s analysis of the relationship between effective field theories in physics and emergence; and Paul Humphreys’ contention that a robust anti-reductionism needs a well articulated concept of emergence, not just the weaker one of supervenience.

For this last essay we are going to take a look at Margaret Morrison’s “Emergence, Reduction, and Theoretical Principles: Rethinking Fundamentalism,” published in 2006 in Philosophy of Science. The “fundamentalism” in Morrison’s title has nothing to do with the nasty religious variety, but refers instead to the reductionist program of searching for the most “fundamental” theory in science. The author, however, wishes to recast the idea of fundamentalism in this sense to mean that foundational phenomena like localization and symmetry breaking will turn out to be crucial to understanding emergent phenomena and — more interestingly — to justifying the rejection of radical reductionism on the grounds that emergent behavior is immune to changes at the microphysical level (i.e., the “fundamental” details are irrelevant to the description and understanding of the behaviors instantiated by complex systems).

Morrison begins with an analysis of the type of “Grand Reductionism” proposed by physicists like Steven Weinberg, where a few (ideally, one) fundamental laws will provide — in principle — all the information one needs to understand the universe [1]. Morrison brings up the by now familiar objection raised in the ‘70s by physicist Philip Anderson, who argued that the “constructionist” project (i.e., the idea that one can begin with the basic laws and derive all complex phenomena) is hopelessly misguided. Morrison brings this particular discussion into focus with a detailed analysis of a specific example, which I will quote extensively:

“The nonrelativistic Schrodinger equation presents a nice picture of the kind of reduction Weinberg might classify as ‘fundamental.’ It describes in fairly accurate terms the everyday world and can be completely specified by a small number of known quantities: the charge and mass of the electron, the charges and masses of the atomic nuclei, and Planck’s constant. Although there are things not described by this equation, such as nuclear fission and planetary motion, what is missing is not significantly relevant to the large scale phenomena that we encounter daily. Moreover, the equation can be solved accurately for small numbers of particles (isolated atoms and small molecules) and agrees in minute detail with experiment. However, it can’t be solved accurately when the number of particles exceeds around ten. But this is not due to a lack of calculational power, rather it is a catastrophe of dimension ... the schemes for approximating are not first principles deductions but instead require experimental input and local details. Hence, we have a breakdown not only of the reductionist picture but also of what Anderson calls the ‘constructionist’ scenario.”
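The “catastrophe of dimension” is easy to make quantitative with a back-of-the-envelope script (my illustration, not Morrison’s): discretize each particle’s position on even a coarse grid, and the joint configuration space explodes exponentially with the number of particles.

```python
# Coarse estimate of configuration-space size for N particles in 3D,
# with g grid points per spatial axis: g ** (3 * N) points in total.
g = 10   # a very coarse grid
for n_particles in (1, 2, 5, 10, 20):
    dim = g ** (3 * n_particles)
    print(f"{n_particles:>2} particles -> {dim:.1e} grid points")
# Already at 10 particles we need ~1e30 points, which is why the
# approximation schemes lean on experimental input rather than first
# principles alone.
```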

Morrison then turns to something that has now become familiar in our discussions on emergence: localization and symmetry breaking as originators of emergent phenomena, where emergence specifically means “independence from lower level processes and entities.” The two examples she dwells on in some detail are crystallization: “the electrons and nuclei that make up a crystal lattice do not have rigidity, regularity, elasticity — all characteristic properties of the solid. These are only manifest when we get ‘enough’ particles together and cool them to a low ‘enough’ temperature”; and superconductivity: “The notion of emergence relates to superconductivity in the following way: In the N to infinity limit of large systems (the macroscopic scale) matter will undergo mathematically sharp, singular phase transitions to states where the microscopic symmetries and equations of motion are in a sense violated. ... [as Anderson put it] The whole becomes ‘not only more than but very different from the sum of its parts.’”

Morrison concludes the central part of her paper by clearly stating that we ought to take seriously the limits of reductionism “and refrain from excusing its failures with promissory notes about future knowledge and ideal theories.” Amen to that, sister.

The rest of the paper deals with some more specifically philosophical issues raised by the reductionism-emergence debate, one of which is the “wholes-parts” problem: how, metaphysically, we should think about parts and the wholes they make up. Morrison points out that emergence does not entail a change in the ontological status of parts (the parts don’t cease to exist when they form wholes). Rather, the point is that emergent properties disappear if a system falls below a certain threshold of complexity. An example is superfluidity, which manifests itself as a collective effect of large ensembles of particles at low energy. Superfluidity cannot be rigorously deduced from the laws of motion that describe the behavior of the individual particles, and the phenomenon itself simply disappears when the system is taken apart. As Morrison sums up: “These states or quantum ‘protectorates’ and their accompanying emergent behavior demonstrate that the underlying microscopic theory can easily have no measurable consequences at low energies.”

Another concept tackled by Morrison, and one we have already encountered, is the use of renormalization theory as a way to describe emergent phenomena. She makes it clear that she doesn’t think of renormalization as just a mathematical trick, and certainly not as a friend of reductionism: “renormalizability, which is usually thought of as a constraint on ‘fundamental’ quantum field theories can be reconceived as an emergent property of matter both at quantum critical points and in stable quantum phases. ... [Indeed] what started off as a mathematical technique has become reinterpreted, to some extent, as evidence for the multiplicity of levels required for understanding physical phenomena.”

We have arrived at the end of my little excursion into the physics and philosophy of emergence. What have we gained from this admittedly very partial tour of the available literature? I think a few points should be clear by now:

* The concept of emergence has nothing inherently mystical or mysterious about it; it is simply a way to think about certain undeniable properties of the world that we can observe empirically.

* There are conceptual (Humphreys) and mathematical (Batterman, Castellani, and Morrison) ways of operationalizing the idea of emergence.

* “Fundamental” physics itself provides several convincing examples of emergent phenomena, without having to go all the way up to biological systems, ecosystems, or mind-body problems (though all of those do, of course, exist and are both scientifically and philosophically interesting!).

* The reductionist program relies on a good deal of talk about what is “potentially” or “in principle” derivable, which amounts to little more than promissory notes grounded in individual scientists’ aesthetic preference for simple explanations.

* While the reductionist-antireductionist debate is far from being settled (and it may never be), it is naive to invoke straightforward physics as if that field had in fact resolved all issues, particularly the philosophical ones.

* There doesn’t seem to be any “in principle” reason why certain laws of nature (especially if one thinks of “laws” as empirically adequate generalizations) may not have specific temporal and/or spatial domains of application, coming into effect (existence?) at particular, non-arbitrary scales of size, complexity, or energy.

So, there’s much to think about, as usual. And now I’m off to the informal workshop on naturalism organized by Sean Carroll, featuring the likes of Jerry Coyne, Richard Dawkins, Dan Dennett, Rebecca Goldstein, Alex Rosenberg, Don Ross, Steven Weinberg, and several others, including yours truly. Should be fun, stay tuned for updates...
____

[1] This reminds me of the following hilarious exchange between Penny and Sheldon on the TV show The Big Bang Theory. The context is that Sheldon, the quintessential scientistic reductionist, volunteered to help Penny start a new business, a rather practical task for a theoretical physicist.

Penny: “And you know about that [business] stuff?”
Sheldon: “Penny, I’m a physicist. I have a working knowledge of the entire universe and everything it contains.”
Penny: “Who’s Radiohead?”
Sheldon: “I have a working knowledge of the important things in the universe.”

Monday, October 22, 2012

Essays on emergence, part III


by Massimo Pigliucci

So far in this series we have examined Robert Batterman’s idea that the concept of emergence can be made more precise by noting that emergent phenomena such as phase transitions can be described by models that include mathematical singularities, as well as Elena Castellani’s analysis of the relationship between effective field theories in physics and emergence. This time we are going to take a look at Paul Humphreys’ “Emergence, not supervenience,” published in Philosophy of Science back in 1997 (64:S337-S345).

The thrust of Humphreys’ paper is that the philosophical concept of supervenience, which is often brought up when there is talk of reductionism vs anti-reductionism, is not sufficient, and that emergence is a much better bet for the anti-reductionistically inclined.

The Stanford Encyclopedia of Philosophy defines supervenience thus: “A set of properties A supervenes upon another set B just in case no two things can differ with respect to A-properties without also differing with respect to their B-properties. In slogan form, ‘there cannot be an A-difference without a B-difference.’” A typical everyday example of supervenience is the relation between the amount of money in my pockets (A-property) and the specific makeup of bills and coins I carry (B-property). I am going to have the same amount of money (say, $20) regardless of the specific combination of coins and bills (say, no coins, one $10 bill and two $5 bills; or four 25c coins, nine $1 bills and one $10 bill), but the total cannot possibly change unless I change the specific makeup of the coins-plus-bills set. (The converse does not hold, as we have just seen: we can change the composition of coins and bills without changing the total.)
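For the computationally minded, a few lines of Python (my illustration, not from the SEP or Humphreys) make the multiple realizability explicit: many distinct bill-and-coin makeups (B-properties) realize the same $20 total (A-property), while no change in the total is possible without some change in the makeup.

```python
# Enumerate the many B-level realizations of one A-level fact ($20).
from itertools import product

denominations = [0.25, 1, 5, 10]   # quarters, $1, $5, $10 bills
realizers = [
    combo
    for combo in product(range(81), range(21), range(5), range(3))
    if sum(c * d for c, d in zip(combo, denominations)) == 20
]
print(len(realizers), "distinct ways to carry exactly $20")
print("e.g.,", realizers[0], "and", realizers[-1])
```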

Again according to the SEP, “Supervenience is a central notion in analytic philosophy. It has been invoked in almost every corner of the field. For example, it has been claimed that aesthetic, moral, and mental properties supervene upon physical properties. It has also been claimed that modal truths supervene on non-modal ones, and that general truths supervene on particular truths. Further, supervenience has been used to distinguish various kinds of internalism and externalism, and to test claims of reducibility and conceptual analysis.”

Let’s say, for instance, that you think that mental properties supervene on the physical properties of the brain. What that means is that the same mental outcome (say, thought X, or feeling Y) could — in principle — be multiply instantiated, i.e., obtained by way of different brain states. This undermines a simplistic reductionism that would want to proclaim a one-to-one correspondence between physical and mental, but it still means that any change in the latter requires a change in the former, which is perfectly compatible with a physicalist interpretation of mental phenomena.

Humphreys claims that while accounts deploying supervenience often do so with an anti-reductionist aim, supervenience itself is no big foe of reductionism, for two reasons: (i) “if A supervenes upon B, then ‘A is nothing but B’ talk [is licensed]”; and (ii) “if A supervenes upon B, then because A’s existence is necessitated by B’s existence, all that we need in terms of ontology is B.” I think that’s just about right, which explains why I’ve always felt that supervenience is an interesting philosophical concept, but one that has little to do with the debate about reductionism.

Well, what is supervenience good for, you might ask? Humphreys gives the example of aesthetic judgment: “If aesthetic merit supervenes upon just spatial arrangements of color on a surface, and you attribute beauty to the Mona Lisa, you cannot withhold that [same] aesthetic judgement from a perfect forgery of the Leonardo painting.” Supervenience, then, becomes a way to assess consistency in the attribution of concepts, but it has nothing interesting to say about ontological relationships, which is where the meat of the reductionism / anti-reductionism debate lies.

So for Humphreys one needs emergence, not just supervenience, to move away from reductionism. Fine, but we are still left with the need for a reasonable articulation of what emergent properties are. The author proposes a list of characteristics of emergence, though not all of them are necessary to identify a given phenomenon as emergent:

1) Novelty: “a previously uninstantiated property comes to have an instance.”
2) Qualitative difference: “emergent properties ... are qualitatively different from the properties from which they emerge.”
3) Absence at lower levels: “an emergent property is one that could not be possessed at a lower level — it is logically or nomologically [1] impossible for this to occur.”
4) Law difference: “different laws apply to emergent features than to the features from which they emerge.”
5) Interactivity: “emergent properties ... result from an essential interaction between their constituent properties.”
6) Holism: “emergent properties are holistic in the sense of being properties of the entire system rather than local properties of its constituents.”

Having thus set the stage, Humphreys goes on to consider some candidate examples of emergent properties. Interestingly, his first is none other than quantum entanglement, which provides the physical basis for higher level phenomena like superconductivity and superfluidity. According to Humphreys, quantum entanglement itself satisfies the 5th and 6th criteria (interactivity and holism), while the phenomenon considered as an explanation for, say, superconductivity also minimally satisfies criteria 1, 2 and 4 (novelty, qualitative difference and law difference).
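Humphreys gives no code, of course, but the holism criterion for entanglement can be checked numerically in a few lines (my sketch): the two-qubit Bell state is a definite, pure whole, while each part taken alone is maximally mixed. The whole really is, in Anderson’s phrase, “very different from the sum of its parts.”

```python
# Holism via entanglement: a pure global state whose parts carry no
# definite state of their own.
import numpy as np

# Bell state |Phi+> = (|00> + |11>) / sqrt(2), basis order 00, 01, 10, 11.
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(phi, phi.conj())            # global density matrix (pure)

# Partial trace over qubit 2 gives the reduced state of qubit 1.
rho4 = rho.reshape(2, 2, 2, 2)             # indices: (i1, i2, j1, j2)
rho_1 = np.einsum('ikjk->ij', rho4)

print(np.round(rho_1, 3))                  # identity / 2: maximally mixed
print("global purity:", np.trace(rho @ rho).real)      # 1.0 (pure whole)
print("part purity:  ", np.trace(rho_1 @ rho_1).real)  # 0.5 (mixed part)
```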

The article then moves to a discussion of the general point that emergent properties can only manifest themselves in macroscopic systems, because they “enjoy properties that are qualitatively different from those of atoms and molecules, despite the fact that they are composed of the same basic constituents ... [properties] such as phase transitions, dissipative processes, and even biological growth, that do not occur in the atomic world.”

This is important because Humphreys then derives from his analysis a conclusion that is very much like the one Batterman arrived at in his paper, though beginning from a completely different starting point: “emergent properties cannot be possessed by individuals at the lower level because they occur only with [practically] infinite collections of constituents. Some of the most important cases of macroscopic phenomena are phase transitions, such as the transition from liquid to solid.” Hence the theoretical relevance of the mathematical singularities that describe phase transitions, which we have encountered in the first essay of this series.

The point is worth rewording more clearly: mathematical singularities such as infinities pop up in descriptions of emergent phenomena because emergent phenomena occur when the number of components of a system is very large, effectively approaching infinity. Which in turn explains why only complex systems (of certain types) display emergent phenomena. Neat, no?

I know, I know, you are itching for less theory and more examples. Humphreys obliges, discussing the case of spontaneous ferromagnetism occurring below the Curie temperature. To wit:

“If one takes a ferromagnet whose Hamiltonian is spherically symmetric, then below the Curie temperature the system is magnetized in a particular direction, even though, because of the spherically symmetric Hamiltonian, its energy is independent of that specific direction. This divergence between the symmetry exhibited by the overall system and the symmetry exhibited by the laws governing its evolution is an example of spontaneous symmetry breaking. We have here a case where there is a distinctively different law covering the N → infinity system than covers its individual constituents. This is exactly the kind of difference of laws across levels of analysis that we noted earlier as one criterion of a genuinely emergent phenomenon.”
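A minimal numerical sketch (my own toy, not from Humphreys’ paper) using the exactly solvable Curie-Weiss, i.e. mean-field Ising, model shows the same moral: at any finite N the magnetization curve is perfectly smooth across the critical point, and the mathematically sharp transition appears only in the N → infinity limit.

```python
# Curie-Weiss model: energy E(m) = -J*N*m^2/2 for magnetization m.
# The exact finite-N average |m| is a smooth function of temperature;
# the sharp transition at beta_c = 1/J exists only at N = infinity.
import numpy as np
from scipy.special import gammaln

def mean_abs_magnetization(N, beta, J=1.0):
    k = np.arange(N + 1)                   # number of up spins
    m = (2.0 * k - N) / N                  # magnetization per spin
    # log-weight: binomial multiplicity times Boltzmann factor
    logw = (gammaln(N + 1) - gammaln(k + 1) - gammaln(N - k + 1)
            + beta * J * N * m**2 / 2.0)
    w = np.exp(logw - logw.max())          # rescale to avoid overflow
    return float(np.sum(np.abs(m) * w) / np.sum(w))

# The rise of <|m|> near beta_c = 1 gets steeper as N grows, but it is
# smooth at every finite N.
for N in (10, 100, 1000):
    row = ", ".join(f"{mean_abs_magnetization(N, b):.3f}"
                    for b in (0.8, 1.0, 1.2))
    print(f"N = {N:>4}: <|m|> at beta = 0.8, 1.0, 1.2 -> {row}")
```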

To recap: supervenience, despite the crucial role it plays in many philosophical discussions, is not in fact a way to describe non-reducible phenomena; for that task one really needs the more robust concept of emergence, with convincing examples to accompany it. This concept can be articulated in terms of Humphreys’ six criteria, and turns out to approximate Batterman’s approach based on the mathematics of phase transitions.

______

[1] For something to be nomologically impossible means that if instantiated it would violate a law of nature.

Thursday, October 18, 2012

Arguing pluralism instead of Church-State


by Michael De Dora

When Vice President Joe Biden and Rep. Paul Ryan were asked about how their religious beliefs influence their views on abortion during last week’s debate, Americans were given more than just the chance to hear two vice presidential candidates discuss their faith and how it relates to a controversial political issue. They were given the chance to observe the candidates address a much broader subject: the relationship between religion and politics.

As could be expected, the two candidates outlined two very different approaches to this relationship. In order to discuss the broader points, let’s first take a look at what Biden and Ryan said.

Ryan’s answer:

I don’t see how a person can separate their public life from their private life or from their faith. Our faith informs us in everything we do. My faith informs me about how to take care of the vulnerable, of how to make sure that people have a chance in life.

Now, you want to ask basically why I’m pro-life? It’s not simply because of my Catholic faith. That’s a factor, of course. But it’s also because of reason and science.

You know, I think about 10 1/2 years ago, my wife Janna and I went to Mercy Hospital in Janesville where I was born, for our seven-week ultrasound for our firstborn child, and we saw that heartbeat. A little baby was in the shape of a bean. And to this day, we have nicknamed our firstborn child Liza, “Bean.” Now I believe that life begins at conception.

That’s why — those are the reasons why I’m pro-life. Now I understand this is a difficult issue, and I respect people who don’t agree with me on this, but the policy of a Romney administration will be to oppose abortions with the exceptions for rape, incest and life of the mother.

Biden’s answer:

... with regard to abortion, I accept my church’s position on abortion as a — what we call a de fide doctrine. Life begins at conception in the church’s judgment. I accept it in my personal life.

But I refuse to impose it on equally devout Christians and Muslims and Jews, and I just refuse to impose that on others, unlike my friend here, the — the congressman. I — I do not believe that we have a right to tell other people that — women they can’t control their body. It’s a decision between them and their doctor. In my view and the Supreme Court, I’m not going to interfere with that.

Ryan's response:

All I’m saying is, if you believe that life begins at conception, that, therefore, doesn’t change the definition of life. That’s a principle. The policy of a Romney administration is to oppose abortion with exceptions for rape, incest and life of the mother.

(You can find a full transcript here).

According to Ryan, there is no way (or no reason to try) to separate one’s beliefs regarding the veracity of religious claims from one’s approach to specific policies. For example, if you believe an embryo is a person made in the image of God, and deserving of certain rights, that will undoubtedly influence your approach to abortion. But, according to Biden, there is a way to separate these two. In his view, an elected official must realize that not everyone he or she represents practices his or her religion, and therefore should not have to live according to its dogmas. I think they each make an important point. Allow me to explain.

Ryan’s point cannot be easily dismissed. When Ryan says that he does not see “how a person can separate their public life from their private life or from their faith,” he is stating what counts as a fact for many people. Ryan — like many devoutly religious people — honestly and ardently believes that embryos are people, and that abortion is murder. Though I consider that position incoherent and unsupportable, it is difficult, if not impossible, for a person to believe that, yet sit idly by while thousands of abortions are happening every year. That is simply how belief works: once you accept some proposition as true, you are bound to act on it.

As for Biden, I have a hard time believing that he truly agrees with the Catholic Church on abortion, at least as fervently as Ryan does. But that’s not necessarily what matters here. Biden has a compelling point with regard to making laws in a pluralistic society. While he readily admits that he has religious beliefs, he also realizes that public policy influences the lives of millions of different Americans. As such, he thinks public policy should not be based on his (or anyone’s) religious beliefs, which require a personal leap of faith, but on reasons that are accessible to all Americans.

You’ve probably noticed that Biden’s position does not employ the separation of church and state argument; he uses the pluralistic society argument. I suspect some secularists found Biden’s answer incomplete, but I think the pluralistic society argument could actually be more effective at convincing religious believers to adopt secular policies than a purely church-state argument (though I would note that pluralism is indirectly an argument in favor of church-state separation).

To be clear, I interpret the Establishment Clause of the First Amendment of the U.S. Constitution as mandating government neutrality on religion. Government should not favor religion over non-religion, non-religion over religion, or one religion over another. But there is nothing in the Constitution that states that religious lawmakers are required to leave their consciences at home when they arrive at their respective statehouses. In my view, secularists should realize this, and consider directly rebutting arguments for religiously based laws when they come to the surface, instead of asking politicians to dismiss them as personal or as outright absurd (even if they are). These beliefs are clearly influencing our political system, and should be exposed to critical reasoning.

While we cannot control the reasons people give for their beliefs, we can work to keep religiously based reasons from serving as the justification for public policy, steering political discourse towards secular reasoning. How? I think Biden’s pluralistic society argument is instructive here.

As it happens, this argument has been detailed before by a familiar figure: President Barack Obama. As Obama writes in The Audacity of Hope, “What our deliberative, pluralistic democracy does demand is that the religiously motivated translate their concerns into universal, rather than religion-specific, values.” [1] An example he uses is (oddly enough!) abortion:

If I am opposed to abortion for religious reasons and seek to pass a law banning the practice, I cannot simply point to the teachings of my church or invoke God’s will and expect that argument to carry the day. If I want others to listen to me, then I have to explain why abortion violates some principle that is accessible to people of all faiths, including those with no faith at all.

People cannot hear the divine voice others claim to hear, nor can they rely on others’ assertions that they have heard God’s voice. Furthermore, most people do not believe in the same holy book. In fact, even adherents to the same religious traditions often disagree over central tenets. And, of course, many people (reasonably, I might add) deny that the supernatural realm exists to begin with.

What does the pluralistic society argument mean for religious lawmakers? It doesn’t mean that they cannot hold or even speak about their religious beliefs in political debates. The fact that we live in a highly religious open democracy means that such reasons are bound to appear often. A person’s religious views naturally influence his or her views in politics, and we cannot bar these from entering the discourse. But politicians should also hold to certain practices regarding how to best make public policy. Since laws influence millions of different people who have different values, they cannot be defended by mere reference to a holy book or faith. Public policy must be based on natural world reasons that everyone can grasp and understand. Believe in religion if you like, but also believe that “I can’t make other people live according to my religion; I need to base laws on values that apply to everyone.”

At the least, this approach pushes religiously devout lawmakers to consider how they can defend their views on clearer grounds to all of their constituents. At its best, it will help foster a more reasonable public policy.

For Rep. Ryan, this means that it is not enough to simply tell the story of your wife’s ultrasound and of the nicknaming of a seven-week-old embryo. If you think beans deserve moral and legal consideration equal to or even greater than that accorded to women, you need a better argument than “I looked at an ultrasound and nicknamed what I saw; you should too.”

If you want to restrict abortion, you need to answer questions such as: what does it really mean to say that life begins at conception? Why do you think embryos are persons worthy of moral consideration and legal protection? Why shouldn’t a woman have the right to largely control her body and make reproductive decisions with her doctor? If you can’t answer these questions without reference to some religious principle, you should think deeply about whether you are fit for public office.

______


Note: a shorter version of this article first appeared on The Moral Perspective.

[1] Editorial Note: this is essentially John Rawls’ idea of public reason, as articulated in his Political Liberalism.