About Rationally Speaking
Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.
Monday, October 29, 2012
From the naturalism workshop, part III
by Massimo Pigliucci
Saturday, October 27, 2012
From the naturalism workshop, part II
by Massimo Pigliucci
Friday, October 26, 2012
From the naturalism workshop, part I
by Massimo Pigliucci
Well, here we are, in Stockbridge, MA, in the middle of the Berkshires, sitting at a table that features a good number of very sharp minds, and yours truly. This gathering, entitled “Moving Naturalism Forward,” is the brainchild of cosmologist Sean Carroll; its point is to see what a bunch of biologists, physicists, philosophers and assorted others think about life, the universe and everything. And we have three days to do it. Participants included: Sean Carroll, Jerry Coyne, Richard Dawkins, Terrence Deacon, Simon DeDeo, Dan Dennett, Owen Flanagan, Rebecca Goldstein, Janna Levin, David Poeppel, Alex Rosenberg, Don Ross, Steven Weinberg, and myself.
Note to the gentle reader: although Sean has put together an agenda of broad topics to be discussed, this post and the ones following it will inevitably have the feel of a stream of consciousness. But one that will be interesting nonetheless, I hope!
During the roundtable introductions, Dawkins (like the rest of us) was asked what he would be willing to change his mind about; he said he couldn’t conceive of a sensible alternative to naturalism. Rosenberg, interestingly, brought up the (hypothetical) example of finding God’s signature in a DNA molecule (just as Craig Venter has actually done). Dawkins admitted that that would do it, though he immediately raised the more likely possibility that it would be a practical joke played by a superhuman — but not supernatural — intelligence. Coyne then commented that there is no sensible distinction between superhuman and supernatural, in a nod to Clarke’s third law.
There appeared to be some interesting differences within the group. For instance, Rosenberg clearly has no problem with a straightforward functionalist computational theory of the mind; DeDeo accepts it, but feels uncomfortable about it; and Deacon outright rejects it, without embracing any kind of mystical woo. Steven Weinberg asked the question of whether — if a strong version of artificial intelligence is possible — it follows that we should be nice to computers.
The first actual session was about the nature of reality, with an introduction by Alex Rosenberg. His position is self-professedly scientistic, reductionist and nihilist, as presented in his The Atheist’s Guide to Reality. (Rationally Speaking published a critical review of that book, penned by Michael Ruse.) Alex thinks that complex phenomena — including of course consciousness, free will, etc. — are not just compatible with, but determined by and reducible to, the fundamental level of physics. (Except, of course, that there appears not to be any such thing as the fundamental level, at least not in terms of micro-things and micro-bangings.)
The first response came from Don Ross (co-author with James Ladyman of Every Thing Must Go), who correctly pointed out that Rosenberg’s position is essentially a statement of metaphysical faith, given that fundamental physics cannot, in fact, derive the phenomena and explanations of interest to the special sciences (defined here as everything that is not fundamental physics).
Weinberg made the interesting point that when we ask whether X is “real” (where X may be protons or free will) the answer may be yes, qualified by what one means by “real.” Protons, in other words (and contra both Rosenberg and Coyne), are as real as free will for Weinberg, but that qualifier means something different when applied to protons than when applied to free will.
In response to Weinberg’s example that, say, the American Constitution “exists” not just as a piece of paper made of particles, Rosenberg did admit that the major problem for his philosophical views is the ontological status of abstract concepts, especially mathematical ones as they relate to the physical description of the world (like Schrödinger’s equation, for instance).
Dennett asked Rosenberg if he is concerned about the political consequences of his push for reductionism and nihilism. Rosenberg, to his credit, agreed that he has been very worried about this. But of course from a philosophical and epistemological standpoint nothing hinges on the political consequences of a given view, if such a view is indeed correct.
Following somewhat of a side track, Dennett, Dawkins and Coyne had a discussion about the use of the word “design” when applied to both biological adaptations and human-made objects. Contra Dawkins and Coyne, Dennett defends the use of the term design in biology, because biologists ask the question “what is this [structure, behavior] for?” thus honestly reintroducing talk of function and purpose in science. A broader point made by Dennett, which I’m sure will become relevant to further discussions, is that the appearance on earth of beings capable of reflecting on things makes for a huge break from everything else in the biosphere, a break that ought to be taken seriously when we talk about purpose and related concepts.
Owen Flanagan, talking to Rosenberg, saw no reason to “go eliminativist” on the basic furniture of the universe, which includes a lot more than just fermions qua fermions (see also bosons): it also includes consciousness, thoughts, libraries, and so on. And he also pointed out that, again, Rosenberg’s ontology potentially gets into serious trouble if we decide that things like mathematical objects are real in an interesting sense of the term (because they are not made of fermions). Flanagan pointed out that what we were doing in that room had to do with the meaning of the words being exchanged, not just with the movement of air molecules and the propagation of sounds, and that it is next to impossible to talk about meaning without teleology (not, he was immediately careful to add, in the Cartesian sense of the term).
Again interestingly, even surprisingly, Rosenberg agreed that meaning poses a huge problem for a scientistic account of the world, for a variety of reasons brought up by a number of philosophers, including Dennett and John Searle (the latter arguing along very different lines from the former, of course). He was worried that this will give comfort to anti-naturalists, but I pointed out that not being able to give a scientific (as distinct from a scientistic) account of something, now or ever (after all, there are presumably epistemic limits to human reason and knowledge), does not give much logical comfort to the supernaturalist, who would simply be arguing from ignorance.
Poeppel asked Rosenberg what he thinks explanations are (I assumed in the context of the obvious fact that fundamental physics does not actually explain the subject matters of the special sciences). Rosenberg’s answer was that explanations are a way to allay “epistemic hitches” that human beings have. At which point Dennett accused Rosenberg of being an essentialist philosopher (à la Parmenides), making a distinction between explanations in the everyday sense of the word and real explanations, such as those provided by science. But, argued Dennett, this is a very old-fashioned way of doing philosophy, and it treats science in a more fundamentalist (not Dennett’s term) way than (most) scientists themselves do.
The afternoon session was devoted to evolution, complexity and emergence, with Terrence Deacon giving the introductory remarks. He began by raising the question of how we figure out what does and does not fit within naturalism. His naturalistic ontology is clearly broader than Rosenberg’s, including, for instance, teleology (in the same sense as espoused earlier in the day by Dennett). Deacon rejects what Dennett calls “greedy” reductionism, because there are complex systems, relations, and other things that don’t sit well with extreme forms of reductionism. Relatedly, he suggested (and I agreed) that we need to get rid of talk of both “top-down” and “bottom-up” causality, because it constrains us to think about the world in ways that are not useful. (Of course, top-down causality is precisely the thing rejected by greedy reductionists, while the idea that causality only goes bottom-up is the thing rejected by anti-reductionists.)
Ross concurred, and proposed that another good move would be to stop talking about “levels” of organization of reality and instead think about the scale of things (the concept of “scale” can be made non-arbitrary by referring to measurable degrees of complexity and/or to scales of energy). Not surprisingly, Weinberg insisted on the word “levels,” because he wants to say that every higher level does reduce to the lowest one.
Deacon is interested in emergence because of the issue of the origin of life, understood (metaphorically speaking) as a “phase transition” of sorts, which is in turn related to the question of how (biological) information “deals with” the constraints imposed by the second law of thermodynamics. In other words, the interesting question here is how a certain class of information-rich complex systems managed to locally avoid the constant increase in entropy mandated by the second law. (Note: Deacon was most definitely not endorsing a form of vitalism according to which life defies — globally — the second law of thermodynamics. So this discussion is relevant because it sets out a different way of thinking about what it means for complex systems to be compatible with, but not entirely determined by, the fundamental laws of physics.)
All of the above, said Deacon, is tied up with what we mean by information, and he suggested that the well-known Shannon formulation of information — as interesting as it is — is not sufficient to deal with the teleologically oriented type of information that characterizes living organisms in general, and of course consciously purposeful human beings in particular.
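For reference (my gloss, not part of the workshop discussion): the Shannon measure quantifies information purely in terms of the statistics of a source's symbols, with no reference to what the symbols are about or for,

\[ H(X) = -\sum_i p_i \log_2 p_i , \]

where p_i is the probability of the i-th symbol and H is measured in bits. Nothing in this quantity distinguishes functional or meaningful information from noise with the same statistics, which is precisely the gap Deacon is pointing to.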
Dennett seemed to have quite a bit of sympathy with Deacon’s ideas, though he focused on pre- or proto-Darwinian processes as a way to generate those information-rich, cumulative, second principle (locally) defying systems that we refer to as biological.
Rosenberg, as usual, didn’t seem to “be bothered by” the fact that we don’t have a good reductionist account of the origin of life. Methinks Rosenberg should be bothered a bit more by things for which reductionism doesn’t have an account and where emergentism seems to be doing better.
At this point I asked Weinberg (who has actually read my blog series on emergence on his way to the workshop!) why he thinks that the behavior of complex systems is “entailed” by the fundamental laws. He conceded two important points, the second one of which is crucial: first, he readily agreed that of course nobody can (and likely will ever be able to) actually reduce, say, biology to physics (or even condensed matter physics to sub-nuclear physics); so, epistemic reduction isn’t the game at all. Second, he said that nobody really knows if ultimate (i.e., ontological) reduction is possible in principle, which was precisely my point; his only argument in favor of greedy reductionism seems to be a (weak) historical induction: physicists have so far been successful in reducing, so there is no reason to think they won’t be able to keep doing it. Even without invoking Hume’s problem of induction, there is actually very good historical evidence that physicists have been able to do so only within very restricted domains of application. It was gratifying that someone as smart and knowledgeable in physics as Weinberg couldn’t back up his reductionism with anything more than this. However, Levin agreed with Weinberg, insisting on the a priori logical necessity of reduction, given the successes of fundamental physics.
Weinberg also agreed that there are features of, say, phase transitions that are independent of the microphysical constituents of a given system; as well as that accounts of phase transitions in terms of lower level principles are only approximate. But he really thinks that the whole research program of fundamental physics would go down the drain if we accepted a robust sense of emergence. Well, maybe it would (though I don’t think so), but do we have any better reason to accept greedy reductionism than fundamental physicists’ amor proprio? (Or, as Coyne commented, the fact that if we start talking about emergence then the religionists are going to jump the gun for ideological purposes? My response to Jerry was: who cares?)
Don Ross argued that fundamental physics just is the discipline that studies patterns and constraints on what happens that apply everywhere at all times. The special sciences, on the contrary, study patterns and constraints that are more spatially or temporally limited. This can be done without any talk of bottom-up causality, which seems to make the extreme reductionist program simply unnecessary.
Flanagan brought up the existence of principles in the special sciences, like natural selection in biology, or operant conditioning in psychology. He then asked whether the people present imagine that it will ever be possible to derive those principles from fundamental physics. Carroll replied — acknowledging Weinberg’s earlier admission — that no, that will likely not be possible in practice, but in principle... But, again, that seems to me to amount to a metaphysical promissory note that will never be cashed.
Dennett: so, suppose we discover intelligent extraterrestrial life that is based on a very different chemistry from ours. Do we then expect them to have the same or an entirely different economics? If lower levels logically entailed the higher-level phenomena, we should expect their economics to be entirely different. And yet one can easily imagine that similar high-level constraints would act on the alien economy, thereby yielding a convergently similar economy “emerging” from a very different biochemical substrate. The same example, I pointed out, applies to the principle of natural selection. Goldstein and DeDeo engaged in an interesting side discussion on what exactly logical entailment, well, entails, as far as this debate is concerned.
Interesting point by Deacon: emergence is inherently diachronic, i.e., emergent properties are behaviors that did not appear up to a certain time in the history of the universe. This goes nicely with his contention that talk of causality (top-down or bottom-up) is unhelpful. In answer to a question from Rosenberg, Deacon also pointed out that this historical emergence may not have been determined by things that happened before, if the universe is not deterministic but contingent (as there are good reasons to believe).
Simon DeDeo took the floor talking about renormalization theory, which we have already encountered as a major way of thinking about the emergence of phase transitions. Renormalization is a general technique for moving from one level of description to another, not just for going from fundamental to solid state physics. This means that it could potentially be applied to connecting, say, biology with psychology, provided all the processes involved require only finitely many steps. However, and interestingly, when systems are characterized by effectively infinite steps, mathematicians have shown that this type of renormalization group analysis is subject to fundamental undecidability (because of the appearance of mathematical singularities). Seems to me that this is precisely the sort of thing we need in order to operationalize otherwise vague concepts like emergence.
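To make the "moving between levels" idea a bit more concrete, here is a toy coarse-graining sketch of my own (not DeDeo's formalism, just an illustration of the block-spin idea behind renormalization): a majority-rule step that maps a fine-grained lattice of +1/-1 spins onto a coarser one. A real renormalization group analysis would track how effective couplings change under repeated applications of such a map; the point here is only to show a single micro-to-macro step.

```python
import numpy as np

def block_spin(lattice, b=3):
    """Majority-rule coarse-graining: map each b x b block of +/-1 spins
    to a single coarse spin given by the sign of the block sum (odd b avoids ties)."""
    n = (lattice.shape[0] // b) * b            # trim to a multiple of the block size
    blocks = lattice[:n, :n].reshape(n // b, b, n // b, b)
    return np.where(blocks.sum(axis=(1, 3)) >= 0, 1, -1)

rng = np.random.default_rng(0)
fine = rng.choice([-1, 1], size=(81, 81))      # a disordered micro-configuration
coarse = block_spin(fine)                      # one coarse-graining ("renormalization") step
print(fine.shape, "->", coarse.shape)          # (81, 81) -> (27, 27)
```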
Another implication of what DeDeo was saying is that one could, in practice, reduce thermodynamics (macro-model) to statistical mechanics (micro-model), say. But there is no way to establish (it’s “undecidable”) whether there isn’t another micro-model that is equally compatible with the macro-model, which means that there would be no principled way to establish which micro-model affords the correct reduction. This implies that even synchronic (as opposed to diachronic) reduction is problematic, and that Rosenberg’s refrain, “the physical facts fix all the facts” is not correct. (As a side note, Dennett, Rosenberg and I agreed that DeDeo’s presentation is a way of formalizing the Duhem-Quine thesis in epistemology.)
It occurred to me at this point in the discussion that when reductionists like Weinberg say that higher level phenomena are reducible to lower level laws “plus boundary conditions” (e.g., you derive thermodynamics from statistical mechanics plus additional information about, say, the relationship between temperatures and pressures), they are really sneaking in emergence without acknowledging it. The so-called boundary conditions capture something about the process of emergence, so that it shouldn’t be surprising that the higher level phenomena are describable by a lower level “plus” scenario. After all, nobody here is thinking of emergence as a mystical spooky property.
And then the discussion veered into evolution, and particularly the relationship between the second law of thermodynamics and adaptation by natural selection. Rosenberg’s claim was that the former requires the latter, but both Dennett and I pointed out that that’s a misleading way of putting it: the second law is required for certain complex systems to evolve (in our type of universe, given its laws of physics), but its mere existence doesn’t necessitate adaptation. Lots of other boundary conditions (again!) are necessary for that to be the case. And it is this tension, between fundamental physics requiring (in the strong sense of logical entailment) certain complex phenomena and merely being necessary (but not sufficient) for and compatible with them, that captures the major division between the two camps into which the workshop participants fall (understanding, of course, that there is some porosity between the camps).
Tomorrow: morality, free will, and consciousness!
Thursday, October 25, 2012
Essays on emergence, part IV
[Image credit: ewinsidetv.files.wordpress.com]
The previous three installments of this series have covered Robert Batterman’s idea that the concept of emergence can be made more precise by the fact that emergent phenomena such as phase transitions can be described by models that include mathematical singularities; Elena Castellani’s analysis of the relationship between effective field theories in physics and emergence; and Paul Humphreys’ contention that a robust anti-reductionism needs a well articulated concept of emergence, not just the weaker one of supervenience.
For this last essay we are going to take a look at Margaret Morrison’s “Emergence, Reduction, and Theoretical Principles: Rethinking Fundamentalism,” published in 2006 in Philosophy of Science. The “fundamentalism” in Morrison’s title has nothing to do with the nasty religious variety, but refers instead to the reductionist program of searching for the most “fundamental” theory in science. The author, however, wishes to recast fundamentalism in this sense: foundational phenomena like localization and symmetry breaking will turn out to be crucial to understanding emergent phenomena and, more interestingly, to justifying the rejection of radical reductionism, on the grounds that emergent behavior is immune to changes at the microphysical level (i.e., the “fundamental” details are irrelevant to the description and understanding of the behaviors instantiated by complex systems).
Morrison begins with an analysis of the type of “Grand Reductionism” proposed by physicists like Steven Weinberg, where a few (ideally, one) fundamental laws will provide — in principle — all the information one needs to understand the universe [1]. Morrison brings up the by now familiar objection raised in the ‘70s by physicist Philip Anderson, who argued that the “constructionist” project (i.e., the idea that one can begin with the basic laws and derive all complex phenomena) is hopelessly misguided. Morrison brings this particular discussion into focus with a detailed analysis of a specific example, which I will quote extensively:
“The nonrelativistic Schrödinger equation presents a nice picture of the kind of reduction Weinberg might classify as ‘fundamental.’ It describes in fairly accurate terms the everyday world and can be completely specified by a small number of known quantities: the charge and mass of the electron, the charges and masses of the atomic nuclei, and Planck’s constant. Although there are things not described by this equation, such as nuclear fission and planetary motion, what is missing is not significantly relevant to the large scale phenomena that we encounter daily. Moreover, the equation can be solved accurately for small numbers of particles (isolated atoms and small molecules) and agrees in minute detail with experiment. However, it can’t be solved accurately when the number of particles exceeds around ten. But this is not due to a lack of calculational power, rather it is a catastrophe of dimension ... the schemes for approximating are not first principles deductions but instead require experimental input and local details. Hence, we have a breakdown not only of the reductionist picture but also of what Anderson calls the ‘constructionist’ scenario.”
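For concreteness (my addition, not Morrison’s text): the equation in question is the nonrelativistic many-body Schrödinger equation, which in Gaussian units, with electrons indexed by i, j and nuclei by I, J, reads

\[ i\hbar\,\frac{\partial \Psi}{\partial t} = \left[ -\sum_i \frac{\hbar^2}{2m_e}\nabla_i^2 - \sum_I \frac{\hbar^2}{2M_I}\nabla_I^2 + \sum_{i<j}\frac{e^2}{|\mathbf{r}_i-\mathbf{r}_j|} - \sum_{i,I}\frac{Z_I e^2}{|\mathbf{r}_i-\mathbf{R}_I|} + \sum_{I<J}\frac{Z_I Z_J e^2}{|\mathbf{R}_I-\mathbf{R}_J|} \right]\Psi . \]

The only inputs are exactly the quantities Morrison lists: the electron’s charge and mass, the nuclear charges and masses, and Planck’s constant.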
Morrison then turns to something that has now become familiar in our discussions on emergence: localization and symmetry breaking as originators of emergent phenomena, where emergence specifically means “independence from lower level processes and entities.” The two examples she dwells on in some detail are crystallization: “the electrons and nuclei that make up a crystal lattice do not have rigidity, regularity, elasticity — all characteristic properties of the solid. These are only manifest when we get ‘enough’ particles together and cool them to a low ‘enough’ temperature”; and superconductivity: “The notion of emergence relates to superconductivity in the following way: In the N to infinity limit of large systems (the macroscopic scale) matter will undergo mathematically sharp, singular phase transitions to states where the microscopic symmetries and equations of motion are in a sense violated. ... [as Anderson put it] The whole becomes ‘not only more than but very different from the sum of its parts.’”
Morrison concludes the central part of her paper by clearly stating that we ought to take seriously the limits of reductionism “and refrain from excusing its failures with promissory notes about future knowledge and ideal theories.” Amen to that, sister.
The rest of the paper deals with some more specifically philosophical issues raised by the reductionism-emergence debate, one of which is the “wholes-parts” problem, referring to how — metaphysically — we should think about parts and the wholes they make up. But Morrison points out that emergence does not entail a change in the ontological status of parts (the parts don’t cease to exist when they form wholes). Rather, the problem is that emergent properties disappear if a system falls below a certain threshold of complexity. An example is superfluidity, which manifests itself as a collective effect of large ensembles of particles at low energy. Superfluidity cannot be rigorously deduced from the laws of motion that describe the behavior of the individual particles, and the phenomenon itself simply disappears when the system is taken apart. As Morrison sums up: “These states or quantum ‘protectorates’ and their accompanying emergent behavior demonstrate that the underlying microscopic theory can easily have no measurable consequences at low energies.”
Another concept tackled by Morrison and that we have already encountered is the use of renormalization theory as a way to describe emergent phenomena. She makes it clear that she doesn’t think of renormalization as just a mathematical trick, and certainly not as a friend of reductionism: “renormalizability, which is usually thought of as a constraint on ‘fundamental’ quantum field theories can be reconceived as an emergent property of matter both at quantum critical points and in stable quantum phases. ... [Indeed] what started off as a mathematical technique has become reinterpreted, to some extent, as evidence for the multiplicity of levels required for understanding physical phenomena.”
We have arrived at the end of my little excursion into the physics and philosophy of emergence. What have we gained from this admittedly very partial tour of the available literature? I think a few points should be clear by now:
* The concept of emergence has nothing inherently mystical or mysterious about it; it is simply a way to think about certain undeniable properties of the world that we can observe empirically.
* There are conceptual (Humphreys) and mathematical (Batterman, Castellani and Morrison) ways of operationalizing the idea of emergence.
* “Fundamental” physics itself provides several convincing examples of emergent phenomena, without having to go all the way up to biological systems, ecosystems, or mind-body problems (though all of those do, of course, exist and are both scientifically and philosophically interesting!).
* The reductionist program seems to be based on much talk that includes words like “potential,” “in principle,” and so on, that amount to little more than promissory notes based on individual scientists’ aesthetic preferences for simple explanations.
* While the reductionist-antireductionist debate is far from being settled (and it may never be), it is naive to invoke straightforward physics as if that field had in fact resolved all issues, particularly the philosophical ones.
* There doesn’t seem to be any “in principle” reason why certain laws of nature (especially if one thinks of “laws” as empirically adequate generalizations) may not have specific temporal and/or spatial domains of application, coming into effect (existence?) at particular, non-arbitrary scales of size, complexity, or energy.
So, there’s much to think about, as usual. And now I’m off to the informal workshop on naturalism organized by Sean Carroll, featuring the likes of Jerry Coyne, Richard Dawkins, Dan Dennett, Rebecca Goldstein, Alex Rosenberg, Don Ross, Steven Weinberg, and several others, including yours truly. Should be fun, stay tuned for updates...
____
[1] This reminds me of the following hilarious exchange between Penny and Sheldon on The Big Bang Theory show. The context is that Sheldon — the quintessential scientistic reductionist — volunteered to help Penny start a new business, a rather practical thing for a theoretical physicist.
Penny: “And you know about that [business] stuff?”
Sheldon: “Penny, I’m a physicist. I have a working knowledge of the entire universe and everything it contains.”
Penny: “Who’s Radiohead?”
Sheldon: “I have a working knowledge of the important things in the universe.”
Monday, October 22, 2012
Essays on emergence, part III
[Image credit: www.awomansplace.org]
So far in this series we have examined Robert Batterman’s idea that the concept of emergence can be made more precise by the fact that emergent phenomena such as phase transitions can be described by models that include mathematical singularities, as well as Elena Castellani’s analysis of the relationship between effective field theories in physics and emergence. This time we are going to take a look at Paul Humphreys’ “Emergence, not supervenience,” published in Philosophy of Science back in 1997 (64:S337-S345).
The thrust of Humphreys’ paper is that the philosophical concept of supervenience, which is often brought up when there is talk of reductionism vs anti-reductionism, is not sufficient, and that emergence is a much better bet for the anti-reductionistically inclined.
The Stanford Encyclopedia of Philosophy defines supervenience thus: “A set of properties A supervenes upon another set B just in case no two things can differ with respect to A-properties without also differing with respect to their B-properties. In slogan form, ‘there cannot be an A-difference without a B-difference.’” A typical everyday example of supervenience is the relation between the amount of money in my pockets (the A-property) and the specific makeup of bills and coins I carry (the B-property). While I am going to have the same amount of money (say, $20) regardless of the specific combination of coins and bills (say, no coins, one $10 bill and two $5 bills; or four 25¢ coins, nine $1 bills and one $10 bill), it is obvious that the total cannot possibly change unless I change the specific makeup of the coins+bills set (the converse does not hold, as we have just seen: we can change the composition of coins+bills without necessarily changing the total).
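As a toy illustration of this many-to-one relation (my own sketch, not Humphreys’ or the SEP’s), one can enumerate small “wallets” and check both halves of the slogan: many different compositions realize the same total, while any difference in total comes with a difference in composition.

```python
from itertools import product

# Denominations in cents: quarter, $1 bill, $5 bill, $10 bill.
DENOMS = [25, 100, 500, 1000]

def total(composition):
    """The A-property (total amount) is fixed by the B-property (the composition)."""
    return sum(count * value for count, value in zip(composition, DENOMS))

# Enumerate "wallets" holding 0-10 of each denomination.
wallets = list(product(range(11), repeat=len(DENOMS)))

# Multiple realizability: many distinct compositions realize the same $20 total.
twenty = [w for w in wallets if total(w) == 2000]
print(len(twenty), "distinct ways to carry $20; e.g.,", twenty[:3])

# Supervenience ("no A-difference without a B-difference"): since the total is
# a function of the composition, wallets with different totals must differ in
# composition. Spot-check the contrapositive on a subsample:
sample = wallets[::97]
for w1 in sample:
    for w2 in sample:
        if total(w1) != total(w2):
            assert w1 != w2        # an A-difference always comes with a B-difference
print("checked", len(sample) ** 2, "pairs; no counterexample")
```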
Again according to the SEP, “Supervenience is a central notion in analytic philosophy. It has been invoked in almost every corner of the field. For example, it has been claimed that aesthetic, moral, and mental properties supervene upon physical properties. It has also been claimed that modal truths supervene on non-modal ones, and that general truths supervene on particular truths. Further, supervenience has been used to distinguish various kinds of internalism and externalism, and to test claims of reducibility and conceptual analysis.”
Let’s say, for instance, that you think that mental properties supervene on the physical properties of the brain. What that means is that the same mental outcome (say, thought X, or feeling Y) could — in principle — be multiply instantiated, i.e., obtained by way of different brain states. This undermines a simplistic reductionism that would want to proclaim a one-to-one correspondence between physical and mental, but it still means that any change in the latter requires a change in the former, which is perfectly compatible with a physicalist interpretation of mental phenomena.
Humphreys claims that while accounts deploying supervenience often do so with an anti-reductionist aim, supervenience itself is no big foe of reductionism, for two reasons: (i) “If A supervenes upon B, then ‘A is nothing but B’ talk”; and (ii) “if A supervenes upon B, then because A’s existence is necessitated by B’s existence, all that we need in terms of ontology is B.” I think that’s just about right, which explains why I’ve always felt that supervenience is an interesting philosophical concept, but has little to do with the debate about reductionism.
Well, what is supervenience good for, you might say? Humphreys gives the example of aesthetic judgment: “If aesthetic merit supervenes upon just spatial arrangements of color on a surface, and you attribute beauty to the Mona Lisa, you cannot withhold that [same] aesthetic judgement from a perfect forgery of the Leonardo painting.” Supervenience, then, becomes a way to assess consistency in the attribution of concepts, but has nothing interesting to say about ontological relationships, which is where the meat of the reductionism / anti-reductionism debate lies.
So for Humphreys one needs emergence, not just supervenience, to move away from reductionism. Fine, but we are still left with the need for a reasonable articulation of what emergent properties are. The author proposes a list of characteristics of emergence, though not all of them are necessary to identify a given phenomenon as emergent:
1) Novelty: “a previously uninstantiated property comes to have an instance.”
2) Qualitative difference: “emergent properties ... are qualitatively different from the properties from which they emerge.”
3) Absence at lower levels: “an emergent property is one that could not be possessed at a lower level — it is logically or nomologically [1] impossible for this to occur.”
4) Law difference: “different laws apply to emergent features than to the features from which they emerge.”
5) Interactivity: “emergent properties ... result from an essential interaction between their constituent properties.”
6) Holism: “emergent properties are holistic in the sense of being properties of the entire system rather than local properties of its constituents.”
Having thus set the stage, Humphreys goes on to consider some candidate examples of emergent properties. Interestingly, his first is none other than quantum entanglement, which provides the physical basis for higher level phenomena like superconductivity and superfluidity. According to Humphreys, quantum entanglement itself satisfies the 5th and 6th criteria (interactivity and holism), while when the phenomenon is considered as an explanation for, say, superconductivity, it minimally satisfies also criteria 1, 2 and 4 (novelty, qualitative difference and law difference).
The article then moves to a discussion of the general point that emergent properties can only manifest themselves in macroscopic systems, because they “enjoy properties that are qualitatively different from those of atoms and molecules, despite the fact that they are composed of the same basic constituents ... [properties] such as phase transitions, dissipative processes, and even biological growth, that do not occur in the atomic world.”
This is important because Humphreys then derives from his analysis a conclusion that is very much like the one Batterman arrived at in his paper, though beginning from a completely different starting point: “emergent properties cannot be possessed by individuals at the lower level because they occur only with [practically] infinite collections of constituents. Some of the most important cases of macroscopic phenomena are phase transitions, such as the transition from liquid to solid.” Hence the theoretical relevance of the mathematical singularities that describe phase transitions, which we have encountered in the first essay of this series.
The point is worth rewording more clearly: the reason mathematical singularities such as infinities pop up in descriptions of emergent phenomena is that emergent phenomena occur when the number of components of a system is very large, effectively approaching infinity. Which in turn explains why only (certain types of) complex systems display emergent phenomena. Neat, no?
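A standard textbook way to state this (my addition, not Humphreys’ notation): for a finite system of N components the partition function Z_N is a finite sum of smooth exponential terms, so the free energy per component

\[ f_N(T) = -\frac{k_B T}{N}\,\ln Z_N(T) \]

is an analytic function of temperature; a genuinely non-analytic feature, i.e., a sharp phase transition, can appear only in the thermodynamic limit f(T) = \lim_{N\to\infty} f_N(T). The singularity lives at the level of the (effectively) infinite collection, not of any finite assembly of constituents.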
I know, I know, you are itching for less theory and more examples. Humphreys obliges, discussing the case of spontaneous ferromagnetism occurring below the Curie temperature. To wit:
“If one takes a ferromagnet whose Hamiltonian is spherically symmetric, then below the Curie temperature the system is magnetized in a particular direction, even though because of the spherically symmetric Hamiltonian, its energy is independent of that specific direction. This divergence between the symmetry exhibited by the overall system and the symmetry exhibited by the laws governing its evolution is an example of spontaneous symmetry breaking. We have here a case where there is a distinctively different law covering the N → ∞ system than covers its individual constituents. This is exactly the kind of difference of laws across levels of analysis that we noted earlier as one criterion of a genuinely emergent phenomenon.”
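To make the example vivid, here is a toy simulation of my own (a standard Metropolis sketch of the 2D Ising model; none of it comes from Humphreys’ paper, and the temperatures are merely illustrative). The energy function is symmetric under flipping every spin, yet below the critical temperature (about 2.27 in these units) the lattice typically settles into a state magnetized in one particular direction, while above it the magnetization stays near zero.

```python
import numpy as np

def ising_metropolis(n=16, temperature=1.5, steps=100_000, seed=1):
    """Metropolis sampling of the 2D Ising model (J = 1, no external field,
    periodic boundaries). Returns the magnetization per spin at the end."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(n, n))       # symmetric, disordered start
    for _ in range(steps):
        i, j = rng.integers(0, n, size=2)
        neighbors = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j] +
                     spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        delta_e = 2 * spins[i, j] * neighbors      # energy cost of flipping spin (i, j)
        if delta_e <= 0 or rng.random() < np.exp(-delta_e / temperature):
            spins[i, j] *= -1
    return spins.mean()

print("T = 1.5 (below T_c):", round(float(ising_metropolis(temperature=1.5)), 2))
print("T = 4.0 (above T_c):", round(float(ising_metropolis(temperature=4.0)), 2))
```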
To recap, supervenience — despite the crucial role it plays in many philosophical discussions — is not in fact a way to describe irreducible phenomena, for which task one really needs the more robust concept of emergence, with convincing examples to accompany it. This concept can be articulated in terms of Humphreys’ six criteria, and it turns out to approximate Batterman’s approach based on the mathematics of phase transitions.
______
[1] For something to be nomologically impossible means that if instantiated it would violate a law of nature.
Thursday, October 18, 2012
Arguing pluralism instead of Church-State
[Image credit: www.globalpost.com]
When Vice President Joe Biden and Rep. Paul Ryan were asked about how their religious beliefs influence their views on abortion during last week’s debate, Americans were given more than just the chance to hear two vice presidential candidates discuss their faith and how it relates to a controversial political issue. They were given the chance to observe the candidates address a much broader subject: the relationship between religion and politics.
As could be expected, the two candidates outlined two very different approaches to this relationship. In order to discuss the broader points, let’s first take a look at what Biden and Ryan said.
Ryan’s answer:
I don’t see how a person can separate their public life from their private life or from their faith. Our faith informs us in everything we do. My faith informs me about how to take care of the vulnerable, of how to make sure that people have a chance in life.
Now, you want to ask basically why I’m pro-life? It’s not simply because of my Catholic faith. That’s a factor, of course. But it’s also because of reason and science.
You know, I think about 10 1/2 years ago, my wife Janna and I went to Mercy Hospital in Janesville where I was born, for our seven-week ultrasound for our firstborn child, and we saw that heartbeat. A little baby was in the shape of a bean. And to this day, we have nicknamed our firstborn child Liza, “Bean.” Now I believe that life begins at conception.
That’s why — those are the reasons why I’m pro-life. Now I understand this is a difficult issue, and I respect people who don’t agree with me on this, but the policy of a Romney administration will be to oppose abortions with the exceptions for rape, incest and life of the mother.
Biden’s answer:
... with regard to abortion, I accept my church’s position on abortion as a — what we call a de fide doctrine. Life begins at conception in the church’s judgment. I accept it in my personal life.
But I refuse to impose it on equally devout Christians and Muslims and Jews, and I just refuse to impose that on others, unlike my friend here, the — the congressman. I — I do not believe that we have a right to tell other people that — women they can’t control their body. It’s a decision between them and their doctor. In my view and the Supreme Court, I’m not going to interfere with that.
Ryan's response:
All I’m saying is, if you believe that life begins at conception, that, therefore, doesn’t change the definition of life. That’s a principle. The policy of a Romney administration is to oppose abortion with exceptions for rape, incest and life of the mother.
(You can find a full transcript here).
According to Ryan, there is no way (or no reason to try) to separate one’s beliefs regarding the veracity of religious claims from one’s approach to specific policies. For example, if you believe an embryo is a person made in the image of God, and deserving of certain rights, that will undoubtedly influence your approach to abortion. But, according to Biden, there is a way to separate these two. In his view, an elected official must realize that not everyone he or she represents practices his or her religion, and therefore should not have to live according to its dogmas. I think they each make an important point. Allow me to explain.
Ryan’s point cannot be easily dismissed. When Ryan says that he does not see “how a person can separate their public life from their private life or from their faith,” he is stating what counts as a fact for many people. Ryan — like many devoutly religious people — honestly and ardently believes that embryos are people, and that abortion is murder. Though I consider that position incoherent and unsupportable, it is difficult, if not impossible, for a person to believe that, yet sit idly by while thousands of abortions are happening every year. That is simply how belief works: once you accept some proposition as true, you are bound to act on it.
As for Biden, I have a hard time believing that he truly agrees with the Catholic Church on abortion, at least as fervently as Ryan. But that’s not necessarily what matters here. Biden has a compelling point in regard to making laws in a pluralistic society. While he readily admits that he has religious beliefs, he also realizes that public policy influences the lives of millions of different Americans. As such, he thinks public policy should not be based on his (or anyone’s) religious beliefs, which require a personal leap of faith, but on reasons that are accessible by all Americans.
You’ve probably noticed that Biden’s position does not employ the separation of church and state argument; he uses the pluralistic society argument. I suspect some secularists found Biden’s answer incomplete, but I think the pluralistic society argument could actually be more effective at convincing religious believers to adopt secular policies than a purely church-state argument (though I would note that pluralism is indirectly an argument in favor of church-state separation).
To be clear, I interpret the Establishment Clause of the First Amendment of the U.S. Constitution as mandating government neutrality on religion. Government should not favor religion over non-religion, non-religion over religion, or one religion over another. But there is nothing in the Constitution that states that religious lawmakers are required to leave their consciences at home when they arrive at their respective statehouses. In my view, secularists should realize this, and consider directly rebutting arguments for religiously based laws when they come to the surface, instead of asking politicians to dismiss them as personal or as outright absurd (even if they are). These beliefs are clearly influencing our political system, and should be exposed to critical reasoning.
While we cannot control the reasons people give for their beliefs, we can work to prevent religious-based reasons from entering the debate in the first place, steering political discourse towards secular reasoning. How? I think Biden’s pluralistic society argument is instructive here.
As it happens, this argument has been detailed before by a familiar figure: President Barack Obama. As Obama writes in The Audacity of Hope, “What our deliberative, pluralistic democracy does demand is that the religiously motivated translate their concerns into universal, rather than religion-specific, values.” [1] An example he uses is (oddly enough!) abortion:
If I am opposed to abortion for religious reasons and seek to pass a law banning the practice, I cannot simply point to the teachings of my church or invoke God’s will and expect that argument to carry the day. If I want others to listen to me, then I have to explain why abortion violates some principle that is accessible to people of all faiths, including those with no faith at all.
People cannot hear the divine voice others claim to hear, nor can they rely on others’ assertions that they have heard God’s voice. Furthermore, most people do not believe in the same holy book. In fact, even adherents to the same religious traditions often disagree over central tenets. And, of course, many people (reasonably, I might add) deny that the supernatural realm exists to begin with.
What does the pluralistic society argument mean for religious lawmakers? It doesn’t mean that they cannot hold or even speak about their religious beliefs in political debates. The fact that we live in a highly religious open democracy means that such reasons are bound to appear often. A person’s religious views naturally influence his or her views in politics, and we cannot bar these from entering the discourse. But politicians should also hold to certain practices regarding how to best make public policy. Since laws influence millions of different people who have different values, they cannot be defended by mere reference to a holy book or faith. Public policy must be based on natural world reasons that everyone can grasp and understand. Believe in religion if you like, but also believe that “I can’t make other people live according to my religion; I need to base laws on values that apply to everyone.”
At the least, this approach pushes religiously devout lawmakers to consider how they can defend their views on clearer grounds to all of their constituents. At its best, it will help foster a more reasonable public policy.
For Rep. Ryan, this means that it is not enough to simply tell the story of your wife’s ultrasound and of the nicknaming of a seven-week-old embryo. If you think beans deserve equal or even greater moral and legal consideration than women, you need a better argument than “I looked at an ultrasound and nicknamed what I saw; you should too.”
If you want to restrict abortion, you need to answer questions such as: what does it really mean to say that life begins at conception? Why do you think embryos are persons worthy of moral consideration and legal protection? Why shouldn’t a woman have the right to largely control her body and make reproductive decisions with her doctor? If you can’t answer these questions without reference to some religious principle, you should think deeply about whether you are fit for public office.
______
Note: a shorter version of this article first appeared on The Moral Perspective.
[1] Editorial Note: this is essentially John Rawls’ argument, as articulated in his A Theory of Justice.
Labels:
abortion,
church-state,
Michael De Dora,
pluralism,
politics