About Rationally Speaking


Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.

Thursday, October 11, 2012

Essays on emergence, part I


by Massimo Pigliucci

I am about to go to an informal workshop on naturalism and its implications, organized by cosmologist Sean Carroll. The list of participants is impressive, including Pat Churchland, Jerry Coyne, Richard Dawkins, Dan Dennett, Rebecca Goldstein, Alex Rosenberg, Don Ross and Steven Weinberg. You may have recognized at least four names of people with whom I often disagree, as well as two former guests of the Rationally Speaking podcast (not to mention Don Ross’ colleague, James Ladyman).

The list of topics to be covered during the discussions is also not for the faint of heart: free will, morality, meaning, purpose, epistemology, emergence, consciousness, evolution and determinism. Unholy crap! So I decided — in partial preparation for the workshop — to start a series of essays on emergence, a much misunderstood concept that will likely be at the center of debate during the gathering, particularly as it relates to its metaphysical quasi-opposite, determinism (with both having obvious implications for most of the other topics, including free will, morality, and consciousness).

It’s a huge topic, and the way I’m going to approach it is to present a series of commentaries on four interesting papers on emergence that have appeared over the course of the last several years in the philosophical (mostly philosophy of physics) literature. Keep in mind that — although I make no mystery of my sympathy for the idea of emergence as well as of my troubles with reductive physicalism — this is just as much a journey of intellectual discovery for me as it will be for most readers. (A good overview can be found, as usual, in the corresponding entry in the Stanford Encyclopedia of Philosophy.)

That said, let us begin with “Emergence, Singularities, and Symmetry Breaking,” by Robert W. Batterman of the University of Western Ontario and the University of Pittsburgh. The paper was published in Foundations of Physics (Vol 41, n. 6, pp. 1031-1050, 2011), but you can find a free downloadable copy of an earlier version here.

Batterman begins by putting things into context and providing a definition of emergent properties: “The idea being that a phenomenon is emergent if its behavior is not reducible to some sort of sum of the behaviors of its parts, if its behavior is not predictable given full knowledge of the behaviors of its parts, and if it is somehow new — most typically this is taken to mean that emergent phenomenon displays causal powers not displayed by any of its parts.” If you think that this amounts to invoking magic, you are not seriously engaging in this discussion, and you may as well save your time and quit reading now.

Batterman sets up his paper on an interesting premise: instead of looking at how philosophers of various stripes have conceptualized emergence and then examining possible cases from the sciences, he goes about it precisely the other way around: “I think it is better to turn the process on its head. We should look to physics and to ‘emergent relations’ between physical theories to get a better idea about what the nature of emergence really is.” This, the attentive reader might have noticed, is just about the same idea of practicing naturalistic (i.e., science-informed) metaphysics proposed by Ladyman and Ross in their Every Thing Must Go, already discussed on this blog and its accompanying podcast.

The second interesting twist in the paper is that Batterman avoids the usual treatment of emergence in terms of mereology, i.e., of parts vs whole. Instead, again bringing Ladyman and Ross to mind, he suggests that the most promising approach to understanding emergence is to look at the mathematical features that play an explanatory role [1] when theories are compared at different energy scales. From then on things become pretty complicated, but I’ll do my best, using extensive quotations from the paper itself.

Batterman takes on an alleged (as it turns out) case of reduction of a phenomenological to a more “fundamental” theory: the relationship between classical thermodynamics (phenomenological) and statistical mechanics (fundamental). The fact is, “the quantities and properties of state in orthodox thermodynamic equations appear largely to be independent of any specific claims about the ultimate constitution of the systems described,” which would seem to cast some doubts on the simple version of the reduction story. As Batterman puts it, “Reduction in this context typically is taken to mean that the laws of thermodynamics (the reduced theory) are derivable from and hence explained by the laws of statistical mechanics (the reducing theory) ... [but] there are very good reasons to deny that all thermodynamic (and hydrodynamic) phenomena are reducible to “fundamental” theory,” and these reasons have to do with phase transitions (solid and liquid, liquid and gas, etc.).

The crucial claim is that phase transitions are qualitative changes that cannot be reduced to fit the more fundamental explanatory principles of statistical mechanics. Phase transitions, therefore, count as genuine emergent phenomena.

The question, of course, is what — then — might explain the qualitative / emergent phenomena. Batterman does not go for the popular, easy but somewhat vacuous, solution of invoking “higher organizing principles.” Instead, he looks toward mathematical singularities in order to get the work done. Let me explain, as far as I understand this.

Batterman directs his reader to the role played by renormalization group theory within the context of condensed matter theory. This is because renormalization theory “provides an explanation for the remarkable similarity in behavior of ‘fluids’ of different molecular constitution when at their respective critical points.” It turns out, experimentally, that there is a universal pattern that describes the behavior of substances of very different micro-constitution, an observation that — suggests Batterman — would make it puzzling if the right explanation were to be found at the lower level of analysis, in terms of the specific micro-constitution of said fluids.

Batterman examines a typical temperature-pressure phase diagram for a generic fluid, and concludes that “Thermodynamically, the qualitative distinction between different states of matter is represented by a singularity in a function (the free energy) characterizing the system’s state.
Thus, mathematical singularities in the thermodynamic equations represent qualitative differences in the physical states of the fluid in the container.” The important part here is that the explanatory work is done by mathematical singularities, constructs of which scientists are often wary, indeed — according to Batterman — downright prejudiced against. But prejudice, it turns out, is a poor reason not to bite the bullet and embrace singularities.
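To make the idea of a non-analyticity in the free energy a bit more concrete, here is a minimal numerical sketch (my own toy illustration, not taken from Batterman's paper): the mean-field Ising model, whose order parameter m satisfies m = tanh(m/t) in units where the critical temperature is t = 1. Below t = 1 the order parameter is finite, above it is zero, and the kink at t = 1 is exactly the kind of non-analyticity (strictly speaking, one that only appears in the thermodynamic limit) that marks a phase transition.

```python
# Mean-field Ising order parameter: solve m = tanh(m / t) by fixed-point
# iteration; t is temperature in units of the mean-field critical temperature.
import numpy as np

def magnetization(t, iterations=5000):
    m = 1.0                      # start from the fully ordered state
    for _ in range(iterations):
        m = np.tanh(m / t)
    return m

for t in (0.5, 0.8, 0.95, 1.05, 1.2, 1.5):
    print(f"t = {t:4.2f}   m = {magnetization(t):.4f}")
# m is finite for t < 1 and zero for t > 1; the free energy, of which m is a
# derivative, is non-analytic precisely at the critical point.
```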

Batterman continues: “The renormalization group explanation provides principled physical reasons (reasons grounded in the physics and mathematics of systems in the thermodynamic limit) for ignoring details about the microstructure of the constituents of the fluids. It is, in effect, an argument for why those details are irrelevant for the behavior of interest.” [Italics in the original] In terms of the necessary presence of singularities, Batterman acknowledges that a number of physicists and philosophers consider the appearance of singularities to be a failure of the physical model, but goes on to say: “On the contrary, I’m suggesting that an important lesson from the renormalization group successes is that we rethink the use of models in physics. If we include mathematical features as essential parts of physical modeling then we will see that blowups or singularities are often sources of information.” Singularities are our best friend, insofar as a mathematical understanding of physical processes is concerned.
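For readers curious about what a renormalization group step actually does, here is the simplest textbook case, offered only as an illustration of the coarse-graining idea (it is not Batterman's example, and in one dimension the fixed point is trivial, whereas the universality he discusses hinges on the nontrivial fixed points of higher-dimensional models): exact decimation of the one-dimensional Ising chain. Summing over every other spin maps the dimensionless coupling K onto K' = arctanh(tanh^2 K), and chains with very different microscopic couplings flow toward the same fixed point, the microscopic details being systematically discarded along the way.

```python
# Exact decimation RG for the 1D Ising chain: tracing out every other spin
# renormalizes the coupling K = J/kT to K' = arctanh(tanh(K)^2).
import numpy as np

def rg_step(K):
    return np.arctanh(np.tanh(K) ** 2)

for K0 in (0.5, 1.0, 2.0):   # three chains with different microscopic couplings
    K, flow = K0, []
    for _ in range(8):
        flow.append(K)
        K = rg_step(K)
    print(f"K0 = {K0}: " + " -> ".join(f"{k:.3f}" for k in flow))
# All three flows converge on the same (here trivial, K* = 0) fixed point:
# repeated coarse-graining washes out the microscopic differences.
```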

At the end of the day, of course, both reduction and emergence are necessary, with the latter playing a particular role in the broader picture: “While one may be able to tell detailed (microstructurally dependent) stories about why individual fluids/magnets behave the way they do at criticality [i.e., at the point of phase transitions], such stories simply cannot account for the key property of the emergent protectorates [properties] — namely, their insensitivity to microscopics.”

For Batterman it is downright puzzling that one should always seek explanations at lower levels, even when it is clear that higher-level phenomena of a particular class behave uniformly, independently of their micro-scale makeup. He asks: “why should that individual derivation [for a particular fluid] have any bearing on a completely different individual derivation for a different fluid with a potentially radically different microstructural constitution?” Why indeed.

At one point Batterman turns the tables on the reductionist, essentially accusing him — as I often find myself doing — of ignoring the data in the service of an unjustified a priori preference for a particular ontology: “One crucial and obvious feature of the world is that different things — particles, organisms, etc.— exist or appear at different scales. The world is not scale invariant with respect to space and time.” So why expect that “fundamental” scale-invariant theories should account for all of the world’s features?

Batterman also gets briefly into another example, pertinent to a theory that is arguably the most powerful currently accepted by the physics community, quantum field theory. Despite its spectacular successes, including the incredibly high degree of experimental confirmation, “the theory has been deemed by many to be foundationally suspect. Those who hold that a successful theory should yield predictions from-first-principles, as it were, independent of experimental/phenomenological input believe that it cannot be the final theory. ... more upsetting to many is the fact that quantum field theory when actually used for calculations and predictions typically engenders all kinds of divergences [i.e., mathematical singularities and infinities]. With these monsters ever present, it is claimed that there must be something wrong with the foundations of the theory.” Again, as in the case of phase transitions above, however, it is possible to actually see these “monsters” as the theoretician’s friends, given the amount of explanatory work they actually make possible.

So, the bottom line is that physicists should take singularities and infinities as important — and informative — features of their theories, not as “monsters” indicating underlying flaws in the theoretical architecture. This has happened before in the history of mathematics itself, when Georg Cantor had a hard time explaining to his colleagues that infinities aren’t mathematically suspect; they are a crucial part of the story. Similarly, the kind of singularity that Batterman is talking about may turn out to be one of the loci of explanation for large classes of emergent properties, as well as a much more solid basis for studying emergence than generic appeals to somewhat mysterious higher organizing principles.

_____

[1] The very idea of mathematics explaining rather than simply quantifying / describing things may sound weird, though it appears to be accepted by many mathematicians and physicists, as well as by philosophers of both disciplines. See, of course, the RS entry on mathematical Platonism and links therein.

67 comments:

  1. When it comes to empirical science--from which I would exclude questions of free will, morality, meaning, purpose, epistemology, and consciousness--I don't see that questions of emergence (or reduction) are about anything more than the limitations in our models, or our inability to simultaneously conceptualize vastly different scales. What am I missing?

  2. Massimo, thanks for the plug for the workshop. I haven't read Batterman's paper, but from your quotes above I suspect I would be tearing my hair out in frustration if I did. In particular, to claim that a phenomenon is emergent "if its behavior is not predictable given full knowledge of the behaviors of its parts, and if it is somehow new — most typically this is taken to mean that emergent phenomenon displays causal powers not displayed by any of its parts" seems unnecessarily contentious, and exactly why these discussions crash and burn almost immediately.

    If you believe in atoms and the laws of physics, the behavior of a gas of particles certainly *is* predictable given full knowledge of the behaviors of its parts, *in principle.* In practice, of course, it's hopeless, and it's much more sensible to use the renormalization group (or its moral equivalent) to talk about effective theories of thermodynamics and phase transitions etc. That setup is more than enough to have extremely interesting and productive conversations about reduction and emergence, without invoking new "causal powers" that seem incompatible with the straightforward mathematics.

  3. Massimo, the venue for this workshop is located less than 20 mi. from where I live (do enjoy the porch & the fallen foliage, btw), and it features other thinkers/authors whom I've followed and admired over the years (viz. Owen Flanagan and Terrence Deacon)!

    That said, I'm sad that it's not open to the public, but I look forward to the video recordings and anything else you might have to report on it.

    Otherwise, no comment on this post, just yet. :-)

  4. Nick,

    > I don't see that questions of emergence (or reduction) are about anything more than the limitations in our models, or our inability to simultaneously conceptualize vastly different scales. What am I missing? <

    Well, I could be flippant and respond: the point of the article. But I guess it was my fault for not being clear. If people like Batterman are right, there are *in principle* reasons for emergent properties, described, for instance, by singularities in our models. So it isn't a question of limitations of the latter; it's that there sometimes truly are genuinely qualitative transitions in nature, which is what we call emergent properties.

    Sean,

    with all due respect, maybe you should read Batterman's paper, while at the same time avoiding tearing your hair out. First, he doesn't invoke any non-straightforward mathematics. On the contrary, he uses perfectly understood mathematical features to gain a more precise account of emergence.

    Second, when you say:

    > the behavior of a gas of particles certainly *is* predictable given full knowledge of the behaviors of its parts, *in principle.* <

    And what principle would that be? You are making an entirely unwarranted metaphysical assumption here, which you cannot actually back up epistemically. That's pretty slippery ground, seems to me...

  5. "And what principle would that be? You are making an entirely unwarranted metaphysical assumption here, which you cannot actually back up epistemically."

    I'm assuming the validity of Newton's laws of motion. Or the Schrodinger equation, if you want to do quantum mechanics. Seems like pretty non-slippery ground to me! Of course we don't know the ultimate true laws of physics, but if that's the only claim being made, it would be useful to just say that.

  6. Massimo - I can't comment on whether emergence or reductionism is the right perspective for quantum mechanics or thermodynamics (although it seems to me like you and/or Batterman are giving a reductionist account of emergence in terms of singularities). In molecular biology, though, talk of 'emergence' just seems completely out of place, a kind of question-stopping begging off.

    I've encountered probably thousands of interesting biological questions (how is genetic information stored, how do enzymes catalyse reactions, how do cells divide, how does signal transduction work, ad nauseam), and there is always a fruitful reductionist answer. There aren't, though, any useful emergent laws: molecular and cellular biology are notorious for this. But these reductionist stories are hard won, and most are still in progress. To say "it's just emergent" is to say "you reductionists needn't worry your silly heads, you'll never solve *this* one". Worse: in molecular biology, the emergent answers aren't helpfully explanatory - you don't get anything like that, anything law-like, until you move up the chain of abstraction to populations (where laws pertaining to ecology, genetics, evolution become useful).

    That leads me to think that the choice between reductionism and emergence is an *epistemic* issue, an issue of which way of building models is more explanatory and informative. What really troubles me is that people often want to use epistemic arguments for or against either reductionism or emergence to make *metaphysical* points: the universe is lawlike in a continuous (reductionist) or discrete (emergence) way. I suspect you'd be sympathetic to this distinction, Massimo, but it seems like you also want to have it both ways.

  7. "a phenomenon is emergent if its behavior is not reducible to some sort of sum of the behaviors of its parts, if its behavior is not predictable given full knowledge of the behaviors of its parts"

    When you use the renormalization group, you are in fact reducing its behavior to the sum of its parts. In fact, you're reducing it even more than that, you're reducing it to a *subset* of the sum of its parts (ignoring many details that are irrelevant).

    It seems the paper is claiming that phase transitions are not predictable from full knowledge of the behavior of parts. But isn't that what phase transition theory is--a prediction of large-scale behavior based on the behavior of parts? I suppose Batterman contends that renormalization group theory uses additional ideas not contained in the fundamental behavior. But those additional ideas are just mathematical ones (complicated ones, but still). By that standard, predicting the orbits of planets also requires additional ideas not contained in the fundamental laws, because it requires calculus. Is orbital motion emergent?

  8. This is the first of Massimo's essays that I will have to re-read a couple of times. It's very technical and, from the looks of it, way over my head.

    I do still wonder how you can get a behavior that is more than the sum of its parts. As unpredictable as it might be, how can you move in another dimension other than what your fundamental blocks allow you to?

  9. Sean,

    > I'm assuming the validity of Newton's laws of motion. Or the Schrodinger equation, if you want to do quantum mechanics. Seems like pretty non-slippery ground to me! <

    I guess I missed the step where you derived physical reductionism and the absence of qualitatively new emergent behavior from Newton's laws...

    Evan,

    > That leads me to think that the choice between reductionism and emergence is an *epistemic* issue, and issue of which way of building models is more explanatory and informative. What really troubles me is that people often want to use epistemic arguments for or against either reductionism or emergence to make *metaphysical* points <

    Correct, and I always draw that distinction. Now, epistemically speaking, there is no question that we observe emergent properties that are not reducible "all the way down." Anyone who thinks otherwise isn't taking the science seriously.

    Metaphysically speaking, I am sympathetic to emergence, but I am irked by people who simply *assume* reductionism and talk as if it came straight out of empirical data. It most certainly doesn't.

    > There aren't, though, any useful emergent laws: molecular and cellular biology are notorious for this. But these reductionist stories are hard won, and most are still in progress <

    Nobody denies the success of the reductionist approach - as a methodological assumption. The question, again, is metaphysical. What I like about Batterman's paper is that he does try to go beyond a generic nod toward "higher emerging principles" and operationalize (mathematically) the concept. That is not at all the same as reducing emergent properties to lower levels, but it does amount to taking them out of the realm of magic, I think.

    miller,

    > I suppose Batterman contends that renormalization group theory uses additional ideas not contained in the fundamental behavior. But those additional ideas are just mathematical ones <

    Correct. The claim is that the mathematical singularities carry information that is not the same as that of standard reductive models. You raise an interesting question, however, when you write "just" mathematical, a question about the status of mathematics in science. Is it just descriptive or also explanatory? My response, years ago, would have been the former. But now I am inclined toward the latter - together with a number of epistemologists and philosophers of science.

    Carlos,

    > As unpredictable as it might be, how can you move in another dimension other than what your fundamental blocks allow you to? <

    Well, because the whole metaphor of "fundamental blocks" is just that, a metaphor, and in itself it doesn't justify strong metaphysical conclusions, such as extreme reductionism. Particularly when they do fly in the face of the empirical evidence.

    Replies
    1. Massimo: I'm amused by the transition in your last comment from "just mathematics" to "just metaphor" (where "just" = "only").

      Some would say: Let's eliminate the "just" in both cases and treat mathematics as a practice that's rich in metaphor (e.g. see here).

    2. Wait, didn't you just shift from pro-emergent properties to scientific anti-realism?

  10. At the risk of getting in WAY over my head, am I missing something in all of the above? I thought simple systems are devoid of emergence while complex systems are more or less defined by emergence. It seems to me that one cannot discuss emergence without first getting simple systems out of the way, so things like Newtonian mechanics etc. do not confuse the issue, as they are simple systems or concern simple systems most of the time. Second, I also thought that “cause” was now out of favor for complex systems and replaced by a causal chain. If I am way out of the loop here, feel free to ignore me.

  11. I don't know what it would mean to "derived physical reductionism," nor do I think that qualitatively new emergent behavior is absent from Newton's laws (depending on definitions). The point is simply that Newton's laws, applied to a set of particles, give you a closed set of equations. With appropriate initial conditions, the solutions are unique. There is no room for additional causal influence. The equations give unique answers; you can't get a different answer without violating the equations.

    There is an important and interesting discussion to be had about emergence, and it has nothing to do with being unable to predict behavior from component parts, nor with new "causal powers."

  12. Massimo, I think I come from generally where you do on this issue. But, I didn't know you had quarrels with Steve Weinberg!

    More seriously, I think both the initial post and the responses so far to critiques (or criticisms? especially if Sean is the fourth of your "four horsemen" - I'm reasonably sure about the other three) are very good.

  13. Interesting post and article.

    I'm not sure emergent phenomena in physics and the ones considered in philosophy are entirely comparable.

    In physics, someone sets up a nice equation, lets x go to infinity, and solves the problem. It's counterintuitive, people argue, but they get used to it and (fifty years later) Bob's your uncle. That's because in physics, we are looking for a model that makes the correct predictions or that accurately describes the phenomena and not more.

    If someone sets up an equation that describes the activity of brain cells, and finds a singularity for infinitely many brain cells: would you consider that a satisfactory explanation of consciousness?

    Now I think about it, maybe I would. But then, for me understanding the world is building a model. I'm not sure people who argue about mind and body would agree.

  14. Massimo,

    So you are saying that for the purposes of determining whether something is emergent, it is sufficient to show that its large-scale behavior can only be predicted with "new" mathematical concepts? Alright, I'm okay with that definition, and I consider phase transitions to be a good example.

    I suspect that the distinction between emergent and non-emergent behavior is subjective. It is a judgment call to say that RG theory adds something "new" to the table, where less exotic math like calculus does not. But the existence of borderline cases does not invalidate a distinction. In future parts, I hope to hear why you think this is a useful distinction to make!

  15. << The crucial claim is that phase transitions are qualitative changes that cannot be reduced to fit the more fundamental explanatory principles of statistical mechanics. >>

    That will be news to my lecturer, prof. Haim Sompolinski, given that he taught us a whole course on precisely that topic... Granted, much of that course dealt with characterizing singularities.

    Despite being an Orthodox Jew, Sompolinski had no patience for strong emergence and insisted that such views amounted to non-local laws of nature - special rules for special constellations of matter or locales or so on. This he found unfounded and opposed to the scientific worldview. [This he said in another course, on free-will, incidentally, co-given with a philosopher.] I agree.

    I recommend his online-accessible "A Scientific Perspective on Human Choice", which discusses almost all of the conference's topics as they relate to free-will, and specifically rejects strong emergence. [Sompolinski is a leading expert on neural-network physics.]

    http://elsc.huji.ac.il/sompolinsky/files/book_chapter-_judiasm_science_and_moral_responsblity.pdf

    Cheers,

    Yair Rezek

    Replies
    1. I wholeheartedly agree. My thermodynamics lessons are a bit far behind me but the lecturers made a point of deriving the thermodynamic concepts and equations from the newtonian mechanics underlying them.

      Physicists spend quite a lot of time doing modelling of systems particle by particle. Computational costs grow exponentially as you add elements to the system, but so far, no metaphysical wall has been reached (a point where the elementary modelling gives results that differ from observation or the "emergent" laws of thermodynamics.) Which, inductively I admit, given no reason to believe otherwise, would have us think that no such wall exists.

      The transition between statistical mechanics and thermodynamics is a conceptual one. We switch models because the more fundamental ones become computationally too expensive. In the end, the more fundamental ones are more precise (like relativity is more precise than newtonian mechanics, just harder to solve/compute) and, we guess, closer to reality.

      I have to confess I am a scientific anti-realist. Our theories are only models and we have no idea what reality is really like (we can not be sure there is not another, more fundamental, model underlying our current, most fundamental model.) This means I have mostly given up on metaphysics and that may be why I miss your point (or that I think there is no point.)

      Anyway, even though I haven't studied thermodynamics far enough to reach that level, I think that saying that phase changes cannot be explained from the underlying model of statistical mechanics is wrong as a matter of fact. Cue books with titles like this one: Yeomans J. M., Statistical Mechanics of Phase Transitions, Oxford University Press, 1992.

      So far as we know, when you integrate the standard model and quantum mechanics over large numbers you get the atomic theory and when, likewise, you integrate atomic theory, you get chemistry and thermodynamics, and this integration is continuous at every level. If there are conceptual gaps in our modelling of reality (we cannot integrate biochemistry into a theory of consciousness, for instance) it is likely due to our own ignorance rather than to a metaphysical border that is crossed at some point. (Once again, this is induction-based reasoning: we have already bridged similar gaps before. At some time, it was believed that organic and mineral matter were in two separate realms. It turned out that organic chemistry is not different from the rest of chemistry.)

    2. We seem to agree on much, so I would like to ask you to clarify two points where we don't, in the hope that I'll learn something new.

      First, anti-realism. The main argument for realism is the No Miracles Argument - the idea is that our models work because they reflect how reality works. Certainly, we could discover a new more-fundamental theory. But that won't really change why our old theories worked - just like the discovery of Einstein's relativity didn't undermine the validity of Newtonian mechanics. The primitive physical chemistry likewise gave way to a quantum-based one, but the old ideas (like atoms, bonds, the periodic table) stayed. The idea behind Realism is not that we are capturing ALL of reality in our models - but rather, that these provide a certain level of description of reality. If you reject that, I'm at a loss to understand why you think our models work at all.

      Second, consciousness. I generally agree that integration will give us all the sciences. I do not, however, believe that it can give us consciousness. Integration will still yield an objective, third-person, functional description, whereas for consciousness we want a subjective, first-person, internal description. I think there IS a metaphysical wall there, or rather - that we need to assume something about consciousness itself to integrate to human consciousness; merely building on the fundamental physics won't suffice.

      Cheers,

      Yair Rezek

    3. Jean-Nicolas Denonne:

      "Computational costs grow exponentially has you add elements to the systems but so far, no metaphysical wall has been reached (a point where the elementary modelling gives results that differ from observation or the "emergent" laws of thermodynamics.) Which, inductively I admit, given no reason to believe otherwise, would have us think that no such wall exists."

      You might want to read up on chaos theory. There indeed is a wall (not a metaphysical one, but a mathematical one), when a deterministic system stops being predictable after a finite amount of time. This is a well-known property of nonlinear partial differential equations (coupled to finite precision of initial data due to Heisenberg's uncertainty relations).

      So, as you make your physical system more and more complex (by adding more "pieces"), at some point your system will stop being predictable by the equations that govern the evolution of each "piece". That is the "wall", and one cannot circumvent it, regardless of available computer power.

      For a real-world example, just look at the level of precision we have for the next month's weather forecast. :-)
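      A quick illustrative sketch of that sensitivity, using the logistic map as a stand-in for a chaotic system (chosen only to keep the example short): two trajectories that start 10^-12 apart track each other for a few dozen steps and then become completely uncorrelated, even though the update rule is strictly deterministic.

```python
# Logistic map x -> r*x*(1-x) with r = 4, a standard chaotic case: two initial
# conditions differing by 1e-12 separate roughly exponentially until the
# difference is of order one.
r = 4.0
x, y = 0.3, 0.3 + 1e-12
for n in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 0:
        print(f"step {n:2d}: |x - y| = {abs(x - y):.3e}")
```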


      HTH, :-)
      Marko

  16. Perhaps this is one of your four articles, as it addresses the current one in detail.

    http://philsci-archive.pitt.edu/8757/

  17. Very thought-provoking essay. I believe this is the first time that I see more in "emergence" than a quagmire of definitions.

    So I wonder if emergence is just a different point of view. To give an example: in (the mathematical description of) a two-particle system, we have some interaction term, which is completely irrelevant in the one-particle system. However, it is possible to ascribe a potential to the single particle, such that we can determine the potential from different experiments and predict the interaction term for the two-particle system in question.

  18. I'd like to invite you (once again) to look at my paper on emergence. (This is the most recent. Earlier ones are cited.)

    Abbott, Russ (2010) "Abstract data types and constructive emergence" Newsletter on Philosophy and Computers of the American Philosophical Association, v9-n2, Spring 2010, pp 48-56.

    Preprint: https://sites.google.com/site/russabbott/Abbott-Abstractdatatypesandconstructiveemergence.pdf?attredirects=0 (pdf)

  19. Nice article. Just a short suggestion: you may want to extend the treatment of emergence beyond just the renormalization group formalism. There is another rigorous formalism - chaos theory - where emergence is naturally studied.

    In short, the behavior of complex systems is fundamentally non-predictable, regardless of the fact that you know the laws of motion for the constituent subsystems. One just cannot know initial conditions with infinite precision, due to Heisenberg inequalities. So such a complex system exhibits behavior which is not predictable and not reducible to fundamental laws of physics. A qualitatively new emergent phenomenon, if you will.

    Together with Goedel's incompleteness theorems, this gives a natural possibility for the existence of stuff like consciousness, free will, God, etc.

    Best, :-)
    Marko

  20. I reread your article a couple of times and I have some problems with the way you treat singularities.

    Indeed, phase changes correspond to singularities in the equations of thermodynamics. That is because the assumptions made when creating that model break down. But if you go back to the underlying more fundamental model, which is free of those assumptions, you discover that there is nothing mysterious. (The singularity is an artifact of the mathematical formulation of the higher-level model.)

    I am afraid you feel drawn to the mystery posed by the singularities and are convinced that they must correspond to something mysterious in reality when, really, they are just limitations of models that are less than perfectly accurate (for computational/conceptual efficiency reasons.)

    We see that in phase transitions (and in many other singularities) the mystery disappears when you go to a more fundamental level. We have yet to see a singularity in one of our models translate into something "singular" in reality. Of course, in some cases, this is an open question because we have not discovered a more fundamental model: i.e. relativity (and to make it even more vexing, we cannot even examine what happens in reality since its most famous singularity ends up hidden in black holes.)

    But you are right to say that singularities are a good source of information (and most scientists agree with that, from what I hear.) The key information is that the model is flawed and that the singularities are doors which we can lockpick to try and discover more fundamental/accurate models. I disagree when you say that singularities correspond to something radically different and somewhat mysterious in reality. Mysteriousness is a property of (incomplete) models, not of reality.

    On the point of scale:

    "“why should that individual derivation [for a particular fluid] have any bearing on a completely different individual derivation for a different fluid with a potentially radically different microstructural constitution?” Why indeed."

    It is because when you zoom out, the differences that seemed significant at the molecular level are negligible relative to other variables, like mass, speed and momentum for instance, and can be approximated away.

    It stays an approximation, though, and you see it when the molecular structure differs too much from the assumptions you made in your approximations (i.e. ideal fluid laws indeed do a good job of describing more or less accurately a large array of substances, but that job gets sloppier and sloppier as your substance deviates from their assumptions.)

    So individual derivations for a particular fluid have bearing on a completely different individual derivation for a different fluid inasmuch as the different fluids are not too different in the properties that matter most for what you are modeling. If the microstructural constitution is indeed radically different (with regard to the aforementioned properties, or introduces new significant effects, like electric polarisation does), you get radically different results and not much bearing from one derivation on the other.

    Batterman also says:

    “One crucial and obvious feature of the world is that different things — particles, organisms, etc.— exist or appear at different scales. The world is not scale invariant with respect to space and time.”

    Two things bother me with that sentence:
    - first, what looks like a reliance on intuition ("obvious feature"). So many of our intuitions are wrong that I find that unwarranted
    - second, "different things exist". This sounds sloppy and somewhat essentialist. In what sense do the higher level "things" exist? Is not this "thing" concept just a way developed by the human brain to categorise reality? This sounds question begging to me. (I do realise my own way to present the problem might sound question begging to you, too.)

    Replies
    1. I couldn't agree more.

      I would like to emphasize, though, on the matter of generality, that often in complex phenomena the microscopic details don't matter. One derivation captures what is going on in many systems because only a few aspects of the microscopic interactions/relations/situation matter.

      (On the matter of scale invariance - I didn't find that sentence too wrong per se, only misleading. Yes, the universe is isotropic and homogeneous on LARGE scales [if that; quantum cosmology casts doubt even on that axiom], not on smaller ones. And its evolution in time is irreversible (and non-fractal), so clearly isn't scale invariant [again - unless quantum cyclical/recurrence models are correct]. BUT - this does not in any way relate to strong emergence, as the article tries to imply. The question is whether the *laws of nature* are invariant in space and time, and to the best of our knowledge (even in quantum models) - they are.)

      Yair Rezek

  21. M, am I wrong in thinking that the recent emergence literature somehow misses the earlier emergence work of the 1840s, 1920s, 1950s and so forth? Despite the illuminating discussions of Laughlin, Prigogine, Josephson and many others, I feel the earlier work is not rated for its robust description of possible emergent properties of dissipative structures.

  22. DrRay,

    > I thought simple systems are devoid of emergence while complex systems are more or less defined by emergence <

    Not exactly. Not all complex systems develop emergent properties, but yes, emergence is a characteristic of (certain) complex systems.

    Sean,

    > I don't know what it would mean to "derived physical reductionism" <

    I know, my point was rhetorical: you jumped from a solid physical theory to a questionable metaphysical conclusion, without connecting the dots. I don't know how those dots could be connected, which is why I refrain from such jumps.

    > The point is simply that Newton's laws, applied to a set of particles, gives you a closed set of equations. With appropriate initial conditions, the solutions are unique. There is no room for additional causal influence <

    I find that argument wholly unconvincing. First, Newtonian laws are known to be approximations, so clearly there *is* room, in a sense. Second, those laws tell you precisely nothing about all sorts of complex systems, like the temperature of phase transitions of water, or the functioning of ecosystems, so to claim (global) causal closure seems strange.

    > There is an important and interesting discussion to be had about emergence, and it has nothing to do with being unable to predict behavior from component parts, nor with new "causal powers." <

    And what would that discussion be? Are you not begging the question in favor of reductive physicalism by excluding strong emergence a priori?

    ablogdog,

    > If someone sets up an equation that describes the activity of braincells, and finds a singularity for infinitely many braincells: would you consider that a satisfactory explanation of consciousness? <

    As you admit, the question there hinges on what we mean by "explanation," and what role mathematics plays in explanations. Far too much for this series of essays, but these entries are pertinent:

    http://plato.stanford.edu/entries/scientific-explanation/
    http://plato.stanford.edu/entries/mathematics-explanation/

    miller,

    > So you are saying that for the purposes of determining whether something is emergent, it is sufficient to show that its large-scale behavior can only be predicted with "new" mathematical concepts? <

    Not exactly, but close. What I find interesting in Batterman's paper (and others along similar lines) is that people are beginning to put some meat on the formalism with which we think of emergence, instead of just saying "this is an emergent property" and leaving it at that.

    > I suspect that the distinction between emergent and non-emergent behavior is subjective. <

    I suspect not. If it should turn out, for instance, that emergent properties are always going to be accompanied by specific mathematical signatures, then it will be possible to separate apparent from actual emergence on the basis of those signatures. But that's speculation at this point.

    Replies
    1. I was trying to make the point that if you don't state the goal (in this case what kind of explanation is being sought), you're unlikely to reach it.

      Thanks for the links, the mathematical one made me smile. It's nice to know philosophers now believe it might be fruitful to look at some examples, before finalizing their theory. Way to go :-).

      I don't think, however, they'll get far without understanding that the explanatory power of a proof is (largely) subjective, and even for one reader changes over time. Reading the same proof after a year / decade of further study in the field may "explain" far more than it did before.

      I'm not even sure we understand things better as we go along, possibly we just get more accustomed to them, and better at handling them. Burning paths deeper into the brain until suddenly we believe we've understood something...

  23. Carlos,

    > didn't you just shift from pro-emergent properties to scientific anti-realism? <

    I don't think so, I consider myself a structural realist.

    Yoshi,

    > didn't you just shift from pro-emergent properties to scientific anti-realism? <

    I guess it depends on what you mean by that. As I mentioned above, if it turns out that there are specific mathematical signatures of emergence, then it's not a question of point of view. But of course *everything* we do while thinking about the nature of reality is tied to a human epistemic point of view. See the idea of perspectivism in the philosophy of science:

    http://goo.gl/Si3AB

    Yair,

    > That will be news to my lecturer, prof. Haim Sompolinski, given that he taught us a whole course on precisely that topic... Granted, much of that course dealt with characterizing singularities. <

    QED.

    > Sompolinski had no patience for strong emergence and insisted that such views amounted to non-local laws of nature - special rules for special constellations of matter or locales or so on. This he found unfounded and opposed to the scientific worldview. <

    I find that a bit philosophically naive, if you forgive me. There is no such thing as a monolithic "scientific worldview." Science is a toolbox to understand the world, and if it should turn out that there are indeed non-local laws (or, better, using Ladyman and Ross' phrasing, "patterns") so be it. What do we lose in the process?

    David,

    > Perhaps this is one of your four articles, as it addresses the current one in detail. <

    Thanks for the link, I'll take a look at the paper. But for this series I'm limiting myself to published papers.

    Replies
    1. (I am not sure what happened; perhaps Blogspot has keyboard shortcuts. If my half-written comment appeared in your moderation queue, please delete it.)

      Assuming that you answered the "point of view" remark, what I meant is essentially the difference between weak and strong emergence. For the singularities in thermodynamics, I can either derive the thermodynamic equations from Newtonian dynamics and then calculate the behavior of a specific system at a phase transition (where they have singularities), which I would call the emergence point of view. Or I can solve the Newton equations directly for the system in question (for something like 10^6 particles) and calculate from this ensemble the free energy (and then try to extrapolate to the thermodynamic limit), which I would call the reductionist point of view. My argument is that the reductionist point of view is as valid as the emergence point of view, and that emergence is just a matter of convenience, not a fundamental property of nature.

      On the other hand, I am now revisiting my comment about one- and two-particle systems and I am no longer sure that this is not an example of strong emergence. My current argument is that the movement of one billiard ball on an infinite, frictionless table is completely determined by the geometry of the table, while the movement of two billiard balls can contain collisions, and no amount of observation of one ball can give information about that.

  24. Jean-Nicolas,

    your comments are much appreciated, but too long to deal with in detail here. Just a few highlights:

    > phase changes correspond to singularities in the equations of thermodynamics. That is because the assumptions made when creating that model break down. <

    According to Batterman that's one way to look at it. The other is to bite the singularity and consider it an integral part of the model, one that signals a qualitative transition.

    > you discover that there is nothing mysterious <

    I think we should drop the charge of mysteriousness from this conversation. Nobody is talking about spooky mystical stuff, only of stuff that we don't seem to understand particularly well yet.

    > We have yet to see a singularity in one of our models translate into something "singular" in reality. <

    What do you call a phase transition, if not a "singularity" in this sense?

    > Mysteriousness is a property of (incomplete) models, not of reality. <

    Mysteriousness is a measure of human epistemic ignorance, which is vast.

    > when you zoom out, the differences that seemed significant at the molecular level are negligible relative to other variables, like mass, speed and momentum for instance, and can be approximated away. <

    That doesn't seem to be an explanation for why substances characterized by very different molecular details should behave qualitatively in the same way. Invoking approximations doesn't do much explanatory work, I think.

    > what looks like a reliance on intuition ("obvious feature"). So many of our intuitions are wrong that I find that unwarranted <

    Take intuition out of the scientist's toolbox and you have no science.

    > "different things exists". This sounds sloppy and somewhat essentialist. <

    It sounds like a straightforward observation to me.

    > the lecturers made a point of deriving the thermodynamic concepts and equations from the newtonian mechanics underlying them. <

    But I bet they didn't derive any prediction about phase transitions (like at what temperature water goes from liquid to gas, for instance).

    > The transition between statistical mechanics and thermodynamics is a conceptual one. We switch models because the more fundamental ones become computationally too expensive. <

    That comes close to begging the question, as a look at Batterman's paper will strongly suggest.

    > I have to confess I am a scientific anti-realist. <

    Then for you this discussion ought to be moot, no?

    > saying that phase changes cannot be explained from the underlying model of statistical mechanics is wrong as a matter of fact <

    Well, I haven't read the book you mention, nor am I a physicist, so I'll leave it to others to comment. But the people I am reading seem to know quite a bit of physics, and they are not convinced. Of course, again, it depends on what one means by "explained from." Nobody is arguing that lower-level theories have *no* explanatory power over emergent phenomena. The claim is that there are certain qualitative behaviors that cannot be derived from lower-level theories, but that can nevertheless be described accurately given certain mathematical tools.

    > when you integrate the standard model and quantum mechanics over large numbers you get the atomic theory and when, likewise, you integrate atomic theory, you get chemistry and thermodynamics and this integration is continuous at every levels. <

    That's a bold claim that, I understand, is purely theoretical. There is quite a bit of doubt even on what ought to be a relatively straightforward enterprise: the reduction of chemistry to physics:

    http://plato.stanford.edu/entries/chemistry/

    artikcat,

    > recent emergence literature somehow misses, the earlier 1840s', 1920s, 1950s so forth, emergence work? <

    I can't make that judgment call because I have just started wading into this. But yes, that would be my first reaction too.

    Replies
    1. M, I don't want to be harsh, but it seems to me that especially the physics community has been oblivious to the work mentioned above. Some of those papers, even without the science we know now, are particularly clear. British "emergentism" was a forceful tide of thought in its time. Maybe US scientists tend to curve-ball European "knowledge" (which is a well-described phenomenon in other areas).

  25. Massimo:

    << QED. [in response to singularities studied in course]>>

    Not at all. The singularities did not, for an instant, imply the breakup of the fundamental theories we were working from, or the emergence of any order not underlaid by them.

    << I find that a bit philosophically naive, if you forgive me. There is no such a thing as a monolithic "scientific worldview." >>

    I would first hasten to say this is my expression, not Sompolinski's. If you'll read the above article, you'll see he summarizes quite a bit of philosophy, and I'm not sure he'll use that phrase.

    For my part, however, "scientific worldview" isn't a phrase that is related to philosophy or science in the abstract but rather to "our" contemporary worldview. It signifies the worldview held by the vast majority of scientists and science aficionados, especially of a naturalistic bent - stuff like believing in evolution, the big bang, the laws of physics...

    << ...if it should turn out that there are indeed non-local laws (or, better, using Ladyman and Ross' phrasing, "patterns") so be it. What do we lose in the process? >>

    Naturalism. Or at least, simple naturalism. If said non-localities relate to human consciousness, then we would effectively discover soul-stuff and would definitely lose naturalism.

    Which is fine - if that's reality. Science is a tool to discover the truth, not to establish naturalism. But it HAS established naturalism. There is no reason to suspect that there are souls, i.e. strongly emergent causally-affecting Aristotelian-forms of the mind. And every reason to cling to those things which we have been able to establish beyond all reasonable doubt - the laws of physics.

    This is just Hume's argument against miracles. We base science (and rational thought) on a Principle of Uniformity. We fill the content of its causal laws with (uniform!) laws of physics, which we have NEVER seen violated. To think that they are false, that there are miracles/strong emergence, requires proof that these most-firmly-established regularities don't actually hold - a tall order indeed!

    And although I'm sure he can handle himself, I can't help but respond to one related answer you gave to Jean-Nicolas,

    << There is quite a bit of doubt even on what ought to be a relatively straightforward enterprise: the reduction of chemistry to physics >>

    I come from the Fritz-Haber Institute for Molecular Dynamics - whose entire premise is built on this "doubtful" reduction. Needless to say, the SEP article [I only read the part on reduction] has hardly moved me from this position.

    This actually brings another conference topic - scientism. GIVEN that quantum theory has made spectacularly successful predictions in chemistry, and GIVEN that no observation is opposed to it underlying chemistry (both observations correct beyond all reasonable doubt - even if the SEP article attempts to doubt the latter on flimsy grounds), we SHOULD believe that chemistry is reduced to QM. The SEP's main argument against this reduction is that certain chemical features have not been derived from pure QM. But that's as irrelevant as the fact that certain features of the fossil record haven't been predicted by evolution (yet are consistent with it); that isn't evidence against evolution.

    The SEP does also argue that some chemical features cannot be derived by QM at all. These arguments are exceedingly weak. Isomers, for example, are certainly well studied (and modeled) by quantum mechanics. If the authors (and Hendry, whom they quote) really want to know "why a given collection of atoms will adopt one molecular structure", I suggest they do some molecular dynamics calculations. They'll find these are based on QM.

    Yair Rezek

  26. I decided that my main disagreement with emergence as an idea is the idea that it is adding in new stuff. Deriving behavior of large systems from microscopic behavior is not about adding new stuff, it's about *removing* information. RG theory is an especially clear example of this; the whole RG process is explicitly about removing atoms in your lattice and zooming out. In a way, we are adding new stuff (such as all the mathematical apparati of RG theory), but what we're adding are creative tools to eliminate information. (I expanded upon this point on my blog: http://skepticsplay.blogspot.com/2012/10/my-position-on-emergence-as-physicist.html)

    And doesn't it make more sense this way? Emergence is about the appearance of patterns on the large scale. It's a common mistake to think that patterns represent increased information; in fact they represent decreased information.
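    A toy sketch of what I mean by removing information, block-spin style (purely illustrative, a random lattice rather than a real physical model): each coarse-graining step keeps only the majority sign of every 2x2 block, so a large-scale pattern can survive while three quarters of the degrees of freedom are thrown away at each step.

```python
# Block-spin coarse-graining: replace every 2x2 block of a +/-1 lattice by its
# majority sign (ties broken as +1). Each step discards 3/4 of the spins.
import numpy as np

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(64, 64))

def block_majority(s):
    blocks = s.reshape(s.shape[0] // 2, 2, s.shape[1] // 2, 2).sum(axis=(1, 3))
    return np.where(blocks >= 0, 1, -1)

level = spins
for step in range(4):
    print(f"step {step}: {level.size} spins")
    level = block_majority(level)
```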

  27. Massimo:

    "Second, those laws tell you precisely nothing about all sorts of complex systems, like the temperature of phase transitions of water, or the functioning of ecosystems, so to claim (global) causal closure seems strange."

    Would you disagree that in statistical mechanics, all higher-level laws, such as those concerning temperatures and phase transitions, are derived from the assumptions that A) all higher-level characteristics, such as temperature/phase, can be defined in terms of the "microstate" which gives the exact physical state of all the individual particles (i.e. temperature can be defined as a mathematical function on the microstate, so that if you know the microstate it's just a matter of plugging it into the function to find the temperature), and B) that predictions about how a system will behave given the values of its higher-level characteristics are, in statistical mechanics, based on considering the ensemble of microstates with those values, and figuring out the future statistical behavior of the ensemble based on the assumption that each individual microstate evolves according to the fundamental laws of physics (quantum or Newtonian) alone?
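    (As a toy illustration of assumption A, with made-up numbers rather than anything from the literature: for a classical ideal gas the temperature really is just a function evaluated on the microstate, namely the mean kinetic energy via equipartition.)

```python
# "Reading off" a higher-level quantity (temperature) from a microstate:
# sample velocities for N classical particles at a target temperature, then
# recover T from the kinetic energy via equipartition,
#   <(1/2) m v^2> = (3/2) k_B T   =>   T = m <v^2> / (3 k_B).
import numpy as np

kB = 1.380649e-23      # J/K
m = 6.63e-26           # kg, roughly the mass of an argon atom
T_target = 300.0       # K
N = 100_000

rng = np.random.default_rng(0)
sigma = np.sqrt(kB * T_target / m)          # Maxwell-Boltzmann velocity spread
v = rng.normal(0.0, sigma, size=(N, 3))     # the velocity part of the microstate

T_readoff = m * np.mean(np.sum(v**2, axis=1)) / (3 * kB)
print(f"temperature read off the microstate: {T_readoff:.1f} K")
```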

    It seems to me that discussions of "reductionism" often assume a strawman, suggesting that reductionists think it would be useful or possible in practice to figure out the behavior of every system based on only the "microstate" and the fundamental laws governing the evolution of microstates, throwing out all references to higher-level quantities/characteristics and higher-level derived laws. Most reductionists wouldn't say that! Rather, they just believe it would be possible in principle to deduce all behaviors from the microstate and the fundamental micro-laws, along with knowledge of how higher-level characteristics like temperature can be "read off" a given microstate. So those who dispute the non-strawman version of reductionism should presumably be disputing one of two things:

    1. That any higher-level characteristic could in principle be "read off" of the microstate, if one knew exactly what that was.

    2. That the evolution of microstates over time (and thus the evolution of higher-level characteristics like temperature, assuming #1 is correct) is entirely determined by the most fundamental physical laws governing them. This does not necessarily require determinism to be true (although quantum mechanics might be deterministic, if either the many-worlds interpretation or the Bohm interpretation is correct); a Laplacian demon who knows the exact initial microstate might only be able to make statistical predictions about the state at a later time, but the point is that these would be the best possible predictions anyone could make given the knowledge of the initial state; one would not be able to make more accurate predictions with knowledge of additional high-level laws beyond the fundamental physical ones, since #2 assumes all high-level laws can be derived from the statistical behavior of ensembles of microstates in the style of statistical mechanics.

    ReplyDelete
  28. (continued from previous comment)
    Of course these are both assumptions, there is no definitive evidence they are true. Still it seems to me that science has pretty consistently been successful in making it plausible that these assumptions are true by showing in various approximate ways how the behavior of higher-level systems can be explained in terms of lower level ones. For example, we don't have anything like the computational power to show how the growth of an embryo can be predicted from the initial microstate and basic quantum laws, but scientists have had a good deal of success in explaining how cells in an embryo self-organize themselves by a process in which each cell is sending out molecular signals, so that each cell finds itself with different concentrations of different molecules in its immediate neighborhood, and these molecules turn various of its DNA sequences "on" and "off" and thereby determine how the cell divides and what cell type it turns into, and also in this way determine what molecular signals the cell itself sends out (Enrico Coen's book "The Art of Genes" has a nice primer on this). There was also a recent impressive example of simulating all the gene/protein activity in a single cell at http://www.nature.com/scitable/blog/bio2.0/what_the_simulated_cell_actually (though not a basic quantum simulation of every particle in the cell, which would require vastly more computational power than is currently available). Likewise, Massimo mentioned the difficulty of reducing chemistry to quantum physics, but AFAIK this is mainly a problem of not having enough computing power, not a problem of there being cases where we do have enough computing power to make detailed predictions using quantum laws alone, but find that they contradict what's observed empirically in chemistry (if there are any such cases, someone please point them out). There is a whole field called "Ab Initio Quantum Chemistry" that shows various successful cases of chemical behavior being derived entirely from quantum laws, for example see the following story about the interactions of water molecules being derived in such a way: http://www.udel.edu/PR/UDaily/2007/mar/water030207.html (also see the section "first principle findings" in this article: http://www.nsf.gov/discoveries/disc_summ.jsp?cntn_id=117969 )

    In a lot of ways I think the hypothesis that "reductionism" holds universally (as defined by my 1 and 2 above, again avoiding strawman definitions that say reducing everything to physics should be a practical goal in all fields, or that the high-level behavior is not "interesting" on its own terms) is similar to the hypothesis that the Darwinian mechanism of random variations + natural selection provides an explanation for the evolution of all new useful adaptations in the history of life; there is no hope of showing definitively that this is true in every individual case, but we can show lots of individual cases that seem consistent with the idea (for example showing with phylogenetic or fossil evidence that a given trait evolved in a series of small steps, or genetic evidence of new traits arising via known types of mutations like what's mentioned at http://pandasthumb.org/archives/2004/05/gene-duplicatio.html ), and no examples of falsifying cases that seem clearly incompatible with the hypothesis (like the classic example of a rabbit fossil found in precambrian strata). This is not to say the evidence for the universality of reductionism is quite as strong as the evidence for the universality of Darwinian evolution in producing novel adaptations, just that it's the same type of case where all we can hope for is a steady drip-drip of new examples that make it more plausible in individual cases, and that both are also to some degree favored as a null hypothesis by Occam's razor.

    ReplyDelete
  29. Yair,

    > The singularities did not, for an instant, imply the breakup of the fundamental theories we were working from, or the emergence of any order not underlaid by them <

    You'll have to take that one up with Batterman. Though I suspect "breakup" is too strong a word; "inadequacy" would be better.

    > It signifies the worldview held by the vast majority of scientists and science-affectionados, especially of a naturalistic bent <

    There is no question of doubting naturalism here. But naturalism doesn't exclude emergence. As for any view being widely held by the scientific community, you'd be surprised, given the number of intelligent design supporters among scientists...

    > If said non-localities relate to human consciousness, then we would effectively discover soul-stuff and would definitely lose naturalism. <

    I don't see that that follows at all.

    > Science is a tool to discover the truth, not to establish naturalism. But it HAS established naturalism. <

    I'm a naturalist, but science has done no such thing. At best one can claim that methodologically speaking the assumption of naturalism works well, no more.

    > This is just Hume's argument against miracles. <

    I'm very familiar with that argument, and this isn't it at all.

    > We fill the content of its causal laws with (uniform!) laws of physics, which we have NEVER seen violated. <

    Laws are just empirical generalizations. And there are plenty of generalizations in science that are only local (e.g., anything to do with ecology and evolution).

    > Needless to say, the SEP article [I only read the part on reduction] has hardly moved me from this position. <

    So be it. My point is simply that your conclusion is far from widely accepted, and apparently there are reasonable people who have thought deeply about it who disagree.

    > we SHOULD believe that chemistry is reduced to QM <

    Theory reduction requires actual work, not belief.

    ReplyDelete
    Replies
    1. << > If said non-localities relate to human consciousness, then we would effectively discover soul-stuff and would definitely lose naturalism. <

      I don't see that that follows at all. >>

      Which saddens me. Do you disagree that what you are describing is essentially the Aristotelian concept of soul? Do you disagree that naturalism rejects (Aristotelian) souls?

      << > This is just Hume's argument against miracles. <

      I'm very familiar with that argument, and this isn't it at all. >>

      I suspect that is the disconnect. You don't see that strong emergence implies going beyond the laws of microscopic physics, and hence is in opposition to them.

      << I'm a naturalist, but science has done no such thing. At best one can claim that methodologically speaking the assumption of naturalism works well, no more. >>

      You could claim the same thing about evolution. Or the fact that the earth is a sphere. Or any other scientifically established fact. I repeat my statement from above that this touches on scientism, and on what naturalism is, and of course on reduction.

      From where I am sitting, scientism about the external world is correct, naturalism is about having universal (rather than local/special) underlying regularities, and reduction to just such laws is a well-established scientific fact. It appears you disagree with all three theses.

      << Laws are just empirical generalizations. And there are plenty of generalizations in science that are only local (e.g., anything to do with ecology and evolution). >>

      Our contemporary laws of physics are *local* empirical generalizations. As for the non-local laws - that goes back to the question of reduction, and we've been there, done that.

      << My point is simply that your conclusion is far from widely accepted, and apparently there are reasonable people who have thought deeply about it who disagree. >>

      I suspect that it forms the bon-ton of all of chemistry, especially the more you go up the chain of expertise, but grant this is only my impression. I also suspect that it is the correct view, but grant that I don't have the time and tools to argue exhaustively for that; the above will have to stay as it is.

      << Theory reduction requires actual work, not belief. >>

      My contention was that this work has already been done, by numerous researchers showing and testing reduction at various levels; from the calculation of the proton mass from the Standard Model to the connections between neural networks and computer theory. Reductionism works.

      Cheers,

      Yair Rezek

      Delete
  30. miller,

    > I decided that my main disagreement with emergence as an idea is the idea that it is adding in new stuff. Deriving behavior of large systems from microscopic behavior is not about adding new stuff, it's about *removing* information. <

    Yes, I suspect lots of people have that problem. But you realize of course that this is a purely aesthetic preference. There is neither an empirical nor a philosophical reason why it should correspond to the way the world actually is.

    > It's a common mistake to think that patterns represent increased information; in fact they represent decreased information. <

    I doubt that's the case. New patterns do represent new information. Just try deriving the intricacies of earth's ecosystems by deploying quantum mechanics. The question is where does this information come from: is it all in the basic laws of nature, or are there local / emergent phenomena we also need to take into account? I don't know, but I don't think it's a slam dunk in favor of reductionism.

    Yoshi,

    > For the singularities in thermodynamics, I can either derive the thermodynamics equations from Newtonian dynamics and then calculate the behavior of a specific system at a phase transition ( where they have singularities), what I would call the emergence point of view. Or I can solve the Newton equations directly for the system in question ( for something like 10^6 particles), calculate from this ensemble the free energy (and then try to extrapolate to the thermodynamic limit), this I would call the reductionist point of view <

    That, to me, amounts to a statement that goes well beyond what you can actually establish. Empirically, you simply can't do that, so you don't know whether solving Newton's equations directly would give you the desired result. Besides, where exactly from those equations would you get parameters dealing with, say, temperatures and densities of phase transitions?

    ReplyDelete
    Replies
    1. > That, to me, amounts to a statement that goes well beyond what you can actually establish. Empirically, you simply can't do that, so you don't know whether solving Newton's equations directly would give you the desired result. Besides, where exactly from those equations would you get parameters dealing with, say, temperatures and densities of phase transitions? <

      Well, the temperature is essentially the average energy of the particles, and in a very simple toy model I actually believe I can show a phase transition empirically, from first principles. Consider a one-dimensional system with N particles, where each particle carries a potential which has a preferred distance l, a depth V(l) < 0, and goes to zero at infinity (for example a Lennard-Jones potential). The minimal-energy configuration of this system is reached when the particles are at rest at a distance l from each other, forming a 1D crystal with total length N*l. (Then the total energy of the system is N*V(l), and you would need to add energy to move or accelerate any one particle.) On the other hand, if the average energy of the particles (the temperature) is far greater than the depth of the potential, then the motion of the particles is more or less unperturbed and they would occupy the entire space of the system; that is, the average distance between them would be L/N, where L is the size of the container. The system then behaves like a gas. Therefore there should be a phase transition somewhere in between, and the critical energy should be zero (that is, a kinetic energy of -V(l)), such that a particle can just escape its nearest neighbor. The phase transition should be observable on a microscopic level when the particles change from a bound state to an unbound one. (I am also somewhat confident that I can show this behavior numerically for systems with a few hundred particles.)
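
      Something along the lines of the following rough sketch (Python/numpy; the parameters and setup are illustrative choices of mine, not a validated molecular-dynamics study) already shows the two regimes for a short chain: at low kinetic energy the mean spacing stays near the potential minimum, at high kinetic energy the particles drift apart.

        import numpy as np

        N = 32                     # particles
        eps, sigma = 1.0, 1.0      # Lennard-Jones well depth and length scale
        r0 = 2.0 ** (1.0 / 6.0)    # spacing that minimises the pair potential
        dt, steps = 0.001, 40000   # small step; this is a sketch, not production MD

        def forces(x):
            r = np.diff(x)         # nearest-neighbour separations
            dVdr = 4 * eps * (-12 * sigma**12 / r**13 + 6 * sigma**6 / r**7)
            f = np.zeros_like(x)
            f[1:] += -dVdr         # force on the right-hand particle of each pair
            f[:-1] += dVdr         # equal and opposite force on the left-hand one
            return f

        def mean_spacing(kinetic_per_particle, seed=0):
            rng = np.random.default_rng(seed)
            x = np.arange(N) * r0                  # start in the "crystal" configuration
            v = rng.standard_normal(N)
            v *= np.sqrt(2 * kinetic_per_particle / np.mean(v**2))
            f = forces(x)
            for _ in range(steps):                 # velocity Verlet integration
                x = x + v * dt + 0.5 * f * dt**2
                f_new = forces(x)
                v = v + 0.5 * (f + f_new) * dt
                f = f_new
            return np.mean(np.diff(x))

        print("low energy (bound):   ", mean_spacing(0.1))   # stays near r0 ~ 1.12
        print("high energy (unbound):", mean_spacing(3.0))   # spacing grows well past r0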

      However, we already know that there are problems which are solvable for a reductionist. The interesting question is whether there are systems which are not. And furthermore, what exactly do you mean by "empirically"? There is of course no chance of solving such a system for a real macroscopic case. So would you consider a lack of computational resources to be emergent behavior?

      Delete
  31. Jesse,

    again, thanks for the extensive comments, but I really can't address everyone's point in any detail. This ain't my day job, I'm afraid. So, just a few picks:

    > Most reductionists wouldn't say that! Rather, they just believe it would be possible in principle to deduce all behaviors from the microstate and the fundamental micro-laws <

    Yes, but the more I hear that kind of statement the more I begin to think that it amounts to a sort of bizarre metaphysical faith that simply ignores the facts on the ground.

    > knowledge of how higher-level characteristics like temperature can be "read off" a given microstate <

    How, exactly? Someone's got to do the work, not just talk the talk.

    > a Laplacian demon who knows the exact initial microstate might only be able to make statistical predictions <

    For me the plausibility of even that idea went out the window after reading Ladyman and Ross. As they put it, there ain't no "microbangings," which seems to undermine the possibility of a Laplace demon, statistical or not.

    > It seems to me that science has pretty consistently been successful in making it plausible that these assumptions are true by showing in various approximate ways how the behavior of higher-level systems can be explained in terms of lower level ones. <

    Tell that to anyone who is not a fundamental physicist. We haven't even been able to reduce classical genetics to molecular genetics, as Philip Kitcher has convincingly (I think) argued.

    > scientists have had a good deal of success in explaining how cells in an embryo self-organize themselves by a process in which each cell is sending out molecular signals <

    We need to be wary of an opposite straw man here: exploring the possibility of true emergent properties, like I'm doing in these posts, does not at all amount to denying the success of reductionist programs in science. It simply implies that said success is not universal. Same applies to your comments about reducing chemistry to quantum mechanics: most of it, yes; all of it, there appears to be some serious doubt.

    I don't find your analogy with Darwinism convincing. First, there actually *are* explanations of even adaptive features in biology that do not invoke natural selection (or only natural selection). Second, in the case of emergent properties we *do* have plenty of examples of phenomena that do not appear prima facie explainable by deploying only fundamental physics. Whether this will hold true seconda facie, so to speak, remains to be seen.

    > some degree favored as a null hypothesis by Occam's razor <

    I've written a whole chapter of my Making Sense of Evolution about why null hypotheses are a really, really bad idea. As for Occam's razor, it's a heuristic known to fail, not a rock-solid metaphysical, let alone empirical, principle.

    ReplyDelete
    Replies
    1. Jesse:

      "Most reductionists wouldn't say that! Rather, they just believe it would be possible in principle to deduce all behaviors from the microstate and the fundamenatl micro-laws"

      Massimo:
      "Yes, but the more I hear that kind of statement the more I begin to think that it amounts to a sort of bizarre metaphysical faith that simply ignores the facts on the ground."

      Guys, it is *not* possible to deduce all behaviors from the microstate and fundamental micro-laws. Physicists have known that since the 19th century, when Poincaré showed that the 3-body problem in Newtonian gravity exhibits chaotic behavior. Deterministic mechanics assumes exact knowledge of the initial conditions (i.e. the microstate), with infinite precision. Heisenberg's uncertainty principle says that this is just wishful thinking, and puts a hard lower bound on the unremovable error. And since there are unremovable errors in the initial conditions, they will grow exponentially for chaotic systems, rendering them unpredictable after a finite amount of time. A 3-body Newtonian gravitating system is an example of this. A weather forecast is another. The Wikipedia article on chaos theory lists a whole bunch of other examples.
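
      To see the point in miniature, here is a toy sketch of my own (the logistic map rather than gravity): two starting points that differ by one part in 10^12 become completely uncorrelated after a few dozen iterations of a perfectly deterministic rule.

        def logistic(x, n):
            # x -> 4x(1-x), iterated n times; fully deterministic, yet chaotic
            for _ in range(n):
                x = 4.0 * x * (1.0 - x)
            return x

        x0 = 0.2
        for n in (10, 30, 50):
            a, b = logistic(x0, n), logistic(x0 + 1e-12, n)
            print(n, round(a, 6), round(b, 6), "difference:", abs(a - b))
        # the tiny initial error roughly doubles every step, so by n ~ 50
        # the two trajectories bear no relation to each other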

      So I really suggest that you read up on chaos theory. Even just a Wikipedia article will be illuminating. Nature is *not* deterministic, and emergent behavior can *not* (always) be deduced from the microstate and fundamental laws. Reductionism works only for non-chaotic systems.

      Best, :-)
      Marko

      Delete
  32. Yair,

    > Do you disagree that what you are describing is essentially the Aristotelian concept of soul? <

    Absolutely, I don't see what any of this has to do with anyone's conception of the soul, whatever that is.

    > You don't see that strong emergence implies going beyond the laws of microscopic physics, and hence is in opposition to them. <

    That's where your problem lies: going beyond does not at all mean to be in opposition with. The principles of community ecology "go beyond" those of population genetics, in the sense that they apply to more complex systems, but they do not oppose population genetics.

    > From where I am sitting scientism about the external world is correct <

    I don't know what you mean by "scientism," since it's usually deployed as an insult toward people who apply science in domains where it doesn't belong. But perhaps you are simply sitting in the wrong place, maybe a change of perspective would help? ;-)

    > Reductionism works. <

    Often.

    ReplyDelete
    Replies
    1. Massimo:

      << That's where your problem lies: going beyond does not at all mean to be in opposition with. The principles of community ecology "go beyond" those of population genetics, in the sense that they apply to more complex systems, but they do not oppose population genetics. >>

      That is just not an option open to you. As JesseM is trying to force you to face up to, physics predicts certain end-states (or probabilities thereof), given initial conditions. Either your new laws predict different ones or not. If they do, they are in violation of the laws of physics. If they make no predictions contrary to physics, and they do make observable predictions, then these predictions are either summaries [i.e. coarse descriptions of aspects] of the microphysical results or not. If they are summaries [e.g. thermodynamics as statistical mechanics], then reductionism is correct. If they are not summaries, then you need to explain how something like the position of a measuring needle [or the location of cells and muscles, to use JesseM's example] is not a summary of the underlying physics. This seems rather absurd.

      << perhaps you are simply sitting in the wrong place, maybe a change of perspective would help? ;-) >>

      That is always a possibility. From where I'm sitting, though, it doesn't look likely :-)

      << > Reductionism works. <

      Often. >>

      JesseM replied to this point at greater length and better than I could. [I have minor quibbles about his physics, but nothing that affects his point.] I consider reductionism to be a fairly well-established scientific fact; to quote JesseM, "the evidence so far has been a steady drip of new cases that support that hypothesis, and no evidence that falsifies it."

      << > Do you disagree that what you are describing is essentially the Aristotelian concept of soul? <

      Absolutely, I don't see what any of this has to do with anyone's conception of the soul, whatever that is. >>

      From the SEP entry on "Aristotle's Psychology": "[Aristotle] introduces the soul as the form of the body, which in turn is said to be the matter of the soul". These forms have causal power, as in "It is manifest, therefore, that what is called desire is the sort of faculty in the soul which initiates movement" (Aristotle). So there is no real substance, matter, that is the soul - only "emergent" properties of the substance (matter) of the body (brain). Matter constructed in these forms has certain causal powers, like consciously initiating movement, that are not REDUCIBLE to the constituents but rather arise from the whole object's (i.e. brain neural network?) structure. This is the thesis of strong emergence, this is the thesis of Aristotle. If you believe in strong emergence (for consciousness), you believe in (broadly-Aristotelian) souls.

      Cheers,

      Yair Rezek

      Delete
  33. My reply to Massimo is a bit long so I'll have to break it up again, here's part 1:
    "again, thanks for the extensive comments, but I really can't address everyone's point in any detail. This ain't my day job, I'm afraid."

    No problem, I'll understand if you don't respond to my current reply; hopefully you will be able to read and consider it though, and if anyone else wishes to comment I'll try to reply to them.

    "> knowledge of how higher-level characteristics like temperature can be "read off" a given microstate <

    How, exactly? Someone's got to do the work, not just talk the talk."

    It's part of the very meaning of "statistical mechanics" that all macro-quantities (like pressure) and characteristics (like phase) must be given explicit definitions in terms of microstates; if one can't do this, one is not really doing statistical mechanics at all. In the specific case of temperature, it is defined as the derivative of the system's total energy with respect to its entropy (equivalently, the inverse of the derivative of entropy with respect to energy), when the volume and number of particles are held constant. Both energy and entropy can be "read off" the microstate in a well-defined way, so this is definitely a micro-level definition of temperature. In quantum mechanics the "microstate" would be the full quantum state vector of the system, and every state vector has a well-defined "expectation value" for total energy. Meanwhile, the entropy of a macrostate is defined to be proportional to the logarithm of the total number of distinct microstates which would all have the macro-variables corresponding to that macrostate (if the microstates can vary continuously, so there are an infinite number of them, one can also define entropy in terms of the logarithm of the "phase space" volume occupied by a given macrostate, but in QM I believe they avoid the issue of continuously varying microstates by considering only the "basis vectors" for a given system, where the number of basis vectors is large but finite and any possible microstate is equivalent to a weighted sum of basis vectors). I suppose the entropy would in principle depend on what macro-variables you choose to examine, but typically one divides the whole system into subsystems and then defines the "macrostates" in terms of how much of the total energy is held by each subsystem.
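
    To make that concrete with a toy example of my own (Python; N independent two-level units with unit energy gap and k_B = 1, not any real material): the entropy of the macrostate "n units excited" is just the logarithm of the number of microstates compatible with it, and the temperature is read off from 1/T = dS/dE by a finite difference.

      from math import comb, log

      N = 1000                            # number of two-level units

      def entropy(n):
          # S = ln(Omega), where Omega = C(N, n) microstates share the macrostate "n excited"
          return log(comb(N, n))

      for n in (50, 100, 200, 400):
          dS_dE = (entropy(n + 1) - entropy(n - 1)) / 2.0   # total energy E = n, so dE = dn
          T = 1.0 / dS_dE
          print(f"n = {n:4d}   S = {entropy(n):8.2f}   T = {T:.3f}")
      # few excitations -> low temperature; as n approaches N/2 the temperature
      # diverges, exactly as the textbook two-level-system formula says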

    Obviously outside of statistical mechanics, we may have macro-entities defined in a more qualitative know-it-when-you-see-it way, like "cells" in biology (although in any case where the definitions aren't mathematical, there are presumably going to be ambiguous cases; for example, in the process of a cell dying and then breaking down, I don't think anyone would claim there is a precise well-defined moment when it ceases to be a "cell"). But as a thought-experiment, suppose you were really given the precise microstate of a physical system (or the evolution of the microstate over a given interval), as well as some sort of arbitrarily powerful visualization software that could, for example, plot the density (or probability density in QM) of each type of atom in space on large scales with 3D pixels of different colors and hues, and that would allow you to zoom in on any region, look at cross-sections, etc. Would you really suggest that in this ideal case, it would still not be possible to "read off" the identity and location of macro-entities like cells, muscles etc. from the microstate? This would seem tantamount to claiming that cells and muscles and so forth are not made only of atoms.

    ReplyDelete
  34. Reply to Massimo, part 2:
    "> a Laplacian demon who knows the exact initial microstate might only be able to make statistical predictions <

    For me the plausibility of even that idea went out the window after reading Ladyman and Ross. As they put it, there ain't no "microbangings," which seems to undermine the possibility of a Laplace demon, statistical or not."

    Reading the summary at http://theboundsofcognition.blogspot.com/2010/08/ladyman-ross-on-metaphysics-of.html it sounds like their criticism of "microbangings" is just a critique of the classical notion where everything is made of little particles with well-defined positions and speeds, and the particles influence one another by collisions. But nothing in the notion of a Laplacian demon requires such a classical conception (this conception would not even accurately describe classical electromagnetism!) The basic concept is just that every system has some type of precise physical state--the same as what is meant by a system's "microstate"--and that if an ideal Laplacian demon could know a system's initial microstate, along with the fundamental physical equations which determine how microstates evolve, along with a perfect ability to calculate what happens when you plug any arbitrarily complicated initial microstate into these equations and evolve it forward, then this ideal Laplacian demon could make the most accurate possible predictions about the state of the actual system at a later time. The demon is obviously an idealization of far less precise real-world predictions by humans, but this is basically a claim about how the real laws of nature work, that real systems do in fact have precise "microstates" and that their evolution is in fact determined by some fundamental physical laws (which might have a random element).

    Quantum mechanics does assume every system has a precise physical state, the "quantum state" or "state vector". And for any "measurement basis" such as position or momentum, any given quantum state can be broken down into a complex sum (i.e. a sum where the coefficients are complex numbers), or "superposition", of different possible quantum states where all the particles in the system have precise values for the measurement variable being used as a basis (precise positions, say), each of these states in the sum being called a "basis vector". The system's quantum state at a given moment tells you the probability distribution of getting different outcomes if you perform any measurement on the system at that moment, and immediately after a measurement with maximal possible precision, the system's state is treated as having "collapsed" onto a single basis vector, so that all the other coefficients in the sum are now zero (that single basis vector is the system's new quantum state at the moment of measurement). Between measurements the quantum state evolves in a deterministic way, obeying the Schroedinger equation (in the many-worlds interpretation it continues to evolve in a deterministic way when a measurement is made, leading the experimenter and measuring device to end up in a superposition of different states where they found different outcomes). So, this is all still compatible with what I said about the Laplacian demon: if this demon knows the precise initial quantum state of the system, then if the universe is correctly described by the laws of quantum mechanics, the demon should be able to make the best possible predictions about the results of later measurements on the system (or perfect predictions about the system's later quantum state, if the many-worlds interpretation is true and there is no "collapse" of the superposition to a basis vector during measurements). In the context of reductionism, the important point is that according to QM there's no additional macro-information or macro-laws that would allow a different demon to make better predictions.
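
    A minimal sketch of that formalism, for a single two-level system rather than anything macroscopic (my own toy example in Python): the state vector is evolved deterministically by exp(-iHt), and the squared magnitudes of its components in the chosen basis give the measurement probabilities.

      import numpy as np
      from scipy.linalg import expm

      H = np.array([[0.0, 1.0],
                    [1.0, 0.0]])               # a simple Hamiltonian (hbar = 1)
      psi0 = np.array([1.0 + 0j, 0.0 + 0j])    # initial state vector |0>

      for t in (0.0, 0.5, np.pi / 4, np.pi / 2):
          psi_t = expm(-1j * H * t) @ psi0     # deterministic (Schroedinger) evolution
          probs = np.abs(psi_t) ** 2           # Born-rule probabilities for outcomes 0 and 1
          print(f"t = {t:5.3f}   P(0) = {probs[0]:.3f}   P(1) = {probs[1]:.3f}")
      # the evolution between measurements is fully deterministic; only the
      # measurement outcome is statistical, with probabilities fixed by psi_t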

    ReplyDelete
  35. Reply to Massimo, part 3:
    ">t seems to me that science has pretty consistently been successful in making it plausible that these assumptions are true by showing in various approximate ways how the behavior of higher-level systems can be explained in terms of lower level ones. <

    Tell that to anyone who is not a fundamental physicist. We haven't even been able to reduce classical genetics to molecular genetics, as Philip Kitcher has convincingly (I think) argued."

    I would have to see Kitcher's argument, and from the article at http://plato.stanford.edu/entries/molecular-biology/#3.1 it seems he has several on the subject of classical vs. molecular genetics so I'm not sure if you're referring to a specific paper or a number of them. But from the summary, it sounds like Kitcher was saying that classical genetics provides conceptual simplifications of the "gory details" of molecular genetics, like paired gene variants:

    However, classical genetics retained its own schema. For example, the independent assortment of genes in different linkage groups (the amended version of Mendel's second law) was explained, according to Kitcher, by instantiating a pairing and separation schema, thereby showing that chromosomal pairing and separation was a unifying natural kind (Kitcher 1999). Such unification would be lost if attention were focused on the gory details at the molecular level.

    This seems to be an issue of how best to understand high-level phenomena, what to pay attention to and what to leave out; but as I already said, I think it is just a strawman notion of reductionism which says that the micro-level is the best way to understand complex systems; that's why my notion of reductionism referred only to the in-principle possibility of reading off macro-characteristics from a complete description of the microstate, and to the notion that the evolution of microstates is determined solely by fundamental physical laws. There is no reference there to the best way to understand or explain anything! As an analogy, in the well-known cellular automaton "The Game of Life" (at http://en.wikipedia.org/wiki/Conway's_Game_of_Life ) the evolution of the black and white pixels is entirely determined by the fundamental rules describing how a pixel's color on each turn changes (or remains unchanged) based on its color and the color of its 8 neighbors on the previous turn. But these fundamental dynamics lead to all sorts of interesting higher-level phenomena which follow higher-level rules that wouldn't be obvious from the lower-level ones (though they can always in principle be deduced from them), like the "glider" patterns which have a characteristic behavior that involves moving diagonally while cycling between four different patterns of black squares. I would say that a person didn't understand much about The Game of Life if they only knew about the bottom-level rules but knew nothing of interesting higher-level patterns that typically emerge from randomly-chosen initial states.
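
    For anyone who wants to see that example directly, here is a bare-bones version (Python, toroidal grid; just a sketch): only the local birth/survival rule is programmed, yet a glider reappears in its original shape four steps later, shifted one cell diagonally - a higher-level regularity stated nowhere in the bottom-level rule.

      import numpy as np

      def step(grid):
          # count the 8 neighbours of every cell (with wrap-around at the edges)
          n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
          # a cell is alive next turn if it has 3 live neighbours,
          # or if it is alive now and has 2 live neighbours
          return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

      grid = np.zeros((12, 12), dtype=int)
      for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:   # the standard glider
          grid[y, x] = 1

      g4 = grid.copy()
      for _ in range(4):
          g4 = step(g4)
      # after 4 steps the glider has the same shape, shifted one cell down and right
      print(np.array_equal(np.roll(np.roll(grid, 1, 0), 1, 1), g4))   # -> True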

    "We need to be weary of an opposite straw man here: exploring the possibility of true emergent properties, like I'm doing in these posts, does not at all amount to deny the success of reductionist programs in science. It simply implies that said success is not universal."

    But the success never could be universal, given that we will never have the time or computational power to verify that every macro-system's behavior can be predicted in a reductionist way. I have not claimed that there is any definitive proof that reductionism holds universally (just like there is no definitive proof that conservation of energy holds universally in every local region of space, as is assumed by current physical theories), just that the evidence so far has been a steady drip of new cases that support that hypothesis, and no evidence that falsifies it.

    ReplyDelete
  36. Reply to Massimo, part 4:
    "Same applies to your comments about reducing chemistry to quantum mechanics: most of it, yes; all of it, there appears to be some serious doubt."

    Well, note the distinction I made earlier between two types of "difficulties" in reducing chemistry to quantum mechanics: "Likewise, Massimo mentioned the difficulty of reducing chemistry to quantum physics, but AFAIK this is mainly a problem of not having enough computing power, not a problem of there being cases where we do have enough computing power to make detailed predictions using quantum laws alone, but find that they contradict what's observed empirically in chemistry (if there are any such cases, someone please point them out)." While the second type of difficulty would actually constitute evidence to falsify the hypothesis, the first type of difficulty--that of lacking sufficient computing power--is not really evidence against the hypothesis, it just means there are cases where we don't have clear evidence for or against it. Much the same is true of Darwinian evolution, there are cases of complex adaptations where we lack phylogenetic, fossil, or genetic evidence about the sequence of smaller adaptive changes that produced them, but this is not evidence against the idea that complex adaptations are produced by a series of smaller adaptive changes; pointing to specific cases where we are ignorant of the details and using that to try to argue that the theory must be wrong is the classic "God of the gaps" strategy. An observation can really only be counted as evidence against a theory if, in a possible world where the theory was actually correct, such an observation would be unlikely or impossible. In a possible world where all chemistry really could in principle be derived from quantum physics, yet we still had the same limited computing power as in our world, do you think the extent to which we would have actually verified this emergence would be any better than in the real world?

    "I don't find your analogy with Darwinism convincing. First, there actually *are* explanations of even adaptive features in biology that do not invoke natural selection (or only natural selection)."

    Are you talking about genetic drift? It's true that in a small population a fitness-enhancing allele might significantly increase its frequency for reasons that have nothing to do with selection, but one could simply define the neo-Darwinian theory as the idea that "for any heritable adaptive traits in an organism, the genes (or other stores of hereditary information) for them arose as random variations, and increased their frequency in the population due to some combination of natural selection and random chance." And as the number of genes involved in a given trait increases, the probability that all or most of them could have spread through the population for reasons unrelated to selection would presumably get very close to zero; if the neo-Darwinian hypothesis is true, only in the case of adaptive traits involving a very small number of genes is it at all plausible that selection didn't play a critical role, or that the whole set of genes didn't arise as a series of small adaptive mutations.

    ReplyDelete
  37. Final part of reply to Massimo:
    "Second, in the case of emergent properties we *do* have plenty of examples of phenomena that do not appear prima facie explainable by deploying only fundamental physics."

    Again I would ask you to distinguish between the two notions of "explainable" above, and ask you to consider a thought-experiment where a magic genie gives us an ideal supercomputer that could calculate the evolution of any arbitrarily complex initial state according to quantum laws (primarily quantum electrodynamics, for basically any terrestrial macro-phenomenon), and an ideal measuring device which could give us the precise initial quantum state of any system (in a measurement basis where the range of possible positions and momenta in superposition for each particle is the smallest possible under the uncertainty principle, say)--in this case, would you say there'd be compelling arguments to bet against the idea that the correct macro-behaviors would emerge from the quantum simulation (as opposed to just remaining cautiously agnostic about what would happen until the simulation had been performed)? If so, what would the arguments be?

    ReplyDelete
  38. Marko, when scientists and mathematicians talk about chaos they are usually referring to what is called "deterministic chaos", which can be shown to exist in mathematical models which are, by construction, completely deterministic. These models are such that if one knew the initial state with infinite precision, and had a perfect (non-computable) ability to figure out the consequences of operating on that state with the dynamical equations, one could indeed predict the system's exact later state (and the ideal Laplacian demon would have both these abilities). If one knows the initial state to only finite precision, one's predictions will diverge from the system's actual behavior by a large amount after a finite time (the butterfly effect), but by making the precision of the initial state better and better, one can make the time before one's predictions become significantly inaccurate longer and longer. Obviously in practice there are limits to how precisely we can measure an initial state, but if we're talking about gathering evidence for reductionism in practice (rather than what reductionism says about the abilities of an idealized Laplacian demon in principle), I'd say that it's mostly sufficient to show that a simulation based on reductionist assumptions shows the same type of behavior as the original system (in both statistical and qualitative senses). For example, say that we have an atmospheric simulation based on dividing the atmosphere into a huge number of tiny cells and using some physics (thermodynamics and such) to figure out how each cell's properties should affect its neighbors. And suppose the resulting simulation shows all the same types of weather patterns that are seen in real life, with the same time-averaged statistics in terms of things like frequency and size distributions of storms and pressure systems and such. Wouldn't this be pretty good evidence that the known physics of how small volumes of air influence their neighbors is sufficient to account for large-scale atmospheric phenomena, even if chaos prevented the simulation from being able to actually predict real weather more than a week or so out? Similarly if we had "mind uploads", detailed simulations of human brains at the synaptic level based only on known rules for local interactions between individual neurons, and they were behaviorally indistinguishable from real humans (passing the Turing test and such), wouldn't this be pretty strong evidence that intelligent human behavior emerges in a similar way from local neural interactions, even if chaos prevented the upload from allowing us to precisely predict what the real brain would do when presented with some given sensory inputs?
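
    A toy version of that "same statistics even when exact prediction fails" point (my own sketch, using the chaotic logistic map rather than weather): two trajectories started from nearly identical initial conditions lose all pointwise agreement, yet their long-run histograms over [0,1] agree to within sampling noise.

      import numpy as np

      def trajectory(x0, n=200000):
          xs = np.empty(n)
          x = x0
          for i in range(n):
              x = 4.0 * x * (1.0 - x)   # chaotic logistic map
              xs[i] = x
          return xs

      a = trajectory(0.2)
      b = trajectory(0.2 + 1e-12)       # almost the same initial condition

      print("fraction of later points that still agree:", np.mean(np.isclose(a[100:], b[100:])))
      ha, _ = np.histogram(a, bins=20, range=(0, 1), density=True)
      hb, _ = np.histogram(b, bins=20, range=(0, 1), density=True)
      print("largest difference between the two histograms:", np.abs(ha - hb).max())
      # pointwise prediction fails completely (chaos), but the statistical,
      # "climate-like" description of the two runs is essentially identical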

    ReplyDelete
    Replies
    1. Jesse,

      I agree with most of what you said, and I'd say that intelligent human behavior does indeed emerge from local neural interactions (or a simulation thereof).

      However, there is one thing I need to point out. If you have no way of calculating precisely the future state of the system (say, a simulated brain) other than letting the system evolve on its own (i.e. letting the simulation run its course), then you do not have a mechanism for "explaining" (i.e. predicting the future properties of) its behavior in terms of the fundamental low-level laws of physics (or the simulation algorithms). That is the missing piece between determining the "type" of behavior and determining the "details" of behavior. Massimo already pointed this out as "walking the walk" --- if chaos prevents one from calculating the future state of the simulated brain, there is no way for the simulation algorithms to give an answer to the question "does the simulated brain have a consciousness (or free will, or such)?". The simulated brain might or might not develop these properties, and there is absolutely no way to tell in advance what will happen, short of letting the simulation run and looking at the end-result. That is the point of the new quality emerging, for which reductionism does not work.

      For example, suppose you are given enough computer hardware to simulate a bunch of human-brain-sized neural networks. You let them evolve over the course of several years, with appropriate sensory input similar to the one a real human would get. Then you find that some of your brains develop self-consciousness, while others do not. Then I claim that there is in principle no way to explain *why* the first group developed self-consciousness while the second group didn't. Chaos theory forbids us from knowing that. They all run the same algorithm, with similar (but never exactly equal) initial data and subsequent sensory input. So their consciousness is a consequence of the interactions between the algorithm and initial data, but there is no way to know what initial data would give rise to a conscious brain, and what data wouldn't, aside from trial and error.

      So there are two "types" of behavior (conscious and non-conscious), in principle determined by the same simulation algorithm, but every instance of the simulation will bifurcate to one or the other type at some point in time, and there is absolutely no way to tell which way it will go. In other words, the deterministic algorithm does give a "fertile ground" for the development of self-consciousness (or some other property of the brain), but still cannot predict its appearance, or lack thereof. There is always a random element (given through fundamentally slightly randomized initial data) that starts growing and flips the algorithm one way or the other at some point. So the brain properties like self-consciousness et al can indeed be *described* by the algorithm, but not *predicted* by it.

      Best, :-)
      Marko


      Delete
  39. Also, on the subject of quantum uncertainty, it's a misconception to say that uncertainty means we just don't know each particle's precise position and momentum even though they do in reality have precise values. This could be true if a "hidden-variables" interpretation known as Bohmian mechanics is correct (see http://plato.stanford.edu/entries/qm-bohm/ for info), but the interpretation is constructed in such a way that it's impossible in principle to have any empirical evidence to support it, and physicists and philosophers who consider matters of interpretation usually find this interpretation too contrived and inelegant to seem very plausible. More commonly it is supposed that the "quantum state" of a system really does give the full physical information about it, so that quantum wave/particles really are in some sense smeared out over a superposition of different positions and momenta; "uncertainty" is not a matter of a gap in our knowledge about some true objective reality. Moreover, the evolution of the quantum state is linear, so that it is not obvious how chaos theory can really apply to quantum systems, as chaos requires nonlinearity... this makes the question of whether there can be any such thing as "quantum chaos" a bit tricky. Apparently the answer has to do with a quantum concept called decoherence, where one considers the behavior of just one subsection of a quantum system that is in thermal contact with a larger part of the system; if one considers the behavior of the subsystem in isolation one can get behavior that looks chaotic even though the system as a whole is evolving in a linear way. See the discussion starting on p. 7 of this article for more: http://www.iqc.ca/publications/tutorials/chaos.pdf (and also the "environment as record" section on p. 10)

    ReplyDelete
  40. Hi Massimo

    In response to your very interesting essay, Sean Carroll wrote on October 11, 2012 4:46 PM
    “I don't know what it would mean to "derived physical reductionism," nor do I think that qualitatively new emergent behavior is absent from Newton's laws (depending on definitions). The point is simply that Newton's laws, applied to a set of particles, gives you a closed set of equations. With appropriate initial conditions, the solutions are unique. There is no room for additional causal influence. The equations give unique answers; you can't get a different answer without violating the equations. There is an important and interesting discussion to be had about emergence, and it has nothing to do with being unable to predict behavior from component parts, nor with new "causal powers."

    However, this billiard ball point of view, based in Newtonian physics, is invalid once one takes quantum physics into account. A classic example is the fact that the mechanism of superconductivity cannot be derived in a purely bottom up way, as emphasized strongly by Bob Laughlin in his Nobel Prize lecture; see R B Laughlin (1999): `Fractional Quantisation'. Reviews of Modern Physics 71: 863-874. The reason is that the existence of the Cooper pairs necessary for superconductivity is contingent on the nature of the relevant ion lattice; they would not exist without this emergent structure, which is at a higher level of description than that of the pairs. Hence their very existence is the result of a top-down influence from this lattice structure to the level of the Cooper pairs. The concept of a given set of unchanging interacting particles is simply invalid. They only exist because of the local physical context. One can also find many examples where the essential nature of the lower level entities is altered by the local context: neutrons in a nucleus and a hydrogen atom incorporated in a water molecule are examples.

    These kinds of top-down effects completely undermine Sean's Newtonian billiard ball picture. They provide good reasons that top down causation can take place in systems such as digital computers and the human brain without contradicting the underlying physics. And that is why the existence of computers such as the one you are using at this moment is possible, without invoking magic. It is the result of top down causation from the human mind into the physical world.

    George Ellis

    ReplyDelete
    Replies
    1. George, as I mentioned in part 2 of my last reply to Massimo, nothing in the modern notion of "reductionism" (as I think most physics-literate scientists would define it) requires us to have a classical picture of billiard-ball-like particles with well-defined positions and speeds that interact via localized collisions. Reductionism is still true if a complete knowledge of the system's initial quantum state would in principle allow for the best possible predictions about its later behavior using fundamental quantum laws, and although I haven't studied superconductivity, I'm pretty sure the emergent "ion lattice" that Laughlin refers to is just a particular type of quantum state that a system of particles can be in, whose behavior is predicted theoretically according to the standard method of plugging it into the Schroedinger equation (or using various approximation schemes that are based on the assumption that the exact evolution would be given by the Schroedinger equation). See for example the following paragraph on p. 92 (p. 10 of the pdf) of the discussion of superconductivity at http://www.qudev.ethz.ch/phys4/studentspresentations/supercond/Ford_The_rise_of_SC_6_7.pdf :

      This coherence results in a superconductor behaving rather as if it is a “giant molecule” i.e. an “enormous quantum state”. In the early nineteenth century, when Ampère had proposed that magnetism can be understood in terms of electric currents flowing in individual atoms or molecules, it was objected that no currents were known to flow without dissipation. He has long since been vindicated by quantum theory, which gives rise to stationary states in which net current flows with no resistance. A superconductor is a dramatic macroscopic manifestation of a quantum mechanical state, which behaves like a giant molecule with no obstruction for electron flow: there is no resistance.


      Also note that, as I mentioned, a system's initial quantum state can always in principle be measured empirically, by measuring the state of every single particle at the initial time in some measurement basis (for example, measuring each particle's exact position, or choosing a basis where every particle's position and momentum are determined to be within a certain small interval).

      Delete
    2. JesseM, sure the lattice emerges according to well defined bottom up principles. Once this has happened, it then exists as an entity at a higher level of scale than the particles out of which it is composed (you have to use quite different variables to describe this lattice structure than you do to describe the particles: these variables are not reducible to lower level variables). Until it has so emerged, no Cooper pairs are possible. Once it has emerged, they can come into existence. Thus it is the very existence of the higher level structure (the lattice) that enables the lower level constituent entities (the Cooper pairs) to come into being.

      Cooper pairs can’t act in a bottom-up way to create anything until they exist. They only exist once the higher level structure calls them into existence. It is because superconductivity depends on this mechanism that you can’t derive it in a purely bottom up way. This is stated strongly by Laughlin in his Nobel Prize lecture as follows:

      "One of my favourite times in the academic year occurs in early spring when I give my class of extremely bright graduate students, who have mastered quantum mechanics but are otherwise unsuspecting and innocent, a take-home exam in which they are asked to deduce superfluidity from first principles. There is no doubt a very special place in hell being reserved for me at this very moment for this mean trick, for the task is impossible. Superfluidity, like the fractional Hall effect, is an emergent phenomenon - a low-energy collective effect of huge numbers of particles that cannot be deduced from the microscopic equations of motion in a rigorous way, and that disappears completely when the system is taken apart ... The students feel betrayed and hurt by this experience because they have been trained to think in reductionist terms and thus to believe that everything that is not amenable to such thinking is unimportant. But nature is much more heartless than I am, and those students who stay in physics long enough to seriously confront the experimental record eventually come to understand that the reductionist idea is wrong a great deal of the time, and perhaps always... The world is full of things for which one's understanding, i.e. one's ability to predict what will happen in an experiment, is degraded by taking the system apart, including most delightfully the standard model of elementary particles itself."

      [R B Laughlin (1999): `Fractional Quantisation'. Reviews of Modern Physics 71: 863-874]

      The key reason this is true is that contextual effects enable lower level entities to exist, which is why it can’t be derived in a purely bottom up way. For other examples of top-down causation, see my article at arXiv:1108.5261

      George Ellis

      Delete
  41. The argument in the essay fails. Are phases more fundamental than micro-constituents? No, liquid acetone is very different from liquid water. Are phases more fundamental than thermodynamics? No, micro-constituents ultimately wind down (lose heat) whatever their phase. What is the reductive level? Micro-constituents and their thermodynamic behavior.

    ReplyDelete
  42. Batterman does a great job of bringing out the importance of singular limits and phase transitions. Nonetheless, I think he goes too far in thinking that this poses a problem for ontological reduction (explanatory reduction is another matter).

    For a good response to Batterman, I recommend the following paper by Callender and Menon: "Turn and Face the Strange... Ch-ch-changes".

    George Ellis comments: "the essential nature of the lower level entities is altered by the local context."

    Yes, but this is simply to say that when an entity is interacting with its environment, those interactions cannot be ignored. In principle, every contextual (top-down) feature could be accounted for (by a Laplacian demon) in terms of same-level (bottom-bottom) effects.

    This is all we need for the completeness of effective physical theory (say quantum electrodynamics), and that's enough to rule out strong emergence. (Where "strong emergence" is the claim that there are real causal features that are not instantiated in the causal features of the micro-physics.)

    ReplyDelete
  43. JesseM, Peter:

    Sure the lattice emerges according to well defined bottom up principles. Once this has happened, it then exists as an entity at a higher level of scale than the particles out of which it is composed (you have to use quite different variables to describe this lattice structure than you do to describe the particles: these variables are not reducible to lower level variables). The key point is that until it has so emerged, no Cooper pairs are possible. Once it has emerged, they can come into existence. Thus it is the very existence of the higher level structure (the lattice) that enables the lower level constituent entities (the Cooper pairs) to come into being.

    Sure, you can work out in a bottom up way what effects the Cooper pairs mediate. But they can’t act in a bottom-up way to create any physical effects at all until they exist – and their existence is contingent on the detailed nature of the higher level structure. That’s a clear case of the causal effects of context on lower levels – that is, by any reasonable definition, causation flowing downwards in the hierarchy of structure. But if you dislike the word “causation”, just call it a contextual effect.

    It is because superconductivity depends on this mechanism that you can’t derive it in a purely bottom up way. This is stated strongly by Laughlin in his Nobel Prize lecture as follows:

    "One of my favourite times in the academic year occurs in early spring when I give my class of extremely bright graduate students, who have mastered quantum mechanics but are otherwise unsuspecting and innocent, a take-home exam in which they are asked to deduce superfluidity from first principles. There is no doubt a very special place in hell being reserved for me at this very moment for this mean trick, for the task is impossible. Superfluidity, like the fractional Hall effect, is an emergent phenomenon - a low-energy collective effect of huge numbers of particles that cannot be deduced from the microscopic equations of motion in a rigorous way, and that disappears completely when the system is taken apart ... The students feel betrayed and hurt by this experience because they have been trained to think in reductionist terms and thus to believe that everything that is not amenable to such thinking is unimportant. But nature is much more heartless than I am, and those students who stay in physics long enough to seriously confront the experimental record eventually come to understand that the reductionist idea is wrong a great deal of the time, and perhaps always... The world is full of things for which one's understanding, i.e. one's ability to predict what will happen in an experiment, is degraded by taking the system apart, including most delightfully the standard model of elementary particles itself."

    [R B Laughlin (1999): `Fractional Quantisation'. Reviews of Modern Physics 71: 863-874]

    The same basic effect – lower-level entities only exist by courtesy of the detailed nature of the higher-level structure – occurs all over the place in biology, for example in symbiosis, in ecosystems, and in the case of, say, blood cells in the human body.

    ReplyDelete
    Replies
    1. The biological examples (and perhaps all 'emergent' examples) should be predictable from the reductive level, and not merely in retrospect with reference to additional levels such as natural selection. The environment is a cause capable of physical reduction, and entities are also reductively understandable physically (chemically). The interrelation is merely a matter of complexity. The superfluidity example should also be a matter of resolving the various causes, each of which should be capable of reduction, with or without human intervention as one of those causes.

      Delete
  44. The previous post merely says that the behavior of an intact system depends on its intact constitution. That's no problem for reductionism, since there may be levels of aggregation (intact systems) that are unpredictable from the reductive level, understandable only in retrospect, or perhaps not even understandable in retrospect (depending on the reductive bases used). It doesn't invoke magic, just added complexities.

    ReplyDelete
  45. Hi Massimo,

    This is a great topic, so kudos for posting about it. I've read a little (to be clear, VERY little) on emergence regarding things like spontaneous symmetry breaking, and there's something that I'd love to get some clarity on. My understanding is that we might call something emergent if we can't appeal to the lower-level physical makeup to explain the emergent phenomenon, as in cases where the lower-level constituents may be irrelevant to something like a phase transition. If I'm understanding this correctly (probably ain't), how might this be different from something like the phenomenon of paper targets tearing when you shoot ammunition made of different materials? It seems similar in that it makes little difference what the bullet is made of – whether it's steel, lead, wood, rubber, etc. – the paper target will still tear in the same or similar ways. Crude (and likely inappropriate) analogy, but I had to ask with my limited understanding of this fascinating topic. Thanks

    ReplyDelete
  46. Hi George, I'm very glad we can continue our conversation (thanks for pointing me here).

    As I see it, the debate over downward causation is not about whether higher-level entities can produce effects at the lower level; the question is whether there can be higher-level causes that are not also lower-level causes.

    An eliminativist will claim that there are no higher-level causes, but not all physicalists are eliminativists. Surely we can think that everything is microphysical without thereby denying the existence of chairs, ducks, and so on.

    A non-eliminativist physicalist (like me) will claim that higher-level entities and causes emerge (in the sense of weak emergence) from micro-physical entities and causes. So we will say that higher-level causes are real, but they are also microphysical causes (in that they supervene on the microphysics, and are nothing but constrained, structured, microphysical causal features).

    So I take it we both agree that there are real higher-level, contextual features that causally impact the lower-level processes. And obviously if we neglect these features we won't have an adequate account of the causes at work.

    However, in any particular circumstance one can account for these very same (high-level) causal features in terms of the low-level causal processes at work. The low level microphysical description is complete in this sense.

    I think we also agree here, but I'm not certain because in some of your published work you claim that the effectiveness of higher-level causes requires indeterminism at the lower level. This sounds like an insistence on strong emergence (which claims the microphysical causal account is not the most complete account there is), rather than the weak emergentism that I defend. (Perhaps you no longer think that emergence requires "causal looseness" at the lower level? I don't see you mention indeterminism in your more recent FQXI paper.)

    A couple more points:

    1. The debate between Craig Callender and Bob Batterman is about whether phase transitions and singular behavior imply a limitation on the microphysical description. Craig and I hold that the explanatory power of renormalization group methods gives us no reason to think that the microphysical ontology is incomplete. We do get ontological reduction (though not elimination).

    2. The example of Cooper pairs is a bit trickier than the cases I (and Batterman) discuss above, because it is an essentially quantum mechanical phenomenon. This complicates the issue for two reasons: (i) entangled quantum states are holistic in that they cannot be reduced down to intrinsic states of individual particles, and (ii) without a solution to the quantum measurement problem, it's difficult to give a clear account of the relationship between macroscopic classical measuring devices and the quantum entities that constitute these devices.
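
    To make (i) concrete, here is the standard textbook example (added only for illustration): the two-particle Bell state |\Phi^+\rangle = \tfrac{1}{\sqrt{2}}\,(|0\rangle_A|0\rangle_B + |1\rangle_A|1\rangle_B) cannot be written as any product |\psi\rangle_A \otimes |\phi\rangle_B of single-particle states, since a product (a|0\rangle + b|1\rangle)\otimes(c|0\rangle + d|1\rangle) would require ac = bd = 1/\sqrt{2} while ad = bc = 0, which is impossible. That is the sense in which the entangled whole is not reducible to intrinsic states of its parts.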

    As you know from our in-person conversation, I think that this does indicate that part-whole reduction fails at the quantum level. But I don't think this undermines the more general claim that the micro-physical story is complete.

    But debates over interpretations of quantum theory would take us away from the questions raised here by Massimo (and Batterman). Those questions can be answered by an account of weak emergence that insists on causal completeness at the microphysical level: to the extent that an event is caused, it is completely caused by microphysical processes. Some of those microphysical processes are also higher-level entities and processes, and thus there are real high-level causes.

    ReplyDelete
    Replies
    1. Hi Peter,

      I appreciate your approach. The key issue for me is that you agree that there are real high-level causes. Good. I agree that given the lower-level dispositions of states resulting from those higher causes, the lower-level physics gives a complete causal account; but overall, the causal account is incomplete without taking into account the way higher-level causes lead to those specific lower-level dispositions. A good example is a digital computer, where an engineering computation is taking place because a) C++ has been loaded as the current high-level software, b) simulation software incorporating the relevant finite element algorithms has also been loaded, and c) specific initial data to run the relevant algorithms has been entered via the keyboard. The outcome that occurs is then a unique result of the lower-level electronic states resulting from a), b) and c). However, there is no way that the numerical algorithms embodied in the patterns of lower-level excited states can be derived in a purely bottom-up way from the underlying physics. It’s a category error to assume that purely physical processes can lead to the existence of any such algorithms.
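
      By way of illustration only – a hypothetical toy fragment, not the actual engineering code described above – here is the kind of high-level numerical update such simulation software runs (a one-dimensional explicit finite-difference heat-flow step in C++):

        // Toy sketch: a 1-D explicit finite-difference heat-flow update.
        // The grid size, diffusion number and step count stand in for the
        // "specific initial data entered via the keyboard".
        #include <iostream>
        #include <vector>

        int main() {
            const int n = 11;          // number of grid points (entered data)
            const double alpha = 0.25; // diffusion number dt*k/dx^2, chosen for stability
            const int steps = 100;     // number of time steps (entered data)

            std::vector<double> u(n, 0.0), next(n, 0.0);
            u[n / 2] = 1.0;            // initial condition: a single hot spot

            // The update rule below is the high-level algorithm; the machine's
            // gate-level states follow it, they do not generate it.
            for (int s = 0; s < steps; ++s) {
                for (int i = 1; i < n - 1; ++i) {
                    next[i] = u[i] + alpha * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
                }
                u = next;              // boundaries stay fixed at zero
            }

            for (double v : u) std::cout << v << ' ';
            std::cout << '\n';
        }

      Nothing in the microphysics of the transistors picks out this particular update rule or these data; they are fixed at the higher level, and the succession of gate states follows.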

      Whether we agree on causation or not depends on the weight you put on the words “nothing but” in the phrase “are nothing but constrained, structured, microphysical causal features”. I think the explanation above says it’s a mistake to use the phrase “nothing but”, because significant other causal effects are at work. Part of the problem for a true reductionist is that the electronic gate states are “nothing but” specific states of quarks and electrons; and these again are “nothing but” excitations of superstrings – if they exist, which may or may not be the case. Which level are you claiming is the true microphysical level? The embarrassment for a true reductionist is that we don’t know what the lowest level structure is – we don’t have any viable theory for the bottom-most physical states. Thus if we take your phrase at face value, all physics is “nothing but” we know not what. Why not leave those words out?

      As to quantum indeterminism: it plays two significant roles. Firstly, it ensures that outcomes on Earth today cannot have been uniquely determined by initial conditions in the early universe, inter alia because cosmic rays have altered our genetic history significantly, and emission of a cosmic ray photon is (assuming we believe standard quantum physics) an inherently indeterminate process. The complex structures that exist have come into existence through emergent processes with their own logic that is independent of the underlying physics. Secondly, the most important such process is adaptive selection, which is guided by selection effects operating in a specific ecological context; and that is an inherently top-down process (see “Natural Selection and Multi-Level Causation” by Martínez and Moya in Volume 3 (2011) of Philosophy and Theory in Biology, available from Massimo’s website). In the example just given, quantum uncertainty plays a key role in providing the repertoire of variant states on which adaptive selection can operate (without the cosmic rays, there would have been fewer such states to choose from). The conjecture is that perhaps something similar might be true in the way the brain works. But that’s very hypothetical.

      Delete
  47. An extreme example of the 'levels' analysis in the previous post is a non-natural (synthetic) element: an intact element created by humans from micro-constituents, but not otherwise predictable from them (except with reference to human determination and imagination). In retrospect, given its 'emergence', this poses no problem for reductionism, even though prediction is complicated by human intervention.

    ReplyDelete
  48. One reason I suspect that physicists so readily reject this notion of emergence is that singularities have historically been a great indicator that a new theory is needed. Since infinities do not seem to exist in the observable universe (how would one possibly observe an infinity? It would take quite a long time, or all of it), physicists see them as markers that read “new idea needed here”. The prediction of indefinitely increasing energy as the wavelength of electromagnetic radiation gets arbitrarily small – the ultraviolet catastrophe – led to the discovery of the quantum. Discoveries of this kind show that when our models “break down” there is a fundamentally new explanation for what the model predicts. This does not mean that the old model is wrong, just not quite accurate at the more extreme ends of the graph. The old theory fits the new one well in one regime, but diverges as that regime is pushed toward the extreme.
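
    To make the example concrete (standard formulas, added only to illustrate the point): the classical Rayleigh–Jeans law B_\nu(T) = 2\nu^2 k_B T / c^2 grows without bound as the frequency \nu increases, whereas Planck’s law B_\nu(T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/k_B T} - 1} stays finite and reduces to the classical formula in the low-frequency limit h\nu \ll k_B T (since e^{h\nu/k_B T} - 1 \approx h\nu/k_B T there). The old law fits the new one in that regime and diverges from it at the extreme.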

    ReplyDelete
  49. Massimo:

    Pt 1:
    You begin paragraph 5:
    "Batterman begins by putting things into context and providing a definition of emergent properties: 'The idea being that a phenomenon is emergent if...' etc. "

    To say that Batterman begins by providing a definition of emergent properties is misleading. Batterman is not providing HIS definition of emergence. Instead, he is saying that most philosophical discussions are based around this definition of emergent properties. Batterman goes on to indicate his rightful dissatisfaction with the stated definition by saying:

    "We should look to physics and to ‘emergent relations’ between physical theories to get a better idea about what the nature of emergence really is."

    He is clearly unhappy with current definitions and discussions as to what emergence is.

    Pt 2:
    According to his introduction, Batterman wants philosophers to think of "emergent phenomena" rather than "emergent properties" - Footnote 1 pg 2: "Most philosophical discussions focus on emergent properties. As the discussion below will show, in the context of physical theory, I think this is a mistake and prefer to speak rather loosely of emergent 'phenomena.’ "

    I agree that focusing on emergent properties is fruitless, and that this is one reason why definitions of emergence tend toward the antireductionism, unpredictability, and novelty bias. However, I am equally unpersuaded by Batterman that philosophy should look toward emergent phenomena.

    Alternatively, what I propose is that to understand emergence, one must have a coherent definition of system - cf. mind-phronesis.co.uk/book/philsys/
    Imagine that outside Buckingham Palace 2 million people have come out to get a glimpse of Prince William & Kate Middleton with their newborn. The crowd mill about chatting with excitement.
    Imagine this: A colossal bucket appears and all 2 million people are instantly transported into the bucket, filling it to its rim - all the bodies are heaped one on top of the other. They have special armoured clothing that stops them being crushed & special breathing apparatus, but the forces of gravity upon this mass of individuals in this giant bucket ensure that there is no milling about nor exciting chatter in the cramped conditions.

    i) Is this bucket of people and their altered behaviour an emergent phenomenon? - Immovable bodies that were once free-flowing.
    ii) Does this bucket of people constitute a system? - A single body that was once two million independent agents.

    Answers:
    i) Two million immovable, groaning individuals do not constitute an emergent property or phenomenon. If one is of the opinion that they do constitute an emergence, is one not then in the position of having to determine why ALL forms of behaviour or property are not, to some degree, emergent? Behaviour, properties, & phenomena are not the defining characteristics of emergence. Similarly, analysing the properties and phenomena associated with, for instance, consciousness will not give insight into why or how it emerged or, indeed, whether it can be classed as an example of emergence.

    ii) The body of people in the bucket is not a system, even if it can be modelled mathematically as a single body state. Indeed, the 2 million individual agents did not count as a system either, despite our abilities to mathematically model milling and flowing behaviour. If one is of the opinion that modelling, or identifying unity in complexity, defines a system or grants systems status, is one not then in the position of having to determine why ALL forms of behaviour, structure, or property are not, to some extent, systems? Conglomerations of parts that might be modelled as single properties or phenomena are not the defining characteristics of systems constructs. Complexity, complex structures, complex flows of agents, or parts that comprise wholes are not necessarily systems.

    A reductive explanation of emergence comes from a coherent definition of system. A task for the philosopher as much as for the scientist.

    ReplyDelete
