
Monday, October 15, 2012

Essays on emergence, part II


[Image: the Superconducting Super Collider (credit: blogs.dallasobserver.com)]
by Massimo Pigliucci

Last time we examined Robert Batterman’s idea that the concept of emergence can be made more precise by noting that emergent phenomena such as phase transitions can be described by models that include mathematical singularities (such as infinities). According to Batterman, the type of qualitative step that characterizes emergence is handled nicely by way of mathematical singularities, so that there is no need to invoke metaphysically suspect “higher organizing principles.” Still, emergence would remain a genuine case of ontological (not just epistemic) non-reducibility, thus contradicting fundamental reductionism.

This time I want to take a look at an earlier paper, Elena Castellani’s “Reductionism, Emergence, and Effective Field Theories,” dating from 2000 and available at arXiv:physics. She actually anticipates several of Batterman’s points, particularly the latter’s discussion of the role of renormalization group (RG) theory in understanding the concept of theory reduction.

Castellani starts with a brief recap of the recent history of “fundamental” physics, which she defines as “the physics concerned with the search for the ultimate constituents of the universe and the laws governing their behaviour and interactions.” This way of thinking of physics seemed to be spectacularly vindicated during the 1970s, with the establishment of the Standard Model and its account of the basic building blocks of the universe in terms of particles such as quarks.

Interestingly, according to Castellani, this picture of the role of fundamental physics began to be questioned — ironically — because of the refinement of quantum field theory, with the Standard Model coming to be understood as a type of effective field theory (EFT), “that is, the low-energy limit of a deeper underlying theory which may not even be a field theory.” She goes on to explain that, as a result, two positions have emerged: (a) there must be a more fundamental theory than the Standard Model, such as string theory, M-theory, or something like that. This is what Steven Weinberg calls “the final theory.” Or, (b) there is no final theory but rather a layered set of theories, each with its own, interlocking domain of application. Roughly speaking, (a) is a reductionist position, (b) an anti-reductionist one.

One of the most helpful bits of Castellani’s paper, at least in my mind, is that she clearly distinguishes two questions about the idea of “fundamentality:” (i) is there, in fact, a fundamental theory (i.e., is (a) or (b) above true)? And (ii) what, philosophically, does it mean for a theory to be fundamental with respect to another?

The author is (properly, I think) agnostic about (i), and devotes the latter part of the paper to (ii). Here are her conclusions, verbatim:

“the EFT approach provides a level structure of theories, where the way in which a theory emerges from another ... is in principle describable by using the RG methods and the matching conditions at the boundary. ... [but] The EFT approach does not imply antireductionism, if antireductionism is grounded on the fact of emergence.”

In other words, the concept of effective field theories is a good way to articulate in what sense one theory is more fundamental than another, but it does not settle the issue of whether there is, in fact, a theory of everything, and if so what the nature of such a theory might be. The issue of emergence and reductionism remains pretty much untouched as a result.

So what are we left with, you might ask? With some interesting insights found in the middle part of Castellani’s paper (particularly section 2, for people who wish to go to the source).

A good one is that the discussion about the existence of a final theory and the concept of fundamental physics is far from purely academic, and far from having generated agreement even among physicists. Castellani brings us back to the famous debate in the late ‘80s and early ‘90s concerning the funding in the United States of the Superconducting Super Collider, which was eventually canceled despite valiant efforts by prominent physicists like Weinberg. As the author points out, a major argument for spending billions on the collider was precisely that high-energy physics is “fundamental” to all other physics (indeed, to all other science) and therefore worth the expenditure.

But not everyone agreed, even in the physics community. Solid state physicist Philip Anderson was one of the dissenters, and published a highly influential article, back in 1972, opposing “what appears at first sight to be an obvious corollary of reductionism: that if everything obeys the same fundamental laws [the reductionist hypothesis], then the only scientists who are studying anything really fundamental are those who are working on those laws.”

Interestingly, Anderson did not reject reductionism, but what he called “constructionism”: the idea that one can work one’s way up from fundamental laws to reconstruct the whole of the universe and its workings. His rejection of constructionism was based on his acceptance of emergent properties: “at each new level of complexity entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other.” As Castellani summarizes: “Condensed matter physics [according to Anderson] is not applied elementary particle physics, chemistry is not applied condensed matter physics, and so on. The entities of [science] X are emergent in the sense that, although obedient to the laws of the more primitive level [studied by science] Y (according to the reductionist hypothesis), they are not conceptually consequent from that level (contrary to the constructionist hypothesis).” Indeed, Anderson went so far as to write the following provocative statement: “the more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science.” So much for dreams of a final theory of everything.

Another famous objector to Weinberg’s “fundamental physics is all there is” approach was biologist Ernst Mayr, who traded barbs with Weinberg in a famous exchange in the pages of Nature in 1987-88. Weinberg maintained that “there are arrows of scientific explanation that ... seem to converge to a common source ... [And particle physics deals with nature] on a level closer to the source of the arrows of explanation than other areas of physics,” and is therefore where the real action is. Mayr would have none of it, charging Weinberg with bad and confused reductionism. In the philosophical literature there are, broadly speaking, three types of reductionism: constitutive (the method of studying something by looking at its components), theoretical (theories at one level can be shown to be special cases of theories at another level), and explanatory (knowledge of the lower level is sufficient to account for the higher one). Both Weinberg and Mayr agreed that explanatory reductionism is out of the question, but for Mayr so is theoretical reductionism, because it is inconsistent with emergence, which in turn was, for Mayr the biologist, an obvious fact about which fundamental physics is simply silent.

To recap the game so far, Castellani’s paper is more neutral about the reductionist / anti-reductionist debate and the role of emergence in it than the paper by Batterman which we examined last time. But it was useful to me because it provides some intriguing historical and even sociological background to the controversy, and also because it clearly separates the issue of effective (i.e., approximate) field theories from the question of whether there actually is a fundamental theory and, if so, what its nature might be. Stay tuned for part III...

29 comments:

  1. I confess that to me the only interest in studying emergence (real or perceived) lies in finding ways to predict emergent phenomena from lower-level information, using mathematical modeling or any other tools we already have. This could help us in understanding ecology, sociology, etc.

    Debating which science is the most fundamental seems to me to be trying to reestablish a hierarchy of sciences - I thought 2300 years after Aristotle, we'd just managed to drop the idea. Especially as we all seem to agree that, reducible or not, referring to particle physics doesn't explain what the rat's doing in that maze.

  2. This is a question I've been thinking about for a while, and I can't really decide what the answer should be.

    1. Assume a natural system that displays (strong) emergent behavior.
    2. We study this system's parts so well that we have a perfect model of how the parts function (the parts by themselves show no emergent behavior).
    3. We simulate this system on a computer (we have only the behavior of the parts in the simulation).

    Would the computer simulation also display the same (strong) emergent behavior?

    1. Hi Johan,

      I think your simulation idea is a good way to think about this. Assuming, as you do, a very complete simulation (perhaps impossible in practice), different outcomes are possible:

      1. We observe the said emergent behaviour.
      2. We observe one of many possible emergent behaviours, perhaps mutually exclusive. By running the simulation many times, different emergent behaviours are observed.
      3. No emergence happens at all, or at least not the one under study.

      We could say that, under 1 and 2, the emergent behaviour is reducible to the parts. I am not sure I get emergence right but I gather that “real” emergence is something that could not be obtained through a simulation.

    2. I'd say yes, it would. I see no reason why emergent behavior would depend on a particular hardware implementation. If you have a biological system displaying emergent behavior, so should an electronic one, provided that you duplicate all (and I mean ALL) aspects of the biological system in the simulation, including the interaction with the environment, etc.

    3. The debate about "fundamentalness" between particle physicists and solid-state physicists is an old one, and IMO has nothing seriously to do with emergence itself.

      It is indeed true that the Standard Model can be approximated down to QED, which can be further approximated to the nonrelativistic Schroedinger equation for a bunch of particles, which is the starting point for virtually all research in solid-state physics. In that sense, the SM is more "fundamental" than anything solid-state.

      On the other hand, given the SM Lagrangian, you do not actually *know* any solid state physics. You just know what the equations look like. *Solving* those equations, and studying the *properties* of a particular set of solutions, is an entirely different matter. Extracting relevant results from the fundamental equations is a highly creative and nontrivial business, and that is what solid state research is all about.

      Solid state physicists certainly do know that everything they do is still just an "application" of the Schroedinger equation to a very complicated many-body system. In that sense, it is "applied science", since they are never going to discover any new "fundamental" law of nature. But this "applied science" task in front of them is no less hard than the one studied by particle physicists. In addition, it has a much bigger practical value.

      As an example, look at classical electrodynamics. Given the set of Maxwell equations and the Lorentz force law, can you "predict" the properties of stuff like dynamos and electric motors, AC electric currents, lightning, lasers, etc.? In principle --- yes you can; all that stuff is, in the end, a solution of the Maxwell equations. But in practice it took a whole army of scientists to work out all these things. The Maxwell equations do contain all that stuff inside, but knowing what the equations look like is nowhere near knowing all their interesting solutions.

      Of course, emergence has nothing whatsoever to do with any of the above. Solid-state physical systems, regardless of the huge number of particles involved, are still too simple and too regular to allow for qualitatively new emergent phenomena. The jury is still out on the high-Tc superconductivity problem, but my feeling is that this is also reducible to the Schroedinger equation somewhere down the line. I believe that one needs to go to an even higher scale (mainly into biology) to get qualitatively new properties of nature, because only this higher scale is complicated enough to exhibit chaotic behavior. Solid-state physics is still very much reducible (in principle at least) to particle physics.

    4. carefully written & informative

  3. JP, vmarko,

    I don't see why emergent properties couldn't be simulated. Again, emergence isn't magic. But if the emergent behavior cannot be accounted for in terms of properties of the constituent parts of the system, then the simulation doesn't really help much in understanding what is going on.

  4. vmarko,

    I'm curious: I keep hearing that "X is reducible to Y 'in principle,'" usually followed by no explanation whatsoever of which principle we are talking about, or how physicists might go about showing that such a principle is indeed valid. Any idea?

    1. When a physicist says that "X is reducible to Y in principle", he means that X would be reducible to Y explicitly, given enough computational power. But since we do not have unlimited computational power, X is reducible to Y only "in principle", since it is not feasible to do in practice.

      So the words "in principle" do not refer to any kind of a statement or an axiom or such, but rather refer to a "generally imaginable situation".

      There are many contexts where one can see this phrase being used. For example, given some gas in a bottle, one can track down and calculate the trajectories of every individual molecule of the gas (using, say, Newton's laws of motion), "in principle". This means that, given a big enough computer, one could do it explicitly for a gas of 10^23 molecules. But obviously, no such computer exists on Earth, so the task is impossible to carry out in practice. There is, however, no theoretical restriction on this, so if one *did* have a big enough computer, one would be able to carry out the calculation *explicitly*. That is what the words "in principle" mean in this context.
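
      To make that concrete, here is a minimal sketch of such an explicit calculation, in Python, with an invented soft pairwise repulsion standing in for real intermolecular forces (so the numbers are illustrative, not physical). For a handful of particles the loop runs in a fraction of a second; doing the same for 10^23 molecules is exactly what can be imagined "in principle" but never carried out.

      import numpy as np

      rng = np.random.default_rng(0)
      n, dt, steps = 8, 1e-3, 1000          # 8 particles, not 10^23
      pos = rng.uniform(0.0, 1.0, (n, 2))   # positions in a unit box
      vel = rng.normal(0.0, 1.0, (n, 2))    # random initial velocities

      def forces(pos):
          # invented soft ~1/r^3 repulsion, a stand-in for real intermolecular forces
          f = np.zeros_like(pos)
          for i in range(n):
              for j in range(i + 1, n):
                  d = pos[i] - pos[j]
                  r2 = d @ d + 1e-6         # softening to avoid division by zero
                  fij = d / r2**2
                  f[i] += fij
                  f[j] -= fij
          return f

      f = forces(pos)
      for _ in range(steps):                # velocity-Verlet integration of Newton's laws
          pos += vel * dt + 0.5 * f * dt**2
          pos %= 1.0                        # periodic boundary conditions
          new_f = forces(pos)
          vel += 0.5 * (f + new_f) * dt
          f = new_f

      print(pos)  # every trajectory is explicit -- feasible only because n is tiny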

      A second example is factorization of a product of two large primes (commonly used in cryptography) --- it can always be performed "in principle", but with today's hardware the actual procedure would take several hundred years to complete. Therefore it can be done only "in principle", not practically. There is no theoretical restriction on doing it, but it takes an awful lot of time to do.
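
      The same point can be made with factoring. A minimal sketch using nothing more than brute-force trial division: it answers instantly for small numbers, and the only thing stopping it on a cryptographic-sized modulus is running time.

      def factor_semiprime(n: int) -> tuple[int, int]:
          """Brute-force trial division: returns (p, q) with p * q == n."""
          d = 2
          while d * d <= n:
              if n % d == 0:
                  return d, n // d
              d += 1
          return n, 1  # n itself is prime

      print(factor_semiprime(15))              # (3, 5), instantly
      print(factor_semiprime(99991 * 100003))  # two five-digit primes, still well under a second
      # For the hundreds-of-digits moduli used in cryptography the same loop would
      # run far longer than the age of the universe: doable "in principle" only.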

      On the other hand, if there *is* a theoretical restriction on what is possible (often called a "no-go theorem"), then some things are known to be not possible, not even "in principle" --- meaning that no amount of computer power would help you. For example, chaotic systems have this property --- one cannot calculate their time evolution beyond a certain point in time, despite having a deterministic set of equations that govern the system. Simply put, the errors present in the initial conditions will grow exponentially, and at some point outgrow the "size" of the system itself. At that point one cannot know the future state of the system, not even "in principle". And that happens *despite* all the detailed laws and knowledge about the constituent pieces of the system.
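
      The chaotic case is easy to see numerically. A minimal sketch with the logistic map, a standard textbook chaotic system: two trajectories starting 10^-12 apart track each other for a while and then diverge completely, even though the update rule is known exactly.

      r = 4.0                      # logistic map x -> r*x*(1-x); fully chaotic at r = 4
      x, y = 0.2, 0.2 + 1e-12      # two almost identical initial conditions
      for step in range(1, 61):
          x, y = r * x * (1 - x), r * y * (1 - y)
          if step % 10 == 0:
              print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
      # Around step 40 the separation reaches order 1, i.e. the size of the system
      # itself: the tiny initial error has swamped the prediction -- the "not even
      # in principle" limit described above.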

      Also, the speed of light in vacuum cannot be outrun, not even "in principle" (i.e. regardless of your technological ingenuity), since it would take an infinite amount of energy to reach the speed of light (for a body of nonzero mass).

      HTH, :-)
      Marko

  5. Hi Massimo,

    That was not clear from the discussion following your previous post. If you mean “can be simulated from the behavior of the parts” (without added rules), then I am not sure what the opposition between reductionism and emergence would be. I think the idea that something can be simulated from its parts is close to what is meant by “being reducible to”.

  6. JP,

    again, simulating doesn't mean reducing. First, there are different meanings of the word "reduction" in this context. Second, these meanings have been made precise in the relevant philosophical literature, and I don't see any reason why a simulation would fit the bill. See, for instance:

    http://plato.stanford.edu/entries/physics-interrelate/
    http://plato.stanford.edu/entries/scientific-unity/
    http://plato.stanford.edu/entries/qm-collapse/

  7. Another good one, Massimo. The difference between one theory being causally "above" another vs. one theory being a final one is a good distinction.

    And, in that line, and to riff on the likes of Goedel, Escher, Bach, with recurring loops and self-reference, is it possible to say that maybe there just isn't a "theory of everything," especially per the "G" part of that book?

    1. Let me add to this, per Massimo's longer essay, somewhat related to this, at Aeon. (Massimo, you should link that here.)

      I think that a fair chunk of reductionists seeking a theory of everything are looking not just for aesthetics, as Massimo says there, but for a secularist version of religious certainty. I've long felt that.

  8. massimo:

    << I don't see why emergent properties couldn't be simulated. Again, emergence isn't magic. But if the emergent behavior cannot be accounted for in terms of properties of the constituent parts of the system, then the simulation doesn't really help much understanding what is going on.
    ... again, simulating doesn't mean reducing. >>

    I would strongly suggest starting any discussion of simulation with physicists, and scientists, with something like these words. For a physicist, if the emergent properties are simulated, then this IS reduction.

    Since you reject that, I don't understand at all what you mean by the word "reduction". *Understanding* the emergent behaviour may require different concepts, which may be applicable to various other circumstances, but this does not undermine the fact that this is a case of theoretical & constitutive reduction: by simulating the parts we derive the emergent behaviour, which is constituted by the behaviour of the parts; and by simulating the parts according to the fundamental theory we derive behaviour according to the higher-level effective theory, which is a special case of the more general, more fundamental theory.

    Yair Rezek

    1. Retrospect is the shortcut to understanding. You don't need to purely predict, but fall off your perch if the retrospect reveals that any whole en route to human behavior is not reducible to the behavior of the original hydrogen & helium from which we 'emerged', simply by winding back the clock and tracking all the laws involved.

  9. Yair,

    I think the problem is an equivocation on the meanings of "simulating" and "understanding." For instance, people have simulated certain properties of systems capable of learning, using artificial neural networks. However, typically when researchers "open up" the simulation box (which developed on its own by trial and error), they do not get an account (understanding) of how the network developed the ability to learn.

    An even simpler example goes back to the discussion of phase transitions in the first post: if Batterman is right that the key there is to take seriously the mathematical singularities of our models, presumably one can build a simulation based on that mathematics and obtain the emergent behavior. But that most emphatically does *not* mean that said behavior has been reduced, for the reasons explained in Batterman's paper.

    Gadfly,

    the link to the Aeon essay is this:

    http://www.aeonmagazine.com/world-views/massimo-pigliucci-on-consilience/

    1. I had read it already, Massimo ... just thought that it related enough to the ideas here that others, if they were unaware of it, should give it a read.

  10. Massimo:

    << I think the problem is an equivocation on the meanings of "simulating" and "understanding." >>

    I believe virtually all physicists would be painfully aware of the difference between simulating and understanding. From a physicist's point of view, the problem may very well be an equivocation between "reduction" and "understanding". There is no question that in order to understand ideal gases you need to make use of classical and macroscopic concepts that will not be found in the fundamental equations of the standard model (SM). Nevertheless, the fact that the behaviour of the gas can be simulated from quantum mechanics and the standard model does mean, for a physicist, that the thesis of reduction is vindicated - the (emergent) behaviour of gases is merely a result of the fundamental laws operating, even if to UNDERSTAND it we need to carve up the world using particular concepts that have very loose, if any, connection to this level of description.

    Yair Rezek

  11. The parts making up the whole become a whole, that's why it's difficult to know their separate contributions. It doesn't mean a reductive explanation fails. It means it might only be understandable in retrospect and even then with difficulty due to the problem of attribution of each part's contribution to the whole. This applies particularly in biology, with a reducible environment selecting a reducible entity. The entity is not the whole; it is the entire system of environment, entity & evolution. Levels don't exist as wholes in isolation; they ultimately make up one reducible whole as an entire universe.

  12. Yair,

    neither one of us can really claim to know what physicists think or what their understanding of philosophical issues is. But my point was simpler: it's easy to show that one can simulate a phenomenon in ways that have nothing whatsoever to do with the way the phenomenon works. The trivial case is that of video games: you can simulate, say, soccer or car racing, but there is no connection at all with the way those things actually work in the physical world. In general, just because one can simulate property X it doesn't mean *either* that one understands X *or* that one has shown that X actually works the way in which the simulation is set up.

    Dave,

    > The parts making up the whole become a whole, that's why it's difficult to know their separate contributions. It doesn't mean a reductive explanation fails. <

    Of course. But it doesn't follow that reductive explanations succeed or are even possible in principle. One has to do the actual work to show that they are.

    > simply by winding back the clock and tracking all the laws involved <

    Good luck with that!

    1. Sure, you need it, that's called the actual work. Don't leave all the work to me! It's all traceable back to the initial nucleosynthesis in the first three minutes (wisely referenced by Weinberg), whether you like it or not. And it's all understandable, whether it can be reproduced in experiments or not. The task of documenting an understanding is not a duck shoving exercise, it is shared equally by all.

    2. > But it doesn't follow that reductive explanations succeed or are even possible in principle. <

      What principle would that be? If you read my post, any 'whole' is a subset of a greater 'whole' and ultimately a universe. Parts constitute a 'whole', and can even be said to lose their identity as parts 'in principle' by that fact. I'm not falling for your definitional conundrum of 'parts' & 'wholes'. Simply view reduction in its full context of universal evolution from particle & antiparticle annihilation - reduce by extending back to original constituents, not by arguing definitional conundrums in static present time.

  13. This will help: big bang nucleosynthesis; supernova nucleosynthesis; accretion disk; rocky inner planet; DNA in environment; primate; blogging. It's called the entire scope of science, and it ain't my responsibility.

  14. Dave,

    > Don't leave all the work to me! It's all traceable back to the initial nucleosynthesis in the first three minutes (wisely referenced by Weinberg), whether you like it or not <

    At the moment, that strikes me as a statement of pure metaphysical faith.

    > What principle would that be? If you read my post, any 'whole' is a subset of a greater 'whole' and ultimately a universe. <

    That's not a principle, it's a description. And by the way, my take doesn't depend at all on any distinction between parts and wholes, but rather on the complexity of systems.

    > This will help: big bang nucleosynthsesis; supernova nucleosynthesis; accretion disk; rocky inner planet; DNA in environment; primate; blogging. <

    It helped precisely nothing, I'm afraid.

    1. Then again, maybe if you substitute 'constituents' for 'parts' and 'complexities' for 'wholes', and revisit the past two blogs, that might help.

    2. Nope, still doesn't help, but that's ok.

    3. > That's not a principle, it's a description. <

      I missed that one. If you read the post, the analysis of whole & parts sets the context for any 'principle' you might care to name (if you actually cared to name some). That's just your selective excerpt distorting the reference.

  15. Dave,

    You misunderstood. You invoked principles of reduction, I'd like to know what they are. Generic talk of wholes and parts a principle does not make.

