Throughout the last semester, several students at CUNY’s Graduate Center, a colleague, and I spent some time navigating the complexities of James Ladyman and Don Ross’s Every Thing Must Go: Metaphysics Naturalized (which also contains essays co-authored by David Spurrett and John Collier). The book is not for the faint of heart, but it has given me much food for thought, which Julia and I will soon explore further in a forthcoming episode of the Rationally Speaking podcast featuring Ladyman as a guest. (I should also add that James contributed to an edited collection on the philosophy of pseudoscience that I just finished putting together with the help of Maarten Boudry, which will be published next year by the University of Chicago Press.)
I must admit that the title of the first chapter — “In defense of scientism” — did not dispose me well toward the book. I think the term scientism ought to be reserved for what it has traditionally indicated: an unwarranted over-reliance on science (yes, there is such a thing), or the thoughtless application of science where it doesn’t belong (ditto), and it pisses me off to no end when philosophers actually use it as a positive term (as, most egregiously, in Alex Rosenberg’s so-called Atheist’s Guide to Reality). However, I got past the initial annoyance, and started to appreciate the (complex) arguments made by Ladyman, Ross, and their occasional co-writers. Indeed, by the end of the book it turns out that Every Thing Must Go is, among other things, a pretty good argument against the sort of scientism that worries me, and in particular against the nowadays very popular physical reductionism espoused by the likes of Rosenberg, Harris & co. But I’m getting ahead of myself.
Before we get to the heart of the matter there are two background issues within which Every Thing Must Go (henceforth ETMG) needs to be understood: one is the post-positivist return of a metaphysics increasingly decoupled from physics (and science more broadly), the other is the debate among philosophers of science concerning realism and anti-realism when it comes to interpreting scientific theorizing.
Background, 1: metaphysics and naturalism. Metaphysics, of course, is one of the traditional branches of philosophy, taking its name from the fact that Aristotle’s work on the subject was traditionally catalogued after (“meta”) his stuff on physics. The scope of metaphysics is nothing less than an understanding of the nature of the world, an objective increasingly shared with the sciences, particularly physics. And that, of course, is where the trouble started. David Hume famously advanced the notion that books that do not present empirical evidence or mathematical reasoning are, shall we say, not terribly useful, and the resulting “Hume’s Fork” has often been seen as a direct attack on traditional metaphysics. Here are the crucial quotes from his Enquiry Concerning Human Understanding:
“All the objects of human reason or enquiry may naturally be divided into two kinds, to wit, Relations of Ideas, and Matters of Fact. Of the first kind are the sciences of Geometry, Algebra, and Arithmetic ... [which are] discoverable by the mere operation of thought ... Matters of fact, which are the second object of human reason, are not ascertained in the same manner; nor is our evidence of their truth, however great, of a like nature with the foregoing. ... If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.”

Ouch. During the 20th century, a frontal attack on metaphysics was carried out by the logical positivists, a movement linked to the so-called Vienna Circle of thinkers (most prominently featuring Rudolf Carnap). Famously, positivists thought that metaphysics isn’t even wrong, it’s just meaningless, in part because it fails their famous verifiability criterion, the idea that a statement is meaningful only if there is a way to determine its truth.
While it is often said that logical positivism fell victim to its own verifiability criterion (which is not itself verifiable), that’s a bit too simplistic. What happened was that a series of thoughtful and in-depth critiques (by the likes, for instance, of Karl Popper, Hilary Putnam, Willard Van Orman Quine, and Thomas Kuhn) increasingly and convincingly showed the inadequacy of the positivist stance.
That’s where Ladyman and Ross come in. They actually label their position “neo-positivism” (a bold choice, in modern philosophical circles) and set it squarely against current analytic metaphysics, which they provocatively label (with a nod to Hume) “neo-scholasticism.” For Ladyman and Ross there simply isn’t going to be any metaphysics without taking on board all of science, and particularly physics. Indeed, they redefine the very goal of the metaphysician as that of making coherent sense of the various pictures of the world emerging from physics and the so-called special sciences (the latter referring to everything but physics, from chemistry and biology to psychology and economics). By the end of ETMG they had me pretty much convinced on this point, with one caveat: for them a coherent picture of science can emerge solely from a unification of the sciences, while I am content with coherentism, whether it comes about by unification or by recognizing and characterizing what John Dupré has famously referred to as the “disunity of science.” (As it turns out, I think Ladyman and Ross’s own conclusions point more toward a mild form of disunity than toward unity, and unity is most certainly not going to be achieved, according to them, by way of simple reduction of the special sciences to physics. More on this in the next post.)
Background, 2: realism vs anti-realism in philosophy of science. The second bit of background necessary to appreciate ETMG has to do with a fascinating debate that has unfolded in philosophy of science over the last several years, between so-called realists and anti-realists (more recently referred to, somewhat confusingly, as empiricists).
To put it very briefly, a realist is someone who thinks that scientific theories aim at describing the world as it is (of course, within the limits of human epistemic access to reality), while an anti-realist is someone who takes scientific theories to aim at empirical adequacy, not truth. So, for instance, for a realist there truly are electrons out there, while for an anti-realist “electrons” are a convenient theoretical construct to make sense of certain kinds of data from fundamental physics, but the term need not refer to actual “particles.” It goes without saying that most scientists are realists, but not all. Interestingly, some physicists working on quantum mechanics belong to what is informally known as the “shut up and calculate” school, which eschews “interpretations” of quantum mechanics in favor of a pragmatic deployment of the theory to solve computational problems.
There are several interesting arguments for and against both positions, so much so that I have often found myself standing on neutral ground in this regard. That is, until a third option gradually took shape: the one actually defended by Ladyman and Ross, which provides much of the philosophical structure of ETMG, and which we will take up next time.
Perhaps the two best arguments in favor of anti-realism are the underdetermination of theory by the data and the pessimistic meta-induction. The first one says that the empirical evidence available at any given time will always under-determine (i.e., will not be able to completely discriminate between) alternative theories. Perhaps the best way to picture this is to plot some points on a standard X-Y axis and then fit a curve to them. If you think of the points as data and of the curve as the theory explaining them, you will immediately realize that there is literally an infinite number of curves that can equally well fit the data: the points under-determine the curve. But, you say, surely by adding new data I will eliminate many of those other curves from competition, thus approaching the “true” function. You will, but the new set of data is still going to be fit adequately by an infinity of “theories,” and so on. Okay, the next line of defense for the realist is to do what scientists often do and invoke criteria like simplicity and elegance to choose among the available curves (some of which will be horribly complicated). Yes, we can go that route, but now we are introducing extra-empirical, indeed downright aesthetic, criteria. Which is fine, but not something you can justify on scientific grounds (pragmatic considerations do help — but that is the point of the anti-realist: that science is after what works, not what is true!). Incidentally, the current debate over string theory — which comes with a “landscape” of 10 to the 500 different configurations! — may be a spectacular confirmation of underdetermination.
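The curve-fitting picture above can be made concrete in a few lines of code. This is a minimal sketch with made-up data points (the specific numbers and the “theories” are my own illustrative choices, not anything from the book): two different functions agree exactly on all the available data, yet diverge on any new prediction.

```python
# Hypothetical "data": three observations of some quantity y at values of x.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 7.0)]

# Theory A: the unique quadratic passing through all three points.
def theory_a(x):
    return x**2 + x + 1

# Theory B: theory A plus a term that vanishes at every data point,
# so the two theories are empirically indistinguishable on the evidence we have.
def theory_b(x):
    return theory_a(x) + 5.0 * x * (x - 1.0) * (x - 2.0)

# Both theories fit the data perfectly...
assert all(theory_a(x) == y for x, y in data)
assert all(theory_b(x) == y for x, y in data)

# ...yet they diverge on any prediction outside it:
print(theory_a(3.0))  # 13.0
print(theory_b(3.0))  # 43.0
```

Since any multiple of x(x − 1)(x − 2) could be added in place of the factor of 5, there are infinitely many such “theories,” and collecting more data points only shifts the problem: a new vanishing term can always be constructed through the enlarged data set.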
The second argument in favor of anti-realism is the pessimistic meta-induction. This is the idea that all past scientific theories have eventually been discarded as wrong or flawed in some significant way. Applying inductive reasoning to future scientific theories based on such past experience, it seems that there is no basis on which to argue that currently accepted theories have any better chance of being true.* The standard counter to this objection is that successive scientific theories approximate the truth better and better, but even this faces problems: first, without access to a “God’s eye view” of Truth, how can we tell? Second, there are pretty convincing examples of theories that represent radical rethinking on the part of scientists, not just incremental improvements (think of Ptolemy to Copernicus, of course, but also of the conception of space and time in Newton vs Einstein).
The best argument in favor of scientific realism is known as the “no miracles” argument, according to which it would be nothing short of miraculous if scientific theories did not track the world as it actually is, however imperfectly, and still managed to return such impressive payoffs, like, you know, the ability to actually send a space probe to Mars. Even so, the anti-realist can reply, we know of scientific theories that are wrong in a deep sense and yet manage to be empirically adequate, Newtonian mechanics perhaps being the prime example.
The above is just a very brief sketch of the debate between realism and anti-realism in philosophy of science, as people on both sides have come up with a fascinating series of moves and counter-moves in logical space over the past decades. A good summary can be found in Ladyman’s textbook in philosophy of science, for readers interested in digging a little deeper. All in all I do agree with Ladyman that the realist position seems to have the upper hand, but only slightly so, and taking the anti-realists seriously provides a refreshing bath in epistemic humility.
Next time: the third option, the advent of naturalistic metaphysics, and the surprising consequences all of this may have for the way you see the world...
* I talked about the pessimistic meta-induction at TAM a couple of years ago, and Richard Dawkins approached me afterwards to let me know that — clearly — the Darwinian theory is the obvious exception to the meta-induction, thus displaying a surprising amount of ignorance of both the history of biology and the current status of evolutionary theory. Cue the onslaught of incensed comments by his supporters...