Monday, August 05, 2013
Computation, Church-Turing, and all that jazz
I recently examined (and found wanting) the so-called computational theory of mind, albeit in the context of a broader post about the difference between scientific theories and what I think are best referred to as philosophical accounts (such as the above-mentioned computational “theory”). Defenders of strong versions of computationalism (which amounts to pretty much the same thing as strong AI) often invoke the twin concepts of computation itself and of the Church-Turing thesis to imply that computationalism is not only obviously true, but has been shown to be so by way of mathematical proof. The problem is, if one looks a bit closer at what computation means, or examines exactly what Church-Turing says, no such conclusion can handily be achieved. Let me explain.
Let’s start with the Church-Turing thesis. Jack Copeland has written an exhaustive entry about it for the ever more excellent Stanford Encyclopedia of Philosophy, and you are welcome to check the details and sources thereof. First off, of course, let’s say exactly what the Church thesis and the Turing thesis actually say (they are two different theses, which were soon shown to be equivalent formulations of the same basic concept).
Turing thesis: Logical computing machines (now known as Turing machines) can do anything that could be described as “rule of thumb” or “purely mechanical.”
Church thesis: A function of positive integers is effectively calculable only if recursive.
Don’t ask me how on earth these two turn out to be equivalent statements; computational logic is not my field. It has something to do with the equivalence between something called “lambda definability” (Church’s approach) and Turing’s adopted definition of computability.
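To get a flavor of what lambda definability amounts to, here is a minimal sketch (Python lambdas standing in for the lambda calculus; the names are mine, purely for illustration). Church encoded the positive integers as higher-order functions, so that arithmetic is carried out by nothing but function application:

```python
# Church numerals: numbers encoded as functions that apply f a given number of times.
zero = lambda f: lambda x: x                          # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))       # one more application of f
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m applications after n

def to_int(n):
    """Decode a Church numeral back to an ordinary Python int."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5: addition done purely by function application
```

A function counts as lambda definable when it can be expressed this way; the equivalence result says this class coincides with the recursive (and Turing-computable) functions.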
What is important for my purposes here is that — as Copeland clearly explains in detail — there has been much confusion and misunderstanding in philosophy of mind regarding the Church-Turing thesis, a situation that apparently hasn’t spared big names in the field, from Dan Dennett to Patricia Churchland.
To give you a taste of what the problem is, let me quote Copeland extensively:
“A myth seems to have arisen concerning Turing’s paper of 1936, namely that he there gave a treatment of the limits of mechanism and established a fundamental result to the effect that the universal Turing machine can simulate the behaviour of any machine. The myth has passed into the philosophy of mind, generally to pernicious effect.”
Ouch. Here is a (small) sample of things philosophers of mind have said that, according to Copeland, are just plain wrong:
Dan Dennett: “Turing had proven — and this is probably his greatest contribution — that his Universal Turing machine can compute any function that any computer, with any architecture, can compute.” (NOT TRUE)
Paul and Patricia Churchland: [Turing’s] “results entail something remarkable, namely that a standard digital computer, given only the right program, a large enough memory and sufficient time, can compute any rule-governed input-output function. That is, it can display any systematic pattern of responses to the environment whatsoever.” (NOT TRUE)
What Turing and Church have proven, more modestly (but still remarkably!), is that Turing’s universal machine can carry out any computation that any particular Turing machine can perform. This simply does not have the sort of implications for what actual (physical) computers can do, or about computability in general, that so many overenthusiastic computationalists insist it does, in part because Turing’s and Church’s notion of a computer was not at all equivalent to that of a finitely realizable physical system; nor was their concept of computation as broad as many philosophers of mind would desire (indeed, it is noteworthy that Turing’s use of the word “computer” referred to human computers, i.e., to individuals who can carry out “mechanical” calculations by way of rules of thumb — given that, please take some time to contemplate the irony of the meaning of a Turing test...).
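The universality result itself is easy to picture in code. The sketch below (my own encoding, not Turing’s original formalism) is one function that runs any Turing machine handed to it as a transition table; that single simulator plays the role of the universal machine relative to the particular machines it is given, which is all the theorem claims:

```python
def run_tm(program, tape, state="start", steps=1000):
    """Run a Turing machine given as a transition table.
    program maps (state, symbol) -> (symbol_to_write, move L/R, next_state);
    the machine stops when it enters the 'halt' state. Blank cells are '_'."""
    cells = dict(enumerate(tape))  # sparse tape
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A particular machine: flip every bit, halting at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flipper, "1011"))  # 0100_
```

Swap in a different table and the same `run_tm` runs a different machine — simulation of *Turing machines*, with no promise whatsoever about arbitrary physical systems.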
In fact, as Copeland points out, we already know of the possibility of machines (sometimes referred to as “hyper-computers”) that generate functions that are not Turing-machine computable. And if that were not enough, Copeland cites a number of theoretical papers about the existence of physical processes whose behavior does not conform to that of Turing machines. So much for grand statements about the “universality” of “computation” (more on this in a sec).
All of this is important for computational theories of mind because they assume that psychological processes akin to those going on in the human brain are Turing-machine computable. But Copeland clearly states that this is, at best, an open question, not at all a settled fact.
Here is another direct quote from the SEP article that ought to give pause to computationalists: “[it] is common [and erroneous] in modern writing on computability and the brain ... to hold that Turing’s results somehow entail that the brain, and indeed any biological or physical system whatever, can be simulated by a Turing machine ... The Church-Turing thesis does not entail that the brain (or the mind, or consciousness) can be modelled by a Turing machine program, not even in conjunction with the belief that the brain (or mind, etc.) is scientifically explicable, or exhibits a systematic pattern of responses to the environment, or is ‘rule-governed.’”
So, please, can we stop invoking Church-Turing as a trump card in favor of computational theories of mind? Thank you.
Now to the broader issue of computation itself, specifically when it comes to computation as a process to be carried out by actual physical systems (after all, philosophy of mind, artificial intelligence research, and neurobiology are concerned with flesh and blood brains, or with possible silicon equivalents — both of which are actual or actualizable physical systems). Here an excellent source summarizing the relevant discussions is the SEP article by Gualtiero Piccinini. It too, as it turns out, deals in part with Church-Turing, as well as with a number of accounts of computation in philosophy (simple mapping, as well as causal, counterfactual, dispositional, semantic, syntactic and mechanistic accounts — go read about them to your heart’s content). But the part I wish to focus on addresses the crucial question of whether every physical system is computational — another oft-used trump card in the hands of computationalist theorists of mind.
The notion that all physical systems carry out computations is appropriately referred to as pancomputationalism, and prima facie I find it just as interesting as its non-physicalist counterpart, panpsychism — i.e., not at all. Still, let’s take a closer look.
As it turns out, there are two varieties of pancomputationalism on the market: strong, or unlimited pancomputationalism; and weak, or limited pancomputationalism. The first version says that every physical system performs every computation, while the latter says that every physical system performs some computation.
Piccinini also distinguishes different schools of thought about pancomputationalism with respect to its source, i.e. how one answers the question of why one thinks everything computes. One possibility is that computation is a matter of free interpretation, i.e. that whether a given system can be described as computing is a question of how one describes that system; a second alternative is that computation just is the causal structure of a system, so that everything that involves causality ipso facto computes. Thirdly, there is the possibility that everything is computational because everything carries information. And finally, pancomputationalism may stem from the idea that the physical universe itself is a computer, from which it obviously follows that everything within it is computed.
So let’s proceed with our discussion in two parts: one dealing with strong vs weak pancomputationalism, the other addressing the alleged sources of pancomputationalism of whatever sort one accepts.
Somewhat ironically, a major thrust of unlimited (strong) pancomputationalism is a criticism of computational theories of mind: Ian Hinckfuss and William Lycan have articulated this position by arguing that a bucket of water contains a huge number of microscopic processes, which could in theory implement a human program, or any arbitrary computation whatsoever, at least for a time. John Searle (he of the Chinese Room) elaborated on this, suggesting that whether a physical system implements a calculation is a matter of how the observer interprets the system, and it was Hilary Putnam, a major originator of the computational theory of mind (again, the ironies never end!), who first formalized the concept of unlimited pancomputationalism.
The problem, as Piccinini points out in his article, is that if unlimited pancomputationalism is true, then the notion of computation becomes trivial and vacuous, which strongly undermines the computational theory of mind. Indeed, unlimited pancomputationalism even undermines computer science itself, because there is no distinction anymore between a proper computer (even a broadly construed one) and pretty much anything else, rocks included.
Fortunately, there doesn’t seem to be any good reason to believe in unlimited pancomputationalism, since it relies on the above-mentioned simple mapping account, which I understand is by far the weakest account of computation available to date.
Let us therefore turn to limited pancomputationalism, the idea that, while everything computes, not all things can carry out all computations. This view of computation is not as vacuous as the preceding one, but it is still open to accusations of trivialization, which again directly undermine computational theories of mind (because if everything computes, then minds still aren’t that different in principle from rocks, though rocks may be capable of much more limited “computations”).
Weak pancomputationalists do have a response here, however: the predictive power of their position hinges precisely on the fact that each particular system is limited insofar as the computations that that system can carry out are concerned, so that one can come up with meaningful (computational) theories explaining why brains are brains and rocks are rocks.
I must admit that such a reply leaves me extremely cold. But it is important to understand that — just as in the previous case — whether limited pancomputationalism is a coherent option depends on one’s account of computation. As it turns out, semantic, syntactic and mechanistic accounts are rather restrictive and not easily compatible with any form of pancomputationalism, while the simple mapping and causal accounts are a bit more friendly to it.
Time to turn our attention to the alleged sources of pancomputationalism. The first one is, as we’ve seen, that everything can be considered computational if the observer describes it in a particular way. The obvious (and, I think, fatal) objection here is that then pancomputationalism is a hopelessly subjective notion and we can safely ignore it from both a scientific and a philosophical perspective.
The idea that computation is equivalent to causation is a bit more intriguing, but it opens up its own Pandora’s box. To begin with, ever since Hume the concept of causation itself has been anything but philosophically straightforward, and it is not at all clear what is added by re-baptizing it as computation. Moreover, there are quantum mechanical phenomena that appear not to be causal (or at least not to require the concept of causality to be deployed in order to understand them), which means that pancomputationalism wouldn’t, in fact, be “pan” after all. Lastly, we already have a term for causation, and we already use the word “computation” to indicate a (usefully) restricted class of phenomena, so why mix the two up and increase confusion?
Similar considerations apply to the equation of information with computation. The term information itself can be defined in a variety of manners, and it is far from clear that everything carries information except in a trivially true sense of the term. Seems like pairing up equally vacuous concepts is unlikely to generate much insight into how the world works.
So we finally arrive at the provocative idea that everything is computational because the universe itself is a computer. But what does that mean?
The notion is referred to as ontic pancomputationalism, and it comes into the philosophy of computation straight from fundamental physics. Scientifically speaking, ontic pancomputationalism is a claim based on our empirical understanding of the world, while philosophically speaking it is a (logically independent) metaphysical statement about the nature of the world.
As Piccinini puts it, “The empirical claim is that all fundamental physical magnitudes and their state transitions are such as to be exactly described by an appropriate computational formalism.” There are two possibilities here: cellular automata or quantum computing. Some physicists — most prominently Stephen Wolfram — have proposed that the universe itself is a cellular automaton, that is, that its phenomena unfold algorithmically. The problem with this, and it’s a serious one, is that this theory predicts that all fundamental physical quantities are discrete (as opposed to continuous), and that moreover space and time themselves must be fundamentally discrete. This is because cellular automata are classical (as opposed to quantum) systems. But Richard Feynman (cited in Piccinini’s article) argued that it is difficult to see how the quantum mechanical features of the universe can be simulated by a cellular automaton.
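For readers who have never met a cellular automaton, here is a minimal sketch of the kind of system Wolfram has in mind (an “elementary” one-dimensional automaton; the encoding is mine). Note how everything is discrete — cells, time steps, and the update rule — which is exactly where the tension with continuous physics comes from:

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton.
    Each cell's next value is looked up in the rule number's bit table,
    indexed by its 3-cell neighborhood (wrapping at the edges)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single live cell, evolved for a few generations under Rule 110.
row = [0] * 15 + [1] + [0] * 15
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Simple as the rule is, Rule 110 is known to be computationally universal, which is part of what motivates the “universe as automaton” speculation.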
Enter quantum computability, where bits are replaced by “qubits,” units of information that can take any (superimposable!) state between 0 and 1 and can exhibit quantum entanglement (does your head spin yet? Good, you’re in ample company). This essentially says that the universe is indeed a computer, but a quantum, not a classical one. From an empirical perspective quantum ontic pancomputationalism is simply a redescription in computational language of standard quantum mechanics — importantly, there is no differential empirical content here, which raises the question of why bother with the whole exercise to begin with.
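The “superposition” talk is less mysterious than it sounds. A single qubit is just a pair of complex amplitudes over the basis states; here is a toy sketch (plain Python, my own naming) showing the Hadamard gate turning a definite 0 into an equal superposition, with measurement probabilities given by the squared magnitudes of the amplitudes:

```python
import math

# A qubit as a pair of complex amplitudes (a, b) over the basis states |0> and |1>,
# normalized so that |a|^2 + |b|^2 = 1.

def hadamard(state):
    """Apply the Hadamard gate, which sends |0> to an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1 (Born rule)."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

qubit = (1, 0)            # definitely |0>
qubit = hadamard(qubit)   # superposition of |0> and |1>
print(probabilities(qubit))  # each outcome has probability 1/2, up to floating point
```

What this toy model cannot show is entanglement, which requires joint states of two or more qubits that do not factor into single-qubit states — that is where the genuinely quantum (and genuinely head-spinning) behavior lives.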
The metaphysical claim of ontic pancomputationalists is that the universe itself is made of computation (or, equivalently, of information). Here is again Piccinini’s rendition of the idea: “according to the metaphysical claim of ontic pancomputationalism, a physical system is just a system of computational states. Computation is ontologically prior to physical processes, as it were.” I kind of like the idea myself, but you know where that leads, right? Piccinini: “If computations are not configurations of physical entities, the most obvious alternative is that computations are abstract, mathematical entities, like numbers and sets. ... the building element [of the universe] is the elementary ‘yes, no’ quantum phenomenon. It is an abstract entity. It is not localized in space and time ... Under this account of computation, the ontological claim of ontic pancomputationalism is a version of Pythagoreanism. All is computation in the same sense in which more traditional versions of Pythagoreanism maintain that all is number or that all is sets.” Oh no, the specter of mathematical Platonism rises again!
So there you have it, folks. Do not use Church-Turing as the theoretical underpinning for computational theories of mind, because it ain’t no such thing. And be very careful what sort of pancomputationalist you may end up being (if any), and why. The logical consequences may be very unpalatable for your metaphysical worldviews.
Unknown at 3:38 PM