Readers of this blog are familiar with my criticism of some scientists or scientific practices, from evolutionary psychology to claims concerning the supernatural. But, especially given my dual background in science and philosophy, I pride myself on being an equal opportunity offender. And indeed, I have for instance chastised Alvin Plantinga for engaging in seriously flawed “philosophy” in the case of his evolutionary argument against naturalism, and have more than once sympathetically referred to James Ladyman and Don Ross’ criticism of analytic metaphysics as a form of “neo-scholasticism.”
Here I want to elaborate on a thought I have had for some time, and that keeps bothering the hell out of me: the issue of what sort of intellectual products we get out of philosophical inquiry. In particular, philosophers often speak of producing “theories,” as in the correspondence theory of truth, for instance. But I’ve come to think that it is better to regard such things as “accounts” or “views” — tellingly, terms used by a number of philosophers themselves — lest we confuse them with the meaning of the word “theory” in disciplines like science. (It’s also worth noting that mathematicians don’t seem to talk much about theories either, but rather of conjectures, or — of course — proofs.)
This may seem yet another “just semantic” issue, but I never understood why so many people hold semantics in such disdain. After all, semantics deals with the meaning of our terms, and if we don’t agree at least approximately on what we mean when we talk to each other there is going to be nothing but a confusing cacophony. As usual when I engage in “demarcation” problems, I don’t mean to suggest that there are sharp boundaries (in this case, between scientific theories and philosophical accounts), but rather that there is an interesting continuum and that people may have been insufficiently appreciative of interesting differences along such a continuum.
To fix our ideas a bit more concretely I will focus on the so-called computational theory of mind, largely because it has been on my, ahem, mind, since I’ve been invited to be the token skeptic in a new collection of essays on the Singularity (edited by Russell Blackford and Damien Broderick for Wiley). Specifically, I have been given the task of responding to David Chalmers’ chapter in the collection, exploring the concept of mind uploading. As part of my response, I provide a broader critique of the computational theory, which underlies the whole idea of mind uploading to begin with.
So, what is the computational theory of mind (CTM, for short)? Steven Horst’s comprehensive essay about it in the Stanford Encyclopedia of Philosophy begins with this definition: the CTM is “a particular philosophical view that holds that the mind literally is a digital computer (in a specific sense of ‘computer’ to be developed), and that thought literally is a kind of computation ... [it] combines an account of reasoning with an account of the mental states,” and traces its origins to the work of Hilary Putnam and Jerry Fodor in the 1960s and ‘70s. (Notice Horst’s own usage of the words “view” and “account” to refer to the CTM.)
The connection between the two accounts (of reasoning and of mental states) is that the CTM assumes that intentional states are characterized by symbolic representation; if that is correct (that’s actually a big if), then one can treat human reasoning as computational in nature if (another big one!) one assumes that reasoning can be accomplished by just focusing on the syntactic aspect of symbolic representations (as opposed to their semantic one). To put it more simply, the CTM reduces thinking to a set of rules used to manipulate symbols (syntax). The meaning of those symbols (semantics) somehow emerges from such manipulation. This is the sense in which the CTM talks about “computation”: specifically as symbol manipulation abstracting from meaning.
This focus on syntax over semantics is what led, of course, to famous critiques of the CTM, like John Searle’s “Chinese room” (thought) experiment. In it, Searle imagined a system that functions in a manner analogous to that of a computer, made of a room, a guy sitting in the middle of it, a rule book of Chinese language, an input slot and an output slot. Per hypothesis, the guy doesn’t understand Chinese, but he can look up pieces of paper passed to him from the outside (written in Chinese) and write down an answer (in Chinese) to pass through the output slot. The idea is that from the outside it looks like the room knows Chinese, but in fact there is only symbol manipulation going on inside, no understanding whatsoever.
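To make the purely syntactic character of the setup concrete, here is a toy sketch in Python (purely illustrative, and no part of Searle’s own argument): the rule book is reduced to a bare lookup table with placeholder entries, and at no point does the program represent what any of the symbols mean.

```python
# A toy "Chinese room": the rule book is a bare lookup table and the
# "man in the room" just matches the shapes of the input strings.
# The entries are invented placeholder phrases, not a real language model.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "The weather is nice."
}

def chinese_room(input_slip):
    # Look up the incoming string and pass back the paired answer.
    # No step in this process involves meaning.
    return RULE_BOOK.get(input_slip, "对不起，我不明白。")  # default: "Sorry, I don't understand."

print(chinese_room("你好吗？"))
```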
You can look up the unfolding of the long debate (with plenty of rebuttals, counter-rebuttals, and modifications of the original experiment) that Searle’s paper has engendered, which makes for fascinating reading and much stimulation for one’s own thought. Suffice to say that I think Searle got it essentially right: the CTM is missing something crucial about human thinking, and more precisely it is missing the semantic content of it. I hasten to say that this does not at all imply that only humans can think, and even less so that there is something “magical” about human thinking (Searle refers to his view as “biological naturalism,” after all). It just means that you don’t get semantics out of simple syntax, period. The failure of the strong Artificial Intelligence program since Searle’s original paper in 1980 bears strong witness to the soundness of his insight. (Before you rush to post scornful comments, no, Deep Blue and Watson — as astounding feats of artificial intelligence as they are — are in fact even more proof that Searle was right: they are very fast at symbol manipulation, but they don’t command meaning, as demonstrated by the fact that Watson can be stumped by questions that require subtle understanding of human cultural context.)
Interestingly, Jerry Fodor himself — one of the originators of the CTM — has chided strong adherents of it, stating that it “hadn't occurred to [him] that anyone could think that it’s a very large part of the truth; still less that it’s within miles of being the whole story about how the mind works,” since only some mental processes are computational in nature. (Fodor’s book on this, The Mind Doesn’t Work That Way, MIT Press, is a technical and yet amusing rebuttal of simplistic computationalists like Steven Pinker, whose earlier book was entitled, of course, How the Mind Works.)
At any rate, my focus here is not on the CTM itself, but on whether it is a theory (in the scientific sense of the term) or what I (and others) refer to as an account, or view (in the philosophical sense). Even a superficial glance at the literature on the CTM will show clearly that it is in no sense a scientific theory. It does not have a lot of empirically verifiable content, and much of the debate on it has occurred within the philosophical literature, using philosophical arguments (and thought experiments). This is to be contrasted with actual scientific theories, let’s say the rival theories that phantom limbs are caused by irritation of severed nerve endings or are the result of a systemic rearrangement of a broad neuromatrix (in Ronald Melzack’s phrasing) that creates our experience of the body. In the case of phantom limbs the proposed explanations were based on previously available understanding of the biology of the brain, and led to empirically verifiable predictions that were in fact tested, clearly giving the edge to the neuromatrix over the irritated nerve endings theory.
Here is a table that summarizes the difference between the two cases:
The philosophobes among you may at this point have developed a (premature) grin, seeing how much “better” the last row of the table looks by comparison with the first one. But that would be missing the point: my argument is that philosophical accounts are different from scientific theories, not that they are worse (or better, for that matter). Not only the methods, but also the point of such accounts are different, so to compare them on a science-based scale would be silly. Like arguing that the Yankees will never win a Super Bowl. (For the Europeans: that Real Madrid will never win the FIBA championship.)
Remember, philosophy moves in conceptual, rather than empirical space (though of course its movements are constrained by empirical knowledge), and its goals have to do with the clarification of our thinking, not with the discovery of new facts about the world. So it is perfectly sensible for philosophers to use analogies (the mind is like a computer) and to work out the assumptions as well as the implications of certain ways of thinking about problems.
But, you may say, isn’t science eventually going to settle the question, regardless of any philosophical musings? Well, that depends on what you mean by “the question.” IBM has already shown that we can make faster and more sophisticated computers, one of which will eventually pass Turing’s famous test. But is passing the test really proof that a computer has anything like human-type thinking, or consciousness? Of course not, because the test is the computational equivalent of good old behaviorism in psychology. A computer passing the test will have demonstrated that it is possible to simulate the external behavior of a human being. This may or may not tell us much about the internal workings of the human mind.
Moreover, if Fodor (and, of course, Searle) is correct, then there is something profoundly misguided about basing artificial intelligence research on a strict analogy between minds and computers, because only some aspects of minding are in fact computational. That would be a case of philosophy saving quite a bit of time for science, helping the latter avoid conceptual dead ends. Finally, there is the whole issue of what we mean by “computation,” about which I hope to be writing shortly (it is far from an easy or straightforward subject matter in and of itself).
This distinction between philosophical accounts and scientific theories, then, helps further elucidate the relationship between (some branches of) philosophy and science. Philosophers can prepare the road for scientists by doing some preliminary conceptual cleaning up. At some point, when the subject is sufficiently mature for actual empirical investigations, the scientists come in and do the bulk of the work (and all the actual discoveries). The role of the philosopher, then, shifts toward critically reflecting on what is going on, occasionally re-entering the conceptual debate, if she can add to it; at other times pushing the scientist to clarify the connection between empirical results and theoretical claims, whenever they may appear a bit shaky. And the two shall live happily together ever after.
There is more than one form of the computation theory of mind. In this essay you use the phrase to refer to symbolic processing, but there are others - connectionism being one of the more prominent ones. Connectionists fought hard against the symbolic processing crowd, and the fight was primarily in the science arena. You wrote that the debate over the computation theory of mind is primarily in the philosophical literature. That is only part of the story. A large part of the debate is in psychology and cognitive science. Different types of computation theories do make different predictions. Empirical examinations of predictions made by symbolic, connectionist, and probabilistic theories have played out in the fields of language acquisition and brain lesion studies, among many others.
Your position is that the examination of the computation theory of mind is philosophical. In fact, it's best described as theoretical (as in theoretical physics, theoretical neuroscience, theoretical linguistics, theoretical psychology... etc). Theoretical scientists examine models, explore the limits, and propose new ones. Theoretical scientists move in conceptual spaces as well. None of the properties that you attribute to computational theory of mind are particularly philosophical. They are also found in theoretical sciences.
You asserted that the examination of computation theory of mind is "a case of philosophy saving quite a bit of time for science, helping the latter avoid conceptual dead ends." That is simply not how things played out. There has always been robust development of alternatives to symbolic processing theories. Connectionism is only one among many. Connectionism, as a movement, is no more, but it has been seamlessly integrated with computational neuroscience and machine learning. As the symbolic processing theorists started to become less dominant, other branches flourished. As a matter of fact, tell us how philosophy helped science avoid conceptual dead ends in this particular case. Let's say we all agree that symbolic processing is not worthy of pursuing. What do we do? To avoid dead ends, philosophers have to propose alternatives. What alternatives have philosophers brought to get the scientists out of the dead ends? It's the theoretical scientists who actually did it.
BTW, mathematicians actually use the term "theory" quite a bit: Galois theory, Ramsey theory, graph theory, set theory, knot theory, index theory... a theory in math is a flavor of ideas that can be used in many ways to attack a wide variety of problems. The meaning is not the same as a scientific theory but there are reasons why the word is chosen.
Note that in Chapter 10 of The Rediscovery of Mind, John Searle wrote "... one of the unexpected consequences of this whole investigation is that I have quite inadvertently arrived at a defense-if that is the right word-of connectionism. Among their other merits, at least some connectionist models show how a system might convert a meaningful input into a meaningful output without any rules, principles, inferences, or other sorts of meaningful phenomena in between."
This would be a case of scientists getting a philosopher out of conceptual dead ends. If the symbolic processing model is not right, how can the mind possibly work? Why, Searle found out that scientists already had something developed.
From my viewpoint, I don't think we can say all that much until we (humans, I mean) make a conscious robot to some degree. ("Build the dang robot!" a la McCain.) What the architecture and "I-code" - introspective code or reflective code - of this robot will be, we may not know yet. But this is computer science and artificial consciousness and so forth. Even after we've made a robot that we think "thinks", there will still be philosophical questions for the robot.
Even if you built a conscious robot, the question wouldn't be close to settled.
Searle isn't particularly arguing against the possibility of building an intelligent robot, he's arguing that no matter how smart you make it, it will only appear to be conscious.
Even if we built such a robot, there would be plenty of people who would deny that it was conscious.
Interesting post! I think I agree about "accounts" vs "theories."
On CTM, this is how things look to me; let me know if you do not endorse any one of these summary points:
(a) neither Searle nor you has an account of what "really understanding" or "qualia" are exactly, but
(b) you believe they do occur (and are not just a conceptual confusion) in humans, while also
(c) claiming to have knowledge that certain activities (e.g., Chinese room) cannot possibly give rise to real understanding/qualia, even if
(d) those activities are empirically indistinguishable from the activities in humans which do give rise to real understanding/qualia.
To me this is isomorphic to
(a) Not knowing the nature of the immortal soul,
(b) Believing strongly that most humans possess immortal souls,
(c) Believing strongly that some things (e.g., animals) do not have immortal souls, and
(d) Believing strongly that mere similarity of behaviour is no proof (e.g., Hemant Mehta lost his soul after he sold it on eBay - don't let his identical behaviour fool you!)
Then there are all the other problems, such as how the Chinese room can be trivially modified to prove that brains don't really think.
Delete("Imagine a room full of unconscious single-celled organisms transmitting electrical impulses, fed with input impulses from the outside that correspond to Chinese characters...")
By the way Massimo, I'll make you a deal. If you will admit that the Chinese room argument fails (or proves too much - e.g., that human brains don't think), I will give you an *even better* reductio against functionalism/CTM, which I invented and have not seen elsewhere, although odds are it exists somewhere in the phil literature.
>I will give you an *even better* reductio against functionalism/CTM<
I for one am very interested to hear what it is.
I'm about as pro-CTM as it's possible to be, so I'd be fascinated to learn if there was a convincing case against it - I certainly haven't come across one to date!
Well Disagreeable, I am tempted to bite the bullet on this "reductio", or just hold out for a future resolution, so I actually don't think it's deadly to CTM. But it's disconcerting, at least. I suppose I can give it away "for free". :)
The basic idea is that the status of some physical system as a mind is highly dependent on the correct assignment of inputs and outputs. This is not a problem just for phil mind; it holds true for information processing in general. This html page is fundamentally just a bunch of ones and zeroes; if I rewrite ASCII code conventions according to my whim, I can make it read as the Declaration of Independence, or an excerpt from Fifty Shades of Grey, or whatever I want.
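(A toy sketch of the point, using made-up decoding conventions rather than real ASCII: the very same bytes read as entirely different "content" depending on which convention you impose on them.)

```python
# The same raw bytes, two invented decoding conventions, two different "texts".
raw_bytes = bytes([1, 2, 3, 2, 1])

convention_a = {1: "W", 2: "e", 3: " "}   # one arbitrary assignment of symbols
convention_b = {1: "5", 2: "0", 3: "!"}   # another, equally arbitrary

def decode(mapping):
    return "".join(mapping[b] for b in raw_bytes)

print(decode(convention_a))  # "We eW"
print(decode(convention_b))  # "50!05"
```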
Now consider (just as a random example) the ~10^28 sand grains in the Sahara, blowing about in the wind, jiggling to the patter of lizard feet, their electron clouds thrumming about their nuclei. Prima facie, those sand grains are not conscious, individually or in aggregate.
However, if only we knew how to assign and interpret inputs and outputs correctly, there ought to be a way of interpreting the movements of those sand grains as instantiating a conscious mind (if only for an instant of its existence).
What's more, there does not appear to be any principled way of saying that one assignment and interpretation of inputs and outputs is any better than another. All we have is our much-abused common sense that says the Sahara ain't conscious.
In essence the problem is that because (a) mind is by hypothesis computation, and (b) computation only "happens" or "does not happen" depending on how one defines and interprets inputs and outputs to a physical system, (c) CTM seems to imply the existence of a combinatorial explosion of conscious experiences happening right now and at every instant, of which we are thankfully ignorant.
The vast majority of them are extremely short-lived (because the interpretation falls apart with the further physical evolution of the system), but given the uncountable infinity of different possible I/O assignments & interpretations, there ought to be some that are long-lived. Cue the "and maybe we're living it now!" dramatics.
Let us label this the "relativity of interpretation" problem.
If it's correct, this is a hell of an ontology to sign up to!
Hi Ian,
I've seen Searle make similar arguments in lectures.
And it is on the face of it a good argument, but not, I think, the death knell for the CTM.
However, resolving this really does lead you into some very weird (albeit correct in my view) metaphysical positions if you believe in the CTM.
In my view, your argument makes the same fundamental error as The Chinese Room -- it proves that physical systems can't be conscious, but perhaps minds are not physical systems.
I suppose you could describe me as a dualist, although not in the old-fashioned sense. I don't think there is any mystical force responsible for human intelligence, however I do think there is a distinction between the interior subjective world and the objective world around it.
So I see brain and mind as distinct. As a mathematical platonist and a believer in the MUH, I think the mind is a mathematical structure, a combination of an algorithm with some state. What the brain is doing, in my view, is tapping into this mathematical structure because it is adaptive, in much the same way that bees tap into the mathematical structure of a hexagonal mesh of cylinders as they build a honeycomb.
But the brain itself is not conscious. It is the mind that is conscious, and the mind can be thought of as an incredibly complex algorithm. Once you view algorithms as the seat of consciousness, and not their physical instantiations, then the problem of interpretation goes away -- it's not the sand that is conscious, it is the algorithms that we infer from the movement of the sand.
In this view, all possible minds exist. They don't spring into existence because we interpret the sand in a particular way, they exist independently of our interpretation and independently of the sand. Indeed, all possible universes exist. That's the MUH in a nutshell.
It's a tough one to swallow, I know, but this is where I've arrived after a lot of deliberation and reading on the subject.
I've touched on this a little in my discussion of why I don't think we should bind our identity to our brains:
http://disagreeableme.blogspot.co.uk/2012/05/essential-identity-cogito-ergo-sum.html#more
And in why I think we should instead identify with the "software" (and the implications of this view).
http://disagreeableme.blogspot.co.uk/2012/06/essential-identity-paradoxes-resolved.html#more
Ian,
Isn't your interpreter doing most of the heavy lifting in the Sahara example? Someone who subscribes to CTM can agree that if an appropriate interpreter is attached to the Sahara desert, then consciousness will appear, and when the two components are separated it will vanish. What’s more, if the interpreter is not simply a regular computer, but also has biological components (http://news.discovery.com/tech/robotics/brain-dish-flies-plane-041022.htm), then the resultant system could even be conscious under Biological Realism, although at this point it’s obvious that the Sahara is playing the same role as the stone in the story of the Stone Soup.
http://en.wikipedia.org/wiki/Event_symmetry
http://en.wikipedia.org/wiki/Event_symmetry#Greg_Egan.27s_dust_theory
The handling in _Permutation City_ is more explicit regarding computational theories of consciousness.
Descartes, Einstein and the Nature of I
I went searching for truth One day, out of necessity, and dove into nature, philosophy, and science. Fortune came my Way with a gift, the unifying equation that unites us All, a simple truth that will set us free. I found just me.
"Free at last..."
=
scitation,
> There is more than one form of the computation theory of mind. In this essay you use the phrase to refer to symbolic processing, but there are others - connectionism being one of the more prominent ones. <
Indeed, and in fact I am much more sympathetic to connectionism. However, I am also worried that the meaning of the word “computation” gets inflated so much that it doesn’t really mean anything anymore.
> Theoretical scientists move in conceptual spaces as well. None of the properties that you attribute to computational theory of mind are particularly philosophical. They are also found in theoretical sciences. <
Yes, of course scientists move also in theoretical space, but I think it should be clear from a reading of the two literatures on mind (philosophical and scientific) that those conceptual spaces are not at all the same. The philosophical one tends to be at a more abstract level, one of clarification of concepts. The scientific one is more tied to specific empirically testable hypotheses. Both are needed, I think.
> There has always been robust development of alternatives to symbolic processing theories. <
Yes, but those alternatives also originated within a fertile interaction between philosophers and computational theorists (many of whom are philosophically inclined, incidentally, beginning of course with Turing himself).
> To avoid dead ends, philosophers have to propose alternatives. <
Nope, that’s the job of scientists.
> mathematicians actually use the term "theory" quite a bit <
Yes, but as you say, the meaning is different, which is why I like other terms better, like conjecture, and so on. Some of the theories you invoke as examples, by the way, come straight out of logic — where the meaning of the word is also different than in science, obviously.
Philip,
> From my viewpoint, I don't think we can say all that much until we (humans, I mean) make a conscious robot to some degree. <
Well, yes, but how are we going to do that if we are not clear on what consciousness means and how it works? By trial and error?
Ian,
> neither Searle nor you has an account of what "really understanding" or "qualia" are exactly <
Wrong. Searle does away with qualia talk altogether. Consciousness is a qualitative state of “what it is like.” It’s clearly generated by brain activity, but is systemic in nature, and doesn’t look at all like symbol manipulation.
> (c) claiming to have knowledge that certain activities (e.g., Chinese room) cannot possibly give rise to real understanding/qualia, even if (d) those activities are empirically indistinguishable from the activities in humans which do give rise to real understanding/qualia. <
They are not *behavioristically* distinguishable, but behaviorism has, thankfully, gone the way of the dodo.
Your isomorphy is nonsense on stilts, my friend...
> the Chinese room can be trivially modified to prove that brains don't really think. <
You need to read more Searle. He modified the CR in a number of ways, and the point remains.
> ("Imagine a room full of unconscious single-celled organisms transmitting electrical impulses, fed with input impulses from the outside that correspond to Chinese characters...") <
You are missing the point: it matters what things are made of. Neurons work, cardboard likely not. Helium certainly not.
> If you will admit that the Chinese room argument fails (or proves too much - e.g., that human brains don't think), I will give you an *even better* reductio against functionalism/CTM <
Sorry, no deal. But you are more than welcome to write a post about your take...
Massimo, you wrote "Yes, but those alternatives also originated within a fertile interaction between philosophers and computational theorists (many of whom are philosophically inclined, incidentally, beginning of course with Turing himself)."
You are bluffing. Let's take connectionism for example. How did it grow out of fertile interaction between philosophers and computational theorists? I'm quite familiar with the history of connectionism. I'd like to see some concrete evidence. Which philosopher influenced which connectionist? If your statement originated from real knowledge about the history of cognitive science, instead of wishful thinking, it should not be difficult to give names. After all, so much has been written about it. The Stanford Encyclopedia of Philosophy that you cited states that philosophers were late to the game. They only became aware of the development after the publication of Parallel Distributed Processing. "Philosophically inclined" is a tricky word. That can be anybody that you like.
In the essay, you wrote "Philosophers can prepare the road for scientists by doing some preliminary conceptual cleaning up". That is not supported by the history of the development of the computational theory of mind. Philosophers were late to the game, not ahead. For example, Searle's Biological Realism is simply restating what neurophysiologists have been saying all along. Searle himself states that in "Mind: An Introduction".
Massimo, it seems like your distinction between scientific theory and philosophical account boils down to whether anyone has yet devised an appropriate way to empirically test the theory/account. So presumably any philosophical account could eventually become a scientific theory, and historically many have (thinking of physical and social sciences that used to be within the realm of philosophy).
ReplyDeleteMassimo,
> > From my viewpoint, I don't think we can say all that much until we (humans, I mean) make a conscious robot to some degree. < <
> Well, yes, but how are we going to do that if we are not clear on what consciousness means and how it works? By trial and error? <
Biological evolution made our conscious brains through trial and error. (Took a long time.) There is some degree of trial and error in evolutionary approaches to AC. I don't know if neurobiologists, for example, will tell us enough of what to do. (Probably not.) So it will be a mix of knowledge from various fields + real trial and error that makes it. Even if we make a conscious robot, will we be clear on what consciousness is? Maybe not.
The mind is a predictive system. It looks for consequences just as a computer might, except for reasons that a computer not only might not but cannot.
Our predictive systems are using probabilistic logic in one sense, but not necessarily as in Bayesian or other factual premise driven systems. Because what we in our subconscious processing are looking for are familiar patterns, and we assess them not only for consequences of expected behaviors but for past purposes that those patterns must (from historical assessments) represent.
The problem then becomes one of how those predicted or predictive purposes might help us to anticipate the tactical natures of opposing learned or inherited strategies.
The probabilities depend on the perceived purposes, hence the premises involved are not so much factual as conceptual.
I could go on, but I'm sure I lost most of you who can't conceive of the concept of predictive purposes, let alone of past predictive.
There is a teleonomic character to the nested chemical systems that interact to comprise what we call life. Sub-systems appear to form purposeful relations in the sense that they support the stability of higher level systems.
The chemist Addy Pross has a recent book that suggests 'dynamic kinetic stability' is the crucial concept that will help us connect the inanimate->living system continuum. He sees a higher level system obtaining 'dynamic kinetic stability' when systems that can replicate also manage an internal metabolism that can extract energy from the environment. The ability to extract energy allows the sub-systems to maintain in far from equilibrium states.
Terrence Deacon also addresses the same topic in his book 'Incomplete Nature'. Deacon also leans on thermodynamics within a framework of inter-dependent nested systems and mutual constraint.
Massimo was somewhat critical of the Pross book for its attempt to link DKS to fitness in evolutionary theory.
It seems to me to be very premature to think we can create conscious AI, or equate the mind's workings to computation, before we have clearly understood the less complex transition from inanimate to living systems.
In my view Deacon and Pross are on the right track. How we see a system seems dependent on our point of reference. If viewed as a sub-system it is under thermodynamic control. If viewed as emerging from lower levels it may be useful to view it in terms of DKS or something similar.
At this stage I think reduction to computation & elimination of the importance of the chemical substrate as part of the process that links systems is wrongheaded.
You don't seem to see chemically reactive systems as having been intelligently constructed and evolved, or do you?
A recent computer simulation study (http://www.bbc.co.uk/news/science-environment-22261742) suggests that the evolution and emergence of intelligence is connected to maximum entropy production. The simulations are based on a 'causal entropic force', which is guided by the goal of keeping as many options open as possible.
Entropy as you know is a fundamental physical law which itself is not reliant on 'intelligent' direction. At least not based on what I feel is a helpful definition of intelligence. Exactly where to draw the line on where intelligence emerges I think is an open question but I wouldn't put it prior to entropy.
So a computer simulation suggested that evolution of intelligence is connected to maximum entropy production, but of course not how that occurred or why, or had any way of knowing whether intelligence was in fact a causative factor from the start. Because you assume that entropy is a fundamental physical law "not reliant on intelligent direction" yet with no understanding that intelligently constructed systems are, as all physical forces must be, self regulated and thus self directed as a result of that construction. You will respond that laws of force are fixed, and yet not recognize that many physicists have begun to disagree with that conception. But of course your computer won't if you've not told it otherwise.
So if your simulation showed you where intelligence appeared to evolve as a proactive process, it would not be concerned whether the entropy itself was the reactive process that intelligence was using to evolve itself.
That wasn't in the program, right?
And in any case, it's always amazed me that not only scientists but the philosophers that gave science birth are content to observe that things emerge, what they emerge from, and when, and see that as an adequate explanation of the entire process.
> intelligently constructed systems are, as all physical forces must be, self regulated and thus self directed
I'm pretty sure you're assuming that entropy is a "physical force" here, however it's not comparable to e.g. gravity or normal force, so it's probably not a physical force the way you're talking about them here. It's actually more akin to a computational or mathematical law.
Just a note, entropy is by definition not a "physical force" like gravity or normal force - it's a computational rule.
Jaime, you had entropy as a "fundamental physical law," remember? Anything with the force of law is either agency directed or self directed. I don't believe that there are god like agencies in charge of the universe's rules or its laws (assuming there's a difference), unless life forms are somehow godlike, but that's just me.
Delete"Yes, but as you say, the meaning [in mathematics] is different, which is why I like other terms better, like conjecture, and so on."
But the word "theory" in mathematics is *not* a synonym for "conjecture." A conjecture is a claim about the truth of a specific proposition (e.g., the Poincare conjecture). A "theory" in mathematics is an organized, systematic, logically-connected body of propositions on a certain subject matter (e.g., theory of linear operators, theory of finite fields, theory of computable functions).
You are, of course, perfectly correct to point out that the word "theory" is used quite differently in math/logic than it is used in the natural sciences, but it doesn't make sense to say, "I like other terms better, like conjecture", because the term 'conjecture' is used for something completely different.
Anyway, a quick glance at an English-language dictionary reveals that there are many widely-accepted uses of the word 'theory'
http://www.merriam-webster.com/dictionary/theory
I see no reason to privilege the natural scientist's use of the term and to suggest that philosophers/logicians/mathematicians ought to pick different ones. Recognizing that a philosophical "theory" is not the same thing as a scientific "theory" is no more difficult than recognizing that the river "bank" is not the same kind of "bank" that grants a mortgage.
People often say (usually when correcting creationists) that a physical theory is a widely accepted and corroborated hypothesis.
I don't think this is correct. I think it's a system of thought which is built on a set of axioms or assumptions. How well supported by evidence it might be is not relevant.
In this view, the use of "theory" in philosophy (CTM), mathematics (set theory) and physics (string theory) is entirely consistent.
Hi Massimo,
I have to say I was kind of disappointed with this post. I have long regarded the Chinese Room as one of the most obviously misconceived thought experiments to have been so widely popularised.
To my mind, it has been thoroughly destroyed as a convincing argument against the CToM by the "virtual mind" rebuttal, which is a variant of the "system" rebuttal.
Searle is attacking a straw man. The guy in the room is the CPU, the hardware. *Nobody* literally believes that the hardware will understand anything in an AI. It's the software that runs on the hardware that does the understanding. It's the whole system that supports a virtual mind, and the mind of the guy in the room is not the system.
I've explained these points and more on my blog.
http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-chinese-room.html
Searle has no convincing argument to save the Chinese Room against this point of view. He falls back to assertions such as "you can't get semantics from syntax", which may be self-evident to you but is far from it to me.
I would put forward the hypothesis that indeed you can, and indeed that semantics *are* syntax, in the sense of a structure of interconnected symbols. If the brain is a computational system, then all of what we call "meaning" is simply a very rich interconnected map of nodes symbolising different concepts. The fact that the word "car" has meaning to you is only due to links to representations of three-dimensional car shapes, the concepts of wheels, travel, fuel, specific memories etc. Each of these ancillary concepts then have their own maps of meaning, ultimately leading to a complete conceptual map of the world.
I suggest that if you had a computer system that had just as rich a set of associations, then "car" would have just as much meaning to that system as it does to you.
It's only from a point of view outside such a map of nodes that we need to interpret the meaning of the symbols - the job of deriving semantics from syntax. From within the point of view of that system (i.e. the mind comprising it), there is no need to interpret, for interpretation is simply the act of translating from a non-native representation to the native representation in one's own mind. You don't need to interpret your own mind state because it's already native. The "syntax" *is* the semantics.
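A toy sketch of the "map of interconnected nodes" picture (purely illustrative; a real conceptual map would of course be vastly richer): each concept is just a node, and its "meaning" is nothing over and above its links to other nodes.

```python
# Meaning modelled as nothing more than a graph of associations between concepts.
concept_map = {
    "car":    {"wheel", "fuel", "travel", "road", "driver"},
    "wheel":  {"car", "round", "rubber"},
    "fuel":   {"car", "energy", "petrol station"},
    "travel": {"car", "road", "holiday"},
}

def associations(concept, depth=1):
    """Collect everything reachable from a concept within `depth` links."""
    frontier, seen = {concept}, {concept}
    for _ in range(depth):
        frontier = {n for c in frontier for n in concept_map.get(c, set())} - seen
        seen |= frontier
    return seen - {concept}

# The richer and more densely connected this graph, the more "meaning" the
# node "car" carries on this picture.
print(associations("car", depth=2))
```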
More on this idea here:
http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-semantics-from.html
http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-meta-meaning.html
It's been a long time since I looked at the Chinese Room, but my sense is that the essential aspect of it was not what kind of manipulations (symbolic, connectionist, ...) were conducted within the room, but that the input and the output were symbolic. As a computer scientist this always seemed a trivial issue to me, but Searle seemed to think it was important.
ReplyDeletescitation,
first of all, I would appreciate it if you were to tone down the rhetoric a notch or two. This is supposed to be a friendly discussion, and throwing around words like “bluffing,” “wishful thinking” and the like isn’t very helpful.
> How did it grow out of fertile interaction between philosophers and computational theorists? I'm quite familiar with the history of connectionism. I'd like to see some concrete evidence. <
From what I know, connectionism was started in the 1940s by Warren McCulloch, a neurophysiologist, and Walter Pitts, a logician (i.e., a philosopher).
Moreover, important critiques of connectionism (whether you think they succeeded or not, I really don’t have a bone in that fight) were articulated by philosophers, beginning with Fodor in the ‘80s. Indeed, it may be in part because of these critiques that few people these days label themselves as connectionists, though the thrust of the idea remains important.
But my point was broader: that thinking about mind and consciousness started in philosophy and then moved to science. These days, there is a lot of interaction between the two fields, as there is between the fields of logic (philosophy) and computational science.
> "Philosophically inclined" is a tricky word. That can be anybody that you like. <
Not really. Clearly you have never read Lawrence Krauss. Or Richard Dawkins. Or Jerry Coyne. Or Stephen Hawking. Or Neil deGrasse Tyson. Or...
> "Philosophers can prepare the road for scientists by doing some preliminary conceptual cleaning up". That is no supported by history in the development of the computational theory of mind. <
Again, you are reading me too narrowly: I meant that philosophers have been thinking about mind and consciousness far ahead of the time it was possible to do any science about it (Descartes, Locke, Hume, etc.).
> Searle's Biological Realism is simply restating what neurophysiologists have been saying all along. <
I think that’s a misreading of the history of the field. Most neurophysiologists simply *assumed* something like biological realism. That’s very different from articulating it as a generally defensible position in conceptual space.
contrarianmoderate,
> it seems like your distinction between scientific theory and philosophical account boils down to whether anyone has yet devised an appropriate way to empirically test the theory/account. So presumably any philosophical account could eventually become a scientific theory, and historically many have <
Yes, pretty close. I also maintain that philosophical “theorizing” is more abstract than scientific theorizing, as explained in my response to scitation above. So there are some types of philosophical accounts (such as those of truth) that are never going to be empirically testable — but they are not meant to be.
Philip,
> Biological evolution made our conscious brains through trial and error. <
Well, yes, but we are talking about science here, not evolution.
> There is some degree of trial and error in evolutionary approaches to AC. <
Granted, but — again — unless we have understanding (which requires a good theory of consciousness) then we ain’t doing science. At best we are tinkering.
C,
> But the word "theory" in mathematics is *not* a synonym for "conjecture." <
Yes, I didn’t mean to imply that. I did say that mathematicians use the word “theory” (and so do philosophers, obviously), but, as you say:
> You are, of course, perfectly correct to point out that the word "theory" is used quite differently in math/logic than it is used in the natural sciences <
> Recognizing that a philosophical "theory" is not the same thing as a scientific "theory" is no more difficult than recognizing that the river "bank" is not the same kind of "bank" that grants a mortgage. <
I wish. As much as I often criticize scientists, however, I fear a number of philosophers delude themselves when they use the word “theory” that they are doing something more than provide an account of a certain concept.
Disagreeable,
> I have to say I was kind of disappointed with this post. <
Oh well, one can’t always win, right?
> I have long regarded the Chinese Room as one of the most obviously misconceived thought experiments to have been so widely popularised. <
I rather think it is one of the most widely misunderstood ones.
> *Nobody* literally believes that the hardware will understand anything in an AI. It's the software that runs on the hardware that does the understanding. <
So the rule book understands Chinese? Or is it the rule book plus the act of passing notes in and out of the room? Whatever. Software doesn’t understand anything, it just manipulates symbols. And by the way, Searle has replied to the “whole system” response by getting rid of the room entirely. His point still stands: there is no understanding without semantics, and symbol manipulation doesn’t get you semantics. And if that’s not self-evident to you, as you say, I don’t know how else to explain it. Surely you do see that there is a difference between the two? Then it is up to computationalists to provide an account of how semantics emerges from syntax. From where I stand, it looks like magic...
> If the brain is a computational system, then all of what we call "meaning" is simply a very rich interconnected map of nodes symbolising different concepts. <
But the brain is not a computational system, see Fodor.
> The fact that the word "car" has meaning to you is only due to links to representations of three-dimensional car shapes, the concepts of wheels, travel, fuel, specific memories etc. <
So by that standard a neural network that recognizes cars understands the meaning of the concept car? Seriously?
> Even if we built such a robot, there would be plenty of people who would deny that it was conscious. <
How would you know you’ve succeeded? By way of a Turing test?
> People often say (usually when correcting creationists) that a physical theory is a widely accepted and corroborated hypothesis. I don't think this is correct. I think it's a system of thought which is built on a set of axioms or assumptions. How well supported by evidence it might be is not relevant. <
I think that’s a radically misguided view of what a scientific theory is.
Seth,
> At this stage I think reduction to computation & elimination of the importance of the chemical substrate as part of the process that links systems is wrongheaded. <
I think it is *always* wrong headed. Even if Deacon and Pross are on the right track, I doubt they would maintain that you can get life out of *any* chemical substrate (let alone without substrate at all, which is what at least some computationalists are claiming).
Jerry,
> that the input and the output were symbolic. As a computer scientist this always seemed a trivial issue to me, but Searle seemed to think it was important. <
Because of the above mentioned distinction between syntax and semantics, or symbol manipulation and understanding. Which Disagreeable would like to have disappear by magic.
>So the rule book understands Chinese?<
I'd actually say it's the rules that understand Chinese.
Most Strong AI proponents say that you have to actually implement the rules in order for understanding to take place.
>Software doesn’t understand anything, it just manipulates symbols.<
I disagree! And The Chinese Room certainly does not answer this point of view.
>And by the way, Searle has replied to the “whole system” response by getting rid of the room entirely.<
I know! And what a magnificent failure of imagination such an argument represents!
I've answered this on the blog posts I linked. If you get rid of the room, it's still the software (the rules) that understands. It doesn't matter if it's all in his head - his mind is just the substrate on which another, distinct mind is supported. This might seem laughable but keep in mind how extremely unrealistic this scenario is. It's perfectly analogous to the concept of a virtual machine in computer science - the physical computer and its operating system supports an entirely distinct virtual machine and operating system.
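A rough illustration in code (nothing more than an analogy): the host function below plays the part of the man in the room, mechanically shuffling values by fixed rules, while the little program it runs is a distinct system with its own behaviour, realised by the host but not identical to it.

```python
def run(program, memory):
    # The host: it only knows how to shuffle values according to fixed rules.
    for op, arg in program:
        if op == "ADD":
            memory["acc"] += arg
        elif op == "MUL":
            memory["acc"] *= arg
    return memory["acc"]

# The hosted "machine": a distinct system supported by, but not the same as, the host.
print(run([("ADD", 2), ("MUL", 3)], {"acc": 1}))  # prints 9
```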
>there is no understanding without semantics, and symbol manipulation doesn’t get you semantics<
An assertion I disagree with.
>Then it is up to computationalists to provide an account of how semantics emerges from syntax.<
I have done so in brief in a comment and more fully in my linked blog posts.
>But the brain is not a computational system, see Fodor.<
Well, as long as we're throwing assertions around, I'll say that it is indeed a computational system, see Dennett.
Besides, you said that Fodor's was an argument against simplistic computationalism. I'm not so sure he's against computationalism per se. I haven't read the book, but I'll try to get a hold of it. By the time that happens, unfortunately, you'll likely have moved on from this topic.
>So by that standard a neural network that recognizes cars understands the meaning of the concept car? Seriously?<
No, a neural network that has a rich map of associations representing cars and everything to do with them understands cars. To the extent that that map is rich and full of associations, the map is meaningful. To the extent that the map is sparse and not well connected, it is not meaningful. Meaning arises only from the richness of connections.
>How would you know you’ve succeeded? By way of a Turing test?<
You wouldn't. That's my point. We could never know that we had built a conscious robot. The only way to establish consciousness is by way of philosophical arguments like this one. It can never be empirically tested.
>I think that’s a radically misguided view of what a scientific theory is.<
Well, you're absolutely right, that's certainly not how it's usually defined. However I think if you look at usage another picture emerges. String theory is not a scientific theory at all according to that definition of the term (it being unsubstantiated), however it's certainly described as a theory, and I think this is appropriate because it's consistent with usage in other disciplines.
"No, a neural network that has a rich map of associations representing cars and everything to do with them understands cars. "
So what if instead of producing an AI system that understands 'the world', you make one that understands a much simplified virtual world, one that has only a few components. Any system that can, given inputs, give the correct outputs, would then understand it? What if that virtual world was so simplified that a random process, the majority of the time, gave the correct outputs when given inputs (or, in other words, solved a problem correctly, etc.)?
"The only way to establish consciousness is by way of philosophical arguments like this one. It can never be empirically tested."
Please, everybody, never ask me to testify that you are sentient beings. To be precise, I can barely assert that I'm conscious. Somehow I sometimes feel that something is conscious of me instead of myself. I know, there's nothing worse to the rationalist mind than to realize it could be a bunch of thoughts of something else.
And, in fact, guys, try this experiment: describe what you guys are; and don't be surprised if after a couple of hours you don't get more than some concepts, ideas and absolutely nothing about you, nothing exclusively meaning you. :(
Perhaps consciousness is more like something I suppose Baron P is trying to teach us: not more miraculous than gravity and maybe just a special circumstance of it, I mean, maybe just another 'force' in nature.
All this leads me to think that artificial consciousness doesn't work because we have a too presumptuous account of what ourselves are. I don't know if a more modest 'theory' of our consciousness would help in building a self-aware machine, but for sure it will be of great help to ourselves. :)
@Robert Schenck
>So what if instead of producing an AI system that understands 'the world', you make one that understands a much simplified virtual world, one that has only a few components.<
Such a system would genuinely understand that world, however its understanding would be a much meaner, poorer sort than we typically experience.
But if I described such a world to you then you'll have much the same type of understanding. Suppose I tell you that the world consists of an infinite continuous Euclidean 6-dimensional space, with lots of point particles uniformly but randomly distributed moving in straight lines in uniformly distributed directions and never interacting with each other. If the world is so simple, then I'd say you now understand it just about as well as a neural net that grasped these facts would.
>What if that virtual world was so simplified that a random process, the majority of the time, gave the correct outputs when given inputs (or, in other words, solved a problem correctly, etc.)?<
I can't conceive of a completely arbitrary random process that would answer any question correctly most of the time - it would seem that 50% is the best you could achieve, and only for Yes/No questions.
To the extent that the process is weighted to give the correct example most of the time, then you're probably starting to get some level of understanding. Again, the level of understanding and meaning would depend on the complexity of the algorithm and the semantic map it's working with. If it's very simple then there isn't much understanding.
"But the brain is not a computational system, see Fodor."
But any theory that isn't computational is useless. (What you can't compute, you must refute.)
Everything is a computational system, at least that's the working assumption. (The universe is made of code.)
>I am also worried that the meaning of the word “computation” gets inflated so much that it doesn’t really mean anything anymore.<
I'm worried that you have too naive an interpretation of the term.
"Symbol manipulation" etc certainly sounds pretty dumb - Searle in particular sometimes seems to imply that we're basically taking input symbols, looking them up in a lookup table and outputting the results.
It looks especially dumb when compared to more opaque, complex and mysterious models such as connectionism.
However, as long as we're talking about a process that could in principle be performed by a Turing machine, it's computation. It's perfectly well defined and there's no ambiguity at all.
And connectionism really is computation in this sense. Even massively parallel models such as neural nets can be implemented as regular sequential programs run on a bog standard computer - by programs which are themselves essentially symbol manipulation when viewed at a low level.
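A minimal sketch of that reduction (illustrative only, with arbitrary numbers): one layer of a neural network written out as plain sequential arithmetic, which is to say ordinary symbol manipulation.

```python
def forward(inputs, weights, biases):
    # Compute one layer of a neural net with plain loops and arithmetic:
    # every step is an ordinary, sequential manipulation of numbers.
    outputs = []
    for w_row, b in zip(weights, biases):            # one output unit at a time
        activation = sum(x * w for x, w in zip(inputs, w_row)) + b
        outputs.append(max(0.0, activation))          # ReLU nonlinearity
    return outputs

# Arbitrary example weights and inputs:
print(forward([0.5, -1.0], weights=[[0.8, 0.2], [-0.3, 0.7]], biases=[0.1, 0.0]))
```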
Massimo
'I think it is *always* wrong headed. Even if Deacon and Pross are on the right track, I doubt they would maintain that you can get life out of *any* chemical substrate (let alone without substrate at all, which is what at least some computationalists are claiming).'
I agree that both Deacon and Pross are trying to lay a framework for very particular types of relations between chemical systems that lead to the emergence of life. I think we are mostly in agreement.
Hi Massimo,
I'd like to clear up what I'm saying about semantics and syntax.
Semantics is meaning.
Syntax is whatever mechanical or other means might be used to represent or compute results - syntax is not intrinsically meaningful, but a mind can infer/interpret meaning from it.
So syntax and semantics are distinct, because syntax only represents semantics - you need some sort of code to decipher the syntax and recover the semantics. The process of inferring semantics from syntax is essentially that of translating an external representation to an internal one.
But I think that we're taking for granted what "meaning" means.
If the CTM is right, then meaning is nothing more than the associations between symbols in a syntactic system. In the unique case of one's own brain, semantics and syntax are one and the same. Your brain states may not appear to be meaningful to me, but they are meaningful to you, precisely and only because you *are* that syntactic system. There is no translation between the external representation and the internal representation because we're starting from the internal.
You may not agree with this viewpoint, but please don't dismiss it out of hand as patently absurd, or at least explain to me why you think it is.
I like this view. I guess that those philosophers and artificial mind researchers should have some work on Peirce's basics as a start. And yes, it's like Baron says, they must give their machine some sense of their environment and also some needs, like 'hunger' and 'thirst':), as well as a good dose of fear - of being unplugged, for instance, or of a predator...
DeleteThis is why I don't believe in mathematical platonism: if we don't put a number in that artificial mind, it doubt it will appear there until it needs to 'count' how many enemies it has to face to survive :)
How do you have meaning in a system that doesn't seem to have a memory bank of memories of whatever it hasn't thus been able to learn the meaning of, and has no reason to use that theoretical "meaning" for further learning, without again any place to put the otherwise useless, and therefore meaningless, information?
And must I mention that you've also left out any mention of any learning process through which one normally attains some form of meaning as well.
Massimo, a well written essay, although in the end it leaves open the problem it started with: we are not able to reach any sort of agreement on meanings in general. And to achieve some theoretical comfort, I prefer to use the term 'description' to refer to theories, accounts or whatever. Well, at least a good description has to be shown to relate to whatever it describes, no matter whether the 'experience' which shows this correlation to be valid has taken place outside or inside one's mind. I say this based on my belief that even in mathematics and in logic one experiments all the time, in a way that shares some similarities with the way one experiments outside the mind. But, anyway, the word theory, if used to designate something that satisfies the criterion of being a valid description, is adequate, naturally.
....
The following fragment caught my attention:
"the syntactic aspect of symbolic representations (as opposed to their semantic one)"
I cannot understand how this opposition works outside the linguistic (and logical!) theoretical context which recognizes syntax and semantics as its objects, but which must also emphasize that they are absolutely not independent of each other.
It's logically possible, of course, to organize any collection of objects syntactically and so provide some meaning that springs from the ways such objects are organized, although I believe that those meanings won't surpass a threshold of concepts like similarity, repetition of similarities and what is derived from them, like opposition and some of its variations, amongst them symmetrization, retrogradations etc. This probably shows that the syntactic aspect is necessarily linked to the semantic one: one springs from the other in a way that makes it possible to look at any meaning as syntax as well; another way to say this is that any syntax is able to determine a semantics.
Tonal music is likely to work this way (if you strip it of the meanings we use to refer to it, like song, sonata, funeral march etc, and also of the associated qualia): assortments of sounds that by themselves mean something, although basically this meaning serves just as a kind of guide to each assortment of sounds itself; one can say that fundamentally tonal music refers to (means) itself, and it does that by simply applying syntax, a syntax that ultimately has the sound and its harmonics as its paradigm (say, a syntax organized around some acoustic properties of the so-called musical sound).
From another point of view, a Peircean sign (icon, index or symbol) can be represented as a syntax that shows, for instance, how its object is related to other objects and to the individuals who use it. So, it seems that every sign (in the Peircean sense) can be syntactically expressed.
Then, 'syntax as opposed to semantics' is an idea that makes no sense to me, and trying to make anything at all (machine or human) think outside this framework seems vain. OK, unless, naturally, all those meanings - of syntax, semantics, meaning, symbol, sign etc - have changed.
I'd just add, on the musical account, that I used the tonal music example as a means to ease my argumentation. In fact, every piece of music (including speech - if you ignore its meaning) works the same way: a display of sounds under a set of rules that aims just to make sense, to be meaningful.
Music, to me, is an emotional language that is meaningfully inferential - composed of sounds that evolved over time to convey our fears and pleasures at the emotional and intuitional level of our evolving mental functions. The meanings we will gather from today's music will still depend on the varieties of emotional responses still evolving in our various cultures, as well as on our particular histories and philosophies.
Historically, musical sounds had to have been the precursors to our verbal languages, right?
Whales and such in the oceans still use these sounds as language, as do elephants and such on land.
I invite you, guys (Massimo, Baron P and others), to perform a sequence of auditions, if possible, beginning with some restored Greek music, then going into the also restored early medieval music, then Gregorian chant and the early polyphonies until at least the baroque. Also take a look at Boethius' opinion on what a lay listening is compared with (what he called) an expert listening; this idea runs through time until at least nineteenth-century musicology, and in my opinion it's still true that simply practicing a language like music daily under some presuppositions (like systematic musicology, for instance) is an easy path to a differentiated listening. So, if you make all this way from Greece to the baroque, I'm confident you'll understand what I'm talking about: a slow building of some structures that are slowly identified with some emotions and cultural objects (like parties, civic events etc), but aren't - as structures - necessarily so. They are pure construction under very severe rules that are, as I said, ultimately caught in the behaviors of harmonics; say, they're pure coding, like the sound combinations of our natural languages, to which we added meanings. If you also have the time, give a try to some ethnic music before knowing what meaning is attributed to it by its respective culture: I'm sure that your meaningful associations will diverge from the ones of that culture. I mean that all the meaning attributed to music (and I could comfortably advance: 'to any form of art') is arbitrary, although some can be traced (perhaps) back to our bodies' rhythms etc (it's really a very, very long conversation, believe me - not as simple as most people think it is).
Well, I'm not trying to unlink emotion - or whatever - from music, which would be stupid, as our first approach to anything in the world is emotional (some people like to say 'aesthetic'), and art in general maybe has the particular property that makes it aesthetically approachable forever.
Waldemar: You give structure to meaning, right? And if you then give meaning (such as what you consider to be musical meaning) to a previously formed structure, in my view you're actually adding to or "evolving" the initial meaningful reason for that structure. (I would say meaningful purpose, but it's not understood here that reason and purpose are inseparable.)
So while it's correct that we arbitrarily attribute some particular set of meanings to music, as well as visual arts, poetry, prose, and even dance, that doesn't "mean" that the forms involved were meaninglessly originated.
Baron,
Delete"that doesn't "mean" that the forms involved were meaninglessly originated."
For sure they are not. They have a meaning which 'emanates' from the fabric of, in music, the sounds. These coded sounds, say, ordered after a set of very rigorous rules, mean intrinsically, so much so that we have historically given them names that are analogous to our emotions or to some worldly objects (Vivaldi was supposed to mimic a hen in one of his concerti, and editors used to name Beethoven's sonatas and symphonies emotionally, like Appassionata, Pathétique, Pastoral etc). Systematic musicology is supposed to try evading those tricky associations in order to determine how the musical code intrinsically works.
As an astonishing example, read a little about Bach's The Art of Fugue, how he composed that kind of audible treatise on fugue composition, exploring every possibility in this form, which uses strict counterpoint: its pieces have no names, no associations at all except numbers and the name of the contrapuntal form used when it is not a fugue. Then, try to listen to the collection (about 2 hours) and evaluate the power of what strict rule-observance yields, pure computation with sounds. I'm not overstating. I believe musical comprehension is vital to any philosopher or scientist - not overstating again.
And that's the point: codes mean by themselves when applied to sets. And these 'primary' meanings are, from a visual perspective, similarities, inversions, retrogradations etc (they are the same in music but you can't notice them without due training). The problem is that they are not considered significant as concepts, even though they are indeed at the basis of knowledge.
Waldemar, That's all very well, but you've missed the point that music has evolved from our emotional concepts, the recognition by our biological systems that individual choice making was also a social necessity and meaningful communication was necessary for group survival. Absent our present abilities for succinct speech, we developed the ability to signify our emotionally (yet intelligently) determined messages as feelings. Our classical music, and the operatic in particular, have retained and artistically magnified the effects of those messages, and we "suspend our disbelief" as it were to enjoy the feelings of another's pain, fear, sadness and despair as much as we enjoy the feeling of their ecstasies and joys.
Baron.
Delete"you've missed the point that music has evolved from our emotional concepts"
I didn't miss that point. I didn't mention it because I understand it is axiomatic: what in knowledge didn't come out of emotions? (This word needs some polishing before being used more comfortably, but I bet we both understand what it means.) If we want to appreciate a case of 'transubstantiation', here we have a true one: how do sentiments (judgments or responses given by the mind to what it receives from the world) become reason? A process we witness on a daily basis in children and even in pets, although it seems we refuse to admit this occurs.
Sentiments are the representatives of reason. The emotional brain is the seat of our predictive apparatus; where our assessments, expectations, choices, are all the products of our predictive forms of trial and error intelligence. So in effect, it's primarily that elementary function of our reasoning that is giving us our initial responses and decisive judgments.
It's that reasoning that still uses emotional signals to communicate how it feels, and it's talking to our rational brain processes as well as to those of any others it has the need to deal with.
I'm not going to try to tell you who to read for a better explanation. You're one of the smarter guys here and likely better read on many of these subjects than I am.
But I do have a somewhat different perspective, which is that form will follow function rather than precede it.
And further, if not a bit beside the point, I hold that if a form has been intelligently constructed to react intelligently, then that's the intelligent purpose that it serves. To have a purpose would seem to require intelligence for the having of it, and especially for the ability to choose its purposes (i.e., the particular purposes that a particular intelligent agent can have). But I digress.
Baron, I'd really appreciate your suggestions on who to read. So, please, let me know.
On form and function, I'd say they're just sides of the same coin, like syntax and semantics. I'm not sure if there's any precedence in there. If I'm not mistaken, some continental (European) discussions on that subject (in philosophy of art) led to an agreement that form is already function. I also agree, although not always with the way the conclusion is deduced. But I need to go back to those writings to be completely sure of what I'm talking about.
On intelligence, I'm becoming very suspicious of that idea. To me it sounds too heavy, carrying so many preconceptions that if I keep using it this is because there's no convenient substitute (at least in Portuguese) to it. But, well, keep digressing on it, so that I can grasp the way you understand it.
Waldemar,
For an understanding of emotions, read Antonio Damasio, Descartes' Error: Emotion, Reason, and the Human Brain.
For a good philosophical perspective on modern science and biology, Steven Rose, Biology Beyond Determinism. For a look at the best biological mind of today, read James A Shapiro, Evolution: A View from the 21st Century.
Also see his paper: Bacteria are small but not stupid:
Cognition, natural genetic engineering, and sociobacteriology.
And yes, form and function are sides of the same coin, but note that the coin is the form and its function is to serve its user, not itself. More to the point, where some see a similarity here to the chicken and the egg dilemma, I see this: the chicken can decide to reconstruct its eggs, but the egg can't decide to reconstruct its chickens.
In other words the function is the intelligent part of the form, not vice versa. And also yes, the words for the concept of intelligence are seemingly overused, as intelligence can mean in English the opposite of non-intelligence, and intelligent can mean the opposite of unintelligent.
As for those Europeans, tell them that, sure, a function must use a form to operate, but a spandrel just sits there stupidly.
We also have words like intentional and intensional, and don't know for sure what either of them means.
Baron, I know some of Damasio's books. Thanks for the other titles.
I don't habitually go so far with the 'form-function' concept. I just accept that any possible form stands for at least one function.
And I 'normally' understand 'intention' as a concept related to a purpose of a person (there's a lot of trouble in inferring it outside this constraint: some people still object that a dog or even a monkey can hold purposes...).
As for 'intension' (that is sometimes 'comprehension'), it's generally present in classical logic designating the collection of properties of a given concept as complemented by 'extension', which designates the set of objects a given concept is applicable to.
Please, tell me if I'm wrong.
But 'intelligence', as it's applied, for instance, in the expression 'intelligent design', seems to express only a kind of astonishment, the one we experience while perceiving that somehow we are able to know something about the world, that there's a kind of complicity between the things and our 'in se' and 'per se' undefinable ability to know, say, in short, we attribute intelligence to just the act of comprehending things without the minimal idea of what comprehending is. And what is worse, we dare to use it as a measure, and this seems to me just a way to establish the domination of a group over another. By the way, I'm a proud member of ASNEM. :)
I'm proud to say I wouldn't join any organization that would have me as a member.
Yes, we are astonished to find, if we dare to look, intelligent design in almost (if not) every element of nature. Unless we had been taught, or felt instinctively, that the universe was being run by something that did everything "on purpose." Like, say, a god or two.
But then more and more of us were exposed to a school of "philosophers" who presumed to "teach" us that the universe, while acting with the appearance of a purpose, was merely an extremely complex compendium of accidents.
Yet here and there and now and then someone would peep up and whisper that there are accidents that we've found to serve a purpose. But they've generally been shouted down with cries that only entities that closely resemble humans have magically acquired the facility for acting purposively, and that's only because they've also been the first in nature to accidentally acquire intelligence.
(As you will note, we're informed again on this very blog below, that lower forms of what we've been taught to call living, in particular our plants, nevertheless can "process" information without it.)
It has also been whispered by some who have pretended to be scientifically gifted that it's possible for the universe to have tried intelligently with occasional success to effectively design itself, but since I'm told that intelligence at the very least implies an agent that's exercising agency, we should (if of course we could) fuggedaboutit.
Massimo, my apologies. I do appreciate that you spend your valuable time answering questions.
I disagree with all of the top three cells of the table you present. The CTM, while being partly based on philosophical analysis, is also based on computational models, information theory, and the study of actual computational systems. And while the CTM is certainly debated in philosophical circles, the only reason it is not debated among neurobiologists is that, presently, there is no alternative to it. Computation is simply the only framework scientists have to work with in terms of how physical matter can store and process information. Show me an alternative theory that can account for how neurons achieve cognition and I guarantee you it will be debated in the scientific journals as vigorously as the phantom limb theories you present. Finally, the CTM is not primarily tested with thought experiments, but rather it is tested every time a cognitive psychologist finds a match between a computational model (of say, visual perception) and human or animal performance in the laboratory. It is tested and has been tested continuously for the past several decades, and, as evidenced by the burgeoning field of computational psychology, it shows all the signs of being a fruitful, predictive theory. It would be extremely hard to account for our sophisticated understanding of, say, the architecture of the visual system, if the CTM were false.
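To give a flavour of what such a test looks like (a deliberately simplified sketch with invented numbers, not any actual study), the basic move is to compare a model's predicted performance with measured human performance across conditions:

    # Toy illustration (invented data): does a computational model's predicted
    # performance track human performance across stimulus conditions?
    predicted = [0.95, 0.80, 0.60, 0.40, 0.20]   # model's predicted hit rates
    observed  = [0.97, 0.75, 0.62, 0.35, 0.25]   # hypothetical measured human hit rates

    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    print(round(pearson_r(predicted, observed), 3))  # high r: the model earns its keep

A good fit is modest evidence that the model captures something about the underlying process; a poor fit sends the modeller back to revise it.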
ReplyDeleteI'd like to present a quick sketch of why I believe in the CMT, and why I think it is entailed by philosophical naturalism.
1. The brain is a physical object which obeys the laws of physics
2. The laws of physics can in principle be simulated on a computer
3. Therefore a brain (and its environment) can in principle be simulated on a computer
4. Therefore a computer can in principle be made to behave like a brain.
5. Therefore the operation of a brain is computational.
This argument does establish that naturalism entails that a simulated brain must be capable of exhibiting intelligence (weak AI) but it does not establish that such a simulated brain would be conscious (strong AI).
However, I think there are excellent reasons to think that it would.
If such a brain were absolutely behaviourally indistinguishable from a conscious brain but unconscious, it would be a philosophical zombie. If you accept the arguments for weak AI but reject strong AI, then you are accepting the Philosophical Zombie Hypothesis, which I think is a dubious position to take.
http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-philosophical.html
But even better, I would ask why nature would have bothered to evolve "real" consciousness if "simulated" consciousness is all that is needed. Is it really credible that "real" consciousness is a fortunate byproduct of the particular materials nature happened to choose to build our brains?
I think not.
http://disagreeableme.blogspot.co.uk/2012/12/strong-ai-evolutionary-parsimony.html
There's also the question of what consciousness could be from a physical perspective if it is not computation. What could it be about neurons that makes them so special? I really don't think this question has an answer.
http://disagreeableme.blogspot.co.uk/2013/01/strong-ai-naturalism-and-ai.html
But if we accept the idea of weak AI, that a computational system could behave as if it were conscious (including having a "belief" in its own consciousness and publicly claiming to be conscious), then perhaps we are such systems. Perhaps we are not conscious at all, but simply think that we are.
http://disagreeableme.blogspot.co.uk/2012/12/strong-ai-illusion-of-real-consciousness.html
I'm not actually claiming that we are not conscious, but that the distinction between "real" consciousness and "simulated" consciousness is incoherent because there is no way we could be sure that we are not simulated. The answer is surely that there is no distinction to be made.
Another way of tackling the Chinese Room:
The Chinese Room argument does not establish that the "system" does not have "beliefs". Searle himself uses the example of asking the Chinese Room what the longest river in China might be. Though Searle himself does not know the answer, the "system" does.
So the system believes that the longest river in China is the Yangtze, but the guy in the room has no such belief. You could continue quizzing the room in this way and find that it has its own personality, knowledge, attitudes, opinions, ethical and factual convictions -- none of which it shares with Searle. It could be a perfect philosophical zombie.
But if we can attribute all these qualities to the system (even if the system is carried out entirely in Searle's head) and not to Searle himself, then why is consciousness uniquely singled out as the one aspect the system cannot have, on the basis only that it is as hidden from Searle as are the rest of its qualities?
The only thing that distinguishes consciousness from those other qualities is that it is empirically undetectable and so more mysterious. Searle and others may be assuming too much in asserting that the system is not conscious and that you can't get semantics from syntax, simply because they don't understand what consciousness or semantics really are.
Disagreeable,
> you said that Fodor's was an argument against simplistic computationalism. I'm not so sure he's against computationalism per se <
Fodor is one of the fathers of computationalism, but in the book in question he makes a good argument that to think that everything the brain does - particularly systemic qualitative work, like consciousness - is computational is crazy. Or at the least unwarranted.
> I'd like to present a quick sketch of why I believe in the CMT, and why I think it is entailed by philosophical naturalism. <
As you know, there is a difference between simulating X and reproducing X. The flaw in your argument is (at the least) with n. 5:
> Therefore the operation of a brain is computational. <
It just doesn't follow from n. 1-4. Yes, there are people who say that everything in the universe is computational (more on this in a forthcoming post), but even if we grant this, then computationalism becomes vacuous and irrelevant, because it doesn't tell you how being a conscious human differs from being a rock.
> The Chinese Room argument does not establish that the "system" does not have "beliefs". <
That's right, it only brings out into the open how ridiculous it is to think so. Thought experiments are about probing our intuitions, getting hidden or unclear assumptions up to the surface; they don't "establish" anything. That's the job of actual experiments.
Ian,
very clever reasoning, though Disagreeable is correct: Searle and others have advanced this kind of objection to the CTM before. It falls under a broad category of positions known as pancomputationalism. Which not at all coincidentally happens to be the topic of my next post...
Waldemar,
> This probably shows that the syntactic aspect is necessarily linked to the semantical one: one springs from the other in a way that is possible to look at any meaning as syntax as well; another way to say this is that any syntax is able to determine a semantics. <
That you cannot have semantics without syntax is certainly the case. But how does it follow that semantics reduces to syntax? That would be like saying that Van Gogh could not produce Starry Night without paint, so the chemical composition of the paint is all you need to understand the painting. Clearly not. (This is obviously similar to your musical example, but I don't think you drew the right conclusion there.)
Philip,
> But any theory that isn't computational is useless. (What you can't compute, you must refute.) <
See my comments above about the vacuousness of this approach. And let's wait until I have time to put together my thoughts about pancomputationalism more coherently for the next post.
David,
> It is tested and has been tested continuously for the past several decades, and, as evidenced by the burgeoning field of computational psychology, it shows all the signs of being a fruitful, predictive theory. <
All good points, so I should probably clarify my position about the CTM, which happens to be similar to Fodor's: I do think that human thinking has computational aspects, but I am also convinced that some of the most interesting stuff our brain does (consciousness) is not computational (again, see disclaimer about pancomputationalism above). So I think - again, with Fodor - that the CTM is not false, but incomplete.
>he makes a good argument that to think that everything the brain does - particularly systemic qualitative work, like consciousness - is computational is crazy<
Reading reviews of the book, it seems that he is instead arguing that it is crazy to think that the brain is literally organised according to the architecture of a Turing/von Neumann machine, and in particular the model proposed by Pinker.
I really don't think he's saying that the brain is not fundamentally computational in the sense that a Turing machine could not perform the same functions, given an arbitrary amount of memory and time.
It might be worth reading Pinker's response.
http://pinker.wjh.harvard.edu/articles/papers/So_How_Does_The_Mind_Work.pdf
>As you know, there is a difference between simulating X and reproducing X. The flaw in your argument is (at the least) with n. 5:<
I acknowledge that difference. This argument shows that the empirical, behavioural aspect of a brain is computational. I then go on to explain why I think such a simulation would be conscious.
But let's get back to the difference between simulating X and reproducing X:
Simulating computation X really is reproducing computation X - just try using any virtual or emulated machine. If the CMT is true, then the mind is a computation and so simulating the mind really is reproducing the mind. You don't get to assert otherwise without an argument to back it up, otherwise you're begging the question.
>That's right, it only brings out into the open how ridiculous it is to think so. <
It certainly does not, at least not in the sense that I'm talking about. The argument purports to show how ridiculous it is to think a computation can be conscious. It makes no attempt to show that computations can't have beliefs. I've explained this and you've completely ignored my point:
The system *clearly* has beliefs. Any other position is absolutely untenable. Even Searle acknowledges that the system can have knowledge he doesn't have himself.
This belief or knowledge need not be conscious -- let's assume we're talking about a philosophical zombie. It doesn't even need to be a sophisticated AI, for even trivial systems can be said to have beliefs. The blogspot system currently believes you are Massimo Pigliucci because you are logged in with Massimo Pigliucci's credentials. There's nothing mysterious going on here.
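In code terms (a made-up sketch, nothing to do with how Blogger actually works), a "belief" in this thin, unconscious sense is just a stored representational state that the system acts on:

    # Made-up sketch (not Blogger's real logic): the system's "belief" about who
    # you are is a stored state derived from the credentials presented.
    sessions = {"token-123": "Massimo Pigliucci"}   # hypothetical session store

    def current_user(token):
        # The system "believes" the author is whoever the token maps to.
        return sessions.get(token, "anonymous")

    def attribute_comment(text, token):
        return current_user(token) + " wrote: " + text

    print(attribute_comment("Interesting post.", "token-123"))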
Now, again, I make the point that the room has the unconscious attributes of "personality, knowledge, attitudes, opinions, ethical and factual convictions -- none of which it shares with Searle."
So why is it that we can see that the system knows things that Searle doesn't know, even while it's taking place entirely in his head, but we're not comfortable saying it's conscious? Why do you single out consciousness from all those other attributes the system does not share with Searle?
Searle says:
"Now, I don't understand Chinese, but there's nothing else in the room, so if I don't understand then nothing understands!"
How is this not absolutely analogous to the sentence:
"Now, I don't know the Yangtze is the longest river in China, but there's nothing else in the room, so if I don't know it then nothing knows it!"
And yet the system as a whole is perfectly capable of demonstrating this knowledge. How is this possible if there is nothing in the system but Searle himself?
>That you cannot have semantics without syntax is certainly the case. But how does it follow that semantics reduces to syntax? That would be like saying that Van Gogh could not produce Starry Night without paint, so the chemical composition of the pain is all you need to understand the painting. Clearly, not. (This is obviously similar to your musical example, but I don't think you drew the right conclusion there.)<
So, a Van Gogh painting can be thought of as a very special arrangement of paint on a canvas. Similarly, meaningful semantics can be thought of as a very special arrangement of syntactical symbols within a mind. Not all syntax has interesting semantics, and not all arrangements of paint make for beautiful paintings, but beautiful paintings, at a low level, are still just paint.
Massimo, if you're saying that it's because of the paint composition that we see the colors and the derived shapes, I'll agree with you. What I'm saying is that there is a set of 'natural' meanings that springs from any syntax.
Recall the story about Leibniz's guesses on binary code: some people say that he came into contact with the first translation of the I Ching, and so its figures were 'read' by him as pure binary numbers. As a young boy I read the same figures as inversions and retrogradations, and after learning a bit of mathematics I started to see the same number code in them. When I learned about musical scale construction I started to see in them a set of six-tone scales too (I wasn't the first to see them this way).
I mean that we're able to interpret the world by pointing out what similarities (and dissimilarities) its objects share, and this is enough to start 'decoding' anything: this is the presupposition of any knowledge, namely, that the world can be 'decoded', that we can infer rules of composition for whatever is around us if we have the suitable tools and background to do it. So, the bigger the background, the more you grasp in the process. I hope current philosophy has no problem with that, which I think is a self-evident observation of an ability that surely is not just human (although no other animals are able to think about how they deal with the world).
Hi Massimo,
I know you're busy, but I want to know if you intend to answer the following points I have made.
1. My attempt to explain how to get semantics from syntax (it's not "magic" as you have suggested).
2. How leaving the room does not answer the virtual mind reply (there's another distinct mind existing on the substrate of his own mind)
3. How, given that it seems a computer should be capable of simulating a brain, at least the operation of a brain can be captured by computation.
4. How, given that a computer can operate identically to a brain, there are reasons to think that perhaps it would indeed be conscious (as outlined in the post with the 5-step argument for the possibility of a computer behaving like a brain).
5. That there is nothing vague about the definition of computational. Any calculation that can be carried out by a Turing Machine is computational, and this would include connectionist models and presumably all the laws of physics. The CMT only holds that a Turing Machine could be conscious in the same way that we are - it says nothing about how that system is designed, implemented or organised.
6. How computational systems can have beliefs in an unconscious sense, and how a perfectly analogous argument to The Chinese Room can be used to show that this is impossible - which is an obviously incorrect conclusion.
You obviously don't owe me a response to any of these, but one of the reasons I comment on this blog is because you are so good at responding to your commenters. It's very frustrating when I feel my best arguments have largely been ignored.
I believe two different topics are being discussed and sometimes confused here.
1. Syntax and semantics. Supporters of the CTM and many of the responders to CR agree semantics does not arise solely from syntax (which only preserves it) but also needs causal interaction of the syntactic states among themselves and the outside world (at least if you are an externalist).
2. Consciousness. If one asked a robot why it was carrying an umbrella today, and it said it thought it was going to rain and it did not want to get wet, it would seem reasonable to conclude that the robot had a belief, which implies semantic contents to its "thoughts". But would it be conscious? In other words, would it have what Block called access-consciousness? Would the robot be aware of its beliefs in the same way we are?
Whether, and if so how, consciousness could arise from computation is unknown. Possibly adding a Higher-Order theory of consciousness would address this.
But of course the how question applies to physical brains as well.
>Supporters of the CTM and many of the responders to CR agree semantics does not arise solely from syntax (which only preserves it)<
I'm not sure I agree with this.
I guess it depends on what you mean by syntax. In these debates it's usually taken to mean anything formal or mechanical, so it would probably include causal interactions.
In particular, I'm not convinced that interaction with the outside world is necessary. It could be interaction with an entirely virtual world, for instance, in which the entire closed system (a mind and its virtual environment) would have no interaction with an external reality but there would be instances of meaning within that system.
This is a challenging discussion since it depends on how one interprets syntax, semantics, and meaning.
I am taking syntax to refer to the rules about how the signs of a language are put together, and semantics to refer to how the sentences formed by the syntactic rules and the signs acquire meaning.
There are different ways to define meaning: one approach from philosophy of language is that meaning is the conditions which must pertain to make a sentence true.
From there, it seems reasonable to argue that meaning requires aboutness (to determine the conditions of truth), usually called "intentionality" by philosophers. In the same way that mental contents can be characterized by intentionality, one might argue somehow the CR system must acquire intentionality if the CR is to be said to understand the meaning of its output sentence.
Intentionality and meaning have been linked to causal connections by some theories of mind and language. So the CR causally interacting with your virtual world would understand meaning in terms of the entities in that virtual world. For example, that is what meaning would be to an inhabitant of the Matrix in that movie. Putnam wrote a paper on Brains in a Vat which addresses a similar theme.
(To avoid the behaviorism label when discussing interaction, one could add that the CR system involves internal states which also enter the causal chains).
Now the above is a gross simplification; it also ignores other approaches to meaning and intentionality. My point is that if you subscribed to the above approaches, you might think it reasonable that the right CR room algorithm, in interaction with some real or virtual world, could be said to understand sentences about that world in some sense.
But even if you accepted that, you might still argue that the CR does not understand the sentences the way people do because it is not aware of (or conscious of) its understanding.
That is why I think semantics and consciousness are separate issues.
For me and I suspect many people, it is seeing how the CR system would be aware of its understanding that is the more puzzling of the two issues. Of course, that is just another way of talking about the explanatory gap issue of consciousness which applies to any physical system.
All in all, an excellent summary of the issues, brucexs.
I agree that semantics and consciousness are separate issues. Nevertheless, when pressed with the "virtual mind" reply, Searle's only response is to assert once more "you can't get semantics from syntax", and Massimo has echoed this.
As for the explanatory gap, I'm not sure there is one, as I suspect that the concept of "real consciousness" is incoherent.
If we allow that we might in principle have a computer which operates analogously and behaves precisely like a human, then it has beliefs (in so far as it has representational states analogous to conscious beliefs) and it is vulnerable to illusions.
This entity will insist that it is conscious, just like a real human would. Since this behaviour is not surprising, and can be understood in terms of supervenience on simple computations, you might say that we more or less understand why it claims to be and indeed believes that it is conscious. There is no mystery there.
However, is it not possible that we are just like this system? That we only believe we are conscious? That consciousness is an illusion? Since all of our beliefs and behaviour would be the same either way, there is genuinely no way to be sure.
But since we can in principle understand "simulated consciousness", Occam's razor would suggest that we discard the unsubstantiated and incoherent "real consciousness" hypothesis altogether.
Now, rather than saying that we are not conscious, I'm saying that the idea of "real consciousness" as distinct from "simulated consciousness" is nonsense. The two are the same thing.
I elaborate on this theme on my blog:
http://disagreeableme.blogspot.co.uk/2012/12/strong-ai-illusion-of-real-consciousness.html
I've not kept up with the whole discussion, but given the discussion of Turing machines and computational theories of mind, I thought you might be interested in this paper.
It shows that Turing saw the Church/Turing hypothesis as a limitation of what human computers acting algorithmically could do, and NOT as a limitation of what conceivable (but possibly not buildable) computers (O-machines) could do.
I had not been aware of this twist.
http://www.alanturing.net/turing_archive/pages/pub/turing1/turing1.pdf
A note on conscious code (re: the "computational mind"):
In my view such code will at least incorporate principles of reflection and introspection in programming language theory, a subject that began with Brian Smith's thesis.
Procedural reflection in programming languages
Also, any "computational theory" for a physical system that does not incorporate a truly random element, I would dismiss.
As we've discussed in the past, I don't think that the concepts of "code" or "programming languages" are particularly relevant when discussing the nature of reality or consciousness.
However I agree that a conscious mind would need something analogous to reflection and introspection - it would need to be able to sense stuff about its own state.
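A toy sketch of what "sensing stuff about its own state" might look like in program terms (my own cartoon, not an implementation of Smith's procedural reflection):

    # Cartoon of introspection: an object that reads and reports on its own state.
    class Agent:
        def __init__(self):
            self.goal = "answer the question"
            self.confidence = 0.4

        def introspect(self):
            # The agent senses its own attributes rather than the outside world.
            report = dict(vars(self))
            report["uncertain"] = self.confidence < 0.5
            return report

    print(Agent().introspect())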
>Also, any "computational theory" for a physical system that does not incorporate a truly random element, I would dismiss.<
Because in our world physics has a random element? There could be other more deterministic universes, so I wouldn't say it's evidently an absolute necessity. In any case you may get what you need from pseudorandom numbers - there's no evidence that truly random numbers will let you do anything that a sufficiently unpredictable pseudorandom number generator would not.
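For instance, a few lines of entirely deterministic arithmetic already produce a stream that looks random to casual inspection (a textbook linear congruential generator; the constants are the widely used Numerical Recipes values):

    # A deterministic pseudorandom generator: no true randomness anywhere.
    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        state = seed
        while True:
            state = (a * state + c) % m
            yield state / m   # a float in [0, 1)

    gen = lcg(seed=42)
    print([round(next(gen), 3) for _ in range(5)])

Whether that suffices for whatever role randomness plays in a brain is exactly the open question, but nothing in the output betrays the absence of "true" randomness.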
"there's no evidence that truly random numbers will let you do anything that a sufficiently unpredictable pseudorandom number generator would not"
Since we can get quantum random numbers whenever we need them, why not use them? Why take the chance?
Sure, use quantum randomness if you can; I'm just making the point that it may not be necessary.
We would agree that whoever composed the book, which contains the correct answers in Chinese, knows the process of correctly answering the input symbols with the output Chinese symbols.
If we put this guy (the one who composed the answering book), and no book at all, in the room, the effect would be the same for the outside audience: they would still think the room 'knows' Chinese. But this time we wouldn't doubt that there's a sentient being who really understands Chinese. Then, what difference does it make whether he answers the question himself or puts all his knowledge into a book to be used by someone who doesn't know the language? We wouldn't say that the book is sentient, naturally, although it produces a result which only a sentient being is capable of conceiving.
But what if the first guy (the one that doesn't know Chinese) is capable of knowing Chinese?
Perhaps what's needed to build a sentient machine is just enough 'space' (for memory) to make it reflect, to get acquainted with its own computational process, although I'm not sure if this is doable or easily doable.
But from the little I know of this subject I'd advance that we're trying to build an excessively altruistic machine, a machine concerned solely with human problems. Give it the ability to evaluate its own needs (a bit of egoism), and maybe it will pass the Turing test as a dog or a monkey would.
I don't believe in consciousness without senses, without a centralized evaluation of the senses, and without the ability to evaluate one's own needs (in old Greek philosophical terms, without the ability life forms have of judging what is good or bad for themselves; in short, without being able to feel pleasure - which is, ultimately, the built-in knowledge, Epicurus and Epictetus say, that any animal comes to life with, and so this might (or must, I dare say) be the building block of the remaining knowledge it acquires throughout life).
I see a lack of considerations like those above in what little I know of the efforts to build sentient machines. Please, let me know if I'm wrong.
>We would agree that who composed the book, in which are the correct answers in Chinese, knows the process of correctly answering the input with the output Chinese symbols.<
It's not strictly necessary that anyone involved in building the system understands Chinese.
The book could be composed by some dumb process, such as evolution by natural selection, or perhaps the system has been designed to teach itself Chinese through exposure to Chinese media. The human brain is not born understanding Chinese but has no problem acquiring this ability by exposure to it as a child.
>Then, what difference makes if he answers himself the question or if he puts all his knowledge in a book to be used by whom doesn't know this language?<
Because the book is not just a record of knowledge. It doesn't contain an exhaustive list of the correct responses to any input. Instead, it contains an incredibly complex algorithm that precisely corresponds to everything that a human mind can do, and answering a question involves performing computations that are analogous to what a human brain does as it tries to form an answer.
To pass the Turing test, it would need to be able to remember, learn, adapt just like a human. It will inevitably accumulate knowledge that its creators did not possess.
If knowledge is a loaded term, let's use "information" instead.
So if the guy in the room doesn't have this information, and the creator of the system doesn't have this information, then what does? It can only be that the whole system has the information, that the system as a whole can learn even though the guy learns nothing (except perhaps how to manipulate the symbols a bit more efficiently!).
And if this is so, I don't know why it seems so strange that the system as a whole might be conscious.
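Here is a crude toy of that point (my own sketch, obviously nothing like a rule book that could pass the Turing test): the rule-follower applies the same fixed procedure forever, yet the room as a whole accumulates information he was never given and does not possess:

    # Crude toy of the systems reply: the state belongs to the "room", not to the
    # rule-follower, who only ever matches patterns and copies symbols.
    memory = {}

    def room_step(symbols):
        if symbols.startswith("REMEMBER "):
            key, value = symbols[len("REMEMBER "):].split("=", 1)
            memory[key] = value
            return "OK"
        if symbols.startswith("RECALL "):
            return memory.get(symbols[len("RECALL "):], "UNKNOWN")
        return "UNKNOWN"

    print(room_step("REMEMBER longest_river=Yangtze"))   # -> OK
    print(room_step("RECALL longest_river"))             # -> Yangtze

The information about the Yangtze ends up in the system's state, not in the head of whoever executes the rules.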
Disagreeable,
As you pointed out somewhere in your comments on this post, I believe we agree that for a machine to be taken as conscious it seems necessary that it have senses to put it in contact with some kind of world, virtual or not. This point, to me paramount, is that thinking can't exist apart from sensing a world and from sensing itself, and that the mind is most probably a sense organ, a kind of sum of the remaining ones, the ones directed to its outside.
In short, no matter how many thought experiments we conceive, if we don't insert sensing into them, thinking will perhaps never make sense: it's possible that our idea of what thinking is is completely wrong.
We conceptualize by identifying and comparing individuals in the world, but we likely become trapped inside conceptualization by presupposing things to be almost incommunicable - an assumption that counters another, which states that since we take all those conceptualized things to share the same world, they necessarily link to one another, necessarily have connections, and so can be conceptualized one through another.
>I believe we agree that for a machine to be taken as conscious it seems to be necessary that it should have senses to put it in contact with any kind of world, virtual or not.<
I'm actually not convinced of this. I don't think consciousness is reduced when you are put in a sensory deprivation tank, for instance, or if you suffered the kind of brain or spinal cord injury that would completely cut you off from the outside world.
I do think it's likely that a *human* mind would probably go insane before too long if it had no sensory input, but I'm not sure that the same is necessarily true of minds in general.
Isn't that just what happens in a coma? Medicine can detect cerebral activity, for sure, and that might be thought, I also concede. But what kind of thinking? Coherent? A coherent pleasure? A continuous nightmare? I guess that aside from the economic considerations of keeping people alive in this state (especially if they need machines) there is also a row of worries about, first, the total absence of consciousness, and second, the quality of 'life' those individuals in fact experience. This leads me to wonder whether brain activity is synonymous with thinking. Perhaps my poor knowledge in this field is too outdated, I mean: are we able to tell beyond doubt that any mind with no senses is still thinking and, if so, thinking coherently and able to get any pleasure from that circumstance? And what about living minds in bodies that have never sensed the world - I don't even know if that exists, but if so (another thought experiment) - are we able to infer their thinking/consciousness or just their cerebral activity?
In the case of machines, I believe that the very fact that we feed them information can be taken as a way of giving them a kind of sense. And in this circumstance, as I said before, I guess we are unable either to detect whether some of them have already reached consciousness, or to know whether this kind of consciousness - constantly perfectible up to the 'super AI' - deserves a new classification in the universe of what is conscious. All this because we indeed don't know what consciousness is (let's see if any of its theoreticians agree).
I think a coma is more than loss of sensory perception, Waldemar. Don't some people say that there may sometimes be sensory perception during a coma?
I think there's a distinction between consciousness and awareness of your surroundings. I can imagine being entirely without sensory perception. I don't imagine it would be pleasant (unless perhaps it was voluntary and I could end it any time), but I don't imagine I would immediately lose consciousness.
Brain activity is not synonymous with thinking. People in comas are probably not thinking. I'm not making any claims about people with specific medical conditions, I'm just saying that it's not obvious to me that sensory perception is a requirement for consciousness.
>And what about living minds in bodies that never have sensed the world<
I would guess those brains would not develop normally, and so they may not be conscious. Again, not making claims about specific medical conditions.
> All this because we indeed don't know what consciousness is<
Agreed, in so far as it's not a concept that is really defined. I suspect it's probably just a little incoherent.
Dear Mr Pigliucci,
I like your blog very much, and I often print posts to read them when I have more time.
However, since you do not have a formal "print" option, printing is problematic. The font size is pretty small (that is a more general problem, I think), and I can't print the exact number of pages to read.
I wonder if including a print option is too expensive for you.
Thank you very much,
Pablo Mira (from Argentina)
Select the article, then print the selection. You can adjust the size of the font in your print dialog.
http://justcreative.com/2008/01/23/tutorial-how-to-print-blog-articles-the-smart-way/
Disagreeable,
>1. The brain is a physical object which obeys the laws of physics
2. The laws of physics can in principle be simulated on a computer
3. Therefore a brain (and its environment) can in principle be simulated on a computer
4. Therefore a computer can in principle be made to behave like a brain.
5. Therefore the operation of a brain is computational.<
For any physically realizable system/object X, one can say:
1. X is a physical object which obeys the laws of physics
2. The laws of physics can in principle be simulated on a computer
3. Therefore X can in principle be simulated on a computer
4. Therefore a computer can in principle be made to behave like X.
5. Therefore the operation of X is computational. QED
What you are stating here is the Computational Theory of Everything (CTE). Of course, CTE implies CTM, trivially. However, CTE implies that minds (or brains) are not particularly computational, only as computational as everything else.
Sure, the CTE. I'm on board with that.
The thing is the CTE is not particularly controversial apart from where it applies to the mind. Nobody has any problem believing that Brownian motion is the result of dumb mechanical interaction. Most people have no problem with the idea that evolution has designed us mechanically.
However, most people do *uniquely* have a problem with the notion that our minds are the result of mechanical interactions.
The CTM is really about disabusing people of this false intuition that there is something especially weird going on with consciousness, that it is somehow unlike other physical systems.
Everybody,
The fact that computational methods are used to simulate X does not imply that X is particularly computational. Computers are readily available and computer graphics is a useful tool for selling questionable ideas.
It's not about whether something is *particularly* computational. It's about whether something is in principle realisable by mechanistic computational means.
Thought experiment:
We meet a race of friendly, intelligent, benevolent aliens.
We discover that their brains are physically quite different from ours, but their behaviour is very comparable.
If the Chinese Room thought experiment is correct, then we are conscious because of some mysterious physical property of neurons, not because of the computations those neurons are performing. Since their brains are not made of anything like neurons, it is possible (and perhaps likely) that their brains are merely doing symbolic shuffling as in Searle's thought experiment.
It is also possible they are genuinely conscious like us, but since their brains are so physically different it's hard to see how this view is supported. In any case, there would appear to be absolutely no way to distinguish one from the other.
(Also, from their point of view, they have no way to know whether we are conscious.)
So, would you just give them the benefit of the doubt, or would you have a hunch one way or the other? How would you feel if they sincerely doubted your consciousness on the basis that your brain was a mesh of neurons instead of an "ionic relay network"?
But if we allow the possibility that "ionic relay networks" might support consciousness, why not also give the benefit of the doubt to electronic intelligences? Maybe Searle in his Chinese Room is not producing a conscious mind (although I don't accept that for a minute), but perhaps it just so happens that electronic substrates can, just like neural and ionic substrates.
This situation seems crazy to me. The CTM is a much more satisfactory outlook. As long as their thought processes are computationally analogous to ours, we can be confident they are conscious.
Waldemar,
> I mean that all the meaning attributed to music ... is arbitrary, although some can be traced (perhaps) back to our bodies' rhythms etc <
Perhaps, perhaps not. That’s why I switched to an example from the visual arts, and one that is not taken from abstract art (Van Gogh’s Starry Night).
> What I'm saying is that there is a set of 'natural' meanings that springs from any syntax. <
Not until you take much more than the syntax itself into account. There is no way you understand Van Gogh’s painting by simply looking at the structure of the painting.
> we're able to interpret the world through pointing what similarities (and dissimilarities) its objects share and this is enough to start 'decoding' anything. <
Okay, not sure what this tells me about syntax vs semantics, or even more to the point, about how the CTM explains qualitative thinking such as consciousness.
> Perhaps what's needed to build a sentient machine is just enough 'space' (for memory) to make it reflect, to get acquainted of its own computational process <
But that is precisely the point: there doesn’t seem a way of doing this by simply programming symbol manipulation rules in the machine.
Filippo,
thanks, couldn’t have put your points any better or any more succinctly.
Disagreeable,
> I really don't think he's saying that the brain is not fundamentally computational in the sense that a Turing machine could not perform the same functions, given an arbitrary amount of memory and time. <
Actually, that is precisely what Fodor is saying, and part of it seems to be that a lot of people have an inflated idea of what Church-Turing actually implies. As I promised several times now, more on this in the next post...
> Simulating computation X really is reproducing computation X <
No. You can simulate the process of photosynthesis, but you ain’t getting no sugar out of it.
> If the CMT is true, then the mind is a computation and so simulating the mind really is reproducing the mind. <
That smells to me like a gigantic case of begging the question. Yes, of course that would be the case, but I’m saying it’s not, or at the least that we don’t have particularly good reasons to think it is.
I've added emphasis in my comment:
>> Simulating *computation* X really is reproducing *computation* X <<
No. You can simulate the process of photosynthesis, but you ain’t getting no sugar out of it.<
Photosynthesis isn't a computation. It's a physical process with physical consequences (although of course it can be modelled computationally).
On the other hand, consciousness seems to have no physical consequences, and if this is true it is possible that it is an abstract entity such as a computation. The CMT holds that it is indeed a computation. Therefore, if the CMT is true, simulating consciousness really is reproducing consciousness.
>That smells to me like a gigantic case of begging the question.<
Note that I said "if". IF the CMT is true. You're begging the question by asserting that you can't reproduce X by simulating X. I'm just pointing out that this assertion does not hold for abstract entities such as computations.
"Photosynthesis isn't a computation."
I don't know about that, Disagreeable.
In the (Google) news: "Liver-like cells made with 3D printer" "3-D Printing Could Build New Bone" "If You Think 3D Printing Is Disruptive, Wait for 4D" "With a 3-D printer, Princeton University researchers are growing synthetic ears" "A 3-D Printer for Human Embryonic Stem Cells" ...
Nature is a 3D (and 4D) printer!
Well, I suppose it is a computation if all of reality is a computation. However it is also physical, which makes it a little different from, say, factorizing a large number.
Massimo,
I wasn't even daring to touch the point that we are able to appreciate Van Gogh as just structure, mainly because this structure is figurative and I'm obliged to link it to the world. My point, sorry for insisting, is that the structure is fundamental for this understanding, perhaps more, much more, than any current or past art theory assumes: the contrast of the colors, the irregularity of lines, all this pointing to what we suppose are the painter's emotions while working on the painting etc. And it is precisely this structure that authorizes us to make whatever associations we think are suitable to every work of art. I used the musical structure here, beyond personal reasons, because it's less compromised by symbols or icons that point to things outside it. I just wanted to show that the view according to which semantics contains syntax and syntax contains semantics can be seminal to the understanding of knowledge in general (and I'd say that America - and the world - should go back to Peirce's semiotics if it intends to make some progress in this and a lot of other fields).
And yes, it's possible that all this doesn't imply consciousness, but it's also possible that the experiments are led by feeble presuppositions. I don't know, the matter is too deep.
Notwithstanding, I finally invoke we at least reflect on our idea of consciousness, which seems that soon will become useless to infer if any of us in this blog is a real person. I wrote on this here, yesterday, and I repeat: perhaps we should lessen our expectations on what as consciousnesses we are. This means that, yes, I don't feel anything special in being - or in experiencing to be - a sentient creature, a sentiment I'm sure I share with old guys like Thales and Anaxagoras and maybe with Parmenides - with no shame.
Disagreeable,
> The system *clearly* has beliefs. Any other position is absolutely untenable. Even Searle acknowledges that the system can have knowledge he doesn't have himself. <
Since when are knowledge and belief the same thing? And at any rate, who cares? We are talking about human qualitative thinking, i.e., consciousness (necessary for understanding), not mere knowledge.
> let's assume we're talking about a philosophical zombie <
Let’s not, I think the idea of philosophical zombies is one of the most inane ones ever to enter discussions of consciousness.
> The blogspot system currently believes you are Massimo Pigliucci <
Much depends on what you mean by “belief,” a spectacular case — if you don’t mind — of equivocation. Does blogspot *understand* who I am and what I’m doing? If not, the rest is irrelevant, since nobody is arguing that computers don’t “know” things in the mechanical sense of the term.
> There's nothing mysterious going on here. <
There is nothing mysterious going on anywhere. I find it a bit irritating that supporters of the CTM imply, and sometimes overtly state, that any view different from theirs somehow wanders into mysticism or supernaturalism.
> So why is it that we can see that the system knows things that Searle doesn't know <
We know exactly why: the knowledge of the system is smuggled in by way of the conscious being who wrote the rule book that Searle is using to connect inputs and outputs while inside the room.
> Any calculation that can be carried out by a Turing Machine is computational, and this would include connectionist models and presumably all the laws of physics. <
This is a much more controversial — probably false — notion than you seem to think. Again, ’til next post...
> So, a Van Gogh painting can be thought of as a very special arrangement of paint on a canvas. <
Yes, which tells you not very much about Starry Night, unless you know a hell of a lot more than just that.
> meaningful semantics can be thought of as a very special arrangement of syntactical symbols within a mind. Not all syntax has interesting semantics <
Right, right. Now, how / who decides what’s “interesting”? How do we get that just from the symbols’ arrangement?
> but beautiful paintings, at a low level, are still just paint. <
Which, predictably at this point (sorry, I don’t mean to be disrespectful!), entirely misses the point. Maybe that’s where we should concentrate our efforts.
> It's not strictly necessary that anyone involved in building the system understands Chinese. <
It is if you want “understanding.” If you just want translations, no, google translate will do.
> If knowledge is a loaded term, let's use "information" instead <
I’m afraid that’s just as loaded and hard to wrap one’s mind around.
> The CTM is really about disabusing people of this false intuition that there is something especially weird going on with consciousness, that it is somehow unlike other physical systems. <
It *is* unlike other physical systems, just like living things are different from non-living things. That doesn’t mean there is something “weird” or magical going on. Just different.
> It's not about whether something is *particularly* computational. It's about whether something is in principle realisable by mechanistic computational means. <
I’m afraid you missed Filippo’s point here.
>since nobody is arguing that computers don’t “know” things in the mechanical sense of the term.<
I know, and this is why I'm building an argument on the uncontroversial foundation of this mechanical sort of knowing.
I'm not saying that simple computer systems understand anything, or are consciously aware of anything. I'm saying they have representations of propositions which may be true or false, justified or not. In this sense, it is surely uncontroversial that they can have mechanical beliefs, knowledge or information, however you want to characterise it.
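To make that concrete, here is a minimal sketch of the kind of thing I mean (Python, purely illustrative; the propositions and their labels are invented for the example). The system stores representations that can be true or false and acts on them, and nothing in it understands anything:

# A toy "belief store": representations of propositions that the system can
# query and act on. It is just a table plus a lookup; no understanding involved.
beliefs = {
    "commenter_is_massimo_pigliucci": True,
    "comment_was_posted_today": False,
}

def holds(proposition):
    # The system "believes" a proposition if its stored value is True;
    # unknown propositions default to not believed.
    return beliefs.get(proposition, False)

# The system acts on its representations with no awareness of doing so.
if holds("commenter_is_massimo_pigliucci"):
    print("Routing a notification to the blog author.")

Whether this deserves the word "belief" is of course exactly what is at issue; the sketch only exhibits the uncontroversial, mechanical sense of the term.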
Now, with that cleared up, I want to make an argument against the Chinese room.
Let's suppose that the guy in the room doesn't know anything about China. Let's suppose that the programmers who designed the system know nothing about China. However, let's suppose that the system learned about China in much the same way that a baby born in China does - from interacting with Chinese people and absorbing Chinese culture. If it makes it any easier to imagine this, you can imagine that the system started off as an electronic robot but was later dumped to a book for John Searle to work from.
Now, even though the designers of the system know nothing about China, and even though Searle knows nothing about China, the system in John Searle's head knows just about everything there is to know about China.
But if Searle is the only person in the room, then what's doing the knowing?
The answer, obviously, is that the system of rules has this mechanical knowledge. But if we can attribute this to the system of rules in his head but not to Searle himself, why is it so obviously ridiculous to draw much the same sort of conclusion about other phenomena, such as consciousness?
> I find it a bit irritating that supporters of the CTM imply, and sometimes overtly state, that any view different from theirs somehow wanders into mysticism or supernaturalism.<
I didn't say "mystical" or "supernatural", I said "mysterious". If the CTM is false, then consciousness is surely mysterious, because there is nothing even approaching an account of what it might be otherwise. A phenomenon that has no explanation is pretty much the definition of a mystery, no?
>> Any calculation that can be carried out by a Turing Machine is computational, and this would include connectionist models and presumably all the laws of physics. <
This is a much more controversial — probably false — notion than you seem to think. <
I'm aware that this is controversial. This is why I didn't quite assert it but used the word "presumably". I'm not quite committed to this view. Looking forward to your next post (but very much hope you don't give up responding to this one for a while!)
>Now, how / who decides what’s “interesting”? How do we get that just from the symbols’ arrangement?<
Who decides what painting is beautiful or worthwhile?
>It *is* unlike other physical systems, just like living things are different from not living things. That doesn’t mean there is something “weird” or magical going on. Just different.<
Life is unlike other physical systems at a high level only. At a chemical or atomic level, it's pretty much the same. We don't need any new physics to explain it.
On the other hand, for consciousness to be something other than computational, it seems some new understanding of physics would be needed, something to make neurons fundamentally different to other computing substrates. If this is true, it would be weird, but then that's hardly a showstopper. Quantum mechanics is weird. There's probably a lot of weird physics that has not yet been uncovered.
Though I do actually think that consciousness is weird or mysterious if the CTM is false, please don't take this to be derogatory.
>I’m afraid you missed Filippo’s point here.<
Perhaps he will be so kind as to explain why.
>I think the idea of philosophical zombies is one of the most inane ones ever to enter discussions of consciousness.<
I couldn't agree more!
But don't you see that the Chinese Room, if Searle is right, is a philosophical zombie?
It has all the behaviour of a person, all the (functional, not phenomenal) understanding of a person, everything except for consciousness.
What's the difference between this concept and a philosophical zombie? There is none! If the idea of a philosophical zombie is inane, then you must reject the Chinese Room.
>Let’s not, I think the idea of philosophical zombies is one of the most inane ones ever to enter discussions of consciousness.<
I'm always puzzled by your disdain for Chalmers, because it seems like there is about one photon of daylight between you and the zombie hypothesis.
Chalmers says that a living, physical, human body is not necessarily conscious - and you disagree. That seems to be the major disagreement.
Otherwise, you both seem to endorse the metaphysical possibility of creatures indistinguishable from humans in all outward behaviour, but with no mental life. These are not quite p-zombies, but damned close - call them "weak p-zombies". An example might be a small computer that mimics the I/O behaviour of your brain but runs on silicon, hooked up to your CNS and controlling your body.
For my part I think that both strong and weak p-zombies are necessarily impossible. But I like the zombie concept because it crystallizes everybody's confusions about consciousness into a nice target for criticism.
Disagreeable,
> If the Chinese Room thought experiment is correct, then we are conscious because of some mysterious physical property of neurons, not because of the computations those neurons are performing. Since their brains are not made of anything like neurons, it is possible (and perhaps likely) that their brains are merely doing symbolic shuffling as in Searle's thought experiment. <
Here we go again with the “mysterious” charge. No, it would be an *empirical* question how the aliens in question could be conscious. Very likely they’ll have similar physico-chemistry to ours (because life itself is likely limited in that respect), so whatever biological explanation applies to us would probably apply to them.
> How would you feel if they sincerely doubted your consciousness on the basis that your brain was a mesh of neurons instead of an "ionic relay network"? <
Who cares how I would feel? Is this about not hurting the sensitivity of the Chinese room? Whether or not the aliens are conscious, and if so, how, is an empirical matter, not to be settled by thought experiment.
> As long as their thought processes are computationally analogous to ours, we can be confident they are conscious. <
You do understand that you are smuggling massive assumptions in that apparently innocent phrase, “computationally analogous,” right?
brucexs,
very good points. When you say:
> even if you accepted that, you might still argue that the CR does not understand the sentences the way people do because it is not aware of (or conscious of) its understanding. <
You are getting at some of the things that really bother me about these discussions. I don’t think it is coherent to say that there can be understanding without consciousness. Which is why I maintain that the Chinese room “knows” (in the sense of “can correctly act upon”) Chinese, but does not understand it.
>No, it would be an *empirical* question how the aliens in question could be conscious.<
Really? Questions of consciousness really seem to me not to be empirical at all. If we can postulate that a computer could give every empirical outward indication of consciousness and yet not actually be conscious, then that rather suggests that there is no way to observe consciousness directly or indirectly.
>Very likely they’ll have similar physico-chemistry to ours (because life itself is likely limited in that respect), so whatever biological explanation applies to us would probably apply to them.<
Well, in the thought experiment I stipulated that their brains are very different physically. They might have similar metabolisms and genetic chemistry, but that doesn't mean that high level systems are made of the same materials. Skeletons are made of completely different materials in different portions of the animal kingdom, for example.
>Who cares how I would feel?<
Agreed, that doesn't establish anything one way or another. I'm just encouraging you to put yourself in such a position in an attempt to think about your intuitions about the problem from another angle.
>You do understand that you are smuggling massive assumptions in that apparently innocent phrase, “computationally analogous,” right?<
Hey, it's my thought experiment, I can assume what I like!
@Massimo:
ReplyDelete"Waldermar,
> I mean that all the meaning attributed to music ... is arbitrary, although some can be traced (perhaps) back to our bodies' rhythms etc <
Perhaps, perhaps not. That’s why I switched to an example from the visual arts, and one that is not taken from abstract art (Van Gogh’s Starry Night)."
So what you're saying, Massimo, is you don't know where audible artistry came from but otherwise you have a good idea where some visual artistry came from, except not if it required the viewer to draw some meaning from it. So that apparently leaves visual art as "plain spoken" and decorative, but you've then left abstractions in art in general as what? A confusing way of satisfactorily wasting time? Weird.
As an aside, I seem to have some sort of dyslexia when it comes to acronyms! I keep calling the CTM the CMT, and I can't seem to stop doing it.
Apologies!
I had a similar problem on a previous comment thread when I repeatedly referred to Fermat's Last Theorem as the FMT. I must just like *MT!
Disagreeable,
well, you managed to post ten times (!!) in the time it took me to go to the gym. I'm sorry, as you noted, I pay a lot of attention to comments on this blog, and that's because I like being challenged and I truly learn a lot from you guys. But I simply have no time to read, let alone respond, to this flow of comments. I wish this were my full time job, but it ain't. Indeed, I'm not paid at all! Cheers.
Yeah, I'm just a bit frustrated with the online format.
I'll try to control myself and bow out to let you concentrate on interacting with others who might have other points to make.
Following these blogger threads is deadly, too. You are a saint compared to me as far as following arguments in comments.
You really astonish me, Massimo, in every sense. Perhaps now I can be sure you indeed are not an incredible algorithm: no algorithm goes to the gym. :)
The real topic of the post is that it is not the job of philosophy to produce theories, at least in the sense of scientific ones. There is not much disagreement with that, but it was perhaps to be expected that the discussion would immediately turn to the example.
I think that DisagreeableMe makes a lot of sense (but yes, way too many comments).
The Chinese Room is a bad thought experiment because it is really just begging the question, although it achieves its perceived force mainly through nudging the reader into the fallacy of composition and the argument from personal incredulity. Somebody using the Chinese Room analogy tries to argue that a set of parts following mechanical rules cannot understand Chinese by smuggling into the discussion the assumption that a set of parts following mechanical rules cannot understand anything. I fear I cannot see as clearly as Searle that the conclusion of the room as a whole being able to understand Chinese is absurd.
If we do not assume that something magical is going on in our heads then our brains are simply molecules following the physical rules of the universe. It follows directly from this that one could in principle build a machine of parts mechanically following some rules that has exactly the same capabilities as a human mind.
Indeed the idea is proved by the observation that such a machine already exists: our brain. The only tricky question here is whether the assumption of nothing magical going on in our heads is correct, but apart from the fact that there is very convincing empirical evidence backing that assumption it is also one that is (unless I misunderstand something very badly) shared by the host of this blog.
At most there seems to be an argument here that we cannot build a machine with human-like capabilities because it would run with a processor etc instead of neurons, and you need neurons to make a human mind. But that is a very weak claim; it would simply mean that we need to build the thinking machine in a different way than we currently build desktop computers.
Finally, I am deliberately using the construct "the same capabilities as a human mind" because I entertain strong doubts that the concept "consciousness" is at all useful. Does it even have any meaning? To the best of my understanding, I am merely a thinking machine that can have thoughts about itself in relation to other items. Program a computer to model its own position relative to other things in the same room and you have the exact same situation minus a lot of complexity. Really a lot of complexity, sure, but I fail to see a qualitative difference.
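For what it's worth, a toy version of that self-modelling idea looks like this (Python; the room contents and coordinates are invented for illustration). The program's model of the room contains an entry for itself, so it can report facts about its own situation and not only about other objects:

import math

# A minimal self-model: the world model includes an entry for the modeller.
room = {
    "door":   (0.0, 0.0),
    "window": (4.0, 3.0),
    "self":   (1.0, 1.0),   # the program's representation of its own position
}

def distance_from_self(thing):
    # Answer a question about the modeller's own relation to another object.
    sx, sy = room["self"]
    tx, ty = room[thing]
    return math.hypot(tx - sx, ty - sy)

for thing in ("door", "window"):
    print(f"I am {distance_from_self(thing):.1f} m from the {thing}.")

Whether piling complexity onto something like this ever amounts to consciousness is, of course, the disputed question.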
I find it telling, and rather amusing, that both sides of this discussion are prone to accuse the other of magical thinking: viz. Massimo's accusation that strong AI proponents think semantic content inexplicably emerges from syntactic symbol manipulation, and the pro-AI crowd's uneasiness with admitting that behavioural/functional issues are not the end of the matter with regards to consciousness, on the grounds that it makes things 'mysterious'.
With regard to the former, I think that if one cognizes about it carefully enough, it becomes rather clear that Searle's argument is unsuccessful insofar as it is construed as a REFUTATION of CTM, since he makes an invalid inference from the first person nature of his experience to claims about his own sub-conscious cognitive architecture; namely, that it is not formal symbol manipulation, even though it very well could be. But this doesn't mean it has no merit whatsoever; in fact, I would say it's extremely good at forcing one to clarify one’s intuitions about the distinctions and/or interrelations between intelligence, intentionality, and consciousness.
For my part, I was under the impression that the Lowenheim-Skolem Theorem undermines the idea that semantics is reducible to syntax: http://plato.stanford.edu/entries/computational-mind/#DoeSynExpSem.
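(For readers who would rather not chase the link, the model-theoretic point being gestured at is roughly the Löwenheim-Skolem result; I am stating it loosely here, as a reader's gloss rather than the SEP's own formulation:

\[ T \text{ a countable first-order theory with an infinite model} \;\Longrightarrow\; T \text{ has a model of every infinite cardinality.} \]

Models of different cardinalities cannot be isomorphic, so the same syntactically specified theory is satisfied by wildly different interpretations; that is the sense in which syntax alone is said to underdetermine semantics, and the fact Putnam pressed into service.)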
I haven’t looked into it myself, but even if that isn’t the case, I do think the Chinese Room is an excellent gadfly insofar as it forces computationalists to recognize that to provide an account of intelligence is not, ipso facto, to provide an account of intentionality, (which I take to be what the issue is all about with regards to whether or not the computer/room truly ‘understands’).
With regards to ‘understanding’ so construed, computationalists do indeed owe a detailed theoretical account of HOW syntactic symbol shuffling (alone or in conjunction with causal interaction with the world or whatever) generates semantic content. I take it this is what Massimo means by claiming that AI proponents are guilty in appealing to magic to make their case, since even though it MIGHT be true that ‘understanding’ is the result of formal symbol manipulation, there is as of yet no full blooded account of HOW this could be so; and given that the whole point of computers, since they are at bottom formal systems, is to construct devices defined entirely in terms of syntactic rules without appealing to propositional content of any sort, the idea that syntactic manipulations could grow in complexity without any difference in kind emerging (the semantics) is not prima facie absurd. One can, after all, create formal systems with no interpretations (and hence no semantics) whatsoever, such that one is left with a bunch of empty string manipulation rules.
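A concrete instance of such empty string manipulation, for anyone who wants one (a Python sketch; the rules are a made-up toy, loosely in the spirit of Hofstadter's MU puzzle). The program rewrites strings according to purely formal rules, and there is nothing whatsoever that the strings are about:

# Purely syntactic string rewriting: the rules mention only the shapes of the
# symbols, and the system carries no interpretation of what the strings mean.
def step(s):
    results = set()
    if s.endswith("I"):
        results.add(s + "U")                  # rule 1: append U after a trailing I
    if s.startswith("M"):
        results.add("M" + s[1:] * 2)          # rule 2: double everything after the leading M
    results.add(s.replace("III", "U", 1))     # rule 3: replace the first III with U (no-op if absent)
    return results

strings = {"MI"}
for _ in range(3):                            # apply the rules a few times
    strings = set().union(*(step(s) for s in strings))
print(sorted(strings))

Nothing prevents a system like this from growing arbitrarily complex; whether semantics could ever emerge from that growth is exactly the question being pressed here.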
On zombies: I’m quite often surprised at the vitriol these fine fellows engender within some people. Whether or not they are metaphysically possible, (and I do think arguing any such possibility purely from conceivability is dubious at best), I do think that, conceptually, they provide a perfect litmus test for the adequacy of theories of consciousness. If one’s theory doesn’t explain WHY zombies are impossible apart from asserting a primitive identity relation, then your theory is destined for the trash bin. Stevan Harnad does a good job of articulating this: http://cogprints.org/2130/1/dennett-chalmers.htm
Ian,
> I'm always puzzled by your disdain for Chalmers, because it seems like there is about one photon of daylight between you and the zombie hypothesis. <
I keep being puzzled by your puzzlement, particularly by your insistence that somehow Chalmers and I are espousing very close positions. His position is that of a dualist who rejects physicalism; mine is that of a biological naturalist who says not only that consciousness is firmly rooted in the physical brain, but that not all possible substrates will do — as explained a number of times in this post and the commentary therein. Could you shed some light on why you think this entails closeness to Chalmers?
Incidentally, I just finished writing my first technical anti-Chalmers paper, though it’s on mind uploading, not zombies. (Even more interestingly, he now styles himself a functionalist about philosophy of mind, one of the positions that he attacked when he proposed his zombie stuff.)
Waldemar,
> My point, sorry for insisting, is that the structure is fundamental for this understanding <
But I never denied that. I am however additionally maintaining that it is not sufficient.
So, if syntax is fundamental to the understanding of a structure to which it is applied, the fact that there is 'understanding' implies the work of some concepts, no matter how those concepts are represented. And if concepts occur, it's certain that a semantics is present too, at least that fundamental part of it which contains concepts like similar, near, opposition, etc. The problem here seems to be reaching agreement on whether those concepts are really fundamental, or whether there are others on which the former depend. If not, if those concepts are indeed fundamental, that entails that they are 'inside' the upper-level concepts. And this point is what matters to me (I'm not really undertaking a defense of conscious AI, but rather of the claim that such a conception of syntax makes sense): it is possible to trace back to any given sign a web of those syntax-sprung meanings linking every aspect of that sign's semantics. I see any semantics as a web of those meanings, which are the syntax itself. In short, syntax is already semantics, or must have one to be effectively applied. I repeat: how this is going to give us artificial consciousnesses is another task I can only 'wishfully think' about.
>Could you shed some light on why you think this entails closeness to Chalmers?
In essence, you chop up the conceptual landscape a bit differently, but you both accept the metaphysical possibility of weak p-zombies.
>...but that not all possible substrates will do...
I think this is where a lot of the confusion is. I agree that not all substrates will do - for "engineering reasons", so to speak. As Alex said, hydrogen just isn't going to work - you'll never get the behaviour out of it that could make it implement a brain.
You sometimes seem to be espousing a different position, though, which is that given that some substrate *does* replicate the I/O behaviour of a human brain to the CNS, that is no evidence of phenomenal consciousness, or very weak evidence.
If we met a species of intelligent aliens with brains running on silicon, and they claimed to be phenomenally conscious, should we be deeply skeptical about their claim to consciousness because they don't have our substrate?
If not, then I don't see any reason to be deeply skeptical about any other device that mimics a human brain's I/O behaviour (remember that this I/O behaviour includes the physical act of claiming to be phenomenally conscious).
Hi Ian,
With you so eloquently expressing the views I have myself there's no reason for me in the conversation anyway, even if I didn't need to atone for going overboard yesterday. Keep it up!
I'd love to hear a conversation between the two of you on the subject. It would be good to fill the void left by the hiatus of the podcast!
Alex,
> The Chinese Room is a bad thought experiment because it is really just begging the question <
I keep being baffled by how many people impute more than is reasonable to the Chinese Room, and in fact to pretty much all thought experiments. Thought experiments cannot demonstrate the reality or impossibility of anything. They are simply meant to bring out hidden or unspecified assumptions for further examination and discussion. In the case of the CR, the whole point is to flesh out the difference between symbol manipulation and understanding. You can still maintain that understanding somehow emerges from symbol manipulation, but now you owe us an explanation of how that actually works. That’s it.
> If we do not assume that something magical is going on in our heads then our brains are simply molecules following the physical rules of the universe. It follows directly from this that one could in principle build a machine of parts mechanically following some rules that has exactly the same capabilities as a human mind. <
And Searle doesn’t disagree. He explicitly says that human beings are biological machines. His point is, again, that those machines cannot just be doing the sort of symbol-shuffling computations that computers do. They must be doing something else. And they probably must be built out of a limited range of materials. As a biologist, this concept should be bread and butter for you. Take photosynthesis: yes, it can be done using more than one type of light-capturing molecule, but most molecules won’t do because they don’t capture light. Biological naturalism simply suggests that consciousness — qua physical phenomenon — is bound by similar constraints. No magic necessary (there goes the m-word again!).
> I entertain strong doubts that the concept "consciousness" is at all useful. Does it even have any meaning? <
I don’t understand consciousness skepticism. You were presumably conscious when you wrote those words. It simply means that you were having a qualitative experience of what it feels like to think and write down those words. What’s mysterious, unnecessary, illusory, etc. about that? And to claim — as many do — that one can do philosophy of mind or neurobiology without dealing with consciousness and how it arises is a pure and simple cop out.
Jolly,
> I would say it's extremely good at forcing one to clarify one’s intuitions about the distinctions and/or interrelations between intelligence, intentionality, and consciousness. <
Precisely. Particularly, as I said above, it forces into the open the need for an account of how semantic meaning originates from syntactic symbol manipulation.
> I was under the impression that the Lowenheim-Skolem Theorem undermines the idea that semantics is reducible to syntax <
Very good point. That article, interestingly, also says that most proponents of CTM themselves never argued for the supervenience of semantics on syntax — and it was Putnam who explained why that isn’t going to happen.
> it forces computationalists to recognize that to provide an account of intelligence is not, ipso facto, to provide an account of intentionality <
I would go even further: not only is there a distinction between intelligence and intentionality, but the CTM doesn’t even get you intelligence; it simply gets you information processing. Plants do that...
intelligence |inˈtelijəns|
noun
1 the ability to acquire and apply knowledge and skills:
Plants do that.
"those machines cannot just be doing the sort of symbol-shuffling computations that computers do"
Is this just what computers did in the past, or do now in 2013? Or do in 2025? Or will ever do?
(If a computer is defined as something that can't do what a human brain can do, maybe so.)
We might be able to make a robot that does what an ant does in 2013. What robot can we make in 2050? 2100?
http://www.nbcnews.com/technology/robotic-ants-provide-path-real-ant-brains-1C9132388
Sorry, I may have interpreted more into the thought experiment than its creator intended. Some people seem to use it that way though, only maybe not here.
There are really very different questions floating around: Is the mind computational? No idea. How do we get from symbol manipulation to understanding? No idea. Does the room understand Chinese? IMO yes. Is it reasonable to assume that a thinking machine could be built that can do all the things a human mind can do, including consciousness whatever it is? Again, yes.
But yes, the substrate is important. In a way that is trivial: if all you have is a tank full of hydrogen you cannot build a thinking machine, so it is clear that not any substrate will do for a mind. The idea that it positively has to be neurons and nothing else, however, appears implausible. It seems reasonable to assume that one could build a thinking machine from something that is not neurons but mimics their functionality, and we cannot rule out other possibilities.
> It simply means that you were having a qualitative experience of what it feels like to think and write down those words.
I am simply not entirely convinced that this is anything qualitatively different from what a computer can do, and the same for "understanding" and "intentionality". For comparison, go away from us along the tree of life. Whether you think it is in marmosets, in lizards or in snails, somewhere you will find a relative of us humans that you would not consider to have these attributes but that still has neurons. What has happened between our ancestors who were more or less at that stage of complexity and us? Well, really only more neurons and more complicated wiring.
So whatever consciousness, intentionality and understanding are, I would say that they all come in degrees, and that you may get a few % of what we have with comparatively simple systems.
Lest I be misunderstood as part of a "pro-AI crowd" that I do not consider myself to be part of, I merely think that it should ultimately somehow be possible to build hard AI. The whole brain uploading idea, on the other hand, is complete bunk. Simulating a brain is not going to produce a human mind. Even if the mind would turn out to be entirely computational, the body isn't, and a brain that is not in a human body will not "run" a human mind even if it "runs" a mind. (In fact I'd assume that a faithfully computing human brain without a physical body would immediately go insane because human brains are built to go with the body.)
(Temporarily breaking silence...)
Hi Alex,
>In fact I'd assume that a faithfully computing human brain without a physical body would immediately go insane because human brains are built to go with the body.<
Why so, if we give it a faithfully simulated virtual body and virtual environment to interact with?
"In fact I'd assume that a faithfully computing human brain without a physical body would immediately go insane because human brains are built to go with the body."
I agree, Alex. This is more or less what Damasio and his colleagues try to teach us: they couldn't conceive of a model of mind outside a body; there's no intellectual activity not shaped by the perceptions of the subject's surroundings as well as by his/its body's internal perceptions. And if so, one should at least think more carefully about what the role of the senses might be in the consciousness frame. Then, no matter whether a virtual or a biochemical body, the problem is the approach to what a mind in fact is (or should be): all attempts at creating artificial minds were (are?) directed at what we maybe can call a 'platonic mind'.
I have roughly the same problem with the CTM as with a previous post "http://rationallyspeaking.blogspot.ca/2013/04/why-problem-of-consciousness-wont-go.html" where the author wanted to prove that consciousness and the phenomenal lie beyond the "physical world". By physical world he meant "the description of the world that comes out of physics". This means that he discarded chemistry, cell biology and neurobiology in his description of consciousness. Excluding these sciences makes it hard to precisely define what you mean by experience or representation, and the artificial reduction of complexity makes it a bad starting point for philosophy in my opinion.
I must admit that I am not a specialist in the CTM and that I have not read Jerry Fodor's book, but is the CTM based on the best available account of neurobiology as we understand it nowadays? Simple neural circuits make for a relevant analogy with computation as it was meant in the days when the CTM originated. But extrapolating the workings of, let's say, long term potentiation to the working of higher order brain functions is a bridge too far. This is why I do not believe in the validity of "zombie arguments". The workings of neurons are well known and there are nice mathematical models of how they work, but the description of how the higher-level brain works is, well, just descriptive. It is fair to say that we do not have a decent theory of how the mind works at this moment. So how can we make a decent analogy between the brain and the CTM without incorporating the latest advances in neurobiology? I guess that the too narrow definition of the mind is also the reason that Fodor claims that only some mental states are computational in nature (the simpler ones?).
A decent account of the CTM should include a description of the mind that spans all the sciences (everything must go). This would also put stronger empirical constraints on our exploration through conceptual space. Our knowledge of lower level neurobiological circuits already inspired advanced computational methods such as artificial neural networks, which can handle decent learning tasks. Software evolves fast (and we can program it to evolve), and a new generation of software based on (future) neurobiology may well create simple or advanced "consciousness", so we should not be put off by the seemingly misguided account of artificial intelligence that is based on the analogy between "old accounts of the mind" and "old" computers. We should first make sure that we understand well what the subjects of the analogy are.
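As an aside, since artificial neural networks are invoked above, it may help to see how little of neurobiology the original connectionist units actually carried over (a Python sketch; the weights and bias are arbitrary numbers chosen for illustration). A unit sums weighted inputs and fires if a threshold is crossed, and that is essentially the whole of the biological analogy:

# A single artificial "neuron" in the classic connectionist sense: a weighted
# sum of inputs pushed through a hard threshold. This thresholding is roughly
# all that early artificial neural networks borrowed from real neurons.
def unit(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Example: with these arbitrary weights the unit happens to compute logical AND.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", unit(x, weights=(1.0, 1.0), bias=-1.5))

Modern networks are vastly more elaborate, but the gap between this kind of abstraction and actual neurobiology is part of the point being made here.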
Philip,
> Is this just what computers did in the past, or do now in 2013? Or do in 2025? Or will ever do? <
I find this sort of argument wholly unconvincing. Sure, nobody knows what technology will be able to do in the future. But the question is whether here and now we have good reasons to think that human consciousness can be replicated in anything like a digital computer. I don’t think so. More to the point, I think that “one day we may be able to do it” is just wishful thinking, unless it’s backed up by a serious theory of why and how this is going to be possible.
Alex,
> I may have interpreted more into the thought experiment than its creator intended. Some people seem to use it that way though <
Oh yes, there are all sorts of wild claims about what Searle may or may not have shown.
> Is the mind computational? No idea. How do we get from symbol manipulation to understanding? No idea. Does the room understand Chinese? IMO yes. Is it reasonable to assume that a thinking machine could be built that can do all the things a human mind can do, including consciousness whatever it is? Again, yes. <
I agree with your responses, except in the case of the Chinese Room. Not unless we add something else to the system. What, exactly? Well, if we knew, we would have answers to your other questions as well...
> the substrate is important. In a way that is trivial ... The idea that it positively has to be neurons and nothing else, however, appears implausible. <
I don’t think anyone is putting forth the latter idea. Certainly neither I nor Searle. As for the importance of substrate being trivial, well, tell it to Chalmers, Kurzweil and the whole “mind uploading” crowd.
> Whether you think it is in marmosets, in lizards or in snails, somewhere you will find a relative of us humans that you would not consider to have these attributes but that still has neurons. What has happened between our ancestors who were more or less at that stage of complexity and us? Well, really only more neurons and more complicated wiring. <
Yes, though some quantitative differences at some point create sufficiently sharp discontinuities that the outcome is qualitatively different, in my book. But of course we don’t even know if the kind of substrate and wiring of which computers are made can in principle give rise to consciousness.
> I merely think that it should ultimately somehow be possible to build hard AI. The whole brain uploading idea, on the other hand, is complete bunk. <
Yup, on both counts.
Waldemar,
> it is possible to trace back to any given sign a web of those syntax-sprung meanings linking every aspect of that sign's semantics. <
As was mentioned before in this discussion, Putnam pointed out that there are solid theoretical reasons to think that semantics cannot be reduced to syntax, even in principle. At the very least, however, strong AI people owe us an account (not just hand waving) of how exactly such a reduction is to be accomplished. So far, I haven’t heard it.
Kurt,
> It is fair to say that we do not have a decent theory of how the mind works at this moment. <
Precisely, though that doesn’t seem to discourage people like Chalmers from pontificating about it.
Ian,
> In essence, you chop up the conceptual landscape a bit differently, but you both accept the metaphysical possibility of weak p-zombies. <
No, I don’t. What makes you think so??
> You sometimes seem to be espousing a different position, though, which is that given that some substrate *does* replicate the I/O behaviour of a human brain to the CNS, that is no evidence of phenomenal consciousness, or very weak evidence. <
It’s a strange way to put my position. The problem is that I don’t think it’s just a matter of inputs and outputs. Consciousness is the ability to have first person experiences of mental states. I think this is possible only given certain functional *and* material set ups. And in order to be able to do that we need to understand a hell of a lot more about the neural basis of consciousness.
> If we met a species of intelligent aliens with brains running on silicon, and they claimed to be phenomenally conscious, should we be deeply skeptical about their claim to consciousness because they don't have our substrate? <
Not deeply, but perhaps somewhat skeptical. Ultimately this is a hyper-version of the problem of other minds. Which is a mess.
> I don't see any reason to be deeply skeptical about any other device that mimics a human brain's I/O behaviour <
See my comments to Alex above. It depends on what you mean by “mimics.”
Massimo,
In my opinion, it is Putnam who owes me a very precise account of how he linked the Lowenheim-Skolem theorem with the syntax-semantics conversion problem. Any idea where to find the paper? SEP says Putnam 1980 and lists just 1988 in the bibliography. I'll try this option. Also SEP states he evokes 'mentalese'... OMG! I need to put my finger in this wound!
> But the question is whether here and now we have good reasons to think that human consciousness can be replicated in anything like a digital computer. I don’t think so. <
Workers in AC (see First International Workshop on Artificial Consciousness coming up) think that even human-level consciousness can be replicated one day in a robot's code. They may be fools, but I think the ACers are right.
Waldemar,
I believe the paper where he talks about the Lowenheim-Skolem theorem is the one just below the 1988 paper, titled 'Models and Reality'. I think there's a typo in the bibliography's date of publication. Try this: https://pantherfile.uwm.edu/hinchman/www/Models&Reality.pdf
Massimo said >And in order to be able to do that we need to understand a hell of a lot more about the neural basis of consciousness.<
Indeed also the biological basis.
Massimo,
So we agree on nearly everything, only I did not realize it. Again, sorry for the hasty reaction.
But this:
> Yes, though some quantitative differences at some point create sufficiently sharp discontinuities that the outcome is qualitatively different, in my book. <
I find hard to believe. Admittedly, my choice of qualitative vs quantitative was a bit poor. Of course there is a qualitative difference between us and snails as far as cognitive capacities and agency are concerned. I merely mean that there cannot be a qualitative difference in the biology that makes it possible because they are using neurons and we are using neurons, and that is it. One could intuit from there the possibility that the difference between a pocket calculator and hard silicon-based AI does similarly not necessarily have to be one of a qualitative difference between the two substrates even if the capabilities are qualitatively very different, but that is not really the point I want to make at the moment.
The point is that there cannot have been a sharp discontinuity between the snails and us unless our current understanding of evolution is completely wrong. After all, as far as we know it proceeds through gradual changes in allele frequencies, gene duplication followed by gradual specialization, etc. Unless it proceeds through hopeful monsters, horses suddenly giving birth to whales or some such creationist straw man, there are no sharp discontinuities, and it appears implausible that an animal was suddenly born with full consciousness to a mother that was a philosophical zombie.
The second difference is the Chinese room. In my opinion, if it gives the right answers then it understands Chinese. I find it hard to think of another way of defining "understand" in the context of this discussion without begging the question.
Disagreeable Me,
I am inclined to consider it plausible to think that we can get _a_ mind from a computer; if it passes the Turing test I would consider that awesome and not get too hung up about whether it has properties that do not appear to have a clear, testable definition.
But in the case of human minds Massimo's analogy with photosynthesis holds. There is every reason to think that a human mind is the process of our physical wetware brain operating. The mind is not some software that you can take out of the brain and run in a computer. If you simulate the brain (and I have no idea how one would even in principle faithfully simulate the position of every single atom in a human brain, because that is what it would take), it will be like simulating weather patterns: Yes, you can have rain in your simulation but there will be no actual wetness involved.
Apart from the many other problems that are likely to kill the idea of brain uploading, it may turn out that the only way to get a replica of a human mind is to build a physical matrix that has the same properties as a neuronal network (which you could then feed with the signals that trick it into believing that it has the sensory input from a body). But that defeats the purpose in more than one way. Instead of flitting through cyberspace and doing all manner of science fictiony things, it needs the illusion of still being in a body to avoid going insane. Instead of copying a brilliant human mind into a thousand robots to let it do a lot of useful work without the limitations of the human body, it needs the illusion of being in a limiting human body to avoid going insane. And instead of being a bit of software that can be copied and restored, thus achieving immortality as long as there is electricity, the mind will be merely the process of that one particular instance of a physical matrix running, precisely as it is now.
Try this intuition pump:
Imagine replacing neurons one by one with tiny electronic components that do precisely the same thing, wired together in the same way. I'm guessing by what you've written that you would suppose that the brain would still be conscious at the end of it, and the person needn't even be aware of the change.
Now imagine replacing small clumps of neurons with larger electronic components that simulate that clump of neurons. What does your intuition tell you now? Imagine larger and larger clumps and what they would entail.
Now imagine replacing the whole brain with a black box that simulates the whole brain. It's physically inside a body, wired to the muscles and senses through the spinal cord, etc. What's your intuition now?
Now imagine that the black box is actually just a terminal for a remote server that does the processing - the workings of the brain is actually being implemented on a massive server farm elsewhere.
Now imagine building a robot body with precisely the same motor control capabilities and sensing apparatus as a human body, and connect the black box brain terminal to that body instead of a human one.
And now imagine having the robot body in a virtual environment. You're replacing the visual information supplied to the brain with computer graphics, and bodily sensory information with appropriate input representing normal gravity, etc. The virtual body's movements correspond precisely to the brain's commands.
I don't see any point in this gradual virtualisation where it becomes evident that consciousness would disappear or the mind would become insane.
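The assumption doing the work in that pump can be put in code form (a sketch only; the interface and both implementations are invented, and whether a real neuron is exhaustively captured by any such input/output contract is precisely what is in dispute). Two components that honour the same contract are interchangeable as far as the rest of the system can tell:

# The functionalist assumption in miniature: if two parts satisfy the same
# input/output contract, nothing downstream can tell them apart. Whether a
# biological neuron really is exhausted by such a contract is the contested question.
class BiologicalNeuron:
    def respond(self, signal):
        return signal > 0.5            # stand-in for whatever the real cell does

class ElectronicNeuron:
    def respond(self, signal):
        return signal > 0.5            # same observable behaviour, different substrate

def rest_of_the_system(neuron):
    # Downstream processing sees only the outputs, never the substrate.
    return [neuron.respond(s) for s in (0.2, 0.7, 0.9)]

print(rest_of_the_system(BiologicalNeuron()) == rest_of_the_system(ElectronicNeuron()))  # True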
> Imagine replacing neurons one by one with tiny electronic components that do precisely the same thing
Well no, that is right where it stops. An electronic component does not grow or die, it does not form new synapses, it does not release ions into the solution around it, and so on. And if you make an artificial replica, not electronic, that does precisely the same thing as a brain cell, guess what?
It will be just as mortal, squishy, non-copyable, prone to turning into a tumor or being bogged down by prions as a brain cell and thus completely useless to the transhumanist or singularitarian who hoped to become immortal or to copy talented minds into machines. Because that is what being precisely like a brain cell means.
Many people do not fully appreciate trade-offs: Some capabilities can only be achieved by accepting downsides that come with them, and the squishy nature of brain cells may be a downside that comes with their fantastic capability to make a human brain.
(That is where being an evolutionary biologist or ecologist can serve as an intuition pump because one is constantly confronted with the same situation: To be able to invest a lot into an individual child, you trade away your ability to have many of them. To be able to have very targeted, pollen-saving pollination, you trade away the flexibility that comes with a more generalist pollination syndrome. Etc.)
I hear what you're saying Alex, and you make a good deal of sense. You'd need a very sophisticated component to grow synapses and to interact with the chemical environment of the brain.
But, to start with, I'm not necessarily saying this would be a way to become superhuman or immortal. You could instead imagine that this is a treatment for MS, for instance.
So for the sake of the thought experiment, can we assume that the artificial neurons are little machines that include all the beneficial behaviour of biological neurons, including growth, death, synapse formation and chemical interchange? (I'm not convinced that this would necessarily make them vulnerable to prions or cancer - within the "black box" of their interface with the rest of the brain they are completely different physically).
What would that imply for consciousness etc?
(BTW, I'm not proposing that this idea would ever be feasible. It's purely a thought experiment).
Disagreeable,
Unfortunately I somehow lost the answer I wrote to your objection on the CR subject. Sorry.
I won't rewrite it here. Later we can talk about it, if you've posted something of the kind on your blog. (If not, post it!)
Here I'll say that I still haven't given up my hope that the human mind can be 'wholly thinkable' :), although Massimo's and others' statements are fully apropos. Thought, if we call it the final product of all the brain's processing, the final output by which we're able to say 'We think', doesn't yet have a scientific representation, or at least a comfortable one. My personal bet is that it is just 'mind perception', because I insist that the brain is just a sense organ that does a lot of processing too (just as the other sense 'organs' process worldly signals before sending them to that center for further processing). For me this is a tricky engineering problem: the input is also the output (and this at a speed I doubt any hardware has yet reached).
But what we get from all those brain activities is not a sequence of electric shocks or of chemicals being spread by the neurons (although a repugnant idea; if things worked this way, mind science/philosophy would be easy, 'soup', as we say in Brazilian Portuguese). What we get from all that is just... thought!!! And thought isn't perceived the way the other objects of perception are, for we can eventually stop sensing electrical shocks, but not stop sensing thought. Yes, you can build a sound argument by negating the consequent of Descartes' consequent and thus negating the antecedent, though this is just a valid argument that does not necessarily correspond to any worldly or mental possibility, for you cannot truly, sanely say 'I do not think'.
But if we want to use the phrase 'you do not think', we first must check whether the addressed subject is indeed dead (and in that case addressing him wouldn't make much sense) and must have previously assumed that 'no dead human thinks'. And although we think we know enough to be sure that the last statement is true, there are those who think it is not so and thus 'empirically' try to prove that death is not the absence of thinking. I'm just trying to show that inferring someone else's consciousness is a huge problem. From the 'self' side, no matter how you try to prove that I'm not thinking, I'm sure that I am (wrongly or not), and it is just I who am able to prove (and perhaps just to myself) that I indeed think.
You probably said this in a somewhat different way in a comment to this very post: perhaps we won't ever be able to prove someone else's consciousness, and so what we need isn't our certainty that a machine is conscious, but that this machine be able to say to itself 'I think, therefore I am' and be truly convinced of that (don't ask me how, please). And, I don't know (and perhaps no one ever will) why or how, but when such a machine says this, it's possible - or probable - that we will agree with it (now 'him/her').
I fear my English has not been enough to express what I tried to above. In any case, I tried. :)
Hi Waldemar,
By all means, we can discuss this on my blog. My post refuting the Chinese Room is here:
http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-chinese-room.html
Briefly, on your comments here, I feel pretty confident that we will never be able to detect "thought" or "consciousness" empirically, only the electrical sparking of neurons (or indeed the activation of electronic circuits). Thoughts and consciousness only exist in the physical world insofar as they supervene on these mundane physical events, but really they are only detectable subjectively.
I don't think there will ever be a scientific account of how this happens because I think it's purely a philosophical problem. Science can only explain how we have the abilities and mental functions we have, it can't explain the subjective aspect, because subjective phenomena do not objectively exist.
I'm not sure, but I think that perhaps you agree with me?
Just a correction to "Descartes' consequent" (at the end of the fourth paragraph), that is to be read "Descartes' cogito". And please don't ask me how that word went to that place.
Also see:
CONSCIOUSNESS AND SENTIENT ROBOTS
PENTTI O A HAIKONEN
It is argued here that the phenomenon of consciousness is nothing more than a special way of a subjective internal appearance of information. To explain consciousness is to explain how this subjective internal appearance of information can arise in the brain. To create a conscious robot is to create subjective internal appearances of information inside the robot. Other features that are often attributed to the phenomenon of consciousness are related to the contents of consciousness and cognitive functions. The internal conscious appearance of these is caused by the mechanism that gives rise to the internal appearances in the first place. A useful conscious robot must have a variety of cognitive abilities, but these abilities alone, no matter how advanced, will not make the robot conscious; the phenomenal internal appearances must be present as well. The Haikonen Cognitive Architecture (HCA) tries to facilitate both internal appearances and cognitive functions. The experimental robot XCR-1 is the first implementation experiment with the HCA.
Keywords: Consciousness; machine consciousness; conscious robots
International Journal of Machine Consciousness
Read More: http://www.worldscientific.com/doi/abs/10.1142/S1793843013400027
Well, this looks like a fun place to drop in for a visit. I have to say I'm a little surprised to find that a journal devoted to the topic of machine consciousness is even out there, considering the somewhat taboo nature the subject had in early AI circles due to all its associated epistemological problems, in contrast to the more well defined engineering challenges. But I'm all for getting as many people as possible to take consciousness seriously.
JollyRancher, thanks for the tip.
And, Massimo, it's probable you already know this paper (https://docs.google.com/viewer?docex=1&url=http://lnx.journalofpragmatism.eu/wp-content/uploads/2012/01/Ludwig.pdf), and its subject is certainly something that would be of the utmost interest here. If you already have a post on the subject, please, let me know.
Philip,
> Workers in AC (see First International Workshop on Artificial Consciousness coming up) think that even human-level consciousness can be replicated one day in a robot's code. <
Yes, but do they have any good reasons to think so? I doubt it. Any talk of replicating consciousness in a code is, I think, hopelessly misguided.
Alex,
> The point is that there cannot have been a sharp discontinuity between the snails and us unless our current understanding of evolution is completely wrong. ... it appears implausible that an animal was suddenly born with full consciousness to a mother that was a philosophical zombie. <
Agreed, consciousness is a matter of degrees, like everything else in biology. Yet there are plenty of biological examples of quantitative changes that are so great as to be essentially qualitative: language being one. Yes, other animals have forms of communication; yes — or rather maybe — some other primates even have rudimentary grammar. But the jump to Homo sapiens is fantastic nonetheless: Shakespeare, and all the rest.
> The second difference is the Chinese room. In my opinion, if it gives the right answers then it understands Chinese. <
No, no, no. That’s a behavioristic test (just like Turing’s test) and it won’t do. If you believe that, then google translate has a pretty good understanding of Italian — I checked. But no, google translate is only a (very sophisticated) program that *mindlessly* carries on certain tasks. And it is that lack of mind that precludes understanding.
Your rebuttal to Disagreeable, and consequent analysis of mind uploading and singularitarianism, is precisely right. Couldn’t have put it better myself.
>Your rebuttal to Disagreeable, and consequent analysis of mind uploading and singularitarianism, is precisely right. Couldn’t have put it better myself.<
As far as I can see, it addressed the practical issues. I'm not really a singularitarian - I'm not claiming that mind uploading will happen, I'm just claiming that there's no physical reason it couldn't happen in principle. It's just very very very hard, perhaps too hard.
But back to the Chinese Room. Are you actually familiar with the Virtual Mind reply? I think it totally undermines Searle's thought experiment, and in particular it shows the problem with the "just leave the room" answer to the System Reply.
http://en.wikipedia.org/wiki/Chinese_room#System_and_virtual_mind_replies:_finding_the_mind
If you do know it, what would be your objection to it?
Also well worth a read, the Virtual Mind reply on the SEP
http://plato.stanford.edu/entries/chinese-room/#4.1.1
> Any talk of replicating consciousness in a code is, I think, hopelessly misguided. <
I'm too afraid to say something like that. One day in the future a conscious robot might read it and get mad at me!
@Massimo.
ReplyDelete"Agreed, consciousness is a matter of degrees, like everything else in biology."
Like intelligence? That example of "Shakespeare" is a tell. Yes, a monkey could never write King Lear, but then I'll wager that you couldn't either. And a bacterium can individually choose to do things that Shakespeare had no inkling of (nor apparently do you).
Yes, Baron, perhaps this point is meaningful within our previous chat on intelligence: if intelligence is measurable, it should be measured according to the context to which it applies. The context (environment) of a bacterium is not the same as that of a monkey, for instance, and so their survival solutions must be different. Then there would exist different and not comparable kinds of intelligence, and this should apply even within the universe of just human beings. I'm not saying that there isn't a 'universal set' of human capabilities, or that an individual possessing one of these capabilities wouldn't be able to acquire the other ones. But this would depend on the context (the environment). Change it and the individual would acquire an ability (intelligence?) he didn't yet have.
That reminds me a bit of Massimo's contentions about syntax, to which I would point out that yes, you don't get meaning out of syntax, but you do get meaning out of the culturally shared agreement as to what the syntactical arrangement of symbols was meant to imply. In other words, you get meaning from syntactical arrangements that have been culturally evolved, provided you're sufficiently familiar with that culture's also evolving rules.
So of course intelligence is contextual, just as are the processes through which it must be culturally communicated in order to evolve. But let's make an analogy here between intelligence and life. If life evolves into a myriad of different strategically operational forms, yet is still considered life, then wouldn't the intelligence that has contributed to that evolution, even if changed as you imagine in the process, have been strategically and formatively evolved as well, and thus still have to be considered intelligence?
And don't forget that I've offered up this standard definition from the start:
intelligence |inˈtelijəns|
noun
1 the ability to acquire and apply knowledge and skills: an eminent man of great intelligence | they underestimated her intelligence.
This comment has been removed by the author.
Sorry, I'm publishing the comment again, with some small but crucial corrections.
Baron, for now I'm just trying to understand what Putnam stated in his paper about syntax-semantics conversion. I'm not sure I'll be able to get something valuable from there, but at the moment it seems I was trying to (badly) express something very different.
The idea seems extremely simple to me (see, I'm trying again!) and I believe it expresses the belief that we are able to interpret or to know the world: given any group of objects, we'll always believe we're able to express in at least one idea the way it is arranged. And I'd say more: we immediately associate some relations between elements of this arrangement with some of what I call built-in concepts, like: similar/different, high/low, big/small, etc. Observe that 'built-in' means just that 'we are equipped to form those concepts once we come into contact with the world,' and that if some similarities, for instance, are indeed taught to us, it is because they have been grasped by someone else before. So the mere display of objects, which can be said to be syntactic, automatically evokes those concepts which, after Peirce's sign theory, become the immediate 'interpretants' (or meanings) of that arrangement or of one of its subsets.
Then, when I mentioned tonal music, recall I said that its basic (let's say so) meaning is in fact a little more complex, as it involves those built-in concepts (which ultimately help us to identify objects in the world, preventing us from seeing everything as a continuum) and what I called a paradigm, the series of harmonics, to which each small portion of the tonal discourse (each musical cell, each musical motive) 'bends' itself and which also works as a complementary, omnipresent object of reference (each note in tonal music - if you're not playing it with an orchestra of tuning forks - sounds its series of harmonics). So, if heard by an individual who has never heard it before, it will make sense, because the sense it makes is inherent to the codification.
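(To make the 'series of harmonics' concrete: a string sounding a fundamental of 110 Hz also sounds, more faintly, overtones at whole-number multiples of that frequency, 220, 330, 440 Hz and so on; that overtone series is the physical paradigm I mean.)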
I believe that, conversely, even taking Peircean symbols, which are signs assumed to refer to arbitrary objects, it's possible to grasp their syntaxes, I mean, the relations that happen to make them what they are.
By the above you can see that I tried to show just that there's no semantics (no possibility of establishing meaning) without a syntax (in short, an organization), and there's no syntax that doesn't entail a semantics (in short, that doesn't make at least some sense). Just this, and I guess this has nothing to do with the Putnam paper referred to - not to my knowledge, at least. If this exposition is completely false, I must sadly recognize that a good part of my master's degree thesis is regrettably garbage and no one was able to show me this during the defense (20 years ago). Pure shame!!
My first reaction to both versions of this comment is to point out that if you see a group of at least somewhat familiar objects, you will likely have been able early on to presume their functional purposes. And that is partly because, as in some forms of hieroglyphics, the symbols we've devised to represent them are pictorial - which is not the case with languages that depend on the alphabetical construction of words. Even so, you might put three or four English words together randomly, and still get most of us to draw some reasonable inferences from them. Unless of course you were a Chechen speaker, with enough differences in the linguistic, symbolic and syntactic systems to throw you completely off.
So of course we need some cultural instructions to handle syntax, but not necessarily because those arrangements themselves must supply the meaning.
And much of our cultural learning is retained through our instinctive functions. Which again reflect how we must most safely react to others of our culture's signals.
Language in the end being just that: a culturally designed signaling process of sometimes open and more often closed communication. (Getting to a question for another time as to why languages in even similar cultures may have necessarily differed since their invention.)
And getting back to music, I note you're still holding that: "So, if listened by an individual that never listened it before, it will make sense because this sense it makes is inherent to the codification."
But I hold that since music has become an instinctive means of signaling the feelings that our emotional systems had once needed to communicate to others of a similar human culture, the meaning that we still sense today is inherent to that human's nature and his culture. Which is no doubt why some Eastern music is highly discordant to a Westerner and vice versa. BUT still recognizable as music!
Baron, perhaps we are plagued by a feature that logic, and mathematical logic in particular, has as part of its nature: the ability to deal with infinity as if it could be held in our own hands. But dealing with infinity is tricky, as it often slips between the fingers. By just watching how logicians and mathematical logicians present their discoveries, we tend to forget that they provide only frames within which the remaining workers will have a hard time obtaining even a few countable things. Thus, there must be those who roll up their sleeves to get that minimum. It is indeed not an easy job to take every piece of the world and get the best from it. One must be patient.
You know, that "BUT still recognizable as music!" of yours makes me recall that speech is also music; it has all its properties. It's a kind of music. Why do we understand it as just speech?
Good question. Because depending upon the country and the culture, some speech depends almost as much on tones as on its syllables for discerning the exact meaning. Apparently we can't write such tonal words in English, so perhaps that's why we've tended to flatten out the intonation of our language. I also note that in America we tend to end a question with an up tone and in some parts of Australia they end it with a down tone. So the varied music of our original cultures seems still to be with us in our varied speech.
Also in the Southern States of the USA, we have country songs that follow the same tonal patterns as ordinary speech. The difference seems to be in the amount of exaggeration up and down the scale. And in fact you may often hear the words of those songs without the music, but seldom hear the music without the words. We also have talk singing in the Western US, as in some old cowboy laments, like When the Work's All Done This Fall or Streets of Laredo.
But I'm not a music scholar so anything else I'd have to say about the above would be a semi-educated guess.
Massimo,
Maybe I am missing some nuances here, but in the absence of a test that would allow me to objectively decide whether a being that passes the behavioural tests has consciousness and understanding or not, the behavioural tests are all I have.
As in other cases (see immediately below) I am not sure what exactly one is debating. If the question is, "does it understand the language in the sense of being able to translate?", then obviously Google Translate has some understanding (although certainly still nowhere near that of a competent speaker, at least for German-English or German-Spanish). If the question is whether it understands a language in the sense of being able to act on information or answer questions, then no. But the thing is, if an AI can act on information and answer questions, then claiming that it still doesn't understand is, in the absence of a clear, objective, empirical test for understanding, begging the question. Maybe there is no qualitative extra or maybe there is, but at the moment I would not even know how to find out without assuming the conclusion.
Disagreeable Me,
> But, to start with, I'm not necessarily saying this would be a way to become superhuman or immortal. You could instead imagine that this is a treatment for MS, for instance.
But claiming that one could replace a small part of the brain to cure a disease, and that the human would still be mostly (!) the same, is very different from the claim that a simulation of a person is a person. I feel it is increasingly unclear what the discussion is about.
> As far as I can see, it addressed the practical issues. I'm not really a singularitarian - I'm not claiming that mind uploading will happen, I'm just claiming that there's no physical reason it couldn't happen in principle.
I don't see any reason why one could not build hard AI somehow, but mind uploading is a very specific thing: The claim that one could get a human mind from the human body and into a computer. If this is not practically possible then it can, indeed, not happen in principle.
>But claiming that one could replace a small part of the brain to cure a disease, and that the human would still be mostly (!) the same, is very different from the claim that a simulation of a person is a person. I feel it is increasingly unclear what the discussion is about. <
That (!) is probably telling!
The purpose of the replacement is not so important for now. By mentioning a cure for a disease, I'm just encouraging you to forget about fantasies of superhuman noncorporeal intelligence.
The argument goes that if you can accept that portions of the brain could be replaced by artificial or simulated prostheses, then why not the whole thing? At what point would you become unconscious or insane?
Seriously? If I lose a leg and get an artificial one I am still mostly me. If you build a robot that outwardly looks like me, it is not me any more. This is not rocket science.
Clearly you can replace me bit by bit. I know that because it happens all the time: individual molecules and cells get replaced over the course of our lives. But they get replaced with functional equivalents, not with a wire that has completely different capabilities than living tissue.
Saying I should imagine that there could be an electronic component that is exactly as good at making a human as our current brain cells, but SOMEHOW without all the trade-offs, is essentially saying "imagine I could do magic".
I don't identify at all with my body. I identify with my mind. If I get an artificial heart, I am still completely me. If I get an artificial leg, I am still me. Replace my whole body, I'm not "mostly" me, I am *completely* me.
I include my brain as part of my body, and think that the mind is something it produces as part of its function. I think if you replaced it with a black box which had the same functionality, but implemented very differently, my mind would continue to exist.
>But they get replaced with functional equivalents, not with a wire that has completely different capabilities than living tissue.<
In the thought experiment, the replacements are functional equivalents, just as a robot leg might be functionally equivalent to a biological one.
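To pin down what I mean by "functionally equivalent", here is a rough sketch in Python (a hypothetical toy model with made-up names, obviously nothing like real neuroscience): two very different implementations behind the same interface, so the surrounding circuit cannot tell which one it was handed:

    from typing import Protocol

    class NeuronLike(Protocol):
        # Anything with the same input/output behaviour counts, whatever it is made of.
        def fire(self, signal: float) -> float: ...

    class BiologicalNeuron:
        def fire(self, signal: float) -> float:
            return 1.0 if signal > 0.5 else 0.0  # stand-in for the cell's behaviour

    class SiliconNeuron:
        def fire(self, signal: float) -> float:
            return 1.0 if signal > 0.5 else 0.0  # same behaviour, different substrate

    def run_circuit(neuron: NeuronLike, signals: list[float]) -> list[float]:
        # The rest of the "brain" sees only the interface, never the implementation.
        return [neuron.fire(s) for s in signals]

    assert run_circuit(BiologicalNeuron(), [0.2, 0.9]) == run_circuit(SiliconNeuron(), [0.2, 0.9])

Whether anything in the brain can really be carved up this cleanly is exactly what's in dispute; the sketch only shows that substitution behind an interface is a coherent notion.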
>Saying I should imagine that there could be an electronic compartment that is exactly as good at making a human as our current brain cells but SOMEHOW without all the trade-offs is essentially saying "imagine I could do magic".<
You don't know that the tradeoffs are fundamentally necessary, you're only assuming this. I don't claim to be able to show that they're not -- I'm really arguing for open-mindedness.
There is no reason to suppose that nature has produced an optimum design. Without knowing that brain cells *have* to be susceptible to prions in order to function, you're not justified in saying it's impossible without magic.
Again, I am not sure what we are actually discussing. In this whole thread, I have argued several things, among them that the Chinese Room does not convince me as an argument. As relevant to you, I thought I was mostly arguing that mind/brain uploading is implausible. Even if you could build artificial brain cells that are fully equivalent but less squishy than the real ones and replace the human brain bit by bit, that would still mean that there is one human walking around with what amounts to prosthetics, not that your mind gets to be a piece of software in the cloud. Those are two different issues.
> I don't identify at all with my body. I identify with my mind. If I get an artificial heart, I am still completely me. If I get an artificial leg, I am still me. Replace my whole body, I'm not "mostly" me, I am *completely* me. <
Well, that is nice but it assumes mind-body dualism, and that does not appear to be a plausible idea any more. You are your body, and I am my body. Your mind is merely your brain doing the more conscious parts of its job.
The mind is a process, not a tangible thing that can be transferred. You could just as well argue that the important thing about a car is the driving, and if we somehow transfer "its driving" to a bus and physically destroy the car then the car still exists. Worse, ultimately you want to simulate the Volkswagen in a computer and claim that there is still driving going on.
Disagreeable,
According to Damasio (The mystery of consciousness - in Portuguese) and others in the same area of neuroscience, each mind works minutely with every kind of data, although all of it is really bodily information, since the data from the senses travel the same way the data from the organs do. The brain parses them all according to their origins, and this happens not at the conscious level; I mean, the majority of what happens in our brain can't be consciously accessed, though it is influencing our conscious activity more or less. So Damasio isn't sure that future changes in the body (I mean, not in the brain) won't affect the notion of self at all. He 'quotes' Alex's opinion to the letter: our minds, although producing 'immaterial' objects, exist for our bodies as a whole.
If you mean mind/brain uploading is implausible in the sense that it's quite unlikely to be realised any time in the foreseeable future, then I'm with you. If instead you're saying that it is in principle a nonsensical idea, then that's what I disagree with.
> that would still mean that there is one human walking around with what amounts to prosthetics, not that your mind gets to be a piece of software in the cloud. <
But back to the intuition pump: what if the prosthetics have their processing done remotely, e.g. because it isn't feasible to have a supercomputer within a human skull? Wouldn't your mind be at least partially within the cloud then?
>Well, that is nice but it assumes mind-body dualism, and that does not appear to be a plausible idea any more.<
Yes. Certain types of dualism are not plausible any more. However, other forms seem reasonable to me. I'm finding it hard to figure out which kind of dualist I am (a property dualist or an epiphenomenalist, perhaps), but I see the distinction between the mind and the brain as analogous to the distinction between hardware and software, or between a sound wave and the particles that make up the medium. The wave is a pattern, not something that has a fundamental physical existence. The wave could pass from one medium to another, and still be considered the same wave. I consider my mind to be quite similar to this.
>The mind is a process, not a tangible thing<
I couldn't agree more. But this is precisely why it can be transferred from the physical to the virtual.
>You could just as well argue that the important thing about a car is the driving, and if we somehow transfer "its driving" to a bus and physically destroy the car then the car still exists. <
A better analogy might be its momentum. You absolutely can transfer the momentum from a car to the bus. If this involves destroying the car, so be it, the momentum is conserved. So, if the analogy is car=body, momentum=mind, then I don't see a problem.
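To make that concrete (with made-up round numbers): a 1,000 kg car moving at 10 m/s carries momentum p = mv = 10,000 kg·m/s; if it runs into a stationary 9,000 kg bus and the two lock together, the combined 10,000 kg wreck rolls off at 1 m/s, still carrying exactly 10,000 kg·m/s. Nothing tangible hopped from one vehicle to the other, yet the quantity was transferred intact, and that is roughly how I think of a mind surviving a change of substrate.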
We are probably not going to get anywhere.
If you put me on a table, scan my brain, magically make a perfect copy of it, and then wake me up, I will look at the machine and think, "huh, nothing has changed". Then I will grow old and die while not me but a copy of me continues to exist in some machine. If, in the same situation, you would get up and walk out of the room thinking, "now I am in the machine", then we will simply never reach agreement.
And that is the ridiculously hypothetical best case scenario. In reality, again, a simulation of me is not even a copy of me but merely, well, a simulation of me, just like a simulation of a car inside a computer is not a car.
I agree that it raises deeply problematic issues about identity, in the same way that a transporter does, especially if the original happens not to be destroyed.
In the scenario you describe, my view would be that at the point I lie down to be scanned, I have two future selves, both me. My life forks, essentially. From my perspective, it will seem like I have a 50 per cent chance of ending up in the machine or remaining physical, but actually both possibilities are realised.
I have just finished the 3rd edition of my book, which answers these issues concisely enough for you all to understand. It's free at my site http://thehumandesign.net
Just to briefly correct one misconception among many in the piece, the Chinese Room is essentially useless. It is nothing but a tabulator, and to say a mind is never a tabulator by comparing it to something that is nothing but a tabulator is silly. We are both a tabulator and something more than a tabulator, and it is destructive to progress to use the Chinese Room to close the door entirely, so to speak.
I should provide more fullness, to be fair. The mind is entirely tabulation, and entirely a dynamic interface of entity and world.
It is tabulation as automatic neural processes that do nothing more than represent the entity in the world, and it is dynamic and changeable from the entity's interfaces.
Do you think a brain actually does something? It finalizes entity-in-world neuronal capture for a representation using nothing but automatic matching of receptor inputs with networks - pure pattern matching or tabulation by any other description.
Do you think the mind cannot be viewed from either perspective? It is from an entity-in-world, and it is an experience of entity-in-world automatically facilitated by neural tabulation between receptors and networks.
As a facilitated experience in the brain on finalization, it is absolutely nothing but automatic tabulation. That's my working hypothesis, for reasons simply stated above, which can be easily expanded if you want to know more, as they are fundamentally sound from the outset. That's how to advance.
Or do you think otherwise? Or do you take what other people think and circle them around to get dizzy? These are important questions for people like Socrates and me, because we like to challenge our own claims at being effective thinkers and expressers of concepts by words.
Worth thinking about, or just read my free book and get well educated on these matters.
I will be doubly fair to the piece writer, if this gets posted, by saying that all the bits and bobs we like to think of as the content of awareness, its meaning and syntax and so on and so forth, can actually be fully ordered and understood in the proper perspective. They are otherwise bits and bobs we try to make sense of in themselves, and as created by an anatomy.
Language and linguistics are essential, and fit within the proper perspective given in my book to explain what syntax and meaning are, as created by anatomy. Anatomy as the entity-in-world that creates its own representations by its automatic facility of neurons.
I will no doubt lose you here as you have probably not availed yourselves of the opportunity to read my book despite telling you all about it here regularly over the past year or more. But, the entity has the capacities in its very anatomical structure, a head & body, fingers, eyes, gut, whatever, all the parts together make a unified whole with chemical capacities to construct representations.
Anatomy is itself constructed as a 'representation machine' that uses neurons to do the job. I asked the piece writer to have a look at my book over a year ago but he didn't have any time to take a peek. Pity, really, and despite my frequent visits here I never get responses from him to my posts. I must be on the ignore list, on that evidence, unless my posts were not challenging and worthwhile, in which case I am a bloated idiot.
Embarrassing, really, for me that is, but I don't mind sharing it. It was worth the good old college try, but I realize it's not the way things are done: no luck, no connections here or there, just a toss of a coin whether anyone reads it. That's the way of the world. But do have a peek nonetheless, and get beyond the cover title - that would be a bit of a premature about-face.
Just for overall information on the question of syntax-semantics conversion and Putnam's proof of its impossibility, here are some remarks in this paper (http://www3.nd.edu/~tbays/papers/putnam.pdf) on how and where his argument fails.
Thanks for the Putnam connection. I read him on this topic recently and wondered if his work had any resonance.
Nice post Massimo. An old book by Hilary Putnam applies some analytical philosophy to the computational theory of mind he once advanced, and finds it logically unsound. He uses a similar strategy to find that there is a logical equivalence between behaviourist and functionalist theories of mind, and that any description of cognition in the terms of one can be fully rendered in the terms of the other. As I can't summarize the symbolic logic, you would be doing yourself a favour by looking at his Representation and Reality (MIT, 1988).
ReplyDeleteEnjoy this post. I have a couple of points:
1. Is it not the case that in philosophy, the only opinion on consciousness that might, on your account, accurately be described as "a theory" is one that claims to provide a 'reductive explanation'? Only a reductive explanation can be empirically tested and assessed in view of its claimed coherence with physics.
Ever since I had a philosophy paper rejected on the basis that it was "largely speculative”, I too have contemplated what it is that makes ANY philosophy paper on consciousness anything but speculative. Are there any exceptions?
2. Whilst my Hierarchical Systems "Theory" of consciousness claims to be reductive (http://mind-phronesis.co.uk/emergence-and-evolution-of-human-consciousness), I do often wonder whether being so makes it no longer philosophy... Philosophers hate resolution and merely dabble with coherence. They are like perverse goalkeepers in football - eager to have the ball kicked challengingly toward them, but hating it should the ball go in.