About Rationally Speaking


Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please note that the contents of this blog can be reprinted under the standard Creative Commons license.

Tuesday, December 29, 2009

From the APA: epistemology

Interesting-sounding symposium put together by Stephen Grimm of Fordham University. The first speaker is Chris Tucker (Notre Dame and Auckland), with a talk on “Why open-minded people should endorse dogmatism.” Ok, ok, I’m holding my knees...
Tucker defines dogmatism as “if it seems to S that P, then S has prima facie non-inferential justification for P.” Right, so he is not talking about anything like what most people mean by “dogmatism.” Keep that in mind. Moreover, “seems” above does not reflect a mere belief on the part of S, but a forceful feeling of truth. For instance, it seems to me that there are other people in this room, or that 2 + 2 equals 4. That kind of seeming.
[Sociological note to self: unlike education — see previous post — epistemology appears to be of almost exclusive interest to male-gendered philosophers. Go figure.]
Tucker differentiates between “seemings” and sensations, as should be obvious from the 2+2=4 example above. He draws this distinction more precisely on the basis of neurobiological evidence concerning people who have “seemings” but not the corresponding sensation, and vice versa, which I find a good example of how a philosophical (conceptual) distinction makes sense of puzzling empirical data.
Broadly speaking, it appears that Tucker is saying that “dogmatism” based on “seemings” is justified when the person experiencing the seeming is in a position to have a strong, likely correct, intuition of what is going on. I’m happy to agree that this is very sensible, but I maintain that to call this “dogmatism” is a cheap trick to justify an attention-grabbing title in a talk that would otherwise not be quite so remarkable.
The second talk is by Kay Mathiesen (University of Arizona) on “groups as epistemic agents.” She argues that group-level epistemic beliefs are not necessarily the same thing as the sum of the epistemic beliefs of the individuals making up a group. Ok, I’m skeptical of this one too...
The example presented is that of two parents faced with the question of when their daughter can date. The mother seems to think that 14 is a sufficient age, the father goes for 18. They decide to present a unified front and tell their daughter that 16 is acceptable. Since neither of the two people actually thinks 16 is the answer, the “group” made up of the two parents holds to a different belief than either individual member of the group. This is certainly a good point, but I would most definitely not call this a case of “belief,” a term that immediately brings to mind very different kinds of situations. The author herself agrees that “belief” may not be the best term here, suggesting “holding a position” as an alternative. But if we are talking about holding a position rather than a belief, then in what sense is the group behaving as an epistemic agent? Where is the epistemology here?
[Incidentally, although I have no time to go into it in this post, the commentary by Michael Hicks from Brooklyn College was absolutely superb and characterized by incredible clarity. He managed — in my opinion — to drive a stake through the heart of Mathiesen’s argument.]
The final talk of the session was by Julianne Chung (Yale) on “hope, intuition and inference.” This is a response to an earlier paper by Jonathan Weinberg criticizing philosophers’ use of their “intuitions” as part of their arguments. (I must say that while not a complete skeptic about philosophical intuition, I certainly am not moved by, say, David Chalmers’ intuitions about zombies and what they tell him — but not me — about the hard problem of consciousness.) Weinberg characterizes the philosopher’s reliance on intuition as “hopeless” in the sense that it is not likely to bring about reliable inferences, for a variety of reasons including the lack of inter-subjective agreement on what the intuition actually suggests (again, see my counter-intuition that zombies don’t tell us anything about consciousness, contra Chalmers).
Chung characterizes intuition as “snap inference” in which we do not expressly list all the premises or make the argument explicit. The claim is that we could, however, do so if called upon. I don’t think this is necessarily the case, and I therefore doubt that one can make generalizations (either positive or negative) about intuitions and the related category of thought experiments.
For Chung, thought experiments illuminate the consequences of certain assumptions about whatever problem is at hand. I would not disagree as a general proposition, if one adds the caveat that thought experiments may illuminate the problem, as has indeed been the case in both science and philosophy (think of both Galileo’s conceptual demonstration that the Aristotelian way of thinking about falling bodies is wrong, and Einstein’s insight into the nature of light by imagining himself riding a light wave and looking at a parallel one; in philosophy I think a good example is John Searle’s “Chinese room” experiment, which seems to me to show conclusively that there is something amiss with the functionalist view of the material basis of consciousness, also raising serious doubts about a simple computational account of the mind). Unfortunately, thought experiments can also be profoundly misleading, so they are by no means an unqualified good.

41 comments:

  1. Wow, this post makes me want to stay as far as possible from epistemological debates. The first two boil down to definitions. The third I don't understand, I must admit.

And the whole thing "seems" to be typical of philosophic writing: parentheticals, brackets, definitional quotes, and the joy in a stake-through-the-heart argument.

    I am very interested in the first bracket. First, what is the gender ratio of all professional philosophers? Second, what are the areas where women are over-represented?

May have misread the sarcastic tone here, but can it be that when you use the phrase "that kind of seeming", you backhandedly agree that when someone makes statements like "2+2=4" or "There are other people in the room", these are points of view, no more and no less?

    The point of view made possible by a combination of
    - 'group' beliefs that 2+2=4, others are in the room, the proper dating age is 16, etc...

    - one's sense of the correctness of the beliefs of others

One reason why Mathiesen's argument (that a group's belief is unequal to the sum of the group's individual beliefs) makes sense to me is that it reduces the distinction between an individual and a group of individuals. As if an individual were a singleton capable of having only one belief at a time.

  3. Massimo, would you elaborate a little on "simple computational account of the mind"? Simple in what sense? As in any computation expressible as a Turing machine program?

  4. pyridine,

    no, by simple computational account I mean a direct analogy between brains and digital computers, as in the strong AI program.

  5. I can't agree that Searle conclusively showed anything except his own inability to follow his own reasoning to its conclusion.

    Searle offers the Chinese room not as a physically realizable system but simply as a device to lampoon functionalism.

Searle seems to allow for a computer to fully functionally simulate a brain, yet tells us that it isn't conscious. In order for it to be conscious it needs some special Searlian substance or process.

But if the simulated brain is functionally identical to the Searlian brain, how can you tell them apart? And what purpose does the Searlian process serve? This is the zombie problem all over again.

Chalmers wants a dual substance to explain consciousness. Searle wants a hidden Searlian substance or process. What is the real difference here? Isn't Searle simply a dualist in denial?

  6. Not being familiar with the issue, I have read the rather longish Wikipedia article on the Chinese Room and, quite apart from me not being convinced by the thought experiment, one tangential question has occurred to me:

It seems that Searle at least partly formulated it as a rejection of dualism, because he wanted to reject the notion that a program can be a mind without having the human brain to emerge from. Okay. But to me it seems that he is the dualist here: the main argument seems to boil down to the claim that there is some mystical extra component to the mush between our ears that makes a mind possible ("causal properties") and that computer programs "obviously" do not have. Huh? Or did I misunderstand him?

PS: Personally, I think that it is all a question of complexity after all, but of such mind-bogglingly huge amounts of it that they wreak havoc with our intuition's ability to assess the matter.

  7. Massimo

    "by simple computational account I mean a direct analogy between brains and digital computers, as in the strong AI program"

    I think you are oversimplifying here. A computer is no more a brain than a neuron cell culture is a brain. Brainness is in the complex interconnections of the neurons. In a computer it is in the complexity of the program running on the computer.

    A computer can in principle simulate a tornado to any degree of accuracy you want. But that tornado can't actually destroy your house.

    There is no reason to suppose that a computer cannot simulate a brain to whatever level of accuracy you want. The difference is that in principle you could connect that computer to a body and it would function as an actual brain.

    Rejecting this seems to be a blanket rejection of reductionism.

It seems strange to me that you reject Chalmers' zombies but feel the Chinese room argument has some force.

To me the systems reply blows Searle out of the water. His response to it assumes that the mind is indivisible, which is obviously false (split-brain patients).

    Could you explain why you take Searle seriously?

Barnaby,

I fail to see why you think Chalmers and Searle arrive at the same conclusion. They manifestly don't, since the latter is not a dualist, while the former is. Moreover, all that the Chinese room does is to show that there is trouble with one particular physicalist theory of mind, not *all* physicalist theories (unlike the zombie argument, with which, however, I do disagree).

    ppnl,

as for computers being able to simulate minds, I'm afraid that's begging the question, since that is precisely the thing at issue. But even a rejection of the computational theory of mind doesn't lead to dualism; it simply says that a particular way of thinking about the mind is incomplete or inaccurate.

  10. Massimo,

The problem with Searle's special process is that he leaves us no way to recognize it if we see it. In that sense it is no different from a supernatural soul. Searle simply declares that his is naturalistic, by decree. Fine, but it is an intellectually sterile position unless I have some way to see it under a microscope or weigh it on a scale.

And I think you have a stunted understanding of the computational approach. At the bottom level, everything we understand about the universe is written as math. Everything that happens can in a certain sense be seen as a mathematical calculation. It may be big, complex and chaotic, but that does not make it less of a calculation.

    Just about everything we understand about the brain can be seen as algorithmic. From the way visual information is extracted to the way new memories are formed. The computational model is the only game in town. In a deep sense that's true not only for brains but for all of science.

  11. ppnl,

    I don't think you give Searle a fair shake. To show that a particular theory of anything is lacking is not at all the same as invoking the supernatural, even if one does not have a ready alternative.

And I think you have an overinflated view of the computational approach. First off, I have no idea on what basis you make sweeping claims such as "everything that happens can be seen as mathematical calculation." Wow. Second, surely you realize that if being "algorithmic" explains literally everything then we don't have an explanation of what makes brains different from, say, rocks...

"I don't think you give Searle a fair shake. To show that a particular theory of anything is lacking is not at all the same as invoking the supernatural, even if one does not have a ready alternative."

He does not call it supernatural, that much at least is true. But again: what else is the position that a computer self-evidently cannot have a mind, because it lacks an undefined and invisible "causal property" that a human brain self-evidently has, if not dualism? Conversely, if there is no magical, immaterial component to a mind, then why should a computer not in principle be able to have one? To me, Searle comes across as a dualist.

  13. Mintman,

are you seriously arguing that to propose that X works differently from a computer means that one is a dualist with respect to X? Why? Again, arguing that point is equivalent to accepting the computational theory by fiat, while in fact what we have been able to do with computers in terms of simulating minds is almost absurdly inadequate (see the abysmal failure of the once-much-trumpeted strong AI program).

    Imagine a physicist who said that unless one accepts string theory one is invoking spooky supernaturalism to explain the unification of forces. He would be laughed out of court immediately...

  14. Okay, either I have misunderstood Searle's motivation as represented on the internet, or I have not expressed myself clearly. I would not worry about him showing that current AI does not have a mind, but to me his whole impetus and argumentation read as if he took issue with the notion that constructs could have a mind in principle, and he seems to have done so because he feels that human brains have something "special". Well, what material thing is that and why could it not be copied, in principle, in a computer? The answer to the first question remains unclear (to me), and the one to the second would be that it could only then not be copied if it were a magical elan vital or soul.

Apart from that, I agree with BD that the systems reply sounds pretty convincing. If there were a program that could actually intelligently answer a question like "how many fingers am I holding up", "name five animals that are larger than a mouse" or "how are you", then it would understand Chinese, no matter if one of the cogs in the apparatus, the human in the thought experiment, did not. All else is either redefining the word "understand" to suit one's agenda, or else the fallacy of saying that a car cannot drive because a windshield cannot (the fallacy of composition, I believe it is called in English).

    Note that I am not saying that giving a computer a mind is anywhere within reach. I am just saying that to me, accepting the possibility of producing a mind on a computer in principle follows necessarily from the rejection of dualism.

  15. Mintman,

    I understand your worries, but I take Searle's claim as more modest than what you take it to be: he is simply arguing that - given the counterintuitive result of his thought experiment - there seems to be something amiss in functionalism (which, btw, is only *one* type of computational theory).

As for the systems reply, I find it plausible, not convincing. I'll find it convincing when it produces a mind (or a good explicit explanation of how to get there).

    And again, I disagree that the rejection of dualism necessarily entails that minds are a type of computer, no more than the rejection of vitalism in the 19th century meant that living organisms are mechanical devices (beware of focusing on the currently available cultural metaphor!).

    The computational theory has to stand on its own, and as far as I can tell it has a long way to go before managing that.

Massimo, you said that rejection of dualism does not necessarily entail that minds are a type of computer. It depends on what you mean by computer. If by computer you mean either: 1. the kind of computers that we are using now, or 2. any computer running software envisioned by the AI researchers in Searle's time (i.e., LISP or Prolog), I think you are right. But let's imagine we have a massively parallel computer with enough power to simulate the electrical and chemical activities of every neuron in, let's say, my brain. Wouldn't you think we would be able to create a mind? If your answer is yes, I'd say the mind is a computer. If your answer is no, I'd say yours is probably a dualist position.

  17. pyridine,

    glad we agree that current computers don't do the trick. So now we are talking about hypothetical computers. Which means we don't really have a "theory" anymore, much less empirical evidence.

    Even so, imagine you can build a computer that simulates all the intricacies of the geodynamics of the earth. Would you then call that computer a planet? There is a difference between simulating something and being the thing in question, no?

  18. Massimo,

You should try to see that in a deep sense both Searle and Chalmers are making the same argument. They are both using what is essentially a zombie argument to claim that something is wrong with functionalism. Chalmers labels his supernatural while Searle labels his natural. Neither claim has any actual content.


It isn't just that there is no ready alternative. You would need to redefine science to make any alternative possible. That's why Chalmers just gives up and goes supernatural. That's why some go the other way and deny that consciousness exists.

The universe is (or seems to be) algorithmic. That means a falling rock can in some sense be seen as doing a calculation. That is simply to say that it follows rules.

    A brain is like a rock in that it also follows rules. It is different in that it is following different rules and is thus doing a different calculation.

    A computer is a device for exploring rule sets. It can in a symbolic sense implement any possible set of rules. That means in an extended symbolic sense it can be either a rock or a brain to as high a degree of accuracy as you wish. Even if this means simulating them down to the subatomic level.

    Computers are universal because they can implement any possible rule set. In a symbolic sense they can be anything. Even brains.

The only way out of this is to claim there is something about the universe that is non-algorithmic. That possibility is strange enough to come close to justifying the supernatural label.
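[A toy illustration of the "device for exploring rule sets" point, sketched in Python; the code and the choice of rules are mine, not the commenter's. One fixed program can implement any of the 256 elementary cellular-automaton rule sets, switching "physics" by changing a single number:]

```python
def step(cells, rule):
    # One update of an elementary cellular automaton: each cell's new value
    # is the bit of `rule` selected by its (left, center, right) neighborhood,
    # read as a 3-bit number, with wrap-around at the edges.
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# The same machinery runs any rule set: rule 90 draws a Sierpinski triangle,
# and rule 110 is known to be Turing-complete.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row, 90)  # swap in 110, 30, ... to explore a different rule set
```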

  19. ppnl,

I've heard the "the universe is algorithmic" argument plenty of times before, and frankly, I'm getting tired of it. It is just as empty as the stuff you disagree with.

    A rock is most definitely *not*, in any reasonable sense, "doing a calculation" when it falls. What is true is that we can calculate the trajectory of the rock, or that the rock follows the law of gravity. But that's got absolutely nothing to do with validating a specific theory of mind.

    Computers cannot *be* anything but computers. They can calculate things (within limits imposed both by technology and by things like incompleteness theorems), but that is not to say that they *are* those things. That's a huge ontological mistake.

  20. "Even so, imagine you can build a computer that simulates all the intricacies of the geodynamics of the earth. Would you then call that computer a planet? There is a difference between simulating something and being the thing in question, no?"

In a symbolic sense, yes, I would. It can tell me all that I could ever hope to know about a planet.

A computer model of a brain could tell me all that can in principle be known about a brain.

The computer brain is different, however, in that in principle it could be connected to a body and drive it exactly like a real brain. On what basis can you claim it isn't a brain then? Would you unplug it? Even if it begged you not to?

Current computers can in principle run any program any future computer can. The only problem is that they will not run it very fast (like billions of times slower). So I don't really see this as a valid objection to a computational model.

  21. Massimo,

I did not say a computer could be a rock. I said that it could be a rock in a symbolic sense. That is to say, a computer program can incorporate anything that can in principle be known about a rock. It is a symbolic stand-in.

    Similarly a computer brain would tell us all that can in principle be known about a real brain. It could even function as an actual brain.

In an absolute sense computers cannot calculate. They no more know what the number three is than a rock does. But their internal states can be used as symbolic stand-ins for our purposes.

A rock can also be used as a symbolic stand-in. Put rocks on strings and they can be an abacus. A falling rock can be used as a symbolic stand-in for a different falling object or some other isomorphic problem. Most rocks are not being used this way, but they still follow rules that are algorithmic and thus understandable.

  22. ppnl for prsdnt!!

Would only add this: look at it from the relativistic point of view. Suppose two computers were built, one like a rock, the other like a planet. If we mortals could not tell the difference between the computer and the 'real thing' (whatever on earth that means), what would it matter?

Would debaters debate the nature of the difference between the computer and the other thing? Yes, but only because they can. But most would have no problem with the "rock" and "planet" aliases.

I recognize that Chalmers' argument is not the same as the Chinese room argument, but it seems to me equally obvious that they both fail to show anything at all.

    I don't think the Chinese room argument works against functionalism either. The systems reply still works if functionalism is the intended adversary.

Also, your usage of the word "theory" to describe strong AI needs to be justified. It would seem not to satisfy Popper's falsification criterion.

    Your comments regarding AI would only make sense if they were directed against weak AI rather than strong AI. But weak AI is implied if the laws of physics relevant to the human brain are Turing computable (+randomness). Both quantum mechanics and relativity are thought to be Turing computable in this sense. A theory which wasn't Turing computable would be huge news.

    If there is no non-Turing operation of the universe relevant to the operation of the human mind then weak AI is possible. You therefore have the burden of proof if you want to maintain that weak AI is not possible. That is because you need a revolution in physics for your viewpoint to even be possible.

Furthermore, it's absurd to argue that the current failure to produce a human-level AI amounts to evidence against the 'in principle' possibility of human-level AI. This is because the complexity of the human brain is many orders of magnitude greater than that of the fastest commonly available machines. We should only expect AI to have reached the level of insect intelligence given the available computational power, and that level AI has certainly achieved.

  24. Barnaby,

    your points are well taken, but they have little to do with my arguments. Briefly:

    * I think the Chinese room argument works against a particular version of functionalism, but not against a variety of physicalist interpretations of mind, which is why Searle didn't "go dualist" (unlike Chalmers).

    * My usage of the word theory is independent of Popper's falsificationism because the latter has been abandoned by philosophers for decades as unsatisfactory for a variety of reasons.

    * My comments are aimed at strong, not weak, AI. I don't know why you think the opposite.

* Weak AI (or, better, Turing's theory) establishes the possibility of computing machines of the kind at issue. It doesn't follow that brains are that kind of machine. Possibility is distinct from actuality.

* I agree that it is absurd to argue that the current failure to produce human-level AI is evidence against the possibility of doing so. I never argued such a silly position. But the proof is in the pudding, and the AI community has failed abysmally so far in the pudding department...

I did not know until very recently that Chalmers was a student of Hofstadter. By association, that puts him somewhere between people and gods, so I will take another look at dualism, which seemed reasonable if not the most 'unified' stream of thought. Because I see many fellow unwashed and unschooled folks also calling Searle's arguments as dualistic as they come, you too can read about close to ten kinds of dualism at http://www.iep.utm.edu/dualism/ before you weigh in.

I cannot find anyone anywhere who says that the subject of the Chinese Room experiment leaves with more understanding of Chinese than before the experiment, or who even asks this question. I think that an impression of some sort has been left on the subject after doing all those translations, and I call that impression 'understanding', limited as it may be when compared to the speed and efficiency found in verbal communication. Put both the subject and his/her pre-experimental self inside Beijing at rush hour, trying to get somewhere by reading only those 'sinograms', and whom would you bet on making more headway?

Searle is saying things that have been said for a long time by those within AI: no matter how far you get (strong AI, supersized AI), you are not replicating people's minds, but creating something different. Agreeing with what I may wrongly think the dualist in Searle is saying, AI does not need to be about human deconstruction. It is about construction. While some would like 'understanding' and 'consciousness' to remain ascribed solely to the human domain, the course of popular culture that produced science fiction will not allow this. Neither will Chalmers, from the looks of things.

  26. Massimo,

I think you are being confused by an inconsistent and incomplete understanding of computers, and this results in a distorted notion of functionalism. Let's see if I can clarify the issues.

1) Look at the brain as a black box that takes input and produces output. The question is: can a computer be programmed to produce this same input/output function?

1a) Let's say it can. After all, a computer is defined as any device that can produce any possible input/output function that anything else can. The question is: does that make the computer conscious? Searle's answer is no. For example, he argues that if you were to replace a person's neurons one by one with electronic devices that simulated them while you were talking to that person, they would continue to talk normally and intelligibly but would cease to mean anything by it. These are Searle's zombies.

1b) Maybe a computer cannot duplicate the input/output function of a brain. This violates the Church/Turing thesis. That means that the brain function is a noncomputable function. Well, what's wrong with that? After all, many noncomputable functions have been found: Turing's halting function, for example (see the sketch at the end of this comment).

The problem is that not only can computers not calculate noncomputable functions, but nothing else can either, as far as we know. Maybe there is some special unknown physics operating in a brain that lets it do hypercomputations. But this seems like an extraordinary claim.

    2) This leaves us with three possibilities.

2a) Computers can reproduce the functions of brains, and that's all there is to it.

2b) Computers can reproduce the actions of brains, but they would lack intentionality. This is the Searlian-zombie solution.

    2c) There is some fundamental new physics operating at the neuron scale that makes brains capable of computing hyperfunctions.

    Choose one.
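[To unpack the reference to Turing's halting function in 1b, here is the standard diagonal argument sketched in Python. The halts() stub is hypothetical by design: the whole point of the argument is that no correct implementation of it can exist.]

```python
def halts(prog, arg):
    # Hypothetical oracle: return True iff prog(arg) eventually returns.
    # No correct, always-terminating implementation can exist (see below).
    raise NotImplementedError

def contrary(prog):
    # Do the opposite of whatever halts() predicts about prog(prog).
    if halts(prog, prog):
        while True:   # halts() says prog(prog) halts, so loop forever
            pass
    else:
        return        # halts() says prog(prog) loops, so halt at once

# Ask: does contrary(contrary) halt?  If halts(contrary, contrary) is True,
# then contrary(contrary) loops forever; if False, it halts immediately.
# Either way halts() is wrong about it, so no total halts() can be written:
# the halting function is noncomputable.
```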

  27. ppnl,

it is really amazing to me how quickly people around here throw accusations of being confused, ill-informed and so on, just because I disagree (with arguments) with their conclusions.

Be that as it may, even if I accept all you say, your argument makes no distinction among, say, the brain of an ant, that of a squid, and that of a human. As far as we know, only the last is conscious. That's what I meant when I said that a theory that explains everything ends up explaining nothing (or not much). And of course we don't need just a theory, we need empirical evidence, and that's surely lacking!

    Second, I never argued that computers (in the broad Turing sense) cannot simulate a human mind. But that is *not* at all the same as saying that computers are good theories of the human mind. As I said before, computers can simulate the universe, but they are not the universe, and they do not provide us with a theory of the universe.

  28. Massimo,

    I'm sorry about accusing you of being confused. All I can say is that you have not spent any time making your position clear. I understand that you can't spend the time to explain yourself to every stranger on the internet. But what can I say? Your position is very unclear and seems confused to me. Maybe a blog post on the subject would help.

Your comment about the difference between human, ant and squid brains is a good example of where you are being unclear. What distinction do you expect me to make? A human brain is millions of times more complex than an ant's. A human then should display far more complex behavior. I don't know what else you expect.

And your demand for empirical evidence seems equally confused, especially given your defense of Searle. Remember, Searle claims that a human with their neurons replaced with electronic devices would still function but would not have experiences. In explicitly rejecting the Turing test he leaves no room for empiricism. You claim not to like zombie arguments, but you fail to notice that Searle lets zombies in the back door.

And you say "But that is *not* at all the same as saying that computers are good theories of the human mind," which strikes me as very confused. A computer is not a theory of brains or anything else. A computer can contain the rules that give a brain its functionality; it is the program that contains the theory. Functionalism just means we don't care what it is made of as long as it implements the functional rules.

And one final thing that seems confused to me. It's true that a computer model of a tornado cannot destroy my house. But you continue to ignore the obvious: an accurate computer model of a brain would in all respects be a real brain. It could in principle be connected to a body and function as a person would. This is the strong AI position.

Now maybe a fully functional computer model of a brain is impossible. If so, there are deep consequences that must be addressed.

    I do not mean to insult or offend you. I acknowledge that you are smarter, better educated and better read than I am. But what can I say? From where I stand you seem confused on some issues.

  29. ppnl,

    no offense taken, and perhaps I was responding not just to your comments but to others' too, which may have heightened the perception of confusion.

Let's start from scratch. What exactly are computers and/or Turing tests telling us about consciousness?

    If the point is that *in principle* a brain's functionality (including consciousness) can be simulated by a computer, perhaps that's true - though it seems to me to beg the question. But so what? What we are after is not a simulation, it is a theory of consciousness. And no, computer programs aren't theories, they are sets of instructions.

For instance, "smart" computer programs can sustain the Turing test for quite a bit, fooling humans into thinking that they are talking with a real therapist. Maybe that simply shows that therapy is empty, but I still don't see how you get from general considerations on Turing, his theory, etc. to a theory of consciousness.

Incidentally, there are other arguments against functionalism. When it comes to biological organisms it is simply not true that "we don't care what they are made of," because what makes an organism alive depends heavily on the physical properties of the materials involved. Why would that be any different when it comes to consciousness? This is what I find to be the worst mistake of AI: it treats consciousness as simply a logical puzzle, while it is a bio-logical one.

I am confused as to why Searle's arguments defending his thought experiment are not dualist in nature. I guess both Massimo's and ppnl's posts point to him being more likely a rejectionist.

Maybe it was just the new year, but I was so psyched by the thought of Chalmers in Hofstadter's breakaway research lab that I bought his 'Conscious Mind' book today, my first 'intellectual' book purchase in about 20 years.

He and I do not have a lot of common ground, because I had always thought that questions of consciousness, both individual and cosmic, were better answered by Easterners, and I disagreed with their notions of monistic consciousness anyhow.

I prefer to talk about theories of information, but suspect it's all heading in the same direction. So I am very happy to see a whole 33-page chapter in this book titled "Consciousness and Information: Some Speculation", preceded by about 30 pages of "Absent Qualia, Fading Qualia, and Dancing Qualia", and followed by the smaller "Strong AI" and "The Interpretation of Quantum Mechanics" chapters that close the 358-page book, which includes a recipe for curried black-eyed pea salad in the notes. I hope he doesn't waste too much time on information entropy.

  31. Massimo:

    I wanted to let this rest, but just for clarifying my position a bit, and to hopefully show that I may have understood you a bit better by now:

I may well have misunderstood Searle's intentions or what his thought experiment is supposed to show. At first sight it seems as if he is trying to reserve something ill-defined, special, and thus magical-seeming for human brains, and that would be dualism, if it were correct to understand him this way, which it may not be. His thought experiment also seems to want to impress people with the fact that the person in the room does not understand Chinese himself, which is, of course, irrelevant to the question whether the room understands Chinese, just as the aerodynamic properties of a passenger seat are irrelevant to the question whether an airliner can fly. In that way, the thought experiment may at least invite misunderstanding, or more likely I have not informed myself well enough about it (I did not read the original paper, for example).

What it all boils down to are definitions and the question of what we want to prove or disprove. As far as I see, there are three possible levels. If the issue is whether a computer can have a human mind or human consciousness, I will agree with Searle's rejection immediately, as being a human bodily is an indispensable part of having a human mind. However, if the issue is whether a computer can have a mind or a consciousness, then I would still argue that only from a dualist position could this coherently be rejected. Lastly, if the issue is only whether a computer can understand Chinese, then I suppose that the systems reply is perfectly convincing, at least to me.

  32. Massimo,

    A Turing test gives us all the evidence we can hope to have that something is conscious. I do not know that you are conscious. The only evidence I have or ever can have is that you act like you are conscious.

The computer programs that fooled people only worked because of the limited nature of the interaction. And even so, the schizophrenic version was more effective. It was not really even an honest attempt at AI. Mostly it was just an algorithm for constructing grammatically correct sentences. The same general effect is achieved by the postmodernist generators that you see around.

I really don't see how asserting that a computer can simulate a brain is begging the question. It's a clear implication of Church/Turing. We have been over this. Either a computer can simulate a brain or there is something noncomputable about the brain. That is an extraordinary claim with massive consequences.

You seem to be fairly wishy-washy on this. You neither accept nor refute what I think is a strong argument that computers can simulate brains. And you don't address the consequences of either position.

    The problem is that a simulation of a brain can in principle function as a brain. On what objective grounds then can you deny that it is conscious?

And yes, computer programs can be expressions of a theory, just like a math equation. A computer model of a tornado must contain an understanding of fluid dynamics, for example. A program can contain all we think we know about what is being modeled. It is an extremely detailed statement of the theory.

You said:

"When it comes to biological organisms it is simply not true that "we don't care what they are made of," because what makes an organism alive depends heavily on the physical properties of the materials involved. Why would that be any different when it comes to consciousness? This is what I find to be the worst mistake of AI: it treats consciousness as simply a logical puzzle, while it is a bio-logical one."

    You are treating consciousness as an ontological thing here. This is what causes charges of substance dualism. I see no evidence of some kind of Searlian ectoplasm that somehow causes consciousness. I cannot even make sense of the claim. I don't know how I would recognize it if I saw it.


Searle calls consciousness a process and compares it to photosynthesis. But photosynthesis takes in ontological substances (light, water, CO2) and outputs other substances. The brain is different. It takes in information, transforms it, and outputs information. That's the domain of the computer. That it is made of protein is no more relevant than the fact that modern CPUs are made of silicon.

  33. ppnl,

> Searle calls consciousness a process and compares it to photosynthesis. But photosynthesis takes in ontological substances (light, water, CO2) and outputs other substances. The brain is different. It takes in information, transforms it, and outputs information. That's the domain of the computer. <

    I'm with Searle on this one too. I think of consciousness as a process that results from brain activity. The fact that it processes information rather than matter or energy doesn't bother me. Why do you think it is something entirely different?

  34. Why brain activity? Why not: consciousness is a process that results from information processing (that has become so sophisticated as to be able to process information about itself and about the processing itself)?

Why, exactly? You could argue that the burden of proof is on the AI camp to show that this is possible, but:

    (1) The whole idea of the Turing test is that this could never be proven beyond doubt,

(2) If our brain is an information processor, and computers are information processors, how could computers not in principle become conscious, unless you postulate a magic ingredient in the brain? (Whether we are actually ever able to produce conscious AI in practice, and what it would be good for, is another matter.) It seems to me that the burden of proof would here be on those postulating the specialness of the brain in comparison to other data-processing devices.

  35. Massimo,

    I really don't know what to say. You said:

    ---"I think of consciousness as a process that results from brain activity. The fact that it processes information rather than matter or energy doesn't bother me. Why do you think it is something entirely different?"---

Well, first of all, this is the position that Searle explicitly rejects.

Second, as I said, if it is information processing then this is the domain of the computer. A universal computer is defined as a device that can perform any data processing that any other device can. Like Searle, you seem to refuse to follow your own position to its inevitable conclusion.

    Sugar is an ontological substance that chlorophyll produces in a process called photosynthesis. It is logically possible then to connect sugar to chlorophyll and the photosynthesis process.

    Motion is not an ontological substance. Any object can move. You cannot connect motion to any ontological substance.

If you attempt to tie intelligence to biological processes you are treating it as an ontological substance like sugar. This has a long history, from Descartes' "I think therefore I am" to the religious concept of the soul. We experience our mind as a thing.

Imagine you have a chess-playing computer and you wish to understand how it works. Well, there are two things you may be asking here.

    1) You could study the solid state physics of semiconductors and understand the physics of its operation. But this would tell you nothing about the algorithm used to play chess.

2) You could study the algorithm as substance-agnostic data processing. You would know nothing about the physics of the computer, but that's OK. You could implement the algorithm on a computer composed of paper slips and trained monkeys. Data processing does not depend on any particular substance. It is not like sugar.

What you cannot do is call it data processing and then say it depends on some special Searlian ectoplasmic substance.

Well, actually, technically you can. But there are consequences. You will have broken Church/Turing and probably have developed fundamentally new physics that changes the very definition of information and computation. Hey, I'm all for that. But it is you who must supply the evidence.

  36. I am glad Mintman, ppnl, and Massimo seem to be OK with what Chalmers is scratching at. A unit of 'life' more basic than matter exists, and it is called information.

When any set of information out there is 'believed' by people, it is said to be true. Philosophers used to look for truth, but it's pretty obvious these days that the thing does not exist. The believer adds his or her sprinkles of experience to the common 'truth' or conventional wisdom of others, and so forth.

The rocks question is interesting. People, including Chalmers, say rocks are unconscious, yet they fail to see that the rock may simply be responding to life much more slowly than people, at least from what we can see! For all we know, the rock could be processing some types of information much faster than people.

    And yes folks, once you accept information, you do need to start looking at stuff like mysticism, religion, alien abduction, zombies, vampires, and anything else that makes skeptics cringe even as it influences.

  37. Dave,

don't know where you got the idea that I am "ok" with information. I don't think information is a straightforward concept, and I certainly don't think it is "more basic than matter." And even more, I surely don't think that the notion inevitably leads to mysticism. Were you sampling hallucinogenic mushrooms when you wrote that comment? ;-)

  38. Because of the 2nd sentence in.....

"I'm with Searle on this one too. I think of consciousness as a process that results from brain activity. The fact that it processes information rather than matter or energy doesn't bother me. Why do you think it is something entirely different?"

It's possible that mushroom residue from 30 years ago has taken control of some brain functions or processes, I don't really know.

    Why do you not think it is straightforward?

  39. Dave,

because I'm not convinced at all that "information" is anything other than a form of matter/energy or an epiphenomenon of matter/energy. Hence my not being bothered by that distinction.

There is nothing mysterious about information. It's a simple engineering term that Shannon nailed down back in 1948. The information storage capacity of something is simply the logarithm of the number of states that it can be put in. We usually use base-two logarithms because the simple switch is the core of much information-processing technology. This is an engineering choice rather than a necessity; in fact, many modern methods of transmitting digital information are not binary at all.
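[A minimal concrete sketch of that definition, in Python; the function name and the example state counts are mine, purely for illustration. Storage capacity in bits is just the base-2 logarithm of the number of distinguishable states:]

```python
import math

def capacity_bits(num_states: int) -> float:
    # Shannon (1948): the storage capacity of a device that can be put into
    # num_states distinguishable states is log2(num_states) bits.
    return math.log2(num_states)

print(capacity_bits(2))    # a simple switch: 1.0 bit
print(capacity_bits(256))  # 256 states (one byte's worth): 8.0 bits
print(capacity_bits(10))   # a decimal digit: ~3.32 bits (non-binary, as noted)
```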

It's interesting to see how information is transformed in a system. My mouse takes in information from the photodetector and mouse buttons. This information is converted into electrical signals and sent to the computer. Here the information can be recorded as a pattern of charged capacitors, or on the hard disk as the state of magnetic domains. If it is put on an optical drive, it is stored as a pattern of pits on a disc. Some memories use the phase states of some material, or stressed nanotubes, or even the state of a single molecule. The information can be sent onto the internet by modem, where it is encoded as a phase relation between two or more sine waves. On its way, the information may take the form of optical pulses in an optical fiber. When it gets to where it is going, it may become a pattern of polarized domains in a liquid crystal display.

    What you encode the information in is purely an engineering choice that makes no difference to the information. What you use to process the information is less important than the algorithm you use.

    Once we accept that the brain is an information processor there is a sense in which we no longer care what the brain is made of. The same information processing can be done on a different substance.

  41. Let's go with "epiphenomenon of matter/energy". Is yesterday's weather report for Verona, Italy matter or energy or EP(matter/energy)?

