About Rationally Speaking


Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please note that the contents of this blog can be reprinted under the standard Creative Commons license.

Friday, September 06, 2013

Three and a half thought experiments in philosophy of mind


by Massimo Pigliucci

You can tell I've had philosophy of mind on my mind lately. I've written about the Computational Theory of Mind (albeit within the broader context of a post on the difference between scientific theories and philosophical accounts), about computation and the Church-Turing thesis, and of course about why David Chalmers is wrong about the Singularity and mind uploading (in press in a new volume edited by Russell Blackford and Damien Broderick). Moreover, and without my prompting, my friend Steve Neumann has just written an essay for RS about what it is like to be a Nagel. Oh, and I recently reviewed, for Amazon, John Searle's Mind: A Brief Introduction.

But what prompted this new post is a conversation Julia Galef and I have recently had for a forthcoming episode of the Rationally Speaking podcast, a chat with guest Gerard O'Brien, a philosopher (of mind) at the University of Adelaide in Australia. It turns out that Gerard and I agree on more than I thought (even though he is sympathetic to the Computational Theory of Mind, but in such a partial and specific way that I can live with it; moreover, he really disappointed Julia when he said that mind uploading ain't gonna happen, at least not in the way crazy genius Ray Kurzweil and co. think it will). During our exchange, I was able to crystallize in my mind something that had bothered me for a while: why is it, exactly, that so many people just don't seem to get the point of John Searle's famous Chinese Room thought experiment? Gerard agreed both with my observation (i.e., a lot of the criticism seems to be directed at something else, rather than at what Searle is actually saying), and with my diagnosis (more on this in a moment). That in turn made me think about several other famous thought experiments in philosophy of mind, and what exactly they do or don't tell us - sometimes even regardless of what the authors of those experiments actually meant! So, below is a brief treatment of Searle's Chinese Room, Thomas Nagel's what is it like to be a bat?, David Chalmers' philosophical zombies, and Frank Jackson's Mary's Room. I realize these are likely all well known to my readers, but bear with me, I may have a thing or two of interest to say about the whole ensemble. (The reason I refer to 3.5, rather than 4, thought experiments in the title of the post is that I think Nagel's and Jackson's make precisely the same point, and are thus a bit redundant.) In each case I'll provide a brief summary of the argument, what the experiment shows, and what it doesn't show (often, contra popular opinion), with a brief comment about the difference between the latter two.

1. The Chinese Room

Synopsis: Imagine a room with you in the middle of it, and two slots on opposite sides. Through one slot someone from the outside slips in a piece of paper with a phrase in Chinese. You have no understanding of Chinese, but - helpfully - you do have a rule book at your disposal, which you can use to look up the symbols you have just received, and which tells you which symbols to write out in response. You dutifully oblige, sending the output slip through the second slot in the room.
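To see how little machinery is involved, here is a minimal sketch of the room's procedure (in Python; the rule book entries are a toy sample invented for illustration - a real rule book would be unimaginably larger):

```python
# A toy Chinese Room: the "rule book" is a plain lookup table from
# input symbol strings to output symbol strings. Nothing below
# represents what any of the symbols mean.

RULE_BOOK = {
    "你好吗": "我很好，谢谢",    # "How are you?" -> "I am fine, thanks"
    "你会说中文吗": "会一点",    # "Do you speak Chinese?" -> "A little"
}

def chinese_room(input_slip: str) -> str:
    """Match the incoming symbols and return the prescribed reply.

    The apparent understanding lives entirely with whoever compiled
    RULE_BOOK; this function only shuffles uninterpreted symbols.
    """
    return RULE_BOOK.get(input_slip, "请再说一遍")  # fallback: "please say that again"

print(chinese_room("你好吗"))  # prints: 我很好，谢谢
```

However large the table grows, nothing in this procedure interprets anything - which is the intuition the experiment trades on.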

What it does mean: Searle's point is that all that is going on in the room is (syntactic) symbol manipulation, but no understanding (semantics). From the outside it looks like the room (or something inside it) actually understands Chinese (i.e., the Room would pass Turing's test), but the correct correspondence between inputs and outputs has been imported by way of the rule book, which was clearly written by someone who does understand Chinese. The idea, of course, is that the room works analogously to a digital computer, whose behavior appears to be intelligent (when seen from the outside), with that intelligence not being the result of the computer understanding anything, but rather of its ability to speedily execute a number of operations that have been programmed by someone else. Even if the computer, say, passes Turing's test, we still need to thank the programmer, not the computer itself.

What it does not mean: The Chinese Room is not meant as a demonstration that thinking has nothing to do with computing, as Searle himself has clearly explained several times. It is, rather, meant to suggest that something is missing in the raw analogy between human minds and computers. It also doesn't mean that computers cannot behave intelligently. They clearly can and do (think of IBM's Deep Blue and Watson). Searle was concerned with consciousness, not intelligence, and the two are not at all the same thing: one can display intelligent behavior (as, say, plants do when they keep track of the sun's position with their leaves) and yet have no understanding of what's going on. However - obviously, I hope - understanding is not possible without intelligence.

Further comments: I think the confusion here concerns the use of a number of terms which are not at all interchangeable. In particular, people shift among computing speed, intelligence, understanding, and consciousness while discussing the Chinese Room. Intelligence very likely does have to do (in part) with computing speed, which is why animals' behavior is so much more sophisticated than most plants', and why predators are usually in turn more sophisticated than herbivores (it takes more cunning to catch moving prey than to chew on stationary plants). But consciousness, in the sense used here, is an awareness of what is going on, and not just a phenomenological awareness (as, say, in the case of an animal feeling pain), but an awareness based on understanding. The difference is perhaps more obvious when we think of the difference between, say, calculating the square root of a number (which any pocket calculator can do) and understanding what a square root is and how it functions in mathematical theory (which no computer existing today, regardless of how sophisticated it is, actually possesses).
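To make that contrast concrete, here is the calculator side of it: a small sketch (mine, purely illustrative) that computes square roots by Newton's method, a standard algorithm, while containing nothing that could count as knowing what a square root is:

```python
def square_root(x: float, tolerance: float = 1e-10) -> float:
    """Approximate the square root of x by Newton's method.

    The loop just repeats an arithmetic update until successive
    guesses agree; nothing in it represents what a square root is,
    or why the update converges.
    """
    if x < 0:
        raise ValueError("square root of a negative number is not real")
    guess = x if x > 1 else 1.0
    while True:
        better = 0.5 * (guess + x / guess)  # average the guess with x/guess
        if abs(better - guess) < tolerance:
            return better
        guess = better

print(square_root(2.0))  # ~1.4142135623730951
```

The routine gets the right answer every time, which is precisely the distinction at issue: flawless execution, zero understanding.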

2. Mary's Room

Synopsis: Consider a very intelligent scientist - Mary - who has been held (somehow...) in an environment without color since her birth (forget the ethics, it's a thought experiment!). That is, throughout her existence, Mary has experienced the world in black and white. Now Mary is told, and completely understands, everything there is to know about the physical basis of color perception. One bright day, she is allowed to leave her room, thus seeing color for the first time. Nothing, argues Frank Jackson, can possibly prepare Mary for the actual phenomenological experience of color, regardless of how much scientific knowledge she had of it beforehand.

What it does mean: According to Jackson, this is a so-called "knowledge argument" against physicalism. It seems that the scientific (i.e., physicalist) understanding of color perception is simply insufficient for Mary to really understand what experiencing color is like, until she steps outside of her room, thus augmenting her theoretical (third person) knowledge with (first person) experience of color. Hence, physicalism is false (or at the least incomplete).

What it does not mean: Contra Jackson, the experiment does not show that physicalism is wrong or incomplete. It simply shows that third person (scientific) descriptions and first person experiences are orthogonal to each other. To confuse them is to commit a category mistake, like asking the color of triangles and feeling smug for having said something really deep.

Further comments: I have always felt uncomfortable about these sorts of thought experiments because, quite frankly, I misunderstood them entirely the first few times I heard of them. It seemed obvious to me that what the authors meant to show (the orthogonality of third and first person "knowledge") was true, so I didn't see what all the fuss was about. Turns out, instead, that the authors themselves are confused about what their own thought experiments show.

3. What is it like to be a bat?

Synopsis: Thomas Nagel invited us to imagine what it is like (in the sense of having the first person experience) to be a bat. His point was that - again - we cannot answer this question simply on the basis of a scientific (third person) description of how bats' brains work, regardless of how sophisticated and complete this description may be. The only way to know what it is like to be a bat is to actually be a bat. Therefore, physicalism is false, yadda yadda.

What it does mean: Precisely the same thing that Jackson's Mary's Room does.

What it does not mean: Precisely the same thing that Jackson's Mary's Room doesn't.

Further comments: It's another category mistake. Actually, it's exactly the same category mistake.

4. Philosophical zombies

Synopsis: David Chalmers has asked us to consider the possibility of creatures ("zombies," known as p-zombies, or philosophical zombies, to distinguish them from the regular horror movie variety) that from the outside behave exactly like us (including talking, reacting, etc.) and yet have no consciousness at all, i.e. they don't have phenomenal experience of what they are doing. You poke a zombie and it responds as if it were in pain, but there ain't no actual experience of pain "inside" its mind, since there is, in fact, no mind. Chalmers argues that this sort of creature is at least conceivable, i.e., it is logically possible, if perhaps not physically so. Hence..., yeah, you got it, physicalism is false or incomplete.

What it does mean: Nothing. There is no positive point, in my opinion, that can be established by this thought experiment. Besides the fact that it is disputable whether p-zombies are indeed logically coherent (Dennett and others have argued in the negative), I maintain that it doesn't matter. Physicalism (the target of Chalmers' "attack") is not logically necessary; it is simply the best framework we have to explain the empirical evidence. And the empirical evidence (from neurobiology and developmental biology) tells us that p-zombies are physically impossible.

What it does not mean: It doesn't mean what Chalmers and others think it does, i.e. a refutation of physicalism, for the reason just explained above. It continues to astonish me how many people take this thing seriously. This attitude is based on the same misguided idea that underlies Chalmers' experiment, of course: that we can advance the study of consciousness by looking at logically coherent scenarios. We can't, because logic is far too loose a constraint on the world as it really is (and concerns us). If it weren't, the classic rationalistic program in philosophy - deriving knowledge of how things are by thinking really hard about them - would have succeeded. Instead, it went the way of the Dodo at least since Kant (with good help from Hume).

Further comments: Consciousness is a bio-physical phenomenon, and as Searle has repeatedly pointed out, the answer to the mystery will come (if it will come) from empirical science, not from thought experiments. (At the moment, however, even neuroscientists have close to no idea of how consciousness is made possible by the systemic activity of the brain. They only know that that's what's going on.)

So, what are we to make of all of the above? Well, what we don't want to make of it is either that thought experiments are useless or, more broadly, that philosophical analysis is useless. After all, what you just read is a philosophical analysis (did you notice? I didn't use any empirical data whatsoever!), and if it was helpful in clarifying your ideas, or even simply in providing you with further intellectual ammunition for continued debate, then it was useful. And thought experiments are, of course, not just the province of philosophy. They have a long and illustrious history in science (from Galileo to Newton) as well as in other branches of philosophy (e.g., in ethics, to challenge people's intuitions about runaway trolleys and the like), so we don't want to throw them out as a group too hastily.

What we are left with are three types of thought experiments in philosophy of mind: (i) Those that do establish what their authors think (Chinese Room), even though this is a more limited conclusion than what its detractors think (the room doesn't understand Chinese, in the sense of being conscious of what it is doing; but it does behave intelligently, in proportion to its computational speed and the programmer's ability). (ii) Those that do not establish what their authors think (Mary and the bats), but nonetheless are useful (they make clear that third person description and first person experience are different kinds of "knowledge," and that it makes no sense to somehow subsume one into the other). (iii) Those that are, in fact, useless, or worse, pernicious (p-zombies) because they distract us from the real problem (what are the physical bases of consciousness?) by moving the discussion into a realm that simply doesn't add anything to it (what is or is not logically conceivable about consciousness?).

That's it, folks! I hope it was worth your (conscious) attention, and that the above will lead to some better (third person) understanding of the issues at hand.

66 comments:

  1. I don't worry too much about these particular issues since I think we will some day be making conscious, free-willing robots. (Today is the last day of the First International Workshop on Artificial Consciousness, part of the 12th European Conference on Artificial Life. Papers from this workshop will be published in the International Journal of Machine Consciousness.)

    People may argue whether one of these future robots is telling the truth when it says "I am conscious." But I will probably take its word for it.

    The issues that are significant though are ethical, e.g. "How much freedom do we give them?"

    1. I agree this captures the essential issue. It isn't whether there is understanding in the Chinese room, but how we determine that there is understanding (consciousness) in humans. Put another way, if we assume for the sake of argument that people are conscious and the room isn't, then what is the property that people have that the room doesn't? We can try various properties, but I've never seen a proposal for a satisfactory one. (Although I'm not a professional philosopher, so maybe Massimo can provide one I've missed.) Here are some possibilities I've considered:

      1. It resides in "embodiment". That is, the brain is embodied in a body that interacts with the environment, but the room isn't. This implies that if the room has video cameras and robotic limbs it would exhibit consciousness? If this is Searle's view then there really isn't much point to the example.
      2. We know because we've looked inside the room and there isn't any understanding of Chinese there. But if we look inside the brain then all we'll see is neurons and synapses. Admittedly also a lot of other physical stuff, but we don't expect to see anything we can recognize as "conscious".
      3. The brain is organic, and (critical parts of) the room aren't. This is true, but if we say that understanding can only reside in something organic then we're just defining away "understanding".
      4. As Philip suggests, a person will answer "yes" if asked whether they are conscious (or understand Chinese). But so, presumably, will the room. We just don't believe the room while we believe the person. We believe the person because we think we understand language and the person "looks like us". But this results in a regress: In what way does the person resemble us? We're back to asking what property distinguishes people from the room.

  2. Hi Massimo,

    I'm delighted you've returned to the issue of philosophy of mind, as I think there are a few things which have not been sorted out from earlier discussions.

    The Chinese Room

    I share your frustration that people just don't get the point of this thought experiment. Unfortunately, I disagree about who these people are! Just as you describe Jackson as not seeing the point of Mary's Room, so do I think Searle doesn't really understand the Chinese Room.

    As I tried to explain before, the fact that the person in the room does not understand Chinese does not establish that there is no understanding. The virtual minds response to Searle (which you can find explained on SEP) is not vulnerable to Searle's argument that the person can simply leave the room and memorise the rules. In this view, Searle's brain supports two minds, his own identity which does not understand Chinese, and a separate Chinese identity which exists on top of this (in the same way as a virtual machine in computer science is supported by a physical machine). Of course, this is highly implausible in the real world, but that's just because Searle is positing a highly implausible scenario.
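    The virtual machine analogy can be shown in miniature (a toy sketch of my own, with all names invented): one program can serve as the substrate for a second, quite different program, and properties of the hosted level need not be properties of the host.

    ```python
    # A host "machine" that blindly executes opcodes, analogous to
    # Searle following the rule book. The hosted program, not the
    # host loop, is what adds numbers; the host has no notion of
    # "addition as a task", just as Searle has no grasp of Chinese.

    def run(program):
        """A minimal stack-machine interpreter (the substrate level)."""
        stack = []
        for op, arg in program:
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "PRINT":
                print(stack[-1])
        return stack

    # The "virtual" level: a program the host merely carries out.
    adder = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)]
    run(adder)  # prints 5
    ```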

    Also, I think your idea of what the guy in the room is doing is perhaps overly simplistic. The rules in the book are not of the form "When you see these symbols, return these symbols". Instead, they would need to be highly complex, dynamic and chaotic operations such that the process of deliberation is entirely isomorphic to what happens in a real Chinese speaker's brain.

    The system would need to be able to learn and grow in order to be convincing, and so it would have to gain new knowledge and expertise which is not available to either the original programmer or to Searle.

    In the extreme, it should even be possible for the system to learn an entirely new language by interacting with a teacher in the same way that a human would. In this scenario we don't get to explain away the origin of the knowledge by saying it was simply provided by the programmer.

    1. Disagreeable Me said: "In this view, Searle's brain supports two minds, his own identity which does not understand Chinese, and a separate Chinese identity which exists on top of this (in the same way as a virtual machine in computer science is supported by a physical machine)."

      Implausible? First you have to establish that it's even possible to have minds within minds. And you haven't established the CTM, either conceptually or empirically, which is implicit in the virtual machine analogy.

    2. If the CTM is true, it is certainly possible to have minds within minds, because minds can act as computers by following an algorithm.

      The virtual minds argument does in my view show that the Chinese Room is not a refutation of the CTM. It does not in itself establish that the CTM is true, nor is it intended to.

    3. I think Searle's analogy starts off wrong, by having a computer with exterior-controlled information rather than a robot that controls its own sensory organs. That gets back to your idea of the system "growing," DM.

      That said, Searle's idea as revised by me still doesn't allow for the idea of emergent properties, unless I add that. And, I think THAT is the flaw, even bigger than computer vs robot.

      Most anti-physicalists don't address the issue of emergence, among other things.

      That said, even in its basic form, it can be seen as an argument against greedy reductionism, and a reasonable one. (I love the idea that Dan Dennett, who regularly peddles that term, never looks in the mirror when he uses it.)

    4. Can I point out to DM and Gadfly that the organisation of data, however complex, does not constitute a systems construct. If it did, then ANY organisation of separate items of data, however simple, would also have to be described as constituting a system... and that makes for a valueless or vacuous definition. Computers and their software, or those instructions in the Chinese room, do not constitute a valid systems construct. When Dan Dennett says "the understanding is not in the CPU it is in the system" at 10 mins 09 secs in http://philosophybites.com/2013/06/daniel-dennett-on-the-chinese-room.html - he does not know what he means by "system" in so far as he does not qualify what constitutes a true systems construct above or beyond some kind of considered arrangement of bits. For more on this, cf. "Philosophy of Systems" at http://mind-phronesis.co.uk/book/philsys/

    5. @Mark Pharoah

      I think you have an idiosyncratic definition of system. When I say system (and I suspect when Dennett says system) I mean any group of objects working in concert. The respiratory system. The solar system. A brain (which consists of neurons). A computer program (which consists of software modules).

      If you have a more specific definition then that's great, and it may even be a good and useful idea, but I don't think it justifies you to say other people have no idea what a system is just because they don't use your definition.

    6. Ok... but if one is to say that any group of objects working in concert is a system, then one is forced to conclude that, at some level, anything and everything qualifies as a system, which is to say nothing when the term is used.

      Whilst one may understand what a scientist, physicist, biologist, engineer, etc. means by 'system', a philosopher needs to be more careful: if they are implying that a special construction of some sort is responsible for some special property, then the nature of that construct must be exacting, for otherwise they are saying nothing.

      This is my definition, for which I welcome appraisal and alteration: A coherent system is a non-aggregate construct comprising distinct, separate elements which, whilst interacting with its environment, can maintain integrity of form by virtue of its informed and therefore coherent and functional behaviours.

  3. On Nagel and Mary's Room

    I'm happy to say I agree entirely with your analysis!

    On Philosophical Zombies

    I'm not sure Chalmers is a non-physicalist. I see the p-zombie as more of a thought experiment to explore what it would mean for the idea of consciousness to be distinct from the empirically observable aspects of intelligent behaviour. I think it's probably useful in this context but I agree that the conceivability argument is weak.

    Ian Pollock and I have both tried to explain to you before that we regard your intuitions regarding the Chinese Room as equivalent to a belief in philosophical zombies. I think this issue would be worth exploring.

    I was initially surprised that you didn't see this, but I think I understand now. It seems to me that you view a philosophical zombie as being physically indistinguishable from a human being. Indeed, that is often how the argument is phrased.

    However, there is a weaker formulation of the concept, where a zombie need not be physically indistinguishable but is instead behaviourally indistinguishable. It may even look outwardly like a human, but its internal organs are not necessarily the same.

    So if we imagine that its brain is an electronic computer, you would view this as a philosophical zombie in this sense. Indeed, this arguably matches even your own definition of p-zombies in this post as things "that from the outside behave exactly like us (including talking, reacting, etc.) and yet have no consciousness at all, i.e. they don't have phenomenal experience of what they are doing."

    I expect your response to be that by "outside" you mean "objectively" as opposed to "subjectively", which is fair enough. Would you agree then that you do accept the validity of a weaker concept of p-zombie if not the stronger idea you reject?

  4. I know we have gone over this before, but regardless of what the Chinese room is supposed to show, its entire point appears to be to nudge the reader into committing the fallacies of composition, begging the question, and argument from incredulity.

    It does not follow from the lack of understanding of one part of the room that the room does not understand; and the argument assumes that what is true of the parts must be true of the whole.

    In my eyes, it fails spectacularly to demonstrate anything.

  5. I'm with Alex SL on the Chinese Room; I've read a lot of Searle's writings about it, and sat through several lectures of his on the subject. Massimo is right, the semantics/syntax distinction is Searle's primary point, but he tries to use that to say something else. And I disagree with him (and Massimo, apparently): I think semantics can easily be derived in the interplay between syntax, sensory data, and mental state, which can then easily be extended to other sufficiently complex computational frameworks.

    Massimo, have you run into this essay from several years ago? I don't like their argument at all, and I think I've identified a couple of points of failure in it:

    Selmer Bringsjord and Michael Zenzen (1997), "Cognition Is Not Computation: The Argument from Irreversibility," Synthese 113, pp. 285-320.

  6. Just for reference, Jackson no longer believes that the Mary's Room thought experiment disproves physicalism. See his paper Mind And Illusion (http://www.nyu.edu/gsas/dept/philo/courses/consciousness/papers/jackson.pdf).

    "Intensionalism means that no amount of tub-thumping assertion by dualists (including by me in the past) that the redness of seeing red cannot be accommodated in the austere physicalist picture carries any weight. That striking feature is a feature of how things are being represented to be, and if, as claimed by the tub thumpers, it is transparently a feature that has no place in the physicalist picture, what follows is that physicalists should deny that anything has that striking feature. And this is no argument against physicalism. Physicalists can allow that people are sometimes in states that represent that things have a non-physical property. Examples are people who believe that there are fairies. What physicalists must deny is that such properties are instantiated."

  7. Thanks Massimo for another helpful post,

    I am wondering about the use of the term 'orthogonal' in describing the relationship between 3rd person and 1st person perspectives.

    In math terms, orthogonal means independent, non-overlapping & non-correlational, like lines at right angles. Do you not think that 3rd person and 1st person perspectives can feed back & forth to influence each other?

    Thanks :)

  8. @ Massimo

    Several points:

    - All the Turing test proves is that some people can be fooled into believing that they are interacting with a conscious machine.

    - We will never program a computer so that it has the capacity for "understanding" because that would involve an infinite regress.

    - IMHO, you're conflating the metaphorical with the literal. Computers do not literally have the capacity for intelligent behavior for the same reason that thermostats do not literally have the capacity to sense the room temperature.

    - Physicalism holds that only the physical is real. And since we define the physical exclusively in objective terms, we can safely conclude that physicalism is false. Why? Because our first-person experience of our subjectivity clearly furnishes us with compelling evidence that there is more than just the objective.

    - Consciousness is actually beyond the purview of science because consciousness is subjective and science is objective.
    (You cannot give a third-person account of a first-person experience...not even in principle.)

    - I prefer the term "organic robot without consciousness" over "philosophical zombie." (My next point will demonstrate why.)

    - The physicalist cannot explain why "organic robots WITH consciousness" were naturally selected over "organic robots WITHOUT consciousness" because consciousness (awareness), on the materialist view, is an epiphenomenal by-product and cannot confer any selective advantage.

    1. Point 2. We cannot "program" a computer, true. But if we understand the construct of the mind from which consciousness characteristics emerge and then duplicate the dynamics of that construct, artificial consciousness will happen.

      Point 5.
      i) Chemistry can explain everything about the processes involved in a ripening fruit.
      ii) We have no idea how we can explain what it is for a fruit to experience ripening (should there be such a thing - we ASSUME no).
      Consciousness is a property of third persons - others have it apart from me. Consequently, a scientific explanation of third-person properties will succeed in explaining why this also creates the 'first-person perspective'... and yet will fail to reveal any understanding as to what it is to have the particular personal identity belonging to any given first person.

  9. Disagreeable, Alex,

    we obviously continue to disagree about the Chinese Room. No problem. But let me ask you this: we actually have a Chinese Room now, it’s called Google Translate. My daughter (who doesn’t speak Italian, unfortunately) uses it regularly to communicate on Facebook with my Italian family. Do you believe Google Translate “understands” Italian? Why? Why not?

    Disagreeable,

    > I expect your response to be that by "outside" you mean "objectively" as opposed to "subjectively", which is fair enough. Would you agree then that you do accept the validity of a weaker concept of p-zombie if not the stronger idea you reject? <

    Of course not. But you are close to the reason why I reject your and Ian’s contention that the CR is analogous to a p-zombie. From the outside, both of their behaviors are indistinguishable from that of a human, even though there is no understanding “inside.” The difference is that Chalmers thinks p-zombies show the conceivability of human-type behavior without consciousness, while Searle and I maintain that that is impossible, as shown by the CR.

    Bubba,

    > I think semantics can easily be derived in the interplay between syntax, sensory data, and mental state, which can then easily be extended to other sufficiently complex computational frameworks <

    Easily?? Hmm...

    I haven’t read Bringsjord and Zenzen, should I?

    Gadfly,

    > it can be seen as an argument against greedy reductionism, and a reasonable one. (I love the idea that Dan Dennett, who regularly peddles that term, never looks in the mirror when he uses it.) <

    Yeah, I’m beginning to appreciate that irony myself.

    Seth,

    > I am wondering about the use of the term 'orthogonal' in describing the relationship between 3rd person and 1st person perspectives. <

    Logically independent.

    Alastair,

    > All the Turing test proves is that some people can be fooled into believing that they are interacting with a conscious machine. <

    Agreed, did I ever say the opposite?

    > We will never program a computer so that it has the capacity for "understanding" because that would involve an infinite regress. <

    Not sure what that means. But we ourselves are beings “programmed” by natural selection and capable of understanding. Did that involve an infinite regress?

    > you're conflating the metaphorical with the literal <

    I am doing no such thing. I think it is you who is confusing intelligence with understanding. A lion (or a plant, for that matter) can act very intelligently and yet have no understanding of what it is doing.

    > our first-person experience of our subjectivity clearly furnishes us with compelling evidence that there is more than just the objective. <

    But my argument is that confusing physicalism with third-person perspective is a mistake. My subjective experiences are first-person, but they are the result of a physical brain interacting with a physical universe. Nothing else.

    > Consciousness is actually beyond the purview of science because consciousness is subjective and science is objective. <

    Again, confusing the experience of consciousness (which is certainly subjective) with the biological processes that generate consciousness (which can certainly be studied objectively).

    > I prefer the term "organic robot without consciousness" over "philosophical zombie." <

    Suit yourself. I call both of them (if they are equivalent, of which I’m not sure) nonsense on stilts.

    > consciousness (awareness), on the materialist view, is an epiphenomenal by-product and cannot confer any selective advantage <

    On *some* physicalist view. Not mine.

    1. I don't know if you should bother with the Bringsjord and Zenzen paper; the main point so far has been all in the title: cognition can't be computation because computation is by definition reversible (because all computation can be done by Turing machines), and cognition is not reversible.

      They do a little bit of handwaving for the last part, and they do a LOT of handwaving for the Turing Machines==computation part. I've been researching this question for something else the last couple of weeks, and I don't think Turing said anything like this, and Church may have said it but not defended it effectively. I'm researching things that aren't computable by Turing machines, but still computable. The whole question is still kind of up in the air, especially about whether the human brain (and whole human organism) could be calculating non-Turing-realizable computations, but there are very good arguments that such things may exist.

      I need to go back and read your argument against computationalism, to see if I can come up with a good argument against you, or if I agree with you. :)

    2. Thanks again Massimo,

      Yet I am still wondering...

      It seems clear to me that an individual's private 1st person experience has some dependency on the manner in which they engage with available inter-subjective perspectives (& vice versa). I think they are inter-dependent in non-logical space (if that makes any sense).

      Might this entail that classical logic is not sufficient to describe the ontology of the 3rd & 1st person perspective dynamics? If this is the case, does it have any implications for the positions you take in the original post?

      Not trying to be critical, just trying to sort out my own position. Thanks again.

    3. Hi Massimo,

      >we obviously continue to disagree about the Chinese Room. No problem.<

      Of course! No problem! But when discussing this with you I'm trying to understand other viewpoints. Agreeing to disagree doesn't really achieve this.

      I've outlined why I don't think the Chinese Room works as a refutation in light of the virtual minds response. To my knowledge, you still haven't given an account of why you think this response does not work. I'd like to know if you have an answer or whether there is any chance that you might be a little more open to the possibility that the CTM is correct?

      >we actually have a Chinese Room now, it’s called Google Translate... Do you believe Google Translate “understands” Italian? Why? Why not?<

      I see this as almost (but not quite) a mirror image of the Chinese Room. In the Chinese Room, we have a human being with no understanding of what is said who enacts simplistic rules in order to facilitate a conscious and understanding (in my view) artificial intelligence.

      In Google Translate, we have a human being with understanding which is facilitated by a program which understands nothing of what is said but enacts simplistic rules.

      The difference between Google Translate and the Chinese Room is that Google Translate is a relatively simplistic lookup. If it sees the English word "one", it spits out the Italian word "uno". It's a little bit more complicated than that, but not very much so.
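      As a deliberately crude illustration of that lookup picture (a toy of my own; real machine translation is far more elaborate, as I said):

      ```python
      # A word-for-word lookup "translator": a tiny invented vocabulary,
      # with no grammar, context, or meaning anywhere in sight.
      EN_TO_IT = {"one": "uno", "two": "due", "red": "rosso", "house": "casa"}

      def toy_translate(sentence: str) -> str:
          """Swap each English word for its Italian entry, if any.

          Unknown words pass through unchanged, and word order is never
          adjusted - which is why the output below is bad Italian.
          """
          return " ".join(EN_TO_IT.get(w, w) for w in sentence.lower().split())

      print(toy_translate("one red house"))  # uno rosso casa (should be "una casa rossa")
      ```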

      It may be considered to have some very limited, impoverished understanding of Italian grammar, which words are verbs and which are nouns, etc. In my view, the difference between this and true understanding is quantitative rather than qualitative.

      In any case, it cannot be compared to the Chinese Room, which does not simply perform lookups but which emulates the operation of a human mind. The Chinese Room has opinions, understands concepts, has a history and can develop personal relationships.

      >The difference is that Chalmers thinks p-zombies show the conceivability of human-type behavior without consciousness, while Searle and I maintain that that is impossible, as shown by the CR.<

      Honestly, I'm inclined to say WTF?

      If you think Searle's interpretation of the CR shows anything, it is that it is entirely conceivable to have human-type behaviour without consciousness. That's the whole premise! The Chinese Room behaves like a human, and yet according to Searle and you it is unconscious!

    4. @ Massimo

      > Suit yourself. I call both of them (if they are equivalent, of which I’m not sure) nonsense on stilts <

      Okay. Let's call it a "stimulus-response system." This is the behaviorist's term. It's the same thing as an organic information-processing system. It takes environmental stimuli as input data, processes them, and generates some kind of response as output data.

      > On *some* physicalist view. Not mine. <

      You have to explain why a sentient stimulus-response system was naturally selected over an insentient stimulus-response system. Why? Because if consciousness (i.e. awareness) is not required to respond to environmental stimuli, then it logically follows that consciousness has no functional role to play. And if consciousness has no functional role to play, then it cannot possibly confer any selective advantage.

    5. @ Massimo

      > I am doing no such thing. I think it is you who is confusing intelligence with understanding. A lion (or a plant, for that matter) can act very intelligently and yet have no understanding of what it is doing. <

      Okay. What exactly can an "understanding" living organism do that a mere "intelligent" living organism cannot do? (Or, to use your example: Why does a computer have to understand what a square root is in order to calculate the square root of a number? Clearly, your "intelligence" does not require any "understanding" to function.)

    6. @BubbaRich

      "I'm researching things that aren't computable by Turing machines, but still computable."

      I very much doubt there are any such things. In fact, there certainly aren't, unless you adopt a very non-standard understanding of computation, one which would require exotic hypothetical physics in order to be physically realised.

    7. @Disagreeable Me:

      That was my opinion going into it, and it's still my first instinct when I see reference to the idea in papers. But I haven't yet been able to refute the possibility. Although I think I can refute several individual suggestions about non-Turing machine computation.

  10. Massimo,

    I think there are two possible meanings of understanding here. If we only ask about a Google Translate-style translation, then we can also easily imagine a single human who can (without the help of anything outside their brain) mindlessly "translate" Chinese without understanding the meaning of what they translate.

    Perhaps I am wrong but I operated under the assumption that this extremely trivial insight was not the point of the Chinese Room because then it could be conveyed with the second sentence of my previous paragraph, and that is it. The complexity of the thought experiment only makes sense under the assumption that a meaningful response, not a mindless translation, is required of the room. In other words, I interpreted the Room to be a rebuttal of the Turing Test and not a criticism of rote learning.

    As an aside, Google Translate is still horrible compared to any half-competent human, although perhaps some language pairs are easier to do than others.

  11. "I hope it was worth your (conscious) attention, and that the above will lead to some better (third person) understanding of the issues at hand."

    Perhaps the most promising computationalist approach today is "Attention Schema" theory [ princeton.edu/~graziano/Consciousness_Research.html ]. I think there is an interesting overlap of this with reflective/introspective code in programming language theory.

  12. Seth,

    > Might this entail that classical logical is not sufficient to describe the ontology of the 3rd & 1st person perspective dynamics. If this is the case does it have any implications for the positions you take in the original post? <

    I don’t think so. When I said “logically independent” I didn’t mean to bring in formal logic, I meant it as in “they have nothing to do with each other.” As I wrote in the post, applying third-person criteria to first-person questions is akin to a category mistake, like asking what is the color of triangles.

    Alex,

    > I interpreted the Room to be a rebuttal of the Turing Test and not a criticism of rote learning <

    That’s right, I think it is. But you pretty much dodged the question: do you think that even a much, much improved Google Translate would actually understand Italian? And I beg to differ; there is only one applicable meaning of the word understanding here: having the subjective human-type experience that you know what you are doing (as in you “understanding” the phrase you just read).

    Jerry,

    > Put another way, if we assume for the sake of argument that people are conscious and the room isn't then what is the property that people have that the room doesn't <

    That’s right. The experiment is about highlighting that there seems to be something missing from the CR, or from Google Translate. People may bite the bullet and respond that GT is in fact conscious, but I find that hard to believe.

    Alastair,

    > Let's call it a "stimulus-response system." This is the behaviorist's term. It's the same thing as an organic information-processing system. <

    I see no equivalence among these terms that you keep throwing together. What’s your point?

    > You have to explain why a sentient stimulus-response system was naturally selected over an insentient stimulus-response system. Why? <

    We can only speculate about that. But please consider that creatures capable of sentient s-r are by far the dominant species on planet earth, possibly poised to colonize places beyond it. Seems to me that in terms of fitness we’re doing pretty darn well, no?

  13. DM,

    > I've outlined why I don't think the Chinese Room works as a refutation in light of the virtual minds response. To my knowledge, you still haven't given an account of why you think this response does not work. <

    From SEP: “The claim at issue should be “the computer creates a mind that understands Chinese”. A familiar model is characters in computer or video games. These characters have various abilities and personalities, and the characters are not identical with the hardware or program that creates them.”

    I really don’t see how this gets around Searle’s biological naturalism objection. To begin with, of course, simulated characters in video games don’t have “abilities and personalities”; it’s the computer *simulating* their abilities and personalities. And at any rate, remember that Searle doesn’t object to intelligence, but to phenomenal consciousness (including “understanding”) arising from strong AI approaches. So, no, I’m not moved.

    > It may be considered to have some very limited, impoverished understanding of Italian grammar, which words are verbs and which are nouns, etc. In my view, the difference between this and true understanding is quantitative rather than qualitative. <

    Thanks for answering the question directly (unlike Alex ;-), and I guess this really is the difference: I don’t think that GT has any understanding whatsoever of Italian or any other language. No matter how sophisticated it gets, it’s just a very sophisticated and very fast mechanical look up system with no phenomenal consciousness (and hence no understanding).

    > Honestly, I'm inclined to say WTF? <

    Well, I’m glad you refrained, as that surely wouldn’t advance our dialogue... ;-)

    > That's the whole premise! The Chinese Room behaves like a human, and yet according to Searle and you it is unconscious! <

    Correct, which simply shows that the Turing test is not decisive. (This is rather uncontroversial: for a long time we’ve had AI systems like Eliza that can fool people at least for a while into thinking that they are talking to human beings, while they are not. But Eliza doesn’t have phenomenal consciousness.)

    If you are taking Chalmers’ p-zombies to make the same point, then I might have to agree, but I think Chalmers meant much more than that. He used the p-zombies as an argument in favor of dualism, which Searle certainly has never endorsed, right?

    1. Hi Massimo,

      >I really don’t see how this gets around Searle’s biological naturalism objection.<

      Whatever about the biological naturalism objection (whatever that might be), it does get around the argument he makes in the Chinese Room.

      As the SEP says, "The claim at issue should be 'The computer creates a mind that understands Chinese'". Searle has not shown that he has not created such a mind; he has only shown that he *is* not such a mind.

      Searle seems to think that the physical system is supposed to be identical with the mind, so when *he* is the physical system and yet doesn't understand Chinese, he concludes that there is no understanding.

      But if the CTM is true, a mind can act as the computing substrate for a second mind, or alternatively two separate minds could exist within a single computer program. Video game characters are only simplistic models of this idea. We are not supposing that they are literally conscious intelligent agents.

      This shows that it is wrongheaded of Searle to suppose that the computing substrate is identical to the mind. It is clearly not, and as such, the fact that he does not understand Chinese does not prove that there is no understanding.

      Ditto for phenomenal consciousness.

      "I don’t think that GT has any understanding whatsoever of Italian or any other language."

      Would you say it's possible to understand a language without understanding the concepts to which the words refer? If you do not, then I would agree that GT has absolutely no understanding of the language.

      But is it possible to understand the *syntax* of a language without understanding the concepts to which the words refer? I would say so. And it is in this sense that I think GT has very limited, impoverished understanding of Italian and other languages.

      I do agree that it has no phenomenal consciousness, but I'm less convinced that phenomenal consciousness is required for understanding.

      >If you are taking Chalmers’ p-zombies to make the same point, then I might have to agree, <

      Yes, this is precisely what I am getting at.

      I was extremely confused at how you could maintain that the concept of a p-zombie is nonsense while maintaining that the Chinese Room had human behaviour but no consciousness. If you are no longer claiming that p-zombies are incoherent but instead merely insist that they do not establish dualism, then that's fine by me.

  14. @ Massimo

    > I see no equivalence among these terms that you keep throwing together. <

    stimulus-response system = organic information-processing system = organic robot = living organism <= I don't think it is very difficult for a materialist to see that all these terms are interchangeable, given the fact that a materialist sees biological organisms in strictly mechanistic terms.

    > What’s your point? <

    I have already stated my point: You cannot explain why a sentient stimulus-response system (what you're calling an understanding living organism) was naturally selected over an insentient stimulus-response system (what you're calling a mere intelligent living organism) given your belief that consciousness (i.e. understanding) is not required for a living organism to respond to environmental stimuli.

    > We can only speculate about that. But please consider that creatures capable of sentient s-r are by far the dominant species on planet earth, possibly poised to colonize places beyond it. Seems to me that in terms of fitness we’re doing pretty darn well, no? <

    This is not an explanation; it's simply a tactic to divert attention away from the fact that you have no explanation. We don't have to speculate. We know that, given your mechanistic beliefs and on the basis of logic alone, no explanation will be forthcoming.

  15. Massimo

    >That’s right. The experiment is about highlighting that there seems to be something missing from the CR, or from Google Translate. People may bite the bullet and respond that GT is in fact conscious, but I find that hard to believe.

    What's missing from Google Translate is the ability to carry on a conversation. By hypothesis that's not missing from the CR. Some of us feel there isn't anything significant missing from the CR, and it seems to me that a philosopher who thinks there is something missing ought to specify something about the nature of what is missing. Otherwise the discussion has degenerated to "is too; is not; is too; ..." (a classic scene from M*A*S*H).

  16. Massimo,

    I would argue that I am allowed to 'dodge' the question because I merely said that the Chinese Room is fallacious and fails as an argument, not that I know how understanding or consciousness work. Often it is easier to know that something is definitely wrong than to explain what the real facts of the matter are.

    But okay. It seems pretty clear that Google Translate will never understand because it is not built to understand, merely to search for patterns and exchange one phrase for another. The question is harder for a computer that is designed to learn to give meaningful replies in a conversation. That is exactly the point of the Turing test, isn't it? How do we know that the human brain does anything more than what that computer does?

    Well we don't, either way, at least at the moment. But if Searle basically says "I personally don't believe that a machine could understand because none of its parts understands," then he is merely committing the fallacy of composition. It is not a good thought experiment. And the really stupid part is that one could do the same for the human brain, and some religious types do of course do it: Look, an individual brain cell does not understand Chinese, so how could a lot of them understand it? There must be something supernatural about the mind! Yes, and airplanes cannot possibly fly because a single passenger seat cannot fly...

    By the way, I would also urge to keep understanding and consciousness apart in these discussions. They are not the same thing.

    1. "By the way, I would also urge to keep understanding and consciousness apart in these discussions. They are not the same thing."

      I'm inclined to agree.

      However, if this is true, I'm not sure it's fair to say that Google Translate has no understanding of any kind.

      If we imagine that it's got a sophisticated model of language syntax and operations for manipulating that syntax, for example, would it not be reasonable to describe this as understanding the syntax of that language?

  17. I'm curious in what sense you think understanding and consciousness are physical phenomena, especially when you think - unless I'm misunderstanding your point about orthogonality - that to expect subjective experience to be included in a physical description of the world is committing a category mistake.

  18. Different point: you say "A lion (or a plant, for that matter) can act very intelligently and yet have no understanding of what it is doing," and later "I don’t think that GT has any understanding whatsoever of Italian or any other language. No matter how sophisticated it gets, it’s just a very sophisticated and very fast mechanical look up system with no phenomenal consciousness (and hence no understanding)." Couldn't these points taken together be used to show the impossibility of human understanding and consciousness? Or is there some fundamental difference between the increase in sophistication from lion brains to human brains vs. from GT to a system capable of emulating normal human behavior in the way the CR does?

  19. Love, love, love this post, and I couldn't agree with you more (except for a distaste for trolleyology). I wonder, though, about your position on Michael Lepresto's post on this site.
    http://rationallyspeaking.blogspot.com/2013/04/why-problem-of-consciousness-wont-go.html

    Like you, I find certain claims to be so, uh, trivial, that I don't even know how to approach what the claimers seem to think is a very exciting notion. That a third person account will never fully explain a first person experience is either a truism or, for most dualists, a nearly mystical claim. It's such a truism, in my opinion, that the irreducible quality of "thisness" applies even to inanimate things. We may have a third person understanding of what a telescope is looking at, but there is actually a specific, almost qualia-like condition to any one particular telescope trained at any one point in the sky on any one particular night. None of the things that astronomy or optics has to say about the telescope will capture that. That there isn't a sentient remembering of this particular inanimate perspective doesn't mean it isn't irreducible. It's just so trivial that Nagelites want to dismiss it out of hand as a bad analogy for a sentient point of view. But unless they can explain why qualia are not merely a richer version of any form of particularness, and as such not actually an extra physical state, but an almost geometric necessity of not being all of everything at once, as an extremely ambitious form of objectivism would like (but of course also can't get), then they are blowing a lot of air into a flimsy balloon.

    I know a lot of the Nagel type thinkers came of age at the tail end of logical positivism and around the era of Skinner, and they were rightly concerned with an excess of confidence in a physical description of the world and a denial of the importance of subjective experience, but now, I think the world is safe for poetry and love, and the only reason to keep beating this drum is if you, like Kurzweil and Chalmers, are just too afraid of dying to think clearly.

  20. @ Massimo

    > But my argument is that confusing physicalism with third-person perspective is a mistake. My subjective experiences are first-person, but they are the result of a physical brain interacting with a physical universe. Nothing else. <

    If my first-person experience of my own subjectivity is truly physical, then it should be amenable to the third-person perspective. The fact is that you cannot observe my subjectivity. Therefore, you have no basis to assert that it is physical.

    Also, you have already argued for the validity of "mathematical Platonism." (Remember that a perfect triangle can only exist as a nonphysical abstraction, not as a concrete physical object.) If mathematical Platonism is true, then physicalism is not.

  21. Explaining 'what it is like to be a bat' entails determining the principles behind, or governing, phenomenal experience. This is a problem of the third-person perspective - one is not required to explain 'what it is ACTUALLY like to be a bat' in order to explain phenomenal experience.

    This second problem, namely explaining 'what it is actually like to be a bat,' is no more distant a problem than explaining what it is like to be any human individual other than oneself. Note: this is neither a first-person nor a third-person perspective problem. Rather, it is a problem relating to personal identity. Cf. the post "Why the First-Person Perspective and Identity distinction is the Elephant in the Philosophy of Consciousness Room" at http://mind-phronesis.co.uk/first-person-perspective

    An error arises from the associative assumption that solving the phenomenon of experience entails solving the problem of personal identity, when all that is required is to determine the principles that explain the phenomena giving rise to third- and first-person phenomena. This error arises from our sense that personal identity is exclusively populated by experiential phenomena - experience is everything that we can know of identity; therefore, to solve one must entail solving the other. This, consequently, also becomes entangled with the dualism versus non-dualism arguments. But solving phenomenal experience need have no impact on resolving the dualism question. The hard problem is not actually the problem of experience - it just appears to be.

  22. Regarding the Chinese Room

    1.
    Massimo... are you of the view that there can be intelligence in the absence of knowledge?

    Because you say: "one can display intelligent behavior (as, say, plants do when they keep track of the sun's position with their leaves) and yet have no understanding of what's going on."
    I interpret this sentence to mean the following:
    "intelligent behaviour may arise in the absence of understanding - an example of this is when plants follow the sun as it traverses the sky."

    Therefore, are you of the view that in the absence of understanding, plants nevertheless possess knowledge by virtue of this display of intelligence? Because, in what way can an agent demonstrate intelligent behaviour if it has no knowledge by which to form its 'acts of intelligence'? So in your view, is the notion that a plant can display intelligent behaviour equivalent to saying it possesses, or has access to, some form of knowledge?

    Alternatively, if knowledge is the 'wrong term' to use, upon what... is an intelligent act founded, in your opinion? For more on this refer to "Thinking round the Epistemology of Perception Problem" at http://mind-phronesis.co.uk/epistemology-of-perception

    2.
    Massimo: you say "computers can behave intelligently". In my view this is equivalent to saying that

    i) an indicator light on a car 'can behave intelligently' - because it can flash on and off; or even possibly
    ii) that the hammer and chisel of the sculptor can behave intelligently by virtue of the statue that is constructed with them.

    A computer performs calculations in such a manner as to have the 'appearance' of behaving intelligently - we are duped by the speed of on-off calculations, just as we are duped by the numerous pixels on a TV screen into thinking that there are images which move.

    So... to distinguish between "intelligence" and "appearance" is to distinguish between a 'knowledge construct' that results in intelligent behaviours, and tools that, in the hands of an individual with intelligent intentions, amalgamate bits to form the appearance of intelligent design.

    From this, one is able to say the following:
    "Artificial intelligence" is a misnomer. "False Intelligence" is critically, a more suitable term. And consequently, philosophers should draw distinction between Artificial Consciousness and 'False Intelligence".

  23. All,

    thanks for the vigorous discussion, but it’s getting overwhelming again, so I’ll only have time for occasional brief counter-commentaries... (Also, new post coming out on Monday, so...)

    Thomas,

    > I just don't understand what the point being made can be here other than metaphorical inasmuch as a human intelligence/understanding may not be relevant or applicable when talking about a lion or plant outside of a human's desire to frame such acts in terms that are meaningful to a human. <

    Not sure what you are objecting to. I didn’t mean that statement as metaphorical: I meant that plants, say, do not have understanding of what they do, even though their behavior is “intelligent” in the sense of being adaptive and looking purposive.

    > “My subjective experiences are first-person, but they are the result of a physical brain interacting with a physical universe. Nothing else." How is this not an overstatement? "Nothing else"? <

    Yeah, as in: no non-physical processes (whatever that would mean) are required. Again, this is a pretty straightforward, almost universally accepted position in philosophy of mind.

    > My own tendency is more and more to suspect that first-hand experience is subsequent to third-person experience <

    Not sure what you mean by it. If anything, the other way around: one cannot look at things in third person unless one is also having first-person experiences. (A scientist cannot examine empirical data impersonally, i.e., w/out first person experience of what she is doing.)

    Jerry,

    > What's missing from Google Translate is the ability to carry on a conversation. <

    Take Eliza then. It does have a (limited) ability of that type. Do you think it is somewhat conscious?
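    (For readers who don't know Eliza: the whole trick is keyword matching plus canned response templates - roughly the kind of thing sketched below in Python. The rules here are invented for illustration, and the original script was richer, but the mechanism is the point: nothing in it understands the conversation.)

    # A minimal Eliza-style responder: keyword matching plus canned templates.
    # The rules are invented for illustration; nothing here understands anything.
    import re

    RULES = [
        (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bmy (mother|father)\b", re.I), "Tell me more about your {0}."),
    ]

    def respond(line):
        # Scan rules in order; the first keyword match fills a canned template.
        for pattern, template in RULES:
            match = pattern.search(line)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # stock deflection when nothing matches

    print(respond("I feel trapped"))  # -> "Why do you feel trapped?" - fluent-ish, zero comprehension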

    > it seems to me that a philosopher who thinks there is something missing ought to specify something about the nature of what is missing. <

    I already did, several times: substrate. I think consciousness is a biological phenomenon that is not substrate independent.

    Alex,

    > It seems pretty clear that Google Translate will never understand because it is not built to understand, merely to search for patterns and exchange one phrase for another. The question is harder for a computer that is designed to learn to give meaningful replies in a conversation. <

    But if the latter is achieved in the same way as the former (i.e., by symbol manipulation) what on earth makes you think that somehow consciousness would emerge? See my comment above about Eliza. Do you think it has limited consciousness? Why not?

    > That is exactly the point of the Turing test, isn't it? How do we know that the human brain does anything more than what that computer does? <

    As I said, I think the Turing test is hopelessly misguided because it is rooted in behaviorism. We know about humans because we are humans and have access to our own phenomenal experience; we then infer (reasonably) that other beings built like us have similar internal experiences.

    > if Searle basically says, I personally don't believe that a machine could understand because none of its parts understands then he is merely committing the fallacy of composition. <

    But that’s not what he is saying. He is saying that if we imagine something that works formally like a digital computer, we don’t seem to get a grasp of where understanding is coming from; therefore, there might be something missing from our account of understanding.

    > the really stupid part is that one could do the same for the human brain, and some religious types do of course do it <

    Again, I think you (and many others) entirely misunderstand Searle’s point. Just think about his insistence on biological naturalism and you’ll see that your analogy with the religious nutcase simply doesn’t hold water.

    > I would also urge to keep understanding and consciousness apart in these discussions <

    I, ahem, understand. But my point (and Searle’s) is that there is no understanding without consciousness. There can, however, be intelligence.

  24. Dervine,

    > I'm curious in what sense you think understanding and consciousness are physical phenomena, especially when you think - unless I'm misunderstanding your point about orthogonality - that to expect subjective experience to be included in a physical description of the world is committing a category mistake. <

    That is not at all what I said. Consciousness, and therefore understanding, are the results of physical processes. But complaining that a third-person description of consciousness doesn’t give you the first-person experience is, I think, just silly. One is a description, the other is an experience: why on earth would one think that the former gives the latter?

    > Couldn't these points taken together be used to show the impossibility of human understanding and consciousness? Or is there some fundamental difference between the increase in sophistication between lion brains and human brains vs. from GT to a system capable to emulating normal human behavior in the way the CR does? <

    I don’t believe in any fundamental impossibility of human understanding. It’s an open question. But no, we have no idea, empirically, of how consciousness arises from the brain. That’s what makes all this certainty that people have about these matters so damn funny.

    OneDay,

    > We may have a third person understanding of what a telescope is looking at, but there is actually a specific, almost qualia like condition to any one particular telescope trained at any one point in the sky on any one particular night <

    Nope, I don’t think there is. The telescope would have to be conscious, at least as conscious as an animal capable of feeling pain. I seriously doubt that’s the case.

    > I think the world is safe for poetry and love and the only reason to keep beating this drum is if you, like Kurzweil and Chalmers are just too afraid of dying to think clearly <

    Please, no unflattering comparisons with Chalmers and especially Kurzweil. Surely you’ve read enough on this blog to know what I think of both of them.

    Alastair,

    > You cannot explain why a sentient stimulus-response system (what you're calling an understanding living organism) was naturally selected over an insentient stimulus-response system (what you're calling a mere intelligent living organism) given your belief that consciousness (i.e. understanding) is not required for a living organism to respond to environmental stimuli. <

    The first part of your statement is correct: we have speculative hypotheses, but not knowledge, about why consciousness was selected for. That’s not unusual: in a lot of cases evolutionary biology finds itself in that situation, because it’s a historical science, where often the necessary information is lost to time. But your conclusion simply doesn’t follow: we know that living organisms don’t even need intelligence to engage in stimulus-response, which means that the whole planet could be made of plants, or bacteria. You see the fallacy?

    > We know that, given your mechanistic beliefs and on the basis of logic alone, no explanation will be forthcoming. <

    Nonsense on stilts.

    > If my first-person experience of my own subjectivity is truly physical, then it should be amenable to the third-person perspective. <

    It is: we can tell which parts of your brain have to be present in order for you to have conscious experience. We can even alter the functioning of those parts and therefore alter your conscious experience. What else do you need to admit that consciousness is a physical phenomenon?

    > If mathematical Platonism is true, then physicalism is not <

    Ah, yes, but notice that I have explained several times that I am a naturalist, not a physicalist, as far as the fullness of the cosmos is concerned. But in terms of biological phenomena - of which consciousness is one - I am a physicalist (which is a sub-category of naturalism, of course).

    Replies
    1. @ Massimo

      > The first part of your statement is correct: we have speculative hypotheses, but not knowledge, about why consciousness was selected for. <

      What hypotheses do you have? The only one I am familiar with is the "spandrel argument" (which is so absurd that not even the proponents of this argument actually believe it).

      > But your conclusion simply doesn’t follow: we know that living organisms don’t even need intelligence to engage in stimulus-response, which means that the whole planet could be made of plants, or bacteria. You see the fallacy? <

      No, I don't see. What particular fallacy am I committing here? However, you are now creating an additional problem for yourself. If "intelligence" is not required for a living organism to engage in stimulus-response, then why were intelligent stimulus-response systems (i.e. living organisms) naturally selected over non-intelligent stimulus-response systems? Because it doesn't appear that intelligence is required to process environmental data as input and generate some kind of behavioral response as output. And if intelligence is not required to do this, then intelligence (like your "understanding") plays no causal role whatsoever and therefore cannot possibly confer any selective advantage.

      By the way, there is evidence that bacteria exhibit intelligence (if you define intelligence in terms of information processing).

      "Cellular Decision Making in Bacteria"

      > Nonsense on stilts. <

      Hitherto, no explanation (plausible or otherwise) has been forthcoming.

      > It is: we can tell which parts of your brain have to be present in order for you to have conscious experience. We can even alter the functioning of those parts and therefore alter your conscious experience. What else do you need to admit that consciousness is a physical phenomenon? <

      Bullchit! (This rhetorical expression is more effective than "nonsense on stilts.") This kind of research depends on first-person reports of subjective experiences. Correlation does not necessarily imply causation, and it most certainly does not imply identification. (That's the fallacy you are committing here.) If you actually were able to objectively detect "consciousness," then you should be able to tell me whether an amoeba is experiencing consciousness (subjective awareness). But we both know that you cannot. And the reason you cannot is because consciousness is subjective, not objective. It is not amenable to the third-person perspective.

      > Ah, yes, but notice that I have explained several times that I am a naturalist, not a physicalist, as far as the fullness of the cosmos is concerned. <

      What is at issue here is metaphysical physicalism, not metaphysical naturalism.

      > But in terms of biological phenomena - of which consciousness is one - I am a physicalist (which is a sub-category of naturalism, of course). <

      You're talking more nonsense. If you believe that ultimate reality is comprised of both the physical and the nonphysical, then you evidently believe in some form of dualism - the duality of the physical and the nonphysical...duh!

      Also, you have created another problem for yourself. How exactly does a physical mind intellectually grasp a nonphysical mathematical object (like the abstraction of a perfect triangle)?

    2. No, I didn't mean to compare you to Chalmers and Kurzweil at all. I agree with your opinion of them. I should have written "one" instead of "you."

  25. DM,

    don’t think I don’t appreciate your vigorous engagement here. One of the reasons I write and curate the blog is to challenge myself, and readers help me clarify what I think, refine it, and occasionally change my mind about it. I just wish I had a lot more time to devote to it...

    > Searle has not shown that he has not created such a mind, he has only shown that he *is* not such a mind. <

    Or you can think of it as a question of burden of proof: it seems to me that it is the strong AI crowd that needs to show that a virtual mind has been created.

    > Searle seems to think that the physical system is supposed to be identical with the mind <

    Wrong, I think. Searle’s system is analogous to a digital computer, not a mind. His point is precisely that such a computer has no mind.

    > But if the CTM is true, a mind can act as the computing substrate for a second mind <

    But that’s precisely what is at issue, so now you are getting pretty close to begging the question.

    > Video game characters are only simplistic models of this idea. <

    I think they are no models at all.

    > Would you say it's possible to understand a language without understanding the concepts to which the words refer? <

    That’s right, at least partially.

    > But is it possible to understand the *syntax* of a language without understanding the concepts to which the words refer? I would say so <

    No, there is no understanding of anything, concepts or syntax, in Google Translate.

    > I'm less convinced that phenomenal consciousness is required for understanding. <

    Do you have any examples of a decoupling between the two?

    > I was extremely confused at how you could maintain that the concept of a p-zombie is nonsense while maintaining that the Chinese Room had human behaviour but no consciousness. If you are no longer claiming that p-zombies are incoherent but instead merely insist that they do not establish dualism, then that's fine by me. <

    This was really helpful, thanks. So, here is what I think, and I blame Chalmers for his ambiguity in setting up the p-zombie experiment ;-) If Chalmers is saying that p-zombies *are* (not just *appear to be*) exactly like humans, but without phenomenal consciousness, I would respond that while logically possible (low bar to meet!), this is physically impossible. If he is saying that p-zombies are convincing *simulations* of human beings, really something akin to an Eliza-type trick, then that’s certainly possible but entirely irrelevant - in my mind - to the issue of phenomenal consciousness.

    Finally, I do see now why you and Ian see such a similarity btw the CR and the p-zombies, but what distinguishes them fundamentally for me is the diametrically opposite aims of the two experiments: Chalmers is trying to build a case for dualism (which I think is nonsense), while Searle is proposing something akin to a reductio to argue that there is something missing from the strong AI program (which I think is exactly right).

    Replies
    1. Hi Massimo,

      Thanks for the kind words. I appreciate you're very busy, and that communicating online is time consuming. From my perspective, it's a pity that this is the only avenue available to us to discuss these very interesting issues.

      "Or you can think of it as a question of burden of proof: it seems to me that it is the strong AI crowd that needs to show that a virtual mind has been created."

      "But that’s precisely what is at issue, so now you are getting pretty close to begging the question."

      That's entirely reasonable, but I think you're misunderstanding my intention slightly. The virtual minds response does not establish that the CTM is true, only that the CR is not an adequate refutation. The assertion that it *is* an adequate refutation is then begging the question. My aim is only to persuade you to be more agnostic on the subject rather than to be convinced that the CTM is false.

      "Searle’s system is analogous to a digital computer, not a mind. His point is precisely that such computer has no mind."

      But his point does not establish that, unless the mind the computer is supposed to have is identical to his own. This is what I meant when I said that he has only shown that this is not the case.

      "Do you have any examples of a decoupling between [consciousness and understanding]?"

      It seems clear to me that this is a definitional matter. The meaning of understanding clearly includes the idea of consciousness to you, whereas it does not to me. To me, a p-zombie would understand all the same things a human would. It would be able to solve the same problems, get the same sudden flashes of inspiration and be able to describe its thinking process. This, to me, is understanding. In both Searle's and Chalmers' account it can do all this without consciousness.

      So, for me, understanding is akin to competence, but probably needs an ability to adapt to novel problems and also potentially to give an account of how the problems were solved. If you disagree, that's absolutely fine, but it just means you're using the word a little differently. In your definition then, I would agree that Google Translate does not understand anything.

      I'm glad we got to the bottom of the CR=p-zombie issue!

  26. Mark,

    > This second problem, namely explaining 'what it is actually like to be a bat' is no more distant a problem than explaining what it is like to be any human individual other than oneself. <

    Indeed, but I would go as far as saying that it simply isn’t a “problem” at all. Experience is something that one, well, experiences. No explanation is required, unless by explanation one means an understanding of the biological-physical mechanisms that make the experience possible. But if so we are now back again firmly within the third-person perspective.

    > are you of the view that there can be intelligence in the absence of knowledge? <

    Good question: no, because knowledge doesn’t require understanding. A bacterium “knows” to move away from dangerous chemicals, thus showing an intelligent behavior. But there is no understanding of the dangerous nature of the chemical, nor conscious awareness of the evasive maneuver.
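    (A cartoon of what I mean, in a made-up one-dimensional world: the bacterium's "evasion" can be nothing more than a hard-wired rule, as in the Python sketch below - numbers and environment invented for illustration, no awareness anywhere.)

    # A cartoon of bacterial chemotaxis as a hard-wired stimulus-response rule.
    # The environment and numbers are invented; the point is that adaptive,
    # intelligent-looking behavior needs no awareness of what is being avoided.
    def toxin_level(position):
        # Invented environment: toxin concentration peaks at position 50.
        return max(0.0, 10.0 - abs(position - 50.0))

    def step(position):
        here = toxin_level(position)
        ahead = toxin_level(position + 1.0)
        # Hard-wired rule: if concentration rises ahead, back away; else keep going.
        return position - 1.0 if ahead > here else position + 1.0

    pos = 45.0
    for _ in range(10):
        pos = step(pos)
    print(pos)  # the "bacterium" ends up farther from the toxin, knowing nothing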

    > in what way can an agent demonstrate intelligent behaviour if it has no knowledge by which to form its 'acts of intelligence'? <

    Because it has been programmed thus by natural selection.

    > A computer performs calculations in such a manner as to have the 'appearance' of behaving intelligently <

    I’m fine with that. But then you are adopting such a strict definition of intelligence (essentially collinear with understanding) that you’d have to negate, for instance, that predators such as lions are intelligent. I think they are.

    > consequently, philosophers should draw distinction between Artificial Consciousness and 'False Intelligence <

    I’m fine with that too, though I wouldn’t phrase it that way. I am trying to maintain a distinction - which I think most philosophers and biologists would agree to - between intelligence (a la lion) and understanding. Computers are more like lions, or better yet, like plants (intelligent behavior, no phenomenal experience).

    > if we understand the construct of the mind from which consciousness characteristics emerge and then duplicate the dynamics of that construct, artificial consciousness will happen. <

    Sure, but what do you mean by “construct”? For me, that includes certain physical substrates, as I mentioned a number of times (Searle’s biological naturalism, which very few biologists actually doubt, though they don’t call it that).

    > We have no idea how we can explain what it is for a fruit to experience ripening <

    I don’t think fruits have any experience of ripening.

    Replies
    1. Well, I for one think that a lot of people want an explanation of the phenomenon of (their) experience - to them experience IS a problem worth attempting to solve. Indeed, experience IS a problem if one believes there is a solution as to why human interaction with the environment presents phenomenal qualities that are by their nature subjective, inaccessible and ineffable, etc. The explanation is there for the taking, as far as I am concerned.

      You are saying a bacterium “knows” but a plant does not? That the plant acts intelligently, but only because natural selection has programmed it... programmed it with what? And what holds the programme in the plant? What do we call this vehicle that holds and implements “the programme”?


      I think a strict definition of intelligence is vital for coherent philosophical enquiry... and I don’t understand how, by adopting this, you think I have to negate that predators such as lions are intelligent; I do not arrive, nor am I compelled to arrive, at this conclusion.

      You say, “Computers are more like lions”. “More like” implies similarities: I’d like to see those similarities articulated... In my view, computers are nothing like lions or plants: plants have behaviour that one can define as intelligent, whilst computers have behaviours that one can clearly demonstrate give only the ‘appearance’ of intelligence; cf. my original comment: a light bulb is not intelligent by virtue of its performing the task of turning on and off with the switch.

      You say, “Lions have no phenomenal experience”... I have written a paper devoted to explaining why Carruthers’ Dispositional Higher Order Thought theory is false in its claim that non-human animals do not possess phenomenal experience. http://mind-phronesis.co.uk/dispositional-higher-order-thought-theory-versus-hierarchical-systems-theory-of-consciousness

      Why does “construct”, in your opinion, include only “certain” ‘physical substrates’ and not others? What would any distinction entail? For me, “construct” requires a coherent definition, which is what a reductive explanation for all forms of emergence (at mind-phronesis.co.uk) entails.


      Massimo, you say “I don’t think fruits have any experience of ripening.” Well, if you think lions do not possess phenomenal experience, then the poor apple has no chance - Actually, my point had nothing to do with the validity or otherwise of whether apples experience ripening. My point is intended to illustrate that explaining phenomenal experience does not require us to explain what it actually feels like for individual living organisms. A third and first-person explanation does not entail a personal identity explanation.

    2. You misunderstood Massimo.

      He did not say lions have no phenomenal consciousness.

      He said they had no understanding. He compared computers to lions in this respect, and then decided that plants were a better comparison, since plants have no phenomenal consciousness.

      "Computers are more like lions, or better yet, as plants (intelligent behavior, no phenomenal experience)."

    3. Oh yes... I have misquoted too. Sorry.

    4. Massimo:

      Can you clarify a point for me?

      Since you are of the view that there cannot be intelligence in the absence of knowledge, am I right in understanding that this means you are of the view that in the absence of understanding, plants nevertheless possess knowledge by virtue of their display of intelligence?

  27. Here is another mind experiment: Other than height, weight, money, and degree, how would you measure yourself? And here is a tip: once the measure of One is found, so is the measure of All.

    Truth is

    =

  28. Massimo, on your final three points I agree with (i) and (ii), but I disagree on (iii), because Chalmers has moved the CR and Mary to our first person. Oh, look at that zombie: he moves and talks like me; an MRI shows he has muscles, bones and neurons like me; matter of fact, those neurons fire identically to mine. But something is not happening in those neurons, so there is no qualia or consciousness? INNER dualism?

    Victor Panzica

  29. @ Massimo

    > Not sure what that means. But we ourselves are beings “programmed” by natural selection and capable of understanding. Did that involve an infinite regress? <

    Intelligent agents (i.e. human beings) program computers. Natural selection is not an intelligent agent. It does not literally program living organisms with built-in programs. Right? That's where the infinite regress (or circular reasoning) comes into play. It seems to me that you are attempting to describe nature as a "self-programming program" (or a "self-programming programmer"). How exactly does that work?

    As far as I can see, the only way out of this dilemma is to make the argument that there are no intelligent agents (human or otherwise) by appealing to memetic selection.

  30. Massimo said (in parentheses): At the moment, however, even neuroscientists have close to no idea of how consciousness is made possible by the systemic activity of the brain. They only know that that's what's going on.

    Agreed, although I'm tempted to modify "neuroscientists" to "cognitive scientists", so as to include other disciplines, like psychology, linguistics, and philosophy (of mind), that share that same (empirically well supported) hypothesis and have contributed to this research project.

    At the risk of embracing what Owen Flanagan calls a "new mysterian" stance, I suspect that we face enough in the way of stable epistemic limits here to keep the creationists and mystics pleased. That in itself doesn't prevent cognitive scientists from choosing a winner among the hypothetical candidates (i.e. something along the lines of what Searle calls "biological naturalism"), but then since when did the creationists and mystics agree to play by scientific norms?

  31. Massimo: "Intelligence very likely does have to do (in part) with computing speed, which is why animals' behavior is so much more sophisticated than most plants'..."

    Are you sure it doesn't have something to do with presence of or lack of a central nervous system?

  32. Ok, this is going to be my last set of replies on this thread. It has been interesting, but I sense yet another point of diminishing returns, and of course the new post (on the metaphysics wars) is already up...

    > My aim is only to persuade you to be more agnostic on the subject rather than to be convinced that the CTM is false. <

    I’m not convinced it is false, but I do think the arguments (and evidence) overall point against it being the full answer (I do think the brain has some computational aspects, I just don’t think they account for phenomenal consciousness).

    > So, for me, understanding is akin to competence <

    But that can’t be, otherwise you would have to conclude that a plant “understands” that the sun is a source of energy and that its leaves have to track it. Competence is just not enough for what we are talking about.

    Victor,

    > Oh, look at that zombie: he moves and talks like me; an MRI shows he has muscles, bones and neurons like me; matter of fact, those neurons fire identically to mine. But something is not happening in those neurons, so there is no qualia or consciousness? <

    That’s why I keep hedging on p-zombies. If that’s what Chalmers has in mind, then he is thinking about a physical impossibility, and possibly even a metaphysically incoherent notion (i.e., that one can have two identical physical systems, one of which lacks a property the other has).

    Thomas,

    > Few would deny that an infant is conscious, but at what point its consciousness also entails understanding is to me a different matter <

    Agreed. That’s why I said understanding requires consciousness, but not the other way around. Also, of course, keep in mind the difference between general phenomenal consciousness (which presumably all animals have, to different degrees) and the type of self-reflective consciousness that only humans (and possibly a few other primates) possess. It is the latter, I argue, that is necessary for understanding. Another way to put it is that for me understanding means the ability to formulate and analyze concepts, which is precluded if one doesn’t have self-consciousness.

    Replies
    1. Massimo, I think he is psychologically fixated on the explanatory gap. Knowing all of the microphysical facts doesn't entail knowing all of the microbiological functions. Positive zombie conceivability may be looking at those neurons and saying there is something happening in me that's not happening in the zombie's neurons. Negative conceivability is: I psychologically know all of the facts and functions, which brings me to an uncomfortable metaphysical wall.

  33. Alastair,

    > What hypotheses do you have? <

    I think consciousness has been highly adaptive because it allows humans to understand the world better and to formulate plans. This is speculative only insofar as we don’t have access to the relevant Pleistocene data, but as I said that’s true for a lot of human behavioral traits.

    > If "intelligence" is not required for a living organism to engage in stimulus-response, then why were intelligent stimulus-response systems (i.e. living organisms) naturally selected over non-intelligent stimulus-response systems. <

    You really have a very narrow understanding of evolution. Your question is akin to asking why there are multicellular life forms if bacteria do pretty well at reproducing themselves. It’s a non sequitur from an evolutionary perspective. New forms evolve and differentiate if they can create ecological niches for themselves, which obviously intelligent systems (animals) have been able to do, and so have sentient systems (humans).

    > there is evidence that bacteria exhibit intelligence (if you define intelligence in terms of information processing). <

    I don’t.

    > This kind of research depends on first-person reports of subjective experiences. Correlation does not necessarily imply causation, and it most certainly does not imply identification. <

    Thanks for the Science Method 101 lesson. First-person reports are data, or psychology wouldn’t exist; you keep confusing data with experience. And we can manipulate the brain experimentally, which goes well beyond simple correlation.

    > What is at issue here is metaphysical physicalism, not metaphysical naturalism. <

    That’s your issue, not mine.

    > If you believe that ultimate reality is comprised of both the physical and the nonphysical, then you evidently believe in some form of dualism <

    No, my ontology is simply a bit richer than yours (see Ladyman and Ross on this).

    > How exactly does a physical mind intellectually grasp a nonphysical mathematical object (like the abstraction of a perfect triangle)? <

    Don’t know. Got any good ideas? But of course you think that every time I say something like “I/we don’t know” you have “caught” me. Creationists use the same tactics; it’s bollocks.

    > Intelligent agents (i.e. human beings) program computers. Natural selection is not an intelligent agent. <

    Again, thanks for Evolution 101, can we please get a bit more serious?

    > That's where the infinite regress (or circular reasoning) comes into play. It seems to me that you are attempting to describe nature as a "self-programming program" (or a "self-programming programmer"). How exactly does that work? <

    Pick up any good text on evolutionary biology. I recommend Futuyma’s.

    > the only way out of this dilemma is to make the argument that there are no intelligent agents (human or otherwise) by appealing to memetic selection. <

    Whatever that is.

  34. Richard,

    > Are you sure it doesn't have something to do with presence of or lack of a central nervous system? <

    Yes, of course, plants don’t have nervous systems, and here on earth that’s what confers intelligence (because the organism becomes behaviorally much more flexible in response to environmental challenges). But nervous systems are the mechanism; intelligence is the resulting phenomenon. There is a distinction between the two.

    Mark,

    > Well I for one, think that a lot of people want an explanation of the phenomenon of (their) experience - to them experience IS a problem worth attempting to solve <

    but explaining X is not the same as experiencing X. We are making progress in the explanation of phenomenal experience, but it literally makes no sense to me to ask for a scientific explanation to somehow *provide* us with the experience.

    > You are saying a bacterium “knows” but a plant does not? <

    No, nothing without self-consciousness “knows” (in the sense of understands) anything. What I said / meant was that all organisms behave “intelligently” in the sense of having non-random responses to their environment. But there is no mystery there: regardless of what Alastair thinks, that sort of adaptive behavior was “programmed” (yes, it’s a metaphor) by natural selection.

    > what holds the programme in the plant? <

    DNA?

    > I don’t understand how by adopting this, you think I have to negate that predators such lions are intelligent <

    As DM pointed out, I didn’t.

    > “Lions have no phenomenal experience” <

    Didn’t say that either.

    > My point is intended to illustrate that explaining phenomenal experience does not require us to explain what it actually feels like for individual living organisms. <

    We completely agree on that one, it was a major point of my post.

    > Since you are of the view that there cannot be intelligence in the absence of knowledge <

    I am not. I am of the view that there cannot be knowledge without understanding. Again, lions are intelligent. Plants *behave* intelligently. (The difference being caused by the former, but not the latter, having a nervous system.)

    > am I right in understanding that this mean that you are of the view that in the absence of understanding, plants nevertheless possess knowledge by virtue of their display of intelligence? <

    No, for reasons that I hope are now a bit clearer.

    Replies
    1. My point being, if there is no mechanism present in one of the things being compared, then comparing "computing speed" is irrelevant, like comparing the fastest microprocessor to a rock.

  35. @ Massimo

    > I think consciousness has been highly adaptive because it allows humans to understand the world better and to formulate plans. <

    Remember that you made a distinction between "intelligence" and "understanding." And your definition of "intelligence" does not require any "understanding" to function. And since functionalism and/or computationalism only require intelligence (as you defined the term), there is nothing for understanding to do. That's why consciousness is considered to be epiphenomenal on the materialistic view. "Awareness" (what you are calling "understanding") does nothing in and of itself. My calculator does not require any consciousness in order to calculate the square root of 256. Neither does the chess application on my personal computer require any "understanding" in order to "formulate plans."
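    (To spell out the calculator example: something like Newton's method - sketched below in Python, as one standard way a machine might do it, not necessarily how any particular calculator works - grinds out the square root of 256 by blind iteration. There is nothing it is like to be this procedure.)

    # Newton's method for square roots: pure mechanism, no awareness required.
    def sqrt_newton(n, tolerance=1e-10):
        guess = n / 2.0
        while abs(guess * guess - n) > tolerance:
            guess = (guess + n / guess) / 2.0  # average the guess with n/guess
        return guess

    print(sqrt_newton(256.0))  # -> approximately 16.0, with zero "understanding"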

    By the way, the technical terms (employed in the philosophy of mind) for what you are calling "intelligence" and "understanding" are access-consciousness (A-consciousness) and phenomenal-consciousness (P-consciousness), respectively. A-consciousness is basically information processing. P-consciousness is subjective experience itself, or what is known as "qualia." Dennett denies qualia. This is why many characterize him as denying consciousness.

    > You really have a very narrow understanding of evolution. Your question is akin to asking why are there multicellular life forms if bacteria do pretty well at reproducing themselves. <

    My question is more akin to asking what exactly do proponents of strong AI expect a sentient robot to do that an insentient robot cannot do. If you can answer that question, then maybe you will be able to give me a plausible explanation for why sentient stimulus-response systems were naturally selected over insentient stimulus-response systems. Up until now, all you have been doing is simply evading the question.

    > I don’t. <

    If you believe that plants exhibit intelligence, then what is your basis for rejecting scientific evidence that bacteria exhibit decision-making capacities?

    > First person reports are data, or psychology wouldn’t exist, you keep confusing data with experience. <

    You keep confusing SUBJECTIVE data with OBJECTIVE data. And I will remind you that psychology is not considered a PHYSICAL science. Why do you think that is?

    > And we can manipulate the brain experimentally, which goes well beyond simple correlation. <

    And I can step on your foot and you will scream "Ouch!" But that doesn't prove that my "stepping" is IDENTICAL with your "screaming" even though there is a correlation with my stepping and your screaming.

    All this correlation proves is that the physical can influence mental states. But we also know that mental states can influence physical states.

    > That’s your issue, not mine. <

    I believe you're the one who is attempting to defend physicalism here.

    > No, my ontology is simply a bit richer than yours (see Ladyman and Ross on this). <

    If you believe that ultimate reality is comprised of both the physical and the nonphysical, then you evidently believe in ontological dualism.

  36. @ Massimo

    > Don’t know. Got any good ideas? But of course you think that every time I say something like “I/we don’t know” you have “caught” me. Creationists use the same tactics; it’s bollocks. <

    I could use the same argument to defend interaction dualism. But I won't because your argument is pathetic. (We know how consciousness interacts with the world; it collapses the wave function.)

    > Again, thanks for Evolution 101, can we please get a bit more serious? <

    If human beings are nothing more than programs blindly programmed by natural selection, then any intelligent or purposeful behavior that they may seem to exhibit (such as designing computer programs) must be deemed purely illusory. Logical consistency dictates this much.

    > Whatever that is. <

    Susan Blackmore (one of your fellow skeptics) is consistent on this score...

    "We once thought that biological design needed a creator, but we now know that natural selection can do all the designing on its own. Similarly, we once thought that human design required a conscious designer inside us, but we now know that memetic selection can do it on its own." (source: pg. 242, "The Meme Machine" by Susan Blackmore)

  37. Massimo:
    I realise that you do not intend to respond to threads any further, so I will take this opportunity to thank you for your time and effort in responding to queries. I finally feel that I have come to an understanding of what you mean when using the terms knowledge, understanding and intelligence. Hooray - sorry if I was a bit slow on the uptake.

    You say in your last thread addressed to me, “I am of the view that there cannot be knowledge without understanding”:

    My perspective is that the understanding that humans possess is derived from a 'conceptual-based knowledge' (which is why feelings, being non-conceptual in derivation, are ineffable/inaccessible to a conceptual-based knowledge construct) - clearly, one would have to clarify what constitutes conceptual knowledge to make full sense of this.

    From this, I argue that behavioral responses (observed in living organisms) that are accurate in their reflection of environmental conditions/properties, and therefore which are beneficial to survival, entail a knowledge that is based on a 'bio-chemical/ bio-physical construct'.

    I have a thought experiment to support and illustrate this distinction:

    Humans have found, on a planet in another solar system, the dead fragments of a once-living organism - nicknamed “Edna”. The planet has suffered a catastrophic disaster and no lifeforms survive. On examination, it is found that Edna’s chromosomes are constructed not from DNA, but from an alternative compound. This “A-DNA” is taken back to earth.

    With some clever computer technology, they use the A-DNA from Edna to reconstruct a graphic of what Edna looked like. She was a plant-like organism. From Edna’s structure, color, and other characteristics, the human scientists are able to work out details about the conditions pertaining to Edna’s environment... Edna’s physiology shows what the environment must have been like for her to have created energy for respiration in the manner that she did, and to have reproduced and dispersed in her ‘evidently’ arid, windy world; the geneticists can even calculate what the planet’s gravity is likely to have been... The understanding they acquire about Edna’s planet and her environment is endless!

    Is this because the physiology of Edna, encoded in her A-DNA, represents a type of knowledge construction, from which capable human geneticists are able to derive understanding about Edna’s - and only Edna’s - particular environment?

  38. I never know if Nagel meant a cricket bat or a baseball one.

  39. "Contra Jackson, the experiment does not show that physicalism is wrong or incomplete. It simply shows that third person (scientific) descriptions and first person experiences are orthogonal to each other. To confuse them is to commit a category mistake, like asking the color of triangles"

    The two categories in your example are colour and shape.

    As far as I understand, colour is to do with the energetic properties of a wave (and/or its particles), and shape is to do with the relative spatiotemporal properties of a wave's particles.

    Is this what justifies your placing them in separate categories, or something else?

