The project of reconciling our commonsense view of the world with how the sciences and philosophy seem to be telling us the world actually is has been an obsession of mine for some time. To the detriment of my more terrestrial projects, I can’t stop thinking about (among other things) qualia, free will, metaethics, the ontology of numbers, and causality.
It is a characteristic of all these issues that they seem to present problems for what we might loosely call a physicalist or materialist or naturalistic or reductionist worldview. At its best, this kind of worldview does not imply a cartoon view that physics is the end of the story for human knowledge (though some, such as Alex Rosenberg, go that far), but rather an insistence that we keep our basic ontology (our catalogue of non-reducible things) to a minimum (especially ruling out fundamentally mental entities). W. V. O. Quine famously identified the naturalist/physicalist/materialist/reductionist impulse as an aesthetic “taste for desert landscapes.” But that is a much larger can of worms, and I am not sufficiently confident in my opinions to open the whole thing just yet.
The problem of qualia is a problem for physicalist theories because it seems to imply that there must be facts about the world that refer directly to some sort of mental or subjective states. Various thought experiments can bring this problem into clearer focus; my favorite is Frank Jackson’s “Mary’s Room” thought experiment (great for thinking about when you get stuck in a queue). This SEP article is a fantastic summary of that thought experiment, as well as related ones. I want to present one of the standard solutions to Mary’s Room, which I think is rather elegant and decisive. But first, here is my brief summary of Mary’s Room (if you’re familiar, you can skip the next four paragraphs).
Mary is a genius neuroscientist. She knows everything there is to know empirically about how the mind and brain work, and in particular she exhaustively knows every single scientific fact that bears on color vision: optics, how the retina works, how the retina’s signals get processed in the brain, et cetera.
However, Mary has had an unusual upbringing. Raised by philosophers (a sketchy proposition in the best of times), she has been kept in a monochrome room for her entire life and has never seen primary colors — never seen a red rose, or a green leaf, or a blue sky. (Nitpicky readers will notice that this would be actually pretty hard to accomplish — for one thing, she would have to have her skin painted monochrome, never look at a bright light through her closed eyelids and see the red color that results, never dream in color... but never mind. This! Is!! PHILOSOPHY!!!)
Eventually, her philosophical zookeepers take a day off from heroically pushing obese people off bridges, and they take her out of her monochrome cage. She comes out into the world and sees a dandelion for the first time, saying “Wow, so that’s what yellow looks like! Cool!” You can imagine all her other epiphanies for yourself.
The gotcha is that she appears to have learned something new — what it’s like to see yellow — upon being released. But we stated that she already knew all the scientific facts! Therefore, there must be facts out there that are beyond the reach of science, even in principle — mental facts, or subjective facts, perhaps. Facts about what it is like to have certain experiences. That’s the idea, anyway.
Various solutions have been proposed to this itchy little problem of Mary’s Room. Some of them appeal to the nitpicky objections I mentioned above — would it actually be possible to prevent Mary from having color experiences? Does neuroscience actually allow that the Mary experiment could be carried out?
While these are all good points, it seems to me that they miss the main thrust of the thought experiment. Nitpicky objections usually do, by focusing on an aspect of the problem that is unrelated to the real meat of the issue.
Another approach to solving the problem is to wholesale deny the existence of qualia. This has been attempted by several philosophers, and also recently by a LessWronger, whose post partly spurred me to write this one.
In principle I think it’s a healthy impulse to try and deny that some mysterious phenomenon exists, and see whether you lose any explanatory power. That approach can be applied productively to many problems, such as personal identity.
On the other hand, denying qualia still seems to leave me in the same state of bemused confusion that I was before. I still end up looking at a red traffic light and saying to myself, “That’s weird.” And the strategy looks kind of desperate — the moment phenomenon X creates a problem for physicalism, you deny phenomenon X altogether? Really?
One response to this among qualia-deniers is to appeal to the idea that intuitions are (very) fallible. Yes, this is true, but intuitions still provide prima facie reasons to believe in them. Being counterintuitive is not desirable, just acceptable. And ideally, if you tell me something wildly counterintuitive, it’d be nice if you explained exactly why it’s counterintuitive, instead of leaving it as a brute assertion that my intuition is wrong (here is a good example of a counterintuitive thesis that is explained beautifully).
One of the most popular approaches is to say that Mary is obtaining new knowledge, but that the knowledge is not about any new facts. This view can be termed the New Knowledge/Old Fact view.
I want to summarize this view... but I confess that I cannot make head or tail of it. Here’s how the SEP summarizes it:
(1) Phenomenal character, e.g. phenomenal blueness, is a physical property of experiences.
(2) To gain knowledge of what it is like to have an experience of a particular phenomenal character requires the acquisition of phenomenal concepts of phenomenal character.
(3) What it is for an organism to acquire and possess a phenomenal concept can be fully described in broadly physical terms.
(4) A subject can acquire and possess phenomenal concepts only if it has or has had experiences of the relevant phenomenal kind.
(5) After release Mary gains knowledge about phenomenal characters under phenomenal concepts.
Well, I am just an amateur philosopher, so it’s quite plausible that my trouble in understanding this view is a function of my own ignorance of the correct subtle concepts. But this seems to be saying that the perceived blueness of blue is a “physical property” of an experience. A physical property of an experience?! Um... that does NOT sound right at all; in fact, it seems more confusing than the original problem. But again, I am probably not doing justice to this view; if you know an alternative way of explaining it, I’d be interested to hear it.
The solution that seems to me to really solve the problem on a gut level is called the “Acquaintance Hypothesis” and is associated with Michael Tye, among others — see his article here. I am less interested in getting all the definitions and necessary and sufficient conditions right than I am in getting rid of the intuitive problem. So here is the general picture.
Mary does indeed learn something when she leaves the room. But what she learns is not propositional knowledge that can be stated in words. Rather, Mary becomes acquainted with brain states/mind states she has not occupied before. Obviously, her education in the neuroscience of color vision was not an education in learning to occupy new brain states — hence the radical disconnect between her education and her experience of seeing the blue sky.
Her new acquaintance with the redness of red now allows her to do other things, such as remembering what red looks like (by stimulating the same patterns of neural firing that corresponded to seeing red) and recognizing red when she sees it, and deciding whether red is her favorite color.
Similarly, you will never be acquainted with what it’s like to be a bat, but that’s just because you can’t occupy a bat’s brain state. It’s not because there’s some fact you don’t know about the universe.
I don’t know about you, but this satisfies me completely. After I hear this explanation, I look at a red traffic light and it doesn’t feel like there’s any mystery left over. One might draw a parallel between this and the mirror paradox: regardless of whether you can fully explain the geometry behind the mirror paradox, it just intuitively goes away when you get really close to the mirror and see each and every point being reflected in the same manner.
Now that we have this picture, we can see our way (albeit speculatively) towards how Mary might have acquired, via her book-learning, acquaintance with the redness of red. One way would be for her to perform some sort of meditations or mental exercises designed to guide her brain state toward the one(s) that correspond(s) to the color red. I have some doubts that this is really feasible, but its unfeasibility seems like a contingent fact about human brains, rather than bearing strongly on the problem of qualia.
Granted, this doesn’t solve every interesting problem about consciousness. One might still wonder about things like whether sufficiently advanced silicon-based robots, or Martians, or simulations of conscious creatures, could have qualia — and about whether they could be the same as ours (“yes on all counts” is my instinct). A nice discussion of such issues can be found here, in the SEP’s article on functionalism in philosophy of mind (as an aside, the SEP is a really, really good resource).
But leaving these other problems aside, I think I am ready to stop interrogating poor Mary and let her go on her merry way.
It seems to me that, as stated, the original thought experiment begs the question a bit. The wording and definition of "physical knowledge" quietly excludes the distinction about brain states here:
"But what she learns is not propositional knowledge... Mary becomes acquainted with brain states/mind states she has not occupied before."
Without acknowledging that the process of recognizing a color by sight is such a brain state, the original proposition about physical knowledge smuggles in the conclusion that, "There are some facts about human color vision that Mary does not know before her release." By calling this brain-state experience a "fact" and excluding it from the category of physical knowledge, the original argument artificially constructs a new category that can then fallaciously be referred to as non-physical.
But then again, most dualism arguments exhibit at best a disengaged attitude toward basic neurology.
Agreed! And, as I try to explain in my post below, whether or not such a girl could exist, or what type of brain she would have, or the effort involved in doing such a thing -- all add up to, not nitpicking questions, but a physicalist, neurologically based understanding of what the heck is going on. The thought experiment begins by locking the scientists out of the debate and just letting in the idealists. The deck is stacked before the "experiment" begins.
The other thing the thought experiment does is make a really faulty assumption that excludes an obvious answer. It claims that she knows everything about color and everything about how our brains react to it, but then it just assumes that this doesn't include the ability to predict her own experience. Well, if she knows her own brain so well, why shouldn't she be able to do exactly that? The premise doesn't include any reason why this possibility is excluded. The whole thing is just a mess.
I've always had trouble recognizing Mary's Room as an interesting problem at all. I agree with you that Tye's solution is satisfying, but I disagree with you that practical questions about the thought experiment are "nit picky" and "miss the point." Whether or not Mary could exist gets to the heart of it. Mary, being so smart, would know that she doesn't understand color, because describing an experience is not the same as having it. Who but a philosopher would confuse that issue? Let's step outside of the skull for a second, where smuggled notions like "self" and "consciousness" cloud the issue. What if Mary knew everything there is to know about breaking your leg except what it feels like? This doesn't trick us quite as much, because we aren't mystified if the brain understands something that the leg doesn't. Well, as it turns out, our brains have different regions and states. Our brains can understand and feel. Two different activities done in different ways and in different regions. Just as brains aren't legs, prefrontal cortices aren't occipital lobes.
Further, on an ontological level, seeing yellow is not the same as describing seeing yellow. And, it's obviously the case that one can't even describe or understand seeing yellow unless you have seen it. Remember, minds don't arrive fully formed. They freaking DEVELOP. All our minds, all our vaunted cognition was built up as we developed and is built up, from substrate up, from stimuli in, as we do the physical process known as thought with our physical brains.
OR, the other option, proposed by Dennett, is that if you insist on Mary somehow magically knowing EVERYTHING about seeing color, then when she sees yellow she would say, "Ah, yes, just as I expected."
If you are a physicalist, as I am, you are committed to understanding the world as much as possible in terms of things and not ideas. But why on earth would you think that understanding what is involved in having an experience is the same thing as having that experience? A physicalist description of the world doesn't exclude subjective experience.
Dennett's reply is good, but even it gets a bit confused if you try to stick to the thought experiment's artificial delineations of "knowledge." You almost get into an infinite regress: Mary has enough knowledge that she can accurately predict what her own experience will be, and then she gets the physical experience. Wouldn't the confirmation of her suspicions be a new item of "knowledge" that she previously lacked?
"I was right!" would be a new fact that she didn't know before, so there is a way in which her knowledge was incomplete - a way that does not fall into the non-physical category sought by the experiment's proposal. This means that the category of "complete knowledge" in the experiment's premise is logically flawed to begin with, because it is not really complete. She could, in principle, know exactly what to expect about the experience she will have, but a memory of that occurrence and correlation between it and her expectations would still be missing. That memory and confirming comparison would each, I think, count as knowledge, but they are sneakily excluded by the conditions of the premise.
Exactly. The concept of "complete knowledge" assumes that subjectivity/perspective/point of view are not part of "complete knowledge", but obviously they are. So Mary would have to see color to have complete knowledge.
It's the same trick non-realists pull when they get excited about information theory, as though the revelation that information is part of the world is somehow earth-shattering. They want to argue that information is special, but they end up using the fact that it ISN'T extra-physical to try to prove that it is fundamental. In fact, information (and minds) are relatively mundane manifestations of matter.
I will have a closer look at that Less Wrong reference, but under heading A the writer talks about unverifiable correlation between brain states and subjective experience. Actually, if the brain state is "identical" for two subjects, AND fully understood in its capacity to create the experience of awareness, we might begin to hypothesize that the subjects have the same experience (as defined from our knowledge of the brain state). Don't be defeatist about it.
This is important, because moving on he talks about the subjective experience of what it is like to see redness. This is a result of a brain state, but entirely informed by the rest of the anatomy, including eye receptors, as a neuronal facility to represent that anatomy both in thought and in the diverse real feelings (actual vision, for example) we have that are attended by thoughts. Awareness is thought and felt as a brain state upon sufficient building of impulses from the entire anatomy, 100+ milliseconds after the receptors at the eyes fire. It is experienced as the completion of a signal from the eye by firing in the brain, without a signal going back to the eye to say "now see". Touch at the hand is likewise felt as a brain event momentarily after the real stimulation, as a referral across the brain "as if" felt at the hand.
The writer does not present that model up front because he does not have real knowledge. He is getting nowhere at understanding the brain state and what "feeling", "thinking", and "what it is like" actually mean. Redness, for example, is both a firing of specific receptors (and correlating neurons eventually in the brain), AND the meaning given to that specific experience (a red color) by its association with everything red we have experienced in our lives. What it is like becomes what it has always been like for me to have been in proximity to red things.
This broader correlation extends beyond cortices that recognize and finalize specific color perception, by extending those signals across other cortices correlating to redness, including sunsets, apples, post boxes, and their meaning to us. The key to cortical processing is to first process "real" feelings of the senses (vision, sound, touch, smell, etc.) in real cortices before extending those signals across all cortices in a sensory-motor structure for attending thoughts that call up associations between all real signals (I see a photo of a red apple and can "almost smell it"). Read the middle chapters of my free book at www.thehumandesign.net for more.
I didn't comment much on your article Ian, but the same issue of a lack of knowledge would apply to yours too. The way to bust through little syllogistic abstractions is with knowledge. The "subtle concepts" are the problem, but I wouldn't be too pleased with your simple solution, as it is dead obvious, unsupported by clear facts about neurology, and doesn't properly deal with "what it is like" for Mary with no prior experience of redness. I am particularly interested in what that would be like for Mary, but that's an argument for another day.
These are not just theoretical problems. They are moral problems, too. Ideas have consequences. Monstrous ideas have monstrous consequences. Behaviorism, the idea that all that matters is how people act, led to the monstrous practice of lobotomizing patients in great pain, just to shut them up. After all, if there is nothing inside, what is the problem? An aggregate of particles cannot “really” feel pain, it can only produce an unpleasant noise.
The terrible truth was that the patients were still in great pain, but unable to complain about it, because of the lobotomy. If asked, they would still report that they were feeling pain. From Grantham and Spurling: 'Some patients may voluntarily complain of pain, but it seems to be of less disturbance to them than before lobotomy. Others will speak of having pain only when questioned.' So don't “question” them and everything is OK!
The problem of qualia is not just about pretty color. It is about the reality of intractable pain and of the need of doing something about it. It is a reality that has not gone away.
Dynes, J.B., and J.L. Poppen: Lobotomy for Intractable Pain. J. Amer. Med. Ass., 140:15, 1949.
Freeman, W., and J.W. Watts: Pain of Organic Disease Relieved by Prefrontal Lobotomy. Lancet, 1:953, 1946.
Grantham, E.G.: Frontal Lobotomy for the Relief of Intractable Pain. South. Surg., 16:181, 1950.
Grantham, E.G.: Prefrontal Lobotomy for the Relief of Pain with a Report of a New Operative Technique. J. Neurosurg., 8:405, 1951.
Grantham, E.G., and R.G. Spurling: Selective Lobotomy in the Treatment of Intractable Pain. Ann. Surg., 135(5):602–607, 1952.
Greenblatt, M., and H.C. Solomon: Survey of 9 Years of Lobotomy Investigations. Amer. Jour. Psychiat., 109:262, 1952.
Hamilton, F.E., and G.J. Hayes: Prefrontal Lobotomy in the Management of Intractable Pain. Arch. Surg. (Chicago), 58:731, 1949.
Love, J.G., M.C. Petersen, and F.P. Moersch: Prefrontal Lobotomy for the Relief of Intractable Pain. Minn. Med., 32:148, 1949.
Otanasek, F.J.: Prefrontal Lobotomy for the Relief of Intractable Pain. Johns Hopk. Hosp. Bull., 83:229, 1948.
Scarff, J.E.: Unilateral Prefrontal Lobotomy with Relief of Ipsilateral, Contralateral, and Bilateral Pain: A Preliminary Report. J. Neurosurg., 5:288, 1948.
Filippo, that is extraordinary, thank you. Yes, I agree that the problem of qualia has consequences, although I don't think anybody really takes that kind of cartoon behaviourism seriously anymore, thank goodness.
FYI, I am the author of this post, not Massimo.
Sorry Ian, I did not read the header of the post carefully.
Thank you for recognizing the extraordinary nature of this episode, which should be better known. The criminally stupid nature of those procedures can be recognized from the fact that the sensory areas of the brain are in the back, not the front. But the prefrontal areas can be easily reached with an icepick through the eye sockets. This was the MO of Walter Freeman, the main author of the chronologically first paper.
I am not so sure that such episodes could not happen again: today many "rationalists" are denying both consciousness and the significance of history...
One of the nitpicks can be resolved. If the lighting in the room is monochromatic yellow (like the old sodium street lamps), every photon she sees will be the same colour. Even the light through her closed eyelids won't look red. I have experienced those lights. Nothing looks red, green or blue.
That's true, that resolves most of them, although there is still at least one more I can think of. When I have my eyes closed in the dark, I can see little spots of blue & red (mostly). I don't know whether they are optical or neurological artifacts, but I do (seem to) see them.
Also, the yellow lights will give a purplish afterimage, I think.
That's a good point about afterimages. I suppose we will have to go the whole hog and intercept her optic nerves, blocking any signals that code for unwanted colours (we will know the code by then, right?)
The information that Mary has in her brain that relates to this intuition pump consists of abstractions. Forgive me if I overuse the word, but an abstraction perfectly embodies the concept that has the best potential for disambiguation.
Abstractions can and are transferred and translated to and from many mediums. The word ‘tree’ is the abstraction for a real tree, within a 'language abstraction framework'. We also have digital abstraction frameworks, where compressed models of reality are stored as 1’s and 0’s. The human mind is also an abstraction framework. An organic alien sentience could be considered as having an altogether different abstraction framework from that of humans.
Mary could spend centuries studying every book on earth, and “translating” as best she can the abstractions from one framework (language) to another (her mind). The abstraction framework of her mind has inputs unique to an organism. In other words, there are some concepts in the linguistic framework that are not translatable to the cognitive framework.
I like the intuition pump of considering the experiential journeys of a Transformer robot. First we must presume that a purely electromechanical “creature” could achieve sentience. Whether or not that presumption holds is another discussion.
The only way we know what’s going on in the robot’s head is through the things he tells us. Which means he must translate the abstractions from his robot(digital) framework into a linguistic framework. We could have a full understanding of everything he tells us, but we are still completely ignorant what it means to experience the world from within his abstraction framework.
But that isn’t a concern, because the things that cannot be conveyed are secondary qualities, and only apply to the abstraction framework that experienced them. We cannot tell a robot what it feels like for us to experience red, just as the robot cannot tell us what HIS experience of red is like. Such secondary qualia are only relevant in the abstraction framework that “experienced” them. By the time we utter the word “red”, we’ve already translated the abstraction into a linguistic framework, which is standardized and transferable.
Yes, I think this is equivalent to what I was trying to say. I like your intuition pump as well.
A strange use of 'abstraction', equating 'mind' to an 'abstraction framework' to no useful progress. A mind is not entirely abstract; it identifies realities in the world and responds to them (quite automatically most of the time - in reality). It abstracts solutions to realities and as amusements as well. In any case, to abstract 'red' has nothing to do with an experience of seeing red or talking about it technically (unless the technical has no relation to reality - which would make it somewhat useless).
Then there is a jump in logic to translating from one entity to another by technical language to fit the real experiences of the other. Clearly Mary would have no idea what's in a book unless she understands the language and concepts, and could not experience it unless she had color perception. If Mary has no color perception, it would lessen the value of the technical language, which remains real beyond Mary's limitations in applying it toward a fuller understanding of red. Forget about aliens and making a further jump to sentient beings from nowhere to take the place of Mary.
Ian is correct in saying Joe grasps the idea that objective technical language is not the same as the subjective experience, which is all that Ian says. And Joe does not deal with Mary's experience of what red is like for the first time and how that is achieved neurally, just as Ian does not, which makes the story obvious and uninformative in each case. But Joe adds nonsense to the argument to puff it up, which Ian seems to have avoided for the most part. Unfortunately Ian hasn't made the effort to correct the errors for the benefit of other readers. Nonsense on stilts.
What are you looking for when you ask what "Mary's experience of red is like for the first time"? I see two possible interpretations, sorry if there are more.
If you're asking what it's like for Mary to 'experience' seeing red, then your question can't be answered in Mary's case. The brain-state of experiencing red cannot be duplicated either through the transmission/acquisition of propositional knowledge or through its internal recollection. The two modes of knowledge do not translate (they interact through association). There are no possible words that could elicit the same brain-state, therefore no possible words to elicit the corresponding mind-state.
But they can be associated. Which is what I think your question refers to. I can explain to someone (through association) what it's like to smell Patchouli, even if they have never experienced the smell.
Or I could explain to someone who has never felt inebriated what it's like to feel drunk, by referencing the associations. But 'what it's like' to feel intoxicated is not the same thing as actually feeling intoxicated. That is a mind-state that propositional knowledge cannot duplicate.
Propositional knowledge can get "close" to the appropriate mind-state through association. For example, I could say that Patchouli is musty. But to activate precisely the same brain-state, I would need to introduce the chemical into your nose. There is no other way for the exact same brain state to be activated using propositional knowledge.
For Mary, she has nothing even 'close' to the experience of red to use as an associative anchor, let alone the token phenomenon of seeing red.
Part of Mary's repertoire of physical facts is that in human biology, the brain-state for seeing red cannot be duplicated (even by proxy) using propositional knowledge. The only method left is to remove her eyeballs and stimulate the appropriate neural pathways, thereby artificially reproducing what it's like to see red. If the brain-state of seeing red is one of the facts that Mary knows, then she would know that she is missing knowledge until the surgery is performed.
As for how "seeing red" is achieved neurally, that's a science question. Are you requesting references to scientific literature regarding neuroimaging of the brain while seeing red? Is said scientific information necessary, seeing as how much of the accepted philosophical vernacular consists of placeholders for such mechanisms?
You have merely said here that the subjective experience of seeing red is different from objective book knowledge, which is what Ian says, and is obvious. Upon seeing red for the first time, Mary relates the book knowledge to that experience. It would bring the simple relationship between prior book knowledge and later experience into focus as an exciting process of discovery for Mary.
The relationship between an immediate experience and existing book knowledge is resolved neuronally to inform Mary about the experience. Ian and Less Wrong avoid the brain-state as they do not have the knowledge. It has nothing to do with shallow neuroimaging or shallow confidence about supposed placeholders based on an unexplained process. It is a process that requires explanation rather than ignorant avoidance.
If you're attempting to bring a problem into focus Marcus, I'm not seeing what that problem is. Again, are you requesting that the shallow placeholders not be placeholders? That I should reference the scientific literature that precisely explains the associative mechanism between experiential and propositional knowledge?
You're missing the nuance. I didn't only say they are different, but that no association with propositional knowledge can duplicate the experiential knowledge. What are you asking for when you ask for an explanation? Are you requesting that propositional knowledge recreate the experience?
Be clear what you're asking for: "Joe does not deal with Mary's experience of what red is like for the first time."
Read my initial post and explain the brain state that creates the experience of both seeing red and thinking about book knowledge to apply it to seeing red. Clearly you do not realize how shallow your analysis is, and perhaps Ian likewise, as he hasn't responded. I've explained what I mean by what it is like for Mary, and if you cannot even understand the issue, there is no way you can explain the brain state, so it has become a rhetorical question anyway. Forget it.
"Read my initial post and explain the brain state that creates the experience..."
No. That's a science question, and this is a philosophy thread. Stop being thick; the placeholder can and does suffice.
The problem of Mary's room seems already to have a solution I agree with, albeit using different words. Thanks Ian.
You are a fool if you think I am thick. And you are a fool if you think anyone can know what it is to be Mary without an objectively knowable brain state to compare. Without it, you are just babbling about the difference between objective & subjective knowledge, and awfully done in your case. Abstraction machines, intuition pumps, digitized processes ... they are all bunkum, nonsense on stilts. You clearly have no idea about the fundamental connection between supposedly philosophical analyses like Ian's and the science it is meant to enlighten. Consequently, you drift off into a mire of confusion. Good luck with that.
I should add for the benefit of readers (as I doubt you will understand) that the appeal to philosophy over science in a blog discussing subjective creation of awareness by a brain state is artificial and ridiculous. If you want to discuss empirically baseless philosophy, go ahead, but don't foolishly try to narrow the enquiry to your own limited terms.
Fillippo, pain is no different from any other feeling except it is intense. We are talking about the subtleties of color perception and the meaning of color to an individual, but if you want to talk about pain more generally you might begin with a basic neurological analysis of how particles create it and how lobotomized patients supposedly feel it. If a section of the brain is missing and is required for a signal of burning from the hand to be completed for feelings of pain, there is no feeling of pain. There is still burn damage to the hand, and it is a dangerous state to be in if we can't feel such pain, but it ain't pain. You will need to revise your analysis or come up with something concrete (other than a lengthy boring bibliography which may or may not address that specific issue).
Joe, you overuse the word abstraction, which might have caught your attention in recent threads. Digitization also appears to have caught your attention. They are irrelevant to the issue of the neurological creation of a specific perception, and even to the meaning given to it by association with every red thing we have ever experienced. Abstraction is relevant if Mary employs abstract ideas to reality, which does not apply to Mary in this case. Your intuition pumps, transformers, and digitizations are pseudo-scientific pulp. Hit the papers on neurology for a few months solid and see how you go.
These are realities.
This is a philosophical discussion Marcus, not a scientific one, pseudo or otherwise. Sorry if my previous comment struck a nerve.
The appropriate critique is that the concept is better expressed as information rather than abstraction. I was hoping to avoid the ambiguity of the term, since the conceptual understanding includes mathematics, physics, and statistics.
The neurological creation of a specific perception is part of the process of abstracting red from our environment.
I think the problem is that your example doesn't strike any nerves in the real sense of the word.
Ian Pollock: "But what she learns is not propositional knowledge that can be stated in words."
A few questions:
Am I to take this as meaning that there is no fact of the matter as to the nature of experiences BECAUSE it can't be stated in words (that is, all real "knowledge" must be capable of expression), or simply that knowledge of experiences does not constitute knowledge of any particular fact, irrespective of its capacity (or not) for linguistic expression?
If the former: why would a lack of capacity for linguistic expression entail that experiences are not particular facts of what is actually the case? What is the connection between legitimate factuality and linguistic expression, if any?
If the latter: why shouldn't I consider experiences of things as constituting factual knowledge of those things, or (more precisely) factual knowledge of those experiences themselves; namely, that any particular experience has THIS qualitative character rather than THAT? How does gaining an ability automatically negate the factuality of experiences? Couldn’t all of those nifty new abilities you mentioned follow from the fact that GIVEN that now Mary knows WHAT red IS, rather than what it is not, experientially speaking, she gains a slew of abilities as a consequence? This doesn’t seem particularly implausible to me, since many abilities, in order to be learned require factual knowledge, in addition to practice etc...
>Am I to take this as meaning that there is no fact of the matter as to the nature of experiences BECAUSE it can't be stated in words (that is, all real "knowledge" must be capable of expression), or simply that knowledge of experiences does not constitute knowledge of any particular fact, irrespective of its capacity (or not) for linguistic expression.
The latter is what I had in mind.
>If the latter: [...] factual knowledge of those experiences themselves; namely, that any particular experience has THIS qualitative character rather than THAT?
Good question. So the claim is that when Mary sees the blue sky, she becomes acquainted with phenomenal blue AND learns the fact that phenomenal blue is like such-and-such rather than like so-and-so?
The really important point here, I think, is to replace the "such-and-such" and the "so-and-so" with some concrete stand-ins, and to notice that "like" so-and-so implies a comparison with other phenomenal experiences.
Perhaps the experience of blue carries with it (for her) positive emotional phenomenology. So she can say "now I know that blue experiences have similarities with experiences of pleasure, rather than pain" - or something broadly like that.
Is that what you have in mind when you talk of the "fact that phenomenal blue is like such-and-such rather than like so-and-so?"
If it is, then I think that my response would be that her knowledge before exiting the room is sufficient for her to know that fact ALREADY, given her ex hypothesi exhaustive knowledge of her own neurology. All she needs to know is that blue_sky_brainstate will trigger her pleasure centres, or something roughly like that.
To summarize, I think that Mary's full knowledge of the functional organization of her brain should allow her, before leaving the room, to understand the *relation* of phenomenal states to each other. She just isn't acquainted with them.
Nonetheless, as I say, you raise a good question and I am not sure that I am wholly satisfied with my answer.
it's simple Ian, it's information, nuthin more nuthin less. Climb on board - you can do it! Chalmers did but will not commit in print (I don't know why). Look at his musings on how it may well work in the speculative Chapter 8 of "The Conscious Mind" (1996)
Dave, I think your theory needs more moving parts than just the word 'information.'
(Maybe it has them, and that Chalmers article explains?)
It would be handy if you could attempt a theory of your own, Ian. A theory as opposed to obvious syllogisms would be a good start. I'm not sure what your essay achieved except to promote Less Wrong as a source of inspiration to solve straw arguments. What the Less Wrongers avoid is the brain state (as explained by me in my first post above). They have no knowledge to share about neuroscience, just useless word games.
Apologies for delay in responding, Ian. Did not want to add the 8 or so moving parts my theory of information comes with, at least not here, but I guess it's fair to say qualia is only a problem for those with an ontological orientation. For those who would say "not only is my experience of red possibly unlike yours, but my sense of anything may not necessarily map to your sense of the same thing" Why? Because that thing does not exist in a vacuum, ergo it doesn't even exist. And while I cannot prove objective stuff doesn't exist, I feel that just like dealing with the question of gods, spirits and whatnot, the burden of proof is upon the realists to prove that stuff exists. Why? Because antirealists (?) can always describe everything in terms of only information, without jettisoning the explanatory powers of current sciences. Given all the things we know that we don't know, I think realists are the ones who need to defend their weltanschauung (I've wanted to use that word for the longest time).
The type of information that is propositional knowledge is transferable. But a weak stimulus sensed by an organ is a form of information that can't be transferred until it is associated with its propositional counterpart.
Sensory information is only connected to propositional information by association. Without experiencing red, there was nothing for Mary to associate the propositional knowledge to.
<(4) A subject can acquire and possess phenomenal concepts only if it has or has had experiences of the relevant phenomenal kind> - Part of the SEP argument paraphrased by Ian.
If Mary were to know everything there was to know empirically about how the brain and mind work, she would know that some information can only be obtained by sensory input, as the format is different from that of propositional information. They can be associated only after you obtain the information of red in both its forms - experiential and propositional.
The reason she was missing information isn't because there is no physical medium for it, it's because the types of information are stored differently.
A thousand ways to say the same thing. DaveS, do you have any up to date suggestions for further reading?
I can't see how you have moved on from your obvious argument differentiating book knowledge from personal experience, except to now use 'information' as meaninglessly as 'abstraction'.
Joe - sorry most of the stuff that I know of is at least ten years old. Maybe Max Tegmark has come out with fresher material. I can say the most powerful work to come out in the last 2 years which seems to echo the way I think the world 'works' would be a novel by Mark Leyner titled "The Sugar-Frosted Nutsack". I think a few others came out this past year that draw on similar 'postmodern' themes, but I have not read them.
I sympathize with you Ian. I struggled with the qualia arguments throughout philosophy of mind.
If it's any consolation, the argument I wound up constructing for myself had Mary's unbounded knowledge carrying over to her own brain states, such that she already knew about the phenomenal character of colors, despite never having direct experience of them, because she was effectively running a "virtual machine" of herself that captured the essential subjectivity of visual processing.
(I didn't say it was a particularly good or defensible argument of course ;-) )
That's a hypothesis of sorts, at least in proposing some kind of mechanism to relate objective and subjective knowledge (but strangely). I wouldn't worry about the distinction, as it's just a matter of learning generally. You read about things that have objective reality (supposedly) and you apply your own experiences to that objective account and vice versa, to improve both the objective account and hopefully also your experience of it from that greater knowledge.
This is one of the most common issues imaginable, resolved by each of you every time you read something. Where you might usefully hypothesize is about specific neural processes creating the brain state we experience as awareness in both identification (red) and in thought relating to those identifications. I can't follow the mechanism you propose (and I don't think it's necessary anyway), but at least you have returned to the brain state, which is the issue avoided by Ian in his preference for useless syllogisms to distinguish objective from subjective.
I often find that when it comes to discussions pertaining to the mind, clarity is often more difficult to achieve than in other contexts because of the peculiarity of the subject itself, and people often end up talking past each other, so first off, let me just say thanks for the clarification. Much obliged.
Now, you said: “So she can say "now I know that blue experiences have similarities with experiences of pleasure, rather than pain" - or something broadly like that.”
This is not quite what I was getting at, although I should state that I don’t think it unlikely that given Mary’s hypothesized neurological omniscience she would be able to know the relationship of different phenomenal states to each other. She could look at state x and, although not knowing “what it is like” to BE in that state, nevertheless understand that it would engender state y, which has positive emotional salience because of an association with a reward system or something, whether or not she knew “what it is like” to actually be in that state either. Correctly mapping relations of states without knowing “what it is like” to be in those states, isn’t much different from mapping mental states to brain states generally. (Or so it seems to me).
What I’m trying to wrap my head around, is the idea I take the ability response to be advocating; namely, that because Mary learns an ability, it must also be the case that she is NOT learning a fact, which is a line of thinking I find strange for a number of reasons.
One: it doesn’t seem to follow that simply because she does as a matter of course gain an ability, that it also means she isn’t learning a fact. People learn abilities in consequence of what they know factually all the time, no? If I didn’t know a certain set of facts, I wouldn’t be able to play my guitar very well, but knowing how to play doesn’t mean there are no facts of the matter about my guitar. Why shouldn’t the same hold for experiences, unless experiences are wholly different things from mere facts?
This brings me to point two: I find it difficult to conceive of experiences as being anything less than facts about THE WAY THINGS ARE, in the most primordial sense, which I think is why it might sometimes be difficult to conceive of them as facts at all, because we normally think of the truth-satisfaction of states of affairs in terms of the satisfaction of certain physical or phenomenological criteria (i.e., is the apple 30 inches long? Is it red?) and fail to recognize we can ask broader questions about the existence of those criteria at all.
I doubt anyone thinks it’s controversial to claim it can be true or false that an apple is red, right? That is, it can either be, or fail to be, that an apple is red. Likewise, it seems clear that it can either be the case, or fail to be the case, that when we both perceive something, that it is the same exact experience. That is, it can be true or false whether or not we experience the same colour (maybe one of us has jaundice or whatever). Similarly, isn’t it plausible that it can be the case, or fail to be the case, that the colour experience “red” or the note “F#” or whatever, IS one of the experiential elements of the world, so to speak, rather than NOT one of those elements? Not merely that it is like THIS, rather than THAT.
If the previous paragraph sounded like gibberish, let me just try to explain by way of analogy before I shut up. Imagine a paint program, with copious amounts of little creations in it. Surely it makes sense to ask questions about all the things in it? We can ask whether or not something is blue or green, or this long or that wide, or big relative to something else. But can’t we also ask the following: of what does our paint palette consist? Colours xyz, or pqr, and is it not a fact of the matter whether or not my paint palette elements are THESE ones rather than SOMETHING ELSE? If so, then it seems to me that to know "what it is like" is surely to know a fact, if nevertheless a strangely primeval and pre-linguistic one.
It's a fact as identified using a brain-state and therefore real to the individual (whether or not red looks blue to 'Fred' because of unique receptor configurations). Objective reality, by corresponding to what others subjectively identify as the usual red, would need to be agreed by them, as it's the only way to satisfy an objective application to their various subjective experiences. Fred would be left to surmise from objective book knowledge about his unique receptors that he might experience 'objective' red as 'subjective' blue, but he would be locked into the limitations of his subjectivity nonetheless.
Don't get sidetracked by confused use of the word 'fact'. A fact is what is subjectively identified as a reality AND what is objectively identified in a book as realities about those subjective experiences (that seeing red is a consequence of receptor processes and so on). Alternatively, the subjective experience can be abstract (seeing a ghost) or the book knowledge can be abstract (a wild hypothesis or rambling). Being a product of a real neurological process, abstracts only have reality to that extent, which is problematic as some people live their lives in a cloud of abstracts and are a danger to others, while some are just fools as a result.
The problem here is that there is no way to talk about how a given brain state causes a given experience of color. There is no way to talk about experience at all.
If I didn't have experiences then all the discussions of experience would not parse at all and nothing could convince me that others have experiences. The flip side is the only reason I think others do have experiences is by projecting the fact that I have experiences on others.
No way that you know of to talk about how a given brain state causes a given experience of color. That would be the aim of neuroscience, in its infancy, so don't give up hope. Consequently, as you say, you are left with assuming their subjective experiences are similar or comparable to yours.
Actually I would say that there is no way even in principle unless you massively change how science is done. Science is about the objective. Asking about the nature of, mechanism for or even the existence of the subjective breaks the rules.
It is understandable that behaviorists deny that there is any scientific question at all here. But this will never be a very satisfying answer.
It doesn't break the rules, because the subjective always remains exactly that, but if the brain state creating it is fully understood then there is a way to objectively assess it. As I said, you don't know what neuroscience holds, or if you do you haven't demonstrated it. There is no absolute way of knowing the subjective but there might be ways that take us very close, which would be the aim. Don't lose hope.
No, if the brain states are fully understood then we can predict the pattern of brain activity and thus predict and "explain" objective behavior. That explanation cannot tell us how it feels or even if it feels. That's subjective and you have to live it to feel it.
If we fully understand brain states then we have the capacity to program a computer to produce exactly the same pattern of "states" and thus produce the exact same objective behavior. Does the program feel?
"It doesn't break the rules, because the subjective always remains exactly that, but if the brain state creating it is fully understood then there is a way to objectively assess it."
You have simply assumed your conclusion. I don't know if there is an accessible answer. Maybe there are some things that simply aren't knowable. OTOH I never claimed we would never find an answer. I only said that any answer would break science as we know it. I actually think that would be kinda cool.
No, I said "There is no absolute way of knowing the subjective but there might be ways that take us very close, which would be the aim."
You are making the mistake of assuming what an objective understanding might contribute. You have no basis for saying that knowledge can be translated into a computer. You are just making two unfounded statements.
I have not assumed my conclusion (which is to aim at a more informative objective analysis by neuroscience), so that would be a third unfounded statement in quick succession. I will leave it at that, as the discussion has completely dried up.
" You have no basis for saying that knowledge can be translated into a computer. "
Yes I do. It's called the Church-Turing thesis. Computers are universal.
Now whatever else the brain does it absorbs, processes and outputs massive amounts of information. Whatever else the brain is it is also a computer. Your ability to learn and do calculus is an algorithm. Your ability to stand and walk without falling over is an algorithm that balances your body without you even thinking about it. Your ability to convert the information in the light that hits your eyes into a model of your surroundings is an algorithm. Your ability to decode patterns of sound into words is an algorithm. You could not do any of these things if you were not a computer. Understanding how you do these things involves understanding an algorithm.
Now you are free to claim that the brain is also something else in addition to being a computer. But there is no objective empirical evidence that you need anything more than a computer to do all that the brain does. In fact the idea seems to break the Church-Turing thesis.
In general understanding anything as a process involves converting it into an algorithm. Understanding a tornado means reducing it to a mathematical model that can be programmed into a computer. If you cannot create such a model then you have not understood the tornado. Ditto brains.
If there is something that cannot be programmed into a computer then there is an aspect of the universe that is not algorithmic. If it is not algorithmic then it cannot be understood as a process even in principle and you break science.
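[Editor's aside: the "balancing your body is an algorithm" claim above can be made concrete with a toy sketch. The example below is my own illustration, not anything proposed in the thread: a linearized inverted pendulum held upright by a simple proportional-derivative feedback loop, with all constants chosen arbitrarily for the demonstration.]

```python
# A toy sketch of "a process reduced to an algorithm": a linearized
# inverted pendulum (a stand-in for balance) stabilized by a
# proportional-derivative controller, stepped with Euler integration.
# Gains and physical constants are arbitrary assumptions.

def balance(theta=0.3, omega=0.0, dt=0.01, steps=1000,
            g=9.8, length=1.0, kp=30.0, kd=8.0):
    """Return the final tilt angle (radians) after running the loop."""
    for _ in range(steps):
        torque = -kp * theta - kd * omega       # feedback law
        alpha = (g / length) * theta + torque   # linearized dynamics
        omega += alpha * dt
        theta += omega * dt
    return theta

# Starting 0.3 rad off vertical, the loop drives the tilt toward zero.
print(abs(balance()) < 1e-3)
```

Whether such a model "understands" balance in any deeper sense is, of course, exactly what the thread is disputing; the sketch only shows what "reducing a process to an algorithm" looks like in practice.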
The first task would be to understand the brain in its own terms, and computational ideas might be useful or misleading in that task; we shall see. In your analysis, anything that has a process is a computer, including tornadoes. That's too broad to be useful. The material of brains, in their own terms in their natural settings of other material things, might not be mechanically reproducible even if understood as a process.
Your use of the term 'algorithm' does not add anything to the transition of brain material (and its crucial context of anatomical & environmental material) into silicon. Basically 'algorithm' means 'understandable' in your analysis. The algorithm, or process, would depend entirely on the conditions of application (neurons, anatomical functions, environments).
The translation to silicon does not automatically follow from there being a process, and it would be better to focus on the first task at hand - dealing with natural stuff - assisted if relevant by some computational concepts. The focus needs to be biological, not silicon, if it's biology we seek to explain. I don't see any basis for your confidence, and many philosophers (Dennett, Searle etc) are highly skeptical of beliefs such as yours, so I will leave you to it.
" The first task would be to understand the brain in its own terms,... "
Sorry, I can't attach any meaning or sense to that at all as it seems to be a semantic swamp. To the extent that I can make sense of it it seems wrong. I don't worry about whose "terms" I describe things of the external world in. I only worry about increasing the predictive and explanatory power of those descriptions. I can assign no meaning at all to the ownership of "terms". One of the problems with human language is that it is so easy to say nothing intelligible at all.
Beyond that I think you have such a narrow concept of computers that it makes communication difficult. It is true that modern computers are implemented on silicon chips but silicon has nothing to do with the definition of a universal Turing machine. Past computers were made with vacuum tubes and even paper card shuffling devices reminiscent of Searle's Chinese room. Future computers may be made of carbon nano-tubes or even protein neural nets. Whatever is used it is only an engineering choice that does not affect the definition of a UTM.
You should not think of a UTM or computer as a physical device at all. Rather think of it as a way of defining a class of symbolic languages. The Church-Turing thesis tells us that all these languages are in a deep sense equal to each other and that there is no more powerful language.
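[Editor's aside: the universality point above - one machine that runs any other machine given as data - can be illustrated in a few lines. This is my own sketch, not anything from the thread: a single general-purpose interpreter that runs ANY Turing machine supplied as a transition table. The sample "bit-flipping" machine is an arbitrary assumption for the demonstration.]

```python
# A minimal sketch of the universality idea: one interpreter runs any
# Turing machine given as {(state, symbol): (write, move, next_state)}.

def run_tm(table, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a Turing machine; return the non-blank tape contents."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != blank)

# A sample machine (an assumption for illustration): flip every bit,
# halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flip, "0110"))  # -> 1001
```

The point of the sketch is only that the interpreter is indifferent to which machine it is handed - which is the sense in which a UTM defines a class of symbolic systems rather than any particular physical device.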
Searle would strongly disagree with what I have said here but then he was...
Let's just say that I don't have a high opinion of Searle.
Dennett would just about agree with everything I have said. He would just not be troubled because he doesn't think questions about experience are meaningful. That is a very unsatisfying answer but I see no alternative that does not break science.
You are taking the wrong approach to answering the question. As explained, brains operate within anatomies and environments. To understand their functions, that context must be understood. That's what understanding means: attributing facts to the mathematics (if math is used as a tool to describe neuronal functions, for example).
Reducing to an algorithm, as math, for example, is dependent upon the things, the facts, the terms (the number "1" or "2" as units of acetylcholine, not as some algorithmic abstract). Without the reference to real terms - the facts to which the math applies - you have meaningless self-consistent equations. It's that simple. Symbols are just abstracts unless applied to realities - so work on realities.
I would like to see an example from you of an algorithm that attempts to explain the brain state. If you can provide one, I will comment further. Otherwise, just refer back to the previous posts on OSR, Krauss & Smolin for abundant argument on what constitutes reality, abstracts, information, digitization etc. I won't repeat it here.
Better still, provide an example that is in pure mathematics or symbols without reference to real terms such as 1 unit of acetylcholine, or the timing of synaptic potential. Your analysis is just abstract, unless of course you can provide any kind of example. You have broken science by removing facts in preference to theories that might or might not have application to computers, but are purely abstract in application to humans.
Dennett would not make the jump from a computer algorithm to the human mind as you have done, so I have no idea where you got that idea from. The manufactured basis for the computer is irrelevant. Science is better served by working FROM the real thing, not FROM an abstract. You can propose in abstract that Church-Turing applies to biology, and good luck with that abstract. Your arguments make no sense, so I will check out.
" Dennett would not make the jump from a computer algorithm to the human mind as you have done, so I have no idea where you got that idea from. "
Keeerist man! The whole basis for the feud between Searle and Dennett was their difference over the jump from computer algorithms to minds. Searle constructed the Chinese room argument to ridicule the idea that formal systems could function as minds. Dennett defended the idea. The feud got so bad that they stopped talking at all. You seem to have no knowledge of the history of the debate at all.
" I would like to see an example from you of an algorithm that attempts to explain the brain state. If you can provide one, I will comment further. "
Any attempt at all to explain brain states is pretty much by definition algorithmic. See this for example:
I think there is a group trying to raise the money to implement an entire brain on computer at a neuron level. I don't think it is currently practical but large portions of a cat's visual cortex have been implemented on a supercomputer.
I have never read how Dennett shows that the brain state is artificially reproducible. Provide the reference. You are just avoiding the point. Any "process" is by definition "algorithmic", but that gets you nowhere toward creating a computer with a human mind without neurons, anatomies & environments. As I have said repeatedly, that is a woefully insufficient argument. You rely merely on the connections to it being "a process", and the obvious recognition of that point by Dennett.
The issue you have avoided all the way along is the terms of the algorithm: the real facts to which it applies. The example you referenced is the usual dime store example of math applied to biological facts to hopefully find some parallels to assist computer function. There's no problem using humans to get ideas to advance computers in various ways, but no computer can reproduce the human mind.
So it boils down to you providing an example of a computer that reproduces the human mind - but you haven't demonstrated an understanding of how the human mind is constituted in the first place! Read the middle chapters of my free book at www.thehumandesign.net to see the argument in real terms. If you have a way of reproducing the human mind artificially, the world would be extremely interested. Until then, don't simply assume that an abstract applies to reality unless you can prove it. QED
" I have never read how Dennett shows that the brain state is artifically reproducible. "
You are moving the goal post. I never claimed that Dennett showed any such thing and you didn't ask me to show any such thing. Your original assertion was that Dennett never argued that a brain could be reproduced on a computer. For example here:
" I don't see any basis for your confidence, and many philosophers (Dennett, Searle etc) are highly skeptical of beliefs such as your, so I will leave you to it. "
" Dennett would not make the jump from a computer algorithm to the human mind as you have done, so I have no idea where you got that idea from. "
Dennett is not skeptical of this and he did make exactly such a jump. He is a founder and most vocal cheerleader for this jump. That you have somehow missed this shows that you are ignorant of the major elements of the debate. Its like arguing that Einstein didn't believe in relativity.
" So it boils down to you providing an example of a computer that reproduces the human mind... "
Moving the goal post again. No computer on earth has the computational power of a mouse let alone a human mind. So no there is no mind in a computer currently. But you can have very strong arguments that it is possible even if implementing it is currently impossible. For example the Higgs boson was predicted like fifty years ago.
You seem concerned that an algorithmic description of a brain contains no information about neurotransmitters, neurons, axons or glial cells and so cannot explain a brain. In an absolute sense you are correct.
But the same argument applies to a computer. A programmer's model of a computer tells us nothing about semiconductors, electron holes, capacitive inductance, magnetic domains, or the giant magnetoresistance effect. A modern computer uses all of these and you have not fully understood it until you understand all of these.
But in another sense none of these are important. Once you have a programmer's model you can in principle implement it without these things. You could even implement it on a 1940's-style card shuffling machine.
Ditto a brain. In order to fully understand a specific brain you do need all those little details. But once you have a symbolic model you can implement it in some other substance. A silicon chip maybe. Or even a card shuffling machine.
Well, that's the implication of the Church-Turing thesis anyway. So what if the Church-Turing thesis is wrong? What if Church-Turing does not apply to biological systems? Well, maybe. But it does massive violence to physical law and what we think we know about the universe. That's what I mean by breaking science. I would love to see it happen but still I must assign it a very low probability.
I don't see a specific reference to Dennett or quote by you, but if he says his multiple drafts model can be reproduced by a computer, he is abstracting along with you. That model is an attempt at defining the process or 'algorithm' with reference to biology (neurons, anatomies & environments) and he does not attempt to translate his proposal to a computer as far as I am aware.
Your exasperation is from your own evasiveness, and there is no goal post shifting, that's more evasion to get away from your fundamental flaw. It doesn't matter whether Dennett has 'said something' about translating to computers or 'expressed hope'. The issue is whether he put anything forward other than a model tied to biology, and he has not to my knowledge. In other words, he has not bridged the gap at all.
I don't support Dennett or Searle, and they don't support each other, but no one has bridged that gap, and that is the point you keep evading with your side steps. Nothing supports you, because you have your methodology back to front. First: model from real biology. Second: compare with models from computer science to see if they can be reconciled. You have shown no knowledge of real biology, preferring to jump straight to computers and purporting to apply that knowledge to biology without understanding the biology in the first place. You are going at it bass-ackwards.
You seem to argue that 'support' amounts to 'cheerleading' for your guesswork. Dennett might be your biggest cheerleader for all I know and care. If he has done any actual work to bridge the gap, by all means reference it. He doesn't have a neuronal model that is any more viable than Searle's or anyone else's, so you should concentrate your efforts on devising such a model before making an unfounded jump. Develop some working knowledge of the facts of biology instead of computing alone. Good luck with that.
This has become as much about method in blog exchanges as it is about methodology in science. The marker is: "You have no basis for saying that knowledge can be translated into a computer." "Yes I do. It's called the Church-Turing thesis. Computers are universal." You then explain C-T without reference to anything real (a processing computer), so in the next post I argue that such a processor is not proposed by any philosopher who works FROM the biology (Dennett, Searle, etc.).
As I say now, however, if Dennett has said a processor can reproduce the biological human mind, I will be most interested to write to him about it. But back to your error: you then persisted in quoting my Dennett references to support the possible connection of algorithmic mathematics (for example) to our biology, which is not the issue. As explained, 'process' and 'algorithm' might be interchangeable, and C-T might or might not be useful in understanding the biological process, but that does not mean a computer can stand in for neurons, anatomies & environments.
As scientific methodology, the way in is from the biology, using computational theory as an adjunct that is always subject to the potentials & limitations found from the biological model. Good luck with C-T as an adjunct; you make some useful comments about computation. However, looking at the biological side, I cannot see a reproduction by a computer, and I doubt a theoretical program could be written for one (it might be meaningless anyway if it cannot be proven by implementation, but I don't want to get into that).
The reason I quote you on Dennett is to point out that you are wrong about Dennett's stance. Grossly and perversely wrong. Dennett is the man who stood before a crowd and proclaimed that humans are really mindless zombies following deterministic rules like a computer program. He is basically a behaviorist who denies that there is any scientific issue at all in qualia. It is just an illusion.
One of Dennett's famous thought experiments involves replacing a person's neurons with computer chips. But he does it slowly, one at a time, as he is talking to the person. Dennett's position is that there would be no functional change as the person's brain was transformed into a network of computers, since that is all it was anyway. Searle's position is that the person would continue to talk and act normally but would stop experiencing any qualia. They would become a mindless zombie that simply imitated consciousness.
I don't need to refer to the specifics of biology in order to assert that Church-Turing applies to biological systems, any more than I have to refer to the specifics of biology to assert that the law of gravity applies to biological systems.
Church-Turing isn't about the biological or abiological. It is a limitation imposed by the laws of physics as we think we know them. You cannot break it without breaking the laws of physics as we know them. It applies to any physical object.
Well, maybe we are wrong about the laws of physics. It could happen. But if so, you will have to show where. You can't just assume it and insist that others prove you wrong.
There actually has been research on how you could change the laws of physics to allow a more powerful model of computation.
Then Dennett would say that a computer can reproduce the human mind. I will have a look at his model some more if that's the case. That's abstract nonsense, of course, given that he has not even made a dent in the biology with his model. However, it's probably to be expected, and I should have anticipated he might claim some such thing. There is absolutely no basis for saying that 'computer chips' can replace the neurons, anatomies & environments, but we can only wait and see if he makes good on his abstraction.
However, the methodology remains. If you want to reduce the mind to physics, there is a long way to go with that. Thermodynamics, for example (a real scientific fact), would apply to the particles & fields making up our anatomy & environment, so there would be a very basic level of application there. That is a clear scientific fact that applies. Computation using C-T principles might be possible, but how and to what extent is an open issue without a clear biological model to apply it to. So begin with the biological model in the usual way to make progress, as explained.
Just looking quickly at C-T on Wikipedia: 'a function is effectively calculable if its values can be found by some purely mechanical process ... carried out by a machine'. So computability is equated with calculability, which is an easy adjustment of terms, but it must be able to be carried out by a machine, which is the real problem. I have no issue with trying to calculate mental processes (including qualia), but translating that calculation into a code for a computer to implement is science fiction.
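For what that 'purely mechanical process ... carried out by a machine' amounts to concretely, here is a minimal sketch (my own, purely illustrative): a few-line Turing-machine simulator whose rule table computes the unary successor function. The machine follows its table blindly; nothing in it 'knows' what the tally marks mean.

```python
def run_tm(tape, rules, state='start', blank='_'):
    """Run a Turing machine given as a rule table.

    rules maps (state, symbol) -> (symbol_to_write, move 'L' or 'R', next_state).
    Halts when the state 'halt' is reached; returns the final tape contents.
    """
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    while state != 'halt':
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(cells[i] for i in sorted(cells)).strip(blank)

# Unary successor: scan right over the tally marks, append one more, halt.
successor_rules = {
    ('start', '1'): ('1', 'R', 'start'),
    ('start', '_'): ('1', 'R', 'halt'),
}

assert run_tm('111', successor_rules) == '1111'   # 3 -> 4
```

This is only a cartoon of the thesis, of course, but it is the literal sense in which 'calculable by a mechanical procedure' and 'executable by a machine' get equated.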
Thermodynamics is likewise basic, but it is a real physical fact of particles & fields, rather than a basic statement equating their obvious but exceedingly complex 'calculability' to 'computation'. To be a computation, every calculation of the physical process involved would need to be accounted for, without knowing beforehand from biology what those processes are at that level.
It would be a meaningless tracking of endless complexities, much like atoms losing heat in bonding: a basic level from which to consider functions. Maybe a machine that could prove the C-T "hypothesis" would need to be as big as the universe, but any guess is as good as any other in this entirely abstract argument about implementation if I ignore the biological and simply 'calculate' particle & field interactions. Calculate, by all means, and call that a computation if it can be reproduced by a machine, but not until then.
The more I read about C-T, the more I need to calculate: starting with particles & fields as all the calculations of physics (that's physical properties explained), then the chemical hydrogen as accumulated particles & fields, all the way up the periodic table (that's chemical properties explained, and their relation to physics), then biology... I shan't continue, as it would be as pointless as the 'hypothesis'.
That's no less than a complete explanation of existence in calculations if you build from the ground up with 'computation'... and then you need a machine to process all those calculations in correct order to reproduce neurons, anatomies & environments as the experience of mind. I would stick with initially trying to use it as an adjunct to real attempts at understanding biology, perhaps narrowed to the chemical properties standing between the physics & the biology.
The quote above is from Turing himself, and it encapsulates the 'hypothesis', its extremely basic nature, and its dependency on a machine to prove it. It's a good mantra for trying to computerize this and that, but that's all. Darwin = organism conditioned by an environment. Turing = calculation conditioned by a computer. Both are just open-ended research programs, subject to what you can do in an environment (Darwin) or get a machine to do (Turing). Thus the endless narratives about survival from evolutionary biologists, and the endless computers coming out of Silicon Valley. File it away under highly speculative.
I will check back later in case anyone wants to complete the explanation of (1) calculation and (2) mechanization to order the calculations as a computation to reproduce the human mind.
In particular: (a) at what level would you calculate (e.g. biological, chemical, physical, or all three)? (b) What would you calculate (e.g., at a chemical level, every chemical aspect of every neuron representing every chemical aspect of our gross anatomy, sight, touch, etc., and their correlations)?
My proposal would be that until a proper model of the biology is in place to guide what to calculate and how to calculate it, C-T can go nowhere except revert to the physical: an impossible dream of reconstruction. Note that C-T is nothing but an open hypothesis that IF we can make mechanical calculations, they can be computerized IF a machine can account for those calculations.
I reckon from my reading that C-T is nothing but an invitation to mechanically calculate what you like and to try to account for the calculations by a mechanical process. It's an adjunct to biology in medicine, and that's about all.
You have been posting (a lot) on this blog, and I always appreciate readers' contributions, even when they regularly include advertisements for their own freely available books that will change the world.
However, I wonder if - as an exercise in will power - you could manage to post some comments without being condescending or downright insulting to other readers. If you don't, I may start filtering out your contributions, with great loss to the quality of this blog.
I will just read the daily.
There is some data suggesting the experiment would not work, as the unused colour receptors would not pass information to the visual cortex. This isn't a nit-pick, but rather a basic quality of experience: we are not passive spectators of perception; rather, it is a process and a skill we need to acquire. Imagine the analogous experiment with "Mary has never ridden a bike". We would not conclude there must be a "basic carrier of the ability to ride a bike."
Bodily sensations are not the same as theoretical knowledge, even if at bottom all theoretical knowledge is acquired by or with the help of the senses. The difference is hard-wired in the brain. To make that into a proof of dualism is like concluding that the world we see must be separate from the world we hear, just because there is no sight the seeing of which is precisely like listening to Beethoven's Ninth.