Nature -- presumably through the mechanism of Darwinian selection -- has endowed us with a balanced system of pains and pleasures that correspond respectively to the sorts of things we should avoid or seek in order to further our survival and reproduction. It is not surprising that the brain produces a sensation of pain when we bleed: if it didn't, we might run the risk of bleeding to death without noticing (or noticing too late). Similarly, it is hardly surprising that our brain releases pleasure chemicals (literally, neural drugs) to reward us when we do something useful, like finding and eating a sugar- or fat-laden substance.
But what about social pains and pleasures? We often speak poetically and metaphorically about the pain of experiencing envy, or the pleasure of donating to a favorite charity. It turns out that such talk need not be considered quite so metaphorical. A study published in the 13 February 2009 issue of Science magazine by H. Takahashi and collaborators investigated what happens in the brain when we experience those socially triggered feelings of envy or self-satisfaction. The results are rather stunning, if perfectly logical in hindsight: the researchers found that the same neural circuitry that is involved in the generation of physical pain and pleasure is also in charge of generating the analogous reactions in response to apparently more abstract situations. For instance, people experiencing envy at another's success activate the pain circuitry of their brains, and when the envied person is struck by misfortune their reward circuitry is activated: they feel delighted. On the more positive side, making a donation to a charity not only stimulates the reward system, but it does so more intensely than receiving money ourselves.
Biologically this makes sense because the human species' survival and reproduction -- those gold standards of evolution -- depend as much on social interactions as on interactions with the physical environment. Cutting yourself may turn out to be lethal, but so may getting on the wrong side of enough people in the group you depend upon for long-term sustenance. Not finding enough sugary and fatty foods is certainly bad news, but so is not finding a mate willing and able to copulate and have progeny with you (evolutionarily speaking).
I find the implications of the new research, however, to be particularly compelling for the continuing philosophical debate about the nature of emotions and the primacy of subjective experience. Some philosophers, usually of the continental tradition (particularly phenomenologists), seem to feel a particular delight (I wonder via which circuits in their brains) in pointing out that science is intrinsically limited because it will never be able to tell us anything about the first-person, subjective experience of the world. Not only that, but science -- in these people's minds -- cannot even satisfactorily account for the very generation of subjective experiences (so-called "qualia"), such as pain or color.
If the point is simply that science can at best hope to describe and explain the neural circuitry that makes subjective experience possible, but that only a subject can "feel" what it is like -- to borrow the title of a famous paper by Thomas Nagel -- to be a bat (or anything else for that matter), this seems to be rather trivial and not that interesting (although phenomenologists do make a big deal of it). The objective of science is to provide a mechanistic account of feelings, not to feel the emotions themselves. So it isn't really a failure of science at all, but rather a misconception on the part of some philosophers as to what cognitive research is attempting to do.
But reading some of the philosophical literature, one does get the impression that the science skeptics are after something more fundamental: they seem to be claiming that there is an uncanny, non-materialistic nature to subjective experiences, which therefore not only cannot be "felt" through a third party approach, but cannot even be adequately explained mechanistically. From there to the classic position of mind-matter dualism the step is short indeed.
Yet, research like that of Takahashi and colleagues continuously chips away at a non-materialistic view of human emotions and subjective experience. We know quite a bit now about how the sensation of color, a staple of the qualia debate, actually originates. To insist that one still needs to personally experience what color feels like is -- again -- entirely beside the point: we know the neural basis of the phenomenon, we have a good understanding of the chemicals involved and how they react to light of different wavelengths, and we even have a pretty good idea of why color vision evolved to begin with. What else does one want in order to acknowledge that we have a good scientific explanation of color? We are not quite yet in the same explanatory position concerning complex emotions like social pains and pleasures, but the study published in Science is a good step in that direction.
This, I hasten to clarify, is not a tale of science vs. philosophy, where the latter inevitably retreats in the wake of a steady advance of the former. Rather, it is a question of what happens when philosophy unnecessarily pits itself against science. Instead of reveling in pointing out the alleged limitations of science as an explanatory enterprise of natural processes, philosophers should concentrate on how a better understanding of science can help them to deal more effectively with truly philosophical questions. For instance, if we accept that certain social actions are related to neural pains and pleasures because of evolutionary history, what does that tell us about the foundations of our moral reasoning, and how can philosophy help us transcend a naturalistic morality that may have been all right in Pleistocene times, but is clearly inadequate to navigate the complex global society of the 21st century?
About Rationally Speaking
Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.
I will not dismiss philosophy entirely, but do have a fair amount of disdain for trying to know about reality without going out to see what it's like. Additionally, past philosophers who actually did look at nature and let their observations inform them really didn't know anything about most of what matters to, for example, the experience of pain and other emotions. Now that we are beginning to be able to observe these phenomena in a scientific way, we should expect philosophy to experience a huge overhaul.
M: "Instead of reveling in pointing out the alleged limitations of science as an explanatory enterprise of natural processes, philosophers should concentrate on how a better understanding of science can help them to deal more effectively with truly philosophical questions."
What I think you're suggesting here is to detach and depersonalize, but I think that there are a multitude of good reasons not to do that.
The Holocaust, for one.
Once again I was in Yad Vashem last week, walking through this agonizing place of remembrance with friends and family [http://www.yadvashem.org/]. Now I manage this situation all right just as long as I don't discuss what I see with anyone I'm with. But the first time I open my mouth and try to assess or articulate an extremely difficult thing that I happen to be looking at (maybe hundreds of bodies being bulldozed irreverently into a mass grave), I'm so entirely brokenhearted and undone I absolutely WILL NOT regain my composure through the rest of the museum.
You want to talk about "complex"? Science does NOT, as you well know, address the complexity and depths of our 'dark side'. As a matter of fact, it has nothing whatsoever to say on it, does it?
If as a SCIENTIST you don't REALLY FEEL the grief of what it is to be "human" and are not brokenhearted about the terrible condition that we are in, it is doubtful that you can be an authority on what science and philosophy can legitimately address, and in what order.
Massimo,
This is a question I have wondered about a lot. In reading your post, I come away disagreeing that we have, or will soon have, a satisfactory explanation for qualia or consciousness in general. Explaining qualia through only "the easy problem of consciousness" is very unsatisfying and stops short. Here we have a phenomenon that science seems, perhaps even in principle (as some philosophers say, and I can understand why), unable to account for, and you're dismissing this great mystery as unimportant. Explaining consciousness through chemical interactions and neurons firing (by which I mean the easy problem; I'm grasping for a better way to say that) leaves us as zombies (a concept you dismissed without good reason in an earlier post). Why we are more than that, why qualia exist, is a scientific problem and one that science needs to deal with. In response to your claim that philosophers are coming dangerously close to the idea of dualism of mind and body, I agree with Bjorn that philosophy needs to be scientifically informed. If you want to debase philosophers' assertions, that does not count against the idea that qualia are not satisfactorily explained. Also, you have always chided evolutionary psychology but you give it great support in this post. I would definitely be interested in hearing more of your thoughts on that subject.
caliana, it is a common and deeply flawed argument that because scientists want to study something in a detached way, they therefore do not share the emotions of people like you. Utter nonsense!
You want to talk about "complex"? Science does NOT, as you well know, address the complexity and depths of our 'dark side'. As a matter of fact, it has nothing whatsoever to say on it, does it?
'Dark side' meaning why humans can be evil? Sure science can address that issue. Do you have any particular reason for saying otherwise?
Massimo,
I'd like to clarify something I wrote. I said in my comment that you dismissed the zombie argument without good reason. What you did was dismiss Chalmers's argument about zombies, specifically the claim that conceivability is a guide to possibility. While I could not agree with you more on that, the concept of a zombie does not rely on this idea. The real question is why subjective experience, qualia, consciousness, whatever, is possible in the first place.
There is a nice article by Paul Churchland "Chimerical Colors: Some Phenomenological Predictions from Cognitive Neuroscience"
http://web.gc.cuny.edu/cogsci/private/Churchland-chimeric-colors.pdf
in which he uses what is known about the mechanics of color perception to predict some, er, odd "phenomenological" experiences (and you can try them for yourself -- it is pretty weird. But I now list "Stygian Blue" as my favorite color...). He thinks that this shows that we can explain qualia mechanistically -- or at least, that because we have a model that permits us to make interesting and unexpected predictions regarding qualia, we should not be too quick to dismiss the possibility that our model explains the ordinary ones.
I'm less sanguine about the model's role in explaining so-called qualia, but largely, I suppose, because I think of qualia as a bit of a non-problem...
But what I would like to note is that the study you cite, re: emotional responses using the "same" circuitry as physical pain/pleasure pathways, is very much on the "how possibly" side of things, as opposed to the "explaining the details of how this system works" side.
You're right of course that it is clear that *some* evolved mechanisms must be involved in our emotional abilities, and that these are clearly related to our sociality. I'll give you that many of our basic emotional responses must be evolved responses... And all of this has got to use shared architecture. So sure, envy might use some of the same neural pathways as physical pain. But of course, different things activate those pathways, and indeed, in different people different (kinds of) things activate the envy pathways.
In order for us to feel pain at being betrayed, or pleased at helping someone, we already have to care about the person betraying us or the person we are helping. In the latter example, if we didn't care about the person we were helping prior to feeling happy about helping them, it would make no sense for us to feel happy at having helped them.
On another note, I think that, e.g., contemporary decision theory has a much easier time explaining evil than explaining good! That's what makes the kind of research Massimo is pointing towards so interesting -- it shows us a way to reconcile what is known about our emotional responses with our best theories about our actions. As Massimo well knows, a very Humean project. :)
Jonathan Kaplan
Can a man-made machine have a subjective experience? We've heard of the Turing test for artificial intelligence, where success is declared if a person can't tell whether he is, say, on a chat window talking with a computer running an AI program rather than with a human being. I propose the "Scott Test" of subjective experience, where success would be declared if a computer, not explicitly programmed for subjective experience, reports wonderment about the mystery of its subjective experience. If we succeeded, how would we explain it?
Scott said: "Can a man-made machine have a subjective experience?"
How about asking Caliana and seeing what its response is.
someone,
"Explaining qualia through only "the easy problem of consciousness" is very unsatisfying and is stopping short."
Maybe, but I share Jonathan's view that the so-called qualia problem is, in fact, a non-issue. What additional explanation would one want for, say, color perception? What is it that we are missing? (Other than the feeling itself, which does not require an explanation, it simply requires, well, to be felt!)
"zombies (a concept you dismissed without good reason in an earlier post)"
As you say, I actually dismiss the idea that "intuitions" a la Chalmers are useful or reliable. I don't think I need a detailed argument for that, the burden of proof is on someone like Chalmers to convince the rest of us that his flights of fancy are actually relevant to philosophical and scientific issues. Remember, Galileo had to convince his contemporaries that the telescope worked as intended, and that the sunspots were real astronomical phenomena. Unfortunately, Chalmers is going to have a much harder time than the Italian astronomer.
"you have always chided evolutionary psychology but you give it great support in this post. I would definitely be interested in hearing more of your thoughts on that subject."
Good point. My relationship with evolutionary scenarios for human behavior is complex: I don't reject evo-psych on the ground that it is untenable, or that the general idea is misguided. As Jonathan and I point out in Making Sense of Evolution, the problem is with empirically testing specific scenarios invoked by evo-psychologists. This doesn't mean that one cannot entertain the possibility that broad human emotional characteristics evolved by natural selection, especially within the context of a philosophical discussion. In general, I take it as uncontroversial that emotions have a biological root, regardless of the specific mechanisms and time frames involved.
Bjorn: "'Dark side' meaning why humans can be evil? Sure science can address that issue. Do you have any particular reason for saying otherwise?"
Sure. Germans were awash in some of the best science and medicine of the time. Yet with no moral resolve or underpinning, the science that they presumed to practice then was just ample reason to experiment more on the people they hated. Everything imaginable, you see, is justifiable when one rejects God's authority.
The Germans did not believe, for one, in the actual personal depravity of man. Yet they themselves were some of the most depraved while believing themselves some of the most enlightened people on the earth. And I know of a lot of people and movements with the same mentality which exist TODAY.
I am German as well, but I'll quickly disassociate from anything that looks like we're all just drifting along with the crowd doing whatever comes next. When the end does come for each of our lives, we simply cannot say to God "well, everyone else was doing such and such, so I did too". That isn't going to fly. God is going to judge us each individually for our response to evil, whether it is our own or someone else's.
What specifically does science do to address actual and quantifiable evil?
Death by lethal injection?
Faithless,
It is the height of subjectivity to experiment on and attempt to annihilate the people that one just happens to hate. And hating, no less, without even knowing them.
That is double, compounded subjectivity, actually.
The fact that I, on the other hand, can feel deeply for people who I have never known, never hated or loved, means just the opposite.
Glad you can understand that now.
“What is it that we are missing? (Other than the feeling itself, which does not require an explanation, it simply requires, well, to be felt!)”
I very much disagree. We can think of the universe in mechanical terms (quantum physics says we can't, though I don't think that destroys my point), and in those terms zombies resulting from human evolution make a lot more sense. Evolutionarily, it makes sense that we would be able to feel pain so we can react in the way that best increases our chances of survival, but it does not make sense that the ability to subjectively feel could exist at all. The issue here is how science, which has never confronted an issue anything like this before, can account for subjective experience. Science and subjective experience seem, for now, to be completely alien to each other.
Hmm, am I explaining my position well? Here is another way to look at it. There is a difference between being life and being alive. A tree is life but not alive. What is it like to be a tree? It is like nothing. A tree has no subjective experience; subjective experience is being alive. A zombie would be as alive as a tree. It is the difference of human minus zombie that does indeed need to be explained.
To Scott,
“I propose the 'Scott Test' of subjective experience, where success would be declared if a computer, not explicitly programmed for subjective experience, reports wonderment about the mystery of its subjective experience. If we succeeded, how would we explain it?”
As speculative as it is, I doubt there is anything so special about being organic for being alive (as opposed to being life). Subjective experience/consciousness, I bet, is the most important thing to being alive. If we can build a computer that has consciousness, it would be alive. This opinion is based on how I feel "here" right now. My feeling of being is my consciousness. A feeling of being is consciousness. Of course, and again, this is all speculative.
someone,
"Evolutionarily, it makes sense that we would be able to feel pain so we can react in the way the best increases our chances of survival, but it does not make sense that the ability to subjectively feel could exist at all."
And how do you suggest we could feel the pain without subjective experience, exactly? I believe you just gave a reasonable answer to your own question.
Caliana
Ah, you are German! That at least explains your lack of a sense of humour.
"It is the height of subjectivity to experiment on and attempt to annihilate the people that one just happens to hate. And hating, much less without even knowing."
Then how about you stop doing it?
"The fact that I, on the other hand, can feel deeply for people who I have never known, never hated or loved, means just the opposite."
Fact? Well, there is no evidence to support this assertion in any of your comments that I (and I guess most others) have read on this blog.
Massimo,
“And how do you suggest we could feel the pain without subjective experience, exactly? I believe you just gave a reasonable answer to your own question.”
I never suggested that. What I was saying is that, if subjective feeling is possible (which, obviously, it is) it makes sense that it would evolve in animals and hence we would be able to feel pain. However, it makes no sense that it is possible.
Oh, and I should apologize for my remark about you dismissing the concept of a zombie for no good reason. Doing so without clearly showing that you didn't have a good reason made it an unmerited insult. I tried to make up for that with my second comment, though I only created a misconception. Chalmers's weird argument for dualism is wrong, and it takes no lengthy explanation to show why. What bothered me was that you said his argument for dualism is wrong, but not that his concept of a zombie was somehow misguided. Yet at the end of your post you dismissed it anyway.
The concept of a zombie is a rock-solid, useful thought experiment, because it shows us the true and exciting mystery of consciousness.
Faithless,
Disagreeing with someone's point of view is in no way equal to hating. Agreeing with certain other points of view is in no way equal to loving. In my house, I am the emo patrol. Doesn't matter to me the least little bit IF IT'S POPULAR, just as long as it's right. Is that unloving? If someone thinks it is, too bad. Someone has to be the parent.
I have no sense of humor?
That is almost correct. ;)
When we were in Israel recently, one of our friends from Mexico mistakenly asked when we were going to be at the "wailing strip". We just about laughed till we died....
Poor man. He probably used to be our friend.... lol!
:)
"The concept of a zombie is a rock-solid, useful thought experiment, because it shows us the true and exciting mystery of consciousness."
I'm inclined to disagree. The idea that the 'zombie' argument is useful depends on too many assumptions, at least some of which are at best questionable.
See: http://host.uniroma3.it/progetti/kant/field/zombies.htm for review
Consciousness may be an "exciting mystery" but that isn't shown by thought experiments re: the logical possibility of consciousness being missing. For all we know, consciousness is a necessary by-product of a certain level of behavioral sophistication, and hence something that comes along with increased behavioral complexity; if increased behavioral complexity is selected for relatively often, so too will be consciousness.
*****
Back to the issue of giving aid and comfort to evo.psych supporters. One interesting element of the work Massimo cites is that it is neutral between several different visions of how minds & brains are related, and how minds work. It is *compatible* w/ evo.psych's stress on modularity (and informational encapsulation) but also compatible w/ a more general purpose architecture.
I would caution, however, that the leap from "the brain is using sugar in zone x during activity y" to "zone x is where y is done" gets made far too rapidly. Still worse, "since zone x is where sugar is burnt during physical pain, and also where it is burnt during emotional pain, physical and emotional pain are 'the same' kind of thing" -- just doesn't follow.
It doesn't even follow if we think brains are like computers -- just because the same chip-set is accessed during two operations doesn't mean they are the same kind of thing -- my graphics processor uses more power when I'm playing games, but also when I'm doing 3-D CAD...
OK, that's enough for now.
What I find cool is not the more directly rewarding behavior (donating to charity, finding a mate) but the more nebulous forms of altruism that also make us "feel good." There's no apparent or immediate connection for why I help some stranger change a tire (and then walk away, never to see him again, ever) and yet feel that "it's the right thing to do."
But that sort of broader engagement is precisely what many would look for in "future man."
Massimo, I am a bit puzzled by your assertion that "the hard problem" is a non-issue. I agree that it may not be as hard as Chalmers and others think it is, but it is still a problem that I think needs a scientific explanation. If we cannot explain and demonstrate how neural activity in the brain generates consciousness, I don't see how we will ever be able to effectively refute dualism. Could you be persuaded to elaborate a bit?
Morten,
Briefly (I'm cooking dinner... :) I didn't say that the problem of consciousness isn't interesting, I said -- like Jonathan -- that I find the specific problem of qualia a bit of a non-issue. The two are not one and the same.
I do agree that science needs to explain consciousness, and maybe it will. What usually annoys me is that dualists make an argument from ignorance along the lines of "ah! science can't explain it, therefore..." Sounds like a creationist talking to an evolutionist...
Suppose I built a robot with automated learning software and various sensory inputs: color vision, tactile sensors, etc. Colliding with something would trigger a "pain" subroutine and the robot would learn to avoid the behaviors that preceded the collision. This scenario can easily be expanded to include multiple such robots interacting socially, with corresponding social "pain" and "pleasure" subroutines.
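Nick's thought experiment can be made concrete with a toy sketch (purely illustrative; the class and names below are my own invention, not anything from the study or the comments). The "pain" subroutine here is nothing but a number being decremented when a collision occurs, which is exactly his point: the functional role of pain is filled without anything being felt anywhere.

```python
import random

class Robot:
    """Toy agent with a 'pain' subroutine: a collision penalizes the
    action that preceded it, so the robot learns avoidance."""

    ACTIONS = ["forward", "left", "right"]

    def __init__(self):
        # Preference weight for each action; higher means chosen more often.
        self.weights = {a: 1.0 for a in self.ACTIONS}

    def choose_action(self):
        # Sample an action in proportion to its learned weight.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for action, w in self.weights.items():
            r -= w
            if r <= 0:
                return action
        return self.ACTIONS[-1]

    def pain(self, action):
        # The "pain" subroutine: nothing is felt, a number just shrinks.
        self.weights[action] = max(0.05, self.weights[action] * 0.5)

# A world in which moving "forward" always causes a collision.
random.seed(0)
robot = Robot()
for _ in range(200):
    action = robot.choose_action()
    if action == "forward":
        robot.pain(action)

# After training, "forward" is strongly avoided relative to the others.
print(robot.weights["forward"] < robot.weights["left"])  # True
```

The robot's behavior is indistinguishable from that of an agent avoiding pain, yet the whole mechanism is a weight update, which is the gap between function and feeling that the comment is pointing at.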
My point is that the hardware and software necessary for a system of "feelings" and corresponding behavioral incentives in no way necessitates that those "feelings" be experienced. And yet we experience our feelings -- or at least I experience mine!
The study by Takahashi et al sounds very interesting. But I disagree that it "chips away at a non-materialistic view of human emotions and subjective experience". We are learning more and more about the neural basis of emotion. But I don't think science can tell us anything about why we experience any of it.
Nick,
The robotic example is not a good example of pain. Pain needs to be felt, or it ain't pain.
Also, if you reject an evolutionary explanation of feelings/emotions, have you got a better one to propose?
Sometimes I feel like there are a number of philosophers who delight in pointing out the "limits of science" (which, of course, exist), but then sit back and congratulate themselves for a job well done, without having advanced human knowledge a single bit.
"My point is that the hardware and software necessary for a system of "feelings" and corresponding behavioral incentives in no way necessitates that those "feelings" be experienced."
How do you know? And: is this a claim about *logical* necessity, physical necessity, or what? If it is a logical claim, I might give it to you, but as a bit of a functionalist at heart, I'm still not sure. As a physical claim, I'm inclined to think it is wrong...
Can we program robots to avoid things and use subroutines that we *call* "the pain subroutine" w/o the robot having conscious experience? Sure -- that's just a better roomba. But, given the set-up as you imagine it, that doesn't seem to be what's up. And if they are in fact "feelings" I'm going w/ them being felt. If they aren't felt, they are not feelings! And I'm guessing that as a matter of *physical* possibility, the requirements of sociality & behavioral sophistication require actual feelings (felt).
Note that this is neutral regarding the possibility of such states being instantiated in a Church/Turing-complete system; maybe it can be, maybe it can't be, but either way, I don't see any reason to reject the position that if you've got (really got) the sophisticated social life & the behavioral flexibility, you've also got the feelings, really felt.
Massimo wrote:
Pain needs to be felt, or it ain't pain.
I agree, which is why I wrote "pain" in quotes.
Also, if you reject an evolutionary explanation of feelings/emotions, have you got a better one to propose?
I accept the evolutionary explanation of the physiological basis of the neurological phenomena we call feelings/emotions. But that doesn't explain why we feel those feelings and emotions, why we're not just robots processing information.
----
I wrote:
My point is that the hardware and software necessary for a system of "feelings" and corresponding behavioral incentives in no way necessitates that those "feelings" be experienced.
But Jonathan asks:
How do you know? And: is this a claim about *logical* necessity, physical necessity, or what? If it is a logical claim, I might give it to you, but as a bit of a functionalist at heart, I'm still not sure. As a physical claim, I'm inclined to think it is wrong...
Yes, my claim was about logical necessity. I was amused that you mentioned the roomba, which was precisely what I had in mind. Does the roomba have feelings? Or do you need a super-duper-deluxe model in order to say it has feelings?
Joking aside, I find the notion of emergent consciousness to be unconvincing. And it seems to lie outside the realm of science.
... I don't see any reason to reject the position that if you've got (really got) the sophisticated social life & the behavioral flexibility, you've also got the feelings, really felt.
I see only one piece of evidence in support of that position: us. If we follow your reasoning, we might next arrive at bees, who have social life though admittedly less behavioral flexibility. Perhaps bees have feelings and emotions, I don't know. But do bacteria?
Nick,
"I accept the evolutionary explanation of the physiological basis of the neurological phenomena we call feelings/emotions. But that doesn't explain why we feel those feelings and emotions, why we're not just robots processing information."
But that is the point. As David Hume remarked in the 18th century, animals (including humans) aren't going to do anything unless they are motivated by feelings/emotions. A robot that doesn't feel pain will "bleed" to death (or whatever the robotic equivalent of it would be). Feelings are a matter of survival.
A robot that doesn't feel pain can still have sensory inputs and subroutines to process those inputs and change the robot's behavior (e.g. to remove its arm from a hot stovetop). Logically speaking, the feeling of pain is not necessary.
Massimo, you cannot seriously mean that you think feelings/emotions in themselves are causal agents without which animals would just lie down and die.
As Nick mentions, many animals don't have consciousness and hence no feelings/emotions (certainly protozoa, but probably also many metazoa), yet they survive just fine.
Hence, the question of why we have consciousness is a real and fascinating one. Maybe a "human" zombie brain without consciousness is impossible to build, and consciousness is simply a necessary byproduct of the complexity of the brain. Or maybe consciousness has survival value and hence is a product of natural selection. Well, I guess we probably have to figure out what consciousness really is and how it is generated before we can answer the question of why we have it.
Nick, Morten,
Of course I don't mean to say that feelings/emotions are *logically* necessary for an animal to survive. As you say, protozoa do just fine without them, and of course the issue doesn't arise for plants either.
But, as a matter of historical contingency, if not necessity, it just so happens that when the complexity of the nervous system increases an animal begins to have feelings/emotions, and these are extremely useful for survival (think of fear, for instance).
Which is why I don't see the big deal in explaining why humans have feelings/emotions.
By the way, I most certainly do not equate feelings/emotions with consciousness, as clearly animals can have the former without the latter. So I think that the problem of qualia is distinct from (and easier than) the problem of consciousness.
I don't understand that, Massimo. How can you have feelings/emotions without being conscious of them (isn't that what you're getting at when you say that feelings have to be felt)? And how do you know that animals have feelings/emotions but not consciousness?
Also, I don't understand how you can simply assert that feelings/emotions are "extremely useful for survival". Are you saying that the following sequence:
Detection of lion -> generate fear -> flee in the opposite direction
is more "useful" than:
Detection of lion -> flee in the opposite direction
and if so, why (given that many animals don't have the fear part)?
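The two sequences in the comment above can be written out as control flows (a hypothetical sketch, with invented function names); for the single flee-from-lion behavior they are input-output equivalent, which is exactly what makes the question a fair one. One candidate answer, hinted at in the comment below it, is that a fear state could additionally modulate other systems in a more complex animal.

```python
# Hypothetical sketch of the two sequences being compared.

def flee() -> str:
    return "fled"

def direct_response(percept: str) -> str:
    """Sequence 2: detection of lion -> flee."""
    if percept == "lion":
        return flee()
    return "graze"

def fear_mediated_response(percept: str) -> str:
    """Sequence 1: detection of lion -> generate fear -> flee.
    In a richer animal the fear state could also raise heart rate,
    narrow attention, sharpen memory, etc. -- a speculative reason
    why the intermediate step might earn its keep."""
    fear = (percept == "lion")
    if fear:
        return flee()
    return "graze"

# For this one behavior the two controllers are indistinguishable:
assert direct_response("lion") == fear_mediated_response("lion") == "fled"
assert direct_response("gazelle") == fear_mediated_response("gazelle") == "graze"
```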
Morten,
perhaps we have different concepts of consciousness, but I don't think that, say, dogs are conscious in the same way in which we are. Yet, they can clearly display the signs of the emotion of fear.
Indeed, even humans experience lots of things at the level of feelings before we become consciously aware of them, so no, I don't think that feelings require consciousness at all.
As for your lion example, I grant you that it is not *logically* necessary that the first sequence be the one that evolved, but it happened that way, and we know that animals that are emotionally impaired through brain damage (including humans) do have a harder time navigating the world.
Part of the problem here is that too many philosophers who study qualia and consciousness are stuck on logical necessity, apparently completely failing to appreciate that biology is a lot about contingency. Still, that contingency is subject to the filter of natural selection, so there is no contradiction -- for a biologist -- in saying that emotions are necessary for the survival of animals with complex nervous systems, even though that necessity is not a logical one.
To put it differently, biologically speaking augmented survival is a perfectly fine explanation for the existence of emotions, even though one can certainly conceive of, and even observe, living organisms that survive without relying on emotional responses (plants, for instance).
I guess you're right, Massimo, that we have different concepts of consciousness. For example, I see no reason to believe that dogs don't have consciousness. It is obviously different from ours, but why should it be fundamentally different? Also, in my understanding of consciousness, you cannot experience or feel anything if you don't have consciousness (like when you are unconscious).
I agree that "there is no contradiction -- for a biologist -- in saying that emotions are necessary for the survival of animals with complex nervous systems". I just think it would be nice to know if it is the case that it is necessary and, if so, why it is necessary.
I think the mechanism for love is similar. The survival advantage for a newborn of being loved by its parents is a very strong advantage indeed. Of course, what 'triggers' the mechanism is beyond me; idiosyncrasies, facial recognition, smell, etc. I'd expanded the thought at booktalk.org, though I'm sure it's a pale comparison to the pioneers in this thread of inquiry.
ReplyDeleteI'm very late to the party here, so I guess nobody will read this... (apart from Massimo, if he gets notified of new entries) But I'll leave a little comment nonetheless.
Nick (and others),
Define "feeling pain". You haven't yet, but you still insist the robot cannot do that "feeling pain" thing.
The problem is that you, and apparently most everybody else if I understood correctly, consider "feeling" something as some special thing, separate from the rest. Why? It is not. Feeling fear and, say, sweating are just two outputs of physiological activity, although you could say one is clearly more complex and difficult to explain than the other.
Back to your robot analogy, I think there is no problem there. You just have to program a more sophisticated robot, that's all. If you program a very simple device that behaves like a protozoan running away from a toxin and then conclude that a robot can never "feel", that's called a straw man, I'm afraid.
What if the program you mentioned triggers a set of other programs in the robot? A set that does a bunch of stuff, let's say constricts the robot's peripheral "vessels", narrows its attention to the object of the pain/fear, etc. etc. And when these programs are active, the output is "I feel pain/fear".
How is that necessarily different from us in quality? Or, what is the threshold of complexity you require before you consider that the entity is "feeling" something?
If we consider "feelings" as separate from the rest as an assumption to start with, there is no surprise that we would find it hard to reconcile it all.
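The cascade described above can be sketched in code (a purely hypothetical illustration; the class, methods, and report string are invented): a damage trigger activates several sub-programs, and the global state those programs create is what the system reports as a feeling.

```python
# Hypothetical sketch of the "cascade of programs" proposal: the trigger
# fans out into physiological sub-programs, and the report "I feel pain"
# is just a readout of the resulting global state.

class Robot:
    def __init__(self) -> None:
        self.vessels_constricted = False
        self.attention_target = None

    def constrict_vessels(self) -> None:
        self.vessels_constricted = True

    def narrow_attention(self, target: str) -> None:
        self.attention_target = target

    def damage_signal(self, location: str) -> str:
        # The trigger cascades into several sub-programs...
        self.constrict_vessels()
        self.narrow_attention(location)
        # ...and the resulting global state is what gets reported.
        return self.report()

    def report(self) -> str:
        if self.vessels_constricted and self.attention_target:
            return f"I feel pain in my {self.attention_target}"
        return "all systems nominal"

robot = Robot()
print(robot.damage_signal("arm"))  # -> "I feel pain in my arm"
```

On this view the open question is not whether such a cascade can be built, but whether building it is all there is to feeling -- which is precisely where the disagreement in the thread lies.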
J,
you are right, I probably will be the only one to read this comment :)
I think you do make a good point here, but there is a reasonable possibility that feelings require a certain physical substrate and that not every other substrate will do. That would exclude not only robots but also, for instance, plants.
That said, of course, one can easily imagine a robot made of organic materials, including nerve fibers, and then the question becomes whether feelings are an emergent property of those materials when combined with certain physical conditions. It's an empirical question.
Thanks to the Blogger follow-up notification feature, no belated comments need go unread!
I tend to agree with Massimo's comment, but I'm not sure it's an empirical question. How can we ever know that someone (or something) feels anything? When it comes to other humans, we take it on faith. Mammals are pretty similar to us, so most of us believe they feel too. Beyond that, and particularly with much different or simpler forms of life, our intuitions seem to run out.
I do think that the problems of feeling and consciousness are very much related. The Turing Test doesn't seem to resolve either of them.