by Massimo Pigliucci
Julia, ah, I love it that you started your essay with “some of my best friends are philosophers...” ;-)
Seriously, though, when you claim that most philosophical discussions get off the ground on the basis of intuitions I think you are wildly off the mark. That is a skewed reading of the literature, based on a few prominent but certainly not representative examples (like the infamous zombies concocted by Chalmers). More importantly, I think the entire premise of this discussion is off base. Intuitions — in philosophy as in science, and indeed as in everyday life — are not and cannot be “evidence.” They are better thought of as embryonic hypotheses to be further tested (in the case of science) or analyzed (in the case of philosophy).
Moreover, some instances of so-called intuitions — like those underlying the trolley dilemmas, for instance — are not really intuitions at all. They are thought experiments, meant to bring out explicitly certain implicit assumptions about, for instance, ethics, or philosophy of mind, so that such assumptions are clearly out there for everyone to see and discuss (and accept or reject).
Searle’s Chinese room, for instance, is really a thought experiment, not an appeal to intuition. Searle is correct in putting the question: if the Chinese room cannot in any sense be said to “know” Chinese (and it really would be difficult to argue that it does), then there is something missing from a straightforward computational theory of mind. The thought experiment is meant to highlight this point and to pose a challenge. We are not meant simply to believe Searle’s intuition and be done with it, but to try to figure out where the analogy between a computing mind and a computing Chinese room went wrong — if anywhere.
Back to ethics, you are confusing two different approaches when you go from intuitionism (which really is a doctrine based on the significance of ethical intuitions, and which I don't buy) to the criticisms of utilitarianism based on hypothetical situations. The latter are perfectly legitimate, rational (i.e., non-intuitive) critiques. As you know, I actually think that ethics is pretty much all about “what if ... then ... why” sort of issues: IF you are a utilitarian, and IF you face a situation in which you can kill one person in order to save five, what would you THEN do, and WHY? And the utilitarian has to meet the challenge, again not by appealing to inscrutable intuitions, but on the basis of reasons. And of course many sophisticated utilitarians, like Peter Singer, have done just that.
Of course, there are examples that do make your case. I think G.E. Moore was simply wrong about his reasoning concerning a beautiful vs. ugly — but inaccessible — world, where the beautiful would in any meaningful sense be “better.” But I know of very few philosophers who have been moved by Moore’s so-called reasoning. So, your rejection of Moore puts you in excellent company! Indeed, your objections to Moore (please define what you mean by better, worse, beautiful and ugly) are standard, and show how easily one can reject a too cavalier use of intuition in philosophy.
Your point that different philosophers have different intuitions, and that somehow this should make us feel uncomfortable, seems a bit strange to me. Do you feel uncomfortable because different scientists have different hunches about, say, the value of string theory? And while it is possible (though entirely unsubstantiated) that “having the ‘right’ intuitions is the entry ticket to various subareas of philosophy,” it is certainly the case in science. According to Lee Smolin, for a long time it was next to impossible to find a job in fundamental physics, or get a grant funded, unless you shared the community’s intuition that string theory is the only game in town when it comes to unifying relativity and quantum mechanics. For years, when I was a practicing scientist, I had the same experience in certain domains of evolutionary biology.
You then proceed to make the valid point that we know of all sorts of other intuitions that seem reasonable to most people but are in fact wrong. Your example is the intuition that there must be more rational numbers than positive integers. But that’s an intuition shared by laypeople, not mathematicians. Mathematicians, however, feel comfortable with their intuition that Goldbach’s conjecture is true (there is no proof yet). At the moment, that’s all it is, an intuition. But since it is an intuition shared by a group of people with a lot of expertise in math in general and with Goldbach’s conjecture in particular, I think it’s a good bet that they are probably right, or that there is at least good reason to consider the notion seriously. The same argument applies to your (again, valid) criticism of folk understanding of physics. But you are probably not going to argue that professional physicists share that sort of naive intuition. Indeed, that particular intuition was initially put in doubt by a thought experiment concocted by Galileo (who, contrary to popular lore, never actually conducted the physical experiment to test his intuition).
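(To make the Goldbach point concrete, here is a minimal sketch — my own illustration, in Python, not anything from Julia's essay — of the kind of finite check that feeds this expert intuition. It verifies the conjecture for small even numbers; of course, no finite check proves anything in general.)

```python
# Goldbach's conjecture: every even integer > 2 is the sum of two primes.
# A finite spot-check like this informs intuition without proving anything.

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_witness(n):
    """Return a prime pair (p, n - p) summing to even n > 2, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Every even number from 4 to 10,000 yields a witness pair.
assert all(goldbach_witness(n) for n in range(4, 10001, 2))
print(goldbach_witness(100))  # e.g. (3, 97)
```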
Your quote of Harman is a non sequitur: “Considering the inadequacies of ordinary physical intuitions, it is natural to wonder whether ordinary moral intuitions might be similarly inadequate.” Right, the problem is that moral philosophers are people who constantly think about ethics, so their intuitions in that regard are anything but the equivalent of “ordinary physical intuitions,” just like Galileo did not rely on ordinary intuitions about physics.
I’m not sure why you then bring up our Pleistocene ancestry. Certainly whatever “intuitive” thinking may have evolved in us about quantum mechanics shouldn’t be trusted, but this isn’t the sort of intuition we are talking about. Intuition in the sense discussed here is not the same as instinctive behavior. Indeed, there is a large literature on the cognitive science of intuitions (I've been reading about it in preparation for a chapter of my new book), which basically suggests that: a) intuition is the result of sub-conscious processing of information by the brain; b) that it is domain-specific (so, for instance, chess masters have excellent intuitions about chess, but not about anything else); and c) that it can be effectively used either heuristically (when one doesn't have time to further analyze things), or as a preliminary hypothesis on the basis of which to conduct further inquiries.
I am a bit baffled by your citing of Jonathan Haidt and his argument about incest. Yes, a large number of people have a “yuck” reaction to incest precisely for the evolutionary reasons you mention, and it certainly is interesting to study the neural underpinning of such reactions, as Haidt does. But what does that have to do with the ethical issues posed by incest in a modern context? Do you really think that most ethicists would support a “yuck” type of reaction and therefore agree with the lay person that incest is wrong under all circumstances? In the same paragraph you also mix categories of explanation inappropriately. Let’s say that a particular lay intuition (about probability, instead of ethics) can be shown to be the result of evolution. Would you then ask, as you do in the case of ethics, “since that [evolutionary] fact alone suffices to explain the (widespread) presence of the X intuition, why should we take that intuition as evidence that X is true?” The answer is that something evolution favored may or may not be true, but we could take its (alleged) evolutionary origin as provisional evidence that it may be true, or at least useful, and go from there. Again, the idea is to treat intuitions as preliminary hypotheses to be further investigated.
Finally, the question with which you conclude your post seems to me a bit too grandiose: “Can analytic philosophy survive without intuition?” Of course it can, since a lot of analytic philosophy simply does not depend on intuition, and nowhere in analytic philosophy is intuition considered the final arbiter of truth. And of course, it is somewhat ironic that you quote several papers by analytic philosophers, discussing — analytically — the role of intuitions in philosophy. Analysis, not intuition, is what analytic philosophy is all about.
UPDATE: Spurred by a lively discussion on Julia's Facebook page about this, I actually finally got raw quantitative data (as opposed to "my reading of the literature is..." or "I talked to a philosopher who told me that...") pertinent to the issue at hand.
According to the Philosopher's Index, the most comprehensive database of philosophical entries (covering 1940-2010), 2052 articles published in English over the past 70 years include the word "intuition" in the abstract. This, arguably, is an overestimate as far as this discussion is concerned, since not all these papers actually deploy philosophical intuitions as a major part of their arguments.
Still, this is out of a total of 234,018 papers, which means 0.88%. I think that any claim to the effect that intuition plays a "major" role in the philosophical literature runs into trouble, given these statistics.
Out of further curiosity, I inquired about the possible differential frequency of the use of the term intuition in abstracts taken from different sub-fields of philosophy. Here is the breakdown (the total doesn't add up to 2052 because there is a fairly significant "miscellaneous" category):
461 in ethics
417 in epistemology
385 in metaphysics
276 in logic
190 in philosophy of science
100 in semantics
46 in political philosophy
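(For anyone who wants to check the arithmetic, here is a minimal sketch — mine, in Python — reproducing the percentage and the breakdown from the figures quoted above.)

```python
# Figures quoted in this update: 2,052 "intuition" abstracts out of
# 234,018 papers indexed in the Philosopher's Index, 1940-2010.
total_papers = 234_018
intuition_papers = 2_052
print(f"{intuition_papers / total_papers:.2%}")  # 0.88%

# Subfield breakdown as listed; the remainder is the "miscellaneous" category.
by_field = {
    "ethics": 461, "epistemology": 417, "metaphysics": 385,
    "logic": 276, "philosophy of science": 190,
    "semantics": 100, "political philosophy": 46,
}
print(sum(by_field.values()), "classified;",
      intuition_papers - sum(by_field.values()), "miscellaneous")  # 1875; 177
```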
Thanks for the reply, Massimo. I know you are skeptical about the prevalence of appeals to intuition, so here are some of the philosophers whose work I consulted when writing my essay (with links to the papers the quotes are taken from):
1. "It is safe to say that these intuitions – and conclusions based on them – determine the structure of contemporary debates in epistemology, metaphysics, and philosophy of logic, language, and mind. Clearly, it is our standard justificatory procedure to use intuitions as evidence (or as reasons)." (George Bealer)
2. "One thing that distinguishes philosophical methodology from the methodology of the sciences is its extensive and avowed reliance on intuition. Especially when philosophers are engaged in philosophical “analysis”, they often get preoccupied with intuitions. The evidential weight accorded to intuition is often very high." (Ernest Sosa)
3. "Contemporary analytic philosophy is based upon intuitions. Intuitions serve as the primary evidence in the analysis of knowledge, justified belief, right and wrong action, explanation, rational action, intentionality, consciousness and a host of other properties of philosophical interest. Theories or analyses of the properties in question are attacked and defended largely on the basis of their ability to capture intuitive judgements." (Joel Pust)
4. "Appeal to intuition appears ubiquitous in armchair philosophy." (Jonathan Ichikawa)
5. "Intuitions currently play a central evidential role in much of the practice of philosophy." (Brian Talbot)
6. "Appeals to intuition play a foundational role in a good deal of philosophical theory construction." (Hilary Kornblith)
7. "According to standard practice, a philosophical claim is prima facie good to the extent that it accords with our intuitions, prima facie bad to the extent that it does not... intuitions about thought-experiments are standardly taken as reasons to accept or reject philosophical theories." (Stacey Swain, Joshua Alexander, and Jonathan M. Weinberg)
8. "Theorists regularly claim support from disputed intuitions when there is no resolution in sight. Indeed, disputed intuitions are often the linchpin on which everything turns." (Robert Cummins)
9. "Philosophers have traditionally held that claims about necessities and possibilities are to be evaluated by consulting our philosophical intuitions." (Janet Levin)
10. "Traditionally, intuitions about cases have been taken as strong evidence for a philosophical position." (Alexander Harper)
11. "When contemporary analytic philosophers run out of arguments, they appeal to intuition. Intuitiveness is supposed to be a virtue, counterintuitiveness a vice. It can seem, and is sometimes said, that any philosophical dispute, when pushed back far enough, turns into a conflict of intuitions about ultimate premises: ‘In the end, all we have to go on is our intuitions’." (Timothy Williamson)
Also: I don't think your lit search for the word "intuition" is a good way of capturing the prevalence of appeals to intuition, since they usually don't explicitly use the word. Cases in point: my example of G. E. Moore and the philosophical zombies are both arguments that rely on intuition, yet neither uses the word "intuition."
Julia, a list of cherry-picked citations does not an argument make. Yes, of course my raw estimate is ballpark only, but it does make the point that the very term appears rarely in the entire philosophical literature of the past several decades.
At any rate, I'd like to know what you think of my more substantial argument, which is that intuition is a heterogeneous category and simply does not (always) play the role you attribute to it.
@Massimo -- I don't think it's ballpark, I think it's a vast underestimate. Almost never do appeals to intuition actually use the word "intuition."
And my citations aren't cherry-picked; if you want to go through the literature yourself, I promise you'll find that they're representative. Keep in mind, this isn't analogous to string theory (in which the field self-selects for people who believe in string theory).
These philosophers who have written about intuitions are all established in other fields (epistemology, metaphysics, ethics, etc.) and chose to write papers about intuition in addition to their other work. It doesn't make sense to discount their testimony about the prevalence of intuition on the grounds that they wrote papers about intuition.
More broadly, I think your disagreement stems in part from the fact that you're using the word "intuition" somewhat narrowly. Thought experiments like Searle's Chinese Room are commonly taken to be appeals to intuition, as you'll see if you follow the link I embedded to the Stanford Encyclopedia of Philosophy:
"Many responses to the Chinese Room argument have noted that, as with Leibniz’ Mill, the argument appears to be based on intuition: the intuition that a computer (or the man in the room) cannot think or have understanding. For example, Ned Block (1980) in his original BBS commentary says 'Searle's argument depends for its force on intuitions that certain entities do not think.'"
In the narrow sense of "intuition," which might be analyzed as "a baseless intellectual seeming," I think Massimo is correct, that analytic philosophy does not depend heavily on such methods.
If the meaning of "intuition" is broadened to include any kind of judgment not based directly on observation, then I agree that this is a common tool in analytic philosophy. But analytic philosophy is not unique in this regard, as Massimo pointed out.
So, can analytic philosophy survive without "intuitions" in the narrow sense? Yes. We still have rational argument. Can analytic philosophy survive without "intuitions" in the broad sense? Perhaps not, but is this the meaning of "intuitions" philosophers have in mind when they criticize the practice of appealing to them? At base, even accepting the observable evidence of a particular science involves an intuition in the broad sense; a judgment that the interpretation of the observation is accurate, a judgment that the instruments are reliable, or a judgment that the calibration of the instruments is a reliable process, or a judgment that one explanation is more likely than another. These are the types of things scientists tend to disagree about in the particular cases.
Even in logic the broad intuitions are at work in the confirmation or rejection of formulae and rules of inference. You take a formula, like [A -> possibly-A], and you see if there are any counterexamples. This involves cooking up cases in which A is true, but possibly-A is false. If you can't find any, then the formula is well-supported. It is a judgment, or an appeal to intuitive support in the broad sense. So, I doubt that anyone thinks "unempirical judgments" ought to be disposed of. Consider,
"[S]ome philosophers think that something's having intuitive content is very inconclusive evidence in favor of it. I think it is very heavy evidence in favor of anything, myself. I really don't know, in a way, what more conclusive evidence one can have about anything, ultimately speaking." Kripke, Naming and Necessity, pg. 42.
Julia,
Let me clarify: you are quoting philosophers who write about intuition. If your point is to say that intuition is part of the philosopher's toolkit and that there are discussions about its effectiveness, granted, though nobody would doubt that.
But if you make the much grander claim that the entire enterprise of analytic philosophy depends on it, then I think you are spectacularly wrong. Not only do my numbers point to the conclusion that that is simply not the case, but I am a professional philosopher who reads technical papers all week long, goes to philosophy conferences on a regular basis, and attends the weekly philosophy colloquium at CUNY's Graduate Center. And I can't recall when was the last time I heard anyone make an appeal to intuition during a talk!
Searle's room: once again, to characterize it as an appeal to intuition is wrong. The experiment sets up a parallel between the computational theory of mind and a hypothetical situation. If one wants to show that the parallel fails, one can *analyze* the thought experiment and explicitly state where exactly Searle goes wrong. That's pretty much the way thought experiments work. Pace Block's commentary.
Massimo,
I think you are just plain wrong on the Chinese room in two respects. Firstly, it seems obvious to me that the Chinese room (referring collectively to the book, the man, his pencil, and so forth) does in fact understand Chinese. By hypothesis it responds with the same output to the same input as a native Chinese speaker would. Giving appropriate output for appropriate input is the only way I can think of that anyone would ever be able to determine if anyone else understood a language, so it seems like cheating to apply a different standard to a nonhuman device, just because it doesn't fit with our intuitions.
Secondly, I think the Chinese room argument is unavoidably an appeal to intuition. It is obvious to me that it does know Chinese, it is obvious to you that it does not. How are we to resolve that difference?
I think the Chinese room raises another argument against the validity of intuition as evidence, which is that a single person's intuition about a philosophical question, like whether a machine can know a natural language, can vary significantly based on what hypothetical is used to present the philosophical question. A lot of people do share your intuition that the Chinese room does not know Chinese. But I bet if you put a lot of those same people in front of a good episode of Star Trek: The Next Generation, Star Trek: Voyager, or Battlestar Galactica, all of which feature machines that speak English as major characters, they will experience a very different intuition about whether machines can know natural languages.
I also think that your search of philosophy journals for the word intuition doesn't tell us anything. It is like searching physics journals for the word calculus, finding that most abstracts do not include it, and therefore concluding that most physics does not use calculus.
I'd agree that a simple search for the word "intuition" has more or less no bearing on the matter. When appealing to intuition, I would not expect people to actually say so. Rather, I think they would come to a point in an argument where they would say "clearly, we can see that this is a problem", or "obviously, this is not what one would expect from..." or "X seems to be much preferable to Y." Any statement which one could justify further, but doesn't feel compelled to (because the audience will probably agree that it is right) is effectively ending in intuition. The only exception is if one knows a justification for it and reasonably expects the audience to know of such a justification as well.
I'd say, in fact, that a very large proportion of philosophy seems to lean on intuition, not in the sense of considering intuition a form of proof, but in letting one's opposition know: "You can assert otherwise, but if so, you'll have to accept this counter-intuitive idea, to bite this bullet, or else wriggle out of the reasoning." In the case of Moore's planet, this is a poor technique because lots of people find it easy to bite the bullet. In the case of Searle, we find that it's much more effective, in that people do find themselves going through a lot more to explain why the situation is not so bad, why the bullet is bite-able or why the situation is not analogous to the sort of computation that they believe produces a mind.
You (Massimo) seem to be looking at the dilemma and the wriggling, and saying that the dilemma was posited to produce the wriggling. That is, the really good ones are not mere appeals to intuition, but challenges to the opposition (Explain this!). To a certain degree, this may be right. I don't think that many serious students of ethics really believe that the next version of the trolley problem that they come up with will kill utilitarianism dead.
But I think that there's a certain degree of hope that it will move things closer to that point. It seems to me that, a lot of the time, the hope is that if there are enough counter-intuitive ideas, and if the wriggling gets counter-intuitive enough, people will stop believing it, and supporters of the idea will decay away. I don't think Searle posed the Chinese room so that computationalists could explain it for him, and be the stronger for it. He posed it because he wanted to use a simple, intuitive scenario to undermine their position.
Or to take a scientific version: the EPR paradox is well-known for the insight it gives into the very real phenomenon of entanglement. But it wasn't designed just to do that; E, P, and R wanted to undermine quantum mechanics, and they used this counter-intuitive "paradox" as an attack. (Of course, Einstein's anti-QM arguments in general tended to be intuitively motivated; "God does not play dice." is obviously not a very specific objection.) The value of the thought experiment to us, now, is not the same as the value as perceived at the time (when it was thought of as a potential reductio ad absurdum against quantum indeterminacy).
To be clear, I'm not certain that an appeal to intuition is a bad thing. In a coherentist view of knowledge, one could see it as the natural desire to avoid having one's most central, well-connected, deep-seated beliefs disturbed by peripheral/fringe beliefs. Or one could see it as due to one's mind's primitive version of Bayesian reasoning, attempting to produce a prior probability for an idea based on one's previous understanding of the world, without the luxury of having a rigorous mathematical basis to produce it. (I'm extremely certain that certain beliefs in Christianity are false, but how certain? 99.9% sure? 1-(10^-20) sure? Is it worth the trouble to invent a number quantifying my inner conviction?)
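(A minimal sketch of that Bayesian picture — my own illustration, in Python, with invented numbers: treat the intuition as an informal prior and watch how a single piece of evidence moves it.)

```python
# Bayes' rule: P(H|E) from a prior P(H) and likelihoods P(E|H), P(E|~H).
def posterior(prior, likelihood_if_true, likelihood_if_false):
    num = prior * likelihood_if_true
    return num / (num + (1 - prior) * likelihood_if_false)

# A strong "intuitive" prior of 0.95 survives one piece of contrary
# evidence (twice as likely if the hypothesis is false) mostly intact:
print(round(posterior(0.95, 0.3, 0.6), 3))  # 0.905
```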
ReplyDeleteTwo points contra Massimo:
(1) Lots of articles that involve thought experiments don't use the phrase "thought experiment" in the abstract. This further supports the criticism in Julia's second comment.
(2) Philosophers who write on intuition tend to define it in such a way that many uses of thought experiments will count as appeals to intuition. Searle, for example, argued roughly as follows: "Strong AI says the Chinese room understands, but obviously the Chinese room doesn't understand, so Strong AI is wrong."
Bealer would say this claim of obviousness is exactly the sort of thing he means by "appeal to intuition." Likewise the claim that the subject obviously doesn't know in Gettier cases, or the claim that (contra consequentialism) it is obviously wrong to kill one innocent person to save five others with their organs.
Frank,
> it seems obvious to me that the Chinese room (referring collectively to the book, the man, his pencil, and so forth) does in fact understand Chinese <
Well, you are entitled to your opinions. It seems to me obvious that it doesn't. Yes, this *may* sound like playing right into Julia's hands, but again the point of the thought experiment is to make the analogy with strong AI clear. There is no reasonable way in which the room "understands" Chinese unless you are using "understand" in a very non-standard way. That, of course, doesn't mean that one cannot have artificial intelligence (I think of human beings as biological machines), but it does mean that strong AI is wrong - a fact borne out by the utter failure of that research program over the past several decades.
> I think the Chinese room argument is unavoidably an appeal to intuition <
Only if by intuition here you mean human judgment. But as others have pointed out above, then *all* human judgment is questionable (which it is), including when used by scientists.
> I bet if you put a lot of those same people in front of a good episode of Star Trek: The Next Generation, Star Trek: Voyager, or Battlestar Galactica, all of which feature machines that speak English as major characters, they will experience a very different intuition about whether machines can know natural languages. <
This is a common mistake about skeptics of AI. You are confusing skepticism for the original strong AI program (which I share) with skepticism for machine intelligence in general (which I don't). Do you claim that the Cylons actually work according to a straightforward computational theory of mind?
> I also think that your search of philosophy journals for the word intuition doesn't tell us anything. It is like searching physics journals for the word calculus, finding that most abstracts do not include it, and therefore concluding that most physics does not use calculus. <
Perhaps, but a selective search of the literature like the one Julia did tells you nothing about the alleged prevalence of intuition in philosophy either.
Sean,
> a very large proportion of philosophy seems to lean on intuition, not in the sense of considering intuition a form of proof, but in letting one's opposition know: "You can assert otherwise, but if so, you'll have to accept this counter-intuitive idea, to bite this bullet, or else wriggle out of the reasoning." <
I wouldn't disagree with that, but this seems to me a very legitimate and useful application of "intuition."
> I don't think Searle posed the Chinese room so that computationalists could explain it for him, and be the stronger for it. He posed it because he wanted to use a simple, intuitive scenario to undermine their position. <
Of course, but if the computationalists *could* have explained it, then the community at large would have been a step closer to computationalism.
And as you point out, similar thought experiments are useful in science as well.
I've noticed that philosophers often use reductio ad absurdum arguments that hinge on non-absurdities. This is where gut feelings about what is and isn't absurd can come into conflict.
Massimo, I seem to recall that in your criticism of Sam Harris a while back, you argued that if corporal punishment could be established as beneficial, you would still reject it because of your ethical intuitions (I believe you used the word explicitly). Is this a change of heart?
Ian, I doubt I used the term intuition, and at any rate I certainly didn't mean it in an emotive way. My reason for that position is my virtue ethics stand, which tells me that beating defenseless people who are not entirely in control of what they do is not the right way to go. One could probably come up with deontological reasons too. Only a consequentialist *may* contemplate that course of action. Sure enough, Harris seems to be a consequentialist.
ReplyDeleteMassimo--
And when Harris insists that all your concerns are part of this whole "well-being" thing too, what will you say?
I say that his notion of wellbeing is confused. Whose wellbeing? Mine? The child's? Society's? And what does he mean by wellbeing? For a virtue ethicist, your eudaimonia (degree of flourishing) is negatively affected every time you do something non-virtuous, but that same action may nonetheless enhance your psychological happiness, or your material wellbeing.
And of course one needs a prior philosophical commitment to wellbeing.
Philosophy depends on logic.
Logic depends on valid premises.
If premises are verifiable materially, then they are empirical; i.e. science.
If premises are not verifiable materially, then they require previous sub-premise validity.
If there are no sub-premises which are verifiable materially, then the verification devolves to First Principles.
First Principles are intuitive, not material.
Philosophy is not based on material principles, nor is it science. It is intuitive, except possibly experimental philosophy, which arguably is a science anyway.
How is eudaimonia measured? What are its units of measurement? Is this an absolute existence? Or is it relative?
If I am happy and wealthy (and say, healthy and wise), why would I care about eudaimonia?
Stan, read Aristotle, or at least this:
http://plato.stanford.edu/entries/ethics-virtue/
And enough with this obsession that only things that can be measured are meaningful.
Then meaning is derived without material evidence, how? Intuitively?
Stan, first of all, quantification is not the same thing as material evidence; the former is clearly a subset of the latter.
ReplyDeleteSecond, there are entire branches of knowledge that have nothing whatsoever to do with material evidence: math and logic.
@Stan, how about conceptual analysis and reasoning?
ReplyDeleteYes, this *may* sound like playing right into Julia's hands, but again the point of the thought experiment is to make the analogy with strong AI clear. There is no reasonable way in which the room "understands" Chinese unless you are using "understand" in a very non-standard way.
I don't think that the word "understands" is usually the problem here. I think the more common problem is that the Chinese room story is not consistent with reality, and that people who are trying to picture it as consistent drop different assumptions or features of the story. You then have several people, each with a coherent picture of what's happening, but they are actually talking about different coherent scenarios.
The heart of the problem, to me, is the suggestion that Searle reaches the point where he can say that "Nobody just looking at my answers can tell that I don’t speak a word of Chinese."
In order for this to hold arbitrarily, he would need to answer inquiries like: "Describe a coniferous tree, from the viewpoint of a sheltered child who has never seen one." Or "If you were abducted by aliens who could see but had no ears, what sorts of things might you try in order to communicate with them?" Or "Please write a short poem [of some well-specified form] describing your favorite word in our discourse so far, but without using the word itself."
If a program could give reasonable answers to these questions, and many more, it would not be John Searle sitting in a room, with a book, cranking out answers every 20 minutes. It could easily be John Searle working full time for months on a single reply, inside a written program that might very readily compete with the Library of Congress.
If one drops the assumption that he can answer arbitrary questions, and instead merely passes relatively shallow Turing tests, then he can do that with a book in a reasonably sized room, in a reasonable time, and he obviously speaks Chinese no better than Eliza the chatbot speaks English.
If one maintains that the room can answer really tricky questions, and maintain the charade indefinitely against an arbitrarily clever inquisitor, it seems (to me!) that we can be as sure that the room speaks Chinese as we can that anyone does (although it probably speaks Chinese very, very slowly). But we then have to drop our mental image of Searle cranking out rote responses in real time.
Despite the "obvious" conclusions about the room, the only conclusion that seems obvious to me is that a superficially convincing chatbot can still be dumb, and that a real artificial intelligence is something much more difficult to produce. It doesn't, to me, speak at all of the ultimate feasibility of AI that approximates human understanding.
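(To illustrate that last point, here is a minimal Eliza-style sketch — my own, in Python, not Weizenbaum's actual program: a few rote pattern-reflection rules can look superficially conversational while "understanding" nothing, which is the sense in which a shallow Turing test is easy to game.)

```python
# A handful of reflection rules, Eliza-style: pure pattern matching,
# no model of meaning anywhere.
import re

RULES = [
    (r"i feel (.+)", "Why do you feel {0}?"),
    (r"i think (.+)", "What makes you think {0}?"),
    (r".*\b(mother|father)\b.*", "Tell me more about your {0}."),
]

def reply(text):
    text = text.lower().strip(".!? ")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please go on."

print(reply("I feel the room understands Chinese"))
# -> Why do you feel the room understands chinese?
```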
Sean, I really don't want to turn this into a defense of Searle. But it seems to me that a) the Turing test is a really dumb way to "test" anything; and b) that your last paragraph acknowledges what I think is the main point of Searle's thought experiment.
ReplyDelete"First Principles are intuitive, not material."
Except that by that same logic, they are not verifiable. Not even to an exact approximation.
"How about conceptual analysis and reasoning?"
According to neuroscientists like David Eagleman, the bulk of that as well is done below the level of our conscious examination.
Baron, no, analysis cannot possibly be done unconsciously, by definition. Of course it feeds on a lot of subconscious reasoning. Which is why I find Julia's critique of intuition so amusing: since intuition can be said to be the outcome of unconscious reasoning, and since that's more than 90% of the thinking we do (we all, not just philosophers), she is rejecting a large portion of human thinking.
Searle's point was that we understand that we do things for a purpose, i.e., we have objectives, and machines so far as he was aware don't.
Massimo writes,
"analysis cannot possibly be done unconsciously, by definition."
And what if your definition of unconscious is wrong precisely because it posits that we must be conscious of every level at which our assessment processes operate, or otherwise the assessments will be, as someone earlier alleged, primitive.
Which I suppose assumes as well that the unconscious processes cannot evolve where their intelligence is concerned.
Sorry, ain't gonna get into that discussion again, you know what I think about these issues. Human beings *are* thinking machines, so to me that line of inquiry is moot.
Forget the intentionality of purpose, what about the knowledge that humans have concerning their objectives? It seems to be part of your own argument, which is a good one, that non-human machines have none that aren't supplied by humans.
I am saying that *currently available* non-human machines do not have objectives that are not supplied by humans, not that this is impossible in principle. In fact, it ought to be possible if humans themselves are machines endowed with endogenous objectives...
OK, now to the next question I posed - will not our unconscious (or less than fully conscious) thinking processes have evolved somewhat concurrently with the evolution of our more conscious abstract reasoning systems? More conscious primarily because of the literate use of language, which in turn must be recognized by collaborative processes, yes?
I would think that unconscious thinking evolved much earlier than conscious thinking, though at some point they probably began to co-evolve.
I can't agree with your b. From Searle:
The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality. It adds nothing, for example, to a man's ability to understand Chinese.
and later:
The point is that the brain's causal capacity to produce intentionality cannot consist in its instantiating a computer program, since for any program you like it is possible for something to instantiate that program and still not have any mental states.
This seems to me a far cry from my point, which is more like "any satisfactory formal program would have to be very, very complex and well-designed in order to have human-comparable mental states". It also strikes me as a rather silly set of objections, for reasons that perhaps are best not to get into here.
Anyway, my roundabout point was, people's intuitions about which assumptions are best to drop clearly play a role here. The Chinese room experiment (at least, as Searle conceived it) plays upon the intuition that understanding has to be in something, that either the man or the stuff has to understand Chinese (or just the man, in the case of the internalized room). If there's nothing present that understands, there's no understanding. A more functionalist interpretation, however, relies on the intuition that, if something can use information in all the right ways, then it understands that information. If not a Turing test, there should at least be some test we can give to a machine that can show whether it understands, and if it doesn't, that lack of understanding should show.
Whatever specific justifications are given, I suspect that much of the division over this issue is a matter of leaning on these intuitions (that understanding inheres in an object or substance that has the power to understand, and that lack of understanding is demonstrated by a visible failure to understand). Whichever warps or breaks first, seems to hand victory to the other.
I agree that *some* test *might* be able to tell us about understanding, but not Turing's. And I actually agree with Searle that understanding isn't simply the ability of a program to manipulate symbols. Otherwise I would have to conclude that my computer "understands" chess. It doesn't, even though it can play the game better than I can.
And I actually agree with Searle that understanding isn't simply the ability of a program to manipulate symbols. Otherwise I would have to conclude that my computer "understands" chess. It doesn't, even though it can play the game better than I can.
I'd tentatively agree with the latter statements, but I'm not actually sure what the difference between "understanding" and not understanding would be, which makes it rather difficult to say that this requires more than symbol manipulation.
You can program a computer that plays tic-tac-toe perfectly, either through inputting every possibility, or just using a few simple principles. With a little more work, one can program a computer that can deduce such principles through induction over several games, though perhaps not leading to results that are quite as tidy, eliminating the need for a human programmer to input anything about the game other than its basic rules and structure. I'm not sure what it means for a human being to "understand" tic-tac-toe better than that, or that the human has any comprehension of tic-tac-toe, in and of itself, other than knowledge of the rules and algorithmic (maybe non-deterministic) tendencies or strategies.
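(For concreteness, a minimal sketch — my own illustration, in Python — of the "inputting every possibility" route: exhaustive minimax plays tic-tac-toe perfectly without anything one would be tempted to call understanding.)

```python
# Perfect tic-tac-toe via exhaustive minimax over a 9-character board string.
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in WINS:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Return (score, move) for `player`; 'X' maximizes, 'O' minimizes."""
    w = winner(b)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, c in enumerate(b) if c == " "]
    if not moves:
        return 0, None                      # draw
    results = []
    for m in moves:
        score, _ = minimax(b[:m] + player + b[m+1:], "OX"[player == "O"])
        results.append((score, m))
    return (max if player == "X" else min)(results)

# X to move on an empty board: perfect play on both sides yields a draw.
print(minimax(" " * 9, "X")[0])  # 0
```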
Sean, you are beginning to sound like a behaviorist ;-) Understanding has to do with internal mental states, not just with the ability to carry out a task.
ReplyDelete"...but it does mean that strong AI is wrong - a fact bore out my the utter failure of that research program over the past several decades."
The 'failing for decades' category would at present include String Theory. Would you then say that is wrong as well?
Concerning the recently posted data, I think that very few published papers that give an argument appealing to intuition actually use the word 'intuition' in the abstract.
Nope, string theory is not even wrong...
ReplyDeleteMassimo and Michael,
Your response of a list of non-physical knowledge does not address the issue of their intuited base.
Math, logic, conceptual analysis and reasoning, all these devolve to an axiomatic basis which is used to confirm their validity: First Principles. First Principles are intuited. There is a presupposition of intuited validity in all philosophical arguments which are not empirical in nature.
Your answers did not address the issue that Julia raised: Is intuition a valid source of evidence? If it is not, then the above pursuits have no valid basis beyond the unsupported presumption of their validity.
The second level of the issue is this: presuming that intuition is not believed to be a significant source of knowledge, then what is the basis for evidence (and evidentiary validation) in the pursuits you list?
And a third level: what is your theory of valid evidence as pertains to philosophical discovery? For example, which evidence is objective, if it is not, at its base, intuitive? And how is it objectively demonstrable?
Stan, first off, philosophy is not in the business of discovery; its purpose is to analyze concepts and ideas. Again, there is a tendency to use the model of science, which does not apply.
Second, are you seriously suggesting that not only philosophy, but also math and logic are baseless?
I asked for your evidentiary theory; if it is not axiomatically based, then what makes it valid evidence?
Now I also ask if discovery of both truth and validity as well as "what is actually the case" is not a purpose (the purpose?) of philosophy, then what exactly is the purpose of analyzing and philosophizing?
Two questions, seeking answers.
Stan, evidentiary theory of what? Philosophy, math, logic, science, everything? I tend to subscribe to views that see evidence and arguments as interconnected, in other words, views that don't rely on an "ultimate foundation" for knowledge.
I already stated what the purpose of philosophizing is: to analyze, dissect and critically evaluate concepts.
So you reject that philosophy discovers anything. Even if it is the validity of one argument over another. An interesting definition.
Your view on valid evidence is hard to decipher; do you mean that an argument contains valid premises if it interconnects with other arguments, which is all that can be expected due to the lack of axioms or first principles?
If I understand correctly, then, all premises are never discovered to be "true" nor "valid", they have an interconnection with other arguments / premises which are also never "true" nor "valid", also due to the lack of the foundation of axioms and first principles.
Question then is, how do you argue against the circularity of such a system? (argument A is valid due to argument B which is valid due to argument C which is valid due to argument A)
And the final questions (all on the same issue) - then I will leave you alone - when you evaluate, critically, what are your standards of evaluation? If it is based on other arguments only, isn't there either an infinite regress of arguments based on arguments based on arguments, or circularity (as described above)?
If you are selecting between competing arguments, do you not have standards for your analysis? This is the type of evidentiary theory which it seems that you must reference in your analyses. Does not one argument succeed because it is the more "rational" argument, and does not "rational" depend upon the validity of its evidence? If the evidence is just another argument, then nothing has been shown. But if the other argument has some independent evidence, then there is a standard by which to judge, is there not?
Stan, again, you seem to be wanting some sort of "foundational" basis, but human knowledge - all knowledge, not just philosophy - is more akin to a web of interconnected axioms, data and arguments. There is no such thing as "first principles," anywhere.
I never said that philosophy cannot establish that an argument is better than another. Of course it can, that's what philosophers do most of the time. But I don't call that a "discovery." A discovery is the factual finding of something physically out there, like a planet. We could argue whether mathematicians discover things or not, but I really think it is a stretch to describe finding that an argument is better than another a "discovery."
Massimo said: "human knowledge - all knowledge, not just philosophy - is more akin to a web of interconnected axioms, data and arguments. There is no such thing as "first principles," anywhere."
This is the most succinct expression of atheism and physicalism I've seen in a long while. It is also profoundly anti-intuitive. I'd point out to Julia that Philosophy has succeeded precisely when it has ignored intuition and found terra nova. Intuition is, at least to the philosophers I like, brush to be cleared, not lumber for houses.
Theists and idealists are unable to quell their intuitive sense of self and thought primacy, so they construct dubious arguments against empirical reality. Personally, I'm satisfied that if Bertrand Russell and A.N. Whitehead couldn't find the foundations of math, then I shouldn't hold my breath waiting for the adherents of Bishop Berkeley and Deepak Chopra to succeed in finding the foundations of reality.
OneDayMore,
Leaving intuition aside for the moment, what are the "axioms" to which Massimo refers? The questions I asked above are, for the most part still open, and I have others as well. Perhaps if I ask them serially:
1. What are the axioms to which philosophers refer?
2. If there were no axioms, would not the reference to other arguments be either circular, referring back to itself, or a linear sequence terminated either in an indestructible argument or in an infinite regress attempting to find an indestructible argument?
3. Presuming that I do not understand philosophy or its fruits, can you answer for me: is it valid to say that philosophy does not produce either truth or validity? What about indestructible arguments, which apparently do not exist?
4. Presuming that question 3 is thought to be the case (there is no validity, truth, nor indestructible argument), then what do philosophers see as the fruit of their pursuit?
5. Is not an axiom thought to be an indestructible argument? One without premises even?
Thanks for your patience.
If intuition is the unconscious aspect of thinking, where is it written that it's not knowledgeable, or not the source of ideation, the font of scientific hypotheses for example? What a crock to then argue that philosophy must ignore intuition to succeed.
Which wasn't Massimo's argument in any case unless I miss my guess intuitively.
And as a supporter of intuition's evidential value, I was the first to comment that, in effect, there is no evidence that first principles exist, nor is belief to the contrary due to some flaw in the intuitive conception process.
Baron P,
Massimo did mention axioms, and the first principles are axioms of the most basic kind. He states, however, that those principles do not exist.
On the other hand he also argues (I think) that there are no objective truths that can be discovered, and that discovery is not part of philosophy.
Can it be argued successfully that there are no tautologies? That a "thing" can simultaneously exist and not exist? That internal contradiction (non-coherence) renders a proposition false? These and other basic principles do exist, so there must be coherent arguments that they are false, if falseness exists.
Perhaps falseness does not exist, but if not then all arguments would seem to be equally correct. But if all arguments are not equally correct, then there must be a sorting mechanism, a standard, by which the less correct (more false) would be eliminated.
It would seem that eliminating falseness from the idea of argument analysis would eliminate the value of the argument and argumentation altogether.
This is the reason that I have asked the questions, above, for which I hope to receive answers.
Stan,
Isn't it circular to justify logic with logic? The answer is no, but I'm uncertain how one can arrive at that answer analytically.
Massimo,
Can anti-foundationalism be arrived at analytically? What philosophy can arrive at an anti-foundational position while limiting itself to the analysis of concepts? And don't say you have an intuition that there are no foundations, or a vortex will open and swallow us all.
Stan,
Axioms are not truths; they are human constructs that try to get as close as possible to the certainties that were assumed to be the basis of all knowledge. We predicted in that sense that if we looked in the right place and right way for those certainties we would find them. In the meantime we've come up with axioms that are at best approximations of what we can hope is true. One axiom might be that truth can be no more than an exact approximation in time.
No first principles need apply.
Can discovery be a part of philosophy? Sure, but not in the sense that it was heretofore an undiscovered truth.
Of course there are tautologies - you can say the same thing twice in different words and be twice wrong convincingly in the process. In short, tautologies are not principles that one can assign an exact value to - as you seem to have done with your concept of first principles. (2+2=4, for example, represents a predicted value approximate to the relativity of the spaces filled and times of filling.)
You also imply that if there is no truth there can be no falseness that depends on it. I'd call that the quintessential false dichotomy - as if true and false were our only choices.
And yes there are other forms of logic than the classical that sort out the probabilities - Bayesian may be closest to the one we use intuitively. And who's to say our variety is not the more subtle, depending on who's using it and how.
Not even wrong? Now where have I heard that before? Hmmm. Anyway, slightly off topic but somewhat incongruous is that statement up there:
ReplyDelete"I think of human beings as biological machines." with your (inferred) view that human beings are not in fact commodities and concomitant worries about increasing human commodification (and blondification).
So are machines not in fact commodities judged by function and appearance?
I certainly agree with the first part - human beings (and all living things) are biological machines, 'meat machines' as Nikola Tesla used to say. I also think they are commodities and may as well have numbers stamped on their heads listing Physical, Mental, Social and Economic value because that is certainly the way they treat each other. So what is it that makes our particular brand of machine not a commodity?
When attending a seminar, I, a mathematician, sometimes would raise questions by saying “my intuition tells me that …” The intuition I refer to is based on my “learned” knowledge in my area of expertise, and is usually not a hunch (at least not a hunch a non-mathematician can cook up) or something unjustified. It’s acquired through my thorough understanding of my area.
ReplyDeleteThameron,
> I certainly agree with the first part - human beings (and all living things) are biological machines, ... So what is it that makes our particular brand of machine not a commodity? <
The fact that they can reflect on what they do and why.
James,
> Can anti-foundationalism be arrived at analytically? What philosophy can arrive at an anti-foundational position while limiting itself to the analysis of concepts? <
Yes, it can, in some cases. The obvious one is Godel's incompleteness theorems. But more generally, the lack of foundationalism is derived from the failure of so many people, from Hume to Russell, to find foundations.
Stan,
> Massimo did mention axioms, and the first principles are axioms of the most basic kind. He states, however, that those principles do not exist. <
People have already addressed this, but just to make sure. Axioms exist, of course. In philosophy they are the laws of logic, or more narrowly the assumptions that go into an argument. The problem (see above) is that there is no foundational justification for them. They are open to discussion, and people derive certain consequences from them. Again, in this philosophy is no different from math, logic and science. So if you are going to reject philosophy on those grounds, be prepared to reject pretty much all of human knowledge, which would be pretty strange.
Stan, I posted a response last night to the questions you addressed to me, but it seems to have been lost in the shuffle. I didn't make a copy, so can't repost it.
Wasn't trying to ignore the questions, but in any case Massimo seems to have addressed some of them in the interim, so I'm done trying.
Godel's theorems, as well as the absence of apparent foundations, only show that a lack of foundation is possible, not that it is certain. To claim that there are not foundations is a certain claim that cannot be reached by mere analysis (it's impossible to be certain of a groundless universe). You can support an anti-foundational standpoint, I tend to, but it is a position whose certainty cannot be underwritten by analysis alone. Isn't that the case?
We can - and since we want to, should - look for whatever was or is or could be responsible for the order that we feel exists in the universe. And I use the word 'feel' advisedly. Since all we can know or learn from is limited to what we have evolved to feel. Which means we can't be certain what it is we're looking for or that we'll know it for a certainty when and if we find it.
And conversely, we can't be certain of a groundless universe, so why would or should we want to be? To be groundless is to have no reason to exist, and yet we do exist, and so we know at least we're here (as the old song says) because we're here. And can't know that there could be a time when whatever we are made of wasn't/isn't somewhere. Someplace where everybody (as the old joke goes) gotta be.
James, my understanding is that Godel's theorems (there are two) *are* analytical. They show that the foundational quest (within the limits of applicability of the theorems) is impossible.
A brief clarification:
"Sean, you are beginning to sound like a behaviorist ;-) Understanding has to do with internal mental states, not just with the ability to carry out a task."
If I was really aiming at behaviorism, I would not have bothered to write the bit about induction (since it wouldn't matter how the computer came by the ability to play tic-tac-toe).
Rather, I was simply trying to highlight that it's not clear what's being referred to as "understanding" here. If my knowledge about tic-tac-toe strategy is encoded into a set of simple principles that I've picked up from playing the game, together with one or two special cases I've learned to look out for, and if the computer actually has principles and special cases that formally correspond to mine in its program, I don't know what part of my mental state is both relevant to tic-tac-toe and not shared by the computer. I may have understanding of some things that are far deeper than what the computer could possibly understand, but the understanding that I actually use to play tic-tac-toe is not, in any way that's clear to me, more profound or relevant than the computer's "understanding" of the same subject.
Sean, the computer, in my opinion, has no understanding at all of tic-tac-toe. It only has a set of rules and heuristics, possibly even flexible ones (so it can "learn"), but no understanding.
Understanding here means to be aware of the fact that you are playing a game, that it has rules, that it works in a certain way, etc. You do, the computer doesn't.
Sure, Gödel's theorems are analytic, but they do not prove the impossibility of foundations as such; rather, they prove that a foundation by means of analysis is impossible - which is exactly what I said in the first place. Gödel's theorems have nothing to say about the possibility of foundational knowledge outside of strict analytic constraints, i.e., outside of logical assumptions. Of course, I can only make this statement by also assuming logic, but therein lies the paradox.
My point is just that if philosophy is absolutely limited to analysis, then most things unravel pretty quickly. At the very least we assume logic for no particular reason other than the fact that it seems to be useful - not a very analytic move.
I think some would argue that a pragmatic turn at this point would be an analytic solution to this paradox, but I don't think pragmatism is a strictly analytic turn. Rather, when it does things like assuming the validity of logic, it is creating a space where analysis is possible, not making an analytic observation.
James, but if not analytically, how do you get your foundations? That was the program that Russell et al. were after, and that's the one Gödel showed can't be done.
Philosophy is done by analysis, but not necessarily by strict formal logic, which is what Gödel's theorems addressed. Still, nobody has provided a self-consistent foundation for either formal or informal logic, so yes, the best that we can do is to use analysis, logic, science, etc. because they seem to work. Good enough for me.
"[...] because they seem to work. Good enough for me."
I agree. I'm just uncomfortable calling the "seems to work" turn an analytic one. It seems to run counter to the idea that philosophy is limited to analysis. Though I have to admit, I'm not really prepared to say what else it is. Maybe I just have an overly narrow or fuzzy conception of analysis.
Massimo,
I'm not at all clear on how you are using terms like “understanding,” “strong AI,” or “straightforward computational theory of mind,” so I'm not sure what positions you are taking. Searle doesn't tell us anything about how the program described in his thought experiment works; it could be a decision tree or a neural network or a detailed simulation of a biological brain or some as yet undreamed-of computational scheme. But it, by hypothesis, exhibits the same understanding of Chinese as a native Chinese speaker. I don't know how an intelligent person could deny that it understands Chinese simply because it happens not to be composed of neurons. The habit philosophers have of throwing the word “intentionality” into a discussion like this, without ever being able to give a coherent account of what intentionality might be, doesn't seem to change anything. I also don't know how this thought experiment would be related to a particular program of actual artificial intelligence research. It seems to me that if one wants to know what currently existing AI programs are able to do, one has to talk to actual AI researchers, not do thought experiments.
Your equating of intuition with judgement is interesting, and it may be that there is a spectrum between them. I want to bring in something you said in episode 25, because I think it is relevant here. You said that philosophical expertise is about approaches, not outcomes, and that while moral philosophers claim to be experts in moral reasoning, they do not claim to be moral authorities. If this is so, you seem to have admitted that moral philosophy, as an academic discipline, is useless and should be done away with. If moral philosophers are no more likely to give the correct answers to moral questions than anyone else, then what use could moral philosophy possibly be? Why would anyone spend any time at all studying it? Doesn't it fly in the face of Occam's razor, which says that adding complexity is a bad idea unless it makes you more correct? At best your conception of moral philosophy just makes moral philosophers more likely to give consistent answers to moral questions, and that just reminds me of this: http://www.thinkgeek.com/homeoffice/posters/b049/. How much of the rest of philosophy is similar to moral philosophy in this way?
Frank, well, I think the terms "strong AI" and "computational theory of mind" are clear. You can just look them up. As for "understanding," I didn't say that intuition is the same as understanding; I said that some of what Julia seems to think is intuition could be recast simply as understanding. In other words, she is mixing different things under too wide an umbrella term.
If you think I ever said that moral philosophy is useless you clearly misunderstood what I said.
Massimo,
In episode 25, starting around 12.5 minutes in, you did say very explicitly that moral philosophers do not claim to be experts in the sense of being more likely to have the correct answers to moral questions. If that is so, how could moral philosophy be useful? I really want to know how you can reconcile the position you stated there with the position that moral philosophy is of any value, because I just don't see it.
Frank, as I explained there and elsewhere, moral philosophers are good at thinking about moral issues and at bringing out the consequences of our assumptions about morality. This doesn't necessarily give you answers, but it does help in thinking about them. I.e., moral philosophy is an exercise along the lines of "if I hold values X, Y and engage in action Z, what follows?" So, it's useful, but not in the sense of providing definitive moral answers.
ReplyDelete"Understanding here means to be aware of the fact that you are playing a game, that it has rules, that it works in a certain way, etc. You do, the computer doesn't."
Perhaps we are merely talking past each other? I was talking about understanding how to play well, whereas you are talking about understanding the game in general. At least, I'm not confident that understanding how to play is the same as understanding more abstract concepts such as what a "game" is in general.
We're also hitting the usual issue of what the referent of "awareness of playing a game" is, which I don't think we're [heavy understatement] likely to settle here. I do think I've played many games without being consciously aware of that fact. I'm not talking about anything sinister, just zoning out while doing some kind of complex, but repetitive, well-understood task, only to find myself suddenly aware of my surroundings again and the process finished. One does wonder if that temporarily made me a "zombie" with respect to games, or programming, or driving home.
Sean, really, you may have played games while you were (partially) "zoned out," but I think it is strange to insist that (current) computers understand anything in any meaningful sense of the word understanding. Computers cannot reflect on what they do and why; we can. That's understanding.
Massimo,
I don't think you're addressing the issue. Here is another way to ask the question: in what sense is thinking like a moral philosopher a "better" way of thinking, if it doesn't make one more likely to arrive at the correct conclusions? Why would I want to think like a moral philosopher if thinking about morality like a person without philosophical training is easier and gives me answers that are just as likely to be true? Why would I want to live in a society that has institutions dedicated to moral philosophy, if those institutions don't bring us closer to moral truth? I can see how your conception of moral philosophy would make a person more likely to give consistent answers to moral questions, but so what? As the poster I linked to earlier says, "consistency: it's only a virtue if you're not a screw up."
Perhaps an analogy would be helpful in clarifying what my issue is. If my goal is to predict the outcomes of a series of coin flips, guessing heads all the time may be more consistent than making random guesses, but who cares? Guessing heads for prime numbers and tails for composite numbers may sound fancy and require more mental energy, but it doesn't make the guesses more accurate, so that expenditure of mental energy is a vice, not a virtue. If no method of guessing the outcome of a coin flip is going to give me a better answer than any other, why wouldn't I do the easiest thing, and laugh at people who try to make it complicated? Similarly, if no way of thinking about moral questions is going to give better answers than another, why wouldn't I just do the easiest thing and laugh at people who try to make it complicated?
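That claim is easy to check with a small simulation (purely illustrative; the seed and sample size are arbitrary): on a fair coin, the fancy prime/composite rule and the lazy always-heads rule converge to the same accuracy.

```python
import random

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

random.seed(0)
flips = [random.choice('HT') for _ in range(100_000)]

# Lazy strategy: always guess heads.
always_heads = sum(f == 'H' for f in flips)
# "Fancy" strategy: heads on prime-numbered flips, tails otherwise.
fancy = sum(f == ('H' if is_prime(i) else 'T')
            for i, f in enumerate(flips, start=1))

print(always_heads / len(flips))  # ~0.5
print(fancy / len(flips))         # also ~0.5
```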
Frank, philosophy is neither like science nor like statistics, so your example is irrelevant to the discussion. Moral philosophers arrive at conclusions - obviously - but I wouldn't say that they are "right" in the same sense as a scientist's conclusions (that would be Harris' mistake). It's more like reasoning in logic: IF X then Y; X; therefore Y. Then you can go back and resist Y by arguing that, for instance, X is a debatable premise, and so on. One makes progress and arrives at answers, but in nothing like the sense you mean.
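In standard notation, the pattern is (a textbook sketch, nothing more):

```latex
% Modus ponens: accept both premises and the conclusion follows.
\frac{X \to Y \qquad X}{Y}
% The "go back" move: debate or reject the premise X. That blocks
% this argument for Y, though it does not by itself establish \neg Y.
```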
ReplyDeleteMassimo, I was surprised to see you draw what looked like a contrast between Chalmers' "Philosophical Zombie" and Searle's "Chinese Room" arguments. Despite their disagreements, I have always thought of them as philosophical fellow travelers, methodologically speaking. To be blunt about it, I think they both argue by the abuse of intuition.
If you have the time and inclination, I'd be interested to hear your take on the difference between their arguments, and why exactly Chalmers' argument is an "appeal to intuition" while Searle's is not.
You say above that to say the Chinese room "understands" Chinese would be to use "understands" in a non-standard way; but we don't yet know enough about our own "understandings" to define a standard. We have only our social intuitions about what constitutes "understanding" in other people.
So if intuitions are indeed domain specific, then it's no wonder that our intuitions balk at the idea that the Chinese Room understands Chinese. I think Searle's argument works by fabricating a scenario in which our intuitions fail, and then trying to build a claim atop that failure.
Scott,
The difference between the two cases is that the Chinese room attempts to bring out the hidden assumptions of the computational theory of mind. Chalmers, on the other hand, makes an appeal to "plausibility," which is simply uninformative as to actual reality. It is impossible to show that zombies are plausible, because being able to think about them doesn't make them plausible. And plausibility isn't going to settle any real question about consciousness.