by Julia Galef
* David Chalmers’ The Matrix as Metaphysics is a great read if you haven’t already discovered it – he makes a pretty convincing case that being a brain-in-a-vat wouldn’t actually be as big of a deal as we might have thought.
* A Philosopher of Religion Calls it Quits: An article I wrote for Religion Dispatches about philosopher Keith Parsons’ decision to abandon the field. Indirectly, it’s also about differing approaches to philosophy: are we asking questions of the form “Is X true?” or of the form “If X were true, what would follow?”
* A good article from the NY Times about the controversy over what statistical methods researchers should be using, stirred up by the recent publication – in a top psychology journal – of a paper purporting to provide evidence of ESP.
* This is a great interview with Andrew Gelman, an excellent and insightful statistician and political scientist at Columbia (and a former professor of mine). He’s talking about the top five statistics-related books he would recommend to laypeople, and why.
* I asked my friend at Ask A Mathematician, Ask A Physicist whether it's possible to prove cats are waves. Answer: Yes. Also from AAMAAP, the mathematician answers the question, “What is 0^0?” – a reminder that sometimes, there is no right answer in math.
* Joshua Greene’s fMRI studies famously showed a difference between emotional and cognitive moral judgments. But this well-argued paper by Selim Berker, The Normative Insignificance of Neuroscience, makes a good case for why the neuroscientific findings are irrelevant to moral philosophy.
There is a philosophy class at Sac City that teaches the philosophy of the Matrix. Awesome.
Julia, I'd be wary of Chalmers. He seems to be a theist in sheep's clothing. He's trying to build the kingdom of heaven on a very flimsy foundation: the notion that a non-physicalist metaphysics can be seriously entertained because it is "coherent and cannot be conclusively ruled out." Yes, he's trying to move the "brain in a vat" notion away from its usual adherents (I'm uncharitably inclined to say potheads and cinema nerds). He does this by arguing that it is more than a skeptical project. In fact, Chalmers doesn't really want us to question our empirical experience of the world (which is exactly where this sort of thinking falls apart). Instead, he's just trying to leave enough room for something non-physical to undergird physics. Exactly what he wants that to be (consciousness, computation, God) I don't really care. But let's all just pause for a moment and hear what he's saying. We might be brains in a vat. Why? Because this is a notion that is coherent and can't be ruled out. Therefore what?
Look, there are myriad silly ideas that are coherent and, by dint of their silly nature, irrefutable. I might be the first sentient corndog having a dream about being a human. This is coherent and irrefutable, but it's not the stuff of serious discourse. I don't want to go all Logical Positivist here, but these are empty discussions. Nothing productive will ever come from this sort of thinking except for Chalmers getting attention. Any serious discussion of simulation, neurobiology and evolution will make it obvious how absurd the notion is.
First of all, let's remember what a brain is. A brain is a physical structure that has been formed by countless physical processes to function in a certain way. A brain is a response to an environment! So a brain in a vat either was formed in the usual human way outside the vat, in which case, once put in the vat, it must by its nature be fed a highly accurate simulation of the actual reality that created it, thus opening up all the arguments about simulation vs. reality... OR the brain in the vat was formed in the vat, and therefore is a bio-neurological response to the vat itself, and as such is not a brain that human brains can understand, nor one the aliens conducting the vat experiment can understand, given that they are "outside" the vat reality. The vat brain is in the only reality it can ever inhabit, because it was created there, and it is an inaccessible reality to non-vat brains.

Unless you suggest that the vat brain was grown while being fed a sufficiently accurate representation of reality (perhaps with only minor variations), in which case the vat brain is neither deluded (in a metaphysical sense) nor in a separate reality. It is in a reality formed out of our reality. A subset of our reality, if you will. So Neo meets people who act like people and drive cars etc. Yes, they fly and dodge bullets and such, but those aren't actually novel representations of reality. These variations on normal physics are, let's be honest, fairly hackneyed storytelling devices that are every bit as much a part of our world as anything else.
So, all of the mind-bending fun that vat trippers want to have can only be bought by first assuming, as I'm sure Chalmers does, that there is something other than physicalism in the universe. This is an unfounded assumption. If you don't take this leap of faith at the beginning, then you can't prove what Chalmers thinks he proves. Brains in vats only prove there is something other than physics if you first assume there is something other than physics. To adapt Dennett's argument that Mary's response upon seeing color could be "Ah, just as I expected": so too might a brain in a vat say "Wow, this is a crappy version of the reality I grew up in," or "I have become accustomed to a simulation of everybody else's reality, but to the extent I function as a human brain, I must exist in a version of everybody else's reality," or "I do not function as a human brain because I was poorly formed by aliens who don't understand how human brains work; therefore I don't function at all, or I am as a severely brain-damaged human, not something other humans can empathize with," or "I exist in the only reality I can know; I have formed my neural patterns in response to these aliens' stimuli, and therefore I am a special type of brain for a special type of reality that neither humans nor aliens can understand. I am not in a vat, I am not in an alternate reality, I am in the only reality in which I can be. Take me out and I will surely not function at all."
So Chalmers fails because he prefers to talk about reality while ignoring what any educated person should now know about brains.
Okay, here's another whack at it. None of the brain-in-a-vat scenarios are actually about brains in a vat. The smuggled assumption is that they are about SOULS or PEOPLE in a vat. Of course they say "brain," because they are falling victim to what I call the "skull dichotomy," which implicitly asserts that a) there is a soul (or "consciousness" if you like), b) it resides in the brain and no other part of the body, and c) it would also exist in a brain in a vat. None of these assumptions is justified. The smuggled assumptions become clearer if you ask, "Why not a leg in a vat?" or "Why not a Boltzmann spleen?" These scenarios are much less interesting. But why? Why couldn't there be a leg in a vat that "thinks" it's walking, but in reality is being "fooled" by electrodes? Would we rightfully assume that this proves something metaphysical about a leg's relationship to "walking"? Or even reality? I don't think so. But unless you are assuming the existence of "soul" or "self" as an extra-physical thing at the beginning of your argument, then you can't take any more import from a brain in a vat than from a leg.

A brain is very much like a leg. It is an organ system that develops in its environment. A leg in a vat might be horribly atrophied, even though aliens stimulate its neurons so that it "experiences" walking. Or perhaps your vat is very complex and includes enough simulation of actual walking (perhaps through a treadmill-like device) that, should the leg be attached to an actual body, it would function. But then you would have to conclude that it wasn't "fooled." It was, in fact, trained to conform to our reality. In short, if the envatinated brain has anything close to human function, then you must be mirroring actual reality more accurately than not. In other words, Chalmers's flights of fancy may allow him to hypothesize all sorts of absurdities, but we know there are biological and physical strictures on a brain that prevent it from even existing in a vat qua a brain. So the wild assumption is not that there are aliens who are supernaturally good at simulating reality, but that there is a thing like a person (or a soul or consciousness) that would persist independently of the brain. This is Chalmers's starting point, not his reasoned conclusion. The argument is circular. In short, the idea that I might be a brain in a vat is not coherent.
No, Baron; of course, to say that would be the fallacy of affirming the consequent. However, some people with imaginations and active minds enjoy exploring the implications of ideas they don't think are true, such as the existence of various kinds of deity, the coherence of "objective value", or the tragicomic exploits of a cranky doctor and his bewildered coworkers. Once you've pointed out that something isn't true, you might not yet have said everything that's interesting about it.
ReplyDelete@One Day More: this IS serious stuff - about you being a sentient corn-dog. It hits the nail on the head. No level of 'life' has a lock on consciousness. The corn-dog is as conscious as you think you are.
Aw crap, I think I may have mistaken Baron's meaning because I didn't see the "not" in his post.
DaveS, I am unable to reconcile the notion of sentient corn dogs and serious discourse. Can you provide any useful ideas that come from brains in a vat? Chalmers works hard to argue that there is something more here than naive skepticism. I don't see it. Especially if the best you can offer is that it's "coherent and cannot be conclusively ruled out." I can rule it out as conclusively as any other assertion. Of course I can't defeat epistemological skepticism, but that is the nature of knowledge and nothing special about brains in vats. If you stack the deck by saying there might be such a thing as a perfect simulation of reality, you win your little card game, but nothing useful comes from the game.
ReplyDeleteHere's my problem with null hypothesis significance testing.
If I flip a coin 5 times and get THHHT, the chance of getting that exact sequence with a fair coin is 1/32 ≈ 0.03 < 0.05. Therefore, reject the null hypothesis that it's a fair coin.
Now, you might say I should've found the probability of getting 3 or more heads, but why should I look at that statistic instead of, say, the probability of getting 3 heads in a row, or a palindrome, or some other set of outcomes that includes THHHT?
It seems arbitrary unless you have a well-defined alternative hypothesis, which you can weigh against the null by using Bayes factors. Not only that, you can set different thresholds for Type I and Type II errors, and even assign costs to them.
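To make that concrete, here's a minimal sketch in Python of how a Bayes factor would weigh the THHHT sequence; the biased-coin value of 0.7 is a hypothetical alternative chosen purely for illustration:

```python
# Bayes factor for the THHHT example: fair coin (p = 0.5) versus one
# specific biased alternative (p = 0.7, a purely illustrative choice).

def sequence_likelihood(p, heads, tails):
    """Probability of one exact flip sequence given per-flip heads probability p."""
    return p**heads * (1 - p)**tails

# THHHT has 3 heads and 2 tails.
like_null = sequence_likelihood(0.5, 3, 2)  # 1/32 = 0.03125
like_alt = sequence_likelihood(0.7, 3, 2)   # 0.343 * 0.09 ~= 0.0309

bayes_factor = like_alt / like_null
print(f"P(THHHT | fair)   = {like_null:.5f}")
print(f"P(THHHT | biased) = {like_alt:.5f}")
print(f"Bayes factor      = {bayes_factor:.3f}")  # ~0.99: the data barely discriminate
```

The same sequence that "rejects" the fair coin at p < 0.05 under the arbitrary exact-sequence statistic yields a Bayes factor of about 1 against a concrete alternative, i.e. essentially no evidence either way.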
@OneDayMore
"Can you provide any useful ideas that come from brains in a vat?"
Yes, (1) all useful ideas that come from brains in skulls
(2) the idea that physical implementations of anything are generally irrelevant, except when speaking in contextual terms
(3) speaking of context, this paper was written before Chalmers became a 'professional' academic, when he was still in Tucson, AZ, and his ideas were rawer than what you see in later works. I do not know if he had spiritually separated from Hofstadter at that point.
"Especially if the best you can offer is that it's 'coherent and cannot be conclusively ruled out.'"
You mean "he can offer"?
"...but nothing useful comes from the game."
I see lots of utility but right now it is philosophy that is in question on this blog.
- Because of changes in our sciences, philosophers don't want to continue to speak of objective things (e.g. "something exists" rather than "For x, something exists")
- Because of changes in our sciences, philosophers don't want to continue to speak of dualism and physicalism etc... It's all information, and the Gospel, Torah, Koran, Western science, Eastern Zen, Numerology, Astrology, and even the thoughts of Jared Loughner constitute information, and should be taken seriously.
OneDayMore, the immediate utility of taking all information seriously is that we will probably discover things more quickly. For instance, if you buy even a bit of information theory, and conclude that as humans created machines, so we may have been created, you immediately understand that spiritual connections are amoral.
If you buy even a bit of the idea that consciousness is a universal thing, not arbitrarily assigned to humans and things that act like humans, then you immediately understand that consciousness must in fact reduce to a geometric point (otherwise you have the same brain-in-vat-vs.-skull problems). If this is what Chalmers, Kripke et al. are talking about when they speak of "centered worlds," then I guess they get it, too.
The argument for using Bayes factors is that significance testing produces too many false alarms. But the opposite critique is that significance testing too often only considers the probability of false alarms (alpha) and ignores the probability of missed detections (beta), which can be catastrophic when it comes to screening for bombs or cancer.
Here's an illustration:
http://www.danielsoper.com/statcalc/img/F-Distribution.jpg
When the null and alternative distributions are close, setting a small alpha of 5% causes a huge beta.
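For anyone who wants to reproduce that numerically, here's a minimal sketch assuming a one-sided test of a hypothetical null N(0, 1) against a nearby alternative N(1, 1):

```python
# When the null and alternative overlap heavily, a small alpha forces a large beta.
# Hypothetical setup: null N(0, 1) vs. alternative N(1, 1), one-sided test.
from scipy.stats import norm

alpha = 0.05
cutoff = norm.ppf(1 - alpha, loc=0, scale=1)  # ~1.645: reject the null above this point
beta = norm.cdf(cutoff, loc=1, scale=1)       # P(fail to reject | alternative is true)

print(f"rejection cutoff         = {cutoff:.3f}")
print(f"beta (missed detections) = {beta:.3f}")  # ~0.74: most real effects are missed
```

With these made-up numbers, holding alpha at 5% means missing roughly three out of four real effects, which is exactly the trade-off that matters when a miss is a bomb or a tumor.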
If the thing that undergirds physical reality is either a god, a devil, a computer, or nothing, and (per Chalmers's argument) we cannot ever know which, then you have a nonsensical, untestable mush of a metaphysics. You may call this information, but I call it noise.
To assume that because we make machines, something larger could make us is the crudest form of Intelligent Design, and thoroughly debunkable. Hume did it even before Darwin wrote the Origin.
I would like to know how you prevent an infinite regress of brains in vats within brains in vats, because each level would meet the test of "coherent and cannot be conclusively ruled out." So there could just as easily be a million brains in vats hallucinating a million realities as one.
Finally, Dave, you take way too much liberty with scientific perspectivism. It certainly doesn't agree with the notion of sentient corn dogs.
@OneDayMore
I like 'mush of a metaphysics', and do need to elaborate. True, there is much noise. The idea of information is that it is sent by an entity and received by another entity. Any information that cannot be effectively processed by a receiving entity is noise.
Somewhat neutral on infinite regress. Why do you want to prevent it?
Likewise I think scientific perspectivism doesn't disagree, but is neutral about those now-unconscious and moldy corn dogs that need to be tossed.
DaveS
ReplyDelete"Because of changes in our sciences, philosophers don't want to continue to speak of of dualism and physicalism etc... "
Isn't David Chalmers a Philosopher who speaks of dualism?
"Somewhat neutral on infinite regress. Why do you want to prevent it?"
Not all types of infinite regress are equal. So while I don't necessarily have a problem with infinite, unbounded existence (even though the Big Bang makes it seem unlikely), I do think infinite regress is a damning problem for Chalmers's essay on the Matrix. I'd say the word that pretty much ends it for him is: "Likely." It may be coherent and not conclusively refutable that I might be a brain in a vat, but it is by all other criteria profoundly unlikely. Yet if we are to discount likelihood in our speculation (as Chalmers insists we must), then I might just be a brain in a vat in the lab of an alien who is actually a brain in a vat in another alien's lab, and on and on. Clearly each subsequent level of reality becomes less Likely. It is less likely that the aliens would have a reason to create illusions in brains within illusions of brains, and less likely that they would have the ability to create accurate simulations on level after level. (Lord knows we'll be having this discussion about "Inception" any day now.) So, if you in any way think it's silly to suggest that I am the millionth brain in the millionth vat, you must also agree that it is silly to posit the first level of unlikelihood.