About Rationally Speaking


Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please note that the contents of this blog can be reprinted under the standard Creative Commons license.

Monday, April 11, 2011

Ray Kurzweil and the Singularity: visionary genius or pseudoscientific crank?

by Massimo Pigliucci
Everyone keeps telling me that Ray Kurzweil is a genius. Kurzweil has certainly racked up a number of impressive accomplishments. Kurzweil Computer Products produced the first omni-font optical character recognition system (developed by designer-engineer Richard Brown) in the mid-’70s, and another of his companies, Kurzweil Music Systems, put out a music synthesizer in 1984. A few years later Kurzweil Applied Intelligence (the guy really likes to see his name in print) designed a computerized speech recognition system further developed by Kurzweil Educational Systems (see what I mean?) for assistance to the disabled. Other ventures include Kurzweil's Cybernetic Poet, Kurzweil Adaptive Technologies, and the Kurzweil National Federation of the Blind Reader. In short, the man has a good sense of business and self-promotion — and there is nothing wrong with either (within limits).
However, the reason I’m writing about him is his more recent, and far more widely publicized, role as a futurist, and in particular as a major mover behind the Singularitarian movement, a strange creature that has been hailed both as a visionary view of the future and as a secular religion. Another major supporter of Singularitarianism is philosopher David Chalmers, by whom I am equally underwhelmed, and whom I will take on directly in the near future in the more rarefied realm of academic publications.
Back to Kurzweil. I’m not the only one to have significant problems with him (and with the idea of futurism in general, particularly considering the uncanny ability of futurists to get things spectacularly and consistently wrong). John Rennie, for instance, published a scathing analysis of Kurzweil’s much touted “prophecies,” debunking them and showing that the guy has alternately been either wrong or trivial in his earlier visions of the future.
(Here is a taste of what Rennie documented, from a “prediction” Kurzweil made in 2005: “By 2010 computers will disappear. They'll be so small, they'll be embedded in our clothing, in our environment. Images will be written directly to our retina, providing full-immersion virtual reality, augmented real reality. We'll be interacting with virtual personalities.” Oops. Apparently the guy doesn’t know that one should never, ever make predictions that might turn out to be incorrect in one’s own lifetime. Even the Seventh Day Adventists have learned that lesson!)
It is pretty much impossible to take on the full Kurzweil literary production; the guy writes faster than even I can, and I fully expect a book-length rebuttal to this post, if he comes across it in cyberspace (the man leaves no rebuttal unrebutted). Instead, I will focus on a single detailed essay he wrote entitled “Superintelligence and Singularity,” which was originally published as chapter 1 of his The Singularity is Near (Viking, 2005), and has been reprinted in an otherwise insightful collection edited by Susan Schneider, Science Fiction and Philosophy.
In the essay in question, Kurzweil begins by telling us that he gradually became aware of the coming Singularity, in a process that, somewhat peculiarly, he describes as a “progressive awakening” — a phrase with decidedly religious overtones. He defines the Singularity as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” Well, by that definition, we have been through several “singularities” already, as technology has often rapidly and irreversibly transformed our lives.
The major piece of evidence for Singularitarianism is what “I [Kurzweil] have called the law of accelerating returns (the inherent acceleration of the rate of evolution, with technological evolution as a continuation of biological evolution).” He continues by acknowledging that he understands how so many people don’t get it, because, you see, “after all, it took me forty years to be able to see what was right in front of me.” Thank goodness he didn’t call it the Kurzweil law of accelerating returns, though I’m sure the temptation was strong.
Irritating pomposity aside, the first obvious serious objection is that technological “evolution” is in no logical way a continuation of biological evolution — the word “evolution” here being applied with completely different meanings. And besides, there is no scientifically sensible way in which biological evolution has been accelerating over the several billion years of its operation on our planet. So much for scientific accuracy and logical consistency.
Kurzweil proceeds with a simple lesson meant to impress on us the real power of exponential growth, which he claims characterizes technological “evolution.” If you check out the original essay, however, you will notice that all of the diagrams he presents to make his case (Figs. 1-6) are simply made up by Kurzweil, either because they do not show any actual data (Fig. 1) or because they are an arbitrary assemblage of “canonical milestones” lined up so as to show that there has been progress in the history of the universe (Figs. 2-6).
For instance, in Fig. 6 we have a single temporal line where “Milky Way,” “first flowering plants,” “differentiation of human DNA type” (whatever that means), and “rock art” are nicely lined up to fill the gaps between the origin of life on earth and the invention of the transistor. In Figs. 3 and 4, a “countdown to Singularity” is illustrated by a patchwork of evolutionary and cultural events, from the origin of reptiles to the invention of art, again to give the impression that — what? There was a PLAN? That the Singularity was inherent in the very fabric of the universe?
Now, here is a bit that will give you an idea of why some people think of Singularitarianism as a secular religion: “The Singularity will allow us to transcend [the] limitations of our biological bodies and brains. We will gain power over our fates. Our mortality will be in our own hands. We will be able to live as long as we want.” Indeed, Fig. 2 of that essay shows a progression through (again, entirely arbitrary) six “epochs,” with the next one (#5) occurring when there will be a merger between technological and human intelligence (somehow, a good thing), and the last one (#6) labeled as nothing less than “the universe wakes up” — a nonsensical outcome further described as “patterns of matter and energy in the universe becom[ing] saturated with intelligence processes and knowledge.” This isn’t just science fiction, it is bad science fiction.
There are several unintentionally delightfully ironic sentences scattered throughout Kurzweil’s essay: “The future is widely misunderstood ... The future will be far more surprising than most people realize,” etc. Of course, “most” people doesn’t include our futurist genius, despite the fact that he has already been wrong about the future, and spectacularly so.
And then there is pure nonsense on stilts: “we are doubling the paradigm-shift rate every decade.” What does that even mean? Paradigm shifts in philosophy of science (a la Thomas Kuhn) are a fairly well understood — if controversial — concept. But outside of it, the phrase has simply come to mean any arbitrarily defined major change. Again, not the stuff of rigorous analysis, much less of serious prediction.
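For concreteness, here is the compounding that claim implies, as a minimal Python sketch; the starting rate of one “paradigm shift” per decade is assumed purely for illustration, since Kurzweil never defines the unit:
    # Compounding implied by "doubling the paradigm-shift rate every decade."
    # The unit ("paradigm shifts per decade") is Kurzweil's, and undefined;
    # the starting rate of 1.0 is an arbitrary illustrative assumption.
    rate = 1.0
    for decade in range(1, 11):
        rate *= 2
        print(f"after {decade * 10} years: {rate:g}x the initial rate")
    # After ten decades the rate is 2**10 = 1024 times the starting value;
    # without a measurable unit, no such curve can be checked against data.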
And there is more, much more: “a serious assessment of the history of technology reveals that technological change is exponential. Exponential growth is a feature of any evolutionary process.” First, it is highly questionable that one can even measure “technological change” on a coherent uniform scale. Yes, we can plot the rate of, say, increase in microprocessor speed, but that is but one aspect of “technological change.” As for the idea that any evolutionary process features exponential growth, I don’t know where Kurzweil got it, but it is simply wrong, for one thing because biological evolution does not have any such feature — as any student of Biology 101 ought to know.
Kurzweil’s ignorance of evolution is manifested again a bit later, when he claims — without argument, as usual — that “Evolution is a process of creating patterns of increasing order. ... It’s the evolution of patterns that constitutes the ultimate story of the world. ... Each stage or epoch uses the information-processing methods of the previous epoch to create the next.” I swear, I was fully expecting a scholarly reference to Deepak Chopra at the end of that sentence. Again, “evolution” is a highly heterogeneous term that picks out completely different concepts, such as cosmic “evolution” (actually just change over time), biological evolution (which does have to do with the creation of order, but not in Kurzweil’s blatantly teleological sense), and technological “evolution” (which is certainly yet another beast altogether, since it requires intelligent design). And what on earth does it mean that each epoch uses the “methods” of the previous one to “create” the next one? Techno-mystical babble this is.
In his description of the progression between the six epochs, Kurzweil dances a bit too close to the infamous anthropic principle, when he says “The rules of our universe and the balance of the physical constants ... are so exquisitely, delicately and exactly appropriate ... that one wonders how such an extraordinary unlikely situation came about.” Can you say Intelligent Design, Ray? This of course had to follow a paragraph including the following sentence: “we do know that atomic structures store and represent discrete information.” Well, only if one adopts such a general definition of “information” that the word entirely loses meaning. Unless of course one has to force the incredibly chaotic and contingent history of the universe into six nicely lined up epochs that start with “Physics and Chemistry: information in atomic structures.”
The jump from epoch 2 (biology and DNA) to 3 (brains) is an almost comical reincarnation of the old scala naturae, the great chain of being that ascended from minute particles and minerals (Kurzweil’s physics and chemistry age) to plants (epoch 2), animals (epoch 3), humans (epoch 4) and... Well, that’s where things diverge, of course. Instead of angels and god we have, respectively, human-computer hybrids and the Singularity. The parallels are so obvious that I can’t understand why it took me forty years to see them (it didn’t really, it all came to me in a rapid flash of awakening).
Where does Kurzweil get his hard data for the various diagrams purportedly showing this cosmic progression through his new scala naturae? Fortunately for later scholars, he tells us: the Encyclopedia Britannica, the American Museum of Natural History (presumably one of those posters about the history of the universe they sell in their gift shop), and Carl Sagan’s cosmic calendar (which Sagan used as a metaphor to convey a sense of the passing of cosmic time in his popular book, The Dragons of Eden). I bow to the depth of Kurzweil’s scholarship.
And finally, we get to stage 6, when the universe “wakes up.” How is this going to happen? Easy: “[the universe] will achieve this by reorganizing matter and energy to provide an optimal level of computation to spread out from its origin on Earth.” Besides the obvious objection that there is no scientific substance at all to phrases like the universe reorganizing matter and energy (I mean, the universe is matter and energy), what on earth could one possibly mean by “optimal level of computation”? Optimal for whom? To what end? Oh, and for this to happen, Kurzweil at least realizes, “information” would have to somehow overcome the limit imposed by the General Theory of Relativity on how fast anything can travel, i.e., the speed of light. Kurzweil here allows himself a bit of restraint: “Circumventing this limit has to be regarded as highly speculative.” No, dude, it ain’t just speculative, it would amount to a major violation of a law of nature. You know, the sort of thing David Hume labeled “miracles.” (Channeling Sagan’s version of Hume’s dictum: extraordinary claims require extraordinary evidence.)
Would you like (another) taste of just how “speculative” Kurzweil can get? I’m glad you asked: “When scientists become a million times more intelligent and operate a million times faster, an hour would result in a century of progress (in today’s terms) ... Ultimately, the entire universe will become saturated with our intelligence. This is the destiny of the universe.” Oh? The universe has a destiny? And, pray, who laid that out?
At this point I think the reader who has been patient enough with me will have gotten a better idea of why I think Kurzweil is a crank (that, and the fact that his latest book, Transcend: Nine Steps to Living Well Forever, is co-authored with Terry Grossman, who is a proponent of homeopathic cures — nobody told Ray that homeopathy is quackery?).
Allow me, however, to conclude with what may well turn out to be a knockdown argument against the Singularity. As we have seen, the whole idea is that human beings will merge with machines during the ongoing process of ever accelerating evolution, an event that will eventually lead to the universe awakening to itself, or something like that. Now here is the crucial question: how come this has not happened already?
To appreciate the power of this argument you may want to refresh your memory about the Fermi Paradox, a serious (though in that case, not a knockdown) argument against the possibility of extraterrestrial intelligent life. The story goes that physicist Enrico Fermi (the inventor of the first nuclear reactor) was having lunch with some colleagues, back in 1950. His companions were waxing poetic about the possibility, indeed the high likelihood, that the galaxy is teeming with intelligent life forms. To which Fermi asked something along the lines of: “Well, where are they, then?”
The idea is that even under very pessimistic (i.e., very un-Kurzweil-like) expectations about how quickly an intelligent civilization would spread across the galaxy (without even violating the speed of light limit!), and given the mind-boggling length of time the galaxy has already existed, it becomes difficult (though, again, not impossible) to explain why we haven’t seen the darn aliens yet.
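Here, as a minimal sketch, is the back-of-the-envelope arithmetic behind Fermi’s point; all figures are round numbers, and the crawling expansion speed of 0.1% of light speed is a deliberately pessimistic assumption:
    # Back-of-the-envelope version of the Fermi argument. All figures are
    # round numbers; the 0.001c expansion speed is deliberately pessimistic.
    galaxy_diameter_ly = 100_000    # Milky Way diameter, in light years
    galaxy_age_yr = 10e9            # conservative age of the galaxy, in years
    expansion_speed_c = 0.001       # colonization wavefront, fraction of c

    crossing_time_yr = galaxy_diameter_ly / expansion_speed_c
    print(f"time to cross the galaxy: {crossing_time_yr:.0e} years")
    print(f"fraction of galactic age: {crossing_time_yr / galaxy_age_yr:.1%}")
    # About 1e8 years, roughly 1% of the galaxy's age: even a slow-moving
    # civilization has had time to spread many times over. So, where are they?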
Now, translate that to Kurzweil’s much more optimistic predictions about the Singularity (which allegedly will occur around 2045, conveniently just a bit after Kurzweil’s expected demise, given that he is 63 at the time of this writing). Considering that there is no particular reason to think that planet earth, or the human species, has to be the one destined to trigger the big event, why is it that the universe hasn’t already “awakened” as a result of a Singularity occurring somewhere else at some other time? Call that the, oh, I don’t know, “Pigliucci paradox.” It has a nice ring to it.

127 comments:

  1. Kurzweil is just one "proponent" of a technological Singularity; see "Three Major Singularity Schools".

    Replies
    1. This comment has been removed by the author.

  2. Note that Chalmers says that the 'intelligence explosion' type of singularity is plausible, and doesn't have much to say about Kurzweil's 'accelerating change' type of singularity. See XiXiDu's link on 'Three Major Singularity Schools.'

    I don't have much interest in a Kurzweilian singularity, but I take an intelligence explosion singularity very seriously.

  3. Luke,

    apparently Chalmers actually does have a lot of interest in Kurzweil, since Chalmers' 2010 technical paper (which I will criticize later this year in a philosophical forum) abundantly cites Kurzweil. It is also unclear to me how one could have an intelligence explosion without an accelerating change.

  4. Your post was spot on; I think you nailed essentially all of it.

    I second Luke & XiXiDu, however, in that you have successfully argued only against the silliest person in the room. I wish you'd tackle the intelligence explosion idea (Yudkowsky et al.), because I think it's probably wrong (implausible conclusion), but where exactly the flaw in reasoning is, is not clear - all the individual steps seem to add up. One is left with the feeling that "that can't be right!", but incredulity in itself is no argument, as we know from the creationists.

  5. Hmm, I'll look further into it, though Kurzweil is by far the most visible person in the room (see him making the cover of Time magazine recently, as well as a documentary on him due out soon).

    As for Yudkowsky, I wasn't impressed by him either, as clearly shown by this video debate: http://goo.gl/oavJz

  6. I am myself rather skeptical that humanity will quickly approach any kind of technological Singularity but allow me to direct your audience to some comprehensible material:

    What should a reasonable person believe about the Singularity?

    Why an Intelligence Explosion is Probable

    Much more can be found here. It all looks like science fiction but there is a lot more to it than meets the eye. You and other people like P.Z. Myers usually only target Kurzweil as he gets the most press but there are others who take this topic much more seriously. People like Luke or Eliezer Yudkowsky think that unfriendly machine superintelligence is the most plausible cause of total human extinction.

  7. Re: the Yudkowsky debate. Don't take this the wrong way, Massimo, but although in a rhetorical sense you came out on top in that debate, I don't think you really engaged with him much at all. You spent most of the debate arguing (ad hoc and incoherently, as far as I can tell) against functionalist philosophy of mind, and stubbornly conflating intelligence with consciousness-in-the-sense-of-qualia.

    Again, I agree with you most of the time & I love to hear your thoughts on all subjects, but on that particular occasion you didn't just seem wrong, you seemed like you didn't get it at all.

    No, dude, it ain’t just speculative, it would amount to a major violation of a law of nature.

    Lately I've been catching up on the (reimagined) Battlestar Galactica TV series (having somehow missed it all these years), and I've been slowly burning with envy over that FTL drive (i.e. the technology that enables their ships to "jump", or reach destinations in space at faster-than-light speeds). Like the warp drive on Star Trek, it's just too cool (which I don't say about all sci-fi tech).

    And, as it turns out, FTL is a serious research topic in theoretical physics, and proposals of "uncertain plausibility" do exist.

    But, given the hurdles that you allude to, a big healthy dose of skepticism seems in order here.

  9. Massimo,

    This is minor, but I was a bit confused by your use of "teleonomy." You seem to criticize Kurzweil's "blatantly teleonomic sense" of biological evolution, but the link you include describes "teleonomy" as a term coined by biologists specifically to avoid thinking teleologically about evolution, while retaining some concept of "purpose." Is it your opinion that "teleonomy" is just teleology renamed? That's fine, but the wikipedia article you link to does not clearly endorse that position, and in fact, with its reference to Dawkins, seems to suggest that the concept of teleonomy has been accepted by at least some prominent biologists. I'm not familiar with the term myself, so I can't judge the accuracy of the Wikipedia article.

    Furthermore, it seems to me that Kurzweil is being blatantly teleological -- so I'm not sure why you didn't just use that term.

  10. "As for Yudkosky, I wasn't impressed by him either, as clearly shown by this video debate: http://goo.gl/oavJz"

    Only now have I realized who you are. I still don't quite get what you were trying to argue for in that discussion. Do you doubt substrate neutrality or simply believe that the human brain has some unique characteristics that can't be simulated efficiently by a digital computer?

  11. I wonder what Ray's fanboys will say when he dies pretty much according to schedule (the actuarial tables).

  12. Scott,

    good point, I did mean teleology, and I have corrected the link.

    Ian,

    ouch. Well, at least I trumped Yudkowsky rhetorically! More seriously, I obviously don't think that my argument was incoherent at all. What I regard as incoherent is the idea that the mind can be uploaded - at least as presented. I never said something like that cannot be done in principle, I just don't think transhumanists have done anywhere near the work that it takes in both philosophy of mind and cognitive science to even begin to show that it is possible.

    XiXiDu,

    I make the distinction between simulating X and X itself, as the example of photosynthesis shows. I am also suspicious of substrate neutrality. My analogy there is with talk of non-carbon based life forms, for instance life using silicon. It turns out that silicon has very limited chemical characteristics compared to carbon. That doesn't mean "life" is impossible if based on silicon, but it does mean that it is far from obvious that it would be possible.

  13. Massimo Pigliucci,

    I don't think that anyone would claim that a copy of X is X itself; there is always a difference, even if only in spatiotemporal position. The important question to ask is what constitutes the nature of "self". What is it that we value about ourselves, what constitutes our identity?

    Imagine some ultra-advanced alien species came along and offered to beam you up to their spaceship. Let's just assume you would really want to visit their spaceship. Let's also assume that you knew that they were trustworthy and capable of whatever they claimed they could do. They told you that the only way to visit their ship was to copy you and print you out again on their ship. In the process of copying, you would be instantly killed (unnoticeably) and perfectly reassembled on their ship (same kind of atoms but different ones). Just assume those aliens used advanced femtotechnology.

    If you are troubled by this procedure, can you pinpoint exactly what is the problem and explain why it is reasonable to be troubled?

    -----

    Regarding substrate neutrality I agree that for example the noisiness and patchwork architecture of the human brain might play a significant role. There are many possibilities here. But even if that is the case, why wouldn't we be able to mimic it as well and then go on to fix its flaws? Also, is it reasonable to believe that this is the case?

  14. "Let's also assume that you knew that they were trustworthy and capable of whatever they claimed they could do."
    Let's also assume that the trustworthy never make mistakes and are never exposed to the vicissitudes of accident.

  15. I’m not troubled by the assumption that identity is a particular arrangement of matter; what remains to be seen, though, is the possibility of fully knowing that arrangement, let alone how to duplicate it (or knowing if it can be duplicated). I’m not quite convinced that some things aren’t unknowable and/or unduplicatable, and that this might not be one of those things. What evolution has produced – human consciousness, intelligence, etc. – may not be recreatable with a human skill set. (Flame on.)

  16. XiXiDu,

    I believe Massimo's argument is that a simulation might recreate the information processing capabilities of a mind but, being non-biological, would remain in some important respect different. The analogy he gives is that a simulation of photosynthesis would not convert any actual light to actual oxygen, so perhaps a simulation of a mind would not be the same as an actual mind.

    It seems like a plain old dualist argument to me.

    Replies
    1. Actually your own argument is dualistic. Do you think the mind is in some special Platonic plane and that it can just be recreated with purely informational processes? If you are a physicalist, then you cannot just think a mind on a whole other substrate will be the same, unless you want to throw in functionalism and things like multiple realizability into the mix.

      http://en.wikipedia.org/wiki/Functionalism_(philosophy_of_mind)#Multiple_realizability

      http://en.wikipedia.org/wiki/Multiple_realizability

    2. And your argument implies plain old multiple realizability, which some rightly argue is dualistic or not compatible with physicalism.

  17. I don't buy any of the major Singularity claims (for various reasons that wander off-topic), but I will agree that you were picking on the easy target in Kurzweil's popular work. I won't fault you on that though; the man's work is absurdly ubiquitous, even among people who prize the rigor of science and should know better, and he needs to be either taken down a notch or pushed to be more careful about the kinds of claim he peddles.

    Because I can't resist: I still think that your claim about the X/simulation-of-X distinction is misplaced, or at least the wrong grounds upon which to argue. Talking about whether consciousness is the same as a simulation of consciousness seems to be begging the question, insofar as the issue only arises in the first place if a functionalist view is wrong. The core idea behind computational perspectives is (or appears to me to be) that there is no coherent experiential or mental type of thing which is distinct from informational types of things (or else that experiences supervene directly onto the computational processes themselves), in which case there is no category for consciousness to belong to that does not possess the property of simulation-equals-the-thing-itself. I don't think that a functionalist is likely to be persuaded by anything other than an argument directly for separating experience from computation. And I don't think that introspective observation (qualia and so on) is liable to do the trick. I mean, when I look at something that's red, I can't conceive of what type of computation could possibly comprise that experience. But to me, that only says that I don't have internal access to the way in which sensory data is tagged with "redness", not that the mysterious experience of redness contains useful information about the fundamental nature thereof.

    But I also will say that functionalism doesn't necessarily imply that minds can be readily abstracted from the brain in the way that most transhumanists would expect. Since brains are not designed in an entirely modular fashion, and are in many ways a sort of analog wetware, a "sufficiently precise" copy of a brain to produce a recognizable personality may not be possible, at least not in a compressed form that is significantly smaller or faster than the original brain itself. We like to think of neurons as simple input/output components, like transistors, but transparently they do not function in the same way.

    I mean, we still cannot produce a non-living object that functions precisely as a heart would in the body (although we can produce approximations that temporarily sustain life). Nor are we particularly adept at swapping out even naturally produced hearts. In part this is because evolution had no reason to design the heart to be a replaceable, modular object. And yet what the heart does, the design specifications it has to fulfill, those are relatively simple compared to those of even small parts of the brain.

  18. With reference to the Yudkowsky debate, ianpollock wrote to Massimo that "on that particular occasion you didn't just seem wrong, you seemed like you didn't get it at all." I felt the same way—about Yudkowsky.

  19. You likely can't duplicate the purposive elements of our motivating forces, since they come from and are activated by our extremely complex ever changing mix of sensory awareness of the ever ongoing changes in the totality of our accessible environment. Or not.

  20. Minor correction Massimo, it's Seventh Day Adventists.

    Other than that, bravissimo.

  21. Perhaps I was too harsh if Nick Barrowman and others got the opposite impression, and certainly Yudkowsky didn't do well in that debate getting his ideas across either, else it would have been more productive.

    I guess it was primarily annoying because you guys barely talked about the singularity itself (which I would really love to see criticized well), because you couldn't accept Yudkowsky's functionalist assumptions about mind - assumptions that, I have to admit, seem crashingly obvious to me (the alternatives being dualism or something an awful lot like zombieism).

    Instead we got this game of burden-of-proof hot potato, leading nowhere. So maybe a more productive debate would occur, if ever you had the inclination, by provisionally granting functionalism and criticizing EY's ideas from there? Also, I suggest agreeing on a rough definition of "intelligence" first.

  22. While this is nothing I did not know already at least in principle, thanks for reading through that drivel and digging out these gems! Really good stuff, and a joy to read.

    Everyone keeps telling me that Ray Kurzweil is a genius.

    Well, maybe he is. The sad truth is that even very intelligent people can be cranks or loons.

    Techno-mystical babble this is.

    QFT!

    When scientists become a million times more intelligent and operate a million times faster, an hour would result in a century of progress (in today’s terms)

    I heard this one from a colleague recently who was impressed by Kurzweil. My objection is always that while someone who is really clever, has a lot of knowledge, and thinks really fast may perhaps speed up progress in some humanities-type endeavors, the speed of progress in science and engineering is ultimately limited by having to do experiments, testing ideas against the real world, and building and refining prototypes. And a self-improving intelligence would always have to design the prototype of its next iteration and see if it actually works to satisfaction. This blows the concept of the technological singularity out of the water right there, even before considering whether there might not be physical limits to computing speed.

    (And both for those and the humanities there is of course the added problem of actually making any use of the progress. What good is it even if, hypothetically, some artificial superbrain has found the universal formula, the solution to some grand philosophical puzzle, a cure for all cancer or the blueprint for a fusion reactor if it still takes us 20 years to understand the first two and to technologically implement the other two, perhaps because the materials are just not there yet? Just generating knowledge is as useless as the Alexandrian elevators and steam engines in antiquity. To be technological progress, it has to be applied.)

    Fermi Paradox

    My take is still that as far as we currently know, it may well be impossible in principle to fly to another star system and survive the passage. Not impossible as in we could not reach the moon 200 years ago and now we can, but impossible as in we cannot eat 1 mg of plutonium and expect to live. Oh, that and the fact that we are too darn stupid and living too unsustainably. It may well be that it is impossible in principle to build a high-tech civilization that is long-term sustainable, because you need high population densities for that, and those do not go well with a sustainable use of resources.

  23. Per the idea of Massimo choosing an easy target, there is an atheist philosopher at my alma mater who took Kurzweil so seriously there was a faculty debate put on about his stupid book 'The Singularity Is Near.'

    A physicist, a mathematician, and two philosophers (one the aforementioned) went back and forth on its merits.

    While I was in attendance (three years) we didn't have faculty debates about anything except for Kurzweil.

    If he has that kind of sway over an atheist, and even just one college has had no official, inter-departmental debates about anything except his book for three years, Massimo is on point.

  24. Harry, I agree, I don't object to criticizing Kurzweil. He needs some serious deflation, and M did a brilliant job here. All I object to is if somebody thinks that they have thereby refuted the more coherent singularity scenarios by proxy.

  25. In my view, our own self-engineering processes are already a million times more intelligent than our cognitive processes - which have the less rigorous duty of feeding, clothing and protecting our master works from harm. So it would seem that we already have the ability to acquire as much as we need to survive, and our engineering strategies see no value in increasing the aptitudes of their household staff. What's been done so far may have passed the turning point where extra brain powers do more harm than good to the establishment.

  26. Hello Massimo, I just wanted to respond to your recent article on futurist and Singularitarian Ray Kurzweil. I’ve been reading a lot of criticism of Kurzweil lately, and to give you credit yours seems to be a bit more reasonable. However, I must disagree with some of what you say. Firstly, I agree with you about Kurzweil being wrong in some cases about predicting future technologies. Predicting, in 2005 no less, that by 2010 computers as we know them would disappear was far-fetched. Kurzweil has also been wrong on several other predictions. Does this negate him or make him un-credible? I don’t believe so. Kurzweil isn’t pretending to be some prophet getting his information from some supernatural source. He is a human, just like all of us, and he is bound to make mistakes. Certainly it isn’t fair to use this against him. A big step would be if he admitted he made a mistake, but that is another story.

    Having read “The Singularity is near”, I agree that a lot of the wording in the book is kind of hard to decipher. “Epochs” and what not, I don’t know what he means either. My interpretation of the book was just his own personal spin on a popular philosophical concept of the singularity. I think that it would be sort of presumptuous of us to assume that, at no time in the future, will we have the technology to overcome the limit of the speed of information.

    I agree with you: biology and evolution do not include “exponential growth”. That isn’t how things work in biology. Clearly Kurzweil is wrong here. However, as far as technology goes this doesn’t seem to be the case. It is true that, as of now, our ability to track and determine the change of technological advancement is not that great. However, this is simply because the idea isn’t that common. Moore’s law does this, and there are also many technological advancements related to Moore’s law that seem to be growing exponentially: things like LEDs, megapixels, and several others, I think.

    You understand what the singularity is, and what its proponents believe. I can’t fault you there. That is good, since most of its critics don’t. I also agree that Kurzweil is clearly a bit “far out” on a lot of things. However, I don’t think that this is an argument against the idea of the singularity. I think that it would be wrong to attempt to attack the idea of the singularity based on one of its proponents, though I’m not saying that is necessarily what you were doing.

    I like your “Pigliucci paradox” argument but, to be sure, it doesn’t refute the singularity any more than Fermi’s paradox refutes extraterrestrial life. To answer the question, why has the singularity not already happened? I can not say. Nor can anyone answer this any more than they can answer Fermi’s paradox. They are one and the same, if you think about it. As you know, per your wikipedia article, there are dozens of explanations for the Fermi paradox. Perhaps alien life does not exist at all (I don’t believe this), perhaps it is incredibly rare, perhaps it has destroyed itself before reaching the singularity. Or, perhaps alien life doesn’t think like we humans do and never reached the singularity (not due to their lack of technological advancement, but simply because they didn’t develop the necessary technologies due to some inherent inability in the way their brains work). Who knows? Certainly not you or I. I just don’t think that it is a good argument.

    Let me ask you, do you believe that technology will continue to advance? Do you believe that we will develop A.I. at any point in the future? Do you believe that A.I. will be able to produce more sophisticated A.I., and so on and so forth?

  27. Ian, point taken.

  28. The idea of downloading your 'self' into a machine is of course ridiculous from the outset for a variety of reasons, but the merger of humans and technology is no pipe dream. We need do nothing more than extend our current trends. Cell phones, smart phones and their ilk are ubiquitous and constantly improving. It is not inconceivable that one day an information network will stretch across the entire planet. I read a story just the other day about progress on a bionic eye. There is no reason to think that you could not receive information from the global network and have it fed through an implant directly into your brain. Everyone could have an on board phone/GPS/video-audio feed. I suspect this technology will come in by way of the handicapped who are very motivated to have it because they have no good alternatives.

    Where Kurzweil gets a bit too optimistic is in thinking this won't create a two-tier society (one of your nightmares, I believe, Massimo). He apparently believes that adoption of this technology won't meet resistance. I am not so sure.

    To say that the universe does not have a destiny is to embrace the concept of free will, which I believe has been discredited. There is certainly a destiny, we just don't know what it is. Except for the very end which the cosmologists tell us is a thin cold haze.

    You can't get on Kurzweil's case about breaking the speed of light limit without also mentioning the Great Sagan. He used a similar physics breaking interstellar travel method in Contact. You will see very little science fiction that does not use a similar device. So if Kurzweil is speculating there he is in good company.

    What we know is that information technology is creating new levels of complexity by joining many previously isolated elements. What emerges from that complexity (if anything) is anyone's guess, but might well be something no one has guessed.

  29. Tim LaHaye has also been wrong on several other predictions. Does this negate him or make him un-credible? I don’t believe so.

    Alan Greenspan has also been wrong on several other predictions. Does this negate him or make him un-credible? I don’t believe so.

    Mohammed Saeed al-Sahaf has also been wrong on several other predictions. Does this negate him or make him un-credible? I don’t believe so.

  30. Dustin,

    > do you believe that technology will continue to advance? Do you believe that we will develop A.I. at any point in the future? Do you believe that A.I. will be able to produce more sophisticated A.I., and so on and so forth? <

    Yes, technology will advance, at least for some time. It is possible that we will develop AI, but far from sure, and we don't know what form that will take. AI may or may not be able to develop more sophisticated AI, and very likely there are limits, so the "so on and so forth" is eventually going to stop.

    XiXiDu,

    > I don't think that anyone would claim that a copy of X is X itself <

    That wasn't my point. I said that a *simulation* of X is not X, different issue.

    As for teleportation, yes, first of all I am troubled by it, because it sounds to me like suicide followed by the creation of a copy of me (call me McCoy, if you will). More crucially, even David Chalmers agrees that there is a disanalogy between teleportation and uploading, because uploading is usually conceived as a complete change in substrate and type of physical arrangement (you are uploading only your "software" not your body).

    AlexF,

    > It seems like a plain old dualist argument to me. <

    No more dualist than conceding that photosynthesis can happen only with certain physico-chemical substrates, or that carbon has different properties than silicon. Indeed, it is the fans of uploading who have been accused of a form of dualism, implied in the complete separability of brain "hardware" and mind "software."

    Sean (quantheory),

    > The core idea behind computational perspectives is that there is no coherent experiential or mental type of thing which is distinct from informational types of things <

    I know, and I find this to be a huge and so far unjustified leap. The only conscious systems we know are biological, and so far the computational theory of mind has really not moved much past speculation (see the abysmal failure of AI). I'm not saying it's impossible, I'm saying that too many people confidently assert what so far is pure speculation.

  31. Ian,

    > Yudkowsky's functionalist assumptions about mind - assumptions that, I have to admit, seem crashingly obvious to me (the alternatives being dualism or something an awful lot like zombieism) <

    Again, I think the dualism is on the other side, and you know what I think of zombies, so I think you are mistaken in this conclusion.

    > I suggest agreeing on a rough definition of "intelligence" first. <

    Indeed, that's the big elephant in the uploading room that they carefully mention at the beginning and then ignore altogether. They are assuming a hell of a lot about "intelligence," including that one can measure it on a relatively simple, monotonic scale, that it is one kind of "thing," that it is entirely a matter of computation, etc.

    > All I object to is if somebody thinks that they have thereby refuted the more coherent singularity scenarios by proxy. <

    I read the link to Yudkowsky that you posted, but I'm still not sure what you are driving at. The three versions of singularitarianism all seem nuts to me, albeit for slightly different reasons. What is it, exactly, that you find so compelling in Yudkowsky's as opposed to Kurzweil's?

    Thameron,

    > You can't get on Kurzweil's case about breaking the speed of light limit without also mentioning the Great Sagan. He used a similar physics breaking interstellar travel method in Contact. <

    Ahem, you do realize that Sagan was writing science fiction, while Kurzweil purports to make predictions about what will happen in the next few decades, right?

    AlexSL,

    I think there are reasonable answers to the Fermi paradox in the SETI context (and I wrote as much in Nonsense on Stilts), but not in the context of the singularity. If it is that easy, it would have happened already, and since we are talking about the cosmos "becoming aware" (whatever the hell that means), we would have noticed.

  32. Well, I was not referring to the Cosmos becoming aware, as that is plainly an incoherent concept anyway.

  33. Massimo -- ah, good, that was simpler than I thought it was! Nice post, by the way.

  34. Massimo Pigliucci,

    > That wasn't my point. I said that a *simulation* of X is not X, different issue.

    Of course it isn't. But why does that matter? Somehow you are able to identify your self, even after a general anesthesia. What is it that you value about your self and that you want to protect? What is it that would be lost if you were simulated, and what if that could be simulated as well?

    > As for teleportation, yes, first of all I am troubled by it, because it sounds to me like suicide followed by the creation of a copy of me (call me McCoy, if you will).

    I think that is simply a bias and one would be better off ignoring it. You just seem to value a certain kind of causal continuity, but how important is it? Would you care if you stopped existing for a millisecond if that way you could become young again or just earn a million dollars?

    > [...] because uploading is usually conceived as a complete change in substrate and type of physical arrangement (you are uploading only your "software" not your body).

    Do you agree that hypothetically one could describe a human being in terms of math, that one could come up with a mathematical definition of a human being? If you agree, do you also agree that one would be able to compute it? If you agree, do you also agree that the output will be the same regardless of the means used to compute it?

  35. XiXiDu,

    no need to be that formal, just call me Massimo.

    > What is it that you value about your self and that you want to protect? <

    My spatiotemporal continuity, without which it would be suicide. Let me ask you: why is it so important to you if there is a copy of you going around with your thoughts in his head?

    > I think that is simply a bias and one would be better off ignoring it <

    You are more than welcome to step into the teleporter. I'll keep using the spaceship, thank you.

    > Do you agree that hypothetically one could describe a human being in terms of math, that one could come up with a mathematical definition of a human being? <

    No, that notion to me is incoherent at worst, and at best it is far from established. And I'd like it to be securely established before I agree to any "uploading."

  36. Massimo,

    > My spatiotemporal continuity, without which it would be suicide.

    I know what you mean by spatiotemporal continuity, I guess everyone implicitly does. But if we attempt to remove some of its vagueness, then what is it that guarantees continuity without which it would be interrupted? Personally I can't pinpoint it and therefore believe that any kind of continuity is at best secondary.

    > Let me ask you: why is it so important to you if there is a copy of you going around with your thoughts in his head?

    It is not important to me but it would be a small sacrifice if copying would allow me to travel faster or gain something equally valuable by it.

    If I was going to do some mountain climbing and knew there was an identical backup, from before I started the activity, that would be "activated" in case of a fatal accident, then I wouldn't fear the annihilation of the particular reification of myself that is currently active.

    > No, that notion to me is incoherent at worst, and at best it is far from established.

    Interesting, I believe my current education isn't sufficient to continue this discussion. I don't know how something could not be subject to mathematics. If you mean that it is impossible to collect enough data about a human being due to certain physical limitations like the uncertainty principle, I understand that. But it escapes me how something could not be describable mathematically and subsequently encoded by a unique natural number.

  37. XiXiDu,

    > what is it that guarantees continuity without which it would be interrupted? <

    Spatio-temporal continuity, plus psychological continuity. Your copy would have the latter, but not the former.

    > If I was going to do some mountain climbing and knew there was an identical backup, from before I started the activity, that would be "activated" in case of a fatal accident, then I wouldn't fear the annihilation of the particular reification of myself that is currently active. <

    But the "particular reification" is you! The other is a copy. It's like saying that you don't mind dying because your twin will survive. That might be of some comfort, but it isn't you.

    > I don't know how something could not be subject to mathematics. <

    It depends on what you mean by that. Most systems are "subject to mathematics," but that doesn't mean that a mathematical simulation recreates everything *physical* that there is to recreate about the system. See the recurring example of photosynthesis in this thread: you can certainly simulate the *logic* of photosynthetic reactions in a computer, but you ain't gonna get sugar as output.

  38. Massimo,

    > Spatio-temporal continuity, plus psychological continuity. Your copy would have the latter, but not the former.

    Spatio-temporal continuity is given, since a copy is causally connected to the original just as you_today is causally connected to you_yesterday. I am not able to pinpoint in what important aspect the one causal chain that leads to psychological continuity is different from any other causal chain that leads to psychological continuity.

    > But the "particular reification" is you! The other is a copy. It's like saying that you don't mind dying because your twin will survive. That might be of some comfort, but it isn't you.

    If you were going to formalize that claim you would have to define what it means to be "you". A few days ago I wrote a short piece about that question here. In short, I don't think that the continuity of consciousness, your memories or causal history are sufficient to define your identity in a time-consistent manner.

    > See the recurring example of photosynthesis in this thread: you can certainly simulate the *logic* of photosynthetic reactions in a computer, but you ain't gonna get sugar as output.

    How is this different from saying that if one was to simulate a human being in a computer it wouldn't start to pee? It is true, but how is it relevant? Surely there would be dramatic psychological effects if one didn't simulate a highly detailed environment as well. In the end this is just a question of what is an important part of human nature and what isn't, and everything that is would have to be simulated as well.

    According to Wikipedia to simulate something means to represent certain key characteristics or behaviours of a selected physical system. Are you arguing that a human being can not be subject to any amount of abstraction? A human being doesn't need to have hair, does it? You would still be you without the ability to vocalize, wouldn't you? Where do you draw the line and why?

  39. XiXiDu,

    I wouldn't rely on Wikipedia too much for key notions concerning this debate. The Stanford Encyclopedia of Philosophy is a better starting place.

    > Spatio-temporal continuity is given as a copy is causally connected to the original just as you_today is causally connected to you_yesterday. <

    Not at all. The only being that can be spatio-temporally continuous with me is me. Again, think of your identical twin: he may act/think/feel like you, but he is not you. Why? Because there is spatio-temporal discontinuity.

    > If you were going to formalize that claim you would have to define what it means to be "you" <

    Not at all, I think the commonsensical understanding of "me" is more than sufficient for our purposes.

    > How is this different from saying that if one was to simulate a human being in a computer it wouldn't start to pee? <

    Not much, that is indeed a difference between simulating a human being and actually having a human being. The problem with sympathizers of "uploading" is that they think they can separate the "software" from the hardware, a notion that, as I said earlier, may be logically incoherent, and it certainly is very far from being demonstrated (and a form of dualism to boot).

  40. "Describable by" does not mean "reducible to", if that helps.

  41. > I don't know how something could not be subject to mathematics. <

    Describe or measure or simulate feelings, hopes, intentions, anticipations, purposes, fears, joys, happiness and sadness in mathematical terms. Then mathematically ask another mathematician to discuss the shades of meaning of those concepts mathematically.

  42. Massimo,

    > The only being that can be spatio-temporally continuous with me is me.

    As far as I am aware all of the cells that make up our body undergo division and duplication (replication). This does not happen for all cells at the same time, but would it matter if that was the case? My problem is that I don't see how a copy of me isn't causally related in the same sense (as far as it matters) that cells undergoing regeneration are spatio-temporally related. There is no causal discontinuity between me and a copy, it is just a different kind of causal chain between me_original and me_copy versus me_today and me_tomorrow. Why is that difference important?

    > Again, think of your identical twin: he may act/think/feel like you, but he is not you. Why? Because there is spatio-temporal discontinuity.

    You just seem to value a certain kind of natural continuity that I disregard as insignificant gut feeling.

    > Not at all, I think the commonsensical understanding of "me" is more than sufficient for our purposes.

    I thought the purpose was to figure out how to talk to people like Eliezer Yudkowsky and transhumanists who are trying to fathom an understanding of "self" that allows for technological advances like brain implants.

    At some point we will be able to not only create artificial hearts but computational substitutes for various brain areas. At what point are we going to say that someone died then? Many transhumanists and futurists try to use definitions of "self" that are consistent under such circumstances.

    > The problem with sympathizers or "uploading" is that they think they can separate the "software" from the hardware...

    I got no formal education and on my journey to educate myself I haven't yet arrived at computer science. So excuse me if this sounds wrong, but I just don't see that any sort of distinction between software and hardware is relevant here. The only difference between a mechanical device, a physical object and software is that the latter is the symbolic (formal language) representation of the former. Software is just the static description of the dynamic state sequence exhibited by an object. One can then use that software (algorithm) and some sort of computational hardware and evoke the same dynamic state sequence so that the machine (computer) mimics the relevant characteristics of the original object.

  43. >Again, I think the dualism is on the other side, and you know what I think of zombies, so I think you are mistaken in this conclusion.

    I don't accuse you of dualism. Rather, I think you're a lot closer to Chalmers than you imagine.

    This is because, on your view, it is possible in principle to create an entity that looks like Massimo Pigliucci and behaves indistinguishably from how Massimo Pigliucci would behave, in all possible situations, and yet has no inner experiences. The only difference from Chalmers is that this entity is not atom-by-atom identical to the original Massimo Pigliucci. But certainly your view would appear to make qualia epiphenomenal, which is indeed the Chalmers posish.

    Re: personal identity, you told XiXiDu that "the commonsensical understanding of "me" is more than sufficient for our purposes."

    I think this is wrong, and is probably the source of our disagreement. You said something similar when EY brought up the fact that the ontology of physics is over indistinguishable particles and hence (in principle) indistinguishable copies - something about personal identity being a macroscopic phenomenon. This seems to be a simple map/territory confusion. There are no "macroscopic phenomena," really, there are only macroscopic perspectives on phenomena that are ultimately governed by basic physics (whatever the final ontology of physics turns out to be, I don't necessarily want to marry myself to the current model). Your conscious experience is generated (please don't ask me how!) from basic physics, not from your folk philosophy of personal identity. If the opposite were true, then you could change whether "you" survived the teleportation by changing your philosophy of personal identity!

    ReplyDelete
  44. The fear of death motivates a lot of irrational thinking, doesn't it? But I think arrogance is also a big part of this cocktail. Let's take a second to ponder the arrogance of the singularitarian (or of the similarly motivated cryogenics-atarian). Even if it were possible to upload (defrost) yourself, on what grounds would you merit the bandwidth (rack in the toaster-oven-of-the-Gods)? Kurzweil may feel his magnificence is worth a lot of processor time, but I doubt AI++ is going to be similarly impressed. If the FUTURE is as splendid as promised, they certainly will not have use for the likes of us.

    Do I have a point other than snark? Yes. If you aren't willing to take your own death seriously, you have an inauthentic relationship to the only existence you will ever have. It leads Chalmers to places he should be embarrassed about. The poor dude is hoping to be rescued by the Matrix.

    ReplyDelete
  45. OneDayMore,

    a lot of people concerned with the possibility of a technological Singularity are actually not so much worried about their own death but rather about the future of humanity.

    ReplyDelete
  46. XiXiDu,

    I'll address your other points later, but I'm impressed by your optimism. My impression is that singularitarians are actually a bunch of narcissuses who desperately want to be immortal.

    ReplyDelete
  47. Ian to Massimo: But certainly your view would appear to make qualia epiphenomenal, which is indeed the Chalmers position.

    Massimo, say it ain't so!

    ReplyDelete
  48. jcm,

    Absolutely it ain't so, don't know where Ian got that idea. I'll address some of this later tonight, after I get out of an insanely long administrative meeting to which, as you can see, I ain't paying much attention...

    ReplyDelete
  49. I must agree with Massimo that singularitarians generally seem like self-absorbed folk. Not that this means their thinking is wrong; but if we ask not "are their views irrational?" - having concluded "yes" and moved on - but rather "why do they think like this?", I think the "bunch of narcissuses" explanation works pretty well, especially with regard to Kurzweil and Yudkowsky.

    ReplyDelete
  50. Actually, I withdraw the charge of epiphenomenalism; that was too hastily written. The other stuff is right though. M's position definitely implies quasi-zombies.

    ReplyDelete
  51. Ian,

    glad to hear you agree about epiphenomenalism. But, again, why exactly do you think that my position implies quasi-zombies? (And what are quasi-zombies, anyway? Are they zombies or not?)

    Also:

    > Your conscious experience is generated (please don't ask me how!) from basic physics, not from your folk philosophy of personal identity. If the opposite were true, then you could change whether "you" survived the teleportation by changing your philosophy of personal identity! <

    I don't see how this is relevant. Again, think back to my twin analogy: suppose your identical twin is really completely and totally identical to you, mental states included. In what sense would he *be* you?

    XiXiDu,

    > My problem is that I don't see how a copy of me isn't causally related in the same sense (as far as it matters) that cells undergoing regeneration are spatio-temporally related <

    It is, but that isn't the point. Pretty much everyone's cells are causally related in pretty much the same way to mental states and so on, so what? You are you and everyone else is everyone else.

    > You just seem to value a certain kind of natural continuity that I disregard as insignificant gut feeling. <

    I think you are grossly underestimating my point. One more time: would you commit suicide if you were assured that a completely identical twin would survive you? Because *that's* what uploading would do to you (if it were possible, which I don't think it is).

    > At some point we will be able to not only create artificial hearts but computational substitutes for various brain areas. <

    That's where we disagree. To me the brain is an organ just like the heart. Would you be happy with a simulator of your heart, instead of your actual heart?

    > The only difference between a mechanical device, a physical object and software is that the latter is the symbolic (formal language) representation of the former. <

    That's right, and human beings are physical beings, not symbolic representations of physical beings...

    ReplyDelete
  52. Interesting exchange. And this seems to be an important part:

    Massimo:

    Let me ask you: why is it so important to you if there is a copy of you going around with your thoughts in his head?

    Touché!

    XiXiDu:

    It is not important to me, but it would be a small sacrifice if copying allowed me to travel faster or to gain something equally valuable.

    If I were going to do some mountain climbing and knew there was an identical backup, from before I started the activity, that would be "activated" in case of a fatal accident, then I wouldn't fear the annihilation of the particular reification of myself that is currently active.


    The misconception here is that a copy would be YOU in any meaningful sense of the word. It wouldn't.

    Massimo:

    But the "particular reification" is you! The other is a copy. It's like saying that you don't mind dying because your twin will survive.

    Exactly. You are spatio-temporally connected to your hypothetical twin in precisely the same way you are to an uploaded copy of your mind that was produced, say, two years ago.

    Also: What OneDayMore said.

    ReplyDelete
  53. Alex, it's a pleasure to be on the same side! I appreciate your sharp thinking here. (Of course, since you agree with me ;-)

    ReplyDelete
  54. "That's where we disagree. To me the brain is an organ just like the heart. Would be happy with a simulator of your heart, instead of your actual heart?"

    In what sense, Massimo, do you mean it is "just like" the heart? I for one would not mind having an artificial heart if it could carry out its functions well.

    ReplyDelete
  55. Massimo -

    Landing somewhere between a Singularity believer and an agnostic, I obviously have many problems with your analysis and dismissal of Kurzweil's theories; however, I applaud you for teasing out some of the deeper philosophical assumptions that underpin the belief set. (Bottom line of disagreement: I don't think a teleological analysis of evolution is easily dismissed - though it need not be a religious teleology - nor do I think you can restrict evolution to a biological domain, having only to do with genes and DNA. The process metaphysics of Alfred North Whitehead is worth a refresher here, I think.)

    I wanted to comment here for a different purpose, however. You argue here, and elsewhere, that a simulation of X is not an X; therefore, a simulation of a brain may not produce mind, just as a simulation of photosynthesis does not produce sugar. This view has been conclusively rebutted, in my opinion, by David Chalmers. I know you are no great fan of Mr. Chalmers, but his Fading/Dancing Qualia argument is a convincing articulation of the view that a simulation of a brain IS sufficient for mind. Chalmers' defense of this view hinges on the fact that, for some things, a simulation of that thing is itself that thing. If a system meets this criterion, it is an organizational invariant. (A simulation of a circle is a circle. A simulation of a calculator is a calculator. Etc.)

    The Fading/Dancing Qualia argument is a thought experiment designed to illustrate the organizational invariance of consciousness. It's easy to find online, but I'll summarize: the argument asks you to imagine each of your neurons being replaced one by one with a silicon chip that does the exact same functional work as the neuron. (Whether a silicon chip can actually perform the function of a neuron in reality is irrelevant to the thought experiment. All we need imagine is that IF it could, THEN the results follow. The principle does not change.) If you lost your consciousness at some point in this process, or if it faded, you would be unable to notice it doing so. As this seems hard to imagine, if not outright logically impossible, it's best to suspect that your consciousness is preserved intact as it transfers substrate. If so, then consciousness is an organizational invariant and is not like photosynthesis. A simulation of consciousness IS consciousness.

    ReplyDelete
  56. I say "quasi" because they (unlike Chalmers' zombies) are not atom-by-atom identical to you. The reason the possibility of such zombies is implied by your philosophy of mind was spelled out in several of our previous exchanges:

    (1) it's possible in principle to simulate any physical process (unless dualism is true);
    (2) that includes humans;
    (3) it's possible (again, in principle) to make the simulation "react" in real time to the real world, just as the original would, then implement its outputs (arm movements, speech) in a physical body;
    (4) such an entity claims to have qualia (otherwise dualism must be true);
    (5) if (by your contention) it actually does not have qualia, it is what I have called a quasi-zombie - an entity that plausibly claims qualia but runs on a non-biological substrate.

    >I don't see how this is relevant. Again, think back to my twin analogy: suppose your identical twin is really completely and totally identical to you, mental states included. In what sense would he *be* you?

    I don't deny that it is hard to think about such things coherently. However, consider your epistemic state AS the identical twin. You remember being Massimo Pigliucci, you have the same hopes, dreams, habits, and you want to live. But are you him or are you some sort of usurper? It's a difficult question.
    But hang on. It's ALSO a difficult question if:
    -You lost consciousness and then came to;
    -You slept and then woke up;
    -You died and then were resuscitated.
    I mean this very literally: we have no way to tell introspectively whether we are the same consciousness that existed previously or some brand-new usurper; and this is true not only of "uploading" but of the simple act of going to sleep!
    All anyone can guarantee you about the "you" that exists tomorrow morning is that he will have your memories, hopes, habits, etc. - which is all that teleporting/uploading/copying can guarantee you. I strongly suspect they have the same effect.

    If that doesn't convince you, try some thought experiments. Are "you" alive or dead after these procedures? If dead, try to pinpoint the cause of the loss of personal identity.

    (1) I replace the atoms of your body, one by one, with "new" atoms, until your "original" atoms are entirely replaced. You are awake during the procedure.

    (2) I kill you (sorry!), replace all the atoms of your body with "new" ones, then resuscitate you.

    (3) I kill you (really very sorry!), take your body to Idaho, then resuscitate you.

    (4) I kill you (seriously, nothing personal!), replace the atoms with new ones, take the newfangled body to Idaho, then resuscitate you.

    Note that (4) is getting awfully close to a teleportation scenario. Where is the crucial loss of identity here? I can't find it.
    (1) definitely doesn't involve lost identity (this actually happens to us, albeit very slowly).
    (2) might, but then we wouldn't be saving people by resuscitating them.
    (3) is not plausibly different from (2), but there is no spatial continuity of consciousness - "you" don't travel.
    (4) may be different by reason of atom-replacement, but our negative answer to (1) should make us doubt that very much.

    ReplyDelete
  57. "The argument asks you to imagining each of your neurons being replaced one-by-one with a silicon chip that does the exact same functional work of the neuron."
    That's what I love about these so-called thought experiments. The purpose of a thought experiment about, for example, neurons, is best served by imagining you were dealing with a real living breathing neuron. Replacing it with an imaginary silicon chip can tell us nothing unless the chip so perfectly simulates a neuron that it has virtually become one. And then what have you proved except that in your imagination you can turn a chip into a neuron.

    ReplyDelete
  58. "Ahem, you do realize that Sagan was writing science fiction, while Kurzweil purports to make predictions about what will happen in the next few decades, right?"

    Indeed I do, but I suspect he put that in there because he had a hope (or perhaps some scientific basis to believe) that such a thing was possible. If you want to write about the impossible, you may as well write fantasy.

    Kurzweil makes the common error that other prognosticators make in giving a timetable. He should just stick with 'near' and 'soon' and such.

    I believe that history is replete with men who have good ideas mixed in with the bad (Socrates for example). You can (and should) certainly discredit Kurzweil's bad ideas, but that doesn't automatically tarnish his good ones. I think the idea of a human merger with technology is sound and underway even if no one will ever 'download' themselves into a computer.

    ReplyDelete
  59. Not quite right, Baron. The point of the thought experiment is not to show that silicon chips can replace neurons. That's an open question. What's being established is the functional character of consciousness. It doesn't matter how neuron-like the silicon is; it's nonetheless NOT a biological neuron. What matters is that the chip be isomorphic to the neuron's functional role in the brain. Consciousness, unlike digestion or photosynthesis, supervenes wholly on the functional organization of a system, regardless of substrate. That is what Chalmers' thought experiment attempts to establish.

    ReplyDelete
  60. Ritch,

    > I for one would not mind having an artificial heart if it could carry out its functions well. <

    Of course, but would you be happy with a virtual simulation of a heart, pumping virtual blood?

    Matt,

    > This view has been conclusively rebutted, in my opinion, by David Chalmers. <

    I really do wonder how exactly Chalmers got as far as he did. His thought experiment establishes no such thing. Philosophers have realized since Descartes that the rationalist program of establishing truths about the physical world just by thinking about it is simply not possible. That's why we have science; apparently Chalmers missed that boat, and he is stuck in the 16th century. Appropriate, for a dualist.

    > Consciousness, unlike digestion or photosynthesis, supervenes wholly on the functional organization of a system, regardless of substrate. <

    That is begging the question on a massive scale. The issue is precisely whether or not consciousness supervenes on functional organization independently of its substrate, and despite Chalmers' delusion, no thought experiment will ever be able to demonstrate that. And the only kind of consciousness we know of is, hum, biological. That doesn't mean that any other kind is impossible, but it does mean that Chalmers, Kurzweil and co. have to do a hell of a lot of serious, empirical work before making a good case for it.

    ReplyDelete
  61. Ian,

    Nice try, but no cigar. Do you really think that there is no difference between you waking up in the morning (or even from a coma) and you shooting yourself (I prefer you get killed instead of me, nothing personal) in the happy knowledge that a copy of you will keep going?

    And once again, simulating a physical process is NOT the same as having the damn physical process. To claim *that* is dualism, not my claim that you are not going to be able to detach minding (it's an activity, not a thing) from the brain.

    ReplyDelete
  62. Ian,

    As you will know from before, I agree with you that, in principle, there cannot be any reason why it should not be possible to simulate a mind on a sufficiently advanced machine, or even to produce real AI, unless dualism is correct* - which hardly any reasonable person would consciously claim these days**.

    And what you write is all very well. However, it does not change the fact that if I lay down in a machine and had my mind copied into it by some non-destructive process, I would still wake up afterwards and think: "Hm. Here I am, and nothing feels different."

    There will be a copy of me in the machine that also thinks it is me, but I will not experience what it experiences, perceive what it perceives, or derive any real advantage from its existence. Even worse, from that moment on it will become a different person, in exactly the same way that two identical twins become different persons after the separation of the two original cells - which leads to Massimo's question about what good it does you personally to die in the knowledge that your twin survives. When I die, I do not suddenly wake up as my copy in that machine, thus achieving immortality. By the same logic, if my mind is uploaded while the body is destroyed, then I, in any meaningful sense of the word, am dead.

    So what is it all good for, even if the process is non-destructive? Having a copy made seems no more satisfying than achieving immortality through your children, professional output, political activism, influence on other people, etc.

    *) While this might seem to disagree with Massimo's view, I should add that I would have a hard time considering a machine intelligence human, simply because it is not in a human body, and our human mind is an emergent property of the body we have, unless dualism is true. I also have strong doubts that it is technically possible to emulate a really human mind on a machine - most people seem to be unaware of how complicated the brain is. It seems reasonable, however, to assume that one could ultimately build a thinking machine so sophisticated that it could pass the Turing test. What I don't get is what we would gain from building a thinking machine with an autonomous personality and motivations, except for the giggles. It seems like a very good chess computer, autopilot or search algorithm is what we really want, and not something that goes "what is it all good for?"

    **) Although many if not all uploading fans seem to hold an essentially dualist view without being aware of it.

    Baron P,

    Great comment!

    ReplyDelete
  63. One last comment before I put the issue, and myself, to bed. You claim that Chalmers' thought experiment in no way establishes functionalism and that "philosophers have realized since Descartes that the rationalist program of establishing truths about the physical world by just thinking about it is simply not possible." (So long, metaphysics!) Fundamentally, you think that this kind of philosophical exercise is worthless from the outset and exists only as a kind of mental puzzle designed to make its creator look clever; such an argument can provide no insight into the actual nature of things or illuminate the mind/body problem in any way. (Do you feel the same about Jackson's Mary's Room argument? Searle's Chinese Room? What would philosophers do for a living without these kinds of mental olympics?) There is an emotional appeal to this line of dismissal, but such a response nevertheless sidesteps what the argument has to say. Sneaky. Attack such a thought experiment all you like from the stands; the actual argument still waits there in the center of the ring, waiting for an opponent.

    Bottom line: while undergoing the neuronal replacement, the qualia either fade, disappear at some point, or are sustained. Is there another POSSIBLE option in logical space? I don't see what it could be. Options one and two lead to ridiculous conclusions, so option three still strikes me as the most plausible. Functionalism is established. Option four, the "this whole line of thought is silly" response, doesn't attempt to engage with the idea. Obviously this is a complex topic and I hardly expect any kind of thorough response, but I've never seen a criticism that plausibly demonstrated Chalmers' error, or even attempted to engage with the idea on its own terms. If you know of any links to articles that criticize the Fading/Absent Qualia case in depth, I'd love to read them. Usually materialists just evade and duck, eager to call Chalmers names, make fun of his hair, or claim he's stuck in, as you say, "the 16th century." For what it's worth, I can see no reason to reject his analysis just because it isn't "science." (As if science could determine this kind of question anyway!) But, then again, I'm a panpsychist/monist nutjob, so perhaps I'm too far gone as it is to see the light. Gotta run, Kurzweil's on Colbert! ;)

    ReplyDelete
  64. @Matt Sigl,
    "It doesn't matter how neuron-like the silicon is, it's nonetheless NOT a biological neuron; what matters is that the chip be isomorphic to the neuron's functional role in the brain."

    So you're imagining a chip that functions exactly like a neuron but is not one. How would you demonstrate convincingly, outside of your imagination, that a chip that's not a neuron can be fashioned to thoroughly take over its functional role?
    Can you perhaps imagine that the material substance of a neuron - which will remain different from the substance that keeps the chip (in your imagination) a chip - may have an inescapably necessary role to play in that real-life function?
    And if so, hasn't your stretch of imagination turned it into something that is no longer actually a chip?

    ReplyDelete
  65. @Alex:
    My opinions on this topic are strongly stated, loosely held. I am not at all sure that this theory of personal identity is correct, but if you go through the above steps (1) to (4) you'll find it very hard to pinpoint where identity disappears or why, which makes me suspect that it either doesn't disappear, or that it isn't even a meaningful question.

    And then you notice that there is no way, in principle, scientifically or introspectively, to tell the difference between having awakened from a deep sleep and having undergone the copy-delete sequence, which makes me REALLY suspect it's a wrong question.

    I also wish to disclaim, lest I be accused of rationalizing, that I am arguing about this because I think philosophy of mind is fun, not because I'm holding out hope of uploading, which seems in a technical sense a hilariously long way off, and not necessarily even desirable.

    @Massimo:
    >And once again, simulating a physical process is NOT the same as having the damn physical process. To claim *that* is dualism, not my claim that you are not going to be able to detach minding (it's an activity, not a thing) from the brain.

    I emphatically agree that minding is an activity - that is precisely why substrate independence seems like such a sure bet. Because a thought sure as hell isn't a bunch of carbon atoms, so it must be some sort of relation between them, or between neurons, or something. I have a wonderful proof of this, which this margin is too small to contain.

    As far as a simulation of X not equaling X, I agree that they are not in general the same thing (our old friend photosynthesis being the obvious example). But there are some physical processes in which the aspect we actually care about looks like it is just as good simulated as real.

    A trivial example would be a simulated sum: 2+2=4.
    A slightly less trivial example would be a simulated plan for how to catch a real antelope: "Hide behind that bush!"
    Or a simulated memory of a real last week: "There were lots of antelope in the valley."
    Or a simulated reflection on a simulated planning process: "I seem to be watching my thoughts planning how to catch an antelope."
    Or a simulated poetic composition: "Home, home on the range..."

    Obviously, a gazillion complications (e.g., hormones) intervene here, which is why this sort of scenario will almost certainly never occur outside Disney's magical kingdom of In Principle. But correctly stated, the idea is definitely coherent.
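
    To make the point concrete, here is a toy sketch (in Python; the hunting scenario and every name in it are invented purely for illustration) of a "simulated" plan and a "simulated" reflection on it - processes for which the simulation arguably is the thing we care about:

    # Hypothetical toy: a simulated plan is still a usable plan, and a
    # simulated reflection on it is still a reflection.
    def plan_hunt(antelope_seen_at, hunter_at):
        """Return a plan as a sequence of symbolic steps."""
        steps = []
        if antelope_seen_at != hunter_at:
            steps.append("hide behind that bush")
        steps += ["wait", "pounce"]
        return steps

    def reflect(trace):
        """A simulated reflection on the simulated planning process."""
        return "I seem to be watching my thoughts planning: " + " -> ".join(trace)

    print(reflect(plan_hunt("valley", "ridge")))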

    ReplyDelete
  66. Massimo,

    > My impression is that singularitarians are actually a bunch of narcissuses who desperately want to be immortal.

    See for example this article. Some of them pay a huge personal toll for working towards a positive Singularity. Some lost friends, and others even developed psychological problems because of it. I can't go into details here, but many of them are not selfish. Also see this post: people donate the current balance of their bank accounts to the Singularity Institute for Artificial Intelligence. One of the reasons is of course that they want to survive, but I have read enough to tell that a lot of those people are highly altruistic.

    ReplyDelete
  67. Massimo,

    > You are you and everyone else is everyone else.

    I treat agents with identical values and goals as parts of me.

    > One more time: would you commit suicide if you were assured that a completely identical twin would survive you?

    If only one of us could survive and the identical twin had better chances of fulfilling its goals, yes.

    > Would you be happy with a simulator of your heart, instead of your actual heart?

    If it was better than the original, of course.

    > That's right, and human beings are physical beings, not symbolic representations of physical beings...

    If you don't buy the Mathematical universe hypothesis then the only important difference seems to be that one isn't being computed.

    ReplyDelete
  68. Alex SL,

    > The misconception here is that a copy would be YOU in any meaningful sense of the word. It wouldn't.

    You just claim this and I think that it is simply biased and inconsistent. I wrote a bit about it here.

    What is your definition of identity?

    ReplyDelete
  69. Baron P,

    > Replacing it with an imaginary silicon chip can tell us nothing unless the chip so perfectly simulates a neuron that it has virtually become one. And then what have you proved, except that in your imagination you can turn a chip into a neuron?

    Are you saying that the brain cannot be subject to any amount of abstraction - that everything about that evolutionary product is vitally important? I don't see how one would come to believe this. It is a very unlikely possibility.

    ReplyDelete
  70. Massimo,

    > And once again, simulating a physical process is NOT the same as having the damn physical process.

    Both are physical processes; one just uses a different substrate. The only reason to doubt substrate neutrality is that certain substrates might have functional properties that cannot be mimicked by a universal computing device, which I believe is an unreasonable assumption.

    Further, the only reason there is a distinction between software and hardware at all is that we're talking about computational substrates that allow a large degree of freedom, and that therefore have to be "programmed" to exhibit a certain behavior.

    ReplyDelete
  71. Massimo, what do you think about the singularitarian idea of making a horde of nanobots that gradually replace every neuron in your brain, one at a time, with an artificial device that functions just like a real neuron? The artificial neuron has membrane channels that take in real ions. It has a membrane potential; it synthesizes real proteins and releases real neurotransmitters. The entire configuration of the original neuron is duplicated faithfully. We are not cloning the neuron from the DNA. The first time one neuron in your brain is replaced, you won't feel any difference. You are still you. But eventually all your neurons will be artificial neurons. You still won't know the difference. If we then connect the brain, now completely made up of artificial neurons, to a device that pumps in artificial blood and other biostuff, wouldn't you be a consciousness living in a machine?

    Note that I am not a fan of the Singularity. As far as we know, the only things that function like neurons are biological neurons. If you make artificial neurons that are indistinguishable from real neurons, they will probably degrade with time and die like real neurons. But that is just a possibility. Making artificial neurons is not impossible in principle.

    Just as a thought experiment, what do you think? I am also wondering whether this scenario can be considered a thought experiment arguing for physicalism. It's strange that there are so many thought experiments for nonphysicalism but not many for physicalism. If you were asked to defend nonphysicalism, how would you refute this thought experiment?

    ReplyDelete
    For everyone who is interested in learning more about "the other" Singularity movement, existential risks and Eliezer Yudkowsky, I recommend the following three-part interview between mathematician and climate activist John Baez and Eliezer Yudkowsky:

    Part 1
    Part 2
    Part 3

    You can learn everything from what it is all about to his ideas on rationality and what he thinks about climate change, etc.

    ReplyDelete
  73. Good morning gentlemen! I see people have been very active on this topic during the night. Okay, here we go for a few more - hopefully helpful - clarifications:

    Matt,

    > Do you feel the same about Jackson's Mary's Room argument? Searle's Chinese Room? What would philosophers do for a living without these kinds of mental olympics? <

    No, my critique was not against thought experiments in general. But I think that thought experiments need to be used carefully to highlight problems or provide insight into how we think about things (Chinese room, for instance). They cannot be used to prove metaphysical theses (zombies) or establish physical realities (dancing qualia).

    > Bottom line: while undergoing the neuronal replacement, the qualia either fade, disappear at some point, or are sustained. Is there another POSSIBLE option in logical space? <

    Bottom line: we have no idea what would happen if we started to replace biological neurons with silicon ones. My *guess* is that pretty soon in the process something would go really badly wrong and the person would die. Since that cannot be excluded other than experimentally, Chalmers' "experiment" proves precisely nothing.

    scitation,

    > the singularitarian idea of making a horde of nanobots that gradually replace every neuron in your brain, one at a time, with an artificial device that functions just like a real neuron <

    If the artificial neurons are not just "functionally" (whatever that means) but also materially like biological neurons - i.e., they are made of proteins, produce neuropeptides, etc. - then of course it would work. That's what happens in our bodies normally, which means the thought experiment establishes nothing. Notice, however, that this has nothing to do either with AI or with uploading.

    (Disclaimer: yes, it is conceivable that one could produce, say, things that look and work like proteins but are not. Good luck with that. But what is NOT conceivable, in my opinion, is that *just anything* that maintains the original logically/mathematically described properties will do, because biological function is a bit more than just logic/math; it is biophysics.)

    ReplyDelete
  74. Here is another interview that highlights what other people than Kurzweil are working on:

    Steve Omohundro on the Global Brain, Existential Risks and the Future of AGI

    "Some qualities that I see as precious and essentially human include: love, cooperation, humor, music, poetry, joy, sexuality, caring, art, creativity, curiosity, love of learning, story, friendship, family, children, etc. I am hopeful that our powerful new technologies will enhance these qualities. But I also worry that attempts to precisely quantify them may in fact destroy them. For example, the attempts to quantify performance in our schools using standardized testing have tended to inhibit our natural creativity and love of learning.

    Perhaps the greatest challenge that will arise from new technologies will be to really understand ourselves and identify our deepest and most precious values."

    ReplyDelete
  75. XiXiDu,

    > Some lost friends, and others even developed psychological problems because of it. I can't go into details here, but many of them are not selfish. <

    Nobody has claimed, or can claim, that "they are all selfish." It simply strikes me that the whole exercise is a colossal sci-fi fest for narcissistic geeks, most of whom probably don't realize why they are so interested in it. Of course, the fact that your two links are to obviously self-serving articles by the Singularity Institute makes me a bit suspicious...

    > I treat agents with identical values and goals as parts of me <

    Well, that's very zen of you. Most sane people, however, don't.

    > If only one of us could survive and the identical twin had better chances of fulfilling its goals, yes <

    Why on earth would someone else's goals be more important to you than your goals?

    [about the virtual heart] > If it was better than the original, of course. <

    You are missing the point: a virtual heart wouldn't be able to pump *real* blood, which is, you know, the point of a heart.

    > If you don't buy the Mathematical universe hypothesis then the only important difference seems to be that one isn't being computed. <

    I guess I don't buy your peculiar form of Platonism.

    > Both are physical processes; one just uses a different substrate. <

    And I've been arguing forever now that, as far as we know, the substrate matters. First, because the only examples of consciousness we have are all tied to a particular substrate. Second, because we know that silicon, for instance, has very different chemical properties from carbon, and simply would not do as a substitute. This doesn't make non-carbon consciousness impossible, but it does mean that there is no sensible reason at the moment to think that it will happen.

    ReplyDelete
  76. And finally Ian (after this, I really ought to get some work done :-)

    > if you go through the above steps (1) to (4) you'll find it very hard to pinpoint where identity disappears or why <

    It's actually not that much of a problem. Here we go: (1) is fine, it's what happens biologically; (2) wouldn't work - just try to resuscitate a person after having killed him, even without replacing his atoms one by one... unless you are Jesus; (3) also wouldn't work, unless you find Jesus in Idaho; (4) wouldn't work, for the same reasons as (2) and (3). So, you have established nothing.

    The crucial point that I have been trying to make is that personal identity - at the very least - requires both psychological and spatiotemporal continuity. It may also require biological continuity in the sense that if you either try to replace all my atoms at once (teleportation) or to replace them with a different kind of material (silicon chips) it won't work.

    > Because a thought sure as hell isn't a bunch of carbon atoms, so it must be some sort of relation between them, or between neurons, or something. I have a wonderful proof of this, which this margin is too small to contain. <

    Appreciate the humor ;-) My point is that not all substrates are capable of maintaining what you call relations. To play chess, for instance, you need a board of a certain type, physical (made of wood, metal, etc.) or virtual. But you *cannot* play chess on a variety of other surfaces, physical or virtual, because they do not allow for the functional relations among the pieces to occur.

    Similarly, I have not said - nor will I ever say - that non-biological consciousness is impossible. But I am saying that the biological substrate needs to be taken seriously, and that one cannot simply assume without evidence that anything will do because it's all in the relations. It's not: the relations are bound to particular substrates.

    This is particularly so if we talk about qualia, since qualia are physical sensations, and I'll be darned if I understand in what possible sense one can "simulate" a sensation.

    > there are some physical processes in which the aspect we actually care about looks like it is just as good simulated as real. A trivial example would be a simulated sum: 2+2=4. <

    Neither 2+2=4 nor the others are *physical* processes. The first is a mathematical operation that can be instantiated by a physical process - which is the opposite of what we are talking about. As for memories, you still have to go into the brain and change something physical to "simulate" one.

    ReplyDelete
  77. Massimo,

    > Of course, the fact that your two links are to obviously self-serving articles by the Singularity Institute makes me a bit suspicious...

    My history as a skeptic of the Singularity Institute is on record; see for example "Should I believe what the SIAI claims?" or "What I would like the SIAI to publish" (there is much more if you are interested). I can assure you that the interview and especially the second article are not advertisement material. In particular, the interview featuring Ben Goertzel (an artificial general intelligence researcher) was not written to support the Singularity Institute (SIAI). I can easily prove that, as Ben Goertzel is one of the biggest critics of the SIAI; see his blog post "The Singularity Institute's Scary Idea (and Why I Don't Buy It)".

    ReplyDelete
  78. XiXiDu,

    What is your definition of identity?

    Certainly not this one:

    I treat agents with identical values and goals as parts of me.

    Massimo,

    Similarly, I have not said - nor will I ever say - that non-biological consciousness is impossible. But I am saying that the biological substrate needs to be taken seriously, and that one cannot simply assume without evidence that anything will do because it's all in the relations. It's not: the relations are bound to particular substrates.

    Now that was a very clear way of describing your position, which I was unsure about before. Thanks.

    ReplyDelete
  79. Massimo,

    > Similarly, I have not said - nor will I ever say - that non-biological consciousness is impossible. But I am saying that the biological substrate needs to be taken seriously, and that one cannot simply assume without evidence that anything will do because it's all in the relations. It's not: the relations are bound to particular substrates. <

    All that really matters, if you want to have a fruitful discussion with people like Eliezer Yudkowsky, is whether you believe that human intelligence can be improved upon. Whether that means we will have to create substrates with the same chemical properties does not matter. What matters is whether you agree that we are not perfect and that much more intelligent beings are possible, e.g. carbon brains with neurons that transmit on optical wavelengths rather than electrically, neurons with improved chemical properties, or brains with better memories (there seem to be a lot of possibilities).

    Many transhumanists are concerned that much more intelligent beings (e.g. augmented human cyborgs) pose a huge risk and that research to mitigate such risk is underfunded. People like Kurzweil believe in a positive Singularity, which isn't even the prevailing opinion within transhumanist circles. Most transhumanists are actually concerned about a negative Singularity, like some worldwide dictatorship using advanced surveillance. A negative Singularity could turn out to be closer to a living hell than to the kind of heavenly vision that Kurzweil imagines.

    ReplyDelete
  80. XiXiDu,

    of course human intelligence can be improved upon; that's a pretty uncontroversial claim. Can it be done by substituting optical fibers for neurons? I don't know, I doubt it, and it is up to those who make that claim to show that it is possible.

    I share the fears of pessimist transhumanists, but I'm not worried because I seriously doubt that we are anywhere near the problem actually becoming a problem. We've got plenty of serious issues to deal with first, say, health care, poverty, war...

    ReplyDelete
  81. "Replacing it with an imaginary silicon chip can tell us nothing unless the chip so perfectly simulates a neuron that it has virtually become one. And then what have you proved except that in your imagination you can turn a chip into a neuron." (Baron)

    Just what I was thinking. The "thought experiment" becomes a bit of question-begging when you make the different substrates identical.


    "Consciousness, unlike digestion or photo-synthesis, supervenes wholly on the functional organization of a system, regardless of substrate. That is what Chalmer's thought experiment attempts to establish." (Matt)

    How can it establish that when creating identical substrates is a necessary assumption? Again, begging the question.

    ReplyDelete
  82. "The only reason to doubt substrate neutrality is that certain substrates have functional properties that cannot be mimicked by an universal computing device, which I believe is an unreasonable assumption."

    So does substrate not matter? Perhaps you can do the same with wood, or with fiber-optic cables. I'm sorry, but the burden of proof is on the person putting forth the claim that the materials used in computer technology are reasonable substrates.

    ReplyDelete
  83. Alex SL,

    There is too much vagueness involved here. Is there any reason to believe that even though evolution could create consciousness, we cannot?

    No doubt we don't know much about intelligence and consciousness. Do we even know enough to be able to tell whether the use of the term "consciousness" makes sense? I don't know. But what I do know is that we know a lot about physics and biological evolution, and that we know that we are physical and a product of evolution.

    We know a bit less about the relation between evolutionary processes and intelligence but we do know that there is an important difference and that the latter can utilize the former.

    Given all that we know, is it reasonable to doubt the possibility that we can create "minds" - conscious and intelligent agents? I don't think so.

    ReplyDelete
  84. XiXiDu,

    if you retreat far enough, we are eventually going to agree. Since biological organisms *are*, in a sense, "machines," nobody in his right mind doubts that it is *possible* that we may eventually make artificial ones.

    What I and others are objecting to are the specific claims of Singularitarians that (a) the substrate won't matter; (b) it will happen very soon; and (c) it will lead to a runaway process. There are serious reasons to doubt all three claims and no particular reason to believe them.

    ReplyDelete
  85. Massimo and ccbowers,

    I am just saying that it is very unlikely that all of the human body is necessary to recreate all of human experiences. Some amount of abstraction is possible.

    > (a) the substrate won't matter;

    I have criticized that claim myself in the past. But there is a limit to what counts as a reasonable assumption given what we know. You have to see where those people are coming from. They perceive the creation of artificial general intelligence to be a risk whose associated disutility is so large that it can outweigh even a very tiny probability of the risk materializing.

    There are reasons to assume that it is possible to use digital computers to create very smart agents. The burden of proof is on you to show that the possibility of artificial agents running on digital computers is unlikely enough to disregard the possible negative consequences of ignoring it.

    (Note that I am expressing their opinion, not necessarily mine, as I am not sure where to draw the line (if at all) on probability-utility calculations that run on sparse evidence.)

    > (b) it will happen very soon;

    Again, they believe that the possibility that it will happen soon is likely enough to justify a strong commitment to mitigate that risk.

    > (c) it will lead to a runaway process.

    This isn't too unreasonable, in my opinion. Once you have human-level intelligence, it should be much easier to improve on such an intelligent design. Any improvement will make such an intelligence even more capable of improving itself further. And given that such an intelligence will be sufficiently alien, it will be extremely dangerous, as it might simply outsmart us.
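
    For what it's worth, here is a toy numerical sketch (in Python; the parameters are invented assumptions, not claims about any real system) of the compounding intuition behind the "runaway" worry:

    # Toy model: each generation's intelligence sets the size of the next
    # improvement, so capability compounds geometrically.
    def runaway(intelligence=1.0, improvement_rate=0.1, generations=50):
        history = [intelligence]
        for _ in range(generations):
            intelligence += improvement_rate * intelligence  # smarter -> bigger gains
            history.append(intelligence)
        return history

    print(round(runaway()[-1], 1))  # about 117.4x after 50 generations at 10% per step

    Whether real intelligence improvement compounds like this, or instead hits diminishing returns, is of course exactly what is in dispute.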

    ReplyDelete
  86. "The burden of proof is on you to show that the possibility of artificial agents running on digital computers is unlikely enough to disregard the possible negative consequences of ignoring it."

    I'm not sure that anyone here is claiming that we should ignore anything. The objection from my perspective is that some are talking about the future with huge assumptions, and are operating under the impression that these assumptions are certainties. In fact these 'certainties' appear to be pretty unlikely... Perhaps I am objecting to the quacks primarily, but I'm not sure that is even true.

    ReplyDelete
  87. "Again, they believe that the possibility that it will happen soon is likely enough to justify a strong commitment to mitigate that risk."

    I see no evidence for this risk that will catch us with our pants down, and I'm not sure what you mean by 'mitigate' to know if I agree or disagree.

    ReplyDelete
  88. Massimo, I also have work to do :) , but I will say a couple of things.

    First, bringing people back from the dead happens routinely in ICUs (probably too routinely, but let's not get into medical ethics). Granted, the timescale is several minutes, but that can be extended to several hours by keeping somebody cold. You cannot dismiss (2) so easily, my friend!

    Also, you said:
    >No, my critique was not against thought experiments in general. But I think that thought experiments need to be used carefully to highlight problems or provide insight into how we think about things (Chinese room, for instance). They cannot be used to prove metaphysical theses (zombies) or establish physical realities (dancing qualia).

    But thought experiments can establish physical realities. The canonical example is Galileo's two falling objects of different masses, joined by a string - which establishes the mass-independence of gravitational acceleration.
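
    (For readers who don't remember it: assume, with Aristotle, that heavier bodies fall faster. A heavy stone tied by a string to a light one should then fall slower than the heavy stone alone, since the light stone retards it - and also faster, since the combined body is heavier than the heavy stone alone. Contradiction; so the rate of fall cannot depend on mass.)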

    I just don't see why you're objecting to fading qualia other than that you don't like the conclusion. Especially since you're apparently all right with Searle's Chinese room, which, although it tries to establish an opposing conclusion, is precisely the same sort of thought experiment.

    Saying that somebody would "die" in the thought experiment really needs defending, because from the perspective of the biological body interacting with the synthetic neurons there is no difference (ex hypothesi). It's not as if the biological neurons check to make sure they are being fed electrochemical signals by other biological neurons. They just process the signals coming from whatever black box is connected to them.

    ReplyDelete
  89. ccbowers,

    > The objection from my perspective is that some are talking about the future with huge assumptions, and are operating under the impression that these assumptions are certainties.

    The people associated with the Singularity Institute do not believe that it is a certainty, although I assume some think that it is very likely that we will at some point create or augment intelligence to exhibit superhuman capabilities.

    I once asked someone who donates most of his money to the Singularity Institute about his estimates, and he said that he sees a 90 percent chance that they are going to fail. Yet he still thinks that his money is best put with the Singularity Institute rather than with some other charity. Part of a very complicated answer to why those people think so can be found in the following article, "Efficient Charity: Do Unto Others...".
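
    The arithmetic behind such a choice is simple expected value. A hedged sketch (in Python; every number below is an invented assumption of mine, not the donor's actual figures):

    # Toy expected-value comparison under invented assumptions.
    p_success = 0.10          # he grants a 90 percent chance of failure
    stake = 10**9             # assumed size of the outcome at risk
    donation_share = 1e-6     # assumed fraction of the outcome one donation moves
    expected_benefit = p_success * stake * donation_share
    print(expected_benefit)   # 100.0 - large despite the 90 percent failure odds

    Even a heavily discounted long shot can dominate such a calculation if the assumed stakes are big enough - which is both the appeal and the weakness of this style of reasoning.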

    > I see no evidence for this risk that will catch us with our pants down, and I'm not sure what you mean by 'mitigate' to know if I agree or disagree.

    Again, to grasp the full scope of their beliefs you would have to read literally hundreds of articles and papers they have written.

    The short answer: they (the Singularity Institute) are trying to mathematically define binding and stable rules that, if implemented in its goal system, would make an artificial general intelligence care about humans and not destroy them out of indifference while pursuing some instrumental goal.

    To approach this problem they first wrote hundreds of posts on rationality (they are currently writing a book on rationality) and are now developing a reflective decision theory of self-modifying decision systems.

    For more check out some of their publications.

    ReplyDelete
  90. Baron -

  90. I've read over your comment a few times and can't quite make heads or tails of it. But let me try to articulate what I think your problem is. You take issue with the concept of an artificial neuron doing the exact same work as a neuron. At the point where a chip is performing the exact same functional role as a neuron, you suggest, I am not really imagining a chip after all; I'm imagining a neuron. In other words, I suspect you think a chip just can't cut the mustard.

    In my example this is only true if the "work" of the neuron is only possible with an object made of the same chemical and atomic structure as a neuron. Perhaps quantum effects play an important role, a la Penrose, and a silicon chip is incapable of duplicating this feature. If THAT is true, or something like it, then this stripe of functionalism is wrong and biologism is right. Only a biological neuron, due to its fundamental atomic characteristics and the role these characteristics play in the function of the system, would allow for the rise of mind. Even in this case, however, strong functionalism is still not totally defeated. An atomic/quantum-level emulation of a brain in silicon (or graphene, or whatever substance could compute at the quantum level) would then give rise to the same mind as the biological brain, so general functionalism is preserved.

    The issue I take is with the "hard" biological opinion, which is that an exact computational emulation of a brain would nonetheless not produce consciousness because it is silicon, not wetware. This would be a cyborg zombie with behavior exactly like a real brain's but no subjective experience whatsoever. I believe that we have good a priori reasons to reject this view. So, to address your problem: it doesn't really matter how difficult it is to imagine a silicon chip acting like a neuron; what matters is the principle that a fine-grained enough simulation of the brain will give rise to consciousness. Mind, not being a physical entity like the sugar in photosynthesis, is not dependent on the contingent particulars of a physical substrate. Finally, as an empirical point, we have very little reason to believe that the brain is anything but a classical system. As such, I think it likely that the function of a neuron is, in fact, possible in silicon. Still, it matters little to the philosophical point.

    ReplyDelete
  91. ccbowers,

    > I see no evidence for this risk...

    See for example the following debate which does discuss the evidence at length, "The Hanson-Yudkowsky AI-Foom Debate".

    ReplyDelete
  92. Ian, yeah, having to work sucks, eh? ;-) Luckily, I consider blogging part of my professional outreach, which means that I can get away with a little more of it than other people and still call it "work."

    > bringing people back from the dead happens routinely in ICUs <

    Of course, but there are pretty strict *biological* limits to when and how that can happen. (Indeed, one could argue that if the person came back s/he wasn't really "dead.")

    > thought experiments can establish physical realities. The canonical example is Galileo's two falling objects of different masses <

    I knew you were going to bring that up! Technically, what that experiment showed was that there was something wrong with the *logic* of Aristotelian physics. To establish that Galileo, and not Aristotle (or someone else), was right, you still had to do the actual experiment.

    > Especially since you're apparently all right with Searle's Chinese room, which, although it tries to establish an opposing conclusion, is precisely the same sort of thought experiment. <

    Some people interpret the Room experiment that way; I don't. I think of it as simply fleshing out the assumptions of a certain way of thinking in philosophy of mind and showing that there are logical problems with it. Whether the room is or is not conscious is a matter of empirical evidence that cannot be established a priori.

    > It's not as if the biological neurons check to make sure they are being fed electrochemical signals by other biological neurons. <

    No, but since silicon has very different chemical properties from carbon and other biological materials, what exactly authorizes Chalmers' "ex hypothesi" assumption? Seems to me he is begging the question, just like he does with zombies.

    ReplyDelete
  93. I have tried my best to give an account of the opinions held by people like Ray Kurzweil and Eliezer Yudkowsky. Now that I have been exposed to the opinions of the philosophy camp, I'll try and see if I can argue for that side as well:

    Let's assume that you wanted to simulate gold. What does it mean to simulate gold?

    If we simulated the chemical properties of gold, would we be able to use the result as a vehicle for monetary exchange on the gold market? Surely not - but why? Some important characteristics seem to be missing. We do not assign to a simulation of gold the same value that we assign to gold itself.

    What would it take to get the missing properties? A particle accelerator or a nuclear reactor.

    In conclusion, we need to create gold to get gold; no simulation short of creating the actual, physically identical substance will do the job.

    ---

    P.S.

    It's me, XiXiDu. I changed my Google Blogger settings to display my real name instead of my nickname.

    ReplyDelete
  94. Matt, my point is that your thought experiment does not rise to the level of experiment if all you're doing is affirming the consequent.

    ReplyDelete
  95. Alexander,

    Yes, that is a good example. Of course the singularitarians would answer that, unlike gold, consciousness can be abstracted to the point of being substrate independent. But there is no reason to think that is the case, and quite a few reasons - based on biophysics - to think not.

    ReplyDelete
    Replies
    1. It's even funnier than that. Most will *insist* they are physicalists, but the moment they talk about uploading or AI, they completely *discard* the actual physical properties of mind. Some of them even mock dualists, though, as you correctly pointed out, they are themselves dualists.

      Some of the computationalists become unwitting pan-computationalists; some are functionalists who are actually arguing for multiple realizability. And so on. Funny stuff.

      Delete
  96. Okay, I think your "anybody who can be brought back from the dead isn't really dead" gambit is a little bit iffy (for one thing, it implies that "dead" means something different depending on which century you live in). And obviously, we are not going to have a productive discussion by haggling over definitions.

    The point is that at time t=0, there was a corpse and not even background brain activity, so there's no way there was continuing consciousness, and then at time t=1, a person was conscious again.

    Now: what difference does it make if you "replace" all their atoms and move them a long way away in the meantime?

    ReplyDelete
  97. Ian,

    I can't tell whether you are serious about this. Okay, I can. First, of course it makes no difference whether you move the "corpse" before resuscitating it or not, but that gets you nowhere in terms of uploading, artificial intelligence and all the stuff we have actually talked about.

    Second, the reason you can resuscitate some people under certain conditions is that their brains and other bodily activities haven't decayed beyond repair. So?

    Finally, yes, you can replace all my atoms, and if you do it properly you will get simply what nature normally does - no big point scored. But if you do it by replacing my carbon neurons with silicon ones, it's a safe bet that I'll die. And if you do it by making a copy of me, well, you'll have a copy of me; I am still me, and I will not agree to be "turned off" just because there is a copy around to replace me.

    ReplyDelete
  98. I can't follow this whole argument, especially given all the references to outside articles, but it seems kind of odd for you, Massimo, to use the heart analogy. Nobody is claiming that a simulated brain could, say, deflect a pass beyond the Italian goalkeeper into the net. That's ridiculous, and nobody would make such a mistake. The idea is that the thought processes of the human brain could be mimicked in an electronic setting, allowing a simulated brain to have a mind in a similar way to the way a physical one does. Of course it couldn't give orders to real-world limbs; why would we suppose it could?

    ReplyDelete
  99. Joseph, the heart analogy was in direct response to someone's comment about the general idea of simulating X, where X could be anything.

    The specific point about the brain, however, remains: singularitarians claim that mental activity can be completely decoupled from its physical substrate, which is a form of dualism. I say that we have no evidence whatsoever of this, and that to the contrary everything we know about biological organisms tells us that everything they do - including thinking - depends on particular substrates arranged in particular ways. Change the arrangements OR the substrates (and therefore their physical-chemical characteristics) and instead of thinking you likely get a dead organism.

    ReplyDelete
  100. I've been following this discussion for a while and I find it extremely interesting. I'd like to share my opinions on the subject of "teleportation" that Massimo and XiXiDu discussed here.

    Although I do think that the singularity will come one day (let's not get into when now), I don't think that transferring one's mind into a computer will be as easy as just scanning one's brain and simulating it on a computer. I think I would agree with Massimo on this issue. It seems to me like killing one person and then creating his identical copy (with his brain in the exact state it was in during scanning). Of course the copy would say that the process was successful, and that psychological continuity is preserved. But what would it feel like from the original person's perspective? XiXiDu mentioned general anesthesia. I think it would be a general anesthesia that you never wake up from.

    To show what I mean, I came up with the following thought experiment:
    Suppose we scan a person's brain and create his perfect copy, but we keep the original person alive. Would this original person suddenly feel like two persons at a time? Would he perceive his copy's feelings, thoughts, and so on? I highly doubt it.

    I do think that transferring a mind into a computer is possible if we did it gradually - neuron by neuron. During this process the person's brain would have to be able to communicate with the neurons that have already been transferred, and data would be processed by a hybrid brain constructed from "natural" neurons and simulated ones.

    ReplyDelete
  101. Jacek,

    Your objection is similar to the scenario in Robert Sawyer's novel, Mindscan (http://goo.gl/YvRnG), whose philosophical objections to uploading are discussed seriously by Susan Schneider in her essay, Mindscan: transcending and enhancing the human brain, published here: http://goo.gl/5FfE2

    ReplyDelete
  102. Jacek,

    > Suppose we scan persons brain and create his perfect copy, but we do keep the original person alive. Would this original person suddenly feel like two persons at a time? Would he perceive his copies feelings, thoughts and so on? I highly doubt it.

    Nobody is claiming that; the claim is just that it doesn't matter. What matters is psychological continuity, and even more so the continuity of your values and goals, which define your identity more than anything else.

    ReplyDelete
  103. Alexander, it doesn't matter, really? No sympathizer of singularitarianism on this thread has so far answered a simple question: if a condition of uploading were that the original gets shot, would you volunteer? If the above doesn't matter, I would say you'd have to agree; and yet somehow I really doubt it.

    ReplyDelete
  104. While reading through these comments, I was reminded of a book that I read recently, Philosophy in the Flesh by George Lakoff and Mark Johnson, which concluded its chapter on The Mind with the following:

    What we call "mind" is really embodied. There is no true separation of mind and body. These are not two independent entities that somehow come together and couple. The word "mental" picks out those bodily capacities and performances that constitute our awareness and determine our creative and constructive responses to the situation we encounter. Mind isn't some mysterious abstract entity that we bring to bear on our experience. Rather, mind is part of the very structure and fabric of our interactions with our world.

    From this angle (which is informed by empirical research in cognitive science and linguistics), I can see more clearly why Massimo says "singularitarians claim that mental activity can be completely decoupled from its physical substrate, which is a form of dualism" and why that's problematic.

    Of course, that doesn't rule out a weak version of AI, but it does suggest that, if the goal is to synthesize a human mind, then you also must synthesize - not just a human brain - but a human body (in other words, a robot will get you further than a desktop computer, and it had better be a really advanced one!).

    Of course, there's already a proven, and far more efficient way, to produce a human. It's also a lot more fun (wink, wink, nudge, nudge).

    ReplyDelete
  105. Massimo,

    > ...if a condition of uploading were that the original gets shot, would you volunteer?

    I thought that I had already answered that question in this thread on April 13, 2011 at 4:44 AM.

    Yes, I would volunteer under the condition that it was worth it, e.g. that the uploaded mind would be better off than me (faster, smarter or simply healthier and forever young etc.).

    In reality I wouldn't volunteer but for completely different reasons. Horrible things can happen once you are easily copyable...but that's a different story.

    ReplyDelete
  106. You are right, you did answer it. You are also right that a horrible thing would happen from your perspective: you'd die! It would be like committing suicide in favor of your twin brother, which I think is both ethically and logically indefensible.

    ReplyDelete
  107. Alexander,

    > Nobody is claiming that, just that it doesn't matter. What matters is the psychological continuity, but less than the continuity of your values and goals which define your identity more than anything else.

    Well, I don't get your point of view. The psychological continuity you are talking about would be preserved for the copy only. For the one who was copied, the continuity is not preserved: he simply dies and stops existing - would you agree with that?

    ReplyDelete
  108. Jacek and Massimo,

    > For the one who was copied, the continuity is not preserved: he simply dies and stops existing - would you agree with that?

    Yes and no, but that doesn't matter. You are not committing an ethically indefensible act because you are not causing the annihilation of a moral agent.

    What is your definition of "killing"? I don't share the opinion that killing means to interrupt a special kind of spatio-temporal continuity.

    If I were going to burn the last copy of a book, then in my opinion I would be doing something unethical, or at least distasteful. But if I were going to make an additional copy first, then burning the other copy would cease to be problematic.

    You might argue that a human being is more like an original work of art and that a copy would be a less valuable imitation. I don't deny you the right to assign that value to yourself but I don't share this assessment of value regarding my own identity.

    ReplyDelete
  109. Alexander, I find something deeply disturbing in the comparison you just made, and I do think that killing a human being on the grounds that there is a copy available is as unethical as it gets.

    ReplyDelete
  110. Massimo, you are right to say that choosing to get shot in such a scenario would be committing suicide. Thankfully, that scenario would never happen, because a successful upload/transfer procedure would not allow two separate identities to be conscious at the same time - that would be a copy procedure. Continuity of self and divergence are critical details, and such dire scenarios as yours are to be avoided.

    I wonder, what criticisms do you have of Daniel Dennett's 'Where am I?' scenario?

    ReplyDelete
  111. FWIW, I consider myself a transhumanist and a believer in 'uploading' but agree with you, Massimo, that Alexander is horribly wrong. Conscious agents are not books; the last copy is just as aware, and experiences consciousness just as much, as the first. Anytime you instantiate a sentience - whether copied from a human, created ab initio, or left over from an upload procedure gone awry - you must treat that agent with the proper ethical stance you would extend to any other.

    ReplyDelete
  112. nonzero,

    it would take a separate post to deal with Dennett. I actually disagree with him on a surprising number of issues (e.g., memetics), despite the fact that I always enjoy reading him, and that he was instrumental in getting me my new job (he wrote one of the letters of recommendation on my behalf to CUNY).

    Anyway, I'm a bit confused by what you wrote:

    > thankfully that scenario would never happen because a successful upload/transfer procedure would not allow for two separate identities to be conscious at the same time <

    But whether they are conscious at the same time or not doesn't make a difference, does it? If at the same time then, as you say, that's copying; but if sequentially, that's still suicide/homicide of the original, no?

    ReplyDelete
  113. Quotes by Massimo:

    "singularitarians claim that mental activity can be completely decoupled from its physical substrate, which is a form of dualism."

    Seems like the word "form" is doing a lot of work for you here. Isn't dualism the belief that there are separate realms of the Mental and the Physical? This has got to be a big stretch.

    "I say that we have no evidence whatsoever of this, and that to the contrary everything we know about biological organisms tells us that everything they do - including thinking - depends on particular substrates arranged in particular ways."

    I don't think anyone denies that your brain produces the thoughts it does because its various parts are composed and entwined in certain ways. But it's a fair question: can we pinpoint a conceptual framework created by that organ and view it abstractly, theoretically disregarding how it was created?

    To me, it seems like you're saying "you can't detach Algebraic Topology from the mathematicians that create it." You certainly can. The creator is not the created; "The Love Song of J. Alfred Prufrock" is not T.S. Eliot; "Eleanor Rigby" is not Paul McCartney; etc.

    "Change the arrangements OR the substrates (and therefore their physical-chemical characteristics) and instead of thinking you likely get a dead organism."

    You're once again stating the obvious as if it pertained to the question of simulation. Nobody thinks you can put a bullet through my head and it will work the same way. The question is simply, could we have a simulation of the human mind that mimics its thought processes?

    ReplyDelete
  114. If such mimicry could be perfected it would amount to the most perfect application of deception humanly imaginable. But no more than that.

    ReplyDelete
  115. >'But whether they are conscious at the same or not doesn't make a difference, does it? If same time then, as you say, that's copying; but if sequentially that's still suicide/homicide of the original, no?'

    Based on continuity of identity, if no conscious experience occurs between the original going unconscious and the copy regaining consciousness, then I don't see that as suicide/homicide. If, however, the original becomes conscious at any time after the copy has also become conscious then a divergence has occurred and the two agents should be treated as separate individuals.

    Forgetting about uploads for a moment, imagine incrementally stepping through time and observing the matter comprising yourself changing from one state to the next state and so on. At each moment your 'self' is lost and a new, though incredibly similar, self is created as a result of matter dynamically evolving forward through time. We don't mourn the deaths of our countless past selves at each timestep of the universe, because our minds persist. So then why should we treat state changes that happen to consist of patterns migrating across substrates any differently?

    ReplyDelete
  116. Ritchie,

    > Isn't dualism the belief that there are separate realms of the Mental and the Physical? This has got to be a big stretch. <

    That's Cartesian dualism, but there are other kinds. Chalmers himself, a singularitarian, admits that his thinking in philosophy of mind amounts to a type of dualism.

    > it's a fair question: can we pinpoint a conceptual framework created by that organ and view it abstractly, theoretically disregarding how it was created? <

    Nobody denies that we can abstract the theoretical properties of anything, including photosynthesis. The claim is that abstracting the theoretical properties is not at all the same thing as duplicating, uploading, etc. a functional version of the thing.

    > The question is simply, could we have a simulation of the human mind that mimics its thought processes? <

    That's an open question, and if by thoughts you actually mean human-like thoughts - e.g., including qualia - I'm betting no. And more importantly, the burden of proof seems to me to be squarely on the other side.

    nonzero,

    > if no conscious experience occurs between the original going unconscious and the copy regaining consciousness, then I don't see that as suicide/homicide <

    You may not see it as suicide, but to me you just put someone to sleep and then killed them, as even Alexander agreed.

    > Forgetting about uploads for a moment, imagine incrementally stepping through time and observing the matter comprising yourself changing from one state to the next state and so on <

    This is a standard singularitarian argument, but it won't do. We don't need to imagine anything; you are describing a standard and well understood biological process. The relevant questions are: (a) can we substitute silicon chips for neurons? I wager the answer is no, because that would kill the person. (b) If we somehow reconstitute the same arrangements of atoms of the same type somewhere else, is that thing still me? Again, no, it's a copy that will begin to diverge as soon as it gains consciousness.

    ReplyDelete
  117. I see we are at an impasse due to two holdups of yours. The first is your misunderstanding of, or lack of appreciation for, the implications of the computational/informational reality of our universe. The second is your inability to override your intuitions regarding personal identity and what constitutes a 'self'.

    The first problem stems from, as I see it, not understanding or appreciating the idea that all computable systems (which, as far as we know, include all physical processes in our universe - single brains, not to mention the universe itself) can be computed using any Turing-complete system with sufficient time and memory. Several years ago the tobacco mosaic virus was simulated and run for 50 ns; this was in effect an upload of the virus. Scaling that up to a brain/body/environment is a massive undertaking, but the theory is sound and proven. Let's take the most direct case of emulating your brain atom by atom, for conceptual simplicity's sake. I don't see why it is hard to comprehend that you can capture all the relevant dynamics of each relevant physical structure comprising your brain at a particular point in time, represent that information on a computer regardless of substrate, and have that system evolve through time according to the same next-state transitions that would have occurred were they to have evolved in the 'real' world. This is an 'upload' and it is not magic.
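
    (To make "substrate independence" concrete, here is a minimal toy sketch - emphatically not a brain model, just an illustration of the computational point. It evolves Rule 110, a cellular automaton known to be Turing-complete, on two different "substrates": a list of bits and a set of live cell indices. The trajectories come out identical, because the dynamics live in the abstract transition rule, not in the representation.)

```python
# Rule 110 lookup table: (left, center, right) neighborhood -> next cell state.
RULE110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
           (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def step_list(cells):
    """One update on substrate A: a plain list of bits (wrap-around edges)."""
    n = len(cells)
    return [RULE110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def step_set(live, n):
    """The same abstract rule on substrate B: a set of live cell indices."""
    return {i for i in range(n)
            if RULE110[(int((i - 1) % n in live),
                        int(i in live),
                        int((i + 1) % n in live))]}

n = 64
cells = [0] * n
cells[n // 2] = 1   # substrate A, one live cell in the middle
live = {n // 2}     # substrate B, the same initial state

for _ in range(100):
    cells, live = step_list(cells), step_set(live, n)

# Identical trajectories: the rule, not the representation, fixes the dynamics.
assert {i for i, c in enumerate(cells) if c} == live
```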

    The second holdup is, and I acknowledge this as well, much more counter-intuitive and harder to internalize. I think the Buddhist notion of 'No Self' is a good representation (and if you're incredulous of eastern religious thought then read David Hume's 'bundle theory' or Parfit's account of personal identity as they are all describing the same thing). Our 'selves' are fluctuating patterns of matter experiencing consistent conscious events. Science has come to understand that these patterns are represented by atoms assembled into brains and bodies in environments all governed by physical laws. What we care about is the consistent evolution of our conscious experience, not necessarily the patterns of matter representing those experiences (unless those patterns of matter determine the experiences, but taking an informational/computational perspective shows that different assemblages of matter can stand in for others and still maintain the same patterns). You wake up after being anesthetized and whether you are (mostly) the same biological body or a 'copy' switched without your knowledge (with the original being destroyed / recycled) makes absolutely no difference to your 'No Self' self.

    Alas, I may have reached my limits in articulating my side of the argument, and if it is still inadequate then I'll conclude by recommending that you read more information theory and computer science, particularly where they are used in physics (i.e., the Church-Turing-Deutsch principle), and perhaps meditate on Anatta.

    ReplyDelete
  118. "Yes, technology will advance, at least for some time. It is possible that we will develop AI, but far from sure, and we don't know what form that will take. AI may or may not be able to develop more sophisticated AI, and very likely there are limits, so the "so on and so forth" is eventually going to stop." Er, isn’t this a prediction? Can you explain how you know this is true and why saying it does not make you a ‘crank’?

    I have no interest in consciousness uploading or backup. I am agnostic on a near-term technological singularity. But I have to say, this blog post, masquerading as a serious debate, is nothing more than a poorly-researched character assassination that cherry-picks a few of Kurzweil’s thoughts, flippantly ‘analyzes’ them out of context, and does it all with an atmosphere of smug self-satisfaction that provides a sadly humorous counterpoint to Kurzweil’s most obvious character flaws. This irritates me, as someone who is looking hard for serious debate on the real issues we are confronted with through the effects of exponentially increasing price/performance in technology.

    I think Kurzweil is wrong about some things. He advocated microwater in one of his health books (which, regardless of who it was coauthored by, has little or nothing to do with homeopathy – clearly you haven’t bothered to even skim one of the books before deciding its very existence is further proof of Kurzweil’s being a ‘crank’) and he personally consumes large quantities of supplements on a daily basis. His advocacy seems to me to be based on possibly rational theories, but theories that are largely untested. Having a wrong theory, however, hardly makes someone a crank.

    How do you start the attack? First you attempt to minimize Kurzweil’s accomplishments, for example, listing the CCD flatbed scanner as “developed” by someone else. Would you care to share your source on this tidbit, or what it even means? The Inventor’s Hall of Fame lists the invention as a team effort of which Kurzweil was a part – not surprising, as it was his company, and presumably he was directing its resources. This team also created the first OCR software. You then go on to point out that he often uses his family name to brand his companies. You try to make it a comic character flaw in the man – then blithely end by saying there is “nothing wrong” with that, even though you clearly are trying to undermine his credibility by pointing it out! All this does little to elucidate Kurzweil’s crank-ness, but it certainly makes your own, obviously emotional, agenda clear.

    People also make fun of Donald Trump for splashing his name about everywhere. Few would dispute that Trump is a narcissistic egomaniac – yet his reason for doing this is perfectly rational: he makes huge sums of money merely by renting out his name. Kurzweil may have similar reasons, or he may have a fond remembrance of his deceased father and wish to honor him, or he may simply have different cultural values than you. You acknowledge this, while simultaneously attacking him for it. What does this say about you?

    All this before you even begin your ‘real’ argument. You start that off by dismissing the entire enterprise of futurism: “considering the uncanny ability of futurists to get things spectacularly and consistently wrong.” Of course that’s no argument, any more than someone arguing against the Wright brothers with the phrase, “considering the uncanny ability of people attempting to fly to crash spectacularly and consistently.” Previous failures – predictions made with incorrect or absent underlying theory – have no bearing.

    ReplyDelete
  119. Next, you cherry-pick a SINGLE prediction of Kurzweil’s, and hold it up as ludicrously wrong. Except the prediction he made is largely CORRECT! Microprocessors are clipped onto shirts in the form of iPods, and are routinely embedded in clothing, appliances, and automobiles: they are invisible and ubiquitous. Displays that write to the retina are indeed in service in the military. Interaction with virtual personalities is a debatable milestone, and Kurzweil himself agrees that that prediction was only “partially correct”. (http://www.kurzweilai.net/predictions/download.php).

    Your very considered response to these predictions and outcomes: “Oops.” Oops is right. As in, oops, maybe next time you ought to spend five minutes thinking about something before passing judgment – I mean, this is supposed to be an intellectual debate, isn’t it?

    Beyond that, you make no mention of Kurzweil’s unambiguously correct predictions – he famously predicted when a computer would defeat a grandmaster in chess, he correctly predicted the rise of the internet, etc. etc. The fact is, Kurzweil has criticized his own predictions in a far more rigorous fashion than you have bothered to do with even one of them.

    You then go on with some more thinly veiled ad hominem attacks on his productivity and desire to defend his beliefs. These things are fine for Massimo, but for Ray they are just further evidence of his craziness. There is a general psychological principle at work here.

    You then “focus on a single detailed essay he wrote.” By this, I assume you mean “the only piece of Kurzweil’s writing that I actually was able to stomach reading before penning this cheap hit piece” but I cannot test this assumption.

    What’s next? Ah, yes, mentioning the Law of Accelerating Returns, and then somehow using the fact that he did not attach his name to the theory as further evidence of his “irritating pomposity.” I will take it as further evidence of your own.

    However, after mentioning the central theory of Kurzweil’s vision of the future, you studiously ignore it! In fact, I cannot tell from what you’ve written if you even know what it is. However, in order to qualify as a crank, I would think that it would be a requirement for you to undermine his CENTRAL theory. But you make no attempt to describe it, debate it, or from what I can tell, understand it.

    I too am unimpressed with Kurzweil’s efforts to pull the Law of Accelerating Returns back into previous epochs. But you don’t give him a fair hearing here either. You complain about his charts. Fine. But then you make no mention of his multitudinous charts that DO contain a great deal of data. More cherry-picking.

    You complain that he writes bad science fiction. Qualifier aside, does it not matter to you that Kurzweil makes it clear that he is writing fiction? His books are interspersed with fictional dialogues with future people. They are also filled with metaphorical writing that attempts to convey scale and ubiquity. Saying the universe “wakes up” is no more ridiculous than Stephen Hawking talking about knowing the mind of God, even though Hawking is clearly an atheist.

    ReplyDelete
  120. You go on: “First, it is highly questionable that one can even measure ‘technological change’ on a coherent uniform scale. Yes, we can plot the rate of, say, increase in microprocessor speed, but that is but one aspect of ‘technological change.’” Umm, am I missing something? If we are able to measure one aspect of technological change, then can we not measure many aspects? Of course we can – Kurzweil has done just that. One might similarly impugn a biologist who attempts to measure bird population through sampling. “That’s just one percent of birds in this forest!”

    Again, you sidestep the main thrust of Kurzweil’s argument. In fact, you never touch upon his main theory, which is pretty simple: we use one generation of tools to create the next, and when a technology becomes an information technology, that rate of creation (i.e., the improvement in price/performance) becomes exponential. He has voluminous data to support this theory. You don’t attempt to confront this idea, let alone bring any data of your own to contradict him. I agree the attempts to stretch this theory back to the dawn of the universe leave much to be desired, but that is hardly enough ammunition to make Kurzweil into a crank. Kurzweil’s central theory is predictive and testable. And he has used it to make many accurate forecasts (much as Gordon Moore was able to make limited projections – not all futurists are by definition quacks).
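
    (To see why small, steady doubling times produce such dramatic forecasts, here is a minimal sketch of the compounding arithmetic. The two-year doubling time is an illustrative, Moore's-law-style assumption, not a figure taken from Kurzweil’s own charts.)

```python
# Compounding behind "exponential price/performance": a quantity that
# doubles every DOUBLING_TIME_YEARS grows as 2 ** (t / DOUBLING_TIME_YEARS).
DOUBLING_TIME_YEARS = 2.0  # illustrative assumption, not Kurzweil's figure

def improvement(years: float) -> float:
    """Price/performance multiplier after `years` of steady doubling."""
    return 2 ** (years / DOUBLING_TIME_YEARS)

for years in (10, 20, 40):
    print(f"after {years} years: ~{improvement(years):,.0f}x")
# after 10 years: ~32x
# after 20 years: ~1,024x
# after 40 years: ~1,048,576x
```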

    “No, dude, it aint’ just speculative, it would amount to a major violation of a law of nature.” Here, you are sidestepping Kurzweil’s attempt at humor (downplaying the enormity of FTL travel) with a dumber attempt at humor of your own. But I’m glad that you, without even a degree in physics, have settled the question of whether FTL travel is possible. Because of course, no such theory in the history of physics has ever been overturned. Irritating pomposity, indeed. You mention needing extraordinary evidence in support - um, for WHAT exactly? Kurzweil has never, ever claimed that FTL travel is possible – he has only ever said that he does not know whether it is possible or not! Surely that is a reasonable position to take? (You mention Kurzweil’s unintentional irony while remaining blithely unaware of your own!)

    ReplyDelete
  121. There's too much for me to read here. But what I did read (including the blog post, XiXiDu's early comments, and Massimo's responses to him), along with Massimo's blog post about David Chalmers, makes me wonder why Heidegger isn't coming into the conversation. Fundamentally, it seems, many of the issues here are about what it means to BE a human being.

    ReplyDelete
  122. I have yet to hear the most obvious argument against Kurzweil's Singularity theory - that this is all supposed to happen within the constraints of a finite system with finite and rapidly depleting energy resources.

    Kurzweil is fond of citing geometric technological progression as an argument for his position, but does not take into consideration that the resource requirements necessary to accomplish it are simply untenable.

    Yes, it is true that within the last few hundred years we have seen an explosion of technology, and that it has grown exponentially, but that has been due to a seemingly endless supply of cheap energy in the form of fossil fuels which, though few seem willing to face it, is running out.

    If Kurzweil were to realize his dream of immortality it would be a short-lived dream at best.

    There are currently close to seven billion people on the planet. At a modest growth rate of seven percent per year, that number will double in ten years to fourteen billion. Without fossil fuels, the sustainable carrying capacity of the earth is about three billion.
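
    (A quick check of the doubling arithmetic, via the familiar rule of 70. One caveat worth flagging: the seven percent figure is this comment's own premise; actual world population growth around 2011 was closer to 1.1% per year, which gives a doubling time of roughly 63 years rather than 10.)

```python
# Doubling time at a constant annual growth rate: ln(2) / ln(1 + rate).
import math

def doubling_time(annual_rate: float) -> float:
    """Years for a quantity to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_rate)

print(doubling_time(0.07))   # ~10.2 years, as the comment says
print(doubling_time(0.011))  # ~63.4 years at the actual ~1.1% rate
```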

    It's time that everyone, including Ray Kurzweil, woke up.

    ReplyDelete
  123. With all due respect to Massimo, I don't think he makes a lot of sense in his debate with Yudkowsky.

    1. Massimo doesn't claim that the Church-Turing thesis is false. That means he should be open to the possibility that it is possible to create a simulated human brain which behaves exactly the same way a human brain does in terms of inputs and outputs. This simulated brain would say stuff like "I think therefore I am", just like a biological human brain would.

    2. According to Massimo, even though this simulated brain says stuff like "I think therefore I am", it is not conscious.

    3. So the simulated brain lacks something that makes it conscious. But whatever it lacks is definitely not what makes a human brain say stuff like "I think therefore I am": the simulated brain is also saying the same thing, and writing philosophy papers about it.

    4. So whatever it is that makes humans conscious is not the brain process that makes them say "I think therefore I am". Remove that part, and you've still got a human-looking thing that says "I think therefore I am", and writes philosophy papers about it, just like the robot.

    5. That last guy is a philosophical zombie.

    ReplyDelete