Friday, July 30, 2010

Julia's Picks

By Julia Galef
* A recording of the panel I moderated at the Northeast Conference on Science and Skepticism (NECSS) this spring was featured on this week's Scientific American podcast. My topic was "Arguing with Non-Skeptics," and the panel comprised an all-star cast of James Randi, DJ Grothe, George Hrab and Steve Mirsky. (Part I and Part II)
* I'll be going to this year's Singularity Summit in San Francisco in a couple of weeks. Whatever you think of their conclusions, the singularitarian and transhumanist communities include some very smart and interesting people in the fields of physics, neuroscience and artificial intelligence. Registration is still open.
* What would the political news look like if it were written by academics? (This is a pretty good illustration of why I don't read daily news.)
* After my meditation on the "self" and its potentially illusory nature last week, I particularly enjoyed Daniel Dennett's take on the issue, comparing the self to the center of gravity. 
* This thoughtful New York magazine article discusses the research showing that parents are less happy, on a moment-to-moment basis, than non-parents. The research is interesting in its own right, but the main reason I liked the article is that it raises the question of what kind of utility we should be pursuing, given that many parents argue that moment-to-moment happiness is only one consideration, and one that doesn't capture the sense of meaning their children give them.
* I've been relishing the Bloggingheads archives lately. In this interview, Tyler Cowen of Marginal Revolution challenges philosopher Peter Singer on the moral obligation to give. I admit I was surprised Singer didn't have better answers on whether free immigration policy is ethically required, and whether there is any utilitarian problem with eating wild-caught fish.

28 comments:

  1. I'll be curious to see what you think of the singularitarians (a word that somehow - fittingly - reminds me of a religious cult). While I agree that some of 'em are pretty smart people, the one I briefly interacted with at TAM (he was often after you) seemed weird, and I'm being charitable...

  2. "They seem weird" is not a rebuttal of their arguments, Massimo! :-P

  3. I wasn't making an argument. Nor, I thought, were you when you said that some of 'em are smart, right?

  4. Right, "smart and interesting" wasn't meant to imply any position on the accuracy of their beliefs. It was just meant to imply that it's worth one's time to listen to them.

    So your point about weirdness was just an aesthetic concern? I can understand that. But on that level, I actually find weirdness to be, at minimum, not a negative, and frequently a positive trait. I get bored easily.

  5. No, I did mean it as a negative. The individual in question, for instance, wasn't just weird in terms of social behavior (which, frankly, didn't seem to me to be of the fun sort at all), but in terms of his convoluted and grossly flawed arguments for why it is important to have a Singularity Institute. We really should do a podcast on this, perhaps when you get back from the meeting?

  6. Yes, we should definitely devote an episode to this! But until I've done my homework, I am (as you can probably tell) deliberately trying not to take a position on the subject.

  7. Julia, I just read the Dennett text you mention - really fascinating stuff. Thanks for the link.

  8. Re singularity:

    Much of what I read from the proponents has a downright eschatological flair to it; their expectations do not appear very reasonable and well-founded, and "rapture of the geeks" sounds about right.

    Of course it is very probable that technological progress will lead to amazing new developments (if our society does not collapse first from the strain of having a population of about ten times what the planet can sustain in the long term), but the point is that futurologists throughout history have gotten it wrong because they simply extrapolated the trends of their own lifetime and failed to anticipate the next step that actually occurred.

    Discovery of radioactivity: soon we will all have radon ovens in our living rooms! Invention of rockets: all international travel will be by rocket by the 1980s! Invention of computers: auto-piloted cars by the 1980s! (But the internet was not anticipated.) And now it is nanotech, AI and uploading your soul into a computer. Is any of that even technically feasible or possible?

    No issue, of course, if you manage to believe in a quasi-magic event in which a sufficiently powerful AI will somehow solve all technical problems in a day without having to do years-long lab experiments.

  9. I don't know why it is counter-intuitive that parents are less happy on a moment-to-moment basis. It seems pretty obvious that this is true in the short term (despite the benefits): there is sleep deprivation, less leisure time, less time with your husband/wife, and more responsibility (= more stress). These studies do not take into account happiness at other stages of life. Most studies on this topic find that people are happiest in their 50s, 60s and 70s (health being the major confounding variable as age increases). I am curious to see the happiness of people with no children as they age. Children give their parents a perspective on life that cannot be acquired any other way. I imagine that some of these benefits manifest later in life.

  10. The "what if political scientists..." article was one of the best I've read in a while. A great mockery of the irrelevant babble that makes up most political journalism.

  11. Massimo, I don't self-identify as a singularitarian, but I will defend

    (1) the possibility of AI (the brain is not magical);
    (2) the thesis that, if an AI can be made that is intelligent enough to *improve its own design* in a positive feedback loop, the future begins to look radically different.

    I am very skeptical of claims that recursively self-improving AI will happen *soon,* but when and if it does, we are in for one hell of a ride. (A toy sketch of the feedback loop follows below.)

    (NB: This is the Vernor Vinge singularity. The Ray Kurzweil singularity is indeed pretty badly thought out.)
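
    Here is the toy sketch promised above - Python, with entirely made-up growth numbers; it is meant only to show the shape of the feedback-loop claim, not to predict anything:

        # Compare capability growth when improvements arrive at a fixed rate
        # vs. when each generation's gain scales with current capability.
        # All parameters are illustrative assumptions.

        def fixed_rate(capability, step, generations):
            """Engineers add roughly the same increment each generation."""
            for _ in range(generations):
                capability += step
            return capability

        def self_improving(capability, gain, generations):
            """Each generation's improvement scales with what it already has."""
            for _ in range(generations):
                capability += gain * capability  # feedback: smarter -> faster gains
            return capability

        print(fixed_rate(1.0, step=0.1, generations=50))      # linear: 6.0
        print(self_improving(1.0, gain=0.1, generations=50))  # compound: ~117.4

    Same per-step effort, wildly different trajectories - that is the entire intuition behind point (2).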

    @Mintman: "And now it is nanotech, AI and uploading your soul into a computer. Is any of that even technically feasible or possible?"

    Nanotech is starting to happen now, although it has become (to its detriment) too much of a buzzword for comfort.

    As for AI and uploading (of mind, not "soul") - it more or less follows from a non-supernatural view of the brain that both these things should be possible. To wit:

    (1) Humans ARE artificial intelligence. We are sophisticated AIs, made out of meat by a sensationally stupid and wasteful algorithm called evolution. So either Cartesian dualism is true, or AI has *already happened* - which is about the best feasibility demonstration one could hope for. I rest my case.

    (2) Uploading refers to moving human minds from an organic substrate to some other substrate like a computer. It may not be *feasible*, but the only way you could argue it's *impossible* is to posit that carbon chemistry has qualities of Mind that silicon (say) does not. That seems awfully implausible and, again, borderline magical.

    Nature doesn't know the difference between "organic" and "mechanical." How is it that even card-carrying materialists aren't grokking it: we're machines made out of meat!

    One other thing. Massimo wrote:
    "I'll be curious to see what you think of the singularitarians (a word that somehow - fittingly - reminds me of a religious cult)."

    One should be careful whom one tars with this brush, as it can be applied with equal ease to almost any group of like-minded individuals. What specific cultish attributes do they have? Is it a sign of motivated cognition that what followed was a generalization about singularitarians based on one anecdotal example?

    More broadly: does skepticism mean dismissing all ideas that are not mainstream? I think there is a danger here of skeptical tribalism trumping rationality. Contra Michael Shermer, not everything that sounds weird is necessarily bunk.

  12. @ccbowers:

    My understanding of hedonic psychology is that there is a real split between moment-to-moment, instantaneous perceptions of well-being, and perceptions of well-being in general.

    For example, take two medical procedures: one is very painful for 20 seconds and then is over; the other is very painful for 30 seconds, then ramps down to a little painful for another 10 seconds, then is over.

    People consistently prefer the 2nd option, apparently because it makes "a better story," i.e., the pain was bad but got better - even though, if you integrate the total instantaneous pain, it is far higher in the 2nd case.

    In the same way, people with kids may report less moment-to-moment happiness, but their "tell-me-a-story" happiness might be higher. There's really no fact of the matter about which of these is the right one to listen to.
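
    To make the arithmetic explicit, here is a quick sketch (Python; the intensities 10 and 2 are made-up stand-ins for "very painful" and "a little painful"):

        # Score the two procedures from the example above in both ways.
        # Assumed intensities: 10 = very painful, 2 = a little painful.

        proc_a = [10] * 20             # very painful for 20 seconds
        proc_b = [10] * 30 + [2] * 10  # very painful 30 s, then milder 10 s

        def integrated(pain):
            """Total moment-to-moment (experienced) pain."""
            return sum(pain)

        def peak_end(pain):
            """Kahneman-style remembered pain: average of worst and final moment."""
            return (max(pain) + pain[-1]) / 2

        print(integrated(proc_a), integrated(proc_b))  # 200 vs 320: B hurts more overall
        print(peak_end(proc_a), peak_end(proc_b))      # 10.0 vs 6.0: B is remembered as better

    The two scoring functions simply come apart, which is exactly the split between the two kinds of well-being.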

  13. ian, I don't think there is anything magic about the brain, but the strong AI program has been an abysmal failure. Time to try something else...

    Yes, one can imagine self-evolving machines that at some point, in some sense, will "surpass" humans. Not any time soon, and not the route the singularitarians envision, in my view.

  14. ianpollock:

    Probably somewhat in opposition to Massimo, I agree with you that there cannot be anything special about life as far as consciousness, intelligence, self-awareness etc. go, because that could indeed only be argued if you accept dualism.

    But!

    Whatever you get when you emulate a human mind on a computer will still never be a human personality, because, and it is as simple as that, we are not a mind in our body, we are our body. That also follows from monism.

    In addition, the idea of many singularitarians that "you" can be uploaded and thus achieve immortality only makes sense if you think that there is some your-soul-thingy that enters the machine. What would (if it were technically possible) really happen is that a copy of your memories would be stored while you still live on in your aging, fragile body. No, if there ever is immortality, it will come from biology, not informatics, and even there I wonder if we would not become insane if our brain had hundreds of years to accumulate tics, neuroses and traumata...

    As for nanotech, I recently read a quite convincing article which argued that it fails mostly because it tries to miniaturize mechanics (cogwheels etc.) that will not work at a scale where friction, van der Waals forces and spontaneous shifts into energetically lower molecular configurations dominate. As a biologist I would perhaps go even further: those nanotech machines that are feasible have evolved already; we call them enzymes.

    And as for a self-improving AI, I have already indicated my conceptual issue with that. It assumes that advances in science and engineering can be made by simply thinking about the problem or simulating. Presumably we will find that even a superintelligent being would have to do tiresome experiments to verify its findings, and that throws a wrench into the imagined acceleration of progress.

  15. @Mintman:
    Thanks for the response! I love this topic, as you can no doubt tell.

    "Whatever you get when you emulate a human mind on a computer will still never be a human personality, because, and it is as simple as that, we are not a mind in our body, we are our body. That also follows from monism. In addition, the idea of many singularitarians that "you" can be uploaded and thus achieve immortality only makes sense if you think that there is some your-soul-thingy that enters the machine."

    I find it very amusing that, in our disagreement, we both consider the other to be dualist.

    I buy into the position called functionalism, which is a subset of monism. Functionalism sees mind as arising exclusively from patterns of matter and energy flow.

    Perhaps I could give you a thought experiment to generate the same intuition in you.

    I presume you agree that there is nothing in principle stopping a good engineer from making an artificial neuron that is made of silicon but outputs the same electrical signals, given specific inputs? In essence, a simulated neuron. (Making new axon connections would be damn hard, but that's an engineering problem, not a philosophical one.)

    Suppose you have a disease involving neural degeneration. We crack open your skull and replace one neuron with our silicon pseudo-neuron.
    Then another.
    Then another.
    And we keep going until all your 100-billion-odd brain cells are non-biological.

    Obviously, you would not lose your consciousness with just one neuron replaced. How many would it take before you were "just an automaton" or dead to the world or whatever? What would the process feel like?

    I find it intuitively obvious that, actually, this process would make no difference at all to my consciousness. So, absent evidence to contradict my intuition, I conclude that the brain needn't be biological - which makes sense, given that Nature doesn't even know what "biological" means.

    Do you really have radically different intuitions about this?
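
    (For what it's worth, the "simulated neuron" in step one is not exotic; computational neuroscience routinely works with simple input-to-spike models. A minimal leaky integrate-and-fire sketch in Python, with arbitrary parameters - the point being that only the input/output mapping matters to the rest of the brain, not the substrate:)

        # Minimal leaky integrate-and-fire neuron: voltage decays toward rest,
        # accumulates weighted input, and emits a spike at threshold.
        # All parameters are arbitrary illustrative values.

        def lif_neuron(inputs, v_rest=0.0, threshold=1.0, leak=0.9, weight=0.3):
            v = v_rest
            spikes = []
            for current in inputs:
                v = leak * (v - v_rest) + v_rest + weight * current  # decay + input
                if v >= threshold:
                    spikes.append(1)  # fire
                    v = v_rest        # reset after spiking
                else:
                    spikes.append(0)
            return spikes

        print(lif_neuron([1, 1, 1, 0, 0, 1, 1, 1, 1, 0]))  # spikes once input accumulates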

    "What would (if it were technically possible) really happen is that a copy of your memories would be stored while you still live on in your aging, fragile body."

    Only if the simulation were static - put in visual and auditory inputs and outputs and you've got a living person. I also don't think there would be any fact of the matter about whether the copy or the original was "you." They would both be.

    "even there I wonder if we would not become insane if our brain had hundreds of years to accumulate tics, neuroses and traumata..."

    I'd love to have the choice. There's always suicide if your future life is unbearable.

    "As a biologist I would perhaps go even further: those nanotech machines that are feasible have evolved already; we call them enzymes."

    I find this very implausible, to put it mildly. Evolution is about the most stupid, wasteful optimization/satisficing process in the galaxy. The only advantages it has are the billion-year timescales it's enjoyed, and the absence of necessity for planning.

    To suppose that this brute-force monster of an algorithm has tried all the possible avenues of nanotech... that's like assuming that erosion has explored all the possibilities of stonemasonry.

    Just for a start, human nanotech designers are not bound, as evolution is, by "irreducible complexity" constraints. Our nanotech doesn't have to be functional at all stages of its construction.

    "And as for a self-improving AI, I have already indicated my conceptual issue with that. It assumes that advances in science and engineering can be made by simply thinking about the problem or simulating."

    Interesting, but there is no rule against AI being given a physical body to do experiments. Even just with access to the internet, a smart AI can do a lot in the physical world, starting with opening a bank account and taking up quant finance. :)

  16. ianpollock and Mintman re: "we are our bodies" and the "soul thingy."

    It might be helpful to draw a distinction between selves as bodies and selves as an emergent property (which, I think, is what Ian is up to). This isn't Cartesian dualism, though one might argue that it constitutes a different sort of dualism (which, I think, is what Mintman is up to). The question then becomes: if we are an emergent property, will that property be duplicated when the pattern that caused it is repeated in a different material? Also known as: is flocking still flocking when simulated 3D birds do it? If the answer is yes, or possibly, I think we're back in Turing territory. If the answer is no, how can a self ever be transferred between materials?
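
    (An aside: the flocking example can be made literal. Reynolds-style "boids" reproduce flock behavior from a few local rules, and nothing in the rules cares what the agents are made of. A bare-bones 2D sketch in Python, with made-up coefficients:)

        import random

        # Bare-bones Reynolds-style boids: flock-like motion emerges from
        # cohesion, alignment and separation. Coefficients are arbitrary.

        class Boid:
            def __init__(self):
                self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
                self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

        def step(boids):
            n = len(boids)
            cx = sum(b.x for b in boids) / n    # flock center (cohesion target)
            cy = sum(b.y for b in boids) / n
            avx = sum(b.vx for b in boids) / n  # average heading (alignment)
            avy = sum(b.vy for b in boids) / n
            for b in boids:
                b.vx += 0.01 * (cx - b.x) + 0.05 * (avx - b.vx)
                b.vy += 0.01 * (cy - b.y) + 0.05 * (avy - b.vy)
                for o in boids:  # separation: repel very close neighbors
                    if o is not b and abs(o.x - b.x) + abs(o.y - b.y) < 2:
                        b.vx += 0.02 * (b.x - o.x)
                        b.vy += 0.02 * (b.y - o.y)
            for b in boids:
                b.x += b.vx
                b.y += b.vy

        flock = [Boid() for _ in range(30)]
        for _ in range(100):
            step(flock)

    If the pattern still counts as flocking when the "birds" are data structures, the emergent-property reading of selves at least gets off the ground.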

  17. Ian:

    Oh yes, this is an interesting discussion to be had. And it is not as if I am extremely pessimistic about technological progress; I just think that it is futile to claim that the future will surely involve a technological singularity with self-improving super-AI when past futurologists got it wrong every single time. And especially when you can so obviously see the wishful thinking between the lines: "and then I am going to live forever!"

    I find it very amusing that, in our disagreement, we both consider the other to be dualist.

    I don't necessarily claim that you are, but that this seems to be the logic of the transhumanist hopes.

    Perhaps I could give you a thought experiment to generate the same intuition in you.

    As a thought experiment, that is for all I know correct - although most likely never technologically possible in practice, as (1) I can hardly imagine surgical intrusions on such a grand scale leaving the brain intact (which is also a big issue for the whole uploading-your-mind idea), and (2) it is hard to even imagine a non-biological neuron with full functionality, including the formation of new synapses, which you mentioned yourself and which is an important part of memory and learning. Again, the best working concept of a neuron will presumably be the one that evolved, and an artificial one that could replicate its functionality would be so close to biological and so far from computational that it becomes difficult to say whether this would still be within the vision of the usually informatics-centred singularitarians.

    But all that aside: the important point about your thought experiment is that this brain would still reside in my body and be fed the same sensory input all the time. If my mind were uploaded into a machine and supposed to continue being human, then my whole body would have to be emulated around it. If not, then I am not human anymore (and a faithfully replicated mind would presumably go mad under those circumstances). In the first case, the question is what I would gain from it - wasn't it the limitations of my body that I wanted to leave behind? In the second, the question is whether it is really still me who is uploaded, and again, if not: what do I gain from it?

    "even there I wonder if we would not become insane if our brain had hundreds of years to accumulate tics, neuroses and traumata..." - I'd love to have the choice. There's always suicide if your future life is unbearable.

    The sentiment is understandable, although I wonder how suicide would work for 20 FantastillioBytes on a hard drive somewhere; what if the sysop does not allow it? Anyway, biological immortality would be fatal to us as a species, as we would stop evolving. In my eyes, we are morally obliged to make way for the next generation so that we have any future at all. Immortality comes through our children, and through the positive or negative contributions we make to our culture while we are alive.

  18. I find this very implausible, to put it mildly. Evolution is about the most stupid, wasteful optimization/satisficing process in the galaxy. The only advantages it has are the billion-year timescales it's enjoyed, and the absence of necessity for planning.

    Of course it is terribly wasteful from the perspective of the individual born with a genetic defect, but surely there is a reason why engineers began using evolutionary algorithms about two decades ago, in some cases finding technical solutions they would never have arrived at through planning alone. Okay, nothing against improving on the process, but enzymes demonstrably work, while metal machines with cogwheels are optimized for a scale on which completely different forces dominate. So why not invest our money in designing better enzymes?
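
    (To make that concrete: the classic demonstration is a mutation-plus-selection loop of the kind Dawkins popularized. A minimal Python sketch, with an arbitrary target string and mutation rate:)

        import random
        import string

        # Minimal evolutionary algorithm: mutation plus selection climbs to a
        # target that blind random guessing would essentially never hit.
        # Target string and mutation rate are arbitrary illustrative choices.

        TARGET = "ENZYMES WORK"
        CHARS = string.ascii_uppercase + " "

        def fitness(s):
            return sum(a == b for a, b in zip(s, TARGET))

        def mutate(s, rate=0.05):
            return "".join(random.choice(CHARS) if random.random() < rate else c
                           for c in s)

        best = "".join(random.choice(CHARS) for _ in TARGET)  # random start
        generation = 0
        while fitness(best) < len(TARGET):
            offspring = [best] + [mutate(best) for _ in range(199)]  # keep parent
            best = max(offspring, key=fitness)                       # selection
            generation += 1
        print(generation, best)

    There is no planning anywhere in the loop, yet the solution typically arrives within a few dozen generations - which is exactly why engineers borrow the trick.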

    To suppose that this brute-force monster of an algorithm has tried all the possible avenues of nanotech... that's like assuming that erosion has explored all the possibilities of stonemasonry.

    Bad analogy, as erosion lacks mutation, recombination and selection. In fact, I believe that evolution is unique in nature as a problem-solving algorithm, and I would trust it to find the optimal solution for any problem given enough time - admittedly, and this is the strongest limitation, optimal only among the solutions possible under the constraints resulting from the evolutionary history of the lineage.

    The more important point is that not everything out of a sci-fi novel will turn out to be technically possible, just as Jules Verne's moon cannon will never be built because it would kill its passengers, and that is that. Some things are impossible now because we have not invented the right solution yet, but some are impossible in principle, and it may just be that cogwheels at the molecular level belong in that category. But hey, we have no moon cannon, yet we have invented rockets; we may never have nanobots circulating through our veins, but maybe we can develop a retrovirus that kills cancer. Certain solutions come with trade-offs, and the squishiness and short lifespan of the biological solution may turn out to be an entirely unavoidable trade-off for the ability to self-replicate and to work at the nanotech scale.

    Just for a start, human nanotech designers are not bound, as evolution is, by "irreducible complexity" constraints. Our nanotech doesn't have to be functional at all stages of its construction.

    Nor does evolution: the use of structures changes over time, modules of a previous structure are co-opted to build a new one, and scaffolding is removed later.

    Interesting, but there is no rule against AI being given a physical body to do experiments. Even just with access to the internet, a smart AI can do a lot in the physical world, starting with opening a bank account and taking up quant finance. :)

    Sure, but that is not the point. What I meant was that a central assumption of the singularity proposal is that advances in science and engineering would suddenly accelerate exponentially, and I doubt that you can do science and engineering without taking the time for careful experiments. What a super-AI could arguably speed up are areas where you do not need constant input of empirical data. Say, philosophy? Well, if you trust it to get things right, that is...

  19. "I agree with you that there cannot be anything special about life as far as consciousness, intelligence, self-awareness etc. go, because that could indeed only be argued if you accept dualism."

    I don't think this is necessarily true. I don't think dualism necessarily follows from the idea that life is required for consciousness. It may just be that consciousness is not possible any other way than the way it is currently done (not that I believe this). I really don't have a position on the topic, since I think it is way beyond our (or at least my) understanding at this point, but I think you are incorrectly limiting the range of perspectives people can have.

  20. I'm out of my depth here, but here is my singularity reassurance corollary: if creating an ultimate artificial intelligence is a really bad idea, then the penultimate artificial intelligence will not do it.

    About parenting:

    Jennifer Michael Hecht was pretty convincing in her critique of measuring "happiness". Nonetheless, I will venture to say that parenting is one of the best ego-busting koans imaginable. So, if you agree that the unexamined life is not worth living (or the unexamined self not worth being), then kids are the ticket. Not that you should have kids as a means to self-improvement, but still: if one of the best reasons not to kill yourself is your kids (one of Hecht's assertions that runs counter to her happiness agnosticism), then they may be the best answer to Camus' query.

  21. James:

    Possibly, but my question would still be, is the copy me? If it were perfect, it would think it is, but the real me would still remain behind in my body. Body-me would also think that it is me, grow old and wonder what use the whole undertaking was. It is not as if your consciousness would suddenly "see" both copies.

    I wonder whether this would be easier to express if I were a native speaker...

    Btw, to forestall certain witty rejoinders that may suggest themselves: http://zs1.smbc-comics.com/comics/20100512.gif

    ccbowers:

    But to that some would reply that you just have to postulate a sufficiently complicated computer. As far as our current understanding of consciousness goes, I do not see that it contains anything that could not be duplicated in a machine - basically you just need to be sophisticated enough to think higher-order thoughts. It is just that, however self-aware such a future machine would be, it would still never be a human because it would not be in the body of a human. I cannot even guess what would motivate it without all the hormone-based libido, pride, insecurity, ambition etc. that shape our characters. And I wonder what use it would be to duplicate those in a machine anyway.

  22. Mintman:

    Well, two things.

    The moment there is a body-me and a computer-me, they begin to diverge. In a sense, neither is the me at the time of the copy. But this doesn't seem to be what's really concerning you.

    What seems to be worrying you is, let's call it, the phenomenon of being. I, for example, have the sense right now that I am, well, me. I am experiencing that. You seem to be wondering if there are two senses of being at the moment of copying, or one, and if there are two senses (I presume, one possessed by the body-me and one by the computer-me), how can we say that a being has been copied?

    What I would propose is that the phenomenon of being is a property that emerges from the combination of brain structure and sensory experience. At the moment I am copied, there may be a computer and a body, each of which seems to house me, but there is only one me, because an emergent property does not occupy space the way a body or computer does.

    So, yikes. I should make that a little clearer. Let's use a thought experiment. Imagine that the "me" part of you was somehow separated from your brain. Imagine also that this "me" part was separated from your sensory experience. Then imagine that we copied the "me", so that there are two. Then, imagine hooking up your brain and sensory experience to both "me"s. Is there a difference between these "me"s? I would say no. They are exactly identical. Well, really I would say that there is no such thing as a phenomenon of "me". There is only brain and sensory experience.

    I think this seems strange to us because we like to think that we are more than an emergent byproduct of brain and experience. We are just patterns of stuff.

  23. James:

    Nothing worries me about the concept except the wishful thinking of some singularitarians that "we" can be transferred onto a machine to achieve immortality, and I do not find any specific disagreement between us.

  24. Julia: "After my meditation on the "self" and its potentially illusory nature last week, I particularly enjoyed Daniel Dennett's take on the issue, comparing the self to the center of gravity."

    Dennett refutes his argument each and every time he employs the personal pronoun "I."

  25. @Alex SL (Mintman) -- You said "In my eyes, we are morally obliged to make way for the next generation so that we have any future at all."
    Out of curiosity, why do you think we have a moral obligation to people who don't exist? Is it that you're imagining the next generation sort of waiting in the wings of life, so to speak, and we're behaving immorally by denying them entrance? Or is it that you feel we have moral obligations, not just to individual living beings, but to constructs like species?

    @Paisley -- In what sense? Dennett wasn't arguing that there is no "I", just that what "I" refers to isn't one distinct thing but a shifting collection of things.

  26. Julia: "In what sense? Dennett wasn't arguing that there is no "I", just that what "I" refers to isn't one distinct thing but a shifting collection of things."

    Dennett is an eliminative materialist who denies the reality of qualia (the technical term in the philosophy of mind for "subjective experiences"). I think you would be hard-pressed to find a more irrational philosophical stance.

    "The most common versions are eliminativism about propositional attitudes, as expressed by Paul and Patricia Churchland,[6] and eliminativism about qualia (subjective experience), as expressed by Daniel Dennett and Georges Rey.[2]"

    (source: Wikipedia: "Eliminative materialism")

  27. Julia:

    My daughter exists already, and if everybody who is grown up now were to live forever and occupy the resources and jobs they have now forever, she and every other child would grow up into a dystopia.

    Apart from that, "morally" was probably the wrong word to use if we are thinking about the fifth generation from now, yes. Let us say then that if we care about the long-term adaptability and survival of our species, as opposed to only our personal mid-term advantage, biological immortality would be imprudent (and of course uploading your memories would not have the same problems, but as indicated above I consider that even less likely to be technically feasible).

  28. "But to that some would reply that you just have to postulate a sufficiently complicated computer. As far as our current understanding of consciousness goes, I do not see that it contains anything that could not be duplicated in a machine."

    I think this is a big assumption. So far, the only place we have seen consciousness is in biology. I'm not sure that all of biology can be replicated using the technology found in computers. I find it more likely that consciousness will be created by humans through biology than through computer technology alone. I am not saying that it is impossible, but it doesn't appear to be as obviously true as some people believe.

    Besides, what makes our current technology not sufficiently complicated? Is this a hardware issue? Software issue? Neither? Both? Perhaps some people feel that they know the answers to these questions. I don't know.

