About Rationally Speaking


Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please note that the contents of this blog can be reprinted under the standard Creative Commons license.

Tuesday, March 19, 2013

The Heuristics of the Hyperhuman


by Steve Neumann

How many men at this hour are living in a state of bondage to the machines? How many spend their whole lives, from the cradle to the grave, in tending them by night and day? Is it not plain that the machines are gaining ground upon us, when we reflect on the increasing number of those who are bound down to them as slaves, and of those who devote their whole souls to the advancement of the mechanical kingdom? — Samuel Butler, Erewhon: or, Over the Range (1872)

As a guide dog mobility instructor, I’ve had the opportunity over the years to attend the annual conventions of the two largest membership organizations of and for the blind in the United States: the American Council of the Blind and the National Federation of the Blind. It was at the 2006 convention of the latter organization that I watched futurist Ray Kurzweil unveil his K-NFB Reader, which provides portable text-to-speech technology for the blind. At the time, I had read Kurzweil’s The Age of Spiritual Machines, and had just finished reading his The Singularity is Near.

I’ve always been a fan of science fiction, ever since I read Asimov’s Foundation series in junior high school. I think what appeals to me most about science fiction is the fecund ingenuity and unabashed experimentalism. So Kurzweil’s writing resonated on that level. And truth be told, I was pretty much on board with many of his predictions about future technology, if not with the time frame for them. But I just couldn’t wrap my head around the idea of the Singularity itself.

I assume readers of this blog are familiar with the concept of the Singularity, but here’s a simple definition just in case: the creation by humans of a superintelligence that exceeds our own so significantly that we can’t even fathom what will happen after it occurs. Or as Sheldon Cooper put it, “when man will be able to transfer his consciousness into machines, and achieve immortality.” Whatever the merits or demerits of the concept of the Singularity may be, there has been an increasing amount of coverage of it and its implications, primarily with regard to “existential risk.” One of the standard bearers of the futurist camp, Nick Bostrom, was recently interviewed about his work at the Future of Humanity Institute at Oxford University; philosopher Russell Blackford at Talking Philosophy posted about the forthcoming book The Transhumanist Reader, for which he himself contributed a chapter; and author Matt Ridley gave his assent to Kurzweil’s latest book, How to Create a Mind, in the Wall Street Journal. Philosopher Colin McGinn, however, has written a rather scathing review of the same book.

The overall transhumanist consensus, it seems clear to me, is that the Singularity is a foregone conclusion, with discrepancies within the movement only concerning the timing of it. It eerily reminds me of talk of the Rapture when I was growing up in a fundamentalist Christian church: the Rapture would be sudden and indescribable. And while futurists are optimistic about the progress researchers have been making in the two most relevant Singularity disciplines of computer science and neuroscience, they’re much less optimistic about the potential consequences of the event. Their primary concern is about risks. As Bostrom put it in the above linked interview in Aeon:

[these are risks] for which there are no geological case studies, and no human track record of survival. These risks arise from human technology, a force capable of introducing entirely new phenomena into the world.

And thinkers like Bostrom believe a superintelligence is precisely the kind of phenomenon that could easily introduce them. They’re not simply referring to things humans have already introduced or are about to introduce into the world, like dangerous new materials or superbugs for which we have no natural biological defenses. Bostrom and others are trying to think past these more commonplace maladies.  

The main motivation for attempting to come up with strategies and remedies for hard-to-imagine, potentially species-ending existential threats seems to be a moral one. Bostrom again:

Toby Ord is wrestling with a formidable philosophical dilemma. He is trying to figure out whether our moral obligations to future humans outweigh those we have to humans that are alive and suffering right now.

Even if the Singularity-Rapture doesn’t come to fruition as expected, the prospect of significant existential change can be used as a sort of case study for the moral evolution of our species. 

We humans seem to have enough trouble trying to work out the moral calculus involved with current utilitarian concerns, even within our own countries. Additionally, Americans in particular seem to have difficulty walking the line between obligations to ourselves and obligations to others, between the individual’s interests and those of society. 

And how do we fulfill our perceived obligations to others? I suppose there are two ways: actively and passively; or, in other words, performing actions that benefit others (+A), and refraining from performing actions that harm others (-A). For simplicity’s sake, +A generally include the sacrifice of one’s time, effort and material resources, or any combination of the three, for the benefit of others. One may have plenty of time but be incapable of physical effort and possessed of no significant material resources; or, conversely, one might not have the time but have a surfeit of material wealth. And so forth. But even when we benefit others, we naturally adjust the variables in our moral equation with a view to balancing it. The wealthiest philanthropists don’t give away all their wealth; the poorest philanthropists don’t give away all their time (unless, of course, it’s their job and they’re supported by the charity of yet others; or they actually work for a charity). So even when we attempt to fulfill our duties to others we seem to be mindful of fulfilling our duties to ourselves. 

What I’m getting at here is that, while I may agree with Bostrom, et al., that it is certainly wise to spend time pondering potential existential threats and their solutions, I think it would also be wise to spend an equal amount of time pondering how to prepare ourselves morally and psychologically for future exigencies. Within the context of this topic, however, I would regard morality and psychology to be essentially synonyms. 

I consider myself to be a type of virtue ethicist, though other virtue ethicists may not consider my taxonomy of virtues to be virtues. As a virtue ethicist, I’m concerned with the individual first, and then with others; but I would note that this focus is closer to a Nietzschean conception rather than the more popular (at least among politicians of a certain temperament) Objectivist one. I do think that one has to be a certain sort of person before one can effect any kind of durable change in others or in circumstances. And while I believe that a philosophical education is essential to this end, I’m not advocating an oligarchy of philosopher kings, Zeus forbid. And I should also point out that I don’t necessarily agree with thinkers like Savulescu and Persson who argue for “moral bioenhancement” where 

[o]ur knowledge of human biology – in particular of genetics and neurobiology – is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process.

I do agree, however, that moral education can “enable us to act better for distant people, future generations, and non-human animals.” But at what is this education aimed, and which normative approach is preferred or being deployed — deontological, consequentialist, or virtue-ethical? 

The predominant approach seems to be consequentialist. In other words, futurist thinkers are asking what the consequences of our current decisions regarding the transhumanist movement will be for posterity. As a result, their efforts are focused on the nature of the new technologies themselves. For example (and in its simplest form), they’re asking questions like, How can we program artificial intelligence to have a favorable regard for biological human existence? While I think this is something to be concerned with to some degree, I’d like to see an equal focus on individual character.

Here I’d like to make a distinction between the Transhuman and what I’ll call the Hyperhuman. The former is best represented by the essays in The Transhumanist Reader, noted at the beginning of this post; while the latter is closer to Nietzsche’s conception of the Übermensch. The conception of the Hyperhuman is similar to Nietzsche’s in that one must be able and willing to cultivate one’s psychological “raw material” into something better, where “better” means “overcoming oneself.” The idea of overcoming oneself is, at least in my opinion, the essence of morality. 

For brevity’s sake, to overcome oneself is to subdue or sublimate one’s animal nature into something more humane; and this, to my mind, involves engaging with the uniquely human projects of philosophy, certain forms of asceticism, and the various arts. It’s a little difficult to say with more precision exactly how the conception of the Hyperhuman is similar to Nietzsche’s because, while he certainly had a project in the works devoted to fleshing out the details of his vision of the Übermensch, he lost his mental faculties and died before he could finish it. So all we have left are his notes about “discipline and breeding” and the like, and often contradictory secondary literature purporting to know what he would have gone on to think and to say. 

From what he had published in his lifetime, however, it seems Nietzsche espoused a type of virtue ethics inspired by his affinity for classical Greek culture:

“You shall always be the first and excel all others: your jealous soul shall love no one, unless it be the friend” — that made the soul of the Greek quiver: thus he walked the path of his greatness. — Thus Spoke Zarathustra

In other words, we might say that the ancient Greek sought to bring all his mental and physical capacities to bear on the project of making himself into a better person, and thereby inspiring both envy and possibly a motivation for the “others” to do the same. And this would apparently create a type of “moral arms race” that would elevate the cultural standard in general. So under this conception, even a focus on oneself has salubrious effects on others. 

Bostrom, in his paper A History of Transhumanist Thought, briefly mentions Nietzsche as a potential philosophical precursor of the transhumanist movement, but notes that what Nietzsche had in mind 

was not technological transformation but a kind of soaring personal growth and cultural refinement in exceptional individuals.

This is undoubtedly correct; but at the same time I’m inclined to wonder what a mind like Nietzsche’s, which existed in a 19th Century world, would have thought about the effects of 20th or even 21st Century technology on the moral psychology of the individual. I would say that current and anticipated technological developments could be a stimulus for the kind of personal growth and cultural refinement Nietzsche encouraged. The recipe for personal growth is necessarily different for each individual, so I don’t think there can be any top-down plan for building the type of character required to meet the challenges of future technological change. 

Is authentic moral education even possible, or are we merely stuck with a superficial veneer of improvement and enhancement? And if all we’re left with is appearance and not authenticity, is that enough for our species to be equal to future existential challenges, whether they come from a superintelligence or from a super-catastrophe? 

Though I’m certainly no technological expert, I believe that the predictions of the futurists are overstated; so I think our species has the time, if not yet the proper motivation, to overcome itself before any near-Singularity type of change. And while Bostrom believes that “philosophy has a time limit,” I’m more inclined to agree with his interviewer that “[c]omputer science could be 10 paradigm-shifting insights away from building an artificial general intelligence, and each could take an Einstein to unravel.”

But if, as Nietzsche seemed to believe, the most fulfilling objective of one’s life is a type of self-realization, doesn’t the increased mechanization of life and society threaten to hamper the chances for such a life-project? As a lover of science fiction, I have a certain affinity for the practical and existential possibilities of technological advancement; but I worry that the increasing pace of those advances will seduce us into a slavish dependence and fascination that will tear us away from the project of self-realization, which is what I believe will be necessary to flourish in — or even survive — the future we ambivalently anticipate. As much as I love my Macbook, my iPhone, Google, and all the other fruits of the Internet and software application developers, I must confess that it requires increasing exertion on my part to remain focused on the “life of the mind.” Quantity all too easily dethrones quality. And the effort to focus on what’s important seems to have become not a daily battle but a moment-to-moment battle.

37 comments:

  1. Assuming Steve's working definition of the singularity:


    "I assume readers of this blog are familiar with the concept of the Singularity, but here’s a simple definition just in case: the creation by humans of a superintelligence that exceeds our own so significantly that we can’t even fathom what will happen after it occurs."

    is correct ... then how can the likes of Kurzweil even make predictions? (Not that the likes of Kurzweil won't find ways to argue around that.)

    ReplyDelete
    Replies
    1. They can't, that's the point. So the likes of Kurzweil are more concerned with our path toward the singularity than with the post-singularity wonderland.

      Apart from this, Kurzweil is trying to establish a 'law of accelerating returns' for technology and information processing in general. So if this is true, one can learn a little about the high-level options a general intelligence might have in 2100.

      Delete
    2. Well, Kurzweil and others have made predictions about technological developments *up to* the singularity; and beyond that, they are merely speculating, I think.

      Delete
  2. Very nice post!

    Completely agree; self-realization could be reached by a lot of people in the 18th century. How we feel about ourselves is not a matter of technology, and I think it is very dangerous to leave our happiness in the hands of machines.

    ReplyDelete
  3. 'The idea of overcoming oneself is, at least in my opinion, the essence of morality.'

    This seems to me to be a very combative way to look at self-realization.

    'For brevity’s sake, to overcome oneself is to subdue or sublimate one’s animal nature into something more humane; and this, to my mind, involves engaging with the uniquely human projects of philosophy, certain forms of asceticism, and the various arts.'

    I disagree. I would say that our 'animal nature' is as important to our rationality as is our ability to reason. If a better 'self' is to emerge I think it will be by unifying our capacity to reason with our ability to feel.

    And the lack of 'feeling' by an AI is why I don't see it becoming a singularity anytime soon.

    ReplyDelete
    Replies
    1. Seth_blog -

      "This seems to me to be a very combative way to look at self-realization."

      Think even of the development of a child: a child goes from all animalistic instinct to a (potentially) functioning member of society; there is much the child has to combat within himself in order to become 'civilized'.

      "I would say that our 'animal nature' is as important to our rationality as is our ability to reason. If a better 'self' is to emerge I think it will be by unifying our capacity to reason with our ability to feel."

      Actually, I totally agree with this; but unfortunately I can't include all aspects and angles of an argument in one blog post :(

      Delete
    2. Is the process of development and becoming civilized better described as:

      1) Combating the animal instincts within, or
      2) Learning how useful and harmful our instincts can be, depending upon application & context

      I didn't mean to be overly negative or nitpick, but I think the framing is important & relevant to the topic.

      Delete
    3. "And the lack of 'feeling' by an AI is why I don't see it becoming a singularity anytime soon."

      Why do you assume that an AI would lack feeling? I would imagine that comes down to how it is designed. It is not obvious to me that an AI could not feel.

      Delete
    4. >It is not obvious to me that an AI could not feel.<

      Nice example of the "argument from ignorance" fallacy.

      Delete
    5. Everything that feels has its own substrate change dynamically as it processes information (the substrate becomes part of the information interpretation process). I think that would be a minimum requirement for feeling to emerge. Feeling is subjective, without such subjectivity how can information have intrinsic meaning?

      Am I wrong to think that type of AI is coming any time soon?

      I know you think that we can develop a substrate neutral AI. If an AI is substrate neutral how do you envision it being designed to feel?

      Delete
    6. @Filippo Neri
      >Nice example of the "argument from ignorance" fallacy.<

      Absolutely not. An argument from ignorance would be something like "I don't see how brains could be anything other than computers therefore they must be computers."

      What I said was quite different - that there is no evidence that it is impossible to design an AI with subjective feeling. I am arguing for open-mindedness not for a conclusion.

      Delete
    7. @Seth_blog

      >Everything that feels has its own substrate change dynamically as it processes information (the substrate becomes part of the information interpretation process). I think that would be a minimum requirement for feeling to emerge.<

      I'm not sure what you mean about the substrate changing dynamically. Brain states change with respect to the configuration of particles inside the skull, which at a higher level can be understood in terms of concentration of neurotransmitters, which neurons are firing and which synapses are strengthened and weakened over time. The state of the substrate of an AI changes with respect to the configuration of bits stored in a computer's memory - but of course these changes can be understood in terms of higher levels of abstraction also. Any type of dynamic physical system must have some kind of change in physical state so intelligence is not particularly special in this regard.

      >Feeling is subjective, without such subjectivity how can information have intrinsic meaning?<

      Agreed, to an extent.

      >Am I wrong to think that type of AI is coming any time soon? <

      It's probably not coming any time soon. Or maybe it is. I don't know. Personally, I'm more interested in questions of principle than trying to guess when or if it will ever happen in practice. I'm certainly open to the possibility.

      >I know you think that we can develop a substrate neutral AI. If an AI is substrate neutral how do you envision it being designed to feel?<

      My hypothesis is that it doesn't need to be explicitly designed to have a subjective experience. I suspect that once you have created a genuinely intelligent system capable of much the same kinds of reasoning as humans (e.g. reporting on its own state and thought processes), then subjectivity and true consciousness will be aspects of that system. In this view, consciousness is simply what it feels like to be such a system, and the concept of a philosophical zombie is nonsense.

      However such an intelligent system would not necessarily be emotional. If that's what you mean by feeling, we might need to instill in it some drives. Emotion seems to be easier to understand than consciousness, in that it can be readily modelled by some set of global variables the system seeks to maximise or minimise along with some modification of general behaviour according to the values of those variables. The emotional state of the system becomes just another percept/input to the system, much like sensory data. This explanation may not be adequate but it sketches out one potential model at least.
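      To make that a bit more concrete, here is a toy Python sketch of the kind of model just described. Every name in it (EmotionalAgent, the particular drives, the thresholds) is invented purely for illustration, not a claim about how a real AI is or should be built:

        import random

        class EmotionalAgent:
            def __init__(self):
                # Global "drive" variables the system seeks to regulate.
                self.drives = {"curiosity": 0.5, "distress": 0.2}

            def perceive(self, sensory_input):
                # The emotional state is bundled with ordinary sensory data,
                # so it is processed like any other percept.
                return {"senses": sensory_input, "emotion": dict(self.drives)}

            def act(self, percept):
                # Behaviour is modulated by the drive values: high distress
                # biases the agent toward conservative actions, high curiosity
                # toward exploratory ones.
                if percept["emotion"]["distress"] > 0.7:
                    return "retreat"
                if percept["emotion"]["curiosity"] > random.random():
                    return "explore"
                return "exploit"

            def update_drives(self, outcome):
                # Outcomes nudge the drives up or down; implicitly the agent
                # "seeks" low distress and satisfied curiosity.
                if outcome == "harm":
                    self.drives["distress"] = min(1.0, self.drives["distress"] + 0.2)
                else:
                    self.drives["distress"] = max(0.0, self.drives["distress"] - 0.05)
                    self.drives["curiosity"] = max(0.0, self.drives["curiosity"] - 0.1)

        agent = EmotionalAgent()
        action = agent.act(agent.perceive("novel object ahead"))
        agent.update_drives("no harm")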

      Of course speculating in detail about how an intelligent, emotional AI might be designed is beyond the scope of a comment on a philosophy blog. It may be that it is simply too difficult to design such a system manually in any case.

      That doesn't mean it's not going to happen though. Don't rule out using a Darwinian algorithmic process to evolve artificially intelligent systems autonomously with minimal direct human intervention.

      Delete
    8. Disagreeable-
      'I suspect that once you have created a genuinely intelligent system capable of much the same kinds of reasoning as humans (e.g. reporting on its own state and thought processes), then subjectivity and true consciousness will be aspects of that system.'

      I have little time today to post. Where did the 'thought process' come from? It seems tautological to me. You claim that we don't have to design subjectivity or consciousness, once they are part of the system they will be part of the system? How? Simply assigning static code values to individual entities does not address the issue of how all the entities will relate. I don't think emotion from code is nearly so simple a process.

      Human intelligence is dynamic not just due to the substrate of the brain itself. The brain evolved to respond interdependently with the body and the environment to enhance survival. The relationships are non-separable. I have seen no explanation how or why a closed system of abstract code should be expected to produce any kind of similar phenomena.

      I never said it was impossible, as Filippo pointed out the burden of the evidence lies on the side of those making the claims. I find the evidence lacking.

      Delete
    9. @Seth_blog
      >Where did the 'thought process' come from? It seems tautological to me. You claim that we don't have to design subjectivity or consciousness, once they are part of the system they will be part of the system?<

      Well, there's two different questions at stake.

      1) Is it possible to make an AI which behaves intelligently? (weak AI)

      2) Is it possible to make an AI which is genuinely conscious? (strong AI)

      The paragraph you quoted was in the context of explaining why I believe the answer to (2) is yes when (1) is achieved.

      I'm drawing a distinction between intelligence and consciousness which I don't think you have appreciated. Conceptually, they are two different things. Even if we made a computer which had every outward hallmark of intelligence, there are those who would doubt that it was actually conscious, having no real inner subjective experience of the world.

      My paragraph was intended to dispute this view, and to suggest that an intelligently behaving system would necessarily be conscious.

      I hope this clears up the confusion.

      >I have seen no explanation how or why a closed system of abstract code should be expected to produce any kind of similar phenomena.<

      Nobody says it has to be a closed system. The brain gets all of its information from the world through electric pulses along wires we call nerves. It's not hard to see how you could provide an AI with sensory data in the same way. If you really think having a body is so important (I don't - I see no problem conceptually with having a "disembodied" AI), then why not suppose that the AI has a robot body with which it can explore and experience the world?

      >I never said it was impossible<

      Great! It's much more fun to exchange ideas with open-minded people.

      >as Filippo pointed out the burden of the evidence lies on the side of those making the claims.<

      Hmm. Well, I might agree with you about Singularitarians or those who claim that we are on the cusp of building intelligent machines, but I'm not sure I agree with you on where the burden of proof lies with those who advocate Strong AI in principle.

      Strong AI advocates like me claim that mental phenomena arise out of the computations carried out by neurons in accordance with the findings of neuroscience thus far.

      Strong AI deniers seem to me to be in the more tenuous position of arguing that there is something rather more mysterious (if not downright magical) going on that could never be reproduced by a mere machine.

      Weak AI deniers must make the even stronger claim that what happens in a brain could not even be simulated by a machine.

      For these beliefs to be proven, basic assumptions about physics would have to be completely overturned. It seems more parsimonious to simply accept Strong AI.

      Delete
    10. I appreciate the clarifications. I don't think 'human-like intelligence' depends on a body, but I do think it depends at a minimum on a direct and dynamic relationship with its environment.

      What is the environment that provides the input for weak AI? The AI's intelligence will be limited to those non-physical, abstract inputs. Importantly, I don't think you replied to my question of how an AI would acquire relational value.

      Currently, Google's attempts at a single task (face-recognition intelligence), including input from millions of videos, have had only 15.8% success with just 20,000 categories to differentiate. Now imagine relating the value of a specific facial expression and body language compared with the words produced, along with another observing person's responses. You don't know the talking person, but you know the observer (who knows the speaker) and have some opinion of the observer's reliability. You would form a valuation of the spoken information almost instantly. What kind of input would a weak AI need if 10 million videos currently don't even allow an AI to reliably recognize an image as a face?

      Delete
    11. @Seth_blog

      Thanks for continuing the discussion with an open mind.

      > I don't think 'human-like intelligence' depends on a body, but I do think it depends at a minimum on a direct and dynamic relationship with its environment. <

      Perhaps a human-like intelligence would need a human-like environment. Other more alien intelligences might not. In any case, I see no problem with the environment being virtual.

      >What is the environment that provides the input for weak AI? The AI's intelligence will be limited to those non-physical, abstract inputs.<

      A weak AI's inputs need be no more abstract than ours. The impression you have that you directly experience the concrete world around you is an illusion. Everything you perceive is a result of signal processing performed by your brain on a series of electrical impulses travelling down wires (nerves). Providing software with a perfectly analogous set of inputs is not so much of a challenge - having the software make sense of it is.

      As to where the inputs come from, well, the real world is one obvious possibility. Input could come from a sophisticated robot body or from a webcam. Or we could have AIs talking to and learning from each other. Or perusing the internet. Or inhabiting a virtual environment. There's lots of possibilities. Different scenarios might lead to different types of outcomes.

      >Importantly, I don't think you replied to my question of how an AI would acquire relational value.<

      I'm not sure what you mean. You might be asking how semantics or meaning can be derived from syntax or symbols. I'm not going to be able to give you a full answer in a comment on a philosophy blog, but I can point you in the right direction.

      The quick answer is I don't know, but our brains manage to do it so obviously it is possible, and if our brains obey the laws of physics and the laws of physics can be simulated by a computer, then it must be possible for a computer to do what our brains can do.

      On the other hand I do have a strong inkling about where meaning comes from, and the answer is that meaning comes only from the associations between different concepts, percepts and feelings. Google "Semantic networks", or even better, read "Godel, Escher, Bach", for a better understanding, or read my thoughts on the question here: http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-semantics-from.html
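      As a purely illustrative toy (the class and the example weights below are mine, not anything from the linked post or from GEB), the flavour of the idea fits in a few lines of Python: a symbol's "meaning" is nothing over and above its pattern of weighted associations with other nodes:

        from collections import defaultdict

        class SemanticNetwork:
            def __init__(self):
                # node -> {neighbour: association strength}
                self.links = defaultdict(dict)

            def associate(self, a, b, weight=1.0):
                # Associations are symmetric here for simplicity.
                self.links[a][b] = weight
                self.links[b][a] = weight

            def meaning_of(self, node, depth=2):
                # Crude stand-in for "meaning": the concepts reachable within
                # a few associative steps, weighted by the strength of the path.
                reachable = {node: 1.0}
                frontier = [node]
                for _ in range(depth):
                    next_frontier = []
                    for n in frontier:
                        for neighbour, w in self.links[n].items():
                            score = reachable[n] * w
                            if score > reachable.get(neighbour, 0.0):
                                reachable[neighbour] = score
                                next_frontier.append(neighbour)
                    frontier = next_frontier
                reachable.pop(node)
                return reachable

        net = SemanticNetwork()
        net.associate("fire", "heat", 0.9)
        net.associate("heat", "pain", 0.5)
        net.associate("fire", "light", 0.8)
        print(net.meaning_of("fire"))  # {'heat': 0.9, 'light': 0.8, 'pain': 0.45}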

      >Currently, Google's attempts at a single task (face-recognition intelligence), including input from millions of videos, have had only 15.8% success with just 20,000 categories to differentiate.<

      AI is very very very very very hard. I am making no claims that it will be achieved any time soon. I'm just arguing that it's possible in principle. The limited abilities of current AI systems do nothing to dissuade me from this view.

      >What kind of input would a weak AI need if 10 million videos currently don't even allow an AI to reliably recognize an image as a face?<

      Human beings have some information baked in by evolution over millions of years in the form of instincts (particularly relevant to face recognition). Humans which are competent at making advanced judgements usually only get to be so after a couple of decades of learning - a huge amount of input. Humans are also vastly more complex and sophisticated than current AI systems, so we can't extrapolate from how much input current AI systems need to how much input an AI as sophisticated as a human would need.

      The amount of input is really inconsequential compared to the sophistication of the software processing it.

      Delete
    12. Disagreeable,
      I also appreciate the manner in which you disagree and your thorough replies.

      I think there are a few basic different assumptions we each lean toward that are responsible for our divergent views. I understand that the perceptual information we process appears to us as 'real' but that reality is only subjectively 'real'. Nevertheless, that subjective reality, in my opinion, cannot be reduced to neural firings. I believe the whole package is required (the mind is an extension).
      You said:
      'A weak AI's inputs need be no more abstract than ours'

      But our brain is both an input and output device, and is not abstract. It is physical and dynamically interconnected with the body and the environment. Its physical structure cannot be separated from the information processing; it is part of the information itself.

      How we get from syntax to semantics was indeed part of what I was trying to get at with my question on 'relational value'. Data or syntax can be processed as Shannon information, but Shannon information alone does not provide for semantic meaning or pragmatic problem solving. I think we will need to think of information at different levels to get there. I think Terrence Deacon, Robert Logan, Joseph Brenner and others are on the right track.

      The world is relational. An AI that would be able to problem solve across physical and conceptual domains (as suggested by Brainoil) would need to be able to compute value based on complex relations from a more micro-level code. How would it determine and value what emerges in complex cross-domain interactions? Why should we assume what emerges from substrate neutral abstract relations would apply to what emerges in the physical world?

      I think we would need to see much more evidence of the ability to simulate complex emergent properties based solely on micro-level understandings before worrying about singularities.

      Delete
    13. @Seth_blog

      >reality is only subjectively 'real'.<

      Ok, but not quite what I was getting at. I was making the point that all of our experience of the external world gets into our brains via electrically-encoded signals. Since computers are designed to process electrically encoded signals, there should be no great conceptual leap in imagining how an artificial intelligence could have just as real an experience of the world as we do.

      However, on reflection, I think what you may have been getting at is instead the question of how these signals become interpreted so as to give rise to qualia.
      http://en.wikipedia.org/wiki/Qualia

      That certainly bears discussion, and I have done so in part (with reference to the Mary's Room thought experiment) here:
      http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-qualia-and.html

      If qualia are what you're asking about, then let me know and we can discuss it further.

      >
      But our brain is both an input and output device, and is not abstract. It is physical and dynamically interconnected with the body and the environment.
      <

      An AI would also be both an input and output device. And while a brain is not abstract, the mind it hosts is. The same would go for an AI with a hardware and software side. An AI system, as I have described, could also be physically connected with its environment (and a robot body if you wish).

      >Its physical structure cannot be separated from the information processing; it is part of the information itself.<

      Well, I suppose that's the issue in question, isn't it? Whether a mind could be supported by substrates other than a brain? I think it can, so I disagree with your assertion.

      >but Shannon information alone does not provide for semantic meaning or pragmatic problem solving.<

      A position I disagree with, as I tried to explain on the blog post I linked you to. In my view "meaning" derives from the relations between different nodes in a semantic network and from the rules we use to process information. An automated system with a sufficiently rich semantic network would then have meaning in the same way we do. As for problem solving, there are already many AIs that can problem-solve in particular domains.

      >I think we will need to think of information at different levels to get there.<

      Sure, I'm with you there. But multi-level modelling of information processing is entirely compatible with the view that it's all ultimately a matter of processing ones and zeros.

      You kind of lose me on the last couple of paragraphs. I'm not sure what you mean, and it's not clear to me that even (most) humans are capable of what you ask. Examples might help.

      If you mean we would need a computer system that demonstrates emergent properties based on simple rules, then we already have numerous examples of such systems.

      >Why should we assume what emerges from substrate neutral abstract relations would apply to what emerges in the physical world?<

      Well, in our case it's because we evolved in and grew up interacting with a physical world.

      As to why we should expect an AI to understand the physical world, I'd say for much the same reasons. If we ever do develop an AI, evolution and automated learning will probably be a big part of the solution. The AI could learn about the physical world by interacting with an accurate virtual simulation or by controlling a physical robot with sensory feedback.

      But the physical world is only one domain. I can also imagine the creation of an AI which understands nothing of the physical world but is a genius at math, music, chess, or other such abstract pursuits.

      Delete
    14. @Seth_blog
      I feel like the conversation is getting a bit muddied with issues arising out of the distinction between strong AI and weak AI. "Semantics from syntax" and "Qualia" arguments in particular are only relevant to Strong AI.

      If you doubt even weak AI (that it should be possible in principle for a computer to behave as if it were intelligent in the same way as we are), then we should focus on issues relevant to that.

      In particular, I'd like to understand the basic reasons we hold our respective positions. As I have outlined in more detail in my comments to Filippo Neri below, I believe in AI because our brains are natural systems and natural systems can in principle be simulated on a computer. If it's possible in principle to simulate a brain on a computer, then there is at least one extravagantly inefficient way of getting a computer to behave intelligently, but there are probably better ways also.

      What's the main reason you doubt the possibility of AI?

      Delete
  4. //The overall transhumanist consensus, it seems clear to me, is that the Singularity is a foregone conclusion, with discrepancies within the movement only concerning the timing //

    This is wrong as a point of fact. There are three major schools of Singularity and all three are logically different from each other. The differences go far beyond the timing. Yes, singularitarians believe it's a foregone conclusion, but you have to ask them which kind of singularity they are talking about.

    And for all your talk about the Rapture, there isn't a single argument against the Singularity. This is not the first time I have seen this. Most of the real arguments against the Singularity come from within the community itself. Outsiders don't actually have a good argument against it. It's just that they find it hard to believe.

    Note that for the singularity to happen, you don't need a conscious machine. You just need a machine smarter than we are. There is no reason that we would know how it would behave, anymore than chimps would know how we behave.

    ReplyDelete
    Replies
    1. 'Note that for the singularity to happen, you don't need a conscious machine. You just need a machine smarter than we are.'

      In what way are you defining smarter such that a singularity will appear? I would expect by some definitions some machines already are quite a bit 'smarter' than we are.

      Delete
    2. >but you have to ask them which kind of singularity they are talking about.<

      Sounds interesting. How would you describe the major camps?

      Delete
    3. >In what way are you defining smarter such that a singularity will appear? I would expect by some definitions some machines already are quite a bit 'smarter' than we are.<

      A precise definition is difficult. However, let's suppose we're talking about a machine that is at least as capable as a human at all mental tasks, including tasks computers are not currently very good at such as natural language processing, image recognition, creativity etc.

      Delete
    4. Seth_blog,

      Smarter, or more intelligent, in this context means, as Luke Muehlhauser puts it, efficient "cross-domain optimization." According to this definition,

      Intelligence = optimization power/resources used

      If I borrow a little more from Luke: "This definition sees intelligence as efficient cross-domain optimization. Intelligence is what allows an agent to steer the future, a power that is amplified by the resources at its disposal."

      There are machines that are better than us at specific tasks. For example, even the most basic computer chess games can beat most ordinary humans. What the Singularity would mean is that not only will computers be far better than us at chess, but they will also be better than us at forming defense strategies, forming counter-espionage strategies, rigging elections, and a lot of other stuff that requires problem solving.
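      To make the ratio concrete, here is a deliberately toy Python sketch; the numbers and the scoring function are invented for illustration and are not Muehlhauser's actual formalism:

        def intelligence(optimization_power_by_domain, resources_used):
            # Cross-domain optimization power divided by the resources consumed.
            return sum(optimization_power_by_domain.values()) / resources_used

        chess_engine = {"chess": 95.0, "strategy": 5.0, "language": 0.0}
        human = {"chess": 40.0, "strategy": 60.0, "language": 70.0}

        # The chess engine dominates one domain, but on this toy measure an
        # agent that can steer outcomes across many domains with comparable
        # resources scores as more intelligent.
        print(intelligence(chess_engine, resources_used=100.0))  # 1.0
        print(intelligence(human, resources_used=100.0))         # 1.7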
      ***

      Disagreeable,

      There are Accelerating Change, Event Horizon, and Intelligence Explosion. Eliezer Yudkowsky describes them here:

      http://yudkowsky.net/singularity/schools

      The fourth one, Apocalyptism, seems to be the popular view here.

      Apocalyptism: Hey, man, have you heard? There’s this bunch of, like, crazy nerds out there, who think that some kind of unspecified huge nerd thing is going to happen. What a bunch of wackos! It’s geek religion, man.

      Delete
  5. This comment has been removed by the author.

    ReplyDelete
    Replies
    1. Filippo Neri,

      You can't do science on a blog. What we are doing is arguing, and you are right that the burden of proof is on the Singularitarians. But they have presented their argument. It's your turn now to show exactly where they go wrong.

      I believe it's the premises that you find hard to believe. That's understandable, especially when you think Singularitarians believe that "given enough processing power, machines will somehow develop intelligence and even consciousness."

      First of all, when you think about Singularity, stop associating it with consciousness. Strong AI is not part of Singularity. The confusion is due to the fact that it's the same people who talk about them both.

      What's relevant here is intelligence (please refer to my reply to Seth_blog to see what I mean when I say intelligence). There too, no one says that given enough processing power machines will just somehow develop intelligence. You have to code them that way. That's something you learn very early as a Singularitarian. Nothing magically springs into existence. Everything has to be coded.

      Singularitarians don't believe that Siri will eventually cause the Singularity, unless Apple codes it that way. What's needed, though this is not part of the core claim of Singularitarians, is a machine that can modify its own code.

      Of course this could be hard to believe. But most of these objections don't come because people who object to this have studied Minsky's papers and figured out why AI is not possible even in principle. It comes from the intuition that human intelligence and consciousness are somehow special. You yourself betray that intuition when you say "intelligence and consciousness are natural phenomena does not imply that they are substrate-neutral phenomena."

      People find functionalism hard to believe. For me, it's always been obvious. If my artificial leg is not functionally different from the biological one I had before the Mafia boss cut it off, it is my leg. Same goes for artificial hearts, artificial lungs, and of course, artificial brains.

      Delete
    2. brainoil -

      "But most of these objections don't come because people who object to this have studied Minsky's papers and figured out why AI is not possible even in principle. It comes from the intuition that human intelligence and consciousness are somehow special."

      Are you here talking about those who believe that human consciousness is in some way supernatural - or at least not ultimately reducible to the electrochemical activity of neurons? I think that human consciousness is *special* but not in the way I just described; I think human consciousness is special because it's been intractably difficult to explain in a satisfactory way. Of the many accounts of consciousness I've read over the years, I find none of them *intuitively* satisfying; they always seem to me to include an all-too-important caveat of something "yet to be explained."

      But, generally speaking, I don't rule out strong AI in principle. Could you, however, point me to an explanation of a super-intelligence *without* strong AI? I must confess I find it intuitively implausible. My main reason for this is Muehlhauser's definition of intelligence you cited above, which seems to be at odds with a quote he cites at his site Intelligence Explosion:

      "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever." - Irving John Good

      My main problem with this is: where does the ultraintelligent machine's *motivation* come from? What is its impetus? I'm assuming you'd say humans would have to code that in, and then subsequently the ultraintelligent machine could alter its own code, and so on. So, if we think of all of man's intellectual activities - which I would say include philosophy, art, *and* technology - it seems implausible to me that we humans could write the code for an ultraintelligent machine that would include the potential for all those activities, *and then some*.

      Delete
    3. >But could you point me to an explanation of a super-intelligence *without* strong AI?<

      Strong AI is the position that it is possible to make a conscious AI. It is separate from the claim of singularitarians that we can make an intelligent AI. For a description of an intelligence with no consciousness, see http://en.wikipedia.org/wiki/Philosophical_zombie

      For what it's worth, I agree with you that I find the concept intuitively implausible. I suspect that consciousness is just what it feels like to be an intelligent agent, and that you can't have one without the other.

      >My main problem with this is: where does the ultraintelligent machine's *motivation* come from? <

      Intelligence and motivations are completely different things. I agree, motivations would have to be coded in. At the simplest, all we would need would be a motivation for the system to do as we ask of it. I don't see why this is hard to envisage, as that's essentially what our computer systems are like now. We could then ask it to develop new technology for us, etc., and it would be motivated to comply.

      Delete
    4. Steve,

      //I think human consciousness is special because it's been intractably difficult to explain in a satisfactory way.//

      If that is the reason you believe consciousness is special, consciousness is special only because it confuses people. That's not something that should be considered special. A lot of ordinary people believe that love is special. They believe that love cannot be explained by science. Philosophers, and other people who read blogs like this one, don't quite go there. But for them, consciousness is what love is for ordinary people. The reason they can do this is that science and philosophy don't yet understand consciousness as well as they understand love. Since in 2013 we don't yet know exactly how consciousness works, it must be something special, unlike love.

      I know that you don't think consciousness is supernatural, or that it is irreducible to the electrochemical activity of neurons. But this whole idea of substrate dependence sounds a lot like vitalism. They are certainly invoking a vital principle. Intelligence cannot arise if this vital thing isn't there.

      //which I would say include philosophy, art, *and* technology - it seems implausible to me that we humans could write the code for an ultraintelligent machine//

      See, this is what I was talking about earlier. Those who object to the concept of machine learning don't do so because they have a mathematical proof that Turing machines can't learn. They just find it implausible. I mean, a lot of math talent is put into this. You can save their time and effort if you just come up with a proof that all their math is going to be useless. The whole thing now is almost like telling the Wright brothers that they can't fly.

      Anyway, you yourself answered most of your questions. Motivation has to be coded into the AI. But then again, the question is what you mean by motivation. In humans, it is an adaptation that gave them better survival odds. There's no reason AIs should have similar mental drives. If you write a program to learn the rules of cricket, it's pointless to ask where it finds the motivation to learn those rules.

      Delete
  6. >life is definitely not a substrate-neutral phenomenon.<

    Where do you get this conclusion from? In terms of forms of life, we have a sample size of one. I don't see how the word "definitely" is warranted.

    What do you even mean by substrate in this context? I might suppose you mean the particular biochemistry shared by all life on earth. If we find life elsewhere, how do you know it couldn't have found a different substrate?

    Furthermore, even if only one particular biochemistry is likely to arise in this universe, how do you know that there are not other universes with different laws of physics and so different possible biochemistries?

    If you mean a radically different substrate altogether then we have to be careful that we're not simply getting into territory where we disagree about what we call life. For example - whether digital organisms which only live in computer simulations could be considered alive.

    >The idea that given enough processing power, machines will somehow develop intelligence and even consciousness (the strong AI conjecture) is just, well, a conjecture.<

    The idea that it's about processing power is a misunderstanding of the Strong AI position. It's more about the organisation and design of an AI system, and the difficulty in achieving success is likely related to the incredible complexity in the structure of a mind.

    Given that we don't understand the functioning of the brain perfectly yet either, it's hardly surprising that we can't artificially reproduce it.

    As to why I think intelligence is substrate neutral, a simple thought experiment convinces me. If you are committed to naturalism, then the laws of physics are mathematical in nature and the changes in a physical system over time can in principle be simulated. Furthermore, everything that has a causal effect can be reduced to interactions in physical systems, including thought processes.

    (There are objections from people like Roger Penrose to the argument that mental thought processes could be simulated, but we'll come back to those later if you take them seriously - I don't.)

    So if we had enough processing power, a sufficient grasp of physics and the ability to scan every particle in a living brain, we could simulate what's going on in that brain. The simulated brain would behave in a way precisely analogous to the real brain. If we wired that simulated brain up to appropriate sensory and output apparatus, we would expect it to interpret its environment and be capable of conversation in a manner indistinguishable from the original physical brain.

    The question then is whether that brain would be conscious or not. That is debatable, although I believe so as there is no reason to think it would not be. It's irrelevant to the concept of the singularity, however, because for the singularity all we need is a machine which is intelligent, not conscious.

    For more on why naturalism leads me to belief in Strong AI: http://disagreeableme.blogspot.co.uk/2013/01/strong-ai-naturalism-and-ai.html

    ReplyDelete
  7. This comment has been removed by the author.

    ReplyDelete
    Replies
    1. @Filippo Neri
      >Carbon chemistry is uniquely rich. Non-carbon-based biochemistry is only found in bad science fiction stories.<

      I agree with you that we are unlikely to find non-carbon-based life. But you might find life based on chemistry with different chirality, or based on genetic material other than DNA, or with a myriad of other types of chemical differences. Whether these would count as a different substrate I have no idea, as you haven't explained what you mean by substrate in the case of life.

      Even so, I don't see how you can rule out radically different forms of biochemistry just because we haven't managed to conceive of how it might work. My point stands. Our sample size of life is one. You are not justified in making conclusions about whether it could exist on other substrates.

      (Regarding multiverses)
      >Another argument from SF.<

      So what?

      The question is whether life in principle could exist on other substrates. Even if it's just a thought experiment, the mere possibility of other universes opens your position to doubt.

      We don't know whether there are other universes with different laws of physics. There are serious cosmological theories which predict them. You don't know those theories are incorrect. You don't know that there are not other forms of life in other universes.

      Remember, with regard to this point, I'm not the one making the claim. If you want to show that life could not exist on other "substrates" (whatever that means), then you need some evidence or argument to back that up.

      >>Given that we don't understand the functioning of the brain perfectly yet either, it's hardly surprising that we can't artificially reproduce it.<

      More arguments from ignorance.<

      Nonsense. You are claiming as evidence against Strong AI the failure of AI researchers to produce a sentient machine.

      Since we have not managed to achieve anything like a complete understanding of the biological brain, we have good reason to believe that brains and minds are complex, and thus no reason to believe it will be easy to achieve AI.

      The failed promises of AI researchers in no way cast doubt on Strong AI in principle - they just show that it's very very hard. I will even allow that it may be impossible to achieve in practice.

      None of this is an argument from ignorance. You are making the positive claim that Strong AI is impossible, and you have scant evidence to back up your position. Accusing me of arguing from ignorance or SF at every turn does your side no credit.

      Your arguments regarding quantum mechanics are more interesting and so I will deal with them in a separate comment.

      Delete
    2. Your points on QM are subtle and thought-provoking, but I don't find them compelling. There is no quick answer so I am forced to break my response into more than one comment. I'll give you a general sense of it first and then deal with your specific criticisms.

      The brain-scanning argument is a thought experiment intended to show that it should be possible in principle to achieve machine sentience. It is not intended to be a practical suggestion. As such, your comments regarding the practical unfeasibility of such a project can be dismissed.

      If machine sentience is possible in principle, then it is likely that it is possible with far fewer resources than would be needed to simulate every particle in a human brain.

      For example, I suspect scanning at the neuron level together with an accurate model of neuronal behaviour and global effects such as neurotransmitter concentration would be perfectly adequate to produce something of much the same behaviour as a given human brain. There may yet be more elegant ways of producing an AI which would not take a human brain as a literal model.

      Please don't take my arguments regarding the scanning of a human brain to mean that the scanned system will behave in precisely the same way as the original. Due to chaos and quantum uncertainty, the two brains will diverge over time, but both will be qualitatively similar, and both will exhibit intelligence.

      Delete
    3. @Filippo Neri

      On to your specific criticisms.

      >Quantum physical system cannot be simulated accurately by ordinary computers.<

      It might help if you gave an example of what you mean. Is it because we need exponentially increasing computational resources (infeasibility) or because quantum systems are fundamentally unpredictable?

      Intelligence is not likely to be sensitive to the precise outcomes of random events at quantum scales. I'll discuss why when addressing your criticisms pertaining to precision.

      >As for quantum computers, we already know that a general-purpose quantum computer is not possible – even in principle.<

      We don't need quantum computers. Quoting Wikipedia here: "Given sufficient computational resources, a classical computer could be made to simulate any quantum algorithm; quantum computation does not violate the Church–Turing thesis."
      http://en.wikipedia.org/wiki/Quantum_computer

      Interestingly, this suggests that no argument against Strong AI can be built on the suggestion that the human brain is a quantum computer (not that you implied this). Instead we would need some kind of radically new conception of physics.

      >The fact that a system is ruled by mathematical laws does not imply that it can be simulated – even in principle – by physically-realizable computers.<

      This is correct.

      However to date we have discovered no physical laws which are not computable. We had to spend billions at CERN to conduct experiments at huge energies just to push the boundaries of physical knowledge slightly (or to confirm what we already know). At the risk of hubris, it seems that the physics of the everyday are already known. If brains were doing something at a low level which violated the known laws of physics, we might expect to have witnessed that by now.

      Even so, the possibility that something truly exotic is going on in the human brain cannot be ruled out. I grant you this point, but I find it exceedingly unlikely that it is the case.

      >Scanning precisely every particle in a living brain is not possible - even in principle - because of quantum effects<

      There is no reason to suppose that absolute precision is necessary to produce an intelligent system. Precision within the bounds of what is allowed by quantum mechanics should actually be far far more than we actually need.

      If your intelligence were critically dependent on the absolutely precise state of all the particles in your brain, then a light tap on the head or even a chance quantum fluctuation would be enough to kill you. Brains have survived bullet wounds, so I don't think this is the case.

      Healthy brain function cannot be dependent on absolutely precise particle state because there *is* no absolutely precise state. Particles have no well-defined simultaneous position and momentum, so whatever your explanation of consciousness, it can't depend on one.

      And since people do not generally behave particularly randomly, it seems reasonable to suppose that the precise outcomes of quantum events do not have much of a qualitative effect on people's behaviour. This is why I think pseudo-random computational processes with similar gross characteristics should be adequate to simulate the genuinely random quantum events in a human brain.

      >... and the chaotic nature of classical physics.<

      Like quantum effects, chaos is not a problem. The butterfly effect just means that over time, the state of the real brain and simulated brain will diverge. The simulated brain will no more cease to be intelligent than a butterfly could put an end to weather.

      Delete
  8. This comment has been removed by the author.

    ReplyDelete
    Replies
    1. @Filippo Neri

      Why did you delete your comments? You had some good points and it seemed like we were approaching some kind of agreement.

      I have your latest reply in my email account and I wanted to respond to it, but I feel silly doing so when it's been deleted.

      Delete
