Tuesday, March 19, 2013
The Heuristics of the Hyperhuman
by Steve Neumann
Assuming Steve's working definition of the singularity:
ReplyDelete"I assume readers of this blog are familiar with the concept of the Singularity, but here’s a simple definition just in case: the creation by humans of a superintelligence that exceeds our own so significantly that we can’t even fathom what will happen after it occurs."
Is correct ... then how can the likes of Kurzweil even make predictions? (Not that the likes of Kurzweil won't find ways to argue around that.)
They can't, that's the point. So the likes of Kurzweil are more concerned with our path toward the singularity than with the post-singularity wonderland.
Apart from this, Kurzweil is trying to establish a 'law of accelerating returns' for technology and information processing in general. So if this is true, one can learn a little about the high-level options a general intelligence can have in 2100.
Well, Kurzweil and others have made predictions about technological developments *up to* the singularity; and beyond that, they are merely speculating, I think.
Very nice post!
Completely agree; self-realization could be reached by a lot of people in the 18th century. How we feel about ourselves is not a matter of technology, and I think it is very dangerous to leave our happiness in the hands of machines.
'The idea of overcoming oneself is, at least in my opinion, the essence of morality.'
This seems to me to be a very combative way to look at self-realization.
'For brevity’s sake, to overcome oneself is to subdue or sublimate one’s animal nature into something more humane; and this, to my mind, involves engaging with the uniquely human projects of philosophy, certain forms of asceticism, and the various arts.'
I disagree. I would say that our 'animal nature' is as important to our rationality as is our ability to reason. If a better 'self' is to emerge I think it will be by unifying our capacity to reason with our ability to feel.
And the lack of 'feeling' by an AI is why I don't see it becoming a singularity anytime soon.
Seth_blog -
Delete"This seems to me to be a very combative way to look at self-realization."
Think even of the development of a child: a child goes from all animalistic instinct to a (potentially) functioning member of society; there is much the child has to combat within himself in order to become 'civilized'.
"I would say that our 'animal nature' is as important to our rationality as is our ability to reason. If a better 'self' is to emerge I think it will be by unifying our capacity to reason with our ability to feel."
Actually, I totally agree with this; but unfortunately I can't include all aspects and angles of an argument in one blog post :(
Is the process of development and becoming civilized better described as:
1) Combating the animal instincts within, or
2) Learning how useful and harmful our instincts can be, depending upon application & context
I didn't mean to be overly negative or nitpick, but I think the framing is important & relevant to the topic.
"And the lack of 'feeling' by an AI is why I don't see it becoming a singularity anytime soon."
Why do you assume that an AI would lack feeling? I would imagine that comes down to how it is designed. It is not obvious to me that an AI could not feel.
>It is not obvious to me that an AI could not feel.<
Nice example of the "argument from ignorance" fallacy.
Everything that feels has its own substrate change dynamically as it processes information (the substrate becomes part of the information interpretation process). I think that would be a minimum requirement for feeling to emerge. Feeling is subjective, without such subjectivity how can information have intrinsic meaning?
Am I wrong to think that type of AI is coming any time soon?
I know you think that we can develop a substrate neutral AI. If an AI is substrate neutral how do you envision it being designed to feel?
@Filippo Neri
>Nice example of the "argument from ignorance" fallacy.<
Absolutely not. An argument from ignorance would be something like "I don't see how brains could be anything other than computers therefore they must be computers."
What I said was quite different - that there is no evidence that it is impossible to design an AI with subjective feeling. I am arguing for open-mindedness not for a conclusion.
@Seth_blog
>Everything that feels has its own substrate change dynamically as it processes information (the substrate becomes part of the information interpretation process). I think that would be a minimum requirement for feeling to emerge.<
I'm not sure what you mean about the substrate changing dynamically. Brain states change with respect to the configuration of particles inside the skull, which at a higher level can be understood in terms of concentration of neurotransmitters, which neurons are firing and which synapses are strengthened and weakened over time. The state of the substrate of an AI changes with respect to the configuration of bits stored in a computer's memory - but of course these changes can be understood in terms of higher levels of abstraction also. Any type of dynamic physical system must have some kind of change in physical state so intelligence is not particularly special in this regard.
>Feeling is subjective, without such subjectivity how can information have intrinsic meaning?<
Agreed, to an extent.
>Am I wrong to think that type of AI is coming any time soon? <
It's probably not coming any time soon. Or maybe it is. I don't know. Personally, I'm more interested in questions of principle than trying to guess when or if it will ever happen in practice. I'm certainly open to the possibility.
>I know you think that we can develop a substrate neutral AI. If an AI is substrate neutral how do you envision it being designed to feel?<
My hypothesis is that it doesn't need to be explicitly designed to have a subjective experience. I suspect that once you have created a genuinely intelligent system capable of much the same kinds of reasoning as humans (e.g. reporting on its own state and thought processes), then subjectivity and true consciousness will be aspects of that system. In this view, consciousness is simply what it feels like to be such a system, and the concept of a philosophical zombie is nonsense.
However such an intelligent system would not necessarily be emotional. If that's what you mean by feeling, we might need to instill in it some drives. Emotion seems to be easier to understand than consciousness, in that it can be readily modelled by some set of global variables the system seeks to maximise or minimise along with some modification of general behaviour according to the values of those variables. The emotional state of the system becomes just another percept/input to the system, much like sensory data. This explanation may not be adequate but it sketches out one potential model at least.
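To make that a little more concrete, here is a toy sketch in Python of the kind of model I have in mind - the drive names, numbers and update rules are all invented purely for illustration, not a proposal for how a real AI would actually work:

# Toy model of "emotion" as a few global drive variables the agent tries to
# keep near comfortable values, and which feed back in as ordinary inputs.
# All names and numbers here are invented for illustration only.
class EmotionalAgent:
    def __init__(self):
        self.drives = {"curiosity": 0.5, "comfort": 0.5}  # global emotional state

    def perceive(self, sensory_input, reward):
        # Drives are nudged by what happens to the agent...
        self.drives["comfort"] = max(0.0, min(1.0, self.drives["comfort"] + reward))
        self.drives["curiosity"] *= 0.99  # decays unless refreshed by novelty
        # ...and then appended to the ordinary percepts, so the agent can
        # reason about its own emotional state like any other input.
        return list(sensory_input) + [self.drives["curiosity"], self.drives["comfort"]]

    def choose_action(self, actions):
        # General behaviour is modulated by the current drive values: a "curious"
        # agent prefers the unfamiliar option, a "comfortable" one the familiar.
        explore = self.drives["curiosity"] > self.drives["comfort"]
        return actions[-1] if explore else actions[0]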
Of course speculating in detail about how an intelligent, emotional AI might be designed is beyond the scope of a comment on a philosophy blog. It may be that it is simply too difficult to design such a system manually in any case.
That doesn't mean it's not going to happen though. Don't rule out using a Darwinian algorithmic process to evolve artificially intelligent systems autonomously with minimal direct human intervention.
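For what it's worth, the skeleton of such a Darwinian process is very simple. Here's a toy Python sketch in which the genome (a list of numbers) and the fitness test are just stand-ins - a real system would have to score some intelligent behaviour instead:

import random

# Skeleton of a Darwinian search: mutate candidate designs, keep the fitter ones.
# The genome and fitness function below are placeholders for illustration only.
def fitness(genome):
    return -sum((g - 0.5) ** 2 for g in genome)  # stand-in objective

def evolve(pop_size=20, genome_len=8, generations=200):
    population = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)   # selection
        survivors = population[: pop_size // 2]
        children = [[g + random.gauss(0, 0.05) for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]  # mutation
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # converges toward a genome of values near 0.5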
Disagreeable-
'I suspect that once you have created a genuinely intelligent system capable of much the same kinds of reasoning as humans (e.g. reporting on its own state and thought processes), then subjectivity and true consciousness will be aspects of that system.'
I have little time today to post. Where did the 'thought process' come from? It seems tautological to me. You claim that we don't have to design subjectivity or consciousness, once they are part of the system they will be part of the system? How? Simply assigning static code values to individual entities does not address the issue of how all the entities will relate. I don't think emotion from code is nearly so simple a process.
Human intelligence is dynamic not just due to the substrate of the brain itself. The brain evolved to respond interdependently with the body and the environment to enhance survival. The relationships are non-separable. I have seen no explanation how or why a closed system of abstract code should be expected to produce any kind of similar phenomena.
I never said it was impossible, as Filippo pointed out the burden of the evidence lies on the side of those making the claims. I find the evidence lacking.
@Seth_blog
>Where did the 'thought process' come from? It seems tautological to me. You claim that we don't have to design subjectivity or consciousness, once they are part of the system they will be part of the system?<
Well, there are two different questions at stake.
1) Is it possible to make an AI which behaves intelligently? (weak AI)
2) Is it possible to make an AI which is genuinely conscious? (strong AI)
The paragraph you quoted was in the context of explaining why I believe the answer to (2) is yes when (1) is achieved.
I'm drawing a distinction between intelligence and consciousness which I don't think you have appreciated. Conceptually, they are two different things. Even if we made a computer which had every outward hallmark of intelligence, there are those who would doubt that it was actually conscious, having no real inner subjective experience of the world.
My paragraph was intended to dispute this view, and to suggest that an intelligently behaving system would necessarily be conscious.
I hope this clears up the confusion.
>I have seen no explanation how or why a closed system of abstract code should be expected to produce any kind of similar phenomena.<
Nobody says it has to be a closed system. The brain gets all of its information from the world through electric pulses along wires we call nerves. It's not hard to see how you could provide an AI with sensory data in the same way. If you really think having a body is so important (I don't - I see no problem conceptually with having a "disembodied" AI), then why not suppose that the AI has a robot body with which it can explore and experience the world?
>I never said it was impossible<
Great! It's much more fun to exchange ideas with open-minded people.
>as Filippo pointed out the burden of the evidence lies on the side of those making the claims.<
Hmm. Well, I might agree with you about Singularitarians or those who claim that we are on the cusp of building intelligent machines, but I'm not sure I agree with you on where the burden of proof lies with those who advocate Strong AI in principle.
Strong AI advocates like me claim that mental phenomena arise out of the computations carried out by neurons in accordance with the findings of neuroscience thus far.
Strong AI deniers seem to me to be in the more tenuous position of arguing that there is something rather more mysterious (if not downright magical) going on that could never be reproduced by a mere machine.
Weak AI deniers must make the even stronger claim that what happens in a brain could not even be simulated by a machine.
For these beliefs to be proven, basic assumptions about physics would have to be completely overturned. It seems more parsimonious to simply accept Strong AI.
I appreciate the clarifications. I don't think 'human-like intelligence' depends on a body, but I do think it depends at a minimum on a direct and dynamic relationship with its environment.
What is the environment that provides the input for weak AI? The AI's intelligence will be limited to those non-physical abstract inputs. Importantly, I don't think you replied to my question of how an AI would acquire relational value.
Currently Google's attempts at a single task (face recognition intelligence) including input from millions of videos have only 15.8% success with just 20,000 categories to differentiate. Now imagine relating the value of a specific facial expression and body language compared with the words produced, along with another observing person's responses. You don't know the talking person but you know the observer (who knows the speaker) and have some opinion of the observer's reliability. You would form a valuation of the spoken information almost instantly. What kind of input would a weak AI need if 10 million videos currently don't even allow an AI to reliably recognize an image as a face?
@Seth_blog
Thanks for continuing the discussion with an open mind.
> I don't think 'human-like intelligence' depends on a body, but I do think it depends at a minimum on a direct and dynamic relationship with its environment. <
Perhaps a human-like intelligence would need a human-like environment. Other more alien intelligences might not. In any case, I see no problem with the environment being virtual.
>What is the environment that provides the input for weak AI? The AI's intelligence will be limited to those non-physical abstract inputs.<
A weak AI's inputs need be no more abstract than ours. The impression you have that you directly experience the concrete world around you is an illusion. Everything you perceive is a result of signal processing performed by your brain on a series of electrical impulses travelling down wires (nerves). Providing software with a perfectly analogous set of inputs is not so much of a challenge - having the software make sense of it is.
As to where the inputs come from, well, the real world is one obvious possibility. Input could come from a sophisticated robot body or from a webcam. Or we could have AIs talking to and learning from each other. Or perusing the internet. Or inhabiting a virtual environment. There's lots of possibilities. Different scenarios might lead to different types of outcomes.
>Importantly I don't think you replied to my question of how an AI would acquire relational value.<
I'm not sure what you mean. You might be asking how semantics or meaning can be derived from syntax or symbols. I'm not going to be able to give you a full answer in a comment on a philosophy blog, but I can point you in the right direction.
The quick answer is I don't know, but our brains manage to do it so obviously it is possible, and if our brains obey the laws of physics and the laws of physics can be simulated by a computer, then it must be possible for a computer to do what our brains can do.
On the other hand I do have a strong inkling about where meaning comes from, and the answer is that meaning comes only from the associations between different concepts, percepts and feelings. Google "Semantic networks", or even better, read "Godel, Escher, Bach", for a better understanding, or read my thoughts on the question here: http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-semantics-from.html
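To give a flavour of what I mean, here's a toy semantic network in Python (the facts are made-up examples, not a serious model): concepts are nodes, and a concept's "meaning" is nothing over and above its position in the web of labelled associations.

# Toy semantic network: concepts are nodes, meaning lives in the labelled
# edges between them. The facts below are invented examples.
network = {
    ("dog", "is_a"): "mammal",
    ("dog", "has"): "fur",
    ("mammal", "is_a"): "animal",
    ("fur", "made_of"): "hair",
}

def describe(concept):
    # A concept is 'understood' only via its relations to other concepts.
    return {relation: target
            for (node, relation), target in network.items() if node == concept}

print(describe("dog"))  # {'is_a': 'mammal', 'has': 'fur'}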
>Currently Google's attempts at a single task (face recognition intelligence) including input from millions of videos have only 15.8% success with just 20,000 categories to differentiate.<
AI is very very very very very hard. I am making no claims that it will be achieved any time soon. I'm just arguing that it's possible in principle. The limited abilities of current AI systems do nothing to dissuade me from this view.
>What kind of input would a weak AI need if 10 million videos currently don't even allow an AI to reliably recognize an image as a face?<
Human beings have some information baked in by evolution over millions of years in the form of instincts (particularly relevant to face recognition). Humans who are competent at making advanced judgements usually only get to be so after a couple of decades of learning - a huge amount of input. Humans are also vastly more complex and sophisticated than current AI systems, so we can't extrapolate from how much input current AI systems need to how much input an AI as sophisticated as a human would need.
The amount of input is really inconsequential compared to the sophistication of the software processing it.
Disagreeable,
I also appreciate the manner in which you disagree and your thorough replies.
I think there are a few basic different assumptions we each lean toward that are responsible for our divergent views. I understand that the perceptual information we process appears to us as 'real' but that reality is only subjectively 'real'. Nevertheless, that subjective reality in my opinion cannot be reduced to neural firings. I believe the whole package is required (the mind is an extension).
You said:
'A weak AI's inputs need be no more abstract than ours'
But our brain is both an input and output device, and is not abstract. It is physical and dynamically interconnected with the body and the environment. Its physical structure cannot be separated from the information processing; it is part of the information itself.
How we get from syntax to semantics was indeed part of what I was trying to get at with my question on 'relational value'. Data or syntax can be processed as Shannon information, but Shannon information alone does not provide for semantic meaning or pragmatic problem solving. I think we will need to think of information at different levels to get there. I think Terrence Deacon, Robert Logan, Joseph Brenner and others are on the right track.
The world is relational. An AI that would be able to problem solve across physical and conceptual domains (as suggested by Brainoil) would need to be able to compute value based on complex relations from a more micro-level code. How would it determine and value what emerges in complex cross-domain interactions? Why should we assume what emerges from substrate-neutral abstract relations would apply to what emerges in the physical world?
I think we would need to see much more evidence of the ability to simulate complex emergent properties based solely on micro-level understandings before worrying about singularities.
@Seth_blog
>reality is only subjectively 'real'.<
Ok, but not quite what I was getting at. I was making the point that all of our experience of the external world gets into our brains via electrically-encoded signals. Since computers are designed to process electrically encoded signals, there should be no great conceptual leap in imagining how an artificial intelligence could have just as real an experience of the world as we do.
However, on reflection, I think what you may have been getting at is instead the question of how these signals become interpreted so as to give rise to qualia.
http://en.wikipedia.org/wiki/Qualia
That certainly bears discussion, and I have done so in part (with reference to the Mary's Room thought experiment) here:
http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-qualia-and.html
If qualia are what you're asking about, then let me know and we can discuss it further.
>But our brain is both an input and output device, and is not abstract. It is physical and dynamically interconnected with the body and the environment.<
An AI would also be both an input and output device. And while a brain is not abstract, the mind it hosts is. The same would go for an AI with a hardware and software side. An AI system, as I have described, could also be physically connected with its environment (and a robot body if you wish).
>Its physical structure cannot be separated from the information processing, it is part of the information itself.<
Well, I suppose that's the issue in question, isn't it? Whether a mind could be supported by substrates other than a brain? I think it can, so I disagree with your assertion.
>but Shannon information alone does not provide for semantic meaning or pragmatic problem solving.<
A position I disagree with, as I tried to explain on the blog post I linked you to. In my view "meaning" derives from the relations between different nodes in a semantic network and from the rules we use to process information. An automated system with a sufficiently rich semantic network would then have meaning in the same way we do. As for problem solving, there are already many AIs that can problem-solve in particular domains.
>I think we will need to think of information at different levels to get there.<
Sure, I'm with you there. But multi-level modelling of information processing is entirely compatible with the view that it's all ultimately a matter of processing ones and zeros.
You kind of lose me on the last couple of paragraphs. I'm not sure what you mean, and it's not clear to me that even (most) humans are capable of what you ask. Examples might help.
If you mean we would need a computer system that demonstrates emergent properties based on simple rules, then we already have numerous examples of such systems.
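Conway's Game of Life is probably the most familiar one: gliders, oscillators and even universal computation emerge from a micro-level rule this small. A minimal Python sketch (the starting pattern is the standard glider):

from collections import Counter

# Minimal Game of Life: complex structures emerge from one tiny local rule.
def step(live_cells):
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 live neighbours, or 2 and is already alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A "glider": a travelling pattern that is mentioned nowhere in the rule itself.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same glider shape, shifted by (1, 1)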
>Why should we assume what emerges from substrate neutral abstract relations would apply to what emerges in the physical world?<
Well, in our case it's because we evolved in and grew up interacting with a physical world.
As to why we should expect an AI to understand the physical world, I'd say for much the same reasons. If we ever do develop an AI, evolution and automated learning will probably be a big part of the solution. The AI could learn about the physical world by interacting with an accurate virtual simulation or by controlling a physical robot with sensory feedback.
But the physical world is only one domain. I can also imagine the creation of an AI which understands nothing of the physical world but is a genius at math, music, chess, or other such abstract pursuits.
@Seth_blog
I feel like the conversation is getting a bit muddied with issues arising out of the distinction between strong AI and weak AI. "Semantics from syntax" and "Qualia" arguments in particular are only relevant to Strong AI.
If you doubt even weak AI (that it should be possible in principle for a computer to behave as if it were intelligent in the same way as we are), then we should focus on issues relevant to that.
In particular, I'd like to understand the basic reasons we hold our respective positions. As I have outlined in more detail in my comments to Filippo Neri below, I believe in AI because our brains are natural systems and natural systems can in principle be simulated on a computer. If it's possible in principle to simulate a brain on a computer, then there is at least one extravagantly inefficient way of getting a computer to behave intelligently, but there are probably better ways also.
What's the main reason you doubt the possibility of AI?
What is self realisation?
//The overall transhumanist consensus, it seems clear to me, is that the Singularity is a foregone conclusion, with discrepancies within the movement only concerning the timing //
This is wrong as a point of fact. There are three major schools of Singularity and all three are logically different from each other. The differences go far beyond the timing. Yes, singularitarians believe it's a foregone conclusion, but you have to ask them which kind of singularity they are talking about.
And for all your talk about rupture, there isn't a single argument against Singularity. This is not the first time I have seen this. Most of the real arguments against Singularity come from within the community itself. Outsiders don't actually have a good argument against it. It's just that they find it hard to believe.
Note that for the singularity to happen, you don't need a conscious machine. You just need a machine smarter than we are. There is no reason that we would know how it would behave, anymore than chimps would know how we behave.
'Note that for the singularity to happen, you don't need a conscious machine. You just need a machine smarter than we are.'
In what way are you defining smarter such that a singularity will appear? I would expect by some definitions some machines already are quite a bit 'smarter' than we are.
>but you have to ask them which kind of singularity they are talking about.<
Sounds interesting. How would you describe the major camps?
>In what way are you defining smarter such that a singularity will appear? I would expect by some definitions some machines already are quite a bit 'smarter' than we are.<
A precise definition is difficult. However, let's suppose we're talking about a machine that is at least as capable as a human at all mental tasks, including tasks computers are not currently very good at such as natural language processing, image recognition, creativity etc.
Seth_blog,
Smarter, or more intelligent, in this context means, as Luke Muehlhauser puts it, efficient "cross-domain optimization." According to this definition,
Intelligence = optimization power/resources used
If I borrow a little more from Luke: "This definition sees intelligence as efficient cross-domain optimization. Intelligence is what allows an agent to steer the future, a power that is amplified by the resources at its disposal."
There are machines that are better than us at specific tasks. For example, even the most basic computer chess games can beat most ordinary humans. What Singularity would mean is that not only will computers be far better than us at chess, but they will also be better than us at forming defense strategies, forming counter-espionage strategies, rigging elections, and lots of other stuff that requires problem solving.
***
Disagreeable,
There is Accelerating Change, Event Horizon and Intelligence Explosion. Eliezer Yudkowsky describes them here:
http://yudkowsky.net/singularity/schools
The fourth one, Apocalyptism, seems to be the popular view here.
Apocalyptism: Hey, man, have you heard? There’s this bunch of, like, crazy nerds out there, who think that some kind of unspecified huge nerd thing is going to happen. What a bunch of wackos! It’s geek religion, man.
This comment has been removed by the author.
Filippo Neri,
You can't do science on a blog. What we are doing is arguing, and you are right that the burden of proof is on the Singularitarians. But they have presented their argument. It's your turn now to show exactly where they go wrong.
I believe it's the premises that you find hard to believe. That's understandable, especially when you think Singularitarians believe that "given enough processing power, machines will somehow develop intelligence and even consciousness"
First of all, when you think about Singularity, stop associating it with consciousness. Strong AI is not part of Singularity. The confusion is due to the fact that it's the same people who talk about them both.
What's relevant here is intelligence (please refer to my reply to Seth_blog to see what I mean when I say intelligence). There too, no one says that given enough processing power machines will just somehow develop intelligence. You have to code them that way. That's something you learn very early as a Singularitarian. Nothing magically springs into existence. Everything has to be coded.
Singularitarians don't believe that Siri will eventually cause the Singularity, unless Apple codes it that way. What's needed, though this is not part of the core claim of Singularitarians, is a machine that can modify its own code.
Of course this could be hard to believe. But most of these objections don't come because people who object to this have studied Minsky's papers and figured out why AI is not possible even in principle. It comes from the intuition that human intelligence and consciousness are somehow special. You yourself betray that intuition when you say "intelligence and consciousness are natural phenomena does not imply that they are substrate-neutral phenomena."
People find it hard to believe functionalism. For me, it's always been obvious. If my artificial leg is not functionally different from the biological one I had before the Mafia boss cut it off, it is my leg. Same goes for artificial hearts, artificial lungs, and of course, artificial brains.
brainoil -
Delete"But most of these objections don't come because people who object to this have studied Minsky's papers and figured out why AI is not possible even in principle. It comes from the intuition that human intelligence and consciousness are somehow special."
Are you here talking about those who believe that human consciousness is in some way supernatural - or at least not ultimately reducible to the electrochemical activity of neurons? I think that human consciousness is *special* but not in the way I just described; I think human consciousness is special because it's been intractably difficult to explain in a satisfactory way. Of the many accounts of consciousness I've read over the years, I find none of them *intuitively* satisfying; and they always seem to me to include an all-too-important caveat of something "yet to be explained."
Generally speaking, though, I don't rule out strong AI in principle. But could you point me to an explanation of a super-intelligence *without* strong AI? I must confess I find it intuitively implausible. My main reason for this is Muehlhauser's definition of intelligence you cited above, which seems to be at odds with a quote he cites at his site Intelligence Explosion:
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever." - Irving John Good
My main problem with this is: where does the ultraintelligent machine's *motivation* come from? What is its impetus? I'm assuming you'd say humans would have to code that in, and then subsequently the ultraintelligent machine could alter its own code, and so on. So, if we think of all of man's intellectual activities - which I would say include philosophy, art, *and* technology - it seems implausible to me that we humans could write the code for an ultraintelligent machine that would include the potential for all those activities, *and then some*.
>But could you point me to an explanation of a super-intelligence *without* strong AI?<
Strong AI is the position that it is possible to make a conscious AI. It is separate from the claim of singularitarians that we can make an intelligent AI. For a description of an intelligence with no consciousness, see http://en.wikipedia.org/wiki/Philosophical_zombie
For what it's worth, I agree with you that I find the concept intuitively implausible. I suspect that consciousness is just what it feels like to be an intelligent agent, and that you can't have one without the other.
>My main problem with this is: where does the ultraintelligent machine's *motivation* come from? <
Intelligence and motivations are completely different things. I agree, motivations would have to be coded in. At the simplest, all we would need would be a motivation for the system to do as we ask of it. I don't see why this is hard to envisage, as that's essentially what our computer systems are like now. We could then ask it to develop new technology for us, etc, and it would be motivated to comply.
Steve,
//I think human consciousness is special because it's been intractably difficult to explain in a satisfactory way.//
If that is the reason you believe consciousness is special, consciousness is special only because it confuses people. That's not something that should be considered special. A lot of ordinary people believe that love is special. They believe that love cannot be explained by science. Philosophers, and other people who read blogs like this one, don't quite go there. But for them, consciousness is what love is for ordinary people. The reason they can do this is because science and philosophy don't yet understand consciousness as well as they understand love. Since in 2013 we don't yet know exactly how consciousness works, it must be something special, unlike love.
I know that you don't think consciousness is supernatural, or that it is not reducible to electrochemical activity of neurons. But this whole idea of substrate dependence sounds a lot like vitalism. They are certainly invoking a vital principle. Intelligence cannot arise if this vital thing isn't there.
//which I would say include philosophy, art, *and* technology - it seems implausible to me that we humans could write the code for an ultraintelligent machine//
See, this is what I was talking about earlier. Those who object to the concept of machine learning don't do so because they have a mathematical proof that Turing machines can't learn. They just find it implausible. I mean, a lot of math talent is put into this. You can save their time and effort if you just come up with a proof that all their math is going to be useless. The whole thing now is almost like telling the Wright brothers that they can't fly.
Anyway, you yourself answered most of your questions. Motivation has to be coded into the AI. But then again, the question is what you mean by motivation. In humans, it is an adaptation that gave them better survival odds. There's no reason AIs should have similar mental drives. If you write a program to learn the rules of cricket, it's pointless to ask where it finds the motivation to learn those rules.
>life is definitely not a substrate-neutral phenomenon.<
Where do you get this conclusion from? In terms of forms of life, we have a sample size of one. I don't see how the word "definitely" is warranted.
What do you even mean by substrate in this context? I might suppose you mean the particular biochemistry shared by all life on earth. If we find life elsewhere, how do you know it couldn't have found a different substrate?
Furthermore, even if only one particular biochemistry is likely to arise in this universe, how do you know that there are not other universes with different laws of physics and so different possible biochemistries?
If you mean a radically different substrate altogether then we have to be careful that we're not simply getting into territory where we disagree about what we call life. For example - whether digital organisms which only live in computer simulations could be considered alive.
>The idea that given enough processing power, machines will somehow develop intelligence and even consciousness (the strong AI conjecture) is just, well, a conjecture.<
The idea that it's about processing power is a misunderstanding of the Strong AI position. It's more about the organisation and design of an AI system, and the difficulty in achieving success is likely related to the incredible complexity in the structure of a mind.
Given that we don't understand the functioning of the brain perfectly yet either, it's hardly surprising that we can't artificially reproduce it.
As to why I think intelligence is substrate neutral, a simple thought experiment convinces me. If you are committed to naturalism, then the laws of physics are mathematical in nature and the changes in a physical system over time can in principle be simulated. Furthermore, everything that has a causal effect can be reduced to interactions in physical systems, including thought processes.
(There are objections from people like Roger Penrose to the argument that mental thought processes could be simulated, but we'll come back to those later if you take them seriously - I don't.)
So if we had enough processing power, a sufficient grasp of physics and the ability to scan every particle in a living brain, we could simulate what's going on in that brain. The simulated brain would behave in a way precisely analogous to the real brain. If we wired that simulated brain up to appropriate sensory and output apparatus, we would expect it to interpret its environment and be capable of conversation in a manner indistinguishable from the original physical brain.
The question then is whether that brain would be conscious or not. That is debatable, although I believe so as there is no reason to think it would not be. It's irrelevant to the concept of the singularity, however, because for the singularity all we need is a machine which is intelligent, not conscious.
For more on why naturalism leads me to belief in Strong AI: http://disagreeableme.blogspot.co.uk/2013/01/strong-ai-naturalism-and-ai.html
This comment has been removed by the author.
@Filippo Neri
>Carbon chemistry is uniquely rich. Non-carbon-based biochemistry is only found in bad science fiction stories.<
I agree with you that we are unlikely to find non-carbon-based life. But you might find life based on chemistry with different chirality, or based on genetic material other than DNA, or with a myriad of other types of chemical differences. Whether these would count as a different substrate I have no idea, as you haven't explained what you mean by substrate in the case of life.
Even so, I don't see how you can rule out radically different forms of biochemistry just because we haven't managed to conceive of how it might work. My point stands. Our sample size of life is one. You are not justified in making conclusions about whether it could exist on other substrates.
(Regarding multiverses)
>Another argument from SF.<
So what?
The question is whether life in principle could exist on other substrates. Even if it's just a thought experiment, the mere possibility of other universes opens your position to doubt.
We don't know whether there are other universes with different laws of physics. There are serious cosmological theories which predict them. You don't know those theories are incorrect. You don't know that there are not other forms of life in other universes.
Remember, with regard to this point, I'm not the one making the claim. If you want to show that life could not exist on other "substrates" (whatever that means), then you need some evidence or argument to back that up.
>>Given that we don't understand the functioning of the brain perfectly yet either, it's hardly surprising that we can't artificially reproduce it.<
More arguments from ignorance.<
Nonsense. You are claiming as evidence against Strong AI the failure of AI researchers to produce a sentient machine.
Since we have not managed to achieve anything like a complete understanding of the biological brain, we have good reason to believe that brains and minds are complex, and thus no reason to believe it will be easy to achieve AI.
The failed promises of AI researchers in no way cast doubt on Strong AI in principle - they just show that it's very very hard. I will even allow that it may be impossible to achieve in practice.
None of this is an argument from ignorance. You are making the positive claim that Strong AI is impossible, and you have scant evidence to back up your position. Accusing me of arguing from ignorance or SF at every turn does your side no credit.
Your arguments regarding quantum mechanics are more interesting and so I will deal with them in a separate comment.
Your points on QM are subtle and thought-provoking, but I don't find them compelling. There is no quick answer so I am forced to break my response into more than one comment. I'll give you a general sense of it first and then deal with your specific criticisms.
The brain-scanning argument is a thought experiment intended to show that it should be possible in principle to achieve machine sentience. It is not intended to be a practical suggestion. As such, your comments regarding the practical unfeasibility of such a project can be dismissed.
If machine sentience is possible in principle, then it is likely that it is possible with far fewer resources than would be needed to simulate every particle in a human brain.
For example, I suspect scanning at the neuron level together with an accurate model of neuronal behaviour and global effects such as neurotransmitter concentration would be perfectly adequate to produce something of much the same behaviour as a given human brain. There may yet be more elegant ways of producing an AI which would not take a human brain as a literal model.
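To illustrate, in the crudest possible terms, what a neuron-level model could look like, here's a toy "leaky integrate-and-fire" neuron in Python - the constants are arbitrary and a serious simulation would be far richer, but the point is that the update rule is ordinary arithmetic:

# Toy leaky integrate-and-fire neuron: the membrane potential leaks toward rest,
# accumulates input current, and emits a spike when it crosses a threshold.
# All constants are arbitrary illustrative values.
def simulate(input_currents, threshold=1.0, leak=0.9, rest=0.0):
    potential, spike_times = rest, []
    for t, current in enumerate(input_currents):
        potential = rest + leak * (potential - rest) + current
        if potential >= threshold:
            spike_times.append(t)   # fire...
            potential = rest        # ...and reset
    return spike_times

print(simulate([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))  # spikes at steps 3 and 6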
Please don't take my arguments regarding the scanning of a human brain to mean that the scanned system will behave in precisely the same way as the original. Due to chaos and quantum uncertainty, the two brains will diverge over time, but both will be qualitatively similar, and both will exhibit intelligence.
@Filippo Neri
On to your specific criticisms.
>Quantum physical system cannot be simulated accurately by ordinary computers.<
It might help if you gave an example of what you mean. Is it because we need exponentially increasing computational resources (infeasibility) or because quantum systems are fundamentally unpredictable?
Intelligence is not likely to be sensitive to the precise outcomes of random events at quantum scales. I'll discuss why when addressing your criticisms pertaining to precision.
>As for quantum computers, we already know that a general-purpose quantum computer is not possible – even in principle.<
We don't need quantum computers. Quoting Wikipedia here: "Given sufficient computational resources, a classical computer could be made to simulate any quantum algorithm; quantum computation does not violate the Church–Turing thesis."
http://en.wikipedia.org/wiki/Quantum_computer
Interestingly, this suggests that no argument against Strong AI can be built on the suggestion that the human brain is a quantum computer (not that you implied this). Instead we would need some kind of radically new conception of physics.
>The fact that a system is ruled by mathematical laws does not imply that it can be simulated – even in principle – by physically-realizable computers.<
This is correct.
However to date we have discovered no physical laws which are not computable. We had to spend billions at CERN to conduct experiments at huge energies just to push the boundaries of physical knowledge slightly (or to confirm what we already know). At the risk of hubris, it seems that the physics of the everyday are already known. If brains were doing something at a low level which violated the known laws of physics, we might expect to have witnessed that by now.
Even so, the possibility that something truly exotic is going on in the human brain cannot be ruled out. I grant you this point, but I find it exceedingly unlikely that it is the case.
>Scanning precisely every particle in a living brain is not possible - even in principle - because of quantum effects<
There is no reason to suppose that absolute precision is necessary to produce an intelligent system. Precision within the bounds of what is allowed by quantum mechanics should actually be far far more than we actually need.
If your intelligence were critically dependent on the absolutely precise state of all the particles in your brain, then a light tap on the head or even a chance quantum fluctuation would be enough to kill you. Brains have survived bullet wounds, so I don't think this is the case.
Healthy brain function cannot be dependent on absolutely precise particle state because there *is* no absolutely precise state. Particles have no well-defined simultaneous position and momentum so whatever your explanation of consciousness it can't depend on it.
And since people do not generally behave particularly randomly, it seems reasonable to suppose that the precise outcomes of quantum events do not have much of a qualitative effect on people's behaviour. This is why I think pseudo-random computational processes with similar gross characteristics should be adequate to simulate the genuinely random quantum events in a human brain.
>... and the chaotic nature of classical physics.<
Like quantum effects, chaos is not a problem. The butterfly effect just means that over time, the state of the real brain and simulated brain will diverge. The simulated brain will no more cease to be intelligent than a butterfly could put an end to weather.
This comment has been removed by the author.
@Filippo Neri
Why did you delete your comments? You had some good points and it seemed like we were approaching some kind of agreement.
I have your latest reply in my email account and I wanted to respond to it, but I feel silly doing so when it's been deleted.