Is the mind a kind of computer? This episode of Rationally Speaking features philosopher Gerard O'Brien from the University of Adelaide, who specializes in the philosophy of mind.
Gerard, Julia, and Massimo discuss the computational theory of mind and what it implies about consciousness, intelligence, and the possibility of uploading people onto computers.
Gerard's pick: "Alan Turing: The Enigma The Centenary Edition."
About Rationally Speaking
Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.
I take it that pan-computationism says that everything computes something, but that doesn't mean it computes everything.
A mass on a spring is an analog computation of a harmonic oscillator because it IS a harmonic oscillator, but it won't play chess. So computation is still a useful concept.
What's missing in this discussion is an explanation of how digital computers work, which would make it obvious why they can't be conscious. All they do is perform a simple operation like an addition per clock tick, and store the result in memory, where it just sits like text in a book. No way is that conscious. But it could simulate a model of the brain and behave just like a brain. So if you kick a robot that's simulating a brain, it'll behave like it's hurt without actually feeling pain.
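To make the "one operation per clock tick" picture concrete, here is a minimal sketch (Python, with a made-up two-instruction program; real processors are of course far more elaborate) of the fetch-execute-store loop being described:

program = [("add", 0, 1, 2),   # mem[2] = mem[0] + mem[1]
           ("add", 2, 2, 3)]   # mem[3] = mem[2] + mem[2]
memory = [2, 3, 0, 0]

for op, a, b, dest in program:                    # one instruction per "clock tick"
    if op == "add":
        memory[dest] = memory[a] + memory[b]      # the result just sits in memory afterwards

print(memory)   # [2, 3, 5, 10]

Everything a conventional processor does reduces to long runs of steps like these.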
Why wouldn't a perfect simulation of the brain experience everything a physical brain experiences? Of course it would, if it's really accurate, down to chemical reactions.
Do you really think that sitting there with a calculator (processor), a set of instructions (brain simulation code), and a notebook (memory), you can create the feeling of pain? The state of the brain is all written in the notebook, but the notebook doesn't feel pain. The calculator just adds and subtracts numbers. Where's the pain?
Yes, Max, I do, at least. Actually, to be more precise, I think that a computer, with sensors to detect the world and actuators to act on it, can do and experience things in exactly the same way as an animal mind.
You don't need sensors and actuators to feel pain. It can be all in your head.
Getting back to the above analogy, let's make it even more absurd. Suppose the calculator printed out the sequence of additions and subtractions that created the pain. Now, if you put away the instructions and the notebook, and simply re-enter the sequence of additions and subtractions, will that create pain again?
Or answer this question. At what point in time is the pain being experienced? When the calculator is adding 2+2? While writing 4 to memory?
Max:
Assuming naturalism holds, the only thing which is needed to create subjective sensations is the geometric pattern of the neurons in any given brain.
If you think it's ridiculous that a machine that adds and subtracts could experience pain, just remember that the machinery which can currently do it is basically made up of protons, electrons and neutrons. I'm talking about the brain.
So if we could simulate the brain down to the level of atoms, why wouldn't the simulated atoms be able to do what the real ones already do? It's a pretty educated guess that if naturalism holds, then a perfectly simulated person could do everything a physical person does.
The simulation part is the problematic issue here. How do we accurately simulate atoms, and how do we scan a brain down to the level of molecules, or at least of individual neurons?
Bubba:
A computer with sensors and motors probably doesn't experience anything. How could it? Why would it? The ability to experience qualia probably evolved over millions of years. There's no reason to believe it would just pop up in any laptop with some fancy USB gadgets connected to it.
Hi Max,
"All they do is perform a simple operation like an addition per clock tick, and store the result in memory, where it just sits like text in a book. No way is that conscious."
Neurons:
"All they do is a simple operation such as react to activation of their connections by activating themselves when a certain threshold is reached. No way is that conscious".
My approach is that the hardware is never conscious. It's the software, the algorithm, the pattern that's conscious. Consciousness is not a physical objective phenomenon, it's subjective, only accessible to the software experiencing it.
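Incidentally, the elementary neuron operation paraphrased above is simple enough to sketch; the weights and threshold below are made up purely for illustration:

def neuron(inputs, weights, threshold):
    # Fire (return 1) only if the weighted sum of inputs reaches the threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

print(neuron([1, 1, 0], [0.6, 0.5, 0.9], threshold=1.0))   # 1: 0.6 + 0.5 reaches 1.0
print(neuron([1, 0, 0], [0.6, 0.5, 0.9], threshold=1.0))   # 0: 0.6 falls short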
As Massimo said, a computer simulation of photosynthesis is not photosynthesis. Consciousness, like photosynthesis and digestion, is a physical process. You might model it mathematically and simulate it on a computer, but an abstract model or a simulation of something is not the thing itself. Now, if you build an identical brain out of neurons, then of course it'll be conscious.
@Max
Computation is not like photosynthesis
eh eh, I just had to comment on this one: *nobody* says that *computation* is like photosynthesis. it is *consciousness* as a biological phenomenon that bears similarities with other biological phenomena. as usual, you begin by equating consciousness and computation, at which point your argument is a winner by default...
@Massimo
Quite right. I just typed the wrong word. Sorry about that. The linked article uses "consciousness" not "computation".
Practical simulation already has a precedent in computer science: emulation and virtualization. You can emulate Windows 95 within a virtual machine on an iPad, even though the two operating systems use drastically different processing architectures.
Even though your emulated x86 processor isn't a "real", "physical" thing, for all intents and purposes it still does everything a real one would do. You could play Solitaire on the emulated Windows 95 system. It wouldn't care whether it's running on a real x86 processor or on a virtualized one.
Assuming we can reliably virtualize physical interactions, why should photosynthesis care that it's not generating physical oxygen? If the simulation worked, it would produce simulated oxygen. Of course, there would be no oxygen coming out of the processors, but we would understand what's going on.
Similarly, it wouldn't be the processors and the memory that would experience qualia; it would be the simulated person reporting it.
Qualia aren't a physical substance that we can measure. Qualia aren't even something we can directly detect in other people. But if the simulated person reported that it experienced "something", then I'd consider this a hint that we're on the right track.
Once we get to that stage, we can devise experiments which would help us answer several questions about how life and the mind works. The big issue is to accurately simulate real physics, which, unfortunately, is something which probably can't be done in classical computers. We might have to settle for rough approximations, but I'm curious what we'd come up with even then.
Ethics would probably be an issue. There are some serious ethical dilemmas concerning running experiments on virtualized humans. The experiment I'd like to run is selective lobotomy of various segments of the brain. Keep the communication parts active, to allow the individual to report on his/her experience, but slowly remove small regions of the brain to establish what's the minimal structure required for qualia to be reported.
"It's the software, the algorithm, the pattern that's conscious."
Is the algorithm written on paper conscious?
I don't think a single neuron is conscious, I think it's the billions of neurons in a network all working in parallel that somehow create consciousness.
"Is the algorithm written on paper conscious?"
Kind of. I'm a dualist of a kind, but not a Cartesian dualist. My own views are rather weird and certainly not representative of CTM proponents as a whole, but this is what I think:
I think the mind is a mathematical structure, so it only exists in the Platonic sense. It is not a physical phenomenon but an abstract phenomenon. Like all mathematical objects, I think the physical representation of the mind is irrelevant, whether on paper or in a brain.
But to be conscious in the same way that we are, it needs to have an environment, input, output, time. So to really capture a conscious mind on paper you would probably need to have an algorithm capable of simulating a world for that mind (although not necessarily much of a world). I don't think the algorithm actually needs to be run. I think running an algorithm allows us to explore what happens, but any minds existing within an algorithm exist independently platonically. I believe all mathematical structures which hold minds within them exist, just as all mathematical objects exist platonically, except that those which hold minds will contain observers that perceive those objects to be physical reality. I think our universe is such a mathematical structure.
This view is called the Mathematical Universe Hypothesis.
"I don't think a single neuron is conscious"
And I don't think a single addition is conscious. It's the complex pattern of additions and other operations taken as an aggregate that is conscious.
I don't think that any sequence of additions and subtractions, whether on a calculator or an abacus or on paper, is conscious. If it were conscious, and you changed the radix, or base, would it still be conscious?
@Max
I agree, a sequence of additions and subtractions cannot be conscious, because that's just a number.
I said "and other operations". You need to be able to perform conditional operations and jump to different parts of the algorithm, go in loops and that sort of thing.
And no, changing the radix or base makes no difference. A radix is used only to represent numbers to humans. It has no bearing on the fundamental logic of mathematical operations. Any operation on an algorithm which maintains the essential logic of what's going on makes no difference as far as I'm concerned.
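A quick illustration of the point about radix (a hypothetical example; the base only changes how the numbers are written down, not the operation itself):

binary_sum  = int("1010", 2) + int("0110", 2)   # 10 + 6, written in base 2
decimal_sum = 10 + 6                            # the same addition, written in base 10
hex_sum     = int("a", 16) + int("6", 16)       # and again in base 16

print(binary_sum == decimal_sum == hex_sum)     # True: one computation, three notations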
Programming on non-classical/digital substrates is an active research field. E.g., see Analyzing Boundaries of Computation Group, National University of Ireland. (The underlying "computer" of a programming language does not have to be the standard, digital computer.)
As for code that will make a (neuromorphic) robot conscious, I expect computational reflection has to be a part of that code.
(BTW, when you do upload your mind to your robot companion, it will from that point on literally have a mind of its own, and who knows what could happen.)
I don't expect any code to make a robot conscious. The brain doesn't execute code, it's all hardware, neural networks.
Of course the brain is executing code. What else could it possibly be doing? Brain scientists working on the BRAIN initiative think it is.
"But if we really understood the brain's language, the brain's code, we could potentially recreate everything you do with your own arm."
John Donoghue, director of the Brown Institute for Brain Science at Brown University.
You were talking about code as in a computer program. The BRAIN initiative is mapping neurons, which is more like the architecture. I just said it's all neural networks.
Max,
But what about neuromorphic computers? They do not have "program code" in that sense. They attempt to emulate the neural networks in the brain using analog computation. Of course, the underlying implementation is still completely different than that of a biological brain and from a scientific perspective the only benefit over "digital" computation is that of efficiency. So, do you think a neuromorphic computer can be conscious?
I don't know how neuromorphic computers are actually implemented. All I can say is the better they mimic the brain, its architecture, and its parallel, asynchronous, analog processing, the more likely they can be conscious.
On a side note, conscious computers would be needed for brain uploads, but I wouldn't want my robot slave to be conscious.
A neural network made of logic gates (like the one IBM is making in silicon) executes programs, aka code.
http://www.research.ibm.com/cognitive-computing/neurosynaptic-chips.shtml
I was thinking more of the FACETS project, which I believe is going to be the basis for (part of) the Human Brain Project's planned neuromorphic computers. These have analog models for the neurons/synapses, which attempt to mimic the response of the real neurons closely and are meant for simulating actual parts of the brain for purposes of studying them. The particular connection configuration that is being studied must, of course, be programmed into these chips but this type of programming is very different than what we normally mean by "program code" that runs on general-purpose computers.
Of course, the only real reason for this is efficiency but I was curious what the people who maintain that only analog computers can be conscious will say about this kind of a computer running a simulation of actual brain circuits.
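For a rough sense of what "analog models for the neurons/synapses" means here, below is a sketch of a leaky integrate-and-fire neuron simulated digitally. The parameters are arbitrary and this is not the actual FACETS/Human Brain Project model; it just shows the kind of continuous dynamics such chips implement directly in hardware:

def lif_spike_times(input_current, dt=0.001, tau=0.02, v_rest=0.0, v_thresh=1.0):
    # Integrate a leaky membrane potential and record when it crosses threshold.
    v, spikes = v_rest, []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + i_in) / tau   # leak toward rest, driven by the input
        v += dv * dt
        if v >= v_thresh:                   # threshold crossed: emit a spike, reset
            spikes.append(step * dt)
            v = v_rest
    return spikes

print(lif_spike_times([1.5] * 1000))   # spike times (in seconds) under constant input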
The Chinese Room starts a whole separate debate over what counts as "true understanding" and whether it requires consciousness. A more obvious example of consciousness without intelligence is pain. I'm pretty sure there's no sequence of calculations that can be entered into a calculator to produce the experience of pain, like entering 2 produces mild pain, and entering 9 produces severe pain. If there is, then I guess it should be outlawed.
On the other hand, I think it's plausible for an unintelligent creature with a nervous system to feel pain.
Pain seems to simply be aversional sensations generalized.
Max, the bottom-level operations of the brain are just as elementary as a transistor. But much, much slower.
I'll have to listen to the podcast before I comment. I don't think I agree with some of Massimo's stronger skepticism/criticism of the computational theory of mind (although he's much more careful than, frex, Searle). I think computationalism is very powerful, and I think it not only could explain consciousness, but already can. I've already been through Searle's ideas, and I think I can handle all of them, but, as I say, newer analyses seem to be a lot more sophisticated than his.
Yeah, and they're parallel and continuous, not sequential and synchronized to a clock. When you look at a picture of a square, you see that it's a square right away. A simple computer might trace out the square one pixel at a time and keep track of the segment lengths and angles to determine that it's a square.
A simple computer, yes, but most people investigating computer vision and the intersection of philosophy of mind and computer vision would not attempt to simulate vision that way.
They can simulate neural networks, which gets closer to a simulation of the brain, but the simulations on a standard computer are still doing one arithmetic operation per clock cycle.
As others have said, if you're using a digital computer that's true. Although, as others have suggested, "digital computer" is just a simplifying model of the occurrences in a real analog machine.
Since a digital, sequential computer can ultimately do everything an analog, parallel computer can do, I think the difference is moot unless we're talking about efficiency.
Do you have any proof of this statement? From my readings this year, I'm not certain this is true. It's certainly not true of a Turing Machine.
"Do you have any proof of this statement? From my readings this year, I'm not certain this is true. It's certainly not true of a Turing Machine."
It is true of a Turing Machine. A Turing machine can do anything that is computable, and an analog computer cannot do anything that is not computable.
I think you may be being led astray by theoretical discussions of models of analog computers with perfect, infinite precision analog values. These could in principle perform uncomputable calculations. However, they are also physically impossible to build, because no measurements can ever be made to infinite precision.
It's an open question whether real world physics is computable. Many scientists believe it is, such as Max Tegmark. Others, such as Roger Penrose, believe it is not. But if there is a debate, you can be sure that nobody has yet found a device which has been shown to perform a non-computable computation. If analog computers could do anything Turing machines could not (given enough resources), the debate would be over.
Sorry I can't find a good reference to back this up right now. Do you have a reference to the contrary? I have actually found some of these, but they are not convincing to me for various reasons (e.g. discussing impossible perfect precision machines). Perhaps if you show me where you get this idea from I will be able to explain why I don't think it is correct.
"It is true of a Turing Machine. A turing machine can do anything that is computable..."
No, that is absolutely not true. I won't be able to go over my papers and give a specific limitation on this, but what you say is absolutely not true. This is the restatement of the Turing Thesis that many people incorrectly use.
Sorry, BubbaRich, I'm not going to accept that without a reference. I think you're getting the technical terms a little bit confused.
Computable is pretty much defined as that which can be achieved by a Turing machine. The Turing thesis shows that a Turing machine can calculate anything that is effectively calculable, i.e. by means of an algorithm, and the formal definition of computability in computer science is to do with effective calculability.
There are functions a Turing machine cannot compute, but which could be computed by hypothetical hypercomputers, but these are classed as "uncomputable".
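For anyone unfamiliar with the formalism, the machine in question is simple enough to sketch in a few lines. The example rules below are a made-up unary incrementer, not any particular machine from the literature:

def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    # Simulate a one-tape Turing machine: read a symbol, write, move, change state.
    cells, head = dict(enumerate(tape)), 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

rules = {
    ("start", "1"): ("1", "R", "start"),   # scan right over the existing 1s
    ("start", "_"): ("1", "R", "halt"),    # append one more 1, then halt
}
print(run_tm(rules, "111"))   # 1111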
I'm using the terms in this technical sense, and I am positive that I am correct in my usage. Please explain why you think I'm wrong.
BubbaRich, what you are saying is definitely wrong, unless you are using non-standard definitions.
From the Stanford Encyclopedia of Philosophy:
"There are various equivalent formulations of the Church-Turing thesis. A common one is that every effective computation can be carried out by a Turing machine."
Here is what Turing himself said (referenced from the SEP page):
"LCMs [logical computing machines: Turing's expression for Turing machines] can do anything that could be described as 'rule of thumb' or 'purely mechanical'." (Turing 1948: 7)
and:
"This is sufficiently well established that it is now agreed amongst logicians that 'calculable by means of an LCM' is the correct accurate rendering of such phrases." (1948: 7)
Again, I shouldn't do this since I can't get into the documents right now, but the Church-Turing thesis is slightly different than what Turing said about his machines.
@BubbaRich
The Turing Thesis states:
"We shall use the expression 'computable function' to mean a function calculable by a machine, and let 'effectively calculable' refer to the intuitive idea without particular identification with any one of these definitions."
And the thesis went on to show that any such function could be computed by a Turing machine. This is what I'm saying.
I also make the claim that there exists no analog computer which can compute an uncomputable function. If you think I'm wrong, please send me a reference.
I also make the claim that it is impossible in principle to build an analog computer which can compute an uncomputable function. This claim is an open question, but seems likely.
Massimo, am I right to assume that you would disagree with Gerard regarding the possibility of uploading a mind to a substrate, even to a computer very similar to a human brain (perhaps, even, an identical human brain)? I seem to recall from a previous discussion that you maintained that under circumstances similar to the above, there would be no "transference" of consciousness. Rather, a mental clone would be created. Would this be an accurate description of your position?
I would also be curious to know, in your view, how much tinkering could be done on a system like the human brain until it can't instantiate consciousness any longer. For instance, if it were possible to devise some alternative neurochemistry (e.g. a different set of neurotransmitters and receptors) which would be capable of sustaining the same neural architecture/macrostructure (neuronal connections and firing patterns and such), would that be sufficient to instantiate consciousness?
I'm interested in Massimo's answer, too. I don't think there's any way you could claim that a transference of consciousness has taken place, because of the Transporter Problem.
Your second paragraph is a good argument against people claiming that the computationalist theory of mind doesn't work.
In the strictest sense, I don't believe a transference of consciousness occurs either. However--again, strictly speaking--I'm also not sure there's any actual transference of consciousness going on between different states of the same individual's brain.
Consider the physical person A, who at time t can be said to have the brain B(t). Suppose n amount of time elapses, during which time there naturally occur certain changes in the brain of person A; let's label the brain at time t+n as B(t+n). Assuming that person A is conscious at time t while possessing the brain B(t), will there have been a transference of person A's consciousness between the temporally and physically distinct brains B(t) and B(t+n)?
Or, if you would like to phrase it differently, do persons A(t) and A(t+n) possess the same consciousness? And if they do, what is the meaningful difference between this "transference" and that from person A's brain B(t) to a "computer" very similar to B(t) (indeed, perhaps identical to B(t))?
(Please forbear any mistakes in my use of symbols; I attempted to be as clear as possible, but I have no training in symbolic logic.)
Yes, I think that is a very defensible position. However, you still have the transporter problem: You've created a new person, but the old person still thinks it exists, so consciousness has been multiplied, not transferred.
Hi Bjorn,
I think what you've said is fine, however it's then a question of semantics. It depends what you mean by a "transfer of consciousness".
Most people are perfectly happy to believe that the changes over time in their minds constitute a continuous, evolving existence for that mind. I say that's fine too, and if so, then I think mind uploading should be considered to be much the same.
How do you deal with the Transporter Problem, DM?
"How do you deal with the Transporter Problem, DM?"
Kirk survives.
If the original Kirk is not destroyed, and a new Kirk is created on the planet, then both Kirks inherit the original identity of Kirk, but that identity now diverges into two like the forking of a river.
Neither is metaphysically speaking any more authentic than the other. From the point of view of Kirk before the transporter is engaged, he has a 50% chance of ending up where he is or at the planet's surface.
This is very like how the Many Worlds Interpretation of QM explains the apparent randomness as the universe splits every time a wave function collapses. There is no original universe, no copy universe. Both universes have equal claim to the identity of the original, and only from the point of view of those within the universes does it look like randomness and probability.
I've written extensively on this on my blog,
http://disagreeableme.blogspot.co.uk/2012/06/essential-identity-paradoxes-resolved.html
It's unfortunate I don't really have time to join this discussion (preparing for a trip to Brazil...). But, DM, "Kirk survives"? Really? Even if the CTM is correct, Kirk dies or is copied; to call that "survival" is to use the term in a way that is bizarrely out of step with common understanding.
Let me ask you: if you were about to upload, but could do it only under the condition of destroying the original, would you really do it? Even Chalmers balks at that prospect...
Hi Massimo,
What is Kirk? He's not just a collection of atoms - the atoms in our bodies are changing all the time. He's certainly not even his body, as most naturalists would agree that if his brain were transplanted into a new body the resulting entity would be Kirk.
For various reasons I won't go into here but I have expressed on my blog, I don't think he's even his brain, because the CTM means the mind is substrate-independent, and the mind is a computational process. I think the identity of Kirk belongs to this computational process - to the mind.
So if Kirk is teleported, then the relationship of the mind of the teleported Kirk to the pre-teleportation Kirk is much the same as the relationship between my mind at time t=0 and time t=1. Later mind states are determined by prior mind states. The computational process continues uninterrupted even if it occurs in two different physical places.
It's like hibernating a computer, transferring the saved state and resuming. From the point of view of a computation on the computer, nothing significant has changed. The computation continues, and it's the same computation.
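A toy sketch of the hibernate-and-resume point (illustrative only): the little computation below can be stopped, serialised, moved to a different machine, and continued, and nothing inside it changes.

import json

def step(state):
    state["count"] += 1
    return state

state = {"count": 0}
for _ in range(5):
    state = step(state)

saved = json.dumps(state)      # "hibernate": write out the entire state
state = json.loads(saved)      # "resume", possibly on different hardware
for _ in range(5):
    state = step(state)

print(state["count"])   # 10: the computation neither knows nor cares that it was moved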
Would I upload? Abstractly, yes. In practice, I'd want to be pretty confident the technology worked, I'd need a compelling reason to leave the physical world behind, and I'd need to be sure I wouldn't be upsetting my loved ones and that I had legal rights as a virtual human being. So, in practice, perhaps not.
I probably wouldn't want to be the guinea pig that does it for the first time, but I'd probably be willing to follow in the footsteps of others.
I would want it done in such a way as the destruction of the original happened pretty much simultaneously with the transfer to the virtual world, or else I would want to be unconscious while the upload and destruction took place.
In fact, if all these conditions were met, it might be preferable to destroy the original. If I really did have a good reason to want to leave the physical world, e.g. some painful terminal disease, I would not want to allow the 50% possibility of continuing to exist in the physical world.
Why call it a "computational theory of mind" if it doesn't explain consciousness?
The "computational theory of mind" is a wrong theory. That's the answer.
Or maybe it's right, and maybe it does explain consciousness.
DeleteMassimo:
I'd like to make a respectful, and I hope constructive, suggestion about the approach you use when conducting these podcasts.
First, let me say how much I appreciate that you are able to get these important thinkers to take part in an open discussion.
But what I find in many of the podcasts I have listened to is that the guest spends more time responding to your thoughts than to stating and expanding on his or her own.
For example, O'Brien seems to favor a computational model based on analog computing. But he never really had time to describe it in detail: what it is, how it differs from digital, why it is better, what research is happening using it, why emulation of analog computers on a digital computer won't work, etc.
What would you think of adopting more of an interviewer style, where the goal is to let the guest do most of the talking, and to concentrate on open-ended questions which explore the points the guest raises about his or her ideas, rather than suggesting topics for the guest to respond to?
(I made a similar comment on the podcast site).
It strikes me as a bit odd at the outset to combine 'computational' and 'theory'. What is a theory? If one thinks of a theory in physics, it is typically expressed as a collection of field equations. But these field equations are computational in that they are used (as the basis of calculation) to verify the theory by matching the predictions they make with experimental data. A theory that isn't computational wouldn't be useful. In fact, I don't know what a non-computational theory would be (given what we mean by 'theory').
ReplyDelete(this is a bit of a repetition of a comment I made on the podcast but it seems the comments on this blog are more active)
Can someone explain what the philosophical meaning of "analog computer" is? Physically, all computers are actually analog, including the ones we call "digital", and the reason we call them digital is because we assign digital meaning to certain analog properties. We do that in order to make it (much) easier to work with these types of computers, but there is nothing fundamentally different about the computers we call digital (on a physical level).
I'm only familiar with the term "analog computer" as it is used in science/engineering contexts, where the only benefit of analog computers (i.e. computers that are not designed to be interpreted as digital) is that they can sometimes do the job with significantly fewer resources than a corresponding digital computer, at the expense of less accuracy and less flexibility. But, at least in science and engineering, there is nothing that an analog computer can do that a digital one cannot, if cost were not the issue. However, the implication here is that the philosophical meaning of "analog computer" involves something more fundamental. So, can anybody explain what exactly this is?
Also, as another commenter here mentioned, people today are developing neuromorphic computers, which are partially analog and are specifically designed for the purpose of efficiently emulating the neurons in the brain. Now, the only reason the neuroscientists and engineers go to the trouble of doing this is because simulating the whole brain on a traditional digital computer is cost-prohibitive, whereas the neuromorphic computer, due to being partially analog, can make this task realistic. But I wonder, do the philosophers here consider such a computer to be "analog" in the same sense as the term was used in this discussion?
Digital computer is a simplifying model/abstraction, mostly based on Turing's model machine. Designers go to great lengths to compartmentalize digital behavior inside an obviously analog machine.
Digital computers can do so much in our world because we understand the model, and we can exploit and control it with more and more elaborate programming languages and layers of control. Philosophically, an analog computer can do many things that a digital computer in this sense cannot do.
I think you may be misreading the Turing thesis, but I'll address that in my response to Filippo.
"Digital computers can do so much in our world because we understand the model, and we can exploit and control it with more and more elaborate programming languages and layers of control."
But similar reasoning applies to analog computers as well. When you design an analog computer you still need to understand the model of the system that you want your analog computer to approximate and you still need to restrict the analog computer to modelling the particular type of system. Also, "layers of control" and some form of programming (although different) do apply to analog computers as well, otherwise there is no way to get them to do what you want.
"Philosophically, an analog computer can do many things that a digital computer in this sense cannot do."
But the question is what, specifically are those things and what, specifically is preventing a digital computer from accomplishing the same?
"I think you may be misreading the Turing thesis, but I'll address that in my response to Filippo."
I'm not sure what you mean here (even after reading your other comments). An analog computer is typically designed to mimic certain physical processes, but those processes still follow the known laws of physics. The known laws are all computable, in the sense that, in theory (but often not in practice, due to resource constraints), one can design a Turing machine that solves the equations that describe these laws. But other than that, I'm not sure what the Turing thesis has to do with it. Some people object that "solving the equations" is not the same as doing the actual thing, but that criticism would be equally applicable to analog computers as well.
I think you're wrong here. A digital computer can do anything an analog computer can do and vice versa. The only difference is in which is more efficient for a given task.
I was watching NHK World news (from Japan). As usual, the program had a section on the cleanup at Fukushima. As usual, the program showed tired (human) workers trying to contain radioactive leaks, wearing flimsy “bunny” suits as protection against radioactive contamination.
The question that always comes to my mind is: where are the robots? Why can't Japan, the country of robotics, deploy some robots, at least to perform the most dangerous tasks, in the most dangerous locations? This is particularly puzzling when you see the next section, showing a Toyota assembly plant: the factory floor is full of robots, doing essentially all the work.
The answer is that the robots have zero intelligence. They work well in the assembly plant, because there they perform repetitive actions, in a completely predictable environment. The humans on the factory floor are there mostly to deal with unpredictable incidents.
A post-accident nuclear plant is, on the contrary, an unpredictable environment. It takes some minimum intelligence to work there: no robot in existence is intelligent enough. We don't need human intelligence. A robot with the intelligence of an insect could be useful: at least it could walk around, avoiding obstacles while carrying radiation-measuring devices.
The sad truth is that a robot with the intelligence of an insect is not possible today and, in fact, it may never be possible. The AI program, after five decades of hype, has been a total failure. It is impossible today to program a remote-handling arm to perform simple object-oriented tasks, like “take that test tube and pour the contents in this container.” This impossibility follows not from practical but theoretical considerations.
As Fortnow explains in “The Golden Ticket,” “Feed in a movie of a human hand tying a knot, and immediately have the computer repeat the process with robotic hands. [...] We can if P=NP.” Unfortunately, P is almost certainly not equal to NP, so programming robots by example is not possible. One can generate tables of actions to cover the most common tasks, but the problem is, the table size grows exponentially with the duration of complex tasks. No matter how much memory you have and how fast your processors are, you can't beat an exponential. Only relatively simple tasks will ever be programmable, like the ones performed by the robots in the Toyota factory.
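The blow-up is easy to see with illustrative numbers. Assume, say, ten possible actions per step of a task; a lookup table covering every possible course of action grows as follows:

actions_per_step = 10
for steps in (10, 20, 40, 80):
    print(steps, actions_per_step ** steps)   # table entries needed
# At 80 steps the table already needs 10^80 entries, roughly the number of
# atoms in the observable universe; no memory technology can hold that.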
The AI people have failed so completely because they have been mesmerized by the Turing thesis: essentially any task can be performed by a generic computing device, given enough time. Unfortunately, for many tasks, the time grows exponentially with problem size. So the Turing thesis is meaningless, because computational efficiency, not computational possibility, is what matters. Time is our most scarce resource. Generic computing devices are generically useless. The cleanup in Fukushima is a dramatic example of this.
Of course, a materialist (like me) will assert that a physical system, like the brain of a bee, can be capable of doing amazing things. It is very efficient, because it is not a generic, programmable, computing device. However, nobody knows how to reproduce it artificially. This is unfortunate, because bees are capable of performing tasks like fixing leaking storage tanks.
If you can show me all of those great plasma and LCD televisions that evolved without human assistance, then your statement has a point.
The computation necessary is not even very complex. It requires a few inputs, a few outputs, memory, and the ability to relate and connect different memory structures.
Qualia are just sensor readings (like a thermostat temperature) perceived and related to all other existing memories (including memories of sensations and actions). What is it like to be a thermostat?
What is your definition of "intelligence"? Robots already have the "intelligence" of insects. You are conflating two different types of machines, simple machines that are programmed with a sequence of actions, and more complex machines with the ability to sense their environments and select actions in response to sensory information.
I think that you are reading Turing's Thesis wrong. In fact, it definitely doesn't say "essentially any task can be performed by a generic computing device." I've been researching the best thinking on non-Turing computing machines for several months, but I haven't looked at it in a few weeks. I need to look at it again to more fully participate in a discussion like this.
Hi Filippo,
The so-called failure of AI may just be indicative that AI is a very hard problem to solve.
We can't yet create complex living multi-cellular organisms from scratch either. That doesn't mean vitalism is true.
The DARPA Robotics Challenge aims to develop robots precisely for a Fukushima-like scenario. The robots don't have to be completely autonomous, but autonomy factors into their score. They certainly don't need to be conscious.
http://www.theroboticschallenge.org
Neuromorphic computer programming involves the development of new programming languages, like IBM's Corelet.
Can a single neuromorphic computer run multiple different software applications the way general-purpose computers do, or is this a hardware description language like VHDL, which describes the hardware architecture?
IBM's neuromorphic computer can run multiple different applications written in Corelet. "New corelets are written and added to the library, which keeps growing in a reenforcing way." (from the paper above)
TrueNorth is the hardware (brain), corelets (written in Corelet) are the software (mind).
What about a quantum computational theory of mind?
"What We Can Learn from the Quantum Calculations of Birds and Bacteria"
Because quantum computers are intermediate between digital and analog, a "quantum computational theory of mind" is not a "computational theory of mind." It has some aspects of a substrate-dependent theory.
Quantum computers cannot be simulated by Turing machines in polynomial time. The possibility of exponential-time simulation is meaningless, because exponentially-hard simulations cannot actually be performed, even using all the resources of the observable universe.
"The possibility of exponential-time simulation is meaningless, because exponentially-hard simulations cannot actually be performed, even using all the resources of the observable universe."
I wouldn't say it's meaningless. It depends on whether you're approaching the question from an engineering or a metaphysical standpoint. Metaphysically, I think it ought to be possible for a standard digital computer to run a conscious algorithm. Practically, it may be completely infeasible, and perhaps quantum computations are required to make it possible.
Only metaphysical concerns are relevant if we want to consider whether the Chinese Room thought experiment works, for example. If a quantum computer can be conscious, then the CR doesn't work, because if Hilbert can have a thought experiment with an infinite hotel, then I don't see why we can't allow Searle a thought experiment with an infinite amount of time to conduct the Chinese Room by simulating quantum processes in exponential time. Practical feasibility is simply not relevant to the philosophical argument.
DM,
>"The possibility of exponential-time simulation is meaningless, because exponentially-hard simulations cannot actually be performed, even using all the resources of the observable universe."
I wouldn't say it's meaningless. It depends on whether you're approaching the question from an engineering or a metaphysical standpoint.<
Agreed, but AI is an engineering, not a metaphysical issue. Besides, unless you are an extreme Platonist (OK, YOU are), engineering feasibility should eventually affect your metaphysics.
I am an ultrafinitist mathematical realist: I do not believe in the reality of infinite numbers (like regular finitists), but I don't believe in very large numbers either, because of their actual non-realizability.
On the CTM, I'm generally interested in questions of principle, not practice. I think we're probably more or less on the same page here, then, since I agree that it may be that unconventional physical architectures are necessary to make strong AI feasible (although I am far from certain that this is so).
@ Filippo Neri
> Because quantum computers are intermediate between digital and analog, a "quantum computational theory of mind" is not a "computational theory of mind." It has some aspects of a substrate-dependent theory. <
I should have said a "quantum mind" theory.
Disagreeable Me: "I'm generally interested in questions of principle, not practice."
And, of course, the opposite for me. :)
My definition of 'computation': Whatever code can run, whether humanly-made or autonomously-generated, on a physical computer, whether conventional, quantum, neural, DNA, ... .
Philip, with your definition of computation, the human mind is, in fact, computational. It has to be, because it is a physical system.
However, your definition is non-standard.
Yes. I can't think of anything that is physical (as scientists view that domain) but non-computational.
@ BubbaRich,
>If you can show me all of those great plasma and LCD televisions that evolved without human assistance, then your statement has a point.<
LCD TVs don't grow on trees. So what?
>What is it like to be a thermostat?<
Not fun at parties.
>What is your definition of "intelligence"? Robots already have the "intelligence" of insects.<
Robots can't move as effortlessly around obstacles as insects do. Also robots can't perform tasks that insects perform routinely, like autonomously building and repairing structures.
Note that when I talk of performing a task, I always mean controlling the performance of the task: I assume that existing sensors and actuators are already capable of performing almost any conceivable task. The remaining problem is developing the “software” to control the actuators, based on the inputs from the sensors.
Hi Massimo,
This is a great podcast. The Rationally Speaking podcast is at its best when you guys talk about deep philosophical stuff, instead of stuff like wine tasting.
But you've got to do something about Julia's sound. Perhaps recording the voices of the two of you separately and combining them later would do the trick. It's extra work though.
Anyway, great podcast.
Hi Massimo,
There are some issues I have with what was discussed on the podcast.
Analog computers:
Firstly, I think Gerard O'Brien is wrong to think that the analog/digital distinction is significant. It cannot be, because a digital computer can emulate an analog computer to an arbitrary level of precision. It doesn't necessarily even need to be all that precise, since analog computers are noisy and imprecise by their very nature. The only thing analog computers give you over digital computers is efficiency for certain tasks.
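The mass-on-spring example from the top of this thread makes the point nicely. A rough Euler sketch (step size chosen arbitrarily) reproduces the "analog computation" digitally, and shrinking the step buys whatever precision you want:

k, m, dt = 1.0, 1.0, 0.001           # spring constant, mass, time step
x, v = 1.0, 0.0                      # start displaced by 1, at rest
for _ in range(int(3.14159 / dt)):   # integrate for roughly half a period
    a = -k / m * x                   # Hooke's law
    v += a * dt
    x += v * dt
print(round(x, 2))   # approximately -1.0: the mass has swung to the other side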
Pancomputationalism
I think pancomputationalism is both correct and relatively useless. Yes, a rock really is doing a computation, but of course that doesn't mean it is useful to think of it in this way. I certainly wouldn't unless forced to concede that it is when prompted with such a question. But not all computations are interesting, much less conscious. The computations carried out by a brain are interesting, and produce consciousness. There is something in the complexity and self-reflecting nature of the computations in the brain that produces consciousness, not the mere fact that the brain is computing. The nature of the computation is crucial, so questions about rocks are irrelevant.
Dennett's response to the Chinese Room
I don't think memorising the rules would let you understand Chinese. I think the implementation of those rules would manifest a second, distinct mind. I was disappointed that this possibility was not mentioned.
Intelligence vs Consciousness
Everyone seemed reasonably prepared to accept that the simulation of a brain might be intelligent, but not conscious. But this goes against Massimo's stated belief that consciousness is functional, that it is necessary for our intelligence. If we need consciousness to be intelligent -- if this level of intelligence is impossible without consciousness, then I think achieving intelligence in an electronic device is good reason to think that that device (or its software) is also conscious.
@ Disagreeable Me
> I think pancomputationalism is both correct and relatively useless. <
What qualifies as the hardware? What qualifies as the software? And what qualifies as the information that is being processed?
> There is something in the complexity and self-reflecting nature of the computations in the brain that produces consciousness, not the mere fact that the brain is computing. <
What does complexity have to do with it? And coding a feedback loop is simple, not complex.
> Everyone seemed reasonably prepared to accept that the simulation of a brain might be intelligent, but not conscious <
I am not. Real intelligence presupposes consciousness.
> But this goes against Massimo's stated belief that consciousness is functional, that it is necessary for our intelligence. If we need consciousness to be intelligent -- if this level of intelligence is impossible without consciousness, then I think achieving intelligence in an electronic device is good reason to think that that device (or its software) is also conscious. <
Do you really believe your personal computer is experiencing subjective awareness?
Hardware is the physical substance. Stuff made of atoms. Transistors. Wires. Neurons. Neurotransmitters.
Software is the logical structure of the information processing happening. I would count the pattern of connections between transistors on a circuitboard as software for these purposes, as would I count the pattern of connections in a brain.
"What does complexity have to do with it? And coding a feedback loop is simple, not complex."
It is probably not a coincidence that brains are the most complex objects known. Of course, I'm not saying that anything complex is conscious. I'm saying there is something about the way the brain is organised that gives rise to consciousness, and complexity is a part of it. But I don't think consciousness is all or nothing. I think some of today's computer programs might be conscious in a very dim, primordial sense. I think a certain level of complexity is required to have a rich conscious experience, just as a certain level of complexity is required to achieve intelligent behaviour.
Similarly, I didn't say that anything with a feedback loop is conscious. I am saying that consciousness requires software to have certain self-monitoring capability.
"I am not. Real intelligence presupposes consciousness. "
I was talking about those on the podcast.
I'm not sure if I'm with you here. If you mean that it is incoherent to talk of real intelligence without consciousness, then I'm not sure I agree. One can have the appearance of intelligence without consciousness, as with various chat bots. One can also have some pretty intelligent seeming behaviour in various automated management systems without consciousness. If we don't want to conflate the two words, then I think intelligence ought to refer to problem-solving ability alone and so does not presuppose consciousness.
However, with that said, I think achieving human-type intelligence with the same kinds of problem solving abilities humans have is probably impossible without consciousness.
"Do you really believe your personal computer is experiencing subjective awareness?"
No. Nor did I say that I did. I didn't say we had achieved intelligence in an electronic device. I was discussing a hypothetical. If we made an electronic device which behaved as intelligently as a real human, then I think there would be reason to believe it would be conscious.
On Analog Computers: @Disagreeable is correct.
On Pancomputationalism: "I think pancomputationalism is both correct and relatively useless." That feels to me like saying "knowing that electricity and magnetism are actually the same thing is both correct and useless." Understanding the underlying nature of things tends to be useful even if you don't know what the use is yet.
On intelligence vs. consciousness: This, I believe, is where the lack of rigor in definitions creates the most trouble (and thus the most verbiage).
First: Intelligence is clearly a relative term. It is pointless to talk about something being "clearly intelligent" (a bee?) and something else being "clearly not intelligent" (a rock?). I submit that it is only useful to talk about the intelligence of something if you do two things first. 1. Define the physical boundaries of the something and 2. Define the specific information processing capabilities required for "intelligence". Do you require use of language? Specific pattern recognition? Generic pattern recognition? Long term memory (note: the rock may win here)?
Second: Consciousness. No one, I think, has defined this term well. Therefore, I submit this working definition: Consciousness is the subjective experience of information processing (or calculation as described in the podcast). Thus, in order to discuss consciousness you must 1. define the physical subject (for example, specify everything that is physically in the [Chinese?] room), and 2. define exactly what counts as "information processing/calculation".
Traditionally when people speak of consciousness and intelligence, they're speaking about a well defined subject (a human), and the package of information processing that comes with it. It is okay to arbitrarily consider only human or other biological subjects, but then you are simply limiting yourself arbitrarily. It's easy to fall into the trap that there is something special about human consciousness because we simply cannot create a machine that has the massive information processing capabilities of the human brain (yet. My money's on Kurzweil).
@Disagreeable, while my definition of consciousness requires a physical subject, I think your understanding of consciousness boils down to specifying a set of information capabilities, and then referring to the consciousness that would result if those capabilities were manifested in a physical subject.
@ Disagreeable Me
> Hardware is the physical substance. Stuff made of atoms. Transistors. Wires. Neurons. Neurotransmitters.
Software is the logical structure of the information processing happening. I would count the pattern of connections between transistors on a circuitboard as software for these purposes, as would I count the pattern of connections in a brain. <
I'm asking you what qualifies as hardware, software, and information in regards to your belief in pancomputationalism (i.e. the belief that the universe as a whole is an information-processing system).
> It is probably not a coincidence that brains are the most complex objects known? <
And my personal computer is vastly more complex than the first electronic pocket calculator. But I have no reason to believe that my personal computer is any more conscious than the first electronic pocket calculator.
> But I don't think consciousness is all or nothing. I think some of today's computer programs might be conscious in a very dim, primordial sense <
It's absurd. That being said, do you believe that the first replicators in the so-called primeval soup had some kind of rudimentary mentality?
> I was talking about those on the podcast. <
Well, the individuals on the podcast were conflating the metaphorical with the literal. Computers are not really intelligent, and thermostats do not really sense the temperature.
> One can also have some pretty intelligent seeming behaviour in various automated management systems without consciousness. <
"Seeming" is the operative word here.
> No. Nor did I say that I did. I didn't say we had achieved intelligence in an electronic device. I was discussing a hypothetical. <
But your argument seems to imply that we could if we could only make a computer program that was a little more complex.
@ Unknown
> Second: Consciousness. No one, I think, has defined this term well. Therefore, I submit this working definition: Consciousness is the subjective experience of information processing (or calculation as described in the podcast). Thus, in order to discuss consciousness you must 1. define the physical subject (for example, specify everything that is physically in the [Chinese?] room), and 2. define exactly what counts as "information processing/calculation". <
For the sake of argument, let's assume that consciousness is the subjective experience of information processing. So, what exactly is the information here that is being processed to generate consciousness?
@Unknown
I suppose there are two kinds of pancomputationalism. The first states that we can interpret everything as performing a computation, not just brains and computers. This is what I take Massimo to mean, and is what I regard as true and useless.
The second is stating that reality itself is fundamentally a computational process. I think this is more useful, and is relatively close to my own metaphysical position, the Mathematical Universe Hypothesis.
I have no major issues regarding your definitions, apart from the fact that I don't see physical boundaries as being particularly relevant. I think people have not bothered to define these terms because this conversation has been going on in this community for quite a while now on previous threads, and we don't always reiterate what we mean if it doesn't seem necessary.
Nevertheless, your introduction of a bit more rigour is certainly welcome!
@Alastair
Sorry, I didn't realise your question about hardware/software/information was in the context of pancomputationalism. I took you to be talking about the CTM. As I said in my remark to Unknown, I mean pancomputationalism in the sense with which I take Massimo to be using it when discussing the CTM. This is just the claim that we can interpret any object as computing something. A rock computes how to be a rock. Thermal fluctuations within the rock can be interpreted as computing pseudorandom numbers. It's not really much of a metaphysical claim.
However, in regards to my own views on the Mathematical Universe Hypothesis, there is no hardware, the software is the laws of physics and the information is the state of the universe at any given time.
" But I have no reason to believe that my personal computer is anymore consciousness than the first electronic pocket calculator."
I don't think computers can be conscious. I think software could be conscious. So, it depends what software you're running. If you're running an AI simulation with percepts, decision-making, self-monitoring, etc, then I think it's reasonable to suppose it might have some sort of dim rudimentary consciousness, and perhaps more than one if your simulation contains multiple agents.
Again, complexity alone is not enough. It has to be organised in a way sufficiently analogous to the organisation of a human mind for it to be conscious.
"That being said, do you believe that the first replicators in the so-called primeval soup had some kind of rudimentary mentality?"
I don't think the fact that they are biological or replicators is terribly significant. They probably have no more mentality than any other chemical, which is to say none at all or so little as to be negligible. They don't implement that complex self-reflecting human-analogous algorithm that I think is required.
""Seeming" is the operative word here. "
Only if you don't want to distinguish between intelligence and consciousness. If we want to use the term "intelligence" to refer to the appearance of intelligence, and the ability to solve problems, etc, then all we need to show intelligence is "seeming". You're just using the word differently to me. There's no significant argument there.
"But your argument seem to imply that we could if we could only make a computer program that was a little more complex."
Perhaps vastly more complex. Perhaps so complex it could never be designed.
But complexity is not enough. NASA rockets are complex. Supercomputers are complex. That doesn't mean that supercomputers can reach orbit or that NASA rockets can crack encryption codes. Complexity is necessary but not sufficient. The organisation of the algorithm has to have many of the same features as the human mind - the ability to introspect, to solve novel problems, to generalise, recognise patterns, etc.
Call me James (I should have signed that way). What is 'here'? My point is: to discuss consciousness you have to define (or delimit) the physical system and define (or at least describe) the information processing that you then map onto that physical system. If the information processing you're interested in does not map to the physical system, then you can call that system not conscious of it. For example, there are electromagnetic waves passing through the air around and through me, and those waves carry a lot of information. If you choose the physical system as just me and everything inside my skin, it's safe to say I am completely unconscious of all that activity. Now, if you happen to include the phone in my pocket in that physical system, consciousness of some of that information gets turned on. Again, you have to define/delimit the physical system and the information processing mapped to that system.
It might be useful to consider how consciousness may have evolved/emerged in living systems before assuming that it is plausible for designed machines to exhibit the phenomenon.
Living systems don't just process information, they make use of that information in a self-contained fashion. They 'feel' the processing of information and respond by maintaining stability in their environment despite a continuous turnover of their physical parts. They do this by avoiding danger, gathering energy & engaging in self-repair & replication or reproduction.
When we can design a machine that does this perhaps we can start making assumptions regarding consciousness.
@James
What you're saying makes sense but I don't personally think it makes sense to describe any physical object as conscious. I see the mind as an abstract phenomenon, essentially a mathematical object. I don't think the mind has a location in space, I think it is a mathematical structure describing the information processing in a brain. So I would not call a brain conscious, I would call a person conscious. Even after an AI breakthrough, I would not call a computer conscious, I would call its software conscious.
@DisagreeableMe: you say "I don't think computers can be conscious." What's the difference between that statement and "I don't think people can be conscious"? If consciousness is in the software, the latter statement seems to be true.
James
@James
Depends how you define a person. I don't think people are physical objects. As I said above in a comment to Massimo about Kirk, I think people are identical with their minds, not their bodies, and I think minds are abstract. So I don't think brains are conscious, people/minds are conscious.
@DisagreeableMe: I think what you are calling a consciousness, I would call a mind. I think we can be of the same mind, but not of the same consciousness.
James
@James
I wouldn't say "a consciousness". Consciousness is a property of a mind, alongside other properties such as intelligence. I don't know what you mean when you say we could be of the same mind but not of the same consciousness.
@ Disagreeable Me
> I don't think the fact that they are biological or replicators is terribly significant. They probably have no more mentality than any other chemical, which is to say none at all or so little as to be negligible. They don't implement that complex self-reflecting human-analogous algorithm that I think is required.<
What about a bacterium? Do bacteria have any kind of sentience?
> Complexity is necessary but not sufficient. The organisation of the algorithm has to have many of the same features as the human mind - the ability to introspect, to solve novel problems, to generalise, recognise patterns, etc. <
My chess application 'solves' problems, 'recognizes' patterns, etc. No consciousness required.
Question: What exactly do you expect a sentient information processing system to do that an insentient information processing system cannot do?
Hi Alastair,
No, I don't think a bacterium is sentient. It doesn't really do any significant information processing. There is no part of a bacterium that is dedicated to taking input symbols, performing complex processing and producing output symbols.
Your chess application does not solve novel problems. It performs a brute force search of a very limited and predefined set of problems.
It also doesn't recognise patterns in the sense that I mean. It may be able to match what it sees to a predefined list of chess scenarios, but it can't look at an arbitrary set of data and infer general patterns the way humans do all the time.
It also doesn't introspect, and it's missing many of the other features and capabilities human minds exhibit. The "etc." in my previous post is important.
"Question: What exactly do you expect a sentient information processing system to do that an insentient information processing system cannot do?"
Sentience in the sense of consciousness is a subjective attribute perceivable only to the system itself. As such, there is no specific empirically detectable signifier of consciousness.
However, I think it is probably impossible to perform certain tasks without consciousness. I don't think consciousness has to be "added" for the system for it to work, I think it arises as a natural byproduct of the kind of processing needed to perform these tasks.
The classic example of the kind of task that a sentient system might be able to perform that an insentient system might not be able to is of course passing the Turing test.
@ Disagreeable Me
> No, I don't think a bacterium is sentient. It doesn't really do any significant information processing. There is no part of a bacterium that is dedicated to taking input symbols, performing complex processing and producing output symbols. <
"Bacteria Are More Capable of Complex Decision-Making Than Thought"
Question: What about cats? Are cats capable of "taking input symbols, performing complex processing and producing output symbols"?
> The classic example of the kind of task that a sentient system might be able to perform that an insentient system might not be able to is of course passing the Turing test. <
Why can't an insentient information processing system pass the Turing test?
Bacteria making more complex decisions than previously thought isn't really saying much.
Delete" What about cats? Are cats capable of " taking input symbols, performing complex processing and producing output symbols"?"
Yes.
"Why can't an insentient information processing system pass the Turing test?"
Because if it could pass the Turing test it would be sentient, in my opinion.
I could be wrong. This is a hypothesis, not something I assert with conviction. It is my view that in order to successfully behave like a human, one must believe one is conscious and so experience consciousness.
I think that consciousness is necessary for many of the tasks we have evolved to do. We need it to have a concept of self, to think deeply about problems, etc. It is my view that this level of ability cannot be achieved without the experience of consciousness. If it were possible, if consciousness were completely irrelevant to our abilities, I think it is unlikely that nature would have gone to all the trouble of evolving consciousness. All evolution cares about is behaviour leading to improved survival and rates of reproduction, not how we feel about it.
@ Disagreeable Me
> Bacteria making more complex decisions than previously thought isn't really saying much. <
Doesn't it tell you that they are "taking input symbols, performing complex processing and producing output symbols"? If not, why not?
> Because if it could pass the Turing test it would be sentient, in my opinion. <
This doesn't tell me anything except that it is possible to fool some people into believing that an insentient information processing system is actually sentient. It doesn't appear that you can tell me what exactly a sentient information processing system can do that an insentient information processing system cannot do.
> I think it is unlikely that nature would have gone to all the trouble of evolving consciousness. <
But the problem is that you cannot tell me exactly what consciousness is doing. (Being "aware" in and of itself performs no functional role that can be simulated by an information processing system. That's why I asked my question. Why does an information processing system have to experience consciousness in order to perform some functional role? If it does, then what exactly is that functional role?)
"Doesn't it tell you that they are "taking input symbols, performing complex processing and producing output symbols"? If not, why not?"
Because even if it's more complex than previously thought, it's still pretty simple. Can't be more specific because that article is behind a paywall. The information processing bacteria do is probably orders of magnitude simpler even than your chess program.
"This doesn't tell me anything except that it is possible to fool some people into believing that an insentient information processing system is actually sentient."
That's your view. My view is that if it could pass every test we could devise, it could only do so by being sentient.
" It doesn't appear that you can tell me what exactly a sentient information processing system can do that an insentient information processing system cannot do."
Pass the Turing test.
"But the problem is that you cannot tell me exactly what consciousness is doing. "
It's not doing anything. It's a property of the information processing needed to perform certain tasks. Asking what consciousness is doing is like asking what complexity is doing. It has no functional role, it is an inevitable property of the computations needed to fulfill certain functions.
Essentially, consciousness is simply what it feels like to be a sophisticated algorithmic agent with percepts, memory, etc. It exists only subjectively and has no empirical basis. You cannot be such an agent and not experience consciousness. It can have no functional role because it is empirically undetectable.
I've explained why I think this is the case. Evolution only cares about behaviour, yet we experience consciousness. Perhaps this is because it must be so, because this is what it feels like to be an agent capable of the behaviour evolution has selected for. Since consciousness has no empirical consequence in its own right, it seems to me to be parsimonious to assume that consciousness is an inevitable feature of the kind of behaviours evolution has selected for. To achieve those behaviours is therefore to achieve consciousness.
If you are really saying that you think you can behave just like a person, claiming to have a sense of self, experiencing qualia, yelping out when stubbing your toe etc, and yet be insentient, then you are proposing that philosophical zombies could exist.
And I don't think that is the case.
I don't claim to have proven it, but I think this is a perfectly consistent and reasonable position.
Correction: in the last bit, where I say "experiencing qualia", I meant "claiming to experience qualia".
@ Disagreeable Me
> It's not doing anything. It's a property of the information processing needed to perform certain tasks. Asking what consciousness is doing is like asking what complexity is doing. It has no functional role, it is an inevitable property of the computations needed to fulfill certain functions.
Essentially, consciousness is simply what it feels like to be a sophisticated algorithmic agent with percepts, memory, etc. It exists only subjectively and has no empirical basis. You cannot be such an agent and not experience consciousness. It can have no functional role because it is empirically undetectable. <
You're making my point: There is nothing a sentient information processing system can do that an insentient information processing system cannot do. Why? Because consciousness does not perform any function whatsoever.
This also raises a further question: If consciousness does not perform any function whatsoever, then why was it naturally selected? Invoking "complexity" in this context explains nothing.
How complex is a cell's work ...
http://ocrampal.com/node/141 (nice video too)
http://www.sciencedaily.com/releases/2009/11/091126173027.htm
@ Disagreeable Me
You should have free access to this link.
"Bacteria Shed Light on Human Decision-Making?"
@Alastair
(In the following, please understand I'm explaining my view, not asserting that this view is certainly correct.)
You're missing my point, I'm afraid. Consciousness isn't an ability or a function. It's a property. Some tasks are simply impossible without consciousness. You can think of consciousness as a side-effect of solving certain problems rather than something used to solve those problems.
Thinking of it as a side effect isn't quite right either, though. I know you don't like the analogy to complexity but it's the best one I have. It's not that "complexity" in itself does anything, it's that any systems which perform tasks of a certain kind must necessarily be complex. Complexity is an essential inherent property of any system that can perform difficult tasks.
Consciousness for me is like complexity. It's not that it's required as an essential part of an information processing task, but any system which can carry out the task will necessarily be conscious because it is impossible to achieve the task without also producing consciousness.
I very much disagree with your statement:
"There is nothing a sentient information processing system can do that an insentient information processing cannot do. Why? Because consciousness does not perform any function whatsoever."
If I echo your words back to you but replace consciousness with complexity, you'll see your argument doesn't hold.
"There is nothing a complex information processing system can do that a simple information processing system cannot do. Why? Because complexity does not perform any function whatsoever."
(But again, I want to reiterate that consciousness is not the same as complexity. I think it does require complexity, and I think it is in some ways analogous to complexity.)
We could also make analogies with side effects.
"There is nothing a high-wattage appliance can do that a low-wattage appliance cannot do. Why? Because high consumption of electricity does not perform any (useful) function whatsoever."
As for bacteria, I will grant that colonies of bacteria may perform relatively interesting computations. I was initially talking about an individual bacterium. I wouldn't entirely rule out the idea of a hive mind when discussing colonies, however I do think it is improbable that the kind of processing colonies do is likely to give rise to anything we would call a conscious mind.
Also, Alastair, if you think consciousness is not necessary for intelligent behaviour, then why do you think it was naturally selected?
@ Disagreeable Me
> (In the following, please understand I'm explaining my view, not asserting that this view is certainly correct.) <
That's understood in any philosophical debate.
> You're missing my point, I'm afraid. <
I don't think I am missing your point. You believe that consciousness is an acausal property that emerges whenever information processing achieves a certain complexity threshold. I would classify your beliefs in regards to consciousness as emergentism, epiphenomenalism, and property dualism. I do not necessarily have a problem with emergentism, epiphenomenalism, or property dualism. What I have a problem with is that you are equating the information processing that occurs in an electronic digital computer with the information processing that occurs in a living organism. They're not the same.
> If I echo your words back to you but replace consciousness with complexity, you'll see your argument doesn't hold.
"There is nothing a complex information processing system can do that a simple information processing system cannot do. Why? Because complexity does not perform any function whatsoever." <
I don't know what this even means. But clearly some information processing systems can perform functions that other information processing systems cannot. And clearly some functions in a carbon-based (as opposed to a silicon-based) information processing system would confer a selective advantage over other carbon-based information processing systems.
@ Disagreeable Me
> Also, Alastair, if you think consciousness is not necessary for intelligent behaviour, then why do you think it was naturally selected? <
I never argued that consciousness was not necessary for intelligent behavior. (You did.) I believe all living organisms (what materialists refer to as "stimulus-response systems") are sentient (i.e. have some kind of inner experience). (I subscribe to "panexperientialism." On this view, even a subatomic particle exhibits some form of mentality. So, there is really no need to explain why it was naturally selected.)
Hi Alastair,
Delete" You believe that consciousness is an acausal property that emerges whenever information processing achieves a certain complexity threshold."
No, that's not my position. Again, complexity does not necessarily give rise to consciousness. The particular organisation of the system also has a bearing.
I think that any algorithm with a logical structure isomorphic to that of a human mind will be conscious. I don't make any strong claims about other complex logical structures, because all I really know for sure about consciousness is that I am conscious. I might have strong suspicions regarding other humans, animals, insects, colonies of ants and bacteria etc, but I can't be sure. I'm not at all saying that any sufficiently complex system will be conscious.
What I think instead is that consciousness is a necessary property of any system that can exhibit certain behaviours such as passing the Turing test.
"What I have a problem with is that you are equating the information processing that occurs in an electronic digital computer with the information processing that occurs in a living organism. They're not the same. "
Why do you think they are fundamentally different?
"I don't know what this even means."
Exactly. It's meaningless. It's "not even wrong", but incoherent, because it's confusing complexity with some ability or feature of a system that directly enables some required behaviour. You need complexity to achieve some result, but that doesn't mean that you can just "add complexity" and get what you want. And that's pretty much how I feel about your original statement regarding consciousness.
"I never argued that consciousness was not necessary for intelligent behavior. (You did.)"
Not quite. I argued that conceptually, intelligence is distinct from consciousness. However, I also argued that it seems probable that certain kinds of intelligence cannot be achieved without also producing consciousness.
Now, if you "never argued that consciousness was not necessary for intelligent behaviour", would you in fact agree with me that consciousness is indeed necessary for some kinds of intelligent behaviour? If we ever make an AI that behaves intelligently, in a way indistinguishable from a biological organism, would that not then mean that it must be conscious?
If so, then we agree.
If not, then it must be possible to behave intelligently without consciousness, and therefore there is no reason for evolution to select for consciousness, so you need to explain why you think we are conscious.
"living organisms (what materialists refer to as "stimulus-response systems") are sentient "
Electronic machines (or software programs) are also stimulus-response systems. So why can't they be sentient? (Or can they?)
And for bacteria, is it the individual bacteria that have experience or the colony as a whole, or perhaps both? In human beings, does each cell have its own experience, or only the human as a whole?
Panexperientialism originally seemed absurd to me; however, I no longer feel this way. But I think subjective experience would have to be a property of logical/mathematical/computational processes rather than being attached to matter, so it's not that the subatomic particle has mentality, it's the equations that govern its physical interactions that have mentality. (Does this seem more implausible to you than that it's the matter that experiences consciousness?)
However, I regard this hypothetical form of mentality as being so primitive and basic that effectively it's better not to describe it as real mentality at all.
Again, by analogy to complexity, all physical objects have some measure of complexity. A brain is very complex. A rock less so. A molecule less so. An atom less so. An electron really is not very complex at all, but it has a position on the "complexity spectrum" nonetheless, so it has some non-zero amount of complexity.
So even though an electron is not complex, it has some complexity. Similarly, I think it makes sense to talk of simple logical structures as having some mentality even if they are not conscious.
@ Disagreeable Me
> No, that's not my position. Again, complexity does not necessarily give rise to consciousness. The particular organisation of the system also has a bearing. <
Okay. But why are you bringing the idea of complexity into the equation? Why can't you just say that specific logical structures (functions) must be in place in order to achieve consciousness? (Whether a specific logical structure is complex or not is subjective.)
> I think that any algorithm with a logical structure isomorphic to that of a human mind will be conscious. <
Why does the 'algorithm' have to be isomorphic to the human mind? Do you actually believe that other animal species are not conscious? That they have no mind? Why not start with the simplest mind? Maybe the algorithm (if that is the right concept) is relatively simple (like the Darwinian algorithm). And if you believe in evolution, then it would stand to reason that a relatively simple inner experience evolved before a highly complex experience. That's usually the way evolution works. It goes from the simple to the complex. So, why not start with the simple?
> Why do you think they are fundamentally different? <
Because we don't have any evidence that the computational process in living organisms relies on digital information. In fact, the evidence suggests that it relies on quantum information.
"Photosynthesis works 'by quantum computing"
"What We Can Learn From the Quantum Calculations of Birds and Bacteria"
> You need complexity to achieve some result, but that doesn't mean that you can just "add complexity" and get what you want. <
There you go again: "You need complexity to achieve some result." To reiterate: Why do you keep bringing complexity into the discussion if it is not relevant? Why can't you just say we need this or that logical structure in place to achieve consciousness?
There may be a place to bring "complexity" into the picture if you're invoking "emergence" and "spontaneous self-organization." For example, you could argue that consciousness is a self-organizing system that spontaneously emerges out of chaos whenever a certain complexity threshold is achieved. But if that is not what you're doing, then stop bringing up complexity. (Whenever you bring up "complexity," it gives me the impression that you're referring to "complexity theory.")
"But why are you bringing the idea of complexity into the equation?"
Because:
1) Complexity is analogous to consciousness, something that must be present in a system for certain tasks to be possible. It's not added deliberately but an overall high-level or emergent property that has to be there as a consequence of building a system which can solve certain problems.
2) Because I don't think you can have "significant" consciousness without complexity. I will allow that you could have primordial awareness in the panexperientialist sense in relatively simple systems.
"Why can't you just say that specific logical structures (functions) must be in place in order to achieve consciousness?"
Ok, but I don't know what specifically those logical structures are. I suspect they are those that are required to have perceptions, awareness, memory etc, however I think the extent to which these achieve rich consciousness is proportional to how complex the system is. I think human beings have richer experience than cats, cats have richer experience than insects, and insects have richer experience than bacteria. I also suspect that IBM's Watson has a richer experience than bacteria and perhaps insects.
To put aside such speculation, I can only confidently say that any logical structure isomorphic to a human mind will have human-level consciousness.
"Why does the 'algorithm' have to be isomorphic to the human mind?"
It doesn't. I'm just not making as strong claims about non-human minds. I can be 100% confident that at least one human mind is conscious (my own). I might be 99.9% confident that other humans and even cats are conscious, but I can't logically be certain.
"And if you believe in evolution, then it would stand to reason that a relatively simple inner experience evolved before a highly complex experience. That's usually the way evolution works. It goes from the simple to the complex. So, why not start with the simple?"
Actually, this is precisely what I believe. It seems you misunderstood me.
"Because we don't have any evidence that the computational process in living organisms relies on digital information. In fact, the evidence suggests that it relies on quantum information."
There is no information processing task a quantum system can perform that cannot be performed by a Turing machine. The brain may use QM to process information, but that's only an implementation detail. The same information could be processed by a Turing machine. Quantum systems might be much faster at it, but there is no fundamental or metaphysical difference in the types of tasks that can be undertaken.
In any case, the examples you have given are not really of information processing per se, in my view. Photosynthesis is producing sugar, not information. Magnetic detection in birds is sensory input, like photoreception. Neither are good evidence that brains use QM to process data.
@ Disagreeable Me
> Now, if you "never argued that consciousness was not necessary for intelligent behaviour", would you in fact agree with me that consciousness is indeed necessary for some kinds of intelligent behaviour? <
I have already stated in a previous post that real intelligence presupposes consciousness.
> If we ever make an AI that behaves intelligently, in a way indistinguishable from a biological organism, would that not then mean that it must be conscious? <
I don't believe we will ever create an electronic digital computer that will be indistinguishable from a biological organism. The notion that we will sounds absurd to me.
> Electronic machines (or software programs) are also stimulus-response systems. So why can't they be sentient? (Or can they?) <
Electronic machines are not living stimulus-response systems. That's the difference.
> And for bacteria, is it the individual bacteria that have experience or the colony as a whole, or perhaps both? In human beings, does each cell have its own experience, or only the human as a whole? <
I believe living cells (that would include a bacterium) have some kind of inner experience.
> Panexperientialism was originally absurd to me, however I no longer feel this way. But I think subjective experience would have to be a property of logical/mathematical/computational processes rather than being attached to matter, so it's not that the subatomic particle has mentality, it's the equations that govern its physical interactions that have mentality. (Does this seem more implausible to you than that it's the matter that experiences consciousness?) <
I believe a subatomic particle (e.g. an electron) has both a mental pole and physical pole. This is directly related to the wave/particle duality of quantum mechanics.
"Even an electron has at least a rudimentary mental pole, respresented mathematically by the quantum potential." (source: pg. 387 "The Undivided Universe: An Ontological Interpretation of Quantum Theory" by David Bohm and B.J. Hiley)
> However, I regard this hypothetical form of mentality as being so primitive and basic that effectively it's better not to describe it as real mentality at all. <
If the entity in question has some kind of inner experience, then it qualifies as having some form of mentality. Pure-awareness is ground zero. If you deny this, then you have to invoke some kind of miracle to explain the evolution of higher-levels of consciousness.
"[If] you have to say "then a miracle" occurs you haven't begun to explain what consciousness is." (pg. 455, "Consciousness Explained" by Daniel Dennett)
> Again, by analogy to complexity, all physical objects have some measure of complexity. A brain is very complex. A rock less so. A molecule less so. An atom less so. An electron really is not very complex at all, but it has a position on the "complexity spectrum" nonetheless, so it has some non-zero amount of complexity. <
There's nothing complex about pure-awareness.
> So even though an electron is not complex, it has some complexity. Similarly, I think it makes sense to talk of simple logical structures as having some mentality even if they are not conscious. <
If it has some form of mentality, then it has some form of inner experience. (Such a view qualifies as a form of panpsychism.)
"I don't believe we will ever create an electronic digital computer that will be indistinguishable from a biological organism."
Ok, so you think it is impossible in principle for an electronic information processing system to pass the Turing test then, assuming that the software process being tested is tested thoroughly. If such a system ever came to be built, you would be proven wrong. So your position is falsifiable, which is good.
So why do you think it is possible for an organic information processing system to pass the Turing test? What is it that lets brains do what computers cannot? If panexperientialism is true, then why do you think biological matter can be sentient but electronic circuits can't?
"If you deny this, then you have to invoke some kind of miracle to explain the evolution of higher-levels of consciousness."
I'm not saying there's a miracle. I just have my doubts about whether it makes sense to say something has "real" experience or mentality if it has no senses and no ability to reflect on its experience. It needs a minimum amount of reflective information processing power to have experience worthy of the name.
"There's nothing complex about pure-awareness."
Well, I didn't say there was. I was using complexity as an analogy. Even simple things have some complexity. In the same way, perhaps even simple mathematical structures have some measure of awareness even if I wouldn't call them aware.
I'm not sure "pure-awareness" really works as a concept though. It seems dangerously close to meaningless.
@ Disagreeable Me
> Actually, this is precisely what I believe. It seems you misunderstood me. <
If this is precisely what you believe, then why don't you ponder what the requirements would be to achieve the simplest experience rather than the most complex.
> There is no information processing task a quantum system can perform that cannot be performed by a Turing machine. <
There is an information processing task that a quantum computer can do that a classical digital computer cannot do. A quantum computer can process a qubit; a classical digital computer cannot.
"In a classical system, a bit would have to be in one state or the other, but quantum mechanics allows the qubit to be in a superposition of both states at the same time, a property which is fundamental to quantum computing." (source: Wikipedia: Qubit)
> Quantum systems might be much faster at it, but there is no fundamental or metaphysical difference in the types of tasks that can be undertaken. <
Superposition and entanglement (two quantum-mechanical phenomena that allow for quantum computing) definitely have metaphysical implications.
> In any case, the examples you have given are not really of information processing per se, in my view. <
"Photosynthesis works 'by quantum computing"
> Neither are good evidence that brains use QM to process data. <
"The main argument against the quantum mind proposition is that quantum states in the brain would decohere before they reached a spatial or temporal scale at which they could be useful for neural processing" (source: Wikipedia: "Quantum mind")
You have been furnished with evidence that quantum computing plays a part in one of the most fundamental of all biological processes (photosynthesis). This evidence undermines the main argument against quantum mind theories - that quantum states in biological systems "would decohere before they reached a spatial or temporal scale."
@ Disagreeable Me
> If panexperientialism is true, then why do you think biological matter can be sentient but electronic circuits can't? <
Because I believe life and sentience are inextricably linked. This is the primary difference between an organic view and a mechanistic view.
> I'm not saying there's a miracle. I just have my doubts about whether it makes sense to say something has "real" experience or mentality if it has no senses and no ability to reflect on its experience.<
Either something is undergoing an experience or it is not.
> I'm not sure "pure-awareness" really works as a concept though. It seems dangerously close to meaningless. <
You do not understand what "awareness" means?
Hi Alastair
>If this is precisely what you believe, then why don't you ponder what the requirements would be to achieve the simplest experience rather than the most complex.<
Because I don't think there is a simplest experience. I think experience arises and develops gradually over evolutionary time. Asking for the simplest experience seems to me like asking for the smallest possible big thing, or the smallest number of grains of sand that constitutes a heap. One grain of sand might have some primordial element of heapishness but I wouldn't describe it as a heap.
>A quantum computer can process a qubit; a classical digital computer cannot.<
That's an implementation detail, not an information processing task, like saying that my computer can process 64 bits at a time while my previous computer could only process 32 at a time. Even so, there is no task that my current computer can do that my old computer could not do, given enough time and memory. What I mean is there is no function a quantum computer can compute that a classical computer cannot.
Same goes for superposition and entanglement. These are implementation details, not information processing tasks.
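To make that concrete, here is a minimal sketch (Python with numpy, purely illustrative and not anything discussed in the podcast) of a classical program tracking a single qubit's superposed state exactly. The memory cost grows exponentially with the number of qubits, but that is a matter of speed and space, not of what can in principle be computed:

```python
import numpy as np

# A qubit's state is just two complex amplitudes; start in |0>.
state = np.array([1.0, 0.0], dtype=complex)

# Hadamard gate: puts the qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ state                    # apply the gate
probabilities = np.abs(state) ** 2   # Born-rule probabilities of measuring 0 or 1

print(state)          # [0.707...+0.j  0.707...+0.j]
print(probabilities)  # [0.5  0.5]
```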
>This evidence undermines the main argument against quantum mind theories - that quantum states in biological systems "would decohere before they reached a spatial or temporal scale."<
That may be a common argument, but it is not mine (except perhaps with respect to Penrose's position, but that's another story). I'm open to the idea that the brain might exploit QM. I also maintain that a classical computer can do whatever a quantum computer can do.
>Because I believe life and sentience are inextricably linked.<
Why? Anyway, I thought you said that even fundamental physical particles had awareness in panexperientialism. They're not alive, are they?
>Either something is undergoing an experience or it is not. <
I don't see it that way. I don't think sentience is binary, I think it's a matter of degree. What you have said seems to me to be akin to "either something is big or it is not".
>You do not understand what "awareness" means?<
I'm not sure that I do. And I'm not sure that you do either. Certainly not the concept of pure awareness, devoid of any thought process.
@ Disagreeable Me
> Because I don't think there is a simplest experience. I think experience arises and develops gradually over evolutionary time.<
You're talking only about a difference in degree, not in definition.
> One grain of sand might have some primordial element of heapishness but I wouldn't describe it as a heap. <
Whether you're talking about a grain of sand or a pile of sand you're still talking about sand. To reiterate: You're talking only about a difference in degree, not in definition.
> What I mean is there is no function a quantum computer can compute that a classical computer cannot. <
A quantum computer can randomly 'choose' between "heads" or "tails." A classical digital computer cannot. IOW, a quantum computer can truly perform a random function; a classical digital computer cannot.
By the way, the only possible function that consciousness can exhibit is the exercise of free will. And the only place in science where consciousness may have to be invoked in order to explain observed phenomena is in the "collapse of the wave function."
> Why? Anyway, I thought you said that even fundamental physical particles had awareness in panexperientialism. They're not alive, are they?. <
"Biology is the study of larger organisms, whereas physics is the study of smaller organisms." - Alfred North Whitehead
> I don't see it that way. I don't think sentience is binary, I think it's a matter of degree. <
If it is a matter of degree, then you are describing a difference in degree and not in definition.
Your objection is tantamount to the following argument: "A small amount of energy does not really qualify as energy because it isn't large enough. Moreover, the "amount of energy" that qualifies energy as energy is completely arbitrary."
> I'm not sure that I do. And I'm not sure that you do either. <
I understand what "awareness" means. And if you do not, then there is no point in continuing this debate because it would prove to be nothing more than an exercise in futility. (It is not possible to engage in an intelligent debate with an individual who does not understand whether s/he is presently experiencing awareness.)
Hi Alastair,
>You're talking only about a difference in degree, not in definition. <
It depends on what kind of property "conscious" is. If it's like "big", then I'm right. If it's like "energy", then you are right. If it's like "heap", then I am right. If it's like "sand", then you are right.
I think it's a relative term. You think it's an absolute term. I think I understand your position. Do you understand mine? Can you see, from my position, why I might not think it makes much sense to talk about the most basic element of consciousness?
>A quantum computer can randomly 'choose' between "heads" or "tails."<
That's actually a very good point. If you could demonstrate that true randomness is actually very useful for computation, you might even persuade me. I think for most purposes pseudorandomness is adequate. If something is indistinguishable from randomness I can't see any benefit from having it truly random.
I don't see randomness as being particularly helpful in the free will debate. If what you do is determined by random chance, I don't think that constitutes making a choice in any meaningful sense. You have no more control over the way the quantum dice roll than you do over the other laws of physics that determine your behaviour.
Roger Penrose thinks that quantum mechanics might grant us abilities beyond classical computation, but in my understanding this can only be true if quantum events are not truly random. There would have to be some underlying uncomputable order there for the brain to exploit, because true randomness is not really that useful.
But in any case it's pretty easy to add randomness to a classical computer if you really want to. You just need to plug a device that can generate random numbers from quantum noise into a USB port. Such devices already exist. If they prove to be necessary for consciousness I would be surprised, but this is not a serious engineering problem. In fact it is trivial.
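To illustrate how small that swap is, here is a hedged sketch (Python), assuming a Linux-style machine where such a USB device either exposes itself as /dev/hwrng or feeds the operating system's entropy pool; the device path and setup are assumptions for illustration, not a specific product:

```python
import os
import random

# Pseudorandom coin flip: fully deterministic once the seed is fixed.
prng = random.Random(42)
pseudo_flip = prng.random() < 0.5

# Coin flip drawn from the OS entropy pool, which a hardware noise
# source (e.g. a USB quantum RNG) can feed on a typical Linux setup.
hardware_flip = os.urandom(1)[0] < 128

print(pseudo_flip, hardware_flip)
```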
The collapse of the wave function has nothing to do with consciousness. I'll grant you that it certainly seemed that way to the pioneers of QM, but this view is now very much in the minority. There are many interpretations of QM now where consciousness plays no role, such as the many worlds interpretation.
>It is not possible to engage in an intelligent debate with an individual who does not understand whether s/he is presently experiencing awareness.<
Ok, I'm experiencing awareness. I remain nonplussed about what "pure awareness" might mean. Awareness in the absence of thought, memory or perception doesn't make sense in my book.
@ Disagreeable Me
> Do you understand mine? <
I understand your position. I don't agree with it.
> That's actually a very good point. If you could demonstrate that true randomness is actually very useful for computation, you might even persuade me. I think for most purposes pseudorandomness is adequate. If something is indistinguishable from randomness I can't see any benefit from having it truly random.
That's actually a very good point. If you could demonstrate that true randomness is actually very useful for computation, you might even persuade me. I think for most purposes pseudorandomness is adequate. If something is indistinguishable from randomness I can't see any benefit from having it truly random.
I don't see randomness as being particularly helpful in the free will debate. <
You're missing the point. The only explanation for a truly random event is consciousness. (It doesn't have any physical cause by definition.)
Also, the "two-stage model of free will" explains how indeterminism plays a part in freedom. It incorporates the same basic principles as Darwinian evolution - random variation and natural selection, And the significance is not in the moral realm, but in the creative process. (Randomness is necessary for creativity.)
> Roger Penrose thinks that quantum mechanics might grant us abilities beyond classical computation, but in my understanding this can only be true if quantum events are not truly random. <
Penrose furnishes you with another argument against your thesis.
"The Penrose-Lucas argument states that, because humans are capable of knowing the truth of Gödel-unprovable statements, human thought is necessarily non-computable.[15]" (source: Wikipedia: Orchestrated objective reduction)
> But in any case it's pretty easy to add randomness to a classical computer if you really want to <
Linking a random number function to a quantum event will not make a classical digital computer experience consciousness.
> The collapse of the wave function has nothing to do with consciousness. I'll grant you that it certainly seemed that way to the pioneers of QM, but this view is now very much in the minority. There are many interpretations of QM now where consciousness plays no role, such as the many worlds interpretation. <
"Every interpretation of quantum mechanics involves consiousness." - Euan Squires
"There is an unresolved problem with many-worlds: What constitutes an observation? When does the world split?" pg. 162, "The Quantum Enigma" by Bruce Rosenblum and Fred Kuttner)
"The many-minds interpretation of quantum mechanics extends the many-worlds interpretation by proposing that the distinction between worlds should be made at the level of the mind of an individual observer". (source: Wikipedia: Many-minds interpretation)
> I remain nonplussed about what "pure awareness" might mean. Awareness in the absence of thought, memory or perception doesn't make sense in my book. <
The experience of pure consciousness is a mystical state known as "turiya." Anyone who routinely meditates should understand what I am referring to.
Hi Alastair,
>I understand your position. I don't agree with it.<
Of course. But if you understand my position then you should understand why I don't think it is sensible to talk about the simplest possible form of consciousness, and why I prefer to discuss human consciousness.
>The only explanation for a truly random event is consciousness. <
Not having a deterministic explanation does not mean it is explained with consciousness. I can't see how you could make that imaginative leap.
>the "two-stage model of free will" explains how indeterminism plays a part in freedom.<
It's a good answer to my criticism of quantum free will, but I don't think it quite works. Anyway, I don't believe in free will so I'm not going to be persuaded that randomness is necessary to account for free will.
>Randomness is necessary for creativity.<
I disagree. I built a computer system that composed music for my final year project in computer science. The results were often surprising and beautiful. All driven by pseudorandom numbers.
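Not the actual project, but a toy sketch of the idea (Python; the scale and seed are arbitrary illustrations): a fixed seed makes the "creative" output completely deterministic, yet the result can still surprise the listener.

```python
import random

# Seeded PRNG: run this twice and you get the same "composition" both times.
rng = random.Random(1234)
c_major = ["C", "D", "E", "F", "G", "A", "B"]

melody = [rng.choice(c_major) for _ in range(16)]
print(" ".join(melody))
```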
>Penrose furnishes you with another argument against your thesis.<
I don't think Penrose's argument works. I tackle it on my blog.
Strong AI: The Gödel Problem
>Linking a random number function to a quantum event will not make a classical digital computer experience consciousness. <
Agreed, but if you did that then your argument would no longer be relevant. A classical computer would be capable of doing everything a quantum computer can do.
>Every interpretation of quantum mechanics involves consciousness.<
Simply untrue.
http://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics#Comparison_of_interpretations
>What constitutes an observation? When does the world split?<
Any physical interaction constitutes an observation. The timing of splitting depends on your frame of reference.
>Many minds interpretation<
This sounds like a variant of my own view. However I reject that it has anything to do with consciousness. Any physical interaction with a particle in superposition constitutes an observation. No conscious observer required. The world splits from the point of view of the physical interaction. Please understand that I mean point of view in a sense that assumes no consciousness. I mean it in a sense similar to how reference frame is used in classical physics.
Sometimes these physical interactions will take place in unconscious detector devices. Sometimes they will take place on the retinas of conscious scientists. Consciousness has nothing to do with it.
From the point of view of Schrodinger's cat, the universe splits as soon as the cat interacts with the results of the release of poisonous gas. From the point of view of the scientist outside the box, the universe splits when he opens the box. But I could replace the cat with a detector and the scientist with a robot and the same would apply.
>The experience of pure consciousness is a mystical state known as "turiya." Anyone who routinely meditates should understand what I am referring to.<
Fair enough. I can claim no such experience. However I remain skeptical that it constitutes pure awareness with an absence of any information processing whatsoever.
For those who deny the possibility of uploading: your greatest strength is the intuitive argument that "no amount of binary flops could ever equal consciousness" (or similar), and your greatest weakness is the fact that I can reply with the just-as-plausible reductio "no amount of electrical pulses between neural cells could ever equal consciousness".
@ ianpollock
> and your greatest weakness is the fact that I can reply with the just-as-plausible reductio "no amount of electrical pulses between neural cells could ever equal consciousness". <
Why is that a weakness?
And yet we experience consciousness.
So perhaps the pure reductionist account that consciousness is just 'electrical pulses between neural cells' may be incomplete. There is also an assumption that 'binary flops' are equivalent to 'electrical pulses between neural cells', but the electrical/neural dynamic may not even be a discrete one.
I am not going to deny 'possibility', which is not falsifiable, but also not a useful benchmark. I would need to see much better arguments, however, to consider uploading as plausible.
@ Seth_blog
> And yet we experience consciousness.
So perhaps the pure reductionist account that consciousness is just 'electrical pulses between neural cells' may be incomplete. <
It would seem that it is.
@Seth_blog:
>I am not going to deny 'possibility', which is not falsifiable, but also not a useful benchmark. I would need to see much better arguments, however, to consider uploading as plausible.
Sorry, are we talking about engineering feasibility here or conceptual possibility? I don't claim that mind uploading will ever actually happen, it may be infeasible on a technical level. The question is whether it does in fact preserve the aspects of personhood we care about.
Ian
My point is that I see no reason to accept your 'just-as-plausible' comment in the comparison to binary flops. There are lots of differences in the relationships between autonomous living systems and their environments as compared to machines designed to run software. The drawing of equivalence seems to me to be ideologically based.
@Ian,
The main argument against uploading is that there is nothing to upload. There is no uploadable “master program” in your head, separated from the hardware.
@Seth_blog,
>There are lots of differences in the relationships between autonomous living systems and their environments as compared to machines designed to run software. The drawing of equivalence seems to me to be ideologically based.<
I agree. In fact, I see the idea that minds are the programs of computer-like brains as an attempt to revive the idea of a Platonic soul. Even immortality is then possible (by uploading.)
OMG, I missed the Singularity!
@Filippo
I have no hope of surviving death through uploading. I think that technology is very unlikely to happen during my lifetime, if ever.
Also, I have no love for the metaphysics of Plato. I call myself a mathematical Platonist only because that's the prevailing term for mathematical realists, not because I particularly want to align myself with him.
And I certainly have no desire to believe in a soul.
I come to the CTM from the position of a naturalist seeking a naturalistic explanation for human consciousness. I simply cannot see how the CTM could possibly be wrong given naturalism. No other viewpoint seems coherent to me. The implications of the CTM have led me to the position that the mind is an abstract mathematical object. I certainly did not start from the assumption that the soul exists and set out to prove it.
It really is not wishful thinking, at least not in my case. I could just as easily accuse you of denying the CTM because you want to believe that you're special, more than just a computer program. I don't say that or think that because I respect you.
"The main argument against uploading is that there nothing to upload. There is no uploadable “master program” in your head, separated from the hardware."
True, there's no sequence of bytes stored in your head which represents the program your brain is running. It doesn't work like that, I agree.
But there is a physical structure of neuronal connections representing a logical, deterministic process. I see this as being rather like a silicon circuit built of transistors and logic gates.
A circuit on a microchip is also all hardware, no software, yet we can reverse engineer it, figure out what it's doing, and then implement the same function in the same way in software, if necessary by running a simulation of all the transistors on the circuit, but potentially at higher levels of abstraction.
As long as we capture the logical structure of the computations it carries out we can reproduce the same thing in software.
I see no reason to think the brain is any different.
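As a toy illustration of what "capturing the logical structure" means (a minimal Python sketch, purely illustrative, not a claim about brains): a half adder is fully specified by its gate-level logic, and that specification behaves identically whether the gates are CMOS transistors, relays, or a software simulation.

```python
# Gate-level description of a half adder, independent of any physical substrate.
def half_adder(a, b):
    total = a ^ b   # XOR gate
    carry = a & b   # AND gate
    return total, carry

# Same truth table no matter what physically implements the gates.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```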
Re: "The main argument against uploading is that there nothing to upload. There is no uploadable master program' in your head, separated from the hardware."
If you had a computer board with a CPU, RAM, and peripherals, and you uploaded a compiled program to it and the board is running, all that is there is a bunch of logic gates and electrons flowing through circuitry. There is no "program" on board! (Unless one thinks of the program residing in the bytes of RAM in its original source language, but typically the program is compiled into another representation. But still, all that is in RAM is a bunch of gates and electrons.) Now that is with a typical CPU and RAM. But one might have a different hardware setup on that board, like a cellular automata-like chip. Now the compiler for the program to target that new chip may be a bit trickier, but in the end there is still a bunch of logic gates and electrons flowing through circuitry: there is no "program" as such on the computer board. But one could still transfer the code running on that board to another board.
Philip,
My understanding is that programs are translated into machine code so they can run on a specific hardware setup. So I think we can transfer code running on one board to another but only if the boards are identical in the sense that the code will encounter the same architecture on each board.
Hi Marc,
What you say is correct, and some circuit boards don't even have code in any sense.
My point is that you can still transfer the logical structure of the computations. You don't need code or a program, you only need to faithfully duplicate the pattern of operations the circuit board implements and even if your physical implementation is completely different, the essential logical pattern is identical and your copy will behave in the same way as the original.
If the CTM is true, all that matters for human consciousness is this logical pattern, so mind uploading ought to be possible in principle.
So the metabolism of the cell has nothing to do with it?
Delete@Louis
Yes, in my view the metabolism of the cell has nothing to do with it, other than being a physical prerequisite for this particular substrate (in the same way as the supply of electricity for a CPU).
@Marc
I think what you need is a binary recompiler ("software that takes an executable binary as input, analyzes the structure, applies transformations and optimizations, and outputs a new optimized executable binary") but with the target being the binary for a computer with a possibly different architecture. Or one that takes the code from your brain and transforms it into the binary for your robot companion.
DM,
This is not true, molecular and genetic mechanisms are involved in the working of the brain. The brain is not an “electrical” computer. For instance, the same molecules are involved in long-term learning-related synaptic plasticity, in widely separated species. These are cAMP, PKA, CRE, CREB-1, CREB-2, and CPEB.
PKA is a protein kinase: it attaches phosphate groups to proteins. Because PKA acts on covalent bonds, it is a quantum-mechanical machine. (Covalent bonds are inherently quantum-mechanical effects, they have no classical analogs.)
See E.R. Kandel, The molecular biology of memory, in Molecular Brain 2012, 5:14 (http://www.molecularbrain.com/content/5/1/14).
Hi Filippo,
Delete"This is not true, molecular and genetic mechanisms are involved in the working of the brain."
But I never denied that. I said they were a physical prerequisite for the brain substrate.
"The brain is not an “electrical” computer."
I never said it was. That is certainly not my view.
"These are cAMP, PKA, CRE, CREB-1, CREB-2, and CPEB."
The particular chemicals which the brain uses to process information are as irrelevant to the essential logic of the processes taking place as are the specific materials used to build an electrical circuit, or the colour of the paper on which I write a system of equations.
"Covalent bonds are inherently quantum-mechanical effects, they have no classical analogs."
As long as it is possible to model how covalent bonds behave in a computer simulation, then the same logic can be represented in an electronic computer, and their essential quantum mechanical nature is irrelevant.
But it is probably as unnecessary to model every covalent bond in a brain as it is to model every molecule in an electronic circuit when simulating that circuit in software. Quantum effects are also crucial to the operation of transistors at a fundamental level, yet there is no problem implementing the digital logic of circuits in software simulations without needing to deal with these quantum effects explicitly.
I was hoping Ian would answer too.
DeleteDM,
ReplyDelete>The so-called failure of AI may just be indicative that AI is a very hard problem to solve.<
Agreed, but there are two reasons to worry...
One is that sufficiently hard problems could as well be impossible. If a solution requires a number of operations that grows exponentially with problem size, the problem will never be solved practically for interesting sizes. This is why the P ?= NP question is so important.
The other is that, historically, hard mathematical problems have a tendency of turning out to be impossible as our knowledge progresses. For instance, in the eighteenth century, solving fifth- or higher-order polynomial equations in radicals was a hard problem. Évariste Galois solved the problem by showing that it is impossible in general. In the process he invented group theory. In the nineteenth century, the hard problem was to solve the gravitational three-body problem. Poincaré solved it by showing that it is impossible to solve in general. In the process, he invented chaos theory.
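To put a rough number on the first worry (my own back-of-the-envelope sketch, assuming a machine that can check a billion candidates per second):

    # Brute force over all 2**n subsets of n items, assuming a (generous)
    # billion subsets checked per second.
    OPS_PER_SECOND = 1e9
    SECONDS_PER_YEAR = 3.15e7

    for n in (30, 60, 90):
        years = 2**n / OPS_PER_SECOND / SECONDS_PER_YEAR
        print(f"n = {n}: about {years:.3g} years")
    # n = 30: a fraction of a second; n = 60: ~37 years; n = 90: ~39 billion years

That is what exponential scaling does: modest increases in problem size turn "hard" into "never, in practice".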
Well, I'll agree with you insofar as I'm quite open to the idea that a real human-level AI may never be realised. So, I'm on board with your P?=NP analogy completely.
DeleteHowever, I think there are very good reasons for thinking that it must be possible in principle, mainly because nature has already done it. This, of course, assumes that nature has done so by means of computation; however, if the laws of physics are computable, as they seem to be as far as we can tell, then everything nature does must be computable, and the mind can be no exception. I also have reasons for believing that even if physics were not computable, this would be unlikely to be useful for biological information processing, but I'll leave that for now.
I'm not sure that there were ever reasons so compelling for believing there was a systematic way of solving fifth-order polynomials or solving the gravitational three-body problem.
In any case, there are also many examples of very difficult problems finally being cracked, e.g. Fermat's Last Theorem.
>...nature has already done it
DeleteExactly. Once you realize that, it seems like all that's left is weird special pleading for the unique mental capabilities of meat versus silicon.
Calling the intricate machinery of living matter "meat" is a cheap and self-refuting rhetorical trick.
DeleteAs is calling the intricate machinery of advanced electronics "silicon"! Repent, Ian!
DeleteLiving matter is just matter. However complicated the brain may be, organic chemistry is not imbued with some élan of consciousness that is not possessed by other substrates.
DeleteIf you can do what the brain does (in engineering terms, process information inputs and generate information outputs) with some other substrate (whether you can do that is just a boring engineering problem, not a philosophical one), then there is every reason to suppose that all those things which accompany meat-based thought ("qualia", et cetera) will attend thought on the other substrate.
Even as I speak, I can hear the dull thud of my certain intuition against your equally certain counterintuition. 'Twas ever thus, in phil mind.
Of course living matter is "just" matter. But clearly it has some special property (due to its complexity, arrangement, and type of material) that makes it alive instead of not. That's precisely the same sense in which people like Searle insist that biology matters. No magic necessary.
DeleteBasically, CTM is simply a license to be proudly ignorant about biology.
Delete@Filippo
Delete"Basically, CTM is simply a license to be proudly ignorant about biology."
Nonsense. Of course it is a great and worthy pursuit to expand our knowledge of biology. There is no reason to be proud of ignorance of anything. And you have no reason to suppose that CTM proponents are ignorant of biology.
However, from my point of view we don't have to know about the biology of human brains to know that the CTM must be true, as long as we start from a naturalistic perspective and agree that consciousness is not a physical substance but a property of systems exhibiting certain behaviours.
If the universe is naturalistic, then we ought to be able to model and predict how physical processes will behave, which means that we can simulate them with computers. If so, we can (in principle) simulate any physical process, including human brains. If we can simulate a brain, we can make a computer behave like a brain. If a computer can behave like a brain, then as long as we don't believe in philosophical zombies then the computer (more properly the virtual mind of the simulated brain) must be conscious.
If, on the other hand, we do believe in philosophical zombies of a kind (entities which behave intelligently but have no consciousness), then we have to wonder why evolution would select for consciousness when it could have simply selected for intelligent behaviour instead. It seems more parsimonious to assume that philosophical zombies cannot exist.
The argument is really that simple. It's not based on ignorance, but you are correct that it does not require detailed understanding of biology. Nor does any fact you can give me about biology offer any significant problem for the argument.
Filippo,
Delete> Basically, CTM is simply a license to be proudly ignorant about biology. <
Precisely right my friend.
@Massimo
Delete"Precisely right my friend."
:(
I don't think accusations of proud ignorance are called for.
Unless I'm very much mistaken, there are no known facts about biology which refute the CTM. Criticisms of the CTM are instead philosophical in nature (e.g. the Chinese Room).
If I'm wrong, please enlighten me.
@Massimo, you know better than that. Unless you think biology is simply a license to be proudly ignorant about physics.
DeleteFilippo, let's talk biology. What is the brain, from the point of view of biology? It's an organ for taking in sensory data, processing them, and outputting signals to the rest of the body. (Take a moment to convince yourself that natural selection does not require qualia "in addition" to this function.)
DeleteSimilarly, the kidneys are for eliminating toxins, the nose is for detecting airborne chemicals, the heart is for pumping blood, et cetera. (The teleological language is justified by both natural selection and by our own perspective as agents who want functioning bodies.)
Some of those functions can, in principle, be done by different devices than standard human body parts (we already have artificial hearts and dialysis machines). Nobody thinks Dan Dennett's heart isn't *really* pumping blood because it isn't made of meat. Anything that fulfills the functional role of the heart (pumping blood) is just as good qua heart.
You could thus say that, in addition to subscribing to the Computational Theory of Mind, I also subscribe to the Pumping Theory of Cardiology, which posits that anything that successfully pumps blood is (functionally) identical to a heart.
I confess to my great scientific ignorance about the details of organic hearts! I couldn't tell an atrium from a vena cava. Nonetheless, ever the armchair philosopher ignorant of biology, I think the Pumping Theory of Cardiology is obviously correct.
Now, maybe the brain is really sui generis, and it's literally physically impossible (in this universe) to build a machine out of anything except organic chemistry that does the same thing the brain does for the organism. Personally, I consider this only slightly more likely than finding that all extraterrestrial life forms in the universe are monolingual English speakers, but I can appreciate the physical possibility, at least.
Still, you must admit the *conceptual* possibility that something other than a human brain could, in principle, do for the body the same job that the organic brain does.
If so, why on earth would you imagine that the property of "mind" fails to supervene on that thing?
DM,
Delete> I don't think accusations of proud ignorance are called for. <
C'mon, don't take it personally. You know what I'm talking about, we've covered this territory several times.
Bubba (and Ian),
> Unless you think biology is simply a license to be proudly ignorant about physics. <
Nope, unless you think biology doesn't have anything to say beyond physics (which is simply false) you are simply missing the point.
Wish I could join the fun, boys (and girls?), but I'm preparing for a trip to Brazil...
@ ianpollock,
DeleteI never said that the brain is the only substrate that can support a "mind." I simply insist that classical models of computation are unlikely to be powerful enough to do it. Quantum computers, for example, will evade my objections. The fact that evolution did produce quantum computers shows that they are, indeed, possible. It is, however, rather interesting that the only known non-trivial examples of quantum computation are products of organic chemistry.
At the present state of technology all of these questions are irrelevant: our technology has failed spectacularly in the AI field. This is where the "Pumping Theory of Cardiology" analogy fails. We can, in fact, build pumps that can replace human hearts.
Engineering is my metaphysics.
@Massimo
DeleteI just get a bit disappointed when interesting philosophical discussion descends into dismissive insult like that. Yeah, we've been over it several times before, but I still think there are many issues that haven't been teased out fully yet.
This is probably the last post you're going to make on the CTM in quite a while, so it's also the last opportunity to thrash out these issues on rationallyspeaking.
Anyway, enjoy Brazil!
@Filippo
"classical models of computation are unlikely to be powerful enough to do it"
You may be right. Ian and I are not saying that classical computers are feasible substrates for intelligence, only that it ought to be possible in principle (given unbounded time and resources). That said, I'm not sure they are unfeasible either.
"The fact that evolution did produce quantum computers shows that they are, indeed, possible"
What? As far as I'm aware, there is no evidence that evolution has produced quantum computers. What are you talking about?
Sure, there are quantum processes happening in brains, but the same is true of standard electronics. A quantum computer is a very specific concept, and I have not heard of any discovery of quantum computers in nature.
@DM,
Delete>I have not heard of any discovery of quantum computers in nature.<
You have missed the discovery of quantum computation in photosynthesis:
http://www.scientificamerican.com/article.cfm?id=when-it-comes-to-photosynthesis-plants-perform-quantum-computation
Hi Filippo,
DeleteI had indeed missed it, and it is interesting, but I think it's stretching the definition to breaking point to call this quantum computation. There are no qubits, no representation of data, no input and output of information. There are just chemicals finding the most efficient routes to get to low energy states.
You might as well call the double slit experiment a quantum computer. More pertinently, you might as well call a transistor a quantum computer, and by extension all computers.
Filippo, thanks for posting. I agree with DM, that is fascinating, but while it is a macroscopic quantum effect, it's not clear in what interesting sense it represents quantum *computation*. (You can trivially define any quantum event as computation, but that seems pointless.)
DeleteI am open to the idea that IF the brain is a quantum computer in some sense (currently I don't believe it is, although I think Penrose has a longshot theory that that's the case), THEN the CTM would need to be updated. I guess you could argue that if any mind conceivably COULD depend on quantum computation, then CTM needs updating.
None of this seems obviously to bear on the philosophical possibility of uploading.
Massimo, enjoy Brazil!
@DM,
Delete>Sure, there are quantum processes happening in brains, but the same is true of standard electronics.<
True, but ordinary computers, by design, can only perform classical computations, regardless of their quantum nature. Biological systems are not limited the same way. They are not crippled by design because they have not been designed.
@Ian
DeleteI disagree with you slightly. Roger Penrose's ideas depend critically on the brain exploiting previously unknown uncomputable physics. He seems to think that hidden beneath quantum weirdness and unpredictability is some hidden uncomputable order which brains are exploiting.
Penrose's ideas derive from Gödel and are motivated by the idea that it is fundamentally impossible for a computational process to achieve what can be achieved by human brains, even given infinite resources.
If it's simply the case that the brain is a quantum computer, but the laws of physics prove to be computable, then the CTM remains true and Penrose is incorrect.
@Filippo
DeleteI agree with that. I was just surprised by your claim that quantum computers existed in nature (since my definition of quantum computer is seemingly narrower than yours).
Would you agree with the proposition that the only difference between quantum computers and classical computers is efficiency/feasibility? If not, can you give an example of any function that can be computed by quantum computers but not by classical computers (given enough time/memory)?
If that is the only difference, then why are you open to the idea that quantum computers could be conscious but dismissive of the idea that classical computers could be conscious? It hasn't yet been demonstrated that quantum computations are needed for human intelligence even from an engineering/practical perspective, so why are you so sure that classical computation will not do the job?
My view is that computation is computation. It doesn't matter if it's analog, digital, parallel, single-threaded, quantum or classical. A digital, single-threaded, classical computer can emulate an analog, parallel, quantum computer. Why do you feel this is so obviously incorrect?
DM, Ian,
DeleteI agree that the difference is efficiency/feasibility. But for me, efficiency/feasibility is everything: I find the "philosophical possibility of uploading" risible, for instance. Engineering is my metaphysics. Really.
One more point. If you don't care about efficiency/feasibility, then you could as well talk of clockwork mechanisms, instead of silicon. Basically, a mechanical computer can do everything an electronic computer can, if you ignore efficiency/feasibility. But you keep talking about silicon. Why?
For me, uploading is like Star Trek teleportation. I see no philosophical barriers to it, but have little expectation it will come to pass (at least in the foreseeable future). Does that seem reasonable to you?
DeleteI agree we could talk about clockwork. If I am committed to the CTM as I am, then I of course agree that computers built of clockwork, water pipes, ants (e.g. HEX in discworld) etc could be conscious. I only talk of silicon because that's the most practical substrate we have yet found.
So I return to my previous question.
If it's just efficiency or feasibility, then what reason do you have to believe that sentience via quantum computation might be feasible while sentience via classical computation might not?
I'm agnostic on this question (I would even doubt whether sentient quantum computers are feasible), and I don't see how certainty one way or the other can be justified.
Massimo,
Delete"
> Basically, CTM is simply a license to be proudly ignorant about biology. <
Precisely right my friend.
"
I am curious, are you considering computational neuroscience to be equivalent to CTM? Because in that field, the brain is considered in terms of information processing circuits. Some computational neuroscientists are working on full brain emulation, and the way they do it is precisely by studying the biology and then attempting to mimic it with electronic systems, the largest such group being the Human Brain Project.
Are you, by any chance, claiming that computational neuroscientists are ignorant of biology?
All computational neuroscientists as far as I know think the brain is a computer and are working on defining the computation that is going on in the brain. The goal of defining the mind in computational terms is a given. They are not going to be scared away by statements like "CTM is simply a license to be proudly ignorant about biology." They should laugh at that.
DeleteAbsolutely not, II and Philip. I consider myself a computational neuroscientist, and I do primarily work in digital simulations of networks of neural models more complex than an Artificial Neural Network, and in higher level simulations of functional networks. But I'm at the fringe of computational neuroscience, which is mostly simulation of biochemical processes, although they do cover network dynamics in simulations.
Delete@BubbaRich: I have never heard of a computational neuroscience research lab where the mind/brain isn't approached as a computer: "Our goal is to understand, at the most fundamental, algorithmic level, how the brain processes information about the world to generate perception, knowledge, decision, and action."
Deletehttp://krieger.jhu.edu/mbi/about/
http://cnslab.mb.jhu.edu/
Around 24:55 in the podcast O'Brien spoke about his characterization of computation and the analogue/digital distinction, but I was having trouble following his thoughts, so I transcribed it to get a better idea of his thinking on the matter:
ReplyDeleteWe are thinking that computation has got something to do with what is required for a system to behave intelligently. So the characterization, to call it that (I don't think it's a definition, it's much broader than that), the characterization of computation that I favor is very generic, but it is one that says that computations are special, and they are special physical processes. There is nothing mysterious in that sense; they are special because they allow information to throw its weight around in this world.
Now the trouble with starting to talk about information is that, like computation, suddenly we have got another word that's used in all these different ways. So, to immediately clarify what I mean by information: what I mean by information is not the kind of quantitative analysis of information that we sometimes get in mathematics and especially in engineering, which derives from people like Shannon and Weaver, who developed a quantitative notion of information especially for purposes in communication, where the idea of information was simply a measure of the amount of information that a message might contain, for example. I'm not talking about information in that quantitative sense. I'm talking about information in what could be said to be a more qualitative sense, where we are actually talking about the idea of what information is actually carried by a particular state, a message, an object or some such thing, such that we can really take seriously the notion of encoding or representing information. And so I'm saying that with computation what we have is some physical process where the physical process itself is influenced in some way (and I'm very ? ? so ? very ? ? get this point I accept) by the information that is encoded by the states of the device. So the device encodes information in a qualitative sense, and the information that it encodes in some way affects the trajectory of the process in which that device engages.
Now that's, I think, a very broad notion of computation, but it captures the distinction; it allows one to then have a distinction between what we were talking about before, digital computers on the one hand and analogue computers on the other, because they both satisfy that characterization on my view; it is just that how they go about it, how in each case the physical system goes about allowing information to throw its weight around, is quite different.
I think this could perhaps be succinctly summarised as the position that the semantics of information processing have a qualitative causal effect on physical outcomes.
DeleteThis is opposed to simple post hoc interpretation of physical systems as being computational. For example, a pattern of windblown grains of sand might be interpreted as some kind of computation, but the semantics of this computation has no noticeable effect on the world. Indeed, there may be many different ways to attribute semantics to such a pattern.
I don't think what you've transcribed actually has much bearing on digital vs analog, though. It's instead a distinction between interesting and trivial computation.
DM,
Delete"I think this could perhaps be succinctly summarized as the position that the semantics of information processing have a qualitative causal effect on physical outcomes."
uhm.
"I don't think what you've transcribed actually has much bearing on digital vs analog, though. It's instead a distinction between interesting and trivial computation."
My understanding from that section is that he is saying that digital and analogue computations each permit information to throw its weight around in different ways, the digital way being more quantitative and the analogue way more qualitative. He expands a bit on each, and implies that the analogue, qualitative way ("the device encodes information in a qualitative sense and that information that it encodes in some way affects the trajectory of the process in which that device engages") is a better way to view how the brain computes and a better way to view what constitutes intelligent computation. But I don't think he means other forms of computation are trivial (though what he is saying may imply, in a general sense, that analogue computations are less trivial than digital computations; I think that is probably an over-interpretation and probably not something he intended).
Hi Marc,
DeleteI think you're probably right. I think I was misunderstanding him. Never mind!
Marc,
Delete"the device encodes information in a qualitative sense and that information that it encodes in some way affects the trajectory of the process in which that device engages"
I'm not sure precisely what that means. The information in the digital computer certainly does also affect the behavior of the physical processes involved (e.g. the flow of electrons, etc.), unless you mean something else by "trajectory of the process"? So, the question is: in what way, physically, are the analog computations more qualitative?
"the analogue [...] is a better way to view how the brain computes and a better way to view what constitutes intelligent computation"
Being a better way to view it is one thing, but I think the claim here is that a digital computer simply cannot do certain things whereas an analog one can. That is quite a different claim than just the one about what is the most suitable way for us to view the activities of the systems in question.
I agree with Gerard O'Brien that quantitative vs qualitative is a reasonable way of describing the distinction between how digital and analogue devices work.
DeleteHowever I also agree with Infinitely Improbable that this doesn't mean that analog computers can do anything that digital computers cannot. They only go about it differently, ultimately they are functionally equivalent as long as we don't care too much about efficiency or feasibility.
Infinitely Improbable,
Deletethe device encodes information in a qualitative sense and that information that it encodes in some way affects the trajectory of the process in which that device engages -- O'Brian
I'm also not sure what that means.
"The information in the digital computer certainly does also affect the behavior of the physical processes involved (e.g. the flow of electrons, etc.), unless you mean something else by "trajectory of the process"?.
Yes. I'm still looking for some aspect of 'analogue/qualitative computing', as he describes it in that context, that makes it significantly different from digital computing. To speculate, all I can see now is that he might mean there is something about the encoding of 'analogue' information, and its computing, that changes the shape or stance of the whole system, including the hardware. But information is not clearly defined, and specifically analogue information. Also, I think speaking, as we seem to be doing in the comments, within a hardware/software context may be limiting, because the boundary between hardware and software is not clear, and for the brain it may be inherently even less so. For example, an analogy for the brain might be something like this: the hardware is a team of players with red sweaters and the software is a team of players with blue sweaters, and some/many/all of the players are exchanging their sweaters with players on the opposite team some/many/all of the time.
"So, the question is, in what way, physically, are the analog computations more qualitative?"
I suppose he might say it's more the shape, as opposed to the amount, of information. But that doesn't really help.
"but I think the claim here is that a digital computer simply cannot do certain things whereas an analog one can. That is quite a different claim that just the one about what is the most suitable way for us to view the activities of the systems in question."
I think he does believe that an ordinary digital computer today is not doing what an analogue computer like the brain is doing (once we, or he, understand that better). Also, his use of digital and analogue is more about how information can be encoded: if it's encoded in a digital fashion it will then be computed in a digital fashion, and if information is encoded in an analogue fashion it will be computed in an analogue fashion. But again, that doesn't specify how each type of computation differs, so it doesn't help answer the question of whether a digital computer simply cannot do certain things that an analogue one could; I think that is not his intention, though. I think he might be going for an understanding that starts more with a search for the differences between analogue and digital information and analogue and digital encoding, and then considers the implied differences between analogue and digital computing as a whole, without consideration for how the computing would actually be implemented.
For example, on the one hand, how information is presented to, and encoded by, a string of dominoes toppling over relies more on the use of digital information and encoding (more quantitative), whereas a glob of molecules interacting as a weather system relies more on the use of analogue information and encoding (more qualitative), and in each case that implies the kind of computing going on in the system and, to get back to what he actually said, how information is throwing its weight around.
All vague speculations, or food for thought at most, clearly not a coherent view as I'm presenting things. But if I can start to make sense of how I'm reading him, I think I have to consider 1) that the concepts of digital and analogue are not mutually exclusive, be they applied to information, encoding, or computing, and 2) that how information throws its weight around, is encoded, and is computed are all intimately tied together, may be part of one process, and none may have precedence.
Hi Marc,
DeleteIn digital computers, quantities are represented precisely by symbols, e.g. the number three might be represented as the binary digits 00000011.
In analog computers, quantities are represented with imprecise real number analogies. The number three might be represented by 2.99999993421123144769..... volts.
Addition in a digital computer is done indirectly by symbol manipulation, carrying 1's etc. Addition in an analog computer is done by actually adding quantities together, e.g. adding 2 litres of water to 3 litres of water and measuring the resulting volume is one analog way of computing 2+3.
So, in analog computers, the computation is more direct, more physical. In digital computers it's more abstract.
But I agree that it makes sense to mix analog and digital. At a low level, I think brains are analog. Neurons are excited by the "analog addition" of incoming signals.
On the other hand, when a human brain adds two numbers, I don't think there are real quantities in a brain being added and measured. I think it's much more like digital computation, with each number being represented by a "digital" symbol comprised of neural excitations.
Other calculations might be analog at even a high level, such as quickly estimating approximately how many sheep you see in a given field. The number of sheep seen might correspond rather directly (if imprecisely) to the number of certain neurons excited in some way.
Does that help?
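If it helps, here is a little caricature of the contrast in Python (my own sketch, not anything O'Brien has endorsed): digital addition manipulates exact symbols, while "analog" addition combines noisy physical-style magnitudes.

    import random

    # Digital: quantities are exact symbols; addition is symbol manipulation.
    def digital_add(a, b):
        return a + b            # exact: 2 + 3 is always 5

    # "Analog" (caricature): quantities are imprecise physical magnitudes,
    # e.g. volumes of water or voltages, and addition is literal combination.
    def analog_encode(n):
        return n + random.gauss(0, 0.01)   # imperfect physical representation

    def analog_add(a, b):
        return analog_encode(a) + analog_encode(b)

    print(digital_add(2, 3))    # 5
    print(analog_add(2, 3))     # something like 5.0037 or 4.9981

The analog result is always approximate, but it is obtained by directly combining quantities rather than by manipulating symbols.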
DM,
DeleteThanks for the reply,
Interesting but I think G. O'Brien is not making those distinctions ...
(I think my second comment was fairly grounded in the podcast but my last comment was pushing it, so at this point I think I'd have to listen to the podcast again and read some of what he has written on the subject if I wanted to make either his point of view clearer, but ! )
... I think he really wants to look at the brain as an analogue computer that does analogue computations on information encoded analogically, and what the brain does is nothing like what is happening inside a computer, whether we speak of analogue or digital computers, because the word analogue in that context is not being used in the same way, or it is being used in an extremely limited way.
I think when he says "the device encodes information in a qualitative sense and that information that it encodes in some way affects the trajectory of the process in which that device engages", that sounds very close to the idea that input can incrementally change the process the device is accomplishing, or the ends to which it is working. That becomes interesting if the inputs can change the process to the extent that it is not only changing the processing quality of the device but also changing the device itself; hence it becomes hard to talk about the distinction between the information, the computing, and the device. The only machines I can think of that can really do something like that are living organisms; even a cell or a hydra seems capable of this kind of 'analogue computing' (analogue computing in the sense that I think O'Brien is alluding to).
But in the part of the podcast that I wrote down, Julia asked him about uploading minds: "Uploading of the human mind onto computers: is it definitely possible though we may never get it, maybe possible, or definitely impossible?" And he replies: "My view is theoretically, in principle, it is possible if, when we talk about uploading, it's onto a specific kind of computer, namely an analogue computer, and analogue computers depend very much on the physical media from which they are composed, so what I'm actually saying is we'd have to upload it onto a physical medium that looks a lot like the brain." That solidifies at least two problems for me: 1) it appears he is postdating his definition of analogue to whatever a brain is doing, once we get a better handle on what a brain is doing, and 2) how can you upload, or what do you upload, and onto what, if a brain does not maintain an internal separation between information, process, computation, and the device itself?
Hi Marc,
DeleteIf you want to understand his views on this, I think this paper is a good place to start. I've been reading it, and it explains in detail with examples what he means by analog computation.
http://www.adelaide.edu.au/directory/gerard.obrien?dsn=directory.file;field=data;id=12822;m=view
One specific example he gives is "computing" the pattern of shadows cast by the sun on a building by constructing a scale model and shining a light on it.
He goes on to argue that connectionist models of intelligence are in fact analogue, because their representations have "natural" semantic relations to each other rather than the "forced" relations that strings of symbols have to each other in a digital computer.
It seems to me that his argument is persuasive in that it probably does make sense to think of the brain as an analog computer. In particular, it certainly does not make sense to think of it as a digital computer.
As long as he confines himself to talking about how the brain works I don't have an issue with him. However just because the substrate of the brain is analog doesn't mean a human mind could not also be supported on a digital substrate.
His argument gives no reason I can see to doubt that a digital computer could support a conscious mind.
Further reading in this paper:
Deletehttp://www.adelaide.edu.au/directory/gerard.obrien?dsn=directory.file;field=data;id=12672;m=view
This paper makes an argument that seems to me to be an answer to the "you can't get semantics from syntax" position, arguing that while this might be correct in digital computers, the semantics of representations in analog computers arise naturally out of the intrinsic physical properties of those representations.
This is interesting, but I can see several arguments for intrinsic semantics in digital systems also, so I remain unconvinced that analog computation is necessary for consciousness.
Semantics come from syntax naturally by the relationship between inputs, outputs, and the memory of effects of both inputs and outputs (sensory data and actions). You have to have a context (normally considered as an embodied consciousness), but it's the context that creates semantics, that gives meaning to "things that happen or exist."
DeleteHAL: Dave, this conversation can serve no purpose anymore. Goodbye.
ReplyDelete@ MAX,
ReplyDelete>The DARPA Robotics Challenge aims to develop robots precisely for a Fukushima-like scenario. The robots don't have to be completely autonomous, but autonomy factors into their score. They certainly don't need to be conscious.
http://www.theroboticschallenge.org<
One of the tasks in the DARPA challenge is described as:
“For the second sub-task, the robot crosses the zig-zag hurdles and footfalls with holes. This sub-task shall be considered complete when all parts of the robot (excepting the tether) have crossed a line marked after the footfalls with holes. It does not suffice for part of the robot to cross the line; the entire robot must cross the line.”
As I have previously pointed out, this is a task that an insect brain can handle very well. The DARPA challenge is designed to develop sub-insect intelligence devices. And they can be tethered and receive some assistance from human operators!
"And they can be tethered and receive some assistance from human operators!"
DeleteThat right there is the problem with your argument. Robots in the applications you are talking about (jobs that are dangerous for humans) can always be remote-controlled in real time, which is why they don't need intelligence. If suitable actuators could be made, the robots would already be immensely useful without having any intelligence. This is a strong hint that the problem in this case is not with the lack of intelligence but with the actuators.
There's also a problem with sensitivity, I imagine.
DeleteThe body of an insect is an object of much greater complexity and sophistication than any man-made robot, even leaving the brain out of it. It has very many sensors spread around its body, including proprioception, sensitive bristles, etc. I doubt most robots have such a rich sensorium to work with.
In addition to the DNA (biomolecular) computers I mentioned [ http://en.wikipedia.org/wiki/DNA_computing ], it should be noted that probabilistic computers are being made based on analog technology (with lower power, first targeting pattern-recognition applications) [ http://www.extremetech.com/computing/168348-probabilistic-computing-imprecise-chips-save-power-improve-performance/ ]. There's a DARPA program to fund research [ http://www.wired.com/wiredenterprise/2012/08/upside/ ].
ReplyDeleteThe ultimate robot "brain" may end up being made using computer technologies like these and not just so-called standard computer technology. (Power is definitely an issue, for example.)
Is the difference between analog and digital computers tenable? Like Prof. O'Brien, I think that if we are going to upload a brain it has to be into an analog, 'brain mimicking' environment. But isn't there nowadays software that behaves like an analog computer, e.g. digital computers running programs that use neural networks for pattern classification?
ReplyDeleteHi Kurt,
DeleteI agree with you. It's not tenable. Analog computers can be emulated with digital computers, although for certain tasks analog computers may be much more efficient.
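As a simplistic illustration of what emulation means here (my own sketch; the input current, time constant and step size are arbitrary choices): an analog integrator, something like an RC circuit or a leaky neuron summing its inputs, evolves continuously, but a digital machine can step it forward in time as finely as you like.

    # Digital emulation of a leaky analog integrator dV/dt = (I - V) / tau,
    # stepped with a simple Euler scheme. Smaller dt means closer to the
    # continuous (analog) behaviour, at the cost of more computation.
    def emulate(I=1.0, tau=0.02, dt=1e-5, steps=5000):
        V = 0.0
        for _ in range(steps):
            V += dt * (I - V) / tau
        return V

    print(emulate())   # approaches the analog steady state V = I as time grows

The efficiency gap can be enormous (the analog device "computes" this for free, in real time), but there is nothing the analog device does that the stepped digital version cannot approximate as closely as we care to.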
It is true that neural network programs can run on standard digital hardware, but the hardware that runs these programs practically at all (size, power, speed) may have to have a different technology base (analog, biomolecular, etc.).
DeleteHi Philip/Disagreeable
DeleteI suppose that learning is the issue for the analog-like software on digital computers, not applying (the neural network) afterwards. Once the software is trained, and there is much greater flexibility in software than in designing an analog computer, running it for, let's say, pattern recognition is trivial. The brain as an analog computer also needs two decades of training, and I am a little wary of applying biomolecular analog computing. That said, I haven't had the time yet to read your suggested articles (but I certainly will). There is of course still an enormous amount of work (computer science and neurobiology) before we have decent AI, but at least I am starting to get convinced that it should not be theoretically impossible to perform "human-like" calculations on a digital computer.
One thing to keep in mind is what happens when the underlying hardware on which a program runs is not classical (digital). Take the case of quantum programming languages*. There are operators (like measure) that can be simulated, but they don't have exactly the same semantics as on an actual quantum computer.
Delete* http://www.quantiki.org/wiki/Quantum_Programming_Language
http://en.wikipedia.org/wiki/Quantum_programming
I don't know as much about DNA programming languages, but here is an example:
A programming language for composable DNA circuits
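To go back to the "measure" point, here is a toy single-qubit sketch of my own (plain Python, not taken from any of the languages linked above): a classical simulator reproduces the measurement statistics with a pseudorandom number, whereas on actual quantum hardware the outcome is not produced by any such internal procedure.

    import math, random

    # A qubit as two complex amplitudes; start in |0>.
    state = [1.0 + 0j, 0.0 + 0j]

    def hadamard(s):
        h = 1 / math.sqrt(2)
        return [h * (s[0] + s[1]), h * (s[0] - s[1])]

    def measure(s):
        # Simulated semantics: pick 0 or 1 pseudorandomly with Born-rule weights,
        # then collapse the state. A real quantum device does not run anything
        # like this code to decide the outcome.
        p0 = abs(s[0]) ** 2
        outcome = 0 if random.random() < p0 else 1
        s[0], s[1] = (1.0, 0.0) if outcome == 0 else (0.0, 1.0)
        return outcome

    state = hadamard(state)
    print(measure(state))   # 0 or 1, each with probability 1/2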
I don't know that I've got much to contribute here but I've enjoyed reading all this. I'll just highlight a couple of points.
ReplyDeletePhilip said: "It is true that neural network programs can run on standard digital hardware, but the hardware that runs these programs at all practically (size, power, speed) may have to have a different technology base (analog, biomolecular, etc.)."
This seems to me a crucial point which gives some substance to my intuitive responses to some of the claims being made here.
E.g. by BubbaRich who said:
"I think that a computer, with sensors to detect the world and actuators to act on it, can do and experience things in exactly the same way as an animal mind."
In practice the substrate is important. Our bodies operate and react the way they do in large part because of the relevant biochemical etc. facts. The particular processes (including neuronal ones) which our bodies perform happen the way they happen because our bodies have a particular evolutionary history etc. These particular patterns of activity only make sense in the context of that history, of that biology.
So it seems plausible that the nature and quality of our consciousness and the sentience of other living things is dependent on these details. An autonomous non-biological agent (or robot) may have some kind of consciousness or sentience (I remain agnostic on this) but it would be qualitatively different from ours (and from that of other biological creatures).
I'm unclear about simulations. If a computer was programmed to simulate a human brain, I don't see how this could ever be conscious, in part because consciousness is a product of the entire human body acting in a wider environment.
If the simulation incorporated all of this, then, sure, some kind of (virtual?) consciousness may be involved.
But this is extremely hypothetical. This sort of simulation is just not something we could aspire to create.
I like what Filippo said about engineering being his metaphysics. If it's not practically possible, in what meaningful sense is it possible? Logically? Unless your metaphysics is replete with logically possible worlds on a par with this one, I don't know that logical possibility gives you much at all.
(And the idea that our world is a simulation created by an advanced civilization is, as I see it, not worth taking seriously. There is no evidence to support it and the arguments I have seen are unconvincing to say the least.)
Hi Mark,
Delete>If the simulation incorporated all of this, then, sure, some kind of (virtual?) consciousness may be involved.<
If so, what would be the difference between virtual consciousness and real consciousness? Would it feel qualitatively the same or could you tell the difference?
>If it's not practically possible, in what meaningful sense is it possible?<
How about an analogy? Humans travelling at the speed of light or faster is fundamentally impossible. Humans travelling at 99.999999999999999% the speed of light is fundamentally possible, but in all likelihood practically impossible. This seems to me to be precisely analogous to how I view the AI problem.
> >If the simulation incorporated all of this, then, sure, some kind of (virtual?) consciousness may be involved.<
DeleteIf so, what would be the difference between virtual consciousness and real consciousness? Would it feel qualitatively the same or could you tell the difference?<
These are not questions I could even begin to address. I tried to make it clear that I am not convinced that any sufficiently sophisticated and extensive simulation – one capable of generating conscious experiences – is even possible (or whether, if it were, it would still be a 'simulation' in any significant sense of that word).
Hi All,
ReplyDeleteI have emailed Gerard O'Brien for clarification regarding his position on the significance of analog vs digital computation.
He had this to say:
"But to cut right to the chase, the distinction between analog and digital computation is significant for the very simple reason that cognitive science, as an enterprise, is interested in explaining how perceptual and cognitive processes are actually physically implemented in the brain (and is not so interested in more abstract questions about the limits of computability in different kinds of computational substrates). Given that the style of computation in analog devices is quite different from that found in their digital counterparts (both in terms of the form in which information is encoded and the manner in which is processed), the distinction here thus has important ramifications for our explanation of the nature of cognition."
I take this to mean that he is more interested in explaining how the brain creates mind than in questions about what computers might be able to do. And in his view, the brain constitutes an analog computer, so it makes sense to consider how analog computers work when doing philosophy of mind.
He went on to explain the significance of analog vs digital for phenomenal consciousness. His own suspicions are that digital computers could indeed faithfully simulate human intelligence in principle, but that such simulations would be philosophical zombies, having no phenomenal experience.
"My own position is that the fundamental difference between the way analog and digital computers operate does make it the case that only the former have the capacity to generate conscious experience, and hence that a digital intelligence would indeed be zombie. The crucial issue here concerns the way in which information is encoded in these different computational substrates (something I said a little about in the podcast). Specifically, whereas digital computers employ "symbols" (which bear arbitrary relations to the elements that compose the task domain), analog computers operate with "analogies" (and hence there is a systematic correspondence--a "resemblance" relation-- between internal states of the analog computer and elements of the task domain). One consequence of this difference is that analog computers are not "medium independent" in the way of digital computers, in the sense that it is the material properties of the substrate in which an analog computer is implemented which govern the computations it performs. And it is this connection with the material substrate that makes the difference insofar as consciousness is concerned (though there is a long story to be told here about what this really means)."
Finally, he answered a question I had about why evolution would have produced consciousness if all that was needed was intelligence. He proposed that natural selection did not select for consciousness but for intelligent behaviour. However, the only medium that natural selection had to work with was biological matter, which is more conducive to building conscious analog computers than unconscious digital computers.
I think his views are interesting and do make a thought-provoking case for the natural semantics (and thus consciousness) of analog computers and how we might have evolved consciousness by accident. However, I don't think he has established that digital computers would not be conscious, and I remain happy with the classical CTM.
'I think his views are interesting and do make a thought-provoking case for the natural semantics (and thus consciousness) of analog computers and how we might have evolved consciousness by accident.'
DeleteIf O'Brien indicated that evolution selected for intelligent behavior specifically (not for consciousness), that is not at all the same as saying consciousness evolved by accident. It suggests to me the alternative interpretation that consciousness was the best way to achieve certain intelligent behaviors to maintain stability (fitness) in changing environments. If we frame consciousness as an accidental side effect, then we can claim it has no causal role, so I think the framing is important.
It seems to me that living conscious agents display unique intelligent behaviors that would not occur without consciousness. If a p-zombie is coherent, why didn't any evolve? I think it's because consciousness has a causal role in the process of producing certain types of intelligent behaviors.
Hi Seth_blog
Delete>If a p zombie is coherent why didn't any evolve?<
This is my view and precisely why I asked him the question, because he does believe that p-zombies are possible and that consciousness plays no causal role.
And his answer is pretty good for a position I am inclined to reject. He's saying that it was easier for nature to evolve conscious intelligence than unconscious intelligence, because conscious (analog) intelligence is easier for evolution than simulated (digital) intelligence.
A good analogy might be wheels rather than legs. Engineers find wheels much easier to design and build than legs, but evolution found the reverse. Our digital computers are like wheels - easy for us to design and build but effectively impossible for nature to evolve.
But don't take this as my agreeing with him. I do of course believe that digital intelligences would also be conscious.
Disagreeable,
DeleteThanks for the clarification - I still don't think he is answering why 'conscious (analog) intelligence is easier for evolution' or why the 'conscious' part is not crucial for the referred to intelligent behaviors to manifest.
Hi Seth_blog,
DeleteHe does answer one of these questions, although if you really want to understand where he's coming from it may be best to read the papers I linked to above.
"why 'conscious (analog) intelligence is easier for evolution'"
Firstly, his main motivation is that he has developed a reasonably plausible account of how semantics (and thus qualia and consciousness) could arise in an analog computer.
He then argues that the brain is such an analog computer. I'm not sure I totally buy this, but I'm willing to consider it.
Finally, he assumes that given the messy, imprecise nature of biological components, it is simply easier for nature to evolve brains than silicon chips (as in legs versus wheels). Brains happen to be conducive to consciousness because they are analog while silicon chips are digital and so unconscious.
"why the 'conscious' part is not crucial for the referred to intelligent behaviors to manifest"
He has no argument that I've seen to support the hypothesis that p-zombies can exist. It's more of a suspicion of his than something he claims to have shown. I think it's because he acknowledges that it ought to be possible for a digital computer to do whatever an analog computer can do, but since his account of semantics doesn't apply to digital computers he feels digital computers must be unconscious.
Does that clarify things?
Here are two possibilities:
ReplyDelete(A) Build a super-massively parallel computer with standard digital CPUs and programming that simulates consciousness in a simulated brain. It would be in effect a "conscious" game-world character in a game. (Good luck cooling this computer, and it may be difficult to achieve the right speed.)
(B) Using emerging hardware chip designs, e.g., neuromorphic, probabilistic (analog), biomolecular, maybe even quantum, and accompanying programming technologies, build a conscious robot computer brain. The robot would move and learn in our world, not the game world of (A).
So it's a matter of opinion whether either (A) or (B) demonstrates consciousness.
A film about two scientists trying to make a conscious robot, just released for free for the public. Entertaining, at least.
ReplyDeleteSingularity or Bust: the film
"In 2009, film-maker and former AI programmer Raj Dye spent his summer following futurist AI researchers Ben Goertzel and Hugo DeGaris around Hong Kong and Xiamen, documenting their doings and gathering their perspectives. The result, after some work by crack film editor Alex MacKenzie, was the 45 minute documentary Singularity or Bust — a uniquely edgy, experimental Singularitarian road movie, featuring perhaps the most philosophical three-foot-tall humanoid robot ever, a glance at the fast-growing Chinese research scene in the late aughts, and even a bit of a real-life love story. The film was screened in theaters around the world, and won the Best Documentary award at the 2013 LA Cinema Festival of Hollywood and the LA Lift Off Festival. And now it is online, free of charge, for your delectation."
A summary of my position:
ReplyDeleteThe main argument against the CTM is that brain functions are substrate-dependent.
The main counter-argument to substrate-dependence is that one can always simulate any physical system on a digital computer, so that the substrate is irrelevant.
The simulation argument, however, fails for both quantum and classical physical systems because the simulation cannot be exact. The reason is that digital computers cannot do real arithmetic exactly and they require slicing time into discrete steps.
The answer that approximate simulations are enough is an assertion that the detailed behavior of the substrate does not matter. Because the simulation argument is designed to prove substrate-independence, allowing approximate simulations makes the simulation argument circular. It is not a logically or mathematically valid argument.
There are more sophisticated arguments against approximate simulations. Approximate simulations of physical systems are problematic because they introduce violations of the laws of physics. It is true that there are errors and perturbations in the real system. But these errors and perturbations, differently from the errors in the digital simulation, obey the laws of physics. For instance, they strictly conserve energy, while the errors in a digital simulation do not.
In conclusion, the simulation argument is invalid on logical and physical grounds. Substrate-dependence is not refuted, and remains a logical possibility.
Hi Filippo,
DeleteIt's really great to see it laid out like that. I feel like I have a much better understanding of where you're coming from.
I think you have identified a very fruitful point for further discussion - whether approximate simulations are enough. I obviously think they are, and I will try to explain why.
Biological systems are messy and imprecise. Brains are subject to vibrations, temperature fluctuations, magnetic fields, effects of chaos in molecular interactions, quantum fluctuations, etc. There is no conceivable way that infinite precision could possibly be necessary for consciousness because there is no way to realise infinite precision in a physical system, especially in an analog system.
Since infinite precision cannot be required, and since digital computation can offer us any degree of precision we like, digital computation must be able to approximate the workings of a brain closely enough that the same qualitative behaviours and phenomena will be observed.
>differently from the errors in the digital simulation, obey the laws of physics<
The problem with digital computation is one of precision, not of errors. Sure, programs can have errors or bugs. Any simulation of a brain which did not obey the laws of physics would be a bug. Let's assume our simulation is bug-free. If we're bug-free, I see no reason to doubt that approximations can be just as faithful to the laws of physics as physical systems, especially when we take quantum mechanics into account.
For example, I don't think it is true to say that physical systems strictly conserve energy. They do over long periods and over large volumes, but this is statistical. On tiny scales they cannot conserve energy due to Heisenberg. Even without quantum mechanics, it's theoretically possible though statistically improbable that all the air molecules in a room could happen to converge on one corner, reducing entropy and so effectively producing energy.
At tiny scales, energy does fluctuate. As long as our digital system approximates the physical system to within those kinds of scales, then the digital system can obey the laws of physics just as well as a physical system. These tiny imprecisions of approximation at quantum scales will even out at larger scales in much the same way as physical energy fluctuations, as long as we're bug-free.
Even if this were not the case, it remains an unsupported hypothesis that consciousness depends crucially on absolutely faithful following of the laws of physics. I really don't see any reason to believe this is so, although I suppose it could be. If digital computations are enough to predict and reproduce diverse physical phenomena such as air turbulence, structural integrity, protein folding etc, then what is your reason for thinking intelligence and consciousness must be different?
My own intuition is that it is likely to be possible to produce consciousness by simulating at far coarser levels of detail than quantum scales. I would even suspect that we might get away with modelling neurons directly rather than modelling the molecules that make them up. I could certainly be wrong. I think the only sensible position regarding the level of detail or precision required is agnosticism.
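On "any degree of precision we like", a trivial demonstration using Python's standard decimal module: the working precision of a digital calculation is a parameter we choose, bounded by time and memory rather than by anything fundamental.

    from decimal import Decimal, getcontext

    # The same calculation carried out at three chosen precisions.
    for digits in (15, 50, 200):
        getcontext().prec = digits
        print(digits, Decimal(2).sqrt())

So whatever finite precision turns out to matter for brain dynamics, a digital simulation can in principle be run at that precision or beyond.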
"The main argument against the CTM is that brain functions are substrate-dependent."
DeleteI define computation as being substrate-dependent. If one does define computation as being substrate-independent (which I think is very strange*), then one is setting up CTM to fail.
* I would call this "substrate-independent" concept "Platonism".