About Rationally Speaking


Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.

Friday, January 04, 2013

LessWrong on morality and logic


by Massimo Pigliucci

There has been a debate on morality brewing of late over at LessWrong. As readers of this blog know, I am not particularly sympathetic to that outlet (despite the fact that two of my collaborators here are either fans or even involved in major ways with them — see how open-minded I am?). Largely, this is because I think of the Singularity and related ideas as borderline pseudoscience, and have a hard time taking too seriously a number of other positions and claims made at LW. Still, in this case my friend Michael DeDora, who also writes here, pointed me to two pieces, one by Eliezer Yudkowsky and one by another LW author, that I'd like to comment on.

Yudkowsky has written a long (and somewhat rambling, but interesting nonetheless) essay entitled “By Which It May Be Judged” in which he explores the relationship between morality, logic and physics. A few days later, someone named Wei Dai wrote a brief response with the unambiguously declarative title “Morality Isn’t Logical,” in which the commenter presents what he takes to be decisive arguments against Yudkowsky’s thesis. I think that this time Yudkowsky got it largely right, though not entirely so (and the part he got wrong is, I think, interestingly indicative), while Dai makes some recurring mistakes in reasoning about morality that should be highlighted for future reference.

Let’s start with Yudkowsky’s argument then. He presents a thought experiment, a simple situation leading to a fundamental question in ethical reasoning: “Suppose three people find a pie — that is, three people exactly simultaneously spot a pie which has been exogenously generated in unclaimed territory. Zaire wants the entire pie; Yancy thinks that 1/3 each is fair; and Xannon thinks that fair would be taking into equal account everyone’s ideas about what is ‘fair’.” He continues: “Assuming no relevant conditions other than those already stated, ‘fairness’ simplifies to the mathematical procedure of splitting the pie into equal parts; and when this logical function is run over physical reality, it outputs ‘1/3 for Zaire, 1/3 for Yancy, 1/3 for Xannon.’”
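For the computationally inclined, the "logical function" here is nothing exotic. A minimal sketch in Python (the function name and the equal-split rule are merely my illustration of Yudkowsky's example, not code of his) might look like this:

    def fair_split(pie, claimants):
        # Under the stated assumptions, "fairness" reduces to dividing the pie
        # into equal parts, regardless of what each claimant happens to want.
        share = pie / len(claimants)
        return {name: share for name in claimants}

    print(fair_split(1.0, ["Zaire", "Yancy", "Xannon"]))
    # {'Zaire': 0.333..., 'Yancy': 0.333..., 'Xannon': 0.333...}

Running the function "over physical reality" just means feeding it the actual pie and the actual claimants; the output is a third each.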

Setting aside fancy talk of logical functions being run over physical reality, this seems to me exactly right, and for precisely the reasons Yudkowsky goes on to explain. (I will leave it to the unconvinced reader to check his original essay for the details.) He then tackles the broader question of skepticism about morality — a surprisingly fashionable attitude in certain quarters of the skeptic/atheist (but not humanist) community. Yudkowsky of course acknowledges that we shouldn’t expect to find any cosmic Stone Tablet onto which right and wrong are somehow written, and even mentions Plato’s Euthyphro as the 24 centuries-old source of that insight. Nonetheless, he thinks that “if we confess that ‘right’ lives in a world of physics and logic — because everything lives in a world of physics and logic — then we have to translate ‘right’ into those terms somehow.” Don’t know why that would be a “confession” rather than a reasonable assumption, but I’m not going to nitpick [1].

Yudkowsky proceeds by arguing that there is no “tweaking” of the physical universe one can try to make it become right to slaughter babies (I know, his assumption here could be questioned, but I actually agree with it — more later). And continues with his punchline: “But if you can’t make it good to slaughter babies by tweaking the physical state of anything — if we can’t imagine a world where some great Stone Tablet of Morality has been physically rewritten, and what is right has changed — then this is telling us that what’s ‘right’ is a logical thingy rather than a physical thingy, that’s all. The mark of a logical validity is that we can’t concretely visualize a coherent possible world where the proposition is false.”

But wait! Doesn’t Yudkowsky here run a bit too fast? What about Moore’s open question, which Yudkowsky rephrases (again, unnecessarily) as “I can see that this event is high-rated by logical function X, but is X really right?” His answer to Moore comes in the form of another thought experiment, one in which we are invited to imagine an alternative logical function (to the one that says that it isn’t right to slaughter babies), one in which the best possible action is to turn everything into paperclips. Yudkowsky argues that “as soon as you start trying to cash out the logical function that gives betterness its truth-value, it will output ‘life, consciousness, etc. and paperclips,’” finally concluding that “where moral judgment is concerned, it’s logic all the way down. ALL the way down.”

And that’s where he goes wrong. As far as I can tell he simply sneaked in the assumption that life and consciousness are better than paperclips, but that assumption is entirely unjustified, either by logic or by physics. It is, of course, perfectly justified by something else: biology, and in particular the biology of conscious social animals such as ourselves (and relevantly similar beings in the rest of the universe, let’s not be unnecessarily parochial). [2]

Even though he mentions the word, Yudkowsky seems to have forgotten that logic itself needs axioms (or assumptions) to get started. There has to be an anchor somewhere, and when it comes to reasoning about the physical world those axioms come in the form of brute facts about how the universe is (unless you think that all logically possible universes exist in a strong sense of the term “exist,” a position actually taken by some philosophers, but which we will not pursue here). Specifically, morality makes sense — as Aristotle pointed out — for beings of a certain kind, with certain goals and needs in life. The axioms are provided by human nature (or, again, a relevantly similar nature). Indeed, Yudkowsky grants that an intuitive moral sense likely evolved as an emotion in response to certain actions performed by other members of our in-group. That’s the “gut feeling” we still have today when we hear of slaughtered children but not of paperclip factories. Moral reasoning, then, aims at reflecting and expanding on our moral instincts, to bring them up to date with the complexity of post-Pleistocene environments.

So morality has a lot to do with logic — indeed I have argued that moral reasoning is a type of applied logical reasoning — but it is not logic “all the way down,” it is anchored by certain contingent facts about humanity, bonoboness and so forth.

Which brings me to Dai’s response to Yudkowsky. Dai’s perspective is that morality is not a matter of logic “in the same sense that mathematics is logical but literary criticism isn’t: the ‘reasoning’ we use to think about morality doesn’t resemble logical reasoning. All systems of logic, that I’m aware of, have a concept of proof and a method of verifying with high degree of certainty whether an argument constitutes a proof.”

Maybe Dai would do well to consult an introductory book on logic (this is a particularly accessible one). Logic is not limited to deductive reasoning; it also includes inductive and probabilistic reasoning, situations where the concept of math-like proof doesn’t make sense. And yet logicians have been able to establish whether and to what degree different types of inductive inferences are sound or not. I agree that literary criticism isn’t about logic, but it doesn’t follow that philosophical reasoning — and particularly ethical reasoning — isn’t either. (When something doesn’t logically follow from something else, and yet one insists that it does, the person in question is said to be committing an informal logical fallacy, in this specific case a non sequitur.)

Dai does present some sort of positive argument for why ethical reasoning isn’t logical: “people all the time make moral arguments that can be reversed or called into question by other moral arguments.” But that’s exceedingly weak. People also deny empirical facts all the time (climate change? Evolution? Vaccines and autism?) without this being a good argument for rejecting those empirical facts.

Of course if by “people” Dai means professional moral philosophers, then that’s a different story. And yes, professional moral philosophers do indeed disagree, but at very high levels of discourse and for quite technical reasons, as is to be expected from specialized professionals. I am not trying to argue that moral philosophy is on par with mathematics (not even Yudkowsky is going that far, I think), I’m simply trying to establish that on a range from math to literary criticism ethical reasoning is closer to the former than to the latter. And that’s because it is a form of applied logic.

Dai is worried about a possible implication of Yudkowsky’s approach: that “a person’s cognition about morality can be described as an algorithm, and that algorithm can be studied using logical reasoning.” I don’t know why people at LessWrong are so fixated on algorithms [3], but no serious philosopher would think in terms of formal algorithms when considering (informal) ethical reasoning. Moreover, since we know that it is not possible to program a computer to produce all and only the truths of number theory (which means that mathematical truths are not all logical truths), clearly algorithmic approaches run into severe limitations even with straight math, let alone with moral philosophy.

So here is the take-home stuff from the LW exchange between Yudkowsky and Dai: contra the latter, and similar to the former, ethical reasoning does have a lot to do with logic, and it should be considered an exercise in applied logic. But, despite Yudkowsky’s confident claim, morality isn’t a matter of logic “all the way down,” because it has to start with some axioms, some brute facts about the type of organisms that engage in moral reasoning to begin with. Those facts don’t come from physics (though, like everything else, they better be compatible with all the laws of physics), they come from biology. A reasonable theory of ethics, then, can emerge only from a combination of biology (by which I mean not just evolutionary biology, but also cultural evolution) and logic. Just like Aristotle would have predicted, had he lived after Darwin.

———

[1] If I were to nitpick, then I would have to register my annoyance with a paragraph in the essay where Yudkowsky seems not to get the distinction between philosophy of language and logic. I will leave the reader to look into it as an exercise; it’s the bit where he complains about “rigid designators.”

[2] Yes, yes, biological organisms are made of the same stuff that physics talks about, so aren’t we still talking about physics? Not in any interesting way. If we are talking metaphysically, that sort of ultra-reductionism skips over the possibility and nature of emergent properties, which is very much an open question. If we are talking epistemically, there is no way Yudkowsky or anyone else can produce a viable quantum theory of social interactions (computationally prohibitively complex, even if possible in principle), so we are back to biology. At the very least, this is the right (as in most informative) level of analysis for the problem at hand.

[3] Actually, I lied. I think I know why LW contributors are fixated with algorithms: because they also tend to embrace the idea that human minds might one day be “uploaded” into computers, in turn based on the idea that human consciousness is a type of computation that can be described by an algorithm. Of course, they have no particularly good reason to think so, and as I’ve argued in the past, that sort of thinking amounts to a type of crypto-dualism that simply doesn’t take seriously consciousness as a biological phenomenon. But that’s another story.

101 comments:

  1. >...it is not logic “all the way down,” it is anchored by certain contingent facts about humanity, bonoboness and so forth.

    If I am reading Eliezer correctly, you disagree less than you think. I suspect that he's going to say that those contingent facts do indeed explain & ground why it is that we humans deeply care about the particular logical entity of morality, as opposed to the particular logical entity of paperclip maximization.

    Myself, I am not as confident in my metaethics as either of you; I seem to change my mind every week.

    >Actually, I lied. I think I know why LW contributors are fixated with algorithms: because they also tend to embrace the idea that human minds might one day be “uploaded” into computers, in turn based on the idea that human consciousness is a type of computation that can be described by an algorithm.

    You are right that LW likes to use discourse about algorithms, but I don't think they are wrong to do so. If we're lucky, the problem may be merely that LW is using a broader understanding of "algorithm" than you are.

    Basically, an algorithm is just a step-by-step procedure for generating a conclusion. This could be something like a procedure for prime factorization (which is what I suspect you're imagining), but it could also be a procedure for deciding what to cook for dinner, or what-the-organism-should-do-next, or for deciding what's ethically right.

    Put another way, an ethical algorithm is a way to get from (states of the world) to (evaluations). Seen this way, finding a coherent algorithm for ethical living is the *whole project* of normative ethics from utilitarianism to deontology.

    We may not be able to formalize our ethics immediately and without philosophical difficulty, but if it's provably completely impossible to state them as an algorithm, then they're probably just incoherent, and should either be refined or discarded.

    (For example, "the greatest good for the greatest number" is obviously incompletely specified when you try to state it algorithmically: average good or aggregate good?)

    Replies
    1. Ian,

      wow, didn't take you long to write a detailed defense of LW, did it?

      > I suspect that he's going to say that those contingent facts do indeed explain & ground why it is that we humans deeply care about the particular logical entity of morality <

      I don't see how he could coherently say something like that, given his strong statement that morality is logic "all the way down." If he did, then he would have to admit that the right level of analysis for moral assumptions is biology, not physics.

      > If we're lucky, the problem may be merely that LW is using a broader understanding of "algorithm" than you are. <

      But if you define "algorithm" that broadly the word essentially means that one needs to be clear about what one means. Then calling it an algorithm simply adds a patina of pseudo-rigor, which is one of the tendencies that irks me about a number of writings at LW.

    2. >But if you define "algorithm" that broadly the word essentially means that one needs to be clear about what one means. Then calling it an algorithm simply adds a patina of pseudo-rigor, which is one of the tendencies that irks me about a number of writings at LW.

      I don't think that's an unusually broad definition of 'algorithm,' I think yours is unusually narrow. It's analogous to the way a lot of people seem to think 'evidence' means 'something in a test tube.'

    3. Oh, wait, I didn't parse the sentence
      "the word essentially means that one needs to be clear about what one means."
      correctly the first time I read your comment.

      No, that is not how I am seeking to *define* the word 'algorithm.'

      All I am saying is that any mapping from (states of the world) to (evaluations) is necessarily going to be expressible as an algorithm. And it just so happens that trying to express it that way keeps your thinking honest, because it ensures that it is actually, y'know, logically possible to live by your ethical theory.

      (Unlike, say, the categorical imperative, where I can play with reference classes in the Universal Maxim to make it say that pretty much anything is ethical.)

    4. "I don't see how he could coherently say something like that, given his strong statement that morality is logic "all the way down." If he did, then he would have to admit that the right level of analysis for moral assumptions is biology, not physics."
      Except he *is* (at least fairly close to) what you call an "ultra-reductionist". He maintains that biology *is* physics.

      Almost all of your disagreements that I have seen involve this in some way, and it's hard for me to tell how much. Tell me, had you genuinely not realized it? I'm not trying to be confrontational here, but it seems to me that this is a big issue whenever you address anything from EY.

  2. I think Ian is right about you, Yudkowsky, and the bonoboness thing. I think Yudkowsky has written about this previously.

    It's just that... people's moral judgments change all the time. People's moral judgments change depending on such trivial things as the smell of the environment at the time they make their judgment. So even if morality is like logic, people change the axioms all the time, unlike mathematics, where people usually don't (partly because you don't want someone to say 2+2 is 3 and give you 3 apples instead of 4).

    So even if morality is like logic, what's the difference between saying killing people is wrong, and saying this song is better than that song which I thought the best song ever just three minutes ago?

  3. I hope that future comments will clarify this for me, as I struggle to see where Massimo and Yudkowsky fundamentally disagree. I may be duplicating ianpollock's point, but I'm not sure that I am.

    One technique that I personally have picked up from LW and found useful is to "taboo the problem word", as follows. It seems to me that the key example of disagreement is stated here:

    "As far as I can tell he simply sneaked in the assumption that life and consciousness are better than paperclips, but that assumption is entirely unjustified, either by logic or by physics."

    Now, if I read that as ...

    "As far as I can tell he simply sneaked in the assumption that life and consciousness are _____ than paperclips, but that assumption is entirely unjustified, either by logic or by physics."

    ... and try to fill in the blank with phrases that do not use the word or close conceptual synonyms, I come up with the following guesses:

    Y: ... the assumption that life and consciousness are [ranked, given a computer program that perfectly models what's going on in an ideal reference human's brain when that human experiences what he calls "preference" in what he calls an "ethical" sense, more highly] than paperclips ...
    vs
    M: ... the assumption that life and consciousness are [soundly concluded by a moral reasoner to be goals that are more ethically preferable] than paperclips ...

    And I no longer see much of a disagreement, except that Yudkowsky's version includes AI. While clearly Massimo and Yudkowsky differ about computerized consciousness, I doubt that is what drives the disagreement here.

    If I may take my best guess at what's going on?

    When we go to ground out our moral reasoning, we go back to biology plus environmental factors. When a pedant asks us to ground those out, we go back to more contingent facts about Nature's actual selections and the resulting brain dynamics. When pressed further, we ground *those* out on unknown historical facts, mutation, and brain and body chemistry. All of which suggests that, in theory, perhaps we could explain just why we call such-and-such behavior "right" as a strict consequence of physical events spread over eons and the attending physical laws.

    To actually pursue this approach, however, would be foolish beyond comprehension. What could we possibly learn from it? To get real work done, we must start our theories at higher levels of discourse. Study human behavior in terms of biology and environmental factors.

    Roughly speaking, I think that first paragraph is what Yudkowsky has in mind when he refers to "physics" in his hyperbolically limited ontology. And that last paragraph fits well with Massimo's approach to practical moral reasoning.

    The point of contention, I think, is that here Yudkowsky uses "logic all the way down" to mean, not bricks to construct a moral home, but formal rules to specify a moral essence. Much like how the formal rules of second-order logic pin down a unique essence of the phrase "natural numbers", or how the abstract formalism of derivatives, integrals, and vector spaces creates a body of results called "vector calculus".

    In a way this answers a secular Euthyphro - reasoning from eons of physics (a/k/a biology plus environment) may justify us in our preferences for life, consciousness, etc, but it seems a mistake to say that the defining property all these "right" preferences possess is no more and no less than the fact that eons of physics justify them to us. Why not assume that there is a Stone Tablet of Morality, and it is a logical construct not yet found?

    Whether Massimo agrees or disagrees with this mathematically Platonist characterization of morality, I'd love to know, but it seems beside my point. The word "logic" for a practicing philosopher almost certainly refers to something different than it does for an AI researcher with a notably ... singular ... point of view.

  4. This comment has been removed by the author.

  5. brainoil,

    > People's moral judgments change depending on such trivial things like the smell of the environment at the time they make their judgment. <

    They do, but they ought not to. Let’s not confuse the psychology of moral judgments (descriptive) with the philosophy of it (prescriptive).

    > what's the difference between saying killing people is wrong, and saying this song is better than that song which I thought the best song ever just three minutes ago? <

    I’m confident Yudkowsky would agree with me that there is a huge difference. With practical consequences for society as well: typically, you get prosecuted for killing someone, but not for your aesthetic judgments in matters of music.

    daedalus,

    > I no longer see much of a disagreement, except that Yudkowsky's version includes AI. <

    I disagree. The sound conclusions of a moral philosopher have to be based on taking on board human nature as a background condition for moral reasoning, which Yudkowsky doesn’t do. But that’s the only way I can see — outside of begging the question — to cash out the intuition that paperclips aren’t as important / good as conscious beings.

    > Yudkowsky uses "logic all the way down" to mean, not bricks to construct a moral home, but formal rules to specify a moral essence. Much like how the formal rules of second order logic pin down a unique essence of the phrase "natural numbers" <

    Yes, and that captures the difference. Unlike numbers, morality doesn’t have “essence,” it’s a contingent concept that applies to contingent beings. That’s why morality can’t be a question of logic all the way down, unlike math.

    Ian,

    > All I am saying is that any mapping from (states of the world) to (evaluations) is necessarily going to be expressible as an algorithm. <

    According to what definition of algorithm? You do know that there are propositions that are not computable, and therefore cannot be arrived at by algorithms. (Careful how you respond here: if you are going to say something along the lines that everything, or everything meaningful, is by definition computable as an algorithm, then the word algorithm loses any interesting meaning whatsoever.)

    More pragmatically, there are plenty of things we know — or reasonably suspect — are indeed computable, but that we have no idea how to actually compute algorithmically, and yet we still need to arrive at some sort of decision regarding them.

    As for the categorical imperative, I do not actually think you can twist it so that anything is ethical. I have no trouble understanding the imperative, regardless of whether it can be implemented as an algorithm or not. But I reject it because I am not a deontologist...

    Replies
    1. Massimo,

      Not just Yudkowsky, even I do agree with you that there's a huge difference between moral judgments and aesthetic judgments. But that's only for practical reasons.

      I'm not confusing descriptive with prescriptive. I'm saying that the description is such that the prescription doesn't make a lot of sense (of course it makes practical sense though). Choosing your axioms is neither logical nor illogical. If I have moral system-X, and suddenly I change my axioms and have moral system-Y, who's to say that I made a logical error? I didn't. If you don't like Peano arithmetic, change the axioms and make a new one. You won't be making a logical or mathematical error.

      Of course, programming an AI to think that morality is logical makes sense, because it won't arbitrarily change the axioms (unless you program it that way). But humans don't have that kind of clean design. So if a human who believed that killing is bad suddenly decides to kill someone, he's probably not making a logical error. He just changed the axioms.

    2. "Yes, and that captures the difference. Unlike numbers, morality doesn’t have “essence,” it’s a contingent concept that applies to contingent beings. That’s why morality can’t be a question of logic all the way down, unlike math."
      I think Yudkowsky *might* disagree with this. Mind you, I'm not entirely sure what you mean by numbers having 'essence', but depending on what exactly you're thinking of, I *think* he would contend that either numbers don't or morality does.

  6. This comment has been removed by the author.

  7. Surely logic applies to literary criticism, morality, and indeed any statement about anything? If I decide that features X, Y and Z entail that a book is a classic of its genre, and a particular book has features X, Y and Z, then logically I must conclude it is a classic of its genre. What I can't see is any distinctive logic of morality, or, to put it another way, that the fact that it is subject to logic is what makes morality interesting or problematic. What is curious about morality is not what can be deduced from my axioms, but how those axioms got there in the first place, and how we resolve the problem if your axioms are different from mine.

    Replies
    1. This comment has been removed by the author.

  8. >According to what definition of algorithm? You do know that there are propositions that are not computable, and therefore cannot be arrived by algorithms. (Careful how you respond here: if you are going to say something along the lines that everything, or everything meaningful, is by definition computable as an algorithm, then the word algorithm loses any interesting meaning whatsoever.)

    Interesting. I'm not sure how this consideration affects my argument; I would only say that it certainly doesn't seem like anything remotely as abstruse as a Goedel problem should arise in ethical reasoning.

    >As for the categorical imperative, I do not actually think you can twist it so that anything is ethical.

    The CI says (in one formulation): "Act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction."

    Suppose I am a deontologist, but despite myself, I really want to steal a whole bunch of money from the Society for Helping Cute Kittens.

    Alas, it is not possible to will "stealing" as a universal maxim, for if it were universalized, then property would become a meaningless concept, and so therefore would stealing - leading to a contradiction. It looks like I have to stick to the straight and narrow.

    But wait! What if I forget about the maxim "stealing," and focus on the maxim "stealing by Ian Pollock (social insurance number 123 456 789) on January 5, 2013 CE." If I universalize THAT maxim, there is no contradiction anymore!

    Thus we see that the CI can permit anything whatsoever, as long as the reference class to which it is applied is suitably narrowed. *It fails as an algorithm* to deliver the results we intuitively are looking for.

    Therefore, it needs to be refined or tossed out.
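    If it helps, here is a deliberately crude sketch of that reference-class problem (my own toy, not anyone's serious formalization of the CI; "property collapses" is proxied simply by "nobody is left respecting property"):

      population = {"Ian", "Alice", "Bob", "Carol"}

      def permitted_by_ci(reference_class):
          # Universalize: everyone in the reference class steals. If nobody is
          # left respecting property, the institution collapses and the maxim
          # "contradicts itself"; a gerrymandered reference class slips through.
          property_survives = len(population - reference_class) > 0
          return property_survives

      print(permitted_by_ci(population))   # False: "stealing" universalized fails
      print(permitted_by_ci({"Ian"}))      # True: "stealing by Ian on Jan 5" passes

    Narrowing the reference class to a single named agent makes the "contradiction" disappear, which is exactly the special-pleading worry.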

    Replies
    1. A maxim cannot be universalised if it specifies something most people don't want, and also if it names a specific individual. Those are different kinds of non-universalisability, but it works either way.

  9. >a surprisingly fashionable attitude in certain quarters of the skeptic/atheist (but not humanist) community

    Surprisingly fashionable.... certain quarters.... but not humanists....

    I've lurked this blog for years and occasionally commented under various pseudonyms. Never once have I seen you seriously deal with moral skepticism/non-realism, although I have under different pseudonyms argued with you over the implications of non-realism, in this forum and others. That was a depressing experience. Is there a series of posts I have missed concerning non-realism? If not, would you please cut this nonsense out?

    We are not a silly fringe. We disagree with you over a technicality. I don't mind you saying that I am wrong, because again the difference between us is negligible, all things considered. But please quit sniping at non-realists as though they were morons or "others" from humanism.

  10. Neuroscientists have found mirror neurons: We "feel" empathy when we see someone else is hurt. But as Rorty http://en.wikipedia.org/wiki/Richard_Rorty wrote, "The brain is the hardware, the culture is the software." The culture is up to us.

  11. > Yudkowsky argues that “as soon as you start trying to cash out the logical function that gives betterness its truth-value, it will output ‘life, consciousness, etc. and paperclips,’”

    Typo: Yudkowsky writes that the betterness function will output "life, consciousness, etc is greater-subscript-B than paperclips" (where greater-than-subscript-B is the "betterness" relation meaning you got a higher score out of your betterness function for "life etc." than for "paperclips").

    Yudkowsky never claims that the fact that we judge things according to whether they're better and not according to whether they're clippier is not contingent on what sort of beings we are (after all, he acknowledges that Clippy doesn't care whether things are better). When he says it's logic all the way down, he is not talking about the causal history of our brains; he's talking about the logical relation that "better" has to our other concepts, including those which we use to justify using "better" rather than "clippier" in our judgements. It's that italicised part which is the essence of his argument (see the paragraph beginning "Where moral judgment is concerned, it's logic all the way down"). This is all in the context of the stuff about logical pinpointing and the natural numbers: Yudkowsky thinks that just as we can specify what we mean by number talk, we can also specify what we mean by "better". If Clippy uses different axioms, he's actually pinpointed some other scheme of "clippy-numbers" or "clippy-morality", even if he (confusingly) uses the same words for it as we do.

    To say that this morality is anchored in evolutionary history leaves you open to the response that your "anchoring" claim entails that whatever has evolved would be moral (assuming that "anchoring" involves some sort of justification). This is what confused Leah Libresco a while back when she suggested that an atheist who claimed that his moral sense evolved should be happy to delegate decisions to a black box on the basis that people who followed the black box's advice end up having more children. I tried to correct her but gave up: it seems she and her Catholic commenters are pretty wedded to the idea that any mind should somehow be able to arrive at morality, or it's not true morality after all.

    Replies
    1. It can be argued that biological facts are relevant to morality. For instance, if humans had a biology where females laid eggs and then left them to fend for themselves, there would be no child neglect. That is sufficient to show that morality is *not* logic all the way down; nor is this to claim that morality is just an evolutionary black box: one can still reason out a morality based on biological facts.

    2. Suppose I were a Consequentialist, travelling between those two universes.

      The "no child neglect" change would not fundamentally alter my morality, even if it runs contrary to my intuitions, because in Consequentialism whether "leaving children to fend for themselves" is Bad is the *output* of the algorithm, not a core component to the algorithm itself.

      If I were operating purely intuitively, then yes, it would be an issue, as it *would* be part of the algorithm itself.

  12. This comment has been removed by the author.

  13. My take on the matter of the relation between logic and morality is simply that logic is a more general linguistic idiom than morality: we can use logical language without using moral language, but not the reverse. But of course the matter is far more complicated for reductionists. Given that Logic and Physics represent Pure Being, they must figure out how morality is just a kind of shorthand way of talking about quantum mechanics ;)

  14. This comment has been removed by the author.

  15. Ian,

    > it certainly doesn't seem like anything remotely as abstruse as a Goedel problem should arise in ethical reasoning. <

    Yes, I was simply making the broader point that we already know that not everything is computable, so that there must be limits to what one means by “algorithm.”

    > Thus we see that the CI can permit anything whatsoever, as long as the reference class to which it is applied is suitably narrowed. *It fails as an algorithm* to deliver the results we intuitively are looking for. <

    All you’ve shown is that things get really silly when one goes sophistic about a complex concept, deliberately looking for extreme situations that no sentient human being would take seriously. Interestingly, that is precisely the problem with computer algorithms, e.g. why it’s so difficult to teach computers about context, and why computers still make simple mistakes that a four year old child wouldn’t when it comes to culturally embedded situations.

    Mark,

    > Surely logic applies to literary criticism, morality and indeed any statement about anything? <

    Indeed, though certain subjects require more formalized reasoning than others. That’s why I put ethics along a continuum from literary criticism to mathematics, closer to the latter than the former.

    Zachary,

    sorry you’ve been frustrated by this blog all these years, albeit under different pseudonyms. But you read too much into my phrase. I was not rejecting moral anti-realism across the board as silly. It is a family of serious philosophical positions, and indeed, my own thinking can be said to belong to a type of anti-realism, in the sense that I don’t believe that there are moral truths “out there.”

    Rather, what I was “sniping” at is what I see as the facile tendency (within the skeptic community, which doesn’t, by and large, comprise people who are particularly well versed in philosophy) to dismiss a number of important things as “just illusions.” The list is long, but it includes morality, free will (whatever one may mean by that), consciousness, and so on. That, I maintain, is silly, particularly when based — as it is almost always for skeptics — on misconstruing the pertinent scientific literature and ignoring the philosophical one.

    Philip,

    > But as Rorty wrote, "The brain is the hardware, the culture is the software." The culture is up to us. <

    I’m usually extremely unsympathetic to Rorty, but yup, in this case that’s reasonably good, except that I don’t make sharp distinctions between hardware and software when it comes to brains.

    Replies
    1. Massimo,

      Thank you for the correction; I did indeed read too much into your statement. I do at least hope you think the mistake understandable, given that by "moral skepticism" you appear to have meant philosophical illiteracy and intellectual laziness. Of course I have also seen the strain of "moral skepticism" to which you were referring, and I lament it equally, but I did not see why this version of meta-ethical stupidity should be mentioned to the exclusion of the others.

    2. @Massimo:

      >All you’ve shown is that things get really silly when one goes sophistic about a complex concept, deliberately looking for extreme situations that no sentient human being would take seriously. Interestingly, that is precisely the problem with computer algorithms, e.g. why it’s so difficult to teach computers about context, and why computers still make simple mistakes that a four year old child wouldn’t when it comes to culturally embedded situations.

      Probably flogging a dead horse here, but no. Remember what Kant is claiming for the categorical imperative - it is supposed to be a summary of all morality! The one imperative which is imperative by its own lights, not via some instrumental utility it has. A reduction of morality to a matter of logical consistency or "rationality."

      These are grand claims! So if I can make it say *anything* by playing around with reference classes, I have poked a pretty big hole in it (by the way, this objection is not unique to me).

      Also, keep in mind that my example of this process was deliberately absurd and sophistic in order to get the point across. But in general, there are lots of ways to apply special pleading to the CI, and not all of them are obviously absurd.

      The only reason the CI actually seems plausible to anybody as a moral principle is that we *already know* by intuition what the Right Answer is supposed to be ("don't steal"), so we can make sure we only ask it questions which will lead to the Right Answers.

  16. Paul,

    > Yudkowsky never claims that the fact that we judge things according to whether they're better and not according to whether they're clippier is not contingent on what sort of beings we are <

    But if that’s not the case it is hard to see what he means by “logic all the way down.” You have to sneak in some assumptions based on biology. If his precious AI were made of paperclips, what sort of logical argument would convince them that the universe wouldn’t be better off if it were turned into paperclips?

    > he's talking about the logical relation that "better" has to our other concepts, including those which we use to justify using "better" rather than "clippier" in our judgements. <

    Again, that sneaks in the biology, which makes the whole thing *not* a matter of logic only.

    > Yudkowsky thinks that just as we can specify what we mean by number talk, we can also specify what we mean by "better" <

    Forgive me, but that’s trivially true. The point is that the way we specify those two meanings is different: the first one really is a matter of logic all the way down, the second one isn’t.

    > To say that this morality is anchored in evolutionary history leaves you open to the response that your "anchoring" claim entails that whatever has evolved would be moral <

    I don’t think in terms of anchors. All I said was that morality doesn’t make sense if one does not take on board certain facts about human nature (a la Aristotle). But human nature isn’t just a matter of biological evolution (though certainly that’s a big part of it!). Culture also evolves, and as Hume argued, in the long run culture changes our very nature (of course he wasn’t talking in evolutionary terms, he was pre-Darwin).

    brainoil,

    > Choosing your axioms is neither logical nor illogical. If I have moral system-X, and suddenly I change my axioms and have moral system-Y, who's to say that I made a logical error? <

    That is correct, which is why I don’t think that morality is a matter of only logic. If it were, as you say, you could change the axioms at will and get completely different results (though even in math, a number of axioms are not interesting or lead to contradictions).

    > a human, who believed that killing is bad, suddenly decided to kill someone, he's probably not making a logical error. He just changed the axioms. <

    That is precisely what I’m arguing is not true. Said human would simply — and correctly — be characterized as a psychopath. Interestingly, psychopaths do suffer from the neurological incapacity of distinguishing between moral and etiquette rules...

    Paul,

    > My take on the matter of the relation between logic and morality is simply that logic is a more general linguistic idiom than morality: we can use logical language without using moral language, but not the reverse <

    Precisely.

    Replies
    1. Of course such people would be characterized as psychopaths (rightly so), but you'd have a hard time proving that they're making logical errors. If Peter Singer gives inconsistent answers to the trolley problem, it is fair to say that he's making logical errors. He thinks he's a utilitarian but he's not acting according to his own philosophy. Take the psychopath on the other hand. What logical rules is he violating when he kills someone? We can of course impose our rules on him, as we always do (pretty reasonably), but that's about it.

    2. But that's precisely my point: psychopaths do not make any logical error, consistent with my idea (contra Yudkowsky) that ethics is not a question of logic "all the way down." What psychopaths do is to ignore the "axiom" (loosely speaking) represented by the idea that morality is about facilitating human flourishing.

    3. The psychopath has a different process for making moral decisions from you, but they still have a process. It is that process that Yudkowsky refers to as "logic", not the particular family of processes that assign low utility to murdering people for fun, say. Yudkowsky is not claiming that psychopaths incorrectly execute their own logic or that they have inconsistent axioms or something, he's claiming that their process is morally bad. If someone claimed otherwise, I imagine he'd say that they hadn't understood what morally bad means: "moral" judgement just means something which (among many other things) says that psychopathic reasoning is bad.

      So in fact I think you and Yudkowsky just disagree about what "all the way down" means, since you both agree that people do stuff for reasons which make sense to the kind of beings we are, and which were formed by evolution and culture; and you both agree that moral arguments won't convince other, different, sorts of beings (but that doesn't make those arguments flawed as moral arguments, which is what Leah was confused about, I think).

      Yudkowsky's claim is not a causal history claim; it's ontological: he says that morality is made of logic, not stuff (but requires stuff that implements the logic for it to have any effect). Any justification for morality is in terms of some other logic we find compelling, hence it's logic "all the way down". Causally, we find it compelling because of evolution and culture and whatnot, but we don't find arguments that say "do X because it will give you more children" compelling, generally: we'd be worried that X might be morally wrong. We value the particular things that these causal factors have in fact made us value, not whatever those causal factors were trying to make us value (anthropomorphising the blind forces of evolution there, but you get the point, I hope).

      So I would dispute the claim that valuing life and consciousness is "perfectly justified by something else: biology and in particular the biology of conscious social animals such as ourselves". If I'm asked to give a reason why I value life and consciousness, I think I'd have to say "well, I just do". If I were asked what the cause of my feelings was, I'd definitely point to biology and culture and so on, though, so maybe we just disagree about what justification means.

  17. Massimo,
    > If his precious AI were made of paperclips, what sort of logical argument would convince them that the universe wouldn’t be better off if it were turned into paperclips?<
    It's not what they're made of, but I get the point; it seems to me Yudkowsky's position is that no argument would persuade some minds not to turn everything into paperclips.

    Perhaps, his post entitled "No Universally Compelling Arguments" would provide more information on his take on this.

  18. Paul says, "we can use logical language without using moral language, but not the reverse," and Massimo says "Precisely."

    In other words they both agree that you can't use moral language without using logical language.
    Which means what, that moral language can't be illogical?
    Or that on the other hand, logical language can be neither right nor wrong?
    Why, because logic in the end is mathematical and morality isn't?
    Unless of course it's mathematics that in the end is logical but can't be moral unless it's used for logically immoral purposes.

    You really can't make an argument while at the same time ignoring the purposes of that argument, now can you?

    Mathematics has logical purposes. Logic does not have mathematical purposes.

  19. Baron P ,

    The expression 'logical language' is ambiguous between language that is logically correct (in accordance with logically correct thinking) and the aspect of language represented by logical words and how they work in language. I mean the latter, and in this sense logical language can certainly be illogical. To say that something both exists and does not exist, for example, is an illogical use of logical language. That logical language is part of moral language says nothing about the correctness of statements about morality.

  20. Paul,

    > The psychopath has a different process for making moral decisions from you, but they still have a process. It is that process that Yudkowsky refers to as "logic" <

    That may be, but in that case the statement is trivial. We still need a way to say that the psychopath (or the mad paperclipper!) is doing the wrong thing, and that can only come from a consideration of facts about human biology and culture, not from logic.

    > he's claiming that their process is morally bad. If someone claimed otherwise, I imagine he'd say that they hadn't understood what morally bad means <

    I doubt he can get off that easily. Refer, again, to his statement that there is an inherent “logical” superiority in the function that maximizes consciousness over the one that maximizes paperclipperness. Paperclips around the universe could reasonably object.

    > Yudkowsky's claim is not a causal history claim, it's ontological: he says that morality is made of logic, not stuff <

    That’s an interesting take, but it amounts to Yudkowsky making a category mistake: morality isn’t “made” of anything, and nothing is made of logic. Morality isn’t a thing, it is a set of (hopefully logical) judgments. So even ontologically we are not at all as close as you seem to think.

    Ian,

    I think you are shortchanging deontological ethics a bit too quickly. There is a huge technical literature on it, which is worth looking at:

    http://plato.stanford.edu/entries/kant-moral/
    http://plato.stanford.edu/entries/kant-reason/
    http://plato.stanford.edu/entries/kant-hume-morality/

    The relatively simple way around your objection is Quinean: to understand the force of the categorical imperative one has to rely on a web of background knowledge, a web that precludes your reductio as an example of a reasonable interpretation of the CI.

    Incidentally, you did not address my counter-point: that we have been really bad so far at teaching much simpler concepts to computers...

    Replies
    1. >I think you are shortchanging deontological ethics a bit too quickly.

      I'm only beating up on one formulation of the Categorical Imperative, and on its pretensions to encapsulate the whole moral law. I am actually pretty sympathetic to some deontological ideas (recall I was beating up on utilitarianism earlier, despite being VERY sympathetic to it).

      >The relatively simple way around your objection is Quinean: to understand the force of the categorical imperative one has to rely on a web of background knowledge, a web that precludes your reductio as an example of a reasonable interpretation of the CI.

      Aye, but the "background knowledge" in question is the set of all moral conclusions you already endorse! This is no mere semantic objection, like for example if I were equivocating on the word "stealing." The reference class problem is much uglier; it makes the CI totally vulnerable to special pleading (morality is supposed to push me around, not the reverse). But anyway, I suggest we shelve this, since Kant is a side issue.

      >Incidentally, you did not address my counter-point: that we have been really bad so far at teaching much simpler concepts to computers...

      I agree with this but fail to see its relevance. The importance of a moral theory's being expressible as an informal algorithm (for me, anyway) has nothing to do with AI; it's a heuristic which forces you to avoid many of the failure modes you would otherwise easily fall into in ethical theorizing.

      (Such as: using un-analyzed moral vocabulary, relying on your readers' pre-existing moral commitments to do most of the work of filling in the blanks, having so many degrees of freedom that it offers no action-guidance, being vulnerable to re-framing, and many others.)

    2. Ian,

      yes, let's leave Kant aside, though I maintain that even your original example wouldn't really concern him: the reference class is supposed to be all of humanity in all its actions, so you can't pick your own and then criticize Kant for it.

      As for the algorithm issue, I'm not sure what you mean by an "informal algorithm"; I thought the whole point of talking about algorithms was to formalize (again, in the service of Yudkowsky's preoccupation with AI) talk of morality. If you make it informal, aren't you going back to a simple request that people be clear so that others understand what they are talking about?

      And how would you formalize algorithmically something like the Aristotelian idea that morality is about human flourishing? I understand the concept perfectly well, but it is too fuzzy to be rendered precise enough so that you can implement it in a computer (for similar reasons to why computers are so bad at understanding social context, hence my comment above).

    3. >If you make it informal, aren't you going back to a simple request that people be clear so that other understand what they are talking about?

      It's not MUCH more radical an idea than that. You just have to go the extra distance of realizing that what seems "clear" to humans may be obviously not clear once you try to spell it out more or less formally, with terms reduced to the greatest extent possible.

      >And how would you formalize algorithmically something like the Aristotelian idea that morality is about human flourishing?

      It would be *mind-bogglingly difficult.* But that is a feature, not a bug, of this approach. It forces you to notice things that natural language allows you to gloss over, like the 900 tons of philosophical and psychological baggage hidden in the word "flourishing."

      Ultimately the approach would force you to break "flourishing" down into a (huge) list of human terminal values; you would thus have gained a key insight into the sheer disparateness of the things we value morally (as against, say, something like Ronald Dworkin's theory of the Unity of Value).

      >I understand the concept perfectly well...

      Really? I don't. I can tell you (when I see it) whether some scenario is good or bad in terms of flourishing, but I have very poor introspective access to the cognitive algorithms that are consistently generating such judgments.

    4. David Sloan Wilson wrote, re the concept of behavioral flexibility, also called phenotypic plasticity:
      “No organism is so simple that it is instructed by its genes to “do x”. Even bacteria and protozoa are genetically endowed with a set of if-then rules of the form “do x in situation 1”, “do y in situation 2” and so on. These rules enable organisms to do the right thing at the right time, not only behaviorally but physiologically and morphologically."
      My guess is these are the rules that govern cognitive algorithms, through which we've managed to logically evolve and flourish.

    5. Ian,

      wow, your latest response is an excellent summary of what I think is wrong with the LW approach. Which was very helpful. For instance:

      > You just have to go the extra distance of realizing that what seems "clear" to humans may be obviously not clear once you try to spell it out more or less formally <

      But you need to realize that some of the time the problem isn't with the fuzziness; indeed, the fuzziness is part and parcel of the concept, not a problem to eliminate. Think Wittgensteinian family resemblance.

      > It would be *mind-bogglingly difficult.* But that is a feature, not a bug, of this approach <

      Again, I'm all in favor of clear thinking, but I think that when something becomes mind-bogglingly difficult, and yet we gain little or nothing after we spell it out to a computer, it's a bug, not a feature.

      > Ultimately the approach would force you to break "flourishing" down into a (huge) list of human terminal values <

      If your goal is to explain it to a computer. I think humans do very well without that Spock-like approach.

      > I have very poor introspective access to the cognitive algorithms that are consistently generating such judgments. <

      Of course, so what? You seem to be confusing the need for accessing the algorithm (certainly present if you need to program a computer, irrelevant if you are a human being) with the question of whether the "algorithm" works or not. And it's likely not an algorithm anyway...

    6. > We still need a way to say that the psychopath (or the mad paperclipper!) is doing the wrong thing, and that can only come from a consideration of facts about human biology and culture

      I'm not sure what this consideration gets you unless you are already the sort of person who cares about such facts. You could tell Clippy facts about human biology and culture but this would not help you to convince him not to turn us into paperclips, because, as I think we all agree, facts aren't motivating to all possible minds.

      Conversely, I don't see why we'd want to appeal to facts about human biology and culture to demonstrate to other humans why Clippy is wrong. I mean, I suppose we might use these facts instrumentally to reach some terminal point where humans care about the outcome ("removing all the iron from my blood to make paperclips will kill me" is such a biological fact) but it's redundant to say that "humans are motivated to prevent the pointless deaths of others" to a human who already is so motivated: the motivation to stop others dying pointlessly isn't the same thing as the fact that most humans have such a motivation.

      > Refer, again, to his statement that there is an inherent “logical” superiority in the function that maximizes consciousness over the one that maximizes paperclipperness.

      Perhaps we've reached the root of your objection. But again, I don't think you actually disagree. Yudkowsky makes no such claim, as far as I can see. Where do you think he does? He doesn't believe in inherent superiority (in the sense of a superiority which would convince both Clippy and a human); he believes human values are better than Clippy's. Of course, Clippy doesn't care about better, he cares about whether things are clippier (in the comments on Less Wrong, someone uses "clippy-better," but people are anxious to avoid that terminology: clippier is nothing like better in terms of what is actually preferred).

      Delete
    7. >But you need to realize that some of the time the problem isn't with the fuzziness, indeed the fuzziness is part and parcel of the concept, not a problem to eliminate.

      I am not sure what kind of fuzziness exactly is being discussed (maybe we should relate this to a concrete example?), but considering the extent to which our moral theorizing impacts others (ostracism, social pressure, legal penalties, etc.), there are many situations where fuzziness is simply not acceptable.

      For example, "It's just wrong" will not do as an answer, especially if that answer has legal force behind it.

      Delete
    8. >Of course, so what? You seem to be confusing the need for accessing the algorithm (certainly present if you need to program a computer, irrelevant if you are a human being)...

      It is certainly NOT irrelevant, and saying that it is amounts to a complete dismissal of most moral philosophy!

      When, for example, we feel moralized disgust at some sexually deviant behaviour, we don't get to just leave it at that. We have to (a) figure out where the disgust is coming from, (b) decide whether we want to endorse it on reflection. Doing (a) is what I am calling "accessing the algorithm."

      I find it hard to believe that you really endorse this sort of crass moral intuitionism.

      Delete
  21. What is being ignored here in the talk about logic, and about why or how it goes "all the way down," is that if this is so, it's because our biology, from which our morality actually emerges, has itself been a logically developed process, or at least an attempt at perfecting one, all the way down. And the logic operates within the biological entity, not somewhere outside in mother nature's selective sieve or father God's selective determinants.
    And if psychopaths are logically immoral, it's because their biological strategies have evolved (at least from our culture's logical point of view) to take an illogical turn.
    (And I didn't mention the logical purposes of our culture once.)

    ReplyDelete
  22. Paul,

    > You could tell Clippy facts about human biology and culture but this would not help you to convince him not to turn us into paperclips, because, as I think we all agree, facts aren't motivating to all possible minds. <

    That’s right, which means morality cannot be a matter of logic only, or Clippy would agree with us.

    > it's redundant to say that "humans are motivated to prevent the pointless deaths of others" to a human who already is so motivated <

    Yes, but ethics is a lot more complicated and nuanced than the simple case you present. In complex cases, human beings may agree on the fundamental axioms and still have to do a lot of work to figure out whether, for instance, late-term abortion or drone warfare is moral.

    > Yudkowsky makes no such claim, as far as I can see <

    He does, right in the linked post. He uses the word “better,” as you point out, which amounts to the same thing in this case.

    > Clippy doesn't care about better, he cares about whether things are clippier <

    Quoting Yudkowsky: “As soon as you start trying to cash out the logical function that gives betterness its truth-value, it will output ‘life, consciousness, etc. > paperclips’.”

    Unless he is smuggling in the biological-cultural axioms I referred to, this amounts to a massive begging of the question: on what grounds does he think that the truth-value of “betterness” is such that life and consciousness beat paperclips?

    Ian,

    > I am not sure what kind of fuzziness exactly is being discussed <

    I gave a specific reference: Wittgenstein’s family resemblance (also known as cluster) concepts. His favorite example was the concept of game; I wrote specifically about the concept of biological species.

    > "It's just wrong" will not do as an answer, especially if that answer has legal force behind it. <

    Now you are simply setting up a straw man. You ought (moral) to know better than to think that I would argue on that basis.

    > It is certainly NOT irrelevant, and saying that it is amounts to a complete dismissal of most moral philosophy! <

    Seriously? So we couldn’t do science, say, until we figured out what mechanisms in the brain allow us to come up with and test hypotheses? I hope you see that that’s a non sequitur.

    > We have to (a) figure out where the disgust is coming from, (b) decide whether we want to endorse it on reflection. Doing (a) is what I am calling "accessing the algorithm." <

    I call it engaging in good thinking. Saying that it is an algorithm simply adds a patina of pseudo-rigor to a process that we already practice without help from LessWrong, thank you very much.

    > I find it hard to believe that you really endorse this sort of crass moral intuitionism. <

    I can’t begin to guess how you arrived at that conclusion, my friend.

    ReplyDelete
    Replies
    1. I agree with Paul on this.

      > Quoting Yudkowsky: “As soon as you start trying to cash out the logical function that gives betterness its truth-value, it will output ‘life, consciousness, etc. > paperclips’.”

      Unless he is smuggling in the biological-cultural axioms I referred to, this amounts to a massive begging of the question: on what grounds does he think that the truth-value of “betterness” is such that life and consciousness beat paperclips? <

      I don't know that I'd call it 'smuggling', since the point appears to be clear in some of his posts; for instance, in the one I linked to above.

      While he thinks that the function 'betterness' is such that consciousness > paperclips, his position implies that the function 'clip-betterness', or 'clipperness', or whatever one calls it, is that paperclips > consciousness.
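
      To make the contrast concrete, here is a minimal sketch (entirely my own toy example, not anything from Yudkowsky's post): two valuation functions run over the same pair of outcomes and return opposite rankings:

# Two valuation functions over the same hypothetical outcomes.
# The numbers are arbitrary; only the resulting rankings matter.
outcomes = {
    "galaxy full of flourishing conscious life": {"conscious_lives": 10**9, "paperclips": 0},
    "galaxy converted into paperclips": {"conscious_lives": 0, "paperclips": 10**30},
}

def betterness(o):    # ranks outcomes by life and consciousness
    return o["conscious_lives"]

def clipperness(o):   # ranks outcomes by number of paperclips
    return o["paperclips"]

for name, value in [("betterness", betterness), ("clipperness", clipperness)]:
    preferred = max(outcomes, key=lambda k: value(outcomes[k]))
    print(name, "prefers:", preferred)
# betterness prefers the flourishing outcome; clipperness prefers the paperclips.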

      Delete
    2. >I can’t begin to guess how you arrived at that conclusion, my friend.

      Not a conclusion; just what your words seemed to imply if taken at face value. Hence my finding it hard to believe, as I know you to be more sensible.

      I think we have probably tapped out this discussion, which was only ever a nitpicky minor disagreement about vocabulary choice anyway. Looking back at the pages and pages of comments, it appears I find it hard not to have the last word. :) Apologies if I got carried away.

      Delete
    3. > That’s right, which means morality cannot be a matter of logic only, or Clippy would agree with us.

      Clippy can perfectly well agree with us that it is bad to kill people to make paperclips.

      There's a bit in the earlier (and harder to understand, I thought) Less Wrong stuff on morality which makes this point: read http://lesswrong.com/lw/rs/created_already_in_motion/ from "And even if you have a mind that does carry out modus ponens". Imagine Clippy is such a mind, and we give him the axioms A and B (and explain to him that "fuzzle" is "good"). If he reasons correctly, Clippy agrees with us that pulling that toddler off the railway tracks is good. We can even explain further, so that Clippy agrees with us that killing humans to make paperclips is not good.

      I think what you're arguing is that Clippy should find goodness motivating, that it should persuade him to save the toddler and dissuade him from killing us, on pain of being wrong in his logical reasoning. But Clippy is motivated by clippiness, not goodness/betterness: his reasoning isn't coupled with what Yudkowsky calls the "dynamic" of actually doing anything as a result of deductions about goodness.

      Possibly you think that "logic" here has a connotation of "motivating to all minds", but you've already agreed (I think) that our motivation for moral goodness is contingent on the kind of creatures we are, so why think that "logic" is any different? What Yudkowsky means by "logic" is not that something is motivating to all possible minds: logic is abstract and definitional, and, AFAICT, he thinks that nothing is motivating "by definition".
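
      A toy sketch of that separation (my own illustration; the 'fuzzle' predicate and the two axioms are stand-ins in the spirit of the linked post): an agent can correctly derive that something is 'good' from axioms we hand it, while its choices are driven by an entirely different criterion:

# Toy separation of inference ("is this good?") from motivation ("will I do it?").
# The 'fuzzle' predicate and the two axioms are stand-ins, following the linked post.
axiom_a = {"pulling the toddler off the tracks": "fuzzle"}  # A: this act is fuzzle
axiom_b = {"fuzzle": "good"}                                # B: whatever is fuzzle is good

def derives_good(action):
    """Clippy's (correct) logical derivation about goodness."""
    return axiom_b.get(axiom_a.get(action)) == "good"

def clippy_acts_on(action, paperclips_gained):
    """Clippy's motivation: it acts on clippiness, not on goodness."""
    return paperclips_gained > 0

act = "pulling the toddler off the tracks"
print("Clippy agrees the act is good:", derives_good(act))       # True
print("Clippy actually performs it:  ", clippy_acts_on(act, 0))  # False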

      Delete
  23. I suspect that the discussion would do well to look back to Hume, and the "is-ought" problem. I will neglect the prior problem of answering empirical "is" questions and the problem of induction (and the relatively arbitrary prior problem of language selection and/or translation between selections) as something less-wrong Bayesians will presumably be able to find agreement on.

    Questions of morality, and of ought, such as "ought I to kill this person or let him live?", involve partial ordering relationships over choices -- option A may be better than B, worse than B, morally equivalent to B, or incomparable to B. Given the standard ZF axioms, and a set corresponding to the choices, it is relatively easy to show that there exists a set corresponding to the ordering relationships over the choices. However, barring trivial cases (no choice or only one choice), there is more than one possible ordering in that set. "Is" axioms do not answer the question of which "ought" ordering is correct; an additional axiom is required to indicate which one is the referent of "ought". (In the case of infinitely many choices, the Axiom of Choice also seems required; for finite choices, ZF without Choice suffices.)

    Given the additional bridge axiom, the comparison of two choices via that axiom becomes an empirical question; but the comparison of choices itself is not. So, if you define "good" as "fairness", you get one result. If, as Haidt indicates humans tend to do, you define "good" using some metric weighting multiple concepts like harm avoidance, equitability, reciprocity, in-group membership, dominance, prestige, and purity, you may get another. (Pointing to "biology" appears equivalent to an ordering favoring evolutionary survival over its lack.)

    The key seems to be to pay attention to where an ordering axiom is inserted.
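
    As a minimal sketch of that point (the choices, foundations, and weights below are all invented for illustration), the same descriptive facts about a set of choices yield different "ought" orderings depending on which bridge axiom is inserted:

# Same descriptive facts, different "ought" orderings depending on the inserted axiom.
# The choices and their scores on a few Haidt-style foundations are purely hypothetical.
choices = {
    "split the pie equally":       {"fairness": 1.0, "harm_avoidance": 0.5, "in_group": 0.3},
    "give it all to the neediest": {"fairness": 0.4, "harm_avoidance": 1.0, "in_group": 0.2},
    "keep it for our own group":   {"fairness": 0.1, "harm_avoidance": 0.2, "in_group": 1.0},
}

def order_by(weights):
    """Rank the choices under one bridge axiom: 'good' = this weighted sum of foundations."""
    score = lambda facts: sum(weights.get(k, 0.0) * v for k, v in facts.items())
    return sorted(choices, key=lambda c: score(choices[c]), reverse=True)

fairness_only = {"fairness": 1.0}
multi_foundation = {"fairness": 0.2, "harm_avoidance": 0.6, "in_group": 0.2}

print(order_by(fairness_only))     # the fairness axiom puts the equal split first
print(order_by(multi_foundation))  # this weighting ranks helping the neediest first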

    ReplyDelete
  26. Ian,

    > Looking back at the pages and pages of comments, it appears I find it hard not to have the last word. :) <

    Ok, I’ll leave you the last word, but... ;-)

    Angra,

    > While he thinks that the function 'betterness' is such that consciousness > paperclips, his position implies that the function 'clip-betterness', or 'clipperness', or whatever one calls it, is that paperclips > consciousness. <

    Fine, but obviously he cares about the betterness function, not the clipperness one. Why? Because he is a social animal of a particular type, characterized by a certain cultural evolution, etc. etc. But he doesn’t admit it, because that wouldn’t be logic all the way down anymore.

    ReplyDelete
    Replies
    1. Massimo,

      I recognize that I've not read many posts on LW, or generally by Yudkowsky, but it seems to me that Yudkowsky's position is precisely that he cares about goodness because he's a human being [perhaps, a human being who isn't a psychopath] and goodness is the kind of function humans care about, whereas clipperness isn't. So, I'm not sure what it is that he doesn't admit; I'm not sure if I'm reading 'cultural evolution' right, though.

      By the way, I get from his post that his position would be that, like morality, clippality is a logical thingy rather than a physical thingy, and even that clippal judgment is also logic all the way down.

      For instance, Yudkowsky said: "Conversely, Clippy finds that there's no clippyflurph for preserving life, and hence that it is unclipperiffic."
      Clippy is also computing a logical fact, and [presumably correctly] computes that preserving life is unclipperiffic.

      On that note, he says: "...what's 'right' is a logical thingy rather than a physical thingy, that's all. The mark of a logical validity is that we can't concretely visualize a coherent possible world where the proposition is false."

      But similarly, we can't concretely visualize a coherent possible world where preserving life is a clippery thing to do (well, assuming certain facts about the concept of clipperness; else, we'd have to pick another example, but that's another matter).

      Delete
    2. I agree with Angra Mainu about what Yudkowsky thinks. Whether I think what Yudkowsky thinks, I'm not sure, but it makes a certain amount of sense. After reading it, I now understand why he claims to be a moral realist better than I did after reading his earlier stuff on morality. There are things I'm not clear on, such as what he thinks is going on when there are moral disagreements between humans.

      http://lesswrong.com/lw/fv3/by_which_it_may_be_judged/80mr gets into how to translate Yudkowsky-speak into analytic-philosophy-speak.

      I should probably go over to Less Wrong and link to this thread, just in case Yudkowsky knows what he thinks better than any of us do.

      Delete
  27. Lance,

    not sure I want to repeat that particular conversation again, especially since it’s entirely tangential to the main post. But since you brought it up:

    > I don’t see any reason to think it is a biological phenomenon in particular rather than a phenomenon biological things happen to instantiate <

    Because we *only* have examples of biological consciousness. Until the empirical evidence says otherwise it seems a basic mistake to not take the biology seriously. Would you not take the biology seriously if we were talking about photosynthesis? Do you think *anything* can photosynthesize, regardless of the materials and chemical properties involved?

    > What is the analogous feature of consciousness that corresponds to sugar in photosynthesis? <

    Qualia.

    > Whereas photosynthesis can be coherently and usefully described as a specific physical process, I can’t even imagine how this could be analogous to consciousness. <

    I can’t imagine how it cannot, unless you assume that consciousness is all about symbol manipulation and computation. Which, in this context, amounts to begging the question.

    > There’s no magic sauce <

    Please, nobody’s talking about magic, you should know better than to accuse me of that sort of thing.

    > I deny that you can privilege this default presumption about consciousness as a biological “until proven otherwise” <

    You are welcome to deny it, but see my first comment about the only available examples of consciousness being biological.

    > Until we get clear on what you even mean by consciousness <

    Oh, the “let’s get a precise definition before we get started” game. I won’t play. I think we all know exactly what we mean by consciousness. Again, it has to do with qualia...

    > I might just deny that what you’re talking about even exists. <

    Which would be bizarre, because I wager that you are a conscious being, unlike, say, the computer you used to type those words.

    > “Mind” is radically different from matter, but that doesn’t make anyone a dualist for thinking so. Jumping is radically different from matter. <

    Yes, it does, by definition. And the analogy is not good, since jumping is an activity, not a thing. Now, if you are talking about the *activity* of minding, I do think that’s a better way of thinking about the whole shebang. But if you think the activity is independent of a physical substrate it’s like saying that there can be jumping without a particular kind of thing that jumps. Or do you think that *anything* can jump?

    > We’re claiming consciousness is also something matter can do <

    Of course, so do I. I just claim that not *all* forms of matter can do consciousness. And I still need to see a shred of evidence that such a claim is false.

    > Those of us who hold the views you criticize here *don’t think that consciousness is a type of stuff* <

    Then I suggest that those of you who hold those views have some conceptual clarification to do. Which is what I’m trying to do here.

    > I think I can get a machine to jump, whether it was made out of silicon or carbon or even lego blocks. <

    It would be more difficult with lego blocks, and impossible with a lot of other materials. That’s my point. I’m concerned with the range of possible physical materials capable of doing consciousness; you seem to think that such a range is effectively unlimited (because it’s all about computations).

    > The capacity to jump transcends the physical substrate. <

    No, it doesn’t. Try to make a solid block, or a highly viscous substance, jump.

    > to accuse them of that suggests to me that you have failed to grasp their position. <

    That’s one possibility. The other is that they may have failed to grasp the implications of their position.

    ReplyDelete
    Replies
    1. @Massimo:

      >Because we *only* have examples of biological consciousness... Do you think *anything* can photosynthesize, regardless of the materials and chemical properties involved?

      I want to try and convince you that your well-worn photosynthesis analogy is a bad one.

      Consider: the output of photosynthesis is sugar, whose purpose from the organism's point of view is to be burned.

      What is the purpose of a mind, from the organism's point of view? Well, in very broad terms, its purpose is to receive sense data, then direct the body in response to them. It does this mainly by sending nerve impulses to the various parts of the body.

      Thus, in the same way as the "output" of photosynthesis is sugar, the ultimate "output" of a mind is nervous signals telling the body what to do, the lips how to move, etc. *That is the whole point (from a biological perspective) of having a mind.*

      ...And those nervous impulses could be sent by a digital computer with input/output hardware. In principle. That is because the output of a mind is signals, not substances. Thus breaks the analogy with photosynthesis.

      You may reply that my account is missing qualia, or the phenomenal aspect of consciousness. This is true; in focusing on "outputs" I have not addressed whether a machine with indistinguishable input/output behaviour to a biological brain will experience the redness of red. All I have shown is that the effects of a mind on the world can be replicated by a machine.

      (Including all *talk* of qualia, which after all is physically embodied talk that lies at the end of a causal process.)

      Delete
    2. "What is the purpose of a mind, from the organism's point of view?"
      Massimo has informed me that nothing is supposed to think it has a purpose.

      Delete
    3. I've had this argument before with Massimo. I gather his charge of dualism likely refers to the property dualism that most AI fans agree with. I also believe Massimo subscribes to a form of the embodied mind thesis. These are rational positions that lead him to disagree with many AI fans. I don't believe you will convince him to change his mind! @jcockrel

      Delete
  28. Hi Massimo, thank you for the reply, and I apologize for my prolixity...I hope you'll see fit to post my responses in full, despite their length. Since I went straight for where we disagree, I should note that while I might quibble over some of the details, I also take issue with the main topic of discussion in this post.

    Part 1:

    [Because we *only* have examples of biological consciousness. Until the empirical evidence says otherwise it seems a basic mistake to not take the biology seriously. Would you not take the biology seriously if we were talking about photosynthesis? Do you think *anything* can photosynthesize, regardless of the materials and chemical properties involved?]

    Prior to planes and other synthetic means of flight the only examples we had of directed flight were organisms. That wasn’t a good reason to think flight was a biological phenomenon. Flying is a product of having the right physics, and it doesn’t require carbon to achieve it. I see no reason to think any differently about consciousness. I’m directly analogizing consciousness to flight: unless you think that in 2000 BCE it was sensible to think flight was only possible for living things, you have no reason to think the same of consciousness. The mere paucity of available alternatives is not itself a good reason to think consciousness is uniquely biological.

    As far as photosynthesis, yes: all I need is a machine that can produce sugar with light, water, CO2…If it’s got the same inputs and outputs, I’m happy to call that “photosynthesis”, even if it’s realized by machines that involved no carbon whatsoever. If we defined photosynthesis functionally, anything that took the same input and yielded the same output would by definition be carrying out photosynthesis. Only by forcing biological features into the definition can you insist photosynthesis must be biological: but that’s just begging the question if you use it as an analogy for consciousness being biological. That, or it just so happens that the physics of our universe only allows such conversions by biological things, which strikes me as deeply implausible, and I can’t understand why anyone would assume that.

    That being said, the short answer is: no. I don’t “take the biology seriously”. It would be ludicrous to think only living things could convert light, CO2 and water into sugar and waste products. Obviously a machine could do that. In fact, as far as I know, research on synthetic photosynthesis has existed for a while, and often involves a host of nonorganic compounds and materials. And to the extent that such processes can result in sugar, they’re “doing photosynthesis”.

    [Qualia.]

    I deny there’s any such phenomena as that to be accounted for, at least not in anything like the way qualia advocates use the term. I know what sugar is. I have no idea what qualia is supposed to be in the way you’re likely employing it, beyond a confusion in the minds of philosophers. When clarified as something a little different, and less mysterious, the notion of qualia seems to me to adequately dissolve under reductionist views of consciousness. To be frank, I think Dennett’s views are basically right about this.

    I suspect it’s not even a confusion most people typically have, as recent studies suggest folk psychology fails to distinguish mental states in the way philosophers ordinarily do (Sytsma & Machery, in particular). That being said, we have some pretty fundamental disagreements about what the subject matter of consciousness even is. Since I deny many forms of qualia that are the most interesting to certain philosophers of mind, I can hardly be accused of making ontologically dubious distinctions between them and other phenomena, any more than my denial of ghosts makes me a dualist. I also suspect my views aren’t terribly uncommon among those you accuse of dualism, again making this charge in particular a strange one. I don’t even know what it would mean for me to be a dualist.

    ReplyDelete
  29. Part 2:

    [I can’t imagine how it cannot, unless you assume that consciousness is all about symbol manipulation and computation. Which, in this context, amounts to begging the question.]

    It’s not begging the question if it isn’t what’s being argued for. The folks at LessWrong are taking on shared assumptions and arguing about further implications that follow. If two biologists are arguing about evolution, it would be silly to accuse them of “begging the question” by both assuming evolution is true. Likewise, to enter in on a discussion among people who share similar views of consciousness, which are not under dispute, and accuse them of “begging the question” makes no sense. I’m also not begging the question, since I’m not assuming my position in arguing for it, I’m just arguing for it! That, and arguing that your view isn’t a default, privileged one, and you aren’t entitled to dismiss others for failing to make the case for their views on consciousness every time they discuss ideas downstream of those views.

    [I think we all know exactly what we mean by consciousness. Again, it has to do with qualia...]

    No, sorry, I have no idea what people who take qualia seriously are talking about. There isn’t anything, in addition to the articulable, functional aspects of an experience, that exists independently of it, and the apparent sense that there is such a thing is, I think, no more than a cognitive illusion. That being said, that’s *not* what I mean by consciousness, and I’m denying what you’re talking about is even a real thing.

    Sugar is real. Qualia of the sort suggested here aren’t, any more than “life” is some additional phenomena beyond specifiable subcomponents that could be realized in a range of physical substrates that aren’t logically dependent (even if they were in practice) on carbon.

    [Yes, it does, by definition. And the analogy is not good, since jumping is an activity, not a thing.]

    No, it doesn’t. No offense Massimo, but you must be extraordinarily confused if you think the moment someone thinks some things are different from matter they are a dualist. I said it was different. Not that it was a different kind of stuff. I can’t be a dualist if I’m denying consciousness or the “mind” is stuff at all! A dualist thinks that there are two fundamentally distinct kinds of things: I don’t think this. You’re not getting it: I’m denying that consciousness/mind is a “thing” at all! That doesn’t make the analogy a bad one, it’s precisely the point of the analogy!

    ReplyDelete
  30. I don't understand how anyone can limit conscious output to signals from a process of computation.

    If you are a human-being then the conscious process includes feelings and emotions (both as input and output). The quality of feelings and emotions are certainly dependent on the physical substrate in which they are situated.

    Any definition of consciousness that doesn't account for 'what it feels like' would seem to me to be something very difficult for any human being to form a conception of. It certainly would not resemble the consciousness we experience, and I am wondering why we would want to call it 'consciousness'.

    ReplyDelete
  31. Sorry my first comment was off the main topic of the post; it was in response to the recent comments.

    Since this is my first series of comments I wanted to thank Massimo for the interesting and informative blog. I think it serves a very useful purpose, and as a long time lurker it has definitely widened my knowledge base :)

    ReplyDelete
  32. Seth,

    much appreciated!

    Baron,

    > Massimo has informed me that nothing is supposed to think it has a purpose. <

    No, Massimo simply informed you that not everything has a purpose, but you chose to willfully misinterpret what Massimo said, as usual.

    Tarn,

    > Except he *is* (at least fairly close to) what you call an "ultra-reductionist". He maintains that biology *is* physics. <

    Well, there goes another of his mistakes... ;-) Of course that claim can be interpreted in two ways: metaphysically or epistemically. Metaphysically I think the best one can do is stay agnostic. Yes, reductionist approaches in science have worked nicely, but there is also good reason not to exclude emergent properties from the picture. Epistemically that position is just silly, and discussions of morality *have* to deal with the epistemic issue, unless Yudkowsky is able to produce a quantum mechanical theory of human decision making.

    > I'm not entirely sure what you mean by numbers having 'essence', but depending on what exactly you're thinking of, I *think* he would contend that either number don't or morality does. <

    Essence is probably the wrong word here. I was referring to the ontological status of the entities in question. I think a reasonable case can be made for a mind-independent existence of mathematical objects (Platonism), but not so for moral “facts.” Even if one doesn’t buy into mathematical Platonism, it still seems clear that mathematical and moral reasoning are different (though, of course, both employ logic).

    Ian,

    hey, no fair! I thought I left you the last word! Ok, if you insist:

    > Thus, in the same way as the "output" of photosynthesis is sugar, the ultimate "output" of a mind is nervous signals telling the body what to do, the lips how to move <

    Nope. That’s the purpose of the truly (likely) computational parts of the brain, those that don’t have to do with consciousness and qualia. As I said above, the relevant output here is qualia.

    > those nervous impulses could be sent by a digital computer with input/output hardware. In principle. That is because the output of a mind is signals, not substances <

    Again, nope, the relevant output is a particular type of first person sensation. Unless you think qualia aren’t of the physical world...

    > You may reply that my account is missing qualia, or the phenomenal aspect of consciousness. This is true; in focusing on "outputs" I have not addressed whether a machine with indistinguishable input/output behaviour to a biological brain will experience the redness of red. <

    That is, you have conveniently avoided the crucial point of the discussion...

    ReplyDelete
    Replies
    1. >hey, no fair! I thought I left you the last word!

      Only on our previous topic, haha!

      >The storm had now definitely abated, and what thunder there was now grumbled over more distant hills, like a man saying "And another thing…" twenty minutes after admitting he's lost the argument.

      Delete
    2. >Again, nope, the relevant output is a particular type of first person sensation.

      First person sensation is certainly not relevant for biology (which we are supposed to be taking seriously). All a biological organism needs from its mind in order to survive and thrive is input/output behaviour; that'll do the job.

      Your position seems to imply that it would be possible for life forms to evolve on another planet with brains (made of silicon, say) that controlled their behaviour, and were even able to reflect on their own cognition, but without any attendant qualia.

      I admit I can't prove that this is logically impossible, but doesn't it strike you as... well, sounding wrong?

      Delete
  33. Lance,

    with all due respect, man, you’ve so far written the equivalent of a long blog post via four separate comments, and all of it with precious little to do with the topic at hand (not to mention one that I have covered extensively at Rationally Speaking). Nonetheless:

    > Prior to planes and other synthetic means of flight the only examples we had of directed flight were organisms. That wasn’t a good reason to think flight was a biological phenomenon. <

    You keep missing the point. I *never* said that it is impossible to have consciousness outside of a biological system. I only said that as of now the only examples we have are biological and, moreover, that it doesn’t make much sense to think that substrates don’t matter. None of this contradicts your flight analogy at all.

    > all I need is a machine that can produce sugar with light, water, CO2…If it’s got the same inputs and outputs, I’m happy to call that “photosynthesis” <

    Naturally. But now photosynthesis isn’t just a matter of computation anymore, is it? And, again, not *all* materials will do. Seriously, I don’t understand why you guys have such a hard time with such an obvious and pretty darn mild claim.

    > no. I don’t “take the biology seriously”. It would be ludicrous to think only living things could convert light, CO2 and water into sugar and waste products. <

    Yes, it would, and I never did. But I think it would be ludicrous to say that substrate doesn’t matter, which is what strong AI supporters and people at LW are apparently committed to.

    > Qualia: I deny there’s any such phenomena as that to be accounted for <

    Well, forgive me, but that’s just silly. It amounts to denying the problem because one has no idea of how to solve it. Be my guest, but I’m not interested in denialism.

    > The folks at LessWrong are taking on shared assumptions and arguing about further implications that follow. <

    You may not have noticed, but I’m not a member of that club, so I’m free not to take on those shared assumptions, and instead to analyze them, criticize them, and if necessary reject them.

    ReplyDelete
    Replies
    1. I honestly don't understand the conflation of Strong AI and mind uploading. I'm interested in the possibilities and enthusiastic about the creation of AGI, but I also think ideas like uploading and substrate independence are ludicrous. Can you explain why you think there is a link between the latter and the former? I know this is a digression from the topic of your post, but I think it's important nonetheless.

      Delete
  34. Lance (more...),


    > you aren’t entitled to dismiss others for failing to make the case for their views on consciousness every time they discuss ideas downstream of those views <

    First off, remember that this has little to do with my criticisms of Yudkowsky in this post. Second, my charge of begging the question was very specific: one wants to argue for strong AI, but *assumes* that the most difficult and interesting aspects of human mental life are only a matter of computation (i.e., they are substrate independent). Then one concludes that strong AI is a real possibility, indeed essentially inevitable (as in the “Singularity”). *That* sequence amounts to massively begging the question, the question being precisely in what sense and to what extent consciousness is a matter of computation (in a sense of “computation” that is not so broad as to encompass pretty much everything, of course).

    > I have no idea what people who take qualia seriously are talking about. <

    You must be a Chalmerian zombie then. Who knew? We found empirical evidence of his intuitions after all!

    > no more than a cognitive illusion. That being said, that’s *not* what I mean by consciousness, and I’m denying what you’re talking about is even a real thing. <

    I am getting soooo tired of people dismissing what they don’t have the means to explain as being an “illusion” that I’m seriously considering a petition to ban the word from the internet. No, not seriously, but still. As for consciousness, what exactly is it you *do* mean by that term? You have published four long comments about it and I haven’t gotten a clue.

    > any more than “life” is some additional phenomena beyond specifiable subcomponents that could be realized in a range of physical substrates <

    Glad you brought that up, since life is a pretty similar problem here. It too is not substrate independent (as you yourself admit), just like consciousness.

    > you must be extraordinarily confused if you think the moment someone thinks some things are different from matter they are a dualist. <

    Surely you realize that there is more than one type of dualism, yes? I never accused you of Cartesian, substance-dualism. But you (and Ian, and the other LW fans) are the one who insists that the brain does something that is in fact independent of the particular physical make up of that brain.

    > A dualist thinks that there are two fundamentally distinct kinds of things: I don’t think this. You’re not getting it: I’m denying that consciousness/mind is a “thing” at all! <

    I get it, you are a property dualist.

    ReplyDelete
    Replies
    1. Hi Massimo, thanks for the reply and my apologies again for my verbosity. Since I'm constrained for space, I can hardly explain my views in any detail. Nonetheless, I think consciousness is, roughly speaking, captured by Jack & Robbins' work on the “phenomenal stance”. We have distinct cognitive mechanisms, including the physical stance and the intentional stance, but also the phenomenal stance, which is what I think “consciousness” is. Essentially, I think we have two different mechanisms for inferring mental states, and the sense that consciousness is something that must be accounted for is the product of ontological confusions stemming from our projecting the distinctions our minds are naturally inclined to make onto the world itself. Consciousness is thus a problem of human psychology: how our minds conceptualize the world, not a problem of figuring out what sorts of things are actually out there in the world.

      I think our brains are producing different “maps” of the world, some of which interpret the world as made up of physical things, and some of which project mental qualities onto things in the world. What gives rise to the explanatory gap is the sense that no physical account of things could possibly also explain the mental qualities of those things. And perhaps it can’t, but that’s a limitation on how our brains work, not an ontological problem. I think the hard problem of consciousness will ultimately be resolved by dissolution – it is the product of wonky human software, and nothing more.

      [Surely you realize that there is more than one type of dualism, yes? I never accused you of Cartesian, substance-dualism.]

      Then tell me: what type of dualist am I? Am I a “linguistic dualist” because some of the nouns I use refer to specific objects, and others refer to abstractions of objects, like “war” and “party”? The way I divide things up conceptually says nothing about my assumptions about the underlying ontology of the world. I don’t know why you think people’s division of concepts reflects ontological divisions.

      [But you (and Ian, and the other LW fans) are the one who insists that the brain does something that is in fact independent of the particular physical make up of that brain.]

      Because I think consciousness is something that brains can do, and other things could (in principle) do. I think a triangle-shaped object is also substrate-independent. Any matter that could be arranged triangle-wise would be a triangle. This does *not* entail that any kind of matter *actually could* be arranged triangle-wise. It could be that certain elements decay too quickly to be arranged in this way, but that isn’t a counterexample to the position. If x is substrate-dependent, I mean that some feature of the substrate is a necessary condition for x. I don’t think consciousness is the sort of thing that could even in principle be substrate-dependent.

      [I get it, you are a property dualist.]

      No, I'm not. The divisions I make between mental and nonmental stuff are a matter of pragmatics, not ontology. I don't think we need mental properties to adequately account for things. I use mental terminology because it's useful, but I'm ultimately a reductionist and an eliminativist. I think in principle everything can be described in terms of physics, including whatever consciousness is supposed to be. Any apparent necessity to assign other predicates to the objects in the world is the result of limitations in how our minds presently work, and says nothing about ontology.

      You're confusing conceptual distinctions with ontological distinctions. The way I’m using the word “consciousness” is similar to how I use a concept like “war” or “the 18th century”. Believing in these sorts of things doesn’t make you a dualist. Whatever else might be wrong with the views expressed on LessWrong, they aren’t dualists, and even if some of them are, the notion of uploading doesn’t depend on dualism any more than the ability to transfer my math homework from paper onto a TI-83 requires dualism.

      Delete
  35. @Massimo, re > Massimo has informed me that nothing is supposed to think it has a purpose. < and "No, Massimo simply informed you that not everything has a purpose, but you chose to willfully misinterpret what Massimo said, as usual."

    Baloney. You've never answered any question concerning something's purpose except to deny that there was one, and in addition you've never explained why, if there ever was a thing with a purpose, this thing in question wasn't it.
    And you seem to see no difference between things that have a purpose and those that may merely serve one.
    You avoid, in fact, any use of the word like the plague. For a professional philosopher, this lack of willingness to discuss that hugely important subject in any depth at all is astonishing, and that you can't or won't see this is even more astonishing.

    ReplyDelete
    Replies
    1. Whatever. And I assure you that's my professional opinion.

      Delete
  36. Massimo,

    Yes,

    >morality isn’t a matter of logic “all the way down,” because it has to start with some axioms, some brute facts… <

    But then you say:

    >Those facts don’t come from physics … they come from biology <

    Taking those brute facts to be biological is arguably stopping one level short of the most revealing level of causation.

    What if we did not stop looking at the level of our biology, but identified the “brute facts” that selected for our moral biology (empathy, guilt, and so forth) and past and present cultural moral codes?

    In our physical reality (yes, these brute facts are in our physics – and our mathematics), cooperation often produces more benefits than individual effort. Cooperation can be exploited. In the short term and sometimes even in the long term, exploiters get more benefits from exploitation than cooperation. Exploitation destroys cooperation. This defines the cross-species universal dilemma of how to obtain the synergistic benefits of cooperation without being exploited - the universal cooperation-exploitation dilemma.

    Based on explanatory power, consistency with known facts, and so forth, solutions to the cooperation-exploitation dilemma are the brute facts that have selected for our moral biology and cultural moral codes. That is, our moral biology and cultural moral codes are most usefully understood as necessarily fallible biological and cultural heuristics, selected for by the increased benefits of cooperation in groups that they produce.

    Of course, the above is only claimed to describe what morality ‘is’ as an evolutionary adaptation. I expect there will always be moral philosophers vigorously arguing that moral codes ‘ought’ to be based on something else entirely.
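
    As a crude illustration of the dilemma described above (the payoff numbers are arbitrary; this is just the familiar prisoner's-dilemma structure), cooperation beats solitary effort, but an exploiter does even better against a cooperator, which is what undermines cooperation:

# A toy payoff table for the cooperation-exploitation dilemma (numbers are arbitrary).
# Each entry maps (my_move, other_move) to my payoff.
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # synergy: both do better than working alone
    ("cooperate", "exploit"):   0,  # the lone cooperator is exploited
    ("exploit",   "cooperate"): 5,  # the exploiter gains the most in the short term
    ("exploit",   "exploit"):   1,  # cooperation collapses and everyone does worse
}
WORK_ALONE = 2  # baseline payoff of purely individual effort

print("mutual cooperation beats working alone:",
      PAYOFF[("cooperate", "cooperate")] > WORK_ALONE)                        # True
print("exploiting a cooperator beats cooperating:",
      PAYOFF[("exploit", "cooperate")] > PAYOFF[("cooperate", "cooperate")])  # True
print("mutual exploitation is worse than mutual cooperation:",
      PAYOFF[("exploit", "exploit")] < PAYOFF[("cooperate", "cooperate")])    # True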

    ReplyDelete
  37. A lot of armchair science is going on here. It's one thing to do armchair philosophy, but science?

    Look, whether we can create an artificial human consciousness (which would enable mind uploading), and whether we can create a conscious AI, are two different questions. In the latter case, all we need is just some machine to say "I think therefore I am". It doesn't really have to think exactly like a human (an AI that thinks exactly like a human would be a bad idea anyway, since it'd easily be corrupted by power and act in its own self-interest). I think Massimo would agree that this latter case is not impossible (at least more probable than the first).

    Whether we can create a human artificial consciousness is a different question. I see where Massimo's coming from. The only conscious beings we've seen are biological ones. We haven't yet seen conscious rocks. So it can be taken as evidence against substrate neutrality of consciousness.

    I'm somewhat agnostic about this, but I think that the arguments Lesswrongers make for substrate neutrality of consciousness are sound. At the quantum level, the concept of matter breaks down. You're not the same person you were just a nanosecond ago. None of the particles are in the same place. The amplitude distribution has changed. I don't know where the feeling of personal continuity comes from. But I think whatever it is, it has very little to do with what type of particles you are made of.

    I also think that if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.

    ReplyDelete
  38. Louis,

    > I honestly don't understand the conflation of Strong AI and mind uploading. ... Can you explain why you think there is a link between the latter and the former? <

    Good question, thank you for pointing that out. As it should be clear by now, my objection to the idea of mind uploading is that it is based on a very strong version of the computational theory of mind, which in turn does not take seriously the fact that - to the best of our knowledge - consciousness is a biological process. So mind uploading rests on a form of process dualism.

    The same can be said of strong AI IF by that term one means a computation-only attempt to replicate human-like consciousness inside a computer (which was, indeed, the stated goal of early strong AI supporters).

    If, however, one is asking whether it is possible in principle to artificially replicate human consciousness, then the answer has to be yes (again, in principle, whether or not we'll ever be able to actually do it). That's because I don't believe there is anything magical about human consciousness.

    The further question, then, is what the most promising avenues are for such an artificial consciousness to emerge. I would say that it would have to be embodied, and I am also betting that the range of materials that could be used is limited. I am *not* making the strong claim that only biological materials would do, but it is a distinct possibility.

    It is here that I see a good analogy with the problem of life itself. There has been speculation for a long time about the possibility of alternative chemistries for life, say one based on silicon instead of carbon. But there are good arguments that silicon has a much more restricted range of reactivity than carbon, so that a silicon-based life form might not be able to get off the ground, or if it did it would have limited capabilities. Certainly a biochemist would tell you that a lot of other elements won't do; they simply don't have the right properties. I think the same goes for non-human artificial consciousness.

    I hope this helps!

    ReplyDelete
    Replies
    1. Thank you Massimo, I entirely agree with your position too. If the goal of Strong AI is to build artificial consciousness on top of strongly intelligent systems, then I'll have to divorce myself from that idea.

      I always thought strong AI was about building problem-solving agents at the human level and beyond.

      I certainly understand why the substrate-independent-minds gang can be annoying, but they don't represent all LessWrongers, if I can be considered a LWer.

      Delete
    2. I also agree with Massimo's position, both in regard to the grounding of morality and in regard to the question of the substrate independence of anything resembling human consciousness.

      I like Terrence Deacon's conception of nested systems that mutually constrain each other to enable self-repair or self-regulation in response to environmental demands. Stable and enduring biological systems appear to utilize this negative feedback process. The ability of these systems to unfold into progressive or adaptive complex solutions seems to me to be related to the continuous nature of the substrates in which they are situated.

      I think the medium is part of what constrains the conscious process and thus cannot be separated from it.

      I'm just a general interested layman so am likely to be either stating the obvious or missing the obvious.

      Delete
    3. Hey Seth, I'd say you're on the ball. Any recommended reading from Terrence Deacon?

      Delete
  39. Paul,

    > Clippy can perfectly well agree with us that it is bad to kill people to make paperclips. <

    I don’t think it can. My point is that “good” or “bad” are not logical concepts, they are part of the background conditions of moral reasoning. If we agree, say, that whatever augments human flourishing, or whatever increases people’s happiness and decreases their pain, etc. is good *then* certain conclusions and positions logically follow from such premises. If Clippy accepts the premises then he may agree with Yudkowsky, but it (Clippy) could simply deny the premises and be done with it (in fact, it very likely will, since concern for human well being isn’t part of its clippy nature).

    And yes, I agree that logic is abstract and not motivating. But that’s precisely my argument: morality is in part about motivations, so if you take those out (“logic all the way down”) you destroyed the concept.

    Ian,

    > First person sensation is certainly not relevant for biology (which we are supposed to be taking seriously). All a biological organism needs from its mind in order to survive and thrive is input/output behaviour; that'll do the job <

    That goes to the issue of the adaptive meaning of qualia. I don’t have an answer, but I don’t think that’s relevant. Either qualia evolved for adaptive reasons or they didn’t. Either way, they are not an “illusion,” they are a biological feature of certain organisms, so they need to be explained by any theory of consciousness (as opposed to “eliminated”).

    > Your position seems to imply that it would be possible for life forms to evolve on another planet with brains (made of silicon, say) that controlled their behaviour, and were even able to reflect on their own cognition, but without any attendant qualia. <

    I have no idea how you got that implication. You don’t need to invoke other planets or silicon life forms; right here on earth you’ve got a good number of species that probably have brains too small to produce first person sensations. Ants come to mind.

    ReplyDelete
    Replies
    1. >Either way, they are not an “illusion,” they are a biological feature of certain organisms, so they need to be explained by any theory of consciousness (as opposed to “eliminated”).

      For what it's worth, I don't exactly think qualia are an illusion. I think the *concept* of qualia may turn out to be simply confused (bla bla Wittgenstein, bla bla bewitchment of language), but that is not the same thing as eliminativism.

      >...right here on earth you’ve got a good number of species that probably have brains too small to produce first person sensations. Ants come to mind.

      That's the case I meant to rule out by specifying "able to reflect on their own cognition" (a working definition of consciousness).

      >I have no idea how you got that implication.

      Well, it seems to be an implication of your position that a "brain" (of whatever material and origin you like) could in principle do every single thing that the human brain does for the physical organism (because all of that is mere input/output "signals"), and yet lack qualia.

      Whereas my position is that anything that quacks like a duck, for functionally similar reasons to a duck (i.e., it's not just a tape recorder), necessarily has duck qualia. I so wish I could prove this. :)

      Delete
  40. Great discussion.

    I think there's another way to consider the question of the relationship between morality and rationality.

    Rather than ask "are moral questions reducible to logic" or "constrained by logic" or "can (should/must) we use logic to resolve moral questions," we can instead use our existing knowledge of each to better inform our understanding of the other. We can put them into an associative/analogous relationship, rather than making one subservient/reducible to the other.

    For example, in Reasons and Persons, Derek Parfit discusses ways of acting which are not, according to any strict criteria, morally correct for the individual, but which leave us *all* better off when all individuals follow them.

    E.g.: I choose to save my child's life and let 10 other people die; my instinctual love for my child overwhelms any obligation to consider the overall good. I did not improve the overall good with my action, BUT it *does* improve the overall good to live in a world in which people care deeply for their children and have a powerful instinctual love for them that overwhelms other considerations. We are all better off living in a world like this, rather than a world where people act in accordance with strict criteria regarding the overall good.

    It's important to keep in mind that we continue to think of the action of the individual here as morally wrong: in the individual case we would genuinely prefer him to save the 10 people, and we think he is morally wrong not to do it.

    And yet we are all better off living in a world where people consistently make this error.

    This is a different kind of golden rule, based on the idea of emergent effects rather than universal applicability.

    Can we apply the same idea to rational thought? Should we prefer to *think* in a way that maximizes the overall truth in a similar fashion? I.e., should we, as individuals, each be a Bayesian system calculating as accurately as we can the certainty of the truth statements we encounter? Or should we think of ourselves as agents in an overall Bayesian system? Should we think in a way such that, when everyone thinks in this way, we maximize the overall correctness?

    This might entail being *wrong*, making a mistake according to strict rational/logical criteria, but a mistake such that, when people tend to make it, we are, overall, more correct, smarter, as a whole.

    Consider the difference between being a dispassionate calculator of uncertain truth (Judge) vs. being a passionate defender of one particular idea (Advocate). It's possible that we might all be smarter, as a group, when there is a mix of Judges and Advocates, or when we each act as Judge under certain circumstances and Advocates under others.

    You can think of morality as a protocol for collective action in distributed networks. And you can think of a similar protocol for thinking, calculating truth, in a distributed network.

    And maybe there is an obligation to think in this way, despite that we might *want* to think in a way that maximizes our own, personal, correctness.
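
    One hedged way to picture "agents in an overall Bayesian system" (this is entirely a toy model of my own, with made-up reliabilities and priors): individually unreliable but independent judgments can be pooled, e.g. by summing log-odds, into a more reliable group estimate:

# Toy model: several agents get independent, noisy signals about a yes/no question.
# Individually each is only modestly reliable; pooling their log-odds does better.
# All the numbers (prior, reliability, number of agents) are arbitrary assumptions.
import math
import random

random.seed(0)
TRUTH = True
RELIABILITY = 0.65    # each agent's signal matches the truth 65% of the time
N_AGENTS = 15
prior_log_odds = 0.0  # a 50/50 prior

def agent_signal():
    return TRUTH if random.random() < RELIABILITY else (not TRUTH)

# Each signal shifts the pooled log-odds by log(P(signal|true) / P(signal|false)).
log_likelihood_ratio = math.log(RELIABILITY / (1.0 - RELIABILITY))
pooled = prior_log_odds
for _ in range(N_AGENTS):
    pooled += log_likelihood_ratio if agent_signal() else -log_likelihood_ratio

group_confidence = 1.0 / (1.0 + math.exp(-pooled))
print("single agent reliability:", RELIABILITY)
print("pooled group confidence that the claim is true:", round(group_confidence, 3))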

    ReplyDelete
    Replies
    1. I agree strongly that the best way to conceptualize morality is within the framework of nested relationships. Ideally, I think a good moral approach will be mutually beneficial to stability and flourishing at all scales (individual, family, group, society, ...). I think the necessity of mutual constraint helps inform this approach.

      The ability to apply this concept in cases where there are apparent conflicts is of course going to be fuzzy. I think it is a good grounding concept, however, one that encourages an ever-expanding awareness of how things are connected.

      Delete
  42. Massimo,

    >I don’t think it can. My point is that “good” or “bad” are not logical concepts, they are part of the background conditions of moral reasoning. If we agree, say, that whatever augments human flourishing, or whatever increases people’s happiness and decreases their pain, etc. is good *then* certain conclusions and positions logically follow from such premises. If Clippy accepts the premises then he may agree with Yudkowsky, but it (Clippy) could simply deny the premises and be done with it (in fact, it very likely will, since concern for human well being isn’t part of its clippy nature).

    And yes, I agree that logic is abstract and not motivating. But that’s precisely my argument: morality is in part about motivations, so if you take those out (“logic all the way down”) you destroyed the concept. <

    Are you assuming a kind of motivational internalism somehow built into the meaning of moral terms?

    That would seem to have Clippy disagreeing with Yudkowsky about what's good and what's bad. But that raises a question of who's correct.

    For instance, let's say that aliens evolve on a different planet with a different set of motivations; what happens if they disagree with humans on whether an invasion of Earth is good or bad, and their minds and intuitions are different from those of humans?

    If I get Yudkowsky's position, Clippy might say, after studying humans (say, to see whether they can help it make more clips more quickly, or they are a problem): "Okay, humans care about this thing they call 'goodness', and it turns out that goodness is such that humans > paperclips. However, I do not care about goodness. I care about clipperness, and clipperness is such that paperclips > humans. So, I'll go make more clips, killing those pesky humans who get in the way."
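
    To make the "can compute the function without caring about it" point concrete, here is a minimal, purely illustrative sketch in Python; the outcome labels and numbers are made up for the example and nothing more:

        OUTCOMES = ["spare the humans", "convert the humans into paperclips"]

        def goodness(outcome):
            # The valuation humans care about: humans > paperclips.
            return {"spare the humans": 1.0, "convert the humans into paperclips": -1.0}[outcome]

        def clipperness(outcome):
            # The valuation Clippy cares about: paperclips > humans.
            return {"spare the humans": -1.0, "convert the humans into paperclips": 1.0}[outcome]

        # Clippy can evaluate 'goodness' correctly...
        print(max(OUTCOMES, key=goodness))       # spare the humans
        # ...but it acts on 'clipperness'.
        print(max(OUTCOMES, key=clipperness))    # convert the humans into paperclips

    Both functions are computable by either party; the difference lies only in which one each of them plugs into its decisions.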

  43. Ian,

    > it seems to be an implication of your position that a "brain" (of whatever material and origin you like) could in principle do every single thing that the human brain does for the physical organism (because all of that is mere input/output "signals"), and yet lack qualia. <

    I disagree: my position is more restrictive, not more permissive, than yours. For me, in order to have qualia, a “brain” has *both* to have certain functional characteristics *and* to be made of certain materials (though, again, not necessarily carbon).

    > necessarily has duck qualia. I so wish I could prove this. :) <

    Yeah, that’s the holy grail of functionalism, and you are right, there is no proof of it. Indeed, the bar for said proof, empirically speaking is pretty high. It wouldn’t be enough to, say, make a duck out of silicon and show that it has carbon-duck type qualia. You would have to show that one could produce duck-like qualia *regardless* of the materials used. Good luck.

    Angra,

    > If I get Yudkowsky's position, Clippy might say, after studying humans (say, to see whether they can help it make more clips more quickly, or they are a problem): "Okay, humans care about this thing they call 'goodness', and it turns out that goodness is such that humans > paperclips. However, I do not care about goodness. I care about clipperness, and clipperness is such that paperclips > humans. So, I'll go make more clips, killing those pesky humans who get in the way." <

    Precisely. Which means that morality has to include something other than pure logic, which as far as I can tell Yudkowsky does not admit in the post I commented on (otherwise, again, it wouldn’t be logic “all the way down”).

    Replies
    1. >Indeed, the bar for said proof, empirically speaking is pretty high.

      Isn't it empirically impossible to prove? There is no disproof of solipsism even about (other) humans.

      >It wouldn’t be enough to, say, make a duck out of silicon and show that it has carbon-duck type qualia. You would have to show that one could produce duck-like qualia *regardless* of the materials used.

      Just to clarify: I would never claim that one can make a functioning conscious mind from ANY material. Sandstone, for example, is not going to work.

      All I claim is that IF a "brain" can be made that does the same things for the body it's attached to as our brains do (crudely, has the right input/output behaviour for the right functional reasons), then it's necessarily phenomenally conscious.

  44. >If I get Yudkowsky's position, Clippy might say, after studying humans (say, to see whether they can help it make more clips more quickly, or they are a problem): "Okay, humans care about this thing they call 'goodness', and it turns out that goodness is such that humans > paperclips. However, I do not care about goodness. I care about clipperness, and clipperness is such that paperclips > humans. So, I'll go make more clips, killing those pesky humans who get in the way."<

    So the humans adopt a moral stance appropriate for humans based on what they 'care' about. Aliens adopt their morality in relation to what they 'care' about. Logic cannot discriminate whose moral stance is correct. Each stance may be appropriate to the holder of the stance in isolation.

    If, however, there were yet another alien species that could point to a solution that would allow all the moral reference frames to flourish together, accounting for all cares beyond what the prior moral stances allowed, this would seem logically superior.

    Replies
    1. Seth_blog

      >So the humans adopt a moral stance appropriate for humans based on what they 'care' about. Aliens adopt their morality in relation to what they 'care' about. Logic cannot discriminate whose moral stance is correct. Each stance may be appropriate to the holder of the stance in isolation. <
      My understanding of Yudkowsky's position is that only humans or aliens with relevantly similar minds care about maximizing goodness, whereas Clippy or other aliens wouldn't be adopting a moral stance.

      Personally, I'm not sure about some of his points about goodness and morality.

      For instance, he agrees that "surely one shouldn't praise the clippyflurphers rather than the just"; that is true of humans, and if Yudkowsky is talking about humans, I agree. But if that 'one' encompasses Clippy, I do not know. Maybe Clippy would be doing something immoral by praising clippyflurphers; or maybe he's simply not a moral agent at all (i.e., Clippy's actions are not morally good, morally bad, or morally anything; he might bring about bad results, but that's not morally bad behavior, just as a crocodile might bring about bad results without behaving immorally).

      Generally, I get the impression that he might need to clarify further how he distinguishes between goodness and moral goodness, badness and moral badness, etc.

      >If, however, there were yet another alien species that could point to a solution that would allow all the moral reference frames to flourish together, accounting for all cares beyond what the prior moral stances allowed, this would seem logically superior. <
      I'm not sure what you mean by 'logically superior'.

    2. >So the humans adopt a moral stance appropriate for humans based on what they 'care' about. Aliens adopt their morality in relation to what they 'care' about. Logic cannot discriminate whose moral stance is correct. Each stance may be appropriate to the holder of the stance in isolation.

      Yes, I don't think this necessarily makes different species into quite the moral islands that Yudkowsky thinks. Clippy wants to make paperclips and I don't, but a theory that I can almost endorse (preference utilitarianism) still regards Clippy as an agent worthy of moral concern by virtue of its having preferences.

      We can picture some action that would count as "torturing" Clippy (melting paperclips, say), and I would be willing to count that action as morally wrong, all other things being equal.

      Of course, all things are not equal if Clippy wants to convert earth into paperclips, but there is not much more paradox here than if one human wants to use oil to make plastic for a chair and another wants to burn it. It's just a garden variety conflict between two people's interests.

    3. Angra Main Yu
      >I'm not sure what you mean by 'logically superior'. <

      I am not a logician, but I was alluding to a seemingly logical process of moral reasoning that takes into account 'concerns' (or maybe preferences is a better term) at multiple levels. I think many of our concerns occur in part due to narrowly perceived constraints, but often we fail to see the ways in which the constraints may actually afford long-term benefit of some type.

      So it is my view that agents capable of adopting moral stances do so more logically when they critically consider their concerns holistically rather than in isolation. The biological cycles in nature that survive and flourish seem to recommend adopting this kind of logic.

      Individuals in a society enjoy more freedom when the society enforces a degree of constraint on their actions. Of course, an overly constraining government puts its own existence in peril. I think it is within these considerations that morality should be formed.

      I may be applying the term 'logic' inappropriately, and I am sure this is not a novel idea.

    4. Ian

      Thanks, I appreciate your comment. I was just trying to get at a potential way of seeing through what could be unnecessary conflicts.

    5. Ian & Angra Main Yu

      To follow up, I think our natural bias is to concern ourselves with whatever causes individual discomfort of some kind. If we take a narrow view, we have a preference to eliminate the cause; but the consequences of that approach may harm the individual in the long run.

      Take H. pylori, the ulcer-causing bacterium. This article in the New Yorker does a nice job of describing the potential problems that come when we remove one piece of a system without a good understanding of the potential system-wide consequences.

      http://www.newyorker.com/reporting/2012/10/22/121022fa_fact_specter

      I'm not suggesting in any way that all natural systems are symbiotic, or that they should be used as a model for morality. But often I think the moral conflicts we debate are unnecessary, and have mutually beneficial solutions if each party (or the one with agency) in the debate could be receptive to the plus side of what makes them uncomfortable.

  45. Massimo,

    >Precisely. Which means that morality has to include something other than pure logic, which as far as I can tell Yudkowsky does not admit in the post I commented on (otherwise, again, it wouldn’t be logic “all the way down”). <
    Why does it mean that?

    I don't understand why you conclude from that that Yudkowsky's claim that morality is logic all the way down, as he understands that expression, is false.

    But perhaps the issue here is how to interpret 'logic all the way down'. What do you think he means by that?

    It's obvious that you need something other than logic to be motivated to bring something about. But I do not see that Yudkowsky claims otherwise.

  46. All,

    thanks for your feedback and patience, but I think we are now reaching diminishing returns on these topics, so this will be my last set of comments:

    Ian,

    > Isn't it empirically impossible to prove? There is no disproof of solipsism even about (other) humans. <

    It has nothing to do with solipsism, and you are the one making the really broad claim, so it seems fair that the burden of proof is on you, my friend.

    > All I claim is that IF a "brain" can be made that does the same things for the body it's attached to as our brains do (crudely, has the right input/output behaviour for the right functional reasons), then it's necessarily phenomenally conscious. <

    That’s much more reasonable than the mind upload crowd, but still, you are stuck with this input/output stuff, which is part and parcel of the computational theory. Of course, if you define input/output loosely (just like you define algorithm), then we probably agree.

    Angra,

    > I don't understand why you conclude from that that Yudkowsky's claim that morality is logic all the way down, as he understands that expression, is false. <

    Because if we have to import information about, say, biology, then we are not talking about only logic anymore, no?

    > how to interpret 'logic all the way down'. What do you think he means by that? <

    If he is talking in English, he means formal manipulation of symbols according to certain rules, which is what logic is.

    > It's obvious that you need something other than logic to be motivated to bring something about. But I do not see that Yudkowsky claims otherwise. <

    No, it’s worse than that: we are not talking about motivations; we are saying that his logic simply doesn’t get started unless one takes on board facts about human biology and culture. Which he doesn’t.

    Replies
    1. >It has nothing to do with solipsism, and you are the one making the really broad claim, so it seems fair that the burden of proof is on you, my friend.

      I think it has everything to do with solipsism. What, in your opinion, would constitute proof (or at least evidence) that some artificial (or alien, or nonhuman animal) mind experiences qualia?

      If I'm going to carry this humongous burden of proof, I'd at least like to know how I may discharge it.

      Another question for you: earlier I used a rough working definition of consciousness as "an ability to reflect on one's own cognition." What I like about that definition is that it suggests how we might expect a conscious mind to differ in outward behaviour from an unconscious one - for example, the unconscious mind doesn't criticize its own thought processes from within, and so would be expected to be more impulsive.

      Do you allow the metaphysical possibility of a "brain" that is self-reflective in this sense and yet not phenomenally conscious?

    2. >thanks for your feedback and patience, but I think we are now reaching diminishing returns on these topics, so this will be my last set of comments:

      Sorry, didn't see this before. It's been a pleasure arguing, as always.

  47. Lance,

    > We have distinct cognitive mechanisms, including the physical stance and the intentional stance, but also the phenomenal stance, which is what I think “consciousness” is. <

    I really don’t understand how you can define a phenomenon in the real world by a “stance,” which is simply someone’s attitude toward said phenomenon. There seems to be a category mistake lurking somewhere in there...

    > Consciousness is thus a problem of human psychology: how our minds conceptualize the world, not a problem in figuring out what sorts of things are actually out there in the world. <

    Not trying to be obtuse, but I have no idea what that means.

    > What gives rise to the explanatory gap is the sense that no physical account of things could possibly also explain the mental qualities of those things. <

    I never said anything of the sort, as I don’t have that gap at all. I just think there is a problem yet to be solved, not one that cannot be solved.

    > I think the hard problem of consciousness will ultimately be resolved by dissolution – it is the product of wonky human software, and nothing more. <

    Forgive me, but that seems to me more of a pious hope than a research program.

    > Then tell me: what type of dualist am I? <

    I told you: property dualist.

    > The way I divide things up conceptually says nothing about my assumptions about the underlying ontology of the world. <

    You can’t have it both ways: this discussion was about what consciousness is, if it is anything. That’s as ontological as it gets.

    > I think consciousness is something that brains can do, and other things could (in principle) do <

    As I said many times, I agree. But not *anything* can do it. And it isn’t just a matter of symbol manipulation, which means that mind uploading goes out the window.

    > I think a triangle-shaped object is also substrate-independent <

    Sure, because you are talking about the abstract idea of triangles. Are we talking about the abstract idea of consciousness? What does that mean?

    > I don’t think consciousness is the sort of thing that could even in principle be substrate-dependent. <

    So you say. I still need to hear an argument in defense of said position.

    > The divisions I make between mental and nonmental stuff is a matter of pragmatics, not ontology. <

    See my comment above about what this discussion is about, according to your own words.

    > I'm ultimately a reductionist and in an eliminativist. <

    I know. I’m neither.

    > I think in principle everything can be described in terms of physics, including whatever consciousness is supposed to be. <

    I hear this “in principle” stuff a lot. I asked Steven Weinberg what he meant by that. Turns out the best he could do was to say something along the lines of “I don’t see any reason why,” which I pointed out to him is an argument from ignorance. That is, a logical fallacy.

    > You're confusing conceptual distinctions with ontological distinctions. The way I’m using the word “consciousness” is similar to how I use a concept like “war” or “the 18th century”. <

    Again, I have no idea what this means. Do you not experience conscious states? Are you finally the vindication of Chalmers’ zombies idea?

    > Believing in these sorts of things doesn’t make you a dualist. <

    No, because they are nothing like consciousness.

    > the notion of uploading doesn’t depend on dualism any more than the ability to transfer my math homework from paper onto a TI-83 requires dualism. <

    Another analogy that makes no sense. Qualia are something you experience, as a physical being. They are nothing like homework, or like a computer program (which can definitely be transferred from one type of hardware to another). But maybe I’m just not getting it. Let me know when you manage to upload your mind and tell me how it feels. I bet it won’t.


  48. Massimo,

    >Because if we have to import information about, say, biology, then we are not talking about only logic anymore, no? <
    I wouldn't know, since I'm not sure what he meant by 'logic all the way down', but in any case, the information that Clippy is importing about humans is not necessary.

    What I said earlier was: "If I get Yudkowsky's position, Clippy might say, after studying humans (say, to see whether they can help it make more clips more quickly, or they are a problem): "Okay, humans care about this thing they call 'goodness', and it turns out that goodness is such that humans > paperclips. However, I do not care about goodness. I care about clipperness, and clipperness is such that paperclips > humans. So, I'll go make more clips, killing those pesky humans who get in the way.""
    But similarly, if I get his position right, someone (not a human, but say another AI that was programmed by humans) could tell Clippy that there is a function 'goodness' that is computed in a certain manner; Clippy could then know that humans > paperclips according to that function, without studying humans as a means to learn what the function in question is.

    But sure, let's say that someone has to study humans as a means of getting what the function is in the first place. Does that mean that it's not 'logic all the way down' in the sense in which Yudkowsky used that expression?
    That is unclear.

    >If he is talking in English, he means formal manipulation of symbols according to certain rules, which is what logic is. <
    Okay, so your take on this is that when Yudkowsky says that morality is logic all the way down, he means that morality is formal manipulation of symbols, and nothing else.

    However (for example), when he talks about fairness, he says "running the physical starting scenario through a logical function that describes what a 'fair' outcome would look like", so he's running physical scenarios through logical functions, which is not simply manipulating symbols formally (for instance, you need to feed it a physical scenario, and accept humans or paperclips as inputs); and a contradiction as plain as claiming that it's only a formal manipulation of symbols should be pretty obvious to him and/or to many of his readers.
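
    To illustrate, here is my own toy rendering in Python (not anything Yudkowsky actually wrote) of what such a 'fairness' function might look like, with the description of the physical scenario supplied as input; the names are made up:

        from fractions import Fraction

        def fair_split(pie, claimants):
            """Given a description of the scenario (one pie, the people who
            found it), return the equal division that 'fairness' picks out."""
            share = Fraction(pie) / len(claimants)
            return {person: share for person in claimants}

        print(fair_split(1, ["Alice", "Bob", "Carol"]))
        # each claimant gets Fraction(1, 3) of the pie

    The function itself is abstract, but it does nothing until it is handed a concrete scenario, which is the sense in which it is not just formal manipulation of symbols.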

    As I mentioned, I've not read many of his posts, so I'm hardly an expert on Yudkowsky's terminology, but I'm not inclined to think that he's saying that.

    Still, if you're right and that's what he means, it would seem trivially true that he's wrong.

  49. Morality is whatever is agreed between people, and it needs to be logically self-consistent and refer to an objective world, or it can be easily criticized. Beyond that, do what you like and pay the price if others strongly disagree.

