Stop me if this seems familiar.
It’s October 11th. You were planning to work out tonight, but now it’s already 8:00 pm. You’re tired from work, and hungry, and suddenly watching a nice quiz show (maybe QI?) seems very appealing. You reason: I can always catch up later. I’ll go for an extra-long run tomorrow. And so you grab a bag of Doritos and start watching.
And you know, it’s funny, but tomorrow... a similar thing happens... and keeps happening until January 1st, when you throw off your wicked ways, turn over a new leaf, and renew your Firm Resolution to work out regularly. Which lasts until February, or April if you have strong willpower. This is the age-old problem of akrasia.
It doesn’t take a genius to see that not working out regularly is a mistake, in the abstract. However, what is not so clear is whether your specific decision on the 11th of October was irrational. Arguably, at that moment, you really did value watching TV more than running - that’s why you watched TV (revealed preferences). And you really *could* catch up later. The gotcha, of course, is that you end up making a similar decision every night, which adds up to the annual result of NOT working out regularly.
I think it’s pretty intuitively obvious that on January 1st, considering the year past, if you didn’t exercise regularly despite your intentions, you made some sort of mistake. But what mistake? Your decision of October 11th didn’t feel like a mistake. It didn’t even feel like a rationalization. At that moment, the TV and junk food were absolutely worth it. And by standard causal decision theory, it wasn’t a mistake.
Causal decision theory is basically equivalent to utility maximization using correct probabilistic reasoning, and is very appealing as a normative model for how to make decisions. Given a set of values, the correct decision at any given time is the one that causes the probability of achieving those values to be maximized.
This model works most of the time. As a fairly trivial example, suppose you are considering whether to play the lottery. Causal decision theory thinks that you should play the lottery if U(lottery) > 0, where U(lottery) = [($ win)*P(win) - ($ loss)*P(loss)]*(utility/money scaling factor). This is entirely sane, and is a good explanation for why playing the lottery is a mistake unless that scaling factor is negative (i.e., unless you like losing money).
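The formula above can be made concrete with a quick calculation. The ticket price, jackpot, and odds below are hypothetical numbers of my own choosing, not from the post:

```python
def lottery_utility(win, p_win, loss, p_loss, scale=1.0):
    """U(lottery) = [($ win)*P(win) - ($ loss)*P(loss)] * scaling factor."""
    return (win * p_win - loss * p_loss) * scale

# Hypothetical lottery: a $2 ticket with a 1-in-10-million shot at $10M.
P_WIN = 1e-7
u = lottery_utility(win=10_000_000, p_win=P_WIN, loss=2, p_loss=1 - P_WIN)
# u comes out negative, so a causal decision theorist declines to play.
```

With any positive utility/money scaling factor the expected value stays negative, which is exactly the theory's explanation for why playing is a mistake.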
In simplified terms, the trouble with causal decision theory is that, on October 11th, the model looks at U(workout today) and U(slack off today), concludes U(workout today) is WAAAY less than U(slack off today), and you open a bag of Doritos and turn on the TV.
This is something of a paradox. How can it be that, considering the year as a whole, workouts are better than slacking, and yet on any individual day of that year, slacking off is better than a workout?
The reason appears to be a well-documented phenomenon called hyperbolic discounting. In essence, humans in general seem to overvalue things happening now, and undervalue things that will happen in the future. For example, getting your oil changed today would be a bit of a pain. But somehow, that bit-of-a-pain-now outweighs the massive pain a few years from now when your car needs major maintenance because you neglected to take care of it. So you don’t change your oil, or at least you have to fight with yourself to do so.
Hyperbolic discounting often expresses itself as a time-dependent preference reversal: exercise-in-the-future is better than chocolate-cake-in-the-future, but chocolate-cake-now is waaay better than exercise-now. Much ink has been spilt over whether, and under what conditions, hyperbolic discounting can be said to be rational, but it appears we are stuck with the psychology behind it in any case.
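The preference reversal falls straight out of the standard hyperbolic discount curve, V = A / (1 + kD), where A is the reward, D the delay, and k a discount rate. The payoff sizes and delays below are illustrative assumptions of mine, not empirical values:

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Perceived present value under hyperbolic discounting: A / (1 + k*D)."""
    return amount / (1 + k * delay)

# Hypothetical payoffs: cake is worth 10 units the moment you eat it;
# exercise pays off 30 units, but only 20 days after the workout.
CAKE, EXERCISE, PAYOFF_LAG = 10, 30, 20

# Viewed from 30 days out, exercise-in-the-future beats cake-in-the-future...
far_cake = hyperbolic_value(CAKE, 30)
far_exercise = hyperbolic_value(EXERCISE, 30 + PAYOFF_LAG)

# ...but viewed from right now, cake-now crushes exercise-now.
now_cake = hyperbolic_value(CAKE, 0)
now_exercise = hyperbolic_value(EXERCISE, PAYOFF_LAG)
```

The steep part of the hyperbola near D = 0 is what does the damage: the nearer reward gets a disproportionate boost the moment it becomes immediate.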
This is depressing. It means that New Year’s resolutions are basically futile. Exercise sounds great on the 1st, in the abstract, but on any given day in October, Doritos and TV are going to seem soooo much better than exercise.
Good news! There is a hack! Beeminder’s slogan is “bringing long-term consequences near.” The first innovation is a bet. You set a goal (say, 250 half-hour workouts per year) and you give Beeminder your credit card number (yes, I use it, no, I haven’t been robbed). If you fail to achieve your goal, you lose a specified amount (in my case, $30).
The second, most important innovation is to make the bet depend on your behaviour NOW rather than on your reaching a distant future goal. 250 half-hour workouts per year sounds great, but realistically, what you’ll do is slack until November, then try to catch up with a bunch of longer workouts. On Beeminder, you set a pace (in this case, ~5 workouts per week), and if you fall below that pace AT ANY TIME, you lose. You can change the pace, but changes will only take effect a week from when the change is made, so you can’t slack off today by changing the pace. You can use it for a lot of things (weight loss, number of pages written in your novel, time volunteered, etc.).
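The pace mechanism boils down to a running comparison against a straight line. Here is a minimal sketch of that check; the function and the exact rule are my own simplification, not Beeminder's actual implementation (which adds safety buffers, rate changes, and more):

```python
WORKOUTS_PER_WEEK = 5  # the pace from the example above

def on_track(total_workouts, days_elapsed, pace_per_week=WORKOUTS_PER_WEEK):
    """You lose the moment your cumulative total falls below the pace line."""
    required = pace_per_week * days_elapsed / 7
    return total_workouts >= required

on_track(39, 56)  # 39 workouts in 8 weeks: below the required 40, so you lose
on_track(40, 56)  # exactly on the 5/week pace, so you're safe
```

The point of checking cumulative totals daily, rather than the annual goal, is that "catch up in November" never arithmetically works: the required line keeps climbing every day you slack.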
I have had limited success with this over the medium term, and with similar techniques before I found Beeminder - one of them involving a Google Docs spreadsheet, a good friend, and some amicable blackmail (I believe Julia has a most amusing anecdote about this sort of technique). It is not a perfect solution to the problem of akrasia, but it really, really helps. The best part is that deciding to exercise on any given day ceases to be a big psychological fight with yourself, in which you have to spend a bunch of willpower. You just know you’ll lose $30 if you don’t do it, and the decision magically becomes simple.
Nevertheless, I find it inelegant to *merely* use a hack to prevent behaviour which is patently irrational. If you have a systematizing tendency like me, you want to figure out the general theory that shows akrasia to be irrational, and then make that theory part of your common sense. Also, there are practical problems with the hack. What if, after you’ve committed, you genuinely reconsider whether the goal is worthwhile?
A more general decision theory that solves this sort of problem is already being put forward by two people (that I know of), for reasons that apparently have nothing directly to do with akrasia. Gary Drescher (in Good & Real - which happens to be the best book EVAR) calls his theory “acausal decision theory,” and Eliezer Yudkowsky calls his “timeless decision theory.” In both cases, the theory is motivated mainly by consideration of the prisoner’s dilemma, Newcomb’s problem, and machine ethics (I do so love these intellectual convergences).
The gist of these decision theories may be summarized as follows (though probably not carefully enough for their authors):
When making a decision, act as if you were deciding the output of similar decisions, by similar decision-makers, in similar circumstances.
This looks promising for explaining:
-Why it is rational to vote in an election, even if your vote has extremely low probability of being individually decisive;
-Why it is a mistake to litter in a public park, even though your individual napkin will not noticeably contribute to the amount of garbage there;
-Why consequentialist ethics has a hard time doing away with prima facie deontological concepts like “honour” and “duty” and “oath” and “right;”
and, last but not least,
-Why you were wrong to get out the Doritos instead of working out, on the night of October 11th.
Now, you may have noticed the words “act as if” in the above description of acausal decision theory. They are the great bone of contention for the theory. Why should we “act as if” something is true, when it’s not?
Intuitively, I think there is probably a good answer to that question, although I don’t know exactly what it looks like yet. But for akrasia, there is no need to “act as if” you were deciding the output of similar decisions by similar decision-makers in similar circumstances - you really, obviously are!
It is a truism about human psychology that acting in a particular way on a particular occasion contributes to a general disposition to act in that way. Being surly to the girl behind the Tim Horton’s counter today makes you more likely to be surly to someone else tomorrow, and to become, in time, a generally surly person. We fall into behavioural ruts all too easily.
What this means for akrasia is that, on October 11th when you are deciding whether to run or slack off, you are NOT just deciding what to do on that particular night. You are:
(a) contributing to a general psychological disposition to not exercise (and to ignore difficult long-term goals);
and more controversially,
(b) implicitly deciding that in broadly analogous situations (i.e., similar levels of motivation), the proper course is to not exercise. In other words, you are deciding on a general policy of slacking off whenever you are at least as disinclined to exercise as you are now.
So my tentative take-home lesson is the following:
Decisions are never solitary, one-off events. A decision about a particular action on a particular day (running or not on the 11th of October, given a certain level of motivation) is simultaneously a decision about a general decision policy (running or not, EVER, given a certain level of motivation).
Always make decisions such that you would find it acceptable if all similarly-placed agents (including your future self) made the same decision.
Realistically, don’t rely on the theoretical considerations above to defeat your own akrasia. If you have a quantifiable goal and you fear you’ll procrastinate on it, just sign up with Beeminder. If the goal is less quantifiable, the Beeminder folks recommend Stickk. Your mileage may vary.
Oh, and I almost forgot. Happy new year, Rationally Speaking readers & bloggers!! Thanks for the warm, stimulating discussion we’ve had and will have!