Continuing our discussion of Platt’s classic paper on “strong inference” and, more broadly, of the difference between soft and hard science: another reason for that difference, mentioned but left unexamined by Platt, is the relative complexity of the subject matter of different scientific disciplines. It seems to me trivially true that particle physics deals with the simplest objects in the entire universe: atoms and their constituents. At the opposite extreme, biology takes on the most complex things known to humanity: organisms made of billions of cells, and ecosystems whose properties are affected by tens of thousands of variables. In the middle we have a range of sciences dealing with the relatively simple (chemistry) or the slightly more complex (astronomy, geology), roughly along a continuum that parallels the popular perception of the divide between hard and soft disciplines. That is, a reasonable argument can be made that physicists have been successful because, so to speak, they had it easy. This is of course by no means an attempt to downplay the spectacular progress of physics or chemistry, just to put it in a more reasonable perspective: if you are studying simple phenomena, are given loads of money to do it, and are able to attract the brightest minds because they think that what you do is really cool, it would be astounding if you had not made dazzling progress!
Perhaps the most convincing piece of evidence in favor of a relationship between the simplicity of the subject matter and the rate of success is provided by molecular biology, and in particular by its recent transition from a chemistry-like discipline to a more obviously biological one. Platt wrote his piece in 1964, merely eleven years after Watson, Crick and Franklin discovered the double helix structure of DNA. Other discoveries followed at a breathtaking pace, including the demonstration of how, from a chemical perspective, DNA replicates itself; the unraveling of the genetic code; the elucidation of many aspects of the intricate molecular machinery of the cell; and so on. But by the 1990s molecular biology began to move into the new phase of genomics, where high-throughput instruments started churning out bewildering amounts of data that had to be treated with statistical methods (one of the hallmarks of “soft” science). While early calls for the funding of the Human Genome Project, for instance, made wildly optimistic claims about scientists soon being able to understand how to make a human being, cure cancer, and so on, we are in fact almost comically far from achieving those goals. The realization is beginning to dawn even on molecular biologists that the golden era of fast and sure progress may be over, and that we are now faced with unwieldy mountains of detail about the biochemistry and physiology of living organisms that are very difficult to make sense of. In other words, we are witnessing the transformation of a hard science into a soft one!
Despite all of the reservations that I detailed above, let us proceed to tackle Platt’s main point: that the difference between hard and soft science is a matter of method, in particular of what he refers to as “strong inference.” Inference, of course, is a general term for arriving at a (tentative) conclusion on the basis of the available evidence concerning a particular problem or subject matter. If we are investigating a crime, for instance, we may infer who committed the murder from an analysis of fingerprints, weapon, motives, circumstances, etc. An inference can be weaker or stronger depending on how much evidence points to a particular conclusion rather than to another, and also on the number of possible alternative solutions (if there are too many competing hypotheses, the evidence may simply not be sufficient to discriminate among them, a situation philosophers call the underdetermination of theories by the data). The term “strong inference” was used by Platt to indicate the following procedure:
1. Formulate a series of alternative hypotheses;
2. Set up a series of “crucial” experiments to test these hypotheses; ideally, each experiment should be able to rule out a particular hypothesis, if the hypothesis is in fact false;
3. Carry out the experiments in as clear-cut a manner as possible (to reduce ambiguities of interpretation of the results);
4. Eliminate the hypotheses that were refuted in step (3) and go back to step (1), repeating the procedure until you are left with the winner.
Or, as Sherlock Holmes famously put it in The Sign of Four, “when you have eliminated the impossible, whatever remains, however improbable, must be the truth.” Sounds simple enough. Why is it, then, that physicists can do it, but ecologists or psychologists can’t get such a simple procedure right?
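To see just how mechanical the procedure is, here is a minimal sketch of the eliminative loop in Python; the hypotheses, the “crucial experiment,” and the result below are entirely made up for illustration and are not drawn from Platt’s paper:

```python
# A purely illustrative sketch of strong inference: each "crucial experiment"
# is modeled as a test that returns False for any hypothesis it rules out.
def strong_inference(hypotheses, crucial_experiments):
    """One cycle of the procedure: prune every hypothesis an experiment refutes."""
    surviving = list(hypotheses)
    for experiment in crucial_experiments:                   # steps (2)-(3)
        surviving = [h for h in surviving if experiment(h)]  # step (4): eliminate
    return surviving  # ideally a single winner; otherwise recycle from step (1)

# Hypothetical usage: a single decisive test settles a two-way question.
candidates = ["DNA has two strands", "DNA has three strands"]
experiments = [lambda h: "three" not in h]                   # stand-in for a crucial experiment
print(strong_inference(candidates, experiments))             # -> ['DNA has two strands']
```

The point of the sketch is simply that the logic amounts to filtering a finite list of mutually exclusive alternatives, which, as we will see, is exactly the situation the hard sciences often enjoy.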
If Platt’s strong inference sounds familiar, it should: it is related to Francis Bacon’s method of induction, and Platt explicitly invokes the British philosopher in his article. The appeal of strong inference is that it is an extremely logical way of doing things: Platt envisions a logical decision tree, similar to those implemented in many computer programs, where each experiment tells us that one branch of the tree (one hypothesis) is to be discarded, until we arrive at the correct solution. For Platt, hard science works because its practitioners are well versed in strong inference, always busy pruning their logical trees; conversely, for some perverse reason, scientists in the soft sciences stubbornly refuse to engage in such a successful practice, and as a consequence waste their careers disseminating bricks of knowledge in their courtyards rather than building fantastical cathedrals of thought. There seems to be something obviously wrong with this picture: it is difficult to imagine that professionally trained scientists would not realize that they are going about their business in an entirely wrong fashion, and, moreover, that the solution is so simple that a high school student could easily understand and implement it. What is going on?
We can get a clue to the answer by examining Platt’s own examples of successful applications of strong inference. From molecular biology, for instance, he mentions the discovery of the double helix structure of DNA, the hereditary material. Watson, Crick, Franklin and others working on the problem (such as twice-Nobel laureate Linus Pauling, who actually came very close to beating the Watson-Crick team to the finishing line) were faced with a limited number of clear-cut alternatives: either DNA was made of two strands (as Watson and Crick thought, and as turned out to be the case) or three (as Pauling erroneously concluded). Even with such a simple choice, there really wasn’t any “crucial experiment” that settled the matter, but Watson and Crick had sufficient quantitative information from a variety of sources (chiefly Franklin’s crystallographic analyses) to eventually determine that the two-strand model was the winner. Another example from Platt’s article comes from high-energy physics, and deals with the question of whether fundamental particles always conserve a particular quantity called “parity.” The answer is yes or no, with no other possibilities, and a small number of experiments rapidly arrived at the solution: parity is not always conserved. Period. What these cases of success in the hard sciences have in common is that they really do lend themselves to a straightforward logical analysis: there is a limited number of options, and they are mutually exclusive. Just as logical trees work very well in classic Aristotelian logic (where the only values that can be attached to a proposition are True or False), so strong inference works well with a certain type of scientific question.
Yet any logician knows very well that the realm of application of Aristotelian logic is rather limited, because many interesting questions do not admit of simple yes/no answers. Accordingly, modern logic has developed a variety of additional methods (for instance, modal logic) to deal with more nuanced situations typical of real-world problems. Similarly, the so-called soft sciences are largely concerned with complex issues that require more sophisticated, but often less clear-cut, approaches; these approaches may be less satisfactory (but more realistic) than strong inference, in that they yield probabilistic answers rather than clear-cut yes-or-no verdicts. Soft science, then, is soft for very good reasons intrinsic to the nature of its objects of study, certainly not because of the intellectual inferiority of its practitioners.
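To make the contrast concrete, here is a minimal sketch (with made-up hypotheses, likelihoods, and observations) of what this kind of probabilistic inference looks like: instead of each experiment eliminating a hypothesis outright, each noisy observation merely shifts the weight of evidence among the surviving candidates.

```python
# Illustrative only: three hypothetical hypotheses, two possible outcomes per
# noisy observation, and Bayes' rule reweighting rather than eliminating.
import numpy as np

hypotheses = ["H1", "H2", "H3"]
posterior = np.array([1/3, 1/3, 1/3])        # start agnostic

# Assumed likelihoods: rows are hypotheses, columns are outcomes A and B.
likelihood = np.array([
    [0.6, 0.4],   # H1 makes outcome A somewhat more likely
    [0.5, 0.5],   # H2 cannot tell the two outcomes apart
    [0.3, 0.7],   # H3 favors outcome B
])

for outcome in [0, 0, 1, 0]:                 # a short run of noisy observations: A, A, B, A
    posterior = posterior * likelihood[:, outcome]
    posterior = posterior / posterior.sum()  # renormalize (Bayes' rule)

for h, p in zip(hypotheses, posterior):
    print(f"{h}: {p:.2f}")                   # no hypothesis is killed off, only reweighted
```

The answer one ends up with is a distribution over hypotheses rather than a verdict, which is precisely the less satisfactory but more realistic situation just described.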
"What these cases of success in the hard sciences have in common is that they really do lend themselves to a straightforward logical analysis: there is a limited number of options, and they are mutually exclusive."
I'm not sure this is really the key. A lot of problems in evolutionary biology (for example) can be reduced to a limited number of options, e.g. is allele A fitter than allele B? I think the difference is often that it is not possible to get a definite yes/no answer: we have to settle for "maybe". Hence the need for statistics: the systems we study are sufficiently complex (and hence stochastic) that we can only get probabilistic answers.
The difference between hard and soft sciences is not in the difficulty of the mathematical models, but there is certainly a big difference in how they are used. In the hard sciences, a model is expected to explain ALL the data down to negligible measurement error. If the line doesn't precisely go through the data points, it isn't publishable, and either the experiment or the model must be improved first. In the soft sciences, we are content if a model explains ANY of the data. The key mathematical concept is the coefficient of determination describing what proportion of the variation in the data we have explained. We statistically test whether this is greater than zero, not whether it is less than one.
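To illustrate the point of the comment above, here is a minimal sketch with simulated data and an arbitrarily chosen effect size: fit a simple regression, compute the coefficient of determination, and test whether it is greater than zero, while noting how far it falls short of one.

```python
# Illustrative only: a weak, noisy linear relationship analyzed the way the
# comment describes -- we test whether R^2 > 0, not whether R^2 is close to 1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)                          # hypothetical predictor
y = 0.3 * x + rng.normal(size=200)                # weak signal buried in noise

fit = stats.linregress(x, y)
r_squared = fit.rvalue ** 2                       # proportion of variance explained

# The usual "soft" question: is the explained fraction significantly above zero?
print(f"R^2 = {r_squared:.2f}, p-value for the null of no relationship = {fit.pvalue:.3g}")

# The "hard" standard would instead require the unexplained fraction to be
# negligible, a bar this model misses by a wide margin.
print(f"unexplained fraction of variance = {1 - r_squared:.2f}")
```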
If hard science truly is distinguishable from soft science, what mensuration yields the difference? How does one measure the claim that hard science has progressed faster than soft science? Can one really claim that the discovery of the Higgs boson would be a "larger" discovery than, say, the discovery of some odd correlation in psychology? How does one measure the size of a discovery?
ReplyDeleteI find these distinctions highly suspect, and I wonder at the motivation for them. Who cares whether physics is harder or softer than biology? Is this some variation on making claims regarding the size of one's penis?
Is painting a better art form than sculpture? Is chocolate ice cream provably superior to strawberry ice cream? Is rock better than rap?
Aren't these all silly questions?
Chris,
ReplyDelete"Who cares whether physics is harder or softer than biology? Is this some variation on making claims regarding the size of one's penis? ... Aren't these all silly questions?"
No. First, because they affect both the funding and social perception of science (which in turn affects the funding...). Second, because there are some genuine epistemological issues here, as explained, for instance, in this paper: Cleland CE: Historical science, experimental science, and the scientific method. Geology 2001, 29:987-990.
I liked Joanna's explanation. An example would be developing a model for alcohol dependence. The most sophisticated models leave room for physiological factors (physical dependence), psychological factors, cultural factors (differences in pattern between, say, Israel, Italy, and the USA), and genetic predispositions (as may be indicated in twin studies, sometimes labelled "familial"). All of these factors explain something, but are not necessarily relevant across the board. The entire phenomenon is probably just that complex. It is even difficult to define. One falls back on diagnostic criteria, and even these have to be split up, with a subsection for physical dependence.
Usually, the various factors outlined above are subsumed under some sort of all-accommodating learning theory. But that seems to me to be the shakiest part of the entire model.
Hmm... if the issue is perception of science, then should we not contest that perception if we consider it misleading? I don't have access to an academic library so I cannot pursue your suggestion. However, I would like to expand a bit on my previous post.
I suggest that science is best understood as a tool for understanding the universe. Different fields address different aspects of the universe. If we use the tool metaphor, then the distinction between hard science and soft science seems absurd, because the tool is defined by its task, not its intrinsic properties. Are sharp tools such as saws superior to dull tools such as hammers? Obviously not. Milling machines are more complicated than screwdrivers, but you can't unscrew things with a milling machine. It is silly to assign superiority to one tool, because tools are defined by their functions, not their constitutions.
Physics is utterly useless in explaining why people behave as they do. Psychology couldn't tell a hadron from a boson. I see no point in applying any kind of value judgement here.
If on the other hand your goal is to determine means of differentiating various fields, that's taxonomy and again I think that the tasks of the field provide a better basis here than the methodologies.
How does one measure the claim that hard science has progressed faster than soft science?
Chris,
In my opinion (as a biologist, not a philosopher, and not having read the paper Massimo cites), we can very well measure the progress of a science, even if not with a perfectly objective number on a scale or something like that.
What I think is that a science's progress can be "measured" by looking at how much predictive and/or explanatory power its theories have. With that in mind, compare physics and psychology. Or chemistry and ecology. It's quite clear which ones have greater powers in their realms. I don't believe there is a way to deny that. Physicists can explain their "stuff" much better than I can explain mine.
The reasons for the differences in progress don't matter for my argument here. But most would agree with the reasons given by Massimo (more tractable subjects, etc.).
J, let's consider closely this notion of predictive power. A psychologist, for example, can predict with high confidence that, if you ring a bell before feeding a dog, and do that many times, then if you later ring a bell, the dog will still salivate whether or not you feed it. That prediction is every bit as substantial as a physicist's prediction regarding the motion of particles. Psychologists could extend this to make predictions regarding the salivatory behavior of cats, otters, bears, and all manner of other creatures -- and all of those predictions could be made with much confidence.
There are millions of predictions that psychologists can make regarding human behavior with great confidence: predictions regarding response to pain, consequences of brain lesions, effects of stimulating portions of the brain with electrical current, and so on. If psychologists had chosen to keep their science a hard science, then they could have compiled a wealth of results every bit as massive as physicists have (if you take into account the greater length of time physicists have had to work and the greater funding they have received).
But psychologists have chosen to tackle some extremely complicated problems. They want to know what makes the brain tick, and the brain is the most complicated phenomenon in the universe.
Indeed, in the one area in which physicists have tackled a truly complicated problem, magnetohydrodynamics, their predictive powers have been dismal. Yes, they've learned a lot -- but they still can't predict this behavior well enough to build a fusion reactor, and they've been working on the problem for 50 years now. Think how much progress psychology has made during those 50 years.
So it would be just as correct, in my opinion, to replace the phrases "hard science" and "soft science" with "easy science" and "difficult science". Then the statement that "The hard sciences have more predictive power than the soft sciences" becomes "The easy sciences have more predictive power than the difficult sciences" -- which is almost tautological.
J, I take your comment (and to some extent Massimo's post, as well) to mean that "easy" and "difficult" science are problem categories shared across the traditional "hard" and "soft" spectrum. So, for example, a particular hypothesis in physics or chemistry may be easy or difficult to test and a particular hypothesis in psychology or sociology may be easy or difficult to test. Yes?
ReplyDeleteI think there is some extent to which more intelligent people will be drawn toward the "hard" sciences because it is there that their intellectual talent is more likely to be recognized -- in the social sciences, having the right politics and making the powers that be like you is more important than being accurate.
I took an Anthropology class from a professor named Charles Erasmus at UC Santa Barbara. He warned all of us not to take Anthropology, because it was so hard to distinguish truth from falsity, and the field was overrun, in his opinion, by idiots. He advocated our majoring in a technical field. He also advocated a career outside of academia altogether, but that's another matter...
On another note, I've also noticed a tendency of scientifically inclined people to believe that the smartest person in the room must be the one who is right. I'm convinced this is false. More brainpower gives one more opportunities to sustain false beliefs, if that is what one intends to do.
This really comes to a head in the Evolution-Creation wars -- the Evolutionists tend to assume that since the Creationists are so very wrong, they must be retards, and there is a substantial history of many Evolutionists underestimating Creationists so badly that they show up at debates having done no research into Creationist arguments (and get slaughtered).
I think there is some extent to which more intelligent people will be drawn toward the "hard" sciences because it is there that their intellectual talent is more likely to be recognized
Whoa! This is a very parochial statement. I was trained as a physicist, and I agree that there are some very, very bright people in physics -- but I am not so egotistical as to think that my own area of expertise is somehow superior to anybody else's. The fundamental question here is whether intelligence is a one-dimensional quantity. I believe that intelligence is far too complicated to be measured on a single scale. There are many different kinds of intelligence, and some of the physicists I have known are astoundingly stupid in some of those dimensions.
I'm not saying that everybody's the same. There are people whose overall cognitive performance is better than the average person's. But cognitive performance is measured in many dimensions. The hard sciences require only a few of those dimensions.