Massimo and Julia answer listeners' questions.
In this installment the topics include: how much works of fiction affect people's rationality, Bayesian vs. frequentist statistics, what counts as evidence, how much blame people deserve when their actions increase the chance of their being targeted, time travel, and whether a philosophically examined life is a better life.
Also, all about rationality in the movies, from Doctor Who to Scooby-Doo.
About Rationally Speaking
Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.
The problem with Bayesian hypothesis testing is that it gives researchers more parameters to tweak to get any results they want.
See the following paper:
"False-Positive Psychology : Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant"
http://people.psych.cornell.edu/~jec7/pcd%20pubs/simmonsetal11.pdf
"Although the Bayesian approach has many virtues, it actually increases researcher degrees of freedom. First, it offers a new set of analyses (in addition to all frequentist ones) that authors could flexibly try out on their data. Second, Bayesian statistics require making additional judgments (e.g., the prior distribution) on a case-by-case basis, providing yet more researcher degrees of freedom."
Max,
The first potential problem is a methodological concern, and it seems to me it can be corrected for methodologically. E.g., in publishing their findings, researchers must of course disclose their methodology, and if reviewers detect any methodological improprieties, I imagine there are procedures to correct them.
The second potential problem is not much of a problem at all, or, if it is, it is a much more palatable problem than those facing classical statisticians. The 'additional judgments' are made neither irrationally nor in a vacuum: researchers assign priors based on background knowledge (e.g., physical laws, the best available work in their disciplines, etc.), and those priors are available to colleagues for critical assessment. And to the extent that subjectivity does play a role (and it does), it is no more problematic for the Bayesian than for the frequentist, who exercises plenty of subjectivity of his own in defining reference classes, long-run random trials, and so on. In fact, I would argue the Bayesian is better placed to deal with researcher subjectivity.
The problem Max is pointing out becomes a big one with contentious claims. For instance, if person A strongly believes in ESP and person B strongly does not, then the choice of priors in Bayesian hypothesis testing becomes a problem: person A will want a prior that strongly favors the ESP claim (the alternative hypothesis), and person B will want one that strongly favors the null hypothesis. A prior biased far enough in either direction will overwhelm any data set.
The compromise is to pick a neutral prior that favors neither hypothesis, but Bayesian statistics with a neutral prior gives answers similar to frequentist statistics. This is why (IMHO) frequentist stats are used for contentious claims.
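To make that concrete, here is a minimal sketch of a two-point Bayesian test on Zener-card guessing. All of the numbers are illustrative assumptions, not anyone's actual study: a chance hit rate of 0.20 for the null, a hypothetical ESP hit rate of 0.25 for the alternative, and a made-up data set of 230 hits in 1,000 trials.

```python
# Minimal sketch (illustrative numbers only): a two-point Bayesian test
# of ESP. Null: chance hit rate 0.20. Alternative: hypothetical ESP hit
# rate 0.25. Data: an assumed 230 hits in 1000 Zener-card trials.
from scipy.stats import binom

def posterior_esp(prior_esp, hits, trials, p_null=0.20, p_esp=0.25):
    """Posterior probability of the ESP hypothesis given the data."""
    like_esp = binom.pmf(hits, trials, p_esp)    # P(data | ESP)
    like_null = binom.pmf(hits, trials, p_null)  # P(data | no ESP)
    num = prior_esp * like_esp
    return num / (num + (1 - prior_esp) * like_null)

hits, trials = 230, 1000  # modestly suggestive: 23% hits vs. 20% chance
for label, prior in [("believer", 0.99), ("skeptic", 1e-6), ("neutral", 0.5)]:
    post = posterior_esp(prior, hits, trials)
    print(f"{label:8s} prior={prior:<8g} posterior={post:.6f}")
```

With these assumed numbers the data favor the ESP hypothesis by a Bayes factor of roughly five, so the neutral prior lands in genuinely uncertain territory, while both extreme priors barely move from where they started.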
As Julia explained, the hard part is defining the alternative hypothesis. The null hypothesis is simple: no effect or no difference. But there's an infinite number of alternatives: 20% increase, 80% increase, 81% increase on Tuesdays, etc., and you need a prior for each of them relative to the others. So we'd have to agree on the relative likelihood of different ESP effects that may not even exist! It's like arguing how many angels can dance on the head of a pin.
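A sketch of that point, again with made-up numbers: treat the null as the point hypothesis "hit rate exactly 0.20" and compare two hypothetical priors over the alternative's effect sizes. The same data yield different Bayes factors depending on which prior you pick.

```python
# Sketch: the Bayes factor depends on the prior placed over the
# alternative's effect sizes. All numbers are illustrative assumptions.
from scipy.stats import binom
from scipy.integrate import quad

hits, trials, p_null = 230, 1000, 0.20  # same assumed data as above

def bayes_factor(prior_pdf, lo, hi):
    """BF of alternative over point null: integrate likelihood against the prior."""
    marginal_alt, _ = quad(lambda p: binom.pmf(hits, trials, p) * prior_pdf(p), lo, hi)
    return marginal_alt / binom.pmf(hits, trials, p_null)

# Alternative 1: any hit rate in [0.20, 0.50] equally likely (big effects allowed).
bf_wide = bayes_factor(lambda p: 1 / 0.30, 0.20, 0.50)
# Alternative 2: effects concentrated just above chance, in [0.20, 0.25].
bf_narrow = bayes_factor(lambda p: 1 / 0.05, 0.20, 0.25)
print(f"BF, wide alternative:   {bf_wide:.2f}")
print(f"BF, narrow alternative: {bf_narrow:.2f}")
```

Same data, different priors on the alternative, noticeably different Bayes factors: exactly the disagreement about "the relative likelihood of different ESP effects" described above.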
Chris,
Whether a researcher selects, consciously or not, a prior probability for the phenomenon that is too high or too low, this can be rectified through replication, peer review, and other forms of independent assessment (systematic reviews and meta-analyses), in much the same way other forms of researcher bias are rectified. So if a so-called 'true believer' in ESP sets the initial prior much too high, her work will rightly be criticized on that point when her peers review the research, and accumulating replications will eventually swamp the biased prior anyway.
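One hedged sketch of that last step, reusing the assumed two-point test from above (chance rate 0.20 vs. a hypothetical ESP rate of 0.25): if the pooled replications keep coming in at exactly the chance rate, as they would if ESP were absent, the data swamp even a prior of 0.99 on ESP.

```python
# Sketch: accumulating replications swamp a biased prior. Same assumed
# two-point test (chance 0.20 vs. hypothetical ESP rate 0.25); here the
# pooled data come in at exactly the chance rate, as if ESP were absent.
from scipy.stats import binom

def posterior_esp(prior_esp, hits, trials, p_null=0.20, p_esp=0.25):
    like_esp = binom.pmf(hits, trials, p_esp)
    like_null = binom.pmf(hits, trials, p_null)
    num = prior_esp * like_esp
    return num / (num + (1 - prior_esp) * like_null)

believer_prior = 0.99  # the 'true believer' sets the prior much too high
for trials in [100, 1000, 5000, 20000]:
    hits = int(0.20 * trials)  # pooled replications at the chance rate
    post = posterior_esp(believer_prior, hits, trials)
    print(f"n={trials:6d}  believer's posterior on ESP = {post:.6f}")
```

The posterior starts near the believer's prior but falls toward zero as the replicated trials accumulate: the prior sets the starting point, not the destination.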