My purpose in this essay is to address some claims Massimo has made over the years about parapsychology (the scientific discipline that studies claims of extrasensory perception, or ESP, psychokinesis, and survival of consciousness after bodily death), and to show why I think the scientific evidence for ESP (e.g. telepathy, clairvoyance, and precognition) is more plentiful than he seems to believe.
On many past occasions, I have heard Massimo publicly claim that ESP has been refuted, such as in a Skeptiko podcast interview last year in which he said “... research on the paranormal has been done for almost a century. We have done plenty of experiments, say on telepathy or clairvoyance or things like that, and we know it doesn’t work.” And in his recent book, Nonsense on Stilts: How to Tell Science from Bunk, Massimo even implies that parapsychology is a pseudoscience on par with astrology.
It would be reasonable to expect, especially from someone as learned as Massimo, that these bold claims about research on telepathy and clairvoyance, and the status of parapsychology as a discipline, were derived from a thorough assessment of the parapsychology literature (a literature which includes informed skeptical criticisms of parapsychology experiments). However, in my assessment of the parapsychology literature, I have been unable to find an evidenced basis for Massimo’s claims. Not only that, my study of the literature has turned up evidence that strongly supports a conclusion contrary to Massimo’s. Here’s why.
In parapsychology, the three research paradigms considered to provide some of the best evidence for ESP are (a) the Maimonides and subsequent dream telepathy/clairvoyance/precognition experiments, (b) the SRI, SAIC, and PEAR remote viewing experiments, and (c) the Ganzfeld experiments. Here I’ll limit myself to discussing (c) only, and refer the interested reader to this recent anthology which overviews the evidence from (a) and (b).
Within parapsychology, the Ganzfeld experiments have probably been the most widely used to test for the possibility of telepathy, clairvoyance, and to some extent precognition. For the unfamiliar reader, a concise account of the Ganzfeld procedure can be read here. The main point I want to make about the Ganzfeld experiments is that, since 1985, there have been 8 independent, published meta-analyses of Ganzfeld experiments; and with the exception of the 1999 meta-analysis by Julie Milton and Richard Wiseman, which was shown by statistician Jessica Utts and acknowledged by Wiseman (personal correspondence, July 2011) to have used a flawed estimate of the overall effect size and p-value of the combined results, all of them have shown statistically highly significant effects with a replication rate well above what’s expected by chance. The literature also shows rather convincingly, in my view, that the leading Ganzfeld critic, Ray Hyman, has been unable to account for these highly significant effects by prosaic means like publication bias, optional stopping, inadequate randomization of targets, sensory leakage, cheating, decline effect, etc. On this last point, I recommend reading Bem and Honorton’s 1994 paper, Bem’s reply to Hyman in 1994, and Storm and co.’s reply to Hyman in 2010.
As an example of the strength of the statistical evidence, let’s look at the most recent Ganzfeld meta-analysis by parapsychologist Patrizio Tressoldi, who applies a frequentist and Bayesian statistical analysis to 108 Ganzfeld experiments from 1974–2008. All these experiments were screened for adequate methodological quality and have an overall hit rate of 31.5% in 4,196 trials, instead of the 25% hit rate expected by chance. Moreover, using the conservative file-drawer estimate of Darlington/Hayes, the lower bound on the number of unreported experiments needed to nullify this overall hit rate is 357, which is considered implausible by Darlington/Hayes’ criterion.
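To put the headline numbers in perspective, here is a rough check of my own (a simple one-proportion normal approximation, not Tressoldi's exact methods) of how far a 31.5% hit rate in 4,196 trials sits from the 25% chance rate:

```python
import math

# Rough z-score for the quoted Ganzfeld totals: ~31.5% hits in 4,196
# trials against a 25% chance rate. Normal approximation only; the
# meta-analysis itself uses more careful effect-size methods.
n = 4196
hits = round(0.315 * n)            # roughly 1,322 hits
p0 = 0.25                          # chance hit rate
p_hat = hits / n

se = math.sqrt(p0 * (1 - p0) / n)  # standard error under the null
z = (p_hat - p0) / se
print(f"z = {z:.1f}")
```

Even this crude calculation puts the combined result many standard errors above chance.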
For the frequentist analysis, Tressoldi applied two standard meta-analytic models: a ‘fixed-effects’ model (which assumes a single true effect size common to all experiments) and a ‘random-effects’ model (which allows the true effect size to vary across experiments). Whereas an overall effect only about 1.6 standard errors above zero is needed for a meta-analysis to reach conventional statistical significance, the fixed-effects model puts the overall effect more than 19 standard errors above zero, and the random-effects model puts it more than 6 standard errors above zero. The corresponding odds against chance are off the charts for the fixed-effects model, and greater than a billion to one even for the more conservative random-effects model.
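The difference between the two models can be sketched with a toy calculation; the effect estimates and variances below are invented for illustration and are not the Ganzfeld data:

```python
# Toy illustration of fixed- vs. random-effects pooling.
# y[i] is a study's effect estimate, v[i] its sampling variance
# (all numbers invented for illustration).
y = [0.05, 0.40, 0.10, 0.55, 0.20]
v = [0.010, 0.020, 0.015, 0.030, 0.012]

# Fixed effects: inverse-variance weighted mean.
w = [1 / vi for vi in v]
fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Random effects (DerSimonian-Laird): estimate the between-study
# variance tau2 from Cochran's Q, then widen every weight by it.
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
df = len(y) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
w_re = [1 / (vi + tau2) for vi in v]
random_eff = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)

print(fixed, random_eff)
```

Because the random-effects model absorbs between-study variability into the weights, it yields wider confidence intervals and hence a smaller (more conservative) z-score, which is why the two models above give 19 and 6 standard errors respectively.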
For the Bayesian analysis (which I know Massimo believes is more reliable and valid than the classical approach), Tressoldi follows Rouder and co. in considering two hypotheses. The first is the null hypothesis that the true effect size is zero for all experiments, and the second is the ESP hypothesis that the true effect size is constant and positive across all experiments. He then finds that the probability of the combined Ganzfeld data under the ESP hypothesis is 18,886,051 times its probability under the null hypothesis; this ratio is the ‘Bayes factor’. So a skeptical person who gives prior odds of, say, 1,000,000:1 against ESP should multiply their odds in favor of ESP (1:1,000,000) by this factor, obtaining posterior odds of 18,886,051/1,000,000, or about 19:1 in favor of ESP. Interestingly, according to Utts, Ray Hyman told her personally that he would put prior odds of about 1,000:1 against ESP being real. For the Ganzfeld Bayes factor calculated by Tressoldi, this would mean that Hyman’s posterior odds should be about 18,886:1 in favor of ESP.
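The odds-updating arithmetic here is just multiplication of prior odds by the Bayes factor; a quick check with the numbers quoted above:

```python
# Posterior odds = prior odds x Bayes factor, using the Bayes factor
# quoted above and the two priors mentioned in the text.
bayes_factor = 18_886_051

def posterior_odds_for_esp(prior_odds_against):
    # Prior odds of N:1 against ESP are 1:N in favor of ESP.
    return (1 / prior_odds_against) * bayes_factor

skeptic = posterior_odds_for_esp(1_000_000)  # about 19:1 in favor
hyman = posterior_odds_for_esp(1_000)        # about 18,886:1 in favor
print(round(skeptic, 1), round(hyman))
```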
And Tressoldi’s Bayesian analysis is not the only one. In the Ganzfeld meta-analysis by Utts and co., a Bayesian approach is also used. Their approach differs from Tressoldi’s in that they assume the true ESP effect size to be variable across experiments, which is the more common assumption in Bayesian statistics. They consider three priors labeled “psi-skeptic,” “open-minded,” and “psi-believer,” each corresponding to a guess about the most likely median Ganzfeld hit-rate. Then they examine the extent to which the combined empirical results of 56 procedurally standard Ganzfeld experiments shift each prior median hit rate to posterior median hit rates above chance. What they find is that the shift depends strongly on how wide one’s subjectively decided uncertainty is around each prior median.
In sum, I don’t see how to escape the conclusion that a classical statistical analysis of the Ganzfeld data gives strong evidence for ESP, judging by the standards of evidence commonly accepted by the social and behavioral sciences. And using Bayesian analysis, we’ve seen that one approach shows overwhelmingly strong evidence for ESP, while another shows that the strength of the evidence depends strongly on the uncertainty of one’s prior belief about the possibility of ESP.
Given this evidence from the Ganzfeld alone, it is difficult for me to see on what basis Massimo claims that “... we have done plenty of experiments, say on telepathy or clairvoyance or things like that, and we know it doesn’t work.” And to the best of my knowledge, he has never tried to justify his assertion.
Correspondingly, I don’t see an evidenced basis for Massimo’s characterization of parapsychology as ‘pseudoscience.’ In Nonsense, Massimo says, “lack of progress, i.e., lack of cumulative results over time, is one of the distinctive features of pseudoscience.” But with the example of the Ganzfeld, it seems indisputable to me that there are cumulative results in parapsychology, and that those results provide evidential support for the ESP hypothesis.
Finally, it is noteworthy that the prominent CSI Fellow, psychologist, and self-described parapsychologist, Richard Wiseman, has previously stated that "I agree that by the standards of any other area of science that remote viewing is proven, but begs the question: do we need higher standards of evidence when we study the paranormal? I think we do.”
Wiseman later clarified his comment: “That’s a slight misquote because I was using the term in more of a general sense of ESP. That is, I was not talking about remote viewing per se, but rather Ganzfeld, etc. as well. I think that they do meet the usual standards for a normal claim but are not convincing enough for an extraordinary claim.”
Even granting Wiseman’s insistence on higher standards of evidence for extraordinary claims, I wonder what Massimo thinks of Wiseman’s assertion that the scientific evidence for ESP is decisively in favor of it by the standards of “normal” scientific claims. Surely Massimo would agree that if Wiseman’s assertion is true, then this is characteristic not of pseudoscience but of science.
Good post, Maaneli! Are you familiar with Anthony Peake's work? If so, what do you think of his ideas about split brains, which in effect says that all of us have two 'conscious entities' within?
Thanks, DaveS. Not familiar with Peake's work, but will check it out!
Maaneli,
Two things to say here.
First, that the above-mentioned studies and analyses are reputed to provide 'strong evidence' for ESP could itself serve as evidence that we are in dire need of new statistical techniques for the assessment of small effect sizes, especially small effect sizes whose reality would imply that the entire edifice of modern physical science is wrong. See, for instance, the work of Fisher, Snedecor, and Cochran. Their conclusion is pretty much the same: classical statistical techniques break down in the presence of small effect sizes.
Personally, I would wager that amongst the studies cited in support of parapsychology there are levels of measurement error of magnitudes greater than any parapsychological effects, especially considering that *if* there are parapsychological phenomena, they at best reveal very small effects.
Second, the above aside, I simply do not see how one can come to your conclusion on the basis of current parapsychology research:
(1) we can identify no plausible causal mechanism which could account for the efficacy of parapsychology.
(2) the positive result studies you cite exist within a larger body of research, much of which has never seen the light of academic journals. Researchers are far more likely to publish positive results than not, and researchers who do produce negative results often file the data away because they are pretty sure nobody will be interested in studies which disconfirm ESP. Take, for example, the null replication of three experiments of Daryl Bem's ESP study which were rejected by the same journal that published Bem's paper.
What this amounts to, of course, is a publication bias and file drawer effect that, when viewed from afar, makes it far more likely that the positive results you mention are the types of chance results we would expect given a large body of research.
There is a bias in this: "never seen the light of academic journals." Publication of positive results in a journal with great impact factor is difficult; there is a catch-22.
But using different standards of proof for different claims *is* part of proper scientific practice. Consider recent observations of FTL neutrinos. IIRC, the observation was six sigma, which corresponds to a Bayes factor of about 10^9. Of course, simply listing this large number misses the point, since most people think it's due to a systematic error we haven't thought of. A lot of science is debugging our procedures to find these errors.
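For reference, the tail probability behind a six-sigma observation can be computed directly; strictly speaking this is a one-sided p-value rather than a Bayes factor, so the "10^9" above is a loose conversion:

```python
import math

# One-sided normal tail probability for a z-sigma observation.
def one_sided_p(z):
    return 0.5 * math.erfc(z / math.sqrt(2))

p = one_sided_p(6.0)
print(p)   # on the order of 1e-9
```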
The first thing I would suggest to debug this experiment is to vary the experiment (unbeknownst to the subjects, experimenters, or data analysts) in a way that ESP could not possibly work. For example, have the sender pick the target after the receiver reads it, or have the sender and receiver on opposite sides of the world without telling them, or something. I don't know enough about ESP to know the best procedure, but the point is you need a control. Does the control measure a 25% success rate, or does it also measure about 31% due to some unknown systematic error? Has anything like this been done before?
"What this amounts to, of course, is a publication bias and file drawer effect"
Susan Blackmore would probably disagree with you on that. From the Skeptic's Dictionary article on the Ganzfeld experiments:
"Psychologist Ray Hyman did the first meta-analysis of the ganzfeld studies and though he used different criteria for excluding studies from the meta-analysis, he also found a hit rate of about 38% even though only 31% of the studies he included showed positive results (1989, 1996). Susan Blackmore found thirty-one unpublished studies from this period, but their results were not less successful than the published ones (1980). It seems unlikely that either chance or the file-drawer effect could account for the statistical significance of these results."
FWIW, I suspect a Clever Hans effect may be at work here, and there appear to be others who suspect the same.
Eamon,
Surely the Ganzfeld success rates of over 0.3, on experiments that would produce a chance effect of 0.25, can hardly be described as a small effect size, can they?
Statistics prove that if something were possible, then this something would appear to be possible. Even when it's logically impossible.
ReplyDeleteMassimo, it's your blog, but I can't believe you even ran this. (And, this is after defending your liking of Blackburn's column on Hume on G+ yesterday.)
The author's claims are nothing new and nothing true. As to the details of the not true part ...
Eamon and Ramsey are probably both right on just what fallacies are involved, dependent on the specific experiment/size, and specific ESP claim being tested.
We have an intuitive perceptive sense that there are intensional forces in the universe. What they bring about that affects us will be explainable by no more than educated guesswork - even when those forces have been able to form communicative methods that we have learned to grasp.
Those who wish to believe otherwise educate themselves to guess otherwise.
Maaneli,
On this point especially:
"The literature also shows rather convincingly, in my view, that the leading Ganzfeld critic, Ray Hyman, has been unable to account for these highly significant effects by prosaic means like publication bias, optional stopping, inadequate randomization of targets, sensory leakage, cheating, decline effect, etc. On this last point, I recommend reading Bem and Honorton’s 1994 paper, Bem’s reply to Hyman in 1994, and Storm and co.’s reply to Hyman in 2010."
Bem's research in general, and the Bem and Honorton paper in particular, are bosh. In brief, if you *really* think that the Ganzfeld research overturns modern physical science, then I really must begin to question your qualifications as a prudent epistemic agent.
Eamon, thanks for your comments.
<< First, that the above mentioned studies & analyses are reputed to provide so-called 'strong evidence' for ESP could serve as evidence for the claim that we are in dire need to develop new statistical techniques for the assessment of small effect sizes, especially small effect sizes which have the implication that the entire edifice of modern physical science is wrong. See, for instance, the work of Fisher, Snedecor, and Cochran. Their conclusion is pretty much the same: Classical statistical techniques break down in the presence of small effect sizes. >>
I'm unfamiliar with the work of Fisher, Snedecor, and Cochran that you cite. (References would be appreciated!) But I'm skeptical that their conclusion is really that "classical statistical techniques break down in the presence of small effect sizes". Taking that conclusion at face value, and assuming it is true, this would imply that we have to throw out all the results from classical statistical analyses of quantum mechanical experiments, simply because the effect sizes in quantum mechanical experiments are so incredibly small (in fact, they are several orders of magnitude smaller than the Ganzfeld effect size). I doubt Fisher et al. would argue that, so I suspect there are nuances being glossed over in your characterization of their work.
Also, the Ganzfeld, remote viewing, and dream telepathy experiments are subject to the simplest of all statistical models, namely, the binomial experiment. (Indeed, one can also compute an overall effect size for Tressoldi's 108 Ganzfeld studies using the exact binomial test.) This is the same statistical model used to describe something as basic as the relative frequencies of heads and tails in N flips of a fair coin. Do Fisher et al. claim that their findings invalidate the binomial model? I would be surprised if they did, because if the binomial model were invalid even for small effect sizes, it would mean that something is fundamentally wrong with the most basic assumptions of classical probability theory, which would in turn entail that all other frequentist statistical inferences made in science (including for phenomena with "large" effect sizes) are of dubious validity.
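For readers who want to check this, the exact binomial upper tail can be computed directly from its definition; the counts below are toy numbers for speed, not the full 4,196-trial database:

```python
from math import comb

# Exact binomial upper tail P(X >= k) for n trials with chance rate p.
# Toy numbers (n=100, k=38 hits at p=0.25) chosen for illustration.
def binom_tail(k, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

p_val = binom_tail(38, 100, 0.25)
print(p_val)   # well below the conventional 0.05 threshold
```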
Also, even assuming there are legit problems with classical statistical methods applied to phenomena with small effect sizes, you may have noticed that I also discussed the use of non-classical (specifically Bayesian) statistical analyses of the Ganzfeld results, and that those results can also yield strong evidence for the ESP hypothesis (the problems with Bayesian statistics notwithstanding). So unless one can identify specific problems with the application of Bayesian analysis by Tressoldi and Utts et al. that I discussed, I don't see how advocating the use of non-classical statistical techniques (which is what you seem to be advocating) affects any of the claims in my article.
Also, you seem to imply that if the Ganzfeld ESP effect were real, then this would "have the implication that the entire edifice of modern physical science is wrong." Can I ask you elaborate on why you think so? Do you think that there's some inconsistency with the laws of physics, for example?
<< Personally, I would wager that amongst the studies cited in support of parapsychology there are levels of measurement errors of magnitudes greater than any parapsychological effects, especially considering *if* there are parapsychological phenomena, then they at best reveal very small effects. >>
What is your theoretical basis for predicting that the levels of measurement error are magnitudes greater than any putative parapsychological effects? It seems like you're making lots of assumptions about how large parapsychological effects can theoretically be.
Of course, a priori, your hypothesis may nonetheless be correct. For example, sloppy implementation of the Ganzfeld procedure in the individual studies composing the meta-analysis could in principle result in the observed effect size which is greater than what's expected by chance. But it should be noted that sloppy implementation of procedure is a possibility that can always be leveled at any meta-analysis of any research paradigm (whether controversial or not) in the social/behavioral/medical sciences.
Fortunately, there is a way to test the hypothesis that sloppy implementation of procedure can explain the significantly above-chance effect size: do a Ganzfeld experiment with a sample size comparable to that of the latest meta-analysis (e.g. 4,000 trials), keep all the experimenters the same, and ensure that the Ganzfeld procedure is followed closely by auditing all the equipment before each trial, using hidden cameras to monitor the experimenters and test subjects, etc. In fact, it may be necessary to do several such experiments, given that the results of large-scale randomized controlled trials on the same research hypothesis appear to disagree with each other about 1/3 of the time, which, incidentally, is the same rate at which the results of meta-analyses on the same research hypothesis disagree with each other. (For more on this, see page 381 of this textbook: http://www.amazon.com/Introduction-Meta-Analysis-Statistics-Practice-Borenstein/dp/0470057246/ref=sr_1_1?ie=UTF8&qid=1324150068&sr=8-1)
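For what it's worth, a standard power calculation (my own sketch, using the textbook normal-approximation formula for a one-sample proportion test) shows roughly how many trials are needed merely to detect a 31.5% hit rate against 25% chance; matching the full meta-analytic database, as suggested above, of course requires far more:

```python
import math

# Trials needed for 80% power to detect a 31.5% hit rate against the
# 25% chance rate, one-sided alpha = 0.05 (normal approximation).
p0, p1 = 0.25, 0.315
z_alpha, z_beta = 1.645, 0.842    # standard normal quantiles

n = ((z_alpha * math.sqrt(p0 * (1 - p0)) +
      z_beta * math.sqrt(p1 * (1 - p1))) / (p1 - p0)) ** 2
print(math.ceil(n))   # a few hundred trials
```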
Unfortunately, doing such large-scale Ganzfeld experiments to test that hypothesis would require funding that parapsychology as a research community does not seem to have at the moment. And it seems highly unlikely that the NSF would fund such a Ganzfeld experiment if it were proposed to them. IMO, the parapsychologists seem to have done about the best they can in proof-oriented research, given the funding and resources they have had thus far. This, in my view, justifies them getting more funding to do the type of large-scale Ganzfeld experiment mentioned above.
<< Second, the above aside, I simply do not see how one can come to your conclusion on the basis of current parapsychology research:
(1) we can identify no plausible causal mechanism which could account for the efficacy of parapsychology.
(2) the positive result studies you cite exist within a larger body of research, much of which has never seen the light of academic journals. Researchers are far more likely to publish positive results than not, and researchers who do produce negative results often file the data away because they are pretty sure nobody will be interested in studies which disconfirm ESP. Take, for example, the null replication of three experiments of Daryl Bem's ESP study which were rejected by the same journal that published Bem's paper. >>
(1) In science, the lack of a known or plausible causal mechanism does not invalidate the claim that there can be strong statistical evidence for a hypothesis. (Examples I know of abound in the world of physics and medicine.) Also, I think one has to be cautious about what counts as a ‘plausible mechanism’, as that depends on your prior assumptions, which may not be valid in light of the new data under consideration, or may be due to a mere lack of scientific imagination. In any case, there do exist phenomenological theories of ESP that seem to fit the Ganzfeld/RV data well (e.g. Decision Augmentation Theory). See for example this paper by May et al.: http://www.lfr.org/lfr/csl/library/DATjp.pdf
And for a comprehensive review of theories of ESP, see chapter 5 of Douglas Stokes's online book: http://noosphere.princeton.edu/papers/docs/stokes/
(2) Yes, you’re referring to the file-drawer/publication-bias problem. I addressed that in my article:
“The literature also shows rather convincingly, in my view, that the leading Ganzfeld critic, Ray Hyman, has been unable to account for these highly significant effects by prosaic means like publication bias, optional stopping, inadequate randomization of targets, sensory leakage, cheating, decline effect, etc. On this last point, I recommend reading Bem and Honorton’s 1994 paper, Bem’s reply to Hyman in 1994, and Storm and co.’s reply to Hyman in 2010.”
and
“Moreover, using the conservative file-drawer estimate of Darlington/Hayes, the lower bound on the number of unreported experiments needed to nullify this overall hit rate is 357, which is considered implausible by Darlington/Hayes’ criterion.”
The details of the use of the Darlington/Hayes approach can be found in Storm et al.’s 2010 Ganzfeld meta-analysis:
http://www.psy.unipd.it/~tressold/cmssimple/uploads/includes/MetaFreeResp010.pdf
Also, in the Ganzfeld meta-analysis by Utts et al., they point out that
“The studies we included were conducted by many different investigators and at a variety of laboratories in multiple countries. There is unlikely to be a large “file drawer” of unpublished studies because the ganzfeld procedure requires a special laboratory, parapsychology is a small field in which most researchers know each other, there are a limited number of journals in which such results would be published, and the journals in parapsychology have a policy of publishing studies even if they produce non-significant results.”
And as Bem and Honorton point out in their 1994 paper,
“Parapsychologists were among the first to become sensitive to the problem; and, in 1975, the Parapsychological Association Council adopted a policy opposing the selective reporting of positive outcomes. As a consequence, negative findings have been routinely reported at the association's meetings and in its affiliated publications for almost two decades. As has already been shown, more than half of the ganzfeld studies included in the meta-analysis yielded outcomes whose significance falls short of the conventional .05 level.”
and
“Because it is impossible, by definition, to know how many unknown studies, exploratory or otherwise, are languishing in file drawers, the major tool for estimating the seriousness of selective reporting problems has become some variant of Rosenthal's file drawer statistic, an estimate of how many unreported studies with z scores of zero would be required to exactly cancel out the significance of the known database (Rosenthal, 1979). For the 28 direct-hit ganzfeld studies alone, this estimate is 423 fugitive studies, a ratio of unreported-to-reported studies of approximately 15:1. When it is recalled that a single ganzfeld session takes over an hour to conduct, it is not surprising that, despite his concern with the retrospective study problem, Hyman concurred with Honorton and other participants in the published debate that selective reporting problems cannot plausibly account for the overall statistical significance of the psi ganzfeld database (Hyman & Honorton, 1986). [Footnote 2]”
and
“A 1980 survey [by Susan Blackmore] of parapsychologists uncovered only 19 completed but unreported ganzfeld studies. Seven of these had achieved significantly positive results, a proportion (.37) very similar to the proportion of independently significant studies in the meta-analysis (.43)”
Blackmore, S. (1980). The extent of selective reporting of ESP Ganzfeld studies. European Journal of Parapsychology, 3, 213- 219.
So given all the above, the file-drawer problem doesn’t seem to be a plausible explanation.
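Rosenthal's file-drawer statistic, discussed in the quoted passages, is simple to compute; the z-scores below are invented for illustration (see Storm et al.'s 2010 paper for the real inputs):

```python
# Rosenthal's fail-safe N: the number of unreported zero-effect
# studies needed to drag the combined result below significance.
# The z-scores here are invented placeholders, not the actual data.
z_scores = [1.25] * 28            # 28 studies, mean z chosen arbitrarily
k = len(z_scores)
z_crit = 1.645                    # one-sided 0.05 threshold

n_fs = (sum(z_scores) ** 2) / z_crit ** 2 - k
print(round(n_fs))
```

With these illustrative inputs the fail-safe N lands in the hundreds, which is the same order of magnitude as the 423 and 357 figures quoted above.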
The pro and con arguments about file drawer concerns, statistical tools and robustness of Ganzfeld and PK studies have been dissected at length in books like Radin’s Entangled Minds and Chris Carter’s Parapsychology and the Skeptics. Anyone with a sincere interest in evaluating the truth status of these results (rather than defending a priori conclusions) can refer to these and similar discussions to see that methodological strength is now much higher in typical parapsychology experiments than in many hard and soft sciences (Sheldrake has published an interesting review on the topic of blind controls in mainstream disciplines and the picture is quite dismal).
As far as “statistics breaking down in the presence of small effect sizes”, I would invite you to visit Paul Smith’s website and look at some of the declassified CIA Stargate remote viewing sessions http://www.rviewer.com/SG_Sessions.html (I’d recommend Hagia Sophia, Bridge, Radio Telescope, Suez and Khyber Pass to begin with). We see what we want to see, gentlemen – no amount of statistical evidence will change someone’s mind unless that person is prepared to take this conceptual leap. Most of us, deep down, are not.
Lian S
Miller,
<< The first thing I would suggest to debug this experiment, is to vary the experiment (unbeknownst to the subjects, experimenters, or data analysts) in a way that ESP could not possibly work. For example, have the sender pick the target after the receiver reads it, or have the sender and receiver on opposite sides of the world without telling them, or something. I don't know enough about ESP to know the best procedure, but the point is you need a control. Does the control measure 25% success rate, or does it also measure about 31% due to some unknown systematic error? Has anything like this been done before? >>
Your suggestions are all perfectly reasonable ones. Your first suggestion has been done by Bierman et al. to test whether precognition might be the form of ESP occurring in the Ganzfeld. The results were positive and significant, which would seem to suggest (if you entertain the ESP hypothesis for the moment, as outlandish as it may be) that either precognition is occurring simultaneously with telepathy and clairvoyance in these experiments, or is all that's occurring.
Your suggestion to send senders and receivers on opposite sides of the world is a suggestion that I've also made to one Ganzfeld researcher. It's never been done apparently, but it has been done in another, closely related research paradigm called remote viewing. The results of those experiments seem to show that the effect size in remote viewing (which is almost identical to that of the Ganzfeld) does not drop off with distance. This would still be consistent with the precognition hypothesis, but hard to give a physicalist explanation for in terms of telepathy and clairvoyance.
One type of control you could use (and which has been used in a couple experiments) is to have a trial in which the subject tries to identify the target under the Ganzfeld condition, and then a trial in which the subject tries to identify the target in a non-Ganzfeld condition. Two such experiments have been done with results that seem to confirm that the Ganzfeld condition does seem to enhance the capacity of test subjects to identify targets more frequently than would be expected by chance:
http://search.informit.com.au/documentSummary;dn=288879307410164;res=IELHSS
http://www.unibem.br/cipe/3_links_pdf/Link_01.pdf
Ideally, I think, the ultimate control test would be in a large-scale Ganzfeld experiment of the type I have suggested above to Eamon. This way, concerns about statistical power would be a non-issue.
Ramsey,
Thanks for that article. The Clever Hans effect is unlikely because, in the Ganzfeld experiments, the experimenter who interacts with the receiver is blind to the target both during the 'sending' period and the judging period. That is, neither the receiver nor the experimenter knows what the target is until *after* judging has been completed. See the Bem/Honorton 1994 paper I referenced for more details.
David,
I think what Eamon means by a "small effect size" is the technical usage in stats. In stats, an effect size measure such as Pearson's correlation coefficient, r, is classified as "small" if r is 0.1 or smaller.
Eamon,
<< Bem's research in general, and the Bem and Honorton paper in particular, are bosh. >>
Can you provide evidence for such a bold claim? If not, it just sounds like a dogmatic assertion, which is not really in the spirit of good critical thinking and skepticism.
<< In brief, if you *really* think that the Ganzfeld research overturns modern physical science, then I really must begin to question your qualifications as a prudent epistemic agent. >>
I don't think Ganzfeld research overturns modern physical science; and I think anyone who does is either mistaken or making premature assumptions.
Lian S, thanks for your comments and for the link to Smith's website. Will check it out.
(1) we can identify no plausible causal mechanism which could account for the efficacy of parapsychology.
ReplyDeleteOr/but:
(1) In science, the lack of a known or plausible causal mechanism does not invalidate the claim that there can be strong statistical evidence for a hypothesis.
And yet/still:
We can identify no plausible causal mechanism which could account for the efficacy of parapsychology.
I'm neither a statistician nor experimentalist, and I've just skimmed the comments (I see the excellent suggestion about the need for a control and also that there has been additional discussion of the literature), so my comment might be untimely, but reading the post as someone who is not an expert (my academic training is in game theory), I have to say it seems to miss the mark to me. Its premise was that Massimo has not reviewed the literature, but then it proceeded to review just a sampling of the paranormal psych lit without assessing the skeptical literature (with the exception of just one researcher). As someone who is not an expert, I'm inclined to default to the expert consensus, and paranormalists have a very long way to go before they've won over the scientific consensus. The way you frame the post, however, is for the reader either to take your word for the paranormalists' relative plausibility over the skeptic Hyman, thus going against the expert consensus just on your word, or to evaluate the primary literature directly, which the lay audience is not qualified to do. I just would have expected a survey of the Ganzfeld lit to do more surveying.
As someone who has had some courses in probability and Bayesian updating though, if you had asked me, I would have put the prior somewhere in the area of vanishingly small. And you're simply overly hasty to infer these experiments, even taken at face value, are evidence of ESP, since first you would need to exclude more ordinary, naturalist explanations. Perhaps people in these circumstances simply have the same built-in, unconscious patterns of image selection that operate independently. It would be more convincing if they tried transmitting truly implausible and unlikely imagery rather than mundane images.
On that note, I never would have come up with an explanation for birds' ability to fly north unfailingly, and in a way it almost looks like an ESP phenomenon, but of course it turned out not to be. I wouldn't take my own inability to come up with an explanation as reason to buy into a paranormal explanation.
Also, I did a text search on this page for "25" and couldn't find the answer to where the claimed random chance percentage comes from, even though Miller had asked. From what I understand of the methodology having skimmed the page you linked to, it seems very strange.
One final thought: on the page you linked to that discusses what the Ganzfeld methodology is (which was not concise by any stretch), it mentioned its history, which apparently is that ESP believers crafted it so they could see the effect they wanted, after having become frustrated with prior studies that produced unfavorable results. It surprised me he had the gall to admit that. What wouldn't surprise me is that, given the thousands upon thousands of experiments science has done and will do, there would be a handful of long-lasting flukes that create a mirage of a persistent, unexplained effect. Given such mirages, it would be even less surprising that they arose after such cherry-picked methodology.
What is the Darlington-Hayes criterion, btw? Having 4 out of 5 experiments go unreported in this field over a roughly four decade period just doesn't sound that implausible to me. Google doesn't return anything about it. I put "Darlington Hayes criterion" in quotes and I get only three links, all to your post: the rationallyspeaking.org main site, the post's own page here, and a copy of the post at I guess a kind of mirror site, ingodwelust.com.
From a pragmatic atheist point of view, it is easy for anyone to prove they have psychic abilities. JREF has offered a prize for just such evidence.
Arguing the statistical significance of something that is presumed to be all around us but is not detectable by 'ordinary' means is of little use. Pasteurizing milk is an example of demonstrating something that can't be detected. Later we learned how to detect it.
Small sample sizes have some serious problems, especially when you're talking about only small differences from 'chance' results.
Given the propensity for humans to take full advantage of anything they can to better their own lot, it is fair to say that, since we have not seen some group of people who are really good at picking lottery numbers or betting on other games of chance, the claim that ESP exists is an extraordinary claim. That's just common sense talking, and I'm from Missouri - Show Me!
While there might be a statistical chance that some people exhibit some form of ESP, as long as it remains less of a force in life than random chance it's not really worth contemplating. What drives someone to express the notion that it's possible to know future events, or current events in some distant place? I'm going to go out on a limb and bet that it's not an effort to explain something or someone that has no other explanation. It's more or less just wishful thinking.
Having experienced déjà vu and even events which might be considered precognitive, I also know that I had enough prior experience and knowledge of the topic in question to know that they were not. They may well have been extremely intuitive guesses, but there is no reason to think they were more than that.
If any person could know the future, unless they keep it completely to themselves, it would stick out like a pink unicorn that has lost its invisibility. For me, drop your data analysis until you have huge samples to work with, then do the controlled experiments with outliers and such. Until there is a reason to express this notion as an explanation for something it's just wishful thinking.
For those who think it does exist: why haven't you raised money to train and improve the abilities of those who seem to possess such talents? There is money to be made in this, and the fact that there is, and that we still have only small statistical analyses as evidence, is IMO *VERY* strong evidence that it doesn't exist, or that if it does, it is useless - which is the same thing as 'does not exist'.
Timothy,
As someone who has researched parapsychology and the skeptics for a number of years, I find your skepticism all too typical of what I encounter.
You have very little knowledge combined with many negative opinions. Maaneli has not addressed the skepticism in more detail because he doesn't need to. It is very weak and mostly revolves around the "we're not satisfied" theme without providing any convincing reasons.
I'm not going to address your concerns in any detail because they come from a lack of knowledge on this subject that is so profound that only a reasonable amount of research on your part could remedy it. Basically, you're just guessing and you're pretty much wrong.
In fact, none of the skepticism on this thread really demonstrates any substantial knowledge of the subject. This is a problem that has always plagued the field of parapsychology. I can count the number of skeptics who are well versed in the literature ON ONE HAND.
Of those skeptics, Blackmore has admitted that the existence of psi is a possibility, Wiseman has admitted that the evidence meets ordinary standards of evidence and Hyman has admitted that the evidence cannot be easily dismissed.
Most of the skeptical literature is rife with errors and reads more like propaganda than a careful examination of the facts.
Some people have "admitted" that some parts of psi are possible, etc.?
A "careful examination of the facts" shows some of these "scientific" practitioners also admit to beliefs in Gods, Devils and Ghosts.
Hi Timothy,
Thanks for your comments.
<< I'm neither a statistician nor experimentalist, and I've just skimmed the comments (I see the excellent suggestion about the need for a control and also that there has been additional discussion of the literature) >>
Control conditions are not actually necessary for these experiments to be considered methodologically sound. If it is agreed that the Ganzfeld procedure removes all known means of ordinary sensory leakage (which is in fact what was agreed upon by Hyman and Honorton in their joint communique), then the experiment is good to go on its own terms. However, control conditions can be useful as an additional check on the claim that all the other aspects of the Ganzfeld procedure do what they're intended to do. And as I pointed out to Miller, there have been two Ganzfeld experiments which used non-Ganzfeld control conditions. Both of them showed that, under the Ganzfeld condition, there are effects which are above chance and statistically significant, while under the non-Ganzfeld condition, the effects were nonsignificant and close to chance.

Moreover, if you combine the results of the two studies and apply an exact binomial test (which is a valid statistical test for the Ganzfeld experiments), you will find that for a total trial size of N = 192, there were 75 hits in the Ganzfeld condition, for an overall hit rate of 39% with exact binomial p = 10^-5. The 95% confidence interval for the hit rate runs from 32% to 46%, i.e. it excludes the chance hit rate. By contrast, in the non-Ganzfeld condition there were 48 hits, for a hit rate of 25% (exact binomial p = 0.5) and a 95% confidence interval from 19% to 31%. The fact that these 95% CIs don't overlap is another important indication that the results of the Ganzfeld condition are significantly different from those of the non-Ganzfeld condition (see this Cochrane Collaboration discussion of effect sizes and confidence intervals in subgroup analyses for more on this last point: http://www.cochrane-net.org/openlearning/html/mod13-5.htm)
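For anyone who wants to check those pooled numbers, here is a minimal sketch in Python (standard library only). The exact binomial tail probability and the simple normal-approximation (Wald) confidence interval are standard textbook methods; the original analyses may have computed their intervals slightly differently:

```python
from math import comb, sqrt

def binom_p_upper(k, n, p):
    """Exact one-tailed binomial p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def wald_ci(k, n, z=1.96):
    """Simple normal-approximation (Wald) 95% confidence interval for a proportion."""
    ph = k / n
    se = sqrt(ph * (1 - ph) / n)
    return ph - z * se, ph + z * se

n = 192  # pooled trials from the two control-condition studies
for label, hits in [("Ganzfeld", 75), ("non-Ganzfeld", 48)]:
    p = binom_p_upper(hits, n, 0.25)
    lo, hi = wald_ci(hits, n)
    print(f"{label:12s}: hit rate {hits/n:.1%}, exact p = {p:.2g}, 95% CI = ({lo:.1%}, {hi:.1%})")
```

Run as-is, this reproduces the pattern described above: roughly 39% with a tiny p-value and a CI of about 32%-46% for the Ganzfeld condition, versus 25% (p near 0.5, CI about 19%-31%) for the control, with the two intervals not overlapping.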
<< so my comment might be untimely, but reading the post as someone who is not an expert (my academic training is in game theory), I have to say it seems to miss the mark to me. Its premise was that Massimo has not reviewed the literature, but then it proceeded to review just a sampling of the paranormal psych lit without assessing the skeptical literature (with the exception of just one researcher). >>
The premise was not that Massimo has not reviewed the literature. I don't know whether he has or not, and I was careful not to make the claim that he hasn't. All I said in my essay was that, in *my* reading of the parapsychological literature, not only could I not find evidential support for Massimo's claims about ESP research, but I found evidence to the contrary. I then proceeded to describe just one bit of the parapsychological literature that I consider to be evidence to the contrary. Unfortunately, it's not feasible to properly assess the skeptical literature with a 1,500 word essay cap. Nevertheless, you should know that, with respect to the Ganzfeld literature, the extent of the informed skeptical criticisms that have been published in peer-reviewed journals is limited essentially to Ray Hyman (and to a much lesser extent, Richard Wiseman, Susan Blackmore, and James Alcock). That's why I focused on Hyman's criticisms - and note that I was careful to say that it is in *my* assessment that Hyman's criticisms don't hold up; and even with that qualification, I encouraged the reader to read all of the published exchanges between Hyman and the Ganzfeld researchers, so that he/she wouldn't have to merely take my word for it.
I'm disappointed by the strong negative reactions to a sincere and thoughtful post. I was unaware of much of the research that Maaneli shared, and I'm glad I now know more.
Yes, I still don't believe the ESP hypothesis. I'd give 99% chance to experimental flaws that may or may not have been identified yet. Another 0.99% chance to some mundane, already-understood explanation for an effect that really exists. And 0.01% for something new within physics. It's not like we're talking about overturning physicalism here; if psi existed, it would just involve remote physical interactions among neurons in different brains.
Maaneli wasn't aiming to convince us that psi is real, only that these studies have produced interesting results on limited budgets and that further exploration isn't necessarily crazy.
<< As someone who is not an expert, I'm inclined to default to the expert consensus, and paranormalists have a very long way to go before they've won over the scientific consensus. >>
The problem is that the 'expert consensus' is not what I suspect you think it is. Very few academic psychologists with solid mainstream reputations have actually engaged with the Ganzfeld literature (i.e. studying it, making comments and criticisms about it in the peer-reviewed literature, etc.). Of those that have and are considered experts on and skeptics of the Ganzfeld literature, there is only a handful. They are the people I already mentioned, namely, Ray Hyman, Richard Wiseman, Susan Blackmore, and James Alcock (and perhaps also David Marks and Richard Kammann). Even so, there are also academic psychologists and statisticians with solid mainstream reputations who have expert knowledge of the Ganzfeld literature and have published opinions that concur with those of parapsychologists. Those are Harris, Rosenthal, Saunders, Utts, Johnson, Norris, Suess, Rouder, and Bem (Bem has a reputation as one of the most influential mainstream social psychologists of the 20th century, and says he was a skeptic about ESP before he looked into Honorton's Ganzfeld research back in the 80's). So I don't think there is any clear 'expert consensus' on the Ganzfeld literature within the academic psychology community. If anything, my foray into the literature has led me to find that, among those academic psychologists and statisticians who have studied the Ganzfeld literature and published work relating to it, the majority (even if it is a slight majority) hold opinions which are, overall, favorable to the views of the parapsychologists.
<< The way you frame the post, however, is for the reader either to take your word for the paranormalists' relative plausibility over the skeptic Hyman, thus going against the expert consensus just on your word, or to evaluate the primary literature directly, which the lay audience is not qualified to do. >>
So I hope I have shown by now that I did not frame my post in that way (or at least that that was not at all my intent). Re evaluating the primary literature directly, yes, the lay audience, by and large, is not qualified to do this. Unfortunately, my experience is that this is the only reliable way to develop an informed opinion about the merits of ESP research.
The reason is that, when I initially became interested in this research area, I heavily relied on the skeptical community's lay literature regarding ESP research (e.g. Robert Carroll's Skeptics Dictionary entry on the Ganzfeld, Marks and Kammann's book, The Psychology of the Psychic, Hyman's essays in Skeptical Inquirer, etc.). But once I learned enough of the statistics and research design methodology necessary to understand the peer-reviewed, published papers pertaining to the Ganzfeld (which admittedly took several years to do), I quickly found that the skeptical lay literature on this topic is - and I hate to say it - grossly misleading and inaccurate. So, with this hindsight, I would say that, for a controversial topic like this one, the fairest position a lay skeptic can take is one of agnosticism about the merits of Ganzfeld research (and ESP research more generally), until that lay skeptic is willing to acquire the skills and expertise necessary to properly and directly evaluate the literature for themselves.
<< As someone who has had some courses in probability and Bayesian updating though, if you had asked me, I would have put the prior somewhere in the area of vanishingly small. >>
In Rouder's Bayesian approach, that's irrelevant. In Utts et al.'s approach, there is the problem that your exact choice of confidence interval around your a priori median hit rate is completely subjective. Indeed, there is considerable freedom in the width of the confidence interval, and no considerations about what's likely (given known scientific principles) can constrain your range of plausible confidence intervals in any objective way. From a Bayesian view, one can choose a confidence interval (e.g. the "open-minded" prior) which, when updated by the Ganzfeld data, yields an above-chance overall hit rate that's statistically significant, and it will be just as valid as a confidence interval that does not (e.g. the "psi-skeptic" prior). Also, it may even be questionable why you would put the prior as vanishingly small in the first place (i.e. it might be questionable why you would think ESP is, a priori, virtually impossible).
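To make the prior-sensitivity point concrete, here is a toy conjugate-updating sketch. The Beta priors below are my own illustrative choices, not the actual priors used by Utts or Rouder, and the data is the pooled 75-hits-in-192-trials figure discussed earlier:

```python
def posterior_mean(a, b, hits, trials):
    """Beta(a, b) prior on the hit rate + binomial data -> posterior is
    Beta(a + hits, b + misses); return its mean, (a + hits) / (a + b + trials)."""
    return (a + hits) / (a + b + trials)

hits, trials = 75, 192  # pooled Ganzfeld-condition results

priors = [
    ("open-minded (uniform)",      1,      1),       # no strong commitment
    ("skeptical, centered on 25%", 250,    750),     # ~1000 pseudo-trials at chance
    ("dogmatic skeptic",           25_000, 75_000),  # overwhelms any realistic dataset
]
for label, a, b in priors:
    print(f"{label:28s} -> posterior mean hit rate {posterior_mean(a, b, hits, trials):.3f}")
```

The same data moves the open-minded prior to about 0.39 but barely budges the dogmatic one from 0.25, which is exactly the arbitrariness being described: the conclusion tracks the width of the prior as much as the data.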
<< And you're simply overly hasty to infer these experiments, even taken at face value, are evidence of ESP, since first you would need to exclude more ordinary, naturalist explanations. >>
The more ordinary, naturalist explanations have been discussed and debated in great and gory detail in the parapsychology literature. The joint communique (i.e. the universal guidelines for all Ganzfeld experiments post 1985) was one of the major results of those discussions. Again, I invite you to read the Bem-Honorton 1994 paper, and the exchange between Bem and Hyman, both of which I linked to in my essay. Those papers discuss in great detail all of the conceived, ordinary, naturalist explanations.
<< Perhaps people in these circumstances simply have the same built-in, unconscious patterns of image selection that operate independently. It would be more convincing if they tried transmitting truly implausible and unlikely imagery rather than mundane images. >>
A suggestion like this indicates that you seem to have a basic misunderstanding about either classical statistics, the Ganzfeld procedure, or both. In the Ganzfeld experiments, there are 3 decoy images, and 1 target image. The target image is selected in each trial via a random number generator or random number table, and both the receiver and experimenter are blind to the target image for both the sending and judging phases of a single trial. Thus, under the null hypothesis, the receiver (with or without the experimenter's help) has only a 1/4 (or 25%) chance of identifying the target image during the judging phase. That the receiver and sender might have "the same built-in, unconscious patterns of image selection that operate independently", does not change this 1/4 probability of a hit.
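A quick Monte Carlo illustrates why a shared, built-in image preference cannot inflate the hit rate when the target is chosen by RNG. This is a hypothetical toy model, not the actual experimental software: the receiver always picks according to a fixed bias, yet still hits at only 25%:

```python
import random

random.seed(0)

def trial(bias):
    # Judging set of four images. The target is selected uniformly at random,
    # mimicking the RNG / random-number-table step of the Ganzfeld protocol.
    images = range(4)
    target = random.choice(list(images))
    # The receiver deterministically picks the image they are most "drawn to",
    # i.e. the strongest possible shared, unconscious selection bias.
    pick = max(images, key=lambda i: bias[i])
    return pick == target

bias = [0.90, 0.05, 0.03, 0.02]  # everyone strongly prefers image 0
n_trials = 100_000
hit_rate = sum(trial(bias) for _ in range(n_trials)) / n_trials
print(hit_rate)  # stays near 0.25 no matter how strong the bias is
```

Because the target is equally likely to be any of the four images, any deterministic picking rule (however biased) hits exactly one time in four under the null.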
<< I wouldn't take my own inability to come up with an explanation as reason to buy into a paranormal explanation. >>
What do you define as a 'paranormal explanation'? Is a paranormal explanation somehow distinct from a naturalistic explanation, in your usage?
<< Also, I did a text search on this page for "25" and couldn't find the answer to where the claimed random chance percentage comes from, even though Miller had asked. From what I understand of the methodology having skimmed the page you linked to, it seems very strange. >>
I'm not sure why you're confused about this. As I already said, the 25% hit rate is because there are 4 images, 3 of which are decoys, and 1 of which is the target. The 'concise' Bem article I linked to tells you this as well:
"Thus, if the experiment uses judging sets containing four stimuli (the target and three control stimuli), the hit rate expected by chance is one out of four, or 25 percent."
http://www.dbem.ws/ganzfeld.html
If it helps, here is an even more concise summary of the Ganzfeld procedure (see the section titled "Ganzfeld Studies" on page 3):
http://www.stat.auckland.ac.nz/~iase/publications/icots8/ICOTS8_PL2_UTTS.pdf
<< One final thought: on the page you linked to that discusses what the Ganzfeld methodology is (which was not concise by any stretch), it mentioned its history, which apparently is that ESP believers crafted it so they could see the effect they wanted, after having become frustrated with prior studies that produced unfavorable results. It surprised me he had the gall to admit that. >>
I'm afraid that you've grossly misunderstood whatever you read in that link. First of all, let's not use a condescending phrase like "ESP believers", and just call them parapsychologists (which is what they are by profession). Second, Bem does not at all say or imply that they got "frustrated with prior studies that produced unfavorable results". In fact, he makes many statements to the contrary:
"There is now experimental evidence consistent with these anecdotal observations. For example, several laboratory investigators have reported that meditation facilitates psi performance (Honorton, 1977) . An analysis of 25 experiments on hypnosis and psi conducted between 1945 and 1981 in 10 different laboratories suggests that hypnotic induction may also facilitate psi performance (Schechter, 1984) . And dream-mediated psi was reported in a series of studies conducted at Maimonides Medical Center in New York and published between 1966 and 1972 (Ullman, Krippner, & Vaughan, 1989). Ganzfeld experiments are the direct successors to the dream studies."
and
"The dream studies tested for the existence of telepathy, the transfer of information from one person to another without the mediation of any known channel of sensory communication. . . Across several variations of the procedure, dreams were judged to be significantly more similar to the target pictures than to the control pictures in the judging sets."
and
"Collectively, the results of the meditation, hypnosis, and dream studies suggested the hypothesis that psi information may function like a weak signal that is normally masked by the sensory "noise" of everyday life. The diverse altered states of consciousness that appear to enhance an individual's ability to detect psi information may do so simply because they reduce interfering sensory input. It was this hypothesis that prompted the use of the ganzfeld procedure."
<< What is the Darlington-Hayes criterion, btw? >>
The use of the Darlington-Hayes method is discussed at length in the Storm et al. 2010 meta-analysis I linked to in my essay:
http://www.psy.unipd.it/~tressold/cmssimple/uploads/includes/MetaFreeResp010.pdf
Specifically, pages 477-478:
"Darlington and Hayes’s (2000) online table gives critical MeanZ = 1.46, where s = 27. In the Stouffer-max test, the mean z for this large database, at MeanZ = 2.32, is sufficiently higher than is required. With Rosenthal’s (1995, p. 189) file drawer formula, there would have to be approximately 2,414 unpublished and nonsignificant papers in existence to reduce our significant Stouffer Z to chance. Using Darlington and Hayes’s (2000) table, for 27 studies with significant positive outcomes, pooled p < .05 if the fail-safe N < 384 studies, based on the 27 significant studies in our database (N = 102). It should be noted that 357 studies (i.e., 384 minus 27) are permitted to be psi-missing studies."
And here is the reference to the Darlington-Hayes paper:
Darlington, R. B., & Hayes, A. F. (2000). Combining independent p values:
Extensions of the Stouffer and binomial methods. Psychological Methods, 5(4), 496 –515.
http://www.ncbi.nlm.nih.gov/pubmed/11194210
Note that their application of Darlington-Hayes to their 108 Ganzfeld studies is an extension of their findings of applying the Darlington-Hayes method to the 29 Ganzfeld studies from '97-'08. For those studies, they say (page 475),
"Using Darlington and Hayes’s (2000, p. 503, Table 2) tabled data, for nine studies with significant positive outcomes, we find the pooled p less than or equal to .05 if there is a total of up to 95 studies. In other words, we find a “fail-safe N” of up to 95 unpublished studies must exist in total for this category alone, and 86 of these (i.e., 95 minus 9) could all be negative (psi-missing) studies, yet our nine significant studies would still constitute a proof against the null hypothesis. The existence of such a large number of hypothesized unpublished studies (i.e., up to 86) is unlikely."
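For readers unfamiliar with these meta-analytic tools, here is a minimal sketch of the Stouffer combination and Rosenthal's file-drawer (fail-safe N) formula. The z-scores are invented for illustration; they are not the Storm et al. data:

```python
from math import sqrt

def stouffer_z(zs):
    """Stouffer's method: combined Z = sum(z_i) / sqrt(k) for k studies."""
    return sum(zs) / sqrt(len(zs))

def failsafe_n(zs, z_crit=1.645):
    """Rosenthal's file-drawer N: how many unpublished null (z = 0) studies
    would drag the combined Stouffer Z down to the one-tailed .05 cutoff."""
    return (sum(zs) / z_crit) ** 2 - len(zs)

zs = [2.0] * 10  # hypothetical: ten studies, each individually at z = 2.0
print(round(stouffer_z(zs), 2))  # 6.32: jointly far beyond z_crit = 1.645
print(round(failsafe_n(zs)))     # 138 hidden null studies would be needed
```

The point of the fail-safe N is intuitive from the formula: adding null studies raises k while leaving the sum of z-scores unchanged, so the question is how many such studies it takes before the combined Z drops below significance.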
Hello everyone,
I think Maaneli's blogpost is very typical of current views in parapsychology, from a psi-proponent point of view. I've listened to every single episode of the (non-skeptical) show Skeptiko, and it's what you can hear there from psi-proponents all the time. Nothing very new or very original in there. There's a lot of skeptic-bashing going on, because according to psi-proponents, skeptics misrepresent the literature (as Maaneli implies Massimo does) and, of course, don't know what they're talking about (as Craig expressed in his comment above). And of course people like Wiseman & co. are the enemy that you have to fight against - those who lie to the public about psi research.
I suppose Maaneli is just trying to make a name for himself in parapsychological circles by publishing this on Massimo's skeptical blog. That's my impression anyway. Why not, after all?
"I have heard Massimo publicly claim that ESP has been refuted, such as in a Skeptiko podcast interview last year in which he said"
About Skeptiko, I'd just like to point out that Tsakiris (who runs it) is also a conspiracy theorist (listen to his episode about the JFK assassination, for example) and an evolution denier (because the so-called "materialist paradigm" is dead according to many psi-proponents, so of course evolution theory can't account for life on earth, right?).
I suppose, like most psi-proponents, Maaneli is a fan of the Skeptiko show (since he mentions it in his blogpost without any hint of criticism about it). I would be interested to know if he also thinks that Tsakiris is spot on about conspiracy theories and/or that evolution theory is dead along with the so-called "materialist paradigm".
In short, I'd just like to know how far out of the mainstream his belief in the objective reality of psi (and connected subjects) goes...
With Skepticallity,
Thanks for the detailed responses, Maaneli. It’s something you rarely see. Some points:
>>The premise was not that Massimo has not reviewed the literature. I don't know whether he has or not, and I was careful not to make the claim that he hasn't. All I said in my essay was that, in *my* reading of the parapsychological literature, not only could I not find evidential support for Massimo's claims about ESP research, but I found evidence to the contrary.<<
I’m sorry, but your second paragraph reads very much like your aim is to impugn the idea that he has read the literature. It is completely superfluous otherwise (the essay could have proceeded from the first to the third paragraph), and the contrast between its sentences “It is reasonable to expect that...” and “However...” speaks volumes.
>>Unfortunately, it's not feasible to properly assess the skeptical literature with a 1,500 word essay cap. Nevertheless, you should know that, with respect to the Ganzfeld literature, the extent of the informed skeptical criticisms that have been published in peer-reviewed journals is limited essentially to Ray Hyman (and to a much lesser extent, Richard Wiseman, Susan Blackmore, and James Alcock).<<
I’m sorry, but I don’t understand this passage. You say it’s too big to review, and then you say there are only four people; it might have been better simply to mention that point, since that’s all it would have taken to avoid the appearance of a biased sampling of the literature – which, I’m sorry to say, the essay still creates, even on my second reading. I can accept it’s unintentional, and I can also accept there are only four engaged skeptics – and I’d agree it seems the phenomenon warrants greater scrutiny.
>>In Rouder's Bayesian approach, that's irrelevant.<<
I’m not sure what you mean here – the prior very much affects the posterior distribution, even in the degenerate case (or, especially, come to think of it). You even have examples in the paragraph on Rouder!
(cont.)
>>A suggestion like this indicates that you seem to have a basic misunderstanding about either classical statistics, the Ganzfeld procedure, or both. In the Ganzfeld experiments, there are 3 decoy images, and 1 target image. The target...<<
Indeed, I had missed that the image was one a random number generator selected rather than one the transmitter selected. Still, it would be interesting to see whether the measured effect weakens or strengthens depending on the image.
>>I'm not sure why you're confused about this.<<
As I said, I did a search, and didn’t find the answer, although I appreciate you mentioning it again.
>>I'm afraid that you've grossly misunderstood whatever you read in that link. First of all, let's not use a condescending phrase like "ESP believers", and just call them parapsychologists (which is what they are by profession). Second, Bem does not at all say or imply that they got "frustrated with prior studies that produced unfavorable results".<<
I was reacting to that first paragraph. I will say I should apologize (and I do so) for inferring that J. B. Rhine’s card procedure in the 1930s produced unfavorable results, which was too quick a reading of that passage. Perhaps that procedure appeared to produce favorable results but was just methodologically flawed; but he does imply they were fishing for positive results when they sought to create circumstances in which to observe the effect – if the motivation was methodological flaws, the more obvious way to structure the paragraph would have been to mention it, rather than saying it wasn’t close enough to meditation. For that reason I think “ESP believers” is a fair inference from his writing. It is hard to believe that, absent methodological mistakes, skeptical researchers found a positive result using the cards and then turned around and concluded they needed positive results from circumstances even more conducive to positive results, since the skeptical response would have been rather to test it in more stringent circumstances.
@Maaneli
You said that researchers found that ESP works on targets not yet selected. Your interpretation is that there may be precognition in addition to ESP. But I take the alternative interpretation, that researchers did a control, and found that the control has the same effect.
I would compare this to the studies that showed that acupuncture and sham acupuncture had about the same positive effect. Does this mean (as some thought) that acupuncture is *so* effective that its effect remains even without real needles? Or does it just mean that the effect comes from something else (ie the placebo effect)?
Doing the experiment without the Ganzfeld condition is a control of sorts, but not a very satisfying one. The changes in procedure could be big enough that the systematic error is no longer present. Also, I don't think this experiment can be done without the receiver knowing of the change; I think it is important that the experimenter, receiver, and data analysts all be unaware of any differences in procedure.
I must admit that it is interesting to think about what variations in procedure can be done. What if the receiver is randomly told afterward that they were correct or incorrect, regardless of whether they were actually correct or incorrect? What if you had two senders, one who is, unknown to anyone, a decoy? Or if you had two senders and receivers, and switched them off without anyone knowing?
Rolling telepathy, clairvoyance, and precognition into one "science" - ESP, parapsychology, or psi - is by itself enough to discredit the least illogical of these, which is telepathy.
We do after all, have radios and TV, which allow for the physical transmission of communicative ideation between almost unlimited distant points.
But no attempt to violate the laws of sequential change by which the universe must operate is involved with telepathy per se.
Until of course you telepathate your clairvoyant precognitions.
ESP theory would seem to involve the proposition that the future is determinatively present, in that its events are conceivably a predictive certainty, except that one needs an indeterminate environment to make it work.
Jean-Michel,
<< I suppose like most psi-proponents Maaneli is a fan of the Skeptiko show (since he mentions it in his blogpost, without any hint of a criticism about it). >>
The Skeptiko podcasts were simply convenient to reference - it has nothing to do with me being a 'fan' of Skeptiko (though I have enjoyed many of the past episodes on parapsych).
<< I would be interested to know if he also thinks that Tsakiris is spot on about conspiracy theory and/or the fact that evolution theory is dead with the death of the so-called "materialism paradigm"). >>
I actually don't know what his views on those things are, but I'm a bit skeptical of your representation of them.
<< In short, I'd just like to know how far off the mainstream his belief in the objective reality of psi (and connected subjects) goes...>>
I don't know how you expect me to answer that.
<< Just I suppose Maaneli tries to make a name for himself in parapsychological circles by publishing it on Massimo's skeptical blog. That's my impression anyway. Why not after all? >>
No, that's not my intent. In the first place, publicly associating myself with a topic like this is very risky for my career as a professional theoretical physicist. But my reason for doing so is because, as a member of the skeptical community, I genuinely am convinced that there is enough empirical evidence for ESP to merit further scientific investigation, and I think someone in the community needs to stand up for it. I also see much of the dismissiveness by the skeptical community on a topic like this to be antithetical to the practice of good skepticism and critical thinking. Finally, I'm genuinely curious to understand why Massimo, someone who I've respected and admired for several years, has the views that he has about parapsychology, given that my reading of the literature has led me to such a different conclusion.
Timothy,
<< I’m sorry, but your second paragraph reads very much like your aim is to impugn the idea he has read the literature. It is completely superfluous otherwise (the essay could have proceeded from the first to the third paragraph) and the contrast between its sentences “It is reasonable to expect that...” and “However...” speaks volumes. >>
The aim of the second paragraph was to raise the question: has Massimo really studied the literature? So I did suggest that there are reasons to have some doubts, but that's as far as I go.
<< You say it’s too big to review, and then you say there are only four people; >>
Yes, it's too big to review in 1,500 words.
<< it might have been better simply to mention that point, since that’s all it would have taken to avoid the appearance of a biased sampling of the literature – which I’m sorry to say the essay creates the appearance of, even now in my second reading. >>
Very well, I can accept that constructive criticism.
<< I can accept it’s unintentional, and I can also accept there are only four engaged skeptics – and I’d agree it seems the phenomenon warrants greater scrutiny. >>
All fair comments!
<< I’m not sure what you mean here – the prior very much affects the posterior distribution, even in the degenerate case (or, especially, come to think of it). You even have examples in the paragraph on Rouder! >>
Misunderstanding. When you said "I would have put the prior somewhere in the area of vanishingly small", I thought you were talking about the prior confidence interval width, not the prior probability distribution. The former is not relevant to Rouder's approach, while the latter is.
<< Still, it would be interesting to see whether the measured effect weakens or strengthens depending on the image. >>
In fact this has been studied! Studies have been done on the use of emotionally-laden targets vs neutral targets, with the evidence showing that the former produce significantly higher hit rates (see, for example, study 302 in the Bem-Honorton 1994 paper). In the closely related remote viewing research paradigm, it has been shown that visual targets with larger Shannon entropy gradients are correlated significantly with higher hit rates. The fact that none of these findings would be expected under the null hypothesis is another reason why the null seems implausible, IMO.
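To unpack what a "Shannon entropy gradient" of a visual target could even mean, here is a toy sketch in Python. This is purely my own illustrative construction, not the exact measure used in the remote viewing studies: tile a grayscale image into strips, compute each strip's histogram entropy, and average how much that entropy changes between neighboring strips. An image with a sharp texture boundary then scores higher than either a uniform image or uniform noise.

```python
import math
import random

def shannon_entropy(values, bins=16):
    """Shannon entropy (bits) of the gray-level histogram of a sample in [0, 1)."""
    counts = [0] * bins
    for v in values:
        counts[min(int(v * bins), bins - 1)] += 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def mean_entropy_gradient(rows, patch=8):
    """Crude 'entropy gradient': split the image into patch-wide column strips,
    compute each strip's entropy, and average the absolute differences
    between neighboring strips."""
    w = len(rows[0])
    strips = [shannon_entropy([row[k] for row in rows for k in range(j, j + patch)])
              for j in range(0, w - patch + 1, patch)]
    return sum(abs(a - b) for a, b in zip(strips, strips[1:])) / (len(strips) - 1)

random.seed(0)
flat = [[0.5] * 64 for _ in range(64)]                        # uniform gray image
noisy = [[random.random() for _ in range(64)] for _ in range(64)]  # uniform noise
half = [f[:32] + n[32:] for f, n in zip(flat, noisy)]         # sharp texture boundary

print(mean_entropy_gradient(flat))   # 0.0: the entropy map is constant
print(mean_entropy_gradient(half))   # larger: entropy jumps across the boundary
```

Note that the all-noise image also scores low here, because its entropy is high but spatially uniform; it is the *variation* in local entropy, not the entropy itself, that this toy statistic picks out.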
<< Perhaps that procedure appeared to produce favorable results but was just methodologically flawed, but he does imply they were fishing for positive results when they sought to create circumstances to observe the effect >>
In fact, it appears that the Maimonides dream telepathy studies were never shown to have any serious methodological flaws. But don't take my word for that - see the essay by Stanley Krippner in the anthology I linked to in my essay. As for them 'fishing' for positive results, I think that's a misleading interpretation. What he's saying is that they found consistent, positive, and significant effects in the dream telepathy studies, and then sought to magnify the observed effect based on the observation that meditation practices seem to be strongly correlated with successful studies. That type of approach is standard practice in the social/behavioral/medical sciences.
Miller,
<< You said that researchers found that ESP works on targets not yet selected. Your interpretation is that there may be precognition in addition to ESP. But I take the alternative interpretation, that researchers did a control, and found that the control has the same effect. >>
Except that, if I recall correctly, the trials in which there was no future feedback to the receiver about the correct target produced nonsignificant results.
<< The changes in procedure could be big enough that the systematic error is no longer present. Also, I don't think this experiment can be done without the receiver knowing the change; I think it is important that the experimenter, receiver, and data analysts all be unaware of any differences in procedure. >>
Possibly, but it's hard for me to imagine how merely not using the Ganzfeld condition (which is just a sensory deprivation state for the receiver) could produce a systematic error that could result in significantly increased hitting of targets. Also, since both the receiver and experimenter are blind to the target during the sending and judging phase of a single trial, there's no reason I can see to think that the change to a non-Ganzfeld condition could produce a bias that increases the chance of identifying the target. Finally, there's no way I can see how the data analyst could be biased by knowing which trial was a Ganzfeld condition, and which was not. The choice of statistical analysis is decided before the experiment is run, and is applied uniformly to both the Ganzfeld and non-Ganzfeld trials.
<< I must admit that it is interesting to think about what variations in procedure can be done. What if the receiver is randomly told afterward that they were correct or incorrect, regardless of whether they were actually correct or incorrect? What if you had two senders, one who is, unknown to anyone, a decoy? Or if you had two senders and receivers, and switched them off without anyone knowing? >>
I agree, these are all interesting and valuable research questions, and I have made similar suggestions to some parapsychologists. Apparently, some of these experiments have been done, but not all of them, due to lack of funding and the paucity of researchers actively doing Ganzfeld studies. IMO, what you're doing - making suggestions for further experiments to test the intricacies of how this Ganzfeld effect works (i.e., process-oriented research) - is precisely what the skeptical community as a whole should be doing. Not only that, I think it would be reasonable for the community to encourage funding agencies to give the necessary funding to parapsychologists to do these types of experiments.
Hello Maaneli,
"I actually don't know what his views on those things are, but I'm a bit skeptical of your representation of them."
Well then, you obviously haven't listened enough to the Skeptiko podcast. I advise you to do so, since you mentioned it in this blogpost.
Tsakiris talks about most of those points in this episode (a debate with the Monster Talk crew) of the podcast "Skepticality": The Tipping Point.
http://www.skepticality.com/tipping-point/
Even if at some point in this exchange they asked him about his conspiracy theory beliefs, I don't think they asked him about his views on evolution. But to have an idea about those, you should listen to this episode:
http://www.skeptiko.com/how-many-dinosaurs-fit-on-noah-ark/
He states, for example, in that episode:
"We pound on that so frequently in this show because materialism is just such a silly notion from any angle that you look at it. We’ve explored it from many angles of cutting-edge science. It just doesn’t make any sense, yet it’s still the predominant paradigm, the predominant worldview that we live in. So I hope that anyone listening to this show knows that materialism is kind of a dead-end street."
And about evolution, later:
"So the part that interests me is how are we stuck in this paradigm? And I think, as you pointed out very early on, it’s because Darwinism has become synonymous with science and science has become synonymous with materialism. We kind of stuck in this endless loop."
So, you were skeptical of what I said about him. Are you still?
My question above was: do you agree with Tsakiris that "materialism is kind of a dead-end street"? Just curious... I hope my question is clearer now (since you wrote above "I don't know how you expect me to answer that.").
With skepticality,
"Finally, I'm genuinely curious to understand why Massimo, someone who I've respected and admired for several years, has the views that he has about parapsychology, given that my reading of the literature has led me to such a different conclusion."
Why don't you ask him directly, since so far you've avoided all commentary about the philosophical aspects of the matter?
Alan Dawrst,
<< It's not like we're talking about overturning physicalism here; if PSI existed, it would just involve remote physical interactions among neurons in different brains. >>
I basically agree with you. It seems though that there is an implicit assumption among some in the skeptical community that physicalism would in fact be threatened by these results.
<< Maaneli wasn't aiming to convince us that PSI is real, only that these studies have produced interesting results on limited budgets and that further exploration isn't necessarily crazy. >>
Essentially correct.
myatheistlife said:
"From a pragmatic atheist point of view it is easy for anyone to prove they have psychic abilities. JREF has offered a prize for some evidence like that."
The challenge has no scientific value, nor is it a real test of psychic abilities. It is merely an atheist publicity stunt and has no place in this discussion. For further reading:
http://weilerpsiblog.wordpress.com/randis-million-dollar-challenge/
http://weilerpsiblog.wordpress.com/2011/03/01/can-you-win-randis-million-dollar-challenge/
http://dailygrail.com/features/the-myth-of-james-randis-million-dollar-challenge
http://www.skepticalinvestigations.org/New/Examskeptics/Sean_Randichallenge.html
There seems to be a drop off in comment posting, so let me just say the last comment as of my submitting is Maaneli’s reply to me about Bem’s ganzfeld writeup. Sorry if I’ve missed anything.
Craig,
Sorry I had missed your post earlier, but perhaps it was just as well. You would do better to peddle your frustration elsewhere. I was quite open about my lack of statistical training and did not purport to discredit the studies themselves, except to say that over the run of the many studies humans are bound to do over their history, a persistent mirage or two lasting decades might be likely; and the point about imagery, I've since conceded, stemmed from a misunderstanding about the study design.
Whatever the merits of that former point, which remains open (and I’d imagine that it’s something statisticians could get at least a little traction on), the solution is very much not for me to engage in “a reasonable amount of research,” but rather for paranormalists to convince *scientists* that parapsychology has merit. Quantum mechanics is a zany theory. The solution was not for its researchers to take it to lay people to examine its claims, but for QM researchers to do the hard work of convincing other scientists that their results were legit. With diplomats like you on their side, parapsychologists will surely succeed.
Maaneli,
First thing, I had a bit of time and so I bit the bullet and looked at Storm et al.’s discussion of Darlington and Hayes (DH). It seems from their discussion (correct me if I’m wrong) that DH provide a function that maps certain statistics from a set of studies into a number indicating how many studies would need to be in the file drawer to render a significant result insignificant; but unless I’ve missed something, Storm et al. give no indication it’s a correspondence that also returns a second number signifying how many studies could plausibly be in the file drawer, as that one sentence in your post suggests. Even if it does return a second number, there is the issue that DH apparently formulated it for ordinary psychological claims, so the criteria for assessing the plausible size of the file drawer might not be the same for fields still partly outside the academy.
I think the only remaining issue we really have is the point about the first paragraph of Daryl Bem’s ganzfeld page. I’ve reread it yet again just to check myself, and again, its phrasing is just odd if the issue with the card studies was purely methodological. Perhaps it was, but Bem didn’t write it that way. I’m not sure why you’re discussing the Maimonides dream research as well, unless it used the J. B. Rhine card method, although that’s even less apparent from Bem’s writeup.
For an outsider, what stands out above all else is this: in perhaps more than a hundred years of psi research, not even the existence of something to explain has made it into mainstream science. This, in a century that has seen the weirdest ideas accepted, especially in physics.
If there really is something going on and if, as some say, the evidence is overwhelming, getting this most basic recognition should be rather easy, should it not?
A very simple task: prove, to the satisfaction of the scientific community, that something is going on that needs explaining. No need to propose any explanation or model or anything. Just establish the plain fact that some unexplained phenomenon exists.
A century of research has failed to accomplish even this simplest of tasks. How can one be anything but very skeptical?
I despair of catching up on this thread, but a summary of my position may (?) give a somewhat different perspective.
"In sum, I don’t see how to escape the conclusion that a classical statistical analysis of the Ganzfeld data gives strong evidence for ESP, judging by the standards of evidence commonly accepted by the social and behavioral sciences."
I've never found this to be the most terribly convincing defense insofar as the claims of parapsychology are hardly about behavior at all. ESP is at root a claim about biology and physics. That is, ESP is not a statement about the psychology of perception, but about what information is actually made available to these lumps of goo we call human brains.
It is, in fact, somewhat suspicious that parapsychology is not pursued primarily as a medical science. It is a sign of bad science (or in this case, I think pseudoscience) that one should spend decades trying to prove that an effect holds, or trying to pinpoint the cases where it holds, rather than developing e.g. mathematical laws, testable mechanisms, or consistent procedures for turning the effect on and off at will.
Since there is not a well-established theoretical basis for ESP, since the effect size seems to vary so much, and since its development has been so agonizingly slow, produced so little, involved so much infamous con-artistry, and lost so much credibility even in arenas where it was once widely believed, I do not consider parapsychology to have, in any sense, proven itself a science.
Here's a relevant example of one of the big problems I see:
"Studies have been done on the use of emotionally-laden targets vs neutral targets, with the evidence showing that the former produce significantly higher hit rates (see, for example, study 302 in the Bem-Honorton 1994 paper). In the closely related remote viewing research paradigm, it has been shown that visual targets with larger Shannon entropy gradients are correlated significantly with higher hit rates. The fact that none of these findings would be expected under the null hypothesis is another reason why the null seems implausible, IMO."
In contrast, I find both findings to be quite irrelevant to the question unless there was a previously established account of ESP that explicitly explains why they should come about (and therefore did, or could have, predicted the results and given specific reasons for them). From a Bayesian perspective, in order to produce a prior probability for a hypothesis, you must specify what the hypothesis actually is. Otherwise, any test of it is meaningless.
If your hypothesis specifies "I expect to see a measurable ESP effect", that has some prior, P, which must necessarily be extremely low for an effect that has no recognized mechanism, and which apparently is not taken into account by most people and most academics in relevant fields. If it says "I expect to see an ESP effect that measurably varies based on differences in emotional state that are observable in the lab", that has a substantially lower prior, P'. If it says "I expect to see an ESP effect that is stronger for more emotional subjects", that has a lower prior, P''.
If you want to find out more about psi from an experiment, you need to substantially increase the power of your experiment to detect any additional features. Each extra bit of information you add into a hypothesis actually drops its probability down (and in fact it is quite easy to nickel-and-dime your prior down by a factor of 100 or more, even while claiming a "larger" effect size).
This is the real problem with any "fishing expedition". You are claiming a standard of evidence that would establish P, but your claim is actually narrower, and should be held to the standard that applies to P''(''''''...).
And this is also why you need a theoretical basis for the phenomenon that has some quantifiable rules or mechanisms. Experimental design is hopeless unless you either pin down your hypothesis to something that's not too vague, or are lucky enough to have an incredibly huge effect size. Otherwise it's impossible not to accidentally equate hypotheses that are grossly different in both their priors and in actual content.
(a) the Maimonides and subsequent dream telepathy/clairvoyance/precognition experiments, (b) the SRI, SAIC, and PEAR remote viewing experiments, and (c) the Ganzfeld experiments? This has got to be a joke, right?
How unlikely is it that some sort of cheating has occurred in these experiments? This seems to me to be the crux of the matter. Is it less than (say) 1/10? Could it possibly be less than 1/100? Suppose one's prior for the existence of ESP is 1/1000 (like Hyman's) and the prior for cheating is 1/100 (which seems generous). Then, given that the data support the existence of ESP, it is 10:1 that there is something wrong with the data. I've been aware of this data for ESP for some time, and (as a Bayesian) I've always concluded from it that cheating has probably occurred. On the other hand, if I could be convinced that the likelihood of cheating is in fact very small (much less than 1/1000), then I would have to agree with Maaneli.
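The arithmetic in the comment above, made explicit. The one extra assumption (stated, not argued) is that a run of positive results is about equally likely under "cheating occurred" as under "ESP is real", so the Bayes factor between those two hypotheses is roughly 1 and the posterior odds simply reproduce the prior odds:

```python
# Prior odds of cheating vs. ESP, using the commenter's own stipulated priors.
# Assumption: positive data fit both hypotheses about equally well, so the
# likelihood ratio (Bayes factor) between them is ~1.
prior_esp = 1 / 1000        # Hyman-style prior for ESP
prior_cheating = 1 / 100    # "generous" prior for some cheating somewhere
likelihood_ratio = 1.0      # cheating explains the positive data as well as ESP

posterior_odds = (prior_cheating / prior_esp) * likelihood_ratio
print(posterior_odds)       # ~10 -> about 10:1 that something is wrong with the data
```

The whole dispute then reduces to the two inputs: how low the ESP prior really is, and how well audited trials (hidden cameras, equipment logs) could push the cheating prior down.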
In the Bayesian analysis it seems to me that another hypothesis should have been considered: that the experimenters committed fraud or were significantly biased in their research. Prima facie, it's not so unlikely.
Still, this should be resolved. At the very least it should be treated as a significant flaw in the process of scientific research. We need to know what went wrong.
Derakhshani comments that:
“It would be reasonable to expect, especially from someone as learned as Massimo, that these bold claims about research on telepathy and clairvoyance, and the status of parapsychology as a discipline, were derived from a thorough assessment of the parapsychology literature (a literature which includes informed skeptical criticisms of parapsychology experiments).” (second paragraph of original post)
There are indeed reasons to doubt Pigliucci’s familiarity with that literature. For instance, the very first sentence of his 2002 article “Hypothesis Testing and the Nature of Skeptical Investigations” claims:
“In 1987 a study by Schmidt, Jahn, and Radin proved that it is possible for the human mind to act on matter at a distance.” (Pigliucci 2002, p. 27)
Pigliucci did not give a full citation for this “study” (though he did provide a reading list at the end of his article). I am not aware of any such study published by those three authors, and Pigliucci was unable to provide a citation when I inquired. One wonders whether he would accept similar scholarship from his professional colleagues or students.
Debunkers, especially some of those who have commented in this forum, should be encouraged to read original reports and not rely on the “experts” within the debunking community.
Reference
Pigliucci, Massimo. (2002). Hypothesis Testing and the Nature of Skeptical Investigations. Skeptical Inquirer, Vol. 26, No. 6 (November/December), pp. 27-30, 48.
George P. Hansen
http://www.tricksterbook.com
http://paranormaltrickster.blogspot.com
http://twitter/ParaTrickster
Funny thing about Bayesian analysis. Say I show you a video of a creature that looks like bigfoot. That increases the probability that bigfoot exists, but it also increases the probability that the video is a hoax. So, if the null hypothesis is that it's a hoax, and the alternative is that bigfoot exists, then the Bayes factor may be 1.
The devil is in the lab, not on paper. Like when Susan Blackmore visited Sargent's lab and saw him "push the subject towards picture B" despite "not officially taking part."
http://www.susanblackmore.co.uk/Articles/JSPR%201987.htm
That's why skeptics have to independently replicate the experiments, not just reanalyze previous experiments.
Para Para,
That article was published in Skeptical Inquirer, which you may know is not a scholarly publication. The paragraph in question was from a lecture on Bayesian analysis available online, put together by James Berger of Duke:
http://www.stat.duke.edu/~berger/talks/cuba-preve.pdf
He is the one citing the Schmidt et al. 1987 paper. That research is summarized in:
http://www.amazon.com/exec/obidos/ISBN=0062515020/roberttoddcarrolA/
As for doubts about my familiarity with the literature in parapsychology, it depends on what you mean by familiarity. I've read as much about parapsychology as I've read about ufology or astrology (yes, including scholarly papers). And it has been enough to convince me that it is indeed a pseudoscience. I will respond to Maaneli's post in full after the winter break (sometime in mid-January).
I'll bet Massimo will "drop the ball" again.
So far all I've taken from reading these comments is a really bad knowledge of the psi literature by "debunkers." Also, their feeble if not circular idea that, even though the empirical evidence for psi is very good, they won't believe it until they have an explanation of how it works. One wonders if these are real scientists or real philosophers talking.
Empirical would be: based on, concerned with, or verifiable by observation or experience. Nothing referenced so far was, in that context, verifiable as evidence that the phenomena were by a "very good" chance what they were purported to be.
Add some logical philosophy to that and you have virtually no chance at all of establishing that the future is more than predictively detectable.
My two cents about all of this: Yes, on occasion people get significant results running a psi study. This is very similar to many other areas where you get significant results that turn out not to be true. In order for psi to be taken seriously, it has to meet three main criteria in my view. First, the results must be replicated by independent researchers. As in the latest case of Bem's experiments, they were not (and no psychology journal wants to publish them). This is the case with all the psi research I have seen. Second, there must be some explanation of the mechanism, or some theory that explains the forces behind it, and more importantly, explains how it is compatible with known theories that work. If psi violates some basic physics rules, how come physics still works the way it does? Lastly, if psi works, I would expect it to work well, with sizable effect sizes. In reality, all the published work shows very small deviations from randomness at best (such as in Bem's experiments). This is very underwhelming evidence that is most likely explained by random deviations that occasionally reach a level of statistical significance.
Mark,
> I'll bet Massimo will "drop the ball" again. <
Massimo is spending an inordinate amount of time curating, writing and answering comments for this blog. And this is all for free, so please don't complain about dropping balls, at least not until I introduce a premium subscribers version of the blog for only $19.99/month. (Just kidding.)
> One wonders if these are real scientists or real philosophers talking. <
One also wonders why defenders of psi feel compelled to engage in ad hominem attacks, such as doubting whether real scientists or real philosophers are contributing to this forum. (The answer to both questions, by the way, is yes.)
Everything is energy, which transforms into matter. We are co-creators and destroyers of worlds. The universe is a mathematical system, and if the numbers don't add up the whole system is out of balance. Sorry, but ESP is just another man-made belief system, and beliefs are just beliefs. Keep on wasting people's time and money on B.S. like ESP and we will head for self-destruction, for the simple fact that it serves no purpose, none whatsoever. Go away with your nonsense. Can you come up with something on how to be more conservative? Freaking religion and pseudoscience are dragging us further and further into the abyss.
@gil...
1.) There have also been successful replications of Bem's studies.
2.) It's perfectly appropriate for science to operate in a data-driven manner.
3.) Why is it that psi effects should necessarily need to be larger than other unconscious priming effects found in psychology?
This was an interesting article & I plan to read the literature cited further.
One criticism: you (via Tressoldi) quoted a Bayes factor of 18,886,051 for ESP versus "pure chance." As you know, a Bayes factor describes the movement of the credibility of two hypotheses relative to each other, not an absolute movement of a credibility. For this reason, although the "null hypothesis" (no effect but also no funny business) gets swamped by the ESP hypothesis, I can define another hypothesis H_fb which contains all possible "funny business," including standard biases, self-deception... The Bayes factor between ESP and THIS hypothesis I do not know, but I bet it's less than 19 million.
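To put numbers on this point (every likelihood below is invented purely for illustration): the same data can crush the no-effect null while barely distinguishing ESP from an H_fb that predicts similar hit rates.

```python
# Toy numbers (invented) showing that a Bayes factor only compares the two
# hypotheses you feed it: data can swamp the chance-only null while being
# nearly useless against a "funny business" hypothesis.
p_data_given_null = 1e-9    # chance alone: observed hit rates extremely unlikely
p_data_given_esp = 1.9e-2   # ESP hypothesis: data quite likely
p_data_given_fb = 1.5e-2    # funny business (bias, leakage, ...): also quite likely

bf_esp_vs_null = p_data_given_esp / p_data_given_null
bf_esp_vs_fb = p_data_given_esp / p_data_given_fb

print(f"{bf_esp_vs_null:.3g}")  # ~1.9e7: enormous, like Tressoldi's 18,886,051
print(f"{bf_esp_vs_fb:.3g}")    # ~1.27: says almost nothing against funny business
```

So a headline Bayes factor in the millions against "pure chance" is perfectly compatible with near-total agnosticism between ESP and H_fb.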
Max said the same thing I was trying to say, but more succinctly. One can also find this in one of the early chapters of Jaynes under the title "resurrection of dead hypotheses."
Of course one should not use this as a Fully General counterargument to every favourable Bayes factor for an implausible claim. I will be looking at this stuff in more detail when time allows.
mf, who replicated Bem's studies, and was it published?
Science in general is theory driven. While it's true that sometimes interesting data emerge without a theory behind them, the ultimate goal is to find a more comprehensive theory. Moreover, when data contradict known theories, that is something that requires explanation. Psi studies make no serious attempt to reconcile their findings with existing knowledge. In fact, they reject many of the accepted scientific methods.
Priming effects are not small at all. They are very large in many cases and, what's more important, consistent and replicable. The effect sizes usually found in psi studies are very small, within what we would expect to find if many experiments are run. One of the problems with psi studies (one that is also common in other psychology research) is that they conduct many statistical tests without adjusting the alpha levels. This is partially what the criticism against Bem was.
Timothy,
This is the relevant part of the Storm et al. quote: "Using Darlington and Hayes’s (2000) table, for 27 studies with significant positive outcomes, pooled p < .05 if the fail-safe N < 384 studies, based on the 27 significant studies in our database (N = 102)." I.e., the ratio of unpublished to published studies need only stay below 384/27 ≈ 14:1 for those 27 combined studies to remain statistically significant. For DH, such a ratio of unpublished to published studies is considered highly implausible for research paradigms in mainstream psychology. And in a research paradigm like the Ganzfeld, it is even more implausible when you recall that a single Ganzfeld trial takes over an hour to complete, and when you recall Utts et al.'s comments that "the ganzfeld procedure requires a special laboratory, parapsychology is a small field in which most researchers know each other, there are a limited number of journals in which such results would be published, and the journals in parapsychology have a policy of publishing studies even if they produce non-significant results.”
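To make the file-drawer arithmetic explicit in code (note: Rosenthal's classic fail-safe N below is NOT the Darlington and Hayes table that Storm et al. actually use; it is included only as a simpler, well-known formula illustrating the same idea):

```python
# The ratio arithmetic from the Storm et al. quote.
fail_safe_n = 384    # null studies needed to drag the pooled p above .05
significant = 27     # published significant Ganzfeld studies
ratio = fail_safe_n / significant
print(round(ratio, 1))   # ~14.2 unpublished null studies per published one

def rosenthal_fail_safe(z_scores, z_crit=1.645):
    """Rosenthal (1979): how many mean-zero (null) studies would need to sit
    in the file drawer to push the Stouffer combined z of these studies
    below z_crit (one-tailed .05)."""
    k = len(z_scores)
    s = sum(z_scores)
    return (s / z_crit) ** 2 - k

# E.g., 27 studies each with z = 2 could tolerate roughly a thousand hidden
# null studies before losing combined significance:
print(round(rosenthal_fail_safe([2.0] * 27)))
```

The substantive question, of course, is not the arithmetic but whether a 14:1 hidden-to-published ratio is plausible for a paradigm with hour-long trials and journals that publish null results.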
I discussed the Maimonides stuff because you said "ESP believers crafted it so they could see the effect they wanted, after having become frustrated with prior studies that produced unfavorable results", and I pointed out that this is not the case by referring to the dream telepathy studies, which were highly successful and on which the Ganzfeld design was based. Also, I still don't know why you think the phrasing in Bem's first paragraph is odd. He's just saying that they were dissatisfied with Rhine's card-guessing approach because it "failed to capture the circumstances that characterize reported instances of psi in everyday life."
BurtSilverthorne,
I think your comment about priors on fraud vs ESP is quite reasonable. Here are some reasons to think that fraud is somewhat less likely to account for the Ganzfeld results than you might initially suspect: (1) independently significant results have come from several independent laboratories and experimenters across the world and across decades (including from avowedly skeptical psychologists like Delgado-Romero and Howard); (2) the number of people who have done Ganzfeld research is very small (fewer than 50), so that if fraud were occurring in its 30-40 year history, it seems likely that it would have been exposed at some point (as it was in the forced-choice precognition paradigm); (3) parapsychology as a field has had a long history of struggling against the perception of experimenter fraud, so it seems unlikely that those who do Ganzfeld studies would engage in massive collusion to fabricate positive data, knowing how that could destroy the reputation of the field if exposed.
But the best way to test the fraud hypothesis against the ESP hypothesis, IMO, is to do what I suggested earlier, namely, a large-scale Ganzfeld experiment with equipment audited for every trial, and hidden cameras used to monitor the experimenters and receivers during each trial. Unfortunately, as I said, that kind of study requires more funding than this field seems to have at the moment.
Max,
It should be noted though that Blackmore's accusations about Sargent's lab were never substantiated, and Sargent did write a rebuttal to Blackmore. Re skeptics doing Ganzfeld experiments, I agree that that needs to happen more often. It's happened once with Delgado-Romero and Howard, who obtained highly significant effects in 8 Ganzfeld experiments they ran: http://www.uniamsterdam.nl/D.J.Bierman/publications/2007/HTHP_A_241428.pdf
Bem also says he was a skeptic until he ran his own (unpublished) Ganzfeld experiments and got significant results which convinced him there was something real going on. Blackmore once did an unpublished Ganzfeld study with 36 trials back in the 80's - but apparently her Ganzfeld trials were (by her own admission) fraught with serious methodological flaws which would have kept it from being published in any peer-reviewed parapsychology journal: http://archived.parapsych.org/psiexplorer/blackmore_critique.htm
And outside of Blackmore, neither Hyman, nor Wiseman, nor French, nor Alcock, nor Marks, nor Kammann, nor any other academic psychologist who's also a known CSI fellow has done any Ganzfeld experiments. The Ganzfeld researcher Adrian Parker even said in print that he suggested a collaboration with Alcock on a Ganzfeld experiment, and Alcock declined.
Sean,
Lots of comments to respond to there, so I'll try to be succinct. I think the lack of a theory that would predict the correlation between emotion-laden targets and above-chance hitting is irrelevant to the fact that such a correlation exists, has been independently replicated by other researchers, and is inconsistent with what would be expected under the null hypothesis. For both classical and Bayesian hypothesis testing, the ESP hypothesis doesn't need to be anything more specific than something like 'some means of information acquisition about the environment which occurs through a mechanism other than the known sensory faculties'.
Also, the fact that there is no recognized mechanism for a hypothesis like ESP does not logically entail that the prior for ESP must necessarily be very low; moreover, part of the problem with specifying a small prior for the ESP hypothesis is that the choice of how small to make the prior is fundamentally arbitrary (i.e. 1/1,000 or 1/500 or 1/10,000 are all equally valid from a Bayesian view, but will yield very different posterior odds when conditioned on the Ganzfeld data). In that case, one might be better off using a flat prior distribution for the ESP hypothesis.
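To make the arbitrariness point concrete, here is a toy calculation. The trial counts, hit rates, and priors below are all made up for illustration and are not taken from any study; the point is only that the same data (and hence the same Bayes factor) yield very different posterior probabilities depending on which "small" prior one happens to pick.

```python
from math import comb

def binom_pmf(k, n, p):
    """Exact binomial probability of k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Toy numbers (not from any particular study): 32 hits in 100 trials,
# with a simple point alternative p = 0.32 (ESP) vs chance p = 0.25.
n, hits = 100, 32
bf = binom_pmf(hits, n, 0.32) / binom_pmf(hits, n, 0.25)  # Bayes factor

posteriors = {}
for prior in (1/500, 1/1000, 1/10000):
    # posterior odds = prior odds x Bayes factor
    post_odds = (prior / (1 - prior)) * bf
    posteriors[prior] = post_odds / (1 + post_odds)
    print(f"prior={prior:.6f} -> posterior={posteriors[prior]:.6f}")
```

Since the Bayes factor here is fixed by the data, the spread in the printed posteriors comes entirely from the choice of prior, which is the arbitrariness being described above.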
Re increasing statistical power, that's a big reason why meta-analyses are conducted. Also, re your comment about parapsychology being pursued as a medical science, you should know that a few fMRI studies have been done looking for neurological correlates of ESP, and most have been successful. Re having predictive mathematical theories of ESP, there are some, such as Decision Augmentation Theory (which I gave a reference for in this thread).
Lastly, re your comment that it is "a sign of bad science (or in this case, I think pseudoscience) that one should spend decades trying to prove that an effect holds", I don't see why. Parapsychology has to spend more time than other research paradigms in mainstream psychology to establish an effect because there are so many fewer researchers in this area, and because funding is much more scarce and unstable. In fact, the psychologist Sybo Schouten calculated in 1993 that the total human and financial resources devoted to parapsychology since 1882 are at best equivalent to the expenditures devoted to fewer than two months of research in conventional psychology in the United States:
http://anson.ucdavis.edu/~utts/response.html
Ianpollock, I'd be interested in your impressions once you've had the chance to look into the literature.
@gil
ReplyDeleteOff-hand I don't know the name of the authors of the successful replication, but Bem recently gave a few talks where he mentions this work (plus the Ritchie, French, & Wiseman failure to replicate). Unfortunately it's hard to publish replications, successful or not, so neither of these studies have been peer-reviewed. As such, it's premature to conclude anything about the replicability of Bem's effects.
With regards to unconscious priming, these effects are certainly not as clear cut as you believe them to be. It took quite some time for these findings to be embraced by mainstream psychologists and there are still plenty of failures to replicate (and yes, the effects are just as small as the precognition effects reported by Bem). In fact, one of the authors who critiqued Bem's work (Richard Morey) has recently argued that subliminal semantic priming doesn't even exist.
It's one thing to suspect reasonably that there are means of communication developed by biological systems, and most notably humans, that we have yet to discover, and another to suspect, with lesser to no reason, that these systems are communicating with the future.
And I suspect that maaneli will continue to avoid making any reference to a credible or even creditable argument to the contrary.
mf, I can't comment on replications that I don't know about, but based on this particular study of Bem's, which I read thoroughly, there are many statistical and methodological problems in the study. I recommend reading the review by James Alcock of this study here: http://www.csicop.org/specialarticles/show/back_from_the_future
In regard to priming, there is a wide range of studies and, granted, not all of them are big, but some are quite large (a Cohen's d greater than 1, if you know what that means). Moreover, the priming effects are well documented and there are hundreds, if not thousands, of studies that replicate them. Can you say that about any of the psi studies?
The bottom line is that the psi effects are not replicable and, what is more important, don't stand the scrutiny of scientific inquiry. Science is self-correcting over a long period of time, and there is no reason why scientists would not endorse studies on psi if it were real. As mentioned earlier, scientists are willing to accept many bizarre findings in other areas, so why shouldn't they accept psi if it were true?
> One also wonders why defenders of psi feel compelled to engage in ad hominem attacks <
That's a biased comment. In fact, the way your comment is worded, it is itself an ad hominem attack. One wonders why you feel compelled to generalize the "defenders of psi" and dismiss them based not on their arguments but on a negative aspect of their personality. I believe that's the definition of ad hominem.
It seems to me that the statistics are fine in the ganzfeld studies, but without a testable hypothesis on how psi effects could be responsible, all one can really say is there is something going on here. And I'm not sure you can dismiss an unknown bias causing these effects. I think a larger study would really not help differentiate between these two possibilities, as even if you get the same effect, you're still left with the two possibilities. I'd think that changing the research designs in small ways to see what does and doesn't change the effect would be more reasonable. I don't know, has the Ganzfeld effect been studied without a judge? Has it been studied with the judge and subject randomly assigned no actual sender? Could the 4 images be randomly interchanged with say the initial 1000 images for each subject? Seems to me that if the Ganzfeld effect holds only for this one specific research protocol, with any tweak negating it, that some bias is at work rather than psi.
<< I'd think that changing the research designs in small ways to see what does and doesn't change the effect would be more reasonable. >>
This has been done pretty extensively. Most notably, the use of artistically talented populations as receivers (e.g. musicians, visual artists, drama students, etc.) has consistently produced significantly higher hit rates than the use of normal populations as receivers.
<< I don't know, has the Ganzfeld effect been studied without a judge? >>
Sometimes the receiver is the judge, and sometimes the judge is a different person from the receiver. But you always need a judge to obtain quantifiable data. Otherwise, it won't be possible to determine what's a hit and what's a miss in a given trial.
<< Has it been studied with the judge and subject randomly assigned no actual sender? >>
Yes.
<< Could the 4 images be randomly interchanged with say the initial 1000 images for each subject? >>
Not sure I understand what you're suggesting.
<< Seems to me that if the Ganzfeld effect holds only for this one specific research protocol, with any tweak negating it, that some bias is at work rather than psi. >>
Possibly, but the effect has been shown to be relatively robust over variations of protocol that don't decrease the methodological quality. Also, the Ganzfeld results have been conceptually replicated by the methodologically similar remote viewing research paradigm. Check out the Bem-Honorton paper for more details on the former, and Jessica Utts's AIR report for the latter.
If I thought he'd answer, I'd ask maaneli if the mere viewing or communicating with the future didn't by that very effort change it, but I suspect that his answer would be that determinism would see that as irrelevant, and the indeterminates in his field don't care.
So much then for a career in theoretical physics.
Malkiyahu,
> One wonders why you feel compelled to generalize the "defenders of psi" and dismiss them based not on their arguments but on a negative aspect of their personality. I believe that's the definition of ad hominem. <
I'm just applying induction to my experience. And I believe you have missed the difference between an argument and a quip.
Hello everyone,
Just to add to what gil said a few comments above:
"but based on this particular study of Bem that I read thoroughly, there are many many statistical and methodolical problems in the study. I recommend reading the review by James Alcock of this study here:"
French statistician (and a good friend of mine) Nicolas Gauvrit also wrote a critique of Daryl Bem's use of statistics. It was published in Skeptic magazine a few months back:
Precognition or pathological science? An analysis of Daryl Bem's Controversial “Feeling the Future” Paper, by Nicolas Gauvrit
Sorry, I don't have a link for that. Don't think it's on the web already (if it is, I don't know about it).
For those who can understand French, Nicolas Gauvrit also did an episode on the topic on my (French-speaking) podcast ("Scepticisme scientifique, le balado de la science et de la raison"):
Épisode #91: “Feeling the Future”
http://pangolia.com/blog/?p=598
With skepticality,
http://dbem.ws/FeelingFuture.pdf
http://www.skeptic.com/eskeptic/11-04-13/
Maaneli, thank you. I do still find that paragraph's phrasing odd, since it creates the strong implication - by contrasting their discontent with the card experiments with their search for conditions conducive to psi - that the card experiments were unsuccessful and were what prompted the ganzfeld methodology. As it turns out, neither is the case; I'm finding online that the discontent was methodological after all, and, as you point out, the essay goes on to say it was the dream research that prompted the ganzfeld studies.
Also, my apologies for missing that passage in Storm. My reading comprehension on this whole matter has been abysmal and I thank you for suffering through it. All I can say is that I've been really busy and sleep-deprived lately.
Timothy,
No problem, you're welcome. I appreciate your willingness to look into this stuff and change your mind when appropriate. IMO, this is the approach of an honest skeptic. If you'd like to continue discussing this work, I'd be happy to do so via email.
Robert Todd Carroll,
I just noticed your comment. If you're still reading, it is regrettable that you chose to troll this particular thread, as I now feel obliged to share with everyone here an exchange between you and one of the readers (Jason Ewing) of your dictionary entry on the Ganzfeld. The exchange shows your knowledge and understanding of the Ganzfeld literature to be questionable at best, and therefore casts serious doubt on any judgments you might make about the Maimonides and SRI/SAIC/PEAR studies:
http://www.skepdic.com/comments/ganzfeldcom.html
All,
I just came across another study (a Masters thesis, actually) that employs both a classical and Bayesian analysis on 6 Ganzfeld experiments, along with non-Ganzfeld control conditions for each experiment, and I thought I'd share it:
http://etd.nd.edu/ETD-db/theses/available/etd-11172004-170234/unrestricted/LauM112004.pdf
Also, out of curiosity, I added the results of Lau's experiments for Ganzfeld and non-Ganzfeld conditions to the results of the other two experiments I combined earlier. I now find (1) for the Ganzfeld condition in 8 separate experiments, N = 312, hit rate = 36%, exact binomial p = 2*10^-05, and 95% CI for the hit rate from 30% - 40%; (2) for the non-Ganzfeld condition in 8 separate experiments, N = 1129, hit rate = 25%, exact binomial p = 0.5, and 95% CI from 22%-28%. The two hit rates are also significantly different from each other by a one-tailed t-test.
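For anyone who wants to check arithmetic of this kind, here is a small sketch of an exact binomial test in the spirit of the figures above. Note one assumption of mine: the comment reports a 36% hit rate over N = 312 but not the raw hit count, so I have taken 112 hits (0.36 × 312 ≈ 112); the confidence interval shown is the simple normal approximation, not necessarily the method used above.

```python
from math import comb

def binom_tail(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Assumed figures: 112 hits (my reading of a 36% rate over N = 312 trials),
# with a chance hit rate of 25% in a four-choice design.
n, hits, chance = 312, 112, 0.25
p_value = binom_tail(hits, n, chance)

# Normal-approximation 95% CI for the observed hit rate.
phat = hits / n
half = 1.96 * (phat * (1 - phat) / n) ** 0.5
print(f"hit rate = {phat:.1%}, one-tailed p = {p_value:.2e}, "
      f"approx 95% CI = [{phat - half:.1%}, {phat + half:.1%}]")
```

Whatever the exact hit count, the qualitative conclusion is insensitive to off-by-one changes here: a rate near 36% over 312 trials sits many standard deviations above the 25% chance baseline.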
From: Feeling the Future, Daryl J. Bem
"Unlike the White Queen, I do not advocate believing impossible things. But perhaps this article will prompt the other 66% of academic psychologists to raise their posterior probabilities of believing at least one anomalous thing before breakfast."
The impossible becomes anomalous for the sake of argument. How scientific is that!!
I should have added that the above seemed to mean that Bem accepts that actually accessing the information from the future is clearly possible - although he seems never to have said so directly.
Maaneli is equally silent as to the reasonableness of substituting "anomalous" for "impossible".
@ Robert Carroll:
Why not bring some substantiated criticism of what Maaneli wrote? If the studies are so laughable, it should be easy for you to point out the flaws, shouldn't it? Otherwise, the only joke is your post.
@Eamon:
"(1) we can identify no plausible causal mechanism which could account for the efficacy of parapsychology."
quantum entanglement.
Of course, that we cannot identify a plausible mechanism for paraphenomena such as ESP or precognition does not exclude a priori the possibility that such effects are real; I never asserted otherwise. What I asserted was: that there is no plausible mechanism which could account for paraphenomena significantly decreases the probability that such effects are real.
Furthermore, without even a tentative conjecture as to the nature of a plausible mechanism for paraphenomena, the parapsychology researcher is relegated to anomaly hunting. Now, anomalous data offer opportunities for further scientific inquiry, but the non-skeptical psi researcher invariably infers unjustifiably from the anomalous data the conclusion that paraphenomena are real effects.
That aside, psi research en bloc consists primarily of studies which (1) are not rigorously designed, (2) have small effect sizes, and / or (3) are not replicable. If psi researchers want to convince the scientific community that vast portions of physics, chemistry, biology, and anatomy are fundamentally wrong, they will have to provide a truly immense resume of airtight, rigorously designed, replicable research which reveals statistically significant and unambiguously large effect sizes.
None of this has yet occurred.
Eamon, it sounds like you're making a lot of assumptions without any clear justification (at least to me):
<< that there is no plausible mechanism which could account for paraphenomena significantly decreases the probability that such effects are real. >>
How do you know there is no plausible mechanism? And how do you determine what mechanisms are plausible and what are not?
<< Furthermore, without even a tentative conjecture as to the nature of a plausible mechanism for paraphenomena, the parapsychology researcher is relegated to anomaly hunting. >>
Again, how do you know that parapsychologists have no tentative conjectures? My guess is you're merely assuming this is so. Here are a couple examples to the contrary:
http://www.uniamsterdam.nl/D.J.Bierman/publications/2010/eBiermanJPfall2010.pdf
http://www.lfr.org/lfr/csl/library/DATjp.pdf
<< That aside, psi research en bloc consists primarily of studies which (1) are not rigorously designed, (2) have small effect sizes, and / or (3) are not replicable. >>
(1) What's your evidence that they're not rigorously designed? As a matter of fact, parapsychology has a higher fraction of double-blind studies than any other research area it's been compared to (including mainstream psychology and medicine). And the design of the Ganzfeld experiment was specifically approved of by Ray Hyman (CSICOP's go-to guy for academic critiques of parapsychology over the past 40 years) in the Joint Communique paper with Charles Honorton; (2) The effect sizes actually vary from small to medium (as defined by measures like Pearson's correlation coefficient and Cohen's d); (3) If by replicable you mean 'replicable on demand', then you're right - but by that standard, the vast majority of experiments in mainstream psychology and medicine are not replicable (e.g. you cannot replicate on demand the fact that smoking cigarettes can cause lung cancer). However, if by replicable you mean 'statistically replicable', then all the evidence from meta-analyses show that experiments like the Ganzfeld (and others in parapsychology) have a statistical replication rate significantly above what's expected by chance.
For my take on the Maimonides experiments see my review of Dean Radin's "Conscious Universe" (http://www.skepdic.com/refuge/radin6.html); the ganzfeld fell apart after the expose of the Sargent lab by Blackmore (http://www.skepdic.com/ganzfeld.html); the PEAR experiments are discussed here: http://www.skepdic.com/pear.html. The Maimonides experiments suffer from vague predictions and loose interpretations as to what counts as a "hit"; the ganzfeld collapses if Sargent's data is thrown out and if it can't be replicated; the PEAR data require us to accept a statistical artifact as proof of psi when it could be due to the equipment or Brenda Dunne. Maaneli: I don't troll. My work is public and I've written about each of these "exemplars" in detail. Refute them if you will.
Robert Todd Carroll,
In what way did the ganzfeld "fall apart" after the "expose" of the Sargent lab? Last I checked, the Storm et al meta-analysis gave a 32% significant hit rate which includes studies from 1992-2008. This did not include Sargent's work.
<<"the Maimonides experiments suffer from vague predictions and loose interpretations as to what counts as a "hit">>
A "hit" is when the receiver correctly chooses the target from amongst the decoys. There's nothing vague about that. This comment alone suggests your understanding of the ganzfeld rationale is quite poor.
Robert Todd Carroll,
I read your critique of the Maimonides experiments. You refer to Hansel's skeptical critique and complain that Radin "mentions none of the skeptical critique, which includes data on attempts at replication that failed because controls got tougher."
However, you yourself fail to mention that Krippner has critiqued Hansel's critique (as well as all the other published critiques of the Maimonides experiments) in his contribution to the Debating Psychic Experience anthology I cited in my essay. In fact, Krippner points out (page 199) that Hansel's critique is based on a factual error that Hansel has continued to repeat.
Regarding replication attempts, you fail to mention that there have never been any *exact* replication attempts of the Maimonides experiments - all of the replication attempts have been conceptual replications since they deviate in significant ways from the Maimonides protocol (e.g. having agents recall dreams after a full night's sleep, rather than being woken up to recall dreams right after a REM sleep period); and although some replication attempts have been unsuccessful, Sherwood and Roe did a review of all conceptual replication attempts and found that, overall, the conceptual replications produced effects that were statistically highly significant, even though about half the size of the Maimonides effect size:
http://www.ingentaconnect.com/content/imp/jcs/2003/00000010/F0020006/art00006
Your complaint about what counts as a hit in the Maimonides experiment seems (as Dave Smith pointed out) to be based on a very basic misunderstanding of the Maimonides experimental design. And it is noteworthy that none of the other parapsychology skeptics like Hyman or Wiseman (who, unlike you, have published peer-reviewed critiques of parapsychology) have ever complained about this aspect of both the Maimonides and Ganzfeld experiments.
Re the Ganzfeld, I concur with Dave Smith's question. And just to clarify it, Dave is pointing out that in Storm et al.'s meta-analysis, they find that in 29 Ganzfeld studies from 1997-2008 (which don't include any of Sargent's work), the overall hit rate is 32% and statistically highly significant. I'll also point out that in Utts et al.'s meta-analysis of 56 Ganzfeld studies from the 1970's-1999, they specifically exclude Sargent's work, and yet the overall hit rate is still highly significant at 33.4%. So as far as I can see, your claim that the Ganzfeld "collapses" when Sargent's work is removed is completely unfounded.
Finally, regarding PEAR, your link doesn't discuss their remote viewing studies at all (only their PK work, and even there your claims about their work are dubious). Moreover, it is notable that you left untouched the SRI and SAIC remote viewing work.
Generally speaking, I think you do a great disservice to the skeptical community by having Dictionary entries on these topics that are fraught with so many factual inaccuracies, omissions, and non sequiturs. It also makes me uneasy to think that you teach college-level courses in critical thinking, despite having such seemingly low standards for your own application of critical thinking (as evidenced by your claims in this thread).
Maaneli,
I suppose it is only fair that you end your response to me with a comment on my disservice to the skeptical community since I regretfully opened my comments to your post with a remark of disbelief that someone of Massimo’s stature and reputation would hand over his blog to be used as a bully pulpit to someone intent on flagellating three dead horses in the parapsychological morgue. I should have just smirked and left. I am embarrassed that I could not control my urges and rushed off a single-sentence dismissive remark.
I give you credit for finding and picking a few cherries in my work to back up your purported refutation of my critiques of the Maimonides, ganzfeld, and PEAR experiments.
You have already expressed your disdain for my low standards, so the following comments are meant more for others who might have been drawn to this exchange than it is for you.
I’ll begin with the reason I dismiss the Maimonides and the PEAR remote viewing experiments. I’ve already asserted that the criteria for what counts as a psychic hit are not specific enough. Let me explain. The Maimonides dream telepathy experiments were done in such a way that ambiguous data could easily be retrofitted to support the telepathy hypothesis. For example, in one experiment the target was Max Beckmann’s Descent from the Cross. The experimenters and Dean Radin considered the telepathy trial a success because the receiver dreamt twice about Winston Churchill. Radin writes: “Note the symbolic relevance of ‘church-hill’ in the reported dream….The overall hit rate is seen to be 63 percent...The 95 percent confidence interval clearly excludes the chance expected hit rate of 50 percent.” This hit rate seems more indicative of the experimenters’ talent for retrofitting connections than of the psychic abilities of the test subjects.
The same is true for the remote viewing experiments. Tests of remote viewing often involve having one person go to a remote site while another in a different location tries to get impressions of the site by reading the mind of the person at the remote site. There has never been such a test where one person looks at, say, the Golden Gate Bridge while another person across town says "she's looking at the Golden Gate Bridge." In one test, a person went to the Dumbarton Bridge and the remote viewer reported that he was getting "impressions" of
• half arch
• something dark about it
• darkness
• a feeling she had to park somewhere and had to go through a tunnel or something, a walkway of some kind, an overpass
• there's an abutment way up over her head
• we have a garden, it's a formal garden
• formal gardens get passed
• open area in the center
• trees
• some kind of art work in the center
• this art work is very bizarre, set in gravel, stone.
If you try hard enough, you can match some of the impressions of the remote viewer with the Dumbarton Bridge, but if you only had this list to go by, I don't think you'd ever figure out what he was talking about. These examples are not exceptions, but typical.
(Maaneli) part 2
Also, if it is possible for the senders and receivers in psi experiments to transmit and receive information psychically, there is no reason why anybody in the world wouldn’t be receiving telepathic messages from millions of other people all the time, including the receivers in these experiments. Why assume there is no psychic drift? Just because you set up an experiment to have A transmit to B doesn’t exclude C ad infinitum from transmitting information inadvertently to B. These experimenters arbitrarily assume that they’ve created an impenetrable tunnel between sender and receiver so that only messages they are testing for can get through. Except, of course, when the experiment fails. Then it is ok to blame skeptics for sending bad vibes into the air and disturbing the transmission.
Finally, the assumption that getting a statistic that deviates from chance is evidence of psi should be challenged. From a logical point of view, parapsychologists are either begging the question or committing the fallacy of affirming the consequent. (If it’s psi, then the data deviate from chance. The data deviate from chance. So, it’s psi. Or: if a person is psychic, then that person will do better than chance in guessing experiments. That person did better than chance in a guessing experiment. Therefore, that person is psychic. Or: if a group of people collectively guesses better than chance in a guessing experiment like the ganzfeld or the Maimonides experiments, then psi exists.) This questionable assumption, that if the stats deviate from chance then there is evidence of psi, runs throughout the history of parapsychology.
The assumption is also questionable on methodological grounds. There is no reason to believe that the laws of probability, which are purely formal and ideal, should apply directly to any finite set of events. It may be true that the odds of a coin coming up heads or tails is 1 in 2, but that gives us no information as to what will happen in the real world for any given number of tosses. Ideally, in a large number of tosses, heads should come up 50% of the time. In the real world, there is no way to know exactly how many times heads will come up in, say, ten million tosses. We can be pretty sure the number will be very close to five million (assuming a fair coin and a fair toss), but we cannot know a priori exactly how many times heads will come up.
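The coin-toss point can be illustrated with a quick simulation (a toy sketch of my own; the seed and run lengths are arbitrary choices, not data from any study):

```python
import random

random.seed(42)  # fixed seed for a reproducible illustrative run

# Finite runs of a fair coin: the observed head count hovers near n/2,
# but in any single run it almost never lands on exactly n/2, and the
# exact count cannot be known in advance.
results = {}
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    results[n] = heads
    print(f"n={n:>9}: heads={heads} ({heads / n:.4%})")
```

The proportion converges toward 50% as n grows, exactly as the ideal law describes, yet no run tells you a priori what the next finite run's count will be.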
Studies comparing random strings with random strings, to simulate guessing numbers or cards, have found significant departures from what would be expected theoretically by chance. It is not justified to assume that statistical probability based on true randomness and a very large number of instances applies without further consideration to any finite operation in the real world such as guessing symbols in decks of 25 cards shuffled who knows how or how often, or rolling dice, or trying to affect a random number generator with one's mind.
(Maaneli) part 3 of my response
The defender of psi might well ask: Why do some people or groups perform better or worse than chance in some experiments? We know some individuals do better because they cheat. Some do better or worse by chance. There are other possibilities as well: selective reporting, poor experimental design or controls, inadequate number of individuals in the study, inadequate number of trials in the experiment, inadequate number of experiments (e.g., drawing strong conclusions from single studies), file-drawer effect (for meta-studies), deliberate fraud, errors in calibration, inadequate randomization procedures, software errors, and various kinds of statistical errors. If any of the above occur, it is possible that the data would indicate performance significantly better or worse than expected by chance and would make it appear as if there had been a transfer or blockage of information when there had not been any such transfer or blockage. It is also possible that information is being transferred, but not telepathically - through sensory leakage. Or maybe some people have an ability to subconsciously recognize hidden patterns. In the case of the ganzfeld, Sargent’s lab accounted for 25% of the data used by Bem and Honorton. Sargent’s work isn’t trustworthy, for reasons noted by Susan Blackmore and publicly available. Throw that data out and you are back to chance results. But even if you include Sargent’s work, the data don’t prove anything psychic happened.
There are other alternative explanations, as well, though some of them seem farfetched. For example, it is possible that statistically significant deviation from chance in psi experiments is caused by Zeus, aliens, angels, ghosts, Jehovah, jinn, or any one of a number of beings who dwell in other dimensions. These beings may be playing with parapsychologists, as James Alcock suggested with the Zeus hypothesis. Or they may be unwitting conduits of data transfer. Perhaps dolphins are picking up information telepathically from aliens and relaying it to subjects in psi experiments. As I said, some of these alternative notions are pretty farfetched, but they are possible nonetheless, and, in my view, just as viable as the psi hypothesis.
We should also note that statistical significance is itself a somewhat arbitrary concept and carries no necessary connection with our ordinary notion of importance. A significance test tells us only how probable the observed result would be if nothing but chance were at work. Statisticians express this with a P-value. For example, P<0.01 means that, under chance alone, a result at least this extreme would occur less than one percent of the time. The most commonly used threshold in the social sciences and medical studies is P<0.05, meaning a result that extreme would arise by chance about one time in twenty. This standard can be traced back to the 1930s and R. A. Fisher. At that time, the number of data points produced by a scientific study would have been counted in the hundreds, thousands, or tens of thousands. Today, some psi studies have more than ten million data points. Should we assume that a convention adopted rather arbitrarily for studies with much smaller quantities of data can be applied without modification to studies with millions of data points?
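To make the definition concrete, here is a minimal Python sketch (my own illustration, not code from any of the studies discussed in this thread) that computes an exact one-sided binomial p-value: the probability, if chance alone were operating, of a result at least as extreme as the one observed. The card-guessing numbers are hypothetical.

```python
from math import comb

def binom_p_upper(hits, n, p):
    """Exact one-sided p-value: P(X >= hits) when X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(hits, n + 1))

# Guessing 1 card in 4 over 100 trials: chance predicts 25 hits.
# How often would chance alone produce 32 or more hits?
print(binom_p_upper(32, 100, 0.25))
```

Nothing about this calculation says anything about *why* a deviation occurred; it only quantifies how surprising the deviation would be under pure chance.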
(Maaneli) part 4 of my response
Parapsychology is not alone in worshipping at the altar of P<0.05, but it is the only science that concerns us here. A good example of mistaking statistical significance for actual importance was provided by Dean Radin and Roger Nelson in their assessment of the data collected by Robert Jahn, Nelson, and Brenda Dunne in the PEAR experiments on psychokinesis. The experiments consisted of subjects who tried to use their minds to affect machines. In over 14 million trials by 33 subjects over a seven-year period they found that their subjects performed at the 50.02% level when 50.00% was expected by chance. With such a large number of trials, this data plugs into some statistical significance formula and spits out the result that the odds against this happening by accident were beyond a trillion to one. Why am I not impressed?
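The arithmetic behind this point is easy to check: for a fixed deviation from 50%, the normal-approximation z-score grows like the square root of the number of trials, so the same 0.02% edge can look unimpressive or astronomical depending on what one counts as a trial. The sample sizes below are illustrative only; I am assuming (it is not stated in the comment) that the trillion-to-one figure comes from counting individual random bits rather than composite trials.

```python
import math

def z_score(hit_rate, n, p0=0.5):
    """Normal-approximation z for a hit rate observed over n binary trials."""
    return (hit_rate - p0) / math.sqrt(p0 * (1 - p0) / n)

# The same 50.02% hit rate at different (illustrative) sample sizes:
print(round(z_score(0.5002, 10_000), 2))         # 0.04  (nowhere near significant)
print(round(z_score(0.5002, 14_000_000), 2))     # 1.5   (still unremarkable)
print(round(z_score(0.5002, 2_800_000_000), 2))  # 21.17 (astronomically "significant")
```

The effect size never changes here; only the sample size does, which is exactly the sense in which statistical significance can come apart from practical importance.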
So, when confronted with data that indicate subjects in psi experiments are performing at levels significantly greater than chance, why should we conclude that psi is at work? We shouldn't, unless we can exclude all other possibilities. Of course, we can never do this with absolute certainty. But unless we can demonstrate that it is highly probable that all other possibilities are false, we are not justified in concluding that psi is the correct explanation for the data.
Some of the possibilities can be excluded on the grounds that they are too farfetched to be taken seriously. For example, the likelihood that Zeus, Jehovah, dolphins, angels, ghosts, jinn, or aliens are the cause of the data strikes me as rather beyond belief. However, what I consider to be beyond belief should be irrelevant to the testing of any hypothesis. Therefore, even these wild notions should probably be considered by the parapsychologist if he or she is to be thorough in the investigation.
Other possibilities don't strike me as farfetched, simply because we have ample evidence that they have occurred on numerous occasions. We have numerous examples of cheating by subjects, fraud by scientific investigators, sloppy controls, inadequate protocols, poor record keeping, file-drawer effects, drawing grand conclusions from single or small studies, misusing statistics, and the like. Furthermore, these examples are not limited to parapsychologists, but can be found in all the sciences.
How many clear, decisive, unambiguous examples of psychic ability do we have? So far, we have none.
Hence, it is reasonable to conclude that there is little or no justification for assuming that deviation from chance in a psi experiment is evidence of anything anomalous or paranormal.
Robert,
You've written a lot, but much of it either misses the point or has already been addressed in my essay or earlier in this thread. Hence, I'll only respond to a select few of your claims which are obvious factual inaccuracies.
<< This hit rate seems more indicative of the retrofitting talents of the experimenters to find connections than of the psychic abilities of the test subjects. >>
Sorry, but this is incorrect. Yes, most of the subjective correspondences between mentations and target images are ambiguous. But this has nothing to do with the hit rate. The hit rate is simply the frequency with which the preselected targets are identified (from a fixed number of decoys) by the judges/receivers. The judges/receivers make a prediction *beforehand* about which of the images the target is, and it is either confirmed or not (i.e. it is either a 'hit' or a 'miss'). There is no "retrofitting" to produce this hit rate. Indeed, this aspect of the design of the Maimonides/Ganzfeld/RV experiments is nothing but that of a binomial experiment. That you don't seem to understand this is puzzling.
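The binomial structure described here can be sketched in a few lines of Python (a simulation of my own, not code from the experiments): under the null hypothesis the judge simply picks one of four images at random, and the hit rate converges to the 25% chance baseline with no retrofitting involved.

```python
import random

def simulate_hit_rate(n_sessions, n_decoys=3, seed=42):
    """Simulate ganzfeld-style judging under pure guessing: the judge picks
    one of (n_decoys + 1) images at random; a 'hit' means picking the target."""
    rng = random.Random(seed)
    hits = sum(rng.randrange(n_decoys + 1) == 0 for _ in range(n_sessions))
    return hits / n_sessions

print(simulate_hit_rate(100_000))  # close to 0.25, the chance baseline
```

Each session is a Bernoulli trial with success probability 1/4 under the null, which is why the observed hit rate can be tested against 25% with a standard binomial test.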
<< In the case of the ganzfeld, Sargent’s lab accounted for 25% of the data used by Bem and Honorton. Sargent’s work isn’t trustworthy, for reasons noted by Susan Blackmore and publicly available. Throw that data out and you are back to chance results. >>
Bem and Honorton don't use the data from Sargent's lab in their 1994 autoganzfeld meta-analysis. All those experiments came from Honorton's PRL lab. Only Honorton used the data from Sargent's lab, in his 1985 Ganzfeld meta-analysis. And even then, anyone can easily verify by actually reading the Bem-Honorton paper I linked in my essay that, even if one throws out the data from the two most prolific labs (which includes Sargent's) in Honorton's meta-analysis, the Stouffer Z across the remaining 8 labs is still highly significant, at z = 3.67.
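For readers unfamiliar with it, the Stouffer method referred to here combines the z-scores of k independent studies as Z = (z1 + ... + zk) / sqrt(k). A minimal sketch follows; the per-lab z values are made up for illustration and are not the actual figures from the Ganzfeld labs.

```python
import math

def stouffer_z(zs):
    """Stouffer's method: combine independent z-scores into one overall Z."""
    return sum(zs) / math.sqrt(len(zs))

# Hypothetical per-lab z-scores for 8 labs:
lab_zs = [1.0, 0.6, 1.8, 1.2, 0.2, 1.5, 0.9, 1.1]
print(round(stouffer_z(lab_zs), 2))  # 2.93
```

The point of the method is that several individually modest z-scores, if independent and consistently positive, can combine into a strongly significant overall Z.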
Also, I think it is telling that earlier you made the generic claim that "throw out Sargent's data and the Ganzfeld collapses", and after I pointed out that this is false when you consider Utts et al.'s and Storm et al.'s meta-analyses (which you have conveniently failed to acknowledge), you now make the much more limited claim that it brings only Bem and Honorton's results to chance levels (which is also clearly false, as I have shown). It indicates that your knowledge of the Ganzfeld literature is unreliable.
<< There is no reason to believe that the laws of probability, which are purely formal and ideal, should apply directly to any finite set of events... >>
That's why Ganzfeld researchers do calibration runs of the random number generators (RNGs) that select the targets - to ensure that the empirical distribution of the RNG output converges to the expected theoretical distribution. In the case of Bem and Honorton's autoganzfeld results, Bem showed in his response to Hyman (which I cited in my essay) that the RNG used to select the targets converged to the expected theoretical distribution of 25% in their 329 trials (it was 24.6%, to be exact). Moreover, as I already mentioned in this thread, there have been Ganzfeld experiments which have used non-Ganzfeld control conditions in which the receivers are asked to guess what the target is. Across 8 experiments with non-Ganzfeld conditions, there are 1129 trials, with a hit rate of 25%, exact binomial p = 0.5, and 95% CI from 22% to 28%. So the empirical results of these Ganzfeld experiments confirm the theoretical prediction that random guessing of targets should lead to a 25% hit rate in the limit of large N.
By comparison, in the trials with the Ganzfeld condition across these 8 experiments, there are 312 trials for a hit rate of 36%, exact binomial p = 2*10^-5, and 95% CI for the hit rate from 30% to 40%. This indicates that something about the Ganzfeld condition allows receivers to identify the targets with a frequency significantly greater than what's expected theoretically by chance (i.e. they are not merely guessing the targets, as they appear to be doing in the control condition).
Most readers of this blog will understand that achieving a statistic unlikely due to chance is not strong proof of ESP.
<< Most readers of this blog will understand that achieving a statistic unlikely due to chance is not strong proof of ESP. >>
I never used the term "proof" in my essay or in this thread. I said that I think the Ganzfeld provides strong *evidence* for ESP, judging by the standards of evidence commonly accepted by the social/behavioral sciences.
Yes, achieving a statistic that's unlikely due to chance is not, on its own, enough to constitute strong evidence for ESP. One has to also demonstrate that other possible prosaic explanations (e.g. sensory leakage, file-drawer, fraud, etc.) can be ruled out beyond a reasonable doubt, according to the standards commonly accepted by the social/behavioral sciences.
And I already explained in my essay and in this thread why I think most of these prosaic explanations have been ruled out beyond a reasonable doubt.
Most readers of this blog will understand that achieving a statistic unlikely due to chance is not strong evidence for ESP. The other explanations commonly provided are much more probable than any supernatural or paranormal explanation for the statistics used to support belief that psi has been found. The readers will understand the lack of justification for the psi assumption, the psi-focus assumption, and the psi-conducive assumption. Your standard of "reasonable doubt" isn't reasonable.
ReplyDelete"And I already explained in my essay and in this thread why I think most of these prosaic explanations have been ruled out beyond a reasonable doubt."
I missed the one where you ruled in the ability to actually see what hasn't yet happened - in any place but your odd conception of the physicality of time, that is.
RT Carroll said,
ReplyDelete"The other explanations commonly provided are much more probable than any supernatural or paranormal explanation for the statistics used to support belief that psi has been found."
But Robert, your long posts clearly demonstrate a poor grasp of the rationale behind these experiments, as Maaneli has shown. So I don't think anyone will place much confidence in your assertions. Perhaps Maaneli's responses will give you pause for thought and provoke you to apply a bit of scepticism to your own beliefs.
Robert,
<< The other explanations commonly provided are much more probable than any supernatural or paranormal explanation for the statistics used to support belief that psi has been found. >>
A priori, they do seem more probable to most scientists (even though there is no objective way of showing that they are in fact more probable), which is why it is fair that they be assessed before the ESP hypothesis is taken seriously. But if you take the time to study the Ganzfeld literature (e.g. start with the Bem-Honorton paper, Bem's reply to Hyman, and Storm et al.'s reply to Hyman), you'll see that they have been considered quite thoroughly; and all of them, except perhaps for fraud and sloppy implementation of protocol, have been decisively ruled out.
Regarding fraud, I have explained in this thread why such a hypothesis is unlikely on its own terms to explain the statistically significant Ganzfeld data, and why uncaught fraud is less likely to have occurred in the Ganzfeld experiments than in experiments in mainstream research paradigms in psychology.
Regarding sloppy implementation of protocol, that's a possibility in every experiment in the social/behavioral sciences. And as far as I can see, there is no reason to think it is any more likely in Ganzfeld experiments.
So unless you can provide clear evidence to suggest that fraud and sloppy implementation of protocol are in fact more likely to have occurred in the Ganzfeld experiments, you have to concede that by the standards of evidence commonly accepted by the social/behavioral sciences (which is the standard that I'm using), the Ganzfeld data do provide strong evidence for the ESP hypothesis.
maaneli said
ReplyDelete"- the standards of evidence commonly accepted by the social/behavioral sciences (which is the standard that I'm using)"
Except that the behaviors being examined there are expected, by all other common standards, to be physically, conceptually, and sequentially possible.
Jeremybee,
If our physical formalisms are time-symmetric, shouldn't precognition be physically, conceptually and sequentially possible?
The physical reactions are in some cases time symmetric in the sense that there are linear changes at a micro level that are, or appear to be, reversible. But it's the function on that level that's reversible, not the time. Changes at a macro level are not sequentially reversible. Water, to put it simply, will not run back uphill and its erosion dis-erode. Behaviors will not dis-behave. Awareness remembered may be forgotten but not unremembered.
Here's a quote on the subject from an interview at Big Think with David Albert, Professor of Philosophy in the Philosophical Foundations of Physics program at Columbia University:
"Once again, it appears as if although the theory does an extremely good job of predicting the motions of elementary particles and so on and so forth, there's got to be something wrong with it, okay, because we have -- although we have very good, clear quantitative experience in the laboratory which bears out these fully time-reversal symmetric laws, at some point there's got to be something wrong with them, because the world that we live in manifestly not even close to being time-reversal symmetric. And once again there are proposals on the table for how to fiddle around with the theory, adding a new law governing initial conditions, for example. There are all kinds of proposals about how to deal with this, but this has been -- this is a very fundamental challenge. Although we've got laws that are doing a fantastic job on the micro level, there's some way in which these laws manifestly get things wrong on the macro level, and we need to figure out what to do about it."
<< But it's the function on that level that's reversible, not the time. Changes at a macro level are not sequentially reversible. >>
Yes, it is the time that's reversible on the micro level. For Maxwell's equations, the Dirac equation, etc., the retarded and advanced waves are always solutions. The reason why we see an asymmetry between retarded and advanced waves on the macro level has to do with the initial conditions imposed on the solutions, not the physical laws themselves (or at least, that's the majority view in the foundations of physics). For more on this, I recommend reading Huw Price's book, Time's Arrow and Archimedes' Point.
As far as theories of ESP are concerned, under the view that all types of ESP can be explained as precognition, one could potentially use advanced wave solutions (either for Maxwell's equations, or the Dirac equation, etc.) as a mechanism for explaining how a human brain/mind can acquire information about its future states. Indeed, this is what Bierman has suggested in his CIRTS model, and it leads to testable predictions:
http://www.uniamsterdam.nl/D.J.Bierman/publications/2010/eBiermanJPfall2010.pdf
@maaneli
No, it is not the time that is reversible. Time is no more than our measure of sequential change, and change in its totality is not sequentially and/or exactly reversible except, theoretically and sometimes, at the micro level. The book you cite simply doesn't deal with this essential point.
Of course we can make testable predictions. But that's all they are or can be, predictions of the most highly probable in a probabilistic universe.
<< No, it is not the time that is reversible. Time is no more than our measure of sequential change, and change in its totality is not sequentially and/or exactly reversible except, theoretically and sometimes, at the micro level. >>
Sorry, but you're just wrong about this. The 'measure of sequential change' in solutions of the physical laws is determined exactly by the time parameter. Just look at the solutions of the Dirac equation or Maxwell's equations.
Re the macro level, the emergence of the arrow of time is the result of an entropy gradient between two regions of space-time. (Physicists call this the 2nd law, but it is not really a physical law like, say, Maxwell's equations - rather, it is a statement about what type of physical behavior is statistically likely over some time interval for an ensemble of systems.) And the entropy gradient is the result of asymmetric boundary conditions between the two regions in space-time. And because those boundary conditions are contingent, one can in principle change them so as to reverse the entropy gradient, and hence the arrow of time.
Well, we've finally established the area of fundamental disagreement. And until you come to understand that change is the thing that we humans (and other choice makers) measure as a time sequence, and not that time is something measured by some sort of change sequence, you will continue to be wrong about treating access to future changes as a property of time rather than of the energetic things in nature that measurably change in the only possible sequential direction.
In other words, you can't make the great causative collection of the universal wheels run backwards.
Believing that the world/universe is constructed mathematically and that time is one of its mathematical and dimensional constructs would seem to require belief in a form of mathematical determinism.
Which would seemingly not allow a probabilistic universe to be mathematically constructed. Further, a mathematically determined and constructed universe would arguably not need, and thus not be able, to evolve; it would be, in the end, non-anticipatory as to its functions and effectively inert (even though seemingly chaotic in its mathematical model).
<< And until you come to understand that change is the thing that we humans (and other choice makers) measure as time sequential, and not time that is measured by some sort of change sequential, you will continue to be wrong about accessing future changes as properties of time rather than of the energetic things in nature that measurably change in the only possible sequential direction. >>
And until you come to understand that the physical laws don't support your view, you will continue to be wrong in your understanding of the role of time in physics.
I wonder, what is your educational background, jeremybee? If it is in physics, I recommend studying these papers:
New Insights on Time-Symmetry in Quantum Mechanics
Authors: Yakir Aharonov, Jeff Tollaksen
http://arxiv.org/abs/0706.1232
Two-time interpretation of quantum mechanics
Authors: Yakir Aharonov, Eyal Y. Gruss
http://arxiv.org/abs/quant-ph/0507269
The Two-State Vector Formalism of Quantum Mechanics: an Updated Review
Authors: Yakir Aharonov, Lev Vaidman
http://arxiv.org/abs/quant-ph/0105101
Action Duality: A Constructive Principle for Quantum Foundations
Authors: K.B. Wharton, D.J. Miller, Huw Price
http://arxiv.org/abs/1103.2492
Does Time-Symmetry Imply Retrocausality? How the Quantum World Says "Maybe"
Authors: Huw Price
http://arxiv.org/abs/1002.0906
Time in Quantum Theory. In: Compendium of Quantum Physics, D. Greenberger, K. Hentschel, and F. Weinert, eds. (Springer 2009)
http://www.rzuser.uni-heidelberg.de/~as3/TimeInQT.pdf
If it is not in physics, and if you can't read those papers, then you really have no place making such bold assertions about what time means in physical theories.
Too late, maaneli, see my just previous comment. And my working background involves psychology and evolutionary philosophy, although I do know some things about physics that you clearly don't. If you did, you wouldn't have restricted yourself to mathematical arguments, and you would know much more about the regulatory nature of universal "laws" and that the universe operates strategically, with a logic that your version of mathematics can only attempt to simulate.
http://www.iep.utm.edu/time/#SH3c
ReplyDeletehttp://plato.stanford.edu/entries/time/#Oth
Psi does not necessarily mean causal influences from future to past. Precognition can be interpreted as combinations of telepathy, clairvoyance and psychokinesis. I don't believe that apparent precognition is real precognition because I don't think there really are causal influences from future to past.
Physicist Henry Stapp argues that Bem's recent experiments are only apparent precognition in this paper:
"Apparent Retrocausation As A Consequence of
Orthodox Quantum Mechanics Refined To
Accommodate The Principle Of Sufficient Reason"
http://www-physics.lbl.gov/~stapp/ReasonFIN.pdf
Some parapsychologists have argued that the Radin and Bierman presentiment experiments are psi but not precognition. There is a split between those who think influences can travel backward in time and those who don't. I side with the latter.
A bit of nonsense from your cited paper, New Insights on Time-Symmetry in Quantum Mechanics:
*The “destiny-generalization” of QM inspired by TSQM (§4.2) posits that what happens in the present is a superposition of effects, with equal contribution from past and future events. At first blush, it appears that perhaps we, at the present, are not free to decide in our own mind what our future steps may be. Nevertheless, we have shown [32] that freedom-of-will and destiny can “peacefully co-exist” in a way consistent with the aphorism “All is foreseen, yet choice is given” [78, 76].
The concept of free-will is mainly that the past may define the future, yet after this future effect takes place, i.e. after it becomes past, then it cannot be changed: we are free from the past, but, in this picture, we are not necessarily free from the future. Therefore, not knowing the future is a crucial requirement for the existence of free-will. In other words, the destiny vector cannot be used to inform us in the present of the result of our future free choices.*
Trying desperately to meld their/your fatalistic universe with free will, and not just the illusion, but the reality - that if allowed to exist, cannot be permitted to see the otherwise foreseeable future.
So much then for the accuracy of precognition.
Hello,
Robert Todd Carroll wrote way above:
"Also, if it is possible for the senders and receivers in psi experiments to transmit and receive information psychically, there is no reason why anybody in the world wouldn’t be receiving telepathic messages from millions of other people all the time, including the receivers in these experiments. Why assume there is no psychic drift?"
That's a very good point! I never thought of that. Thanks. In a similar vein, it's something that puzzles me about the Global Consciousness Project. Why consider that only humans can affect the results, and not every kind of animal with some sort of consciousness? Also, since psi is supposed to be unaffected by distance, why don't aliens affect the results? If there were a galactic war somewhere in the cosmos, affecting far more aliens than there are humans on Earth, why wouldn't that show up in the results?
The Global Consciousness Project is, by its nature, very anthropocentric.
I think that if psi exists, especially if the super-psi hypothesis is true, then science becomes impossible to do. If the intentions of everybody (including subjects, scorers, reviewers, publishers, readers and so on) can affect the results of an experiment, even after that experiment is over (going backward through time), as Dean Radin sometimes speculates, I don't see how the scientific endeavour could be possible. At all. But we can observe that the scientific project is working (going to the moon and so on). I think that simple fact goes against the super-psi hypothesis.
With skepticality,