Daryl Bem, a psychologist at Cornell University, has a fascinating article in press (meaning it has been accepted but not yet published) at the Journal of Personality and Social Psychology (I tweeted about this last year). The article argues that ESP exists, laying out 9 experiments in which participants were apparently able to "predict" future content. Bem used entirely standard experimental designs and commonly used statistics to make his case. According to Joe Cesario (with whom I took a class last semester), the backstory is that the four reviewers were against the idea of publishing a pro-ESP article, but could not find statistical fault with it, and figured that the article would at least spur a wave of replication attempts.
Unfortunately, the article has created a lot of controversy, not because scholars are outraged at the thought of ESP, but because it forces us to rethink what it means to be statistically "significant." Although the reviewers of the original article could not find fault with the stats, some more math-savvy scholars have found major flaws. A response article critiquing the statistics is already slated to appear in a subsequent issue of the same journal. Its authors argue, in essence, that the statistics were misinterpreted. The issue has also been picked up by the popular press, so Bem is getting a lot of heat.
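To see why "p &lt; .05" alone is a shaky standard, it helps to remember what the threshold actually promises: even when there is no effect at all, about 5% of experiments will look "significant" by chance. The sketch below is purely illustrative (it is not Bem's design or the critics' analysis); it simulates many guessing experiments with no real ESP effect and counts how often a simple two-sided z-test against chance still crosses the conventional threshold.

```python
import math
import random

random.seed(42)

def one_null_experiment(n_trials=100, p_chance=0.5):
    """Simulate one guessing experiment with NO real effect,
    then test the hit rate against chance."""
    hits = sum(random.random() < p_chance for _ in range(n_trials))
    # Normal-approximation z statistic for a binomial hit count
    z = (hits - n_trials * p_chance) / math.sqrt(
        n_trials * p_chance * (1 - p_chance)
    )
    # Two-sided p < 0.05 corresponds roughly to |z| > 1.96
    return abs(z) > 1.96

n_experiments = 2000
false_positives = sum(one_null_experiment() for _ in range(n_experiments))
rate = false_positives / n_experiments
print(f"{rate:.1%} of pure-chance experiments look 'significant'")
```

Run enough null experiments and roughly one in twenty comes out "significant," which is why a surprising claim resting on conventional significance tests invites exactly the kind of statistical scrutiny Bem's article received.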
Lesson 1: Peer review can suck, but science corrects itself
I think there are four lessons to be taken from this. I don't think it's Bem's fault that the stats he used yielded "significant" results. The fact that the four reviewers did not pick up on the statistical flaws just shows how flawed the peer review process can be. I don't think the reviewers are to blame either: to most social scientists who learned statistics at the doctoral level, the stats looked fine. Besides, even when everyone messes up, we have a wonderful system in which we can critique each other's work.
Lesson 2: Choose your topics wisely
Would this article have received so much attention if it were not about ESP? I think this is the more important question. Would statisticians have scrutinized the methods if the article had reported experiments about cognitive dissonance? Probably not. I suspect the sensational claim the article was making led many people to examine it in detail and actively try to find faults. That said, it makes you wonder how many published papers have "gotten away with" misguided interpretations of statistics.
Lesson 3: What is the deal with the academic publishing cycle?
Bem’s article is technically still unpublished, as is the response article. This is just plain silly. Journal articles aren’t like movies; there is nothing to be gained from making people wait for publication. I realize that journals aim for quality control by limiting the number of articles they publish, but in the process they forget why we have journals in the first place. Journals, and all academic research, are meant to disseminate knowledge. Why would we want to hold knowledge back? This also ties in with the fact that these journals make scholars do all the work (writing, reviewing, editorial duties), then impose all these restrictions on publication, and on top of that make scholars pay to view the articles. This is by far the most distorted system I’ve ever seen… but that’s another story…
Lesson 4: What about ESP?
Unfortunately, all of the discussion generated by the statistical issues has taken the focus away from ESP itself. The "true" results of the study suggest that there is no ESP. But that doesn't mean ESP doesn't exist. Going by anecdotal accounts, it is most likely not something found in a general undergraduate population, which is what Bem used for his experiments. So all this demonstrates is that ESP probably doesn't exist in the general population. If premonition exists, I think it would have to operate under very tight boundary conditions and in an extremely small population. And even then, it is quite probable that people who do have ESP cannot demonstrate it in every task.