


Argument 2.1: Mind exists apart from matter, the weak case.

The only way to make this argument is to adduce evidence: to give instances in which one person’s mind influences another’s under circumstances that present science finds impossible to accept.

For instance, Deepak Chopra’s then publicist (in the 1990s), Arielle Ford, reports that when Good Morning America wanted Chopra for an event and he was in Africa on a safari with his family, “out of touch for two weeks,” she mentally asked him, with great energy, to call her, and he did so “within ten seconds,” saying, “What do you want?” She was in NYC. This is, of course, not a very persuasive instance, for a variety of reasons. But it is a classic instance of the weak case. Arielle is the “person or other entity with mind” who manifests her mind in “another person or entity,” Deepak Chopra, across the Atlantic Ocean just by an act of conscious will, using no telephone, radio or the like.

Listening to and watching Arielle Ford tell her story to Renée Scheltema on the DVD, it is difficult to think she is perpetrating a hoax; however, it is easy enough to suppose that coincidence alone, combined perhaps with unconsciously wish-fulfilling memory distortions, can explain what is otherwise inexplicable in the story.

And this coincidence argument brings us to the usual paranormalist answer to it — statistical trials under controlled conditions. Rupert Sheldrake’s experiment with the Nolan sisters is available on YouTube (link) in sufficient detail to be persuasive. The five sisters (a singing group popular in Britain in the 1980s) select one of themselves to be the receiver. She goes to a distant apartment in London while the other four remain behind. They call her in a random sequence determined by the roll of a die, attempting to send to their receiver sister the fact of who’s calling. The receiving sister hears the phone ring and states out loud which of the four she thinks is calling, then picks up the phone to see if she’s right. This is all done by landline without caller ID and, of course, without the sisters using other phones to hoax the results, something we can be sure of because the whole thing is video recorded at both ends of the phone line with time-synchronized cameras. The videotaping and other physical arrangements were made by a British TV outfit called 20/20 Productions, which eventually aired the results in a show called “Are You Telepathic?” All the details can be studied in an academic paper published by Sheldrake and available online here.

In this one-day production effort, 12 phone calls were made in a little over an hour in the afternoon, and the sisters were successful 6 times. Chance would predict a success rate of only 3 in 12. At this point questions arise about how significant this finding might be. Some study of statistics is needed just to begin to answer that question, and I will not go into it here; those who are interested can look into the statements in that regard in the paper just linked. At that place I found reference to two more studies, by Sheldrake and his assistant Pam Smart. One of the studies details the repetition of virtually identical tests on four subjects, each in the role of the one Nolan sister, running a total of 271 trials. (Both subjects and callers were videotaped.) Overall the four subjects scored a hit rate of 45%. However, the 271 trials included callers of two types, those the subjects were familiar with and total strangers. Among the strangers, the hit rate was only 20%, but among people known to the subjects, the success rate was 61%. The other study involved 63 subjects and 571 trials, but was not videotaped. The results were 40% overall, 53% with familiar callers, and 25% with strangers calling.
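Still, the flavor of the chance calculation is easy to sketch. The toy computation below treats each call as an independent guess with a 1-in-4 chance of success (a simplification of the statistics in the papers themselves); the figure of roughly 122 hits for the 271-trial study is my own reading of the reported 45%.

    # Tail probability of k or more hits in n trials when chance alone
    # gives a 1-in-4 success rate (one of four possible callers).
    from math import comb

    def binom_tail(n, k, p=0.25):
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    print(binom_tail(12, 6))     # Nolan sisters, 6 hits in 12 calls: ~0.054
    print(binom_tail(271, 122))  # 45% of 271 trials, ~122 hits: below 1e-10

Even this crude sketch shows why the 12-call demonstration is only suggestive (about a 1-in-18 fluke) while the 271-trial result is another matter entirely.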

These results are extraordinarily unlikely to have been achieved by chance. The odds against that are given in the appropriate papers. There exist literally thousands of studies like this in the literature, involving other manifestations of mind operating at a distance, apart from its brain. The question for me is not whether results like this are empirically true or statistically significant. I ask, do these facts demonstrate that mind exists apart from matter in the weak sense?

Obviously they do, unless there is some other explanation. The only other explanation I know of is called the file drawer effect (>>>). This refers to the consequences for calculating odds or effectiveness if researchers stuff their uninteresting studies in the file drawer instead of reporting them. It’s a version of the argument that we only remember the times when our hunches about who was calling were right. In the case of scientific studies on subjects other than the paranormal, as we can tell from the three links over to the right, this is a very significant problem. I first became aware of it maybe ten years ago in reading up on the nonsteroidal anti-inflammatory “miracle drugs” like Vioxx and Celebrex. I learned then that the drug companies didn’t have to submit all the studies they knew about to the FDA, just the ones that got good results. I found out a good deal more that was disturbing, but I’ll let Dr. Joseph Mercola summarize for me, in a July 7, 2010 article titled “Pfizer 'Cherry-Picked' Celebrex Data, Memos Say” (my emphasis):

The problem was, Celebrex only appeared to be easier on the stomach because Pfizer, and its partner Pharmacia, only released the first six months of data from a year-long study. When the entire data set was looked at, the stomach "benefit" disappeared.

Folks this is what is called a blatant lie of omission and these companies do it on a regular basis. The system even encourages it. Contrary to what many people believe the FDA does no testing of drugs that are to be approved. Nor is there an objective third party that does tests. Rather the system the FDA employs has the drug company pay for and do the studies, and they only submit the studies that support the release of their drug. They are not required to submit failed ones.

That Pfizer withheld the critical data has been known for years, but newly unsealed documents showed this was all part of a carefully calculated plan by Pfizer and Pharmacia execs. While medical directors and scientists at the company expressed feeling uncomfortable with the "data massaging" and "cherry picking" of data, the powers that be moved full steam ahead with their deceptive marketing blitz.

So the file-drawer effect is very, very real, and we do well to challenge the paranormal results with this possibility. Now, unlike the drug companies, and unlike most “real” scientists themselves (>>>), paranormal investigators find their results constantly challenged by skeptics with the file-drawer argument. It is, of course, a difficult challenge to refute, because it’s hard to prove that you are not hiding information from those who say you might be. (Other scientists would doubtless take umbrage at any suggestion they might be hiding data or just not reporting it, but the paranormalist guys have gotten used to this kind of suspicion. However, as we can tell from the USA Today article linked above, it is seen as a current problem in many fields of hard science as well as the area of science-based medicine.)

The ramifications of this issue are perhaps especially significant in the realm of medical research using clinical trials to evaluate treatment options. The profit stakes in developing a new drug are such that people commonly speak of billion-dollar settlements by the pharmaceutical industry as “the cost of doing business,” and it’s hard to imagine that medical researchers and publications do not feel significant pressure to disregard negative or unprofitable results. Nevertheless, quite without benefit of imagination, studies have been done on this topic that confirm at least the outward shadow of one’s darker suspicions. For instance, in 1987 Controlled Clinical Trials published a study by the Clinical Trials Unit at Mount Sinai School of Medicine in New York City called “Publication bias and clinical trials.” The authors asked 318 medical researchers to give details about how many controlled clinical trials they had conducted, how many of those were published, and whether the trial results were favorable or unfavorable to the new therapies; 156 responded. Even with only half responding, the results are quite interesting for showing how often negative-result studies do not get published, and the authors conclude dryly, “The results of this study imply the existence of a publication bias of importance both to meta-analysis and the interpretation of statistically significant positive trials.” (>>>)
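The mechanism is worth seeing in miniature. Here is a toy simulation (all numbers invented, nothing drawn from the Mount Sinai data) of what happens when only the “favorable” trials of a completely useless treatment reach print:

    # File-drawer effect in miniature: a treatment with no effect at all
    # looks like a winner if only the favorable trials are published.
    import random

    random.seed(1)
    published = []
    for _ in range(200):                  # 200 small trials of a null treatment
        treated = sum(random.random() < 0.5 for _ in range(30))  # recoveries out of 30
        control = sum(random.random() < 0.5 for _ in range(30))
        if treated > control:             # "favorable" trials get written up;
            published.append(treated - control)   # the rest go in the file drawer

    print(f"{len(published)} of 200 trials published")
    print(f"published trials favor the treatment by "
          f"{sum(published) / len(published):.1f} patients on average")

A reader of the published literature alone would see a treatment that wins in every published trial, by a respectable margin, when in fact it does nothing at all.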

Since writing that summary of one study, I have come across a mammoth study of studies of the same kind, covering several dozen specific medical conditions with regard to publication bias. It basically indicates that the world medical industry, taken as a single entity, is deliberately killing and maiming thousands upon thousands of patients by knowingly using strategies that lead to publication bias, hiding both treatment-negative reports and reports of corresponding deadly side effects of medications from practicing doctors and patients. Of course the primary intention of the medical industry, taken as a single entity, is not this killing and maiming. The primary goal appears to be to make money. The following screen clip is linked to the study:


Coming back now to the telephone-calling cases I started out with, what are we to do with this problem of the file-drawer effect? One consideration here is that this objection has been made for so long against the statistical results of parapsychology experiments that the parapsychologists themselves have for some time had in place protocols that strictly prohibit any file-drawer activity at all. However, as the results of parapsychology experiments fly so smartly in the face of established science, the experimenters likely fear being suspected of out-and-out fraud — that’s my guess. In any case, they have looked to statistics itself for assistance.

That assistance is found in procedures that estimate, given the particular details of a given collection of studies, how many null studies would have to have been kept in file drawers to undo the significance of the studies actually used. Thus we find Dean Radin (in his Entangled Minds) stating this:

For the dream psi experiments it turns out that an additional 700 studies, averaging an overall chance outcome, would be needed to bring the observed results down to chance. Considering that about 20 different investigators have reported dream psi studies, this would mean that each of those investigators would have had to conduct, but not report, 35 failed experiments for each experiment with a positive result that they did report. Given that the average dream experiment involved 27 sessions, these 700 supposedly missing experiments would imply that about 700 × 27 or 18,900 sessions were conducted but not reported. One dream session takes one night, so we’d have to conclude that 18,900 nights, or over 50 years’ worth of data, wasn’t reported. That hardly seems plausible.
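Radin’s back-of-the-envelope numbers check out; a quick calculation, using only the figures in the quotation:

    # Arithmetic behind Radin's file-drawer argument, as quoted above.
    missing_studies = 700          # hypothetical unreported dream-psi studies
    sessions_per_study = 27        # average sessions per dream experiment
    nights = missing_studies * sessions_per_study
    print(nights)                  # 18,900 unreported sessions, one per night
    print(nights / 365)            # roughly 51.8 years of nightly data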

Radin’s calculations given just above utilize the methods of Rosenthal. Some challenge Rosenthal’s method and propose others. Thus:

Rosenthal proposed a method, based on probability calculations, for deciding whether or not a finding is “resistant to the file drawer threat.”

This method has become known as the fail-safe file drawer (or FSFD) analysis. It involves calculating a “fail-safe number” which is used to estimate whether or not the file-drawer problem is likely to be a problem for a particular review or meta-analysis.

However, Scargle has criticized Rosenthal’s method on the grounds that it fails to take into account the bias in the “file drawer” of unpublished studies, and thus can give misleading results.

Scargle urges efforts, such as research registries, to try to limit publication bias. He also suggests that Bayesian methodologies may be best to deal with the file-drawer problem when combining different research results in a meta-analysis.

Various methods (including “funnel plots”) have been devised to try to detect publication bias, but may have their own problems. (Source.)
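For concreteness, Rosenthal’s fail-safe number has a simple closed form: with k studies whose outcomes are expressed as standard normal z-scores, the number of hidden null studies needed to drag the combined result below one-tailed p = .05 is (ΣZ)²/1.645² − k. A minimal sketch, with made-up z-scores standing in for a real meta-analysis:

    # Rosenthal's fail-safe N (the "FSFD" number discussed above):
    # how many unpublished null studies would be needed to pull the
    # combined result back to non-significance.
    def fail_safe_n(z_scores, z_alpha=1.645):    # 1.645: one-tailed p = .05
        k = len(z_scores)
        return sum(z_scores) ** 2 / z_alpha ** 2 - k

    example_z = [2.1, 1.8, 2.5, 0.4, 1.2]        # hypothetical study z-scores
    print(fail_safe_n(example_z))                # about 18.7 hidden nulls needed

Scargle’s complaint, in these terms, is that the hidden studies are assumed to average exactly zero effect, whereas a real file drawer is filled selectively, which is just the bias the method is trying to measure.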

Radin dutifully calculates the file drawer omissions needed under the Scargle approach (reducing the number of studies needed to be missing from 700 to 670) and shows the meta-study data in a funnel plot, where it is seen not to reveal selective reporting. However, behind the simple reference to “Bayesian methodologies” lies a vast swamp of potential subjective or a priori (whence the word “priors”) difficulties, as the following paragraphs make clear. As they come from scientists (at least in the case of Wiseman) well known to be rather fiercely skeptical of parapsychology results, I think they may be trusted in their suggestion of the potential for subjectivity to influence the determination of Bayesian priors. The discussion in these paragraphs centers on the same Daryl Bem studies (Feeling the Future) I mention elsewhere as eliciting so much alarm in both the New York Times and more scholarly settings because of their showing of significant precognitive abilities among ordinary college students. The emphasis in the second paragraph is my own.

....The ‘Feeling the future’ study has become a test case for proponents of Bayesian theory in psychology, with some commentators (e.g. Rouder & Morey, 2011) suggesting that Bem’s seemingly extraordinary results are an inevitable consequence of psychology’s love for null-hypothesis significance testing. Indeed, Wagenmakers et al. (2011a) suggest that had Bayesian analyses been employed, with appropriate priors, most of Bem’s effects would have been reduced to a credibility level no higher than anecdotal evidence. Given that casinos are not going bankrupt across the world, argued the authors, our prior level of scepticism about the existence of precognitive psychic powers should be high.

Bem and colleagues responded (2011), suggesting a selection of priors which were in their view more reasonable, and which were in our view illustrative of the problem with Bayesian analyses, especially in a controversial area like parapsychology: Your Bayesian prior will depend on where you stand on the previous evidence. Do you, unlike most scientists, take seriously the positive results that are regularly published in parapsychology journals like the Journal of the Society for Psychical Research, or the Journal of Parapsychology? Or do you only accept those that occasionally appear in orthodox journals, like the recent meta-analysis of ‘ganzfeld’ telepathy studies in Psychological Bulletin (Storm et al., 2010)? Do you consider the real world – full as it is of the aforementioned successful casinos – as automatic evidence against the existence of any psychic powers? Your answers to these questions will inform your priors and, consequently, the results of your Bayesian analyses (see Wagenmakers et al., 2011b, for a response to Bem et al., 2011).
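The point in that last sentence is easy to demonstrate. Below is a minimal sketch of prior sensitivity, using the Nolan sisters’ 6 hits in 12 calls rather than Bem’s data, and two invented priors on the hit probability p (chance being 0.25): a flat “open-minded” prior and a sceptical prior tightly concentrated at chance.

    # Same data, different priors, different conclusions.
    # Beta prior on hit probability p; Beta posterior after 6 hits in 12 calls.
    from scipy.stats import beta

    hits, trials = 6, 12
    priors = {"open (Beta(1, 1))": (1, 1),
              "sceptic (Beta(250, 750))": (250, 750)}   # mean 0.25, held firmly

    for label, (a, b) in priors.items():
        post = beta(a + hits, b + trials - hits)        # conjugate update
        print(label, "-> P(p > 0.25 | data) =", round(post.sf(0.25), 2))

The open prior puts the posterior probability of an above-chance hit rate at roughly 0.98; the sceptic’s comes out around 0.6. Neither analyst has made an arithmetical error; they simply stood in different places before the data arrived, which is exactly the situation Bem and his critics found themselves in.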

It is interesting to note that “casinos prohibit successful gamblers,” as I discovered by Googling just those words. I would suppose this is automatic evidence for psychic powers on the same line of thought, except that I see from my Google results that some gamblers are successful because they are good at things like card counting. Of course, a truly successful psychic gambler would presumably be prohibited on the same grounds, so at a minimum we can say that successful casinos are not “automatic evidence against the existence of any psychic powers.”

(Renée Scheltema, Something Unknown Is Doing We Don’t Know What..., DVD.) An excellent work, far more persuasive than this particular instance might imply.

The two studies referred to here are named in the linked Sheldrake paper online (here). I have the PDFs on my hard drive and will provide copies to anyone who wants to peruse them.

There is a common but erroneous “Not Hard Science” objection to statistical studies of paranormal phenomena — a friend objects on those grounds to the significance of the sisters’ study. See here for a discussion.

Skeptic’s Dictionary, USA Today, Natural News.

(NaturalNews) More than 30 percent of studies conducted on antidepressant drugs go unpublished, apparently because they fail to show that the drug works as advertised, according to a new study published in the New England Journal of Medicine.


The case of Millikan’s measurement of the charge on the electron as told by Richard Feynman is illustrative: link.

There were 1041 published and 271 unpublished trials reported. That’s a ratio of just under 4 to 1. The respondents indicated a positive or negative trend with respect to treatment for only 178 of the unpublished trials that were completed. The abstract, all I have available at the moment, does not say what the story was for the nearly 100 left out — were they not completed? Were they showing unfavorable trends and so abandoned? But 178 completed and unpublished trials is still a lot. Of these, 14% were favorable, compared to 55% of the published trials. That would mean 86% of the unpublished trials were showing trends unfavorable to treatment. The abstract adds: “For trials that were completed but not published, the major reasons for nonpublication were ‘negative’ results and lack of interest. From the data provided, it appears that nonpublication was primarily a result of failure to write up and submit the trial results rather than rejection of submitted manuscripts.”

A single, unsensational example.

The source for this quotation is The Psychologist, May 2012. In their article, “Replication, replication, replication,” Ritchie, Wiseman and French state that they have each attempted to replicate Bem’s results without success.

They also, and more interestingly perhaps, point out that they were unable to get their results published in the first several journals they approached with their joint report. It appears that major scientific journals are not interested in mere replication studies for the most part, even in a case as noteworthy as Bem’s.