Interesting article in the NY Times today by Sally Satel (here) lamenting the lack of reproducibility in psychology experiments. She starts by talking in particular about the concept of “goal priming,” which has gotten a lot of attention. However, she expands her discussion to the lack of reproducibility in science (or at least health science) in general. For example,
“To be sure, a failure to replicate is not confined to psychology, as the Stanford biostatistician John P. A. Ioannidis documented in his much-discussed 2005 article “Why Most Published Research Findings Are False.” The cancer researchers C. Glenn Begley and Lee M. Ellis could replicate the findings of only 6 of 53 seminal publications from reputable oncology labs.”
Which, of course, leads me to wonder about models built on such studies. If we look at QUANTEC again, there are many studies that give widely divergent results. This sits uneasily with the frequentist view, in which a clinical study is just one realization out of the many possible trials, an assumption inherent in the very concept of statistical significance. I think it is pretty clear that the lack of reproducibility doesn’t reflect poor scientific technique but rather indicates that we know so little about the relevant variables that we are unable to repeat the study. So if the model is built under the belief that the study is reproducible (a belief that underlies all the analysis justifying the study), what happens when that belief is proven false? The Bayesian approach would say that there was some prior probability that all the relevant variables were accounted for, and that under the new evidence this prior belief needs to be reassessed. If we look at Campbell-Ricketts’ blog entry about theology, a similar argument might apply: the domain of variables is essentially infinite for such a trial, rendering the probability of our belief in its repeatability very nearly zero.
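The Bayesian reassessment described above can be sketched in a few lines. This is purely illustrative: the prior (0.7) and the likelihoods of a successful replication under each hypothesis are made-up numbers, not values taken from any of the studies discussed, and the function name is my own.

```python
# Illustrative sketch: Bayesian updating of the belief that a study is
# reproducible (i.e., that all relevant variables were accounted for),
# after observing repeated replication failures.
# All probabilities below are assumed for illustration only.

def posterior_reproducible(prior, p_rep_if_controlled, p_rep_if_not, replicated):
    """Update P(all relevant variables controlled) after one replication attempt."""
    if replicated:
        like_h, like_alt = p_rep_if_controlled, p_rep_if_not
    else:
        like_h, like_alt = 1 - p_rep_if_controlled, 1 - p_rep_if_not
    evidence = like_h * prior + like_alt * (1 - prior)  # total probability
    return like_h * prior / evidence                    # Bayes' theorem

# Start optimistic: 70% prior that the study controlled the relevant variables.
belief = 0.7
for _ in range(5):  # five consecutive failed replications
    belief = posterior_reproducible(belief,
                                    p_rep_if_controlled=0.9,
                                    p_rep_if_not=0.2,
                                    replicated=False)
print(belief)  # the prior belief collapses toward zero
```

Even a fairly confident prior is driven very nearly to zero after a handful of failed replications, which is the Bayesian reading of the Begley and Ellis result: the evidence forces us to abandon the belief that the relevant variables were under control.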