This is an interesting discussion of Nate Silver’s book, “The Signal and the Noise”. While generally in agreement with Silver, the authors, Gary Marcus and Ernest Davis, take issue with what they see as Silver’s emphasis (insistence?) on Bayesian inference. They are a bit satirical in noting that it rests on a theorem that has been around for a while (several centuries). They seem particularly miffed at the perceived slight of Fisherian statistics, which, as they correctly note, is what is commonly used in scientific hypothesis testing.
Without having read “The Signal and the Noise”, I hesitate to delve too deeply into this, but it is pretty clear that Marcus and Davis are missing two boats. The first has to do with the fact that there has long been a stark divide between the frequentists (Fisherian statistics) and the Bayesians. Only in the last few decades have the classic academic Fisherians acknowledged the Bayesian approach, and I believe that Silver, in his book, is reacting to that rift. The rift is most clearly seen when it comes to applying probability to real-life situations, which is what Silver emphasizes, and for which he is rapidly becoming famous.
This leads to their second missed boat. The problem that frequentists face is exactly with the situations that Marcus and Davis highlight as the weakness of Bayesian statistics, namely the case of sparse data. And while sparse data is certainly a problem for Bayesians, it is an even greater problem for frequentists. Because they define probabilities strictly in an enumerative sense, a situation with no or few prior observations leaves them with no probabilities, which is all right if you don’t actually have to do anything. But in real life, lack of data is not a good excuse. Your widget may not have been used often enough to fail, but that doesn’t mean you don’t have to make contingency plans for when it does (even though a strict frequentist count would put the probability of failure at zero).
Bayesians, on the other hand, view probabilities as degrees of belief (which may or may not be bolstered by counting). Thus, they can operate in situations where there is little actual data. And each time new data arrive, Bayes’ theorem provides a principled way to update that probability.
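The widget scenario above can be sketched in a few lines. This is a minimal illustration, not anything from Silver’s book: I’m assuming a standard Beta-Binomial model, where a Beta prior encodes the initial degree of belief about the failure rate and observations update it via Bayes’ theorem.

```python
# Hypothetical widget-failure example: a Beta(alpha, beta) prior over the
# failure rate, updated by observed trials (Beta-Binomial conjugacy).

def update_beta(alpha, beta, failures, successes):
    """Bayes' update for a Beta prior with binomial data."""
    return alpha + failures, beta + successes

# Weak prior belief: failures are possible but rare (prior mean 1/10).
alpha, beta = 1.0, 9.0

# Observe 20 uses of the widget with zero failures.
alpha, beta = update_beta(alpha, beta, failures=0, successes=20)

# The posterior mean stays nonzero, unlike the raw frequency 0/20 = 0.
posterior_mean = alpha / (alpha + beta)   # 1/30, about 0.033
print(posterior_mean)
```

The point of the sketch is the last line: even with no observed failures, the Bayesian’s degree of belief in failure is small but nonzero, so contingency planning still has something to work with.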
In scientific modeling, Fisherians assume a model and see if the data fit it. Bayesians start with the data and use it to find the model that best fits it. Fisher’s tests are all about providing degrees of confidence as to whether two models are actually different given the data.
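The two workflows can be contrasted on a toy example. The coin-flip data and numbers below are my own illustration, not from the article: the Fisherian starts from an assumed model (a fair coin) and asks how surprising the data are under it, while the Bayesian starts from the data and asks which bias best explains it.

```python
# Hypothetical data: 14 heads in 20 flips.
import math

heads, flips = 14, 20

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials with success rate p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Fisherian: assume the model p = 0.5, compute a two-sided exact p-value
# (doubling the upper tail, which is valid by symmetry at p = 0.5).
upper_tail = sum(binom_pmf(k, flips, 0.5) for k in range(heads, flips + 1))
p_value = min(1.0, 2 * upper_tail)

# Bayesian: start from the data and estimate the bias itself, here the
# posterior mean under a uniform Beta(1, 1) prior: (1 + k) / (2 + n).
posterior_mean = (1 + heads) / (2 + flips)

print(round(p_value, 3), round(posterior_mean, 3))
```

Note the different shapes of the answers: the Fisherian output is a statement about the data given a fixed model (is 14/20 surprising for a fair coin?), while the Bayesian output is a statement about the model given the data (the coin’s bias is probably around 0.68).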
So, in reading this article, it seems to me that the authors are missing a major point about the two approaches, but as I said, since I haven’t read Silver’s book, I can’t comment on whether or not he did a poor job of describing it.