There is an excellent post by Tom Campbell-Ricketts in response to a paper by Gelman and Shalizi (“Philosophy and the practice of Bayesian statistics”). He discusses some of their statements about the testing of models and whether it needs to be deductive or inductive. I particularly like his parting comments about Popper and his ostensible views of science.
As a side note, the journal in which this paper is published, the British Journal of Mathematical and Statistical Psychology, makes me wonder if perhaps we have gone a bit too far off the analytic deep end in some of these disciplines involving human beings. I am reminded of the QUANTEC effort to use statistical methods to model complications from radiation therapy. In general, that effort is hampered by the paucity of consistent data, so the models tend not to be useful, and I have the (unproven) belief that physicians can basically guess the results of these models. The next question that comes to mind is whether even an accurate model, at the usual low levels of complications, would change any treatment decisions. Much of this radiation therapy modeling seems to be driven by a need to produce a mathematical ranking of different actions, e.g. treatment plans, more than by any clinical need. In other words, physicians--given their current knowledge of the important variables and their effects on clinical outcomes--are willing to accept plans that are far from mathematically optimal.