You assume that I am interested in long-term average properties of procedures, even though I have so often argued that they are at most necessary (as consequences of good procedures), but scarcely sufficient for a severity assessment. The error statistical account I have developed is a statistical philosophy. It is not one to be found in Neyman and Pearson, jointly or separately, except in occasional glimpses here and there (unfortunately). It is certainly not about well-defined accept-reject rules. If N-P had only been clearer, and Fisher better behaved, we would not have had decades of wrangling. However, I have argued, the error statistical philosophy explicates, and directs the interpretation of, frequentist sampling theory methods in scientific, as opposed to behavioural, contexts. It is not a complete philosophy…but I think Gelmanian Bayesians could find in it a source of “standard setting”.
You say “the prior is both a probabilistic object, standard from this perspective, and a subjective construct, translating qualitative personal assessments into a probability distribution. The extension of this dual nature to the so-called “conventional” priors (a very good semantic finding!) is to set a reference … against which to test the impact of one’s prior choices and the variability of the resulting inference. …they simply set a standard against which to gauge our answers.”
I think there are standards for even an approximate meaning of “standard-setting” in science, and I still do not see how an object whose meaning and rationale may fluctuate wildly, even in a given example, can serve as a standard or reference. For what?
Perhaps the idea is that one can gauge how different priors change the posteriors, because, after all, the likelihood is well-defined. That is why the prior, and not the likelihood, is the camel. But it isn’t obvious why I should want the camel. (The camel/gnat references are in the paper and the response.)
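To make that “gauging” reading concrete, here is a minimal R sketch of my own (the numbers and prior choices are mine, not taken from the paper or the response): with a fixed binomial likelihood, one can watch how much the posterior moves as the prior changes.

```r
## Toy prior-sensitivity check: fixed binomial likelihood, several Beta priors.
y <- 7; n <- 20                                   # 7 successes in 20 trials (made-up data)
priors <- list(flat     = c(1, 1),
               jeffreys = c(0.5, 0.5),
               sceptic  = c(2, 8))                # an arbitrary informative choice
sapply(priors, function(ab) {
  a <- ab[1] + y; b <- ab[2] + n - y              # Beta(a, b) posterior
  c(post.mean = a / (a + b),
    post.sd   = sqrt(a * b / ((a + b)^2 * (a + b + 1))))
})
```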
While preparing a book chapter, I checked on the Google Ngram viewer the comparative uses of the words Bayesian (blue), maximum likelihood (red), and frequentist (yellow), producing the graph above (screen-copy quality, I am afraid!). It shows an increase in the use of the B word from the early 80s, rather than the sudden rise in the 90s I was expecting. The inclusion of “frequentist” is definitely in the joking mode, as this is not a qualification frequentists use to describe their own methods. In other words (!), “frequentist” does not occur very often in frequentist papers (and not as often as in Bayesian papers!)…
As I attended Jamie Robins’ session in Varanasi and did not have a clear enough idea of the Robins and Wasserman paradox to discuss it viva voce, here are my thoughts after reading Larry’s summary. My first reaction was to question whether or not this was a Bayesian statistical problem (meaning, why should I be concerned with the problem), just as the normalising constant problem was not a statistical problem: we are estimating an integral given some censored realisations of a binomial depending on a covariate through an unknown function θ(x), so there is not much of a parameter. However, the way Jamie presented it through clinical trials made the problem sound definitely statistical. So end of the silly objection. My second step is to consider the very point of estimating the entire function (or infinite-dimensional parameter) θ(x) when only the integral ψ is of interest. This is presumably the reason why the Bayesian approach fails, as it sounds difficult to consistently estimate θ(x) under censored binomial observations, while ψ can be. Of course, if we want to estimate a probability of success like ψ, going through functional estimation sounds like overshooting. But the Bayesian modelling of the problem appears to require considering all unknowns at once, including the function θ(x), and cannot forget about it. We encountered a somewhat similar problem with Jean-Michel Marin when working on the k-nearest-neighbour classification problem: considering all the points in the testing sample altogether as unknowns would dwarf the training sample and its information content, producing very poor inference. And so we ended up dealing with one point at a time, after harsh and intense discussions! Now, back to the Robins and Wasserman paradox, I see no problem in acknowledging that a classical Bayesian approach cannot produce a convergent estimate of the integral ψ, simply because the classical Bayesian approach is a holistic system that cannot remove information to process a subset of the original problem. Call it the curse of marginalisation. Now, on a practical basis, would there be ways of running simulations of the missing Y’s when π(x) is known, in order to achieve estimates of ψ? Presumably, but they would end up with a frequentist validation…
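On that practical note, here is a small R sketch of the kind of estimate alluded to above, in a toy one-dimensional version of the problem that I made up for illustration (not Robins and Wasserman’s high-dimensional setting): since π(x) is known, the inverse-probability (Horvitz-Thompson) estimator of ψ bypasses the estimation of θ(x) altogether, and its justification is indeed frequentist.

```r
## Toy censored-binomial setup: pi(x) known, Y observed only when R = 1,
## theta(x) a deliberately wiggly unknown function, psi = E[theta(X)].
set.seed(42)
n     <- 1e4
x     <- runif(n)
prob  <- 0.2 + 0.6 * x              # known observation probability pi(x)
theta <- (1 + sin(20 * x)) / 2      # unknown success probability theta(x)
r     <- rbinom(n, 1, prob)         # observation indicator
y     <- rbinom(n, 1, theta)        # response, only available when r == 1
psi.hat <- mean(r * y / prob)       # Horvitz-Thompson estimate of psi
c(truth = mean(theta), estimate = psi.hat)
```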
In relation with the special issue of Science & Vie on Bayes’ formula, the French national radio (France Culture) organised a round table with Pierre Bessière, senior researcher in physiology at Collège de France, Dirk Zerwas, senior researcher in particle physics in Orsay, and Hervé Poirier, editor of Science & Vie. And myself (as I was quoted in the original paper). While I am not particularly fluent in oral debates, I was interested in participating in this radio experiment, if only to bring some moderation to the hyperbolic tone found in the special issue. (As the theme was “Is there a universal mathematical formula?”, I was for a while confused about the debate, thinking that maybe the previous posts on Stewart’s 17 Equations and Mackenzie’s Universe in Zero Words had prompted this invitation…)
As it happened [podcast link], the debate was quite moderate and reasonable: we discussed the genesis, the dark ages, and the risorgimento of Bayesian statistics within statistics, the lack of Bayesian perspectives in the Higgs boson analysis (bemoaned by Tony O’Hagan and Dennis Lindley), and the Bayesian nature of learning in psychology. Although I managed to mention Poincaré’s Bayesian defence of Dreyfus (thanks to The Theory That Would Not Die!), Nate Silver’s Bayesian combination of survey results, and the role of the MRC in the MCMC revolution, I found that the information content of a one-hour show was in the end quite limited, as I would also have liked to mention the role of Bayesian techniques in advances in population genetics, like the Asian beetle invasion mentioned two weeks ago… Overall, an interesting experience, maybe not with a huge impact on the population of listeners, and a confirmation that I’d better stick to the written world!
This morning I attended Alan Gelfand’s talk on directional data, i.e. on the torus (0,2π), and found his modelling via wrapped normals (i.e., normals reprojected onto the unit circle) quite interesting, raising lots of probabilistic questions. For instance, usual moments like mean and variance have no meaning in this space. The variance matrix of the underlying normal, as well as its mean, obviously matters. One thing I am wondering about is how restrictive the normal assumption is. Because of the projection, any random change to the scale of the normal vector does not impact this wrapped normal distribution, but there are certainly features that are not covered by this family. For instance, I suspect it can offer at most two modes over the range (0,2π). And that it cannot be explosive at any point.
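To see what the family looks like, here is a quick R simulation of the projection construction as I understand it, with parameter values of my own choosing (an illustration of the mechanism only, not of Gelfand’s actual model):

```r
## Draw a bivariate normal and keep only its direction, i.e. project it
## onto the unit circle; the resulting angle lives in (0, 2*pi).
library(MASS)                                 # for mvrnorm
set.seed(123)
mu  <- c(1, 0.5)
Sig <- matrix(c(1, 0.8, 0.8, 2), 2, 2)
z   <- mvrnorm(1e5, mu, Sig)
phi <- atan2(z[, 2], z[, 1]) %% (2 * pi)      # angle in (0, 2*pi)
## rescaling each vector by a positive factor leaves the angle unchanged,
## which is the scale-invariance mentioned above
hist(phi, breaks = 100, freq = FALSE, xlab = expression(phi),
     main = "angles of a projected bivariate normal")
```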
The keynote lecture this afternoon was delivered by Roderick Little, in a highly entertaining way, about calibrated Bayesian inference in official statistics. For instance, he mentioned the inferential “schizophrenia” in this field, due to the divide between design-based and model-based inferences. Although he did not define what he meant by “calibrated Bayesian” in the most explicit manner, he had this nice list of eight good reasons to be Bayesian (which came close to my own list at the end of The Bayesian Choice):
- conceptual simplicity (Bayes is prescriptive, frequentism is not), “having a model is an advantage!”
- avoiding ancillarity angst (Bayes conditions on everything)
- avoiding confidence cons (confidence is not probability)
- nails nuisance parameters (frequentists are either wrong or have a really hard time)
- escapes from asymptotia
- incorporates prior information, and if not, weak priors work fine
- Bayes is useful (25 of the top 30 cited are statisticians, out of which … are Bayesians)
- Bayesians go to Valencia! [joke! Actually it should have been Bayesians go MCMSkiing!]
- Calibrated Bayes gets better frequentist answers
He however insisted that frequentists should be Bayesians and also that Bayesians should be frequentists, hence the calibration qualification.
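As a minimal illustration of what this calibration may amount to in practice, here is a toy R check of my own (not from the talk): the frequentist coverage of a Bayesian credible interval in a simple Beta-Binomial model.

```r
## Check the repeated-sampling coverage of the 95% equal-tail credible
## interval under a Beta(1,1) prior on a binomial proportion.
set.seed(1)
p.true <- 0.3; n <- 50; M <- 1e4
covered <- replicate(M, {
  y  <- rbinom(1, n, p.true)
  ci <- qbeta(c(0.025, 0.975), 1 + y, 1 + n - y)  # posterior Beta quantiles
  (ci[1] <= p.true) && (p.true <= ci[2])
})
mean(covered)                                     # should be close to 0.95
```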
After an interesting session on Bayesian statistics, with (adaptive or not) mixtures and variational Bayes tools, I actually joined the “young statistician dinner” (without any pretense at being a young statistician, obviously) and had interesting exchanges on a whole variety of topics, esp. as Kerrie Mengersen adopted (reinvented) my dinner table switch strategy (w/o my R simulated annealing code). Until jetlag caught up with me.