Archive for hierarchical Bayesian modelling

empirically Bayesian [wISBApedia]

Posted in Statistics on August 9, 2021 by xi'an

Last week I was pointed to a puzzling entry on the “empirical Bayes” Wikipedia page. The introduction section indeed contains a description of an iterative simulation method that involves a hyperprior p(η), even though the empirical Bayes perspective does not involve a hyperprior.

While the entry is vague and lacks formulae

These suggest an iterative scheme, qualitatively similar in structure to a Gibbs sampler, to evolve successively improved approximations to p(θ|y) and p(η|y). First, calculate an initial approximation to p(θ|y) ignoring the η dependence completely; then calculate an approximation to p(η|y) based upon the initial approximate distribution of p(θ|y); then use this p(η|y) to update the approximation for p(θ|y); then update p(η|y); and so on.

it sounds essentially equivalent to a Gibbs sampler, possibly a multiple-try Gibbs sampler (unless the author had another notion in mind, alas impossible to guess since no reference is included).
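For concreteness, here is a minimal sketch of what the entry describes when read literally as a Gibbs sampler, on a toy Normal hierarchical model of my own choosing (the model, hyperprior variance, and sample size are all assumptions of mine, not taken from the Wikipedia page): the chain alternates between simulating θ given (η, y) and η given θ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hierarchical model (my own choice, not from the Wikipedia entry):
#   y_i | theta_i ~ N(theta_i, 1),  theta_i | eta ~ N(eta, 1),  eta ~ N(0, tau2)
tau2 = 100.0
y = rng.normal(2.0, 1.5, size=20)       # synthetic data
n = y.size

T = 5000
etas = np.empty(T)
eta = 0.0
for t in range(T):
    # theta_i | eta, y_i ~ N((y_i + eta)/2, 1/2)
    theta = rng.normal((y + eta) / 2, np.sqrt(0.5))
    # eta | theta ~ N(sum(theta)/(n + 1/tau2), 1/(n + 1/tau2))
    prec = n + 1 / tau2
    eta = rng.normal(theta.sum() / prec, np.sqrt(1 / prec))
    etas[t] = eta

print(etas[T // 2:].mean())             # posterior mean of the hyperparameter
```

With an actual hyperprior in the model, this is plain Gibbs sampling, which is precisely the point: nothing in the scheme is empirically Bayesian.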

Beyond this specific case, where I think the entire paragraph should be erased from the “empirical Bayes” Wikipedia page, I discussed the general problem of some poor Bayesian entries in Wikipedia with Robin Ryder, who came up with the neat idea of running (collective) Wikipedia editing labs at ISBA conferences. If we could further give an ISBA label to these entries, as a certificate of “Bayesian orthodoxy” (!), it would be terrific!

latent variables for a hierarchical Poisson model

Posted in Books, Kids, pictures, Statistics, University life on March 11, 2021 by xi'an

Answering a question on X validated about a rather standard hierarchical Poisson model, and its posterior Gibbs simulation, where observations are (d and w being a document and a word index, resp.)

N_{w,d}\sim\mathcal P(\textstyle\sum_{1\le k\le K} \pi_{k,d}\varphi_{k,w})\qquad(1)

I found myself dragged into an extended discussion on the validity of creating independent Poisson latent variables

N_{k,w,d}\sim\mathcal P(\pi_{k,d}\varphi_{k,w})\qquad(2)

since observing their sum in (1) prevents the latent variables in (2) from being independent. And then found out that the originator of the question had asked a much more detailed question on X validated in 2016, left unanswered, even though the notations differ. That question does contain the solution I proposed above, namely the Multinomial distribution of the Poisson latent variables given their sum (and the true parameters). As it should, since the derivation was done in a linked 2014 paper by Gopalan, Hofman, and Blei, later published in the Proceedings of the 31st Conference on Uncertainty in Artificial Intelligence (UAI). I am thus bemused at the question resurfacing five years later in a much simplified version, but still exhibiting the same difficulty with the conditioning principles…
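For concreteness, here is a minimal numpy sketch of this conditioning step (the dimensions and Gamma-generated parameter values are arbitrary assumptions of mine): conditional on the observed total N_{w,d} and on the parameters, the latent Poisson counts in (2) are jointly Multinomial, with weights proportional to π_{k,d}φ_{k,w}.

```python
import numpy as np

rng = np.random.default_rng(1)

K, D, W = 3, 5, 8                        # topics, documents, words (toy sizes)
pi = rng.gamma(1.0, 1.0, (K, D))         # pi_{k,d}, standing in for true values
phi = rng.gamma(1.0, 1.0, (K, W))        # phi_{k,w}

# observed counts, model (1): N_{w,d} ~ Poisson(sum_k pi_{k,d} phi_{k,w})
rate = np.einsum('kd,kw->dw', pi, phi)
N = rng.poisson(rate)                    # shape (D, W)

# Gibbs step for the latents (2): given N_{w,d} and the parameters,
# (N_{1,w,d}, ..., N_{K,w,d}) ~ Multinomial(N_{w,d}, p), p_k ∝ pi_{k,d} phi_{k,w}
latent = np.zeros((K, D, W), dtype=int)
for d in range(D):
    for w in range(W):
        p = pi[:, d] * phi[:, w]
        latent[:, d, w] = rng.multinomial(N[d, w], p / p.sum())

assert (latent.sum(axis=0) == N).all()   # the latents add up to the observations
```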

Bayes @ NYT

Posted in Books, Kids, Statistics, University life on August 8, 2020 by xi'an

An op-ed in yesterday's NYT on the importance of being Bayesian. When an epidemiologist. A piece that was forwarded to me by a few friends (and which I missed in my addictive monitoring of the journal!). It is written by a Canadian journalist covering mathematics (and obviously statistics), and it brings to the general public the main motivation for adopting a Bayesian approach, namely its coherent handling of uncertainty and its ability to update in the face of new information. (Although it might be noted that other flavours of statistical analysis are also able to update their conclusions when given more data.) The COVID situation is a perfect case study in Bayesianism, in that there are so many levels of uncertainty and imprecision, from the models themselves, to the data, to the outcome of the tests, &tc. The article is journalistic, of course, but it quotes a range of statisticians and epidemiologists, including Susan Holmes, who I learned was quarantined for 105 days in rural Portugal!, developing a hierarchical Bayes modelling of the prevalent SEIR model, and David Spiegelhalter, discussing Cromwell's Law (or better, the humility law, avoiding as it does the reference to a fanatical and tyrannical Puritan who put Ireland to fire and the sword!, and who had in fact very little humility himself). Reading the comments is both hilarious (it does not take long to reach the point when Trump is mentioned, and Taleb's stance on models and tails makes an appearance) and revealing, as many readers do not understand the meaning of Bayes' inversion between causes and effects, or even the meaning of Jeffreys' bar, |, as conditioning.

latent nested nonparametric priors

Posted in Books, Statistics on September 23, 2019 by xi'an

A paper on an extended type of non-parametric priors by Camerlenghi et al. [all good friends!] is about to appear in Bayesian Analysis, with a discussion open for contributions (until October 15). While a fairly theoretical piece of work, it validates a Bayesian approach to the non-parametric clustering of separate populations with, broadly speaking, common clusters. More formally, it constructs a new family of models that allows for partial or complete equality between two probability measures, without forcing full identity when the associated samples share some common observations. Indeed, the more traditional structures prohibit one or the other: the Dirichlet process (DP) prohibits two probability measure realisations from being equal or even partly equal; the hierarchical DP (HDP) already allows for common atoms across measure realisations, but prohibits complete identity between two realised distributions; and the nested DP offers one extra level of randomness, but with an infinity of DP realisations that prohibits common atomic support short of completely identical support (and hence distribution).

The current paper imagines two realisations of random measures, each written as the sum of a common random measure and of one of two separate, almost independent, random measures: (14) is the core formula of the paper, allowing for partial or total equality. An extension to a setting with more than two samples seems complicated, if only because of the number of common measures one then has to introduce, from the totally common measure to measures shared by only a subset of the samples, except in the simplified framework where a single, universally common measure is adopted (with enough justification). The randomness of the model is handled via distinct completely random measures, involving something like four degrees of hierarchy in the Bayesian model.
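To illustrate the construction, very crudely and under my own assumptions (finite-atom Gamma-jump stand-ins for the completely random measures, a standard Normal base measure, and truncation levels picked arbitrarily, none of which is the paper's exact construction), here is a simulation sketch of two random measures sharing a common component:

```python
import numpy as np

rng = np.random.default_rng(2)

def crude_crm(n_atoms, rng):
    """Crude finite-atom stand-in for a completely random measure:
    atoms from a N(0,1) base, independent Gamma jumps (a truncation,
    not the exact construction of the paper)."""
    atoms = rng.normal(0.0, 1.0, n_atoms)
    jumps = rng.gamma(1.0 / n_atoms, 1.0, n_atoms)
    return atoms, jumps

# common part mu_0 and idiosyncratic parts mu_1*, mu_2*
a0, j0 = crude_crm(50, rng)
a1, j1 = crude_crm(50, rng)
a2, j2 = crude_crm(50, rng)

# mu_l = mu_0 + mu_l*, then normalise to get random probability measures:
# the two measures share the atoms of mu_0, hence are partly (not fully) equal
atoms1, w1 = np.concatenate([a0, a1]), np.concatenate([j0, j1])
atoms2, w2 = np.concatenate([a0, a2]), np.concatenate([j0, j2])
p1, p2 = w1 / w1.sum(), w2 / w2.sum()

# sampling from either population then reduces to sampling its atoms
x1 = rng.choice(atoms1, size=10, p=p1)
x2 = rng.choice(atoms2, size=10, p=p2)
print(np.intersect1d(x1, x2))  # shared draws can only come from mu_0's atoms
```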

Since the example is somewhat central to the paper, the case of one, or rather two, two-component Normal mixtures with a common component (but with different mixture weights) is handled by the approach, although it seems it was already covered by the HDP. Having exactly the same term (i.e., with the very same weight) is not, but this may be less interesting in real-life applications. Note that alternative, easily constructed, parametric constructs are already available in this specific case, involving a limited prior input and a lighter computational burden, although the Gibbs sampler behind the model proves extremely simple in the paper. (One may wonder at the robustness of the sampler once the case of identical distributions is visited.)

Due to the combinatorial explosion associated with a higher number of observed samples, despite obvious practical situations, one may wonder at a feasible (and possibly sequential) extension that would further remain coherent under marginalisation (in the number of samples). And also at whether multiple testing could be coherently envisioned in this setting, for instance when handling all hospitals in the UK. Another consistency question concerns the Bayes factor used to assess whether or not the two distributions behind the samples are identical. (One may wonder at the importance of this question, hopefully applied to a more relevant dataset than the Iris data!)

ABC with Gibbs steps

Posted in Statistics on June 3, 2019 by xi'an

With Grégoire Clarté, Robin Ryder and Julien Stoehr, all from Paris-Dauphine, we have just arXived a paper on the specifics of ABC-Gibbs, which is a version of ABC where the generic ABC accept-reject step is replaced by a sequence of n conditional ABC accept-reject steps, each aiming at an ABC version of a conditional distribution extracted from the joint and intractable target. Hence an ABC version of the standard Gibbs sampler. What makes it so special is that each conditional can (and should) condition on a different statistic, in order to decrease the dimension of this statistic, ideally down to the dimension of the corresponding component of the parameter. This successfully bypasses the curse of dimensionality, but immediately meets two difficulties. The first one is that the resulting sequence of conditionals is not coherent, since it is not a Gibbs sampler on the ABC target. The conditionals are thus incompatible, and the convergence of the associated Markov chain therefore becomes an issue. We produce sufficient conditions for the Gibbs sampler to converge to a stationary distribution despite the incompatible conditionals. The second problem is that, provided it exists, this limiting and also intractable distribution does not enjoy a Bayesian interpretation, hence may fail to be justified from an inferential viewpoint. We however succeed in producing a version of ABC-Gibbs in a hierarchical model where the limiting distribution can be made explicit and, even better, can be weighted towards recovering the original target. (At least in the limit of a zero tolerance.)
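Here is a minimal sketch of this ABC-within-Gibbs structure, on a toy Normal hierarchical model (the model, tolerances, and choices of statistics are my own illustrative assumptions, not the experiments of the paper): each conditional runs its own ABC accept-reject step with its own low-dimensional statistic.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy hierarchical model (illustration only):
#   alpha ~ N(0, 10),  mu_j | alpha ~ N(alpha, 1),  y_{j,.} | mu_j ~ N(mu_j, 1)
J, n = 5, 30
true_mu = rng.normal(2.0, 1.0, J)
y = rng.normal(true_mu[:, None], 1.0, (J, n))
ybar = y.mean(axis=1)                    # statistic for each mu_j conditional

eps_mu, eps_alpha = 0.1, 0.1             # arbitrary tolerances
T = 1000
alpha, mu = 0.0, np.zeros(J)
alphas = np.empty(T)
for t in range(T):
    # ABC step for each mu_j, conditioning only on its own statistic ybar_j
    for j in range(J):
        while True:
            prop = rng.normal(alpha, 1.0)        # draw from the conditional prior
            z = rng.normal(prop, 1.0, n)         # pseudo-data
            if abs(z.mean() - ybar[j]) < eps_mu:
                mu[j] = prop
                break
    # ABC step for alpha, conditioning on the low-dimensional statistic mean(mu)
    while True:
        prop = rng.normal(0.0, np.sqrt(10.0))    # hyperprior draw
        pseudo_mu = rng.normal(prop, 1.0, J)     # pseudo-parameters
        if abs(pseudo_mu.mean() - mu.mean()) < eps_alpha:
            alpha = prop
            break
    alphas[t] = alpha

print(alphas[T // 2:].mean())
```

Note how the α update conditions on the simulated μ's through their average only: this dimension reduction is what defeats the curse of dimensionality, and also where the incompatibility between the conditionals creeps in.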
