Archive for noninformative priors

noninformative Bayesian prior with a finite support

Posted in Statistics, University life on December 4, 2018 by xi'an

A few days ago, Pierre Jacob pointed me to a PNAS paper published earlier this year on a form of noninformative Bayesian analysis by Henri Mattingly and coauthors. They consider a prior that “maximizes the mutual information between parameters and predictions”, which sounds very much like José Bernardo’s notion of reference priors. With the rather strange twist of having the prior depend on the data size m even though they work under an iid assumption. Here information is defined as the difference between the entropy of the prior and the conditional entropy, which is not precisely defined in the paper but looks like the expected [in the data x] Kullback-Leibler divergence between prior and posterior. (I have general issues with the paper, in that I often found it hard to read for its lack of precision and its failure to define the main notions.)
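
For the record, and assuming I read the loosely defined notion correctly, the quantity being maximised should be the standard mutual information identity, with π the prior and m(x) the prior predictive:

$$ I(\theta;x) \;=\; H(\theta) - H(\theta\mid x) \;=\; \mathbb{E}_{m(x)}\Big[\,\mathrm{KL}\big(\pi(\theta\mid x)\,\big\|\,\pi(\theta)\big)\,\Big], $$

which is indeed the objective behind Bernardo’s reference priors.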

One highly specific (and puzzling to me) feature of the proposed priors is that they are supported by a finite number of atoms, which reminds me very much of the (minimax) least favourable priors over compact parameter spaces, as for instance in the iconic paper by Casella and Strawderman (1981), presumably for the same mathematical reason that non-constant analytic functions must have separated maxima. This is conducted under the assumption and restriction of a compact parameter space, which must in most cases be chosen somewhat arbitrarily and not without consequences. I can somehow relate to the notion that a finite support prior translates the limited precision in the estimation brought by a finite sample. In other words, given a sample size of m, there is a maximal precision one can hope for, and producing further decimals is silly. Still, the fact that the support of the prior is fixed a priori, completely independently of the data, is both unavoidable (for the prior to be prior!) and very dependent on the choice of the compact set. I would certainly prefer to see a maximal degree of precision expressed a posteriori, meaning that the support would then depend on the data. And handling finite support posteriors is rather awkward, in that many notions like confidence intervals do not make much sense in that setup. (Similarly, one could argue that Bayesian non-parametric procedures lead to estimates with a finite number of support points, but these are determined based on the data, not a priori.)

Interestingly, the derivation of the “optimal” prior proceeds by iterations, where the next prior is the renormalised version of the current prior times the exponentiated Kullback-Leibler divergence, which is “guaranteed to converge to the global maximum” for a discretised parameter space. The authors acknowledge that this resolution is poorly suited to multidimensional settings and hence to complex models, and indeed the paper only covers a few toy examples of moderate and even humble dimensions.
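
This update is, at least formally, the Blahut-Arimoto algorithm from rate-distortion theory. A minimal sketch of my reading of it, on a Binomial model over a discretised grid (the grid, the sample size m, and the stopping rule are all arbitrary choices of mine):

```python
import numpy as np
from scipy.stats import binom

# Blahut-Arimoto-type iteration towards the mutual-information-maximising
# prior, sketched for x ~ Binomial(m, theta) on a discretised grid of thetas.
m = 10                                   # sample size
thetas = np.linspace(0.01, 0.99, 99)     # discretised (compact) parameter space
xs = np.arange(m + 1)                    # support of the data

# likelihood matrix, one row per theta: lik[i, j] = p(x_j | theta_i)
lik = np.array([binom.pmf(xs, m, th) for th in thetas])

prior = np.full(len(thetas), 1 / len(thetas))      # start from the uniform
for _ in range(10_000):
    marg = prior @ lik                             # current marginal p(x)
    kl = np.sum(lik * np.log(lik / marg), axis=1)  # KL(p(.|theta) || marginal)
    new = prior * np.exp(kl)                       # multiplicative update...
    new /= new.sum()                               # ...renormalised
    if np.max(np.abs(new - prior)) < 1e-12:
        break
    prior = new

# the limit concentrates on a handful of atoms
print(np.round(thetas[prior > 1e-3], 2), np.round(prior[prior > 1e-3], 3))
```

Running this sketch does produce a prior concentrating on a small number of atoms, consistent with the finite support feature discussed above.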

Another difficulty with the paper is the absence of temporal consistency: since the prior depends on the sample size m, the posterior based on the first m i.i.d. observations is no longer the prior for the (m+1)st observation.

“Because it weights the irrelevant parameter volume, the Jeffreys prior has strong dependence on microscopic effects invisible to experiment”

I simply do not understand the above sentence, which apparently counts as a criticism of Jeffreys (1939). And would appreciate anyone enlightening me! The paper then compares priors through Bayes factors, which ignores the main difficulty with an automated solution such as the Jeffreys prior, namely its inability to handle infinite parameter spaces, being almost invariably improper there.

visual effects

Posted in Books, pictures, Statistics on November 2, 2018 by xi'an

As advertised and re-discussed by Dan Simpson on the Statistical Modeling, &tc. blog he shares with Andrew and a few others, the paper Visualization in Bayesian workflow he wrote with Jonah Gabry, Aki Vehtari, Michael Betancourt and Andrew Gelman was one of three discussed at the RSS conference in Cardiff last month, as a Read Paper for Series A. I had stored the paper when it came out, towards reading and discussing it, but as often this good intention led to no concrete ending. [Except concrete as in concrete shoes…] Hence a few notes rather than a discussion in Series A.

Exploratory data analysis goes beyond just plotting the data, which should sound reasonable to all modeling readers.

Fake data [not fake news!] can be almost [more!] as valuable as real data for building your model, oh yes!, this is the message I am always trying to convey to my first year students, when arguing about the connection between models and simulation, as well as a defense of ABC methods. And more globally of the very idea of statistical modelling. While indeed “Bayesian models with proper priors are generative models”, I am not a particular fan of using the prior predictive [or the evidence] to assess the prior, as it may end up classifying more or less all priors as terrible, since most give very little weight to neighbourhoods of high likelihood values. Still, in a discussion of a TAS paper by Seaman et al. on the role of the prior, Kaniav Kamary and I produced prior assessments that were similar to the comparison illustrated in Figure 4. (And this makes me wonder which point we missed in this discussion, according to Dan.) Unhappy am I with the weakly informative prior illustration (and concept), as the amount of fudging and calibrating needed to move from the immensely vague choice of N(0,100) to the fairly tight choice of N(0,1) or N(1,1) is not provided. The paper reads as if these priors were the obvious and first choice of the authors. I completely agree with the warning that “the utility of the prior predictive distribution to evaluate the model does not extend to utility in selecting between models”.
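
To make the above concrete, here is a minimal prior predictive simulation for a toy logistic regression, contrasting the N(0,100) and N(0,1) priors mentioned above (the design matrix, dimensions, and thresholds are all hypothetical choices of mine):

```python
import numpy as np
from scipy.special import expit   # numerically stable logistic function

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))       # hypothetical design matrix

def prior_predictive_extremes(sd, draws=1000):
    """Share of prior predictive success probabilities within 1% of 0 or 1,
    under the prior beta ~ N(0, sd^2 I)."""
    beta = rng.normal(scale=sd, size=(draws, p))
    probs = expit(X @ beta.T)             # n x draws success probabilities
    return np.mean((probs < 0.01) | (probs > 0.99))

for sd in (100.0, 1.0):
    print(f"sd = {sd:>5}: share of near-degenerate probabilities "
          f"≈ {prior_predictive_extremes(sd):.2f}")
```

Under N(0,100) essentially all simulated probabilities pile up at 0 or 1, an immediate warning that the “vague” prior is in fact extreme, which is precisely what such prior predictive simulations are meant to reveal.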

MCMC diagnostics, beyond trace plots, yes again, but this recommendation sounds a wee bit outdated. (As does our 1998 review!) Figure 5(b) links different parameters of the model with lines, which does not clearly relate to a better understanding of convergence. Figure 5(a) does not tell much either, since the green (divergent) dots stand within the black dots, at least in the projected 2D plot (and how can one reach beyond 2D?) Feels like I need to rtfm..!

“Posterior predictive checks are vital for model evaluation”, to the point that I find Figure 6 much more to my liking and closer to my practice. There could have been a reference to Ratmann et al. for ABC, where graphical measures of discrepancy were used in conjunction with ABC output as direct tools for model assessment and comparison. Essentially predicting a zero error with the ABC posterior predictive. And of course “posterior predictive checking makes use of the data twice, once for the fitting and once for the checking.” Which means one should either resort to loo solutions (as mentioned in the paper) or call for a calibration of the double use by re-simulating pseudo-datasets from the posterior predictive. I find the suggestion that “it is a good idea to choose statistics that are orthogonal to the model parameters” somewhat antiquated, in that it sounds like a rephrasing of the primeval call to ancillary statistics for model assessment (Kiefer, 1975), while being pretty hard to implement in modern complex models.

a jump back in time

Posted in Books, Kids, Statistics, Travel, University life on October 1, 2018 by xi'an

As the Department of Statistics in Warwick is slowly emptying its shelves and offices for the big migration to the new building that is almost completed, books and documents are abandoned in the corridors and the work spaces. On this occasion, I thus happened to spot a vintage edition of the Valencia 3 proceedings. I had missed this meeting, and hence the volume, for, during the last year of my PhD, I was drafted in the French Navy and as a result prohibited from travelling abroad. (Although on reflection I could have safely done it with no one in the military the wiser!) Reading through the papers thirty years later is a weird experience, as I do not remember most of them, the exception being the mixture modelling paper by José Bernardo and Javier Girón, which I studied a few years later when writing the mixture estimation and simulation paper with Jean Diebolt. And then again in our much more recent non-informative paper with Clara Grazian. And Prem Goel’s survey of Bayesian software. That is, 1987 state-of-the-art software. Covering an amazing list of eighteen programs, including versions by Zellner, Tierney, Schervish, Smith [but no MCMC], Jaynes, Goldstein, Geweke, van Dijk, and Bauwens, which apparently did not survive the ages till now. Most were in Fortran, but S was also mentioned. And another paper by Tierney, Kass and Kadane on Laplace approximations. And the reference paper of Dennis Lindley [who was already retired from UCL at that time!] on the Hardy-Weinberg equilibrium. And another paper by Don Rubin on using SIR (Rubin, 1983) for simulating from posterior distributions with missing data. Ten years before the particle filter paper, and apparently missing the possibility of weights with infinite variance.

There already were some illustrations of Bayesian analysis in action, including one by Jay Kadane reproduced in his book. And several papers by Jim Berger, Tony O’Hagan, Luis Pericchi and others on imprecise Bayesian modelling, which was in tune with the era, the imprecise probability book by Peter Walley being about to appear. And a paper by Shaw on numerical integration that mentioned quasi-random methods, applied to a 12-component Normal mixture. Overall, a much less theoretical content than I would have expected. And nothing about shrinkage estimators, although a fraction of the speakers had worked on this topic until quite recently.

At a less fundamental level, this was a time when LaTeX was becoming a standard, as shown by a few papers in the volume (and as I was to find when visiting Purdue the year after), even though most were still typed on a typewriter, including a manuscript addition by Dennis Lindley. And Warwick appeared as a Bayesian hotspot!, with at least five papers written by people there permanently or on a long-term visit. (In case a local is interested in it, I have kept the volume, to be found in my new office!)

improperties on an astronomical scale

Posted in Books, pictures, Statistics on December 15, 2017 by xi'an

As pointed out by Peter Coles on his blog, In the Dark, Hyungsuk Tak, Sujit Ghosh, and Justin Ellis just arXived a review of the unsafe use of improper priors in astronomy papers, 24 out of 75 having failed to establish that the corresponding posteriors are well-defined. And they exhibit such an instance (of impropriety) in a MNRAS paper by Pihajoki (2017), which is a complexification of Gelfand et al. (1990), also used by Jim Hobert in his thesis. (Even though the formal argument used to show the impropriety of the posterior in Pihajoki’s paper does not sound right, since it considers divergence at a single value of a parameter β.) Besides repeating this warning about an issue that was rather quickly identified in the infancy of MCMC, if not in the very first publications on the Gibbs sampler, the paper seems to argue against using improper priors due to this potential danger, stating instead that proper priors that include all likely values and beyond are to be preferred. Which reminds me of the BUGS feature of using a N(0,10⁹) prior instead of the flat prior, missing the fact that “very large” variances do impact the resulting inference (if only for the issue of model comparison, remember Lindley-Jeffreys!). And are informative in that sense. However, it is obviously a good idea to advise checking for propriety (!), and using such alternatives may serve as a safety net, providing a comparison benchmark to spot possible divergences in the resulting inference.
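
The Lindley-Jeffreys effect is easy to reproduce numerically: in a Normal mean test, the marginal under the alternative flattens as the prior variance grows, mechanically inflating the Bayes factor in favour of the null. A quick sketch, with made-up data summaries:

```python
import numpy as np
from scipy.stats import norm

# H0: mu = 0 versus H1: mu ~ N(0, tau^2), with xbar | mu ~ N(mu, 1/n).
# Under H1 the marginal of xbar is N(0, tau^2 + 1/n), so inflating tau^2
# drives the Bayes factor B01 towards H0, whatever the data.
n, xbar = 25, 0.5                        # hypothetical data summary
m0 = norm.pdf(xbar, 0, np.sqrt(1 / n))   # marginal (likelihood) under H0
for tau2 in (1.0, 1e2, 1e9):             # the BUGS-style vague prior last
    m1 = norm.pdf(xbar, 0, np.sqrt(tau2 + 1 / n))
    print(f"tau^2 = {tau2:.0e}: B01 = {m0 / m1:.3g}")
```

With tau² = 10⁹ the “noninformative” prior yields overwhelming support for the null, which is the sense in which such priors are anything but noninformative for model comparison.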

O’Bayes in action

Posted in Books, Kids, Statistics, University life on November 7, 2017 by xi'an

My next-door colleague [at Dauphine] François Simenhaus shared a paradox [to be developed in an incoming test!] with Julien Stoehr and me last week, namely that, when selecting the largest number between a [observed] and b [unobserved], drawing a random boundary on a [meaning that a is chosen iff a is larger than this boundary] increases the probability to pick the largest number above ½…

When thinking about it in the wretched RER train [train that got immobilised for at least two hours just a few minutes after I went through!, good luck to the passengers travelling to the airport…] to De Gaulle airport, I lost the argument: if a<b, the probability [for this random bound] to be larger than a, and hence of selecting b, is 1-Φ(a), while, if a>b, the probability [of winning] is Φ(a). Hence the only case when the probability is ½ is when a is the median of this random variable. But, when discussing the issue further with Julien, I exposed an interesting non-informative prior characterisation. Namely, if I assume a,b to be iid U(0,M) and set an improper prior 1/M on M, the conditional probability that b>a given a is ½. Furthermore, the posterior probability to pick the right [largest] number with François’s randomised rule is also ½, no matter what the distribution of the random boundary is. Now, the most surprising feature of this coffee room derivation is that these properties only hold for the prior 1/M. Any other power of M will induce an asymmetry between a and b. (The same properties hold when a,b are iid Exp(M).) Of course, this is not absolutely unexpected, since 1/M is the invariant prior and since the “intuitive” symmetry only holds under this prior. Power to O’Bayes!
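
For the record, here is my reconstruction of the coffee room derivation under the scale prior π(M) ∝ 1/M:

$$ \pi(M\mid a) \;\propto\; \frac{1}{M}\times\frac{1}{M}\,\mathbb{I}_{M>a} \quad\Longrightarrow\quad \pi(M\mid a) \;=\; \frac{a}{M^{2}}\,\mathbb{I}_{M>a}, $$

$$ \mathbb{P}(b>a\mid a) \;=\; \int_{a}^{\infty} \frac{M-a}{M}\,\frac{a}{M^{2}}\,\mathrm{d}M \;=\; a\left[\frac{1}{a}-\frac{a}{2a^{2}}\right] \;=\; \frac{1}{2}, $$

and any other power of M in the prior breaks this equality, as stated above.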

When discussing the matter again with François yesterday, I realised I had changed his wording of the puzzle. The original setting is one with two cards hiding the unknown numbers a and b and a player picking one of the cards. If the player picks a card at random, there is indeed a probability of ½ of picking the largest number. If the decision to switch or not depends on an independent random draw being larger or smaller than the number on the observed card, the probability to get max(a,b) in the end hits 1 when this random draw falls between a and b, and remains ½ otherwise. Randomisation pays.
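
A quick Monte Carlo confirmation of this, with all distributions being arbitrary choices of mine:

```python
import numpy as np

# Two-card switching rule: observe one card, switch iff an independent
# N(0,1) draw exceeds the observed number. The rule wins for sure when
# the draw falls between a and b, and with probability 1/2 otherwise,
# hence an overall winning probability strictly above 1/2.
rng = np.random.default_rng(1)
N = 10**6
a, b = rng.normal(size=(2, N))              # hidden numbers on the cards
seen = np.where(rng.random(N) < 0.5, a, b)  # card picked at random
other = a + b - seen                        # the card left on the table
z = rng.normal(size=N)                      # independent random boundary
final = np.where(z > seen, other, seen)     # switch iff boundary > seen
print(np.mean(final == np.maximum(a, b)))   # ≈ 2/3 for iid N(0,1) draws
```

The 2/3 figure follows from ½ + ½P(z between a and b) = ½ + ⅙ when a, b, and z are iid.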

priors without likelihoods are like sloths without…

Posted in Books, Statistics on September 11, 2017 by xi'an

“The idea of building priors that generate reasonable data may seem like an unusual idea…”

Andrew, Dan, and Michael arXived an opinion piece last week entitled “The prior can generally only be understood in the context of the likelihood”. Which connects to the earlier Read Paper of Gelman and Hennig I discussed last year. I cannot state strong disagreement with the positions taken in this piece, actually, in that I do not think prior distributions ever occur as a given but are rather chosen as a reference measure to probabilise the parameter space and eventually prioritise regions over others. If anything, I find myself even further along the prior agnosticism gradation. (Of course, this lack of disagreement applies to the likelihood understood as a function of both the data and the parameter, rather than of the parameter only, conditional on the data. Priors cannot depend on the data without incurring disastrous consequences!)

“…it contradicts the conceptual principle that the prior distribution should convey only information that is available before the data have been collected.”

The first example is somewhat disappointing, in that it revolves, as so many Bayesian textbooks (since Laplace!), around the [sex ratio] Binomial probability parameter and concludes at the strong or long-lasting impact of the Uniform prior. I do not see much of a contradiction between the use of a Uniform prior and the collection of prior information, if only because there is no standardised way to transfer prior information into prior construction. And more fundamentally because a parameter rarely makes sense by itself, alone, without a model that relates it to potential data. As for instance in a regression model. More, following my epiphany of last semester about the relativity of the prior, I see no damage in the prior being relevant, as I only attach a relative meaning to statements based on the posterior. Rather than trying to limit the impact of a prior, we should build assessment tools to measure this impact, for instance by prior predictive simulations. And this is where I come to quite agree with the authors.
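
For the record, the textbook computation behind this example: with x successes out of n trials and a Uniform prior on the probability θ,

$$ \theta \sim \mathcal{U}(0,1),\quad x\mid\theta \sim \mathcal{B}(n,\theta) \quad\Longrightarrow\quad \theta\mid x \sim \mathcal{B}e(x+1,\,n-x+1), \qquad \mathbb{E}[\theta\mid x]=\frac{x+1}{n+2}, $$

so the posterior mean shrinks the MLE x/n towards ½, an impact that indeed persists at moderate sample sizes but vanishes as n grows.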

“…non-identifiabilities, and near nonidentifiabilites, of complex models can lead to unexpected amounts of weight being given to certain aspects of the prior.”

Another rather straightforward remark is that non-identifiable models see the impact of a prior remain as the sample size grows. And I still see no issue with this fact in a relative approach. When the authors mention (p.7) that purely mathematical priors perform more poorly than weakly informative priors, it is hard to see what they mean by this “performance”.

“…judge a prior by examining the data generating processes it favors and disfavors.”

Besides those points, I completely agree with them about the fundamental relevance of the prior as a generative process, if only when the likelihood becomes available. And simulatable. (This point is found in many references, including our response to the American Statistician paper Hidden dangers of specifying noninformative priors, with Kaniav Kamary. With the same illustration on a logistic regression.) I also agree with their criticism of the marginal likelihood and Bayes factors as being so strongly impacted by the choice of a prior, when treated as absolute quantities. I also, if more reluctantly and somewhat heretically, see a point in using the posterior predictive for assessing whether a prior is relevant for the data at hand. At least at a conceptual level. I am however less certain about how to handle improper priors based on their recommendations. In conclusion, it would be great to see one [or more] of the authors at O-Bayes 2017 in Austin, as I am sure it would prompt nice discussions there! (And by the way, I have no prior idea of how to conclude the comparison in the title!)

Jeffreys priors for mixtures [or not]

Posted in Books, Statistics, University life on July 25, 2017 by xi'an

Clara Grazian and I have just arXived [and submitted] a paper on the properties of Jeffreys priors for mixtures of distributions. (An earlier version had not been deemed of sufficient interest by Bayesian Analysis.) In this paper, we consider the formal Jeffreys prior for a mixture of Gaussian distributions and examine whether or not it leads to a proper posterior with a sufficient number of observations. In general, it does not, and hence it cannot be used as a reference prior. While this is a negative result (and this is why Bayesian Analysis did not deem it of sufficient importance), I find it definitely relevant because it shows that the default reference prior [in the sense that the Jeffreys prior is the primary choice in noninformative settings] does not operate in this wide class of distributions. What is surprising is that the use of a Jeffreys-like prior on a global location-scale parameter (as in our 1996 paper with Kerrie Mengersen or our recent work with Kaniav Kamary and Kate Lee) remains legit if proper priors are used on all the other parameters. (This may be yet another illustration of the tequila-like toxicity of mixtures!)

Francisco Rubio and Mark Steel had already exhibited this difficulty of the Jeffreys prior for mixtures of densities with disjoint supports [which reveal the mixture latent variable and hence turn the problem into something different]. Which relates to another point of interest in the paper, derived from a 1988 [Valencia Conference!] paper by José Bernardo and Javier Girón, where they show the posterior associated with a Jeffreys prior on a mixture is proper when (a) only estimating the weights p and (b) using densities with disjoint supports. José and Javier use in this paper an astounding argument that I had not seen before and which took me a while to ingest and accept. Namely, the Jeffreys prior on an observed model with latent variables is bounded from above by the Jeffreys prior on the corresponding completed model. Hence if the latter leads to a proper posterior for the observed data, so does the former. Very smooth, indeed!!!
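
Since the Fisher information of a mixture has no closed form, even the weight-only case above calls for numerical integration. A sketch of what computing the Jeffreys prior on the weight p looks like, for a toy two-component Gaussian mixture of my choosing (with overlapping, not disjoint, supports):

```python
import numpy as np
from scipy.integrate import quad, trapezoid
from scipy.stats import norm

# Jeffreys prior on the weight p of the mixture p N(0,1) + (1-p) N(2,1):
# pi(p) ∝ sqrt(I(p)) with I(p) = ∫ (f1(x) - f2(x))² / f(x) dx, obtained
# from the score d/dp log f = (f1 - f2)/f, where f = p f1 + (1-p) f2.
mu1, mu2 = 0.0, 2.0

def fisher_info(p):
    def integrand(x):
        f1, f2 = norm.pdf(x, mu1), norm.pdf(x, mu2)
        return (f1 - f2) ** 2 / (p * f1 + (1 - p) * f2)
    return quad(integrand, -10.0, 12.0)[0]   # truncation covers all the mass

ps = np.linspace(0.01, 0.99, 99)
jeffreys = np.sqrt([fisher_info(p) for p in ps])
jeffreys /= trapezoid(jeffreys, ps)          # normalised over the grid
print(jeffreys[[0, 49, 98]])                 # higher weight near p = 0 and 1
```

In this toy case the resulting prior on the weight is proper (the Fisher information even stays bounded at the endpoints), in contrast with the full-parameter Jeffreys prior whose posterior impropriety our paper documents.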

Actually, we still support the use of the Jeffreys prior, but only for the mixture weights, because it enjoys the property, established by Judith and Kerrie, of being conservative about the number of components. Obviously, we cannot advocate its use over all the parameters of the mixture, since it then leads to an improper posterior.