Archive for reparameterisation

Bertrand’s paradox [re]solved?

Posted in Books, pictures, Statistics, Travel on September 29, 2023 by xi'an

On the plane back from Vancouver, I read Bertrand’s Paradox Resolution and Its Implications for the Bing–Fisher Problem by Richard A. Chechile [who had pointed out his paper to me]. In this paper, Chechile considers the Bayesian connections/consequences of Bertrand’s paradox, as he sees Bertrand’s different solutions to the paradox to be

“designed to illustrate his dissatisfaction with the Bayes and Laplace use of a probability distribution to represent an unknown parameter that can have any continuous value”

and proposes to “resolve” this paradox, which imho is neither a paradox nor in need of a resolution!, as I see it more as a reflection on the importance of sigma-algebras and measure theory. The uniform distribution (behind the “random” chord) is not a uniquely specified concept, just like the maximum entropy distribution is only defined relative to the dominating measure. When arguing that

“Such a definition [based on any possible distribution of a stochastic chord] would yield a random variable, but this weak sense of the word random is not satisfactory, because there is an infinite number of stochastic processes that can be defined to yield a probability distribution of chord lengths.”

the author is simply restating that infinite collection of dominating measures. But imho he is somewhat missing this point when defining Shannon’s entropy by resorting to a discrete version, and when adopting a uniform measure on the chord length as a reference (Section 3.2, on The Importance of a Dominant Metric Representation), while the probability P(L>1) is invariant under any increasing transform of L (and 1)… This amounts to arguing for a favourite parameterisation in constructing a reference prior (Section 4, where Jeffreys prior is also dismissed for not being at maximum entropy). The ensuing discussion as to why the three solutions of Bertrand’s are not valid (Section 2.2) is thus most curious to me, since they all are implementable/practical ways of producing stochastic chords. I find it rather amusing that one returns to the quest for the ideal prior distribution that Bayesians were so fiercely debating at the turn of the previous century, and that non-Bayesians were all too happy to exploit when arguing against this approach.
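As a quick illustration that all three of Bertrand’s constructions are indeed implementable, here is a minimal Monte Carlo sketch of mine (assuming a unit circle and comparing the chord length with √3, the side of the inscribed equilateral triangle; the function names are my own). The three estimates should come out near 1/3, 1/2, and 1/4, respectively.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
SIDE = np.sqrt(3)  # side of the equilateral triangle inscribed in the unit circle

def random_endpoints(n):
    # solution 1: two uniform points on the circle, chord joining them
    a, b = rng.uniform(0, 2 * np.pi, (2, n))
    return 2 * np.abs(np.sin((a - b) / 2))

def random_radial_point(n):
    # solution 2: uniform point on a random radius, chord orthogonal to that radius
    d = rng.uniform(0, 1, n)
    return 2 * np.sqrt(1 - d**2)

def random_midpoint(n):
    # solution 3: uniform midpoint inside the disk, chord orthogonal to its radius
    d = np.sqrt(rng.uniform(0, 1, n))  # distance to the centre of a uniform point in the disk
    return 2 * np.sqrt(1 - d**2)

for f in (random_endpoints, random_radial_point, random_midpoint):
    print(f.__name__, np.mean(f(N) > SIDE))
```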

robust inference using posterior bootstrap

Posted in Books, Statistics, University life on February 18, 2022 by xi'an

The famous 1994 Read Paper by Michael Newton and Adrian Raftery was entitled Approximate Bayesian inference with the weighted likelihood bootstrap, where the bootstrap aspect lies in randomly (exponentially) weighting each observation in the iid sample through a power of the corresponding density, a proposal that happened at about the same time as Tony O’Hagan’s suggestion of the related fractional Bayes factor. (The paper may also be equally famous for suggesting the harmonic mean estimator of the evidence!, although it only appeared as an appendix to the paper.) What is unclear to me is the nature of the distribution g(θ) associated with the weighted bootstrap sample, conditional on the original sample, since each draw is the result of a random Exponential sample followed by an optimisation step, with no impact of the prior (which could have been used as a penalisation factor), a feature corrected by Michael and Adrian via an importance step involving the estimation of g(·).
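For the record, here is a minimal sketch of the weighted likelihood bootstrap on a toy Normal location model (my own illustration, not the paper’s code): each posterior-like draw maximises an Exponentially weighted log-likelihood, with no prior entering at this stage.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=50)     # iid sample, unit variance assumed known

def wlb_draw(x):
    # weighted likelihood bootstrap: Exponential(1) weights, then maximise
    # the weighted Normal log-likelihood, available here in closed form
    w = rng.exponential(1.0, size=len(x))
    return np.sum(w * x) / np.sum(w)  # weighted MLE of the location

draws = np.array([wlb_draw(x) for _ in range(5000)])
print(draws.mean(), draws.std())      # close to the posterior under a flat prior
```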

At the Algorithm Seminar today in Warwick, Emilie Pompe presented recent research, including some written jointly with Pierre Jacob [which I have not yet read], that does exactly that, namely the inclusion of the log prior as a penalisation factor, along with an extra weight different from one, as motivated by the possibility of misspecification, including a new approach to cut models. An alternative mentioned during the talk, which reminds me of GANs, is to generate a pseudo-sample from the prior predictive and add it to the original sample. (Some attendees commented on the dependence of the latter version on the chosen parameterisation, which is an issue that had X’ed my mind as well.)
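For concreteness, here is my own rough sketch of these two variants, as I understand the general idea (definitely not their implementation, with a toy Normal model and made-up hyperparameters): the weighted optimisation is either penalised by the log prior or fed extra pseudo-observations simulated from the prior predictive.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = rng.normal(2.0, 1.0, size=50)

def penalised_draw(x, prior_sd=10.0, m=10):
    # variant 1: add the log prior (here N(0, prior_sd^2)) as a penalty term
    # variant 2 (commented out): append m pseudo-observations from the prior predictive
    # x = np.concatenate([x, rng.normal(rng.normal(0, prior_sd), 1.0, size=m)])
    w = rng.exponential(1.0, size=len(x))
    obj = lambda t: np.sum(w * (x - t) ** 2) / 2 + t ** 2 / (2 * prior_sd ** 2)
    return minimize_scalar(obj).x

draws = np.array([penalised_draw(x) for _ in range(2000)])
print(np.quantile(draws, [0.025, 0.975]))
```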

multilevel linear models, Gibbs samplers, and multigrid decompositions

Posted in Books, Statistics, University life on October 22, 2021 by xi'an

A paper by Giacomo Zanella (formerly Warwick) and Gareth Roberts (Warwick) is about to appear in Bayesian Analysis and is (still) open for discussion. It examines in great detail the convergence properties of several Gibbs versions of the same hierarchical posterior for an ANOVA-type linear model. Although this may sound like an old-timer opinion, I find it good to have Gibbs sampling back on track! And to have further attention paid to diagnosing convergence! Also, even after all these years (!), it is always a surprise for me to (re-)realise that different versions of Gibbs sampling may hugely differ in convergence properties.

At first, intuitively, I thought the options (1,0) (c) and (0,1) (d) should perform similarly, but one is “more” hierarchical than the other. While the results exhibiting a theoretical ordering of these choices are impressive, I would suggest pursuing a random exploration of the various parameterisations in order to handle cases where an analytical ordering proves impossible. It would most likely produce a superior performance, as hinted at by Figure 4. (This alternative happens to be briefly mentioned in the Conclusion section.) The notion of choosing the optimal parameterisation at each step is indeed somewhat unrealistic in that the optimality zones exhibited in Figure 4 are unknown in a more general model than the Gaussian ANOVA model, especially with a high number of parameters, parameterisations, and recombinations in the model (Section 7).
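To make the suggestion concrete, here is a toy sketch (mine, not the paper’s) of a Gibbs sampler for a one-way Gaussian model y_ij ~ N(μ+a_i, σ²), a_i ~ N(0, τ²), with known variances and a flat prior on μ, that picks at random between the centred and non-centred parameterisations at each sweep.

```python
import numpy as np

rng = np.random.default_rng(3)
I, n, sigma2, tau2 = 8, 20, 1.0, 2.0
a_true = rng.normal(0, np.sqrt(tau2), I)
y = 0.5 + a_true[:, None] + rng.normal(0, np.sqrt(sigma2), (I, n))  # toy data
ybar = y.mean(axis=1)

def gibbs(T=5000):
    mu, a = 0.0, np.zeros(I)
    out = np.empty(T)
    for t in range(T):
        prec = n / sigma2 + 1 / tau2
        if rng.random() < 0.5:
            # centred sweep: work with eta_i = mu + a_i ~ N(mu, tau2)
            eta = rng.normal((n * ybar / sigma2 + mu / tau2) / prec, 1 / np.sqrt(prec))
            mu = rng.normal(eta.mean(), np.sqrt(tau2 / I))
            a = eta - mu
        else:
            # non-centred sweep: update a given mu, then mu given a
            a = rng.normal(n * (ybar - mu) / sigma2 / prec, 1 / np.sqrt(prec))
            mu = rng.normal((y - a[:, None]).mean(), np.sqrt(sigma2 / (I * n)))
        out[t] = mu
    return out

print(gibbs().mean())
```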

An idle question is about the extension to a more general hierarchical model where recentring is not feasible because of the non-linear nature of the parameters, even though Gaussianity may not be such a restriction, in that other exponential (if artificial) families keeping the ANOVA structure should work as well.

Theorem 1 is quite impressive and wide-ranging. It also reminded (old) me of the interleaving properties and data augmentation versions of early-day Gibbs. More to the point and to the current era, it offers more possibilities for coupling, parallelism, and speeding up convergence, and for fighting dimension curses.

“in this context, imposing identifiability always improves the convergence properties of the Gibbs Sampler”

Another idle thought of mine is to wonder whether or not there is a limited number of reparameterisations. I think that by creating unidentifiable decompositions of (some) parameters, e.g., μ=μ¹+μ²+…, one can unrestrictedly multiply the number of parameterisations. Instead of imposing hard identifiability constraints as in Section 4.2, my intuition was that this de-identification would improve the mixing behaviour, but this somewhat clashes with the above (rigorous) statement from the authors. So I am proven wrong there!

Unless I missed something, I also wonder about the different possible implementations of HMC depending on the parameterisation, and whether or not the impact of the parameterisation has been studied for HMC. (Which may be linked with Remark 2?)

simplified Bayesian analysis

Posted in Statistics on February 10, 2021 by xi'an

A colleague from Dauphine sent me a paper by Carlo Graziani on a Bayesian analysis of vaccine efficiency, asking for my opinion. The Bayesian side is quite simple: given two Poisson observations, N~P(μ) and M~P(ν), there exists a reparameterisation of (μ,ν) into

e = 1 − μ/(rν)   and   λ = ν(1 + (1−e)r) = μ + ν

the vaccine efficiency and the expectation of N+M, respectively, where r is the vaccine-to-placebo ratio of person-times at risk, ie the ratio of the numbers of participants in each group. This reparameterisation is such that the likelihood factorises into a function of e times a function of λ, hence using a product prior for this parameterisation leads to a posterior on e times a posterior on λ. This is a nice remark, which may have been made earlier (as, for instance, another approach to inferring about e while treating λ as a nuisance parameter is to condition on N+M, since N given N+M is Binomial with probability r(1−e)/(1+r(1−e))). The paper then proposes as an application of this remark an analysis of the results of three SARS-CoV-2 vaccines, meaning using the pairs (N,M) for each vaccine and deriving credible intervals, which sounds more like an exercise in basic Bayesian inference than a fundamental step in assessing the efficiency of the vaccines…
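As an illustration of how basic the resulting inference is (my own sketch, with hypothetical case counts, not the paper’s code), conditioning on N+M with a Beta prior on the Binomial probability gives a closed-form posterior that can be transformed back into draws of the efficiency e:

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, r = 8, 162, 1.0     # hypothetical vaccine/placebo case counts, equal person-times

# N | N+M ~ Binomial(N+M, p) with p = r(1-e)/(1+r(1-e));
# a Beta(1/2, 1/2) prior on p yields a conjugate Beta posterior
p = rng.beta(N + 0.5, M + 0.5, size=100_000)
e = 1 - p / (r * (1 - p))          # invert p back to the efficiency scale
print(np.quantile(e, [0.025, 0.5, 0.975]))
```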

risk-averse Bayes estimators

Posted in Books, pictures, Statistics on January 28, 2019 by xi'an

An interesting paper came out on arXiv in early December, written by Michael Brand from Monash. It is about risk-averse Bayes estimators, which are defined as avoiding the use of loss functions (although why loss functions should be avoided is not made very clear in the paper). Close to MAP estimates, they bypass the dependence of said MAPs on the parameterisation by maximising instead π(θ|x)/√I(θ), which is invariant under reparameterisation, if not under a change of dominating measure. This form of MAP estimate is called the Wallace-Freeman (1987) estimator [of which I had never heard].
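To see the invariance at play, here is a toy numerical check of mine (not taken from the paper), for a Binomial likelihood with a Beta posterior: the MAP moves when switching to the logit scale, while the maximiser of π(θ|x)/√I(θ) does not.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import beta

n, x, a, b = 20, 3, 1.0, 1.0            # toy Binomial data with a Beta(1,1) prior
post = beta(a + x, b + n - x)           # posterior on the probability p

def argmax(f, lo=1e-6, hi=1 - 1e-6):
    return minimize_scalar(lambda p: -f(p), bounds=(lo, hi), method="bounded").x

sigmoid = lambda t: 1 / (1 + np.exp(-t))

# MAP on the p scale vs MAP on the logit scale (mapped back to p): they differ
map_p = argmax(post.pdf)
map_logit = sigmoid(argmax(lambda t: post.pdf(sigmoid(t)) * sigmoid(t) * (1 - sigmoid(t)),
                           lo=-10, hi=10))

# Wallace-Freeman: divide each posterior density by the root Fisher information,
# I(p) = n / (p(1-p)) on the p scale, I(t) = n p(1-p) on the logit scale
wf_p = argmax(lambda p: post.pdf(p) / np.sqrt(n / (p * (1 - p))))
wf_logit = sigmoid(argmax(lambda t: post.pdf(sigmoid(t)) * sigmoid(t) * (1 - sigmoid(t))
                          / np.sqrt(n * sigmoid(t) * (1 - sigmoid(t))), lo=-10, hi=10))

print(map_p, map_logit)   # different
print(wf_p, wf_logit)     # (numerically) identical
```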

The formal definition of a risk-averse estimator is still based on a loss function, in order to produce a proper version of the probability of being “wrong” in a continuous environment. The difference between the estimator and the true value θ, as expressed by the loss, is enlarged by a scale factor k pushed to infinity, meaning that differences outside the immediate neighbourhood of zero are not relevant. In the case of a countable parameter space, this essentially produces the MAP estimator. In the continuous case, for “well-defined” and “well-behaved” loss functions, estimators, and densities, including an invariance to parameterisation as in my own intrinsic losses of old!, which the author calls likelihood-based loss functions, mentioning f-divergences, the resulting estimator(s) is a Wallace-Freeman estimator (of which there may be several). I did not get very deep into the study of the convergence proof, which seems to borrow more from real analysis à la Rudin than from functional analysis or measure theory, but I keep returning to the apparent dependence of the notion on the dominating measure, which bothers me.