## Archive for München

## uncertainty quantification unable to quantify coronavirus risks

Posted in Statistics, Travel, University life with tags ABC in Grenoble, conference cancellation, coronavirus epidemics, Garching, München, SIAM Conference on Uncertainty Quantification, Technische Universität München, UQ20 on March 8, 2020 by xi'an

## MaxEnt 2019 [last call]

Posted in pictures, Statistics, Travel with tags conference, Garching, Germany, MaxEnt 2019, maximum entropy, München, O'Bayes 2019, Warwick on April 30, 2019 by xi'an

**F**or those who definitely do *not* want to attend O’Bayes 2019 in Warwick, the MaxEnt 2019 conference is taking place at the same time at the Max Planck Institute for Plasma Physics in Garching, near Munich, (southern) Germany. Registration is still open at a reduced rate and it is not yet too late to submit an abstract, with only a few hours left. (It also remains possible to submit a poster for O’Bayes 2019 and to register.)

## Max Ent at Max Plank

Posted in Statistics with tags Bayesian inference, Carl Friedrich Gauss, conference, Gauß, Germany, Max Planck Institute, MaxEnt 2019, maximum entropy, München, O'Bayes 2019, University of Warwick on December 21, 2018 by xi'an

## a come-back of the harmonic mean estimator

Posted in Statistics with tags Alan Gelfand, Bayes factors, Bayesian computing, harmonic mean estimator, Max Planck Institute, München, Werner-Heisenberg-Institut on September 6, 2018 by xi'an

**A**re we in for a return of the harmonic mean estimator?! Allen Caldwell and co-authors arXived a new document that Allen also sent me, following a technique that offers similarities with our earlier approach with Darren Wraith, the difference lying in the more careful and practical construction of the partition set and the use of multiple hypercubes, which is the smart thing. I visited Allen’s group at the Max Planck Institut für Physik (Heisenberg) in München (Garching) in 2015 and we confronted our perspectives on harmonic means at that time. The approach followed in the paper starts from what I would call the canonical Gelfand and Dey (1995) representation with a uniform prior, namely that the integral of an arbitrary non-negative function [or unnormalised density] ƒ can be connected with the integral of the said function ƒ over a smaller set Δ with a finite measure [or volume], and therefore with simulations from the density ƒ restricted to this set Δ. These simulations can be recycled through the harmonic mean identity towards producing an estimate of the integral of ƒ over the set Δ. When considering a partition, these integrals sum up to the integral of interest, but this is not necessarily the only exploitation one can make of the fundamental identity. The most novel part stands in constructing an adaptive partition based on the sample, made of hypercubes obtained after whitening of the sample, only keeping points with large enough density and sufficient separation to avoid overlap. (I am unsure a genuine partition is needed.) In order to avoid selection biases, the original sample is separated into two groups, used independently. Integrals that stand too far away from the others are removed as well.
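The restricted-set identity above can be illustrated with a minimal sketch, assuming a standard Normal kernel as the unnormalised ƒ and the interval Δ = [−1, 1] as the finite-volume set; this is only the basic Gelfand–Dey step with a uniform prior on Δ, not the paper's adaptive hypercube construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalised density f: a standard Normal kernel, so the true
# normalising constant is Z = sqrt(2*pi) ≈ 2.5066
f = lambda x: np.exp(-0.5 * x**2)

# Draws from the normalised density f/Z
x = rng.standard_normal(100_000)

# Restrict the harmonic mean to Delta = [-1, 1], of finite volume V;
# on Delta, 1/f is bounded, which tames the infinite-variance issue
lo, hi = -1.0, 1.0
V = hi - lo
inside = (x >= lo) & (x <= hi)

# Gelfand-Dey identity with a uniform prior on Delta:
#   E_{f/Z}[ 1_Delta(x) / (V f(x)) ] = 1/Z
# so inverting the Monte Carlo average estimates Z
Z_hat = 1.0 / np.mean(inside / (V * f(x)))

print(Z_hat)  # close to sqrt(2*pi) ≈ 2.5066
```

Because the reciprocal 1/ƒ is only ever evaluated inside Δ, where ƒ is bounded away from zero, the estimator's variance stays finite, which is precisely the point of restricting the harmonic mean to a well-chosen set.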
This construction may sound a bit daunting in the number of steps it involves and in the poor fit of a Normal to a hypercube (or conversely), but it seems to steer clear of the number one issue with the basic harmonic mean estimator, the almost certain infinite variance. Although it would be nice to be completely certain this doom is avoided. I still wonder at the degeneracy of the approximation of the integral with the dimension, as well as at other ways of exploiting this always fascinating [if fraught with dangers] representation. And at comparing variances.