## Archive for Bayesian Analysis

## cost(s) of living

Posted in Kids, pictures, Travel, University life with tags Bayesian Analysis, cost of living, INSERM, mixed effect models, New York city, Paris, postdocs, salary on March 4, 2021 by xi'an

**Y**esterday, Andrew posted an announcement for a postdoc position in Paris, at the national medical research institute (INSERM), on Bayesian approaches to high throughput genetic analyses using nonlinear mixed effect models, and the comments went ballistic about the low salary attached to this postdoctoral position, namely 2600€ – 3000€. As I have already commented on the rather stale clichés about French academics, let me briefly reflect on the limitations of comparing 3000€ a month in Paris with, say, $5000 a month in New York City (which seems to be at the high end of US postdoc salaries). First, the posted salaries are “gross”, but the French one already excludes the 25% taxes paid by the employer; I do not know if this is the case in the US. Second, comparing absolute values makes little sense imho. Even if the purchasing power parity is about one between France and the US, I think the long term cost of living [as opposed to visiting for a week] is lower here than there, if only because the amount is similar to, if higher than, the starting academic salaries, and around the median salary. Interestingly, the same appears to be true for the US, if less favourably for the postdocs there.

## simplified Bayesian analysis

Posted in Statistics with tags 19nCoV, applied Bayesian analysis, bandwagon, Bayesian Analysis, conditional sufficiency, COVID-19, credible intervals, medrXiv, Poisson distribution, Porte Dauphine, reparameterisation, Université Paris Dauphine, vaccine on February 10, 2021 by xi'an

**W**ith N and M the Poisson distributed numbers of cases observed in the vaccine and placebo groups, of respective expectations λ and μ, the paper sets

e = 1 − λ/rμ

as the vaccine efficiency, when r is the vaccine-to-placebo ratio of person-times at risk, ie the ratio of the numbers of participants in each group, and takes ν = λ + μ, the expectation of N+M, as the other parameter. This reparameterisation is such that the likelihood factorises into a function of e and a function of ν. Another approach to infer about e while treating ν as a nuisance parameter is to condition on N+M. The paper then proposes as an application of this remark an analysis of the results of three SARS-Cov-2 vaccines, meaning using the pairs (N,M) for each vaccine and deriving credible intervals, which sounds more like an exercise in basic Bayesian inference than a fundamental step in assessing the efficiency of the vaccines…
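Conditioning on N+M indeed makes the derivation elementary: given N+M, the count N in the vaccine group is Binomial with success probability p = r(1−e)/(1+r(1−e)), so a Beta prior on p gives a conjugate posterior whose draws map back to e. A minimal sketch of this route (my own illustration, with hypothetical counts, not code from the paper):

```python
import numpy as np

def efficacy_interval(n_vax, n_placebo, r=1.0, a=0.5, b=0.5, level=0.95,
                      draws=100_000, seed=0):
    """Credible interval on the vaccine efficiency e, conditioning on N+M.

    Given N | N+M ~ Binomial(N+M, p) with p = r(1-e)/(1+r(1-e)), a
    Beta(a, b) prior on p yields a Beta posterior; posterior draws of p
    are mapped back to e = 1 - p/(r(1-p))."""
    rng = np.random.default_rng(seed)
    p = rng.beta(a + n_vax, b + n_placebo, size=draws)  # conjugate posterior on p
    e = 1.0 - p / (r * (1.0 - p))                       # back-transform to efficacy
    return tuple(np.quantile(e, [(1 - level) / 2, (1 + level) / 2]))
```

For instance, with hypothetical counts of 8 cases among vaccinated and 162 among placebo participants (and r = 1), the interval concentrates around e ≈ 0.95.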

Using a product prior for this parameterisation leads to a posterior on e times a posterior on the nuisance parameter, a nice remark which may have been made earlier (as for instance …).

## stratified MCMC

Posted in Books, pictures, Statistics with tags 1872, 1872 Hidalgo issue, Bayesian Analysis, Brown University, ICERM, marginal density, MCMC algorithms, Mexican stamps, Mexico, mixtures of distributions, partitioned sampling, Providence, stratified sampling on December 3, 2020 by xi'an

**W**hen working last week with a student, we came across [the slides of a talk at ICERM by Brian van Koten about] a stratified MCMC method whose core idea is to solve an eigenvector equation z′=z′F associated with the masses of “partition” functions Ψ evaluated at the target. (The arXived paper has also been available since 2017 but I did not check it in more detail.) The “partition” functions need to overlap for the matrix not to be diagonal (actually the only case that does not work is when these functions are truly indicator functions). As in other forms of stratified sampling, the practical difficulty lies in picking the functions Ψ so that the evaluation of the entries of the matrix F is not overly impacted by Monte Carlo error. If too much time is spent estimating these entries, there is no clear gain in switching to stratified sampling, which may be why it is not particularly developed in the MCMC literature…
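To make the z′=z′F mechanics concrete, here is a toy sketch, entirely my own and under the simplest reading of the slides: Gaussian windows stand in for the Ψ's, one Metropolis chain per stratum targets Ψᵢπ, and the stratum masses come out as the left eigenvector of the estimated matrix F. This only illustrates the eigenvector equation, not a faithful implementation of the method.

```python
import numpy as np

def stratified_weights(log_target, windows, n_steps=10_000, step=0.8, seed=1):
    """Toy illustration of the z' = z'F idea: one Metropolis chain per
    stratum targets psi_i * pi, F[i, j] averages the normalised psi_j
    along chain i, and the stratum masses z solve z = z F."""
    rng = np.random.default_rng(seed)
    K = len(windows)
    F = np.zeros((K, K))
    for i, (center, width) in enumerate(windows):
        log_psi = lambda x, c=center, w=width: -0.5 * ((x - c) / w) ** 2
        x, lp = center, log_target(center)  # psi_i(center) = 1, so log psi = 0
        acc = np.zeros(K)
        for _ in range(n_steps):
            y = x + step * rng.standard_normal()
            lq = log_target(y) + log_psi(y)
            if np.log(rng.random()) < lq - lp:   # Metropolis acceptance
                x, lp = y, lq
            psis = np.exp([log_psi(x, c, w) for c, w in windows])
            acc += psis / psis.sum()             # row i of F, one sample at a time
        F[i] = acc / n_steps
    z = np.full(K, 1.0 / K)                      # solve z = zF by power iteration
    for _ in range(1000):
        z = z @ F
        z /= z.sum()
    return z
```

With a standard normal target and two symmetric windows, the recovered masses should be close to (½, ½), which gives a quick sanity check of the overlap requirement mentioned above.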

As an interesting aside, the illustration in this talk comes from the Mexican stamp thickness data I also used in my earlier mixture papers, concerning the 1872 Hidalgo issue that was printed on different qualities of paper. This makes the number k of components somewhat uncertain, although k=3 is sometimes used as a default. Hence a parameter and simulation space of dimension 8, even though the method is used toward approximating the marginal posteriors on the weights λ¹ and λ².

## Hélène Massam (1949-2020)

Posted in Statistics with tags 12w5105, Banff, Banff International Research Station for Mathematical Innovation, Bayesian Analysis, BIRS, Canada, DAG, Ecole Normale Supérieure, exponential families, Fontenay-aux-Roses, France, hyper-inverse Wishart distribution, ISBA, Marseille, non-central Wishart distribution, obituary, Statistical Society of Canada, University of York, Wishart distribution, York on November 1, 2020 by xi'an

**I** was much saddened to hear yesterday that our friend and fellow Bayesian Hélène Massam passed away on August 22, 2020, following a cerebrovascular accident. She was professor of Statistics at York University, in Toronto, and, as her field of excellence covered [the geometry of] exponential families, Wishart distributions and graphical models, we met many times at both Bayesian and non-Bayesian conferences (the first time may have been an IMS meeting in Banff, years before BIRS was created), and always had enjoyable conversations on these occasions (in French, since she was born in Marseille and only moved to Canada for her graduate studies in optimisation). Beyond her fundamental contributions to exponential families, especially Wishart distributions under different constraints [including the still open 2007 Letac-Massam conjecture], and graphical models, where she produced conjugate priors for DAGs of all sorts, she served the community in many respects, including on the initial editorial board of Bayesian Analysis. I can also personally testify to her dedication as a referee, as she helped with many papers over the years. She was also a wonderful person, with a great sense of humor and a love for hiking and mountains. Her demise is a true loss for the entire community and I can only wish her to keep hiking on new planes and cones in a different dimension. *[Last month, Christian Genest (McGill University) and Xin Gao (York University) wrote a moving obituary including a complete biography of Hélène for the Statistical Society of Canada.]*

## marginal likelihood with large amounts of missing data

Posted in Books, pictures, Statistics with tags Bayesian Analysis, Chib's approximation, evidence, harmonic mean estimator, importance sampling, marginal likelihood, normalising constant, reversible jump, University of Warwick on October 20, 2020 by xi'an

**I**n 2018, Panayiota Touloupou, research fellow at Warwick, and her co-authors published a paper in Bayesian Analysis that somehow escaped my radar, despite standing in my first circle of topics of interest! They construct an importance sampling approach to the approximation of the marginal likelihood, the importance function being approximated from a preliminary MCMC run, and consider the special case when the sampling density (i.e., the likelihood) can be represented as the marginal of a joint density. While this demarginalisation perspective is rather usual, the central point they make is that it is more efficient to estimate the sampling density based on the auxiliary or latent variables than to consider the joint posterior distribution of parameter and latent variables in the importance sampler. This induces a considerable reduction in dimension and hence explains (in part) why the approach should prove more efficient, even though the approximation itself is costly, at about 5 seconds per marginal likelihood. A nice feature of the paper is to include a graph reporting both computing time and variability for different methods (the blue range corresponding to the marginal importance solution, the red range to RJMCMC and the green range to Chib's estimate). Note that bridge sampling does not appear on the picture but returns a variability that is similar to the proposed methodology.
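The importance-sampling construction can be pictured on a toy conjugate example where the evidence is known in closed form. This sketch is my own, one-dimensional and latent-free, so it only illustrates the generic recipe of fitting the importance function to a preliminary MCMC output, not the authors' latent-variable refinement:

```python
import numpy as np

def marginal_likelihood_is(log_joint, mcmc_draws, n_is=50_000, seed=2):
    """Generic importance-sampling evidence estimate (a sketch of the
    recipe, not the estimator of the paper): fit a Gaussian importance
    function to preliminary MCMC draws, inflate its scale, and average
    the joint-to-proposal ratio on the log scale."""
    rng = np.random.default_rng(seed)
    m, s = mcmc_draws.mean(), 1.5 * mcmc_draws.std()  # wider than the posterior
    theta = rng.normal(m, s, size=n_is)
    log_q = -0.5 * ((theta - m) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))
    log_w = log_joint(theta) - log_q
    lw = log_w.max()
    return lw + np.log(np.mean(np.exp(log_w - lw)))   # stabilised log evidence
```

For x ∼ N(θ,1) with prior θ ∼ N(0,1) and observation x=1, the true log evidence is log N(1; 0, 2) ≈ −1.5155, which the estimate recovers closely; the paper's gain comes precisely from keeping such an importance function low-dimensional when latent variables are present.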