Archive for Bayesian statistics

terrible graph and chilies [not a book review]

Posted in Books, Kids, pictures, Statistics, University life on January 29, 2024 by xi'an

A question on X validated led me to this Bayesian book with a chill cover (except that it first made me look for a word from the chili family!), because of a graph within it that confused the OP of that question. Here is the graph:

It represents three *prior* densities at once, namely the (uniform) prior density of a Binomial probability θ, the prior density of its transform θ², and the prior density of the further transform θ¹⁰. This makes no sense, since the horizontal axis is indexed simultaneously by values of the three random variables: a particular abscissa like 0.4 corresponds to three different values of θ, namely θ=0.4, θ²=0.4, and θ¹⁰=0.4… In other words, the “probability of event occurring, f(θ)” corresponds to *three different* events and *three different* f’s. Another needless confusion is that the red and dashed density curves appear to lie everywhere above one another, which is impossible since both are probability densities (hence integrate to one). And the boxed legend does not help.
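For the record, the change-of-variables formula gives what the three prior densities should look like: if θ is uniform on (0,1) and φ=θᵏ, then P(φ ≤ x) = x^{1/k}, so the density of φ is x^{1/k−1}/k on (0,1). Here is a minimal Python sketch (my own illustration, not taken from the book) checking this against Monte Carlo draws:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
theta = rng.uniform(size=100_000)   # draws from the uniform prior on θ

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, k in zip(axes, (1, 2, 10)):
    phi = theta**k                  # transform θ ↦ θ^k
    ax.hist(phi, bins=100, density=True, alpha=0.4, label="Monte Carlo")
    x = np.linspace(1e-3, 1, 500)
    # change of variables: density of θ^k is x^(1/k - 1) / k on (0, 1)
    ax.plot(x, x**(1 / k - 1) / k, "r", label="exact density")
    ax.set_title(rf"$\theta^{{{k}}}$")
    ax.legend()
plt.tight_layout()
plt.show()
```

Even plotted this way, each abscissa refers to a different event on each panel, e.g. 0.4 on the θ² panel means θ = √0.4 ≈ 0.63, which is precisely why overlaying the three curves on a single axis is misleading.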

off to Luminy [Autumn school in Bayesian statistics]

Posted in Statistics on October 28, 2023 by xi'an

Summer school on Bayesian statistics and computation

Posted in Books, Kids, Statistics, Travel, University life on July 16, 2023 by xi'an

stats postdoc in Barcelona

Posted in Statistics, Travel, University life on April 5, 2022 by xi'an

Here is a call for two exciting postdoc positions in high-dimensional statistics for network & graphical model analysis, in Barcelona:

We have two post-doctoral positions to work on a range of topics related to high-dimensional statistics for networks, ultra high-dimensional graphical models, frameworks linking networks and graphical models, multivariate structural learning and time series analysis. The positions are for 1 year, with a potential for extension to 18 months, with a gross yearly salary of 40,000€. The scope is ample and allows for sub-projects related to mathematical statistics and statistical learning theory, data analysis methodology in penalized likelihood and Bayesian statistics, and computational methods. The positions are related to a Huawei grant, which also offers opportunities to explore applications of the developed theory & methods.

The project is primarily hosted by the Statistics group at UPF and the BSE Data Science Center in Barcelona (Spain), and is in collaboration with Luc Devroye at McGill University in Montreal (Canada) and Piotr Zwiernik at the University of Toronto (Canada). The primary supervisors are Christian Brownlees, Luc Devroye, Gábor Lugosi, David Rossell and Piotr Zwiernik, although collaborations with other professors of these research groups are also possible.

Interested candidates should send an updated CV and a short research statement to David Rossell (david.rossell AT upf.edu). They should ask 3 referees to send a letter of reference on their behalf.

The deadline for applying for the first position is April 30, 2022; the deadline for the second position is June 15, 2022.

empirically Bayesian [wISBApedia]

Posted in Statistics on August 9, 2021 by xi'an

Last week I was pointed to a puzzling entry on the “empirical Bayes” Wikipedia page. The introduction section indeed contains a description of an iterative simulation method that involves a hyperprior p(η), even though the empirical Bayes perspective does not involve a hyperprior.

While the entry is vague and lacks formulae

These suggest an iterative scheme, qualitatively similar in structure to a Gibbs sampler, to evolve successively improved approximations to p(θ|y) and p(η|y). First, calculate an initial approximation to p(θ|y) ignoring the η dependence completely; then calculate an approximation to p(η|y) based upon the initial approximate distribution of p(θ|y); then use this p(η|y) to update the approximation for p(θ|y); then update p(η|y); and so on.

it sounds essentially equivalent to a Gibbs sampler, possibly a multiple try Gibbs sampler (unless the author had another notion in mind, alas impossible to guess since no reference is included).
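To make the quoted scheme concrete, here is a minimal sketch of the two-block Gibbs sampler it seems to describe, written for a toy Normal hierarchy of my own choosing (yᵢ|θ ~ N(θ,1), θ|η ~ N(η,1), η ~ N(0,τ²)), since the Wikipedia entry specifies no model:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data and hyperparameters (illustrative choices, not from the Wikipedia entry)
y = rng.normal(loc=2.0, scale=1.0, size=20)
n, ybar = y.size, y.mean()
tau2 = 100.0          # variance of the hyperprior η ~ N(0, τ²)

# two-block Gibbs sampler alternating θ | η, y and η | θ
T = 5_000
theta, eta = np.empty(T), np.empty(T)
theta[0], eta[0] = ybar, 0.0
for t in range(1, T):
    # θ | η, y  ~  N((n·ȳ + η)/(n+1), 1/(n+1)), using the prior θ | η ~ N(η, 1)
    prec = n + 1.0
    theta[t] = rng.normal((n * ybar + eta[t - 1]) / prec, np.sqrt(1.0 / prec))
    # η | θ  ~  N(θ/(1 + 1/τ²), 1/(1 + 1/τ²))
    prec = 1.0 + 1.0 / tau2
    eta[t] = rng.normal(theta[t] / prec, np.sqrt(1.0 / prec))

print("posterior means:", theta[1000:].mean(), eta[1000:].mean())
```

The alternation between the two conditional updates is exactly the “update p(θ|y), then update p(η|y)” loop of the quote; a genuine empirical Bayes treatment would instead plug a point estimate of η (e.g. a marginal likelihood maximiser) into p(θ|y,η), with no hyperprior involved.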

Beyond this specific case, where I think the entire paragraph should be erased from the “empirical Bayes” Wikipedia page, I discussed the general problem of some poor Bayesian entries in Wikipedia with Robin Ryder, who came up with the neat idea of running (collective) Wikipedia editing labs at ISBA conferences. If we could further give an ISBA label to these entries, as a certificate of “Bayesian orthodoxy” (!), it would be terrific!