Archive for Bayesian Analysis

advancements in Bayesian methods and implementations

Posted in Books, Statistics, University life on November 10, 2022 by xi'an

The handbook of (recent) advances in Bayesian methods is now out (at the Elsevierian price of $250!) with chapters on Gibbs posteriors [Ryan Martin & Nicolas Syring], martingale distributions [Walker], selective inference [Daniel García Racines & Alastair Young], manifold simulations [Sumio Watanabe], MCMC for GLMMs [Vivek Roy] and multiple testing [Noirrit Chandra and Sourabh Bhattacharya]. (Along with my chapter on 50 shades of Bayesian testing.) Celebrating 102 years for C.R. Rao, one of the three editors of this volume (as well as the series), along with Arni S.R. Srinivasa Rao and Alastair Young.

Ocean’s four!

Posted in Books, pictures, Statistics, University life on October 25, 2022 by xi'an

Fantastic news! The ERC-Synergy¹ proposal we submitted last year with Michael Jordan, Éric Moulines, and Gareth Roberts has been selected by the ERC (which explains the trips to Brussels last month). Its acronym is OCEAN [hence the whale pictured by a murmuration of starlings!], which stands for On intelligenCE And Networks: Mathematical and Algorithmic Foundations for Multi-Agent Decision-Making. Here is the abstract, which will presumably become public today along with the official announcement from the ERC:

Until recently, most of the major advances in machine learning and decision making have focused on a centralized paradigm in which data are aggregated at a central location to train models and/or decide on actions. This paradigm faces serious flaws in many real-world cases. In particular, centralized learning risks exposing user privacy, makes inefficient use of communication resources, creates data processing bottlenecks, and may lead to concentration of economic and political power. It thus appears most timely to develop the theory and practice of a new form of machine learning that targets heterogeneous, massively decentralized networks, involving self-interested agents who expect to receive value (or rewards, incentives) for their participation in data exchanges.

OCEAN will develop statistical and algorithmic foundations for systems involving multiple incentive-driven learning and decision-making agents, including uncertainty quantification at the agent’s level. OCEAN will study the interaction of learning with market constraints (scarcity, fairness), connecting adaptive microeconomics and market-aware machine learning.

OCEAN builds on a decade of joint advances in stochastic optimization, probabilistic machine learning, statistical inference, Bayesian assessment of uncertainty, computation, game theory, and information science, with PIs having complementary and internationally recognized skills in these domains. OCEAN will shed new light on the value and handling of data in a competitive, potentially antagonistic, multi-agent environment, and develop new theories and methods to address these pressing challenges. OCEAN requires a fundamental departure from standard approaches and leads to major interdisciplinary scientific endeavors that will transform statistical learning in the long term while opening up exciting and novel areas of research.

Since the ERC support in this grant mostly goes to PhD and postdoctoral positions, watch out for calls in the coming months or contact us at any time.


Andrew & All about that Bayes!

Posted in Books, Kids, pictures, Statistics, Travel, University life on October 6, 2022 by xi'an

Andrew Gelman is giving a talk on 11 October at 2 p.m. in Campus Pierre et Marie Curie (Sorbonne Université), room 16-26-209. He will talk about

Prior distribution for causal inference

In Bayesian inference, we must specify a model for the data (a likelihood) and a model for parameters (a prior). Consider two questions:

  1. Why is it more complicated to specify the likelihood than the prior?
  2. In order to specify the prior, how can we switch between the theoretical literature (invariance, normality assumptions, …) and the applied literature (expert elicitation, robustness, …)?

I will discuss these questions in the domain of causal inference: prior distributions for causal effects, regression coefficients, and the other parameters in causal models.

Advancements in Bayesian Methods and Implementations [to appear]

Posted in Books, Statistics on July 17, 2022 by xi'an

As noted in another post, I wrote a chapter on Bayesian testing for a forthcoming handbook, Advancements in Bayesian Methods and Implementations, which is published by Elsevier at an atrocious price (as usual). Here is the table of contents:

1. Fisher Information, Cramér-Rao and Bayesian Paradigm by Roy Frieden
2. Compound beta binomial distribution functions by Angelo Plastino
3. MCMC for GLMMs by Vivekananda Roy
4. Signal Processing and Bayesian by Chandra Murthy
5. Mathematical theory of Bayesian statistics where all models are wrong by Sumio Watanabe
6. Machine Learning and Bayesian by Jun Zhu
7. Non-parametric Bayes by Stephen Walker
8. [50 shades of] Bayesian testing [of hypotheses] by Christian P. Robert
9. Data Analysis with humans by Samuel Kaski
10. Bayesian Inference under selection by G. Alastair Young
11. Variational inference or Functional horseshoe by Anirban Bhattacharya
12. Generalized Bayes by Ryan Martin

and my chapter is also available on arXiv, quickly gathered from earlier short courses at O’Bayes meetings and some xianblog entries on the topic, hence not containing much novelty!

Bayesian restricted likelihood with insufficient statistic [slides]

Posted in Books, pictures, Statistics, University life on February 9, 2022 by xi'an

A great Bayesian Analysis webinar this afternoon with well-balanced presentations by Steve MacEachern and John Lewis, and original discussions by Bertrand Clarke and Fabrizio Ruggeri. Which attracted 122 participants. I particularly enjoyed Bertrand's points that likelihoods are more general than models [made in 6 different wordings!] and that this paper is closer to the M-open perspective. I think I eventually got the reason why the approach can be seen as an ABC with ε=0, since the simulated y's all reproduce the right statistic, but the presentation does not bring a strong argument in favour of the restricted likelihood approach, when considering the methodological and computational effort. The discussion also made me wonder if tools like VAEs could be used towards approximating the distribution of T(y) conditional on the parameter θ. This is also an opportunity to thank my friend Michele Guindani for his hard work as Editor of Bayesian Analysis and in particular for keeping the discussion tradition thriving!
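The ABC-with-ε=0 reading can be illustrated with a toy sketch of my own (not from the paper, and using a discrete statistic so that exact matching occurs with positive probability): plain rejection ABC that keeps a parameter draw only when the simulated statistic equals the observed one, here with a Bernoulli sample, T(y) the number of successes, and a uniform prior.

```python
import random

random.seed(1)

n = 20       # sample size
t_obs = 13   # observed statistic T(y): number of successes

def simulate_statistic(theta, n):
    """Draw y ~ Bernoulli(theta)^n and return T(y) = sum(y)."""
    return sum(random.random() < theta for _ in range(n))

# Rejection ABC with epsilon = 0: accept theta only when the
# simulated statistic matches the observed statistic exactly.
accepted = []
while len(accepted) < 2000:
    theta = random.random()  # uniform prior on (0, 1)
    if simulate_statistic(theta, n) == t_obs:
        accepted.append(theta)

post_mean = sum(accepted) / len(accepted)
# Here T is sufficient, so the exact posterior is Beta(14, 8),
# with mean 14/22 ≈ 0.636; the ABC estimate should land nearby.
```

In this toy case the statistic is sufficient, so ε=0 recovers the full posterior; the restricted-likelihood setting of the webinar is precisely the harder case where T is insufficient and only the partial posterior given T(y) is targeted.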
