Archive for seminar

All about that [Detective] Bayes [seminar]

Posted in Books, Statistics, University life on January 5, 2023 by xi'an
On 10 January 2023, at 14:00, at Campus Pierre et Marie Curie (Sorbonne Université), Room 15.16-309, an All about that Bayes seminar will be delivered by Daniele Durante, who is visiting Paris Dauphine this month:

Daniele Durante (Bocconi University) – Detective Bayes: Bayesian nonparametric stochastic block modeling of criminal networks

Europol recently defined criminal networks as a modern version of the mythological Hydra, with covert structure and multifaceted evolutions. Indeed, relationship data among criminals are subject to measurement errors and structured missingness patterns, and exhibit a complex combination of an unknown number of core-periphery, assortative, and disassortative structures that may encode key architectures of the criminal organization. The coexistence of these noisy block patterns limits the reliability of the community detection algorithms routinely used in criminology, thereby leading to overly simplified and possibly biased reconstructions of organized crime topologies. In this seminar, I will present a number of model-based solutions that aim at filling these gaps via a combination of stochastic block models and priors for random partitions arising from Bayesian nonparametrics. These include Gibbs-type priors and random partition priors driven by the urn scheme of a hierarchical normalized completely random measure. Product-partition models incorporating criminals’ attributes, and zero-inflated Poisson representations accounting for weighted edges and secrecy strategies, will also be discussed. Collapsed Gibbs samplers for posterior computation will be presented, and refined strategies for estimation, prediction, uncertainty quantification, and model selection will be outlined. Results are illustrated in an application to an Italian Mafia network, where the proposed models unveil a structure of the criminal organization mostly hidden to the state-of-the-art alternatives routinely used in criminology. I will conclude the seminar with ideas on how to learn the evolutionary history of the criminal organization from the relationship data among its criminals, via a novel combination of latent space models for network data and phylogenetic trees.
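For readers unfamiliar with these objects, here is a toy sketch of the kind of zero-inflated Poisson stochastic block model likelihood evoked in the abstract: given a partition of the criminals into blocks, an edge count is either a structural zero (secrecy) or Poisson with a block-pair rate. The parameterisation, the function name, and the secrecy probability pi0 below are purely illustrative guesses of mine, not Durante's actual model.

```python
# Toy zero-inflated Poisson SBM log-likelihood; illustrative only, not Durante's model.
import numpy as np
from scipy.stats import poisson

def zip_sbm_loglik(Y, z, rates, pi0):
    """Y: (n, n) symmetric edge counts; z: (n,) block labels;
    rates: (K, K) Poisson rates per block pair; pi0: structural-zero probability."""
    n = Y.shape[0]
    ll = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            lam = rates[z[i], z[j]]
            p = poisson.pmf(Y[i, j], lam)
            if Y[i, j] == 0:
                ll += np.log(pi0 + (1 - pi0) * p)   # zero from either component
            else:
                ll += np.log((1 - pi0) * p)          # observed tie, Poisson component
    return ll
```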

diffusion means in geometric statistics [CRiSM Seminar]

Posted in Statistics on November 16, 2022 by xi'an

Andrew & All about that Bayes!

Posted in Books, Kids, pictures, Statistics, Travel, University life on October 6, 2022 by xi'an


Andrew Gelman is giving a talk on 11 October at 2 p.m. at Campus Pierre et Marie Curie (Sorbonne Université), room 16-26-209. He will talk about

Prior distribution for causal inference

In Bayesian inference, we must specify a model for the data (a likelihood) and a model for parameters (a prior). Consider two questions:

  1. Why is it more complicated to specify the likelihood than the prior?
  2. In order to specify the prior, how can we switch between the theoretical literature (invariance, normality assumptions, …) and the applied literature (expert elicitation, robustness, …)?

I will discuss these questions in the domain of causal inference: prior distributions for causal effects, regression coefficients, and other parameters in causal models.

robust inference using posterior bootstrap

Posted in Books, Statistics, University life on February 18, 2022 by xi'an

The famous 1994 Read Paper by Michael Newton and Adrian Raftery was entitled Approximate Bayesian inference with the weighted likelihood bootstrap, where the bootstrap aspect consists in randomly (exponentially) weighting each observation in the iid sample through a power of the corresponding density, a proposal that appeared at about the same time as Tony O’Hagan’s suggestion of the related fractional Bayes factor. (The paper may be equally famous for suggesting the harmonic mean estimator of the evidence, although it only appeared in an appendix!) What is unclear to me is the nature of the distribution g(θ) associated with the weighted bootstrap sample, conditional on the original sample, since the outcome results from both a random Exponential sample and an optimisation step. The prior has no impact (even though it could have been used as a penalisation factor), a feature corrected by Michael and Adrian via an importance step involving the estimation of g(·).
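As a reminder of the mechanism, here is a minimal sketch of the weighted likelihood bootstrap on a toy Normal location model: draw iid Exponential weights, maximise the weighted log-likelihood, repeat. The toy model and the use of scipy are my own illustrative choices, not taken from the 1994 paper.

```python
# Minimal sketch of the weighted likelihood bootstrap on a toy Normal(θ,1) sample;
# the model and optimiser are illustrative choices, not taken from the 1994 paper.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.normal(loc=1.5, scale=1.0, size=100)        # toy iid sample

def weighted_nll(theta, x, w):
    # weighted negative log-likelihood: each density raised to the power w_i
    return np.sum(w * 0.5 * (x - theta) ** 2)

B = 1000
draws = np.empty(B)
for b in range(B):
    w = rng.exponential(size=x.size)                 # random Exponential(1) weights
    draws[b] = minimize_scalar(weighted_nll, args=(x, w)).x   # optimisation step
    # (for this Gaussian toy the optimum is simply the weighted mean of x)

print(draws.mean(), draws.std())                     # approximate posterior summary
```

The loop makes the two ingredients of g(θ) explicit: a random Exponential reweighting of the sample followed by an optimisation, with the prior nowhere in sight.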

At the Algorithm Seminar today in Warwick, Emilie Pompe presented recent research, including some written jointly with Pierre Jacob [which I have not yet read], that does exactly that: including the log prior as a penalisation factor, along with an extra weight different from one, motivated by the possibility of misspecification. The work also covers a new approach to cut models. An alternative mentioned during the talk, which reminds me of GANs, is to generate a pseudo-sample from the prior predictive and add it to the original sample. (Some attendees commented on the dependence of the latter version on the chosen parameterisation, an issue that had X’ed my mind as well.)
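My rough understanding of the penalised version, with the caveat repeated that I have not read the paper, is that the log prior enters the weighted objective with its own weight. In the sketch below, the Normal(0,10²) prior and the value of w0 are placeholders of my own, not the authors' choices.

```python
# Rough sketch of a prior-penalised weighted bootstrap objective; the Normal(0,10²)
# prior and the fixed extra weight w0 are placeholders, not the authors' choices.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = rng.normal(loc=1.5, scale=1.0, size=100)         # same toy Normal(θ,1) sample
w0 = 1.0                                             # extra weight on the prior term

def penalised_obj(theta, x, w):
    nll = np.sum(w * 0.5 * (x - theta) ** 2)         # weighted negative log-likelihood
    nlp = 0.5 * theta ** 2 / 10.0 ** 2               # negative log Normal(0,10²) prior
    return nll + w0 * nlp

draws = np.array([
    minimize_scalar(penalised_obj, args=(x, rng.exponential(size=x.size))).x
    for _ in range(1000)
])
```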

the many nuances of Bayesian testing [CERminar]

Posted in Statistics on January 19, 2022 by xi'an

CERminar
