## Archive for neural network

## AIxcuse me?!

Posted in Statistics with tags The Guardian, University of Toronto, Google, chatbots, neural network, AI, ChatGPT, Geoffrey Hinton on May 3, 2023 by xi'an

## posterior collapse

Posted in Statistics with tags ABC, identifiability, neural network, NeurIPS 2021, One World ABC Seminar, variational approximations, variational autoencoders on February 24, 2022 by xi'an

**T**he latest ABC One World webinar was a talk by Yixin Wang about the posterior collapse of auto-encoders, an issue I was completely unaware of. It is essentially an *identifiability* problem with auto-encoders, where the latent variable z at the source of the VAE does not impact the likelihood, assumed to be an exponential family with a parameter depending on z and on θ, possibly through a neural network construct. The *variational* part comes from the parameter being estimated as θ⁰, via a variational approximation.

*“…the problem of posterior collapse mainly arises from the model and the data, rather than from inference or optimization…”*

The collapse means that the posterior on the latent satisfies p(z|θ⁰,x)=p(z), which is not a standard property since θ⁰=θ⁰(x). Yixin Wang, David Blei and John Cunningham show this is equivalent to p(x|θ⁰,z)=p(x|θ⁰), i.e. to z being non-identifiable. The above quote is thus both correct and incorrect, in that the choice of the inference approach, i.e. of the estimator θ⁰=θ⁰(x), does have an impact on whether or not p(z|θ⁰,x)=p(z) holds, as acknowledged by the authors when describing “*methods modify the optimization objectives or algorithms of VAE to avoid parameter values θ at which the latent variable is non-identifiable*“. They then build a resolution via identifiable VAEs, by imposing that the conditional p(x|θ,z) be injective in z for all values of θ, resulting in a neural network based on Brenier maps.

From a Bayesian perspective, I have difficulties connecting with the issue, the folklore being that selecting a proper prior is a sufficient fix against non-identifiability; more fundamentally, I wonder at the relevance of inferring about the latent z’s and hence of worrying about their identifiability or lack thereof.
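The collapse criterion can be illustrated on a toy linear Gaussian model (a hypothetical sketch, not the authors’ VAE setting): with x = w·z + ε, z ~ N(0,1) and ε ~ N(0,σ²), the posterior on z is available in closed form, and setting the loading w (playing the role of θ) to zero makes the likelihood free of z, at which point the posterior collapses exactly onto the prior:

```python
import numpy as np

def posterior_of_z(x, w, sigma2=1.0):
    """Closed-form posterior N(mean, var) of z given x in x = w*z + eps."""
    var = 1.0 / (1.0 + w**2 / sigma2)
    mean = var * (w / sigma2) * x
    return mean, var

x_obs = 2.0

# w = 0: the likelihood no longer depends on z (non-identifiability),
# so the posterior collapses to the N(0, 1) prior.
m0, v0 = posterior_of_z(x_obs, w=0.0)
print(m0, v0)   # 0.0 1.0 -> prior recovered

# w != 0: z is identifiable and the posterior moves away from the prior.
m1, v1 = posterior_of_z(x_obs, w=1.0)
print(m1, v1)   # 1.0 0.5
```

This matches the equivalence above: p(z|θ⁰,x)=p(z) exactly when p(x|θ⁰,z) does not depend on z.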

## One World ABC seminar [24.2.22]

Posted in Statistics, University life with tags ABC, Approximate Bayesian computation, approximate inference, computer simulation, confidence sets, neural network, normalizing flow, One World, One World ABC Seminar, University of Warwick, webinar on February 22, 2022 by xi'an

**T**he next One World ABC seminar is on Thursday 24 Feb, with Rafael Izbicki talking on *Likelihood-Free Frequentist Inference – Constructing Confidence Sets with Correct Conditional Coverage*. It will take place at 14:30 CET (GMT+1).

Many areas of science make extensive use of computer simulators that implicitly encode likelihood functions of complex systems. Classical statistical methods are poorly suited for these so-called likelihood-free inference (LFI) settings, outside the asymptotic and low-dimensional regimes. Although new machine learning methods, such as normalizing flows, have revolutionized the sample efficiency and capacity of LFI methods, it remains an open question whether they produce reliable measures of uncertainty. We present a statistical framework for LFI that unifies classical statistics with modern machine learning to: (1) efficiently construct frequentist confidence sets and hypothesis tests with finite-sample guarantees of nominal coverage (type I error control) and power; (2) provide practical diagnostics for assessing empirical coverage over the entire parameter space. We refer to our framework as likelihood-free frequentist inference (LF2I). Any method that estimates a test statistic, like the likelihood ratio, can be plugged into our framework to create valid confidence sets and compute diagnostics, without costly Monte Carlo samples at fixed parameter settings. In this work, we specifically study the power of two test statistics (ACORE and BFF), which, respectively, maximize versus integrate an odds function over the parameter space. Our study offers multifaceted perspectives on the challenges in LF2I. This is joint work with Niccolo Dalmasso, David Zhao and Ann B. Lee.

## One World ABC seminar [3.2.22]

Posted in Statistics, University life with tags ABC, Approximate Bayesian computation, approximate inference, Brenier maps, convex neural networks, identifiability, neural network, One World, One World ABC Seminar, posterior collapse, University of Warwick, variational autoencoders, webinar on February 1, 2022 by xi'an

**T**he next One World ABC seminar is on Thursday 03 Feb, with Yixin Wang talking on *Posterior collapse and latent variable non-identifiability*. It will take place at 15:30 CET (GMT+1).

Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
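The input-convex parametrization mentioned at the end can be sketched in a few lines (a hypothetical two-layer toy, not the authors’ architecture): keeping the weights on the previous hidden layer nonnegative and the activation convex and nondecreasing makes the whole map convex in its input, which is what allows its gradient to define a Brenier map.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-layer input-convex network f: R^2 -> R.
W_x1 = rng.normal(size=(8, 2))           # first layer: any sign allowed
b1 = rng.normal(size=8)
W_z2 = np.abs(rng.normal(size=(1, 8)))   # nonnegative weights on hidden layer
W_x2 = rng.normal(size=(1, 2))           # affine skip connection from input
b2 = rng.normal(size=1)

def icnn(x):
    z1 = np.maximum(W_x1 @ x + b1, 0.0)       # ReLU of affine: convex in x
    return float(W_z2 @ z1 + W_x2 @ x + b2)   # nonneg. combination + affine

# Numerical midpoint-convexity check on random pairs of inputs.
ok = all(
    icnn(0.5 * (a + b)) <= 0.5 * (icnn(a) + icnn(b)) + 1e-9
    for a, b in (rng.normal(size=(2, 2)) for _ in range(200))
)
print(ok)   # True
```

Each hidden unit is a convex function of x, and a nonnegative combination of convex functions plus an affine term stays convex, hence the check passes by construction.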

## ABC by classification

Posted in pictures, Statistics, Travel, University life with tags ABC, Bayesian GANs, Biometrika, BIRS-CMO, Casa Matemática Oaxaca, Charlie Geyer, generalised Bayes estimators, neural network, Oaxaca, technical report, Université Paris Dauphine on December 21, 2021 by xi'an

**A**s a(nother) coincidence, we had a reading group discussion at Paris Dauphine yesterday, a few days after Veronika Rockova presented the paper in person in Oaxaca. The idea in ABC by classification, which she co-authored with Yuexi Wang and Tetsuya Kaji, is to use the empirical Kullback-Leibler divergence as a substitute for the intractable likelihood at the parameter value θ, in the generalised Bayes setting of Bissiri et al. Since this quantity is not available, it is estimated as well, by a classification method that somehow relates to Geyer’s 1994 inverse logistic proposal, using the (ABC) pseudo-data generated from the model associated with θ. The convergence of the algorithm obviously depends on the choice of the discriminator used in practice. The paper also makes a connection with GANs as a potential alternative for the generalised Bayes representation. It mostly focuses on the *frequentist* validation of the ABC posterior, in the sense of exhibiting a posterior concentration rate in n, the sample size, while requiring performances of the discriminators that may prove hard to check in practice, expanding our 2018 result to this setting with the tolerance decreasing more slowly than the Kullback-Leibler estimation error.
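The classification trick behind the Kullback-Leibler estimate can be sketched on a toy example (a hypothetical illustration in the spirit of Geyer’s proposal, not the authors’ code): fit a logistic regression separating observed-like samples from pseudo-data; the fitted logit estimates the log density ratio, and averaging it over the observed sample estimates the divergence. With two unit-variance Gaussians, KL(N(1,1) ‖ N(0,1)) = 1/2 is known exactly.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 5000
xp = rng.normal(1.0, 1.0, size=n)   # "observed" sample, label 1
xq = rng.normal(0.0, 1.0, size=n)   # pseudo-data from the model, label 0

X = np.concatenate([xp, xq])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Plain gradient descent on the logistic loss with logit w*x + b; for
# equal-variance Gaussians the true log ratio is indeed linear: x - 1/2.
w, b = 0.0, 0.0
for _ in range(4000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= 0.1 * np.mean((p - y) * X)
    b -= 0.1 * np.mean(p - y)

kl_hat = np.mean(w * xp + b)        # average logit over the observed sample
print(kl_hat)                       # should be close to 0.5
```

In the paper the discriminator is far more flexible (e.g. a neural network) and the pseudo-data come from the simulator at θ, but the density-ratio mechanism is the same.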

Besides the shared appreciation that working with the Kullback-Leibler divergence was a nice and under-appreciated direction, one point that came out of our discussion is that using the (estimated) Kullback-Leibler divergence as a form of distance (attached with a tolerance) is less prone to variability (or more robust) than directly using the estimate (without tolerance) as a substitute for the intractable likelihood, if we interpreted the discrepancy in Figure 3 properly. Another item was about the discriminator function itself: while a machine learning methodology such as neural networks could be used, albeit with unclear theoretical guarantees, it was unclear to us whether or not a *new* discriminator needed to be constructed for *each* value of the parameter θ, even when the simulations are run through a deterministic transform.