## Archive for alpha-stable processes

## a pen for ABC

**A**mong the flurry of papers arXived around the ICML 2019 deadline, I read on my way back from Oxford a paper by Wiqvist et al. on learning summary statistics for ABC by neural nets. It points to another recent paper by Jiang et al. (2017, Statistica Sinica), which constructed a neural network predicting each component of the parameter vector from the (raw) input data, an automated non-parametric regression of sorts. Creel (2017) does the same but with summary statistics. The current paper builds on Jiang et al. (2017) by adding the constraint that exchangeability and partial exchangeability features should be reflected by the neural net prediction function, with applications to Markovian models. Thanks to a factorisation theorem for d-block invariant models, the authors impose partial exchangeability for order-d Markov models by combining two neural networks so that their composition satisfies this factorisation. The concept is exemplified for one-dimensional g-and-k distributions and alpha-stable distributions, both of which are made of independent observations, and for the AR(2) and MA(2) models, as in our 2012 ABC survey paper. Since the latter is not Markovian, the authors experiment with different orders and reach the conclusion that an order of 10 is most appropriate, although this conclusion may be impacted by their being able to handle the true likelihood.
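The two-network composition described above is easier to see in code. Below is a minimal numpy sketch of such a partially exchangeable summary network — the weights are random and untrained, and the specific architecture (an inner net on sliding blocks of length d+1, mean-pooled, concatenated with the first d observations, fed to an outer net) is my own simplification of the factorisation idea, not the authors' exact design:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight tanh MLP standing in for a trained network."""
    Ws = [rng.normal(size=(m, n)) / np.sqrt(m) for m, n in zip(sizes[:-1], sizes[1:])]
    def f(x):
        for W in Ws[:-1]:
            x = np.tanh(x @ W)
        return x @ Ws[-1]
    return f

def partially_exchangeable_stat(x, d, inner, outer):
    """Summary statistic respecting the d-block factorisation:
    inner net on sliding blocks of length d+1, mean-pooled (so the
    result is invariant to the order of the blocks), concatenated
    with the first d observations, then passed through the outer net."""
    blocks = np.stack([x[i:i + d + 1] for i in range(len(x) - d)])
    pooled = inner(blocks).mean(axis=0)   # invariant to block order
    head = x[:d]                          # the initial d observations
    return outer(np.concatenate([head, pooled]))

d = 2                                     # Markov order
inner = mlp([d + 1, 16, 8])
outer = mlp([d + 8, 16, 4])
x = rng.normal(size=50)
s = partially_exchangeable_stat(x, d, inner, outer)
```

The mean pooling is what enforces the invariance: permuting the (d+1)-blocks leaves the pooled representation, hence the statistic, unchanged.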

Posted in Books, pictures, Statistics, Travel, University life with tags ABC, alpha-stable processes, exchangeability, g-and-k distributions, ICML, MA(q) model, Markov model, neural network, Oxford, partial exchangeability on February 13, 2019 by xi'an

## inverse stable priors

Posted in Statistics with tags All the pretty horses, alpha-stable processes, conjugate priors, Gösta Mittag-Leffler, non-informative priors, reference priors, Sofia Kovalevskaya on November 24, 2017 by xi'an

**D**exter Cahoy and Joseph Sedransk just arXived a paper on so-called inverse stable priors. The starting point is the supposed deficiency of Gamma conjugate priors, which exhibit explosive behaviour near zero, albeit while remaining proper. (This behaviour eventually vanishes for a large enough sample size.) The alternative involves a transform of alpha-stable random variables, with the consequence that the density of this alternative prior does not have a closed form. Neither does the posterior. When the likelihood can be written as exp(a.θ+b.log θ), modulo a reparameterisation, which covers a wide range of distributions, the posterior can be written in terms of the inverse stable density and of another (intractable) function called the generalized Mittag-Leffler function. (Which connects this post to an earlier one on Sofia Kovalevskaya.) For simulating from this posterior, the authors suggest an accept-reject algorithm with the prior as proposal, which has the advantage of removing the intractable inverse stable density but the disadvantage of… simulating from the prior! (No mention is made of the acceptance rate.) I am thus reserved as to how appealing this new proposal is, despite “the inverse stable density (…) becoming increasingly popular in several areas of study”. And hence do not foresee a bright future for this class of priors…
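The accept-reject step is simple enough to sketch: propose θ from the prior and accept with probability L(θ)/sup L, so the intractable prior density cancels out entirely. A hedged Python illustration, with a Gamma prior standing in for the inverse stable prior (any simulable prior works, since only draws are needed) and a Poisson likelihood, which is of the exp(a.θ+b.log θ) form:

```python
import numpy as np

rng = np.random.default_rng(1)

def accept_reject_posterior(prior_sampler, loglik, loglik_max, n_draws):
    """Accept-reject targeting the posterior with the prior as proposal:
    accept theta ~ prior with probability L(theta)/max L. The prior
    density never appears, so it may be intractable (as for inverse
    stable priors) provided it can be simulated."""
    draws = []
    while len(draws) < n_draws:
        theta = prior_sampler()
        if np.log(rng.uniform()) < loglik(theta) - loglik_max:
            draws.append(theta)
    return np.array(draws)

# Poisson likelihood, of the form exp(a*theta + b*log theta)
data = rng.poisson(3.0, size=20)
a, b = -len(data), data.sum()
loglik = lambda t: a * t + b * np.log(t)
mle = data.mean()                      # maximiser of the log-likelihood
prior = lambda: rng.gamma(2.0, 1.0)   # stand-in simulable prior
post = accept_reject_posterior(prior, loglik, loglik(mle), 500)
```

The catch the post raises is visible here: every draw costs prior simulations, and nothing controls the acceptance rate when prior and likelihood disagree.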

## Stochastic volatility filtering with intractable likelihoods

Posted in Books, Statistics, University life with tags ABC, alpha-stable processes, auxiliary particle filter, EP-ABC, particle filters, SMC, stochastic volatility on May 23, 2014 by xi'an

“The contribution of our work is two-fold: first, we extend the SVM literature, by proposing a new method for obtaining the filtered volatility estimates. Second, we build upon the current ABC literature by introducing the ABC auxiliary particle filter, which can be easily applied not only to SVM, but to any hidden Markov model.”

**A**nother ABC arXival: Emilian Vankov and Katherine B. Ensor posted a paper with the above title. They consider a stochastic volatility model with an α-stable distribution on the observables (or *returns*), which makes the likelihood unavailable, even were the hidden Markov sequence known… Now, I find it very surprising that the authors do not mention the highly relevant paper of Peters, Sisson and Fan, Likelihood-free Bayesian inference for α-stable models, published in CSDA in 2012, where an ABC algorithm is specifically designed for handling α-stable likelihoods. (Commented on in an earlier post.) Similarly, the use of a particle filter coupled with ABC is advanced as a novelty when many researchers have implemented such filters, including Pierre Del Moral, Arnaud Doucet, Ajay Jasra, Sumeet Singh and others, in similar or more general settings. Furthermore, Simon Barthelmé and Nicolas Chopin analysed this very model by EP-ABC and ABC. I thus find it a wee bit hard to pinpoint the degree of innovation contained in this new ABC paper…
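The paper's ABC auxiliary particle filter is not spelled out in the post; as a rough illustration of the general idea, here is a minimal ABC *bootstrap* (not auxiliary) filter for a stochastic volatility model with symmetric α-stable returns, replacing the intractable observation density with a uniform kernel on the distance between a simulated pseudo-return and the observed one — the parameter values, kernel, and tolerance are all arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(2)

def rstable(alpha, size):
    """Symmetric alpha-stable draws (Chambers-Mallows-Stuck, beta=0)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

def abc_filter(y, mu, phi, sigma, alpha, N=1000, eps=0.5):
    """ABC bootstrap particle filter for log-volatility h_t:
    h_t = mu + phi (h_{t-1} - mu) + sigma eta_t,  y_t = exp(h_t/2) eps_t,
    with eps_t symmetric alpha-stable. Weights use a uniform ABC kernel
    of width eps on |simulated return - observed return|."""
    h = mu + sigma / np.sqrt(1 - phi ** 2) * rng.normal(size=N)
    means = []
    for yt in y:
        h = mu + phi * (h - mu) + sigma * rng.normal(size=N)
        y_sim = np.exp(h / 2) * rstable(alpha, N)     # pseudo-observations
        w = (np.abs(y_sim - yt) < eps).astype(float)
        if w.sum() == 0:                              # degeneracy guard
            w[:] = 1.0
        w /= w.sum()
        means.append(h @ w)                           # filtered E[h_t | y_1:t]
        h = rng.choice(h, size=N, p=w)                # multinomial resampling
    return np.array(means)

# simulate a short series and filter it
T, mu, phi, sigma, alpha = 50, -1.0, 0.95, 0.3, 1.8
h_true = np.empty(T); h_true[0] = mu
for t in range(1, T):
    h_true[t] = mu + phi * (h_true[t - 1] - mu) + sigma * rng.normal()
y = np.exp(h_true / 2) * rstable(alpha, T)
filt = abc_filter(y, mu, phi, sigma, alpha)
```

This is exactly the kind of ABC filter the earlier literature cited above already covers; the auxiliary variant would add a look-ahead stage in the resampling weights.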

## Advances in scalable Bayesian computation [day #1]

Posted in Books, Mountains, pictures, R, Statistics, University life with tags alpha-stable processes, Banff International Research Station for Mathematical Innovation, Bayesian nonparametrics, Canada, Canadian Rockies, objective Bayes, particle Gibbs sampler, probabilistic programming, unbiasedness, uniform ergodicity on March 4, 2014 by xi'an

**T**his was the first day of our workshop Advances in Scalable Bayesian Computation and it sounded like the “main” theme was *probabilistic programming*, in tune with my book review posted this morning. Indeed, both Vikash Mansinghka and Frank Wood gave talks about this concept, Vikash detailing the specifics of a new programming language called Venture and Frank focussing on his state-space version of the above called Anglican. This is a version of the language Church, developed to handle probabilistic models and inference (hence the joke about Anglican, “a Church of England Venture”! But they could also have added that Frank Wood was also the name of a former archbishop of Melbourne..!) I alas had an involuntary doze during Vikash’s talk, which made it harder for me to assess the fundamentals of those ventures, of how they extended beyond a “mere” new software (and of why I would invest in learning a Lisp-based language!).

**T**he other talks of Day #1 were of a more “classical” nature, with Pierre Jacob explaining why non-negative unbiased estimators are impossible to provide in general, a paper I posted about a little while ago, which included an objective Bayes example that I found quite interesting. Then Sumeet Singh (no video) presented a joint work with Nicolas Chopin on the uniform ergodicity of the particle Gibbs sampler, a paper that I should have commented on here (except that it appeared just prior to The Accident!), with a nice coupling proof. And Maria Lomeli gave us an introduction to the highly general Poisson-Kingman mixture models as random measures, which encompass all of the previously studied non-parametric random measures, with an MCMC implementation that included a latent variable representation for the alpha-stable process behind the scenes, a representation that could be (and maybe is) also useful in parametric analyses of alpha-stable processes.

**W**e also had an open discussion in the afternoon that ended up being quite exciting, with a few of us voicing some problems or questions about existing methods and others offering suggestions or objections. We are still a wee bit short of considering a collective paper on “MCMC under constraints with coherent cross-validated variational Bayes and loss-based pseudo priors, with applications to basketball data” to appear by the end of the week!

**A**dd to this two visits to the Sally Borden Recreation Centre for morning swimming and evening climbing, and it is no wonder I woke up a bit late this morning! Looking forward to Day #2!