Archive for the Running Category

Seoul sunrise [jatp]

Posted in Mountains, pictures, Running, Travel, University life on November 8, 2019 by xi'an

catching my train with no training

Posted in pictures, Running, Travel on October 22, 2019 by xi'an

2:14:04!!!

Posted in Running, Statistics, Travel on October 13, 2019 by xi'an

1:59:40!

Posted in pictures, Running, Travel on October 12, 2019 by xi'an

Japanese mushrooms [jatp]

Posted in Mountains, pictures, Running, Travel on October 12, 2019 by xi'an

Argentan half-marathon

Posted in Running on October 5, 2019 by xi'an

Today is the day of the Argentan half-marathon, which I will not run this year as I have not yet fully recovered from my Achilles tendinitis. (If I run too many days in a row, as I indulged in doing while in Salzburg, the inflammation comes back!) Frustrating, as this is my “race of the year” in the Norman countryside. But a similar break occurred ten years ago, when I missed the 2009 and 2010 editions. And somehow this is the “best” year to miss, as I am switching to the next age group, V3 or grand-master!, in less than a month, and will thus end up as one of the youngsters in the next race I run! As an indicator, in the 5km trail I ran last Sunday in the Parc, I finished 4th (by one position and 6 seconds!) in my category and 2nd (by three positions and 16 seconds, plus a month!) in the following one.

likelihood-free inference by ratio estimation

Posted in Books, Mountains, pictures, Running, Statistics, Travel, University life on September 9, 2019 by xi'an

“This approach for posterior estimation with generative models mirrors the approach of Gutmann and Hyvärinen (2012) for the estimation of unnormalised models. The main difference is that here we classify between two simulated data sets while Gutmann and Hyvärinen (2012) classified between the observed data and simulated reference data.”

A 2018 arXiv posting by Owen Thomas et al. (including my colleague at Warwick, Rito Dutta, CoI warning!) about estimating the likelihood (and the posterior) when it is intractable. Likelihood-free but not ABC, since the ratio of the likelihood to the marginal is estimated in a non- or semi-parametric (and biased) way. Following Geyer’s fabulous 1994 estimate of an unknown normalising constant via logistic regression, the current paper, which I read in preparation for my discussion of ABC optimal design in Salzburg, uses probabilistic classification and an exponential family representation of the ratio, opposing data from the density to data from the marginal, assuming both can be readily produced. The logistic regression minimising the asymptotic classification error is the logistic transform of the log-ratio. For a finite (double) sample, this minimisation thus leads to an empirical version of the ratio. Or to a smooth version if the log-ratio is represented as a convex combination of summary statistics, turning the approximation into an exponential family, which is a clever way of looping back towards ABC notions. And towards synthetic likelihood. Although with a difference in estimating the exponential family parameters β(θ) by minimising the classification error, parameters that are indeed conditional on the parameter θ. Actually, the paper introduces a further penalisation or regularisation term on those parameters β(θ), which could have been handled by a Bayesian Lasso instead. This step essentially drives the selection of the summaries, except that it is done for each value of the parameter θ, at the expense of a cross-validation step. This is quite an original approach, as far as I can tell, but I wonder at the link with more standard density estimation methods, in particular in terms of the precision of the resulting estimate (and the speed of convergence with the sample size, if convergence there is).
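The classification trick is easy to illustrate on a toy case. The sketch below (my own illustration, not the paper’s code, assuming only NumPy and a pair of Gaussians standing in for the two simulated data sets) fits a logistic regression on summary statistics ψ(x) = (1, x, x²), so that the fitted linear score β·ψ(x) estimates the log-ratio of the two densities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two simulated samples: p = N(1, 1) vs q = N(0, 1),
# so the exact log-ratio is log p(x) - log q(x) = x - 1/2.
xp = rng.normal(1.0, 1.0, 5000)
xq = rng.normal(0.0, 1.0, 5000)

# Summary statistics psi(x) = (1, x, x^2): the true log-ratio lies in their
# span, mimicking the exponential family representation of the ratio.
def psi(x):
    return np.column_stack([np.ones_like(x), x, x ** 2])

X = np.vstack([psi(xp), psi(xq)])
y = np.concatenate([np.ones(len(xp)), np.zeros(len(xq))])

# Logistic regression fitted by Newton's method: the minimiser of the
# logistic loss satisfies sigmoid(beta . psi(x)) ~ p(x) / (p(x) + q(x)),
# hence beta . psi(x) estimates the log-ratio log p(x) - log q(x).
beta = np.zeros(X.shape[1])
for _ in range(25):
    prob = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (prob - y) / len(y)
    hess = (X * (prob * (1.0 - prob))[:, None]).T @ X / len(y)
    beta -= np.linalg.solve(hess + 1e-8 * np.eye(len(beta)), grad)

# Estimated log-ratio at a few points, to be compared with x - 1/2
x0 = np.array([0.0, 1.0, 2.0])
print(psi(x0) @ beta)  # close to x0 - 0.5, up to Monte Carlo error
```

In the conditional setting of the paper, the coefficients become β(θ), refitted (and regularised) for each value of θ, which is where the summary selection and the cross-validation cost come in.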