Archive for Bernhard Flury

dynamic mixtures and frequentist ABC

Posted in Statistics on November 30, 2022 by xi'an

This early morning in NYC, I spotted this new arXival by Marco Bee (whom I know from the time he was writing his PhD with my late friend Bernhard Flury) and found he has been working for a while on ABC-related problems. The mixture model he considers therein is a form of mixture of experts, where the weights of the mixture components are not constant but functions of the entry x, taking values in (0,1). This model was introduced by Frigessi, Haug and Rue in 2002 and is often used as a benchmark for ABC methods, since it is missing its normalising constant, as in e.g.

f(x) \propto p(x) f_1(x) + (1-p(x)) f_2(x)

even with all entries being standard pdfs and cdfs. Rather than using a (costly) numerical approximation of the “constant” (as a function of all unknown parameters involved), Marco follows the approximate maximum likelihood approach of my Warwick colleagues, Javier Rubio [now at UCL] and Adam Johansen. It is based on the [SAME] remark that, under a uniform prior and using an approximation to the actual likelihood, the MAP estimator is also the MLE for that approximation. The approximation is ABC-esque, in that a pseudo-sample is generated from the true model (attached to a simulation of the parameter) and the pair is accepted if the pseudo-sample stands close enough to the observed sample. The paper proposes to use the Cramér-von Mises distance, which only involves ranks. Given this “posterior” sample, an approximation of the posterior density is constructed and then numerically optimised. From a frequentist viewpoint, a direct estimate of the mode would be preferable. From my Bayesian perspective, this sounds like a step backwards, in that, once a posterior sample is available, reconnecting with an approximate MLE does not sound highly compelling.
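For the sake of illustration, here is a minimal Python sketch of this accept/reject scheme, assuming a toy instance of the dynamic mixture (Weibull body, Lomax tail, logistic weight p(x) with made-up parameters mu and tau), an illustrative uniform prior box and tolerance, and a crude kernel-density mode estimate in place of the paper's numerical optimisation:

```python
import numpy as np
from scipy import stats

# Toy instance of the dynamic mixture f(x) ∝ p(x) f1(x) + (1-p(x)) f2(x):
# f1 is a Weibull body, f2 a Lomax (Pareto-type) tail, p(x) a logistic
# weight in (0,1). The parameterisation (mu, tau), prior box, and tolerance
# are illustrative assumptions, not the paper's exact setup.
f1, f2 = stats.weibull_min(c=1.5), stats.lomax(c=2.5)

def sample_dynmix(theta, n, rng):
    """Rejection sampler bypassing the missing normalising constant:
    propose from the equal-weight mixture (f1+f2)/2 and accept with
    probability (p f1 + (1-p) f2) / (f1 + f2) ≤ 1."""
    mu, tau = theta
    out = np.empty(0)
    while out.size < n:
        m = 2 * (n - out.size)
        comp = rng.integers(2, size=m)
        x = np.where(comp == 0, f1.rvs(size=m, random_state=rng),
                                f2.rvs(size=m, random_state=rng))
        p = 1 / (1 + np.exp(-(x - mu) / tau))          # dynamic weight p(x)
        a = (p * f1.pdf(x) + (1 - p) * f2.pdf(x)) / (f1.pdf(x) + f2.pdf(x))
        out = np.append(out, x[rng.random(m) < a])
    return out[:n]

def abc_approx_mle(obs, n_sim=10_000, tol=0.05, seed=0):
    """ABC accept/reject under a uniform prior, with the rank-based
    Cramér-von Mises distance, followed by a crude mode (≈ MLE) estimate."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_sim):
        theta = rng.uniform([0.5, 0.1], [5.0, 2.0])    # uniform prior box
        pseudo = sample_dynmix(theta, obs.size, rng)
        if stats.cramervonmises_2samp(pseudo, obs).statistic < tol:
            accepted.append(theta)
    accepted = np.array(accepted)                      # assumes enough acceptances
    kde = stats.gaussian_kde(accepted.T)               # approximate posterior density
    return accepted[np.argmax(kde(accepted.T))]        # its mode ≈ approximate MLE
```

The tolerance and number of simulations would obviously need tuning to secure enough acceptances, and the kde-argmax at the end is only a crude stand-in for the numerical optimisation of the approximate posterior density performed in the paper.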

continuous herded Gibbs sampling

Posted in Books, pictures, Statistics on June 28, 2021 by xi'an

Read a short paper by Laura Wolf and Marcus Baum on Gibbs herding, where herding is a technique of “deterministic sampling”, for instance selecting points over the support of the distribution by matching exact and empirical (or “empirical”!) moments. Which reminds me of the principal points devised by my late friend Bernhard Flury. With an unclear argument as to why it could take over from random sampling:

“random numbers are often generated by pseudo-random number generators, hence are not truly random”

Especially since the aim is to “draw samples from continuous multivariate probability densities.” The construction of such a sample proceeds sequentially, adding a new (T+1)-th point to the existing sample of y’s by maximising in x the discrepancy

(T+1)\mathbb E^Y[k(x,Y)]-\sum_{t=1}^T k(x,y_t)

where k(·,·) is a kernel, e.g. a Gaussian density, hence a complexity that grows as O(T) with each new point. The current paper suggests using Gibbs “sampling” to update one component of x at a time, using the conditional version of the above discrepancy, which makes the complexity grow as O(dT) in d dimensions.
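To make the construction concrete, here is a minimal 1-d herding sketch, where the expectation E^Y[k(x,Y)] is approximated by a Monte Carlo average over a sample from the target and the maximisation in x is run over a fixed grid (target, bandwidth, and grid all being illustrative choices):

```python
import numpy as np

def gauss_kernel(x, y, h=0.5):
    # Gaussian kernel; the bandwidth h is an arbitrary choice
    return np.exp(-0.5 * ((x - y) / h) ** 2)

def herd(n_points, target_sample, grid):
    """Sequentially add the grid point maximising the discrepancy
    (T+1) E^Y[k(x,Y)] - sum_{t<=T} k(x, y_t), with the expectation
    approximated by an average over target_sample."""
    ek = gauss_kernel(grid[:, None], target_sample[None, :]).mean(axis=1)
    pts = []
    for T in range(n_points):
        ksum = (gauss_kernel(grid[:, None], np.array(pts)[None, :]).sum(axis=1)
                if pts else np.zeros_like(grid))
        pts.append(grid[np.argmax((T + 1) * ek - ksum)])
    return np.array(pts)

rng = np.random.default_rng(1)
pts = herd(50, rng.normal(size=10_000), np.linspace(-4, 4, 801))
```

Each new point requires the kernel sum over all T existing points, hence the O(T) cost per addition; the herded Gibbs version would run this update one coordinate of x at a time.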

I remain puzzled by the whole thing, as these samples cannot be used as regular random or quasi-random samples, and in particular do not produce unbiased estimators of anything. Obviously. Since the production of such samples is furthermore computationally costly, it is also unclear to me that they could even be used for quick & dirty approximations of a target sample.

population quasi-Monte Carlo

Posted in Books, Statistics on January 28, 2021 by xi'an

“Population Monte Carlo (PMC) is an important class of Monte Carlo methods, which utilizes a population of proposals to generate weighted samples that approximate the target distribution”

A return of the prodigal son, with this arXival by Huang, Joseph, and Mak of a paper on population Monte Carlo using quasi-random sequences. The construct is based on an earlier notion of Joseph and Mak, support points, which are defined wrt a given target distribution F as minimising the variability of a sample from F away from these points. (I would have used instead my late friend Bernhard Flury’s principal points!) The proposal uses Owen-style scrambled Sobol points, followed by a deterministic mixture weighting à la PMC, followed by importance support resampling to find the next location parameters of the proposal mixture (which is why I included an unrelated mixture surface as my post picture!). This importance support resampling is obviously less variable than the more traditional ways of resampling, but the cost moves from O(M) to O(M²).
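As a rough sketch of a single such iteration, assuming isotropic Gaussian proposals and a generic log-target (the proposal family, scale, and allocation of points to proposals being simplifications of the paper's construction):

```python
import numpy as np
from scipy.stats import norm, qmc
from scipy.special import logsumexp

def pqmc_step(mus, sigma, log_target, m, seed=0):
    """One population step: scrambled Sobol points pushed through the
    Gaussian proposals, then weighted by the deterministic-mixture rule."""
    n_prop, d = mus.shape
    u = qmc.Sobol(d, scramble=True, seed=seed).random(m)  # Owen-scrambled points
    x = mus[np.arange(m) % n_prop] + sigma * norm.ppf(u)  # map to proposals
    # deterministic-mixture importance weights à la PMC: each point is
    # weighted against the whole mixture of proposals, not its own component
    diffs = x[:, None, :] - mus[None, :, :]               # (m, n_prop, d)
    comp_logpdf = norm.logpdf(diffs, scale=sigma).sum(axis=2)
    log_mix = logsumexp(comp_logpdf, axis=1) - np.log(n_prop)
    log_w = log_target(x) - log_mix
    return x, np.exp(log_w - logsumexp(log_w))            # self-normalised weights

# toy usage: standard bivariate normal target, m a power of two for Sobol;
# resampling the next mus from (x, w) is where the O(M²) support-point step enters
mus = np.array([[-2.0, 0.0], [2.0, 0.0], [0.0, 2.0], [0.0, -2.0]])
x, w = pqmc_step(mus, 1.0, lambda x: -0.5 * (x ** 2).sum(axis=1), 256)
```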

“The main computational complexity of the algorithm is O(M²) from computing the pairwise distance of the M weighted samples”

The covariance parameters are updated as in our 2008 paper. This new proposal is interesting and reasonable, with apparent significant gains, although I would have liked to see a clearer discussion of the actual computing costs of PQMC.
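And for intuition on where the pairwise distances enter, here is a simplified 1-d stand-in for the support resampling step, greedily minimising a weighted energy distance over the M weighted samples; the paper's actual criterion and optimisation differ, so this is only meant to exhibit the O(M²) bottleneck:

```python
import numpy as np

def support_resample(x, w, n):
    """Greedily pick n points whose empirical distribution is close, in
    weighted energy distance, to the weighted sample (x, w): at step i+1,
    add the candidate c minimising (i+1) E_w|c - X| - sum_s |c - s|
    (duplicates are possible, as in standard resampling)."""
    cross = np.abs(x[:, None] - x[None, :])   # all pairwise distances: O(M²)
    ew = cross @ w                            # E_w|c - X| for each candidate c
    chosen = []
    for i in range(n):
        within = cross[:, chosen].sum(axis=1) if chosen else np.zeros(x.size)
        chosen.append(int(np.argmin((i + 1) * ew - within)))
    return x[chosen]
```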

no more car talk

Posted in Books, Kids, Travel on November 9, 2014 by xi'an

When I first went to the US in 1987, I switched from listening to the French public radio to listening to NPR, the National Public Radio network. However, it was not until I met both George Casella and Bernhard Flury that I started listening to “Car Talk”, the Sunday morning talk-show by the Magliozzi brothers where listeners would call in, describe their car problems, and get jokes and sometimes advice in reply. Both George and Bernhard were big fans of the show, much more for the unbelievably high spirits it provided than for any deep interest in mechanics. And indeed there was something of the spirit of Zen and the art of motorcycle maintenance in that show, namely that through mechanical issues, people would come to expose deeper worries that the Magliozzi brothers would help bring out, playing the role of garage-shack psychiatrists… which made me listen to them, despite my complete lack of interest in cars, mechanics, and repairs in general.

One of George’s moments of fame was when he wrote to the Magliozzi brothers about the Monty Hall problem, because they had botched their explanation as to why one should always switch doors. And they read his letter on the air, with the line “Who is this Casella guy from Cornell University? A professor? A janitor?”, since George had simply signed it George Casella, Cornell University. Besides, Bernhard was such a fan of the show that he taped every single show, which he would later replay on long car trips (I do not know how his family enjoyed the exposure, though!). And so he happened to have this line about George on tape, which he sent him a few weeks later… I am reminiscing about all this because I saw in the NYT today that the older brother, Tom Magliozzi, had just died. Some engines can alas not be fixed… But I am sure there will be a queue of former car addicts in some heavenly place eager to ask him their questions about their favourite car. Thanks for the ride, Tom!
