Archive for high dimensions

congrats, Prof Rousseau!

Posted in Statistics on April 4, 2019 by xi'an

distributed posteriors

Posted in Books, Statistics, Travel, University life on February 27, 2019 by xi'an

Another presentation by our OxWaSP students introduced me to the notion of distributed posteriors, following a 2018 paper by Botond Szabó and Harry van Zanten, which corresponds to the construction of posteriors under a divide & conquer strategy. The authors show that an adaptation of the prior to the division of the sample is necessary to recover the (minimax) convergence rate obtained in the non-distributed case. This is somewhat annoying, except that the adaptation simply amounts to taking the original prior to the power 1/m, where m is the number of divisions. They further show that when the regularity (parameter) of the model is unknown, the optimal rate cannot be recovered unless stronger assumptions are made on the non-zero parameters of the model.

“First of all, we show that depending on the communication budget, it might be advantageous to group local machines and let different groups work on different aspects of the high-dimensional object of interest. Secondly, we show that it is possible to have adaptation in communication restricted distributed settings, i.e. to have data-driven tuning that automatically achieves the correct bias-variance trade-off.”

I find the paper of considerable interest for scalable MCMC methods, even though the setting may sound overly formal, because the study incorporates parallel computing constraints. (Although I did not investigate the more theoretical aspects of the paper.)
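To fix ideas on the prior rescaling, here is a minimal sketch in a conjugate Gaussian-mean model (my own toy illustration, not the setting of the paper): each of the m machines works with the prior raised to the power 1/m, that is, with its variance inflated by m, and multiplying the m Gaussian sub-posteriors then recovers the full-data posterior exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, tau2 = 1.0, 4.0          # known data variance, prior variance
n, m = 1000, 10                  # sample size, number of shards
x = rng.normal(0.5, np.sqrt(sigma2), n)

# full-data posterior for the mean under a N(0, tau2) prior
prec_full = 1 / tau2 + n / sigma2
mean_full = x.sum() / sigma2 / prec_full

# distributed version: each shard uses the prior to the power 1/m,
# i.e. N(0, m * tau2) here, then the m sub-posteriors are multiplied
shards = np.array_split(x, m)
precs = np.array([1 / (m * tau2) + len(s) / sigma2 for s in shards])
means = np.array([s.sum() / sigma2 / p for s, p in zip(shards, precs)])
prec_comb = precs.sum()                        # = 1/tau2 + n/sigma2
mean_comb = (precs * means).sum() / prec_comb  # = full-data posterior mean
```

Without the 1/m rescaling each shard would count the prior once, so the product of sub-posteriors would implicitly use the prior to the power m and over-concentrate.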

IMS workshop [day 3]

Posted in pictures, R, Statistics, Travel, University life on August 30, 2018 by xi'an

I made the “capital” mistake of walking across the entire NUS campus this morning, which is quite green and pretty, but which almost gains an additional dimension from such intense humidity that one feels one has to wade through it!, a feature I had managed to completely erase from my memory of my previous visit there. Anyway, nothing of any relevance. One talk in the morning was by Markus Eisenbach on tools used by physicists to speed up Monte Carlo methods, like the Wang-Landau flat histogram, towards computing the partition function or the distribution of the energy levels, definitely addressing issues close to my interests, but somewhat beyond my reach, for using a different language and stress, as often in physics. (I mean, as often in the physics talks I attend.) An idea that came out clearly to me was to bypass a (flat) histogram target and aim directly at a constant-slope cdf for the energy levels. (But I got scared away by the Fourier transforms!)
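For readers unfamiliar with Wang-Landau, here is a toy sketch on n coin flips, where the “energy” is the number of heads and the true density of states is the binomial coefficient C(n, E), so the output can be checked; the 0.8 flatness threshold and the halving schedule for the modification factor are standard but arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8                                # coins; "energy" E = number of heads
state = rng.integers(0, 2, n)
E = int(state.sum())
log_g = np.zeros(n + 1)              # running log density-of-states estimate
log_f = 1.0                          # modification factor, halved when flat

while log_f > 1e-4:
    hist = np.zeros(n + 1)
    for _ in range(100):             # batches, until the histogram is flat
        for _ in range(500):
            i = rng.integers(n)
            E_new = E + (1 - 2 * state[i])       # energy after flipping coin i
            # accept with prob min(1, g(E)/g(E_new)), which flattens the visits
            if np.log(rng.random()) < log_g[E] - log_g[E_new]:
                state[i] = 1 - state[i]
                E = int(E_new)
            log_g[E] += log_f        # penalise the current energy level
            hist[E] += 1
        if hist.min() > 0.8 * hist.mean():       # flat-histogram criterion
            break
    log_f /= 2

log_g -= log_g[0]                    # normalise so that g(0) = 1
```

At convergence log_g[E] approximates log C(n, E), here log C(8, 4) = log 70 at the middle bin, up to the saturation error of the plain Wang-Landau schedule.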

Lawrence Murray then discussed some features of the Birch probabilistic programming language he is currently developing, especially the fairly fascinating concept of delayed sampling, which connects with locally-optimal proposals and Rao-Blackwellisation, and which I plan to get back to later [and hopefully sooner rather than later!].
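The gist of delayed sampling, in a hand-rolled Beta-Bernoulli sketch (the class name and interface are mine, not Birch's): a random variable stays in marginalised form for as long as possible, absorbing conjugate observations analytically, and is only sampled when its value is actually demanded, which Rao-Blackwellises the intermediate steps.

```python
import random

class DelayedBeta:
    """Beta(a, b) variable whose sampling is delayed: conjugate Bernoulli
    observations update (a, b) analytically until the value is forced."""
    def __init__(self, a, b):
        self.a, self.b, self.value = a, b, None

    def observe_bernoulli(self, x):
        if self.value is not None:
            raise RuntimeError("already realised; cannot marginalise")
        self.a += x                   # conjugate update, no sampling
        self.b += 1 - x

    def marginal_prob(self):          # P(next x = 1), success prob integrated out
        return self.a / (self.a + self.b)

    def force(self):                  # sample only when the value is needed
        if self.value is None:
            self.value = random.betavariate(self.a, self.b)
        return self.value

p = DelayedBeta(1, 1)
for x in [1, 1, 0, 1]:
    p.observe_bernoulli(x)
# posterior is Beta(1+3, 1+1), reached without p ever having been sampled
```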

In the afternoon, Maria de Iorio gave a talk about the construction of nonparametric priors that create dependence between a sequence of functions, a notion I had not thought of before, with an array of possibilities when using the stick-breaking construction of Dirichlet processes.
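As a reminder of the building block, a truncated stick-breaking draw of Dirichlet process weights (concentration parameter and truncation level are arbitrary choices here): w_k = v_k ∏_{j<k}(1 − v_j) with v_k ~ Beta(1, α), so each weight is a fraction of the stick left over by the previous breaks.

```python
import numpy as np

def stick_breaking(alpha, trunc, rng):
    """Truncated stick-breaking weights of a DP(alpha, G0):
    w_k = v_k * prod_{j<k} (1 - v_j), with v_k ~ Beta(1, alpha)."""
    v = rng.beta(1.0, alpha, trunc)
    # length of stick remaining before each break
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

rng = np.random.default_rng(2)
w = stick_breaking(alpha=2.0, trunc=200, rng=rng)
# weights are positive and sum to just below one at this truncation
```

Dependent versions of such priors tie the v_k's (or the atoms) together across the sequence of functions, which is where the array of possibilities mentioned above opens up.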

And Christophe Andrieu gave a very smooth and helpful introduction to piecewise deterministic Markov processes (PDMPs), in preparation for the talks he is giving next week in the continuation of the workshop at IMS. He started with the guided random walk of Gustafson (1998), which was extended a bit later in the non-reversible paper of Diaconis, Holmes, and Neal (2000). Although I had a vague idea of the contents of these papers, the role of the velocity ν became much clearer. And premonitory of the advances made by the more recent PDMP proposals. There is obviously a continuation with the equally pedagogical talk Christophe gave at MCqMC in Rennes two months [and half the globe] ago, but the focus being somewhat different, it really felt like a new talk [my short-term memory may also play some role in this feeling!, as I now remember the discussion of Hildebrand (2002) for non-reversible processes]. An introduction to the topic I would recommend to anyone interested in this new branch of Monte Carlo simulation! To be followed by the most recently arXived hypocoercivity paper by Christophe and co-authors.
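Gustafson's guided walk in one dimension, as I understand the scheme (the standard Normal target and step size are my own choices): carry a velocity ν ∈ {−1, +1}, always propose a move in the direction of ν, and flip ν only on rejection, which suppresses the diffusive back-tracking of a reversible random walk and already prefigures the deterministic flows of PDMPs.

```python
import numpy as np

def guided_walk(logpi, n_iter, step, rng):
    """Gustafson (1998)-style guided random walk: propose
    x + nu * |e| with nu in {-1, +1}; flip nu on rejection."""
    x, nu = 0.0, 1
    out = np.empty(n_iter)
    for t in range(n_iter):
        prop = x + nu * abs(rng.normal(0.0, step))
        if np.log(rng.random()) < logpi(prop) - logpi(x):
            x = prop               # accepted: keep moving the same way
        else:
            nu = -nu               # rejected: reverse the velocity
        out[t] = x
    return out

rng = np.random.default_rng(3)
xs = guided_walk(lambda x: -0.5 * x * x, n_iter=50_000, step=1.0, rng=rng)
```

The chain is non-reversible on the extended state (x, ν) yet still leaves the target invariant, the point made precise by Diaconis, Holmes, and Neal.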

ABCDay [arXivals]

Posted in Books, Statistics, University life on March 2, 2018 by xi'an

A bunch of ABC papers appeared on arXiv yesterday, most of them linked to the forthcoming Handbook of ABC:

    1. Overview of Approximate Bayesian Computation, by S. A. Sisson, Y. Fan, M. A. Beaumont
    2. Kernel Recursive ABC: Point Estimation with Intractable Likelihood, by Takafumi Kajihara, Keisuke Yamazaki, Motonobu Kanagawa, Kenji Fukumizu
    3. High-dimensional ABC, by D. J. Nott, V. M.-H. Ong, Y. Fan, S. A. Sisson
    4. ABC Samplers, by Y. Fan, S. A. Sisson


ISBA 2016

Posted in Kids, Statistics, Travel, University life, Wines on June 14, 2016 by xi'an

[non-Tibetan flags in Pula, Sardinia, June 12, 2016]

I remember fondly the early Valencia meetings where we did not have to pick between sessions. Then one year there were two sessions, and soon more. And we now have to pick among equally tantalising sessions. [A complaint of the super wealthy, I do realise.] After a morning trip to Sant'Antioco and the southern coast of Sardinia, I started my ISBA 2016 with a not [that Bayesian] high-dimension session with Michael Jordan (who gave a talk related to his MCMski lecture), Isa Verdinelli, and Larry Wasserman.

Larry gave a [non-Bayesian, what else?!] talk on the problem of data splitting versus double use of the same data, or rather of using a model index estimated from a given dataset to estimate properties of the mean of the same data, as in model selection. While splitting the data avoids all sorts of problems, not splitting the data but using a different loss function could also avoid the issue (and the infinite regress that, if we keep conducting inference, we may have to split the data further and further), namely by looking only at quantities that do not vary across models. So it is surprising that prediction gets affected by this.

In a second session around Bayesian tests and model choice, Sarah Filippi presented the Bayesian non-parametric test she devised with Chris Holmes, using Polya trees. And mentioned our testing-by-mixture approach as a valuable alternative! Veronika Rockova talked about her new approach to efficient variable selection by spike-and-slab priors, through a mix of particle MCMC and EM, plus some variational Bayes motivations. (She also mentioned extensions by repulsive sampling through the pinball sampler, of which her recent AISTATS paper reminded me.)
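For reference, the basic spike-and-slab computation (a vanilla point-mass-spike version, not Veronika's particle-EM machinery): with a point mass at zero and a Gaussian slab, the posterior inclusion probability of a coefficient is available in closed form from the two marginal likelihoods of its estimate.

```python
import numpy as np

def inclusion_prob(beta_hat, s2, v, theta):
    """Posterior P(beta != 0) under a point-mass spike at zero and a
    N(0, v) slab with prior inclusion probability theta, given an
    estimate beta_hat ~ N(beta, s2): the slab marginal is N(0, s2 + v)
    and the spike marginal is N(0, s2)."""
    def npdf(x, var):
        return np.exp(-0.5 * x * x / var) / np.sqrt(2 * np.pi * var)
    slab = theta * npdf(beta_hat, s2 + v)
    spike = (1 - theta) * npdf(beta_hat, s2)
    return slab / (slab + spike)

# large estimates are pushed towards inclusion, small ones towards the spike
p_big = inclusion_prob(3.0, s2=0.25, v=4.0, theta=0.5)
p_small = inclusion_prob(0.1, s2=0.25, v=4.0, theta=0.5)
```

The hard part in realistic regressions is of course the combinatorial search over inclusion patterns, which is precisely what the particle MCMC / EM machinery above is designed to tame.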

Later in the evening, I figured out that the poster sessions that make the ISBA/Valencia meetings so unique are alas out of reach for me, as the level of noise and my reduced hearing capacities (!) make any prolonged discussion of any serious notion impossible. No poster sessions for ‘Og’s men!, then, even though I can hang out at the fringe and chat with friends!