Archive for dimension reduction

missing dimensions

Posted in Statistics on February 14, 2021 by xi'an

data assimilation and reduced modelling for high-D problems [CIRM]

Posted in Books, Kids, Mountains, pictures, Running, Statistics, University life on February 8, 2021 by xi'an

Next summer, from 19 July till 27 August, there will be a six-week program at CIRM on the above theme, bringing together scientists from both the academic and industrial communities. The program includes a one-week summer school followed by five weeks of research sessions on projects proposed by academic and industrial partners.

Confirmed speakers of the summer school (Jul 19-23) are:

  • Albert Cohen (Sorbonne University)
  • Masoumeh Dashti (University of Sussex)
  • Eric Moulines (Ecole Polytechnique)
  • Anthony Nouy (Ecole Centrale de Nantes)
  • Claudia Schillings (Mannheim University)

Junior participants may apply for fellowships to cover part of or their whole stay. Registration and fellowship applications will open soon.

ABC in Clermont-Ferrand

Posted in Mountains, pictures, Statistics, Travel, University life on September 20, 2019 by xi'an

Today I am taking part in a one-day workshop on ABC at the Université Clermont Auvergne, with applications to cosmostatistics, along with Martin Kilbinger [with whom I worked on PMC schemes], Florent Leclerc, and Grégoire Aufort. This should prove a most exciting day! (With not enough time to run up Puy de Dôme in the morning, though.)

ABC with kernelised regression

Posted in Mountains, pictures, Statistics, Travel, University life on February 22, 2017 by xi'an

[sunset from the Banff Centre, Banff, Canada, March 21, 2012]

The exact title of the paper by Jovana Mitrovic, Dino Sejdinovic, and Yee Whye Teh is DR-ABC: Approximate Bayesian Computation with Kernel-Based Distribution Regression. It appeared last year in the proceedings of ICML. The idea is to build ABC summaries by way of reproducing kernel Hilbert spaces (RKHS), regressing such embeddings towards the "optimal" choice of summary statistics by kernel ridge regression, with the possibility to derive summary statistics for quantities of interest rather than for the entire parameter vector. The use of RKHS reminds me of Arthur Gretton's approach to ABC, although I see no mention made of that work in the current paper.
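
For intuition, here is a minimal Python sketch of the kernel ridge regression step, regressing simulated parameters on their pseudo-samples; the crude sample-mean feature map, the bandwidth, and the regularisation below are toy assumptions standing in for the paper's actual RKHS distribution embedding, not the authors' pipeline.

    # Toy kernel ridge regression in the spirit of DR-ABC: regress
    # simulated parameters on (embedded) pseudo-samples. The feature
    # map phi (sample means) is a crude stand-in for an RKHS embedding.
    import numpy as np

    def rbf_kernel(X, Y, bandwidth=1.0):
        # Gram matrix of the Gaussian (RBF) kernel between rows of X and Y
        sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * bandwidth ** 2))

    rng = np.random.default_rng(0)
    n, m = 200, 50                      # number of simulations, sample size
    theta = rng.normal(0, 2, size=n)    # parameters drawn from the prior
    samples = rng.normal(theta[:, None], 1, size=(n, m))  # pseudo-data

    phi = samples.mean(axis=1, keepdims=True)  # stand-in embedding

    lam = 1e-3                          # ridge regularisation
    K = rbf_kernel(phi, phi)
    alpha = np.linalg.solve(K + lam * np.eye(n), theta)

    # prediction at an "observed" sample: a ridge-type kernel estimator
    x_obs = rng.normal(1.5, 1, size=(1, m))
    theta_hat = rbf_kernel(x_obs.mean(1, keepdims=True), phi) @ alpha
    print(theta_hat)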

In the RKHS pseudo-linear formulation, the prediction of a parameter value given a sample attached to this value looks like a ridge estimator in classical linear estimation. (I thus wonder why one would stop at the ridge stage instead of going for the full Bayes treatment!) Things get a bit more involved in the case of parameters (and observations) of interest, as the modelling requires two RKHS, because of the conditioning on the nuisance observations. Or rather three RKHS. Since those involve a maximum mean discrepancy between probability distributions, which defines in turn a sort of intrinsic norm, I also wonder about a Wasserstein version of this approach.
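
For reference, the squared maximum mean discrepancy between two samples under a Gaussian kernel admits the standard unbiased estimate sketched below; this is textbook material on MMD rather than anything specific to the paper, and the bandwidth choice is arbitrary.

    # Unbiased estimate of the squared maximum mean discrepancy (MMD)
    # between two samples, with a Gaussian kernel.
    import numpy as np

    def mmd2_unbiased(X, Y, bandwidth=1.0):
        def k(A, B):
            sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-sq / (2 * bandwidth ** 2))
        n, m = len(X), len(Y)
        Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
        # drop the diagonal terms to get the unbiased U-statistic
        term_x = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
        term_y = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
        return term_x + term_y - 2 * Kxy.mean()

    rng = np.random.default_rng(1)
    X = rng.normal(0.0, 1, size=(100, 1))
    Y = rng.normal(0.5, 1, size=(100, 1))
    print(mmd2_unbiased(X, Y))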

What I find hard to understand in the paper is how a large-dimension, large-size sample can be managed by such methods with no visible loss of information and no explosion of the computing budget. The authors mention Fourier features, which do not ring a bell for me, but I wonder how this operates in a general setting, i.e., outside the iid case. The examples do not seem to go into enough detail for me to understand how this massive dimension reduction operates (and they remain at a moderate level in terms of numbers of parameters). I was hoping Jovana Mitrovic could present her work here at the 17w5025 workshop, but she sadly could not make it to Banff for lack of funding!
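
If the Fourier features in question are random Fourier features à la Rahimi and Recht (2007), which is my assumption here, the trick is to replace the kernel with an explicit low-dimensional random feature map, so that computations scale with the number of features rather than with the sample size; a sketch under that assumption:

    # Random Fourier features: approximate the Gaussian kernel
    # k(x, y) = exp(-||x - y||^2 / 2) by an explicit D-dimensional
    # feature map, so that regression scales with D, not sample size.
    import numpy as np

    rng = np.random.default_rng(2)
    d, D = 10, 500                     # input dimension, number of features

    W = rng.normal(size=(d, D))        # frequencies ~ N(0, I), bandwidth 1
    b = rng.uniform(0, 2 * np.pi, D)   # random phases

    def features(X):
        return np.sqrt(2.0 / D) * np.cos(X @ W + b)

    X = rng.normal(size=(5, d))
    Z = features(X)
    approx = Z @ Z.T                   # approximates the kernel Gram matrix
    exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2)
    print(np.abs(approx - exact).max())  # small approximation error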
