Archive for stochastic gradient descent
far south
Posted in Books, Statistics, Travel, University life with tags Biometrika, Computo, ENS Paris-Saclay, habilitation, HDR, MCMC convergence, overdamped Langevin algorithm, PDMP, Saclay, stochastic gradient descent, stochastic optimisation, unadjusted Langevin algorithm, Université Paris-Saclay on February 23, 2022 by xi'an
séminaire P de S
Posted in Books, pictures, Statistics, University life with tags Biometrika, computer-based proof, extreme value theory, Institut Henri Poincaré, neural network, Paris, probabilistic numerics, séminaire, seminar, stochastic gradient descent on February 18, 2020 by xi'an
As I was in Paris and free for the occasion (!), I attended the Paris Statistics seminar this afternoon, in the Latin Quarter. With a first talk by Kweku Abraham on Bayesian inverse problems, setting a prior on the quantity of interest, γ, rather than on its transform G(γ), which is observed with noise. I am always perturbed by the juggling of different distances, like L² versus Kullback-Leibler, in non-parametric frameworks. Reminding me of probabilistic numerics, at least in its framework, since the crux of the talk was 100% about convergence. And a second talk by Lénaïc Chizat on convex neural networks corresponding to an infinite number of neurons, with surprising properties, including implicit bias. And a third talk by Anne Sabourin on PCA for extremes. Which assumed very little on the model but more on the geometry of the distribution, like extremes being concentrated on a subspace. As I was rather tired from an intense week at Warwick, and after a weekend of reading grant applications and Biometrika submissions (!), my foggy brain kept switching to these proposals, trying to make connections with the talks, not completely inappropriately in two cases out of three. (I am afraid the same may happen tomorrow at our probability seminar on computer-based proofs!)
double descent
Posted in Books, Statistics, University life with tags double descent, France, Gare de Lyon, INRIA, machine learning, neural network, Paris, randomisation, Seine, SMILE seminar, stochastic gradient descent, training versus testing on November 7, 2019 by xi'an
Last Friday, I [and a few hundred others!] went to the SMILE (Statistical Machine Learning in Paris) seminar where Francis Bach was giving a talk. (With a pleasant ride from Dauphine along the Seine river.) Francis was talking about the double descent phenomenon observed in recent papers by Belkin & al. (2018, 2019) and Mei & Montanari (2019). (As the seminar room at INRIA was quite crowded and as I was sitting cross-legged on the floor close to the screen, I took a few slides from below!) The phenomenon is that the usual U curve warning about over-fitting, reproduced in most statistics and machine-learning courses, can under the right circumstances be followed by a second decrease in the testing error as the number of features grows beyond the number of observations.
This is rather puzzling and counter-intuitive, so I briefly checked the 2019 [8 page] article by Belkin & al., who study two examples, including a standard “large p, small n” Gaussian regression, where the authors state that
“However, as p grows beyond n, the test risk again decreases, provided that the model is fit using a suitable inductive bias (e.g., least norm solution). “
One explanation [I found after checking the paper] is that the variates (features) in the regression are selected at random rather than in an optimal sequential order, as double descent is missing with interpolating and deterministic estimators. Hence requiring in principle all candidate variates to be included to achieve minimal averaged error. The infinite spike occurs when the number p of variates is near the number n of observations. (The expectation accounts as well for the randomisation in T, a randomisation that remains an unclear feature in this framework…)
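For the curious reader, here is a minimal simulation sketch of that mechanism, assuming (as described above) that the p variates are picked at random from a larger candidate pool and that the fit is the minimum-norm least-squares solution; the toy model, sample sizes and noise level are made up for illustration and do not reproduce the exact setting of Belkin & al.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_test, p_max = 40, 1000, 120                  # training size, test size, candidate pool (illustrative)
beta = rng.normal(size=p_max) / np.sqrt(p_max)    # true coefficients
X, Xt = rng.normal(size=(n, p_max)), rng.normal(size=(n_test, p_max))
y, yt = X @ beta + rng.normal(size=n), Xt @ beta + rng.normal(size=n_test)

def test_risk(p, n_rep=50):
    """Average test error of the minimum-norm least-squares fit on p randomly chosen variates."""
    errs = []
    for _ in range(n_rep):
        S = rng.choice(p_max, size=p, replace=False)   # random subset of variates, not an optimal order
        hat = np.linalg.pinv(X[:, S]) @ y              # least-norm solution, well defined even for p > n
        errs.append(np.mean((yt - Xt[:, S] @ hat) ** 2))
    return np.mean(errs)

risks = {p: test_risk(p) for p in range(1, p_max + 1, 5)}
# the averaged risk peaks near p ≈ n (the interpolation threshold) and decreases again beyond it
```

Plotting the values in risks against p should reproduce the two descents, with the spike at the interpolation threshold p ≈ n.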
Bayesian webinar: Bayesian conjugate gradient
Posted in Statistics with tags Bayesian Analysis, Greenwich Mean Time, ISBA, stochastic gradient descent, University of Warwick, webinar, youtube on September 25, 2019 by xi'an
Bayesian Analysis is launching its webinar series on discussion papers! Meaning the first 90 registrants will be able to participate interactively via the Zoom Conference platform, while additional registrants will be able to view the webinar on a dedicated YouTube channel. This fantastic initiative is starting with the Bayesian conjugate gradient method of Jon Cockayne (University of Warwick) et al., on October 2 at 4pm Greenwich time. (With available equivalences for other time zones!) I strongly support this initiative and wish it the widest possible success, as it could set a new standard for conferences, having distant participants gathering in a nearby location to present talks and attend other talks from another part of the World, while effectively participating. A dense enough network could even see the non-stop conference emerging!
JSM 2018 [#3]
Posted in Mountains, Statistics, Travel, University life with tags ABC, Approximate Bayesian computation, Bayesian network, Bayesian p-values, British Columbia, Canada, curse of dimensionality, JSM 2018, prior predictive, pseudo-marginal MCMC, spectral analysis, spike-and-slab prior, stochastic gradient descent, Vancouver, variational Bayes methods on August 1, 2018 by xi'an
As I skipped day #2 for climbing, here I am on day #3, attending JSM 2018, with a [fully Canadian!] session on (conditional) copulas (where Bruno Rémillard talked of copulas for mixed data, with unknown atoms, which sounded like an impossible target!), and another on four highlights from Bayesian Analysis (the journal), with Maria Terres defending the (often ill-considered!) spectral approach within Bayesian analysis, modelling spectral densities (Fourier transforms of correlation functions, not probability densities), an advantage compared with MCAR modelling being the automated derivation of dependence graphs. While the spectral ghost did not completely dissipate for me, the use of DIC that she mentioned at the very end seems to call for investigation, as I do not know of well-studied cases of complex dependent data with clearly specified DICs. Then Chris Drovandi was speaking of ABC being used for prior choice, an idea I vaguely remember seeing quite a while ago as a referee (of another paper!), in a BA paper that I missed (and obviously did not referee). Using the same reference table works (for simple ABC) with different datasets but also with different priors. I did not get at first the notion that the reference table also produces an evaluation of the marginal distribution, but indeed the entire simulation from prior x generative model gives a Monte Carlo representation of the marginal, hence of the evidence at the observed data. Borrowing from Evans’ fringe Bayesian approach to model choice by prior predictive checks for prior-model conflict. I remain sceptical, or at least agnostic, on the notion of using data to compare priors. And here on using ABC in tractable settings.
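To make that reference-table remark concrete, here is a minimal sketch of the idea that simulating pairs (θ, z) from prior x generative model serves both as an ABC reference table and as a crude Monte Carlo estimate of the evidence at the observed data; the Gaussian toy model, prior, table size N and tolerance eps are all made up for illustration and are not taken from Drovandi's paper.

```python
import numpy as np

rng = np.random.default_rng(1)
y_obs = 1.3               # hypothetical scalar observation
N, eps = 100_000, 0.05    # reference table size and ABC tolerance (illustrative choices)

# reference table: draw parameters from the prior, then pseudo-data from the generative model
theta = rng.normal(0.0, 2.0, size=N)     # prior N(0, 2^2), for illustration only
z = rng.normal(theta, 1.0)               # model: z | theta ~ N(theta, 1)

close = np.abs(z - y_obs) < eps
# 1. crude Monte Carlo estimate of the evidence m(y_obs), via a uniform kernel of half-width eps
evidence_hat = close.mean() / (2 * eps)
# 2. the same table gives the ABC posterior sample: keep the thetas whose pseudo-data land near y_obs
abc_posterior = theta[close]
print(evidence_hat, abc_posterior.mean())
```

Rerunning the last two steps with a different prior (reweighting the same table) or a different dataset only changes the selection step, which is the economy of reusing one reference table.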
The afternoon session was [a mostly Australian] Advanced Bayesian computational methods, with Robert Kohn on variational Bayes, with an interesting comparison of (exact) MCMC and (approximate) variational Bayes results for some species intensity, and the remark that forecasting may be much more tolerant to the approximation than estimation. Making me wonder at the possibility of assessing VB on the marginals manageable by MCMC. Unless I miss a complexity such that the decomposition is impossible. And Antonietta Mira on estimating time-evolving networks by ABC (which Anto first showed me in Orly airport, waiting for her plane!). With a possibility of a zero distance. Next talk by Nadja Klein on implicit copulas, linked with shrinkage properties I was unaware of, including the case of spike & slab copulas. Michael Smith also spoke of copulas with discrete margins, mentioning a version with continuous latent variables (as I thought could be done during the first session of the day), then moving to variational Bayes, which sounds quite popular at JSM 2018. And David Gunawan made a presentation of a paper mixing pseudo-marginal Metropolis with particle Gibbs sampling, written with Chris Carter and Robert Kohn, making me wonder at their feature of using the white noise as an auxiliary variable in the estimation of the likelihood, which is quite clever but seems to go against the validation of the pseudo-marginal principle. (Warning: I have been known to be wrong!)
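As a reminder of the pseudo-marginal principle invoked in that last remark, here is a minimal sketch of a pseudo-marginal Metropolis step, where an unbiased, non-negative estimate of the likelihood replaces the exact value and the current estimate is recycled until the next acceptance; the latent Gaussian toy model, the flat prior on θ, the number M of latent draws and the step size are all invented for illustration and have nothing to do with the Gunawan, Carter and Kohn paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def lik_hat(theta, y, M=50):
    """Unbiased importance-sampling estimate of p(y | theta) in a toy latent model:
    y | x ~ N(x, 1), x | theta ~ N(theta, 1), so the exact likelihood is N(y; theta, 2)."""
    x = rng.normal(theta, 1.0, size=M)                       # latent draws
    return np.mean(np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2 * np.pi))

def pseudo_marginal_mh(y, n_iter=5000, step=0.5):
    theta, L = 0.0, lik_hat(0.0, y)                          # current state and its likelihood estimate
    chain = np.empty(n_iter)
    for t in range(n_iter):
        prop = theta + step * rng.normal()                   # random-walk proposal
        L_prop = lik_hat(prop, y)                            # fresh estimate at the proposed value
        # usual MH ratio (flat prior on theta for simplicity), with the estimates standing in
        # for the exact likelihoods; recycling the current estimate L is what keeps the scheme exact
        if rng.uniform() < L_prop / L:
            theta, L = prop, L_prop
        chain[t] = theta
    return chain

chain = pseudo_marginal_mh(y=1.0)
```

The exactness only requires the estimator to be unbiased and non-negative, which is why introducing extra structure (like the white noise above) into the estimation step calls for a check that unbiasedness is preserved.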