Archive for machine learning

ISBA 2021 grand finale

Posted in Kids, Mountains, pictures, Running, Statistics, Travel, University life, Wines on July 3, 2021 by xi'an

Last day of ISBA (and ISB@CIRM), or maybe half-day, since there are only five groups of sessions we can attend in Mediterranean time.

My first session was one on priors for mixtures, with 162⁺ attendees at 5:15am! (well, at 11:15 Wien or Marseille time), Gertraud Malsiner-Walli distinguishing between priors on the number of components [in the model] vs the number of clusters [in the data], with a minor question of mine as to whether a “prior” is appropriate for a data-dependent quantity. And Deborah Kunkel presenting [very early in the US!] anchor models for fighting label switching, which reminded me of the talk she gave at the mixture session of JSM 2018 in Vancouver. (With extensions to consistency and mixtures of regression.) And Clara Grazian debating objective priors for the number of components in a mixture [in the Sydney evening], using loss functions to build these. Overall it seems there were many talks on mixtures and clustering this year.
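To make the components-vs-clusters distinction concrete (in my own notation, not necessarily the speaker's), for a mixture with K components and latent allocations z_1,\ldots,z_n, the number of clusters in the data is the number of occupied components,

K_+ = \#\{k\le K : \exists\, i,\ z_i=k\} \le K

so that a prior on K only induces, jointly with the prior on the weights, a sample-size dependent distribution on K_+, the quantity the clustering question is actually about.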

After the lunch break, when several ISB@CIRM participants were about to leave, we ran the Objective Bayes contributed session, which actually included several Stein-like minimaxity talks. Plus one by Théo Moins from the patio of CIRM, with cicadas in the background. Impeccably chaired by my friend Gonzalo, who had a question at the ready for each and every speaker! And then the Savage Awards II session. Which ceremony is postponed till Montréal next year. And which nominees are uniformly impressive!!! The winner will only be announced in September, via the ISBA Bulletin. Missing the ISBA general assembly for a dinner in Cassis. And being back for the Bayesian optimisation session.

I would have expected more talks at the boundary of BS & ML (as well as COVID and epidemic decision making), the dearth of which should be a cause for concern if researchers at this boundary do not prioritise ISBA meetings over more generic meetings like NeurIPS… (An exception was George Papamakarios’ talk on variational autoencoders in the Savage Awards II session.)

Many many thanks to the group of students at UConn involved in setting up most of the Whova site and running the support throughout the conference. It indeed ran very smoothly and provided a worthwhile substitute for the 100% on-site version. Actually, I both hope for the COVID pandemic (or at least the restrictions attached to it) to abate and for the hybrid structure of meetings to stay, along with the multiplication of mirror workshops. Being together is essential to the DNA of conferences, but travelling to a single location is not so desirable, for many reasons. Looking forward to ISBA 2022, a year from now, either in Montréal, Québec, or in one of the mirror sites!

statistical analysis of GANs

Posted in Books, Statistics on May 24, 2021 by xi'an

My friend Gérard Biau and his coauthors published a paper in the Annals of Statistics last year on the theoretical [statistical] analysis of GANs, which I had missed and only recently read, with definite interest in the issues. (With no image example!)

If the discriminator is unrestricted, the unique optimal solution is the Bayes posterior probability

\dfrac{p^\star(x)}{p^\star(x)+p_\theta(x)}

when the model density is everywhere positive. And the optimal parameter θ corresponds to the closest model in terms of Kullback-Leibler divergence. The pseudo-true value of the parameter. This is however the ideal situation, while in practice D is restricted to a parametric family. In this case, if the family is wide enough to approximate the ideal discriminator in the sup norm, with an error of order ε, and if the parameter space Θ is compact, the optimal parameter found under the restricted family approximates the pseudo-true value in the sense of the GAN loss, at the order ε². With a stronger assumption on the ability of the family to approximate any discriminator, the same property holds for the empirical version (and in expectation). (As an aside, the figure illustrating this property confusingly uses a histogramesque rectangle to indicate the expectation of the discriminator loss!) And both parameter estimators (of θ and α) converge to the optimal ones as the sample size grows. An interesting foray by statisticians into a method whose statistical properties are rarely if ever investigated. Missing a comparison with alternative approaches, like MLE, though.
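As a quick sanity check on the unrestricted case (my own back-of-the-envelope derivation, not lifted from the paper), the GAN criterion writes as

\mathbb{E}_{p^\star}[\log D(X)] + \mathbb{E}_{p_\theta}[\log\{1-D(X)\}] = \int \big\{p^\star(x)\log D(x) + p_\theta(x)\log[1-D(x)]\big\}\,\text{d}x

and, since d \mapsto a\log d + b\log(1-d) is maximised at d=a/(a+b) for positive a and b, maximising pointwise in D(x) returns the above Bayes posterior probability p^\star(x)/\{p^\star(x)+p_\theta(x)\}.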

approximate Bayesian inference [survey]

Posted in Statistics on May 3, 2021 by xi'an

In connection with the special issue of Entropy I mentioned a while ago, Pierre Alquier (formerly of CREST) has written an introduction to the topic of approximate Bayesian inference that is worth advertising (and freely available as well). Its reference list is particularly relevant. (The deadline for submissions is 21 June.)

One World ABC seminar [season 2]

Posted in Books, Statistics, University life on March 23, 2021 by xi'an

The One World ABC seminar will resume its talks on ABC methods with a talk on Thursday, 25 March, 12:30 CET, by Mijung Park, from the Max Planck Institute for Intelligent Systems, on the exciting topic of producing differential privacy by ABC. (Talks will take place on a monthly basis.)

dependable AI/ML [France is AI]

Posted in Statistics on January 21, 2021 by xi'an