Archive for big data

Expectation Propagation as a Way of Life on-line

Posted in pictures, Statistics, University life on March 18, 2020 by xi'an

After a rather extended shelf-life, our paper expectation propagation as a way of life: a framework for Bayesian inference on partitioned data, which was started when Andrew visited Paris back in… 2014 and to which I only marginally contributed, has now appeared in JMLR, which happens to be my very first paper in this journal!

Michael dans le Monde [#2]

Posted in Books, pictures, Statistics, University life on January 5, 2020 by xi'an

A (second) back-page interview of Mike in Le Monde on the limits of academics working with the major high-tech companies, and on fatal attractions that are difficult to resist given the monetary rewards. As with his previous interview, this is quite an interesting read (in French), although it obviously reflects a US perspective rather than a French one (with the same comment applying to the recent interview of Yann LeCun on France Inter).

“…les chercheurs académiques français, qui sont vraiment très peu payés.” [“…French academic researchers, who are really very poorly paid.”]

The first part is a prediction that the GAFAs will not keep hiring (full-time or part-time) academic researchers to pursue their own academic research, as the quest for more immediate profits will eventually win over the image these collaborations produce. But maybe DeepMind is not the best example, as, e.g., Amazon seems to be making immediate gains from such collaborations.

“…le modèle économique [de Amazon, Ali Baba, Uber, &tc] cherche à créer des marchés nouveaux avec à la source, on peut l’espérer, de nouveaux emplois.” [“…the economic model [of Amazon, Ali Baba, Uber, &tc] seeks to create new markets with, one may hope, new jobs at the source.”]

A stronger point of disagreement is with the above quote, namely the claim that Uber or Amazon indeed create jobs, as I am uncertain that all job creation is worthwhile. Indeed, what kind of freedom is there in working after-hours for a reward so far below the minimum wage (in countries where there is a true minimum wage) that the workers [renamed entrepreneurs] fall below the poverty line? Similarly, unless stronger regulations are imposed by states or by unions like the EU, it seems difficult to imagine how society, as an aggregate of individuals, can curb the hegemonic tendencies of the high-tech leviathans…

sampling and imbalanced

Posted in Statistics on June 21, 2019 by xi'an

Deborshee Sen, Matthias Sachs, Jianfeng Lu and David Dunson have recently arXived a sub-sampling paper for (logistic) classification models where some covariates or some responses are imbalanced, with a PDMP, namely the zig-zag sampler, used to preserve the correct invariant distribution (as already mentioned in an earlier post on the zig-zag sampler and in a recent Annals paper by Joris Bierkens, Paul Fearnhead, and Gareth Roberts (Warwick)). The current paper thus improves on the above by using (non-uniform) importance sub-sampling across observations and simpler upper bounds for the Poisson process, a rather practical form of Poisson thinning, and by proposing unbiased estimates of the sub-sample log-posterior as well as stratified sub-sampling.
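
To fix ideas, here is a minimal sketch of the vanilla zig-zag with sub-sampling and constant-bound Poisson thinning on a toy logistic likelihood. This is not the authors' algorithm: the data, the flat prior, the uniform (rather than importance) sub-sampling and the crude bounds are all illustrative assumptions on my part.

```python
import numpy as np

rng = np.random.default_rng(0)

# purely illustrative logistic data (assuming the likelihood-only target is proper)
N, d = 500, 3
X = rng.normal(size=(N, d))
beta_true = np.array([1.0, -2.0, 0.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

def grad_estimate(beta, i, n):
    """Unbiased single-observation estimate of the i-th partial derivative of
    the negative log-likelihood (flat prior, no control variate)."""
    p = 1 / (1 + np.exp(-X[n] @ beta))
    return N * X[n, i] * (p - y[n])

# constant thinning bounds: |N x_{n,i} (p - y_n)| <= N max_n |x_{n,i}|
lam_bar = N * np.abs(X).max(axis=0)

def zigzag_subsampled(T=50.0):
    beta, v, t = np.zeros(d), np.ones(d), 0.0
    events = []
    while t < T:
        taus = rng.exponential(1 / lam_bar)    # candidate times, one per coordinate
        i = int(np.argmin(taus))
        beta, t = beta + v * taus[i], t + taus[i]
        n = rng.integers(N)                    # uniform sub-sample of one observation
        rate = max(0.0, v[i] * grad_estimate(beta, i, n))
        if rng.uniform() < rate / lam_bar[i]:  # thinning: accept the velocity flip
            v[i] = -v[i]
        events.append(beta.copy())
    return np.array(events)

# crude summary from the event skeleton (exact estimates would instead integrate
# along the piecewise linear trajectory between events)
print(zigzag_subsampled().mean(axis=0))
```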

I idly wondered if the zig-zag sampler could itself be improved by not switching the bouncing directions at random, since directions associated with almost certainly null coefficients should be neglected as much as possible, but the intensity functions associated with the directions already incorporate this feature. The catch is that this requires computing the intensities for all directions, which becomes costly when facing many covariates.

Thinking of the logistic regression model itself, it is sort of frustrating that something so close to an exponential family causes so many headaches! Formally, it is an exponential family, but the normalising constant is rather unwieldy, especially when there are many observations and many covariates. The Pólya-Gamma completion is a way around this, but it proves highly costly when the dimension is large…
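
For the record, the identity behind that Pólya-Gamma completion (Polson, Scott and Windle, 2013), with ψ the linear predictor, a the binary response and b=1 in the logistic case, is

\[
\frac{(e^{\psi})^{a}}{(1+e^{\psi})^{b}} \;=\; 2^{-b}\,e^{\kappa\psi}\int_{0}^{\infty} e^{-\omega\psi^{2}/2}\,p(\omega)\,\mathrm{d}\omega,
\qquad \kappa=a-\tfrac{b}{2},\quad \omega\sim\mathrm{PG}(b,0),
\]

so that, conditional on the latent ω's (and on a Gaussian prior), β enjoys a Gaussian full conditional. The catch is that each sweep requires simulating N Pólya-Gamma variates and handling a d×d precision matrix, hence the cost when both N and d are large.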

MASH in Le Monde

Posted in Statistics on January 25, 2019 by xi'an

Big Bayes goes South

Posted in Books, Mountains, pictures, Running, Statistics, Travel, University life on December 5, 2018 by xi'an

At the Big [Data] Bayes conference this week [which I found quite exciting despite a few last-minute cancellations by speakers] there were a lot of clustering talks, including the one by Amy Herring (Duke), using a notion of centering that should soon appear on arXiv, and the one by Peter Müller (UT, Austin) towards handling large datasets, based on a predictive recursion that takes in one value at a time, unsurprisingly similar to the update of Dirichlet process mixtures (and inspired by a 1998 paper by Michael Newton and co-authors). The recursion doubles in size at each observation, requiring the culling of negligible components. Does order matter? There are links with the mixtures of mixtures of Malsiner-Walli et al. (2017). Also talks by Antonio Lijoi and Igor Pruenster (Bocconi Milano) on completely random measures used in creating clusters, by Sylvia Frühwirth-Schnatter (WU Wien) on clustering the Austrian labor market to assess the impact of company closures, by Gregor Kastner (WU Wien) on multivariate factor stochastic volatility models, with a video of a large covariance matrix evolving over time and catching economic crises, and by David Dunson (Duke) on distance clustering, reflecting like myself on the definitely ill-defined nature of the [clustering] object, since spurious clusters appear as the sample size increases. (Which reminded me of a disagreement I had had with David MacKay at an ICMS conference on mixtures twenty years ago.) It also made me realise I had missed the recent JASA paper by Miller and Dunson on that perspective.
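
For context, and without claiming this is the exact variant presented in the talk, the original predictive recursion of Newton and co-authors estimates a mixing density f under a kernel k(y|θ) by updating, after each new observation y_i,

\[
f_i(\theta) \;=\; (1-w_i)\,f_{i-1}(\theta) \;+\; w_i\,\frac{k(y_i\mid\theta)\,f_{i-1}(\theta)}{\int k(y_i\mid\theta')\,f_{i-1}(\theta')\,\mathrm{d}\theta'},
\]

with weights w_i slowly decreasing to zero, hence indeed taking in one value at a time.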

Some further snapshots (with short comments visible by hovering on the pictures) of a very high-quality meeting [says one of the organisers!]. Following suggestions from several participants, it would be great to hold another meeting at CIRM in the near future.