Archive for Flatiron Institute

ellis unconference [not in Hawai’i]

Posted in pictures, Running, Travel, University life on July 26, 2023 by xi'an

As ICML 2023 is happening this week in Hawai’i, many did not have the opportunity to get there, for whatever reason, and hence the ellis (European Laboratory for Learning and Intelligent Systems) board launched [fairly late!], with the help of Hi! Paris, an unconference (i.e., a mirror workshop) taking place at HEC, Jouy-en-Josas, SW of Paris, for AI researchers presenting works (theirs or others’) accepted at ICML 2023. Or not. There was no direct broadcasting of talks as we had in CIRM for ISBA 2021, but some presentations were based on preregistered talks. Over 50 people showed up in Jouy.

As it happened, I had quite an exciting bike ride to the HEC campus from home, under a steady rain, crossing a (modest) forest (de Verrières) I had never visited before despite it being a few km away, getting a wee bit lost, being stopped at a train crossing between Bièvre and Jouy, and ending up on campus just in time for the first talk (as I had not accounted for the huge altitude differential). Among the curiosities met on the way: “giant” sequoias, a Tonkin pond, and Chateaubriand’s house.

As always, I am rather impressed by the efficiency with which AI-ML conferences are run, with papers+slides+reviews online, plus extra material as in this example. Lots of papers on diffusion models this year, apparently. (In line with the trend observed at the Flatiron workshop last Fall.) Below are some incoherent tidbits from the presentations I attended:

  • exponential convergence of the Sinkhorn algorithm by Alain Durmus and co-authors, with the surprise occurrence of a left Haar measure
  • a paper (by Jerome Baum, Heishiro Kanagawa, and my friend Arthur Gretton) on Stein discrepancy, with a Zanella–Stein operator relating to Metropolis–Hastings/Barker since it has expectation zero under stationarity, and an interesting approach to variable-length random variables, not quite RJMCMC, but nearby.
  • the occurrence of a criticism of the EU GDPR that did not feel appropriate for synthetic data used in privacy protection.
  • the alternative sliced Wasserstein distance, making me wonder whether we could optimally go from measure μ to measure ζ using random directions, or how much is lost this way.

\mathbb E[y\mid X=x] = \mathbb E_{f_Y}\!\left[y\,\frac{f_{XY}(x,y)}{f_X(x)\,f_Y(y)}\right] = \frac{1}{f_X(x)}\,\mathbb E_{f_Y}\!\left[y\,\frac{f_{XY}(x,y)}{f_Y(y)}\right]

as (a) the densities are replaced with kernel estimates, (b) the outer density f_X(x) may be very small, and (c) no variance assessment is provided.
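To make the identity concrete, here is a minimal sketch (mine, not from the talk) of the plug-in estimator obtained by replacing the densities with Gaussian kernel estimates; the division by a possibly tiny estimate of f_X(x) is precisely where point (b) bites.

import numpy as np

def kde(points, query, h):
    # one-dimensional Gaussian kernel density estimate at `query`
    u = (query - points[:, None]) / h
    return np.exp(-0.5 * u**2).mean(axis=0) / (h * np.sqrt(2 * np.pi))

def cond_mean_plugin(xs, ys, x0, h=0.3):
    # plug-in estimate of E[Y | X = x0] based on the identity above:
    # average y * fXY(x0, y) / (fX(x0) fY(y)) over the observed y's
    fX = kde(xs, np.array([x0]), h)[0]          # may be tiny far from the data
    fY = kde(ys, ys, h)
    # bivariate kernel estimate of fXY(x0, y_i), product Gaussian kernel
    u = (x0 - xs[:, None]) / h
    v = (ys - ys[:, None]) / h
    fXY = np.exp(-0.5 * (u**2 + v**2)).mean(axis=0) / (h**2 * 2 * np.pi)
    return np.mean(ys * fXY / (fX * fY))

rng = np.random.default_rng(0)
xs = rng.normal(size=2000)
ys = xs + 0.5 * rng.normal(size=2000)           # true E[Y|X=x] = x
print(cond_mean_plugin(xs, ys, 1.0))            # roughly 1, degrades for large |x0|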

  • Markov score climbing and transport score climbing using a normalising flow, for variational approximation, presented by Christian Naesseth, with a warping transform that sounded like inverting the flow (?)
  • Yazid Janati not presenting his ICML paper State and parameter learning with PARIS particle Gibbs, written with Gabriel Cardoso, Sylvain Le Corff, Eric Moulines, and Jimmy Olsson, but another work with a diffusion-based model to be learned by SMC and a clever call to Tweedie’s formula, recalled below. (Maurice Kenneth Tweedie, not Richard Tweedie!) Which I just realised I have used many times when working on Bayesian shrinkage estimators.
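For the record, and as a reminder to myself, Tweedie’s formula in its simplest Gaussian form (the version I presume is being called upon here) reads

x\mid\theta \sim \mathcal N(\theta,\sigma^2)\quad\Longrightarrow\quad \mathbb E[\theta\mid x] = x + \sigma^2\,\frac{\mathrm d}{\mathrm dx}\log m(x)

where m denotes the marginal density of x, which is exactly the shrinkage representation behind empirical Bayes estimators.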

séminaire parisien de statistique [09/01/23]

Posted in Books, pictures, Statistics, University life on January 22, 2023 by xi'an

I had missed the séminaire parisien de statistique for most of the Fall semester, hence was determined to attend the first session of the year 2023, all the more because the talks were close to my interests. To wit, Chiara Amorino spoke about particle systems for McKean-Vlasov SDEs, when these are parameterised by several parameters and observed repeatedly in discretised versions, thereby establishing the consistency of a contrast estimator of these parameters. I was initially confused by the mention of interacting particles, since the work is not at all related with simulation. Just wondering whether this contrast could prove useful for a likelihood-free approach in building a Gibbs distribution?
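As an aside, and purely for my own benefit, here is a minimal sketch of the interacting particle approximation of a (one-dimensional) McKean-Vlasov SDE with a linear mean-field interaction, simulated by Euler-Maruyama; the drift parameters θ₁ and θ₂ stand for the kind of quantities a contrast estimator would target (my toy choice, not the model in the talk).

import numpy as np

def mckean_vlasov_particles(theta1, theta2, sigma, n=500, T=1.0, steps=200, seed=1):
    # Euler-Maruyama simulation of n interacting particles approximating
    # dX_t = ( -theta1 * X_t + theta2 * (E[X_t] - X_t) ) dt + sigma dW_t,
    # with the mean-field expectation replaced by the empirical particle mean
    rng = np.random.default_rng(seed)
    dt = T / steps
    x = rng.normal(size=n)                     # initial particle cloud
    path = [x.copy()]
    for _ in range(steps):
        mean_field = x.mean()                  # empirical surrogate for E[X_t]
        drift = -theta1 * x + theta2 * (mean_field - x)
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
        path.append(x.copy())
    return np.array(path)                      # (steps+1, n) discretised trajectories

paths = mckean_vlasov_particles(theta1=1.0, theta2=0.5, sigma=0.3)
print(paths.shape, paths[-1].mean(), paths[-1].std())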

Valentin de Bortoli then spoke on diffusion Schrödinger bridges for generative models, which allowed me to better my understanding of this idea presented by Arnaud at the Flatiron workshop last November. The presentation here was quite different, using a forward versus backward explanation via a sequence of transforms that end up approximately Gaussian, once more reminiscent of sequential Monte Carlo. The transforms are themselves approximate Gaussian versions relying on a discretised Ornstein-Uhlenbeck process, with a missing score term since said score involves a marginal density at each step of the sequence. It can be represented as an expectation conditional on the (observed) variate at time zero (with a connection with Hyvärinen’s NCE / score matching!). Practical implementation is done via neural networks.
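Presumably (my reconstruction rather than the talk’s exact formulation), the representation alluded to is the denoising score-matching identity, where the intractable score of the marginal is expressed through the tractable Ornstein-Uhlenbeck transition kernel given the time-zero variate,

\nabla_{x_t}\log p_t(x_t) \;=\; \mathbb E_{x_0\mid x_t}\!\left[\nabla_{x_t}\log p_{t\mid 0}(x_t\mid x_0)\right], \qquad x_t\mid x_0 \sim \mathcal N\!\left(e^{-t}x_0,\,(1-e^{-2t})\,\mathrm I\right)

so that the neural network can be trained by regressing the conditional score, which only requires pairs (x_0, x_t).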

Last but not least, my friend Randal talked about his Kick-Kac formula, which connects with the one we considered in our 2004 paper with Jim Hobert. While I had heard earlier versions, this talk was mostly on probability aspects and highly enjoyable as he included some short proofs. The formula expresses the stationary probability measure π of the original Markov chain in terms of explorations between two visits to an accessible set C, more general than a small set. With at first an annoying remaining term due to the set not being Harris recurrent, but which eventually cancels out. Memoryless transportation can be implemented because C is free for the picking, for instance the set where the target is bounded by a manageable density, allowing for an accept-reject step. The resulting chain is non-reversible. However, due to the difficulty of simulating from the target restricted to C, a second and parallel Markov chain is instead created. Performance, unsurprisingly, depends on the choice of C, but it can be adapted to the target on the go.
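As a reminder, in its classical form for an accessible atom α (rather than the more general set C of the Kick-Kac version), Kac’s representation of the stationary distribution reads

\pi(A) = \frac{1}{\mathbb E_\alpha[\tau_\alpha]}\;\mathbb E_\alpha\!\left[\sum_{n=1}^{\tau_\alpha} \mathbb I_A(X_n)\right],\qquad \tau_\alpha=\inf\{n\ge 1:\ X_n\in\alpha\}

that is, π weights states by the time spent in them during an excursion away from α, which is the representation the Kick-Kac construction extends to a general accessible set C.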

diffusions, sampling, and transport

Posted in Books, pictures, Statistics, Travel, University life on November 21, 2022 by xi'an

The third and final day of the workshop was shortened for me as I had to catch an early flight back to Paris (and as I was overly conservative in my estimate of the time needed to return to JFK, catching a train with no delay at Penn Station and thus finding myself with two hours free before boarding, hence reviewing remaining Biometrika submissions at the airport while waiting). As a result I missed the afternoon talks.

The morning was mostly about using scores for simulation (a topic of which I was mostly unaware), with Yang Song giving the introductory lecture on creating better generative models via the score function, with a massive production of his own on the topic (but too many image simulations of dogs, cats, and celebrities!). Estimating the score directly is feasible via Fisher divergence or score matching à la Hyvärinen (with a return of Stein’s unbiased estimator of the risk!). One can then rely on estimated scores to simulate / generate via Langevin dynamics or other MCMC methods that do not require density evaluations. Due to poor performance in low-density / low-learning regions, a fix is randomisation / tempering, but the resolution (as exposed) sounded clumsy. (And made me wonder at using some more advanced form of deconvolution since the randomisation pattern is controlled.) The talk showed some impressive text-to-image simulations used by an animation studio!
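For concreteness, here is a minimal sketch (mine, not from the talk) of unadjusted Langevin sampling driven by a score function, the point being that only ∇ log π is needed, never π itself; the analytical score of a two-component Gaussian mixture stands in for a learned score network.

import numpy as np

def mixture_score(x, means=(-3.0, 3.0), sd=1.0):
    # analytical score of an equal-weight two-component Gaussian mixture,
    # standing in for a neural score estimate s_theta(x) ~ grad log pi(x)
    comps = np.array([np.exp(-0.5 * ((x - m) / sd) ** 2) for m in means])
    weights = comps / comps.sum(axis=0)          # posterior component weights
    grads = np.array([-(x - m) / sd**2 for m in means])
    return (weights * grads).sum(axis=0)

def langevin(score, n_iter=5000, step=0.05, n_chains=100, seed=0):
    # unadjusted Langevin dynamics: only the score is required, no density evaluation
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_chains)
    for _ in range(n_iter):
        x = x + step * score(x) + np.sqrt(2 * step) * rng.normal(size=n_chains)
    return x

samples = langevin(mixture_score)
print(samples.mean(), (samples > 0).mean())      # bimodal output, roughly half per mode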


And then my friend Arnaud Doucet continued on the same theme, motivated by the estimation of normalising constants through annealed importance sampling [Yuling’s meta-perspective comes back to mind in that the geometric mixture is not the only choice, but with which objective]. In AIS, as in a series of Arnaud’s works, like the 2006 SMC Read Paper with Pierre Del Moral and Ajay Jasra, the importance (!) of some auxiliary backward kernels goes beyond theoretical arguments, with the ideal sequence being provided by a Langevin diffusion. Hence involving a score, learned as in the previous talk. Arnaud reformulated this issue as creating a transportation map and its reverse, leading to their recent Schrödinger bridge generative model. Which [imho] both brings a unifying perspective to his work and an efficient way to bridge prior to posterior in AIS. A most profitable morn for me!
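As a reminder of the baseline being improved upon, here is a minimal annealed importance sampling sketch (a vanilla version with random-walk Metropolis transitions and the default geometric path, i.e., precisely the choices questioned above), estimating a normalising constant between a Gaussian prior and an unnormalised target:

import numpy as np

def ais_log_Z(log_prior, sample_prior, log_target, n_particles=1000,
              n_steps=50, mh_scale=0.5, seed=0):
    # annealed importance sampling along the geometric path
    # log pi_t = (1 - beta_t) log_prior + beta_t log_target, beta_t from 0 to 1
    rng = np.random.default_rng(seed)
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x = sample_prior(rng, n_particles)
    log_w = np.zeros(n_particles)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # importance weight increment when moving from pi_{b_prev} to pi_b
        log_w += (b - b_prev) * (log_target(x) - log_prior(x))
        # one random-walk Metropolis step leaving pi_b invariant
        prop = x + mh_scale * rng.normal(size=n_particles)
        log_acc = ((1 - b) * (log_prior(prop) - log_prior(x))
                   + b * (log_target(prop) - log_target(x)))
        accept = np.log(rng.uniform(size=n_particles)) < log_acc
        x = np.where(accept, prop, x)
    # log of the normalising-constant ratio Z_target / Z_prior
    return np.logaddexp.reduce(log_w) - np.log(n_particles)

# toy check: unnormalised N(2, 0.5^2) target against a N(0,1) prior
log_prior = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
sample_prior = lambda rng, n: rng.normal(size=n)
log_target = lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2   # unnormalised
print(ais_log_Z(log_prior, sample_prior, log_target))   # ~ log(0.5 * sqrt(2 pi)) ~ 0.226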

Overall, this was an exhilarating workshop, full of discoveries for me and providing me with the opportunity to meet and exchange with mostly people I had not met before. Thanks to Bob Carpenter and Michael Albergo for organising and running the workshop!

transport, diffusions, and sampling

Posted in pictures, Statistics, Travel, University life on November 19, 2022 by xi'an

At the Sampling, Transport, and Diffusions workshop at the Flatiron Institute, on Day #2, Marilou Gabrié (École Polytechnique) gave the second introductory lecture on merging sampling and normalising flows approximating the target distribution, driven by a divergence criterion like KL that only requires the shape of the target density. I first wondered about ergodicity guarantees in simultaneous MCMC and map training, due to the adaptation of the flow, but the update of the map only depends on the current particle cloud in (8). From an MCMC perspective, it sounds somewhat paradoxical to see the independence sampler making such an unexpected come-back when considering that no insider information is available about the (complex) posterior to drive the [what-you-get-is-what-you-see] construction of the transport map. However, the proposed approach superposes local (random-walk-like) and global (transport) proposals in Algorithm 1.
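To fix ideas, here is a minimal sketch of the independence Metropolis-Hastings step with a (frozen) normalising-flow proposal, which is the global move in question; flow_sample and flow_logpdf stand for whatever trained flow is available and are my own placeholders, not the paper’s interface.

import numpy as np

def flow_independence_mh(x, log_target, flow_sample, flow_logpdf, rng):
    # one independence MH step: propose from the flow, accept with the
    # usual ratio correcting for the mismatch between flow and target
    prop = flow_sample(rng)
    log_acc = (log_target(prop) - log_target(x)
               + flow_logpdf(x) - flow_logpdf(prop))
    if np.log(rng.uniform()) < log_acc:
        return prop, True
    return x, False

# toy illustration with a Gaussian "flow" proposal and a Gaussian target
rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * ((x - 1.0) / 0.7) ** 2
flow_sample = lambda rng: rng.normal(loc=0.8, scale=1.0)
flow_logpdf = lambda x: -0.5 * (x - 0.8) ** 2
x, n_acc, chain = 0.0, 0, []
for _ in range(5000):
    x, acc = flow_independence_mh(x, log_target, flow_sample, flow_logpdf, rng)
    n_acc += acc
    chain.append(x)
print(n_acc / 5000, np.mean(chain))   # high acceptance when the flow matches the target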

Qiang Liu followed on learning transport maps, with the interesting notion of causalizing a graph by removing intersections (which are impossible for an ODE, as discussed in Eric Vanden-Eijnden’s talk yesterday) through coupling. Which underlies his notion of rectified flows. Possibly connecting with the next lightning talk by Jonathan Weare on spurious modes created by a variational Monte Carlo sampler and the use of stochastic gradient, corrected by (case-dependent?) regularisation.

Then came a whole series of MCMC talks!

Sam Livingstone spoke on Barker’s proposal (an incoming Biometrika paper!) as part of a general class of transforms g of the MH ratio, using jump processes based on a nasty normalising constant related to g (tractable for the original Barker algorithm). I then realised I had missed his StatSci paper on how to speak to statistical physics researchers!
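For the record, Barker’s rule replaces the Metropolis-Hastings acceptance probability min(1, r) with

\alpha_{\mathrm B}(x,y) = \frac{r(x,y)}{1+r(x,y)},\qquad r(x,y)=\frac{\pi(y)\,q(x\mid y)}{\pi(x)\,q(y\mid x)}

i.e., g(r) = r/(1+r), one member of the general class of balancing functions g satisfying g(t) = t g(1/t) behind the transforms of the MH ratio discussed in the talk.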

Charles Margossian spoke about using a massive number of short parallel runs (the many-short-chains regime) from a recent paper written with Aki, Andrew, and Lionel Riou-Durand (Warwick), among others. Which brings us back to the challenge of producing convergence diagnostics, and precisely the Gelman-Rubin R̂ statistic or its recent nR̂ avatar (with its linear limitations and dependence on parameterisation, as opposed to fuller distributional criteria). The core of the approach is in using blocks of GPUs to improve and speed up the estimation of the between-chain variance. (D for R².) I still wonder at the waste of simulations / computing power resulting from stopping the runs almost immediately after warm-up is over, since reaching the stationary regime or an approximation thereof should be exploited more efficiently. (Starting from a minimal discrepancy sample would also improve efficiency.)
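As a reference point, here is a minimal computation of the classical Gelman-Rubin R̂ from many short chains (the plain version, not the nested nR̂ of the paper), just to recall that the between-chain variance is the quantity whose estimation benefits from running so many chains in parallel:

import numpy as np

def rhat(chains):
    # chains: array of shape (n_chains, n_draws), post warm-up draws of one scalar
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    between = n * chain_means.var(ddof=1)          # between-chain variance B
    within = chains.var(axis=1, ddof=1).mean()     # within-chain variance W
    var_plus = (n - 1) / n * within + between / n  # pooled variance estimate
    return np.sqrt(var_plus / within)

# toy example: 1000 chains of only 20 post-warm-up draws each
rng = np.random.default_rng(0)
well_mixed = rng.normal(size=(1000, 20))
off_target = rng.normal(size=(1000, 20)) + rng.normal(scale=0.5, size=(1000, 1))
print(rhat(well_mixed), rhat(off_target))          # ~1.0 versus clearly above 1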

Lu Zhang also talked about the issue of cutting down warmup, presenting a paper co-authored with Bob, Andrew, and Aki, recommending Laplace / variational approximations for reaching high-posterior-density regions faster, using an algorithm called Pathfinder that relies on ELBO checks to counter the poor performance of Laplace approximations. In the spirit of the workshop, it could be profitable to further transform / push forward the outcome by a transport map.

Yuling Yao (of stacking and Pareto smoothing fame!) gave an original and challenging (in a positive sense) talk on the many ways of bridging densities [linked with the remark he shared with me the day before] and their statistical significance. Questioning our usual reliance on arithmetic or geometric mixtures. Ignoring computational issues, selecting a bridging pattern does not sound different from choosing a parameterised family of embedding distributions. This new typology of models can then be endowed with properties that are more or less appealing. (Occurrences of the Hyvärinen score and our mixtestin perspective in the talk!)
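To spell out the two default bridges being questioned, for densities p₀ and p₁ and t ∈ [0,1],

q_t(x) \propto (1-t)\,p_0(x) + t\,p_1(x) \quad\text{(arithmetic)}\qquad\text{versus}\qquad q_t(x) \propto p_0(x)^{1-t}\,p_1(x)^{t} \quad\text{(geometric)}

with many other parameterised families of paths available in between (or outside), which is precisely the choice the talk turned into a statistical question.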

Miranda Holmes-Cerfon talked about MCMC on stratifications (illustrated by this beautiful picture of nanoparticle random walks). Which means sampling under varying constraints and dimensions, with associated densities under the respective Hausdorff measures. This sounds like a perfect setting for reversible jump and in a sense it is, as mentioned in the talk. Except that the moves between manifolds are driven by the proximity to said manifold, helping with a higher acceptance rate, and making the proposals easier to construct since projections (or their reverses) have a physical meaning. (But I could not tell from the talk why the approach was seemingly escaping the symmetry constraint set by Peter Green’s RJMCMC on the reciprocal moves between two given manifolds.)

sampling, transport, and diffusions

Posted in pictures, Running, Statistics, Travel, University life on November 18, 2022 by xi'an


This week, I am attending a very cool workshop at the Flatiron Institute (not in the Flatiron Building, but close enough!) on Sampling, Transport, and Diffusions, organised by Bob Carpenter and Michael Albergo. It is quite exciting as I do not know most participants or their work! The Flatiron Institute is a private institute focussed on fundamental science, funded by the Simons Foundation (with working conditions universities cannot compete with!).

Eric Vanden-Eijnden gave an introductory lecture on using optimal transport notions to improve sampling, with a PDE/ODE approach of continuously turning a base distribution into a target (formalised as the distribution at time one). This amounts to solving for a velocity field in a KL optimisation objective whose target value is zero, the velocity being parameterised by a deep neural network. Using a score function in a reverse SDE inspired by Hyvärinen (2005), with a surprising occurrence of Stein’s unbiased estimator, there for the same reason of getting rid of an unknown element. In a lot of environments, simulating from the target is the goal and this can be achieved by MCMC sampling or by normalising flows, learning the transform / pushforward map.
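The score enters through the (by now classical) time-reversal of a diffusion: if the forward dynamics are dX_t = f(X_t,t)dt + g(t)dW_t with marginals p_t, then running

\mathrm dX_t = \left[f(X_t,t) - g(t)^2\,\nabla_x \log p_t(X_t)\right]\mathrm dt + g(t)\,\mathrm d\bar W_t

backwards in time reproduces the same marginals, so that an estimate of the score ∇ log p_t is all one needs to turn noise back into samples (my summary of the standard reverse-SDE argument, not the lecture’s exact formulation).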

At the break, Yuling Yao made a very smart remark that testing between two models could also be seen as an optimal transport problem, trying to figure out an optimal transform from one model to the next, rather than the bland mixture model we used in our mixtestin paper. At this point I have no idea about the practical difficulty of using / inferring the parameters of this continuum, but one could start from normalising flows. Because of time continuity, one would need some driving principle.

Esteban Tabak gave another interesting talk on simulating from a conditional distribution, which sounds like a non-problem when the conditional density is known but a challenge when only pairs are observed. The problem is seen as a transport problem to a barycentre obtained as a distribution independent of the conditioning z, and then inverting the map. Constructing maps through flows. Very cool, even possibly providing an answer to causality questions.

Many of the transport talks involved normalizing flows. One by [Simons Fellow] Christopher Jarzynski about adding to the Hamiltonian (in HMC) an artificial flow field (Vaikuntanathan and Jarzynski, 2009) to make up for the Hamiltonian moving too fast for the simulation to keep track. Connected with Eric Vanden-Eijnden’s talk in the end.

An interesting extension of delayed rejection for HMC by Chirag Modi, with a manageable correction à la Antonietta Mira. Jonathan Niles-Weed provided a nonparametric perspective on optimal transport following Hütter & Rigollet (2021, AoS). With forays into the Sinkhorn algorithm, mentioning Aude Genevay’s (a Dauphine graduate) regularisation.

Michael Lindsey gave a great presentation on the estimation of the trace of a matrix by the Hutchinson estimator for symmetric positive definite matrices, using only matrix multiplications. The solution surprisingly relies on a Gibbs sampling scheme called thermal sampling.
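For reference, here is the plain Hutchinson estimator (the baseline being improved upon, not Lindsey’s method itself): with random probe vectors z of mean zero and identity covariance, E[zᵀAz] = tr(A), so averaging zᵀAz over a few probes estimates the trace from matrix-vector products alone.

import numpy as np

def hutchinson_trace(matvec, dim, n_probes=100, seed=0):
    # estimate tr(A) from matrix-vector products only, using Rademacher probes:
    # E[z^T A z] = tr(A) whenever E[z z^T] = I
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=dim)
        total += z @ matvec(z)
    return total / n_probes

# toy check on an explicit symmetric positive definite matrix
rng = np.random.default_rng(1)
B = rng.normal(size=(50, 50))
A = B @ B.T + np.eye(50)
print(hutchinson_trace(lambda v: A @ v, 50), np.trace(A))   # the two should be close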

And while it did not involve optimal transport, I gave a short (lightning) talk on our recent adaptive restore paper, although in retrospect a presentation of Wasserstein ABC might have been better suited to the audience.