Archive for Statistical Science

transport, diffusions, and sampling

Posted in pictures, Statistics, Travel, University life on November 19, 2022 by xi'an

At the Sampling, Transport, and Diffusions workshop at the Flatiron Institute, on Day #2, Marilou Gabrié (École Polytechnique) gave the second introductory lecture, on merging sampling and normalising flows to approximate the target distribution, driven by a divergence criterion like KL that only requires the shape of the target density. I first wondered about ergodicity guarantees under simultaneous MCMC and map training, due to the adaptation of the flow, but the update of the map only depends on the current particle cloud in (8). From an MCMC perspective, it sounds somewhat paradoxical to see the independent sampler making such an unexpected come-back, considering that no insider information is available about the (complex) posterior to drive the [what-you-get-is-what-you-see] construction of the transport map. However, the proposed approach superposes local (random-walk like) and global (transport) proposals in Algorithm 1.
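
For concreteness, a minimal sketch of such a superposed kernel, alternating a local random-walk move with a global independence move drawn from a fitted flow; `flow_sample` and `flow_logpdf` are hypothetical stand-ins for a trained normalising flow, not code from the talk:

```python
import numpy as np

def mixed_kernel(x, log_target, flow_sample, flow_logpdf,
                 rw_scale=0.5, p_global=0.5, rng=np.random.default_rng()):
    """One MH step mixing a local and a global (transport) proposal."""
    if rng.uniform() < p_global:
        # global move: independence proposal from the (hypothetical) flow
        y = flow_sample(rng)
        log_alpha = (log_target(y) - log_target(x)
                     + flow_logpdf(x) - flow_logpdf(y))
    else:
        # local move: symmetric random walk, so the MH ratio is the target ratio
        y = x + rw_scale * rng.standard_normal(x.shape)
        log_alpha = log_target(y) - log_target(x)
    return y if np.log(rng.uniform()) < log_alpha else x
```

The global move is exactly the independent sampler mentioned above, only with a proposal that keeps improving as the flow is retrained on the particle cloud.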

Qiang Liu followed on learning transport maps, with the interesting notion of causalizing a graph by removing intersections (which are impossible for an ODE, as discussed in Eric Vanden-Eijnden's talk yesterday) through coupling. Which underlies his notion of rectified flows. Possibly connecting with the next lightning talk, by Jonathan Weare, on spurious modes created by a variational Monte Carlo sampler and the use of stochastic gradient, corrected by (case-dependent?) regularisation.
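
A toy illustration of the rectified-flow objective (a sketch under simplifying assumptions, not the construction of the talk): given a coupling (X₀, X₁), regress the displacement X₁ − X₀ onto points of the straight-line interpolation, here with a deliberately tiny linear velocity model so the example stays self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.standard_normal((1000, 2))           # source samples
x1 = rng.standard_normal((1000, 2)) + 4.0     # target samples, arbitrary coupling
W = np.zeros((3, 2))                          # weights of v([x, t]) = [x, t] @ W

for _ in range(2000):
    t = rng.uniform(size=(1000, 1))
    xt = (1 - t) * x0 + t * x1                # points on the linear interpolation
    feats = np.hstack([xt, t])
    grad = feats.T @ (feats @ W - (x1 - x0)) / 1000.0
    W -= 0.02 * grad                          # full-batch gradient step
```

Integrating the fitted ODE dx/dt = v(x, t) then yields non-crossing transport paths, and iterating the procedure on the induced coupling is what rectifies the flow.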

Then came a whole series of MCMC talks!

Sam Livingstone spoke on Barker's proposal (an incoming Biometrika paper!) as part of a general class of transforms g of the MH ratio, using jump processes based on a nasty normalising constant related to g (tractable for the original Barker algorithm). I then realised I had missed his StatSci paper on how to speak to statistical physics researchers!
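
In its original form, Barker's rule replaces the Metropolis transform g(t) = min(1, t) of the ratio t with g(t) = t/(1 + t), a sigmoid on the log scale; a minimal sketch of the two acceptance rules (my own illustration, not the general class of the paper):

```python
import numpy as np

def accept(log_ratio, rule="barker", rng=np.random.default_rng()):
    """Accept/reject a proposal given the log MH ratio."""
    if rule == "mh":
        prob = min(1.0, np.exp(log_ratio))       # g(t) = min(1, t)
    else:
        prob = 1.0 / (1.0 + np.exp(-log_ratio))  # g(t) = t / (1 + t)
    return rng.uniform() < prob
```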

Charles Margossian spoke about using a massive number of short parallel runs (the many-short-chain regime) from a recent paper written with Aki, Andrew, and Lionel Riou-Durand (Warwick), among others. Which brings us back to the challenge of producing convergence diagnostics, precisely the Gelman-Rubin R̂ statistic or its recent nR̂ avatar (with its linear limitations and dependence on parameterisation, as opposed to fuller distributional criteria). The core of the approach is in using blocks of GPUs to improve and speed up the estimation of the between-chain variance. (D for R².) I still wonder at the waste of simulations / computing power resulting from stopping the runs almost immediately after warm-up is over, since reaching the stationary regime, or an approximation thereof, should be exploited more efficiently. (Starting from a minimal-discrepancy sample would also improve efficiency.)
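
For reference, a minimal sketch of the plain Gelman-Rubin computation on a chains-by-draws array, the statistic the many-short-chain regime makes accurate by multiplying chains rather than lengthening them (this is not the nested nR̂ of the paper):

```python
import numpy as np

def r_hat(draws):
    """Gelman-Rubin diagnostic; draws has shape (n_chains, n_draws)."""
    m, n = draws.shape
    chain_means = draws.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = draws.var(axis=1, ddof=1).mean()      # within-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_plus / W)
```

With thousands of parallel chains, B is well estimated even when n is tiny, which is precisely what the GPU blocks buy.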

Lu Zhang also talked on the issue of cutting down warmup, presenting a paper co-authored with Bob, Andrew, and Aki, recommending Laplace / variational approximations for reaching high-posterior-density regions faster, using an algorithm called Pathfinder that relies on ELBO checks to counter the poor performance of Laplace approximations. In the spirit of the workshop, it could be profitable to further transform / push forward the outcome by a transport map.
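
The gist of the ELBO check can be sketched as follows: fit a Gaussian (Laplace) approximation at a mode and estimate its ELBO by Monte Carlo, a low value flagging a poor Gaussian fit. This is only the spirit of the method; Pathfinder proper builds its approximations along an L-BFGS optimisation path, which the crude finite-difference version below does not attempt:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def laplace_elbo(log_post, x0, n_mc=1000, eps=1e-4, rng=np.random.default_rng()):
    """ELBO of a Laplace approximation to a (user-supplied) log posterior."""
    res = minimize(lambda x: -log_post(x), x0)        # find a posterior mode
    d = len(res.x)
    H = np.zeros((d, d))                              # finite-difference Hessian
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * eps, np.eye(d)[j] * eps
            H[i, j] = (-log_post(res.x + ei + ej) + log_post(res.x + ei)
                       + log_post(res.x + ej) - log_post(res.x)) / eps**2
    q = multivariate_normal(mean=res.x, cov=np.linalg.inv(H))
    z = q.rvs(size=n_mc, random_state=rng).reshape(n_mc, d)
    return np.mean([log_post(zi) for zi in z]) - np.mean(q.logpdf(z))
```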

Yuling Yao (of stacking and Pareto smoothing fame!) gave an original and challenging (in a positive sense) talk on the many ways of bridging densities [linked with the remark he shared with me the day before] and their statistical significance. Questioning our usual reliance on arithmetic or geometric mixtures. Ignoring computational issues, selecting a bridging pattern sounds no different from choosing a parameterised family of embedding distributions. This new typology of models can then be endowed with properties that are more or less appealing. (Occurrences of the Hyvärinen score and our mixtestin perspective in the talk!)
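
The two mixtures in question, for a base density p₀, a target p₁, and a bridging parameter t ∈ (0, 1), take a few lines on the log scale (my notation; the geometric bridge is unnormalised):

```python
import numpy as np
from scipy.special import logsumexp

def log_arithmetic_bridge(logp0, logp1, t, x):
    # log of (1 - t) p0(x) + t p1(x)
    return logsumexp([np.log1p(-t) + logp0(x), np.log(t) + logp1(x)])

def log_geometric_bridge(logp0, logp1, t, x):
    # log of p0(x)^(1 - t) * p1(x)^t, normalising constant unknown
    return (1 - t) * logp0(x) + t * logp1(x)
```

Any parameterised family interpolating between (or beyond) these two schemes then defines a bridging pattern in the above sense.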

Miranda Holmes-Cerfon talked about MCMC on stratifications (illustrated by this beautiful picture of nanoparticle random walks). Which means sampling under varying constraints and dimensions, with associated densities under the respective Hausdorff measures. This sounds like a perfect setting for reversible jump, and in a sense it is, as mentioned in the talks. Except that the moves between manifolds are driven by the proximity to said manifolds, helping with a higher acceptance rate, and making the proposals easier to construct, since the projections (or their reverses) have a physical meaning. (But I could not tell from the talk why the approach was seemingly escaping the symmetry constraint set by Peter Green's RJMCMC on the reciprocal moves between two given manifolds.)

Don Fraser (1925-2020)

Posted in Books, Statistics, University life on December 24, 2020 by xi'an

I just received the very sad news that Don Fraser, emeritus professor of statistics at the University of Toronto, passed away this Monday, 21 December 2020. He was a giant of the field, with a unique ability for abstract modelling, and he certainly pushed fiducial statistics much further than Fisher ever did. He also developed a theory of structural inference that came close to objective Bayesian statistics, although he remained quite critical of the Bayesian approach (always in a most gentle manner, as he was a very nice man!). And he most significantly contributed to higher-order asymptotics, to the critical analysis of ancillarity and sufficiency principles, and more. (Statistical Science published a conversation with Don in 2004, providing more personal views on his career till then.) I met with Don and Nancy rather regularly over the years, as they often attended and talked at (objective) Bayesian meetings, from the 1999 edition in Granada to the last one in Warwick in 2019. I also remember a most enjoyable barbecue together, along with Ivar Ekeland and his family, during JSM 2018, on Jericho Park Beach, with a magnificent sunset over the Burrard Inlet. Farewell, Don!

remembering Joyce Fienberg through Steve’s words

Posted in Statistics on October 28, 2018 by xi'an

I just learned the horrific news that Joyce Fienberg was one of the eleven people murdered yesterday morning at the Tree of Life synagogue. I had been vaguely afraid this could be the case since hearing about the shooting there, just because it was not far from the University of Pittsburgh, and CMU, but then a friend emailed me she indeed was one of the victims. When her husband Steve was on sabbatical in Paris, we met a few times for memorable dinners. I think the last time I saw her was a few years ago in a Paris hotel where Joyce, Steve and I had breakfast together to take advantage of one of their short trips to Paris. In remembrance of this wonderful woman who got assassinated by an anti-Semitic extremist, here is how Steve described their encounter in his Statistical Science interview:

I had met my wife Joyce at the University of Toronto when we were both undergraduates. I was actually working in the fall of 1963 in the registrar’s office, and on the first day the office opened to enroll people, Joyce came through. And one of the benefits about working in the registrar’s office, besides earning some spending money, was meeting all these beautiful women students passing through. That first day I made a note to ask Joyce out on a date. The next day she came through again, this time bringing through another young woman who turned out to be the daughter of friends of her parents. And I thought this was a little suspicious, but auspicious in the sense that maybe I would succeed in getting a date when I asked her. And the next day, she came through again! This time with her cousin! Then I knew that this was really going to work out. And it did. We got engaged at the end of the summer of 1964 after I graduated, but we weren’t married when I went away to graduate school. In fact, yesterday I was talking to one of the students at the University of Connecticut who was a little concerned about graduate school; it was wearing her down, and I told her I almost left after the first semester because I wasn’t sure if I was going to make a go of it, in part because I was lonely. But I did survive, and Joyce came at the end of the first year; we got married right after classes ended, and we’ve been together ever since.

Gaussian hare and Laplacian tortoise

Posted in Books, Kids, pictures, Statistics, University life on October 19, 2018 by xi'an

A question on X validated on the comparative merits of L¹ versus L² estimation led me to the paper by Stephen Portnoy and Roger Koenker entitled "The Gaussian Hare and the Laplacian Tortoise: Computability of Squared-Error versus Absolute-Error Estimators", which I had missed at the time, despite enjoying a subscription to Statistical Science till the late '90s. The authors went as far as producing a parody of Grandville's Fables de La Fontaine by sticking Laplace's and Gauss' heads on the tortoise and the hare!

I remember rather vividly going through Steve Stigler's account of the opposition between Laplace's and Legendre's approaches when reading his History of Statistics in 1990 or 1991… Laplace defending the absolute error on the basis of the default double-exponential (or Laplace) distribution, while Legendre and then Gauss argued in favour of the squared error loss on the basis of a default Normal (or Gaussian) distribution. (Edgeworth later returned to the support of the L¹ criterion.) Portnoy and Koenker focus mostly on ways of accelerating the derivation of the L¹ regression estimators. (I also learned from the paper that Koenker was one of the originators of quantile regression.)
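
To make the comparison concrete, here is a minimal sketch (mine, not the paper's) fitting both estimators on a toy regression: the L² solution is a closed-form least-squares solve, while the L¹ solution comes from the standard linear-programming reformulation, the kind of computation whose acceleration Portnoy and Koenker discuss:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.standard_normal(100)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=2, size=100)  # heavy-tailed noise

beta_l2 = np.linalg.lstsq(X, y, rcond=None)[0]                 # the Gaussian hare

# L1 fit: minimise sum(u + v) subject to X beta + u - v = y, u, v >= 0
n, p = X.shape
c = np.concatenate([np.zeros(p), np.ones(2 * n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
bounds = [(None, None)] * p + [(0, None)] * (2 * n)
beta_l1 = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds).x[:p]   # the Laplacian tortoise
```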

the first Bayesian

Posted in Statistics on February 20, 2018 by xi'an

In the first issue of Statistical Science for this year (2018), Stephen Stigler pursues the origins of Bayesianism as attributable to Richard Price, main author of Bayes' Essay. (This incidentally relates to an earlier 'Og piece on that notion!) Steve points out the considerable input of Price on this Essay, even though the mathematical advance is very likely to be entirely Bayes'. It may however well be Price who initiated Bayes' reflections on the matter, towards producing a counter-argument to Hume's "Of Miracles".

“Price’s caution in addressing the probabilities of hypotheses suggested by data is rare in early literature.”

A section of the paper is about Price's approach to data-determined hypotheses, and the fact that considering such hypotheses cannot easily fit within a Bayesian framework. As stated by Price, "it would be improbable as infinite to one". Which is a nice way to address the infinite mass prior.

 
