Archive for Cornell University

more air [&q’s] for MCMC [comments]

Posted in Books, pictures, Statistics on June 11, 2021 by xi'an

[A rich set of comments by Tom Loredo about convergence assessments for MCMC that I feel needs reposting:]

Two quick points:

  • By coincidence (and for a different problem), I’ve just been looking at the work of Gorham & Mackey that I believe Pierre is referring to. This is probably the relevant paper: “Measuring Sample Quality with Kernels”.
  • Besides their new rank-based R-hat, bloggers on Gelman’s blog have also pointed to another R-hat replacement, R*, developed by some Stan team members; it is “based on how well a machine learning classifier model can successfully discriminate the individual chains.” See: “R*: A robust MCMC convergence diagnostic with uncertainty using decision tree classifiers”.
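For concreteness, here is a toy Python sketch of the idea behind such classifier-based diagnostics (not the authors’ implementation, and with an arbitrary choice of classifier): label each draw with its chain index, train a classifier to predict the chain from the draw, and compare its out-of-sample accuracy with the chance level; accuracy near chance suggests the chains are indistinguishable, i.e. well mixed.

```python
# Toy illustration of a classifier-based convergence check (in the spirit of R*):
# if a classifier cannot tell the chains apart, they are presumably well mixed.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def classifier_diagnostic(chains):
    """chains: array of shape (n_chains, n_draws, dim)."""
    n_chains, n_draws, dim = chains.shape
    X = chains.reshape(n_chains * n_draws, dim)        # stacked draws
    y = np.repeat(np.arange(n_chains), n_draws)        # chain labels
    acc = cross_val_score(GradientBoostingClassifier(), X, y, cv=5).mean()
    return acc, 1.0 / n_chains                         # accuracy vs chance level

# four "chains" drawn from the same target should be indistinguishable
rng = np.random.default_rng(0)
acc, chance = classifier_diagnostic(rng.normal(size=(4, 500, 3)))
print(f"classification accuracy {acc:.2f} vs chance {chance:.2f}")
```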

In addition, here’s an anecdote regarding your comment, “I remain perplexed and frustrated by the fact that, 30 years later, the computed values of the visited likelihoods are not better exploited.”

That has long bothered me, too. During a SAMSI program around 2006, I spent time working on one approach that tried to use the prior×likelihood (I call it q(θ), for “quasiposterior” and because it’s next to “p”!) to compute the marginal likelihood. It would take posterior samples (from MCMC or another approach) and find their Delaunay triangulation. Then, using q(θ) at the nodes of the simplices comprising the triangulation, it applied a simplicial cubature rule to approximate the integral of q(θ) over the volume spanned by the samples.
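A possible Python reconstruction of that scheme, for concreteness (the vertex-average cubature rule and the toy check are illustrative choices, not necessarily the original implementation):

```python
# Rough reconstruction of the "adaptive simplicial cubature" idea: integrate
# q(θ) over the convex hull of the sample via a Delaunay triangulation and a
# first-order (vertex-average) rule on each simplex.
import math
import numpy as np
from scipy.spatial import Delaunay

def simplicial_cubature(samples, log_q):
    """samples: (n, d) array of points; log_q: vectorised log q(θ)."""
    n, d = samples.shape
    tri = Delaunay(samples)
    qvals = np.exp(log_q(samples))              # q at every sample point
    total = 0.0
    for simplex in tri.simplices:
        verts = samples[simplex]                # the d+1 vertices of the simplex
        edges = verts[1:] - verts[0]
        vol = abs(np.linalg.det(edges)) / math.factorial(d)
        total += vol * qvals[simplex].mean()    # average of q over the vertices
    return total                                # ≈ ∫ q(θ) dθ over the hull

# toy check in 2D: the unnormalised N(0, I₂) density should integrate to about 2π
rng = np.random.default_rng(1)
theta = rng.normal(size=(2000, 2))
print(simplicial_cubature(theta, lambda t: -0.5 * (t ** 2).sum(axis=1)), 2 * np.pi)
```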

As I recall, I only explored it with multivariate normal and Student-t targets. It failed, but in an interesting way. It worked well in low dimensions, but gave increasingly poor estimates as dimension grew. The problem appeared related to concentration of measure (or the location of the typical set), with the points not sufficiently covering the center or the large volume in the tails (or both; I can’t remember what diagnostics said exactly).
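As a quick numerical reminder of that concentration effect (a generic illustration, not the diagnostics used at the time): the norm of a d-dimensional standard normal draw piles up on a shell of radius about √d, so the sample covers neither the mode region nor the far tails.

```python
# Concentration of measure: the norm of a N(0, I_d) draw concentrates near sqrt(d).
import numpy as np

rng = np.random.default_rng(2)
for d in (2, 10, 100, 1000):
    r = np.linalg.norm(rng.normal(size=(10_000, d)), axis=1)
    print(f"d={d:4d}  mean radius {r.mean():6.2f}  vs sqrt(d)={d ** 0.5:6.2f}  sd {r.std():.2f}")
```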

Another problem is that Delaunay triangulation gets expensive quickly with growing dimension. This method doesn’t need an optimal triangulation, so I wondered if there was a faster sub-optimal triangulation algorithm, but I couldn’t find one.

An interesting aspect of this approach is that the fact that the points are drawn from the posterior doesn’t matter. Any set of points is a valid set of points for approximating the integral (in the spanned volume). I just used posterior samples because I presumed those would be available from MCMC. I briefly did some experiments taking the samples and reweighting them to draw a subset for the cubature that was either over- or under-dispersed vs. the target, and found one could improve things this way (I can’t remember which choice was better). This suggests that points drawn from q(θ) aren’t optimal for such cubature, but I never tried looking formally for the optimal choice.

I called the approach “adaptive simplicial cubature,” adaptive in the sense that the points are chosen in a way that depends on the integrand.

The only related work I could find at the time was work by you and Anne Philippe on Riemann sums with MCMC (https://doi.org/10.1023/A:1008926514119). I later stumbled upon a paper on “random Riemann sum estimators” as an alternative to Monte Carlo that seems related but that I didn’t explore further (https://doi.org/10.1016/j.csda.2006.09.041).
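In one dimension, the Riemann-sum idea amounts to sorting the simulated points and weighting q by the spacings; a minimal toy sketch under that reading (not the estimator of either paper):

```python
# Toy one-dimensional Riemann sum over simulated points: sort the draws and
# weight q by the spacings, which estimates the normalising constant of q.
import numpy as np

def riemann_sum_1d(draws, q):
    x = np.sort(draws)
    return np.sum(np.diff(x) * q(x[:-1]))        # left-endpoint Riemann sum

# with an unnormalised N(0,1) density the answer should be close to √(2π)
rng = np.random.default_rng(3)
draws = rng.normal(size=5_000)                   # e.g. MCMC or posterior output
print(riemann_sum_1d(draws, lambda t: np.exp(-0.5 * t ** 2)), np.sqrt(2 * np.pi))
```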

I still find it hard to believe that the q values aren’t useful. Admittedly, in an n-dimensional distribution, it’s just 1 more quantity available beyond the n that comprise the sample location. But it’s a qualitatively different type of information from the sample location, and I can’t help but think there’s some clever way to use it (besides emulating the response surface).

estimation of a normal mean matrix

Posted in Statistics on May 13, 2021 by xi'an

A few days ago, I noticed that the paper Estimation under matrix quadratic loss and matrix superharmonicity by Takeru Matsuda and my friend Bill Strawderman had appeared in Biometrika. (Disclaimer: I was not involved in handling the submission!) This is a “classical” shrinkage estimation problem in that estimators of a normal mean matrix are compared under a matrix quadratic loss, using Charles Stein’s technique of unbiased estimation of the risk. The authors show that the Efron–Morris estimator is minimax. They also introduce superharmonicity for matrix-variate functions towards showing that generalized Bayes estimators with respect to matrix superharmonic priors are minimax, including a generalization of Stein’s prior. This superharmonicity relates to (much) earlier results by Ed George (1986), Mary-Ellen Bock (1988), and Dominique Fourdrinier, Bill Strawderman, and Marty Wells (1998). (All of whom I worked with in the 1980’s and 1990’s, in Rouen, Purdue, and Cornell!) This paper also made me realise Dominique, Bill, and Marty had published a Springer book on shrinkage estimation a few years ago and that I had missed it..!
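For the record, a minimal numpy sketch, assuming the usual form X{I_p − (n − p − 1)(XᵀX)⁻¹} of the Efron–Morris estimator of an n×p normal mean matrix (to be checked against the paper):

```python
# Efron–Morris type shrinkage of a normal mean matrix, in the form
# X{I_p − (n − p − 1)(XᵀX)⁻¹}, assumed here and to be checked against the paper.
import numpy as np

def efron_morris(X):
    n, p = X.shape                               # assumes n > p + 1
    return X @ (np.eye(p) - (n - p - 1) * np.linalg.inv(X.T @ X))

# toy comparison with the maximum likelihood estimator, i.e. X itself
rng = np.random.default_rng(4)
M = rng.normal(size=(50, 3))                     # true mean matrix
X = M + rng.normal(size=M.shape)                 # X ~ N(M, I_n ⊗ I_p)
print(((X - M) ** 2).sum(), ((efron_morris(X) - M) ** 2).sum())
```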

monomial representations on Netflix

Posted in Books, Kids, pictures, Travel on February 16, 2021 by xi'an

When watching the first episode of Queen’s Gambit, following the recommendations of my son, I glimpsed the cover of a math thesis defended at Cornell by the mother of the main character, prior to 1957, the year of her death..! Searching a wee bit further, I found that there exists an actual thesis with this very title, albeit defended by Stephen Stanley in 1998 at the University of Birmingham, that is, Birmingham, UK [near Coventry]. Apart from this amusing trivia piece, I also enjoyed watching the first episodes of the series, the main actress being really outstanding in her acting, and the plot unfolding rather nicely, except for the chess games that are unrealistically hurried, presumably because watching people thinking is anathema on TV! The representation of misogyny at the time is however most realistic (I presume!) and definitely shocking. (The first competition game that Beth Harmon loses is somewhat disappointing, as failing to predict a Queen exchange is implausible at this level…) However, the growing self-destructive behaviour of Beth made me cringe to the point of stopping the series. The early episodes also reminded me of the days when my son had started playing chess with me, winning on a regular basis, had then joined a Saturday chess club nearby, was moved to the adult section within a few weeks, and … stopped altogether a few weeks later as he (mistakenly) thought the older players were making fun of him!!! He never got to any competitive level but still plays on a regular basis and thrashes me just as regularly. Coincidence or not, the Guardian had a “scandalous” chess story to relate last week, when the Dutch champion defeated the world top two players, with one game won by him having prepared the Najdorf Sicilian opening up to the 17th move! (The chess problem below is from the same article but relates to Antonio Medina v Svetozar Gligoric, Palma 1968.)

your GAN is secretly an energy-based model

Posted in Books, Statistics, University life on January 5, 2021 by xi'an

As I was reading this NeurIPS 2020 paper by Che et al. and trying to make sense of it, I came across a citation to our paper Casella, Robert and Wells (2004) on a generalized accept-reject sampling scheme where the proposal changes at each simulation, a citation that is as surprising as it is appreciated! But after checking, this paper also appears as the first reference on the Wikipedia page for rejection sampling, which makes me wonder if many actually read it. (On the side, we mostly wrote this paper on a drive from Baltimore to Ithaca, after JSM 1999.)

“We provide more evidence that it is beneficial to sample from the energy-based model defined both by the generator and the discriminator instead of from the generator only.”

The paper seems to propose a post-processing of the output of the GAN generator, generating from a combination of both generator and discriminator, via an (unscented) Langevin algorithm. The core idea is that, if p(.) is the true data generating process, g(.) the estimated generator and d(.) the discriminator, then

p(x) ≈ p⁰(x) ∝ g(x) exp{d(x)}

(The approximation would be exact were the discriminator optimal.) The authors work with the latent z’s, in the GAN sense that generating pseudo-data x from g means taking a deterministic transform of z, x = G(z). When considering the above p⁰, a generation from p⁰ can be seen as an accept-reject step with acceptance probability proportional to exp[d{G(z)}]. (On the side, Lemma 1 is the standard validation for accept-reject sampling schemes.)
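To make the accept-reject reading concrete, a toy sketch (with made-up one-dimensional G and d standing in for trained networks, and assuming an upper bound d_max on the discriminator output):

```python
# Toy accept-reject rendering of sampling from p⁰(x) ∝ g(x) exp{d(x)}: propose z
# from the latent prior, set x = G(z), accept with probability exp{d(x) − d_max}.
# G and d below are made-up stand-ins for a trained generator and discriminator.
import numpy as np

rng = np.random.default_rng(5)
G = lambda z: 2.0 * z + 1.0                # hypothetical deterministic generator
d = lambda x: -0.5 * (x - 1.0) ** 2        # hypothetical discriminator logit (≤ 0)
d_max = 0.0                                # upper bound on d, needed for accept-reject

samples = []
while len(samples) < 1_000:
    z = rng.normal()                       # draw z from the latent prior
    x = G(z)
    if rng.uniform() < np.exp(d(x) - d_max):
        samples.append(x)                  # accepted: distributed ∝ g(x) exp{d(x)}
print(np.mean(samples), np.std(samples))
```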

Reading this paper made me realise how much the field had evolved since my previous GAN-related read, with directions like Metropolis–Hastings GANs and Wasserstein GANs. (And I noticed a “broader impact” section past the conclusion about possible misuses with societal consequences, which is a new requirement for NeurIPS publications.)

Grand Central Terminal

Posted in Books, pictures, Travel on April 22, 2020 by xi'an