Archive for the University life Category

deadlines for BayesComp’2020

Posted in pictures, Statistics, Travel, University life on August 17, 2019 by xi'an

While I forgot to send a reminder that August 15 was the first deadline of BayesComp 2020, for early registration, here are the remaining deadlines and dates:

  1. BayesComp 2020 takes place on January 7-10, 2020, in Gainesville, Florida, USA
  2. Registration is open at regular rates until October 14, 2019
  3. The deadline for submitting poster proposals is December 15, 2019
  4. The deadline for travel support applications is September 20, 2019
  5. Four free tutorials are offered on January 7, 2020, on Stan, NIMBLE, SAS, and AutoStat

conditional noise contrastive estimation

Posted in Books, pictures, University life on August 13, 2019 by xi'an

At ICML last year, Ciwan Ceylan and Michael Gutmann presented a new version of noise contrastive estimation to deal with intractable constants. While noise contrastive estimation relies upon a second, independent sample to contrast with the observed sample, this approach instead uses a perturbed or noisy version of the original sample, for instance a Normal generation centred at the original datapoint. And eliminates the annoying constant by breaking the (original and noisy) samples into two groups. The probability of belonging to one group or the other then does not depend on the constant, which is a very effective trick. And can be optimised with respect to the parameters of the model of interest. Recovering the score matching function of Hyvärinen (2005) [in the limit of small noise]. While this is in line with earlier papers by Gutmann and Hyvärinen, this line of reasoning (starting with Charlie Geyer's logistic regression) never ceases to amaze me!
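To make the trick concrete, here is a minimal sketch in Python (my own toy illustration, not the authors' code): with a symmetric noise kernel like the Normal one above, the conditional-NCE logistic loss only involves the difference of unnormalised log densities between a datapoint and its noisy copy, so the normalising constant never enters the objective. The Gaussian-precision model and all settings are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# toy model: zero-mean Gaussian with unknown precision theta, written
# as an *unnormalised* log density so the constant never appears
def log_ptilde(x, theta):
    return -0.5 * theta * x ** 2   # log Z(theta) deliberately dropped

n, sigma_noise, theta_true = 10_000, 1.0, 2.0
x = rng.normal(0.0, theta_true ** -0.5, n)    # observed sample
y = x + sigma_noise * rng.normal(size=n)      # conditionally noisy copies

# CNCE-style logistic loss: with a symmetric noise kernel the kernel terms
# cancel and only the difference of unnormalised log densities remains,
# so the intractable constant drops out of the objective
def cnce_loss(theta):
    g = log_ptilde(x, theta) - log_ptilde(y, theta)
    return np.mean(np.logaddexp(0.0, -g))     # mean log(1 + exp(-g))

theta_hat = minimize(cnce_loss, x0=1.0, method="Nelder-Mead").x[0]
print(f"true precision {theta_true}, CNCE estimate {theta_hat:.3f}")
```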

prime suspects [book review]

Posted in Books, Kids, University life on August 6, 2019 by xi'an


I was contacted by Princeton University Press to comment on the comic book/graphic novel Prime Suspects (The Anatomy of Integers and Permutations), by Andrew Granville (mathematician) & Jennifer Granville (writer), and Robert Lewis (illustrator), and they sent me the book. I am not a big fan of graphic-novel takes on mathematical notions, even less than on statistical ones (Logicomix being sort of an exception, for its historical perspective and nice drawing style), and this book did nothing to change my perspective on the subject. First, the plot is mostly a pretext for introducing number theory concepts and I found it hard to follow for more than a few pages. The [noir maths] story is that “forensic maths” detectives are looking into murders that connect prime integers and permutations… The ensuing NCIS-style investigation gives the authors the opportunity to skim through the whole cenacle of number theorists, plus a few other mathematicians, who appear as more or less central characters. Even illusory ones like Nicolas Bourbaki. And Alexander Grothendieck as a recluse and clairvoyant hermit [who in real life did not live in a cavern in the Pyrénées!!!]. Second, I [and neither did Andrew, who was in my office when the book arrived!] did not particularly enjoy the drawings or the page composition or the colours of this graphic novel, especially because I find the characters drawn quite inconsistently from one strip to the next, to the point of being unrecognisable, and, if it matters, hardly resembling their real-world equivalents (as seen in the portrait of Persi Diaconis). To be completely honest, the drawings look both ugly and very conventional to me, in that I do not find much of a characteristic style to them. To contemplate what Jacques Tardi, François Schuiten, or José Muñoz could have achieved with the same material… (Or even Edmond Baudoin, who drew the strips for the graphic novels he coauthored with Cédric Villani.) The graphic novel (with a prime number of pages, 181) is postfaced with explanations about the true persons behind the characters, from Carl Friedrich Gauß to Terry Tao, and of course about the mathematical theory of the analogies between prime and cycle frequencies behind the story. Which I find much more interesting and readable, obviously. (With a surprise appearance of Kingman's coalescent!) But also somewhat self-defeating, in that so much has to be explained on the side for the links between the story, the characters, and a background heavily loaded with “obscure references” to make sense to more than a few mathematician readers. Who may prove to be the core readership of this book.

There is also a bit of a Gödel-Escher-and-Bach flavour, in that a piece by Robert Schneider called Réverie in Prime Time Signature is included, while one of Escher's infinite stairways appears on one page, not far from what looks like Milan's Galleria Vittorio Emanuele. (As an aside, I am puzzled by the footnote on p.208 stating that “I should clarify that selecting a random permutation and a random prime, as described, can be done easily, quickly, and correctly”. This may be connected to the fact that the description of Bach's algorithm provided therein is incomplete.)
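On that footnote: a uniform random permutation is indeed easy (Fisher-Yates), and a uniform prime below a given bound can be obtained by plain rejection sampling, with acceptance rate about 1/log n by the prime number theorem. This is however not Bach's algorithm, which, if I remember correctly, returns a random integer together with its factorisation. A hypothetical sketch in Python, relying on SymPy's primality test:

```python
import random
from sympy import isprime  # SymPy primality test

def random_prime_below(n, rng=random):
    """Uniform prime in [2, n) by rejection sampling; the acceptance
    rate is about 1/log n, so this stays fast for moderate bounds."""
    while True:
        candidate = rng.randrange(2, n)
        if isprime(candidate):
            return candidate

def random_permutation(n, rng=random):
    """Uniform permutation of 0..n-1 via Fisher-Yates (random.shuffle)."""
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

print(random_prime_below(10**6), random_permutation(8))
```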

[Disclaimer about potential self-plagiarism: this post or an edited version will eventually appear in my Books Review section in CHANCE. As appropriate for a book about Chance!]

unbiased product of expectations

Posted in Books, Statistics, University life on August 5, 2019 by xi'an

While I was not involved in any way, or even aware of this research, Anthony Lee, Simone Tiberi, and Giacomo Zanella have a forthcoming paper in Biometrika, partly written while all three authors were at the University of Warwick. The purpose is to design an efficient way to approximate the product of n unidimensional expectations (or integrals) all computed against the same reference density. Which is not a real constraint. A neat remark that motivates the method in the paper is that an improved estimator can be connected with the permanent of the n x N matrix A made of the values of the n functions computed at N different simulations from the reference density. And involves N!/(N-n)! terms, rather than N to the power n. Since the permanent is NP-hard to compute, a manageable alternative uses random draws from constrained permutations, which are reasonably easy to simulate. Especially since, given that the estimator recycles most of the particles, it requires a much smaller value of N, essentially N=O(n) in this scenario, instead of O(n²) for the basic Monte Carlo solution, towards a similar variance.
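As a hedged illustration (my own toy sketch, not the authors' algorithm): a single random injection from {1,…,n} into the N simulations already yields an unbiased estimator of the product, since it evaluates each fᵢ at a distinct, hence independent, draw; averaging over several such constrained permutations is then a cheap surrogate for the full (NP-hard) permanent. The functions and settings below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy target: prod_i E[exp(a_i X)] for X ~ N(0,1), with known truth e^{a_i^2/2}
a_vals = np.linspace(0.1, 0.5, 5)
fs = [lambda x, a=a: np.exp(a * x) for a in a_vals]
truth = np.prod(np.exp(a_vals ** 2 / 2))

n, N, R = len(fs), 50, 200
x = rng.normal(size=N)                  # one shared reference sample
F = np.stack([f(x) for f in fs])        # n x N matrix of f_i(x_j) values

# each random injection {1..n} -> {1..N} evaluates every f_i at a distinct,
# hence independent, draw, so each product below is unbiased; averaging R
# injections is a cheap surrogate for the permanent over all N!/(N-n)! terms
ests = [np.prod(F[np.arange(n), rng.choice(N, size=n, replace=False)])
        for _ in range(R)]
print(f"estimate {np.mean(ests):.4f} vs truth {truth:.4f}")
```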

This framework offers many applications in latent variable models, including pseudo-marginal MCMC of course, but also ABC, since an ABC posterior based on getting each simulated observation close enough to the corresponding actual observation fits this pattern (although the dependence on the chosen ordering of the data is an issue that can make the example somewhat artificial).

off to Osaka

Posted in Mountains, pictures, Travel, University life on August 3, 2019 by xi'an

Today, I am off to Japan to visit Kengo Kamatani at Osaka University (where I will give a seminar on Tuesday) for a week and then for two weeks of vacation hiking the Kumano Kodō, a network of ancient pilgrimage routes in the Kii peninsula, south of Osaka. (Presumably with little access to the Internet or even to my laptop!)

on anonymisation

Posted in Books, pictures, Statistics, University life on August 2, 2019 by xi'an

An article in the New York Times covering a recent publication in Nature Communications on the ability to identify 99.98% of Americans from almost any dataset with fifteen covariates. And mentioning the French approach of INSEE, more precisely of CASD (a branch of GENES, like ENSAE and CREST, to which I am affiliated), where my friend Antoine worked for a few years, and whose approach is to vet researchers who want access to non-anonymised data, by creating local working environments on the CASD machines so that the data never leaves the site. The approach provides the researcher with a dedicated interface, which “enables access remotely to a secure infrastructure where confidential data is safe from harm”. It further delivers reproducibility certificates for publications, a point apparently missed by the New York Times, which cites the lack of reproducibility as a drawback of the method. It also mentions the possibility of doing cryptographic data analysis, again missing the finer details with a lame objection.

“Our paper shows how the likelihood of a specific individual to have been correctly re-identified can be estimated with high accuracy even when the anonymized dataset is heavily incomplete.”

The Nature paper is actually about the probability for an individual to be uniquely identified from the given dataset, which is somewhat different from the NYT headlines. Using a copula for the distribution of the covariates. And assessing the model with a mean squared error evaluation, when what matters are false positives and false negatives. Note that the model needs to be trained for each new dataset, which reduces the appeal of the claim, especially when considering that about 6% of the individuals tagged as uniquely identified are not. The statistic of 99.98% posted in the NYT is actually a count on a specific dataset, the 5% Public Use Microdata Sample files for Massachusetts residents, and not a general statistic [which would not make much sense!, as I can easily imagine 15 useless covariates] or a prediction from the authors' model. And a wee bit anticlimactic.
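For intuition about why fifteen covariates may suffice, here is a back-of-the-envelope version of the uniqueness computation, assuming independent covariates instead of the Gaussian copula of the paper, and with entirely hypothetical marginal frequencies: if any given other individual matches a record's covariate values with probability q, the record is unique in a population of size P with probability (1-q) to the power P-1.

```python
import numpy as np

rng = np.random.default_rng(3)

# back-of-the-envelope uniqueness: K discrete covariates, assumed
# independent (the paper fits a Gaussian copula instead)
K, pop_size = 15, 6_600_000   # 15 covariates, a Massachusetts-sized population
# hypothetical marginal frequencies of the record's observed covariate values
match_probs = rng.uniform(0.05, 0.5, K)

q = np.prod(match_probs)      # P(a given other individual matches on all K)
p_unique = (1.0 - q) ** (pop_size - 1)
print(f"per-individual match probability {q:.2e}, "
      f"uniqueness probability {p_unique:.3f}")
```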

thermodynamic integration plus temperings

Posted in Statistics, Travel, University life on July 30, 2019 by xi'an

Biljana Stojkova and David Campbell recently arXived a paper on the use of parallel simulated tempering for thermodynamic integration, towards producing estimates of marginal likelihoods. Resulting in the rather unwieldy acronym PT-STWNC, for “Parallel Tempering – Simulated Tempering Without Normalizing Constants”. Remember that parallel tempering runs T chains in parallel for T different powers of the likelihood (from 0 to 1), potentially swapping chain values at each iteration. Simulated tempering instead monitors a single chain that explores both the parameter space and the temperature range. Requiring a prior on the temperature. Whose optimal if unrealistic choice was found by Geyer and Thompson (1995) to be proportional to the inverse (and unknown) normalising constant (albeit over a finite set of temperatures). When the new temperature is instead proposed via a random walk, the Metropolis-within-Gibbs update of the temperature τ involves the normalising constants.
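To spell out where the constants enter (my reading, with the conditional at each temperature normalised): writing the joint target as π(θ,τ) ∝ p(τ) L(θ)^τ π(θ)/Z(τ), a random-walk proposal τ → τ' is accepted with probability

$$\alpha(\tau\to\tau') \;=\; 1 \wedge \frac{p(\tau')\,L(\theta)^{\tau'}\,Z(\tau)}{p(\tau)\,L(\theta)^{\tau}\,Z(\tau')}\,,
\qquad Z(\tau)=\int L(\theta)^{\tau}\,\pi(\theta)\,\mathrm{d}\theta\,,$$

and the intractable ratio Z(τ)/Z(τ') is precisely what the authors' choice of prior is designed to cancel.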

“This approach is explored as proof of concept and not in a general sense because the precision of the approximation depends on the quality of the interpolator which in turn will be impacted by smoothness and continuity of the manifold, properties which are difficult to characterize or guarantee given the multi-modal nature of the likelihoods.”

To bypass this issue, the authors pick for their (formal) prior on the temperature τ a prior such that the profile posterior distribution on τ is constant, i.e., the joint distribution at τ and at the mode [of the conditional posterior distribution of the parameter] is constant. This choice makes for a closed-form prior, provided this mode of the tempered posterior can de facto be computed for each value of τ. (However, it is unclear to me why the exact mode would need to be used.) The resulting Metropolis ratio becomes independent of the normalising constants. The final version of the algorithm runs an extra exchange step on both this simulated tempering version and the untempered version, i.e., the original unnormalised posterior. For the marginal likelihood, thermodynamic integration is invoked, following Friel and Pettitt (2008), using simulated tempering samples of (θ,τ) pairs (associated instead with the above constant profile posterior) and simple Riemann integration of the expected log posterior. The paper stresses the gain due to a continuous temperature scale, as it “removes the need for optimal temperature discretization schedule.” The method is applied to the Galaxy (mixture) dataset in order to compare it with the earlier approach of Friel and Pettitt (2008), resulting in (a) a selection of the mixture with five components, (b) much more variability between the estimated marginal likelihoods for different numbers of components than in the earlier approach (where the estimates hardly move with k), and (c) a trimodal distribution on the means [and a unimodal one on the variances]. This example is however hard to interpret, since there are many contradictory interpretations of the various numbers of components in the model. (I recall Radford Neal giving an impromptu talk at an ICMS workshop in Edinburgh in 2001 to warn us we should not use the dataset without a clear(er) understanding of the astrophysics behind it. If I remember well, he excluded all low values of the number of components as inappropriate…. I also remember taking two days off with Peter Green to go climbing Creag Meagaidh, the only authorised climbing spot around during the foot-and-mouth epidemic.) In conclusion, after presumably too light a read (I did not referee the paper!), it remains unclear to me why the combination of the various tempering schemes brings a noticeable improvement over the existing ones. At a given computational cost. As the temperature distribution does not seem to favour spending time in the regions where the target is changing most quickly, the algorithm rather appears as a special form of exchange algorithm.
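As a reminder of the thermodynamic-integration identity the estimate rests on, log Z(1) − log Z(0) = ∫₀¹ E_τ[log L(θ)] dτ, here is a minimal self-contained sketch on a conjugate Gaussian toy model where every tempered posterior can be sampled exactly. It illustrates the identity and the quadrature over the temperature scale, not the PT-STWNC sampler itself, and all settings are invented.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
y, M = 1.5, 2_000                    # single datum, MC draws per rung
taus = np.linspace(0.0, 1.0, 50)     # continuous scale, discretised here

def expected_loglik(tau):
    # tempered posterior N(y; theta, 1)^tau x N(theta; 0, 1) is Gaussian,
    # so each rung can be sampled exactly in this toy model
    var = 1.0 / (1.0 + tau)
    theta = rng.normal(tau * y * var, np.sqrt(var), M)
    return np.mean(norm.logpdf(y, loc=theta, scale=1.0))

path = np.array([expected_loglik(t) for t in taus])
# trapezoidal quadrature of E_tau[log L] over tau in [0, 1]
log_Z = np.sum(np.diff(taus) * (path[:-1] + path[1:]) / 2)
print(f"TI estimate {log_Z:.4f} vs exact {norm.logpdf(y, 0, np.sqrt(2)):.4f}")
```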