Archive for Normans

a journal of the plague, sword, and famine year

Posted in Books, Kids, Mountains, pictures, Running, Travel, University life on January 2, 2023 by xi'an

Read my very first Annie Ernaux piece and it was in English, in The New Yorker! A very short piece on a short visit to her mother. Beautifully written, carrying the bittersweet feeling of the impossibility of reconnecting with earlier times and earlier impressions. I was much less impressed, however, by her Nobel lecture and its use of Rimbaud's "race" (and Galton's and Fisher's…) in such a different context. A constant projection/fixation on her background and class inequalities, supplemented by an ethic of ressentiment, does not sound enticing, all the more because auto-fiction has never appealed to me. (Sharing similar social and geographic [Rouen!] backgrounds sounds like precisely the wrong reason to contemplate reading her books.)

Cooked weekly butternut soups, red cabbage stews, and squid woks, as these are the best seasonal offers at the local market, along with plentiful Norman scallops, not yet impacted by inflation. Also restarted making buckwheat bread, with the side advantage of temporarily heating the home (and a pretext for adding a rice pudding dish to the oven!).

Watched Troll, Wednesday (only on Wednesdays), and Decision to Leave. Apart from the Norge exposure, the first is terrible, esp. when compared with the earlier 2010 tongue-in-cheek Troll Hunter (Trolljegeren). Wednesday is a television series that centres on Wednesday Addams, the deadpan daughter in the Addams family. I found the series hilarious, even though intended for YA audiences. The quality of the episodes varies, those from Tim Burton usually coming out on top, but the main character (Wednesday, in case you are not paying attention!) is fantastic. (The fact that Christina Ricci, the actor who played Wednesday in the 1991 movie, is also involved in the series is a great wink to the earlier installments.) And, final argument, a series where the heroine pogoes to a song by The Cramps cannot be all bad! The Korean Decision to Leave (헤어질 결심) is a masterpiece (except for the ridiculous climbing scenes!) in deception and ambiguity (with a very thin connection to Hitchcock's Vertigo). Far from his supporting role in the stunning Memories of Murder, Park Hae-il is fabulous as a policeman torn between his duty and an inexplicable attraction to the main suspect, brilliantly played by Tang Wei, who sustains the character's ambiguity till the very end.

Hastings 50 years later

Posted in Books, pictures, Statistics, University life on January 9, 2020 by xi'an

What is the exact impact of the Metropolis-Hastings algorithm on the field of Bayesian statistics? And what are the new tools of the trade? What I personally find the most relevant and attractive element in a review on the topic is the current role of this algorithm, rather than its past (his)story, since many such reviews have already appeared and will likely continue to appear. What matters most imho is how much the Metropolis-Hastings algorithm signifies for the community at large, especially beyond academia. Is the availability or unavailability of software like BUGS or Stan a help or a hindrance? Was Hastings' paper the start of the era of approximate inference or the end of exact inference? Are the algorithm's intrinsic features, like Markovianity, a fundamental cause of an eventual extinction, given the ensuing time constraints, the lack of practical convergence guarantees, and the illusion of a fully automated version? Or are emerging solutions like unbiased MCMC and asynchronous algorithms a beacon of hope?
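
For readers who have never met it, here is a minimal sketch of the random-walk Metropolis-Hastings algorithm in Python, for a one-dimensional target known up to a normalising constant; the function names, the Gaussian proposal, and its scale are illustrative choices of mine, not prescriptions from Hastings (1970).

```python
import numpy as np

def rw_metropolis(log_target, x0, n_iter=10_000, scale=1.0, rng=None):
    """Random-walk Metropolis-Hastings for a one-dimensional target.

    log_target: log of the (possibly unnormalised) target density.
    scale: standard deviation of the Gaussian proposal, to be tuned.
    """
    rng = np.random.default_rng() if rng is None else rng
    chain = np.empty(n_iter)
    x, logp = x0, log_target(x0)
    for t in range(n_iter):
        prop = x + scale * rng.standard_normal()
        logp_prop = log_target(prop)
        # symmetric proposal: the Hastings ratio reduces to the target ratio
        if np.log(rng.random()) < logp_prop - logp:
            x, logp = prop, logp_prop
        chain[t] = x
    return chain

# toy illustration: a standard Gaussian target
draws = rw_metropolis(lambda x: -0.5 * x**2, x0=0.0, scale=2.4)
```

With a symmetric proposal, the Hastings ratio reduces to the ratio of target densities, which is the special case already covered by Metropolis et al. (1953).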

In their Biometrika paper, Dunson and Johndrow (2019) recently wrote a celebration of Hastings' 1970 paper in Biometrika, where they cover adaptive Metropolis (Haario et al., 1999; Roberts and Rosenthal, 2005) and the importance of gradient-based versions toward universal algorithms (Roberts and Tweedie, 1995; Neal, 2003), discussing the advantages of HMC over Langevin versions. They also recall the significant step represented by Peter Green's (1995) reversible jump algorithm for multimodal and multidimensional targets, as well as tempering (Miasojedow et al., 2013; Woodard et al., 2009). They further cover intractable likelihood cases within MCMC (rather than ABC), with the use of auxiliary variables (Friel and Pettitt, 2008; Møller et al., 2006) and pseudo-marginal MCMC (Andrieu and Roberts, 2009; Andrieu and Vihola, 2016), a scheme sketched at the end of this discussion. They naturally insist upon the need to handle huge datasets, high-dimensional parameter spaces, and other scalability issues, with links to unadjusted Langevin schemes (Bardenet et al., 2014; Durmus and Moulines, 2017; Welling and Teh, 2011). They likewise discuss recent developments towards parallel MCMC and non-reversible schemes such as PDMPs as highly promising, with a concluding section on the challenges of further automating and robustifying these procedures, if only to reach a wider range of applications. The paper is well-written and contains a wealth of directions and reflections, including those in my introduction above. Here are some mostly disconnected directions I would have liked to see covered, or covered in more depth:

  1. convergence assessment today, e.g. the comparison of various approximation schemes
  2. Rao-Blackwellisation and other post-processing improvements
  3. other approximate inference tools than the pseudo-marginal MCMC
  4. importance of the parameterisation of the problem for convergence
  5. dimension issues and connection with quasi-Monte Carlo
  6. constrained spaces of measure zero, as for instance matrix distributions imposing zeros outside a diagonal band
  7. given the rise of the machine(-learners), are exploratory and intrinsically slow algorithms like MCMC doomed or can both fields feed one another? The section on optimisation could be expanded in that direction
  8. the wasteful nature of the random walk feature of MCMC algorithms, as opposed to more directed moves like HMC and to non-reversible kernels like PDMPs, missing from the gradient-based methods section (and can we once again learn from physicists?)
  9. finer convergence issues and hence inference difficulties with complex MCMC algorithms like Gibbs samplers with incompatible conditionals
  10. use of the Hastings ratio in other algorithms like ABC or EP (in link with the section on generalised Bayes)
  11. adapting Metropolis-Hastings methods for emerging computing tools like GPUs and quantum computers

or possibly covered less, namely data augmentation, put forward even though it is a special case of auxiliary variables, as in slice sampling and in the earlier physics literature. For instance, neither probit nor logistic regression truly requires data augmentation, and both are more toy examples than really challenging applications (a sketch of the standard augmentation scheme for probit regression follows below). The approach of Carlin & Chib (1995) is another illustration, one which has met with recent interest, despite requiring heavy calibration (just like RJMCMC). There is also a somewhat awkward opposition between Gibbs and Hastings, in that I am not convinced that Gibbs sampling will not remain ultimately necessary to handle high-dimensional problems, in the sense that alternative solutions like Langevin, HMC, PDMPs, or…, rely on Euclidean assumptions for the entire vector, while a direct product of Euclidean structures may prove more adequate.
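
To make the data-augmentation point concrete, here is a sketch of the classical Gibbs sampler for probit regression in the spirit of Albert and Chib (1993), under an illustrative flat prior on the coefficients; the variable names and the reliance on scipy's truncnorm are my own choices, not part of the original scheme.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, n_iter=5_000, rng=None):
    """Data-augmentation Gibbs sampler for probit regression, in the spirit
    of Albert & Chib (1993), under an illustrative flat prior on beta."""
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    L = np.linalg.cholesky(XtX_inv)   # for drawing beta given z
    beta = np.zeros(p)
    out = np.empty((n_iter, p))
    for t in range(n_iter):
        mu = X @ beta
        # z_i | beta, y_i ~ N(mu_i, 1) truncated to (0, inf) if y_i = 1,
        # to (-inf, 0) otherwise; bounds below are on the standardised scale
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, random_state=rng)
        # beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}) by Gaussian conjugacy
        beta = XtX_inv @ (X.T @ z) + L @ rng.standard_normal(p)
        out[t] = beta
    return out
```

The completion by the latent Gaussian z is exactly the special case of auxiliary variables mentioned above: it buys conjugacy at the price of an extra simulation step.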
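
And since pseudo-marginal MCMC comes up several times above, here is a bare-bones sketch of a pseudo-marginal Metropolis-Hastings step, assuming access to a (hypothetical) function log_lik_hat returning the log of an unbiased, non-negative estimator of the likelihood, as produced for instance by importance sampling or a particle filter; the random-walk proposal and its scale are again illustrative.

```python
import numpy as np

def pseudo_marginal_mh(log_lik_hat, log_prior, theta0, n_iter=10_000,
                       scale=0.5, rng=None):
    """Pseudo-marginal Metropolis-Hastings: log_lik_hat(theta, rng) must
    return the log of an unbiased, non-negative likelihood estimator."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    log_post = log_prior(theta) + log_lik_hat(theta, rng)
    chain = np.empty((n_iter, theta.size))
    for t in range(n_iter):
        prop = theta + scale * rng.standard_normal(theta.size)
        log_post_prop = log_prior(prop) + log_lik_hat(prop, rng)
        # the estimate at the current value is *recycled*, never refreshed:
        # this is what keeps the extended chain exactly targeting the posterior
        if np.log(rng.random()) < log_post_prop - log_post:
            theta, log_post = prop, log_post_prop
        chain[t] = theta
    return chain
```

The one crucial design point is that the likelihood estimate at the current value is recycled rather than refreshed, which keeps the extended chain exactly targeting the posterior, as shown by Andrieu and Roberts (2009), at the cost of potential sticky patches when the current estimate is unusually large.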

from William’s castle [jatp]

Posted in pictures, Travel on May 16, 2017 by xi'an

Hastings without Metropolis

Posted in Books, Kids, Travel on October 14, 2016 by xi'an

Today marks the 950th anniversary of the battle of Hastings, when Guillaume, Duke of Normandy and pretender to the throne of England following the death of the childless King Edward the Confessor in January 1066, defeated Harold Godwinson, recently elected King and even more recently the victor of a battle against another pretender, Harald Hardrada of Norway. (I had always thought that Guillaume had fought the battle the day his fleet of hundreds of Viking drakkars landed, but he had arrived two weeks earlier and had time to build a fort at Hastings.) One of the consequences of this victory would be significant changes in the structure and vocabulary of the English language. [One may wonder why I am mentioning this anniversary, but being "born and raised" in the heart of Guillaume's Norman duchy prompted some long-lasting interest in the Norman invasion.]
