Archive for the Mountains Category

burial rites [book review]

Posted in Books, Mountains, pictures, Travel on April 23, 2016 by xi'an

This book by Hannah Kent was published in 2013 and was recommended to me by Peter when we were discussing our respective trips to Iceland. This is a novelised re-enactment of a historical murder that took place in North-West Iceland in 1828, when a woman convicted of murdering two men is sent to a remote croft to await her execution, the last one in Iceland. The novel carries many levels at once, from uncovering the true (?) story behind the murders, to the incredibly harsh life of those Icelandic farmers, to the very rigid religious atmosphere imposed by Lutheran pastors, along with a large degree of superstition, to the literacy of even the most remote farmers and the love of sagas and poems, to the savage beauty of the land and of its winter, to the treatment of servants and paupers in those rural communities… It is a beautiful book, if of a definitely dark kind of beauty. The description of the communal life in those crofts, with all members of the household sleeping in the same room, comes out as very outlandish, until I remembered the common room in Brittany where the only privacy was afforded by the lits-clos, box-beds aligned along the walls with a door turning them into as many tiny alcoves… The book also reminded me at times of [the magnificent] An instance of the fingerpost, where another unusual woman stands accused of a murder, with contradictory statements about her, except that there is nothing Christic about Agnes Magnúsdóttir (or Jónsdóttir). The building of her character tiny piece by tiny piece throughout the book is impressive and touching, and so are the other characters at the farm, forced into partaking in this tragedy just as they are forced into hearing the confession of Agnes to the priest while sharing the common room with her. And eventually into accepting her as a whole person rather than a murderess.

“I let my body swing, I let my arms fall. I feel the muscles of my stomach contract and twist. The scythe rises, falls, rises, falls, catches the sun across its blades and flicks the light back into my eyes – a bright wink of God. I watch you, the scythe says, rippling through the green sea, catching the sun, casting it back to me.”

The book truly tells much (too much?) about the daily life of those farmers, as in the above passage, which reminded me of watching my grandfathers cutting hay with their scythes in the summer, with a practiced swing that kept them going for hours, stopping only to sharpen the blade… Obviously, I am not fit to judge the historical accuracy of such details, especially in Iceland, but it rings true, or true enough to merge with the psychological part of the novel. And I wanted to hear how Icelanders reacted to this book (since the author is Australian, if clearly in love with Iceland!): by coincidence, I met an Icelander at the Royal Oak in Oxford earlier this week who told me that the book sounded Icelandic to her, so much so that the English version read as if it had been translated from Icelandic! [I just found this entry about travelling around the sites appearing in the book. As a last note, some sites and blogs have ranked the book among Icelandic or Scandinavian crime novels: this is completely inappropriate.]

Villa Arvedi

Posted in Mountains, pictures, Travel, Wines on March 20, 2016 by xi'an

back from CIRM

Posted in Kids, Mountains, pictures, Running, Statistics, Travel, University life on March 20, 2016 by xi'an

near Col de Sugiton, Parc National des Calanques, Marseille, March 01, 2016

As should be clear from earlier posts, I tremendously enjoyed this past week at CIRM, Marseille, and not only for providing a handy retreat from which I could go running and climbing at least twice a day! The programme (with slides and films soon to be available on the CIRM website) was very well-designed, with mini-courses and talks of appropriate length and frequency. Thanks to Nicolas Chopin (ENSAE ParisTech) and Gilles Celeux (Inria Paris) for putting this programme together so efficiently, and to the local organisers Thibaut Le Gouic (Ecole Centrale de Marseille), Denys Pommeret (Aix-Marseille Université), and Thomas Willer (Aix-Marseille Université) for handling the practical side of inviting and accommodating close to a hundred participants on this rather secluded campus. I hope we can reproduce the experiment a few years from now. Maybe in 2018, if we manage to squeeze it between BayesComp 2018 [ex-MCMski] and ISBA 2018 in Edinburgh.

One of the bonuses of staying at CIRM is indeed that it is fairly isolated and far from the fury of downtown Marseille, which may sound like a drag but actually helps with concentration and interactions. Actually, the whole Aix-Marseille Université campus of Luminy on which CIRM is located is surprisingly quiet: we were there in the very middle of the teaching semester and saw very few students around (although even fewer boars!). It is a bit of a mystery that a campus built in such a beautiful location, with the Mont Puget as its background and the song of cicadas as the only source of “noise”, is not better exploited towards attracting more researchers and students. However, remoteness and the lack of efficient public transportation may explain a lot about this low occupancy of the campus. As may the poor quality of most buildings on the campus, which must be unbearable during the summer months…

In a potential planning for a future Bayesian week at CIRM, I think we could have some sort of after-dinner poster session (with maybe a cash bar operated by some of the invited students, since there is no bar at CIRM or around). Or trail-running under moonlight, trying to avoid tripping over rummaging boars… A sort of Kaggle challenge would be nice but presumably too hard to organise. As a simpler joint activity, we could collectively contribute to some Wikipedia pages related to Bayesian and computational statistics.

approximations of Markov Chains [another garden of forking paths]

Posted in Books, Mountains, pictures, Statistics, University life on March 15, 2016 by xi'an

On the Sétaz cabin ride, Valloire, Dec. 23, 2011

James Johndrow and co-authors from Duke wrote a paper on approximate MCMC that was arXived last August and that I missed. David Dunson‘s talk at MCMski made me aware of it. The paper studies the impact of replacing a valid kernel with a close approximation. This is a central issue for many uses of MCMC in complex models, as exemplified by the large number of talks on that topic at MCMski.

“All of our bounds improve with the MCMC sample path length at the expected rate in t.”

A major constraint in the paper is Doeblin’s condition, which implies uniform geometric ergodicity. Not only is it a constraint on the Markov kernel, but it is also one on the Markov operator in that it may prove impossible to… prove. The second constraint is that the approximate Markov kernel is close enough to the original, which sounds reasonable. Even though one can always worry that the total variation norm is too weak a norm to mean much. For instance, I presume with some confidence that this does not prevent the approximate Markov kernel from failing to be ergodic, e.g., from being reducible, not absolutely continuous with respect to the target, null recurrent, or transient. Actually, the assumption is stronger, in that there exists a collection of approximations for all small enough values ε of the total variation distance. (Small enough meaning that ε is much smaller than α, the complement to 1 of the one-step distance between the Markov kernel and the target. With poor kernels, the approximation must thus be very good.) This is less realistic than assuming the availability of a single approximation associated with an existing but undetermined distance ε. (For instance, the three examples of Section 3 in the paper show the existence of approximations achieving a certain distance ε, without providing a constructive determination of such approximations.) Under those assumptions, the average of the sequence of Markov moves according to the approximate kernel converges to the target in total variation (and in expectation for bounded functions), with sharp bounds on those distances. I am still a bit worried by the absence of conditions for the approximation to be ergodic.
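
To get a concrete feel for the kind of perturbation at play, here is a minimal sketch on a finite state space (my own toy illustration, not the paper’s general setting): a kernel with all entries positive, hence satisfying Doeblin’s condition, is mixed with an arbitrary second kernel so that each row moves by at most ε in total variation, and the two stationary distributions are then compared.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a stochastic matrix, via the left eigenvector."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

rng = np.random.default_rng(0)
n, eps = 5, 0.01
P = rng.dirichlet(np.ones(n), size=n)   # exact kernel; all entries positive,
                                        # so Doeblin's condition holds
Q = rng.dirichlet(np.ones(n), size=n)   # arbitrary perturbing kernel
Pe = (1.0 - eps) * P + eps * Q          # row-wise TV(P, Pe) ≤ eps by mixture

pi, pi_eps = stationary(P), stationary(Pe)
print("TV(pi, pi_eps) =", 0.5 * np.abs(pi - pi_eps).sum())  # small when eps is
```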

“…for relatively short path lengths, there should exist a range of values for which aMCMC offers better performance in the compminimax sense.”

The paper also brings computational cost into the picture, by introducing the notion of compminimax error, the smallest (total variation) distance among all approximations at a given computational budget. Quite an interesting, innovative, and relevant notion, which may however end up being too formal for practical use. And which does not include the time required to construct and calibrate the approximations.
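
In caricature, the definition could be rendered as follows (a hypothetical toy of mine, with made-up names, costs, and errors): given a family of approximate kernels, each with a per-step cost and a total variation error, the compminimax error at a given budget is the smallest error among the affordable approximations.

```python
# Hypothetical (cost per step, TV error) profiles for a family of
# approximations; all names and numbers are invented for illustration.
approximations = {
    "coarse subsampling":  (1.0, 0.10),
    "Gaussian proxy":      (2.5, 0.04),
    "fine discretisation": (8.0, 0.01),
}

def compminimax(budget):
    """Smallest TV error achievable within the computational budget."""
    feasible = {name: err for name, (cost, err) in approximations.items()
                if cost <= budget}
    return min(feasible.items(), key=lambda kv: kv[1]) if feasible else None

for budget in (1.0, 3.0, 10.0):
    print(budget, "->", compminimax(budget))
```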

a ghastly ghost

Posted in Books, Kids, Mountains on March 13, 2016 by xi'an

My daughter sort of dragged me to watch The Revenant as it had just come out in French cinemas, and I reluctantly agreed as I had read about magnificent winter and mountain sceneries, shot in an unusually wide format with natural light. And indeed the landscape and background of the entire movie are magnificent, mostly shot in the Canadian Rockies, around Kananaskis and Canmore, which is on the way to Banff. (Plus a bit in the Squamish rain forest.) The story is however quite a disappointment, as it piles up one suspension of disbelief after another. This is a tale of survival (as I presume everyone knows!) but so implausible as to cancel any appreciation of the film. It may be that the director Iñárritu is more interested in a sort of new-age symbolism than in realism, since there are many oneiric passages with floating characters and falling meteors, desecrated churches and pyramids of bones, while the soundtrack often brings in surreal sounds, but the impossible survival of Hugh Glass made me focus more and more on the scenery… While the true Hugh Glass did manage to survive on his own, fixing his broken leg, crawling to a river, and making a raft that brought him to a fort downstream, [warning, potential spoilers ahead!] the central character in the movie takes it to a fantasy level as he escapes hypothermia while swimming in freezing rapids, drowning while wearing a brand new bearskin, toxocariasis while eating raw liver, bullets when fleeing from both Arikara Indians and French (from France, Louisiana, or Québec???) trappers, a 30 meter fall from a cliff with not enough snow at the bottom to make a dent in, subzero temperatures while sleeping inside a horse carcass [and getting out of it next morning when it should be frozen solid], massive festering bone-deep wounds, and the deadly Midwestern winter… Not to mention the ability to make fire out of nothing in the worst possible weather conditions, or to fire arrows killing men on the spot, or to keep a never-ending reserve of bullets. And while I am at it, the ability to understand others: I had trouble even with the French-speaking characters, despite their rather modern French accent!

Sugiton at dawn

Posted in Mountains, pictures, Running, Travel, University life on March 5, 2016 by xi'an

at CIRM [#3]

Posted in Kids, Mountains, pictures, Running, Statistics, Travel, University life on March 4, 2016 by xi'an

Simon Barthelmé gave his mini-course on EP, with loads of details on the implementation of the method, focussing on the EP-ABC and MCMC-EP versions today. Leaving open the difficulty of assessing to which limit EP is converging. But mentioning the potential for asynchronous EP (on which I would like to hear more). Ironically using several times a logistic regression example, if not on the Pima Indians benchmark! He also talked about approximate EP solutions that relate to consensus MCMC. With a connection to Mark Beaumont’s talk at NIPS [at the same time as mine!] on the comparison with ABC. While we saw several talks on EP during this week, I am still agnostic about the potential of the approach. It certainly produces a fast proxy to the true posterior and hence can be exploited ad nauseam in inference methods based on pseudo-models like indirect inference, in conjunction with other quick-and-dirty approximations when available. As in ABC, it would be most useful to know how far from the (ideal) posterior distribution the approximation stands. Machine learning approaches presumably allow for an evaluation of the predictive performances, but less so for the modelling accuracy, even with new sampling steps. [But I know nothing, I know!]
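
For readers unfamiliar with the mechanics, the elementary operation EP iterates over sites is a Gaussian moment-matching step. Here is a minimal one-site sketch (my own toy, not Simon Barthelmé’s EP-ABC algorithm): a Gaussian is matched to the tilted distribution N(θ; m, v)·Φ(θ), with the closed-form moments checked by numerical integration.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

m, v = 0.0, 1.0   # current Gaussian approximation of the posterior

# Closed-form moments of the tilted density ∝ N(theta; m, v) * Phi(theta)
# (see, e.g., Rasmussen & Williams, Gaussian Processes for ML, §3.9).
z = m / np.sqrt(1.0 + v)
ratio = norm.pdf(z) / norm.cdf(z)
m_new = m + v * ratio / np.sqrt(1.0 + v)
v_new = v - v**2 * ratio * (z + ratio) / (1.0 + v)

# Numerical check by direct integration of the same tilted density.
tilted = lambda t: norm.pdf(t, m, np.sqrt(v)) * norm.cdf(t)
Z, _ = quad(tilted, -10, 10)
m_num, _ = quad(lambda t: t * tilted(t) / Z, -10, 10)
v_num, _ = quad(lambda t: (t - m_num) ** 2 * tilted(t) / Z, -10, 10)
print(m_new, m_num)   # both ≈ 0.5642
print(v_new, v_num)   # both ≈ 0.6817
```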

Dennis Prangle presented some on-going research on high-dimension [data] ABC, raising the question of the true meaning of dimension in ABC algorithms. Or of sample size. Because the inference relies on the event d(s(y),s(y’))≤ξ or on the likelihood l(θ|x), both of which are one-dimensional. Mentioning Iain Murray’s talk at NIPS [that I also missed]. Re-expressing as well the perspective that ABC can be seen as a missing or estimated normalising constant problem, as in Bornn et al. (2015), which I discussed earlier. The central idea is to use SMC to simulate a particle cloud evolving as the target tolerance ξ decreases, which supposes a latent variable structure lurking in the background.
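
As a reference point for that central idea, here is a minimal sequential ABC sketch in the spirit of ABC population Monte Carlo (Beaumont et al., 2009) on a toy Normal-mean model; the model, summary statistic, kernel scale, and tolerance schedule are all made up for illustration, and this is not the specific method of the talk.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
y_obs = rng.normal(2.0, 1.0, size=50)   # observed data, N(2, 1)
s_obs = y_obs.mean()                    # summary statistic s(y)
prior = norm(0.0, 5.0)                  # N(0, 5²) prior on theta

def distance(theta):
    """d(s(y'), s(y)) for a pseudo-dataset simulated at theta."""
    return abs(rng.normal(theta, 1.0, size=50).mean() - s_obs)

def abc_sample(propose, xi, N):
    """Accept N proposals whose simulated summary falls within tolerance xi."""
    kept = []
    while len(kept) < N:
        t = propose()
        if distance(t) <= xi:
            kept.append(t)
    return np.array(kept)

N, schedule = 300, [1.0, 0.5, 0.25, 0.1]
theta = abc_sample(lambda: prior.rvs(random_state=rng), schedule[0], N)
w = np.full(N, 1.0 / N)
for xi in schedule[1:]:
    tau = np.sqrt(2.0 * np.cov(theta, aweights=w))  # kernel scale heuristic
    new = abc_sample(lambda: rng.normal(rng.choice(theta, p=w), tau), xi, N)
    # importance weights: prior density over the mixture proposal density
    w_new = prior.pdf(new) / np.array(
        [np.dot(w, norm.pdf(t, theta, tau)) for t in new])
    w, theta = w_new / w_new.sum(), new
print("ABC posterior mean for theta:", np.average(theta, weights=w))
```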

Judith Rousseau gave her talk on non-parametric mixtures and the possibility of learning parametrically about the component weights. Starting with a rather “magic” result by Allman et al. (2009) that, with three repeated observations per individual, all terms in a mixture are identifiable. Maybe related to the simpler fact that mixtures of Bernoullis are not identifiable while mixtures of Binomials are, even when n=2. As “shown” in this plot made for X validated. Actually truly related, because Allman et al. (2009) prove identifiability through a finite dimensional model. (I am surprised I missed this most interesting paper!) With the side condition that a mixture of p components made of r Bernoulli products is identifiable when r ≥ 2⌈log₂ p⌉ + 1, where log₂ is the base-2 logarithm and ⌈x⌉ the upper rounding. I also find most relevant this distinction between the weights and the remainder of the mixture, as weights behave quite differently, hardly being parameters in a sense.
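
The Bernoulli/Binomial contrast is easy to check numerically. In the sketch below (my own illustration, with arbitrary parameter values), two different two-component mixtures sharing the same mean success probability produce identical Bernoulli (n=1) marginals yet distinct Binomial (n=2) marginals.

```python
import numpy as np
from scipy.stats import binom

# Two different (weight, p1, p2) mixtures with the same mean success
# probability 0.5; values are arbitrary, chosen only for the demonstration.
mixtures = [(0.5, 0.2, 0.8), (0.5, 0.3, 0.7)]

for n in (1, 2):
    print(f"n = {n}:")
    for w, p1, p2 in mixtures:
        k = np.arange(n + 1)
        pmf = w * binom.pmf(k, n, p1) + (1 - w) * binom.pmf(k, n, p2)
        print("  mixture", (w, p1, p2), "->", np.round(pmf, 4))
# n = 1: both mixtures give [0.5, 0.5] -- not identifiable
# n = 2: [0.34, 0.32, 0.34] vs [0.29, 0.42, 0.29] -- distinguishable
```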
