Last month, I ordered several books on amazon, taking advantage of my amazon associate gains, and some of them were suggested by amazon algorithms based on my recent history. As I had recently read books involving thieves (like Giant Thief, or Broken Blade and the subsequent books), a lot of the suggested titles involved thieves or thievery-related names… I picked Den of Thieves mainly for its cover, as I did not know the author and the story sounded rather ordinary. As I read on, the story turned out even more ordinary than expected, reading more like an extended Dungeons & Dragons scenario than a genuine novel! The theme of a bright young thief emerging from the gritty underworld of a closed city has been exploited over and over in fantasy literature, the best (?) example being The Lies of Locke Lamora. (Whose third volume, The Republic of Thieves, is in my bag for Reykjavik!) This time, the thief does not appear particularly bright, except at times when he starts philosophising with extremely dangerous enemies!, and the way he eventually overcomes insanely unbalanced odds is just too much. Most characters in the novel are not particularly engaging and far too caricatural, from the terribly evil sorcerer cavorting with she-demons to the rigid knight sticking to an idealistic vision of the world where "honour" and the code of chivalry are the solution to all problems. It is not even in the slightest sarcastic or tongue-in-cheek, as in the many novels by David Eddings, and the main characters are mostly humourless. I wonder why the book was not better edited, as the weaknesses are very easy to spot! A good example where amazon software failed to make a worthy recommendation!
Today, I am leaving Paris for an 8-day stay in Iceland! This is quite exciting, for many reasons: first, I missed AISTATS 2013 last year as I was still in the hospital; second, I am giving a short tutorial on ABC methods, which will be more like a long (two-hour) talk; third, it gives me the fantastic opportunity to visit Iceland for a few days, a place that was at the top of my wish list of countries to visit. The weather forecast is rather bleak, but I am carrying enough waterproof layers to withstand a wee bit of snow and rain… The conference proper starts next Tuesday, April 22, with the tutorials taking place next Friday, April 25, hence leaving me three completely free days for exploring the area near Reykjavik.
Daniel Simpson gave a seminar at CREST yesterday on his recently arXived paper, “Penalising model component complexity: A principled, practical approach to constructing priors”, written with Thiago Martins, Andrea Riebler, Håvard Rue, and Sigrunn Sørbye. A paper that he should also have presented in Banff last month, had he not lost his passport in København airport… I have already commented at length on this exciting paper, hopefully to become a discussion paper in a top journal!, so I am just pointing out two things that came to my mind during the energetic talk Dan delivered to our group. The first thing is that those penalised complexity (PC) priors of theirs rely on some choices in the ordering of the relevance, complexity, nuisance level, &tc. of the parameters, just like reference priors. While Dan already wrote a paper on Russian roulette, there is also a Russian doll principle at work behind (or within) PC priors: each shell of the Russian doll corresponds to a further level of complexity whose order needs to be decided by the modeller… Not very realistic in a hierarchical model with several types of parameters that have only a local meaning.
My second point is that the construction of those “politically correct” (PC) priors reflects another Russian doll structure, namely one of embedded models, and hence would and should lead to a natural multiple testing methodology. Except that Dan rejected this notion during his talk, being opposed to testing per se. (A good topic for one of my summer projects, then, if nothing more!)
Randal Douc, Florian Maire, and Jimmy Olsson recently arXived a paper on the use of Markov chain Monte Carlo methods for the sampling of mixture models, which contains the recourse to Carlin and Chib (1995) pseudo-priors to simulate from a mixture distribution (and not from the posterior distribution associated with a mixture sampling model). As reported earlier, I was in the thesis defence of Florian Maire, and this approach had already puzzled me at the time. In short, a mixture structure

π(z) = ω₁ π₁(z) + … + ω_k π_k(z)

gives rise to as many auxiliary variables as there are components, minus one: namely, if a simulation z is generated from a given component i of the mixture, one can create pseudo-simulations u from all the other components, using pseudo-priors à la Carlin and Chib. A Gibbs sampler based on this augmented state space can then be implemented: (a) simulate a new component index m given (z,u); (b) simulate a new value of (z,u) given m. One version (MCC) of the algorithm simulates z given m from the proper conditional posterior by a Metropolis step, while another one (FCC) only simulates the u’s. The paper shows that MCC has a smaller asymptotic variance than FCC. I however fail to understand why a Carlin and Chib step is necessary in a mixture context: it seems (from the introduction) that the motivation is that a regular Gibbs sampler [simulating z by a Metropolis-Hastings proposal, then m] has difficulties moving between components when those components are well separated. This is correct but slightly moot, as each component of the mixture can be simulated separately and in advance in z, which leads to a natural construction of (a) the pseudo-priors used in the paper, (b) approximations to the weights of the mixture, and (c) a global mixture independent proposal, which can be used in an independent Metropolis-Hastings step that seems to me to alleviate the need to simulate the component index m. Both examples used in the paper, a toy two-component two-dimensional Gaussian mixture and another toy two-component one-dimensional Gaussian mixture observed with noise (and in absolute value), do not help in perceiving the definitive need for this Carlin and Chib version, especially when considering the construction of the pseudo-priors.
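To make steps (a) and (b) above concrete, here is a minimal Python sketch of a Carlin and Chib Gibbs sampler targeting a toy two-component one-dimensional Gaussian mixture. All numerical values (weights, means, pseudo-prior parameters) are my own illustrative choices, not taken from the paper, and the pseudo-priors are deliberately imperfect Gaussian approximations of the components so that the index update is non-trivial; with perfect pseudo-priors the index m would be sampled directly from the weights, echoing the point made above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy two-component Gaussian mixture pi(z) = 0.3 N(-3,1) + 0.7 N(4,1.5^2)
# (illustrative values only, not the paper's exact example)
w = np.array([0.3, 0.7])
mu, sd = np.array([-3.0, 4.0]), np.array([1.0, 1.5])

def log_f(i, x):  # log density of component i, up to an additive constant
    return -0.5 * ((x - mu[i]) / sd[i]) ** 2 - np.log(sd[i])

# Pseudo-priors q_j: deliberately rough Gaussian approximations of the components
q_mu, q_sd = np.array([-2.5, 3.5]), np.array([1.5, 2.0])

def log_q(i, x):
    return -0.5 * ((x - q_mu[i]) / q_sd[i]) ** 2 - np.log(q_sd[i])

n_iter = 50_000
m = 0
x = np.array([rng.normal(q_mu[0], q_sd[0]), rng.normal(q_mu[1], q_sd[1])])
zs = np.empty(n_iter)
for t in range(n_iter):
    # (a) update the component index m given the augmented state x = (z, u):
    #     p(m | x) proportional to w_m f_m(x_m) * prod_{j != m} q_j(x_j)
    lp = np.array([np.log(w[i]) + log_f(i, x[i])
                   + sum(log_q(j, x[j]) for j in range(2) if j != i)
                   for i in range(2)])
    p = np.exp(lp - lp.max())
    p /= p.sum()
    m = rng.choice(2, p=p)
    # (b) given m, refresh z = x_m from component m (exact here, no Metropolis
    #     step needed for Gaussians) and the pseudo-simulations u from the q_j's
    x[m] = rng.normal(mu[m], sd[m])
    for j in range(2):
        if j != m:
            x[j] = rng.normal(q_mu[j], q_sd[j])
    zs[t] = x[m]

# zs approximates draws from the mixture: its mean should be near
# 0.3*(-3) + 0.7*4 = 1.9, and about 70% of the draws near the second component
```

The joint of (m, x) is w_m f_m(x_m) ∏_{j≠m} q_j(x_j), whose (m, x_m)-marginal is exactly w_m f_m(x_m), so the retained draws z = x_m follow the mixture regardless of the quality of the pseudo-priors; that quality only affects how freely m moves between components.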
For those interested in visiting Toulouse at the end of the summer for a French-speaking conference in Probability and Statistics, the Modélisation Aléatoire et Statistique (MAS) group of SMAI (the French counterpart of SIAM) is holding its conference there this year. The main theme is “High-dimensional phenomena”, but a large panel of French research in Probability and Statistics will be represented. The program contains in particular:
- six plenary lectures and three talks by recent winners of the “Prix Jacques Neveu” award [including Pierre Jacob!],
- 22 parallel sessions, ranging from probability theory to applied statistics and machine learning,
- a poster session for students.
More details are available on the conference website (in French). (The organizing committee is made of Aurélien Garivier, Sébastien Gerchinovitz, Aldéric Joulin, Clément Pellegrini, and Laurent Risser.)