Archive for Mark Kac

Kick-Kac teleportation

Posted in Books, pictures, Statistics on January 23, 2022 by xi'an

Randal Douc, Alain Durmus, Aurélien Enfroy, and Jimmy Olsson have arXived their Kick-Kac teleportation paper, which was presented by Randal at CIRM last semester. It is based on Kac's theorem, which states that, for a Markov chain with invariant distribution π, under (π) stationarity, the expected sum of a function over a tour between two successive visits to an accessible set C recovers the expectation of that function under π, up to the factor π(C). Which can be used for approximating π(h) if π(C) is known (or well-estimated). Jim Hobert and I exploited this theorem in our 2004 perfect sampling paper. The current paper contains a novel proof of the theorem under weaker conditions. (Note that the only condition on C is that it is accessible, rather than a small set. Which becomes necessary for geometric ergodicity, see condition (A4).)
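For concreteness, the representation I have in mind is the following (my formulation, not a quote from the paper), writing σ_C for the first return time to C and π_C for π restricted to C:

```latex
% Kac's representation under stationarity (my formulation):
% \sigma_C = \inf\{k \ge 1 : X_k \in C\}, \qquad X_0 \sim \pi_C := \pi(\cdot \mid C)
\pi(h) \;=\; \pi(C)\,\mathbb{E}_{\pi_C}\!\left[\sum_{k=0}^{\sigma_C-1} h(X_k)\right]
```

Taking h ≡ 1 returns Kac's original identity, E_{π_C}[σ_C] = 1/π(C).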

What they define as the Kick-Kac teleportation (KKT) process is the collection of trajectories between two successive visits to C. Their memoryless version requires perfect simulations from π restricted to the set C. With a natural extension based on a Markov kernel keeping this restriction of π stationary. And a further generalisation allowing for lighter tails, which also contains the 2005 paper by Brockwell and Kadane as a special case.
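Here is a minimal sketch of how I read the memoryless version (the function names and the toy target are mine, not the paper's): run a π-invariant base kernel P and, on each visit to C, teleport through an independent draw from π restricted to C; the kernel extension would replace that draw by one step of a kernel Q leaving the restriction invariant.

```python
import numpy as np

def kkt_memoryless(n_steps, x0, P_step, in_C, sample_pi_C, rng):
    """Sketch of the memoryless Kick-Kac teleportation process.

    P_step(x, rng)   -- one step of a pi-invariant base kernel P
    in_C(x)          -- indicator of the accessible set C
    sample_pi_C(rng) -- perfect draw from pi restricted to C
    """
    chain = np.empty(n_steps + 1)
    chain[0] = x = x0
    for t in range(1, n_steps + 1):
        x = P_step(x, rng)        # move with the base kernel
        if in_C(x):               # on entering C, teleport:
            x = sample_pi_C(rng)  # fresh draw from pi restricted to C
        chain[t] = x
    return chain

# toy illustration: standard normal target, C = [-0.1, 0.1],
# random-walk Metropolis as the base kernel P
rng = np.random.default_rng(1)

def rwm_step(x, rng):
    y = x + 0.5 * rng.standard_normal()
    return y if np.log(rng.uniform()) < -0.5 * (y**2 - x**2) else x

def sample_pi_C(rng):
    # N(0,1) restricted to C, by (inefficient but exact) rejection
    while True:
        x = rng.standard_normal()
        if abs(x) <= 0.1:
            return x

chain = kkt_memoryless(10_000, 0.0, rwm_step,
                       lambda x: abs(x) <= 0.1, sample_pi_C, rng)
```

Since the refresh only replaces points already in C by a draw from π restricted to C, the composite move keeps π invariant, which is the point of the construction.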

The ability of generating from a different kernel Q at each visit to C allows for different dynamics (as in other composite kernels). In their illustrations, the authors use lowest-density regions for C, which is rather surprising to me. Except that it allows for a better connection between modes of the target π: the superior performance of the KKT algorithms against the considered alternatives apparently depends on the ability of the kernel Q to explore other modes with sufficient frequency.

reis naar Amsterdam [trip to Amsterdam]

Posted in Books, Kids, pictures, Running, Statistics, Travel, University life, Wines on April 16, 2015 by xi'an

On Monday, I went to Amsterdam to give a seminar at the University of Amsterdam, in the department of psychology. And to visit Eric-Jan Wagenmakers and his group there. And I had a fantastic time! I talked about our mixture proposal for Bayesian testing and model choice without getting hostile or adverse reactions from the audience, quite the opposite, as we later discussed this new notion for several hours in the café across the street. I also had the opportunity to meet with Peter Grünwald [who authored a book on the minimum description length principle], who pointed out a minor inconsistency of the common parameter approach, namely that the Jeffreys prior on the first model does not have to coincide with the Jeffreys prior on the second model. (The Jeffreys prior for the mixture being unavailable.) He also wondered about a more conservative property of the approach, compared with the Bayes factor, in the sense that the non-null parameter could get closer to the null parameter while still being identifiable.

Among the many persons I met in the department, Maarten Marsman talked to me about his thesis research, Plausible values in statistical inference, which involved handling the Ising model [a non-sparse Ising model with O(p²) parameters] by an auxiliary representation due to Mark Kac, getting rid of the normalising (partition) constant along the way. (Warning, some approximations involved!) And he showed me a simple probit example of the Gibbs sampler getting stuck as the sample size n grows. Simply because the uniform conditional distribution on the parameter concentrates faster (in 1/n) than the posterior (in 1/√n). This does not come as a complete surprise as data augmentation operates in an n-dimensional space. Hence it requires more time to get around [see the toy sketch at the end of this post].

As a side remark [still worth printing!], Maarten dedicated his thesis “To my favourite random variables, Siem en Fem, and to my normalizing constant, Esther”, from which I hope you can spot the influence of at least two of my book dedications! As I left Amsterdam on Tuesday, I had time for an enjoyable dinner with E-J's group, an equally enjoyable early morning run [with perfect skies for sunrise pictures!], and more discussions in the department. Including a presentation of the new (delicious?!) Bayesian software developed there, JASP, which aims at non-specialists [i.e., researchers unable to code in R, BUGS, or, God forbid!, STAN]. And about the consequences of mixture testing in some psychological experiments. Once again, a fantastic time discussing Bayesian statistics and their applications, with a group of dedicated and enthusiastic Bayesians!
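Coming back to Maarten's sticky-Gibbs example, here is a toy reconstruction of the phenomenon as I understood it (entirely my own construction, not his code): with y_i = 1(z_i ≤ θ) and z_i ~ U(0,1), the full conditional of θ is uniform on an interval of width O(1/n), while the posterior on θ has spread O(1/√n), so the chain crosses an O(1/√n) range in O(1/n) steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_uniform_probit(y, n_iter, theta0, rng):
    """Toy Gibbs sampler for y_i = 1(z_i <= theta), z_i ~ U(0,1),
    flat prior on theta.

    The conditional of theta given z is uniform on
    (max_{y_i=1} z_i, min_{y_i=0} z_i), an interval of width O(1/n),
    whereas the posterior of theta has spread O(1/sqrt(n)):
    mixing therefore degrades as n grows.
    """
    n = len(y)
    theta = theta0
    trace = np.empty(n_iter)
    for t in range(n_iter):
        # z_i | theta, y_i : uniform on (0, theta) if y_i = 1, else (theta, 1)
        u = rng.uniform(size=n)
        z = np.where(y == 1, u * theta, theta + u * (1 - theta))
        # theta | z, y : uniform on the interval compatible with all y_i
        lo = z[y == 1].max(initial=0.0)
        hi = z[y == 0].min(initial=1.0)
        theta = rng.uniform(lo, hi)
        trace[t] = theta
    return trace

# larger n => visibly stickier trace (lag-1 autocorrelation closer to 1)
for n in (100, 10_000):
    y = (rng.uniform(size=n) <= 0.3).astype(int)
    trace = gibbs_uniform_probit(y, 2_000, 0.5, rng)
    print(n, np.corrcoef(trace[:-1], trace[1:])[0, 1])
```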
