## shortened iterations [code golf]

Posted in Kids, pictures, Statistics, Travel on October 29, 2019 by xi'an

A lazy morning code-golf exercise towards finding the sequence of integers that starts with an arbitrary value n and gets updated in blocks of four as

$a_{4k+1} = a_{4k} \cdot(4k+1)\\ a_{4k+2} = a_{4k+1} + (4k+2)\\ a_{4k+3} = a_{4k+2} - (4k+3)\\ a_{4k+4} = a_{4k+3} / (4k+4)$

until the last term is no longer an integer. While the update is easily implemented with the appropriate stopping rule, a simple congruence analysis shows that, depending on n, the sequence is 4, 8, or 12 values long when

$n\not\equiv 1\ (\mathrm{mod}\ 4)\\ n\equiv 1\ (\mathrm{mod}\ 4)\ \text{and}\ 3(n-1)+4\not\equiv 0\ (\mathrm{mod}\ 32)\\ 3(n-1)+4\equiv 0\ (\mathrm{mod}\ 32)$

respectively. But sadly the more interesting fixed length solution

"~"=rep # redefine ~ as rep, so that 32~4 means rep(32,4)
b=(scan()-1)*c(32~4,8,40~4,1,9~3)/32+c(1,1,3,0~3,6,-c(8,1,9,-71,17)/8)
b[!b%%1] #keep integers only


ends up being longer than the more basic one:

a=scan()
while(!a[T]%%1)a=c(a,d<-a[T]*T,d+T+1,e<-d-1,e/((T<-T+4)-1))
a[-T]


where Robin's suggestion of using T rather than length is very cool, as T has a double meaning: first TRUE (and hence 1), then the length of a…
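For readers less fluent in R golf, the same iteration can be written plainly in Python (the function name `seq4` is mine); exact rational division via `fractions.Fraction` makes the stopping test unambiguous:

```python
from fractions import Fraction

def seq4(n):
    """Run the four-step update from a starting integer n, stopping as
    soon as the division step produces a non-integer; returns the
    integer terms of the sequence (including n itself)."""
    a = [n]
    k = 0
    while True:
        a.append(a[-1] * (4 * k + 1))   # a_{4k+1} = a_{4k} * (4k+1)
        a.append(a[-1] + (4 * k + 2))   # a_{4k+2} = a_{4k+1} + (4k+2)
        a.append(a[-1] - (4 * k + 3))   # a_{4k+3} = a_{4k+2} - (4k+3)
        q = Fraction(a[-1], 4 * k + 4)  # a_{4k+4} = a_{4k+3} / (4k+4)
        if q.denominator != 1:          # not an integer: stop here
            return a
        a.append(int(q))
        k += 1
```

The three congruence classes above are then easy to check: seq4(2) has 4 terms (since 2 ≢ 1 mod 4), seq4(5) has 8 terms (5 ≡ 1 mod 4 but 3·4+4 = 16 ≢ 0 mod 32), and seq4(21) has 12 terms (3·20+4 = 64 ≡ 0 mod 32).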

## Nature tidbits

Posted in Books, Statistics, University life on September 18, 2018 by xi'an

“Louis Chen was technically meant to retire in 2005. The mathematician at the National University of Singapore was turning 65, the university’s official retirement age. But he was only five years into his tenure as director of the university’s new Institute for Mathematical Sciences, and the university wanted him to stay on. So he remained for seven more years, stepping down in 2012. Over the next 18 months, he travelled and had knee surgery, before returning in summer 2014 to teach graduate courses for a year.”

And [yet] another piece on the biases of AIs, reproducing earlier papers discussed here. Since one obvious reason for those biases is that the learning corpus is not representative of the whole population, maybe survey sampling should become a compulsory course in machine learning degrees. And yet another piece on why protectionism is (also) bad for the environment.

## un lagarto en las Cataratas del Iguazú [guest jatp]

Posted in Kids, pictures, Travel on January 10, 2017 by xi'an

## Statistics may be harmful to your freedom

Posted in Statistics on January 29, 2013 by xi'an

On Wednesday, I was reading the freshly delivered Significance and especially the several papers therein about statisticians being indicted, fired, or otherwise sued for doing statistics. I mentioned a while ago the possible interpretations of the L'Aquila verdict (where I do not know whether any of the six scientists is a statistician), but did not know about Graciela Bevacqua's hardship in the Argentinian National Statistics Institute, nor about David Nutt being sacked from the Advisory Council on the Misuse of Drugs, nor about Peter Wilmshurst being sued by NMT (a US medical device corporation) for expressing concern about a clinical trial they conducted. What is most frightening in those stories is that those persons ended up facing those hardships without any support from their respective institutions (quite the opposite in two cases!). And then, on the way home, I further read that the former head of the Greek National Statistics Institute (Elstat) was fired and indicted for over-estimating the Greek deficit, after resisting official pressure to lower it… Tough job!

## AMOR at 5000ft in a water tank…

Posted in Mountains, pictures, Statistics, University life on November 22, 2012 by xi'an

On Monday, I attended the thesis defence of Rémi Bardenet in Orsay as a member (referee) of his thesis committee. While this was a thesis in computer science, which took place in the Linear Accelerator Lab in Orsay, it was clearly rooted in computational statistics, hence justifying my presence on the committee. The justification (!) for the splashy headline of this post is that Rémi's work was motivated by the Pierre-Auger experiment on ultra-high-energy cosmic rays, where particles are detected through a network of 1600 water tanks spread over the Argentinian Pampa Amarilla on an area the size of Rhode Island (where I am incidentally going next week).

The part of Rémi's thesis presented during the defence concentrated on his AMOR algorithm, arXived in a paper written with Olivier Cappé and Gersende Fort. AMOR stands for adaptive Metropolis online relabelling and combines adaptive MCMC techniques with relabelling strategies to fight label switching (e.g., in mixtures). I have been interested in mixtures for eons (starting in 1987 in Ottawa with applying Titterington, Smith, and Makov to chest radiographs) and in label switching for ages (starting at the COMPSTAT conference in Bristol in 1998). Rémi's approach to the label switching problem follows the relabelling path, namely a projection of the original parameter space into a smaller subspace (that is also a quotient space) to avoid permutation invariance and lack of identifiability. (In the survey I wrote with Kate Lee, Jean-Michel Marin and Kerrie Mengersen, we suggest using the mode as a pivot to determine which permutation to use on the components of the mixture.) The paper suggests using a Euclidean distance to a mean $\mu_t$ determined adaptively, with a quadratic form $\Sigma_t$ also determined on the go, minimising $(P\theta-\mu_t)^\mathsf{T}\Sigma_t(P\theta-\mu_t)$ over all permutations $P$ at each step of the algorithm. The intuition behind the method is that the posterior over the restricted space should look like a roughly elliptically symmetric distribution, or at least like a unimodal distribution, rather than borrowing bits and pieces from different modes. While I appreciate the technical tour de force represented by the proof of convergence of the AMOR algorithm, I remain somewhat sceptical about the approach and voiced the following objections during the defence: first, the assumption that the posterior becomes unimodal under an appropriate restriction is not necessarily realistic. Secondary modes often pop up with real data (as in the counter-example we used in our paper with Alessandra Iacobucci and Jean-Michel Marin).
Next, the whole apparatus of fighting multiple modes and non-identifiability, i.e. of fighting label switching, is aimed at falling back on posterior means as Bayes estimators. As stressed in our JASA paper with Gilles Celeux and Merrilee Hurn, there is no reason for doing so and several reasons for not doing so:

• it breaks down under model misspecification, i.e., when the number of components is not correct
• it does not improve the speed of convergence but, on the contrary, restricts the space visited by the Markov chain
• it may fall victim to the fatal attraction of secondary modes by fitting too small an ellipse around one of those modes
• it ultimately depends on the parameterisation of the model
• there is no reason for using posterior means in mixture problems; posterior modes or cluster centres can be used instead

I am therefore much more in favour of producing a posterior sample that exhibits as much label switching as possible (since the true posterior is completely symmetric in this respect). Post-processing the resulting sample can then be done by using off-the-shelf clustering in the component space, derived from the point process representation used by Matthew Stephens in his thesis and subsequent papers. It also allows for a direct estimation of the number of components.
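As a side note, the relabelling step AMOR builds upon, minimising the quadratic form $(P\theta-\mu_t)^\mathsf{T}\Sigma_t(P\theta-\mu_t)$ over permutations, can be sketched as follows for a small number of components (function and variable names are mine; the adaptive updates of $\mu_t$ and $\Sigma_t$ are omitted):

```python
import itertools
import numpy as np

def relabel(theta, mu, Sigma):
    """Permute the rows of theta (one row per mixture component) so as
    to minimise (P theta - mu)^T Sigma (P theta - mu), by exhaustive
    search over all K! permutations of the K components."""
    K = theta.shape[0]
    best, best_val = theta, np.inf
    for perm in itertools.permutations(range(K)):
        cand = theta[list(perm)]          # candidate relabelling P theta
        diff = cand.ravel() - mu          # P theta - mu, flattened
        val = float(diff @ Sigma @ diff)  # quadratic form
        if val < best_val:
            best, best_val = cand, val
    return best
```

The exhaustive search over K! permutations is only viable for mixtures with a handful of components, which is also the regime where relabelling strategies are typically applied.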

In any case, this was a defence worth attending, one that led me to think afresh about the label switching problem, with directions worth exploring next month while Kate Lee is visiting from Auckland. Rémi Bardenet is now headed for a postdoc in Oxford, a perfect location to discuss label switching further and to engage in new computational statistics research!