Archive for geometric mean

[maximin] geometric climbing

Posted in Books, R on October 5, 2021 by xi'an

A puzzle from The Riddler this week, returning to the ranking of climbing competitors in Tokyo and asking for the maximin score, that is, the worst possible absolute score still guaranteeing victory (the score of a climber being the product of her ranks in the three events). In the case of eight competitors, a random search for the maximin over 10⁶ draws leads to a value of 48=2x6x4, for a distribution of ranks as follows

[1,]    1    8    8
[2,]    2    6    4
[3,]    3    4    5
[4,]    4    2    6
[5,]    5    5    2
[6,]    6    3    3
[7,]    7    7    1
[8,]    8    1    7

while over seven competitors (the case for the men this year, since one of the Mawem brothers got hurt during qualification), the value is 35=1x5x7, for a distribution of ranks as follows

[1,]    1    7    5
[2,]    2    3    6
[3,]    3    4    3
[4,]    4    5    2
[5,]    5    2    4
[6,]    6    1    7
[7,]    7    6    1

exhibiting a tie in the former case (and no first position for the winners!).
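
For the record, here is a minimal R sketch of such a random search, under my reading of the rules (the score of a climber being the product of her three event ranks, each event ranking drawn as a uniform random permutation):

maximin=function(n=8,T=1e6){
  best=0
  for (t in 1:T){
    R=cbind(sample(n),sample(n),sample(n)) #random ranks in the three events
    s=apply(R,1,prod)                      #product score of each climber
    if (min(s)>best){                      #worst winning score so far
      best=min(s);sol=R[order(s),]}
  }
  list(maximin=best,ranks=sol)}
maximin(8) #returned the value 48 over the 10⁶ draws above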

geometric climbing

Posted in Mountains, pictures on August 5, 2021 by xi'an

In the qualifying round for the Tokyo Olympics, the French climber Mickaël Mawem ended up first, while his brother Bassa was the fastest on the speed climb (as the 2018 and 2019 World Champion) but ruptured a tendon while lead climbing and had to be flown back to Paris for an operation. The New York Times inappropriately and condescendingly qualified this first position as “unexpected”, when Mickaël is the 2019 European Champion in bouldering… The NYT piles on with its belittling by stating that “Anouck Jaubert of France used a second-place finish in speed to squeak into the final”… (The other French female climber did not make it, despite being one of the first women to reach the 9b level.)

I remain puzzled by the whole concept of mixing the three sports together, as well as by the scoring system, based on a geometric average of the three rankings, which means in particular that the eight finalists will suffer less than in the qualifying round from a poor performance in one of the three climbs (as with Adam Ondra in the speed climb). In addition, there is an obscure advantage accruing to Adam Ondra from Bassa Mawem cancelling his participation: according to the NYT, “Ondra will receive a bye and an automatic slot in the speed semifinals”, meaning “that a likely eighth-place finish in speed — a ranking number that can be hard to overcome in the multiplication of the combined format — will now be no worse than fourth for Ondra”. (The sentence on the strong impact of the geometric mean is incorrect in that it has less impact than the arithmetic mean!)
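
A quick R check of that last remark (with my own toy ranks, not actual Tokyo results): moving from fourth to eighth place in a single event inflates the geometric average of the three rankings much less than the arithmetic one

rnks=c(1,2,8);rnks0=c(1,2,4)               #one poor (8th) versus middling (4th) rank
exp(mean(log(rnks)))/exp(mean(log(rnks0))) #geometric ratio: 2^(1/3) ≈ 1.26
mean(rnks)/mean(rnks0)                     #arithmetic ratio: 11/7 ≈ 1.57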

meandering

Posted in Books, Kids, R, Statistics on March 12, 2021 by xi'an

A bit of a misunderstanding from Randall Munroe and then some: the function F returns a triplet, hence G should return a triplet as well. Even if the limit does return three identical values. And he should have also included the (infamous) harmonic mean! And the subtext (behind the picture) mentions random forest statistics, using every mean one can think of and dropping those that are doing worse, while here all solutions return the same value, hence do not directly discriminate between the averages (and there is no objective function to create the nodes in the trees, &tc.).

Here is a test R code including the harmonic mean:

xkcd=function(x)c(mean(x),exp(mean(log(x))),median(x),1/mean(1/x)) #arithmetic, geometric, median, harmonic
xxxkcd=function(x,N=10)ifelse(rep(N==1,4),xkcd(x),xxxkcd(xkcd(x),N-1)) #iterate the four averages N times
xxxkcd(rexp(11))
[1] 1.018197 1.018197 1.018197 1.018197

workshop a Padova

Posted in pictures, R, Running, Statistics, Travel, University life on March 22, 2013 by xi'an

Needless to say, it is with great pleasure that I am back in beautiful Padova for the workshop Recent Advances in Statistical Inference: Theory and Case Studies, organised by Laura Ventura and Walter Racugno. Especially considering that this is one of the last places where I met George Casella, in June 2010. We will have plenty of opportunities to remember him, with so many of his friends here. (Tomorrow we will run around Prato della Valle in his memory.)

The workshop is of a “traditional Bayesian facture”, by which I mean one I enjoy very much: long talks with predetermined discussants and discussion from the floor. This makes for fewer talks (although we had eight today!) but also for more exciting sessions when the talks are broad and innovative. This was the case today (not including my own talk, of course) and I enjoyed the sessions a lot.

Jim Berger gave the first talk, on “global” objective priors, starting from the desideratum of building a “general” reference prior when one does not want to separate parameters of interest from nuisance parameters and when one already has marginal reference priors on those parameters. This setting was actually addressed in Berger and Sun (AoS, 2008) and Jim presented some of the solutions therein: while I could not really see a strong incentive for using an arithmetic average of those priors, as it does not make much sense with improper priors, I definitely liked the notion of geometric averages, which sidestep the problem of normalising constants. (Open questions remain as well, e.g. whether one improper prior could dwarf another in the geometric average, tail-wise for instance. Gauri Datta mentioned in his discussion that the geometric average is a specific Kullback-Leibler optimum.)
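
To sketch the point (my reading of the argument, not necessarily the Berger and Sun formulation): with weights w_i summing to one, the geometric average is

\pi_G(\theta)\,\propto\,\prod_{i=1}^k \pi_i(\theta)^{w_i}

so that rescaling any improper \pi_i by an arbitrary constant c_i merely multiplies \pi_G by \prod_i c_i^{w_i}, a constant absorbed by the proportionality sign. Moreover, \pi_G minimises \sum_i w_i\,\text{KL}(\pi\,\|\,\pi_i) in \pi, presumably the Kullback-Leibler optimum Gauri Datta had in mind.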

In his discussion of Tom Severini’s paper on integrated likelihood (which really stands at the margin of Bayesian inference), Brunero Liseo proposed a new use of ABC to approximate the likelihood function itself (whereas regular ABC relies on a likelihood approximation to target the posterior), a bit à la Chib. I cannot tell how precise this approximation is, but it is rather exciting!

Laura Ventura presented four of her current papers on the use of higher-order asymptotics in approximating (Bayesian) posteriors, following the JASA 2012 paper by Ventura, Cabras and Racugno. (The same issue featured a paper by Gill and Casella, coincidentally.) She showed the improvement brought by moving from first-order (normal) to third-order (non-normal) approximations. This stands in a sense at the antipodes of ABC; e.g., I would like to see the requirements on the likelihood function for coming up with a manageable Laplace approximation. She also mentioned a resolution of the Jeffreys-Lindley paradox via the Pereira et al. (2008) evidence, which computes a sort of Bayesian p-value by assessing the posterior probability of the posterior density being lower than its value at the null. I had missed or forgotten about this idea, but I wonder about caveats like the impact of parameterisation, the connection with the testing problem, the calibration of the quantity, the extension to non-nested models, &tc. (Note that Ventura et al. developed an R package called hoa for higher-order asymptotics.)
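
For illustration, here is a toy Monte Carlo rendering of this evidence in R (my own sketch for a Normal mean with a conjugate N(0,1) prior, not taken from Ventura et al. or Pereira et al.):

x=rnorm(25,mean=.4)                 #toy sample, true mean .4
m=sum(x)/26;s=1/sqrt(26)            #posterior N(m,s²) under a N(0,1) prior
theta=rnorm(1e5,m,s)                #posterior draws
mean(dnorm(theta,m,s)<dnorm(0,m,s)) #prob. the posterior density falls below its value at the null theta=0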

David Dunson presented some very recent work on compressed sensing, which summed up for me as the idea of massively projecting (huge vectors of) regressors into convex combinations of much smaller dimension, using random matrices for the projections. This point remained somewhat unclear to me, and to the first discussant, Michael Wiper, as well, who stressed that a completely random selection of those matrices could produce “mostly rubbish” unless a learning mechanism was instated. The second discussant, Peter Müller, made the same point about this completely random search in a huge-dimensional space, while suggesting that keeping track of the survival frequency of covariates could improve the efficiency of the method.
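
My naive rendering of the projection step in R (a sketch under my own assumptions, not Dunson's actual algorithm):

n=100;p=5000;k=20                 #many more regressors than observations
X=matrix(rnorm(n*p),n,p)          #huge regressor matrix
Phi=matrix(runif(p*k),p,k)        #random nonnegative weights
Phi=sweep(Phi,2,colSums(Phi),"/") #normalised columns, i.e. convex combinations
Z=X%*%Phi                         #n x k compressed regressors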
