Archive for optimisation

A knapsack riddle [#2]?

Posted in Kids, R, Statistics on February 17, 2017 by xi'an


Still about this allocation riddle of the past week, and still with my confusion about the phrasing of the puzzle: when looking at a probabilistic interpretation of the game, rather than at a given adversary's y, the problem turns into searching for the maximum of

\mathbb{E}[L(x,Y)]=\sum_{i=1}^{10} i\{P(Y_i<x_i)-P(Y_i>x_i)\}

where the Y_i's are Binomial B(100,p_i). Given those p_i's, this function of x is available in closed form and can thus be maximised by a simulated annealing procedure. (Note that, since P(Y_i&lt;x_i)-P(Y_i&gt;x_i)=2P(Y_i&lt;x_i)+P(Y_i=x_i)-1, the code below works with the equivalent term 2P(Y_i&lt;x_i)+P(Y_i=x_i), which only shifts the utility by a constant.) The procedure is coded as

#expected utility of allocation x against Y_i~B(100,p_i),
#up to an additive constant
utility=function(x,p){
  ute=2*pbinom(x[1]-1,100,prob=p[1])+
    dbinom(x[1],100,p[1])
  for (i in 2:10)
    ute=ute+2*i*pbinom(x[i]-1,100,prob=p[i])+
      i*dbinom(x[i],100,p[i])
  return(ute)}
#basic term in utility: gain in the i-th term when
#box i moves from x[i]-1 to x[i] units
baz=function(i,x,p){
  return(i*dbinom(x[i],100,p[i])+
    i*dbinom(x[i]-1,100,p[i]))}
#relies on a given or estimated p
x=rmultinom(n=1,size=100,prob=p)
maxloz=loss=0
newloss=losref=utility(x,p)
#random search by simulated annealing
T=1e3
Te=1e2
baza=rep(0,10)
t=1
while ((t<T)||(newloss>loss)){
  loss=newloss
#pick a non-empty box to take one unit from
  i=sample(1:10,1,prob=(10:1)*(x>0))
#moving all other counters by one
  xp=x+1;xp[i]=x[i]
#corresponding utility change for each candidate box
  for (j in 1:10) baza[j]=baz(j,xp,p)
#soft annealing move: favour larger gains as t grows
  proz=exp(log(t)*(baza-baza[i])/Te)
  j=sample(1:10,1,prob=proz)
  if (i!=j){x[i]=x[i]-1;x[j]=x[j]+1}
  newloss=loss+baza[j]-baza[i]
#keep track of the best allocation visited so far
  if (newloss>maxloz){
    maxloz=newloss;argz=x}
  t=t+1
#restart from a fresh allocation when ending below the reference
  if ((t>T-10)&(newloss<losref)){
    t=1;loss=0
    x=rmultinom(n=1,size=100,prob=p)
    newloss=losref=utility(x,p)}}

which seems to work, albeit not always returning the same utility. For instance,

> p=cy/sum(cy)
> utility(argz,p)
[1] 78.02
> utility(cy,p)
[1] 57.89

or

> p=sy/sum(sy)
> utility(argz,p)
[1] 82.04
> utility(sy,p)
[1] 57.78
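
Since cy and sy are the adversary allocations from the previous post and are not defined above, a reader wanting to rerun the procedure could plug in a stand-in adversary distribution; the choice below is purely illustrative and not the riddle's actual adversary.

#hypothetical adversary distribution, for illustration only:
#more mass on the low-value boxes
p=(10:1)/sum(10:1)
x=rmultinom(n=1,size=100,prob=p)
newloss=losref=utility(x,p)
#...then run the annealing loop above and check utility(argz,p)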

Of course, this does not answer the question as intended and reworking the code to that purpose is not worth the time!

a knapsack riddle?

Posted in Books, pictures, R, Statistics, Travel on February 13, 2017 by xi'an


The [then current now past] riddle of the week is a sort of multi-armed bandit optimisation. Of sorts. Or rather a generalised knapsack problem. The question is about optimising the allocation of 100 indistinguishable units to 10 distinct boxes against a similarly endowed adversary, when the loss function is

L(x,y)=\mathbb{I}(x_1>y_1)-\mathbb{I}(x_1<y_1)+\cdots+10\{\mathbb{I}(x_{10}>y_{10})-\mathbb{I}(x_{10}<y_{10})\}

and the distribution q of the adversary is unknown. As usual (!), the phrasing of the riddle is somewhat ambiguous but I am under the impression that the game is played sequentially, hence that one can learn about the distribution of the adversary, at least when assuming this adversary keeps the same distribution q at all times.
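
A direct R transcription of this loss (an illustrative helper of my own, not part of the riddle's statement) makes the scoring explicit: box i is worth i points to whichever player places the larger count in it, and nothing in case of a tie.

#loss L(x,y) between two allocations of 100 units over 10 boxes
loss=function(x,y) sum((1:10)*sign(x-y))
#e.g., a top-heavy allocation against a uniform adversary
x=c(0,0,0,0,0,10,20,20,25,25)
y=rep(10,10)
loss(x,y)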

optimality stands in the eye of the riddler

Posted in Books, Kids, pictures, Statistics on March 22, 2016 by xi'an

When looking at US elections on FiveThirtyEight, I came across this riddle:

Two players go (…) into separate booths (…) and a random number between zero and one appears on a screen (…) chosen from a standard uniform distribution. They can choose to keep that first number, or to (…) get a second random number, which they must keep (…) The prize (…) is awarded to the player who kept the higher number (…) Which number is the optimal cutoff for players to discard their first number and choose another? [The Riddler, Mar 4, 2016]

While the solution is now available, I wonder at the wording of this riddle, where “optimality” is not spelled out. Unless I missed some key entry, as often happens with puzzles… Assuming both players use the same “optimal” cut-off C to decide whether to keep or drop the original uniform, the probability that one does better than the other is exactly ½, since both outcomes are iid. When considering instead the expected value of the number kept by a player, a simple integral shows that this value is

½(1-C²+C),

which is maximal for C=½. If one considers instead the median of the number Z kept by a player, a bit more computation leads to med(Z)=1/(2C) when C²>½ and med(Z)=(½+C)/(1+C) when C²<½, which is maximal for C²=½, i.e., C=1/√2.
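
As a quick sanity check of both formulas, here is a simulation sketch of my own (not part of the original post):

#number kept under cutoff C: keep the first uniform draw if it
#exceeds C, otherwise keep a second independent draw
keep=function(C,n=1e6){
  u=runif(n);v=runif(n)
  ifelse(u>C,u,v)}
C=.5
mean(keep(C));.5*(1-C^2+C)   #both close to .625
C=1/sqrt(2)
median(keep(C));1/(2*C)      #both close to .707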

“…using the golden ratio gives the best chance to win the gold bullion!” [The Riddler, Mar 4, 2016]

Incidentally (or not), the solution on 538 is also difficult to understand in that it starts with the event that the first draw equals C, which is an event of probability zero. However, when trying to optimise the choice of C for one player, given that the other player has a known cutoff D, I found that the only case where C=D coincides with the solution proposed on the Riddler, namely ½(√5-1). To check whether or not this derivation was correct, I also plotted the theoretical (right) versus the empirical (left) versions of the winning probability:

There is no difference between the two. But the diagonal is exactly at the .5 level, as expected, with an interesting second curve at the .5 probability level. These two level sets happen to meet at the “golden number” solution, ½(√5-1), which comes as a confirmation of my earlier remark. [Another question connected with this riddle would be to figure out the value of D used by the other player from a sequence of games.]
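
To make this best-response argument reproducible, here is a small numerical sketch of my own (not the 538 derivation), based on the elementary fact that the number kept under cutoff C has density C below C and 1+C above C:

#cdf and density of the number kept under a given cutoff
Fkeep=function(z,C) C*z+pmax(0,z-C)
fkeep=function(z,C) ifelse(z<C,C,1+C)
#probability that a player with cutoff C beats a player with cutoff D
win=function(C,D)
  integrate(function(z) fkeep(z,C)*Fkeep(z,D),0,1)$value
#best response to a known adversary cutoff D
bestC=function(D)
  optimize(function(C) win(C,D),c(0,1),maximum=TRUE)$maximum
#fixed point C=bestC(C), to compare with the golden number ½(√5-1)
uniroot(function(C) bestC(C)-C,c(.1,.9))$root
(sqrt(5)-1)/2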

Approximate Maximum Likelihood Estimation

Posted in Books, Mountains, pictures, Statistics, Travel, University life on September 21, 2015 by xi'an

Bertl et al. arXived last July a paper on a maximum likelihood estimator based on an alternative to ABC techniques. And to indirect inference. (One of the authors in et al. is Andreas Futschik whom I visited last year in Linz.) Paper that I only spotted when gathering references for a reading list on ABC… The method is related to the “original ABC paper” of Diggle and Gratton (1984) which, parallel to Rubin (1984), contains in retrospect the idea of ABC methods. The starting point is stochastic approximation, namely the optimisation of a function of a parameter θ when written as an expectation of a random variable Y, E[Y|θ], as in the Kiefer-Wolfowitz algorithm. However, in the case of the likelihood function, there is rarely an unbiased estimator and the authors propose instead to use a kernel density estimator of the density of the summary statistic. This means that, at each iteration of the Kiefer-Wolfowitz algorithm, two sets of observations and hence of summary statistics are simulated and two kernel density estimates derived, both to be applied to the observed summary. The sequences underlying the Kiefer-Wolfowitz algorithm are taken from (the excellent optimisation book of) Spall (2003). Along with on-the-go adaptation and convergence test.
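
For intuition, here is a minimal toy sketch of such a recursion on a normal mean, with the likelihood replaced by a kernel density estimate of the summary statistic; gains, bandwidth, and sample sizes are arbitrary choices of mine, not the authors' settings.

set.seed(1)
yobs=rnorm(10,mean=2)        #observed data with unknown mean
sobs=mean(yobs)              #summary statistic
#kernel density estimate of the summary density at the observed summary
kdens=function(theta,m=500,h=.3){
  sims=replicate(m,mean(rnorm(10,mean=theta)))
  mean(dnorm((sobs-sims)/h))/h}
theta=0
for (k in 1:100){
  ck=.5/k^(1/3);ak=.3/k      #perturbation and gain sequences
#finite-difference gradient of the estimated log-likelihood
  grad=(log(kdens(theta+ck))-log(kdens(theta-ck)))/(2*ck)
  theta=theta+ak*grad}
theta                        #ends up near mean(yobs)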

The theoretical difficulty with this extension is however that the kernel density estimator is not unbiased and thus that, rigorously speaking, the validation of the Kiefer-Wolfowitz algorithm does not apply here. On the practical side, the need for multiple starting points and multiple simulations of pseudo-samples may induce a considerable computational overhead. Especially if a bootstrap is used to evaluate the precision of the MLE approximation. Besides normal and M/G/1 queue examples, the authors illustrate the approach on a population genetic dataset of Borneo and Sumatra orang-utans. With 5 parameters and 28 summary statistics. Which thus means using a kernel density estimator in dimension 28, a rather perilous adventure..!

future of computational statistics

Posted in Books, pictures, R, Statistics, University life on September 29, 2014 by xi'an

I am currently preparing a survey paper on the present state of computational statistics, reflecting on the massive evolution of the field since my early Monte Carlo simulations on an Apple //e, which would take a few days to return a curve of approximate expected squared error losses… It seems to me that MCMC is attracting more attention nowadays than in the past decade, both because of methodological advances linked with better theoretical tools, as for instance in the handling of stochastic processes, and because of new forays in accelerated computing via parallel and cloud computing. The breadth and quality of talks at MCMski IV are testimony to this. A second trend, not unrelated to the first one, is the development of new techniques and the rehabilitation of older ones to handle complex models by approximations, witness ABC, Expectation-Propagation, variational Bayes, &tc. With a corollary being a healthy questioning of the models themselves. As illustrated for instance in Chris Holmes’ talk last week. While those simplifications are inevitable when faced with hardly imaginable levels of complexity, I still remain confident about the “inevitability” of turning statistics into an “optimize+penalize” tunnel vision… A third characteristic is the emergence of new languages and meta-languages intended to handle complexity both of problems and of solutions towards a wider audience of users. STAN obviously comes to mind. And JAGS. But it may be that another scale of language is now required…

If you have any suggestions of novel directions in computational statistics, or of dead ends instead, I would be most interested in hearing them! So please do comment or send emails to my gmail address bayesianstatistics

variational particle approximations

Posted in Mountains, pictures, Statistics, Travel, University life on February 28, 2014 by xi'an

In the plane to Montréal, today, I read this paper by Kulkarni, Saeedi and Gershman, which will be presented at AISTATS. The main idea is to create a mix between particle Monte Carlo and a kind of quasi-Monte Carlo technique (qMC is not mentioned in the paper), using variational inference (and coordinate ascent) to optimise the location and weight of the particles. It is however restricted to cases with finite support (as a product of N latent variables), as in an HMM with a finite state space. There is also something I do not get in the HMM case, which is that the variational approximation to the filtering is contracted sequentially. This means that, at each time step, the K highest-weight current particles are selected while the past remains unchanged. Is this due to the Markovian nature of the hidden model? (Blame oxygen deprivation, altitude dizziness or travelling stress, then!) I also fail to understand how, for filtering, “at each time step, the algorithm selects the K continuations (new variable assignments of the current particle set) that maximize the variational free energy.” Because the weight function to be optimised (eqn (11)) seems to freeze the whole past path of particles… I presume I will find an opportunity while in Reykjavik to discuss those issues with the authors.

Foundations of Statistical Algorithms [book review]

Posted in Books, Linux, R, Statistics, University life on February 28, 2014 by xi'an

There is computational statistics and there is statistical computing. And then there is statistical algorithmics. Not the same thing, by far. This 2014 book by Weihs, Mersmann and Ligges, from TU Dortmund, the latter also being a member of the R Core team, stands at one end of this wide spectrum of techniques required by modern statistical analysis. In short, it provides the necessary skills to construct statistical algorithms and hence to contribute to statistical computing. And I wish I had the luxury to teach from Foundations of Statistical Algorithms to my graduate students, if only we could afford an extra yearly course…

“Our aim is to enable the reader (…) to quickly understand the main ideas of modern numerical algorithms [rather] than having to memorize the current, and soon to be outdated, set of popular algorithms from computational statistics.” (p. 1)

The book is built around the above aim, first presenting the reasons why computers can produce answers different from what we want, using least squares as a means to check for (in)stability, second establishing the ground for Monte Carlo methods by discussing (pseudo-)random generation, including MCMC algorithms, third moving to bootstrap and resampling techniques, and concluding with parallelisation and scalability. The text is highly structured, with frequent summaries, a division of chapters all the way down to sub-sub-sub-sections, an R implementation section in each chapter, and a few exercises.