Archive for permutations

Le Monde puzzle [#1051]

Posted in Books, Kids, R on May 18, 2018 by xi'an

A combinatoric Le Monde mathematical puzzle of limited size:
When the only allowed move is to switch two balls from adjacent boxes, what is the minimal number of moves to return all balls in the above picture to their respective boxes? Same question with six boxes and 12 balls.

The question is rather interesting to code as I decided to use recursion (as usual!) but wanted to save time by storing the number of steps needed by any configuration already visited to reach the ordered arrangement. This meant updating an external vector within the recursive function for each new configuration met. Julien Stoehr helped by providing the following code, a simplification of a standard R idiom,

v.assign <- function (i,value,...) {
  # fetch the object named i from the global environment,
  # modify the entries selected by ... and write it back
  temp <- get(i, pos = 1)
  temp[...] <- value
  assign(i, temp, pos = 1)}

which assigns a value to one or several entries of the external vector whose name is passed as i. I thus used this trick in the following R code, where cosz is a vector of size 5¹⁰, much larger than the fewer than 10! values I actually need but easier to index, at least while n≤5.
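As a quick toy check (my own, not in the original post), v.assign does update a vector living in the global environment from inside a function call, which is what the recursion below relies on:

toy <- rep(-1, 10)
v.assign("toy", value=5, 3)  # same effect as toy[3] <- 5, but applied to the global copy
toy[3]
# [1] 5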

n=5;tn=2*n
baz=n^(0:(tn-1))
cosz=rep(-1,n^tn) #-1 means the configuration has not been visited yet
swee <- function(balz){
  indz <- sum((balz-1)*baz) #base-n encoding of the configuration
  if (cosz[indz]==-1){ 
  if (min(diff(balz))==0){ #ordered
     v.assign("cosz",indz,value=1)}else{
       val <- n^tn
       for (i in 2:n) #box i
       for (j in (2*i-1):(2*i)) #the two ball slots of box i
       for (k in (2*i-3):(2*i-2)){ #the two ball slots of box i-1
         calz <- balz
         calz[k] <- balz[j];calz[j] <- balz[k] #swap one ball of box i with one ball of box i-1
         if (sum((calz-1)*baz)-indz>0) #only moves increasing the encoding towards the ordered state
           val <- min(val,1+swee(calz))}
     v.assign("cosz",indz,value=val)
  }}
 return(cosz[indz])}

which returns 2 for n=2, 6 for n=3, 11 for n=4, and 15 for n=5. For the case n=6, I needed a much better encoding of the permutations of interest, which is akin to ranking all words in a dictionary built from the letters (1,1,…,6,6). After some thinking (!) and searching, I came up with a new version, defining

parclass=rep(2,n) #multiplicities: each of the n labels appears twice
rankum=function(confg){
    #lexicographic rank of the word confg among all distinct
    #rearrangements of letters with multiplicities parclass
    n=length(confg);permdex=1
    for (i in 1:(n-1)){
      x=confg[i]
      if (x>1){
        for (j in 1:(x-1)){
            if(parclass[j]>0){
                #count the words with the same prefix but a smaller letter at position i
                parclass[j]=parclass[j]-1
                permdex=permdex+ritpermz(n-i,parclass)
                parclass[j]=parclass[j]+1}}}
        parclass[x]=parclass[x]-1}
    return(permdex)}

ritpermz=function(n,parclass){
    #number of distinct words of length n with letter multiplicities parclass
    return(factorial(n)/prod(factorial(parclass)))}

for finding the index of a given permutation, between 1 and (2n)!/(2!)ⁿ, and then calling the initial swee() with this modified indexing of the configurations. The R code was still running when I posted this entry… and, six days later, it returned the answer of 23.
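As a small sanity check of the ranking functions (my own toy example, not part of the original post): with n=2 colours, the multiset {1,1,2,2} has 4!/(2!2!)=6 distinct arrangements, and rankum() maps the ordered and reversed words to the first and last ranks:

n=2
parclass=rep(2,n)
ritpermz(2*n,parclass) #6 distinct words
rankum(c(1,1,2,2)) #1, the first word in lexicographic order
rankum(c(2,2,1,1)) #6, the last one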

Le Monde puzzle [#1043]

Posted in Books, Kids on March 5, 2018 by xi'an

An arithmetic Le Monde mathematical puzzle:

A number is “noble” if all its digits are different and if it is equal to the average of all numbers created by permuting its digits. What are the noble numbers?

There is no need for simulation when plain enumeration works. After failing to install the R package permutations, I installed the R package permute, which works, although (a) the function allPerms does not operate directly on a vector of characters or numbers but only on its length, returning permutations of the indices:

> allPerms(c("a","r","h"))
     [,1] [,2] [,3]
[1,]    1    3    2
[2,]    2    1    3
[3,]    2    3    1
[4,]    3    1    2
[5,]    3    2    1

and (b) as seen above the output does not contain “all” permutations, since it misses the identity permutation. This ends up being fine for solving this puzzle: if a number equals the average of all permutations of its digits, itself included, it also equals the average of the remaining ones. Using a bit of digit-character manipulation

library(permute) #for allPerms()
findzol=function(N=2){
  for (u in 1:(10^N-1)){
    digz=strsplit(as.character(u),"")[[1]]
    if (length(digz)<N) #pad with leading zeros up to N digits
      digz=c(rep("0",N-length(digz)),digz)
    if (length(unique(digz))==N){
      permz=apply(matrix(digz[allPerms(1:N)],
             ncol=N),2,as.numeric)
      if (mean(permz%*%10^{(N-1):0})==u) print(u)}}}

I found solutions for N=3

> findzol(3)
[1] 370
[1] 407
[1] 481
[1] 518
[1] 592
[1] 629

and none for N=4,5,6. Le Monde gives solutions for N=9, which is not achievable by my code!
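As a quick check (mine, not in the original post), the first of these is indeed noble: the six permutations of its digits average back to the number itself, with 037 and 073 read as 37 and 73:

mean(c(370,307,730,703,37,73))
# [1] 370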

Le Monde puzzle [#977]

Posted in Books, Kids, pictures, Statistics, Travel, University life on October 3, 2016 by xi'an

A mild arithmetic Le Monde mathematical puzzle:

Find the optimal permutation of {1,2,…,15} minimising the maximum sum over all triples of consecutive numbers, including the wrap-around triples made of the 14th, 15th, and 1st numbers, and of the 15th, 1st, and 2nd numbers.

I once again opted for a lazy solution, not even considering simulated annealing!,

parme=sample(15)
#the columns of the 3-row matrix below run through all 15 (wrap-around)
#triples of consecutive values of the permutation parme
max(apply(matrix(c(parme,parme[-1],parme[1],
  parme[-(1:2)],parme[1:2]),nrow=3),2,sum))

and got the minimal value of 26 for the permutation

14 9 2 15 7 1 11 10 4 12 8 5 13 6 3
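The snippet above scores a single random draw; the value 26 came out of repeating the draw many times and keeping the best, along the following lines (my own reconstruction of the missing loop, with arbitrary names and number of iterations):

best=Inf
for (t in 1:1e5){
  parme=sample(15)
  crit=max(apply(matrix(c(parme,parme[-1],parme[1],
    parme[-(1:2)],parme[1:2]),nrow=3),2,sum))
  if (crit<best){
    best=crit;bestperm=parme}
  }
best;bestperm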

Le Monde gave a solution with value 25, though, which is

11 9 7 5 13 8 2 10 14 6 1 12 15 4 3

but there is a genuine mistake in the solution!! This anyway shows that brute force may sometimes fail. (Which sounds like a positive conclusion to failing to find the proper solution! But trying with a simple simulated annealing version did not produce any 25 either…)

relabelling mixtures (#2)

Posted in Statistics, Travel, University life on February 5, 2015 by xi'an

Following the previous post, I went and had a (long) look at Puolamäki and Kaski’s paper. I must acknowledge that, despite several runs through the paper, I still have trouble with the approach… From what I understand, the authors use a Bernoulli mixture pseudo-model to reallocate the observations to components. That is, given an MCMC output with simulated allocation variables (a.k.a. hidden or latent variables), they create a (T×K)×n matrix of binary component indicators, e.g., for a three-component mixture,

0 1 0 0 1 0…
1 0 0 0 0 0…
0 0 1 1 0 1…
0 1 0 0 1 1…

and estimate, for each of the n observations, a probability of belonging to component j, according to the (pseudo-)likelihood

\prod_{r=1}^R \sum_{j=1}^K \prod_{i=1}^n \beta_{i,j}^{z_{i,r}}(1-\beta_{i,j})^{1-z_{i,r}}
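For concreteness, this is how I would compute the log of this pseudo-likelihood in R; the code is purely my own illustration, with z stored as an R×n binary matrix of allocation indicators (one row per row of the display above) and beta as an n×K matrix of membership probabilities:

bern.pseudo.loglik=function(z,beta){
  R=nrow(z);K=ncol(beta)
  logl=0
  for (r in 1:R){
    #for each component j, product over the n observations of the Bernoulli terms
    terms=sapply(1:K,function(j)
      prod(beta[,j]^z[r,]*(1-beta[,j])^(1-z[r,])))
    logl=logl+log(sum(terms))}
  return(logl)}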

It took me a few days, between morning runs and those wee hours when I cannot get back to sleep (!), to make some sense of this Bernoulli modelling. The allocation vectors are pooled to estimate, for each observation, the probability of being “in” component j. However, the data do not seem appropriate for such a model: they are the outcome of an MCMC simulation, hence do not originate from that Bernoulli mixture, and they consist of blocks of highly correlated rows [which sum up to one]. The Bernoulli likelihood above also defines a new model, with many more parameters than the original mixture model. And I fail to see why perfect, partial or nonexistent label switching [in the MCMC sequence] would not impact the estimation of the Bernoulli mixture, nor why an argument based on a fixed parameter value (Theorem 3) extends to an MCMC output where the parameters themselves are subject to some degree of label switching. Bemused, I remain…

importance sampling schemes for evidence approximation [revised]

Posted in Statistics, University life on November 18, 2014 by xi'an

After a rather intense period of new simulations and versions, Jeong Eun (Kate) Lee and I have now resubmitted our paper on (some) importance sampling schemes for evidence approximation in mixture models to Bayesian Analysis. There is no fundamental change in the new version, rather a more detailed description of what those importance schemes mean in practice. The original idea in the paper is to improve upon the Rao-Blackwellisation solution proposed by Berkhof et al. (2003) and later by Marin et al. (2005) to avoid the impact of label switching on Chib’s formula. The Rao-Blackwellisation consists in averaging over all permutations of the labels, while the improvement relies on the elimination of useless permutations, namely those that produce a negligible conditional density in Chib’s (candidate’s) formula. While the improvement implies truncating the overall sum and hence induces a potential bias (which was the concern of one referee), determining the irrelevant permutations after relabelling next to a single mode does not appear to cause any bias, while reducing the computational load. Referees also made us aware of many recent proposals that lead to different evidence approximations, albeit not directly related to our purpose. (One was Rodrigues and Walker, 2014, discussed and commented in a recent post.)
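To illustrate the kind of averaging at stake (my own schematic R code, not the paper’s implementation): the Rao-Blackwellised Chib evaluation averages the conditional posterior density of a fixed parameter value over all k! relabellings, which is the sum that the truncation step then restricts to the permutations that matter:

library(combinat) #permn() enumerates all permutations of 1:k
#theta.star: list of the k component-wise parameter blocks at the chosen value
#cond.dens: hypothetical function returning the conditional posterior density of such a list
rb.chib.term=function(theta.star,cond.dens,k=length(theta.star)){
  mean(sapply(permn(1:k),function(s) cond.dens(theta.star[s])))}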

Le Monde puzzle [#868]

Posted in Books, Kids, Statistics on June 1, 2014 by xi'an

Another permutation-based Le Monde mathematical puzzle:

Given the integers 1,…,n, a “perfect” combination is a pair (i,j) of integers such that no other pair enjoys the same sum. For n=33, what is the maximum number of perfect combinations one can build? And for n=2014?

A rather straightforward problem, or so it seemed: take the pairs (2m,2m+1), whose sums all differ, and we get the maximal possible number of pairs, ⌊n/2⌋… However, I did not read the question properly (!): the constraint is on the sums (i+j), namely

How many mutually exclusive pairs (i,j) can be found with all sums different and bounded by n=33? By n=2014?

In which case the previous and obvious proposal no longer works… The dumb brute-force search

T=10^6
sol=0
for (t in 1:T){
  #draw a random even size m between 20 and 32, permute 1:m, pair the first
  #half with the second half (rows of the 2-column matrix) and count the
  #distinct pair sums below 33
  perm=sample(1:sample(seq(20,32,2),1))
  sal=sum(unique(apply(matrix(perm,ncol=2),1,sum))<33)
  if (sal>sol){
     sol=sal;laperm=perm}
  }

leads to a solution of

> sol
[1] 12
> laperm
 [1]  6  9  1 24 13 20  4  7 21 14 17  3 16 11 19 25 23 18 12 26 15  2  5 10 22
[26]  8
> unique(apply(matrix(laperm,ncol=2),1,sum))
[1] 17 28 26 47 31 32 30 22 23 19 27 25 24

which is close to the solution sol=13 proposed in Le Monde… This is obviously hopeless for sums bounded by 2014. A light attempt at simulated annealing did not help either.
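A light simulated annealing version could look like the following sketch (my own code, with arbitrary tuning): score a pairing of 1:32 by its number of distinct pair sums below 33, propose swaps of two entries, and accept worsening swaps with a temperature-driven probability:

score=function(perm) sum(unique(apply(matrix(perm,ncol=2),1,sum))<33)
perm=sample(1:32)
best=score(perm)
temp=1
for (t in 1:1e5){
  prop=perm
  idx=sample(length(perm),2)
  prop[idx]=prop[rev(idx)] #swap two entries of the pairing
  if (log(runif(1))<(score(prop)-score(perm))/temp) perm=prop
  best=max(best,score(perm))
  temp=.999*temp} #geometric cooling
best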

Importance sampling schemes for evidence approximation in mixture models

Posted in R, Statistics, University life on November 27, 2013 by xi'an

Jeong Eun (Kate) Lee and I completed this paper, “Importance sampling schemes for evidence approximation in mixture models“, now posted on arXiv. (With the customary one-day lag for posting, making me bemoan the days of yore when arXiv would give a definitive arXiv number at the time of submission.) Kate came twice to Paris in the past years to work with me on this evaluation of Chib’s original marginal likelihood estimate (also called the candidate’s formula by Julian Besag), and on the improvement proposed by Berkhof, van Mechelen, and Gelman (2003), based on averaging over all permutations, an idea that we rediscovered in an earlier paper with Jean-Michel Marin. (And that Andrew seemed to have completely forgotten, despite being the very first to publish [in English] a paper on a Gibbs sampler for mixtures.) Given that this averaging can get quite costly, we propose a preliminary step to reduce the number of permutations considered in the averaging, removing the far-away modes that do not contribute to the Rao-Blackwell estimate, a strategy we call dual importance sampling. We also considered modelling the posterior as a product of k-component mixtures on the components, following a vague idea I had had in the back of my mind for many years, but it did not help. In the above boxplot comparison of estimators, the marginal likelihood estimators are

  1. Chib’s method using T = 5000 samples with a permutation correction by multiplying by k!.
  2. Chib’s method (1), using T = 5000 samples which are randomly permuted.
  3. Importance sampling estimate (7), using the maximum likelihood estimate (MLE) of the latents as centre.
  4. Dual importance sampling using q in (8).
  5. Dual importance sampling using an approximation in (14).
  6. Bridge sampling (3). Here, label switching is imposed in hyperparameters.