Archive for temperature

gerrymandering detection by MCMC

Posted in Books, Statistics on June 16, 2017 by xi'an

In the latest issue of Nature (June 8), I read a rather long feature article on mathematical (and statistical) ways of measuring gerrymandering, that is, the manipulation of the delimitations of a voting district toward improving the chances of a certain party. (The name comes from Elbridge Gerry (1812) and the salamander shape of the district he created.) The difficulty covered by the article is about detecting gerrymandering, which leads to the challenging and almost philosophical question of defining a “fair” partition of a region into voting districts when those are not geographically induced, since each partition respects the principles of “one person, one vote” and of majority rule at the district level. Having a candidate or party win at the global level and lose at every local level seems to go against this majority rule, but with electoral systems like the one in the US, this frequently happens (with dire consequences in the latest elections). Just another illustration of Simpson’s paradox, essentially. And a damning drawback of multi-tiered electoral systems.

“In order to change the district boundaries, we use a Markov Chain Monte Carlo algorithm to produce about 24,000 random but reasonable redistrictings.”

In the arXiv paper that led to this Nature article (along with other studies), Bangia et al. essentially construct a tail probability to assess how extreme the current district partition is against a theoretical distribution of such partitions, finding that the actual redistrictings of 2012 and 2016 in North Carolina are “extremely atypical”. (The generation of random partitions obeyed four rules, namely equal population, geographic compactness and connectedness, proximity to county boundaries, and a majority of Afro-American voters in at least two districts, the last being a requirement in North Carolina.) A score function was built by linear combination of four corresponding scores, mostly χ²-like, and turned into a density, simulated annealing style. The determination of the final temperature β=1 (p.18) [or equivalently of the weights (p.20)] remains unclear to me, as does the use of more than 10⁵ simulated annealing iterations to produce a single partition (p.18)…
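To make the score-to-density construction concrete, here is a minimal toy sketch in R, my own illustration rather than the authors' code: a single population-imbalance score J is turned into a target density exp(-βJ) and sampled by Metropolis moves reassigning one precinct at a time. All names (J, beta, pop, part) are placeholders; the actual method combines four χ²-like scores and preserves district connectivity, which this toy ignores.

# toy sketch of a score turned into a Gibbs density exp(-beta*J)
# (hypothetical stand-in, not the paper's algorithm)
set.seed(1)
npre=30; ndis=3
pop=rpois(npre,100)                  # toy precinct populations
part=rep(1:ndis,each=npre/ndis)      # initial contiguous partition

J=function(part){                    # single toy score: population imbalance
  dpop=tapply(pop,part,sum)
  sum((dpop-mean(dpop))^2)/mean(dpop)^2
}

beta=1                               # final inverse temperature, as in the paper
for (t in 1:1e3){
  prop=part
  i=sample(npre,1)                   # move one precinct to a random district
  prop[i]=sample(ndis,1)
  if (runif(1)<exp(beta*(J(part)-J(prop))))
    part=prop                        # Metropolis acceptance
}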

From a broader perspective, agreeing on a method to produce random district allocations could be the way to go towards solving the judicial dilemma in setting new voting maps, as is currently under discussion in the US.

simulated annealing for Sudokus [2]

Posted in Books, pictures, R, Statistics, University life on March 17, 2012 by xi'an

On Tuesday, Eric Chi and Kenneth Lange arXived a paper on a comparison of numerical techniques for solving sudokus. (The very Kenneth Lange who wrote this fantastic book on numerical analysis.) One of these techniques is the simulated annealing approach I had played with a long while ago. They seem to use the same penalisation function as mine, i.e., the number of constraint violations, but the moves are different in that they pick pairs of cells without clues (i.e., not constrained) and swap their contents. The pairs are not picked at random but with probability proportional to exp(k), where k is the number of constraint violations. The temperature decreases geometrically and the simulated annealing program stops when zero cost is achieved or when a maximum of 10⁵ iterations is reached. The R program I wrote while visiting SAMSI had more options, but it was also horrendously slow! The CPU time reported by the authors is far, far lower, almost in the range of the backtracking solution that serves as their reference. (Of course, it is written in Fortran 95, not in R…) As in my case, the authors mention they sometimes get stuck in a local minimum with only two cells in constraint violation.

So I reprogrammed an R code following (as much as possible) their scheme. However, I do not get a better behaviour than with my earlier code, and certainly no solution within seconds, if any. For instance, the temperature decrease in 100(.99)ᵗ seems too steep to manage 10⁵ steps. So, either I am missing a crucial element in the code, or my R code is very poor and clever Fortran programming does the trick! Here is my code:

target=function(s){
  # total number of constraint violations (duplicates) in grid s
  tar=sum(apply(s,1,duplicated)+apply(s,2,duplicated))
  for (r in 1:9){
    # row and column indices of the r-th 3x3 box
    bloa=(1:3)+3*((r-1)%%3)
    blob=(1:3)+3*trunc((r-1)/3)
    tar=tar+sum(duplicated(as.vector(s[bloa,blob])))
  }
  return(tar)
}

cost=function(i,j,s){
  # number of constraint violations involving cell (i,j)
  cos=sum(s[,j]==s[i,j])+sum(s[i,]==s[i,j])
  # top-left corner of the 3x3 box containing (i,j)
  boxa=3*trunc((i-1)/3)+1
  boxb=3*trunc((j-1)/3)+1
  cos+sum(s[boxa:(boxa+2),boxb:(boxb+2)]==s[i,j])
}

entry=function(){
  # fill the empty cells with a random permutation of the missing digits,
  # so that each digit appears exactly nine times in the grid
  s=con
  pop=NULL
  for (i in 1:9)
    pop=c(pop,rep(i,9-sum(con==i)))
  s[s==0]=sample(pop)
  return(s)
}

move=function(tau,s,con){
  # violation count for each unconstrained cell (clue cells are never picked)
  pen=rep(0,81)
  for (i in (1:81)[con==0])
    pen[i]=cost(((i-1)%%9)+1,trunc((i-1)/9)+1,s)
  # pick two unconstrained cells with probability proportional to exp(violations)
  wi=sample((1:81)[con==0],2,prob=exp(pen[con==0]))
  # swap their contents
  prop=s
  prop[wi[1]]=s[wi[2]]
  prop[wi[2]]=s[wi[1]]
  # simulated annealing acceptance step at temperature tau
  if (runif(1)<exp((target(s)-target(prop))/tau))
    s=prop
  return(s)
}

#Example (0 denotes an empty cell):
s=matrix(0,ncol=9,nrow=9)
s[1,c(1,6,7)]=c(8,1,2)
s[2,c(2:3)]=c(7,5)
s[3,c(5,8,9)]=c(5,6,4)
s[4,c(3,9)]=c(7,6)
s[5,c(1,4)]=c(9,7)
s[6,c(1,2,6,8,9)]=c(5,2,9,4,7)
s[7,c(1:3)]=c(2,3,1)
s[8,c(3,5,7,9)]=c(6,2,1,9)

con=s        # clue cells (non-zero entries) remain fixed
tau=100      # initial temperature
s=entry()
for (t in 1:10^4){
  for (v in 1:100) s=move(tau,s,con)
  tau=tau*.99                 # geometric cooling
  if (target(s)==0) break     # stop once a valid grid is found
}
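For completeness, a quick check one may run after the loop (my addition, not part of the original code): print the grid if the annealing reached zero cost, else report how many violations remain.

# check the outcome of the run (added illustration)
if (target(s)==0){
  print(s)                    # a completed, valid grid
} else {
  cat("stopped with",target(s),"remaining violations\n")
}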