“…crimes that occur in locations frequented by police are more likely to appear in the database simply because that is where the police are patrolling.”

Kristian and William examine in this paper one statistical aspect of police forces relying on crime prediction software, namely the bias in the data exploited by the software and in the resulting policing. (While the accountability of police actions induced by such software is not explored, this is obviously related to last week’s Nature editorial, “Algorithm and blues“, which [in short] calls for watchdogs on AIs and decision algorithms.) When the data is gathered from police and justice records, any bias in stops, arrests, and convictions will be reproduced in the data and hence will repeat the bias when targeting potential criminals. As aptly put by the authors, the resulting machine learning algorithm will be “predicting future policing, not future crime.” Worse, by having no reservation about over-fitting [the more predicted crimes the better], it will increase the bias in the same direction. In the Oakland drug-use example analysed in the article, the police concentrate almost exclusively on a few grid squares of the city, resulting in the above self-fulfilling prediction. However, I do not see much hope in using other surveys and datasets to eliminate this bias, as they carry their own shortcomings. Even without biases, predicting crimes at the individual level just seems a bad idea, for both statistical and ethical reasons.

Filed under: Books, pictures, Statistics Tagged: AIs, algorithms, crime prediction, deep learning, drug users, Nature, Oakland, Philip K. Dick, Significance, social determinism, software, The Minority Report

0≤u²≤**ƒ**(v/u)

induces the distribution with density proportional to **ƒ** on V/U. Hence the name. The proof is straightforward and the result can be seen as a consequence of the fundamental lemma of simulation, namely that simulating from the uniform distribution on the set B of (w,x)’s in **R⁺**×**X** such that

0≤w≤**ƒ**(x)

induces the marginal distribution with density proportional to **ƒ** on X. There is no mathematical issue with this result, but I have difficulties picturing the construction of efficient random number generators based on this principle.
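As a quick illustration of this fundamental lemma, here is a minimal Python sketch (mine, not code from the post; the target density and box bounds are my own choices): sampling (w,x) uniformly under the graph of an unnormalised density, by rejection from a bounding box, yields x’s distributed proportionally to that density.

```python
import random
import math

def under_curve_sample(f, xlo, xhi, fmax, n):
    """Sample (w, x) uniformly on {(w, x): 0 <= w <= f(x)} by
    rejection from the box [0, fmax] x [xlo, xhi]; the accepted
    x's then follow the density proportional to f."""
    xs = []
    while len(xs) < n:
        x = random.uniform(xlo, xhi)
        w = random.uniform(0.0, fmax)
        if w <= f(x):          # the uniform point fell under the curve
            xs.append(x)
    return xs

# unnormalised standard normal density, truncated to [-4, 4]
f = lambda x: math.exp(-0.5 * x * x)
random.seed(1)
sample = under_curve_sample(f, -4.0, 4.0, 1.0, 20000)
```

With these choices the accepted x’s should have mean close to 0 and second moment close to 1, up to the (negligible) truncation at ±4.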

I thus took the opportunity of the second season of [the Warwick reading group on] Non-uniform random variate generation to look anew at this approach. (Note that the book is freely available on Luc Devroye’s website.) The first thing I considered was the shape of the set A, which has nothing intuitive about it! Luc then mentions (p.195) that the boundary of A is given by

u(x)=√**ƒ**(x), v(x)=x√**ƒ**(x)

which then leads to bounding both **ƒ** and x→x²**ƒ**(x) to create a box around A and an accept-reject strategy, but I have trouble with this result without making further assumptions about **ƒ**… Using a two-component normal mixture as a benchmark, I found bounds on u(.) and v(.) and simulated a large number of points within the box, ending up with the above graph showing that the accepted (u,v)’s were indeed within this boundary. And the same holds with a more ambitious mixture:
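A Python re-creation of this experiment might look as follows (a sketch of mine, not the post’s R code: the mixture weights and component means below are my own choices, and the box bounds are found numerically over a grid):

```python
import random
import math

def fmix(x):
    """Unnormalised two-component normal mixture density."""
    return (0.3 * math.exp(-0.5 * x * x)
            + 0.7 * math.exp(-0.5 * (x - 4.0) ** 2))

# numerical bounds on u(x)=sqrt(f(x)) and v(x)=x*sqrt(f(x)) over a grid
grid = [-10 + i * 0.001 for i in range(20001)]
umax = max(math.sqrt(fmix(x)) for x in grid)
vlo = min(x * math.sqrt(fmix(x)) for x in grid)
vhi = max(x * math.sqrt(fmix(x)) for x in grid)

def rou_sample(n):
    """Ratio-of-uniforms: draw (u,v) uniformly in the bounding box and
    accept when u^2 <= f(v/u); the ratios v/u then follow fmix."""
    out = []
    while len(out) < n:
        u = random.uniform(0.0, umax)
        v = random.uniform(vlo, vhi)
        if u > 0 and u * u <= fmix(v / u):
            out.append(v / u)
    return out

random.seed(1)
sample = rou_sample(20000)
```

Plotting the accepted (u,v)’s reproduces the banana-shaped set A inside its bounding box, and the ratios v/u have the mixture mean 0.3·0 + 0.7·4 = 2.8.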

Filed under: Books, pictures, R, Statistics Tagged: Luc Devroye, Non-Uniform Random Variate Generation, random number generation, ratio of uniform algorithm, University of Warwick

As discussed in the previous entry, there are two interpretations to this question from The Riddler:

“…how long is the longest path a knight can travel on a standard 8-by-8 board without letting the path intersect itself?”

as to what constitutes a path. As a (terrible) chess player, I would opt for the version in the previous post, with the knight moving two steps in one direction and one in the other (or vice-versa), thus occupying three squares on the board. But one can instead consider the graph of the moves of that knight, as in the above picture and as in The Riddler. And since I could not let the problem go, I also wrote an R code (as clumsy as the previous one!) to explore at random (with some minimal degree of annealing) the maximum length of a self-avoiding knight canter. The first maximal length I found this way was 32, although I also arrived by hand at a spiralling solution of length 33.

Running the R code longer over the weekend however led to a path of length 34, while the exact solution to the riddle is 35, as provided by The Riddler (and earlier in many forms, including Martin Gardner’s and Donald Knuth’s).
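For the graph version of the riddle, a minimal Python sketch of this random exploration (mine, not the post’s R code: I use only greedy random restarts without the annealing, and the seed and restart count are my own choices) could read:

```python
import random

# the eight knight moves
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
         (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def crossprod(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_cross(p1, p2, p3, p4):
    """Proper-crossing test; knight-move segments contain no interior
    lattice points, so with distinct visited squares this is the only
    possible kind of self-intersection of the path."""
    d1 = crossprod(p3, p4, p1)
    d2 = crossprod(p3, p4, p2)
    d3 = crossprod(p1, p2, p3)
    d4 = crossprod(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def random_uncrossed_path():
    """Grow a random knight path until no legal non-crossing move is left."""
    pos = (random.randrange(8), random.randrange(8))
    path = [pos]
    while True:
        cands = []
        for dx, dy in MOVES:
            nxt = (pos[0] + dx, pos[1] + dy)
            if not (0 <= nxt[0] < 8 and 0 <= nxt[1] < 8):
                continue
            if nxt in path:
                continue
            if any(segments_cross(pos, nxt, path[i], path[i + 1])
                   for i in range(len(path) - 1)):
                continue
            cands.append(nxt)
        if not cands:
            return path
        pos = random.choice(cands)
        path.append(pos)

random.seed(2)
best = max((random_uncrossed_path() for _ in range(500)), key=len)
moves = len(best) - 1   # number of knight moves in the longest path found
```

Pure restarts typically stall short of the optimal 35 moves, which is consistent with the post needing some annealing to push past 32.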

*[An unexpected side-effect of this riddle was ending up watching half of Bergman’s Seventh Seal in Swedish…]*

Filed under: Books, Kids, pictures, R, Statistics Tagged: chess, Donald Knuth, Ingmar Bergman, knight, Martin Gardner, self-avoiding random walk, The Riddler, The Seventh Seal

n=20
trueda=matrix(rexp(100*n),ncol=n)
#prior values
er0=0
Q0=1

Then I applied the formulas found in the paper for approximating the evidence, first defining the log normalising constant for an unnormalised Gaussian density as on the top of p.318

Psi=function(er,Q){ -.5*(log(Q/2/pi)-Q*er*er)}

[which exhibited a typo in the paper, as Simon Barthelmé figured out after email exchanges that the right formulation was the inverse]

Psi=function(er,Q){ -.5*(log(Q/2/pi)-er*er/Q)}
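As a sanity check (mine, not part of the original post), the corrected Psi can be verified against numerical integration of the unnormalised Gaussian exp(er·x − Q·x²/2), whose normalising constant is √(2π/Q)·exp(er²/2Q):

```python
import math

def psi(er, q):
    """Log normalising constant of the unnormalised Gaussian
    exp(er*x - q*x**2/2), in natural parameters (er, q)."""
    return -0.5 * (math.log(q / 2.0 / math.pi) - er * er / q)

# crude Riemann-sum check that exp(psi) = integral of exp(er*x - q*x^2/2)
er, q = 1.5, 2.0
h = 0.001
num = sum(math.exp(er * x - 0.5 * q * x * x) * h
          for x in (i * h - 20.0 for i in range(40000)))
```

The Riemann sum agrees with exp(psi(er, q)) to roughly the step size, which would not be the case with the original (transposed) formula.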

and iterating the site updates as described in Sections 2.2, 3.1 and Algorithms 1 and 3:

ep=function(dat,M=1e4,eps,Nep=10){
 n=length(dat)
 #global and ith natural parameters for Gaussian approximations
 #initialisation:
 Qi=rep(0,n)
 Q=Q0+sum(Qi)
 eri=rep(0,n)
 er=er0+sum(eri)
 #constants Ci in (2.6)
 normz=rep(1,n)
 for (t in 1:Nep){
  for (i in sample(1:n)){
   #site i update
   Qm=Q-Qi[i] #m for minus i
   erm=er-eri[i]
   #ABC sample
   thetas=rnorm(M,mean=erm/Qm,sd=1/sqrt(Qm))
   dati=rexp(M,rate=exp(thetas))
   #update by formula (3.3)
   Mh=sum((abs(dati-dat[i])<eps))
   mu=mean(thetas[abs(dati-dat[i])<eps])
   sig=var(thetas[abs(dati-dat[i])<eps])
   if (Mh>1e3){ #enough proposals accepted
    Qi[i]=1/sig-Qm
    eri[i]=mu/sig-erm
    Q=Qm+Qi[i]
    er=erm+eri[i]
    #update of Ci as in formula (2.7)
    #with correction bottom of p.319
    normz[i]=log(Mh/M/2/eps)-Psi(er,Q)+Psi(erm,Qm)
   }}}
 #approximation of the log evidence as on p.318
 return(sum(normz)+Psi(er,Q)-Psi(er0,Q0))
}

except that I had made an R beginner’s mistake (!) when calling the normal simulation as

thetas=rnorm(M,me=erm/Qm)/sqrt(Qm)

to be compared with the genuine evidence under a conjugate Exp(1) prior *[values of the evidence should differ but should not differ that much]*

ev=function(dat){
 n=length(dat)
 lgamma(n+1)-(n+1)*log(1+sum(dat))
}
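This closed form follows from integrating λⁿ·exp(−λΣdᵢ) against the Exp(1) prior exp(−λ), which gives Γ(n+1)/(1+Σdᵢ)ⁿ⁺¹. A quick Python check of the formula against numerical integration (my own sketch, with a made-up data vector):

```python
import math

def log_evidence(dat):
    """Log marginal likelihood of exponential data under an Exp(1)
    prior on the rate: Gamma(n+1) / (1 + sum(dat))^(n+1)."""
    n = len(dat)
    return math.lgamma(n + 1) - (n + 1) * math.log(1 + sum(dat))

# crude midpoint-rule check of the integral over the rate lambda
dat = [0.8, 1.3, 0.4, 2.1]
h = 0.0005
s = sum(dat)
num = sum(lam ** len(dat) * math.exp(-lam * (s + 1)) * h
          for lam in (i * h + h / 2 for i in range(60000)))
```

The log of the numerical integral matches log_evidence(dat) to high accuracy.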

After correcting those bugs and the too-small sample sizes, thanks to Simon’s kind-hearted help!, running this code results in only minor discrepancies between the two quantities:

evid=ovid=rep(0,100)
M=1e4 #number of ABC samples
propep=.1 #tolerance factor
for (t in 1:100){
 #tolerance scaled by data
 epsi=propep*sd(trueda[t,])
 evid[t]=ep(trueda[t,],M,epsi)
 ovid[t]=ev(trueda[t,])
}
plot(evid,ovid,cex=.6,pch=19)

as shown by the comparison below between the evidence and the EP-ABC approximations to the evidence, called *epabence* (the dashed line is the diagonal):

Obviously, this short (if lengthy at my scale) experiment is not intended to conclude that EP-ABC works or does not work. (Simon also provided an additional comparison with the genuine evidence, that is, under the right prior, and with the zero Monte Carlo version of the EP-ABC evidence, achieving a high concentration over the diagonal.) I just conclude that the method does require a certain amount of calibration to become stable. Calibration steps that are clearly spelled out in the paper (adaptive choice of M, stabilisation by quasi-random samples, stabilisation by stochastic optimisation steps and importance sampling), but that also imply that EP-ABC is “far from routine use because it may take days to tune on real problems” (pp.315-316). Thinking about the reasons for such a discrepancy over the past days, one possible explanation I see for the impact of the calibration parameters is that the validation of EP, if any, occurs at a numerical rather than Monte Carlo level, hence that the Monte Carlo error must be really small to avoid interfering with the numerical aspects.

Filed under: R, Statistics, University life Tagged: ABC, computer experiment, expectation-propagation, Gaussian approximation, implementation, Monte Carlo Statistical Methods, R