Archive for Monte Carlo algorithm

infinite mixtures are likely to take a while to simulate

Posted in Books, Statistics on February 22, 2018 by xi'an

Another question on X validated got me highly interested for a while, as I had myself considered the problem in the past, until I realised while discussing with Murray Pollock in Warwick that there was no general answer: when a density f is represented as an infinite series decomposition into weighted densities, some weights being negative, is there an efficient way to generate from such a density? One natural approach to the question is to look at the mixture of the positively weighted components, f⁺, since it provides an upper bound on the target density. Simulating from this upper bound f⁺ (suitably renormalised) and accepting the outcome x with probability f(x)/f⁺(x), that is, one minus the ratio of the negative part to the positive part, is a valid accept-reject solution. Except that it is not implementable if

  1.  the positive and negative parts both involve infinite sums with no exploitable feature that can turn them into finite sums or closed-form expressions,
  2.  the sum of the positive weights is infinite, which is the case when the series of weights is not absolutely convergent.

Even when the method is implementable, it may be arbitrarily inefficient, in the sense that the acceptance probability is equal to the inverse of the sum of the positive weights, and that simulating from the bounding mixture in the regular way uses the original weights, which may be unrelated in size to the actual importance of the corresponding components in the target. Hence, when expressed in this general form, the problem does not allow for a generic solution.
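For a finite signed mixture, where both sums are computable, the accept-reject scheme is immediate and its overall acceptance probability is indeed the inverse of the sum of the positive weights. Here is a minimal R sketch of this scheme, with hypothetical weights and Normal components of my own choosing:

w=c(1.2,.5,-.7); mu=c(0,2,1)        #signed weights summing to one (hypothetical)
wp=pmax(w,0)                        #weights of the positive part f+
f=function(x) sum(w*dnorm(x,mu))    #target density, a signed mixture of Normals
fp=function(x) sum(wp*dnorm(x,mu))  #dominating positive mixture f+
rsign=function(){
  repeat{
    x=rnorm(1,sample(mu,1,prob=wp))      #propose from the renormalised f+
    if (runif(1)<f(x)/fp(x)) return(x)}} #accept with probability f(x)/f+(x)
x=replicate(1e4,rsign())            #acceptance rate is 1/sum(wp)=1/1.7

In the genuinely infinite case, both f and f⁺ above would have to be truncated or estimated, which is exactly where the difficulties listed above arise.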

Obviously, if more is known about the components of the mixture, as for instance the sequence of weights being alternating, there exist specialised methods, as detailed in the section on series representations in Devroye's (1986) simulation bible. For instance, when positive and negative weighted densities can be paired, in the sense that their weighted difference is positive, a latent index variable can be introduced. But I cannot think of a generic method where the initial positive and negative components are used for simulation, as it may on the contrary be the case that no finite-sum difference is everywhere positive.

the three-body problem [book review]

Posted in Books on February 5, 2017 by xi'an

“Back then, I thought of one thing: Have you heard of the Monte Carlo method? Ah, it’s a computer algorithm often used for calculating the area of irregular shapes. Specifically, the software puts the figure of interest in a figure of known area, such as a circle, and randomly strikes it with many tiny balls, never targeting the same spot twice. After a large number of balls, the proportion of balls that fall within the irregular shape compared to the total number of balls used to hit the circle will yield the area of the shape. Of course, the smaller the balls used, the more accurate the result.

Although the method is simple, it shows how, mathematically, random brute force can overcome precise logic. It’s a numerical approach that uses quantity to derive quality. This is my strategy for solving the three-body problem. I study the system moment by moment. At each moment, the spheres’ motion vectors can combine in infinite ways. I treat each combination like a life form. The key is to set up some rules: which combinations of motion vectors are “healthy” and “beneficial,” and which combinations are “detrimental” and “harmful.” The former receive a survival advantage while the latter are disfavored. The computation proceeds by eliminating the disadvantaged and preserving the advantaged. The final combination that survives is the correct prediction for the system’s next configuration, the next moment in time.”
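As an aside, the hit-or-miss scheme described in this passage is easy to reproduce; here is a toy R illustration of my own (not from the book), estimating the area of a quarter disc from the proportion of uniform points falling inside the enclosing unit square:

N=1e5
x=runif(N); y=runif(N)   #"balls" thrown uniformly over the unit square
mean(x^2+y^2<1)          #proportion of hits, estimating the area pi/4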

While I had read rather negative reviews of the Three-Body Problem, I still decided to buy the book from an Oxford bookstore and give it a try. If only because this was Chinese science-fiction and I had never read any Chinese science-fiction. (Of course the same motivation would apply to most other countries!) While the historical (or pseudo-historical) part of the novel is most interesting, about the daughter of a university physicist killed by Red Guards during the Cultural Revolution and hence forever suspect, even after decades of exile, the science-fiction part about contacting another inhabited planet and embracing its alien values based on its sole existence is quite deficient and/or very old-fashioned. As is [old-fashioned] the call to more than three dimensions to manage anything, from space travel to instantaneous transfer of information, to ultimate weapons. And an alien civilization that is not dramatically alien. As for the three-body problem itself, there is very little of interest in the book, and the above quote on using Monte Carlo to "solve" it carries no novelty, since the method dates back to the early 1940s.

I am thus very much surprised at the book getting a Hugo award. For a style that is more reminiscent of early Weird Tales than of current science-fiction… In addition, the characters are rather flat and often act in unnatural ways. (Some critics blame the translation, but I think the problem goes deeper than that.)

simulation under zero measure constraints

Posted in Books, Kids, R, Statistics, University life on November 17, 2016 by xi'an

A theme that comes up fairly regularly on X validated is the production of a sample with given moments, either for calibration motives or from a misunderstanding of the difference between a distribution mean and a sample average, as several entries on the forum illustrate.

In most of those questions, the constraint is on the sum or mean of the sample, which allows for an easy resolution by a change of variables. It however gets somewhat harder when the constraint involves more moments or, worse, an implicit solution to an equation. A good example of the latter is the quest for a sample with a given maximum likelihood estimate when this MLE cannot be derived analytically, as for instance with a location-scale t sample…
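To illustrate the easy case, a minimal sketch of the change of variables when the constraint bears on the average only (assuming a Normal sample, my own toy choice): recentring enforces the constraint exactly, at the cost of modifying the marginal distribution of the sample.

n=10; s0=.5
x=rnorm(n)
x=x-mean(x)+s0  #change of variables enforcing mean(x)==s0 exactly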

Actually, even when the constraint is solely on the sum, a relevant question is the production of an efficient simulation mechanism. Using a Gibbs sampler that changes one component of the sample at each iteration does not qualify, even though it eventually produces the proper sample, except for small samples, as in this example:

n=3; T=1e4
s0=.5 #fixed average
sampl=matrix(s0,T,n) #start from the constant sample, which meets the constraint
for (t in 2:T){
  sampl[t,]=sampl[t-1,]
  for (i in 1:(n-1)){
    #update component i uniformly over the range that keeps both it and
    #the last component within (0,1)
    sampl[t,i]=runif(1,
      min=max(0,n*s0-sum(sampl[t,c(-i,-n)])-1),
      max=min(1,n*s0-sum(sampl[t,c(-i,-n)])))
    #last component is then imposed by the constraint on the average
    sampl[t,n]=n*s0-sum(sampl[t,-n])}}
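A quick sanity check (my own addition) that the chain respects both the average constraint and the unit hypercube:

summary(rowMeans(sampl)) #every row should average to s0=.5
range(sampl)             #all entries should remain within (0,1)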

For very large samples, I figure that proposing from the unconstrained density can achieve sufficient efficiency, but the in-between setting remains an interesting problem.

go, go, go…deeper!

Posted in pictures, Statistics on February 19, 2016 by xi'an

While visiting Warwick last week, I came across the very issue of Nature with the highly advertised paper by David Silver and co-authors from DeepMind detailing how they designed their Go player algorithm that bested a European Go master five games in a row last September. Which is a rather unexpected and definitely brilliant feat given the state of the art! And compares (in terms of importance, if not of approach) with the victory of IBM Deep Blue over Garry Kasparov 20 years ago… (Another deep algorithm, showing that the attraction of programmers for this label has not died off over the years!)

This paper is not the easiest to read (especially over breakfast), with (obviously) missing details, but I gathered interesting titbits from this cursory read. One being the reinforcement learning step where the predictor is improved by being played against earlier versions of itself. While this can lead to overfitting, the authors used randomisation to reduce this feature. This made me wonder if a similar step could be used on predictors like random forests, e.g., by weighting the trees or the probability of including one predictor or another.

Another feature of major interest is the parallel use of two neural networks in the decision-making, a first one estimating a probability distribution over moves, learned from millions of human Go games, and a second one returning a utility or value for each possible move. The first network is used for tree exploration with Monte Carlo steps, while the second leads to the final decision.
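To make the exploration step more concrete, here is a toy R sketch of my own (a schematic, not DeepMind's implementation) of a PUCT-type selection rule, where a policy prior p guides the choice among candidate moves while value estimates Q and visit counts N accumulate through the Monte Carlo tree search:

puct=function(Q,N,p,cpuct=1){
  #Q: current value estimates, N: visit counts, p: prior move probabilities
  score=Q+cpuct*p*sqrt(sum(N))/(1+N) #exploitation plus prior-driven exploration
  which.max(score)}                  #index of the move to explore next
puct(Q=c(.2,.5,.1),N=c(10,40,2),p=c(.3,.5,.2)) #toy call with three candidate moves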

This is a fairly good commercial argument for machine learning techniques (and for DeepMind as well), but I do not agree with the doom-sayers predicting the rise of the machines and our soon-to-be annihilation! (Which is the major theme of Superintelligence.) This result shows that, with enough learning data and sufficiently high optimising power and skills, it is possible to produce an excellent predictor of the set of Go moves leading to a victory. Without the brute-force strategy of Deep Blue that simply explored the tree of possible games to a much more remote future than a human player could do (along with the perfect memory of a lot of games). I actually wonder whether DeepMind has also designed a chess algorithm on the same principles: there is no reason why it should not work. However, this success does not predict the soon-to-come emergence of AIs able to deal with vaguer and broader scopes: in that sense, automated drivers are much more of an advance (unless they start bumping into other cars and pedestrians on a regular basis!).