Archive for stratification

computational methods for statistical mechanics [day #3]

Posted in Mountains, pictures, Running, Statistics, Travel, University life on June 6, 2014 by xi'an

Arthur's Seat, Edinburgh, Sep. 7, 2011

The third day [morn] at our ICMS workshop was dedicated to path sampling. And rare events. Much more into [my taste] Monte Carlo territory. The first talk, by Rosalind Allen, looked at reweighting trajectories that are not at equilibrium or are missing the Boltzmann [normalising] constant, although the derivation with respect to a calibration parameter looked like the primary goal rather than a tool for estimating that constant. Again papers in J. Chem. Phys.! And a potential link with ABC raised by Antonietta Mira… Then Jonathan Weare discussed stratification. With a nice trick of expressing the normalising constants of the different terms in the partition as the solution of a Markov system

v\mathbf{M}=v

Because the stochastic matrix M is easier (?) to approximate. Valleau's and Torrie's umbrella sampling was a constant reference in this morning of talks. Arnaud Guyader's talk was a continuation of Tony Lelièvre's introduction, which helped a lot in my understanding of the concepts. Rephrasing things in more statistical terms. Like the distinction between equilibrium and paths. Or bias being importance sampling. Frédéric Cérou then gave a sort of second part to Arnaud's talk, using importance splitting algorithms. He presented an algorithm for simulating rare events that sounded like nested sampling in reverse, where the goal is to move down the target rather than up. Pushing particles away from the current level of the target function with probability ½. Michela Ottobre completed the series with an entry on diffusion limits in the Roberts–Gelman–Gilks spirit when the Markov chain is not yet stationary. In the transient phase, that is.
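As I understand the trick, the vector v of stratum weights is a left eigenvector of the stochastic matrix M associated with eigenvalue one, so once M is approximated the weights come out of a standard eigenproblem. A minimal numerical sketch (with a made-up toy matrix, all names mine, not Weare's actual construction of M):

```python
import numpy as np

# Toy illustration: M plays the role of a row-stochastic matrix linking
# the strata (here 4 strata with made-up positive entries).
rng = np.random.default_rng(0)
M = rng.random((4, 4))
M /= M.sum(axis=1, keepdims=True)      # rows sum to one: M is stochastic

# The weight vector v solves v M = v, i.e. v is a left eigenvector of M
# for eigenvalue 1, equivalently a right eigenvector of M transposed.
eigvals, eigvecs = np.linalg.eig(M.T)
v = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
v /= v.sum()                           # normalise the weights to sum to one

assert np.allclose(v @ M, v)           # fixed-point check
```

By Perron–Frobenius, a positive stochastic matrix has a unique such eigenvector with entries of constant sign, so the normalisation by the sum is legitimate.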

Stratified sampling

Posted in R, Statistics on June 9, 2010 by xi'an

The recently arXived paper of Goldstein, Rinott and Scarsini studies the impact of refining a partition on the precision of a stratified Monte Carlo approach to maximisation or integration. Quite naturally, if the partition is refined, simulating points in each set of the partition can only improve the quality of the approximation, whether the problem is one of maximisation or of integration. However, the authors include an interesting (formal) counterexample where the stratification leads to a higher L1 (if not L2) error. (And they include extensions of more classical results to cases where the function is observed with errors or contains missing data.) The difficulty I have with stratification in practice is that it is genuinely hard to come up with a partition that is relevant for the problem at hand and whose partition weights are known exactly… Reading this nice mathematical paper also led me to ponder the possibility of unbiased maximisation, while wondering whether stratification was ultimately compelling for maximisation, since only one set in the partition is of interest. Eliminating the other sets from the simulation would thus lead to higher efficiency, were it feasible.
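To make the variance-reduction intuition concrete, here is a minimal sketch (my own toy example, not taken from the paper) of stratified integration of x² over the unit interval, where the k partition weights are known exactly to be 1/k:

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: x ** 2        # integrand over [0, 1]; exact integral is 1/3
n, k = 1000, 10             # total simulation budget, number of strata

# Plain Monte Carlo estimate from n uniform draws
plain = f(rng.random(n)).mean()

# Stratified estimate: cut [0, 1] into k equal strata, whose weights 1/k
# are known exactly, and draw n/k uniform points inside each stratum.
edges = np.linspace(0.0, 1.0, k + 1)
strat = sum(
    f(rng.uniform(edges[i], edges[i + 1], n // k)).mean() / k
    for i in range(k)
)

print(plain, strat)         # both estimates approach 1/3
```

With the same budget n, the stratified estimator only carries the within-stratum variability, which is why refining the partition (here) can only help; the paper's counterexample shows this monotonicity may fail in L1.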