Archive for Bernoulli factory

not a Bernoulli factory

Posted in Books, Kids, pictures, R on May 20, 2020 by xi'an

A Riddler riddle I possibly misunderstood:

Four isolated persons are given four fair coins, which can be either flipped once or returned without being flipped. If all flipped coins come up heads, the team wins! Else, if any comes up tails, or if no flip at all is done, it loses. Each person is further given an independent U(0,1) realisation. What is the best strategy?

Since the players are separated, I would presume the same procedure is used by all, meaning that each person tosses their coin with probability p, i.e., when their uniform is less than p, and returns it untouched otherwise. The probability of winning is then

4(1-p)³(p/2) + 6(1-p)²(p/2)² + 4(1-p)(p/2)³ + (p/2)⁴

which is maximum for p=0.3420391, with a winning probability of 0.2848424.
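This is readily checked numerically, as in this minimal R sketch (not part of the original riddle discussion):

# winning probability when each player flips with probability p:
# the k flippers (k >= 1) must all get heads, each with probability 1/2
win <- function(p)
  4*(1-p)^3*(p/2) + 6*(1-p)^2*(p/2)^2 + 4*(1-p)*(p/2)^3 + (p/2)^4
optimize(win, c(0, 1), maximum = TRUE)
# $maximum ≈ 0.3420391, $objective ≈ 0.2848424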

And an extra puzzle for free:

solve x⌊x⌊x⌊x⌋⌋⌋ = 2020

where ⌊x⌋, the integral part, is the integer immediately below x. A puzzle that I first failed to solve by brute force because I did not look at negative x's… Since the fourth root of 2020 is between 6 and 7, the solution is either x=6+ε or x=-7+ε, with ε in (0,1). The puzzle then becomes either

(6+ε)⌊(6+ε)⌊(6+ε)⌊6+ε⌋⌋⌋ = (6+ε)⌊(6+ε)⌊36+6ε⌋⌋ = (6+ε)⌊(6+ε)(36+⌊6ε⌋)⌋ = 2020

where ⌊6ε⌋ ranges over the six integer values 0,…,5, of which only ⌊6ε⌋=5 can bring the product close enough to 2020, turning the equation into

(6+ε)⌊41(6+ε)⌋ = (6+ε)(246+⌊41ε⌋) = 2020

where again only ⌊41ε⌋=40 is possible, ending up with

1716+286ε = 2020

which has no solution in (0,1). In the second case

(-7+ε)⌊(-7+ε)⌊(-7+ε)⌊-7+ε⌋⌋⌋ = (-7+ε)⌊(-7+ε)(49+⌊-7ε⌋)⌋ = 2020

shows that only ⌊-7ε⌋=-3 is possible, leading to

(-7+ε)⌊46(-7+ε)⌋ = (-7+ε)(-322+⌊46ε⌋) = 2020

with only ⌊46ε⌋=17 possible, hence

2135-305ε=2020

and

ε=115/305, i.e., x = -7+115/305 = -2020/305 ≈ -6.62295.

A brute force simulated annealing resolution returns x=-6.622706 after 10⁸ iterations. A more interesting question is to figure out the discontinuity points of the function

ℵ(x) = x⌊x⌊x⌊x⌋⌋⌋

as they seem to be numerous:

For instance, only 854 of the first 2020 integers n enjoy a solution to ℵ(x)=n.
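A brute-force way to check both claims (a minimal R sketch of mine, exploiting the fact that a solution to ℵ(x)=n must be of the form x=n/m for a nonzero integer m=⌊x⌊x⌊x⌋⌋⌋):

aleph <- function(x) x * floor(x * floor(x * floor(x)))
aleph(-2020/305)  # returns 2020, the root derived above

# scan the candidate rationals x = n/m; the tolerance guards against
# floating-point wobble in the nested floors
has_sol <- function(n, mmax = 1e3) {
  m <- c(-(mmax:1), 1:mmax)
  any(abs(aleph(n / m) - n) < 1e-9)
}
sum(sapply(1:2020, has_sol))  # 854 integers with a solution, as reported above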

another Bernoulli factory

Posted in Books, Kids, pictures, R, Statistics on May 18, 2020 by xi'an

A question that came out on X validated is asking for help in figuring out the UMVUE (uniformly minimum variance unbiased estimator) of (1-θ)^½ when observing iid Bernoulli B(θ). As it happens, there is no unbiased estimator of this quantity based on a fixed sample size, and hence no UMVUE. But there exists a Bernoulli factory producing a coin with probability (1-θ)^½ from draws of a coin with probability θ (equivalently, one for θ^½, by swapping heads and tails), hence a means to produce unbiased estimators of this quantity, although of course UMVUE does not make sense in this sequential framework. While Nacu & Peres (2005) were uncertain there was a Bernoulli factory for θ^½, witness their Question #1, Mendo (2018) and Thomas and Blanchet (2018) showed that there does exist a Bernoulli factory solution for θ^a, 0≤a≤1, with constructive arguments that only require the series expansion of θ^½. In my answer to that question, using a straightforward R code, I tested the proposed algorithm, which indeed produces an unbiased estimate of θ^½… (Most surprisingly, the question got closed as a “self-study” question, which sounds absurd since it could not occur as an exercise or an exam question, unless the instructor is particularly clueless.)
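A minimal R sketch of that test (mine, not the code posted in the answer): the construction relies on the series 1-√(1-q) = c₁q + c₂q² + …, whose coefficients satisfy c₁=½ and cₖ₊₁ = cₖ(2k-1)/(2k+2), so that the induced stopping thresholds dₖ = cₖ/(1-c₁-…-cₖ₋₁) simplify to 1/(2k):

# one run of a Bernoulli factory for sqrt(theta): stop at the first index k
# where the theta-coin shows heads or an auxiliary uniform falls below
# d_k = 1/(2k), and return that last theta-coin
sqrt_factory <- function(theta) {
  k <- 1
  repeat {
    x <- rbinom(1, 1, theta)
    if (x == 1 || runif(1) < 1 / (2 * k)) return(x)
    k <- k + 1
  }
}
# unbiasedness check: the empirical mean estimates sqrt(theta)
mean(replicate(1e5, sqrt_factory(0.49)))  # should be close to 0.7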

back to the Bernoulli factory

Posted in Books, Statistics, University life on April 7, 2020 by xi'an

“The results show that the proposed algorithm is asymptotically optimal for the mentioned subclass of functions, in the sense that for any other fast algorithm E[N] grows at least as fast with p.”

Murray Pollock (Warwick U. for a wee more days!) pointed out to me this paper of Luis Mendo on a Bernoulli factory algorithm that estimates functions [of p] that can be expressed as power series [of p], essentially functions f(p) such that f(0)=0 and f(1)=1. The big difference with earlier algorithms, as far as I can tell, is that the approach relies on a randomised stopping rule involving, on top of the unlimited sequence of Bernoulli B(p) variates, a second sequence of Uniform variates, which sounds to me like a change of paradigm, given the much higher degree of freedom brought by Uniform variates (as opposed to Bernoulli variates with an unknown value of p), although there also exists a non-randomised version in the paper. The proposed algorithm is as follows, using a sequence of d's issued from the power series coefficients:

1. Set i=1.
2. Take one input X[i].
3. Produce U[i] uniform on (0,1). Let V[i]=1 if U[i]<d[i] and V[i]=0 otherwise.
4. If V[i] or X[i] are equal to 1, output X[i] and stop.
5. Else increase i and go back to Step 2.

As the author mentions, this happens to be a particular case of the reverse-time martingale approach of Łatuszyński, Kosmidis, Papaspiliopoulos and Roberts (Warwick connection as well!). With an average number of steps equal to f(p)/p, surprisingly simple, and somewhat of an optimal rate. While the functions f(p) are somewhat restricted, this is nice work!
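A minimal R sketch of the algorithm above (mine, reusing for f(p)=√p the thresholds d[i]=1/(2i) derived in the √θ sketch earlier in this archive), which also checks the f(p)/p average number of steps:

# generic run of the above algorithm: returns the output coin and the
# number N of p-coins consumed before stopping
factory <- function(p, d) {
  for (i in seq_along(d)) {
    x <- rbinom(1, 1, p)
    if (runif(1) < d[i] || x == 1) return(c(coin = x, N = i))
  }
  c(coin = x, N = i)  # truncation fallback, essentially never reached here
}
d <- 1 / (2 * (1:1e4))  # thresholds associated with f(p) = sqrt(p)
out <- replicate(1e4, factory(0.25, d))
mean(out["coin", ])  # ≈ sqrt(0.25) = 0.5
mean(out["N", ])     # ≈ f(p)/p = 0.5/0.25 = 2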

BayesComp’20

Posted in Books, pictures, Statistics, Travel, University life on January 10, 2020 by xi'an

First, I really have to congratulate my friend Jim Hobert for a great organisation of the meeting, adopting my favourite minimalist principles (no name tag, no “goodies” apart from the conference schedule, no official talks). Without any pretense at objectivity, I also appreciated very much the range of topics and the sweet frustration of having to choose between two or three sessions each time. Here are some notes taken during some talks (with no implicit implication for the talks not mentioned, see the above frustration! as well as very short nights making sudden lapses in concentration highly likely).

On Day 1, Paul Fearnhead’s inaugural plenary talk was on continuous time Monte Carlo methods, mostly bouncy particle and zig-zag samplers, with a detailed explanation on the simulation of the switching times which likely brought the audience up to speed even if they had never heard of them. And an opening on PDMPs used as equivalents to reversible jump MCMC, reminding me of the continuous time (point process) solutions of Matthew Stephens for mixture inference (and of Preston, Ripley, Møller).

The same morn I heard of highly efficient techniques to handle very large matrices and p>n variable selection from Akihiko Nishimura, and from Ruth Baker of a delayed acceptance ABC using a cheap proxy model, somewhat different from indirect inference. I found the reliance on ESS somewhat puzzling given the intractability of the likelihood (and the low reliability of the frequency estimate) and the lack of connection with the “real” posterior. At the same ABC session, Umberto Picchini spoke on a joint work with Richard Everitt (Warwick) on linking ABC and pseudo-marginal MCMC by bootstrap. Actually, the notion of ABC likelihood was already proposed as pseudo-marginal ABC by Anthony Lee, Christophe Andrieu and Arnaud Doucet in the discussion of Fearnhead and Prangle (2012), but I wonder at the focus on being unbiased when the quantity is not the truth, i.e., the “real” likelihood. It would seem more appropriate to attempt better kernel estimates on the distribution of the summary itself. The same session also involved David Frazier, who linked our work on ABC for misspecified models with an on-going investigation of synthetic likelihood.

Later, there was a surprise occurrence of the Bernoulli factory in a talk by Radu Herbei on Gaussian process priors with accept-reject algorithms, leading to exact MCMC, although the computing implementation remains uncertain. And several discussions during the poster session, incl. one on the planning of a 2021 workshop in Oaxaca centred on objective Bayes advances as we received acceptance of our proposal by BIRS today!

On Day 2, David Blei gave a plenary introduction to variational Bayes inference and latent Dirichlet allocations, somewhat too introductory for my taste although other participants enjoyed this exposition. He also mentioned a recent JASA paper on the frequentist consistency of variational Bayes that I should check. Speaking later with PhD students, I found they really enjoyed this opening on an area they did not know that well.

A talk by Kengo Kamatani (whom I visited last summer) on improved ergodicity rates for heavy-tailed targets and Crank-Nicolson modifications to the random walk proposal (which uses an AR(1) representation instead of the random walk). With the clever idea of adding the scale of the proposal as an extra parameter with a prior of its own. Gaining one order of magnitude in the convergence speed (i.e. from d to 1 and from d² to d, where d is the dimension), which is quite impressive (and just published in JAP).

Veronica Rockova linked Bayesian variable selection and machine learning via ABC, with conditions on the prior for model consistency. And a novel approach using part of the data to learn an ABC partial posterior, which reminded me of the partial Bayes factors of the 1990's although it is presumably unrelated. And a replacement of the original rejection ABC via multi-armed bandits, where each variable is represented by an arm, called ABC Bayesian forests. Recalling the simulation trick behind Thompson's approach, reproduced for the inclusion or exclusion of variates, and producing a fixed estimate for the (marginal) inclusion probabilities, which makes it sound like a prior-feedback form of empirical Bayes. Followed by a talk of Gregor Kastner on MCMC handling of large time series with specific priors and a massive number of parameters.

The afternoon also had a wealth of exciting talks and missed opportunities (in the other sessions!). Which ended up with a strong if unintended French bias since I listened to Christophe Andrieu, Gabriel Stoltz, Umut Simsekli, and Manon Michel on different continuous time processes, with Umut linking GANs, multidimensional optimal transport, sliced-Wasserstein, generative models, and new stochastic differential equations. Manon Michel gave a highly intuitive talk on creating non-reversibility, getting rid of refreshment rates in PDMPs to kill any form of reversibility.

riddles on Egyptian fractions and Bernoulli factories

Posted in Books, Kids, R on June 11, 2019 by xi'an

Two fairly different riddles on the weekend Riddler. The first one is (in fine) about Egyptian fractions, which I understand as

Find the Egyptian fraction decomposition of 2 into 11 distinct unit fractions that maximises the smallest fraction.

And which I cannot solve despite perusing this amazing webpage on Egyptian fractions and making some attempts at brute-force random exploration based on Fibonacci's greedy algorithm (a sketch of which appears below). I managed to find the decomposition

2 = 1 +1/2 +1/6 +1/12 +1/16 +1/20 +1/24 +1/30 +1/42 +1/48 +1/56

after seeing, in this short note, the decomposition

2 = 1 +1/3 +1/5 +1/7 +1/9 +1/42 +1/15 +1/18 +1/30 +1/45 +1/90

And then Robin came up with the following:

2 = 1 +1/4 +1/5 +1/8 +1/10 +1/12 +1/15 +1/20 +1/21 +1/24 +1/28

which may prove to be the winner! But there is even better:

2 = 1 +1/5 +1/6 +1/8 +1/9 +1/10 +1/12 +1/15 +1/18 +1/20 +1/24
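For the record, a minimal R sketch (mine, not the search actually used above) of Fibonacci's greedy algorithm, which completes any remainder into distinct unit fractions and can seed such random explorations:

# Fibonacci's greedy step: subtract the largest unit fraction 1/ceiling(den/num)
# from the remainder num/den until nothing is left; for a proper fraction the
# denominators produced are automatically distinct and increasing
gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)
greedy <- function(num, den) {
  qs <- c()
  while (num > 0) {
    q <- ceiling(den / num)
    qs <- c(qs, q)
    num <- num * q - den
    den <- den * q
    g <- gcd(num, den); num <- num / g; den <- den / g
  }
  qs
}
# completing 2 - 1 - 1/3 - 1/5 - 1/7 - 1/9 = 67/315:
greedy(67, 315)  # 5 79 24885; 1/5 collides with a term already used,
                 # whence the need for restarts over the leading fractions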

The second riddle is a more straightforward Bernoulli factory problem:

Given a coin with a free-to-choose probability p of heads, design an experiment with a fixed number k of draws that returns three outcomes with equal probabilities.

For which I tried a brute-force search over all possible 3-partitions of the 2ᵏ events, for a range of values of p from .01 to .5 and for k equal to 3, 4, …, but never got an exact balance between the three groups. Reading later the solution on the Riddler, I saw that there was an exact solution for 4 draws when

p=\frac{3-\sqrt{3(4\sqrt{6}-9)}}{6}

Augmenting the precision of my solver (by multiplying all terms by 100), I indeed found a difference of

> solver((3-sqrt(3*(4*sqrt(6)-9)))/6,ba=1e5)[1]
[1] 8.940697e-08

which means an error of 9 × 10⁻⁸ × 100⁻⁴, i.e. roughly 10⁻¹⁵.
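Since the solver itself is not shown, here is a hypothetical reconstruction (function name, arguments, and enumeration strategy are all assumptions, not the original code): for k=4 tosses, an outcome with j heads has probability p^j(1-p)^(4-j) and comes in choose(4,j) copies, so a 3-partition of the 16 outcomes is described by how many copies of each kind go to the first two groups:

# returns the smallest achievable maximal deviation from 1/3 over all
# 3-partitions of the 2^k outcomes of k tosses of a p-coin
solver <- function(p, k = 4) {
  w <- p^(0:k) * (1 - p)^(k:0)   # probability of one outcome with j heads
  m <- choose(k, 0:k)            # number of such outcomes
  grid <- as.matrix(expand.grid(lapply(m, function(x) 0:x)))
  best <- Inf
  for (i in 1:nrow(grid)) for (j in 1:nrow(grid)) {
    a <- grid[i, ]; b <- grid[j, ]
    if (all(a + b <= m)) {       # group 3 keeps the remaining outcomes
      probs <- c(sum(a * w), sum(b * w), sum((m - a - b) * w))
      best <- min(best, max(abs(probs - 1/3)))
    }
  }
  best
}
solver((3 - sqrt(3 * (4 * sqrt(6) - 9))) / 6)  # essentially zero at the exact p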