Archive for combinatorics

shelled and riddled

Posted in Books, Kids, pictures, R, Statistics on August 10, 2022 by xi'an

Consider a shell game with three shells and a ball, with The Riddler constraint that the shell hiding the ball is always exchanged with the location of an empty shell, chosen at random. If the ball starts under the rightmost shell, what is the distribution of its location after N steps?

Running an exploratory R code like

N=10                             # number of exchanges (to be varied)
o=rep(0,3)                       # counts of the final ball locations
for(n in 1:1e6){
  b=c(0,0,1)                     # ball starts under the rightmost shell
  for(t in 1:N){
    i=sample((1:3)[!b],1)        # pick one of the two empty shells
    b=0*b;b[i]=1}                # exchange: the ball moves there
  o=o+b}
o/1e6                            # frequencies of the three locations

shows that the two other positions are exchangeable and that the only difference in probability is between the rightmost position and both others, the rightmost probability starting at zero (right after the first exchange) and evolving as p⁺=(1-p⁻)/2, with the successive values 0, 1/2, 1/4, 3/8, 5/16, 11/32, … Very quickly converging to 1/3.
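The recursion can also be iterated exactly, without simulation noise (a minimal sketch, starting from p=0, the value right after the first exchange):

p=0                  # probability of the rightmost shell after one exchange
for(n in 1:20){
  cat(n,p,"\n")
  p=(1-p)/2}         # one further exchange; converges to 1/3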

self-avoiding random angles

Posted in Books, Kids, pictures, Statistics, Travel on June 17, 2022 by xi'an

An apparently easy riddle from The Riddler this week: given N random half-lines starting from a N-S segment, what is the probability that none ever intersect? If m ½-lines stand on the same side of the segment, they will not intersect when their angle with the segment is decreasing with the position of the endpoint of each ½-line along the segment. Assuming that drawing a ½-line at random is akin to uniformly drawing an angle on (0,π), this no-X'ing event happens when the m angles are properly ordered, a 1/m! event. Independently, the probability for the ½-lines on the other side is 1/(N-m)!. The joint probability is thus

\displaystyle\sum_{m=0}^N {N\choose m}\frac{1}{2^N} \frac{1}{m!(N-m)!}=\frac{(2N)!}{2^N(N!)^3}
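As a sanity check, the sum and the closed-form expression can be compared in R (a minimal sketch, with N=6 an arbitrary choice):

N=6                                      # arbitrary number of half-lines
sum(choose(N,0:N)/(2^N*factorial(0:N)*factorial(N:0)))
factorial(2*N)/(2^N*factorial(N)^3)      # returns the same value as the sum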

What are the chances of that?

Posted in Books, pictures, Statistics, University life on May 13, 2022 by xi'an

What are the chances that I review a book with this title, a few months after reviewing a book called What is luck?! This one is written by Andrew Elliott, whose Is that a big number? I reviewed a wee bit earlier… And that the cover of this book involves a particularly unlucky sequence of dice, as in my much earlier review of Krzysztof Burdzy's book? (About 10⁻⁶ times as likely as the likeliest draw!)

The (relative) specificity of this book is to try to convey the notions of chance and uncertainty to the general public, more by demonstrating through examples and simulations that our intuition is most often wrong than by delving into psychological reasons as in Barbara Blatchley's book. The author advances five dualities that underlie our (dysfunctional) relation to chance: individual vs. collective, randomness vs. meaning, foresight vs. insight, uniformity vs. variability, and disruption vs. opportunity.

“News programmes clearly understand that the testimonies of individuals draw better audiences than the summaries of statisticians.” (p. xvii)

Some of the nice features of the book are (a) the description of a probabilistic problem at the beginning of each chapter, to be solved at the end, (b) the use of simulation experiments, represented by coloured pixels over a grey band crossing the page, including a section on pseudorandom generators [which is less confusing than the quote below may indicate!], (c) taking full advantage of the quincunx apparatus, and (d) very few apologies for getting into formulas. And even a relevant quote of Taleb's Black Swan about the ludic fallacy. On the other hand, the author spends quite a large component of the book on games of chance, exhibiting a ludic tendency! And contemplates biased coins, while he should know better! The historical sections may prove too much for both informed and uninformed readers. (However, I learned that the UK Government had used a form of lottery to pay interest on premium bonds.) And the later parts are less numerical and quantified, even though the author brings in the micromort measurement [invented by Ronald Howard and] favoured by David Spiegelhalter. Who actually appears to have inspired several other sections, like the one on coincidences (which remains quite light in its investigation!). I finished the book rather quickly, browsing through what read mostly as anecdotes, with a lesser feel of a unified discourse. I did not find the attempt to link with the COVID pandemic, which definitely resets our clocks on risk, particularly alluring…

“People go to a lot of trouble to generate truly random numbers—sequences that are impossible to predict.” (p.66)

The appearance of the Normal distribution is somewhat overdone, almost mystical, even if the tone gets more reasonable by the end of the corresponding chapter.

“…combining random numbers from distributions that really have no business being added together (…) ends up with a statistic that actually fits the normal distribution quite well.” (p.83)

The part about Bayes and Bayesian reasoning does not include any inference, with a rather duh! criticism of prior modelling.

“If you are tempted to apply a group statistic derived from a broad analysis to a more narrow purpose, you run the risk of making an unfair judgement.” (p.263)

The section about Xenakis’ musical creations as a Markov process was most interesting (and novel to me). I also enjoyed the shared cultural entries, esp. literary ones. Like citing the recent Chernobyl TV drama. Or Philip K. Dick’s Do Androids Dream of Electric Sheep? Or yet Monty Python’s Life of Brian. Overall, there is enough trivia and engagement to keep reading the book till its end!

almost reversed 2-lag Markov chain

Posted in Kids, R, Statistics on July 7, 2021 by xi'an

Another simple riddle from the Riddler: take a binary sequence and associate with this sequence a score vector made of the numbers of consecutive ones counted from each position. If the sequence is ten steps long and contains 3 ones located at random, what is the expected total score? (The original story is much more complex and, as often, involves strange sports!)

Adding two zeroes at times 11 and 12, this is quite simple to code, e.g.

f=0*(1:10)                       # frequencies of the total score
for(v in 1:1e6){
 r=0*f                           # reward collected from each position
 s=sample(1:10,3)                # random locations of the three ones
 for(t in s)                     # consecutive ones counted from t
   r[t]=1+((t+1)%in%s)*(1+((t+2)%in%s))
 f[sum(r)]=f[sum(r)]+1}
f=f/1e6

and the outcome recovers the feature that the only possible scores are 1+1+1=3 (all ones separated), 1+1+2=4 (two ones contiguous), and 1+2+3=6 (all ones contiguous), with respective frequencies 56/120, 56/120, and 8/120, where 120=C(10,3) is the number of possible locations of the 3 ones. The expected total score is thus (3×56+4×56+6×8)/120=440/120=11/3.
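The simulation can furthermore be replaced by an exact enumeration over all C(10,3)=120 placements (a small sketch reusing the reward formula above):

sc=combn(1:10,3,function(s)sum(sapply(s,function(t)
  1+((t+1)%in%s)*(1+((t+2)%in%s)))))
table(sc)/choose(10,3)   # frequencies 56/120, 56/120, 8/120 of scores 3, 4, 6
mean(sc)                 # expected total score, 440/120=11/3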

Bernoulli factory in the Riddler

Posted in Books, Kids, R, Statistics on December 1, 2020 by xi'an

“Mathematician John von Neumann is credited with figuring out how to take a p biased coin and “simulate” a fair coin. Simply flip the coin twice. If it comes up heads both times or tails both times, then flip it twice again. Eventually, you’ll get two different flips — either a heads and then a tails, or a tails and then a heads, with each of these two cases equally likely. Once you get two different flips, you can call the second of those flips the outcome of your “simulation.” For any value of p between zero and one, this procedure will always return heads half the time and tails half the time. This is pretty remarkable! But there’s a downside to von Neumann’s approach — you don’t know how long the simulation will last.” The Riddler
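For illustration, the trick takes but a few lines of R (a minimal sketch, with vonneumann my own naming):

vonneumann=function(p){              # fair coin out of a p-coin
  repeat{
    x=rbinom(2,1,p)                  # two independent biased flips
    if(x[1]!=x[2])return(x[2])}}     # second flip of the discordant pair
mean(replicate(1e5,vonneumann(.2)))  # about 1/2 for any p in (0,1)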

The associated riddle (first one of the post-T era!) is to figure out the values of p for which an algorithm can be derived for simulating a fair coin in at most three flips. In one single flip, p=½ sounds like the unique solution. For two flips, p²=½, (1-p)²=½, and 2p(1-p)=½ work, but so do p+(1-p)p=½ and (1-p)+p(1-p)=½, and the number of cases grows further with at most three flips. However, since there are 2³=8 different sequences, there are 2⁸ ways to aggregate these events and thus at most 2⁸ resulting probabilities (including 0 and 1). Running a quick R code (one possible reconstruction stands below) and checking for proximity to ½ of any of these sums leads to

[1] 0.2062997 0.7937005 #p^3
[1] 0.2113249 0.7886753 #p^3+(1-p)^3
[1] 0.2281555 0.7718448 #p^3+p(1-p)^2
[1] 0.2372862 0.7627143 #p^3+(1-p)^3+p(1-p)^2
[1] 0.2653019 0.7346988 #p^3+2p(1-p)^2
[1] 0.2928933 0.7071078 #p^2
[1] 0.3154489 0.6845518 #p^3+2p^2(1-p)
[1] 0.352201  0.6477993 #p^3+p(1-p)^2+p^2(1-p)
[1] 0.4030316 0.5969686 #p^3+p(1-p)^2+3(1-p)p^2
[1] 0.5

which correspond to

1-p³=½, p³+(1-p)³=½, (1-p)³+(1-p)p²=½, p³+(1-p)³+p(1-p)²=½, (1-p)³+2(1-p)p²=½, 1-p²=½, p³+(1-p)³+p²(1-p)=½, (1-p)³+p(1-p)²+p²(1-p)=½, (1-p)³+p²(1-p)+3p(1-p)²=½, p³+p(1-p)²+3p²(1-p)=½, p³+2p(1-p)²+3(1-p)p²=½, p=½,

(plus the symmetric ones), leading to 19 different values of p producing a “fair coin”. Missing any other combination?!
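Here stands one possible reconstruction of that quick check (a sketch, not the original code, bracketing the crossings of ½ on a grid before polishing them with uniroot):

h=rowSums(expand.grid(0:1,0:1,0:1))      # heads in each three-flip sequence
grid=seq(1e-4,1-1e-4,length.out=1e3)
sols=NULL
for(k in 1:254){                         # proper non-empty aggregates
  set=which(as.integer(intToBits(k))[1:8]==1)
  g=function(p)sapply(p,function(q)sum(q^h[set]*(1-q)^(3-h[set])))-.5
  v=g(grid)
  for(i in which(v[-1]*v[-length(v)]<0)) # sign changes bracket the roots
    sols=c(sols,uniroot(g,grid[i:(i+1)],tol=1e-9)$root)}
sort(unique(round(sols,7)))              # should return the 19 values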

Another way to look at the problem is to find all roots of the 2^{2^n} equations

a_0p^n+a_1p^{n-1}(1-p)+\cdots+a_{n-1}p(1-p)^{n-1}+a_n(1-p)^n=1/2

where

0\le a_i\le{n \choose i}

(None of these solutions is rational, by the way, except p=½.) I also tried this route with a slightly longer R code, calling polyroot, and finding the same 19 roots for three flips, [at least] 271 for four, and [at least] 8641 for five (The Riddler says 8635!), the imprecision in the exact number of roots being due to rather poor numerical rounding by polyroot. (Since the coefficients of the above are not directly providing those of the polynomial, I went through an alternate representation as a polynomial in (1-p)/p, with a straightforward derivation of the coefficients.)
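For the record, here stands a sketch of that route under a different parameterisation, expanding each p^{n-i}(1-p)^i directly into monomial coefficients rather than going through (1-p)/p (hence not the author's exact code):

n=3
mono=function(d)                         # coefficients of p^d(1-p)^(n-d)
  c(rep(0,d),(-1)^(0:(n-d))*choose(n-d,0:(n-d)))
A=as.matrix(expand.grid(lapply(0:n,function(i)0:choose(n,i))))
sols=NULL
for(j in 1:nrow(A)){
  cf=c(-.5,rep(0,n))                     # the -1/2 constant term
  for(i in 0:n)cf=cf+A[j,i+1]*mono(n-i)  # add a_i p^{n-i}(1-p)^i
  while(length(cf)>1&&abs(cf[length(cf)])<1e-12)cf=cf[-length(cf)]
  if(length(cf)>1){
    rt=polyroot(cf)                      # coefficients in increasing degree
    sols=c(sols,Re(rt[abs(Im(rt))<1e-7&Re(rt)>1e-7&Re(rt)<1-1e-7]))}}
length(unique(round(sols,6)))            # should give the 19 roots for n=3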
