Archive for wikipedia

on arithmetic derivations of square roots

Posted in Books, Kids, pictures, R, Statistics on November 13, 2020 by xi’an

An intriguing question made a short-lived appearance on the CodeGolf section of Stack Exchange, before being removed, namely the (most concise possible) coding of an arithmetic derivation of the square root of an integer, S, with a 30 digit precision and using only arithmetic operators. I was not aware of the myriad of solutions available, as demonstrated on the dedicated Wikipedia page, and I ended up playing with three of them during a sleepless pre-election night!

The first solution for finding √S is based on a continued fraction representation of the root,

\sqrt{S}=a+\cfrac{r}{2a+\cfrac{r}{2a+\ddots}}

with a²≤S and r=S-a². It is straightforward to code-golf:

while((r<-S-T*T)^2>1e-9)T=(F<-2*T+r/(2*T+F))-T;F

but I found it impossible to reach the 30 digit precision (even when decreasing the error bound below 10⁻⁹). Given the strict rules of the game, this would have been considered a failure.
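For reference, the same continued-fraction recursion can reach the full 30 digits when run in exact rational arithmetic rather than floating point. Here is a Python sketch (not the golfed R above; the function name and the number of truncation terms are my own choices):

```python
from fractions import Fraction

def cf_sqrt(S, digits=30, terms=120):
    """Floor of 10**digits * sqrt(S) via the continued fraction
    sqrt(S) = a + r/(2a + r/(2a + ...)), with a^2 <= S, r = S - a^2."""
    a = 1
    while (a + 1) ** 2 <= S:      # integer square root of S
        a += 1
    r = S - a * a
    if r == 0:                    # S is a perfect square
        return a * 10 ** digits
    # evaluate the truncated fraction exactly, innermost term first
    F = Fraction(2 * a)
    for _ in range(terms):
        F = 2 * a + Fraction(r, F)
    root = a + Fraction(r, F)     # rational approximation of sqrt(S)
    scaled = root * 10 ** digits
    return scaled.numerator // scaled.denominator
```

Each extra term contracts the error by the factor (√S−a)/(√S+a)<1, so 120 terms is more than enough for 30 exact decimal digits of any non-square S.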

The second solution is Goldschmidt’s algorithm

b=S
T=1/sum((1:S)^2<S) 
while((1-S*prod(T)^2)^2>1e-9){
  b=b*T[1]^2
  T=c((3-b)/2,T)}
S*prod(T)

which is longer to code-golf but produces both √S and 1/√S (and converges faster than the Babylonian method and Newton–Raphson). Again, no luck with high precision, and the solution would almost surely have been unacceptable for the game.
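The logic of the R snippet above can be restated as a short Python sketch (my own transcription, with a rounded integer square root as starting guess): a quantity b is driven towards 1 by repeated correction factors Y=(3−b)/2, while the running product of the factors converges to 1/√S.

```python
def goldschmidt_sqrt(S, tol=1e-12):
    """Goldschmidt's iteration: returns (sqrt(S), 1/sqrt(S))."""
    a = max(1, round(S ** 0.5))   # crude first guess for sqrt(S)
    prod = 1.0 / a                # running product -> 1/sqrt(S)
    b = S * prod * prod           # b is driven towards 1
    while abs(1.0 - b) > tol:
        Y = (3.0 - b) / 2.0       # Newton-style correction factor
        b *= Y * Y
        prod *= Y
    return S * prod, prod
```

The rounded initial guess keeps b₀ within the basin of attraction of 1, after which the convergence is quadratic, which is what makes the method competitive with Newton–Raphson.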

The third solution is the most interesting [imho] as it mimics long division, working two digits at a time (with a connection to Napier’s bones):

`~`=length
D=~S
S=c(S,0*(1:30))
p=d=0
a=1:9
while(~S){ 
  F=c(F,x<-sum(a*(20*p+a)<=(g<-100*d+10*S[1]+S[2])))
  d=g-x*(20*p+x)
  p=x+10*p
  S=S[-1:-2]}
sum(10^{1+D/2-1:~F}*F)

while providing an arbitrary number of digits with no error. This code requires S to be entered as a sequence of digits (with a possible extra leading digit 0 to make the number of digits D even). Returning one digit at a time, it would further have satisfied the constraints of the question (if in a poorly condensed manner).
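The same digit-by-digit scheme reads more plainly in an uncondensed Python sketch (my own transcription of the algorithm above, returning the digits as a string): at each step two digits are brought down, and the largest digit x with x(20p+x)≤c is appended to the partial root p.

```python
def digit_sqrt(S, digits=30):
    """Long-division square root: one output digit per pair of input digits."""
    s = str(S)
    if len(s) % 2:
        s = "0" + s                        # pad so digit pairs align
    pairs = [s[i:i + 2] for i in range(0, len(s), 2)]
    pairs += ["00"] * digits               # extra pairs give decimal digits
    p, rem, out = 0, 0, []
    for pair in pairs:
        c = 100 * rem + int(pair)          # bring down the next two digits
        # largest digit x with x * (20p + x) <= c
        x = max(d for d in range(10) if d * (20 * p + d) <= c)
        rem = c - x * (20 * p + x)
        p = 10 * p + x
        out.append(str(x))
    k = len(s) // 2                        # digits before the decimal point
    return "".join(out[:k]) + "." + "".join(out[k:])
```

Since every produced digit is exact, the precision is limited only by the number of "00" pairs appended, unlike the two floating-point solutions above.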

artificial EM

Posted in Books, Kids, R, Statistics, University life on October 28, 2020 by xi’an

When addressing an X validated question on the use of the EM algorithm when estimating a Normal mean, my first comment was that it was inappropriate since there is no missing-data structure to anchor it on. However, I then reflected upon the infinite number of ways to demarginalise the Normal density into a joint density

\int f(x,z;\mu)\,\text{d}z = \varphi(x-\mu)

from the (slice sampler) call to an indicator function for f(x,z;μ) to a joint Normal distribution with an arbitrary correlation. While the joint Normal representation produces a sequence converging to the MLE, the slice representation utterly fails as the indicator functions make any starting value of μ a fixed point for EM.
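The joint-Normal case is easy to check numerically. In the sketch below (a Python illustration with an arbitrary correlation ρ and simulated data, not code from the post), (x,z) is bivariate Normal with mean (μ,0), unit variances and correlation ρ, so the E-step gives E[z|x,μ]=ρ(x−μ) and the M-step gives μ=x̄−ρz̄; the update therefore contracts towards the MLE x̄ at geometric rate ρ², whatever the starting value:

```python
import random

random.seed(0)
x = [random.gauss(2.0, 1.0) for _ in range(500)]   # toy sample, true mean 2
xbar = sum(x) / len(x)
rho = 0.9      # arbitrary correlation of the demarginalisation
mu = -5.0      # deliberately poor starting value
for _ in range(200):
    zbar = rho * (xbar - mu)     # E-step: average of E[z_i | x_i, mu]
    mu = xbar - rho * zbar       # M-step: complete-data MLE of mu
# mu is now numerically indistinguishable from the MLE xbar
```

By contrast, replacing the joint Normal with the slice indicator representation makes the complete-data log-likelihood flat (or −∞) in μ, which is why every starting value is a fixed point there.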

Incidentally, when quoting from Wikipedia on the purpose of the EM algorithm, the following passage

Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations.

struck me as confusing and possibly wrong, since it seems to suggest seeking a maximum in both the parameters and the latent variables, which does not produce the same value as maximising the observed likelihood.
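A toy illustration of the gap (a Python sketch with made-up data, not from the post): for a balanced mixture ½N(μ,1)+½N(−μ,1), maximising jointly over μ and the latent component labels amounts to a classification likelihood and returns the mean of the |xᵢ|, while maximising the observed likelihood (labels integrated out) can return an altogether different value, here 0:

```python
import math

x = [-1.0, -0.8, 0.9, 1.1]   # toy sample attributed to the mixture

def marginal_loglik(mu):
    # observed likelihood: the latent labels are integrated out
    return sum(math.log(0.5 * math.exp(-(xi - mu) ** 2 / 2)
                        + 0.5 * math.exp(-(xi + mu) ** 2 / 2)) for xi in x)

def joint_loglik(mu):
    # likelihood maximised over the labels: each point picks
    # whichever component fits it best (a classification likelihood)
    return sum(max(-(xi - mu) ** 2 / 2, -(xi + mu) ** 2 / 2) for xi in x)

grid = [i / 1000 for i in range(3001)]
mu_marginal = max(grid, key=marginal_loglik)   # 0.0 for this sample
mu_joint = max(grid, key=joint_loglik)         # 0.95, the mean of |x_i|
```

For this sample Σxᵢ²&lt;n, so the observed likelihood is maximal at μ=0, while the joint maximisation always lands on the mean of the absolute values, confirming that the two targets differ.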

babbage in, babbage out?!

Posted in Books, Kids, Statistics on May 25, 2020 by xi’an

When checking for the origin of “garbage in, garbage out” on Wikipedia, I came upon this citation from Charles Babbage:

“On two occasions I have been asked, “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”

following earlier quotes from him on this ‘Og.

support Wikipedia

Posted in Books, University life on March 15, 2020 by xi’an

psycho-history [Hari Seldon to the rescue!]

Posted in Books, Kids, pictures, Statistics, Travel on December 13, 2019 by xi’an

A “long read” article in the Guardian a few weeks ago sounds like Isaac Asimov’s Foundation‘s core concept, namely psychohistory, turning into a real academic discipline! In the books of this fantastic series, the father of this new science of predictive mathematical (or statistical) sociology, Hari Seldon, makes predictions that extend so far into the future that, at every major crisis of Asimov’s galactic empire, he delivers a pre-recorded message that indicates how to cope with the crisis to save the empire. Or so it seems! (As a teenager, I enjoyed the Foundation books very much, reading the first three volumes several times, to the point that I now wonder whether they influenced my choice of a statistics major…! Presumably not, but it makes a nice story!!! Actually, Paul Krugman blames Asimov for his choice of economics as being the closest to psychohistory.)

“I assumed that the time would come when there would be a science in which things could be predicted on a probabilistic or statistical basis (…) can’t help but think it would be good, except that in my stories, I always have opposing views. In other words, people argue all possible… all possible… ways of looking at psychohistory and deciding whether it is good or bad. So you can’t really tell. I happen to feel sort of on the optimistic side. I think if we can somehow get across some of the problems that face us now, humanity has a glorious future, and that if we could use the tenets of psychohistory to guide ourselves we might avoid a great many troubles. But on the other hand, it might create troubles. It’s impossible to tell in advance.” I. Asimov

The Guardian entry is about Peter Turchin, a biologist who had “by the late 1990s answered all the ecological questions that interested him” and then turned his attention to history, creating a new field called cliodynamics, which bears eerie similarities with Seldon’s psychohistory! Using massive databases of historical events (what is a non-historical event, by the way?!) to predict the future, and relying on a premise of quasi-periodic cycles to fit such predictions, with a whiff of Soviet-era theories… I did not read the entire piece in depth (it’s a “long read”, remember?!), and even less of the background theory, but I did not spot there massive support from a large academic community for Turchin’s approach (which is mentioned in the psychohistory entry on Wikipedia). And, while this is not a major argument from Feyerabend’s perspective (of fundamental scientific advances resulting from breaks with consensus), it seems hard to think of a predictive approach that is not negatively impacted by singularity events, from the emergence of The Mule in Foundation, to the new scale of challenges posed by the acceleration of the climate collapse, or the societal globalisation cum communitarian fragmentation caused by social media. And as a last warning, a previous entry in the same column warned readers “how statistics lost their power and big data controlled by private companies is taking over”, hence going in the opposite direction.