Archive for IBM

quantum simulation or supremacy?

Posted in Books, Statistics, University life on November 11, 2019 by xi'an

Nature this week contains a prominent paper by Arute et al. reporting an experiment on a quantum computer running a simulation on a state-space of dimension 2⁵³ (53 being the number of qubits in their machine, plus one dedicated to error correction, if I get it right). With a million simulations of the computer state requiring 200 seconds. Which they claim would take 10,000 years (3×10¹¹ seconds) to run on a classical super-computer. And which could be used towards producing certified random numbers, an impressive claim given the intrinsic issue of qubit errors. (This part is not developed in the paper, but I wonder how a random generator could handle such errors.)
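As a rough sense of scale (a back-of-envelope computation of mine, not a figure from the paper), merely storing the full state vector over 53 qubits is already out of reach for a single classical machine:

# back-of-envelope arithmetic (mine, not from the paper): the state vector of a
# 53-qubit system holds 2^53 complex amplitudes
n_qubits = 53
amplitudes = 2 ** n_qubits
bytes_needed = amplitudes * 16           # one 128-bit complex number per amplitude
print(bytes_needed / 2 ** 50, "PiB")     # 128.0 PiB, far beyond any single machine's RAM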

“…a “challenger” generates a random quantum circuit C (i.e., a random sequence of 1-qubit and nearest-neighbor 2-qubit gates, of depth perhaps 20, acting on a 2D grid of n = 50 to 60 qubits). The challenger then sends C to the quantum computer, and asks it to apply C to the all-0 initial state, measure the result in the {0,1} basis, send back whatever n-bit string was observed, and repeat some thousands or millions of times. Finally, using its knowledge of C, the classical challenger applies a statistical test to check whether the outputs are consistent with the QC having done this.” The blog of Scott Aaronson

I have tried reading the Nature paper but had trouble grasping the formidable nature of the simulation they were discussing, as it seems to be the reproduction by simulation of a large quantum circuit of depth 20, as helpfully explained in the above quote and in the News & Views entry in the same issue of Nature. And then checking that the (non-uniform) distribution of the random outputs is the one expected. Which is the hard part, since it requires a classical (super-)computer to determine the theoretical distribution. According to Wikipedia, “the best known algorithm for simulating an arbitrary random quantum circuit requires an amount of time that scales exponentially with the number of qubits”. However, IBM (a competitor of Google in the quantum computer race) counter-claims that the simulation of the circuit takes only 2.5 days on a classical computer with optimised coding. (And this should be old news by the time this blog post comes out, since even a US candidate for the presidency has warned about it!)
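For the sake of illustration, here is a toy sketch of the kind of statistical check described in the quote, in the spirit of the linear cross-entropy benchmark. It is purely illustrative: the ideal output probabilities are made-up (Porter-Thomas-like) weights rather than the result of an actual circuit simulation. Samples from the ideal distribution score near one, while uniform strings from a fully decohered (or guessing) device score near zero.

import numpy as np

rng = np.random.default_rng(0)
n = 12                    # toy number of qubits (nowhere near 53!)
N = 2 ** n
shots = 100_000

# stand-in for the ideal output distribution of a random circuit:
# exponentially distributed probabilities, normalised to sum to one
p_ideal = rng.exponential(size=N)
p_ideal /= p_ideal.sum()

def linear_xeb(samples, p):
    # linear cross-entropy benchmark: 2^n * E[p(x)] - 1
    return N * p[samples].mean() - 1.0

good = rng.choice(N, size=shots, p=p_ideal)   # device sampling from the ideal law
bad = rng.integers(0, N, size=shots)          # decohered device: uniform bitstrings

print(linear_xeb(good, p_ideal))   # close to 1
print(linear_xeb(bad, p_ideal))    # close to 0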

go, go, go…deeper!

Posted in pictures, Statistics on February 19, 2016 by xi'an

While visiting Warwick last week, I came across the very issue of Nature with the highly advertised paper of David Silver and co-authors from DeepMind detailing how they designed their Go player algorithm that bested a European Go master five games in a row last September. Which is a rather unexpected and definitely brilliant feat given the state of the art! And compares (in terms of importance, if not of approach) with the victory of IBM Deep Blue over Garry Kasparov 20 years ago… (Another deep algorithm, showing that the attraction of programmers for this label has not died off over the years!)

This paper is not the easiest to read (especially over breakfast), with (obviously) missing details, but I gathered interesting titbits from this cursory read. One being the reinforced learning step where the predictor is improved by being played against earlier versions of itself. While this can lead to overfitting, the authors used randomisation to reduce this feature. This made me wonder if a similar step could be used on predictors like random forests, e.g., by weighting the trees or the probability of including one predictor or another.

Another feature of major interest is their parallel use of two neural networks in the decision-making, a first one estimating a probability distribution over moves, learned from millions of human Go games, and a second one returning a utility or value for each possible move. The first network is used for tree exploration with Monte Carlo steps, while the second leads to the final decision.
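To fix ideas, here is a minimal sketch (with invented names, constants, and statistics, not the actual AlphaGo implementation) of how a policy prior and a value estimate can be combined in a PUCT-style selection step of a Monte Carlo tree search:

import math

class Node:
    def __init__(self, prior):
        self.prior = prior       # probability given to the move by the policy network
        self.visits = 0
        self.value_sum = 0.0     # accumulated value-network evaluations
        self.children = {}       # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select(node, c_puct=1.0):
    # pick the child maximising Q + U: value estimates plus an exploration
    # bonus weighted by the policy prior and shrinking with the visit count
    total = sum(c.visits for c in node.children.values()) + 1
    def score(child):
        u = c_puct * child.prior * math.sqrt(total) / (1 + child.visits)
        return child.q() + u
    return max(node.children, key=lambda m: score(node.children[m]))

# toy usage with made-up statistics for three candidate moves
root = Node(prior=1.0)
root.children = {"A": Node(0.6), "B": Node(0.3), "C": Node(0.1)}
root.children["A"].visits, root.children["A"].value_sum = 10, 4.0
print(select(root))   # the still unexplored "B" wins here, thanks to its prior

The exploration bonus steers the search towards moves the policy network likes but that have been visited little, while the Q term accumulates the value estimates gathered so far.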

This is a fairly good commercial argument for machine learning techniques (and for DeepMind as well), but I do not agree with the doom-sayers predicting the rise of the machines and our soon-to-be annihilation! (Which is the major theme of Superintelligence.) This result shows that, with enough learning data and sufficiently high optimising power and skills, it is possible to produce an excellent predictor of the set of Go moves leading to a victory. Without the brute-force strategy of Deep Blue, which simply explored the tree of possible games to a much more remote future than a human player could (along with a perfect memory of a lot of games). I actually wonder if DeepMind has also designed a chess algorithm on the same principles: there is no reason why it should not work. However, this success does not predict the soon-to-come emergence of AIs able to deal with vaguer and broader scopes: in that sense, automated drivers are much more of an advance (unless they start bumping into other cars and pedestrians on a regular basis!).

Horizon Maths 2015: Santé & Données

Posted in pictures, Statistics, University life on November 16, 2015 by xi'an

Le Monde and the replication crisis

Posted in Books, Kids, Statistics on September 17, 2015 by xi'an

A rather poor coverage of the latest article in Science on the replication crisis in psychology in Le Monde Sciences & Medicine weekly pages (and mentioned a few days ago on Andrew’s blog, with the terrific if unrelated poster for Blade Runner…):

L’étude repose également sur le rôle d’un critère très critiqué, la “valeur p”, qui est un indicateur statistique estimant la probabilité que l’effet soit bien significatif. [The study also relies on the role of a much-criticised criterion, the “p-value”, a statistical indicator estimating the probability that the effect is indeed significant.]

As you may guess from the above (pardon my French!), the author of this summary of the Science article (a) has never heard of a p-value (which translates as niveau de signification in French statistics books) and (b) confuses the probability of exceeding the observed quantity under the null with the probability of the alternative. The remainder of the paper is more classical, pointing out the need for preregistered protocols in experimental sciences, even though it mostly states the evidence, like the decrease in significant effects for prepublished protocols. Apart from this mostly useless entry, there are rather interesting snapshots in the issue: Stephen Hawking’s views on how information could escape a black hole, IBM software for predicting schizophrenia, Parkinson’s disease as a result of hyperactive neurons, diseased Formica fusca ants taking some harmful drugs to heal, …
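Going back to the p-value point: a minimal numerical sketch (with a made-up test statistic) of what a p-value actually computes, namely a tail probability under the null, not the probability that the effect is real:

from math import erfc, sqrt

# toy one-sided z-test: the p-value is the probability, under the null,
# of a result at least as extreme as the one observed
z_obs = 2.1                              # hypothetical standardised test statistic
p_value = 0.5 * erfc(z_obs / sqrt(2))    # P(Z >= z_obs | H0), with Z ~ N(0,1)
print(p_value)                           # about 0.018, which says nothing direct about the alternative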

Monte Carlo patent

Posted in Statistics, University life on August 17, 2011 by xi'an

Julien just pointed me to this incredible patent of the Monte Carlo principle! I cannot see anything new there compared with the principles laid out by Ulam, von Neumann and Metropolis in the 1940s… So each time one uses a Monte Carlo estimation of variation, incl. bootstrap, this patent should be acknowledged?! This surely sounds absurd… All the worse because those “authors” work at IBM research labs.
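For the record, the allegedly patentable principle fits in a few lines, e.g. a bootstrap (Monte Carlo) evaluation of the standard error of a sample mean, on made-up data:

import numpy as np

# bootstrap evaluation of the standard error of a mean: resample the data
# with replacement, recompute the mean, and repeat many times
rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=2.0, size=100)   # hypothetical sample

B = 10_000
boot_means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                       for _ in range(B)])
print(boot_means.std(ddof=1))   # close to the theoretical 2/sqrt(100) = 0.2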