Archive for Vancouver

Nature tidbits [the Bayesian brain]

Posted in Statistics on March 8, 2020 by xi'an

In the latest Nature issue, a long cover story on Asimov's contributions to science and rationality. And a five-page article on the dopamine reward in the brain seen as a probability distribution, interpreted as distributional reinforcement learning by researchers from DeepMind, UCL, and Harvard. Going as far as "testing" for this theory with a p-value of 0.008..! Which could as well be a signal of variability between neurons in their dopamine responses (with a p-value of 10⁻¹⁴, whatever that means). Another article on deep learning for protein (3D) structure prediction. And another one on learning neural networks via specially designed devices called memristors. And yet another one on West African population genetics based on four individuals from the Stone to Metal Age (between 8000 and 3000 years ago), SNPs, PCA, and admixture. With no ABC mentioned (I no longer have access to the journal, having missed the renewal deadline for my subscription!). And the literal plague of a locust invasion in Eastern Africa. Making me wonder anew why proteins could not be recovered from the swarms of locusts to partly compensate for the damage. (Locusts eat their bodyweight in food every day.) And the latest news from NeurIPS about diversity and inclusion. And ethics, as in checking for the responsibility and societal consequences of research papers. Reviewing the maths of a submitted paper or the reproducibility of an experiment is already challenging at times, but evaluating the biases in massive proprietary datasets or the long-term societal impact of a classification algorithm may prove beyond the realistic.
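To give a flavour of the distributional idea (a toy illustration entirely of my own making, not the paper's model, code, or data): a population of units, each performing an asymmetric quantile update à la Robbins-Monro, collectively encodes the whole reward distribution rather than just its mean.

```python
import random

random.seed(1)

# Each "unit" tracks one quantile of the reward distribution
# via an asymmetric stochastic update (standard quantile estimation).
taus = [0.1, 0.3, 0.5, 0.7, 0.9]   # pessimistic-to-optimistic units
q = [0.0] * len(taus)              # current quantile estimates
lr = 0.01                          # learning rate

for _ in range(50_000):
    # toy bimodal "reward": mixture of N(0,1) and N(3,1)
    r = random.gauss(0, 1) if random.random() < 0.5 else random.gauss(3, 1)
    for k, tau in enumerate(taus):
        # move up with weight tau on an upward surprise, down with 1 - tau
        q[k] += lr * (tau if r > q[k] else tau - 1)

print([round(x, 2) for x in q])  # increasing estimates spanning both modes
```

With a single unit (tau = 0.5) only the median would survive; the spread of taus is what preserves the bimodality.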

Metropolis in 95 characters

Posted in pictures, R, Statistics, Travel on January 2, 2020 by xi'an

Here is an R function that produces a Metropolis-Hastings sample for a univariate log-target f, when the latter is defined outside as another function, using a Gaussian random walk with scale one as proposal. (Inspired by an X validated question.)

m<-function(T,y=rnorm(1))ifelse(rep(T>1,T),
  c(y*{f({z<-m(T-1)}[1])-f(y+z[1])<rexp(1)}+z[1],z),y)

The function is definitely not optimal, crashes for values of T larger than 580 (unless one modifies the stack size, as the recursion goes that deep), and implements the most basic version of a Metropolis-Hastings algorithm. But as a code-golf challenge (solved on a late plane ride), this was a fun exercise.
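For readers who would rather see the ungolfed logic, here is a readable sketch of the same vanilla random-walk Metropolis-Hastings algorithm, written in Python (the function name and layout are mine, not part of the original one-liner):

```python
import math
import random

def metropolis(f, T, scale=1.0):
    """Vanilla random-walk Metropolis-Hastings for a univariate log-target f.

    Starts from a standard normal draw, proposes Gaussian steps of the
    given scale, and returns a chain of length T.
    """
    x = random.gauss(0, 1)
    chain = [x]
    for _ in range(T - 1):
        y = x + random.gauss(0, scale)               # random-walk proposal
        if math.log(random.random()) < f(y) - f(x):  # Metropolis acceptance
            x = y
        chain.append(x)
    return chain

# example target: standard normal, log-density up to a constant
random.seed(42)
chain = metropolis(lambda x: -x * x / 2, 10_000)
```

On this toy target the empirical mean and variance of the chain come out close to 0 and 1, as they should.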

local mayhem, again and again and again…

Posted in Kids, pictures, Travel, University life on December 27, 2019 by xi'an

Public transport in France, and in particular in Paris, has now been on strike for three weeks, in connection with a planned reform of the retirement conditions of workers with special status, such as those in the train and metro companies, who can retire earlier than the legal age (62). As usual with social unrest in France, other categories joined the strike and the protest, including teachers and health service public workers, as well as police officers, fire-fighters and opera dancers, and even some students. Below are some figures from the OECD about average retirement conditions in nearby EU countries, which show that these conditions are apparently better in France. (With the usual proviso that these figures have been correctly reported.) In particular, life expectancy at the start of retirement is the highest for both men and women. Coincidentally (or not), my UCU-affiliated colleagues in Warwick were also on strike a few weeks ago over their pensions…

Travelling through and around Paris by bike, I have not been directly affected by the strikes (as the heavy traffic makes biking easier!), except on one morning last week when I was teaching at ENSAE: I blew a tyre midway there and had to hop to the nearest train station to board the last train of the morning, arriving (only) 10 minutes late. Going back home was only feasible by taxi, which happened to be large enough to take my bicycle as well… Travelling to and from the airport for Vancouver and Birmingham was equally impossible by public transportation, meaning spending fair amounts of time in, and money on, taxis! And listening to taxi drivers' opinions or musical tastes. Nothing to moan about when considering the five to six hours spent by some friends of mine getting to work and back.

tea tasting at Van Cha

Posted in Kids, pictures, Travel on December 26, 2019 by xi'an

This recent trip to Vancouver gave me the opportunity to enjoy a Chinese tea-tasting experience. On my last visit to the city, I had noticed a small tea shop very near the convention centre but could not find the time to stop there. This time I took advantage of the AABI lunch break to get back to the shop, which was open (on a Sunday), and sat down for a ripe Pu-erh tasting. A fatal if minor mistake in ordering, namely that this was Pu-erh aged within a dried yuzu shell, which gave the tea a mixed taste of fruit and tea, at least for the first brews, and remained very far from the very earthy tastes I was expecting. (But it reminded me of a tangerine-based Pu-erh Yulia gave me last time we went to Banff. And I missed an ice-climbing opportunity!)
This was nonetheless a very pleasant tasting experience, with the tea hostess brewing one tiny tea pot after another, including a first one to wet and clean the tea, with very short infusion times, and tea rounds keeping their strong flavour even after several passes. In a very quiet atmosphere altogether, with a well-used piece of wood (as shown on top) in lieu of a sink to get rid of the water used to warm and clean pots and mugs (and a clay frog whose role remained mysterious throughout!).
At some point in the degustation, another customer came in, obviously in a quite different league, as he was carrying his own tea pancake, from which the hostess extracted a few grams that she processed most carefully. This must have been an exceptional tea, as she was rewarded with a small cup of the first brew, which she seemed to appreciate a lot (albeit commenting in Chinese, so I could not say).
As I was about to leave, having spent more time than expected and drunk five brews of my tea, plus extra cups of a delicate Oolong (hence missing a talk by Matt Hoffman I was looking forward to!), I chatted for a little while with this connoisseur, who told me of the importance of using porous clay pots and of not mixing them between different teas. Incidentally, he was also quite dismissive of Japanese teas, (iron) teapots, and the tea ceremony, which I found in petto a rather amusing attitude (if expected from some aficionados).

riddle on a circle

Posted in Books, Kids, R, Travel on December 22, 2019 by xi'an

The Riddler’s riddle this week provides another opportunity to resort to brute-force simulated annealing!

Given a Markov chain defined on the torus {1,2,…,100}, whose only moves are a drift to the right (modulo 100) and a uniformly random jump, find the optimal transition matrix to reach 42 in a minimum (average) number of moves.

Which I coded on my plane to Seattle, under the assumption that there is nothing to do when the chain is already at 42, and the reasoning that there is no gain (on average) in randomising the choice between the right shift and the random jump.

dure=pmin(c(41:0,99:42),50) # initial guess: clockwise distance to 42, capped at 50
temp=.01                    # annealing temperature
for (t in 1:1e6){
  i=sample((1:100)[-42],1)  # pick a position other than the target 42
  dura=1+mean(dure)         # expected duration of a uniform random jump
  if (temp*log(runif(1))<dure[i]-dura) dure[i]=dura
  # expected duration of a right shift (modulo 100)
  if(temp*log(runif(1))<dure[i]-(dura<-1+dure[i*(i<100)+1]))
    dure[i]=dura
  temp=temp/(1+.1e-4*(runif(1)>.99))} # occasional cooling

In all instances, the solution is to jump at random from any position except those between 29 and 41 (where the right shift is optimal), for an average of 13.64286 steps to reach 42 from the jumping positions, i.e., those outside the range 29-41.
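The annealing output can also be cross-checked deterministically: writing V(i) for the expected number of steps from position i, the optimal values solve V(i) = 1 + min{V(i+1 mod 100), mean(V)}, with V(42) = 0. A short fixed-point iteration (my own sanity check, not part of the original solution) recovers the same threshold policy and the 13.64286 figure:

```python
N, TARGET = 100, 42

# expected steps to reach TARGET; V[TARGET] stays 0 throughout
V = [0.0] * (N + 1)  # 1-indexed: V[1..100]

for _ in range(2_000):
    jump = 1 + sum(V[1:]) / N        # cost of a uniform random jump
    newV = V[:]
    for i in range(1, N + 1):
        if i == TARGET:
            continue
        shift = 1 + V[i % N + 1]     # cost of shifting right (mod 100)
        newV[i] = min(shift, jump)
    V = newV

print(round(1 + sum(V[1:]) / N, 5))  # expected steps when jumping
```

At the fixed point, shifting beats jumping exactly on 29-41 (distances 1 to 13 from the target), and the jump cost solves 14 J = 177, i.e., 1 + mean(V) = 13.642857…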

no dichotomy between efficiency and interpretability

Posted in Books, Statistics, Travel, University life on December 18, 2019 by xi'an

“…there are actually a lot of applications where people do not try to construct an interpretable model, because they might believe that for a complex data set, an interpretable model could not possibly be as accurate as a black box. Or perhaps they want to preserve the model as proprietary.”

One article I found quite interesting in the second issue of HDSR is "Why are we using black box models in AI when we don't need to? A lesson from an explainable AI competition" by Cynthia Rudin and Joanna Radin, which describes the setting of a NeurIPS competition last year, the Explainable Machine Learning Challenge, of which I was blissfully unaware. The goal was to construct an operational black box predictor for credit scoring and turn it into something interpretable. The authors explain how they built instead a white box predictor (my terms!), namely a linear model, which could not be improved more than marginally by a black box algorithm. (It appears from the references that these authors have a record of analysing black-box models in various settings and demonstrating that they do not always bring more efficiency than interpretable versions.) While this is but one example, and even though the authors did not win the challenge (I am unclear why, as I did not check the background story, writing this on the plane to pre-NeurIPS 2019), the story remains quite telling.

I find this column quite refreshing and worth disseminating, as it challenges the current creed that intractable functions with hundreds of parameters will always do better, if only because they are calibrated within the box and eventually struggle against over-fitting within (and hence under-fitting outside). This is also a difficulty with common statistical models, but having the ability to construct error evaluations that show how quickly the prediction efficiency deteriorates may prove the more structured and more sparsely parameterised models the winners (of real-world competitions).

emergence [jatp]

Posted in Mountains, pictures, Travel, University life on December 11, 2019 by xi'an