sweet red bean paste [あん]

Posted in Books, Kids, pictures, Travel on February 13, 2016 by xi'an

I am just back from watching this Japanese movie by Naomi Kawase that came out last year and won the Un Certain Regard award at the Cannes festival. It is indeed a movie with a most unusual “regard” and as such did not convince many critics. For instance, one Guardian critic summed up his view with the headline-like verdict that this “preposterous and overly sentimental opener to this year’s Un Certain Regard serves up major disappointment”. (As a counterpoint, the fine review in Les Cahiers du Cinéma catches the very motives I saw in the movie.) And of course one can watch the movie as a grossly stereotypical and unreservedly sentimental lemon if one clings to realism. For me, who first and mistakenly went to see it as an ode to Japanese food (in the same vein as Tampopo!), it unrolled as a wonderful tale that gained deeper and deeper consistency, just like the red bean jam thickening over the fire. There is clearly nothing realistic in the three characters and in the way they behave, from the unnaturally cheerful and wise old woman Tokue to the overly mature high-school student looking after the introspective cook. That no-one seems aware of a leper sanatorium at the centre of town, that the customers move from being ecstatic about the taste of the bean jam made by Tokue to being scared by her (former) leprosy, and that the awful owner of the shop where Sentaro cooks can so obviously pressure him, all this does not work as a realistic story, but it fits perfectly the philosophical tale that An is and the reflection it raises. While I am always bemused by the depth and wholeness in the preparation of Japanese food, the creation of a brilliant red bean jam is itself tangential to the tale (and I do not feel like seeking dorayaki when exiting the cinema), which is more about discovering one’s inner core and seeking harmony through one’s realisations. (I know this definitely sounds like cheap philosophy, but I still feel somewhat and temporarily enlightened from following the revolutions of those three characters towards higher spheres over the past two hours!)

new version of abcrf

Posted in R, Statistics, University life on February 12, 2016 by xi'an
Version 1.1 of our R package abcrf is now available on CRAN. Improvements over the earlier version are numerous and substantial. In particular, the calculations of the random forests have been parallelised and, for machines with multiple cores, the computing gain can be enormous. (The package goes along with the random forest model choice paper published in Bioinformatics.)
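
For the record, a minimal usage sketch: the function name abcrf comes from the package itself, but the reference table reftable, its model index column modindex, and the parallelisation arguments are my reading of the CRAN documentation and may differ in version 1.1, so the call is left commented out.

install.packages("abcrf")   # fetch the package from CRAN
library(abcrf)
# reftable: a hypothetical data frame with a model index column and the summary
# statistics, not an object from the post; paral would switch on multi-core runs
# model.rf <- abcrf(modindex ~ ., data = reftable, ntree = 500, paral = TRUE)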

The answer is e, what was the question?!

Posted in Books, R, Statistics on February 12, 2016 by xi'an

A rather exotic question on X validated: since π can be approximated by random sampling over a unit square, is there an equivalent for approximating e? This is an interesting question, as, indeed, why not focus on e rather than π after all?! But very quickly the very artificiality of the problem comes back to hit one in the face… With no restriction, it is straightforward to think of a Monte Carlo average that converges to e as the number of simulations grows to infinity. However, such methods, like Poisson or normal simulations, require complex functions like the sine, cosine, or exponential… But then someone came up with a connection to the great Russian probabilist Gnedenko, who gave as an exercise that the average number of uniforms one needs to add for the sum to exceed 1 is exactly e, because this expectation writes as

\sum_{n=0}^\infty\frac{1}{n!}=e

(The result was later detailed in the American Statistician as an introductory simulation exercise akin to Buffon’s needle.) This is a brilliant solution as it does not involve anything but a standard uniform generator. I do not think it relates in any close way to generation from a Poisson process with rate λ=1, where the probability of exceeding one in a single step is e⁻¹, so that a Geometric variable derived from this process also leads to an unbiased estimator of e. As an aside, W. Huber proposed the following elegantly concise line of R code to implement an approximation of e:

1/mean(n*diff(sort(runif(n+1))) > 1)

Hard to beat, isn’t it?! (Although it is more exactly a Monte Carlo approximation of

\left(1-\frac{1}{n}\right)^{-n}

which adds a further level of approximation to the solution….)
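
For completeness, a small R check of both estimators; the sample sizes are mine, purely illustrative, and not part of the original discussion.

n <- 1e6
# Gnedenko's exercise: count the uniforms needed for the running sum to exceed one
gnedenko <- function() { s <- 0; k <- 0; while (s <= 1) { s <- s + runif(1); k <- k + 1 }; k }
mean(replicate(1e5, gnedenko()))           # converges to e = 2.71828…
# W. Huber's one-liner, a Monte Carlo version of (1-1/n)^(-n)
1/mean(n * diff(sort(runif(n + 1))) > 1)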

conference deadlines [register now!!]

Posted in Kids, pictures, Statistics, Travel, University life on February 11, 2016 by xi'an

Registration is now open for our [fabulous!] CRiSM workshop on estimating [normalising] constants, in Warwick, on April 20-22 this year. While it is almost free (almost as in £40.00!), we strongly suggest you register asap if only to secure a bedroom on the campus at a moderate rate of £55.00 per night (breakfast included!). Plus we would like to organise the poster session(s) and the associated “elevator” talks for the poster presenters.

While the deadline for early registration at AISTATS is now truly over, we also encourage all researchers interested in this [great] conference to register as early as possible, if only [again] to secure a room at the conference location, the Parador Hotel in Cádiz. (Otherwise, there are plenty of rentals in the neighbourhood.)

Last but not least, the early registration for ISBA 2016 in Santa Margherita di Pula, Sardinia, is still open till February 29, after which the rate will move immediately to late registration fees. The same deadline applies to bedroom reservations in the resort, with apparently only a few rooms left for some of the nights. Rentals and hotels around are also getting filled rather quickly.

Bayesian composite likelihood

Posted in Books, Statistics, University life on February 11, 2016 by xi'an

“…the pre-determined weights assigned to the different associations between observed and unobserved values represent strong a priori knowledge regarding the informativeness of clues. A poor choice of weights will inevitably result in a poor approximation to the “true” Bayesian posterior…”

Last Xmas, Alexis Roche arXived a paper on Bayesian inference via composite likelihood. I find the paper quite interesting in that [and only in that] it defends the innovative notion of writing a composite likelihood as a pool of opinions about some features of the data. Recall that each term in the composite likelihood is a marginal likelihood for some projection z=f(y) of the data y, although, as in ABC settings, it is rare to derive closed-form expressions for those marginals. The composite likelihood is parameterised by powers of those components. Each component is associated with an expert, whose weight reflects its importance. The sum of the powers is constrained to be equal to one, even though I do not understand why the dimensions of the projections play no role in this constraint. Simplicity is advanced as an argument, which sounds rather weak… Even though this may be infeasible in any realistic problem, it would be more coherent to see the weights as producing the best Kullback approximation to the true posterior. Or to use a prior on the weights and estimate them along with the parameter θ. The former could be incorporated into the latter following the approach of Holmes & Walker (2013). While the ensuing discussion is most interesting, it falls short of connecting the different components in terms of the (joint) information they bring about the parameters, especially because the weights are assumed to be given rather than inferred, and especially when they depend on θ. I also wonder why the variational Bayes interpretation is not exploited any further. And I see no clear way to exploit this perspective in an ABC environment.
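
In symbols, the object under discussion is the following pooled posterior; this is my sketch of the notation, with the z_j=f_j(y) the projections and the w_j the expert weights, not a display lifted from the paper:

\pi_c(\theta\mid y)\propto\pi(\theta)\,\prod_{j=1}^J f_j(z_j\mid\theta)^{w_j},\qquad \sum_{j=1}^J w_j=1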

ABC for wargames

Posted in Books, Kids, pictures, Statistics on February 10, 2016 by xi'an

I recently came across an ABC paper in PLoS ONE by Xavier Rubio-Campillo applying this simulation technique to the validation of some differential equation models linking force sizes and values for both sides. The dataset is made of battle casualties separated into four periods, from pike and musket to the American Civil War. The outcome is used to compute an ABC Bayes factor, but it seems this computation is highly dependent on the tolerance threshold, with highly variable numerical values. The most favoured model includes a fatigue effect accounting for the decreasing efficiency of armies over time. While the paper somehow reminded me of a most peculiar book, I have no idea of the depth of this analysis, namely of how relevant it is to model a battle through a two-dimensional system of differential equations, given the numerous factors involved in the matter…
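
For readers unfamiliar with this kind of model, here is a crude R sketch of the classical Lanchester attrition equations, the textbook instance of such two-dimensional systems; this is purely illustrative, not necessarily the exact specification fitted in the paper, and the coefficients are made up.

lanchester <- function(x0, y0, a, b, dt = 0.01, tmax = 100) {
  # dx/dt = -b*y, dy/dt = -a*x : each side's losses are proportional to the
  # opposing force size, integrated here by a crude Euler scheme
  x <- x0; y <- y0; t <- 0
  while (x > 0 && y > 0 && t < tmax) {
    x.new <- x - b * y * dt
    y.new <- y - a * x * dt
    x <- x.new; y <- y.new; t <- t + dt
  }
  c(duration = t, survivors.x = max(x, 0), survivors.y = max(y, 0))
}
lanchester(x0 = 1000, y0 = 800, a = 0.05, b = 0.08)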

snapshot from Oxford [#2]

Posted in Kids, pictures, Travel, University life on February 9, 2016 by xi'an
