Archive for Wharton Business School

uniform correlation mixtures

Posted in Books, pictures, Statistics, University life on December 4, 2015 by xi'an

Kai Zhang and my friends from Wharton, Larry Brown, Ed George and Linda Zhao, arXived last week a neat mathematical foray into the properties of a marginal bivariate Gaussian density once the correlation ρ is integrated out. While the univariate marginals remain Gaussian (unsurprisingly, since these marginals do not depend on ρ in the first place), the joint density has the surprising property of being

f(x,y) = ½ {1 − Φ(max(|x|,|y|))}

which turns an infinitely regular density into a density that is not even differentiable everywhere. And which is constant on squares rather than circles or ellipses. This is somewhat paradoxical in that the intuition (at least my intuition!) is that integration increases regularity… I also like the characterisation of the distributions factorising through the infinite norm as scale mixtures of the infinite-norm equivalent of normal distributions. The paper proposes several threads for extensions of this most surprising result. Others come to mind:

  1. What happens when the Jeffreys prior is used in place of the uniform? Or Haldane's prior?
  2. Given the mixture representation of t distributions, is there an equivalent for t distributions?
  3. Is there any connection with the equally surprising resolution of the Drton conjecture by Natesh Pillai and Xiao-Li Meng?
  4. In the Khintchine representation, correlated normal variates are created by multiplying the square root of a single χ²(3) variate by a vector of uniforms on (-1,1). What are the resulting variates for other degrees of freedom k in the χ²(k) variate?
  5. I also wonder at a connection between this Khintchine representation and the Box-Müller algorithm, as in this earlier X validated question that I turned into an exam problem.
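The Khintchine representation in point 4 can be sketched numerically (a minimal illustration with my own variable names, relying on the standard form of Khintchine's theorem: a standard normal is distributed as U√W for U uniform on (-1,1) and W a χ²(3) variate):

```python
import numpy as np

# Sketch of the Khintchine representation: sharing one chi-squared(3)
# variate W across a vector of independent uniforms on (-1,1) produces
# dependent coordinates whose marginals are each standard normal.
def khintchine_normals(n_vectors, dim, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    w = rng.chisquare(df=3, size=n_vectors)           # one chi2(3) per vector
    u = rng.uniform(-1.0, 1.0, size=(n_vectors, dim)) # vector of uniforms
    return np.sqrt(w)[:, None] * u                    # dependent coordinates

x = khintchine_normals(100_000, 2)
# each coordinate should be (close to) standard normal
print(x[:, 0].mean(), x[:, 0].std())
```

Plotting the pairs (x[:,0], x[:,1]) indeed shows level sets that are squares rather than ellipses.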

sampling from time-varying log-concave distributions

Posted in Statistics, University life on October 2, 2013 by xi'an

Sasha Rakhlin from Wharton sent me this paper he wrote (and arXived) with Hariharan Narayanan on a specific Markov chain algorithm that handles sequential Monte Carlo problems for log-concave targets. By relying on novel (by my standards) mathematical techniques, they manage to obtain geometric ergodicity results for random-walk based algorithms and log-concave targets. One of the new tools is the notion of a self-concordant barrier, a sort of convex potential function F associated with a reference convex set and with Lipschitz properties. The second tool is a Gaussian distribution based on the metric induced by F. The third is the Dikin walk Markov chain, which uses this Gaussian as proposal and moves almost like the Metropolis-Hastings algorithm, except that it rejects with at least a probability of ½. The scale (or step size) of the Gaussian proposal is determined by the regularity of the log-concave target. In that setting, the total variation distance between the target at the t-th level and the distribution of the Markov chain can be fairly precisely approximated. Which leads in turn to a scaling of the number of random walk steps that are necessary to ensure convergence. Depending on the pace of the moving target, a single step of the random walk may be sufficient, which is quite an interesting feature.
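One step of such a walk can be sketched in the simplest case, uniform sampling on a polytope {x : Ax ≤ b} with the log-barrier as self-concordant function (a hypothetical minimal implementation, with my own choice of step-size constant, not taken from the paper):

```python
import numpy as np

# Sketch of a (lazy) Dikin-walk step for uniform sampling on {x : A x <= b};
# the step-size constant r is my own choice, not the paper's.
def dikin_hessian(A, b, x):
    """Hessian of the log-barrier F(x) = -sum_i log(b_i - a_i^T x)."""
    s = b - A @ x                      # slacks, positive inside the polytope
    return (A / s[:, None]**2).T @ A   # sum_i a_i a_i^T / s_i^2

def dikin_step(A, b, x, r=0.4, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    d = len(x)
    if rng.random() < 0.5:             # lazy chain: stay put w.p. 1/2
        return x
    H = dikin_hessian(A, b, x)
    # proposal y ~ N(x, (r^2/d) H(x)^{-1}), via the Cholesky factor of H
    L = np.linalg.cholesky(H)
    z = rng.standard_normal(d)
    y = x + (r / np.sqrt(d)) * np.linalg.solve(L.T, z)
    if np.any(A @ y >= b):             # proposal outside the polytope: reject
        return x
    # Metropolis correction for the state-dependent proposal covariance
    Hy = dikin_hessian(A, b, y)
    def log_q(H, u, v):                # log N(v; u, (r^2/d) H^{-1}), up to consts
        diff = v - u
        sign, logdet = np.linalg.slogdet(H)
        return 0.5 * logdet - 0.5 * (d / r**2) * diff @ H @ diff
    log_alpha = log_q(Hy, y, x) - log_q(H, x, y)
    return y if np.log(rng.random()) < log_alpha else x

# toy run: exploring the square [-1,1]^2
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.ones(4)
x = np.zeros(2)
rng = np.random.default_rng(1)
for _ in range(300):
    x = dikin_step(A, b, x, rng=rng)
```

The barrier Hessian flattens the proposal near the walls, which is what lets the chain mix without ever stepping outside the set.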

Back from Philly

Posted in R, Statistics, Travel, University life on December 21, 2010 by xi'an

The conference in honour of Larry Brown was quite exciting, with lots of old friends gathered in Philadelphia and lots of great talks either recollecting major works of Larry and coauthors or presenting fairly interesting new works. Unsurprisingly, a large chunk of the talks was about admissibility and minimaxity, with John Hartigan starting the day re-reading Larry's masterpiece, the 1971 paper linking admissibility and recurrence of associated processes, a paper I always had trouble studying because of both its depth and its breadth! Bill Strawderman presented a new if classical minimaxity result on matrix estimation and Anirban DasGupta some large dimension consistency results where the choice of the distance (total variation versus Kullback deviance) was irrelevant. Ed George and Susie Bayarri both presented their recent work on g-priors and their generalisation, which directly relates to our recent paper on that topic. In the afternoon, Holger Dette showed some impressive mathematics based on Elfving's representation and used in building optimal designs. I particularly appreciated the results of a joint work with Larry presented by Robert Wolpert, where they classified all Markov stationary infinitely divisible time-reversible integer-valued processes. It produced a surprisingly small list of four cases, two being trivial. The final talk of the day was about homology, which sounded a priori off-putting, but Robert Adler made it extremely entertaining, so much so that I even failed to resent the powerpoint tricks!

The next morning, Mark Low gave a very emotional but also quite illuminating talk about the first results he got during his PhD thesis at Cornell (completing the thesis when I was using Larry's office!). Brenda McGibbon went back to the three truncated Poisson papers she wrote with Ian Johnstone (via gruesome 13 hour bus rides from Montréal to Ithaca!) and produced an illuminating explanation of the maths at work for moving from the Gaussian to the Poisson case in a most pedagogical and enjoyable fashion. Larry Wasserman explained the concepts at work behind the lasso for graphs, entertaining us with witty acronyms on the side!, and leaving out about 3/4 of his slides! (The research group involved in this project produced an R package called huge.) Joe Eaton ended the morning with a very interesting result showing that using the right Haar measure as a prior leads to a matching prior, then showing why the consequences of the result are limited by invariance itself. Unfortunately, it was then time for me to leave and I will miss (in both meanings of the term) the other half of the talks. Especially missing Steve Fienberg's talk for the third time in three weeks! Again, what I appreciated most during those two days (besides the fact that we were all reunited on the very day of Larry's birthday!) was the pains most speakers took to expose older results in a most synthetic and intuitive manner… I also got new ideas about generalising our parallel computing paper for random walk Metropolis-Hastings algorithms and for optimising across permutation transforms.

Postdoc in Wharton

Posted in R, Statistics, University life on November 16, 2010 by xi'an

Just received this email from José Bernardo about an exciting postdoc position in Wharton:


The Department of Statistics at The Wharton School of the University of Pennsylvania is seeking candidates for a Post-Doctoral Fellowship. This research fellowship provides full funding without any teaching requirements at a competitive salary for two years beginning in Summer 2011.

Applicants are expected to show outstanding capacity for research as well as excellent communication skills. Although our department is located in the Wharton School, we provide services to the entire University of Pennsylvania and hold research interests across diverse scientific fields. We have strong research programs in many areas of statistics, including…

Looking back

Posted in pictures, Statistics, Travel, University life on November 5, 2010 by xi'an

This extended week in the department of Statistics at Wharton has been quite fruitful for me! Partly due to the extra four hours of work I was getting every night by remaining (almost) on French time, partly due to the warm welcome I received from the department, partly due to having to prepare this course on likelihood-free methods and rethinking the fundamentals (the abc?!) of ABC (and partly due to resisting buying Towers of Midnight on Tuesday morning at dawn!). The feedback I got during the course, mostly from Wharton faculty, was invaluable, and I also appreciated the invitation to chat with the students at the lunch student seminar about my research experience (although it must have been mostly boring for them!) It was also a wonderful opportunity to catch up with friends of more than 20 years… In short, a great “working vacation”!

ABC lectures [finale]

Posted in R, Statistics, University life on November 1, 2010 by xi'an

The latest version of my ABC slides is on slideshare. To conclude with a pun, I took advantage of the newspaper clipping generator once pointed out by Andrew. (Note that nothing written in the above should be taken seriously.) On the serious side, I managed to cover most of the 300 slides (!) over the four courses and, thanks to the active attendance of several Wharton faculty, detailed PMC and ABC algorithms in ways I hope were accessible to the students. This course preparation was in any case quite helpful in the composition of a survey on ABC now in progress with my co-authors.
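For readers new to the topic, the plain rejection version of ABC — a generic illustration, not the specific algorithms detailed in the slides — goes roughly as follows, in a toy sketch targeting a normal mean (the prior range and tolerance are my own choices):

```python
import numpy as np

# Minimal ABC rejection sketch: infer the mean of a N(theta, 1) sample
# by keeping prior draws whose simulated sample mean falls within eps
# of the observed sample mean.
def abc_rejection(data, n_draws=20_000, eps=0.05, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    s_obs = data.mean()                          # summary statistic
    theta = rng.uniform(-5.0, 5.0, n_draws)      # flat prior on the mean
    # simulate pseudo-data under each theta, compare summaries
    pseudo = rng.standard_normal((n_draws, len(data))) + theta[:, None]
    keep = np.abs(pseudo.mean(axis=1) - s_obs) < eps
    return theta[keep]                           # approximate posterior draws

obs = np.random.default_rng(42).normal(2.0, 1.0, 100)
post = abc_rejection(obs)
```

The accepted draws concentrate around the observed sample mean; shrinking eps trades acceptance rate against approximation quality, which is exactly the tension the more elaborate (PMC-style) samplers address.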


Posted in Statistics, Travel, University life on October 24, 2010 by xi'an

Here are my (preliminary) slides for the Wharton short course, in an evolutionary (!) version that will keep changing along the week as I incorporate the material from a survey on ABC we are currently writing with Jean-Michel Marin and Robin Ryder.


