Archive for Wharton Business School

Lawrence D. Brown PhD Student Award

Posted in Kids, Statistics, Travel, University life on June 3, 2019 by xi'an

[Reproduced from the IMS Bulletin, an announcement of a travel award for PhD students in celebration of my friend Larry Brown!]

Lawrence D. Brown (1940-2018), Miers Busch Professor and Professor of Statistics at The Wharton School, University of Pennsylvania, had a distinguished academic career with groundbreaking contributions to a range of fields in theoretical and applied statistics. He was an IMS Fellow, IMS Wald Lecturer, and a former IMS President. Moreover, he was an enthusiastic and dedicated mentor to many graduate students. In 2011, he was recognized for these efforts as a recipient of the Provost’s Award for Distinguished PhD Teaching and Mentoring at the University of Pennsylvania.

Brown’s firm dedication to all three pillars of academia — research, teaching and service — sets an exemplary model for generations of new statisticians. Therefore, the IMS is introducing a new award for PhD students created in his honor: the IMS Lawrence D. Brown PhD Student Award.

This annual travel award will be given to three PhD students, who will present their research at a special invited session during the IMS Annual Meeting. The submission process is now open and applications are due by July 15th, 2019 for the 2020 award. More details, including eligibility and application requirements, can be found at: https://www.imstat.org/ims-awards/ims-lawrence-d-brown-ph-d-student-award/

Donations are welcome as well, through https://www.imstat.org/contribute-to-the-ims/ under “IMS Lawrence D. Brown Ph.D. Student Award Fund”.

Larry Brown (1940-2018)

Posted in Books, pictures, Statistics, University life on February 21, 2018 by xi'an

Just learned a few minutes ago that my friend Larry Brown has passed away today, after fiercely fighting cancer till the end. My thoughts of shared loss and deep support first go to my friend Linda, his wife, and to their children. And to all their colleagues and friends at Wharton. I have known Larry for all of my career, from working on his papers during my PhD to being a temporary tenant in his Cornell University office in White Hall while he was mostly away on sabbatical during the academic year 1988-1989, and then periodically meeting with him at Cornell and then Wharton over the years. He and Linda were always unbelievably welcoming and I fondly remember many times at their place or in superb restaurants in Philly and elsewhere. And of course I remember just as fondly the many chats we had over these years about decision theory, admissibility, James-Stein estimation, and all aspects of mathematical statistics he loved and mastered at an ethereal level of abstraction. His book on exponential families remains to this day one of the central books in my library, to which I keep referring on a regular basis… For certain, I will miss the friend and the scholar in the coming years, but I will keep returning to this book and have shared memories come back to me as I browse through its yellowed pages and typewriter-style print. Farewell, Larry, and thanks for everything!

uniform correlation mixtures

Posted in Books, pictures, Statistics, University life on December 4, 2015 by xi'an

Kai Zhang and my friends from Wharton, Larry Brown, Ed George, and Linda Zhao arXived last week a neat mathematical foray into the properties of a marginal bivariate Gaussian density once the correlation ρ is integrated out. While the univariate marginals remain Gaussian (unsurprising, since these marginals do not depend on ρ in the first place), the joint density has the surprising property of being

[1-Φ(max{|x|,|y|})]/2

which turns an infinitely regular density into a density that is not even differentiable everywhere. And which is constant on squares rather than on circles or ellipses. This is somewhat paradoxical in that the intuition (at least my intuition!) is that integration increases regularity… I also like the characterisation of the distributions factorising through the infinite norm as scale mixtures of the infinite-norm equivalent of normal distributions. The paper proposes several threads for extensions of this most surprising result. Others come to mind:

  1. What happens when the Jeffreys prior is used in place of the uniform? Or Haldane's prior?
  2. Given the mixture representation of t distributions, is there an equivalent for t distributions?
  3. Is there any connection with the equally surprising resolution of the Drton conjecture by Natesh Pillai and Xiao-Li Meng?
  4. In the Khintchine representation, correlated normal variates are created by multiplying the square root of a single χ²(3) variate by a vector of uniforms on (-1,1). What are the resulting variates for other degrees of freedom k in the χ²(k) variate?
  5. I also wonder at a connection between this Khintchine representation and the Box-Muller algorithm, as in this earlier X validated question that I turned into an exam problem.
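The closed-form density is easy to check numerically. Here is a minimal sketch of mine (not from the paper): average the standard bivariate normal density over uniformly drawn correlations ρ on (-1,1) and compare against [1-Φ(max{|x|,|y|})]/2 — sticking to points with |x|≠|y|, where the integrand stays bounded as ρ approaches ±1:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def Phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def mixture_density(x, y, n=200_000):
    """Monte Carlo average of the bivariate N(0,0,1,1,rho) density, rho ~ U(-1,1)."""
    rho = rng.uniform(-1.0, 1.0, n)
    det = 1.0 - rho**2
    quad = (x**2 - 2.0 * rho * x * y + y**2) / det
    return np.mean(np.exp(-quad / 2.0) / (2.0 * np.pi * np.sqrt(det)))

def closed_form(x, y):
    """The closed-form mixture density [1 - Phi(max{|x|,|y|})]/2."""
    return (1.0 - Phi(max(abs(x), abs(y)))) / 2.0

for x, y in [(0.3, -1.2), (-2.0, 0.5), (1.4, 0.1)]:
    print(f"({x:+.1f},{y:+.1f})  MC={mixture_density(x, y):.5f}  exact={closed_form(x, y):.5f}")
```

At the origin the agreement is exact: the mixture integrates 1/(2π√(1-ρ²)) over (-1,1) to 1/4, matching (1-Φ(0))/2.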

sampling from time-varying log-concave distributions

Posted in Statistics, University life on October 2, 2013 by xi'an

Sasha Rakhlin from Wharton sent me this paper he wrote (and arXived) with Hariharan Narayanan on a specific Markov chain algorithm that handles sequential Monte Carlo problems for log-concave targets. By relying on novel (by my standards) mathematical techniques, they manage to obtain geometric ergodicity results for random-walk-based algorithms and log-concave targets. One of the new tools is the notion of a self-concordant barrier, a sort of convex potential function F associated with a reference convex set and endowed with Lipschitz properties. The second tool is a Gaussian distribution based on the metric induced by F. The third is the Dikin walk Markov chain, which uses this Gaussian as its proposal and moves almost like a Metropolis-Hastings algorithm, except that it rejects with probability at least ½, making the chain lazy. The scale (or step size) of the Gaussian proposal is determined by the regularity of the log-concave target. In that setting, the total variation distance between the target at the t-th level and the distribution of the Markov chain can be fairly precisely approximated. This leads in turn to a scaling of the number of random walk steps that are necessary to ensure convergence. Depending on the pace of the moving target, a single step of the random walk may be sufficient, which is quite an interesting feature.
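For intuition only, here is a one-dimensional toy sketch of mine (not the authors' algorithm or constants): on the interval (0,1), the log-barrier F(x) = -log x - log(1-x) is self-concordant, its Hessian sets the local scale of the Gaussian proposal, the chain is lazy (it stays put with probability ½), and a Metropolis-Hastings correction targets an illustrative truncated-Gaussian log-concave density:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_pi(x):
    """Log-concave target on (0, 1): an unnormalised N(0.3, 0.2^2), truncated."""
    return -0.5 * ((x - 0.3) / 0.2) ** 2

def hess(x):
    """Hessian of the log-barrier F(x) = -log(x) - log(1 - x) of (0, 1)."""
    return 1.0 / x**2 + 1.0 / (1.0 - x) ** 2

def log_q(x, y, r):
    """Log density (up to a constant) of proposing y from x: N(x, r^2 / hess(x))."""
    h = hess(x)
    return 0.5 * np.log(h) - h * (y - x) ** 2 / (2.0 * r**2)

r = 0.3            # step size of the barrier-scaled Gaussian proposal
x = 0.5
samples = []
for _ in range(50_000):
    if rng.random() < 0.5:                 # lazy move: stay put w.p. 1/2
        samples.append(x)
        continue
    y = x + rng.normal() * r / np.sqrt(hess(x))
    if 0.0 < y < 1.0:                      # leaving the set means automatic rejection
        log_a = log_pi(y) - log_pi(x) + log_q(y, x, r) - log_q(x, y, r)
        if np.log(rng.random()) < log_a:   # Metropolis-Hastings correction
            x = y
    samples.append(x)
```

The proposal variance shrinks near the boundary, where the barrier Hessian blows up, so the walk rarely proposes points outside the set — the feature the barrier-induced metric is there to provide.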

Back from Philly

Posted in R, Statistics, Travel, University life on December 21, 2010 by xi'an

The conference in honour of Larry Brown was quite exciting, with lots of old friends gathered in Philadelphia and lots of great talks, either recollecting major works of Larry and coauthors or presenting fairly interesting new work. Unsurprisingly, a large chunk of the talks was about admissibility and minimaxity, with John Hartigan starting the day by re-reading Larry's masterpiece, his 1971 paper linking admissibility and recurrence of associated processes, a paper I always had trouble studying because of both its depth and its breadth! Bill Strawderman presented a new if classical minimaxity result on matrix estimation and Anirban DasGupta some large-dimension consistency results where the choice of the distance (total variation versus Kullback deviance) was irrelevant. Ed George and Susie Bayarri both presented their recent work on g-priors and their generalisations, which directly relates to our recent paper on that topic. In the afternoon, Holger Dette showed some impressive mathematics based on Elfving's representation and used in building optimal designs. I particularly appreciated the results of a joint work with Larry presented by Robert Wolpert, where they classified all stationary, infinitely divisible, time-reversible, integer-valued Markov processes. It produced a surprisingly small list of four cases, two of them trivial. The final talk of the day was about homology, which sounded a priori off-putting, but Robert Adler made it extremely entertaining, so much so that I did not even resent the PowerPoint tricks! The next morning, Mark Low gave a very emotional but also quite illuminating talk about the first results he got during his PhD thesis at Cornell (completing the thesis while I was using Larry's office!). Brenda MacGibbon went back to the three truncated-Poisson papers she wrote with Ian Johnstone (via gruelling 13-hour bus rides from Montréal to Ithaca!) and produced an illuminating explanation of the maths at work in moving from the Gaussian to the Poisson case, in a most pedagogical and enjoyable fashion. Larry Wasserman explained the concepts behind the lasso for graphs, entertaining us with witty acronyms on the side!, and leaving out about 3/4 of his slides! (The research group involved in this project produced an R package called huge.) Joe Eaton ended the morning with a very interesting result showing that using the right Haar measure as a prior leads to a matching prior, then showing why the consequences of the result are limited by invariance itself. Unfortunately, it was then time for me to leave and I will miss (in both meanings of the term) the other half of the talks. Especially missing Steve Fienberg's talk for the third time in three weeks! Again, what I appreciated most during those two days (besides the fact that we were all reunited on the very day of Larry's birthday!) was the pains most speakers took to present older results in a most synthetic and intuitive manner… I also got new ideas about generalising our parallel computing paper to random-walk Metropolis-Hastings algorithms and about optimising across permutation transforms.
