Archive for Python

postdoc in Bayesian machine learning in Berlin [reposted]

Posted in R, Statistics, Travel, University life on December 24, 2019 by xi'an

The working group in Statistics at Humboldt University of Berlin invites applications for one postdoctoral research fellow (full-time employment, 3 years with possible extension) to contribute to research on mathematical and statistical aspects of (Bayesian) learning approaches. The research positions are associated with the Emmy Noether group Regression Models beyond the Mean – A Bayesian Approach to Machine Learning and with the working group in Applied Statistics at the School of Business and Economics of Humboldt-Universität zu Berlin. Opportunities for further scientific qualification (PhD) and career development are provided; see an overview and further links. The positions are to be filled at the earliest possible date and are funded by the German Research Foundation (DFG) within the Emmy Noether programme.

Requirements:
– an outstanding PhD in Statistics, Mathematics, or related field with specialisation in Statistics, Data Science or Mathematics;
– a strong background in at least one of the following fields: mathematical statistics, computational methods, Bayesian statistics, statistical learning, advanced regression modelling;
– a thorough mathematical understanding;
– substantial experience in scientific programming with Matlab, Python, C/C++, R or similar;
– strong interest in developing novel statistical methodology and its applications in various fields such as economics or natural and life sciences;
– very good communication skills and team experience, as well as proficiency in written and spoken English (German is not obligatory).

Opportunities:
We offer a unique environment of young researchers and leading international experts in the field. The vibrant international network includes established collaborations in Singapore and Australia. The positions offer the potential to work closely with several applied sciences. Information about the research profile of the research group and further contact details can be found here. The positions are paid according to the Civil Service rates of the German States “TV-L”, E13 (if suitably qualified).

Applications should include:
– a CV with list of publications
– a motivational statement (at most one page) explaining the applicant’s interest in the announced position as well as their relevant skills and experience
– copies of degrees/university transcripts
– names and email addresses of at least two professors who may provide letters of recommendation directly to the hiring committee.

Applications should be sent as a single PDF file to Prof. Dr. Nadja Klein (nadja.klein[at]hu-berlin.de), whom you may also contact with questions concerning this job post. Please indicate “Research Position Emmy Noether”.

Application deadline: 31st of January 2020

HU is seeking to increase the proportion of women in research and teaching, and specifically encourages qualified female scholars to apply. Severely disabled applicants with equivalent qualifications will be given preferential consideration. People with an immigration background are specifically encouraged to apply. Since we will not return your documents, please submit copies in the application only.

AABI9 tidbits [& misbits]

Posted in Books, Mountains, pictures, Statistics, Travel, University life on December 10, 2019 by xi'an

Today’s Advances in Approximate Bayesian Inference symposium, organised by Thang Bui, Adji Bousso Dieng, Dawen Liang, Francisco Ruiz, and Cheng Zhang, took place in front of Vancouver Harbour (with the tantalising ski slope at the back) and saw more than 400 participants, drifting away from the earlier versions, which had a stronger dose of ABC and far fewer participants. A fair proportion of the talks were given by students, and there was a massive number of posters. Below are some notes taken during some of the talks, with no pretense at exhaustivity, objectivity or accuracy. (This is a blog post, remember?!) Overall I found the day exciting (to the point that I did not suffer at all from the usual naps consecutive to very short nights!) and engaging, with a lot of notions and methods I had never heard about. (Which shows how much I know nothing!)

The first talk was by Michalis Titsias, Gradient-based Adaptive Markov Chain Monte Carlo (joint with Petros Dellaportas), taking as its objective function the product of the variance of the move and the acceptance probability, with a proposed adaptive version merging gradients, variational Bayes, neurons, and two levels of calibration parameters. The method advocates using this construction in a burn-in phase rather than continuously, hence does not require advanced Markov tools for convergence assessment. (I found myself less excited by adaptation than I used to be, maybe because it seems like switching one convergence problem for another, with additional design choices to be made.)

The second talk was by Jakub Swiatkowski, The k-tied Normal Distribution: A Compact Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks, involving mean-field approximation in variational inference (loads of VI at this symposium!), meaning de facto searching for a MAP estimator, and reminding me of older factor analysis and other analyse de données projection methods, except that it also involved neural networks (what else at NeurIPS?!)

The third talk was by Michael Gutmann, Robust Optimisation Monte Carlo (OMC), for implicit data-generating models (Diggle & Gratton, 1984), an ABC talk at last!, using a formalisation through the functional representation of the generative process and involving derivatives of the summary statistic with respect to the parameter, with the (Bayesian) random nature of the parameter sample only induced by the (frequentist) randomness in the generative transform, since a new parameter “realisation” is obtained there as the one providing minimal distance between data and pseudo-data, with no uncertainty or impact of the prior. The Jacobian of this summary transform (and once again a neural network is used to construct the summary) appears in the importance weight, leading to OMC being unstable, beyond failing to reproduce the variability expressed by the regular posterior or even the ABC posterior. It took me a while to wonder ‘where is Wally?!’ (the prior), as it only appears in the importance weight.
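
To make the first talk’s objective more concrete, here is a toy illustration of tuning a random-walk scale during a burn-in phase only, by maximising a crude Monte Carlo estimate of the “speed measure” (acceptance probability times squared jump), the scale being then frozen for the main run. This is merely a sketch of the generic idea on a standard Normal target, not Titsias and Dellaportas’ gradient-based scheme; all names and settings are illustrative.

    import numpy as np

    def log_target(x):
        # standard Normal target, purely for illustration
        return -0.5 * x**2

    def speed_measure(scale, n=2000, rng=None):
        # crude estimate of E[acceptance probability x squared jump size]
        rng = np.random.default_rng() if rng is None else rng
        x, total = 0.0, 0.0
        for _ in range(n):
            prop = x + scale * rng.standard_normal()
            alpha = min(1.0, np.exp(log_target(prop) - log_target(x)))
            total += alpha * (prop - x) ** 2
            if rng.random() < alpha:
                x = prop
        return total / n

    # "burn-in" adaptation: pick the scale with the largest estimated speed,
    # then keep it fixed for the actual (non-adaptive) sampling phase
    rng = np.random.default_rng(0)
    scales = np.linspace(0.5, 5.0, 10)
    best = max(scales, key=lambda s: speed_measure(s, rng=rng))
    print("proposal scale retained after burn-in:", best)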

The fourth talk was by Sergey Levine, Reinforcement Learning, Optimal Control, and Probabilistic Inference, back to Kullback-Leibler as the objective function, with a linkage to optimal control (with distributions as actions?), plus again variational inference, producing an approximation in sequential settings. This sounded like a return of the MaxEnt prior, but the talk pace was so intense that I could not follow where the innovations stood.

The fifth talk was by Iuliia Molchanova, on Structured Semi-Implicit Variational Inference, from Bayesgroup.ru. (I did not know of a Bayesian group in Russia! I was under the impression that Bayesian statistics was under-represented there, but apparently the situation is quite different in machine learning.) The talk brought up the interesting concept of semi-implicit variational inference, exploiting some form of latent variables as far as I can understand, using mixtures of Gaussians.

The sixth talk was by Rianne van den Berg, Normalizing Flows for Discrete Data, and amounted to covering three papers also discussed at NeurIPS 2019 proper, which I found a somewhat suboptimal approach for an invited talk, as it turned into a teaser for later talks or posters. But the teasers it contained were quite interesting, as they covered normalising flows as integer-valued controlled changes of variables using neural networks, about which I had just become aware during the poster session, in connection with papers by Papamakarios et al., which I need to read soon.
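
For the record, the change-of-variables identity behind (continuous) normalising flows, with my own hedged reading that integer-valued flows drop the Jacobian correction, since a bijection on a countable set merely relabels probability masses:

    p_X(x) = p_Z\big(f(x)\big)\,\left|\det \frac{\partial f(x)}{\partial x}\right|
    \qquad\text{versus}\qquad
    P_X(x) = P_Z\big(f(x)\big)\ \text{in the discrete case.}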

The seventh talk was by Matthew Hoffman: Langevin Dynamics as Nonparametric Variational Inference, and sounded most interesting, both from the title and from later reports, as it was bridging Langevin with VI, but I alas missed it for being “stuck” in a tea-house ceremony that lasted much longer than expected. (More later on that side issue!)

After the second poster session (with a highly original proposal by Radford Neal towards creating non-reversibility at the level of the uniform generator rather than later on), I thus only attended Emily Fox’s Stochastic Gradient MCMC for Sequential Data Sources, which superbly reviewed (in connection with a sequence of papers, including a recent one by Aicher et al.) error rates and convergence properties of stochastic-gradient MCMC methods in this setting. Another paper I need to read soon!
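
As a reminder of the elementary building block these stochastic-gradient MCMC methods refine, here is a minimal stochastic gradient Langevin dynamics (Welling & Teh, 2011) sketch on a toy Normal-mean problem, with prior, step size and batch size picked arbitrarily for illustration rather than taken from the talk.

    import numpy as np

    rng = np.random.default_rng(1)
    N, n_batch, eps = 10_000, 100, 1e-4
    data = rng.normal(loc=2.0, scale=1.0, size=N)        # synthetic observations

    theta = 0.0                                           # unknown mean, N(0, 10^2) prior
    for t in range(5_000):
        batch = rng.choice(data, size=n_batch, replace=False)
        grad_log_prior = -theta / 10.0**2
        # rescaled mini-batch gradient of the log-likelihood (unit-variance model)
        grad_log_lik = (N / n_batch) * np.sum(batch - theta)
        theta += 0.5 * eps * (grad_log_prior + grad_log_lik) \
                 + np.sqrt(eps) * rng.standard_normal()   # injected Langevin noise
    print("approximate posterior draw for the mean:", theta)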

The penultimate speaker, Roman Novak, presented a Python library about infinite neural networks, with which I had no direct connection (and I always have difficulties with talks about libraries, even without a four-hour-sleep night), and the symposium concluded with a mild round table. Mild because, despite Frank Wood’s best efforts (and healthy skepticism about round tables!) to initiate controversies, we could not find much to bite into in each other’s viewpoints.

ABC for vampires

Posted in Books, pictures, Statistics, University life on September 4, 2018 by xi'an

Ritabrata Dutta (Warwick), along with coauthors including Anto Mira, published last week a paper in Frontiers in Physiology about using ABC to derive the posterior distribution of the parameters of a dynamic blood (platelet) deposition model constructed by Bastien Chopard, the second author. While based on only five parameters, the model does not enjoy a closed-form likelihood, and even the simulation of a new platelet deposit takes about 10 minutes. The paper uses the simulated annealing ABC version, due to Albert, Künsch, and Scheidegger (2014), which relies on a sequence of Metropolis kernels associated with a decreasing sequence of tolerances and claims better efficiency at reaching a stable solution. It also relies on the package abcpy, written in Python by Ritabrata Dutta, for various aspects of the ABC analysis. One feature of interest is the use of 24 summary statistics to conduct the inference on the 5 model parameters, a 24-to-5 ratio that could possibly be improved by a variable selection tool such as random forests. Which would also avoid the choice of a specific loss function, called the Bhattacharyya distance (which sounds like an entropy distance in the normal case).
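
For readers unfamiliar with the simulated-annealing flavour of ABC, the skeleton below runs an ABC-MCMC chain under a flat prior while shrinking the tolerance along a fixed schedule. It is only a generic sketch with a stand-in Normal model and made-up summaries, not the Albert, Künsch and Scheidegger algorithm nor the abcpy implementation.

    import numpy as np

    rng = np.random.default_rng(2)
    obs = rng.normal(1.5, 1.0, size=50)                  # pretend "data"
    s_obs = np.array([obs.mean(), obs.std()])            # summary statistics

    def simulate_summaries(theta, rng):
        # stand-in for the (10-minute!) platelet deposition simulator
        x = rng.normal(theta, 1.0, size=50)
        return np.array([x.mean(), x.std()])

    theta = 0.0                                          # arbitrary starting value
    tolerances = np.linspace(2.0, 0.1, 200)              # decreasing ("annealing") schedule
    for eps in tolerances:
        prop = theta + 0.5 * rng.standard_normal()       # symmetric random-walk move
        # flat prior on (-10, 10) and symmetric proposal: the Metropolis ratio
        # reduces to the indicator that the simulated summaries fall within eps
        if abs(prop) < 10 and np.linalg.norm(simulate_summaries(prop, rng) - s_obs) < eps:
            theta = prop
    print("final ABC draw:", theta)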

a null hypothesis with a 99% probability to be true…

Posted in Books, R, Statistics, University life on March 28, 2018 by xi'an

When checking the Python t distribution random generator, np.random.standard_t(), I came upon this manual page, which actually does not explain how the random generator works but instead spends the whole page recalling Gosset’s t test, illustrating its use on the energy intake of 11 women, and ends up misleading readers by interpreting a .009 one-sided p-value as meaning that “the null hypothesis [on the hypothesised mean] has a probability of about 99% of being true”! In addition, NumPy’s standard deviation estimator x.std() returns by default a non-standard standard deviation, dividing by n rather than n-1…
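
Both issues are easy to check in a few lines, on illustrative data rather than the exact energy-intake figures of the manual page: x.std() divides by n unless ddof=1 is specified, and the one-sided p-value is P(T ≤ t | H₀), not the probability that the null hypothesis is true.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    x = rng.normal(loc=6700.0, scale=1100.0, size=11)   # stand-in for the 11 intakes
    mu0 = 7725.0                                        # hypothesised mean (illustrative)

    print(x.std())                                      # divides by n
    print(x.std(ddof=1))                                # divides by n-1 (usual sample sd)

    t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(len(x)))
    p_one_sided = stats.t.cdf(t, df=len(x) - 1)         # P(T <= t | H0), not P(H0 | data)
    print(t, p_one_sided)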

ABCπ

Posted in Books, pictures, Statistics, Travel, University life on May 17, 2017 by xi'an

Ritabrata Dutta, Marcel Schöengens, Jukka-Pekka Onnela, and Antonietta Mira recently put a new ABC software package on-line, called ABCpy, for ABC with Python. The software aims at an automated parallelisation of ABC runs, requiring only code to generate from the (generative) model and the choice of summary statistics and of an associated distance. Alternatively, an approximate likelihood (as in synthetic likelihood) can be used. The tolerance ε is chosen as a percentile of the prior predictive distribution of the distance (a minimal sketch of this choice follows the list below). The versions of ABC found in ABCpy are:

  1. Population Monte Carlo for ABC (PMCABC);
  2. sequential Monte Carlo ABC (ABC-SMC);
  3. replenishment Sequential Monte Carlo ABC (RSMC-ABC);
  4. adaptive Population Monte Carlo ABC (APMCABC);
  5. ABC with subset simulation (ABCsubsim); and
  6. simulated annealing ABC (SABC).
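
As an aside on the tolerance choice mentioned above, here is a bare-bones rejection-ABC sketch in which ε is set as a low percentile of the prior predictive distribution of the distance; this is a generic illustration with a toy model of my own, not the ABCpy API.

    import numpy as np

    rng = np.random.default_rng(4)
    obs = rng.normal(0.0, 1.0, size=100)
    s_obs = obs.mean()                                   # a single summary statistic

    def simulate_summary(theta, rng):
        # toy generative model standing in for the user-supplied simulator
        return rng.normal(theta, 1.0, size=100).mean()

    prior_draws = rng.uniform(-5.0, 5.0, size=10_000)    # Uniform(-5, 5) prior
    dists = np.array([abs(simulate_summary(th, rng) - s_obs) for th in prior_draws])
    eps = np.percentile(dists, 1)                        # tolerance = 1st percentile
    accepted = prior_draws[dists <= eps]
    print("tolerance:", eps, "ABC posterior mean:", accepted.mean())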

Anto mentioned ABCpy to me while I was at Harvard last week, and I have not tested the program (my only brush with Python being the occasional call to latex2wp for SeriesB’log). And obviously, writing a blog post about Monte (Carlo and) Python makes a link to the Monty Pythons irresistible: