I did not read very far into the recent arXival by Neu and Bartók, but I got the impression that it is a version of ABC for bandit problems, where the probabilities behind the bandit arms are not available but can be simulated. The stopping rule found in “Recurrence weighting for multi-armed bandits” is the generation of an arm equal to the learner’s draw (p.5). Since there is no tolerance involved, the method is exact (“unbiased”). As no reference is made to the ABC literature, this may after all be a mere analogy…
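If I read the stopping rule correctly, the number of regenerations needed before hitting the learner’s draw is a geometric variable whose expectation is the inverse of the (unknown) arm probability, which is where the unbiasedness comes from. Here is a minimal toy sketch of that mechanism, under my own reading of the paper; the function names and the explicit probability vector are illustrative devices of mine, since in the actual setting one can only sample arms, not evaluate their probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_arm(probs):
    # Stand-in for the learner's policy: we can sample arms from it,
    # but we pretend we cannot evaluate the probabilities directly.
    return rng.choice(len(probs), p=probs)

def inverse_prob_estimate(probs, played_arm, max_draws=100_000):
    # Regenerate arms until one matches the arm actually played.
    # The number of draws needed is Geometric(p), with expectation 1/p,
    # hence an unbiased estimate of the inverse arm probability.
    for k in range(1, max_draws + 1):
        if draw_arm(probs) == played_arm:
            return k
    return max_draws  # truncation safeguard, essentially never reached here

probs = np.array([0.5, 0.3, 0.2])
played = draw_arm(probs)
estimates = [inverse_prob_estimate(probs, played) for _ in range(10_000)]
print(played, np.mean(estimates), 1 / probs[played])  # last two numbers should agree
```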
As my sorry excuse of an Internet provider has been unable to fix my broken connection for several days, I had more time to read and enjoy the latest Significance I received last week. Plenty of interesting entries, once again! Even though, faithful to my idiosyncrasies, I must definitely criticise the cover (but you may also skip to the end of the paragraph!): it shows a pile of exams higher than the page frame on a student table in a classroom, and a vague silhouette sitting behind the exams. I do not know whether or not this is intentional, but the silhouette has definitely been added to the original picture (and presumably the exams as well!), because the seat and blackboard behind this silhouette show through it. If this is intentional, does that mean that the poor soul grading this endless pile of exams has long since turned into a wraith?! If not intentional, that’s poor workmanship for a magazine usually adept at making the most of its graphics. (And then I could go on and on about the choice of illustrations being clearly made by the managing editor rather than by the author(s) of the article…) End of the digression! Or maybe not, because there was also an ugly graph from Knowledge is Beautiful about the causes of plane crashes that made pie charts look great… Not that all the graphs in the book are bad, far from it!
“The development of full artificial intelligence could spell the end of the human race.” S. Hawking
The central theme of the magazine is artificial intelligence (and machine learning). A point I had wanted to mention in a post following the recent doom-laden messages of Gates and Hawking about AIs taking over humanity, à la Blade Runner… or as in Turing’s test. As if AIs had not already impacted our lives so much and in so many ways. And not all positive or for the common good. Witness the ultra-fast trading codes on the stock market. Witness the self-replicating and self-modifying computer viruses. Witness the increasingly autonomous military drones. Or witness my silly Internet issue, where I cannot get hold of a person who can tell me what the problem is and what the company is doing to solve it (if anything!), but instead have to listen to endless phone automata telling me to press “1 if…” and “3 else”, and that my incident ticket was last updated three days ago… But at the same time the tone of the op-ed in The Independent by Hawking, Russell, Tegmark, and Wilczek is somewhat misguided, if I may object to such luminaries!, playing on science-fiction themes that have been repeated so many times that they are now ingrained, rather than on strong scientific arguments. Military robots that could improve themselves to the point of escaping their designers are surely frightening, but much less realistic than a nuclear reaction that could not be stopped in a Fukushima-like plant. Or than the long-term impacts of genetically modified crops and animals. Or than the current proposals for climate engineering. Or than emerging nano-particles.
“If we build systems that are game-theoretic or utility maximisers, we won’t get what we’re hoping for.” P. Norvig
The discussion of this scare in Significance does not contribute much, in my opinion. It starts with the concept of a perfect Bayesian agent, illustrated by an AI tasked with creating paperclips, which (who?) ends up using the entire Earth’s resources to make more paperclips. The other articles in this cover story are more relevant, for instance on how AI moved from pure logic to statistical or probabilistic intelligence, with Yee Whye Teh discussing Bayesian networks and the example of Google translation (including a perfect translation into French of an English sentence).
So today was the NIPS 2014 workshop, “ABC in Montréal”, which started with a fantastic talk by Juliane Liepe on some exciting applications of ABC to the migration of immune cells, with the analysis of movies involving those cells acting to heal a damaged fly wing and a cut fish tail. Quite amazing videos, really. (With the great opening line that ‘We have all cut a finger at some point in our lives’!) The statistical model behind those movies was a random walk on a grid, with different drift and bias features serving as model characteristics. Frank Wood managed to deliver his talk despite a severe case of food poisoning, with a great illustration that made me understand (at last!) the very idea of probabilistic programming. And Vikash Mansinghka presented some applications in image analysis. Those two talks led me to realise why probabilistic programming is so close to ABC, with a programming touch! Hence why I was invited to talk today! Then Dennis Prangle presented his latest version of lazy ABC, which I have already commented on the ‘Og, and which is somewhat connected with our delayed acceptance algorithm, to the point that maybe something common can stem from the two notions. Michael Blum ended the day with provocative answers to the provocative questions of Ted Meeds as to whether or not machine learning needed ABC (Ans. No!) and whether or not machine learning could help ABC (Ans. ???), with a happy mix-up between mechanistic and phenomenological models that helped generate discussion from the floor.
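For readers who have not met lazy ABC before, here is a minimal toy sketch of how I understand the idea: a cheap early stage of the simulation, a continuation probability α computed from that partial output, and a 1/α importance weight attached to the simulations carried to completion, so that the ABC target is left unchanged. Everything below (the toy Gaussian simulator, the particular continuation rule, the names) is of my own making and not Dennis’s actual scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
y_obs, eps = 2.0, 0.1

def cheap_stage(theta):
    # Inexpensive early phase of the simulator, producing a rough summary.
    return theta + rng.normal(scale=0.5)

def expensive_stage(theta):
    # Full (costly) simulation; here a toy Gaussian stand-in.
    return theta + rng.normal(scale=0.1)

def continuation_prob(partial):
    # Probability of carrying on with the expensive stage, higher when
    # the cheap partial output already looks close to the observation.
    return min(1.0, np.exp(-abs(partial - y_obs)))

samples, weights = [], []
for _ in range(20_000):
    theta = rng.normal(0, 5)            # prior draw
    alpha = continuation_prob(cheap_stage(theta))
    if rng.random() > alpha:
        continue                        # abandon the simulation early ("lazy" step)
    x = expensive_stage(theta)          # complete the expensive simulation
    if abs(x - y_obs) < eps:            # usual ABC acceptance within tolerance eps
        samples.append(theta)
        weights.append(1.0 / alpha)     # reweight to compensate for early stopping

print(len(samples), np.average(samples, weights=weights))
```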
The posters were also of much interest, with calibration as a distance measure by Michael Gutmann, in continuation of the poster he gave at MCMski, and Aaron Smith presenting his work with Luke Bornn, Natesh Pillai and Dawn Woodard on why a single pseudo-sample is enough for ABC efficiency. This gave me the opportunity to discuss with him the apparent contradiction with the result of Krys Łatuszyński and Anthony Lee that the geometric convergence of ABC-MCMC is only attained with a random number of pseudo-samples… And to wonder whether there is a geometric versus binomial dilemma in this setting, namely whether simulating pseudo-samples until one is accepted would be more efficient than just running one and discarding it when it is too far. So, although the audience was not that large (when compared with the other “ABC in…” meetings, and when considering the 2500+ attendees at NIPS over the week!), it was a great day where I learned a lot, did not doze off during talks (!), [and even had an epiphany of sorts on the treadmill when I realised I just had to take longer steps to reach 16 km/h without hyperventilating!] So thanks to my fellow organisers, Neil D Lawrence, Ted Meeds, Max Welling, and Richard Wilkinson, for setting the program of that day! And, by the way, where’s the next “ABC in…”?! (Finland, maybe?)
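To make the dilemma concrete, here is a toy comparison of the raw simulation budgets of the two strategies (one pseudo-sample per proposal versus simulating until one falls within the tolerance). The Gaussian simulator and all names are placeholders of mine, and the sketch says nothing about the validity or mixing of the resulting ABC-MCMC chains, which is precisely what the Lee and Łatuszyński analysis is about:

```python
import numpy as np

rng = np.random.default_rng(2)
y_obs, eps = 0.0, 0.2

def one_hit(theta):
    # One pseudo-sample from a toy simulator, checked against the tolerance.
    return abs(theta + rng.normal() - y_obs) < eps

def single_shot(theta, proposals=50_000):
    # Binomial strategy: one pseudo-sample per proposal, discarded if too far.
    hits = sum(one_hit(theta) for _ in range(proposals))
    return proposals, hits            # (simulation cost, acceptances)

def until_hit(theta, acceptances=500, cap=100_000):
    # Geometric strategy: keep simulating until a pseudo-sample lands
    # within the tolerance, so every proposal eventually "accepts".
    cost = 0
    for _ in range(acceptances):
        k = 1
        while not one_hit(theta) and k < cap:
            k += 1
        cost += k
    return cost, acceptances

theta = 1.0                            # a proposed parameter value
print(single_shot(theta))              # many cheap proposals, few acceptances
print(until_hit(theta))                # every proposal accepted, ~1/p draws each
# In both cases the expected number of simulations per acceptance is 1/p,
# so the comparison really hinges on the properties of the resulting chains.
```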
After a somewhat prolonged labour (!), we have at last completed our paper on ABC model choice with random forests and submitted it to PNAS for possible publication. While the paper is entirely methodological, the primary domain of application of ABC model choice methods remains population genetics, and the diffusion of this new methodology to its users is thus more likely via a medium like PNAS than via a machine learning or statistics journal.
When compared with our recent update of the arXived version, there is not much difference in content, as it is mostly a matter of fitting the PNAS publication canons. (Which makes the paper less readable in the posted version [in my opinion!], since fitting the main document within the compulsory six pages meant relegating part of the experiments and of the explanations to the Supplementary Information section.)
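For readers more familiar with code than with population genetics, here is a minimal sketch of the general idea of ABC model choice via random forests: build a reference table of (model index, summary statistics) pairs by simulating from the prior predictive of each model, train a random forest classifier on that table, and apply it to the observed summaries. The two toy simulators, the summary statistics and all settings below are mere stand-ins of mine, not the population-genetics models or the actual pipeline of the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

def simulate(model, n=50):
    # Toy simulators standing in for the competing models:
    # model 0 generates Gaussian data, model 1 Laplace data,
    # each dataset reduced to a handful of summary statistics.
    x = rng.normal(size=n) if model == 0 else rng.laplace(size=n)
    return [x.mean(), x.std(), np.abs(x).mean(), ((x - x.mean()) ** 4).mean()]

# Reference table: model indices drawn from a uniform prior over models,
# pseudo-datasets reduced to summary statistics.
models = rng.integers(0, 2, size=5_000)
table = np.array([simulate(m) for m in models])

# Random forest trained to predict the model index from the summaries.
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(table, models)

# "Observed" summaries, here simulated from model 1 for the sake of the example.
s_obs = np.array(simulate(1)).reshape(1, -1)
print("selected model:", rf.predict(s_obs)[0])
print("vote proportions:", rf.predict_proba(s_obs)[0])  # votes, not posterior probabilities
```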
In connection with the previous announcement of ABC in Montréal, a call for papers that came out today:
NIPS 2014 Workshop: ABC in Montreal
December 12, 2014
Montréal, Québec, Canada
Approximate Bayesian computation (ABC) or likelihood-free (LF) methods have developed mostly beyond the radar of the machine learning community, but are important tools for a large segment of the scientific community. This is particularly true for systems and population biology, computational psychology, computational chemistry, etc. Recent work has both applied machine learning models and algorithms to general ABC inference (NN, forests, GPs) and ABC inference to machine learning (e.g. using computer graphics to solve computer vision using ABC). In general, however, there is significant room for collaboration between the two communities.
The workshop will consist of invited and contributed talks, poster spotlights, and a poster session. Rather than a panel discussion we will encourage open discussion between the speakers and the audience!
Examples of topics of interest in the workshop include (but are not limited to):
* Applications of ABC to machine learning, e.g., computer vision, inverse problems
* ABC in Systems Biology, Computational Science, etc
* ABC Reinforcement Learning
* Machine learning simulator models, e.g., NN models of simulation responses, GPs etc.
* Selection of sufficient statistics
* Online and post-hoc error
* ABC with very expensive simulations and acceleration methods (surrogate modeling, choice of design/simulation points)
* ABC with probabilistic programming
* Posterior evaluation of scientific problems/interaction with scientists
* Post-computational error assessment and its impact on the resulting ABC inference
* ABC for model selection
[An announcement from ISBA about sponsoring young researchers at NIPS, which connects with my earlier post that our ABC in Montréal workshop proposal had been accepted, and with a more general feeling that we (as a society) should do more to reach out towards machine learning.]
The International Society for Bayesian Analysis (ISBA) is pleased to announce its new initiative *ISBA@NIPS*, an initiative aimed at highlighting the importance and impact of Bayesian methods in the new era of data science.
Among the first actions of this initiative, ISBA is endorsing a number of *Bayesian satellite workshops* at the Neural Information Processing Systems (NIPS) Conference, that will be held in Montréal, Québec, Canada, December 8-13, 2014.
Furthermore, a special ISBA@NIPS Travel Award will be granted to the best Bayesian invited and contributed paper(s) among all the ISBA endorsed workshops.
ISBA endorsed workshops at NIPS
- ABC in Montréal. This workshop will include topics on: Applications of ABC to machine learning, e.g., computer vision, other inverse problems (RL); ABC Reinforcement Learning (other inverse problems); Machine learning models of simulations, e.g., NN models of simulation responses, GPs etc.; Selection of sufficient statistics and massive dimension reduction methods; Online and post-hoc error; ABC with very expensive simulations and acceleration methods (surrogate modelling, choice of design/simulation points).
- Networks: From Graphs to Rich Data. This workshop aims to bring together a diverse and cross-disciplinary set of researchers to discuss recent advances and future directions for developing new network methods in statistics and machine learning.
- Advances in Variational Inference. This workshop aims at highlighting recent advances in variational methods, including new methods for scalability using stochastic gradient methods, extensions to the streaming variational setting, improved local variational methods, inference in non-linear dynamical systems, principled regularisation in deep neural networks, and inference-based decision making in reinforcement learning, amongst others.
- Women in Machine Learning (WiML 2014). This is a day-long workshop that gives female faculty, research scientists, and graduate students in the machine learning community an opportunity to meet, exchange ideas and learn from each other. Under-represented minorities and undergraduates interested in machine learning research are encouraged to attend.
Following a proposal put forward by Ted Meeds, Max Welling, Richard Wilkinson, Neil Lawrence and myself, our ABC in Montréal workshop has been accepted by the NIPS 2014 committee and will thus take place on either Friday, Dec. 12, or Saturday, Dec. 13, at the end of the main NIPS meeting (Dec. 8-11). (Despite the title, this workshop is not part of the ABC in… series I started five years ago. It will only last a single day, with a few invited talks and no posters. And no free wine & cheese party.) On top of this workshop, our colleagues Vikash K Mansinghka, Daniel M Roy, Josh Tenenbaum, Thomas Dietterich, and Stuart J Russell have also been successful in their bid for the 3rd NIPS Workshop on Probabilistic Programming, which will presumably be held on the opposite day to ours, as Vikash is speaking at our workshop while I am speaking at theirs. I am still undecided as to whether or not to attend the main conference, given that I am already travelling a lot this semester and have to teach two courses, including a large undergraduate statistical inference course… Obviously, I will try to attend if our joint paper is accepted by the editorial board! Even though Marco will then be the speaker.