Archive for scientific journals

AISTATS 2016 [post-decisions]

Posted in Books, pictures, Statistics, Travel, University life on December 27, 2015 by xi'an

Now that the (extended) deadline for AISTATS 2016 decisions has passed, I can gladly report that, out of 594 submissions, we accepted 165 papers, including 35 oral presentations. As reported in the previous blog post, I remain amazed at the gruesome efficiency of the processing machinery and at the overwhelmingly intense involvement of the various actors who handled those submissions. And at the help brought by the Toronto Paper Matching System, developed by Laurent Charlin and Richard Zemel. I clearly was not as active and responsive as many of those actors, and definitely not [by far!] as much as my co-program-chair, Arthur Gretton, who deserves all the praise for reaching a final decision by the end of the year. We have already received a few complaints from rejected authors, but this is to be expected with a rejection rate of about 72%. (More annoying were the emails asking for our decisions in the very final days…) An amazing and humbling experience for me, truly! See you in Cadiz, hopefully.

AISTATS 2016 [post-submissions]

Posted in Books, pictures, Statistics, Travel, University life on October 22, 2015 by xi'an

Now that the deadline for AISTATS 2016 submissions is past, I can gladly report that we received the amazing number of 559 submissions, many more than were submitted to the previous AISTATS conferences. To the point that it made us fear for a little while [but no longer!] that the conference room would not be large enough. And hope that we would have to install video connections in the hotel bar!

Which also means handling about the same number of papers as a year of JRSS B submissions within a single month, given the way submissions are processed for the AISTATS 2016 conference proceedings. The process, as in other machine-learning conferences, is indeed to allocate a batch of papers to each associate editor [or meta-reviewer, or area chair] and then to have those AEs allocate papers to reviewers, all within a few days, as the reviews have to be returned to the authors within a month, by November 16 to be precise. This sounds like a daunting task, but it proceeded rather smoothly thanks to a high degree of automation (this is machine learning, after all!) in processing those papers, due to (a) the prompt response of the large majority of AEs and reviewers involved, who bid on the papers that were of most interest to them, and (b) a computer program called the Toronto Paper Matching System, developed by Laurent Charlin and Richard Zemel, which tremendously helps with managing just about everything! Even when accounting for the more formatted entries in such proceedings (with an 8-page limit) and for the call to conference participants to review other papers, I remain amazed at the resulting difference in time scales for handling papers between the fields of statistics and machine learning. (There was a short-lived attempt to replicate this type of processing for the Annals of Statistics, if I remember correctly.)
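
For readers unfamiliar with this machinery, here is a deliberately naive sketch of bid-based assignment. It is not the Toronto Paper Matching System, which, as I understand it, scores reviewer-paper affinities from the reviewers' own publications and then solves a proper matching problem; it is only a toy greedy allocation run on made-up bids, to convey the kind of automation involved. All names and numbers below are invented for illustration.

# toy, hypothetical sketch: greedy paper-to-reviewer assignment from bid scores
# (not the actual TPMS algorithm)
from collections import defaultdict

def assign_papers(bids, reviewers_per_paper=3, max_load=6):
    """For each paper, pick the highest-bidding reviewers that still have capacity.

    bids: dict mapping (reviewer, paper) to a bid score (higher = keener).
    """
    load = defaultdict(int)   # number of papers already assigned to each reviewer
    assignment = {}
    for paper in sorted({p for _, p in bids}):
        # reviewers who bid on this paper, keenest first
        candidates = sorted(
            ((score, reviewer) for (reviewer, p), score in bids.items() if p == paper),
            reverse=True,
        )
        chosen = []
        for score, reviewer in candidates:
            if load[reviewer] < max_load:
                chosen.append(reviewer)
                load[reviewer] += 1
            if len(chosen) == reviewers_per_paper:
                break
        assignment[paper] = chosen
    return assignment

if __name__ == "__main__":
    toy_bids = {("alice", "paper1"): 3, ("alice", "paper2"): 1,
                ("bob", "paper1"): 2, ("bob", "paper2"): 3,
                ("carol", "paper1"): 1, ("carol", "paper2"): 2}
    print(assign_papers(toy_bids, reviewers_per_paper=2, max_load=2))
    # prints {'paper1': ['alice', 'bob'], 'paper2': ['bob', 'carol']}

In practice, conflict-of-interest constraints and a global balancing of reviewer loads also come into play, which is precisely why such software is indispensable at this scale.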

“la formule qui décrypte le monde”

Posted in Books, Statistics, University life on November 6, 2012 by xi'an

“It was only in the 1980s that the American mathematician Judea Pearl showed that, by aligning hundreds of Bayes formulas, it became possible to take into account the multiple causes of a complex phenomenon.” (my translation)

As a curious coincidence, the latest issue of Science & Vie appeared on the day I was posting about Peter Coles's warnings on scientific communication. The cover title of the magazine is the title of this post, The formula decrypting the World, and it is of course about… Bayes' formula, no one else's!!! The major section (16 pages) in this French popular-science magazine is indeed dedicated to Bayesian statistics and, even more, to Bayesian networks, with the usual stylistic excesses of journalism. As it happens, one of the journalists in charge of this issue came to discuss the topic with me a long while ago at Paris-Dauphine, and I remember the experience as not particularly pleasant, since I had trouble communicating the ideas of Bayesian statistics in layman's terms. In the end, this rather lengthy interview produced two quotes from me, one that could be mine (in connection with some sentences from Henri Poincaré) and another that is definitely apocryphal (yes, indeed, the one above! I am adamant I could not have mentioned Judea Pearl, whose work I am not familiar with, let alone this bizarre image of hundreds of Bayes' theorems… Presumably, this got mixed up with a quote from another interviewed Bayesian. The same misquoting occurred for my friend Jean-Michel Marin!).
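
For readers wondering what the magazine's "formula" actually is, it is worth recalling here that Bayes' theorem simply inverts conditional probabilities,

\[ P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}, \]

while a Bayesian network strings together many such conditional distributions along a directed graph, which is presumably the picture the "hundreds of Bayes formulas" misquote was groping for.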

Among the illustrations selected in the magazine as vignettes are the Monty Hall paradox (which is an exercise in conditioning, not in statistical reasoning!), signal processing for microscope images, Bayesian networks for robots, population genetics (and the return of the musk ox!), stellar cloud formation, tsunami prediction, microarray analysis, climate meta-analysis (with a quote from Noel Cressie), post-Higgs particle physics, the invalidation of ESP studies by Wagenmakers (missing the fact that the reply by Bem, Utts, and Johnson is equally Bayesian), and quantum physics. Taking a step back, these are scientific studies using Bayesian statistics to establish important and novel results. However, it would have been just as easy to come up with equally important and novel results demonstrated via classical, non-Bayesian approaches, such as the detection of the Higgs boson. Now, I understand the difficulty of conveying to the layman the difference made by using Bayesian reasoning to support a scientific argument; however, this accumulation of superlatives opens the door to suspicions of bias and truncated perspectives… The second half of the report is less about statistics and more about psychology and learning, expanding on the notion that the brain operates in ways similar to Bayesian learning and networks.
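
As a quick illustration of that conditioning point (my addition, not the magazine's): say the player picks door 1 and the host, who always opens a door hiding a goat, opens door 3. Writing C_i for "the car is behind door i" and H_3 for "the host opens door 3", Bayes' formula gives

\[ P(C_2 \mid H_3) = \frac{P(H_3 \mid C_2)\,P(C_2)}{\sum_{i=1}^{3} P(H_3 \mid C_i)\,P(C_i)} = \frac{1 \times \frac{1}{3}}{\frac{1}{2}\times\frac{1}{3} + 1\times\frac{1}{3} + 0\times\frac{1}{3}} = \frac{2}{3}, \]

so switching wins two times out of three, by mere conditioning and without any statistical modelling or data.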

In praise of the referee (2)

Posted in Statistics, University life on May 24, 2012 by xi'an

Following Nicolas' guest post on this 'Og, plus Andrew's and my own, we took advantage of Kerrie Mengersen visiting Paris to write a joint piece on the future of the refereeing system and on our proposals to improve it from within, rather than tearing the whole thing down. In particular, one idea is to make the writing of referees' reports part of academic vitae, by turning them into discussions of published papers. Another is to provide some training for referees, by setting refereeing codes and more formalised steps. Yet another is to federate reports, rather than repeating the process one journal at a time for the unlucky ones… The resulting paper has now appeared on arXiv and has just been submitted. (I am rather uncertain about the publication chances of this paper, given that it is an opinion column rather than a research paper…! It has already been rejected once, twice, three, five times!)

the modern internet of things

Posted in University life on May 20, 2012 by xi'an

Here is a hilarious email I got this morning about a journal called the Modern Internet of Things (sic):

Dear Pro. , Considering your research in related areas, we cordially
invite you to submit a paper to Modern Internet of Things 
(MIOT). The Journal of Modern Internet of Things (MIOT) is 
published in English, andis a peer reviewed free-access 
journal which provides rapid publications and a forum for 
researchers, research results, and knowledge on Internet of 
Things. It serves the objective of international academic 
exchange. 

On the journal webpage, I noticed the following:

Papers to be submitted to MIOT are required at least 6 pages after formatting according to the template on this website.

which is quite an unusual request, since journals prefer to cull long papers!

In praise of the referee (guest post)

Posted in Books, Statistics, University life on May 3, 2012 by xi'an

Nicolas Chopin sent me this piece after reading Larry's radical proposal, and my post about it. This is a preliminary version, so feel free to comment!

In a provocative column in the latest ISBA Bulletin, Larry Wasserman calls for “a world without referees”. This is an interesting read, not devoid of mala fide arguments (“We are using a refereeing system that is almost 350 years old. If we used the same printing methods as we did in 1665 it would be considered laughable.”), but this is hard to avoid, given how passionate a subject this is for many scientists. In this article, I'd like to propose a defense of the too often derided referee, arguing that a system that has served us so well for 350 years cannot be ditched so easily.

To start with, talking about getting rid of peer review is a bit of idle talk, as we all know it is never going to happen. In fact, it is a perfect example of the prisoner's dilemma. Stopping sending papers to journals would make sense only if a majority of scientists decided to do so simultaneously. But, for many of us, so much depends on our publication record (including jobs, promotions, grants, even salaries in certain institutions) that very few would dare to “shoot first”. And, even if we assume a given field were ready to do so (say, all the statisticians), would such a move make any sense without all the other fields of science doing it at the same time? The prisoner's dilemma simply moves to a higher level, as scientific fields are also competing for grants, job openings, and so on.

I think US scientists would be surprised to see how regular, yet basic and dumbly quantitative, the evaluation of research by governments and universities is in most countries. This is certainly unfortunate, but it reinforces the prisoner's dilemma I am talking about.

The previous section seems to make the usual argument that refereeing is a necessary evil. We believe, on the contrary, that it is a necessary good. Yes, certain referees are annoying, or even aggressive, or too dismissive of our work. Of course, like Larry, we can tell several horror stories about referees completely missing the point, or even perhaps being outright dishonest.

But, ego bruising and venting aside, all this is beside the point. The real question is: does refereeing increase the quality of our papers, or is it just “noise”, as Larry puts it?

My personal experience is that all my papers have benefited from refereeing. The most extreme example I can share is that of a very obnoxious referee who was obviously doing his best to get my paper rejected (while making me add irrelevant references, probably his own), yet who managed to point out a mistake in my rejection algorithm during the third revision. How glad I am to have had such a nitpicking referee in this case! Publishing a wrong paper is much more damaging in the long run than getting rejected. Plus, getting rejected is not such a big deal: if the paper is worth it, we always find the energy to submit it elsewhere, hopefully in better shape.

In fact, the only referee I fear is the sloppy one who reads the paper quickly and thinks he “gets it”. But associate editors are usually good at spotting these, so this does not occur so often.

I also believe that our papers get improved “preemptively” by refereeing; that is, we write better papers because we know they are going to be evaluated by colleagues. We go the extra mile, chase typos, think more carefully about real examples, and so on. And, finally, refereeing simply filters out the very long tail of bad papers. How glad I am that referees guard the gates of my favorite journals against them. Please do not ask me instead to check every other paper on arXiv: I do not have the time; when I do it, I am strongly biased in favor of authors I know well (like Larry), so this is not fair; and it does not make sense in the first place that all of us should replicate this filtering. Plus, from the papers I review, I can see that only a small proportion of submitted papers are actually sent first to arXiv; if all papers were sent there, we would be even less able to deal with the deluge.

Finally, some people seem outraged that referees ask for certain modifications, because they consider that, as “authors”, they should have an inalienable right to decide the exact form of their texts. But we are not artists (who rarely obtain this right anyway); we are scientists, and science (in particular hard science, especially mathematics) works better through consensus on the validity and correctness of the proposed research.

As a final note, Larry's letter seems to be part of a wider movement rethinking the way we deal with scientific publishing, especially in light of the “disruptive” impact of the Internet, whatever that oft-used word means. Such questioning is of course welcome. But, like most things, our mental energy comes in limited supply, and it should be targeted first at the most glaringly obvious drawbacks of the current system. I am talking, of course, about the issue of unethical publishers, which is a polite term for thugs. The problem is well known: we, the scientists, write the papers, evaluate the papers, and do the editorial work, all for free, yet these publishers take our work from us and charge ridiculous, ever-increasing prices for accessing it. Harvard, of all places, can no longer afford to pay for its journal subscriptions (3.5 million a year, see the official memorandum). Can you imagine the situation in less endowed universities? This is not sustainable, and we should do our best to get rid of these thugs. This is another subject, but may we quickly urge our readers to sign the current boycott of Elsevier, and to prefer submitting to open-access journals, such as our beloved Bayesian Analysis. Let us push together to get rid of this nonsense.

When this is done, we may return to questioning the refereeing system, and perhaps try to improve on it. One thing I would like to experiment with would be to reveal the names of referees, not only to the authors, but also to the public, by mentioning their names on each publication. That way, referees could be neither too complacent nor too negative. And, at the same time, this would be a good recognition of the role referees play in helping to publish better research. I would go as far as saying that this would help us recognize them as co-authors. Not so bad for the poor referee we have been venting at for the last 350 years.

down with referees, up with ???

Posted in Books, Statistics, University life, Wines on April 18, 2012 by xi'an

Statisfaction made me realise I had missed the latest ISBA Bulletin, when I read what Julyan posted about Larry's column on a World without referees. While I agree with many of Larry's points, first and foremost with his criticisms of the refereeing process, which seems to be getting worse and worse, here are a few points of dissent…

The argument that the system is 350 years old and thus must be replaced may be fine at the rhetorical level, but it does not carry any serious weight! First, what is the right scale for a change: 100 years?! 200 years?! Should I burn down my great-grandmother's house because it is from the 1800s and buy a camper van instead?! Should I smash my 1690 Stradivarius and buy a Fender Stratocaster?! Further, given the intensity and the often below-the-belt level of the Newton vs. Leibniz dispute, maybe refereeing and publishing in the Philosophical Transactions of the Royal Society should have been abolished right from the start. Anyway, this is about rhetoric, not substance. (Same thing about the wine-store analogy. It is not even a good one: indeed, when I go to a wine store, I have to rely on (a) well-known brands; (b) brands I have already tried and appreciated; or (c) someone else's advice, be it the owner's, friends', or Robert Parker's… In the former case, it can prove great or disastrous. But this is the most usual way to pick wines, as one cannot hope [dream?] to sample all the wines in the shop.)

My main issue with doing away with referees is the problem of sifting through the chaff. The amount of research published every day is overwhelming. There is a maximum amount of time I can dedicate to looking at websites, blogs, Twitter accounts like Scott Sisson's and Richard Everitt's, and such. And there clearly is a limited amount of trust I put in the opinions expressed on a blog (e.g., take the 'Og, where this anonymous X'racter writes about everything, mostly non-scientific stuff, and reviews papers with a definite bias!). Even keeping track of new arXiv postings sometimes gets overwhelming. So Larry's “if you don't check arXiv for new papers every day, then you are really missing out” means to me that, if I miss arXiv for a few days, I cannot recover. One week away at an intense workshop or on vacation and I am letting some papers go by forever, even though I carry them in my bag for a while… Noll's suggestion to publish only on one's own website is even more unrealistic: why should anyone bother to comment on poor or wrong papers, except when looking for 'Og's fodder?! So the fundamental problem is separating the wheat from the chaff, given the amount of chaff and the attendant tendency to choke on it! Getting rid of referees and journals in order to rely on repositories like [the great, terrific, essential] arXiv forces me to rely on other sources for ranking, selecting, and eliminating papers, again with a component of arbitrariness, subjectivity, bias, variation, randomness, peer pressure, &tc. In addition, having no prior check of papers makes reading a new paper a tremendous chore, as one would have to check the references as well, leading to a sort of infinite regress… and forcing one to rely on reputation and peer opinions, once again! And imagine the inflation in reference letters! I already feel I have to write too many reference letters at the moment, but a world without (good and bad) journals would be a hell of non-stop reference letters. I definitely prefer refereeing (except for Elsevier!), and even more being a journal editor, because I can get an idea of the themes in the field and sometimes spot new trends, rather than writing over and over again about an old friend's research achievements or having to assess from scratch the worth of a younger colleague's work…

Furthermore, and this is a more general issue, I do not believe that the multiplication of blogs, websites, opinion posts, columns, &tc., is necessarily a “much more open, democratic approach”: not everyone voicing an opinion on the Internet gets listened to, and the loudest (or most popular) voices are not always the most reliable ones. A completely egalitarian principle means everyone talks/writes and no one listens/reads: I'd rather stick to the principles set by the Philosophical Transactions of the Royal Society!

Anyway, thanks to Larry for launching a worthwhile debate on new ways of making academia a more rational and scientific place!