Archive for data privacy

Big Bayes goes South

Posted in Books, Mountains, pictures, Running, Statistics, Travel, University life on December 5, 2018 by xi'an

At the Big [Data] Bayes conference this week [which I found quite exciting despite a few last-minute cancellations by speakers] there were a lot of clustering talks, including one by Amy Herring (Duke), using a notion of centering that should soon appear on arXiv, and one by Peter Müller (UT Austin) towards handling large datasets, based on a predictive recursion that takes one value at a time, unsurprisingly similar to the update of Dirichlet process mixtures (and inspired by a 1998 paper by Michael Newton and co-authors). The recursion doubles in size at each observation, requiring the culling of negligible components. Order matters? With links to the mixtures of mixtures of Malsiner-Walli et al. (2017). Also talks by Antonio Lijoi and Igor Pruenster (Bocconi, Milano) on completely random measures used in creating clusters. And by Sylvia Frühwirth-Schnatter (WU Wien) on clustering the Austrian labor market to assess the impact of company closures. And by Gregor Kastner (WU Wien) on multivariate factor stochastic volatility models, with a video of a large covariance matrix evolving over time and catching economic crises. And by David Dunson (Duke) on distance clustering, reflecting, like myself, on the definitely ill-defined nature of the [clustering] object: as the sample size increases, spurious clusters appear. (Which reminded me of a disagreement I had had with David MacKay at an ICMS conference on mixtures twenty years ago.) Making me realise I had missed the recent JASA paper by Miller and Dunson on that perspective.
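As a concrete aside [mine, not the talk's], here is a minimal R sketch of Newton's predictive recursion for the mixing density of a Gaussian location mixture, evaluated on a fixed grid; the growing mixture representation mentioned above, with its culling of negligible components, is the exact analogue of this discretised one-observation update. Grid, kernel, and weight sequence are all illustrative choices.

```r
## A minimal sketch of Newton's predictive recursion on a fixed grid,
## for a Gaussian location mixture; all settings are illustrative.

predictive_recursion <- function(x, grid, kernel_sd = 1, gamma = 0.67) {
  f <- dnorm(grid, mean = 0, sd = 5)   # initial guess for the mixing density
  f <- f / sum(f)                      # normalise on the grid
  for (i in seq_along(x)) {
    w    <- (i + 1)^(-gamma)                         # decreasing weights
    lik  <- dnorm(x[i], mean = grid, sd = kernel_sd) # kernel at grid points
    marg <- sum(lik * f)                             # predictive mass of x[i]
    f    <- (1 - w) * f + w * lik * f / marg         # one-observation update
  }
  f
}

## toy data from a two-component mixture
set.seed(1)
x    <- c(rnorm(100, -2), rnorm(100, 3))
fhat <- predictive_recursion(sample(x), grid = seq(-8, 8, by = 0.05))
```

Since the recursion is order-dependent (hence the "order matters?" above), averaging the resulting estimate over random permutations of the data is a common fix.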

Some further snapshots (with short comments visible by hovering on the pictures) of a very high quality meeting [says one of the organisers!]. Following suggestions from several participants, it would be great to hold another meeting at CIRM in the near future.

the DeepMind debacle

Posted in Books, Statistics, Travel on August 19, 2017 by xi'an

“I hope for a world where data is at the heart of understanding and decision making. To achieve this we need better public dialogue.” Hetan Shah

As I was reading one of the Nature issues I had brought on vacation, while the rain was falling on an aborted hiking day on the fringes of Monte Rosa, I came across a 20 July tribune by Hetan Shah, executive director of the RSS. A rare occurrence of a statistician's perspective in Nature. The event prompting this column is the ruling against the Royal Free London hospital group for providing patient data to DeepMind towards predicting acute kidney injury. Without the patients' agreement. And with enough information to identify the patients. The issues raised by Hetan Shah are that data transfers should be made open, that they should be commensurate in volume and detail with the intended goals, and that public approval should be sought. While I know nothing about this specific case, I find the article overly critical of DeepMind, whose interest in health-related problems is certainly not pure and disinterested, but which can nonetheless contribute advances in (personalised) care and prevention through its expertise in machine learning. (Disclaimer: I have neither connection nor conflict with the company!) And I do not see exactly how public approval or dialogue can help in making progress in handling data, unless I am mistaken in my understanding of "the public". The article mentions the launch of a UK project on data ethics, involving several [public] institutions like the RSS: this is certainly commendable and may improve how personal data is handled by companies, but I would not call this conglomerate representative of the public, which most likely does not really trust these institutions either…

datazar

Posted in R, Statistics, University life on June 4, 2017 by xi'an

A few weeks ago and then some, I [as an occasional blogger!] got contacted by datazar.com to write a piece on this data-sharing platform. I then went and checked what this was all about, having the vague impression it was a platform where I could store and run R code, besides depositing collaborative projects; but from what I quickly read, it sounded more like being able to run R scripts from one's own machine using data and code stored on datazar.com. After reading just one more blog entry, though, I finally understood that it is also possible to run R, SQL, NotebookJS (and LaTeX) directly on that platform, without downloading code or data to one's machine. Which makes it a definite plus for this site, as users can experiment with no transfer to their own computer, hence from a wider variety of devices. While I personally do not [yet?] see how to use it for my research or [limited] teaching, it seems like [yet another] interesting exploration of the positive uses of the Internet to collaborate and communicate on scientific issues! With no opinion offered on the privacy and data protection provided by the site, of course.

Steve Fienberg’ obituary in Nature

Posted in Statistics on March 10, 2017 by xi'an

“Stephen Fienberg was the ultimate public statistician.”

Robin Mejia from CMU published in the 23 Feb issue of Nature an obituary of Steve Fienberg that beautifully sums up Steve's contributions to science and academia. I like the above quote very much, as Steve was indeed definitely involved in public policies, towards making those more rational and fair. I remember the time he came to Paris-Dauphine to give a seminar and talk about his assessment, within a NAS committee, of the polygraph (and my surprise at it being used at all in the US, and even worse in judicial proceedings). Similarly, I remember his involvement in basing the US Census on surveys rather than on an illusory exhaustive coverage of the entire US population. Including a paper in Nature about the importance of surveys. And his massive contributions to preserving privacy in surveys and databases, an issue on which he was a precursor (even though my colleagues at the French Census Bureau did not seize the opportunity when he spent a sabbatical in Paris in 2004). While it is such a sad circumstance that led to statistics getting a rare entry in Nature, I am glad that Steve can also be remembered that way.
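As background on the kind of survey-privacy devices this area grew from, here is a minimal R sketch of Warner's (1965) randomised response; it illustrates the field rather than one of Steve's own constructions, and all settings are mine.

```r
## A minimal sketch of Warner's (1965) randomised response, a classical
## survey-privacy device; background on the area, not one of Fienberg's
## own constructions, with illustrative settings.

randomised_response <- function(truth, p = 0.7) {
  # answer truthfully with probability p, otherwise give the complement
  flip <- rbinom(length(truth), 1, p)
  ifelse(flip == 1, truth, 1 - truth)
}

## unbiased recovery of the sensitive proportion from the blinded answers
set.seed(3)
truth   <- rbinom(1000, 1, 0.2)   # unobserved sensitive trait
p       <- 0.7
answers <- randomised_response(truth, p)
pi_hat  <- (mean(answers) - (1 - p)) / (2 * p - 1)
```

No individual answer reveals the respondent's trait, yet the proportion remains estimable, which is the trade-off at the heart of privacy-preserving surveys.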

weapons of math destruction [book review]

Posted in Books, Kids, pictures, Statistics, University life on December 15, 2016 by xi'an

As I had read many comments and reviews about this book, including one by Arthur Charpentier on Freakonometrics, I eventually decided to buy it from my Amazon Associate savings (!). With a strong a priori bias, I am afraid, gathered from reading some excerpts, comments, and the overall advertising about it. And also because the book reminded me of another quantic swan. Not to mention the title. After reading it, I am afraid I cannot say my assessment has changed much.

“Models are opinions embedded in mathematics.” (p.21)

The core message of this book is that the use of algorithms and AI methods to evaluate and rank people is unsatisfactory and unfair. From predicting recidivism to firing high school teachers, from rejecting loan applications to enticing the most vulnerable categories to enrol in for-profit colleges. Which is indeed unsatisfactory and unfair. Just like using the h-index and citation rankings for promotion or hiring. (The book mentions the controversial hiring of many adjunct faculty by KAU to boost its ranking.) But this conclusion is not enough of an argument to write a whole book. Or even to blame mathematics for the unfairness: as far as I can tell, mathematics has nothing to do with unfairness. Some analysts crunch numbers, produce a score, and then managers make poor decisions. The use of the term mathematics throughout the book is thus completely inappropriate, when the author means statistics, machine learning, data mining, predictive algorithms, neural networks, &tc. (OK, there is a small section on Operations Research on p.127, but I figure deep learning can bypass the maths.)

AISTATS 2016 [#2]

Posted in Kids, pictures, Running, Statistics, Travel, University life, Wines on May 13, 2016 by xi'an

The second and third days of AISTATS 2016 passed like a blur, with not even the opportunity to write my impressions in real time! Maybe the long tapas breaks are mostly to blame for this… In any case, we had two further exciting plenary talks, on privacy-preserving data analysis by Kamalika Chaudhuri and on crowdsourcing and machine learning by Adam Tauman Kalai. Kamalika's talk covered recent results by her and her coauthors on optimal privacy preservation in classification and a generalisation to correlated data, with the neat notion of a Markov Quilt. Other talks that same day also dwelt on this privacy issue, but I could not attend them all. Adam's talk was full of fun illustrations of humans training learning systems (with the unsolved difficulty of those humans deliberately mis-training the system, as exhibited recently by the short-lived Microsoft Tay experiment).
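For readers unfamiliar with the differential-privacy framework behind such talks, here is a hedged R illustration of the basic Laplace mechanism; this is standard background only, not Kamalika's Markov Quilt construction, which extends such guarantees to correlated data.

```r
## A minimal sketch of the standard Laplace mechanism of differential
## privacy, given as background; all settings are illustrative.

laplace_mechanism <- function(true_value, sensitivity, epsilon) {
  scale <- sensitivity / epsilon   # noise scale achieving eps-DP
  # a Laplace(0, scale) draw as the difference of two exponentials
  noise <- rexp(1, rate = 1 / scale) - rexp(1, rate = 1 / scale)
  true_value + noise
}

## releasing the mean of n values in [0,1]: the sensitivity is 1/n
set.seed(2)
x <- runif(100)
private_mean <- laplace_mechanism(mean(x), sensitivity = 1 / length(x),
                                  epsilon = 0.5)
```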

Both poster sessions were equally exciting, with the addition of MLSS student posters on the final day. Among many, I particularly enjoyed Iain Murray's pseudo-marginal slice sampling, David Duvenaud's fairly intriguing use of early stopping for non-parametric inference, Garrett Bernstein's work on aggregated Markov chains, Ye Wang's scalable geometric density estimation [with a special bonus for his typo on the University of Turing, instead of Torino], Gemma Moran's and Chengtao Li's posters on determinantal processes, and Matej Balog's Mondrian forests with a Laplace kernel [envisioning potential applications for ABC]. Again, just to mention a few…

The participants [incl. myself] also took one evening off to visit a sherry winery in Jerez, with a well-practiced spiel on the story of the company, with one building designed by Gustave Eiffel, and with a wine-tasting session. As I personally find this type of fortified wine too strong in alcohol, I am not a big fan of sherry, but it was nonetheless an amusing trip! With no visible after-effects the next morning, since the audience was as large as usual for Adam's talk [although I did not cross paths with a single machine-learning soul on my 6am run…]

In short, I enjoyed AISTATS 2016 very much and remain deeply impressed by the efficiency of the selection process and by the dedication of everyone involved in it, as mentioned earlier on the 'Og. Kudos!

reproducibility

Posted in Books, Statistics on December 1, 2015 by xi'an

While in Warwick this week, I borrowed a recent issue (Oct. 08, 2015) of Nature from Tom Nichols and read it over dinners in a maths house. Its featured topic was reproducibility, with a long introductory article about "Fooling ourselves", starting with an illustration from Andrew himself, who had gotten a sign wrong in one of those election studies that are the basis of Red State, Blue State. While this article does not bring radically new perspectives on the topic, there is nothing shocking about it, and it even goes on to mention Peter Green and his Royal Statistical Society President's tribune about the Sally Clark case, as well as Eric-Jan Wagenmakers and a collaboration with competing teams that sounded like "putting one's head on a guillotine". Which relates to a subsequent "comment" on crowdsourcing research and data analysis.

I however got most interested in another comment, by MacCoun and Perlmutter, where they advocate a systematic blinding of data to avoid conscious or unconscious biases. While I deem the idea quite interesting, and connected with anonymisation techniques in data privacy, I find the presentation rather naïve in its goals (from a statistical perspective). Indeed, if we consider data produced by a scientific experiment towards the validation or invalidation of a scientific hypothesis, it usually stands on its own, with no other experiment of a similar kind to refer to. Add too much noise and only noise remains. Add too little and the original data remains visible. This means it is quite difficult to calibrate the blinding mechanisms in order for the blinded data to remain realistic enough to be analysed. Or to be different enough from the original data for different conclusions to be drawn. The authors suggest the blinding be done by software, through added noise, biases, label switching, &tc. But I do not think this blinding can be done blindly, i.e., without a clear idea of what the possible models are, so that each perturbed dataset created out of the original data favours one of the models under comparison. And remains realistic for at least one of those models. Thus, some preliminary analysis of the original data, or of some pseudo-data from each of the proposed models, is somewhat unavoidable to calibrate the blinding machinery towards realistic values. If designing a new model is part of the inferential goals, this may prove impossible… Again, I think having several analyses run in parallel on several perturbed datasets is quite a good idea to detect the impact of some prior assumptions. But this requires statistically savvy programmers. And possibly informative prior distributions.
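To make the blinding step concrete, here is a minimal R sketch of the kind of software perturbation mentioned above, combining added noise and label switching; the knobs noise_sd and swap_frac are illustrative, and their calibration is exactly the difficulty discussed in this post.

```r
## A minimal sketch of data blinding by perturbation, in the spirit of
## MacCoun and Perlmutter's proposal; noise_sd and swap_frac are
## illustrative knobs whose calibration is the hard part.

blind_data <- function(y, labels, noise_sd = 0.1, swap_frac = 0.05) {
  n <- length(y)
  y_blind <- y + rnorm(n, sd = noise_sd * sd(y))     # add calibrated noise
  swap <- sample(n, size = ceiling(swap_frac * n))   # indices to switch
  labels_blind <- labels
  labels_blind[swap] <- sample(labels[swap])         # permute a small fraction
  data.frame(y = y_blind, group = labels_blind)
}

## several blinded copies for parallel analyses
set.seed(42)
y <- rnorm(200, mean = rep(c(0, 0.5), each = 100))
g <- rep(c("A", "B"), each = 100)
copies <- lapply(1:5, function(k) blind_data(y, g))
```

Running the intended analysis on each of the several blinded copies, as suggested at the end of the post, would then expose how sensitive the conclusions are to the perturbation.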