undecidable learnability

“There is an unknown probability distribution P over some finite subset of the interval [0,1]. We get to see m i.i.d. samples from P for m of our choice. We then need to find a finite subset of [0,1] whose P-measure is at least 2/3. The theorem says that the standard axioms of mathematics cannot be used to prove that we can solve this problem, nor can they be used to prove that we cannot solve this problem.”
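To make the quoted setup concrete, here is a minimal Python sketch of one round of the game. The uniform distribution over k hidden points and the naive "return the values you saw" learner are my own illustrative choices, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

k = 50                                # hidden support size (assumption)
support = rng.random(k)               # unknown finite subset of [0,1]
m = 200                               # sample size, chosen by the learner

sample = rng.choice(support, size=m)  # m i.i.d. draws from P
guess = set(sample)                   # proposed finite subset of [0,1]

# P-measure of the guess: fraction of support points actually observed,
# since each point carries mass 1/k under the uniform assumption.
measure = len(guess) / k
print(f"P-measure of the returned set: {measure:.3f} (target: 2/3)")
```

Any single simulation like this one succeeds easily once m is large relative to the support; the undecidability in the theorem concerns whether a learner with sample size bounded uniformly over all finite-support distributions can be proven to exist within the standard axioms.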

In the first issue of the (controversial) nature machine intelligence journal, Ben-David et al. wrote a paper they present as the machine learning equivalent of Gödel's incompleteness theorem. The result is somewhat surprising from my layman perspective, and it seems to relate only to a formal representation of statistical problems, formal as in Vapnik-Chervonenkis (PAC) learning theory. It sounds as if, given a finite learning dataset, there are always features that cannot be learned when the size of the population grows to infinity, but this is hardly exciting…

The above quote actually makes me think of the Robbins-Wasserman counter-example for censored data and Bayesian tail prediction, but I am unsure the connection is anything more than sheer fantasy…!
