There is also a view that empirical likelihood is (nearly) a likelihood on a least favorable family whose dimension equals that of the parameter. Chapter 9 of my book points to work by DiCiccio and Romano (1990) on this. It is then reasonable to multiply a prior on that family by the empirical likelihood. It would be interesting to connect these dots a bit more.

There are lots of papers on entropy methods. Entropy is natural for finding least informative distributions subject to constraints, and it leads to the familiar exponential tilting. But it amounts to the probability of the model under the data, i.e., a backwards likelihood. Empirical likelihood instead gives a reciprocal tilting, and the weights can be found by convex optimization.
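To make the contrast concrete, here is a minimal sketch (my own illustration, not from any of the papers discussed) of the two tiltings for a scalar mean constraint: empirical likelihood gives weights of the reciprocal form w_i ∝ 1/(1 + λ(x_i − μ)), while entropy/exponential tilting gives w_i ∝ exp(λ(x_i − μ)). Both reduce to a one-dimensional root-finding problem in the dual variable λ, solved here by unguarded Newton steps (fine when μ is well inside the convex hull of the data; a production solver would add safeguards).

```python
import numpy as np

def el_weights(x, mu, iters=50):
    """Empirical likelihood weights for the mean constraint sum_i w_i (x_i - mu) = 0.
    Dual (reciprocal) form: w_i = 1 / (n * (1 + lam * (x_i - mu)))."""
    g = x - mu
    n = len(x)
    lam = 0.0
    for _ in range(iters):
        d = 1.0 + lam * g
        f = np.sum(g / d)            # estimating equation; zero at the solution
        fp = -np.sum(g**2 / d**2)    # its derivative in lam
        lam -= f / fp                # Newton step (no safeguards in this sketch)
    return 1.0 / (n * (1.0 + lam * g))

def et_weights(x, mu, iters=50):
    """Exponential tilting (maximum entropy) weights: w_i proportional to
    exp(lam * (x_i - mu)), with lam chosen so the tilted mean equals mu."""
    g = x - mu
    lam = 0.0
    for _ in range(iters):
        w = np.exp(lam * g)
        w /= w.sum()
        f = np.sum(w * g)                # tilted mean of g; zero at the solution
        fp = np.sum(w * g**2) - f**2     # variance of g under w
        lam -= f / fp
    return w
```

With data whose sample mean differs from μ, both routines return positive weights summing to one whose weighted mean is μ; they differ in how they shrink the weights on points far from μ.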

See the following paper by Schennach:

http://biomet.oxfordjournals.org/content/92/1/31.short

“We show that a likelihood function very closely related to empirical likelihood naturally arises from a nonparametric Bayesian procedure which places a type of noninformative prior on the space of distributions. This prior gives preference to distributions having a small support and, among those sharing the same support, it favours entropy-maximising distributions. The resulting nonparametric Bayesian procedure admits a computationally convenient representation as an empirical-likelihood-type likelihood where the probability weights are obtained via exponential tilting.”
