Recently I have been digging back in time to re-acquaint myself with Maximum Entropy data processing (MaxEnt). It’s a full 11 years since I last earned a living by applying MaxEnt, and trying to get back into the area has been a good experience – I have been pleasantly surprised by how things have moved on in MaxEnt land.
Last I knew, MaxEnt and Bayes were officially two distinct things. There was an understanding that MaxEnt could be thought of as a Bayesian prior probability distribution for positive additive quantities (e.g. images and spectra). Now it seems that they are deeply and fundamentally connected, and that there is a single underlying theory (confusingly called Maximum Relative Entropy, or MrE) which shows that Bayes updating and MaxEnt are both special cases of a unifying inference framework. I do not pretend to fully understand all of the nuances, either philosophical or mathematical, but it appears that Ariel Caticha and his PhD student Adom Giffin have been having great fun setting up the new formalism and applying it (see particularly Caticha’s lecture notes and Giffin’s thesis on arXiv.org, which I have been reading). My experience is that even the best ideas take time to catch on – so many people have to unlearn so much, etc. – but somehow I suspect that in another 11 years or so we may all be maximising our relative entropies!
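For readers who have never seen MaxEnt in action, here is a toy sketch of the classic "Brandeis dice" illustration (this example is mine, not taken from Caticha's notes or Giffin's thesis): told only that a die's long-run average roll is 4.5 rather than the fair 3.5, MaxEnt assigns the probabilities p_i ∝ exp(λ·i) that maximise entropy subject to that mean constraint, with the Lagrange multiplier λ found numerically. All names here are my own invention for the sketch.

```python
import math

# Maximum-entropy distribution over die faces 1..6, constrained
# only by a known mean. MaxEnt gives p_i proportional to exp(lam * i);
# we find lam by bisection so the distribution reproduces the mean.

FACES = [1, 2, 3, 4, 5, 6]

def mean_for(lam):
    """Mean roll under p_i ∝ exp(lam * i); monotone increasing in lam."""
    weights = [math.exp(lam * f) for f in FACES]
    z = sum(weights)
    return sum(f * w for f, w in zip(FACES, weights)) / z

def maxent_die(target_mean, tol=1e-12):
    """Return the MaxEnt probabilities for a die with the given mean."""
    lo, hi = -20.0, 20.0  # bracket for the Lagrange multiplier
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    weights = [math.exp(lam * f) for f in FACES]
    z = sum(weights)
    return [w / z for w in weights]

probs = maxent_die(4.5)  # biased towards the higher faces
```

With the mean fixed at the fair value 3.5, the same routine recovers the uniform distribution (λ = 0), which is the reassuring sanity check: with no extra information, MaxEnt assigns equal probabilities.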