The co-information lattice

In 1955, McGill published a multivariate generalisation of Shannon's mutual information. Algorithms such as Independent Component Analysis use a different generalisation, the redundancy, or multi-information [13]. McGill's concept expresses the information shared by all of K random variables, while the multi-information expresses the information shared by any two or more of them. Partly to avoid confusion with the multi-information, I call his concept here the co-information. Co-informations, oddly, can be negative. They form a partially ordered set, or lattice, as do the entropies. Entropies and co-informations are simply and symmetrically related by Möbius inversion [12]. The co-information lattice sheds light on the problem of approximating a joint density with a set of marginal densities, though as usual we run into the partition function. Since the marginals correspond to higher-order edges in Bayesian hypergraphs, this approach motivates new algorithms such as Dependent Component Analysis, which we describe, and (loopy) Generalised Belief Propagation on hypergraphs, which we do not. Simulations of subspace-ICA (a tractable DCA) on natural images are presented on the web. In neural computation theory, we identify the co-information of a group of neurons (possibly in space/time staggered patterns) with the degree of existence of a corresponding cell assembly.

researchgate.net/publication/228849052_The_co-information_lattice

Really into the idea of co-information right now; it has some really elegant ideas and spin-offs of mutual information, and it opens things up in an ontological sense, in how you consider it as well as what you can actually do with it. The idea of shared entropy is pretty fascinating and neat.
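To make the subset-entropy bookkeeping concrete, here is a minimal Python sketch (my own illustration, not code from the paper) that computes the co-information as the alternating Möbius-style sum of subset entropies, and the multi-information as the sum of marginal entropies minus the joint entropy. The pmf representation and function names are just assumptions for the example; the XOR distribution at the end shows how a co-information can indeed come out negative while the multi-information stays positive.

from itertools import combinations
from math import log2

def entropy(pmf):
    # Shannon entropy in bits of a pmf given as {outcome_tuple: probability}.
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(joint, idx):
    # Marginal pmf over the variable positions listed in idx.
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def co_information(joint, n):
    # Co-information: I = -sum over nonempty subsets T of (-1)^|T| H(T).
    total = 0.0
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            total += (-1) ** (k + 1) * entropy(marginal(joint, idx))
    return total

def multi_information(joint, n):
    # Multi-information (total correlation): sum_i H(X_i) - H(X_1, ..., X_n).
    return sum(entropy(marginal(joint, (i,))) for i in range(n)) - entropy(joint)

# XOR example: Z = X xor Y with X, Y independent fair bits. Any two of the
# three variables are independent, yet all three are jointly constrained, so
# the co-information is -1 bit while the multi-information is +1 bit.
xor = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}
print(co_information(xor, 3))     # -1.0
print(multi_information(xor, 3))  # 1.0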
Posted on: Fri, 10 Oct 2014 21:44:56 +0000
