Authors
Naftali Tishby, Fernando C Pereira, William Bialek
Publication date
2000/4/24
Journal
arXiv preprint physics/0004057
Description
We define the relevant information in a signal $x \in X$ as the information that this signal provides about another signal $y \in Y$. Examples include the information that face images provide about the names of the people portrayed, or the information that speech sounds provide about the words spoken. Understanding the signal $x$ requires more than just predicting $y$; it also requires specifying which features of $X$ play a role in the prediction. We formalize this problem as that of finding a short code for $X$ that preserves the maximum information about $Y$. That is, we squeeze the information that $X$ provides about $Y$ through a 'bottleneck' formed by a limited set of codewords $\tilde{X}$. This constrained optimization problem can be seen as a generalization of rate distortion theory, in which the distortion measure $d(x, \tilde{x})$ emerges from the joint statistics of $X$ and $Y$. This approach yields an exact set of self-consistent equations for the coding rules $X \to \tilde{X}$ and $\tilde{X} \to Y$. Solutions to these equations can be found by a convergent re-estimation method that generalizes the Blahut-Arimoto algorithm. Our variational principle provides a surprisingly rich framework for discussing a variety of problems in signal processing and learning, as will be described in detail elsewhere.
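A minimal sketch of the re-estimation procedure summarized above, assuming a discrete joint distribution $p(x, y)$ given as a NumPy array: at a fixed trade-off parameter $\beta$ in the variational principle (minimizing $I(X;\tilde{X}) - \beta\, I(\tilde{X};Y)$), the three self-consistent updates for $p(\tilde{x}|x)$, $p(\tilde{x})$, and $p(y|\tilde{x})$ are iterated until convergence. The function name `information_bottleneck` and the initialization, codeword count, and iteration limit are illustrative assumptions, not code from the paper.

```python
import numpy as np

def information_bottleneck(p_xy, n_codewords, beta, n_iter=200, seed=0):
    """Iterative (Blahut-Arimoto-style) solution of the IB self-consistent
    equations for a discrete joint distribution p(x, y).

    p_xy: array of shape (|X|, |Y|), entries summing to 1; every x is assumed
    to have nonzero marginal probability. Returns the encoder p(t|x), the
    codeword marginal p(t), and the decoder p(y|t).
    """
    rng = np.random.default_rng(seed)
    n_x, _ = p_xy.shape
    eps = 1e-12
    p_x = p_xy.sum(axis=1)                       # p(x)
    p_y_given_x = p_xy / p_x[:, None]            # p(y|x)

    # Random soft initialization of the encoder p(t|x).
    p_t_given_x = rng.random((n_x, n_codewords))
    p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # p(t) = sum_x p(x) p(t|x)
        p_t = p_x @ p_t_given_x
        # p(y|t) = sum_x p(t|x) p(x) p(y|x) / p(t)
        p_y_given_t = (p_t_given_x * p_x[:, None]).T @ p_y_given_x
        p_y_given_t /= p_t[:, None]
        # KL divergence D[p(y|x) || p(y|t)] for every (x, t) pair.
        log_ratio = (np.log(p_y_given_x[:, None, :] + eps)
                     - np.log(p_y_given_t[None, :, :] + eps))
        kl = (p_y_given_x[:, None, :] * log_ratio).sum(axis=2)
        # Self-consistent update: p(t|x) proportional to p(t) exp(-beta * KL).
        logits = np.log(p_t + eps)[None, :] - beta * kl
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        p_t_given_x = np.exp(logits)
        p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)

    return p_t_given_x, p_t, p_y_given_t
```

In this sketch, $\beta$ controls the trade-off the abstract describes: small values favor short (highly compressed) codes, while larger values preserve more of the information that $X$ carries about $Y$.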
Total citations
Citations per year, 2001–2024 (Google Scholar histogram; yearly bar values omitted)