Mutual information is a fundamental quantity for measuring the dependence between two random variables: it quantifies the amount of information obtained about one random variable by observing the other. In representation learning, mutual information maximization is an important and appealing principle for learning representations of data, and its estimation has been applied across a range of settings, including unsupervised, self-supervised, semi-supervised, and deep representation learning.
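As a concrete illustration of the quantity being maximized (a minimal sketch, not a method from the literature surveyed here), the mutual information between two discrete variables, I(X; Y) = Σ p(x, y) log[p(x, y) / (p(x) p(y))], can be estimated by plugging in empirical frequencies; the function name and sample data below are illustrative assumptions:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(X; Y) in nats from observed (x, y) samples,
    using empirical joint and marginal frequencies."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), count in joint.items():
        p_xy = count / n
        # p_xy / (p_x * p_y) written with counts: p_xy * n^2 / (c_x * c_y)
        mi += p_xy * math.log(p_xy * n * n / (px[x] * py[y]))
    return mi

# Perfectly dependent binary variables: I(X; Y) = H(X) = log 2 nats
print(mutual_information([(0, 0), (1, 1)] * 50))   # ≈ 0.6931
# Independent binary variables: I(X; Y) = 0
print(mutual_information([(0, 0), (0, 1), (1, 0), (1, 1)] * 25))  # 0.0
```

The plug-in estimator is simple but biased for small samples; the estimators discussed in this literature exist precisely because such direct estimation does not scale to continuous, high-dimensional variables.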
A more recent direction is mutual information maximization for representation learning in reinforcement learning (RL). The goal is to learn a compact representation that discards unwanted and redundant state-space information while retaining the information relevant to the policy or the task. The representation is treated as a random variable, and mutual information is estimated with respect to it. Three mutual information objectives for representation learning in RL are forward information, state-only transition information, and inverse information, and the sufficiency of the representations maximizing each objective has been analyzed. Estimating mutual information is thus a well-established and essential task in data analytics and representation learning, capturing the statistical dependence between variables, and a wide range of methods and applications continues to be developed to address it.
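In practice, such objectives are typically optimized through variational lower bounds rather than direct estimation. The sketch below (an illustrative assumption, not the method of any particular paper) shows the widely used InfoNCE contrastive bound with a simple dot-product critic, applied to toy paired samples standing in for representations of consecutive states; all names and the synthetic data are hypothetical:

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def infonce_lower_bound(xs, ys):
    """InfoNCE (contrastive) lower bound on I(X; Y) in nats,
    using a dot-product critic f(x, y) = x . y.
    xs, ys: equal-length lists whose i-th entries are paired samples."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        scores = [dot(xs[i], yj) for yj in ys]
        m = max(scores)  # subtract the max for numerical stability
        log_denom = m + math.log(sum(math.exp(s - m) for s in scores))
        total += scores[i] - log_denom  # log-softmax of the positive pair
    return total / n + math.log(n)

random.seed(0)
dim, n = 16, 128
states = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
# Toy "next-state" representations: noisy copies of the state representations
next_states = [[s + 0.1 * random.gauss(0, 1) for s in row] for row in states]
bound = infonce_lower_bound(states, next_states)
print(bound)  # positive: the paired representations share information
```

The bound is capped at log(n) nats, which is why contrastive MI estimators need large batches when the true mutual information is high; in the RL objectives above, the paired samples would instead be (state, action) and next-state representations (forward information), consecutive state representations (state-only transition information), or state pairs and actions (inverse information).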