Research Area:  Machine Learning
Learning useful representations with little or no supervision is a key challenge in artificial intelligence. We provide an in-depth review of recent advances in representation learning with a focus on autoencoder-based models. To organize these results, we make use of meta-priors believed useful for downstream tasks, such as disentanglement and hierarchical organization of features. In particular, we uncover three main mechanisms to enforce such properties, namely (i) regularizing the (approximate or aggregate) posterior distribution, (ii) factorizing the encoding and decoding distribution, or (iii) introducing a structured prior distribution. While there are some promising results, implicit or explicit supervision remains a key enabler, and all current methods rely on strong inductive biases and modeling assumptions. Finally, we provide an analysis of autoencoder-based representation learning through the lens of rate-distortion theory and identify a clear tradeoff between the amount of prior knowledge available about the downstream task and how useful the representation is for that task.
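As a concrete illustration of mechanism (i), the sketch below shows a beta-VAE-style objective, a canonical instance of regularizing the approximate posterior. This is a minimal sketch under assumed conventions, not the paper's code: the class name BetaVAE, the layer sizes, and the value of beta are illustrative choices. Setting beta = 1 recovers the standard VAE evidence lower bound; beta > 1 penalizes the KL (rate) term more heavily, trading reconstruction fidelity (distortion) for a more structured code, which is the rate-distortion tradeoff the abstract refers to.

```python
# Minimal, hedged sketch of a beta-VAE objective (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=10, hidden=400):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(input_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)       # posterior mean
        self.logvar = nn.Linear(hidden, latent_dim)   # posterior log-variance
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, input_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, with sigma = exp(0.5 * logvar).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    # Distortion: reconstruction error (negative log-likelihood up to constants).
    recon = F.mse_loss(x_hat, x, reduction="sum")
    # Rate: KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta > 1 constrains the rate more strongly (mechanism (i): posterior regularization).
    return recon + beta * kl

# Usage example (flattened 28x28 inputs are an illustrative assumption):
model = BetaVAE()
x = torch.rand(32, 784)
x_hat, mu, logvar = model(x)
loss = beta_vae_loss(x, x_hat, mu, logvar, beta=4.0)
loss.backward()
```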
Keywords:  
Autoencoder
Representation Learning
Machine Learning
Deep Learning
Author(s) Name:  Michael Tschannen, Olivier Bachem, Mario Lucic
Journal name:  Computer Science
Conference name:  
Publisher name:  arXiv (preprint arXiv:1812.05069)
DOI:   https://doi.org/10.48550/arXiv.1812.05069
Volume Information:  
Paper Link:   https://arxiv.org/abs/1812.05069