Research Area:  Machine Learning
Variational Autoencoders (VAEs) are powerful generative models that merge elements of statistics and information theory with the flexibility of deep neural networks to efficiently solve the generation problem for high-dimensional data. The key insight of VAEs is to learn the latent distribution of the data in such a way that new, meaningful samples can be generated from it. This approach has spurred extensive research and many variations in the architectural design of VAEs, fuelling the recent research field of unsupervised representation learning. In this article, we provide a comparative evaluation of some of the most successful recent variants of VAEs. We focus the analysis in particular on the energy efficiency of the different models, in the spirit of so-called Green AI, aiming to reduce both the carbon footprint and the financial cost of generative techniques. For each architecture, we provide its mathematical formulation, the ideas underlying its design, a detailed model description, a running implementation, and quantitative results.
Author(s) Name:  Asperti, A., Evangelista, D. & Loli Piccolomini, E.
Journal name:  SN Computer Science
Publisher name:  Springer Nature Switzerland
Volume Information:   2, 301 (2021).
Paper Link:   https://link.springer.com/article/10.1007/s42979-021-00702-9
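The abstract's key insight — learning a latent distribution from which new samples can be drawn — rests on two quantities common to all the VAE variants surveyed: the reparameterized latent sample and the closed-form KL term of the ELBO. The NumPy sketch below illustrates these two pieces only; the function names, shapes, and Gaussian decoder assumption are our own for illustration, not the paper's implementation.

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    """Sample z ~ N(mu, sigma^2) via the reparameterization trick,
    z = mu + sigma * eps with eps ~ N(0, I), so gradients can flow
    through mu and logvar during training."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over
    latent dimensions -- the regularization term of the ELBO."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def gaussian_recon_loss(x, x_hat, sigma=1.0):
    """Reconstruction term (negative log-likelihood up to constants)
    under an assumed Gaussian decoder N(x_hat, sigma^2)."""
    return np.sum((x - x_hat) ** 2, axis=-1) / (2.0 * sigma**2)

# Toy usage: a 2-D latent code for a single data point.
rng = np.random.default_rng(0)
mu = np.array([0.1, -0.2])
logvar = np.zeros(2)
z = reparameterize(mu, logvar, rng)     # latent sample to decode
kl = kl_to_standard_normal(mu, logvar)  # pulls q(z|x) toward the prior
```

Minimizing `gaussian_recon_loss` plus `kl_to_standard_normal` (the negative ELBO) is the training objective that the surveyed architectures modify in different ways.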