In deep learning, an autoencoder is an unsupervised neural network composed of two sub-models: an encoder and a decoder. It is trained on unlabeled, unclassified data; the encoder maps the input to a compressed feature representation, and the decoder reconstructs the input from that representation. The autoencoder learns the significant features present in the data by minimizing the reconstruction error between the input and the output, so the output layer has exactly as many neurons as the input layer. Deep autoencoders are versatile because they learn compressed representations without supervision and can be trained with modest computational resources by training one layer at a time.

Common variants include regularized autoencoders, the concrete autoencoder, and variational autoencoders. Regularized autoencoders are designed to learn richer representations and capture more information; techniques in this family include the Sparse Autoencoder (SAE), the Denoising Autoencoder (DAE), and the Contractive Autoencoder (CAE). The concrete autoencoder is designed for discrete feature selection, while the variational autoencoder is based on variational Bayesian methods. Deep (multilayer) autoencoders achieve better compression than linear autoencoders.

Application areas of the autoencoder include dimensionality reduction, feature extraction, image denoising, data compression, image compression and generation, sequence-to-sequence prediction, recommendation systems, pharmaceutical discovery, popularity prediction, information retrieval, and more. Recent applications of deep autoencoders include network anomaly detection, identification of abnormalities in electrocardiograms, fault diagnosis, cyber security, and multimodal data fusion.
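The encode-compress-reconstruct loop described above can be sketched in a few lines. The example below is a minimal linear autoencoder in NumPy, trained by gradient descent on the reconstruction error; the dimensions, learning rate, and toy dataset are illustrative assumptions, and a practical deep autoencoder would stack nonlinear layers instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4-D points that actually lie on a 2-D subspace,
# so a 2-unit bottleneck can reconstruct them well (assumed setup).
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 4))
X = latent @ mixing

# Encoder and decoder weights (linear for clarity).
d_in, d_code = 4, 2
W_enc = rng.normal(scale=0.1, size=(d_in, d_code))
W_dec = rng.normal(scale=0.1, size=(d_code, d_in))

def reconstruction_error(X, W_enc, W_dec):
    Z = X @ W_enc       # encoder: compress input to a 2-D code
    X_hat = Z @ W_dec   # decoder: reconstruct the input from the code
    return float(np.mean((X - X_hat) ** 2))

lr = 0.01
initial_err = reconstruction_error(X, W_enc, W_dec)
for _ in range(500):
    Z = X @ W_enc
    X_hat = Z @ W_enc @ W_dec if False else Z @ W_dec
    err = X_hat - X                          # gradient of 0.5 * MSE
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_err = reconstruction_error(X, W_enc, W_dec)
print(f"reconstruction MSE: {initial_err:.4f} -> {final_err:.4f}")
```

Because the data genuinely occupies a 2-D subspace, the bottleneck loses little information and the reconstruction error drops as training proceeds; with real data the latent dimension trades compression against reconstruction quality.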