Representation learning is a central data analysis process in machine learning and deep learning, and deep neural networks are particularly effective at extracting feature representations. In most deep learning applications, representations are learned in Euclidean space, the familiar setting of classical geometry. Euclidean spaces, however, are unable to produce meaningful geometric representations of tree-structured data and cannot embed such data with low distortion. Many domains involve tree-like structures that are organized hierarchically, and non-Euclidean spaces are better suited to such complex data. Hyperbolic space is a non-Euclidean space that supports learning hierarchical representations from textual and graph-structured data. Advantages of hyperbolic spaces include better generalization, low-distortion embeddings, a reduction in the number of model parameters and embedding dimensions, and improved model understanding and interpretation.
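To make the geometric intuition concrete, the sketch below computes the geodesic distance in the Poincaré ball, a common model of hyperbolic space used for low-distortion tree embeddings. This is a minimal illustration assuming unit negative curvature, not tied to any particular library; the function name is our own.

```python
import math

def poincare_distance(u, v):
    """Geodesic distance between two points inside the unit ball
    (Poincaré ball model of hyperbolic space, curvature -1)."""
    sq_norm = lambda x: sum(t * t for t in x)
    sq_diff = sq_norm([a - b for a, b in zip(u, v)])
    denom = (1.0 - sq_norm(u)) * (1.0 - sq_norm(v))
    return math.acosh(1.0 + 2.0 * sq_diff / denom)

# Distances blow up near the ball's boundary, which is what lets
# trees (whose node count grows exponentially with depth) embed
# with low distortion.
d_mid = poincare_distance([0.0, 0.0], [0.5, 0.0])  # = ln(3) ~ 1.0986
d_far = poincare_distance([0.0, 0.0], [0.9, 0.0])  # much larger
```

Near the origin the metric is approximately Euclidean, while volume grows exponentially toward the boundary, mirroring the branching of hierarchical data.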
Deep learning in hyperbolic spaces is an emerging field with great potential, owing to the high modeling capacity of deep neural networks. Core operations of convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been generalized to hyperbolic space. Hyperbolic architectures include hyperbolic embeddings, hyperbolic clustering, hyperbolic attention networks, hyperbolic graph neural networks, hyperbolic normalizing flows, hyperbolic variational auto-encoders, and hyperbolic neural networks with mixed geometries. Applications of deep neural networks in hyperbolic space include learning graph embeddings, natural language processing, modeling tree-like properties, computer vision, and recommender systems.
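Generalizing neural network operations to hyperbolic space typically relies on gyrovector arithmetic. As a hedged illustration (curvature fixed at -1, function name our own), the sketch below implements Möbius addition on the Poincaré ball, the analogue of vector addition that underlies hyperbolic layers such as those in hyperbolic neural networks:

```python
def mobius_add(u, v):
    """Mobius addition on the Poincare ball (curvature -1):
    the hyperbolic analogue of u + v, staying inside the unit ball."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u)  # squared norm of u
    nv = sum(b * b for b in v)  # squared norm of v
    denom = 1.0 + 2.0 * dot + nu * nv
    return [((1.0 + 2.0 * dot + nv) * a + (1.0 - nu) * b) / denom
            for a, b in zip(u, v)]

# The origin acts as the identity, and the result of adding two
# in-ball points remains inside the unit ball.
p = mobius_add([0.5, 0.0], [0.0, 0.5])
```

A hyperbolic "linear layer" can then be built by mapping inputs to the tangent space at the origin, applying an ordinary matrix multiply, and mapping back, with Möbius addition supplying the bias term.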