Graph neural networks (GNNs) are a class of deep learning models designed to operate on data represented as graphs, and they apply naturally to node-level, edge-level, and graph-level prediction tasks. Owing to their convincing performance, GNNs have become a widely used method for graph analysis. GNNs build on earlier deep neural networks such as the recurrent neural network (RNN) and the convolutional neural network (CNN).
GNNs achieve strong results by combining a connectionist model with the topological information of the graph in an iterative process. A key feature of a GNN is that each node maintains state information representing the properties of its neighborhood. The main categories of GNNs are recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. Newer variants that attain breakthrough performance on deep learning tasks include the graph convolutional network (GCN), the graph attention network (GAT), and the graph recurrent network (GRN).
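The neighborhood-aggregation idea behind these models, where each node's representation is updated from its neighbors' states, can be sketched as a single GCN-style layer in NumPy. This is a minimal illustration, not any library's implementation; the function name `gcn_layer` and the toy 3-node graph are assumptions for the example:

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One GCN-style layer: relu(D^-1/2 (A + I) D^-1/2 X W).

    adj:      (n, n) binary adjacency matrix of the graph
    features: (n, f_in) node feature matrix X
    weights:  (f_in, f_out) learnable weight matrix W
    """
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                       # add self-loops
    deg = a_hat.sum(axis=1)                       # degree of each node
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))      # D^-1/2
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
    # Each node's new state mixes its own features with its neighbors'
    return np.maximum(0.0, a_norm @ features @ weights)  # ReLU activation

# Toy graph: three nodes connected in a path 0 - 1 - 2
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
features = np.eye(3)                  # one-hot node features
rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 2))     # project to 2-dim embeddings
out = gcn_layer(adj, features, weights)
print(out.shape)                      # one 2-dim embedding per node
```

Stacking such layers lets information propagate further across the graph with each iteration, which is how the state comes to summarize a growing neighborhood.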
Application scenarios of GNNs are divided according to the structure of the data. Structural scenarios emerge from scientific research, such as graph mining and the modeling of physical and chemical systems, and arise in industrial applications such as knowledge graphs, traffic networks, and recommendation systems. Non-structural scenarios, on the other hand, generally include computer vision and natural language processing. Future directions for GNNs include deeper graph neural networks, heterogeneous-graph-based GNNs, and GNNs for complex graph structures.