In recent years, the Graph Neural Network (GNN) has become a trending research topic due to its great potential in the growing number of applications that deal with graph data, which contains rich relational information among elements. A GNN is a graph analysis tool that employs neural network models to capture the dependencies within graphs through message passing between their nodes. Because graph-structured data is ubiquitous in real-world scenarios, GNNs, which handle graph-related tasks in an end-to-end manner, have attracted significant new interest, and a plethora of GNN applications has emerged over the last years. Across a broad range of real-world applications, GNN variants have achieved superior performance in many deep learning tasks and myriad graph analytic tasks. These variants include the Graph Convolutional Network (GCN), the Graph Recurrent Network (GRN), and the Graph Attention Network (GAT).
Graph Convolutional Networks (GCN):
The graph convolutional network has been one of the most active and fastest-developing research areas over the past few years. It leverages the graph structure and feature information from node neighborhoods through feature propagation and aggregation, learning better graph representations in a convolutional fashion. Spectral-based approaches and spatial-based approaches are the two main streams of GCN research.
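The propagation-and-aggregation idea can be illustrated with a minimal sketch of one spatial GCN layer in the style of Kipf and Welling: each node averages features over its neighbors (plus itself) via a symmetrically normalized adjacency matrix, then applies a learned projection. The toy graph and weights below are illustrative, not from any specific paper.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)  # ReLU activation

# Toy graph: 3 nodes in a path (0-1-2), 2-dim features, 2-dim output.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.eye(2)  # identity weights, purely for illustration

H_next = gcn_layer(A, H, W)
print(H_next.shape)  # (3, 2)
```

Stacking several such layers lets information propagate across multi-hop neighborhoods.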
Graph Recurrent Networks (GRN):
GRNs aim to learn node representations within a recurrent neural framework by extending prior recurrent models to handle general graphs, e.g., acyclic, cyclic, directed, and undirected graphs.
Graph Autoencoders (GAE):
Graph autoencoders (GAEs) are unsupervised learning frameworks that facilitate learning network embeddings and graph generative distributions. A typical GAE employs a GCN as an encoder and a link-prediction layer as a decoder: it encodes nodes or graphs into a latent vector space and reconstructs the graph data from the encoded information.
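The decoding side of this pipeline can be sketched as follows: given latent node embeddings Z (which a GCN encoder would produce), a common link-prediction decoder reconstructs edge probabilities as sigmoid(Z Zᵀ), so nodes with similar embeddings are predicted to be connected. The embeddings below are hand-picked toy values, not the output of a trained encoder.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode(Z):
    """Inner-product decoder: edge probability for each node pair."""
    return sigmoid(Z @ Z.T)

# Toy latent embeddings for 3 nodes: nodes 0 and 1 are close, node 2 is far.
Z = np.array([[ 1.0,  0.0],
              [ 0.9,  0.1],
              [-1.0,  0.0]])

A_rec = decode(Z)
# The reconstructed probability of edge (0, 1) exceeds that of edge (0, 2).
print(A_rec[0, 1] > A_rec[0, 2])  # True
```

Training minimizes the reconstruction error between A_rec and the observed adjacency matrix, which is what lets a GAE solve network embedding end to end.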
Advantages:
• GNNs tackle various graph analytical tasks, such as node classification, link prediction, and graph classification.
• They address the network embedding problem through a graph autoencoder framework.
• GNNs explicitly extract high-level representations.
Potential Applications:
GNN methods have gained considerable success in many tasks such as node classification, graph classification, network embedding, graph generation, spatial-temporal graph forecasting, and other general graph-related tasks. The following list describes some popular applications in several research domains.
Computer vision:
Applications of GNNs in computer vision include scene graph generation, point clouds classification, and action recognition.
Natural language processing:
GNNs support a range of NLP tasks, from sentence classification to text classification; text classification is one of the most common applications of GNNs in natural language processing. Some natural-language data contains an internal graph structure, such as a syntactic dependency tree. The Syntactic GCN aggregates hidden word representations based on the syntactic dependency tree, and neural machine translation can also be performed with a Syntactic GCN.
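The core aggregation step can be sketched as follows: each word's hidden vector is updated from the vectors of the words it is linked to in the dependency tree. This is a deliberately simplified mean aggregation; the actual Syntactic GCN additionally uses per-edge-label weights and edge gates, which are omitted here.

```python
import numpy as np

def syntactic_aggregate(H, edges):
    """Average each word's vector with those of its dependency neighbors."""
    n = H.shape[0]
    A = np.eye(n)                 # self-loop so a word keeps its own features
    for head, dep in edges:       # dependency arcs, treated as undirected
        A[head, dep] = 1.0
        A[dep, head] = 1.0
    A = A / A.sum(axis=1, keepdims=True)  # row-normalize (mean aggregation)
    return A @ H

# Toy sentence "she reads books": 'reads' heads both 'she' and 'books'.
H = np.array([[1.0, 0.0],   # she
              [0.0, 1.0],   # reads
              [1.0, 1.0]])  # books
edges = [(1, 0), (1, 2)]    # (head, dependent) index pairs

H_new = syntactic_aggregate(H, edges)
print(H_new.shape)  # (3, 2)
```

After aggregation, each word representation mixes in syntactic context, which downstream classification or translation layers can exploit.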
Recommender systems:
Graph-based recommender systems consider items and users as nodes and can produce high-quality recommendations by leveraging the relations between items and items, users and users, and users and items, as well as content information.
Traffic:
GNNs address the traffic prediction problem using spatial-temporal graph neural networks, and they also support industrial-level applications such as taxi-demand prediction.
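The spatial-temporal idea can be sketched in a hedged, minimal form: at each time step, sensor readings are smoothed over the road graph (the spatial step), and the recent smoothed snapshots are then combined over time (the temporal step). Real spatial-temporal GNNs use learned graph convolutions and recurrent or attention layers instead of the fixed averaging weights assumed below.

```python
import numpy as np

def forecast(A, X):
    """X: (time_steps, num_sensors) readings; returns a next-step estimate."""
    A_hat = A + np.eye(A.shape[0])
    A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)  # mean over neighbors
    spatial = X @ A_norm.T               # spatial smoothing per time step
    weights = np.array([0.2, 0.3, 0.5])  # heavier weight on recent steps
    return weights @ spatial             # temporal weighted average

# 3 road sensors on a line graph, 3 time steps of speed readings.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[50.0, 40.0, 60.0],
              [48.0, 42.0, 58.0],
              [46.0, 44.0, 56.0]])

prediction = forecast(A, X)
print(prediction.shape)  # (3,)
```

The same pattern, applied to demand counts at pickup zones rather than speed sensors, underlies taxi-demand prediction.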
Other potential applications span a variety of problems, such as program verification, program reasoning, adversarial attack prevention, electronic health record modeling, brain networks, and many others.
Research Challenges in Graph Neural Network:
The complexity of graph data imposes significant challenges on learning, and it is instructive to emphasize some of the challenges faced by GNNs.
• During sampling or clustering, a model may lose important graph structural information.
• Because adversarial attacks on graphs are still under-analyzed, GNNs remain vulnerable to several kinds of attack.
• GNNs are also black boxes that lack explanations; interpretability remains an important challenge for neural models.
Future Directions in Graph Neural Network:
Even though previous Graph Neural Network models have performed well across several domains and tasks, several gaps have been identified.
• Owing to the dynamic nature of graphs, new graph convolutions are needed that adapt to graph dynamicity.
• The trade-off between algorithm scalability and graph integrity needs more attention during sampling or clustering.
• Because most GNNs assume homogeneous graphs, it is paramount to develop new methods that handle heterogeneous graphs.