
Multi-modal knowledge graphs representation learning via multi-headed self-attention - 2022

Research Area:  Machine Learning

Abstract:

Traditional knowledge graph (KG) representation learning focuses on the link information between entities, and its effectiveness is influenced by the complexity of the KG. In a multi-modal knowledge graph (MKG), the introduction of considerable other modal information (such as images and texts) further increases the complexity of the KG, which degrades the effectiveness of representation learning. To solve this problem, this study proposes the multi-modal knowledge graphs representation learning via multi-head self-attention (MKGRL-MS) model, which improves the effectiveness of link prediction by adding rich multi-modal information to each entity. We first generate a single-modal feature vector corresponding to each entity. Then, we use multi-headed self-attention to obtain the attention degree of the different modal features of an entity during semantic synthesis, and in this manner learn the entity's multi-modal feature representation. The new knowledge representation is the sum of the traditional knowledge representation and the entity's multi-modal feature representation. Finally, we train our model with two existing models and two different datasets and verify its versatility and effectiveness on the link prediction task.
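
Illustrative sketch (not the authors' code): the PyTorch-style example below shows one way the fusion described in the abstract could be realized, where per-modality entity features are combined with multi-head self-attention and the fused vector is added to a structural (link-based) entity embedding. The class name, feature dimensions, and mean pooling are assumptions for illustration and do not reproduce the MKGRL-MS implementation.

# Minimal sketch, assuming 2048-d image features and 768-d text features per entity.
import torch
import torch.nn as nn

class MultiModalEntityEncoder(nn.Module):
    def __init__(self, num_entities: int, dim: int = 128, num_heads: int = 4):
        super().__init__()
        # Traditional (structural) KG embedding learned from link information
        self.structural = nn.Embedding(num_entities, dim)
        # Project raw single-modal features into a shared embedding space (sizes assumed)
        self.img_proj = nn.Linear(2048, dim)
        self.txt_proj = nn.Linear(768, dim)
        # Multi-head self-attention over the set of modality vectors
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, entity_ids, img_feats, txt_feats):
        # Stack the single-modal feature vectors: (batch, num_modalities, dim)
        modal = torch.stack([self.img_proj(img_feats), self.txt_proj(txt_feats)], dim=1)
        # Self-attention weighs how much each modality contributes during fusion
        fused, _ = self.attn(modal, modal, modal)
        multimodal = fused.mean(dim=1)  # pooled multi-modal feature representation
        # New representation = structural embedding + multi-modal representation
        return self.structural(entity_ids) + multimodal

# Usage sketch with random stand-in features for three entities
enc = MultiModalEntityEncoder(num_entities=1000)
emb = enc(torch.tensor([1, 2, 3]), torch.randn(3, 2048), torch.randn(3, 768))
print(emb.shape)  # torch.Size([3, 128])

The fused embedding would then be scored by an existing link-prediction model in place of the purely structural entity embedding.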

Keywords:  
Multi-modal knowledge graphs
Representation learning
Multi-modal information fusion
Machine Learning
Deep Learning

Author(s) Name:  Enqiang Wang, Qing Yu, Yelin Chen, Wushouer Slamu, Xukang Luo

Journal name:  Information Fusion

Conference name:  

Publisher name:  Elsevier

DOI:  10.1016/j.inffus.2022.07.008

Volume Information:  Volume 88, December 2022, Pages 78-85