Research Area:  Machine Learning
Recommender systems have shown great potential to alleviate information overload and enhance user experience in various online applications. To tackle data sparsity and cold-start problems in recommender systems, researchers have proposed knowledge graph (KG)-based recommendation, which leverages valuable external knowledge as auxiliary information. However, most of these works ignore the variety of data types (e.g., texts and images) in multi-modal knowledge graphs (MMKGs). In this paper, we propose the Multi-modal Knowledge Graph Attention Network (MKGAT) to better enhance recommender systems by leveraging multi-modal knowledge. Specifically, we propose a multi-modal graph attention technique to conduct information propagation over MMKGs, and then use the resulting aggregated embedding representations for recommendation. To the best of our knowledge, this is the first work that incorporates multi-modal knowledge graphs into recommender systems. We conduct extensive experiments on two real datasets from different domains, the results of which demonstrate that our model MKGAT can successfully employ MMKGs to improve the quality of recommendation.
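The abstract describes attention-weighted information propagation over a multi-modal KG: an entity's neighbors (text, image, etc.) are scored against the entity and aggregated into one embedding. The snippet below is only an illustrative sketch of that general idea, not MKGAT's actual formulation; it assumes modality encoders have already projected text and image features into a shared embedding space, and the dot-product scoring and function names are simplifications of my own.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_aggregate(entity, neighbors):
    """Sketch of attention-based neighbor aggregation (not MKGAT's exact
    scoring): weight each neighbor embedding by its dot-product affinity
    to the entity, then take the weighted sum as the propagated message."""
    scores = np.array([entity @ n for n in neighbors])
    weights = softmax(scores)                      # attention coefficients
    aggregated = sum(w * n for w, n in zip(weights, neighbors))
    return aggregated, weights

# Toy item entity with one text neighbor and one image neighbor,
# both assumed to live in the same 4-dimensional shared space.
rng = np.random.default_rng(0)
entity = rng.normal(size=4)
text_emb = rng.normal(size=4)    # hypothetical text-encoder output
image_emb = rng.normal(size=4)   # hypothetical image-encoder output

aggregated, weights = attention_aggregate(entity, [text_emb, image_emb])
```

The aggregated vector would then feed a downstream scoring function (e.g., a user-item inner product) for recommendation; the attention weights sum to one, so each modality contributes in proportion to its affinity with the entity.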
Keywords:  
Multi-modal Knowledge Graphs
Recommender Systems
Multi-modal Knowledge Graph Attention Network
Machine Learning
Author(s) Name:  Rui Sun, Xuezhi Cao, Yan Zhao, Junchen Wan, Kun Zhou, Fuzheng Zhang, Zhongyuan Wang, Kai Zheng
Journal name:  
Conference name:  CIKM '20: Proceedings of the 29th ACM International Conference on Information & Knowledge Management
Publisher name:  ACM
DOI:  10.1145/3340531.3411947
Volume Information:  
Paper Link:   https://dl.acm.org/doi/abs/10.1145/3340531.3411947