Richpedia: A Large-Scale, Comprehensive Multi-Modal Knowledge Graph - 2020

Research paper on a large-scale, comprehensive multi-modal knowledge graph

Research Area:  Machine Learning

Abstract:

Large-scale knowledge graphs such as Wikidata and DBpedia have become a powerful asset for semantic search and question answering. However, most of the knowledge graph construction works focus on organizing and discovering textual knowledge in a structured representation, while paying little attention to the proliferation of visual resources on the Web. To consolidate this recent trend, in this paper, we present Richpedia, aiming to provide a comprehensive multi-modal knowledge graph by distributing sufficient and diverse images to textual entities in Wikidata. We also set Resource Description Framework links (visual semantic relations) between image entities based on the hyperlinks and descriptions in Wikipedia. The Richpedia resource is accessible on the Web via a faceted query endpoint, which provides a pathway for knowledge graph and computer vision tasks, such as link prediction and visual relation detection.
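
The abstract describes attaching image entities to Wikidata entities and setting RDF links (visual semantic relations) between images, queryable through a faceted endpoint. As a minimal sketch of that idea, the Python snippet below builds a tiny multi-modal RDF graph with rdflib and queries it with SPARQL. The "rp:" namespace, the image IRIs, and property names such as imageOf and sameScene are hypothetical placeholders, not the official Richpedia schema.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

WD = Namespace("http://www.wikidata.org/entity/")         # real Wikidata entity namespace
RP = Namespace("http://example.org/richpedia/")           # hypothetical Richpedia-style namespace

g = Graph()
g.bind("wd", WD)
g.bind("rp", RP)

# A textual (Wikidata) entity and an image entity attached to it.
eiffel_tower = WD["Q243"]                                  # Wikidata item for the Eiffel Tower
image = URIRef("http://example.org/richpedia/image/0001")  # hypothetical image-entity IRI

g.add((image, RDF.type, RP.Image))
g.add((image, RP.imageOf, eiffel_tower))                   # image entity -> textual entity link
g.add((image, RDFS.label, Literal("Eiffel Tower at night", lang="en")))

# A visual semantic relation between two image entities, as described in the abstract.
other_image = URIRef("http://example.org/richpedia/image/0002")
g.add((other_image, RP.sameScene, image))                  # hypothetical relation name

# Query the small graph the way a faceted SPARQL endpoint might be used:
# retrieve all images linked to the Eiffel Tower entity.
results = g.query(
    """
    SELECT ?img WHERE {
        ?img rp:imageOf wd:Q243 .
    }
    """,
    initNs={"rp": RP, "wd": WD},
)
for row in results:
    print(row.img)

Running the sketch prints the IRI of the single image entity linked to wd:Q243; against the real resource the same pattern would be issued to its SPARQL endpoint rather than an in-memory graph.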

Keywords:  Knowledge graph, Multi-modal, Wikidata, Ontology, Machine Learning, Deep Learning

Author(s) Name:  Meng Wang, Haofen Wang, Guilin Qi, Qiushuo Zheng

Journal name:  Big Data Research

Conference name:  

Publisher name:  Elsevier

DOI:  10.1016/j.bdr.2020.100159

Volume Information:  Volume 22, December 2020, 100159