Research Area:  Machine Learning
Automatically describing the content of an image is an interesting and challenging task in artificial intelligence. In this paper, an enhanced image captioning model, combining caption generation with object detection and color analysis, is proposed to automatically generate textual descriptions of images. In the encoder–decoder model for image captioning, VGG16 serves as the encoder and an LSTM (long short-term memory) network with attention serves as the decoder. In addition, Mask R-CNN with OpenCV is used for object detection and color analysis. The generated caption and the recognized colors are then integrated to provide more descriptive details of the image. Moreover, the resulting textual sentence is converted into speech. The validation results show that the proposed method provides more accurate descriptions of images.
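As a rough illustration of the encoder–decoder pipeline the abstract describes, the sketch below wires a frozen VGG16 feature extractor to an LSTM decoder with attention in Keras/TensorFlow. This is a minimal sketch under assumed hyperparameters (vocabulary size, caption length, and layer widths are placeholders, not values from the paper), and Keras's built-in dot-product Attention layer stands in for whatever attention variant the authors use.

```python
# Minimal captioning-model sketch, assuming Keras/TensorFlow.
# VOCAB_SIZE, MAX_LEN, EMBED_DIM, and UNITS are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

VOCAB_SIZE, MAX_LEN, EMBED_DIM, UNITS = 10000, 30, 256, 512

# Encoder: frozen VGG16 convolutional features represent the image.
base = VGG16(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False
image_in = layers.Input(shape=(224, 224, 3))
feat = layers.Reshape((49, 512))(base(image_in))   # 7x7x512 -> 49 image regions

# Decoder: embedded caption tokens feed an LSTM; dot-product attention
# over the 49 spatial regions conditions each step on the image.
caption_in = layers.Input(shape=(MAX_LEN,), dtype="int32")  # shifted-right caption (teacher forcing)
emb = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(caption_in)
lstm_out = layers.LSTM(UNITS, return_sequences=True)(emb)
context = layers.Attention()([lstm_out, feat])     # query = LSTM states, value = image regions
logits = layers.Dense(VOCAB_SIZE)(layers.Concatenate()([lstm_out, context]))

model = Model([image_in, caption_in], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```

The abstract also pairs Mask R-CNN detections with OpenCV-based color analysis. A plausible, purely hypothetical helper for the color step is sketched below: given an object's binary mask, it names the dominant hue. The hue thresholds are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def dominant_color(image_bgr, mask):
    """Name the dominant hue inside a detection mask (hypothetical helper)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hues = hsv[:, :, 0][mask > 0]          # OpenCV hue range is 0-179
    if hues.size == 0:
        return "unknown"
    hue = float(np.median(hues))           # robust central hue of the object
    for upper, name in [(10, "red"), (25, "orange"), (34, "yellow"),
                        (85, "green"), (130, "blue"), (160, "purple")]:
        if hue <= upper:
            return name
    return "red"                           # hue wraps back to red near 180
```

In the integration step described in the abstract, such color names would be spliced into the generated caption before the sentence is converted to speech.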
Keywords:  
Author(s) Name:  Yeong-Hwa Chang, Yen-Jen Chen, Ren-Hung Huang and Yi-Ting Yu
Journal name:  Applied Sciences
Conference name:  
Publisher name:  MDPI
DOI:  10.3390/app12010209
Volume Information:  Volume 12 Issue 1
Paper Link:  https://www.mdpi.com/2076-3417/12/1/209