
Hybrid Explainable Image Caption Generation Using Image Processing and Natural Language Processing - 2024


Research Area:  Machine Learning

Abstract:

Image caption generation is among the most rapidly growing research areas that combine image processing methodologies with natural language processing (NLP) techniques. Combining image processing and NLP effectively can revolutionize content creation, media analysis, and accessibility. This study proposes a novel model that generates image captions automatically by combining visual and linguistic features: visual features are extracted with a Convolutional Neural Network (CNN), and a Long Short-Term Memory (LSTM) network models the linguistic features to generate the caption text. The Microsoft Common Objects in Context (MS COCO) dataset, with over 330,000 images and corresponding captions, is used to train the proposed model. A comprehensive evaluation of several models, including VGGNet+LSTM, ResNet+LSTM, GoogleNet+LSTM, VGGNet+RNN, AlexNet+RNN, and AlexNet+LSTM, was conducted across different batch sizes and learning rates. The assessment used metrics such as BLEU-2, METEOR, ROUGE-L, and CIDEr. The proposed method demonstrated competitive performance, suggesting its potential for further exploration and refinement. These findings underscore the importance of careful parameter tuning and model selection in image captioning tasks.
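The abstract evaluates captions with BLEU-2, the geometric mean of clipped unigram and bigram precisions scaled by a brevity penalty. The sketch below is a minimal, illustrative pure-Python implementation of that metric, not the scoring script the authors used; in practice one would rely on an established toolkit's BLEU implementation.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu2(candidate, references):
    """BLEU-2: geometric mean of clipped 1- and 2-gram precisions,
    multiplied by a brevity penalty for overly short candidates."""
    precisions = []
    for n in (1, 2):
        cand_counts = Counter(ngrams(candidate, n))
        # Clip each n-gram count by its maximum count in any reference.
        max_ref = Counter()
        for ref in references:
            for gram, count in Counter(ngrams(ref, n)).items():
                max_ref[gram] = max(max_ref[gram], count)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(clipped / total)
    if min(precisions) == 0:
        return 0.0  # no n-gram overlap at some order
    # Brevity penalty against the closest-length reference.
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)
```

For example, a candidate identical to its reference scores 1.0, while a correct but truncated caption is penalised by the brevity factor.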

Keywords:  

Author(s) Name:  Atul Mishra, Anubhav Agrawal & Shailendra Bhasker

Journal name:  International Journal of System Assurance Engineering and Management

Conference name:  

Publisher name:  Springer

DOI:  10.1007/s13198-024-02495-5

Volume Information:  Volume 15, Pages 4874-4884, (2024)