Research Area:  Machine Learning
Abstract:  Remote Sensing Image Captioning (RSIC) aims to automatically generate precise and informative descriptions of remote sensing images. Traditional “encoder-decoder” approaches are limited by high training costs and a heavy reliance on large-scale annotated datasets, which hinders their practical application. To address these challenges, we propose a lightweight solution based on an enhanced CLIP-GPT framework. Our approach uses CLIP for zero-shot multimodal feature extraction from remote sensing images, then designs and optimizes a mapping network, built on an improved Transformer with adaptive multi-head attention, to align these features with the text space of GPT-2 and generate high-quality descriptive text. Experimental results on the Sydney-captions, UCM-captions, and RSICD datasets show that the proposed mapping network outperforms existing methods in exploiting CLIP-extracted multimodal features, leading to more accurate and stylistically appropriate text from the GPT language model. Our method also matches or surpasses traditional “encoder-decoder” baselines on the BLEU, CIDEr, and METEOR metrics while requiring only one-fifth of the training time. Experiments on an additional Chinese-English bilingual RSIC dataset further underscore the advantages of the CLIP-GPT framework, whose extensive multimodal pre-training demonstrates strong potential for cross-lingual RSIC tasks.
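The sketch below illustrates the general shape of a CLIP-to-GPT-2 captioning pipeline of the kind the abstract describes: a frozen CLIP image encoder, a trainable mapping network that turns the image embedding into a prefix in GPT-2's embedding space, and greedy decoding with GPT-2. It is a minimal illustration, not the authors' implementation; the paper's adaptive multi-head attention is replaced here by a standard TransformerEncoder stand-in, and the model checkpoints, prefix length, and dimensions (512 for CLIP ViT-B/32, 768 for GPT-2) are assumptions for the example.

```python
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPProcessor, GPT2LMHeadModel, GPT2Tokenizer


class PrefixMappingNetwork(nn.Module):
    """Maps a CLIP image embedding to a sequence of GPT-2 prefix embeddings.

    A plain TransformerEncoder is used as a stand-in for the paper's
    improved Transformer with adaptive multi-head attention.
    """

    def __init__(self, clip_dim=512, gpt_dim=768, prefix_len=10, layers=4, heads=8):
        super().__init__()
        self.prefix_len = prefix_len
        # Project the single CLIP vector into prefix_len pseudo-token embeddings.
        self.proj = nn.Linear(clip_dim, prefix_len * gpt_dim)
        enc_layer = nn.TransformerEncoderLayer(d_model=gpt_dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, clip_features):                       # (B, clip_dim)
        b = clip_features.size(0)
        prefix = self.proj(clip_features).view(b, self.prefix_len, -1)
        return self.encoder(prefix)                          # (B, prefix_len, gpt_dim)


@torch.no_grad()
def generate_caption(image, mapper, max_new_tokens=30):
    """Greedy caption decoding from a CLIP image embedding via the mapping network."""
    clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").eval()
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

    pixels = processor(images=image, return_tensors="pt").pixel_values
    feats = clip.get_image_features(pixel_values=pixels)     # (1, 512)
    prefix = mapper(feats)                                    # (1, prefix_len, 768)

    generated, token_ids = prefix, []
    for _ in range(max_new_tokens):
        logits = gpt2(inputs_embeds=generated).logits[:, -1, :]
        next_id = logits.argmax(dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
        token_ids.append(next_id.item())
        # Feed the embedding of the chosen token back in and continue decoding.
        next_emb = gpt2.transformer.wte(next_id).unsqueeze(1)
        generated = torch.cat([generated, next_emb], dim=1)
    return tokenizer.decode(token_ids, skip_special_tokens=True)
```

In this setup only the mapping network would be trained (with CLIP frozen and GPT-2 frozen or lightly tuned), which is consistent with the abstract's claim of a lightweight pipeline requiring a fraction of the training time of full encoder-decoder models.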
Keywords:  
Author(s) Name:  Rui Song; Beigeng Zhao; Lizhi Yu
Journal name:  IEEE Access
Conference name:  
Publisher name:  IEEE
DOI:  10.1109/ACCESS.2024.3522585
Volume Information:  Volume: 13, Pages: 904–915 (2024)
Paper Link:   https://ieeexplore.ieee.org/document/10816156