ORIGINAL RESEARCH article
Front. Neurorobot.
Volume 18 - 2024
doi: 10.3389/fnbot.2024.1489658
Edge-Guided Feature Fusion Network for RGB-T Salient Object Detection
Provisionally accepted
Shanghai Maritime University, Pudong, Shanghai, China
RGB-T salient object detection (SOD) aims to accurately segment salient regions in both visible-light and thermal infrared images. However, most existing SOD methods neglect the critical complementarity between images of different modalities, which is beneficial to further improving detection accuracy. In this paper, we propose the edge-guided feature fusion network (EGFF-Net), which consists of cross-modal feature extraction, edge-guided feature fusion and saliency map prediction. First, a cross-modal feature extraction module is designed to capture and aggregate the united and intersecting information in each local region of the RGB and thermal images and to extract feature-wise information. Then, considering that edge information is very helpful for refining the boundary details of salient areas, an edge-guided feature fusion module is introduced to enhance the edge features of salient regions. Moreover, a layer-by-layer decoding structure is designed to integrate multi-level features and generate the saliency map prediction. Compared with state-of-the-art algorithms on three benchmark datasets, our proposed method achieves the best performance. Our source code is available at https://github.com/0shuihan/EGFF-Net.
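The abstract describes the three stages only at a high level. The minimal PyTorch sketch below is not the authors' implementation (the actual code is in the linked repository); it merely illustrates one plausible way such a pipeline could be wired together. The module names, the toy shared encoder, and the modeling of "united" and "intersecting" information as element-wise addition and multiplication are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalFusion(nn.Module):
    """Illustrative cross-modal feature extraction: combines RGB and thermal
    features via element-wise union (addition) and intersection (multiplication)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, f_rgb, f_t):
        union = f_rgb + f_t                      # aggregated ("united") information
        intersection = f_rgb * f_t               # shared ("intersecting") information
        return F.relu(self.conv(torch.cat([union, intersection], dim=1)))

class EdgeGuidedFusion(nn.Module):
    """Illustrative edge guidance: an edge map predicted from shallow features
    re-weights (enhances) the deeper fused features."""
    def __init__(self, channels):
        super().__init__()
        self.edge_head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, fused, shallow):
        edge = torch.sigmoid(self.edge_head(shallow))
        edge = F.interpolate(edge, size=fused.shape[2:], mode='bilinear', align_corners=False)
        return fused * (1 + edge), edge          # edge-enhanced features and edge map

class EGFFNetSketch(nn.Module):
    """Toy two-level version of the pipeline: per-level cross-modal fusion,
    edge guidance from the shallow level, then layer-by-layer decoding."""
    def __init__(self, channels=(32, 64)):
        super().__init__()
        c1, c2 = channels
        self.enc1 = nn.Conv2d(3, c1, 3, stride=2, padding=1)   # toy shared encoder
        self.enc2 = nn.Conv2d(c1, c2, 3, stride=2, padding=1)
        self.fuse1 = CrossModalFusion(c1)
        self.fuse2 = CrossModalFusion(c2)
        self.edge = EdgeGuidedFusion(c1)
        self.dec2 = nn.Conv2d(c2, c1, 3, padding=1)
        self.dec1 = nn.Conv2d(c1, 1, 3, padding=1)

    def forward(self, rgb, thermal):
        r1, t1 = F.relu(self.enc1(rgb)), F.relu(self.enc1(thermal))
        r2, t2 = F.relu(self.enc2(r1)), F.relu(self.enc2(t1))
        f1, f2 = self.fuse1(r1, t1), self.fuse2(r2, t2)
        f2, edge_map = self.edge(f2, f1)
        d2 = F.interpolate(F.relu(self.dec2(f2)), size=f1.shape[2:],
                           mode='bilinear', align_corners=False)
        sal = self.dec1(d2 + f1)                 # layer-by-layer decoding with skip fusion
        sal = F.interpolate(sal, size=rgb.shape[2:], mode='bilinear', align_corners=False)
        return torch.sigmoid(sal), edge_map

rgb = torch.randn(1, 3, 224, 224)
thermal = torch.randn(1, 3, 224, 224)      # thermal input replicated to 3 channels, an assumption
saliency, edge = EGFFNetSketch()(rgb, thermal)
print(saliency.shape)                       # torch.Size([1, 1, 224, 224])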
Keywords: saliency detection, pixel features, dynamic compensation, edge information, feature fusion
Received: 01 Sep 2024; Accepted: 29 Nov 2024.
Copyright: © 2024 Chen, Sun, Yan and Zhao. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Yuanlin Chen, Shanghai Maritime University, Pudong, Shanghai, China
Cheng Yan, Shanghai Maritime University, Pudong, Shanghai, China
Ming Zhao, Shanghai Maritime University, Pudong, Shanghai, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.