
ORIGINAL RESEARCH article

Front. Mar. Sci.

Sec. Marine Ecosystem Ecology

Volume 12 - 2025 | doi: 10.3389/fmars.2025.1535917

This article is part of the Research Topic: Intelligent Multi-scale Big Data Mapping of Coastal Habitats

TCCFNet: a semantic segmentation method for mangrove remote sensing images based on two-channel cross-fusion networks

Provisionally accepted
Lixiang Fu 1, Yaoru Wang 1, Shulei Wu 1*, Jiashen Zhuang 1, Zhongqiang Wu 1, Jian Wu 2, Huandong Chen 1
  • 1 Hainan Normal University, Haikou, China
  • 2 China Mobile Group Hainan Company Limited, HaiKou, China

The final, formatted version of the article will be published soon.

    Mangrove ecosystems play vital roles in coastal environments, and deep learning-based image segmentation has become a crucial tool for mangrove research and monitoring. However, current methods struggle to distinguish similar categories and to effectively capture information from large target objects in mangrove remote sensing images. To address these limitations, we propose TCCFNet (Two-Channel Cross-Fusion Network), a semantic segmentation method built on a dual-backbone architecture that combines ResNet for detailed local feature extraction with Swin Transformer for comprehensive global context modeling. This complementary structure enables robust feature representation across different scales and contexts. To strengthen feature integration, we introduce the Cross Integration Module (CIM), which enhances the network's feature processing capabilities through a dual-path design: the first path employs a cross-attention mechanism to highlight significant features, while the second path applies a multi-scale feature fusion strategy to combine information across different levels of the network. Experimental results show that TCCFNet performs strongly on mangrove remote sensing image segmentation, achieving a mean Intersection over Union (MIoU) of 88.34%, a pixel accuracy (PA) of 97.35%, and an F1-score of 93.55%. Notably, the model handles challenging scenarios well, particularly in distinguishing similar categories and accurately segmenting large target areas. Compared with existing methods such as MSFANet (86.02%) and DC-Swin (86.26%), TCCFNet achieves markedly higher segmentation accuracy, especially in complex mangrove environments where accurate boundary detection is crucial.
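
    To make the described design concrete, the following is a minimal, illustrative PyTorch sketch of a dual-path cross-fusion block in the spirit of the CIM: one path lets CNN (local) features attend to Transformer (global) features via cross-attention, and a second path fuses the two branches with convolutions. All class, parameter, and variable names are hypothetical assumptions for illustration and are not taken from the paper's implementation.

```python
# Hypothetical sketch of a dual-path cross-fusion block; not the authors' code.
import torch
import torch.nn as nn


class CrossFusionBlock(nn.Module):
    """Fuses a CNN feature map with a Transformer feature map via cross-attention."""

    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        # Path 1: cross-attention, where CNN features query Transformer features.
        self.cross_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        # Path 2: convolutional fusion of the concatenated branches.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, cnn_feat: torch.Tensor, trans_feat: torch.Tensor) -> torch.Tensor:
        # cnn_feat, trans_feat: (B, C, H, W) feature maps of matching shape.
        b, c, h, w = cnn_feat.shape
        q = cnn_feat.flatten(2).transpose(1, 2)     # (B, H*W, C) queries from CNN branch
        kv = trans_feat.flatten(2).transpose(1, 2)  # (B, H*W, C) keys/values from Transformer branch
        attended, _ = self.cross_attn(q, kv, kv)
        attended = self.norm(attended + q)          # residual connection
        attended = attended.transpose(1, 2).reshape(b, c, h, w)
        # Second path: concatenate both branches and fuse with convolutions.
        return self.fuse(torch.cat([attended, trans_feat], dim=1))


if __name__ == "__main__":
    block = CrossFusionBlock(channels=256)
    local_feat = torch.randn(1, 256, 32, 32)   # e.g. a feature map from a ResNet stage
    global_feat = torch.randn(1, 256, 32, 32)  # e.g. a feature map from a Swin Transformer stage
    print(block(local_feat, global_feat).shape)  # torch.Size([1, 256, 32, 32])
```

    This sketch only conveys the general idea of cross-attention plus convolutional multi-scale fusion between two backbones; the published TCCFNet may differ in attention formulation, scale handling, and normalization choices.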

    Keywords: Mangrove remote sensing image segmentation, TCCFNet, CIM, Multi-scale feature fusion, mangrove

    Received: 28 Nov 2024; Accepted: 10 Mar 2025.

    Copyright: © 2025 Fu, Wang, Wu, Zhuang, Wu, Wu and Chen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Shulei Wu, Hainan Normal University, Haikou, China

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
