
ORIGINAL RESEARCH article

Front. Remote Sens.
Sec. Land Cover and Land Use Change
Volume 6 - 2025 | doi: 10.3389/frsen.2025.1538808
This article is part of the Research Topic One Forest Vision Initiative (OFVi) for Monitoring Tropical Forests: The Remote Sensing Pilar

Detection of degraded forests in Guinea, West Africa, using convolutional neural networks and Sentinel-2 time series

Provisionally accepted
An Vo Quang 1,2*, Nicolas Delbart 2,3*, Gabriel Jaffrain 1, Camille Pinet 1
  • 1 IGN FI, Paris, France
  • 2 UMR8236 Laboratoire Interdisciplinaire des Energies de Demain (LIED), Paris, Île-de-France, France
  • 3 Université Paris Cité, Paris, France

The final, formatted version of the article will be published soon.

    Forest degradation is the alteration of forest biomass, structure or services without conversion to another land cover. Unlike deforestation, forest degradation is subtle and less visible, but it often eventually leads to deforestation. In this study we conducted a comprehensive analysis of degraded forest detection in the Guinean forest region using remote sensing techniques. Our aim was to explore the use of Sentinel-2 satellite imagery for detecting and monitoring forest degradation in Guinea, West Africa, where selective logging is the primary degradation process observed. Consequently, degraded forests exhibit fewer large trees than intact forests, resulting in discontinuities in the canopy structure. This study consists of a comparative analysis between the contextual Random Forest (RF) algorithm introduced by Vo Quang et al. (2022), three convolutional neural network (CNN) models (U-Net, SegNet, ResNet-UNet), and the photo-interpretation (PI) method, with all model results undergoing independent validation by external Guinean photo-interpreters. The CNN and RF models were trained on subsets of the maps obtained by the PI method. The results show that the CNN U-Net model is the most adequate method, with a 94% agreement with the photo-interpreted map of the Ziama massif for the year 2021, which was not used for training. All models were also tested over the Mount Nimba area, which was not included in the training dataset. Again, the U-Net model surpassed all other models, with an overall agreement above 91% and an accuracy of 91.5% as established in a second validation exercise carried out by independent photo-interpreters following the widely used Verified Carbon Standard validation methodology. These results underscore the robustness and efficiency of the U-Net model in accurately identifying degraded forests across diverse areas with a similar typology of degraded forest. Altogether, the results show that the method is transferable and applicable across different years and among the different Guinean forest regions, such as the Ziama, Diécké, and Nimba massifs. Based on the superior performance and robustness demonstrated by the U-Net model, we selected it to replace the previous photo-interpretation-based method for forest class updates in the land cover map produced for the Guinean Ministry of Agriculture.
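
    To make the segmentation approach concrete, the sketch below is a minimal, illustrative U-Net in Python/PyTorch for per-pixel classification of Sentinel-2 patches into degraded-forest versus other classes. The band count, network depth, class set and patch size are assumptions chosen for illustration only; they do not reproduce the authors' exact architecture, input time-series stack, or training protocol.

    # Minimal, illustrative U-Net sketch in PyTorch for semantic segmentation of
    # Sentinel-2 imagery into degraded-forest vs. other classes. Band count, depth,
    # and class set are assumptions for illustration, not the authors' exact setup.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        """Two 3x3 convolutions with batch norm and ReLU, as in a standard U-Net."""
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    class UNet(nn.Module):
        def __init__(self, in_channels=10, n_classes=2):
            # in_channels=10 assumes a ten-band Sentinel-2 input stack; adjust to
            # match the actual bands/time steps used as model input.
            super().__init__()
            self.enc1 = conv_block(in_channels, 64)
            self.enc2 = conv_block(64, 128)
            self.enc3 = conv_block(128, 256)
            self.bottleneck = conv_block(256, 512)
            self.pool = nn.MaxPool2d(2)
            self.up3 = nn.ConvTranspose2d(512, 256, kernel_size=2, stride=2)
            self.dec3 = conv_block(512, 256)
            self.up2 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
            self.dec2 = conv_block(256, 128)
            self.up1 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
            self.dec1 = conv_block(128, 64)
            self.head = nn.Conv2d(64, n_classes, kernel_size=1)

        def forward(self, x):
            e1 = self.enc1(x)                     # skip connection 1
            e2 = self.enc2(self.pool(e1))         # skip connection 2
            e3 = self.enc3(self.pool(e2))         # skip connection 3
            b = self.bottleneck(self.pool(e3))
            d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))
            d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
            return self.head(d1)                  # per-pixel class logits

    # Example: a batch of four hypothetical 256x256 Sentinel-2 patches, 10 bands each.
    if __name__ == "__main__":
        model = UNet(in_channels=10, n_classes=2)
        patch = torch.randn(4, 10, 256, 256)
        logits = model(patch)                     # shape: (4, 2, 256, 256)
        print(logits.shape)

    The encoder-decoder structure with skip connections is what allows a U-Net to combine coarse contextual information with fine spatial detail, which is relevant here because selective-logging degradation appears as local canopy discontinuities within otherwise intact forest.
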

    Keywords: remote sensing, tropical forest, forest degradation, land use/land cover mapping, random forest, convolutional neural networks

    Received: 03 Dec 2024; Accepted: 04 Feb 2025.

    Copyright: © 2025 Quang, Delbart, Jaffrain and Pinet. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence:
    An Vo Quang, IGN FI, Paris, France
    Nicolas Delbart, Université Paris Cité, Paris, France

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.