
ORIGINAL RESEARCH article

Front. For. Glob. Change, 14 October 2022
Sec. Forest Management
This article is part of the Research Topic: Restoration Ecology: Lessons Learned and Perspectives From the Field

Monitoring early-successional trees for tropical forest restoration using low-cost UAV-based species classification

Jonathan Williams1,2, Toby D. Jackson1, Carola-Bibiane Schönlieb2, Tom Swinfield1,3, Bambang Irawan4, Eva Achmad4, Muhammad Zudhi4, Habibi Habibi5, Elva Gemita5 and David A. Coomes1*
  • 1Department of Plant Sciences, Conservation Research Institute, University of Cambridge, Cambridge, United Kingdom
  • 2Image Analysis Group, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, United Kingdom
  • 3Centre for Conservation Science, Royal Society for Protection of Birds, Cambridge, United Kingdom
  • 4Faculty of Agriculture, Jambi University, Jambi, Indonesia
  • 5PT Restorasi Ekosistem Indonesia, Bogor, Indonesia

Logged forests cover four million square kilometers of the tropics, capturing carbon more rapidly than temperate forests and harboring rich biodiversity. Restoring these forests is essential to help avoid the worst impacts of climate change. Yet monitoring tropical forest recovery is challenging. We track the abundance of early-successional species in a forest restoration concession in Indonesia. If the species are carefully chosen, they can be used as an indicator of restoration progress. We present SLIC-UAV, a new pipeline for processing Unoccupied Aerial Vehicle (UAV) imagery using simple linear iterative clustering (SLIC) to map early-successional species in tropical forests. The pipeline comprises: (a) a field-verified approach for manually labeling species; (b) automatic segmentation of imagery into “superpixels”; and (c) machine learning classification of species based on both spectral and textural features. Creating superpixels massively reduces the dataset's dimensionality and enables the use of textural features, which improve classification accuracy. In addition, this approach is flexible with regard to the spatial distribution of training data, which allowed us to be flexible in the field and collect high-quality training data with the help of local experts. Accuracy ranged from 74.3% for a four-species classification task to 91.7% when focusing only on the key early-successional species. We then extended these models across 100 hectares of forest, mapping species dominance and forest condition across the entire restoration project.

1. Introduction

Tropical forest restoration is central to mitigating the worst impacts of global climate breakdown while simultaneously protecting vast swathes of terrestrial biodiversity (Palmer et al., 1997; Myers et al., 2000; Duffy, 2009; Thompson et al., 2009; Isbell et al., 2011; Joppa et al., 2011; Bastin et al., 2019). The IPCC have called for “unprecedented changes in all aspects of society,” including reversing the forecast loss of 2.5 million km2 of forest to a 9.5 million km2 increase in forest cover by 2050 (Zhongming et al., 2021). Logged-over tropical forests are particularly important carbon sinks because they are widespread, covering 4 million km2 (Cerullo and Edwards, 2019), and capture carbon rapidly as they recover lost biomass (Edwards et al., 2014). Natural tropical forests, such as these, are more likely to be successfully restored and persist (Crouzeilles et al., 2017), and host vastly more biodiversity value than actively managed forests (Edwards et al., 2014). But natural tropical forests continue to be threatened by agricultural expansion and the intention of many countries to use fast-growing plantations to meet international restoration commitments is a serious concern (Lewis et al., 2019). It is therefore of critical importance to develop remote sensing methods capable of assessing restoration performance in terms of biodiversity recovery to complement the already advanced techniques for measuring above-ground biomass (Asner et al., 2010; Aerts and Honnay, 2011; Melo et al., 2013; Chave et al., 2014; Zahawi et al., 2015; Iglhaut et al., 2019).

Biodiversity recovery may correlate poorly with above-ground biomass in regenerating tropical forests, and measuring species richness is often intractable due to the thousands of species involved, so developing reliable indicators of biodiversity is necessary (Martin et al., 2015; Sullivan et al., 2017). Although biodiversity typically increases as forests accumulate above-ground biomass, the relationship is complicated by disturbance history, fragmentation and active management, so that forests of equivalent biomass harbor very different levels of biodiversity (Slik et al., 2002, 2008; Sullivan et al., 2017). For example, a single round of logging removing 100 m³ of wood per hectare may result in a 40% reduction in biomass but only a 10% reduction in biodiversity (Martin et al., 2015), whereas a fast-growing plantation can rapidly accumulate biomass without a corresponding increase in biodiversity (Bernal et al., 2018). To properly account for the benefits of forest restoration it is therefore important to assess biodiversity, but tropical forests may host more than 1,000 species per hectare (Myers et al., 2000; Joppa et al., 2011). This makes direct measurement of species richness in restoration projects prohibitively costly in the field and potentially hampers direct estimation by remote sensing (Turner et al., 2003; Sullivan et al., 2017). Instead, it may be possible to assess biodiversity through the abundance and composition of early-successional species: following disturbance, early-successional species including grasses, shrubs, lianas, and fast-growing trees become abundant, often representing more than 30% of the canopy (Slik et al., 2002, 2008; Slik and Eichhorn, 2003). These species have adaptations that make them competitive in high-light environments, including large, thin leaves (e.g., low leaf mass per area), long petioles, open canopies and high foliar nutrient concentrations, which also make them visually distinct and easy to identify (Slik, 2009). If disturbance ceases, early-successional species gradually become less frequent, through their mortality and failure to recruit in the shaded understory, making them valuable indicators of both historic disturbance and subsequent recovery (Slik et al., 2003, 2008).

Quantifying recovery of secondary tropical forest in terms of indicative early-successional species still requires methods that scale to enable cost-effective application across management units, and remote sensing approaches can offer this (Petrou et al., 2015; Fassnacht et al., 2016; de Almeida et al., 2020). Traditional approaches to biodiversity or species occurrence monitoring rely on field observations that sample only a tiny fraction of the landscape (Turner et al., 2003); the same is true of newer approaches, including environmental DNA and functional trait measurements, each allowing diversity to be viewed through a different lens (Asner and Martin, 2009; Zhang et al., 2016; Bush et al., 2017; Colkesen and Kavzoglu, 2018). Remotely sensed satellite imagery can be used to interpolate data from field plots based upon variability of spectral signatures, estimating variation and approximating species composition across landscapes (Adelabu et al., 2013), but the spatial and temporal resolution of most satellite imagery remains a constraint (Carleer and Wolff, 2004), and higher resolution data, such as those collected from aircraft, are needed to monitor individual trees (Bergseng et al., 2015). By combining aerial laser scans with hyperspectral or multispectral imagery, species can be mapped (Zhang and Qiu, 2012; Alonzo et al., 2013; Dalponte et al., 2014; Maschler et al., 2018; Marconi et al., 2019), with crown-level precision if the resolution of the sensors is sufficient (Ballanti et al., 2016; Fassnacht et al., 2016). However, these sensors are often custom-designed or prohibitively expensive where commercially available, which limits the accessibility of such surveys (Surový and Kuželka, 2019). Finding a balance between feasibility, cost and utility is key to seeing methods adopted, and approaches must adapt to emerging technologies (Turner et al., 2003; Toth and Józków, 2016; Kitzes and Schricker, 2019).

Unoccupied Aerial Vehicles (UAVs) offer a cheap remote sensing methodology that increases the temporal and spatial resolution of available imagery, and they are increasingly adopted in forest research (Saari et al., 2011; Anderson and Gaston, 2013; Bergseng et al., 2015; Rokhmana, 2015; Surový and Kuželka, 2019). UAVs are being deployed to map insect damage (Näsi et al., 2015), post-logging stumps (Samiappan et al., 2017), flowering events (López-Granados et al., 2019), leaf phenology (Park et al., 2019), and forest biomass (Dandois and Ellis, 2013; Zahawi et al., 2015), but analytical methods to evaluate forest recovery in terms of species composition or biodiversity with UAVs are lacking (Messinger et al., 2016; Goodbody et al., 2018a). Even approaches to detect tree species from UAV imagery remain scarce and are often limited to classifying species for manually delineated crowns (Lisein et al., 2015; Tuominen et al., 2018) or else work in other ecological contexts such as high-latitude forests (Puliti et al., 2017; Alonzo et al., 2018; Franklin and Ahmed, 2018), riparian strips (Michez et al., 2016) or managed nurseries (Gini et al., 2018). All of these methods require manual field data collection, either as a complete plot inventory (Puliti et al., 2017) or as crown delineation with GPS (Alonzo et al., 2018), both of which take time and require access to trees, which is difficult in the tropics; approaches to collecting reference data should therefore take advantage of new technologies to improve efficiency. Further, detailed mapping of species across management units from UAV imagery requires methods that can extend knowledge of species for a sample of crowns to a whole region. Object-based image analysis of UAV imagery offers promise for mapping tree species in this way, allowing use of textural information computed over adjacent pixels, rather than simply evaluating pixel values individually (Giannetti et al., 2018; Gini et al., 2018; Lu et al., 2019; Puliti et al., 2019). Typically, regions of interest are manually defined, such as pre-defined management units, inventory plots or tree crowns, for which statistics are generated (Lisein et al., 2015; Michez et al., 2016; Alonzo et al., 2018; Franklin and Ahmed, 2018; Tuominen et al., 2018), meaning models can only be applied to other similarly created objects. Extending these approaches to all imagery across a site requires automated partitioning of imagery into groups of neighboring pixels (superpixels) (Ren and Malik, 2003). Superpixels labeled with species identities are used to build and validate models that can then be applied to all superpixels, covering the whole landscape (Feduck et al., 2018; Wu et al., 2019). This approach has yielded promising results in limited settings, such as conifer seedling mapping in logged 50 m² plots (Feduck et al., 2018) and mapping a single invasive species across an island (Wu et al., 2019), but it has not been applied to detect early-successional species in recovering tropical forests. Developing and applying UAV technologies to tropical forest restoration settings to map key species indicative of disturbance and recovery trajectory will help improve the efficiency of forest restoration management: knowing where interventions are most likely to work or are most needed can reduce labor costs (Rose et al., 2015).

Our contribution: This study presents SLIC-UAV, a novel and complete workflow for mapping early-successional species in degraded tropical forests. We developed this end-to-end pipeline combining UAV data collection with an object-based approach that learns species from the textural and spectral properties of superpixel clusters, enabling extension of data from a sample of crowns to map indicative species occurrence across 100 ha of forest. In contrast to existing methods, SLIC-UAV enables wall-to-wall mapping of multiple early-successional species, so that forest recovery and successional status can be evaluated. The UAVs used are rapidly deployable, commercially available, and can map approximately 100 ha per day, enabling small sites to be mapped in their entirety, large sites to be sampled, and repeat surveys to track recovery through time. We evaluate the performance of conventional red-green-blue (RGB) and multispectral (RGB+NIR; using a $3,000 camera) imagery, comparing the accuracy of both types of data. We develop an integrated UAV-based approach to collect species identity data, greatly reducing time and effort in the field, and use oil palm to show that additional categories can be added through desk-based mapping. Finally, we produce heat-maps across 100 ha of forest to reveal the spatial signature of disturbance created by logging, which can be used as a baseline for tracking recovery and directing restoration management.

2. Materials and methods

This section introduces the simple linear iterative clustering on unoccupied aerial vehicle data (SLIC-UAV) method. It explains the steps of superpixel extraction, feature generation and subsequent classification, comparing three options: lasso regression, support vector machines and random forests. It also introduces the study data used to illustrate SLIC-UAV, including the development of a data collection pipeline that uses a UAV in place of traditional field survey to reduce the time needed to curate a reference set of crowns for classifying the key species of interest, focusing on early-successional species indicative of disturbance and on typical long-lived species.

2.1. Data collection

2.1.1. Study site

Data for this study were collected at Hutan Harapan (“forest of hope”) on the island of Sumatra, Indonesia (Figure 1). Hutan Harapan is an Ecosystem Restoration Concession where 98,455 ha of ex-logging concessions are now leased for restoration (Harrison and Swinfield, 2015). Heavy logging from the 1970s onwards resulted in a heterogeneous secondary lowland dipterocarp forest in various stages of recovery. Harapan has a weakly seasonal climate, with monthly mean rainfall varying from 79 to 285 mm and a dry season of less than 100 mm of rain for three consecutive months between June and August. The terrain at Harapan is undulating; however, elevations remain low, in the range 30–120 m above sea level. Despite heavy logging since the 1970s, Harapan supports a large amount of biodiversity, with 302 bird species and over 600 tree species from 107 plant families recorded (Harrison and Swinfield, 2015).

FIGURE 1

Figure 1. Location of Hutan Harapan within Indonesia (green polygon). Imagery data courtesy of Google.

Our study site comprised a 100 ha area close to the boundary of Hutan Harapan, west of the main camp. The site was characterized by closed-canopy forest with pre-disturbance remnant trees emerging from dense regrowth of early-successional trees, including Macaranga spp. (Euphorbiaceae) and the invasive pioneer Bellucia pentamera (Melastomataceae) from South America (de Kok et al., 2015). These species are common across the entire landscape and within disturbed forest more generally in Southeast Asia (Slik et al., 2003, 2008; Dillis et al., 2018). The study area also included oil palm within the adjacent concession. Data were collected in two survey periods in 2017 and 2018.

2.1.2. UAV imagery

We collected UAV data for 100 ha of forest in Hutan Harapan. Multispectral (MS) imagery was collected in April 2017. For this, a 3DR Solo UAV (3DR, Berkeley, USA) was equipped with a Parrot Sequoia camera (Parrot, Paris, France) held in a fixed mount angled close to nadir when flying at mission speed. The camera records four bands of MS imagery with centers and approximate response widths (both in nm): Green (550, 40), Red (660, 40), Red Edge (735, 10) and Near Infrared (790, 40). Images in these bands are recorded at 1.2 megapixel resolution, giving a ground sampling distance of 14.8 cm per pixel at an altitude of 120 m. Additionally, a sensor atop the UAV records illumination in each band at the time of exposure, allowing radiometric correction from illumination to reflectance values and reducing the effect of varying solar illumination. UAV flights were flown by autopilot. Each flight covered a 10.75 ha footprint in a grid designed in QGIS (QGIS Development Team, 2019). Mission Planner (ArduPilot Dev Team, 2017) was used to design the flight path, in a snaking pattern with 80% in-line and 70% between-line overlap between images, also referred to as front-lap and side-lap.

Unfortunately, the RGB data collected by the Parrot Sequoia was blurred due to the use of a rolling shutter. We therefore collected additional RGB data in November 2018. For this a DJI Phantom 4 UAV was used with its stock camera (DJI, Hong Kong, China). This camera records standard RGB imagery at 12.4 megapixel resolution, which at a height of 100 m gives a pixel size of 4.35 cm. We used the same flight pattern as in April 2017. The DJI GS Pro app was used to plan each flight, in a snaking pattern with 90% in-line and 75% between-line overlap, all flown at an altitude of 100 m.

Agisoft PhotoScan Professional (Agisoft, St. Petersburg, Russia) was used to process both datasets. For the MS imagery the steps were to align photos, calibrate reflectance (using data from the onboard sunshine sensor to correct for illumination, which is known to improve the reliability of classification methods; Tuominen et al., 2018), build a dense 3D point cloud, build a DSM (digital surface model) and build an orthomosaic. This allowed the algorithm to refine estimates of camera location from the initial GPS stamps, correct for illumination, build a photogrammetric model of the area and produce a rasterised DSM and orthomosaic (OM) after correcting for the surface geometry. Parameters for each step are listed in Table 1. For the MS imagery this process took approximately 34 h using a workstation running Windows 7 equipped with an Intel Xeon E3-1240 V2 CPU, comprising 8 cores running at 3.4 GHz with 16 GB RAM, though this included producing a dense point cloud; had we used only the sparse point cloud, this process would have taken 14 h 35 min. RGB imagery was treated in the same way, with the exception of the reflectance calibration. The higher resolution of the RGB data required the process to be separated into chunks to fit within the 16 GB of RAM on the workstation, taking a total of approximately 11 days to run, though restricting to the sparse point cloud reduced this to 17 h 59 min. We chose to use the dense cloud for higher structural detail, but all steps could be completed with only the sparse cloud.

TABLE 1

Table 1. Values of parameters used in Agisoft PhotoScan.

We then split the data into overlapping chunks, which were aligned and merged using Agisoft marker-based alignment. Markers were manually set for all flights with overlap between the chunks, with a minimum of 4 clearly visible fixed locations used for each overlapping pair of flights, taking roughly 1 h of human input. The RGB and MS data were co-aligned using the Georeferencer tool in QGIS (version 3.4.5). Tie points were generated as a random set of 100 points across the region. We ensured good correspondence by using clearly visible features present in both datasets close to each random point as the final tie point. A further 51 points were then placed, starting from the centroids of the largest remaining Voronoi polygons across the network of tie points. Drawing tie points took roughly 4 h of human input. These points were then used to transform the MS data to align with the RGB data using a polynomial transformation (polynomial 3 in QGIS) with nearest neighbor resampling. The final outputs used in this study were a multispectral orthomosaic (MS), an RGB-derived digital surface model (DSM) and an RGB orthomosaic (RGB), with pixel resolutions of 11.3, 8.01, and 4.01 cm, respectively (examples shown in Figure 2).

FIGURE 2

Figure 2. Examples of the three imagery types used in this work, standard Red-Green-Blue, Digital Surface Model and Multispectral (left to right). The multispectral imagery is a false-color image with red, green and blue representing red, green and red edge channels.

2.1.3. Generating labeled tree crowns for training and testing

The first signs of succession on damaged soils in this region are usually ferns, ginger and bamboo. However, our aim is to evaluate whether the forest is returning to rain forest. We therefore focused on tree species which are found in rain forests nearby and are considered indicative of forest recovery after disturbance. We focused on two early-successional species and two long-lived species indicative of more established secondary forest. The early-successional species were Macaranga gigantea (Euphorbiaceae) and the non-native invasive Bellucia pentamera (Melastomataceae), which are both prevalent across Harapan, especially in more degraded areas. We also chose two long-lived (although still considered early-successional) species with less visually distinct crown and leaf traits: Alstonia scholaris (Apocynaceae) is a tree that can reach 60 m in height and produces commercially valuable timber; Endospermum malaccense (Euphorbiaceae), locally known as Sendok-sendok, is a mid-canopy tree reaching 34 m in height, typical of secondary regrowth (Slik, 2009). Examples of these species are shown in Figure 3. Visually, the “long-lived” species appear more similar, both from the ground and from above, making them difficult to distinguish from each other and from other upper-canopy trees. The early-successional species are easy to spot from the ground owing to their low height and distinctive leaves. Their leaf arrangements also produce striking patterns and textures when viewed from above, making them easier to identify by eye from UAV imagery. A set of georeferenced hand-drawn polygons was produced for the four species of interest in November 2018.

FIGURE 3

Figure 3. Individuals of the four main species as photographed in Harapan: (A) Bellucia pentamera, (B) Macaranga gigantea, (C) Alstonia scholaris, and (D) Endospermum malaccense.

We used the Phantom 4 UAV to record the location of trees, which has two significant advantages over the traditional approach of mapping trees from the forest floor: (1) above-canopy UAV-based GPS measurements are high precision (< ±3 m) compared with sub-canopy hand-held GPS measurements (±15 m); and (2) high-quality images of tree crowns are collected, facilitating the production of hand-drawn tree crown polygons. We worked along roads and at other locations from which we could launch the UAV and then scanned the canopy at low altitude (typically 20–30 m above the canopy) with the UAV flown in manual mode to identify crowns and species from the live high-resolution imagery; images were reviewed by local experts to confirm identifications. Once we were confident we had identified a crown of interest, two team members would position themselves at right angles to each other relative to the crown center. The UAV operator would then move the UAV horizontally until both team members agreed it was above the crown. Here, multiple images were captured at various heights. Images from about 30 crowns were collected in this way in a morning and manually digitized in the afternoon (while the trees were fresh in our memories). The crown boundaries were marked on the highest resolution image with reference to images at multiple heights, and any crowns with unclear boundaries were re-confirmed in the next batch of flights. Once RGB OM and DSM rasters were produced from the mapping surveys, the crowns were converted to geospatial polygons using the initial annotated images and GPS tags. The raw imagery was used as the primary reference, with contrast in the RGB OM and height boundaries in the DSM used to refine the outlines. In total, data were collected for 328 crowns: B. pentamera (n = 120), M. gigantea (n = 65), A. scholaris (n = 93) and E. malaccense (n = 50); example digitisations are shown in Figure 5.

In addition to the crowns mapped in the field, we also digitally delineated three other classes to include in our model, giving us a “background” from which to distinguish our species of interest. We digitized 105 oil palm crowns, drawn at random locations across the plantations or (occasionally) within the recovering forest. We also digitized 100 “other” crowns by drawing points at random across our study site. We considered each of these carefully, checking the original (higher resolution) imagery to ensure each crown was not in fact one of the four target species. Finally, to allow our models to distinguish the miscellaneous non-vegetation regions present in the data and so map the complete area, we labeled non-vegetated regions (including water bodies, roads, bare ground and buildings). To ensure that crown shape didn't confound our analysis, we created our non-vegetated labels in the shape of tree crowns using 100 existing crown outlines of varying species, size and structure. Examples of the new labels are shown in Figure 4. Overall this gave us 633 labeled regions across seven categories: the four target species, oil palm, an “other” tree class and a miscellaneous non-vegetation class.

FIGURE 4

Figure 4. Example manual digitisation of oil palm, other species and miscellaneous regions completed with UAV imagery.

2.2. Identifying species with SLIC-UAV

SLIC-UAV is an object-based image analysis workflow. This enables the context and texture of the imagery to be used in modeling, in contrast to a pixel-based approach which focuses only on the local spectral data for each pixel (Yu et al., 2006; Colkesen and Kavzoglu, 2018). First, we created superpixels (clusters of neighboring pixels) through simple linear iterative clustering (SLIC) using the RGB imagery data from 2018 (detailed in Section 2.2.1). Second, we extracted both spectral and textural features from the imagery data for each superpixel. Third, we modeled species using the manual labels, testing lasso regression (LR), support vector machine (SVM) and random forest (RF) models. Finally, after validation, the trained models were used to classify species for automatically created segments across the 100 ha study site, eventually building heatmaps of canopy dominance (proportion of area classified as each species) to indicate forest condition.

2.2.1. Superpixel segmentation with SLIC

Within our approach, we classify the species for each superpixel, combining many individual pixels so that they can be compared in their local context. Superpixel segmentation separates imagery into groups of connected pixels based on similarity (Ren and Malik, 2003). We use this approach in our automated landscape mapping, enabling extension of the pipeline to complete coverage of any region where imagery exists.

Automated segmentation was completed within SLIC-UAV, using the RGB imagery from 2018, by Simple Linear Iterative Clustering (SLIC) (Achanta et al., 2010, 2012); see Figure 5 for an example. This is similar to k-means clustering, but is designed to produce superpixels of roughly similar area in regular spacing by starting with a regular grid of squares. Superpixel centers are then iterated using k-means clustering in a local neighborhood four times the average superpixel size, using a weighted sum of Euclidean distance between pixel locations and distance in color space as the distance metric. Once superpixel centers become sufficiently stable (based on sequential changes), connectivity is enforced, ensuring all pixels in a given superpixel are locally connected. The algorithm adapts to contours of the image, like k-means, but the regularity constraints ensure superpixels have a similar size, which is then mostly controlled by the number of initial superpixels. For our work we used the implementation of SLIC in the scikit-image Python library, using Python 3.7 (van der Walt et al., 2014; Python Core Team, 2018). We used the default compactness of 10 and a sigma of 1, and initialized superpixels to have an average area of 0.5 m², ensuring these were smaller than all but the smallest crowns.
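As an illustration, a minimal sketch of this segmentation step using scikit-image is shown below; the file path is a placeholder, the ground sampling distance is taken from the final RGB orthomosaic, and the compactness, sigma and target superpixel area follow the values stated above.

```python
# Minimal sketch of the SLIC superpixel step, assuming the 2018 RGB orthomosaic
# can be loaded as a NumPy array; the file path below is a placeholder.
from skimage import io
from skimage.segmentation import slic

rgb = io.imread("rgb_orthomosaic.tif")[..., :3]  # hypothetical path; keep only RGB channels
gsd_m = 0.0401                                   # pixel size in metres (4.01 cm for the RGB mosaic)

# Target an average superpixel area of 0.5 m^2, as used in the study.
pixels_per_superpixel = 0.5 / gsd_m ** 2
n_segments = int(rgb.shape[0] * rgb.shape[1] / pixels_per_superpixel)

# compactness=10 and sigma=1 follow the settings reported in the text.
segments = slic(rgb, n_segments=n_segments, compactness=10, sigma=1)
```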

FIGURE 5

Figure 5. Example images of complete pipeline for the main four species recorded in the field. Each row includes imagery of one crown: (A) Alstonia scholaris, (B) Bellucia pentamera, (C) Endospermum malaccense, and (D) Macaranga gigantea. Across the columns (left to right) are shown the manual flight image marked up in the field for crown extent, the outline converted to a shapefile overlain on the RGB orthomosaic, the SLIC superpixels laid over this orthomosaic and the labeling of these superpixels using the SVM approach for the model with all categories. We have only included the outline and labeling for the crown in the center of each image, but the columns with superpixels are taken from the full landscape map to show this crown in context and so include labels on other superpixels. Note that each row has a different scale as indicated by the scale bars. The purpose of this is to show the tree and its immediate surroundings in sufficient detail.

2.2.2. Feature extraction

Imagery for each superpixel was used to generate a set of summary features to use for species classification. We generated features for each of the three imagery types, treating the DSM as a greyscale raster with floating point values. The features were all scaled and centered to mean zero and variance one based on training data.

We computed features in two broad classes: spectral and textural. Spectral features are based on summary statistics of the individual pixel values in the imagery, as commonly used in UAV mapping approaches (Ota et al., 2015; Kachamba et al., 2016). In contrast, textural features were computed by treating each superpixel as an image, computing statistics based on repeating patterns and frequencies of pattern motifs in the arrangement of pixels (Franklin and Ahmed, 2018). Example visualizations of these concepts are shown in Figure 6. Vegetation indices were computed as stated in Supplementary Table S1 based on the bands of the orthomosaics and treated as extra spectral bands for the spectral analysis, as in Fuentes-Peailillo et al. (2018) and Goodbody et al. (2018b). Similarly, we converted the RGB imagery into HSV space, treating hue, saturation and value as additional spectral bands (Smith, 1978). In an effort to focus on only the illuminated portions of each superpixel, we also retained the brightest 50% of pixels, defined by lightness in CIELAB color space (International Commission on Illumination, 2019), and computed the RGB spectral features for just these pixels. We computed the same statistics on the RGB bands, the MS bands, the DSM float imagery, the RGB and MS indices, the HSV channels and the bands of the brightness-filtered RGB imagery. Details of all the spectral statistics computed for each of these data types are listed in Supplementary Table S2.
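Purely as an illustration of this step, the sketch below computes a representative subset of per-superpixel spectral statistics together with one vegetation index; the band order of the array and the choice of statistics are assumptions, with the full feature list given in Supplementary Tables S1, S2.

```python
# Illustrative per-superpixel spectral features; the band order of `ms` and the
# statistic subset are assumptions (Supplementary Tables S1, S2 list the full set).
import numpy as np

def spectral_features(ms, segments, label):
    """Summary statistics for the pixels of one superpixel in a 4-band MS raster."""
    mask = segments == label
    green, red = ms[..., 0][mask].astype(float), ms[..., 1][mask].astype(float)
    red_edge, nir = ms[..., 2][mask].astype(float), ms[..., 3][mask].astype(float)
    bands = {
        "green": green, "red": red, "red_edge": red_edge, "nir": nir,
        "ndvi": (nir - red) / (nir + red + 1e-9),  # vegetation index as an extra band
    }
    feats = {}
    for name, values in bands.items():
        feats[f"{name}_mean"] = values.mean()
        feats[f"{name}_std"] = values.std()
        feats[f"{name}_median"] = np.median(values)
        feats[f"{name}_iqr"] = np.subtract(*np.percentile(values, [75, 25]))
    return feats
```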

FIGURE 6

Figure 6. Illustration of the key features used in this work. On the left are example visualizations of distribution for parametric (A) and non-parametric statistics (B) for RGB pixels in the central image. (C) Is an example of a filter used for Laws textural features, with the E5E5 kernel for detecting edges. (D) Shows autocorrelation scores in four directions spaced by 45° with average score overlaid. (E) Is an example calculation of a gray-level co-occurrence matrix and the filter given by the dissimilarity measure in a local 7 × 7 neighborhood. (F) Is an example of computing a Local Binary Pattern, with a histogram of occurrences of each motif score across the inset imagery.

Textural features were produced with four approaches: the grey-level co-occurrence matrix (GLCM) (Haralick et al., 1973), local binary patterns (LBP) (He and Wang, 1990; Ojala et al., 1996), Laws features (Laws, 1980) and spatial autocorrelation. GLCM statistics summarize patterns of frequently co-occurring local pairs of pixel values; we report both the mean and range of scores across all directions for each offset distance. For the DSM data we converted the float values to 32 integer values, defined as a linear spacing (and rounding) from 1 for heights below the 5th percentile within that superpixel to 32 for heights above the 95th percentile. GLCM statistics were computed for offsets of 1, 2, and 3 pixels. LBPs quantify the frequency of patterns of relative pixel values within a neighborhood of a given radius. We used a rotationally invariant form of LBPs, counting all motifs equivalent up to rotation as a single pattern. The DSM was treated in the same way as for the GLCM, and again this was applied at radii of 1, 2 and 3 pixels. Laws features compute convolutions of the imagery with particular kernels, constructed as outer products of vectors designed to identify spots, waves, lines, ripples and intensity. Imagery is first modified by subtracting the mean value in a 15 × 15 window for each pixel. Each 5 × 5 kernel is then convolved with the resulting image and we report the mean and standard deviation of the resulting pixel values, using the float version of the DSM imagery in this case. Spatial autocorrelation scores the correlation of the image with itself; we recorded the mean correlation across all directions for each offset, along with the range across all directions. We computed this for 1, 2 and 3 pixel offsets, again using float data for the DSM. RGB data were transformed into a greyscale image for all textural features, and each of the four MS bands was treated as a greyscale image with textural features computed independently. Details of all textural features are listed in Supplementary Table S3.
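To make two of these feature families concrete, the sketch below computes GLCM and rotation-invariant LBP features for a single superpixel using scikit-image (the graycomatrix naming assumes scikit-image ≥ 0.19); it is a simplified stand-in for the full feature set in Supplementary Table S3, operating on the superpixel's bounding box for brevity.

```python
# Illustrative GLCM and LBP features for one superpixel; `rgb_patch` is the bounding
# box of the superpixel and `mask` marks its pixels (a simplification of the full pipeline).
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def texture_features(rgb_patch, mask):
    grey = (rgb2gray(rgb_patch) * 255).astype(np.uint8)
    feats = {}
    # GLCM: mean and range of one statistic over four directions, for offsets of 1-3 px.
    for d in (1, 2, 3):
        glcm = graycomatrix(grey, distances=[d],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=256, symmetric=True, normed=True)
        scores = graycoprops(glcm, "dissimilarity")[0]
        feats[f"glcm_dissim_mean_d{d}"] = scores.mean()
        feats[f"glcm_dissim_range_d{d}"] = scores.max() - scores.min()
    # Rotation-invariant uniform LBP histograms at radii of 1-3 px.
    for r in (1, 2, 3):
        lbp = local_binary_pattern(grey, P=8 * r, R=r, method="uniform")
        hist, _ = np.histogram(lbp[mask], bins=8 * r + 2, range=(0, 8 * r + 2), density=True)
        for i, v in enumerate(hist):
            feats[f"lbp_r{r}_bin{i}"] = v
    return feats
```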

2.2.3. Species classification models

We added labels to all superpixels with 50% or more of their area within a labeled crown, leading to multiple labeled superpixels for most crowns. In total this produced 11,996 manually labeled superpixels. We trained the models on 75% of these superpixels and reserved 25% for evaluation. Accuracies stated in the results section (e.g., Figure 7) are for the 25% evaluation data. Within the training data, we assessed the models using 10-fold cross-validation. Ten models were fitted on 90% of the crowns, with the remaining 10% used for validation. Folds were split in a random, stratified way to balance each species label across all folds, and the validation sets together covered all crowns, so that each crown was used to build nine of the models and to test the tenth, independently built model. This split was the same for all models. The splitting of training and test superpixels in each fold of cross-validation was also based on the original crown split, keeping all superpixels for a given crown in the same set to avoid inflating accuracy by training on superpixels from crowns included in the test set. Accuracies stated in Figure 7A are mean accuracies across the 10% of data reserved in each fold of the validation process.
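The key point is that superpixels are grouped by crown before splitting; the sketch below uses scikit-learn's StratifiedGroupKFold as a stand-in for our stratified crown split, so the function name and exact splitter are assumptions rather than the original implementation.

```python
# A sketch of the crown-level split: all superpixels from one crown stay on the same
# side of every split. StratifiedGroupKFold is a stand-in for the stratified crown
# split described in the text, not the actual code used in the study.
from sklearn.model_selection import StratifiedGroupKFold

def crown_cv_splits(features, species, crown_ids, n_splits=10, seed=0):
    """Yield (train_idx, test_idx) pairs grouped by crown and stratified by species."""
    cv = StratifiedGroupKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    yield from cv.split(features, species, groups=crown_ids)
```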

FIGURE 7

Figure 7. Accuracy of species classification with (A) model types, (B) imagery used, and (C) features used. Classification accuracy increases with decreasing number of species (x-axes). The two species model classified the early-successional Bellucia pentamera and Macaranga gigantea from an 'other' class. The three species model additionally classified the long-lived Alstonia scholaris and the four species model also included Endospermum malaccense.

We considered three modeling approaches to classify the species for each superpixel: lasso regression (LR), support vector machine (SVM) and random forest (RF). LR is an extension of least-squares regression which regularizes the coefficients of the resulting model. This reduces the likelihood that the model is heavily reliant on any one feature and, more critically, restricts the number of predictors included in the model, giving a sparse model that can readily be interpreted (Tibshirani, 1996). We fitted models using the glmnet package in R (R Core Team, 2019). Here we used a multinomial logistic regression to give relative confidence scores for each class label, with the highest-scoring species being the final prediction. Data examples were weighted inversely proportionally to the number of examples with that species label, to account for class imbalance. We constructed our models to ensure that where a feature was included for one class it was included for all classes, and restricted our models to the best-fitting model (based on overall accuracy on the training set) with at most 25 features. SVMs are less restrictive but harder to interpret (Cortes and Vapnik, 1995). SVM modeling was completed using the e1071 package in R. Here the model was fitted to the training data using the default radial basis function kernel, with parameters tuned by the inbuilt method and class weights set to balance the contribution of each class, as for lasso regression. The model was allowed to use all variables, in contrast to the restriction applied in lasso regression. Similarly, the RF models (Ho, 1995, 1998) were built on the training data using the R package randomForest, again applying weights to correct for varying class size. Here 500 trees were used with the default tree structure (sampling √f variables at each node, where f is the number of features supplied to the model). All approaches used here have built-in within-sample validation for model parameter selection.
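The models above were fitted in R (glmnet, e1071 and randomForest); purely to make the settings concrete, a rough scikit-learn equivalent might look like the sketch below, which is an approximation rather than our implementation.

```python
# Rough Python stand-ins for the three classifiers described in the text; the R
# packages named above were actually used, so treat these settings as approximate.
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

models = {
    # L1-penalised multinomial logistic regression approximates the lasso (glmnet) model.
    "lasso": make_pipeline(StandardScaler(),
                           LogisticRegression(penalty="l1", solver="saga",
                                              class_weight="balanced", max_iter=5000)),
    # RBF-kernel SVM with balanced class weights, mirroring the e1071 setup.
    "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced")),
    # 500 trees sampling sqrt(f) features per split, mirroring randomForest defaults.
    "rf": RandomForestClassifier(n_estimators=500, max_features="sqrt",
                                 class_weight="balanced"),
}
```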

We explored the contribution of different imagery and feature types to the accuracy of our models. We focused on sequential addition of imagery, in line with the additional processing or sensors required, to see whether these steps are justified by improved performance. We considered two feature classes (spectral and textural) separately and also combined. We explored the effect of using all valid combinations of data sources combined with using either or both classes of features. In total this gave 21 possible model input options in addition to the model using all variables. Given the number of model fits required, we chose a single training and test data split, using 75% of crowns to train each model and 25% to evaluate, keeping the split the same for all combinations. Owing to issues with the MS sensor's illumination recordings, which led to artifacts in the imagery produced (Supplementary Figure S3c), we masked out all crowns within a 50 m radius of any affected area (marked by the orange hashing), leaving 409 crowns for evaluation.

As noted previously, the visual difference between the long-lived species, Endospermum malaccense and Alstonia scholaris, and other canopy species was subtle. This was in contrast to Bellucia pentamera and Macaranga gigantea, which have distinctive structure and leaf texture and whose occurrence is known to be closely related to disturbance history. We also noted that we had a small sample of E. malaccense crowns and that these were often more difficult to identify confidently in the field. We therefore considered models in which these trees were included in the other tree species category, keeping only Alstonia scholaris as an indicative long-lived species. Finally, we also considered models in which A. scholaris was likewise merged with the other crowns to form a ‘lower management concern' class, actively seeking only the invasive Bellucia pentamera, the early-successional Macaranga gigantea and potentially encroaching oil palm, as opposed to species more indicative of progress beyond the initial stages of succession. We believe this final model, whilst simpler, is potentially of more interest to forest managers.

2.3. Landscape mapping

We applied our SLIC-UAV superpixel approach to produce labels across the whole study site for which we had imagery. For this we selected the best-performing model and then retrained the classifier on all training data. This produced maps of species occurrence, which were used to compute the area and percentage cover of each species. We also used these to produce density maps by computing the percentage prevalence of each species in a grid, where each cell was 0.25 ha in size. As there was a region of clearance to establish agroforestry between the UAV surveys, we masked regions where this had occurred by the 2018 survey when computing the landscape models of species occurrence, since the MS data would include trees no longer present. This doesn't affect the model-building phases: of the labeled crowns, only six ‘other' species trees were in this region, all of which were in small fragments of canopy left after clearing and were verified to have very similar extent in both years of imagery, in both the processed imagery and the original source images. We expect the outputs from models across the study site to be of direct value for measuring successional status and for directing active restoration toward areas highlighted as early-successional or dominated by low biodiversity value species. The full pipeline for this approach is summarized in Figure 8.
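As a sketch of how such density maps could be derived from the predicted class raster, the function below tallies the percentage of each class within 0.25 ha (50 m × 50 m) cells; the array layout and function name are assumptions rather than our code.

```python
# Illustrative dominance grid: percentage of area classified as each class within
# 0.25 ha (50 m x 50 m) cells; `class_map` is a 2-D integer raster of predictions.
import numpy as np

def dominance_grid(class_map, gsd_m, cell_m=50.0, n_classes=7):
    cell_px = int(round(cell_m / gsd_m))  # pixels per grid-cell edge
    rows, cols = class_map.shape[0] // cell_px, class_map.shape[1] // cell_px
    grid = np.zeros((n_classes, rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = class_map[i * cell_px:(i + 1) * cell_px,
                              j * cell_px:(j + 1) * cell_px]
            counts = np.bincount(block.ravel(), minlength=n_classes)
            grid[:, i, j] = 100.0 * counts / counts.sum()
    return grid
```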

FIGURE 8

Figure 8. Schematic of the full SLIC-UAV pipeline.

3. Results

3.1. Automated species mapping with SLIC-UAV

SVM modeling performed best for all species label options (74.3% accuracy with all species labels, rising to 91.7% for the simplest version with the long-lived species labels removed). Random forest modeling performed slightly worse than SVM, and lasso regression performed markedly worse. Based on this, as well as the training scores on the superpixel classification, we chose to proceed with SVM as our final model. Notably, test performance improved markedly for all three modeling approaches when combining all long-lived species and focusing on the early-successional species and oil palm (Figure 7C). This reaffirms our observation that these species are visually the most distinctive and, given their significance as indicators of disturbance, being able to identify them reliably is particularly valuable.

3.2. Contribution of imagery and feature types

Adding structural data (DSM) improved accuracy for all three species label sets. This was expected, as these data add more information on vertical structure, whereas RGB data contain information on spectral response and two-dimensional structure from image contrasts. The addition of MS data further improved model performance in the two more complex models but actually decreased performance in the simplest model (Figures 7A,B). This improvement was larger than that gained by adding DSM data, showing the additional value of the MS data.

Comparing RGB and MS data directly suggested a similar pattern (Supplementary Figure S2): comparable models using MS imagery in place of RGB performed better whenever textural features were included, except in the case of the simplest problem with only five species labels, as in Figure 7B. However, when using only spectral features, MS imagery gave a better fit than RGB on the training data at the cost of a worse fit on held-back data. This suggests the extra information in the MS imagery may have led to overfitting when only the pixel values were considered and not their textural context.

Including both spectral and textural features resulted in the highest accuracies for the two models with a reduced set of species. In these models the hierarchy was clear: textural features produced a slight improvement over spectral features, and the combination did best (Figure 7C). This wasn't the case for the model with all species labels, where the best performance came from spectral features alone and textural features did worst; adding spectral features to the textural features improved performance, but this was still worse than using spectral features alone.

3.3. Landscape species mapping with SLIC-UAV

We used our best four species SLIC-UAV model to predict species occurrence across the entire 100 ha landscape (Figure 9). The confusion matrix of the final model used for landscape mapping, assessed using the mapped crowns, is given in Table 2. The model does best in identifying non-vegetation and oil palm, both with precision and recall greater than 95%. Amongst the tree species, the early-successional species were identified with the highest precision (Bellucia pentamera: 90.1%, Macaranga gigantea: 96.9%), whereas the longer-lived species were more commonly confused with each other and with the ‘other' species category, leading to lower precision (Alstonia scholaris: 86.4%, Endospermum malaccense: 89.5% and “others”: 87.5%). Recall values were generally lower, in most cases owing to misclassification of known species as the “other” category, which is unsurprising given that it is a catch-all class. Including this label provides a “control” class for vegetation, ensuring that superpixels are assigned to a species of interest with high precision, since the model is not forced to choose among a limited set of species labels. This high precision is of value to management as it gives high confidence in the species dominance maps.
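For reference, the per-class precision and recall quoted here follow directly from the confusion matrix; the small sketch below assumes rows are reference labels and columns are predictions, which may differ from the orientation of Table 2.

```python
# Per-class precision and recall from a confusion matrix; rows are assumed to be
# reference labels and columns predictions (Table 2's orientation may differ).
import numpy as np

def precision_recall(confusion):
    confusion = np.asarray(confusion, dtype=float)
    tp = np.diag(confusion)
    precision = tp / confusion.sum(axis=0)  # correct / all predicted as that class
    recall = tp / confusion.sum(axis=1)     # correct / all reference samples of that class
    return precision, recall
```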

FIGURE 9

Figure 9. Map of the percentage occurrence by area of vegetation species in 0.25 ha cells across the study area as predicted by the superpixel SVM model including all species. (A) Shows the combined occurrence for long-lived species for Alstonia scholaris, Endospermum malaccense and “other” tree species, with (B) showing Bellucia pentamera alone, and (C) showing Macaranga gigantea alone.

TABLE 2

Table 2. Confusion matrix for support vector machine model applied to the automatically delineated SLIC-UAV superpixels using all labels, as used for landscape mapping.

The predicted species distribution corresponds closely with the site history, with oil palm, non-vegetation and forest all well identified. Notably, the oil palm plantation at the top of the region, lying outside the Hutan Harapan boundary, shows the specificity of our model, with regions of bare ground, buildings and remaining vegetation clearly picked out in contrast to the oil palm. After clipping the map to the area within Hutan Harapan, the early-successional species together comprised 45.34% of the study site, with oil palm and other vegetation representing 7.21 and 41.81% of the site, respectively (Table 3). B. pentamera represented 38.37% of the study area, most commonly occurring close to roads. The other species of interest made up 6.97% of the cover, or 8.00% of the cover once non-vegetation and oil palm are removed. Other forest species covered the remaining 41.81%. Higher occurrence of B. pentamera and M. gigantea was expected, but these species tend to dominate the sub-canopy and appear more rarely as large ‘top of canopy' trees; they were often identified in canopy gaps and near the edges of crowns. Gridded maps of canopy dominance for the combination of long-lived species with ‘other' vegetation, for Bellucia pentamera and for Macaranga gigantea (Figure 9) reveal increased occurrence of these early-successional species close to roads, where disturbance is highest. Generally, there was also a slight gradient of increased prevalence of long-lived species further from the boundary of Hutan Harapan, owing in part to fewer sporadic occurrences of oil palm further from the plantation (Figure 9).

TABLE 3

Table 3. Abundance of species predicted across the Harapan landscape, with the oil palm plantation in the North and regions of forest clearance removed as shown in Supplementary Figure S3.

4. Discussion

The challenge in tracking biodiversity recovery during forest restoration is producing species classification methods that generalize over large contiguous areas without the need for exhaustive surveys. The SLIC-UAV approach was able to distinguish four species of interest from among other vegetation and non-vegetated superpixels with 74.3% accuracy, rising to 91.7% when focused only on the early-successional species indicative of recent disturbance: Bellucia pentamera and Macaranga gigantea. We used this approach to predict species distribution across the rest of the 100 ha landscape.

4.1. Accurately monitoring forest restoration with SLIC-UAV

The outputs of our SLIC-UAV pipeline are dominance maps for early-successional species (Figure 9). These maps could be used to assess forest condition according to successional status and enable recovery to be tracked through time. This complements approaches which focus primarily on carbon content and changes in biomass. In addition, this tool could be used to help focus active restoration toward more degraded areas, where indicative early-successional species are most prevalent, so that assisted natural regeneration techniques such as release cutting, enrichment planting and selective thinning can be implemented (Ansell et al., 2011; Swinfield et al., 2016). These approaches have been shown to accelerate carbon sequestration (Reynolds et al., 2011; Gourlet-Fleury et al., 2013; Wheeler et al., 2016) and the development of suitable habitats for forest specialist species (Ansell et al., 2011). This application for guiding management illustrates the potential value of SLIC-UAV. Our pipeline for species occurrence mapping includes UAVs at all steps, making reference data collection simpler, though care must be taken to ensure species of interest are well represented. The mapping pipeline can be applied to any orthorectified imagery of sufficiently high resolution, where crowns are made up of at least 100 pixels or so. With the increasing resolution of satellite data, SLIC-UAV could be applicable globally, provided sufficient training data are available.

The high accuracy of our method on the indicative early-successional species is a particular strength for its use in operational settings. Models focusing on Macaranga gigantea and Bellucia pentamera performed particularly well: both species were identified with over 90% precision (Table 2). This is comparable to Wu et al. (2019), who mapped an invasive species on a Chinese island using eCognition to generate objects from UAV RGB imagery, validated on a per-pixel basis with 95.6% overall accuracy and 94.4% precision for the invasive species, though that work focused on only one species. Apostol et al. (2020) also used object-based image analysis through eCognition to identify regions as either spruce or birch in Romanian forests, with accuracy varying from 73.9 to 77.3%.

In Southeast Asian tropical forests, Macaranga gigantea is one of several species whose presence is strongly linked to the severity and recency of disturbance, and Bellucia pentamera is an invasive species which is particularly prevalent at Hutan Harapan (Slik et al., 2003, 2008). They are also most commonly found in heavily degraded forest and so can act as signature species for degradation, while their absence is associated with recovery. We chose these species based on perceived and measured occurrence at Hutan Harapan. There are additional species which could be used as proxies for disturbance, and including these would lead to a more complete picture of degradation, as they may occupy different microclimatic niches. Our work is an illustration of the SLIC-UAV pipeline, and the flexibility of the method will allow other species of interest to be added to models, with data collection again possible using a UAV. Knowledge of the relative rate of occurrence of our chosen species can highlight regions within the project where the effects of disturbance are strongest. It has been shown that structural recovery is quick after fire disturbance, but that fire has a longer-lasting effect on species composition (Slik et al., 2002). Managers of projects like Hutan Harapan can use approaches like SLIC-UAV to help distinguish this signature of prior disturbance, based on indicative species occurrence in addition to simpler structural metrics focusing on carbon (Sullivan et al., 2017). This should improve understanding of forest history and current recovery status based on aerial UAV survey, reducing the need to access difficult terrain and to use sampling plots to interpolate over management units.

4.2. Collecting high quality training data with UAVs

The use of UAVs through the entire pipeline allowed us to reduce the need for mapping by hand in the field, and is lower cost than arranging an aerial survey from an occupied aircraft. A key step in our work which adds value is our pipeline for collecting training data. Most comparable methods have manually mapped crowns in the field with great effort (Lisein et al., 2015; Franklin and Ahmed, 2018; Tuominen et al., 2018), or else have made use of existing forest inventories (Alonzo et al., 2018; Fujimoto et al., 2019). In contrast, Gini et al. (2018) worked on the UAV imagery directly, manually labeling individual points on the resulting imagery, but their approach was pixel-based, using a small proportion of the total data for training and testing in an already small study area. Michez et al. (2016) manually delineated whole crowns on orthomosaic imagery; however, this was completed after processing and not compared to any field reference at the time. Our approach combines the best of both by using field-verified UAV training data.

This study describes a clear pipeline for using UAVs to generate reference crown images with attached GPS location metadata. These can then be used in later digitisations on the processed UAV imagery. This not only allows a faster approach to mapping crown locations, but also enables easier access to harder-to-reach areas. The trade-off is a focus on only a few key species of restoration interest, enabling production of occurrence maps for these to guide intervention, as in the rule-based approach of Reis et al. (2019). The process isn't perfect: it only captures crowns visible from above the canopy, trading completeness of sampling for a quick way to construct reference datasets for forest restoration projects such as our study site of Hutan Harapan. We chose to focus on key species which are indicative of degradation and recovery status. The manual mapping process efficiently built a dataset for the species of interest, which was then used to train a model applied to 100 ha of data: something that isn't commonly possible with other methods. This shows the power of our approach to aid restoration management.

A simple and quickly deployable approach allows collection of data for tree species of interest, which can then be used to develop models that generate heatmaps at the scale of management units (Figure 9) to guide restoration. This approach was tested in the structurally complex and biodiverse tropical forest of Hutan Harapan, a key ecosystem for global conservation.

4.3. Model selection

Support vector machines (SVM) had the highest out-of-sample classification accuracy (Figure 7A) and were used for the landscape mapping, ensuring that the output maps of forest status are reliable and of use to project managers. Lasso regression and random forests had lower accuracy. The superpixel boundaries generated using SLIC generally follow image contrast boundaries, which naturally occur within crowns owing to their different components (leaves and branches). This leads to locally homogeneous superpixels in which the variation within a crown is split apart, diluting the boundaries between species. Correct labeling of superpixels requires adaptability to this structure without overfitting species boundaries. This is presumably why the SVM did best: it uses a non-linear kernel for boundaries whilst regularizing to control overfitting. In our study, lasso regression was not flexible enough to adapt to this complex problem, and random forests, being less regularized, struggled with the intra-species variability of superpixels and overfitted. This is also consistent with our observation that the early-successional species have much more distinct textures than Alstonia scholaris and Endospermum malaccense.

4.4. Value of structural information and multispectral imagery

Unsurprisingly, progressively adding more imagery generally improved the accuracy of the resulting models (Figure 7B). However, this improvement must be weighed against the additional cost and complexity of acquiring and processing these data.

The DSM was derived from the RGB imagery and provides additional structural information. It therefore did not increase the cost of data acquisition, but generating the DSM was computationally intensive. Because the DSM provided only a marginal increase in accuracy, it was not worth the additional complexity.

The MS data provide two additional spectral bands in which vegetation reflects a greater proportion of incident light. We collected these data in a separate UAV survey with a multispectral camera, which added substantial cost to data collection, and co-locating the RGB and MS imagery added complexity to the processing pipeline. The MS data substantially improved the accuracy of the four- and three-species models. This supports findings in existing work, such as Lisein et al. (2015) and Michez et al. (2016), who found models performed best when combining RGB and MS data. The exception to this broad trend was the simplest problem, which removes the distinction of Alstonia scholaris and Endospermum malaccense from 'other' species; here the addition of MS imagery led to slightly lower accuracy on held-back data (Figure 7C). This may be due to greater variation in the MS data, a conclusion supported by our full multiplex analysis in Supplementary Figure S2: models using only spectral features fit the training data better with MS imagery than with RGB, but performed universally worse on held-back superpixels. This suggests that models relying solely on the spectral response from MS imagery may be prone to overfitting.
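
For concreteness, the sketch below computes simple per-superpixel spectral summaries from co-registered RGB and multispectral arrays, including an NDVI-style index from assumed red and near-infrared bands. The band layout, array names, and synthetic data are assumptions for illustration rather than the exact SLIC-UAV feature set.

```python
# Sketch: per-superpixel spectral features from co-registered RGB and multispectral
# arrays. `rgb` is (H, W, 3) and `ms` is (H, W, 2) holding assumed red-edge and NIR
# bands; `segments` holds the SLIC superpixel label of each pixel. Synthetic data
# stand in for real orthomosaics.
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(0)
rgb = rng.random((200, 200, 3))          # placeholder RGB orthomosaic
ms = rng.random((200, 200, 2))           # placeholder [red-edge, NIR] bands
segments = slic(rgb, n_segments=400, compactness=10)

red = rgb[..., 0]
nir = ms[..., 1]
ndvi = (nir - red) / (nir + red + 1e-9)  # simple NDVI-style index

features = []
for label in np.unique(segments):
    mask = segments == label
    row = np.concatenate([
        rgb[mask].mean(axis=0),          # mean R, G, B
        rgb[mask].std(axis=0),           # spread of R, G, B
        ms[mask].mean(axis=0),           # mean red-edge, NIR
        [ndvi[mask].mean()],             # mean vegetation index
    ])
    features.append(row)

X_spectral = np.vstack(features)         # one feature row per superpixel
print(X_spectral.shape)
```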

The MS imagery was processed to correct for illumination; without such correction, performance is reduced, as in Tuominen et al. (2018). More advanced sensors, such as hyperspectral cameras, may further improve results, but they are often prohibitively expensive in this context or must be custom built (Hruska et al., 2012; Colomina and Molina, 2014; Aasen et al., 2015). Multispectral imagery can therefore improve species mapping models, but it should be adopted only after weighing the gain in accuracy against the additional sensor cost for a given application.

4.5. Value of textural and spectral features

Using superpixels allows us to extract textural features based on patterns within each superpixel. Textural features generally discriminated training examples better than spectral features, but combining both was generally best (Figure 7). This pattern held for our two simpler models on held-back data: in these models the early-successional species, with their distinctive textures, become increasingly important, so texture is key, although spectral information still aids discrimination. This was not the case for the model with the most species labels, where spectral data alone performed best and adding textural measures actually reduced performance. This is surprising, and suggests that adding textural features to a purely spectral model blurred the classification boundaries. This presumably arises from confusion between Endospermum malaccense and the early-successional species: the leaf pattern of many E. malaccense crowns has a very 'jagged' appearance, similar to that of Bellucia pentamera, so that while the spectral signatures of the two are quite distinct, superpixels taken from E. malaccense crowns can appear texturally similar to B. pentamera. The problem disappears once the E. malaccense label is removed. With more examples of E. malaccense this boundary should become better defined, since the increased weighting given to this label to balance training examples may contribute to the confusion. Overall, our results justify the inclusion of textural information, with the noted confusion in one case, and emphasize the power of the superpixel approach in taking local patterns as well as spectral responses into consideration.
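
As an illustration of the kind of textural features involved, the sketch below derives gray-level co-occurrence matrix (GLCM) statistics for each SLIC superpixel with scikit-image. It is a simplified, assumed variant of such features, not the project's exact feature set; note that older scikit-image releases name the functions `greycomatrix`/`greycoprops`.

```python
# Sketch: GLCM texture statistics per SLIC superpixel, computed on the grayscale
# bounding box of each segment (a simplification; pixels from neighbouring segments
# inside the box are not masked out). Synthetic data stand in for a real tile.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops  # greycomatrix/greycoprops in older versions
from skimage.segmentation import slic
from skimage.util import img_as_ubyte

rng = np.random.default_rng(1)
rgb = rng.random((200, 200, 3))                 # placeholder RGB tile
gray = img_as_ubyte(rgb2gray(rgb))              # 8-bit grayscale for the GLCM
segments = slic(rgb, n_segments=200, compactness=10)

texture_rows = []
for label in np.unique(segments):
    ys, xs = np.nonzero(segments == label)
    patch = gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture_rows.append([
        graycoprops(glcm, "contrast").mean(),
        graycoprops(glcm, "homogeneity").mean(),
        graycoprops(glcm, "correlation").mean(),
        graycoprops(glcm, "energy").mean(),
    ])

X_texture = np.array(texture_rows)              # one texture row per superpixel
print(X_texture.shape)
```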

The current pipeline does not consider the species assigned to neighboring superpixels, although a strong local correlation would be expected, given that few crowns are as small as our superpixels. A post-processing step that adjusts each label based on the classification confidence of a superpixel relative to its neighbors could improve the robustness of the classification. Such an approach has been shown to improve accuracy by as much as 4% (Tong et al., 2019), but it was not tested in the current study.
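
One simple version of such a post-processing step, sketched below, relabels low-confidence superpixels by a majority vote over spatially adjacent superpixels; this is an illustrative rule under assumed inputs, not the purification method of Tong et al. (2019).

```python
# Sketch: majority-vote smoothing of superpixel labels. `segments` is the SLIC
# label image, `pred` maps each superpixel id to a predicted species, and `conf`
# maps it to the classifier's confidence; all three are placeholders here.
from collections import Counter, defaultdict

import numpy as np


def smooth_labels(segments, pred, conf, threshold=0.5):
    # Build superpixel adjacency from horizontally/vertically touching pixels.
    neighbors = defaultdict(set)
    for a, b in [(segments[:, :-1], segments[:, 1:]),
                 (segments[:-1, :], segments[1:, :])]:
        diff = a != b
        for u, v in zip(a[diff], b[diff]):
            neighbors[u].add(v)
            neighbors[v].add(u)

    smoothed = dict(pred)
    for sp in pred:
        if conf[sp] >= threshold or not neighbors[sp]:
            continue  # keep confident (or isolated) superpixels as they are
        votes = Counter(pred[n] for n in neighbors[sp])
        smoothed[sp] = votes.most_common(1)[0][0]
    return smoothed


# Tiny worked example: a 4x4 label image with three superpixels.
segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [2, 2, 1, 1],
                     [2, 2, 1, 1]])
pred = {0: "B. pentamera", 1: "B. pentamera", 2: "M. gigantea"}
conf = {0: 0.9, 1: 0.8, 2: 0.3}
print(smooth_labels(segments, pred, conf))  # superpixel 2 adopts its neighbors' label
```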

The classification accuracy of the long-lived species with SLIC-UAV was hindered by their visual similarity. These finer differences may be better captured by bespoke features rather than our generic features drawn from existing image-analysis approaches. Convolutional neural networks (CNNs) are an alternative that could expand the available features and improve discrimination of these species, because they learn complex features fully automatically. As a trade-off, such approaches normally require more training data in which all trees are labeled within a regular input shape; this was not possible in the field and is usually achieved by manually annotating images, which can be highly accurate but cannot be independently validated by local experts in the field. Using this approach, Fujimoto et al. (2019) extracted cedar and cypress crowns in a Japanese forest with an automated method, then used neural networks to classify standardized images using only the equivalent of our DSM imagery, achieving an accuracy of 83.6%, much higher than any approach using only DSM imagery in Supplementary Figure S2. Kattenborn et al. (2019) used neural networks to map two species from RGB UAV imagery across a successional gradient in Chile, with accuracies of 87% and 84%. They extended this work in Kattenborn et al. (2020) to instead map percentage species cover in grid cells, which may be a more computationally feasible way to build maps to guide management. SLIC-UAV compares favorably with these accuracies, but they demonstrate the power of methods that require no explicitly computed features. A future step for SLIC-UAV could be to train a neural network to separate the hardest-to-distinguish species and to use it to construct or learn features that support this distinction.
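
To indicate what such a network might look like, the sketch below defines a small convolutional classifier for fixed-size crown patches in PyTorch. The input size, architecture, and class count are assumptions for illustration; no such model was trained in this study.

```python
# Sketch: a small CNN for classifying fixed-size crown patches (assumed 64x64 RGB)
# into four species classes. Architecture and sizes are illustrative only.
import torch
import torch.nn as nn


class CrownPatchCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))


model = CrownPatchCNN()
dummy_batch = torch.randn(8, 3, 64, 64)   # eight placeholder 64x64 RGB patches
logits = model(dummy_batch)
print(logits.shape)                       # torch.Size([8, 4])
```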

5. Conclusions

Our approach to automated species mapping scales up local expertise, helping restoration project managers monitor progress and plan interventions. The SLIC-UAV pipeline centers on UAVs, which are affordable within the budget of most projects, and the data they collect offer better temporal and spatial resolution than other remote sensing approaches. This study demonstrates that a single UAV with an operator team of two or three people can map up to 100 ha of land per day. This scalability and ease of deployment can drastically improve the efficiency of human effort in restoration projects. Our approach is particularly valuable for mapping early-successional species of management concern. Within Harapan there are already projects exploring the benefit to forest health of selectively logging such species (Swinfield et al., 2016), and our approach will allow more targeted application of such strategies across the landscape. This study also describes how to collect reference crown location data with UAVs, making use of their high spatial detail combined with GPS information. With local experts it is possible to rapidly collect information on any species of interest, although we focused on four species of primary restoration interest. The approach could be extended to other species, especially those of high value for biodiversity or for non-timber forest products, although this would need validation within the same framework before deployment. It is important to note that the early-successional species in our models were the more visually distinct, so performance on Alstonia scholaris and Endospermum malaccense is more indicative of the expected performance for later-successional species. Overall, our work demonstrates and assesses a full pipeline from field mapping to management-scale occurrence mapping (Figure 9), showing the power of this low-cost, easily implemented approach to aid restoration project management.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://github.com/jonvw28/SLICUAV.

Author contributions

JW developed the analysis pipeline, collected the field data with the help of local collaborators, analyzed the data, wrote the code, and drafted the paper. BI, EA, and MZ guided the conception of the project, developed the plan for the field data collection, and facilitated the international collaboration. HH and EG oversaw and facilitated the field data collection and supported all work at Hutan Harapan. DC, C-BS, and TS wrote the grant proposals that supported the study and formed a Ph.D. supervisory team that provided advice and contributed to writing and editing the manuscript. TJ revised the manuscript and figures. All authors contributed to the article and approved the submitted version.

Funding

This project was primarily supported by a NERC CASE studentship partnered with Royal Society for the Protection of Birds (RSPB) [NE/N008952/1] and a grant from the Cambridge Conservation Initiative Collaborative Fund supporting interdisciplinary research between the University of Cambridge and the Royal Society for the Protection of Birds. The field season in 2018 was further supported by a Mark Pryor Grant at Trinity College, Cambridge. C-BS acknowledges support from the Leverhulme Trust project on Breaking the non-convexity barrier, the Philip Leverhulme Prize, the EPSRC grant [EP/T003553/1], the RISE projects CHiPS and NoMADS, the Cantab Capital Institute for the Mathematics of Information and the Alan Turing Institute. DC was supported by an International Academic Fellowship from the Leverhulme Trust. TJ and DC were supported by the Natural Environment Research Council (NE/S010750/1).

Acknowledgments

We thank Rhett Harrison for his significant input into grant writing. We are grateful to Dr. Tuomo Valkonen, whose early attempt to classify species without delineating trees was unsuccessful but paved the way for the development of more sophisticated approaches. We wish to thank all partners at Hutan Harapan for their help with managing the UAV and tree data collection at Hutan Harapan. We particularly wish to thank Adi, Agustiono, and Dika for their support with UAV flying and data collection. We are also very grateful for the support from members of Universitas Jambi who supported the logistics of our collaboration.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/ffgc.2022.876448/full#supplementary-material

References

Aasen, H., Burkart, A., Bolten, A., and Bareth, G. (2015). Generating 3D hyperspectral information with lightweight UAV snapshot cameras for vegetation monitoring: from camera calibration to quality assurance. ISPRS J. Photogram. Remote Sens. 108, 245–259. doi: 10.1016/j.isprsjprs.2015.08.002

Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., and Süsstrunk, S. (2010). SLIC superpixels. Technical Report no. 149300, EPFL, Lausanne, Switzerland.

Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., and Süsstrunk, S. (2012). SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 34, 2274–2282. doi: 10.1109/TPAMI.2012.120

Adelabu, S., Mutanga, O., Adam, E. E., and Cho, M. A. (2013). Exploiting machine learning algorithms for tree species classification in a semiarid woodland using RapidEye image. J. Appl. Remote Sens. 7, 1–14. doi: 10.1117/1.JRS.7.073480

Aerts, R., and Honnay, O. (2011). Forest restoration, biodiversity and ecosystem functioning. BMC Ecol. 11, 29. doi: 10.1186/1472-6785-11-29

Alonzo, M., Andersen, H.-E., Morton, D. C., and Cook, B. D. (2018). Quantifying boreal forest structure and composition using UAV structure from motion. Forests 9, 119. doi: 10.3390/f9030119

Alonzo, M., Roth, K., and Roberts, D. (2013). Identifying Santa Barbara's urban tree species from AVIRIS imagery using canonical discriminant analysis. Remote Sens. Lett. 4, 513–521. doi: 10.1080/2150704X.2013.764027

Anderson, K., and Gaston, K. J. (2013). Lightweight unmanned aerial vehicles will revolutionize spatial ecology. Front. Ecol. Environ. 11, 138–146. doi: 10.1890/120150

Ansell, F. A., Edwards, D. P., and Hamer, K. C. (2011). Rehabilitation of logged rain forests: avifaunal composition, habitat structure, and implications for biodiversity-friendly REDD+. Biotropica 43, 504–511. doi: 10.1111/j.1744-7429.2010.00725.x

Apostol, B., Petrila, M., Lorenţ, A., Ciceu, A., Gancz, V., and Badea, O. (2020). Species discrimination and individual tree detection for predicting main dendrometric characteristics in mixed temperate forests by use of airborne laser scanning and ultra-high-resolution imagery. Sci. Total Environ. 698, 134074. doi: 10.1016/j.scitotenv.2019.134074

ArduPilot Dev Team (2017). Mission Planner. Canberra, ACT: ArduPilot Dev Team.

Asner, G. P., and Martin, R. E. (2009). Airborne spectranomics: mapping canopy chemical and taxonomic diversity in tropical forests. Front. Ecol. Environ. 7, 269–276. doi: 10.1890/070152

Asner, G. P., Powell, G. V. N., Mascaro, J., Knapp, D. E., Clark, J. K., Jacobson, J., et al. (2010). High-resolution forest carbon stocks and emissions in the Amazon. Proc. Natl. Acad. Sci. U.S.A. 107, 16738–16742. doi: 10.1073/pnas.1004875107

Ballanti, L., Blesius, L., Hines, E., and Kruse, B. (2016). Tree species classification using hyperspectral imagery: a comparison of two classifiers. Remote Sens. 8, 445. doi: 10.3390/rs8060445

Bastin, J.-F., Finegold, Y., Garcia, C., Mollicone, D., Rezende, M., Routh, D., et al. (2019). The global tree restoration potential. Science 365, 76–79. doi: 10.1126/science.aax0848

Bergseng, E., Ørka, H. O., Næsset, E., and Gobakken, T. (2015). Assessing forest inventory information obtained from different inventory approaches and remote sensing data sources. Ann. For. Sci. 72, 33–45. doi: 10.1007/s13595-014-0389-x

Bernal, B., Murray, L. T., and Pearson, T. R. (2018). Global carbon dioxide removal rates from forest landscape restoration activities. Carbon. Balance Manag. 13, 1–13. doi: 10.1186/s13021-018-0110-8

Bush, A., Sollmann, R., Wilting, A., Bohmann, K., Cole, B., Balzter, H., et al. (2017). Connecting earth observation to high-throughput biodiversity data. Nat. Ecol. Evolut. 1, 176. doi: 10.1038/s41559-017-0176

Carleer, A., and Wolff, E. (2004). Exploitation of very high resolution satellite data for tree species identification. Photogram. Eng. Remote Sens. 70, 135–140. doi: 10.14358/PERS.70.1.135

Cerullo, G. R., and Edwards, D. P. (2019). Actively restoring resilience in selectively logged tropical forests. J. Appl. Ecol. 56, 107–118. doi: 10.1111/1365-2664.13262

Chave, J., Réjou-Méchain, M., Búrquez, A., Chidumayo, E., Colgan, M. S., Delitti, W. B. C., et al. (2014). Improved allometric models to estimate the aboveground biomass of tropical trees. Glob. Chang Biol. 20, 3177–3190. doi: 10.1111/gcb.12629

Colkesen, I., and Kavzoglu, T. (2018). Selection of optimal object features in object-based image analysis using filter-based algorithms. J. Indian Soc. Remote Sens. 46, 1233–1242. doi: 10.1007/s12524-018-0807-x

Colomina, I., and Molina, P. (2014). Unmanned aerial systems for photogrammetry and remote sensing: a review. ISPRS J. Photogram. Remote Sens. 92, 79–97. doi: 10.1016/j.isprsjprs.2014.02.013

Cortes, C., and Vapnik, V. (1995). Support-vector networks. Mach. Learn. 20, 273–297. doi: 10.1007/BF00994018

Crouzeilles, R., Ferreira, M. S., Chazdon, R. L., Lindenmayer, D. B., Sansevero, J. B. B., Monteiro, L., et al. (2017). Ecological restoration success is higher for natural regeneration than for active restoration in tropical forests. Sci. Adv. 3, e1701345. doi: 10.1126/sciadv.1701345

Dalponte, M., Ørka, H. O., Ene, L. T., Gobakken, T., and Næsset, E. (2014). Tree crown delineation and tree species classification in boreal forests using hyperspectral and ALS data. Remote Sens. Environ. 140, 306–317. doi: 10.1016/j.rse.2013.09.006

Dandois, J. P., and Ellis, E. C. (2013). High spatial resolution three-dimensional mapping of vegetation spectral dynamics using computer vision. Remote Sens. Environ. 136, 259–276. doi: 10.1016/j.rse.2013.04.005

de Almeida, D. R. A., Stark, S. C., Valbuena, R., Broadbent, E. N., Silva, T. S. F., de Resende, A. F., et al. (2020). A new era in forest restoration monitoring. Restorat. Ecol. 28, 8–11. doi: 10.1111/rec.13067

de Kok, R. P., Briggs, M., Pirnanda, D., and Girmansyah, D. (2015). Identifying targets for plant conservation in Harapan Rainforest, Sumatra. Trop. Conservat. Sci. 8, 28–32. doi: 10.1177/194008291500800105

Dillis, C., Marshall, A. J., Webb, C. O., and Grote, M. N. (2018). Prolific fruit output by the invasive tree Bellucia pentamera Naudin (Melastomataceae) is enhanced by selective logging disturbance. Biotropica 50, 598–605. doi: 10.1111/btp.12545

Duffy, J. E. (2009). Why biodiversity is important to the functioning of real-world ecosystems. Front. Ecol. Environ. 7, 70195. doi: 10.1890/070195

Edwards, D. P., Tobias, J. A., Sheil, D., Meijaard, E., and Laurance, W. F. (2014). Maintaining ecosystem function and services in logged tropical forests. Trends Ecol. Evolut. 29, 511–520. doi: 10.1016/j.tree.2014.07.003

Fassnacht, F. E., Latifi, H., Stereńczak, K., Modzelewska, A., Lefsky, M., Waser, L. T., et al. (2016). Review of studies on tree species classification from remotely sensed data. Remote Sens. Environ. 186, 64–87. doi: 10.1016/j.rse.2016.08.013

Feduck, C., McDermid, G. J., and Castilla, G. (2018). Detection of coniferous seedlings in UAV imagery. Forests 9, 432. doi: 10.3390/f9070432

Franklin, S. E., and Ahmed, O. S. (2018). Deciduous tree species classification using object-based analysis and machine learning with unmanned aerial vehicle multispectral data. Int. J. Remote Sens. 39, 5236–5245. doi: 10.1080/01431161.2017.1363442

Fuentes-Peailillo, F., Ortega-Farías, S., Rivera, M., Bardeen, M., and Moreno, M. (2018). “Comparison of vegetation indices acquired from RGB and multispectral sensors placed on UAV,” in 2018 IEEE International Conference on Automation/XXIII Congress of the Chilean Association of Automatic Control (ICA-ACCA) (Concepcion: IEEE), 1–6.

Fujimoto, A., Haga, C., Matsui, T., Machimura, T., Hayashi, K., Sugita, S., et al. (2019). An end to end process development for UAV-SfM based forest monitoring: Individual tree detection, species classification and carbon dynamics simulation. Forests 10, 680. doi: 10.3390/f10080680

Giannetti, F., Chirici, G., Gobakken, T., Næsset, E., Travaglini, D., and Puliti, S. (2018). A new approach with DTM-independent metrics for forest growing stock prediction using UAV photogrammetric data. Remote Sens. Environ. 213, 195–205. doi: 10.1016/j.rse.2018.05.016

Gini, R., Sona, G., Ronchetti, G., Passoni, D., and Pinto, L. (2018). Improving tree species classification using UAS multispectral images and texture measures. ISPRS Int. J. Geoinform. 7, 315. doi: 10.3390/ijgi7080315

Goodbody, T. R., Coops, N. C., Hermosilla, T., Tompalski, P., and Crawford, P. (2018a). Assessing the status of forest regeneration using digital aerial photogrammetry and unmanned aerial systems. Int. J. Remote Sens. 39, 5246–5264. doi: 10.1080/01431161.2017.1402387

Goodbody, T. R., Coops, N. C., Hermosilla, T., Tompalski, P., McCartney, G., and MacLean, D. A. (2018b). Digital aerial photogrammetry for assessing cumulative spruce budworm defoliation and enhancing forest inventories at a landscape-level. ISPRS J. Photogram. Remote Sens. 142, 1–11. doi: 10.1016/j.isprsjprs.2018.05.012

Gourlet-Fleury, S., Mortier, F., Fayolle, A., Baya, F., Ouédraogo, D., Bénédet, F., et al. (2013). Tropical forest recovery from logging: a 24 year silvicultural experiment from Central Africa. Philos. Trans. R. Soc. B Biol. Sci. 368, 20120302. doi: 10.1098/rstb.2012.0302

Haralick, R. M., Shanmugam, K., and Dinstein, I. (1973). Textural features for image classification. IEEE Trans. Syst. Man Cybern. 3, 610–621. doi: 10.1109/TSMC.1973.4309314

Harrison, R. D., and Swinfield, T. (2015). Restoration of logged humid tropical forests: an experimental programme at Harapan Rainforest, Indonesia. Trop. Conservat. Sci. 8, 4–16. doi: 10.1177/194008291500800103

He, D.-C., and Wang, L. (1990). Texture unit, texture spectrum, and texture analysis. IEEE Trans. Geosci. Remote Sens. 28, 509–512. doi: 10.1109/TGRS.1990.572934

Ho, T. K. (1995). “Random decision forests,” in Proceedings of 3rd International Conference on Document Analysis and Recognition, Vol. 1 (Montreal, QC), 278–282.

Ho, T. K. (1998). The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell. 20, 832–844. doi: 10.1109/34.709601

Hruska, R., Mitchell, J., Anderson, M., and Glenn, N. F. (2012). Radiometric and geometric analysis of hyperspectral imagery acquired from an unmanned aerial vehicle. Remote Sens. 4, 2736–2752. doi: 10.3390/rs4092736

Iglhaut, J., Cabo, C., Puliti, S., Piermattei, L., O'Connor, J., and Rosette, J. (2019). Structure from motion photogrammetry in forestry: a review. Curr. Forestry Rep. 5, 155–168. doi: 10.1007/s40725-019-00094-3

International Commission on Illumination (2019). Colorimetry–Part 4: CIE1976 L*a*b* Colour Space. Vienna: Standard ISO/CIE 11664-4, 2019 [CIE LEAD], International Commission on Illumination.

Isbell, F., Calcagno, V., Hector, A., Connolly, J., Harpole, W. S., Reich, P. B., et al. (2011). High plant diversity is needed to maintain ecosystem services. Nature 477, 199–202. doi: 10.1038/nature10282

Joppa, L. N., Roberts, D. L., Myers, N., and Pimm, S. L. (2011). Biodiversity hotspots house most undiscovered plant species. Proc. Natl. Acad. Sci. U.S.A. 108, 13171–13176. doi: 10.1073/pnas.1109389108

Kachamba, D. J., Ørka, H. O., Gobakken, T., Eid, T., and Mwase, W. (2016). Biomass estimation using 3D data from unmanned aerial vehicle imagery in a tropical woodland. Remote Sens. 8, 968. doi: 10.3390/rs8110968

Kattenborn, T., Eichel, J., and Fassnacht, F. E. (2019). Convolutional neural networks enable efficient, accurate and fine-grained segmentation of plant species and communities from high-resolution uav imagery. Sci. Rep. 9, 1–9. doi: 10.1038/s41598-019-53797-9

Kattenborn, T., Eichel, J., Wiser, S., Burrows, L., Fassnacht, F. E., and Schmidtlein, S. (2020). Convolutional neural networks accurately predict cover fractions of plant species and communities in unmanned aerial vehicle imagery. Remote Sens. Ecol. Conservat. 6, 472–486. doi: 10.1002/rse2.146

Kitzes, J., and Schricker, L. (2019). The necessity, promise and challenge of automated biodiversity surveys. Environ. Conserv. 46, 247–250. doi: 10.1017/S0376892919000146

Laws, K. I. (1980). “Rapid texture identification,” in Image Processing for Missile Guidance, Vol. 0238, eds T. F. Wiener (Bellingham, WA: SPIE), 376–381.

Lewis, S. L., Wheeler, C. E., Mitchard, E. T. A., and Koch, A. (2019). Restoring natural forests is the best way to remove atmospheric carbon. Nature 568, 25–28. doi: 10.1038/d41586-019-01026-8

Lisein, J., Michez, A., Claessens, H., and Lejeune, P. (2015). Discrimination of deciduous tree species from time series of unmanned aerial system imagery. PLoS ONE 10, e0141006. doi: 10.1371/journal.pone.0141006

López-Granados, F., Torres-Sánchez, J., Jiménez-Brenes, F. M., Arquero, O., Lovera, M., and de Castro, A. I. (2019). An efficient RGB-UAV-based platform for field almond tree phenotyping: 3-D architecture and flowering traits. Plant Methods 15, 1–16. doi: 10.1186/s13007-019-0547-0

Lu, N., Zhou, J., Han, Z., Li, D., Cao, Q., Yao, X., et al. (2019). Improved estimation of aboveground biomass in wheat from RGB imagery and point cloud data acquired with a low-cost unmanned aerial vehicle system. Plant Methods 15, 17. doi: 10.1186/s13007-019-0402-3

Marconi, S., Graves, S. J., Gong, D., Nia, M. S., Le Bras, M., Dorr, B. J., et al. (2019). A data science challenge for converting airborne remote sensing data into ecological information. PeerJ. 6, e5843. doi: 10.7717/peerj.5843

Martin, P. A., Newton, A. C., Pfeifer, M., Khoo, M., and Bullock, J. M. (2015). Impacts of tropical selective logging on carbon storage and tree species richness: a meta-analysis. For. Ecol. Manag. 356, 224–233. doi: 10.1016/j.foreco.2015.07.010

Maschler, J., Atzberger, C., and Immitzer, M. (2018). Individual tree crown segmentation and classification of 13 tree species using airborne hyperspectral data. Remote Sens. 10, 1218. doi: 10.3390/rs10081218

Melo, F. P., Pinto, S. R., Brancalion, P. H., Castro, P. S., Rodrigues, R. R., Aronson, J., et al. (2013). Priority setting for scaling-up tropical forest restoration projects: early lessons from the Atlantic Forest Restoration Pact. Environ. Sci. Policy 33, 395–404. doi: 10.1016/j.envsci.2013.07.013

Messinger, M., Asner, G. P., and Silman, M. (2016). Rapid assessments of Amazon forest structure and biomass using small unmanned aerial systems. Remote Sens. 8, 615. doi: 10.3390/rs8080615

Michez, A., Piégay, H., Lisein, J., Claessens, H., and Lejeune, P. (2016). Classification of riparian forest species and health condition using multi-temporal and hyperspatial imagery from unmanned aerial system. Environ. Monit. Assess 188, 146. doi: 10.1007/s10661-015-4996-2

Myers, N., Mittermeier, R. A., Mittermeier, C. G., da Fonseca, G. A. B., and Kent, J. (2000). Biodiversity hotspots for conservation priorities. Nature 403, 853–858. doi: 10.1038/35002501

Näsi, R., Honkavaara, E., Lyytikäinen-Saarenmaa, P., Blomqvist, M., Litkey, P., Hakala, T., et al. (2015). Using UAV-based photogrammetry and hyperspectral imaging for mapping bark beetle damage at tree-level. Remote Sens. 7, 15467–15493. doi: 10.3390/rs71115467

Ojala, T., Pietikäinen, M., and Harwood, D. (1996). A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 29, 51–59. doi: 10.1016/0031-3203(95)00067-4

Ota, T., Ogawa, M., Shimizu, K., Kajisa, T., Mizoue, N., Yoshida, S., et al. (2015). Aboveground biomass estimation using structure from motion approach with aerial photographs in a seasonal tropical forest. Forests 6, 3882–3898. doi: 10.3390/f6113882

Palmer, M. A., Ambrose, R. F., and Poff, N. L. (1997). Ecological theory and community restoration ecology. Restorat. Ecol. 5, 291–300. doi: 10.1046/j.1526-100X.1997.00543.x

Park, J. Y., Muller-Landau, H. C., Lichstein, J. W., Rifai, S. W., Dandois, J. P., and Bohlman, S. A. (2019). Quantifying leaf phenology of individual trees and species in a tropical forest using unmanned aerial vehicle (UAV) images. Remote Sens. 11, 1534. doi: 10.3390/rs11131534

Petrou, Z. I., Manakos, I., and Stathaki, T. (2015). Remote sensing for biodiversity monitoring: a review of methods for biodiversity indicator extraction and assessment of progress towards international targets. Biodivers Conserv. 24, 2333–2363. doi: 10.1007/s10531-015-0947-z

Puliti, S., Gobakken, T., Ørka, H. O., and Næsset, E. (2017). Assessing 3D point clouds from aerial photographs for species-specific forest inventories. Scand. J. For. Res. 32, 68–79. doi: 10.1080/02827581.2016.1186727

Puliti, S., Solberg, S., and Granhus, A. (2019). Use of UAV photogrammetric data for estimation of biophysical properties in forest stands under regeneration. Remote Sens. 11, 233. doi: 10.3390/rs11030233

Python Core Team (2018). Python: A Dynamic, Open Source Programming Language. Amsterdam: Python Core Team.

QGIS Development Team (2019). QGIS Geographic Information System. Düsseldorf: QGIS Development Team.

R Core Team (2019). R: A Language and Environment for Statistical Computing. Vienna: R Core Team.

Reis, B. P., Martins, S. V., Fernandes Filho, E. I., Sarcinelli, T. S., Gleriani, J. M., Marcatti, G. E., et al. (2019). Management recommendation generation for areas under forest restoration process through images obtained by UAV and LiDAR. Remote Sens. 11, 1508. doi: 10.3390/rs11131508

Ren, X., and Malik, J. (2003). “Learning a classification model for segmentation,” in Proceedings Ninth IEEE International Conference on Computer Vision, Vol. 1 (Nice: IEEE), 10–17.

Reynolds, G., Payne, J., Sinun, W., Mosigil, G., and Walsh, R. P. D. (2011). Changes in forest land use and management in Sabah, Malaysian Borneo, 1990–2010, with a focus on the Danum Valley region. Philos. Trans. R. Soc. B Biol. Sci. 366, 3168–3176. doi: 10.1098/rstb.2011.0154

Rokhmana, C. A. (2015). The potential of UAV-based remote sensing for supporting precision agriculture in Indonesia. Procedia Environ. Sci. 24, 245–253. doi: 10.1016/j.proenv.2015.03.032

Rose, R. A., Byler, D., Eastman, J. R., Fleishman, E., Geller, G., Goetz, S., et al. (2015). Ten ways remote sensing can contribute to conservation. Conservat. Biol. 29, 350–359. doi: 10.1111/cobi.12397

Saari, H., Pellikka, I., Pesonen, L., Tuominen, S., Heikkilä, J., Holmlund, C., et al. (2011). “Unmanned aerial vehicle (UAV) operated spectral camera system for forest and agriculture applications,” in Remote Sensing for Agriculture, Ecosystems, and Hydrology XIII, Vol. 8174 (Prague: International Society for Optics and Photonics), 81740H.

Samiappan, S., Turnage, G., McCraine, C., Skidmore, J., Hathcock, L., and Moorhead, R. (2017). Post-logging estimation of loblolly pine (Pinus taeda) stump size, area and population using imagery from a small unmanned aerial system. Drones 1, 4. doi: 10.3390/drones1010004

Slik, J. W. F. (2009). Plants of Southeast Asia. Available online at: http://asianplant.net/ (accessed April 8, 2020).

Slik, J. W. F., Bernard, C. S., Van Beek, M., Breman, F. C., and Eichhorn, K. A. (2008). Tree diversity, composition, forest structure and aboveground biomass dynamics after single and repeated fire in a Bornean rain forest. Oecologia 158, 579–588. doi: 10.1007/s00442-008-1163-2

Slik, J. W. F., and Eichhorn, K. A. (2003). Fire survival of lowland tropical rain forest trees in relation to stem diameter and topographic position. Oecologia 137, 446–455. doi: 10.1007/s00442-003-1359-4

Slik, J. W. F., Keßler, P. J. A., and van Welzen, P. C. (2003). Macaranga and Mallotus species (Euphorbiaceae) as indicators for disturbance in the mixed lowland dipterocarp forest of East Kalimantan (Indonesia). Ecol. Indic. 2, 311–324. doi: 10.1016/S1470-160X(02)00057-2

Slik, J. W. F., Verburg, R. W., and Keßler, P. J. A. (2002). Effects of fire and selective logging on the tree species composition of lowland dipterocarp forest in East Kalimantan, Indonesia. Biodiversity Conservat. 11, 85–98. doi: 10.1023/A:1014036129075

Smith, A. R. (1978). Color gamut transform pairs. SIGGRAPH Comput. Graph. 12, 12–19. doi: 10.1145/965139.807361

Sullivan, M. J. P., Talbot, J., Lewis, S. L., Phillips, O. L., Qie, L., Begne, S. K., et al. (2017). Diversity and carbon storage across the tropical forest biome. Sci. Rep. 7, 1–12. doi: 10.1038/srep39102

Surový, P., and Kuželka, K. (2019). Acquisition of forest attributes for decision support at the forest enterprise level using remote-sensing techniques–a review. Forests 10, 273. doi: 10.3390/f10030273

Swinfield, T., Afriandi, R., Antoni, F., and Harrison, R. D. (2016). Accelerating tropical forest restoration through the selective removal of pioneer species. For. Ecol. Manag. 381, 209–216. doi: 10.1016/j.foreco.2016.09.020

Thompson, I., Mackey, B., McNulty, S., and Mosseler, A. (2009). Forest resilience, biodiversity, and climate change. Technical report, Secretariat of the Convention on Biological Diversity.

Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B 58, 267–288. doi: 10.1111/j.2517-6161.1996.tb02080.x

Tong, H., Tong, F., Zhou, W., and Zhang, Y. (2019). Purifying SLIC superpixels to optimize superpixel-based classification of high spatial resolution remote sensing image. Remote Sens. 11, 2627. doi: 10.3390/rs11222627

Toth, C., and Józków, G. (2016). Remote sensing platforms and sensors: a survey. ISPRS J. Photogram. Remote Sens. 115, 22–36. doi: 10.1016/j.isprsjprs.2015.10.004

Tuominen, S., Näsi, R., Honkavaara, E., Balazs, A., Hakala, T., Viljanen, N., et al. (2018). Assessment of classifiers and remote sensing features of hyperspectral imagery and stereo-photogrammetric point clouds for recognition of tree species in a forest area of high species diversity. Remote Sens. 10, 714. doi: 10.3390/rs10050714

Turner, W., Spector, S., Gardiner, N., Fladeland, M., Sterling, E., and Steininger, M. (2003). Remote sensing for biodiversity science and conservation. Trends Ecol. Evolut. 18, 306–314. doi: 10.1016/S0169-5347(03)00070-3

van der Walt, S., Schönberger, J. L., Nunez-Iglesias, J., Boulogne, F., Warner, J. D., Yager, N., et al. (2014). scikit-image: image processing in Python. PeerJ. 2, e453. doi: 10.7717/peerj.453

Wheeler, C. E., Omeja, P. A., Chapman, C. A., Glipin, M., Tumwesigye, C., and Lewis, S. L. (2016). Carbon sequestration and biodiversity following 18 years of active tropical forest restoration. For. Ecol. Manag. 373, 44–55. doi: 10.1016/j.foreco.2016.04.025

Wu, Z., Ni, M., Hu, Z., Wang, J., Li, Q., and Wu, G. (2019). Mapping invasive plant with UAV-derived 3D mesh model in mountain area–a case study in Shenzhen Coast, China. Int. J. Appl. Earth Observat. Geoinf. 77, 129–139. doi: 10.1016/j.jag.2018.12.001

Yu, Q., Gong, P., Clinton, N., Biging, G., Kelly, M., and Schirokauer, D. (2006). Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogram. Eng. Remote Sens. 72, 799–811. doi: 10.14358/PERS.72.7.799

Zahawi, R. A., Dandois, J. P., Holl, K. D., Nadwodny, D., Reid, J. L., and Ellis, E. C. (2015). Using lightweight unmanned aerial vehicles to monitor tropical forest recovery. Biol. Conserv. 186, 287–295. doi: 10.1016/j.biocon.2015.03.031

Zhang, C., and Qiu, F. (2012). Mapping individual tree species in an urban forest using airborne lidar data and hyperspectral imagery. Photogram. Eng. Remote Sens. 78, 1079–1087. doi: 10.14358/PERS.78.10.1079

Zhang, K., Lin, S., Ji, Y., Yang, C., Wang, X., Yang, C., et al. (2016). Plant diversity accurately predicts insect diversity in two tropical landscapes. Mol. Ecol. 25, 4407–4419. doi: 10.1111/mec.13770

Zhongming, Z., Linong, L., Xiaona, Y., Wangqiang, Z., and Wei, L. (2021). AR6 Climate Change 2021: The Physical Science Basis.

Keywords: forest restoration, tropical forest recovery, unoccupied aerial vehicles, texture, multispectral imagery, simple linear iterative clustering

Citation: Williams J, Jackson TD, Schönlieb C-B, Swinfield T, Irawan B, Achmad E, Zudhi M, Habibi H, Gemita E and Coomes DA (2022) Monitoring early-successional trees for tropical forest restoration using low-cost UAV-based species classification. Front. For. Glob. Change 5:876448. doi: 10.3389/ffgc.2022.876448

Received: 15 February 2022; Accepted: 28 September 2022;
Published: 14 October 2022.

Edited by:

Hormoz Sohrabi, Tarbiat Modares University, Iran

Reviewed by:

Yan Gao, Universidad Nacional Autonoma de Mexico, Mexico
Ashutosh Bhardwaj, Indian Institute of Remote Sensing, India

Copyright © 2022 Williams, Jackson, Schönlieb, Swinfield, Irawan, Achmad, Zudhi, Habibi, Gemita and Coomes. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: David A. Coomes, dac18@cam.ac.uk
