
ORIGINAL RESEARCH article

Front. Plant Sci., 19 June 2024
Sec. Technical Advances in Plant Science

Quantification of species composition in grass-clover swards using RGB and multispectral UAV imagery and machine learning

  • 1Plant Sciences Unit, Flanders Research Institute for Agriculture, Fisheries and Food (ILVO), Melle, Belgium
  • 2Research Group Plants and Ecosystems (PLECO), Department of Biology, University of Antwerp, Wilrijk, Belgium
  • 3Institute of Agricultural Sciences, Spanish National Research Council (ICA-CSIC), Madrid, Spain

Introduction: Growing grass-legume mixtures for forage production improves both yield productivity and nutritional quality, while also benefiting the environment by promoting species biodiversity and enhancing soil fertility (through nitrogen fixation). Consequently, assessing legume proportions in grass-legume mixed swards is essential for breeding and cultivation. This study introduces an approach for automated classification and mapping of species in mixed grass-clover swards using object-based image analysis (OBIA).

Methods: The OBIA procedure was established for both RGB and ten-band multispectral (MS) images captured by an unmanned aerial vehicle (UAV). The workflow integrated structural (canopy height) and spectral variables (bands, vegetation indices) along with a machine learning algorithm (Random Forest) to perform image segmentation and classification. Spatial k-fold cross-validation was employed to assess accuracy.

Results and discussion: Results demonstrated good performance, achieving an overall accuracy of approximately 70% for both RGB and MS-based imagery, with the grass and clover classes yielding similar F1 scores, exceeding 0.7. The effectiveness of the OBIA procedure and classification was examined by analyzing correlations between predicted clover fractions and dry matter yield (DMY) proportions. This quantification revealed a strong positive relationship, with R2 values exceeding 0.8 for both RGB and MS-based classification outcomes. This indicates the potential of estimating (relative) clover coverage, which could assist breeders as well as farmers in a precision agriculture context.

1 Introduction

In European temperate grasslands, grasses and legumes are frequently cultivated in mixtures (Skovsen et al., 2017), with the main legume species being white clover (Trifolium repens L.) and red clover (Trifolium pratense L.) (Rognli et al., 2021). The main reason is that grass-clover mixtures can perform better than each of the component species in monoculture, in terms of yield and/or nutritional quality (Nyfeler et al., 2011; Fujiwara et al., 2022). Owing to their ability to capture atmospheric nitrogen through a symbiotic relationship with soil bacteria, legumes are generally rich in protein (Sun et al., 2021). A key advantage of incorporating legumes into grassland mixtures stems from the fact that nitrogen fixed by legumes can be transferred to neighboring non-legume plants (e.g. grass) (Pirhofer-Walzl et al., n.d.; Thilakarathna et al., 2016). Establishing grass-legume swards by combining species with varying aboveground characteristics and root architecture (species niche complementarity) offers other benefits, including an increase in biodiversity (Rochon et al., 2004) and other positive consequences for the environment (Lüscher et al., 2014). Such mixtures have demonstrated enhanced resistance to weed invasion when compared to monocultures (Deak et al., 2007). Introducing legumes can also help restore nitrogen levels in the soil, thus reducing the need for fertilizer application (Khatiwada et al., 2020).

A forage with a high clover content can offer a high feed quality (Sun et al., 2021) and can increase fodder intake by livestock (Dewhurst et al., 2009; Mortensen et al., 2017). Consequently, the objective of forage production is to maintain a predetermined proportion of the legume component (Skovsen et al., 2017), while maximizing the annual dry matter yield. However, maintaining such well-balanced grass-legume mixtures over time is challenging (Lüscher et al., 2014) due to various environmental factors (e.g. soil fertility, temperature, soil moisture content, species and cultivar choice) that play a crucial role in the growth and persistence of clover in mixed swards (Botha, 2009), as well as management factors (e.g. cutting regime and nitrogen fertilization) which affect the competition dynamics between grass and clover (Sun et al., 2021). For instance, the grass-clover ratio can be regulated by manipulating the amount of nitrogen fertilizer applied (Skovsen et al., 2017). With ample soil nitrogen availability (high fertilization), the faster-growing grass will outcompete clover. On the contrary, with limited soil-available nitrogen, clover will dominate the sward (Skovsen et al., 2017). An optimal quantity and quality of forage is reached when the clover fraction ranges from 30% to 50% in total dry matter (Botha, 2009). A higher percentage of clover usually leads to a notable decrease in the yield (Khatiwada et al., 2020). Conversely, if clover content falls below 30%, the nutritional value of the feed decreases (Botha, 2009). Therefore, an accurate follow-up of the clover ratio by farmers is essential for targeted management regarding fertilization or reseeding, and to estimate the nutritional value of the forage (Biewer et al., 2009; Himstedt et al., 2010; Skovsen et al., 2017).

A common method used for estimating the clover fraction in grass-clover swards in the context of breeding and research involves destructive sampling: cutting vegetation subsamples, followed by manual separation of the component species (e.g. grasses, legumes etc.), drying and weighing (Himstedt et al., 2010; Mortensen et al., 2017). Due to its labor-intensive and time-consuming nature, this approach is expensive and difficult to employ in practice (Mortensen et al., 2017). A non-destructive and simpler method consists of the visual assessment of species coverage and composition, in the same way as farmers do. Estimating the clover ratio visually comes with challenges. For instance, to achieve 30% of clover in dry matter, the pasture must visually display about 50 to 60% clover coverage, because the observable clover content is typically twice its actual amount (AHDB, 2021). In addition, coarse visual inspections are subjective and prone to inaccuracies (Himstedt et al., 2010). They also do not account for spatial variation, both within a field and across different fields (Skovsen et al., 2017). These limitations have encouraged the advancement of remote sensing (RS) solutions (Li et al., 2021). RS technologies, particularly unmanned aerial vehicles (UAV), offer several advantages in this respect. They are cost-efficient, capable of covering large areas and can capture high-resolution imagery (Deng et al., 2018). These attributes make RS technologies useful for grassland monitoring.

Several RS-based studies that estimate the grass-clover content/ratio in mixed swards through image analysis have been conducted in recent years. The most recent ones focused on the use of deep learning methods for semantic segmentation. For example, Skovsen et al. (2017) trained a fully convolutional network (FCN) to automatically generate a pixel-wise classification of clover, grass, and weeds. Bateman et al. (2020) introduced a new local context network (LC-Net) designed for dense swards and canopies with high occlusion. Sun et al. (2021) utilized not only the FCN architecture but also fine-tuned DeepLab V3+ and SegNet transfer learning methods for clover detection. Kartal (2021) compared thirty different segmentation models built from combinations of three deep learning architectures and ten randomly initialized encoders. Finally, Fujiwara et al. (2022) employed the fine-tuned GoogLeNet model to estimate legume proportion. Overall, these studies showed that the methods applied are useful to determine the clover fraction in mixed grass-clover swards. A shared characteristic across all these studies is their utilization of RGB imagery acquired with close-range remote sensing; either with a camera setup mounted on a pushcart (Skovsen et al., 2017), on a ground-based platform (Bateman et al., 2020), on a UAV flying at an altitude of 4 m (Fujiwara et al., 2022) or using an Apple iPhone SE camera (Sun et al., 2021). As a result, ultra-high-resolution imagery was captured, revealing fine details of the canopy cover, such as individual grass and clover leaves. Acquiring such high-resolution images and then analyzing them using deep learning techniques comes with challenges, including high computational cost (Justus et al., 2018), extensive manual labeling (Skovsen et al., 2017), and limited field coverage. To address these challenges, in this study we used UAV imagery captured at higher altitudes, thereby facilitating increased spatial coverage up to the field level. Furthermore, we explored the Object-Based Image Analysis (OBIA) approach to mitigate computational costs and reduce manual labeling efforts, thus potentially offering a simplified procedure for the end user while still achieving satisfactory results.

In a traditional pixel-based approach, each pixel is classified separately, and the classification procedure predominantly relies on spectral properties (Blaschke et al., 2014; Zou and Greenberg, 2019). In contrast, OBIA operates at the ‘object level’ (Blaschke, 2010). OBIA segments an image by grouping pixels into non-overlapping objects with a meaningful representation (Blaschke, 2010; Grippa et al., 2017; Hossain and Chen, 2019) and then classifies each object (Blaschke, 2010). This approach was designed primarily to analyze high-spatial resolution imagery by incorporating spectral, shape and textural characteristics as well as spatial patterns (Lang et al., 2009), but is considered an efficient tool for classifying remotely sensed imagery (Lu and He, 2018). Clustering pixels into image objects also aims to overcome the ‘salt and pepper effect’ (Blaschke, 2010). The primary objective of the study presented here was to automatically detect and quantify the percentage of component species in mixed grassland swards using UAV-derived imagery and an OBIA approach. To analyze the effectiveness of the applied method we investigated the relationship between the clover fraction determined using the generated OBIA classification maps and the relative proportion of clover in terms of harvestable dry matter yield determined with destructive methods.

2 Materials and methods

2.1 Study site

The study was carried out in the municipality of Merelbeke in Belgium (50.98° N, 3.79° E; Figure 1), using a trial established to test and compare the performance of various species mixtures when mown frequently (five cuts per year). The field trial was sown in September 2020 with nine different combinations of the following species: perennial ryegrass (Lolium perenne L., R), tall fescue (Festuca arundinacea Schreb., F), white clover (Trifolium repens L., WC), red clover (Trifolium pratense L., RC), bird's-foot trefoil (Lotus corniculatus L., T), common sainfoin (Onobrychis viciifolia Scop., S), narrowleaf plantain (Plantago lanceolata L., P) and common chicory (Cichorium intybus L., C). In addition, each species was sown separately in border rows. The mixtures were arranged in a randomized block design with four replicates (A, B, C, D), resulting in a total of 44 plots (2.5 × 6 m).


Figure 1 Study site in Merelbeke with a field trial featuring different mixtures of species. Letters A, B, C, and D represent four replicates. Mixture 2: R + WC + RC; mixture 3: R + WC + RC + P. Imagery captured with a UAV-based RGB sensor (collected in May 2021) was set as the base image.

To address the objectives of this study, only two different combinations (8 plots in total) were selected for further analysis: mixture 2 (grass and clover) and mixture 3 (grass, clover and plantain), marked in yellow in Figure 1. Such sward mixtures are commonly grown for forage production in Belgium. The remaining mixtures were not considered in this investigation as this would greatly increase the complexity of the analysis, which was not the aim of this study.

2.2 Field sampling and data acquisition

For this study, data was collected at the beginning of May (spring cut) of 2021. Field sampling and data acquisition can be divided into five steps (Figure 2): (1) UAV-based image collection, (2) reference photo capturing, (3) biomass harvesting, (4) manual separation of component species and (5) weighing and drying of sorted plant material. All the steps are described in detail below.


Figure 2 Stages of field sampling and data collection carried out in 2021 at the study site.

The first step in the procedure was to perform two drone flights. We utilized a DJI Matrice 600 Pro (DJI, Shenzhen, China) platform and two different sensors: (a) a standard digital RGB camera (α6000, Sony Corporation, Tokyo, Japan) and (b) a multispectral (MS) camera (Dual Camera System, MicaSense, Seattle, USA) with the following ten bands: coastal blue (444 nm), blue (475 nm), green (531 nm), green (560 nm), red (650 nm), red (668 nm), red edge (705 nm), red edge (717 nm), red edge (740 nm), and near-infrared (840 nm). The flight mission was carried out on 6 May around solar noon (2 p.m.). The UAV with each sensor was operated along a pre-defined route and at different altitudes above ground level. For the RGB camera, the flight altitude was set to 18 m (the lowest possible), as descending further could disrupt the canopy due to the air turbulence generated by the heavy M600 drone propellers. With the MicaSense multispectral sensor, we followed the recommended minimum flying altitude of 30 m. Different sensor parameters and flight heights resulted in distinct spatial resolutions, with the RGB and MS imagery achieving pixel sizes of 2 mm and 2 cm, respectively. Once the UAV flights were completed, we started a field sampling campaign. The non-destructive and destructive measurements performed in the next steps are both time-consuming and labor-intensive. Therefore, a limited number of subplots was selected within the grass-clover and grass-clover-plantain mixtures (a total of 10 subplots, marked in blue in Figure 1). A metal frame (0.55 × 0.55 m in size) and plastic markers were used to define the borders and mark the location of the subplots. First, we captured reference images at ground level (around 1.2 m height) using a tripod and a consumer-grade RGB camera (D90, Nikon Corporation, Tokyo, Japan). In the next step, we harvested all aboveground biomass (green vegetation) within the metal frame to a height of 5 cm above soil level. Shortly after the cut, the collected biomass samples were manually separated and sorted into four classes: grass, clover, plantain and weeds. The sorted biomass fractions were oven-dried at 70°C for a minimum of 72 h and weighed. In the last step, the relative proportion (weight-%) of harvestable dry matter yield (DMY) of each fraction relative to the total DMY was calculated.

2.3 Image processing and image analysis – workflow

A graphical workflow (Figure 3) represents the key steps of the processing chain, including image processing, calculation of different indices, extraction of information, sampling procedure, segmentation, image classification and performance evaluation, which are described in more detail below.


Figure 3 Graphical workflow illustrating the main stages of the processing chain, including image processing, segmentation, image classification and performance assessment (*HIS: Hue, Intensity, and Saturation; ** raster stack of spectral bands and/or vegetation indices).

2.3.1 Image processing

Images acquired with the UAV-based sensors were photogrammetrically processed using Agisoft Metashape Professional (Agisoft LLC, St. Petersburg, Russia) and Pix4DMapper v4.5.6 (Pix4D S.A., Prilly, Switzerland) for the RGB and the multispectral camera, respectively. Both software packages are commonly used for aerial imagery processing and both apply the structure from motion (SfM) technique to produce accurate, georeferenced maps and 3D models. The main steps in the image processing procedure for both sensors (and software packages) are similar. They were adopted from our previous study (Pranga et al., 2021), so further details on selected options and parameter settings can be found there. In this study, six Ground Control Points (GCPs) were evenly spread across the field and utilized for precise georeferencing. The geographic coordinates of the GCPs were measured on-site with an RTK GPS (Stonex S10 GNSS, Stonex SRL, Italy).

2.3.2 Image post-processing

As two different sensors were used in this study, the Canopy Height Model (CHM) was computed separately for RGB (CHMRGB) and multispectral (CHMMS) imagery. The CHM was calculated by subtracting the Digital Terrain Model (DTM) from the Digital Surface Model (DSM) at the pixel level. To compute the DTM, we applied a Triangulated Irregular Network (TIN) interpolation tool. Here, we measured 20 ground points, evenly spread across the study site, with the same RTK GPS.
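As an illustration, the per-pixel subtraction can be expressed in a few lines of Python. This is a minimal sketch assuming both rasters have already been aligned to the same grid; the file names are hypothetical.

```python
# Minimal sketch: CHM = DSM - DTM, computed pixel-wise with rasterio/numpy.
import numpy as np
import rasterio

with rasterio.open("dsm.tif") as dsm_src, rasterio.open("dtm.tif") as dtm_src:
    dsm = dsm_src.read(1).astype("float32")
    dtm = dtm_src.read(1).astype("float32")
    profile = dsm_src.profile  # reuse the georeferencing of the DSM

chm = np.clip(dsm - dtm, 0, None)  # clip small negative interpolation artifacts

profile.update(dtype="float32", count=1)
with rasterio.open("chm.tif", "w", **profile) as dst:
    dst.write(chm, 1)
```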

The RGB orthomosaic created in the previous step was further transformed into hue (H), intensity (I), and saturation (S) color space using the i.rgb.his tool of GRASS GIS. Similarly, vegetation indices (VIs) were calculated for the RGB and MS imagery. Vegetation indices are relatively simple but powerful features that can help in quantitative and qualitative vegetation monitoring and assessment (Xue and Su, 2017). Four VIs were selected for the RGB imagery: (Normalized) Excess Green (ExG), (Normalized) Excess Red (ExR), Excess Green - Excess Red (ExGR), and Normalized Green-Red Difference Index (NGRDI). The MS sensor, with additional data from the red-edge and near-infrared part of the spectrum, provides more possibilities for index calculations. Here, we selected eight indices: Chlorophyll Index Green (Clg), Enhanced Vegetation Index (EVI), Green Atmospherically Resistant Vegetation Index (GARI), Green Normalized Difference Vegetation Index (GNDVI), Modified Chlorophyll Absorption in Reflectance Index (MCARI), Modified Soil-Adjusted Vegetation Index (MSAVI), Normalized Difference Vegetation Index (NDVI), and Simple Ratio (SR) with the red-edge band. All vegetation indices were calculated using the equations listed in our previous study (Pranga et al., 2021). The selection of VIs was based on their general applicability in vegetation monitoring.
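For reference, a sketch of the four RGB indices using their common textbook definitions is shown below; the exact equations used in the study follow Pranga et al. (2021), so this is an approximation for illustration.

```python
# Hedged sketch of the four RGB vegetation indices (textbook definitions).
# R, G, B are float arrays, e.g. bands read from the orthomosaic.
import numpy as np

def rgb_indices(R, G, B, eps=1e-9):
    total = R + G + B + eps
    r, g, b = R / total, G / total, B / total  # normalized chromatic coordinates
    exg = 2 * g - r - b                        # Excess Green
    exr = 1.4 * r - g                          # Excess Red
    exgr = exg - exr                           # Excess Green minus Excess Red
    ngrdi = (G - R) / (G + R + eps)            # Normalized Green-Red Difference Index
    return exg, exr, exgr, ngrdi
```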

A set of further processing steps was applied to the acquired rasters. As the digital surface models and maps were generated with different pixel sizes, we started with the raster alignment tool. Here, rasters were resampled to the same cell size and offset in the grid using the nearest neighbor resampling method. The output pixel size was approximately 2 mm and 2 cm for the RGB-based and MS-based imagery, respectively. All RGB-based rasters (RGB + HIS + CHMRGB + VIs) were mosaicked into one raster stack in the next step. The same procedure was repeated for the MS-based rasters (10 spectral bands + CHMMS + VIs). To perform all these calculations, we used the open-source QGIS 3.22.8 with GRASS 7.8.3 software (QGIS Geographic Information System, QGIS Development Team, Open Source Geospatial Foundation). As the procedure had to be repeated for different sensors, we used the integrated Python console for scripting within the QGIS software (Python version 3.9).
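Outside of QGIS, the same alignment step can be sketched with rasterio; the following snippet, with hypothetical file names, resamples one raster onto the grid of a reference raster using nearest-neighbor resampling.

```python
# Minimal sketch: align a raster to a reference grid (nearest-neighbor).
import numpy as np
import rasterio
from rasterio.warp import Resampling, reproject

with rasterio.open("reference.tif") as ref, rasterio.open("input.tif") as src:
    aligned = np.empty((ref.height, ref.width), dtype="float32")
    reproject(
        source=rasterio.band(src, 1),
        destination=aligned,
        dst_transform=ref.transform,    # adopt the reference grid and offset
        dst_crs=ref.crs,
        resampling=Resampling.nearest,  # preserves the original cell values
    )
```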

2.3.3 Object-Based Image Analysis

We can divide the OBIA procedure into two main stages: 1) image segmentation and 2) feature extraction and classification.

2.3.3.1 Image segmentation

Image segmentation is the first and key step in OBIA (Lu and He, 2018). During this step, disjoint, spatially continuous and homogenous segments, also known as objects, are generated (Blaschke et al., 2014). The essential objective here is to cluster individual pixels from any raster or image into meaningful objects that should match, as much as possible, the real-world objects (Grippa et al., 2017). A segmentation procedure aims at producing internally homogeneous segments (intra-segment homogeneity) that are distinct from their neighbors (inter-segment heterogeneity) (Espindola et al., 2006). Finding the balance between this intra-segment homogeneity and inter-segment heterogeneity is crucial as it affects the segmentation quality (Grippa et al., 2017; Wijesingha et al., 2020).

In this study, the image segmentation procedure was performed using the open-source project Orfeo ToolBox (OTB) (Grizonnet et al., 2017), which was developed by the Centre National d’Etudes Spatiales (CNES) in France. OTB incorporates many ready-to-use tools for remote sensing (RS) tasks (OTB, 2018), including OBIA-related ones. To create automated processing scripts, these tools were applied through the QGIS software with its integrated Python console (Python version 3.9). Image segmentation was performed using the mean-shift algorithm (Michel et al., 2015) available in the OTB platform, with computation implemented on a tile-wise basis. This way, we could limit memory usage and apply the method to very high-resolution (VHR) imagery (Michel et al., 2015; OTB, 2018). In the mean-shift algorithm, a moving window average is used to group pixels that are close in the spatial and spectral domain into segments (Comaniciu and Meer, 2002; Hossain and Chen, 2019).

Three key parameters must be set within the OTB mean-shift algorithm: spatial radius (spatialr), range radius (ranger), and minimum region size (minsize) (Teodoro and Araujo, 2016; Varo-Martínez et al., 2017). The first parameter defines the radius of the spatial neighborhood used for averaging. The range radius determines the interval in the spectral space (expressed in radiometry units); pixels whose spectral distance is smaller than the given parameter value will be grouped into the same image object. The last parameter defines the minimum size of a segment (in pixels) to be kept after clustering. Smaller image segments will be merged with the neighboring segment that has the closest spectral signature (Teodoro and Araujo, 2016; OTB, 2018; De Luca et al., 2019). The selection of segmentation parameters is a crucial step in the OBIA procedure (Neubert et al., 2008; Grippa et al., 2017), as they control the segmentation quality (Espindola et al., 2006). The most common methods for parameter selection and segmentation evaluation are based on visual and expert interpretations (Zhang et al., 2008; Durgan et al., 2020). Here, a trial-and-error visual approach with a gradual (step-by-step) adaptation of segmentation parameters was implemented. The ranger parameter was adapted to the sensor used to obtain the imagery: pixel values in RGB imagery are recorded as digital numbers (DN), where each band is encoded in the range of 0 to 255, whereas the MS sensor produces a reflectance map, where each pixel indicates the reflectance of the object (values between 0 and 1). The minsize parameter was selected based on the geographical context (the smallest patch of grass or clover) and the spatial resolution of the analyzed imagery (lower resolution corresponds to a lower parameter value) (Table 1). It has been suggested to set, as extremes, the range of parameter values that result in under- and over-segmented outputs (Grippa et al., 2017).
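The study ran these tools through the QGIS processing framework; a roughly equivalent call via the OTB Python bindings might look like the sketch below. The application and parameter names follow OTB's tile-wise LargeScaleMeanShift application, and the file names and parameter values (one of the RGB combinations from Table 1) are illustrative assumptions.

```python
# Hedged sketch: tile-wise mean-shift segmentation with the OTB Python API.
import otbApplication

app = otbApplication.Registry.CreateApplication("LargeScaleMeanShift")
app.SetParameterString("in", "rgb_stack.tif")   # hypothetical raster stack
app.SetParameterInt("spatialr", 10)             # spatial neighborhood radius
app.SetParameterFloat("ranger", 10.0)           # spectral range radius
app.SetParameterInt("minsize", 30)              # minimum segment size (pixels)
app.SetParameterInt("tilesizex", 1024)          # tile-wise processing to
app.SetParameterInt("tilesizey", 1024)          # limit memory usage
app.SetParameterString("mode.vector.out", "segments.gpkg")
app.ExecuteAndWriteOutput()
```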


Table 1 The list of spatialr, ranger and minsize parameter combinations tested during the segmentation procedure using RGB and MS imagery.

The segmentation procedure was carried out using either the RGB-based (RGB + HIS + CHMRGB) or the MS-based (10 spectral bands + CHMMS) images mosaicked into a raster stack (Figure 3; Table 2). As a result, a segmented vector layer was generated.


Table 2 Variables used in the segmentation and/or classification procedures, with descriptions.

2.3.3.2 Feature extraction

Before proceeding with image classification, an important step of feature extraction must be completed, as it builds a learning database. In this step, vector layers, generated during the segmentation stage (representing image objects), were used to compute zonal statistics from multiband raster layers. For this purpose, the Zonal Statistics tool from the OTB toolbox was applied. Both spectral and structural features were extracted and used further as predictor variables in the classification process (Table 2). The mean and standard deviation statistics were computed for each polygon. Segmented vectors were also used to calculate various shape indices (SI) (Table 2), mainly based on area, perimeter and maximum diameter values. To calculate SI, the Polygon Shape Indices tool from the SAGA toolbox was applied (Conrad et al., 2015). A recent study by Lam et al. (2021) has shown that selected shape indices are useful in distinguishing small-leaved species from other plant species.
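As an aside, this zonal-statistics step can be reproduced in Python; the sketch below uses the rasterstats package rather than the OTB tool used in the study, and the file names, band count, and column names are illustrative assumptions.

```python
# Hedged sketch: per-segment mean and standard deviation from a raster stack.
import geopandas as gpd
from rasterstats import zonal_stats

segments = gpd.read_file("segments.gpkg")   # hypothetical segmentation output
n_bands = 8                                 # e.g. RGB + HIS + CHM + indices

for band in range(1, n_bands + 1):          # rasterstats bands are 1-indexed
    stats = zonal_stats(segments, "rgb_stack.tif",
                        stats=["mean", "std"], band=band)
    segments[f"b{band}_mean"] = [s["mean"] for s in stats]
    segments[f"b{band}_std"] = [s["std"] for s in stats]

segments.to_file("segments_features.gpkg")
```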

2.3.3.3 Labeling procedure

To train supervised learning algorithms and conduct image classification, a set of labeled ground truth data (with selected species classes) is needed. The sampling and labeling procedure was performed outside of the main processing chain using a polygon-based approach. This means that polygons were manually generated across the plots and then labeled by hand through visual image interpretation. The high spatial resolution UAV-derived RGB orthomosaic (pixel size of 2 mm) was used as a reference. The polygons were labeled as either grass (class 1), clover (class 2), plantain (class 3) or weeds (class 4). The polygon digitization was spatially randomized and concentrated on defining patches of species rather than single leaves. The 1167 labeled polygons were evenly spread across the analyzed plots and covered around 17% of the total plot area. Special attention was paid to an equal distribution of species classes among the polygons created. Nevertheless, the plots were dominated by grass and clover with sporadic and scattered occurrences of plantain or weeds. Consequently, it was impossible to obtain an equal number of samples per class. In the end, grass was represented by 434, clover by 468, plantain by 192, and weeds by 73 labeled polygons.

These manually generated polygons which are treated as ground truth data (also known as reference data) were utilized to extract mean values from raster variables (spectral bands, CHMs, and vegetation indices), and were then used to build grouped boxplots and Principal Component Analysis (PCA). The main aim here was to identify and visualize the patterns within the dataset and understand the differences among classes and relationships among variables. Boxplots and PCAs were generated separately for data obtained with the RGB and with the multispectral sensor.

In this study, sample segments (the learning database) used for OBIA classification were generated by applying a spatial join of the manually labeled polygons (ground truth data) and the image segmentation results. The spatial join parameters were set as ‘within’ and ‘overlap’ to select segments that were entirely contained within the labeled polygons, as well as segments that partially overlapped them.
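A minimal geopandas sketch of this join is given below, assuming hypothetical file and column names; the 'within' predicate keeps segments fully inside a labeled polygon and the 'overlaps' predicate adds segments that cross its boundary.

```python
# Hedged sketch: build the learning database with a spatial join.
import geopandas as gpd
import pandas as pd

segments = gpd.read_file("segments_features.gpkg")
labels = gpd.read_file("labeled_polygons.gpkg")   # includes a 'class' column

within = gpd.sjoin(segments, labels[["class", "geometry"]], predicate="within")
overlap = gpd.sjoin(segments, labels[["class", "geometry"]], predicate="overlaps")

learning_db = pd.concat([within, overlap])
learning_db = learning_db[~learning_db.index.duplicated()]  # one row per segment
learning_db.to_file("learning_db.gpkg")
```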

2.3.3.4 Image classification

The prepared learning databases used for the classification procedure were then exported to RStudio (RStudio: IDE for R, RStudio Inc., Boston, MA, USA), where further analysis on model calibration and validation was conducted. In this study, we applied the Random Forest (RF) machine learning algorithm. Due to its high processing speed and great classification performance, RF has gained increasing attention over the past decades (Belgiu and Drăguţ, 2016). Several studies (Akcay et al., 2018; De Castro et al., 2018; Lu and He, 2018; Hall and Lara, 2022) have demonstrated that RF is suitable for land cover and species classification using high-resolution UAV imagery. The main reasons for selecting this classifier are its ability to manage large datasets with many variables (Akcay et al., 2018) and to adjust for interactions (correlations) among those predictor features (Hall and Lara, 2022). Random Forest, introduced by Breiman (2001), is an ensemble learning technique that uses a combination of de-correlated decision trees. Such ensemble classifiers perform better, with higher accuracy and generalization capability, than a single classifier (Rodriguez-Galiano et al., 2012).

Random Forest was trained for each prepared learning database (RGB and MS-based) using the ranger package (Wright and Ziegler, 2017). The following hyperparameters were considered: (1) the number of decision trees to be generated (num.trees) was set to the default of 500, as previous studies suggest that errors stabilize before this number of decision trees is reached (Belgiu and Drăguţ, 2016), (2) the number of variables selected and tested at each split (mtry) was set to the default square root of the number of input variables present in the learning dataset, and (3) the minimal node size (min.node.size) was set to the default of 1 for classification.
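For readers working in Python rather than R, an equivalent configuration (a sketch mirroring the ranger defaults described above, not the study's actual code) would be:

```python
# Hedged Python analogue of the ranger setup (the study used R's ranger).
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(
    n_estimators=500,     # num.trees = 500
    max_features="sqrt",  # mtry = sqrt(number of predictor variables)
    min_samples_leaf=1,   # min.node.size = 1 (classification default)
    random_state=42,      # for reproducibility
)
# rf.fit(X_train, y_train) with per-segment features and species labels
```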

2.3.4 Accuracy assessment

As stated before (section 2.1), only two types of species mixtures, totalling 8 plots, were chosen for Object-Based Image Analysis (OBIA): one comprising grass and clover (mixture 2) and the other grass, clover, and plantain (mixture 3). To assess the performance of the developed procedure, a form of spatial k-fold cross-validation was implemented. This involved resampling based on the location of observations, which was determined by the replicate scheme of the field trial. In total, four Random Forest (RF) classification models were built. Each model utilized 6 plots for training (from 3 different replicates and 2 mixtures), and the remaining 2 plots (from 1 replicate and 2 mixtures) were used to test the model. This process was repeated 4 times, using various replicate combinations, as illustrated in Figure 4.
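The resampling scheme amounts to leaving out one field replicate per fold. A compact sketch of this loop is shown below; the learning-database file and the 'replicate' and 'class' columns are hypothetical names.

```python
# Hedged sketch: leave-one-replicate-out (spatial 4-fold) cross-validation.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

learning_db = pd.read_csv("learning_db.csv")   # per-segment features + labels
feature_cols = [c for c in learning_db.columns
                if c not in ("class", "replicate")]
X, y = learning_db[feature_cols], learning_db["class"]

for rep in ["A", "B", "C", "D"]:               # one fold per field replicate
    test = learning_db["replicate"] == rep
    model = RandomForestClassifier(n_estimators=500, max_features="sqrt")
    model.fit(X[~test], y[~test])
    acc = accuracy_score(y[test], model.predict(X[test]))
    print(f"fold {rep}: overall accuracy = {acc:.2f}")
```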


Figure 4 Visual representation of 4-fold cross-validation implemented in the study (green color: training set, orange color: test set).

Both qualitative evaluation through visual inspection and quantitative evaluation using reference data are integral components of the accuracy assessment in OBIA (Zhao et al., 2020). In supervised classification, a confusion matrix is a commonly used tool to organize the information essential for accuracy assessment (Bratic et al., 2018). To summarize the confusion matrix information, several classification measures (scalar metrics) can be calculated. In this study, we focused on two metrics: a) the overall accuracy and b) the F1 score on a class-wise basis. The F1 score is the harmonic mean of precision and recall (Guns et al., 2012), so a high F1 score requires both measures to be high. F1 evaluates model performance for individual classes, as opposed to accuracy, which captures the overall performance. In general, an F1 score equal to 1 represents a model that perfectly classifies each observation into the correct class.
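Both metrics can be computed directly from predicted and true labels; a small sketch with toy labels (illustrative only, not study data) is shown below.

```python
# Hedged sketch: overall accuracy and class-wise F1 scores.
from sklearn.metrics import accuracy_score, f1_score

# Toy labels for illustration only (not study data)
y_true = ["grass", "grass", "clover", "clover", "plantain", "weed"]
y_pred = ["grass", "clover", "clover", "clover", "grass", "weed"]

classes = ["grass", "clover", "plantain", "weed"]
print("overall accuracy:", accuracy_score(y_true, y_pred))
print("class-wise F1:", f1_score(y_true, y_pred, average=None, labels=classes))
```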

2.4 Species mapping

In order to obtain a classification map, a random forest (RF) model was built using labeled segments from all 8 plots. This model was then used to predict species class in all unclassified segments of the analyzed plots. The predicted class was used as a symbol label to create a species classification map. This procedure was repeated for different image segmentation results, obtained with different parameter combinations tested in the segmentation procedure.

3 Results

3.1 Multivariate relationships in reference data

Mean values extracted from the rasters (bands, CHMs and VIs) with the manually generated polygons (ground truth data) were used to build grouped boxplots (Figure 5) and PCAs (Figures 6, 7). The multivariate analysis indicates that spectral separation among the grass, clover, plantain, and weed classes is clearer with RGB imagery than with MS imagery. What stands out in Figure 5 is the difference between the median and spread values of the plantain class for the RGB-derived data (Figures 5A, B). For almost all analyzed variables (except CHM, ExR, and NGRDI) the range of the plantain class is the largest, indicating a wider distribution and more scattered data (spectral response). For the MS-based data (Figures 5C, D), measures of spread and central tendency for plantain are closer to those of the clover and weed classes.


Figure 5 Visual summary (boxplots) of data extracted using labeled polygons as ground truth data for RGB-based color space with HIS color space (A) and RGB-based vegetation indices with CHM (B) MS-based spectral bands (C) and MS-based vegetation indices (D).


Figure 6 Principal Component Analysis (PCA) score plot representing RGB-based data of four distinct classes (grass, clover, plantain, and weed) obtained from labeled ground truth polygons, utilizing three dimensions.


Figure 7 Principal Component Analysis (PCA) score plot representing MS-based data of four distinct classes (grass, clover, plantain, and weed) obtained from labeled ground truth polygons, utilizing three dimensions.

The interquartile range (the box height) does not overlap between the grass and clover classes for the majority of the analyzed variables (Figure 5). This indicates a clear difference between the two classes in terms of spectral response. Features such as hue, CHM, ExR, ExGR, NGRDI, and NDVI are exceptions, as the median scores are similar for grass and clover and the boxes overlap. On average, grass has lower values for almost all spectral bands and vegetation indices. On the contrary, when considering the saturation, ExG, ExGR, and GNDVI features, grass shows higher values than clover. There are several similarities between clover and weed in their spectral response, as the range and median values are close to each other. Nonetheless, differences between these two classes can be observed in the RGB-based blue, ExG, and ExGR variables, where the weed values are more aligned with the grass class. Canopy height has similar median and interquartile range values for the four analyzed classes.

The RGB-based 3D PCA plot presented in Figure 6 shows that the first three principal components (PC1, PC2 & PC3) capture almost 96% of the variance in the data. The first component (PC1) explains 79.5% of that variation, the second (PC2) 9.1%, and the third (PC3) 7%. Hence, they are included in the further description and results.

The clusters in Figure 6 reveal the presence of multiple distinct distributions within the data. There is not much overlap between plantain and the other three classes. The spread of observations is the largest for plantain, especially across PC1 and PC3. These samples are not as closely grouped as the grass or clover classes. There are a number of similarities and differences between grass and clover across the PCs. Both grass and clover samples share a similar spread of values across PC2. There is also a considerable overlap between these two classes along PC1 and PC3. However, when looking at the observations in three-dimensional space, one can see that grass and clover cluster separately. Even though both grass and clover observations cluster rather diagonally from low (negative) values on PC1 and PC3 to higher values on PC1 and PC3, grass displays lower values on PC1 and higher values on PC3 than the clover samples. The weed class overlaps to a large extent with the grass and clover classes in PC1 and PC2 values, but it differs when considering PC3, along which the weed class reaches higher values on average.

The next important step is to look into the relationship of the principal components with their original variables. For this purpose, the magnitude and the direction of linear coefficients, also known as loadings, were explored (Supplementary Table S1). The first principal component is primarily an equally weighted contrast between blue, green, intensity, red, and ExR variables (positive coefficients) and hue, saturation, ExG, ExGR, and NGRDI variables (negative coefficients). In contrast, the second principal component has a large positive association with CHM and very small contributions from all the other variables. The third principal component has the strongest weighted contrast between hue and saturation. Blue, NGRDI (negative coefficients), and ExR, CHM (positive coefficients) are other variables influencing PC3.

The 3D PCA plot on the multispectral rasters (Figure 7) shows that nearly 98% of the variance in the data is explained by the first three principal components (PC1, PC2 & PC3). PC1 captures 70.1% of the variability, followed by PC2 with 21.4% and PC3 with 6.2%. Consequently, these components are considered for further description and analysis.

Similar to the RGB-based PCA, the plantain class displays the widest spread, particularly across PC1. However, in contrast to the RGB-based PCA, observations of the plantain category overlap with those of the other three classes. Also in this case, the grass and clover classes share several similarities and differences across the PCs. Observations of both classes have a similar spread of values across PC2 and PC3, but a different spread across PC1. While grass observations cluster around negative PC1 values, clover observations center around positive PC1 values. Similar to the RGB-based PCA, the weed class intersects with the other classes in the PC1 and PC2 space, yet it diverges in the PC3 dimension, where it typically reaches higher values.

The first principal component represents an evenly weighted sum of all analyzed variables (Supplementary Table S2). This PC has negative associations with GNDVI, Clg, SR, and NDVI, while it has positive associations with the remaining variables. Conversely, the second principal component shows the strongest positive correlation with MSAVI. Features such as EVI, NDVI, GARI, GNDVI and the NIR spectral band are the other variables influencing PC2. PC3 demonstrates the strongest weighted contrast between MCARI (positive coefficient) and SR together with the blue spectral band (negative coefficients).

3.2 Object-based image analysis: segmentation and classification results with accuracy assessment

In this study, different combinations of parameters were investigated and their impact on segmentation was explored and visualized. Figure 8 illustrates the results of such image segmentation (red polygons) for both the RGB (left side) and multispectral (right side) rasters. A few sets of segmentation parameters (minsize, ranger, and spatialr) were selected as examples to present varying levels of detail, with a decreasing number of acquired segments (from top to bottom). Figure 8A also marks examples of the grass, clover, plantain and weed classes to showcase the variation.


Figure 8 Image segmentation results of a B3 plot fragment, performed separately for RGB (left) and MS (right) imagery. The RGB-based orthomosaic (A) and MS-based true color composite (E) are used as a background. The parameters minsize, ranger, spatialr were set, respectively, as follows: (B) 30, 10, 10; (C) 200, 10, 10; (D) 120, 30, 10; (F) 2, 0.01, 5; (G) 8, 0.02, 5; (H) 14, 0.01, 15.

As expected, the spatial resolution associated with the UAV sensor (RGB vs. MS) and its flight altitude plays an essential role in defining the outcomes of the applied OBIA technique. With high spatial resolution imagery, like that acquired with the RGB sensor, segmentation is possible with a very high level of detail. With lower resolution rasters (i.e. the multispectral ones) we achieve coarser image segmentation. As shown in Figure 8B, pixels are clustered into very small objects such as individual grass leaves or clover leaflets. What is interesting about this segmentation outcome is that the single plantain leaf, as well as the weed plant (accentuated in Figure 8A), are separated into several objects. This is a consequence of over-segmentation, with multiple objects representing a single feature. In the following segmentation (Figure 8C), the number of segments obtained was much lower. Certain plants are still depicted by more than one object (e.g. the plantain leaf), while other plants, e.g. weeds or some grass patches, are more homogeneous and spatially continuous. In this case, a weed plant is well delineated into one object. The segmentation result shown in Figure 8D also shows a low number of segments, and the boundaries between features are rather well defined. For instance, in the top left corner of Figure 8D, a grass patch (comprising multiple grass blades) is clustered into one object, similarly to a clover patch located nearby.

Upon close examination of the MS-based segmentation results (presented in Figures 8F–H), several notable differences with the RGB-based results can be observed. While MS imagery may lack the level of detail of the RGB images, it is still able to distinguish patches of grass and clover, with grass appearing darker than clover. In the MS rasters, plantain leaves no longer stand out and cannot be distinguished easily (a mixed pixel problem, as pixels contain information from multiple ground cover classes). In addition, the weed plant closely resembles a clover patch. Figure 8F displays the most detailed MS-based segmentation, capturing the smallest objects. In the segmentation shown in Figure 8G, less detailed results are obtained with a reduced number of objects. The segmentation output shown in Figure 8H is a clear case of under-segmentation: multiple features, representing different ground cover classes, are contained within a single segment.

Figure 9 showcases the outcomes of species classification using UAV imagery with different spatial resolutions derived from both sensors. The C2 plot, sown with a grass-clover mixture, and a blow-up of the yellow-framed area are used as examples for further analysis. The detailed RGB-based orthomosaic shown in Figure 9A and the MS-based true color composite shown in Figure 9D were utilized as input. Image segmentation using the mean-shift algorithm is shown in Figures 9B, E. Classification maps differentiating four classes (grass, clover, plantain, and weed) are presented in Figures 9C, F. The RGB-based classification yields a more intricate and detailed species map (Figure 9C) than the MS-based one (Figure 9F). But, as can be seen from the selected area of interest, both OBIA classification maps share some key similarities. The largest patches of clover and grass are identified in similar/corresponding areas. However, clover covers a larger area in the RGB-based classification map compared to the MS counterpart. The main difference is visible in the upper section, where a larger grass patch is predicted with the coarser MS imagery. In both cases, a weed plant located on the upper right side (weed class) was correctly classified. However, the predicted coverage area of that weed is larger than in reality, which is particularly evident in the MS-based classification map. Additionally, in Figure 9F, the small clover patch (bottom left) was partially misclassified as weed. Visual inspection of the generated classification maps reveals that the OBIA procedure facilitates fine segmentation and classification of the UAV-derived imagery. It is possible to successfully identify the two primary cover classes (grass and clover), but distinguishing the remaining two classes (plantain and weeds) remains challenging.


Figure 9 A C2 plot with a sown mixture of grass and clover species, the area highlighted by the yellow frame represents an area of interest (AOI) for further analysis (top). Enlarged view of AOI showing RGB-based orthomosaic (A) and MS-based true color composite (D) as input imagery for OBIA; results of image segmentation (B, E); classification maps acquired with OBIA technique (C, F), where green: grass, yellow: clover, blue: plantain, pink: weed.

The results of the performance assessment for OBIA are presented separately for the RGB (Figure 10A) and MS (Figure 10B) imagery. The overall accuracy and the F1 score on a class-wise basis are the two metrics used for this evaluation (x-axis), with values ranging between 0 and 1 (y-axis). Outcomes for the different segmentation parameter combinations used in the classification procedure (numbered from 1 to 12, presented in Table 1) are summarized. Segmentations 10, 11, and 12 from the RGB-based analysis were excluded from further analysis due to insufficient object separation. The results were obtained by applying a Random Forest (RF) classifier and 4-fold cross-validation.


Figure 10 Performance assessment of multiresolution image segmentation using RGB (A) and multispectral (B) imagery, with different parameter combinations using Random Forest (RF) classifier and spatial k-fold cross-validation. The evaluation metrics included the overall accuracy and the F1 score on a class-wise basis (x-axis), having a range between 0 and 1 (y-axis).

Both RGB and MS-based imagery yield moderate to good performance, with overall accuracies of approximately 70%. For the RGB and MS-based classification, the highest overall accuracies (approximately 0.69 and 0.72, respectively) were achieved with segmentations 1, 4, and 7. On the contrary, the least accurate outcomes were recorded for segmentations 3, 6, 9 or 12, with accuracies of 0.65 and 0.64 for the RGB and MS-based classification, respectively. Overall, segmentations that generated smaller objects demonstrated higher overall accuracy. This trend can be observed for both RGB and MS-based OBIA. The minsize parameter plays a key role in determining the size of the segments and thus has the strongest influence on the achieved accuracy.

The highest F1 score for the clover class in RGB-based OBIA was achieved with segmentation 4 (F1 = 0.73), while in MS-based OBIA, segmentations 1 and 7 resulted in the highest F1 score (F1 = 0.74). Conversely, the lowest F1 scores of 0.67 (segmentations 3 and 9) for RGB and 0.66 (segmentations 3 and 6) for MS-based OBIA were noted. Similar F1 values were observed for the grass class. With the high-resolution RGB imagery, the F1 score peaked at 0.7 (for segmentations 1 and 7) and was the lowest for segmentation 6 (0.62). With the MS imagery, the F1 score was the highest for segmentation 1 (0.75) and the lowest for segmentations 3 and 6 (0.66). These results demonstrate that both RGB and MS-based OBIA yield similar F1 scores for the grass and clover classes. The plantain and weed classes are particularly difficult to detect and yielded very low F1 scores. When compared, MS-based OBIA results in lower F1 values (on average) for the plantain class than for the weed class. The variability (spread) of F1 scores across different segmentation parameters is also more pronounced for the plantain and weed classes.

3.3 Clover fraction estimation

To check the effectiveness of the OBIA procedure, we opted to explore the relationship between the OBIA results and the relative proportion of dry matter yield. Classification maps obtained with the OBIA technique were clipped to the 10 subplots selected for field sampling (shown in Figure 1), and the cover area of each class was calculated. This information was coupled with the dry matter yield (DMY) data collected in the field. The correlation between the classification results (clover coverage) and the corresponding dry matter proportion is presented in Figure 11, employing the following metrics, defined in Equations 1 and 2:


Figure 11 Relationship between the predicted clover cover (x-axis) and the harvested dry matter (y-axis) using linear regression with 95% confidence interval (grey area), regression line equation, coefficient of determination (R2), and p-value. Selected classification results, based on segmentation 4 for RGB (A) and segmentation 1 for MS (B) imagery, are given as an example.

clover coverage fraction [cover %] = clover coverage / total subplot area    (1)

clover DMY proportion [weight %] = DMY of clover / total subplot DMY    (2)
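The comparison itself reduces to a linear fit between the two fractions across the 10 subplots. A minimal sketch is shown below; the numeric arrays are illustrative placeholders, not the study's data.

```python
# Hedged sketch: clover cover fraction (Eq. 1) vs. DMY proportion (Eq. 2).
import numpy as np
from scipy.stats import linregress

# Illustrative placeholder values for 10 subplots (not study data)
clover_cover = np.array([0.15, 0.22, 0.30, 0.35, 0.41, 0.48, 0.55, 0.60, 0.68, 0.75])
clover_dmy   = np.array([0.08, 0.12, 0.18, 0.22, 0.30, 0.36, 0.45, 0.50, 0.58, 0.66])

fit = linregress(clover_cover, clover_dmy)   # least-squares linear fit
print(f"R2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.4f}")
```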

A strong positive relationship was found between the predicted clover cover and the dry matter fraction. Despite the lower resolution of the MS sensor imagery, similar correlations were observed between the clover ratio and the DMY proportion (R2 = 0.81 and 0.80 for RGB and MS, respectively).

4 Discussion

Image analysis of mixed sward canopies faces numerous challenges due to the varying optical plant properties and varying leaf shapes and colors of the constituent species, as well as overlapping plant parts (Himstedt et al., 2009). Very high-resolution imagery can pose further challenges in species classification, as it captures detailed features, such as gaps, shadows or a nonuniform soil background, which may introduce noise and result in misclassification (Lu and He, 2018). Object-based image analysis (OBIA), as an alternative to pixel-based classification, segments an image into meaningful objects and assigns a specific vegetation class to each object (Blaschke et al., 2014). A recent study by Ventura et al. (2022) utilized OBIA classification and UAV imagery to monitor and map semi-natural grasslands. They successfully differentiated three grassland types, including closed and open grasslands, achieving an overall classification accuracy of over 89%. Another study conducted by Lu and He (2017) employed OBIA on UAV-acquired images to investigate species composition in a tall grassland, focusing on brome, goldenrod, milkweed, and fescue species. They recorded an overall accuracy of approximately 85% across images obtained at different times.

The assessment of the accuracy of the OBIA approach applied in this study demonstrated good performance, achieving an overall accuracy of approximately 70% for both RGB and MS-based imagery. The F1 score used to estimate the accuracy achieved for the four analyzed classes offered a deeper insight into model performance. The grass and clover classes yielded similar F1 scores, exceeding 0.7 in both RGB and MS-based OBIA, which is an indication of good performance (Van Otten, 2023). It implies that roughly 70% of the samples were classified correctly. The confusion matrices show that clover was primarily misclassified as grass and that grass was predominantly misclassified as clover. Misclassification of clover and grass into the other classes (plantain and weeds) was negligible. This pattern is consistent across the RGB and MS imagery. In contrast, detecting the plantain and weed classes was challenging, with F1 scores reaching at best 0.27. According to the confusion matrix, both plantain and weeds were predominantly misclassified as grass and clover. In the RGB-based classification, the misclassification of plantain and weeds was evenly distributed between the grass and clover classes, while in the MS-based classification, plantain and weeds were more frequently misclassified as clover, probably because they are interpreted as dicots in the lower-resolution images. Furthermore, the lower performance (lower F1 scores, depicted in Figure 10) for the plantain and weed classes was to be expected due to imbalances in the dataset: the grass and clover classes had considerably more instances (labeled polygons) than the other two classes. Consequently, classifiers tend to overlook the less represented classes while focusing on and prioritizing classes with higher representation in the learning database (Lin and Chen, 2013).

Similarities in the spectral response among the classes can also be a reason for misclassification, as explained in section 3.1. In the MS data, the measures of spread and central tendency for plantain are closer to those of the clover and weed classes. This could be explained by the lower spatial resolution of the MS rasters, along with the small size of the plantain polygons used for data extraction. Here, a mixed pixel problem arises, as other classes contribute to the spectral response of plantain. As a consequence, delineating segments containing plantain proved to be challenging. Additionally, similarities in the multispectral response between the weed and clover classes could possibly explain the difficulties in the delineation and classification of weed segments.

To examine the effectiveness of the OBIA procedure, the relationship between the predicted clover coverage and the relative proportion of harvestable dry matter yield was investigated. A strong positive relationship was found between the predicted clover fraction and the DMY proportion, with R2 values exceeding 0.8 for both RGB and MS-based OBIA classification results. Remarkably, even though the MS imagery has a lower resolution, comparable correlations were found, underscoring the applicability of the analysis across different data resolutions. Nevertheless, as a relatively small sample size (10 subplots) was used to examine the relationship between the obtained clover cover and the DMY proportion, these findings should be interpreted with caution. Further research (with additional data collection) is necessary for a deeper investigation of these relationships.

Image segmentation stands as the initial step in OBIA, and it partitions the entire image into distinct, non-overlapping segments (Marçal and Rodrigues, 2009) with a meaningful representation. Hence, an optimal segmentation should yield an image partitioning where each segment corresponds (as much as possible) to an object of interest in the study area (Troya-Galvis et al., 2015), while still maintaining high within-segment homogeneity (Hao et al., 2021). This makes the segmentation parameter selection a critical step in the OBIA procedure. Segmentation errors can be due either to over-segmentation or to under-segmentation. Over-segmentation occurs when a single object of interest is segmented into too many polygons, whereas under-segmentation happens when multiple objects of interest are included in a single segment (Troya-Galvis et al., 2015; Hao et al., 2021). As shown in Figure 10, segmentations which generated more and smaller segments (e.g. 1, 4 or 7) resulted in higher overall accuracy and F1 scores for the grass and clover classes. A similar trend was observed with the clover fraction estimation outcomes. In general, the correlations between the clover coverage (classification results) and the corresponding dry matter yield proportion achieved better R2 values when a more detailed segmentation was obtained in the OBIA procedure.

The minsize parameter, also referred to as the scale parameter, which determines the relative segment size (Drǎguţ et al., 2010), has a substantial impact on the OBIA and classification results (Zhao et al., 2020; Hao et al., 2021). Possibly, setting a smaller minsize parameter reduces the probability of segmenting multiple classes into a single object, leading to increased overall accuracy. These findings are consistent with Huang et al. (2020), who showed that a smaller scale parameter produces higher accuracy. Correspondingly, Grippa et al. (2017) advocated for over-segmentation over under-segmentation, arguing that the former allows for correction during the classification phase. The findings of this study corroborate these conclusions. Therefore, employing segmentations that generate numerous smaller segments should be preferred. Nevertheless, performing detailed segmentation, particularly with high spatial resolution RGB imagery, is time-consuming and computationally intensive. Therefore, the selection of technology (sensors, segmentation parameters) will depend on the end users and their objectives (e.g., acceptable error margins or required decision-making time).

In this study, the sample segments used for OBIA classification assessment were generated with a spatial join in which the ‘within’ and ‘overlap’ parameters were selected. As a result, a higher number of segments was selected for sample representation, which led to decreased agreement between samples and segments; we consider this a limitation of the selected method. While this approach performs well with segmentation that generates smaller objects (e.g., Figure 12A for RGB-based OBIA and Figure 12C for MS-based OBIA), it shows limitations with larger segments (e.g., Figures 12B, D). With such larger objects, there is a tendency to incorporate segments into the sample representation that, in reality, belong to a different class. Evidence of this can be seen in the center of graphs B and D, where segments representing grass and clover were incorrectly assigned to the plantain class. Larger objects also present a higher likelihood of multiple classes falling within a single segment. This can be seen at the bottom of graphs B and D, where one bigger segment that actually represents both a weed plant and a patch of clover was assigned to the weed class.

Figure 12 Defining sample segments (learning database) for OBIA classification. In this study, the ‘within’ and ‘overlap’ parameters were selected when applying the spatial join tool, as presented on the left side (A–D). For comparison, the right side (E–H) shows the result when only the ‘within’ parameter is selected. Examples of different segmentation results are shown, with RGB-based segmentation creating smaller (A, E) and larger objects (B, F), and MS-based segmentation creating smaller (C, G) and larger objects (D, H).

An alternative approach that would enhance conformity between labeled polygons and segments is to define fewer segments for the sample representation. This option is depicted on the right side of Figure 12, where only the ‘within’ parameter was applied, meaning that only segments whose entire geometry is enclosed by the labeled polygon, without any shared boundary points, are considered. While this approach might be effective for RGB-based OBIA, particularly for segmentations that generate smaller objects (Figure 12E), it is not suitable for MS-based OBIA, especially with segmentations that produce larger objects (as in Figure 12H), where no segments would be selected to represent the analyzed classes and build a learning database. This issue is further illustrated in panels F and G, where only a few segments were chosen for the grass and clover classes and none for the weed and plantain classes. The results presented in Figure 10 (F1 scores) could likely be improved if the selection of sample segments were refined to minimize the incorrect assignment of segments to a class, yielding an enhanced learning database for OBIA classification.
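For reference, per-class F1 scores such as those reported in Figure 10 can be computed as in the sketch below, assuming scikit-learn; the label vectors are placeholders, not the study's validation data.

    from sklearn.metrics import f1_score

    classes = ["grass", "clover", "weed", "plantain"]
    # Placeholder reference and predicted labels for a handful of segments.
    y_true = ["grass", "clover", "clover", "weed", "grass", "plantain", "clover", "grass"]
    y_pred = ["grass", "clover", "grass",  "weed", "grass", "clover",   "clover", "grass"]

    # One F1 score per class; rare classes (weed, plantain) often score low,
    # as observed in this study.
    per_class = f1_score(y_true, y_pred, labels=classes, average=None, zero_division=0)
    for name, score in zip(classes, per_class):
        print(f"{name}: F1 = {score:.2f}")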

When extensive fields need to be assessed, visual inspection of clover coverage and proportion becomes impractical (Skovsen et al., 2017); automated and efficient methods are therefore necessary. Farmers and breeders could greatly benefit from tools that enable visualization of the spatial distribution of clover, like the ones examined in this study. The main question revolves around the end user of such outcomes (e.g., classification maps). While very detailed segmentations and classifications may be useful for scientific analyses and applications, for a farmer, coarser maps showing the approximate distribution of grass-clover patches across the field may suffice. With this objective in mind, high-resolution RGB imagery is not necessary, and lower-resolution multispectral data with faster processing times can be successfully applied in the OBIA procedure.

The classification maps derived from both RGB and MS-based OBIA hold potential for agricultural applications. Farmers could leverage them for forage quality assessments and for optimizing agronomic tasks, such as deciding on resowing or fine-tuning fertilizer applications in a precision agriculture context. While the determination of some of the analyzed classes (plantain and weeds) with MS-based OBIA was unsatisfactory, MS-based OBIA could certainly be used to differentiate monocots (grass) from dicots (clover, weeds and plantain, possibly grouped into one class). Even though the results for the plantain and weed classes obtained with RGB-based OBIA were also disappointing, they still outperformed those of the MS-based OBIA. Several factors could explain this, such as the clearer distinction of spectral responses among classes (Figure 5) and the higher resolution of the RGB camera (reducing the mixed-pixel problem).
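As a sketch of this suggested monocot/dicot grouping, the example below trains a Random Forest on per-segment features with the dicot classes merged into one; scikit-learn serves here as a stand-in for the ranger implementation cited in the references, and the features and labels are synthetic.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.random((200, 6))  # synthetic per-segment features (band means, indices, canopy height)
    y = rng.choice(["grass", "clover", "weed", "plantain"], size=200)

    # Merge the dicot classes into one, leaving a two-class monocot/dicot problem.
    y_merged = np.where(y == "grass", "monocot", "dicot")

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X, y_merged)
    print(clf.predict(X[:5]))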

5 Conclusions

Given the significance of automated and efficient methods for estimating legume coverage and proportion in mixed swards, we proposed an open-source OBIA approach. This study aimed to detect and quantify the ratio of species within mixed grass-clover swards, while providing a procedure simple enough for breeders and farmers (as an alternative to more complex deep learning methods). A UAV was employed to capture high spatial resolution imagery with an RGB and a multispectral sensor, achieving 2 mm and 2 cm spatial resolution, respectively. The findings showed that both RGB and MS-based OBIA performed well, reaching comparable F1 scores for the grass and clover classes (exceeding 0.7). Conversely, identifying the less prevalent plantain and weed classes posed challenges, resulting in low F1 scores. Both sensors yielded comparable results overall; the choice between them depends on the specific objectives of the application. For instance, when coarse maps delineating the distribution of grass-clover patches suffice, multispectral data with lower resolution and faster processing times may be preferred. The strong correlation observed between the predicted clover fraction and the dry matter yield proportion highlights the potential of the proposed procedure for estimating and visualizing clover coverage in mixed grass-clover fields, providing valuable support for breeders and for farmers operating within the realm of precision agriculture. Moving forward, further research could explore refinements of this methodology to enhance its utility across different sites and environmental conditions.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

JP: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Visualization, Writing – original draft, Writing – review & editing. IB: Conceptualization, Data curation, Investigation, Methodology, Writing – review & editing. PQ: Formal analysis, Methodology, Visualization, Writing – review & editing. TD: Conceptualization, Formal analysis, Investigation, Methodology, Writing – review & editing. TV: Data curation, Resources, Writing – review & editing. KW: Resources, Writing – review & editing. GR: Conceptualization, Funding acquisition, Writing – review & editing. IJ: Funding acquisition, Supervision, Writing – review & editing. IR: Conceptualization, Supervision, Writing – review & editing. PL: Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Supervision, Visualization, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813114, FutureArctic (https://www.futurearctic.be/).

Acknowledgments

The authors wish to thank Thomas Vanderstocken and Aaron Van Gehuchten for performing the UAV flights and the ILVO field team for the trial management.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpls.2024.1414181/full#supplementary-material

References

AHDB. (2021). How to assess the clover content of your pasture. Available online at: https://ahdb.org.uk/news/how-to-assess-the-clover-content-of-your-pastures (Accessed 9.14.23).

Akcay, O., Avsar, E., Inalpulat, M., Genc, L., Cam, A. (2018). Assessment of segmentation parameters for object-based land cover classification using color-infrared imagery. ISPRS Int. J. Geo-Inf. 7, 424. doi: 10.3390/ijgi7110424

Bateman, C. J., Fourie, J., Hsiao, J., Irie, K., Heslop, A., Hilditch, A., et al. (2020). Assessment of mixed sward using context sensitive convolutional neural networks. Front. Plant Sci. 11. doi: 10.3389/fpls.2020.00159

Belgiu, M., Drăguţ, L. (2016). Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 114, 24–31. doi: 10.1016/j.isprsjprs.2016.01.011

Biewer, S., Erasmi, S., Fricke, T., Wachendorf, M. (2009). Prediction of yield and the contribution of legumes in legume-grass mixtures using field spectrometry. Precis. Agric. 10, 128–144. doi: 10.1007/s11119-008-9078-9

Blaschke, T. (2010). Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 65, 2–16. doi: 10.1016/j.isprsjprs.2009.06.004

Blaschke, T., Hay, G. J., Kelly, M., Lang, S., Hofmann, P., Addink, E., et al. (2014). Geographic Object-Based Image Analysis – Towards a new paradigm. ISPRS J. Photogramm. Remote Sens. 87, 180–191. doi: 10.1016/j.isprsjprs.2013.09.014

Botha, P. (2009). Factors affecting the persistence of clovers in grass-clover pastures. Grassroots 9 (3). Available online at: https://grassland.org.za/publications/grassroots/issues/September%202009/4%20200909%20Botha.pdf (Accessed 11.1.2024).

Bratic, G., Brovelli, M. A., Molinari, M. E. (2018). A free and open source tool to assess the accuracy of land cover maps: implementation and application to Lombardy region (Italy). Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. XLII–3, 87–92. doi: 10.5194/isprs-archives-XLII-3-87-2018

Breiman, L. (2001). Random forests. Mach. Learn. 45, 5–32. doi: 10.1023/A:1010933404324

Comaniciu, D., Meer, P. (2002). Mean shift: a robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 24, 603–619. doi: 10.1109/34.1000236

Conrad, O., Bechtel, B., Bock, M., Dietrich, H., Fischer, E., Gerlitz, L., et al. (2015). System for automated geoscientific analyses (SAGA) v. 2.1.4. Geosci. Model. Dev. 8, 1991–2007. doi: 10.5194/gmd-8-1991-2015

Deak, A., Hall, M. H., Sanderson, M. A., Archibald, D. D. (2007). Production and nutritive value of grazed simple and complex forage mixtures. Agron. J. 99, 814–821. doi: 10.2134/agronj2006.0166

De Castro, A., Torres-Sánchez, J., Peña, J., Jiménez-Brenes, F., Csillik, O., López-Granados, F. (2018). An automatic random forest-OBIA algorithm for early weed mapping between and within crop rows using UAV imagery. Remote Sens. 10, 285. doi: 10.3390/rs10020285

De Luca, G., Silva, J. M. N., Cerasoli, S., Araújo, J., Campos, J., Di Fazio, S., et al. (2019). Object-based land cover classification of cork oak woodlands using UAV imagery and orfeo toolBox. Remote Sens. 11, 1238. doi: 10.3390/rs11101238

Deng, L., Mao, Z., Li, X., Hu, Z., Duan, F., Yan, Y. (2018). UAV-based multispectral remote sensing for precision agriculture: A comparison between different cameras. ISPRS J. Photogramm. Remote Sens. 146, 124–136. doi: 10.1016/j.isprsjprs.2018.09.008

Dewhurst, R. J., Delaby, L., Moloney, A., Boland, T., Lewis, E. (2009). Nutritive value of forage legumes used for grazing and silage. Ir. J. Agric. Food Res. 48, 167–187.

Drǎguţ, L., Tiede, D., Levick, S. R. (2010). ESP: a tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data. Int. J. Geogr. Inf. Sci. 24, 859–871. doi: 10.1080/13658810903174803

Durgan, S. D., Zhang, C., Duecaster, A., Fourney, F., Su, H. (2020). Unmanned aircraft system photogrammetry for mapping diverse vegetation species in a heterogeneous coastal wetland. Wetlands 40, 2621–2633. doi: 10.1007/s13157-020-01373-7

Espindola, G. M., Camara, G., Reis, I. A., Bins, L. S., Monteiro, A. M. (2006). Parameter selection for region-growing image segmentation algorithms using spatial autocorrelation. Int. J. Remote Sens. 27, 3035–3040. doi: 10.1080/01431160600617194

Fujiwara, R., Nashida, H., Fukushima, M., Suzuki, N., Sato, H., Sanada, Y., et al. (2022). Convolutional neural network models help effectively estimate legume coverage in grass-legume mixed swards. Front. Plant Sci. 12. doi: 10.3389/fpls.2021.763479

Grippa, T., Lennert, M., Beaumont, B., Vanhuysse, S., Stephenne, N., Wolff, E. (2017). An open-source semi-automated processing chain for urban object-based classification. Remote Sens. 9, 358. doi: 10.3390/rs9040358

Grizonnet, M., Michel, J., Poughon, V., Inglada, J., Savinaud, M., Cresson, R. (2017). Orfeo ToolBox: open source processing of remote sensing images. Open Geospatial Data Software Stand. 2, 15. doi: 10.1186/s40965-017-0031-6

Guns, R., Lioma, C., Larsen, B. (2012). The tipping point: F-score as a function of the number of retrieved items. Inf. Process. Manage. 48, 1171–1180. doi: 10.1016/j.ipm.2012.02.009

Hall, E. C., Lara, M. J. (2022). Multisensor UAS mapping of plant species and plant functional types in midwestern grasslands. Remote Sens. 14, 3453. doi: 10.3390/rs14143453

Hao, S., Cui, Y., Wang, J. (2021). Segmentation scale effect analysis in the object-oriented method of high-spatial-resolution image classification. Sensors 21, 7935. doi: 10.3390/s21237935

Himstedt, M., Fricke, T., Wachendorf, M. (2009). Determining the contribution of legumes in legume–grass mixtures using digital image analysis. Crop Sci. 49, 1910–1916. doi: 10.2135/cropsci2008.11.0636

Himstedt, M., Fricke, T., Wachendorf, M. (2010). The relationship between coverage and dry matter contribution of forage legumes in binary legume–grass mixtures. Crop Sci. 50, 2186–2193. doi: 10.2135/cropsci2009.10.0592

Hossain, M. D., Chen, D. (2019). Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective. ISPRS J. Photogramm. Remote Sens. 150, 115–134. doi: 10.1016/j.isprsjprs.2019.02.009

Huang, H., Lan, Y., Yang, A., Zhang, Y., Wen, S., Deng, J. (2020). Deep learning versus Object-based Image Analysis (OBIA) in weed mapping of UAV imagery. Int. J. Remote Sens. 41, 3446–3479. doi: 10.1080/01431161.2019.1706112

Justus, D., Brennan, J., Bonner, S., McGough, A. S. (2018). “Predicting the computational cost of deep learning models,” in IEEE international conference on big data (Big Data). IEEE. 3873–3882. doi: 10.48550/arXiv.1811.11880

Kartal, S. (2021). Comparison of semantic segmentation algorithms for the estimation of botanical composition of clover-grass pastures from RGB images. Ecol. Inform. 66, 101467. doi: 10.1016/j.ecoinf.2021.101467

Khatiwada, B., Acharya, S. N., Larney, F. J., Lupwayi, N. Z., Smith, E. G., Islam, M. A., et al. (2020). Benefits of mixed grass–legume pastures and pasture rejuvenation using bloat-free legumes in western Canada: a review. Can. J. Plant Sci. 100, 463–476. doi: 10.1139/cjps-2019-0212

Lam, O. H. Y., Dogotari, M., Prüm, M., Vithlani, H. N., Roers, C., Melville, B., et al. (2021). An open source workflow for weed mapping in native grassland using unmanned aerial vehicle: using Rumex obtusifolius as a case study. Eur. J. Remote Sens. 54, 71–88. doi: 10.1080/22797254.2020.1793687

Lang, S., Schöpfer, E., Langanke, T. (2009). Combined object-based classification and manual interpretation–synergies for a quantitative assessment of parcels and biotopes. Geocarto Int. 24, 99–114. doi: 10.1080/10106040802121093

Li, K.-Y., Burnside, N. G., Sampaio De Lima, R., Villoslada Peciña, M., Sepp, K., Yang, M.-D., et al. (2021). The application of an unmanned aerial system and machine learning techniques for red clover-grass mixture yield estimation under variety performance trials. Remote Sens. 13, 1–22. doi: 10.3390/rs13101994

Lin, W.-J., Chen, J. J. (2013). Class-imbalanced classifiers for high-dimensional data. Brief. Bioinform. 14, 13–26. doi: 10.1093/bib/bbs006

Lu, B., He, Y. (2017). Species classification using Unmanned Aerial Vehicle (UAV)-acquired high spatial resolution imagery in a heterogeneous grassland. ISPRS J. Photogramm. Remote Sens. 128, 73–85. doi: 10.1016/j.isprsjprs.2017.03.011

Lu, B., He, Y. (2018). Optimal spatial resolution of Unmanned Aerial Vehicle (UAV)-acquired imagery for species classification in a heterogeneous grassland ecosystem. GIScience Remote Sens. 55, 205–220. doi: 10.1080/15481603.2017.1408930

Lüscher, A., Mueller-Harvey, I., Soussana, J. F., Rees, R. M., Peyraud, J. L. (2014). Potential of legume-based grassland–livestock systems in Europe: a review. Grass Forage Sci. 69, 206–228. doi: 10.1111/gfs.12124

Marçal, A. R. S., Rodrigues, A. S. (2009). A method for multi-spectral image segmentation evaluation based on synthetic images. Comput. Geosci. 35, 1574–1581. doi: 10.1016/j.cageo.2008.11.008

Michel, J., Youssefi, D., Grizonnet, M. (2015). Stable mean-shift algorithm and its application to the segmentation of arbitrarily large remote sensing images. IEEE Trans. Geosci. Remote Sens. 53, 952–964. doi: 10.1109/TGRS.2014.2330857

Mortensen, A., Karstoft, H., Søegaard, K., Gislum, R., Jørgensen, R. (2017). Preliminary results of clover and grass coverage and total dry matter estimation in clover-grass crops using image analysis. J. Imaging 3, 59. doi: 10.3390/jimaging3040059

Neubert, M., Herold, H., Meinel, G. (2008). “Assessing image segmentation quality – concepts, methods and application,” in Object-Based Image Analysis, Lecture Notes in Geoinformation and Cartography. Eds. Blaschke, T., Lang, S., Hay, G. J. (Springer Berlin Heidelberg, Berlin, Heidelberg), 769–784. doi: 10.1007/978-3-540-77058-9_42

Nyfeler, D., Huguenin-Elie, O., Suter, M., Frossard, E., Lüscher, A. (2011). Grass–legume mixtures can yield more nitrogen than legume pure stands due to mutual stimulation of nitrogen uptake from symbiotic and non-symbiotic sources. Agric. Ecosyst. Environ. 140, 155–163. doi: 10.1016/j.agee.2010.11.022

OTB (2018). OTB CookBook Documentation Release 6.4.0. Paris, France: OTB Development Team at CNES. Available at: https://orfeo-toolbox.org/packages/archives/Doc/CookBook-6.4.0.pdf.

Pirhofer-Walzl, K., Rasmussen, J., Hogh-Jensen, H., Eriksen, J., Soegaard, K., Rasmussen, J. (2012). Nitrogen transfer from forage legumes to nine neighbouring plants in a multi-species grassland. Plant Soil 350, 71–84. doi: 10.1007/s11104-011-0882-z

Pranga, J., Borra-Serrano, I., Aper, J., De Swaef, T., Ghesquiere, A., Quataert, P., et al. (2021). Improving accuracy of herbage yield predictions in perennial ryegrass with UAV-based structural and spectral data fusion and machine learning. Remote Sens. 13, 3459. doi: 10.3390/rs13173459

Rochon, J. J., Doyle, C. J., Greef, J. M., Hopkins, A., Molle, G., Sitzia, M., et al. (2004). Grazing legumes in Europe: a review of their status, management, benefits, research needs and future prospects. Grass Forage Sci. 59, 197–214. doi: 10.1111/j.1365-2494.2004.00423.x

Rodriguez-Galiano, V. F., Ghimire, B., Rogan, J., Chica-Olmo, M., Rigol-Sanchez, J. P. (2012). An assessment of the effectiveness of a random forest classifier for land-cover classification. ISPRS J. Photogramm. Remote Sens. 67, 93–104. doi: 10.1016/j.isprsjprs.2011.11.002

Rognli, O. A., Pecetti, L., Kovi, M. R., Annicchiarico, P. (2021). Grass and legume breeding matching the future needs of European grassland farming. Grass Forage Sci. 76, 175–185. doi: 10.1111/gfs.12535

Skovsen, S., Dyrmann, M., Mortensen, A., Steen, K., Green, O., Eriksen, J., et al. (2017). Estimation of the botanical composition of clover-grass leys from RGB images using data simulation and fully convolutional neural networks. Sensors 17, 2930. doi: 10.3390/s17122930

Sun, S., Liang, N., Zuo, Z., Parsons, D., Morel, J., Shi, J., et al. (2021). Estimation of botanical composition in mixed clover–grass fields using machine learning-based image analysis. Front. Plant Sci. 12. doi: 10.3389/fpls.2021.622429

Teodoro, A. C., Araujo, R. (2016). Comparison of performance of object-based image analysis techniques available in open source software (Spring and Orfeo Toolbox/Monteverdi) considering very high spatial resolution data. J. Appl. Remote Sens. 10, 16011. doi: 10.1117/1.JRS.10.016011

Thilakarathna, M. S., McElroy, M. S., Chapagain, T., Papadopoulos, Y. A., Raizada, M. N. (2016). Belowground nitrogen transfer from legumes to non-legumes under managed herbaceous cropping systems. A review. Agron. Sustain. Dev. 36, 58. doi: 10.1007/s13593-016-0396-4

Troya-Galvis, A., Gançarski, P., Passat, N., Berti-Équille, L. (2015). Unsupervised quantification of under- and over-segmentation for object-based remote sensing image analysis. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 8, 1936–1945. doi: 10.1109/JSTARS.2015.2424457

Van Otten, N. (2023). F1 Score The Ultimate Guide: Formulas, Explanations, Examples, Advantages, Disadvantages, Alternatives & Python Code. Available online at: https://spotintelligence.com/2023/05/08/f1-score/ (Accessed 4.11.23).

Varo-Martínez, M.Á., Navarro-Cerrillo, R. M., Hernández-Clemente, R., Duque-Lazo, J. (2017). Semi-automated stand delineation in Mediterranean Pinus sylvestris plantations through segmentation of LiDAR data: The influence of pulse density. Int. J. Appl. Earth Obs. Geoinformation 56, 54–64. doi: 10.1016/j.jag.2016.12.002

Ventura, D., Napoleone, F., Cannucci, S., Alleaume, S., Valentini, E., Casoli, E., et al. (2022). Integrating low-altitude drone based-imagery and OBIA for mapping and manage semi natural grassland habitats. J. Environ. Manage. 321, 115723. doi: 10.1016/j.jenvman.2022.115723

Wijesingha, J., Astor, T., Schulze-Brüninghoff, D., Wachendorf, M. (2020). Mapping invasive Lupinus polyphyllus Lindl. in semi-natural grasslands using object-based image analysis of UAV-borne images. PFG – J. Photogramm. Remote Sens. Geoinformation Sci. 88, 391–406. doi: 10.1007/s41064-020-00121-0

Wright, M. N., Ziegler, A. (2017). ranger: A fast implementation of random forests for high dimensional data in C++ and R. J. Stat. Software 77, 1–17. doi: 10.18637/jss.v077.i01

Xue, J., Su, B. (2017). Significant remote sensing vegetation indices: A review of developments and applications. J. Sens. 2017, 1–17. doi: 10.1155/2017/1353691

Zhang, H., Fritts, J. E., Goldman, S. A. (2008). Image segmentation evaluation: A survey of unsupervised methods. Comput. Vis. Image Underst. 110, 260–280. doi: 10.1016/j.cviu.2007.08.003

Zhao, M., Meng, Q., Zhang, L., Hu, D., Zhang, Y., Allam, M. (2020). A fast and effective method for unsupervised segmentation evaluation of remote sensing images. Remote Sens. 12, 3005. doi: 10.3390/rs12183005

Zou, Y., Greenberg, J. A. (2019). A spatialized classification approach for land cover mapping using hyperspatial imagery. Remote Sens. Environ. 232, 111248. doi: 10.1016/j.rse.2019.111248

Keywords: OBIA, drone, supervised classification, pasture, Lolium, Trifolium, high-throughput field phenotyping

Citation: Pranga J, Borra-Serrano I, Quataert P, De Swaef T, Vanden Nest T, Willekens K, Ruysschaert G, Janssens IA, Roldán-Ruiz I and Lootens P (2024) Quantification of species composition in grass-clover swards using RGB and multispectral UAV imagery and machine learning. Front. Plant Sci. 15:1414181. doi: 10.3389/fpls.2024.1414181

Received: 08 April 2024; Accepted: 28 May 2024;
Published: 19 June 2024.

Edited by:

Shawn Carlisle Kefauver, University of Barcelona, Spain

Reviewed by:

Orly Enrique Apolo-Apolo, KU Leuven, Belgium
Dalia Andrea Plata Reyes, Universidad Autónoma del Estado de México, Mexico

Copyright © 2024 Pranga, Borra-Serrano, Quataert, De Swaef, Vanden Nest, Willekens, Ruysschaert, Janssens, Roldán-Ruiz and Lootens. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Peter Lootens, peter.lootens@ilvo.vlaanderen.be
