OPINION article

Front. Mar. Sci., 15 March 2024
Sec. Ocean Observation
This article is part of the Research Topic Towards Standards for Marine Environment Impact Assessment.

Towards standardizing automated image analysis with artificial intelligence for biodiversity

  • 1Key Laboratory of Marine Ecosystem Dynamics, Ministry of Natural Resources and Second Institute of Oceanography, Ministry of Natural Resources, Hangzhou, China
  • 2Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai, China

Introduction

Marine biodiversity, the variety of life in the oceans and seas, plays a key role through ecosystem services. These services fulfill diverse ecological functions, provide economic wealth and resources, including products from fisheries and aquaculture and active ingredients for pharmaceuticals, and contribute to cultural well-being (Goulletquer et al., 2014). Climate change and human activities, such as maritime transport, waste deposition, and resource exploitation, can affect marine biodiversity. To understand and conserve marine biodiversity, international cooperative projects, such as the Census of Marine Life (http://coml.org/), have been carried out, and huge amounts of imagery (still images and videos) of specimens and their habitats have been collected. Easy-to-access repositories have been established to store and manage these image data together with the associated information, such as the Ocean Biogeographic Information System (OBIS, https://obis.org), which integrates biological, physical, and chemical oceanographic data and focuses on geo-referenced marine biodiversity.

To assess environmental impact, surveys are needed to measure species richness and relative abundance. Compared with traditional survey methods, such as trawling, image-based underwater observations are less invasive and can achieve better spatial coverage within the monitored ecosystems. To provide statistical data for assessing variability, image analysis is necessary, such as recognizing biological individuals in the acquired images based on their morphological features. This analysis can be conducted with reference to the accumulating photographic atlases and handbooks on taxa (Desbruyères et al., 2006; Tilot, 2006; Simon-Lledó et al., 2019; Xu et al., 2020). However, manually analyzing many images based on morphological characteristics, for example for image annotation and classification, is labor-intensive and time-consuming; it also requires domain expertise, and the number of experts is limited. For these reasons, artificial intelligence (AI), especially deep learning (DL), has recently been applied to automated image analysis, including detection, recognition, and object tracking, from megafauna to plankton and microalgae (Cheng et al., 2019; Zhuang et al., 2021; Zhou et al., 2023). The AI tools that can be used for the analysis of marine field imagery, and the kinds of information they extract, have been well reviewed previously (Belcher et al., 2023). These AI-based tools improve efficiency compared with traditional manual strategies and have been used in practical applications, such as automated annotation of marine life imagery (Zhang et al., 2023) and detection of megabenthic fauna in optical images from the Clarion-Clipperton Zone (CCZ) in the Central Pacific, where intensive mining exploration is underway (Mbani et al., 2023). The analysis results may serve as general baseline data for assessing the impacts after the implementation of an activity. To build and use a machine learning-based pipeline, researchers usually need to define an analytical task, construct training and test datasets for the task, select and train a model, evaluate and improve its performance, and apply it to new data (Belcher et al., 2023). Standardization of these steps could improve the efficiency, reproducibility, and reliability of building AI-based tools.
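To make these pipeline steps concrete, the following is a minimal sketch in Python with PyTorch; the directory name fish_images/, the 80/20 split, and all hyperparameters are illustrative assumptions rather than settings from any cited study.

```python
# Minimal sketch of the pipeline described above: define the task (species
# classification), divide the data, train a model, and evaluate it.
# Dataset path, split ratio, and hyperparameters are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# 1) Construct the dataset: one sub-folder per taxon label.
full_set = datasets.ImageFolder("fish_images/", transform=transform)

# 2) Divide the data reproducibly (fixed seed, fixed 80/20 ratio).
n_train = int(0.8 * len(full_set))
train_set, test_set = random_split(
    full_set, [n_train, len(full_set) - n_train],
    generator=torch.Generator().manual_seed(42))

# 3) Select a model: a pre-trained ResNet-50 with a new classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, len(full_set.classes))

# 4) Train (fine-tune) on the training split.
loader = DataLoader(train_set, batch_size=32, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# 5) Evaluate on the held-out test split.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in DataLoader(test_set, batch_size=32):
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"test accuracy: {correct / total:.2%}")
```

Every choice made in such a script, from the seed used for the data division to the backbone version, influences the reported performance, which is why the sections below argue for documenting them.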

Achievements in data standardization

Image data are the basis for developing AI-based tools. To tackle the increasing heterogeneity of image formats and the resulting challenges for data management, the Darwin Core data standard was developed for publishing and integrating biodiversity information (Wieczorek et al., 2012), and a global standardized marine taxon reference image database (SMarTaR-ID) was introduced to promote data standardization in taxonomic image annotation (Howell et al., 2020). Moreover, frameworks have been developed to help make image data findable, accessible, interoperable, and reusable (FAIR) (Schoening et al., 2022). Benchmark datasets have been constructed and published for enabling artificial intelligence, such as the global image database FathomNet (www.fathomnet.org) (Katija et al., 2022). Ongoing ocean exploration is filling the vast knowledge gaps on the biodiversity of the global oceans, especially for deep-sea habitats. As exploration proceeds, these datasets are continually updated: images keep accumulating, and image annotations may change in line with validated taxonomic systems such as the World Register of Marine Species (WoRMS, https://www.marinespecies.org). WoRMS aims to provide an authoritative and comprehensive list of names of marine organisms, including taxonomic data in Darwin Core format, which can be downloaded by following the webservice manual and used for image labelling (Horton et al., 2017). These efforts support the standardization of the accumulating image data and thereby benefit biodiversity conservation and environmental impact assessment. For instance, the major ocean database on biodiversity in seabed regions beyond national jurisdiction has been reported to have flaws, such as duplicated datasets and significant taxonomic data-quality issues (Gilbert, 2023; Rabone et al., 2023), which could be corrected and updated by following existing standards for taxonomic and imagery data, such as Darwin Core (Wieczorek et al., 2012) and the FAIR principles (Schoening et al., 2022).
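As an illustration of keeping image labels aligned with a validated taxonomy, the sketch below queries the public WoRMS REST webservice for a scientific name and prefers the accepted record; the example species is illustrative, and production code would follow the webservice manual for batching and caching.

```python
# Sketch: validate an image label against WoRMS via its public REST
# webservice (https://www.marinespecies.org/rest/).
import requests

def worms_record(name):
    """Return a WoRMS record for `name`, or None if no match is found."""
    url = f"https://www.marinespecies.org/rest/AphiaRecordsByName/{name}"
    resp = requests.get(url, params={"like": "false", "marine_only": "true"},
                        timeout=30)
    if resp.status_code != 200:          # e.g. 204 when there is no match
        return None
    records = resp.json()
    # Prefer an accepted record; unaccepted names carry a `valid_name`
    # field pointing to the currently accepted name.
    for rec in records:
        if rec and rec.get("status") == "accepted":
            return rec
    return records[0] if records else None

rec = worms_record("Bathymodiolus platifrons")
if rec:
    print(rec["AphiaID"], rec["scientificname"],
          rec["status"], "->", rec.get("valid_name"))
```

Re-running such a check whenever a dataset is updated reveals image labels whose taxonomy has since been revised.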

Standardization gaps to be addressed in the development and application of AI-based tools for automated image analysis

Taking automated fish recognition as an example, many AI-based tools have been developed for this task (Saleh et al., 2022). The selection of image datasets and algorithms, as well as the data division and model parameters, is critical to the development of AI-based tools. Many image datasets and DL models can be chosen for automated fish recognition (Saleh et al., 2022), including Fish4Knowledge with 23 fish species (Boom et al., 2012), the Croatian fish dataset with 12 fish species (Jäger et al., 2015), WildFish with 1,000 fish species (Zhuang et al., 2018), WildFish++ with 2,348 fish species (Zhuang et al., 2021), and FishNet with 17,357 fish species (Khan et al., 2023). Moreover, various DL backbones, including convolutional neural networks (CNNs) such as AlexNet, EfficientNet, and ResNet-50, as well as the Vision Transformer, have been applied to automated image recognition. Differences in datasets, algorithms, and parameters can affect the performance of the trained AI-based tools, such as their accuracy and precision. Even on the same dataset, AI-based tools may perform differently because of differences in data division or classification models. For instance, although two studies on automated fish recognition both used images of the same 23 fish categories from the Fish4Knowledge dataset, differences in dataset division and model architecture led to different recognition accuracies: 99.45% (Tamou et al., 2018) and 98.79% (Deep and Dash, 2019), respectively.

In addition, developing machine learning-based tools usually requires programming to handle the vast data, for example to link images with taxonomic labels and dive data, and to train and fine-tune the models. Sharing the source code that accompanies published work makes the analysis repeatable and accelerates follow-up studies. However, code sharing is still not as widespread among researchers in marine science as it is in the broader machine learning community (Belcher et al., 2023). Sometimes the source code is not released together with the published work on AI-based tools (Belcher et al., 2023), even if it may be available upon request. A lack of detailed documentation for released AI-based tools can create barriers to international cooperation in biodiversity research. It is therefore necessary to clearly state the data sources, algorithms, source code, or software packages used, as well as the versions and parameters selected. Some possible specifications are recommended in the following section.
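Because the data division alone can shift reported accuracy, the exact split should be recorded and shared; below is a minimal sketch, in which the file names, the 23-category labels, and the 80/20 ratio are placeholders.

```python
# Sketch: reproducible dataset division with a recorded manifest, so that
# other researchers can reuse exactly the same train/test partition.
import json
from sklearn.model_selection import train_test_split

paths = [f"images/img_{i:05d}.jpg" for i in range(1000)]   # placeholder paths
labels = [i % 23 for i in range(1000)]                     # e.g. 23 categories

# A fixed seed and stratification make the split deterministic and balanced.
train_p, test_p, train_y, test_y = train_test_split(
    paths, labels, test_size=0.2, random_state=42, stratify=labels)

# Persist the exact division alongside the dataset identity.
manifest = {
    "dataset": "Fish4Knowledge",   # name of the source dataset
    "split_seed": 42,
    "test_size": 0.2,
    "train_files": train_p,
    "test_files": test_p,
}
with open("split_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```

Publishing such a manifest with the code would let studies like the two Fish4Knowledge examples above be compared on identical partitions.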

Specifications recommended for development and application of AI-based tools

The machine learning community is adopting standards for sharing released models with detailed accompanying documentation, including data sources and links to code repositories holding the code needed to repeat the analyses (Mitchell et al., 2019; Belcher et al., 2023). Universally adopted standards would increase transparency and benefit reproducibility, and the model-card framework can be used to document any trained machine learning model (Mitchell et al., 2019). It is recommended that researchers who develop and apply trained DL models to analyze biodiversity follow the same practice, such as clearly indicating which databases/datasets were used. To go a step further specifically for biodiversity surveys, beyond merely stating the number of images used to develop an AI-based tool, detailed information about the image data should also be released, such as source links through which the imagery and taxonomic data can be found, since database updates may modify the taxonomic labelling.
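A minimal sketch of what such accompanying documentation might look like, loosely following the model-card fields of Mitchell et al. (2019), is given below; every value, including the repository URL, is a hypothetical placeholder.

```python
# Sketch of a machine-readable model card (after Mitchell et al., 2019).
# All values are illustrative placeholders for a hypothetical model.
import json

model_card = {
    "model_details": {
        "name": "benthic-megafauna-classifier",
        "version": "1.2.0",
        "architecture": "ResNet-50, fine-tuned",
        "code": "https://github.com/example/benthic-classifier",  # hypothetical
    },
    "intended_use": "Morphotype-level annotation of seafloor imagery; "
                    "not intended for species-level identification.",
    "training_data": {
        "source": "https://www.fathomnet.org",   # link to the image source
        "snapshot_date": "2024-01-15",           # database state when trained
        "n_images": 52000,
        "taxonomy_reference": "WoRMS (https://www.marinespecies.org)",
    },
    "data_division": {"split_seed": 42, "train": 0.8, "test": 0.2},
    "evaluation": {"metric": "top-1 accuracy", "value": 0.91,
                   "taxonomic_level": "morphotype"},
    "caveats": "Performance degrades on blurry or poorly lit frames.",
}

with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```

Recording the snapshot date and the taxonomy reference addresses the point above: when the source database is updated, the card still pins down exactly which data and labels the model was trained on.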

As for consistency, even human experts such as taxonomists, with varying educational backgrounds and experience, may exhibit biases when analyzing identical images, for example giving different identifications. Global biodiversity surveys require cooperation among international researchers. In a benthic biodiversity survey of the international seabed, for instance, it is time-consuming and labor-intensive to review and compare the analysis results of different researchers. Assisted by a unified AI-based tool, it is relatively easy to review or re-analyze the data, which would help ensure the consistency of the analysis and improve comparability across studies. Additionally, underwater images taken in the field sometimes appear blurry because of the motion of the marine life or insufficient light. In such cases, it is difficult to identify marine life at the species or genus level from images alone (Mbani et al., 2023; Zhang et al., 2023); identification becomes harder as one moves from higher to lower taxonomic levels. It is therefore suggested that models be trained and evaluated at a variety of taxonomic levels, as illustrated by the sketch after this paragraph, and that the details of the performance evaluation procedures be disclosed, so that users can choose models suited to the context in which they are intended to be used. The documentation of data and code should be shared efficiently, for example through public repositories such as GitHub, and this practice may be included in the standard. Ideally, an authoritative, powerful, and user-friendly AI cloud platform could be developed to handle automated image analysis; such a platform would need to be kept up to date with the latest databases and state-of-the-art artificial intelligence technology. Researchers would then not need to program or set up a configuration environment to deploy AI-based tools themselves. This would be beneficial not only for improving the efficiency of surveying and monitoring global biodiversity, but also for standardizing the process and enhancing comparability in environmental impact assessment.
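To illustrate evaluation at multiple taxonomic levels, the sketch below maps species-level predictions upward through a small lookup table and reports accuracy per rank; the taxa and predictions are illustrative examples.

```python
# Sketch: score the same predictions at species, genus, and family level
# by mapping species names upward through an illustrative taxonomy table.
taxonomy = {
    # species:                  (genus,           family)
    "Gigantidas platifrons":    ("Gigantidas",    "Mytilidae"),
    "Bathymodiolus azoricus":   ("Bathymodiolus", "Mytilidae"),
    "Alvinocaris longirostris": ("Alvinocaris",   "Alvinocarididae"),
}

def to_rank(species, rank):
    genus, family = taxonomy[species]
    return {"species": species, "genus": genus, "family": family}[rank]

y_true = ["Gigantidas platifrons", "Bathymodiolus azoricus",
          "Alvinocaris longirostris", "Gigantidas platifrons"]
y_pred = ["Bathymodiolus azoricus", "Bathymodiolus azoricus",
          "Alvinocaris longirostris", "Gigantidas platifrons"]

for rank in ("species", "genus", "family"):
    hits = sum(to_rank(t, rank) == to_rank(p, rank)
               for t, p in zip(y_true, y_pred))
    print(f"{rank:8s} accuracy: {hits / len(y_true):.2f}")
```

Here a confusion between two mussel species is wrong at the species and genus levels but correct at the family level, mirroring how blurry imagery may still support reliable coarse-level identification.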

Discussion

AI-based assistant tools are increasingly used in marine ecological research, especially for analyzing the image data generated in field biodiversity surveys, and they might substantially boost the robustness of related research; there is therefore an urgent need to establish related standards. Reproducibility across experiments and comparability among results are essential in scientific research and international cooperation. To enhance reproducibility and comparability in AI-based automated image analysis, this opinion provides the following recommendations: 1) detailed documentation of biodiversity data sources and data division; 2) sharing of all code, algorithms, and software packages involved, together with the package versions and parameters used during development and application; 3) development of AI-based tools for automated image analysis at multiple taxonomic levels to suit different needs. Considering the progress of artificial intelligence and image databases, adaptive strategies can be used in standardization to integrate state-of-the-art technical advances and achieve best practices in automated image analysis, such as improved accuracy for fine-grained image analysis. The marine technology subcommittee of the International Organization for Standardization (ISO/TC 8/SC 13) focuses on developing standards for marine observation, exploration, and environmental protection, under which a family of standards on Marine Environment Impact Assessment (MEIA) already exists. For instance, the standard ISO 23731:2021, Performance specification for in situ image-based surveys in deep seafloor environments, provides recommendations for gathering image-based data at the seafloor. The ISO technical committees related to biodiversity (e.g., ISO/TC 8, ISO/TC 331) will play an important role in developing International Standards for automated image analysis in biodiversity, which could support biodiversity conservation and environmental protection.

Author contributions

PZ: Project administration, Methodology, Funding acquisition, Writing – review & editing, Writing – original draft, Supervision, Resources, Investigation, Conceptualization. Y-XB: Writing – review & editing, Writing – original draft. GF: Writing – review & editing, Funding acquisition. CW: Writing – review & editing. XP: Writing – review & editing, Funding acquisition. X-WX: Writing – review & editing.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work was sponsored by the National Key R&D Program of China (2022YFC2804005), and the Oceanic Interdisciplinary Program of Shanghai Jiao Tong University (No. SL2021MS005, SL2022ZD108).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers at the time of submission. This had no impact on the peer review process or the final decision.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Belcher B. T., Bower E. H., Burford B., Celis M. R., Fahimipour A. K., Guevara I. L., et al. (2023). Demystifying image-based machine learning: a practical guide to automated analysis of field imagery using modern machine learning tools. Front. Mar. Sci. 10. doi: 10.3389/fmars.2023.1157370

Boom B. J., Huang P. X., He J., Fisher R. B. (2012). “Supporting ground-truth annotation of image datasets using clustering,” in Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012) (Tsukuba, Japan: IEEE), 1542–1545.

Cheng K., Cheng X., Wang Y., Bi H., Benfield M. C. (2019). Enhanced convolutional neural network for plankton identification and enumeration. PloS One 14, e0219570. doi: 10.1371/journal.pone.0219570

Deep B. V., Dash R. (2019). “Underwater fish species recognition using deep learning techniques,” in 2019 6th International Conference on Signal Processing and Integrated Networks (SPIN) Noida, India: IEEE. 665–669.

Desbruyères D., Segonzac M., Bright M. (2006). Handbook of deep-sea hydrothermal vent fauna. Linz, Austria: Biologiezentrum der Oberosterreichischen Landesmuseen.

Gilbert N. (2023). Major ocean database that will guide deep-sea mining has flaws, scientists warn. Nature. doi: 10.1038/d41586-023-01303-7

Goulletquer P., Gros P., Boeuf G., Weber J. (2014). “The importance of marine biodiversity,” in Biodiversity in the marine environment. Eds. Goulletquer P., Gros P., Boeuf G., Weber J. (Springer Netherlands, Dordrecht), 1–13.

Horton T., Gofas S., Kroh A., Poore G., Read G., Rosenberg G., et al. (2017). Improving nomenclatural consistency: A decade of experience in the World Register of Marine Species. Eur. J. Taxonomy 389, 1–24. doi: 10.5852/ejt.2017.389

Howell K. L., Davies J. S., Allcock A. L., Braga-Henriques A., Buhl-Mortensen P., Carreiro-Silva M., et al. (2020). A framework for the development of a global standardised marine taxon reference image database (SMarTaR-ID) to support image-based analyses. PloS One 14, e0218904. doi: 10.1371/journal.pone.0218904

Jäger J., Simon M., Denzler J., Wolff V., Fricke-Neuderth K., Kruschel C. (2015). Croatian Fish Dataset: Fine-grained classification of fish species in their natural habitat. Swansea, United Kingdom: Machine Vision of Animals and their Behaviour Workshop. doi: 10.5244/C.29.MVAB.6

Katija K., Orenstein E., Schlining B., Lundsten L., Barnard K., Sainz G., et al. (2022). FathomNet: A global image database for enabling artificial intelligence in the ocean. Sci. Rep. 12, 15914. doi: 10.1038/s41598-022-19939-2

Khan F. F., Li X., Temple A. J., Elhoseiny M. (2023). “FishNet: A large-scale dataset and benchmark for fish recognition, detection, and functional trait prediction,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (Paris, France: IEEE), 20496–20506.

Mbani B., Buck V., Greinert J. (2023). An automated image-based workflow for detecting megabenthic fauna in optical images with examples from the Clarion–Clipperton Zone. Sci. Rep. 13, 8350. doi: 10.1038/s41598-023-35518-5

Mitchell M., Wu S., Zaldivar A., Barnes P., Vasserman L., Hutchinson B., et al. (2019). “Model cards for model reporting,” in Proceedings of the Conference on Fairness, Accountability, and Transparency (New York, NY, United States: Association for Computing Machinery), 220–229.

Rabone M., Horton T., Jones D. O. B., Simon-Lledó E., Glover A. G. (2023). A review of the International Seabed Authority database DeepData from a biological perspective: challenges and opportunities in the UN Ocean Decade. Database 2023, baad013. doi: 10.1093/database/baad013

Saleh A., Sheaves M., Rahimi Azghadi M. (2022). Computer vision and deep learning for fish classification in underwater habitats: A survey. Fish Fisheries 23, 977–999. doi: 10.1111/faf.12666

Schoening T., Durden J. M., Faber C., Felden J., Heger K., Hoving H.-J. T., et al. (2022). Making marine image data FAIR. Sci. Data 9, 414. doi: 10.1038/s41597-022-01491-3

Simon-Lledó E., Bett B. J., Huvenne V. A. I., Schoening T., Benoist N. M. A., Jeffreys R. M., et al. (2019). Megafaunal variation in the abyssal landscape of the Clarion Clipperton Zone. Prog. Oceanography 170, 119–133. doi: 10.1016/j.pocean.2018.11.003

Tamou A. B., Benzinou A., Nasreddine K., Ballihi L. (2018). Underwater live fish recognition by deep learning (Cham: Springer International Publishing), 275–283.

Tilot V. C. (2006). Biodiversity and distribution of the megafauna. Vol. 2: Annotated photographic atlas of the echinoderms of the Clarion-Clipperton fracture zone. Paris, France: UNESCO/IOC.

Wieczorek J., Bloom D., Guralnick R., Blum S., Döring M., Giovanni R., et al. (2012). Darwin core: an evolving community-developed biodiversity data standard. PloS One 7, e29715. doi: 10.1371/journal.pone.0029715

Xu K., Dong D., Gong L. (2020). Photographic atlas of megafauna on Yap-Mariana-Caroline seamounts in the Western Pacific Ocean (Beijing: Science Press), 239.

Zhang Z., Kaveti P., Singh H., Powell A., Fruh E., Clarke M. E. (2023). An iterative labeling method for annotating marine life imagery. Front. Mar. Sci. 10. doi: 10.3389/fmars.2023.1094190

Zhou S., Jiang J., Hong X., Fu P., Yan H. (2023). Vision meets algae: A novel way for microalgae recognization and health monitor. Front. Mar. Sci. 10. doi: 10.3389/fmars.2023.1105545

Zhuang P., Wang Y., Qiao Y. (2018). “WildFish: A large benchmark for fish recognition in the wild,” in Proceedings of the 26th ACM international conference on Multimedia (Seoul, Republic of Korea: Association for Computing Machinery). doi: 10.1145/3240508.3240616

Zhuang P., Wang Y., Qiao Y. (2021). Wildfish++: A comprehensive fish benchmark for multimedia research. IEEE Trans. Multimedia 23, 3603–3617. doi: 10.1109/TMM.2020.3028482

Keywords: automated image analysis, ocean exploration, biodiversity, marine environment impact assessment, standard, artificial intelligence

Citation: Zhou P, Bu Y-X, Fu G-Y, Wang C-S, Xu X-W and Pan X (2024) Towards standardizing automated image analysis with artificial intelligence for biodiversity. Front. Mar. Sci. 11:1349705. doi: 10.3389/fmars.2024.1349705

Received: 07 December 2023; Accepted: 01 March 2024;
Published: 15 March 2024.

Edited by:

Koichi Yoshida, Yokohama National University, Japan

Reviewed by:

Wong Yue Him, Shenzhen University, China

Copyright © 2024 Zhou, Bu, Fu, Wang, Xu and Pan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Peng Zhou, zhoupeng@sio.org.cn; Xiaoyong Pan, 2008xypan@sjtu.edu.cn
