ORIGINAL RESEARCH article

Front. Neurorobot.
Volume 18 - 2024 | doi: 10.3389/fnbot.2024.1423848

CLIB: Contrastive learning of ignoring background for underwater fish image classification

Provisionally accepted
Qiankun Yan, Xiujuan Du*, Chong Li, Xiaojing Tian
  • Qinghai Normal University, Xining, China

The final, formatted version of the article will be published soon.

    Existing methods are insufficiently robust to background-noise interference in underwater fish images. To address this, we propose CLIB, a contrastive learning method of ignoring background for underwater fish image classification, which improves classification accuracy and robustness. First, CLIB separates the subject from the background of an image through an extraction module, and these two components together with the original image form three complementary views for contrastive learning. To further improve CLIB's adaptability to complex underwater scenes, we propose a multi-view contrastive loss function whose core idea is to increase the similarity between the original image and the subject while maximizing the difference between the subject and the background, so that during training CLIB focuses on learning the core features of the subject and effectively ignores background noise. Experiments on the public Fish4Knowledge, Fish-gres, WildFish-30, and QUTFish-89 datasets show that our method performs well, with accuracy improvements of 1.43%-6.75%, 8.16%-8.95%, 13.1%-14.82%, and 3.92%-6.19%, respectively, over the baseline models, further validating the effectiveness of CLIB.
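    The abstract describes the core idea of the multi-view contrastive loss but not its exact form. A minimal illustrative sketch of that idea, assuming cosine-similarity-based contrastive terms over embeddings of the original image, the extracted subject, and the background (the function and temperature parameter `tau` here are hypothetical, not the authors' published formulation):

    ```python
    import numpy as np

    def cosine_sim(a, b):
        """Row-wise cosine similarity between two batches of embeddings."""
        a = a / np.linalg.norm(a, axis=-1, keepdims=True)
        b = b / np.linalg.norm(b, axis=-1, keepdims=True)
        return np.sum(a * b, axis=-1)

    def multi_view_contrastive_loss(z_orig, z_subj, z_bg, tau=0.5):
        """Sketch of a CLIB-style objective: treat (original, subject) as the
        positive pair and (subject, background) as the negative pair, so
        minimizing the loss pulls the subject toward the original image's
        embedding and pushes it away from the background's embedding."""
        pos = np.exp(cosine_sim(z_orig, z_subj) / tau)
        neg = np.exp(cosine_sim(z_subj, z_bg) / tau)
        return float(np.mean(-np.log(pos / (pos + neg))))
    ```

    Under this sketch the loss is small when the subject embedding matches the original image and opposes the background, and large when the subject is indistinguishable from the background, matching the stated goal of ignoring background noise.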

    Keywords: underwater fish image classification, contrastive learning, deep learning, self-supervised visual representation learning, background noise

    Received: 26 Apr 2024; Accepted: 17 Jul 2024.

    Copyright: © 2024 Yan, Du, Li and Tian. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Xiujuan Du, Qinghai Normal University, Xining, China

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.