ORIGINAL RESEARCH article

Front. Mar. Sci.
Sec. Ocean Observation
Volume 12 - 2025 | doi: 10.3389/fmars.2025.1541265

MUFFNet: Lightweight dynamic underwater image enhancement network based on multi-scale frequency

Provisionally accepted
  • 1 School of Artificial Intelligence, Henan Institute of Science and Technology, Xinxiang, China
  • 2 National and Local Joint Engineering Laboratory of Internet Application Technology on Mine, China University of Mining and Technology, Xuzhou, China
  • 3 School of Information Science and Engineering, Shenyang University of Technology, Shenyang, Liaoning Province, China

The final, formatted version of the article will be published soon.

    The advancement of Underwater Human-Robot Interaction technology has significantly driven marine exploration, conservation, and resource utilization. However, challenges persist due to the limitations of underwater robots equipped with basic cameras, which struggle to handle complex underwater environments. This leads to blurry images, severely hindering the performance of automated systems. We propose MUFFNet, an underwater image enhancement network that leverages multi-scale frequency analysis to address this challenge. The network introduces a frequency-domain-based convolutional attention mechanism to extract spatial information effectively. A Multi-Scale Enhancement Prior algorithm enhances high- and low-frequency features, while the Information Flow Interaction module mitigates information stratification and blockage. A Multi-Scale Joint Loss framework facilitates dynamic network optimization. Experimental results demonstrate that MUFFNet outperforms existing state-of-the-art models while consuming fewer computational resources and aligning enhanced images more closely with human visual perception.
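    To illustrate the general idea behind frequency-domain enhancement of the kind the abstract describes, the sketch below splits an image into low- and high-frequency bands with an FFT mask and reweights each band by its normalized energy, a simple stand-in for a learned attention mechanism. This is a minimal illustration only: the function name, the cutoff_ratio parameter, and the energy-based weighting are assumptions for demonstration, not taken from the MUFFNet architecture.

    ```python
    import numpy as np

    def frequency_band_attention(image, cutoff_ratio=0.25):
        """Toy frequency-domain attention over a 2-D grayscale image.

        Splits the spectrum into low/high bands with a circular mask,
        then recombines the bands weighted by their relative energy.
        All names and parameters here are illustrative.
        """
        h, w = image.shape
        spectrum = np.fft.fftshift(np.fft.fft2(image))

        # Circular low-pass mask centred on the zero-frequency component.
        yy, xx = np.ogrid[:h, :w]
        dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
        low_mask = dist <= cutoff_ratio * min(h, w)

        # Inverse-transform each band back to the spatial domain.
        low = np.fft.ifft2(np.fft.ifftshift(spectrum * low_mask)).real
        high = np.fft.ifft2(np.fft.ifftshift(spectrum * (~low_mask))).real

        # Energy-based weights stand in for learned attention weights.
        e_low, e_high = np.sum(low ** 2), np.sum(high ** 2)
        total = e_low + e_high + 1e-12
        return (e_low / total) * low + (e_high / total) * high
    ```

    In a trained network such as the one described above, the fixed energy weights would be replaced by learned, spatially varying attention maps, but the band-split-reweight-recombine structure is the same basic pattern.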

    Keywords: Underwater image enhancement, underwater human-robot interaction, multi-scale knowledge, multi-frequency extraction, Convolutional attention, deep learning

    Received: 07 Dec 2024; Accepted: 20 Jan 2025.

    Copyright: © 2025 Kong, Zhang, Zhao, Wang and Cai. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence:
    Dechuan Kong, School of Artificial Intelligence, Henan Institute of Science and Technology, Xinxiang, China
    Xiaohu Zhao, National and Local Joint Engineering Laboratory of Internet Application Technology on Mine, China University of Mining and Technology, Xuzhou, China

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.