
ORIGINAL RESEARCH article

Front. Neurosci.
Sec. Visual Neuroscience
Volume 18 - 2024 | doi: 10.3389/fnins.2024.1502499
This article is part of the Research Topic: Advances in Computer Vision: From Deep Learning Models to Practical Applications

Asymmetric Large Kernel Distillation Network for Efficient Single Image Super-Resolution

Provisionally accepted
  • 1 School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, China
  • 2 School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, China

The final, formatted version of the article will be published soon.

    Recently, significant advancements have been made in the field of efficient single-image super-resolution, primarily driven by the innovative concept of information distillation. This method adeptly leverages multi-level features to facilitate high-resolution image reconstruction, allowing for enhanced detail and clarity. However, many existing approaches predominantly emphasize the enhancement of distilled features, often overlooking the critical aspect of improving the feature extraction capability of the distillation module itself. In this paper, we address this limitation by introducing an asymmetric large-kernel convolution design. By increasing the size of the convolution kernel, we expand the receptive field, which enables the model to capture long-range dependencies among image pixels more effectively. This enhancement significantly improves the model's perceptual ability, leading to more accurate reconstructions. To keep model complexity manageable, we adopt a lightweight architecture that employs asymmetric convolution techniques. Building on this foundation, we propose the Lightweight Asymmetric Large Kernel Distillation Network (ALKDNet). Comprehensive experiments show that ALKDNet preserves efficiency while achieving state-of-the-art performance compared with existing super-resolution methods. The source code is available at https://github.com/Qudaokuan/ALKDNet.
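
    The abstract describes enlarging the receptive field with a large convolution kernel while controlling parameter count through asymmetric convolutions, i.e., decomposing a k x k kernel into 1 x k and k x 1 kernels. The following is a minimal PyTorch sketch of that general idea only; the module name, the depthwise/pointwise structure, the kernel size of 11, and the channel count of 48 are illustrative assumptions and do not reproduce the paper's actual ALKDNet block.

```python
import torch
import torch.nn as nn


class AsymmetricLargeKernelConv(nn.Module):
    """Illustrative asymmetric large-kernel block (hypothetical, not the authors' exact design).

    A k x k spatial kernel is decomposed into a 1 x k and a k x 1 depthwise
    convolution, which keeps a large receptive field while growing the
    parameter count roughly linearly in k instead of quadratically.
    """

    def __init__(self, channels: int, kernel_size: int = 11):
        super().__init__()
        pad = kernel_size // 2
        # Horizontal branch: 1 x k depthwise convolution
        self.conv_h = nn.Conv2d(channels, channels, (1, kernel_size),
                                padding=(0, pad), groups=channels)
        # Vertical branch: k x 1 depthwise convolution
        self.conv_v = nn.Conv2d(channels, channels, (kernel_size, 1),
                                padding=(pad, 0), groups=channels)
        # 1 x 1 convolution to mix information across channels
        self.pointwise = nn.Conv2d(channels, channels, 1)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.conv_v(self.conv_h(x))          # sequential 1xk then kx1
        return x + self.pointwise(self.act(y))   # residual connection


if __name__ == "__main__":
    block = AsymmetricLargeKernelConv(channels=48, kernel_size=11)
    out = block(torch.randn(1, 48, 64, 64))
    print(out.shape)  # torch.Size([1, 48, 64, 64])
```

    For comparison, a full 11 x 11 convolution over 48 channels would use 48 * 48 * 121 weights per layer, whereas the depthwise 1 x 11 and 11 x 1 pair above uses only 48 * 22 spatial weights plus a 48 * 48 pointwise mix, which is why asymmetric decomposition is attractive for lightweight super-resolution models.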

    Keywords: Single-image super-resolution, Efficient method, Asymmetric Large Kernel Convolution, Information distillation, Convolutional Neural Network

    Received: 27 Sep 2024; Accepted: 21 Oct 2024.

    Copyright: © 2024 Qu and Ke. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Daokuan Qu, School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, China

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.