
ORIGINAL RESEARCH article

Front. Phys.
Sec. Quantum Engineering and Technology
Volume 13 - 2025 | doi: 10.3389/fphy.2025.1529188
This article is part of the Research Topic Advancing Quantum Computation: Optimizing Algorithms and Error Mitigation in NISQ Devices

Optimizing Quantum Convolutional Neural Network Architectures for Arbitrary Data Dimension

Provisionally accepted
Changwon Lee 1, Israel F Araujo 1, Dongha Kim 2, Junghan Lee 1, Siheon Park 3, Ju-Young Ryu 2,4, Daniel Kyungdeock Park 1*
  • 1 Yonsei University, Seoul, Republic of Korea
  • 2 Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  • 3 Seoul National University, Seoul, Republic of Korea
  • 4 Norma Inc., Seoul, Republic of Korea

The final, formatted version of the article will be published soon.

    Quantum convolutional neural networks (QCNNs) represent a promising approach in quantum machine learning, paving new directions for both quantum and classical data analysis. This approach is particularly attractive because it avoids the barren plateau problem, a fundamental challenge in training quantum neural networks (QNNs), and because it is feasible on near-term devices. However, a limitation arises when applying QCNNs to classical data. The network architecture is most natural when the number of input qubits is a power of two, as this number is halved in each pooling layer. The number of input qubits determines the dimension (i.e., the number of features) of the input data that can be processed, restricting the applicability of QCNN algorithms to real-world data. To address this issue, we propose a QCNN architecture capable of handling arbitrary input data dimensions while optimizing the allocation of quantum resources such as ancillary qubits and quantum gates. This optimization is not only important for minimizing computational resources, but also essential in noisy intermediate-scale quantum (NISQ) computing, where the size of the quantum circuits that can be executed reliably is limited. Through numerical simulations, we benchmarked the classification performance of various QCNN architectures across multiple datasets with arbitrary input data dimensions, including MNIST, Landsat satellite, Fashion-MNIST, and Ionosphere. The results validate that the proposed QCNN architecture achieves excellent classification performance with minimal resource overhead, providing an optimal solution when reliable quantum computation is constrained by noise and imperfections.
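    The power-of-two constraint described above can be made concrete with a short sketch. The helper below (hypothetical, not from the paper) traces how many qubits remain active after each pooling layer when each layer discards roughly half of them, and how padding an arbitrary qubit count up to the next power of two inflates the ancilla overhead:

```python
import math

def qcnn_qubit_schedule(n_qubits: int, pad_to_power_of_two: bool = False):
    """Return the active-qubit count after each pooling layer.

    Each pooling layer traces out half of the active qubits, rounding up
    when the count is odd. With `pad_to_power_of_two`, ancilla qubits are
    added first so that every layer halves the count exactly.
    """
    if pad_to_power_of_two:
        n_qubits = 2 ** math.ceil(math.log2(n_qubits))
    schedule = [n_qubits]
    while schedule[-1] > 1:
        schedule.append(math.ceil(schedule[-1] / 2))
    return schedule

# 10 input qubits suffice to amplitude-encode 784 MNIST features (2**10 = 1024):
print(qcnn_qubit_schedule(10))                            # [10, 5, 3, 2, 1]
print(qcnn_qubit_schedule(10, pad_to_power_of_two=True))  # [16, 8, 4, 2, 1]
```

    The second call illustrates the cost the paper targets: naive padding to 16 qubits adds 6 ancillas before any computation begins, which is exactly the kind of overhead an arbitrary-dimension architecture should avoid on NISQ hardware.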

    Keywords: Quantum computing, Quantum machine learning, machine learning, quantum circuit, Quantum algorithm

    Received: 16 Nov 2024; Accepted: 03 Feb 2025.

    Copyright: © 2025 Lee, Araujo, Kim, Lee, Park, Ryu and Park. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Daniel Kyungdeock Park, Yonsei University, Seoul 03722, Republic of Korea

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.