ORIGINAL RESEARCH article

Front. High Perform. Comput.
Sec. Cloud Computing
Volume 2 - 2024 | doi: 10.3389/fhpcp.2024.1301384
This article is part of the Research Topic "Real-Time Machine Learning on Edge Devices with HPC Support".

Neural Architecture Search for Adversarial Robustness via Learnable Pruning

Provisionally accepted

The final, formatted version of the article will be published soon.

    The convincing performance of deep neural networks (DNNs) can be degraded tremendously by malicious inputs known as adversarial examples. In addition, with the proliferation of edge platforms, it is essential to reduce DNN model size for efficient deployment on resource-limited edge devices. To achieve both adversarial robustness and model sparsity, we propose a robustness-aware search framework, an Adversarial Neural Architecture Search by the Pruning policy (ANAS-P). The layer-wise width is searched automatically via a binary convolutional mask, termed the Depth-wise Differentiable Binary Convolutional indicator (D2BC). Through comprehensive experiments on three classification datasets (CIFAR-10, CIFAR-100, and Tiny-ImageNet) with two adversarial training losses (TRADES and MART), we empirically demonstrate the effectiveness of ANAS-P in terms of both clean accuracy and adversarial robust accuracy across various sparsity levels. ANAS-P outperforms previous representative methods, with especially significant improvements in high-sparsity settings.
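    As a rough illustration of the kind of mechanism the abstract describes, the sketch below implements a learnable per-channel binary gate trained with a straight-through estimator in PyTorch. The class name, initialization, and thresholding here are assumptions made for illustration only; they are not taken from the ANAS-P implementation or the D2BC indicator as published.

    ```python
    import torch
    import torch.nn as nn

    class BinaryChannelMask(nn.Module):
        """Hypothetical per-channel binary gate with a straight-through estimator.

        A minimal sketch of a differentiable binary indicator applied to the output
        channels of a convolution; not the authors' released code.
        """

        def __init__(self, num_channels: int):
            super().__init__()
            # Real-valued scores, one per channel; initialized so all gates start open.
            self.scores = nn.Parameter(torch.ones(num_channels))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Hard 0/1 mask in the forward pass, sigmoid gradient in the backward pass
            # (straight-through estimator), so the layer width stays searchable by SGD.
            soft = torch.sigmoid(self.scores)
            hard = (soft > 0.5).float()
            mask = hard + soft - soft.detach()
            # Broadcast over (N, C, H, W) activations to zero out pruned channels.
            return x * mask.view(1, -1, 1, 1)

    # Usage example: gate a conv layer's output during the architecture search.
    conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
    gate = BinaryChannelMask(64)
    out = gate(conv(torch.randn(2, 3, 32, 32)))  # channels whose score drops below 0 are masked
    ```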

    Keywords: efficient AI, neural network sparsity, neural architecture search, adversarial robustness, adversarial pruning

    Received: 24 Sep 2023; Accepted: 18 Jun 2024.

    Copyright: © 2024 Li, Zhao, Ding, Zhou, Fei, Xu and Lin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence:
    Yize Li, Northeastern University, Boston, United States
    Xue Lin, Northeastern University, Boston, United States

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.