Neural networks can be drastically shrunk in size by removing redundant parameters. While crucial for deployment on resource-constrained hardware, compression often comes with a severe drop in accuracy and a lack of adversarial robustness. Despite recent advances, counteracting both effects has so far only succeeded for moderate compression rates. We propose a novel method, HARP, that copes with aggressive pruning significantly better than prior work. To this end, we consider the network holistically: We learn a global compression strategy that optimizes how many parameters to prune (compression rate) and which parameters to prune (scoring connections) for each layer individually. Our method fine-tunes an existing model with a dynamic regularization that follows a step-wise incremental function balancing the different objectives: It starts by favoring robustness, then shifts focus to reaching the target compression rate, and only then handles both objectives equally. The learned compression strategies allow us to maintain the pre-trained model's natural accuracy and its adversarial robustness while removing 99% of the network's original parameters. Moreover, we observe a crucial influence of non-uniform compression across layers.
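To illustrate the core idea, the sketch below shows one possible way to combine a learnable per-layer pruning mask (importance scores plus a learnable keep-ratio) with a step-wise loss weighting between robustness and compression. This is a minimal, hypothetical PyTorch sketch under our own assumptions, not the official HARP implementation; all names (PrunedLinear, loss_weights, total_loss) and the concrete schedule values are illustrative only. Please refer to the repository linked below for the actual code.

```python
# Minimal sketch of the idea described above (NOT the official HARP code).
# Assumptions: per-weight importance scores and a per-layer keep-ratio are
# learned jointly with the weights, and the weighting between the robustness
# loss and a compression penalty follows a step-wise schedule.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrunedLinear(nn.Module):
    """Linear layer with learnable importance scores and a learnable
    (sigmoid-parameterized) keep-ratio for this layer."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.score = nn.Parameter(torch.rand(out_features, in_features))
        self.rate_logit = nn.Parameter(torch.zeros(1))  # per-layer rate parameter

    def keep_ratio(self):
        return torch.sigmoid(self.rate_logit)  # fraction of weights to keep

    def forward(self, x):
        # Keep the top-k weights according to the learned scores.
        k = max(1, int(self.keep_ratio().item() * self.weight.numel()))
        threshold = torch.topk(self.score.flatten(), k).values.min()
        mask = (self.score >= threshold).float()
        # Straight-through estimator: forward uses the hard mask,
        # gradients flow to the scores as if the mask were the score itself.
        soft_mask = self.score + (mask - self.score).detach()
        return F.linear(x, self.weight * soft_mask)


def loss_weights(epoch, warmup=10, push=30):
    """Step-wise schedule: favor robustness first, then emphasize reaching the
    target compression rate, and finally balance both objectives equally."""
    if epoch < warmup:
        return 1.0, 0.1
    if epoch < push:
        return 0.5, 2.0
    return 1.0, 1.0


def total_loss(robust_loss, model, target_keep, epoch):
    """Combine an (adversarial) robustness loss with a penalty on layers that
    keep more parameters than the global compression target allows."""
    w_rob, w_comp = loss_weights(epoch)
    rates = [m.keep_ratio() for m in model.modules() if isinstance(m, PrunedLinear)]
    comp_penalty = sum((r - target_keep).clamp(min=0) ** 2 for r in rates)
    return w_rob * robust_loss + w_comp * comp_penalty
```

Because each layer carries its own rate parameter, the optimization is free to prune layers non-uniformly as long as the overall compression target is met, which matches the observation above that non-uniform compression across layers is crucial.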
For further details, please consult the conference publication.
The figure below gives an overview of pruning the weights of a VGG16 model on CIFAR-10 (left) and SVHN (right) with PGD-10 adversarial training. Solid lines show the natural accuracy; dashed lines represent the robustness against AutoAttack.
For the sake of reproducibility and to foster future research, we make the implementation of HARP for generating non-uniform pruning strategies publicly available at:
https://github.com/intellisec/harp
A detailed description of our work will be presented at the International Conference on Learning Representations (ICLR 2023) in May 2023. If you would like to cite our work, please use the reference provided below:
@InProceedings{Zhao2023Holistic,
  author    = {Qi Zhao and Christian Wressnegger},
  booktitle = {Proc. of the International Conference on Learning Representations (ICLR)},
  title     = {Holistic Adversarially Robust Pruning},
  year      = {2023},
  month     = may,
}
A preprint of the paper is available here.