Defense against adversarial attacks: robust and efficient compressed optimized neural networks.
Kraidia, Insaf; Ghenai, Afifa; Belhaouari, Samir Brahim.
Affiliation
  • Kraidia I; LIRE Laboratory, University of Constantine 2 - Abdelhamid Mehri, Ali Mendjeli Campus, 25000, Constantine, Algeria. insaf.kraidia@univ-constantine2.dz.
  • Ghenai A; LIRE Laboratory, University of Constantine 2 - Abdelhamid Mehri, Ali Mendjeli Campus, 25000, Constantine, Algeria.
  • Belhaouari SB; Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Ar-Rayyan, Qatar. sbelhaouari@hbku.edu.qa.
Sci Rep ; 14(1): 6420, 2024 Mar 17.
Article in English | MEDLINE | ID: mdl-38494519
ABSTRACT
In the ongoing battle against adversarial attacks, a suitable strategy must enhance model efficiency, bolster resistance to adversarial threats, and remain practical to deploy. To achieve this goal, a novel four-component methodology is introduced. First, the exponential particle swarm optimization (ExPSO) algorithm was developed with a pioneering batch-cumulative approach: parameters are meticulously fine-tuned within each batch, while a cumulative updating loss function drives the overall optimization, demonstrating remarkable superiority over traditional optimization techniques. Second, weight compression streamlines the deep neural network (DNN) parameters, boosting storage efficiency and accelerating inference; it also introduces complexity that deters potential attackers, enhancing model accuracy in adversarial settings. This study compresses the generative pre-trained transformer (GPT) by 65%, saving time and memory without performance loss. Compared with state-of-the-art methods, the proposed method achieves the lowest perplexity (14.28), the highest accuracy (93.72%), and an 8× speedup on the central processing unit. The integration of the two preceding components involves simultaneously training multiple versions of the compressed GPT, across various compression rates and different segments of a dataset, combined in a novel multi-expert architecture. This significantly fortifies the model's resistance to adversarial attacks by making it harder for attackers to anticipate the model's prediction-integration process. Consequently, it yields a remarkable average performance improvement of 25% across 14 attack scenarios and various datasets, surpassing current state-of-the-art methods.
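The abstract does not specify the compression scheme; a minimal sketch of one common choice, magnitude-based weight pruning at a 65% rate (the figure reported above), is shown below. The function name `compress_weights` and the use of pruning rather than another compression method are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def compress_weights(weights: np.ndarray, rate: float = 0.65) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `rate` of the weights.

    Magnitude pruning is used here only as a hypothetical stand-in for
    the unspecified compression step described in the abstract.
    """
    flat = np.abs(weights).ravel()
    k = int(rate * flat.size)          # number of weights to prune
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold             # keep only larger weights
    return weights * mask

# Illustrative usage on a small random weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))
cw = compress_weights(w, rate=0.65)
sparsity = 1.0 - np.count_nonzero(cw) / cw.size
```

Pruned weights can then be stored in a sparse format, which is one way such compression saves memory and speeds up CPU inference.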
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Sci Rep Year: 2024 Document type: Article Country of affiliation: Algeria Country of publication: United Kingdom