Research Area:  Machine Learning
Deep Learning algorithms have achieved state-of-the-art performance in Image Classification. For this reason, they have been adopted even in security-critical applications, such as biometric recognition systems and self-driving cars. However, recent works have shown that these algorithms, which can even surpass human capabilities, are vulnerable to adversarial examples. In Computer Vision, adversarial examples are images containing subtle perturbations, generated by malicious optimization algorithms, that fool classifiers. In an attempt to mitigate these vulnerabilities, numerous countermeasures have recently been proposed in the literature. However, devising an efficient defense mechanism has proven to be a difficult task, since many approaches have turned out to be ineffective against adaptive attackers. Thus, this article aims to provide a broad readership with a review of the latest research progress on Adversarial Machine Learning in Image Classification, written from a defender's perspective. It introduces novel taxonomies for categorizing adversarial attacks and defenses, and discusses possible reasons for the existence of adversarial examples. In addition, it provides guidance to assist researchers in devising and evaluating defenses. Finally, based on the reviewed literature, this article suggests some promising paths for future research.
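To make the notion of a gradient-based perturbation concrete, the sketch below applies a single signed-gradient step, in the spirit of the Fast Gradient Sign Method, to a toy linear classifier. This is a minimal illustration under assumed names and values (it is not taken from the surveyed paper): a hypothetical linear model with analytic gradients stands in for a deep network.

```python
import numpy as np

# Toy setup: a linear "classifier" sigmoid(w.x + b) attacked with one
# FGSM-style step. All weights and inputs here are hypothetical.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # fixed weights of the toy classifier
b = 0.0
x = rng.normal(size=16)   # stand-in for a flattened input image
y = 1.0                   # assumed true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x):
    # Binary cross-entropy for the true label y = 1
    p = sigmoid(w @ x + b)
    return -np.log(p)

# Gradient of the loss w.r.t. the INPUT x (analytic for this linear model):
# dL/dx = (p - y) * w
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM step: a small perturbation in the direction of the gradient's sign,
# bounded in each pixel by eps, which makes it visually subtle.
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

print(loss(x), loss(x_adv))  # the adversarial input has a strictly higher loss
```

For a linear model the signed-gradient direction is exactly the worst case within the eps-bounded box, which is why a single step suffices here; deep networks require the gradient to be obtained by backpropagation instead, and stronger attacks iterate this step.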
Keywords:  
Deep Learning algorithm
image classification
biometric recognition
self-driving car
computer vision
adaptive attackers
Author(s) Name:  Gabriel Resende Machado, Eugênio Silva, Ronaldo Ribeiro Goldschmidt
Journal name:  ACM Computing Surveys
Conference name:  
Publisher name:  ACM
DOI:  https://doi.org/10.1145/3485133
Volume Information:  Volume 55, Issue 1, Article No.: 8, pp. 1–38, January 2023
Paper Link:   https://dl.acm.org/doi/abs/10.1145/3485133