Research Area:  Machine Learning
Abstract:  Anomaly detection methods require high-quality features. In recent years, the anomaly detection community has attempted to obtain better features using advances in deep self-supervised feature learning. Surprisingly, a very promising direction, using pre-trained deep features, has been mostly overlooked. In this paper, we first empirically establish the perhaps expected, but unreported, result that combining pre-trained features with simple anomaly detection and segmentation methods convincingly outperforms much more complex state-of-the-art methods. In order to obtain further performance gains in anomaly detection, we adapt pre-trained features to the target distribution. Although transfer learning methods are well established in multi-class classification problems, the one-class classification (OCC) setting is not as well explored. It turns out that naive adaptation methods, which typically work well in supervised learning, often result in catastrophic collapse (feature deterioration) and reduce performance in OCC settings. A popular OCC method, DeepSVDD, advocates using specialized architectures, but this limits the adaptation performance gain. We propose two methods for combating collapse: i) a variant of early stopping that dynamically learns the stopping iteration; ii) elastic regularization inspired by continual learning.
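The simple baseline the abstract refers to pairs frozen pre-trained features with a k-nearest-neighbors anomaly score. Below is a minimal sketch of that idea; the backbone (a torchvision ResNet-18), the value of k, and the feature normalization are illustrative assumptions, not necessarily the paper's exact configuration.

```python
import torch
import torchvision.models as models

# Frozen pretrained backbone with the classifier head removed
# (backbone choice is an illustrative assumption, not the paper's exact setup).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(images):
    """Penultimate-layer features for a batch of preprocessed images."""
    feats = backbone(images)                      # (N, 512)
    return torch.nn.functional.normalize(feats, dim=1)

def knn_anomaly_score(test_feats, train_feats, k=2):
    """Score = mean distance to the k nearest normal training features."""
    dists = torch.cdist(test_feats, train_feats)  # (N_test, N_train)
    knn, _ = dists.topk(k, dim=1, largest=False)
    return knn.mean(dim=1)                        # higher = more anomalous
```

For the second proposed anti-collapse method, elastic regularization inspired by continual learning, a hedged sketch of an EWC-style penalty is shown below. The diagonal Fisher estimate, the snapshot dictionary `theta0`, the weight `lam`, and the compactness loss in the comment are assumptions for illustration, not the authors' published code.

```python
def ewc_penalty(model, theta0, fisher, lam=1e3):
    """EWC-style penalty anchoring weights near their pretrained values.

    theta0: dict of pretrained parameter tensors (detached snapshot) -- assumed
    fisher: dict of diagonal Fisher estimates, same keys as theta0   -- assumed
    lam:    regularization strength (illustrative value)
    """
    penalty = sum(
        (fisher[n] * (p - theta0[n]) ** 2).sum()
        for n, p in model.named_parameters()
    )
    return lam * penalty

# During adaptation, the total loss would combine a one-class objective
# (e.g., a compactness loss toward a fixed center) with this penalty:
#   loss = compactness_loss(features, center) + ewc_penalty(model, theta0, fisher)
```

The penalty pulls adapted weights back toward their pre-trained values in proportion to each parameter's estimated importance, which is one way to slow the feature deterioration the abstract calls catastrophic collapse.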
Keywords:  
Author(s) Name:  Tal Reiss; Niv Cohen; Liron Bergman; Yedid Hoshen
Journal name:  
Conference name:  IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher name:  IEEE
DOI:  10.1109/CVPR46437.2021.00283
Volume Information:  
Paper Link:  https://ieeexplore.ieee.org/document/9577693