Detection by Attack: Detecting Adversarial Samples by Undercover Attack
Updated Feb 13, 2021 (Python)
Adversarial attack generation techniques for CIFAR-10, implemented in PyTorch: L-BFGS, FGSM, I-FGSM, MI-FGSM, DeepFool, C&W, JSMA, One-Pixel, UPSET