This repository contains all code files for my science research project, titled "Safeguarding Personal Privacy by Using Adversarial Machine Learning Against Invasive Facial Recognition Models."
File structure:
- VGG_Face_Images_Targeted contains original images, adversarial images, and log files for the targeted adversarial model.
- VGG_Face_Images_Untargeted contains original images, adversarial images, and log files for the untargeted adversarial model.
- The mask folders contain the perturbation applied to the original image, saved every 5 iterations. Note that these images are amplified by a factor of 10; otherwise, the perturbation would be too faint to see (see the amplification sketch after this list).
- The adversary folders contain the adversarial image generated by the model, saved every 5 iterations.
- The red number in the top-left corner of each adversarial image is the iteration count.
- The green text on each adversarial image shows the identity that the VGG-Face model assigns to the face (see the annotation sketch after this list).
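
Below is a minimal sketch of how a perturbation mask could be amplified for visibility, assuming the mask is a float NumPy array in [-1, 1] (adversarial image minus original image); the function name, value range, and default factor are illustrative assumptions, not the project's actual code.

```python
import numpy as np
from PIL import Image

def save_amplified_mask(mask: np.ndarray, path: str, factor: float = 10.0) -> None:
    """Save a perturbation mask as an image, amplified so it is visible.

    Assumes `mask` is an H x W x 3 float array in [-1, 1]
    (adversarial image minus original image).
    """
    # Amplify the tiny perturbation, then clip back into the valid range.
    amplified = np.clip(mask * factor, -1.0, 1.0)
    # Shift [-1, 1] into [0, 255] so it can be stored as an 8-bit image.
    pixels = ((amplified + 1.0) / 2.0 * 255.0).astype(np.uint8)
    Image.fromarray(pixels).save(path)
```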
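Similarly, here is a hedged sketch of the annotation described above, using Pillow; the coordinates, default font, and function name are assumptions for illustration only.

```python
from PIL import Image, ImageDraw

def annotate_adversary(image_path: str, iteration: int,
                       predicted_name: str, out_path: str) -> None:
    """Stamp the iteration count (red, top left) and the VGG-Face
    prediction (green) onto an adversarial image."""
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Red iteration counter in the top-left corner.
    draw.text((5, 5), str(iteration), fill=(255, 0, 0))
    # Green label with the model's predicted identity, just below it.
    draw.text((5, 20), predicted_name, fill=(0, 255, 0))
    img.save(out_path)
```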
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.