# Adversarial attack against VGG-Face facial recognition model

This repository contains all code files for my science research project, "Safeguarding Personal Privacy by Using Adversarial Machine Learning Against Invasive Facial Recognition Models."

File structure:

- `VGG_Face_Images_Targeted` contains the original images, adversarial images, and log files for the targeted attack.
- `VGG_Face_Images_Untargeted` contains the original images, adversarial images, and log files for the untargeted attack.
- The mask folders contain the perturbation applied to the original image every 5 iterations. Note that these images are amplified by a factor of 10; otherwise, the perturbation would be too faint to see.
- The adversary folders contain the adversarial image generated by the model every 5 iterations (see the sketch after this list).
  - The red number in the top-left corner of each adversarial image is the iteration count.
  - The green text on each adversarial image is the identity that the VGG-Face model assigns to the face.
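
For reference, the snippet below is a minimal sketch of the kind of iterative attack loop that produces these outputs, assuming a PyTorch model that returns logits over VGG-Face identity classes. The actual code in this repository may use a different framework, and the names `model`, `original`, `label`, and `out_dir`, as well as the step sizes and iteration count, are illustrative placeholders rather than this project's real values.

```python
import torch
import torch.nn.functional as F
from torchvision.utils import save_image


def iterative_attack(model, original, label, targeted=False,
                     steps=100, step_size=1 / 255, eps=8 / 255,
                     out_dir="."):
    """Iterative FGSM/PGD-style attack (illustrative sketch).

    `label` is the true identity for the untargeted attack, or the
    desired target identity for the targeted attack.
    """
    adv = original.clone().detach()
    for i in range(1, steps + 1):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            # Untargeted: ascend the loss (push away from the true identity).
            # Targeted: descend the loss (push toward the target identity).
            direction = -grad.sign() if targeted else grad.sign()
            adv = adv + step_size * direction
            # Keep the perturbation inside an eps-ball and a valid pixel range.
            adv = original + torch.clamp(adv - original, -eps, eps)
            adv = torch.clamp(adv, 0.0, 1.0)
        if i % 5 == 0:
            mask = adv - original
            # The saved mask is amplified by 10x (and shifted to mid-gray)
            # purely so the perturbation is visible, as noted above.
            save_image(mask * 10 + 0.5, f"{out_dir}/mask_{i:03d}.png")
            save_image(adv, f"{out_dir}/adversary_{i:03d}.png")
    return adv.detach()
```

A loop like this yields the folder layout described above: one amplified perturbation mask and one adversarial image every fifth iteration. The red iteration number and green predicted identity mentioned above would be drawn onto the saved adversarial image in a separate annotation step, not shown here.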

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.