DC and Wasserstein GANs trained on the CelebA dataset using TensorFlow 2.12.
The project contains two modules with the DCGAN and Wasserstein GAN (WGAN) models, written with the Keras library and trained on the CelebA dataset to generate human faces. Both models let you choose between the RMSProp and Adam optimizers. An adaptive augmenter is used during training; it adjusts its augmentation strength depending on the discriminator accuracy. The Kernel Inception Distance (KID) metric is computed along with accuracy, but only in test mode.
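The adaptive augmenter mentioned above can be sketched as follows. This is a minimal illustration and not the project's actual implementation: the names `target_accuracy` and `step_size` are assumptions, and the update rule simply nudges the augmentation probability toward a target discriminator accuracy, which is the core idea behind adaptive discriminator augmentation:

```python
# Minimal sketch of adaptive augmentation strength control (hypothetical names).
# If the discriminator classifies real images too easily, it is likely starting
# to memorize them, so the augmentation probability is increased; otherwise it
# is decreased. The probability stays clamped to [0, 1].

def update_augmentation_probability(prob, real_accuracy,
                                    target_accuracy=0.85, step_size=0.01):
    """Nudge the augmentation probability toward a target discriminator accuracy."""
    if real_accuracy > target_accuracy:
        prob += step_size   # discriminator too confident -> augment more
    else:
        prob -= step_size   # discriminator struggling -> augment less
    return min(max(prob, 0.0), 1.0)
```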
The following folders should be present in the project:

src/datasets/CelebA/original (folder where the CelebA dataset images should reside)
src/models (folder that will contain saved trained models)
src/samples (folder with sample images generated by the GAN during training)
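The folder layout above can be created with a few lines of Python; this is a convenience sketch that assumes it is run from the repository root:

```python
import os

# Create the folders the project expects, relative to the repository root.
# exist_ok=True makes the snippet safe to re-run.
for folder in ("src/datasets/CelebA/original", "src/models", "src/samples"):
    os.makedirs(folder, exist_ok=True)
```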
It is assumed you have TensorFlow 2.12 installed along with all necessary libraries.
Install all third-party modules from the requirements.txt file.
To launch the DCGAN model, run:
python dcgan.py
and for the WGAN:
python wgan.py
By default both models run in test mode. For a successful run, model artifacts must be present in the src/models/<DC_GAN|WGAN> directory.
Here is the full list of optional arguments:

-t - launch the model in train mode
-e - number of epochs to train (defaults to 50)
-b - batch size for training (defaults to 100)
-o - preferred optimizer (either 'adam' or 'rmsprop' is supported)
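The flag handling could be implemented along these lines with argparse; this is a sketch, the actual dcgan.py and wgan.py scripts may differ (in particular, the default optimizer shown here is an assumption):

```python
import argparse

def parse_args(argv=None):
    """Hypothetical parser for the command-line flags listed above."""
    parser = argparse.ArgumentParser(description="Train or test the GAN.")
    parser.add_argument("-t", action="store_true",
                        help="launch the model in train mode")
    parser.add_argument("-e", type=int, default=50,
                        help="number of epochs to train")
    parser.add_argument("-b", type=int, default=100,
                        help="batch size for training")
    parser.add_argument("-o", choices=("adam", "rmsprop"), default="adam",
                        help="optimizer to use (default is an assumption)")
    return parser.parse_args(argv)
```

For example, `python dcgan.py -t -e 100 -o rmsprop` would then train the DCGAN for 100 epochs with the RMSProp optimizer.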
Every 400 batches, sample images are added to the src/samples folder.
After training finishes, a Loss history.png file with the loss history will appear in the same folder.
Sample result after 60 epochs: