
Understanding GANs (Generative Adversarial Networks) | Deep Learning

GANs use an elegant adversarial learning framework, pitting a generator against a discriminator, to produce high-quality samples of everything from images to audio. Here, we explore the theoretical underpinnings, as well as some practical problems that can plague training, such as non-convergence and mode collapse.
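
For reference, the objective covered in the loss-function section is the minimax game from the original paper: the discriminator D is trained to maximise E[log D(x)] + E[log(1 - D(G(z)))], while the generator G is trained to minimise the second term (in practice often maximising log D(G(z)) instead, the "non-saturating" variant from the paper). Below is a minimal, illustrative PyTorch sketch of one alternating training step; the toy networks, data and hyperparameters are placeholder assumptions, not the exact setup from the paper or the linked tutorials.

# Minimal sketch of one alternating GAN training step (illustrative only;
# network sizes, data and hyperparameters are placeholder assumptions).
import torch
import torch.nn as nn

latent_dim, data_dim, batch_size = 16, 2, 64

# Toy MLP generator and discriminator (assumed for illustration).
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))  # outputs a logit

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    z = torch.randn(n, latent_dim)
    fake = G(z).detach()                      # freeze G while updating D
    d_loss = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Generator step: non-saturating loss, i.e. maximise log D(G(z))
    #    rather than minimise log(1 - D(G(z))), which gives stronger
    #    gradients early in training.
    z = torch.randn(n, latent_dim)
    g_loss = bce(D(G(z)), ones)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()

# Usage on a toy Gaussian "real" distribution:
real = torch.randn(batch_size, data_dim) * 0.5 + 1.0
print(train_step(real))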

Timestamps
——————–
00:00 Introduction
01:28 Generative modelling
04:46 The GAN approach
07:37 Loss function
12:14 Game theory perspective
13:18 Optimal discriminator
15:33 Optimal generator
17:26 Training dynamics
19:45 Optimal discriminator problem
21:39 Training steps
22:13 Non-convergence
23:39 Mode collapse

Links
——–
– Original GAN paper (https://arxiv.org/abs/1406.2661)
– Analysis of vanishing/unstable gradients (https://arxiv.org/abs/1701.04862)
– Analysis of mode collapse (https://arxiv.org/abs/1606.03498)
– Wasserstein GAN paper (https://arxiv.org/abs/1701.07875)
– Keras CGAN tutorial (https://keras.io/examples/generative/conditional_gan/)
– PyTorch DCGAN tutorial (https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html)