diff --git a/doc/tutorials/gan/index_en.md b/doc/tutorials/gan/index_en.md
index e3841c4c9fb71679feaae0a92d60f1fe8a234f0f..00879c6ae303d430cdc11f470a7ab28898e52a54 100644
--- a/doc/tutorials/gan/index_en.md
+++ b/doc/tutorials/gan/index_en.md
@@ -4,7 +4,7 @@ This demo implements GAN training described in the original GAN paper (https://a

The high-level structure of GAN is shown in Figure 1 below. It is composed of two major parts: a generator and a discriminator, both based on neural networks. The generator takes noise with a known distribution as input and transforms it into an image. The discriminator takes an image as input and determines whether it was artificially generated by the generator or is a real image. The generator and the discriminator are thus engaged in a competitive game: the generator tries to generate images that look as real as possible in order to fool the discriminator, while the discriminator tries to distinguish between real and fake images.

-![](./gan.png)
+![](./gan.png =300x)
Figure 1. GAN model structure
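This competitive game is the minimax objective of the original GAN paper; written out for reference (with $G$ the generator, $D$ the discriminator, $p_{\mathrm{data}}$ the real-image distribution, and $p_z$ the noise distribution):

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]$$

The discriminator is trained to increase this objective, the generator to decrease it.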
The generator and the discriminator take turns being trained using SGD. The objective of the generator is to have its generated images classified as real by the discriminator, while the objective of the discriminator is to correctly classify real and fake images. When the GAN model is trained to convergence at the equilibrium state, the generator will transform the given noise distribution into the distribution of real images, and the discriminator will not be able to distinguish real from fake images at all.

@@ -106,7 +106,7 @@ $python gan_trainer.py -d uniform --useGpu 1
```

The generated samples can be found in ./uniform_samples/ and one example is shown below as Figure 2. One can see that it roughly recovers the 2D uniform distribution.

-![](./uniform_sample.png)
+![](./uniform_sample.png =300x)
Figure 2. Uniform Sample
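To make the alternating-SGD scheme concrete, here is a minimal, self-contained sketch in plain NumPy. It is a toy 1-D stand-in, not the demo's actual `gan_trainer.py`: the generator is an affine map `g(z) = a*z + b`, the discriminator a single logistic unit `D(x) = sigmoid(w*x + c)`, and all hyperparameters (learning rates, batch size, target mean) are illustrative assumptions.

```python
# Toy sketch of alternating GAN training (NOT the demo's PaddlePaddle code).
# Real data ~ N(4, 0.5); generator g(z) = a*z + b; discriminator is a
# logistic unit D(x) = sigmoid(w*x + c). Gradients are derived by hand.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0              # generator parameters (assumed initial values)
w, c = 0.1, 0.0              # discriminator parameters (assumed initial values)
lr_d, lr_g, batch = 0.05, 0.05, 64
real_mean = 4.0              # mean of the "real" data distribution

for step in range(3000):
    z = rng.standard_normal(batch)
    x_real = real_mean + 0.5 * rng.standard_normal(batch)
    x_fake = a * z + b

    # Discriminator step: minimize -log D(real) - log(1 - D(fake)).
    s_real = sigmoid(w * x_real + c)
    s_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - s_real) * x_real + s_fake * x_fake)
    grad_c = np.mean(-(1 - s_real) + s_fake)
    w -= lr_d * grad_w
    c -= lr_d * grad_c

    # Generator step: minimize -log D(fake), the non-saturating generator
    # loss suggested in the original GAN paper.
    s_fake = sigmoid(w * x_fake + c)
    grad_x = -(1 - s_fake) * w        # d(-log D(x_fake)) / d(x_fake)
    a -= lr_g * np.mean(grad_x * z)
    b -= lr_g * np.mean(grad_x)

# Mean of freshly generated samples after training.
gen_mean = float(np.mean(a * rng.standard_normal(2000) + b))
```

Under these assumed settings the generated mean drifts toward the real mean, which is the 1-D analogue of how the demo's generator recovers the 2-D uniform distribution in Figure 2.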
## MNIST Example