Commit ea5594d1 by author wangyang59

modification of gan tutorial following luotao01 comments

Parent commit 0186edec
# Generative Adversarial Networks (GAN)

This demo implements GAN training described in the original [GAN paper](https://arxiv.org/abs/1406.2661) and the deep convolutional generative adversarial networks [DCGAN paper](https://arxiv.org/abs/1511.06434).

The high-level structure of a GAN is shown in Figure 1 below. It is composed of two major parts: a generator and a discriminator, both of which are based on neural networks. The generator takes in some kind of noise with a known distribution and transforms it into an image. The discriminator takes in an image and determines whether it was artificially generated by the generator or is a real image. So the generator and the discriminator play a competitive game in which the generator tries to generate images that look as real as possible to fool the discriminator, while the discriminator tries to distinguish between real and fake images.

<center>![](./gan.png)</center>
<center>Figure 1. GAN Model Structure [Source](https://ishmaelbelghazi.github.io/ALI/)</center>

The generator and discriminator take turns being trained using SGD. The objective of the generator is to have its generated images classified as real by the discriminator, and the objective of the discriminator is to correctly classify real and fake images. When the GAN model converges to the equilibrium state, the generator will transform the given noise distribution into the distribution of real images, and the discriminator will not be able to tell real and fake images apart at all.
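The two competing objectives can be written down concretely. The sketch below (plain NumPy, not Paddle; the function names are hypothetical) computes the standard GAN losses for one batch, assuming `d_real` and `d_fake` are the discriminator's probability outputs on real and generated images:

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-8):
    # Discriminator wants D(x) -> 1 on real images and D(G(z)) -> 0 on fakes:
    # minimize -log D(x) - log(1 - D(G(z)))
    return float(np.mean(-np.log(d_real + eps) - np.log(1.0 - d_fake + eps)))

def generator_loss(d_fake, eps=1e-8):
    # Generator wants its fakes classified as real: minimize -log D(G(z))
    return float(np.mean(-np.log(d_fake + eps)))

# A discriminator that outputs 0.5 everywhere (i.e. cannot tell real from
# fake) corresponds to the equilibrium state described above.
d = np.full(8, 0.5)
print(discriminator_loss(d, d))  # about 2*log(2) ~ 1.386
print(generator_loss(d))         # about log(2) ~ 0.693
```

At equilibrium neither side can improve its loss by changing its output, which is exactly the "cannot distinguish at all" condition in the text.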
## Implementation of GAN Model Structure

Since the GAN model involves multiple neural networks, it requires the use of the Paddle Python API. So the code walk-through below can also partially serve as an introduction to the usage of the Paddle Python API.

There are three networks defined in gan_conf.py, namely **generator_training**, **discriminator_training** and **generator**. The relationship to the model structure defined above is that **discriminator_training** is the discriminator, **generator** is the generator, and **generator_training** combines the generator and the discriminator, since training the generator requires the discriminator to provide the loss function. This relationship is described in the following code:
```python
if is_generator_training:
    noise = data_layer(name="noise", size=noise_dim)
    ...
if is_generator:
    outputs(generator(noise))
```
In order to train the networks defined in gan_conf.py, one first needs to initialize a Paddle environment, parse the config, create a GradientMachine from the config, and create a trainer from the GradientMachine, as done in the code chunk below:
```python
import py_paddle.swig_paddle as api
# init paddle environment
...
dis_trainer = api.Trainer.create(dis_conf, dis_training_machine)
gen_trainer = api.Trainer.create(gen_conf, gen_training_machine)
```
In order to balance the strength of the generator and the discriminator, we schedule training for whichever one is currently performing worse, by comparing their loss function values. The loss function value can be calculated by a forward pass through the GradientMachine:
```python
def get_training_loss(training_machine, inputs):
    outputs = api.Arguments.createArguments(0)
    ...
    return numpy.mean(loss)
```
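The scheduling decision itself reduces to comparing the two loss values. A minimal sketch of that idea (plain Python; the function name is hypothetical and this is a simplification, not the exact rule in gan_trainer.py):

```python
def pick_network_to_train(dis_loss, gen_loss):
    # Train whichever network is currently performing worse,
    # i.e. the one with the larger loss value.
    return "discriminator" if dis_loss > gen_loss else "generator"

print(pick_network_to_train(1.4, 0.7))  # discriminator is behind, train it
print(pick_network_to_train(0.2, 0.9))  # generator is behind, train it
```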
After training one network, one needs to sync the new parameters to the other networks. The code below demonstrates one example of such a use case:
```python
# Train the gen_training
gen_trainer.trainOneDataBatch(batch_size, data_batch_gen)
...
copy_shared_parameters(gen_training_machine, generator_machine)
```
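Conceptually, the synchronization copies parameter values by name from the freshly trained machine into the other machines that share those parameters. A minimal sketch of the idea in plain Python (dictionaries of arrays standing in for GradientMachine parameters; this is not the actual Paddle API):

```python
import numpy as np

def copy_shared_parameters(src_params, dst_params):
    # Copy every parameter the two networks share, matched by name.
    for name, value in src_params.items():
        if name in dst_params:
            dst_params[name] = value.copy()

gen_training = {"gen_fc1.w": np.ones((2, 2)), "dis_fc1.w": np.zeros((2, 2))}
generator = {"gen_fc1.w": np.zeros((2, 2))}
copy_shared_parameters(gen_training, generator)
print(generator["gen_fc1.w"])  # now holds the updated values
```

Only the shared generator weights are copied; the discriminator-only parameter `dis_fc1.w` is left out of the pure generator network.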
## A Toy Example

With the infrastructure explained above, we can now walk you through a toy example of generating a two-dimensional uniform distribution using 10-dimensional Gaussian noise.

The Gaussian noise is generated using the code below:
```python
import numpy

def get_noise(batch_size, noise_dim):
    return numpy.random.normal(size=(batch_size, noise_dim)).astype('float32')
```
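For instance, sampling a batch of 64 noise vectors for this toy example's 10-dimensional noise (repeating the definition so the snippet runs standalone):

```python
import numpy

def get_noise(batch_size, noise_dim):
    return numpy.random.normal(size=(batch_size, noise_dim)).astype('float32')

noise = get_noise(64, 10)
print(noise.shape)  # (64, 10)
print(noise.dtype)  # float32
```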
The real samples (2-D uniform) are generated using the code below:
```python
# synthesize 2-D uniform data in gan_trainer.py:114
def load_uniform_data():
    ...
```

To train the GAN model on the 2-D uniform data, one can use the command below:

```bash
$python gan_trainer.py -d uniform --useGpu 1
```
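The body of load_uniform_data is elided above; a minimal stand-in that produces the same kind of data (a hypothetical sketch, not the file's actual implementation) could look like:

```python
import numpy

def load_uniform_data(num_samples=1000):
    # 2-D points drawn uniformly from the unit square
    return numpy.random.rand(num_samples, 2).astype('float32')

samples = load_uniform_data()
print(samples.shape)  # (1000, 2)
```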
The generated samples can be found in ./uniform_samples/ and one example is shown below as Figure 2. One can see that it roughly recovers the 2-D uniform distribution.
<p align="center">
    <img src="./uniform_sample.png" width="256" height="256">
</p>
<p align="center">
    Figure 2. Uniform Sample
</p>
## MNIST Example

### Data preparation

To download the MNIST data, one can use the following commands:
```bash
$cd data/
$./get_mnist_data.sh
```
Following the [DCGAN paper](https://arxiv.org/abs/1511.06434), we use convolution/convolution-transpose layers in the discriminator/generator networks to better deal with images. The details of the network structures are defined in gan_conf_image.py.
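Convolution-transpose layers suit the generator because they up-sample: with stride s, padding p, and kernel size k, an input of spatial size n maps to s*(n-1) - 2p + k. A quick check of that formula (the layer sizes here are illustrative, not the ones in gan_conf_image.py):

```python
def conv_transpose_out_size(n, kernel, stride, pad):
    # Spatial output size of a convolution-transpose layer.
    return stride * (n - 1) - 2 * pad + kernel

# Three stride-2 layers take a 4x4 feature map up to 32x32:
size = 4
for _ in range(3):
    size = conv_transpose_out_size(size, kernel=4, stride=2, pad=1)
print(size)  # 32
```

Stacking a few such layers is how the generator grows a small noise-derived feature map into a full-resolution image, while the discriminator uses ordinary strided convolutions to shrink it back down.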
### Training the model

To train the GAN model on MNIST data, one can use the following command:

```bash
$python gan_trainer.py -d mnist --useGpu 1
```
The generated sample images can be found at ./mnist_samples/ and one example is shown below as Figure 3.

<center>![](./mnist_sample.png)</center>
<center>Figure 3. MNIST Sample</center>