@@ -6,13 +6,13 @@ The source codes of this tutorial are in book/09.gan . Please refer to the instr
...
GAN \[[1](#References)\] is a kind of unsupervised learning method, which learns through a game between two neural networks. The method was proposed by Ian Goodfellow et al. in 2014; for the paper, please refer to [Generative Adversarial Network](https://arxiv.org/abs/1406.2661).
GAN is constituted by a generative network and a discrimination network. The generative network takes random samples from a latent space as input, and its output needs to imitate the real samples in the training set as closely as possible. The discrimination network takes either real samples or the output of the generative network as input, and aims to distinguish the generated samples from the real ones, while the generative network tries to fool it. The two networks confront each other and constantly adjust their parameters: the discrimination network tries to tell the generated samples and the real samples apart, and the generative network tries to make them indistinguishable \[[2](#References)\].
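The adversarial game described above is formalized in the original paper \[[1](#References)\] as a minimax objective, where $D$ is the discrimination network, $G$ the generative network, $x$ a real sample, and $z$ latent noise:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]$$

Here $D$ tries to maximize the objective (output 1 for real samples, 0 for generated ones), while $G$ tries to minimize it by making $D(G(z))$ close to 1.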
GAN is commonly used to generate convincing pictures that can be passed off as genuine ones \[[3](#References)\]. It can also generate videos, 3D object models, etc.
## Effect Display
In this tutorial, the MNIST dataset is fed into the network for training. After training for 19 epochs, the generated pictures are very close to the real ones. In the figure below, the first eight rows show real pictures and the rest show pictures generated by the network:
Figure 1. Handwritten digits generated by GAN
...
@@ -64,7 +64,7 @@ figure 3. Generator(G) in DCGAN
...
## Data Preparation
In this tutorial, we use the small MNIST dataset to train the Generator and the Discriminator; it can be downloaded to a local directory automatically by the paddle.dataset module.
Please refer to [Digit Recognition](https://github.com/PaddlePaddle/book/tree/develop/02.recognize_digits) for a specific introduction to MNIST.
...
@@ -74,7 +74,7 @@ Please refer to [Digit Recognition](https://github.com/PaddlePaddle/book/tree/de
...
### Load the Package
First, load Fluid and other related packages of PaddlePaddle.
Call `fluid.nets.simple_img_conv_pool` to realize a convolution followed by pooling. The filter size is 3x3, the pooling size is 2x2, the pooling stride is 2, and the activation function is specified by the enclosing network structure.
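As a plain-Python sanity check of the shapes involved (assuming, hypothetically, a stride-1 convolution with no padding; check the actual layer defaults), the spatial size of a 28x28 MNIST image after each 3x3 convolution followed by 2x2 pooling with stride 2 can be computed as:

```python
def conv_pool_out_size(size, filter_size=3, pool_size=2, pool_stride=2):
    """Spatial output size of a stride-1, no-padding convolution
    followed by pooling (floor division models the pooling window)."""
    conv_out = size - filter_size + 1              # 3x3 conv, stride 1, no padding
    return (conv_out - pool_size) // pool_stride + 1

size = 28                                          # MNIST input is 28x28
for layer in (1, 2):
    size = conv_pool_out_size(size)
    print("after conv-pool layer", layer, "->", size, "x", size)
```

This shows how the two conv-pool layers of the Discriminator progressively shrink the feature map before the fully connected layers.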
The Discriminator is trained simultaneously on real data and on fake pictures produced by the Generator, and tries to output 1 for real samples and 0 for fake ones. In this tutorial, the Discriminator consists of two convolution-pooling layers and two fully connected layers, where the last fully connected layer has a single neuron, outputting a binary classification result.
```python
def D(x):
...
@@ -289,9 +289,9 @@ parameters = [p.name for p in g_program.global_block().all_parameters()]
...
Next, we start the training process. We take paddle.dataset.mnist.train() as the training dataset, which returns a reader. A reader in PaddlePaddle is a Python function that returns a Python generator each time it is called.
The shuffle below is a reader decorator: it receives a reader A and returns another reader B. Each time, reader B reads buffer_size training samples into a buffer, shuffles their order, and then yields them one by one.
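The buffering-and-shuffling mechanism described above can be sketched in plain Python (a minimal illustration, not the actual implementation of the paddle.reader decorator):

```python
import random

def shuffle(reader, buffer_size):
    """Reader decorator: buffer `buffer_size` items from `reader`,
    shuffle the buffer, then yield its items one by one."""
    def shuffled_reader():
        buf = []
        for item in reader():
            buf.append(item)
            if len(buf) >= buffer_size:
                random.shuffle(buf)
                for b in buf:
                    yield b
                buf = []
        if buf:                     # flush any remaining items
            random.shuffle(buf)
            for b in buf:
                yield b
    return shuffled_reader

# usage with a toy reader over 0..9 and buffer_size=4
toy_reader = lambda: iter(range(10))
shuffled = list(shuffle(toy_reader, 4)())
```

Note that the order is only randomized within each buffer, so a larger buffer_size gives a more thorough shuffle at the cost of memory.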
...
@@ -412,7 +412,7 @@ for pass_id in range(epoch):
...
plt.close(fig)
```
Print the generated results at a certain epoch:
```python
def display_image(epoch_no, batch_id):
...
@@ -425,7 +425,7 @@ display_image(10,460)
...
## Summary
DCGAN takes a random noise vector as input and passes it through a structure similar to a CNN but reversed, upsampling it to 2D data. With a generative model of this structure and a discrimination model of CNN structure, DCGAN performs very well in image generation. In this example, we generate handwritten digit images with DCGAN. You can try changing the dataset to generate images that satisfy your personal requirements, or changing the network structure to observe different generation effects.