# Design for GAN

GAN (Generative Adversarial Network) is an important model for unsupervised learning and is widely used in many areas.

It contains several important machine learning concepts, including building and running subgraphs, dependency tracing, different optimizers in one executor and so forth.

In our GAN design, we wrap it as a user-friendly, easily customized Python API for designing different models. We take the conditional DC-GAN as an example due to its good performance on image generation.

<p align="center">
<img src="./dcgan.png" width = "90%" align="center"/><br/>
Figure borrowed from the original DC-GAN paper.
</p>

## The Conditional-GAN might be a class. 
This design adopts the popular open source design in https://github.com/carpedm20/DCGAN-tensorflow and https://github.com/rajathkmp/DCGAN. It contains the following data structure:

- DCGAN(object): which contains everything required to build a GAN model. It provides the following member functions as its API (a skeleton sketch follows this list):

- __init__(...): Initialize the hyper-parameters (conv dimensions and so forth), and declare the model parameters of the discriminator and the generator as well.

- generator(z, y=None): Generate a fake image from the input noise z. If the label y is provided, the conditional GAN model will be chosen.
Returns a generated image.

- discriminator(image):
Given an image, decide if it is from a real source or a fake one.
Returns a 0/1 binary label.

- build_model(self):
Build the whole GAN model and define the training losses for both the generator and the discriminator.

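To make this interface concrete, here is a minimal skeleton of the class described above. The method bodies are placeholders only; the concrete operators are filled in by the code snippets in the sections below.

```python
# Skeleton of the DCGAN class sketched in the list above. The bodies are
# placeholders and are filled in section by section later in this document.
class DCGAN(object):
    def __init__(self, y_dim=None):
        # hyper-parameters, plus the parameter lists self.theta_D / self.theta_G
        pass

    def generator(self, z, y=None):
        # returns a batch of generated (fake) images
        pass

    def discriminator(self, image):
        # returns a binary logit: real vs. fake
        pass

    def build_model(self):
        # wires generator and discriminator together and defines d_loss / g_loss
        pass
```
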
## Discussion on Engine Functions required to build GAN
- Trace the tensor and variable dependencies in the engine executor. (Very critical, otherwise GAN cannot be trained correctly.)
- Different optimizers are responsible for optimizing different losses (sketched right below).

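As a sketch of what these two requirements mean in practice (using the same pd pseudo API as the rest of this document, and anticipating the demo at the end), each minimize call is expected to trace only the subgraph its own loss depends on and to update only the variables in its own parameter list:

```python
# Preview of the pattern used in the demo at the end of this document.
# minimize(loss, var_list) must (a) trace only the ops that `loss` depends on
# and (b) update only the variables in `var_list`, even though both losses
# share parts of the graph.
d_optim = pd.train.Adam(lr=0.001, beta=0.1).minimize(dcgan.d_loss, dcgan.theta_D)  # updates theta_D only
g_optim = pd.train.Adam(lr=0.001, beta=0.1).minimize(dcgan.g_loss, dcgan.theta_G)  # updates theta_G only

sess = pd.executor()
sess.run(d_optim, feed_dict={dcgan.images: batch_im, dcgan.z: batch_z})  # discriminator step
sess.run(g_optim, feed_dict={dcgan.z: batch_z})                          # generator step, same executor
```
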
To be more detailed, we introduce our design of DCGAN as follows:

### Class member function: Initializer
- Set up hyper-parameters, including the conditional dimension, noise dimension, batch size and so forth.
- Declare and define all the model variables. All the discriminator parameters are included in the list self.theta_D and all the generator parameters are included in the list self.theta_G.
```python
class DCGAN(object):
  def __init__(self, y_dim=None, z_dim=100):

    # hyper parameters
    self.y_dim = y_dim # conditional gan or not
    self.batch_size = 100
    self.z_dim = z_dim # input noise dimension

    # define parameters of discriminator
    self.D_W0 = pd.Variable(shape=[3, 3, 1, 128], data=pd.gaussian_normal_randomizer())
    self.D_b0 = pd.Variable(np.zeros(128)) # variables also support initialization from numpy data
    self.D_W1 = pd.Variable(shape=[784, 128], data=pd.gaussian_normal_randomizer())
    self.D_b1 = pd.Variable(np.zeros(128))
    self.D_W2 = pd.Variable(np.random.rand(128, 1))
    self.D_b2 = pd.Variable(np.zeros(128))
    self.theta_D = [self.D_W0, self.D_b0, self.D_W1, self.D_b1, self.D_W2, self.D_b2]

    # define parameters of generator
    self.G_W0 = pd.Variable(shape=[784, 128], data=pd.gaussian_normal_randomizer())
    self.G_b0 = pd.Variable(np.zeros(128)) # variables also support initialization from numpy data
    self.G_W1 = pd.Variable(shape=[784, 128], data=pd.gaussian_normal_randomizer())
    self.G_b1 = pd.Variable(np.zeros(128))
    self.G_W2 = pd.Variable(np.random.rand(128, 1))
    self.G_b2 = pd.Variable(np.zeros(128))
    self.theta_G = [self.G_W0, self.G_b0, self.G_W1, self.G_b1, self.G_W2, self.G_b2]
```

### Class member function: Generator
- Given a noisy input z, returns a fake image.
- Concatenation, batch-norm, FC operations required;
- Deconv layer required, which is missing now (see the output-shape note after the snippet below)...
```python
def generator(self, z, y=None):
    # input z: the random noise
    # input y: input data label (optional)
    # output G_im: generated fake images

    if self.y_dim: # conditional GAN: concatenate the label with the noise
      z = pd.concat(1, [z, y])

    G_h0 = pd.fc(z, self.G_W0, self.G_b0)
    G_h0_bn = pd.batch_norm(G_h0)
    G_h0_relu = pd.relu(G_h0_bn)

    G_h1 = pd.deconv(G_h0_relu, self.G_W1, self.G_b1)
    G_h1_bn = pd.batch_norm(G_h1)
    G_h1_relu = pd.relu(G_h1_bn)

    G_h2 = pd.deconv(G_h1_relu, self.G_W2, self.G_b2)
    G_im = pd.tanh(G_h2)
    return G_im
```
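As noted in the bullet list above, the deconv (transposed convolution) operator used here does not exist yet. As a reference point for implementing it, the sketch below gives the standard output-shape rule of a transposed convolution; the helper name and the no-output-padding assumption are ours, not part of the proposed pd API.

```python
def deconv_output_size(input_size, kernel_size, stride, padding):
    # Spatial output size of a transposed convolution (output_padding assumed 0).
    # It inverts the usual convolution shape rule:
    #   conv_out = (input_size + 2 * padding - kernel_size) // stride + 1
    return stride * (input_size - 1) + kernel_size - 2 * padding
```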

### Class member function: Discriminator
- Given an image, returns a logit indicating whether it comes from the real data or from the generator.
- Concatenation, Convolution, batch-norm, FC, Leaky-ReLU operations required;
```python
def discriminator(self, image):
    # input image: either generated images or real ones
    # output D_h2: binary logit of the label

    D_h0 = pd.conv2d(image, self.D_W0, self.D_b0)
    D_h0_bn = pd.batch_norm(D_h0)
    D_h0_relu = pd.lrelu(D_h0_bn)

    D_h1 = pd.conv2d(D_h0_relu, self.D_W1, self.D_b1)
    D_h1_bn = pd.batch_norm(D_h1)
    D_h1_relu = pd.lrelu(D_h1_bn)

    D_h2 = pd.fc(D_h1_relu, self.D_W2, self.D_b2)
    return D_h2
```
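For reference, pd.lrelu above is the standard leaky ReLU used in DC-GAN discriminators, lrelu(x) = max(x, alpha * x) with a small slope alpha (0.2 in the DC-GAN paper). Below is a minimal numpy sketch of the intended semantics; the helper is illustrative only, not a proposed pd op.

```python
import numpy as np

def lrelu(x, alpha=0.2):
    # leaky ReLU: identity for positive inputs, slope alpha for negative inputs
    return np.maximum(x, alpha * x)
```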

### Class member function: Build the model
- Define data readers as placeholders to hold the data;
- Build generator and discriminators;
- Define two training losses for discriminator and generator, respectively. 
```python
def build_model(self):

    # input data
    if self.y_dim:
        self.y = pd.data(pd.float32, [self.batch_size, self.y_dim])
    self.images = pd.data(pd.float32, [self.batch_size, self.im_size, self.im_size])
    self.faked_images = pd.data(pd.float32, [self.batch_size, self.im_size, self.im_size])
    self.z = pd.data(pd.float32, [None, self.z_dim])

    # step 1: generate images by generator, classify real/fake images with discriminator
    if self.y_dim: # if conditional GAN, includes label
      self.G = self.generator(self.z, self.y)
      self.D_t = self.discriminator(self.images)
      # generated fake images
      self.sampled = self.sampler(self.z, self.y)
      self.D_f = self.discriminator(self.G)
    else: # original version of GAN
      self.G = self.generator(self.z)
      self.D_t = self.discriminator(self.images)
      # generate fake images
      self.sampled = self.sampler(self.z)
      self.D_f = self.discriminator(self.G)

    # step 2: define the two losses
    self.d_loss_real = pd.reduce_mean(pd.cross_entropy(self.D_t, np.ones(self.batch_size)))
    self.d_loss_fake = pd.reduce_mean(pd.cross_entropy(self.D_f, np.zeros(self.batch_size)))
    self.d_loss = self.d_loss_real + self.d_loss_fake

    self.g_loss = pd.reduce_mean(pd.cross_entropy(self.D_f, np.ones(self.batch_size)))
```
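For reference, assuming pd.cross_entropy here is the usual sigmoid cross-entropy applied to the discriminator logit, the batch losses above estimate the standard GAN objective with the common non-saturating generator loss:

```latex
% d_loss = d_loss_real + d_loss_fake: the discriminator's objective
\min_{\theta_D} \;
  -\mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right]
  -\mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right]

% g_loss: non-saturating generator loss (all-ones targets on the fake logits)
\min_{\theta_G} \;
  -\mathbb{E}_{z \sim p_z}\!\left[\log D(G(z))\right]
```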

## Main function for the demo:
Generally, a user of the GAN API just needs to do the following things:
- Define an object of the DCGAN class;
- Build the DCGAN model;
- Specify two optimizers for two different losses with respect to different parameters.
```python
# pd for short, should be more concise.
import paddle.v2 as pd
import numpy as np
import logging

if __name__ == "__main__":
    # dcgan; y_dim=10 since MNIST has 10 label classes
    dcgan = DCGAN(y_dim=10)
    dcgan.build_model()

    # load mnist data
    data_X, data_y = load_mnist()

    # Two subgraphs required!!!
    d_optim = pd.train.Adam(lr=0.001, beta=0.1).minimize(dcgan.d_loss, dcgan.theta_D)
    g_optim = pd.train.Adam(lr=0.001, beta=0.1).minimize(dcgan.g_loss, dcgan.theta_G)

    # executor
    sess = pd.executor()

    # training
    for epoch in xrange(10000):
      for batch_id in range(N / batch_size):
        idx = ...
        # sample a batch
        batch_im, batch_label = data_X[idx:idx+batch_size], data_y[idx:idx+batch_size]
        # sample z
        batch_z = np.random.uniform(-1., 1., [batch_size, z_dim])

        if batch_id % 2 == 0:
          sess.run(d_optim,
                   feed_dict = {dcgan.images: batch_im,
                                dcgan.y: batch_label,
                                dcgan.z: batch_z})
        else:
          sess.run(g_optim,
                   feed_dict = {dcgan.z: batch_z})
```