Does PaddlePaddle determine weight sharing automatically? Do I need to pass a reuse argument like in TensorFlow?
Created by: ManWingloeng
In TensorFlow I share the discriminator's weights by passing a reuse flag into a variable scope, like this:
```python
import tensorflow as tf
# dropout, conv_concat, lrelu, conv_layer, Global_Average_Pooling,
# flatten, concat, linear and sigmoid are helpers from my own ops module.

def discriminator(self, x, y_, scope='discriminator', is_training=True, reuse=False):
    with tf.variable_scope(scope, reuse=reuse):
        x = dropout(x, rate=0.2, is_training=is_training)
        y = tf.reshape(y_, [-1, 1, 1, self.y_dim])

        x = conv_concat(x, y)
        x = lrelu(conv_layer(x, filter_size=32, kernel=[3, 3],
                             layer_name=scope + '_conv1'))
        x = conv_concat(x, y)
        x = lrelu(conv_layer(x, filter_size=32, kernel=[3, 3], stride=2,
                             layer_name=scope + '_conv2'))
        x = dropout(x, rate=0.2, is_training=is_training)

        x = conv_concat(x, y)
        x = lrelu(conv_layer(x, filter_size=64, kernel=[3, 3],
                             layer_name=scope + '_conv3'))
        x = conv_concat(x, y)
        x = lrelu(conv_layer(x, filter_size=64, kernel=[3, 3], stride=2,
                             layer_name=scope + '_conv4'))
        x = dropout(x, rate=0.2, is_training=is_training)

        x = conv_concat(x, y)
        x = lrelu(conv_layer(x, filter_size=128, kernel=[3, 3],
                             layer_name=scope + '_conv5'))
        x = conv_concat(x, y)
        x = lrelu(conv_layer(x, filter_size=128, kernel=[3, 3],
                             layer_name=scope + '_conv6'))

        x = conv_concat(x, y)
        x = Global_Average_Pooling(x)
        x = flatten(x)
        x = concat([x, y_])  # mlp_concat
        x_logit = linear(x, unit=1, layer_name=scope + '_linear1')
        out = sigmoid(x_logit)

    return out, x_logit, x
```
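My current guess is that in fluid's static graph, parameters are identified by name rather than by shape, so calling the same network-building function twice with the same `ParamAttr` names would reuse the same weights. A minimal sketch of what I mean (the layer and parameter names here are my own invention, not from any Paddle example):

```python
import paddle.fluid as fluid

def build_discriminator(x, name='d'):
    # Guess: two layers whose ParamAttr carries the same name point at
    # the same underlying parameter, so no reuse flag is needed.
    h = fluid.layers.conv2d(
        x, num_filters=32, filter_size=3, padding=1,
        param_attr=fluid.ParamAttr(name=name + '_conv1_w'),
        bias_attr=fluid.ParamAttr(name=name + '_conv1_b'))
    h = fluid.layers.leaky_relu(h, alpha=0.2)
    # Global average pooling, then a single-logit fc head.
    h = fluid.layers.pool2d(h, pool_type='avg', global_pooling=True)
    logit = fluid.layers.fc(
        h, size=1,
        param_attr=fluid.ParamAttr(name=name + '_fc_w'),
        bias_attr=fluid.ParamAttr(name=name + '_fc_b'))
    return logit

real = fluid.layers.data(name='real', shape=[3, 32, 32], dtype='float32')
fake = fluid.layers.data(name='fake', shape=[3, 32, 32], dtype='float32')
real_logit = build_discriminator(real)
fake_logit = build_discriminator(fake)  # same ParamAttr names -> shared weights?
```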
As in a GAN, I need to run the discriminator D more than once. Does Paddle simply look at the shapes and decide on its own whether to reuse the existing weights?
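Or, if Paddle expects the dynamic-graph style instead, I assume the pattern would be to build the discriminator as a single `Layer` object and call it twice, so the weights are shared simply because they live on that object. A sketch of my understanding using the `paddle.nn` API (the architecture is a toy stand-in, not my real model):

```python
import paddle
import paddle.nn as nn

class Discriminator(nn.Layer):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2D(3, 32, kernel_size=3, padding=1)
        self.act = nn.LeakyReLU(0.2)
        self.fc = nn.Linear(32, 1)

    def forward(self, x):
        h = self.act(self.conv(x))
        h = paddle.mean(h, axis=[2, 3])  # global average pooling
        return self.fc(h)

d = Discriminator()
real_logit = d(paddle.randn([8, 3, 32, 32]))
fake_logit = d(paddle.randn([8, 3, 32, 32]))  # same object -> same weights
```

Is one of these the intended mechanism, or does Paddle really infer reuse from shapes?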