Commit 8a7d2db5 authored by xiaoting, committed by ruri

Cherry pick for paddle 1.6 (#843)

Parent a34ea9c6
@@ -239,6 +239,37 @@ def multilayer_perceptron():
    return prediction
```
- Conv-pool layer: LeNet-5 contains several convolution-pooling operations. To avoid repeating code, the chained convolution and pooling are wrapped in a conv_pool function.
```python
def conv_pool(input, num_filters, filter_size, pool_size, pool_stride, act="relu"):
    """
    Define a convolution-pooling layer:
    one convolutional layer followed by one pooling layer
    Args:
        input -- network input
        num_filters -- number of convolution filters
        filter_size -- size of the convolution filters
        pool_size -- size of the pooling window
        pool_stride -- stride of the pooling
        act -- activation function of the convolutional layer
    Return:
        out -- feature map after convolution and pooling
    """
    conv_out = fluid.layers.conv2d(
        input=input,
        num_filters=num_filters,
        filter_size=filter_size,
        act=act)
    out = fluid.layers.pool2d(
        input=conv_out,
        pool_size=pool_size,
        pool_stride=pool_stride)
    return out
```
- Convolutional neural network LeNet-5: the input two-dimensional image first passes through two convolution-pooling stages, then a fully connected layer, and finally a fully connected layer with softmax activation as the output layer.
```python
@@ -254,7 +285,7 @@ def convolutional_neural_network():
    img = fluid.data(name='img', shape=[None, 1, 28, 28], dtype='float32')
    # The first convolution-pooling layer
    # Use 20 5*5 filters, pooling size 2, pooling stride 2, ReLU activation
    conv_pool_1 = conv_pool(
        input=img,
        filter_size=5,
        num_filters=20,
@@ -264,7 +295,7 @@ def convolutional_neural_network():
    conv_pool_1 = fluid.layers.batch_norm(conv_pool_1)
    # The second convolution-pooling layer
    # Use 50 5*5 filters, pooling size 2, pooling stride 2, ReLU activation
    conv_pool_2 = conv_pool(
        input=conv_pool_1,
        filter_size=5,
        num_filters=50,
......
@@ -218,6 +218,36 @@ def multilayer_perceptron():
    return prediction
```
- Conv-pool layer: LeNet-5 contains several convolution-pooling operations. To avoid repeating code, the chained convolution and pooling are wrapped in a conv_pool function.
```python
def conv_pool(input, num_filters, filter_size, pool_size, pool_stride, act="relu"):
    """
    Define a convolution-pooling layer:
    a convolutional layer followed by a pooling layer
    Args:
        input -- network input
        num_filters -- number of filters
        filter_size -- filter size
        pool_size -- pooling window size
        pool_stride -- pooling stride
        act -- activation function of the convolutional layer
    Return:
        out -- feature map after convolution and pooling
    """
    conv_out = fluid.layers.conv2d(
        input=input,
        num_filters=num_filters,
        filter_size=filter_size,
        act=act)
    out = fluid.layers.pool2d(
        input=conv_out,
        pool_size=pool_size,
        pool_stride=pool_stride)
    return out
```
- Convolutional neural network LeNet-5: the input two-dimensional image first passes through two convolution-pooling stages, then a fully connected layer, and finally a fully connected layer with softmax activation as the output layer.
```python
@@ -233,7 +263,7 @@ def convolutional_neural_network():
    img = fluid.data(name='img', shape=[None, 1, 28, 28], dtype='float32')
    # The first convolution-pooling layer
    # Use 20 5*5 filters, the pooling size is 2, the pooling stride is 2, and the activation function is ReLU.
    conv_pool_1 = conv_pool(
        input=img,
        filter_size=5,
        num_filters=20,
@@ -243,7 +273,7 @@ def convolutional_neural_network():
    conv_pool_1 = fluid.layers.batch_norm(conv_pool_1)
    # The second convolution-pooling layer
    # Use 50 5*5 filters, the pooling size is 2, the pooling stride is 2, and the activation function is ReLU.
    conv_pool_2 = conv_pool(
        input=conv_pool_1,
        filter_size=5,
        num_filters=50,
......
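As a quick sanity check on the shapes these layers produce, the arithmetic below (a framework-free sketch, assuming `fluid.layers.conv2d`'s defaults of stride 1 and zero padding, plus the non-overlapping pooling configured above) traces an MNIST image through two conv_pool stages:

```python
def conv_out_size(in_size, filter_size, stride=1, padding=0):
    # Side length of a square feature map after convolution
    return (in_size + 2 * padding - filter_size) // stride + 1

def pool_out_size(in_size, pool_size, pool_stride):
    # Side length after pooling with a square window
    return (in_size - pool_size) // pool_stride + 1

size = 28                                           # MNIST input: 1 x 28 x 28
size = pool_out_size(conv_out_size(size, 5), 2, 2)  # after conv_pool_1: 12 x 12
size = pool_out_size(conv_out_size(size, 5), 2, 2)  # after conv_pool_2: 4 x 4
```

So the fully connected layers that follow receive 50 feature maps of 4 x 4 each.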
@@ -281,6 +281,37 @@ def multilayer_perceptron():
    return prediction
```
- Conv-pool layer: LeNet-5 contains several convolution-pooling operations. To avoid repeating code, the chained convolution and pooling are wrapped in a conv_pool function.
```python
def conv_pool(input, num_filters, filter_size, pool_size, pool_stride, act="relu"):
    """
    Define a convolution-pooling layer:
    one convolutional layer followed by one pooling layer
    Args:
        input -- network input
        num_filters -- number of convolution filters
        filter_size -- size of the convolution filters
        pool_size -- size of the pooling window
        pool_stride -- stride of the pooling
        act -- activation function of the convolutional layer
    Return:
        out -- feature map after convolution and pooling
    """
    conv_out = fluid.layers.conv2d(
        input=input,
        num_filters=num_filters,
        filter_size=filter_size,
        act=act)
    out = fluid.layers.pool2d(
        input=conv_out,
        pool_size=pool_size,
        pool_stride=pool_stride)
    return out
```
- Convolutional neural network LeNet-5: the input two-dimensional image first passes through two convolution-pooling stages, then a fully connected layer, and finally a fully connected layer with softmax activation as the output layer.
```python
@@ -296,7 +327,7 @@ def convolutional_neural_network():
    img = fluid.data(name='img', shape=[None, 1, 28, 28], dtype='float32')
    # The first convolution-pooling layer
    # Use 20 5*5 filters, pooling size 2, pooling stride 2, ReLU activation
    conv_pool_1 = conv_pool(
        input=img,
        filter_size=5,
        num_filters=20,
@@ -306,7 +337,7 @@ def convolutional_neural_network():
    conv_pool_1 = fluid.layers.batch_norm(conv_pool_1)
    # The second convolution-pooling layer
    # Use 50 5*5 filters, pooling size 2, pooling stride 2, ReLU activation
    conv_pool_2 = conv_pool(
        input=conv_pool_1,
        filter_size=5,
        num_filters=50,
......
@@ -260,6 +260,36 @@ def multilayer_perceptron():
    return prediction
```
- Conv-pool layer: LeNet-5 contains several convolution-pooling operations. To avoid repeating code, the chained convolution and pooling are wrapped in a conv_pool function.
```python
def conv_pool(input, num_filters, filter_size, pool_size, pool_stride, act="relu"):
    """
    Define a convolution-pooling layer:
    a convolutional layer followed by a pooling layer
    Args:
        input -- network input
        num_filters -- number of filters
        filter_size -- filter size
        pool_size -- pooling window size
        pool_stride -- pooling stride
        act -- activation function of the convolutional layer
    Return:
        out -- feature map after convolution and pooling
    """
    conv_out = fluid.layers.conv2d(
        input=input,
        num_filters=num_filters,
        filter_size=filter_size,
        act=act)
    out = fluid.layers.pool2d(
        input=conv_out,
        pool_size=pool_size,
        pool_stride=pool_stride)
    return out
```
- Convolutional neural network LeNet-5: the input two-dimensional image first passes through two convolution-pooling stages, then a fully connected layer, and finally a fully connected layer with softmax activation as the output layer.
```python
@@ -275,7 +305,7 @@ def convolutional_neural_network():
    img = fluid.data(name='img', shape=[None, 1, 28, 28], dtype='float32')
    # The first convolution-pooling layer
    # Use 20 5*5 filters, the pooling size is 2, the pooling stride is 2, and the activation function is ReLU.
    conv_pool_1 = conv_pool(
        input=img,
        filter_size=5,
        num_filters=20,
@@ -285,7 +315,7 @@ def convolutional_neural_network():
    conv_pool_1 = fluid.layers.batch_norm(conv_pool_1)
    # The second convolution-pooling layer
    # Use 50 5*5 filters, the pooling size is 2, the pooling stride is 2, and the activation function is ReLU.
    conv_pool_2 = conv_pool(
        input=conv_pool_1,
        filter_size=5,
        num_filters=50,
......
@@ -339,7 +339,7 @@ def train_program():
    cost = fluid.layers.cross_entropy(input=predict, label=label)
    avg_cost = fluid.layers.mean(cost)
    accuracy = fluid.layers.accuracy(input=predict, label=label)
    return [avg_cost, accuracy, predict]
```
## Optimizer Function Configuration
@@ -384,7 +384,7 @@ feed_order = ['pixel', 'label']
main_program = fluid.default_main_program()
star_program = fluid.default_startup_program()
avg_cost, acc, predict = train_program()
# Test program
test_program = main_program.clone(for_test=True)
......
@@ -339,7 +339,7 @@ def train_program():
    cost = fluid.layers.cross_entropy(input=predict, label=label)
    avg_cost = fluid.layers.mean(cost)
    accuracy = fluid.layers.accuracy(input=predict, label=label)
    return [avg_cost, accuracy, predict]
```
## Optimizer Function Configuration
@@ -387,7 +387,7 @@ feed_order = ['pixel', 'label']
main_program = fluid.default_main_program()
star_program = fluid.default_startup_program()
avg_cost, acc, predict = train_program()
# Test program
test_program = main_program.clone(for_test=True)
......
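The change above also returns `predict` from `train_program`, so callers can fetch the network output alongside the two metrics. For intuition, here is a hedged, framework-free sketch of what those metrics compute: `cross_entropy` takes the negative log of the probability the network assigns to the true label, `mean` averages it over the batch, and `accuracy` is the top-1 hit rate. The two probability rows and labels below are made-up illustration values:

```python
import math

def avg_cost_and_accuracy(probs, labels):
    # probs: per-sample rows of class probabilities; labels: true class ids
    costs = [-math.log(row[y]) for row, y in zip(probs, labels)]
    hits = [max(range(len(row)), key=row.__getitem__) == y
            for row, y in zip(probs, labels)]
    return sum(costs) / len(costs), sum(hits) / len(hits)

probs = [[0.7, 0.2, 0.1],   # predicts class 0, label is 0: hit
         [0.1, 0.8, 0.1]]   # predicts class 1, label is 2: miss
labels = [0, 2]
avg_cost, acc = avg_cost_and_accuracy(probs, labels)  # acc is 0.5
```

This mirrors why the batch-averaged `avg_cost` is the quantity handed to the optimizer, while `acc` is only reported.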
@@ -381,7 +381,7 @@ def train_program():
    cost = fluid.layers.cross_entropy(input=predict, label=label)
    avg_cost = fluid.layers.mean(cost)
    accuracy = fluid.layers.accuracy(input=predict, label=label)
    return [avg_cost, accuracy, predict]
```
## Optimizer Function Configuration
@@ -426,7 +426,7 @@ feed_order = ['pixel', 'label']
main_program = fluid.default_main_program()
star_program = fluid.default_startup_program()
avg_cost, acc, predict = train_program()
# Test program
test_program = main_program.clone(for_test=True)
......
@@ -381,7 +381,7 @@ def train_program():
    cost = fluid.layers.cross_entropy(input=predict, label=label)
    avg_cost = fluid.layers.mean(cost)
    accuracy = fluid.layers.accuracy(input=predict, label=label)
    return [avg_cost, accuracy, predict]
```
## Optimizer Function Configuration
@@ -429,7 +429,7 @@ feed_order = ['pixel', 'label']
main_program = fluid.default_main_program()
star_program = fluid.default_startup_program()
avg_cost, acc, predict = train_program()
# Test program
test_program = main_program.clone(for_test=True)
......