diff --git a/03.image_classification/README.cn.md b/03.image_classification/README.cn.md
index 574bb3ca3fd435eaabd10a0b2c737b8ecade9d09..299a0aa077b7ecb02e2feb634e400cc27689c277 100644
--- a/03.image_classification/README.cn.md
+++ b/03.image_classification/README.cn.md
@@ -221,7 +221,7 @@ def vgg_bn_drop(input):
 ```
 
-1. 首先定义了一组卷积网络,即conv_block。卷积核大小为3x3,池化窗口大小为2x2,窗口滑动大小为2,groups决定每组VGG模块是几次连续的卷积操作,dropouts指定Dropout操作的概率。所使用的`img_conv_group`是在`paddle.networks`中预定义的模块,由若干组 Conv->BN->ReLu->Dropout 和 一组 Pooling 组成。
+1. 首先定义了一组卷积网络,即conv_block。卷积核大小为3x3,池化窗口大小为2x2,窗口滑动大小为2,groups决定每组VGG模块是几次连续的卷积操作,dropouts指定Dropout操作的概率。所使用的`img_conv_group`是在`paddle.nets`中预定义的模块,由若干组 Conv->BN->ReLU->Dropout 和 一组 Pooling 组成。
 
 2. 五组卷积操作,即 5个conv_block。 第一、二组采用两次连续的卷积操作。第三、四、五组采用三次连续的卷积操作。每组最后一个卷积后面Dropout概率为0,即不使用Dropout操作。
@@ -288,7 +288,7 @@ def layer_warp(block_func, input, ch_in, ch_out, count, stride):
 3. 最后对网络做均值池化并返回该层。
 
-注意:除第一层卷积层和最后一层全连接层之外,要求三组 `layer_warp` 总的含参层数能够被6整除,即 `resnet_cifar10` 的 depth 要满足 $(depth - 2) % 6 = 0$ 。
+注意:除第一层卷积层和最后一层全连接层之外,要求三组 `layer_warp` 总的含参层数能够被6整除,即 `resnet_cifar10` 的 depth 要满足 (depth - 2) % 6 = 0。
 
 ```python
 def resnet_cifar10(ipt, depth=32):
@@ -370,7 +370,7 @@ test_reader = paddle.batch(
 ```
 ### Trainer 程序的实现
-我们需要为训练过程制定一个main_program, 同样的,还需要为测试程序配置一个test_program。定义训练的 `place` ,并使用先前定义的优化器 `optimizer_func`。
+我们需要为训练过程制定一个main_program, 同样的,还需要为测试程序配置一个test_program。定义训练的 `place` ,并使用先前定义的优化器 `optimizer_program`。
 
 ```python
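For reference on the `img_conv_group` rename above, here is a minimal sketch of the conv_block this README describes, assuming the Fluid-era `fluid.nets.img_conv_group` signature; the keyword names below come from that era's API and should be checked against the installed Paddle version:

```python
import paddle.fluid as fluid

def conv_block(ipt, num_filter, groups, dropouts):
    # `groups` consecutive 3x3 Conv->BN->ReLU(->Dropout) stages,
    # closed by a single 2x2 max-pooling with stride 2
    return fluid.nets.img_conv_group(
        input=ipt,
        pool_size=2,
        pool_stride=2,
        conv_num_filter=[num_filter] * groups,
        conv_filter_size=3,
        conv_act='relu',
        conv_with_batchnorm=True,
        conv_batchnorm_drop_rate=dropouts,
        pool_type='max')
```

`vgg_bn_drop` then stacks five such blocks; a first group with two convolutions could look like `conv_block(input, 64, 2, [0.3, 0])`, where the trailing 0 disables dropout after the last convolution in the group (the dropout values here are illustrative).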
diff --git a/03.image_classification/README.md b/03.image_classification/README.md
index e8975b17bf487210e279b0cee8129f90da9ac13a..5724277628a48d9ba5a1336f40f77b5256bc03c8 100644
--- a/03.image_classification/README.md
+++ b/03.image_classification/README.md
@@ -216,7 +216,7 @@ def vgg_bn_drop(input):
 ```
 
- 1. Firstly, it defines a convolution block or conv_block. The default convolution kernel is 3x3, and the default pooling size is 2x2 with stride 2. Groups decide the number of consecutive convolution operations in each VGG block. Dropout specifies the probability to perform dropout operation. Function `img_conv_group` is predefined in `paddle.networks` consisting of a series of `Conv->BN->ReLu->Dropout` and a group of `Pooling` .
+ 1. Firstly, it defines a convolution block or conv_block. The default convolution kernel is 3x3, and the default pooling size is 2x2 with stride 2. Groups decide the number of consecutive convolution operations in each VGG block. Dropout specifies the probability to perform the dropout operation. Function `img_conv_group` is predefined in `paddle.nets`, consisting of a series of `Conv->BN->ReLU->Dropout` and a group of `Pooling`.
 
 2. Five groups of convolutions. The first two groups perform two consecutive convolutions, while the last three groups perform three convolutions in sequence. The dropout rate of the last convolution in each group is set to 0, which means there is no dropout for this layer.
@@ -283,7 +283,7 @@ The following are the components of `resnet_cifar10`:
 2. The next level is composed of three residual blocks, namely three `layer_warp`, each of which uses the left residual block in Figure 10.
 3. The last level is average pooling layer.
 
-Note: Except the first convolutional layer and the last fully-connected layer, the total number of layers with parameters in three `layer_warp` should be dividable by 6. In other words, the depth of `resnet_cifar10` should satisfy $(depth - 2) % 6 == 0$.
+Note: Except for the first convolutional layer and the last fully-connected layer, the total number of layers with parameters in the three `layer_warp` groups should be divisible by 6. In other words, the depth of `resnet_cifar10` should satisfy (depth - 2) % 6 == 0.
 
 ```python
 def resnet_cifar10(ipt, depth=32):
@@ -369,7 +369,7 @@ test_reader = paddle.batch(
 
 ### Implementation of the trainer program
 
-We need to develop a main_program for the training process. Similarly, we need to configure a test_program for the test program. It's also necessary to define the `place` of the training and use the optimizer `optimizer_func` previously defined .
+We need to develop a main_program for the training process. Similarly, we need to configure a test_program for testing. It's also necessary to define the `place` for training and use the previously defined optimizer `optimizer_program`.
 
diff --git a/03.image_classification/index.cn.html b/03.image_classification/index.cn.html
index 16656bf9ab292e448674c6aeea93cedb3547c35e..ec7d9d9669b3c76a4f6007ae648b6e762b170e42 100644
--- a/03.image_classification/index.cn.html
+++ b/03.image_classification/index.cn.html
@@ -263,7 +263,7 @@ def vgg_bn_drop(input):
 ```
 
-1. 首先定义了一组卷积网络,即conv_block。卷积核大小为3x3,池化窗口大小为2x2,窗口滑动大小为2,groups决定每组VGG模块是几次连续的卷积操作,dropouts指定Dropout操作的概率。所使用的`img_conv_group`是在`paddle.networks`中预定义的模块,由若干组 Conv->BN->ReLu->Dropout 和 一组 Pooling 组成。
+1. 首先定义了一组卷积网络,即conv_block。卷积核大小为3x3,池化窗口大小为2x2,窗口滑动大小为2,groups决定每组VGG模块是几次连续的卷积操作,dropouts指定Dropout操作的概率。所使用的`img_conv_group`是在`paddle.nets`中预定义的模块,由若干组 Conv->BN->ReLU->Dropout 和 一组 Pooling 组成。
 
 2. 五组卷积操作,即 5个conv_block。 第一、二组采用两次连续的卷积操作。第三、四、五组采用三次连续的卷积操作。每组最后一个卷积后面Dropout概率为0,即不使用Dropout操作。
@@ -330,7 +330,7 @@ def layer_warp(block_func, input, ch_in, ch_out, count, stride):
 3. 最后对网络做均值池化并返回该层。
 
-注意:除第一层卷积层和最后一层全连接层之外,要求三组 `layer_warp` 总的含参层数能够被6整除,即 `resnet_cifar10` 的 depth 要满足 $(depth - 2) % 6 = 0$ 。
+注意:除第一层卷积层和最后一层全连接层之外,要求三组 `layer_warp` 总的含参层数能够被6整除,即 `resnet_cifar10` 的 depth 要满足 (depth - 2) % 6 = 0。
 
 ```python
 def resnet_cifar10(ipt, depth=32):
@@ -412,7 +412,7 @@ test_reader = paddle.batch(
 ```
 ### Trainer 程序的实现
-我们需要为训练过程制定一个main_program, 同样的,还需要为测试程序配置一个test_program。定义训练的 `place` ,并使用先前定义的优化器 `optimizer_func`。
+我们需要为训练过程制定一个main_program, 同样的,还需要为测试程序配置一个test_program。定义训练的 `place` ,并使用先前定义的优化器 `optimizer_program`。
 
 ```python
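The constraint rewritten in the Notes above follows from the block structure: each of the three `layer_warp` groups stacks n basic residual blocks, each basic block holds two parameterized conv layers (the usual CIFAR ResNet basic block), so the middle of the network contributes 6n layers, and with the first conv layer plus the final FC layer, depth = 6n + 2. A small self-contained check of that arithmetic (the helper name is hypothetical):

```python
def blocks_per_group(depth):
    """Return n, the number of basic residual blocks in each of the
    three layer_warp groups of resnet_cifar10.

    Apart from the first conv layer and the final FC layer, the network
    has depth - 2 parameterized layers; with two conv layers per basic
    block and three groups, that forces (depth - 2) % 6 == 0.
    """
    assert (depth - 2) % 6 == 0, "depth must have the form 6n + 2"
    return (depth - 2) // 6

# depths of the usual CIFAR-10 ResNet variants all satisfy the constraint
for d in (20, 32, 44, 56):
    print(d, "->", blocks_per_group(d), "blocks per group")
```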
diff --git a/03.image_classification/index.html b/03.image_classification/index.html
index 506f348339e846ebd191021e6c98f52a48cbcdee..3b362e35696fa731365ee76568ed3bd2148db8c8 100644
--- a/03.image_classification/index.html
+++ b/03.image_classification/index.html
@@ -258,7 +258,7 @@ def vgg_bn_drop(input):
 ```
 
- 1. Firstly, it defines a convolution block or conv_block. The default convolution kernel is 3x3, and the default pooling size is 2x2 with stride 2. Groups decide the number of consecutive convolution operations in each VGG block. Dropout specifies the probability to perform dropout operation. Function `img_conv_group` is predefined in `paddle.networks` consisting of a series of `Conv->BN->ReLu->Dropout` and a group of `Pooling` .
+ 1. Firstly, it defines a convolution block or conv_block. The default convolution kernel is 3x3, and the default pooling size is 2x2 with stride 2. Groups decide the number of consecutive convolution operations in each VGG block. Dropout specifies the probability to perform the dropout operation. Function `img_conv_group` is predefined in `paddle.nets`, consisting of a series of `Conv->BN->ReLU->Dropout` and a group of `Pooling`.
 
 2. Five groups of convolutions. The first two groups perform two consecutive convolutions, while the last three groups perform three convolutions in sequence. The dropout rate of the last convolution in each group is set to 0, which means there is no dropout for this layer.
@@ -325,7 +325,7 @@ The following are the components of `resnet_cifar10`:
 2. The next level is composed of three residual blocks, namely three `layer_warp`, each of which uses the left residual block in Figure 10.
 3. The last level is average pooling layer.
 
-Note: Except the first convolutional layer and the last fully-connected layer, the total number of layers with parameters in three `layer_warp` should be dividable by 6. In other words, the depth of `resnet_cifar10` should satisfy $(depth - 2) % 6 == 0$.
+Note: Except for the first convolutional layer and the last fully-connected layer, the total number of layers with parameters in the three `layer_warp` groups should be divisible by 6. In other words, the depth of `resnet_cifar10` should satisfy (depth - 2) % 6 == 0.
 
 ```python
 def resnet_cifar10(ipt, depth=32):
@@ -411,7 +411,7 @@ test_reader = paddle.batch(
 
 ### Implementation of the trainer program
 
-We need to develop a main_program for the training process. Similarly, we need to configure a test_program for the test program. It's also necessary to define the `place` of the training and use the optimizer `optimizer_func` previously defined .
+We need to develop a main_program for the training process. Similarly, we need to configure a test_program for testing. It's also necessary to define the `place` for training and use the previously defined optimizer `optimizer_program`.
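Finally, for the trainer-program paragraphs that now reference `optimizer_program`: a minimal sketch, assuming the Fluid 1.x API, of how main_program, test_program, `place`, and the optimizer are typically wired together. The stub `inference_program` stands in for the chapter's `vgg_bn_drop`/`resnet_cifar10` networks, and the learning rate is illustrative:

```python
import paddle.fluid as fluid

def inference_program():
    # stand-in for the chapter's vgg_bn_drop / resnet_cifar10 network
    images = fluid.layers.data(name='pixel', shape=[3, 32, 32], dtype='float32')
    return fluid.layers.fc(input=images, size=10, act='softmax')

def train_program(predict):
    # attach the classification loss and an accuracy metric
    label = fluid.layers.data(name='label', shape=[1], dtype='int64')
    cost = fluid.layers.cross_entropy(input=predict, label=label)
    avg_cost = fluid.layers.mean(cost)
    acc = fluid.layers.accuracy(input=predict, label=label)
    return avg_cost, acc

def optimizer_program():
    # the optimizer the updated text refers to as `optimizer_program`
    return fluid.optimizer.Adam(learning_rate=0.001)

use_cuda = False  # set True to train on a GPU
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()

main_program = fluid.default_main_program()
predict = inference_program()
avg_cost, acc = train_program(predict)

# clone for testing *before* minimize() so the test program carries
# no backward or optimizer ops
test_program = main_program.clone(for_test=True)
optimizer_program().minimize(avg_cost)

exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
```

Cloning the main program before calling `minimize()` is what keeps test_program free of gradient and update ops, which is the point of configuring a separate test_program in the first place.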