From ed820164a12bdd6642eabe70db70a93c61f0ca86 Mon Sep 17 00:00:00 2001
From: ruri
Date: Wed, 19 Jun 2019 17:45:18 +0800
Subject: [PATCH] fix typo in image classification (#751)

* fix typo in image classification

* fix formula
---
 03.image_classification/.README.cn.md.swp | Bin 0 -> 16384 bytes
 03.image_classification/README.cn.md      | 6 +++---
 03.image_classification/README.md         | 6 +++---
 03.image_classification/index.cn.html     | 6 +++---
 03.image_classification/index.html        | 6 +++---
 5 files changed, 12 insertions(+), 12 deletions(-)
 create mode 100644 03.image_classification/.README.cn.md.swp

diff --git a/03.image_classification/.README.cn.md.swp b/03.image_classification/.README.cn.md.swp
new file mode 100644
index 0000000000000000000000000000000000000000..6617a9b33bd0cce658e11636638ffa1070376690
GIT binary patch
literal 16384
[16384-byte binary literal omitted: a Vim swap file accidentally committed with this patch]

diff --git a/03.image_classification/README.cn.md b/03.image_classification/README.cn.md
--- a/03.image_classification/README.cn.md
+++ b/03.image_classification/README.cn.md
@@ ... @@ def vgg_bn_drop(input):
 ```

-1. 首先定义了一组卷积网络，即conv_block。卷积核大小为3x3，池化窗口大小为2x2，窗口滑动大小为2，groups决定每组VGG模块是几次连续的卷积操作，dropouts指定Dropout操作的概率。所使用的`img_conv_group`是在`paddle.networks`中预定义的模块，由若干组 Conv->BN->ReLu->Dropout 和 一组 Pooling 组成。
+1. 首先定义了一组卷积网络，即conv_block。卷积核大小为3x3，池化窗口大小为2x2，窗口滑动大小为2，groups决定每组VGG模块是几次连续的卷积操作，dropouts指定Dropout操作的概率。所使用的`img_conv_group`是在`paddle.nets`中预定义的模块，由若干组 Conv->BN->ReLu->Dropout 和 一组 Pooling 组成。

 2. 五组卷积操作，即 5个conv_block。 第一、二组采用两次连续的卷积操作。第三、四、五组采用三次连续的卷积操作。每组最后一个卷积后面Dropout概率为0，即不使用Dropout操作。
@@ -288,7 +288,7 @@ def layer_warp(block_func, input, ch_in, ch_out, count, stride):
 3. 最后对网络做均值池化并返回该层。

-注意：除第一层卷积层和最后一层全连接层之外，要求三组 `layer_warp` 总的含参层数能够被6整除，即 `resnet_cifar10` 的 depth 要满足 $(depth - 2) % 6 = 0$ 。
+注意：除第一层卷积层和最后一层全连接层之外，要求三组 `layer_warp` 总的含参层数能够被6整除，即 `resnet_cifar10` 的 depth 要满足(depth-2)%6=0 。

 ```python
 def resnet_cifar10(ipt, depth=32):
@@ -370,7 +370,7 @@ test_reader = paddle.batch(
 ```

 ### Trainer 程序的实现
-我们需要为训练过程制定一个main_program, 同样的，还需要为测试程序配置一个test_program。定义训练的 `place` ，并使用先前定义的优化器 `optimizer_func`。
+我们需要为训练过程制定一个main_program, 同样的，还需要为测试程序配置一个test_program。定义训练的 `place` ，并使用先前定义的优化器 `optimizer_program`。

 ```python
diff --git a/03.image_classification/README.md b/03.image_classification/README.md
index e8975b1..5724277 100644
--- a/03.image_classification/README.md
+++ b/03.image_classification/README.md
@@ -216,7 +216,7 @@ def vgg_bn_drop(input):
 ```

- 1. Firstly, it defines a convolution block or conv_block. The default convolution kernel is 3x3, and the default pooling size is 2x2 with stride 2. Groups decide the number of consecutive convolution operations in each VGG block. Dropout specifies the probability to perform dropout operation. Function `img_conv_group` is predefined in `paddle.networks` consisting of a series of `Conv->BN->ReLu->Dropout` and a group of `Pooling` .
+ 1. Firstly, it defines a convolution block or conv_block. The default convolution kernel is 3x3, and the default pooling size is 2x2 with stride 2. Groups decide the number of consecutive convolution operations in each VGG block. Dropout specifies the probability to perform dropout operation. Function `img_conv_group` is predefined in `paddle.nets` consisting of a series of `Conv->BN->ReLu->Dropout` and a group of `Pooling` .

 2. Five groups of convolutions. The first two groups perform two consecutive convolutions, while the last three groups perform three convolutions in sequence. The dropout rate of the last convolution in each group is set to 0, which means there is no dropout for this layer.
@@ -283,7 +283,7 @@ The following are the components of `resnet_cifar10`:
 2. The next level is composed of three residual blocks, namely three `layer_warp`, each of which uses the left residual block in Figure 10.
 3. The last level is average pooling layer.

-Note: Except the first convolutional layer and the last fully-connected layer, the total number of layers with parameters in three `layer_warp` should be dividable by 6. In other words, the depth of `resnet_cifar10` should satisfy $(depth - 2) % 6 == 0$.
+Note: Except the first convolutional layer and the last fully-connected layer, the total number of layers with parameters in three `layer_warp` should be dividable by 6. In other words, the depth of `resnet_cifar10` should satisfy (depth-2)%6=0.

 ```python
 def resnet_cifar10(ipt, depth=32):
@@ -369,7 +369,7 @@ test_reader = paddle.batch(

 ### Implementation of the trainer program

-We need to develop a main_program for the training process. Similarly, we need to configure a test_program for the test program. It's also necessary to define the `place` of the training and use the optimizer `optimizer_func` previously defined .
+We need to develop a main_program for the training process. Similarly, we need to configure a test_program for the test program. It's also necessary to define the `place` of the training and use the optimizer `optimizer_program` previously defined .

diff --git a/03.image_classification/index.cn.html b/03.image_classification/index.cn.html
index 16656bf..ec7d9d9 100644
--- a/03.image_classification/index.cn.html
+++ b/03.image_classification/index.cn.html
@@ -263,7 +263,7 @@ def vgg_bn_drop(input):
 ```

-1. 首先定义了一组卷积网络，即conv_block。卷积核大小为3x3，池化窗口大小为2x2，窗口滑动大小为2，groups决定每组VGG模块是几次连续的卷积操作，dropouts指定Dropout操作的概率。所使用的`img_conv_group`是在`paddle.networks`中预定义的模块，由若干组 Conv->BN->ReLu->Dropout 和 一组 Pooling 组成。
+1. 首先定义了一组卷积网络，即conv_block。卷积核大小为3x3，池化窗口大小为2x2，窗口滑动大小为2，groups决定每组VGG模块是几次连续的卷积操作，dropouts指定Dropout操作的概率。所使用的`img_conv_group`是在`paddle.nets`中预定义的模块，由若干组 Conv->BN->ReLu->Dropout 和 一组 Pooling 组成。

 2. 五组卷积操作，即 5个conv_block。 第一、二组采用两次连续的卷积操作。第三、四、五组采用三次连续的卷积操作。每组最后一个卷积后面Dropout概率为0，即不使用Dropout操作。
@@ -330,7 +330,7 @@ def layer_warp(block_func, input, ch_in, ch_out, count, stride):
 3. 最后对网络做均值池化并返回该层。

-注意：除第一层卷积层和最后一层全连接层之外，要求三组 `layer_warp` 总的含参层数能够被6整除，即 `resnet_cifar10` 的 depth 要满足 $(depth - 2) % 6 = 0$ 。
+注意：除第一层卷积层和最后一层全连接层之外，要求三组 `layer_warp` 总的含参层数能够被6整除，即 `resnet_cifar10` 的 depth 要满足(depth-2)%6=0 。

 ```python
 def resnet_cifar10(ipt, depth=32):
@@ -412,7 +412,7 @@ test_reader = paddle.batch(
 ```

 ### Trainer 程序的实现
-我们需要为训练过程制定一个main_program, 同样的，还需要为测试程序配置一个test_program。定义训练的 `place` ，并使用先前定义的优化器 `optimizer_func`。
+我们需要为训练过程制定一个main_program, 同样的，还需要为测试程序配置一个test_program。定义训练的 `place` ，并使用先前定义的优化器 `optimizer_program`。

 ```python
diff --git a/03.image_classification/index.html b/03.image_classification/index.html
index 506f348..3b362e3 100644
--- a/03.image_classification/index.html
+++ b/03.image_classification/index.html
@@ -258,7 +258,7 @@ def vgg_bn_drop(input):
 ```

- 1. Firstly, it defines a convolution block or conv_block. The default convolution kernel is 3x3, and the default pooling size is 2x2 with stride 2. Groups decide the number of consecutive convolution operations in each VGG block. Dropout specifies the probability to perform dropout operation. Function `img_conv_group` is predefined in `paddle.networks` consisting of a series of `Conv->BN->ReLu->Dropout` and a group of `Pooling` .
+ 1. Firstly, it defines a convolution block or conv_block. The default convolution kernel is 3x3, and the default pooling size is 2x2 with stride 2. Groups decide the number of consecutive convolution operations in each VGG block. Dropout specifies the probability to perform dropout operation. Function `img_conv_group` is predefined in `paddle.nets` consisting of a series of `Conv->BN->ReLu->Dropout` and a group of `Pooling` .

 2. Five groups of convolutions. The first two groups perform two consecutive convolutions, while the last three groups perform three convolutions in sequence. The dropout rate of the last convolution in each group is set to 0, which means there is no dropout for this layer.
@@ -325,7 +325,7 @@ The following are the components of `resnet_cifar10`:
 2. The next level is composed of three residual blocks, namely three `layer_warp`, each of which uses the left residual block in Figure 10.
 3. The last level is average pooling layer.

-Note: Except the first convolutional layer and the last fully-connected layer, the total number of layers with parameters in three `layer_warp` should be dividable by 6. In other words, the depth of `resnet_cifar10` should satisfy $(depth - 2) % 6 == 0$.
+Note: Except the first convolutional layer and the last fully-connected layer, the total number of layers with parameters in three `layer_warp` should be dividable by 6. In other words, the depth of `resnet_cifar10` should satisfy (depth-2)%6=0.

 ```python
 def resnet_cifar10(ipt, depth=32):
@@ -411,7 +411,7 @@ test_reader = paddle.batch(

 ### Implementation of the trainer program

-We need to develop a main_program for the training process. Similarly, we need to configure a test_program for the test program. It's also necessary to define the `place` of the training and use the optimizer `optimizer_func` previously defined .
+We need to develop a main_program for the training process. Similarly, we need to configure a test_program for the test program. It's also necessary to define the `place` of the training and use the optimizer `optimizer_program` previously defined .

--
GitLab
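The formula this patch reformats in every file is the `resnet_cifar10` depth constraint: excluding the first convolution and the final fully-connected layer, the parameterized layers in the three `layer_warp` groups must split evenly, so `depth` must satisfy (depth - 2) % 6 == 0. As a quick illustration (not part of the patch; the helper name is hypothetical), a minimal Python sketch of that check:

```python
# Hypothetical helper illustrating the constraint described in the patched note:
# a resnet_cifar10-style depth is admissible only when (depth - 2) % 6 == 0,
# i.e. the remaining layers divide evenly across the three layer_warp groups.
def is_valid_depth(depth):
    """Return True when (depth - 2) is divisible by 6."""
    return depth > 2 and (depth - 2) % 6 == 0

# First few admissible depths; 32 (the default in resnet_cifar10) is among them.
valid = [d for d in range(8, 60) if is_valid_depth(d)]
print(valid[:5])  # [8, 14, 20, 26, 32]
```

This is why the docs keep `depth=32` as the default argument: 32 - 2 = 30, which splits into three groups of 10 parameterized layers.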