From b74d0d49d97ecbab42ba8b05b170f1ae630670e1 Mon Sep 17 00:00:00 2001
From: kinghuin
Date: Wed, 17 Jul 2019 18:48:33 +0800
Subject: [PATCH] fix layers number error test=develop

---
 03.image_classification/README.md  | 2 +-
 03.image_classification/index.html | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/03.image_classification/README.md b/03.image_classification/README.md
index cd3f644..783c9a6 100644
--- a/03.image_classification/README.md
+++ b/03.image_classification/README.md
@@ -140,7 +140,7 @@ Figure 9 illustrates the ResNet architecture. To the left is the basic building
 Figure 9. Residual block

-Figure 10 illustrates ResNets with 50, 101, 152 layers, respectively. All three networks use bottleneck blocks and their difference lies in the repetition time of residual blocks. ResNet converges very fast and can be trained with hundreds or thousands of layers.
+Figure 10 illustrates ResNets with 50, 116, and 152 layers, respectively. All three networks use bottleneck blocks; they differ only in how many times the residual blocks are repeated in each stage. ResNet converges quickly and can be trained with hundreds or even thousands of layers.


diff --git a/03.image_classification/index.html b/03.image_classification/index.html
index 0a34d23..aeba2a8 100644
--- a/03.image_classification/index.html
+++ b/03.image_classification/index.html
@@ -182,7 +182,7 @@ Figure 9 illustrates the ResNet architecture. To the left is the basic building
 Figure 9. Residual block

-Figure 10 illustrates ResNets with 50, 101, 152 layers, respectively. All three networks use bottleneck blocks and their difference lies in the repetition time of residual blocks. ResNet converges very fast and can be trained with hundreds or thousands of layers.
+Figure 10 illustrates ResNets with 50, 116, and 152 layers, respectively. All three networks use bottleneck blocks; they differ only in how many times the residual blocks are repeated in each stage. ResNet converges quickly and can be trained with hundreds or even thousands of layers.


--
GitLab
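
A note for reviewers checking the layer counts edited above: in a bottleneck ResNet, each block contains three conv layers (1x1 reduce, 3x3, 1x1 restore), and the network adds one initial conv layer plus one final fully connected layer, so the total depth is 3 * sum(per-stage repeats) + 2. The following is a minimal pure-Python sketch, not part of this patch; the per-stage repeat counts for the 50- and 152-layer variants are the canonical ones from He et al. (2015), while the count for the middle network should be read off Figure 10 in the chapter.

# Sketch: how per-stage repetition counts determine a bottleneck ResNet's depth.
# Each bottleneck block has 3 conv layers (1x1 reduce, 3x3, 1x1 restore);
# the network adds one initial conv layer and one final fully connected layer.

def bottleneck_resnet_depth(stage_repeats):
    """Total weighted-layer count for a bottleneck ResNet."""
    convs_per_block = 3   # 1x1, 3x3, 1x1
    stem_and_head = 2     # initial conv + final fc
    return convs_per_block * sum(stage_repeats) + stem_and_head

# Canonical per-stage repeats from He et al. (2015):
assert bottleneck_resnet_depth([3, 4, 6, 3]) == 50
assert bottleneck_resnet_depth([3, 8, 36, 3]) == 152
# For the middle variant, the block count decides the name:
assert bottleneck_resnet_depth([3, 4, 23, 3]) == 101   # 33 blocks -> 101 layers
assert 3 * 38 + 2 == 116                               # 38 blocks -> 116 layers

In other words, a middle network with 33 bottleneck blocks in total has 101 layers, while one with 38 has 116, so the correct number depends on how many blocks Figure 10 actually draws.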