diff --git a/03.image_classification/README.md b/03.image_classification/README.md
index cd3f6442a9eee307cecd0efa1b746077185f0734..783c9a66b9fe92785abb6dcf995d33377295cb74 100644
--- a/03.image_classification/README.md
+++ b/03.image_classification/README.md
@@ -140,7 +140,7 @@ Figure 9 illustrates the ResNet architecture. To the left is the basic building
 Figure 9. Residual block

-Figure 10 illustrates ResNets with 50, 101, 152 layers, respectively. All three networks use bottleneck blocks and their difference lies in the repetition time of residual blocks. ResNet converges very fast and can be trained with hundreds or thousands of layers.
+Figure 10 illustrates ResNets with 50, 101, and 152 layers, respectively. All three networks use bottleneck blocks; they differ only in how many times each residual block is repeated. ResNet converges quickly and can be trained with hundreds or even thousands of layers.


diff --git a/03.image_classification/index.html b/03.image_classification/index.html
index 0a34d23e09cea345c13c45b0cf71e1c62736848a..aeba2a81c23e3e27f39b558cdc9bc4a36ecfd9ed 100644
--- a/03.image_classification/index.html
+++ b/03.image_classification/index.html
@@ -182,7 +182,7 @@ Figure 9 illustrates the ResNet architecture. To the left is the basic building
 Figure 9. Residual block

-Figure 10 illustrates ResNets with 50, 101, 152 layers, respectively. All three networks use bottleneck blocks and their difference lies in the repetition time of residual blocks. ResNet converges very fast and can be trained with hundreds or thousands of layers.
+Figure 10 illustrates ResNets with 50, 101, and 152 layers, respectively. All three networks use bottleneck blocks; they differ only in how many times each residual block is repeated. ResNet converges quickly and can be trained with hundreds or even thousands of layers.
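
Note: since the changed paragraph describes the bottleneck design only in prose, a minimal sketch of a bottleneck residual block is included below for reference. It assumes the Paddle 2.x `paddle.nn` API; `BottleneckBlock` and its parameter names are hypothetical illustrations, not the tutorial's own definitions.

```python
import paddle
import paddle.nn as nn
import paddle.nn.functional as F

class BottleneckBlock(nn.Layer):
    """Illustrative bottleneck residual block (assumed names, Paddle 2.x API):
    1x1 conv to reduce channels -> 3x3 conv -> 1x1 conv to expand, plus shortcut."""

    expansion = 4  # output channels = mid_channels * 4

    def __init__(self, in_channels, mid_channels, stride=1):
        super().__init__()
        out_channels = mid_channels * self.expansion
        self.conv1 = nn.Conv2D(in_channels, mid_channels, 1)
        self.bn1 = nn.BatchNorm2D(mid_channels)
        self.conv2 = nn.Conv2D(mid_channels, mid_channels, 3, stride=stride, padding=1)
        self.bn2 = nn.BatchNorm2D(mid_channels)
        self.conv3 = nn.Conv2D(mid_channels, out_channels, 1)
        self.bn3 = nn.BatchNorm2D(out_channels)
        # Project the shortcut with a 1x1 conv when the shapes differ.
        self.shortcut = None
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2D(in_channels, out_channels, 1, stride=stride),
                nn.BatchNorm2D(out_channels),
            )

    def forward(self, x):
        identity = x if self.shortcut is None else self.shortcut(x)
        y = F.relu(self.bn1(self.conv1(x)))
        y = F.relu(self.bn2(self.conv2(y)))
        y = self.bn3(self.conv3(y))
        return F.relu(y + identity)  # the residual addition
```

The three depths in Figure 10 come from how often this block is repeated per stage: ResNet-50 uses (3, 4, 6, 3), ResNet-101 uses (3, 4, 23, 3), and ResNet-152 uses (3, 8, 36, 3); with 3 convolutional layers per block plus the stem and classifier layers, these repetitions give 50, 101, and 152 layers, respectively.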