@@ -140,7 +140,7 @@ Figure 9 illustrates the ResNet architecture. To the left is the basic building
 Figure 9. Residual block
 </p>
-Figure 10 illustrates ResNets with 50, 116, 152 layers, respectively. All three networks use bottleneck blocks and their difference lies in the repetition time of residual blocks. ResNet converges very fast and can be trained with hundreds or thousands of layers.
+Figure 10 illustrates ResNets with 50, 101, and 152 layers, respectively. All three networks use bottleneck blocks; they differ only in how many times each residual block is repeated. ResNet converges quickly and can be trained with hundreds or even over a thousand layers.
@@ -182,7 +182,7 @@ Figure 9 illustrates the ResNet architecture. To the left is the basic building
 Figure 9. Residual block
 </p>
-Figure 10 illustrates ResNets with 50, 116, 152 layers, respectively. All three networks use bottleneck blocks and their difference lies in the repetition time of residual blocks. ResNet converges very fast and can be trained with hundreds or thousands of layers.
+Figure 10 illustrates ResNets with 50, 101, and 152 layers, respectively. All three networks use bottleneck blocks; they differ only in how many times each residual block is repeated. ResNet converges quickly and can be trained with hundreds or even over a thousand layers.
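The identity-shortcut idea behind the residual block in the hunks above can be sketched very loosely in plain NumPy. This is an illustrative toy, not the actual ResNet implementation: it uses small fully connected layers for the residual branch F(x) instead of the 1x1/3x3/1x1 convolutions of a real bottleneck block, and the names `residual_block`, `w1`, `w2` are made up for this sketch.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, w1, w2):
    """Toy residual block: y = relu(F(x) + x).

    F(x) is a tiny two-layer MLP standing in for the convolutional
    branch; the key point is the identity shortcut `+ x`, which lets
    the block default to (near-)identity when F contributes little.
    """
    f = w2 @ relu(w1 @ x)   # residual branch F(x)
    return relu(f + x)      # identity shortcut added before the final ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w1 = rng.standard_normal((4, 4))
w2 = rng.standard_normal((4, 4))

y = residual_block(x, w1, w2)

# With a zero residual branch the block degenerates to relu(x):
# the signal (and gradient) still flows through the shortcut,
# which is why very deep stacks of such blocks remain trainable.
y_id = residual_block(x, np.zeros((4, 4)), np.zeros((4, 4)))
```

Stacking many such blocks, with bottleneck variants repeated different numbers of times per stage, is what distinguishes ResNet-50, -101, and -152.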