From 03593f9ea6eb409dda4ba43991dc361bb3183fe1 Mon Sep 17 00:00:00 2001 From: cuicheng01 Date: Tue, 7 Dec 2021 02:39:56 +0000 Subject: [PATCH] update some en docs --- docs/en/models/DLA_en.md | 4 ++-- docs/en/models/DPN_DenseNet_en.md | 6 +++--- docs/en/models/ESNet_en.md | 4 ++-- docs/en/models/EfficientNet_and_ResNeXt101_wsl_en.md | 10 +++++----- docs/en/models/HRNet_en.md | 8 ++++---- docs/en/models/HarDNet_en.md | 4 ++-- docs/en/models/Inception_en.md | 12 ++++++------ docs/en/models/LeViT_en.md | 6 +++--- docs/en/models/MixNet_en.md | 6 +++--- docs/en/models/Mobile_en.md | 8 ++++---- docs/en/models/Others_en.md | 6 +++--- docs/en/models/ReXNet_en.md | 6 +++--- docs/en/models/RedNet_en.md | 4 ++-- docs/en/models/RepVGG_en.md | 6 +++--- docs/en/models/ResNeSt_RegNet_en.md | 4 ++-- docs/en/models/ResNet_and_vd_en.md | 8 ++++---- docs/en/models/SEResNext_and_Res2Net_en.md | 8 ++++---- docs/en/models/SwinTransformer_en.md | 6 +++--- docs/en/models/TNT_en.md | 4 ++-- docs/en/models/Twins_en.md | 4 ++-- docs/en/models/ViT_and_DeiT_en.md | 8 ++++---- 21 files changed, 66 insertions(+), 66 deletions(-) diff --git a/docs/en/models/DLA_en.md b/docs/en/models/DLA_en.md index fc5d75c0..0d7b8380 100644 --- a/docs/en/models/DLA_en.md +++ b/docs/en/models/DLA_en.md @@ -3,7 +3,7 @@ ## Catalogue * [1. Overview](#1) -* [2. Accuracy, FLOPS and Parameters](#2) +* [2. Accuracy, FLOPs and Parameters](#2) ## Overview @@ -11,7 +11,7 @@ DLA (Deep Layer Aggregation). Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Although skip connections have been incorporated to combine layers, these connections have been "shallow" themselves, and only fuse by simple, one-step operations. The authors augment standard architectures with deeper aggregation to better fuse information across layers. Deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes. [paper](https://arxiv.org/abs/1707.06484) -## 2. Accuracy, FLOPS and Parameters +## 2. Accuracy, FLOPs and Parameters | Model | Params (M) | FLOPs (G) | Top-1 (%) | Top-5 (%) | |:-----------------:|:----------:|:---------:|:---------:|:---------:| diff --git a/docs/en/models/DPN_DenseNet_en.md b/docs/en/models/DPN_DenseNet_en.md index 7447d7a1..89767666 100644 --- a/docs/en/models/DPN_DenseNet_en.md +++ b/docs/en/models/DPN_DenseNet_en.md @@ -10,9 +10,9 @@ ## 1. Overview -DenseNet is a new network structure proposed in 2017 and was the best paper of CVPR. The network has designed a new cross-layer connected block called dense-block. Compared to the bottleneck in ResNet, dense-block has designed a more aggressive dense connection module, that is, connecting all the layers to each other, and each layer will accept all the layers in front of it as its additional input. DenseNet stacks all dense-blocks into a densely connected network. The dense connection makes DenseNet easier to backpropagate, making the network easier to train and converge. 
The full name of DPN is Dual Path Networks, which is a network composed of DenseNet and ResNeXt, which proves that DenseNet can extract new features from the previous level, and ResNeXt essentially reuses the extracted features . The author further analyzes and finds that ResNeXt has high reuse rate for features, but low redundancy, while DenseNet can create new features, but with high redundancy. Combining the advantages of the two structures, the author designed the DPN network. In the end, the DPN network achieved better results than ResNeXt and DenseNet under the same FLOPS and parameters.
+DenseNet is a new network structure proposed in 2017 and won the CVPR best paper award that year. The network designed a new cross-layer connected block called dense-block. Compared to the bottleneck in ResNet, the dense-block is a more aggressive dense connection module: it connects all the layers to each other, and each layer accepts the outputs of all preceding layers as its additional input. DenseNet stacks all dense-blocks into a densely connected network. The dense connections make DenseNet easier to backpropagate through, making the network easier to train and converge. The full name of DPN is Dual Path Networks, a network composed of DenseNet and ResNeXt, which shows that DenseNet can extract new features from the previous level, while ResNeXt essentially reuses the already extracted features. The author further analyzed this and found that ResNeXt has a high reuse rate for features but low redundancy, while DenseNet can create new features but with high redundancy. Combining the advantages of the two structures, the author designed the DPN network. In the end, the DPN network achieved better results than ResNeXt and DenseNet under the same FLOPs and parameters.

-The FLOPS, parameters, and inference time on the T4 GPU of this series of models are shown in the figure below.
+The FLOPs, parameters, and inference time on the T4 GPU of this series of models are shown in the figure below.

![](../../images/models/T4_benchmark/t4.fp32.bs4.DPN.flops.png)

@@ -22,7 +22,7 @@ The FLOPS, parameters, and inference time on the T4 GPU of this series of models

![](../../images/models/T4_benchmark/t4.fp16.bs4.DPN.png)

-The pretrained models of these two types of models (a total of 10) are open sourced in PaddleClas at present. The indicators are shown in the figure above. It is easy to observe that under the same FLOPS and parameters, DPN has higher accuracy than DenseNet. However,because DPN has more branches, its inference speed is slower than DenseNet. Since DenseNet264 has the deepest layers in all DenseNet networks, it has the largest parameters,DenseNet161 has the largest width, resulting the largest FLOPs and the highest accuracy in this series. From the perspective of inference speed, DenseNet161, which has a large FLOPs and high accuracy, has a faster speed than DenseNet264, so it has a greater advantage than DenseNet264.
+The pretrained models of these two types of models (a total of 10) are open sourced in PaddleClas at present. The indicators are shown in the figure above. It is easy to observe that under the same FLOPs and parameters, DPN has higher accuracy than DenseNet. However, because DPN has more branches, its inference speed is slower than DenseNet's. Since DenseNet264 has the deepest layers among all DenseNet networks, it has the most parameters, while DenseNet161 has the largest width, resulting in the largest FLOPs and the highest accuracy in this series. 
From the perspective of inference speed, DenseNet161, which has large FLOPs and high accuracy, is nevertheless faster than DenseNet264, so it holds a greater advantage.
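To make the dense connection pattern described above concrete, below is a minimal PaddlePaddle sketch of a dense block in which every layer receives the concatenation of all earlier outputs. The Conv-BN-ReLU layout and the channel numbers are illustrative assumptions for this sketch, not the PaddleClas implementation (the official DenseNet uses a BN-ReLU-Conv ordering).

```python
import paddle
import paddle.nn as nn

class DenseBlock(nn.Layer):
    """Illustrative dense block: layer i consumes the concatenation of the
    block input and all previous layer outputs along the channel axis."""

    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.LayerList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2D(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias_attr=False),
                nn.BatchNorm2D(growth_rate),
                nn.ReLU()))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(paddle.concat(features, axis=1))
            features.append(out)  # every output is kept and fed to all later layers
        return paddle.concat(features, axis=1)

# e.g. 64 input channels, growth rate 32, 4 layers -> 64 + 4 * 32 = 192 channels
block = DenseBlock(64, 32, 4)
y = block(paddle.randn([1, 64, 56, 56]))
```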
For DPN series networks, the larger the model's FLOPs and parameters, the higher the model's accuracy. Among them, since the width of DPN107 is the largest, it has the largest number of parameters and FLOPs in this series of networks.

diff --git a/docs/en/models/ESNet_en.md b/docs/en/models/ESNet_en.md
index 77219229..fbfcb5d3 100644
--- a/docs/en/models/ESNet_en.md
+++ b/docs/en/models/ESNet_en.md
@@ -3,7 +3,7 @@

## Catalogue

* [1. Overview](#1)
-* [2. Accuracy, FLOPS and Parameters](#2)
+* [2. Accuracy, FLOPs and Parameters](#2)

## 1. Overview

ESNet (Enhanced ShuffleNet) is a lightweight network developed by Baidu. This network combines the advantages of MobileNetV3, GhostNet, and PPLCNet on the basis of ShuffleNetV2 to form a faster and more accurate network on ARM devices. Because of its excellent performance, [PP-PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3/configs/picodet) launched in PaddleDetection uses this model as a backbone; combined with a stronger object detection algorithm, the final mAP refreshed the SOTA of object detection models on ARM devices in one fell swoop.

-## 2. Accuracy, FLOPS and Parameters
+## 2. Accuracy, FLOPs and Parameters

| Models | Top1 | Top5 | FLOPs
(M) | Params
(M) |
|:--:|:--:|:--:|:--:|:--:|

diff --git a/docs/en/models/EfficientNet_and_ResNeXt101_wsl_en.md b/docs/en/models/EfficientNet_and_ResNeXt101_wsl_en.md
index 6b25f69d..cd9a2d23 100644
--- a/docs/en/models/EfficientNet_and_ResNeXt101_wsl_en.md
+++ b/docs/en/models/EfficientNet_and_ResNeXt101_wsl_en.md
@@ -3,7 +3,7 @@

## Catalogue

* [1. Overview](#1)
-* [2. Accuracy, FLOPS and Parameters](#2)
+* [2. Accuracy, FLOPs and Parameters](#2)
* [3. Inference speed based on V100 GPU](#3)
* [4. Inference speed based on T4 GPU](#4)

@@ -13,11 +13,11 @@

EfficientNet is a lightweight NAS-based network released by Google in 2019. EfficientNetB7 refreshed the classification accuracy of ImageNet-1k at that time. In this paper, the author points out that the traditional methods to improve the performance of neural networks mainly start with the width of the network, the depth of the network, and the resolution of the input picture. However, the author found through experiments that balancing these three dimensions is essential for improving accuracy and efficiency. Therefore, the author summarized how to balance all three dimensions at the same time through a series of experiments.

-At the same time, based on this scaling method, the author built a total of 7 networks B1-B7 in the EfficientNet series on the basis of EfficientNetB0, and with the same FLOPS and parameters, the accuracy reached state-of-the-art effect.
+At the same time, based on this scaling method, the author built a total of 7 networks B1-B7 in the EfficientNet series on the basis of EfficientNetB0, and with the same FLOPs and parameters, their accuracy reached the state-of-the-art level.

ResNeXt is an improved version of ResNet that was proposed by Facebook in 2016. In 2019, Facebook researchers studied the accuracy limit of the series network on ImageNet through weakly-supervised-learning. In order to distinguish it from the previous ResNeXt networks, the suffix of this series is WSL, where WSL is the abbreviation of weakly-supervised-learning. In order to have stronger feature extraction capability, the researchers further enlarged the network width, among which the largest ResNeXt101_32x48d_wsl has 800 million parameters. It was trained on 940 million weakly-labeled images, and the results were then finetuned on ImageNet-1k. Finally, the acc-1 of ImageNet-1k reaches 85.4%, which is also the network with the highest precision under the resolution of 224x224 on ImageNet-1k so far. In Fix-ResNeXt, the author used a larger image resolution, made a special Fix strategy for the inconsistency of image data preprocessing in training and testing, and made ResNeXt101_32x48d_wsl reach a higher accuracy. Since it used the Fix strategy, it was named Fix-ResNeXt101_32x48d_wsl.

-The FLOPS, parameters, and inference time on the T4 GPU of this series of models are shown in the figure below.
+The FLOPs, parameters, and inference time on the T4 GPU of this series of models are shown in the figure below.

![](../../images/models/T4_benchmark/t4.fp32.bs4.EfficientNet.flops.png)

@@ -30,9 +30,9 @@ The FLOPS, parameters, and inference time on the T4 GPU of this series of models

At present, there are a total of 14 pretrained models of the two types of models that PaddleClas open sources. It can be seen from the above figure that the advantages of the EfficientNet series network are very obvious. The ResNeXt101_wsl series model uses more data, and the final accuracy is also higher.
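As a worked illustration of the scaling method summarized above, the helper below scales depth, width, and input resolution together from a single coefficient phi. The coefficients alpha=1.2, beta=1.1, and gamma=1.15 follow the EfficientNet paper; the function itself is a simplified assumption for this example (the released B1-B7 configurations are additionally rounded and hand-adjusted), not PaddleClas code.

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15, base_res=224):
    """Scale all three dimensions together from one compound coefficient."""
    depth_mult = alpha ** phi                         # multiplier on the number of layers
    width_mult = beta ** phi                          # multiplier on the number of channels
    resolution = int(round(base_res * gamma ** phi))  # input image size in pixels
    return depth_mult, width_mult, resolution

# phi = 0..7 corresponds roughly to EfficientNetB0..B7
for phi in range(8):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, input {r}px")
```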
EfficientNet_B0_small removes SE_block based on EfficientNet_B0, which has faster inference speed. -## 2. Accuracy, FLOPS and Parameters +## 2. Accuracy, FLOPs and Parameters -| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPS
(G) | Parameters
(M) | +| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPs
(G) | Parameters
(M) | |:--:|:--:|:--:|:--:|:--:|:--:|:--:| | ResNeXt101_
32x8d_wsl | 0.826 | 0.967 | 0.822 | 0.964 | 29.140 | 78.440 | | ResNeXt101_
32x16d_wsl | 0.842 | 0.973 | 0.842 | 0.972 | 57.550 | 152.660 | diff --git a/docs/en/models/HRNet_en.md b/docs/en/models/HRNet_en.md index 847f849a..f6cf5378 100644 --- a/docs/en/models/HRNet_en.md +++ b/docs/en/models/HRNet_en.md @@ -3,7 +3,7 @@ ## Catalogue * [1. Overview](#1) -* [2. Accuracy, FLOPS and Parameters](#2) +* [2. Accuracy, FLOPs and Parameters](#2) * [3. Inference speed based on V100 GPU](#3) * [4. Inference speed based on T4 GPU](#4) @@ -12,7 +12,7 @@ HRNet is a brand new neural network proposed by Microsoft research Asia in 2019. Different from the previous convolutional neural network, this network can still maintain high resolution in the deep layer of the network, so the heat map of the key points predicted is more accurate, and it is also more accurate in space. In addition, the network performs particularly well in other visual tasks sensitive to resolution, such as detection and segmentation. -The FLOPS, parameters, and inference time on the T4 GPU of this series of models are shown in the figure below. +The FLOPs, parameters, and inference time on the T4 GPU of this series of models are shown in the figure below. ![](../../images/models/T4_benchmark/t4.fp32.bs4.HRNet.flops.png) @@ -25,9 +25,9 @@ The FLOPS, parameters, and inference time on the T4 GPU of this series of models At present, there are 7 pretrained models of such models open-sourced by PaddleClas, and their indicators are shown in the figure. Among them, the reason why the accuracy of the HRNet_W48_C indicator is abnormal may be due to fluctuations in training. -## 2. Accuracy, FLOPS and Parameters +## 2. Accuracy, FLOPs and Parameters -| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPS
(G) | Parameters
(M) | +| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPs
(G) | Parameters
(M) |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| HRNet_W18_C | 0.769 | 0.934 | 0.768 | 0.934 | 4.140 | 21.290 |
| HRNet_W18_C_ssld | 0.816 | 0.958 | 0.768 | 0.934 | 4.140 | 21.290 |

diff --git a/docs/en/models/HarDNet_en.md b/docs/en/models/HarDNet_en.md
index ba1c2e5e..7388896a 100644
--- a/docs/en/models/HarDNet_en.md
+++ b/docs/en/models/HarDNet_en.md
@@ -3,7 +3,7 @@

## Catalogue

* [1. Overview](#1)
-* [2. Accuracy, FLOPS and Parameters](#2)
+* [2. Accuracy, FLOPs and Parameters](#2)

## 1. Overview

HarDNet (Harmonic DenseNet) is a brand new neural network proposed by National Tsing Hua University in 2019, which aims to achieve high efficiency in terms of both low MACs and low memory traffic. The new network achieves 35%, 36%, 30%, 32%, and 45% inference time reduction compared with FC-DenseNet-103, DenseNet-264, ResNet-50, ResNet-152, and SSD-VGG, respectively. The authors use tools including Nvidia profiler and ARM Scale-Sim to measure the memory traffic and verify that the inference latency is indeed proportional to the memory traffic consumption and that the proposed network consumes low memory traffic. [Paper](https://arxiv.org/abs/1909.00948).

-## 2. Accuracy, FLOPS and Parameters
+## 2. Accuracy, FLOPs and Parameters

| Model | Params (M) | FLOPs (G) | Top-1 (%) | Top-5 (%) |
|:---------------------:|:----------:|:---------:|:---------:|:---------:|

diff --git a/docs/en/models/Inception_en.md b/docs/en/models/Inception_en.md
index 9b312392..cf521d91 100644
--- a/docs/en/models/Inception_en.md
+++ b/docs/en/models/Inception_en.md
@@ -3,22 +3,22 @@

## Catalogue

* [1. Overview](#1)
-* [2. Accuracy, FLOPS and Parameters](#2)
+* [2. Accuracy, FLOPs and Parameters](#2)
* [3. Inference speed based on V100 GPU](#3)
* [4. Inference speed based on T4 GPU](#4)

## 1. Overview

-GoogLeNet is a new neural network structure designed by Google in 2014, which, together with VGG network, became the twin champions of the ImageNet challenge that year. GoogLeNet introduces the Inception structure for the first time, and stacks the Inception structure in the network so that the number of network layers reaches 22, which is also the mark of the convolutional network exceeding 20 layers for the first time. Since 1x1 convolution is used in the Inception structure to reduce the dimension of channel number, and Global pooling is used to replace the traditional method of processing features in multiple fc layers, the final GoogLeNet network has much less FLOPS and parameters than VGG network, which has become a beautiful scenery of neural network design at that time.
+GoogLeNet is a new neural network structure designed by Google in 2014, which, together with the VGG network, became the twin champions of the ImageNet challenge that year. GoogLeNet introduces the Inception structure for the first time, and stacks the Inception structure in the network so that the number of network layers reaches 22, marking the first time a convolutional network exceeded 20 layers. Since 1x1 convolution is used in the Inception structure to reduce the channel dimension, and global pooling replaces the traditional approach of processing features with multiple fc layers, the final GoogLeNet network has far fewer FLOPs and parameters than the VGG network, which made it a celebrated neural network design of its time.
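The 1x1 dimension-reduction trick mentioned above is easy to quantify. The sketch below uses hypothetical channel counts (not taken from the paper) to show why inserting a 1x1 convolution before an expensive 3x3 convolution cuts the parameter count so sharply.

```python
import paddle.nn as nn

in_ch, mid_ch, out_ch = 256, 64, 128  # assumed channel counts for illustration

# Inception-style branch: a cheap 1x1 reduction before the 3x3 convolution
branch = nn.Sequential(
    nn.Conv2D(in_ch, mid_ch, kernel_size=1),             # reduce 256 -> 64 channels
    nn.Conv2D(mid_ch, out_ch, kernel_size=3, padding=1)  # 3x3 now sees only 64 channels
)

direct = in_ch * out_ch * 9                     # 3x3 straight on 256 channels: 294912
reduced = in_ch * mid_ch + mid_ch * out_ch * 9  # 16384 + 73728 = 90112
print(direct / reduced)  # about 3.3x fewer weights, ignoring biases
```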
InceptionV3 is an improvement of InceptionV2 by Google. First of all, the author optimized the Inception modules in InceptionV3. At the same time, more types of Inception modules were designed and used. Further, the larger square two-dimensional convolution kernels in some Inception modules of InceptionV3 were disassembled into two smaller asymmetric convolution kernels, which can greatly save the amount of parameters.

-Xception is another improvement to InceptionV3 that Google proposed after Inception. In Xception, the author used the depthwise separable convolution to replace the traditional convolution operation, which greatly saved the network FLOPS and the number of parameters, but improved the accuracy. In DeeplabV3+, the author further improved the Xception and increased the number of Xception layers, and designed the network of Xception65 and Xception71.
+Xception is another improvement to InceptionV3 that Google proposed after Inception. In Xception, the author used the depthwise separable convolution to replace the traditional convolution operation, which greatly reduced the network FLOPs and the number of parameters while improving the accuracy. In DeeplabV3+, the author further improved Xception and increased the number of Xception layers, designing the networks Xception65 and Xception71.

InceptionV4 is a new neural network designed by Google in 2016, when residual structures were all the rage, but the authors believed that high performance could be achieved using only the Inception structure. InceptionV4 uses more Inception structures to achieve even greater precision on ImageNet-1k.

-The FLOPS, parameters, and inference time on the T4 GPU of this series of models are shown in the figure below.
+The FLOPs, parameters, and inference time on the T4 GPU of this series of models are shown in the figure below.

![](../../images/models/T4_benchmark/t4.fp32.bs4.Inception.flops.png)

@@ -31,9 +31,9 @@ The FLOPS, parameters, and inference time on the T4 GPU of this series of models

The figure above reflects the relationship between the accuracy of the Xception series and InceptionV4 and other indicators. Among them, Xception_deeplab is consistent with the structure of the paper, and Xception is an improved model developed by PaddleClas, which improves the accuracy by about 0.6% while the inference speed is basically unchanged. Details of the improved model are being updated, so stay tuned.

-## 2. Accuracy, FLOPS and Parameters
+## 2. Accuracy, FLOPs and Parameters

-| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPS
(G) | Parameters
(M) | +| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPs
(G) | Parameters
(M) |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| GoogLeNet | 0.707 | 0.897 | 0.698 | | 2.880 | 8.460 |
| Xception41 | 0.793 | 0.945 | 0.790 | 0.945 | 16.740 | 22.690 |

diff --git a/docs/en/models/LeViT_en.md b/docs/en/models/LeViT_en.md
index 4d7e5dbb..6cc72452 100644
--- a/docs/en/models/LeViT_en.md
+++ b/docs/en/models/LeViT_en.md
@@ -3,16 +3,16 @@

## Catalogue

* [1. Overview](#1)
-* [2. Accuracy, FLOPS and Parameters](#2)
+* [2. Accuracy, FLOPs and Parameters](#2)

## 1. Overview

LeViT is a fast inference hybrid neural network for image classification tasks. Its design considers the performance of the network model on different hardware platforms, so it can better reflect the real scenarios of common applications. Through a large number of experiments, the author found a better way to combine the convolutional neural network and the Transformer system, and proposed an attention-based method to integrate the position information encoding in the Transformer. [Paper](https://arxiv.org/abs/2104.01136).

-## 2. Accuracy, FLOPS and Parameters
+## 2. Accuracy, FLOPs and Parameters

-| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPS
(M) | Params
(M) | +| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPs
(M) | Params
(M) | |:--:|:--:|:--:|:--:|:--:|:--:|:--:| | LeViT-128S | 0.7598 | 0.9269 | 0.766 | 0.929 | 305 | 7.8 | | LeViT-128 | 0.7810 | 0.9371 | 0.786 | 0.940 | 406 | 9.2 | diff --git a/docs/en/models/MixNet_en.md b/docs/en/models/MixNet_en.md index a0faa3dc..c104731d 100644 --- a/docs/en/models/MixNet_en.md +++ b/docs/en/models/MixNet_en.md @@ -3,7 +3,7 @@ ## Catalogue * [1. Overview](#1) -* [2. Accuracy, FLOPS and Parameters](#2) +* [2. Accuracy, FLOPs and Parameters](#2) ## 1. Overview @@ -16,9 +16,9 @@ MixNet is a lightweight network proposed by Google. The main idea of MixNet is t In order to solve the above two problems, MDConv(mixed depthwise convolution) is proposed. In this method, different size of kernels are mixed in a convolution operation block. And based on AutoML, a series of networks called MixNets are proposed, which have achieved good results on Imagenet. [paper](https://arxiv.org/pdf/1907.09595.pdf) -## 2. Accuracy, FLOPS and Parameters +## 2. Accuracy, FLOPs and Parameters -| Models | Top1 | Top5 | Reference
top1 | FLOPS
(M) | Params
(G | +| Models | Top1 | Top5 | Reference
top1 | FLOPs
(M) | Params
(M) |
| :------: | :---: | :---: | :---------------: | :----------: | ------------- |
| MixNet_S | 76.28 | 92.99 | 75.8 | 252.977 | 4.167 |
| MixNet_M | 77.67 | 93.64 | 77.0 | 357.119 | 5.065 |

diff --git a/docs/en/models/Mobile_en.md b/docs/en/models/Mobile_en.md
index 543bfb9e..7834ef09 100644
--- a/docs/en/models/Mobile_en.md
+++ b/docs/en/models/Mobile_en.md
@@ -3,7 +3,7 @@

## Catalogue

* [1. Overview](#1)
-* [2. Accuracy, FLOPS and Parameters](#2)
+* [2. Accuracy, FLOPs and Parameters](#2)
* [3. Inference speed and storage size based on SD855](#3)
* [4. Inference speed based on T4 GPU](#4)

@@ -12,7 +12,7 @@

MobileNetV1 is a network launched by Google in 2017 for use on mobile devices or embedded devices. The network replaces the traditional convolution operation with the depthwise separable convolution, that is, the combination of a depthwise convolution and a pointwise convolution. Compared with the traditional convolution operation, this combination can greatly save the number of parameters and the amount of computation. At the same time, MobileNetV1 can also be used for object detection, image segmentation and other visual tasks.

-MobileNetV2 is a lightweight network proposed by Google following MobileNetV1. Compared with MobileNetV1, MobileNetV2 proposed Linear bottlenecks and Inverted residual block as a basic network structures, to constitute MobileNetV2 network architecture through stacking these basic module a lot. In the end, higher classification accuracy was achieved when FLOPS was only half of MobileNetV1.
+MobileNetV2 is a lightweight network proposed by Google following MobileNetV1. Compared with MobileNetV1, MobileNetV2 proposed Linear Bottlenecks and Inverted Residual Blocks as its basic network structures, and the MobileNetV2 architecture is built by stacking these basic modules. In the end, higher classification accuracy was achieved with only half the FLOPs of MobileNetV1.

The ShuffleNet series network is the lightweight network structure proposed by MEGVII. So far, there are two typical structures in this series network, namely, ShuffleNetV1 and ShuffleNetV2. A Channel Shuffle operation in ShuffleNet can exchange information between groups and perform end-to-end training. In the paper of ShuffleNetV2, the author proposes four criteria for designing lightweight networks, and designs the ShuffleNetV2 network according to the four criteria and the shortcomings of ShuffleNetV1.

@@ -31,9 +31,9 @@ GhosttNet is a brand-new lightweight network structure proposed by Huawei in 202

Currently there are 32 pretrained models of the mobile series open-sourced by PaddleClas, and their indicators are shown in the figure below. As you can see from the picture, newer lightweight models tend to perform better, and MobileNetV3 represents the latest lightweight neural network architecture. In MobileNetV3, the author used 1x1 convolution after global-avg-pooling in order to obtain higher accuracy; this operation significantly increases the number of parameters but has little impact on the amount of computation, so if the model is evaluated from a storage perspective, MobileNetV3 does not have much advantage, but because of its smaller computation, it has a faster inference speed. In addition, the SSLD distillation model in our model library performs excellently, refreshing the accuracy of the current lightweight model from various perspectives.
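To illustrate the depthwise separable convolution that the MobileNetV1 paragraph above describes, here is a minimal Paddle sketch with assumed channel counts; real MobileNet blocks also insert BN and an activation after each of the two convolutions.

```python
import paddle
import paddle.nn as nn

in_ch, out_ch = 32, 64  # assumed channel counts for illustration

sep_conv = nn.Sequential(
    # depthwise: groups == in_channels, so each filter sees a single channel
    nn.Conv2D(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),
    # pointwise: a 1x1 convolution that mixes information across channels
    nn.Conv2D(in_ch, out_ch, kernel_size=1),
)

y = sep_conv(paddle.randn([1, in_ch, 112, 112]))
# weights: depthwise 32*9 + pointwise 32*64 = 2336, versus 32*64*9 = 18432
# for a standard 3x3 convolution, i.e. roughly 8x fewer parameters.
```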
Due to the complex structure and many branches of the MobileNetV3 model, which is not GPU friendly, the GPU inference speed is not as good as that of MobileNetV1. -## 2. Accuracy, FLOPS and Parameters +## 2. Accuracy, FLOPs and Parameters -| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPS
(G) | Parameters
(M) | +| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPs
(G) | Parameters
(M) | |:--:|:--:|:--:|:--:|:--:|:--:|:--:| | MobileNetV1_x0_25 | 0.514 | 0.755 | 0.506 | | 0.070 | 0.460 | | MobileNetV1_x0_5 | 0.635 | 0.847 | 0.637 | | 0.280 | 1.310 | diff --git a/docs/en/models/Others_en.md b/docs/en/models/Others_en.md index e8101b5e..37bb659f 100644 --- a/docs/en/models/Others_en.md +++ b/docs/en/models/Others_en.md @@ -3,7 +3,7 @@ ## Catalogue * [1. Overview](#1) -* [2. Accuracy, FLOPS and Parameters](#2) +* [2. Accuracy, FLOPs and Parameters](#2) * [3. Inference speed and storage size based on SD855](#3) * [4. Inference speed based on T4 GPU](#4) @@ -20,9 +20,9 @@ DarkNet53 is designed for object detection by YOLO author in the paper. The netw -## 2. Accuracy, FLOPS and Parameters +## 2. Accuracy, FLOPs and Parameters -| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPS
(G) | Parameters
(M) | +| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPs
(G) | Parameters
(M) |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| AlexNet | 0.567 | 0.792 | 0.5720 | | 1.370 | 61.090 |
| SqueezeNet1_0 | 0.596 | 0.817 | 0.575 | | 1.550 | 1.240 |

diff --git a/docs/en/models/ReXNet_en.md b/docs/en/models/ReXNet_en.md
index ab9cd01d..63f693c4 100644
--- a/docs/en/models/ReXNet_en.md
+++ b/docs/en/models/ReXNet_en.md
@@ -3,7 +3,7 @@

## Catalogue

* [1. Overview](#1)
-* [2. Accuracy, FLOPS and Parameters](#2)
+* [2. Accuracy, FLOPs and Parameters](#2)

## 1. Overview

ReXNet is proposed by NAVER AI Lab, and is based on new network design principles. Aiming at the representational bottleneck problem in existing networks, a set of design principles is proposed. The author believes that conventional designs produce representational bottlenecks, which would affect model performance. To investigate the representational bottleneck, the author studies the matrix rank of the features generated by ten thousand random networks. Besides, the channel configuration of entire layers is also studied to design more accurate network architectures. In the end, the author proposes a set of simple and effective design principles to mitigate the representational bottleneck. [paper](https://arxiv.org/pdf/2007.00992.pdf)

-## Accuracy, FLOPS and Parameters
+## 2. Accuracy, FLOPs and Parameters

-| Models | Top1 | Top5 | Reference
top1 | FLOPS
(G) | Params
(M) | +| Models | Top1 | Top5 | Reference
top1 | FLOPs
(G) | Params
(M) |
| :--------: | :---: | :---: | :---------------: | :-----------: | -------------- |
| ReXNet_1_0 | 77.46 | 93.70 | 77.9 | 0.415 | 4.838 |
| ReXNet_1_3 | 79.13 | 94.64 | 79.5 | 0.683 | 7.611 |

diff --git a/docs/en/models/RedNet_en.md b/docs/en/models/RedNet_en.md
index b9b5c8ed..09268372 100644
--- a/docs/en/models/RedNet_en.md
+++ b/docs/en/models/RedNet_en.md
@@ -3,7 +3,7 @@

## Catalogue

* [1. Overview](#1)
-* [2. Accuracy, FLOPS and Parameters](#2)
+* [2. Accuracy, FLOPs and Parameters](#2)

## 1. Overview

In the backbone of ResNet and in all bottleneck positions of the backbone, the convolution is replaced by Involution, but all convolutions are reserved for channel mapping and fusion. These carefully redesigned entities combine to form a new efficient backbone network, called RedNet. [paper](https://arxiv.org/abs/2103.06255).

-## 2. Accuracy, FLOPS and Parameters
+## 2. Accuracy, FLOPs and Parameters

| Model | Params (M) | FLOPs (G) | Top-1 (%) | Top-5 (%) |
|:---------------------:|:----------:|:---------:|:---------:|:---------:|

diff --git a/docs/en/models/RepVGG_en.md b/docs/en/models/RepVGG_en.md
index 0a028706..551feaff 100644
--- a/docs/en/models/RepVGG_en.md
+++ b/docs/en/models/RepVGG_en.md
@@ -3,7 +3,7 @@

## Catalogue

* [1. Overview](#1)
-* [2. Accuracy, FLOPS and Parameters](#2)
+* [2. Accuracy, FLOPs and Parameters](#2)

## 1. Overview

RepVGG (Making VGG-style ConvNets Great Again) is a simple but powerful convolutional neural network architecture proposed by Tsinghua University (Guiguang Ding's team), MEGVII Technology (Jian Sun et al.), HKUST and Aberystwyth University in 2021. The architecture has a VGG-like inference-time body composed of a stack of 3x3 convolutions and ReLUs, while the training-time model has a multi-branch topology. The decoupling of the training-time and inference-time architectures is realized by a re-parameterization technique, hence the name RepVGG. [paper](https://arxiv.org/abs/2101.03697).

-## 2. Accuracy, FLOPS and Parameters
+## 2. Accuracy, FLOPs and Parameters

-| Models | Top1 | Top5 | Reference
top1| FLOPS
(G) | +| Models | Top1 | Top5 | Reference
top1| FLOPs
(G) | |:--:|:--:|:--:|:--:|:--:| | RepVGG_A0 | 0.7131 | 0.9016 | 0.7241 | | | RepVGG_A1 | 0.7380 | 0.9146 | 0.7446 | | diff --git a/docs/en/models/ResNeSt_RegNet_en.md b/docs/en/models/ResNeSt_RegNet_en.md index 748d075b..b6ce389e 100644 --- a/docs/en/models/ResNeSt_RegNet_en.md +++ b/docs/en/models/ResNeSt_RegNet_en.md @@ -3,7 +3,7 @@ ## Catalogue * [1. Overview](#1) -* [2. Accuracy, FLOPS and Parameters](#2) +* [2. Accuracy, FLOPs and Parameters](#2) * [3. Inference speed based on T4 GPU](#3) @@ -16,7 +16,7 @@ RegNet was proposed in 2020 by Facebook to deepen the concept of design space. B ## 2. Accuracy, FLOPs and Parameters -| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPS
(G) | Parameters
(M) | +| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPs
(G) | Parameters
(M) | |:--:|:--:|:--:|:--:|:--:|:--:|:--:| | ResNeSt50_fast_1s1x64d | 0.8035 | 0.9528| 0.8035 | -| 8.68 | 26.3 | | ResNeSt50 | 0.8083 | 0.9542| 0.8113 | -| 10.78 | 27.5 | diff --git a/docs/en/models/ResNet_and_vd_en.md b/docs/en/models/ResNet_and_vd_en.md index 3ffeb292..70312fd7 100644 --- a/docs/en/models/ResNet_and_vd_en.md +++ b/docs/en/models/ResNet_and_vd_en.md @@ -3,7 +3,7 @@ ## Catalogue * [1. Overview](#1) -* [2. Accuracy, FLOPS and Parameters](#2) +* [2. Accuracy, FLOPs and Parameters](#2) * [3. Inference speed based on V100 GPU](#3) * [4. Inference speed based on T4 GPU](#4) @@ -18,7 +18,7 @@ The models of the ResNet series released this time include 14 pre-trained models Among them, ResNet50_vd_v2 and ResNet50_vd_ssld adopted knowledge distillation, which further improved the accuracy of the model while keeping the structure unchanged. Specifically, the teacher model of ResNet50_vd_v2 is ResNet152_vd (top1 accuracy 80.59%), the training set is imagenet-1k, the teacher model of ResNet50_vd_ssld is ResNeXt101_32x16d_wsl (top1 accuracy 84.2%), and the training set is the combination of 4 million data mined by imagenet-22k and ImageNet-1k . The specific methods of knowledge distillation are being continuously updated. -The FLOPS, parameters, and inference time on the T4 GPU of this series of models are shown in the figure below. +The FLOPs, parameters, and inference time on the T4 GPU of this series of models are shown in the figure below. ![](../../images/models/T4_benchmark/t4.fp32.bs4.ResNet.flops.png) @@ -32,9 +32,9 @@ The FLOPS, parameters, and inference time on the T4 GPU of this series of models As can be seen from the above curves, the higher the number of layers, the higher the accuracy, but the corresponding number of parameters, calculation and latency will increase. ResNet50_vd_ssld further improves the accuracy of top-1 of the ImageNet-1k validation set by using stronger teachers and more data, reaching 82.39%, refreshing the accuracy of ResNet50 series models. -## 2. Accuracy, FLOPS and Parameters +## 2. Accuracy, FLOPs and Parameters -| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPS
(G) | Parameters
(M) | +| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPs
(G) | Parameters
(M) | |:--:|:--:|:--:|:--:|:--:|:--:|:--:| | ResNet18 | 0.710 | 0.899 | 0.696 | 0.891 | 3.660 | 11.690 | | ResNet18_vd | 0.723 | 0.908 | | | 4.140 | 11.710 | diff --git a/docs/en/models/SEResNext_and_Res2Net_en.md b/docs/en/models/SEResNext_and_Res2Net_en.md index e574fd97..18c8ace1 100644 --- a/docs/en/models/SEResNext_and_Res2Net_en.md +++ b/docs/en/models/SEResNext_and_Res2Net_en.md @@ -3,7 +3,7 @@ ## Catalogue * [1. Overview](#1) -* [2. Accuracy, FLOPS and Parameters](#2) +* [2. Accuracy, FLOPs and Parameters](#2) * [3. Inference speed based on V100 GPU](#3) * [4. Inference speed based on T4 GPU](#4) @@ -16,7 +16,7 @@ SENet is the winner of the 2017 ImageNet classification competition. It proposes Res2Net is a brand-new improvement of ResNet proposed in 2019. The solution can be easily integrated with other excellent modules. Without increasing the amount of calculation, the performance on ImageNet, CIFAR-100 and other data sets exceeds ResNet. Res2Net, with its simple structure and superior performance, further explores the multi-scale representation capability of CNN at a more fine-grained level. Res2Net reveals a new dimension to improve model accuracy, called scale, which is an essential and more effective factor in addition to the existing dimensions of depth, width, and cardinality. The network also performs well in other visual tasks such as object detection and image segmentation. -The FLOPS, parameters, and inference time on the T4 GPU of this series of models are shown in the figure below. +The FLOPs, parameters, and inference time on the T4 GPU of this series of models are shown in the figure below. ![](../../images/models/T4_benchmark/t4.fp32.bs4.SeResNeXt.flops.png) @@ -32,9 +32,9 @@ At present, there are a total of 24 pretrained models of the three categories op -## 2. Accuracy, FLOPS and Parameters +## 2. Accuracy, FLOPs and Parameters -| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPS
(G) | Parameters
(M) | +| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPs
(G) | Parameters
(M) |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| Res2Net50_26w_4s | 0.793 | 0.946 | 0.780 | 0.936 | 8.520 | 25.700 |
| Res2Net50_vd_26w_4s | 0.798 | 0.949 | | | 8.370 | 25.060 |

diff --git a/docs/en/models/SwinTransformer_en.md b/docs/en/models/SwinTransformer_en.md
index 95afaaf6..8f76a662 100644
--- a/docs/en/models/SwinTransformer_en.md
+++ b/docs/en/models/SwinTransformer_en.md
@@ -3,16 +3,16 @@

## Catalogue

* [1. Overview](#1)
-* [2. Accuracy, FLOPS and Parameters](#2)
+* [2. Accuracy, FLOPs and Parameters](#2)

## 1. Overview

Swin Transformer is a new vision Transformer that capably serves as a general-purpose backbone for computer vision. It is a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. [Paper](https://arxiv.org/abs/2103.14030).

-## 2. Accuracy, FLOPS and Parameters
+## 2. Accuracy, FLOPs and Parameters

-| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPS
(G) | Params
(M) | +| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPs
(G) | Params
(M) | |:--:|:--:|:--:|:--:|:--:|:--:|:--:| | SwinTransformer_tiny_patch4_window7_224 | 0.8069 | 0.9534 | 0.812 | 0.955 | 4.5 | 28 | | SwinTransformer_small_patch4_window7_224 | 0.8275 | 0.9613 | 0.832 | 0.962 | 8.7 | 50 | diff --git a/docs/en/models/TNT_en.md b/docs/en/models/TNT_en.md index abdcfbaa..fd207363 100644 --- a/docs/en/models/TNT_en.md +++ b/docs/en/models/TNT_en.md @@ -3,7 +3,7 @@ ## Catalogue * [1. Overview](#1) -* [2. Accuracy, FLOPS and Parameters](#2) +* [2. Accuracy, FLOPs and Parameters](#2) ## 1. Overview @@ -12,7 +12,7 @@ TNT(Transformer-iN-Transformer) series models were proposed by Huawei-Noah in 20 -## 2. Accuracy, FLOPS and Parameters +## 2. Accuracy, FLOPs and Parameters | Model | Params (M) | FLOPs (G) | Top-1 (%) | Top-5 (%) | |:---------------------:|:----------:|:---------:|:---------:|:---------:| diff --git a/docs/en/models/Twins_en.md b/docs/en/models/Twins_en.md index f86f537c..0096066e 100644 --- a/docs/en/models/Twins_en.md +++ b/docs/en/models/Twins_en.md @@ -3,14 +3,14 @@ ## Catalogue * [1. Overview](#1) -* [2. Accuracy, FLOPS and Parameters](#2) +* [2. Accuracy, FLOPs and Parameters](#2) ## 1. Overview The Twins network includes Twins-PCPVT and Twins-SVT, which focuses on the meticulous design of the spatial attention mechanism, resulting in a simple but more effective solution. Since the architecture only involves matrix multiplication, and the current deep learning framework has a high degree of optimization for matrix multiplication, the architecture is very efficient and easy to implement. Moreover, this architecture can achieve excellent performance in a variety of downstream vision tasks such as image classification, target detection, and semantic segmentation. [Paper](https://arxiv.org/abs/2104.13840). -## 2. Accuracy, FLOPS and Parameters +## 2. Accuracy, FLOPs and Parameters | Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPs
(G) | Params
(M) | |:--:|:--:|:--:|:--:|:--:|:--:|:--:| diff --git a/docs/en/models/ViT_and_DeiT_en.md b/docs/en/models/ViT_and_DeiT_en.md index 789ad86c..d8e36b9a 100644 --- a/docs/en/models/ViT_and_DeiT_en.md +++ b/docs/en/models/ViT_and_DeiT_en.md @@ -3,7 +3,7 @@ ## Catalogue * [1. Overview](#1) -* [2. Accuracy, FLOPS and Parameters](#2) +* [2. Accuracy, FLOPs and Parameters](#2) ## 1. Overview @@ -13,9 +13,9 @@ ViT(Vision Transformer) series models were proposed by Google in 2020. These mod DeiT(Data-efficient Image Transformers) series models were proposed by Facebook at the end of 2020. Aiming at the problem that the ViT models need large-scale dataset training, the DeiT improved them, and finally achieved 83.1% Top1 accuracy on ImageNet. More importantly, using convolution model as teacher model, and performing knowledge distillation on these models, the Top1 accuracy of 85.2% can be achieved on the ImageNet dataset. -## 2. Accuracy, FLOPS and Parameters +## 2. Accuracy, FLOPs and Parameters -| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPS
(G) | Params
(M) | +| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPs
(G) | Params
(M) | |:--:|:--:|:--:|:--:|:--:|:--:|:--:| | ViT_small_patch16_224 | 0.7769 | 0.9342 | 0.7785 | 0.9342 | | | | ViT_base_patch16_224 | 0.8195 | 0.9617 | 0.8178 | 0.9613 | | | @@ -26,7 +26,7 @@ DeiT(Data-efficient Image Transformers) series models were proposed by Facebook | ViT_large_patch32_384 | 0.8153 | 0.9608 | 0.815 | - | | | -| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPS
(G) | Params
(M) | +| Models | Top1 | Top5 | Reference
top1 | Reference
top5 | FLOPs
(G) | Params
(M) | |:--:|:--:|:--:|:--:|:--:|:--:|:--:| | DeiT_tiny_patch16_224 | 0.718 | 0.910 | 0.722 | 0.911 | | | | DeiT_small_patch16_224 | 0.796 | 0.949 | 0.799 | 0.950 | | | -- GitLab