Unverified commit f9413d1a authored by littletomatodonkey, committed by GitHub

Merge pull request #278 from littletomatodonkey/static/fix_readme_en

fix readme en
......@@ -7,6 +7,7 @@
PaddleClas, PaddlePaddle's image classification suite, is a toolset for image classification tasks prepared for industry and academia, helping users train better vision models and deploy them in real applications.
**Recent updates**
- 2020.09.17 Added the HRNet_W48_C_ssld model, whose Top-1 Acc on ImageNet reaches 0.836; added the ResNet34_vd_ssld model, whose Top-1 Acc on ImageNet reaches 0.797.
- 2020.09.07 Added the HRNet_W18_C_ssld model, whose Top-1 Acc on ImageNet reaches 0.81162; added the MobileNetV3_small_x0_35_ssld model, whose Top-1 Acc on ImageNet reaches 0.5555.
- 2020.07.14 Added the Res2Net200_vd_26w_4s_ssld model, whose Top-1 Acc on ImageNet reaches 85.13%; added the Fix_ResNet50_vd_ssld_v2 model, whose Top-1 Acc on ImageNet reaches 84.0%.
- 2020.06.17 Added English documentation.
......@@ -17,7 +18,7 @@
## Features
- Rich model zoo: based on the ImageNet1k classification dataset, PaddleClas provides 24 series of classification network structures and training configurations, along with 121 pretrained models and their performance evaluations.
- Rich model zoo: based on the ImageNet1k classification dataset, PaddleClas provides 24 series of classification network structures and training configurations, along with 122 pretrained models and their performance evaluations.
- SSLD knowledge distillation: with this distillation strategy, the recognition accuracy of the distilled models is generally improved by more than 3%.
......@@ -41,8 +42,9 @@
- [ResNet and Vd series](#ResNet及其Vd系列)
- [Mobile series](#移动端系列)
- [SEResNeXt and Res2Net series](#SEResNeXt与Res2Net系列)
- [Inception series](#Inception系列)
- [DPN and DenseNet series](#DPN与DenseNet系列)
- [HRNet series](#HRNet系列)
- [Inception series](#Inception系列)
- [EfficientNet and ResNeXt101_wsl series](#EfficientNet与ResNeXt101_wsl系列)
- [ResNeSt and RegNet series](#ResNeSt与RegNet系列)
- Model training/evaluation
......@@ -103,6 +105,7 @@ The accuracy and speed metrics of the ResNet and Vd series models are shown in the table below; for more details about
| ResNet18_vd | 0.7226 | 0.9080 | 1.54557 | 3.85363 | 4.14 | 11.71 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet18_vd_pretrained.tar) |
| ResNet34 | 0.7457 | 0.9214 | 2.34957 | 5.89821 | 7.36 | 21.8 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_pretrained.tar) |
| ResNet34_vd | 0.7598 | 0.9298 | 2.43427 | 6.22257 | 7.39 | 21.82 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_vd_pretrained.tar) |
| ResNet34_vd_ssld | 0.7972 | 0.9490 | 2.43427 | 6.22257 | 7.39 | 21.82 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_vd_ssld_pretrained.tar) |
| ResNet50 | 0.7650 | 0.9300 | 3.47712 | 7.84421 | 8.19 | 25.56 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_pretrained.tar) |
| ResNet50_vc | 0.7835 | 0.9403 | 3.52346 | 8.10725 | 8.67 | 25.58 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vc_pretrained.tar) |
| ResNet50_vd | 0.7912 | 0.9444 | 3.53131 | 8.09057 | 8.67 | 25.58 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar) |
......@@ -235,6 +238,7 @@ The accuracy and speed metrics of the HRNet series models are shown in the table below; for more details about this series
| HRNet_W40_C | 0.7877 | 0.9447 | 12.12202 | 25.68184 | 25.41 | 57.55 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W40_C_pretrained.tar) |
| HRNet_W44_C | 0.7900 | 0.9451 | 13.19858 | 32.25202 | 29.79 | 67.06 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W44_C_pretrained.tar) |
| HRNet_W48_C | 0.7895 | 0.9442 | 13.70761 | 34.43572 | 34.58 | 77.47 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W48_C_pretrained.tar) |
| HRNet_W48_C_ssld | 0.8363 | 0.9682 | 13.70761 | 34.43572 | 34.58 | 77.47 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W48_C_ssld_pretrained.tar) |
| HRNet_W64_C | 0.7930 | 0.9461 | 17.57527 | 47.9533 | 57.83 | 128.06 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W64_C_pretrained.tar) |
......@@ -257,7 +261,7 @@ The accuracy and speed metrics of the Inception series models are shown in the table below; for more details about this
<a name="EfficientNet与ResNeXt101_wsl系列"></a>
### EfficientNet and ResNeXt101_wsl series
The accuracy and speed metrics of the EfficientNet and ResNeXt101_wsl series models are shown in the table below. For more details about this series, please refer to: [EfficientNet and ResNeXt101_wsl series model documentation](./docs/zh_CN/models/Inception.md)
The accuracy and speed metrics of the EfficientNet and ResNeXt101_wsl series models are shown in the table below. For more details about this series, please refer to: [EfficientNet and ResNeXt101_wsl series model documentation](./docs/zh_CN/models/EfficientNet_and_ResNeXt101_wsl.md)
| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download Address |
......
......@@ -2,152 +2,310 @@
# PaddleClas
**Book**: https://paddleclas-en.readthedocs.io/en/latest/
**Quick start PaddleClas in 30 minutes**: https://paddleclas-en.readthedocs.io/en/latest/tutorials/quick_start_en.html
## Introduction
PaddleClas is a toolset for image classification tasks prepared for industry and academia. It helps users train better computer vision models and apply them in real scenarios.
<div align="center">
<img src="./docs/images/main_features_s_en.png" width="700">
</div>
## Rich model zoo
Based on the ImageNet1k dataset, PaddleClas provides 23 series of image classification networks such as ResNet, ResNet_vd, Res2Net, HRNet, and MobileNetV3, with brief introductions, reproduction configurations and training tricks. At the same time, the corresponding 117 image classification pretrained models are also available. The GPU inference time of the server-side models is evaluated with TensorRT. The CPU inference time and storage size of the mobile-side models are evaluated on the Snapdragon 855 (SD855). For more detailed information on the supported pretrained models and their download links, please refer to the [**models introduction tutorial**](https://paddleclas-en.readthedocs.io/en/latest/models/models_intro_en.html).
<div align="center">
<img src="./docs/images/models/V100_benchmark/v100.fp32.bs1.main_fps_top1_s.jpg" width="700">
</div>
The above figure shows some of the latest server-side pretrained models. It can be seen from the figure that, when using a V100 GPU with FP32 and TensorRT, the `Top1` accuracy of the ResNet50_vd_ssld pretrained model on the ImageNet1k-val dataset is **83.0%**, and that of the ResNet101_vd_ssld pretrained model is 83.7%. These pretrained models are obtained from the SSLD knowledge distillation solution provided by PaddleClas. Marks of the same color and symbol in the figure represent models of different sizes in the same series. For the introduction of different models, their FLOPS, Params and detailed GPU inference time (including the inference speed on a T4 GPU with different batch sizes), please refer to the documentation tutorial for more details: [https://paddleclas-en.readthedocs.io/en/latest/models/models_intro_en.html](https://paddleclas-en.readthedocs.io/en/latest/models/models_intro_en.html)
<div align="center">
<img
src="./docs/images/models/mobile_arm_top1.png" width="700">
</div>
The above figure shows the performance of some commonly used mobile-side models, including the MobileNetV1, MobileNetV2, MobileNetV3 and ShuffleNetV2 series. The inference time is tested on a Snapdragon 855 (SD855) with the batch size set to 1. The `Top1` accuracies of the MV3_large_x1_0_ssld, MV3_small_x1_0_ssld, MV1_ssld and MV2_ssld pretrained models on the ImageNet1k-val dataset are 79%, 71.3%, 76.74% and 77.89%, respectively (M is short for MobileNet). MV3_large_x1_0_ssld_int8 is a quantized pretrained model for MV3_large_x1_0. More details about the mobile-side models can be found in the [**models introduction tutorial**](https://paddleclas-en.readthedocs.io/en/latest/models/models_intro_en.html).
- TODO
- [ ] Reproduction and performance evaluation of EfficientLite, GhostNet, RegNet and ResNeSt.
## High-level support
In addition to providing rich classification network structures and pretrained models, PaddleClas supports a series of algorithms or tools that help to improve the effectiveness and efficiency of image classification tasks.
### Knowledge distillation (SSLD)
Knowledge distillation uses a teacher model to guide a student model on a specific task, so that the small model obtains a relatively large accuracy improvement at unchanged computation cost, and can even reach accuracy similar to that of the large model. PaddleClas provides a Simple Semi-supervised Label Distillation method (SSLD). With this method, different models obtain a 3% absolute improvement in Top-1 accuracy on the ImageNet1k-val dataset. The following figure shows the models' performance after SSLD.
<div align="center">
<img
src="./docs/images/distillation/distillation_perform_s.jpg" width="700">
</div>
Taking the ImageNet1k dataset as an example, the following figure shows the SSLD knowledge distillation framework. The key points of the method include the choice of the teacher model, the loss calculation method, the number of iterations, the use of unlabeled data, and finetuning on the ImageNet1k dataset. For a detailed introduction and experiments, please refer to the [**knowledge distillation tutorial**](https://paddleclas-en.readthedocs.io/en/latest/advanced_tutorials/distillation/distillation_en.html).
<div align="center">
<img
src="./docs/images/distillation/ppcls_distillation_s.jpg" width="700">
</div>
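Among the key points above, the loss calculation step can be illustrated with a minimal, framework-agnostic NumPy sketch of a soft-label distillation objective, in which the student is trained to match the teacher's softened predictions on (possibly unlabeled) images. The temperature value and helper names below are illustrative assumptions, not the exact PaddleClas implementation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over the class dimension.
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def soft_label_distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """Cross-entropy between the teacher's soft labels and the student's predictions."""
    teacher_prob = softmax(teacher_logits, temperature)
    student_log_prob = np.log(softmax(student_logits, temperature) + 1e-12)
    return -(teacher_prob * student_log_prob).sum(axis=1).mean()

# Toy usage: a batch of 4 images, 1000 ImageNet classes.
rng = np.random.default_rng(0)
student_logits = rng.normal(size=(4, 1000))
teacher_logits = rng.normal(size=(4, 1000))
print(soft_label_distillation_loss(student_logits, teacher_logits))
```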
### Data augmentation
For an image classification task, data augmentation is a commonly used regularization method that can effectively improve accuracy, especially when training data is insufficient or the network is large. Commonly used data augmentation methods can be divided into 3 categories, `transformation`, `cropping` and `aliasing`, as shown below. Image transformation applies a transformation to the entire image, such as AutoAugment and RandAugment. Image cropping occludes part of the image in a certain way, such as CutOut, RandErasing, HideAndSeek and GridMask. Image aliasing mixes multiple images into a new image, such as Mixup and Cutmix; a minimal sketch of Mixup is shown below.
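As an illustration of the `aliasing` category, the following is a minimal NumPy sketch of Mixup; the Beta parameter, array shapes and function name are assumptions for the example, not PaddleClas defaults.

```python
import numpy as np

def mixup_batch(images, one_hot_labels, alpha=0.2, rng=None):
    """Mix each image and label with a randomly chosen partner from the same batch.

    images: float array of shape (N, C, H, W)
    one_hot_labels: float array of shape (N, num_classes)
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)              # mixing coefficient drawn from Beta(alpha, alpha)
    perm = rng.permutation(len(images))       # partner indices
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_labels = lam * one_hot_labels + (1.0 - lam) * one_hot_labels[perm]
    return mixed_images, mixed_labels

# Toy usage with random data.
imgs = np.random.rand(8, 3, 224, 224).astype("float32")
labels = np.eye(10, dtype="float32")[np.random.randint(0, 10, size=8)]
mixed_imgs, mixed_labels = mixup_batch(imgs, labels)
```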
**Recent update**
- 2020.09.17 Add `HRNet_W48_C_ssld` pretrained model, whose Top-1 Acc on ImageNet1k dataset reaches 83.62%. Add `ResNet34_vd_ssld` pretrained model, whose Top-1 Acc on ImageNet1k dataset reaches 79.72%.
- 2020.09.07 Add `HRNet_W18_C_ssld` pretrained model, whose Top-1 Acc on ImageNet1k dataset reaches 81.16%.
- 2020.07.14 Add `Res2Net200_vd_26w_4s_ssld` pretrained model, whose Top-1 Acc on ImageNet1k dataset reaches 85.13%. Add `Fix_ResNet50_vd_ssld_v2` pretrained model, whose Top-1 Acc on ImageNet1k dataset reaches 84.00%.
- 2020.06.17 Add English documents.
- 2020.06.12 Add support for training and evaluation on Windows or CPU.
- 2020.05.17 Add support for mixed precision training.
- [more](./docs/zh_CN/update_history.md)
<div align="center">
<img
src="./docs/images/image_aug/image_aug_samples_s_en.jpg" width="800">
</div>
## Features
- Rich model zoo. Based on the ImageNet1k classification dataset, PaddleClas provides 24 series of classification network structures and training configurations, 122 models' pretrained weights and their evaluation metrics.
PaddleClas provides the reproduction of the above 8 data augmentation algorithms and evaluates their effects in a unified environment. The following figure shows the performance of different data augmentation methods based on ResNet50. Compared with the standard transformation, data augmentation can increase the recognition accuracy by up to 1%. For a more detailed introduction of the data augmentation methods, please refer to the [**data augmentation tutorial**](https://paddleclas-en.readthedocs.io/en/latest/advanced_tutorials/image_augmentation/ImageAugment_en.html).
- SSLD Knowledge Distillation. Based on this SSLD distillation strategy, the accuracy of the distilled model is generally increased by more than 3%.
- Data augmentation: PaddleClas provides a detailed introduction of 8 data augmentation algorithms such as AutoAugment, Cutout and Cutmix, together with code reproduction and effect evaluation in a unified experimental environment.
<div align="center">
<img
src="./docs/images/image_aug/main_image_aug_s.jpg" width="600">
</div>
- Pretrained model with 100,000 categories: based on the `ResNet50_vd` architecture, Baidu open-sourced a pretrained model trained on a 100,000-category dataset. In some practical scenarios, the accuracy obtained with these pretrained weights can be increased by up to 30%.
- A variety of training modes, including multi-machine training, mixed precision training, etc.
## Quick start
- A variety of inference and deployment solutions, including TensorRT inference, Paddle-Lite inference, model serving deployment, model quantization, Paddle Hub, etc.
Based on the flowers102 dataset, one can easily experience different networks, pretrained models and the SSLD knowledge distillation method in PaddleClas. More details can be found in [**Quick start PaddleClas in 30 minutes**](https://paddleclas-en.readthedocs.io/en/latest/tutorials/quick_start_en.html).
- Support Linux, Windows, macOS and other systems.
## Getting started
## Tutorials
For installation, model training, inference, evaluation and finetuning in PaddleClas, you can refer to the [**getting started tutorial**](https://paddleclas-en.readthedocs.io/en/latest/tutorials/index.html).
## Featured extension and application
### A classification pretrained model for 100,000 categories
The models trained on the ImageNet1K dataset are often used as pretrained models for other classification tasks in practical applications due to the lack of training data. However, there are only 1,000 categories in the ImageNet1K dataset, and the feature capability of such pretrained models is limited. Therefore, Baidu developed a tag system of 100,000 categories with semantic information and different granularities, and collected more than 55,000,000 training images through manual or semi-supervised methods. It is the largest image classification system in China, and even worldwide. PaddleClas provides the ResNet50_vd model trained on this dataset. The following table compares using the ImageNet pretrained model and the above 100,000-category pretrained model in some practical application scenarios. Using the 100,000-category pretrained model, the recognition accuracy can be increased by up to 30%.
| Dataset | Dataset statistics | ImageNet pretrained model | 100,000-categories' pretrained model |
|:--:|:--:|:--:|:--:|
| Flowers | class_num:102<br/>train/val:5789/2396 | 0.7779 | 0.9892 |
| Hand-painted stick figures | class_num:18<br/>train/val:1007/432 | 0.8785 | 0.9107 |
| Leaves | class_num:6<br/>train/val:5256/2278 | 0.8212 | 0.8385 |
| Container vehicle | class_num:115<br/>train/val:4879/2094 | 0.623 | 0.9524 |
| Chair | class_num:5<br/>train/val:169/78 | 0.8557 | 0.9077 |
| Geology | class_num:4<br/>train/val:671/296 | 0.5719 | 0.6781 |
The 100,000-category pretrained model can be downloaded here: [download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_10w_pretrained.tar). More details can be found in the [**Transfer learning tutorial**](https://paddleclas-en.readthedocs.io/en/latest/application/transfer_learning_en.html).
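For convenience, a minimal standard-library Python sketch for fetching and unpacking the pretrained weights linked above is shown below; the local file and directory names are assumptions for the example.

```python
import tarfile
import urllib.request

# URL taken from the download link above; "pretrained/" is an illustrative target directory.
URL = "https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_10w_pretrained.tar"

archive_path, _ = urllib.request.urlretrieve(URL, "ResNet50_vd_10w_pretrained.tar")
with tarfile.open(archive_path) as tar:
    tar.extractall(path="pretrained")  # extracted weights can then be used for finetuning
```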
### Object detection
In recent years, object detection tasks have attracted a lot of attention in academia and industry. ImageNet classification models are often used as pretrained models in object detection, and their quality directly affects detection performance. Based on the ResNet50_vd pretrained model with 82.39% Top-1 accuracy, PaddleDetection provides a Practical Server-side Detection solution, PSS-DET. The solution contains many strategies that effectively improve performance at limited extra computation cost, such as model pruning, a better pretrained model, deformable convolution, cascade RCNN, AutoAugment, libra sampling and multi-scale training. Compared with the 79.12% ImageNet1k pretrained model, the 82.39% model improves COCO mAP by 1.5% without any extra computation cost. Using PSS-DET, the inference speed on a single V100 GPU reaches 20 FPS at 47.8% COCO mAP, and 61 FPS at 41.6% COCO mAP. For more details, please refer to the [**Object Detection tutorial**](https://paddleclas-en.readthedocs.io/en/latest/application/object_detection_en.html).
- TODO
- [ ] Application of PaddleClas in OCR tasks.
- [ ] Application of PaddleClas in face detection and recognition tasks.
## Industrial-grade application deployment tools
PaddlePaddle provides a series of practical tools to conveniently deploy PaddleClas models for industrial applications. For more details, please refer to the [**extension tutorial**](https://paddleclas.readthedocs.io/zh_CN/latest/extension/index.html).
- TensorRT inference
- Paddle-Lite
- Paddle-Serving
- Model quantization
- Multi-machine training
- Paddle Hub
## Competition support
PaddleClas stems from Baidu's visual business applications and its exploration of frontier visual capabilities. It has helped us achieve leading results in many key competitions, and continues to promote more frontier visual solutions and real-world applications. For more details, please refer to the [**competition support tutorial**](https://paddleclas.readthedocs.io/zh_CN/latest/competition_support.html).
- 1st place in 2018 Kaggle Open Images V4 object detection challenge
- A-level certificate of three tasks: printed text OCR, face recognition and landmark recognition in the first multimedia information recognition technology competition
- 2nd place in 2019 Kaggle Open Images V5 object detection challenge
- 2nd place in Kaggle Landmark Retrieval Challenge 2019
- 2nd place in Kaggle Landmark Recognition Challenge 2019
- [Installation](./docs/en/tutorials/install.md)
- [Quick start PaddleClas in 30 minutes](./docs/en/tutorials/quick_start.md)
- [Model introduction and model zoo](./docs/en/models/models_intro.md)
- [Model zoo overview](#Model_zoo_overview)
- [ResNet and Vd series](#ResNet_and_Vd_series)
- [Mobile series](#Mobile_series)
- [SEResNeXt and Res2Net series](#SEResNeXt_and_Res2Net_series)
- [DPN and DenseNet series](#DPN_and_DenseNet_series)
- [HRNet series](#HRNet_series)
- [Inception series](#Inception_series)
- [EfficientNet and ResNeXt101_wsl series](#EfficientNet_and_ResNeXt101_wsl_series)
- [ResNeSt and RegNet series](#ResNeSt_and_RegNet_series)
- Model training/evaluation
- [Data preparation](./docs/en/tutorials/data_en.md)
- [Model training and finetuning](./docs/en/tutorials/getting_started_en.md)
- [Model evaluation](./docs/en/tutorials/getting_started_en.md)
- Model prediction/inference
- [Prediction based on training engine](./docs/en/extension/paddle_inference_en.md)
- [Python inference](./docs/en/extension/paddle_inference_en.md)
- C++ inference (coming soon)
- [Serving deployment](./docs/en/extension/paddle_serving_en.md)
- Mobile (coming soon)
- [Model Quantization and Compression](docs/en/extension/paddle_quantization_en.md)
- Advanced tutorials
- [Knowledge distillation](./docs/en/advanced_tutorials/distillation/distillation_en.md)
- [Data augmentation](./docs/en/advanced_tutorials/image_augmentation/ImageAugment_en.md)
- Applications
- [Transfer learning](./docs/en/application/transfer_learning_en.md)
- [Pretrained model with 100,000 categories](./docs/en/application/transfer_learning_en.md)
- [Generic object detection](./docs/en/application/object_detection_en.md)
- FAQ
- General image classification problems (coming soon)
- [PaddleClas FAQ](./docs/en/faq_en.md)
- [Competition support](./docs/en/competition_support_en.md)
- [License](#License)
- [Contribution](#Contribution)
<a name="Model_zoo_overview"></a>
### Model zoo overview
Based on the ImageNet1k classification dataset, the 24 classification network structures supported by PaddleClas and the corresponding 122 image classification pretrained models are shown below. Training tricks, a brief introduction to each series of network structures, and performance evaluations are given in the corresponding sections. The evaluation environment is as follows.
* The CPU evaluation environment is based on the Snapdragon 855 (SD855).
* The GPU evaluation speed is measured by running inference 500 times under the FP32+TensorRT configuration (excluding the warmup time of the first 10 runs), as sketched below.
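The timing protocol described above can be expressed as a simple Python harness; `run_inference` below is a placeholder for whatever predictor is being measured (for example, an FP32+TensorRT engine), not a PaddleClas API.

```python
import time

def benchmark(run_inference, total_runs=500, warmup_runs=10):
    """Average latency in milliseconds, excluding the first `warmup_runs` iterations."""
    timings = []
    for i in range(total_runs):
        start = time.perf_counter()
        run_inference()                      # one forward pass on a fixed batch
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if i >= warmup_runs:                 # discard warmup iterations
            timings.append(elapsed_ms)
    return sum(timings) / len(timings)

# Example with a dummy workload standing in for a real inference engine.
print(benchmark(lambda: sum(range(10000))))
```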
Curves of accuracy versus inference time for common server-side models are shown below.
![](./docs/images/models/T4_benchmark/t4.fp32.bs4.main_fps_top1.png)
Curves of accuracy versus inference time and storage size for common mobile-side models are shown below.
![](./docs/images/models/mobile_arm_storage.png)
![](./docs/images/models/mobile_arm_top1.png)
<a name="ResNet_and_Vd_series"></a>
### ResNet and Vd series
Accuracy and inference time metrics of the ResNet and Vd series models are shown below. For more detailed information, please refer to the [ResNet and Vd series tutorial](./docs/en/models/ResNet_and_vd_en.md).
| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download Address |
|---------------------|-----------|-----------|-----------------------|----------------------|----------|-----------|----------------------------------------------------------------------------------------------|
| ResNet18 | 0.7098 | 0.8992 | 1.45606 | 3.56305 | 3.66 | 11.69 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet18_pretrained.tar) |
| ResNet18_vd | 0.7226 | 0.9080 | 1.54557 | 3.85363 | 4.14 | 11.71 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet18_vd_pretrained.tar) |
| ResNet34 | 0.7457 | 0.9214 | 2.34957 | 5.89821 | 7.36 | 21.8 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_pretrained.tar) |
| ResNet34_vd | 0.7598 | 0.9298 | 2.43427 | 6.22257 | 7.39 | 21.82 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_vd_pretrained.tar) |
| ResNet34_vd_ssld | 0.7972 | 0.9490 | 2.43427 | 6.22257 | 7.39 | 21.82 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_vd_ssld_pretrained.tar) |
| ResNet50 | 0.7650 | 0.9300 | 3.47712 | 7.84421 | 8.19 | 25.56 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_pretrained.tar) |
| ResNet50_vc | 0.7835 | 0.9403 | 3.52346 | 8.10725 | 8.67 | 25.58 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vc_pretrained.tar) |
| ResNet50_vd | 0.7912 | 0.9444 | 3.53131 | 8.09057 | 8.67 | 25.58 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar) |
| ResNet50_vd_v2 | 0.7984 | 0.9493 | 3.53131 | 8.09057 | 8.67 | 25.58 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_v2_pretrained.tar) |
| ResNet101 | 0.7756 | 0.9364 | 6.07125 | 13.40573 | 15.52 | 44.55 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_pretrained.tar) |
| ResNet101_vd | 0.8017 | 0.9497 | 6.11704 | 13.76222 | 16.1 | 44.57 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_pretrained.tar) |
| ResNet152 | 0.7826 | 0.9396 | 8.50198 | 19.17073 | 23.05 | 60.19 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet152_pretrained.tar) |
| ResNet152_vd | 0.8059 | 0.9530 | 8.54376 | 19.52157 | 23.53 | 60.21 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet152_vd_pretrained.tar) |
| ResNet200_vd | 0.8093 | 0.9533 | 10.80619 | 25.01731 | 30.53 | 74.74 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet200_vd_pretrained.tar) |
| ResNet50_vd_<br>ssld | 0.8239 | 0.9610 | 3.53131 | 8.09057 | 8.67 | 25.58 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_ssld_pretrained.tar) |
| ResNet50_vd_<br>ssld_v2 | 0.8300 | 0.9640 | 3.53131 | 8.09057 | 8.67 | 25.58 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_ssld_v2_pretrained.tar) |
| ResNet101_vd_<br>ssld | 0.8373 | 0.9669 | 6.11704 | 13.76222 | 16.1 | 44.57 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_ssld_pretrained.tar) |
<a name="Mobile_series"></a>
### Mobile series
Accuracy and inference time metrics of the Mobile series models are shown below. For more detailed information, please refer to the [Mobile series tutorial](./docs/en/models/Mobile_en.md).
| Model | Top-1 Acc | Top-5 Acc | SD855 time(ms)<br>bs=1 | Flops(G) | Params(M) | Model size(M) | Download Address |
|----------------------------------|-----------|-----------|------------------------|----------|-----------|---------|-----------------------------------------------------------------------------------------------------------|
| MobileNetV1_<br>x0_25 | 0.5143 | 0.7546 | 3.21985 | 0.07 | 0.46 | 1.9 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV1_x0_25_pretrained.tar) |
| MobileNetV1_<br>x0_5 | 0.6352 | 0.8473 | 9.579599 | 0.28 | 1.31 | 5.2 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV1_x0_5_pretrained.tar) |
| MobileNetV1_<br>x0_75 | 0.6881 | 0.8823 | 19.436399 | 0.63 | 2.55 | 10 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV1_x0_75_pretrained.tar) |
| MobileNetV1 | 0.7099 | 0.8968 | 32.523048 | 1.11 | 4.19 | 16 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV1_pretrained.tar) |
| MobileNetV1_<br>ssld | 0.7789 | 0.9394 | 32.523048 | 1.11 | 4.19 | 16 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV1_ssld_pretrained.tar) |
| MobileNetV2_<br>x0_25 | 0.5321 | 0.7652 | 3.79925 | 0.05 | 1.5 | 6.1 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV2_x0_25_pretrained.tar) |
| MobileNetV2_<br>x0_5 | 0.6503 | 0.8572 | 8.7021 | 0.17 | 1.93 | 7.8 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV2_x0_5_pretrained.tar) |
| MobileNetV2_<br>x0_75 | 0.6983 | 0.8901 | 15.531351 | 0.35 | 2.58 | 10 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV2_x0_75_pretrained.tar) |
| MobileNetV2 | 0.7215 | 0.9065 | 23.317699 | 0.6 | 3.44 | 14 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV2_pretrained.tar) |
| MobileNetV2_<br>x1_5 | 0.7412 | 0.9167 | 45.623848 | 1.32 | 6.76 | 26 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV2_x1_5_pretrained.tar) |
| MobileNetV2_<br>x2_0 | 0.7523 | 0.9258 | 74.291649 | 2.32 | 11.13 | 43 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV2_x2_0_pretrained.tar) |
| MobileNetV2_<br>ssld | 0.7674 | 0.9339 | 23.317699 | 0.6 | 3.44 | 14 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV2_ssld_pretrained.tar) |
| MobileNetV3_<br>large_x1_25 | 0.7641 | 0.9295 | 28.217701 | 0.714 | 7.44 | 29 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x1_25_pretrained.tar) |
| MobileNetV3_<br>large_x1_0 | 0.7532 | 0.9231 | 19.30835 | 0.45 | 5.47 | 21 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x1_0_pretrained.tar) |
| MobileNetV3_<br>large_x0_75 | 0.7314 | 0.9108 | 13.5646 | 0.296 | 3.91 | 16 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x0_75_pretrained.tar) |
| MobileNetV3_<br>large_x0_5 | 0.6924 | 0.8852 | 7.49315 | 0.138 | 2.67 | 11 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x0_5_pretrained.tar) |
| MobileNetV3_<br>large_x0_35 | 0.6432 | 0.8546 | 5.13695 | 0.077 | 2.1 | 8.6 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x0_35_pretrained.tar) |
| MobileNetV3_<br>small_x1_25 | 0.7067 | 0.8951 | 9.2745 | 0.195 | 3.62 | 14 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_small_x1_25_pretrained.tar) |
| MobileNetV3_<br>small_x1_0 | 0.6824 | 0.8806 | 6.5463 | 0.123 | 2.94 | 12 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_small_x1_0_pretrained.tar) |
| MobileNetV3_<br>small_x0_75 | 0.6602 | 0.8633 | 5.28435 | 0.088 | 2.37 | 9.6 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_small_x0_75_pretrained.tar) |
| MobileNetV3_<br>small_x0_5 | 0.5921 | 0.8152 | 3.35165 | 0.043 | 1.9 | 7.8 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_small_x0_5_pretrained.tar) |
| MobileNetV3_<br>small_x0_35 | 0.5303 | 0.7637 | 2.6352 | 0.026 | 1.66 | 6.9 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_small_x0_35_pretrained.tar) |
| MobileNetV3_<br>small_x0_35_ssld | 0.5555 | 0.7771 | 2.6352 | 0.026 | 1.66 | 6.9 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_small_x0_35_ssld_pretrained.tar) |
| MobileNetV3_<br>large_x1_0_ssld | 0.7896 | 0.9448 | 19.30835 | 0.45 | 5.47 | 21 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x1_0_ssld_pretrained.tar) |
| MobileNetV3_large_<br>x1_0_ssld_int8 | 0.7605 | - | 14.395 | - | - | 10 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x1_0_ssld_int8_pretrained.tar) |
| MobileNetV3_small_<br>x1_0_ssld | 0.7129 | 0.9010 | 6.5463 | 0.123 | 2.94 | 12 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_small_x1_0_ssld_pretrained.tar) |
| ShuffleNetV2 | 0.6880 | 0.8845 | 10.941 | 0.28 | 2.26 | 9 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ShuffleNetV2_pretrained.tar) |
| ShuffleNetV2_<br>x0_25 | 0.4990 | 0.7379 | 2.329 | 0.03 | 0.6 | 2.7 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ShuffleNetV2_x0_25_pretrained.tar) |
| ShuffleNetV2_<br>x0_33 | 0.5373 | 0.7705 | 2.64335 | 0.04 | 0.64 | 2.8 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ShuffleNetV2_x0_33_pretrained.tar) |
| ShuffleNetV2_<br>x0_5 | 0.6032 | 0.8226 | 4.2613 | 0.08 | 1.36 | 5.6 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ShuffleNetV2_x0_5_pretrained.tar) |
| ShuffleNetV2_<br>x1_5 | 0.7163 | 0.9015 | 19.3522 | 0.58 | 3.47 | 14 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ShuffleNetV2_x1_5_pretrained.tar) |
| ShuffleNetV2_<br>x2_0 | 0.7315 | 0.9120 | 34.770149 | 1.12 | 7.32 | 28 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ShuffleNetV2_x2_0_pretrained.tar) |
| ShuffleNetV2_<br>swish | 0.7003 | 0.8917 | 16.023151 | 0.29 | 2.26 | 9.1 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ShuffleNetV2_swish_pretrained.tar) |
| DARTS_GS_4M | 0.7523 | 0.9215 | 47.204948 | 1.04 | 4.77 | 21 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/DARTS_GS_4M_pretrained.tar) |
| DARTS_GS_6M | 0.7603 | 0.9279 | 53.720802 | 1.22 | 5.69 | 24 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/DARTS_GS_6M_pretrained.tar) |
| GhostNet_<br>x0_5 | 0.6688 | 0.8695 | 5.7143 | 0.082 | 2.6 | 10 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/GhostNet_x0_5_pretrained.pdparams) |
| GhostNet_<br>x1_0 | 0.7402 | 0.9165 | 13.5587 | 0.294 | 5.2 | 20 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/GhostNet_x1_0_pretrained.pdparams) |
| GhostNet_<br>x1_3 | 0.7579 | 0.9254 | 19.9825 | 0.44 | 7.3 | 29 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/GhostNet_x1_3_pretrained.pdparams) |
<a name="SEResNeXt_and_Res2Net_series"></a>
### SEResNeXt and Res2Net series
Accuracy and inference time metrics of the SEResNeXt and Res2Net series models are shown below. For more detailed information, please refer to the [SEResNeXt and Res2Net series tutorial](./docs/en/models/SEResNext_and_Res2Net_en.md).
| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download Address |
|---------------------------|-----------|-----------|-----------------------|----------------------|----------|-----------|----------------------------------------------------------------------------------------------------|
| Res2Net50_<br>26w_4s | 0.7933 | 0.9457 | 4.47188 | 9.65722 | 8.52 | 25.7 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/Res2Net50_26w_4s_pretrained.tar) |
| Res2Net50_vd_<br>26w_4s | 0.7975 | 0.9491 | 4.52712 | 9.93247 | 8.37 | 25.06 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/Res2Net50_vd_26w_4s_pretrained.tar) |
| Res2Net50_<br>14w_8s | 0.7946 | 0.9470 | 5.4026 | 10.60273 | 9.01 | 25.72 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/Res2Net50_14w_8s_pretrained.tar) |
| Res2Net101_vd_<br>26w_4s | 0.8064 | 0.9522 | 8.08729 | 17.31208 | 16.67 | 45.22 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/Res2Net101_vd_26w_4s_pretrained.tar) |
| Res2Net200_vd_<br>26w_4s | 0.8121 | 0.9571 | 14.67806 | 32.35032 | 31.49 | 76.21 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/Res2Net200_vd_26w_4s_pretrained.tar) |
| Res2Net200_vd_<br>26w_4s_ssld | 0.8513 | 0.9742 | 14.67806 | 32.35032 | 31.49 | 76.21 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/Res2Net200_vd_26w_4s_ssld_pretrained.tar) |
| ResNeXt50_<br>32x4d | 0.7775 | 0.9382 | 7.56327 | 10.6134 | 8.02 | 23.64 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt50_32x4d_pretrained.tar) |
| ResNeXt50_vd_<br>32x4d | 0.7956 | 0.9462 | 7.62044 | 11.03385 | 8.5 | 23.66 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt50_vd_32x4d_pretrained.tar) |
| ResNeXt50_<br>64x4d | 0.7843 | 0.9413 | 13.80962 | 18.4712 | 15.06 | 42.36 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt50_64x4d_pretrained.tar) |
| ResNeXt50_vd_<br>64x4d | 0.8012 | 0.9486 | 13.94449 | 18.88759 | 15.54 | 42.38 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt50_vd_64x4d_pretrained.tar) |
| ResNeXt101_<br>32x4d | 0.7865 | 0.9419 | 16.21503 | 19.96568 | 15.01 | 41.54 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_32x4d_pretrained.tar) |
| ResNeXt101_vd_<br>32x4d | 0.8033 | 0.9512 | 16.28103 | 20.25611 | 15.49 | 41.56 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_vd_32x4d_pretrained.tar) |
| ResNeXt101_<br>64x4d | 0.7835 | 0.9452 | 30.4788 | 36.29801 | 29.05 | 78.12 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_64x4d_pretrained.tar) |
| ResNeXt101_vd_<br>64x4d | 0.8078 | 0.9520 | 30.40456 | 36.77324 | 29.53 | 78.14 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_vd_64x4d_pretrained.tar) |
| ResNeXt152_<br>32x4d | 0.7898 | 0.9433 | 24.86299 | 29.36764 | 22.01 | 56.28 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt152_32x4d_pretrained.tar) |
| ResNeXt152_vd_<br>32x4d | 0.8072 | 0.9520 | 25.03258 | 30.08987 | 22.49 | 56.3 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt152_vd_32x4d_pretrained.tar) |
| ResNeXt152_<br>64x4d | 0.7951 | 0.9471 | 46.7564 | 56.34108 | 43.03 | 107.57 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt152_64x4d_pretrained.tar) |
| ResNeXt152_vd_<br>64x4d | 0.8108 | 0.9534 | 47.18638 | 57.16257 | 43.52 | 107.59 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt152_vd_64x4d_pretrained.tar) |
| SE_ResNet18_vd | 0.7333 | 0.9138 | 1.7691 | 4.19877 | 4.14 | 11.8 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/SE_ResNet18_vd_pretrained.tar) |
| SE_ResNet34_vd | 0.7651 | 0.9320 | 2.88559 | 7.03291 | 7.84 | 21.98 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/SE_ResNet34_vd_pretrained.tar) |
| SE_ResNet50_vd | 0.7952 | 0.9475 | 4.28393 | 10.38846 | 8.67 | 28.09 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/SE_ResNet50_vd_pretrained.tar) |
| SE_ResNeXt50_<br>32x4d | 0.7844 | 0.9396 | 8.74121 | 13.563 | 8.02 | 26.16 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/SE_ResNeXt50_32x4d_pretrained.tar) |
| SE_ResNeXt50_vd_<br>32x4d | 0.8024 | 0.9489 | 9.17134 | 14.76192 | 10.76 | 26.28 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/SE_ResNeXt50_vd_32x4d_pretrained.tar) |
| SE_ResNeXt101_<br>32x4d | 0.7912 | 0.9420 | 18.82604 | 25.31814 | 15.02 | 46.28 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/SE_ResNeXt101_32x4d_pretrained.tar) |
| SENet154_vd | 0.8140 | 0.9548 | 53.79794 | 66.31684 | 45.83 | 114.29 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/SENet154_vd_pretrained.tar) |
<a name="DPN_and_DenseNet_series"></a>
### DPN and DenseNet series
Accuracy and inference time metrics of the DPN and DenseNet series models are shown below. For more detailed information, please refer to the [DPN and DenseNet series tutorial](./docs/en/models/DPN_DenseNet_en.md).
| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download Address |
|-------------|-----------|-----------|-----------------------|----------------------|----------|-----------|--------------------------------------------------------------------------------------|
| DenseNet121 | 0.7566 | 0.9258 | 4.40447 | 9.32623 | 5.69 | 7.98 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/DenseNet121_pretrained.tar) |
| DenseNet161 | 0.7857 | 0.9414 | 10.39152 | 22.15555 | 15.49 | 28.68 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/DenseNet161_pretrained.tar) |
| DenseNet169 | 0.7681 | 0.9331 | 6.43598 | 12.98832 | 6.74 | 14.15 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/DenseNet169_pretrained.tar) |
| DenseNet201 | 0.7763 | 0.9366 | 8.20652 | 17.45838 | 8.61 | 20.01 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/DenseNet201_pretrained.tar) |
| DenseNet264 | 0.7796 | 0.9385 | 12.14722 | 26.27707 | 11.54 | 33.37 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/DenseNet264_pretrained.tar) |
| DPN68 | 0.7678 | 0.9343 | 11.64915 | 12.82807 | 4.03 | 10.78 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/DPN68_pretrained.tar) |
| DPN92 | 0.7985 | 0.9480 | 18.15746 | 23.87545 | 12.54 | 36.29 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/DPN92_pretrained.tar) |
| DPN98 | 0.8059 | 0.9510 | 21.18196 | 33.23925 | 22.22 | 58.46 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/DPN98_pretrained.tar) |
| DPN107 | 0.8089 | 0.9532 | 27.62046 | 52.65353 | 35.06 | 82.97 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/DPN107_pretrained.tar) |
| DPN131 | 0.8070 | 0.9514 | 28.33119 | 46.19439 | 30.51 | 75.36 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/DPN131_pretrained.tar) |
<a name="HRNet_series"></a>
### HRNet series
Accuracy and inference time metrics of the HRNet series models are shown below. For more detailed information, please refer to the [HRNet series tutorial](./docs/en/models/HRNet_en.md).
| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download Address |
|-------------|-----------|-----------|------------------|------------------|----------|-----------|--------------------------------------------------------------------------------------|
| HRNet_W18_C | 0.7692 | 0.9339 | 7.40636 | 13.29752 | 4.14 | 21.29 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W18_C_pretrained.tar) |
| HRNet_W18_C_ssld | 0.81162 | 0.95804 | 7.40636 | 13.29752 | 4.14 | 21.29 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W18_C_ssld_pretrained.tar) |
| HRNet_W30_C | 0.7804 | 0.9402 | 9.57594 | 17.35485 | 16.23 | 37.71 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W30_C_pretrained.tar) |
| HRNet_W32_C | 0.7828 | 0.9424 | 9.49807 | 17.72921 | 17.86 | 41.23 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W32_C_pretrained.tar) |
| HRNet_W40_C | 0.7877 | 0.9447 | 12.12202 | 25.68184 | 25.41 | 57.55 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W40_C_pretrained.tar) |
| HRNet_W44_C | 0.7900 | 0.9451 | 13.19858 | 32.25202 | 29.79 | 67.06 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W44_C_pretrained.tar) |
| HRNet_W48_C | 0.7895 | 0.9442 | 13.70761 | 34.43572 | 34.58 | 77.47 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W48_C_pretrained.tar) |
| HRNet_W48_C_ssld | 0.8363 | 0.9682 | 13.70761 | 34.43572 | 34.58 | 77.47 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W48_C_ssld_pretrained.tar) |
| HRNet_W64_C | 0.7930 | 0.9461 | 17.57527 | 47.9533 | 57.83 | 128.06 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W64_C_pretrained.tar) |
<a name="Inception_series"></a>
### Inception series
Accuracy and inference time metrics of the Inception series models are shown below. For more detailed information, please refer to the [Inception series tutorial](./docs/en/models/Inception_en.md).
| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download Address |
|--------------------|-----------|-----------|-----------------------|----------------------|----------|-----------|---------------------------------------------------------------------------------------------|
| GoogLeNet | 0.7070 | 0.8966 | 1.88038 | 4.48882 | 2.88 | 8.46 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/GoogLeNet_pretrained.tar) |
| Xception41 | 0.7930 | 0.9453 | 4.96939 | 17.01361 | 16.74 | 22.69 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/Xception41_pretrained.tar) |
| Xception41_deeplab | 0.7955 | 0.9438 | 5.33541 | 17.55938 | 18.16 | 26.73 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/Xception41_deeplab_pretrained.tar) |
| Xception65 | 0.8100 | 0.9549 | 7.26158 | 25.88778 | 25.95 | 35.48 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/Xception65_pretrained.tar) |
| Xception65_deeplab | 0.8032 | 0.9449 | 7.60208 | 26.03699 | 27.37 | 39.52 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/Xception65_deeplab_pretrained.tar) |
| Xception71 | 0.8111 | 0.9545 | 8.72457 | 31.55549 | 31.77 | 37.28 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/Xception71_pretrained.tar) |
| InceptionV4 | 0.8077 | 0.9526 | 12.99342 | 25.23416 | 24.57 | 42.68 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/InceptionV4_pretrained.tar) |
<a name="EfficientNet_and_ResNeXt101_wsl_series"></a>
### EfficientNet and ResNeXt101_wsl series
Accuracy and inference time metrics of the EfficientNet and ResNeXt101_wsl series models are shown below. For more detailed information, please refer to the [EfficientNet and ResNeXt101_wsl series tutorial](./docs/en/models/EfficientNet_and_ResNeXt101_wsl_en.md).
| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download Address |
|---------------------------|-----------|-----------|------------------|------------------|----------|-----------|----------------------------------------------------------------------------------------------------|
| ResNeXt101_<br>32x8d_wsl | 0.8255 | 0.9674 | 18.52528 | 34.25319 | 29.14 | 78.44 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_32x8d_wsl_pretrained.tar) |
| ResNeXt101_<br>32x16d_wsl | 0.8424 | 0.9726 | 25.60395 | 71.88384 | 57.55 | 152.66 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_32x16d_wsl_pretrained.tar) |
| ResNeXt101_<br>32x32d_wsl | 0.8497 | 0.9759 | 54.87396 | 160.04337 | 115.17 | 303.11 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_32x32d_wsl_pretrained.tar) |
| ResNeXt101_<br>32x48d_wsl | 0.8537 | 0.9769 | 99.01698256 | 315.91261 | 173.58 | 456.2 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_32x48d_wsl_pretrained.tar) |
| Fix_ResNeXt101_<br>32x48d_wsl | 0.8626 | 0.9797 | 160.0838242 | 595.99296 | 354.23 | 456.2 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/Fix_ResNeXt101_32x48d_wsl_pretrained.tar) |
| EfficientNetB0 | 0.7738 | 0.9331 | 3.442 | 6.11476 | 0.72 | 5.1 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/EfficientNetB0_pretrained.tar) |
| EfficientNetB1 | 0.7915 | 0.9441 | 5.3322 | 9.41795 | 1.27 | 7.52 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/EfficientNetB1_pretrained.tar) |
| EfficientNetB2 | 0.7985 | 0.9474 | 6.29351 | 10.95702 | 1.85 | 8.81 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/EfficientNetB2_pretrained.tar) |
| EfficientNetB3 | 0.8115 | 0.9541 | 7.67749 | 16.53288 | 3.43 | 11.84 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/EfficientNetB3_pretrained.tar) |
| EfficientNetB4 | 0.8285 | 0.9623 | 12.15894 | 30.94567 | 8.29 | 18.76 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/EfficientNetB4_pretrained.tar) |
| EfficientNetB5 | 0.8362 | 0.9672 | 20.48571 | 61.60252 | 19.51 | 29.61 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/EfficientNetB5_pretrained.tar) |
| EfficientNetB6 | 0.8400 | 0.9688 | 32.62402 | - | 36.27 | 42 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/EfficientNetB6_pretrained.tar) |
| EfficientNetB7 | 0.8430 | 0.9689 | 53.93823 | - | 72.35 | 64.92 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/EfficientNetB7_pretrained.tar) |
| EfficientNetB0_<br>small | 0.7580 | 0.9258 | 2.3076 | 4.71886 | 0.72 | 4.65 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/EfficientNetB0_small_pretrained.tar) |
<a name="ResNeSt_and_RegNet_series"></a>
### ResNeSt and RegNet series
Accuracy and inference time metrics of the ResNeSt and RegNet series models are shown below. For more detailed information, please refer to the [ResNeSt and RegNet series tutorial](./docs/en/models/ResNeSt_RegNet_en.md).
| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | Flops(G) | Params(M) | Download Address |
|------------------------|-----------|-----------|------------------|------------------|----------|-----------|------------------------------------------------------------------------------------------------------|
| ResNeSt50_<br>fast_1s1x64d | 0.8035 | 0.9528 | 3.45405 | 8.72680 | 8.68 | 26.3 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeSt50_fast_1s1x64d_pretrained.pdparams) |
| ResNeSt50 | 0.8102 | 0.9542 | 6.69042 | 8.01664 | 10.78 | 27.5 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/ResNeSt50_pretrained.pdparams) |
| RegNetX_4GF | 0.785 | 0.9416 | 6.46478 | 11.19862 | 8 | 22.1 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/RegNetX_4GF_pretrained.pdparams) |
<a name="License"></a>
## License
PaddleClas is released under the <a href="https://github.com/PaddlePaddle/PaddleClas/blob/master/LICENSE">Apache 2.0 license</a>.
## Updates
## Contributing
<a name="Contribution"></a>
## Contribution
Contributions are highly welcomed and we would really appreciate your feedback!
- Thanks to [nblib](https://github.com/nblib) for fixing a bug in RandErasing.
- Thanks to [chenpy228](https://github.com/chenpy228) for fixing some typos in PaddleClas.
......@@ -28,6 +28,7 @@ At present, there are 7 pretrained models of such models open-sourced by PaddleC
| HRNet_W40_C | 0.788 | 0.945 | 0.789 | 0.945 | 25.410 | 57.550 |
| HRNet_W44_C | 0.790 | 0.945 | 0.789 | 0.944 | 29.790 | 67.060 |
| HRNet_W48_C | 0.790 | 0.944 | 0.793 | 0.945 | 34.580 | 77.470 |
| HRNet_W48_C_ssld | 0.836 | 0.968 | 0.793 | 0.945 | 34.580 | 77.470 |
| HRNet_W64_C | 0.793 | 0.946 | 0.795 | 0.946 | 57.830 | 128.060 |
......@@ -42,6 +43,7 @@ At present, there are 7 pretrained models of such models open-sourced by PaddleC
| HRNet_W40_C | 224 | 256 | 10.739 |
| HRNet_W44_C | 224 | 256 | 11.497 |
| HRNet_W48_C | 224 | 256 | 12.165 |
| HRNet_W48_C_ssld | 224 | 256 | 12.165 |
| HRNet_W64_C | 224 | 256 | 15.003 |
......@@ -58,4 +60,5 @@ At present, there are 7 pretrained models of such models open-sourced by PaddleC
| HRNet_W40_C | 224 | 256 | 11.4229 | 19.1595 | 30.47984 | 12.12202 | 25.68184 | 48.90623 |
| HRNet_W44_C | 224 | 256 | 12.25778 | 22.75456 | 32.61275 | 13.19858 | 32.25202 | 59.09871 |
| HRNet_W48_C | 224 | 256 | 12.65015 | 23.12886 | 33.37859 | 13.70761 | 34.43572 | 63.01219 |
| HRNet_W48_C_ssld | 224 | 256 | 12.65015 | 23.12886 | 33.37859 | 13.70761 | 34.43572 | 63.01219 |
| HRNet_W64_C | 224 | 256 | 15.10428 | 27.68901 | 40.4198 | 17.57527 | 47.9533 | 97.11228 |
......@@ -32,6 +32,7 @@ As can be seen from the above curves, the higher the number of layers, the highe
| ResNet18_vd | 0.723 | 0.908 | | | 4.140 | 11.710 |
| ResNet34 | 0.746 | 0.921 | 0.732 | 0.913 | 7.360 | 21.800 |
| ResNet34_vd | 0.760 | 0.930 | | | 7.390 | 21.820 |
| ResNet34_vd_ssld | 0.797 | 0.949 | | | 7.390 | 21.820 |
| ResNet50 | 0.765 | 0.930 | 0.760 | 0.930 | 8.190 | 25.560 |
| ResNet50_vc | 0.784 | 0.940 | | | 8.670 | 25.580 |
| ResNet50_vd | 0.791 | 0.944 | 0.792 | 0.946 | 8.670 | 25.580 |
......@@ -57,6 +58,7 @@ As can be seen from the above curves, the higher the number of layers, the highe
| ResNet18_vd | 224 | 256 | 1.603 |
| ResNet34 | 224 | 256 | 2.272 |
| ResNet34_vd | 224 | 256 | 2.343 |
| ResNet34_vd_ssld | 224 | 256 | 2.343 |
| ResNet50 | 224 | 256 | 2.939 |
| ResNet50_vc | 224 | 256 | 3.041 |
| ResNet50_vd | 224 | 256 | 3.165 |
......@@ -78,6 +80,7 @@ As can be seen from the above curves, the higher the number of layers, the highe
| ResNet18_vd | 224 | 256 | 1.39593 | 2.69063 | 3.88267 | 1.54557 | 3.85363 | 6.88121 |
| ResNet34 | 224 | 256 | 2.23092 | 4.10205 | 5.54904 | 2.34957 | 5.89821 | 10.73451 |
| ResNet34_vd | 224 | 256 | 2.23992 | 4.22246 | 5.79534 | 2.43427 | 6.22257 | 11.44906 |
| ResNet34_vd_ssld | 224 | 256 | 2.23992 | 4.22246 | 5.79534 | 2.43427 | 6.22257 | 11.44906 |
| ResNet50 | 224 | 256 | 2.63824 | 4.63802 | 7.02444 | 3.47712 | 7.84421 | 13.90633 |
| ResNet50_vc | 224 | 256 | 2.67064 | 4.72372 | 7.17204 | 3.52346 | 8.10725 | 14.45577 |
| ResNet50_vd | 224 | 256 | 2.65164 | 4.84109 | 7.46225 | 3.53131 | 8.09057 | 14.45965 |
......
......@@ -45,6 +45,7 @@ python tools/infer/predict.py \
- [ResNet50_vc](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vc_pretrained.tar)
- [ResNet18_vd](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet18_vd_pretrained.tar)
- [ResNet34_vd](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_vd_pretrained.tar)
- [ResNet34_vd_ssld](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_vd_ssld_pretrained.tar)
- [ResNet50_vd](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar)
- [ResNet50_vd_v2](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_v2_pretrained.tar)
- [ResNet101_vd](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_pretrained.tar)
......@@ -149,11 +150,13 @@ python tools/infer/predict.py \
- HRNet series
- HRNet series<sup>[[13](#ref13)]</sup>([paper link](https://arxiv.org/abs/1908.07919))
- [HRNet_W18_C](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W18_C_pretrained.tar)
- [HRNet_W18_C_ssld](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W18_C_ssld_pretrained.tar)
- [HRNet_W30_C](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W30_C_pretrained.tar)
- [HRNet_W32_C](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W32_C_pretrained.tar)
- [HRNet_W40_C](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W40_C_pretrained.tar)
- [HRNet_W44_C](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W44_C_pretrained.tar)
- [HRNet_W48_C](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W48_C_pretrained.tar)
- [HRNet_W48_C_ssld](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W48_C_ssld_pretrained.tar)
- [HRNet_W64_C](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W64_C_pretrained.tar)
......
# Release Notes
* 2020.09.17
* Add `HRNet_W48_C_ssld` pretrained model, whose Top-1 Acc on ImageNet1k dataset reaches 83.62%.
* Add `ResNet34_vd_ssld` pretrained model, whose Top-1 Acc on ImageNet1k dataset reaches 79.72%.
* 2020.09.07
* Add `HRNet_W18_C_ssld` pretrained model, whose Top-1 Acc on ImageNet1k dataset reaches 81.16%.
* Add `MobileNetV3_small_x0_35_ssld` pretrained model, whose Top-1 Acc on ImageNet1k dataset reaches 55.55%.
......@@ -9,7 +13,7 @@
* Add `Fix_ResNet50_vd_ssld_v2` pretrained model, whose Top-1 Acc on ImageNet1k dataset reaches 84.00%.
* 2020.06.17
* Add English documents
* Add English documents.
* 2020.06.12
* Add support for training and evaluation on Windows or CPU.
......
......@@ -27,6 +27,7 @@ HRNet is a new neural network proposed by Microsoft Research Asia in 2019,
| HRNet_W40_C | 0.788 | 0.945 | 0.789 | 0.945 | 25.410 | 57.550 |
| HRNet_W44_C | 0.790 | 0.945 | 0.789 | 0.944 | 29.790 | 67.060 |
| HRNet_W48_C | 0.790 | 0.944 | 0.793 | 0.945 | 34.580 | 77.470 |
| HRNet_W48_C_ssld | 0.836 | 0.968 | 0.793 | 0.945 | 34.580 | 77.470 |
| HRNet_W64_C | 0.793 | 0.946 | 0.795 | 0.946 | 57.830 | 128.060 |
......@@ -41,6 +42,7 @@ HRNet is a new neural network proposed by Microsoft Research Asia in 2019,
| HRNet_W40_C | 224 | 256 | 10.739 |
| HRNet_W44_C | 224 | 256 | 11.497 |
| HRNet_W48_C | 224 | 256 | 12.165 |
| HRNet_W48_C_ssld | 224 | 256 | 12.165 |
| HRNet_W64_C | 224 | 256 | 15.003 |
......@@ -57,4 +59,5 @@ HRNet is a new neural network proposed by Microsoft Research Asia in 2019,
| HRNet_W40_C | 224 | 256 | 11.4229 | 19.1595 | 30.47984 | 12.12202 | 25.68184 | 48.90623 |
| HRNet_W44_C | 224 | 256 | 12.25778 | 22.75456 | 32.61275 | 13.19858 | 32.25202 | 59.09871 |
| HRNet_W48_C | 224 | 256 | 12.65015 | 23.12886 | 33.37859 | 13.70761 | 34.43572 | 63.01219 |
| HRNet_W48_C_ssld | 224 | 256 | 12.65015 | 23.12886 | 33.37859 | 13.70761 | 34.43572 | 63.01219 |
| HRNet_W64_C | 224 | 256 | 15.10428 | 27.68901 | 40.4198 | 17.57527 | 47.9533 | 97.11228 |
......@@ -32,6 +32,7 @@ The ResNet series models were proposed in 2015 and won first place in the ILSVRC2015 competition
| ResNet18_vd | 0.723 | 0.908 | | | 4.140 | 11.710 |
| ResNet34 | 0.746 | 0.921 | 0.732 | 0.913 | 7.360 | 21.800 |
| ResNet34_vd | 0.760 | 0.930 | | | 7.390 | 21.820 |
| ResNet34_vd_ssld | 0.797 | 0.949 | | | 7.390 | 21.820 |
| ResNet50 | 0.765 | 0.930 | 0.760 | 0.930 | 8.190 | 25.560 |
| ResNet50_vc | 0.784 | 0.940 | | | 8.670 | 25.580 |
| ResNet50_vd | 0.791 | 0.944 | 0.792 | 0.946 | 8.670 | 25.580 |
......@@ -58,6 +59,7 @@ The ResNet series models were proposed in 2015 and won first place in the ILSVRC2015 competition
| ResNet18_vd | 224 | 256 | 1.603 |
| ResNet34 | 224 | 256 | 2.272 |
| ResNet34_vd | 224 | 256 | 2.343 |
| ResNet34_vd_ssld | 224 | 256 | 2.343 |
| ResNet50 | 224 | 256 | 2.939 |
| ResNet50_vc | 224 | 256 | 3.041 |
| ResNet50_vd | 224 | 256 | 3.165 |
......@@ -79,6 +81,7 @@ The ResNet series models were proposed in 2015 and won first place in the ILSVRC2015 competition
| ResNet18_vd | 224 | 256 | 1.39593 | 2.69063 | 3.88267 | 1.54557 | 3.85363 | 6.88121 |
| ResNet34 | 224 | 256 | 2.23092 | 4.10205 | 5.54904 | 2.34957 | 5.89821 | 10.73451 |
| ResNet34_vd | 224 | 256 | 2.23992 | 4.22246 | 5.79534 | 2.43427 | 6.22257 | 11.44906 |
| ResNet34_vd_ssld | 224 | 256 | 2.23992 | 4.22246 | 5.79534 | 2.43427 | 6.22257 | 11.44906 |
| ResNet50 | 224 | 256 | 2.63824 | 4.63802 | 7.02444 | 3.47712 | 7.84421 | 13.90633 |
| ResNet50_vc | 224 | 256 | 2.67064 | 4.72372 | 7.17204 | 3.52346 | 8.10725 | 14.45577 |
| ResNet50_vd | 224 | 256 | 2.65164 | 4.84109 | 7.46225 | 3.53131 | 8.09057 | 14.45965 |
......
......@@ -45,6 +45,7 @@ python tools/infer/predict.py \
- [ResNet50_vc](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vc_pretrained.tar)
- [ResNet18_vd](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet18_vd_pretrained.tar)
- [ResNet34_vd](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_vd_pretrained.tar)
- [ResNet34_vd_ssld](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_vd_ssld_pretrained.tar)
- [ResNet50_vd](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar)
- [ResNet50_vd_v2](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_v2_pretrained.tar)
- [ResNet101_vd](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_pretrained.tar)
......@@ -149,11 +150,13 @@ python tools/infer/predict.py \
- HRNet series
- HRNet series<sup>[[13](#ref13)]</sup>([paper link](https://arxiv.org/abs/1908.07919))
- [HRNet_W18_C](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W18_C_pretrained.tar)
- [HRNet_W18_C_ssld](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W18_C_ssld_pretrained.tar)
- [HRNet_W30_C](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W30_C_pretrained.tar)
- [HRNet_W32_C](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W32_C_pretrained.tar)
- [HRNet_W40_C](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W40_C_pretrained.tar)
- [HRNet_W44_C](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W44_C_pretrained.tar)
- [HRNet_W48_C](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W48_C_pretrained.tar)
- [HRNet_W48_C_ssld](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W48_C_ssld_pretrained.tar)
- [HRNet_W64_C](https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W64_C_pretrained.tar)
......
# Update History
- 2020.09.17
* Added the HRNet_W48_C_ssld model, whose Top-1 Acc on ImageNet reaches 0.836; added the ResNet34_vd_ssld model, whose Top-1 Acc on ImageNet reaches 0.797.
* 2020.09.07
* Added the HRNet_W18_C_ssld model, whose Top-1 Acc on ImageNet reaches 0.81162; added the MobileNetV3_small_x0_35_ssld model, whose Top-1 Acc on ImageNet reaches 0.5555.
......