Commit cc8e0d09 authored by cuicheng01, committed by ruri

Update readme about efficientnet (#3389)

* update TensorRT 1.5.2 inference time

* add EfficientNet configuration in README
Parent 9bc02acb
@@ -241,8 +241,9 @@ PaddlePaddle/Models ImageClassification supports custom data
 - Note
 - 1: ResNet50_vd_v2 is the distilled version of ResNet50_vd.
-- 2: The input image resolution for InceptionV4 and Xception is 299x299, for DarkNet53 it is 256x256, for Fix_ResNeXt101_32x48d_wsl it is 320x320, and all other models use 224x224. At inference time, the resize_short_size of the DarkNet53 and Fix_ResNeXt101_32x48d_wsl networks equals the width or height of the input image resolution, the resize_short_size of InceptionV4 and Xception is 320, and that of all other networks is 256.
-- 3: To run inference through the dynamic link library, the trained model must first be converted to a binary model.
+- 2: Except for EfficientNet, the input image resolution for InceptionV4 and Xception is 299x299, for DarkNet53 it is 256x256, for Fix_ResNeXt101_32x48d_wsl it is 320x320, and all other models use 224x224. At inference time, the resize_short_size of the DarkNet53 and Fix_ResNeXt101_32x48d_wsl networks equals the width or height of the input image resolution, the resize_short_size of InceptionV4 and Xception is 320, and that of all other networks is 256.
+- 3: The input resolutions of EfficientNetB0~B7 are 224x224, 240x240, 260x260, 300x300, 380x380, 456x456, 528x528 and 600x600 respectively, and the resize_short_size at inference time is the width or height of that resolution plus 32; for example, the resize_short_size of EfficientNetB1 is 272. During both training and inference of this series, the image resize parameter interpolation is set to 2 (cubic interpolation). These models are also trained with an exponential moving average of the weights; see [ExponentialMovingAverage](https://www.paddlepaddle.org.cn/documentation/docs/zh/1.5/api_cn/optimizer_cn.html#exponentialmovingaverage) for details.
+- 4: To run inference through the dynamic link library, the trained model must first be converted to a binary model.
 ```bash
 python infer.py \
@@ -251,7 +252,7 @@ PaddlePaddle/Models ImageClassification supports custom data
     --save_inference=True
 ```
-- 4: The pretrained models of the ResNeXt101_wsl series are converted from PyTorch models; see [ResNeXt wsl](https://pytorch.org/hub/facebookresearch_WSL-Images_resnext/) for details.
+- 5: The pretrained models of the ResNeXt101_wsl series are converted from PyTorch models; see [ResNeXt wsl](https://pytorch.org/hub/facebookresearch_WSL-Images_resnext/) for details.
 ### AlexNet
@@ -226,15 +226,16 @@ Pretrained models can be downloaded by clicking related model names.
 - Note
 - 1: ResNet50_vd_v2 is the distilled version of ResNet50_vd.
-- 2: The image resolution fed into InceptionV4 and Xception is ```299x299```, Fix_ResNeXt101_32x48d_wsl is ```320x320```, DarkNet is ```256x256```, and all others are ```224x224```. At test time, the resize_short_size of the DarkNet53 and Fix_ResNeXt101_32x48d_wsl series networks is the same as the width or height of the input image resolution, the resize_short_size of InceptionV4 and Xception is 320, and that of the other networks is 256.
-- 3: It is necessary to convert the trained model to a binary model when applying the dynamic link library for inference. One can do it by running the following command:
+- 2: Except for EfficientNet, the image resolution fed into InceptionV4 and Xception is ```299x299```, Fix_ResNeXt101_32x48d_wsl is ```320x320```, DarkNet is ```256x256```, and all others are ```224x224```. At test time, the resize_short_size of the DarkNet53 and Fix_ResNeXt101_32x48d_wsl series networks is the same as the width or height of the input image resolution, the resize_short_size of InceptionV4 and Xception is 320, and that of the other networks is 256.
+- 3: The resolutions of EfficientNetB0~B7 are ```224x224```, ```240x240```, ```260x260```, ```300x300```, ```380x380```, ```456x456```, ```528x528``` and ```600x600``` respectively, and the resize_short_size in the inference phase is the width or height of that resolution plus 32; for example, the resize_short_size of EfficientNetB1 is 272. During both training and inference of these models, the resize parameter interpolation is set to 2 (cubic interpolation). Besides, these models use an exponential moving average of the weights during training; for this trick please refer to [ExponentialMovingAverage](https://www.paddlepaddle.org.cn/documentation/docs/en/1.5/api/optimizer.html#exponentialmovingaverage).
+- 4: It is necessary to convert the trained model to a binary model when applying the dynamic link library for inference. One can do it by running the following command:
 ```bash
 python infer.py \
     --model=model_name \
     --pretrained_model=${path_to_pretrained_model} \
     --save_inference=True
 ```
-- 4: The pretrained model of the ResNeXt101_wsl series networks is converted from the PyTorch model. Please refer to [RESNEXT WSL](https://pytorch.org/hub/facebookresearch_WSL-Images_resnext/) for details.
+- 5: The pretrained model of the ResNeXt101_wsl series networks is converted from the PyTorch model. Please refer to [RESNEXT WSL](https://pytorch.org/hub/facebookresearch_WSL-Images_resnext/) for details.
 ### AlexNet
 |Model | Top-1 | Top-5 | Paddle Fluid inference time(ms) | Paddle TensorRT inference time(ms) |
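As a concrete reading of notes 2 and 3 in the diffs above, the sketch below shows one way the EfficientNet evaluation preprocessing can be written. It is an illustration under stated assumptions, not the repository's reader code: it assumes OpenCV (whose `cv2.INTER_CUBIC` constant equals 2, matching `interpolation=2`), the resolution table quoted in note 3, and a plain short-side resize followed by a center crop.

```python
import cv2  # OpenCV; cv2.INTER_CUBIC == 2, i.e. the interpolation=2 setting in note 3

# Input resolutions from note 3; resize_short_size is the resolution plus 32.
EFFICIENTNET_RESOLUTIONS = {
    "EfficientNetB0": 224, "EfficientNetB1": 240, "EfficientNetB2": 260,
    "EfficientNetB3": 300, "EfficientNetB4": 380, "EfficientNetB5": 456,
    "EfficientNetB6": 528, "EfficientNetB7": 600,
}

def preprocess_for_eval(img, model_name):
    """Resize the short side to resolution + 32 with cubic interpolation,
    then center-crop a square of the model's input resolution."""
    resolution = EFFICIENTNET_RESOLUTIONS[model_name]
    resize_short = resolution + 32  # e.g. 272 for EfficientNetB1
    h, w = img.shape[:2]
    scale = float(resize_short) / min(h, w)
    img = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))),
                     interpolation=cv2.INTER_CUBIC)
    h, w = img.shape[:2]
    top, left = (h - resolution) // 2, (w - resolution) // 2
    return img[top:top + resolution, left:left + resolution]

# Example: an EfficientNetB1 image is resized so its short side is 272,
# then cropped to 240x240, matching the numbers quoted in note 3.
# crop = preprocess_for_eval(cv2.imread("test.jpg"), "EfficientNetB1")
```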
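Note 3 also mentions the exponential moving average used during training. The following is a minimal, self-contained sketch of how the `fluid.optimizer.ExponentialMovingAverage` API linked in that note is typically wired in, using a toy network, CPU execution, and a made-up decay value; the repository's actual training script is more involved (data readers, learning-rate schedules, multi-GPU), so treat this only as an orientation.

```python
import numpy as np
import paddle.fluid as fluid

# Toy network, just to show where the EMA hooks in.
image = fluid.layers.data(name="image", shape=[3, 32, 32], dtype="float32")
label = fluid.layers.data(name="label", shape=[1], dtype="int64")
logits = fluid.layers.fc(input=image, size=10)
loss = fluid.layers.mean(
    fluid.layers.softmax_with_cross_entropy(logits=logits, label=label))

optimizer = fluid.optimizer.Momentum(learning_rate=0.1, momentum=0.9)
optimizer.minimize(loss)

# Track a moving average of the trainable parameters; update() appends the
# averaging ops so they run once per training step, after the optimizer.
# decay=0.9999 is illustrative, not the value used by this repository.
ema = fluid.optimizer.ExponentialMovingAverage(decay=0.9999)
ema.update()

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

for _ in range(5):
    feed = {
        "image": np.random.rand(8, 3, 32, 32).astype("float32"),
        "label": np.random.randint(0, 10, size=(8, 1)).astype("int64"),
    }
    exe.run(fluid.default_main_program(), feed=feed, fetch_list=[loss])

# For evaluation, or before exporting an inference model, temporarily swap
# in the averaged weights; the original weights are restored on exit.
with ema.apply(exe):
    pass  # run the test program / save the inference model here
```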
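Note 4 (note 3 before this commit) exports a binary inference model with `--save_inference=True` so it can be consumed from the dynamic link library. As a hedged companion illustration only, the exported directory can also be loaded back from Python with `fluid.io.load_inference_model`, assuming the default `save_inference_model` layout; the directory name below is a placeholder.

```python
import numpy as np
import paddle.fluid as fluid

place = fluid.CPUPlace()
exe = fluid.Executor(place)

# Placeholder path: wherever `infer.py --save_inference=True` wrote the
# __model__ file and parameters for the chosen network.
model_dir = "./saved_inference_model"

inference_program, feed_names, fetch_targets = fluid.io.load_inference_model(
    dirname=model_dir, executor=exe)

# The input shape must match the exported network (see notes 2 and 3 above);
# 1x3x224x224 here is just the most common case.
fake_input = np.random.rand(1, 3, 224, 224).astype("float32")
outputs = exe.run(inference_program,
                  feed={feed_names[0]: fake_input},
                  fetch_list=fetch_targets)
print(outputs[0].shape)  # class scores for the single fake image
```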