From 5fde410dd634e9f55981239fa106edb1e56d936e Mon Sep 17 00:00:00 2001
From: gaotingquan
Date: Thu, 19 Jan 2023 05:52:52 +0000
Subject: [PATCH] fix: export.py -> export_model.py

---
 deploy/slim/README.md                                     | 2 +-
 deploy/slim/README_en.md                                  | 2 +-
 docs/en/advanced_tutorials/model_prune_quantization_en.md | 2 +-
 docs/zh_CN/training/advanced/prune_quantization.md        | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/deploy/slim/README.md b/deploy/slim/README.md
index eed8aa3b..4e6ab72b 100644
--- a/deploy/slim/README.md
+++ b/deploy/slim/README.md
@@ -125,7 +125,7 @@ python3.7 -m paddle.distributed.launch \
 在得到在线量化训练、模型剪枝保存的模型后，可以将其导出为inference model，用于预测部署，以模型剪枝为例：
 ```bash
-python3.7 tools/export.py \
+python3.7 tools/export_model.py \
     -c ppcls/configs/slim/ResNet50_vd_prune.yaml \
     -o Global.pretrained_model=./output/ResNet50_vd/best_model \
     -o Global.save_inference_dir=./inference

diff --git a/deploy/slim/README_en.md b/deploy/slim/README_en.md
index d7a978f4..36284762 100644
--- a/deploy/slim/README_en.md
+++ b/deploy/slim/README_en.md
@@ -127,7 +127,7 @@ python3.7 -m paddle.distributed.launch \
 After getting the compressed model, we can export it as inference model for predictive deployment. Using pruned model as example:
 ```bash
-python3.7 tools/export.py \
+python3.7 tools/export_model.py \
     -c ppcls/configs/slim/ResNet50_vd_prune.yaml \
     -o Global.pretrained_model=./output/ResNet50_vd/best_model
     -o Global.save_inference_dir=./inference

diff --git a/docs/en/advanced_tutorials/model_prune_quantization_en.md b/docs/en/advanced_tutorials/model_prune_quantization_en.md
index 1946cfa5..f75aa144 100644
--- a/docs/en/advanced_tutorials/model_prune_quantization_en.md
+++ b/docs/en/advanced_tutorials/model_prune_quantization_en.md
@@ -157,7 +157,7 @@ python3.7 -m paddle.distributed.launch \
 Having obtained the saved model after online quantization training and pruning, it can be exported as an inference model for inference deployment. Here we take model pruning as an example:
 ```
-python3.7 tools/export.py \
+python3.7 tools/export_model.py \
     -c ppcls/configs/slim/ResNet50_vd_prune.yaml \
     -o Global.pretrained_model=./output/ResNet50_vd/best_model \
     -o Global.save_inference_dir=./inference

diff --git a/docs/zh_CN/training/advanced/prune_quantization.md b/docs/zh_CN/training/advanced/prune_quantization.md
index 6d2fae5d..27921f05 100644
--- a/docs/zh_CN/training/advanced/prune_quantization.md
+++ b/docs/zh_CN/training/advanced/prune_quantization.md
@@ -151,7 +151,7 @@ python3.7 -m paddle.distributed.launch \
 在得到在线量化训练、模型剪枝保存的模型后，可以将其导出为 inference model，用于预测部署，以模型剪枝为例：
 ```bash
-python3.7 tools/export.py \
+python3.7 tools/export_model.py \
     -c ppcls/configs/slim/ResNet50_vd_prune.yaml \
     -o Global.pretrained_model=./output/ResNet50_vd/best_model \
     -o Global.save_inference_dir=./inference
--
GitLab