diff --git a/deploy/slim/README.md b/deploy/slim/README.md
index eed8aa3b21f5af6f0168d77f30c770cb6dbc1857..4e6ab72b9a386d98817fd915ba41aeef03e15187 100644
--- a/deploy/slim/README.md
+++ b/deploy/slim/README.md
@@ -125,7 +125,7 @@ python3.7 -m paddle.distributed.launch \
 在得到在线量化训练、模型剪枝保存的模型后,可以将其导出为inference model,用于预测部署,以模型剪枝为例:
 
 ```bash
-python3.7 tools/export.py \
+python3.7 tools/export_model.py \
     -c ppcls/configs/slim/ResNet50_vd_prune.yaml \
     -o Global.pretrained_model=./output/ResNet50_vd/best_model \
     -o Global.save_inference_dir=./inference
diff --git a/deploy/slim/README_en.md b/deploy/slim/README_en.md
index d7a978f467ae5f8b754606112d5799d43e2c0417..36284762f48773985a83e1ce32fe6f35f072e750 100644
--- a/deploy/slim/README_en.md
+++ b/deploy/slim/README_en.md
@@ -127,7 +127,7 @@ python3.7 -m paddle.distributed.launch \
 After getting the compressed model, we can export it as inference model for predictive deployment. Using pruned model as example:
 
 ```bash
-python3.7 tools/export.py \
+python3.7 tools/export_model.py \
     -c ppcls/configs/slim/ResNet50_vd_prune.yaml \
     -o Global.pretrained_model=./output/ResNet50_vd/best_model \
     -o Global.save_inference_dir=./inference
diff --git a/docs/en/advanced_tutorials/model_prune_quantization_en.md b/docs/en/advanced_tutorials/model_prune_quantization_en.md
index 1946cfa5fb1e67c64fc97b5cd35d3f9c66ad9662..f75aa1440b6eb5e7963d21925a859d0f7a82be99 100644
--- a/docs/en/advanced_tutorials/model_prune_quantization_en.md
+++ b/docs/en/advanced_tutorials/model_prune_quantization_en.md
@@ -157,7 +157,7 @@ python3.7 -m paddle.distributed.launch \
 Having obtained the saved model after online quantization training and pruning, it can be exported as an inference model for inference deployment. Here we take model pruning as an example:
 
 ```
-python3.7 tools/export.py \
+python3.7 tools/export_model.py \
     -c ppcls/configs/slim/ResNet50_vd_prune.yaml \
     -o Global.pretrained_model=./output/ResNet50_vd/best_model \
     -o Global.save_inference_dir=./inference
diff --git a/docs/zh_CN/training/advanced/prune_quantization.md b/docs/zh_CN/training/advanced/prune_quantization.md
index 6d2fae5d0eda03d1dca0d23410469d06146c7f39..27921f055ec581dc57626bd082eb980bc330653b 100644
--- a/docs/zh_CN/training/advanced/prune_quantization.md
+++ b/docs/zh_CN/training/advanced/prune_quantization.md
@@ -151,7 +151,7 @@ python3.7 -m paddle.distributed.launch \
 在得到在线量化训练、模型剪枝保存的模型后,可以将其导出为 inference model,用于预测部署,以模型剪枝为例:
 
 ```bash
-python3.7 tools/export.py \
+python3.7 tools/export_model.py \
     -c ppcls/configs/slim/ResNet50_vd_prune.yaml \
     -o Global.pretrained_model=./output/ResNet50_vd/best_model \
     -o Global.save_inference_dir=./inference