diff --git a/deploy/slim/README.md b/deploy/slim/README.md
index f5b0bf0edaec5dd72f5e342f4e2d7ebded484990..335b915b4d0cdf09505a3f56459cc853e41b9661 100644
--- a/deploy/slim/README.md
+++ b/deploy/slim/README.md
@@ -66,7 +66,7 @@ cd PaddleClas
 以CPU为例,若使用GPU,则将命令中改成`cpu`改成`gpu`
 
 ```bash
-python3.7 deploy/slim/slim.py -m train -c ppcls/configs/slim/ResNet50_vd_quantalization.yaml -o Global.device=cpu
+python3.7 deploy/slim/slim.py -m train -c ppcls/configs/slim/ResNet50_vd_quantization.yaml -o Global.device=cpu
 ```
 
 其中`yaml`文件解析详见[参考文档](../../docs/zh_CN/tutorials/config_description.md)。为了保证精度,`yaml`文件中已经使用`pretrained model`.
@@ -81,7 +81,7 @@ python3.7 -m paddle.distributed.launch \
     --gpus="0,1,2,3" \
     deploy/slim/slim.py \
     -m train \
-    -c ppcls/configs/slim/ResNet50_vd_quantalization.yaml
+    -c ppcls/configs/slim/ResNet50_vd_quantization.yaml
 ```
 
 ##### 3.1.2 离线量化
@@ -131,7 +131,7 @@ python3.7 -m paddle.distributed.launch \
 python3.7 deploy/slim/slim.py \
     -m export \
     -c ppcls/configs/slim/ResNet50_vd_prune.yaml \
-    -o Global.save_inference_dir=./inference 
+    -o Global.save_inference_dir=./inference
 ```
 
 
diff --git a/deploy/slim/README_en.md b/deploy/slim/README_en.md
index dce481922a3d149a6283df855e672796e09e0cdc..815dbc10f9606deb26a770743d1c8edd92e52b20 100644
--- a/deploy/slim/README_en.md
+++ b/deploy/slim/README_en.md
@@ -67,7 +67,7 @@ The training command is as follow:
 If using GPU, change the `cpu` to `gpu` in the following command.
 
 ```bash
-python3.7 deploy/slim/slim.py -m train -c ppcls/configs/slim/ResNet50_vd_quantalization.yaml -o Global.device=cpu
+python3.7 deploy/slim/slim.py -m train -c ppcls/configs/slim/ResNet50_vd_quantization.yaml -o Global.device=cpu
 ```
 
 The description of `yaml` file can be found in this [doc](../../docs/en/tutorials/config_en.md). To get better accuracy, the `pretrained model`is used in `yaml`.
@@ -82,7 +82,7 @@ python3.7 -m paddle.distributed.launch \
     --gpus="0,1,2,3" \
     deploy/slim/slim.py \
     -m train \
-    -c ppcls/configs/slim/ResNet50_vd_quantalization.yaml
+    -c ppcls/configs/slim/ResNet50_vd_quantization.yaml
 ```
 
 ##### 3.1.2 Offline quantization
@@ -132,7 +132,7 @@ After getting the compressed model, we can export it as inference model for pred
 python3.7 deploy/slim/slim.py \
     -m export \
     -c ppcls/configs/slim/ResNet50_vd_prune.yaml \
-    -o Global.save_inference_dir=./inference 
+    -o Global.save_inference_dir=./inference
 ```
 
 ### 5. Deploy
diff --git a/log/endpoints.log b/log/endpoints.log
new file mode 100644
index 0000000000000000000000000000000000000000..438da7f1e2835134fcc6eb9580cb70c419bd8ce7
--- /dev/null
+++ b/log/endpoints.log
@@ -0,0 +1,5 @@
+PADDLE_TRAINER_ENDPOINTS:
+127.0.0.1:51466
+127.0.0.1:58283
+127.0.0.1:34005
+127.0.0.1:58331
\ No newline at end of file
diff --git a/ppcls/configs/slim/ResNet50_vd_quantization.yaml b/ppcls/configs/slim/ResNet50_vd_quantization.yaml
index 41fcd0c868e7c55695a532eb9ed9641fa089d32d..aeccaeaae497e7f538427aad671cb54af7f64847 100644
--- a/ppcls/configs/slim/ResNet50_vd_quantization.yaml
+++ b/ppcls/configs/slim/ResNet50_vd_quantization.yaml
@@ -42,7 +42,7 @@ Optimizer:
   momentum: 0.9
   lr:
     name: Cosine
-    learning_rate: 0.1
+    learning_rate: 0.01
   regularizer:
     name: 'L2'
     coeff: 0.00007
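The yaml hunk above lowers the initial learning rate of the `Cosine` schedule from 0.1 to 0.01, which rescales the entire decay curve. A minimal sketch of the standard cosine-decay formula to illustrate the effect — the function name and the epoch count of 120 are illustrative assumptions, not Paddle's API or this config's actual epoch setting:

```python
import math

def cosine_lr(initial_lr: float, epoch: int, total_epochs: int) -> float:
    """Standard cosine decay: starts at initial_lr and falls
    smoothly to 0 by total_epochs."""
    return initial_lr * 0.5 * (1 + math.cos(math.pi * epoch / total_epochs))

# The patch changes the starting point (and hence the whole curve):
old_start = cosine_lr(0.1, 0, 120)    # 0.1   before the patch
new_start = cosine_lr(0.01, 0, 120)   # 0.01  after the patch
mid = cosine_lr(0.01, 60, 120)        # 0.005 at the midpoint
end = cosine_lr(0.01, 120, 120)       # 0.0   at the final epoch
```

Since the quantization config starts from a `pretrained model`, a smaller initial rate like this keeps fine-tuning from disturbing the pretrained weights too aggressively.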
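The README hunks are a mechanical rename of `ResNet50_vd_quantalization.yaml` to `ResNet50_vd_quantization.yaml`. A sketch of how such a sweep can be done with GNU `sed` — the scratch path under `/tmp` is hypothetical so the example is self-contained, and `sed -i` takes a different form on BSD/macOS:

```shell
# Sketch: fix the "quantalization" misspelling, as the patch does
# for both READMEs. Operates on a scratch copy, not the real repo.
mkdir -p /tmp/slim_docs
printf -- '-c ppcls/configs/slim/ResNet50_vd_quantalization.yaml\n' \
    > /tmp/slim_docs/README.md

# The same substitution applied by the patch:
sed -i 's/quantalization/quantization/g' /tmp/slim_docs/README.md

cat /tmp/slim_docs/README.md
```

Running the substitution over the whole docs tree (e.g. via `grep -rl` piped to `sed`) would also catch any occurrences the patch missed.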