From a08a3f2db0b4fe5134337744a94a1b32c3b41b69 Mon Sep 17 00:00:00 2001
From: LokeZhou
Date: Thu, 30 Mar 2023 15:05:43 +0800
Subject: [PATCH] auto compression README.md add paddle_inference_eval.py
 test=document_fix (#8012)

---
 deploy/auto_compression/README.md | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/deploy/auto_compression/README.md b/deploy/auto_compression/README.md
index 20a0cbacd..7b5043069 100644
--- a/deploy/auto_compression/README.md
+++ b/deploy/auto_compression/README.md
@@ -150,8 +150,15 @@
 export CUDA_VISIBLE_DEVICES=0
 python eval.py --config_path=./configs/ppyoloe_l_qat_dis.yaml
 ```
+Obtain the model's mAP with Paddle Inference using TRT int8:
+```
+export CUDA_VISIBLE_DEVICES=0
+python paddle_inference_eval.py --model_path ./output/ --reader_config configs/ppyoloe_reader.yml --precision int8 --use_trt=True
+```
+
 **Note**:
 - The path of the model to be evaluated can be modified via the `model_dir` field in the config file.
+- `--precision` defaults to `paddle`; to use TRT you need to set `--use_trt=True`, and `--precision` can then be set to `fp32`/`fp16`/`int8`.
 ## 4. Inference Deployment
--
GitLab
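
The flag behavior this patch documents (`--precision` defaulting to `paddle`, with `fp32`/`fp16`/`int8` available once `--use_trt=True` is set) can be sketched as a minimal argument parser. This is a hypothetical reimplementation of the interface only; the actual `paddle_inference_eval.py` in PaddleDetection may parse its flags differently.

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Sketch of the evaluation-script CLI described in the README note.

    Flag names mirror the patched README; defaults are assumptions
    drawn from the documented example command, not the real script.
    """
    parser = argparse.ArgumentParser(
        description="Evaluate an exported model with Paddle Inference")
    parser.add_argument("--model_path", type=str, default="./output/",
                        help="Directory containing the exported inference model")
    parser.add_argument("--reader_config", type=str,
                        default="configs/ppyoloe_reader.yml",
                        help="Dataset/reader configuration file")
    # Per the README note: plain Paddle backend by default; the TRT
    # precision modes only take effect together with --use_trt=True.
    parser.add_argument("--precision", type=str, default="paddle",
                        choices=["paddle", "fp32", "fp16", "int8"],
                        help="Inference precision mode")
    # Accept --use_trt=True / --use_trt=False as shown in the README command.
    parser.add_argument("--use_trt", type=lambda s: s.lower() == "true",
                        default=False,
                        help="Enable the TensorRT backend")
    return parser


# Mirror the example invocation from the patched README section.
args = build_parser().parse_args(["--precision", "int8", "--use_trt", "True"])
print(args.precision, args.use_trt)  # prints: int8 True
```

With no flags given, the sketch reproduces the documented default of `--precision paddle` with TRT disabled.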