Unverified commit 54d44b15, authored by ceci3 and committed by GitHub

judge model type only when compress_config is None (#1461)

* fix ptq hpo in ac

* judge model type only when compress_config is None

* update

* update demo

* revert compressor
Parent e689679d
@@ -182,6 +182,9 @@ git clone https://github.com/PaddlePaddle/PaddleSlim.git & cd PaddleSlim
python setup.py install
```
### Verify the installation
After installation, start a Python interpreter with python or python3 and run `import paddleslim`; if no error is reported, the installation succeeded.
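A minimal sketch of that check as a one-off script (the success message is just for readability; the import itself is the whole test):
```
# If this import raises no error, PaddleSlim is installed correctly.
import paddleslim

print("paddleslim imported successfully")
```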
### Quick Start
......
@@ -220,21 +220,21 @@ ac.compress()
- Test the speed of the FP32 model
```
python ./image_classification/infer.py
python ./image_classification/paddle_inference_eval.py --model_path='./MobileNetV1_infer' --use_gpu=True --use_trt=True
### using tensorrt FP32 batch size: 1 time(ms): 0.6140608787536621
```
- Test the speed of the FP16 model
```
python ./image_classification/infer.py --use_fp16=True
python ./image_classification/paddle_inference_eval.py --model_path='./MobileNetV1_infer' --use_gpu=True --use_trt=True --use_fp16=True
### using tensorrt FP16 batch size: 1 time(ms): 0.5795984268188477
```
- Test the speed of the INT8 model (a quick comparison of the three sample timings follows this list)
```
python ./image_classification/infer.py --model_dir=./MobileNetV1_quant/ --use_int8=True
python ./image_classification/paddle_inference_eval.py --model_path='./MobileNetV1_quant/' --use_gpu=True --use_trt=True --use_int8=True
### using tensorrt INT8 batch size: 1 time(ms): 0.5213963985443115
```
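For a quick sense of the gains, the sample timings quoted above can be turned into speedup ratios. A throwaway sketch, using the exact numbers printed in this diff (they will differ on other hardware):
```
# Speedup of FP16/INT8 over FP32, computed from the sample times (ms) above.
timings = {"FP32": 0.6140608787536621, "FP16": 0.5795984268188477, "INT8": 0.5213963985443115}
for precision, ms in timings.items():
    print(f"{precision}: {ms:.3f} ms, speedup vs FP32: {timings['FP32'] / ms:.2f}x")
```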
......
model_dir: './MobileNetV1_infer'
model_filename: 'inference.pdmodel'
params_filename: "inference.pdiparams"
batch_size: 128
data_dir: './ILSVRC2012_data_demo/ILSVRC2012/'
img_size: 224
resize_size: 256
Global:
  model_dir: './MobileNetV1_infer'
  model_filename: 'inference.pdmodel'
  params_filename: "inference.pdiparams"
  batch_size: 128
  data_dir: './ILSVRC2012_data_demo/ILSVRC2012/'
  img_size: 224
  resize_size: 256
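The configuration keys are now nested under a Global section, so anything that previously read them from the top level has to look one level down. A minimal sketch of reading such a file with PyYAML (the file name is a placeholder, not the path used in the repo):
```
import yaml

# Placeholder path; substitute the actual config file from the demo.
with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

global_cfg = cfg["Global"]  # keys moved from the top level into 'Global'
print(global_cfg["model_dir"], global_cfg["batch_size"])
```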
@@ -172,9 +172,9 @@ class Predictor(object):
                crop_size=args.img_size,
                resize_size=args.resize_size)
        else:
            image = np.ones(
                (1, 3, args.img_size, args.img_size)).astype(np.float32)
            label = None
            image = np.ones((args.batch_size, 3, args.img_size,
                             args.img_size)).astype(np.float32)
            label = [[None]] * args.batch_size
        val_loader = [[image, label]]
        results = []
        input_names = self.paddle_predictor.get_input_names()
......
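The change above sizes the dummy input by args.batch_size instead of a hard-coded batch of 1, and pairs it with one placeholder label per sample. A standalone illustration with made-up values (batch_size and img_size here are arbitrary, not the script's defaults):
```
import numpy as np

# Fake input batch shaped by the requested batch size, mirroring the diff above.
batch_size, img_size = 4, 224
image = np.ones((batch_size, 3, img_size, img_size)).astype(np.float32)
label = [[None]] * batch_size  # one placeholder label per sample
assert image.shape[0] == len(label) == batch_size
```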