Unverified commit f01fc4f6, authored by ceci3, committed by GitHub

fix demo for now (#1057)

Parent 374a5f7c
@@ -54,7 +54,7 @@ python tools/export_model.py \
```
cd PaddleSlim/demo/auto-compression/
```
Use the [eval.py](../quant/quant_post/eval.py) script to obtain the model's classification accuracy; the compressed model can be evaluated with the same script:
```
python ../quant/quant_post/eval.py --model_path infermodel_mobilenetv2 --model_name inference.pdmodel --params_name inference.pdiparams
```
@@ -95,7 +95,7 @@ python demo_imagenet.py \
### 3.3 Fused Pruning + Distillation Compression
Note: this example applies ASP sparsity to a BERT model.
First, follow the [script](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/language_model/bert#%E9%A2%84%E6%B5%8B) to obtain a deployable model, or download the example model trained on the SST-2 dataset: [SST-2-BERT](https://paddle-qa.bj.bcebos.com/PaddleSlim_datasets/static_bert_models.tar.gz).
The pruning + distillation compression example script is [demo_glue.py](./demo_glue.py), which compresses the model through the ``paddleslim.auto_compression.AutoCompression`` interface. Run:
```
python demo_glue.py \
......
@@ -97,7 +97,7 @@ if __name__ == '__main__':
        strategy_config=compress_config,
        train_config=train_config,
        train_dataloader=train_dataloader,
        eval_callback=eval_function if 'HyperParameterOptimization' not in compress_config else None,
        devices=args.devices)
    ac.compress()
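The change above drops the user-supplied eval callback whenever the `HyperParameterOptimization` strategy is present in the config. A minimal sketch of that selection logic in isolation (`select_eval_callback` is an illustrative helper, not part of the PaddleSlim API):

```python
def select_eval_callback(eval_function, compress_config):
    # When HyperParameterOptimization drives the search, the search loop
    # measures loss internally, so the user eval function is dropped;
    # otherwise it is passed through unchanged.
    if 'HyperParameterOptimization' in compress_config:
        return None
    return eval_function

# Illustrative strategy configs; the keys mirror those used in the demo.
dummy_eval = lambda: 0.75
assert select_eval_callback(dummy_eval, {'QuantPost': {}}) is dummy_eval
assert select_eval_callback(dummy_eval, {'HyperParameterOptimization': {}}) is None
```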
@@ -273,6 +273,7 @@ def quantize(cfg):
    global g_min_emd_loss
    ### if eval_function is not None, use eval function provided by user.
    ### TODO(ceci3): fix eval_function
    if g_quant_config.eval_function is not None:
        emd_loss = g_quant_config.eval_function()
    else:
......
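The `quantize` branch above prefers a user-provided eval function and otherwise falls back to an internally computed loss (the fallback body is elided in the diff). A minimal sketch of that dispatch, with a hypothetical stand-in for the internal computation:

```python
def compute_emd_loss(eval_function, internal_loss_fn):
    # Prefer the user's eval function; otherwise fall back to the
    # internal loss computation (internal_loss_fn is a hypothetical
    # stand-in for the elided else-branch).
    if eval_function is not None:
        return eval_function()
    return internal_loss_fn()

assert compute_emd_loss(lambda: 0.1, lambda: 0.9) == 0.1
assert compute_emd_loss(None, lambda: 0.9) == 0.9
```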