Unverified commit 178cda1e authored by ceci3, committed by GitHub

fix imagenet ac demo (#1136)

* fix imagenet ac demo

* update
Parent bf7bd927
@@ -95,7 +95,7 @@ tar -xf ppyoloe_crn_l_300e_coco.tar
Use the run.py script to obtain the model's mAP:
```
-export CUDA_VISIBLE_DEVEICES=0
+export CUDA_VISIBLE_DEVICES=0
python run.py --config_path=./configs/ppyoloe_l_qat_dis.yaml --eval=True
```
@@ -105,7 +105,7 @@ python run.py --config_path=./configs/ppyoloe_l_qat_dis.yaml --eval=True
The distillation-and-quantization auto-compression example is launched through the run.py script, which uses the ```paddleslim.auto_compression.AutoCompression``` interface to automatically compress the model. Configure the model path, distillation, quantization, and training parameters in the config file; once that is done, the model can be quantized and distilled. The specific run command is:
```
-export CUDA_VISIBLE_DEVEICES=0
+export CUDA_VISIBLE_DEVICES=0
python run.py --config_path=./configs/ppyoloe_l_qat_dis.yaml --save_dir='./output/'
```
......
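For orientation only: a minimal sketch of how a run.py of this kind typically drives the ```paddleslim.auto_compression.AutoCompression``` interface. The keyword-argument names, file names, and the dataloader stub below are assumptions, not a copy of this demo's run.py; check the demo's code and YAML for the exact signature.
```python
# Sketch only -- argument names and file names are assumptions, not the demo's exact code.
from paddleslim.auto_compression import AutoCompression

def build_train_loader():
    # Placeholder: the real demo builds a COCO data loader from the YAML config.
    raise NotImplementedError

ac = AutoCompression(
    model_dir='ppyoloe_crn_l_300e_coco',        # inference model directory (assumed name)
    model_filename='model.pdmodel',             # assumed file names
    params_filename='model.pdiparams',
    save_dir='./output/',                       # matches --save_dir in the command above
    config='./configs/ppyoloe_l_qat_dis.yaml',  # distillation + quantization settings
    train_dataloader=build_train_loader())
ac.compress()                                   # runs quantization-aware distillation training
```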
@@ -69,7 +69,7 @@ tar -xf MobileNetV1_infer.tar
```shell
# Launch on a single GPU
-export CUDA_VISIBLE_DEVEICES=0
+export CUDA_VISIBLE_DEVICES=0
python run.py \
--model_dir='MobileNetV1_infer' \
--model_filename='inference.pdmodel' \
......
@@ -92,7 +92,7 @@ if __name__ == '__main__':
args = parser.parse_args()
print_arguments(args)
paddle.enable_static()
-compress_config, train_config = load_config(args.config_path)
+compress_config, train_config, _ = load_config(args.config_path)
data_dir = args.data_dir
train_reader = paddle.batch(
......
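Why the one-character change above matters: the updated ```load_config``` returns three items, and unpacking three values into two names raises a ValueError at startup. A tiny generic illustration (the stand-in function below is hypothetical, not PaddleSlim's):
```python
def load_config_like(path):
    # Hypothetical stand-in for the updated load_config, which now returns three items.
    return {'Quantization': {}}, {'epochs': 1}, None

try:
    # Old demo code: two targets for three returned values.
    compress_config, train_config = load_config_like('cfg.yaml')
except ValueError as e:
    print(e)  # "too many values to unpack (expected 2)"

# Fixed demo code: accept the third item and discard it.
compress_config, train_config, _ = load_config_like('cfg.yaml')
```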
# Launch on a single GPU
-export CUDA_VISIBLE_DEVEICES=0
+export CUDA_VISIBLE_DEVICES=0
python run.py \
--model_dir='MobileNetV1_infer' \
--model_filename='inference.pdmodel' \
......
@@ -88,7 +88,7 @@ tar -zxvf afqmc.tar
The dataset is CLUE; different task names correspond to different tasks on CLUE. The selectable task names are: afqmc, tnews, iflytek, ocnli, cmnli, cluewsc2020, csl. The specific run command is:
```shell
-export CUDA_VISIBLE_DEVEICES=0
+export CUDA_VISIBLE_DEVICES=0
python run.py \
--model_type='ppminilm' \
--model_dir='./afqmc/' \
......
@@ -36,7 +36,7 @@ _logger = get_logger(__name__, level=logging.INFO)
try:
if platform.system().lower() == 'linux':
-from ..quant.quant_post_hpo import quant_post_hpo
+from ..quant import quant_post_hpo
except Exception as e:
_logger.warning(e)
......
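The last hunk keeps the Linux-only try/except guard but imports the ```quant_post_hpo``` submodule from ```..quant``` instead of pulling the function out of it. A generic sketch of this guarded optional-import pattern (the package path and logger setup below are placeholders, not PaddleSlim's layout):
```python
import logging
import platform

_logger = logging.getLogger(__name__)

quant_post_hpo = None  # stays None when the optional import is unavailable
try:
    if platform.system().lower() == 'linux':
        # Import the submodule rather than a single function; any import error
        # is downgraded to a warning so the rest of the package still loads.
        from mypkg.quant import quant_post_hpo  # placeholder package path
except Exception as e:
    _logger.warning(e)
```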