Version 1.8.0: conv_transpose_eltwiseadd_bn_fuse_pass causes wrong CPU inference results for the PaddleOCR detection model
Created by: cryoco
System information
- PaddlePaddle version: v1.8.0 (commit 0231f58e592ad9f673ac1832d8c495c8ed65d24f)
- CPU: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
- GPU: V100
- OS Platform (e.g. Mac OS 10.14): Ubuntu 14.04
- Docker image: hub.baidubce.com/paddlepaddle/paddle:latest-dev
- Python version: Python 3.7
- Cmake orders / C++version.txt: not applicable (the Python API was used)
- API information: inference configuration
  config.switch_use_feed_fetch_ops(False)
  config.disable_gpu()
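For context, a minimal sketch of how such a configuration is typically assembled with the Paddle 1.8 Python inference API; the model/params paths are taken from the reproduction command below, and the rest is an assumption rather than the exact contents of the attached script:

```python
# Minimal sketch (not the attached test_ocr_det.py): builds the CPU
# AnalysisConfig described above with the Paddle 1.8 Python inference API.
from paddle.fluid.core import AnalysisConfig, create_paddle_predictor

config = AnalysisConfig("det/model", "det/params")  # combined model format
config.switch_use_feed_fetch_ops(False)             # needed for zero-copy tensors
config.disable_gpu()                                # run inference on CPU
predictor = create_paddle_predictor(config)
```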
To Reproduce
- Run the test with conv_transpose_eltwiseadd_bn_fuse_pass:
  python3.7 test_ocr_det.py --model_file=det/model --params_file=det/params --float32
- Run the test without conv_transpose_eltwiseadd_bn_fuse_pass by adding config.delete_pass("conv_transpose_eltwiseadd_bn_fuse_pass") to the AnalysisConfig (see the sketch after this list):
  python3.7 test_ocr_det.py --model_file=det/model --params_file=det/params --float32
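A minimal sketch of where the delete_pass call goes, assuming the same AnalysisConfig setup as above; the dummy 640x640 input and the zero-copy run are placeholders for the real preprocessing in test_ocr_det.py:

```python
# Minimal sketch: same setup as above, but the fuse pass is removed before
# the predictor is created. Input shape/preprocessing are placeholders.
import numpy as np
from paddle.fluid.core import AnalysisConfig, create_paddle_predictor

config = AnalysisConfig("det/model", "det/params")
config.switch_use_feed_fetch_ops(False)
config.disable_gpu()
config.delete_pass("conv_transpose_eltwiseadd_bn_fuse_pass")  # disable the buggy pass
predictor = create_paddle_predictor(config)

data = np.random.rand(1, 3, 640, 640).astype("float32")  # placeholder input
input_tensor = predictor.get_input_tensor(predictor.get_input_names()[0])
input_tensor.copy_from_cpu(data)
predictor.zero_copy_run()
output = predictor.get_output_tensor(predictor.get_output_names()[0]).copy_to_cpu()
```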
Describe your current behavior
It has been verified that when conv_transpose_eltwiseadd_bn_fuse_pass is used, the CPU inference output is wrong. The correct output can easily be checked by running inference on the GPU instead:
#config.disable_gpu()
config.enable_use_gpu(100, 0)
This issue can be reproduced both with and without MKLDNN.
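For verification, a sketch of the kind of CPU-vs-GPU comparison described above, assuming the outputs of the two runs were dumped to .npy files (the file names and the tolerance are hypothetical):

```python
# Hypothetical comparison of saved outputs; file names are placeholders.
import numpy as np

cpu_out = np.load("det_out_cpu.npy")
gpu_out = np.load("det_out_gpu.npy")
print("max abs diff:", np.abs(cpu_out - gpu_out).max())
print("allclose:", np.allclose(cpu_out, gpu_out, atol=1e-4))
```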
Code to reproduce the issue: test_ocr_det.txt (needs to be renamed to test_ocr_det.py).
Model to reproduce the issue
wget https://paddleocr.bj.bcebos.com/inference.tar
tar -xf inference.tar
The det model is the one that reproduces this issue.
Other info / logs