PaddlePredictor produces incorrect prediction results
Created by: txyugood
I fine-tuned a ResNeXt50_64x4d model with Paddle 1.7 and saved it with save_inference_model. When I run inference through PaddlePredictor, the results differ from those produced by running the program directly. The program was obtained via clone(for_test=True), and its results are correct. The PaddlePredictor log is as follows:
I0311 14:24:12.107333 2278 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0311 14:24:12.120690 2278 analysis_predictor.cc:833] MODEL VERSION: 1.7.0
I0311 14:24:12.120726 2278 analysis_predictor.cc:835] PREDICTOR VERSION: 1.7.0
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [attention_lstm_fuse_pass]
--- Running IR pass [seqconv_eltadd_relu_fuse_pass]
--- Running IR pass [seqpool_cvm_concat_fuse_pass]
--- Running IR pass [fc_lstm_fuse_pass]
--- Running IR pass [mul_lstm_fuse_pass]
--- Running IR pass [fc_gru_fuse_pass]
--- Running IR pass [mul_gru_fuse_pass]
--- Running IR pass [seq_concat_fc_fuse_pass]
--- Running IR pass [fc_fuse_pass]
I0311 14:24:12.273890 2278 graph_pattern_detector.cc:101] --- detected 1 subgraphs
--- Running IR pass [repeated_fc_relu_fuse_pass]
--- Running IR pass [squared_mat_sub_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
I0311 14:24:12.418642 2278 graph_pattern_detector.cc:101] --- detected 53 subgraphs
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [conv_transpose_bn_fuse_pass]
--- Running IR pass [conv_transpose_eltwiseadd_bn_fuse_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0311 14:24:12.498584 2278 analysis_predictor.cc:462] ======= optimize end =======
I0311 14:24:12.498739 2278 naive_executor.cc:105] --- skip [feed], feed -> image
I0311 14:24:12.499717 2278 naive_executor.cc:105] --- skip [fc_0.tmp_1], fetch -> fetch
W0311 14:24:12.516517 2278 naive_executor.cc:45] The NaiveExecutor can not work properly if the cmake flag ON_INFER is not set.
W0311 14:24:12.516551 2278 naive_executor.cc:47] Unlike the training phase, all the scopes and variables will be reused to save the allocation overhead.
W0311 14:24:12.516553 2278 naive_executor.cc:50] Please re-compile the inference library by setting the cmake flag ON_INFER=ON if you are running Paddle Inference
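A frequent cause of diverging outputs between an executor-run test program and PaddlePredictor is input preprocessing that is not byte-identical on both paths. The sketch below (the function name, image size, and ImageNet-style normalization constants are assumptions for illustration, not taken from this report) shows the kind of transform that must be verified to match exactly before comparing model outputs:

```python
import numpy as np

# Hypothetical preprocessing pipeline: HWC uint8 image -> normalized
# NCHW float32 batch. Whatever transform is used, it must be identical
# on the executor path and the PaddlePredictor path, or the two will
# produce different predictions even with the same weights.
def preprocess(img_hwc,
               mean=(0.485, 0.456, 0.406),
               std=(0.229, 0.224, 0.225)):
    x = img_hwc.astype("float32") / 255.0                      # scale to [0, 1]
    x = (x - np.array(mean, dtype="float32")) / np.array(std, dtype="float32")
    x = x.transpose(2, 0, 1)                                   # HWC -> CHW
    return x[np.newaxis, :]                                    # add batch dim

img = np.random.randint(0, 256, (224, 224, 3), dtype="uint8")
batch = preprocess(img)
print(batch.shape)  # (1, 3, 224, 224)
```

Feeding this same `batch` array to both the cloned test program and the predictor, then diffing the raw fetch outputs, narrows the problem down to either the input pipeline or the saved inference graph.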