cpp_infer fails when using trt_fp32 mode
Created by: mingmmq
2020-05-08 18:50:40,604-INFO: The architecture is YOLO
2020-05-08 18:50:40,604-INFO: Extra info: im_size
2020-05-08 18:50:40,608-INFO: min_subgraph_size = 3.
2020-05-08 18:50:40,608-INFO: Run inference by trt_fp32.
I0508 18:50:40.959987 1631 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0508 18:50:40.980535 1631 analysis_predictor.cc:833] MODEL VERSION: 1.7.1
I0508 18:50:40.980559 1631 analysis_predictor.cc:835] PREDICTOR VERSION: 1.7.1
I0508 18:50:40.981019 1631 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
Traceback (most recent call last):
  File "tools/cpp_infer.py", line 321, in <module>
    infer()
  File "tools/cpp_infer.py", line 242, in infer
    predict = fluid.core.create_paddle_predictor(config)
paddle.fluid.core_avx.EnforceNotMet:
--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0 std::string paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2 paddle::framework::ir::PassRegistry::Get(std::string const&) const
3 paddle::inference::analysis::IRPassManager::CreatePasses(paddle::inference::analysis::Argument*, std::vector<std::string, std::allocator<std::string> > const&)
4 paddle::inference::analysis::IRPassManager::IRPassManager(paddle::inference::analysis::Argument*)
5 paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*)
6 paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
7 paddle::AnalysisPredictor::OptimizeInferenceProgram()
8 paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
9 paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
10 std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
11 std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig>(paddle::AnalysisConfig const&)
----------------------
Error Message Summary:
----------------------
Error: Pass tensorrt_subgraph_pass has not been registered at (/paddle/paddle/fluid/framework/ir/pass.h:170)
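This error usually means the installed Paddle build does not include TensorRT support, so `tensorrt_subgraph_pass` was never compiled in or registered. A minimal, hedged way to check whether the TensorRT runtime is even loadable on the machine (assuming the runtime ships as `libnvinfer.so`; the exact library name and search path vary by install):

```python
import ctypes

def has_tensorrt_runtime(libname="libnvinfer.so"):
    """Return True if the TensorRT runtime library can be loaded.

    This only checks that the shared library is present and loadable;
    it does not prove the Paddle wheel itself was compiled WITH_TENSORRT.
    """
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

print(has_tensorrt_runtime())
```

If this prints `False`, or if it prints `True` but the error persists, the likely fix is to install (or build) a Paddle package compiled with TensorRT enabled, matching the local CUDA/cuDNN/TensorRT versions.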
The config file is as follows:
# demo for cpp_infer.py
use_python_inference: false # whether to use python inference
mode: trt_fp32 # trt_fp32, trt_fp16, trt_int8, fluid
arch: YOLO # YOLO, SSD, RCNN, RetinaNet
min_subgraph_size: 3 # need 3 for YOLO arch
# visualize the predicted image
metric: COCO # COCO, VOC
draw_threshold: 0.01
Preprocess:
- type: Resize
  target_size: 608
  max_size: 608
- type: Normalize
  mean:
  - 0.485
  - 0.456
  - 0.406
  std:
  - 0.229
  - 0.224
  - 0.225
  is_scale: False
- type: Permute
  to_bgr: False
- type: PadStride
  stride: 0 # set 32 on FPN and 128 on RetinaNet
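As a temporary workaround while sorting out the TensorRT build, the config's own `mode` comment lists a plain `fluid` option that skips the TensorRT subgraph engine entirely, e.g.:

```yaml
# Fallback: run inference without TensorRT until a
# TensorRT-enabled Paddle build is installed
mode: fluid # instead of trt_fp32
```

This avoids the `tensorrt_subgraph_pass` lookup altogether, at the cost of TensorRT's speedup.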