Error when running an ERNIE model with TensorRT
Created by: tianjie491
Environment:

1) PaddlePaddle version: paddlepaddle-gpu==1.5.2.post107
2) CPU: inference runs on GPU
3) GPU: RTX 2080 Ti, CUDA 10.0.130, cuDNN 7.6
4) System: CentOS Linux release 7.6.1810, Python 3.6.5
5) TensorRT: TensorRT-6.0.1.5.CentOS-7.6.x86_64-gnu.cuda-10.0.cudnn7.6.tar.gz

Inference details:

Model: ERNIE 1.0. Inference is done through the Python API; the code is as follows:

```python
type_scope = fluid.core.Scope()
with fluid.scope_guard(type_scope):
    prog_file = "{}/model".format(args.init_checkpoints_type + "/inference_model")
    params_file = "{}/params".format(args.init_checkpoints_type + "/inference_model")
    config = AnalysisConfig(prog_file, params_file)
    if use_cuda:
        config.enable_use_gpu(2000, 0)
        config.enable_tensorrt_engine()
    else:
        config.disable_gpu()
        config.enable_mkldnn()
    predictor_type = create_paddle_predictor(config)
```

The error message is:

```
W0917 22:04:51.145671 13830 device_context.cc:259] Please NOTE: device: 0, CUDA Capability: 75, Driver API Version: 10.0, Runtime API Version: 10.0
W0917 22:04:51.150243 13830 device_context.cc:267] device: 0, cuDNN Version: 7.6.
I0917 22:04:51.165702 13830 analysis_predictor.cc:382] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
Traceback (most recent call last):
  File "app.py", line 214, in <module>
    init(args)
  File "app.py", line 182, in init
    predictor_type = create_paddle_predictor(config)
paddle.fluid.core_avx.EnforceNotMet: Pass tensorrt_subgraph_pass has not been registered at [/paddle/paddle/fluid/framework/ir/pass.h:160]
PaddlePaddle Call Stacks:
0   0x7f06e7042d50p void paddle::platform::EnforceNotMet::Init<char const*>(char const*, char const*, int) + 352
1   0x7f06e70430c9p paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int) + 137
2   0x7f06e704fbacp paddle::framework::ir::PassRegistry::Get(std::string const&) const + 284
3   0x7f06e8efcacap paddle::inference::analysis::IRPassManager::CreatePasses(paddle::inference::analysis::Argument*, std::vector<std::string, std::allocator<std::string> > const&) + 170
4   0x7f06e8efed54p paddle::inference::analysis::IRPassManager::IRPassManager(paddle::inference::analysis::Argument*) + 916
5   0x7f06e8ef9f36p paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*) + 726
6   0x7f06e8ef5302p paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*) + 722
7   0x7f06e72522aap paddle::AnalysisPredictor::OptimizeInferenceProgram() + 90
8   0x7f06e7252b82p paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&) + 194
9   0x7f06e7252cf7p paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&) + 343
10  0x7f06e7253141p std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&) + 977
11  0x7f06e7253d41p std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig>(paddle::AnalysisConfig const&) + 17
12  0x7f06e717e04dp
13  0x7f06e717e0bep
14  0x7f06e7075d56p
15  0x4a884cp _PyCFunction_FastCallKeywords + 924
16  0x51c8fbp
17  0x511f7ap _PyEval_EvalFrameDefault + 762
18  0x51b8e2p
19  0x51ca17p
20  0x511f7ap _PyEval_EvalFrameDefault + 762
21  0x51da02p PyEval_EvalCode + 274
22  0x56e3fep
23  0x41f60fp PyRun_FileExFlags + 164
24  0x41f9b5p PyRun_SimpleFileExFlags + 880
25  0x573c25p Py_Main + 1941
26  0x453218p main + 232
27  0x7f0742eb93d5p __libc_start_main + 245
28  0x56bbe4p
```

Everything works when `config.enable_tensorrt_engine()` is not called; the error appears only when it is enabled. Help from the PaddlePaddle team would be greatly appreciated, thank you!
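The failure is raised inside `create_paddle_predictor` itself, so one defensive workaround while debugging is to attempt the TensorRT-enabled config first and fall back to a plain GPU config when the installed wheel lacks the `tensorrt_subgraph_pass`. Below is a minimal sketch of that fallback pattern only, not an official Paddle API: the two builder callables are hypothetical helpers that would wrap the `AnalysisConfig` setup from the repro above.

```python
def create_predictor_with_fallback(build_trt_predictor, build_plain_predictor):
    """Try a TensorRT-enabled predictor first; fall back on failure.

    Both arguments are zero-argument callables (hypothetical helpers
    wrapping the AnalysisConfig setup from this report). When the wheel
    was not built with TensorRT, Paddle raises
    paddle.fluid.core_avx.EnforceNotMet ("Pass tensorrt_subgraph_pass
    has not been registered") from the TRT builder, caught here as a
    generic Exception.
    """
    try:
        return build_trt_predictor()
    except Exception:
        # TensorRT path unavailable; build without enable_tensorrt_engine().
        return build_plain_predictor()
```

In this sketch, `build_trt_predictor` would call `config.enable_tensorrt_engine()` before `create_paddle_predictor(config)`, while `build_plain_predictor` would configure the same `AnalysisConfig` without that call. Note this only masks the symptom; the underlying cause is still a predictor library built without TensorRT support.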