Model prediction: disable_gpu setting has no effect
Created by: AoZhang
Environment
1) PaddlePaddle version: GPU build, 1.6.1 / 1.6.2
2) CPU: Intel(R) Xeon(R) CPU E5-2620 v2
3) GPU: NVIDIA K40m
4) System: CentOS 6.3, Python 3.6.5
Problem description
In the prediction stage, the config sets disable_gpu() and enable_mkldnn(), yet
create_paddle_predictor
still uses the GPU. If the corresponding GPU card is already occupied and short on memory, the following error is raised (note that, per the call stack, the failure actually occurs inside to_native_config(), which queries GPU memory even though GPU use was disabled):
>>> predictor = create_paddle_predictor(config.to_native_config())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
paddle.fluid.core_avx.EnforceNotMet:
--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0 std::string paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2 paddle::platform::GpuMemoryUsage(unsigned long*, unsigned long*)
3 paddle::AnalysisConfig::fraction_of_gpu_memory_for_pool() const
4 paddle::AnalysisConfig::ToNativeConfig() const
----------------------
Error Message Summary:
----------------------
PaddleCheckError: cudaMemGetInfo failed in paddle::platform::GetMemoryUsage, error code : 2, Please see detail in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html#group__CUDART__TYPES_1g3f51e3575c2178246db0a94a430e0038: out of memory at [/paddle/paddle/fluid/platform/gpu_info.cc:188]
Reproduction code (with the OS environment variable CUDA_VISIBLE_DEVICES=0 set, and that GPU already in use):
from paddle.fluid.core_avx import AnalysisConfig, create_paddle_predictor

config = AnalysisConfig("lstm/model", "lstm/params")
config.disable_gpu()      # expected to force CPU-only prediction
config.enable_mkldnn()
# Fails here: to_native_config() still queries GPU memory
predictor = create_paddle_predictor(config.to_native_config())
Model: lstm.tar.gz
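The call stack shows that AnalysisConfig::ToNativeConfig() invokes fraction_of_gpu_memory_for_pool(), which calls cudaMemGetInfo on the visible device regardless of disable_gpu(). A possible workaround (a sketch only, not verified against Paddle 1.6.1/1.6.2) is to hide all GPUs from the CUDA runtime before any paddle import, so no CUDA call can touch the busy card:

```python
import os

# Hide every GPU from the CUDA runtime *before* paddle is imported,
# so cudaMemGetInfo is never issued against the occupied device.
# (Workaround assumption; whether this avoids the error in
# 1.6.1/1.6.2 has not been confirmed here.)
os.environ["CUDA_VISIBLE_DEVICES"] = ""

# The reproduction script would then run unchanged below this point:
# from paddle.fluid.core_avx import AnalysisConfig, create_paddle_predictor
# config = AnalysisConfig("lstm/model", "lstm/params")
# config.disable_gpu()
# config.enable_mkldnn()
# predictor = create_paddle_predictor(config.to_native_config())
```

Because CUDA reads CUDA_VISIBLE_DEVICES at initialization time, the assignment must happen before the first paddle (and hence CUDA) import in the process.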