CUDA error when running the Python inference API inside multiprocess_reader
Created by: wang-kangkang
My goal is to run prediction with an inference model inside a reader, and to wrap that reader in `multiprocess_reader`. I tried two implementations, and both fail with:

```
Error: cudaSetDevice failed in paddle::platform::SetDeviced, error code : 3, Please see detail in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html#group__CUDART__TYPES_1g3f51e3575c2178246db0a94a430e0038: initialization error at (/paddle/paddle/fluid/platform/gpu_info.cc:240)
```

The first approach defines a `CUDAPlace` and an `Executor` inside the reader; the second calls the Python inference API inside the reader. Notably, the second approach works fine when the subprocesses are created via a multiprocessing `Pool`, but fails when used with `multiprocess_reader` as in this scenario.

Minimal reproduction (paddle 1.7 post97):
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
import paddle
import paddle.fluid as fluid
from paddle.fluid.core import PaddleTensor
from paddle.fluid.core import AnalysisConfig
from paddle.fluid.core import create_paddle_predictor

def reader():
    # The predictor is built inside the reader, i.e. in the forked
    # child process spawned by multiprocess_reader.
    landmark_config = AnalysisConfig('lmk_model/model', 'lmk_model/params')
    landmark_config.switch_use_feed_fetch_ops(False)
    # landmark_config.disable_gpu()
    landmark_config.enable_use_gpu(100, 0)
    # Creating the GPU predictor in the child raises the
    # cudaSetDevice initialization error shown above.
    landmark_predictor = create_paddle_predictor(landmark_config)
    for i in range(10):
        yield i

if __name__ == "__main__":
    # CUDA is already initialized here, in the parent process.
    place = fluid.CUDAPlace(0)
    exe = fluid.Executor(place)
    train_reader = paddle.reader.multiprocess_reader([reader, reader])
    for data in train_reader():
        print(data)
```
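Error code 3 (`cudaErrorInitializationError`) in a subprocess is typically the classic fork-after-CUDA-initialization problem: `multiprocess_reader` forks its workers on Linux, and a forked child inherits the parent's CUDA state, which cannot be re-initialized. A minimal, Paddle-free sketch of the usual workaround (an assumption about the root cause, not a confirmed fix for this issue) is to do GPU work only in processes started with the `spawn` start method, so each child starts with a clean state:

```python
# Sketch: use the "spawn" start method so worker processes do not
# inherit any CUDA context from the parent (fork would copy it and
# break later CUDA initialization in the child).
import multiprocessing as mp

def worker(q):
    # In a spawned child it is safe to import and initialize GPU
    # libraries here, because nothing was inherited via fork.
    q.put("initialized in pid %d" % mp.current_process().pid)

if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # instead of the default "fork" on Linux
    q = ctx.Queue()
    p = ctx.Process(target=worker, args=(q,))
    p.start()
    print(q.get())
    p.join()
```

The same idea applies to the repro above: whatever forks the reader must do so before any CUDA call happens in the parent, or the workers must be spawned rather than forked.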