hub serving reports an error when "use_gpu": true is set
Created by: waywaywayw
1) PaddleHub and PaddlePaddle versions: paddlehub 1.5.4, paddlepaddle-gpu 1.7.0.post107
2) System environment: Docker image FROM hub.baidubce.com/paddlepaddle/paddle:1.7.0-gpu-cuda10.0-cudnn7, using the python3.6 interpreter that ships inside the image.
3) Reproduction info: CUDA_VISIBLE_DEVICES=0 is set and the demo runs correctly. Calling module.lexical_analysis(data=inputs, use_gpu=True, batch_size=64) directly is noticeably faster than running on CPU, so direct GPU inference works.
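For reference, a minimal sketch of the direct call that works on GPU for me. The module name "lac" and the sample inputs are placeholders for whatever module is actually loaded; only the use_gpu/batch_size arguments are exactly what I run:

```python
# Minimal sketch of the direct-call path that works on GPU.
# "lac" and the sample sentences are illustrative placeholders.
import os
import paddlehub as hub

os.environ["CUDA_VISIBLE_DEVICES"] = "0"

module = hub.Module(name="lac")
inputs = {"text": ["今天是个好日子", "天气预报说今天要下雨"]}

# Noticeably faster than the CPU run, so direct GPU inference is fine.
results = module.lexical_analysis(data=inputs, use_gpu=True, batch_size=64)
print(results)
```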
I then tried to serve the model on GPU via hub serving, starting the service with: hub serving start --config config.json
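For reference, roughly what my config.json contains, written here as a small Python snippet so the fields can be annotated. The field layout follows the PaddleHub 1.x serving config format as I understand it; the module name, version, port, etc. are placeholders, and "use_gpu" is the only setting I toggle:

```python
# Sketch of the config.json passed to `hub serving start --config config.json`.
# Layout follows the PaddleHub 1.x serving docs as I understand them;
# module/version/port are placeholders -- "use_gpu" is the field in question.
import json

config = {
    "modules_info": [
        {"module": "lac", "version": "1.0.0", "batch_size": 64}
    ],
    "use_gpu": True,        # False works; True triggers the crash below
    "port": 8866,
    "use_multiprocess": False,
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=4)
```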
With "use_gpu": false in config.json the service runs normally; setting "use_gpu": true fails with the following error:
--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0 std::string paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2 paddle::platform::SetDeviceId(int)
3 paddle::memory::allocation::CUDAAllocator::AllocateImpl(unsigned long)
4 paddle::memory::allocation::AlignedAllocator::AllocateImpl(unsigned long)
5 paddle::memory::allocation::AutoGrowthBestFitAllocator::AllocateImpl(unsigned long)
6 paddle::memory::allocation::RetryAllocator::AllocateImpl(unsigned long)
7 paddle::memory::allocation::AllocatorFacade::Alloc(paddle::platform::Place const&, unsigned long)
8 paddle::memory::allocation::AllocatorFacade::AllocShared(paddle::platform::Place const&, unsigned long)
9 paddle::memory::AllocShared(paddle::platform::Place const&, unsigned long)
10 paddle::framework::Tensor::mutable_data(paddle::platform::Place const&, paddle::framework::proto::VarType_Type, unsigned long)
----------------------
Error Message Summary:
----------------------
Error: cudaSetDevice failed in paddle::platform::SetDeviced, error code : 3, Please see detail in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html#group__CUDART__TYPES_1g3f51e3575c2178246db0a94a430e0038: initialization error at (/paddle/paddle/fluid/platform/gpu_info.cc:240)