GPU runs out of memory when running Faster-RCNN inference on Windows
Created by: Knownx
- PaddlePaddle version: 1.7.1
- GPU: Quadro K1200, CUDA 9.0, cuDNN 7.6.2, driver 385.54
- System environment: Windows 7, Python 3.7
- Inference library source: installed via `pip3 install`
- Reproduction steps and problem description: After the Faster-RCNN model is loaded, GPU memory usage is only a few hundred MB. As soon as the infer API is called, usage jumps to about 3.5 GB, or the GPU fills up immediately and an error is raised; even when the first call succeeds, a second infer call fills the GPU and fails. Other models, such as YOLOv3, do not show this problem. The same model on the same PaddlePaddle version runs stably on Linux, where infer uses about 2 GB.
- The error output is as follows:
E:\liujie\validate\51-20200416\frcnn-pro\python37\lib\site-packages\paddle\fluid\executor.py:782: UserWarning: The following exception is not an EOF exception.
"The following exception is not an EOF exception.")
2020-04-23 14:51:57,604-ERROR: Exception on / [POST]
Traceback (most recent call last):
File "E:\liujie\validate\51-20200416\frcnn-pro\python37\lib\site-packages\flask\app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "E:\liujie\validate\51-20200416\frcnn-pro\python37\lib\site-packages\flask\app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "E:\liujie\validate\51-20200416\frcnn-pro\python37\lib\site-packages\flask\app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "E:\liujie\validate\51-20200416\frcnn-pro\python37\lib\site-packages\flask\_compat.py", line 35, in reraise
raise value
File "E:\liujie\validate\51-20200416\frcnn-pro\python37\lib\site-packages\flask\app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "E:\liujie\validate\51-20200416\frcnn-pro\python37\lib\site-packages\flask\app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "BaiduAI\EasyEdge\serving.py", line 102, in BaiduAI.EasyEdge.serving.Serving.run.api
File "BaiduAI\EasyEdge\serving.py", line 107, in BaiduAI.EasyEdge.serving.Serving.run.api
File "BaiduAI\EasyEdge\core.py", line 250, in BaiduAI.EasyEdge.core.Program.infer_image
File "BaiduAI\EasyEdge\engine_paddlefluid.py", line 104, in BaiduAI.EasyEdge.engine_paddlefluid.PaddleFluidExecutor.infer
File "E:\liujie\validate\51-20200416\frcnn-pro\python37\lib\site-packages\paddle\fluid\executor.py", line 783, in run
six.reraise(*sys.exc_info())
File "E:\liujie\validate\51-20200416\frcnn-pro\python37\lib\site-packages\six.py", line 693, in reraise
raise value
File "E:\liujie\validate\51-20200416\frcnn-pro\python37\lib\site-packages\paddle\fluid\executor.py", line 778, in run
use_program_cache=use_program_cache)
File "E:\liujie\validate\51-20200416\frcnn-pro\python37\lib\site-packages\paddle\fluid\executor.py", line 831, in _run_impl
use_program_cache=use_program_cache)
File "E:\liujie\validate\51-20200416\frcnn-pro\python37\lib\site-packages\paddle\fluid\executor.py", line 905, in _run_program
fetch_var_name)
RuntimeError:
--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
Windows not support stack backtrace yet.
----------------------
Error Message Summary:
----------------------
ResourceExhaustedError:
Out of memory error on GPU 0. Cannot allocate 63.000244MB memory on GPU 0, available memory is only 12.554688MB.
Please check whether there is any other process using GPU 0.
1. If yes, please stop them, or start PaddlePaddle on another GPU.
2. If no, please try one of the following suggestions:
1) Decrease the batch size of your model.
2) FLAGS_fraction_of_gpu_memory_to_use is 0.00 now, please set it to a higher value but less than 1.0.
The command is `export FLAGS_fraction_of_gpu_memory_to_use=xxx`.
at (D:\1.7.1\paddle\paddle\fluid\memory\detail\system_allocator.cc:150)
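The error summary reports that `FLAGS_fraction_of_gpu_memory_to_use` is currently 0.00 and suggests raising it. On Windows there is no `export` command, so one way to apply the flag is to set the environment variable from Python before `paddle` is imported. A minimal sketch (the 0.5 value is an illustrative assumption, not a recommendation from the error message):

```python
import os

# Set Paddle's allocator flag BEFORE importing paddle; the flag is read
# at import time. The error message reports it is currently 0.00.
# 0.5 here is an assumed example value (must be > 0 and < 1.0).
os.environ["FLAGS_fraction_of_gpu_memory_to_use"] = "0.5"

# Equivalent cmd.exe command on Windows (instead of `export`):
#   set FLAGS_fraction_of_gpu_memory_to_use=0.5

# import paddle.fluid as fluid  # import only after the flag is set
print(os.environ["FLAGS_fraction_of_gpu_memory_to_use"])
```

Whether this resolves the Windows-specific growth to 3.5 GB is unclear, since the same model reportedly stays near 2 GB on Linux with the same version.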