How to limit GPU memory of Fluid
Created by: AmitRozner
System information
- PaddlePaddle version: 1.3
- CPU: i7-6700
- GPU: NVIDIA 1080TI, CUDA 9.2
- OS Platform: Ubuntu 16.04
- Python version: 3.5
When I run the Pyramid Box model (via widerface_eval.py) on a 1080TI GPU, it raises an out-of-memory error even on a single image. I lowered the numbers in the `get_shrink` function to make the input fit in GPU memory (i.e., resizing to a smaller image). Is there another way, or an environment variable, that can control the total GPU memory consumed by Fluid? Someone in another thread reported running this network on an 8GB GPU without problems, so I assume this is possible. I'm looking for something similar to TensorFlow's `per_process_gpu_memory_fraction`, which lets a TF process use only part of the GPU. Suppose I want to use 80% of the GPU memory — how can I do that in Fluid?
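For reference, here is what I tried so far based on Fluid's GFlags-style environment variables (assuming `FLAGS_fraction_of_gpu_memory_to_use` is the relevant flag; it appears to need setting before `paddle.fluid` is imported — please correct me if this is wrong):

```python
import os

# Assumed flag: limit Fluid's GPU memory pre-allocation to 80% of the device.
# This must be set in the environment before paddle.fluid is first imported,
# since the flag is read at library initialization time.
os.environ["FLAGS_fraction_of_gpu_memory_to_use"] = "0.8"

# import paddle.fluid as fluid  # import only after the flag is set
```

Equivalently, it can be set on the command line: `FLAGS_fraction_of_gpu_memory_to_use=0.8 python widerface_eval.py`. It's unclear to me whether this caps total usage or only the initial allocation pool, which is part of my question.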