Unverified · Commit 8ea9838e authored by Huihuang Zheng, committed by GitHub

Fix memory allocator strategy flag. (#2308)

As the title says.
Parent 7d60ac69
@@ -11,13 +11,14 @@ FLAGS_allocator_strategy
 Values accepted
 ---------------
-String, one of ['naive_best_fit', 'auto_growth']. The default value is 'naive_best_fit'.
+String, one of ['naive_best_fit', 'auto_growth']. The default value is 'naive_best_fit' if Paddle was compiled with the -DON_INFER=ON CMake flag,
+and 'auto_growth' in all other cases. The PaddlePaddle pip package also defaults to 'auto_growth'.

 Example
 --------
-FLAGS_allocator_strategy=naive_best_fit - use the pre-allocated best fit allocator.
+FLAGS_allocator_strategy=naive_best_fit - use the pre-allocated best fit allocator. PaddlePaddle reserves most of the available memory/GPU memory up front and assigns it when data actually needs it; this pre-occupies more space but causes less memory/GPU memory fragmentation (e.g., the maximum batch size a model can support may be larger).
-FLAGS_allocator_strategy=auto_growth - use the auto growth allocator.
+FLAGS_allocator_strategy=auto_growth - use the auto growth allocator. PaddlePaddle occupies memory/GPU memory only as real data requires it, but memory/GPU memory fragmentation may occur (e.g., the maximum batch size a model can support may be smaller).

 FLAGS_eager_delete_scope
...
@@ -11,13 +11,13 @@ Use to choose allocator strategy of PaddlePaddle.
 Values accepted
 ---------------
-String, enum in ['naive_best_fit', 'auto_growth']. The default value is 'naive_best_fit'.
+String, enum in ['naive_best_fit', 'auto_growth']. The default value is 'naive_best_fit' if users compile PaddlePaddle with the -DON_INFER=ON CMake flag, and 'auto_growth' otherwise. The default PaddlePaddle pip package uses 'auto_growth'.

 Example
 --------
-FLAGS_allocator_strategy=naive_best_fit would use the pre-allocated best fit allocator.
+FLAGS_allocator_strategy=naive_best_fit would use the pre-allocated best fit allocator. The 'naive_best_fit' strategy occupies almost all GPU memory by default but leads to less memory fragmentation (i.e., the maximum batch size of models may be larger).
-FLAGS_allocator_strategy=auto_growth would use the auto growth allocator.
+FLAGS_allocator_strategy=auto_growth would use the auto growth allocator. The 'auto_growth' strategy allocates GPU memory on demand but may lead to more memory fragmentation (i.e., the maximum batch size of models may be smaller).
...
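
For context (not part of this commit), a minimal usage sketch of the documented flag, assuming a standard paddle pip install: PaddlePaddle reads FLAGS_* settings from the environment during initialization, so the flag must be set before paddle is imported.

```python
# Minimal sketch: pick the allocator strategy via the environment
# before PaddlePaddle initializes. 'auto_growth' is the pip-package
# default; 'naive_best_fit' pre-reserves most GPU memory up front.
import os

os.environ["FLAGS_allocator_strategy"] = "naive_best_fit"  # or "auto_growth"

import paddle  # the flag is read during paddle initialization

x = paddle.ones([2, 2])  # allocations now follow the chosen strategy
print(x)
```

Equivalently, the flag can be set on the command line, e.g. FLAGS_allocator_strategy=auto_growth python train.py.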