Unverified commit 964cd660 authored by S sneaxiy, committed by GitHub

make FLAGS_gemm_use_half_precision_compute_type=false by default (#50050)

* make FLAGS_gemm_use_half_precision_compute_type=false by default

* fix comments
Parent 6d13992e
```diff
@@ -146,17 +146,17 @@ PADDLE_DEFINE_EXPORTED_bool(
  * CUDA related related FLAG
  * Name: FLAGS_gemm_use_half_precision_compute_type
  * Since Version: 2.4
- * Value Range: bool, default=true
+ * Value Range: bool, default=false
  * Example:
  * Note: whether to use fp16 compute type when the input and output is fp16,
  * faster but it may loss precision.
  */
 PADDLE_DEFINE_EXPORTED_bool(
     gemm_use_half_precision_compute_type,
-    true,
+    false,
     "Whether to use fp16 compute type when the input and output is fp16, "
     "faster but it may loss precision in most case. If true, the compute "
-    "type will be set to fp32. Default is true.");
+    "type will be set to fp16. Default is false.");
 /**
  * CUDA related FLAG
 ...
```
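The motivation for flipping this default can be illustrated outside of Paddle: when every product and partial sum in a reduction is rounded back to half precision (what the flag enables when set to true), rounding error accumulates across the reduction, whereas accumulating in a wider type keeps the result close to the exact value. Below is a minimal pure-Python sketch of that effect; it uses the standard library's `struct` half/single-precision round-trips to emulate fp16 and fp32 arithmetic, and is not Paddle's actual GEMM code path.

```python
import random
import struct

def fp16(x):
    """Round a Python float to the nearest IEEE 754 half-precision value."""
    return struct.unpack('e', struct.pack('e', x))[0]

def fp32(x):
    """Round a Python float to the nearest IEEE 754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

random.seed(0)
n = 4096
x = [fp16(random.gauss(0, 1)) for _ in range(n)]
y = [fp16(random.gauss(0, 1)) for _ in range(n)]

# Double-precision reference dot product.
ref = sum(a * b for a, b in zip(x, y))

# fp32 compute type (the new default): products and partial sums are
# rounded to single precision at every step.
acc_wide = 0.0
for a, b in zip(x, y):
    acc_wide = fp32(acc_wide + fp32(a * b))

# fp16 compute type (flag set to true): every product and partial sum is
# rounded back to half precision, so error builds up across the reduction.
acc_half = 0.0
for a, b in zip(x, y):
    acc_half = fp16(acc_half + fp16(a * b))

err_wide = abs(acc_wide - ref)
err_half = abs(acc_half - ref)
print(f"fp32-accumulation error: {err_wide:.6f}")
print(f"fp16-accumulation error: {err_half:.6f}")
```

On a 4096-element dot product of fp16 inputs the half-precision accumulator typically drifts orders of magnitude further from the reference than the single-precision one, which is the precision loss the flag's help string warns about and why fp32 compute type is now the safer default.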