机器未来 / Paddle (forked from PaddlePaddle / Paddle)
Unverified commit ab3d2bf0
Authored Apr 26, 2021 by Jiaqi Liu; committed via GitHub on Apr 26, 2021

fix acc typo and shape error, and remove 'users' subjects in amp doc, test=document_fix (#32476)

Parent: d2b31a14
Showing 2 changed files with 16 additions and 17 deletions (+16 / -17):

  python/paddle/hapi/model.py      +13 -14
  python/paddle/metric/metrics.py   +3  -3
python/paddle/hapi/model.py

@@ -887,10 +887,10 @@ class Model(object):
             AdamW and Momentum optimizer. Before using pure float16 training,
             `multi_precision` could be set to True when creating optimizer, which can
             avoid poor accuracy or slow convergence in a way, and inputs of dtype float
-            should be cast to float16 by users. Users should also use
-            `paddle.static.amp.fp16_guard` API to limit the range of pure float16
-            training, otherwise, 'use_fp16_guard' should be set to False by users.
-            However, limiting the range of is not supported during training using AMP.
+            should be cast to float16 by users. `paddle.static.amp.fp16_guard` API
+            should be also used to limit the range of pure float16 training, otherwise,
+            'use_fp16_guard' should be set to False by users. However, limiting the
+            range of is not supported during training using AMP.

         Args:
             network (paddle.nn.Layer): The network is an instance of
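The pure float16 recipe this docstring describes can be sketched as follows. This is a minimal illustration, not code from the commit; the network shape, optimizer choice, and hyperparameters are assumptions, and a GPU build is required, as the docstring notes:

    import paddle

    net = paddle.nn.Sequential(
        paddle.nn.Flatten(1),
        paddle.nn.Linear(784, 200),
        paddle.nn.ReLU(),
        paddle.nn.Linear(200, 10))
    model = paddle.Model(net)

    # multi_precision=True keeps float32 master weights, which the docstring
    # recommends to avoid poor accuracy or slow convergence in pure float16.
    optim = paddle.optimizer.Momentum(
        learning_rate=1e-3,
        parameters=model.parameters(),
        multi_precision=True)

    model.prepare(
        optim,
        paddle.nn.CrossEntropyLoss(),
        paddle.metric.Accuracy(),
        amp_configs='O2')  # 'O2' selects pure float16 training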
@@ -974,7 +974,7 @@ class Model(object):
                 data = paddle.vision.datasets.MNIST(mode='train', transform=transform)
                 model.fit(data, epochs=2, batch_size=32, verbose=1)

-        # mixed precision training is only support on GPU now.
+        # mixed precision training is only supported on GPU now.
         if paddle.is_compiled_with_cuda():
             run_example_code()
@@ -1462,19 +1462,18 @@ class Model(object):
                 float16 training is used, the key 'level' of 'amp_configs'
                 should be set to 'O1' or 'O2' respectively. Otherwise, the
                 value of 'level' defaults to 'O0', which means float32
-                training. In addition to 'level', users could pass in more
-                parameters consistent with mixed precision API. The supported
+                training. In addition to 'level', parameters consistent with
+                mixed precision API could also be passed in. The supported
                 keys are: 'init_loss_scaling', 'incr_ratio', 'decr_ratio',
                 'incr_every_n_steps', 'decr_every_n_nan_or_inf',
                 'use_dynamic_loss_scaling', 'custom_white_list',
                 'custom_black_list', and 'custom_black_varnames'or
-                'use_fp16_guard' is only supported in static mode. Users could
-                refer to mixed precision API documentations
-                :ref:`api_paddle_amp_auto_cast` and
-                :ref:`api_paddle_amp_GradScaler` for details. For convenience,
-                'amp_configs' could be set to 'O1' or 'O2' if no more
-                parameters are needed. 'amp_configs' could be None in float32
-                training. Default: None.
+                'use_fp16_guard' is only supported in static mode. Mixed
+                precision API documentations :ref:`api_paddle_amp_auto_cast`
+                and :ref:`api_paddle_amp_GradScaler` could be referenced
+                for details. For convenience, 'amp_configs' could be set to
+                'O1' or 'O2' if no more parameters are needed. 'amp_configs'
+                could be None in float32 training. Default: None.

         Returns:
             None
         """
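For reference, the dict form of 'amp_configs' documented above might be assembled as in the sketch below, continuing the earlier example. The values are illustrative assumptions; the keys are the ones the docstring lists:

    # Sketch of the dict form of 'amp_configs' (values are invented):
    amp_configs = {
        'level': 'O1',                    # AMP training; 'O2' = pure float16
        'init_loss_scaling': 1024.0,      # starting loss-scaling factor
        'incr_every_n_steps': 1000,       # grow scale after N overflow-free steps
        'use_dynamic_loss_scaling': True,
        'custom_white_list': {'conv2d'},  # ops kept in float16
    }
    model.prepare(optim, paddle.nn.CrossEntropyLoss(),
                  paddle.metric.Accuracy(), amp_configs=amp_configs)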
python/paddle/metric/metrics.py

@@ -243,7 +243,7 @@ class Accuracy(Metric):
     def compute(self, pred, label, *args):
         """
-        Compute the top-k (maxinum value in `topk`) indices.
+        Compute the top-k (maximum value in `topk`) indices.

         Args:
             pred (Tensor): The predicted value is a Tensor with dtype
@@ -253,7 +253,7 @@ class Accuracy(Metric):
             [batch_size, d0, ..., num_classes] in one hot representation.

         Return:
-            Tensor: Correct mask, a tensor with shape [batch_size, topk].
+            Tensor: Correct mask, a tensor with shape [batch_size, d0, ..., topk].
         """
         pred = paddle.argsort(pred, descending=True)
         pred = paddle.slice(
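The context lines above show how the top-k indices are obtained. A standalone re-creation of that step, where the batch size, class count, and topk value are assumptions for illustration:

    import paddle

    topk = 2
    pred = paddle.rand([4, 10])                  # [batch_size, num_classes]
    idx = paddle.argsort(pred, descending=True)  # class indices, best score first
    idx = paddle.slice(idx, axes=[len(idx.shape) - 1], starts=[0], ends=[topk])
    print(idx.shape)                             # [4, 2] -> [batch_size, topk]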
@@ -277,7 +277,7 @@ class Accuracy(Metric):
         returns the accuracy of current step.

         Args:
-            correct: Correct mask, a tensor with shape [batch_size, topk].
+            correct: Correct mask, a tensor with shape [batch_size, d0, ..., topk].

         Return:
             Tensor: the accuracy of current step.
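Putting the two corrected docstrings together, a minimal compute/update round trip looks roughly like this; the prediction and label values are invented:

    import paddle

    m = paddle.metric.Accuracy()                       # default topk=(1,)
    pred = paddle.to_tensor([[0.1, 0.9], [0.8, 0.2]])  # [batch_size, num_classes]
    label = paddle.to_tensor([[1], [0]])               # index labels, [batch_size, 1]
    correct = m.compute(pred, label)  # correct mask, [batch_size, topk]
    acc = m.update(correct)           # accuracy of the current step
    print(acc)  # 1.0: both samples are predicted correctly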