PaddlePaddle / Paddle
Commit f32ae272 (unverified)
Authored by Zhen Wang on Aug 28, 2020; committed via GitHub on Aug 28, 2020.
Remove `sorted_sum_gradient_` from BasicEngine and PartialGradTask. (#26766)
Use `Tensor` instead of `Variable` in the doc of paddle.grad.
Parent: 7b78bfc0

Showing 4 changed files with 14 additions and 17 deletions (+14 −17):

- paddle/fluid/imperative/basic_engine.cc (+1 −2)
- paddle/fluid/imperative/basic_engine.h (+0 −1)
- paddle/fluid/imperative/partial_grad_engine.cc (+1 −2)
- python/paddle/fluid/dygraph/base.py (+12 −12)
paddle/fluid/imperative/basic_engine.cc

@@ -36,7 +36,6 @@ namespace paddle {
 namespace imperative {
 
 void BasicEngine::Init(VarBase* var, bool retain_graph) {
-  sorted_sum_gradient_ = FLAGS_sort_sum_gradient;
   retain_graph_ = retain_graph;
   init_node_ = var->GradVarBase()->GradNode();
   var->GradVarBase()->ClearGradNode();
@@ -106,7 +105,7 @@ void BasicEngine::PrepareGradAccumulators(const OpBase& op) {
     auto& accumulator = accumulators_[var.get()];
     if (!accumulator) {
-      if (sorted_sum_gradient_) {
+      if (FLAGS_sort_sum_gradient) {
         accumulator.reset(new SortedGradientAccumulator(var.get()));
       } else {
         accumulator.reset(new EagerGradientAccumulator(var.get()));
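The branch above chooses between two accumulation strategies based on the flag. A pure-Python toy sketch of that distinction (the classes `EagerAccumulator`, `SortedAccumulator`, and the helper `accumulate` are invented for illustration, not Paddle's real classes): the eager variant adds each incoming gradient contribution immediately, while the sorted variant buffers contributions and sums them in a deterministic order at the end.

```python
class EagerAccumulator:
    """Adds each gradient contribution as soon as it arrives."""

    def __init__(self):
        self.total = 0.0

    def add(self, op_id, grad):
        self.total += grad

    def result(self):
        return self.total


class SortedAccumulator:
    """Buffers (op_id, grad) pairs and sums them sorted by op_id."""

    def __init__(self):
        self.pending = []

    def add(self, op_id, grad):
        self.pending.append((op_id, grad))

    def result(self):
        # Deterministic summation order regardless of arrival order.
        return sum(g for _, g in sorted(self.pending))


def accumulate(sort_sum_gradient, contributions):
    """Mimics the branch: pick the accumulator type from the flag."""
    acc = SortedAccumulator() if sort_sum_gradient else EagerAccumulator()
    for op_id, grad in contributions:
        acc.add(op_id, grad)
    return acc.result()
```

Both strategies produce the same total; the sorted variant only fixes the order in which floating-point additions happen, which can matter for reproducibility.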
paddle/fluid/imperative/basic_engine.h

@@ -44,7 +44,6 @@ class BasicEngine : public Engine {
  private:
   std::shared_ptr<GradOpNode> init_node_;
-  bool sorted_sum_gradient_;
   std::unordered_map<GradOpNode*, size_t> node_deps_;
   std::unordered_map<VariableWrapper*, std::unique_ptr<GradientAccumulator>>
       accumulators_;
paddle/fluid/imperative/partial_grad_engine.cc

@@ -578,7 +578,6 @@ class PartialGradTask {
   bool retain_graph_;
   bool allow_unused_;
   bool only_inputs_;
-  bool sorted_sum_gradient_{FLAGS_sort_sum_gradient};
 };
 
 PartialGradTask::PartialGradTask(
@@ -981,7 +980,7 @@ void PartialGradTask::PrepareInitialGradientAccumulators(const OpBase *op) {
     if (!accumulator) {
       accumulator.reset(new GradientAccumulationInfo(
-          var, sorted_sum_gradient_, create_graph_));
+          var, FLAGS_sort_sum_gradient, create_graph_));
     }
     accumulator->IncreaseTotalRefCnt();
python/paddle/fluid/dygraph/base.py

@@ -327,19 +327,19 @@ def grad(outputs,
     This API computes the sum of gradients of `outputs` with respect to each `inputs` .
 
     Parameters:
-        outputs (Variable|list(Variable)|tuple(Variable)): the output Variable
-            or Variable list/tuple of the graph to compute gradients.
-        inputs (Variable|list(Variable)|tuple(Variable)): the input Variable
-            or Variable list/tuple of the graph to compute gradients. The returned
+        outputs (Tensor|list(Tensor)|tuple(Tensor)): the output Tensor
+            or Tensor list/tuple of the graph to compute gradients.
+        inputs (Tensor|list(Tensor)|tuple(Tensor)): the input Tensor
+            or Tensor list/tuple of the graph to compute gradients. The returned
             values of this API are the gradients of `inputs` .
-        grad_outputs (Variable|list(Variable|None)|tuple(Variable|None), optional):
+        grad_outputs (Tensor|list(Tensor|None)|tuple(Tensor|None), optional):
             initial gradient values of `outputs` . If `grad_outputs` is None,
             the initial gradient values of `outputs` would be Tensors filled with 1;
             if `grad_outputs` is not None, it must have the same length as `outputs` ,
             and in this case, the initial gradient value of the i-th `outputs` would
             be: (1) a Tensor filled with 1 when the i-th element of `grad_outputs`
             is None; (2) the i-th element of `grad_outputs` when the i-th element of
-            `grad_outputs` is a Variable. Default None.
+            `grad_outputs` is a Tensor. Default None.
         retain_graph (bool, optional): whether to retain the forward graph which
             is used to calculate the gradient. When it is True, the graph would
             be retained, in which way users can calculate backward twice for the
@@ -351,21 +351,21 @@ def grad(outputs,
             computing process would be discarded. Default False.
         only_inputs (bool, optional): whether to only compute the gradients of
             `inputs` . If it is False, the gradients of all remaining leaf
-            Variables in the graph would be also computed and accumulated.
+            Tensors in the graph would be also computed and accumulated.
             If it is True, only the gradients of `inputs` would be computed.
             Default True. only_inputs=False is under development, and it is
             not supported yet.
         allow_unused (bool, optional): whether to raise error or return None if some
-            Variables of `inputs` are unreachable in the graph. If some Variables of
+            Tensors of `inputs` are unreachable in the graph. If some Tensors of
             `inputs` are unreachable in the graph (i.e., their gradients are None),
             error would be raised if allow_unused=False, or None would be returned as
             their gradients if allow_unused=True. Default False.
-        no_grad_vars (Variable|list(Variable)|tuple(Variable)|set(Variable), optional):
-            the Variables whose gradients are not needed to compute. Default None.
+        no_grad_vars (Tensor|list(Tensor)|tuple(Tensor)|set(Tensor), optional):
+            the Tensors whose gradients are not needed to compute. Default None.
 
     Returns:
-        tuple: a tuple of Variables, whose length is the same as the Variable
-            number inside `inputs`, and the i-th returned Variable is the sum of gradients of
+        tuple: a tuple of Tensors, whose length is the same as the Tensor
+            number inside `inputs`, and the i-th returned Tensor is the sum of gradients of
             `outputs` with respect to the i-th `inputs`.
 
     Examples 1:
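The `grad_outputs` semantics described in the docstring can be illustrated with a small stand-alone toy (pure Python, no Paddle dependency; `seeded_gradient_sum` is an invented helper, not the real API): each output is seeded with its `grad_outputs` entry, a `None` entry means a seed of ones, and each input receives the sum of seeded gradients over all outputs.

```python
def seeded_gradient_sum(jacobians, grad_outputs=None):
    """Toy model of the documented semantics, with scalars standing in
    for Tensors.

    jacobians[i][j] = d(outputs[i]) / d(inputs[j]).
    Returns a tuple with one entry per input: the sum over all outputs
    of seed_i * jacobians[i][j].
    """
    n_outputs = len(jacobians)
    n_inputs = len(jacobians[0])
    if grad_outputs is None:
        grad_outputs = [None] * n_outputs  # default: every seed is ones
    result = [0.0] * n_inputs
    for jac_row, seed in zip(jacobians, grad_outputs):
        seed = 1.0 if seed is None else seed  # None -> "filled with 1"
        for j, dydx in enumerate(jac_row):
            result[j] += seed * dydx
    return tuple(result)
```

For example, with two outputs and two inputs, passing no `grad_outputs` weights every output's gradient by 1, while an explicit seed rescales each output's contribution before the per-input sums are taken.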