Crayon鑫 / Paddle (forked from PaddlePaddle / Paddle)
Commit 3c2bdaa8 (unverified)
Authored on Oct 13, 2021 by levi131; committed via GitHub on Oct 13, 2021
Parent: 033a73c3

unify usage of tuple and list (#36368)

* modify format
* modify format
Showing 5 changed files with 56 additions and 69 deletions (+56 -69)

python/paddle/autograd/functional.py                            +35 -46
python/paddle/autograd/utils.py                                 +11 -13
python/paddle/fluid/dygraph/base.py                              +1 -1
python/paddle/fluid/tests/unittests/autograd/test_vjp_jvp.py     +2 -2
python/paddle/fluid/tests/unittests/autograd/utils.py            +7 -7
python/paddle/autograd/functional.py
@@ -18,20 +18,7 @@ from ..fluid import framework
 from ..fluid.dygraph import grad
 from ..nn.initializer import assign
 from ..tensor import reshape, zeros_like, to_tensor
-from .utils import _check_tensors, _stack_tensor_or_return_none, _replace_none_with_zero_tensor
-
-
-def to_tensorlist(tl):
-    if not isinstance(tl, list):
-        if isinstance(tl, tuple):
-            tl = list(tl)
-        else:
-            tl = [tl]
-    for t in tl:
-        assert isinstance(t, paddle.Tensor) or t is None, (
-            f'{t} is expected to be paddle.Tensor or None, but found {type(t)}.')
-    return tl
+from .utils import _tensors, _stack_tensor_or_return_none, _replace_none_with_zero_tensor
 
 
 @contextlib.contextmanager
@@ -98,19 +85,19 @@ def vjp(func, inputs, v=None, create_graph=False, allow_unused=False):
     reverse mode automatic differentiation.
 
     Args:
-        func(Callable): `func` takes as input a tensor or a list
-            of tensors and returns a tensor or a list of tensors.
-        inputs(list[Tensor]|Tensor): used as positional arguments
-            to evaluate `func`. `inputs` is accepted as one tensor
-            or a list of tensors.
-        v(list[Tensor]|Tensor, optional): the cotangent vector
-            invovled in the VJP computation. `v` matches the size
-            and shape of `func`'s output. Default value is None
+        func(Callable): `func` takes as input a tensor or a list/tuple
+            of tensors and returns a tensor or a list/tuple of tensors.
+        inputs(list[Tensor]|tuple[Tensor]|Tensor): used as positional
+            arguments to evaluate `func`. `inputs` is accepted as one
+            tensor or a list of tensors.
+        v(list[Tensor]|tuple[Tensor]|Tensor|None, optional): the
+            cotangent vector invovled in the VJP computation. `v` matches
+            the size and shape of `func`'s output. Default value is None
             and in this case is equivalent to all ones the same size
             of `func`'s output.
-        create_graph(bool, optional): if `True`, gradients can
-            be evaluated on the results. If `False`, taking gradients
-            on the results is invalid. Default value is False.
+        create_graph(bool, optional): if `True`, gradients can be
+            evaluated on the results. If `False`, taking gradients on
+            the results is invalid. Default value is False.
         allow_unused(bool, optional): In case that some Tensors of
             `inputs` do not contribute to the computation of the output.
             If `allow_unused` is False, an error will be raised,
@@ -119,8 +106,9 @@ def vjp(func, inputs, v=None, create_graph=False, allow_unused=False):
 
     Returns:
         output(tuple):
-            func_out: the output of `func(inputs)`
-            vjp(list[Tensor]|Tensor): the pullback results of `v` on `func`
+            func_out(list[Tensor]|tuple[Tensor]|Tensor): the output of
+                `func(inputs)`
+            vjp(list[Tensor]): the pullback results of `v` on `func`
 
     Examples:
       .. code-block:: python
@@ -163,13 +151,13 @@ def vjp(func, inputs, v=None, create_graph=False, allow_unused=False):
             #        [[2., 1.],
             #         [1., 0.]]), None]
     """
-    xs, v = to_tensorlist(inputs), to_tensorlist(v)
+    xs, v = _tensors(inputs, "inputs"), _tensors(v, "v")
 
     with gradient_scope(
             xs, v, create_graph=create_graph,
             allow_unused=allow_unused) as [xs, v, grad_fn, return_fn]:
         outputs = func(*xs)
-        ys = to_tensorlist(outputs)
+        ys = _tensors(outputs, "outputs")
         grads = grad_fn(ys, xs, v)
         outputs, grads = return_fn(outputs), return_fn(grads)
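For illustration, a minimal usage sketch of `vjp` after this change; the import path `paddle.autograd.functional` is the one used by the tests in this commit, while the function and tensor shapes below are assumptions, not part of the diff:

    # Sketch: after this change, `inputs` and `v` may each be a Tensor,
    # a list, or a tuple; `_tensors` normalizes them to a plain list.
    import paddle
    from paddle.autograd.functional import vjp

    def func(x):
        return paddle.matmul(x, x)

    x = paddle.ones(shape=[2, 2], dtype='float32')
    x.stop_gradient = False

    # Passing a tuple for `inputs` and `v` now behaves the same as a list.
    func_out, vjp_result = vjp(func, inputs=(x, ), v=(paddle.ones(shape=[2, 2]), ))
    print(vjp_result)  # a list with one gradient Tensor, matching `inputs`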
@@ -186,16 +174,16 @@ def jvp(func, inputs, v=None, create_graph=False, allow_unused=False):
     **This API is ONLY available in imperative mode.**
 
     Args:
-        func(Callable): `func` takes as input a tensor or a list
-            of tensors and returns a tensor or a list of tensors.
-        inputs(list[Tensor]|Tensor): used as positional arguments
-            to evaluate `func`. `inputs` is accepted as one tensor or a list
-            of tensors.
-        v(list[Tensor]|Tensor, optional): the tangent vector
-            invovled in the JVP computation. `v` matches the size
-            and shape of `inputs`. `v` is Optional if `func` returns
-            a single tensor. Default value is None and in this case
-            is equivalent to all ones the same size of `inputs`.
+        func(Callable): `func` takes as input a tensor or a list/tuple
+            of tensors and returns a tensor or a list/tuple of tensors.
+        inputs(list[Tensor]|tuple[Tensor]|Tensor): used as positional
+            arguments to evaluate `func`. `inputs` is accepted as one
+            tensor or a list/tuple of tensors.
+        v(list[Tensor]|tuple[Tensor]|Tensor|None, optional): the
+            tangent vector invovled in the JVP computation. `v` matches
+            the size and shape of `inputs`. `v` is Optional if `func`
+            returns a single tensor. Default value is None and in this
+            case is equivalent to all ones the same size of `inputs`.
         create_graph(bool, optional): if `True`, gradients can
             be evaluated on the results. If `False`, taking gradients
             on the results is invalid. Default value is False.
@@ -207,8 +195,9 @@ def jvp(func, inputs, v=None, create_graph=False, allow_unused=False):
 
     Returns:
         output(tuple):
-            func_out: the output of `func(inputs)`
-            jvp(list[Tensor]|Tensor): the pullback results of `v` on `func`
+            func_out(list[Tensor]|tuple[Tensor]|Tensor): the output of
+                `func(inputs)`
+            jvp(list[Tensor]): the pullback results of `v` on `func`
 
     Examples:
     .. code-block:: python
@@ -232,13 +221,13 @@ def jvp(func, inputs, v=None, create_graph=False, allow_unused=False):
             #         [0., 0.]])]
     """
-    xs, v = to_tensorlist(inputs), to_tensorlist(v)
+    xs, v = _tensors(inputs, "inputs"), _tensors(v, "v")
 
     with gradient_scope(
             xs, v, create_graph=create_graph,
             allow_unused=allow_unused) as [xs, v, grad_fn, return_fn]:
         outputs = func(*xs)
-        ys = to_tensorlist(outputs)
+        ys = _tensors(outputs, "outputs")
         ys_grad = [zeros_like(y) for y in ys]
         xs_grad = grad_fn(ys, xs, ys_grad, create_graph=True)
         ys_grad = grad_fn(xs_grad, ys_grad, v)
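The same unification applies to `jvp`; a corresponding hedged sketch (the helper function and shapes are again assumptions):

    import paddle
    from paddle.autograd.functional import jvp

    def func(x):
        return paddle.matmul(x, x)

    x = paddle.ones(shape=[2, 2], dtype='float32')
    x.stop_gradient = False

    # For jvp, the tangent `v` matches `inputs` and may be a list or tuple.
    func_out, jvp_result = jvp(func, inputs=[x], v=(paddle.ones(shape=[2, 2]), ))
    print(jvp_result)  # a list with one Tensor, matching the output of `func`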
@@ -357,8 +346,8 @@ def jacobian(func, inputs, create_graph=False, allow_unused=False):
             #        [0., 0., 0., 2.]]), None))
     '''
-    inputs = _check_tensors(inputs, "inputs")
-    outputs = _check_tensors(func(*inputs), "outputs")
+    inputs = _tensors(inputs, "inputs")
+    outputs = _tensors(func(*inputs), "outputs")
     fin_size = len(inputs)
     fout_size = len(outputs)
     flat_outputs = tuple(reshape(output, shape=[-1]) for output in outputs)
@@ -494,7 +483,7 @@ def hessian(func, inputs, create_graph=False, allow_unused=False):
             #         [0., 1., 1., 2.]]), None), (None, None))
     '''
-    inputs = _check_tensors(inputs, "inputs")
+    inputs = _tensors(inputs, "inputs")
     outputs = func(*inputs)
     assert isinstance(outputs, paddle.Tensor) and outputs.shape == [1
python/paddle/autograd/utils.py
@@ -15,22 +15,20 @@
 import paddle
 
 
-def _check_tensors(in_out_list, name):
-    assert in_out_list is not None, "{} should not be None".format(name)
-
-    if isinstance(in_out_list, (list, tuple)):
-        assert len(in_out_list) > 0, "{} connot be empyt".format(name)
-        for each_var in in_out_list:
-            assert isinstance(
-                each_var, paddle.Tensor
-            ), "Elements of {} must be paddle.Tensor".format(name)
-        return list(in_out_list)
+def _tensors(ts, name):
+    if isinstance(ts, (list, tuple)):
+        assert len(ts) > 0, "{} connot be empty".format(name)
+        for each_t in ts:
+            assert isinstance(
+                each_t, paddle.Tensor
+            ) or each_t is None, "Elements of {} must be paddle.Tensor or None".format(
+                name)
+        return list(ts)
     else:
-        assert isinstance(
-            in_out_list, paddle.Tensor
-        ), "{} must be Tensor or list of Tensor".format(name)
-        return [in_out_list]
+        assert isinstance(
+            ts, paddle.Tensor
+        ) or ts is None, "{} must be Tensor or list of Tensor".format(name)
+        return [ts]
 
 
 def _stack_tensor_or_return_none(origin_list):
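A small usage sketch (not part of the commit) of the renamed helper, imported from `paddle.autograd.functional` as the tests below do; it shows how `_tensors` gives single Tensors, lists, tuples, and None elements a uniform list representation:

    import paddle
    from paddle.autograd.functional import _tensors

    x = paddle.ones(shape=[2, 2])

    print(_tensors(x, "inputs"))          # [Tensor]       - a single Tensor is wrapped
    print(_tensors((x, x), "inputs"))     # [Tensor, Tensor] - tuples become lists
    print(_tensors([x, None], "inputs"))  # [Tensor, None] - None elements are allowed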
python/paddle/fluid/dygraph/base.py
@@ -456,7 +456,7 @@ def grad(outputs,
             the Tensors whose gradients are not needed to compute. Default None.
 
     Returns:
-        tuple: a tuple of Tensors, whose length is the same as the Tensor number
+        list: a list of Tensors, whose length is the same as the Tensor number
         inside `inputs`, and the i-th returned Tensor is the sum of gradients of
         `outputs` with respect to the i-th `inputs`.
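A brief hedged example of the documented behaviour this docstring fix describes (values are illustrative): `paddle.grad` returns a list, not a tuple, with one entry per element of `inputs`.

    import paddle

    x = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
    y = x * x

    # One gradient Tensor per element of `inputs`, returned as a list.
    dx = paddle.grad(outputs=[y], inputs=[x])
    print(type(dx), len(dx))  # <class 'list'> 1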
python/paddle/fluid/tests/unittests/autograd/test_vjp_jvp.py
@@ -15,7 +15,7 @@
 import unittest
 import paddle
 
-from paddle.autograd.functional import vjp, jvp, to_tensorlist
+from paddle.autograd.functional import vjp, jvp, _tensors
 from paddle import grad, ones_like, zeros_like
@@ -55,7 +55,7 @@ def nested(x):
 
 def make_v(f, inputs):
-    outputs = to_tensorlist(f(*inputs))
+    outputs = _tensors(f(*inputs), "outputs")
     return [ones_like(x) for x in outputs]
python/paddle/fluid/tests/unittests/autograd/utils.py
@@ -14,7 +14,7 @@
 import numpy as np
 import paddle
-from paddle.autograd.functional import _check_tensors
+from paddle.autograd.functional import _tensors
 
 
 def _product(t):
@@ -42,8 +42,8 @@ def _set_item(t, idx, value):
 
 def _compute_numerical_jacobian(func, xs, delta, np_dtype):
-    xs = _check_tensors(xs, "xs")
-    ys = _check_tensors(func(*xs), "ys")
+    xs = _tensors(xs, "xs")
+    ys = _tensors(func(*xs), "ys")
     fin_size = len(xs)
     fout_size = len(ys)
     jacobian = list([] for _ in range(fout_size))
@@ -59,11 +59,11 @@ def _compute_numerical_jacobian(func, xs, delta, np_dtype):
             orig = _get_item(xs[j], q)
             x_pos = orig + delta
             xs[j] = _set_item(xs[j], q, x_pos)
-            ys_pos = _check_tensors(func(*xs), "ys_pos")
+            ys_pos = _tensors(func(*xs), "ys_pos")
 
             x_neg = orig - delta
             xs[j] = _set_item(xs[j], q, x_neg)
-            ys_neg = _check_tensors(func(*xs), "ys_neg")
+            ys_neg = _tensors(func(*xs), "ys_neg")
 
             xs[j] = _set_item(xs[j], q, orig)
@@ -76,8 +76,8 @@ def _compute_numerical_jacobian(func, xs, delta, np_dtype):
 
 def _compute_numerical_hessian(func, xs, delta, np_dtype):
-    xs = _check_tensors(xs, "xs")
-    ys = _check_tensors(func(*xs), "ys")
+    xs = _tensors(xs, "xs")
+    ys = _tensors(func(*xs), "ys")
     fin_size = len(xs)
     hessian = list([] for _ in range(fin_size))
     for i in range(fin_size):
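For context, a hedged one-dimensional sketch (not from the diff) of the central-difference rule these test helpers apply around `orig`, i.e. dy/dx is approximated by (f(x + delta) - f(x - delta)) / (2 * delta):

    import numpy as np

    def central_difference(f, x, delta=1e-4):
        # Illustrates the perturbation used by _compute_numerical_jacobian:
        # evaluate f at x + delta and x - delta, then divide by 2 * delta.
        return (f(x + delta) - f(x - delta)) / (2.0 * delta)

    print(np.isclose(central_difference(lambda t: t ** 2, 3.0), 6.0))  # True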