MegEngine 天元 / MegEngine
Commit 2eed7d83
Authored Jan 18, 2021 by Megvii Engine Team

docs(mge): restore autodiff.Function docs

GitOrigin-RevId: cbd84df9168fd9a7a5480ed5edd88d84efa6d18d
Parent: 51003176
Showing 1 changed file with 58 additions and 17 deletions (+58 −17)

imperative/python/megengine/core/autodiff/grad.py
@@ -20,23 +20,6 @@ from .._imperative_rt import core2, ops
 from ..ops.builtin import Elemwise, OpDef, RemoteSend
 from ..ops.special import Const
-""" Some notes:
-1. Initialize the optimizer:
-    for each trainable parameter:
-        call wrt(param, callback)
-    Each parameter tensor will be associated with a Tracer object saved in Tensor._extra_data
-2. Tracer has one member: node, which is a VariableNode
-3. VariableNode has an OpNode member: opnode
-4. OpNode has these members:
-    a. id
-    b. inputs, which is made of VariableNode
-    c. outputs, which are weakrefs to VariableNode
-    d. backward: callback function
-    e. has_grad_fn: call has_grad_fn(opnode, reached) to check whether the grad exists
-    f. backward_allow_noinput: whether backward allows no input
-"""
 _grad_count = 0
 _grad_manager_dict = weakref.WeakValueDictionary()
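The deleted notes describe the autodiff bookkeeping: each traced parameter is associated with a Tracer holding a VariableNode, and each OpNode keeps strong references to its input VariableNodes but only weak references to its outputs, so unreachable outputs can be collected. A minimal plain-Python sketch of that ownership pattern (the class and member names mirror the notes; this is an illustration, not MegEngine's actual implementation):

```python
import weakref

class VariableNode:
    """Stands in for a traced tensor variable (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.opnode = None  # producing OpNode, as in the notes

class OpNode:
    """Mirrors the documented members: id, inputs, outputs (weakrefs), backward."""
    _next_id = 0

    def __init__(self, inputs, outputs, backward):
        OpNode._next_id += 1
        self.id = OpNode._next_id
        self.inputs = list(inputs)                        # strong refs to VariableNodes
        self.outputs = [weakref.ref(v) for v in outputs]  # weakrefs only: a dead output
        self.backward = backward                          # does not keep the graph alive

x = VariableNode("x")
y = VariableNode("y")
op = OpNode([x], [y], backward=lambda dy: dy)
y.opnode = op

assert op.outputs[0]() is y   # weakref still resolves while y is alive
del y                         # drop the only strong reference to the output
assert op.outputs[0]() is None  # the output node has been collected
```

The same weak-reference idea explains `_grad_manager_dict = weakref.WeakValueDictionary()` in the context lines above: entries vanish automatically once their grad manager is no longer referenced elsewhere.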
@@ -97,6 +80,64 @@ class Grad:
 class Function(ops.PyOpBase):
+    """
+    Defines a block of operations with customizable differentiation.
+
+    The computation should be defined in the ``forward`` method, with its gradient
+    computation defined in the ``backward`` method.
+
+    Each instance of ``Function`` should be used only once during forwarding.
+
+    Examples:
+
+    .. code-block::
+
+        class Sigmoid(Function):
+            def forward(self, x):
+                y = 1 / (1 + F.exp(-x))
+                self.y = y
+                return y
+
+            def backward(self, dy):
+                y = self.y
+                return dy * y * (1 - y)
+    """
+
+    def forward(self, *args, **kwargs):
+        """
+        Applies operations to ``inputs`` and returns results. It must be overridden by all subclasses.
+
+        :param input: input tensors.
+        :return: a tuple of Tensor or a single Tensor.
+
+        .. note::
+
+            This method should return a tuple of Tensor or a single Tensor representing the output
+            of the function.
+        """
+        raise NotImplementedError
+
+    def backward(self, *output_grads):
+        """
+        Computes the gradient of the forward function. It must be overridden by all subclasses.
+
+        :param output_grads: gradients of outputs that are returned by :meth:`forward`.
+
+        .. note::
+
+            In case some tensors of the outputs are not related to the loss function, the
+            corresponding values in ``output_grads`` would be ``None``.
+
+        .. note::
+
+            This method should return a tuple containing the gradients of all inputs, in the same
+            order as the ``inputs`` argument of :meth:`forward`. A ``Tensor`` could be returned
+            instead if there is only one input. If users want to stop the propagation of some
+            gradients, the corresponding returned values should be set to ``None``.
+        """
+        raise NotImplementedError
+
     def _default_rule(self, *args):
         ret = self.forward(*args)
         self.__single_output = isinstance(ret, core2.Tensor)