Unverified commit 4dbf3a8f authored by HongyuJia, committed via GitHub

[Fix typo] Fix typo error in grad_node_info.h (#51607)

* [Fix typo] Fix typo error in grad_node_info.h

* fix varbase_patch_methods.py typo error

* fix more typo errors
Parent 66ac2594
@@ -31,7 +31,7 @@
 namespace egr {
 /*
-* GeneralGrad is Helpper class to implement custom grad operation between
+* GeneralGrad is Helper class to implement custom grad operation between
 * outputs and inputs.
 *
 * **/
......
@@ -25,7 +25,7 @@ namespace egr {
 /**
 * GradNodeBase is base class of all grad node, which is what should be used by
 * eager execution, we define most of backward autograd members here, and for
-* each Operator, they should hold their onw forward Inputs as TensorWrapper.
+* each Operator, they should hold their own forward Inputs as TensorWrapper.
 *
 * The GradNodeBase will be held in autograd_meta, and it is also a member of
 * Edge, which indicates the edge of backward graph.
@@ -40,7 +40,7 @@ namespace egr {
 *
 * NOTE: GradNodeBase holds its own inputs and Outputs
 *
-* Edge is defined to descripe depend of backward, an Edge is what linked
+* Edge is defined to describe depend of backward, an Edge is what linked
 * between two node, it should contain a Node and rank of this Node (this is
 * used to indicate which input of grad this edge belong).
 **/
......
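Purely as a conceptual sketch of the Node/Edge relationship these comments describe — not Paddle's actual C++ types; every class, field, and name below is a hypothetical simplification:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class GradNode:
        """Hypothetical stand-in for a grad node: one backward op in the graph."""
        name: str
        # Edges point to the grad nodes that produced this op's forward inputs.
        edges: List["Edge"] = field(default_factory=list)

    @dataclass
    class Edge:
        """Links to a grad node plus the rank saying which grad input of that node it feeds."""
        node: GradNode
        rank: int  # which input slot of `node` this edge belongs to (simplified)

    # A tiny two-node backward graph: mul_grad feeds input 0 of add_grad.
    add_grad = GradNode("add_grad")
    mul_grad = GradNode("mul_grad", edges=[Edge(add_grad, rank=0)])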
@@ -232,7 +232,7 @@ def monkey_patch_varbase():
 """
 Run backward of current Graph which starts from current Tensor.
-The new gradient will accumulat on previous gradient.
+The new gradient will accumulate on previous gradient.
 You can clear gradient by ``Tensor.clear_grad()`` .
@@ -240,11 +240,11 @@ def monkey_patch_varbase():
 grad_tensor(Tensor, optional): initial gradient values of the current Tensor. If `grad_tensor` is None,
 the initial gradient values of the current Tensor would be Tensor filled with 1.0;
 if `grad_tensor` is not None, it must have the same length as the current Tensor.
-Teh default value is None.
+The default value is None.
 retain_graph(bool, optional): If False, the graph used to compute grads will be freed. If you would
 like to add more ops to the built graph after calling this method( :code:`backward` ), set the parameter
-:code:`retain_graph` to True, then the grads will be retained. Thus, seting it to False is much more memory-efficient.
+:code:`retain_graph` to True, then the grads will be retained. Thus, setting it to False is much more memory-efficient.
 Defaults to False.
 Returns:
 NoneType: None
......
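As a usage illustration of the docstring fixed above — gradient accumulation across calls, ``retain_graph``, and ``Tensor.clear_grad()`` — here is a minimal dygraph sketch; the printed values are illustrative:

    import paddle

    # Scalar loss over a tensor that requires grad.
    x = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
    loss = (x * x).sum()

    # First backward keeps the graph so it can be run again.
    loss.backward(retain_graph=True)
    print(x.grad)  # gradients of sum(x**2) w.r.t. x: [2., 4., 6.]

    # Second backward accumulates onto the existing gradient, as documented.
    loss.backward()
    print(x.grad)  # accumulated: [4., 8., 12.]

    # Reset accumulated gradients before the next iteration.
    x.clear_grad()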