Unverified commit c0ee14f6, authored by hua-zi, committed by GitHub

updata Adamw.py (#52984)

* updata Adamw.py

out.backward()  -> loss.backward()

* Update adamw.py
Parent 35af5818
@@ -90,7 +90,7 @@ class AdamW(Optimizer):
name (str, optional): Normally there is no need for user to set this property.
For more information, please refer to :ref:`api_guide_Name`.
The default value is None.
-        **Notes**:
+        Notes:
**Currently, AdamW doesn't support sparse parameter optimization.**
Examples:
@@ -111,7 +111,7 @@ class AdamW(Optimizer):
beta1=beta1,
beta2=beta2,
weight_decay=0.01)
-            out.backward()
+            loss.backward()
opt.step()
opt.clear_grad()
@@ -135,7 +135,7 @@ class AdamW(Optimizer):
}],
weight_decay=0.01,
beta1=0.9)
-            out.backward()
+            loss.backward()
opt.step()
opt.clear_grad()
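For context on what the patched docstrings document: AdamW applies the usual Adam moment updates but subtracts the weight decay term directly from the parameter, rather than folding it into the gradient as L2-regularized Adam does. A minimal scalar sketch of one update step (illustrative only, not Paddle's actual implementation; all names are hypothetical):

```python
import math

def adamw_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.01):
    """One scalar AdamW update: Adam moments plus decoupled weight decay."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad    # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    # Decoupled weight decay: applied to the weight directly,
    # not mixed into the gradient as in L2-regularized Adam.
    param -= lr * (m_hat / (math.sqrt(v_hat) + eps) + weight_decay * param)
    return param, m, v

# One step from param=1.0 with a small positive gradient moves it down.
p, m, v = adamw_step(param=1.0, grad=0.1, m=0.0, v=0.0, t=1)
```

In a real training loop (as in the examples above) the optimizer performs this per-parameter bookkeeping internally; the user only calls `loss.backward()`, `opt.step()`, and `opt.clear_grad()`.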