Unverified commit 89fc393f, authored by Z zhang wenhui, committed by GitHub

fix Adam cn doc (#2188)

* fix AdamOptimizer, test=develop, test=document_fix

* fix AdamOptimizer, test=develop, test=document_fix

* fix AdamOptimizer, test=develop, test=document_fix

* fix AdamOptimizer, test=develop, test=document_fix

* fix python code
Parent f06c7473
@@ -19,7 +19,7 @@ The Adam optimizer comes from Section 2 of the `Adam paper <https://arxiv.org/abs/1412.6980>`_
.. math::
moment\_2\_out=\beta_2*moment\_2+(1-\beta_2)*grad*grad
.. math::
-learning\_rate=\frac{learning\_rate}{1-\beta_1^t}
+learning\_rate=learning\_rate*\frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t}
.. math::
param\_out=param-learning\_rate*\frac{moment\_1}{\sqrt{moment\_2}+\epsilon}\\
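For readers checking the corrected formula above, a minimal NumPy sketch of a single Adam update step follows; the function and variable names (adam_step, m, v, t) are illustrative only and are not part of the Paddle API:

    import numpy as np

    def adam_step(param, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
        # First and second moment estimates (moment_1_out / moment_2_out above).
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad * grad
        # Corrected bias term: lr * sqrt(1 - beta2^t) / (1 - beta1^t).
        lr_t = lr * np.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)
        # Parameter update (param_out above).
        param = param - lr_t * m / (np.sqrt(v) + eps)
        return param, m, v

    # Usage: keep m and v as running state; t counts steps starting from 1.
    p, g = np.ones(3), np.full(3, 0.5)
    m, v = np.zeros(3), np.zeros(3)
    p, m, v = adam_step(p, g, m, v, t=1)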
@@ -84,7 +84,7 @@ The Adam optimizer comes from Section 2 of the `Adam paper <https://arxiv.org/abs/1412.6980>`_
avg_cost = fluid.layers.mean(cost)
# define beta decay variable
-def get_decayed_betas(beta1_init, beta2_init, decay_steps, decay_rate)
+def get_decayed_betas(beta1_init, beta2_init, decay_steps, decay_rate):
global_step = lr_scheduler._decay_step_counter()
beta1 = fluid.layers.create_global_var(
@@ -113,7 +113,7 @@ The Adam optimizer comes from Section 2 of the `Adam paper <https://arxiv.org/abs/1412.6980>`_
beta1, beta2 = get_decayed_betas(0.9, 0.99, 1e5, 0.9)
adam_optimizer = fluid.optimizer.AdamOptimizer(
learning_rate=0.01,
-beta1=beta1
+beta1=beta1,
beta2=beta2)
adam_optimizer.minimize(avg_cost)
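To see the corrected example end to end, here is a self-contained sketch assuming a Paddle 1.x fluid environment; the tiny regression network, the scalar beta1/beta2 values, and the executor setup are illustrative assumptions added here, not part of the original example, which builds decayed-beta Variables instead:

    import numpy as np
    import paddle.fluid as fluid

    # A tiny regression network so the optimizer has a loss to minimize.
    x = fluid.data(name='x', shape=[None, 13], dtype='float32')
    y = fluid.data(name='y', shape=[None, 1], dtype='float32')
    y_predict = fluid.layers.fc(input=x, size=1)
    cost = fluid.layers.square_error_cost(input=y_predict, label=y)
    avg_cost = fluid.layers.mean(cost)

    # The corrected call: note the comma after beta1 so beta2 is also passed.
    adam_optimizer = fluid.optimizer.AdamOptimizer(
        learning_rate=0.01,
        beta1=0.9,
        beta2=0.99)
    adam_optimizer.minimize(avg_cost)

    # Run one step on random data to confirm the program executes.
    exe = fluid.Executor(fluid.CPUPlace())
    exe.run(fluid.default_startup_program())
    loss, = exe.run(
        feed={'x': np.random.rand(8, 13).astype('float32'),
              'y': np.random.rand(8, 1).astype('float32')},
        fetch_list=[avg_cost])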