Unverified commit 44ba39b7, authored by Yang Zhang, committed by GitHub

Merge pull request #26 from willthefrog/fix_lr_scaling

Fix learning rate scaling bug
@@ -410,7 +410,8 @@ class StaticGraphAdapter(object):
                 and self.model._optimizer._learning_rate_map:
             # HACK workaround learning rate map issue
             lr_var = self.model._optimizer._learning_rate_map[self._orig_prog]
-            self.model._optimizer._learning_rate_map[prog] = lr_var
+            new_lr_var = prog.global_block().vars[lr_var.name]
+            self.model._optimizer._learning_rate_map[prog] = new_lr_var
         losses = []
         metrics = []
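The fix above replaces a Variable object borrowed from the original program with the same-named variable fetched from the cloned program's global block. A minimal sketch of why that matters, using toy stand-ins for Paddle's Program/Block/Variable (these classes are illustrative assumptions, not Paddle's real API):

```python
# Toy model of the bug: a cloned program has its own Variable objects,
# so reusing a Variable from the original program points at the wrong graph.

class Variable:
    def __init__(self, name, block):
        self.name = name
        self.block = block  # the block that owns this variable


class Block:
    def __init__(self):
        self.vars = {}  # name -> Variable, mirroring Block.vars

    def create_var(self, name):
        var = Variable(name, self)
        self.vars[name] = var
        return var


class Program:
    def __init__(self):
        self._block = Block()

    def global_block(self):
        return self._block

    def clone(self):
        # A clone gets fresh Variable objects with the same names,
        # loosely mirroring Program.clone() semantics.
        new = Program()
        for name in self._block.vars:
            new._block.create_var(name)
        return new


orig_prog = Program()
lr_var = orig_prog.global_block().create_var("learning_rate")
prog = orig_prog.clone()

# Buggy mapping: the original program's Variable, owned by the wrong block.
buggy = lr_var
assert buggy.block is not prog.global_block()

# Fixed mapping: look the variable up by name in the cloned program.
new_lr_var = prog.global_block().vars[lr_var.name]
assert new_lr_var.block is prog.global_block()
```

The key point is that the lookup by `lr_var.name` resolves to the clone's own copy, so the optimizer's learning-rate map entry for `prog` references a variable that actually lives in `prog`.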