- 13 Nov 2017, 1 commit

Committed by QI JUN
* create learning rate variable for every parameter
* fix CI
* set parameter lr relative to global lr
- 10 Nov 2017, 1 commit

Committed by Siddharth Goyal
* Fix attribute naming for momentum_op
* Fix minor typo in comment
* Fix attribute name
* Fix names in test_optimizer
* Fix python wrapper
- 05 Nov 2017, 1 commit

Committed by Yu Yang
- 02 Nov 2017, 1 commit

Committed by Qiao Longfei
* optimizer use init_program
* create persistable variable
* add create_persistable_var to Block
* optimizer use create_persistable_var
* fix prefix
* move create_global_persistable_var from Block to LayerHelper
* Polish Optimizer initialization code
* Using the LayerHelper to create initialize operator and variables
* add_accumulator should use an independent data type
* default use param data type for accumulator
- 28 Oct 2017, 1 commit

Committed by Abhinav Arora
* Adding the increment op for global step
* Changing list to single op as per code review feedback
- 27 Oct 2017, 1 commit

Committed by Abhinav Arora
* Add regularizer code
* Fix code
- 26 Oct 2017, 2 commits

Committed by Abhinav Arora

Committed by Abhinav Arora
* Adding nesterov momentum to python momentum wrapper
* Fixing optimizer test after merge
- 25 Oct 2017, 2 commits

Committed by Yu Yang
* Extract apply_backward_pass to backward.py; rename apply_backward_pass to append_backward_ops
* Fix CI
* Update design doc
Committed by Abhinav Arora
* Adding Adam Python wrapper
* Adding tests for Python Adam wrapper
- 21 Oct 2017, 1 commit

Committed by Kexin Zhao
- 20 Oct 2017, 2 commits

Committed by Kexin Zhao

Committed by Abhinav Arora
* Adding the interface for the momentum optimizer
* Adding a comment about accumulators
- 18 Oct 2017, 1 commit

Committed by Qiao Longfei
* init parameter base class
* optimize the comments of optimizer
* basic implementation of optimizer
* add test_optimizer
* add no_grad_set to interface
* update optimizer.py
* python code can run
* fix some problems
* add sync_with_cpp to Python Program and Block
* sync vars and ops in block from cpp
* optimize code and add some comments
* add more checks for sync
* update optimizer with return value of Backward
* rm unused code
* infer shape when creating gradient variable
* update test_optimizer
* update test_program.py
* update backward test
* follow comments
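The base-class structure this commit introduces can be sketched roughly as follows: subclasses supply the per-parameter update rule, and a `minimize`-style method applies it to the parameter/gradient pairs returned by the backward pass. Names and signatures here are illustrative, not the real interface:

```python
# Illustrative sketch of an optimizer base class: subclasses implement
# the per-parameter update rule, and minimize() applies it to every
# (parameter, gradient) pair produced by the backward pass.
class OptimizerSketch:
    def _update(self, param, grad):
        raise NotImplementedError

    def minimize(self, params_and_grads):
        # params_and_grads stands in for the return value of Backward.
        return [self._update(p, g) for p, g in params_and_grads]

class SGDSketch(OptimizerSketch):
    def __init__(self, learning_rate):
        self.lr = learning_rate

    def _update(self, param, grad):
        return [p - self.lr * g for p, g in zip(param, grad)]
```

Keeping `minimize` in the base class means every optimizer (SGD, momentum, Adam) only has to define its own update rule.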