- 24 December 2019, 1 commit
Committed by Aurelius84
* optimize adam speed by removing _finish_update test=develop
* fix SparseAdamFunctor param list test=develop
* Remove scale_op in expect_list of adam_op test=develop
* fix test optimizer loss assert error test=develop
* fix test optimizer loss assert error test=develop
* modify PADDLE_ENFORCE usage test=develop
* fix op_type in lamb_op.cc test=develop
* fix errors ostream format bug test=develop
* add betaPowOut in ngraph op test=develop
* fix ngraph::op api for gcc8 test=develop
* clean code test=develop
* modify struct into class test=develop
* remove code of beta1Tensor in lamb_op test=develop

- 28 November 2019, 1 commit
Committed by Kaipeng Deng
* add Adam beta1/beta2 support Variable. test=develop
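
A minimal sketch of what this change enables in the fluid 1.x Python API; the use of create_global_var here is an illustrative assumption, not taken from the commit:

    import paddle.fluid as fluid

    # With this change, beta1/beta2 may be passed as Variables instead of
    # Python floats (illustrative; exact accepted types are an assumption).
    beta1 = fluid.layers.create_global_var(
        shape=[1], value=0.9, dtype='float32', persistable=True)
    optimizer = fluid.optimizer.Adam(learning_rate=0.01, beta1=beta1)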

- 28 October 2019, 1 commit
Committed by Chen Weihang
* replace part of the old implementation, test=develop
* restore concat op, test=develop
* update all ops implementation & delete GetDataTypeOfVar func, test=develop

- 04 September 2019, 1 commit
Committed by Chen Weihang
Add user-friendly error message in optimizer ops to give a hint about the position-sensitive problem of run(startup_program) (#19605)

* add extra error message hint in optimizer ops
* polish format & delete useless change, test=develop
* extract init judge from shape compare, test=develop

- 21 May 2019, 1 commit
Committed by Yibing Liu
* Add LAMB optimizer
* Expose LAMB Optimizer's APIs test=develop, test=document_preview
* Cleanup code & doc test=develop, test=document_preview
* Update lamb optimizer's formula test=develop
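
For context, the layer-wise update rule from the LAMB paper (You et al., 2019) looks roughly as follows; the exact formula registered by lamb_op may differ (e.g. in how the weight decay \lambda and the trust ratio are handled):

    \begin{aligned}
    m_t &= \beta_1 m_{t-1} + (1 - \beta_1)\, g_t \\
    v_t &= \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2 \\
    r_t &= \frac{m_t / (1 - \beta_1^t)}{\sqrt{v_t / (1 - \beta_2^t)} + \epsilon} \\
    w_{t+1} &= w_t - \eta\, \frac{\lVert w_t \rVert}{\lVert r_t + \lambda w_t \rVert}\, (r_t + \lambda w_t)
    \end{aligned}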

- 15 January 2019, 1 commit
Committed by Qiao Longfei

- 07 January 2019, 1 commit
Committed by Qiao Longfei
test=develop

- 14 December 2018, 1 commit
Committed by Qiao Longfei

- 13 December 2018, 1 commit
Committed by Qiao Longfei

- 12 December 2018, 2 commits

- 11 December 2018, 1 commit
Committed by minqiyang

- 16 November 2018, 1 commit
Committed by Wu Yi
* wip simplify operator framework
* wip
* wip
* done test=develop
* clean test=develop
* fix test=develop
* fix deps test=develop
* fix cpu build test=develop
* fix tensorrt build test=develop
* fix tests test=develop
* fix test=develop
* fix cpu build test=develop

- 22 October 2018, 1 commit
Committed by Xin Pan
test=develop

- 27 June 2018, 1 commit
Committed by qiaolongfei

- 11 June 2018, 1 commit
Committed by dzhwinter
* "add inplace attribute" * "register inplace attribute" * "change se-next model for memory-reuse" * "fix typo" * repick * fix merge conflict * "fix stupid error"

- 08 May 2018, 1 commit
Committed by Yu Yang
Do not use ctor

* Reduce lines of code.
* We can use virtual functions for Maker now.
* The implementation does not care what the maker holds, so it is easier to refactor later.

- 06 May 2018, 1 commit
Committed by dzhwinter
* "optimizer op support float64" * "fix ci" * "fix ftrl op"

- 12 February 2018, 1 commit
Committed by qingqing01

- 10 February 2018, 2 commits

- 20 December 2017, 1 commit
Committed by Yu Yang
* Move framework.proto to proto namespace
* Fix compile
* Fix compile
* Fix Compile

- 12 December 2017, 2 commits
Committed by QI JUN
The main fixes are:

- take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place`
- remove `eigen_device` interface in base class `DeviceContext`
- remove `GetEigenDevice` interface in `ExecutionContext` and base class `DeviceContext`
- remove unused `platform::EigenDeviceConverter`
- rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`
- rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`

Committed by kavyasrinet
* Updating the LaTeX equation for Adagrad
* Fixing LaTeX equations for adadelta, adam and adamax
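
For reference, the standard Adagrad update these docs describe is as follows (generic notation; not necessarily the exact form used in the op docs):

    \begin{aligned}
    G_t &= G_{t-1} + g_t^2 \\
    w_{t+1} &= w_t - \frac{\eta}{\sqrt{G_t} + \epsilon}\, g_t
    \end{aligned}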

- 21 November 2017, 1 commit
Committed by Yu Yang
* Support many data types in several operators
* SeqConv only supports float/double
* Revert adagrad

- 05 November 2017, 1 commit
Committed by kavyasrinet
* Adding the doc format for AdaDelta
* Updating the documentation for Adagrad, Adam and Adamax
* Updating the auc op
* Fix review comments
* Updating doc for Batch Norm
* Updating the cast op
* Updating the clip op
* Fixing review comment
* Fixing review comment
* Small change to restart PR_CI

- 20 October 2017, 1 commit
Committed by Abhinav Arora

- 17 October 2017, 1 commit
Committed by Yu Yang
They are public now

- 13 October 2017, 1 commit
Committed by Abhinav Arora
* add adam op

    moment1_out = beta1 * moment1 + (1 - beta1) * grad
    moment2_out = beta2 * moment2 + (1 - beta2) * grad * grad
    moment1_hat = moment1_out / (1 - beta1^t)
    moment2_hat = moment2_out / (1 - beta2^t)
    param_out = param - learning_rate * moment1_hat / (sqrt(moment2_hat) + epsilon)

* fix moment 2
* Adding the Adam optimization operator
* Adding more tests for Adam op
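
A minimal NumPy sketch of the update rule spelled out above; variable names follow the commit message, and this is an illustration rather than the operator's actual kernel:

    import numpy as np

    def adam_update(param, grad, moment1, moment2, t,
                    learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8):
        # Exponential moving averages of the gradient and its square.
        moment1_out = beta1 * moment1 + (1 - beta1) * grad
        moment2_out = beta2 * moment2 + (1 - beta2) * grad * grad
        # Bias correction for the zero-initialized moments at step t.
        moment1_hat = moment1_out / (1 - beta1 ** t)
        moment2_hat = moment2_out / (1 - beta2 ** t)
        # Parameter update.
        param_out = param - learning_rate * moment1_hat / (np.sqrt(moment2_hat) + epsilon)
        return param_out, moment1_out, moment2_out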