- 04 Sep 2020, 1 commit

Submitted by MRXLT
update optimizer (#26711)
* update doc
* update doc
* fix optimizer sample code
* add default value for adamw weight_decay
* fix adamw
* change LearningRateDecay to _LRScheduler
* fix adamw; notest
* fix load; notest
* remove file
* bug fix
* fix code style
* bug fix
* add ut
* adamw support weight_decay=0
* fix ut
* fix set_lr doc
* fix doc
* change parameters place
* fix sample code
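To make the AdamW items above concrete, here is a minimal sketch of using decoupled weight decay through the Paddle 2.x `paddle.optimizer.AdamW` interface. The model, shapes, and the `weight_decay=0.01` value are illustrative assumptions, not taken from this commit.

```python
import paddle

# Minimal sketch, assuming the Paddle 2.x dygraph API:
# a tiny linear layer whose parameters go to AdamW.
paddle.seed(0)
linear = paddle.nn.Linear(10, 1)

# weight_decay=0.0 should also be accepted after this change
# ("adamw support weight_decay=0"); 0.01 enables decoupled decay.
opt = paddle.optimizer.AdamW(
    learning_rate=0.001,
    parameters=linear.parameters(),
    weight_decay=0.01,
)

x = paddle.rand([4, 10])
loss = paddle.mean(linear(x))
loss.backward()
opt.step()          # one AdamW update (Adam step plus decoupled decay)
opt.clear_grad()    # reset gradients for the next iteration
```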
- 02 Sep 2020, 1 commit

Submitted by tangwei12
* add embedding 2.0
* add embedding support input int32
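A small hedged sketch of what the int32-input support might look like at the Python level, assuming the `paddle.nn.Embedding` API of Paddle 2.x; the vocabulary size, embedding width, and ids are illustrative.

```python
import paddle

# Minimal sketch, assuming paddle.nn.Embedding in Paddle 2.x:
# a lookup table of 100 tokens, each mapped to an 8-dim vector.
emb = paddle.nn.Embedding(num_embeddings=100, embedding_dim=8)

# int32 ids are the point of "add embedding support input int32";
# int64 ids work as well.
ids = paddle.to_tensor([[3, 7, 15]], dtype='int32')
out = emb(ids)
print(out.shape)  # [1, 3, 8]
```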
- 28 Aug 2020, 1 commit

Submitted by Zhou Wei
* support 2.0 lr_scheduler for 2.0 optimizer
* fix unittest
* fix doc
* fix unittest
* fix sample code, fix unittest
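A hedged sketch of how a 2.0 scheduler plugs into a 2.0 optimizer, assuming the released Paddle 2.x scheduler API; the choice of `StepDecay` and all hyperparameters here are assumptions for illustration.

```python
import paddle

# Minimal sketch, assuming the Paddle 2.x scheduler API
# (StepDecay is just one concrete LRScheduler subclass).
linear = paddle.nn.Linear(10, 1)
scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.1, step_size=2, gamma=0.5)

# The scheduler object itself is passed as the optimizer's learning_rate.
opt = paddle.optimizer.SGD(learning_rate=scheduler, parameters=linear.parameters())

for epoch in range(4):
    x = paddle.rand([4, 10])
    loss = paddle.mean(linear(x))
    loss.backward()
    opt.step()
    opt.clear_grad()
    scheduler.step()  # advance the schedule once per epoch
    print(epoch, scheduler.get_lr())
```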
- 23 Aug 2020, 1 commit

Submitted by MRXLT
refine Optimizer/Adam/Adamax/RMSProp && add AdamW
* bug fix
* update comment
* unify arguments place; notest
* fix ut, test=develop
* bug fix
* fix conflicts, test=develop
* add examples code
* bug fix
* fix comments
* fix sample code
* add sample code for Optimizer
* add adamax ut, test=develop
* fix rmsprop ut, test=develop
* add ut for optimizer.py and adamw.py
* remove TestAdamOptimizerBetaVariable
* update api && add ut
* update doc && fix ut
* add ut
Co-authored-by: mapingshuo <mps2012@yeah.net>
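The "unify arguments place" item suggests the refined optimizers share one constructor layout. A hedged sketch, assuming the Paddle 2.x names, of constructing several of them the same way; the learning rates and decay value are placeholders.

```python
import paddle

# Minimal sketch, assuming the Paddle 2.x optimizer family shares the
# learning_rate / parameters / weight_decay argument layout.
linear = paddle.nn.Linear(10, 1)
params = linear.parameters()

adam    = paddle.optimizer.Adam(learning_rate=0.001, parameters=params)
adamax  = paddle.optimizer.Adamax(learning_rate=0.001, parameters=params)
rmsprop = paddle.optimizer.RMSProp(learning_rate=0.001, parameters=params)
adamw   = paddle.optimizer.AdamW(learning_rate=0.001, parameters=params,
                                 weight_decay=0.01)
```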
- 24 Dec 2019, 1 commit

Submitted by Aurelius84
* optimize adam speed by removing _finish_update test=develop
* fix SparseAdamFunctor param list test=develop
* Remove scale_op in expect_list of adam_op test=develop
* fix test optimizer loss assert error test=develop
* fix test optimizer loss assert error test=develop
* modify PADDLE_ENFORCE usage test=develop
* fix op_type in lamb_op.cc test=develop
* fix errors ostream format bug test=develop
* add betaPowOut in ngraph op test=develop
* fix ngraph::op api for gcc8 test=develop
* clean code test=develop
* modify struct into class test=develop
* remove code of beta1Tensor in lamb_op test=develop
- 28 Nov 2019, 1 commit

Submitted by Kaipeng Deng
* add Adam beta1/beta2 support Variable. test=develop
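This change lets beta1/beta2 be supplied as Variables rather than Python floats. A hedged sketch of the idea through the later Paddle 2.x imperative API (the original commit targets the fluid `AdamOptimizer`; the 2.x spelling and Tensor-valued betas shown here are assumptions based on the released 2.x interface).

```python
import paddle

# Minimal sketch, assuming Paddle 2.x Adam accepts beta1/beta2 either as
# Python floats or as scalar Tensors.
linear = paddle.nn.Linear(10, 1)
beta1 = paddle.to_tensor([0.9], dtype='float32')
beta2 = paddle.to_tensor([0.999], dtype='float32')

opt = paddle.optimizer.Adam(
    learning_rate=0.001,
    parameters=linear.parameters(),
    beta1=beta1,
    beta2=beta2,
)

loss = paddle.mean(linear(paddle.rand([4, 10])))
loss.backward()
opt.step()
opt.clear_grad()
```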
- 15 Jan 2019, 1 commit

Submitted by Qiao Longfei
- 07 Jan 2019, 1 commit

Submitted by Qiao Longfei
test=develop
- 27 Dec 2018, 1 commit

Submitted by Qiao Longfei
- 17 Dec 2018, 1 commit

Submitted by Qiao Longfei
test=develop
- 14 Dec 2018, 1 commit

Submitted by Qiao Longfei
- 13 Dec 2018, 1 commit

Submitted by Qiao Longfei
- 15 Aug 2018, 1 commit

Submitted by minqiyang
- 26 Jul 2018, 2 commits
- 24 Feb 2018, 2 commits
- 13 Feb 2018, 1 commit

Submitted by Xin Pan
Currently our tests run with 2 GPUs, and the init time is absurdly long: about 4s for each process. Currently we run each OP test in a separate process. This PR:
1. Creates a cmake function py_test_modules which generates the Makefile rules that run a list of Python unittest modules in a single Python process (see the sketch after this entry).
2. Moves all "python unittest compatible" files (i.e. those that use the unittest package, not just regular Python files) from fluid/tests to fluid/tests/unittests.
3. cmake now runs all OP tests in fluid/tests/unittests in a single process, except the time-consuming tests, which are separated into different processes to exploit parallelism. Please make sure to use the unittest package if you put a Python test file in fluid/tests/unittests.
4. Removes all exit(0) from fluid/tests/unittests/*.py. exit(0) was used to disable a unittest, but we cannot do that when running all tests in a single process, since it would terminate the process without running the remaining tests. Instead, the test is disabled in fluid/tests/unittests/CMakeLists.txt, and a FIXME is added for each disabled item. Please disable unittests from fluid/tests/unittests/CMakeLists.txt, instead of adding exit(0) to the Python file, for all Python files in fluid/tests/unittests/.
5. Adds an option WITH_FAST_BUNDLE_TEST. When OFF, the unit tests run in separate processes so that they can be tested individually.
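The core idea in point 1, running a list of Python unittest modules inside one process instead of one process per test, can be sketched with nothing but the standard library. The script name and module names below are hypothetical placeholders, not part of the actual py_test_modules implementation.

```python
import sys
import unittest

def run_modules_in_one_process(module_names):
    """Load every named unittest module and run them in a single process,
    avoiding the per-process startup cost (e.g. GPU initialization)."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for name in module_names:
        # loadTestsFromName imports the module and collects its TestCases.
        suite.addTests(loader.loadTestsFromName(name))
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    return 0 if result.wasSuccessful() else 1

if __name__ == "__main__":
    # e.g. python run_bundle.py test_mul_op test_adam_op
    sys.exit(run_modules_in_one_process(sys.argv[1:]))
```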
- 12 Feb 2018, 1 commit

Submitted by qingqing01
- 24 Jan 2018, 1 commit

Submitted by Yang Yu
The English of the previous API is bad.
- 21 Jan 2018, 1 commit

Submitted by dzhwinter
* "fix decode bug"
* "follow comment"
* "fix error"
* "fix hook bug"
* fix based comment
* fix copyright
* fix based on comment
- 15 Jan 2018, 1 commit

Submitted by dzhwinter
* add copyright hook
* add copyright hook
* refine copyright hook
* "test copyright hook"
* fix check style
* fix ci
- 29 Dec 2017, 1 commit

Submitted by typhoonzero
- 27 Dec 2017, 1 commit

Submitted by typhoonzero
- 26 Dec 2017, 2 commits

Submitted by typhoonzero

Submitted by typhoonzero
- 14 Nov 2017, 1 commit

Submitted by Qiao Longfei
* init commit
* change some dir name
- 20 Oct 2017, 1 commit

Submitted by Abhinav Arora
- 13 Oct 2017, 1 commit

Submitted by Abhinav Arora
* add adam op

  moment1_out = beta1 * moment1 + (1 - beta1) * grad
  moment2_out = beta2 * moment2 + (1 - beta2) * grad * grad
  moment1_hat = moment1_out / (1 - beta1^t)
  moment2_hat = moment2_out / (1 - beta2^t)
  param_out = param - learning_rate * moment1_hat / (sqrt(moment2_hat) + epsilon)

* fix moment 2
* Adding the Adam optimization operator
* Adding more tests for Adam op
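The update rules listed above translate directly into a few lines of NumPy. This is a minimal sketch following the same equations, not the operator's actual C++/CUDA kernel; the hyperparameter defaults and the tiny usage example are illustrative.

```python
import numpy as np

def adam_update(param, grad, moment1, moment2, t,
                learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8):
    """One Adam step, written exactly as the equations in the commit message.
    A NumPy sketch for illustration only."""
    moment1_out = beta1 * moment1 + (1 - beta1) * grad
    moment2_out = beta2 * moment2 + (1 - beta2) * grad * grad
    moment1_hat = moment1_out / (1 - beta1 ** t)      # bias correction
    moment2_hat = moment2_out / (1 - beta2 ** t)
    param_out = param - learning_rate * moment1_hat / (np.sqrt(moment2_hat) + epsilon)
    return param_out, moment1_out, moment2_out

# Tiny usage example with a single step (t starts at 1).
p = np.zeros(3)
g = np.array([0.1, -0.2, 0.3])
m1 = np.zeros(3)
m2 = np.zeros(3)
p, m1, m2 = adam_update(p, g, m1, m2, t=1)
print(p)
```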