- 14 Jul 2021, 1 commit

  Committed by Leo Chen:

  * adam: add input SkipUpdate (see the sketch below)
  * add unittest
  * add npu unittest
  * fix xpu compile
  * remove param stream
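The SkipUpdate input makes it possible to turn a step into a no-op at run time (useful for gradient accumulation or AMP loss-scale skipping) without rebuilding the program. A minimal NumPy sketch of the idea; the function name and signature here are illustrative, not Paddle's actual kernel interface:

```python
import numpy as np

def adam_step(param, grad, m, v, beta1_pow, beta2_pow, lr=1e-3,
              beta1=0.9, beta2=0.999, eps=1e-8, skip_update=False):
    """One Adam step; with skip_update set, every output equals its input."""
    if skip_update:
        return param, m, v, beta1_pow, beta2_pow
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    beta1_pow, beta2_pow = beta1_pow * beta1, beta2_pow * beta2
    m_hat = m / (1 - beta1_pow)                  # bias correction
    v_hat = v / (1 - beta2_pow)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v, beta1_pow, beta2_pow
```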
- 21 Jun 2021, 1 commit

  Committed by Leo Chen:

  * enable npu alignment
  * support flatten_params/grads
  * support clip by global norm (sketch below)
  * remove memset in coalesce_tensor_op
  * fix npu kernel of sum op when input is one tensor
  * add ut for flatten_param_grads + regularizer
  * fix ut
  * fix typo
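Clipping by global norm treats all gradients as one flattened vector and rescales every gradient by the same factor when their combined norm exceeds the threshold, which preserves the overall gradient direction. A minimal NumPy sketch of that rule (not the Paddle kernel):

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # norm of the virtual concatenation of all gradients
    global_norm = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
    if global_norm <= clip_norm:
        return grads
    scale = clip_norm / global_norm
    return [g * scale for g in grads]

grads = [np.ones((2, 2)) * 3.0, np.ones(4) * 4.0]  # global norm = 10
print(clip_by_global_norm(grads, 1.0))            # every gradient scaled by 1/10
```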
- 09 Jun 2021, 1 commit

  Committed by Leo Chen.
- 31 May 2021, 1 commit

  Committed by wangguanzhong:

  * support parameter groups (sketch below)
  * simplify updating opt attr
  * update according to review
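Parameter groups let subsets of parameters override the optimizer-wide defaults while sharing one optimizer instance. A hypothetical sketch of the bookkeeping, with illustrative names rather than Paddle's exact API:

```python
# hypothetical parameter-groups pattern; class and key names are illustrative
class GroupedOptimizer:
    def __init__(self, parameter_groups, learning_rate):
        self.groups = []
        for group in parameter_groups:
            cfg = {"learning_rate": learning_rate}  # start from global defaults
            cfg.update({k: v for k, v in group.items() if k != "params"})
            self.groups.append((group["params"], cfg))

opt = GroupedOptimizer(
    [{"params": ["w1", "w2"], "learning_rate": 1e-4},
     {"params": ["w3"]}],          # this group inherits the global default
    learning_rate=1e-3,
)
print([cfg for _, cfg in opt.groups])  # per-group resolved hyperparameters
```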
- 13 May 2021, 1 commit

  Committed by Leo Chen:

  * add use_global_beta_pow (sketch below)
  * update npu kernel
  * update python api
  * refine code
  * add ut for use_global_beta_pow
  * fix npu kernel
  * add ut for api
  * add ut for exception
  * add ut for save/load
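The accumulators beta1^t and beta2^t are per-step state that every parameter's bias correction divides by; use_global_beta_pow has all parameters share one pair of accumulators instead of each keeping its own copy. A sketch of the shared bookkeeping (illustrative):

```python
# shared bias-correction accumulators (a sketch of the use_global_beta_pow idea)
beta1, beta2 = 0.9, 0.999
beta1_pow, beta2_pow = 1.0, 1.0   # one global pair, not one pair per parameter

for step in range(3):
    beta1_pow *= beta1            # beta1 ** (step + 1)
    beta2_pow *= beta2
    bias_correction1 = 1 - beta1_pow
    bias_correction2 = 1 - beta2_pow
    # every parameter's update divides by the same corrections this step
    print(step + 1, bias_correction1, bias_correction2)
```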
- 28 Apr 2021, 1 commit

  Committed by Leo Chen:

  * add input EpsilonTensor for adam (sketch below)
  * update python api
  * add unit test
  * add npu test
  * add more ut
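Accepting epsilon as a tensor input rather than an operator attribute means the value arrives as data and can change between steps without rebuilding the program. A small illustrative contrast (the names are assumptions, not the op's real interface):

```python
import numpy as np

# attribute style: epsilon baked in when the op is constructed
def adam_denominator_attr(v_hat, eps=1e-8):
    return np.sqrt(v_hat) + eps

# tensor-input style: epsilon arrives as data, changeable between steps
def adam_denominator_tensor(v_hat, eps_tensor):
    return np.sqrt(v_hat) + eps_tensor

print(adam_denominator_tensor(np.array([0.25]), np.array([1e-6])))
```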
- 14 Oct 2020, 1 commit

  Committed by chentianyu03:

  * modify cond while_loop to paddle.static.nn.cond
  * modify crop_tensor to paddle.crop
  * modify Variable to paddle.static.Variable
  * remove nn.beam_search, nn.beam_search_decode, nn.gather_tree
  * remove bpr_loss, center_loss, rank_loss, smooth_l1, teacher_student_sigmoid_loss, edit_distance, sampled_softmax_with_cross_entropy in nn.functional
  * remove apis in nn.functional.learn_rate.py
  * remove pool2d, pool3d, adaptive_pool2d, adaptive_pool3d in nn.functional
  * remove apis in nn.functional.vision
  * remove erf, soft_relu in nn.functional.activation
  * remove apis in nn.functional.extension
  * remove nn.functional.rnn
  * remove hash from nn.functional.lod
  * remove row_conv from nn.functional.extension
  * remove one_hot, pad2d, pad_constant_like from nn.functional.common
  * remove nn.gather_tree, nn.BilinearTensorProduct, nn.Pool2D, nn.Pad2D
  * remove apis from optimizer.__init__
  * remove tensor.creation.fill_constant
  * remove elementwise_mul in nn.functional.common and modify to paddle.multiply
  * remove tensor.stat.reduce_mean
  * remove reduce_all, reduce_any in tensor.logic
  * remove apis in tensor.math
  * remove apis in tensor.__init__
  * remove has_inf, has_nan in tensor.search
  * remove apis in framework.__init__
  * remove apis in paddle.__init__
  * remove apis in nn.functional.__init__
  * modify removed alias apis to raw api in doc and unittests (repeated over several follow-up fixes)
  * fix remove grid_sample bug
  * delete alias api relations in doc
  * reserve paddle.compat, paddle.sysconfig
  * remove unittest for paddle.reduce_all, paddle.reduce_any
  * recover paddle.save and paddle.load
  * resolve conflicts
  * fix sample code missing paddle.enable_static() bug
  * fix to_string sample code error
- 13 Oct 2020, 1 commit

  Committed by Zhou Wei:

  * fix doc and unittest of 2.0 lr_scheduler
  * fix doc of 2.0 lr_scheduler
  * fix unittest
  * fix English doc of lr_scheduler
  * fix api name of lr scheduler
- 14 Sep 2020, 1 commit

  Committed by MRXLT:

  * add check for sparse parameters with weight_decay
  * move sparse check to adam.py
- 01 Sep 2020, 2 commits

  Committed by tangwei12:

  * add embedding 2.0
  * add embedding support for int32 input

  Committed by MRXLT:

  * update doc
  * fix optimizer sample code
  * add default value for adamw weight_decay
  * fix adamw
  * change LearningRateDecay to _LRScheduler
  * fix load
  * remove file
  * bug fix
  * fix code style
  * add ut
  * adamw support weight_decay=0 (sketch below)
  * fix ut
  * fix set_lr doc
  * fix doc
  * change parameters place
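AdamW applies weight decay to the parameter directly, decoupled from the gradient-based Adam step, rather than folding it into the gradient as L2 regularization; weight_decay=0 then cleanly degenerates to plain Adam. A NumPy sketch of that difference (illustrative, not Paddle's adamw implementation):

```python
import numpy as np

def adamw_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.0):
    # decoupled decay: shrink the parameter directly (no-op when weight_decay == 0)
    param = param * (1 - lr * weight_decay)
    # then the ordinary Adam step on the raw gradient
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

print(adamw_step(1.0, 0.5, 0.0, 0.0, t=1, weight_decay=0.01))
```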
- 28 Aug 2020, 1 commit

  Committed by Zhou Wei:

  * support 2.0 lr_scheduler for 2.0 optimizer
  * fix unittest
  * fix doc
  * fix sample code, fix unittest
- 23 Aug 2020, 1 commit

  Committed by MRXLT:

  refine Optimizer/Adam/Adamax/RMSProp and add AdamW

  * bug fix
  * update comment
  * unify arguments place
  * fix ut
  * fix conflicts
  * add examples code
  * fix comments
  * fix sample code
  * add sample code for Optimizer
  * add adamax ut
  * fix rmsprop ut
  * add ut for optimizer.py and adamw.py
  * remove TestAdamOptimizerBetaVariable
  * update api && add ut
  * update doc && fix ut
  * add ut

  Co-authored-by: mapingshuo <mps2012@yeah.net>
- 24 Dec 2019, 1 commit

  Committed by Aurelius84:

  * optimize adam speed by removing _finish_update
  * fix SparseAdamFunctor param list
  * remove scale_op in expect_list of adam_op
  * fix test optimizer loss assert error
  * modify PADDLE_ENFORCE usage
  * fix op_type in lamb_op.cc
  * fix errors ostream format bug
  * add betaPowOut in ngraph op
  * fix ngraph::op api for gcc8
  * clean code
  * modify struct into class
  * remove code of beta1Tensor in lamb_op
- 28 Nov 2019, 1 commit

  Committed by Kaipeng Deng:

  * add support for passing Adam beta1/beta2 as Variables
- 15 Jan 2019, 1 commit

  Committed by Qiao Longfei.
- 07 Jan 2019, 1 commit

  Committed by Qiao Longfei.
- 27 Dec 2018, 1 commit

  Committed by Qiao Longfei.
- 17 Dec 2018, 1 commit

  Committed by Qiao Longfei.
- 14 Dec 2018, 1 commit

  Committed by Qiao Longfei.
- 13 Dec 2018, 1 commit

  Committed by Qiao Longfei.
- 15 Aug 2018, 1 commit

  Committed by minqiyang.
- 26 Jul 2018, 2 commits
- 24 Feb 2018, 2 commits
- 13 Feb 2018, 1 commit

  Committed by Xin Pan:

  Currently, our tests run with 2 GPUs and the init time is absurdly long: about 4 s for each process, because we run each OP test in a separate process. This PR:

  1. Creates a cmake function py_test_modules, which generates a Makefile that runs a list of Python unittest modules in a single Python process (a sketch follows this list).
  2. Moves all "python unittest compatible" files (i.e., those that use the unittest package, not just regular Python files) from fluid/tests to fluid/tests/unittests.
  3. cmake now runs all OP tests in fluid/tests/unittests in a single process, except the time-consuming tests, which are separated into different processes to exploit parallelism. Please make sure to use the unittest package if you put a Python test file in fluid/tests/unittests.
  4. Removes all exit(0) calls from fluid/tests/unittests/*.py. exit(0) was used to disable a unittest, but that cannot work when all tests run in a single process, since it would terminate the process without running the remaining tests. Instead, tests are disabled in fluid/tests/unittests/CMakeLists.txt, with a FIXME added for each disabled item. Please disable unittests from fluid/tests/unittests/CMakeLists.txt rather than adding exit(0) to the Python file.
  5. Adds an option WITH_FAST_BUNDLE_TEST. When OFF, the unit tests run in separate processes so they can be tested individually.
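The payoff of py_test_modules is amortizing the roughly 4 s per-process startup across many test modules by loading them into a single interpreter. A rough sketch of what such a bundled runner amounts to (illustrative; not the actual cmake-generated command):

```python
# sketch: run several unittest modules in one interpreter to share startup cost
import sys
import unittest

suite = unittest.TestSuite()
for module_name in sys.argv[1:]:
    suite.addTests(unittest.defaultTestLoader.loadTestsFromName(module_name))

result = unittest.TextTestRunner().run(suite)
sys.exit(0 if result.wasSuccessful() else 1)
```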
- 12 Feb 2018, 1 commit

  Committed by qingqing01.
- 24 Jan 2018, 1 commit

  Committed by Yang Yu:

  The English of the previous API was bad.
- 21 Jan 2018, 1 commit

  Committed by dzhwinter:

  * fix decode bug
  * follow comment
  * fix error
  * fix hook bug
  * fix based on comment
  * fix copyright
- 15 Jan 2018, 1 commit

  Committed by dzhwinter:

  * add copyright hook
  * refine copyright hook
  * test copyright hook
  * fix check style
  * fix ci
- 29 Dec 2017, 1 commit

  Committed by typhoonzero.
- 27 Dec 2017, 1 commit

  Committed by typhoonzero.
- 26 Dec 2017, 2 commits

  Both committed by typhoonzero.
- 14 Nov 2017, 1 commit

  Committed by Qiao Longfei:

  * init commit
  * change some dir name
- 20 Oct 2017, 1 commit

  Committed by Abhinav Arora.
- 13 Oct 2017, 1 commit

  Committed by Abhinav Arora:

  * add adam op (a runnable transcription of these formulas follows below)

        moment1_out = beta1 * moment1 + (1 - beta1) * grad
        moment2_out = beta2 * moment2 + (1 - beta2) * grad * grad
        moment1_hat = moment1_out / (1 - beta1^t)
        moment2_hat = moment2_out / (1 - beta2^t)
        param_out = param - learning_rate * moment1_hat / (sqrt(moment2_hat) + epsilon)

  * fix moment 2
  * Adding the Adam optimization operator
  * Adding more tests for Adam op
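A direct, runnable transcription of the update above (scalar case) shows the bias correction at work: at t=1 the corrected first moment already equals the raw gradient, so the very first step has size on the order of the learning rate:

```python
import numpy as np

beta1, beta2, eps, lr, t = 0.9, 0.999, 1e-8, 0.1, 1
param, grad = 1.0, 0.5
moment1, moment2 = 0.0, 0.0

moment1 = beta1 * moment1 + (1 - beta1) * grad            # 0.05
moment2 = beta2 * moment2 + (1 - beta2) * grad * grad     # 0.00025
moment1_hat = moment1 / (1 - beta1 ** t)                  # == grad at t=1
moment2_hat = moment2 / (1 - beta2 ** t)                  # == grad**2 at t=1
param -= lr * moment1_hat / (np.sqrt(moment2_hat) + eps)
print(param)  # ~0.9: |moment1_hat| / sqrt(moment2_hat) == 1, so the step is ~lr
```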