1. 12 Jan, 2021 1 commit
  2. 11 Jan, 2021 1 commit
    • [Cherry-Pick] Support pure fp16 training for AMP API. (#29544) (#30241) · d8dfef54
      Committed by Zhen Wang
      * Support pure fp16 training for AMP API. (#29544)
      
      * add cast ops before and after unsupported fp16 ops.
      
      * Keep partial net in FP32 pattern.
      
      * Support check_finite_and_unscale and update_loss_scaling for FP16 calculation mode.
      
      * Add fp16 support for adam op.
      
      * add multi precision attr for adam.
      
      * Fix the bug of test_multi_precision_fp16_train UT.
      
      * Code format for CI.
      
      * Fix the redefinition error of MPTypeTrait on Windows.
      
      * fix bugs of the _create_accumulators func in Momentum.
      
      * fix bug when inserting post cast op.
      
      * Add the update_loss_scaling op in allow_set of UnusedVarCheck.
      
      * Update for ci coverage.
      
      * Add some doc for OptimizerWithMixedPrecision.
      
      * Fix the code style.
      
      * Improve the doc of `amp_init`.
      
      * Adapt fp16 testing for users who define the infer program separately.
      
      * Remove tensor copy in the update_loss_scaling op. (#29426)
      
      * remove tensor copy in the update_loss_scaling op
      
      * not use thrust.
      
      * fix some cuda memory access error.
      d8dfef54
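The `check_finite_and_unscale` / `update_loss_scaling` ops mentioned above implement dynamic loss scaling. A minimal NumPy sketch of that scheme follows; the function names, defaults, and return shapes here are illustrative, not Paddle's actual op signatures:

```python
import numpy as np

def check_finite_and_unscale(grads, loss_scale):
    """Unscale gradients and report whether any value overflowed to inf/nan."""
    found_inf = any(not np.all(np.isfinite(g)) for g in grads)
    if not found_inf:
        grads = [g / loss_scale for g in grads]
    return grads, found_inf

def update_loss_scaling(loss_scale, found_inf, good_steps,
                        incr_every_n=1000, incr_ratio=2.0, decr_ratio=0.5):
    """Shrink the scale on overflow; grow it after N consecutive clean steps."""
    if found_inf:
        return max(loss_scale * decr_ratio, 1.0), 0
    good_steps += 1
    if good_steps >= incr_every_n:
        return loss_scale * incr_ratio, 0
    return loss_scale, good_steps
```

The scale grows geometrically while training is stable and halves on overflow, which keeps fp16 gradients inside the representable range most of the time.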
  3. 03 Dec, 2020 1 commit
    • [Cherry-pick] Add pure fp16 training with master weights. (#29301) · d8ea8a06
      Committed by Zhen Wang
      * Add pure fp16 training with master weights. (#27712)
      
      * add the weight decay func for the momentum op
      
      * Add the multi_precision function in Momentum Optimizer.
      
      * Make sure that the initial values of the master weights are the same as the fp16 weights.
      
      * add static loss scaling.
      
      * add the rescale_grad function in the pure fp16 training.
      
      * use the original momentum updating method.
      
      * Polish some codes, such as variable names.
      
      * add docstring for apis.
      
      * update the var creation details of _create_master_weight.
      
      * not modify codes about imperative momentum updating.
      
      * Fix the error of test_dist_sparse_tensor_load_momentum UT.
      
      * add unit test for multi precision fp16 training.
      
      * add more unit tests for CI.
      
      * Use lower threshold values for the allclose comparison in the test_multi_precision_fp16_train UT.
      d8ea8a06
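The master-weights idea above keeps an fp32 copy of each fp16 parameter and runs the momentum arithmetic in fp32. A hedged NumPy sketch, assuming a plain (non-Nesterov) momentum rule; `momentum_step` and its defaults are hypothetical, not Paddle's API:

```python
import numpy as np

def momentum_step(master32, velocity, grad16, lr=0.1, mu=0.9, rescale_grad=1.0):
    """One momentum update performed on the fp32 master weight.

    The gradient arrives in fp16; all arithmetic runs in fp32, and the
    fp16 parameter is then refreshed from the master copy.
    """
    grad = grad16.astype(np.float32) * rescale_grad
    velocity = mu * velocity + grad
    master32 = master32 - lr * velocity
    return master32.astype(np.float16), master32, velocity
```

Accumulating into fp32 avoids the precision loss that occurs when small updates are added directly to fp16 weights.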
  4. 23 Nov, 2020 1 commit
  5. 06 Nov, 2020 1 commit
  6. 19 Oct, 2020 2 commits
    • xpu adam op (#28031) · 6f0c3d1f
      Committed by yinhaofeng
      * lookup_table_xpu op reports errors; test=kunlun
      
      * add adam xpu op;test=kunlun
      
      * reset lookup
      
      * fix wrong adam behavior; test=kunlun
      6f0c3d1f
    • Fix xpu error message (#28061) · 5f04875c
      Committed by Chengmo
      * fix error message, test=kunlun
      
      * fix, test=kunlun
      5f04875c
  7. 14 Oct, 2020 2 commits
  8. 13 Oct, 2020 1 commit
  9. 27 Sep, 2020 1 commit
  10. 22 Sep, 2020 1 commit
  11. 21 Sep, 2020 1 commit
  12. 09 Sep, 2020 1 commit
  13. 29 Aug, 2020 1 commit
    • Adadelta Optimizer (#26590) · a1b99fae
      Committed by Jiawei Wang
      * add doc; notest
      
      * fix doc; notest
      
      * update doc; notest
      
      * refine optimizer && adam
      
      * refine optimizer; notest
      
      * add adam
      
      * fix doc
      
      * fix doc && add adamw; notest
      
      * add error message
      
      * bug fix
      
      * refine rmsprop && adamax
      
      * fix ci
      
      * bug fix
      
      * update comment
      
      * unify arguments place; notest
      
      * fix ut, test=develop
      
      * bug fix
      
      * fix conflicts, test=develop
      
      * add examples code
      
      * bug fix
      
      * fix comments
      
      * fix sample code
      
      * add sample code for Optimizer
      
      * add adamax ut, test=develop
      
      * fix rmsprop ut, test=develop
      
      * add ut for optimizer.py and adamw.py
      
      * first commit of adadelta optimizer
      
      * fix learning rate
      
      * fix adadelta doc and add sgd momentum
      
      * remove unused fluid
      
      * fix codestyle
      
      * Update test_adam_op.py
      
      * Update test_adam_op.py
      
      * fix SGD in 2 unittests
      
      * fix SGD in 2 unittests
      
      * fix ci
      
      * fix ut
      Co-authored-by: MRXLT <xlt2024@gmail.com>
      Co-authored-by: mapingshuo <mps2012@yeah.net>
      a1b99fae
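For reference, the Adadelta rule this optimizer implements keeps running averages of squared gradients and squared updates and needs no global learning rate. An illustrative NumPy sketch; `adadelta_step` and the `rho`/`eps` defaults are the textbook values, not necessarily Paddle's:

```python
import numpy as np

def adadelta_step(param, grad, avg_sq_grad, avg_sq_update, rho=0.95, eps=1e-6):
    """One Adadelta step: two running averages, no global learning rate."""
    avg_sq_grad = rho * avg_sq_grad + (1 - rho) * grad ** 2
    update = -np.sqrt(avg_sq_update + eps) / np.sqrt(avg_sq_grad + eps) * grad
    avg_sq_update = rho * avg_sq_update + (1 - rho) * update ** 2
    return param + update, avg_sq_grad, avg_sq_update
```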
  14. 28 Aug, 2020 1 commit
  15. 11 Jul, 2020 1 commit
  16. 03 Jun, 2020 1 commit
  17. 13 May, 2020 3 commits
  18. 26 Apr, 2020 1 commit
    • improve efficiency of runtime InferVarType (#22778) · 9a93f6aa
      Committed by liuwei1031
      * save InferVarType changes, test=develop
      
      * remove code comments, test=develop
      
      * tweak code, test=develop
      
      * fix compilation warning; update merge_ids_op and split_ids_op to the new interface, test=develop
      
      * modify fused_bn_activation_op, test=develop
      
      * fix error of fused_bn_activation_op, test=develop
      
      * fix PADDLE_ENFORCE and unittest coverage issue, test=develop
      
      * tweak PADDLE_ENFORCE messages, test=develop
      
      * improve unittest coverage, test=develop
      
      * add StaticGraphInferVarType class, test=develop
      
      * rebase develop branch, test=develop
      
      * fix unittest error, test=develop
      
      * remove comments, test=develop
      
      * improve unittest coverage, test=develop
      
      * improve error message and improve unittest coverage, test=develop
      
      * upgrade InferVarType API, test=develop
      
      * tweak pyfunc error message, test=develop
      
      * fix compilation conflict - save_combine_op, test=develop
      9a93f6aa
  19. 07 Apr, 2020 1 commit
  20. 04 Apr, 2020 1 commit
    • Delete Ref & VectorRef and add GetDataSafely (#22997) · 16315d3d
      Committed by Chen Weihang
      * delete invalid check interface Ref & VectorRef, test=develop
      
      * fix vector ref delete error, test=develop
      
      * try the new check interface, test=develop
      
      * change all related code with new check macro, test=develop
      
      * remove static assert, test=develop
      
      * polish detail, test=develop
      
      * skip coverage problem, test=develop
      
      * add new check macro, test=develop
      16315d3d
  21. 27 Feb, 2020 1 commit
    • Refine adam op to improve performance, test=develop (#22346) · 72dde4ab
      Committed by zhaoyuchen2018
      * Refine adam op, test=develop
      
      * Fuse kernels together to reduce CPU time.
      
      * Refine paddle enforce, test=develop
      
      * Remove some comments, test=develop
      
      * Refine code,test=develop
      
      * Refine cuda kernel, test=develop
      
      * Refine code according to comments, test=develop
      72dde4ab
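The kernel fusion mentioned above folds the moment updates, bias correction, and beta-power bookkeeping into a single step rather than separate ops. A NumPy sketch of one such fused Adam step, assuming `beta1_pow`/`beta2_pow` hold beta**t on entry; the function is illustrative, not Paddle's kernel:

```python
import numpy as np

def adam_step(param, grad, moment1, moment2, beta1_pow, beta2_pow,
              lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One fused Adam step: moments, bias-corrected learning rate, and
    the beta-power state all advance in a single call."""
    moment1 = beta1 * moment1 + (1 - beta1) * grad
    moment2 = beta2 * moment2 + (1 - beta2) * grad ** 2
    lr_t = lr * np.sqrt(1 - beta2_pow) / (1 - beta1_pow)
    param = param - lr_t * moment1 / (np.sqrt(moment2) + eps)
    return param, moment1, moment2, beta1_pow * beta1, beta2_pow * beta2
```

Keeping the beta powers as state updated inside the same call removes the extra per-step scale op and its kernel-launch overhead.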
  22. 09 Jan, 2020 1 commit
    • test Optimizer in dygraph (#21949) · d0f0a252
      Committed by zhongpu
      * test Optimizer in dygraph, test=develop
      
      * add optest for Optimizer in dygraph, test=develop
      
      * fix adagrad optimizer, test=develop
      
      * fix dpsgd optimizer, test=develop
      
      * fix test_optimizer.py, test=develop
      
      * fix dpsgd optimizer, this op only supports cpu, test=develop
      
      * add optest for optimizer, test=develop
      
      * add description for dpsgd, test=develop
      
      * add rmsprop to white_list in unused_var_check.cc, test=develop
      
      * polish code style, test=develop
      
      * polish code style, test=develop
      
      * delete seed attribute for DpsgdOptimizer, test=develop
      
      * change testing to debugging, test=develop
      d0f0a252
  23. 24 Dec, 2019 1 commit
    • Optimize adam speed (#21777) · 51a86d2b
      Committed by Aurelius84
      * optimize adam speed by removing _finish_update test=develop
      
      * fix SparseAdamFunctor param list test=develop
      
      * Remove scale_op in expect_list of adam_op test=develop
      
      * fix test optimizer loss assert error test=develop
      
      * fix test optimizer loss assert error test=develop
      
      * modify PADDLE_ENFORCE usage test=develop
      
      * fix op_type in lamb_op.cc test=develop
      
      * fix errors ostream format bug test=develop
      
      * add betaPowOut in ngraph op test=develop
      
      * fix ngraph::op api for gcc8 test=develop
      
      * clean code test=develop
      
      * modify struct into class test=develop
      
      * remove code of beta1Tensor in lamb_op test=develop
      51a86d2b
  24. 06 Dec, 2019 1 commit
    • Add More Complex Tests and Fix Bugs for Control Flow cond API (#21532) · 1dcf6a72
      Committed by Huihuang Zheng
      Add tests that use dy/dx to make sure the gradient values calculated by the control flow backward pass are correct. Also fixed bugs detected by those tests.
      
      Fix bugs:
      
      1. Unlike sum_op, optimizer ops don't allow uninitialized input tensors. But in conditional_block_grad_op, since the conditional_block may not run, the output gradient tensor may be uninitialized, which makes the optimizer op fail. To fix it, we could either let optimizer ops support uninitialized input like sum_op does, or assign 0 to the uninitialized gradient when conditional_block_grad_op doesn't run. There are 10+ optimizer ops, so **to keep it simple, this PR just assigns 0 to the output gradient of conditional_block_grad_op**. Whether optimizer ops could support uninitialized input tensors like sum_op is worth exploring further, since in theory that would avoid the cost of the assignment in conditional_block_grad_op.
      
      2. Infer parameter shapes during append_backward. All our parameters live in the global block, so when an op_desc infers shapes in a sub-block, it may not know the shapes of gradients of parameters whose shape information sits in the global block. Fixed by inferring the shapes of those gradients from the corresponding forward vars.
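The zero-assignment fix in point 1 above can be sketched as follows; `conditional_block_grad` is a hypothetical stand-in for the op's behavior, not Paddle code:

```python
import numpy as np

def conditional_block_grad(branch_ran, grad_out, shape):
    """If the conditional branch never ran, its output gradient is
    uninitialized; hand downstream optimizer ops zeros instead."""
    if branch_ran:
        return grad_out
    return np.zeros(shape, dtype=np.float32)
```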
      
      This PR also did some code cleanup:
      1. Print the var name when sgd_op catches a shape error so that it is easier to debug.
      2. Fix a typo: dicta -> dict
      1dcf6a72
  25. 29 Nov, 2019 2 commits
    • Fix optimizer op infershape failed in dygraph multi-card mode (#21374) · 664f958a
      Committed by Chen Weihang
      * add param & grad shape check for sgd op
      
      * add _reshape_inplece interface for dygraph parallel
      
      * refine unittest based paddle/models scripts, test=develop
      
      * add unittest for parallel grad fuse, test=develop
      664f958a
    • Add dygraph execution context (#20157) · ac854670
      Committed by hong
      * add_dygraph_execution_context
      
      * add dygraph infershape context and execution context; test=develop
      
      * fix imperative bug; test=develop
      
      * remove the inputs/outputs interface from the execution context,
      because it has the same function as inputNames;
      test=develop
      
      * remove tracer_test ctest; test=develop
      
      * fix split op bug; test=develop
      
      * fix unittest bugs; test=develop
      
      * fix distribute test bug; test=develop
      
      * fix ngraph compile bug; test=develop
      
      * fix grad maker bug; test=develop
      
      * fix load op bugs; test=develop
      
      * fix operator.cc construct bug; test=develop
      
      * remove useless name find in operator; test=develop
      
      * add tracer_test; test=develop
      
      * fix concat, split bug; test=develop
      
      * remove tracer_test unittest; test=develop
      
      * fix attribute check bug; test=develop
      
      * add test code to fix coverage; test=develop
      
      * remove useless code, change the backward input check in the engine; test=develop
      
      * unlock var type infer shape;test=develop
      
      * add ShareAllLoD api; test=develop
      
      * add dygraph infershape context unittest; test=develop
      
      * remove increase and decrease lod in dygraph; test=develop
      
      * add override; test=develop
      
      * fix increase/decrease lod; test=develop
      
      * fix paddle_enforce; test=develop
      
      * disable lod op dygraph check; test=develop
      
      * fix paddle enforce error; test=develop
      
      * add comment for op_registry and OperatorBase; test=develop
      
      * optimize the comment of op_registry; test=develop
      
      * fix format of comment; test=develop
      
      * fix format of comment; test=develop
      
      * optimize the format of comment; test=develop
      
      * optimize the format of the comment; test=develop
      
      * optimize comment of op_registry; test=develop
      ac854670
  26. 28 Nov, 2019 1 commit
  27. 25 Nov, 2019 1 commit
  28. 31 Oct, 2019 1 commit
    • GradMaker for dygraph (#19706) · 8c4573a3
      Committed by hong
      * refactor dygraph,test=develop
      
      * fix failed unittest,test=develop
      
      * polish code,test=develop
      
      * check windows ci error,test=develop
      try to fix windows ci error by np.allclose,test=develop
      
      * polish vlog and profiler, test=develop
      
      * try to fix preceding ops order,test=develop
      
      * test transformer in windows ci, test=develop
      
      * use python c-api to speed up tracer.trace,test=develop
      
      * test=develop, fix docker with paddle nccl problem
      
      * test=develop, add ut for debug string and gradient_accumulator
      
      * test=develop, add tests for layer/gradient_accumulator/prepared_op
      
      * test=develop, fix compile error for test_prepared_op
      
      * test=develop, add more ut for dygraph
      
      * test=develop, create API.spec for dygraph api change
      
      * optimize grad maker; test=develop
      
      * optimize grad maker
      
      * test
      
      * grad make optim; test=develop
      
      * fix unittest bugs; test=develop
      
      * add dygraph grad op maker and split_op
      
      * grad op maker refactor; test=develop
      
      * add dygraph grad maker; test=develop
      
      * fix op deformable_conv_v1_op bug; test=develop
      
      * fix deformable_conv prroi pool bugs;
      
      * fix new op grad op maker bug; test=develop
      
      * fix split by ref bug; test=develop
      
      * fix dygraph auto prune bug; test=develop
      
      * fix test_trace bug; test=develop
      
      * fix fused emb seq pool bug; test=develop
      
      * remove useless code in op_desc file; test=develop
      
      * remove useless code, StrVarBaseNode; test=develop
      
      * fix review issues; test=develop
      
      * fix rank_loss grad maker; test=develop
      
      * remove flag in VarBase; test=develop
      
      * fix distributed_notify_op compile bug ; test=develop
      
      * fix reshape op double grad; test=develop
      
      * fix expand as op; test=develop
      
      * add imperative type_defs.h for demo_train; test=develop
      
      * fix inference lib cmake; test=develop
      
      * fix inference lib; test=develop
      
      * fix inference_lib; test=develop
      
      * fix inference cmake; test=develop
      
      * fix inference lib; test=develop
      
      * fix inference lib; test=develop
      
      * remove condition dygraph grad maker, modify local name; test=develop
      
      * fix split grad maker bug; test=develop
      
      * fix pyramid_op bug; test=develop
      
      * change travis time out limit; test=develop
      
      * restore travis; test=develop
      
      * change timeout limit; test=develop
      8c4573a3
  29. 28 Oct, 2019 1 commit
  30. 24 Oct, 2019 1 commit
  31. 24 Sep, 2019 1 commit
  32. 04 Sep, 2019 1 commit
  33. 03 Sep, 2019 1 commit
  34. 04 Jul, 2019 1 commit
  35. 26 Jun, 2019 1 commit
    • Update lamb optimizer (#18333) · 23941e43
      Committed by Yibing Liu
      * Update lamb optimizer
      
      test=develop, test=document_preview
      
      * Regenerate api spec
      
      test=develop, test=document_preview
      23941e43
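For context, LAMB rescales an Adam-style direction by a layer-wise trust ratio (parameter norm over update norm). A NumPy sketch under textbook assumptions; `lamb_step` and its defaults are illustrative, not this PR's implementation:

```python
import numpy as np

def lamb_step(param, grad, m, v, step, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-6, weight_decay=0.01):
    """One LAMB step: an Adam-style direction rescaled per layer by the
    trust ratio ||w|| / ||update||."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** step)          # bias-corrected first moment
    v_hat = v / (1 - beta2 ** step)          # bias-corrected second moment
    r = m_hat / (np.sqrt(v_hat) + eps) + weight_decay * param
    w_norm, r_norm = np.linalg.norm(param), np.linalg.norm(r)
    trust = w_norm / r_norm if w_norm > 0 and r_norm > 0 else 1.0
    return param - lr * trust * r, m, v
```

The trust ratio keeps the effective step size proportional to each layer's weight norm, which is what lets LAMB scale to very large batch sizes.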