1. 10 Feb 2022, 1 commit
  2. 28 Jan 2022, 1 commit
  3. 27 Jan 2022, 1 commit
  4. 20 Jan 2022, 2 commits
    • Fix master weight bug for multi_tensor optimizer (momentum, adam) (#38991) · 6b0c57cf
      Committed by zhangbo9674
      * fix mp
      * support merged_momentum for mp
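      This fix targets the configuration where fp32 master weights and the fused
      multi-tensor update path are enabled together. A minimal sketch of that
      configuration, assuming the multi_precision and use_multi_tensor flags on
      paddle.optimizer.Momentum (both present in current Paddle releases; their
      availability in this exact revision is an assumption):

        import paddle

        model = paddle.nn.Linear(4, 4)
        # multi_precision keeps an fp32 master copy of each fp16 parameter;
        # use_multi_tensor fuses the per-parameter updates into fewer kernel launches.
        opt = paddle.optimizer.Momentum(learning_rate=0.01,
                                        parameters=model.parameters(),
                                        multi_precision=True,
                                        use_multi_tensor=True)
        # Cast the model to fp16 while the optimizer holds fp32 master weights.
        model, opt = paddle.amp.decorate(models=model, optimizers=opt, level='O2')

        with paddle.amp.auto_cast(level='O2'):
            loss = model(paddle.rand([2, 4])).mean()
        loss.backward()
        opt.step()        # the master-weight update path exercised by this fix
        opt.clear_grad()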
    • [Eager] Support Eager mode for some testcases (#38783) · d21074cd
      Committed by wanghuancoder
      * Rearranged Eager AutoCodeGen directory structure
      * Removed USE_OP in Eager AutoCodeGen
      * Enabled generation for Operators without Grad/Inputs/Outputs
      * Resolved operators without input
      * Fixed merge conflicts
      * Enabled Eager AutoCodeGen for 10+ more operators
      * Refactored Eager AutoCodeGen with more organized helper objects
      * Enabled Eager AutoCodeGen for operators with multiple OpBases
      * Adjusted Eager AutoCodeGen to enable passing output Tensor as input argument
      * Handled dispensable Inputs/Outputs in Eager AutoCodeGen
      * Adjusted function generation/call between Python-C API & Dygraph API
      * Synchronized auto-generated Python-C API with Dygraph forward functions
      * support more eager tensor api
      * fix merge compile error
      * fix compile error and fit develop code
      * support pure CPU
      * fix some logic errors in eager_mode
      * support _varbase_creator in eager mode
      * Added safe_initialized interface to EagerTensor for use in processing dispensable inputs
      * for eager mode
      * refine
      * support multiple constructors for eager tensor
      * add place-related code
      * polish code
      * specify randint with dtype of int64
      * support pure cpu test
      * refine test in pure cpu
      * eager logic, test=develop
      * skip core.eager when in inference, test=develop
      * refine, test=develop
      * call RetainGrad after running the forward kernel, test=develop
      * support dygraph util, meta, guard tests
      * eager test case, test=develop
      * support inference test
      * refine tests and fix initializer failure
      * modify eagertensor patch method
      * add eagertensor.clear_gradient, test=develop
      * support creating varbase and fix retain-grad error
      * call monkey_patch_varbase in _test_eager_guard, test=develop
      * fix windows error
      * split clear_gradient into clear_gradient and zero_grads, test=develop
      * support test_imperative_basic test in eager mode
      * remove additional log in variable.h
      * remove additional code created in merge
      * fix some eager logic, test=develop
      * patch_tensor_method_func, test=develop
      * eager optimizer, test=develop
      * eager test_imperative_optimizer_v2, test=develop
      * add resize in _share_buffer_to, test=develop
      * fix _share_buffer_to, test=develop
      * support eager for dataloader, test=develop
      Co-authored-by: jim19930609 <jim19930609@gmail.com>
      Co-authored-by: JiabinYang <360788950@qq.com>
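      Most of the migrated tests follow the same pattern: run the test body once
      under the experimental eager dygraph mode and once under the legacy path.
      A minimal sketch using _test_eager_guard, the internal switch named in the
      commit message above (it lived in paddle.fluid.framework in this era and
      was later removed once eager mode became the default):

        import unittest
        import paddle
        from paddle.fluid.framework import _test_eager_guard  # internal, test-only

        class TestAddEager(unittest.TestCase):
            def func_add(self):
                x = paddle.to_tensor([1.0, 2.0])
                y = paddle.to_tensor([3.0, 4.0])
                self.assertEqual((x + y).numpy().tolist(), [4.0, 6.0])

            def test_add(self):
                with _test_eager_guard():  # eager dygraph path
                    self.func_add()
                self.func_add()            # legacy dygraph path

        if __name__ == '__main__':
            unittest.main()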
  5. 19 Jan 2022, 1 commit
  6. 12 Jan 2022, 1 commit
  7. 07 Jan 2022, 2 commits
  8. 24 Dec 2021, 1 commit
  9. 22 Dec 2021, 1 commit
  10. 20 Dec 2021, 1 commit
    • Add multi_tensor for momentum optimizer and clear_grads (#37564) · 0cc5e22c
      Committed by zhangbo9674
      * add multi_tensor for momentum and clear_grads for optimizer
      * fix bug for dygraph
      * add unittest
      * refine comment
      * add param_group
      * refine regularization logic
      * del clear_grads
      * add clear_grads
      * add dispensable check of None
      * refine clear_grad
      * fix build bug
      * refine code by comment
      * refine code
      * add multi tensor check
      * refine param_group update
      * add multi tensor for static mode
      * refine comments
      * delete useless comma for momentum
      * refine comment for momentum
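      The new flag groups all parameter updates of one momentum step into fused
      multi-tensor kernels instead of launching one op per parameter. A minimal
      sketch, assuming the use_multi_tensor argument this PR introduces and the
      set_to_zero option on clear_grad (present in later releases; treat both as
      assumptions for this exact revision):

        import paddle

        model = paddle.nn.Linear(4, 4)
        opt = paddle.optimizer.Momentum(learning_rate=0.01, momentum=0.9,
                                        parameters=model.parameters(),
                                        use_multi_tensor=True)  # fused update path

        loss = model(paddle.rand([2, 4])).mean()
        loss.backward()
        opt.step()
        # set_to_zero=False releases gradient storage; True zero-fills it in place.
        opt.clear_grad(set_to_zero=False)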
  11. 17 Dec 2021, 1 commit
    • Refine some AMP operators for BERT (#37923) · d80fe268
      Committed by sneaxiy
      * support multi precision update for LAMB
      * hide some api
      * fix ci uts
      * fix lamb output of dygraph
      * remove some changes that belong to another PR
      * try to fix Py3 CI compile error
      * fix test_imperative_optimizer, add lars ut, add layer_norm ut
      * fix ut, fix format
      * fix ut
      * fix windows ci
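      For a BERT-style model the practical effect is that more of the encoder
      (e.g. layer_norm) can run in fp16 under auto_cast with loss scaling. A
      minimal sketch of that training step; the custom_white_list entry is an
      illustration, not a claim about this PR's default op lists:

        import paddle

        encoder = paddle.nn.Sequential(paddle.nn.Linear(768, 768),
                                       paddle.nn.LayerNorm(768))
        opt = paddle.optimizer.Lamb(learning_rate=1e-4, lamb_weight_decay=0.01,
                                    parameters=encoder.parameters())
        scaler = paddle.amp.GradScaler(init_loss_scaling=2.**15)

        x = paddle.rand([8, 768])
        # Allow layer_norm to run in fp16 in addition to the default white list.
        with paddle.amp.auto_cast(custom_white_list={'layer_norm'}):
            loss = encoder(x).mean()
        scaled = scaler.scale(loss)      # scale to avoid fp16 gradient underflow
        scaled.backward()
        scaler.minimize(opt, scaled)     # unscale, finite-check, then LAMB update
        opt.clear_grad()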
  12. 27 Oct 2021, 1 commit
  13. 18 Oct 2021, 1 commit
  14. 29 Sep 2021, 1 commit
  15. 26 Sep 2021, 1 commit
  16. 22 Sep 2021, 1 commit
  17. 17 Sep 2021, 1 commit
    • [AMP] Support pure fp16 training mode for dygraph (#35521) · adaeee4d
      Committed by zhangbo9674
      * add pure fp16 major function in auto_cast & tracer
      * support master weight in dygraph for pure fp16
      * check mixed dtype of fp16 & fp32 for check_finite_and_unscale op
      * change pure fp16 function name
      * fix some bugs in auto_cast
      * refine auto_cast interface logic
      * add param _casted_by_pure_fp16 for class Layer
      * support state_dict hook to save the model in a user-appointed dtype in pure_fp16_decorator
      * refine pure_fp16_decorator as decorator
      * add unittest
      * add comment
      * support recompute
      * add comment for auto_cast and decorator
      * support to_static_state_dict for paddle.jit.save
      * remove the limit on the number of models and optimizers
      * add lookup_table in black_list
      * fix momentum and layer state_dict
      * fix bug in layer state_dict
      * fix bug in layer state_dict_helper
      * refine unittest
      * refine test_momentum_op
      * refine interface and some code
      * refine amp_decorator interface
      * refine pure fp16 interface
      * refine master weight interface
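      This PR is the dygraph pure fp16 ("O2") entry point: decorate casts the
      model to fp16 and attaches fp32 master weights, auto_cast(level='O2') runs
      every op outside the black list in fp16, and GradScaler guards against
      overflow. A minimal sketch using the public API as documented in later
      Paddle releases:

        import paddle

        model = paddle.nn.Linear(4, 4)
        opt = paddle.optimizer.Momentum(learning_rate=0.01,
                                        parameters=model.parameters())
        # Cast parameters to fp16; keep fp32 master weights in the optimizer.
        # save_dtype is the state_dict hook added in this PR.
        model, opt = paddle.amp.decorate(models=model, optimizers=opt,
                                         level='O2', save_dtype='float32')
        scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

        with paddle.amp.auto_cast(level='O2'):
            loss = model(paddle.rand([2, 4])).mean()
        scaled = scaler.scale(loss)
        scaled.backward()
        scaler.minimize(opt, scaled)  # runs check_finite_and_unscale, then the update
        opt.clear_grad()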
  18. 14 Sep 2021, 1 commit
  19. 10 Sep 2021, 1 commit
    • Fix warning (#34875) · 966f042d
      Committed by sunzhongkai588
      * fix warning error, test=document_fix (same message repeated across seven documentation-fix commits)
  20. 01 Sep 2021, 1 commit
  21. 23 Aug 2021, 1 commit
  22. 17 Aug 2021, 1 commit
  23. 02 Aug 2021, 1 commit
  24. 30 Jul 2021, 1 commit
  25. 27 Jul 2021, 2 commits
  26. 15 Jul 2021, 1 commit
  27. 14 Jul 2021, 1 commit
  28. 08 Jul 2021, 1 commit
  29. 24 Jun 2021, 1 commit
  30. 21 Jun 2021, 1 commit
  31. 15 Jun 2021, 1 commit
  32. 11 Jun 2021, 1 commit
  33. 10 Jun 2021, 1 commit
  34. 31 May 2021, 1 commit
  35. 29 Apr 2021, 1 commit
  36. 28 Apr 2021, 1 commit
  37. 27 Apr 2021, 1 commit