1. 27 Sep 2020, 1 commit
  2. 23 Sep 2020, 1 commit
  3. 21 Sep 2020, 2 commits
    • L
      [Feature] Enhance inplace addto strategy for gradient accumulation in static graph (#27112) · aba759ba
      Committed by Leo Chen
      * support use add instead of sum to do gradient accumulation
      
      * add inplace addto pass
      
      * add grad_add op and inplace addto pass
      
      * remove debug code
      
      * code refine
      
      * fix bug when several sum ops are inserted at the same op_idx
      
      * fix Flags type
      
      * add addto attribute for conv3d
      
      * fix ut
      
      * code clean
      
      * fix type
      aba759ba
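The commit above swaps gradient accumulation from a separate sum op to an in-place "addto". A minimal sketch of the difference, in plain Python (illustrative only, not Paddle's actual pass):

```python
# "sum" strategy: each partial gradient gets its own buffer, then a
# sum op combines them. "addto" strategy: one shared buffer is reused
# and every partial gradient is added into it in place, saving the
# intermediate allocations.

def accumulate_with_sum(partial_grads):
    # Combine separately materialized gradients with an explicit sum.
    total = [0.0] * len(partial_grads[0])
    for grad in partial_grads:
        total = [t + g for t, g in zip(total, grad)]
    return total

def accumulate_with_addto(partial_grads):
    # Reuse one buffer; each op writes its result into it in place.
    buf = [0.0] * len(partial_grads[0])
    for grad in partial_grads:
        for i, g in enumerate(grad):
            buf[i] += g  # in-place addto
    return buf

grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
assert accumulate_with_sum(grads) == accumulate_with_addto(grads)
```

Both paths produce the same accumulated gradient; the addto pass only changes where the intermediate results live.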
    • H
      Quant op dev (#25932) · 02606d45
      Committed by huangxu96
      * Finished ChannelWiseQuantDequantAbsMaxOp and passed unittests.
      
      * Finished channel-wise quantize strategy in imperative quantization.
      
      * Added CUDA code of ChannelWiseQuantDequantMaxAbsOp
      
      * Add quant_axis for channel_wise quant.
      
      * fixed a bug in unittests which would not trigger the axis = 1 case and could not meet the coverage rate requirement.
      
      * Added some assert information and fixed some coding style mistakes.
      02606d45
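The quant_axis bullet above refers to which dimension is treated as the channel dimension when computing per-channel scales. A sketch of channel-wise abs-max quantize/dequantize in plain Python (hypothetical helper, not the op's real interface):

```python
# For each channel along quant_axis, the scale is the channel's max
# absolute value; values are mapped onto the signed integer range
# [-bound, bound] and immediately dequantized back for illustration.

def channel_wise_quant_dequant(x, quant_axis=0, bit_length=8):
    """x: 2-D list of floats; quant_axis 0 treats rows as channels, 1 columns."""
    bound = (1 << (bit_length - 1)) - 1  # 127 for 8 bits
    if quant_axis == 1:  # transpose so channels become rows
        x = [list(col) for col in zip(*x)]
    out = []
    for channel in x:
        scale = max(abs(v) for v in channel) or 1.0
        # quantize to the nearest integer step, then dequantize back
        out.append([round(v / scale * bound) * scale / bound for v in channel])
    if quant_axis == 1:
        out = [list(row) for row in zip(*out)]
    return out
```

With quant_axis = 1 the same logic runs per column, which is the case the fixed unittest had failed to trigger.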
  4. 15 Sep 2020, 1 commit
  5. 14 Sep 2020, 2 commits
    • J
      Add bfloat16 passes (#26999) · 1483ea23
      Committed by joanna.wozna.intel
      1483ea23
    • Z
      Update amp_check_finite_and_scale_op and add an updating_loss_scaling op for... · d708b210
      Committed by Zhen Wang
      Update amp_check_finite_and_scale_op and add an updating_loss_scaling op for static graph amp training. (#26240)
      
      * update amp_check_finite_and_scale_op for static_amp.
      
      * use amp_check_finite_and_scale in static graph amp.
      
      * zero the grads when they contain infinite values (as in the amp_check_finite_and_scale op).
      
      * add update_loss_scaling op in cpp.
      
      * add update_loss_scaling_op unit test.
      
      * update the doc of the check_finite_and_unscale op
      
      * Skip the gradient update process if the gradients contain infinite values.
      
      * update the way to zero grads.
      
      * update test_update_loss_scaling_op.py
      
      * add log info when find infinite grads.
      
      * add the unit test for UpdateLossScaling Layer.
      d708b210
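The update_loss_scaling op described above adjusts the loss scale dynamically: shrink it (and zero the grads) when inf/nan gradients are found, grow it after enough consecutive clean steps. A minimal sketch of that rule (the constants and names here are assumptions, not Paddle's defaults):

```python
# Dynamic loss-scaling update: on an overflow step the parameter
# update is skipped and the scale shrinks; after incr_every_n
# consecutive good steps the scale grows again.

def update_loss_scaling(scale, good_steps, found_inf,
                        incr_every_n=2, incr_ratio=2.0, decr_ratio=0.5):
    if found_inf:
        # overflow: shrink the scale, reset the good-step counter
        return max(scale * decr_ratio, 1.0), 0
    good_steps += 1
    if good_steps >= incr_every_n:
        # enough clean steps: grow the scale, reset the counter
        return scale * incr_ratio, 0
    return scale, good_steps
```

The real op also zeroes the offending gradients so the skipped step leaves the parameters untouched.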
  6. 08 Sep 2020, 1 commit
  7. 07 Sep 2020, 1 commit
  8. 04 Sep 2020, 1 commit
  9. 03 Sep 2020, 1 commit
  10. 02 Sep 2020, 1 commit
  11. 01 Sep 2020, 1 commit
  12. 31 Aug 2020, 2 commits
  13. 28 Aug 2020, 4 commits
    • W
      refine paddle inference api (#26774) · 68e0560c
      Committed by Wilber
      * refine paddle inference api
      Co-authored-by: nhzlx <nhzlx.dragon@gmail.com>
      68e0560c
    • L
      Refine paddle.manual_seed (#26496) · 844583c8
      Committed by Leo Chen
      * refine manual seed
      
      * fix ci problem
      
      * fix unittests
      
      * fix unittest
      
      * set is_init_py=false in manual_seed
      
      * fix unittest
      
      * fix bernoulli_op
      
      * fix(unittest): change random_seed to manual_seed
      
      * 🐞fix(unittest): fix manual_seed
      
      * trigger ci
      
      * fix test_sentiment
      
      * fix test_imperative_save_load
      
      * fix test_uniform_random_op
      
      * fix test_uniform_random_op
      
      * fix test_jit_save_load
      
      * merge develop
      
      * fix manual_seed
      
      * fix manual_seed
      
      * use global engine
      
      * use shared_ptr
      
      * fix double free
      
      * fix bug
      
      * fix bug
      
      * fix bug
      
      * fix test bug
      
      * fix test bug
      
      * fix test bug
      
      * fix ci
      844583c8
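The long series of fixes above converges on one design: a single global random engine, held by a shared pointer, that manual_seed re-seeds (avoiding the double-free the earlier revisions hit). A sketch of that global-engine pattern in Python (names hypothetical):

```python
# One lazily created global engine; manual_seed re-seeds it instead of
# creating per-call generators, so all random ops share one sequence.
import random

_global_engine = None

def get_engine():
    global _global_engine
    if _global_engine is None:
        _global_engine = random.Random()
    return _global_engine

def manual_seed(seed):
    engine = get_engine()
    engine.seed(seed)
    return engine

manual_seed(42)
a = get_engine().random()
manual_seed(42)
assert get_engine().random() == a  # same seed, same sequence
```

In C++ the single engine is held via shared_ptr so its lifetime outlives every op that captured it.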
    • J
      Add mkldnn bfloat16 option to C-API (#26676) · 02083bda
      Committed by joanna.wozna.intel
      * Add mkldnn bfloat16 option to C-API
      
      * Add test for bfloat16 gpu
      
      * Change coverage test
      02083bda
    • Z
      Update the demo code and the doc of varbase.backward. (#26506) · f9066e6a
      Committed by Zhen Wang
      * update the demo code and the doc of varbase.backward.
      
      * update the doc of the fake interface `paddle.fluid.Variable`.
      
      * remove BackwardStrategy.
      f9066e6a
  14. 27 Aug 2020, 1 commit
  15. 25 Aug 2020, 2 commits
    • Z
      improve unique op (#26537) · 0a895bc0
      Committed by Zhang Ting
      * add unique_v2 op
      
      * remove unique_v2 op
      
      * update doc
      0a895bc0
    • W
      optimized transformation from tensor to numpy (#26447) · c1f5df52
      Committed by wanghuancoder
      * optimized transformation from tensor to numpy, test=develop
      
      * optimized transformation from tensor to numpy, pass pre-commit, test=develop
      
      * modify FetchOpHandle zerocopy to deepcopy in PE&CPU, test=develop
      
      * modify py::array construct, test=develop
      
      * fix _fetch_var to use deep copy, test=develop
      c1f5df52
  16. 24 Aug 2020, 2 commits
  17. 23 Aug 2020, 1 commit
  18. 21 Aug 2020, 1 commit
    • Q
      support Baidu Kunlun AI Accelerator (#25959) · 138ecf24
      Committed by QingshuChen
      * support Baidu AI Accelerator
        * test=kunlun
      
      * minor
        * test=kunlun
      
      * support xpu op in separate file
        * test=kunlun
      
      * update XPU error message and remove duplicated code
        * test=kunlun
      
      * minor
        * test=kunlun
      
      * minor
        * test=kunlun
      138ecf24
  19. 19 Aug 2020, 1 commit
  20. 18 Aug 2020, 2 commits
  21. 17 Aug 2020, 1 commit
  22. 16 Aug 2020, 2 commits
  23. 15 Aug 2020, 1 commit
    • Z
      expose and unify the Tensor concepts to the user (#25978) · 6de463d3
      Committed by Zhou Wei
      * expose and unify the Tensor concepts to the user
      
      * expose tensor to user
      
      * add copy place for Tensor
      
      * add copy place for Tensor
      
      * add note
      
      * add macro PADDLE_WITH_CUDA
      
      * remove RUN_TYPE=DIST
      
      * fix some errors
      6de463d3
  24. 14 Aug 2020, 1 commit
  25. 13 Aug 2020, 2 commits
    • L
      Feature/Enable Auto-Mixed-Precision in dynamic graph (#24903) · 2d95280e
      Committed by Leo Chen
      * add auto_cast, test=develop
      
      * add loss scaler, test=develop
      
      * add comments, test=develop
      
      * refine code, test=develop
      
      * refine code, test=develop
      
      * do not set flags automatically, test=develop
      
      * fix custom op bug, test=develop
      
      * add more test, test=develop
      
      * refine enable logic, test=develop
      
      * enable amp test with GPU, test=develop
      
      * add unittest
      
      * add test for found_inf
      
      * follow comments
      
      * follow comments
      
      * remove global variable, use singleton
      
      * add some notes
      
      * update comments
      
      * update comments
      
      * update comments
      
      * add use_dynamic_loss_scaling argument
      
      * refine found_inf
      
      * refine found_inf
      2d95280e
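The commit above wires auto-cast and a loss scaler together for dynamic-graph AMP. A minimal sketch of the scaler flow it describes, in generic Python (this is not Paddle's GradScaler API): scale the loss before backward, unscale the gradients, and skip the optimizer step when any gradient overflowed.

```python
import math

class LossScaler:
    """Toy dynamic loss scaler: tracks one scale factor."""

    def __init__(self, init_scale=2.0 ** 15):
        self.scale = init_scale

    def scale_loss(self, loss):
        # Multiply the loss so small fp16 gradients don't underflow.
        return loss * self.scale

    def step(self, grads, apply_update):
        # Unscale grads back to their true magnitude, then check for
        # overflow; on overflow shrink the scale and skip the update.
        unscaled = [g / self.scale for g in grads]
        found_inf = any(math.isinf(g) or math.isnan(g) for g in unscaled)
        if found_inf:
            self.scale *= 0.5
        else:
            apply_update(unscaled)
        return found_inf
```

The found_inf flag mirrored here is what the "add test for found_inf" bullet exercises: a skipped step must leave the parameters untouched.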
    • C
      Fix loaded variable suffix repeat error (#26169) · 838e36e9
      Committed by Chen Weihang
      * fix loaded var suffix repeat error
      
      * use new dygraph name for loaded param
      838e36e9
  26. 12 Aug 2020, 1 commit
  27. 10 Aug 2020, 1 commit
  28. 07 Aug 2020, 1 commit
  29. 06 Aug 2020, 1 commit
    • T
      add heter ps mode (#25682) · 0cb60c70
      Committed by Thunderbrook
      * add heter ps mode
      
      * code style
      test=develop
      
      * add with_pslib
      test=develop
      
      * unittest
      test=develop
      
      * code style
      test=develop
      
      * code style
      test=develop
      
      * code style
      test=develop
      
      * code style
      test=develop
      
      * code style
      test=develop
      
      * code style
      test=develop
      
      * code style
      test=develop
      
      * code style
      test=develop
      
      * test monitor
      test=develop
      
      * prepare trainer
      test=develop
      
      * code style
      test=develop
      0cb60c70