1. 23 May 2023 · 1 commit
  2. 11 May 2023 · 1 commit
  3. 28 April 2023 · 1 commit
  4. 24 April 2023 · 2 commits
  5. 18 April 2023 · 1 commit
  6. 07 April 2023 · 1 commit
  7. 06 April 2023 · 1 commit
  8. 21 March 2023 · 1 commit
    • [PHI decoupling] Move DataType* from paddle::experimental to phi namespace (#51716) · 4638a62e
      Committed by iSerendipity
      * move DataType from paddle::experimental to phi
      
      * convert namespace
      
      * convert namespace
      
      * convert namespace
      
      * clarify namespace
      
      * convert more datatype
      
      * Revert "convert more datatype"
      
      This reverts commit 083b462959e6a22d4d8767707b628b95b396642e.
      
      * convert more in auto_code_generator
      
      * fix conflicts for XPU
      
      * fix namespace conflicts
      
      * fix errors
      
      * Revert "fix errors"
      
      This reverts commit f9d9958b54ee32141112274c8a5c3c381ab0f876.
      
      * fix errors
      
      * fix formatting
  9. 14 March 2023 · 1 commit
  10. 09 March 2023 · 1 commit
  11. 19 January 2023 · 1 commit
    • [KUNLUN] add op: maxpool_with_index (#49505) · f71f77e9
      Committed by jameszhang
      * [KUNLUN] add op: maxpool_with_index
      
      * use DeviceContext::Alloc() instead of DenseTensor::mutable_data()
      
      * fix file format
      
      * solve clip unittest failure
      
      * minor fix
      
      * Revert "solve clip unittest failure" since the issue is fixed
      in #49535
      
      This reverts commit 1127adc66e79afe35ac3c00bb34e6aaa7cd7d78b.
      
      * align with xdnn on the definition of mask in max_pool_with_index
      
      * minor
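      A hedged usage sketch for the new kernel: with return_mask=True, max pooling returns both the pooled values and the index mask, which is the path the max_pool_with_index kernel backs (my reading of the commit; the exact dispatch route is not verified here). The device string "xpu" and an XPU-enabled build are assumptions; on other builds the same call runs on the default device.

      ```python
      import paddle
      import paddle.nn.functional as F

      # Assumption: a KUNLUN (XPU) build; otherwise stay on the default device.
      if paddle.is_compiled_with_xpu():
          paddle.set_device("xpu")

      x = paddle.rand([1, 3, 8, 8], dtype="float32")

      # return_mask=True returns the flattened position of each maximum
      # alongside the pooled output.
      out, mask = F.max_pool2d(x, kernel_size=2, stride=2, return_mask=True)
      print(out.shape, mask.shape)  # [1, 3, 4, 4] [1, 3, 4, 4]
      ```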
  12. 06 January 2023 · 1 commit
  13. 21 December 2022 · 1 commit
  14. 10 November 2022 · 1 commit
  15. 26 June 2022 · 1 commit
  16. 05 June 2022 · 1 commit
  17. 28 April 2022 · 1 commit
  18. 16 March 2022 · 1 commit
  19. 15 March 2022 · 1 commit
  20. 28 February 2022 · 2 commits
  21. 20 February 2022 · 1 commit
  22. 18 February 2022 · 1 commit
    • [AMP] support GPU BF16 amp for dygraph (#39029) · 7d6d3848
      Committed by zhangbo9674
      * support dtype param for auto_cast
      
      * add amp_dtype for tracer
      
      * add unsupported bf16 list
      
      * support bf16 amp for O2
      
      * refine python interface for bfloat16
      
      * refine code
      
      * refine code
      
      * refine unittest
      
      * refine code
      
      * refine code
      
      * add bf16 o1
      
      * refine code by comment
      
      * add gradient accumulator
      
      * add recompute
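      A minimal sketch of the feature from the Python side, assuming the current paddle.amp.auto_cast signature: the dtype argument added here selects bfloat16 instead of the default float16 lists (O1 shown; a BF16-capable GPU is assumed).

      ```python
      import paddle

      model = paddle.nn.Linear(16, 16)
      optimizer = paddle.optimizer.SGD(parameters=model.parameters())
      x = paddle.rand([4, 16])

      # dtype="bfloat16" switches the auto-cast lists from fp16 to bf16 (level O1 by default).
      with paddle.amp.auto_cast(dtype="bfloat16"):
          loss = model(x).mean()

      loss.backward()
      optimizer.step()
      optimizer.clear_grad()
      ```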
  23. 16 February 2022 · 1 commit
    • EagerTensor to EagerVariable (#39447) · 831fd86e
      Committed by Jiabin Yang
      * merge legacy to fluid
      
      * Remove legacy code
      
      * Remove legacy code
      
      * Remove DataType test
      
      * Using Tensor directly instead of using EagerTensor
      
      * support gradient_accumulation
      
      * make test_imperative_lod_tensor_to_selected_rows longer
      
      * make test_imperative_lod_tensor_to_selected_rows longer
      
      * refine code
      
      * Rename all EagerTensor to Tensor
      
      * Rename some EagerTensor to Tensor
      
      * rename EagerTensor to EagerVariable
      
      * add more test
      
      * merge develop and refine code
  24. 09 February 2022 · 1 commit
  25. 02 February 2022 · 1 commit
  26. 24 November 2021 · 1 commit
    • [Dy2stat] support pure fp16 for dy2stat (#36944) · 52edad6a
      Committed by 0x45f
      * run dy2stat pure fp16 in Linear model
      
      * no use self._pure_fp16_inputs
      
      * add test and fix Adam error in dy2stat pure fp16 training
      
      * use paddle.optimizer.Adam
      
      * run test in gpu
      
      * change test time for CI
      
      * enlarge atol for test_resnet_pure_fp16
      
      * refine code and enlarge atol
      
      * make custom_white_list and custom_black_list take effect for AMP and pure fp16
      
      * check tracer is not None
      
      * use default atol
      
      * change filter_size
      
      * change atol and add some NOTE
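      A sketch of what the combination looks like from user code, assuming current paddle.amp and paddle.jit API names: the model is decorated for pure fp16 (O2) and then converted with to_static, which is the path this commit enables.

      ```python
      import paddle

      model = paddle.nn.Linear(16, 16)
      optimizer = paddle.optimizer.Adam(parameters=model.parameters())

      # Pure fp16 (O2): parameters are cast to fp16; decorate keeps fp32 master weights.
      model, optimizer = paddle.amp.decorate(models=model, optimizers=optimizer, level="O2")

      # Dynamic-to-static conversion of the already-decorated model.
      model = paddle.jit.to_static(model)

      x = paddle.rand([4, 16])
      with paddle.amp.auto_cast(level="O2"):
          loss = model(x).mean()
      loss.backward()
      optimizer.step()
      optimizer.clear_grad()
      ```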
  27. 16 November 2021 · 1 commit
  28. 27 October 2021 · 1 commit
  29. 13 October 2021 · 1 commit
  30. 17 September 2021 · 1 commit
    • [AMP] Support pure fp16 training mode for dygraph (#35521) · adaeee4d
      Committed by zhangbo9674
      * add pure fp16 major function in auto_cast & tracer
      
      * support master weight in dygraph for pure fp16
      
      * check mixed dtype of fp16 & fp32 for check_finite_and_unscale op
      
      * change pure fp16 function name
      
      * refine some bug in auto_cast
      
      * refine auto_cast interface logic
      
      * add param _casted_by_pure_fp16 for class Layer
      
      * support a state_dict hook so the model can be saved in a user-appointed dtype in pure_fp16_decorator
      
      * refine pure_fp16_decorator as decorator
      
      * add unittest
      
      * add comment
      
      * add comment
      
      * support recompute
      
      * add comment for auto_cast and decorator
      
      * support to_static_state_dict for paddle.jit.save
      
      * remove the limit on the number of models and optimizers
      
      * add lookup_table in black_list
      
      * fix momentum and layer state_dict
      
      * fix bug in layer state_dict
      
      * fix bug in layer state_dict_helper
      
      * refine unittest
      
      * refine test_momentum_op
      
      * refine interface and some code
      
      * refine amp_decorator interface
      
      * refine pure fp16 interface
      
      * refine master weight interface
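      A minimal training-step sketch of the O2 ("pure fp16") mode described above, assuming the current paddle.amp API names: decorate casts the parameters to fp16 and keeps fp32 master weights, save_dtype controls the dtype written by state_dict, and GradScaler guards against fp16 under/overflow.

      ```python
      import paddle

      model = paddle.nn.Linear(16, 16)
      optimizer = paddle.optimizer.Momentum(parameters=model.parameters())

      # O2: fp16 parameters + fp32 master weights; save_dtype fixes the state_dict dtype.
      model, optimizer = paddle.amp.decorate(
          models=model, optimizers=optimizer, level="O2", save_dtype="float32")

      scaler = paddle.amp.GradScaler(init_loss_scaling=2.**16)

      for _ in range(3):
          x = paddle.rand([4, 16])
          with paddle.amp.auto_cast(level="O2"):
              loss = model(x).mean()
          scaled = scaler.scale(loss)         # scale the loss before backward
          scaled.backward()
          scaler.minimize(optimizer, scaled)  # unscale, skip the step on inf/nan, update the scale
          optimizer.clear_grad()
      ```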
  31. 29 June 2021 · 1 commit
  32. 21 June 2021 · 1 commit
  33. 10 May 2021 · 1 commit
  34. 26 April 2021 · 1 commit
  35. 04 February 2021 · 1 commit
  36. 19 January 2021 · 1 commit
  37. 04 November 2020 · 1 commit
  38. 24 September 2020 · 1 commit
    • use iwyu to clean includes (#27267) · df43905f
      Committed by wanghuancoder
      * use iwyu clean include, test=develop, test=win
      
      * compilation error, test=develop
      
      * fix compilation error2, test=develop
      
      * fix compilation error3, test=develop
      
      * fix compilation error4, test=develop
      
      * fix compilation error5, test=develop
      
      * fix compilation error6, test=develop
      
      * fix compilation error7, test=develop
      
      * fix compilation error8, test=develop
      
      * fix compilation error8, test=develop
      
      * fix compilation error10, test=develop
      
      * fix compilation error11, test=develop