1. 19 Mar 2021, 1 commit
  2. 17 Mar 2021, 1 commit
  3. 12 Mar 2021, 1 commit
  4. 24 Feb 2021, 1 commit
    • fix entry (#31079) · ebbdf525
      Committed by tangwei12
      * fix entry
      
      * fix distributed lookup table fuse case
      
      * fix entry bug at first time
      
      * move entry from paddle.fluid to paddle.distributed
      
      * fix ut with paddle.enable_static()
      Co-authored-by: malin10 <malin10@baidu.com>
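The "entry" being fixed here is the admission policy of Paddle's distributed sparse lookup table: a feature id only receives a trainable embedding row once it has appeared often enough. A minimal pure-Python sketch of that count-filter idea (all names here are illustrative, not Paddle's actual API):

```python
# Hypothetical sketch of a count-filter "entry" policy for a distributed
# sparse lookup table: an id gets a trainable embedding row only after it
# has been seen `threshold` times; colder ids fall back to zeros.
from collections import defaultdict


class CountFilterEntry:
    def __init__(self, threshold):
        self.threshold = threshold
        self.counts = defaultdict(int)

    def admit(self, feature_id):
        """Count one occurrence; return True once the id passes the threshold."""
        self.counts[feature_id] += 1
        return self.counts[feature_id] >= self.threshold


class SparseTable:
    def __init__(self, dim, entry):
        self.dim = dim
        self.entry = entry
        self.rows = {}  # feature_id -> embedding row, created lazily

    def lookup(self, feature_id):
        if feature_id not in self.rows and self.entry.admit(feature_id):
            self.rows[feature_id] = [0.0] * self.dim
        # ids below the threshold share a zero vector and train no parameters
        return self.rows.get(feature_id, [0.0] * self.dim)
```

The point of such a policy is memory control: in large-scale CTR-style training, ids seen only once or twice never materialize parameter rows on the parameter servers.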
  5. 05 Feb 2021, 1 commit
  6. 25 Jan 2021, 1 commit
  7. 20 Jan 2021, 2 commits
  8. 18 Jan 2021, 2 commits
  9. 13 Jan 2021, 4 commits
  10. 12 Jan 2021, 1 commit
    • add sparse embedding & load vars for 2.0 & gloo bug fix (#30306) · 5e839e4d
      Committed by tangwei12
      * add sparse embedding & load vars for 2.0
      
      Change-Id: I36b59ed5f015189dc9d9d2e34a9357722d369f1b
      
      * fix hdfs gloo
      
      Change-Id: Ia84d579053720ad804183e54c9a04b4f031c79c6
      
      * fix gloo hdfs
      
      Change-Id: I5ab982fd483cddc10adcdef0b8aa83aca976cb9e
      
      * move loadvar/sparse embedding from incubate to static
      
      Change-Id: I57081d3545ad2efab78c72420d2162c0eacaf3a0
  11. 08 Jan 2021, 2 commits
    • Support pure fp16 training for AMP API. (#29544) · 7f7dfccf
      Committed by Zhen Wang
      * add cast ops before and after unsupported fp16 ops.
      
      * Keep partial net in FP32 pattern.
      
      * Support check_finite_and_unscale and update_loss_scaling for FP16 calculation mode.
      
      * Add fp16 support for adam op.
      
      * add multi precision attr for adam.
      
      * Fix the bug of test_multi_precision_fp16_train UT.
      
      * Code format for CI.
      
      * Fix the redefine error about MPTypeTrait on windows.
      
      * fix bugs of the _create_accumulators func in Momentum.
      
      * fix bug when inserting post cast op.
      
      * Add the update_loss_scaling op in allow_set of UnusedVarCheck.
      
      * Update for ci coverage.
      
      * Add some doc for OptimizerWithMixedPrecision.
      
      * Fix the code style.
      
      * Improve the doc of `amp_init`.
      
      * Change for fp16 testing if users have the infer program defined in separate way.
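The check_finite_and_unscale and update_loss_scaling ops mentioned above implement dynamic loss scaling: gradients are computed on a scaled loss, unscaled before the update, and the scale grows after a run of overflow-free steps and backs off on overflow. A rough pure-Python sketch of that loop (the class name, growth interval, and factors are illustrative; Paddle's actual attributes differ):

```python
# Sketch of a dynamic loss scaler: unscale grads, detect non-finite
# values, and adjust the scale factor accordingly.
import math


class LossScaler:
    def __init__(self, init_scale=2.0 ** 15, growth_interval=2000,
                 growth_factor=2.0, backoff_factor=0.5):
        self.scale = init_scale
        self.growth_interval = growth_interval
        self.growth_factor = growth_factor
        self.backoff_factor = backoff_factor
        self.good_steps = 0

    def unscale(self, grads):
        """Divide scaled grads back down; report whether all are finite."""
        out = [g / self.scale for g in grads]
        finite = all(math.isfinite(g) for g in out)
        return out, finite

    def update(self, finite):
        """Grow the scale after enough clean steps; back off on overflow."""
        if finite:
            self.good_steps += 1
            if self.good_steps >= self.growth_interval:
                self.scale *= self.growth_factor
                self.good_steps = 0
        else:
            self.scale *= self.backoff_factor
            self.good_steps = 0
```

On an overflow step the parameter update is skipped entirely, which is why the unscale and update stages are separate ops in the graph.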
    • Quantization supports 2.0 APIs (#30036) · 1bdf9242
      Committed by guofei
      * Quantization supports 2.0 APIs
      
      * Fix the error of save_quantized_model
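Quantization-aware training of the kind this commit ports to the 2.0 APIs rests on a quantize-dequantize ("fake quant") step: weights are snapped to an integer grid and mapped back to float so the network trains against quantization error. A minimal sketch of a symmetric 8-bit scheme (the exact rounding and scale rules are an assumption, not Paddle's op):

```python
# Fake quantize-dequantize for one tensor: snap each value to a
# symmetric int8 grid derived from the max absolute value, then map back.
def fake_quant_dequant(values, bits=8):
    qmax = 2 ** (bits - 1) - 1                    # 127 for int8
    scale = max(abs(v) for v in values) / qmax or 1.0
    return [round(v / scale) * scale for v in values]
```

During QAT this runs in the forward pass while full-precision weights keep receiving gradient updates; export then replaces the fake ops with real integer kernels (the save_quantized_model step fixed above).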
  12. 07 Jan 2021, 1 commit
  13. 05 Jan 2021, 2 commits
  14. 30 Dec 2020, 1 commit
  15. 29 Dec 2020, 1 commit
  16. 28 Dec 2020, 1 commit
    • clean redundant API alias in 2.0 - part 1 (#29928) · 726c78f2
      Committed by XiaoguangHu
      * rm check_import_scipy, rm chunk_eval and mean_iou in paddle.metric.__init__.py
      
      * Revert "rm check_import_scipy, rm chunk_eval and mean_iou in paddle.metric.__init__.py"
      
      This reverts commit 179ba8c2b22bc31fe8d8a126e31820792cbd0f4e.
      
      * delete paddle.metric.chunk_eval and paddle.metric.mean_iou
      
      * delete paddle.nn.clip and paddle.nn.clip_by_norm
      
      * delete paddle.nn.functional.activation.hard_sigmoid and paddle.nn.functional.activation.hard_swish
      
      * delete paddle.nn.Pool2D, paddle.nn.BilinearTensorProduct, paddle.nn.RowConv, paddle.nn.functional.row_conv
      
      * fix extension import error
      
      * fix unittest for row_conv and Pool2D
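The deleted functional aliases such as hard_sigmoid and hard_swish compute simple piecewise formulas, so no capability is lost by removing the duplicate entry points. For reference, the usual definitions, sketched in plain Python (the slope/offset constants are the common defaults and should be treated as an assumption, since Paddle's legacy alias used different defaults than the kept API):

```python
# Piecewise-linear activations behind the removed aliases.
def hard_sigmoid(x, slope=1.0 / 6.0, offset=0.5):
    # clip(slope * x + offset) to [0, 1]
    return min(max(slope * x + offset, 0.0), 1.0)


def hard_swish(x):
    # x * relu6(x + 3) / 6
    return x * min(max(x + 3.0, 0.0), 6.0) / 6.0
```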
  17. 22 Dec 2020, 1 commit
  18. 21 Dec 2020, 2 commits
  19. 16 Dec 2020, 1 commit
  20. 15 Dec 2020, 1 commit
  21. 11 Dec 2020, 1 commit
  22. 09 Dec 2020, 1 commit
  23. 08 Dec 2020, 1 commit
  24. 02 Dec 2020, 2 commits
    • Add pure fp16 training with master weights. (#27712) · be3777a5
      Committed by Zhen Wang
      * add the weight decay func for the momentum op
      
      * Add the multi_precision function in Momentum Optimizer.
      
      * Make sure that the initial value of master weights are same with the fp16 weights.
      
      * add static loss scaling.
      
      * add the rescale_grad function in the pure fp16 training.
      
      * use the original momentum updating method.
      
      * Polish some codes, such as variable names.
      
      * add docstring for apis.
      
      * update the var creation details of _create_master_weight.
      
      * not modify codes about imperative momentum updating.
      
      * Fix the error of test_dist_sparse_tensor_load_momentum UT.
      
      * add unit test for multi precision fp16 training.
      
      * add more unit tests for CI.
      
      * Use lower threshold values for allclose comparing in test_multi_precision_fp16_train UT.
      
      * For CI Coverage Checking.
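The master-weights scheme this commit adds to Momentum keeps a full-precision copy of each parameter: gradients update the master copy, and the low-precision model weight is re-derived from it every step, so updates smaller than half precision can resolve are not silently dropped. An illustrative sketch, where rounding to three decimals stands in for the fp32-to-fp16 cast (all names here are illustrative, not the optimizer's internals):

```python
# Master-weights momentum step: update in full precision, then cast down.
def to_low_precision(w):
    return round(w, 3)  # stand-in for casting fp32 -> fp16


def momentum_step(master_w, velocity, grad, lr=0.01, mu=0.9):
    velocity = mu * velocity + grad
    master_w = master_w - lr * velocity       # full-precision update
    return master_w, velocity, to_low_precision(master_w)
```

Updating the low-precision copy directly would lose each tiny step to rounding; accumulating in the master copy lets many small steps add up before the cast-down ever moves.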
    • Layer norm fp16 (#29169) · 7584bb50
      Committed by furnace
      * add fp16 for layer_norm op
      
      * revert layernorm api
      
      * fix forward
      
      * fix forward
      
      * fix backward for layernorm with fp16
      
      * fix unit test for layernorm with fp16
      
      * fix with_mkldnn compile error for layernorm with fp16
      
      * 1. revert to PADDLE_ENFORCE_NOT_NULL, 2. change static_cast<float> to static_cast<U>
      
      * fix with_mkldnn compile error for layernorm with fp16
      
      * fix with_mkldnn compile error for layernorm with fp16
      Co-authored-by: zhiqiu <chenqiuliang@baidu.com>
  25. 01 Dec 2020, 1 commit
  26. 30 Nov 2020, 2 commits
  27. 27 Nov 2020, 1 commit
  28. 26 Nov 2020, 1 commit
  29. 25 Nov 2020, 1 commit
    • Quant nn2.0 (#28764) · 40f54537
      Committed by huangxu96
      * Implement 2.0 API version Conv2d and Linear layer quantization in imperative mode.
      
      * use cudnn softmax in static Lenet
      
      * Modified ChannelwiseQAT Unittest for 2.0 API.
      
      * For CI python coverage.
  30. 24 Nov 2020, 1 commit
    • Upgrade string literals to raw string (#28989) · 3815d7aa
      Committed by Leo Chen
      * upgrade comment string to raw string
      
      * fix string in
      
      * fix string with ' '
      
      * revert update on comments
      
      * upgrade only necessary
      
      * fix sample code checker
      
      * fix comments with '''
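The motivation for this upgrade: docstrings and comments full of LaTeX or regex backslashes get silently corrupted in ordinary string literals, because sequences like `\n` are interpreted as escapes. A raw string keeps the backslash literally:

```python
# In a normal string, "\n" becomes a newline, so "\nabla" loses its
# backslash; a raw string preserves the text exactly as written.
plain = "\nabla f(x)"   # starts with a real newline followed by "abla"
raw = r"\nabla f(x)"    # keeps the backslash character
```

This is why only strings that actually contain backslash sequences needed the `r` prefix ("upgrade only necessary" above), and why docstrings delimited with `'''` required special handling.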