1. 19 Jan, 2021 (2 commits)
  2. 18 Jan, 2021 (4 commits)
  3. 17 Jan, 2021 (1 commit)
  4. 15 Jan, 2021 (5 commits)
  5. 14 Jan, 2021 (2 commits)
  6. 13 Jan, 2021 (1 commit)
  7. 12 Jan, 2021 (8 commits)
  8. 11 Jan, 2021 (9 commits)
  9. 10 Jan, 2021 (2 commits)
  10. 09 Jan, 2021 (2 commits)
  11. 08 Jan, 2021 (4 commits)
    • Support pure fp16 training for AMP API. (#29544) · 7f7dfccf
      Authored by Zhen Wang
      * add cast ops before and after unsupported fp16 ops.
      
      * Keep partial net in FP32 pattern.
      
      * Support check_finite_and_unscale and update_loss_scaling for FP16 calculation mode.
      
      * Add fp16 support for adam op.
      
      * add multi precision attr for adam.
      
      * Fix the bug of test_multi_precision_fp16_train UT.
      
      * Code format for CI.
      
      * Fix the redefinition error about MPTypeTrait on Windows.
      
      * fix bugs of the _create_accumulators func in Momentum.
      
      * fix bug when inserting post cast op.
      
      * Add the update_loss_scaling op in allow_set of UnusedVarCheck.
      
      * Update for ci coverage.
      
      * Add some doc for OptimizerWithMixedPrecision.
      
      * Fix the code style.
      
      * Improve the doc of `amp_init`.
      
      * Adjust fp16 testing for the case where users define the inference program separately.
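      The changes in this commit follow the standard mixed-precision training recipe: compute in fp16, keep master weights in fp32, scale the loss to protect small gradients, and skip the update when gradients overflow. Below is a minimal, illustrative sketch of that recipe in plain Python; the function names mirror the ops mentioned above (`check_finite_and_unscale`, `update_loss_scaling`), but this is not Paddle's actual implementation, and real code operates on fp16/fp32 tensors rather than Python floats.

      ```python
      import math

      def check_finite_and_unscale(scaled_grads, loss_scale):
          """Unscale gradients; report whether any value is inf/nan (fp16 overflow)."""
          unscaled = [g / loss_scale for g in scaled_grads]
          found_inf = any(not math.isfinite(g) for g in unscaled)
          return unscaled, found_inf

      def update_loss_scaling(loss_scale, found_inf, factor=2.0):
          """Shrink the scale after an overflow; otherwise (greatly simplified) grow it."""
          return loss_scale / factor if found_inf else loss_scale * factor

      def fp16_sgd_step(master_weights, scaled_grads, loss_scale, lr=0.1):
          """One loss-scaled SGD step: master weights stay in full precision,
          and the update is skipped entirely when gradients overflowed."""
          grads, found_inf = check_finite_and_unscale(scaled_grads, loss_scale)
          if not found_inf:
              master_weights = [w - lr * g for w, g in zip(master_weights, grads)]
          return master_weights, update_loss_scaling(loss_scale, found_inf)
      ```

      The "multi precision attr for adam" item in the list above corresponds to the same idea applied to Adam: the optimizer keeps an fp32 master copy of each fp16 parameter and its moments, updating the fp32 copy and casting back to fp16 for the forward pass.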
    • use cuda generator in bernoulli cuda kernel (#30199) · 789743e1
      Authored by Leo Chen
    • Fix dtype of ungenerated grad var (#28511) · 8696335f
      Authored by Leo Chen
      * fix dtype of ungenerated grad var
      
      * update ut
      
      * refine code
      
      * set default dtype
      
      * fix could_use_cudnn bug
      
      * remove debug code
      
      * re-implement
      
      * fix bug
    • shape op support int8 and uint8 tensor (#30201) · 609c0222
      Authored by Wilber