1. 07 Dec, 2020 · 8 commits
  2. 05 Dec, 2020 · 3 commits
  3. 04 Dec, 2020 · 11 commits
  4. 03 Dec, 2020 · 12 commits
  5. 02 Dec, 2020 · 6 commits
    • fix nll_loss doc;test=document_fix; (#29247) · cf433221
      Committed by Jack Zhou
      * fix nll_loss doc;test=document_fix;
      
      * remove numpy and set_device;test=document_fix;
      
      * remove numpy;test=document_fix;
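
      Below is a minimal usage sketch of the paddle.nn.functional.nll_loss API whose docstring this commit fixes; the tensor shapes and label values are illustrative assumptions, not the example from the patched docs.

      import paddle
      import paddle.nn.functional as F

      # nll_loss expects log-probabilities, so apply log_softmax first.
      logits = paddle.randn([4, 10])                    # batch of 4, 10 classes (illustrative shapes)
      log_probs = F.log_softmax(logits, axis=-1)
      labels = paddle.to_tensor([1, 0, 3, 7], dtype='int64')

      loss = F.nll_loss(log_probs, labels, reduction='mean')
      print(loss.numpy())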
    • Move temporal_shift to paddle.nn.functional (#29261) · b9f1f434
      Committed by LielinJiang
      * move temporal_shift to functional
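
      As a quick illustration of the relocated API, a hedged sketch of calling temporal_shift from paddle.nn.functional; the [N*T, C, H, W] input layout and the seg_num value are assumptions chosen for the example.

      import paddle
      import paddle.nn.functional as F

      # Input is laid out as [N*T, C, H, W]; seg_num is the number of frames T per clip,
      # so the batch dimension must be divisible by seg_num.
      x = paddle.randn([8, 3, 16, 16])                  # e.g. N=2 clips x T=4 frames (illustrative)
      out = F.temporal_shift(x, seg_num=4, shift_ratio=0.25)
      print(out.shape)                                  # same shape as the input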
    • a2e9d95a
    • Add pure fp16 training with master weights. (#27712) · be3777a5
      Committed by Zhen Wang
      * add the weight decay func for the momentum op
      
      * Add the multi_precision function in Momentum Optimizer.
      
      * Make sure that the initial values of the master weights are the same as the fp16 weights.
      
      * add static loss scaling.
      
      * add the rescale_grad function in the pure fp16 training.
      
      * use the original momentum updating method.
      
      * Polish some codes, such as variable names.
      
      * add docstring for apis.
      
      * update the var creation details of _create_master_weight.
      
      * not modify codes about imperative momentum updating.
      
      * Fix the error of test_dist_sparse_tensor_load_momentum UT.
      
      * add unit test for multi precision fp16 training.
      
      * add more unit tests for CI.
      
      * Use lower threshold values for allclose comparisons in the test_multi_precision_fp16_train UT.
      
      * For CI Coverage Checking.
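
      For orientation, a hedged sketch of how the multi_precision switch described in this commit is passed to paddle.optimizer.Momentum; the toy model is an illustrative assumption (a real pure fp16 run would also cast parameters and inputs to float16), and the argument name follows the commit message.

      import paddle

      # Illustrative model; the PR targets pure fp16 training where an fp32 "master"
      # copy of each fp16 parameter is kept for the optimizer update.
      model = paddle.nn.Linear(16, 4)

      opt = paddle.optimizer.Momentum(
          learning_rate=0.01,
          momentum=0.9,
          parameters=model.parameters(),
          multi_precision=True,   # the master-weights switch this commit adds
      )

      x = paddle.randn([8, 16])
      loss = paddle.mean(model(x))
      loss.backward()
      opt.step()
      opt.clear_grad()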
    • fix random failed of complex matmul (#29285) · 976961de
      Committed by chentianyu03
    • Layer norm fp16 (#29169) · 7584bb50
      Committed by furnace
      * add fp16 for layer_norm op
      
      * revert layernorm api
      
      * fix forward
      
      * fix forward
      
      * fix backward for layernorm with fp16
      
      * fix unit test for layernorm with fp16
      
      * fix with_mkldnn compile error for layernorm with fp16
      
      * 1. revert to PADDLE_ENFORCE_NOT_NULL, 2. change static_cast<float> to static_cast<U>
      
      * fix with_mkldnn compile error for layernorm with fp16
      
      * fix with_mkldnn compile error for layernorm with fp16
      Co-authored-by: zhiqiu <chenqiuliang@baidu.com>
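
      A short sketch of the kind of fp16 layer_norm call this commit enables, assuming a CUDA build since the fp16 kernel targets GPU; shapes are illustrative.

      import paddle
      import paddle.nn.functional as F

      # fp16 layer_norm needs a GPU; fall back to fp32 on CPU-only builds.
      if paddle.is_compiled_with_cuda():
          paddle.set_device('gpu')
          x = paddle.randn([4, 32]).astype('float16')
      else:
          x = paddle.randn([4, 32])

      # Normalize over the last dimension; weight/bias omitted for brevity.
      out = F.layer_norm(x, normalized_shape=[32])
      print(out.dtype)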