1. 10 Jun 2022 · 1 commit
  2. 05 Jun 2022 · 1 commit
  3. 02 Jun 2022 · 1 commit
  4. 19 Apr 2022 · 1 commit
  5. 17 Mar 2022 · 1 commit
    • Move layer norm to phi (#40193) · 681a6865
      hong committed
      * update
      
      * fix bugs; test=develop
      
      * update; test=develop
      
      * fix test compile error; test=develop
      
      * fix cpu compile error; test=develop
      
* fix test error; test=develop
      
      * fix layer_norm_op plugin error; test=develop
      
      * fix error; test=develop
      
      * fix test bug; test=develop
      
      * update; test=develop
      
      * polish code; test=develop
      
      * fix bugs; test=develop
      
* remove unused dependency; test=develop
      
      * polish code; test=develop
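      A note on the commit above: "moving to phi" means re-registering the operator's kernels through phi's registration macro instead of the old fluid OpKernel machinery. As a rough sketch of that pattern (the general phi convention, not an excerpt from PR #40193; it builds only inside the Paddle source tree):

      ```cpp
      // Shape of a phi CPU kernel registration; the dtype list and the
      // phi::LayerNormKernel name follow the usual phi convention and are
      // assumed here rather than copied from the PR.
      PD_REGISTER_KERNEL(layer_norm,            // kernel name
                         CPU,                   // backend
                         ALL_LAYOUT,            // data layout
                         phi::LayerNormKernel,  // kernel entry point
                         float,
                         double) {}
      ```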
  6. 04 Mar 2022 · 1 commit
  7. 01 Mar 2022 · 1 commit
    • [bf16] add bf16 kernel: layer_norm p_norm reduce_sum (#39843) · ce8ed978
      zhangbo9674 committed
      * add layer norm
      
      * add p norm
      
      * add reduce sum
      
      * refine layer norm register bf16 for cudnn811
      
      * add bf16 cast for hip
      
      * add unittest
      
      * refine rocm
      
      * refine layer_norm unittest
      
      * refine reduce op
      
      * refine unittest
      
      * enhance atol for reduce unittest
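      For context on the bf16 commit above: bf16 kernels on CUDA normally widen each element to float before accumulating, because bfloat16 carries only an 8-bit significand; that is also why the reduce unittest needed a looser atol. A minimal self-contained sketch of the pattern (not Paddle's actual kernel; assumes CUDA 11+ for cuda_bf16.h):

      ```cpp
      #include <cuda_bf16.h>

      // Launch with blockDim.x == 256. Each element is widened to float
      // before it is added, so rounding error does not compound in bf16.
      __global__ void reduce_sum_bf16(const __nv_bfloat16* __restrict__ x,
                                      float* __restrict__ out, int n) {
        float partial = 0.f;
        for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
             i += gridDim.x * blockDim.x) {
          partial += __bfloat162float(x[i]);
        }
        __shared__ float smem[256];
        smem[threadIdx.x] = partial;
        __syncthreads();
        for (int s = blockDim.x / 2; s > 0; s >>= 1) {  // tree reduction
          if (threadIdx.x < s) smem[threadIdx.x] += smem[threadIdx.x + s];
          __syncthreads();
        }
        if (threadIdx.x == 0) atomicAdd(out, smem[0]);
      }
      ```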
  8. 20 Feb 2022 · 1 commit
  9. 19 Feb 2022 · 1 commit
    • [Pten] Unify paddle/pten::framework::ddim into pten::ddim (#39614) · 2fe04264
      Aurelius84 committed
      * Unify paddle/pten::framework::ddim into pten::ddim
      
      * fix paddle namespace
      
* compile successfully
      
      * fix npu src file
      
      * fix conflict
      
      * fix conflict
      
      * fix tensorrt compiler error
      
      * fix conflict
      
      * fix conflict
      
* fix test file conflict
      
      * fix conflict
      
      * fix mlu file conflict
      
      * fix mlu file conflict
      
      * fix cinn header file conflict
      
      * fix conflict
      
      * fix conflict
      
      * fix conflict
      
      * fix conflict
  10. 29 Jan 2022 · 1 commit
    • Optimize layer norm backward cuda kernel when cols is 1024. (#39247) · 99cfcc09
      Li Min committed
* Add fp16 support for scale/bias for fused_layernorm_residual_dropout_bias op.
      
      * Remove useless code.
      
      * Remove useless code.
      
      * Optimize layer_norm fwd when cols is 1024.
      
      * Remove useless code.
      
      * Minors.
      
      * Minors.
      
* Modifications according to reviews.
      
      * Minors.
      
      * Optimize layer_norm bwd kernel when cols is 1024.
      
      * Polish layer_norm_bwd_1024 kernel.
      
      * Limit ln_bwd_1024_kernel to paddle_with_cuda.
      
      * Fix double type compile error.
      
      * Add optimization of ln bwd for fused_dropout_add_ln op.
      
      * Polish codes.
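      The PR above does not explain why cols == 1024 gets its own kernel, but the number is suggestive: 1024 floats is exactly 256 threads times one float4 apiece, so a block can read a whole row with vectorized loads and finish the reduction with warp shuffles, with no striding loop. A hypothetical sketch of such a shape-specialized reduction (launch configuration and names are assumptions, not the PR's code):

      ```cpp
      #include <cuda_runtime.h>

      __inline__ __device__ float WarpReduceSum(float v) {
        for (int offset = 16; offset > 0; offset >>= 1)
          v += __shfl_down_sync(0xffffffffu, v, offset);
        return v;
      }

      // Launch with <<<rows, 256>>>; x must be 16-byte aligned so each
      // thread can load one float4 (256 * 4 floats == 1024 columns).
      __global__ void row_sum_1024(const float* __restrict__ x,
                                   float* __restrict__ out) {
        const float4 v =
            reinterpret_cast<const float4*>(x + blockIdx.x * 1024)[threadIdx.x];
        float sum = WarpReduceSum(v.x + v.y + v.z + v.w);
        __shared__ float warp_sums[8];  // 256 threads = 8 warps
        if ((threadIdx.x & 31) == 0) warp_sums[threadIdx.x >> 5] = sum;
        __syncthreads();
        if (threadIdx.x < 8) {  // final reduction over the 8 warp partials
          float s = warp_sums[threadIdx.x];
          for (int offset = 4; offset > 0; offset >>= 1)
            s += __shfl_down_sync(0x000000ffu, s, offset);
          if (threadIdx.x == 0) out[blockIdx.x] = s;
        }
      }
      ```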
  11. 26 Jan 2022 · 1 commit
  12. 17 Dec 2021 · 1 commit
    • Refine some AMP operators for BERT (#37923) · d80fe268
      sneaxiy committed
      * support multi precision update for LAMB
      
      * hide some api
      
      * fix ci uts
      
      * fix lamb output of dygraph
      
      * remove some changes to some PR
      
      * try to fix Py3 CI compile error
      
      * fix test_imperative_optimizer, add lars ut, add layer_norm ut
      
      * fix ut, fix format
      
      * fix ut
      
      * fix windows ci
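      The "multi precision update" in the commit above refers to the AMP master-weight pattern: parameters are stored in fp16, but the optimizer applies its step to an fp32 master copy so tiny updates are not rounded away, then writes a rounded fp16 copy back. A minimal sketch using plain SGD for brevity (LAMB's trust-ratio logic is omitted; all names here are illustrative):

      ```cpp
      #include <cuda_fp16.h>

      __global__ void sgd_multi_precision(__half* param,        // fp16 weights
                                          float* master_param,  // fp32 copy
                                          const __half* grad,
                                          float lr, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
          float w = master_param[i] - lr * __half2float(grad[i]);
          master_param[i] = w;         // full-precision state persists across steps
          param[i] = __float2half(w);  // rounded copy used by the fp16 forward pass
        }
      }
      ```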
  13. 03 Dec 2021 · 1 commit
  14. 23 Sep 2021 · 1 commit
  15. 08 Sep 2021 · 1 commit
    • fix the bug of layer_norm when batch_size=1 (#35480) · ad5f7494
      zhangkaihuo committed
      The bug was an out-of-bounds access to mean and var: both have shape [batch_size], while the thread index ranges over 0..feature_size-1, so indexing them as mean[idx] and var[idx] reads past the end of the arrays whenever feature_size > batch_size (sketched below).
      
      With batch_size=1 the correct accesses are mean[0] and var[0]; a unit test with batch_size=1 is added.
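      The indexing error is easiest to see in code. A simplified kernel with the shape described above, one block per row and one thread per feature (not the actual Paddle kernel):

      ```cpp
      // The buggy version indexed mean/var with the per-element thread
      // index; the fix indexes them by row (mean[0], var[0] when batch_size==1).
      __global__ void layer_norm_apply(const float* __restrict__ x,
                                       const float* __restrict__ mean,
                                       const float* __restrict__ var,
                                       float* __restrict__ y, int feature_size) {
        int row = blockIdx.x;   // 0 .. batch_size-1
        int idx = threadIdx.x;  // 0 .. feature_size-1
        if (idx < feature_size) {
          float m = mean[row];  // was mean[idx]: out of bounds once idx >= batch_size
          float v = var[row];   // was var[idx]:  same problem
          y[row * feature_size + idx] =
              (x[row * feature_size + idx] - m) * rsqrtf(v + 1e-5f);
        }
      }
      ```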
  16. 23 Aug 2021 · 1 commit
    • Refactor the organization of layer_norm cuda impl. (#34883) · 7f5eb533
      Li Min committed
      Refactor the organization of the layer_norm CUDA impl so that it can be reused in the fused attention op:
      
          Extract the layer_norm CUDA impl from layer_norm_op.cu into layer_norm_kernel.cu.h.
          Define fused/attention_layer_norm.h, which can be used in the fused attention op in a follow-up PR.
  17. 24 Jun 2021 · 1 commit
  18. 22 Jun 2021 · 1 commit
  19. 15 Jun 2021 · 1 commit
  20. 12 Jun 2021 · 1 commit
    • Fix LayerNorm Problem (#33420) · fe94db6c
      zhiboniu committed
      * Eliminate numerical differences in LayerNorm; fix a LayerNorm NaN bug on large inputs
      
      * fix bug for large input shapes
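      The commit message above does not say how the NaNs were eliminated, but the classic culprit in LayerNorm is forming the variance as E[x^2] - E[x]^2, which can come out slightly negative in floating point on large inputs and then produce NaN under the square root. One standard remedy, shown here purely as a hypothetical illustration, is Welford's one-pass update:

      ```cpp
      // Welford's one-pass mean/variance update: m2/count is the (biased)
      // variance and each increment is non-negative, so it never goes
      // below zero and sqrt(var + eps) cannot NaN.
      __host__ __device__ inline void welford_step(float x, int& count,
                                                   float& mean, float& m2) {
        ++count;
        float delta = x - mean;
        mean += delta / count;
        m2 += delta * (x - mean);  // uses the already-updated mean
      }
      ```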
  21. 08 Jun 2021 · 1 commit
    • add dynamic layer_norm plugin (#33293) · 45d1ae21
      Shang Zhizhou committed
      * add dynamic layer_norm plugin
      
      * fix bug
      
      * fix numpy.allclose
      
      * fix format
      
      * fix code style
      
* remove shape in dynamic shape
      
      * code format
      
      * remove layer norm fp16
      
      * fix format
  22. 19 Mar 2021 · 1 commit
  23. 02 Mar 2021 · 1 commit
  24. 15 Jan 2021 · 1 commit
  25. 14 Dec 2020 · 1 commit
  26. 10 Dec 2020 · 1 commit
  27. 07 Dec 2020 · 1 commit
  28. 02 Dec 2020 · 1 commit
    • Layer norm fp16 (#29169) · 7584bb50
      furnace committed
      * add fp16 for layer_norm op
      
      * revert layernorm api
      
      * fix forward
      
      * fix forward
      
      * fix backward for layernorm with fp16
      
      * fix unit test for layernorm with fp16
      
      * fix with_mkldnn compile error for layernorm with fp16
      
      * 1. revert to PADDLE_ENFORCE_NOT_NULL, 2. change static_cast<float> to static_cast<U>
      
      * fix with_mkldnn compile error for layernorm with fp16
      
      * fix with_mkldnn compile error for layernorm with fp16
      Co-authored-by: zhiqiu <chenqiuliang@baidu.com>
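      The "static_cast<float> to static_cast<U>" bullet above reflects how the kernels are templated: T is the storage type (e.g. __half) and U is a wider compute type, and every arithmetic step should route through U. A simplified sketch of the idea, one thread per row to keep the reduction out of the way (not the real kernel):

      ```cpp
      #include <cuda_fp16.h>

      template <typename T, typename U>
      __global__ void row_mean(const T* __restrict__ x, U* __restrict__ mean,
                               int rows, int feature_size) {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row >= rows) return;
        const T* p = x + row * feature_size;
        U sum = static_cast<U>(0);
        for (int i = 0; i < feature_size; ++i) {
          sum += static_cast<U>(p[i]);  // accumulate in U (float), not T (__half)
        }
        mean[row] = sum / static_cast<U>(feature_size);
      }

      // e.g. row_mean<__half, float><<<grid, block>>>(x, mean, rows, cols);
      ```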
  29. 14 May 2020 · 1 commit
  30. 20 Apr 2020 · 1 commit
  31. 06 Jan 2020 · 1 commit
  32. 05 Sep 2018 · 1 commit
  33. 08 Aug 2018 · 1 commit
  34. 12 Feb 2018 · 1 commit
  35. 10 Feb 2018 · 2 commits
  36. 03 Feb 2018 · 2 commits