1. 07 Jan 2022 (1 commit)
  2. 06 Jan 2022 (3 commits)
  3. 29 Dec 2021 (4 commits)
  4. 13 Dec 2021 (1 commit)
  5. 26 Nov 2021 (1 commit)
  6. 25 Nov 2021 (1 commit)
  7. 24 Nov 2021 (1 commit)
  8. 23 Nov 2021 (4 commits)
    • bug fix shard_index (#37042) (#37421) · f873d3a1
      Committed by lilong12
    • [cherry-pick]Refactor Heterogeneous Pipeline Parameter Server (#37446) · 4dc426f4
      Committed by zmx
      * bug fix for DeserializeSelectedRows. test=develop (#36520)
      
      * fix SerializeSelectedRows (#36543)
      
      * bug fix for DeserializeSelectedRows. test=develop
      
      * fix bug for SerializeSelectedRows. test=develop
      
      * update. test=develop
      
      * [Heterps]Refactor Heter Pipeline Parameter Server (#36845)
      
      * change username
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * update
      
      * update
      
      * update unittests
      
      * fix
      
      * update
      
      * fix
      
      * update
      
      * fix
      
      * fix
      
      * fix
      
      * update
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update send_and_recv op. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * update. test=develop
      
      * fix. test=develop
      
      * fix. test=develop
      
      * fix. test=develop
      
      * fix. test=develop
      
      * fix ut. test=develop
      
      * fix unit. notest,test=coverage
      
      * fix ut. notest, test=coverage
      
      * update. notest,test=coverage
      
      * fix ut. notest, test=coverage
      
      * fix ut. notest, test=coverage
      
      * fix. notest, test=coverage
      
      * fix. notest, test=coverage
      
      * fix ut. notest, test=coverage
      
      * fix ut. notest, test=coverage
      
      * fix ut. notest, test=coverage
      
      * fix ut. notest, test=coverage
      
      * add func. notest, test=coverage
      
      * fix ut. notest, test=coverage
      
      * fix. test=develop
      
      * fix. test=develop
      
      * Fix unit test for send_and_recv_cpu & send_and_recv_gpu (#37129)
      
      * [heterps]fix ut for heter_pipeline_trainer.cc  (#37136)
      
      * fix ut. test=develop
      
      * fix ut. test=develop
      
      * [heterps]bug fix for local training with --heter_worker_num (#37166)
      
      * fix. test=develop
      
      * fix. test=develop
      
      * fix. test=develop
      
      * fix. test=develop
      
      * fix. test=develop
      
      * fix ut. test=develop
      
      * fix ut. test=develop
      
      * fix ut. test=develop
      
      * [heterps]Refactor heterogeneous worker (#37244)
      
      * fix. test=develop
      
      * fix. test=develop
      
      * fix. test=develop
      
      * fix. test=develop
      
      * fix. test=develop
      
      * fix ut. test=develop
      
      * fix ut. test=develop
      
      * fix ut. test=develop
      
      * refactor heter trainer. test=develop
      
      * fix. test=develop
      
      * fix ut. test=develop
      
      * fix ut. test=develop
      
      * fix ut. test=develop
      
      * fix ut. test=develop
      
      * fix ut. test=develop
      
      * fix ut. test=develop
      
      * fix ut. test=develop
      
      * fix ut. test=develop
      
      * fix. test=develop
      
      * fix. test=develop
      
      * fix. test=develop
      
      * fix. test=develop
      
      * fix ut. test=develop
      
      * fix ut. test=develop
      
      * fix ut. test=develop
      
      * [heterps]add heterps mode judgement (#37298)
      
      * [heterps]change default executor for heter trainer (#37314)
      
      * fix pslib. test=develop
      
      * add device to train_from_dataset. test=develop
      
      * refine fleet.stop_worker. test=develop
      
      * fix ut. test=develop
      
      * fix ut. test=develop
      
      * fix executor & ut. test=develop
      
      * fix executor & ut. test=develop
      
      * fix executor & ut. test=develop
      
      * [heterps]remove api for heter pipeline ps (#37396)
      
      * fix api. test=develop
      
      * fix api. test=develop
      
      * fix code style. test=release/2.2
      
      * fix CMakeLists. test=develop (#37454)
    • elu support alpha < 0 (#37316) (#37437) · 436808c6
      Committed by zhupengyang
    • [Cherry-pick 2.2]Enhance error message of scatter op (#37431) · d5e73f07
      Committed by sneaxiy
      * enhance scatter err msg check
      
      * fix ci error
  9. 22 Nov 2021 (2 commits)
  10. 16 Nov 2021 (1 commit)
    • [cherry-pick-2.2.1]fix fused_transformer_encoder_layer bug (#37229) · 36dd295e
      Committed by zhangkaihuo
      Fixes several issues found while fine-tuning fused_transformer_encoder_layer:
      
          fused_attention_op: add support for attn_mask=None: PR
          fix pre_layer_norm handling: PR
          fix parameter handling and computation errors: PR
          fix an add_bias computation error: PR
          add pure fp16 support: PR
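      A minimal usage sketch of the layer these fixes target. It assumes paddle.incubate.nn.FusedTransformerEncoderLayer takes the same core constructor arguments as paddle.nn.TransformerEncoderLayer and that a CUDA device is available (the fused kernels are GPU-only); check the release/2.2 docs for the exact signature.

      ```python
      import paddle
      from paddle.incubate.nn import FusedTransformerEncoderLayer

      paddle.set_device("gpu")  # the fused attention/feedforward kernels assume CUDA

      # d_model=128, nhead=8, dim_feedforward=512 (argument order assumed to follow
      # paddle.nn.TransformerEncoderLayer)
      encoder = FusedTransformerEncoderLayer(128, 8, 512)

      x = paddle.rand([2, 16, 128])  # [batch, seq_len, d_model]
      out = encoder(x)               # src_mask defaults to None, which is now handled
      print(out.shape)               # [2, 16, 128]
      ```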
  11. 15 Nov 2021 (1 commit)
  12. 10 Nov 2021 (1 commit)
  13. 08 Nov 2021 (1 commit)
  14. 01 Nov 2021 (2 commits)
  15. 28 Oct 2021 (4 commits)
    • d8ffb261
    • [Cherry-pick] Enable CTC grad compute on GPU (#36780) · 8ede9e6f
      Committed by Hui Zhang
      * Revert "Align CTC grad scale same with ESPNet (#34729)"
      
      This reverts commit 10f9644c.
      
      * ctc grad compute on gpu
    • Fix fused_attention_op and fused_feedforward_op bug when pre_layer_norm is false. (#36793) (#36816) · ae592233
      Committed by Li Min
      * Fix bug when pre_layer_norm is false.
    • [Cherry-pick]FFT function enhancements and bugfixes (#36537) · 11b9f5f9
      Committed by Xiaoxu Chen
      * update fft api path (#36219)
      
      * update fft api path
      * add sample code for ihfft2
      Co-authored-by: chenfeiyu <chenfeiyu@baidu.com>
      
      * fix fft axis (#36321)
      
      fix: axis `-1` was incorrectly used when the fft axis is `0`
      
      * use unified external error message for cufft api (#36114)
      
      * fft: modify sample code result (#36325)
      
      * dynamically load mkl as an fft backend when it is available and requested (#36414)
      
      * add rocm support for fft api (#36415)
      
      * move signal apis
      
      * move fft and signal API path (#2)
      
      * move signal apis
      
      * move fft.py and signal.py to paddle/, fix typos
      
      * fix relative imports from fft.py and signal.py
      
      * fix typos in signal.py (#3)
      
      * move signal apis
      
      * move fft.py and signal.py to paddle/, fix typos
      
      * fix relative imports from fft.py and signal.py
      
      * fix typos
      
      * disable Cache when CUFFT_VERSION >= 10200 (#4)
      
      * move signal apis
      
      * move fft.py and signal.py to paddle/, fix typos
      
      * fix relative imports from fft.py and signal.py
      
      * fix typos
      
      * Add LRUCache for fft plans
      
      * add LRUCache for cuff and hipfft (#5)
      
      * move signal apis
      
      * move fft.py and signal.py to paddle/, fix typos
      
      * fix relative imports from fft.py and signal.py
      
      * fix typos
      
      * WIP: add cache
      
      * delete move constructor and operator= for CuFFTHandle and FFTConfig
      
      * remove log from CuFFTHandle and FFTConfig
      
      * add lrucache for fft rocm backend
      
      * disable LRUCache when CUFFT_VERSION >= 10200
      
      * disable copy and move for hipFFTHandle; format code
      Co-authored-by: Xiaoxu Chen <chenxx_id@163.com>
      
      * remove debug message of cufftHandler
      
      * roll_op: support Tensor as input for shifts (#36727)
      
      * fix fftshift/ifftshift on static mode
      
      * update roll_op version
      
      * add more test cases for fftshift/ifftshift
      Co-authored-by: zhiboniu <31800336+zhiboniu@users.noreply.github.com>
      Co-authored-by: chenfeiyu <chenfeiyu@baidu.com>
      Co-authored-by: LJQ <33169170+lijiaqi0612@users.noreply.github.com>
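      A short, hedged usage sketch touching the APIs this cherry-pick reworks: the relocated paddle.fft path, fftshift, and the Tensor-valued shifts for roll. Names follow the release/2.2 docs as I understand them.

      ```python
      import paddle

      x = paddle.rand([8], dtype="float32")

      spec = paddle.fft.fft(x)              # spectrum via the relocated paddle.fft path
      centered = paddle.fft.fftshift(spec)  # fftshift/ifftshift, also fixed for static mode

      shifts = paddle.to_tensor([2])        # roll now also accepts a Tensor for shifts
      rolled = paddle.roll(x, shifts=shifts, axis=0)
      print(centered.shape, rolled.shape)   # [8] [8]
      ```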
  16. 27 Oct 2021 (4 commits)
  17. 26 Oct 2021 (7 commits)
    • [Cherry-pick] Add FasterTokenizer Operator (#36716) · edff5b79
      Committed by Steffy-zxf
      * Add FasterTokenizer Operator (#34491)
      
      Add tokenizer-related functionality for Transformer models so that the training and prediction pipelines stay consistent.
      
      * support text strings as an input Tensor
      * support the "VOCAB" unordered_map<wstring, int> as an input Tensor for token lookup
      * a tokenizer for BERT: it applies end-to-end, text-string-to-wordpiece tokenization
      * it first applies basic tokenization, followed by wordpiece tokenization
      
      * optimize fast tokenizer
      
      * remove const_cast
      Co-authored-by: zhoushunjie <zhoushunjie@baidu.com>
      Co-authored-by: wawltor <fangzeyang0904@hotmail.com>
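      The operator itself is C++, but the described pipeline (basic tokenization followed by greedy longest-match wordpiece lookup) can be sketched in a few lines of plain Python. The toy vocab and helper names below are illustrative only, not the operator's API.

      ```python
      import re

      VOCAB = {"hello": 0, "world": 1, "in": 2, "##put": 3, "[UNK]": 4}  # toy vocab

      def basic_tokenize(text):
          # lowercase, then split into words and punctuation
          return re.findall(r"\w+|[^\w\s]", text.lower())

      def wordpiece_tokenize(word, vocab, unk="[UNK]"):
          pieces, start = [], 0
          while start < len(word):
              end, cur = len(word), None
              while start < end:                    # longest match first
                  piece = word[start:end]
                  if start > 0:
                      piece = "##" + piece          # continuation marker
                  if piece in vocab:
                      cur = piece
                      break
                  end -= 1
              if cur is None:
                  return [unk]
              pieces.append(cur)
              start = end
          return pieces

      def tokenize(text):
          return [p for w in basic_tokenize(text) for p in wordpiece_tokenize(w, VOCAB)]

      print(tokenize("Hello input"))  # ['hello', 'in', '##put']
      ```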
    • [cherry pick] add op: fused_feedforward(backward) (#36730) · 76c1bae1
      Committed by zhangkaihuo
      * add op: fused_feedforward(backward) (#35611)
      
      This PR adds the backward pass of fused_feedforward.
      
      Related kernel implementations: fused_dropout_act_bias, fused_residual_dropout_bias, fused_layernorm_residual_dropout_bias.
      
      fused_feedforward is a fusion operator that fuses and wraps the operators of the transformer feed-forward layer so that the frontend exposes a single interface; the fusion cuts part of the memory-access and kernel-launch time and thereby improves performance.
      
      * Move fused_attention and fused_feedforward functional api path to incubate (#36704)
      
      Move the Python API interfaces newly added in PRs #35905 and #35843 into the incubate directory.
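      For reference, an unfused sketch of the op sequence that fused_feedforward collapses into a single op (the pre_layer_norm variant is assumed; shapes and the dropout rate are illustrative). The comments map roughly onto the fused kernels named above.

      ```python
      import paddle
      import paddle.nn.functional as F

      def feedforward_reference(x, w1, b1, w2, b2, ln_scale, ln_bias, p=0.1):
          residual = x
          h = F.layer_norm(x, x.shape[-1:], weight=ln_scale, bias=ln_bias)  # pre_layer_norm
          h = F.dropout(F.relu(paddle.matmul(h, w1) + b1), p)  # ~ fused_dropout_act_bias
          h = F.dropout(paddle.matmul(h, w2) + b2, p)          # ~ fused_residual_dropout_bias
          return residual + h                                  # residual add

      d_model, d_ff = 128, 512
      x = paddle.rand([2, 16, d_model])
      w1, b1 = paddle.rand([d_model, d_ff]), paddle.zeros([d_ff])
      w2, b2 = paddle.rand([d_ff, d_model]), paddle.zeros([d_model])
      out = feedforward_reference(x, w1, b1, w2, b2,
                                  paddle.ones([d_model]), paddle.zeros([d_model]))
      print(out.shape)  # [2, 16, 128]
      ```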
    • [cherry-pick]add op: fused_feedforward(forward) (#36729) · 77034fc3
      Committed by zhangkaihuo
      This is a fusion operator that computes the feed-forward layer of the transformer model architecture.
    • Pool3d 2.0 (#36545) (#36721) · dfda193f
      Committed by feng_shuai
    • Add bincount op (#36317) (#36709) · 610a810c
      Committed by smallv0221
      * Add bincount op
      
      * upload cpu version
      
      * fix unittest
      
      * fix unittest
      
      * fix unittest
      
      * fix en doc
      
      * add more test
      
      * fix en doc
      
      * add more test case
      
      * fix test
      
      * fix input validation
      
      * fix input check
      
      * fix unittest
      
      * fix test
      
      * fix en doc
      
      cherry-pick
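      A quick semantics check for the new op, assuming paddle.bincount mirrors numpy.bincount: it counts occurrences of each non-negative integer, optionally weighted or padded to minlength.

      ```python
      import paddle

      x = paddle.to_tensor([0, 1, 1, 3])
      print(paddle.bincount(x))                # [1, 2, 0, 1]
      print(paddle.bincount(x, minlength=6))   # [1, 2, 0, 1, 0, 0]

      w = paddle.to_tensor([0.5, 0.5, 1.0, 2.0])
      print(paddle.bincount(x, weights=w))     # [0.5, 1.5, 0.0, 2.0]
      ```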
    • [Cherry-pick] Add the forward QR operator (#36627) · 616ce203
      Committed by Yulong Ao
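      Assuming the forward op is exposed as paddle.linalg.qr (reduced mode by default), a short sanity check:

      ```python
      import paddle

      x = paddle.rand([4, 3])
      q, r = paddle.linalg.qr(x)                      # reduced mode: q is [4, 3], r is [3, 3]
      print(paddle.allclose(paddle.matmul(q, r), x))  # True, up to floating-point error
      ```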
    • [cherry-pick-2.2] Fused attention op forward (#35905) (#36708) · d2be870a
      Committed by Li Min
      Purpose: this PR aims to improve the computational performance of the attention module.
      To reduce the framework's op-scheduling overhead, the attention module is implemented by hand at the C++ level and exposed as a single large attention op.
      To reduce memory-access overhead, two optimizations are applied:
      (1) by sharing the input X across the q, k, v computation, the gemm, transpose and bias-add there are reduced from three calls to one;
      (2) kernel-fusion techniques are used so that data is passed through registers between stages that would otherwise be separate CUDA kernels.
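      A small numpy sketch of optimization (1): because q, k and v all read the same input X, their three projections can be served by one GEMM against a concatenated weight (plus a single bias add) instead of three separate calls. Shapes are illustrative.

      ```python
      import numpy as np

      batch, seq, d = 2, 16, 64
      X = np.random.rand(batch, seq, d).astype("float32")
      Wq, Wk, Wv = (np.random.rand(d, d).astype("float32") for _ in range(3))

      # unfused: three GEMMs over the same X
      q, k, v = X @ Wq, X @ Wk, X @ Wv

      # fused: one GEMM against the concatenated [d, 3d] weight, then split
      W_qkv = np.concatenate([Wq, Wk, Wv], axis=1)
      q2, k2, v2 = np.split(X @ W_qkv, 3, axis=-1)

      assert np.allclose(q, q2, atol=1e-5) and np.allclose(v, v2, atol=1e-5)
      ```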
  18. 25 Oct 2021 (1 commit)
    • [cherry-pick 2.2] static model parallel dropout support deterministic RandomSeedGenerator (#36682) · 59615fff
      Committed by WangXi
      * Revert "Add fused_dropout wrapper to ease use. (#36185) (#36640)"
      
      This reverts commit 05d7e2fd.
      
      * [hybrid] seed and dropout op support force-cpu (#35820)
      
      * [HIP] fix op not support AMD GPU bug, the flag PADDLE_WITH_ROCM is invalid
      
      * [HIP] fix op not support AMD GPU bug, the flag PADDLE_WITH_ROCM is invalid
      
      * [HIP] fix op not support AMD GPU bug
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] seed and dropout op support force-cpu
      
      * [hybrid] fix seed ci failed issue
      
      * add AsExtra for force_cpu of seed op
      
      * Add fused_dropout wrapper to ease use. (#36185)
      
      * [hybrid] static model parallel dropout support deterministic RandomSeedGenerator (#36228)
      Co-authored-by: xiayanming <41795079@qq.com>
      Co-authored-by: Li Min <11663212+limin2021@users.noreply.github.com>
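      A conceptual Python sketch (not the Paddle API) of the idea behind a deterministic RandomSeedGenerator: each dropout op's seed is derived from a base seed and a stable name, so every model-parallel rank, and every run, regenerates the same mask.

      ```python
      import hashlib

      def derive_seed(base_seed: int, name: str) -> int:
          # hashlib (unlike Python's per-process salted hash()) is stable across runs
          digest = hashlib.sha256(f"{base_seed}:{name}".encode()).hexdigest()
          return int(digest[:8], 16)

      base_seed = 1234
      # identical on every rank and in every run for the same op name
      print(derive_seed(base_seed, "mp_dropout_layer_0"))
      print(derive_seed(base_seed, "mp_dropout_layer_1"))
      ```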