1. 28 Oct 2021 (3 commits)
      Fix fused_attention_op and fused_feedforward_op bug when pre_layer_norm is false. (#36793) (#36816) · ae592233
      Committed by Li Min
      * Fix bug when pre_layer_norm is false.
      [Cherry-pick] FFT function enhancements and bugfixes (#36537) · 11b9f5f9
      Committed by Xiaoxu Chen
      * update fft api path (#36219)
      
      * update fft api path
      * add sample code for ihfft2 (a usage sketch follows this entry)
      Co-authored-by: chenfeiyu <chenfeiyu@baidu.com>
      
      * fix fft axis (#36321)
      
      fix: `-1` was incorrectly used when fft's `axis` is `0`
      
      * use unified external error message for cufft api (#36114)
      
      * fft: modify sample code result (#36325)
      
      * dynamically load MKL as an FFT backend when it is available and requested (#36414)
      
      * add rocm support for fft api (#36415)
      
      * move signal apis
      
      * move fft and signal API path (#2)
      
      * move signal apis
      
      * move fft.py and signal.py to paddle/, fix typos
      
      * fix relative imports from fft.py and signal.py
      
      * fix typos in signal.py (#3)
      
      * move signal apis
      
      * move fft.py and signal.py to paddle/, fix typos
      
      * fix relative imports from fft.py and signal.py
      
      * fix typos
      
      * disable Cache when CUFFT_VERSION >= 10200 (#4)
      
      * move signal apis
      
      * move fft.py and signal.py to paddle/, fix typos
      
      * fix relative imports from fft.py and signal.py
      
      * fix typos
      
      * Add LRUCache for fft plans (see the cache sketch after this entry)
      
      * add LRUCache for cufft and hipfft (#5)
      
      * move signal apis
      
      * move fft.py and signal.py to paddle/, fix typos
      
      * fix relative imports from fft.py and signal.py
      
      * fix typos
      
      * WIP: add cache
      
      * delete move constructor and operator= for CuFFTHandle and FFTConfig
      
      * remove log from CuFFTHandle and FFTConfig
      
      * add lrucache for fft rocm backend
      
      * disable LRUCache when CUFFT_VERSION >= 10200
      
      * disable copy and move for hipFFTHandle; format code
      Co-authored-by: Xiaoxu Chen <chenxx_id@163.com>
      
      * remove debug message of cufftHandler
      
      * roll_op: support Tensor as input for shifts (#36727)
      
      * fix fftshift/ifftshift on static mode
      
      * update roll_op version
      
      * add more test cases for fftshift/ifftshift
      Co-authored-by: zhiboniu <31800336+zhiboniu@users.noreply.github.com>
      Co-authored-by: chenfeiyu <chenfeiyu@baidu.com>
      Co-authored-by: LJQ <33169170+lijiaqi0612@users.noreply.github.com>
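
      The messages above move the FFT and signal APIs under paddle.fft / paddle.signal, add an ihfft2 sample, fix fftshift/ifftshift, and let roll_op take a Tensor for shifts. A minimal usage sketch, assuming a PaddlePaddle build that carries these changes (exact signatures and defaults may differ):

      ```python
      import paddle

      x = paddle.randn([4, 4], dtype="float64")

      # ihfft2: 2-D inverse FFT of a real signal; returns a complex
      # half-spectrum (Hermitian symmetry implied along the last axis).
      spec = paddle.fft.ihfft2(x)

      # fftshift moves the zero-frequency component to the center of the spectrum.
      centered = paddle.fft.fftshift(paddle.fft.fft2(x))

      # After #36727, roll also accepts a Tensor for `shifts`.
      shifts = paddle.to_tensor([1], dtype="int64")
      rolled = paddle.roll(x, shifts=shifts, axis=0)
      ```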
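
      The plan-cache commits above keep recently used cuFFT/hipFFT plans in an LRU structure and disable it when CUFFT_VERSION >= 10200. The real cache lives in C++ around the FFT handles; the following is only a Python sketch of the eviction logic, with make_plan as a hypothetical plan factory:

      ```python
      from collections import OrderedDict

      class PlanLRUCache:
          """LRU cache sketch for FFT plans keyed by (shape, dtype, ...)."""

          def __init__(self, max_size=4):
              self._plans = OrderedDict()
              self._max_size = max_size

          def get(self, key, make_plan):
              if key in self._plans:
                  self._plans.move_to_end(key)   # mark as most recently used
                  return self._plans[key]
              plan = make_plan(key)              # e.g. wraps cufftMakePlanMany
              self._plans[key] = plan
              if len(self._plans) > self._max_size:
                  self._plans.popitem(last=False)  # evict least recently used
              return plan
      ```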
      show paddle traceback after last user code traceback (#36741) (#36765) · 96edcea4
      Committed by 0x45f
      show paddle traceback after last user code traceback
  2. 27 Oct 2021 (6 commits)
  3. 26 Oct 2021 (16 commits)
  4. 25 Oct 2021 (9 commits)
  5. 24 Oct 2021 (1 commit)
      Add viterbi decode (#35778) (#36615) · 1906c746
      Committed by Jack Zhou
      * add viterbi decode cpu kernel
      
      * add viterbi decoder api in paddle.text
      
      * allocate the data buffer once to avoid frequently creating many small buffers
      
      * fix viterbi max_seq_length bug
      
      * fix seq_len=1 bug
      
      * fix device context
      
      * move split out of for loop
      
      * remove INVERSE_SUB
      
      * remove 2 GET_CAST_MASK
      
      * remove 1 loop
      
      * remove Functor
      
      * add to_static deploy code
      
      * use MAX_FUNC instead of ELE_MAX
      
      * add MaxFunctor
      
      * impl max_func
      
      * remove MaxFunctor
      
      * remove cast op
      
      * use REGISTER_OP_WITHOUT_GRADIENT
      
      * add viterbi cuda kernel
      
      * add FIX_BLOCKDIM_CASE macro
      
      * add MKL add, mul; add get data mask
      
      * add arange mkl impl
      
      * add CPU Argmax
      
      * add cpu gather
      
      * use EXECUTE_MKL_ELEMENT_BINARY_OP instead of some ADD, MUL
      
      * use SameDimsBinaryOP instead of EXECUTE_MKL_ELEMENT_BINARY_OP
      
      * use SAME_DIMS_ELEMENT_BINARY_OP
      
      * add SimpleBroadcastBinaryOP
      
      * use int instead of int64_t to accelerate
      
      * optimize SimpleBroadcastBinaryOP
      
      * optimize SimpleBroadcastBinaryOP
      
      * optimize performance in both single-thread and multi-thread situations
      
      * remove useless line
      
      * remove useless code
      
      * add CREATE_TENSOR_BUFFER macro
      
      * add INIT_REQUIRED_TENSOR macro
      
      * add comment
      
      * fix windows ci
      
      * add viterbi unittest
      
      * remove cuda add functor
      
      * remove cuda equal
      
      * remove a template function
      
      * fix windows ci
      
      * fix windows dtype
      
      * remove some template instance
      
      * remove useless header file
      
      * remove some blockdim
      
      * remove transpose impl
      
      * improve CPU performance in the single-thread case
      
      * viterbi_decode->crf_decode
      
      * rename crf params name
      
      * add viterbi api test
      
      * remove useless import
      
      * add enable_static
      
      * use viterbi decoder
      
      * fix viterbi len=1
      
      * fix viterbi unittest
      
      * remove useless comments
      
      * reconstruct viterbi decode
      
      * remove ADD,SUB,MUL structure
      
      * fix coverage
      
      * remove CREATE_TENSOR
      
      * add name args
      
      * crf.py->ops.py; with_start_stop_tag->include_start_end_tag
      
      * update crf_decode en docs
      
      * fix viterbi decode en docs
      
      * fix some review comments
      
      * add FIXED_BLOCK_DIM_CASE in cuda
      
      * push_back->emplace_back
      
      * crf_decode->viterbi_decode; include_start_end_tag->include_bos_eos_tag
      
      * paddle.text.ops.viterbi_decode->paddle.text.viterbi_decode (usage sketch after this entry)
      
      * fix viterbi_decode en docs
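
      A minimal usage sketch of the final API these messages converge on (paddle.text.viterbi_decode with include_bos_eos_tag); tensor shapes and the (scores, paths) return convention here are assumptions, not taken from the PR itself:

      ```python
      import paddle

      batch, seq_len, num_tags = 2, 4, 3
      emissions = paddle.rand([batch, seq_len, num_tags])   # unary potentials
      transitions = paddle.rand([num_tags, num_tags])       # tag transition scores
      lengths = paddle.to_tensor([4, 3], dtype="int64")     # valid length per sequence

      scores, paths = paddle.text.viterbi_decode(
          emissions, transitions, lengths, include_bos_eos_tag=False)
      print(paths)  # best tag sequence for each batch item
      ```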
  6. 21 Oct 2021 (2 commits)
  7. 20 Oct 2021 (2 commits)
  8. 19 Oct 2021 (1 commit)
      [cherry-pick]Add sparse attention cherrypick (#36447) · 36edb0e1
      Committed by Liu-xiandong
          The code in this PR only supports CUDA 11.2. CI currently has no GPU with CUDA 11.2, so all tests are skipped automatically.

          The new OP is paddle._C_ops.sparse_attention. The Python API will be added in a follow-up PR.

          This PR lacks tests for dynamic and static graphs; they will be added in subsequent PRs.
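
          Since the Python API is deferred, the following is only a conceptual sketch of what a sparse-attention kernel computes: attention scores restricted by a sparsity mask. This is NOT the paddle._C_ops.sparse_attention interface; the shapes and the lower-triangular mask are purely illustrative.

          ```python
          import paddle

          q = paddle.randn([1, 8, 16])  # [heads, seq_len, head_dim], illustrative
          k = paddle.randn([1, 8, 16])
          v = paddle.randn([1, 8, 16])
          mask = paddle.tril(paddle.ones([1, 8, 8]))  # keep only allowed positions

          scores = paddle.matmul(q, k, transpose_y=True) / 16 ** 0.5
          # Drop disallowed positions before the softmax so they get zero weight.
          scores = paddle.where(mask.astype("bool"), scores,
                                paddle.full_like(scores, float("-inf")))
          out = paddle.matmul(paddle.nn.functional.softmax(scores, axis=-1), v)
          ```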