- 28 Oct 2021 (1 commit)
-
Committed by Li Min
* Fix bug when pre_layer_norm is false.
-
- 27 Oct 2021 (8 commits)
-
Committed by pangyoki
-
Committed by piotrekobiIntel
* Add WIP version of elementwise_div_mkldnn without working dy grad
* Add dy gradient calculation implementation, disable broadcast tests
* Re-add removed tests from static_mode_white_list
* Add bfloat16 gradient tests, remove int8 and uint8 support
* Change the way dy grad is calculated to improve performance; refactor BinaryMKLDNNHandler to use a default parameter
* Change copyright year
* Refactor as suggested
* Attempt to bypass CI approval not accepting max_relative_error
* Fix formatting issue
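For context on the dy gradient above: for z = x / y, the chain rule gives dx = dz / y and dy = -dz * x / y**2. A minimal NumPy sketch (an illustration, not the actual oneDNN kernel) of one way to compute dy more cheaply by reusing the forward output z:

```python
import numpy as np

# Hedged sketch: reusing the already-computed forward output z avoids
# the extra square-and-divide, since -dz * x / y**2 == -dz * z / y.
rng = np.random.default_rng(0)
x = rng.standard_normal(5)
y = rng.standard_normal(5) + 3.0   # keep y away from zero
dz = rng.standard_normal(5)

z = x / y
dy_naive = -dz * x / y**2
dy_reused = -dz * z / y            # one fewer multiplication with y

assert np.allclose(dy_naive, dy_reused)
```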
-
Committed by Feiyu Chan
* WIP: add cache
* delete move constructor and operator= for CuFFTHandle and FFTConfig
* remove log from CuFFTHandle and FFTConfig
* add LRU cache for the fft rocm backend
* disable LRUCache when CUFFT_VERSION >= 10200
* disable copy and move for hipFFTHandle; format code
* clean debug code
Co-authored-by: Xiaoxu Chen <chenxx_id@163.com>
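A minimal sketch of the caching idea in this entry, assuming a plan-per-configuration design; the class and function names below are hypothetical and not Paddle's C++ implementation:

```python
from collections import OrderedDict

class PlanLRUCache:
    """Hypothetical LRU cache for expensive FFT plan objects,
    keyed by the configuration that determines the plan."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self._cache = OrderedDict()

    def lookup(self, key, make_plan):
        if key in self._cache:
            self._cache.move_to_end(key)      # mark as most recently used
            return self._cache[key]
        plan = make_plan(key)                 # e.g. wraps cufft/hipfft plan creation
        self._cache[key] = plan
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)   # evict least recently used
        return plan

# The key should cover everything the plan depends on:
# transform kind, shape, dtype, batch size, strides, and so on.
cache = PlanLRUCache(capacity=4)
plan = cache.lookup(("c2c", (1024,), "complex64"), lambda key: object())
```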
-
Committed by baoachun
* fix matmul dim error
* fix wrong dim check in matmul
-
Committed by Feiyu Chan
* fix fftshift/ifftshift on static mode
* update roll_op version
* add more test cases for fftshift/ifftshift
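As a reminder of the semantics involved: fftshift moves the zero-frequency component to the center of the spectrum, and ifftshift undoes it. NumPy's reference behaviour is shown below as an illustration of the convention these APIs follow:

```python
import numpy as np

freqs = np.fft.fftfreq(6)
print(freqs)                   # [ 0.      0.1667  0.3333 -0.5    -0.3333 -0.1667]
print(np.fft.fftshift(freqs))  # zero frequency moved to the center
print(np.fft.ifftshift(np.fft.fftshift(freqs)))  # ifftshift inverts fftshift
```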
-
Committed by taixiurong
-
Committed by huangjun12
* add eigvalsh with is_test
* add eigvalsh op
* fix backward bug
* forward and backward, float and complex, unittest
* remove eigvalsh_helper.h
* remove changes of cusolver.h
* fix unittest
* fix unittest bug
* update code following eigh
* fix test
* update lapack
* pull develop
* update functor
* fix unittest bug
* fix details
* add tensor_method_func
* fix notes
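For readers unfamiliar with the op: eigvalsh returns only the eigenvalues of a Hermitian (or real symmetric) matrix, in ascending order. NumPy's reference behaviour, shown as an illustration of the contract:

```python
import numpy as np

# Eigenvalues of a symmetric matrix; no eigenvectors are computed.
a = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(np.linalg.eigvalsh(a))   # [1. 3.]
```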
-
Committed by whs
-
- 26 Oct 2021 (7 commits)
-
Committed by Li Min
Purpose: this PR aims to improve the compute performance of the attention module. To reduce the framework-level op scheduling overhead, it implements the attention module by hand at the C++ level and exposes one large fused attention op. To reduce memory-access overhead, it adopts two optimizations: (1) when computing q, k, and v, the input X is shared, so the gemm, transpose, and bias add there drop from three calls to one; (2) kernel-fusion optimization passes data between different CUDA kernels through registers.
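A NumPy sketch of optimization (1), with hypothetical shapes: because the three projections share the input X, their weights can be concatenated once and applied with a single larger GEMM instead of three:

```python
import numpy as np

hidden = 16
x = np.random.rand(8, hidden)                       # shared input X
wq, wk, wv = (np.random.rand(hidden, hidden) for _ in range(3))

qkv = x @ np.concatenate([wq, wk, wv], axis=1)      # one GEMM, not three
q, k, v = np.split(qkv, 3, axis=1)

assert np.allclose(q, x @ wq) and np.allclose(v, x @ wv)
```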
-
Committed by Feiyu Chan
-
Committed by zhulei
-
Committed by Li Min
Move the Python API added in PR #35905 and PR #35843 to the incubate directory.
-
Committed by Qi Li
* [NPU] fix argsort op, test=develop
* remove debug files, test=develop
* fix typo, test=develop
* address review comments, test=develop
-
Committed by Jack Zhou
* optimize fast tokenizer
-
Committed by feng_shuai
-
- 25 Oct 2021 (6 commits)
-
Committed by Aganlengzi
* [NPU] modifications for model ernie-1.0
* rollback 503003 and change cast to dtype
-
Committed by zhangkaihuo
This PR contains the backward code of fused_feedforward. Related kernel implementations: fused_dropout_act_bias, fused_residual_dropout_bias, fused_layernorm_residual_dropout_bias. fused_feedforward is a fused operator that fuses and wraps the operators in the feed-forward layer of the transformer model, so the front end exposes a single interface; the fusion cuts part of the memory-access and kernel-launch time and thereby improves performance.
-
Committed by smallv0221
* Add bincount op
* upload cpu version
* fix unittest
* fix unittest
* fix unittest
* fix en doc
* add more test
* fix en doc
* add more test case
* fix test
* fix input validation
* fix input check
* fix unittest
* fix test
* fix en doc
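bincount counts occurrences of each non-negative integer value; an optional weights array accumulates weights instead of counts. NumPy's reference behaviour, shown as an illustration of the semantics:

```python
import numpy as np

x = np.array([0, 1, 1, 3])
print(np.bincount(x))                                # [1 2 0 1]
print(np.bincount(x, weights=[0.5, 1.0, 1.0, 2.0]))  # [0.5 2.  0.  2. ]
```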
-
Committed by TTerror
* add some ops to train ssd on kunlun
* add some ops to train ssd on kunlun
* add some ops to train ssd on kunlun
* update cast op unittest
* update cast op unittest
* update cast op unittest
* update xpu cmake
* update cast unittest
-
Committed by whs
-
Committed by zhangkaihuo
This PR contains only the forward code of fused_feedforward. Related kernel implementations: fused_dropout_act_bias, fused_residual_dropout_bias, fused_layernorm_residual_dropout_bias. fused_feedforward is a fused operator that fuses and wraps the operators in the feed-forward layer of the transformer model, so the front end exposes a single interface; the fusion cuts part of the memory-access and kernel-launch time and thereby improves performance.
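To make the fusion scope concrete, here is a rough unfused NumPy reference of the pattern such an op covers. This is a sketch assuming the pre-layer-norm variant with ReLU; dropout and the fused kernels themselves are omitted:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def feedforward_reference(x, w1, b1, w2, b2):
    # norm -> linear -> activation -> linear -> residual add
    h = layer_norm(x)
    h = np.maximum(h @ w1 + b1, 0.0)   # first linear + ReLU
    h = h @ w2 + b2                    # second linear
    return x + h                       # residual connection

x = np.random.rand(4, 16)
w1, b1 = np.random.rand(16, 64), np.zeros(64)
w2, b2 = np.random.rand(64, 16), np.zeros(16)
out = feedforward_reference(x, w1, b1, w2, b2)   # shape (4, 16)
```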
-
- 23 Oct 2021 (1 commit)
-
Committed by baoachun
-
- 22 Oct 2021 (3 commits)
-
Committed by zhangbo9674
-
Committed by Li Min
Purpose: this PR aims to improve the compute performance of the attention module. To reduce the framework-level op scheduling overhead, it implements the attention module by hand at the C++ level and exposes one large fused attention op. To reduce memory-access overhead, it adopts two optimizations: (1) when computing q, k, and v, the input X is shared, so the gemm, transpose, and bias add there drop from three calls to one; (2) kernel-fusion optimization passes data between different CUDA kernels through registers.
-
Committed by Weilong Wu
* Support elementwise_add triple grad Kernel
* Change code-format to follow CI std
* Removed unreasonable code, and fixed an input uninitialized issue
-
- 21 Oct 2021 (7 commits)
-
Committed by zhulei
-
Committed by ronnywang
-
Committed by jakpiase
* added base changes for matmul_v2+trans+resh fuse pass
* added full matmul_v2+transpose+reshape pass
* removed a file added by mistake
* applied reviewers' suggestions
* Changed ops type in checking compatibility version
* Deleted one statement
-
Committed by furnace
* add sync_batch_norm (supports train and infer, fp32 and fp16, NCHW and NHWC)
* [NPU] Delete debug codes
* [NPU] Remove FP16
-
Committed by Jack Zhou
* add viterbi decode cpu kernel
* add viterbi decoder api in paddle.text
* add a data buffer once to avoid frequently creating many small data buffers
* fix viterbi max_seq_length bug
* fix seq_len=1 bug
* fix device context
* move split out of for loop
* remove INVERSE_SUB
* remove 2 GET_CAST_MASK
* remove 1 loop
* remove Functor
* add to_static deploy code
* use MAX_FUNC instead of ELE_MAX
* add MaxFunctor
* impl max_func
* remove MaxFunctor
* remove cast op
* use REGISTER_OP_WITHOUT_GRADIENT
* add viterbi cuda kernel
* add FIX_BLOCKDIM_CASE macro
* add MKL add, mul; add get data mask
* add arange mkl impl
* add CPU Argmax
* add cpu gather
* use EXECUTE_MKL_ELEMENT_BINARY_OP instead of some ADD, MUL
* use SameDimsBinaryOP instead of EXECUTE_MKL_ELEMENT_BINARY_OP
* use SAME_DIMS_ELEMENT_BINARY_OP
* add SimpleBroadcastBinaryOP
* use int instead of int64_t to accelerate
* optimize SimpleBroadcastBinaryOP
* optimize SimpleBroadcastBinaryOP
* optimize performance in both single thread and multithread situation
* remove useless line
* remove useless code
* add CREATE_TENSOR_BUFFER macro
* add INIT_REQUIRED_TENSOR macro
* add comment
* fix windows ci
* add viterbi unittest
* remove cuda add functor
* remove cuda equal
* remove a template function
* fix windows ci
* fix windows dtype
* remove some template instance
* remove useless header file
* remove some blockdim
* remove transpose impl
* accelerate cpu performance on single thread situation
* viterbi_decode->crf_decode
* rename crf params name
* add viterbi api test
* remove useless import
* add enable_static
* use viterbi decoder
* fix viterbi len=1
* fix viterbi unittest
* remove useless comments
* reconstruct viterbi decode
* remove ADD,SUB,MUL structure
* fix coverage
* remove CREATE_TENSOR
* add name args
* crf.py->ops.py; with_start_stop_tag->include_start_end_tag
* update crf_decode en docs
* fix viterbi decode en docs
* fix some review comments
* add FIXED_BLOCK_DIM_CASE in cuda
* push_back->emplace_back
* crf_decode->viterbi_decode; include_start_end_tag->include_bos_eos_tag
* paddle.text.ops.viterbi_decode->paddle.text.viterbi_decode
* fix viterbi_decode en docs
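For reference, the algorithm behind the op: Viterbi decoding keeps, for each tag, the best score of any path ending in that tag, plus a backpointer to recover the path. A minimal NumPy sketch (an illustration of the algorithm, not the Paddle kernel or the final viterbi_decode API):

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    # emissions: [seq_len, num_tags]; transitions: [num_tags, num_tags],
    # where transitions[i, j] is the score of moving from tag i to tag j.
    # Returns the highest-scoring tag sequence and its score.
    score = emissions[0].copy()
    history = []
    for emission in emissions[1:]:
        total = score[:, None] + transitions   # all (prev, next) pairs
        history.append(total.argmax(axis=0))   # best predecessor per tag
        score = total.max(axis=0) + emission
    best_last = int(score.argmax())
    path = [best_last]
    for backpointers in reversed(history):     # walk backpointers backwards
        path.append(int(backpointers[path[-1]]))
    path.reverse()
    return path, float(score.max())

emissions = np.random.rand(5, 3)
transitions = np.random.rand(3, 3)
tags, best_score = viterbi_decode(emissions, transitions)
```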
-
Committed by TTerror
* add some ops to train ssd on kunlun
* update test_fill_any_like_op_xpu.py
-
Committed by niuliling123
* Update the implementation of reduceAnyKernel according to the kernel primitive API
* Fix a bug in ReadData, ReadDataBc and ReadDataReduce when NX != 1
-
- 20 Oct 2021 (5 commits)
-
Committed by 李季
* fix global gather and global scatter operators
-
Committed by ronnywang
-
Committed by Steffy-zxf
Add Tokenizer-related functionality for the Transformer model so that the training and prediction processes are consistent.
* support a text string as an input Tensor
* support a "VOCAB" unordered_map<wstring, int> as an input Tensor to look up tokens
* Tokenizer used for BERT: applies an end-to-end, text string to wordpiece tokenization
* it first applies basic tokenization, followed by wordpiece tokenization
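A rough sketch of the two-stage pipeline described above, with hypothetical helper names (the real tokenizer also handles punctuation splitting, Unicode normalization, special tokens, and so on): basic tokenization followed by greedy longest-match-first wordpiece tokenization.

```python
def basic_tokenize(text):
    # Simplified basic tokenization: lowercase + whitespace split.
    return text.lower().split()

def wordpiece_tokenize(token, vocab, unk="[UNK]"):
    pieces, start = [], 0
    while start < len(token):
        end = len(token)
        while end > start:                 # try the longest match first
            piece = token[start:end]
            if start > 0:
                piece = "##" + piece       # continuation-piece marker
            if piece in vocab:
                pieces.append(piece)
                break
            end -= 1
        else:
            return [unk]                   # no piece matched at this position
        start = end
    return pieces

vocab = {"un", "##aff", "##able", "##affable"}
tokens = [p for t in basic_tokenize("Unaffable")
          for p in wordpiece_tokenize(t, vocab)]
print(tokens)   # ['un', '##affable']
```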
-
Committed by wuhuachaocoding
-
Committed by Zeng Jinle
-
- 19 Oct 2021 (2 commits)
-
Committed by Weilong Wu
* Support elementwise_add triple grad Kernel
* Change code-format to follow CI std
-
Committed by zhulei
* [NPU] Add iou_similarity op
* [NPU] Add iou_similarity op
* [NPU] Add iou_similarity op
-