- 18 Nov 2022, 1 commit

Committed by MarDino
* Add quick gelu and fused bias add kernel
* Fix annotation
* Remove useless code
* Add fast gelu option and set it in multi transformer op
* Add flag to restrict whether the fast gelu approximation is used
* Fix flags conflict
* Fix: use tanh function instead
* Add cudart version limit
* Use phi fast tanh func
* Fix comment
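For reference, a minimal sketch of the two GELU variants mentioned in this commit, assuming the usual sigmoid-based form for quick gelu and the tanh-based approximation for fast gelu; the function names and constants below are illustrative, not Paddle's actual kernel code.

```cpp
#include <cmath>

// Tanh-based "fast" GELU approximation:
// 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
float gelu_tanh_approx(float x) {
  const float kAlpha = 0.7978845608028654f;  // sqrt(2 / pi)
  const float kBeta = 0.044715f;
  return 0.5f * x * (1.0f + std::tanh(kAlpha * (x + kBeta * x * x * x)));
}

// Sigmoid-based "quick" GELU approximation: x * sigmoid(1.702 * x)
float gelu_quick_approx(float x) {
  return x / (1.0f + std::exp(-1.702f * x));
}
```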
- 31 Oct 2022, 1 commit

Committed by Nyakku Shigure
* Fix typo `Fasle`/`Flase` -> `False`
* Fix typo `Ture` -> `True`
- 28 Sep 2022, 1 commit

Committed by Chen Weihang
* Remove needless using tensor
* Remove needless using tensor
* Resolve conflict
* Replace tensor using
* Fix format error
* Revert needless changing
* Fix rocm and npu compile error
* Fix cinn compile error
* Fix format error
* Fix mkldnn format error
* Fix mkldnn format error
* Fix cinn compile error
* Fix cinn compile error
* Fix cinn compile error
* Resolve conflict
- 18 Sep 2022, 1 commit

Committed by RichardWooSJTU
- 01 Aug 2022, 1 commit

Committed by Leo Chen
* Remove cudaDeviceContext
* Remove more template
* Fix rocm compile
* Remove alias name CUDADeviceContext
* Fix compile
* Fix tests
* Revert changes
- 26 Jun 2022, 1 commit

Committed by Sing_chan
- 17 Jun 2022, 1 commit

Committed by Yiqun Liu
* Support optional residual add in fused_attention and fused_feedforward.
* Add checkpoint and add the check of add_residual when pre_layer_norm is false.
* Add TODO and change the Python API to add the add_residual argument.
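As context, a minimal sketch of what an `add_residual` switch like the one above typically controls in the output transform; the function and variable names are hypothetical, and the real op fuses this logic into its CUDA kernels rather than running it as a separate step.

```cpp
#include <vector>

// Sketch only: `proj` stands for the already-computed projection + dropout
// output, `residual` is the block input. Sizes are assumed equal.
std::vector<float> ApplyOptionalResidual(const std::vector<float>& proj,
                                         const std::vector<float>& residual,
                                         bool add_residual) {
  std::vector<float> out = proj;
  if (add_residual) {
    // Optional residual connection: out = proj + residual
    for (size_t i = 0; i < out.size(); ++i) {
      out[i] += residual[i];
    }
  }
  return out;
}
```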
- 05 Jun 2022, 1 commit

Committed by Sing_chan
- 31 May 2022, 1 commit

Committed by Li Min
* Replace dropout_is_test with is_test.
* Improve atol on A100.
- 20 Feb 2022, 1 commit

Committed by Chen Weihang
* Rename pten dir to phi
* Rename namespace to phi
* Rename infrt pten dir to phi
* Resolve conflict
* Rename pten to phi in cmake
* Revert all infrt changes
* Change needed files
* Fix infrt failures
* Fix inference failures
- 15 Feb 2022, 1 commit

Committed by Feiyu Chan
* Move paddle/operators/math/functors.h
* Move paddle/operators/math/compound_functors.h
- 29 Jan 2022, 1 commit

Committed by Li Min
* Add fp16 support for scale/bias for fused_layernnorm_residual_dropout_bias op.
* Remove useless code.
* Remove useless code.
* Optimize layer_norm fwd when cols is 1024.
* Remove useless code.
* Minors.
* Minors.
* Modifications according to reviews.
* Minors.
* Optimize layer_norm bwd kernel when cols is 1024.
* Polish layer_norm_bwd_1024 kernel.
* Limit ln_bwd_1024_kernel to paddle_with_cuda.
* Fix double type compile error.
* Add optimization of ln bwd for fused_dropout_add_ln op.
* Polish codes.
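As a rough reference for what a fused layer_norm + residual + bias op computes per row, here is an unfused sketch (dropout omitted for brevity); the names, argument order, and exact composition are assumptions for illustration, not the kernel's actual code. cols == 1024 is the row width the specialized forward/backward kernels above target.

```cpp
#include <cmath>
#include <vector>

// Sketch: y = LayerNorm(x + bias + residual), with per-column scale/ln_bias.
std::vector<float> LayerNormResidualBias(const std::vector<float>& x,
                                         const std::vector<float>& residual,
                                         const std::vector<float>& bias,
                                         const std::vector<float>& scale,
                                         const std::vector<float>& ln_bias,
                                         float epsilon = 1e-5f) {
  const size_t cols = x.size();  // e.g. 1024, the specialized fast path
  std::vector<float> y(cols);
  // 1) bias add + residual add
  for (size_t i = 0; i < cols; ++i) y[i] = x[i] + bias[i] + residual[i];
  // 2) row mean and variance
  float mean = 0.f, var = 0.f;
  for (float v : y) mean += v;
  mean /= cols;
  for (float v : y) var += (v - mean) * (v - mean);
  var /= cols;
  // 3) normalize, then apply scale and bias (these are the values the
  //    commit adds fp16 support for)
  const float inv_std = 1.f / std::sqrt(var + epsilon);
  for (size_t i = 0; i < cols; ++i)
    y[i] = (y[i] - mean) * inv_std * scale[i] + ln_bias[i];
  return y;
}
```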
- 17 Jan 2022, 1 commit

Committed by Wilber
* Add pten::Place data structure.
* Update ci problem
* Fix ci problem
* Update
* using platform::Place=pten::Place
* Remove BOOST_GET_CONST for CPUPlace and GPUPlace
* Compile pass 25%.
* Compile pass 45%
* Compile pass 60%
* Remove boost_get for xpu npu mlu and ipu
* Compile pass on cpu and gpu.
* Fix compile problem
* Fix compile error.
* Update
* Fix ci problem
* Update
* CI approve
* Fix ci problem
* Fix ci eager test problem
* Remove BOOST_GET_CONST
* Fix npu compile
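A minimal sketch of the kind of unified place descriptor this refactor introduces, assuming a simple enum + device id layout; the names below are illustrative and not the actual pten::Place/phi::Place definition.

```cpp
// Illustrative only: a device-agnostic "place" that replaces per-backend
// place classes, so code can branch on the allocation type instead of
// relying on BOOST_GET_CONST-style casts.
enum class AllocationType { UNDEFINED, CPU, GPU, XPU, NPU, MLU, IPU };

struct Place {
  AllocationType alloc_type{AllocationType::UNDEFINED};
  int device_id{0};  // only meaningful for device backends such as GPU
};
```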
- 07 Jan 2022, 1 commit

Committed by Li Min
* Add fp16 support for scale/bias for fused_layernnorm_residual_dropout_bias op.
- 28 Dec 2021, 1 commit

Committed by Li Min
- 22 Oct 2021, 1 commit

Committed by Li Min
Feature: the goal of this PR is to improve the compute performance of the attention module. To reduce the framework-level op scheduling overhead, the attention module is implemented by hand at the C++ level and exposed as one large attention op. To reduce memory-access overhead, two optimizations are applied: (1) the q, k, v projections share the input X, so the gemm, transpose, and bias add there are reduced from three calls to one; (2) kernel-fusion is used, with data passed through registers between the fused cuda kernels.
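A naive illustration of optimization (1): concatenating the Q/K/V weights so a single gemm over the shared input X produces all three projections at once. The shapes, layouts, and names below are assumptions for the sketch, not the op's real implementation.

```cpp
#include <vector>

// X is row-major [m, k]; Wqkv is the column-wise concatenation of Wq, Wk, Wv
// (each [k, n]), giving [k, 3n]; bqkv is the concatenated bias of length 3n.
// One matmul yields Q, K, V in one pass instead of three separate
// gemm + bias-add + transpose calls.
std::vector<float> FusedQKVProjection(const std::vector<float>& X,     // m*k
                                      const std::vector<float>& Wqkv,  // k*3n
                                      const std::vector<float>& bqkv,  // 3n
                                      int m, int k, int n) {
  std::vector<float> QKV(static_cast<size_t>(m) * 3 * n, 0.f);
  for (int i = 0; i < m; ++i) {
    for (int j = 0; j < 3 * n; ++j) {
      float acc = bqkv[j];
      for (int p = 0; p < k; ++p) acc += X[i * k + p] * Wqkv[p * 3 * n + j];
      QKV[i * 3 * n + j] = acc;
    }
  }
  return QKV;  // columns [0, n) are Q, [n, 2n) are K, [2n, 3n) are V
}
```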
- 29 Sep 2021, 1 commit

Committed by Li Min