- 23 May 2023, 10 commits
-
Committed by Tian Zheng
-
Committed by Leo Chen
* add host memory stats
* add ut
-
Committed by huangjiyi
* update
* update
* update
* set out dtype
-
Committed by Wang Xin
* static graph autogen code support for pad3d op
* bug fixed
* add ut for pad3d mkldnn op
* fix coverage
* fix bug
* fix bug
* Delete test_pad3d_mkldnn_op.py
-
Committed by ronnywang
* [CustomDevice] fix auto_parallel
* update
* update
* update
-
Committed by LoneRanger
* fix the static op generation for group_norm
* fix bug of mismatch
* fix bug of AssertionError
* fix setting of composite
-
Committed by Yuanle Liu
-
Committed by kangguangli
* Use copy_if_different to avoid recompilation of generated cutlass kernels.
* add program parameter dialect_interface
* fix op create bug
* add conv2d
* draft of paddle converter
* fix CI
* fix windows CI
* fix program destructor
* printer draft
* fix bug
* printer draft finish
* fix windows CI
* reserve inplace semantics
* revert program::destroy since no need to do topology sort
* revert
* modify by reviews
* polish
* fix op definition
* fix CI
* refresh file changes

Co-authored-by: umiswing <umiswing@foxmail.com>
Co-authored-by: zhangbo9674 <zhangbo54@baidu.com>
-
Committed by zhupengyang
-
Committed by HongyuJia
* [0D-Tensor] Support elementwise_add
* support elementwise_add ZeroDim2&3
-
- 22 May 2023, 12 commits
-
Committed by risemeup1
* update_c++14_to_c++17_on_windows
* disable test_audio_logmel_feature and test_audio_mel_feature
-
Committed by xiongkun
* [Dy2static-Fallback] add set_eval_frame function in pybind.
* add unittest for eval frame hooker.
* support py38
* fix GeneratorExit error in eval frame hooker
* support python == 3.9
* support 3.10
* fix some comments
* speed up eval frame for cache-hit code.
* code format
* fix unittest

Co-authored-by: SigureMo <sigure.qaq@gmail.com>
-
Committed by kangguangli
* add conv2d
* printer draft
* fix bug
* printer draft finish
* fix windows CI
* commit printer and resnet50 related ops
* fix
* fix
* fix op definition

Co-authored-by: umiswing <umiswing@foxmail.com>
Co-authored-by: zhangbo9674 <zhangbo54@baidu.com>
-
Committed by shentanyue
-
Committed by cyber-pioneer
* recompute bn grad
* fix test case

Co-authored-by: sunli <466530738@qq.com>
-
Committed by zhupengyang
-
Committed by zhupengyang
-
Committed by Yuanle Liu
-
Committed by JYChen
-
Committed by Yuanle Liu
[Inference] add config.enable_low_precision_io api and remove reliance on AnalysisConfig::Precision in trt (#52485)
-
Committed by Tian Zheng
* Add GPU kernel for multiclass_nms3 op
* Make multiclass_nms3 gpu kernel output consistent with cpu kernel
* Fix API incompatibility
* Fix unittests on builds without CUDA
* Fix ROCM build
* Remove fluid headers; Use default atol for unittest
* Change function and variable naming
* Add comments; Reduce redundant code
* Use paddle test framework
-
Committed by niuliling123
Print Python traceback when debugmode = CHECK_NAN_INF_AND_ABORT and backward has nan/inf (#52808)
-
- 20 May 2023, 1 commit
-
Committed by zhangbo9674
* add types and attributes
* remove some const_cast
* refine code
-
- 19 May 2023, 9 commits
-
Committed by shentanyue
-
Committed by warrentdrew
* add minimum grad composite rules
* add public python api
* fix format
* fix format
* update testcase
* fix testcase
* fix format
* fix cmakelist.txt
* fix format
* fix param problem
* fix op and composite rule
* fix bf16 cpu support problem
* fix bf16 cpu issue
* fix axis error log
* add axis for maximum
* revert commit
* remove .orig
* fix generic problem
* revert max op
* fix axis error
* fix maximum axis
* fix test_check_output
* fix cinn
* fix minimum maximum axis check
-
Committed by limingshu
* Reorganize the forward codes of flash-attention.
* Fix forward.
* Remove some unused codes.
* Simplify codes and fix backward.
* Change all LOG(INFO) to VLOG and fix the backward.
* add scale for AF2 flash_attn, much thanks to xreki and shaojie for debug these codes
* decrease the effect of debug print on performance
* Unify the initialize of flashattn arguments.
* Rewrite the reshape of temp_mask and temp_bias.
* API support use_flash_attn.
* Fix compiling error on CI.
* Try to crop the flash-attention lib.
* Correct the condition of whether flash-attn can be used.
* Remove the softmax_out argument.
* Remove is_causal.
* Polish codes.
* Fix qkv_transpose_out's shape and scaling of Q * K.
* Update commit of flash-attention.

Co-authored-by: Liu Yiqun <liuyiqun01@baidu.com>
-
Committed by RedContritio
-
Committed by Galaxy1458
-
Committed by xiaoguoguo626807
* review
* modify opcompat bug
* modify pybind
-
Committed by zhoutianzi666
* decrease_peak_memory
-
Committed by Galaxy1458
-
Committed by ronnywang
-
- 18 May 2023, 8 commits
-
Committed by houj04
-
Committed by Yuanle Liu
-
Committed by Hulek
* Fused elementwises kernels and ops
* change fuse pass name
* adjust .pbtxt files
* adjust quantization attributes
* add missing arguments and fix others, review fixed
* simplify fused kernel registration
* fix elementwise unit tests
* reuse one fused elementwise op
* adjust proto
* Add supported datatypes
* Change 'Scale' to 'scale' in tests, change some tests to onednn
* Revert breaking changes
* Fix unit tests
* Delete obsolete test cases
* Delete commented out code
* Fix codestyle
* delete temporary condition
* fix conflicts and delete duplicate fusing
* Fix code after merge
* Move tests to new directory
* fix tests volatility
* Rename test_elementwise_add_onednn_op.py to test_elementwise_add_mkldnn_op.py
* Update CMakeLists.txt add mkldnn op test

Co-authored-by: Silv3S <slawomir.siwek@intel.com>
-
Committed by huangjiyi
-
Committed by co63oc
-
Committed by co63oc
-
Committed by Wang Xin
* move sequence_mask op InferShape func
* add dtype infer
-