- 26 May 2023 (2 commits)
  - Committed by zhaoyingli
    * global view process_group
    * fix import
    * fix attr
    * fix tuner init comm
  - Committed by risemeup1
    * fix test error
    * fix test_exectuor_feed_non_tensor
    * fix test error
- 25 May 2023 (5 commits)
  - Committed by HongyuJia
  - Committed by zhangkaihuo
  - Committed by thunder95
  - Committed by zhouweiwei2014
  - Committed by houj04
- 24 May 2023 (3 commits)
  - Committed by Leo Chen
  - Committed by zhangkaihuo
  - Committed by Haohongxiang
- 23 May 2023 (10 commits)
  - Committed by Zhang Zheng
    * [AMP OP&Test] Support float16 in selu
    * fix
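    A minimal usage sketch of float16 selu (assuming a GPU build of Paddle with the float16 kernel registered; shape and values are illustrative):

    ```python
    import paddle

    paddle.set_device("gpu")                    # float16 kernels target GPU here
    x = paddle.rand([4, 8]).astype("float16")   # cast a random input to float16
    y = paddle.nn.functional.selu(x)
    print(y.dtype)                              # paddle.float16
    ```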
  - Committed by Fisher
    * Enable check_cinn on some tests: bitwise, compare, shape, assign_value, sum, expand_v2, lookup_table, lookup_table_v2
    * Enable more CINN tests (expand_v2, matmul, matmul_v2, mul, norm, one_hot_v2); add target select in cinn_launch_op
    * Revert test_mul_op
    * Improve op unit tests
  - Committed by co63oc
  - Committed by co63oc
  - Committed by co63oc
  - Committed by cyberslack_lee
  - Committed by Leo Chen
    * add host memory stats
    * add ut
  - Committed by ronnywang
    * [CustomDevice] fix auto_paralell
    * update
    * update
    * update
  - Committed by zxcd
    * fix processing logic of the arange function when dtype is empty
    * update commit version
    * fix ValueError when end is None
    * add unit test for new case
    * fix tensor type
    * remove paddle.to_tensor(), add more test units
    * remove useless line
    * fix enable_static
    * add new test unit
    * fix by comment
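    For reference, a small sketch of the paddle.arange behavior these fixes touch (standard API semantics; the concrete values are illustrative):

    ```python
    import paddle

    # With a single positional argument, it is treated as `end` and `start`
    # defaults to 0; the dtype is inferred from the inputs when not given.
    print(paddle.arange(5))                # Tensor([0, 1, 2, 3, 4])
    print(paddle.arange(0.0, 1.0, 0.25))   # Tensor([0.0, 0.25, 0.5, 0.75])
    ```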
  - Committed by HongyuJia
    * [0D-Tensor] Support elementwise_add
    * support elementwise_add ZeroDim2&3
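    A minimal sketch of 0-D (scalar) tensor addition from the Python side (assuming a Paddle build with 0-D Tensor support; values are illustrative):

    ```python
    import paddle

    a = paddle.to_tensor(1.5)   # 0-D tensor, shape []
    b = paddle.to_tensor(2.5)   # 0-D tensor, shape []
    c = a + b                   # elementwise_add on two 0-D tensors
    print(c.shape)              # []
    print(float(c))             # 4.0
    ```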
- 22 May 2023 (8 commits)
  - Committed by zhenhailiu
    * unify code
    * remove useless code
    * polish
    * python/paddle/distributed/fleet/meta_parallel/pipeline_parallel.py
    * polish
    * polish
  - Committed by Meteor Liu
    * [dygraph] unify _non_static_mode(), in_dygraph_mode() and in_dynamic_mode() (repeated across six commits)
    * fixed cyclic reference that caused partial import
    * fixed bad change
    * fix bad import (three commits)
    * fix ut failures caused by changing in_dynamic_mode (two commits)
    * fixed usage of in_dynamic_mode() or in_dygraph_mode()
    * revert python3 to python in .pre-commit-config.yaml
    * fix merge conflicts
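    A short sketch of the public mode check this series consolidates around, paddle.in_dynamic_mode():

    ```python
    import paddle

    # Dynamic (eager/dygraph) execution is the default.
    print(paddle.in_dynamic_mode())   # True

    paddle.enable_static()            # switch to static-graph mode
    print(paddle.in_dynamic_mode())   # False

    paddle.disable_static()           # back to dynamic mode
    print(paddle.in_dynamic_mode())   # True
    ```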
  - Committed by Zhang Ting
  - Committed by niuliling123
  - Committed by niuliling123
  - Committed by JYChen
  - Committed by Tian Zheng
    * Add GPU kernel for multiclass_nms3 op
    * Make multiclass_nms3 GPU kernel output consistent with CPU kernel
    * Fix API incompatibility
    * Fix unittests on builds without CUDA
    * Fix ROCM build
    * Remove fluid headers; use default atol for unittest
    * Change function and variable naming
    * Add comments; reduce redundant code
    * Use paddle test framework
  - Committed by niuliling123
    * Print Python traceback when debug mode is CHECK_NAN_INF_AND_ABORT and backward has NaN/Inf (#52808)
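    A hedged sketch of turning this checking mode on via paddle.amp.debugging (assuming the TensorCheckerConfig / enable_tensor_checker API available around Paddle 2.5; exact names and defaults may differ between versions):

    ```python
    import paddle
    from paddle.amp.debugging import (
        DebugMode,
        TensorCheckerConfig,
        enable_tensor_checker,
        disable_tensor_checker,
    )

    # Assumption: CHECK_NAN_INF_AND_ABORT aborts and prints a Python traceback
    # as soon as a NaN/Inf is produced, including during backward.
    config = TensorCheckerConfig(enable=True, debug_mode=DebugMode.CHECK_NAN_INF_AND_ABORT)
    enable_tensor_checker(config)

    x = paddle.to_tensor([4.0, 0.0], stop_gradient=False)
    y = paddle.sqrt(x)      # forward is finite: [2.0, 0.0]
    y.sum().backward()      # d(sqrt)/dx = 0.5 / sqrt(x) -> Inf at x = 0; the checker reports it

    disable_tensor_checker()
    ```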
- 20 May 2023 (1 commit)
  - Committed by ShenLiang
- 19 May 2023 (5 commits)
  - Committed by warrentdrew
    * add minimum grad composite rules
    * add public python api
    * fix format
    * fix format
    * update testcase
    * fix testcase
    * fix format
    * fix cmakelist.txt
    * fix format
    * fix param problem
    * fix op and composite rule
    * fix bf16 cpu support problem
    * fix bf16 cpu issue
    * fix axis error log
    * add axis for maximum
    * revert commit
    * remove .orig
    * fix generic problem
    * revert max op
    * fix axis error
    * fix maximum axis
    * fix test_check_output
    * fix cinn
    * fix minimum maximum axis check
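    A small sketch of the paddle.minimum call whose backward these composite rules cover (standard API; shapes and values are illustrative):

    ```python
    import paddle

    x = paddle.to_tensor([1.0, 5.0, 2.0], stop_gradient=False)
    y = paddle.to_tensor([2.0, 4.0, 3.0], stop_gradient=False)
    out = paddle.minimum(x, y)   # elementwise minimum: [1.0, 4.0, 2.0]
    out.sum().backward()
    print(x.grad)                # 1 where x supplied the minimum, else 0: [1.0, 0.0, 1.0]
    print(y.grad)                # [0.0, 1.0, 0.0]
    ```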
  - Committed by limingshu
    * Reorganize the forward codes of flash-attention.
    * Fix forward.
    * Remove some unused codes.
    * Simplify codes and fix backward.
    * Change all LOG(INFO) to VLOG and fix the backward.
    * add scale for AF2 flash_attn, much thanks to xreki and shaojie for debugging these codes
    * decrease the effect of debug print on performance
    * Unify the initialization of flashattn arguments.
    * Rewrite the reshape of temp_mask and temp_bias.
    * API support use_flash_attn.
    * Fix compiling error on CI.
    * Try to crop the flash-attention lib.
    * Correct the condition of whether flash-attn can be used.
    * Remove the softmax_out argument.
    * Remove is_causal.
    * Polish codes.
    * Fix qkv_transpose_out's shape and scaling of Q * K.
    * Update commit of flash-attention.
    Co-authored-by: Liu Yiqun <liuyiqun01@baidu.com>
  - Committed by Zhang Zheng
    * Add large dim test of log_softmax
    * fix
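    For context, a tiny sketch of log_softmax over a large trailing dimension (standard paddle.nn.functional.log_softmax usage; sizes are illustrative):

    ```python
    import paddle
    import paddle.nn.functional as F

    x = paddle.randn([2, 100000])
    y = F.log_softmax(x, axis=-1)        # log-probabilities over the last axis
    print(paddle.exp(y).sum(axis=-1))    # each row sums to ~1.0
    ```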
  - Committed by Charles-hit
  - Committed by Danyang Zhang
    * delete bf16 of cross entropy
    * delete bf16 of cross entropy
- 18 May 2023 (6 commits)
  - Committed by houj04
  - Committed by Charles-hit
    * add meshgrid, expand_as, prod and grad bf16 kernel
    * fix bf16 for optest
    * modify code style
    * fix amp test
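    A hedged sketch of calling one of these ops in bfloat16 (assuming a device/build where the bf16 kernels are registered; the cast path and shapes are illustrative):

    ```python
    import paddle

    x = paddle.rand([4, 5]).astype("bfloat16")   # cast a random input to bf16
    out = paddle.prod(x, axis=-1)                # product over the last axis
    print(out.dtype, out.shape)                  # dtype follows the input; shape [4]
    ```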
  - Committed by PuQing
    * fix parameter not passed
    * fix repr
  - Committed by HongyuJia
    * [CINN] Fix TestGelu unittest of CINN
    * pass if_enable_cinn
  - Committed by co63oc
  - Committed by Hulek
    * Fused elementwise kernels and ops
    * change fuse pass name
    * adjust .pbtxt files
    * adjust quantization attributes
    * add missing arguments and fix others, review fixed
    * simplify fused kernel registration
    * fix elementwise unit tests
    * reuse one fused elementwise op
    * adjust proto
    * Add supported datatypes
    * Change 'Scale' to 'scale' in tests, change some tests to onednn
    * Revert breaking changes
    * Fix unit tests
    * Delete obsolete test cases
    * Delete commented out code
    * Fix codestyle
    * delete temporary condition
    * fix conflicts and delete duplicate fusing
    * Fix code after merge
    * Move tests to new directory
    * fix tests volatility
    * Rename test_elementwise_add_onednn_op.py to test_elementwise_add_mkldnn_op.py
    * Update CMakeLists.txt: add mkldnn op test
    Co-authored-by: Silv3S <slawomir.siwek@intel.com>