- Jan 12, 2021 (6 commits)
-
-
Committed by swtkiwi
* fix datanorm error msg (#30294)
* Optimize the error message of framework. (#30134)
* modify error message based on comments (#30189); edit code and correct spelling according to review
* fix enforce msg of sum xpu op (#30113)
* enhance error info for py_func (#30138)
* fix elugradgrad test failure & error message opt (#30171); fix unittest, test=develop; update prroi_pool_op.h to fix its error message; optimize message, test=develop; fix CI failure, test=develop
* Refine PADDLE_ENFORCE error messages, test=develop (#30149): improve some error messages in parallel_executor.cc, conditional_block_op.cc and recurrent_op.cc
* enhance error message, test=develop (#30220)
* fix error message for distribute_fpn_proposals_op (#30116)
* enhance error msgs of fusion_seqpool_cvm_concat_op.cc, test=develop (#30240)
* add the op error message for the matmul xpu op (#30246)
* enhance error message of nll_loss op, test=develop (#30125)
Co-authored-by: yaoxuefeng <yaoxuefeng@baidu.com>
Co-authored-by: xiemoyuan <71377852+xiemoyuan@users.noreply.github.com>
Co-authored-by: WeiXin <weixin10@baidu.com>
Co-authored-by: Jack Zhou <zhoushunjie@baidu.com>
Co-authored-by: Wilber <jiweibo@baidu.com>
Co-authored-by: Double_V <liuvv0203@163.com>
Co-authored-by: Huihuang Zheng <zhhsplendid@gmail.com>
Co-authored-by: zhang wenhui <frankwhzhang@126.com>
Co-authored-by: wangguanzhong <jerrywgz@126.com>
Co-authored-by: 石晓伟 <39303645+Shixiaowei02@users.noreply.github.com>
Co-authored-by: wawltor <fangzeyang0904@hotmail.com>
Co-authored-by: lijianshe02 <48898730+lijianshe02@users.noreply.github.com>
-
Committed by Leo Chen
[cherry-pick] use cuda generator in bernoulli cuda kernel (#30199)
-
Committed by Chengmo
-
Committed by wangchaochaohu
* reduce the memory footprint of the fused pattern of the elementwise_add op and an activation op (relu, for example) (#29885)
* register OpMaker and the InferShape check for fused_elementwise_add (#30259)
-
Committed by Zhen Wang
[Cherry-pick] Fix the accuracy problem of the allclose op when using the float64 data type in static mode (#29890) (#30313).
* Fix the accuracy problem of the allclose op when using the float64 data type in static mode.
* Format the code style.
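For reference, allclose uses the usual NumPy element-wise criterion (which paddle.allclose also documents); the fix above is about evaluating it without losing float64 precision:

```latex
\lvert a - b \rvert \le \mathrm{atol} + \mathrm{rtol} \cdot \lvert b \rvert
```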
-
Committed by chentianyu03
* complex gradient for matmul (#29966): dot op supports complex types; matmul supports complex types; add test cases; matmul broadcast gradient supports complex; move conjFunctor to complex_functor.h
* change the kron gradient for complex types (#29995)
* type promotion for grad (#30177): add type promotion for the grad of the div op
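As a hedged aside on the convention these kernels implement (the usual conjugate, Wirtinger-style rule adopted by autograd frameworks; see the PRs above for the authoritative kernels), for matmul (Z = XY) and elementwise multiplication (z = x * y) the backward formulas read:

```latex
\frac{\partial L}{\partial X} = \frac{\partial L}{\partial Z}\,\overline{Y}^{\mathsf{T}},\qquad
\frac{\partial L}{\partial Y} = \overline{X}^{\mathsf{T}}\,\frac{\partial L}{\partial Z},\qquad
\frac{\partial L}{\partial x} = \frac{\partial L}{\partial z}\,\overline{y},\qquad
\frac{\partial L}{\partial y} = \frac{\partial L}{\partial z}\,\overline{x}
```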
-
- Jan 11, 2021 (11 commits)
-
-
Committed by liym27
[Cherry-Pick] Support vector<double> as a type of op attribute, and let op set_value support vector<double> as a value (#30126) (#30305). Cherry-pick of #30126:
1. Support vector<float64> as a type of op attribute.
2. op set_value supports float64 numpy.array.
-
Committed by liym27
Cherry-pick #30147: for op set_value, check that the input's rank is < 7.
-
Committed by Zhang Ting
Add a cast cuda kernel (cherry-pick of #29352).
-
Committed by wangchaochaohu
Cherry-pick of #28769: add support for a string representation of place.
-
Committed by wangchaochaohu
* optimize the elementwise_add_grad op (#29575)
* optimize elementwise for long widths (#29602)
* refine (#29622)
* delete the fp16 optimization code because it is not faster than the common template code (#29715)
* fix the shape selection of vectorization for cuda
* optimize fp16 elementwise add (#29744)
* fix the compiler error for the half type (#29799)
* refine the compiler error for half2 operations (#29816)
* fix the compiler error with gcc4 and cuda 9.0 (#29997)
-
Committed by Zhen Wang
* Support pure fp16 training for the AMP API (#29544)
* add cast ops before and after fp16-unsupported ops
* keep part of the net in the FP32 pattern
* support check_finite_and_unscale and update_loss_scaling for the FP16 calculation mode
* add fp16 support and a multi-precision attr for the adam op
* fix the bug of the test_multi_precision_fp16_train UT
* code format for CI
* fix the redefinition error of MPTypeTrait on Windows
* fix bugs in the _create_accumulators func in Momentum
* fix a bug when inserting the post-cast op
* add the update_loss_scaling op to the allow set of UnusedVarCheck
* update for CI coverage
* add some docs for OptimizerWithMixedPrecision and improve the doc of `amp_init`
* handle fp16 testing when users define the infer program in a separate way
* Remove the tensor copy in the update_loss_scaling op (#29426); do not use thrust; fix some CUDA memory access errors
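The cast-insertion idea in the first items above can be sketched roughly as follows; this is a minimal illustration under assumed data structures, not the actual AMP pass, and the op representation, the FP16_UNSUPPORTED set, and insert_casts are hypothetical names.

```python
# Minimal sketch (assumed op representation): wrap every fp16-unsupported op
# with a cast to float32 on the way in and a cast back to float16 on the way out.
FP16_UNSUPPORTED = {"softmax_with_cross_entropy", "lookup_table"}  # illustrative only

def insert_casts(ops):
    """Return a new op list with cast ops around fp16-unsupported ops."""
    result = []
    for op in ops:
        if op["type"] in FP16_UNSUPPORTED:
            result.append({"type": "cast", "in_dtype": "float16", "out_dtype": "float32"})
            result.append(op)
            result.append({"type": "cast", "in_dtype": "float32", "out_dtype": "float16"})
        else:
            result.append(op)
    return result
```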
-
Committed by Zhang Ting
* improve dropout (#29465): add VectorizedRandomGeneratorWithGenerator, fix a bug, modify according to comments
* improve dropout grad performance (#29605)
* fix the bug of dropout_grad (#29813)
-
Committed by GaoWei8
* Softmax vectorization (#29404): vectorize the softmax forward and backward passes; add a message argument for compiler compatibility
* optimize the softmax forward pass (#30217)
Co-authored-by: zlsh80826 <zlsh80826@gmail.com>
-
Committed by Wilber
-
Committed by WangXi
* Optimize gradient-merge performance (#29784)
* [fleet] combine amp and gradient merge, test=develop (#30086)
* fix assign_op_xpu and concat_op_xpu warnings (#30120)
Co-authored-by: liuyuhui <liuyuhui@baidu.com>
-
Committed by QingshuChen
* add aarch64 and sunway kunlun libs
* optimize elementwise_add for kunlun
* update the kunlun dependency
* minor fixes
-
- Jan 8, 2021 (3 commits)
-
-
Committed by liym27
[cherry-pick 2.0] Fix bug: in dynamic mode, if start or end is negative, __getitem__ returns a wrong result (#30003) (#30146).
1. When slice_item is a slice:
   1) the start of __getitem__ should be std::max(start, 0);
   2) the end of __getitem__ should be std::min(end, dim).
2. When slice_item is an integer, it should be in [-dim_len, dim_len).
3. Fix the error message to use accurate data.
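A minimal sketch of the normalization described above, written in Python purely for illustration (standard NumPy-style slicing semantics; the actual fix lives in the C++ __getitem__ path, and the helper names here are hypothetical):

```python
def normalize_slice(start, end, dim_len):
    """Map possibly-negative slice bounds onto [0, dim_len]."""
    if start < 0:
        start += dim_len
    if end < 0:
        end += dim_len
    start = min(max(start, 0), dim_len)   # std::max(start, 0), then clamp to dim
    end = min(max(end, 0), dim_len)       # std::min(end, dim)
    return start, end

def normalize_index(i, dim_len):
    """An integer index must fall in [-dim_len, dim_len)."""
    if not -dim_len <= i < dim_len:
        raise IndexError(f"index {i} is out of range for a dimension of size {dim_len}")
    return i + dim_len if i < 0 else i
```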
-
Committed by liym27
1. Type of index: int, or slice (step must be 1).
2. Type of value:
   (1) int32, int64, float32, bool;
   (2) numpy.array (int32, int64, float32, bool) <note: float64 is not supported>;
   (3) paddle.Tensor (int32, int64, float32, float64, bool).
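A short usage sketch of the index/value combinations listed above (hedged: dygraph mode is assumed, and the shapes and values are illustrative only):

```python
import numpy as np
import paddle

x = paddle.ones([4, 3], dtype="float32")

x[0] = 0.0                                  # int index, scalar value
x[1:3] = np.zeros([2, 3], dtype="float32")  # slice index (step 1), numpy.ndarray value
x[3:4] = paddle.full([1, 3], 2.0)           # slice index, paddle.Tensor value
```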
-
Committed by 123malin
* Add Lookahead and ModelAverage optimizers (#30004): test=develop, add model_average and lookahead
* Improve the index_select cuda kernel (#30139): test=develop, add the index_select_cuda kernel
-
- Jan 7, 2021 (3 commits)
-
-
Committed by ShenLiang
-
Committed by Leo Chen
* Improve the performance of the elementwise_add grad op (#29187): pass stop_gradient for the cast op, use async tensor copy, handle the dygraph branch, add a UT
* make gelu fp16 computing more robust (#29484)
* Add a fast path for dropout when p == 0 (#29553), plus a UT
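The p == 0 fast path amounts to skipping mask generation entirely; a minimal NumPy sketch of the idea (illustrative only, not the CUDA kernel):

```python
import numpy as np

def dropout_forward(x, p, training=True, rng=np.random.default_rng()):
    """Illustrative dropout forward with a fast path for p == 0."""
    if not training or p == 0.0:
        # Fast path: no random mask is needed, the op reduces to the identity.
        return x.copy(), np.ones_like(x)
    mask = (rng.random(x.shape) >= p).astype(x.dtype)
    return x * mask / (1.0 - p), mask
```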
-
Committed by furnace
* Layer norm fp16 (#29169): add fp16 for the layer_norm op; revert the layernorm api; fix the forward and backward passes for layernorm with fp16; fix the unit test; fix the with_mkldnn compile error; revert to PADDLE_ENFORCE_NOT_NULL and change static_cast<float> to static_cast<U>
* fix layer_norm accuracy (#29434)
* Layernorm optimization (#29522): optimize the layernorm forward and backward passes, fix a typo, remove const dim3 for Windows CI compatibility
* Fix the compile problem when cuda_arch < 6000 (#29576) and refine the code
Co-authored-by: zhiqiu <chenqiuliang@baidu.com>
Co-authored-by: zlsh80826 <zlsh80826@gmail.com>
-
- Jan 5, 2021 (1 commit)
-
-
Committed by cc
-
- Dec 29, 2020 (3 commits)
-
-
Committed by Chen Weihang
* [Complex] Add support for accumulating complex gradients (#29889): add unittests for coverage, update the test dtype, remove a useless blank line
* [Complex] Handle complex-to-real conversion after type promotion (#29855): add fwd op input dtypes, refactor the base impl, return tmp_ins after dygraph data preparation, add a complex net test, add complex kernel condition control, fix an xpu test failure, polish details from review comments
* Complex op test (#29753): skip inputs that do not need gradients in the dygraph op_test
* change the grad of elementwise_mul for complex types (#29757): add a conj op for complex types (with tests, docs, and real-type support), add complex types to fill_constant_op xpu and SetConstant, modify the grad of mul for complex types, fix the mismatched order of input grads
* change the grad of div for complex types (#29804): also fix the mismatched order of input grads
Co-authored-by: chentianyu03 <chentianyu03@baidu.com>
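For the division case in the last item, the conjugate-style backward rule (stated here as a hedged reference to the convention, not a transcription of the kernel) is:

```latex
z = \frac{x}{y}:\qquad
\frac{\partial L}{\partial x} = \frac{\partial L}{\partial z}\cdot\frac{1}{\overline{y}},\qquad
\frac{\partial L}{\partial y} = -\frac{\partial L}{\partial z}\cdot\frac{\overline{x}}{\overline{y}^{2}}
```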
-
Committed by Wilber
-
Committed by Thunderbrook
* cherry-pick heter ps
* CMakeList changes
-
- Dec 28, 2020 (1 commit)
-
-
Committed by taixiurong
* support more shapes in matmul and cast
* modify matmul
-
- Dec 25, 2020 (2 commits)
-
-
Committed by QingshuChen
* feat: support check_nan_inf for the kunlun device
* support stack on kunlun
* minor fixes
-
Committed by tangwei12
* add ps table (#29463) (Change-Id: I468a04bd071d21ff52654926fcf4d5f3da19e178)
* add service (#29560): add the service, remove a UT on macOS; fix heter_profiler and add a heter stop method; fix code style; merge pscore (Change-Id: Ie7f60d1cdde6755a0c29db26863c6283e9843d57)
* fix cmake (Change-Id: I6773509a7b4ca79139ecc40b7bf3eb318ceff8bb)
* fix conflicts (Change-Id: I35575be0c96a8520f9d756ea7f1ff0b904a165ba)
* fix conflicts (Change-Id: Ic926ea0b0d67803226d51241397ba3b510226bfa)
-
- Dec 22, 2020 (2 commits)
-
-
Committed by QingshuChen
* add nearest_interp_v2 on kunlun
Co-authored-by: TTerror <tangzhiyi11@users.noreply.github.com>
-
Committed by WangXi
* generate the nccl id using a socket (#29431)
* fix the gen_nccl_id_op_helper compile failure, test=develop (#29614)
-
- Dec 21, 2020 (1 commit)
-
-
Committed by Jacek Czaja
-
- Dec 18, 2020 (1 commit)
-
-
Committed by Chen Weihang
* Add a complex dtype op (add) test example (#29603): add an op test case for complex, polish code details, add xpu SetConstant support, fix an argument error, remove a useless .pyc file
* [Complex] Add real & imag ops and APIs for complex tensors (#29672): add the real op, API and unittest; add the imag op, API and unittest; refactor the op impl; polish the grad op code
* add a conj op for complex types (#29527): add the conj op with more test cases and a conj_op test; modify the conj API, impl and doc; add complex types to fill_constant_op xpu and SetConstant; use a user-defined grad for test_conj_op; add a static-mode test case for the conj API; change the input arg name to x; support real types in conj and add test cases for real numbers
Co-authored-by: chentianyu03 <chentianyu03@baidu.com>
-
- Dec 17, 2020 (3 commits)
-
-
Committed by ShenLiang
* Fix the download bug in the multi-machine case (#29551): fix the download bug and sort the IPs
* Fix a bug of matmul_v2 for the broadcast case (#29599)
* Rebuild groups automatically in dynamic-graph distributed training (#29255): add tensor_indices in AssignGroupBySize, rebuild groups in the reducer
* fix the error message of gather_nd (#29521)
-
Committed by arlesniak
Fix #27935 (comment) raised by QA @OliverLPH: add some MKLDNN-related print logs when FLAGS_use_mkldnn is used.
-
Committed by TTerror
* fix expand and move concat/transpose to the new api
* update xpu_header
* update activation ops on kunlun
* add nearest_interp on kunlun
* update error messages
-
- Dec 16, 2020 (1 commit)
-
-
Committed by QingshuChen
* support roi_align & affine_channel for kunlun
* minor fixes
-
- Dec 15, 2020 (1 commit)
-
-
Committed by QingshuChen
* support mobilenet for kunlun (#29458)
* add xpu ops for training transformer on kunlun (#29539): 1. fix a matmul bug; 2. add one_hot
* add xpu error msg
Co-authored-by: procr <procrboo@gmail.com>
Co-authored-by: taixiurong <taixiurong@126.com>
-
- Dec 8, 2020 (1 commit)
-
-
Committed by liuyuhui
* add the deformable_conv op on xpu (#29234): rebase develop and update the deformable_conv op on xpu
* update the kunlun conv2d/softmax/elementwise implementation (#29229): update conv2d & softmax to the new xpu api; remove useless comments; remove the softmax xpu op; update kunlun softmax; update the xpu unittest; fix an elementwise_grad bug for kunlun (test=kunlun)
* support global pooling for kunlun (#29293)
* update the reduce_sum op on xpu (#29367): support running on xpu
* fix expand/uniform_random and concat/transpose to the new api on xpu (#29280): fix expand and concat/transpose to the new api, update uniform_random_op, update xpu_header
* 1. fix elementwise ops' bugs; 2. fix softmax_with_cross_entropy_op; 3. add bilinear_interp_op (#29448)
Co-authored-by: root <root@bjhw-sys-rpm0223.bjhw.baidu.com>
Co-authored-by: 卖鱼的哲学 <tangzhiyi11@users.noreply.github.com>
Co-authored-by: QingshuChen <qingshu.chen714@gmail.com>
Co-authored-by: taixiurong <taixiurong@126.com>
-