- 23 Feb 2021, 3 commits
-
Committed by Zhong Hui
[BUG FIX] Fix softmax cross entropy overflow problem.
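For context, this kind of overflow comes from exponentiating large logits inside softmax; the standard remedy (a reference sketch of the usual technique, not necessarily the exact change in this commit) is the max-subtraction form of log-softmax:

```latex
% Numerically stable log-softmax: subtract the row maximum m before exponentiating.
\log \mathrm{softmax}(x)_i
  = x_i - \log\sum_j e^{x_j}
  = (x_i - m) - \log\sum_j e^{\,x_j - m},
\qquad m = \max_k x_k .
```

With m subtracted, every exponent is at most 0, so e^(x_j - m) <= 1 and the cross-entropy term -log softmax(x)_y can be evaluated without overflow.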
-
Committed by Qi Li
ATT, cherry-pick of #31132
-
Committed by Wojciech Uss
* A fix for the oneDNN matmul kernel; fixes issue #30309 (#30723)
* A fix for #30309 with oneDNN 1.6
-
- 22 Feb 2021, 1 commit
-
Committed by Guanghua Yu
* add parameter in roi_align op
* fix compatibility of ops
* fix op test & cpu kernel
* fix JaccardOverlap in nms
-
- 18 Feb 2021, 1 commit
-
Committed by Jacek Czaja
-
- 05 Feb 2021, 1 commit
-
Committed by chentianyu03
Make abs support complex types. Cherry-pick of #30375 and #30637.
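A minimal sketch of what this enables, assuming a Paddle build where complex64 tensors can be created directly with paddle.to_tensor (as in later 2.x releases):

```python
import numpy as np
import paddle

# Sketch only: assumes complex64 tensors can be built via paddle.to_tensor.
# |a + bj| = sqrt(a^2 + b^2); with complex support, paddle.abs returns the modulus.
x = paddle.to_tensor(np.array([3 + 4j, 1 - 1j], dtype=np.complex64))
print(paddle.abs(x))  # expected values: [5.0, 1.4142135]
```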
-
- 27 Jan 2021, 1 commit
-
Committed by Wojciech Uss
Co-authored-by: Jacek Czaja <jacek.czaja@intel.com>
-
- 21 Jan 2021, 1 commit
-
Committed by QingshuChen
-
- 20 Jan 2021, 1 commit
-
Committed by AshburnLee
Cherry-picked from PR #29192. Adds a TF32 switch for cuDNN: on by default, and turned off when users set the switch to False.
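For background (general TF32 facts, not taken from the PR): TF32 keeps fp32's 8-bit exponent, so the same dynamic range, but truncates the mantissa to 10 bits, giving roughly 3 decimal digits of precision (unit roundoff about 2^-11 ≈ 4.9e-4) versus fp32's roughly 7 digits (about 6.0e-8). That precision loss is why an opt-out switch is useful for precision-sensitive workloads.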
-
- 19 Jan 2021, 4 commits
-
Committed by Zhen Wang
Fix the compilation error of update_loss_scaling when using CUDA 9.
-
Committed by hutuxian
-
Committed by taixiurong
* support transformer v2.0
* fix range op crash in dygraph xpu place
-
Committed by JZ-LIANG
-
- 18 Jan 2021, 4 commits
-
Committed by lidanqing
Co-authored-by: Wojciech Uss <wojciech.uss@intel.com>
-
Committed by guofei
Modify the calculation logic of LambOptimizer (#29313)
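The message does not say which part of the calculation changed; for reference (a sketch of the published LAMB rule, not a quote from the PR), the update this optimizer implements is:

```latex
% LAMB (You et al., 2019): Adam-style moments plus a per-layer trust ratio.
m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2
r_t = \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}, \qquad
w_t = w_{t-1} - \eta \,
      \frac{\phi(\lVert w_{t-1} \rVert)}{\lVert r_t + \lambda w_{t-1} \rVert}
      \left( r_t + \lambda w_{t-1} \right)
```

where m-hat and v-hat are the bias-corrected moments, lambda is the weight decay, and phi is a scaling function of the weight norm (identity by default).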
-
Committed by ceci3
* add pad and concat double grad
* resolve conflict
-
Committed by Zhang Ting
* add fp16 support for tril_triu op (#30186)
* add VecCastCUDAKernel (#30296)
Co-authored-by: furnace <34057289+windstamp@users.noreply.github.com>
-
- 15 Jan 2021, 4 commits
-
Committed by Yang Zhang
built-in `rsqrt` is shadowed
-
Committed by lijianshe02
* add transpose double grad test=develop (#29600)
* cherry-pick test=develop
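A minimal dygraph sketch of what registering a double grad for an op makes possible; this is a hypothetical example using paddle.grad with create_graph=True, not code from the PR:

```python
import paddle

x = paddle.randn([3, 4])
x.stop_gradient = False
t = paddle.transpose(x, perm=[1, 0])
y = (t * t * t).sum()                         # dy/dx = 3 * x * x, routed back through transpose's grad
(dx,) = paddle.grad(y, x, create_graph=True)  # first-order grad, kept in the graph
(ddx,) = paddle.grad(dx.sum(), x)             # second-order grad; this is where transpose's double grad is needed
print(ddx.shape)                              # [3, 4]; values are 6 * x
```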
-
Committed by whs
-
Committed by wawltor
* fix the RNN mask memory bug (out-of-bounds read)
* update the code for the RNN
-
- 14 Jan 2021, 4 commits
-
Committed by ShenLiang
-
Committed by lidanqing
-
Committed by LielinJiang
* Add double grad for conv_transpose (#29706)
* register cudnn conv double grad for depthwise conv (#29807)
-
Committed by GaoWei8
* Optimize the softmax backward pass
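For reference, the backward pass being optimized computes the standard softmax gradient; for y = softmax(x) along an axis and upstream gradient g = dL/dy:

```latex
\frac{\partial L}{\partial x_i}
  = y_i \left( g_i - \sum_j g_j \, y_j \right)
```

That is one reduction over the softmax axis plus elementwise multiplies and subtractions, which is the part that benefits from kernel-level optimization.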
-
- 13 Jan 2021, 3 commits
-
Committed by JZ-LIANG
-
Committed by tangwei12
Change-Id: I3c788e7576688e63181e7f01562529b85a09cc59
-
Committed by 石晓伟
* Register op version for grid_sampler, test=op_version (#29916)
* add op version for fake_quant and fake_dequant ops, test=op_version (#29923)
* Register op version for print, test=op_version (#29945)
* add gru op_register_version, test=op_version (#29931)
* Register op version for coalesce_tensor (#29940)
* register op version for conv2d_transpose, conv3d_transpose and depthwise_conv2d_transpose, test=op_version (#29937)
* add op_register_version for allclose op, test=op_version (#29968)
* register ModifyAttr for instance_norm, test=op_version (#29938)
* add op_version for flip op, test=op_version (#30019)
* add the op version check for the elementwise ops, test=op_version (#30010)
* add support for the op version check for matmul, test=op_version (#30011)
* Revert "register ModifyAttr for instance_norm, test=op_version (#29938)"
* add REGISTER_OP_VERSION for generate_proposals, roi_align, roi_pool, test=op_version (#30034)
* Fix rank_attention op_version, test=op_version (#30006)
* fix rank_attention, test=op_version
* Register op version for linspace, test=op_version (#30025)
* fix op_register_version for compare ops, test=op_version (#30007)
Co-authored-by: zhoushunjie <zhoushunjie@baidu.com>
* register ModifyAttr for instance_norm, test=op_version (#30065)
* register instance norm, test=op_version
* add trace op_register_version and fix version bug, test=op_version (#30000)
* fix a bug in op_version_registry, test=develop, test=op_version (#29994)
* Add version checking, test=op_version (#30129)
* fix a bug in gaussian_random_op version, test=release/2.0
Co-authored-by: LielinJiang <50691816+LielinJiang@users.noreply.github.com>
Co-authored-by: cc <52520497+juncaipeng@users.noreply.github.com>
Co-authored-by: Qi Li <qili93@qq.com>
Co-authored-by: Jack Zhou <zhoushunjie@baidu.com>
Co-authored-by: Guo Sheng <whucsgs@163.com>
Co-authored-by: wangxinxin08 <69842442+wangxinxin08@users.noreply.github.com>
Co-authored-by: wawltor <fangzeyang0904@hotmail.com>
Co-authored-by: FlyingQianMM <245467267@qq.com>
Co-authored-by: ceci3 <ceci3@users.noreply.github.com>
Co-authored-by: hutuxian <hutuxian2011@sina.cn>
Co-authored-by: chalsliu <45041955+chalsliu@users.noreply.github.com>
Co-authored-by: wangguanzhong <jerrywgz@126.com>
Co-authored-by: ShenLiang <shenliang03@baidu.com>
Co-authored-by: yinhaofeng <66763551+yinhaofeng@users.noreply.github.com>
Co-authored-by: channings <chenlingchi@baidu.com>
Co-authored-by: chentianyu03 <chentianyu03@baidu.com>
Co-authored-by: ruri <shipeng1108@163.com>
-
- 12 Jan 2021, 7 commits
-
Committed by Chengmo
* Fix server.h include device_context (#30243)
* fix cmake
Co-authored-by: seiriosPlus <tangwei12@baidu.com>
* 【Paddle.Fleet】Support local save sparse param (#30175)
* add save tensor support
Co-authored-by: seiriosPlus <tangwei12@baidu.com>
* add sparse embedding & load vars for 2.0 & gloo bug fix (#30306)
* add sparse embedding & load vars for 2.0
Change-Id: I36b59ed5f015189dc9d9d2e34a9357722d369f1b
* fix hdfs gloo
Change-Id: Ia84d579053720ad804183e54c9a04b4f031c79c6
* fix gloo hdfs
Change-Id: I5ab982fd483cddc10adcdef0b8aa83aca976cb9e
* move loadvar/sparse embedding from incubate to static
Change-Id: I57081d3545ad2efab78c72420d2162c0eacaf3a0
Co-authored-by: tangwei12 <tangwei12@baidu.com>
-
Committed by swtkiwi
* fix datanorm error msg (#30294)
* Optimize the error message of framework (#30134)
* modify error message based on comments (#30189)
* edit code according to review
* Correct spelling according to review
* fix enforce msg of sum xpu op (#30113)
* enhance error info for py_func (#30138)
* update
* fix elugradgrad test fail & error message opt (#30171)
* fix elugradgrad test fail and error message opt
* fix unittest, test=develop
* Update prroi_pool_op.h fix error message
* opt message, test=develop
* fix ci fail, test=develop
* Refine PADDLE_ENFORCE error messages, test=develop (#30149): improve some error messages in parallel_executor.cc, conditional_block_op.cc, recurrent_op.cc
* enhance error message, test=develop (#30220)
* fix error message for distribute_fpn_proposals_op (#30116)
* enhance error msgs of fusion_seqpool_cvm_concat_op.cc, test=develop (#30240)
* just add the op error message for the matmul xpu (#30246)
* enhance error message of nll_loss op, test=develop (#30125)
Co-authored-by: yaoxuefeng <yaoxuefeng@baidu.com>
Co-authored-by: xiemoyuan <71377852+xiemoyuan@users.noreply.github.com>
Co-authored-by: WeiXin <weixin10@baidu.com>
Co-authored-by: Jack Zhou <zhoushunjie@baidu.com>
Co-authored-by: Wilber <jiweibo@baidu.com>
Co-authored-by: Double_V <liuvv0203@163.com>
Co-authored-by: Huihuang Zheng <zhhsplendid@gmail.com>
Co-authored-by: zhang wenhui <frankwhzhang@126.com>
Co-authored-by: wangguanzhong <jerrywgz@126.com>
Co-authored-by: 石晓伟 <39303645+Shixiaowei02@users.noreply.github.com>
Co-authored-by: wawltor <fangzeyang0904@hotmail.com>
Co-authored-by: lijianshe02 <48898730+lijianshe02@users.noreply.github.com>
-
Committed by Leo Chen
[cherry-pick] use cuda generator in bernoulli cuda kernel (#30199)
-
Committed by Chengmo
-
Committed by wangchaochaohu
* reduce the memory occupied by the fused elementwise_add Op + activation Op pattern (relu Op, for example) (#29885)
* register OpMaker and InferShape check for fused_elementwise_add (#30259)
-
Committed by Zhen Wang
[Cherry-pick] Fix the accuracy problem of allclose op when using float64 data type in static mode (#29890) (#30313)
* Fix the accuracy problem of allclose op when using float64 data type in static mode.
* Format the code style.
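For reference, the elementwise test that allclose performs is

```latex
\lvert a - b \rvert \le \text{atol} + \text{rtol} \cdot \lvert b \rvert
```

so the check is asymmetric in a and b, and with float64 inputs the subtraction and the tolerance term have to be evaluated in float64 for the result to be meaningful.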
-
Committed by chentianyu03
* complex gradient matmul (#29966)
* dot op support complex types
* matmul support complex types
* add test case
* matmul broadcast gradient support complex
* move conjFunctor to complex_functor.h
* change the kron gradient when complex types (#29995)
* type promotion for grad (#30177)
* type promotion for grad
* add type promotion for div op
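For reference, and as a sketch of the standard convention rather than a quote from the PR: with a real-valued loss L, C = AB over complex matrices, and G = dL/dC, the usual conjugate (Wirtinger) backward rule is

```latex
\frac{\partial L}{\partial A} = G \, B^{\mathsf{H}},
\qquad
\frac{\partial L}{\partial B} = A^{\mathsf{H}} G
```

where H denotes the conjugate transpose; for real inputs this reduces to the familiar G B^T and A^T G.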
-
- 11 Jan 2021, 5 commits
-
Committed by liym27
[Cherry-Pick] Support vector<double> as a type of op attribute, and make the set_value op support vector<double> as value (#30126) (#30305). Cherry-pick of #30126:
1. Support vector<float64> as a type of op attribute.
2. The set_value op supports float64 numpy.array.
-
Committed by liym27
Cherry-pick of #30147: for op set_value, check that the input's rank is < 7.
-
Committed by Zhang Ting
Add cast CUDA kernel. Cherry-pick of #29352.
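A trivial sketch of the Python entry point that exercises this kernel on GPU builds; nothing here is specific to the cherry-pick:

```python
import paddle

x = paddle.randn([4, 4], dtype='float32')
y = paddle.cast(x, 'float16')   # dispatches to the cast CUDA kernel when x lives on the GPU
z = paddle.cast(y, 'float32')
print(y.dtype, z.dtype)
```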
-
Committed by wangchaochaohu
Cherry-pick of #28769: add support for place string representation.
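A small sketch of where a readable place representation shows up from Python; the exact output format is defined by the PR and is not reproduced here:

```python
import paddle

# Places get a human-readable string form when printed or logged.
print(paddle.CPUPlace())
if paddle.is_compiled_with_cuda():
    print(paddle.CUDAPlace(0))
```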
-
Committed by wangchaochaohu
* elementwise_add_grad Op optimization (#29575)
* optimize elementwise for long widths (#29602)
* refine (#29622)
* delete the code for fp16 optimization because it is not faster than the common template code (#29715)
* fix the shape choice of vectorization for CUDA
* optimization for fp16 elementwise add (#29744)
* Fix the compiler error for the half type (#29799)
* refine the compiler error for the half2 operation (#29816)
* fix the compiler error with gcc4 + CUDA 9.0 (#29997)
-