- 16 Jun 2021, 4 commits
Submitted by TTerror
* fix gather op and add logsumexp op on kunlun
* update xpu dependency
* update tests and fix elementwise_add
-
Submitted by lilong12
* Add raw program meta optimizer (#32597)
* add raw program, test=develop
* add precision unit test for executor all reduce (#33339)
* fix dp (#33297)
Co-authored-by: Yuang Liu <liuyuang@baidu.com>
Co-authored-by: 李季 <2042519524@qq.com>
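For orientation, the raw program meta optimizer is selected through fleet's DistributedStrategy. The sketch below assumes that the `without_graph_optimization` switch is the one that enables it and that the script is started with `python -m paddle.distributed.launch`; both points are assumptions to verify against your Paddle release.

```python
import paddle
import paddle.distributed.fleet as fleet

# Sketch only: assumes DistributedStrategy.without_graph_optimization selects
# the raw program meta optimizer and that this script runs under
# `python -m paddle.distributed.launch`.
paddle.enable_static()
fleet.init(is_collective=True)

image = paddle.static.data(name="image", shape=[None, 784], dtype="float32")
label = paddle.static.data(name="label", shape=[None, 1], dtype="int64")
logits = paddle.static.nn.fc(image, size=10)
loss = paddle.mean(
    paddle.nn.functional.softmax_with_cross_entropy(logits, label))

strategy = fleet.DistributedStrategy()
strategy.without_graph_optimization = True  # assumed raw-program switch

opt = paddle.optimizer.SGD(learning_rate=0.01)
opt = fleet.distributed_optimizer(opt, strategy=strategy)
opt.minimize(loss)
```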
-
Submitted by lidanqing
* [oneDNN] First fix to #33021 (#33174)
* [oneDNN] Second fix to #33021 (#33471)
* use older download_data function
Co-authored-by: Jacek Czaja <jacek.czaja@intel.com>
-
Submitted by Shang Zhizhou
* 1) remove layernorm dynamic fp16; 2) let reshape out in dynamic shape (#33535)
-
- 15 Jun 2021, 4 commits
Submitted by wawltor
-
Submitted by ShenLiang
* Fix gather infer shape using axis (#33413)
* fix gather shape bug
* fix None
* fix topo
* Fix hang of hybrid parallel in new_group (#33141)
* fix hang of hybrid parallel
* fix new_group for hang problem
* fix hang
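As a point of reference for the gather fix above, `paddle.gather` with an `axis` argument selects entries along that dimension, and the inferred output shape follows the index length on that axis. A minimal illustrative sketch:

```python
import paddle

x = paddle.arange(12, dtype="float32").reshape([3, 4])
index = paddle.to_tensor([0, 2], dtype="int64")

# Gathering along axis=1 picks columns 0 and 2, so the result shape is [3, 2].
out = paddle.gather(x, index, axis=1)
print(out.shape)  # [3, 2]
```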
-
Submitted by WeiXin
Fix a segmentation fault triggered when a PyLayer returns a tensor created by to_tensor.
Cause: when the stop_gradient attribute is modified on the Python side, InnerSetOverridedStopGradient on the C++ side can no longer change it, so SetOverridedStopGradient is called on the C++ side to update stop_gradient. In addition, the grad var of a tensor produced by to_tensor has the default DataType (-1), which is not allowed during backward, so ForwardDataType is called to set the grad var's DataType.
Original PR: #33303
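For context, the failing pattern is a custom `paddle.autograd.PyLayer` whose forward returns a tensor built with `paddle.to_tensor` and whose stop_gradient is flipped on the Python side. The sketch below is a hypothetical reproduction of that usage, not the original test case:

```python
import numpy as np
import paddle
from paddle.autograd import PyLayer


class ReturnToTensor(PyLayer):
    # Hypothetical layer: forward returns a fresh tensor made by to_tensor,
    # the pattern that used to segfault during backward before this fix.
    @staticmethod
    def forward(ctx, x):
        out = paddle.to_tensor(np.ones([2, 3], dtype="float32"))
        out.stop_gradient = False  # stop_gradient changed on the Python side
        return out

    @staticmethod
    def backward(ctx, dy):
        # Gradient w.r.t. the input x; shape must match x.
        return paddle.ones([2, 3], dtype="float32")


x = paddle.randn([2, 3])
x.stop_gradient = False
y = ReturnToTensor.apply(x)
y.sum().backward()
print(x.grad)
```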
-
Submitted by wenbin
-
- 12 Jun 2021, 1 commit
Submitted by zhiboniu
* Eliminate numerical differences of LayerNorm; fix LayerNorm NaN bug with large data input
* fix bug with large shape of data input
-
- 11 Jun 2021, 3 commits
Submitted by liuyuhui
* add uint8 for concat (#32850)
* add bool type for tril api (#33402)
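Both items extend existing APIs to new dtypes; a small hedged sketch, assuming a build that includes these commits:

```python
import numpy as np
import paddle

# concat now accepts uint8 inputs.
a = paddle.to_tensor(np.ones([2, 3], dtype="uint8"))
b = paddle.to_tensor(np.zeros([2, 3], dtype="uint8"))
print(paddle.concat([a, b], axis=0).dtype)  # uint8

# tril now accepts bool inputs.
m = paddle.ones([4, 4], dtype="bool")
print(paddle.tril(m))  # lower-triangular boolean mask
```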
-
Submitted by Chen Weihang
Support different dataset tensor places in the single-process DataLoader (cherry-pick of #33470)
-
Submitted by Lijunhui
While running op benchmark it was found that, when the input size is below a certain threshold, the input values of the Python-side log_softmax API were changed to NaN after the computation, while the output itself was correct. Cherry-picked from #32937.
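A minimal sketch of the behaviour the fix guards against: after calling `paddle.nn.functional.log_softmax` on a small input, the input tensor itself should be left unchanged (values below are illustrative only).

```python
import paddle
import paddle.nn.functional as F

# Small input, the size range where the in-place corruption used to show up.
x = paddle.to_tensor([[0.1, 0.2, 0.3]], dtype="float32")
x_before = x.numpy().copy()

y = F.log_softmax(x, axis=-1)

# With the fix, the output is finite and the input is untouched.
assert not bool(paddle.isnan(y).any())
assert (x.numpy() == x_before).all()
```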
-
- 10 Jun 2021, 2 commits
Submitted by wangguanzhong
-
Submitted by 王明冬
-
- 09 Jun 2021, 2 commits
- 08 Jun 2021, 3 commits
- 07 Jun 2021, 1 commit
Submitted by wenbin
-
- 04 Jun 2021, 1 commit
Submitted by wawltor
* fix compare op used by `for ... in` on the CUDA device
* fix the paddle compare op for broadcast
-
- 03 Jun 2021, 2 commits
- 01 Jun 2021, 1 commit
Submitted by whs
-
- 31 May 2021, 1 commit
Submitted by wenbin
-
- 25 May 2021, 1 commit
Submitted by ShenLiang
* fix precision of mp
* fix bug of seed
* fix dp
* print group
-
- 20 May 2021, 1 commit
Submitted by Aurelius84
* BugFix with ParseInputDataType from LoDTensorArray
-
- 19 May 2021, 2 commits
Submitted by Chen Weihang
cherry-pick of #32972
-
Submitted by danleifeng
* cherry-pick of #32640: add profile and fix dataset hang in heterps; test=develop
-
- 18 May 2021, 1 commit
Submitted by houj04
-
- 11 May 2021, 1 commit
Submitted by ShenLiang
* fix error log for reducer
* fix doc
* fix bug of unit test
* fix spawn
* fix coverage
-
- 07 May 2021, 6 commits
Submitted by Shang Zhizhou
* implement MHA order same as training
* fix fp16 compile issue on old architecture
Co-authored-by: zlsh80826 <rewang@nvidia.com>
-
Submitted by Jiawei Wang
-
Submitted by LielinJiang
* fix compile error on jetson platform
* remove unused header file
* remove decode_jpeg op on jetson platform
-
Submitted by Zhou Wei
[CHERRY-PICK 2.1] Remove paddle_custom_op dynamic libraries, and link to FLUID_CORE on windows (#32583) (#32769)
* Remove paddle_custom_op dynamic libraries, change link to FLUID_CORE on windows, and check copy_to
* fix CI
-
Submitted by WeiXin
Fix a memory (GPU memory) leak in py_layer_op caused by the PyLayerContext never being destructed. Original PR: #32707
-
Submitted by WeiXin
* clear 'BasicEngine' when an exception occurs in the backward (#32546)
* deal with conflicts
* forward return any type (#32661)
-
- 06 May 2021, 3 commits
Submitted by Adam Osewski
-
Submitted by jakpiase
* base changes for fix
* minor change
* fix for bwd kernel
* removed unnecessary import
* implemented reviewers' suggestions
* CI fix
-
Submitted by chajchaj
cherry-pick: change softmax_with_cross_entropy_op's parameter name from softmax_switch to use_softmax (#32750)
* change parameter name from softmax_switch to use_softmax, test=develop
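At the Python layer this op attribute surfaces through the cross-entropy API; the sketch below assumes the `use_softmax` argument of `paddle.nn.functional.cross_entropy` maps onto the renamed attribute (check against your Paddle version).

```python
import paddle
import paddle.nn.functional as F

logits = paddle.randn([4, 10])
labels = paddle.randint(0, 10, shape=[4], dtype="int64")

# Default path: softmax is applied inside the op (use_softmax=True).
loss = F.cross_entropy(logits, labels)

# If the inputs are already probabilities, skip the internal softmax;
# this is the switch that was renamed from softmax_switch to use_softmax.
probs = F.softmax(logits, axis=-1)
loss_no_softmax = F.cross_entropy(probs, labels, use_softmax=False)
print(float(loss), float(loss_no_softmax))
```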
-