- 19 Aug 2022, 4 commits
-
Committed by dongfangshenzhu
* add merged_momentum, test=kunlun
* add fp16 to merged_momentum, test=kunlun
* change dist_model.cc
* add merged_momentum unittest and change momentum, test=kunlun
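For readers who just want to see where these kernels sit, here is a minimal dygraph sketch of the Momentum optimizer path; the toy model and hyper-parameters are illustrative only, and whether the fused merged_momentum variant is selected depends on the device and build, which the commit messages do not spell out.

```python
import paddle

# Tiny model; merged_momentum fuses the per-parameter momentum updates
# that the plain momentum kernel would otherwise issue one by one.
model = paddle.nn.Linear(4, 2)
opt = paddle.optimizer.Momentum(
    learning_rate=0.1, momentum=0.9, parameters=model.parameters()
)

x = paddle.randn([8, 4])
loss = model(x).mean()
loss.backward()
opt.step()        # momentum / merged_momentum kernels run here
opt.clear_grad()
```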
-
Committed by Charles-hit
* Fix an out-of-bounds index when generating dygraph code and an output has no configured name
* decide forward_return[0] is not none
* When the forward part of a backward yaml has only one output with no configured name, auto-generate the output name as "out"
* modify code style
-
Committed by mengqingchun02
* support beam_search operator on xpu. test=kunlun
* fix beam_search operator bugs on xpu. test=kunlun
* support beam_search_decode operator on xpu. test=kunlun
-
Committed by Aganlengzi
-
- 18 Aug 2022, 11 commits
-
Committed by heliqi
* predictor add GetInputType interface
* predictor change GetInputType to GetInputTypes
* predictor add tester
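A hedged sketch of the user-facing side of this change, assuming the usual Python inference API and placeholder model files; GetInputTypes() itself is a C++ Predictor method per the commit, so only the pre-existing name query is exercised here.

```python
import paddle.inference as paddle_infer

# Hypothetical model files; any exported inference model would do.
config = paddle_infer.Config("model.pdmodel", "model.pdiparams")
predictor = paddle_infer.create_predictor(config)

# Existing metadata query; the GetInputTypes() interface added here
# reports each input's dtype on the C++ Predictor alongside these names.
print(predictor.get_input_names())
```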
-
Committed by Weilong Wu
* [Eager] add get_tensor_from_selected_rows
* add PADDLE_ENFORCE to check SelectedRows
* use _ prefix in temp
-
Committed by OccupyMars2025
-
Committed by HongyuJia
* transfer trilinear op to phi, change name from trilinear_interp_v2 to trilinear_interp
* reserve linear_interp param
* change testcase scale if-branch
* testcase test_imperative_case
* fix trilinear testcase
* import paddle in test_trilinear_interp_v2
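The kernel reworked here, like the sibling bilinear and bicubic commits in this series, is reachable through paddle.nn.functional.interpolate; a minimal sketch with arbitrarily chosen shapes:

```python
import paddle
import paddle.nn.functional as F

# A 5-D NCDHW input exercises the trilinear_interp kernel moved to phi;
# switching mode to "bilinear"/"bicubic" on a 4-D NCHW input hits the
# siblings from the related commits.
x = paddle.randn([1, 3, 4, 8, 8])
y = F.interpolate(x, scale_factor=2.0, mode="trilinear", data_format="NCDHW")
print(y.shape)  # [1, 3, 8, 16, 16]
```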
-
Committed by pangyoki
Apply buffer_shared_inplace_pass and inplace_addto_op_pass to the program in Standalone Executor (#45085)
* apply inplace addto in python apply_pass
* fix
* apply inplace pass for program
* skip feed and fetch var
* fix block_desc.move_from
* fix block desc
* alltoall remove inplace
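The pass wiring in this PR is internal to the Standalone Executor; the closest user-visible knobs I am aware of are the BuildStrategy flags below, so treat this as a sketch under that assumption rather than a description of the commit's own mechanism.

```python
import numpy as np
import paddle

paddle.enable_static()

main_prog, startup_prog = paddle.static.Program(), paddle.static.Program()
with paddle.static.program_guard(main_prog, startup_prog):
    x = paddle.static.data(name="x", shape=[None, 4], dtype="float32")
    out = paddle.static.nn.fc(x, size=2)
    loss = paddle.mean(out)
    paddle.optimizer.SGD(learning_rate=0.1).minimize(loss)

# Build flags that the inplace / addto passes respond to; whether the
# Standalone Executor picks them up depends on the Paddle version.
build_strategy = paddle.static.BuildStrategy()
build_strategy.enable_inplace = True
build_strategy.enable_addto = True
compiled = paddle.static.CompiledProgram(main_prog, build_strategy=build_strategy)

exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(startup_prog)
feed = {"x": np.random.rand(8, 4).astype("float32")}
loss_val, = exe.run(compiled, feed=feed, fetch_list=[loss])
```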
-
Committed by Aurelius84
* [OpAttr] Squeeze axes support Tensor
* add support_tensor
* fix unittest
* fix coverage
-
Committed by zhangxiaoci
* change to async mode for xpu multi-card training in static graph mode
* minor bugfix
* irrelevant, move to another pr
* move change to other pr
* fix stream issue
* fix 'stream not meet with current context' error
* fix branch divergence, test=kunlun
-
Committed by JingZhuangzhuang
* fix infer trans scope
Co-authored-by: Ndingjiawei <327396238@qq.com>
-
Committed by wanghuancoder
-
Committed by HongyuJia
* transfer bilinear op to phi, change name from bilinear_interp_v2 to bilinear_interp
* reserve linear_interp param
* fix cross device import
-
Committed by zyfncg
-
- 17 Aug 2022, 12 commits
-
Committed by zyfncg
-
Committed by Sing_chan
-
Committed by Aurelius84
* [OpAttr] Add SupportTensor for OpMaker
* fix typo
* fix code style
* add SupportTensor for concat op
* add unittest for register Tensor
* add shape checker and split attribute
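A small sketch of what an attribute registered with SupportTensor means for callers, using concat, whose axis is documented as int or Tensor; the Tensor-axis call is the situation this series (and the squeeze commit above) targets, shown on the assumption that the installed release accepts it.

```python
import paddle

x = paddle.rand([2, 3])
y = paddle.rand([2, 3])

# Ordinary int attribute.
print(paddle.concat([x, y], axis=1).shape)   # [2, 6]

# The point of SupportTensor: the same attribute fed by a Tensor, so the
# axis can be a runtime value instead of a compile-time constant.
axis = paddle.to_tensor([1], dtype="int64")
print(paddle.concat([x, y], axis=axis).shape)  # expected [2, 6]
```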
-
Committed by Wilber
* fix multi stream error
-
Committed by Leo Chen
-
Committed by Leo Chen
* use addKernel
* fix compile
* remove elementwiseAddto
* add return
* fix custom place
-
Committed by feng_shuai
-
Committed by wanghuancoder
* fix_stop_gradient
-
Committed by fwenguang
-
Committed by ykkk2333
* xpu unittest grad compute supports more types, test=kunlun
* add instance norm xpu, test=kunlun
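For orientation, the op added here backs paddle.nn.InstanceNorm2D; the sketch below runs on CPU or GPU as written, and running it on a Kunlun card additionally needs an XPU build plus paddle.set_device("xpu"), which is an assumption about the local setup.

```python
import paddle

# NCHW input; instance norm normalizes each (sample, channel) plane.
x = paddle.randn([2, 3, 8, 8])
inorm = paddle.nn.InstanceNorm2D(num_features=3)
print(inorm(x).shape)  # [2, 3, 8, 8]
```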
-
Committed by HongyuJia
* transfer bicubic_interp op to phi, change name from bicubic_interp_v2 to bicubic_interp
* test final_state_bicubic_interp api
* testcase match imperative case
-
Committed by sneaxiy
* fix squared_l2_norm bug
* update buffer.h
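As far as I can tell, squared_l2_norm is mainly exercised through global-norm gradient clipping, so here is a hedged sketch of that path; the model and clip value are illustrative, not taken from the commit.

```python
import paddle

# Global-norm clipping computes a squared L2 norm of every gradient
# before rescaling, which is where squared_l2_norm gets used.
model = paddle.nn.Linear(4, 2)
clip = paddle.nn.ClipGradByGlobalNorm(clip_norm=1.0)
opt = paddle.optimizer.SGD(
    learning_rate=0.1, parameters=model.parameters(), grad_clip=clip
)

loss = model(paddle.randn([8, 4])).mean()
loss.backward()
opt.step()
opt.clear_grad()
```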
-
- 16 Aug 2022, 13 commits
-
Committed by Chen Weihang
* move check finite and unscale kernel into phi
* move infershape into phi
* move update_loss_scaling kernel into phi
* remove original kernels
* move update loss scaling infershape into phi
* add header for xpu and npu
* solve coverage failure
* fix npu test failure
* remove mutable data in cu file
* fix new executor failure
* add valid check for meta tensor output
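check_finite_and_unscale and update_loss_scaling are the kernels behind dynamic loss scaling in AMP; a minimal sketch of the training loop that triggers them (which kernels actually dispatch depends on the place and build, and on CPU the scaler simply disables itself):

```python
import paddle

model = paddle.nn.Linear(4, 2)
opt = paddle.optimizer.SGD(learning_rate=0.1, parameters=model.parameters())
scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

with paddle.amp.auto_cast():
    loss = model(paddle.randn([8, 4])).mean()
scaled = scaler.scale(loss)
scaled.backward()
scaler.minimize(opt, scaled)   # unscale + finite check + loss-scale update
opt.clear_grad()
```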
-
Committed by Siming Dai
* initial commit
* fix op maker bug
* fix mul grad bug
* add unittest
* fix add grad bug, add cpu kernel
* add paddle.geometric.message_passing
* add paddle.geometric.send_uv api, add unittest
* add fp16 judgement
* fix file typo, move compute_type to message_op
* add impl file
* fix unittest timeout
* add review revisions
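A hedged sketch of the new API on a toy graph; the signature (x, y, src_index, dst_index, message_op) follows the names mentioned in this commit series and should be treated as an assumption about the final form.

```python
import paddle

# Toy 3-node, 4-edge graph; node features x (source side) and y (destination side).
x = paddle.to_tensor([[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]])
y = paddle.to_tensor([[0.5, 0.5], [1.5, 1.5], [2.5, 2.5]])
src = paddle.to_tensor([0, 0, 1, 2], dtype="int64")
dst = paddle.to_tensor([1, 2, 2, 0], dtype="int64")

# One message per edge: message_i = x[src_i] <op> y[dst_i].
msg = paddle.geometric.send_uv(x, y, src, dst, message_op="add")
print(msg.shape)  # expected [4, 2]
```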
-
Committed by Weilong Wu
* [Eager draft] migrate forward_only interface to autograd_api
* strings api: add dygraph forward function
* rm useless comments
* draft version for CI check
* fix ci
* forward-only does not need compute_require_grad and pass stop_gradient
* polish yaml, use CPUPlace = phi::CPUPlace
* polish yaml and update some test cases
* rm useless funcs
* polish eager_gen code
* polish code
-
Committed by feng_shuai
* convert multihead to oss
* fix: bug
* fix: delete const cast
* fix: don't support bias_qk
* add vit pass
* fix: convert bug and add preln_residual_bias
* support length=-1
* add UT for convert
* add no_bias_qk support for gpu_multihead_op
* delete infer_shape dependence on bias_qk
* oss can only be used on T4 and A*
* fix: change api for ROCM CI
-
Committed by Aganlengzi
-
Committed by feifei-111
* fix_shape
* code style
* fix assert
* fix to_tensor bad return
-
Committed by HongyuJia
-
Committed by Wangzheee
-
Committed by zhangkaihuo
-
Committed by Siming Dai
-
Committed by houj04
-
Committed by Wilber
-
Committed by Sing_chan
* add select_p
* fix bugs
* add custom test for select_p; modify select_p primrules
* modify according to xiaoxu's comment
* add eq_p, select_p, pow_p; use autograd to test grad of high order
* add requirement of autograd, modify expected type of eq
* import primops to use primops.pow
-