- 29 December 2022, 12 commits
- Committed by risemeup1
* fix_static_problem
* test
* fix_static_problem, test=document_fix
- Committed by wangzhen38
* [fluid remove] rawconv
- Committed by Aurelius84
* [D2SCinn] Support delivering skip_gc_vars into Graph
* fix unittest
* fix copy
- Committed by Lin Manhui
- Committed by ykkk2333
- Committed by xu98bin
* auto parallel bf16
- Committed by zmxdream
* fix load into memory
* fix load into memory
* fix code style
- Committed by 姜永久
* rm legacy dygraph part7
* rm non_static_mode
* modify
* modify
* add static test
* set static for lstm_cudnn test
* reset tracer
* reset varbase
* fix
- Committed by Leo Chen
- Committed by MarDino
- Committed by Wang Bojun
* fusedAttenGrad_noGrad
* code style fix
* add ut
* remove unnecessary log
- Committed by 姜永久
* rm legacy layers part6
* rm non_static_mode
* modify non_static
* minor change
* rm loss
* rm in_legacy part8
* minor change
- 28 December 2022, 18 commits
- Committed by RichardWooSJTU
- Committed by 姜永久
* rm legacy nn part2
* rm _non_static_mode
* modify
* modify unpool test
* modify unpool test
* modify loss
* keep legacy for layer_norm
- Committed by tianshuo78520a
* Add CheckPRTemplate.py;test=document_fix;test=prtemplate
* Add CheckPRTemplate.py;test=document_fix;test=prtemplate
* Add CheckPRTemplate.py;test=document_fix;test=prtemplate
- Committed by sprouteer
- Committed by zqw_1997
remove fluid.contrib.fused_elemwise_activation, sequence_topk_avg_pooling, var_conv_2d, match_matrix_tensor and tree_conv (#49331)
- Committed by Leo Chen
* add skip run
* alloc minimum memory
* skip check_size in Alloc
* skip check_size in Alloc
* skip check_size in Alloc
* fix cases when tensor is initialized or empty
* alloc empty output for place info
* add test
* increase timeout
* format code
* skip cpu
* add cudnn_deterministic
* fit for hostAlloc
* follow comments
* change check_size to fake_alloc
- Committed by HappyHeavyRain
* generate the static op of some ops
* add the VERSION of pixel_shuffle
* change the API doc of isclose
* change the API doc of isclose
* fix the isclose op comment
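For context on the isclose entries above, paddle.isclose performs an element-wise closeness check. A minimal usage sketch (the tolerance values are the documented defaults, shown here for illustration and not taken from this commit):

```python
import paddle

x = paddle.to_tensor([10.0, 1e-7, 1.0])
y = paddle.to_tensor([10.1, 1e-8, 1.0])

# Element-wise check of |x - y| <= atol + rtol * |y|, returned as a boolean Tensor.
mask = paddle.isclose(x, y, rtol=1e-05, atol=1e-08)
print(mask)  # e.g. [False, False, True]
```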
- Committed by xiongkun
* einsum supports 0-d tensors: 1. support 0-d tensors in multi-operand expressions; 2. add 9 unittests for einsum with 0-d tensors
* override NVIDIA_TF32_OVERRIDE to avoid accuracy problems on CUDA 11.2 and 11.8
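As a rough illustration of the 0-d einsum support described above, here is a sketch assuming the NumPy-style convention that an empty subscript denotes a 0-d (scalar) operand; the equation and inputs are illustrative, not taken from the commit, and how a true 0-D tensor is constructed may vary by Paddle version:

```python
import paddle

x = paddle.to_tensor([1.0, 2.0, 3.0])  # 1-D operand, subscript 'i'
s = paddle.full([], 2.0)               # intended as a 0-D (scalar) operand, empty subscript

# Scale a vector by a 0-d tensor inside a single multi-operand einsum call.
y = paddle.einsum('i,->i', x, s)
print(y)  # expected: [2., 4., 6.]
```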
- Committed by xiaoxiaohehe001
- Committed by Matsumoto Ruko
* update pypi doc
* update pypi doc
* update pypi doc
* empty commit, re-trigger all CI
Co-authored-by: SigureMo <sigure.qaq@gmail.com>
- Committed by Haohongxiang
- Committed by zhaoyingli
* [AutoParallel] adapt for clip
* fix unittest
* enable_static
* fix dist_fill_constant_batch_size_like
* fix process_mesh.shape
* update cond of modifying shape_list
- Committed by zhaoyingli
- Committed by Yuanle Liu
- Committed by WangZhen
- Committed by 姜永久
* rm legacy fluid part4
* rm non_static_mode
* minor change
* modify initializer
* rm legacy for initializer
* fix dataloader test
- Committed by Huihuang Zheng
This PR increases the delta (tolerance) in a unit test for CUDA 11.8. The reasons for this fix: (1) CUDA 11.8 appears to produce larger deviations in the accuracy results. Our other seresnext targets under the parallel executor (such as the CPU and all-reduce test cases) already use an added delta, so we did the same for the GPU base case with CUDA 11.8. (2) A new executor is under development by the PaddlePaddle team, so the unit tests for the old executor can be relaxed.
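The relaxation described above amounts to allowing a larger absolute delta when comparing results on CUDA 11.8 builds. A schematic sketch (the function name and delta values are illustrative, not the actual test code):

```python
import numpy as np

def check_losses(base_loss, test_loss, use_cuda_11_8=False):
    # Allow a larger absolute difference on CUDA 11.8 builds,
    # mirroring the relaxation already applied to CPU/all-reduce cases.
    delta = 1e-2 if use_cuda_11_8 else 1e-5  # illustrative values
    np.testing.assert_allclose(base_loss, test_loss, atol=delta, rtol=0)

check_losses(np.array([0.531]), np.array([0.534]), use_cuda_11_8=True)
```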
- Committed by wanghuancoder
* delete old dygraph pylayer
- 27 December 2022, 10 commits
- Committed by Yuanle Liu
- Committed by zhouweiwei2014
- Committed by jiangcheng
* fix a CINN install bug: float16.h should be added
* re-update setup.py to support float16
* add it only if the float16.h file exists
- Committed by zhangyikun02
- Committed by risemeup1
* fix run_setup problem
* test
- Committed by xiaoting
* fix fold for large bs
* fix fold for large bs
- Committed by zhaoyingli
* fix input order
* add unittest
* update cmakelist
- Committed by Leo Chen
- Committed by zhaoyingli
* [AutoParallel] quantization pass support export
* support subgraph
* move_presist_var_to_global_block
* update unittest
* fix ci-coverage
* fix codestyle
* fix fake_dequantize_op
* remove unused var
* fix ci error and approval error
* add unittest for fp16 in test_dequant_linear
* replace mutable data
* fix unittest in non-cuda-core
Co-authored-by: carryyu <569782149@qq.com>
Co-authored-by: wufeisheng <wfs1997@163.com>