- 18 Oct 2021, 8 commits
-
-
Committed by levi131

* init functional jacobian api
* finish test with dtype float32
* add float64 test case
* polish code
* use atol=1e-5 with dtype float64
* fix for ci
* set timeout for test_jacobian
* init hessian API
* save status
* polish API docstring
* modify docstring
* add utils.py
* save status
* fix dygraph double grad dtype error when calling for high differential scenario
* reinvoke ci
* test_hessian.py is ok
* polish hessian API
* init vhp
* Revert "init vhp" (this reverts commit cbd4d3b66abe82b0ac10721b9eddeb7d82e0a1c8)
* init vhp
* finish vhp API logically
* add test for partial_engine.cc
* modify numerical_delta with dtype float32
* merge fix for dtype float64
* spell fix
* save status
* polish code
* rm _stop_gradient_pre_process
* save status
* add example for vhp interface
* add _compute_numerical_vjp and _compute_numerical_vhp
* test is ok
* vhp is ok
* add testVHPFloat64
* modify for comments
* modify format
* modify format
* save status
* test_vhp is ok
* finish code polish
* small modify for v is None

Co-authored-by: JiabinYang <360788950@qq.com>
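A minimal sketch of how the functional autograd APIs added above might be exercised. The module path `paddle.autograd.functional` and the exact keyword names are assumptions taken from the commit titles, not confirmed signatures.

```python
# Hedged sketch: jacobian / hessian / vhp for a scalar-valued function.
# Module path and return conventions are assumptions from the commit titles.
import paddle
from paddle.autograd.functional import jacobian, hessian, vhp

def func(x):
    # scalar output so hessian and vhp are well defined
    return paddle.sum(paddle.nn.functional.sigmoid(x))

x = paddle.rand([4], dtype='float64')
x.stop_gradient = False

J = jacobian(func, x)          # first-order derivatives w.r.t. x
H = hessian(func, x)           # 4x4 matrix of second derivatives
v = paddle.ones_like(x)
_, hvp = vhp(func, x, v=v)     # vector-Hessian product; v may default to ones when None
print(J, H, hvp)
```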
-
Committed by Qi Li
-
Committed by Qi Li
-
Committed by Siming Dai

* fix async_read bug
* change index place to cpu
* add tensor size judge
* add async_read & async_write test
* fix bug in async_write
* fix mac py3 ci
* fix bug for cpu version paddle
* fix windows ci bug
* change input argument error type
* change const_cast to mutable_data
* add async_write out-of-bound check and consummate error hint
* fix a small bug for dst_tensor
* add docs and refine codes
* refine docs
* notest,test=windows_ci
* fix windows ci
* fix require
* fix code-block
* add core.is_compiled_with_cuda()
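The last bullet guards the CUDA-only async_read/async_write tests on CPU builds. A minimal sketch of that kind of guard is below; only `paddle.is_compiled_with_cuda()` comes from the log, the test class and body are hypothetical.

```python
# Hedged sketch: skip CUDA-only tests when Paddle is a CPU build.
import unittest
import paddle

@unittest.skipIf(not paddle.is_compiled_with_cuda(),
                 "async_read/async_write need a CUDA build of Paddle")
class TestAsyncReadWrite(unittest.TestCase):
    def test_placeholder(self):
        # a real test would pin host buffers and call the async read/write entry points
        self.assertTrue(paddle.is_compiled_with_cuda())

if __name__ == "__main__":
    unittest.main()
```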
-
Committed by ceci3

* quant support matmul_v2
* fix format
-
Committed by taixiurong
[XPU AMP] 1. xpu support gradient acc 2. xpu support create tensor in dygraph 3. xpu support update weight params in amp (#36439)
-
Committed by Tongxin Bai

* autograd.functional passed pylint checker.
* autograd.functional: fix import errors.
* autograd.functional: fixed unit tests.
* autograd.functional minor format change
* [autograd.functional] Fixed vjp and jvp's v=None bug.
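A small sketch of the vjp call whose v=None path the last bullet fixes. The module path and the (output, gradients) return convention are assumptions.

```python
# Hedged sketch: vjp with and without an explicit cotangent vector v.
import paddle
from paddle.autograd.functional import vjp

def f(x):
    return x * x

x = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
out, grads = vjp(f, x)                          # v=None: cotangent assumed to default to ones
out, grads = vjp(f, x, v=paddle.ones_like(x))   # equivalent explicit v
print(grads)  # expected 2*x = [2., 4., 6.]
```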
-
Committed by Haohongxiang
-
- 17 Oct 2021, 1 commit
-
-
Committed by Zeng Jinle
This reverts commit 0452f27c.
-
- 16 Oct 2021, 1 commit
-
-
Committed by Zhang Zheng

* fix the initializer of resnet unit op
* fix the initializer of resnet unit op
-
- 15 Oct 2021, 7 commits
-
-
Committed by duanboqiang
-
Committed by Zhang Zheng
-
Committed by Nyakku Shigure

* add resnext model
* add zh docs
* add unittest
* test performance

Co-authored-by: Ainavo <ainavo@163.com>
Co-authored-by: pithygit <pyg20200403@163.com>
-
Committed by 0x45f

* fix no_grad context error in train mode when using save/load
* change net to train mode in test case
-
Committed by jiangcheng

* Add CinnSubgraphSearchPass
* solve CI problem of subgraph order not being the same
* fix some bugs per review advice
* ensure the independence of the subgraph, meaning the subgraph should not have links to the out-graph
* rename cinn_subgraph_search_pass to build_cinn_pass and delete paddle_to_cinn_pass
* add flag to control whether to append the build cinn pass
* remove AppendPass at ParallelExecutorPassBuilder
* rename paddle_to_cinn_pass to build_cinn_pass in build_strategy and close test_run_from_cinn
-
Committed by Jiabin Yang

* native commit for triple grad of sigmoid
* Updated unittests files
* init functional jacobian api
* Updated trible_test func
* Updated gradient_checker & test_script
* finish test with dtype float32
* add float64 test case
* polish code
* use atol=1e-5 with dtype float64
* fix for ci
* set timeout for test_jacobian
* fix dygraph grad to support high differential
* polish API docstring
* Updated gradient checker and some related files
* fix double grad strip error for high differential
* fix double grad strip error for high differential
* Add Sigmoid triple grad tests
* fix dygraph double grad dtype error when calling for high differential scenario
* Updated triple grad tests func
* Use np.random to initialize ddx
* Updated triple_grad_check func
* add todo for gradient checker and refine some comments
* remove additional code
* add test for warning in backward.py
* add tanh triple grad
* format python code
* refine code

Co-authored-by: veyron95 <veyron_wu@163.com>
Co-authored-by: levi131 <limaolin01@baidu.com>
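A hedged sketch of what "triple grad of sigmoid" means in the dygraph path these commits extend: chaining `paddle.grad` with `create_graph=True` three times, assuming the sigmoid triple-grad kernel added here is available in the build.

```python
# Hedged sketch: third-order gradient of sigmoid via chained paddle.grad calls.
import paddle

x = paddle.to_tensor([0.5, -1.0, 2.0], stop_gradient=False)
y = paddle.nn.functional.sigmoid(x)

dx = paddle.grad(y, x, create_graph=True)[0]     # first-order grad
d2x = paddle.grad(dx, x, create_graph=True)[0]   # second-order grad
d3x = paddle.grad(d2x, x)[0]                     # third-order (triple) grad
print(d3x)
```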
-
Committed by Zeng Jinle
-
- 14 Oct 2021, 11 commits
-
-
Committed by Yanxing Shi

* add sparse_embedding doc
* delete wrong space
* fix error for sample code
* fix error for doc compile
* delete __all__
* modify sample code
-
Committed by duanboqiang
-
Committed by levi131
-
Committed by Zhang Zheng
-
Committed by zhulei

* [NPU] Add density_prior_box op
* [NPU] Add density_prior_box op
-
Committed by Zeng Jinle

* merge momentum ops
* update
* add ut to improve coverage
* remove optimizer change
* fix error msg
* update ut
* add __restrict__ for CUDA
* update ut
* move merged_momentum_op to optimizer dir
* fix coverage
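A pure-Python sketch of the idea behind merging momentum ops: apply one fused update pass over a list of parameters instead of launching one momentum op per parameter. The helper name `merged_momentum_update` is hypothetical; only the classic momentum rule is assumed.

```python
# Hedged sketch: one momentum pass over many tensors (reference only, not the CUDA op).
import paddle

def merged_momentum_update(params, grads, velocities, lr=0.01, mu=0.9):
    # classic momentum: v = mu * v + g;  p = p - lr * v, applied to every tensor
    for p, g, v in zip(params, grads, velocities):
        v.set_value(mu * v + g)
        p.set_value(p - lr * v)

params = [paddle.randn([3]), paddle.randn([2, 2])]
grads = [paddle.ones_like(p) for p in params]
velocities = [paddle.zeros_like(p) for p in params]
merged_momentum_update(params, grads, velocities)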
-
Committed by Zeng Jinle
-
Committed by ShenLiang

* add no_sync for parameters sync
* add pipeline for moe
-
Committed by levi131
-
Committed by Zeng Jinle

* add memory_analysis
* fix has_none
-
Committed by Yuang Liu
-
- 13 Oct 2021, 12 commits
-
-
Committed by Guoxia Wang

* fix BatchNorm for fp16
-
Committed by yujun

* update
* update
* update
* try to make CI pass
* doc typo
* update doc string
-
Committed by limingshu

* A first try at cudaLaunchCooperativeKernel
* fix bugs
* Totally replace the lars cuda kernel
* Fix bugs
* a test for lars merge
* Adding las_op_momentum infer_shape
* Fix codes
* use avg_numel instead of max_numel to acquire grid num
* modify unittest files about lars op
* Finally converges when merged-lars works
* fix ctest files
* add merged_operation kernel when cuda version is older than 11
* Fix code style
* fix ctest failure
* fix error
* fix all ctest errors and change lars compute code of cpu
* fix bugs on v100.
* revert python modification about lars
* revert python modification codes
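A hedged sketch of the LARS local learning rate (trust ratio) that the fused CUDA kernels above compute per parameter. The coefficient and epsilon names are illustrative, not taken from the op's actual attribute list.

```python
# Hedged sketch: LARS trust ratio, reference math only.
import paddle

def lars_local_lr(param, grad, base_lr, lars_coeff=0.001, weight_decay=1e-4, eps=1e-8):
    p_norm = paddle.linalg.norm(param)
    g_norm = paddle.linalg.norm(grad)
    trust_ratio = lars_coeff * p_norm / (g_norm + weight_decay * p_norm + eps)
    return base_lr * trust_ratio

w = paddle.randn([128, 64])
g = paddle.randn([128, 64])
print(lars_local_lr(w, g, base_lr=0.1))
```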
-
Committed by wuhuanzhou

See the detailed PR description at https://github.com/PaddlePaddle/Paddle/pull/36116
-
Committed by zhangbo9674

* add attr is_distributed
* refine code
* refine black/white list for pure fp16
-
Committed by zhangbo9674

* add fp16 for clip_by_norm api
* support ClipByGlobalNorm for fp16 in dygraph
* add unittest for dygraph clipGlobalNorm
* refine unittest for dygraph clipGlobalNorm for mac and windows
* refine unittest
* add unittest for fp64
* refine unittest for fp64
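A minimal sketch of global-norm gradient clipping in a dygraph fp16 (AMP) step, the path these commits extend. That the scaler unscales gradients before the clip is applied is an assumption, not spelled out in this log.

```python
# Hedged sketch: ClipGradByGlobalNorm combined with AMP loss scaling in dygraph.
import paddle

model = paddle.nn.Linear(16, 4)
clip = paddle.nn.ClipGradByGlobalNorm(clip_norm=1.0)
opt = paddle.optimizer.Momentum(learning_rate=0.01,
                                parameters=model.parameters(),
                                grad_clip=clip)
scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

x = paddle.randn([8, 16])
with paddle.amp.auto_cast():        # fp16 forward where supported
    loss = model(x).mean()

scaled = scaler.scale(loss)
scaled.backward()
scaler.minimize(opt, scaled)        # grad_clip applied as part of the update
opt.clear_grad()
```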
-
Committed by Guoxia Wang
-
Committed by caozhou
-
Committed by Leo Chen

* refine amp level
* fix typo
* update tracer._amp_level
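A hedged sketch of the AMP levels the tracer tracks: 'O1' inserts per-op casts driven by white/black lists, while 'O2' keeps almost the whole network in fp16. The `level` keyword on `auto_cast` and `paddle.amp.decorate` are assumed to be available in this branch.

```python
# Hedged sketch: O1 vs O2 AMP levels in dygraph.
import paddle

model = paddle.nn.Linear(16, 4)
x = paddle.randn([8, 16])

# O1: op-level mixed precision driven by white/black lists
with paddle.amp.auto_cast(level='O1'):
    y1 = model(x)

# O2: near-pure fp16; parameters are usually decorated to fp16 first
model_o2 = paddle.amp.decorate(models=model, level='O2')
with paddle.amp.auto_cast(level='O2'):
    y2 = model_o2(x)
```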
-
Committed by Huihuang Zheng

Remove the RunFromCinn method in PE because we will call CinnRunner in the Compute method of SubgraphOp
-
Committed by Jiabin Yang

* native commit for triple grad of sigmoid
* Updated unittests files
* init functional jacobian api
* Updated trible_test func
* Updated gradient_checker & test_script
* finish test with dtype float32
* add float64 test case
* polish code
* use atol=1e-5 with dtype float64
* fix for ci
* set timeout for test_jacobian
* fix dygraph grad to support high differential
* polish API docstring
* Updated gradient checker and some related files
* fix double grad strip error for high differential
* fix double grad strip error for high differential
* Add Sigmoid triple grad tests
* fix dygraph double grad dtype error when calling for high differential scenario
* Updated triple grad tests func
* Use np.random to initialize ddx
* Updated triple_grad_check func
* add todo for gradient checker and refine some comments
* remove additional code
* add test for warning in backward.py
* format python code

Co-authored-by: veyron95 <veyron_wu@163.com>
Co-authored-by: levi131 <limaolin01@baidu.com>
-
Committed by Wangzheee

* add_int_pass
* add_int8_flag_pass
* add_int8_flag_pass
* fix CMakeLists.txt
* fix test_trt_fc_fuse_quant_dequant_pass.py
* fix python/paddle/fluid/tests/unittests/ir/inference/test_trt_fc_fuse_quant_dequant_pass.py
* fix test_trt_fc_fuse_quant_dequant_pass.py
-