- 19 October 2021, 5 commits
-
-
Submitted by littletomatodonkey
* fix replicate pad when input size is 0
* add unit test
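A minimal sketch of the replicate-padding path this fix touches, using paddle.nn.functional.pad with mode='replicate'; the zero-sized input mentioned in the comment is the edge case named in the commit, not a demonstrated result.

```python
import paddle
import paddle.nn.functional as F

# Ordinary replicate padding: border values are repeated outward.
x = paddle.arange(6, dtype='float32').reshape([1, 1, 2, 3])
y = F.pad(x, pad=[1, 1, 1, 1], mode='replicate')  # [left, right, top, bottom]
print(y.shape)  # [1, 1, 4, 5]

# The commit's edge case is an input with a zero-sized spatial dimension,
# e.g. paddle.zeros([1, 1, 0, 3]); the fix makes this path well-defined.
```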
-
Submitted by xiongkun
* catch the generator function and intercept it
* add test generator
* add test case
* refine the test case
-
Submitted by Yulong Ao
* Add QR decomposition op
* Change codes to adapt to new svd_helper
* Update linalg.py (restore the deleted comma)
* Restore the deleted line
* Update linalg.py
* Update linalg.py
* Improve the qr code by reviews
* Update QR based on CI results
* Update qr doc, test=document_fix
* Change unsafe and ill-formed codes
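A short usage sketch of the new QR op, assuming it is exposed as paddle.linalg.qr with a default reduced mode; the exact signature is inferred from the commit title, and the input matrix is illustrative.

```python
import paddle

x = paddle.to_tensor([[1.0, 2.0],
                      [3.0, 4.0],
                      [5.0, 6.0]])
q, r = paddle.linalg.qr(x)        # reduced QR: q is [3, 2], r is [2, 2]
print(paddle.allclose(q @ r, x))  # the factors reconstruct x
```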
-
Submitted by YipZLF
* Add auto parallel cost model and unittests
* Fixed code styles
* Fixed bugs and code style
* fixed typo
* Improved code style: object encapsulation
* Fixed codes
* Refactored estimate_cost
* Fixed typo
-
Submitted by Zeng Jinle
* add pow2_warmup op
* remove contrib __all__
* add AttrT
* rename
* follow comments
* fix duplicate PADDLE_RESTRICT
-
- 18 October 2021, 9 commits
-
-
Submitted by Haohongxiang
* [HybridParallel] Support fp16 in dygraph hybrid parallel
* update
* update
* update for recompute
* add unittest of pp+fp16
* add unittest of recompute+fp16
* update
* modify ut
-
Submitted by jakpiase
* added softplus
* refactored softplus op
* deleted unnecessary file
* added missing file
* added formatting
* disabled tests if GPU is used
* added reviewer suggestion
* unified softplus kernel
-
Submitted by levi131
* init functional jacobian api
* finish test with dtype float32
* add float64 test case
* polish code
* use atol=1e-5 with dtype float64
* fix for ci
* set timeout for test_jacobian
* init hessian API
* save status
* polish API docstring
* modify docstring
* add utils.py
* save status
* fix dygraph double grad dtype error when calling for high differential scenario
* reinvoke ci
* test_hessian.py is ok
* polish hessian API
* init vhp
* Revert "init vhp" (this reverts commit cbd4d3b66abe82b0ac10721b9eddeb7d82e0a1c8)
* init vhp
* finish vhp API logically
* add test for partial_engine.cc
* modify numerical_delta with dtype float32
* merge fix for dtype float64
* spell fix
* save status
* polish code
* rm _stop_gradient_pre_process
* save status
* add example for vhp interface
* add _compute_numerical_vjp and _compute_numerical_vhp
* test is ok
* vhp is ok
* add testVHPFloat64
* modify for comments
* modify format
* modify format
* save status
* test_vhp is ok
* finish code polish
* small modify for v is None

Co-authored-by: JiabinYang <360788950@qq.com>
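For context, the vector-Hessian-product that the new vhp API computes can be sketched with plain dygraph double backward; the actual paddle.autograd.functional signatures are not shown here since the commit only names them.

```python
import paddle

# f(x) = sum(x^2), so the Hessian is 2*I and H @ v == 2*v.
x = paddle.ones([3], dtype='float64')
x.stop_gradient = False
v = paddle.to_tensor([1.0, 2.0, 3.0], dtype='float64')

y = paddle.sum(paddle.square(x))
(grad_x,) = paddle.grad(y, x, create_graph=True)  # first-order gradient
gv = paddle.sum(grad_x * v)
(hvp,) = paddle.grad(gv, x)                       # vector-Hessian product
print(hvp.numpy())                                # [2. 4. 6.]
```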
-
Submitted by Qi Li
-
Submitted by Qi Li
-
Submitted by ceci3
* quant support matmul_v2
* fix format
-
Submitted by taixiurong
[XPU AMP] XPU now supports gradient accumulation, creating tensors in dygraph, and updating weight params in AMP (#36439)
-
Submitted by Tongxin Bai
* autograd.functional passed pylint checker
* autograd.functional: fix import errors
* autograd.functional: fixed unit tests
* autograd.functional: minor format change
* [autograd.functional] Fixed vjp and jvp's v=None bug
-
Submitted by Haohongxiang
-
- 15 October 2021, 4 commits
-
-
Submitted by 0x45f
* fix no_grad context error in train mode when using save/load
* change net to train mode in test case
-
Submitted by jiangcheng
* Add CinnSubgraphSearchPass
* solve the CI problem of subgraph order not being the same
* fix some bugs per review advice
* ensure the independence of the subgraph, meaning the subgraph should not have links to the out-graph
* rename cinn_subgraph_search_pass to build_cinn_pass and delete paddle_to_cinn_pass
* add flag to control whether to append the build cinn pass
* remove AppendPass at ParallelExecutorPassBuilder
* rename paddle_to_cinn_pass to build_cinn_pass in build_strategy and close test_run_from_cinn
-
Submitted by Jiabin Yang
* native commit for triple grad of sigmoid
* Updated unittest files
* init functional jacobian api
* Updated trible_test func
* Updated gradient_checker & test_script
* finish test with dtype float32
* add float64 test case
* polish code
* use atol=1e-5 with dtype float64
* fix for ci
* set timeout for test_jacobian
* fix dygraph grad to support high differential
* polish API docstring
* Updated gradient checker and some related files
* fix double grad strip error for high differential
* fix double grad strip error for high differential
* Add Sigmoid triple grad tests
* fix dygraph double grad dtype error when calling for high differential scenario
* Updated triple grad tests func
* Use np.random to initialize ddx
* Updated triple_grad_check func
* add todo for gradient checker and refine some comments
* remove additional code
* add test for warning in backward.py
* add tanh triple grad
* format python code
* refine code

Co-authored-by: veyron95 <veyron_wu@163.com>
Co-authored-by: levi131 <limaolin01@baidu.com>
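A conceptual sketch of the third-order derivative support being exercised here, written with nested dygraph paddle.grad calls; sigmoid is used as in the commit, and the nesting depth relies on the high-order differentiation support this change adds.

```python
import paddle
import paddle.nn.functional as F

x = paddle.to_tensor([0.5], dtype='float64')
x.stop_gradient = False

y = F.sigmoid(x)
(dy,) = paddle.grad(y, x, create_graph=True)    # first derivative
(d2y,) = paddle.grad(dy, x, create_graph=True)  # second derivative
(d3y,) = paddle.grad(d2y, x)                    # third derivative
print(d3y.numpy())
```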
-
Submitted by Zeng Jinle
-
- 14 October 2021, 9 commits
-
-
Submitted by Yanxing Shi
* add sparse_embedding doc
* delete wrong space
* fix error for sample code
* fix error for doc compile
* delete __all__
* modify sample code
-
Submitted by levi131
-
Submitted by Zhang Zheng
-
Submitted by zhulei
* [NPU] Add density_prior_box op
* [NPU] Add density_prior_box op
-
Submitted by Zeng Jinle
* merge momentum ops
* update
* add ut to improve coverage
* remove optimizer change
* fix error msg
* update ut
* add __restrict__ for CUDA
* update ut
* move merged_momentum_op to optimizer dir
* fix coverage
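For reference, a plain-Python sketch of the update a merged (fused) momentum op applies to a whole list of parameters in a single call instead of launching one op per parameter; the function and variable names are illustrative, not the op's actual attribute names.

```python
import numpy as np

def merged_momentum(params, grads, velocities, lr=0.1, mu=0.9):
    """Apply the classic momentum update to every parameter in one pass."""
    for p, g, v in zip(params, grads, velocities):
        v[...] = mu * v + g   # velocity update
        p[...] -= lr * v      # parameter update

params = [np.ones(4), np.ones(8)]
grads = [np.full(4, 0.5), np.full(8, 0.25)]
velocities = [np.zeros(4), np.zeros(8)]
merged_momentum(params, grads, velocities)
```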
-
Submitted by Zeng Jinle
-
Submitted by ShenLiang
* add no_sync for parameters sync
* add pipeline for MoE
-
Submitted by Zeng Jinle
* add memory_analysis
* fix has_none
-
Submitted by Yuang Liu
-
- 13 October 2021, 13 commits
-
-
Submitted by yujun
* update
* update
* update
* try to make CI pass
* fix doc typo
* update doc string
-
Submitted by limingshu
* An initial attempt at cudaLaunchCooperativeKernel
* fix bugs
* Totally replace the lars cuda kernel
* Fix bugs
* a test for lars merge
* Adding las_op_momentum infer_shape
* Fix codes
* use avg_numel instead of max_numel to acquire grid num
* Finally converge when merged-lars works
* modify unittest files about lars op
* fix ctest files
* add merged_operation kernel when cuda version is older than 11
* Fix code style
* fix ctest failure
* fix error
* fix all ctest errors and change lars compute code of cpu
* fix bugs on v100
* revert python modification about lars
* revert python modification codes
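As background for this kernel work, LARS scales each layer's learning rate by a trust ratio computed from the parameter and gradient norms; a plain NumPy sketch of one step follows (the coefficient names are illustrative, not the op's attribute names).

```python
import numpy as np

def lars_step(param, grad, velocity, lr=0.1, mu=0.9,
              lars_coeff=0.001, weight_decay=1e-4, eps=1e-8):
    """One LARS momentum step for a single parameter tensor."""
    p_norm = np.linalg.norm(param)
    g_norm = np.linalg.norm(grad)
    # Layer-wise trust ratio: rescales the global learning rate per layer.
    local_lr = lr * lars_coeff * p_norm / (g_norm + weight_decay * p_norm + eps)
    velocity[...] = mu * velocity + local_lr * (grad + weight_decay * param)
    param[...] -= velocity

param = np.ones(16); grad = np.full(16, 0.1); velocity = np.zeros(16)
lars_step(param, grad, velocity)
```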
-
Submitted by wuhuanzhou
See the detailed PR description at https://github.com/PaddlePaddle/Paddle/pull/36116
-
Submitted by zhangbo9674
* add attr is_distributed
* refine code
* refine black/white list for pure fp16
-
Submitted by zhangbo9674
* add fp16 for clip_by_norm api
* support ClipByGlobalNorm for fp16 in dygraph
* add unittest for dygraph clipGlobalNorm
* refine unittest for dygraph clipGlobalNorm for mac and windows
* refine unittest
* add unittest for fp64
* refine unittest for fp64
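A short dygraph sketch of the combination this change enables, pairing global-norm gradient clipping with fp16 AMP training; the model and hyperparameters are illustrative, and an AMP-capable GPU is assumed.

```python
import paddle

model = paddle.nn.Linear(16, 16)
clip = paddle.nn.ClipGradByGlobalNorm(clip_norm=1.0)
opt = paddle.optimizer.Momentum(learning_rate=0.1,
                                parameters=model.parameters(),
                                grad_clip=clip)
scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

x = paddle.randn([8, 16])
with paddle.amp.auto_cast():  # fp16 forward where safe
    loss = model(x).mean()
scaled = scaler.scale(loss)   # scale the loss to avoid fp16 underflow
scaled.backward()
scaler.minimize(opt, scaled)  # unscale, clip by global norm, then step
opt.clear_grad()
```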
-
Submitted by Guoxia Wang
-
Submitted by caozhou
-
Submitted by Leo Chen
* refine amp level
* fix typo
* update tracer._amp_level
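For orientation, the AMP level is selected per auto_cast scope; the sketch below assumes paddle.amp.auto_cast accepts a level argument ('O1' inserts casts around black/white-listed ops, 'O2' runs almost everything in fp16) and that paddle.amp.decorate converts parameters for O2, which is the setting the tracer's _amp_level tracks.

```python
import paddle

model = paddle.nn.Linear(16, 16)
x = paddle.randn([4, 16])

# O1: mixed precision driven by the op black/white lists.
with paddle.amp.auto_cast(level='O1'):
    y1 = model(x)

# O2: (almost) pure fp16; parameters are converted up front.
model = paddle.amp.decorate(models=model, level='O2')
with paddle.amp.auto_cast(level='O2'):
    y2 = model(x)
```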
-
Submitted by Huihuang Zheng
Remove the RunFromCinn method in ParallelExecutor (PE) because CinnRunner will be called in the Compute method of SubgraphOp
-
Submitted by Jiabin Yang
* native commit for triple grad of sigmoid
* Updated unittest files
* init functional jacobian api
* Updated trible_test func
* Updated gradient_checker & test_script
* finish test with dtype float32
* add float64 test case
* polish code
* use atol=1e-5 with dtype float64
* fix for ci
* set timeout for test_jacobian
* fix dygraph grad to support high differential
* polish API docstring
* Updated gradient checker and some related files
* fix double grad strip error for high differential
* fix double grad strip error for high differential
* Add Sigmoid triple grad tests
* fix dygraph double grad dtype error when calling for high differential scenario
* Updated triple grad tests func
* Use np.random to initialize ddx
* Updated triple_grad_check func
* add todo for gradient checker and refine some comments
* remove additional code
* add test for warning in backward.py
* format python code

Co-authored-by: veyron95 <veyron_wu@163.com>
Co-authored-by: levi131 <limaolin01@baidu.com>
-
Submitted by Wangzheee
* add_int_pass
* add_int8_flag_pass
* add_int8_flag_pass
* fix CMakeLists.txt
* fix test_trt_fc_fuse_quant_dequant_pass.py
* fix python/paddle/fluid/tests/unittests/ir/inference/test_trt_fc_fuse_quant_dequant_pass.py
* fix test_trt_fc_fuse_quant_dequant_pass.py
-
Submitted by From00
-
Submitted by levi131
* modify format
* modify format
-