1. 26 Oct 2021 · 1 commit
    • [cherry-pick-2.2] Fused attention op forward (#35905) (#36708) · d2be870a
      Committed by Li Min
      Feature: the goal of this PR is to improve the computational performance of the attention module.
      To reduce framework-level op scheduling overhead, this PR implements the attention module by hand at the C++ level and exposes it as a single large fused attention op.
      To reduce memory-access overhead, this PR applies two optimizations (see the sketch below):
      (1) when computing q, k and v, the input X is shared, so the gemm, transpose and bias add are reduced from three calls to one;
      (2) kernel fusion is used, so data that used to move between separate CUDA kernels is instead passed through registers.
      d2be870a
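      The QKV part of the optimization above amounts to concatenating the three projection weights and doing a single GEMM plus one bias add on the shared input X. A minimal numpy sketch of that idea (shapes and names are illustrative, not the op's actual implementation):

      ```python
      import numpy as np

      tokens, hidden = 8, 64                                           # illustrative sizes
      X = np.random.rand(tokens, hidden).astype(np.float32)            # shared input X
      W_qkv = np.random.rand(hidden, 3 * hidden).astype(np.float32)    # Wq | Wk | Wv concatenated
      b_qkv = np.zeros(3 * hidden, dtype=np.float32)

      # One GEMM and one bias add instead of three separate q/k/v projections.
      qkv = X @ W_qkv + b_qkv                                          # [tokens, 3 * hidden]
      q, k, v = np.split(qkv, 3, axis=-1)                              # then reshape/transpose per head as needed
      ```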
  2. 27 Sep 2021 · 2 commits
  3. 17 Sep 2021 · 1 commit
    • [AMP] Support pure fp16 training mode for dygraph (#35521) · adaeee4d
      Committed by zhangbo9674
      * add pure fp16 major function in auto_cast & tracer
      
      * support master weight in dygraph for pure fp16
      
      * check mix dtype of fp16&fp32 for check_finite_and_unscale op
      
      * change pure fp16 function name
      
      * fix some bugs in auto_cast
      
      * refine auto_cast interface logic
      
      * add param _casted_by_pure_fp16 for class Layer
      
      * support a state_dict hook for saving the model in a user-appointed dtype in pure_fp16_decorator
      
      * refine pure_fp16_decorator as decorator
      
      * add unittest
      
      * add comment
      
      * add comment
      
      * support recompute
      
      * add comment for auto_cast and decorator
      
      * support to_static_state_dict for paddle.jit.save
      
      * remove the limit on the number of models and optimizers
      
      * add lookup_table in black_list
      
      * fix momentum and layer state_dict
      
      * fix bug in layer state_dict
      
      * fix bug in layer state_dict_helper
      
      * refine unittest
      
      * refine test_momentum_op
      
      * refine interface and some code
      
      * refine amp_decorator interface
      
      * refine pure fp16 interface
      
      * refine master weight interface
      adaeee4d
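      For context, a minimal sketch of how the pure-fp16 (O2) dygraph mode added above is typically used; the exact public names (paddle.amp.decorate, auto_cast(level='O2'), GradScaler) are how the feature surfaces in later releases and are assumptions here rather than a quote from this PR:

      ```python
      import paddle

      model = paddle.nn.Linear(4, 4)
      opt = paddle.optimizer.Momentum(learning_rate=0.01, parameters=model.parameters())

      # decorate() casts the model to fp16 and keeps fp32 master weights inside the optimizer.
      model, opt = paddle.amp.decorate(models=model, optimizers=opt, level='O2')
      scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

      x = paddle.rand([2, 4])
      with paddle.amp.auto_cast(level='O2'):     # black_list ops (e.g. lookup_table) stay in fp32
          loss = model(x).mean()
      scaled = scaler.scale(loss)
      scaled.backward()
      scaler.minimize(opt, scaled)               # unscales grads and skips the step on inf/nan
      opt.clear_grad()
      ```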
  4. 09 Sep 2021 · 1 commit
    • Add matrix_rank Op and its GPU and CPU kernels (#34823) · eb1fbf12
      Committed by 0x45f
      * init matrix_rank op, add matrix_rank CPU code and test
      
      * add GPU kernel, remove svd_eigen.h
      
      * add CPU kernel when tol is tensor
      
      * add cpu and gpu code when tol is tensor
      
      * fix CI-ROCM error
      
      * add matrix_rank API description, fix PR-CI-Py3 error
      
      * fix PR-CI-Windows error, add matrix_rank API test
      
      * delete useless comments
      
      * fix review
      
      * add my code in svd_helper.h
      
      * update doc comments
      
      * remove spaces
      eb1fbf12
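      A short usage sketch of the resulting API, assuming it is exposed as paddle.linalg.matrix_rank(x, tol=None, hermitian=False) as in later releases; values are illustrative:

      ```python
      import paddle

      # 4x4 diagonal matrix with one zero singular value -> rank 3.
      x = paddle.diag(paddle.to_tensor([1.0, 2.0, 0.0, 4.0]))
      print(paddle.linalg.matrix_rank(x))                              # 3, with the default tolerance
      # tol can also be passed as a Tensor (the "tol is tensor" kernels mentioned above).
      print(paddle.linalg.matrix_rank(x, tol=paddle.to_tensor(3.0)))   # 1, only the value 4.0 exceeds tol
      print(paddle.linalg.matrix_rank(x, hermitian=True))              # eigenvalue path for symmetric input
      ```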
  5. 08 Sep 2021 · 1 commit
  6. 27 Aug 2021 · 1 commit
  7. 23 Aug 2021 · 1 commit
  8. 16 Aug 2021 · 1 commit
    • add unique_consecutive_op (#34334) · 875cfd57
      Committed by duanboqiang
      * add unique_consecutive_op
      
      * add unique_consecutive_op
      
      * add unique_consecutive_op
      
      * add unique_consecutive_op
      
      * add unique_consecutive_op
      
      * add unique_consecutive_op
      
      * add unique_consecutive_op
      
      * add unique_consecutive_op
      
      * remove unity build
      
      * add unique_consecutive op
      
      * add unique_consecutive op
      
      * add enable static
      
      * add noqa
      
      * add space line
      
      * add default case.
      
      * add comma
      
      * add space line
      
      * modify unique_consecutive unittest
      
      * optimize ut coverage
      
      * rebase develop
      
      * improve coverage
      
      * update en docs
      
      * update en docs
      
      * update en docs
      
      * update en docs
      
      * update en docs
      
      * update en doc
      875cfd57
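      A small usage sketch of the new op through its assumed Python API, paddle.unique_consecutive; unlike paddle.unique it only collapses runs of equal neighbouring elements:

      ```python
      import paddle

      x = paddle.to_tensor([1, 1, 2, 2, 3, 1, 1, 2])
      out, inverse, counts = paddle.unique_consecutive(
          x, return_inverse=True, return_counts=True)
      print(out)      # [1, 2, 3, 1, 2]  -- trailing 1s and 2 survive; they are not adjacent to the first runs
      print(inverse)  # [0, 0, 1, 1, 2, 3, 3, 4]
      print(counts)   # [2, 2, 1, 2, 1]
      ```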
  9. 12 Aug 2021 · 1 commit
  10. 14 Jul 2021 · 1 commit
  11. 23 Jun 2021 · 1 commit
  12. 11 Jun 2021 · 1 commit
  13. 09 Jun 2021 · 2 commits
  14. 05 May 2021 · 1 commit
  15. 26 Apr 2021 · 2 commits
  16. 21 Apr 2021 · 1 commit
    • [NPU] Merge NPU ccl code (#32381) · c3158527
      Committed by zhang wenhui
      * add allreduce and broadcast without test (#31024)
      
      add allreduce and broadcast without test
      
      * Refactor HCCLCommContext to be compatible with Paddle (#31359)
      
      Refactor HCCLCommContext to be compatible with Paddle (#31359)
      
      * [NPU] add npu kernel for communication op (#31437)
      
      * add allreduce and broadcast without test
      
      * add c_broadcast_test case
      
      * build c_comm_init and c_create_group operators
      
      * make the whole thing compile
      
      * add broadcast and init op test case but run failed
      
      * make unit test compile
      
      * fix broadcast test bug and change into hcom for ccl
      
      * change c_comm_init and c_create_group ops accordingly
      
      * make tests compile
      
      * transfer code to 27
      
      * compiled successfully in 28, but run failed
      
      * test broadcast in 28, but failed
      
      * make hcom primitives work
      
      * change hccl data type for base.h
      
      * fix broadcast bug
      
      * make attributes work
      
      * fix group name bug
      
      * add allreduce but test failed
      
      * allreduce bug for qiuliang
      
      * allreduce finished
      
      * add allgather and reducescatter
      
      * merge all op code
      
      * add allgather test
      
      * finish running all ccl op tests except send/recv
      
      * all all op and test exclude send/recv
      
      * send_v2_npu.cc recv_v2_npiu.cc compiled
      
      * fix ccl core dump bug and test allgather, reducescatter, broadcast op
      
      * fix allreduce bug just for test
      
      * hcom send&recv test pass, without hcom_destroy
      
      * for qiuliang test
      
      * Ascend Send&Recv Test Pass
      
      * all op (ex send/recv) ok
      
      * fix bug
      
      * merge all ccl op
      
      * style merge to PaddlePaddle
      
      * merge style
      
      * new merge style
      
      * merge style 2
      
      * insert an empty line at the end
      
      * disable ctest for hcom to pass ci
      Co-authored-by: void-main <voidmain1313113@gmail.com>
      Co-authored-by: f2hkop <f2huestc@outlook.com>
      
      * Add auto-increasing tag id for Hcom OPs (#31702)
      
      * add c_reduce_sum op (#31793)
      
      add c_reduce_sum op
      
      * update Ascendrc hccl to 20.3 (#32126)
      
      update Ascendrc hccl to 20.3 (#32126)
      
      * fix merge code
      
      * change cmake.txt1
      
      * [NPU] Support npu kernel for c sync stream op (#31386)
      
      * sync stream npu op
      
      * add with_ascend_acl
      
      * update c++ unittest
      
      * compile all failed
      
      * try to pre commit
      
      * after pre commit
      
      * merge&compile&test hccl successfully!
      
      * fix code style
      
      * fix code style
      
      * fix bugs about hccl
      
      * fix some bugs
      
      * fix code style
      
      * fix style
      
      * fix style
      
      * fix
      
      * fixed
      
      * merge develop
      Co-authored-by: lw921014 <liuwei921014@yeah.net>
      Co-authored-by: Void Main <voidmain1313113@gmail.com>
      Co-authored-by: f2hkop <f2huestc@outlook.com>
      Co-authored-by: xiayanming <41795079@qq.com>
      c3158527
  17. 07 Apr 2021 · 1 commit
    • [NPU] Merge ascend GE & distributed code by 0208 from ascendrc (#31957) · 8c7c53b3
      Committed by zhang wenhui
      * Ascend rc (#30483)
      
      * Fix compilation on CANN20.1 and older (#30494)
      
      Fix compilation on CANN20.1 and older
      
      * Add distribution supported (#30578)
      
      Add distribution supported
      
      * Build parser for Hcom* operators (#30627)
      
      Build parser for Hcom* operators
      
      * Pass device_ids info from launch to trainer. (#30632)
      
      Pass device_ids info from launch to trainer
      
      * Add Hccl program group (#30642)
      
      Add Hccl program group
      
      * Add startup bash files of test_ascend_group. (#30645)
      
      Add startup bash files of test_ascend_group
      
      * cleanup (#30646)
      
      cleanup test_ascend_group.py
      
      * [Feature] Build parser to support distributed training (#30658)
      
      [Feature] Build parser to support distributed training
      
      * fix compilation on ascend-20.1 (#30722)
      
      fix compilation on ascend-20.1
      
      * Dev/fix ascend string (#30749)
      
      Dev/fix ascend string
      
      * code style (#30781)
      
      code style
      
      * Merge ascend_optimizer and ascend_parser. (#30776)
      
      Merge ascend_optimizer and ascend_parser.
      
      * Ascendrc add converted op : [range/equal/range/uniform_random/expand/squeeze], fix cast op bug  (#30797)
      
      Ascendrc add converted op : [range/equal/range/uniform_random/expand/squeeze], fix cast op bug
      
      * Add paddle ascend distribution training supported (#30796)
      
      Add paddle ascend distribution training supported
      
      * pass cxx_flags to gloo cmake (#30857)
      
      * Destroy session first. (#30954)
      
      Destroy session first.
      
      * merge
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix style, test=develop
      
      * fix, test=develop
      
      * fix
      
      * fix log fatal, test=develop
      
      * fix enforce style, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix rccl, test=develop
      
      * fix test, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix node_num, test=develop
      
      * fix ids str, test=develop
      
      * fix ids str, test=develop
      
      * fix ids str, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix style code, test=develop
      
      * fix style code, test=develop
      
      * fix style code, test=develop
      
      * fix style code, test=develop
      Co-authored-by: hutuxian <hutuxian2011@sina.cn>
      Co-authored-by: gongweibao <weibao.gong@gmail.com>
      Co-authored-by: Void Main <voidmain1313113@gmail.com>
      Co-authored-by: Leo Chen <chenqiuliang@baidu.com>
      Co-authored-by: dingsiyu <18369187719@163.com>
      Co-authored-by: OleNet <olenet@126.com>
      8c7c53b3
  18. 01 Apr 2021 · 1 commit
  19. 26 Mar 2021 · 1 commit
  20. 17 Jan 2021 · 1 commit
  21. 15 Jan 2021 · 1 commit
    • Add Inplace strategy (Output reuse Input Varbase) in dygraph (#30103) · 13d75736
      Committed by pangyoki
      * add view strategy on squeeze,unsqueeze,reshape,flatten
      
      * add squeeze unittest
      
      * add unittests
      
      * use View strategy as name rather than Reuse Allocation
      
      * fix view api doc
      
      * fix format
      
      * use core.ops when input of reshape2 is Tensor
      
      * fix test_cross_entropy_loss error because of reshape2
      
      * fix test_cross_entropy_loss error because of reshape2
      
      * add inplace strategy
      
      * add elementwise_add sub
      
      * let backward op not use inplace
      
      * grad op do not use inplace
      
      * fix memory increase error and add leaf error message
      
      * delete selected_rows
      
      * change op_function
      
      * little change
      
      * solve HandleViewBetweenInputAndOutput
      
      * add unittest and leaf error message
      
      * merge view error
      
      * optimize op_function_generator format and support sum inplace op
      
      * fix format of basic_engine
      
      * fix format for framework
      
      * little change of variable wrapper
      
      * add reshape, squeeze, unsqueeze, scatter api
      
      * add relu elu tanh softmax inplace api
      
      * fix test_squeeze_op unittest
      
      * fix test_relu_op unittest
      
      * fix comment problems
      
      * delete sample code of inplace api
      
      * add reference of grad_pending_nodes in basic_engine
      
      * fix unittest name
      
      * add inplace apis into wlist
      
      * fix error message
      
      * add PADDLE_ENFORCE for set grad op twice
      
      * fix head file error
      13d75736
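      A minimal sketch of what the inplace strategy looks like from the Python side, assuming the trailing-underscore APIs (squeeze_, reshape_, relu_, ...) added by this PR; the exact keyword names are assumptions:

      ```python
      import paddle

      # Inplace variants (trailing "_") reuse the input Varbase's allocation
      # instead of allocating a new output tensor.
      x = paddle.rand([2, 3, 1])
      x.squeeze_(axis=-1)          # non-inplace form would be y = x.squeeze(axis=-1)
      print(x.shape)               # [2, 3]

      # Leaf tensors that require grad are rejected, matching the "leaf error message" commits above.
      w = paddle.rand([4])
      w.stop_gradient = False
      # w.relu_()                  # expected to raise: inplace op on a leaf Tensor that requires grad
      ```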
  22. 09 Jan 2021 · 1 commit
    • add View(reuse allocation) strategy on squeeze, unsqueeze, reshape, flatten op (#29913) · da16b33f
      Committed by pangyoki
      * add view strategy on squeeze,unsqueeze,reshape,flatten
      
      * add squeeze unittest
      
      * add unittests
      
      * use View strategy as name rather than Reuse Allocation
      
      * fix view api doc
      
      * fix format
      
      * use core.ops when input of reshape2 is Tensor
      
      * fix test_cross_entropy_loss error because of reshape2
      
      * delete selected_rows
      
      * change op_function
      
      * little change
      
      * solve HandleViewBetweenInputAndOutput
      da16b33f
  23. 08 Jan 2021 · 1 commit
    • Fix dtype of ungenerated grad var (#28511) · 8696335f
      Committed by Leo Chen
      * fix dtype of ungenerated grad var
      
      * update ut
      
      * refine code
      
      * set default dtype
      
      * fix could_use_cudnn bug
      
      * remove debug code
      
      * re-implement
      
      * fix bug
      8696335f
  24. 07 Jan 2021 · 1 commit
  25. 06 Jan 2021 · 1 commit
  26. 02 Dec 2020 · 1 commit
    • Add pure fp16 training with master weights. (#27712) · be3777a5
      Committed by Zhen Wang
      * add the weight decay func for the momentum op
      
      * Add the multi_precision function in Momentum Optimizer.
      
      * Make sure that the initial values of the master weights are the same as the fp16 weights.
      
      * add static loss scaling.
      
      * add the rescale_grad function in the pure fp16 training.
      
      * use the original momentum updating method.
      
      * Polish some codes, such as variable names.
      
      * add docstring for apis.
      
      * update the var creation details of _create_master_weight.
      
      * not modify codes about imperative momentum updating.
      
      * Fix the error of test_dist_sparse_tensor_load_momentum UT.
      
      * add unit test for multi precision fp16 training.
      
      * add more unit tests for CI.
      
      * Use lower threshold values for allclose comparisons in the test_multi_precision_fp16_train UT.
      
      * For CI Coverage Checking.
      be3777a5
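      A sketch of the user-facing switch this adds, assuming it surfaces as the multi_precision / rescale_grad arguments of the Momentum optimizer as in later releases:

      ```python
      import paddle

      model = paddle.nn.Linear(16, 16)
      # multi_precision keeps an fp32 master copy of every fp16 parameter; the momentum update
      # runs in fp32 and the result is cast back to fp16, initialised from the fp16 weights as above.
      opt = paddle.optimizer.Momentum(
          learning_rate=0.01,
          momentum=0.9,
          parameters=model.parameters(),
          multi_precision=True,        # assumed flag name introduced by this PR
          rescale_grad=1.0 / 128.0,    # fold a static loss scale of 128 into the update
      )
      ```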
  27. 18 Nov 2020 · 1 commit
  28. 02 Nov 2020 · 1 commit
  29. 29 Oct 2020 · 2 commits
  30. 28 Oct 2020 · 1 commit
  31. 14 Oct 2020 · 1 commit
  32. 12 Oct 2020 · 1 commit
  33. 27 Sep 2020 · 1 commit
    • add support to float64 input of warpctc op. (#27399) · 1501a80f
      Committed by Li Fuchen
      * add float64 input to ctc_loss
      
      * modified error message of warpctc
      
      * update repo and tag of warpctc
      
      * add test for warpctc with float64 input
      
      * modified warpctc.cmake to make sure warpctc always builds
      
      * resolved sample code bug of warpctc
      
      * add core.ops in warpctc dygraph
      
      * fix a bug of test
      1501a80f
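      A short sketch of the float64 path through the public ctc_loss API (shapes and values are illustrative; the float32 path is unchanged):

      ```python
      import paddle
      import paddle.nn.functional as F

      T, B, C = 5, 2, 4                                    # time steps, batch size, classes incl. blank = 0
      logits = paddle.rand([T, B, C], dtype='float64')
      log_probs = F.log_softmax(logits, axis=-1)           # float64 input is accepted after this change
      labels = paddle.to_tensor([[1, 2], [2, 3]], dtype='int32')
      input_lengths = paddle.to_tensor([T, T], dtype='int64')
      label_lengths = paddle.to_tensor([2, 2], dtype='int64')
      loss = F.ctc_loss(log_probs, labels, input_lengths, label_lengths, blank=0, reduction='mean')
      ```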
  34. 21 Sep 2020 · 1 commit
    • Quant op dev (#25932) · 02606d45
      Committed by huangxu96
      * Finished ChannelWiseQuantDequantAbsMaxOp and Passed unittests.
      
      * Finished channel-wise quantize strategy in imperative quantization.
      
      * Added Cuda code of ChannelWiseQuantDequantMaxAbsOP
      Add Cuda code of ChannelWiseQuantDequantMaxAbsOp
      
      * Add quant_axis for channel_wise quant.
      
      * fixed a bug in unittests that did not trigger the axis = 1 case and so failed to meet the coverage rate requirement.
      
      * Added some assert information and fixed some coding style mistakes.
      02606d45
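      The channel-wise abs-max strategy behind ChannelWiseQuantDequantAbsMaxOp can be summarised with a small numpy sketch (names and the epsilon guard are illustrative, not the op's exact attributes):

      ```python
      import numpy as np

      def channel_wise_quant_dequant(w, quant_axis=0, bits=8):
          """Fake-quantize w with one abs-max scale per channel along quant_axis."""
          qmax = 2 ** (bits - 1) - 1                                    # 127 for int8
          reduce_axes = tuple(i for i in range(w.ndim) if i != quant_axis)
          scale = np.abs(w).max(axis=reduce_axes, keepdims=True)        # per-channel scale
          scale = np.maximum(scale, 1e-8)                               # guard against all-zero channels
          q = np.clip(np.round(w / scale * qmax), -qmax, qmax)          # quantize
          return q * scale / qmax                                       # dequantize back to float

      w = np.random.randn(4, 3, 3, 3).astype(np.float32)                # e.g. a conv weight, channels on axis 0
      w_qdq = channel_wise_quant_dequant(w, quant_axis=0)               # quant_axis=1 covers the other layout
      ```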
  35. 14 Sep 2020 · 1 commit
    • Update amp_check_finite_and_scale_op and add an updating_loss_scaling op for static graph amp training. (#26240) · d708b210
      Committed by Zhen Wang
      
      * update amp_check_finite_and_scale_op for static_amp.
      
      * use amp_check_finite_and_scale in static graph amp.
      
      * set grads to zero when they contain infinite values (for the amp_check_finite_and_scale op).
      
      * add update_loss_scaling op in cpp.
      
      * add update_loss_scaling_op unit test.
      
      * update the doc of the check_finite_and_unscale op
      
      * Update the process of skipping the gradient update when the gradients contain infinite values.
      
      * update the way to zero grads.
      
      * update test_update_loss_scaling_op.py
      
      * add log info when find infinite grads.
      
      * add the unit test for UpdateLossScaling Layer.
      d708b210
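      The dynamic loss-scaling rule that update_loss_scaling implements can be sketched in a few lines of Python (attribute names such as incr_every_n_steps are approximations of the op's actual attributes):

      ```python
      def update_loss_scaling(found_inf, scale, good_steps,
                              incr_every_n_steps=1000, incr_ratio=2.0, decr_ratio=0.5):
          """Return the new loss scale and good-step counter after one training step."""
          if found_inf:
              # the op also zeroes the gradients so the skipped step cannot corrupt the next update
              return max(scale * decr_ratio, 1.0), 0
          good_steps += 1
          if good_steps >= incr_every_n_steps:
              return scale * incr_ratio, 0
          return scale, good_steps

      scale, good_steps = 32768.0, 0
      scale, good_steps = update_loss_scaling(found_inf=False, scale=scale, good_steps=good_steps)
      ```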
  36. 08 Sep 2020 · 1 commit