1. 24 Nov 2022, 1 commit
    • [Phi Support CuDNN] Support ALL CuDNN (#47865) · 1623f1b4
      Committed by HongyuJia (a dispatch sketch follows this entry)
      * support default use_gpudnn=True
      
      * fully support cudnn in phi
      
      * add header file
      
      * add white_list, verify accuracy
      
      * phi support all cudnn
      
      * opt affine_grad
      
      * try different arches of pretrained_model
      
      * try different arches of pretrained_model
      
      * add debug string
      
      * debug eager_method
      
      * add debug string, pass all local ctest
      
      * polish all debug code
      
      * delete use_cudnn-related code in autogen
      
      * fix depthwise_conv2d
      
      * Share all other members of Tensor except use_cudnn
      
      * polish code according to review comments
      
      * polish code according to review comments, fix bug
      
      * polish code according to review comments, optimize performance
      
      * polish code according to review comments, fix pooling.py
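The gist of this PR is that phi kernels are registered per backend, and an op resolves to its cuDNN (GPUDNN) variant when the tensor asks for gpudnn. The sketch below is a hypothetical, pure-Python illustration of that dispatch idea only; the names KERNELS and dispatch are invented and are not part of the phi registry.

```python
# Hypothetical illustration of backend-keyed kernel dispatch; not phi code.
# A kernel is registered per (op, backend); the GPUDNN entry is preferred
# whenever the caller asks for gpudnn and such a kernel exists.
KERNELS = {
    ("conv2d", "GPU"): lambda x: f"plain GPU conv2d({x})",
    ("conv2d", "GPUDNN"): lambda x: f"cuDNN conv2d({x})",
}

def dispatch(op, x, use_gpudnn=True):
    backend = "GPUDNN" if use_gpudnn and (op, "GPUDNN") in KERNELS else "GPU"
    return KERNELS[(op, backend)](x)

print(dispatch("conv2d", "x"))                    # cuDNN conv2d(x)
print(dispatch("conv2d", "x", use_gpudnn=False))  # plain GPU conv2d(x)
```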
  2. 17 Nov 2022, 2 commits
  3. 16 Nov 2022, 1 commit
  4. 11 Nov 2022, 1 commit
  5. 10 Nov 2022, 1 commit
  6. 09 Nov 2022, 1 commit
  7. 04 Nov 2022, 1 commit
  8. 02 Nov 2022, 2 commits
  9. 01 Nov 2022, 3 commits
  10. 31 Oct 2022, 1 commit
  11. 28 Oct 2022, 2 commits
  12. 24 Oct 2022, 1 commit
  13. 19 Oct 2022, 1 commit
  14. 12 Oct 2022, 1 commit
  15. 10 Oct 2022, 1 commit
    • [PHI]Add RNN yaml (#46812) · ab60fd8b
      Committed by YuanRisheng (a usage sketch follows this entry)
      * add yaml entry for rnn and rnn_grad, move infershape function for rnn_grad to phi infer meta
      
      * WIP: move rnn kernel to phi
      
      * Change the code generation to avoid converting from an initializer list to a tuple of heterogeneous types.
      This is only triggered when an API has intermediate outputs and those outputs are of heterogeneous types.
      
      * fix the bug where, when no tensor in a vector of tensors requires gradient, the conversion from InferShapeContext to InferMetaContext (a.k.a. BuildInferMetaContext) produces erroneous results
      
      * fix ci bugs
      
      * fix ci bugs
      
      * fix ci bugs
      
      * modify code according to review comments
      Co-authored-by: chenfeiyu <chenfeiyu@baidu.com>
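For context, the rnn kernel moved here backs Paddle's high-level recurrent layers such as paddle.nn.LSTM. Below is a minimal dygraph usage sketch, assuming Paddle 2.x semantics; after this change such calls route through the generated phi API rather than the legacy operator path.

```python
import paddle

# LSTM over a [batch, seq_len, input_size] batch; the sizes here are arbitrary.
rnn = paddle.nn.LSTM(input_size=16, hidden_size=32, num_layers=2)
x = paddle.randn([4, 23, 16])
prev_h = paddle.randn([2, 4, 32])   # [num_layers, batch, hidden_size]
prev_c = paddle.randn([2, 4, 32])

y, (h, c) = rnn(x, (prev_h, prev_c))
print(y.shape)   # [4, 23, 32]
```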
  16. 28 Sep 2022, 1 commit
  17. 23 Sep 2022, 1 commit
  18. 22 Sep 2022, 1 commit
  19. 20 Sep 2022, 2 commits
  20. 19 Sep 2022, 2 commits
  21. 16 Sep 2022, 1 commit
  22. 14 Sep 2022, 1 commit
  23. 09 Sep 2022, 1 commit
  24. 07 Sep 2022, 1 commit
  25. 06 Sep 2022, 1 commit
  26. 05 Sep 2022, 3 commits
  27. 02 Sep 2022, 1 commit
  28. 31 Aug 2022, 3 commits
    • [OpAttr]output_size of unpool support Tensor type (#45543) · 236ac0d0
      Committed by Aurelius84 (a usage sketch follows this entry)
      * [OpAttr]output_size of unpool support Tensor type
      
      * fix coverage
      
      * fix contain_var
      
      * fix coverage
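A minimal sketch of what this change enables, assuming the public entry point is paddle.nn.functional.max_unpool2d and that output_size may now be given as a Tensor instead of a Python list/tuple:

```python
import paddle
import paddle.nn.functional as F

x = paddle.rand([1, 1, 6, 6])
pooled, indices = F.max_pool2d(x, kernel_size=2, stride=2, return_mask=True)

# Previously output_size had to be a Python list/tuple; passing a Tensor like
# the one below is the behavior this PR adds (assumption).
out_size = paddle.to_tensor([6, 6], dtype="int32")
unpooled = F.max_unpool2d(pooled, indices, kernel_size=2, stride=2,
                          output_size=out_size)
print(unpooled.shape)   # [1, 1, 6, 6]
```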
    • Fix split api bug (#45396) · 4a25b60d
      Committed by Charles-hit (a usage sketch follows this entry)
      * fix split bug
      
      * solve function redefine
      
      * fix fluid.layers.split and add unit test
      
      * delete splitInferMeta registration in unary.cc
      
      * modify test_split_op GPU unit test
      
      * modify test_split_op GPU unit test place param
      
      * refactor split op and fix infershape bugs
      
      * add parentheses around && and || expressions
      
      * fix split C++ unit test
      
      * fix split infershape
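The API touched here is paddle.split. A short usage sketch of the behavior the fix preserves (section sizes may include a single -1 that is inferred from the axis length):

```python
import paddle

x = paddle.rand([3, 9, 5])
# Split axis 1 (length 9) into widths 2, 3 and an inferred remainder (-1 -> 4).
out0, out1, out2 = paddle.split(x, num_or_sections=[2, 3, -1], axis=1)
print(out0.shape, out1.shape, out2.shape)   # [3, 2, 5] [3, 3, 5] [3, 4, 5]
```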
    • Add index add API (#45176) · 45171911
      Committed by Li Min (a usage sketch follows this entry)
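A minimal usage sketch of the new API, assuming the public signature is paddle.index_add(x, index, axis, value) as documented for Paddle 2.4:

```python
import paddle

x = paddle.ones([3, 3])
index = paddle.to_tensor([0, 2], dtype="int64")
value = paddle.full([2, 3], 2.0)

# Adds value[i] to x[index[i]] along axis 0; row 1 is left untouched.
out = paddle.index_add(x, index, axis=0, value=value)
print(out.numpy())
# [[3. 3. 3.]
#  [1. 1. 1.]
#  [3. 3. 3.]]
```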
  29. 30 Aug 2022, 1 commit
    • [phi] Transfer coalesce_tensor to phi (#45478) · cf9d651b
      Committed by HongyuJia (a conceptual sketch follows this entry)
      * add coalesce_tensor kernel
      
      * polish coalesce_tensor kernel
      
      * add sig and InferMeta
      
      * add testcase
      
      * add legacy_api.yaml
      
      * fix infermeta
      
      * fix yaml
      
      * fix kernel implementation
      
      * add compile dependency of phi/kernels
      
      * fix MetaConfig
      
      * add python api
      
      * add and fix testcase
      
      * rnn.py add import
      
      * change _C_ops.coalesce_tensor
      
      * remove useless comments
      
      * add SetBackend
      
      * restore XPU kernel temporarily
      
      * fix code according to PR comments
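coalesce_tensor fuses a list of tensors into one contiguous buffer (used, for example, for fused parameter or gradient storage). The sketch below is a conceptual, pure-Python illustration of that idea built from public Paddle ops; the helper name coalesce is hypothetical and this is not the phi kernel itself.

```python
import paddle

def coalesce(tensors):
    """Concatenate tensors into one flat buffer and return slice views of it."""
    flat = paddle.concat([t.flatten() for t in tensors])
    views, offset = [], 0
    for t in tensors:
        n = 1
        for d in t.shape:
            n *= d
        views.append(flat[offset:offset + n].reshape(t.shape))
        offset += n
    return flat, views

params = [paddle.rand([2, 3]), paddle.rand([4])]
fused, views = coalesce(params)
print(fused.shape)      # [10]
print(views[0].shape)   # [2, 3]
```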