  1. 09 Mar 2023, 1 commit
    • [prim] add elementwise_pow backward (#51230) · d9de3ef6
      Committed by wangzhen38
      * [cinn] add elementwise_pow backward
      
      * [cinn] update unittest
      
      * [cinn] update by comments
      
      * [cinn] for ci
      
      * [cinn] for ci
      
      * [cinn] for ci
      
      * [cinn] for ci
      
      * [cinn] for ci
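      For context, a minimal NumPy sketch of the rule this commit adds (function name and shapes are illustrative, not Paddle's composite API): for z = x**y, dz/dx = y * x**(y-1) and dz/dy = x**y * ln(x).

```python
# Hedged sketch of an elementwise_pow backward rule, checked numerically.
import numpy as np

def elementwise_pow_grad(x, y, out_grad):
    # z = x ** y  =>  dz/dx = y * x**(y - 1),  dz/dy = x**y * ln(x)
    x_grad = out_grad * y * np.power(x, y - 1)
    y_grad = out_grad * np.power(x, y) * np.log(x)
    return x_grad, y_grad

x = np.array([1.5, 2.0, 3.0])
y = np.array([2.0, 3.0, 0.5])
gx, gy = elementwise_pow_grad(x, y, np.ones_like(x))

# Finite-difference check of dz/dx.
eps = 1e-6
numeric_gx = (np.power(x + eps, y) - np.power(x - eps, y)) / (2 * eps)
assert np.allclose(gx, numeric_gx, atol=1e-4)
```
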
  2. 02 Mar 2023, 1 commit
    • Add concat grad cinn (#50972) · a4689c90
      Committed by wangzhen38
      * [cinn] concat_grad
      
      * [cinn] concat_grad
      
      * [cinn] concat_grad build success
      
      * [Add PGLBOX] fix unittest

      * [Add PGLBOX] fix unittest
      
      * [Add PGLBOX] fix codestyle
      
      * [cinn] update by comments
      
      * [cinn] update by comment
      
      * [cinn] add axis check
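      The rule being wired up here is simple to state: the gradient of concat is a split of out_grad along the concat axis into pieces matching the input sizes (which is also why an axis check matters). A rough NumPy illustration, with names chosen for this sketch:

```python
# Hedged sketch: concat backward = split out_grad along the same axis.
import numpy as np

def concat_grad(inputs, out_grad, axis=0):
    sizes = [t.shape[axis] for t in inputs]
    split_points = np.cumsum(sizes)[:-1]
    return np.split(out_grad, split_points, axis=axis)

a = np.random.rand(2, 3)
b = np.random.rand(4, 3)
out = np.concatenate([a, b], axis=0)
ga, gb = concat_grad([a, b], np.ones_like(out), axis=0)
assert ga.shape == a.shape and gb.shape == b.shape
```
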
  3. 28 Feb 2023, 3 commits
    • 【prim】Matmul double grad composite api (#50452) · a0c473f4
      Committed by xiaoguoguo626807
      * modify name
      
      * merge develop
      
      * original code
      
      * build modify
      
      * success 2*2
      
      * fused dim=1 failed
      
      * success
      
      * modify static
      
      * success for static except dim=1
      
      * delete log
      
      * tmp modify
      
      * success
      
      * success
      
      * add fp1664
      
      * delete fp16 cpu test
      
      * stop windows test
      
      * review modify
      
      * modify tanh test
      
      * modify tanh
      
      * fix_conflict

      * modify static prim
      
      * fix_conflict
      
      * Update test_static_prim.cc
      
      * update
      
      * bug fix
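      As background, a hedged NumPy sketch of what a composite matmul double grad can look like in the plain 2-D, non-transposed case; the real rule also has to handle transposes, broadcasting, and batched dims, and the names below are illustrative:

```python
# Hedged sketch of matmul double grad (2-D, no transpose flags).
import numpy as np

def matmul_double_grad(x, y, grad_out, grad_x_grad, grad_y_grad):
    # First-order rule: grad_x = grad_out @ y.T, grad_y = x.T @ grad_out.
    # Differentiating it again w.r.t. the forward tangents gives:
    grad_out_grad = grad_x_grad @ y + x @ grad_y_grad
    x_grad = grad_out @ grad_y_grad.T
    y_grad = grad_x_grad.T @ grad_out
    return x_grad, y_grad, grad_out_grad

x, y = np.random.rand(2, 3), np.random.rand(3, 4)
g = np.ones((2, 4))
ddx, ddy = np.random.rand(2, 3), np.random.rand(3, 4)
xg, yg, gg = matmul_double_grad(x, y, g, ddx, ddy)
assert xg.shape == x.shape and yg.shape == y.shape and gg.shape == g.shape
```
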
    • add cumsum prim backward (#50565) · ca2b6095
      Committed by GGBond8488
      * add cumsum prim backward
      
      * skip axis=None test case
      
      * fix op generate error
      
      * fix static test error
      
      * remove unused code
      
      * fix static test error
      
      * skip cpu float16 test case
      
      * skip eager cpu cumsum float16 test case
      
      * add cinn test
      
      * reshape flatten out
      
      * Disable cinn single test
      
      * remove cinn test
      
      * reformat todo
      
      * add prim in cumsum op test
      
      * remove old test
      
      * fix typo

      * fix typo

      * fix typo
      
      * pass axis=None test case
      
      * remove forward prim test
      
      * remove same name axis
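      The decomposition behind this backward is a reversed cumulative sum of out_grad along the same axis (the axis=None case additionally flattens and reshapes). A small NumPy sketch, with illustrative names:

```python
# Hedged sketch: cumsum backward = reversed cumsum of out_grad.
import numpy as np

def cumsum_grad(out_grad, axis=-1):
    return np.flip(np.cumsum(np.flip(out_grad, axis=axis), axis=axis), axis=axis)

x = np.random.rand(3, 4)
g = np.ones_like(x)
# Each x[i, j] contributes to every cumsum output at or after j, so with an
# all-ones out_grad the gradient counts the remaining positions.
expected = np.arange(x.shape[1], 0, -1, dtype=float)[None, :].repeat(3, axis=0)
assert np.allclose(cumsum_grad(g, axis=1), expected)
```
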
    • 【Prim】Reshape, transpose, cast vjp (#50778) · ab1b6303
      Committed by Jiabin Yang
      * support transpose and reshape
      
      * support reshape, transpose, cast vjp
      
      * merge develop
      
      * recover unused file
      
      * remove prim base
      
      * support problem
      
      * remove additional status setting

      * remove additional status setting
      
      * fix ut
      
      * fix ut
      
      * fix ut
      
      * fix no grad branch
      
      * add more test
      
      * disable fp16 in cpu
      
      * fix test
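      The three VJP rules added here are all shape and dtype bookkeeping: reshape's gradient reshapes back to the input shape, transpose's gradient applies the inverse permutation, and cast's gradient casts back to the input dtype. A hedged NumPy sketch (names are illustrative):

```python
# Hedged sketches of the reshape, transpose, and cast VJP rules.
import numpy as np

def reshape_grad(x_shape, out_grad):
    return out_grad.reshape(x_shape)

def transpose_grad(perm, out_grad):
    inverse_perm = np.argsort(perm)
    return out_grad.transpose(inverse_perm)

def cast_grad(x_dtype, out_grad):
    return out_grad.astype(x_dtype)

x = np.random.rand(2, 3, 4).astype(np.float32)
g = np.ones((4, 2, 3))  # gradient of x.transpose([2, 0, 1])
assert transpose_grad([2, 0, 1], g).shape == x.shape
assert reshape_grad(x.shape, np.ones(24)).shape == x.shape
assert cast_grad(np.float32, np.ones(3, dtype=np.float64)).dtype == np.float32
```
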
  4. 25 Feb 2023, 1 commit
  5. 24 Feb 2023, 1 commit
    • 【prim】Slice grad (#50771) · f6dea800
      Committed by xiaoguoguo626807
      * support prim test in OpTest
      
      * fix cmake
      
      * fix op test
      
      * fix test_input_spec
      
      * disable cinn in reduce_sum unit test
      
      * add bfloat16 dtype for sum
      
      * add approve rules
      
      * polish code
      
      * add clear jit program function
      
      * convert grad out from tensor to numpy
      
      * remove unnecessary code
      
      * add only_prim flag
      
      * fix flag
      
      * fix op test
      
      * add attr
      
      * fix optest comp inplace error
      
      * fix op test
      
      * fix op test with guard
      
      * add initialization of check_comp flag
      
      * fix comp inplace error in op test
      
      * rename check_comp with check_prim and add bfloat16 dtype convert
      
      * rename comp_op_type to prim_op_type
      
      * rename comp to prim
      
      * remove useless code
      
      * skip ci check for only prim
      
      * add no_grad_vars and grad_outputs in prim test
      
      * fix var_dict
      
      * fix op test for only_prim
      
      * fix dy2static bugs
      
      * polish some code
      
      * temp
      
      * modify op test
      
      * except cinn test
      
      * modify bfp16
      
      * modify pad grad
      
      * add pad_grad dtype
      
      * start cinn part
      
      ---------
      Co-authored-by: Charles-hit <wanghao107@baidu.com>
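      Conceptually, slice's gradient is out_grad padded with zeros back to the input shape, which is why pad_grad and its dtype also show up in the sub-commits. A minimal NumPy sketch for the simple contiguous-slice case (the function name and starts/ends signature are illustrative):

```python
# Hedged sketch: slice backward = zero-pad out_grad back to the input shape.
import numpy as np

def slice_grad(x_shape, starts, ends, out_grad):
    pad_width = [(s, dim - e) for dim, s, e in zip(x_shape, starts, ends)]
    return np.pad(out_grad, pad_width, mode="constant")

x = np.random.rand(4, 5)
g = np.ones((2, 3))  # gradient of x[1:3, 1:4]
gx = slice_grad(x.shape, starts=[1, 1], ends=[3, 4], out_grad=g)
assert gx.shape == x.shape and gx.sum() == g.size
```
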
  6. 23 Feb 2023, 2 commits
    • Support 'complex promote' in yaml (#50611) · 91a3d159
      Committed by HappyHeavyRain
      * support 'complex promote' in yaml
      
      * change the complex_promote
      
      * change 'kron' in math.py
      
      * change 'kron' comment in python
      
      * change kron comment in python
      
      * change kron comment in python
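      "Complex promote" here means that a binary op such as kron promotes both operands to a common complex dtype when either one is complex. A hedged sketch of the idea using NumPy's promotion rules (the helper name is made up for illustration):

```python
# Hedged sketch of complex type promotion for a binary op like kron.
import numpy as np

def promote_complex(x, y):
    common = np.result_type(x.dtype, y.dtype)
    if np.issubdtype(common, np.complexfloating):
        return x.astype(common), y.astype(common)
    return x, y

a = np.array([1.0, 2.0], dtype=np.float32)
b = np.array([1 + 2j, 3 - 1j], dtype=np.complex64)
pa, pb = promote_complex(a, b)
assert pa.dtype == pb.dtype == np.complex64
assert np.allclose(np.kron(pa, pb), np.kron(a, b))
```
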
    • 【Prim】Enhance gather vjp (#50786) · dca3a099
      Committed by Jiabin Yang
      * tmp gather vjp
      
      * support gather
      
      * remove useless code
      
      * fix compiling error
      
      * fix ut
      
      * add eager test
      
      * add eager test
      
      * add seed
      
      * fix cpu error
      
      * fix transpose op compat
      
      * remove tensor index case
      
      * fix prim_cinn
      
      * fix ut
      
      * add gather composite
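      The gather VJP is a scatter-add: out_grad is accumulated back into a zero tensor at the gathered indices, so rows gathered more than once receive summed gradients. A rough NumPy sketch for the axis=0 case (names are illustrative):

```python
# Hedged sketch: gather backward = scatter-add of out_grad at the indices.
import numpy as np

def gather_grad(x_shape, index, out_grad):
    x_grad = np.zeros(x_shape, dtype=out_grad.dtype)
    np.add.at(x_grad, index, out_grad)  # axis=0 case for brevity
    return x_grad

x = np.random.rand(5, 3)
index = np.array([0, 2, 2])
out = x[index]                      # forward gather along axis 0
gx = gather_grad(x.shape, index, np.ones_like(out))
assert gx[2].tolist() == [2.0, 2.0, 2.0]   # row 2 was gathered twice
```
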
  7. 21 Feb 2023, 1 commit
    • Support bw invoke fw (#50260) · d8845735
      Committed by HappyHeavyRain
      * support bw invoke fw
      
      * fix scale in static_backward.yaml
      
      * fix the bug in tensorrt/convert
      
      * move 'scale','sign' into ops.yaml
      
      * add scale_grad of scale in op_compat.yaml
      
      * change generated_static_op in CMakeLists.txt
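      "bw invoke fw" means a backward op can be defined as a call to an existing forward op instead of a dedicated grad kernel. scale is the canonical case: for scale(x, s, bias), the gradient w.r.t. x is s * out_grad, i.e. scale(out_grad, s, bias=0). A hedged NumPy illustration (function names belong to this sketch, not to the yaml syntax):

```python
# Hedged sketch: the backward of scale reuses the scale forward op.
import numpy as np

def scale(x, s, bias=0.0):
    return s * x + bias

def scale_grad(out_grad, s):
    return scale(out_grad, s, bias=0.0)   # backward invokes the forward op

x = np.random.rand(4)
s = 2.5
eps = 1e-6
numeric = (scale(x + eps, s) - scale(x - eps, s)) / (2 * eps)
assert np.allclose(scale_grad(np.ones_like(x), s), numeric, atol=1e-4)
```
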
  8. 13 Feb 2023, 1 commit
  9. 03 Feb 2023, 1 commit
  10. 20 Jan 2023, 1 commit
  11. 18 Jan 2023, 1 commit
  12. 17 Jan 2023, 1 commit
    • 【Prim】Add multiply,expand,div vjp rules (#49831) · 39c6765a
      Committed by Xiaoxu Chen
      * support elementwise base func
      
      * fix compiling error and add test
      
      * support vjp for div using comp
      
      * remove additional change
      
      * fix dy2st error with magic num
      
      * fix dy magic num
      
      * another magic
      
      * another magic
      
      * another magic
      
      * add skip rename strategy
      
      * support add vjp
      
      * support add with new axis cal
      
      * support sub vjp
      
      * [prim] add multiply vjp rules
      
      * [prim] add multiply vjp rules
      
      * [prim] fix no infershape with composite in _append_backward_ops
      
      * [prim] add expand vjp rule
      
      * [prim] add exp vjp rule
      
      * uncomment infer shape for reshape/sum static prim api
      
      * [prim] fix tanh nullptr error
      
      * remove some print message
      
      * fix magic number in run_program relative tests @JiaBinYang
      
      * [prim] add expand,multiply,exp vjp rules
      
      * fix only support single direction reduce error
      
      * infer reduce dims using out dims
      Co-authored-by: JiabinYang <360788950@qq.com>
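      Two things recur in these sub-commits: the elementwise VJP formulas themselves, and the broadcast-reduction step ("infer reduce dims using out dims") needed when an input was expanded. A hedged NumPy sketch of both, with illustrative names:

```python
# Hedged sketch of multiply/divide VJPs plus the broadcast-reduction step.
import numpy as np

def reduce_as(grad, shape):
    # Sum over broadcasted dimensions so grad matches the input shape.
    # On its own, this is also what the expand VJP does.
    while grad.ndim > len(shape):
        grad = grad.sum(axis=0)
    for axis, dim in enumerate(shape):
        if dim == 1 and grad.shape[axis] != 1:
            grad = grad.sum(axis=axis, keepdims=True)
    return grad

def multiply_grad(x, y, g):
    return reduce_as(g * y, x.shape), reduce_as(g * x, y.shape)

def divide_grad(x, y, g):
    return reduce_as(g / y, x.shape), reduce_as(-g * x / (y * y), y.shape)

x = np.random.rand(3, 4)
y = np.random.rand(1, 4)          # broadcast along the first axis
gx, gy = multiply_grad(x, y, np.ones((3, 4)))
assert gx.shape == x.shape and gy.shape == y.shape
```
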
  13. 16 Jan 2023, 3 commits
  14. 13 Jan 2023, 4 commits
  15. 09 Jan 2023, 1 commit
  16. 05 Jan 2023, 2 commits
  17. 30 Dec 2022, 1 commit
  18. 28 Dec 2022, 1 commit
  19. 20 Dec 2022, 1 commit
    • Generate static graph code of some ops (#49092) · 11d7026b
      Committed by HappyHeavyRain
      * generate static graph code of some ops
      
      * change the default value of 'num' of 'unstack'
      
      * revert the pow
      
      * fix the 'real' 'imag' op error because of 'complex'
      
      * fix the code according to review
  20. 13 Dec 2022, 1 commit
  21. 12 Dec 2022, 1 commit
  22. 09 Dec 2022, 1 commit
  23. 05 Dec 2022, 1 commit
  24. 02 Dec 2022, 1 commit
    • [Eager] Optimize Grad by prune useless branch (#47827) · d1e93be1
      Committed by Jiabin Yang
      * [Eager] Fix paddle.grad interface
      
      * [Eager] Support minimum SubGraph for GeneralGrad
      
      * Add needed_nodes to prune grad graph more thoroughly
      
      * [Eager] Add grad_node_trans_mapping_ to record which grad_node has been transformed to AccumulationNode
      
      * [Eager] Fix paddle.grad interface
      
      * Polish code
      
      * remove potential_stop_node
      
      * Add endding_nodes to enhance genSugraph logic
      
      * clear endding_nodes_
      
      * polish code
      
      * rename endding_nodes to endding_nades_
      
      * Refactor grad interface
      
      * Add register_hook case to fix coverage-ci
      
      * Fix code format
      
      * Refactor general_grad
      
      * Add more code comments
      
      * call clear directly to release GradSlotMeta
      
      * fix a mistake
      
      * fix matmul/ multiply kernel logic and optional input in yaml, fill zeros logic and so on.
      
      * fix batch_norm_double_grad yaml optional config
      
      * fix tanh_triple_grad yaml and kernels
      
      * fix MultiplyTripleGradKernel optional logic
      
      * fix merge mistake
      
      * fix compile error
      
      * remove legacy attr for bn
      
      * polish code
      
      * fix some kernel
      
      * merge develop
      
      * fix error
      
      * remove log
      
      * fix kernel with full like
      
      * hide value log behind
      
      * hide value log behind
      
      * fix matmul_triple grad
      Co-authored-by: Weilong Wu <veyron_wu@163.com>
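      A simplified sketch of the pruning idea behind "minimum SubGraph for GeneralGrad": paddle.grad only needs grad nodes that are reachable from the given outputs and that can also reach one of the requested inputs; everything else is a useless branch and can be skipped. The graph representation below is invented for illustration:

```python
# Hedged sketch: prune a backward graph to the output->input subgraph.
def prune_grad_graph(edges, output_nodes, input_nodes):
    # edges: {node: [next_nodes]} in the backward graph.
    def reachable(starts, adjacency):
        seen, stack = set(), list(starts)
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(adjacency.get(node, []))
        return seen

    reverse = {}
    for src, dsts in edges.items():
        for dst in dsts:
            reverse.setdefault(dst, []).append(src)

    from_outputs = reachable(output_nodes, edges)
    to_inputs = reachable(input_nodes, reverse)
    return from_outputs & to_inputs

edges = {"out": ["a", "b"], "a": ["x"], "b": ["c"]}  # "b" -> "c" never reaches x
kept = prune_grad_graph(edges, {"out"}, {"x"})
assert kept == {"out", "a", "x"}
```
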
  25. 01 Dec 2022, 1 commit
  26. 30 Nov 2022, 1 commit
  27. 29 Nov 2022, 1 commit
  28. 28 Nov 2022, 2 commits
  29. 25 Nov 2022, 1 commit
  30. 24 Nov 2022, 1 commit
    • [Phi Support CuDNN] Support ALL CuDNN (#47865) · 1623f1b4
      Committed by HongyuJia
      * support default use_gpudnn=True
      
      * fully support cudnn in phi
      
      * add header file
      
      * add white_list, verify accuracy
      
      * phi support all cudnn
      
      * opt affine_grad
      
      * try different arches of pretrained_model
      
      * try different arches of pretrained_model
      
      * add debug string
      
      * debug eager_method
      
      * add debug string, pass all local ctest
      
      * polish all debug code
      
      * delete use_cudnn relevant code autogen
      
      * fix depthwise_conv2d
      
      * Share all other members of Tensor except use_cudnn
      
      * polish codes according to review opinion
      
      * polish codes according to review opinion, fix bug
      
      * polish codes according to review opinion, opt performance
      
      * polish codes according to review opinion, fix pooling.py
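      A loose sketch of the dispatch idea: with a use_gpudnn flag carried alongside the kernel key, the framework can prefer a cuDNN-backed kernel when one is registered and fall back to the plain GPU kernel otherwise. The registry below is made up for illustration and is not PHI's real API:

```python
# Hedged sketch of preferring a cuDNN-backed kernel when available.
KERNELS = {
    ("conv2d", "GPUDNN"): lambda x: f"cudnn_conv2d({x})",
    ("conv2d", "GPU"): lambda x: f"plain_conv2d({x})",
    ("relu", "GPU"): lambda x: f"plain_relu({x})",
}

def select_kernel(op, use_gpudnn=True):
    if use_gpudnn and (op, "GPUDNN") in KERNELS:
        return KERNELS[(op, "GPUDNN")]
    return KERNELS[(op, "GPU")]

assert "cudnn" in select_kernel("conv2d")("x")   # cuDNN kernel exists
assert "plain" in select_kernel("relu")("x")     # falls back to plain GPU
```
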