1. April 13, 2023: 7 commits
2. April 12, 2023: 7 commits
3. April 11, 2023: 16 commits
4. April 10, 2023: 10 commits
    • Autogen segment_pool (#52538) · 1bc00955
      lzydev committed
* autogen segment_pool (see the note on autogen below)
      
* delete legacy_dygraph code for segment_pool
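For context on the "autogen" series in this batch (segment_pool above, plus bilinear_tensor_product, softmax_with_cross_entropy, and logcumsumexp below): the op definition moves out of handwritten C++ into Paddle's YAML-driven operator code generator, so the handwritten OpMaker and registration boilerplate can be deleted (the "rm cc file" bullets). A minimal sketch of the kind of code that goes away; the input/output/attribute names are a guess at segment_pool's schema, not a copy of the deleted file:

```cpp
#include "paddle/fluid/framework/op_registry.h"

namespace paddle {
namespace operators {

// Illustrative shape of a handwritten fluid OpMaker that autogen replaces.
class SegmentPoolOpMaker : public framework::OpProtoAndCheckerMaker {
 public:
  void Make() override {
    AddInput("X", "(Tensor) Input rows to pool.");
    AddInput("SegmentIds", "(Tensor) Segment id of each input row.");
    AddOutput("Out", "(Tensor) Pooled result, one row per segment.");
    AddAttr<std::string>("pooltype", "SUM, MEAN, MIN or MAX")
        .SetDefault("SUM");
    AddComment("Segment pooling operator.");
  }
};

}  // namespace operators
}  // namespace paddle
```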
    • [Hackathon No57] add fp16 & bf16 for flip, fp16 for gaussian (#52380) · 2b0fffc2
      Difer committed
      * add_fp_bf_for_flip_gaussian_random
      
* fix forgotten uint conversion (see the conversion sketch below)
      
* fix some errors

* fix some errors
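The forgotten uint conversion above is about how bf16 test data is represented: bf16 inputs in op tests travel as raw uint16 bit patterns, so float test data has to be converted before use. A minimal standalone sketch of that conversion; the helper name is hypothetical:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Hypothetical helper: float -> bfloat16 bit pattern (round-to-nearest-even),
// stored in a uint16_t, which is how bf16 test inputs are usually carried.
uint16_t FloatToBfloat16Bits(float f) {
  uint32_t bits;
  std::memcpy(&bits, &f, sizeof(bits));  // safe bit-cast
  uint32_t rounding_bias = 0x7FFFu + ((bits >> 16) & 1u);
  return static_cast<uint16_t>((bits + rounding_bias) >> 16);
}

int main() {
  std::printf("%#06x\n", FloatToBfloat16Bits(1.0f));  // prints 0x3f80
}
```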
    • [Opt Performance] Optimize custom operator performance (#52597) · 01247e33
      HongyuJia committed
* [Opt Performance] Optimize custom operator performance: reconstruct the Python API auto-gen, add a cache, and use const inference (see the sketch below)
      
      * opt AutoGradMeta implementation
      
* remove profiler code
      
      * fix unit test
      
      * change year, 2021->2023
      
      * fix int64_t parse bug
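The cache mentioned in the first bullet is the interesting part performance-wise: parse an operator's metadata once, then hand out the cached result by const reference on every later call. A hedged, self-contained sketch of that pattern; the names and the OpSchema shape are illustrative, not Paddle's actual types:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Illustrative stand-in for whatever per-op metadata is expensive to build.
struct OpSchema {
  std::vector<std::string> inputs;
  std::vector<std::string> attrs;
};

// Pretend this walks a registry, parses strings, allocates, etc.
OpSchema ParseSchemaFromRegistry(const std::string& op_name) {
  return OpSchema{{op_name + "_x"}, {"axis"}};
}

// Build once per op name, then return by const reference: callers on the
// hot path pay one hash lookup instead of a re-parse plus a copy.
// (A real version would need a mutex or std::call_once for thread safety.)
const OpSchema& GetCachedSchema(const std::string& op_name) {
  static std::unordered_map<std::string, OpSchema> cache;
  auto it = cache.find(op_name);
  if (it == cache.end()) {
    it = cache.emplace(op_name, ParseSchemaFromRegistry(op_name)).first;
  }
  return it->second;
}
```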
    • Autogen code bilinear_tensor_product (#52690) · 90c3bddf
      gouzil committed
* add autogen code for bilinear_tensor_product
      
* [phi] remove the old .cc file
    • 3ee2b237
    • Autogen softmax_with_cross_entropy (#52515) · 351ccb63
      lzydev committed
      * autogen softmax_with_cross_entropy
      
      * fix error in softmax_with_cross_entropy version
    • [enforce.h Decouple gflags.h] Move gflags.h from enforce.h to enforce.cc (#52573) · 3c0b1795
      HongyuJia committed
* [enforce.h Decouple gflags.h] Move gflags.h from enforce.h to enforce.cc (see the sketch below)
      
      * Add gflags.h for other files
      
      * Add gflags.h for other files
      
      * Add gflags.h for blas_impl.hip.h
      
      * Add gflags.h for miopen_helper.h
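The decoupling above is a standard include-hygiene pattern: a header included almost everywhere (enforce.h) should not re-export a third-party header its interface does not need, so the include moves into the .cc, and files that silently relied on the transitive include add it themselves, which is exactly what the follow-up commits do. A minimal two-file sketch, with an illustrative use of a flag:

```cpp
// ---- enforce.h: interface only; no gflags include leaks to users ----
#pragma once
#include <string>

void EnforceFailed(const std::string& msg);

// ---- enforce.cc: the only translation unit that actually needs gflags ----
#include <stdexcept>
#include "gflags/gflags.h"

DECLARE_int32(call_stack_level);  // flag defined elsewhere in the codebase

void EnforceFailed(const std::string& msg) {
  std::string out = msg;
  if (FLAGS_call_stack_level > 0) {
    out += "\n<...C++ call stack...>";  // sketch: append trace when requested
  }
  throw std::runtime_error(out);
}

// Any other file that used FLAGS_* but only got gflags transitively via
// enforce.h now adds its own #include "gflags/gflags.h" -- that is what
// the "Add gflags.h for other files" commits above are doing.
```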
    • [AMP OP&Test] Add fp16 and bf16 test to activation (#52521) · 6bd5fd75
      Vvsmile committed
* adjust the default tolerance of output and grad
      
* fix a bug in OpTest gradient checking
      
* fix how the default value is set in OpTest, for both forward and backward
      
* add default
      
      * fix test_sum_op
      
      * adjust tolerance
      
      * fix the tolerance of eager
      
      * add bf16 and fp16 to the activation tests
      
* remove some fixes
      
      * fix activation
      
      * fix fp16
      
      * fix gelu
      
      * fix the activation tests
      
* add bfloat16 specialization to SinGrad and CosGrad (see the sketch below)
      
      * fix bugs
      
      * fix bugs
      
      * add unittest
      
      * add skip
      
      * add fp/bf to rrelu/rrelu_grad
      
      * git add rrelu
      
      * fix bugs
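On the bfloat16 specialization for SinGrad and CosGrad: bf16 has too little mantissa for transcendental math, so the usual pattern is a template specialization that upcasts to float, computes the gradient in fp32, and casts back. A self-contained sketch with a toy bf16 type standing in for phi::dtype::bfloat16:

```cpp
#include <cmath>

// Toy stand-in for phi::dtype::bfloat16, just to keep the sketch standalone.
struct bfloat16 {
  float v;  // a real bf16 stores 16 bits; this is illustrative only
  explicit bfloat16(float f) : v(f) {}
  explicit operator float() const { return v; }
};

// Generic functor: for y = sin(x), dx = dy * cos(x).
template <typename T>
struct SinGradFunctor {
  T operator()(T dy, T x) const { return dy * std::cos(x); }
};

// bfloat16 specialization: upcast, compute in fp32, downcast. Without it,
// the generic template would not compile for bf16 (no operator* / cos),
// and a naive bf16 computation would lose precision anyway.
template <>
struct SinGradFunctor<bfloat16> {
  bfloat16 operator()(bfloat16 dy, bfloat16 x) const {
    float r = static_cast<float>(dy) * std::cos(static_cast<float>(x));
    return bfloat16(r);
  }
};
```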
    • [AMP OP&Test] instance_norm fp16 and bf16 support (#52241) · 7c98abd9
      qizhaoaoe committed
* add fp16 and bf16 support for instance_norm (see the registration sketch below)
      
* fix the /= operator, which does not support bf16
      
      * fix instance_norm_grad kernel and unittests.
      
      * fix fp32 unittests.
      
      * fix instance_norm_kernel and unittests.
      
      * fix instance_norm_grad_kernel and unittest threshold.
      
      * add fp16/bf16 for instance_norm_grad_grad op.
      
      * add bf16 dtype check.
      
      * fix conflicts.
      
* fix CPU support for the fp32 op and fix a type in instance_norm_grad_kernel.
      
      * fix type in instance_norm_kernel.
      
* fix bf16 outputs in unittests and refine code.
      
      * fix dx computation.
      
* delete unused params and header includes.
      
      * add fp16/bf16 for static graph.
      
* fix the device condition for the instance_norm op.
      
      * fix instance_norm_grad_grad and bf16 op tests.
      
* fix op_test so that bf16 grads can be compared against fp32.
      
      * remove updates.
      
      * add self-defined grad.
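The end state of an fp16/bf16 enablement like this is usually just a wider kernel registration: the kernel template stays the same and the new dtypes are appended to the registration list (plus the numeric fixes listed above, such as avoiding /= on bf16). A sketch in the shape of phi's registration macro; the exact file layout and any extra registration body may differ from the real instance_norm sources:

```cpp
#include "paddle/phi/common/bfloat16.h"
#include "paddle/phi/common/float16.h"
#include "paddle/phi/core/kernel_registry.h"

// Same kernel template, two more dtypes. For fp16/bf16 kernels the saved
// mean/variance outputs typically stay fp32; that detail is elided here.
PD_REGISTER_KERNEL(instance_norm,
                   GPU,
                   ALL_LAYOUT,
                   phi::InstanceNormKernel,
                   float,
                   double,
                   phi::dtype::float16,
                   phi::dtype::bfloat16) {}
```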
    • add autogen code support for logcumsumexp op (#52682) · 891cf433
      Wang Xin committed