1. 08 May 2023: 5 commits
  2. 06 May 2023: 3 commits
  3. 05 May 2023: 7 commits
  4. 04 May 2023: 5 commits
  5. 30 Apr 2023: 1 commit
  6. 28 Apr 2023: 12 commits
  7. 27 Apr 2023: 7 commits
    • [Dy2St] Get grad names when calling append_backward to fix high-order gradients (#53250) · 2d17df97
      Committed by WangZhen
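      For context, a minimal sketch of the static-graph API this commit hooks into: append_backward returns (parameter, gradient) variable pairs, and their names are what the Dy2St transform records so higher-order gradients can be resolved. The toy network below is illustrative only.

          import paddle

          paddle.enable_static()

          main_prog = paddle.static.Program()
          with paddle.static.program_guard(main_prog):
              x = paddle.static.data(name="x", shape=[None, 4], dtype="float32")
              loss = paddle.mean(paddle.nn.Linear(4, 1)(x))
              # append_backward returns a list of (parameter, gradient)
              # pairs; their names identify the grad vars later on.
              for param, grad in paddle.static.append_backward(loss):
                  print(param.name, "->", grad.name)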
    • db30aa1d
    • [PaddlePaddle Hackathon 4]: Support float16 data type for the maxout operator (#50976) · 8bfd978f
      Committed by NetPunk
      * support fp16 for maxout op
      * format code
      * change api
      * add test for static float16
      * format code
      * formatting code
      * atol alignment
      * experiment-1
      * experiment-2
      * experiment-3
      * format code
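      A minimal usage sketch of what this PR enables; the float16 path is assumed to need a CUDA device, since the PR's fp16 tests run on GPU.

          import paddle
          import paddle.nn.functional as F

          # maxout reduces the channel axis by taking an elementwise max
          # over groups of channels; with this PR the kernel also accepts
          # float16 input.
          x = paddle.randn([2, 8, 4, 4]).astype("float16")  # NCHW
          y = F.maxout(x, groups=2, axis=1)                 # -> [2, 4, 4, 4]
          print(y.shape, y.dtype)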
    • Move fused feedforward (#53166) · 25b4ba7f
      Committed by Sonder
      * move the fused_feedforward Compute function to phi
      * add register info
      * remove maxfunctor
      * move fused_feedforward to phi
      * remove sig file
      * remove fluid include
      * add include
      * add include
      * add sig file
      * add output register info
      * fix sig file
      * Update fused_feedforward_sig.cc
      * fix grad kernel
      * update output register info
      * fix
      * open fused_feedforward static build
      * add optional and fix code style
      * fix output info for fused attention
      * add optional param
      * merge
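      The kernel move itself is invisible at the Python level; for reference, a sketch of the layer that dispatches to this kernel, assuming paddle.incubate.nn.FusedFeedForward and a CUDA device (the fused kernel is GPU-only).

          import paddle
          from paddle.incubate.nn import FusedFeedForward

          # Transformer FFN (linear -> activation -> dropout -> linear ->
          # residual add -> layer_norm) executed as one fused op, whose
          # Compute function now lives in phi instead of fluid.
          ffn = FusedFeedForward(d_model=128, dim_feedforward=512,
                                 dropout_rate=0.1)
          x = paddle.randn([2, 16, 128])  # [batch, seq_len, d_model]
          print(ffn(x).shape)             # same shape as x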
    • [AMP] support OD level and skip dynamic loss scaling for bf16 (#53289) · 18e9dcdc
      Committed by Zhang Ting
      * support OD level and skip dynamic loss scaling for bf16
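      A sketch of the user-facing behavior, assuming the new 'OD' level is passed to paddle.amp.auto_cast like the existing 'O1'/'O2' levels; skipping dynamic loss scaling for bf16 means no GradScaler is needed, since bfloat16 keeps float32's exponent range.

          import paddle

          model = paddle.nn.Linear(4, 4)
          opt = paddle.optimizer.SGD(parameters=model.parameters())
          x = paddle.randn([8, 4])

          # 'OD' casts only a minimal whitelist of ops to low precision;
          # with bfloat16, no dynamic loss scaler is attached at all.
          with paddle.amp.auto_cast(level='OD', dtype='bfloat16'):
              loss = model(x).mean()
          loss.backward()
          opt.step()
          opt.clear_grad()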
    • [Fix CppExtension Unittest] Change CUDAExtension to CppExtension if necessary (#53352) · 3278dec7
      Committed by HongyuJia
      * [Fix CppExtension Unittest] Change CUDAExtension to CppExtension if necessary
      * Temporarily test cpp_extension under GPU
      * Split mixed_extension unittest
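      The distinction the fix relies on, sketched with paddle's extension-build API (file names here are hypothetical): CppExtension compiles CPU-only sources and never invokes nvcc, so a unit test that exercises no CUDA code should use it.

          # setup.py -- minimal sketch, hypothetical sources
          from paddle.utils.cpp_extension import CppExtension, CUDAExtension, setup

          setup(
              name='custom_relu',
              # CPU-only build; swap in CUDAExtension(sources=[..., '*.cu'])
              # only when the test genuinely needs GPU kernels.
              ext_modules=CppExtension(sources=['custom_relu_op.cc']),
          )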
    • Add jacobian and hessian (#53331) · e8d296ef
      Committed by HydrogenSulfate
      * add jacobian and hessian in paddle.autograd
      * disable unit test 'func_multi_input' due to a bug in the high-order gradient of multiply
      * add dimension checks
      * add support for 0-D tensor
      * change return type from Jacobian to Hessian in hessian function
      * refine Jacobian _flatten function for single xs
      * refine support for 0-D tensor
      * 1. re-enable 'func_multi_input' unit test now that the multiply_grad_kernel bug is fixed
        2. support non-inplace math operations via magic method overriding
      * add unit test for math operations and raise an error when a 0-D tensor is indexed
      * add ndim check on ys and xs according to is_batched, and add one unit test
      * refine docstrings of jacobian and hessian
      * move paddle.incubate.autograd.Jacobian/Hessian to paddle.incubate.autograd.functional.Jacobian/Hessian
      * remove single_input unit test case because its numerical differentiation is wrong
      * remove 3 unit tests whose numerical (reference) results are wrong
      * 1. rename autodiff.py to autograd.py
        2. increase TIMEOUT to 100
      * cancel modification of functional Jacobian/Hessian
      * 1. use tuple as return type instead of list
        2. refine docstrings
      * add more unit test cases to improve coverage
      * remove 2 unit tests of Hessian whose numerical results are wrong
      * remove 1 unit test of Hessian whose numerical result is wrong
      * remove 1 unit test of Hessian whose numerical result is wrong
      * change unit tests to shape checks
      * correct docs and replace incubate API with stable API in _grad
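      A minimal sketch of the added API, assuming the paddle.autograd.jacobian/hessian entry points this PR introduces; per the log above, results are lazy objects that materialize on indexing and come back as tuples when there are multiple xs.

          import paddle

          x = paddle.randn([3])
          x.stop_gradient = False
          y = x * x  # elementwise square

          J = paddle.autograd.jacobian(y, x)        # [3, 3], diagonal 2*x
          H = paddle.autograd.hessian(y.sum(), x)   # [3, 3], diagonal 2
          print(J[:])  # slicing evaluates the lazy Jacobian
          print(H[:])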