1. 16 March 2023, 2 commits
    • [Auto Parallel Performance] Support BF16 Training (#51285) · 9ded5707
      JZ-LIANG committed
      * update env setting
      * update pass logic
      * dist op support bf16
      * backward cast update
      * update setting
      * update backward
      * revert amp pass
      * update fp16 backward logic
      * register c_embedding bf16
      * revert engine
      * add unitest
      * add unitest
      * update unitest
      * update cmake
      * update math
      * update math.py
      * update unitest
      * update unitest
      * revise unitest
      * revise unitest
      * update unitest
      * update unitest
      * update unitest
      (a conceptual sketch of the BF16 mixed-precision pattern follows below)
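The BF16 training support above (#51285) mainly inserts bf16 casts around distributed ops in the forward pass and casts the resulting gradients back for the fp32 parameter update ("backward cast update"). As a rough illustration of that mixed-precision pattern, and not Paddle's actual AMP/auto-parallel pass, here is a minimal NumPy sketch that emulates bfloat16 by truncating the low 16 bits of a float32 and keeps an fp32 master weight:

```python
import numpy as np

def to_bf16(x):
    # Emulate bfloat16 by zeroing the low 16 mantissa bits of float32.
    # Real hardware rounds to nearest-even; plain truncation is enough to
    # show the precision loss a BF16 training pass has to manage.
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

w = np.array([0.1], dtype=np.float32)    # fp32 "master" weight
x = np.array([3.0], dtype=np.float32)    # input sample
lr = np.float32(0.01)

for step in range(3):
    y = to_bf16(w) * to_bf16(x)              # forward runs in (emulated) bf16
    grad = to_bf16(x)                        # dy/dw = x, produced in bf16
    w = w - lr * grad.astype(np.float32)     # cast gradient back, update in fp32
    print(step, y, w)
```

In the real pass, white/black lists presumably decide which distributed ops get the bf16 cast and which stay in fp32; the sketch only shows why the fp32 master copy and the backward cast matter.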
    • Fix nccl_test_op failure on hopper (#51390) · b5fd7fc1
      Shijie committed
      * add sync
      * Fix nccl_op_test
  2. 15 March 2023, 5 commits
    • add assign composite backward op (#51430) · 297182f7
      SylarTiaNII committed
      * add assign composite backward op
      * fix log msg
      * code style
      * fix comp rule
      * replace assign with by_pass
      (a sketch of the pass-through backward rule follows below)
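For reference, the composite ("prim") backward rule registered here for assign is a pass-through: assign copies its input, so the upstream gradient flows back unchanged, which is what the final "replace assign with by_pass" item expresses. A tiny NumPy sketch of that rule follows; the real implementation composes Paddle primitive ops rather than NumPy calls:

```python
import numpy as np

def assign(x):
    # forward: assign is a plain copy
    return np.copy(x)

def assign_grad(out_grad):
    # composite backward: assign is the identity map, so the incoming
    # gradient is passed through unchanged
    return out_grad

x = np.arange(4.0)
out = assign(x)
x_grad = assign_grad(np.ones_like(out))
print(out, x_grad)
```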
    • 【Prim】Custom softmax grad (#51474) · f124c86f
      Jiabin Yang committed
      * [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
      * [CINN]Enhance CacheKey hash logic by considering input dtypes
      * add unittest
      * fix typo
      * fix typo
      * fix map.at
      * fix find
      * fix test
      * fix cinn cache key structure realize
      * using ordered map for attributes
      * add test by review advice
      ---------
      Co-authored-by: jiangcheng <thisjiang@qq.com>
      * [prim] enable dygraph_to_static to support custom_vjp
      * Pr 50885 (#7)
      * [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
      * [CINN]Enhance CacheKey hash logic by considering input dtypes
      * add unittest
      * fix typo
      * fix typo
      * fix map.at
      * fix find
      * fix test
      * fix cinn cache key structure realize
      * using ordered map for attributes
      * add test by review advice
      ---------
      Co-authored-by: jiangcheng <thisjiang@qq.com>
      * [prim] enable dygraph_to_static to support custom_vjp
      * fix code in a dy2static-friendly way.
      * [dystatic] add hooker for prim
      ---------
      Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
      Co-authored-by: jiangcheng <thisjiang@qq.com>
      Co-authored-by: cxxly <chenxx_id@163.com>
      * [prim] enable dygraph_to_static to support custom_vjp
      * fix cast prim and vjp dtype mapping error bug
      * Cxx prim custom vjp (#8)
      * [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
      ---------
      Co-authored-by: jiangcheng <thisjiang@qq.com>
      * [prim] enable dygraph_to_static to support custom_vjp
      * Pr 50885 (#7)
      * [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
      * [CINN]Enhance CacheKey hash logic by considering input dtypes
      ---------
      Co-authored-by: jiangcheng <thisjiang@qq.com>
      * [prim] enable dygraph_to_static to support custom_vjp
      * fix code in a dy2static-friendly way.
      * [dystatic] add hooker for prim
      ---------
      Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
      Co-authored-by: jiangcheng <thisjiang@qq.com>
      Co-authored-by: cxxly <chenxx_id@163.com>
      * [prim] enable dygraph_to_static to support custom_vjp
      * fix cast prim and vjp dtype mapping error bug
      * [dy2static-ci] fix dy2static ci errors.
      ---------
      Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
      Co-authored-by: jiangcheng <thisjiang@qq.com>
      Co-authored-by: cxxly <chenxx_id@163.com>
      * [Prim] enable whitelist and blacklist for custom_vjp
      * support softmax grad
      * remove additional code
      * add test back
      ---------
      Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
      Co-authored-by: jiangcheng <thisjiang@qq.com>
      Co-authored-by: cxxly <chenxx_id@163.com>
      Co-authored-by: xiongkun <807377414@qq.com>
      (a NumPy sketch of the softmax backward rule follows below)
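The headline change of f124c86f is a custom VJP (backward rule) for softmax. The standard softmax backward rule that such a decomposition expresses is dx = out * (dout - sum(dout * out)) along the softmax axis. Below is a NumPy sketch of that rule checked against finite differences; it only illustrates the math and is not Paddle's prim implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def softmax_grad(out, dout, axis=-1):
    # composite rule: dx = out * (dout - sum(dout * out, axis, keepdims=True))
    return out * (dout - (dout * out).sum(axis=axis, keepdims=True))

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 5))
dout = rng.standard_normal((2, 5))

out = softmax(x)
dx = softmax_grad(out, dout)

# finite-difference check of the analytic rule
eps = 1e-6
dx_num = np.zeros_like(x)
for idx in np.ndindex(*x.shape):
    xp, xm = x.copy(), x.copy()
    xp[idx] += eps
    xm[idx] -= eps
    dx_num[idx] = ((softmax(xp) - softmax(xm)) * dout).sum() / (2 * eps)

print(np.allclose(dx, dx_num, atol=1e-5))     # True
```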
    • [PHI] remove operator.h in blas.h (rebase to latest codebase) (#51472) · 427712df
      iSerendipity committed
      * Revert "Revert "【Hackathon No.67】remove operator.h in blas.h (#50989)" (#51467)"
        This reverts commit b9d91531.
      * remove cout
      * add header
      * fix missing header
      * fix refer fluid error
      * fix missing header
      * Update repeat_interleave_grad_kernel_impl.h
        Change to phi style datatype.
      * Update repeat_interleave_grad_kernel_impl.h
        Fix missing header
      * datatype fluid -> phi
      * paddle::experimental -> phi
      * fix reference error
      * fix reference error
      * fix reference error
      * fix errors
      * fix missing FLAGS
      * fix missing headers
      * fix missing headers
      * fix missing headers
      * fix missing headers
      * fix missing header
      * fix missing header
      * fix errors
    • support auto generate for nonzero (#51600) · 3734e89a
      RedContritio committed
    • Move the "GetExpectedKernelType" into "get_expected_kernel_func.cc" (#51453) · f0db1f7e
      HappyHeavyRain committed
      * test_get_kernel
      * add invoke signature
      * change reduce_max
      * change frobenius_norm
      * reset reduce_max according to composite and change reduce_all
      * fix the bug when Scalar(*)
      * fix 'scalar when support_tensor'
      * change code according to review
      * change 'keep_signature' to 'manual_signature' and add some erro info
  3. 14 March 2023, 5 commits
  4. 13 March 2023, 6 commits
  5. 10 March 2023, 6 commits
  6. 09 March 2023, 6 commits
  7. 08 March 2023, 1 commit
  8. 07 March 2023, 2 commits
  9. 06 March 2023, 4 commits
    • convert todos to internal tasks (#51174) · 6b393e45
      Sławomir Siwek committed
    • Add multiprecision for adadelta op (#50131) · a8a2b7f4
      niuliling123 committed
      (a sketch of the multi-precision update pattern follows below)
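Multiprecision support here follows the usual pattern: the trainable parameter may live in float16, but the optimizer keeps a float32 master copy and float32 accumulators so that small updates are not lost to half-precision rounding. A hedged NumPy sketch of one such Adadelta step is below; the rho/epsilon defaults and the learning-rate factor are assumptions for illustration, not the values or code used by the Paddle kernel:

```python
import numpy as np

def adadelta_step_mp(master_w, grad16, acc_g, acc_dx, lr=1.0, rho=0.95, eps=1e-6):
    # Multi-precision Adadelta step (sketch): the fp16 gradient is upcast,
    # all optimizer state stays in fp32, and a fresh fp16 copy of the weight
    # is returned for the next fp16 forward pass.
    g = grad16.astype(np.float32)
    acc_g = rho * acc_g + (1 - rho) * g * g                    # running E[g^2]
    update = -np.sqrt(acc_dx + eps) / np.sqrt(acc_g + eps) * g
    acc_dx = rho * acc_dx + (1 - rho) * update * update        # running E[dx^2]
    master_w = master_w + lr * update                          # fp32 master update
    return master_w, master_w.astype(np.float16), acc_g, acc_dx

w32 = np.full(3, 0.5, dtype=np.float32)          # fp32 master weight
acc_g = np.zeros_like(w32)
acc_dx = np.zeros_like(w32)
for _ in range(2):
    grad16 = np.full(3, 0.01, dtype=np.float16)  # gradient from an fp16 backward
    w32, w16, acc_g, acc_dx = adadelta_step_mp(w32, grad16, acc_g, acc_dx)
print(w32, w16)
```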
    • [phi decoupling] decouple dependency to device_context in phi (Part 1) (#50865) · a1006b2b
      Huang Jiyi committed
      * move DeviceContextPool to phi
      * add EmplaceExternalContextFunc
      * update namespace
      * update cmake
      * fix bugs and create context_pool_impl.h
      * replace platform::is_xxx_place
      * fix bugs
      * update generator
      * fix bugs
      * fix bugs
      * fix bugs
      * fix bugs
      * fix bugs
      * fix bugs
      * fix bugs
      * fix enforce usage
      * Revert "fix enforce usage"
        This reverts commit 5f521f08a69713cee506e64a00ec6d9fba709e27.
      * fix bugs
      * rm XPUDeviceContext and CustomDeviceContext
      * fix bugs
      * fix fix context init bug
      * fix bugs after merge
      * fix bugs
      * fix name
      * fix mutable_data
      * update and fix bugs
      * fix bugs
      * update
      * fix bugs
      * fix name
      * fix bugs
      * merge
      * fix bugs
      * create context_pool in phi/backends
      * create context_pool in phi/backends
      * fix bugs
      * fix xpu bugs
      * fix rocm bugs
      * fix bugs
      * fix bugs
      * fix bugs
      * fix xpu bugs
      * update
      * update
      * fix bugs
      * fix bugs
    • oneDNN kernels code cleanup (#50743) · e2054925
      Sławomir Siwek committed
      * matmul refactored
      * fc
      * SetOutMemDescWithLogicalLayoutFusesSupport
      * matmul_v2
      * alpha support
      * group repetetive funcs
      * matmul utils
      * execute matmul methods
      * restore registered kernel names
      * split header and impl files
      * remove double negatives
      * increase coverage
      * add onednn tests to ctest
      * remove fusion logic from base matmuls
  10. 05 March 2023, 1 commit
  11. 03 March 2023, 2 commits