1. 06 Sep 2021 (2 commits)
    • replace pass with error exception (#35367) · 5675042d
      Feng Xing committed
      This PR adds error exceptions to the fused transformer Python interface.
      The function bodies are not implemented yet (they will be implemented later).
      Following zhiqiu's comment on the previous PR #35206 (already merged), it is better to raise an exception than to silently `pass`.
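The pattern this commit describes, raising instead of silently passing, can be sketched as follows (the class and method names are illustrative, not the PR's actual code):

```python
class FusedMultiHeadAttention:
    """Illustrative stub of a not-yet-implemented interface."""

    def forward(self, query):
        # A bare `pass` would silently return None and hide the missing
        # implementation; raising fails loudly at the call site instead.
        raise NotImplementedError(
            "FusedMultiHeadAttention.forward is not implemented yet.")
```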
    • update trt ut. (#35458) · 18934c53
      Wilber committed
  2. 05 Sep 2021 (1 commit)
  3. 04 Sep 2021 (1 commit)
  4. 03 Sep 2021 (12 commits)
  5. 02 Sep 2021 (7 commits)
    • [NPU] Support npu kernel for gather_nd op (#34800) · bb633965
      JingZhuangzhuang committed
      * [NPU] Support npu kernel for gather_ng op
      
      * [NPU] Support npu kernel for gather_nd op
      
      * [NPU] Support npu kernel for gather_nd and gather_nd_grad op
      
      * update py format error.
      
      * modify gather_nd_op_npu
      
      * modify gather_nd 910 test
      
      * modify gather_nd 910 test
      Co-authored-by: xiaoxiaohehe001 <hiteezsf@163.com>
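For context, the semantics of `gather_nd` can be sketched with numpy (an illustrative stand-in for the NPU kernel, not its implementation): the last axis of the index tensor holds coordinates into the input.

```python
import numpy as np

def gather_nd(x, index):
    # Each row along the last axis of `index` is one coordinate tuple
    # addressing a slice (or element) of `x`.
    index = np.asarray(index)
    flat = index.reshape(-1, index.shape[-1])
    gathered = np.stack([x[tuple(idx)] for idx in flat])
    return gathered.reshape(index.shape[:-1] + gathered.shape[1:])

x = np.array([[1, 2], [3, 4], [5, 6]])
print(gather_nd(x, [[2], [0]]))  # gathers rows 2 and 0 -> [[5, 6], [1, 2]]
print(gather_nd(x, [[1, 0]]))    # gathers element x[1, 0] -> [3]
```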
    • Add SVD Op and its GPU and CPU kernel (#34953) · 7e5fb462
      xiongkun committed
      * Add SVD Op and its GPU and CPU kernel
      
      * Remove CUDAPlace in test_svd_op, make the test available in CPU package
      
      * modify the file
      
      * fix windows bug/ fix ROCM / fix test timeout
      
      * for pass the CIs
      
      * improve error report
      
      * for code review
      
      * some modification to test_svd_op
      
      * change python code style
      
      * expose the svd interface for document
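What an SVD op computes can be checked with a short numpy sketch (numpy stands in here for the new GPU/CPU kernels added by the PR):

```python
import numpy as np

# Singular value decomposition: A = U @ diag(S) @ Vh, with U and Vh
# having orthonormal columns/rows and S the singular values.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
U, S, Vh = np.linalg.svd(A, full_matrices=False)
recon = U @ np.diag(S) @ Vh
print(np.allclose(recon, A))  # reconstruction matches A
```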
    • [NPU] Add label_smooth_op (#34828) · e57a88b3
      zhulei committed
      * [NPU] Add label_smooth_op
      
      * [NPU] Add label_smooth_op
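Label smoothing itself is a one-line formula; the sketch below shows the common default (uniform prior over classes), as a hedged illustration rather than the op's actual code:

```python
import numpy as np

def label_smooth(onehot, epsilon=0.1):
    # y_smooth = (1 - epsilon) * y + epsilon / num_classes
    num_classes = onehot.shape[-1]
    return (1.0 - epsilon) * onehot + epsilon / num_classes

y = np.array([[0.0, 1.0, 0.0, 0.0]])
print(label_smooth(y))  # [[0.025 0.925 0.025 0.025]]
```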
    • [hybrid] [npu] fit npu nan/inf check (#35171) · 67ed7e12
      Yuang Liu committed
    • fix static error in summary (#35303) · b28cc734
      wangna11BD committed
    • [Auto Parallel] Logical Partition & Dist Op (#35117) · a622b701
      JZ-LIANG committed
      * support shard reader
      
      * support shard reader
      
      * add parallel mode
      
      * update process mesh
      
      * add method to compute comm_group
      
      * implement dist_embedding forward func
      
      * implement dist matmul forward func
      
      * implement dist reshape forward func
      
      * add transpiler framework
      
      * add transpiler forward
      
      * implement transpiler forward
      
      * implement transpiler backward & update
      
      * add process
      
      * add unitest
      
      * chmod
      
      * chmod
      
      * chmod
      
      * update unitest
      
      * add unitest for gpt
      
      * remove unused print
      
      * rename transpiler --> partitioner
      
      * rename transpiler --> partitioner
      
      * chmod
      
      * chmod
      
      * bug fixed
      
      * remove amp function
      
      * update case for dp mode
      
      * update case for dp mode
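One bullet above mentions a method to compute comm_group from a process mesh. A minimal sketch of that idea (all names hypothetical, not the PR's API): the ranks that must communicate together when a tensor is sharded along one mesh axis are the slices of the mesh along that axis.

```python
import numpy as np

def comm_groups(mesh, axis):
    # Bring the sharding axis last, then every row of the flattened
    # result is one communication group of process ranks.
    mesh = np.asarray(mesh)
    moved = np.moveaxis(mesh, axis, -1)
    return [list(map(int, g)) for g in moved.reshape(-1, mesh.shape[axis])]

mesh = [[0, 1, 2, 3], [4, 5, 6, 7]]  # a 2 x 4 process mesh
print(comm_groups(mesh, 0))  # [[0, 4], [1, 5], [2, 6], [3, 7]]
print(comm_groups(mesh, 1))  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```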
    • [npu] add update_loss_scaling npu min value (#35270) · 280d7421
      Baibaifan committed
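The min-value fix can be illustrated with a hypothetical sketch: when overflow is detected, the loss scale is decreased but clamped to a floor so it cannot decay to zero (names and defaults here are illustrative, not the op's actual interface):

```python
def update_loss_scaling(scale, found_inf, decr_ratio=0.5, min_scale=1.0):
    # On overflow, shrink the scale but never below `min_scale`,
    # so dynamic loss scaling cannot collapse to zero.
    if found_inf:
        scale = max(scale * decr_ratio, min_scale)
    return scale

print(update_loss_scaling(4.0, True))   # 2.0
print(update_loss_scaling(1.5, True))   # clamped to 1.0
print(update_loss_scaling(4.0, False))  # unchanged: 4.0
```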
  6. 01 Sep 2021 (15 commits)
  7. 31 Aug 2021 (2 commits)
    • Support CostInfo and MemProfiler in InterpreterCore (#34981) · 572bad8a
      Aurelius84 committed
      * polish code
      
      * fix unittest on windows
      
      * refine pybind interface
      
      * support statistic MemSize of AllocatorPool
      
      * Replace mutex into atomic
    • transformer opt python files (#35206) · e2991555
      Feng Xing committed
      This PR adds the fused transformer Python files, defining the interface of the fused transformer.

      The fused transformer implements an optimized version of the transformer layer (in python/paddle/nn/layer/transformer.py). This PR defines four layers (functions):
      (1) FusedMultiHeadAttention: multi-head attention layer
      (2) FusedFeedForward: feed-forward layer
      (3) FusedTransformerEncoderLayer: transformer encoder layer
      (4) FusedTransformer: transformer layer
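For orientation, the unfused computation that such a fused encoder layer optimizes can be sketched in numpy (layer norm and dropout omitted for brevity; all names here are illustrative, not Paddle's API):

```python
import numpy as np

def feed_forward(x, w1, w2):
    # Position-wise two-layer MLP with ReLU, as in a standard encoder.
    return np.maximum(x @ w1, 0.0) @ w2

def encoder_layer(x, wq, wk, wv, w1, w2):
    # Scaled dot-product self-attention followed by the feed-forward
    # block, each with a residual connection.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    x = x + weights @ v                  # attention + residual
    return x + feed_forward(x, w1, w2)   # FFN + residual
```

A fused implementation computes the same function but merges these steps into fewer kernels to reduce memory traffic and launch overhead.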