1. 02 Sep 2021, 8 commits
  2. 01 Sep 2021, 22 commits
  3. 31 Aug 2021, 10 commits
    • Support CostInfo and MemProfiler in InterpreterCore (#34981) · 572bad8a
      Committed by Aurelius84
      * polish code
      
      * fix unittest on windows
      
      * refine pybind interface
      
      * Support collecting MemSize statistics for the AllocatorPool
      
      * Replace mutex with atomic
    • transformer opt python files (#35206) · e2991555
      Committed by Feng Xing
      This PR adds the Python files related to the fused transformer and defines its interface.
      
      The fused transformer implements an optimized version of the transformer layer (in python/paddle/nn/layer/transformer.py). This PR defines four layers (functions); a hypothetical usage sketch follows the list:
      (1) FusedMultiHeadAttention: multi-head attention layer
      (2) FusedFeedForward: feed forward layer
      (3) FusedTransformerEncoderLayer: transformer encoder layer
      (4) FusedTransformer: transformer layer
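      The sketch below shows how these fused layers might be used once the corresponding kernels are available. The import path (paddle.incubate.nn) and the constructor arguments are assumptions modeled on the non-fused layers in paddle.nn, not taken from this PR.
      
      # Hypothetical usage sketch only: the import path and constructor arguments
      # below are assumptions modeled on paddle.nn.MultiHeadAttention and
      # paddle.nn.TransformerEncoderLayer; the actual fused interfaces may differ.
      import paddle
      from paddle.incubate.nn import FusedMultiHeadAttention, FusedFeedForward  # assumed path
      
      x = paddle.rand([2, 128, 256])                              # [batch, seq_len, d_model]
      
      attn = FusedMultiHeadAttention(embed_dim=256, num_heads=8)  # assumed arguments
      ffn = FusedFeedForward(d_model=256, dim_feedforward=1024)   # assumed arguments
      
      out = ffn(attn(x))   # fused self-attention followed by the fused feed-forward layer
      print(out.shape)     # [2, 128, 256]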
    • [Dy2Stat] Add model ResNet50 for Dy2stat AMP training (#35276) · 079c585c
      Committed by Aurelius84
      * Add a ResNet50 model for Dy2stat AMP training (see the sketch after this list)
      
      * fix timeout
      
      * fix dataloader
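      For context, a minimal sketch of what Dy2stat AMP training looks like in Paddle is shown below; it is illustrative only and is not the test code added by this PR.
      
      # Illustrative sketch of dynamic-to-static (Dy2stat) AMP training, not the PR's test code.
      import paddle
      from paddle.vision.models import resnet50
      
      model = paddle.jit.to_static(resnet50())      # convert the dygraph model to a static graph
      opt = paddle.optimizer.Momentum(learning_rate=0.01, parameters=model.parameters())
      scaler = paddle.amp.GradScaler(init_loss_scaling=1024)
      
      images = paddle.rand([4, 3, 224, 224])
      labels = paddle.randint(0, 1000, [4, 1])
      
      with paddle.amp.auto_cast():                  # run fp16-safe ops in float16
          loss = paddle.nn.functional.cross_entropy(model(images), labels)
      scaled = scaler.scale(loss)                   # scale the loss to avoid fp16 underflow
      scaled.backward()
      scaler.minimize(opt, scaled)                  # unscale gradients and update parameters
      opt.clear_grad()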
    • [NPU] fix cmake for ascend ci, test=develop (#35255) · f6004ab9
      Committed by Qi Li
      * [NPU] fix cmake for ascend ci, test=develop
      
      * update paddle_build.sh scripts, test=allcase
    • Revert "Revert "Add copy from tensor (#34406)" (#35173)" (#35256) · 6116f9af
      Committed by Shang Zhizhou
      * Revert "Revert "Add copy from tensor (#34406)" (#35173)"
      
      This reverts commit 32c1ec42.
      
      * add template instantiation
    • Fix a bug in how cmake finds Python (#35304) · 00c9aeb0
      Committed by zhouweiwei2014
    • New whl release strategy with pruned nv_fatbin (#35239) · 2f3b393d
      Committed by Zhanlue Yang
      [Background]
      In the long run, growth in code size can be irreversible, leading to huge release packages that
      not only hamper the user experience but also exceed a hard size limit imposed by PyPI.
      
      Currently, the NV_FATBIN section takes up 86% of the compiled dylib size, owing to the vast number
      of GPU arches supported.
      
      This PR aims to prune this NV_FATBIN.
      
      [Solution]
      The new release strategy involves two types of whl packages:
      
      Cubin PIP package:
      The PIP package keeps a narrower window of GPU arch support, containing the
      sm_60, sm_70, sm_75 and sm_80 cubins and covering the Pascal through Ampere arches.
      
      JIT release package:
      A backup for the Cubin PIP package, containing compute_35, compute_50, compute_60,
      compute_70, compute_75 and compute_80, with the best performance and GPU arch coverage.
      
      However, it takes around 10 minutes to install because of the JIT compilation.
      
      [How to use]
      The new release strategy is disabled by default.
      To compile the Cubin PIP package, add this to cmake: -DCUBIN_RELEASE_PIP
      To compile the JIT release package, add this to cmake: -DJIT_RELEASE_WHL
    • Put code style check on gpu_ci (#35309) · d9f59fd1
      Committed by tianshuo78520a
      * notest;test=cpu
      
      * test
      
      * test=document_fix
    • update infer trt ut. (#35261) · 96e7d903
      Committed by Wilber
    • add trt error information. (#35277) · a2afcace
      Committed by wenbin
      * add trt error information.
      
      * rerun ci