1. 19 Apr 2019: 1 commit
  2. 17 Apr 2019: 1 commit
  3. 25 Mar 2019: 1 commit
  4. 20 Mar 2019: 2 commits
  5. 18 Mar 2019: 2 commits
  6. 14 Mar 2019: 2 commits
  7. 12 Mar 2019: 1 commit
  8. 08 Mar 2019: 3 commits
  9. 07 Mar 2019: 1 commit
  10. 04 Mar 2019: 2 commits
  11. 28 Feb 2019: 1 commit
  12. 26 Feb 2019: 1 commit
  13. 22 Feb 2019: 2 commits
    • Revert 15770 develop a6910f90 gelu mkl opt (#15872) · ee2321de
      Committed by tensor-tang
      * Revert "Optimize Gelu with MKL Erf function (#15770)"

      This reverts commit 676995c8.

      * test=develop
    • Optimize Gelu with MKL Erf function (#15770) · 676995c8
      Committed by Yihua Xu
      * Optimize the gelu operator

      * Set up the low-accuracy mode of the MKL ERF function.

      test=develop

      * Only enable MKLML ERF when the OS is Linux

      * Use the special MKLML version that includes the vmsErf function to verify the gelu MKL kernel.

      test=develop

      * Add the CUDA macro to avoid NVCC's compile issue.

      test=develop

      * Add TODO comments for the MKLML library modification.

      test=develop

      * Clean code

      test=develop

      * Add a comment on the macro for the NVCC compiler.

      test=develop
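The optimization above rests on the exact GELU identity, gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2))), which lets MKL's vectorized erf do the heavy lifting. A minimal Python sketch of that formulation, with the common tanh approximation for comparison (illustrative only; the function names are mine, not PaddlePaddle's kernel code):

```python
import math

def gelu_erf(x: float) -> float:
    """Exact GELU: 0.5 * x * (1 + erf(x / sqrt(2)))."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x: float) -> float:
    """Common tanh approximation of GELU, shown for comparison."""
    return 0.5 * x * (1.0 + math.tanh(
        math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))
```

The MKL kernel vectorizes the erf call over a whole tensor; the low-accuracy (LA) mode mentioned above trades a few ULPs of erf precision for speed.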
  14. 19 Feb 2019: 1 commit
  15. 11 Feb 2019: 1 commit
  16. 02 Feb 2019: 1 commit
  17. 30 Jan 2019: 5 commits
  18. 29 Jan 2019: 3 commits
  19. 24 Jan 2019: 2 commits
    • Add the CUDA kernel for beam_search op (#15020) · 3008fa12
      Committed by Yiqun Liu
      * Refine the beam_search op and its test.

      * A basic CUDA implementation of beam_search for small batch_size.

      * Implement the CUDA kernel for beam_search_op.

      * Use multiple CUDA threads in the same block to select the top beam.

      * Update the Python API of the beam_search op.

      * Enable the extend function in the CPU kernel of the beam_search op.

      * Unify the CUDA code.
      test=develop

      * Unify the CPU kernel of the beam_search op.

      * Ensure the selected items of beam_search_op's CPU kernel are sorted by score.

      * Update the description of beam_search in API.spec.

      * Enable the use of the CUDA kernel in the beam_search op.

      * Exclude beam_search's CUDA unittest when there is no CUDA GPU, and delete some debugging statements.
      test=develop

      * Follow review comments.
      test=develop

      * Call the CPU kernel for the beam_search op when batch_size > 4.
      test=develop

      * Remove the exception for the is_empty op in PrepareData.
      test=develop
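A single step of beam search, as the CPU kernel performs it, expands every live beam with every token's score and keeps the top beam_size candidates, sorted by score. A hypothetical standalone sketch of that step (my own names; not the actual beam_search_op code):

```python
import heapq

def beam_search_step(beams, vocab_scores, beam_size):
    """One decoding step of beam search.

    beams: list of (score, token_sequence) tuples for the live beams
    vocab_scores: per-token log-probabilities for this step
    beam_size: number of candidates to keep

    Returns the beam_size best expanded candidates, highest score first,
    matching the "selected items sorted by score" guarantee above.
    """
    candidates = [
        (score + tok_score, seq + [tok])
        for score, seq in beams
        for tok, tok_score in enumerate(vocab_scores)
    ]
    return heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
```

The CUDA kernel parallelizes the top-k selection across threads of one block; the batch_size > 4 fallback above reflects that the GPU path was tuned for small batches.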
    • nce: add check for sample labels, test=develop (#15463) · 5cfc40de
      Committed by tangwei12
      * nce: add check for sample labels, test=develop
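The check presumably verifies that sampled label indices fall inside the valid class range before the NCE op uses them. A minimal hypothetical sketch of such a guard (not the operator's real code; the helper name and signature are mine):

```python
def check_sample_labels(labels, num_classes):
    """Raise if any sampled label index is outside [0, num_classes)."""
    for i, label in enumerate(labels):
        if not (0 <= label < num_classes):
            raise ValueError(
                f"label {label} at position {i} is out of range "
                f"[0, {num_classes})"
            )
```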
  20. 21 Jan 2019: 1 commit
    • Memory optimization of depthwise conv op and group norm op (#15313) · 9f8f0fc2
      Committed by Dun
      * Memory optimization

      * test=develop

      * Refine code, test=develop

      * Refine with CUB, test=develop

      * Fix the mkldnn test && remove comments && test=develop

      * Polish code && test=develop

      * Add an only_forward test && test=develop
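Depthwise convolution, one of the two ops whose memory footprint this commit reduces, applies one filter per input channel instead of mixing channels, so no large intermediate im2col buffer is strictly needed. A naive NumPy sketch of the computation (illustrative; not the CUDA kernel being optimized):

```python
import numpy as np

def depthwise_conv2d(x, filters):
    """Naive depthwise convolution, no padding, stride 1.

    x: input of shape (C, H, W)
    filters: one kernel per channel, shape (C, kH, kW)
    """
    c, h, w = x.shape
    _, kh, kw = filters.shape
    out = np.zeros((c, h - kh + 1, w - kw + 1))
    for ch in range(c):  # each channel is convolved only with its own filter
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[ch, i, j] = np.sum(x[ch, i:i + kh, j:j + kw] * filters[ch])
    return out
```

The CUB refinement mentioned above would replace hand-rolled reductions in the CUDA kernel with CUB's block-level primitives.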
  21. 18 Jan 2019: 1 commit
    • Tree conv op (#15217) · e2ba9668
      Committed by zhaozhehao
      * Refactor the tree2col operator with the new memory mechanism, test=develop

      * test=develop

      * Modify the API according to panyx0718's review, test=develop

      * Fix the API change according to heavengate's review, test=develop

      * Modify the API comment, test=develop
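tree2col plays the role that im2col plays for images: it gathers each node together with its children into a fixed-width column so that tree convolution reduces to a matrix multiply. A hypothetical sketch of the gathering step (my own simplified layout, not the operator's real one):

```python
def tree2col(features, children, max_children):
    """Gather each node's feature with its children's into one padded column.

    features: per-node feature values
    children: children[i] lists the child indices of node i
    max_children: columns are zero-padded to 1 + max_children entries
    """
    cols = []
    for node, feat in enumerate(features):
        col = [feat] + [features[c] for c in children[node]]
        col += [0] * (1 + max_children - len(col))  # pad to fixed width
        cols.append(col)
    return cols
```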
  22. 14 Jan 2019: 1 commit
  23. 13 Jan 2019: 3 commits
  24. 10 Jan 2019: 1 commit
    • [Feature] Support mixed-precision training for ResNet (#14899) · fd854183
      Committed by Wu Yi
      * Clip softmax for fp16

      * Updates

      * Fused xent supports fp16, test=develop

      * wip

      * Add a simple row reduce

      * wip: accurate fp16 softmax

      * Add an accurate softmax kernel for fp16, test=develop

      * Update, test=develop

      * Fix the CPU build, test=develop

      * Update api.spec, test=develop

      * Follow review comments, test=develop

      * Fix the build, test=develop

      * Fix the TensorRT build, test=develop

      * Fix the inference build, test=develop

      * Fix the merge, test=develop

      * Update, test=develop

      * Try to fix the build, test=develop

      * Fix the build, test=develop

      * Rename real_exp, test=develop

      * For test

      * Remove hacky kernels, test=develop

      * Clean up, test=develop
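An "accurate softmax kernel for fp16" typically upcasts to fp32 for the max-subtraction, exponentiation, and sum, then casts the result back to fp16, avoiding overflow and catastrophic rounding in the half-precision range. A NumPy sketch of that pattern (an assumption about the approach, not the commit's CUDA kernel):

```python
import numpy as np

def softmax_fp16_accurate(x_fp16):
    """Row-wise softmax on fp16 input with fp32 intermediate math."""
    x = x_fp16.astype(np.float32)            # upcast before any arithmetic
    x = x - x.max(axis=-1, keepdims=True)    # subtract row max for stability
    e = np.exp(x)
    return (e / e.sum(axis=-1, keepdims=True)).astype(np.float16)
```

Doing the reduction in fp32 is the standard mixed-precision recipe: the model stores and moves fp16 tensors, but numerically sensitive reductions accumulate in fp32.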