1. 22 Dec 2017, 1 commit
    • "remove GPU Sync Interface" (#6793) · abde3130
      Committed by dzhwinter
      * "remove GPU Sync Interface"
      
      * "fix typo"
      
      * "fix type cast error"
      
      * "fix related Copy with stream"
      
      * "fix failed tests with DevicePool"
      
      * "fix stupid removed position error"
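      The thread running through these fixes is that every GPU copy now takes an explicit stream instead of synchronizing the whole device. As a rough sketch of what a "Copy with stream" means at the CUDA runtime level (plain CUDA calls, not Paddle's actual Copy interface; buffer names are illustrative):

```cpp
#include <cuda_runtime.h>

int main() {
  const size_t n = 1 << 20;
  // Pinned host memory keeps cudaMemcpyAsync genuinely asynchronous;
  // with pageable memory the runtime may fall back to a blocking copy.
  float* host = nullptr;
  float* dev = nullptr;
  cudaMallocHost(reinterpret_cast<void**>(&host), n * sizeof(float));
  cudaMalloc(reinterpret_cast<void**>(&dev), n * sizeof(float));

  cudaStream_t stream;
  cudaStreamCreate(&stream);

  // The copy is enqueued on `stream`: it is ordered against other work on
  // that stream, while the host thread continues immediately.
  cudaMemcpyAsync(dev, host, n * sizeof(float), cudaMemcpyHostToDevice,
                  stream);

  // Synchronization becomes an explicit caller-side decision rather than
  // an implicit device-wide wait inside every copy.
  cudaStreamSynchronize(stream);

  cudaStreamDestroy(stream);
  cudaFree(dev);
  cudaFreeHost(host);
  return 0;
}
```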
  2. 21 Dec 2017, 1 commit
  3. 18 Dec 2017, 1 commit
  4. 15 Dec 2017, 5 commits
  5. 14 Dec 2017, 2 commits
  6. 12 Dec 2017, 1 commit
    • Refine device context (#6433) · 61ec0b95
      Committed by QI JUN
      This change mainly contains the following fixes:
      
      - take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place`
      - remove `eigen_device` interface in base class  `DeviceContext`
      - remove `GetEigenDevice` interface in `ExecutionContext` and base class `DeviceContext`
      - remove unused `platform::EigenDeviceConverter`
      - rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`
      - rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`
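      The first two items are the core of the change: math functors and kernels are templated on the concrete `DeviceContext` type rather than on `Place`. A minimal, self-contained sketch of that pattern, using simplified stand-in context classes rather than Paddle's real ones:

```cpp
#include <iostream>

// Stand-ins for Paddle's device context classes (the real ones live in
// paddle/platform/device_context.h and carry streams and library handles).
struct CPUDeviceContext {
  const char* name() const { return "CPU"; }
};
struct CUDADeviceContext {
  const char* name() const { return "CUDA"; }
};

// After this commit, a functor is written once, templated on the concrete
// DeviceContext type; before, it was templated on Place and had to branch.
template <typename DeviceContext, typename T>
struct ScaleFunctor {
  void operator()(const DeviceContext& ctx, const T* in, T* out, int n,
                  T scale) const {
    // A real CUDA instantiation would launch a kernel on ctx's stream;
    // serial code keeps this sketch self-contained and runnable.
    for (int i = 0; i < n; ++i) out[i] = in[i] * scale;
    std::cout << "ScaleFunctor ran on " << ctx.name() << "\n";
  }
};

int main() {
  float in[4] = {1, 2, 3, 4}, out[4];
  CPUDeviceContext cpu_ctx;
  ScaleFunctor<CPUDeviceContext, float>()(cpu_ctx, in, out, 4, 2.0f);
  return 0;
}
```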
  7. 07 Dec 2017, 2 commits
  8. 05 Dec 2017, 1 commit
  9. 01 Dec 2017, 3 commits
  10. 29 Nov 2017, 1 commit
  11. 28 Nov 2017, 2 commits
  12. 27 Nov 2017, 3 commits
  13. 24 Nov 2017, 1 commit
  14. 23 Nov 2017, 1 commit
  15. 16 Nov 2017, 1 commit
  16. 15 Nov 2017, 1 commit
  17. 13 Nov 2017, 3 commits
  18. 11 Nov 2017, 2 commits
  19. 08 Nov 2017, 2 commits
  20. 31 Oct 2017, 1 commit
  21. 26 Oct 2017, 1 commit
    • Cudnn batch norm op (#5067) · 56b723c4
      Committed by Qiao Longfei
      * init cudnn batch norm op
      
      * rename batch_norm_cudnn_op.cc to batch_norm_op.cu
      
      * correct name style
      
      * add ExtractNCWHD, simplify code
      
      * fix ExtractNCWHD
      
      * use CUDNN_ENFORCE instead of PADDLE_ENFORCE
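      `ExtractNCWHD` reads the batch (N), channel (C), and spatial (H, W, D) sizes out of a 3-D to 5-D shape under either an NCHW- or NHWC-family layout. A simplified, self-contained sketch of such a helper (the real one takes Paddle's `DDim`; missing trailing spatial dims default to 1):

```cpp
#include <cassert>
#include <vector>

enum class DataLayout { kNCHW, kNHWC };

// Pull N, C, H, W, D out of a rank-3 to rank-5 shape. In NCHW-family
// layouts the spatial dims follow the channel dim; in NHWC-family
// layouts they sit between the batch and channel dims.
void ExtractNCWHD(const std::vector<int>& dims, DataLayout layout,
                  int* N, int* C, int* H, int* W, int* D) {
  const int rank = static_cast<int>(dims.size());
  assert(rank >= 3 && rank <= 5);
  *N = dims[0];
  *C = (layout == DataLayout::kNCHW) ? dims[1] : dims[rank - 1];
  const int offset = (layout == DataLayout::kNCHW) ? 2 : 1;
  *H = dims[offset];
  *W = rank > 3 ? dims[offset + 1] : 1;  // absent dims default to 1
  *D = rank > 4 ? dims[offset + 2] : 1;
}

int main() {
  int N, C, H, W, D;
  ExtractNCWHD({8, 3, 32, 32}, DataLayout::kNCHW, &N, &C, &H, &W, &D);
  // Expect N=8, C=3, H=32, W=32, D=1.
  return (N == 8 && C == 3 && H == 32 && W == 32 && D == 1) ? 0 : 1;
}
```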
  22. 25 Oct 2017, 1 commit
  23. 24 Oct 2017, 2 commits
  24. 18 Oct 2017, 1 commit
    • MatMul operator (#4856) · 16489827
      Committed by Markus Kliegl
      * initial matmul operator
      
      Similar to np.matmul, but also has transpose_X and transpose_Y flags,
      and only supports tensors from rank 1 to 3 inclusive.
      
      For GPU, uses cublas?gemmStridedBatched. For CPU, uses
      cblas_?gemm_batch if available via MKL; otherwise a simple serial
      implementation that loops over the batch dimension is employed for now.
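      As a rough sketch of the serial fallback path described above (the function name, row-major layout, and float-only typing are assumptions for illustration; the real operator dispatches to the BLAS routines named in the message when they are available):

```cpp
#include <cstddef>
#include <vector>

// Serial fallback: one plain triple-loop GEMM per batch entry, row-major
// storage assumed. The transpose flags reinterpret the stored operands as
// K x M (for A) and N x K (for B) matrices.
void BatchedMatMul(const float* A, const float* B, float* C,
                   int batch, int M, int K, int N,
                   bool transpose_A, bool transpose_B) {
  for (int b = 0; b < batch; ++b) {
    const float* a = A + static_cast<size_t>(b) * M * K;
    const float* bm = B + static_cast<size_t>(b) * K * N;
    float* c = C + static_cast<size_t>(b) * M * N;
    for (int i = 0; i < M; ++i) {
      for (int j = 0; j < N; ++j) {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k) {
          const float av = transpose_A ? a[k * M + i] : a[i * K + k];
          const float bv = transpose_B ? bm[j * K + k] : bm[k * N + j];
          acc += av * bv;
        }
        c[i * N + j] = acc;
      }
    }
  }
}

int main() {
  // Two batches of (2x3) * (3x2) products of all-ones matrices; every
  // result entry is a length-3 dot product, i.e. 3.
  std::vector<float> A(2 * 2 * 3, 1.0f), B(2 * 3 * 2, 1.0f), C(2 * 2 * 2);
  BatchedMatMul(A.data(), B.data(), C.data(), 2, 2, 3, 2, false, false);
  return C[7] == 3.0f ? 0 : 1;
}
```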