1. 03 Sep 2020 (1 commit)
  2. 09 Jul 2020 (1 commit)
  3. 27 Apr 2020 (1 commit)
  4. 24 Apr 2020 (1 commit)
    • Add cholesky_op (#23543) · a8c0fb4e
      Committed by Guo Sheng
      * Add cholesky_op forward part. test=develop
      
      * Complete cholesky_op forward part. test=develop
      
      * Add cholesky_op backward part. test=develop
      
      * Complete cholesky_op backward part. test=develop
      
      * Refine cholesky_op error check and docs. test=develop
      
      * Add grad_check unit test for cholesky_op. test=develop
      
      * Fix sample code in cholesky doc. test=develop
      
      * Refine some error messages of cholesky_op. test=develop
      
      * Refine some error messages of cholesky_op. test=develop
      
      * Remove unused input in cholesky_grad. test=develop
      
      * Remove unused input in cholesky_grad. test=develop
      
      * Fix stream for cusolverDnSetStream. test=develop
      
      * Update PADDLE_ENFORCE_CUDA_SUCCESS from cholesky_op to adapt to latest code.
      test=develop
      
      * Add CUSOLVER ERROR in enforce.h
      test=develop
      
      * Fix the missing return value in cholesky. test=develop
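      For context, the forward pass of such an op ultimately reduces to cuSOLVER's dense Cholesky routine. Below is a minimal standalone sketch of that call sequence (not the PaddlePaddle code from this PR); the single-precision cusolverDnSpotrf, the plain error macro, and the lower-triangular choice are illustrative assumptions.

        // Sketch: in-place float Cholesky factorization of an n x n matrix on
        // the GPU via cuSOLVER, bound to a caller-supplied stream
        // (cf. the cusolverDnSetStream fix above).
        #include <cuda_runtime.h>
        #include <cusolverDn.h>
        #include <cstdio>
        #include <cstdlib>

        #define CHECK_CUSOLVER(call)                                    \
          do {                                                          \
            cusolverStatus_t s_ = (call);                               \
            if (s_ != CUSOLVER_STATUS_SUCCESS) {                        \
              std::fprintf(stderr, "cuSOLVER error %d\n", int(s_));     \
              std::exit(1);                                             \
            }                                                           \
          } while (0)

        void cholesky_lower(float* d_A, int n, cudaStream_t stream) {
          cusolverDnHandle_t handle;
          CHECK_CUSOLVER(cusolverDnCreate(&handle));
          CHECK_CUSOLVER(cusolverDnSetStream(handle, stream));

          int lwork = 0;
          CHECK_CUSOLVER(cusolverDnSpotrf_bufferSize(
              handle, CUBLAS_FILL_MODE_LOWER, n, d_A, n, &lwork));

          float* d_work = nullptr;
          int* d_info = nullptr;
          cudaMalloc(&d_work, sizeof(float) * lwork);
          cudaMalloc(&d_info, sizeof(int));

          // On success the lower triangle of d_A holds L; d_info should be
          // copied back and checked to confirm the input was positive definite.
          CHECK_CUSOLVER(cusolverDnSpotrf(handle, CUBLAS_FILL_MODE_LOWER, n,
                                          d_A, n, d_work, lwork, d_info));

          cudaFree(d_work);
          cudaFree(d_info);
          cusolverDnDestroy(handle);
        }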
  5. 10 Apr 2020 (1 commit)
  6. 05 Sep 2019 (1 commit)
    • Integrate NVRTC to support compiling CUDA kernel at runtime (#19422) · 42b5bec6
      Committed by Yiqun Liu
      * Add the dynamic load of nvrtc, and support runtime compiling of CUDA kernel using nvrtc.
      test=develop
      
      * Call CUDA driver api to launch the kernel compiled by nvrtc.
      test=develop
      
      * Disable for mac and windows.
      test=develop
      
      * Refine the codes to support manually specified num_threads and workload_per_thread.
      test=develop
      
      * Refine the CUDA kernel to support large dims.
      test=develop
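      As a rough illustration of the mechanism this commit adds, the sketch below compiles a small kernel string with NVRTC and launches it through the CUDA driver API; the kernel source, compile options, and minimal error handling are assumptions for the example, not code from the PR.

        // Sketch: runtime-compile a CUDA kernel with NVRTC, then load the PTX
        // and launch it via the driver API (cuModuleLoadData / cuLaunchKernel).
        #include <cuda.h>
        #include <nvrtc.h>
        #include <cassert>
        #include <string>

        static const char* kSource = R"(
        extern "C" __global__ void scale(float* x, float a, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) x[i] *= a;
        })";

        int main() {
          // 1. Compile the CUDA C++ string to PTX at runtime.
          nvrtcProgram prog;
          nvrtcCreateProgram(&prog, kSource, "scale.cu", 0, nullptr, nullptr);
          const char* opts[] = {"--gpu-architecture=compute_50"};
          nvrtcResult rc = nvrtcCompileProgram(prog, 1, opts);
          assert(rc == NVRTC_SUCCESS);
          size_t ptx_size = 0;
          nvrtcGetPTXSize(prog, &ptx_size);
          std::string ptx(ptx_size, '\0');
          nvrtcGetPTX(prog, &ptx[0]);
          nvrtcDestroyProgram(&prog);

          // 2. Load the PTX and launch the kernel with the driver API.
          cuInit(0);
          CUdevice dev;
          CUcontext ctx;
          cuDeviceGet(&dev, 0);
          cuCtxCreate(&ctx, 0, dev);

          CUmodule mod;
          CUfunction fn;
          cuModuleLoadData(&mod, ptx.c_str());
          cuModuleGetFunction(&fn, mod, "scale");

          int n = 1024;
          float a = 2.0f;
          CUdeviceptr d_x;
          cuMemAlloc(&d_x, n * sizeof(float));
          void* args[] = {&d_x, &a, &n};
          cuLaunchKernel(fn, (n + 255) / 256, 1, 1,   // grid
                         256, 1, 1,                   // block
                         0, nullptr, args, nullptr);  // shmem, stream, params
          cuCtxSynchronize();

          cuMemFree(d_x);
          cuModuleUnload(mod);
          cuCtxDestroy(ctx);
          return 0;
        }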
  7. 23 Nov 2018 (1 commit)
  8. 22 Nov 2018 (1 commit)
    • Refine cublas to support CUBLAS_TENSOR_OP_MATH (#13929) · 00b9e9a1
      Committed by chengduo
      * refine cublas
      test=develop
      
      * code refine
      
      * refine cublas
      
      * add GEMM_EX
      
      * add enable_cublas_tensor_op_math doc and add cublasCall
      test=develop
      
      * fix CublasCall for CUDA version
      test=develop
      
      * fix error
      test=develop
      
      * fix GEMM_EX to be compatible with gcc 4.8
      test=develop
      
      * add GEMM_EX
      test=develop
      
      * to be compatible with gcc 4.8
      test=develop
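      To make the feature concrete, here is a minimal sketch of enabling CUBLAS_TENSOR_OP_MATH and issuing an FP16 GEMM through cublasGemmEx, written against the CUDA 9/10-era cuBLAS API (the math-mode enum was deprecated later, in CUDA 11); the dimensions, data types, and FP32 accumulation are assumptions for the example, not the code of this PR.

        // Sketch: FP16 GEMM via cublasGemmEx with Tensor Core math enabled
        // through CUBLAS_TENSOR_OP_MATH (CUDA 9/10-era API).
        #include <cublas_v2.h>
        #include <cuda_fp16.h>

        void gemm_fp16_tensor_op(cublasHandle_t handle,
                                 const __half* A, const __half* B, __half* C,
                                 int m, int n, int k) {
          // Opt in to Tensor Core paths where sizes and types allow it.
          cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);

          const float alpha = 1.0f, beta = 0.0f;
          // Column-major GEMM: C(m x n) = A(m x k) * B(k x n), FP32 accumulate.
          cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                       &alpha,
                       A, CUDA_R_16F, m,
                       B, CUDA_R_16F, k,
                       &beta,
                       C, CUDA_R_16F, m,
                       CUDA_R_32F, CUBLAS_GEMM_DFALT_TENSOR_OP);

          // Restore the default so other users of the handle are unaffected.
          cublasSetMathMode(handle, CUBLAS_DEFAULT_MATH);
        }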
  9. 28 Sep 2018 (1 commit)
  10. 15 Sep 2018 (1 commit)
  11. 21 Aug 2018 (1 commit)
  12. 17 Aug 2018 (1 commit)
  13. 01 Jun 2018 (1 commit)
  14. 25 Apr 2018 (1 commit)
  15. 11 Apr 2018 (1 commit)
  16. 08 Apr 2018 (1 commit)
  17. 07 Apr 2018 (1 commit)
  18. 09 Mar 2018 (1 commit)
    • Add float16 GEMM math function on GPU (#8695) · 90215b78
      Committed by kexinzhao
      * test cpu float16 data transform
      
      * add isnan etc
      
      * small fix
      
      * fix containsNAN test error
      
      * add data_type transform GPU test
      
      * add float16 GPU example
      
      * fix error
      
      * fix GPU test error
      
      * initial commit
      
      * fix error
      
      * small fix
      
      * add more gemm fp16 tests
      
      * fix error
      
      * add utility function
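      For reference, a float16 GEMM on the GPU can be expressed directly with cublasHgemm; the sketch below is one plausible shape for such a math function, not the code added in this commit, and the column-major layout and no-transpose case are assumptions.

        // Sketch: half-precision GEMM C = alpha * A * B + beta * C.
        #include <cublas_v2.h>
        #include <cuda_fp16.h>

        void hgemm(cublasHandle_t handle,
                   const __half* A, const __half* B, __half* C,
                   int m, int n, int k) {
          const __half alpha = __float2half(1.0f);
          const __half beta = __float2half(0.0f);
          // cuBLAS is column-major: A is m x k (lda = m), B is k x n (ldb = k),
          // C is m x n (ldc = m).
          cublasHgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                      &alpha, A, m, B, k, &beta, C, m);
        }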
  19. 12 Feb 2018 (1 commit)
  20. 10 Feb 2018 (2 commits)
  21. 11 Nov 2017 (1 commit)
  22. 18 Oct 2017 (1 commit)
    • MatMul operator (#4856) · 16489827
      Committed by Markus Kliegl
      * initial matmul operator
      
      Similar to np.matmul, but also has transpose_X and transpose_Y flags,
      and only supports tensors from rank 1 to 3 inclusive.
      
      For GPU, uses cublas?gemmStridedBatched. For CPU, uses
      cblas_?gemm_batch if available via MKL; otherwise a simple serial
      implementation that loops over the batch dimension is employed for now.
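      The GPU path described above maps onto cuBLAS's strided-batched GEMM. Below is a minimal sketch (not the operator's code) for row-major rank-3 inputs with the transpose flags off, using the usual operand swap to bridge cuBLAS's column-major convention; the shapes and strides are assumptions for the example.

        // Sketch: batched matmul C[b] = A[b] * B[b] for row-major tensors
        // A (batch x M x K), B (batch x K x N), C (batch x M x N).
        // cuBLAS is column-major, so we compute C^T = B^T * A^T, which lands
        // in memory exactly as row-major C.
        #include <cublas_v2.h>

        void batched_matmul(cublasHandle_t handle,
                            const float* A, const float* B, float* C,
                            int batch, int M, int N, int K) {
          const float alpha = 1.0f, beta = 0.0f;
          cublasSgemmStridedBatched(
              handle, CUBLAS_OP_N, CUBLAS_OP_N,
              /*m=*/N, /*n=*/M, /*k=*/K,
              &alpha,
              B, /*ld=*/N, /*stride=*/static_cast<long long>(K) * N,
              A, /*ld=*/K, /*stride=*/static_cast<long long>(M) * K,
              &beta,
              C, /*ld=*/N, /*stride=*/static_cast<long long>(M) * N,
              batch);
        }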
  23. 10 Aug 2017 (3 commits)
  24. 13 Jul 2017 (1 commit)
  25. 11 Jul 2017 (2 commits)
  26. 04 Jul 2017 (3 commits)
  27. 03 Jul 2017 (2 commits)