1. 01 Jun 2018, 1 commit
  2. 25 Apr 2018, 1 commit
  3. 11 Apr 2018, 1 commit
  4. 08 Apr 2018, 1 commit
  5. 07 Apr 2018, 1 commit
  6. 09 Mar 2018, 1 commit
      Add float16 GEMM math function on GPU (#8695) · 90215b78
      kexinzhao committed
      * test cpu float16 data transform
      
      * add isnan etc
      
      * small fix
      
      * fix containsNAN test error
      
      * add data_type transform GPU test
      
      * add float16 GPU example
      
      * fix error
      
      * fix GPU test error
      
      * initial commit
      
      * fix error
      
      * small fix
      
      * add more gemm fp16 tests
      
      * fix error
      
      * add utility function
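A minimal NumPy sketch of what a float16 GEMM computes (this is an illustration, not the PaddlePaddle or cuBLAS code): C = alpha * A @ B + beta * C with half-precision inputs, accumulating in float32 before casting back, which is one common strategy for keeping fp16 GEMM results accurate.

```python
import numpy as np

# Hypothetical illustration of fp16 GEMM semantics: inputs and output are
# float16, but the multiply-accumulate is carried out in float32.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8)).astype(np.float16)
B = rng.standard_normal((8, 3)).astype(np.float16)
C = np.zeros((4, 3), dtype=np.float16)
alpha, beta = 1.0, 0.0

# Accumulate in float32, then round once to float16 at the end.
acc = alpha * (A.astype(np.float32) @ B.astype(np.float32)) \
      + beta * C.astype(np.float32)
C = acc.astype(np.float16)

# The fp16 result stays close to the full-precision reference.
ref = A.astype(np.float32) @ B.astype(np.float32)
assert np.allclose(C.astype(np.float32), ref, atol=1e-2)
```

Accumulating in a wider type is a design choice, not a requirement; a pure-fp16 accumulation loses noticeably more precision as the inner dimension grows.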
  7. 12 Feb 2018, 1 commit
  8. 10 Feb 2018, 2 commits
  9. 11 Nov 2017, 1 commit
  10. 18 Oct 2017, 1 commit
      MatMul operator (#4856) · 16489827
      Markus Kliegl committed
      * initial matmul operator
      
      Similar to np.matmul, but also has transpose_X and transpose_Y flags,
      and only supports tensors from rank 1 to 3 inclusive.
      
      For GPU, uses cublas?gemmStridedBatched. For CPU, uses
      cblas_?gemm_batch if available via MKL; otherwise a simple serial
      implementation that loops over the batch dimension is employed for now.
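A minimal NumPy sketch of the serial CPU fallback the commit message describes (not the actual operator code): loop over the batch dimension, apply the optional transpose_X / transpose_Y flags, and multiply, matching np.matmul on rank-3 inputs. The function name and shapes here are assumptions for illustration.

```python
import numpy as np

def matmul_serial(X, Y, transpose_X=False, transpose_Y=False):
    """Batched matmul via a simple loop over the batch dimension.

    X, Y are rank-3 tensors: (batch, m, k) @ (batch, k, n) -> (batch, m, n),
    with optional per-matrix transposes, as in the MatMul operator's flags.
    """
    out = []
    for x, y in zip(X, Y):
        if transpose_X:
            x = x.T
        if transpose_Y:
            y = y.T
        out.append(x @ y)
    return np.stack(out)

X = np.arange(2 * 2 * 3, dtype=np.float32).reshape(2, 2, 3)
Y = np.arange(2 * 3 * 2, dtype=np.float32).reshape(2, 3, 2)

# Matches np.matmul, with and without a transpose flag.
assert np.allclose(matmul_serial(X, Y), np.matmul(X, Y))
assert np.allclose(
    matmul_serial(X, Y.transpose(0, 2, 1), transpose_Y=True),
    np.matmul(X, Y),
)
```

The batched GPU path (cublas?gemmStridedBatched) and the MKL path (cblas_?gemm_batch) compute the same per-batch products in a single library call instead of this Python-level loop.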
  11. 10 Aug 2017, 3 commits
  12. 13 Jul 2017, 1 commit
  13. 11 Jul 2017, 2 commits
  14. 04 Jul 2017, 3 commits
  15. 03 Jul 2017, 2 commits