- 12 Apr 2018, 1 commit
  - Committed by Siddharth Goyal
    * Fix cpplint errors, round 2
    * Fix pointer issue
- 12 Feb 2018, 1 commit
  - Committed by qingqing01
- 10 Feb 2018, 2 commits
- 21 Jan 2018, 1 commit
  - Committed by chengduoZH
- 19 Jan 2018, 1 commit
  - Committed by chengduoZH
- 18 Jan 2018, 3 commits
  - Committed by chengduoZH
  - Committed by chengduoZH
  - Committed by chengduoZH
- 26 Dec 2017, 1 commit
  - Committed by Luo Tao
- 12 Dec 2017, 1 commit
  - Committed by QI JUN
    There are mainly the following fixes:
    - take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place`
    - remove the `eigen_device` interface from the base class `DeviceContext`
    - remove the `GetEigenDevice` interface from `ExecutionContext` and the base class `DeviceContext`
    - remove the unused `platform::EigenDeviceConverter`
    - rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`
    - rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`
 
- 14 Nov 2017, 1 commit
  - Committed by xuwei06
    The dimension was not set correctly, and the error was not caught in release mode because eigen_assert is not enabled there.
 
 - 
 - 11 11月, 2017 1 次提交
 - 
- 
由 dangqingqing 提交于
 
- 20 Oct 2017, 1 commit
  - Committed by Yu Yang
    * Remove the template parameter from Tensor methods
    * Also check that the type is correct in data()
    * Simplify holder_
    * Fix accuracy_op
    * Register Code
 
- 18 Oct 2017, 1 commit
  - Committed by Markus Kliegl
    * Initial matmul operator. Similar to np.matmul, but it also has transpose_X and transpose_Y flags, and it only supports tensors of rank 1 to 3 inclusive. For GPU, it uses cublas?gemmStridedBatched. For CPU, it uses cblas_?gemm_batch if available via MKL; otherwise, a simple serial implementation that loops over the batch dimension is used for now.
 