- 30 Oct 2017, 1 commit
  - Committed by chengduoZH
 
- 29 Oct 2017, 1 commit
  - Committed by QI JUN
    * add sparse support for sum op
    * typo fix
    * fix gpu build error
    * fix unittest error
    * typo fix
    * infer var type and shape in op_test
    * follow comments
    * fix build error
    * bypass some unittests that depend on NetOp
    * support sparse output for lookup table grad op
    * refine codes
    * fix gpu build error
    * fix lookup table grad gpu kernel
    * fix ci
    * fix ci
    * fix ci
    * fix bug in lookup_table_grad op
    * fix bug in test_word2vec
    * register double kernel for some operators
    * set is_sparse=True in test_word2vec
    * fix lookup table grad op CUDA kernel bug
    * disable test_modified_huber_loss_op temporarily
    * disable test_lstm_unit_op temporarily
 
- 27 Oct 2017, 2 commits
  - Committed by chengduoZH
  - Committed by QI JUN
    * add sparse support for sum op
    * typo fix
    * fix gpu build error
    * fix unittest error
    * typo fix
    * infer var type and shape in op_test
    * follow comments
    * fix build error
    * bypass some unittests that depend on NetOp
 
- 26 Oct 2017, 4 commits
  - Committed by chengduoZH
  - Committed by chengduoZH
  - Committed by chengduoZH
  - Committed by Yu Yang
    * Cross Entropy Wrong
    * Fix XE
    * Polish gradient check for xe
    * Fix compile
 
- 24 Oct 2017, 5 commits
  - Committed by chengduoZH
  - Committed by chengduoZH
  - Committed by chengduoZH
  - Committed by Luo Tao
  - Committed by chengduoZH
 
- 23 Oct 2017, 4 commits
  - Committed by chengduoZH
  - Committed by chengduoZH
  - Committed by chengduoZH
  - Committed by dangqingqing
 
- 21 Oct 2017, 2 commits
  - Committed by chengduoZH
  - Committed by Yu Yang
 
- 20 Oct 2017, 1 commit
  - Committed by Yu Yang
    * Remove template parameter for Tensor methods
    * Also check that the type is correct in data()
    * Simplify holder_
    * Fix accuracy_op
    * Register Code
 
- 19 Oct 2017, 4 commits
  - Committed by dangqingqing
  - Committed by dangqingqing
  - Committed by dangqingqing
  - Committed by dangqingqing
 
- 18 Oct 2017, 5 commits
  - Committed by dangqingqing
  - Committed by chengduoZH
  - Committed by chengduoZH
  - Committed by chengduoZH
  - Committed by Markus Kliegl
    * initial matmul operator: similar to np.matmul, but also has transpose_X and transpose_Y flags, and only supports tensors of rank 1 to 3 inclusive. For GPU, uses cublas?gemmStridedBatched. For CPU, uses cblas_?gemm_batch if available via MKL; otherwise a simple serial implementation that loops over the batch dimension is used for now.
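The batched-matmul semantics this commit describes can be sketched in NumPy. The function name, flag names, and defaults below mirror the commit description but are an illustrative sketch, not PaddlePaddle's actual operator API:

```python
import numpy as np

def matmul(x, y, transpose_x=False, transpose_y=False):
    """Sketch of the described op: rank 1-3 inputs, optional transposes.

    Rank-3 inputs are treated as a batch of matrices; np.matmul's loop
    over the leading batch dimension stands in for the
    cublas?gemmStridedBatched / cblas_?gemm_batch paths mentioned above.
    """
    if x.ndim > 3 or y.ndim > 3:
        raise ValueError("only tensors of rank 1 to 3 are supported")
    if transpose_x and x.ndim >= 2:
        # swap the last two dims; a rank-1 vector has nothing to transpose
        x = np.swapaxes(x, -1, -2)
    if transpose_y and y.ndim >= 2:
        y = np.swapaxes(y, -1, -2)
    return np.matmul(x, y)

# batch of 4 matrix products in one call: (4, 2, 3) @ (4, 3, 5) -> (4, 2, 5)
a = np.random.rand(4, 2, 3)
b = np.random.rand(4, 3, 5)
out = matmul(a, b)
assert out.shape == (4, 2, 5)

# transpose_y=True contracts against the rows of y instead of its columns
c = np.random.rand(4, 5, 3)
assert matmul(a, c, transpose_y=True).shape == (4, 2, 5)
```

The serial CPU fallback named in the commit is exactly what `np.matmul` does here conceptually: one plain gemm per batch index.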
 
- 16 Oct 2017, 3 commits
  - Committed by qijun
  - Committed by qijun
  - Committed by dangqingqing
 
- 15 Oct 2017, 4 commits
- 14 Oct 2017, 4 commits