- 11 Oct 2017, 2 commits

Committed by fengjiayi

Committed by Markus Kliegl
* conv_shift_op: initial implementation using Eigen. Limitations: both gradient outputs must be specified and are always computed; the explicit for loops could be optimized in various ways (e.g., a different memory layout)
* conv shift: gradient fixes; fix the case when not all output gradients are desired
* conv shift: minor cleanup
* conv shift: more minor cleanup
* conv shift: clean up and initial GPU implementation
* fix rebase issue
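For context, conv_shift is a per-row circular convolution (the shift operation used in Neural Turing Machine addressing). A minimal NumPy sketch of the forward pass, assuming the common definition where `x` has shape `(B, M)`, the kernel `y` has odd width `N <= M`, and `out[i, j] = Σ_k x[i, (j + k) mod M] * y[i, k + N//2]` for `k` in `[-N//2, N//2]` (the function name and exact indexing here are illustrative, not the operator's source):

```python
import numpy as np

def conv_shift(x, y):
    """Row-wise circular convolution.

    x: (B, M) input, y: (B, N) kernel with N odd and N <= M.
    out[i, j] = sum over k in [-N//2, N//2] of
                x[i, (j + k) % M] * y[i, k + N//2]
    """
    B, M = x.shape
    _, N = y.shape
    half = N // 2
    out = np.zeros_like(x)
    # Explicit loops, mirroring the initial Eigen implementation's style.
    for i in range(B):
        for j in range(M):
            for k in range(-half, half + 1):
                out[i, j] += x[i, (j + k) % M] * y[i, k + half]
    return out

# Sanity check: a one-hot kernel centered at offset 0 acts as the identity.
x = np.arange(6, dtype=float).reshape(1, 6)
y = np.array([[0.0, 1.0, 0.0]])
print(conv_shift(x, y))  # same values as x
```

A kernel with its weight off-center shifts the row circularly, which is the behavior NTM-style addressing relies on.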

- 10 Oct 2017, 19 commits

Committed by chengduoZH

Committed by chengduoZH

Committed by Luo Tao

Committed by chengduoZH

Committed by Yu Yang

Committed by fengjiayi

Committed by chengduoZH

Committed by Luo Tao

Committed by Abhinav Arora

Committed by Yu Yang

Committed by fengjiayi

Committed by fengjiayi

Committed by Abhinav Arora
* Adding an implementation for copying a vector to a tensor
* Changing the Tensor test to access GPU memory indirectly

Committed by Yu Yang
It will significantly reduce binary size. It is useful for mobile deployment.

Committed by Yu Yang

Committed by Yu Yang

Committed by Abhinav Arora
* Implementing the Adamax optimizer step operator
* Adding unit tests for adamax_op
* Changing learning rate and time step to inputs from attributes
* Changing learning rate and time step to input (tensors)
* Making the Adamax operator conform to the naming convention
* Removing Tensor<float> from comments
* Rectifying the Adamax implementation
* Changing unit test values and adding comments
* Changing the unit test to test multiple steps
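Adamax is the infinity-norm variant of Adam from Kingma and Ba (2015): the second moment is replaced by an exponentially weighted infinity norm, and only the first moment needs bias correction. A minimal NumPy sketch of one update step (the function name and default hyperparameters here are illustrative, not the operator's actual interface):

```python
import numpy as np

def adamax_step(param, grad, m, u, t, lr=0.002,
                beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adamax update. m is the first-moment estimate, u the
    exponentially weighted infinity norm, t the 1-based time step.
    Returns the updated (param, m, u)."""
    m = beta1 * m + (1.0 - beta1) * grad          # first moment
    u = np.maximum(beta2 * u, np.abs(grad))       # infinity norm
    lr_t = lr / (1.0 - beta1 ** t)                # bias-correct m only
    param = param - lr_t * m / (u + eps)
    return param, m, u

# Minimize f(x) = x^2 for a few hundred steps; the gradient is 2x.
x = np.array([1.0])
m = np.zeros_like(x)
u = np.zeros_like(x)
for t in range(1, 1001):
    x, m, u = adamax_step(x, 2.0 * x, m, u, t)
print(float(abs(x[0])))  # close to 0
```

Because the update magnitude is bounded by roughly `lr_t` regardless of gradient scale, Adamax behaves like a sign-based descent with a built-in trust region, which is why no bias correction is needed for `u`.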

Committed by kavyasrinet

Committed by fengjiayi

- 09 Oct 2017, 7 commits

- 07 Oct 2017, 8 commits

Committed by qiaolongfei

Committed by qiaolongfei

Committed by Yi Wang
* Update program.md
* Update
* Update

Committed by Yan Chunwei

Committed by fengjiayi

Committed by qiaolongfei

Committed by Yan Chunwei

Committed by qiaolongfei

- 06 Oct 2017, 4 commits

Committed by Kexin Zhao

Committed by Kavya Srinet

Committed by Kavya Srinet

Committed by qiaolongfei