- 19 Oct, 2017 (1 commit)
  - Committed by wanghaoshuang
- 18 Oct, 2017 (5 commits)
  - Committed by wanghaoshuang
  - Committed by wanghaoshuang
  - Committed by QI JUN
  - Committed by Markus Kliegl
    * Initial matmul operator. Similar to np.matmul, but also has transpose_X and transpose_Y flags, and only supports tensors of rank 1 to 3 inclusive. For GPU, uses cublas?gemmStridedBatched. For CPU, uses cblas_?gemm_batch if available via MKL; otherwise a simple serial implementation that loops over the batch dimension is used for now.
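The serial CPU fallback described in this commit can be sketched in numpy. The function name below is hypothetical; only the transpose flags and the batch-dimension loop come from the commit message, and the sketch covers just the rank-2 and rank-3 cases for brevity:

```python
import numpy as np

def matmul_with_flags(x, y, transpose_x=False, transpose_y=False):
    """Sketch of the serial CPU fallback: apply the transpose flags,
    then loop over the batch dimension, one GEMM per slice."""
    if transpose_x:
        x = np.swapaxes(x, -1, -2)
    if transpose_y:
        y = np.swapaxes(y, -1, -2)
    if x.ndim == 2:
        return x @ y  # plain rank-2 GEMM
    # Rank-3: serial loop over the batch dimension.
    return np.stack([x[i] @ y[i] for i in range(x.shape[0])])
```

The MKL path (`cblas_?gemm_batch`) and the cuBLAS strided-batched path replace this loop with a single batched call over the same slices.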
  - Committed by Qiao Longfei
    * Init parameter base class
    * Optimize the comments of optimizer
    * Basic implementation of optimizer
    * Add test_optimizer
    * Add no_grad_set to interface
    * Update optimizer.py
    * Python code can run
    * Fix some problems
    * Add sync_with_cpp to Python Program and Block
    * Sync vars and ops in block from cpp
    * Optimize code and add some comments
    * Add more checks for sync
    * Update optimizer with return value of Backward
    * Remove unused code
    * Infer shape when creating gradient variable
    * Update test_optimizer
    * Update test_program.py
    * Update backward test
    * Follow comments
- 17 Oct, 2017 (4 commits)
  - Committed by Yu Yang
    They are public now
  - Committed by Yu Yang
    * Change dataType to data_type to follow PEP8
    * Change name_convention to fit PEP8
  - Committed by qijun
  - Committed by Yu Yang
    * Feed/Fetch op is just a plain operator, not an OpWithKernel
    * Do not register OpInfoMaker, since Feed/Fetch will never be configured by users
    * Feed/Fetch op has an empty gradient
    * Feed/Fetch op does not hard-code `feed_variable`, `fetch_variable` as its input and output; make them plain Operator input/output
- 16 Oct, 2017 (3 commits)
- 15 Oct, 2017 (4 commits)
- 14 Oct, 2017 (4 commits)
- 13 Oct, 2017 (6 commits)
  - Committed by typhoonzero
  - Committed by typhoonzero
  - Committed by Abhinav Arora
    * Adding hard sigmoid activation
    * Adding a comment that the slope must be positive
    * Fixing a grammatical mistake in a comment
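Hard sigmoid is a piecewise-linear approximation of the logistic sigmoid. A minimal numpy sketch follows; the positive-slope requirement comes from the commit, but the default slope and offset values here are common choices assumed for illustration, not taken from the operator:

```python
import numpy as np

def hard_sigmoid(x, slope=0.2, offset=0.5):
    # Piecewise-linear approximation of sigmoid(x): clip a line to [0, 1].
    # slope must be positive (per the commit's comment); the defaults
    # are common choices, assumed rather than sourced from the operator.
    return np.clip(slope * x + offset, 0.0, 1.0)
```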
  - Committed by Yan Chunwei
  - Committed by Yu Yang
    * Add no_grad_vars for grad_op_maker
    * Add unittest
    * Fix unittest
    * Fix unittest
    * Follow comments
  - Committed by Abhinav Arora
    * Add adam op:
      moment1_out = beta1 * moment1 + (1 - beta1) * grad
      moment2_out = beta2 * moment2 + (1 - beta2) * grad * grad
      moment1_hat = moment1_out / (1 - beta1^t)
      moment2_hat = moment2_out / (1 - beta2^t)
      param_out = param - learning_rate * moment1_hat / (sqrt(moment2_hat) + epsilon)
    * Fix moment 2
    * Adding the Adam optimization operator
    * Adding more tests for Adam op
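The update equations quoted in the Adam commit message transcribe directly into a single numpy step. The function name and default hyperparameters below are hypothetical (the standard Adam defaults); only the five equations come from the commit:

```python
import numpy as np

def adam_step(param, grad, moment1, moment2, t,
              learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8):
    # Direct transcription of the update equations from the commit message.
    moment1_out = beta1 * moment1 + (1 - beta1) * grad
    moment2_out = beta2 * moment2 + (1 - beta2) * grad * grad
    moment1_hat = moment1_out / (1 - beta1 ** t)  # bias correction, t >= 1
    moment2_hat = moment2_out / (1 - beta2 ** t)
    param_out = param - learning_rate * moment1_hat / (np.sqrt(moment2_hat) + epsilon)
    return param_out, moment1_out, moment2_out
```

The caller is responsible for carrying `moment1_out`/`moment2_out` and the step counter `t` across iterations.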
- 12 Oct, 2017 (13 commits)
  - Committed by Luo Tao
  - Committed by guosheng
  - Committed by guosheng
  - Committed by guosheng
  - Committed by kexinzhao
    * Implementing the DecayedAdagrad optimizer step operator
    * Implementing DecayedAdagrad operator
    * Remove file
    * Small fix
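DecayedAdagrad replaces Adagrad's unbounded sum of squared gradients with an exponentially decayed accumulator. The sketch below uses the standard formulation of that rule, which is an assumption here; the commit itself does not spell out the equations, and the names and defaults are hypothetical:

```python
import numpy as np

def decayed_adagrad_step(param, grad, moment,
                         learning_rate=0.01, decay=0.95, epsilon=1e-6):
    # Exponentially decayed accumulation of squared gradients
    # (standard formulation, assumed to match the operator).
    moment_out = decay * moment + (1 - decay) * grad * grad
    param_out = param - learning_rate * grad / (np.sqrt(moment_out) + epsilon)
    return param_out, moment_out
```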
  - Committed by 武毅
    * Add cudnn_conv_op
    * WIP
    * Update
    * Update
    * Fix grad check
    * Use platform::memory
    * Add group support for cudnn
    * Update
    * Follow comments
    * Fix CPU-only build
    * Update CUDA define
    * Follow comments
    * Follow comments
    * Merge with updates
    * Fix compile error
    * Follow comments
    * Follow comments
  - Committed by chengduoZH
  - Committed by dongzhihong
  - Committed by fengjiayi
  - Committed by qijun
  - Committed by dongzhihong
  - Committed by Abhinav Arora
    * Adding thresholded_relu op
    * Adding test for thresholded relu op
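Thresholded ReLU passes inputs above a threshold through unchanged and zeroes everything else, i.e. f(x) = x if x > threshold, else 0. A minimal numpy sketch; the default threshold value is an assumption, not taken from the operator:

```python
import numpy as np

def thresholded_relu(x, threshold=1.0):
    # f(x) = x if x > threshold else 0; the default threshold is assumed.
    return np.where(x > threshold, x, 0.0)
```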
  - Committed by QI JUN
    * Init
    * Unify CopyFrom interface
    * Fix GPU build error
    * Fix bug in tensor_py.h
    * Refine code comments and add TODO list
    * Fix conflicts in FeedOp and FetchOp