- 15 Aug 2017 (1 commit)
  - Committed by fengjiayi
- 14 Aug 2017 (1 commit)
  - Committed by fengjiayi
- 12 Aug 2017 (4 commits)
- 11 Aug 2017 (1 commit)
  - Committed by Yu Yang
- 10 Aug 2017 (5 commits)
  - Committed by Yu Yang
  - Committed by qingqing01
  - Committed by fengjiayi
  - Committed by fengjiayi
  - Committed by fengjiayi
- 09 Aug 2017 (1 commit)
  - Committed by Yu Yang
    - Report which parameter and which element are wrong, and what the max_diff is (sketched below).
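      A minimal sketch, assuming hypothetical helper and argument names (this is not the actual gradient_checker.py code), of how such a failure report could be produced: find the worst element and its max_diff.

      ```python
      import numpy as np

      def report_max_diff(param_name, analytic_grad, numeric_grad, max_relative_error=0.005):
          """Hypothetical helper: report the parameter, the offending element, and max_diff."""
          diff = np.abs(analytic_grad - numeric_grad)
          max_diff = diff.max()
          idx = np.unravel_index(diff.argmax(), diff.shape)   # which element is wrong
          if max_diff > max_relative_error:
              raise AssertionError(
                  "Gradient check failed for parameter '%s' at element %s: "
                  "analytic=%g, numeric=%g, max_diff=%g"
                  % (param_name, idx, analytic_grad[idx], numeric_grad[idx], max_diff))
      ```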
- 08 Aug 2017 (3 commits)
  - Committed by Qiao Longfei
    - Init grad op checker; can run
    - Add GradeChecker class
    - Use get_numeric_gradient
    - Refine code
    - Add softmax and cross entropy auto grad tests
    - Use a closeness test to judge op_grad against numeric_grad (a hedged sketch follows after this list)
    - Add CPU and GPU comparison
    - Add comments
    - Add support_gpu
    - Fix allclose
    - Fix name error and simplify code
    - Optimize gradient checker
    - Add test_cross_entropy_op
    - Update gradient_checker.py
    - Optimize code
    - Use random.uniform instead of random.random
    - Fix type bug
    - Optimize check_grad
    - Put SupportGPU into OperatorBase
    - Fix typo
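      A hedged sketch of the numeric-vs-analytic comparison this commit describes: estimate the gradient with central finite differences and judge the operator's gradient against it with an allclose-style check. The names mirror the commit's wording (get_numeric_gradient, check_grad), but the bodies are illustrative, not the real GradeChecker implementation.

      ```python
      import numpy as np

      def get_numeric_gradient(f, x, delta=0.005):
          """Estimate df/dx element-wise with central finite differences.

          f: callable mapping the ndarray x to a scalar loss.
          x: float, C-contiguous ndarray; each element is perturbed and then restored.
          """
          grad = np.zeros_like(x)
          flat_x, flat_g = x.ravel(), grad.ravel()   # views into x and grad
          for i in range(flat_x.size):
              orig = flat_x[i]
              flat_x[i] = orig + delta
              loss_plus = f(x)
              flat_x[i] = orig - delta
              loss_minus = f(x)
              flat_x[i] = orig
              flat_g[i] = (loss_plus - loss_minus) / (2.0 * delta)
          return grad

      def check_grad(op_grad, numeric_grad, max_relative_error=0.005):
          """Judge the operator gradient against the numeric gradient with allclose."""
          if not np.allclose(op_grad, numeric_grad, atol=1e-5, rtol=max_relative_error):
              raise AssertionError("gradient check failed")
      ```

      The "add cpu and gpu compare" item presumably refers to running the same op on both devices and comparing the two outputs with a similar closeness check.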
  - Committed by Yu Yang
    - Although backward_test/rnn_test do not pass yet, just comment them out for now.
  - Committed by dongzhihong
- 07 Aug 2017 (4 commits)
  - Committed by Qiao Longfei
    - Check for INFINITY in cross_entropy (see the sketch after this list)
    - Fix error
    - Use onehot_cross_entropy without a GPU kernel
    - Add support_gpu
    - Fix allclose
    - Fix name error and simplify code
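      A minimal NumPy sketch (not the Paddle kernel) of why the INFINITY check matters: when a predicted probability is 0, log(0) yields -inf, so the probability picked by the label is typically clamped before taking the log. The eps value here is an illustrative assumption.

      ```python
      import numpy as np

      def onehot_cross_entropy(probs, labels, eps=1e-12):
          """Cross entropy for integer (one-hot) labels.

          probs:  (batch, num_classes) predicted probabilities.
          labels: (batch,) integer class indices.
          Clamping with eps keeps log() away from INFINITY.
          """
          picked = probs[np.arange(probs.shape[0]), labels]
          return -np.log(np.clip(picked, eps, 1.0))
      ```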
  - Committed by Qiao Longfei
    - Add support_gpu
    - Fix allclose
    - Fix name error and simplify code
  - Committed by dongzhihong
  - Committed by Yu Yang
    - It can be run on both CPU and GPU. Configurable attributes are (a small sketch of the semantics follows after this list):
      - min: the minimum value of the uniform random distribution
      - max: the maximum value of the uniform random distribution
      - dims: the dimensions of the output tensor
      - seed: the random seed of the uniform random op; 0 means a new seed is generated each time
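      A small NumPy sketch of those semantics (illustrative only, not the actual CPU/GPU kernel); the Python names low/high stand in for the op's min/max attributes.

      ```python
      import numpy as np

      def uniform_random(dims, low=-1.0, high=1.0, seed=0):
          """Draw a tensor of shape `dims` from U[low, high), mirroring min/max/dims/seed."""
          if seed == 0:
              seed = np.random.randint(1, 2**31 - 1)  # seed == 0: generate a seed each time
          rng = np.random.RandomState(seed)
          return rng.uniform(low=low, high=high, size=dims).astype('float32')

      # Example: a 2x3 tensor drawn from U[-1, 1) with an auto-generated seed.
      sample = uniform_random(dims=[2, 3], low=-1.0, high=1.0, seed=0)
      ```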
- 04 Aug 2017 (3 commits)
- 03 Aug 2017 (3 commits)
  - Committed by Yan Chunwei
    - Move net_op to operators
  - Committed by Yu Yang
  - Committed by Yu Yang
- 02 Aug 2017 (5 commits)
- 01 Aug 2017 (2 commits)
- 31 Jul 2017 (3 commits)
- 30 Jul 2017 (1 commit)
  - Committed by dongzhihong
- 29 Jul 2017 (1 commit)
  - Committed by Yan Chunwei
    - Add RNN op interfaces
    - Add Run
    - Rename state -> memory
    - Change state -> memory
    - Make it compilable
    - Add .cc
    - Init test
    - Add op fake implementation
    - Add CreateStepNet and CreateScopes implementation
    - Add TODO list
    - Init memory attributes
    - Add LinkMemories (a conceptual sketch of the step flow follows after this list)
    - Add PlainNet fake implementation
    - Use std::shared_ptr<Scope> in the OpRunContext
    - Add test
    - Disable mutable_data
    - Finish SegmentInput function
    - Enable mutable_data with a trick
    - RNNOp test
    - Enable LinkMemories with mutable_data
    - Update SegmentInput function with comments
    - Finish ConcatOutput function
    - Reformat inputs and attributes, boot_memories
    - Refine unit test
    - Refine unit test
    - Modify inlinks
    - Add OpDesc to Net
    - Fix bug and update unit test
    - Move step scopes from inputs to outputs
    - Fix merge conflict, update SegmentInput function
    - Add RecurrentOpProtoAndCheckerMaker
    - Clean the code
    - Abstract GetStepScopes and GetMaxSeqLen functions
    - Refine LinkMemories
    - Refine code and add some comments
    - Add backward core
    - Update for develop branch
    - Add forward core
    - Add forward algorithm
    - Add RecurrentGradientAlgorithm implementation
    - Use CopyFrom and Slice functions in RecurrentOp
    - Add unit test for LinkMemories
    - Fix unit test
    - Use the latest tensor.h, solve conflict
    - Add maker
    - Move SegmentInput and ConcatOutput to the details namespace
    - Unit test for RecurrentGradientAlgorithm
    - Apply OperatorBase
    - Apply net operator
    - Move memories to attributes
    - Add RecurrentGradientOp
    - Open unit test in recurrent_network_op_test
    - Revert some files
    - Add RecurrentArgument and Link structs to simplify member variables
    - Rename
    - Move recurrent_op from framework to operators
    - Add RecurrentGradientOp Init
    - Fix name
    - Fix Link internal/external names
    - Use namespace operators instead of framework
    - Clean the code
    - Use the latest add_op and mul_op; don't test backward for now
    - Remove ScopePtr and OperatorPtr
    - Add get_net to pybind
    - Add test_recurrent_op.py
    - Add random into gen_tensor
    - Update to develop branch and refine some code
    - Add some comments
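      A conceptual Python sketch of the forward flow outlined above, under loose assumptions (the names and shapes are illustrative, not the real RecurrentOp): SegmentInput slices the sequence into per-step inputs, LinkMemories feeds each step scope the previous step's memory (the first step gets the boot memory), and ConcatOutput stitches the per-step outputs back together.

      ```python
      import numpy as np

      def recurrent_forward(x, boot_memory, step_fn):
          """x: (seq_len, batch, dim) input, boot_memory: (batch, dim) initial memory.

          step_fn(x_t, pre_memory) -> (output_t, memory_t) plays the role of the step net.
          """
          step_scopes, outputs = [], []            # one "scope" (here, a dict) per time step
          memory = boot_memory                     # boot memory feeds the first step
          for t in range(x.shape[0]):              # SegmentInput: take one time step
              scope = {"x": x[t], "pre_memory": memory}   # LinkMemories: link previous memory
              out, memory = step_fn(scope["x"], scope["pre_memory"])
              scope["output"], scope["memory"] = out, memory
              step_scopes.append(scope)
              outputs.append(out)
          return np.stack(outputs), step_scopes    # ConcatOutput: stack per-step outputs

      def tanh_cell(x_t, pre_memory, W=np.eye(4), U=np.eye(4)):
          """Toy step net: a tanh RNN cell whose output is also its new memory."""
          h = np.tanh(x_t @ W + pre_memory @ U)
          return h, h

      out, scopes = recurrent_forward(np.random.randn(3, 2, 4), np.zeros((2, 4)), tanh_cell)
      ```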
- 27 Jul 2017 (1 commit)
  - Committed by Yu Yang
- 26 Jul 2017 (1 commit)
  - Committed by Yu Yang