1. 12 Aug 2017, 2 commits
  2. 11 Aug 2017, 1 commit
  3. 10 Aug 2017, 5 commits
  4. 09 Aug 2017, 1 commit
  5. 08 Aug 2017, 3 commits
    • add gradient test framework (#3226) · e31a469e
      Committed by Qiao Longfei (a minimal sketch of the numeric-gradient idea follows this entry)
      * init grad op checker
      
      * can run
      
      * add GradeChecker class
      
      * use get_numeric_gradient
      
      * refine code
      
      * add softmax and cross entropy auto grad test
      
      * use close to judge op_grad and numeric_grad
      
      * add cpu and gpu compare
      
      * add comments
      
      * add support_gpu
      
      * fix allclose
      
      * fix name error and simplify code
      
      * optimize gradient checker
      
      * add test_cross_entropy_op
      
      * update gradient_checker.py
      
      * optimize code
      
      * use random.uniform instead of random.random
      
      * fix type bug
      
      * optimize check_grad
      
      * put SupportGPU into OperatorBase
      
      * typo
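      The commit above adds a gradient test framework that compares an operator's analytic gradient against a numeric gradient and judges the two with allclose. Below is a minimal sketch of that core idea in plain numpy; the function names are illustrative, not the framework's actual GradientChecker / get_numeric_gradient API.

```python
import numpy as np

def numeric_gradient(f, x, delta=1e-3):
    """Estimate df/dx with central finite differences, one element at a time."""
    x = x.astype(np.float64)          # work on a double-precision copy
    grad = np.zeros_like(x)
    for idx in np.ndindex(*x.shape):
        orig = x[idx]
        x[idx] = orig + delta
        plus = f(x)
        x[idx] = orig - delta
        minus = f(x)
        x[idx] = orig                 # restore before the next element
        grad[idx] = (plus - minus) / (2.0 * delta)
    return grad

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Analytic gradient of one-hot cross entropy over softmax is (softmax - one_hot);
# the checker judges it against the numeric estimate with np.allclose.
label = 2
loss = lambda logits: -np.log(softmax(logits)[label])

logits = np.random.uniform(-1.0, 1.0, size=5)   # random.uniform, as in the commit
analytic = softmax(logits)
analytic[label] -= 1.0

assert np.allclose(analytic, numeric_gradient(loss, logits), atol=1e-4)
```
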
    • Make Compile Pass · dba618c0
      Committed by Yu Yang
      * Although backward_test/rnn_test does not pass yet, just comment them out.
    • "test gaussian random in python side" · 52d2ebda
      Committed by dongzhihong
  6. 07 Aug 2017, 4 commits
    • check INFINITY in cross_entropy (#3287) · 4bbd05fd
      Committed by Qiao Longfei (a sketch of the clipping idea follows this entry)
      * check INFINITY in cross_entropy
      
      * fix error
      
      * use onehot_cross_entropy without GPU kernel
      
      * add support_gpu
      
      * fix allclose
      
      * fix name error and simplify code
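      The commit above guards one-hot cross entropy against infinite values, which appear when the predicted probability of the true class is 0 and -log(0) is taken. A hedged numpy sketch of that guard is shown below; it illustrates the check, not the operator's actual CPU kernel.

```python
import numpy as np

def onehot_cross_entropy(probs, labels, tol=1e-20):
    """probs: (batch, classes) probabilities; labels: (batch,) int class ids.

    Clipping the picked probability away from zero keeps -log() finite,
    which is the kind of INFINITY check the commit above refers to.
    """
    picked = probs[np.arange(len(labels)), labels]
    picked = np.clip(picked, tol, 1.0)      # avoid log(0) -> -inf
    loss = -np.log(picked)
    assert np.all(np.isfinite(loss)), "cross entropy produced INF/NaN"
    return loss

probs = np.array([[0.7, 0.3, 0.0],
                  [0.0, 1.0, 0.0]])
print(onehot_cross_entropy(probs, np.array([2, 1])))   # finite even where p == 0
```
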
    • add support_gpu (#3304) · 493396d8
      Committed by Qiao Longfei (see the test-pattern sketch after this entry)
      * add support_gpu
      
      * fix allclose
      
      * fix name error and simplify code
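      The commit above adds a support_gpu query so tests only compare an operator's CPU and GPU outputs when a GPU kernel exists, and fixes how allclose is applied. A small hedged sketch of that test pattern follows; `run_op` and `op.support_gpu` are placeholders mirroring the commit message, not the framework's real API.

```python
import numpy as np

def check_cpu_gpu_consistency(op, inputs, run_op, atol=1e-5):
    """Run `op` on CPU, and on GPU only if it reports GPU support,
    then judge the two outputs with np.allclose (absolute tolerance)."""
    cpu_out = run_op(op, inputs, place="CPU")
    if not op.support_gpu:          # hypothetical flag, mirroring the commit
        return True                 # nothing to compare against
    gpu_out = run_op(op, inputs, place="GPU")
    return np.allclose(cpu_out, gpu_out, atol=atol)
```
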
    • "remove type alias done." · 72fb86a2
      Committed by dongzhihong
    • Add uniform random operator · e376bda4
      Committed by Yu Yang (a sketch of the attribute contract follows this entry)
      It can run on both CPU and GPU. The configurable attributes are:
      
      * min: the min value of uniform random
      * max: the max value of uniform random
      * dims: the dimension of output tensor
      * seed: the random seed of uniform random. 0 means generate a seed each time.
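      The attribute contract above (min, max, dims, and a seed where 0 means "pick a fresh seed every run") can be written down as a small numpy sketch. This only illustrates the semantics; it is not the operator's CPU/CUDA kernel.

```python
import numpy as np

def uniform_random(min=-1.0, max=1.0, dims=(2, 3), seed=0):
    """Mimic the attribute semantics: seed == 0 draws a fresh seed each call,
    any other value makes the output reproducible. Parameter names mirror
    the operator attributes described above."""
    actual_seed = np.random.randint(1, 2**31 - 1) if seed == 0 else seed
    rng = np.random.RandomState(actual_seed)
    return rng.uniform(low=min, high=max, size=dims)

a = uniform_random(seed=7)
b = uniform_random(seed=7)
assert np.array_equal(a, b)                                     # fixed seed: reproducible
assert not np.array_equal(uniform_random(), uniform_random())   # seed=0: differs per call
```
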
  7. 04 Aug 2017, 3 commits
  8. 03 Aug 2017, 3 commits
  9. 02 Aug 2017, 5 commits
  10. 01 Aug 2017, 2 commits
  11. 31 Jul 2017, 3 commits
  12. 30 Jul 2017, 1 commit
  13. 29 Jul 2017, 1 commit
    • RecurrentOp implementation (#2890) · aee0d3ec
      Committed by Yan Chunwei (a sketch of the step-scope data flow follows this entry)
      * add rnn op interfaces
      
      * add Run
      
      * rename state -> memory
      
      * change state -> memory
      
      * make compilable
      
      * add .cc
      
      * init test
      
      * add op fake implementation
      
      * add CreateStepNet and CreateScopes implementation.
      
      * add TODO list
      
      * init memory attributes.
      
      * add LinkMemories
      
      * add PlainNet fake implementation
      
      * Use std::shared_ptr<Scope> in the OpRunContext.
      
      * add test
      
      * disable mutable_data
      
      * finish SegmentInput function
      
      * enable mutable_data with a trick
      
      * RNNOp test.
      
      * enable LinkMemories with mutable_data
      
      * update SegmentInput function with comments
      
      * finish ConcatOutput function
      
      * reformat inputs and attributes: boot_memories
      
      * Refine unit test.
      
      * Refine unit test.
      
      * modify inlinks.
      
      * add OpDesc to Net
      
      * fix bug and update unit test.
      
      * move step scopes from inputs to outputs
      
      * fix merge conflict, update SegmentInput function
      
      * add RecurrentOpProtoAndCheckerMaker.
      
      * clean the codes
      
      * Abstract GetStepScopes and GetMaxSeqLen function
      
      * refine LinkMemories
      
      * Refine code and add some comments.
      
      * add backward core
      
      * update for develop branch.
      
      * add forward core
      
      * add forward algorithm
      
      * Add RecurrentGradientAlgorithm implementation.
      
      * use CopyFrom and Slice function in RecurrentOp
      
      * add unit test for LinkMemories.
      
      * fix unit test.
      
      * use the latest tensor.h, solve conflict
      
      * add maker
      
      * move SegmentInput and ConcatOutput to details namespace
      
      * unit test for RecurrentGradientAlgorithm.
      
      * apply OperatorBase
      
      * apply net operator.
      
      * move memories to attributes
      
      * add RecurrentGradientOp
      
      * enable the unit test in recurrent_network_op_test.
      
      * revert some files.
      
      * add RecurrentArgument and Link struct to simplify member variable.
      
      * rename.
      
      * move recurrent_op from framework to operators
      
      * add RecurrentGradientOp Init
      
      * fix name
      
      * fix Link.internal/external name
      
      * use namespace operators instead of framework
      
      * clean the code
      
      * use the latest add_op and mul_op, don't test backward now
      
      * Remove ScopePtr and OperatorPtr
      
      * add get_net to pybind
      
      * add test_recurrent_op.py
      
      * add random into gen_tensor
      
      * update to develop branch and refine some code.
      
      * add some comments.
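      The commit above builds RecurrentOp out of a few moving parts: SegmentInput splits the input along the time axis, step scopes hold per-step variables, LinkMemories feeds each step's memory from the previous step (seeded by boot_memories), and ConcatOutput stitches the step outputs back together. The compact numpy sketch below traces that forward data flow; the step function and names are illustrative, not the C++ implementation.

```python
import numpy as np

def recurrent_forward(x, boot_memory, step_fn):
    """x: (seq_len, batch, dim) input; boot_memory: (batch, dim) initial memory.

    SegmentInput  -> split x into per-step slices
    LinkMemories  -> each step reads the previous step's memory
    ConcatOutput  -> stack per-step outputs along the time axis
    """
    step_inputs = [x[t] for t in range(x.shape[0])]   # SegmentInput
    memory = boot_memory                              # boot_memories
    outputs = []
    for inp in step_inputs:
        memory = step_fn(inp, memory)                 # one "step scope" per step
        outputs.append(memory)
    return np.stack(outputs, axis=0)                  # ConcatOutput

# A toy step net: a single tanh RNN cell with shared weights.
dim, batch, seq_len = 4, 2, 3
W_x = np.random.uniform(-0.1, 0.1, (dim, dim))
W_h = np.random.uniform(-0.1, 0.1, (dim, dim))
step = lambda inp, mem: np.tanh(inp @ W_x + mem @ W_h)

x = np.random.uniform(-1.0, 1.0, (seq_len, batch, dim))
out = recurrent_forward(x, np.zeros((batch, dim)), step)
print(out.shape)   # (3, 2, 4): one output slice per time step
```
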
  14. 27 Jul 2017, 1 commit
  15. 26 Jul 2017, 1 commit
  16. 25 Jul 2017, 4 commits