1. 19 Oct, 2017 · 7 commits
  2. 18 Oct, 2017 · 7 commits
    • LSTM Operator forward implementation. · 2a8dbd13
      Committed by dangqingqing
    • fix conflict (#4883) · 521514da
      Committed by QI JUN
    • MatMul operator (#4856) · 16489827
      Committed by Markus Kliegl
      * initial matmul operator
      
      Similar to np.matmul, but also has transpose_X and transpose_Y flags,
      and only supports tensors from rank 1 to 3 inclusive.
      
      For GPU, uses cublas?gemmStridedBatched. For CPU, uses
      cblas_?gemm_batch if available via MKL; otherwise a simple serial
      implementation that loops over the batch dimension is employed for now.
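The commit message above describes the operator's semantics: np.matmul with added transpose flags, ranks 1 to 3, and a serial batch loop as the CPU fallback. A minimal NumPy sketch of those semantics might look like the following; the function name `matmul_ref` and the rank-1 handling of the transpose flags are illustrative assumptions, not the operator's actual code.

```python
import numpy as np

def matmul_ref(X, Y, transpose_X=False, transpose_Y=False):
    # Sketch of the described semantics: np.matmul plus transpose_X /
    # transpose_Y flags, restricted to tensors of rank 1 to 3, with the
    # batched case computed by a serial loop over the batch dimension
    # (mirroring the simple CPU fallback mentioned in the commit).
    X, Y = np.asarray(X), np.asarray(Y)
    assert 1 <= X.ndim <= 3 and 1 <= Y.ndim <= 3, "ranks 1 to 3 only"
    if transpose_X and X.ndim >= 2:
        X = np.swapaxes(X, -2, -1)   # transpose the last two dims
    if transpose_Y and Y.ndim >= 2:
        Y = np.swapaxes(Y, -2, -1)
    if X.ndim == 3 or Y.ndim == 3:
        # serial batch loop; non-batched operand is reused each step
        batch = X.shape[0] if X.ndim == 3 else Y.shape[0]
        out = [np.dot(X[b] if X.ndim == 3 else X,
                      Y[b] if Y.ndim == 3 else Y)
               for b in range(batch)]
        return np.stack(out)
    return np.dot(X, Y)
```

In the real operator the batched GPU path maps to cublas gemmStridedBatched and the MKL CPU path to cblas gemm_batch; the loop above only illustrates the fallback behavior.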
    • fix gpu build error · f9681459
      Committed by qijun
    • add sparse sgd operator unittest · ab8cc401
      Committed by qijun
    • add sparse kernel of sgd operator · 182ce51c
      Committed by qijun
    • Impl optimizer (#4734) · df0946eb
      Committed by Qiao Longfei
      * init parameter base class
      
      * optimize the Comments of optimizer
      
* basic implementation of optimizer
      
      * add test_optimizer
      
      * add no_grad_set to interface
      
      * update optimizer.py
      
      * python code can run
      
      * fix some problem
      
      * add sync_with_cpp to Python Program and Block
      
      * sync vars and ops in block from cpp
      
      * optimize code and add some comment
      
      * add more check for sync
      
      * update optimizer with return value of Backward
      
      * rm unused code
      
* infer shape when creating gradient variable
      
      * update test_optimizer
      
      * update test_program.py
      
      * update backward test
      
      * follow comment
  3. 17 Oct, 2017 · 4 commits
  4. 16 Oct, 2017 · 4 commits
  5. 15 Oct, 2017 · 4 commits
  6. 14 Oct, 2017 · 4 commits
  7. 13 Oct, 2017 · 6 commits
  8. 12 Oct, 2017 · 4 commits