1. 14 Oct, 2020 · 1 commit
    • Multi task (#26002) · 5a83496c
      Committed by zhang wenhui
      * add multitask
      
      * add multitask, test=develop
      
      * fix code style, test=develop
      
      * add partial push dense, test=develop
      
      * fix has_key in py3, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
  2. 30 Jul, 2020 · 1 commit
  3. 03 Apr, 2020 · 1 commit
    • update linspace, equal operators to API 2.0 (#23274) · a2e10930
      Committed by channings
      * update linspace, equal operators to API 2.0, test=develop
      
      * equal now supports a higher-performance CUDA kernel, test=develop
      
      * update comment of equal&linspace operator, test=develop
      
      * update comment of equal&linspace operator, test=develop
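      For context, here is a minimal usage sketch of the 2.0-style linspace and equal APIs this PR migrates; it assumes a Paddle 2.x install and is illustrative only, not code from the PR.

      import paddle

      x = paddle.linspace(0, 1, 5)        # evenly spaced values: [0., 0.25, 0.5, 0.75, 1.]
      y = paddle.linspace(0, 1, 5)
      print(paddle.equal(x, y).numpy())   # elementwise bool comparison -> all True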
  4. 06 Dec, 2019 · 1 commit
    • Add Much Complex Test and Fix Bugs for Control Flow cond API (#21532) · 1dcf6a72
      Committed by Huihuang Zheng
      Add tests that use dy/dx to make sure the gradient values calculated by the control flow backward pass are correct. Also fix bugs detected by those tests.
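
      The checking strategy reads roughly like the sketch below: compare the dy/dx a backward pass should produce against an independent reference value. The piecewise function and the finite-difference check are hypothetical stand-ins for illustration, not the PR's actual test code.

      import numpy as np

      # Hypothetical stand-in: a piecewise function plays the role of a cond branch.
      def f(x):
          return np.where(x > 0, 2.0 * x, -x)

      def analytic_dydx(x):
          # The dy/dx the backward pass is expected to produce.
          return np.where(x > 0, 2.0, -1.0)

      def numeric_dydx(x, eps=1e-6):
          # Central finite difference as an independent reference.
          return (f(x + eps) - f(x - eps)) / (2.0 * eps)

      x = np.array([-1.5, 0.3, 2.0])
      assert np.allclose(analytic_dydx(x), numeric_dydx(x), atol=1e-4)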
      
      Fix bugs:
      
      1. Unlike sum_op, optimizer ops don't allow uninitialized input tensors. But in conditional_block_grad_op, since the conditional_block may not run, the output gradient tensor may be uninitialized, which makes the optimizer op fail. To fix it, we could either let optimizer ops accept uninitialized input the way sum_op does, or assign the uninitialized gradient to 0 when conditional_block_grad_op doesn't run. There are 10+ optimizer ops, so **to keep things simple, this PR just assigns the output gradient of conditional_block_grad_op to 0** (see the sketch after this list). Whether optimizer ops can be made to accept uninitialized input tensors like sum_op is still worth exploring, because we could then skip the assignment in conditional_block_grad_op and theoretically speed things up.
      
      2. Infer parameter shapes during append_backward. I hadn't realized that all our parameters live in the global block: when an op_desc infers shapes in a sub-block, it may not know the shapes of the gradients of parameters whose shape information sits in the global block. I fixed it by inferring the gradients' shapes from the corresponding forward variables.
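
      A minimal Python sketch of the fallback described in bug 1 (hypothetical helper names, not the operator's actual C++ code): when the conditional block is skipped, the gradient it would have produced is replaced by zeros so downstream optimizer ops always receive an initialized tensor.

      import numpy as np

      def conditional_block_grad(ran_forward, upstream_grad, forward_var):
          # Hypothetical stand-in for conditional_block_grad_op's fallback path.
          if not ran_forward or upstream_grad is None:
              # The block was skipped: emit zeros so the optimizer op never
              # reads an uninitialized gradient tensor.
              return np.zeros_like(forward_var)
          return upstream_grad

      w = np.ones(3)
      print(conditional_block_grad(False, None, w))            # [0. 0. 0.]
      print(conditional_block_grad(True, np.full(3, 0.5), w))  # [0.5 0.5 0.5]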
      
      This PR also did some code clean up:
      1. Print the var name when sgd_op catches a shape error so that it is easier to debug
      2. Fix a typo: dicta -> dict
  5. 02 Aug, 2019 · 1 commit
    • Open gc by default (#18836) · 7ac748ad
      Committed by Zeng Jinle
      * open gc by default, test=develop
      
      * fix test_train_recognize_digits and disable gc when ngraph is enabled, test=develop
      
      * fix conditional_block op eager deletion bug, test=develop
      
      * add some comments to reviewers, test=develop
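      As a hedged illustration of what "open gc by default" toggles: eager garbage collection in Paddle fluid is governed by the FLAGS_eager_delete_tensor_gb setting, which can be pinned explicitly via the environment. The snippet assumes that flag is honored by the release in use and is not taken from this PR.

      import os

      # 0.0 enables eager deletion of no-longer-needed tensors; a negative value
      # opts out. Set it before importing paddle so the flag is read at startup.
      os.environ.setdefault('FLAGS_eager_delete_tensor_gb', '0.0')

      import paddle.fluid as fluid  # the 1.x-era API these commits target
      print(os.environ['FLAGS_eager_delete_tensor_gb'])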
  6. 19 Jul, 2019 · 1 commit
    • Support memory eager deletion on recurrent OP (#17710) · 89bc3fd8
      Committed by Huihuang Zheng
      Tested PaddingRNN on a V100 GPU device.

      Test configuration: large model, padding mode (the mode that uses RecurrentOp), one GPU.

      GPU memory (MiB): 6414 (this PR) vs 6837 (without this PR)
      Speed (steps/s):  10.28 (this PR) vs 9.89 (without this PR)
  7. 28 Mar, 2019 · 1 commit
  8. 27 Mar, 2019 · 1 commit
  9. 05 Mar, 2019 · 1 commit
  10. 14 Dec, 2018 · 1 commit
  11. 16 Nov, 2018 · 1 commit
    • Refine operator cmake (#14413) · a2d9b344
      Committed by Wu Yi
      * wip simplify operator framework
      
      * wip
      
      * wip
      
      * done test=develop
      
      * clean test=develop
      
      * fix test=develop
      
      * fix deps test=develop
      
      * fix cpu build test=develop
      
      * fix tensorrt build test=develop
      
      * fix tests test=develop
      
      * fix test=develop
      
      * fix cpu build test=develop