- 20 June 2019 (1 commit)
Committed by qingqing01
* Update the backward appending strategy to support double backward and fix some bugs. (#18104)
* Update backward.py:
  - If none of this op's input grad vars appears among the outputs of previously appended ops, do not append the op to the graph (see the sketch below).
  - Only apply this strategy when building the double-backward graph.
* Update some double backward ops.
* Update sum_op to judge whether a tensor is empty by numel or IsInitialized().
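
A minimal, self-contained sketch of the pruning rule described in the bullets above, assuming candidate grad ops are visited in append order. The `append_pruned_grad_ops` helper, the op tuples, and the `@GRAD` var names are illustrative only, not the actual backward.py code.

```python
# Illustrative sketch of the pruning rule: when building the double-backward
# graph, a candidate grad op is skipped if none of its input grad vars was
# produced by any previously appended op.

def append_pruned_grad_ops(candidate_grad_ops, initial_grad_vars):
    """candidate_grad_ops: (op_name, input_grad_vars, output_grad_vars) tuples,
    in the order they would normally be appended.
    initial_grad_vars: grad vars that already exist, e.g. the target's grad."""
    available = set(initial_grad_vars)
    appended = []
    for name, in_grads, out_grads in candidate_grad_ops:
        # No input grad var is available -> this op can never receive a
        # gradient, so do not append it to the graph.
        if not set(in_grads) & available:
            continue
        appended.append(name)
        available.update(out_grads)
    return appended


ops = [
    ("mean_grad", ["y@GRAD"], ["x@GRAD"]),
    ("relu_grad", ["z@GRAD"], ["w@GRAD"]),  # z@GRAD is never produced
]
print(append_pruned_grad_ops(ops, ["y@GRAD"]))  # -> ['mean_grad']
```

Skipping such ops keeps the double-backward graph free of ops that could never receive a gradient.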
- 14 May 2019 (1 commit)
Committed by lvmengsi
* test=develop, double backward reduce_mean (see the sketch below)
* add comment. test=develop
* fix format. test=develop
* rename GradGrad -> DoubleGrad. test=develop
* fix op_use_default_grad_op_maker.spec. test=develop
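
A small numpy sketch of the math behind the reduce_mean DoubleGrad op added above, assuming a full mean over a 1-D tensor: the first backward of y = mean(x) is dx_i = dy / n, which is linear in dy, so the double backward reduces to taking the mean of the incoming grad-of-grad. The variable names (dx, ddx, ddy) are illustrative, not Paddle's.

```python
import numpy as np

n = 4
dy = 1.0                      # upstream grad of y = mean(x)
dx = np.full(n, dy / n)       # first backward: dx_i = dy / n

ddx = np.random.randn(n)      # grad flowing into dx (grad-of-grad input)
ddy = ddx.mean()              # what a reduce_mean DoubleGrad op would output

# Finite-difference check: perturb dy and see how <dx, ddx> changes.
eps = 1e-6
dx_plus = np.full(n, (dy + eps) / n)
numeric = ((dx_plus - dx) @ ddx) / eps
assert np.isclose(numeric, ddy, atol=1e-5)
print("double grad of mean:", ddy)
```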
- 12 April 2019 (2 commits)
Committed by zhoukunsheng
Committed by zhoukunsheng
bug fix: reduce_all and reduce_any register a GRAD_OP but do not define a GradKernel
- 25 March 2019 (1 commit)
Committed by zhoukunsheng
split reduce_all_any_op.h into two files; add unit tests for reduce_all and reduce_any
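
A rough illustration of what the new unit tests check, using numpy as the reference: reduce_all and reduce_any over a boolean tensor should match np.all and np.any along the reduced dims. This is not the actual Paddle test code, and `reduce_all_ref` / `reduce_any_ref` are hypothetical names used only for this sketch.

```python
import numpy as np

def reduce_all_ref(x, dim=None, keep_dim=False):
    # Reference semantics: logical AND over the reduced dims.
    return np.all(x, axis=dim, keepdims=keep_dim)

def reduce_any_ref(x, dim=None, keep_dim=False):
    # Reference semantics: logical OR over the reduced dims.
    return np.any(x, axis=dim, keepdims=keep_dim)

x = np.array([[True, False, True],
              [True, True,  True]])

assert not reduce_all_ref(x)                       # not every element is True
assert reduce_any_ref(x)                           # at least one element is True
assert (reduce_all_ref(x, dim=1) == np.array([False, True])).all()
assert (reduce_any_ref(x, dim=0) == np.array([True, True, True])).all()
print("reduce_all / reduce_any reference checks passed")
```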
- 20 March 2019 (1 commit)
Committed by zhoukunsheng
add reduce_all and reduce_any ops
- 02 February 2019 (2 commits)
- 16 November 2018 (1 commit)
Committed by Wu Yi
* wip simplify operator framework
* wip
* wip
* done test=develop
* clean test=develop
* fix test=develop
* fix deps test=develop
* fix cpu build test=develop
* fix tensorrt build test=develop
* fix tests test=develop
* fix test=develop
* fix cpu build test=develop