- 24 Dec, 2020 (1 commit)

Committed by tangwei12

* oneps (3/4)
  Co-authored-by: MrChengmo <cmchengmo@163.com>
  Co-authored-by: malin10 <malin10@baidu.com>
  Co-authored-by: chengmo <chengmo@baidu.com>

- 26 Nov, 2020 (1 commit)

Committed by Chen Weihang

* add static_only for static api
* add static_only for class init
* remove static_only for default_main_program
* remove create_parameter & startup_program
* remove failed apis
* revert py_func import
* remove global scope
* remove some api
* remove cuda pinned place

- 14 Oct, 2020 (1 commit)

Committed by Yiqun Liu

- 28 Sep, 2020 (1 commit)

Committed by Aurelius84

* modify sample code
* variable -> tensor
* migrate program_guard sample code
* refine error message
* migrate program_guard
* refine comment style
* fix indent

- 21 Sep, 2020 (1 commit)

Committed by Leo Chen

* support using add instead of sum for gradient accumulation
* add inplace addto pass
* add grad_add op and inplace addto pass
* remove debug code
* code refine
* fix bug when several sum ops are inserted at the same op_idx
* fix Flags type
* add addto attribute for conv3d
* fix ut
* code clean
* fix type

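A minimal sketch of how the addto strategy above might be enabled from Python; the `enable_addto` build-strategy switch is an assumption based on this log, not a confirmed API:

```python
import paddle.fluid as fluid

# Tiny conv net whose weight gradients get accumulated in backward.
x = fluid.data(name='x', shape=[None, 3, 8, 8], dtype='float32')
conv = fluid.layers.conv2d(input=x, num_filters=4, filter_size=3)
loss = fluid.layers.mean(conv)
fluid.optimizer.SGD(learning_rate=0.01).minimize(loss)

# Assumption: the inplace addto pass is toggled through BuildStrategy,
# replacing the separate sum op with in-place grad_add accumulation.
build_strategy = fluid.BuildStrategy()
build_strategy.enable_addto = True

compiled = fluid.CompiledProgram(
    fluid.default_main_program()).with_data_parallel(
        loss_name=loss.name, build_strategy=build_strategy)
```
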
- 11 Sep, 2020 (1 commit)

Committed by Aurelius84

* fix calcu_gradients
* fix code place
* fix embedding interface usage

- 13 Jul, 2020 (1 commit)

Committed by liym27

[while grad] Support pruning ops in find_op_path for the while sub-block when appending backward (#25330). Prune ops that are unrelated to the loss in the while sub-block when constructing the backward op path.

- 14 May, 2020 (1 commit)

Committed by Cindy Cai

* test=develop, test=document_fix
* test=develop, test=document_fix
  Co-authored-by: swtkiwi <1208425345@qq.com>

- 30 Apr, 2020 (1 commit)

Committed by qingqing01

Rename internal gradient variables in multiple backward passes so that they get different names from the previous backward. For example, with y = x * x and grad = fluid.gradients(fluid.gradients(y, x) + y * y, x), the second-time backward may otherwise produce gradient variable names for the partial forward network (y * y) that collide with those from the first-time fluid.gradients(y, x). test=develop

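A runnable version of the double-backward example named in this message; a minimal sketch using the public fluid.gradients API (shapes and values are illustrative):

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[1], dtype='float32')
x.stop_gradient = False
y = x * x

# First backward pass: dy/dx.
(dx,) = fluid.gradients(y, x)

# Second backward pass over a target that reuses part of the forward
# network (y * y); the renaming above keeps its internal gradient
# variable names from colliding with those of the first pass.
(ddx,) = fluid.gradients(dx + y * y, x)
```
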
- 15 Apr, 2020 (1 commit)

Committed by mapingshuo

* allow amp and recompute to work together

- 10 Apr, 2020 (1 commit)

Committed by Aurelius84

* API/OP (append_backward) error message enhancement test=develop
* polish check_type test=develop
* fix failed unittest test=develop
* merge develop test=develop

- 09 Apr, 2020 (1 commit)

Committed by Aurelius84

* API (fluid.gradients) error message enhancement test=develop
* fix failed unittest test=develop

- 20 Mar, 2020 (1 commit)

Committed by Zeng Jinle

* add double grad implementation for dygraph, test=develop
* polish code, add uts, test=develop
* fix place bug, test=develop
* polish code, add more uts for coverage, test=develop
* add no_grad_set, test=develop
* add star gan ut, test=develop
* follow comments, test=develop

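A minimal sketch of dygraph double grad, assuming it is exposed as fluid.dygraph.grad with a create_graph switch (names follow this log's era, not verified against a specific release):

```python
import numpy as np
import paddle.fluid as fluid

with fluid.dygraph.guard():
    x = fluid.dygraph.to_variable(np.array([2.0], dtype='float32'))
    x.stop_gradient = False
    y = x * x

    # First-order grad; create_graph=True records the backward ops so
    # they can themselves be differentiated.
    (dx,) = fluid.dygraph.grad(y, x, create_graph=True)

    # Second-order grad of y w.r.t. x, i.e. d(dy/dx)/dx = 2.
    (ddx,) = fluid.dygraph.grad(dx, x)
```
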
- 19 Mar, 2020 (1 commit)

Committed by Zhang Ting

- 17 Mar, 2020 (1 commit)

Committed by Zhang Ting

- 03 Mar, 2020 (1 commit)

Committed by Zhang Ting

* add fluid.device_guard to specify the device type for an op

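A minimal sketch of fluid.device_guard as described above; the "cpu" / "gpu:0" device strings follow the usual fluid convention, and the gpu branch assumes a CUDA-enabled build:

```python
import paddle.fluid as fluid

# Ops created inside device_guard are pinned to the named device;
# cross-device data movement is handled by the framework.
with fluid.device_guard("cpu"):
    cpu_val = fluid.layers.fill_constant(
        shape=[1], dtype='float32', value=1.0)

with fluid.device_guard("gpu:0"):  # requires a CUDA-enabled build
    gpu_val = fluid.layers.fill_constant(
        shape=[1], dtype='float32', value=2.0)

out = cpu_val + gpu_val
```
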
- 23 Feb, 2020 (1 commit)

Committed by tianshuo78520a

- 10 Feb, 2020 (1 commit)

Committed by Guo Sheng

- 07 Feb, 2020 (1 commit)

Committed by Aurelius84

* polish backward api doc test=develop, test=document_preview, test=document_fix
* polish backward api doc test=develop, test=document_preview, test=document_fix
* no_grad supports set of Variable test=develop, test=document_preview
* polish sample code of append_backward test=develop, test=document_preview
* modify assert into raising TypeError test=develop, test=document_preview
* fix failed unittest test=develop
* rm useless file test=develop
* polish en doc test=develop
* polish code of no_grad_set test=develop
* polish code of no_grad_set test=develop

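A minimal sketch of the "no_grad supports set of Variable" change: append_backward taking a set of Variables rather than names (the network and the parameter name 'fc_w' are illustrative assumptions):

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 13], dtype='float32')
y = fluid.data(name='y', shape=[None, 1], dtype='float32')
pred = fluid.layers.fc(input=x, size=1,
                       param_attr=fluid.ParamAttr(name='fc_w'))
loss = fluid.layers.mean(
    fluid.layers.square_error_cost(input=pred, label=y))

# no_grad_set may now contain Variables directly, not just names;
# no gradient is computed for 'fc_w'.
w = fluid.default_main_program().global_block().var('fc_w')
param_grads = fluid.backward.append_backward(loss, no_grad_set=set([w]))
```
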
- 20 Jan, 2020 (1 commit)

Committed by Zeng Jinle

* polish backward prune, test=develop
* fix control flow op bug, test=develop
* add some unittests, test=develop
* fix unittest args, test=develop
* follow huihuang's comments, test=develop

- 16 Jan, 2020 (1 commit)

Committed by zhangchunle

- 04 Jan, 2020 (1 commit)

Committed by liym27

* append optimize op in the grad block of the current block if the current block is in control flow. test=develop
* add conditional grad op when the optimizer is used in control flow. test=develop
* add comment and fix typo. test=develop
* fix append_backward to support control flow. test=develop
* add test. test=develop
* fix copy_var_to_parent_block and conditional_block_grad. test=develop
* fix bug: revert to appending conditional_block_grad vars to the sub grad block. test=develop
* fix bug: revert to assigning var to parent block even if var is already in parent block
* fix bug: consider that outputs may be empty. test=develop
* move _rename_grad_ out. test=develop
* modify code according to reviews from Huihuang. test=develop
* modify code according to reviews from Jinle. test=develop

- 01 Jan, 2020 (1 commit)

Committed by Chen Weihang

* update doc, test=develop
* fix related unittests, test=develop
* fix str incompatible error, test=develop

- 18 Dec, 2019 (1 commit)

Committed by Huihuang Zheng

Fixed bugs:
1. The condition sub-graph was not pruned.
2. When the backward graph is extremely simple, all of the backward ops were pruned.

- 10 Dec, 2019 (1 commit)

Committed by mapingshuo

* add seed op

- 06 Dec, 2019 (1 commit)

Committed by Huihuang Zheng

Add tests that use dy/dx to make sure the gradient values calculated by the control flow backward are correct. Also fixed bugs detected by those tests.

Fixed bugs:

1. Unlike sum_op, optimizer ops don't allow uninitialized input tensors. But in conditional_block_grad_op, since the conditional_block may not run, the output gradient tensor may be uninitialized, which causes the optimizer op to fail. To fix it, we should either let optimizer ops support uninitialized inputs like sum_op does, or assign the uninitialized gradient to 0 when the conditional_block_grad_op doesn't run. I found there are about 10+ optimizer ops. **To be simpler, I just assign the output gradient of the conditional_block_grad_op to 0 in this PR.** It can be further explored whether optimizer ops can be made to support uninitialized input tensors like sum_op, because theoretically we could then speed up by skipping the assignment in conditional_block_grad_op.

2. Infer parameter shapes during append_backward. I didn't know that all our parameters are in the global block. When an op_desc is inferring shapes in a sub-block, it may not know the shapes of gradients of parameters whose shape information lives in the global block. I fixed it by inferring the shapes of gradients from the forward vars.

This PR also did some code cleanup:

1. Print the var name when sgd_op catches a shape error so that it is easier to debug.
2. Fix a typo: dicta -> dict.

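A minimal sketch of the bug-1 scenario, assuming the fluid.layers.cond API that lowers to a conditional_block (values illustrative); after this fix, the gradient flowing out of the branch that did not run is 0 rather than uninitialized:

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[1], dtype='float32')
x.stop_gradient = False
zero = fluid.layers.fill_constant(shape=[1], dtype='float32', value=0.0)

# cond lowers to a conditional_block op; only one branch runs.
out = fluid.layers.cond(fluid.layers.less_than(x, zero),
                        lambda: 2.0 * x,
                        lambda: x * x)

# Backward goes through conditional_block_grad; the non-taken branch's
# gradient output is filled with 0 after this fix.
(dx,) = fluid.gradients(out, x)
```
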
- 29 Nov, 2019 (1 commit)

Committed by Huihuang Zheng

* Commit before merging develop test=develop
* Backup after working with Huihuang logs
* Commit before deleting Huihuang debug loggings
* Commit before debug test=develop
* Fix bug commit test=develop
* Backup of fixing bugs test=develop
* Clean up code test=develop
* Fix a bug in sum_op test=develop

- 30 Oct, 2019 (1 commit)

Committed by lvmengsi

* fix_gradients
* fix_gradients, test=develop

- 19 Oct, 2019 (1 commit)

Committed by Aurelius84

- 13 Oct, 2019 (1 commit)

Committed by liym27

2. fix bug in backward.py: use fill_constant instead of fill_constant_batch_size_like
3. fix bug in ExpandGradOp. test=develop

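For context on item 2, a short sketch contrasting the two ops (shapes illustrative): fill_constant takes a fully specified shape, while fill_constant_batch_size_like copies the batch dimension from another variable at runtime:

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 13], dtype='float32')

# Fixed shape, independent of any other variable.
ones = fluid.layers.fill_constant(shape=[2, 13], dtype='float32', value=1.0)

# Shape [batch_size, 13], with the batch dim taken from x at runtime.
like = fluid.layers.fill_constant_batch_size_like(
    input=x, shape=[-1, 13], dtype='float32', value=1.0)
```
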
- 09 Oct, 2019 (2 commits)

Committed by Youwei Song

* polish append_backward, test=document_fix
* test=document_fix, test=develop
* test=document_fix, test=develop
* polish append_backward, test=document_fix, test=develop

Committed by mapingshuo

* rm unused ckpt and sort ckpt
* use max op idx to sort, test=develop
* remove unused code, test=develop
* add testcase, test=develop
* modify test case, test=develop

- 26 Sep, 2019 (1 commit)

Committed by mapingshuo

* fix doc of apply_optimize test=document_fix test=document_preview
* modify doc of backward test=develop test=document_fix
* modify document hash test=develop test=document_preview

- 23 Sep, 2019 (1 commit)

Committed by mapingshuo

* add recompute-based checkpoint methods for large batch training test=develop
* add append_backward_with_forward_recomputation test=develop
* refine optimizer test=develop
* update backward and optimizer test=develop
* make Variable usable test=develop
* add recompute code
* refine optimizer test=develop
* refine addup _append_backward_ops_with_checkpoints_:
  1) for the recompute part, just cache the grad_op_desc without appending it to the block;
  2) before appending grad_op_desc to the backward part, addup_repetitive_vars, remove unused branch test=develop
* make method private
* add recompute strategy into DistributedStrategy test=develop
* checkpoint version 3 test=develop
* remove some print information test=develop
* remove unused sum op test=develop
* try to fix recompute with graph building modules
* add input names to vars that should be held
* add memory debug tool
* backup backward
* fix bugs
* add backward desc for ops not in any segment
* add exception info for sub_block test=develop
* modify code style test=develop
* modify code style test=develop
* remove print functions test=develop
* add API spec test=develop test=document_preview
* make Recompute a child class of Optimizer test=develop test=document_preview
* add API spec test=develop test=document_preview
* modify API spec test=develop test=document_preview
* add document for Recompute test=develop test=document_preview
* change API doc of Recompute test=develop test=document_preview
* code cleaning test=develop test=document_preview
* modify API spec
* fix bugs when segments hold no element
* add testcase for Recompute Optimizer test=develop test=document_preview
* add test for apply_gradient, and code cleaning test=develop test=document_preview
* add test case for load function
* enable CI test=develop test=document
* add test case test=develop test=document_preview
* add sample code for 4 functions of recompute optimizer test=develop test=document_preview

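A minimal sketch of the RecomputeOptimizer this series introduces (the network and checkpoint choice are illustrative; _set_checkpoints marks which activations are kept, and everything between checkpoints is recomputed during backward):

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 784], dtype='float32')
label = fluid.data(name='label', shape=[None, 1], dtype='int64')

hidden = fluid.layers.fc(input=x, size=128, act='relu')
prediction = fluid.layers.fc(input=hidden, size=10, act='softmax')
loss = fluid.layers.mean(
    fluid.layers.cross_entropy(input=prediction, label=label))

# Wrap a regular optimizer; activations other than the checkpoints are
# dropped in forward and recomputed during backward to save memory.
sgd = fluid.optimizer.SGD(learning_rate=0.01)
recompute = fluid.optimizer.RecomputeOptimizer(sgd)
recompute._set_checkpoints([hidden])
recompute.minimize(loss)
```
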
- 11 Sep, 2019 (1 commit)

Committed by Youwei Song

* update dygraph api-doc and backward api-doc, test=develop
* update dygraph api-doc and backward api-doc, update api.spec, test=develop
* update dygraph api-doc and backward api-doc, update api.spec, test=develop
* update API.spec, test=develop

- 26 Aug, 2019 (1 commit)

Committed by chengduo

* fix optimizer bug test=develop

- 24 Jul, 2019 (1 commit)

Committed by chengduo

* prune backward ops test=develop

- 02 Jul, 2019 (1 commit)

Committed by chengduo

* add not_been_used_vars to no_grad_set test=develop

- 01 Jul, 2019 (1 commit)

Committed by xsrobin

- 16 Jun, 2019 (1 commit)

Committed by qingqing01

* Update backward.py:
  - If there is no input grad var among all outputs of previous ops, do not append this op into the graph.
  - Only apply this strategy for double backward.
* Update some double backward ops.
* Update sum_op to judge whether a tensor is empty by numel or IsInitialized().