- 05 Aug 2021, 1 commit
  Committed by WangXi

- 04 Aug 2021, 1 commit
  Committed by chentianyu03
  * add gradients_with_optimizer api
  * modify gradients_with_optimizer
  * add gradients_with_optimizer api into paddle.auto.backward_mode
  * add gradients_with_optimizer test case
  * add doc for gradients_with_optimizer
  * add doc for gradients_with_optimizer
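
The commit message does not show the new helper's signature, so no call to gradients_with_optimizer is sketched here. For context, a minimal static-graph sketch of the two existing steps such a helper ties together (building gradients with paddle.static.gradients, then letting an optimizer apply them) might look like this; all variable names are illustrative assumptions:

```python
import paddle

paddle.enable_static()

main_prog = paddle.static.Program()
startup_prog = paddle.static.Program()
with paddle.static.program_guard(main_prog, startup_prog):
    x = paddle.static.data(name="x", shape=[None, 4], dtype="float32")
    w = paddle.static.create_parameter(shape=[4, 2], dtype="float32")
    loss = paddle.mean(paddle.matmul(x, w))

    # Step 1: append the backward graph and collect d(loss)/d(w).
    grads = paddle.static.gradients([loss], [w])

    # Step 2: let an optimizer append its update ops for the (param, grad) pairs.
    opt = paddle.optimizer.SGD(learning_rate=0.01)
    opt.apply_gradients(params_grads=list(zip([w], grads)))
```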

- 14 Jul 2021, 1 commit
  Committed by ShenLiang

- 05 Jul 2021, 1 commit
  Committed by WangXi

- 02 Jul 2021, 1 commit
  Committed by WangXi

- 09 Jun 2021, 1 commit
  Committed by wanghuancoder
  * modify API nn.Bilinear's doc, test=develop

- 26 Apr 2021, 1 commit
  Committed by xiemoyuan
  * Modified params of some APIs to support tuple and list.
  * Fixed a bug.

- 07 Apr 2021, 1 commit
  Committed by JZ-LIANG

- 02 Apr 2021, 1 commit
  Committed by JZ-LIANG

- 12 Jan 2021, 1 commit
  Committed by JZ-LIANG

- 24 Dec 2020, 1 commit
  Committed by tangwei12
  * oneps (3/4)
  Co-authored-by: MrChengmo <cmchengmo@163.com>
  Co-authored-by: malin10 <malin10@baidu.com>
  Co-authored-by: chengmo <chengmo@baidu.com>

- 26 Nov 2020, 1 commit
  Committed by Chen Weihang
  * add static_only for static api
  * add static_only for class init
  * remove static_only for default_main_program
  * remove creater_parameter & startup_program
  * remove failed apis
  * revert py_func import
  * remove global scope
  * remove some api
  * remove cuda pinned place

- 14 Oct 2020, 1 commit
  Committed by Yiqun Liu

- 28 Sep 2020, 1 commit
  Committed by Aurelius84
  * modify sample code
  * variable -> tensor
  * migrate program_guard sample code
  * refine error message
  * migrate program_guard
  * refine comment style
  * fix indent
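
Since this commit migrates the program_guard sample code, a typical paddle.static.program_guard usage is sketched below for context; the layer and variable names are illustrative, not taken from the commit:

```python
import paddle

paddle.enable_static()

main_prog = paddle.static.Program()
startup_prog = paddle.static.Program()

# Ops and variables created inside the guard go into main_prog / startup_prog
# instead of the global default programs.
with paddle.static.program_guard(main_prog, startup_prog):
    x = paddle.static.data(name="x", shape=[None, 784], dtype="float32")
    hidden = paddle.static.nn.fc(x, size=10)
```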

- 21 Sep 2020, 1 commit
  Committed by Leo Chen
  * support using add instead of sum to do gradient accumulation
  * add inplace addto pass
  * add grad_add op and inplace addto pass
  * remove debug code
  * code refine
  * fix bug when several sum ops insert at the same op_idx
  * fix Flags type
  * add addto attribute for conv3d
  * fix ut
  * code clean
  * fix type
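
A minimal sketch of how this add-instead-of-sum accumulation is switched on, assuming (as in later Paddle releases) that it is exposed through BuildStrategy.enable_addto; the network itself is illustrative:

```python
import paddle
import paddle.fluid as fluid

paddle.enable_static()

x = fluid.data(name="x", shape=[None, 8], dtype="float32")
loss = fluid.layers.mean(fluid.layers.fc(input=x, size=1))
fluid.optimizer.SGD(learning_rate=0.01).minimize(loss)

# Assumption: the inplace "addto" gradient accumulation is toggled via
# BuildStrategy.enable_addto before compiling the program.
build_strategy = fluid.BuildStrategy()
build_strategy.enable_addto = True

compiled_prog = fluid.CompiledProgram(
    fluid.default_main_program()).with_data_parallel(
        loss_name=loss.name, build_strategy=build_strategy)
```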

- 11 Sep 2020, 1 commit
  Committed by Aurelius84
  * fix calcu_gradients
  * fix code place
  * fix embedding interface usage

- 13 Jul 2020, 1 commit
  Committed by liym27
  [while grad] Support pruning ops in find_op_path for the while sub-block when appending backward (#25330): OPs in the while sub-block that are not related to the loss are pruned when constructing the backward OP path.

- 14 May 2020, 1 commit
  Committed by Cindy Cai
  * test=develop, test=document_fix
  * test=develop, test=document_fix
  Co-authored-by: swtkiwi <1208425345@qq.com>

- 30 Apr 2020, 1 commit
  Committed by qingqing01
  Rename internal gradient variables in multiple backward passes so that they get different names from the previous backward pass. For example, with y = x * x and grad = fluid.gradients(fluid.gradients(y, x) + y * y, x), the gradient variable names created for the partial forward network (y * y) in the second backward pass may otherwise collide with names from the first fluid.gradients(y, x). test=develop
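
A runnable sketch of the double-backward pattern described above, assuming a Paddle release where fluid.gradients is still available; the variable names are illustrative:

```python
import paddle
import paddle.fluid as fluid

paddle.enable_static()

x = fluid.data(name="x", shape=[None, 1], dtype="float32")
x.stop_gradient = False
y = x * x

# First backward pass: dy/dx.
dydx = fluid.gradients(y, x)[0]

# Second backward pass over a target that mixes the first-order gradient with a
# fresh piece of forward graph (y * y); this is where internal gradient variable
# names used to collide before this fix.
target = dydx + y * y
ddx = fluid.gradients(target, x)
```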

- 15 Apr 2020, 1 commit
  Committed by mapingshuo
  * allow amp and recompute to work together

- 10 Apr 2020, 1 commit
  Committed by Aurelius84
  * API/OP (append_backward) error message enhancement test=develop
  * polish check_type test=develop
  * fix failed unittest test=develop
  * merge develop test=develop

- 09 Apr 2020, 1 commit
  Committed by Aurelius84
  * API (fluid.gradients) error message enhancement test=develop
  * fix failed unittest test=develop

- 20 Mar 2020, 1 commit
  Committed by Zeng Jinle
  * add double grad implementation for dygraph, test=develop
  * polish code, add uts, test=develop
  * fix place bug, test=develop
  * polish codes, add more uts for coverage, test=develop
  * add no_grad_set, test=develop
  * add star gan ut, test=develop
  * follow comments, test=develop
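
A small sketch of dygraph double grad, written against the paddle.grad API of Paddle 2.x (the later name of the dygraph grad interface this feature backs); the numbers are illustrative:

```python
import paddle

x = paddle.to_tensor([3.0], stop_gradient=False)
y = x * x

# create_graph=True keeps the first backward differentiable, so a second
# paddle.grad over its result computes the second-order derivative.
dydx = paddle.grad(y, x, create_graph=True)[0]   # dy/dx = 2x -> 6.0
d2ydx2 = paddle.grad(dydx, x)[0]                 # d2y/dx2 = 2.0
```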

- 19 Mar 2020, 1 commit
  Committed by Zhang Ting

- 17 Mar 2020, 1 commit
  Committed by Zhang Ting

- 03 Mar 2020, 1 commit
  Committed by Zhang Ting
  * add fluid.device_guard to specify the device type for an Op
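
A minimal sketch of fluid.device_guard usage; the layer and variable names are illustrative:

```python
import paddle
import paddle.fluid as fluid

paddle.enable_static()

x = fluid.data(name="x", shape=[None, 10], dtype="float32")

# Ops created inside the guard are pinned to the given device type ("cpu" or "gpu").
with fluid.device_guard("cpu"):
    out = fluid.layers.fc(input=x, size=2)
```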

- 23 Feb 2020, 1 commit
  Committed by tianshuo78520a

- 10 Feb 2020, 1 commit
  Committed by Guo Sheng

- 07 Feb 2020, 1 commit
  Committed by Aurelius84
  * polish backward api doc test=develop, test=document_preview, test=document_fix
  * polish backward api doc test=develop, test=document_preview, test=document_fix
  * no_grad supports set of Variable test=develop, test=document_preview
  * polish sample code of append_backward test=develop, test=document_preview
  * modify assert into raising TypeError test=develop, test=document_preview
  * fix failed unittest test=develop
  * rm useless file test=develop
  * polish en doc test=develop
  * polish code of no_grad_set test=develop
  * polish code of no_grad_set test=develop
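
Since this change lets no_grad_set accept a set of Variables, a minimal append_backward sketch is given below; the network and the choice of excluded parameter are illustrative assumptions:

```python
import paddle
import paddle.fluid as fluid

paddle.enable_static()

x = fluid.data(name="x", shape=[None, 4], dtype="float32")
w = fluid.layers.create_parameter(shape=[4, 1], dtype="float32")
b = fluid.layers.create_parameter(shape=[1], dtype="float32")
loss = fluid.layers.mean(fluid.layers.matmul(x, w) + b)

# no_grad_set may now be a set of Variables (not only variable names):
# b is excluded from the backward pass.
params_grads = fluid.backward.append_backward(loss, no_grad_set=set([b]))
```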

- 20 Jan 2020, 1 commit
  Committed by Zeng Jinle
  * polish backward prune, test=develop
  * fix control flow op bug, test=develop
  * add some unittests, test=develop
  * fix unittest args, test=develop
  * follow huihuang's comments, test=develop

- 16 Jan 2020, 1 commit
  Committed by zhangchunle

- 04 Jan 2020, 1 commit
  Committed by liym27
  * append optimize ops in the grad block of the current block if the current block is in control flow. test=develop
  * add conditional grad op when an optimizer is used in control flow. test=develop
  * add comment and fix typo. test=develop
  * fix append_backward to support control flow. test=develop
  * add test. test=develop
  * fix copy_var_to_parent_block and conditional_block_grad. test=develop
  * fix bug: revert to appending conditional_block_grad vars to the sub grad block. test=develop
  * fix bug: revert to assigning the var to the parent block even if the var is already in the parent block
  * fix bug: handle the case where outputs is empty. test=develop
  * move _rename_grad_ out. test=develop
  * modify code according to reviews from Huihuang. test=develop
  * modify code according to reviews from Jinle. test=develop
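
A sketch of the optimizer-in-control-flow scenario this commit targets (calling minimize inside a fluid.layers.cond branch); the condition, optimizers, and network are illustrative assumptions:

```python
import paddle
import paddle.fluid as fluid

paddle.enable_static()

x = fluid.data(name="x", shape=[None, 4], dtype="float32")
loss = fluid.layers.mean(fluid.layers.fc(input=x, size=1))

sgd = fluid.optimizer.SGD(learning_rate=0.1)
adam = fluid.optimizer.Adam(learning_rate=0.01)

step = fluid.layers.fill_constant(shape=[1], dtype="int64", value=0)
pred = fluid.layers.less_than(
    step, fluid.layers.fill_constant(shape=[1], dtype="int64", value=100))

def train_with_sgd():
    sgd.minimize(loss)   # optimize ops land inside the control-flow block
    return loss

def train_with_adam():
    adam.minimize(loss)
    return loss

fluid.layers.cond(pred, train_with_sgd, train_with_adam)
```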

- 01 Jan 2020, 1 commit
  Committed by Chen Weihang
  * update doc, test=develop
  * fix related unittests, test=develop
  * fix str incompatible error, test=develop

- 18 Dec 2019, 1 commit
  Committed by Huihuang Zheng
  The fixed bugs:
  1. The condition sub-graph is not pruned.
  2. When the backward graph is extremely simple, the whole set of backward ops is pruned.

- 10 Dec 2019, 1 commit
  Committed by mapingshuo
  * add seed op

- 06 Dec 2019, 1 commit
  Committed by Huihuang Zheng
  Add tests that use dy/dx to make sure the gradient values calculated by the control flow backward are correct, and fix the bugs detected by those tests.

  Fixed bugs:
  1. Unlike sum_op, optimizer ops don't allow an uninitialized input tensor. But in conditional_block_grad_op, since the conditional_block may not run, the output gradient tensor may be uninitialized, which causes the optimizer op to fail. To fix it, we should either let optimizer ops support uninitialized input like sum_op, or assign the uninitialized gradient to 0 when the conditional_block_grad_op doesn't run. There are about 10+ optimizer ops, so to keep things simple this PR just assigns the output gradient of the conditional_block_grad_op to 0. It can be further explored whether optimizer ops could support uninitialized input tensors like sum_op, because theoretically the assignment in conditional_block_grad_op could then be skipped for speed.
  2. Infer parameter shapes during append_backward. All parameters live in the global block, so when an op_desc infers shapes in a sub-block, it may not know the shapes of gradients of parameters whose shape information sits in the global block. This is fixed by inferring the shapes of those gradients from the forward vars.

  This PR also does some code clean-up:
  1. Print the var name when sgd_op catches a shape error so that it is easier to debug.
  2. Fix a typo: dicta -> dict.

- 29 Nov 2019, 1 commit
  Committed by Huihuang Zheng
  * Commit before merging develop test=develop
  * Backup after working with Huihuang logs
  * Commit before deleting Huihuang debug loggings
  * Commit before debug test=develop
  * Fix bug commit test=develop
  * Backup of fixing bugs test=develop
  * Clean up code test=develop
  * Fix a bug in sum_op test=develop

- 30 Oct 2019, 1 commit
  Committed by lvmengsi
  * fix_gradients
  * fix_gradients, test=develop

- 19 Oct 2019, 1 commit
  Committed by Aurelius84

- 13 Oct 2019, 1 commit
  Committed by liym27
  2. fix bug in backward.py: use fill_constant instead of fill_constant_batch_size_like
  3. fix bug in ExpandGradOp
  test=develop