- 12 Jan 2021 (1 commit)

Committed by swtkiwi

* fix datanorm error msg (#30294)
* Optimize the error message of framework. (#30134)
* modify error message based on comments (#30189)
* modify error message based on comments
* edit code according to review.
* Correct spelling according to review.
* fix enforce msg of sum xpu op (#30113)
* enhance error info for py_func (#30138)
* enhance error info for py_func
* update
* fix elugradgrad test fail & error message opt (#30171)
* fix elugradgrad test fail and error message opt
* fix unitest, test=develop
* Update prroi_pool_op.h fix error message
* opt message, test=develop
* fix ci fail, test=develop
* Refine PADDLE_ENFORCE Error Messages. test=develop (#30149)
  Improve some error messages in parallel_executor.cc, conditional_block_op.cc, recurrent_op.cc
* enhance error message, test=develop (#30220)
* fix error message for distribute_fpn_proposals_op (#30116)
* enhance error msgs of fusion_seqpool_cvm_concat_op.cc, test=develop (#30240)
* just add the op error message for the matmul xpu (#30246)
  add the op error message for the matmul xpu
* enhance error message of nll_loss op test=develop (#30125)

Co-authored-by: yaoxuefeng <yaoxuefeng@baidu.com>
Co-authored-by: xiemoyuan <71377852+xiemoyuan@users.noreply.github.com>
Co-authored-by: WeiXin <weixin10@baidu.com>
Co-authored-by: Jack Zhou <zhoushunjie@baidu.com>
Co-authored-by: Wilber <jiweibo@baidu.com>
Co-authored-by: Double_V <liuvv0203@163.com>
Co-authored-by: Huihuang Zheng <zhhsplendid@gmail.com>
Co-authored-by: zhang wenhui <frankwhzhang@126.com>
Co-authored-by: wangguanzhong <jerrywgz@126.com>
Co-authored-by: 石晓伟 <39303645+Shixiaowei02@users.noreply.github.com>
Co-authored-by: wawltor <fangzeyang0904@hotmail.com>
Co-authored-by: lijianshe02 <48898730+lijianshe02@users.noreply.github.com>

- 11 Jan 2021 (1 commit)

Committed by Zhen Wang

* Support pure fp16 training for AMP API. (#29544)
* add cast ops before and after unsupported fp16 ops.
* Keep partial net in FP32 pattern.
* Support check_finite_and_unscale and update_loss_scaling for FP16 calculation mode.
* Add fp16 support for adam op.
* add multi precision attr for adam.
* Fix the bug of test_multi_precision_fp16_train UT.
* Code format for CI.
* Fix the redefine error about MPTypeTrait on windows.
* fix bugs of the _create_accumulators func in Momentum.
* fix bug when inserting post cast op.
* Add the update_loss_scaling op in allow_set of UnusedVarCheck.
* Update for ci coverage.
* Add some doc for OptimizerWithMixedPrecision.
* Fix the code style.
* Imporve the doc of `amp_init`.
* Change for fp16 testing if users have the infer program defined in separate way.
* Remove tensor copy in the update_loss_scaling op. (#29426)
* remove tensor copy in the update_loss_scaling op
* not use thrust.
* fix some cuda memory access error.
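
The check_finite_and_unscale / update_loss_scaling pair mentioned above implements dynamic loss scaling. The NumPy sketch below only illustrates that mechanism; the function name and the growth/backoff constants are assumptions for illustration, not the ops' actual defaults.

```python
import numpy as np

def amp_step(grads_fp16, loss_scale, good_steps,
             growth_interval=1000, growth=2.0, backoff=0.5):
    # check_finite_and_unscale: divide the scaled FP16 grads back down and
    # report whether any value overflowed to inf/nan.
    unscaled = [g.astype(np.float32) / loss_scale for g in grads_fp16]
    found_inf = any(not np.all(np.isfinite(g)) for g in unscaled)

    # update_loss_scaling: shrink the scale after an overflow, grow it after
    # a long enough run of finite steps.
    if found_inf:
        loss_scale *= backoff
        good_steps = 0
        unscaled = None  # skip the parameter update for this step
    else:
        good_steps += 1
        if good_steps >= growth_interval:
            loss_scale *= growth
            good_steps = 0
    return unscaled, loss_scale, good_steps
```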

- 03 Dec 2020 (1 commit)

Committed by Zhen Wang

* Add pure fp16 training with master weights. (#27712)
* add the weight decay func for the momentum op
* Add the multi_precision function in Momentum Optimizer.
* Make sure that the initial value of master weights are same with the fp16 weights.
* add static loss scaling.
* add the rescale_grad function in the pure fp16 training.
* use the original momentum updating method.
* Polish some codes, such as variable names.
* add docstring for apis.
* update the var creation details of _create_master_weight.
* not modify codes about imperative momentum updating.
* Fix the error of test_dist_sparse_tensor_load_momentum UT.
* add unit test for multi precision fp16 training.
* add more unit tests for CI.
* Use lower threshold values for allclose comparing in test_multi_precision_fp16_train UT.
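
The "master weights" idea behind this commit: forward/backward runs on FP16 parameters, while the optimizer keeps and updates an FP32 master copy that is re-cast to FP16 each step. A minimal NumPy sketch of that update with plain momentum; names and hyperparameters are illustrative, not the actual kernel.

```python
import numpy as np

def momentum_multi_precision(master_fp32, velocity, grad_fp16, lr=0.001, mu=0.9):
    # gradients arrive in FP16; the optimizer math stays in FP32
    grad = grad_fp16.astype(np.float32)
    velocity = mu * velocity + grad              # plain momentum accumulation
    master_fp32 = master_fp32 - lr * velocity    # update the FP32 master weight
    param_fp16 = master_fp32.astype(np.float16)  # FP16 copy for the next forward pass
    return param_fp16, master_fp32, velocity
```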

- 23 Nov 2020 (1 commit)

Committed by furnace

* refactor momentum op to combine weight_decay (scale op and sum op)
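
Combining weight decay into the momentum op means the decay term is added to the gradient inside the fused update instead of being produced by separate scale and sum ops. A rough NumPy sketch of the resulting step, with illustrative names and an assumed L2-style decay coefficient:

```python
import numpy as np

def momentum_with_decay(param, velocity, grad, lr=0.001, mu=0.9, decay_coeff=1e-4):
    grad = grad + decay_coeff * param  # previously a separate scale op + sum op
    velocity = mu * velocity + grad
    param = param - lr * velocity
    return param, velocity
```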

- 06 Nov 2020 (1 commit)

Committed by taixiurong


- 19 Oct 2020 (2 commits)

Committed by yinhaofeng

* lookup_table_xpu op report errors; test=kunlun
* add adam xpu op; test=kunlun
* reset lookup
* change adam wrong; test=kunlun

Committed by Chengmo

* fix error message, test=kunlun
* fix, test=kunlun

- 14 Oct 2020 (2 commits)

Committed by MRXLT

* fix adam
* fix gpu adam
* fix code style
* fix ut
* update ut, add cuda code

Committed by Chen Weihang

* polish some error message
* add white list
* revert shell script change

- 13 Oct 2020 (1 commit)

Committed by Chengmo

* add xpu sgd & momentum

- 27 Sep 2020 (1 commit)

Committed by Chengmo

* fix sgd/momentum/dpsgd/rmsprop error message

- 22 Sep 2020 (1 commit)

Committed by 123malin

* test=develop, update error message

- 21 Sep 2020 (1 commit)

Committed by MRXLT

* fix adam
* rmsprop support double

- 09 Sep 2020 (1 commit)

Committed by JZ-LIANG

add lars to fleet meta optimizer

- 29 Aug 2020 (1 commit)

Committed by Jiawei Wang

* add doc; notest
* fix doc; notest
* update doc; notest
* refine optimizer && adam
* refine optimizer; notest
* add adam
* fix doc
* fix doc && add adamw; notest
* add error message
* bug fix
* refine rmsprop && adamax
* fix ci
* buf fix
* update comment
* unify arguments place; notest
* fix ut, test=develop
* bug fix
* fix conflicts, test=develop
* add examples code
* bug fix
* fix comments
* fix sample code
* add sample code for Optimizer
* add adamax ut, test=develop
* fix rmsprop ut, test=develop
* add ut for optimizer.py and adamw.py
* first commit of adadelta optimizer
* fix learning rate
* fix adadelta doc and add sgd momentum
* remove unused fluid
* fix codestyle
* Update test_adam_op.py
* Update test_adam_op.py
* fix SGD in 2 unittests
* fix SGD in 2 unittests
* fix ci
* fix ut

Co-authored-by: MRXLT <xlt2024@gmail.com>
Co-authored-by: mapingshuo <mps2012@yeah.net>

- 28 Aug 2020 (1 commit)

Committed by lilong12


- 11 Jul 2020 (1 commit)

Committed by Chen Weihang

* fix softmax_with_cross_entropy cuda kernel overflow bug, test=develop
* replace old macro & for condition, test=develop
* polish details, test=develop

- 03 Jun 2020 (1 commit)

Committed by leesusu


- 13 May 2020 (3 commits)

Committed by gongweibao


Committed by MRXLT


Committed by zhang wenhui


- 26 Apr 2020 (1 commit)

Committed by liuwei1031

* save InferVarType changes, test=develop
* remove code comments, test=develop
* tweak code, test=develop
* fix compilation warning, update merge_ids_op split_ids_op to new interface, test=develop
* modify fused_bn_activation_op, test=develop
* fix error of fused_bn_activation_op, test=develop
* fix PADDLE_ENFORCE and unittest coverage issue, test=develop
* tweak PADDLE_ENFORCE messages, test=develop
* improve unittest coverage, test=develop
* add StaticGraphInferVarType class, test=develop
* rebase develop branch, test=develop
* fix unittest error, test=develop
* remove comments, test=develop
* improve unittest coverage, test=develop
* imporve error message and imporve unittest coverage, test=develop
* upgrade InferVarType API, test=develop
* tweak pyfunc error message, test=develop
* fix compilation conflict - save_combine_op, test=develop

- 07 Apr 2020 (1 commit)

Committed by wangchaochaohu

* add support for value tensor support of fill_constant Op

- 04 Apr 2020 (1 commit)

Committed by Chen Weihang

* delete invalid check inferface Ref & VectorRef, test=develop
* fix vector ref delete error, test=develop
* try the new check inferface, test=develop
* change all related code with new check macro, test=develop
* remove static assert, test=develop
* polish detail, test=develop
* skip coverage problem, test=develop
* add new check macro, test=develop

- 27 Feb 2020 (1 commit)

Committed by zhaoyuchen2018

* Refine adam op, test=develop
* Fuse kernels together to reduce cpu time.
* Refine paddle enforce, test=develop
* Remove some comments, test=develop
* Refine code, test=develop
* Refine cuda kernel, test=develop
* Refine code according to comments, test=develop

- 09 Jan 2020 (1 commit)

Committed by zhongpu

* test Optimizer in dygraph, test=develop
* add optest for Optimizer in dygraph, test=develop
* fix adagrad optimizer, test=develop
* fix dpsgd optimizer, test=develop
* fix test_optimizer.py, test=develop
* fix dpsgd optimizer, this op only support cpu, test=develop
* add optest for optimizer, test=develop
* add description for dpsgd, test=develop
* add rmsprop to white_list in unused_var_check.cc, test=develop
* polish code style, test=develop
* polish code style, test=develop
* delete seed attribute for DpsgdOptimizer, test=develop
* change testing to debugging, test=develop

- 24 Dec 2019 (1 commit)

Committed by Aurelius84

* optimize adam speed by removing _finish_update test=develop
* fix SparseAdamFunctor param list test=develop
* Remove scale_op in expect_list of adam_op test=develop
* fix test optimizer loss assert error test=develop
* fix test optimizer loss assert error test=develop
* modify PADDLE_ENFORCE usage test=develop
* fix op_type in lamb_op.cc test=develop
* fix errors ostream format bug test=develop
* add betaPowOut in ngraph op test=develop
* fix ngraph::op api for gcc8 test=develop
* clean code test=develop
* modify struct into class test=develop
* remove code of beta1Tensor in lamb_op test=develop
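
For reference, the fused Adam step this work speeds up looks roughly like the NumPy sketch below; advancing beta1^t / beta2^t inside the same op (the betaPowOut outputs mentioned above) is what lets the extra scale op be removed from the graph. Variable names are illustrative, not the CUDA kernel's.

```python
import numpy as np

def adam_step(param, grad, m, v, beta1_pow, beta2_pow,
              lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    lr_t = lr * np.sqrt(1 - beta2_pow) / (1 - beta1_pow)  # bias-corrected step size
    param = param - lr_t * m / (np.sqrt(v) + eps)
    # advance the accumulators inside the same step, so no separate scale op
    # is needed to maintain beta1^t / beta2^t
    beta1_pow = beta1_pow * beta1
    beta2_pow = beta2_pow * beta2
    return param, m, v, beta1_pow, beta2_pow
```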

- 06 Dec 2019 (1 commit)

Committed by Huihuang Zheng

Add tests that use dy/dx to make sure the gradient values calculated by the control flow backward are correct. Also fixed bugs detected by those tests.

Fixed bugs:

1. Unlike sum_op, optimizer ops don't allow uninitialized input tensors. But in conditional_block_grad_op, since the conditional_block may not run, the output gradient tensor may be uninitialized, which will cause the optimizer op error. To fix it, we should either let optimizer ops support uninitialized input like sum_op, or assign the uninitialized gradient to 0 when the conditional_block_grad_op doesn't run. I found there are about 10+ optimizer ops. **To be simpler, I just assign the output gradient of the conditional_block_grad_op to 0 in this PR.** But it can be further explored whether we can make optimizer ops support uninitialized input tensors like sum_op does, because theoretically we could speed up by skipping the assigning in conditional_block_grad_op.
2. Infer parameter shapes during append_backward. I didn't know that all our parameters are in the global block. When op_desc is inferring shapes at the sub-block, it may not know the shape of gradients of parameters whose shape information is at the global block. I fixed it by inferring shapes of gradients from the forward var.

This PR also did some code clean up:

1. Print the var name when sgd_op catches a shape error so that it is easier to debug.
2. Fix a typo: dicta -> dict.
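
A tiny sketch of the fix described in point 1 above: when the conditional block did not run, the gradient is filled with zeros instead of being handed to the optimizer uninitialized. The function and argument names are hypothetical, not the C++ op's interface.

```python
import numpy as np

def conditional_block_grad(branch_ran, param, computed_grad=None):
    if branch_ran and computed_grad is not None:
        return computed_grad
    # branch was skipped: give the optimizer a zero gradient instead of an
    # uninitialized tensor
    return np.zeros_like(param)
```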

- 29 Nov 2019 (2 commits)

Committed by Chen Weihang

* add param & grad shape check for sgd op
* add _reshape_inplece interface for dygraph parallel
* refine unittest based paddle/models scripts, test=develop
* add unittest for parallel grad fuse, test=develop

Committed by hong

* add_dygraph_execution_context
* add dygraph infershape context and execution context; test=develop
* fix imperative bug; test=develop
* remove inputs outputs interface from execution context, because it have same function with inputNames; test=develop
* remove tracer_test ctest; test=develop
* fix split op bug; test=develop
* fix unitests bug; test=develop
* fix distribute test bug; test=develop
* fix ngraph compile bug; test=develop
* fix grad maker bug; test=develop
* fix load op bugs; test=develop
* fix operator.cc construct bug; test=develop
* remove useless name find in operator; test=develop
* add tracer_test; test=develop
* fix concat, split bug; test=develop
* remove tracer_test unitest; test=develop
* fix attribute check bug; test=develop
* add test code to fix converage; test=develop
* remove useless code, change check backward input in engin; test=develop
* unlock var type infer shape; test=develop
* add ShareAllLoD api; test=develop
* add dygraph infershape context unitest; test=develop
* remove increase and decrease lod in dygraph; test=develop
* addd override; test=develop
* fix increase descrease lod; test=develop
* fix paddle_enforce; test=develop
* disable lod op dygraph check; test=develop
* fix paddle enforce error; test=develop
* add comment for op_registry and OperatorBase; test=develop
* optimize the comment of op_registry; test=develop
* fix format of comment; test=develop
* fix format of comment; test=develop
* optimize the format of comment; test=develop
* optimize the format of the comment; test=develop
* optimize comment of op_registry; test=develop

- 28 Nov 2019 (1 commit)

Committed by Kaipeng Deng

* add Adam beta1/beta2 support Variable. test=develop
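
With this change, beta1/beta2 can be given as Variables instead of Python floats, so a schedule can rewrite them between steps. A possible usage sketch written against the later 2.x paddle.optimizer API (the exact API shown here is an assumption of this note, not part of the commit itself):

```python
import paddle

linear = paddle.nn.Linear(10, 1)
# beta1/beta2 passed as tensors rather than Python floats
beta1 = paddle.to_tensor([0.9], dtype="float32")
beta2 = paddle.to_tensor([0.999], dtype="float32")
opt = paddle.optimizer.Adam(learning_rate=0.01,
                            beta1=beta1,
                            beta2=beta2,
                            parameters=linear.parameters())

loss = (linear(paddle.rand([4, 10])) ** 2).mean()
loss.backward()
opt.step()
opt.clear_grad()
```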

- 25 Nov 2019 (1 commit)

Committed by WangXi


- 31 Oct 2019 (1 commit)

Committed by hong

* refactor dygraph, test=develop
* fix failed unittest, test=develop
* polish code, test=develop
* check windows ci error, test=develop
  try to fix windows ci error by np.allclose, test=develop
* polish vlog and profiler, test=develop
* try to fix preceding ops order, test=develop
* test transformer in windows ci, test=develop
* use python c-api to speed up tracer.trace, test=develop
* test=develop, fix docker with paddle nccl problem
* test=develop, add ut for debug string and gradient_accumulator
* test=develop, add tests for layer/gradient_accumulator/prepared_op
* test=develop, fix complie error for test_prepared_op
* test=develop, add more ut for dygraph
* test=develop, create API.spec for dygraph api change
* optimize grad maker; test=develop
* optimize grad maker
* test
* grad make optim; test=develop
* fix unittest bugs; test=develop
* add dygraph grad op maker and split_op
* grad op maker refactor; test=develop
* add dygraph grad maker; test=develop
* fix op deformable_conv_v1_op bug; test=develop
* fix deformable_conv prroi pool bugs;
* fix new op grad op maker bug; test=develop
* fix split by ref bug; test=develop
* fix dygraph auto prune bug; test=develop
* fix test_trace bug; test=develop
* fix fused emb seq pool bug; test=develop
* remove useless code in op_desc file; test=develop
* remove useless code, StrVarBaseNode; test=develop
* fix review issues; test=develop
* fix rank_loss grad maker; test=develop
* remove flag in VarBase; test=develop
* fix distributed_notify_op compile bug; test=develop
* fix reshape op double grad; test=develop
* fix expand as op; test=develop
* add impertive type_defs.h for demo_train; test=develop
* fix inference lib cmake; test=develop
* fix inference lib; test=develop
* fix infernce_lib; test=develop
* fix inference cmake; test=develop
* fix inference lib; test=develop
* fix inference lib; test=develop
* remove condition dygraph grad maker, modify local name; test=develop
* fix split grad maker bug; test=develop
* fix pyramid_op bug; test=develop
* change travis time out limit; test=develop
* restore travis; test=develop
* change timeout limit; test=develop

- 28 Oct 2019 (1 commit)

Committed by Chen Weihang

* replace part of the old implementation, test=develop
* restore concat op, test=develop
* update all ops implemention & delete GetDataTypeOfVar func, test=develop

- 24 Oct 2019 (1 commit)

Committed by WangXi


- 24 Sep 2019 (1 commit)

Committed by jhjiangcs


- 04 Sep 2019 (1 commit)

Committed by Chen Weihang

Add user-friendly error message in optimizer ops to give a hint about the position sensitive problem of run(startup_program) (#19605)

* add extra error message hint in optimizer ops
* polish format & delete useless change, test=develop
* extract init judue from shape compare, test=develop

- 03 Sep 2019 (1 commit)

Committed by Tao Luo

test=develop

- 04 Jul 2019 (1 commit)

Committed by chengduo


- 26 Jun 2019 (1 commit)

Committed by Yibing Liu

* Update lamb optimizer test=develop, test=document_preview
* Regenerate api spec test=develop, test=document_preview