- 03 Dec, 2020: 6 commits
-
-
Committed by YUNSHEN XIE
-
Committed by Leo Chen
* use has_grad instead of train_mode
* add vlog for debug
* fix ut
* fix ut
-
Committed by Aurelius84
-
Committed by ShenLiang
-
Committed by liym27
[Dy2stat] Add a decorator paddle.jit.not_to_static to support not converting a function in dynamic-to-static. (#29253)
Usage scenarios: a function may already run successfully in static mode; you can use the decorator on it in the following cases:
1. An unknown error occurs during the dynamic-to-static conversion of the function;
2. The function's internal implementation has two branches: a dynamic branch and a static branch;
3. Users do not want the function to be converted during dynamic-to-static.
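A minimal sketch of how the new decorator might be applied; the helper name and the surrounding `paddle.jit.to_static` wrapper below are illustrative, not taken from the PR:

```python
import paddle

@paddle.jit.not_to_static
def dygraph_only_helper(x):
    # Skipped by dynamic-to-static conversion: this body keeps running
    # as ordinary Python / dygraph code.
    return paddle.sum(x) * 2

@paddle.jit.to_static
def forward(x):
    y = x + 1
    # The decorated helper is called from the converted function without
    # being transcribed into the static graph itself.
    return dygraph_only_helper(y)

print(forward(paddle.ones([2, 3])))
```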
-
Committed by ShenLiang
-
- 02 Dec, 2020: 9 commits
-
-
Committed by LielinJiang
* move temporal_shift to functional
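For context, a hedged call to the functional form; the shapes, argument values, and the `seg_num`/`shift_ratio` keyword names below are assumptions about the public API, not taken from the commit:

```python
import paddle
import paddle.nn.functional as F

# Input laid out as [N*T, C, H, W]; with seg_num=2 the batch of 4 frames is
# treated as 2 clips of 2 frames, and 25% of channels are shifted in time.
x = paddle.rand([4, 8, 7, 7])
out = F.temporal_shift(x, seg_num=2, shift_ratio=0.25)
print(out.shape)  # [4, 8, 7, 7]
```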
-
Committed by Chen Weihang
-
Committed by Zhen Wang
* add the weight decay func for the momentum op
* Add the multi_precision function in Momentum Optimizer.
* Make sure that the initial values of the master weights are the same as the fp16 weights.
* add static loss scaling.
* add the rescale_grad function in the pure fp16 training.
* use the original momentum updating method.
* Polish some code, such as variable names.
* add docstrings for APIs.
* update the var creation details of _create_master_weight.
* do not modify code about imperative momentum updating.
* Fix the error of the test_dist_sparse_tensor_load_momentum UT.
* add unit test for multi precision fp16 training.
* add more unit tests for CI.
* Use lower threshold values for allclose comparison in the test_multi_precision_fp16_train UT.
* For CI Coverage Checking.
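A hedged sketch of what using the multi-precision option might look like; the `multi_precision` keyword and the training loop are assumptions based on the commit message, not code from the PR:

```python
import paddle

model = paddle.nn.Linear(16, 16)

# multi_precision keeps an fp32 "master" copy of each (potentially fp16)
# parameter, so the momentum update itself happens in full precision.
opt = paddle.optimizer.Momentum(
    learning_rate=0.1,
    momentum=0.9,
    weight_decay=0.01,
    parameters=model.parameters(),
    multi_precision=True,
)

x = paddle.rand([4, 16])
loss = model(x).mean()
loss.backward()
opt.step()
opt.clear_grad()
```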
-
Committed by chentianyu03
-
Committed by furnace
* add fp16 for layer_norm op
* revert layernorm api
* fix forward
* fix forward
* fix backward for layernorm with fp16
* fix unit test for layernorm with fp16
* fix with_mkldnn compile error for layernorm with fp16
* 1. revert to PADDLE_ENFORCE_NOT_NULL, 2. change static_cast<float> to static_cast<U>
* fix with_mkldnn compile error for layernorm with fp16
* fix with_mkldnn compile error for layernorm with fp16

Co-authored-by: zhiqiu <chenqiuliang@baidu.com>
-
Committed by mls1999725
* Update IterableDataset API
* Update TensorDataset API
* Update APIs in paddle/text/datasets
* Update dataset.py
-
Committed by mls1999725
* Update get_worker_info API
* Update dataloader_iter.py
* Update dataloader_iter.py
* Update dataloader_iter.py
-
Committed by mls1999725
* Update conv3d API
* Update nn.py
* Update nn.py
* Update nn.py
* Update nn.py
* Update nn.py
* Update nn.py
* Update nn.py
* Update nn.py
-
Committed by Huihuang Zheng
This PR fixes several problems in dy2stat for the Deoldify model in PaddleGan.
1. In the model, the engineer wrote `if x.shape == y.shape`. The tensor shape is a tuple in dygraph, so `==` returns True/False, but in static graph `==` becomes an element-wise comparison, which is a different behavior. In this PR we reduce the element-wise comparison result.
2. If the engineer writes computations that use parameters in hooks, the static graph can lose the parameter variable, because we put param_guard at the forward of a Layer. In this PR we made param_guard also cover the pre-hook and post-hook.
3. In PaddleGan, the engineer calculated some parameter values in __init__ by running dygraph code. That code also runs during dy2stat, so some variables may be assigned as a VarBase (Tensor) first and then as a Variable, which raised an error. We fixed the bug in this PR by handling that case.
TODO: We only added a test case for 1 (the shape comparison). Test cases for 2 and 3 should be added, but since we are chasing 2.0RC, I will do it in a near-future PR.
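A small illustration of the first issue; the function below is a contrived sketch, not code from PaddleGan:

```python
import paddle

def add_if_same_shape(x, y):
    # In dygraph, x.shape is a plain Python list, so `==` yields a single bool.
    # After dynamic-to-static conversion the comparison can become an
    # element-wise tensor op, which is why the result is now reduced.
    if x.shape == y.shape:
        return x + y
    return x

x = paddle.ones([2, 3])
y = paddle.ones([2, 3])
print(add_if_same_shape(x, y))                        # dygraph behavior
print(paddle.jit.to_static(add_if_same_shape)(x, y))  # converted behavior
```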
-
- 01 Dec, 2020: 14 commits
-
-
Committed by Leo Chen
* pass stop_gradient for cast op
* improve performance of elementwise_add grad
* use tensor copy async
* dygraph branch
* fix dygraph branch
* add ut
-
Committed by 卖鱼的哲学
* rebase develop
* update deformable_conv op on xpu
* update deformable_conv op on xpu
-
Committed by Chen Weihang
* hot fix compile failure in gcc4.8
* fix failed unittest
-
Committed by ShenLiang
-
Committed by Leo Chen
* add stop_gradient property and remove reduce redundant information
* refine code
-
Committed by QingshuChen
* update conv2d & softmax to new xpu api
* test=kunlun
* remove useless comments
* test=kunlun
* remote softmax xpu op
* test=kunlun
* update kunlun softmax
* test=kunlun
* update xpu unitest
* test=kunlun
* fix elementwise_grad bug for kunlun
* test=kunlun
-
Committed by Jiawei Wang
* fix 3 doc
* fix 3 doc
* Update adadelta.py
-
Committed by lijianshe02
-
Committed by huangxu96
-
Committed by Jiawei Wang
* add lamb optimizer and unittest
* fix momentum resume training
* fix momentum acc
-
Committed by Leo Chen
-
Committed by wanghuancoder
-
Committed by chentianyu03
* add complex64 and complex128 type; add +-*/@ and slice operator for complex types
* add test cases for complex elementwise, matmul and getitem unittest
* add test cases for complex types
* add test cases for complex matmul unittest
-
Committed by Zhou Wei
* The leaf tensor concept is exposed and the gradient accumulation of leaf tensor
* The leaf tensor concept is exposed and the gradient accumulation of leaf tensor
* fix coverage
* fix api doc
* fix CI unittest
* fix CI unittest
* fix unitest
* empty tensor doesn't need inner_var_
* fix some error message
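As a hedged illustration of the leaf-tensor notion exposed here; the `is_leaf` and `grad` property names follow my understanding of the 2.0 dygraph API:

```python
import paddle

# A tensor created directly by the user is a leaf of the autograd graph;
# tensors produced by operators are not.
x = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
y = x * 2

print(x.is_leaf)  # True
print(y.is_leaf)  # False

# Gradients accumulate on the leaf tensor across backward passes
# until they are explicitly cleared.
(x * 3.0).sum().backward()
print(x.grad)
```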
-
- 30 Nov, 2020: 11 commits
-
-
Committed by huangjun12
* fix en doc, test=document_fix
* add blank after code declare, test=document_fix
* refine doc of dropout, test=document_fix
* refine npair_loss and dropout, test=document_fix
-
Committed by yaoxuefeng
-
Committed by Chen Weihang
-
Committed by hong19860320
-
Committed by 123malin
* fix parameter prefetch & device guard

Co-authored-by: MrChengmo <cmchengmo@163.com>
Co-authored-by: chengmo <chengmo@baidu.com>
-
Committed by liym27
* Add a class TensorInplaceVersion to count the inplace version and put it in framework::Tensor instead of Allocation or Variable.
* Add a new attribute `_inplace_version` for VarBase.
* Raise exception if an inplace operation can result in incorrect gradient computation.
* Add a new interface _bump_inplace_version() for VarBase to bump the version whenever the Tensor is modified through an inplace operation.
* For api assign, call _bump_inplace_version() when it's an inplace operation in dynamic mode.
* Use original var_wrapper if the inplace_version is not changed.
* Replace SnapshotVarWrapperList with SnapshotVarWrapper to optimize performance.
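A hedged sketch of the failure mode this versioning is meant to catch, assuming `paddle.assign` with an `output` tensor counts as the in-place modification described above:

```python
import paddle

var = paddle.ones([2, 3])
var.stop_gradient = False

# The backward of pow needs the original value of `var` ...
out = paddle.pow(var, 2.0)

# ... so overwriting `var` in place afterwards bumps its inplace version,
# and backward should raise instead of silently returning a wrong gradient.
paddle.assign(paddle.zeros([2, 3]), output=var)

try:
    out.sum().backward()
except Exception as e:
    print("caught:", e)
```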
-
Committed by lilong12
* update, test=develop
-
Committed by joejiong
As the title
-
Committed by WangXi
-
Committed by Chen Weihang
* fix failed tests in the list given by yingchun
* add unittests into static_mode_white_list
* add enable static
* fix dist unittest
* skip test_sigmoid_focal_loss_op & add gym
* revert unittests that don't need to be skipped
* remove gym
-
Committed by 123malin
* test=develop, rm pathlib
-