- Dec 07, 2020 (3 commits)
-
Committed by Chen Long
-
Committed by liuyuhui
* fix bug, test=develop
* fix DLTP-15151, paddle.ParamAttr API
* fix DLTP-15083/DLTP-15274, paddle.nn.functional.assign, paddle.cast API
* fix DLTP-15431/DLTP-15432, paddle.static.nn.conv2d, paddle.static.nn.conv2d_transpose API
* fix DLTP-15083, paddle.nn.functional.assign API
* fix DLTP-15431/DLTP-15432, paddle.static.nn.conv2d, paddle.static.nn.conv2d_transpose API
* support in_dygraph_mode for cast op, test=develop (see the sketch below)
* fix bug, test=develop
* fix doc
* fix DLTP-15431/DLTP-15432, paddle.static.nn.conv2d, paddle.static.nn.conv2d_transpose API, test=document_fix
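A minimal sketch of what the in_dygraph_mode support for the cast op enables: calling paddle.cast eagerly on an imperative Tensor. The values here are illustrative only and not taken from the commit.

```python
import paddle

# Eager (dygraph) cast: float32 -> int32, truncating toward zero.
x = paddle.to_tensor([1.2, 3.7], dtype='float32')
y = paddle.cast(x, 'int32')   # -> [1, 3]
print(y.dtype)                # paddle.int32
```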
-
Committed by tangwei12
* fix gpu emb out of range
  Change-Id: I5794ac73bd634d5ea069a6fbbd914274b6d6b7bf
* fix doc
  Change-Id: I5a3350b2930a9ab2f52116c192b087307faf8fdf
-
- Dec 05, 2020 (4 commits)
-
Committed by myq406450149
* enhance error information of array_to_lod_tensor_op and lod_tensor_to_array_op. test=develop
* fix format. test=develop
* format fix. test=develop
* add lod_rank_table. test=develop
* fix format. test=develop
* fix doc info. test=develop
* fix np error
* add unbind dygraph api (see the sketch below). test=develop
* fix unbind doc. test=develop
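A minimal sketch of the unbind dygraph API mentioned above, assuming the paddle.unbind(x, axis) signature.

```python
import paddle

x = paddle.to_tensor([[1., 2., 3.],
                      [4., 5., 6.]])
# Split along axis 0 into two Tensors of shape [3].
a, b = paddle.unbind(x, axis=0)
print(a.numpy(), b.numpy())
```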
-
Committed by liym27
[cherry-pick] Fix bug: delete wrong check_type of paddle.concat and support LoDTensorArray (#29306) (#29368)
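A minimal sketch of plain paddle.concat usage for context; the commit itself is about also accepting a LoDTensorArray in static/dy2stat code, which is not reproduced here.

```python
import paddle

x = paddle.to_tensor([[1., 2.]])
y = paddle.to_tensor([[3., 4.]])
# Concatenate a list of Tensors along axis 0 -> shape [2, 2].
out = paddle.concat([x, y], axis=0)
```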
-
Committed by chentianyu03
* fix random failure of complex matmul
* Make transpose, trace, kron, reshape, sum op support complex type (#29321)
* add complex64 and complex128 type; add +-*/@ and slice operator for complex types (see the sketch below)
* add test cases for complex elementwise, matmul and getitem unittest
* add test cases for complex types
* add test cases for complex matmul unittest
* kron, reshape, transpose support complex types
* sum and trace op support complex types
* add test case of sum and trace op
* fix the bug of imag part of complex not initialized
* format file
* format code style
* kron support type promotion; modify test cases
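A minimal sketch of the complex-type support described above; it assumes paddle.to_tensor infers a complex dtype from Python complex numbers, as the commit's elementwise and matmul test cases suggest.

```python
import paddle

# Complex tensors built from Python complex literals (dtype inference assumed).
a = paddle.to_tensor([[1 + 2j, 3 + 4j],
                      [5 + 6j, 7 + 8j]])
b = paddle.to_tensor([[1 + 1j, 0 + 0j],
                      [0 + 0j, 1 + 1j]])

c = paddle.matmul(a, b)                 # complex matmul
d = a + b                               # elementwise op on complex tensors
t = paddle.transpose(a, perm=[1, 0])    # transpose supports complex types
s = paddle.sum(a)                       # so does sum
print(c.dtype)
```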
-
Committed by Feiyu Chan
* fix multiple documentation errors, test=document_fix
* fix more rst syntax errors, test=document_fix
* fix format issues in docstring, test=document_fix
-
- Dec 04, 2020 (9 commits)
-
Committed by ShenLiang
-
Committed by Chen Long
-
Committed by LielinJiang
-
Committed by Huihuang Zheng
Reduce the exception types so that when convert_to_static fails, it reports the right error message.
-
Committed by liym27
[cherry-pick 2.0rc1][inplace] Add ShareHolderWith for class Variable and SharePlaceholderWith in VarBase.detach() to share the same Tensor/SelectedRows (#29267) (#29359)
-
Committed by liym27
[cherry-pick 2.0rc1][Dy2Stat] Fix bug: do not use gast.Subscript to replace gast.Name when transforming for_enumerate_loop (#29310) (#29361)
-
Committed by Chen Weihang
* basic impl of type promotion (see the sketch below)
* add comment & another testcase
* fix complex bugs & support python op promote type
* fix failed unittests & polish code
* add unittest for coverage
* change to only promote complex type
* polish code details
* polish several comments
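A minimal sketch of the complex type promotion described above: mixing a real tensor with a complex tensor is assumed to promote the result to the complex dtype.

```python
import paddle

x = paddle.to_tensor([1.0, 2.0])          # float32
y = paddle.to_tensor([1 + 1j, 2 + 2j])    # complex dtype (assumed inference)
z = x + y                                 # result promoted to complex (assumed)
print(z.dtype)
```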
-
Committed by liym27
[Cherry-Pick 2.0.0-rc1][Dy2stat] Add a decorator paddle.jit.not_to_static to support not converting a function in Dynamic-to-Static (#29253) (#29340). Usage scenarios: if a function can already run successfully in static mode, you can use this decorator on it in the following cases (see the sketch below):
1. An unknown error occurs during the dynamic-to-static conversion of the function;
2. The internal implementation of the function has two branches: a dynamic branch and a static branch;
3. Users do not want the function to be converted during dynamic-to-static conversion.
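A minimal sketch of the decorator described above, using the paddle.jit.not_to_static name from the commit title; the surrounding model code is illustrative only.

```python
import paddle

@paddle.jit.not_to_static
def keep_as_is(x):
    # Skipped by dynamic-to-static conversion; called as-is by the converted caller.
    return x * 2

@paddle.jit.to_static
def forward(x):
    return keep_as_is(x) + 1

out = forward(paddle.to_tensor([1.0, 2.0]))
```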
-
Committed by Leo Chen
* use has_grad instead of train_mode
* add vlog for debug
* fix ut
* fix ut
-
- Dec 03, 2020 (9 commits)
-
Committed by LielinJiang
* move temporal_shift to functional
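A minimal sketch of calling temporal_shift through the functional namespace, as moved by the commit above; the argument names follow the pre-existing op and are an assumption for the functional form.

```python
import paddle
import paddle.nn.functional as F

# Input layout [N * seg_num, C, H, W]; here N=1, seg_num=4.
x = paddle.rand([4, 8, 7, 7])
out = F.temporal_shift(x, seg_num=4, shift_ratio=0.25)
print(out.shape)   # [4, 8, 7, 7]
```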
-
Committed by ShenLiang
* fix the warning of reducer (#29323)
* fix warning of fleet (#29317)
* Fix doc of fleet api (#29282)
-
Committed by ShenLiang
* Change the api of DataParallel and Fleet (#29224)
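A minimal sketch of wrapping a layer with paddle.DataParallel for dygraph data-parallel training; the exact signature changes made in #29224 are not reproduced here, and the script is meant to be run under paddle.distributed.launch or spawn.

```python
import paddle
import paddle.distributed as dist

def train():
    # Run with: python -m paddle.distributed.launch this_script.py
    dist.init_parallel_env()
    model = paddle.DataParallel(paddle.nn.Linear(10, 1))
    opt = paddle.optimizer.SGD(learning_rate=0.01, parameters=model.parameters())

    x = paddle.rand([4, 10])
    loss = model(x).mean()
    loss.backward()
    opt.step()
    opt.clear_grad()

if __name__ == '__main__':
    train()
```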
-
Committed by Zhen Wang
* Add pure fp16 training with master weights. (#27712)
* add the weight decay func for the momentum op
* Add the multi_precision function in Momentum Optimizer (see the sketch below).
* Make sure that the initial value of master weights is the same as the fp16 weights.
* add static loss scaling.
* add the rescale_grad function in the pure fp16 training.
* use the original momentum updating method.
* Polish some codes, such as variable names.
* add docstring for apis.
* update the var creation details of _create_master_weight.
* not modify codes about imperative momentum updating.
* Fix the error of test_dist_sparse_tensor_load_momentum UT.
* add unit test for multi precision fp16 training.
* add more unit tests for CI.
* Use lower threshold values for allclose comparison in test_multi_precision_fp16_train UT.
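A minimal sketch of the master-weight idea from the commit above; whether the multi_precision flag is exposed on paddle.optimizer.Momentum in this release is an assumption made here for illustration, not something stated in the commit message.

```python
import paddle

model = paddle.nn.Linear(16, 16)
opt = paddle.optimizer.Momentum(
    learning_rate=0.01,
    momentum=0.9,
    parameters=model.parameters(),
    multi_precision=True,   # assumed flag: keep fp32 master weights for fp16 parameters
)
```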
-
Committed by Steffy-zxf
* fix DATA_HOME path in win
-
Committed by yukavio
-
Committed by Huihuang Zheng
Cherry-pick of PR #29226
-
Committed by Jack Zhou
fix nll_loss doc;test=document_fix
-
Committed by LielinJiang
-
- Dec 02, 2020 (3 commits)
-
Committed by chentianyu03
-
Committed by Chen Weihang
* hot fix compile failure in gcc 4.8
* fix failed unittest
-
- Dec 01, 2020 (6 commits)
-
Committed by Jiawei Wang
* add lamb optimizer and unittest (see the sketch below)
* fix momentum resume training
* fix momentum acc
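A minimal sketch of using the Lamb optimizer added above, assuming the paddle.optimizer.Lamb entry point and its lamb_weight_decay argument; the model and data are placeholders.

```python
import paddle

model = paddle.nn.Linear(10, 10)
opt = paddle.optimizer.Lamb(
    learning_rate=0.002,
    lamb_weight_decay=0.01,
    parameters=model.parameters(),
)

x = paddle.rand([4, 10])
loss = model(x).mean()
loss.backward()
opt.step()
opt.clear_grad()
```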
-
Committed by Leo Chen
-
Committed by wanghuancoder
-
Committed by chentianyu03
* add complex64 and complex128 type; add +-*/@ and slice operator for complex types
* add test cases for complex elementwise, matmul and getitem unittest
* add test cases for complex types
* add test cases for complex matmul unittest
-
Committed by 123malin
* fix fleet api doc
-
Committed by Zhou Wei
* Expose the leaf tensor concept and gradient accumulation on leaf tensors (see the sketch below)
* Expose the leaf tensor concept and gradient accumulation on leaf tensors
* fix coverage
* fix api doc
* fix CI unittest
* fix CI unittest
* fix unittest
* empty tensor doesn't need inner_var_
* fix some error messages
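A minimal sketch of the leaf-tensor behaviour described above; the is_leaf attribute and gradient accumulation across backward() calls follow the commit description and are assumptions about the exact surface API.

```python
import paddle

x = paddle.to_tensor([1.0, 2.0], stop_gradient=False)
print(x.is_leaf)            # True: created by the user, not produced by an op

y = (x * 3.0).sum()
y.backward()                # x.grad == [3., 3.]

z = (x * 2.0).sum()
z.backward()                # gradients accumulate on the leaf: x.grad == [5., 5.]
print(x.grad)
```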
-
- Nov 30, 2020 (6 commits)
-
Committed by huangjun12
* fix en doc, test=document_fix
* add blank after code declaration, test=document_fix
* refine doc of dropout, test=document_fix
* refine npair_loss and dropout, test=document_fix
-
Committed by LielinJiang
* lazy import for scipy
* rm unused check
-
Committed by yaoxuefeng
-
Committed by Chen Weihang
-
Committed by hong19860320
-
Committed by 123malin
* fix parameter prefetch & device guard
  Co-authored-by: MrChengmo <cmchengmo@163.com>
  Co-authored-by: chengmo <chengmo@baidu.com>
-