- 11 Jan 2021 (22 commits)
  - Committed by 石晓伟
  - Committed by 石晓伟
  - Committed by wuhuanzhou
  - Committed by liym27: Support vector&lt;double&gt; as a type of op attribute, and support vector&lt;double&gt; as a value in op set_value (#30126)
  - Committed by furnace
  - Committed by wangchaochaohu
  - Committed by AshburnLee
  - Committed by 石晓伟
  - Committed by chentianyu03:
    * type promotion for grad
    * add type promotion for div op
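The type-promotion rule this commit applies to div can be illustrated with NumPy's promotion table (an analogy only, not Paddle's internal code): when a binary op mixes dtypes, the result dtype is promoted to the wider type.

```python
import numpy as np

# Analogy only: NumPy's dtype promotion, not Paddle's implementation.
a = np.array([1, 2, 3], dtype=np.int32)
b = np.array([2.0], dtype=np.float64)

out = a / b  # true division promotes (int32, float64) -> float64
print(out.dtype)  # float64
```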
  - Committed by wuhuanzhou
  - Committed by YUNSHEN XIE:
    * disable ut test_tsm on windows
    * fix error
    * add ut execution time
  - Committed by liym27
  - Committed by Jiaqi Liu:
    * add alias from fluid.layers.auc to static.auc
    * Update __init__.py
  - Committed by WeiXin:
    * Fix bug for 'save multiple method'
    * To pass coverage
    * edit code to pass coverage
    * add unittest for coverage
    * change for coverage
    * edit for coverage
  - Committed by WeiXin:
    * modify error message based on comments
    * edit code according to review
    * correct spelling according to review
  - Committed by gongweibao
  - Committed by Bai Yifan
  - Committed by YUNSHEN XIE:
    * use wget instead of curl to download the lcov file
    * add cache for lcov
  - Committed by Huihuang Zheng: Add a clone method for static Variable so that this interface matches dygraph; this fixed some bugs in dy2stat
  - Committed by wawltor: add the op error message for matmul on XPU
  - Committed by XiaoguangHu:
    * delete paddle.nn.functional.assign
    * fix dynamic-to-static error
  - Committed by LielinJiang: fix warning and no grad
- 10 Jan 2021 (2 commits)
  - Committed by GaoWei8: optimize softmax forward
  - Committed by wangchaochaohu: reduce the memory occupied by the fused pattern of elementwise_add op and an activation op (relu, for example) (#29885)
- 09 Jan 2021 (3 commits)
  - Committed by zhang wenhui
  - Committed by pangyoki:
    * add view strategy on squeeze, unsqueeze, reshape, flatten
    * add squeeze unittest
    * add unittests
    * use View strategy as name rather than Reuse Allocation
    * fix view api doc
    * fix format
    * use core.ops when input of reshape2 is Tensor
    * fix test_cross_entropy_loss error caused by reshape2
    * delete selected_rows
    * change op_function
    * little change
    * solve HandleViewBetweenInputAndOutput
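The "view" strategy named above means that ops like reshape and squeeze can return a tensor that shares the input's allocation instead of copying it. NumPy's reshape shows the same idea (an analogy only, not Paddle's code):

```python
import numpy as np

# Analogy only: a reshape that returns a view shares storage with its input,
# so no new allocation is made and writes are visible through both handles.
a = np.arange(6)
b = a.reshape(2, 3)  # a view, not a copy
b[0, 0] = 99
print(a[0])  # 99: the underlying buffer is shared
```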
  - Committed by Jacek Czaja:
    * Added UT for testing elementwise_mul caching
    * lint fixes
- 08 Jan 2021 (13 commits)
  - Committed by huangxu96
  - Committed by Chen Weihang
  - Committed by Zhen Wang:
    * add cast ops before and after unsupported fp16 ops
    * Keep partial net in FP32 pattern
    * Support check_finite_and_unscale and update_loss_scaling for FP16 calculation mode
    * Add fp16 support for adam op
    * add multi precision attr for adam
    * Fix the bug of test_multi_precision_fp16_train UT
    * Code format for CI
    * Fix the redefinition error about MPTypeTrait on windows
    * fix bugs of the _create_accumulators func in Momentum
    * fix bug when inserting post cast op
    * Add the update_loss_scaling op in allow_set of UnusedVarCheck
    * Update for ci coverage
    * Add some doc for OptimizerWithMixedPrecision
    * Fix the code style
    * Improve the doc of `amp_init`
    * Change fp16 testing for the case where users define the infer program separately
  - Committed by Leo Chen
  - Committed by Leo Chen:
    * fix dtype of ungenerated grad var
    * update ut
    * refine code
    * set default dtype
    * fix could_use_cudnn bug
    * remove debug code
    * re-implement
    * fix bug
  - Committed by Aurelius84:
    * fix tensor shape bug
    * fix op_num
    * clean code
  - Committed by liym27: In creation.assign, reuse the implementation code of layers.tensor.assign to avoid maintaining two copies of the code (#30227)
  - Committed by littletomatodonkey
  - Committed by Wilber
  - Committed by Wilber
  - Committed by ruri
  - Committed by liym27
  - Committed by liym27: When x is a Variable, call nn.shape(x) only in the following cases:
    1. The shape of x is used in a control-flow condition.
    2. The dim to be used is negative.
    When x is a Variable but x.shape (or x.shape[idx]) contains no negative value, don't convert to paddle.shape().
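The rule in that commit can be sketched as a small predicate (hypothetical names, not Paddle's actual code): fall back to a runtime shape op only when the statically known shape is unusable.

```python
def needs_runtime_shape(static_shape, used_in_control_flow, idx=None):
    """Hypothetical sketch of the rule above, not Paddle's actual code.

    Return True when a runtime shape op (nn.shape / paddle.shape) is needed:
    either the shape feeds a control-flow condition, or the requested dim
    is unknown at compile time (marked negative in static shape notation).
    """
    if used_in_control_flow:
        return True
    dims = static_shape if idx is None else [static_shape[idx]]
    return any(d < 0 for d in dims)  # -1 marks a dim unknown until runtime

print(needs_runtime_shape([2, 3], False))   # False: static shape suffices
print(needs_runtime_shape([-1, 3], False))  # True: batch dim unknown
print(needs_runtime_shape([2, 3], True))    # True: used in control flow
```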