- 14 Jan 2021, 4 commits
-
-
Submitted by QingshuChen
* optimize memcpy perf for kunlun (#30291)
* optimize memcpy perf for kunlun
* remove useless unit test for kunlun mean
* minor
* fix bug that can't find mkldnn (kunlun) (#30394)
-
Submitted by LielinJiang
* Add double grad for conv_transpose (#29706)
* add double grad for conv_transpose
* register cudnn conv double grad for depthwise conv (#29807)
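A minimal sketch of exercising the new double-grad path in dygraph (GPU with cuDNN assumed; the shapes and the scalar loss are illustrative, not from the commit):

```python
import paddle

# Second-order gradients through conv2d_transpose (illustrative shapes).
x = paddle.rand([1, 4, 8, 8])   # NCHW input
w = paddle.rand([4, 2, 3, 3])   # [in_channels, out_channels, kH, kW] for a transposed conv
x.stop_gradient = False
w.stop_gradient = False

y = paddle.nn.functional.conv2d_transpose(x, w)
# create_graph=True keeps the graph so a second backward pass is possible.
dx = paddle.grad(outputs=[y], inputs=[x], create_graph=True)[0]
dx.sum().backward()             # the second backward relies on the registered double-grad kernel
print(w.grad.shape)
```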
-
Submitted by Bai Yifan
-
Submitted by chajchaj
* fix bug of celoss when using ignore_index and reduction (#30180)
* fix bug of using ignore_index and reduction, test=develop
* fix bug of celoss when using ignore_index and reduction, test=develop
* improve performance when ignore_index=-100, test=develop
* add test in test_cross_entropy_loss.py for coverage rate, test=develop
* rm comment in test_cross_entropy_loss.py, test=develop
* del hard code of "float64" in python/paddle/nn/functional/loss.py, test=develop
* change mask to a more simplified implementation, test=develop
* del comment in python/paddle/nn/functional/loss.py, test=develop
* del hard code and change mask to a more simplified implementation, test=develop
* change mask to a more simplified implementation, test=develop
* change mask to a more simplified implementation, test=develop
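For reference, a minimal sketch of the combination this fix targets, paddle.nn.functional.cross_entropy with ignore_index plus a reduction (the sample values are made up):

```python
import paddle
import paddle.nn.functional as F

logits = paddle.rand([4, 5])                               # 4 samples, 5 classes
labels = paddle.to_tensor([1, 0, -100, 3], dtype='int64')  # -100 entries are ignored
# The fix covers ignore_index combined with reduction='mean' / 'sum'.
loss = F.cross_entropy(logits, labels, ignore_index=-100, reduction='mean')
print(float(loss))
```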
-
- 13 Jan 2021, 4 commits
- 12 Jan 2021, 7 commits
-
-
Submitted by Leo Chen
* change to tensor copy sync
* change to tensor copy sync
* make copy_to safe when using TensorCopy
* refine code
* add ut
* add CUDAPinned garbage collector
* add testcase: cpu place -> cuda pinned place
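A minimal sketch of the CPU place -> CUDA pinned place copy covered by the new test case (requires a GPU build; the values are illustrative):

```python
import paddle

cpu_t = paddle.to_tensor([1.0, 2.0, 3.0], place=paddle.CPUPlace())
# Copy into page-locked (pinned) host memory; tensors on this place are
# what the CUDAPinned garbage collector mentioned above manages.
pinned_t = paddle.to_tensor(cpu_t, place=paddle.CUDAPinnedPlace())
print(pinned_t.place)
```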
-
Submitted by Chengmo
* Fix server.h include device_context (#30243)
* fix cmake
  Co-authored-by: seiriosPlus <tangwei12@baidu.com>
* [Paddle.Fleet] Support local save sparse param (#30175)
* add save tensor support
  Co-authored-by: seiriosPlus <tangwei12@baidu.com>
* add sparse embedding & load vars for 2.0 & gloo bug fix (#30306)
* add sparse embedding & load vars for 2.0
  Change-Id: I36b59ed5f015189dc9d9d2e34a9357722d369f1b
* fix hdfs gloo
  Change-Id: Ia84d579053720ad804183e54c9a04b4f031c79c6
* fix gloo hdfs
  Change-Id: I5ab982fd483cddc10adcdef0b8aa83aca976cb9e
* move loadvar/sparse embedding from incubate to static
  Change-Id: I57081d3545ad2efab78c72420d2162c0eacaf3a0
  Co-authored-by: tangwei12 <tangwei12@baidu.com>
-
Submitted by swtkiwi
* fix datanorm error msg (#30294)
* Optimize the error message of framework. (#30134)
* modify error message based on comments (#30189)
* modify error message based on comments
* edit code according to review.
* Correct spelling according to review.
* fix enforce msg of sum xpu op (#30113)
* enhance error info for py_func (#30138)
* enhance error info for py_func
* update
* fix elugradgrad test fail & error message opt (#30171)
* fix elugradgrad test fail and error message opt
* fix unittest, test=develop
* Update prroi_pool_op.h fix error message
* opt message, test=develop
* fix ci fail, test=develop
* Refine PADDLE_ENFORCE Error Messages. test=develop (#30149)
  Improve some error messages in parallel_executor.cc, conditional_block_op.cc, recurrent_op.cc
* enhance error message, test=develop (#30220)
* fix error message for distribute_fpn_proposals_op (#30116)
* enhance error msgs of fusion_seqpool_cvm_concat_op.cc, test=develop (#30240)
* just add the op error message for the matmul xpu (#30246)
  add the op error message for the matmul xpu
* enhance error message of nll_loss op test=develop (#30125)
* enhance error message of nll_loss op test=develop
Co-authored-by: yaoxuefeng <yaoxuefeng@baidu.com>
Co-authored-by: xiemoyuan <71377852+xiemoyuan@users.noreply.github.com>
Co-authored-by: WeiXin <weixin10@baidu.com>
Co-authored-by: Jack Zhou <zhoushunjie@baidu.com>
Co-authored-by: Wilber <jiweibo@baidu.com>
Co-authored-by: Double_V <liuvv0203@163.com>
Co-authored-by: Huihuang Zheng <zhhsplendid@gmail.com>
Co-authored-by: zhang wenhui <frankwhzhang@126.com>
Co-authored-by: wangguanzhong <jerrywgz@126.com>
Co-authored-by: 石晓伟 <39303645+Shixiaowei02@users.noreply.github.com>
Co-authored-by: wawltor <fangzeyang0904@hotmail.com>
Co-authored-by: lijianshe02 <48898730+lijianshe02@users.noreply.github.com>
-
Submitted by Chengmo
-
Submitted by wangchaochaohu
* reduce the occupied size of memory for the fused pattern of elementwise_add Op and activation Op (relu Op for example) (#29885)
* register OPMaker and Infer Shape Check for fused_elementwise_add (#30259)
-
Submitted by Zhen Wang
[Cherry-pick] Fix the accuracy problem of allclose op when using float64 data type in static mode. (#29890) (#30313)
* Fix the accuracy problem of allclose op when using float64 data type in static mode.
* Format the code style.
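A minimal sketch of the op whose float64 accuracy is fixed here (dygraph shown for brevity; the fix itself targets static mode, and the values are illustrative):

```python
import paddle

x = paddle.to_tensor([1e10, 1e-8], dtype='float64')
y = paddle.to_tensor([1.00001e10, 1e-9], dtype='float64')
# Both elements fall within the default tolerances, so the result is True.
print(paddle.allclose(x, y, rtol=1e-05, atol=1e-08))
```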
-
Submitted by chentianyu03
* complex gradient matmul (#29966)
* dot op support complex types
* matmul support complex types
* add test case
* matmul broadcast gradient support complex
* move conjFunctor to complex_functor.h
* change the kron gradient when complex types (#29995)
* type promotion for grad (#30177)
* type promotion for grad
* add type promotion for div op
-
- 11 Jan 2021, 12 commits
-
-
Submitted by liym27
[Cherry-Pick] Support vector<double> as type of op attribute and op set_value supports vector<double> as value (#30126) (#30305)
Cherry-Pick #30126
1. Support vector<float64> as type of op attribute.
2. op set_value supports float64 numpy.array.
-
Submitted by WeiXin
* Fix bug for 'save multiple method'
* To pass coverage.
* edit code to pass coverage.
* edit code to pass coverage.
* add unittest for coverage.
* change for coverage.
* edit for coverage.
-
Submitted by pangyoki
[Cherry-pick PR 29913] Add View (reuse allocation) strategy on squeeze, unsqueeze, reshape, flatten op (#29913) (#30258)
* add view strategy on squeeze, unsqueeze, reshape, flatten
* add squeeze unittest
* add unittests
* use View strategy as name rather than Reuse Allocation
* fix view api doc
* fix format
* use core.ops when input of reshape2 is Tensor
* fix test_cross_entropy_loss error because of reshape2
* delete selected_rows
* change op_function
* little change
* solve HandleViewBetweenInputAndOutput
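The ops covered by the View strategy, for reference; the change is an internal reuse of the input allocation in dygraph, so the Python API is unchanged (shapes below are assumed):

```python
import paddle

x = paddle.rand([2, 3, 4])
y = paddle.reshape(x, [6, 4])                      # can now share x's allocation in dygraph
z = paddle.flatten(x, start_axis=1, stop_axis=2)   # likewise for flatten
s = paddle.squeeze(paddle.unsqueeze(x, axis=0), axis=0)
print(y.shape, z.shape, s.shape)                   # [6, 4] [2, 12] [2, 3, 4]
```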
-
Submitted by Huihuang Zheng
Cherry-pick of PR #30208. This PR added a clone method for static Variable so that the interface is the same as in dygraph. It fixed some bugs in dy2stat where users called clone on a dygraph Tensor.
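A minimal sketch of the new interface, assuming clone() on a static Variable simply produces a copy the way dygraph Tensor.clone() does (the program and shapes are illustrative):

```python
import paddle

paddle.enable_static()
main_prog = paddle.static.Program()
with paddle.static.program_guard(main_prog):
    x = paddle.static.data(name='x', shape=[None, 8], dtype='float32')
    y = x.clone()              # previously only available on dygraph Tensors
    print(y.name, y.shape)
```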
-
Submitted by wangchaochaohu
Cherry-pick of #28769: add support for place string representation.
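A small sketch of what the string representation enables (the exact printed text is assumed):

```python
import paddle

print(paddle.CPUPlace())            # e.g. "CPUPlace" instead of an opaque object repr
if paddle.is_compiled_with_cuda():
    print(paddle.CUDAPlace(0))      # e.g. "CUDAPlace(0)"
```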
-
Submitted by wangchaochaohu
* elementwise_add_grad Op optimization (#29575)
* optimize for long width for elementwise (#29602)
* refine (#29622)
* delete the code for fp16 optimization because it is not faster than the common template code (#29715)
* fix the shape choice for vectorization on cuda
* optimization for fp16 elementwise add (#29744)
* Fix the compiler error for the half type (#29799)
* refine the compiler error for the half2 operation (#29816)
* fix the compiler error with gcc4 and cuda9.0 (#29997)
-
Submitted by Zhen Wang
* Support pure fp16 training for AMP API. (#29544)
* add cast ops before and after unsupported fp16 ops.
* Keep partial net in FP32 pattern.
* Support check_finite_and_unscale and update_loss_scaling for FP16 calculation mode.
* Add fp16 support for adam op.
* add multi precision attr for adam.
* Fix the bug of test_multi_precision_fp16_train UT.
* Code format for CI.
* Fix the redefine error about MPTypeTrait on windows.
* fix bugs of the _create_accumulators func in Momentum.
* fix bug when inserting post cast op.
* Add the update_loss_scaling op in allow_set of UnusedVarCheck.
* Update for ci coverage.
* Add some doc for OptimizerWithMixedPrecision.
* Fix the code style.
* Improve the doc of `amp_init`.
* Change for fp16 testing if users have the infer program defined in a separate way.
* Remove tensor copy in the update_loss_scaling op. (#29426)
* remove tensor copy in the update_loss_scaling op
* not use thrust.
* fix some cuda memory access error.
-
Submitted by Aurelius84
* fix tensor shape bug
* fix op_num
* clean code
-
Submitted by guofei
* Quantization supports 2.0 APIs
* Fix the error of save_quantized_model
-
Submitted by WangXi
* Optimization grad merge performance (#29784)
* [fleet] combine amp and gradient merge, test=develop (#30086)
* fix assign_op_xpu concat_op_xpu warning (#30120)
  Co-authored-by: liuyuhui <liuyuhui@baidu.com>
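A minimal sketch of combining AMP with gradient merge through fleet's DistributedStrategy; the config keys shown are assumptions based on the strategy names in the commit, and the script is meant to run under fleetrun in collective mode:

```python
import paddle
import paddle.distributed.fleet as fleet

paddle.enable_static()

strategy = fleet.DistributedStrategy()
strategy.amp = True                       # mixed-precision training
strategy.gradient_merge = True            # accumulate gradients over several steps
strategy.gradient_merge_configs = {"k_steps": 4, "avg": True}

fleet.init(is_collective=True)
optimizer = paddle.optimizer.SGD(learning_rate=0.01)
optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)
# ... build the static program and call optimizer.minimize(loss) as usual.
```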
-
Submitted by Chen Weihang
Cherry-pick of #30219.
-
Submitted by XiaoguangHu
* fix dynamic to static error
* delete paddle.nn.functional.assign
-
- 10 Jan 2021, 1 commit
-
-
Submitted by WangXi
-
- 08 Jan 2021, 9 commits
-
-
Submitted by liym27
[cherry-pick] [Dy2Stat] Don't convert to paddle.shape if var_x.shape is not negative #29965 (#30235)
* [Cherry-Pick 2.0] [Dy2Stat] Don't convert to paddle.shape if var_x.shape is not negative (#29965)
  1. When x is a Variable, call nn.shape(x) only in the following cases:
     1) The shape of x is used in a control flow condition.
     2) The dim to be used is negative.
  2. When x is a Variable but x.shape or x.shape[idx] doesn't contain a negative value, don't convert to paddle.shape().
* [Cherry-Pick 2.0] [Dy2Stat] Use Paddle 2.0 api paddle.tensor.array_* (#30156)
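A small sketch of the dy2stat behavior described above (shapes are illustrative): when x.shape carries no negative dims and is not used in a control-flow condition, the converted program can keep the Python ints instead of inserting a paddle.shape(x) call.

```python
import paddle

@paddle.jit.to_static
def reshape_to_2d(x):
    # x.shape contains no -1 here and is not used in a control-flow condition,
    # so dy2stat keeps the Python ints (see the note above).
    return paddle.reshape(x, [x.shape[0], x.shape[1] * x.shape[2]])

y = reshape_to_2d(paddle.rand([2, 3, 4]))
print(y.shape)   # [2, 12]
```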
-
Submitted by huangxu96
* Optimizer trans momentum (#29597)
* merge amp related function in Momentum from paddle.fluid.contrib.optimizer into paddle.optimizer.
* Add unittest for 2.0 Momentum API.
* fix some bugs in weight_decay.
* add alias for fluid.contrib.mixed_precision (#29562)
* add alias for fluid.contrib.mixed_precision
* add static.amp into setup.py.in (#29621)
* add static.amp into setup.py.in
* add unittest for api
* fix a bug in multi_precision_fp16 unittest. (#29756)
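A minimal sketch of the merged API surface, assuming multi_precision is the AMP-related flag now exposed on paddle.optimizer.Momentum (model and hyperparameters are illustrative):

```python
import paddle

model = paddle.nn.Linear(8, 2)
# multi_precision keeps a master FP32 copy of FP16 parameters during the update.
opt = paddle.optimizer.Momentum(
    learning_rate=0.01,
    momentum=0.9,
    parameters=model.parameters(),
    multi_precision=True,
)
```

The fluid.contrib.mixed_precision utilities mentioned above also become reachable under the paddle.static.amp alias.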
-
Submitted by liym27
[cherry-pick 2.0] Fix bug: In dynamic mode, if start or end is negative, __getitem__ returns a wrong result (#30003) (#30146)
1. When slice_item is a slice:
   1) the start of __getitem__ should be std::max(start, 0)
   2) the end of __getitem__ should be std::min(end, dim)
2. When slice_item is an integer, it should be in [-dim_len, dim_len).
3. Fix error message to use accurate data.
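A small dygraph sketch of the slicing behavior the fix restores (values are illustrative):

```python
import paddle

x = paddle.arange(6)            # [0, 1, 2, 3, 4, 5]
print(x[-3:].numpy())           # [3 4 5]
print(x[1:-1].numpy())          # [1 2 3 4]
print(x[-100:100].numpy())      # start/end are clamped to the valid range: [0 1 2 3 4 5]
```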
-
Submitted by liym27
1. Type of index: int, slice (step must be 1).
2. Type of value:
   (1) int32, int64, float32, bool;
   (2) numpy.array (int32, int64, float32, bool); <Note: float64 is not supported>
   (3) paddle.Tensor (int32, int64, float32, float64, bool);
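Assuming this entry describes Tensor __setitem__ (the set_value path mentioned earlier), a minimal sketch of the listed index/value forms, with dtypes chosen to match the note above:

```python
import paddle
import numpy as np

x = paddle.zeros([4, 3], dtype='float32')
x[0] = 2.0                                           # int index, scalar value
x[1] = np.ones(3, dtype='float32')                   # numpy.array value (float64 not supported)
x[2:4] = paddle.ones([2, 3], dtype='float32')        # slice index (step must be 1), Tensor value
print(x.numpy())
```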
-
Submitted by Jiaqi Liu
* fix beam search bug
* add dygraph unittest
* update dynamic_decode argument doc
* add warning info for state which has no lengths attribute
-
Submitted by Chen Weihang
* simplify prepared op impl to improve performance
* fix kunlun compile error
* continue to fix kunlun compile error
* only transform diff place when dtype diff
* fix failed unittests
* remove useless file
* polish impl by review comment
-
Submitted by 123malin
* Add Lookahead and ModelAverage Optimizer (#30004)
* test=develop, add model_average and lookahead
* Improve Index select cuda kernel (#30139)
* test=develop, add index_select_cuda kernel
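For the index_select part, a minimal usage sketch (values are illustrative; the CUDA kernel improvement does not change the API, and the Lookahead/ModelAverage optimizers are only named above, so no import path is assumed here):

```python
import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0],
                      [4.0, 5.0, 6.0],
                      [7.0, 8.0, 9.0]])
index = paddle.to_tensor([0, 2], dtype='int64')
out = paddle.index_select(x, index, axis=0)   # picks rows 0 and 2
print(out.numpy())
```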
-
Submitted by ceci3
* fix syncbn convert
* add unittest
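A minimal sketch of the conversion helper being fixed (the model structure is illustrative):

```python
import paddle.nn as nn

model = nn.Sequential(nn.Conv2D(3, 8, 3), nn.BatchNorm2D(8), nn.ReLU())
# Swap every BatchNorm layer for SyncBatchNorm ahead of multi-GPU data-parallel training.
sync_model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
print(sync_model)
```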
-
Submitted by Chen Weihang
* Simplify the options of spawn based on fleetrun (#30144)
* Simplify the options of spawn based on fleetrun
* polish details
* polish doc details
* cleanup enum test=develop (#29294)
  Co-authored-by: gongweibao <weibao.gong@gmail.com>
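A minimal sketch of launching with the simplified spawn options (needs at least two visible GPUs; the training body is a placeholder):

```python
import paddle.distributed as dist

def train():
    dist.init_parallel_env()
    print("rank:", dist.get_rank())

if __name__ == '__main__':
    # After #30144 the options mirror fleetrun; nprocs picks up the visible devices by default.
    dist.spawn(train, nprocs=2)
```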
-
- 07 Jan 2021, 3 commits
-
-
Submitted by WeiXin
* Support storage of large parameters (#29988)
* Support storage of large parameters
* Reduce the complexity of the unittest
* Reduce the complexity of the unittest, commented out unittest for
* add unittest for static.save/load
* Increase the timeout threshold of 'test_static_save_load'
* Increase the timeout threshold of 'test_static_save_load'
* Increase the timeout threshold of 'test_static_save_load' and 'test_paddle_save_load'
* Increase the timeout threshold of 'test_static_save_load' and 'test_paddle_save_load'
* Extend the timeout for the (#30151)
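The save/load path being extended, as a minimal sketch (the file name is illustrative; the change lifts the previous parameter-size limit rather than altering the API):

```python
import paddle

layer = paddle.nn.Linear(10, 10)
paddle.save(layer.state_dict(), 'linear.pdparams')   # very large state dicts are now supported
state = paddle.load('linear.pdparams')
layer.set_state_dict(state)
```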
-
Submitted by Leo Chen
* Improve performance of elementwise_add grad op (#29187)
* pass stop_gradient for cast op
* improve performance of elementwise_add grad
* use tensor copy async
* dygraph branch
* fix dygraph branch
* add ut
* make gelu fp16 computing more robust (#29484)
* Add fast path for dropout when p == 0 (#29553)
* add fast path for p == 0 in dropout
* add ut
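A small sketch of the dropout fast path (input values are random; with p == 0 the op now returns the input unchanged):

```python
import paddle
import paddle.nn.functional as F

x = paddle.rand([4, 8])
y = F.dropout(x, p=0.0, training=True)   # p == 0 takes the fast path: y == x
print(paddle.equal_all(x, y))            # True
```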
-
Submitted by furnace
* Layer norm fp16 (#29169)
* add fp16 for layer_norm op
* revert layernorm api
* fix forward
* fix forward
* fix backward for layernorm with fp16
* fix unit test for layernorm with fp16
* fix with_mkldnn compile error for layernorm with fp16
* 1. revert to PADDLE_ENFORCE_NOT_NULL, 2. change static_cast<float> to static_cast<U>
* fix with_mkldnn compile error for layernorm with fp16
* fix with_mkldnn compile error for layernorm with fp16
  Co-authored-by: zhiqiu <chenqiuliang@baidu.com>
* fix layer_norm accuracy (#29434)
* Layernorm opt (#29522)
* layernorm fw opt
* layernorm bw opt
* fix typo, test=develop
* remove const dim3 for windows CI compatibility
* merge develop
  Co-authored-by: zlsh80826 <zlsh80826@gmail.com>
* Fix compile problem when cuda_arch < 6000 (#29576)
* fix compile problem when cuda_arch < 6000
* refine code
* refine code
  Co-authored-by: zhiqiu <chenqiuliang@baidu.com>
  Co-authored-by: zlsh80826 <zlsh80826@gmail.com>
-