- 12 January 2021 (2 commits)

Submitted by chentianyu03
* complex gradient matmul (#29966)
  * dot op supports complex types
  * matmul supports complex types
  * add test cases
  * matmul broadcast gradient supports complex types
  * move conjFunctor to complex_functor.h
* change the kron gradient for complex types (#29995)
* type promotion for grad (#30177)
  * add type promotion for the div op
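
A minimal dygraph sketch of what the complex-gradient support enables (the tensor values and the reduction to a real scalar are illustrative, assuming the 2.0 dygraph API):

```python
import paddle

# Complex inputs that require gradients (values are illustrative).
x = paddle.to_tensor([[1 + 2j, 3 + 4j]], dtype='complex64', stop_gradient=False)
y = paddle.to_tensor([[5 + 6j], [7 + 8j]], dtype='complex64', stop_gradient=False)

out = paddle.matmul(x, y)      # matmul accepts complex types after this change
loss = paddle.real(out).sum()  # reduce to a real scalar before calling backward
loss.backward()
print(x.grad)
```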

Submitted by LielinJiang
* fix warning and no grad

- 11 January 2021 (12 commits)

Submitted by liym27
[Cherry-Pick] Support vector<double> as the type of an op attribute, and make op set_value support vector<double> as a value (#30126) (#30305). Cherry-pick of #30126:
1. Support vector<float64> as the type of an op attribute.
2. op set_value supports float64 numpy.array values.

Submitted by WeiXin
* Fix bug for 'save multiple methods'
* Follow-up edits and unittests to pass coverage.

Submitted by pangyoki
[Cherry-pick PR #29913] Add View (reuse allocation) strategy on squeeze, unsqueeze, reshape and flatten ops (#29913) (#30258)
* add view strategy on squeeze, unsqueeze, reshape, flatten; add unittests
* use "View strategy" as the name rather than "Reuse Allocation"
* fix the view API docs and formatting
* use core.ops when the input of reshape2 is a Tensor; fix the test_cross_entropy_loss error caused by reshape2
* delete selected_rows; change op_function; solve HandleViewBetweenInputAndOutput
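
A rough dygraph illustration of the affected APIs; whether an output actually reuses the input's allocation is an internal decision, so treat the comment as a description of intent rather than a guarantee:

```python
import paddle

x = paddle.ones([2, 3])
# Under the View strategy these ops may reuse x's allocation in dygraph
# instead of copying, when it is safe to do so.
y = paddle.reshape(x, [3, 2])
z = paddle.flatten(x)
u = paddle.unsqueeze(x, axis=0)
v = paddle.squeeze(u, axis=0)
```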

Submitted by Huihuang Zheng
Cherry-pick of PR #30208. This PR adds a clone method to the static Variable so that the interface matches dygraph. It fixes some bugs in dy2stat where users called clone on a dygraph Tensor.
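
A small sketch of the now-unified interface (the static half assumes the 2.0 static graph API):

```python
import paddle

# dygraph: Tensor.clone() already existed
t = paddle.ones([2, 2])
t_copy = t.clone()

# static graph: Variable.clone() is available after this change
paddle.enable_static()
x = paddle.static.data(name='x', shape=[None, 8], dtype='float32')
x_copy = x.clone()
```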

Submitted by wangchaochaohu
Cherry-pick of #28769: add support for a string representation of places.
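
Assuming this refers to the Python-visible place objects, the change makes them print readably, e.g.:

```python
import paddle

print(paddle.CPUPlace())     # e.g. "CPUPlace"
print(paddle.CUDAPlace(0))   # e.g. "CUDAPlace(0)"
```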

Submitted by wangchaochaohu
* elementwise_add_grad op optimization (#29575)
* optimize elementwise ops for long widths (#29602)
* refine (#29622)
* delete the fp16 optimization code because it is not faster than the common template code (#29715)
* fix the shape choice of vectorization for CUDA
* optimize fp16 elementwise add (#29744)
* fix the compiler error for the half type (#29799)
* refine the compiler error for the half2 operation (#29816)
* fix the compiler error with gcc 4 and CUDA 9.0 (#29997)

Submitted by Zhen Wang
* Support pure fp16 training for the AMP API (#29544); see the sketch below
  * add cast ops before and after unsupported fp16 ops; keep part of the net in FP32
  * support check_finite_and_unscale and update_loss_scaling for the FP16 calculation mode
  * add fp16 support and a multi-precision attr for the adam op
  * fix the test_multi_precision_fp16_train UT and the MPTypeTrait redefinition error on Windows
  * fix bugs in the _create_accumulators func in Momentum and when inserting the post cast op
  * add the update_loss_scaling op to the allow set of UnusedVarCheck
  * document OptimizerWithMixedPrecision and improve the doc of `amp_init`
  * support fp16 testing when users define the inference program separately
* Remove the tensor copy in the update_loss_scaling op (#29426); do not use thrust; fix some CUDA memory access errors
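
A hedged sketch of how the pure-fp16 mode fits together in a static program; `use_pure_fp16` and the `amp_init` call follow the PR description, but treat the exact parameter names as assumptions:

```python
import paddle

paddle.enable_static()
main, startup = paddle.static.Program(), paddle.static.Program()
with paddle.static.program_guard(main, startup):
    x = paddle.static.data(name='x', shape=[None, 16], dtype='float32')
    loss = paddle.mean(paddle.static.nn.fc(x, size=1))
    opt = paddle.optimizer.Momentum(learning_rate=0.01, momentum=0.9,
                                    multi_precision=True)
    # use_pure_fp16 keeps the whole net in fp16 except for blacklisted ops.
    opt = paddle.static.amp.decorate(opt, use_pure_fp16=True)
    opt.minimize(loss)

place = paddle.CUDAPlace(0)
exe = paddle.static.Executor(place)
exe.run(startup)
opt.amp_init(place)  # cast the fp32 parameters to fp16 after startup runs
```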

Submitted by Aurelius84
* fix tensor shape bug
* fix op_num
* clean code

Submitted by guofei
* Quantization supports the 2.0 APIs
* Fix an error in save_quantized_model

Submitted by WangXi
* Optimize grad merge performance (#29784)
* [fleet] combine amp and gradient merge, test=develop (#30086)
* fix assign_op_xpu and concat_op_xpu warnings (#30120)
Co-authored-by: liuyuhui <liuyuhui@baidu.com>

Submitted by Chen Weihang
As titled; cherry-pick of #30219.

Submitted by XiaoguangHu
* fix a dynamic-to-static error
* delete paddle.nn.functional.assign

- 10 January 2021 (1 commit)

Submitted by WangXi

- 09 January 2021 (1 commit)

Submitted by littletomatodonkey

- 08 January 2021 (11 commits)

Submitted by liym27
[Cherry-Pick 2.0] In creation.assign, reuse the implementation code of layers.tensor.assign to avoid maintaining the code in two places (#30227) (#30236). Cherry-pick of #30227.

Submitted by liym27
[cherry-pick] [Dy2Stat] Don't convert to paddle.shape if var_x.shape is not negative (#29965) (#30235)
* [Cherry-Pick 2.0] [Dy2Stat] Don't convert to paddle.shape if var_x.shape is not negative (#29965)
  1. When x is a Variable, call nn.shape(x) only in the following cases:
     1) the shape of x is used in a control-flow condition;
     2) the dim to be used is negative.
  2. When x is a Variable but x.shape or x.shape[idx] contains no negative value, don't convert to paddle.shape().
* [Cherry-Pick 2.0] [Dy2Stat] Use the Paddle 2.0 API paddle.tensor.array_* (#30156)
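
A small sketch of the rule; the function body is illustrative:

```python
import paddle

@paddle.jit.to_static
def fn(x):
    # x's first dim is unknown (-1) in the static graph, so Dy2Stat rewrites
    # x.shape[0] into paddle.shape(x)[0]; a fully known, non-negative dim
    # would be kept as a plain Python int instead.
    return paddle.reshape(x, [x.shape[0], -1])

print(fn(paddle.ones([4, 6])).shape)
```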

Submitted by huangxu96
* Port Momentum AMP support into the 2.0 optimizer (#29597)
  * merge the AMP-related functionality of Momentum from paddle.fluid.contrib.optimizer into paddle.optimizer
  * add a unittest for the 2.0 Momentum API; fix some bugs in weight_decay
* add an alias for fluid.contrib.mixed_precision (#29562)
* add static.amp to setup.py.in; add a unittest for the API (#29621)
* fix a bug in the multi_precision_fp16 unittest (#29756)

Submitted by liym27
[cherry-pick 2.0] Fix bug: in dynamic mode, if start or end is negative, __getitem__ returns a wrong result (#30003) (#30146)
1. When slice_item is a slice:
   1) the start of __getitem__ should be std::max(start, 0);
   2) the end of __getitem__ should be std::min(end, dim).
2. When slice_item is an integer, it should be in [-dim_len, dim_len).
3. Fix error messages to report accurate data.
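
For illustration, the numpy-style behavior the fix restores in dygraph:

```python
import paddle

x = paddle.arange(6)     # [0, 1, 2, 3, 4, 5]
print(x[-4:-1].numpy())  # [2, 3, 4], matching numpy slice semantics
print(x[-1].numpy())     # integer index in [-dim_len, dim_len): the last element
```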

Submitted by liym27
Supported index and value types for set_value (__setitem__):
1. Type of index: int, slice (step must be 1).
2. Type of value:
   (1) int32, int64, float32, bool;
   (2) numpy.array (int32, int64, float32, bool); <Note: float64 is not supported>
   (3) paddle.Tensor (int32, int64, float32, float64, bool).
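
A short sketch of the supported forms; shapes and values are illustrative:

```python
import numpy as np
import paddle

x = paddle.zeros([4, 3], dtype='float32')
x[1] = 2.5                                 # int index, scalar value
x[0:2] = np.ones([2, 3], dtype='float32')  # slice index (step 1), numpy.array value
x[3] = paddle.full([3], 7.0)               # paddle.Tensor value
```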

Submitted by Jiaqi Liu
* fix a beam search bug
* add a dygraph unittest
* update the dynamic_decode argument doc
* add warning info for a state that has no lengths attribute

Submitted by Chen Weihang
* simplify the prepared op implementation to improve performance
* fix Kunlun compile errors
* only transform to a different place when the dtype differs
* fix failed unittests; remove a useless file; polish the implementation per review comments

Submitted by 123malin
* Add the Lookahead and ModelAverage optimizers (#30004); see the sketch below
* Improve the index_select CUDA kernel (#30139)
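
A hedged usage sketch for Lookahead; the `paddle.incubate.optimizer.LookAhead` import path and the `alpha`/`k` parameter names are assumptions based on where these optimizers landed:

```python
import paddle

linear = paddle.nn.Linear(10, 1)
sgd = paddle.optimizer.SGD(learning_rate=0.1, parameters=linear.parameters())
# NOTE: import path assumed; slow weights sync with fast weights every k
# steps, mixing with ratio alpha.
lookahead = paddle.incubate.optimizer.LookAhead(sgd, alpha=0.5, k=5)

loss = linear(paddle.rand([4, 10])).mean()
loss.backward()
lookahead.step()
lookahead.clear_grad()
```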

Submitted by LutaoChu

Submitted by ceci3
* fix syncbn convert
* add unittest

Submitted by Chen Weihang
* Simplify the options of spawn based on fleetrun (#30144); polish details and docs
* cleanup enum, test=develop (#29294)
Co-authored-by: gongweibao <weibao.gong@gmail.com>
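
A sketch of the simplified launch; the `gpus` option name mirrors fleetrun per the PR description, but treat it as an assumption:

```python
import paddle
import paddle.distributed as dist

def train():
    dist.init_parallel_env()
    # ... build the model, wrap it with paddle.DataParallel, run the loop ...

if __name__ == '__main__':
    dist.spawn(train, nprocs=2, gpus='0,1')  # `gpus` option name is an assumption
```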

- 07 January 2021 (5 commits)

Submitted by WeiXin
* Support storage of large parameters (#29988)
  * reduce the complexity of the unittest; add a unittest for static.save/load
  * increase the timeout threshold of 'test_static_save_load' and 'test_paddle_save_load'
* Extend the timeout for the (#30151)
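
The user-facing entry points are unchanged; to our understanding the fix works around a serialization size limit for multi-gigabyte parameters, so a plain save/load round trip is all there is to show:

```python
import paddle

layer = paddle.nn.Linear(10, 10)
paddle.save(layer.state_dict(), 'linear.pdparams')  # large params no longer fail here
state = paddle.load('linear.pdparams')
layer.set_state_dict(state)
```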

Submitted by Leo Chen
* Improve the performance of the elementwise_add grad op (#29187)
  * pass stop_gradient for the cast op; use async tensor copy
  * fix the dygraph branch; add a UT
* make gelu fp16 computing more robust (#29484)
* Add a fast path for dropout when p == 0 (#29553); add a UT
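
A quick illustration of the dropout fast path:

```python
import paddle
import paddle.nn.functional as F

x = paddle.rand([8, 8])
y = F.dropout(x, p=0.0)  # fast path: no mask is generated, x passes through
assert bool((y == x).all())
```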

Submitted by furnace
* Layer norm fp16 (#29169)
  * add fp16 support for the layer_norm op; fix forward, backward, and the unit tests for fp16
  * fix the with_mkldnn compile error for layer_norm with fp16
  * revert to PADDLE_ENFORCE_NOT_NULL; change static_cast<float> to static_cast<U>
* fix layer_norm accuracy (#29434)
* Layer norm optimization (#29522): optimize the forward and backward kernels; fix a typo; remove const dim3 for Windows CI compatibility
* Fix the compile problem when cuda_arch < 6000 (#29576); refine code
Co-authored-by: zhiqiu <chenqiuliang@baidu.com>
Co-authored-by: zlsh80826 <zlsh80826@gmail.com>
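
A hedged dygraph sketch; whether layer_norm actually runs in fp16 here depends on the autocast lists, and a CUDA device is assumed:

```python
import paddle

x = paddle.rand([4, 32])
layer_norm = paddle.nn.LayerNorm(32)
with paddle.amp.auto_cast():
    y = layer_norm(x)  # the fp16 layer_norm kernel can be exercised under AMP
```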

Submitted by tangwei12
Change-Id: Ia5279b0cbb6a5b3970aff66e9510e0d85efa70ce

Submitted by ceci3
* fix the batch_norm docs (#30096)
* add an attribute for batch_norm (#29950)

- 06 January 2021 (3 commits)

Submitted by gongweibao
* fix log, test=release/2.0
* fix ut, test=develop

Submitted by huangxu96
* add an fp16 check to max and avg pool (#29479)
* Add ReserveSpace to dygraph batch_norm; add a unittest for it (#29221)
* add float16 to the adaptive_avg_pool2d check list (#29547)
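
A short sketch of the newly allowed dtype; a CUDA device is assumed since float16 pooling is a GPU path:

```python
import paddle
import paddle.nn.functional as F

x = paddle.rand([1, 3, 32, 32]).astype('float16')
y = F.adaptive_avg_pool2d(x, output_size=1)  # float16 now passes the dtype check
```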

Submitted by liym27
Expose 4 APIs: array_length, array_read, array_write, create_array. Cherry-pick of #29565.
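
A minimal sketch of the four APIs, assuming they are exposed under paddle.tensor as the entry above says:

```python
import paddle

arr = paddle.tensor.create_array(dtype='float32')
i = paddle.zeros([1], dtype='int64')
arr = paddle.tensor.array_write(paddle.ones([2]), i, array=arr)
print(paddle.tensor.array_length(arr))   # 1
print(paddle.tensor.array_read(arr, i))  # the tensor written above
```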

- 05 January 2021 (5 commits)

Submitted by Thunderbrook
* add topology-aware support (resource.h); formatting

Submitted by liym27
* [cherry-pick 2.0] Fix unittest test_slice (#29740). Before this commit, test_slice used the old API `dygraph_to_static_func` for dynamic-to-static and used the Executor explicitly, which is not recommended to users. After the fix it uses the recommended API `paddle.jit.to_static` instead, which doesn't trigger the random exception on coverage CI.
* [cherry-pick 2.0] [Dy2Stat] Support the grammar: for ele in var[idx] (#29541). Support transforming `for ele in var` statements in which var is a slice of a Tensor; see the sketch below.
* [cherry-pick 2.0] [Dy2Stat] Fix a for-loop bug: a variable that is used and created in the loop, but used before it is created (#29769)
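
A hedged sketch of the `for ele in var[idx]` grammar; the function is illustrative:

```python
import paddle

@paddle.jit.to_static
def sum_tail_rows(x):
    total = paddle.zeros([x.shape[1]])
    for row in x[1:]:  # iterating over a slice of a Tensor is now supported
        total = total + row
    return total

print(sum_tail_rows(paddle.ones([3, 4])))  # [2., 2., 2., 2.]
```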

Submitted by cc
* fix infinite scale values (#29386): avoid the scale becoming infinity in quant2_int8_mkldnn_pass, test=develop
* Support dygraph quant models (#29927): support quantized models for the Paddle 2.0 dygraph, test=develop
Co-authored-by: Wojciech Uss <wojciech.uss@intel.com>

Submitted by gongweibao

Submitted by Chen Weihang
Set FLAGS_selected_gpus for spawn. When a child process starts, it inherits the configuration of the main process and sets the FLAGS once, but at that point the environment variable has not been set yet. This leaves FLAGS_selected_gpus the same as in the main process (usually empty), so the flags are updated manually here. Note: a unittest was added and then removed, because its output showed that nvidia-smi on the CI machine reports only two GPUs, while more than two are needed to test this issue.