- 23 Jun, 2022 (2 commits)
-
-
Committed by heliqi
* cherry pick from develop 43621
* code format
* update paddle2onnx to 0.9.8
-
Committed by lidanqing
* Correct elementwise quantization (#43693)
* [Bug fix] Do not quantize weights Y when matmul inputs X and Y are both outputs of other ops (#43297)
* For matmul ops whose X and Y are both outputs of other ops, do not dequantize Y
* fix CI format
* fix according to review
Co-authored-by: joanna.wozna.intel <joanna.wozna@intel.com>
-
- 22 Jun, 2022 (4 commits)
-
-
Committed by ccrrong
* add bilinear_interp_v2 converter
* update op_teller.cc
* add unittest for bilinear_interp_v2 converter
* code format
* bug fix
* code format and add unittest
* remove merged modification in op_teller.cc
* code format
* code format
* fix scale init error
-
Committed by Yiqun Liu
cherry-pick #42750. Per QA feedback, the #42750 optimization improves solov2 model performance by about 6%, so it is cherry-picked to 2.3. Because #41096 moved the linspace Python implementation from fluid.layers.tensor to paddle.tensor.creation, and that PR is not on the release/2.3 branch, the Python changes from #42750 are ported into fluid.layers.tensor.linspace instead.
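For context, a minimal sketch of the linspace API whose Python implementation the cherry-pick touches; the argument values below are illustrative assumptions, not taken from #42750.

```python
import paddle

# 5 evenly spaced values from 0 to 10, inclusive of both endpoints.
steps = paddle.linspace(0, 10, 5, dtype="float32")
print(steps.numpy())  # [ 0.   2.5  5.   7.5 10. ]
```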
-
Committed by Zhang Ting
[cherry pick] Support optional residual add in fused ops and slice large tensor for cudnn_softmax (#43719); cherry-picks #43635 #43681 #43474
-
Committed by Sing_chan
Only the format-tool upgrades (clang-format, yapf, cmake-format) are cherry-picked to release/2.3; lint tools such as cpplint are not moved, because we are not going to fix cpplint errors in release/2.3. pre_commit.sh is also moved to release/2.3 so that both PR-CI-pre-commit and PR-CI-pre-commit-23 can work. clang-format is pre-installed to avoid repeated installation caused by pre-commit's multi-threaded run.
-
- 21 Jun, 2022 (4 commits)
-
-
Committed by Guanghua Yu
* cherry pick #43088 #40664
* fix clang format
-
Committed by chalsliu
* Update CUDA and TensorRT versions for CI
* disable ut
* Update TensorRT for CUDA 10.2
-
Committed by niuliling123
Remove redundant printing in layout autotune. Background: layout autotune logs increase the amount of information printed when running models.
-
Committed by zhoutianzi666
-
- 20 Jun, 2022 (1 commit)
-
-
Committed by xiongkun
* cherry pick from #43397
* fix code
-
- 17 Jun, 2022 (2 commits)
-
-
Committed by weishengying
-
Committed by WangXi
* Rename dropout is_test (#43098)
* replace dropout_is_test with is_test
* improve atol on A100
* fused_attention and fused_feedforward APIs support model tensor parallel (#42985)
* fix is_test bug in fused_feedforward (#43508)
Co-authored-by: Li Min <11663212+limin2021@users.noreply.github.com>
-
- 15 Jun, 2022 (1 commit)
-
-
Committed by zyfncg
* fix bug of strided_slice (#43388)
* fix strided_slice bug
* fix bug
* fix bug of infer shape for slice (#43443)
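For context, a minimal sketch of the strided_slice call whose behavior and shape inference these fixes concern; the tensor shape and slice bounds are illustrative assumptions, not taken from the PRs.

```python
import paddle

x = paddle.arange(12, dtype="float32").reshape([3, 4])
# Take every other row (axis 0) and columns 1..3 (axis 1).
y = paddle.strided_slice(x, axes=[0, 1], starts=[0, 1], ends=[3, 4], strides=[2, 1])
print(y.shape)  # [2, 3]
```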
-
- 14 Jun, 2022 (1 commit)
-
-
Committed by xiongkun
* [EinsumOp] Polish forward and backward logic for optimization (#42603)
* change logic for optimize
* modify
* merge
* change einsum_v2 as default and add new flag: FLAG_einsum_opt=1|0 (#43010)
* [EinsumOp] Make EinsumOp support bfloat16 (#43085)
* change einsum_v2 as default and add new flag: FLAG_einsum_opt=1|0
* make EinsumOp support bf16
* add unittest for BF16
* add condition for test_BF16
* fix bugs
* fix
* change the backward api to fit einsum op
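A minimal sketch of opting into the optimized einsum path described above; it assumes the FLAG_einsum_opt switch named in the commit message follows Paddle's usual FLAGS_* environment-variable convention, and the shapes are illustrative only.

```python
import os
# Assumption: the commit's FLAG_einsum_opt switch is exposed as the gflag
# FLAGS_einsum_opt and can be set through the environment before import.
os.environ.setdefault("FLAGS_einsum_opt", "1")

import paddle

a = paddle.rand([4, 8])
b = paddle.rand([8, 16])
# einsum_v2 is the default path behind paddle.einsum after this change.
out = paddle.einsum("ij,jk->ik", a, b)
print(out.shape)  # [4, 16]
```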
-
- 09 Jun, 2022 (1 commit)
-
-
Committed by zhupengyang
-
- 08 Jun, 2022 (2 commits)
-
-
Committed by niuliling123
The original implementations of the reduce amax/amin and frobenius_norm kernels were Eigen-based and took a long time to compile, so this PR replaces them with KP implementations. Also removes duplicated functionality support from DefaultElementwiseOperator to reduce the compile time of the elementwise_double_grad OP.
-
Committed by heliqi
Resolve the version conflict between the protobuf that the onnxruntime backend depends on and the framework's or external protobuf.
-
- 06 Jun, 2022 (1 commit)
-
-
Committed by niuliling123
Remove the rank instantiations in the Broadcast function and the Elementwise calls to reduce compile time. Adapted from PR #42645 on the develop branch; since develop and the release branch have diverged too much for a direct cherry-pick, this PR is resubmitted against release/2.3. The rank instantiations in Broadcast expand many low-level templates and make reduce_sum_grad_kernel.cu.o very large; after this change both the .o size and the compile time are reduced.
-
- 30 May, 2022 (2 commits)
- 26 May, 2022 (1 commit)
-
-
Committed by Chen Weihang
-
- 23 May, 2022 (1 commit)
-
-
Committed by Sing_chan
cherry-pick PR #42700
-
- 17 May, 2022 (1 commit)
-
-
Committed by Chen Weihang
-
- 11 May, 2022 (1 commit)
-
-
Committed by Aurelius84
-
- 10 May, 2022 (4 commits)
-
-
Committed by JingZhuangzhuang
* pdnode_compare
* panode compare
* pdnode_compare
-
Committed by fwenguang
* [MLU] add mlu new profiler (#41138)
* [MLU] add mlu new profiler
* fix format
* [MLU] support add callback to stream (#41831)
* [MLU] add gather mlu kernel (#41969)
* [MLU] add mlu activation kernels (#41751)
-
Committed by Allen Guo
Set the ignoreIndex attr type to string for custom_nllloss_op. Partial cherry-pick of #42534.
-
Committed by zhangbo9674
-
- 09 May, 2022 (1 commit)
-
-
Committed by Allen Guo
Add class NameScopeHelper for adding namescope info. Add mappings for more kinds of optimizer states. Add a compilation_progress_logger option to IpuStrategy for reporting compilation progress. Some code cleanup and miscellaneous optimizations.
-
- 07 May, 2022 (2 commits)
-
-
Committed by FlyingQianMM
Reduce the number of threads per block in deformable_psroi_pooling to fix the "too many resources requested for launch" error (PaddlePaddle#42531) (#42533)
-
Committed by Ruibiao Chen
* Reduce time variation for cuda_managed_memory_test (#42458)
* Disable standalone executor for test_tensordot (#42476)
-
- 05 May, 2022 (1 commit)
-
-
Committed by wawltor
-
- 04 May, 2022 (3 commits)
-
-
Committed by seemingwang
* enable graph-engine to return all id (#42319)
* enable graph-engine to return all id
* change vector's dimension
* change vector's dimension
* enlarge returned ids dimensions
* change sample result's structure to fit training (#42426)
* enable graph-engine to return all id
* change vector's dimension
* change vector's dimension
* enlarge returned ids dimensions
* add actual_val
* change vlog
* fix bug
* bug fix
* bug fix
* fix display test
* singleton of gpu_graph_wrapper
* change sample result's structure to fit training
* recover sample code
* fix
* secondary sample
* add graph partition
* fix pybind
Co-authored-by: DesmonDay <908660116@qq.com>
-
Committed by heliqi
* fix paddle-ort python bug
* fix paddle-ort python bug
-
Committed by XiaoguangHu
-
- 02 May, 2022 (1 commit)
-
-
Committed by Zhang Zheng
* Fix test_cudnn_norm_conv and test_cudnn_bn_add_relu in CUDA 11.2
* no throw in V100 for some cases
-
- 30 Apr, 2022 (2 commits)
-
-
Committed by Weilong Wu
-
Committed by xiongkun
* Extend the Python einsum interface to make einsum_v2 support multiple operands, and switch it to the default.
* add opt_einsum dependence
* add yaml and support eager mode
* fix by code review
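A minimal sketch of the multi-operand interface this commit describes; the contraction string and shapes are illustrative only.

```python
import paddle

a = paddle.rand([2, 3])
b = paddle.rand([3, 4])
c = paddle.rand([4, 5])

# With einsum_v2 as the default, a single equation may take more than two operands.
out = paddle.einsum("ij,jk,kl->il", a, b, c)
print(out.shape)  # [2, 5]
```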
-
- 29 Apr, 2022 (1 commit)
-
-
Committed by WangXi
* fix FusedResidualDropoutBias nan in V100 (#42344)
* fix lod_tensor_array gc (#42377)
-