- 29 Aug 2023 (3 commits)
  - Committed by ronnywang
  - Committed by cyberslack_lee
    * fix
    * fix
  - Committed by gouzil
- 28 Aug 2023 (1 commit)
  - Committed by gouzil
- 25 Aug 2023 (1 commit)
  - Committed by Yuanle Liu
    * auto mixed precision inference support white list
    * update
    * update
    * update
    * move down identity_op_clean_pass
    * fix code style
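The white-list idea behind the auto-mixed-precision commit above can be illustrated with a minimal sketch: ops on the list run in reduced precision, everything else stays in fp32. The op names and list contents here are hypothetical, not Paddle's actual pass or configuration.

```python
# Minimal sketch of an AMP op white list (illustrative, not Paddle's API).
WHITE_LIST = {"matmul", "conv2d"}  # ops assumed numerically safe in fp16

def choose_dtype(op_name, white_list=WHITE_LIST):
    """Return the precision an op should run in under AMP inference."""
    return "float16" if op_name in white_list else "float32"
```

A real pass would also insert cast ops at precision boundaries; this sketch only shows the per-op decision.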
- 24 8月, 2023 2 次提交
-
-
由 Aurelius84 提交于
[NewIR]Add NOT_FOR_INFER to prune Inference Library Size and Split VJP CodeGen into pd_op_vjp.cc (#56352) * [NewIR]Prune Inference Library Size and Remove IR Dialect * remove options * add NOT_FOR_INFER * fix pd_vjp.cc * polish deps * fix code style * fix unittest * fix cmake * fix inference CI
-
由 csy0225 提交于
-
- 23 Aug 2023 (2 commits)
  - Committed by Leo Chen
    * Integrate quantize/dequantize linear and add config for explicit quantization
    * Fix the build error
    * Add macro for TRT version < 8.0
    * Remove qdq UT from windows
    * Fix UT failure
    * Check TRT version in qdq UT
    * Test tensorrt_explicit_enabled API
    * Disable QDQ UT if TRT version < 8.5
    * Add quantization postfix into public APIs
    * Apply code formatter
    * Fix the UT failure for explicit quantization
    * Apply code formatter on modified files
    * Correct the year in copyright
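The quantize/dequantize-linear pair integrated in the commit above follows the standard affine-quantization formula. A minimal plain-Python sketch (int8 range hard-coded for illustration; the real implementation operates on tensors inside TensorRT):

```python
def quantize_linear(x, scale, zero_point, qmin=-128, qmax=127):
    """Affine-quantize a float: q = clamp(round(x / scale) + zero_point)."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize_linear(q, scale, zero_point):
    """Recover an approximate float: x ~ (q - zero_point) * scale."""
    return (q - zero_point) * scale
```

Explicit quantization keeps these ops in the graph so the engine builder sees the exact scales, rather than calibrating implicitly.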
  - Committed by Travis-Lee
- 22 Aug 2023 (1 commit)
  - Committed by chen
- 21 Aug 2023 (1 commit)
  - Committed by chen
- 18 Aug 2023 (1 commit)
  - Committed by lzy
    [Inference] Make share_external_data support bf16 and bool; fix while_op cache_inference_while_scope when using fleet_executor (#56055)
    * 1. make share_external_data support bf16 and bool; 2. don't drop_kids when cache_inference_while_scope
    * fix FLAGS_cache_inference_while_scope
    * add unittest
    * add unittest
    * skip unittest when cudnn_version < 8100
    * skip test share_external_data_bf16 when CUDA_ARCH < 80
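bfloat16, which the commit above adds to share_external_data, is simply the top 16 bits of an IEEE float32 (same exponent width, truncated mantissa). A stdlib-only round-trip sketch of the format itself, unrelated to Paddle's internal implementation:

```python
import struct

def float_to_bf16_bits(x):
    """Truncate a float32 to bfloat16 by keeping the high 16 bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_bits_to_float(b):
    """Widen bfloat16 back to float32 by zero-filling the low 16 bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x
```

Because the exponent field is unchanged, bf16 keeps float32's dynamic range while giving up precision, which is why it needs no loss-scaling in inference.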
- 17 Aug 2023 (1 commit)
  - Committed by ming1753
    * [Paddle-TRT] support mark output
    * [fix bug] hook function is only called once across different predictors
    * add api test
- 16 Aug 2023 (1 commit)
  - Committed by jiangfan06
- 15 Aug 2023 (1 commit)
  - Committed by bukejiyu
    * fix trt bilinear_interp_v2_op
    * add trt 8.0 condition
    * add trt 8.0 condition test bilinear add trt 8.0 condition
    * code style
- 14 Aug 2023 (1 commit)
  - Committed by cyberslack_lee
- 10 Aug 2023 (1 commit)
  - Committed by csy0225
- 09 Aug 2023 (2 commits)
  - Committed by Xinyu Chen
    * onednn: rename macro to PADDLE_WITH_DNNL
    * onednn: rename macro to CINN_WITH_DNNL
  - Committed by Ruibin Cheung
- 07 Aug 2023 (3 commits)
  - Committed by Yuanle Liu
    * fix cudnn 8.7+ bug on cudnnConvolutionBiasActivationForward
    * save_optimized_model_pass support tensorrt
    * update
    * update
    * fix compile
    * update
    * fix ut timeout
  - Committed by gouzil
  - Committed by Ruibin Cheung
- 04 Aug 2023 (2 commits)
  - Committed by Ruibin Cheung
    * [clang-tidy] enable modernize-use-emplace
    * Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into modernize_use_emplace
  - Committed by Zhenghai Zhang
- 03 Aug 2023 (2 commits)
- 02 Aug 2023 (3 commits)
  - Committed by wz1qqx
  - Committed by yangjianfengo1
    [Inference] Replace the groupNorm implementation when the data type is bf16 or fp16 and the data format is NHWC (#55399)
    * finish * cpergroup odd * fix bf16 * single channel * code style * jingdu duiqi * add head_file * add bf16 head file * bf16 2 * bf16 * bf16 head * bf16 compile * py test * bf16 compile * bf16 compile * unset py test * nhwc * test * mean var * bf16 success * su * ctest success * use is_same_as * is_same * use is_same * rtol * gpu_stream * del sigmod * fix bfloat16 type * use cuda_bf16_hpp * use_cuda_arch * bfloat162float2 * del inplace_tol * del max_releative_tol * temp store * jingdu duiqi * temp store * plugin * jingdu duiqi * duiqi * include cuda.h * del half * half single * ci * add const * ci * cudamemset * del printf * fp16 test * add half compute * del br16 ci * del ci * ci approve * del fluid include
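Group normalization, which the commit above reimplements for fp16/bf16 NHWC buffers in CUDA, computes a mean and variance per channel group across all spatial positions. A pure-Python reference for a single [H, W, C] sample, shown only to pin down the math the kernel must reproduce (the actual plugin works on device memory):

```python
def group_norm_nhwc(x, num_groups, eps=1e-5):
    """Reference group norm over one NHWC sample given as nested lists
    of shape [H][W][C]. Statistics are computed per channel group."""
    h, w, c = len(x), len(x[0]), len(x[0][0])
    assert c % num_groups == 0, "channels must divide evenly into groups"
    gsize = c // num_groups
    out = [[[0.0] * c for _ in range(w)] for _ in range(h)]
    for g in range(num_groups):
        chans = range(g * gsize, (g + 1) * gsize)
        vals = [x[i][j][k] for i in range(h) for j in range(w) for k in chans]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        inv = (var + eps) ** -0.5
        for i in range(h):
            for j in range(w):
                for k in chans:
                    out[i][j][k] = (x[i][j][k] - mean) * inv
    return out
```

Per-channel scale and bias (gamma/beta) are omitted here for brevity; a production kernel fuses them into the same pass.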
  - Committed by jiangfan06
- 01 Aug 2023 (1 commit)
  - Committed by hong19860320
- 27 Jul 2023 (2 commits)
- 24 Jul 2023 (2 commits)
  - Committed by chen
    [Paddle-TRT] Convert 0D tensors to 1D tensors, increase the shape tensor's number count when collecting shapes (#55503)
    * make 0-D tensor to 1-D tensor to support Grounding-SAM and add shape check
    * recover identity_op_clean_pass.cc
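The 0-D-to-1-D conversion in the commit above exists because consumers that only accept rank-1-or-higher shapes (such as older TensorRT shape handling) cannot represent a scalar tensor directly. The core idea reduces to promoting an empty dims list, sketched here as an illustration rather than the actual converter code:

```python
def promote_to_rank1(dims):
    """Promote a 0-D (scalar) tensor shape to 1-D with a single element,
    leaving higher-rank shapes untouched. Illustrative sketch only."""
    return [1] if len(dims) == 0 else list(dims)
```

The promotion is value-preserving: a scalar and a shape-[1] tensor hold the same single element, so only metadata changes.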
  - Committed by Xinyu Chen
    * onednn: remove fc+eltwiseadd fusion pass
    * onednn: remove post-sum fusion in fc kernel
    * onednn: tests: make unfused add run into f32
- 21 Jul 2023 (3 commits)
  - Committed by Yuanle Liu
    * fix cudnn 8.7+ bug on cudnnConvolutionBiasActivationForward
    * save_optimized_model_pass support gpu
  - Committed by Ruibin Cheung
  - Committed by Jeng Bai-Cheng
    Issue #55016
- 20 Jul 2023 (2 commits)
  - Committed by Leo Chen
    * Fix TRT multihead matmul UT failure
  - Committed by zhupengyang
- 19 Jul 2023 (1 commit)
  - Committed by chen