- 19 Jul 2022 (1 commit)
Committed by Ruibiao Chen
* Rename BOOST_GET macros
* Fix conflicts

- 08 Jul 2022 (1 commit)
Committed by Wilber

- 22 Jun 2022 (1 commit)
Committed by zhoutianzi666
* add fc, multihead_mul, shape tensor infer, slice

- 20 Jun 2022 (1 commit)
Committed by whs

- 02 Jun 2022 (1 commit)
Committed by Wangzheee
* new general transformer inference support

- 25 May 2022 (1 commit)
Committed by Leo Chen
* fix maybe-uninitialized warning
* fix compile
* fix xpu compile
* fix npu compile
* fix infer compile
* fix compile
* fix compile

- 02 Apr 2022 (1 commit)
Committed by Wangzheee
* paddle inference support new quant_model

- 04 Mar 2022 (1 commit)
Committed by ceci3
[paddle-inference] support setting fully connected in multi-head attention static shape branch to int8 (#39660)
* fix inference int
* update
* add unittest

- 11 Feb 2022 (1 commit)
Committed by Wangzheee
* support ernie quant model with interleaved

- 24 Sep 2021 (1 commit)
Committed by baoachun
* add multihead_matmul trt converter test case
* move attribute check to op_teller

- 07 Sep 2021 (1 commit)
Committed by ceci3

- 30 Aug 2021 (1 commit)
Committed by ceci3
* update ernie int8

- 17 Jun 2021 (1 commit)
Committed by Wilber
[Inference Tensorrt] Add attr for trt engine and handle the input seq problem for ernie var len. (#33575)

- 16 Apr 2021 (1 commit)
Committed by ceci3
* support ernie trt-int8 for inference
* fix reshape

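For context, a minimal sketch of what enabling TensorRT INT8 in Paddle Inference looks like on the C++ side, assuming the paddle_infer `Config` API; the model paths and size parameters below are placeholders, not values taken from this commit.

```cpp
#include <memory>
#include "paddle_inference_api.h"

std::shared_ptr<paddle_infer::Predictor> BuildErnieInt8Predictor() {
  paddle_infer::Config config;
  // Hypothetical model files; replace with the real ERNIE program/params.
  config.SetModel("ernie_int8/__model__", "ernie_int8/__params__");
  config.EnableUseGpu(100 /*initial GPU memory in MB*/, 0 /*device id*/);
  // Run supported subgraphs through TensorRT in INT8; with use_calib_mode on,
  // TensorRT generates a calibration table if the model carries no scales.
  config.EnableTensorRtEngine(1 << 30 /*workspace bytes*/, 1 /*max batch size*/,
                              3 /*min subgraph size*/,
                              paddle_infer::PrecisionType::kInt8,
                              false /*use_static*/, true /*use_calib_mode*/);
  return paddle_infer::CreatePredictor(config);
}
```
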
- 02 Apr 2021 (1 commit)
Committed by Wilber
* update trt engine addplugin name.
* update

- 23 Mar 2021 (1 commit)
Committed by Shang Zhizhou
* fix tensorrt output variable reshape
* move padding shape x 1 x 1 in ernie to qkv and fc
* update layer name
* fix softmax when input is dynamic, fc not padding any more
* fix varlen
* move fc x_dim assert to op_teller

- 10 Mar 2021 (1 commit)
Committed by Shang Zhizhou

- 27 Nov 2020 (1 commit)
Committed by Shang Zhizhou
* remove -DSUPPORTS_CUDA_FP16 in cuda.cmake
* compile with cuda9
* add some unittest
* notest;test=coverage
* add unittest for trt plugin swish && split
* update ernie unittest
* fix some error message
* remove repeated judgement of CUDA version in mbEltwiseLayerNormOpConverter
* fix compile error when CUDA_ARCH_NAME < Pascal
* fix compile error
* update unittest timeout
* compile with cuda9
* update error msg
* fix code style
* add some comments
* add define IF_CUDA_ARCH_SUPPORT_FP16
* rename IF_CUDA_ARCH_SUPPORT_FP16 to CUDA_ARCH_FP16_SUPPORTED

- 03 Nov 2020 (1 commit)
Committed by Shang Zhizhou
* fp16 result ok
* change -DWITH_NVINFER_PLUGIN to config.EnableTensorRtOSS
* auto detect special slice op converter for ernie with trt oss
* ernie oss only support fp16
* fix special_slice_plugin serialize bug
* matmul in tensorrt ok
* ernie unittest ok
* add matmul tensorrt unittest
* remove demo code

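The second bullet names a concrete API switch, so here is a hedged sketch of how that configuration is typically wired up: instead of compiling with -DWITH_NVINFER_PLUGIN, the OSS plugin path is enabled at runtime via `config.EnableTensorRtOSS()`; per the commit message the ERNIE OSS path is FP16-only, hence kHalf. Model paths are placeholders.

```cpp
#include "paddle_inference_api.h"

paddle_infer::Config BuildErnieOssConfig() {
  paddle_infer::Config config;
  // Hypothetical model files for a var-len ERNIE model.
  config.SetModel("ernie/__model__", "ernie/__params__");
  config.EnableUseGpu(100, 0);
  // The ERNIE OSS path only supports FP16, so the engine precision is kHalf.
  config.EnableTensorRtEngine(1 << 30, 1, 3,
                              paddle_infer::PrecisionType::kHalf,
                              false /*use_static*/, false /*use_calib_mode*/);
  // Use the TensorRT OSS plugins instead of the -DWITH_NVINFER_PLUGIN build flag.
  config.EnableTensorRtOSS();
  return config;
}
```
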
- 11 May 2020 (1 commit)
Committed by Chen Weihang
* add new macro BOOST_GET_SAFELY & unittests, test=develop
* add different macro type, test=develop
* fix get macro type in executor, test=develop
* four macro part change backup
* using one macro for all case, test=develop
* revert attribute change, test=develop
* change to three func to solve gcc4.8 bug, test=develop
* polish some details, test=develop

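For readers unfamiliar with these macros: the idea behind a "safe get" is to wrap variant access so a type mismatch produces a readable error with the call site instead of a raw boost::bad_get. The sketch below is only an illustration of that idea, not Paddle's actual BOOST_GET_SAFELY definition.

```cpp
#include <boost/variant.hpp>
#include <sstream>
#include <stdexcept>

// Illustration only: fetch a type from a boost::variant and report the call
// site if the variant does not hold that type.
template <typename T, typename VariantT>
T& GetSafely(VariantT& var, const char* file, int line) {
  try {
    return boost::get<T>(var);
  } catch (boost::bad_get&) {
    std::ostringstream msg;
    msg << "variant get failed at " << file << ":" << line
        << ": the variant does not hold the requested type";
    throw std::runtime_error(msg.str());
  }
}

#define GET_SAFELY(type, var) GetSafely<type>(var, __FILE__, __LINE__)
```
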
- 26 Mar 2020 (1 commit)
Committed by Zhaolong Xing
* add dynamic plugin support. test=develop
* change emb eltwise layernorm to math function test=develop
* add emb eltwise layernorm test=develop
* can run dynamic shape ernie test=develop
* fix ci test=develop
* add ut for trt ernie dynamic test=develop
* refine dynamic shape c++ interface. test=develop
* fix comments test=develop
* fix comments test=develop

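As an illustration of the dynamic-shape C++ interface this commit refines, a minimal sketch assuming the `SetTRTDynamicShapeInfo` method on `paddle_infer::Config`; the input tensor name and shape ranges are placeholders for an ERNIE-like model.

```cpp
#include <map>
#include <string>
#include <vector>
#include "paddle_inference_api.h"

void SetErnieDynamicShapes(paddle_infer::Config* config) {
  // Per-input min / max / optimal shapes; the key must match the model's
  // actual input tensor name ("input_ids" here is hypothetical).
  std::map<std::string, std::vector<int>> min_shape{{"input_ids", {1, 1}}};
  std::map<std::string, std::vector<int>> max_shape{{"input_ids", {16, 128}}};
  std::map<std::string, std::vector<int>> opt_shape{{"input_ids", {4, 64}}};
  config->SetTRTDynamicShapeInfo(min_shape, max_shape, opt_shape);
}
```
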
- 06 Jan 2020 (1 commit)
Committed by Pei Yang
* add gelu plugin
* align trt bert with gpu
* add support for fused fc with relu
* add unittest for bert trt