From fd0a85e4f5fc128346412ac9192c19ee5c2dc67e Mon Sep 17 00:00:00 2001
From: ShiningZhang
Date: Sun, 14 Nov 2021 16:27:38 +0800
Subject: [PATCH] update SERVING_CONFIGURE.md&SERVING_CONFIGURE_CN.md with
 description of ir_optim

---
 doc/SERVING_CONFIGURE.md    | 6 +++---
 doc/SERVING_CONFIGURE_CN.md | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/doc/SERVING_CONFIGURE.md b/doc/SERVING_CONFIGURE.md
index cbcd96f8..def83915 100644
--- a/doc/SERVING_CONFIGURE.md
+++ b/doc/SERVING_CONFIGURE.md
@@ -92,9 +92,9 @@ More flags:
 | `mem_optim_off` | - | - | Disable memory / graphic memory optimization |
 | `ir_optim` | bool | False | Enable analysis and optimization of calculation graph |
 | `use_mkl` (Only for cpu version) | - | - | Run inference with MKL |
-| `use_trt` (Only for trt version) | - | - | Run inference with TensorRT |
-| `use_lite` (Only for Intel x86 CPU or ARM CPU) | - | - | Run PaddleLite inference |
-| `use_xpu` | - | - | Run PaddleLite inference with Baidu Kunlun XPU |
+| `use_trt` (Only for trt version) | - | - | Run inference with TensorRT. Requires ir_optim to be enabled. |
+| `use_lite` (Only for Intel x86 CPU or ARM CPU) | - | - | Run PaddleLite inference. Requires ir_optim to be enabled. |
+| `use_xpu` | - | - | Run PaddleLite inference with Baidu Kunlun XPU. Requires ir_optim to be enabled. |
 | `precision` | str | FP32 | Precision Mode, support FP32, FP16, INT8 |
 | `use_calib` | bool | False | Use TRT int8 calibration |
 | `gpu_multi_stream` | bool | False | EnableGpuMultiStream to get larger QPS |
diff --git a/doc/SERVING_CONFIGURE_CN.md b/doc/SERVING_CONFIGURE_CN.md
index eed97424..59b31654 100644
--- a/doc/SERVING_CONFIGURE_CN.md
+++ b/doc/SERVING_CONFIGURE_CN.md
@@ -91,9 +91,9 @@ workdir_9393
 | `mem_optim_off` | - | - | Disable memory / graphic memory optimization |
 | `ir_optim` | bool | False | Enable analysis and optimization of calculation graph |
 | `use_mkl` (Only for cpu version) | - | - | Run inference with MKL |
-| `use_trt` (Only for trt version) | - | - | Run inference with TensorRT |
-| `use_lite` (Only for Intel x86 CPU or ARM CPU) | - | - | Run PaddleLite inference |
-| `use_xpu` | - | - | Run PaddleLite inference with Baidu Kunlun XPU |
+| `use_trt` (Only for trt version) | - | - | Run inference with TensorRT. Requires ir_optim to be enabled. |
+| `use_lite` (Only for Intel x86 CPU or ARM CPU) | - | - | Run PaddleLite inference. Requires ir_optim to be enabled. |
+| `use_xpu` | - | - | Run PaddleLite inference with Baidu Kunlun XPU. Requires ir_optim to be enabled. |
 | `precision` | str | FP32 | Precision Mode, support FP32, FP16, INT8 |
 | `use_calib` | bool | False | Use TRT int8 calibration |
 | `gpu_multi_stream` | bool | False | EnableGpuMultiStream to get larger QPS |
--
GitLab
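The tables patched above document that use_trt, use_lite and use_xpu only take effect when ir_optim is also switched on. A hypothetical launch command illustrating that pairing (the model directory, port and GPU id are placeholders, not taken from the patch):

```shell
# Sketch: serve a model with TensorRT enabled. Per the documentation
# change above, --use_trt must be passed together with --ir_optim,
# since TensorRT engines are built during graph optimization.
python3 -m paddle_serving_server.serve \
    --model serving_server_model \
    --port 9393 \
    --gpu_ids 0 \
    --ir_optim \
    --use_trt
```

The same coupling applies to --use_lite and --use_xpu: passing them without --ir_optim leaves the PaddleLite/XPU path inactive, which is exactly the pitfall this documentation patch calls out.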