From 978876c70dd2d79a4d9c27c26dc0eb135babcdf1 Mon Sep 17 00:00:00 2001
From: wangjiawei04
Date: Thu, 25 Feb 2021 15:53:40 +0800
Subject: [PATCH] fix

---
 README.md        | 3 ++-
 README_CN.md     | 5 +++--
 doc/TENSOR_RT.md | 2 +-
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index af3997d8..76e46b96 100644
--- a/README.md
+++ b/README.md
@@ -136,7 +136,7 @@ The url corresponding to `cuda9.0_cudnn7-mkl`, copy it and run
 pip install https://paddle-wheel.bj.bcebos.com/2.0.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-2.0.0.post90-cp27-cp27mu-linux_x86_64.whl
 ```
 
-the default `paddlepaddle-gpu==2.0.0` is Cuda 10.2 with no TensorRT. If you want to install PaddlePaddle with TensorRT. please also check the documentation-multi-version whl package list and find key word `cuda10.2-cudnn8.0-trt7.1.3`。
+The default `paddlepaddle-gpu==2.0.0` is built for CUDA 10.2 and does not include TensorRT. If you want to install PaddlePaddle with TensorRT, please also check the multi-version whl package list in the documentation and find the key word `cuda10.2-cudnn8.0-trt7.1.3`. For more information, please check [Paddle Serving uses TensorRT](./doc/TENSOR_RT.md).
 
 If it is other environment and Python version, please find the corresponding link in the table and install it with pip.
 
@@ -223,6 +223,7 @@ the response is
 - [Develop Pipeline Serving](doc/PIPELINE_SERVING.md)
 - [Deploy Web Service with uWSGI](doc/UWSGI_DEPLOY.md)
 - [Hot loading for model file](doc/HOT_LOADING_IN_SERVING.md)
+- [Paddle Serving uses TensorRT](doc/TENSOR_RT.md)
 
 ### About Efficiency
 - [How to profile Paddle Serving latency?](python/examples/util)
diff --git a/README_CN.md b/README_CN.md
index 991be5bd..2cae7b52 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -112,7 +112,7 @@ pip install paddle-serving-server-gpu==0.5.0.post11 # GPU with CUDA10.1 + Tensor
 
 您可能需要使用国内镜像源(例如清华源, 在pip命令中添加`-i https://pypi.tuna.tsinghua.edu.cn/simple`)来加速下载。
 
-如果需要使用develop分支编译的安装包,请从[最新安装包列表](./doc/LATEST_PACKAGES.md)中获取下载地址进行下载,使用`pip install`命令进行安装。如果您想自行编译,请参照[Paddle Serving编译文档](./doc/COMPILE_CN.md)
+如果需要使用develop分支编译的安装包,请从[最新安装包列表](./doc/LATEST_PACKAGES.md)中获取下载地址进行下载,使用`pip install`命令进行安装。如果您想自行编译,请参照[Paddle Serving编译文档](./doc/COMPILE_CN.md)。
 
 paddle-serving-server和paddle-serving-server-gpu安装包支持Centos 6/7, Ubuntu 16/18和Windows 10。
 
@@ -134,7 +134,7 @@ pip install paddlepaddle-gpu==2.0.0
 ```
 pip install https://paddle-wheel.bj.bcebos.com/2.0.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-2.0.0.post90-cp27-cp27mu-linux_x86_64.whl
 ```
-由于默认的`paddlepaddle-gpu==2.0.0`是Cuda 10.2,并没有联编TensorRT,因此如果需要和在`paddlepaddle-gpu`上使用TensorRT,需要在上述多版本whl包列表当中,找到`cuda10.2-cudnn8.0-trt7.1.3`,下载对应的Python版本。
+由于默认的`paddlepaddle-gpu==2.0.0`是Cuda 10.2,并没有联编TensorRT,因此如果需要在`paddlepaddle-gpu`上使用TensorRT,需要在上述多版本whl包列表当中,找到`cuda10.2-cudnn8.0-trt7.1.3`,下载对应的Python版本。更多信息请参考[如何使用TensorRT?](doc/TENSOR_RT_CN.md)。
 
 如果是其他环境和Python版本,请在表格中找到对应的链接并用pip安装。
 
@@ -222,6 +222,7 @@ curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"x": [0.0137, -0.1
 - [如何开发Pipeline?](doc/PIPELINE_SERVING_CN.md)
 - [如何使用uWSGI部署Web Service](doc/UWSGI_DEPLOY_CN.md)
 - [如何实现模型文件热加载](doc/HOT_LOADING_IN_SERVING_CN.md)
+- [如何使用TensorRT?](doc/TENSOR_RT_CN.md)
 
 ### 关于Paddle Serving性能
 - [如何测试Paddle Serving性能?](python/examples/util/)
diff --git a/doc/TENSOR_RT.md b/doc/TENSOR_RT.md
index 65fbb96b..6e53a6ff 100644
--- a/doc/TENSOR_RT.md
+++ b/doc/TENSOR_RT.md
@@ -1,6 +1,6 @@
 ## Paddle Serving uses TensorRT
 
-(English|[Simplified Chinese]((./TENSOR_RT_CN.md)))
+(English|[简体中文](./TENSOR_RT_CN.md))
 
 ### Background
 
-- 
GitLab
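
A minimal install sketch of the two paths described in the README text touched above (the second URL is a placeholder, not a real link; take the actual `cuda10.2-cudnn8.0-trt7.1.3` wheel for your Python version from the multi-version whl package list):

```
# Default wheel: CUDA 10.2 build without TensorRT support
pip install paddlepaddle-gpu==2.0.0

# TensorRT-enabled build: replace the placeholder below with the real
# cuda10.2-cudnn8.0-trt7.1.3 link from the multi-version whl package list
pip install https://paddle-wheel.bj.bcebos.com/<cuda10.2-cudnn8.0-trt7.1.3-wheel>.whl
```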