PaddlePaddle / Serving
Commit b675d5cb ("update")
Authored on Apr 18, 2021 by zhangjun
Parent: a844f977

Showing 3 changed files with 57 additions and 12 deletions:
* doc/BAIDU_KUNLUN_XPU_SERVING.md (+1, -1)
* doc/LOW_PRECISION_DEPLOYMENT.md (+12, -11)
* doc/LOW_PRECISION_DEPLOYMENT_CN.md (+44, -0)
doc/BAIDU_KUNLUN_XPU_SERVING.md

````
@@ -82,7 +82,7 @@ Start the rpc service, deploying on ARM server with Baidu Kunlun chips,and acc
 ```
 python3 -m paddle_serving_server_gpu.serve --model uci_housing_model --thread 6 --port 9292 --use_lite --use_xpu --ir_optim
 ```
-Start the rpc service, deploying on ARMserver,and accelerate with Paddle-Lite.
+Start the rpc service, deploying on ARM server, and accelerate with Paddle-Lite.
 ```
 python3 -m paddle_serving_server_gpu.serve --model uci_housing_model --thread 6 --port 9292 --use_lite --ir_optim
 ```
````
doc/LOW_PRECISION_DEPLOYMENT.md

````
-# Paddle Serving Low-Precision Deployment
+# Low-Precision Deployment for Paddle Serving
-Low-precision deployment supports int8 and bfloat16 models on Intel CPU; Nvidia TensorRT supports int8 and bfloat16 models.
+Intel CPU supports int8 and bfloat16 models; NVIDIA TensorRT supports int8 and float16 models.
-## Generate a low-precision model with PaddleSlim quantization
+## Obtain the quantized model using the PaddleSlim tool
-See [PaddleSlim quantization](https://paddleslim.readthedocs.io/zh_CN/latest/tutorials/quant/overview.html) for details.
+To train low-precision models, please refer to [PaddleSlim](https://paddleslim.readthedocs.io/zh_CN/latest/tutorials/quant/overview.html).
````
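Since the section above only points at the PaddleSlim docs, here is a minimal post-training quantization sketch. It assumes PaddleSlim's `quant_post_static` entry point and a hypothetical FP32 inference model directory `ResNet50_infer/`; treat it as an illustration, not the project's prescribed recipe.

```
import numpy as np
import paddle
from paddleslim.quant import quant_post_static

paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())

def sample_generator():
    # Calibration reader: replace the random tensors with real preprocessed
    # images shaped like the model input (3 x 224 x 224 for ResNet50).
    for _ in range(32):
        yield [np.random.random((3, 224, 224)).astype("float32")]

quant_post_static(
    executor=exe,
    model_dir="ResNet50_infer",            # hypothetical FP32 inference model
    quantize_model_path="ResNet50_quant",  # where the quantized model is written
    sample_generator=sample_generator,
    batch_size=8,
    batch_nums=4)
```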
````
-## Deploy the PaddleSlim int8 quantized model with TensorRT int8
+## Deploy the quantized model from PaddleSlim using Paddle Serving with Nvidia TensorRT in int8 mode
-First download the ResNet50 [PaddleSlim quantized model](https://paddle-inference-dist.bj.bcebos.com/inference_demo/python/resnet50/ResNet50_quant.tar.gz) and convert it to the deployment format supported by Paddle Serving.
+First, download the [ResNet50 int8 model](https://paddle-inference-dist.bj.bcebos.com/inference_demo/python/resnet50/ResNet50_quant.tar.gz) and convert it to Paddle Serving's saved-model format.
 ```
 wget https://paddle-inference-dist.bj.bcebos.com/inference_demo/python/resnet50/ResNet50_quant.tar.gz
 tar zxvf ResNet50_quant.tar.gz
 python -m paddle_serving_client.convert --dirname ResNet50_quant
 ```
````
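One hedged usage note on the conversion step: by default `paddle_serving_client.convert` writes the server-side model to `serving_server/` and the client config to `serving_client/`, which is why the serve command below uses `--model serving_server`. Assuming the standard flags, the output directories can also be named explicitly:

```
# Assumption: --serving_server / --serving_client select the output dirs
# (the directory names shown here are hypothetical).
python -m paddle_serving_client.convert --dirname ResNet50_quant \
    --serving_server resnet50_int8_server \
    --serving_client resnet50_int8_client
```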
````
-Start the RPC service, setting the chosen GPU id and the deployment precision
+Start the RPC service, specifying the GPU id and the precision mode.
 ```
 python -m paddle_serving_server.serve --model serving_server --port 9393 --gpu_ids 0 --use_gpu --use_trt --precision int8
 ```
-Send requests with the client
+Request the serving service with the client.
 ```
 from paddle_serving_client import Client
 from paddle_serving_app.reader import Sequential, File2Image, Resize, CenterCrop
...
@@ -38,7 +39,7 @@ fetch_map = client.predict(feed={"image": img}, fetch=["score"])
 print(fetch_map["score"].reshape(-1))
 ```
-## Reference documents
+## References
 * [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim)
-* Paddle Inference quantized-model deployment on Intel CPU: [docs](https://paddle-inference.readthedocs.io/en/latest/optimize/paddle_x86_cpu_int8.html)
+* [Deploy the quantized model using Paddle Inference on Intel CPU](https://paddle-inference.readthedocs.io/en/latest/optimize/paddle_x86_cpu_int8.html)
-* Paddle Inference quantized-model deployment on Nvidia GPU: [docs](https://paddle-inference.readthedocs.io/en/latest/optimize/paddle_trt.html)
+* [Deploy the quantized model using Paddle Inference on Nvidia GPU](https://paddle-inference.readthedocs.io/en/latest/optimize/paddle_trt.html)
\ No newline at end of file
````
doc/LOW_PRECISION_DEPLOYMENT_CN.md (new file, mode 100644)

# Paddle Serving Low-Precision Deployment

Low-precision deployment supports int8 and bfloat16 models on Intel CPU, and int8 and float16 models with Nvidia TensorRT.

## Generate a low-precision model with PaddleSlim quantization

For details, see [PaddleSlim quantization](https://paddleslim.readthedocs.io/zh_CN/latest/tutorials/quant/overview.html).

## Deploy the PaddleSlim int8 quantized model with TensorRT int8

First, download the ResNet50 [PaddleSlim quantized model](https://paddle-inference-dist.bj.bcebos.com/inference_demo/python/resnet50/ResNet50_quant.tar.gz) and convert it to the deployment format supported by Paddle Serving.

```
wget https://paddle-inference-dist.bj.bcebos.com/inference_demo/python/resnet50/ResNet50_quant.tar.gz
tar zxvf ResNet50_quant.tar.gz
python -m paddle_serving_client.convert --dirname ResNet50_quant
```

Start the RPC service, setting the chosen GPU id and the deployment precision.

```
python -m paddle_serving_server.serve --model serving_server --port 9393 --gpu_ids 0 --use_gpu --use_trt --precision int8
```
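The `--precision` flag is what selects the numeric mode. As a hedged sketch (the exact flag values are assumptions about this version of the serve CLI, not part of this commit), the same converted model could also be served with TensorRT float16, or with bfloat16 on Intel CPU:

```
# Assumed --precision values: fp32 / fp16 / int8 (bf16 on CPU).
# TensorRT float16:
python -m paddle_serving_server.serve --model serving_server --port 9393 --gpu_ids 0 --use_gpu --use_trt --precision fp16
# Intel CPU bfloat16 (no GPU / TensorRT flags):
python -m paddle_serving_server.serve --model serving_server --port 9393 --precision bf16
```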
Send requests with the client.

```
from paddle_serving_client import Client
from paddle_serving_app.reader import Sequential, File2Image, Resize, CenterCrop
from paddle_serving_app.reader import RGB2BGR, Transpose, Div, Normalize

client = Client()
# The conversion step above writes the client config to serving_client/ by default.
client.load_client_config("serving_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9393"])

# Standard ImageNet preprocessing: decode the image, resize, center-crop to
# 224, swap channel order, move channels first, scale to [0, 1], and
# normalize with the ImageNet mean/std.
seq = Sequential([
    File2Image(), Resize(256), CenterCrop(224), RGB2BGR(), Transpose((2, 0, 1)),
    Div(255), Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True)
])

image_file = "daisy.jpg"
img = seq(image_file)
fetch_map = client.predict(feed={"image": img}, fetch=["score"])
print(fetch_map["score"].reshape(-1))
```
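As a small follow-up to the script above (illustrative only), the flattened score vector can be reduced to a top-1 prediction:

```
import numpy as np

# Continues the client script above: pick the highest-scoring class.
scores = fetch_map["score"].reshape(-1)
top1 = int(np.argmax(scores))
print("top-1 class id: {}, score: {:.4f}".format(top1, float(scores[top1])))
```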
## References

* [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim)
* Paddle Inference quantized-model deployment on Intel CPU: [docs](https://paddle-inference.readthedocs.io/en/latest/optimize/paddle_x86_cpu_int8.html)
* Paddle Inference quantized-model deployment on Nvidia GPU: [docs](https://paddle-inference.readthedocs.io/en/latest/optimize/paddle_trt.html)