Unverified commit 8a5303f7 authored by TeslaZhao, committed by GitHub

Merge pull request #1170 from zhangjun/xpu-fix

[doc] fix doc for xpu example
@@ -19,7 +19,7 @@ from paddle_serving_app.reader import RGB2BGR, Transpose, Div, Normalize
client = Client()
client.load_client_config(
    "serving_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9393"])
seq = Sequential([
    File2Image(), Resize(256), CenterCrop(224), RGB2BGR(), Transpose((2, 0, 1)),
......
## Prepare
### download model and extract
```
wget https://paddle-inference-dist.cdn.bcebos.com/PaddleLite/models_and_data_for_unittests/bert_base_chinese.tar.gz
tar zxvf bert_base_chinese.tar.gz
```
### convert model
```
python3 -m paddle_serving_client.convert --dirname bert_base_chinese --model_filename bert_base_chinese/model.pdmodel --params_filename bert_base_chinese/model.pdiparams
```
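By default the converter writes the serving model and config into `serving_server/` and `serving_client/` in the current directory, which is where the start and client commands below look for them.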
### or, you can get the serving saved model directly
```
wget https://paddle-serving.bj.bcebos.com/models/xpu/bert.tar.gz
tar zxvf bert.tar.gz
```
### Getting Dict and Sample Dataset
```
sh get_data.sh
```
This script downloads the Chinese dictionary file vocab.txt and the Chinese sample dataset data-c.txt.
## RPC Service
### Start Service
```
python3 bert_web_service.py serving_server 7703
```
### Client Prediction
```
head data-c.txt | python3 bert_client.py --model serving_client/serving_client_conf.prototxt
```
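For reference, here is a minimal sketch of what a client script like `bert_client.py` typically does with the piped samples. The RPC endpoint, the feed reshaping, and the fetch name are assumptions based on other Paddle Serving BERT examples, not this file's actual contents:

```python
import sys
import numpy as np
from paddle_serving_client import Client
from paddle_serving_client.utils import benchmark_args
from paddle_serving_app.reader import ChineseBertReader

args = benchmark_args()  # supplies the --model flag used above

client = Client()
client.load_client_config(args.model)
client.connect(["127.0.0.1:7703"])  # endpoint assumed from the start command

reader = ChineseBertReader({"vocab_file": "vocab.txt", "max_seq_len": 128})
for line in sys.stdin:
    feed_dict = reader.process(line.strip())
    for key in feed_dict:  # reshape each field to (max_seq_len, 1)
        feed_dict[key] = np.array(feed_dict[key]).reshape((128, 1))
    result = client.predict(feed=feed_dict,
                            fetch=["save_infer_model/scale_0"], batch=False)
    print(result)
```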
wget https://paddle-serving.bj.bcebos.com/bert_example/data-c.txt --no-check-certificate
wget https://paddle-serving.bj.bcebos.com/bert_example/vocab.txt --no-check-certificate
## Prepare
### download model and extract
```
wget https://paddle-inference-dist.cdn.bcebos.com/PaddleLite/models_and_data_for_unittests/ernie.tar.gz
tar zxvf ernie.tar.gz
```
### convert model
```
python3 -m paddle_serving_client.convert --dirname ernie
```
### or, you can get the serving saved model directly
```
wget https://paddle-serving.bj.bcebos.com/models/xpu/ernie.tar.gz
tar zxvf ernie.tar.gz
```
### Getting Dict and Sample Dataset
```
sh get_data.sh
```
This script downloads the Chinese dictionary file vocab.txt and the Chinese sample dataset data-c.txt.
## RPC Service
......
@@ -23,7 +23,7 @@ args = benchmark_args()
reader = ChineseErnieReader({"max_seq_len": 128})
fetch = ["save_infer_model/scale_0"]
endpoint_list = ['127.0.0.1:12000']
client = Client()
client.load_client_config(args.model)
client.connect(endpoint_list)
......
wget https://paddle-serving.bj.bcebos.com/bert_example/data-c.txt --no-check-certificate
wget https://paddle-serving.bj.bcebos.com/bert_example/vocab.txt --no-check-certificate
@@ -8,15 +8,10 @@
sh get_data.sh
```
## RPC service
### Start server
You can use the following code to start the RPC service
```shell
python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9393 --use_lite --use_xpu --ir_optim
```
@@ -29,19 +24,17 @@ The `paddlepaddle` package is used in `test_client.py`, and you may need to down
python test_client.py uci_housing_client/serving_client_conf.prototxt
```
## HTTP service
### Start server
Start a web service with default web service hosting modules:
``` shell
python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9393 --use_lite --use_xpu --ir_optim --name uci
```
### Client prediction
``` shell
curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"x": [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727, -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332]}], "fetch":["price"]}' http://127.0.0.1:9393/uci/prediction
```
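The same request can equally be sent from Python; here is a small equivalent of the curl call above using the `requests` package (the response layout in the comment is an assumption):

```python
import requests

payload = {
    "feed": [{"x": [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727,
                    -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332]}],
    "fetch": ["price"],
}
resp = requests.post("http://127.0.0.1:9393/uci/prediction", json=payload)
print(resp.json())  # e.g. a dict containing the predicted "price"
```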
@@ -32,8 +32,6 @@ python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --po
python test_client.py uci_housing_client/serving_client_conf.prototxt
```
## HTTP service
### Start server
@@ -41,11 +39,11 @@ python test_client.py uci_housing_client/serving_client_conf.prototxt
Start the default web service with the following one-line command:
``` shell
python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9393 --use_lite --use_xpu --ir_optim --name uci
```
### Client prediction
``` shell
curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"x": [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727, -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332]}], "fetch":["price"]}' http://127.0.0.1:9393/uci/prediction
```
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from paddle_serving_client import Client
from paddle_serving_client.utils import MultiThreadRunner
import paddle
import numpy as np
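# Thread worker: each thread creates its own RPC client, connects to both
# server replicas, and sends 1000 synchronous predictions on a fixed
# 13-feature UCI housing sample; any failed response aborts the run.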
def single_func(idx, resource):
client = Client()
client.load_client_config(
"./uci_housing_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9293", "127.0.0.1:9292"])
x = [
0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727, -0.1583, -0.0584,
0.6283, 0.4919, 0.1856, 0.0795, -0.0332
]
x = np.array(x)
for i in range(1000):
fetch_map = client.predict(feed={"x": x}, fetch=["price"])
if fetch_map is None:
return [[None]]
return [[0]]
multi_thread_runner = MultiThreadRunner()
thread_num = 4
result = multi_thread_runner.run(single_func, thread_num, {})
if None in result[0]:
exit(1)
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint: disable=doc-string-missing
from paddle_serving_server.web_service import WebService
import numpy as np
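# Web service wrapper: preprocess() packs each request's 13-feature 'x'
# vector into the (batch, 1, 13) float32 tensor the uci_housing model expects.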
class UciService(WebService):
def preprocess(self, feed=[], fetch=[]):
feed_batch = []
is_batch = True
new_data = np.zeros((len(feed), 1, 13)).astype("float32")
for i, ins in enumerate(feed):
nums = np.array(ins["x"]).reshape(1, 1, 13)
new_data[i] = nums
feed = {"x": new_data}
return feed, fetch, is_batch
uci_service = UciService(name="uci")
uci_service.load_model_config("uci_housing_model")
uci_service.prepare_server(
workdir="workdir", port=9393, use_lite=True, use_xpu=True, ir_optim=True)
uci_service.run_rpc_service()
uci_service.run_web_service()
## Prepare
### download model and extract
```
wget https://paddle-inference-dist.bj.bcebos.com/PaddleLite/models_and_data_for_unittests/VGG19.tar.gz
tar zxvf VGG19.tar.gz
```
### convert model
```
python3 -m paddle_serving_client.convert --dirname VGG19
```
### or, you can get the serving saved model directly
```
wget https://paddle-serving.bj.bcebos.com/models/xpu/vgg19.tar.gz
tar zxvf vgg19.tar.gz
```
## RPC Service
@@ -10,7 +20,7 @@ python -m paddle_serving_client.convert --dirname VGG19
### Start Service
```
python3 -m paddle_serving_server.serve --model serving_server --port 7702 --use_lite --use_xpu --ir_optim
```
### Client Prediction
......
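For completeness, here is a hedged sketch of a VGG19 client assembled from the preprocessing pipeline shown in the first hunk of this commit. The normalization constants, the feed/fetch names, and the image path are assumptions, not this example's verified values:

```python
from paddle_serving_client import Client
from paddle_serving_app.reader import (Sequential, File2Image, Resize,
                                       CenterCrop, RGB2BGR, Transpose,
                                       Div, Normalize)

client = Client()
client.load_client_config("serving_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:7702"])  # port from the start command above

# same preprocessing chain as the client snippet at the top of this diff;
# Div/Normalize values are the usual ImageNet constants (assumption)
seq = Sequential([
    File2Image(), Resize(256), CenterCrop(224), RGB2BGR(), Transpose((2, 0, 1)),
    Div(255), Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True)
])
img = seq("daisy.jpg")  # hypothetical input image path
fetch_map = client.predict(feed={"image": img}, fetch=["save_infer_model/scale_0"])
print(fetch_map)
```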