Unverified commit 2b36c2bd, authored by Jiawei Wang, committed by GitHub

Merge branch 'develop' into pddet

@@ -15,12 +15,18 @@ pip install paddlehub
run
```
-python prepare_model.py 20
+python prepare_model.py 128
```
-the 20 in the command above means max_seq_len in BERT model, which is the length of sample after preprocessing.
-the config file and model file for server side are saved in the folder bert_seq20_model.
-the config file generated for client side is saved in the folder bert_seq20_client.
+the 128 in the command above is the max_seq_len of the BERT model, i.e. the length of a sample after preprocessing.
+the config file and model file for the server side are saved in the folder bert_seq128_model.
+the config file generated for the client side is saved in the folder bert_seq128_client.
+You can also download the above model from BOS (max_seq_len=128). After decompression, the config file and model file for the server side are stored in the bert_chinese_L-12_H-768_A-12_model folder, and the config file generated for the client side is stored in the bert_chinese_L-12_H-768_A-12_client folder:
+```shell
+wget https://paddle-serving.bj.bcebos.com/paddle_hub_models/text/SemanticModel/bert_chinese_L-12_H-768_A-12.tar.gz
+tar -xzf bert_chinese_L-12_H-768_A-12.tar.gz
+```
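If you use the downloaded archive, the decompressed folders drop into the same commands used in the rest of this example; for instance (the same invocations appear in the CI script later in this commit):

```shell
# serve the downloaded model instead of bert_seq128_model (CPU)
python -m paddle_serving_server.serve --model bert_chinese_L-12_H-768_A-12_model --port 9292
# and point the client at the matching client config
head data-c.txt | python bert_client.py --model bert_chinese_L-12_H-768_A-12_client/serving_client_conf.prototxt
```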
### Getting Dict and Sample Dataset
@@ -32,11 +38,11 @@ this script will download Chinese Dictionary File vocab.txt and Chinese Sample D
### RPC Inference Service
Run
```
-python -m paddle_serving_server.serve --model bert_seq20_model/ --port 9292 #cpu inference service
+python -m paddle_serving_server.serve --model bert_seq128_model/ --port 9292 #cpu inference service
```
Or
```
-python -m paddle_serving_server_gpu.serve --model bert_seq20_model/ --port 9292 --gpu_ids 0 #launch gpu inference service at GPU 0
+python -m paddle_serving_server_gpu.serve --model bert_seq128_model/ --port 9292 --gpu_ids 0 #launch gpu inference service at GPU 0
```
### RPC Inference
@@ -47,7 +53,7 @@ pip install paddle_serving_app
```
Run
```
-head data-c.txt | python bert_client.py --model bert_seq20_client/serving_client_conf.prototxt
+head data-c.txt | python bert_client.py --model bert_seq128_client/serving_client_conf.prototxt
```
the client reads data from data-c.txt and sends prediction requests; the prediction is returned as a word vector (because the word vector contains a large amount of data, it is not printed).
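Under the hood the client flow is small; the following is a minimal sketch assembled from the bert_client.py and web-service snippets later in this diff (the real script also parses benchmark arguments, and `ChineseBertReader.process()` is assumed here to behave like the `BertReader.process()` call shown below):

```python
import sys
from paddle_serving_client import Client
from paddle_serving_app import ChineseBertReader

reader = ChineseBertReader({"max_seq_len": 128})  # must match the exported model
fetch = ["pooled_output"]                         # the sentence embedding to fetch
endpoint_list = ["127.0.0.1:9292"]

client = Client()
client.load_client_config("bert_seq128_client/serving_client_conf.prototxt")
client.connect(endpoint_list)

for line in sys.stdin:                            # e.g. head data-c.txt | python bert_client.py ...
    feed = reader.process(line)                   # tokenize and pad to max_seq_len (assumed API)
    fetch_map = client.predict(feed=feed, fetch=fetch)
```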
@@ -58,7 +64,7 @@ the client reads data from data-c.txt and send prediction request, the predictio
```
set the environment variable to specify which GPUs are used; the command above uses GPU 0 and GPU 1.
```
-python bert_web_service.py bert_seq20_model/ 9292 #launch gpu inference service
+python bert_web_service.py bert_seq128_model/ 9292 #launch gpu inference service
```
### HTTP Inference
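The HTTP request carries the same fields as the RPC feed; a sketch of the call (the service name `bert` and the `pooled_output` fetch key are assumptions taken from the surrounding client snippets):

```shell
curl -H "Content-Type:application/json" -X POST \
    -d '{"words": "hello", "fetch":["pooled_output"]}' \
    http://127.0.0.1:9292/bert/prediction
```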
@@ -75,7 +81,7 @@ GPU:GPU V100 * 1
CUDA/cudnn Version: CUDA 9.2, cudnn 7.1.4
-In the test, 10 thousand samples in the sample data are copied into 100 thousand samples. Each client thread sends a sample of the number of threads. The batch size is 1, the max_seq_len is 20, and the time unit is seconds.
+In the test, the 10 thousand samples in the sample data are copied into 100 thousand samples. Each client thread sends one N-th of the samples, where N is the number of client threads. The batch size is 1, the max_seq_len is 20 (not 128 as described above), and the time unit is seconds.
When the number of client threads is 4, the prediction speed can reach 432 samples per second.
Because a single GPU can only perform serial calculations internally, increasing the number of client threads can only reduce the idle time of the GPU. Therefore, after the number of threads reaches 4, increasing the number of threads further does not improve the prediction speed.
......
@@ -13,11 +13,17 @@ pip install paddlehub
```
Run
```
-python prepare_model.py 20
+python prepare_model.py 128
```
-The parameter 20 is the max_seq_len of the BERT model, i.e. the length of a sample after preprocessing.
-The generated server-side config file and model file are stored in the bert_seq20_model folder.
-The generated client-side config file is stored in the bert_seq20_client folder.
+The parameter 128 is the max_seq_len of the BERT model, i.e. the length of a sample after preprocessing.
+The generated server-side config file and model file are stored in the bert_seq128_model folder.
+The generated client-side config file is stored in the bert_seq128_client folder.
+You can also download the above model (max_seq_len=128) directly from BOS. After decompression, the server-side config file and model file are stored in the bert_chinese_L-12_H-768_A-12_model folder, and the client-side config file is stored in the bert_chinese_L-12_H-768_A-12_client folder:
+```shell
+wget https://paddle-serving.bj.bcebos.com/paddle_hub_models/text/SemanticModel/bert_chinese_L-12_H-768_A-12.tar.gz
+tar -xzf bert_chinese_L-12_H-768_A-12.tar.gz
+```
### Getting Dict and Sample Dataset
@@ -29,11 +35,11 @@ sh get_data.sh
### Start RPC Inference Service
Run
```
-python -m paddle_serving_server.serve --model bert_seq20_model/ --port 9292 #start cpu inference service
+python -m paddle_serving_server.serve --model bert_seq128_model/ --port 9292 #start cpu inference service
```
Or
```
-python -m paddle_serving_server_gpu.serve --model bert_seq20_model/ --port 9292 --gpu_ids 0 #start gpu inference service on gpu 0
+python -m paddle_serving_server_gpu.serve --model bert_seq128_model/ --port 9292 --gpu_ids 0 #start gpu inference service on gpu 0
```
### Run Inference
@@ -44,7 +50,7 @@ pip install paddle_serving_app
```
Run
```
-head data-c.txt | python bert_client.py --model bert_seq20_client/serving_client_conf.prototxt
+head data-c.txt | python bert_client.py --model bert_seq128_client/serving_client_conf.prototxt
```
The client reads data from data-c.txt and sends prediction requests; the prediction result is the vector representation of the text (because the output is large, the script does not print it). The server address is modified in the script.
@@ -54,7 +60,7 @@ head data-c.txt | python bert_client.py --model bert_seq20_client/serving_client
```
Use the environment variable to specify which GPUs the GPU inference service uses; the example specifies the two GPUs with indices 0 and 1.
```
-python bert_web_service.py bert_seq20_model/ 9292 #start gpu inference service
+python bert_web_service.py bert_seq128_model/ 9292 #start gpu inference service
```
### Run Inference
@@ -70,7 +76,7 @@ curl -H "Content-Type:application/json" -X POST -d '{"words": "hello", "fetch":[
Environment: CUDA 9.2, cudnn 7.1.4
-In the test, the 10,000 samples in the sample data are copied to 100,000 samples. Each client thread sends 1/N of the samples, where N is the number of threads. The batch size is 1, max_seq_len is 20, and the time unit is seconds.
+In the test, the 10,000 samples in the sample data are copied to 100,000 samples. Each client thread sends 1/N of the samples, where N is the number of threads. The batch size is 1, max_seq_len is 20 (not 128 as described above), and the time unit is seconds.
With 4 client threads, the prediction speed can reach 432 samples per second.
Because a single GPU can only compute serially internally, adding client threads only reduces GPU idle time; therefore, once the number of threads reaches 4, more threads do not improve the prediction speed.
......
@@ -29,7 +29,7 @@ from paddle_serving_app import ChineseBertReader
args = benchmark_args()

-reader = ChineseBertReader({"max_seq_len": 20})
+reader = ChineseBertReader({"max_seq_len": 128})
fetch = ["pooled_output"]
endpoint_list = ["127.0.0.1:9292"]
client = Client()
......
@@ -21,7 +21,7 @@ import os
class BertService(WebService):
    def load(self):
-        self.reader = BertReader(vocab_file="vocab.txt", max_seq_len=20)
+        self.reader = BertReader(vocab_file="vocab.txt", max_seq_len=128)

    def preprocess(self, feed={}, fetch=[]):
        feed_res = self.reader.process(feed["words"].encode("utf-8"))
......
@@ -158,8 +158,7 @@ class Client(object):
                )
            else:
                if self.predictor_sdk_ is None:
-                    timestamp = time.time()
-                    self.add_variant('default_tag_{}'.format(timestamp), endpoints,
+                    self.add_variant('default_tag_{}'.format(id(self)), endpoints,
                                     100)
                else:
                    print(
......
@@ -19,6 +19,7 @@ Usage:
"""
import argparse
from .web_service import WebService
+from flask import Flask, request


def parse_args():  # pylint: disable=doc-string-missing
@@ -88,3 +89,20 @@ if __name__ == "__main__":
    service.prepare_server(
        workdir=args.workdir, port=args.port, device=args.device)
    service.run_server()
+
+    app_instance = Flask(__name__)
+
+    @app_instance.before_first_request
+    def init():
+        service._launch_web_service()
+
+    service_name = "/" + service.name + "/prediction"
+
+    @app_instance.route(service_name, methods=["POST"])
+    def run():
+        return service.get_prediction(request)
+
+    app_instance.run(host="0.0.0.0",
+                     port=service.port,
+                     threaded=False,
+                     processes=4)
@@ -50,45 +50,34 @@ class WebService(object):
        self.device = device

    def _launch_web_service(self):
-        app_instance = Flask(__name__)
-        client_service = Client()
-        client_service.load_client_config(
+        self.client_service = Client()
+        self.client_service.load_client_config(
            "{}/serving_server_conf.prototxt".format(self.model_config))
-        client_service.connect(["0.0.0.0:{}".format(self.port + 1)])
-        service_name = "/" + self.name + "/prediction"
+        self.client_service.connect(["0.0.0.0:{}".format(self.port + 1)])

-        @app_instance.route(service_name, methods=['POST'])
-        def get_prediction():
+    def get_prediction(self, request):
        if not request.json:
            abort(400)
        if "fetch" not in request.json:
            abort(400)
        try:
-            feed, fetch = self.preprocess(request.json,
-                                          request.json["fetch"])
+            feed, fetch = self.preprocess(request.json, request.json["fetch"])
            if isinstance(feed, list):
-                fetch_map_batch = client_service.predict(
+                fetch_map_batch = self.client_service.predict(
                    feed_batch=feed, fetch=fetch)
                fetch_map_batch = self.postprocess(
-                    feed=request.json,
-                    fetch=fetch,
-                    fetch_map=fetch_map_batch)
+                    feed=request.json, fetch=fetch, fetch_map=fetch_map_batch)
                result = {"result": fetch_map_batch}
            elif isinstance(feed, dict):
                if "fetch" in feed:
                    del feed["fetch"]
-                fetch_map = client_service.predict(feed=feed, fetch=fetch)
+                fetch_map = self.client_service.predict(feed=feed, fetch=fetch)
                result = self.postprocess(
                    feed=request.json, fetch=fetch, fetch_map=fetch_map)
        except ValueError:
            result = {"result": "Request Value Error"}
        return result
-        app_instance.run(host="0.0.0.0",
-                         port=self.port,
-                         threaded=False,
-                         processes=1)

    def run_server(self):
        import socket
        localIP = socket.gethostbyname(socket.gethostname())
@@ -96,11 +85,7 @@ class WebService(object):
        print("http://{}:{}/{}/prediction".format(localIP, self.port,
                                                  self.name))
        p_rpc = Process(target=self._launch_rpc_service)
-        p_web = Process(target=self._launch_web_service)
        p_rpc.start()
-        p_web.start()
-        p_web.join()
-        p_rpc.join()

    def preprocess(self, feed={}, fetch=[]):
        return feed, fetch
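Taken together with the serve.py hunk above, this refactor moves the Flask app out of WebService: run_server() now only starts the RPC process, and the HTTP layer calls get_prediction() directly. A minimal sketch of the resulting wiring (the WebService(name=...) constructor call and the model folder are assumptions for illustration):

```python
from flask import Flask, request
from paddle_serving_server.web_service import WebService

service = WebService(name="bert")
service.load_model_config("bert_seq128_model")
service.prepare_server(workdir="workdir", port=9292, device="cpu")
service.run_server()                 # starts the RPC server process; no Flask inside anymore

app_instance = Flask(__name__)

@app_instance.before_first_request
def init():
    service._launch_web_service()    # build the in-process Client once per worker

@app_instance.route("/" + service.name + "/prediction", methods=["POST"])
def run():
    return service.get_prediction(request)

app_instance.run(host="0.0.0.0", port=service.port, threaded=False, processes=4)
```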
......
@@ -306,6 +306,9 @@ class Server(object):
        self.check_local_bin()
        if not self.use_local_bin:
            self.download_bin()
+            # wait for other process to download server bin
+            while not os.path.exists(self.server_path):
+                time.sleep(1)
        else:
            print("Use local bin : {}".format(self.bin_path))
        command = "{} " \
@@ -337,8 +340,5 @@
            self.gpuid,)
        print("Going to Run Comand")
        print(command)
-        # wait for other process to download server bin
-        while not os.path.exists(self.server_path):
-            time.sleep(1)
        os.system(command)
@@ -21,6 +21,7 @@ import argparse
import os
from multiprocessing import Pool, Process
from paddle_serving_server_gpu import serve_args
+from flask import Flask, request


def start_gpu_card_model(index, gpuid, args):  # pylint: disable=doc-string-missing
@@ -114,3 +115,20 @@ if __name__ == "__main__":
    web_service.prepare_server(
        workdir=args.workdir, port=args.port, device=args.device)
    web_service.run_server()
+
+    app_instance = Flask(__name__)
+
+    @app_instance.before_first_request
+    def init():
+        web_service._launch_web_service()
+
+    service_name = "/" + web_service.name + "/prediction"
+
+    @app_instance.route(service_name, methods=["POST"])
+    def run():
+        return web_service.get_prediction(request)
+
+    app_instance.run(host="0.0.0.0",
+                     port=web_service.port,
+                     threaded=False,
+                     processes=4)
@@ -11,17 +11,16 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-#!flask/bin/python
+# pylint: disable=doc-string-missing

from flask import Flask, request, abort
-from multiprocessing import Pool, Process, Queue
from paddle_serving_server_gpu import OpMaker, OpSeqMaker, Server
import paddle_serving_server_gpu as serving
+from multiprocessing import Pool, Process, Queue
from paddle_serving_client import Client
-from .serve import start_multi_card
-import time
-import random
-import numpy as np
+from paddle_serving_server_gpu.serve import start_multi_card
+import sys


class WebService(object):
@@ -29,7 +28,6 @@ class WebService(object):
        self.name = name
        self.gpus = []
        self.rpc_service_list = []
-        self.input_queues = []

    def load_model_config(self, model_config):
        self.model_config = model_config
@@ -66,12 +64,6 @@ class WebService(object):
        return server

    def _launch_rpc_service(self, service_idx):
-        if service_idx == 0:
-            self.rpc_service_list[service_idx].check_local_bin()
-            if not self.rpc_service_list[service_idx].use_local_bin:
-                self.rpc_service_list[service_idx].download_bin()
-        else:
-            time.sleep(3)
        self.rpc_service_list[service_idx].run_server()

    def prepare_server(self, workdir="", port=9393, device="gpu", gpuid=0):
@@ -93,88 +85,31 @@ class WebService(object):
                    gpuid,
                    thread_num=10))

-    def producers(self, inputqueue, endpoint):
-        client = Client()
-        client.load_client_config("{}/serving_server_conf.prototxt".format(
-            self.model_config))
-        client.connect([endpoint])
-        while True:
-            request_json = inputqueue.get()
-            try:
-                feed, fetch = self.preprocess(request_json,
-                                              request_json["fetch"])
-                if isinstance(feed, list):
-                    fetch_map_batch = client.predict(
-                        feed_batch=feed, fetch=fetch)
-                    fetch_map_batch = self.postprocess(
-                        feed=request_json,
-                        fetch=fetch,
-                        fetch_map=fetch_map_batch)
-                    result = {"result": fetch_map_batch}
-                elif isinstance(feed, dict):
-                    if "fetch" in feed:
-                        del feed["fetch"]
-                    fetch_map = client.predict(feed=feed, fetch=fetch)
-                    result = self.postprocess(
-                        feed=request_json, fetch=fetch, fetch_map=fetch_map)
-                self.output_queue.put(result)
-            except ValueError:
-                self.output_queue.put(-1)
-
-    def _launch_web_service(self, gpu_num):
-        app_instance = Flask(__name__)
-        service_name = "/" + self.name + "/prediction"
-        self.input_queues = []
-        self.output_queue = Queue()
-        for i in range(gpu_num):
-            self.input_queues.append(Queue())
-
-        producer_list = []
-        for i, input_q in enumerate(self.input_queues):
-            producer_processes = Process(
-                target=self.producers,
-                args=(
-                    input_q,
-                    "0.0.0.0:{}".format(self.port + 1 + i), ))
-            producer_list.append(producer_processes)
-        for p in producer_list:
-            p.start()
-
-        client = Client()
-        client.load_client_config("{}/serving_server_conf.prototxt".format(
-            self.model_config))
-        client.connect(["0.0.0.0:{}".format(self.port + 1)])
-        self.idx = 0
-
-        @app_instance.route(service_name, methods=['POST'])
-        def get_prediction():
-            if not request.json:
-                abort(400)
-            if "fetch" not in request.json:
-                abort(400)
-            self.input_queues[self.idx].put(request.json)
-            #self.input_queues[0].put(request.json)
-            self.idx += 1
-            if self.idx >= len(self.gpus):
-                self.idx = 0
-            result = self.output_queue.get()
-            if not isinstance(result, dict) and result == -1:
-                result = {"result": "Request Value Error"}
-            return result
-
-        app_instance.run(host="0.0.0.0",
-                         port=self.port,
-                         threaded=False,
-                         processes=1)
-        for p in producer_list:
-            p.join()
+    def _launch_web_service(self):
+        gpu_num = len(self.gpus)
+        self.client = Client()
+        self.client.load_client_config("{}/serving_server_conf.prototxt".format(
+            self.model_config))
+        endpoints = ""
+        if gpu_num > 0:
+            for i in range(gpu_num):
+                endpoints += "127.0.0.1:{},".format(self.port + i + 1)
+        else:
+            endpoints = "127.0.0.1:{}".format(self.port + 1)
+        self.client.connect([endpoints])
+
+    def get_prediction(self, request):
+        if not request.json:
+            abort(400)
+        if "fetch" not in request.json:
+            abort(400)
+        feed, fetch = self.preprocess(request.json, request.json["fetch"])
+        fetch_map_batch = self.client.predict(feed=feed, fetch=fetch)
+        fetch_map_batch = self.postprocess(
+            feed=request.json, fetch=fetch, fetch_map=fetch_map_batch)
+        result = {"result": fetch_map_batch}
+        return result

    def run_server(self):
        import socket
        localIP = socket.gethostbyname(socket.gethostname())
@@ -188,13 +123,6 @@ class WebService(object):
        for p in server_pros:
            p.start()
-        p_web = Process(
-            target=self._launch_web_service, args=(len(self.gpus), ))
-        p_web.start()
-        p_web.join()
-        for p in server_pros:
-            p.join()

    def preprocess(self, feed={}, fetch=[]):
        return feed, fetch
......
@@ -3,6 +3,9 @@ FROM centos:7.3.1611
RUN yum -y install wget && \
    yum -y install epel-release && yum -y install patchelf && \
    yum -y install gcc make python-devel && \
+    yum -y install libSM-1.2.2-2.el7.x86_64 --setopt=protected_multilib=false && \
+    yum -y install libXrender-0.9.10-1.el7.x86_64 --setopt=protected_multilib=false && \
+    yum -y install libXext-1.3.3-3.el7.x86_64 --setopt=protected_multilib=false && \
    yum -y install python3 python3-devel && \
    yum clean all && \
    curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && \
......
@@ -2,6 +2,9 @@ FROM centos:7.3.1611
RUN yum -y install wget >/dev/null \
    && yum -y install gcc gcc-c++ make glibc-static which >/dev/null \
    && yum -y install git openssl-devel curl-devel bzip2-devel python-devel >/dev/null \
+    && yum -y install libSM-1.2.2-2.el7.x86_64 --setopt=protected_multilib=false \
+    && yum -y install libXrender-0.9.10-1.el7.x86_64 --setopt=protected_multilib=false \
+    && yum -y install libXext-1.3.3-3.el7.x86_64 --setopt=protected_multilib=false \
    && wget https://cmake.org/files/v3.2/cmake-3.2.0-Linux-x86_64.tar.gz >/dev/null \
    && tar xzf cmake-3.2.0-Linux-x86_64.tar.gz \
    && mv cmake-3.2.0-Linux-x86_64 /usr/local/cmake3.2.0 \
......
@@ -164,6 +164,7 @@ function python_test_fit_a_line() {
            fi
            ;;
        GPU)
+            export CUDA_VISIBLE_DEVICES=0
            # test rpc
            check_cmd "python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9393 --thread 4 --gpu_ids 0 > /dev/null &"
            sleep 5 # wait for the server to start
@@ -226,7 +227,7 @@ function python_run_criteo_ctr_with_cube() {
                exit 1
            fi
            echo "criteo_ctr_with_cube inference auc test success"
-            ps -ef | grep "paddle_serving_server" | grep -v grep | awk '{print $2}' | xargs kill
+            kill_server_process
            ps -ef | grep "cube" | grep -v grep | awk '{print $2}' | xargs kill
            ;;
        GPU)
@@ -253,7 +254,7 @@ function python_run_criteo_ctr_with_cube() {
                exit 1
            fi
            echo "criteo_ctr_with_cube inference auc test success"
-            ps -ef | grep "paddle_serving_server" | grep -v grep | awk '{print $2}' | xargs kill
+            kill_server_process
            ps -ef | grep "cube" | grep -v grep | awk '{print $2}' | xargs kill
            ;;
        *)
@@ -276,27 +277,48 @@ function python_test_bert() {
    case $TYPE in
        CPU)
            pip install paddlehub
-            python prepare_model.py 20
+            # Because download from paddlehub may timeout,
+            # download the model from bos(max_seq_len=128).
+            wget https://paddle-serving.bj.bcebos.com/paddle_hub_models/text/SemanticModel/bert_chinese_L-12_H-768_A-12.tar.gz
+            tar -xzf bert_chinese_L-12_H-768_A-12.tar.gz
            sh get_data.sh
-            check_cmd "python -m paddle_serving_server.serve --model bert_seq20_model/ --port 9292 &"
+            check_cmd "python -m paddle_serving_server.serve --model bert_chinese_L-12_H-768_A-12_model --port 9292 &"
            sleep 5
            pip install paddle_serving_app
-            check_cmd "head -n 10 data-c.txt | python bert_client.py --model bert_seq20_client/serving_client_conf.prototxt"
+            check_cmd "head -n 10 data-c.txt | python bert_client.py --model bert_chinese_L-12_H-768_A-12_client/serving_client_conf.prototxt"
            kill_server_process
-            ps -ef | grep "paddle_serving_server" | grep -v grep | awk '{print $2}' | xargs kill
-            ps -ef | grep "serving" | grep -v grep | awk '{print $2}' | xargs kill
+            # python prepare_model.py 20
+            # sh get_data.sh
+            # check_cmd "python -m paddle_serving_server.serve --model bert_seq20_model/ --port 9292 &"
+            # sleep 5
+            # pip install paddle_serving_app
+            # check_cmd "head -n 10 data-c.txt | python bert_client.py --model bert_seq20_client/serving_client_conf.prototxt"
+            # kill_server_process
+            # ps -ef | grep "paddle_serving_server" | grep -v grep | awk '{print $2}' | xargs kill
+            # ps -ef | grep "serving" | grep -v grep | awk '{print $2}' | xargs kill
            echo "bert RPC inference pass"
            ;;
        GPU)
+            export CUDA_VISIBLE_DEVICES=0
            pip install paddlehub
-            python prepare_model.py 20
+            # Because download from paddlehub may timeout,
+            # download the model from bos(max_seq_len=128).
+            wget https://paddle-serving.bj.bcebos.com/paddle_hub_models/text/SemanticModel/bert_chinese_L-12_H-768_A-12.tar.gz
+            tar -xzf bert_chinese_L-12_H-768_A-12.tar.gz
            sh get_data.sh
-            check_cmd "python -m paddle_serving_server_gpu.serve --model bert_seq20_model/ --port 9292 --gpu_ids 0 &"
+            check_cmd "python -m paddle_serving_server_gpu.serve --model bert_chinese_L-12_H-768_A-12_model --port 9292 --gpu_ids 0 &"
            sleep 5
            pip install paddle_serving_app
-            check_cmd "head -n 10 data-c.txt | python bert_client.py --model bert_seq20_client/serving_client_conf.prototxt"
+            check_cmd "head -n 10 data-c.txt | python bert_client.py --model bert_chinese_L-12_H-768_A-12_client/serving_client_conf.prototxt"
            kill_server_process
-            ps -ef | grep "paddle_serving_server" | grep -v grep | awk '{print $2}' | xargs kill
+            # python prepare_model.py 20
+            # sh get_data.sh
+            # check_cmd "python -m paddle_serving_server_gpu.serve --model bert_seq20_model/ --port 9292 --gpu_ids 0 &"
+            # sleep 5
+            # pip install paddle_serving_app
+            # check_cmd "head -n 10 data-c.txt | python bert_client.py --model bert_seq20_client/serving_client_conf.prototxt"
+            # kill_server_process
+            # ps -ef | grep "paddle_serving_server" | grep -v grep | awk '{print $2}' | xargs kill
            echo "bert RPC inference pass"
            ;;
        *)
@@ -325,6 +347,7 @@ function python_test_imdb() {
        check_cmd "python text_classify_service.py imdb_cnn_model/workdir/9292 imdb.vocab &"
        sleep 5
        check_cmd "curl -H "Content-Type:application/json" -X POST -d '{"words": "i am very sad | 0", "fetch":["prediction"]}' http://127.0.0.1:9292/imdb/prediction"
+        kill_server_process
        ps -ef | grep "paddle_serving_server" | grep -v grep | awk '{print $2}' | xargs kill
        ps -ef | grep "text_classify_service.py" | grep -v grep | awk '{print $2}' | xargs kill
        echo "imdb CPU HTTP inference pass"
@@ -356,6 +379,7 @@ function python_test_lac() {
        check_cmd "python lac_web_service.py jieba_server_model/ lac_workdir 9292 &"
        sleep 5
        check_cmd "curl -H "Content-Type:application/json" -X POST -d '{"words": "我爱北京天安门", "fetch":["word_seg"]}' http://127.0.0.1:9292/lac/prediction"
+        kill_server_process
        ps -ef | grep "paddle_serving_server" | grep -v grep | awk '{print $2}' | xargs kill
        ps -ef | grep "lac_web_service" | grep -v grep | awk '{print $2}' | xargs kill
        echo "lac CPU HTTP inference pass"
@@ -377,7 +401,7 @@ function python_run_test() {
    python_test_fit_a_line $TYPE # pwd: /Serving/python/examples
    python_run_criteo_ctr_with_cube $TYPE # pwd: /Serving/python/examples
    python_test_bert $TYPE # pwd: /Serving/python/examples
-    python_test_imdb $TYPE
+    python_test_imdb $TYPE # pwd: /Serving/python/examples
    python_test_lac $TYPE
    echo "test python $TYPE part finished as expected."
    cd ../.. # pwd: /Serving
......