PaddlePaddle / Serving
Commit 56bf8827 (unverified)

Authored Feb 03, 2021 by Jiawei Wang; committed via GitHub on Feb 03, 2021

Merge pull request #1014 from HexToString/fix_grpc_bug

fix grpc_impl_bug and add readme by HexToString

Parents: f6bbbaae, f597d39c
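The commit message only names the bug; one concrete fix visible in the diff to `python/paddle_serving_server/__init__.py` is the LoD-key construction. A minimal sketch of why the old call was broken (plain Python, no Serving dependency):

```python
# One of the bugs this merge fixes: the old servicer built the LoD key with
# "{}.lod".format() -- str.format() with an empty argument list raises
# IndexError, so any request that carried LoD info crashed the server.
name = "words"  # a feed variable name, as in the IMDB example

try:
    key = "{}.lod".format()      # buggy call from the old code
except IndexError:
    key = "{}.lod".format(name)  # fixed call: key the entry by variable name

print(key)  # words.lod
```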
8 changed files with 111 additions and 154 deletions (+111 -154):

- python/examples/grpc_impl_example/imdb/README.md (+25 -0)
- python/examples/grpc_impl_example/imdb/README_CN.md (+24 -0)
- python/examples/grpc_impl_example/imdb/test_client.py (+16 -12)
- python/examples/grpc_impl_example/imdb/test_multilang_ensemble_client.py (+0 -39)
- python/examples/grpc_impl_example/imdb/test_multilang_ensemble_server.py (+0 -40)
- python/examples/imdb/test_ensemble_server.py (+0 -40)
- python/paddle_serving_server/__init__.py (+35 -21)
- python/paddle_serving_server_gpu/__init__.py (+11 -2)
python/examples/grpc_impl_example/imdb/README.md (new file, mode 100644)

## IMDB comment sentiment inference service

([简体中文](./README_CN.md)|English)

### Get model files and sample data
```
sh get_data.sh
```
The downloaded package contains the cnn, lstm and bow model configs along with their test_data and train_data.

### Start RPC inference service
```
python -m paddle_serving_server.serve --model imdb_cnn_model/ --thread 10 --port 9393 --use_multilang
```

### RPC Infer
The `paddlepaddle` package is used in `test_client.py`, so you may need to install it first (`pip install paddlepaddle`).
```
head test_data/part-0 | python test_client.py
```
This prints the prediction results for the first 10 test cases.
python/examples/grpc_impl_example/imdb/README_CN.md (new file, mode 100644)

## IMDB comment sentiment inference service

(简体中文|[English](./README.md))

### Get model files and sample data
```
sh get_data.sh
```
The script downloads and unpacks the config files for the cnn, lstm and bow models, along with test_data and train_data.

### Start RPC inference service
```
python -m paddle_serving_server.serve --model imdb_cnn_model/ --thread 10 --port 9393 --use_multilang
```

### Run inference
`test_client.py` uses the `paddlepaddle` package, which needs to be installed (`pip install paddlepaddle`).
```
head test_data/part-0 | python test_client.py
```
This predicts the first ten samples of test_data/part-0.
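The `head test_data/part-0 | python test_client.py` pipeline above relies on the client packing each variable-length sample as a column vector plus an explicit LoD boundary. A minimal sketch of that packing, with made-up word ids (the real ids come from `imdb.vocab`):

```python
import numpy as np

# Pack one variable-length sample the way the gRPC IMDB client does:
# word ids become an (n, 1) array, and a companion "words.lod" entry
# marks the sequence boundary [0, n]. The ids below are invented.
word_ids = [7, 42, 3, 19]
word_len = len(word_ids)

feed = {
    "words": np.array(word_ids).reshape(word_len, 1),
    "words.lod": [0, word_len],
}

print(feed["words"].shape)  # (4, 1)
print(feed["words.lod"])    # [0, 4]
```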
python/examples/imdb/test_ensemble_client.py → python/examples/grpc_impl_example/imdb/test_client.py (renamed)

```diff
@@ -12,14 +12,12 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 # pylint: disable=doc-string-missing
-from paddle_serving_client import Client
-from imdb_reader import IMDBDataset
+from paddle_serving_client import MultiLangClient as Client
+from paddle_serving_app.reader.imdb_reader import IMDBDataset
+import sys
+import numpy as np

 client = Client()
 # If you have more than one model, make sure that the input
 # and output of more than one model are the same.
 client.load_client_config('imdb_bow_client_conf/serving_client_conf.prototxt')
 client.connect(["127.0.0.1:9393"])
 # you can define any english sentence or dataset here
@@ -28,11 +26,17 @@ client.connect(["127.0.0.1:9393"])
 imdb_dataset = IMDBDataset()
 imdb_dataset.load_resource('imdb.vocab')

-for i in range(3):
-    line = 'i am very sad | 0'
+for line in sys.stdin:
     word_ids, label = imdb_dataset.get_words_and_label(line)
-    feed = {"words": word_ids}
+    word_len = len(word_ids)
+    feed = {
+        "words": np.array(word_ids).reshape(word_len, 1),
+        "words.lod": [0, word_len]
+    }
     fetch = ["prediction"]
-    fetch_maps = client.predict(feed=feed, fetch=fetch)
-    for model, fetch_map in fetch_maps.items():
-        print("step: {}, model: {}, res: {}".format(i, model, fetch_map))
+    fetch_map = client.predict(feed=feed, fetch=fetch, batch=True)
+    if fetch_map["serving_status_code"] == 0:
+        print(fetch_map)
+    else:
+        print(fetch_map["serving_status_code"])
+    #print("{} {}".format(fetch_map["prediction"][0], label[0]))
```
python/examples/grpc_impl_example/imdb/test_multilang_ensemble_client.py (deleted, mode 100644 → 0)

```python
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint: disable=doc-string-missing
from paddle_serving_client import MultiLangClient
from imdb_reader import IMDBDataset

client = MultiLangClient()
# If you have more than one model, make sure that the input
# and output of more than one model are the same.
client.connect(["127.0.0.1:9393"])

# you can define any english sentence or dataset here
# This example reuses imdb reader in training, you
# can define your own data preprocessing easily.
imdb_dataset = IMDBDataset()
imdb_dataset.load_resource('imdb.vocab')

for i in range(3):
    line = 'i am very sad | 0'
    word_ids, label = imdb_dataset.get_words_and_label(line)
    feed = {"words": word_ids}
    fetch = ["prediction"]
    fetch_maps = client.predict(feed=feed, fetch=fetch)
    for model, fetch_map in fetch_maps.items():
        if model == "serving_status_code":
            continue
        print("step: {}, model: {}, res: {}".format(i, model, fetch_map))
```
python/examples/grpc_impl_example/imdb/test_multilang_ensemble_server.py (deleted, mode 100644 → 0)

```python
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint: disable=doc-string-missing
from paddle_serving_server import OpMaker
from paddle_serving_server import OpGraphMaker
from paddle_serving_server import MultiLangServer

op_maker = OpMaker()
read_op = op_maker.create('general_reader')
cnn_infer_op = op_maker.create(
    'general_infer', engine_name='cnn', inputs=[read_op])
bow_infer_op = op_maker.create(
    'general_infer', engine_name='bow', inputs=[read_op])
response_op = op_maker.create(
    'general_response', inputs=[cnn_infer_op, bow_infer_op])

op_graph_maker = OpGraphMaker()
op_graph_maker.add_op(read_op)
op_graph_maker.add_op(cnn_infer_op)
op_graph_maker.add_op(bow_infer_op)
op_graph_maker.add_op(response_op)

server = MultiLangServer()
server.set_op_graph(op_graph_maker.get_op_graph())
model_config = {cnn_infer_op: 'imdb_cnn_model', bow_infer_op: 'imdb_bow_model'}
server.load_model_config(model_config)
server.prepare_server(workdir="work_dir1", port=9393, device="cpu")
server.run_server()
```
python/examples/imdb/test_ensemble_server.py (deleted, mode 100644 → 0)

```python
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint: disable=doc-string-missing
from paddle_serving_server import OpMaker
from paddle_serving_server import OpGraphMaker
from paddle_serving_server import Server

op_maker = OpMaker()
read_op = op_maker.create('general_reader')
cnn_infer_op = op_maker.create(
    'general_infer', engine_name='cnn', inputs=[read_op])
bow_infer_op = op_maker.create(
    'general_infer', engine_name='bow', inputs=[read_op])
response_op = op_maker.create(
    'general_response', inputs=[cnn_infer_op, bow_infer_op])

op_graph_maker = OpGraphMaker()
op_graph_maker.add_op(read_op)
op_graph_maker.add_op(cnn_infer_op)
op_graph_maker.add_op(bow_infer_op)
op_graph_maker.add_op(response_op)

server = Server()
server.set_op_graph(op_graph_maker.get_op_graph())
model_config = {cnn_infer_op: 'imdb_cnn_model', bow_infer_op: 'imdb_bow_model'}
server.load_model_config(model_config)
server.prepare_server(workdir="work_dir1", port=9393, device="cpu")
server.run_server()
```
python/paddle_serving_server/__init__.py

```diff
@@ -537,8 +537,9 @@ class MultiLangServerServiceServicer(multi_lang_general_model_service_pb2_grpc.
         fetch_names = list(request.fetch_var_names)
         is_python = request.is_python
         log_id = request.log_id
-        feed_dict = {}
-        feed_inst = request.insts[0]
-        for idx, name in enumerate(feed_names):
-            var = feed_inst.tensor_array[idx]
-            v_type = self.feed_types_[name]
+        feed_batch = []
+        for feed_inst in request.insts:
+            feed_dict = {}
+            for idx, name in enumerate(feed_names):
+                var = feed_inst.tensor_array[idx]
+                v_type = self.feed_types_[name]
@@ -552,11 +553,21 @@ class MultiLangServerServiceServicer(multi_lang_general_model_service_pb2_grpc.
                         data = np.frombuffer(var.data, dtype="int32")
                     else:
                         raise Exception("error type.")
                 else:
                     if v_type == 0:  # int64
                         data = np.array(list(var.int64_data), dtype="int64")
                     elif v_type == 1:  # float32
                         data = np.array(list(var.float_data), dtype="float32")
                     elif v_type == 2:  # int32
                         data = np.array(list(var.int_data), dtype="int32")
                     else:
                         raise Exception("error type.")
                 data.shape = list(feed_inst.tensor_array[idx].shape)
                 feed_dict[name] = data
                 if len(var.lod) > 0:
-                feed_dict["{}.lod".format()] = var.lod
-        return feed_dict, fetch_names, is_python, log_id
+                    feed_dict["{}.lod".format(name)] = var.lod
+            feed_batch.append(feed_dict)
+        return feed_batch, fetch_names, is_python, log_id
@@ -608,10 +619,10 @@ class MultiLangServerServiceServicer(multi_lang_general_model_service_pb2_grpc.
         return resp

     def Inference(self, request, context):
-        feed_dict, fetch_names, is_python, log_id = \
+        feed_batch, fetch_names, is_python, log_id = \
             self._unpack_inference_request(request)
         ret = self.bclient_.predict(
-            feed=feed_dict,
+            feed=feed_batch,
             fetch=fetch_names,
+            batch=True,
             need_variant_tag=True,
@@ -649,6 +660,9 @@ class MultiLangServer(object):
                 "max_body_size is less than default value, will use default value in service."
             )

+    def use_encryption_model(self, flag=False):
+        self.encryption_model = flag
+
     def set_port(self, port):
         self.gport_ = port
```
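The structural change in `_unpack_inference_request` above can be re-enacted with plain Python objects (a hypothetical stand-in for the protobuf request, not the real Serving API):

```python
# Re-enactment of the batching fix: the old code unpacked only insts[0] into a
# single feed dict; the new code walks every instance and returns a list, so
# requests carrying several instances are no longer silently truncated.
def unpack(insts, feed_names):
    feed_batch = []
    for feed_inst in insts:               # old code: feed_inst = insts[0]
        feed_dict = {}
        for idx, name in enumerate(feed_names):
            feed_dict[name] = feed_inst[idx]
        feed_batch.append(feed_dict)
    return feed_batch                     # old code returned a lone feed_dict

batch = unpack([[1], [2], [3]], ["words"])
print(len(batch))         # 3
print(batch[2]["words"])  # 3
```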
python/paddle_serving_server_gpu/__init__.py

```diff
@@ -244,6 +244,9 @@ class Server(object):
                 "max_body_size is less than default value, will use default value in service."
             )

+    def use_encryption_model(self, flag=False):
+        self.encryption_model = flag
+
     def set_port(self, port):
         self.port = port
@@ -690,6 +693,8 @@ class MultiLangServerServiceServicer(multi_lang_general_model_service_pb2_grpc.
                     raise Exception("error type.")
                 data.shape = list(feed_inst.tensor_array[idx].shape)
                 feed_dict[name] = data
+                if len(var.lod) > 0:
+                    feed_dict["{}.lod".format(name)] = var.lod
             feed_batch.append(feed_dict)
         return feed_batch, fetch_names, is_python, log_id
@@ -744,11 +749,12 @@ class MultiLangServerServiceServicer(multi_lang_general_model_service_pb2_grpc.
         return resp

     def Inference(self, request, context):
-        feed_dict, fetch_names, is_python, log_id \
+        feed_batch, fetch_names, is_python, log_id \
             = self._unpack_inference_request(request)
         ret = self.bclient_.predict(
-            feed=feed_dict,
+            feed=feed_batch,
             fetch=fetch_names,
+            batch=True,
             need_variant_tag=True,
             log_id=log_id)
         return self._pack_inference_response(ret, fetch_names, is_python)
@@ -787,6 +793,9 @@ class MultiLangServer(object):
                 "max_body_size is less than default value, will use default value in service."
             )

+    def use_encryption_model(self, flag=False):
+        self.encryption_model = flag
+
     def set_port(self, port):
         self.gport_ = port
```