Unverified commit d093f35e, authored by MRXLT and committed by GitHub

Merge pull request #635 from MRXLT/0.3.0-cherry

[cherry pick to 0.3.0]
@@ -151,7 +151,6 @@ Here, `client.predict` function has two arguments. `feed` is a `python dict` with
- **Distributed Key-Value indexing** supported which is especially useful for large scale sparse features as model inputs.
- **Highly concurrent and efficient communication** between clients and servers supported.
- **Multiple programming languages** supported on client side, such as Golang, C++ and Python.
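For reference, a minimal sketch of the two-argument `predict` call described above (the client config path, endpoint, and feed values are illustrative assumptions based on the uci_housing example elsewhere in this change, not part of this diff):

```python
from paddle_serving_client import Client

client = Client()
client.load_client_config("uci_housing_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])

# `feed` maps input variable names to data; `fetch` names the outputs to return.
fetch_map = client.predict(feed={"x": [0.0137] * 13}, fetch=["price"])
print(fetch_map)
```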
<h2 align="center">Document</h2>
......
@@ -156,7 +156,6 @@ print(fetch_map)
- **Distributed key-value indexing** supported, which helps serve large-scale sparse features as model inputs.
- **Highly concurrent and efficient communication** between clients and servers supported.
- **Multiple programming languages** supported on the client side, such as Golang, C++ and Python.
<h2 align="center">Document</h2>
......
@@ -20,7 +20,7 @@ This document will take Python2 as an example to show how to compile Paddle Serving
- Set `DPYTHON_INCLUDE_DIR` to `$PYTHONROOT/include/python3.6m/`
- Set `DPYTHON_LIBRARIES` to `$PYTHONROOT/lib64/libpython3.6.so`
- Set `DPYTHON_EXECUTABLE` to `$PYTHONROOT/bin/python3.6`
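If you are unsure which paths to pass, a small hedged helper using only the Python standard library can print candidates for the three flags (output locations vary by distribution, so verify them before use):

```python
# Prints candidate values for DPYTHON_INCLUDE_DIR, DPYTHON_LIBRARIES and
# DPYTHON_EXECUTABLE; check that the printed paths actually exist.
import sys
import sysconfig

print("include dir:", sysconfig.get_paths()["include"])
print("library dir:", sysconfig.get_config_var("LIBDIR"))
print("executable:", sys.executable)
```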
## Get Code
@@ -36,6 +36,8 @@ cd Serving && git submodule update --init --recursive
export PYTHONROOT=/usr/
```
In the default CentOS 7 image we provide, the Python path is `/usr/bin/python`. If you want to use our CentOS 6 image, set it to `export PYTHONROOT=/usr/local/python2.7/`.

## Compile Server
### Integrated CPU version paddle inference library
......
@@ -20,7 +20,7 @@
- Set `DPYTHON_INCLUDE_DIR` to `$PYTHONROOT/include/python3.6m/`
- Set `DPYTHON_LIBRARIES` to `$PYTHONROOT/lib64/libpython3.6.so`
- Set `DPYTHON_EXECUTABLE` to `$PYTHONROOT/bin/python3.6`
## Get Code
@@ -36,6 +36,8 @@ cd Serving && git submodule update --init --recursive
export PYTHONROOT=/usr/
```
In the default CentOS 7 image we provide, the Python path is `/usr/bin/python`. If you want to use our CentOS 6 image, set it to `export PYTHONROOT=/usr/local/python2.7/`.

## Compile Server
### Integrated CPU version Paddle Inference Library
......
@@ -46,7 +46,7 @@ In this example, the production model is uploaded to HDFS in `product_path` folder
### Product model
Run the following Python code in the `product_path` folder to produce models (you need to modify the Hadoop-related parameters before running). Every 60 seconds, the package file of the Boston house price prediction model `uci_housing.tar.gz` will be generated and uploaded to the HDFS path `/`. After uploading, the timestamp file `donefile` will be updated and uploaded to the HDFS path `/`.
```python
import os
@@ -82,9 +82,14 @@ exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

def push_to_hdfs(local_file_path, remote_path):
    afs = 'afs://***.***.***.***:***'  # User needs to change
    uci = '***,***'  # User needs to change
    hadoop_bin = '/path/to/hadoop/bin'  # User needs to change
    prefix = '{} fs -Dfs.default.name={} -Dhadoop.job.ugi={}'.format(hadoop_bin, afs, uci)
    os.system('{} -rmr {}/{}'.format(
        prefix, remote_path, local_file_path))
    os.system('{} -put {} {}'.format(
        prefix, local_file_path, remote_path))

name = "uci_housing"
for pass_id in range(30):
......
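As a hedged illustration of what `push_to_hdfs` above actually executes (the host, ugi credentials, binary path, and remote folder are placeholders, not values from this commit):

```python
# With the placeholders filled in, the two os.system calls run commands like:
#   /path/to/hadoop/bin fs -Dfs.default.name=afs://host:port \
#       -Dhadoop.job.ugi=user,password -rmr /model/uci_housing.tar.gz
#   /path/to/hadoop/bin fs -Dfs.default.name=afs://host:port \
#       -Dhadoop.job.ugi=user,password -put uci_housing.tar.gz /model
push_to_hdfs("uci_housing.tar.gz", "/model")  # hypothetical call for illustration
```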
@@ -46,7 +46,7 @@ Paddle Serving provides an automatic monitoring script; after the model is updated at the remote address
### Product model
Run the following Python code in the `product_path` folder to produce models (the Hadoop-related parameters need to be modified before running). Every 60 seconds, the package file of the Boston house price prediction model `uci_housing.tar.gz` is generated and uploaded to the HDFS path `/`; after uploading, the timestamp file `donefile` is updated and uploaded to the HDFS path `/`.
```python
import os
@@ -82,9 +82,14 @@ exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

def push_to_hdfs(local_file_path, remote_path):
    afs = 'afs://***.***.***.***:***'  # User needs to change
    uci = '***,***'  # User needs to change
    hadoop_bin = '/path/to/hadoop/bin'  # User needs to change
    prefix = '{} fs -Dfs.default.name={} -Dhadoop.job.ugi={}'.format(hadoop_bin, afs, uci)
    os.system('{} -rmr {}/{}'.format(
        prefix, remote_path, local_file_path))
    os.system('{} -put {} {}'.format(
        prefix, local_file_path, remote_path))

name = "uci_housing"
for pass_id in range(30):
......
@@ -29,7 +29,7 @@ from paddle_serving_server.web_service import WebService
uci_service = WebService(name = "uci")
uci_service.load_model_config("./uci_housing_model")
uci_service.prepare_server(workdir="./workdir", port=int(9500), device="cpu")
uci_service.run_rpc_service()
#Get flask application
app_instance = uci_service.get_app_instance()
```
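As a usage note, `get_app_instance` is expected to return a Flask application, so for local debugging it can be run directly. A hedged sketch (the host and port here are assumptions, not part of this diff):

```python
# Run the Flask app obtained above; in production you would typically put it
# behind a WSGI server instead of Flask's built-in server.
if __name__ == "__main__":
    app_instance.run(host="0.0.0.0", port=9393)
```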
......
@@ -29,7 +29,7 @@ from paddle_serving_server.web_service import WebService
uci_service = WebService(name = "uci")
uci_service.load_model_config("./uci_housing_model")
uci_service.prepare_server(workdir="./workdir", port=int(9500), device="cpu")
uci_service.run_rpc_service()
#Get flask application
app_instance = uci_service.get_app_instance()
```
......
@@ -22,15 +22,19 @@ def single_func(idx, resource):
    client.load_client_config(
        "./uci_housing_client/serving_client_conf.prototxt")
    client.connect(["127.0.0.1:9293", "127.0.0.1:9292"])
    x = [
        0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727, -0.1583, -0.0584,
        0.6283, 0.4919, 0.1856, 0.0795, -0.0332
    ]
    for i in range(1000):
        fetch_map = client.predict(feed={"x": x}, fetch=["price"])
        if fetch_map is None:
            return [[None]]
    return [[0]]

multi_thread_runner = MultiThreadRunner()
thread_num = 4
result = multi_thread_runner.run(single_func, thread_num, {})
if None in result[0]:
    exit(1)
@@ -2,28 +2,27 @@
([简体中文](./README_CN.md)|English)

### Get Model
```
python -m paddle_serving_app.package --get_model lac
tar -xzvf lac.tar.gz
```

#### Start RPC inference service
```
python -m paddle_serving_server.serve --model lac_model/ --port 9292
```

### RPC Infer
```
echo "我爱北京天安门" | python lac_client.py lac_client/serving_client_conf.prototxt
```
It will print the word segmentation result.

### Start HTTP inference service
```
python lac_web_service.py lac_model/ lac_workdir 9292
```

### HTTP Infer
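The body of this section is collapsed in the diff; for reference, a minimal sketch that mirrors the `curl` call exercised by the CI script later in this commit:

```python
# Hedged sketch: the endpoint and payload are taken from the curl command in
# the test script below; start the service with lac_web_service.py first.
import requests

payload = {"feed": [{"words": "我爱北京天安门"}], "fetch": ["word_seg"]}
r = requests.post("http://127.0.0.1:9292/lac/prediction", json=payload)
print(r.json())
```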
......
@@ -2,28 +2,27 @@
(简体中文|[English](./README.md))

### Get Model
```
python -m paddle_serving_app.package --get_model lac
tar -xzvf lac.tar.gz
```

#### Start RPC inference service
```
python -m paddle_serving_server.serve --model lac_model/ --port 9292
```

### RPC Infer
```
echo "我爱北京天安门" | python lac_client.py lac_client/serving_client_conf.prototxt
```
This will print the word segmentation result.

### Start HTTP inference service
```
python lac_web_service.py lac_model/ lac_workdir 9292
```

### HTTP Infer
......
@@ -16,7 +16,7 @@
import sys
import time
import requests
from paddle_serving_app.reader import LACReader
from paddle_serving_client import Client
from paddle_serving_client.utils import MultiThreadRunner
from paddle_serving_client.utils import benchmark_args
@@ -25,7 +25,7 @@ args = benchmark_args()

def single_func(idx, resource):
    reader = LACReader()
    start = time.time()
    if args.request == "rpc":
        client = Client()
......
wget --no-check-certificate https://paddle-serving.bj.bcebos.com/lac/lac_model_jieba_web.tar.gz
tar -zxvf lac_model_jieba_web.tar.gz
@@ -15,7 +15,7 @@
# pylint: disable=doc-string-missing
from paddle_serving_client import Client
from paddle_serving_app.reader import LACReader
import sys
import os
import io
@@ -24,7 +24,7 @@ client = Client()
client.load_client_config(sys.argv[1])
client.connect(["127.0.0.1:9292"])

reader = LACReader()
for line in sys.stdin:
    if len(line) <= 0:
        continue
@@ -32,4 +32,8 @@ for line in sys.stdin:
    if len(feed_data) <= 0:
        continue
    fetch_map = client.predict(feed={"words": feed_data}, fetch=["crf_decode"])
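    # 'crf_decode.lod' holds the [begin, end) offsets of this input's labels in the batched output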
    begin = fetch_map['crf_decode.lod'][0]
    end = fetch_map['crf_decode.lod'][1]
    segs = reader.parse_result(line, fetch_map["crf_decode"][begin:end])
    print({"word_seg": "|".join(segs)})
@@ -14,7 +14,7 @@
from paddle_serving_server.web_service import WebService
import sys
from paddle_serving_app.reader import LACReader

class LACService(WebService):
......
# OCR
## Get Model
```
python -m paddle_serving_app.package --get_model ocr_rec
tar -xzvf ocr_rec.tar.gz
```
## RPC Service
### Start Service
```
python -m paddle_serving_server.serve --model ocr_rec_model --port 9292
```
### Client Prediction
```
python test_ocr_rec_client.py
```
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from paddle_serving_client import Client
from paddle_serving_app.reader import OCRReader
import cv2
client = Client()
client.load_client_config("ocr_rec_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])
image_file_list = ["./test_rec.jpg"]
img = cv2.imread(image_file_list[0])
ocr_reader = OCRReader()
feed = {"image": ocr_reader.preprocess([img])}
fetch = ["ctc_greedy_decoder_0.tmp_0", "softmax_0.tmp_0"]
fetch_map = client.predict(feed=feed, fetch=fetch)
rec_res = ocr_reader.postprocess(fetch_map)
print(image_file_list[0])
print(rec_res[0][0])
# Chinese Sentence Sentiment Classification
([简体中文](./README_CN.md)|English)

## Get Model
```
python -m paddle_serving_app.package --get_model senta_bilstm
python -m paddle_serving_app.package --get_model lac
```

## Start HTTP Service
```
python -m paddle_serving_server.serve --model lac_model --port 9300
python senta_web_service.py
```
In the Chinese sentiment classification task, the Chinese word segmentation needs to be done through the [LAC task](../lac).
In this demo, the LAC task is placed in the preprocessing part of the HTTP prediction service of the sentiment classification task.

## Client prediction
```
curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"words": "天气不错"}], "fetch":["class_probs"]}' http://127.0.0.1:9292/senta/prediction
```
......
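For reference, a hedged Python equivalent of the `curl` call above (same endpoint and payload; `requests` is an extra dependency, not part of this diff):

```python
import requests

payload = {"feed": [{"words": "天气不错"}], "fetch": ["class_probs"]}
r = requests.post("http://127.0.0.1:9292/senta/prediction", json=payload)
print(r.json())
```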
# Chinese Sentence Sentiment Classification
(简体中文|[English](./README.md))

## Get Model
```
python -m paddle_serving_app.package --get_model senta_bilstm
python -m paddle_serving_app.package --get_model lac
```

## Start HTTP Service
```
python -m paddle_serving_server.serve --model lac_model --port 9300
python senta_web_service.py
```
The Chinese sentiment classification task first requires Chinese word segmentation via the [LAC task](../lac).
In this demo, the LAC task is placed in the preprocessing part of the sentiment classification HTTP prediction service.

## Client prediction
```
......
#encoding=utf-8
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -12,56 +13,28 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#senta_web_service.py
from paddle_serving_server.web_service import WebService
from paddle_serving_client import Client
from paddle_serving_app.reader import LACReader, SentaReader
import os
import sys

class SentaService(WebService):
    #Initialize the LAC prediction client
    def init_lac_client(self, lac_port, lac_client_config):
        self.lac_reader = LACReader()
        self.senta_reader = SentaReader()
        self.lac_client = Client()
        self.lac_client.load_client_config(lac_client_config)
        self.lac_client.connect(["127.0.0.1:{}".format(lac_port)])

    #Preprocessing for the senta prediction service. Call order: LAC reader -> LAC prediction -> postprocess the result -> senta reader
    def preprocess(self, feed=[], fetch=[]):
        feed_data = [{
            "words": self.lac_reader.process(x["words"])
@@ -80,15 +53,9 @@ class SentaService(WebService):

senta_service = SentaService(name="senta")
senta_service.load_model_config("senta_bilstm_model")
senta_service.prepare_server(workdir="workdir")
senta_service.init_lac_client(
    lac_port=9300, lac_client_config="lac_model/serving_server_conf.prototxt")
senta_service.run_rpc_service()
senta_service.run_web_service()
@@ -21,15 +21,15 @@ python -m paddle_serving_app.package --list_model
python -m paddle_serving_app.package --get_model senta_bilstm
```
10 pre-trained models are built into paddle_serving_app, covering 6 kinds of prediction tasks.
The model files can be directly used for deployment, and the `--tutorial` argument can be added to obtain the deployment method.

| Prediction task | Model name |
| ------------ | ------------------------------------------------ |
| SentimentAnalysis | 'senta_bilstm', 'senta_bow', 'senta_cnn' |
| SemanticRepresentation | 'ernie' |
| ChineseWordSegmentation | 'lac' |
| ObjectDetection | 'faster_rcnn' |
| ImageSegmentation | 'unet', 'deeplabv3' |
| ImageClassification | 'resnet_v2_50_imagenet', 'mobilenet_v2_imagenet' |
@@ -76,7 +76,7 @@ Preprocessing for Chinese word segmentation task.
[example](../examples/senta/senta_web_service.py)

- The image preprocessing method is more flexible than the above method, and can be combined by the following multiple classes, [example](../examples/imagenet/resnet50_rpc_client.py)
- class Sequential
......
@@ -20,14 +20,14 @@ python -m paddle_serving_app.package --list_model
python -m paddle_serving_app.package --get_model senta_bilstm
```
10 pre-trained models are built into paddle_serving_app, covering 6 kinds of prediction tasks. The model files can be used directly for deployment, and the `--tutorial` argument can be added to obtain the corresponding deployment method.

| Prediction task | Model name |
| ------------ | ------------------------------------------------ |
| Chinese sentiment analysis | 'senta_bilstm', 'senta_bow', 'senta_cnn' |
| Semantic representation | 'ernie' |
| Chinese word segmentation | 'lac' |
| Object detection | 'faster_rcnn' |
| Image segmentation | 'unet', 'deeplabv3' |
| Image classification | 'resnet_v2_50_imagenet', 'mobilenet_v2_imagenet' |
@@ -71,7 +71,7 @@ For model tasks in the CV and NLP fields, paddle_serving_app provides a variety of common
[example](../examples/senta/senta_web_service.py)

- The image preprocessing methods are more flexible than the above and can be combined from the following classes, [example](../examples/imagenet/resnet50_rpc_client.py)
- class Sequential
......
@@ -22,19 +22,21 @@ class ServingModels(object):
        self.model_dict = OrderedDict()
        self.model_dict[
            "SentimentAnalysis"] = ["senta_bilstm", "senta_bow", "senta_cnn"]
        self.model_dict["SemanticRepresentation"] = ["ernie"]
        self.model_dict["ChineseWordSegmentation"] = ["lac"]
        self.model_dict["ObjectDetection"] = ["faster_rcnn"]
        self.model_dict["ImageSegmentation"] = [
            "unet", "deeplabv3", "deeplabv3+cityscapes"
        ]
        self.model_dict["ImageClassification"] = [
            "resnet_v2_50_imagenet", "mobilenet_v2_imagenet"
        ]
        self.model_dict["OCR"] = ["ocr_rec"]

        image_class_url = "https://paddle-serving.bj.bcebos.com/paddle_hub_models/image/ImageClassification/"
        image_seg_url = "https://paddle-serving.bj.bcebos.com/paddle_hub_models/image/ImageSegmentation/"
        object_detection_url = "https://paddle-serving.bj.bcebos.com/paddle_hub_models/image/ObjectDetection/"
        ocr_url = "https://paddle-serving.bj.bcebos.com/paddle_hub_models/image/OCR/"
        senta_url = "https://paddle-serving.bj.bcebos.com/paddle_hub_models/text/SentimentAnalysis/"
        semantic_url = "https://paddle-serving.bj.bcebos.com/paddle_hub_models/text/SemanticRepresentation/"
        wordseg_url = "https://paddle-serving.bj.bcebos.com/paddle_hub_models/text/LexicalAnalysis/"
@@ -52,6 +54,7 @@ class ServingModels(object):
        pack_url(self.model_dict, "ObjectDetection", object_detection_url)
        pack_url(self.model_dict, "ImageSegmentation", image_seg_url)
        pack_url(self.model_dict, "ImageClassification", image_class_url)
        pack_url(self.model_dict, "OCR", ocr_url)

    def get_model_list(self):
        return self.model_dict
......
@@ -12,7 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from .chinese_bert_reader import ChineseBertReader
from .image_reader import ImageReader, File2Image, URL2Image, Sequential, Normalize
from .image_reader import CenterCrop, Resize, Transpose, Div, RGB2BGR, BGR2RGB
from .image_reader import RCNNPostprocess, SegPostprocess, PadStride
from .lac_reader import LACReader
from .senta_reader import SentaReader
from .imdb_reader import IMDBDataset
from .ocr_reader import OCRReader
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import cv2
import copy
import numpy as np
import math
import re
import string
import sys
import argparse
from paddle_serving_app.reader import Sequential, Resize, Transpose, Div, Normalize


class CharacterOps(object):
    """ Convert between text-label and text-index """

    def __init__(self, config):
        self.character_type = config['character_type']
        self.loss_type = config['loss_type']
        if self.character_type == "en":
            self.character_str = "0123456789abcdefghijklmnopqrstuvwxyz"
            dict_character = list(self.character_str)
        elif self.character_type == "ch":
            character_dict_path = config['character_dict_path']
            self.character_str = ""
            with open(character_dict_path, "rb") as fin:
                lines = fin.readlines()
                for line in lines:
                    line = line.decode('utf-8').strip("\n").strip("\r\n")
                    self.character_str += line
            dict_character = list(self.character_str)
        elif self.character_type == "en_sensitive":
            # same with ASTER setting (use 94 char).
            self.character_str = string.printable[:-6]
            dict_character = list(self.character_str)
        else:
            self.character_str = None
        assert self.character_str is not None, \
            "Nonsupport type of the character: {}".format(self.character_str)
        self.beg_str = "sos"
        self.end_str = "eos"
        if self.loss_type == "attention":
            dict_character = [self.beg_str, self.end_str] + dict_character
        self.dict = {}
        for i, char in enumerate(dict_character):
            self.dict[char] = i
        self.character = dict_character

    def encode(self, text):
        """convert text-label into text-index.
        input:
            text: text labels of each image. [batch_size]
        output:
            text: concatenated text index for CTCLoss.
                [sum(text_lengths)] = [text_index_0 + text_index_1 + ... + text_index_(n - 1)]
            length: length of each text. [batch_size]
        """
        if self.character_type == "en":
            text = text.lower()
        text_list = []
        for char in text:
            if char not in self.dict:
                continue
            text_list.append(self.dict[char])
        text = np.array(text_list)
        return text

    def decode(self, text_index, is_remove_duplicate=False):
        """ convert text-index into text-label. """
        char_list = []
        char_num = self.get_char_num()
        if self.loss_type == "attention":
            beg_idx = self.get_beg_end_flag_idx("beg")
            end_idx = self.get_beg_end_flag_idx("end")
            ignored_tokens = [beg_idx, end_idx]
        else:
            ignored_tokens = [char_num]
        for idx in range(len(text_index)):
            if text_index[idx] in ignored_tokens:
                continue
            if is_remove_duplicate:
                if idx > 0 and text_index[idx - 1] == text_index[idx]:
                    continue
            char_list.append(self.character[text_index[idx]])
        text = ''.join(char_list)
        return text

    def get_char_num(self):
        return len(self.character)

    def get_beg_end_flag_idx(self, beg_or_end):
        if self.loss_type == "attention":
            if beg_or_end == "beg":
                idx = np.array(self.dict[self.beg_str])
            elif beg_or_end == "end":
                idx = np.array(self.dict[self.end_str])
            else:
                assert False, "Unsupport type %s in get_beg_end_flag_idx"\
                    % beg_or_end
            return idx
        else:
            err = "error in get_beg_end_flag_idx when using the loss %s"\
                % (self.loss_type)
            assert False, err


class OCRReader(object):
    def __init__(self):
        args = self.parse_args()
        image_shape = [int(v) for v in args.rec_image_shape.split(",")]
        self.rec_image_shape = image_shape
        self.character_type = args.rec_char_type
        self.rec_batch_num = args.rec_batch_num
        char_ops_params = {}
        char_ops_params["character_type"] = args.rec_char_type
        char_ops_params["character_dict_path"] = args.rec_char_dict_path
        char_ops_params['loss_type'] = 'ctc'
        self.char_ops = CharacterOps(char_ops_params)

    def parse_args(self):
        parser = argparse.ArgumentParser()
        parser.add_argument("--rec_algorithm", type=str, default='CRNN')
        parser.add_argument("--rec_model_dir", type=str)
        parser.add_argument("--rec_image_shape", type=str, default="3, 32, 320")
        parser.add_argument("--rec_char_type", type=str, default='ch')
        parser.add_argument("--rec_batch_num", type=int, default=1)
        parser.add_argument(
            "--rec_char_dict_path", type=str, default="./ppocr_keys_v1.txt")
        return parser.parse_args()

    def resize_norm_img(self, img, max_wh_ratio):
        imgC, imgH, imgW = self.rec_image_shape
        if self.character_type == "ch":
            imgW = int(32 * max_wh_ratio)
        h = img.shape[0]
        w = img.shape[1]
        ratio = w / float(h)
        if math.ceil(imgH * ratio) > imgW:
            resized_w = imgW
        else:
            resized_w = int(math.ceil(imgH * ratio))
        seq = Sequential([
            Resize(imgH, resized_w), Transpose((2, 0, 1)), Div(255),
            Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5], True)
        ])
        resized_image = seq(img)
        padding_im = np.zeros((imgC, imgH, imgW), dtype=np.float32)
        padding_im[:, :, 0:resized_w] = resized_image
        return padding_im

    def preprocess(self, img_list):
        img_num = len(img_list)
        norm_img_batch = []
        max_wh_ratio = 0
        for ino in range(img_num):
            h, w = img_list[ino].shape[0:2]
            wh_ratio = w * 1.0 / h
            max_wh_ratio = max(max_wh_ratio, wh_ratio)
        for ino in range(img_num):
            norm_img = self.resize_norm_img(img_list[ino], max_wh_ratio)
            norm_img = norm_img[np.newaxis, :]
            norm_img_batch.append(norm_img)
        norm_img_batch = np.concatenate(norm_img_batch)
        norm_img_batch = norm_img_batch.copy()
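        # NOTE: only the first image's normalized tensor is returned; the demo client feeds a single image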
        return norm_img_batch[0]

    def postprocess(self, outputs):
        rec_res = []
        rec_idx_lod = outputs["ctc_greedy_decoder_0.tmp_0.lod"]
        predict_lod = outputs["softmax_0.tmp_0.lod"]
        rec_idx_batch = outputs["ctc_greedy_decoder_0.tmp_0"]
        for rno in range(len(rec_idx_lod) - 1):
            beg = rec_idx_lod[rno]
            end = rec_idx_lod[rno + 1]
            rec_idx_tmp = rec_idx_batch[beg:end, 0]
            preds_text = self.char_ops.decode(rec_idx_tmp)
            beg = predict_lod[rno]
            end = predict_lod[rno + 1]
            probs = outputs["softmax_0.tmp_0"][beg:end, :]
            ind = np.argmax(probs, axis=1)
            blank = probs.shape[1]
            valid_ind = np.where(ind != (blank - 1))[0]
            score = np.mean(probs[valid_ind, ind[valid_ind]])
            rec_res.append([preds_text, score])
        return rec_res
@@ -189,7 +189,7 @@ class Client(object):
        # create predictor here
        if endpoints is None:
            if self.predictor_sdk_ is None:
                raise ValueError(
                    "You must set the endpoints parameter or use add_variant function to create a variant."
                )
        else:
@@ -215,7 +215,7 @@ class Client(object):
            return
        if isinstance(feed[key],
                      list) and len(feed[key]) != self.feed_tensor_len[key]:
            raise ValueError("The shape of feed tensor {} not match.".format(
                key))
        if type(feed[key]).__module__ == np.__name__ and np.size(feed[
                key]) != self.feed_tensor_len[key]:
@@ -316,7 +316,7 @@ class Client(object):
                int_feed_names, int_shape, fetch_names, result_batch_handle,
                self.pid)
        else:
            raise ValueError(
                "Please make sure the inputs are all in list type or all in numpy.array type"
            )
......
@@ -86,7 +86,7 @@ class WebService(object):
            for key in fetch_map:
                fetch_map[key] = fetch_map[key].tolist()
            fetch_map = self.postprocess(
                feed=request.json["feed"], fetch=fetch, fetch_map=fetch_map)
            result = {"result": fetch_map}
        except ValueError:
            result = {"result": "Request Value Error"}
......
@@ -384,7 +384,7 @@ class Server(object):
        finally:
            os.remove(tar_name)
        #release lock
        version_file.close()
        os.chdir(self.cur_path)
        self.bin_path = self.server_path + "/serving"
......
@@ -131,7 +131,7 @@ class WebService(object):
            for key in fetch_map:
                fetch_map[key] = fetch_map[key].tolist()
            result = self.postprocess(
                feed=request.json["feed"], fetch=fetch, fetch_map=fetch_map)
            result = {"result": result}
        except ValueError:
            result = {"result": "Request Value Error"}
......
FROM nvidia/cuda:9.0-cudnn7-devel-centos7 as builder
FROM nvidia/cuda:9.0-cudnn7-runtime-centos7

RUN yum -y install wget && \
    yum -y install epel-release && yum -y install patchelf && \
    yum -y install gcc make python-devel && \
@@ -13,4 +14,7 @@ RUN yum -y install wget && \
    ln -s /usr/local/cuda-9.0/lib64/libcublas.so.9.0 /usr/local/cuda-9.0/lib64/libcublas.so && \
    echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> /root/.bashrc && \
    ln -s /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so.7 /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so && \
    echo 'export LD_LIBRARY_PATH=/usr/local/cuda-9.0/targets/x86_64-linux/lib:$LD_LIBRARY_PATH' >> /root/.bashrc && \
    mkdir -p /usr/local/cuda/extras

COPY --from=builder /usr/local/cuda/extras/CUPTI /usr/local/cuda/extras/CUPTI
#!/usr/bin/env bash
set -x

function unsetproxy() {
    HTTP_PROXY_TEMP=$http_proxy
    HTTPS_PROXY_TEMP=$https_proxy
@@ -375,16 +375,17 @@ function python_test_multi_process(){
    sh get_data.sh
    case $TYPE in
    CPU)
        check_cmd "python -m paddle_serving_server.serve --model uci_housing_model --port 9292 --workdir test9292 &"
        check_cmd "python -m paddle_serving_server.serve --model uci_housing_model --port 9293 --workdir test9293 &"
        sleep 5
        check_cmd "python test_multi_process_client.py"
        kill_server_process
        echo "bert multi rpc RPC inference pass"
        ;;
    GPU)
        rm -rf ./image #TODO: The following code tried to create this folder, but no corresponding code was found
        check_cmd "python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9292 --workdir test9292 --gpu_ids 0 &"
        check_cmd "python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9293 --workdir test9293 --gpu_ids 0 &"
        sleep 5
        check_cmd "python test_multi_process_client.py"
        kill_server_process
@@ -454,15 +455,16 @@ function python_test_lac() {
    cd lac # pwd: /Serving/python/examples/lac
    case $TYPE in
    CPU)
        python -m paddle_serving_app.package --get_model lac
        tar -xzvf lac.tar.gz
        check_cmd "python -m paddle_serving_server.serve --model lac_model/ --port 9292 &"
        sleep 5
        check_cmd "echo \"我爱北京天安门\" | python lac_client.py lac_client/serving_client_conf.prototxt"
        echo "lac CPU RPC inference pass"
        kill_server_process
        unsetproxy # maybe the proxy is used on iPipe, which makes the web test fail.
        check_cmd "python lac_web_service.py lac_model/ lac_workdir 9292 &"
        sleep 5
        check_cmd "curl -H \"Content-Type:application/json\" -X POST -d '{\"feed\":[{\"words\": \"我爱北京天安门\"}], \"fetch\":[\"word_seg\"]}' http://127.0.0.1:9292/lac/prediction"
        # check http code
......