diff --git a/README.md b/README.md
index 08e8d85a7baf1896c70ebc1999f900bd6747a895..1818ddd61cc5423c4a590815930d007303f18e81 100644
--- a/README.md
+++ b/README.md
@@ -151,7 +151,6 @@ Here, `client.predict` function has two arguments. `feed` is a `python dict` wit
- **Distributed Key-Value indexing** supported which is especially useful for large scale sparse features as model inputs.
- **Highly concurrent and efficient communication** between clients and servers supported.
- **Multiple programming languages** supported on client side, such as Golang, C++ and python.
-- **Extensible framework design** which can support model serving beyond Paddle.
Document
diff --git a/README_CN.md b/README_CN.md
index 8b091bdc9906007a4683b50184d08cd960483730..29cf095248f4c125b3dba7146e67efe8b7abae6c 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -156,7 +156,6 @@ print(fetch_map)
- 支持 **分布式键值对索引** 助力于大规模稀疏特征作为模型输入.
- 支持客户端和服务端之间 **高并发和高效通信**.
- 支持 **多种编程语言** 开发客户端,例如Golang,C++和Python.
-- **可伸缩框架设计** 可支持不限于Paddle的模型服务.
文档
diff --git a/doc/COMPILE.md b/doc/COMPILE.md
index 411620af2ee10a769384c36cebc3aa3ecb93ea49..f4a6639bdb38fac97662084f7d927d24b6179717 100644
--- a/doc/COMPILE.md
+++ b/doc/COMPILE.md
@@ -20,7 +20,7 @@ This document will take Python2 as an example to show how to compile Paddle Serv
- Set `DPYTHON_INCLUDE_DIR` to `$PYTHONROOT/include/python3.6m/`
- Set `DPYTHON_LIBRARIES` to `$PYTHONROOT/lib64/libpython3.6.so`
-- Set `DPYTHON_EXECUTABLE` to `$PYTHONROOT/bin/python3`
+- Set `DPYTHON_EXECUTABLE` to `$PYTHONROOT/bin/python3.6`
## Get Code
@@ -36,6 +36,8 @@ cd Serving && git submodule update --init --recursive
export PYTHONROOT=/usr/
```
+In the default CentOS 7 image we provide, the Python path is `/usr/bin/python`. If you use our CentOS 6 image instead, set `export PYTHONROOT=/usr/local/python2.7/`.
+
## Compile Server
### Integrated CPU version paddle inference library
diff --git a/doc/COMPILE_CN.md b/doc/COMPILE_CN.md
index 44802260719d37a3140ca15f6a2ccc15479e32d6..d8fd277131d7d169c1a47689e15556e5d10a0fdb 100644
--- a/doc/COMPILE_CN.md
+++ b/doc/COMPILE_CN.md
@@ -20,7 +20,7 @@
- 将`DPYTHON_INCLUDE_DIR`设置为`$PYTHONROOT/include/python3.6m/`
- 将`DPYTHON_LIBRARIES`设置为`$PYTHONROOT/lib64/libpython3.6.so`
-- 将`DPYTHON_EXECUTABLE`设置为`$PYTHONROOT/bin/python3`
+- 将`DPYTHON_EXECUTABLE`设置为`$PYTHONROOT/bin/python3.6`
## 获取代码
@@ -36,6 +36,8 @@ cd Serving && git submodule update --init --recursive
export PYTHONROOT=/usr/
```
+在我们提供的默认CentOS 7镜像中,Python路径为`/usr/bin/python`;如果您要使用我们的CentOS 6镜像,需要设置`export PYTHONROOT=/usr/local/python2.7/`。
+
## 编译Server部分
### 集成CPU版本Paddle Inference Library
diff --git a/doc/HOT_LOADING_IN_SERVING.md b/doc/HOT_LOADING_IN_SERVING.md
index 299b49d4c9b58af413e5507b5523e93a02acc7d1..94575ca51368e4b9d03cdc65ce391a0ae43f0175 100644
--- a/doc/HOT_LOADING_IN_SERVING.md
+++ b/doc/HOT_LOADING_IN_SERVING.md
@@ -46,7 +46,7 @@ In this example, the production model is uploaded to HDFS in `product_path` fold
### Product model
-Run the following Python code products model in `product_path` folder. Every 60 seconds, the package file of Boston house price prediction model `uci_housing.tar.gz` will be generated and uploaded to the path of HDFS `/`. After uploading, the timestamp file `donefile` will be updated and uploaded to the path of HDFS `/`.
+Run the following Python code in the `product_path` folder to produce the model (you need to modify the Hadoop-related parameters before running). Every 60 seconds, the package file of the Boston house price prediction model `uci_housing.tar.gz` will be generated and uploaded to the HDFS path `/`. After uploading, the timestamp file `donefile` will be updated and uploaded to the HDFS path `/`.
```python
import os
@@ -82,9 +82,14 @@ exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
def push_to_hdfs(local_file_path, remote_path):
- hadoop_bin = '/hadoop-3.1.2/bin/hadoop'
- os.system('{} fs -put -f {} {}'.format(
- hadoop_bin, local_file_path, remote_path))
+ afs = 'afs://***.***.***.***:***' # User needs to change
+ ugi = '***,***' # User needs to change
+ hadoop_bin = '/path/to/hadoop/bin' # User needs to change
+ prefix = '{} fs -Dfs.default.name={} -Dhadoop.job.ugi={}'.format(hadoop_bin, afs, ugi)
+ os.system('{} -rmr {}/{}'.format(
+ prefix, remote_path, local_file_path))
+ os.system('{} -put {} {}'.format(
+ prefix, local_file_path, remote_path))
name = "uci_housing"
for pass_id in range(30):
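For readers adapting this hunk, here is a minimal standalone sketch of the producer step built around the new helper. The AFS address, ugi credentials, hadoop binary path, and the model directory name are placeholders, not values from this repo:

```python
import os
import time

def push_to_hdfs(local_file_path, remote_path):
    afs = 'afs://example:9902'                 # placeholder: your AFS address
    ugi = 'user,password'                      # placeholder: your ugi
    hadoop_bin = '/path/to/hadoop/bin/hadoop'  # placeholder: your hadoop binary
    prefix = '{} fs -Dfs.default.name={} -Dhadoop.job.ugi={}'.format(
        hadoop_bin, afs, ugi)
    # remove the stale remote copy first, then upload the fresh one
    os.system('{} -rmr {}/{}'.format(prefix, remote_path, local_file_path))
    os.system('{} -put {} {}'.format(prefix, local_file_path, remote_path))

def publish_once():
    # package the model, upload it, then refresh and upload the donefile
    os.system('tar -czf uci_housing.tar.gz uci_housing_model')
    push_to_hdfs('uci_housing.tar.gz', '/')
    with open('donefile', 'w') as f:
        f.write(str(int(time.time())))
    push_to_hdfs('donefile', '/')
```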
diff --git a/doc/HOT_LOADING_IN_SERVING_CN.md b/doc/HOT_LOADING_IN_SERVING_CN.md
index 83cb20a3f661c6aa4bbcc3312ac131da1bb5038e..97a2272cffed18e7753859e9991757a5cccb7439 100644
--- a/doc/HOT_LOADING_IN_SERVING_CN.md
+++ b/doc/HOT_LOADING_IN_SERVING_CN.md
@@ -46,7 +46,7 @@ Paddle Serving提供了一个自动监控脚本,远端地址更新模型后会
### 生产模型
-在`product_path`下运行下面的Python代码生产模型,每隔 60 秒会产出 Boston 房价预测模型的打包文件`uci_housing.tar.gz`并上传至hdfs的`/`路径下,上传完毕后更新时间戳文件`donefile`并上传至hdfs的`/`路径下。
+在`product_path`下运行下面的Python代码生产模型(运行前需要修改hadoop相关的参数),每隔 60 秒会产出 Boston 房价预测模型的打包文件`uci_housing.tar.gz`并上传至hdfs的`/`路径下,上传完毕后更新时间戳文件`donefile`并上传至hdfs的`/`路径下。
```python
import os
@@ -82,9 +82,14 @@ exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
def push_to_hdfs(local_file_path, remote_path):
- hadoop_bin = '/hadoop-3.1.2/bin/hadoop'
- os.system('{} fs -put -f {} {}'.format(
- hadoop_bin, local_file_path, remote_path))
+ afs = 'afs://***.***.***.***:***' # User needs to change
+ ugi = '***,***' # User needs to change
+ hadoop_bin = '/path/to/hadoop/bin' # User needs to change
+ prefix = '{} fs -Dfs.default.name={} -Dhadoop.job.ugi={}'.format(hadoop_bin, afs, ugi)
+ os.system('{} -rmr {}/{}'.format(
+ prefix, remote_path, local_file_path))
+ os.system('{} -put {} {}'.format(
+ prefix, local_file_path, remote_path))
name = "uci_housing"
for pass_id in range(30):
diff --git a/doc/UWSGI_DEPLOY.md b/doc/UWSGI_DEPLOY.md
index cb3fb506bf6fd4461240ebe43234fa3bed3d4784..92b69fc1f3da6c791c1009d41bbb3a3ec6f30594 100644
--- a/doc/UWSGI_DEPLOY.md
+++ b/doc/UWSGI_DEPLOY.md
@@ -29,7 +29,7 @@ from paddle_serving_server.web_service import WebService
uci_service = WebService(name = "uci")
uci_service.load_model_config("./uci_housing_model")
uci_service.prepare_server(workdir="./workdir", port=int(9500), device="cpu")
-uci_service.run_server()
+uci_service.run_rpc_service()
#Get flask application
app_instance = uci_service.get_app_instance()
```
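A sketch of the module uWSGI would load, assuming `get_app_instance()` returns a standard flask application as the doc describes; the module name and worker count are illustrative:

```python
# uwsgi_app.py -- hypothetical module exposing the flask app to uWSGI
from paddle_serving_server.web_service import WebService

uci_service = WebService(name="uci")
uci_service.load_model_config("./uci_housing_model")
uci_service.prepare_server(workdir="./workdir", port=9500, device="cpu")
uci_service.run_rpc_service()  # start the RPC backend; uWSGI serves the web layer

# uWSGI can then look this callable up, e.g.:
#   uwsgi --http :9292 --module uwsgi_app:app_instance --processes 4
app_instance = uci_service.get_app_instance()
```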
diff --git a/doc/UWSGI_DEPLOY_CN.md b/doc/UWSGI_DEPLOY_CN.md
index 5bb87e26bbae729f8c21b4681413a4c9f5c4e7c8..966155162f5ff90e88f9b743a3047b5d86440a46 100644
--- a/doc/UWSGI_DEPLOY_CN.md
+++ b/doc/UWSGI_DEPLOY_CN.md
@@ -29,7 +29,7 @@ from paddle_serving_server.web_service import WebService
uci_service = WebService(name = "uci")
uci_service.load_model_config("./uci_housing_model")
uci_service.prepare_server(workdir="./workdir", port=int(9500), device="cpu")
-uci_service.run_server()
+uci_service.run_rpc_service()
#获取flask服务
app_instance = uci_service.get_app_instance()
```
diff --git a/python/examples/fit_a_line/test_multi_process_client.py b/python/examples/fit_a_line/test_multi_process_client.py
index 46ba3b60b5ae09b568868531d32234ade50d8556..5272d095df5e74f25ce0e36ca22c8d6d1884f5f0 100644
--- a/python/examples/fit_a_line/test_multi_process_client.py
+++ b/python/examples/fit_a_line/test_multi_process_client.py
@@ -22,15 +22,19 @@ def single_func(idx, resource):
client.load_client_config(
"./uci_housing_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9293", "127.0.0.1:9292"])
- test_reader = paddle.batch(
- paddle.reader.shuffle(
- paddle.dataset.uci_housing.test(), buf_size=500),
- batch_size=1)
- for data in test_reader():
- fetch_map = client.predict(feed={"x": data[0][0]}, fetch=["price"])
+ x = [
+ 0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727, -0.1583, -0.0584,
+ 0.6283, 0.4919, 0.1856, 0.0795, -0.0332
+ ]
+ for i in range(1000):
+ fetch_map = client.predict(feed={"x": x}, fetch=["price"])
+ if fetch_map is None:
+ return [[None]]
return [[0]]
multi_thread_runner = MultiThreadRunner()
thread_num = 4
result = multi_thread_runner.run(single_func, thread_num, {})
+if None in result[0]:
+ exit(1)
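For context, a minimal sketch of the `MultiThreadRunner` pattern this test now relies on; the worker body is illustrative, while the `run(func, thread_num, resource)` signature matches the call above:

```python
from paddle_serving_client.utils import MultiThreadRunner

def worker(idx, resource):
    # each worker receives its thread index and the shared resource dict,
    # and returns a list of per-metric result lists
    return [[idx]]

runner = MultiThreadRunner()
results = runner.run(worker, 4, {})  # 4 threads, empty shared resource
print(results)  # results[0] aggregates the first metric across threads
```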
diff --git a/python/examples/lac/README.md b/python/examples/lac/README.md
index bc420186a09dfd0066c1abf0c0d95063e9cb0699..8d7adfb583f8e8e1fde0681a73f2bba65452fa87 100644
--- a/python/examples/lac/README.md
+++ b/python/examples/lac/README.md
@@ -2,28 +2,27 @@
([简体中文](./README_CN.md)|English)
-### Get model files and sample data
+### Get Model
```
-sh get_data.sh
+python -m paddle_serving_app.package --get_model lac
+tar -xzvf lac.tar.gz
```
-the package downloaded contains lac model config along with lac dictionary.
-
#### Start RPC inference service
```
-python -m paddle_serving_server.serve --model jieba_server_model/ --port 9292
+python -m paddle_serving_server.serve --model lac_model/ --port 9292
```
### RPC Infer
```
-echo "我爱北京天安门" | python lac_client.py jieba_client_conf/serving_client_conf.prototxt lac_dict/
+echo "我爱北京天安门" | python lac_client.py lac_client/serving_client_conf.prototxt
```
-it will get the segmentation result
+It will get the segmentation result.
### Start HTTP inference service
```
-python lac_web_service.py jieba_server_model/ lac_workdir 9292
+python lac_web_service.py lac_model/ lac_workdir 9292
```
### HTTP Infer
diff --git a/python/examples/lac/README_CN.md b/python/examples/lac/README_CN.md
index 449f474ca291053eb6880166c52814c9d4180f36..2379aa8ed69c026c6afd94b8b791774882eaf567 100644
--- a/python/examples/lac/README_CN.md
+++ b/python/examples/lac/README_CN.md
@@ -2,28 +2,27 @@
(简体中文|[English](./README.md))
-### 获取模型和字典文件
+### 获取模型
```
-sh get_data.sh
+python -m paddle_serving_app.package --get_model lac
+tar -xzvf lac.tar.gz
```
-下载包里包含了lac模型和lac模型预测需要的字典文件
-
#### 开启RPC预测服务
```
-python -m paddle_serving_server.serve --model jieba_server_model/ --port 9292
+python -m paddle_serving_server.serve --model lac_model/ --port 9292
```
### 执行RPC预测
```
-echo "我爱北京天安门" | python lac_client.py jieba_client_conf/serving_client_conf.prototxt lac_dict/
+echo "我爱北京天安门" | python lac_client.py lac_client/serving_client_conf.prototxt
```
我们就能得到分词结果
### 开启HTTP预测服务
```
-python lac_web_service.py jieba_server_model/ lac_workdir 9292
+python lac_web_service.py lac_model/ lac_workdir 9292
```
### 执行HTTP预测
diff --git a/python/examples/lac/benchmark.py b/python/examples/lac/benchmark.py
index 53d0881ed74e5e19104a70fb93d6872141d27afd..64e935a608477d5841df1b64abf7b6eb35dd1a4b 100644
--- a/python/examples/lac/benchmark.py
+++ b/python/examples/lac/benchmark.py
@@ -16,7 +16,7 @@
import sys
import time
import requests
-from lac_reader import LACReader
+from paddle_serving_app.reader import LACReader
from paddle_serving_client import Client
from paddle_serving_client.utils import MultiThreadRunner
from paddle_serving_client.utils import benchmark_args
@@ -25,7 +25,7 @@ args = benchmark_args()
def single_func(idx, resource):
- reader = LACReader("lac_dict")
+ reader = LACReader()
start = time.time()
if args.request == "rpc":
client = Client()
diff --git a/python/examples/lac/get_data.sh b/python/examples/lac/get_data.sh
deleted file mode 100644
index 29e6a6b2b3e995f78c37e15baf2f9a3b627ca9ef..0000000000000000000000000000000000000000
--- a/python/examples/lac/get_data.sh
+++ /dev/null
@@ -1,2 +0,0 @@
-wget --no-check-certificate https://paddle-serving.bj.bcebos.com/lac/lac_model_jieba_web.tar.gz
-tar -zxvf lac_model_jieba_web.tar.gz
diff --git a/python/examples/lac/lac_client.py b/python/examples/lac/lac_client.py
index 9c485a923e4d42b72af41f7b9ad45c5702ca93a1..ab9af730abb2f5b33f4d0292115b2f7bf682f278 100644
--- a/python/examples/lac/lac_client.py
+++ b/python/examples/lac/lac_client.py
@@ -15,7 +15,7 @@
# pylint: disable=doc-string-missing
from paddle_serving_client import Client
-from lac_reader import LACReader
+from paddle_serving_app.reader import LACReader
import sys
import os
import io
@@ -24,7 +24,7 @@ client = Client()
client.load_client_config(sys.argv[1])
client.connect(["127.0.0.1:9292"])
-reader = LACReader(sys.argv[2])
+reader = LACReader()
for line in sys.stdin:
if len(line) <= 0:
continue
@@ -32,4 +32,8 @@ for line in sys.stdin:
if len(feed_data) <= 0:
continue
fetch_map = client.predict(feed={"words": feed_data}, fetch=["crf_decode"])
- print(fetch_map)
+ begin = fetch_map['crf_decode.lod'][0]
+ end = fetch_map['crf_decode.lod'][1]
+ segs = reader.parse_result(line, fetch_map["crf_decode"][begin:end])
+
+ print({"word_seg": "|".join(segs)})
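Putting the hunk together, the updated client reads roughly as follows end-to-end (a sketch; `reader.process` is assumed to tokenize the raw line, as in the senta example later in this diff):

```python
import sys
from paddle_serving_client import Client
from paddle_serving_app.reader import LACReader

client = Client()
client.load_client_config("lac_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])

reader = LACReader()  # the dictionary now ships inside the packaged model
for line in sys.stdin:
    feed_data = reader.process(line)
    if len(feed_data) <= 0:
        continue
    fetch_map = client.predict(feed={"words": feed_data}, fetch=["crf_decode"])
    # the lod offsets delimit this sample inside the batched CRF output
    begin, end = fetch_map['crf_decode.lod'][0], fetch_map['crf_decode.lod'][1]
    segs = reader.parse_result(line, fetch_map["crf_decode"][begin:end])
    print({"word_seg": "|".join(segs)})
```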
diff --git a/python/examples/lac/lac_web_service.py b/python/examples/lac/lac_web_service.py
index 62a7148b230029bc781fa550597df25471a7fc8d..9b1c6693b52393aee1294b521fe30fb1a9fd0d79 100644
--- a/python/examples/lac/lac_web_service.py
+++ b/python/examples/lac/lac_web_service.py
@@ -14,7 +14,7 @@
from paddle_serving_server.web_service import WebService
import sys
-from lac_reader import LACReader
+from paddle_serving_app.reader import LACReader
class LACService(WebService):
diff --git a/python/examples/ocr/README.md b/python/examples/ocr/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..04c4fd3eaa304e55d980a2cf4fc34dda50f5009c
--- /dev/null
+++ b/python/examples/ocr/README.md
@@ -0,0 +1,21 @@
+# OCR
+
+## Get Model
+```
+python -m paddle_serving_app.package --get_model ocr_rec
+tar -xzvf ocr_rec.tar.gz
+```
+
+## RPC Service
+
+### Start Service
+
+```
+python -m paddle_serving_server.serve --model ocr_rec_model --port 9292
+```
+
+### Client Prediction
+
+```
+python test_ocr_rec_client.py
+```
diff --git a/python/examples/ocr/test_ocr_rec_client.py b/python/examples/ocr/test_ocr_rec_client.py
new file mode 100644
index 0000000000000000000000000000000000000000..b61256d03202374ada5b0d50a075fef156eca2ea
--- /dev/null
+++ b/python/examples/ocr/test_ocr_rec_client.py
@@ -0,0 +1,31 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from paddle_serving_client import Client
+from paddle_serving_app.reader import OCRReader
+import cv2
+
+client = Client()
+client.load_client_config("ocr_rec_client/serving_client_conf.prototxt")
+client.connect(["127.0.0.1:9292"])
+
+image_file_list = ["./test_rec.jpg"]
+img = cv2.imread(image_file_list[0])
+ocr_reader = OCRReader()
+feed = {"image": ocr_reader.preprocess([img])}
+fetch = ["ctc_greedy_decoder_0.tmp_0", "softmax_0.tmp_0"]
+fetch_map = client.predict(feed=feed, fetch=fetch)
+rec_res = ocr_reader.postprocess(fetch_map)
+print(image_file_list[0])
+print(rec_res[0][0])
diff --git a/python/examples/ocr/test_rec.jpg b/python/examples/ocr/test_rec.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..2c34cd33eac5766a072fde041fa6c9b1d612f1db
Binary files /dev/null and b/python/examples/ocr/test_rec.jpg differ
diff --git a/python/examples/senta/README.md b/python/examples/senta/README.md
index 88aac352110850a71ae0f9a28c1a98293f8e0ab9..9aeb6d1191719e067e2cb99d408a6d091c25ede3 100644
--- a/python/examples/senta/README.md
+++ b/python/examples/senta/README.md
@@ -1,21 +1,20 @@
-# Chinese sentence sentiment classification
+# Chinese Sentence Sentiment Classification
([简体中文](./README_CN.md)|English)
-## Get model files and sample data
-```
-sh get_data.sh
-```
-## Install preprocess module
+## Get Model
```
-pip install paddle_serving_app
+python -m paddle_serving_app.package --get_model senta_bilstm
+python -m paddle_serving_app.package --get_model lac
```
-## Start http service
+## Start HTTP Service
```
-python senta_web_service.py senta_bilstm_model/ workdir 9292
+python -m paddle_serving_server.serve --model lac_model --port 9300
+python senta_web_service.py
```
-In the Chinese sentiment classification task, the Chinese word segmentation needs to be done through [LAC task] (../lac). Set model path by ```lac_model_path``` and dictionary path by ```lac_dict_path```.
-In this demo, the LAC task is placed in the preprocessing part of the HTTP prediction service of the sentiment classification task. The LAC prediction service is deployed on the CPU, and the sentiment classification task is deployed on the GPU, which can be changed according to the actual situation.
+In the Chinese sentiment classification task, Chinese word segmentation is first performed by the [LAC task](../lac).
+In this demo, the LAC task serves as the preprocessing step of the sentiment classification HTTP prediction service.
+
## Client prediction
```
curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"words": "天气不错"}], "fetch":["class_probs"]}' http://127.0.0.1:9292/senta/prediction
diff --git a/python/examples/senta/README_CN.md b/python/examples/senta/README_CN.md
index f5011334db768c5f0869c296769ead7cb38613d8..f958af221d843748836bea325f87ba603411d39c 100644
--- a/python/examples/senta/README_CN.md
+++ b/python/examples/senta/README_CN.md
@@ -1,20 +1,19 @@
# 中文语句情感分类
(简体中文|[English](./README.md))
-## 获取模型文件和样例数据
-```
-sh get_data.sh
-```
-## 安装数据预处理模块
+
+## 获取模型文件
```
-pip install paddle_serving_app
+python -m paddle_serving_app.package --get_model senta_bilstm
+python -m paddle_serving_app.package --get_model lac
```
## 启动HTTP服务
```
-python senta_web_service.py senta_bilstm_model/ workdir 9292
+python -m paddle_serving_server.serve --model lac_model --port 9300
+python senta_web_service.py
```
-中文情感分类任务中需要先通过[LAC任务](../lac)进行中文分词,在脚本中通过```lac_model_path```参数配置LAC任务的模型文件路径,```lac_dict_path```参数配置LAC任务词典路径。
-示例中将LAC任务放在情感分类任务的HTTP预测服务的预处理部分,LAC预测服务部署在CPU上,情感分类任务部署在GPU上,可以根据实际情况进行更改。
+中文情感分类任务中需要先通过[LAC任务](../lac)进行中文分词。
+示例中将LAC任务放在情感分类任务的HTTP预测服务的预处理部分。
## 客户端预测
```
diff --git a/python/examples/senta/senta_web_service.py b/python/examples/senta/senta_web_service.py
index 0621ece74173596a1820f1b09258ecf5bb727f29..25c880ef8877aed0f3f9d394d1780855130f365b 100644
--- a/python/examples/senta/senta_web_service.py
+++ b/python/examples/senta/senta_web_service.py
@@ -1,3 +1,4 @@
+#encoding=utf-8
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -12,56 +13,28 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from paddle_serving_server_gpu.web_service import WebService
+from paddle_serving_server.web_service import WebService
from paddle_serving_client import Client
from paddle_serving_app.reader import LACReader, SentaReader
import os
import sys
-from multiprocessing import Process
-class SentaService(WebService):
- def set_config(
- self,
- lac_model_path,
- lac_dict_path,
- senta_dict_path, ):
- self.lac_model_path = lac_model_path
- self.lac_client_config_path = lac_model_path + "/serving_server_conf.prototxt"
- self.lac_dict_path = lac_dict_path
- self.senta_dict_path = senta_dict_path
-
- def start_lac_service(self):
- if not os.path.exists('./lac_serving'):
- os.mkdir("./lac_serving")
- os.chdir('./lac_serving')
- self.lac_port = self.port + 100
- r = os.popen(
- "python -m paddle_serving_server.serve --model {} --port {} &".
- format("../" + self.lac_model_path, self.lac_port))
- os.chdir('..')
-
- def init_lac_service(self):
- ps = Process(target=self.start_lac_service())
- ps.start()
- self.init_lac_client()
-
- def lac_predict(self, feed_data):
- lac_result = self.lac_client.predict(
- feed={"words": feed_data}, fetch=["crf_decode"])
- return lac_result
-
- def init_lac_client(self):
- self.lac_client = Client()
- self.lac_client.load_client_config(self.lac_client_config_path)
- self.lac_client.connect(["127.0.0.1:{}".format(self.lac_port)])
- def init_lac_reader(self):
+class SentaService(WebService):
+ # Initialize the LAC model prediction service client
+ def init_lac_client(self, lac_port, lac_client_config):
self.lac_reader = LACReader()
-
- def init_senta_reader(self):
self.senta_reader = SentaReader()
+ self.lac_client = Client()
+ self.lac_client.load_client_config(lac_client_config)
+ self.lac_client.connect(["127.0.0.1:{}".format(lac_port)])
+ # Preprocessing for the senta prediction service. Call order: LAC reader -> LAC model prediction -> postprocess of the LAC result -> senta reader
def preprocess(self, feed=[], fetch=[]):
feed_data = [{
"words": self.lac_reader.process(x["words"])
@@ -80,15 +53,9 @@ class SentaService(WebService):
senta_service = SentaService(name="senta")
-senta_service.set_config(
- lac_model_path="./lac_model",
- lac_dict_path="./lac_dict",
- senta_dict_path="./vocab.txt")
-senta_service.load_model_config(sys.argv[1])
-senta_service.prepare_server(
- workdir=sys.argv[2], port=int(sys.argv[3]), device="cpu")
-senta_service.init_lac_reader()
-senta_service.init_senta_reader()
-senta_service.init_lac_service()
+senta_service.load_model_config("senta_bilstm_model")
+senta_service.prepare_server(workdir="workdir", port=9292, device="cpu")
+senta_service.init_lac_client(
+ lac_port=9300, lac_client_config="lac_model/serving_server_conf.prototxt")
senta_service.run_rpc_service()
senta_service.run_web_service()
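Once both services are up, the endpoint can be exercised the same way as the curl example in the README; a hedged Python equivalent:

```python
import requests

# mirrors: curl -H "Content-Type:application/json" -X POST -d '...' \
#   http://127.0.0.1:9292/senta/prediction
data = {"feed": [{"words": "天气不错"}], "fetch": ["class_probs"]}
resp = requests.post("http://127.0.0.1:9292/senta/prediction", json=data)
print(resp.json())
```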
diff --git a/python/paddle_serving_app/README.md b/python/paddle_serving_app/README.md
index 07fff931e250bad59ef2bedfe1e054f4682f6c9f..6757407939c150ca14a22427a488f41a24feb7ac 100644
--- a/python/paddle_serving_app/README.md
+++ b/python/paddle_serving_app/README.md
@@ -21,15 +21,15 @@ python -m paddle_serving_app.package --list_model
python -m paddle_serving_app.package --get_model senta_bilstm
```
-11 pre-trained models are built into paddle_serving_app, covering 6 kinds of prediction tasks.
+11 pre-trained models are built into paddle_serving_app, covering 7 kinds of prediction tasks.
The model files can be directly used for deployment, and the `--tutorial` argument can be added to obtain the deployment method.
| Prediction task | Model name |
| ------------ | ------------------------------------------------ |
| SentimentAnalysis | 'senta_bilstm', 'senta_bow', 'senta_cnn' |
-| SemanticRepresentation | 'ernie_base' |
+| SemanticRepresentation | 'ernie' |
| ChineseWordSegmentation | 'lac' |
-| ObjectDetection | 'faster_rcnn', 'yolov3' |
+| ObjectDetection | 'faster_rcnn' |
| ImageSegmentation | 'unet', 'deeplabv3' |
| ImageClassification | 'resnet_v2_50_imagenet', 'mobilenet_v2_imagenet' |
+| OCR | 'ocr_rec' |
@@ -76,7 +76,7 @@ Preprocessing for Chinese word segmentation task.
[example](../examples/senta/senta_web_service.py)
-- The image preprocessing method is more flexible than the above method, and can be combined by the following multiple classes,[example](../examples/imagenet/image_rpc_client.py)
+- The image preprocessing method is more flexible than the above method, and can be combined by the following multiple classes,[example](../examples/imagenet/resnet50_rpc_client.py)
- class Sequential
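Image preprocessing composes via `Sequential` as noted above; a small sketch in the style of the OCR reader added later in this diff (target shape and file name are illustrative):

```python
import cv2
from paddle_serving_app.reader import Sequential, Resize, Transpose, Div, Normalize

seq = Sequential([
    Resize(32, 320),       # resize to H=32, W=320 (illustrative)
    Transpose((2, 0, 1)),  # HWC -> CHW
    Div(255),              # scale pixel values to [0, 1]
    Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5], True),
])
img = cv2.imread("test_rec.jpg")
print(seq(img).shape)  # expected: (3, 32, 320)
```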
diff --git a/python/paddle_serving_app/README_CN.md b/python/paddle_serving_app/README_CN.md
index f6fda8beaf75264d8ae5d2cbb939fdf226c342ab..d29c3fd9fff3ba2ab34ec67b6fd15ad10e3cfd07 100644
--- a/python/paddle_serving_app/README_CN.md
+++ b/python/paddle_serving_app/README_CN.md
@@ -20,14 +20,14 @@ python -m paddle_serving_app.package --list_model
python -m paddle_serving_app.package --get_model senta_bilstm
```
-paddle_serving_app中内置了11中预训练模型,涵盖了6种预测任务。获取到的模型文件可以直接用于部署,添加`--tutorial`参数可以获取对应的部署方式。
+paddle_serving_app中内置了11种预训练模型,涵盖了7种预测任务。获取到的模型文件可以直接用于部署,添加`--tutorial`参数可以获取对应的部署方式。
| 预测服务类型 | 模型名称 |
| ------------ | ------------------------------------------------ |
| 中文情感分析 | 'senta_bilstm', 'senta_bow', 'senta_cnn' |
-| 语义理解 | 'ernie_base' |
+| 语义理解 | 'ernie' |
| 中文分词 | 'lac' |
-| 图像检测 | 'faster_rcnn', 'yolov3' |
+| 图像检测 | 'faster_rcnn' |
| 图像分割 | 'unet', 'deeplabv3' |
| 图像分类 | 'resnet_v2_50_imagenet', 'mobilenet_v2_imagenet' |
+| OCR | 'ocr_rec' |
@@ -71,7 +71,7 @@ paddle_serving_app针对CV和NLP领域的模型任务,提供了多种常见的
[参考示例](../examples/senta/senta_web_service.py)
-- 图像的预处理方法相比于上述的方法更加灵活多变,可以通过以下的多个类进行组合,[参考示例](../examples/imagenet/image_rpc_client.py)
+- 图像的预处理方法相比于上述的方法更加灵活多变,可以通过以下的多个类进行组合,[参考示例](../examples/imagenet/resnet50_rpc_client.py)
- class Sequential
diff --git a/python/paddle_serving_app/models/model_list.py b/python/paddle_serving_app/models/model_list.py
index 3d08f2fea95cc07e0cb1b57b005f72b95c6a4bcd..cf0ca3bf5765d65065e541462eb919ccc5c4b978 100644
--- a/python/paddle_serving_app/models/model_list.py
+++ b/python/paddle_serving_app/models/model_list.py
@@ -22,19 +22,21 @@ class ServingModels(object):
self.model_dict = OrderedDict()
self.model_dict[
"SentimentAnalysis"] = ["senta_bilstm", "senta_bow", "senta_cnn"]
- self.model_dict["SemanticRepresentation"] = ["ernie_base"]
+ self.model_dict["SemanticRepresentation"] = ["ernie"]
self.model_dict["ChineseWordSegmentation"] = ["lac"]
- self.model_dict["ObjectDetection"] = ["faster_rcnn", "yolov3"]
+ self.model_dict["ObjectDetection"] = ["faster_rcnn"]
self.model_dict["ImageSegmentation"] = [
"unet", "deeplabv3", "deeplabv3+cityscapes"
]
self.model_dict["ImageClassification"] = [
"resnet_v2_50_imagenet", "mobilenet_v2_imagenet"
]
+ self.model_dict["OCR"] = ["ocr_rec"]
image_class_url = "https://paddle-serving.bj.bcebos.com/paddle_hub_models/image/ImageClassification/"
image_seg_url = "https://paddle-serving.bj.bcebos.com/paddle_hub_models/image/ImageSegmentation/"
object_detection_url = "https://paddle-serving.bj.bcebos.com/paddle_hub_models/image/ObjectDetection/"
+ ocr_url = "https://paddle-serving.bj.bcebos.com/paddle_hub_models/image/OCR/"
senta_url = "https://paddle-serving.bj.bcebos.com/paddle_hub_models/text/SentimentAnalysis/"
semantic_url = "https://paddle-serving.bj.bcebos.com/paddle_hub_models/text/SemanticRepresentation/"
wordseg_url = "https://paddle-serving.bj.bcebos.com/paddle_hub_models/text/LexicalAnalysis/"
@@ -52,6 +54,7 @@ class ServingModels(object):
pack_url(self.model_dict, "ObjectDetection", object_detection_url)
pack_url(self.model_dict, "ImageSegmentation", image_seg_url)
pack_url(self.model_dict, "ImageClassification", image_class_url)
+ pack_url(self.model_dict, "OCR", ocr_url)
def get_model_list(self):
return self.model_dict
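A quick way to confirm the new OCR entry is registered, using the accessor defined above:

```python
from paddle_serving_app.models.model_list import ServingModels

models = ServingModels()
model_dict = models.get_model_list()  # OrderedDict of task -> model names
print(model_dict["OCR"])              # expected to contain "ocr_rec"
```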
diff --git a/python/paddle_serving_app/reader/__init__.py b/python/paddle_serving_app/reader/__init__.py
index 0eee878284e2028657a660acd38a21934bb5ccd7..b2b5e75ac430ecf897e34ec7afc994c9ccf8ee66 100644
--- a/python/paddle_serving_app/reader/__init__.py
+++ b/python/paddle_serving_app/reader/__init__.py
@@ -12,7 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from .chinese_bert_reader import ChineseBertReader
-from .image_reader import ImageReader, File2Image, URL2Image, Sequential, Normalize, CenterCrop, Resize, Transpose, Div, RGB2BGR, BGR2RGB, RCNNPostprocess, SegPostprocess, PadStride
+from .image_reader import ImageReader, File2Image, URL2Image, Sequential, Normalize
+from .image_reader import CenterCrop, Resize, Transpose, Div, RGB2BGR, BGR2RGB
+from .image_reader import RCNNPostprocess, SegPostprocess, PadStride
from .lac_reader import LACReader
from .senta_reader import SentaReader
from .imdb_reader import IMDBDataset
+from .ocr_reader import OCRReader
diff --git a/python/paddle_serving_app/reader/ocr_reader.py b/python/paddle_serving_app/reader/ocr_reader.py
new file mode 100644
index 0000000000000000000000000000000000000000..e5dc88482bd5e0a7a26873fd5cb60c43dc5104c9
--- /dev/null
+++ b/python/paddle_serving_app/reader/ocr_reader.py
@@ -0,0 +1,203 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import cv2
+import copy
+import numpy as np
+import math
+import re
+import sys
+import string
+import argparse
+from paddle_serving_app.reader import Sequential, Resize, Transpose, Div, Normalize
+
+
+class CharacterOps(object):
+ """ Convert between text-label and text-index """
+
+ def __init__(self, config):
+ self.character_type = config['character_type']
+ self.loss_type = config['loss_type']
+ if self.character_type == "en":
+ self.character_str = "0123456789abcdefghijklmnopqrstuvwxyz"
+ dict_character = list(self.character_str)
+ elif self.character_type == "ch":
+ character_dict_path = config['character_dict_path']
+ self.character_str = ""
+ with open(character_dict_path, "rb") as fin:
+ lines = fin.readlines()
+ for line in lines:
+ line = line.decode('utf-8').strip("\n").strip("\r\n")
+ self.character_str += line
+ dict_character = list(self.character_str)
+ elif self.character_type == "en_sensitive":
+ # same as the ASTER setting (uses 94 printable characters)
+ self.character_str = string.printable[:-6]
+ dict_character = list(self.character_str)
+ else:
+ self.character_str = None
+ assert self.character_str is not None, \
+ "Unsupported character type: {}".format(self.character_type)
+ self.beg_str = "sos"
+ self.end_str = "eos"
+ if self.loss_type == "attention":
+ dict_character = [self.beg_str, self.end_str] + dict_character
+ self.dict = {}
+ for i, char in enumerate(dict_character):
+ self.dict[char] = i
+ self.character = dict_character
+
+ def encode(self, text):
+ """convert text-label into text-index.
+ input:
+ text: text labels of each image. [batch_size]
+
+ output:
+ text: concatenated text index for CTCLoss.
+ [sum(text_lengths)] = [text_index_0 + text_index_1 + ... + text_index_(n - 1)]
+ length: length of each text. [batch_size]
+ """
+ if self.character_type == "en":
+ text = text.lower()
+
+ text_list = []
+ for char in text:
+ if char not in self.dict:
+ continue
+ text_list.append(self.dict[char])
+ text = np.array(text_list)
+ return text
+
+ def decode(self, text_index, is_remove_duplicate=False):
+ """ convert text-index into text-label. """
+ char_list = []
+ char_num = self.get_char_num()
+
+ if self.loss_type == "attention":
+ beg_idx = self.get_beg_end_flag_idx("beg")
+ end_idx = self.get_beg_end_flag_idx("end")
+ ignored_tokens = [beg_idx, end_idx]
+ else:
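+ # for CTC, the extra class with index == len(self.character) is the blank label; skip it when decoding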
+ ignored_tokens = [char_num]
+
+ for idx in range(len(text_index)):
+ if text_index[idx] in ignored_tokens:
+ continue
+ if is_remove_duplicate:
+ if idx > 0 and text_index[idx - 1] == text_index[idx]:
+ continue
+ char_list.append(self.character[text_index[idx]])
+ text = ''.join(char_list)
+ return text
+
+ def get_char_num(self):
+ return len(self.character)
+
+ def get_beg_end_flag_idx(self, beg_or_end):
+ if self.loss_type == "attention":
+ if beg_or_end == "beg":
+ idx = np.array(self.dict[self.beg_str])
+ elif beg_or_end == "end":
+ idx = np.array(self.dict[self.end_str])
+ else:
+ assert False, "Unsupported type %s in get_beg_end_flag_idx"\
+ % beg_or_end
+ return idx
+ else:
+ err = "error in get_beg_end_flag_idx when using the loss %s"\
+ % (self.loss_type)
+ assert False, err
+
+
+class OCRReader(object):
+ def __init__(self):
+ args = self.parse_args()
+ image_shape = [int(v) for v in args.rec_image_shape.split(",")]
+ self.rec_image_shape = image_shape
+ self.character_type = args.rec_char_type
+ self.rec_batch_num = args.rec_batch_num
+ char_ops_params = {}
+ char_ops_params["character_type"] = args.rec_char_type
+ char_ops_params["character_dict_path"] = args.rec_char_dict_path
+ char_ops_params['loss_type'] = 'ctc'
+ self.char_ops = CharacterOps(char_ops_params)
+
+ def parse_args(self):
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--rec_algorithm", type=str, default='CRNN')
+ parser.add_argument("--rec_model_dir", type=str)
+ parser.add_argument("--rec_image_shape", type=str, default="3, 32, 320")
+ parser.add_argument("--rec_char_type", type=str, default='ch')
+ parser.add_argument("--rec_batch_num", type=int, default=1)
+ parser.add_argument(
+ "--rec_char_dict_path", type=str, default="./ppocr_keys_v1.txt")
+ return parser.parse_args()
+
+ def resize_norm_img(self, img, max_wh_ratio):
+ imgC, imgH, imgW = self.rec_image_shape
+ if self.character_type == "ch":
+ imgW = int(32 * max_wh_ratio)
+ h = img.shape[0]
+ w = img.shape[1]
+ ratio = w / float(h)
+ if math.ceil(imgH * ratio) > imgW:
+ resized_w = imgW
+ else:
+ resized_w = int(math.ceil(imgH * ratio))
+
+ seq = Sequential([
+ Resize(imgH, resized_w), Transpose((2, 0, 1)), Div(255),
+ Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5], True)
+ ])
+ resized_image = seq(img)
+ padding_im = np.zeros((imgC, imgH, imgW), dtype=np.float32)
+ padding_im[:, :, 0:resized_w] = resized_image
+
+ return padding_im
+
+ def preprocess(self, img_list):
+ img_num = len(img_list)
+ norm_img_batch = []
+ max_wh_ratio = 0
+ for ino in range(img_num):
+ h, w = img_list[ino].shape[0:2]
+ wh_ratio = w * 1.0 / h
+ max_wh_ratio = max(max_wh_ratio, wh_ratio)
+ for ino in range(img_num):
+ norm_img = self.resize_norm_img(img_list[ino], max_wh_ratio)
+ norm_img = norm_img[np.newaxis, :]
+ norm_img_batch.append(norm_img)
+ norm_img_batch = np.concatenate(norm_img_batch)
+ norm_img_batch = norm_img_batch.copy()
+
+ return norm_img_batch[0]
+
+ def postprocess(self, outputs):
+ rec_res = []
+ rec_idx_lod = outputs["ctc_greedy_decoder_0.tmp_0.lod"]
+ predict_lod = outputs["softmax_0.tmp_0.lod"]
+ rec_idx_batch = outputs["ctc_greedy_decoder_0.tmp_0"]
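+ # the lod arrays hold per-sample [begin, end) offsets into the batched outputs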
+ for rno in range(len(rec_idx_lod) - 1):
+ beg = rec_idx_lod[rno]
+ end = rec_idx_lod[rno + 1]
+ rec_idx_tmp = rec_idx_batch[beg:end, 0]
+ preds_text = self.char_ops.decode(rec_idx_tmp)
+ beg = predict_lod[rno]
+ end = predict_lod[rno + 1]
+ probs = outputs["softmax_0.tmp_0"][beg:end, :]
+ ind = np.argmax(probs, axis=1)
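+ # the last class is the CTC blank; average probabilities over non-blank steps only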
+ blank = probs.shape[1]
+ valid_ind = np.where(ind != (blank - 1))[0]
+ score = np.mean(probs[valid_ind, ind[valid_ind]])
+ rec_res.append([preds_text, score])
+ return rec_res
diff --git a/python/paddle_serving_client/__init__.py b/python/paddle_serving_client/__init__.py
index f2922f577b21d8acc3f8ec629f2473b5339ee725..f201eefc449b3aea11db6ae209d79fb6acb05173 100644
--- a/python/paddle_serving_client/__init__.py
+++ b/python/paddle_serving_client/__init__.py
@@ -189,7 +189,7 @@ class Client(object):
# create predictor here
if endpoints is None:
if self.predictor_sdk_ is None:
- raise SystemExit(
+ raise ValueError(
"You must set the endpoints parameter or use add_variant function to create a variant."
)
else:
@@ -215,7 +215,7 @@ class Client(object):
return
if isinstance(feed[key],
list) and len(feed[key]) != self.feed_tensor_len[key]:
- raise SystemExit("The shape of feed tensor {} not match.".format(
+ raise ValueError("The shape of feed tensor {} not match.".format(
key))
if type(feed[key]).__module__ == np.__name__ and np.size(feed[
key]) != self.feed_tensor_len[key]:
@@ -316,7 +316,7 @@ class Client(object):
int_feed_names, int_shape, fetch_names, result_batch_handle,
self.pid)
else:
- raise SystemExit(
+ raise ValueError(
"Please make sure the inputs are all in list type or all in numpy.array type"
)
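Because these paths now raise `ValueError` instead of `SystemExit`, callers can recover from mis-shaped input; a hedged sketch (config path and feed values are illustrative):

```python
from paddle_serving_client import Client

client = Client()
client.load_client_config("serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])
try:
    # a feed whose shape disagrees with the client config now raises
    # ValueError rather than terminating the process
    client.predict(feed={"x": [0.0]}, fetch=["price"])
except ValueError as e:
    print("bad request:", e)
```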
diff --git a/python/paddle_serving_server/web_service.py b/python/paddle_serving_server/web_service.py
index 7f37b10be05e84e29cf6cda3cd3cc3d939910027..b3fcc1b880fcbffa1da884e4b68350c1870997c1 100755
--- a/python/paddle_serving_server/web_service.py
+++ b/python/paddle_serving_server/web_service.py
@@ -86,7 +86,7 @@ class WebService(object):
for key in fetch_map:
fetch_map[key] = fetch_map[key].tolist()
fetch_map = self.postprocess(
- feed=feed, fetch=fetch, fetch_map=fetch_map)
+ feed=request.json["feed"], fetch=fetch, fetch_map=fetch_map)
result = {"result": fetch_map}
except ValueError:
result = {"result": "Request Value Error"}
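The fix passes the original request payload into `postprocess` instead of the preprocessed tensors; a hypothetical subclass showing what this makes possible:

```python
# hypothetical service: postprocess can now see the raw client input
from paddle_serving_server.web_service import WebService

class EchoService(WebService):
    def postprocess(self, feed=[], fetch=[], fetch_map=None):
        # feed is request.json["feed"], i.e. what the client actually sent
        return {"input": feed, "output": fetch_map}
```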
diff --git a/python/paddle_serving_server_gpu/__init__.py b/python/paddle_serving_server_gpu/__init__.py
index d4631141f8173b4ae0cb41d42c615566ac81ae7e..e40c0fa48763eaa66373e9f2149552c4f8693eb7 100644
--- a/python/paddle_serving_server_gpu/__init__.py
+++ b/python/paddle_serving_server_gpu/__init__.py
@@ -384,7 +384,7 @@ class Server(object):
finally:
os.remove(tar_name)
#release lock
- version_file.cloes()
+ version_file.close()
os.chdir(self.cur_path)
self.bin_path = self.server_path + "/serving"
diff --git a/python/paddle_serving_server_gpu/web_service.py b/python/paddle_serving_server_gpu/web_service.py
index 2328453268f6cefa9c5bddb818677cc3962ea7ea..76721de8a005dfb23fbe2427671446889aa72af1 100644
--- a/python/paddle_serving_server_gpu/web_service.py
+++ b/python/paddle_serving_server_gpu/web_service.py
@@ -131,7 +131,7 @@ class WebService(object):
for key in fetch_map:
fetch_map[key] = fetch_map[key].tolist()
result = self.postprocess(
- feed=feed, fetch=fetch, fetch_map=fetch_map)
+ feed=request.json["feed"], fetch=fetch, fetch_map=fetch_map)
result = {"result": result}
except ValueError:
result = {"result": "Request Value Error"}
diff --git a/tools/Dockerfile.gpu b/tools/Dockerfile.gpu
index a08bdf3daef103b5944df192fef967ebd9772b6c..bf05080ca72e90b2179f6a717f6f4e86e7aefe29 100644
--- a/tools/Dockerfile.gpu
+++ b/tools/Dockerfile.gpu
@@ -1,5 +1,6 @@
-FROM nvidia/cuda:9.0-cudnn7-runtime-centos7
+FROM nvidia/cuda:9.0-cudnn7-devel-centos7 as builder
+FROM nvidia/cuda:9.0-cudnn7-runtime-centos7
RUN yum -y install wget && \
yum -y install epel-release && yum -y install patchelf && \
yum -y install gcc make python-devel && \
@@ -13,4 +14,7 @@ RUN yum -y install wget && \
ln -s /usr/local/cuda-9.0/lib64/libcublas.so.9.0 /usr/local/cuda-9.0/lib64/libcublas.so && \
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> /root/.bashrc && \
ln -s /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so.7 /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so && \
- echo 'export LD_LIBRARY_PATH=/usr/local/cuda-9.0/targets/x86_64-linux/lib:$LD_LIBRARY_PATH' >> /root/.bashrc
+ echo 'export LD_LIBRARY_PATH=/usr/local/cuda-9.0/targets/x86_64-linux/lib:$LD_LIBRARY_PATH' >> /root/.bashrc && \
+ mkdir -p /usr/local/cuda/extras
+
+COPY --from=builder /usr/local/cuda/extras/CUPTI /usr/local/cuda/extras/CUPTI
diff --git a/tools/serving_build.sh b/tools/serving_build.sh
index 8e78e13ef8e86b55af6a90df1b9235611508c0ba..989e48ead9864e717e573f7f0800a1afba2e934a 100644
--- a/tools/serving_build.sh
+++ b/tools/serving_build.sh
@@ -1,5 +1,5 @@
#!/usr/bin/env bash
-
+set -x
function unsetproxy() {
HTTP_PROXY_TEMP=$http_proxy
HTTPS_PROXY_TEMP=$https_proxy
@@ -375,16 +375,17 @@ function python_test_multi_process(){
sh get_data.sh
case $TYPE in
CPU)
- check_cmd "python -m paddle_serving_server.serve --model uci_housing_model --port 9292 &"
- check_cmd "python -m paddle_serving_server.serve --model uci_housing_model --port 9293 &"
+ check_cmd "python -m paddle_serving_server.serve --model uci_housing_model --port 9292 --workdir test9292 &"
+ check_cmd "python -m paddle_serving_server.serve --model uci_housing_model --port 9293 --workdir test9293 &"
sleep 5
check_cmd "python test_multi_process_client.py"
kill_server_process
echo "bert mutli rpc RPC inference pass"
;;
GPU)
- check_cmd "python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9292 --gpu_ids 0 &"
- check_cmd "python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9293 --gpu_ids 0 &"
+ rm -rf ./image # TODO: the following code was expected to create this folder, but no corresponding code was found
+ check_cmd "python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9292 --workdir test9292 --gpu_ids 0 &"
+ check_cmd "python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9293 --workdir test9293 --gpu_ids 0 &"
sleep 5
check_cmd "python test_multi_process_client.py"
kill_server_process
@@ -454,15 +455,16 @@ function python_test_lac() {
cd lac # pwd: /Serving/python/examples/lac
case $TYPE in
CPU)
- sh get_data.sh
- check_cmd "python -m paddle_serving_server.serve --model jieba_server_model/ --port 9292 &"
+ python -m paddle_serving_app.package --get_model lac
+ tar -xzvf lac.tar.gz
+ check_cmd "python -m paddle_serving_server.serve --model lac_model/ --port 9292 &"
sleep 5
- check_cmd "echo \"我爱北京天安门\" | python lac_client.py jieba_client_conf/serving_client_conf.prototxt lac_dict/"
+ check_cmd "echo \"我爱北京天安门\" | python lac_client.py lac_client/serving_client_conf.prototxt "
echo "lac CPU RPC inference pass"
kill_server_process
unsetproxy # maybe the proxy is used on iPipe, which makes web-test failed.
- check_cmd "python lac_web_service.py jieba_server_model/ lac_workdir 9292 &"
+ check_cmd "python lac_web_service.py lac_model/ lac_workdir 9292 &"
sleep 5
check_cmd "curl -H \"Content-Type:application/json\" -X POST -d '{\"feed\":[{\"words\": \"我爱北京天安门\"}], \"fetch\":[\"word_seg\"]}' http://127.0.0.1:9292/lac/prediction"
# check http code