Commit 122d91b4 authored by Dong Daxiang, committed by GitHub

Merge pull request #358 from MRXLT/0.2.0-fix-client

bug fix for 0.2.0
...@@ -154,10 +154,87 @@ curl -H "Content-Type:application/json" -X POST -d '{"url": "https://paddle-serv
{"label":"daisy","prob":0.9341403245925903}
```
<h3 align="center">More Demos</h3>
| Key | Value |
| :----------------- | :----------------------------------------------------------- |
| Model Name | Bert-Base-Baike |
| URL | [https://paddle-serving.bj.bcebos.com/bert_example/bert_seq128.tar.gz](https://paddle-serving.bj.bcebos.com/bert_example%2Fbert_seq128.tar.gz) |
| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/bert |
| Description | Get semantic representation from a Chinese sentence |

| Key | Value |
| :----------------- | :----------------------------------------------------------- |
| Model Name | Resnet50-Imagenet |
| URL | [https://paddle-serving.bj.bcebos.com/imagenet-example/ResNet50_vd.tar.gz](https://paddle-serving.bj.bcebos.com/imagenet-example%2FResNet50_vd.tar.gz) |
| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imagenet |
| Description | Get semantic representation from an image |

| Key | Value |
| :----------------- | :----------------------------------------------------------- |
| Model Name | Resnet101-Imagenet |
| URL | https://paddle-serving.bj.bcebos.com/imagenet-example/ResNet101_vd.tar.gz |
| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imagenet |
| Description | Get semantic representation from an image |

| Key | Value |
| :----------------- | :----------------------------------------------------------- |
| Model Name | CNN-IMDB |
| URL | https://paddle-serving.bj.bcebos.com/imdb-demo/imdb_model.tar.gz |
| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imdb |
| Description | Get category probability from an English sentence |

| Key | Value |
| :----------------- | :----------------------------------------------------------- |
| Model Name | LSTM-IMDB |
| URL | https://paddle-serving.bj.bcebos.com/imdb-demo/imdb_model.tar.gz |
| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imdb |
| Description | Get category probability from an English sentence |

| Key | Value |
| :----------------- | :----------------------------------------------------------- |
| Model Name | BOW-IMDB |
| URL | https://paddle-serving.bj.bcebos.com/imdb-demo/imdb_model.tar.gz |
| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imdb |
| Description | Get category probability from an English sentence |

| Key | Value |
| :----------------- | :----------------------------------------------------------- |
| Model Name | Jieba-LAC |
| URL | https://paddle-serving.bj.bcebos.com/lac/lac_model.tar.gz |
| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/lac |
| Description | Get word segmentation from a Chinese sentence |

| Key | Value |
| :----------------- | :----------------------------------------------------------- |
| Model Name | DNN-CTR |
| URL | None (get the model via [local_train.py](./python/examples/criteo_ctr/local_train.py)) |
| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/criteo_ctr |
| Description | Get click probability from an item feature vector |

| Key | Value |
| :----------------- | :----------------------------------------------------------- |
| Model Name | DNN-CTR (with cube) |
| URL | None (get the model via [local_train.py](python/examples/criteo_ctr_with_cube/local_train.py)) |
| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/criteo_ctr_with_cube |
| Description | Get click probability from an item feature vector |
<h2 align="center">Document</h2>

### New to Paddle Serving
......
...@@ -147,7 +147,7 @@ tar -xzf uci_housing.tar.gz
Running on the Server side (inside the container):
```bash
python -m paddle_serving_server_gpu.serve --model uci_housing_model --thread 10 --port 9292 --name uci --gpu_ids 0
```
Running on the Client side (inside or outside the container):
...@@ -161,7 +161,7 @@ tar -xzf uci_housing.tar.gz
Running on the Server side (inside the container):
```bash
python -m paddle_serving_server_gpu.serve --model uci_housing_model --thread 10 --port 9292 --gpu_ids 0
```
Run the following Python code on the Client side (inside or outside the container; the `paddle-serving-client` package needs to be installed):
......
...@@ -145,7 +145,7 @@ tar -xzf uci_housing.tar.gz
Running on the Server side (inside the container):
```bash
python -m paddle_serving_server_gpu.serve --model uci_housing_model --thread 10 --port 9292 --name uci --gpu_ids 0
```
Running on the Client side (inside or outside the container):
...@@ -159,7 +159,7 @@ tar -xzf uci_housing.tar.gz
Running on the Server side (inside the container):
```bash
python -m paddle_serving_server_gpu.serve --model uci_housing_model --thread 10 --port 9292 --gpu_ids 0
```
Run the following Python code on the Client side (inside or outside the container; the `paddle-serving-client` package needs to be installed):
......
...@@ -21,6 +21,10 @@ import time
import criteo_reader as criteo
from paddle_serving_client.metric import auc
import sys

py_version = sys.version_info[0]

client = Client()
client.load_client_config(sys.argv[1])
client.connect(["127.0.0.1:9292"])
...@@ -39,7 +43,10 @@ label_list = []
prob_list = []
start = time.time()
for ei in range(1000):
    if py_version == 2:
        data = reader().next()
    else:
        data = reader().__next__()
    feed_dict = {}
    for i in range(1, 27):
        feed_dict["sparse_{}".format(i - 1)] = data[0][i]
......
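As an aside, the Python 2/3 branch in this hunk can also be avoided entirely: the builtin `next()` dispatches to `.next()` on Python 2 and `.__next__()` on Python 3. A minimal, self-contained sketch (the reader and names here are illustrative stand-ins, not code from the repo):

```python
def first_batch(reader_fn):
    # The builtin next() works on both Python 2 and 3 iterators,
    # so no interpreter-version check is needed.
    return next(iter(reader_fn()))

def fake_reader():
    # Stand-in for the criteo reader: yields (features, label) pairs.
    yield ([0.1, 0.2], [1])
    yield ([0.3, 0.4], [0])

data = first_batch(fake_reader)
print(data)  # -> ([0.1, 0.2], [1])
```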
...@@ -17,10 +17,16 @@ import base64
import json
import time
import os
import sys

py_version = sys.version_info[0]


def predict(image_path, server):
    if py_version == 2:
        image = base64.b64encode(open(image_path).read())
    else:
        image = base64.b64encode(open(image_path, "rb").read()).decode("utf-8")
    req = json.dumps({"image": image, "fetch": ["score"]})
    r = requests.post(
        server, data=req, headers={"Content-Type": "application/json"})
...@@ -28,18 +34,8 @@ def predict(image_path, server):
    return r
def batch_predict(image_path, server):
    image = base64.b64encode(open(image_path).read())
    req = json.dumps({"image": [image, image], "fetch": ["score"]})
    r = requests.post(
        server, data=req, headers={"Content-Type": "application/json"})
    print(r.json()["result"][1]["score"][0])
    return r
if __name__ == "__main__":
    server = "http://127.0.0.1:9393/image/prediction"
    #image_path = "./data/n01440764_10026.JPEG"
    image_list = os.listdir("./image_data/n01440764/")
    start = time.time()
    for img in image_list:
......
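The `predict` change above is the standard Python 3 pattern: binary files must be opened with `"rb"`, and the `bytes` returned by `base64.b64encode` must be decoded to `str` before JSON serialization. A self-contained sketch (the fake payload is illustrative):

```python
import base64
import json

# What open(path, "rb").read() would return for a real image file.
raw = b"\x89PNG fake image bytes"

# b64encode returns bytes on Python 3; decode to an ASCII str so that
# json.dumps can serialize it.
image = base64.b64encode(raw).decode("utf-8")
req = json.dumps({"image": image, "fetch": ["score"]})
assert isinstance(req, str)
```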
...@@ -19,16 +19,15 @@ import time
client = Client()
client.load_client_config(sys.argv[1])
client.connect(["127.0.0.1:9393"])

reader = ImageReader()
start = time.time()
for i in range(1000):
    with open("./data/n01440764_10026.JPEG", "rb") as f:
        img = f.read()
    img = reader.process_image(img).reshape(-1)
    fetch_map = client.predict(feed={"image": img}, fetch=["score"])
end = time.time()
print(end - start)
......
...@@ -19,15 +19,23 @@ import paddle
import re
import paddle.fluid.incubate.data_generator as dg

py_version = sys.version_info[0]


class IMDBDataset(dg.MultiSlotDataGenerator):
    def load_resource(self, dictfile):
        self._vocab = {}
        wid = 0
        if py_version == 2:
            with open(dictfile) as f:
                for line in f:
                    self._vocab[line.strip()] = wid
                    wid += 1
        else:
            with open(dictfile, encoding="utf-8") as f:
                for line in f:
                    self._vocab[line.strip()] = wid
                    wid += 1
        self._unk_id = len(self._vocab)
        self._pattern = re.compile(r'(;|,|\.|\?|!|\s|\(|\))')
        self.return_value = ("words", [1, 2, 3, 4, 5, 6]), ("label", [0])
......
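For context, the pattern compiled in `load_resource` splits a review on punctuation and whitespace before vocabulary lookup. A small sketch of that flow (the toy vocabulary and the lowercasing step are assumptions for illustration, not taken from the repo):

```python
import re

# Same splitting pattern as in load_resource above.
pattern = re.compile(r'(;|,|\.|\?|!|\s|\(|\))')

vocab = {"i": 0, "love": 1, "it": 2}  # toy stand-in for the dict file
unk_id = len(vocab)                   # mirrors self._unk_id

# Capturing groups keep the delimiters; drop pure-whitespace pieces.
tokens = [t for t in pattern.split("I love it!") if t.strip()]
ids = [vocab.get(t.lower(), unk_id) for t in tokens]
print(tokens, ids)  # -> ['I', 'love', 'it', '!'] [0, 1, 2, 3]
```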
...@@ -11,5 +11,5 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .auc import auc
from .acc import acc
...@@ -71,7 +71,7 @@ def start_multi_card(args):  # pylint: disable=doc-string-missing
    else:
        gpus = args.gpu_ids.split(",")
        if len(gpus) <= 0:
            start_gpu_card_model(-1, 0, args)
        else:
            gpu_processes = []
            for i, gpu_id in enumerate(gpus):
......
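The `start_multi_card` fix above passes a worker index alongside the card id. A hedged sketch of that dispatch logic (the helper name and tuple shape are illustrative, not the repo's API):

```python
def plan_workers(gpu_ids):
    # "" or None -> one default worker on card -1 (the non-GPU fallback);
    # "0,2"      -> one worker per listed GPU, paired with its index.
    if not gpu_ids:
        return [(-1, 0)]
    gpus = gpu_ids.split(",")
    return [(int(g), i) for i, g in enumerate(gpus)]

print(plan_workers(""))     # -> [(-1, 0)]
print(plan_workers("0,2"))  # -> [(0, 0), (2, 1)]
```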
...@@ -18,6 +18,7 @@ from __future__ import print_function
import platform
import os
import sys

from setuptools import setup, Distribution, Extension
from setuptools import find_packages
...@@ -25,6 +26,7 @@ from setuptools import setup
from paddle_serving_client.version import serving_client_version
from pkg_resources import DistributionNotFound, get_distribution

py_version = sys.version_info[0]


def python_version():
    return [int(v) for v in platform.python_version().split(".")]
...@@ -37,8 +39,9 @@ def find_package(pkgname):
    return False


def copy_lib():
    lib_list = ['libpython2.7.so.1.0', 'libssl.so.10', 'libcrypto.so.10'] if py_version == 2 else ['libpython3.6m.so.1.0', 'libssl.so.10', 'libcrypto.so.10']
    os.popen('mkdir -p paddle_serving_client/lib')
    for lib in lib_list:
        r = os.popen('whereis {}'.format(lib))
        text = r.read()
        os.popen('cp {} ./paddle_serving_client/lib'.format(text.strip().split(' ')[1]))
......