diff --git a/README.md b/README.md
index 2fdc83db96d1b418aabddabe35f9709e2c72f810..08e8d85a7baf1896c70ebc1999f900bd6747a895 100644
--- a/README.md
+++ b/README.md
@@ -1,9 +1,12 @@
+([简体中文](./README_CN.md)|English)
+
+
@@ -23,14 +26,6 @@ We consider deploying deep learning inference service online to be a user-facing
-Some Key Features
-
-- Integrate with Paddle training pipeline seamlessly, most paddle models can be deployed **with one line command**.
-- **Industrial serving features** supported, such as models management, online loading, online A/B testing etc.
-- **Distributed Key-Value indexing** supported which is especially useful for large scale sparse features as model inputs.
-- **Highly concurrent and efficient communication** between clients and servers supported.
-- **Multiple programming languages** supported on client side, such as Golang, C++ and python.
-- **Extensible framework design** which can support model serving beyond Paddle.
Installation
@@ -60,8 +55,40 @@ If you need install modules compiled with develop branch, please download packag
The client package supports CentOS 7 and Ubuntu 18, or you can use the HTTP service without installing the client.
+
+ Pre-built services with Paddle Serving
+
+Chinese Word Segmentation
+
+``` shell
+> python -m paddle_serving_app.package --get_model lac
+> tar -xzf lac.tar.gz
+> python lac_web_service.py 9292 &
+> curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"words": "我爱北京天安门"}], "fetch":["word_seg"]}' http://127.0.0.1:9292/lac/prediction
+{"result":[{"word_seg":"我|爱|北京|天安门"}]}
+```
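The request and response bodies above are plain JSON, so any HTTP client can be used in place of `curl`. A minimal sketch of assembling the same payload and unpacking the sample result (the payload mirrors the example above; no running service is assumed):

```python
import json

# Build the same request body the curl command sends.
payload = json.dumps(
    {"feed": [{"words": "我爱北京天安门"}], "fetch": ["word_seg"]},
    ensure_ascii=False)

# Parse the sample response shown above and split the segmentation result
# on the "|" separator used by the LAC service.
response = json.loads('{"result":[{"word_seg":"我|爱|北京|天安门"}]}')
tokens = response["result"][0]["word_seg"].split("|")
print(tokens)  # ['我', '爱', '北京', '天安门']
```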
+
+Image Classification
+
+
+``` shell
+> python -m paddle_serving_app.package --get_model resnet_v2_50_imagenet
+> tar -xzf resnet_v2_50_imagenet.tar.gz
+> python resnet50_imagenet_classify.py resnet50_serving_model &
+> curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"image": "https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg"}], "fetch": ["score"]}' http://127.0.0.1:9292/image/prediction
+{"result":{"label":["daisy"],"prob":[0.9341403245925903]}}
+```
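The classification service returns `label` and `prob` as parallel lists. A small sketch of picking the top prediction from the sample response above (standalone; no running service is assumed):

```python
import json

# Sample response from the ResNet50 service shown above.
raw = '{"result":{"label":["daisy"],"prob":[0.9341403245925903]}}'
result = json.loads(raw)["result"]

# Pair each label with its probability and keep the most likely one.
best_label, best_prob = max(
    zip(result["label"], result["prob"]), key=lambda pair: pair[1])
print(best_label)  # daisy
```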
+
+
Quick Start Example
+This quick start example is for users who already have a model to deploy; we provide a ready-to-deploy model here. If you want to learn the full pipeline from offline training to online serving, please refer to [Train_To_Service](https://github.com/PaddlePaddle/Serving/blob/develop/doc/TRAIN_TO_SERVICE.md).
+
### Boston House Price Prediction model
``` shell
wget --no-check-certificate https://paddle-serving.bj.bcebos.com/uci_housing.tar.gz
@@ -117,138 +144,14 @@ print(fetch_map)
```
Here, the `client.predict` function takes two arguments: `feed` is a Python `dict` mapping model input variable alias names to values, and `fetch` lists the prediction variables to be returned by the server. In this example, the names `"x"` and `"price"` were assigned when the servable model was saved during training.
- Pre-built services with Paddle Serving
-
-Chinese Word Segmentation
-
-- **Description**:
-``` shell
-Chinese word segmentation HTTP service that can be deployed with one line command.
-```
-
-- **Download Servable Package**:
-``` shell
-wget --no-check-certificate https://paddle-serving.bj.bcebos.com/lac/lac_model_jieba_web.tar.gz
-```
-- **Host web service**:
-``` shell
-tar -xzf lac_model_jieba_web.tar.gz
-python lac_web_service.py jieba_server_model/ lac_workdir 9292
-```
-- **Request sample**:
-``` shell
-curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"words": "我爱北京天安门"}], "fetch":["word_seg"]}' http://127.0.0.1:9292/lac/prediction
-```
-- **Request result**:
-``` shell
-{"word_seg":"我|爱|北京|天安门"}
-```
-
-Image Classification
-
-- **Description**:
-``` shell
-Image classification trained with Imagenet dataset. A label and corresponding probability will be returned.
-Note: This demo needs paddle-serving-server-gpu.
-```
-
-- **Download Servable Package**:
-``` shell
-wget --no-check-certificate https://paddle-serving.bj.bcebos.com/imagenet-example/imagenet_demo.tar.gz
-```
-- **Host web service**:
-``` shell
-tar -xzf imagenet_demo.tar.gz
-python image_classification_service_demo.py resnet50_serving_model
-```
-- **Request sample**:
-
-
-
-
-
-
-
-``` shell
-curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"url": "https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg"}], "fetch": ["score"]}' http://127.0.0.1:9292/image/prediction
-```
-- **Request result**:
-``` shell
-{"label":"daisy","prob":0.9341403245925903}
-```
-
-
More Demos
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| Model Name | Bert-Base-Baike |
-| URL | [https://paddle-serving.bj.bcebos.com/bert_example/bert_seq128.tar.gz](https://paddle-serving.bj.bcebos.com/bert_example%2Fbert_seq128.tar.gz) |
-| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/bert |
-| Description | Get semantic representation from a Chinese Sentence |
-
-
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| Model Name | Resnet50-Imagenet |
-| URL | [https://paddle-serving.bj.bcebos.com/imagenet-example/ResNet50_vd.tar.gz](https://paddle-serving.bj.bcebos.com/imagenet-example%2FResNet50_vd.tar.gz) |
-| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imagenet |
-| Description | Get image semantic representation from an image |
-
-
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| Model Name | Resnet101-Imagenet |
-| URL | https://paddle-serving.bj.bcebos.com/imagenet-example/ResNet101_vd.tar.gz |
-| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imagenet |
-| Description | Get image semantic representation from an image |
-
-
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| Model Name | CNN-IMDB |
-| URL | https://paddle-serving.bj.bcebos.com/imdb-demo/imdb_model.tar.gz |
-| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imdb |
-| Description | Get category probability from an English Sentence |
-
-
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| Model Name | LSTM-IMDB |
-| URL | https://paddle-serving.bj.bcebos.com/imdb-demo/imdb_model.tar.gz |
-| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imdb |
-| Description | Get category probability from an English Sentence |
-
-
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| Model Name | BOW-IMDB |
-| URL | https://paddle-serving.bj.bcebos.com/imdb-demo/imdb_model.tar.gz |
-| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imdb |
-| Description | Get category probability from an English Sentence |
-
-
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| Model Name | Jieba-LAC |
-| URL | https://paddle-serving.bj.bcebos.com/lac/lac_model.tar.gz |
-| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/lac |
-| Description | Get word segmentation from a Chinese Sentence |
-
-
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| Model Name | DNN-CTR |
-| URL | https://paddle-serving.bj.bcebos.com/criteo_ctr_example/criteo_ctr_demo_model.tar.gz |
-| Client/Server Code | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/criteo_ctr |
-| Description | Get click probability from a feature vector of item |
+Some Key Features of Paddle Serving
+- Integrates with the Paddle training pipeline seamlessly; most Paddle models can be deployed **with a single command**.
+- **Industrial serving features** supported, such as model management, online loading, and online A/B testing.
+- **Distributed key-value indexing** supported, which is especially useful for large-scale sparse features as model inputs.
+- **Highly concurrent and efficient communication** between clients and servers.
+- **Multiple programming languages** supported on the client side, such as Golang, C++, and Python.
+- **Extensible framework design** that can support model serving beyond Paddle.
Document
diff --git a/README_CN.md b/README_CN.md
index 547a50e7a430f3afb3a69eae211ed87cb248a268..8b091bdc9906007a4683b50184d08cd960483730 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -1,9 +1,12 @@
+(简体中文|[English](./README.md))
+
+
@@ -24,14 +27,7 @@ Paddle Serving 旨在帮助深度学习开发者轻易部署在线预测服务
-核心功能
-- 与Paddle训练紧密连接,绝大部分Paddle模型可以 **一键部署**.
-- 支持 **工业级的服务能力** 例如模型管理,在线加载,在线A/B测试等.
-- 支持 **分布式键值对索引** 助力于大规模稀疏特征作为模型输入.
-- 支持客户端和服务端之间 **高并发和高效通信**.
-- 支持 **多种编程语言** 开发客户端,例如Golang,C++和Python.
-- **可伸缩框架设计** 可支持不限于Paddle的模型服务.
安装
@@ -61,7 +57,38 @@ pip install paddle-serving-server-gpu # GPU
客户端安装包支持Centos 7和Ubuntu 18,或者您可以使用HTTP服务,这种情况下不需要安装客户端。
-快速启动示例
+ Paddle Serving预装的服务
+
+中文分词
+
+``` shell
+> python -m paddle_serving_app.package --get_model lac
+> tar -xzf lac.tar.gz
+> python lac_web_service.py 9292 &
+> curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"words": "我爱北京天安门"}], "fetch":["word_seg"]}' http://127.0.0.1:9292/lac/prediction
+{"result":[{"word_seg":"我|爱|北京|天安门"}]}
+```
+
+图像分类
+
+
+``` shell
+> python -m paddle_serving_app.package --get_model resnet_v2_50_imagenet
+> tar -xzf resnet_v2_50_imagenet.tar.gz
+> python resnet50_imagenet_classify.py resnet50_serving_model &
+> curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"image": "https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg"}], "fetch": ["score"]}' http://127.0.0.1:9292/image/prediction
+{"result":{"label":["daisy"],"prob":[0.9341403245925903]}}
+```
+
+
+
快速开始示例
+
+这个快速开始示例主要是为了给那些已经有一个要部署的模型的用户准备的,而且我们也提供了一个可以用来部署的模型。如果您想知道如何从离线训练到在线服务走完全流程,请参考[从训练到部署](https://github.com/PaddlePaddle/Serving/blob/develop/doc/TRAIN_TO_SERVICE_CN.md)
波士顿房价预测
@@ -122,139 +149,14 @@ print(fetch_map)
```
在这里,`client.predict`函数具有两个参数。 `feed`是带有模型输入变量别名和值的`python dict`。 `fetch`被要从服务器返回的预测变量赋值。 在该示例中,在训练过程中保存可服务模型时,被赋值的tensor名为`"x"`和`"price"`。
-Paddle Serving预装的服务
-
-中文分词模型
-
-- **介绍**:
-``` shell
-本示例为中文分词HTTP服务一键部署
-```
-
-- **下载服务包**:
-``` shell
-wget --no-check-certificate https://paddle-serving.bj.bcebos.com/lac/lac_model_jieba_web.tar.gz
-```
-- **启动web服务**:
-``` shell
-tar -xzf lac_model_jieba_web.tar.gz
-python lac_web_service.py jieba_server_model/ lac_workdir 9292
-```
-- **客户端请求示例**:
-``` shell
-curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"words": "我爱北京天安门"}], "fetch":["word_seg"]}' http://127.0.0.1:9292/lac/prediction
-```
-- **返回结果示例**:
-``` shell
-{"word_seg":"我|爱|北京|天安门"}
-```
-
-图像分类模型
-
-- **介绍**:
-``` shell
-图像分类模型由Imagenet数据集训练而成,该服务会返回一个标签及其概率
-注意:本示例需要安装paddle-serving-server-gpu
-```
-
-- **下载服务包**:
-``` shell
-wget --no-check-certificate https://paddle-serving.bj.bcebos.com/imagenet-example/imagenet_demo.tar.gz
-```
-- **启动web服务**:
-``` shell
-tar -xzf imagenet_demo.tar.gz
-python image_classification_service_demo.py resnet50_serving_model
-```
-- **客户端请求示例**:
-
-
-
-
-
-
-
-``` shell
-curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"url": "https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg"}], "fetch": ["score"]}' http://127.0.0.1:9292/image/prediction
-```
-- **返回结果示例**:
-``` shell
-{"label":"daisy","prob":0.9341403245925903}
-```
-
-
更多示例
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| 模型名 | Bert-Base-Baike |
-| 下载链接 | [https://paddle-serving.bj.bcebos.com/bert_example/bert_seq128.tar.gz](https://paddle-serving.bj.bcebos.com/bert_example%2Fbert_seq128.tar.gz) |
-| 客户端/服务端代码 | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/bert |
-| 介绍 | 获得一个中文语句的语义表示 |
-
-
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| 模型名 | Resnet50-Imagenet |
-| 下载链接 | [https://paddle-serving.bj.bcebos.com/imagenet-example/ResNet50_vd.tar.gz](https://paddle-serving.bj.bcebos.com/imagenet-example%2FResNet50_vd.tar.gz) |
-| 客户端/服务端代码 | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imagenet |
-| 介绍 | 获得一张图片的图像语义表示 |
-
-
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| 模型名 | Resnet101-Imagenet |
-| 下载链接 | https://paddle-serving.bj.bcebos.com/imagenet-example/ResNet101_vd.tar.gz |
-| 客户端/服务端代码 | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imagenet |
-| 介绍 | 获得一张图片的图像语义表示 |
-
-
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| 模型名 | CNN-IMDB |
-| 下载链接 | https://paddle-serving.bj.bcebos.com/imdb-demo/imdb_model.tar.gz |
-| 客户端/服务端代码 | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imdb |
-| 介绍 | 从一个中文语句获得类别及其概率 |
-
-
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| 模型名 | LSTM-IMDB |
-| 下载链接 | https://paddle-serving.bj.bcebos.com/imdb-demo/imdb_model.tar.gz |
-| 客户端/服务端代码 | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imdb |
-| 介绍 | 从一个英文语句获得类别及其概率 |
-
-
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| 模型名 | BOW-IMDB |
-| 下载链接 | https://paddle-serving.bj.bcebos.com/imdb-demo/imdb_model.tar.gz |
-| 客户端/服务端代码 | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imdb |
-| 介绍 | 从一个英文语句获得类别及其概率 |
-
-
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| 模型名 | Jieba-LAC |
-| 下载链接 | https://paddle-serving.bj.bcebos.com/lac/lac_model.tar.gz |
-| 客户端/服务端代码 | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/lac |
-| 介绍 | 获取中文语句的分词 |
-
-
-
-| Key | Value |
-| :----------------- | :----------------------------------------------------------- |
-| 模型名 | DNN-CTR |
-| 下载链接 | https://paddle-serving.bj.bcebos.com/criteo_ctr_example/criteo_ctr_demo_model.tar.gz |
-| 客户端/服务端代码 | https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/criteo_ctr |
-| 介绍 | 从项目的特征向量中获得点击概率 |
-
+Paddle Serving的核心功能
+- 与Paddle训练紧密连接,绝大部分Paddle模型可以 **一键部署**.
+- 支持 **工业级的服务能力** 例如模型管理,在线加载,在线A/B测试等.
+- 支持 **分布式键值对索引** 助力于大规模稀疏特征作为模型输入.
+- 支持客户端和服务端之间 **高并发和高效通信**.
+- 支持 **多种编程语言** 开发客户端,例如Golang,C++和Python.
+- **可伸缩框架设计** 可支持不限于Paddle的模型服务.
文档
diff --git a/python/examples/bert/bert_web_service.py b/python/examples/bert/bert_web_service.py
index d72150878c51d4f95bbc5d2263ad00fb1ed2c387..b1898b2cc0ee690dd075958944a56fed27dce29a 100644
--- a/python/examples/bert/bert_web_service.py
+++ b/python/examples/bert/bert_web_service.py
@@ -21,7 +21,10 @@ import os
class BertService(WebService):
def load(self):
- self.reader = ChineseBertReader(vocab_file="vocab.txt", max_seq_len=128)
+ self.reader = ChineseBertReader({
+ "vocab_file": "vocab.txt",
+ "max_seq_len": 128
+ })
def preprocess(self, feed=[], fetch=[]):
feed_res = [
diff --git a/python/examples/deeplabv3/README.md b/python/examples/deeplabv3/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..3eb5c84e2d5be7c7a1448940c758e60d77bd56e6
--- /dev/null
+++ b/python/examples/deeplabv3/README.md
@@ -0,0 +1,22 @@
+# Image Segmentation
+
+## Get Model
+
+```
+python -m paddle_serving_app.package --get_model deeplabv3
+tar -xzvf deeplabv3.tar.gz
+```
+
+## RPC Service
+
+### Start Service
+
+```
+python -m paddle_serving_server_gpu.serve --model deeplabv3_server --gpu_ids 0 --port 9494
+```
+
+### Client Prediction
+
+```
+python deeplabv3_client.py
+```
diff --git a/python/examples/deeplabv3/README_CN.md b/python/examples/deeplabv3/README_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..a25bb2d059df49568056664493c1c96b999005b2
--- /dev/null
+++ b/python/examples/deeplabv3/README_CN.md
@@ -0,0 +1,21 @@
+# 图像分割
+
+## 获取模型
+
+```
+python -m paddle_serving_app.package --get_model deeplabv3
+tar -xzvf deeplabv3.tar.gz
+```
+
+## RPC 服务
+
+### 启动服务端
+
+```
+python -m paddle_serving_server_gpu.serve --model deeplabv3_server --gpu_ids 0 --port 9494
+```
+
+### 客户端预测
+
+```
+python deeplabv3_client.py
diff --git a/python/examples/deeplabv3/deeplabv3_client.py b/python/examples/deeplabv3/deeplabv3_client.py
index 75ea6b0a01868af30c94fb0686159571c2c1c966..77e25d5f5a24d0aa1dad8939c1e7845eaf5e4122 100644
--- a/python/examples/deeplabv3/deeplabv3_client.py
+++ b/python/examples/deeplabv3/deeplabv3_client.py
@@ -18,7 +18,7 @@ import sys
import cv2
client = Client()
-client.load_client_config("seg_client/serving_client_conf.prototxt")
+client.load_client_config("deeplabv3_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9494"])
preprocess = Sequential(
diff --git a/python/examples/faster_rcnn_model/README.md b/python/examples/faster_rcnn_model/README.md
index c1d3d40b054fb362bd20c59a9a7fc4d09e89f31b..e31f734e2b8f04ee4cd35258f9da81672b2caf88 100644
--- a/python/examples/faster_rcnn_model/README.md
+++ b/python/examples/faster_rcnn_model/README.md
@@ -12,8 +12,8 @@ If you want to have more detection models, please refer to [Paddle Detection Mod
### Start the service
```
tar xf faster_rcnn_model.tar.gz
-mv faster_rcnn_model/pddet *.
-GLOG_v=2 python -m paddle_serving_server_gpu.serve --model pddet_serving_model --port 9494 --gpu_id 0
+mv faster_rcnn_model/pddet* .
+GLOG_v=2 python -m paddle_serving_server_gpu.serve --model pddet_serving_model --port 9494 --gpu_ids 0
```
### Perform prediction
diff --git a/python/examples/faster_rcnn_model/README_CN.md b/python/examples/faster_rcnn_model/README_CN.md
index a2c3618f071a3650d50c791595bc04ba0c1d378a..3ddccf9e63043e797c9e261c1f26ebe774adb81c 100644
--- a/python/examples/faster_rcnn_model/README_CN.md
+++ b/python/examples/faster_rcnn_model/README_CN.md
@@ -13,7 +13,7 @@ wget https://paddle-serving.bj.bcebos.com/pddet_demo/infer_cfg.yml
```
tar xf faster_rcnn_model.tar.gz
mv faster_rcnn_model/pddet* ./
-GLOG_v=2 python -m paddle_serving_server_gpu.serve --model pddet_serving_model --port 9494 --gpu_id 0
+GLOG_v=2 python -m paddle_serving_server_gpu.serve --model pddet_serving_model --port 9494 --gpu_ids 0
```
### 执行预测
diff --git a/python/examples/imagenet/README_CN.md b/python/examples/imagenet/README_CN.md
index 77ade579ba17ad8247b2f118242642a1d3c79927..081cff528c393ecb5534ec679d6e63739f720f20 100644
--- a/python/examples/imagenet/README_CN.md
+++ b/python/examples/imagenet/README_CN.md
@@ -19,10 +19,10 @@ pip install paddle_serving_app
启动server端
```
-python image_classification_service.py ResNet50_vd_model cpu 9696 #cpu预测服务
+python resnet50_web_service.py ResNet50_vd_model cpu 9696 #cpu预测服务
```
```
-python image_classification_service.py ResNet50_vd_model gpu 9696 #gpu预测服务
+python resnet50_web_service.py ResNet50_vd_model gpu 9696 #gpu预测服务
```
diff --git a/python/examples/mobilenet/README.md b/python/examples/mobilenet/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..496ebdbe2e244af8091cb28cdcdecf7627088ba3
--- /dev/null
+++ b/python/examples/mobilenet/README.md
@@ -0,0 +1,22 @@
+# Image Classification
+
+## Get Model
+
+```
+python -m paddle_serving_app.package --get_model mobilenet_v2_imagenet
+tar -xzvf mobilenet_v2_imagenet.tar.gz
+```
+
+## RPC Service
+
+### Start Service
+
+```
+python -m paddle_serving_server_gpu.serve --model mobilenet_v2_imagenet_model --gpu_ids 0 --port 9393
+```
+
+### Client Prediction
+
+```
+python mobilenet_tutorial.py
+```
diff --git a/python/examples/mobilenet/README_CN.md b/python/examples/mobilenet/README_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..7c721b4bd161fbf7c400f1a73ddb7be69c449871
--- /dev/null
+++ b/python/examples/mobilenet/README_CN.md
@@ -0,0 +1,22 @@
+# 图像分类
+
+## 获取模型
+
+```
+python -m paddle_serving_app.package --get_model mobilenet_v2_imagenet
+tar -xzvf mobilenet_v2_imagenet.tar.gz
+```
+
+## RPC 服务
+
+### 启动服务端
+
+```
+python -m paddle_serving_server_gpu.serve --model mobilenet_v2_imagenet_model --gpu_ids 0 --port 9393
+```
+
+### 客户端预测
+
+```
+python mobilenet_tutorial.py
+```
diff --git a/python/examples/resnet_v2_50/README.md b/python/examples/resnet_v2_50/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..fd86074c73177a06cd59ebb3bd0c28c7f22e95f2
--- /dev/null
+++ b/python/examples/resnet_v2_50/README.md
@@ -0,0 +1,22 @@
+# Image Classification
+
+## Get Model
+
+```
+python -m paddle_serving_app.package --get_model resnet_v2_50_imagenet
+tar -xzvf resnet_v2_50_imagenet.tar.gz
+```
+
+## RPC Service
+
+### Start Service
+
+```
+python -m paddle_serving_server_gpu.serve --model resnet_v2_50_imagenet_model --gpu_ids 0 --port 9393
+```
+
+### Client Prediction
+
+```
+python resnet50_v2_tutorial.py
+```
diff --git a/python/examples/resnet_v2_50/README_CN.md b/python/examples/resnet_v2_50/README_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..bda2916eb43d55d718af1095c21869e00fb27093
--- /dev/null
+++ b/python/examples/resnet_v2_50/README_CN.md
@@ -0,0 +1,22 @@
+# 图像分类
+
+## 获取模型
+
+```
+python -m paddle_serving_app.package --get_model resnet_v2_50_imagenet
+tar -xzvf resnet_v2_50_imagenet.tar.gz
+```
+
+## RPC 服务
+
+### 启动服务端
+
+```
+python -m paddle_serving_server_gpu.serve --model resnet_v2_50_imagenet_model --gpu_ids 0 --port 9393
+```
+
+### 客户端预测
+
+```
+python resnet50_v2_tutorial.py
+```
diff --git a/python/examples/unet_for_image_seg/README.md b/python/examples/unet_for_image_seg/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..7936ad43cbc3b53719babdf6f91ea46e74a827da
--- /dev/null
+++ b/python/examples/unet_for_image_seg/README.md
@@ -0,0 +1,22 @@
+# Image Segmentation
+
+## Get Model
+
+```
+python -m paddle_serving_app.package --get_model unet
+tar -xzvf unet.tar.gz
+```
+
+## RPC Service
+
+### Start Service
+
+```
+python -m paddle_serving_server_gpu.serve --model unet_model --gpu_ids 0 --port 9494
+```
+
+### Client Prediction
+
+```
+python seg_client.py
+```
diff --git a/python/examples/unet_for_image_seg/README_CN.md b/python/examples/unet_for_image_seg/README_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..f4b91aaff5697ff8ea3901e0a8084152f6007ff4
--- /dev/null
+++ b/python/examples/unet_for_image_seg/README_CN.md
@@ -0,0 +1,22 @@
+# 图像分割
+
+## 获取模型
+
+```
+python -m paddle_serving_app.package --get_model unet
+tar -xzvf unet.tar.gz
+```
+
+## RPC 服务
+
+### 启动服务端
+
+```
+python -m paddle_serving_server_gpu.serve --model unet_model --gpu_ids 0 --port 9494
+```
+
+### 客户端预测
+
+```
+python seg_client.py
+```
diff --git a/python/paddle_serving_app/README.md b/python/paddle_serving_app/README.md
index d45bf0c43835563216c0f74e3e2b42562bafd212..6757407939c150ca14a22427a488f41a24feb7ac 100644
--- a/python/paddle_serving_app/README.md
+++ b/python/paddle_serving_app/README.md
@@ -12,7 +12,7 @@ pip install paddle_serving_app
## Get model list
```shell
-python -m paddle_serving_app.package --model_list
+python -m paddle_serving_app.package --list_model
```
## Download pre-training model
diff --git a/python/paddle_serving_app/README_CN.md b/python/paddle_serving_app/README_CN.md
index 015d64a5d5d5490d4315dd80b983933b950749ed..d29c3fd9fff3ba2ab34ec67b6fd15ad10e3cfd07 100644
--- a/python/paddle_serving_app/README_CN.md
+++ b/python/paddle_serving_app/README_CN.md
@@ -11,7 +11,7 @@ pip install paddle_serving_app
## 获取模型列表
```shell
-python -m paddle_serving_app.package --model_list
+python -m paddle_serving_app.package --list_model
```
## 下载预训练模型
diff --git a/python/paddle_serving_app/reader/image_reader.py b/python/paddle_serving_app/reader/image_reader.py
index 7988bf447b5a0a075171d93d22dd1933aa8532b8..7f4a795513447d74e7f02d7741344ccae81c7c9d 100644
--- a/python/paddle_serving_app/reader/image_reader.py
+++ b/python/paddle_serving_app/reader/image_reader.py
@@ -296,7 +296,10 @@ class File2Image(object):
pass
def __call__(self, img_path):
- fin = open(img_path)
+ if py_version == 2:
+ fin = open(img_path)
+ else:
+ fin = open(img_path, "rb")
sample = fin.read()
data = np.fromstring(sample, np.uint8)
img = cv2.imdecode(data, cv2.IMREAD_COLOR)
diff --git a/python/paddle_serving_client/__init__.py b/python/paddle_serving_client/__init__.py
index 8baeea7e6b8e27cc2c06ae5aaa488144e60679ab..870e9d807b9009f413be3ca4730f19e7570c4956 100644
--- a/python/paddle_serving_client/__init__.py
+++ b/python/paddle_serving_client/__init__.py
@@ -61,13 +61,18 @@ class SDKConfig(object):
self.tag_list = []
self.cluster_list = []
self.variant_weight_list = []
+ self.rpc_timeout_ms = 20000
+ self.load_balance_strategy = "la"
def add_server_variant(self, tag, cluster, variant_weight):
self.tag_list.append(tag)
self.cluster_list.append(cluster)
self.variant_weight_list.append(variant_weight)
- def gen_desc(self):
+    def set_load_balance_strategy(self, strategy):
+ self.load_balance_strategy = strategy
+
+ def gen_desc(self, rpc_timeout_ms):
predictor_desc = sdk.Predictor()
predictor_desc.name = "general_model"
predictor_desc.service_name = \
@@ -86,7 +91,7 @@ class SDKConfig(object):
self.sdk_desc.predictors.extend([predictor_desc])
self.sdk_desc.default_variant_conf.tag = "default"
self.sdk_desc.default_variant_conf.connection_conf.connect_timeout_ms = 2000
- self.sdk_desc.default_variant_conf.connection_conf.rpc_timeout_ms = 20000
+ self.sdk_desc.default_variant_conf.connection_conf.rpc_timeout_ms = rpc_timeout_ms
self.sdk_desc.default_variant_conf.connection_conf.connect_retry_count = 2
self.sdk_desc.default_variant_conf.connection_conf.max_connection_per_host = 100
self.sdk_desc.default_variant_conf.connection_conf.hedge_request_timeout_ms = -1
@@ -119,6 +124,7 @@ class Client(object):
self.profile_ = _Profiler()
self.all_numpy_input = True
self.has_numpy_input = False
+ self.rpc_timeout_ms = 20000
def load_client_config(self, path):
from .serving_client import PredictorClient
@@ -171,6 +177,12 @@ class Client(object):
self.predictor_sdk_.add_server_variant(tag, cluster,
str(variant_weight))
+ def set_rpc_timeout_ms(self, rpc_timeout):
+ if not isinstance(rpc_timeout, int):
+ raise ValueError("rpc_timeout must be int type.")
+ else:
+ self.rpc_timeout_ms = rpc_timeout
+
def connect(self, endpoints=None):
# check whether current endpoint is available
# init from client config
@@ -188,7 +200,7 @@ class Client(object):
print(
"parameter endpoints({}) will not take effect, because you use the add_variant function.".
format(endpoints))
- sdk_desc = self.predictor_sdk_.gen_desc()
+ sdk_desc = self.predictor_sdk_.gen_desc(self.rpc_timeout_ms)
self.client_handle_.create_predictor_by_desc(sdk_desc.SerializeToString(
))
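The validation added by `set_rpc_timeout_ms` can be exercised in isolation. A standalone mock of the setter (not the real `Client`, which needs a serving deployment):

```python
class MockClient:
    """Standalone sketch of the timeout setter added in this patch."""

    def __init__(self):
        # Default mirrors the previously hard-coded value.
        self.rpc_timeout_ms = 20000

    def set_rpc_timeout_ms(self, rpc_timeout):
        # Reject non-integer timeouts, as in the patch above.
        if not isinstance(rpc_timeout, int):
            raise ValueError("rpc_timeout must be int type.")
        self.rpc_timeout_ms = rpc_timeout


client = MockClient()
client.set_rpc_timeout_ms(50000)
print(client.rpc_timeout_ms)  # 50000
```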
diff --git a/python/paddle_serving_server/__init__.py b/python/paddle_serving_server/__init__.py
index 3cb96a8f04922362fdb4b4c497f7679355e3879f..7356de2c2feac126272cf9a771a03146a87ef541 100644
--- a/python/paddle_serving_server/__init__.py
+++ b/python/paddle_serving_server/__init__.py
@@ -23,6 +23,7 @@ import paddle_serving_server as paddle_serving_server
from .version import serving_server_version
from contextlib import closing
import collections
+import fcntl
class OpMaker(object):
@@ -322,6 +323,10 @@ class Server(object):
bin_url = "https://paddle-serving.bj.bcebos.com/bin/" + tar_name
self.server_path = os.path.join(self.module_path, floder_name)
+ #acquire lock
+ version_file = open("{}/version.py".format(self.module_path), "r")
+ fcntl.flock(version_file, fcntl.LOCK_EX)
+
if not os.path.exists(self.server_path):
print('Frist time run, downloading PaddleServing components ...')
r = os.system('wget ' + bin_url + ' --no-check-certificate')
@@ -345,6 +350,8 @@ class Server(object):
foemat(self.module_path))
finally:
os.remove(tar_name)
+ #release lock
+ version_file.close()
os.chdir(self.cur_path)
self.bin_path = self.server_path + "/serving"
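The guard added here takes an exclusive `flock` on an existing file so that concurrent processes serialize the one-time binary download. The pattern can be exercised on its own; a sketch using a temporary file in place of `version.py`:

```python
import fcntl
import tempfile

# Take an exclusive lock on a file so concurrent processes serialize
# one-time setup work (e.g. downloading server binaries).
with tempfile.NamedTemporaryFile(mode="r+") as guard:
    fcntl.flock(guard, fcntl.LOCK_EX)   # blocks until the lock is free
    # ... one-time work would go here ...
    fcntl.flock(guard, fcntl.LOCK_UN)   # closing the file also releases it
    lock_ok = True
print("lock acquired and released")
```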
diff --git a/python/paddle_serving_server_gpu/__init__.py b/python/paddle_serving_server_gpu/__init__.py
index 7acc926c7f7fc465da20a7609bc767a5289d2e61..d4631141f8173b4ae0cb41d42c615566ac81ae7e 100644
--- a/python/paddle_serving_server_gpu/__init__.py
+++ b/python/paddle_serving_server_gpu/__init__.py
@@ -25,6 +25,7 @@ from .version import serving_server_version
from contextlib import closing
import argparse
import collections
+import fcntl
def serve_args():
@@ -347,6 +348,11 @@ class Server(object):
download_flag = "{}/{}.is_download".format(self.module_path,
folder_name)
+
+ #acquire lock
+ version_file = open("{}/version.py".format(self.module_path), "r")
+ fcntl.flock(version_file, fcntl.LOCK_EX)
+
if os.path.exists(download_flag):
os.chdir(self.cur_path)
self.bin_path = self.server_path + "/serving"
@@ -377,6 +383,8 @@ class Server(object):
format(self.module_path))
finally:
os.remove(tar_name)
+ #release lock
+        version_file.close()
os.chdir(self.cur_path)
self.bin_path = self.server_path + "/serving"