Unverified commit ad55c1a3, authored by Jiawei Wang, committed by GitHub

Merge pull request #1036 from wangjiawei04/readme2

Readme Fix
@@ -11,14 +11,16 @@ This example uses the model [BERT Chinese Model](https://www.paddlepaddle.org.cn/hubdetail?name=bert_chinese_L-12_H-768_A-12&en_category=SemanticModel)
Install paddlehub first
```
pip3 install paddlehub
```
Then run
```
python3 prepare_model.py 128
```
**PaddleHub only supports Python 3.5+**
The 128 in the command above is the max_seq_len of the BERT model, i.e. the length of each sample after preprocessing.
The config file and model file for the server side are saved in the folder bert_seq128_model.
The config file generated for the client side is saved in the folder bert_seq128_client.
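With the two folders in place, the server-side model can be served in the usual way. Below is a minimal sketch, assuming the `paddle_serving_server_gpu.serve` entry point used by the detection examples later on this page, and an arbitrary port of 9292:
```
# Sketch: serve the generated BERT model (port 9292 is an assumption, not from this README).
python3 -m paddle_serving_server_gpu.serve --model bert_seq128_model --port 9292 --gpu_ids 0
```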
@@ -28,8 +30,9 @@ You can also download the above model from BOS (max_seq_len=128). After decompressing:
```shell
wget https://paddle-serving.bj.bcebos.com/paddle_hub_models/text/SemanticModel/bert_chinese_L-12_H-768_A-12.tar.gz
tar -xzf bert_chinese_L-12_H-768_A-12.tar.gz
mv bert_chinese_L-12_H-768_A-12_model bert_seq128_model
mv bert_chinese_L-12_H-768_A-12_client bert_seq128_client
```
If your model is bert_chinese_L-12_H-768_A-12_model, replace the 'bert_seq128_model' field in the following commands with 'bert_chinese_L-12_H-768_A-12_model', and replace 'bert_seq128_client' with 'bert_chinese_L-12_H-768_A-12_client'.
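Concretely, that substitution would turn the serve sketch from the previous section into something like the following (hypothetical, same assumed port):
```
# Sketch: serve the downloaded model without renaming its folders.
python3 -m paddle_serving_server_gpu.serve --model bert_chinese_L-12_H-768_A-12_model --port 9292 --gpu_ids 0
```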
### Getting Dict and Sample Dataset
......
@@ -10,11 +10,11 @@
This example uses the [BERT Chinese Model](https://www.paddlepaddle.org.cn/hubdetail?name=bert_chinese_L-12_H-768_A-12&en_category=SemanticModel) from [Paddlehub](https://github.com/PaddlePaddle/PaddleHub).
Install paddlehub first
```
pip3 install paddlehub
```
Then run
```
python3 prepare_model.py 128
```
The parameter 128 is the max_seq_len of the BERT model, i.e. the length of each sample after preprocessing.
The generated server-side config file and model file are stored in the bert_seq128_model folder.
@@ -25,9 +25,9 @@ python prepare_model.py 128
```shell
wget https://paddle-serving.bj.bcebos.com/paddle_hub_models/text/SemanticModel/bert_chinese_L-12_H-768_A-12.tar.gz
tar -xzf bert_chinese_L-12_H-768_A-12.tar.gz
mv bert_chinese_L-12_H-768_A-12_model bert_seq128_model
mv bert_chinese_L-12_H-768_A-12_client bert_seq128_client
```
If you use the bert_chinese_L-12_H-768_A-12_model model, replace the bert_seq128_model field in the commands below with bert_chinese_L-12_H-768_A-12_model, and the bert_seq128_client field with bert_chinese_L-12_H-768_A-12_client.
### Getting Dict and Sample Dataset
......
@@ -12,6 +12,7 @@ Paddle Detection provides a large number of [Model Zoo](https://github.com/Paddl
### Serving example
This folder provides several examples of PaddleDetection models served with Paddle Serving.
All examples support TensorRT.
- [Faster RCNN](./faster_rcnn_r50_fpn_1x_coco)
- [PPYOLO](./ppyolo_r50vd_dcn_1x_coco)
......
@@ -13,6 +13,9 @@ tar xf faster_rcnn_r50_fpn_1x_coco.tar
```
python -m paddle_serving_server_gpu.serve --model serving_server --port 9494 --gpu_ids 0
```
This model supports TensorRT. If you want faster inference, add the `--use_trt` flag.
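As a concrete illustration, the launch command above with TensorRT enabled would look like the following sketch (only the `--use_trt` flag is added; its placement at the end is an assumption):
```
python -m paddle_serving_server_gpu.serve --model serving_server --port 9494 --gpu_ids 0 --use_trt
```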
### Perform prediction
```
python test_client.py 000000570688.jpg
```
......
@@ -13,6 +13,7 @@ wget --no-check-certificate https://paddle-serving.bj.bcebos.com/pddet_demo/2.0/
```
tar xf faster_rcnn_r50_fpn_1x_coco.tar
python -m paddle_serving_server_gpu.serve --model pddet_serving_model --port 9494 --gpu_ids 0
```
This model supports TensorRT; if you want faster inference, enable the `--use_trt` option.
### Perform prediction
......
@@ -13,6 +13,8 @@ tar xf ppyolo_r50vd_dcn_1x_coco.tar
```
python -m paddle_serving_server_gpu.serve --model serving_server --port 9494 --gpu_ids 0
```
This model supports TensorRT. If you want faster inference, add the `--use_trt` flag.
### Perform prediction
```
python test_client.py 000000570688.jpg
```
......
@@ -14,6 +14,8 @@ tar xf ppyolo_r50vd_dcn_1x_coco.tar
```
python -m paddle_serving_server_gpu.serve --model pddet_serving_model --port 9494 --gpu_ids 0
```
This model supports TensorRT; if you want faster inference, enable the `--use_trt` option.
### Perform prediction
```
python test_client.py 000000570688.jpg
```
......
@@ -12,6 +12,7 @@ wget --no-check-certificate https://paddle-serving.bj.bcebos.com/pddet_demo/2.0/
```
tar xf ttfnet_darknet53_1x_coco.tar
python -m paddle_serving_server_gpu.serve --model serving_server --port 9494 --gpu_ids 0
```
This model supports TensorRT. If you want faster inference, add the `--use_trt` flag.
### Perform prediction
......
@@ -14,6 +14,8 @@ tar xf ttfnet_darknet53_1x_coco.tar
```
python -m paddle_serving_server_gpu.serve --model pddet_serving_model --port 9494 --gpu_ids 0
```
This model supports TensorRT; if you want faster inference, enable the `--use_trt` option.
### Perform prediction
```
python test_client.py 000000570688.jpg
```
......
@@ -13,6 +13,8 @@ tar xf yolov3_darknet53_270e_coco.tar
```
python -m paddle_serving_server_gpu.serve --model serving_server --port 9494 --gpu_ids 0
```
This model supports TensorRT. If you want faster inference, add the `--use_trt` flag.
### Perform prediction
```
python test_client.py 000000570688.jpg
```
......
@@ -14,6 +14,8 @@ tar xf yolov3_darknet53_270e_coco.tar
```
python -m paddle_serving_server_gpu.serve --model pddet_serving_model --port 9494 --gpu_ids 0
```
This model supports TensorRT; if you want faster inference, enable the `--use_trt` option.
### Perform prediction
```
python test_client.py 000000570688.jpg
```
......
# Imagenet Pipeline WebService
Here the Imagenet service is used as an example to introduce the use of Pipeline WebService.
## Get the model
```
sh get_model.sh
```
@@ -10,10 +10,11 @@
## Start the service
```
python resnet50_web_service.py &>log.txt &
```
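Since the service runs in the background with its output redirected, a quick sanity check is to read the log file named in the command above:
```
# Confirm the service started cleanly before sending requests.
tail log.txt
```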
## Test
```
python pipeline_rpc_client.py
```
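An earlier revision of this page tested the uci pipeline over HTTP with curl. An analogous probe for this service might look like the sketch below, but note that the port 18082 and the /imagenet/prediction path are assumptions carried over from that uci example, not confirmed by this README:
```
# Hypothetical HTTP test; endpoint, port, and payload shape are all assumptions.
curl -X POST -k http://localhost:18082/imagenet/prediction -d '{"key": ["image"], "value": ["<base64-encoded-image>"]}'
```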