diff --git a/deploy/cpp_shitu/readme.md b/deploy/cpp_shitu/readme.md
index 22c3b6eb22e7b94dbc0b5f21470bd87359262812..7ebf45be1c03a228c540b20f924eb826f65fd73f 100644
--- a/deploy/cpp_shitu/readme.md
+++ b/deploy/cpp_shitu/readme.md
@@ -239,7 +239,7 @@ make -j
* `CUDNN_LIB_DIR`为cudnn库文件地址,在docker中为`/usr/lib/x86_64-linux-gnu/`。
* `TENSORRT_DIR`是tensorrt库文件地址,在docker中为`/usr/local/TensorRT6-cuda10.0-cudnn7/`,TensorRT需要结合GPU使用。
* `FAISS_DIR`是faiss的安装地址
- * `FAISS_WITH_MKL`是指在编译faiss的过程中,是否使用了mkldnn,本文档中编译faiss,没有使用,而使用了openblas,故设置为`OFF`,若使用了mkldnn,则为`ON`.
+* `FAISS_WITH_MKL`是指在编译faiss的过程中,是否使用了mkldnn,本文档中编译faiss,没有使用,而使用了openblas,故设置为`OFF`,若使用了mkldnn,则为`ON`.
在执行上述命令,编译完成之后,会在当前路径下生成`build`文件夹,其中生成一个名为`pp_shitu`的可执行文件。
@@ -264,7 +264,7 @@ make -j
cd ..
```
-- 将相应的yaml文件拷到`test`文件夹下
+- 将相应的yaml文件拷到当前文件夹下
```shell
cp ../configs/inference_drink.yaml .
diff --git a/deploy/paddleserving/README.MD b/deploy/paddleserving/README.MD
new file mode 120000
index 0000000000000000000000000000000000000000..a2fdec2de5ac3f468ff7ed63b04ebf7bb7b2f574
--- /dev/null
+++ b/deploy/paddleserving/README.MD
@@ -0,0 +1 @@
+../../docs/zh_CN/inference_deployment/paddle_serving_deploy.md
\ No newline at end of file
diff --git a/deploy/paddleserving/README.md b/deploy/paddleserving/README.md
deleted file mode 100644
index bb34b12989a56944bb5a3b890dc122cd4beba24f..0000000000000000000000000000000000000000
--- a/deploy/paddleserving/README.md
+++ /dev/null
@@ -1,172 +0,0 @@
-# PaddleClas Pipeline WebService
-
-(English|[简体中文](./README_CN.md))
-
-PaddleClas provides two service deployment methods:
-- Based on **PaddleHub Serving**: Code path is "`./deploy/hubserving`". Please refer to the [tutorial](../../deploy/hubserving/readme_en.md)
-- Based on **PaddleServing**: Code path is "`./deploy/paddleserving`". If you prefer a retrieval-based image recognition service, please refer to the [tutorial](./recognition/README.md); if you'd like an image classification service, please follow this tutorial.
-
-# Image Classification Service deployment based on PaddleServing
-
-This document will introduce how to use the [PaddleServing](https://github.com/PaddlePaddle/Serving/blob/develop/README.md) to deploy the ResNet50_vd model as a pipeline online service.
-
-Some Key Features of Paddle Serving:
-- Integrates with the Paddle training pipeline seamlessly; most Paddle models can be deployed with a one-line command.
-- Industrial serving features supported, such as model management, online loading, and online A/B testing.
-- Highly concurrent and efficient communication between clients and servers.
-
-For an introduction to and tutorials on the Paddle Serving deployment framework, refer to the [document](https://github.com/PaddlePaddle/Serving/blob/develop/README.md).
-
-
-## Contents
-- [Environmental preparation](#environmental-preparation)
-- [Model conversion](#model-conversion)
-- [Paddle Serving pipeline deployment](#paddle-serving-pipeline-deployment)
-- [FAQ](#faq)
-
-
-## Environmental preparation
-
-PaddleClas operating environment and PaddleServing operating environment are needed.
-
-1. Please prepare the PaddleClas operating environment by referring to this [link](../../docs/zh_CN/tutorials/install.md).
-   Download the paddle whl package matching your environment; version 2.1.0 is recommended.
-
-2. The steps to prepare the PaddleServing operating environment are as follows:
-
-    Install serving, which is used to start the service:
- ```
- pip3 install paddle-serving-server==0.6.1 # for CPU
- pip3 install paddle-serving-server-gpu==0.6.1 # for GPU
- # Other GPU environments need to confirm the environment and then choose to execute the following commands
- pip3 install paddle-serving-server-gpu==0.6.1.post101 # GPU with CUDA10.1 + TensorRT6
- pip3 install paddle-serving-server-gpu==0.6.1.post11 # GPU with CUDA11 + TensorRT7
- ```
-
-3. Install the client to send requests to the service
-    In the [download link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md), find the client installation package corresponding to your Python version.
-    Python 3.7 is recommended here:
-
- ```
- wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp37-none-any.whl
- pip3 install paddle_serving_client-0.0.0-cp37-none-any.whl
- ```
-
-4. Install serving-app
- ```
- pip3 install paddle-serving-app==0.6.1
- ```
-
- **note:** If you want to install the latest version of PaddleServing, refer to [link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md).
-
-
-
-## Model conversion
-When using PaddleServing for service deployment, you need to convert the saved inference model into a serving model that is easy to deploy.
-
-Firstly, download the inference model of ResNet50_vd
-```
-# Download and unzip the ResNet50_vd model
-wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar
-```
-
-Then, use the installed paddle_serving_client tool to convert the inference model into the serving model format.
-```
-# ResNet50_vd model conversion
-python3 -m paddle_serving_client.convert --dirname ./ResNet50_vd_infer/ \
- --model_filename inference.pdmodel \
- --params_filename inference.pdiparams \
- --serving_server ./ResNet50_vd_serving/ \
- --serving_client ./ResNet50_vd_client/
-```
-
-After the ResNet50_vd inference model is converted, there will be two additional folders, `ResNet50_vd_serving` and `ResNet50_vd_client`, in the current folder, with the following format:
-```
-|- ResNet50_vd_serving/
- |- __model__
- |- __params__
- |- serving_server_conf.prototxt
- |- serving_server_conf.stream.prototxt
-
-|- ResNet50_vd_client
- |- serving_client_conf.prototxt
- |- serving_client_conf.stream.prototxt
-```
-
-Once you have the model file for deployment, you need to change the alias names in `serving_server_conf.prototxt`: change `alias_name` in `feed_var` to `image`, and change `alias_name` in `fetch_var` to `prediction`.
-The modified serving_server_conf.prototxt file is as follows:
-```
-feed_var {
- name: "inputs"
- alias_name: "image"
- is_lod_tensor: false
- feed_type: 1
- shape: 3
- shape: 224
- shape: 224
-}
-fetch_var {
- name: "save_infer_model/scale_0.tmp_1"
- alias_name: "prediction"
- is_lod_tensor: true
- fetch_type: 1
- shape: -1
-}
-```
-
-
-## Paddle Serving pipeline deployment
-
-1. Download the PaddleClas code, if you have already downloaded it, you can skip this step.
- ```
- git clone https://github.com/PaddlePaddle/PaddleClas
-
- # Enter the working directory
- cd PaddleClas/deploy/paddleserving/
- ```
-
- The paddleserving directory contains the code to start the pipeline service and send prediction requests, including:
- ```
- __init__.py
- config.yml # configuration file of starting the service
- pipeline_http_client.py # script to send pipeline prediction request by http
- pipeline_rpc_client.py # script to send pipeline prediction request by rpc
- classification_web_service.py # start the script of the pipeline server
- ```
-
-2. Run the following command to start the service.
- ```
- # Start the service and save the running log in log.txt
- python3 classification_web_service.py &>log.txt &
- ```
- After the service is successfully started, a log similar to the following will be printed in log.txt
- ![](./imgs/start_server.png)
-
-3. Send service request
- ```
- python3 pipeline_http_client.py
- ```
- After successfully running, the predicted result of the model will be printed in the cmd window. An example of the result is:
- ![](./imgs/results.png)
-
- Adjust the number of concurrency in config.yml to get the largest QPS.
-
- ```
- op:
- concurrency: 8
- ...
- ```
-
- Multiple service requests can be sent at the same time if necessary.
-
- The predicted performance data will be automatically written into the `PipelineServingLogs/pipeline.tracer` file.
-
-
-## FAQ
-**Q1**: No result is returned after sending the request.
-
-**A1**: Do not set the proxy when starting the service and sending the request. You can close the proxy before starting the service and before sending the request. The command to close the proxy is:
-```
-unset https_proxy
-unset http_proxy
-```
diff --git a/deploy/paddleserving/README_CN.md b/deploy/paddleserving/README_CN.md
deleted file mode 100644
index 02ee2093d901251a20cdf67261b0fb882d2736fd..0000000000000000000000000000000000000000
--- a/deploy/paddleserving/README_CN.md
+++ /dev/null
@@ -1,167 +0,0 @@
-# PaddleClas 服务化部署
-
-([English](./README.md)|简体中文)
-
-PaddleClas提供2种服务部署方式:
-- 基于PaddleHub Serving的部署:代码路径为"`./deploy/hubserving`",使用方法参考[文档](../../deploy/hubserving/readme.md);
-- 基于PaddleServing的部署:代码路径为"`./deploy/paddleserving`", 基于检索方式的图像识别服务参考[文档](./recognition/README_CN.md), 图像分类服务按照本教程使用。
-
-# 基于PaddleServing的图像分类服务部署
-
-本文档以经典的ResNet50_vd模型为例,介绍如何使用[PaddleServing](https://github.com/PaddlePaddle/Serving/blob/develop/README_CN.md)工具部署PaddleClas
-动态图模型的pipeline在线服务。
-
-相比较于hubserving部署,PaddleServing具备以下优点:
-- 支持客户端和服务端之间高并发和高效通信
-- 支持 工业级的服务能力 例如模型管理,在线加载,在线A/B测试等
-- 支持 多种编程语言 开发客户端,例如C++, Python和Java
-
-更多有关PaddleServing服务化部署框架介绍和使用教程参考[文档](https://github.com/PaddlePaddle/Serving/blob/develop/README_CN.md)。
-
-## 目录
-- [环境准备](#环境准备)
-- [模型转换](#模型转换)
-- [Paddle Serving pipeline部署](#部署)
-- [FAQ](#FAQ)
-
-
-## 环境准备
-
-需要准备PaddleClas的运行环境和PaddleServing的运行环境。
-
-- 准备PaddleClas的[运行环境](../../docs/zh_CN/tutorials/install.md), 根据环境下载对应的paddle whl包,推荐安装2.1.0版本
-
-- 准备PaddleServing的运行环境,步骤如下
-
-1. 安装serving,用于启动服务
- ```
- pip3 install paddle-serving-server==0.6.1 # for CPU
- pip3 install paddle-serving-server-gpu==0.6.1 # for GPU
- # 其他GPU环境需要确认环境再选择执行如下命令
- pip3 install paddle-serving-server-gpu==0.6.1.post101 # GPU with CUDA10.1 + TensorRT6
- pip3 install paddle-serving-server-gpu==0.6.1.post11 # GPU with CUDA11 + TensorRT7
- ```
-
-2. 安装client,用于向服务发送请求
- 在[下载链接](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md)中找到对应python版本的client安装包,这里推荐python3.7版本:
-
- ```
- wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp37-none-any.whl
- pip3 install paddle_serving_client-0.0.0-cp37-none-any.whl
- ```
-
-3. 安装serving-app
- ```
- pip3 install paddle-serving-app==0.6.1
- ```
- **Note:** 如果要安装最新版本的PaddleServing参考[链接](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md)。
-
-
-## 模型转换
-
-使用PaddleServing做服务化部署时,需要将保存的inference模型转换为serving易于部署的模型。
-
-首先,下载ResNet50_vd的inference模型
-```
-# 下载并解压ResNet50_vd模型
-wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar
-```
-
-接下来,用安装的paddle_serving_client把下载的inference模型转换成易于server部署的模型格式。
-
-```
-# 转换ResNet50_vd模型
-python3 -m paddle_serving_client.convert --dirname ./ResNet50_vd_infer/ \
- --model_filename inference.pdmodel \
- --params_filename inference.pdiparams \
- --serving_server ./ResNet50_vd_serving/ \
- --serving_client ./ResNet50_vd_client/
-```
-ResNet50_vd推理模型转换完成后,会在当前文件夹多出`ResNet50_vd_serving` 和`ResNet50_vd_client`的文件夹,具备如下格式:
-```
-|- ResNet50_vd_serving/
- |- __model__
- |- __params__
- |- serving_server_conf.prototxt
- |- serving_server_conf.stream.prototxt
-
-|- ResNet50_vd_client
- |- serving_client_conf.prototxt
- |- serving_client_conf.stream.prototxt
-
-```
-得到模型文件之后,需要修改serving_server_conf.prototxt中的alias名字: 将`feed_var`中的`alias_name`改为`image`, 将`fetch_var`中的`alias_name`改为`prediction`,
-修改后的serving_server_conf.prototxt内容如下:
-```
-feed_var {
- name: "inputs"
- alias_name: "image"
- is_lod_tensor: false
- feed_type: 1
- shape: 3
- shape: 224
- shape: 224
-}
-fetch_var {
- name: "save_infer_model/scale_0.tmp_1"
- alias_name: "prediction"
- is_lod_tensor: true
- fetch_type: 1
- shape: -1
-}
-```
-
-
-## Paddle Serving pipeline部署
-
-1. 下载PaddleClas代码,若已下载可跳过此步骤
- ```
- git clone https://github.com/PaddlePaddle/PaddleClas
-
- # 进入到工作目录
- cd PaddleClas/deploy/paddleserving/
- ```
- paddleserving目录包含启动pipeline服务和发送预测请求的代码,包括:
- ```
- __init__.py
- config.yml # 启动服务的配置文件
- pipeline_http_client.py # http方式发送pipeline预测请求的脚本
- pipeline_rpc_client.py # rpc方式发送pipeline预测请求的脚本
- classification_web_service.py # 启动pipeline服务端的脚本
- ```
-
-2. 启动服务可运行如下命令:
- ```
- # 启动服务,运行日志保存在log.txt
- python3 classification_web_service.py &>log.txt &
- ```
- 成功启动服务后,log.txt中会打印类似如下日志
- ![](./imgs/start_server.png)
-
-3. 发送服务请求:
- ```
- python3 pipeline_http_client.py
- ```
- 成功运行后,模型预测的结果会打印在cmd窗口中,结果示例为:
- ![](./imgs/results.png)
-
- 调整 config.yml 中的并发个数可以获得最大的QPS
- ```
- op:
- #并发数,is_thread_op=True时,为线程并发;否则为进程并发
- concurrency: 8
- ...
- ```
- 有需要的话可以同时发送多个服务请求
-
- 预测性能数据会被自动写入 `PipelineServingLogs/pipeline.tracer` 文件中。
-
-
-## FAQ
-**Q1**: 发送请求后没有结果返回或者提示输出解码报错
-
-**A1**: 启动服务和发送请求时不要设置代理,可以在启动服务前和发送请求前关闭代理,关闭代理的命令是:
-```
-unset https_proxy
-unset http_proxy
-```
diff --git a/deploy/paddleserving/classification_web_service.py b/deploy/paddleserving/classification_web_service.py
index 6c353eb102fd8c2914c71e6a3c946431823b8c32..3e43ce8608e5e0edac1802910856be2ed6e6b635 100644
--- a/deploy/paddleserving/classification_web_service.py
+++ b/deploy/paddleserving/classification_web_service.py
@@ -21,6 +21,7 @@ import logging
import numpy as np
import base64, cv2
+
class ImagenetOp(Op):
def init_op(self):
self.seq = Sequential([
@@ -48,7 +49,7 @@ class ImagenetOp(Op):
input_imgs = np.concatenate(imgs, axis=0)
return {"image": input_imgs}, False, None, ""
- def postprocess(self, input_dicts, fetch_dict, log_id):
+ def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
score_list = fetch_dict["prediction"]
result = {"label": [], "prob": []}
for score in score_list:
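The `postprocess` hooks in this patch gain a `data_id` argument to match the newer PaddleServing pipeline Op API. A minimal sketch of the updated hook shape, using hypothetical stand-in classes rather than the real PaddleServing base class, plus an arity-based dispatcher for callers that must tolerate both the old 3-argument and new 4-argument signatures:

```python
import inspect


class ImagenetOpSketch:
    # Updated hook: newer pipeline versions pass (input_dicts, fetch_dict, data_id, log_id).
    def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
        # Pick the top-1 index and score per sample from the "prediction" scores.
        result = {"label": [], "prob": []}
        for score in fetch_dict["prediction"]:
            best = max(range(len(score)), key=lambda i: score[i])
            result["label"].append(best)
            result["prob"].append(score[best])
        return result, None, ""


class LegacyOpSketch:
    # Older hook shape, as seen before this patch.
    def postprocess(self, input_dicts, fetch_dict, log_id):
        return fetch_dict, None, ""


def call_postprocess(op, input_dicts, fetch_dict, data_id, log_id):
    """Hypothetical helper: dispatch on the hook's parameters so one caller
    works with both signatures."""
    params = inspect.signature(op.postprocess).parameters
    if "data_id" in params:
        return op.postprocess(input_dicts, fetch_dict, data_id, log_id)
    return op.postprocess(input_dicts, fetch_dict, log_id)
```

In the real code the base class controls which signature is called, so only the hook itself needs updating; the dispatcher above is just an illustration of why mixing old and new hooks fails at call time.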
diff --git a/deploy/paddleserving/recognition/README.md b/deploy/paddleserving/recognition/README.md
deleted file mode 100644
index 005e418163b546d7faba6c133429aead1cfd56ca..0000000000000000000000000000000000000000
--- a/deploy/paddleserving/recognition/README.md
+++ /dev/null
@@ -1,178 +0,0 @@
-# Product Recognition Service deployment based on PaddleServing
-
-(English|[简体中文](./README_CN.md))
-
-This document will introduce how to use the [PaddleServing](https://github.com/PaddlePaddle/Serving/blob/develop/README.md) to deploy the product recognition model based on retrieval method as a pipeline online service.
-
-Some Key Features of Paddle Serving:
-- Integrates with the Paddle training pipeline seamlessly; most Paddle models can be deployed with a one-line command.
-- Industrial serving features supported, such as model management, online loading, and online A/B testing.
-- Highly concurrent and efficient communication between clients and servers.
-
-For an introduction to and tutorials on the Paddle Serving deployment framework, refer to the [document](https://github.com/PaddlePaddle/Serving/blob/develop/README.md).
-
-## Contents
-- [Environmental preparation](#environmental-preparation)
-- [Model conversion](#model-conversion)
-- [Paddle Serving pipeline deployment](#paddle-serving-pipeline-deployment)
-- [FAQ](#faq)
-
-
-## Environmental preparation
-
-PaddleClas operating environment and PaddleServing operating environment are needed.
-
-1. Please prepare the PaddleClas operating environment by referring to this [link](../../docs/zh_CN/tutorials/install.md).
-   Download the paddle whl package matching your environment; version 2.1.0 is recommended.
-
-2. The steps to prepare the PaddleServing operating environment are as follows:
-
-    Install serving, which is used to start the service:
- ```
- pip3 install paddle-serving-server==0.6.1 # for CPU
- pip3 install paddle-serving-server-gpu==0.6.1 # for GPU
- # Other GPU environments need to confirm the environment and then choose to execute the following commands
- pip3 install paddle-serving-server-gpu==0.6.1.post101 # GPU with CUDA10.1 + TensorRT6
- pip3 install paddle-serving-server-gpu==0.6.1.post11 # GPU with CUDA11 + TensorRT7
- ```
-
-3. Install the client to send requests to the service
-    In the [download link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md), find the client installation package corresponding to your Python version.
-    Python 3.7 is recommended here:
-
- ```
- wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp37-none-any.whl
- pip3 install paddle_serving_client-0.0.0-cp37-none-any.whl
- ```
-
-4. Install serving-app
- ```
- pip3 install paddle-serving-app==0.6.1
- ```
-
- **note:** If you want to install the latest version of PaddleServing, refer to [link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md).
-
-
-
-## Model conversion
-When using PaddleServing for service deployment, you need to convert the saved inference model into a serving model that is easy to deploy.
-The following assumes that the current working directory is the PaddleClas root directory.
-
-Firstly, download the inference model of ResNet50_vd
-```
-cd deploy
-# Download and unzip the ResNet50_vd model
-wget -P models/ https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/product_ResNet50_vd_aliproduct_v1.0_infer.tar
-cd models
-tar -xf product_ResNet50_vd_aliproduct_v1.0_infer.tar
-```
-
-Then, use the installed paddle_serving_client tool to convert the inference model into the serving model format.
-```
-# Product recognition model conversion
-python3 -m paddle_serving_client.convert --dirname ./product_ResNet50_vd_aliproduct_v1.0_infer/ \
- --model_filename inference.pdmodel \
- --params_filename inference.pdiparams \
- --serving_server ./product_ResNet50_vd_aliproduct_v1.0_serving/ \
- --serving_client ./product_ResNet50_vd_aliproduct_v1.0_client/
-```
-
-After the product recognition inference model is converted, there will be two additional folders, `product_ResNet50_vd_aliproduct_v1.0_serving` and `product_ResNet50_vd_aliproduct_v1.0_client`, in the current folder, with the following format:
-```
-|- product_ResNet50_vd_aliproduct_v1.0_serving/
- |- __model__
- |- __params__
- |- serving_server_conf.prototxt
- |- serving_server_conf.stream.prototxt
-
-|- product_ResNet50_vd_aliproduct_v1.0_client
- |- serving_client_conf.prototxt
- |- serving_client_conf.stream.prototxt
-```
-
-Once you have the model file for deployment, you need to change the alias name in `serving_server_conf.prototxt`: change `alias_name` in `fetch_var` to `features`.
-The modified serving_server_conf.prototxt file is as follows:
-```
-feed_var {
- name: "x"
- alias_name: "x"
- is_lod_tensor: false
- feed_type: 1
- shape: 3
- shape: 224
- shape: 224
-}
-fetch_var {
- name: "save_infer_model/scale_0.tmp_1"
- alias_name: "features"
- is_lod_tensor: true
- fetch_type: 1
- shape: -1
-}
-```
-
-Next, download and unpack the prebuilt index of the product gallery:
-```
-cd ../
-wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/recognition_demo_data_v1.1.tar && tar -xf recognition_demo_data_v1.1.tar
-```
-
-
-
-## Paddle Serving pipeline deployment
-
-**Attention:** the pipeline deployment mode is not supported on Windows.
-
-1. Download the PaddleClas code, if you have already downloaded it, you can skip this step.
- ```
- git clone https://github.com/PaddlePaddle/PaddleClas
-
- # Enter the working directory
- cd PaddleClas/deploy/paddleserving/recognition
- ```
-
- The paddleserving directory contains the code to start the pipeline service and send prediction requests, including:
- ```
- __init__.py
- config.yml # configuration file of starting the service
- pipeline_http_client.py # script to send pipeline prediction request by http
- pipeline_rpc_client.py # script to send pipeline prediction request by rpc
- recognition_web_service.py # start the script of the pipeline server
- ```
-
-2. Run the following command to start the service.
- ```
- # Start the service and save the running log in log.txt
- python3 recognition_web_service.py &>log.txt &
- ```
- After the service is successfully started, a log similar to the following will be printed in log.txt
- ![](../imgs/start_server_recog.png)
-
-3. Send service request
- ```
- python3 pipeline_http_client.py
- ```
- After successfully running, the predicted result of the model will be printed in the cmd window. An example of the result is:
- ![](../imgs/results_recog.png)
-
- Adjust the number of concurrency in config.yml to get the largest QPS.
-
- ```
- op:
- concurrency: 8
- ...
- ```
-
- Multiple service requests can be sent at the same time if necessary.
-
- The predicted performance data will be automatically written into the `PipelineServingLogs/pipeline.tracer` file.
-
-
-## FAQ
-**Q1**: No result is returned after sending the request.
-
-**A1**: Do not set the proxy when starting the service and sending the request. You can close the proxy before starting the service and before sending the request. The command to close the proxy is:
-```
-unset https_proxy
-unset http_proxy
-```
diff --git a/deploy/paddleserving/recognition/README_CN.md b/deploy/paddleserving/recognition/README_CN.md
deleted file mode 100644
index b8b8128a01b5e7640e51ef23c0be4ac0e8b13b0e..0000000000000000000000000000000000000000
--- a/deploy/paddleserving/recognition/README_CN.md
+++ /dev/null
@@ -1,174 +0,0 @@
-# 基于PaddleServing的商品识别服务部署
-
-([English](./README.md)|简体中文)
-
-本文以商品识别为例,介绍如何使用[PaddleServing](https://github.com/PaddlePaddle/Serving/blob/develop/README_CN.md)工具部署PaddleClas动态图模型的pipeline在线服务。
-
-相比较于hubserving部署,PaddleServing具备以下优点:
-- 支持客户端和服务端之间高并发和高效通信
-- 支持 工业级的服务能力 例如模型管理,在线加载,在线A/B测试等
-- 支持 多种编程语言 开发客户端,例如C++, Python和Java
-
-更多有关PaddleServing服务化部署框架介绍和使用教程参考[文档](https://github.com/PaddlePaddle/Serving/blob/develop/README_CN.md)。
-
-## 目录
-- [环境准备](#环境准备)
-- [模型转换](#模型转换)
-- [Paddle Serving pipeline部署](#部署)
-- [FAQ](#FAQ)
-
-
-## 环境准备
-
-需要准备PaddleClas的运行环境和PaddleServing的运行环境。
-
-- 准备PaddleClas的[运行环境](../../docs/zh_CN/tutorials/install.md), 根据环境下载对应的paddle whl包,推荐安装2.1.0版本
-
-- 准备PaddleServing的运行环境,步骤如下
-
-1. 安装serving,用于启动服务
- ```
- pip3 install paddle-serving-server==0.6.1 # for CPU
- pip3 install paddle-serving-server-gpu==0.6.1 # for GPU
- # 其他GPU环境需要确认环境再选择执行如下命令
- pip3 install paddle-serving-server-gpu==0.6.1.post101 # GPU with CUDA10.1 + TensorRT6
- pip3 install paddle-serving-server-gpu==0.6.1.post11 # GPU with CUDA11 + TensorRT7
- ```
-
-2. 安装client,用于向服务发送请求
- 在[下载链接](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md)中找到对应python版本的client安装包,这里推荐python3.7版本:
-
- ```
- wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp37-none-any.whl
- pip3 install paddle_serving_client-0.0.0-cp37-none-any.whl
- ```
-
-3. 安装serving-app
- ```
- pip3 install paddle-serving-app==0.6.1
- ```
- **Note:** 如果要安装最新版本的PaddleServing参考[链接](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md)。
-
-
-## 模型转换
-
-使用PaddleServing做服务化部署时,需要将保存的inference模型转换为serving易于部署的模型。
-以下内容假定当前工作目录为PaddleClas根目录。
-
-首先,下载商品识别的inference模型
-```
-cd deploy
-
-# 下载并解压商品识别模型
-wget -P models/ https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/product_ResNet50_vd_aliproduct_v1.0_infer.tar
-cd models
-tar -xf product_ResNet50_vd_aliproduct_v1.0_infer.tar
-```
-
-接下来,用安装的paddle_serving_client把下载的inference模型转换成易于server部署的模型格式。
-
-```
-# 转换商品识别模型
-python3 -m paddle_serving_client.convert --dirname ./product_ResNet50_vd_aliproduct_v1.0_infer/ \
- --model_filename inference.pdmodel \
- --params_filename inference.pdiparams \
- --serving_server ./product_ResNet50_vd_aliproduct_v1.0_serving/ \
- --serving_client ./product_ResNet50_vd_aliproduct_v1.0_client/
-```
-商品识别推理模型转换完成后,会在当前文件夹多出`product_ResNet50_vd_aliproduct_v1.0_serving` 和`product_ResNet50_vd_aliproduct_v1.0_client`的文件夹,具备如下格式:
-```
-|- product_ResNet50_vd_aliproduct_v1.0_serving/
- |- __model__
- |- __params__
- |- serving_server_conf.prototxt
- |- serving_server_conf.stream.prototxt
-
-|- product_ResNet50_vd_aliproduct_v1.0_client
- |- serving_client_conf.prototxt
- |- serving_client_conf.stream.prototxt
-
-```
-得到模型文件之后,需要修改serving_server_conf.prototxt中的alias名字: 将`fetch_var`中的`alias_name`改为`features`,
-修改后的serving_server_conf.prototxt内容如下:
-```
-feed_var {
- name: "x"
- alias_name: "x"
- is_lod_tensor: false
- feed_type: 1
- shape: 3
- shape: 224
- shape: 224
-}
-fetch_var {
- name: "save_infer_model/scale_0.tmp_1"
- alias_name: "features"
- is_lod_tensor: true
- fetch_type: 1
- shape: -1
-}
-```
-
-接下来,下载并解压已经构建后的商品库index
-```
-cd ../
-wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/recognition_demo_data_v1.1.tar && tar -xf recognition_demo_data_v1.1.tar
-```
-
-
-
-## Paddle Serving pipeline部署
-
-**注意:** pipeline部署方式不支持windows平台
-
-1. 下载PaddleClas代码,若已下载可跳过此步骤
- ```
- git clone https://github.com/PaddlePaddle/PaddleClas
-
- # 进入到工作目录
- cd PaddleClas/deploy/paddleserving/recognition
- ```
- paddleserving目录包含启动pipeline服务和发送预测请求的代码,包括:
- ```
- __init__.py
- config.yml # 启动服务的配置文件
- pipeline_http_client.py # http方式发送pipeline预测请求的脚本
- pipeline_rpc_client.py # rpc方式发送pipeline预测请求的脚本
- recognition_web_service.py # 启动pipeline服务端的脚本
- ```
-
-2. 启动服务可运行如下命令:
- ```
- # 启动服务,运行日志保存在log.txt
- python3 recognition_web_service.py &>log.txt &
- ```
- 成功启动服务后,log.txt中会打印类似如下日志
- ![](../imgs/start_server_recog.png)
-
-3. 发送服务请求:
- ```
- python3 pipeline_http_client.py
- ```
- 成功运行后,模型预测的结果会打印在cmd窗口中,结果示例为:
- ![](../imgs/results_recog.png)
-
- 调整 config.yml 中的并发个数可以获得最大的QPS
- ```
- op:
- #并发数,is_thread_op=True时,为线程并发;否则为进程并发
- concurrency: 8
- ...
- ```
- 有需要的话可以同时发送多个服务请求
-
- 预测性能数据会被自动写入 `PipelineServingLogs/pipeline.tracer` 文件中。
-
-
-## FAQ
-**Q1**: 发送请求后没有结果返回或者提示输出解码报错
-
-**A1**: 启动服务和发送请求时不要设置代理,可以在启动服务前和发送请求前关闭代理,关闭代理的命令是:
-```
-unset https_proxy
-unset http_proxy
-```
diff --git a/deploy/paddleserving/recognition/recognition_web_service.py b/deploy/paddleserving/recognition/recognition_web_service.py
index 920d93139cd11f3c3539a7255fb584a3d0799017..4a3478b65fa43a45d050e1b3341066acbe199138 100644
--- a/deploy/paddleserving/recognition/recognition_web_service.py
+++ b/deploy/paddleserving/recognition/recognition_web_service.py
@@ -83,7 +83,7 @@ class DetOp(Op):
}
return feed_dict, False, None, ""
- def postprocess(self, input_dicts, fetch_dict, log_id):
+ def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
boxes = self.img_postprocess(fetch_dict, visualize=False)
boxes.sort(key=lambda x: x["score"], reverse=True)
boxes = filter(lambda x: x["score"] >= self.threshold,
@@ -173,7 +173,7 @@ class RecOp(Op):
filtered_results.append(results[i])
return filtered_results
- def postprocess(self, input_dicts, fetch_dict, log_id):
+ def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
batch_features = fetch_dict["features"]
if self.feature_normalize:
diff --git a/deploy/python/predict_cls.py b/deploy/python/predict_cls.py
index cdeb32e4881fb5cac4e3ba09adfba8019af579ad..7d6d861648e1629a89fd255027a9e4798d09547d 100644
--- a/deploy/python/predict_cls.py
+++ b/deploy/python/predict_cls.py
@@ -47,12 +47,14 @@ class ClsPredictor(Predictor):
import auto_log
import os
pid = os.getpid()
+ size = config["PreProcess"]["transform_ops"][1]["CropImage"][
+ "size"]
self.auto_logger = auto_log.AutoLogger(
model_name=config["Global"].get("model_name", "cls"),
model_precision='fp16'
if config["Global"]["use_fp16"] else 'fp32',
batch_size=config["Global"].get("batch_size", 1),
- data_shape=[3, 224, 224],
+ data_shape=[3, size, size],
save_path=config["Global"].get("save_log_path",
"./auto_log.log"),
inference_config=self.config,
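The `predict_cls.py` hunk above reads the crop size from `transform_ops[1]["CropImage"]["size"]`, which assumes `CropImage` is always the second preprocess op. A hedged sketch of a position-independent lookup (the `get_crop_size` helper is hypothetical; the config keys mirror PaddleClas inference YAMLs):

```python
def get_crop_size(preprocess_config, default=224):
    """Scan transform_ops for a CropImage entry instead of assuming its index."""
    for op in preprocess_config.get("transform_ops", []):
        if "CropImage" in op:
            return op["CropImage"]["size"]
    return default  # fall back when no crop op is configured


# Example config fragment mirroring the structure indexed in the diff above:
config = {
    "transform_ops": [
        {"ResizeImage": {"resize_short": 256}},
        {"CropImage": {"size": 224}},
        {"NormalizeImage": {"scale": 1.0 / 255.0}},
    ]
}
size = get_crop_size(config)
data_shape = [3, size, size]  # what auto_log receives instead of a hardcoded 224
```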
diff --git a/deploy/python/predict_rec.py b/deploy/python/predict_rec.py
index d41c513f89fd83972e86bc5941a8dba1fd488856..0d0e7415996531a96bcb0f3b8b210f36fcd4eac8 100644
--- a/deploy/python/predict_rec.py
+++ b/deploy/python/predict_rec.py
@@ -35,6 +35,27 @@ class RecPredictor(Predictor):
self.preprocess_ops = create_operators(config["RecPreProcess"][
"transform_ops"])
self.postprocess = build_postprocess(config["RecPostProcess"])
+ self.benchmark = config["Global"].get("benchmark", False)
+
+ if self.benchmark:
+ import auto_log
+ pid = os.getpid()
+ self.auto_logger = auto_log.AutoLogger(
+ model_name=config["Global"].get("model_name", "rec"),
+ model_precision='fp16'
+ if config["Global"]["use_fp16"] else 'fp32',
+ batch_size=config["Global"].get("batch_size", 1),
+ data_shape=[3, 224, 224],
+ save_path=config["Global"].get("save_log_path",
+ "./auto_log.log"),
+ inference_config=self.config,
+ pids=pid,
+ process_name=None,
+ gpu_ids=None,
+ time_keys=[
+ 'preprocess_time', 'inference_time', 'postprocess_time'
+ ],
+ warmup=2)
def predict(self, images, feature_normalize=True):
input_names = self.paddle_predictor.get_input_names()
@@ -44,16 +65,22 @@ class RecPredictor(Predictor):
output_tensor = self.paddle_predictor.get_output_handle(output_names[
0])
+ if self.benchmark:
+ self.auto_logger.times.start()
if not isinstance(images, (list, )):
images = [images]
for idx in range(len(images)):
for ops in self.preprocess_ops:
images[idx] = ops(images[idx])
image = np.array(images)
+ if self.benchmark:
+ self.auto_logger.times.stamp()
input_tensor.copy_from_cpu(image)
self.paddle_predictor.run()
batch_output = output_tensor.copy_to_cpu()
+ if self.benchmark:
+ self.auto_logger.times.stamp()
if feature_normalize:
feas_norm = np.sqrt(
@@ -62,6 +89,9 @@ class RecPredictor(Predictor):
if self.postprocess is not None:
batch_output = self.postprocess(batch_output)
+
+ if self.benchmark:
+ self.auto_logger.times.end(stamp=True)
return batch_output
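The `start`/`stamp`/`stamp`/`end(stamp=True)` calls added to `predict()` bracket the three stages whose durations auto_log reports under `preprocess_time`, `inference_time`, and `postprocess_time`. A toy timer (not the real `auto_log` API) showing how four marks split one run into three intervals:

```python
import time


class StageTimer:
    """Toy stand-in for auto_log's times object: start/stamp/stamp/end -> 3 intervals."""

    def __init__(self):
        self.marks = []

    def start(self):
        self.marks = [time.perf_counter()]

    def stamp(self):
        self.marks.append(time.perf_counter())

    def end(self, stamp=False):
        if stamp:
            self.marks.append(time.perf_counter())

    def intervals(self):
        # Consecutive mark differences: [preprocess, inference, postprocess].
        return [b - a for a, b in zip(self.marks, self.marks[1:])]


timer = StageTimer()
timer.start()          # before preprocessing
time.sleep(0.01)       # ... preprocess ...
timer.stamp()          # after preprocessing
time.sleep(0.01)       # ... inference ...
timer.stamp()          # after inference
time.sleep(0.01)       # ... postprocess ...
timer.end(stamp=True)  # closes the postprocess interval
```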
@@ -85,16 +115,19 @@ def main(config):
batch_names.append(img_name)
cnt += 1
- if cnt % config["Global"]["batch_size"] == 0 or (idx + 1) == len(image_list):
- if len(batch_imgs) == 0:
+ if cnt % config["Global"]["batch_size"] == 0 or (idx + 1
+ ) == len(image_list):
+ if len(batch_imgs) == 0:
continue
-
+
batch_results = rec_predictor.predict(batch_imgs)
for number, result_dict in enumerate(batch_results):
filename = batch_names[number]
print("{}:\t{}".format(filename, result_dict))
batch_imgs = []
batch_names = []
+ if rec_predictor.benchmark:
+ rec_predictor.auto_logger.report()
return
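The reindented condition in `main()` flushes a batch either when it reaches `batch_size` or when the last image is appended, so the final partial batch is never dropped. The same flush logic on plain lists, as a self-contained sketch:

```python
def iter_batches(items, batch_size):
    """Yield full batches, plus a final partial batch at the end of the list."""
    batch = []
    for idx, item in enumerate(items):
        batch.append(item)
        # Same condition as in predict_rec.py: batch full, or last element reached.
        if len(batch) == batch_size or (idx + 1) == len(items):
            yield batch
            batch = []


batches = list(iter_batches(["a", "b", "c", "d", "e"], 2))
# batches -> [["a", "b"], ["c", "d"], ["e"]]
```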
diff --git a/deploy/utils/predictor.py b/deploy/utils/predictor.py
index 11f153071a0da0f82f035fe7389ad8f9f3bd8e6b..1c49c9dd5478932c655374fad541acbfc8952eeb 100644
--- a/deploy/utils/predictor.py
+++ b/deploy/utils/predictor.py
@@ -60,6 +60,7 @@ class Predictor(object):
precision_mode=Config.Precision.Half
if args.use_fp16 else Config.Precision.Float32,
max_batch_size=args.batch_size,
+ workspace_size=1 << 30,
min_subgraph_size=30)
config.enable_memory_optim()
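For reference, the added `workspace_size=1 << 30` reserves one gibibyte for the TensorRT engine workspace (the parameter is in bytes); the shift arithmetic:

```python
# TensorRT workspace sizes are given in bytes; 1 << 30 is 2**30 bytes = 1 GiB.
workspace_size = 1 << 30
gib = workspace_size / (1024 ** 3)  # size expressed in GiB
```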
diff --git a/docs/en_tmp/advanced_tutorials/.gitkeep b/docs/en_tmp/advanced_tutorials/.gitkeep
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/docs/en_tmp/algorithm_introduction/.gitkeep b/docs/en_tmp/algorithm_introduction/.gitkeep
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/docs/en_tmp/data_preparation/.gitkeep b/docs/en_tmp/data_preparation/.gitkeep
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/docs/en_tmp/faq_series/.gitkeep b/docs/en_tmp/faq_series/.gitkeep
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/docs/en_tmp/image_recognition_pipeline/.gitkeep b/docs/en_tmp/image_recognition_pipeline/.gitkeep
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/docs/en_tmp/images/.gitkeep b/docs/en_tmp/images/.gitkeep
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/docs/en_tmp/inference_deployment/.gitkeep b/docs/en_tmp/inference_deployment/.gitkeep
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/docs/en_tmp/installation/.gitkeep b/docs/en_tmp/installation/.gitkeep
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/docs/en_tmp/introduction/.gitkeep b/docs/en_tmp/introduction/.gitkeep
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/docs/en_tmp/models/.gitkeep b/docs/en_tmp/models/.gitkeep
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/docs/en_tmp/models_training/.gitkeep b/docs/en_tmp/models_training/.gitkeep
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/docs/en_tmp/others/.gitkeep b/docs/en_tmp/others/.gitkeep
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/docs/en_tmp/quick_start/.gitkeep b/docs/en_tmp/quick_start/.gitkeep
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/docs/images/wx_group.png b/docs/images/wx_group.png
index e65af8e60bb8ecf0e2adb83202b86e22456fba35..38663ce7c866c0a9b3735eec33f1f82ef59b813c 100644
Binary files a/docs/images/wx_group.png and b/docs/images/wx_group.png differ
diff --git a/docs/zh_CN/inference_deployment/paddle_serving_deploy.md b/docs/zh_CN/inference_deployment/paddle_serving_deploy.md
index 69032d99ff97d8eef6c25cfd3c4426f3830e1f03..f5effa95d82e82762f705e241a08c614ce87e1a3 100644
--- a/docs/zh_CN/inference_deployment/paddle_serving_deploy.md
+++ b/docs/zh_CN/inference_deployment/paddle_serving_deploy.md
@@ -21,21 +21,25 @@
The Serving official site recommends using docker to install and deploy the Serving environment. First pull the docker image and create a Serving-based docker container.
```shell
-nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
+docker pull paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel
+nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel bash
nvidia-docker exec -it test bash
```
After entering the docker container, install the Serving-related python packages.
```shell
-pip install paddlepaddle-gpu
-pip install paddle-serving-client
-pip install paddle-serving-server-gpu
-pip install paddle-serving-app
+pip3 install paddle-serving-client==0.7.0
+pip3 install paddle-serving-server==0.7.0 # CPU
+pip3 install paddle-serving-app==0.7.0
+pip3 install paddle-serving-server-gpu==0.7.0.post102 # GPU with CUDA10.2 + TensorRT6
+# For other GPU environments, confirm your setup before choosing which of the following to run
+pip3 install paddle-serving-server-gpu==0.7.0.post101 # GPU with CUDA10.1 + TensorRT6
+pip3 install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUDA11.2 + TensorRT8
```
* If installation is too slow, switch the package index with `-i https://pypi.tuna.tsinghua.edu.cn/simple` to speed it up.
+* For installation under other environment configurations, refer to: [Install Paddle Serving with Docker](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_CN.md)
* To deploy a CPU service, install the CPU version of serving-server; the install command is as follows.
@@ -43,6 +47,7 @@ pip install paddle-serving-app
pip install paddle-serving-server
```
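The GPU wheel suffixes above encode the CUDA/TensorRT combination. A small illustrative helper (hypothetical, mirroring only the three commands listed in this doc) that maps a CUDA version to the matching install target:

```python
# Hypothetical mapping from CUDA version to the paddle-serving-server-gpu
# wheel version used above; only the combinations shown in this doc.
WHEEL_VERSION = {
    "10.1": "0.7.0.post101",  # CUDA 10.1 + TensorRT6
    "10.2": "0.7.0.post102",  # CUDA 10.2 + TensorRT6
    "11.2": "0.7.0.post112",  # CUDA 11.2 + TensorRT8
}

def serving_gpu_package(cuda_version: str) -> str:
    """Return the pip requirement string for a given CUDA version."""
    return f"paddle-serving-server-gpu=={WHEEL_VERSION[cuda_version]}"

print(serving_gpu_package("10.2"))  # paddle-serving-server-gpu==0.7.0.post102
```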
+
## 3. Image Classification Service Deployment
### 3.1 Model Conversion
diff --git a/test_tipc/config/DeiT/DeiT_base_patch16_384_train_infer_python.txt b/test_tipc/config/DeiT/DeiT_base_patch16_384_train_infer_python.txt
index f6e034d01ca6dc083a13e8f479068eb1079878ee..c8fc60e0a517c53bbd27c3cc21528eb0de61adc9 100644
--- a/test_tipc/config/DeiT/DeiT_base_patch16_384_train_infer_python.txt
+++ b/test_tipc/config/DeiT/DeiT_base_patch16_384_train_infer_python.txt
@@ -37,7 +37,7 @@ pretrained_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/D
infer_model:../inference/
infer_export:True
infer_quant:False
-inference:python/predict_cls.py -c configs/inference_cls.yaml
+inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=384 -o PreProcess.transform_ops.1.CropImage.size=384
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:True|False
-o Global.cpu_num_threads:1|6
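The `-o` overrides appended to the inference command address nested config entries with a dotted path, where numeric segments index into lists (`transform_ops.0` is the first preprocessing op). A minimal sketch of that mechanism (not the actual PaddleClas implementation):

```python
# Apply an override like "PreProcess.transform_ops.0.ResizeImage.resize_short=384"
# to a nested config: each dotted segment is a dict key or a list index.
def apply_override(cfg, override: str):
    path, value = override.split("=", 1)
    keys = path.split(".")
    node = cfg
    for key in keys[:-1]:
        node = node[int(key)] if isinstance(node, list) else node[key]
    node[keys[-1]] = int(value) if value.isdigit() else value
    return cfg

cfg = {"PreProcess": {"transform_ops": [
    {"ResizeImage": {"resize_short": 256}},
    {"CropImage": {"size": 224}},
]}}
apply_override(cfg, "PreProcess.transform_ops.0.ResizeImage.resize_short=384")
apply_override(cfg, "PreProcess.transform_ops.1.CropImage.size=384")
print(cfg["PreProcess"]["transform_ops"])
```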
diff --git a/test_tipc/config/ResNeSt/ResNeSt101_train_infer_python.txt b/test_tipc/config/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5_train_infer_python.txt
similarity index 61%
rename from test_tipc/config/ResNeSt/ResNeSt101_train_infer_python.txt
rename to test_tipc/config/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5_train_infer_python.txt
index 30fbdf7378e5d6d379b42ad6934a85f382b4ab59..40e6675a6717ad1448ea0663f0152fbefd5987c7 100644
--- a/test_tipc/config/ResNeSt/ResNeSt101_train_infer_python.txt
+++ b/test_tipc/config/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5_train_infer_python.txt
@@ -1,5 +1,5 @@
===========================train_params===========================
-model_name:ResNeSt101
+model_name:GeneralRecognition_PPLCNet_x2_5
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
@@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
-norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeSt/ResNeSt101.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False
+norm_train:tools/train.py -c ppcls/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False
pact_train:null
fpgm_train:null
distill_train:null
@@ -21,31 +21,31 @@ null:null
null:null
##
===========================eval_params===========================
-eval:tools/eval.py -c ppcls/configs/ImageNet/ResNeSt/ResNeSt101.yaml
+eval:tools/eval.py -c ppcls/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
-norm_export:tools/export_model.py -c ppcls/configs/ImageNet/ResNeSt/ResNeSt101.yaml
+norm_export:tools/export_model.py -c ppcls/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5.yaml
quant_export:null
fpgm_export:null
distill_export:null
kl_quant:null
export2:null
-pretrained_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNeSt101_pretrained.pdparams
+pretrained_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/pretrain/general_PPLCNet_x2_5_pretrained_v1.0.pdparams
infer_model:../inference/
infer_export:True
infer_quant:False
-inference:python/predict_cls.py -c configs/inference_cls.yaml
+inference:python/predict_rec.py -c configs/inference_rec.yaml
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:True|False
-o Global.cpu_num_threads:1|6
-o Global.batch_size:1|16
-o Global.use_tensorrt:True|False
-o Global.use_fp16:True|False
--o Global.inference_model_dir:../inference
--o Global.infer_imgs:../dataset/ILSVRC2012/val
+-o Global.rec_inference_model_dir:../inference
+-o Global.infer_imgs:../dataset/Aliproduct/demo_test/
-o Global.save_log_path:null
-o Global.benchmark:True
null:null
diff --git a/test_tipc/config/LeViT/LeViT_384_train_infer_python.txt b/test_tipc/config/LeViT/LeViT_384_train_infer_python.txt
index b5afe81c42532275a918764b16cd26bf52cf7fb3..47451d4f75a9dc1d19e7d7a89957d837a0e1c464 100644
--- a/test_tipc/config/LeViT/LeViT_384_train_infer_python.txt
+++ b/test_tipc/config/LeViT/LeViT_384_train_infer_python.txt
@@ -37,7 +37,7 @@ pretrained_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/L
infer_model:../inference/
infer_export:True
infer_quant:False
-inference:python/predict_cls.py -c configs/inference_cls.yaml
+inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=384 -o PreProcess.transform_ops.1.CropImage.size=384
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:True|False
-o Global.cpu_num_threads:1|6
diff --git a/test_tipc/config/SwinTransformer/SwinTransformer_base_patch4_window12_384_train_infer_python.txt b/test_tipc/config/SwinTransformer/SwinTransformer_base_patch4_window12_384_train_infer_python.txt
index e91974a2a7d8eeb14604bfecef8dc511285e4342..e7161c8d0fcac3ad2b7363b22c6ec15c2f0d57da 100644
--- a/test_tipc/config/SwinTransformer/SwinTransformer_base_patch4_window12_384_train_infer_python.txt
+++ b/test_tipc/config/SwinTransformer/SwinTransformer_base_patch4_window12_384_train_infer_python.txt
@@ -37,7 +37,7 @@ pretrained_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/S
infer_model:../inference/
infer_export:True
infer_quant:False
-inference:python/predict_cls.py -c configs/inference_cls.yaml
+inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=384 -o PreProcess.transform_ops.1.CropImage.size=384
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:True|False
-o Global.cpu_num_threads:1|6
diff --git a/test_tipc/config/SwinTransformer/SwinTransformer_large_patch4_window12_384_train_infer_python.txt b/test_tipc/config/SwinTransformer/SwinTransformer_large_patch4_window12_384_train_infer_python.txt
index 14d883a99532bdb827cce3b2abca02e05d39532d..6da5cedf6f07074c01ed4c0f7ca53d290ee62040 100644
--- a/test_tipc/config/SwinTransformer/SwinTransformer_large_patch4_window12_384_train_infer_python.txt
+++ b/test_tipc/config/SwinTransformer/SwinTransformer_large_patch4_window12_384_train_infer_python.txt
@@ -33,11 +33,11 @@ fpgm_export:null
distill_export:null
kl_quant:null
export2:null
-pretrained_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/SwinTransformer_large_patch4_window12_384_pretrained.pdparams
+pretrained_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/SwinTransformer_large_patch4_window12_384_22kto1k_pretrained.pdparams
infer_model:../inference/
infer_export:True
infer_quant:False
-inference:python/predict_cls.py -c configs/inference_cls.yaml
+inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=384 -o PreProcess.transform_ops.1.CropImage.size=384
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:True|False
-o Global.cpu_num_threads:1|6
diff --git a/test_tipc/config/SwinTransformer/SwinTransformer_large_patch4_window7_224_train_infer_python.txt b/test_tipc/config/SwinTransformer/SwinTransformer_large_patch4_window7_224_train_infer_python.txt
index 84c1e4bd3194c316457afcc1e26dcc0c269251f6..359899e1a1094dc9c58cb3b5ca47fd66a7989765 100644
--- a/test_tipc/config/SwinTransformer/SwinTransformer_large_patch4_window7_224_train_infer_python.txt
+++ b/test_tipc/config/SwinTransformer/SwinTransformer_large_patch4_window7_224_train_infer_python.txt
@@ -33,7 +33,7 @@ fpgm_export:null
distill_export:null
kl_quant:null
export2:null
-pretrained_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/SwinTransformer_large_patch4_window7_224_pretrained.pdparams
+pretrained_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/SwinTransformer_large_patch4_window7_224_22kto1k_pretrained.pdparams
infer_model:../inference/
infer_export:True
infer_quant:False
diff --git a/test_tipc/config/VisionTransformer/ViT_base_patch16_384_train_infer_python.txt b/test_tipc/config/VisionTransformer/ViT_base_patch16_384_train_infer_python.txt
index 6d629122095ba35abbbfbed918334757ccfc9d8e..ed88b5154410109ca114718542e5317627848bc5 100644
--- a/test_tipc/config/VisionTransformer/ViT_base_patch16_384_train_infer_python.txt
+++ b/test_tipc/config/VisionTransformer/ViT_base_patch16_384_train_infer_python.txt
@@ -37,7 +37,7 @@ pretrained_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/V
infer_model:../inference/
infer_export:True
infer_quant:False
-inference:python/predict_cls.py -c configs/inference_cls.yaml
+inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=384 -o PreProcess.transform_ops.1.CropImage.size=384
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:True|False
-o Global.cpu_num_threads:1|6
diff --git a/test_tipc/config/VisionTransformer/ViT_base_patch32_384_train_infer_python.txt b/test_tipc/config/VisionTransformer/ViT_base_patch32_384_train_infer_python.txt
index 0febb01aa137a7d2b3d3fc4b390274cb85aaa5c6..9c3abcdc0bef583284c8983cdc4e5a90b16cc1d9 100644
--- a/test_tipc/config/VisionTransformer/ViT_base_patch32_384_train_infer_python.txt
+++ b/test_tipc/config/VisionTransformer/ViT_base_patch32_384_train_infer_python.txt
@@ -37,7 +37,7 @@ pretrained_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/V
infer_model:../inference/
infer_export:True
infer_quant:False
-inference:python/predict_cls.py -c configs/inference_cls.yaml
+inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=384 -o PreProcess.transform_ops.1.CropImage.size=384
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:True|False
-o Global.cpu_num_threads:1|6
diff --git a/test_tipc/config/VisionTransformer/ViT_huge_patch32_384_train_infer_python.txt b/test_tipc/config/VisionTransformer/ViT_huge_patch32_384_train_infer_python.txt
index b6384854233c9f6d01c79ad8ccc51608ecdbf559..942302f2cd170a2f4c94f9051e28bb11c0740cb9 100644
--- a/test_tipc/config/VisionTransformer/ViT_huge_patch32_384_train_infer_python.txt
+++ b/test_tipc/config/VisionTransformer/ViT_huge_patch32_384_train_infer_python.txt
@@ -37,7 +37,7 @@ pretrained_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/V
infer_model:../inference/
infer_export:True
infer_quant:False
-inference:python/predict_cls.py -c configs/inference_cls.yaml
+inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=384 -o PreProcess.transform_ops.1.CropImage.size=384
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:True|False
-o Global.cpu_num_threads:1|6
diff --git a/test_tipc/config/VisionTransformer/ViT_large_patch16_384_train_infer_python.txt b/test_tipc/config/VisionTransformer/ViT_large_patch16_384_train_infer_python.txt
index 3a15cf03b2cfd17283ac507c646ae053b4143aa6..dd067233703248d8a73feeaa8ea4fa40f6f61e70 100644
--- a/test_tipc/config/VisionTransformer/ViT_large_patch16_384_train_infer_python.txt
+++ b/test_tipc/config/VisionTransformer/ViT_large_patch16_384_train_infer_python.txt
@@ -37,7 +37,7 @@ pretrained_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/V
infer_model:../inference/
infer_export:True
infer_quant:False
-inference:python/predict_cls.py -c configs/inference_cls.yaml
+inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=384 -o PreProcess.transform_ops.1.CropImage.size=384
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:True|False
-o Global.cpu_num_threads:1|6
diff --git a/test_tipc/config/VisionTransformer/ViT_large_patch32_384_train_infer_python.txt b/test_tipc/config/VisionTransformer/ViT_large_patch32_384_train_infer_python.txt
index bde1606533086a97a2af1f0252efe5ab3ffc37fc..b5211889a2e91c7f03ebe7f7539327d0007ac4ff 100644
--- a/test_tipc/config/VisionTransformer/ViT_large_patch32_384_train_infer_python.txt
+++ b/test_tipc/config/VisionTransformer/ViT_large_patch32_384_train_infer_python.txt
@@ -37,7 +37,7 @@ pretrained_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/V
infer_model:../inference/
infer_export:True
infer_quant:False
-inference:python/predict_cls.py -c configs/inference_cls.yaml
+inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=384 -o PreProcess.transform_ops.1.CropImage.size=384
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:True|False
-o Global.cpu_num_threads:1|6
diff --git a/test_tipc/prepare.sh b/test_tipc/prepare.sh
index 0a1f70b0665f3ab0a21551e8505d2c4e52cd914e..db9b4fdad265d64d1d1db513ba64fd2be1111b6f 100644
--- a/test_tipc/prepare.sh
+++ b/test_tipc/prepare.sh
@@ -37,6 +37,24 @@ model_name=$(func_parser_value "${lines[1]}")
model_url_value=$(func_parser_value "${lines[35]}")
model_url_key=$(func_parser_key "${lines[35]}")
+if [[ $FILENAME == *GeneralRecognition* ]];then
+ cd dataset
+ rm -rf Aliproduct
+ rm -rf train_reg_all_data.txt
+ rm -rf demo_train
+ wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/data/whole_chain/tipc_shitu_demo_data.tar
+ tar -xf tipc_shitu_demo_data.tar
+ ln -s tipc_shitu_demo_data Aliproduct
+ ln -s tipc_shitu_demo_data/demo_train.txt train_reg_all_data.txt
+ ln -s tipc_shitu_demo_data/demo_train demo_train
+ cd tipc_shitu_demo_data
+ ln -s demo_test.txt val_list.txt
+ cd ../../
+ eval "wget -nc $model_url_value"
+ mv general_PPLCNet_x2_5_pretrained_v1.0.pdparams GeneralRecognition_PPLCNet_x2_5_pretrained.pdparams
+ exit 0
+fi
+
if [ ${MODE} = "lite_train_lite_infer" ] || [ ${MODE} = "lite_train_whole_infer" ];then
# pretrain lite train data
cd dataset
@@ -68,6 +86,11 @@ elif [ ${MODE} = "whole_infer" ] || [ ${MODE} = "klquant_whole_infer" ];then
rm -rf inference
tar xf "${model_name}_inference.tar"
fi
+ if [[ $model_name == "SwinTransformer_large_patch4_window7_224" || $model_name == "SwinTransformer_large_patch4_window12_384" ]];then
+ cmd="mv ${model_name}_22kto1k_pretrained.pdparams ${model_name}_pretrained.pdparams"
+ eval $cmd
+ fi
+
elif [ ${MODE} = "whole_train_whole_infer" ];then
cd dataset
rm -rf ILSVRC2012
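The `ln -s` calls in the new GeneralRecognition branch build a small aliasing layout under `dataset/`: the downloaded demo directory doubles as `Aliproduct`, and its train/test lists are exposed under the names the config expects. A purely illustrative Python rendering of that layout (in a throwaway temp directory, not `dataset/`):

```python
import os
import tempfile

# Recreate the symlink layout from the GeneralRecognition branch above.
dataset = tempfile.mkdtemp()
demo = os.path.join(dataset, "tipc_shitu_demo_data")
os.makedirs(os.path.join(demo, "demo_train"))
for name in ("demo_train.txt", "demo_test.txt"):
    open(os.path.join(demo, name), "w").close()

# ln -s tipc_shitu_demo_data Aliproduct, etc.
os.symlink(demo, os.path.join(dataset, "Aliproduct"))
os.symlink(os.path.join(demo, "demo_train.txt"),
           os.path.join(dataset, "train_reg_all_data.txt"))
os.symlink(os.path.join(demo, "demo_test.txt"),
           os.path.join(demo, "val_list.txt"))

print(sorted(os.listdir(dataset)))
```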
diff --git a/test_tipc/test_train_inference_python.sh b/test_tipc/test_train_inference_python.sh
index ec20912a5f1fd73a33837eaa457b04202c38e08a..0577a03091eb3b3518db4228e12b2f802570518b 100644
--- a/test_tipc/test_train_inference_python.sh
+++ b/test_tipc/test_train_inference_python.sh
@@ -291,8 +291,12 @@ else
export FLAGS_cudnn_deterministic=True
eval $cmd
status_check $? "${cmd}" "${status_log}"
-
- set_eval_pretrain=$(func_set_params "${pretrain_model_key}" "${save_log}/${$model_name}/${train_model_name}")
+
+ if [[ $FILENAME == *GeneralRecognition* ]]; then
+ set_eval_pretrain=$(func_set_params "${pretrain_model_key}" "${save_log}/RecModel/${train_model_name}")
+ else
+ set_eval_pretrain=$(func_set_params "${pretrain_model_key}" "${save_log}/${model_name}/${train_model_name}")
+ fi
# save norm trained models to set pretrain for pact training and fpgm training
if [ ${trainer} = ${trainer_norm} ]; then
load_norm_train_model=${set_eval_pretrain}
@@ -308,7 +312,11 @@ else
if [ ${run_export} != "null" ]; then
# run export model
save_infer_path="${save_log}"
- set_export_weight=$(func_set_params "${export_weight}" "${save_log}/${model_name}/${train_model_name}")
+ if [[ $FILENAME == *GeneralRecognition* ]]; then
+            set_export_weight=$(func_set_params "${export_weight}" "${save_log}/RecModel/${train_model_name}")
+ else
+ set_export_weight=$(func_set_params "${export_weight}" "${save_log}/${model_name}/${train_model_name}")
+ fi
set_save_infer_key=$(func_set_params "${save_infer_key}" "${save_infer_path}")
export_cmd="${python} ${run_export} ${set_export_weight} ${set_save_infer_key}"
eval $export_cmd
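The branches above hinge on `func_set_params`, a helper defined elsewhere in test_tipc (not shown in this diff); presumably it joins a flag key and value, yielding nothing for `null` keys so the flag is omitted from the assembled command. A hedged Python rendering of that assumed behavior:

```python
# Assumed behavior of test_tipc's func_set_params: emit "key=value",
# or an empty string when the key is null/empty (flag omitted).
def func_set_params(key: str, value: str) -> str:
    if key in ("", "null"):
        return ""
    return f"{key}={value}"

# e.g. the GeneralRecognition branch points eval at the RecModel output dir
print(func_set_params("-o Global.pretrained_model",
                      "./output/RecModel/latest"))
```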