Unverified commit 68d7abf5, authored by HydrogenSulfate and committed by GitHub

Merge pull request #2203 from HydrogenSulfate/refine_dev_docs

refine reid and deployment docs
@@ -161,7 +161,7 @@ Download the [Market-1501-v15.09.15.zip](https://pan.baidu.com/s/1ntIi2Op?_at_=1
 #### 4.1 Model Evaluation
-Prepare the `*.pdparams` model parameter file for evaluation. You can use the trained model or the model saved in [2.1.4 Model training] (#214-model training).
+Prepare the `*.pdparams` model parameter file for evaluation. You can use the trained model or the model saved in [3.1.4 Model training](#314-model-training).
 - Take the `latest.pdparams` saved during training as an example, execute the following command to evaluate.
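Before launching the evaluation it can help to confirm that the `*.pdparams` checkpoint loads cleanly. The snippet below is a minimal sketch and not part of the original walkthrough; the checkpoint path is illustrative and assumes the default PaddleClas output directory layout.

```python
import paddle

# Path is illustrative; point it at the checkpoint you intend to evaluate.
state_dict = paddle.load("./output/RecModel/latest.pdparams")

# A *.pdparams file deserializes to a dict mapping parameter names to tensors;
# listing a few entries is a quick sanity check that the file is intact.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```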
@@ -191,8 +191,6 @@ Prepare the `*.pdparams` model parameter file for evaluation. You can use the tr
 ```log
 ...
 ...
-ppcls INFO: unique_endpoints {''}
-ppcls INFO: Found /root/.paddleclas/weights/resnet50-19c8e357_torch2paddle.pdparams
 ppcls INFO: gallery feature calculation process: [0/125]
 ppcls INFO: gallery feature calculation process: [20/125]
 ppcls INFO: gallery feature calculation process: [40/125]
@@ -224,8 +222,6 @@ Prepare the `*.pdparams` model parameter file for evaluation. You can use the tr
 ```log
 ...
 ...
-ppcls INFO: unique_endpoints {''}
-ppcls INFO: Found /root/.paddleclas/weights/resnet50-19c8e357_torch2paddle.pdparams
 ppcls INFO: gallery feature calculation process: [0/125]
 ppcls INFO: gallery feature calculation process: [20/125]
 ppcls INFO: gallery feature calculation process: [40/125]
@@ -324,25 +320,25 @@ You can convert the model file saved during training into an inference model and
 ##### 4.2.3 Inference based on C++ prediction engine
-PaddleClas provides an example of inference based on the C++ prediction engine, you can refer to [Server-side C++ prediction](../inference_deployment/cpp_deploy.md) to complete the corresponding inference deployment. If you are using the Windows platform, you can refer to the Visual Studio 2019 Community CMake Compilation Guide to complete the corresponding prediction library compilation and model prediction work.
+PaddleClas provides an example of inference based on the C++ prediction engine, you can refer to [Server-side C++ prediction](../inference_deployment/cpp_deploy_en.md) to complete the corresponding inference deployment. If you are using the Windows platform, you can refer to the Visual Studio 2019 Community CMake Compilation Guide to complete the corresponding prediction library compilation and model prediction work.
 #### 4.3 Service deployment
 Paddle Serving provides high-performance, flexible and easy-to-use industrial-grade online inference services. Paddle Serving supports RESTful, gRPC, bRPC and other protocols, and provides inference solutions in a variety of heterogeneous hardware and operating system environments. For more introduction to Paddle Serving, please refer to the Paddle Serving code repository.
-PaddleClas provides an example of model serving deployment based on Paddle Serving. You can refer to [Model serving deployment](../inference_deployment/paddle_serving_deploy.md) to complete the corresponding deployment.
+PaddleClas provides an example of model serving deployment based on Paddle Serving. You can refer to [Model serving deployment](../inference_deployment/recognition_serving_deploy_en.md) to complete the corresponding deployment.
 #### 4.4 Lite deployment
 Paddle Lite is a high-performance, lightweight, flexible and easily extensible deep learning inference framework, positioned to support multiple hardware platforms including mobile, embedded and server. For more introduction to Paddle Lite, please refer to the Paddle Lite code repository.
-PaddleClas provides an example of deploying models based on Paddle Lite. You can refer to [Deployment](../inference_deployment/paddle_lite_deploy.md) to complete the corresponding deployment.
+PaddleClas provides an example of deploying models based on Paddle Lite. You can refer to [Deployment](../inference_deployment/paddle_lite_deploy_en.md) to complete the corresponding deployment.
 #### 4.5 Paddle2ONNX Model Conversion and Prediction
 Paddle2ONNX supports converting PaddlePaddle model format to ONNX model format. The deployment of Paddle models to various inference engines can be completed through ONNX, including TensorRT/OpenVINO/MNN/TNN/NCNN, and other inference engines or hardware that support the ONNX open source format. For more information about Paddle2ONNX, please refer to the Paddle2ONNX code repository.
-PaddleClas provides an example of converting an inference model to an ONNX model and making inference prediction based on Paddle2ONNX. You can refer to [Paddle2ONNX model conversion and prediction](../../../deploy/paddle2onnx/readme.md) to complete the corresponding deployment work.
+PaddleClas provides an example of converting an inference model to an ONNX model and making inference prediction based on Paddle2ONNX. You can refer to [Paddle2ONNX model conversion and prediction](../../../deploy/paddle2onnx/readme_en.md) to complete the corresponding deployment work.
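To make that last step concrete, here is a minimal sketch of running a converted model with ONNX Runtime. It assumes, beyond what the original states, that the inference model has already been converted to `inference.onnx` with the paddle2onnx tool and that the network takes a single NCHW float32 input; the file name and input shape are illustrative.

```python
import numpy as np
import onnxruntime as ort

# "inference.onnx" is an illustrative name for a model already converted
# from the Paddle inference format with the paddle2onnx tool.
session = ort.InferenceSession("inference.onnx", providers=["CPUExecutionProvider"])

# Query the input name instead of hard-coding it.
input_name = session.get_inputs()[0].name

# Dummy NCHW batch standing in for a preprocessed image.
dummy = np.random.rand(1, 3, 224, 224).astype("float32")

outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```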
 ### 5. Summary
...
@@ -181,7 +181,7 @@ Different from Python Serving, the C++ Serving client calls C++ OP to predict, s
 # One-click compile and install Serving server, set SERVING_BIN
 source ./build_server.sh python3.7
 ```
-**Note: The path set by **[build_server.sh](./build_server.sh#L55-L62) may need to be modified according to the actual machine environment such as CUDA, python version, etc., and then compiled.
+**Note:** The paths set by [build_server.sh](../../../deploy/paddleserving/build_server.sh#L55-L62) may need to be modified according to the actual machine environment (CUDA, Python version, etc.) before compiling; if you encounter a non-network error while running `build_server.sh`, you can manually copy the commands in the script to the terminal and execute them one by one.
 - Modify the client file `ResNet50_client/serving_client_conf.prototxt`: change the field after `feed_type:` to 20, change the field after the first `shape:` to 1, and delete the rest of the `shape` fields.
 ```log
@@ -193,9 +193,9 @@ Different from Python Serving, the C++ Serving client calls C++ OP to predict, s
 shape: 1
 }
 ```
-- Modify part of the code of [`test_cpp_serving_client`](./test_cpp_serving_client.py)
+- Modify part of the code of [`test_cpp_serving_client`](../../../deploy/paddleserving/test_cpp_serving_client.py)
-  1. Modify the [`feed={"inputs": image}`](./test_cpp_serving_client.py#L28) part of the code, and change the path after `load_client_config` to `ResNet50_client/serving_client_conf.prototxt` .
+  1. Modify the [`load_client_config`](../../../deploy/paddleserving/test_cpp_serving_client.py#L28) part of the code, and change the path after `load_client_config` to `ResNet50_client/serving_client_conf.prototxt`.
-  2. Modify the [`feed={"inputs": image}`](./test_cpp_serving_client.py#L45) part of the code, and change `inputs` to be the same as the `feed_var` field in `ResNet50_client/serving_client_conf.prototxt` name` is the same. Since `name` in some model client files is `x` instead of `inputs` , you need to pay attention to this when using these models for C++ Serving deployment.
+  2. Modify the [`feed={"inputs": image}`](../../../deploy/paddleserving/test_cpp_serving_client.py#L45) part of the code, and change `inputs` so that it matches the `name` under the `feed_var` field in `ResNet50_client/serving_client_conf.prototxt`. Since `name` in some model client files is `x` instead of `inputs`, you need to pay attention to this when using these models for C++ Serving deployment.
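For orientation, below is a minimal sketch of what the client call looks like after the two edits above. This is not the repository's script verbatim: it assumes the standard `paddle_serving_client` API, and the service address, test image path, and fetch variable name are illustrative assumptions.

```python
import base64

from paddle_serving_client import Client

client = Client()
# Edit 1: point load_client_config at the converted client config.
client.load_client_config("ResNet50_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])  # illustrative service address

# feed_type: 20 means the image is sent as a base64-encoded byte string.
with open("test.jpg", "rb") as f:  # illustrative test image
    image = base64.b64encode(f.read()).decode("utf8")

# Edit 2: the feed key must match the `name` under `feed_var` in the
# client config ("inputs" here; some models use "x" instead).
fetch_map = client.predict(
    feed={"inputs": image},
    fetch=["prediction"],  # illustrative fetch variable name
    batch=False,
)
print(fetch_map)
```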
 - Start the service:
 ```shell
...
@@ -219,7 +219,7 @@ Different from Python Serving, the C++ Serving client calls C++ OP to predict, s
 # One-click compile and install Serving server, set SERVING_BIN
 source ./build_server.sh python3.7
 ```
-**Note:** The path set by [build_server.sh](../build_server.sh#L55-L62) may need to be modified according to the actual machine environment such as CUDA, python version, etc., and then compiled.
+**Note:** The paths set by [build_server.sh](../build_server.sh#L55-L62) may need to be modified according to the actual machine environment (CUDA, Python version, etc.) before compiling; if you encounter a non-network error while running `build_server.sh`, you can manually copy the commands in the script to the terminal and execute them one by one.
 - The input and output formats used by C++ Serving differ from those of Python Serving, so you need to execute the following command to copy the 4 files and overwrite the corresponding 4 prototxt files in the folder obtained in [3.1](#31-model-conversion).
 ```shell
...
@@ -161,7 +161,7 @@
 #### 4.1 Model Evaluation
-Prepare the `*.pdparams` model parameter file for evaluation. You can use a trained model or the model saved in [2.1.4 Model training](#214-模型训练).
+Prepare the `*.pdparams` model parameter file for evaluation. You can use a trained model or the model saved in [3.1.4 Model training](#314-模型训练).
 - Take the `latest.pdparams` saved during training as an example, and execute the following command to evaluate.
@@ -191,8 +191,6 @@
 ```log
 ...
 ...
-ppcls INFO: unique_endpoints {''}
-ppcls INFO: Found /root/.paddleclas/weights/resnet50-19c8e357_torch2paddle.pdparams
 ppcls INFO: gallery feature calculation process: [0/125]
 ppcls INFO: gallery feature calculation process: [20/125]
 ppcls INFO: gallery feature calculation process: [40/125]
@@ -223,8 +221,6 @@
 ```log
 ...
 ...
-ppcls INFO: unique_endpoints {''}
-ppcls INFO: Found /root/.paddleclas/weights/resnet50-19c8e357_torch2paddle.pdparams
 ppcls INFO: gallery feature calculation process: [0/125]
 ppcls INFO: gallery feature calculation process: [20/125]
 ppcls INFO: gallery feature calculation process: [40/125]
@@ -330,7 +326,7 @@ PaddleClas provides an example of inference based on the C++ prediction engine, you can refer to [Ser
 Paddle Serving provides high-performance, flexible and easy-to-use industrial-grade online inference services. Paddle Serving supports RESTful, gRPC, bRPC and other protocols, and provides inference solutions for a variety of heterogeneous hardware and operating system environments. For more about Paddle Serving, please refer to the Paddle Serving code repository.
-PaddleClas provides an example of model serving deployment based on Paddle Serving. You can refer to [Model serving deployment](../inference_deployment/paddle_serving_deploy.md) to complete the corresponding deployment work.
+PaddleClas provides an example of model serving deployment based on Paddle Serving. You can refer to [Model serving deployment](../inference_deployment/recognition_serving_deploy.md) to complete the corresponding deployment work.
 #### 4.4 Lite deployment
...
@@ -182,7 +182,7 @@ test_cpp_serving_client.py # Script that sends C++ Serving prediction requests via RPC
 # One-click compile and install the Serving server, and set SERVING_BIN
 source ./build_server.sh python3.7
 ```
-**Note:** The paths set by [build_server.sh](./build_server.sh#L55-L62) may need to be modified according to the actual machine environment (CUDA, Python version, etc.) before compiling.
+**Note:** The paths set by [build_server.sh](../../../deploy/paddleserving/build_server.sh#L55-L62) may need to be modified according to the actual machine environment (CUDA, Python version, etc.) before compiling; if `build_server.sh` fails with a non-network error, you can manually copy the commands in the script to the terminal and execute them one by one.
 - Modify the client file `ResNet50_vd_client/serving_client_conf.prototxt`: change the field after `feed_type:` to 20, change the field after the first `shape:` to 1, and delete the rest of the `shape` fields.
 ```log
@@ -194,9 +194,9 @@ test_cpp_serving_client.py # Script that sends C++ Serving prediction requests via RPC
 shape: 1
 }
 ```
-- Modify part of the code of [`test_cpp_serving_client`](./test_cpp_serving_client.py)
+- Modify part of the code of [`test_cpp_serving_client`](../../../deploy/paddleserving/test_cpp_serving_client.py)
-  1. Modify the code at [`load_client_config`](./test_cpp_serving_client.py#L28), changing the path after `load_client_config` to `ResNet50_vd_client/serving_client_conf.prototxt`
+  1. Modify the code at [`load_client_config`](../../../deploy/paddleserving/test_cpp_serving_client.py#L28), changing the path after `load_client_config` to `ResNet50_vd_client/serving_client_conf.prototxt`
-  2. Modify the code at [`feed={"inputs": image}`](./test_cpp_serving_client.py#L45), changing `inputs` to match the `name` under the `feed_var` field in `ResNet50_vd_client/serving_client_conf.prototxt`. Since `name` in some models' client files is `x` rather than `inputs`, pay attention to this when deploying those models with C++ Serving.
+  2. Modify the code at [`feed={"inputs": image}`](../../../deploy/paddleserving/test_cpp_serving_client.py#L45), changing `inputs` to match the `name` under the `feed_var` field in `ResNet50_vd_client/serving_client_conf.prototxt`. Since `name` in some models' client files is `x` rather than `inputs`, pay attention to this when deploying those models with C++ Serving.
 - Start the service:
 ```shell
...
@@ -218,7 +218,7 @@ python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUD
 # One-click compile and install the Serving server, and set SERVING_BIN
 source ./build_server.sh python3.7
 ```
-**Note:** The paths set by [build_server.sh](../build_server.sh#L55-L62) may need to be modified according to the actual machine environment (CUDA, Python version, etc.) before compiling.
+**Note:** The paths set by [build_server.sh](../build_server.sh#L55-L62) may need to be modified according to the actual machine environment (CUDA, Python version, etc.) before compiling; if `build_server.sh` fails with a non-network error, you can manually copy the commands in the script to the terminal and execute them one by one.
 - The input and output formats used by C++ Serving differ from those of Python Serving, so you need to execute the following command to copy the 4 files and overwrite the corresponding 4 prototxt files in the folder obtained in [3.1](#31-模型转换).
 ```shell
...