diff --git a/docs/en/algorithm_introduction/reid.md b/docs/en/algorithm_introduction/reid.md
index c4c5ac59e5003fad8e25fe7e2e9824bb3819532b..5853fb5ae6b2fdd1e9f8da55f9874abe19da9353 100644
--- a/docs/en/algorithm_introduction/reid.md
+++ b/docs/en/algorithm_introduction/reid.md
@@ -161,7 +161,7 @@ Download the [Market-1501-v15.09.15.zip](https://pan.baidu.com/s/1ntIi2Op?_at_=1
 
 #### 4.1 Model Evaluation
 
-Prepare the `*.pdparams` model parameter file for evaluation. You can use the trained model or the model saved in [2.1.4 Model training] (#214-model training).
+Prepare the `*.pdparams` model parameter file for evaluation. You can use the trained model or the model saved in [3.1.4 Model training](#314-model-training).
 
 - Take the `latest.pdparams` saved during training as an example, execute the following command to evaluate.
 
@@ -191,8 +191,6 @@ Prepare the `*.pdparams` model parameter file for evaluation. You can use the tr
   ```log
   ...
   ...
-  ppcls INFO: unique_endpoints {''}
-  ppcls INFO: Found /root/.paddleclas/weights/resnet50-19c8e357_torch2paddle.pdparams
   ppcls INFO: gallery feature calculation process: [0/125]
   ppcls INFO: gallery feature calculation process: [20/125]
   ppcls INFO: gallery feature calculation process: [40/125]
@@ -224,8 +222,6 @@ Prepare the `*.pdparams` model parameter file for evaluation. You can use the tr
   ```log
   ...
   ...
-  ppcls INFO: unique_endpoints {''}
-  ppcls INFO: Found /root/.paddleclas/weights/resnet50-19c8e357_torch2paddle.pdparams
   ppcls INFO: gallery feature calculation process: [0/125]
   ppcls INFO: gallery feature calculation process: [20/125]
   ppcls INFO: gallery feature calculation process: [40/125]
@@ -324,25 +320,25 @@ You can convert the model file saved during training into an inference model and
 
 ##### 4.2.3 Inference based on C++ prediction engine
 
-PaddleClas provides an example of inference based on the C++ prediction engine, you can refer to [Server-side C++ prediction](../inference_deployment/cpp_deploy.md) to complete the corresponding inference deployment. If you are using the Windows platform, you can refer to the Visual Studio 2019 Community CMake Compilation Guide to complete the corresponding prediction library compilation and model prediction work.
+PaddleClas provides an example of inference based on the C++ prediction engine, you can refer to [Server-side C++ prediction](../inference_deployment/cpp_deploy_en.md) to complete the corresponding inference deployment. If you are using the Windows platform, you can refer to the Visual Studio 2019 Community CMake Compilation Guide to complete the corresponding prediction library compilation and model prediction work.
 
 #### 4.3 Service deployment
 
 Paddle Serving provides high-performance, flexible and easy-to-use industrial-grade online inference services. Paddle Serving supports RESTful, gRPC, bRPC and other protocols, and provides inference solutions in a variety of heterogeneous hardware and operating system environments. For more introduction to Paddle Serving, please refer to the Paddle Serving code repository.
 
-PaddleClas provides an example of model serving deployment based on Paddle Serving. You can refer to [Model serving deployment](../inference_deployment/paddle_serving_deploy.md) to complete the corresponding deployment.
+PaddleClas provides an example of model serving deployment based on Paddle Serving. You can refer to [Model serving deployment](../inference_deployment/recognition_serving_deploy_en.md) to complete the corresponding deployment.
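+
+For quick reference, the following is a minimal sketch of the usual first step of that deployment: converting the exported inference model into the Serving server/client format. It is an assumed illustration rather than text taken from the linked document; the `./inference/` directory and the two output folder names are placeholders and may differ in your setup.
+
+```shell
+# Hypothetical paths: point --dirname at your exported inference model and rename the output folders as needed.
+python3.7 -m paddle_serving_client.convert \
+    --dirname ./inference/ \
+    --model_filename inference.pdmodel \
+    --params_filename inference.pdiparams \
+    --serving_server ./serving_server/ \
+    --serving_client ./serving_client/
+```
+
+The generated `serving_client_conf.prototxt` inside the client folder is the file whose `feed_var` fields are referenced in the C++ Serving notes later in this document.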
 
 #### 4.4 Lite deployment
 
 Paddle Lite is a high-performance, lightweight, flexible and easily extensible deep learning inference framework, positioned to support multiple hardware platforms including mobile, embedded and server. For more introduction to Paddle Lite, please refer to the Paddle Lite code repository.
 
-PaddleClas provides an example of deploying models based on Paddle Lite. You can refer to [Deployment](../inference_deployment/paddle_lite_deploy.md) to complete the corresponding deployment.
+PaddleClas provides an example of deploying models based on Paddle Lite. You can refer to [Deployment](../inference_deployment/paddle_lite_deploy_en.md) to complete the corresponding deployment.
 
 #### 4.5 Paddle2ONNX Model Conversion and Prediction
 
 Paddle2ONNX supports converting PaddlePaddle model format to ONNX model format. The deployment of Paddle models to various inference engines can be completed through ONNX, including TensorRT/OpenVINO/MNN/TNN/NCNN, and other inference engines or hardware that support the ONNX open source format. For more information about Paddle2ONNX, please refer to the Paddle2ONNX code repository.
 
-PaddleClas provides an example of converting an inference model to an ONNX model and making inference prediction based on Paddle2ONNX. You can refer to [Paddle2ONNX model conversion and prediction](../../../deploy/paddle2onnx/readme.md) to complete the corresponding deployment work.
+PaddleClas provides an example of converting an inference model to an ONNX model and making inference prediction based on Paddle2ONNX. You can refer to [Paddle2ONNX model conversion and prediction](../../../deploy/paddle2onnx/readme_en.md) to complete the corresponding deployment work.
 
 ### 5. Summary
diff --git a/docs/en/inference_deployment/classification_serving_deploy_en.md b/docs/en/inference_deployment/classification_serving_deploy_en.md
index 120871edddbe1ca7b6ac1b3a72a3e89e8d1de39a..a46b000702ce8622c33777497642b5e98ed48061 100644
--- a/docs/en/inference_deployment/classification_serving_deploy_en.md
+++ b/docs/en/inference_deployment/classification_serving_deploy_en.md
@@ -181,7 +181,7 @@ Different from Python Serving, the C++ Serving client calls C++ OP to predict, s
 # One-click compile and install Serving server, set SERVING_BIN
 source ./build_server.sh python3.7
 ```
- **Note: The path set by **[build_server.sh](./build_server.sh#L55-L62) may need to be modified according to the actual machine environment such as CUDA, python version, etc., and then compiled.
+ **Note:** The path set by [build_server.sh](../../../deploy/paddleserving/build_server.sh#L55-L62) may need to be modified according to the actual machine environment (e.g. CUDA and Python versions) before compiling; if you encounter a non-network error while running `build_server.sh`, you can manually copy the commands in the script to the terminal and execute them one by one.
 
 - Modify the client file `ResNet50_client/serving_client_conf.prototxt` , change the field after `feed_type:` to 20, change the field after the first `shape:` to 1 and delete the rest of the `shape` fields.
   ```log
   feed_var {
     name: "inputs"
     alias_name: "inputs"
     is_lod_tensor: false
     feed_type: 20
     shape: 1
   }
   ```
-- Modify part of the code of [`test_cpp_serving_client`](./test_cpp_serving_client.py)
-  1. Modify the [`feed={"inputs": image}`](./test_cpp_serving_client.py#L28) part of the code, and change the path after `load_client_config` to `ResNet50_client/serving_client_conf.prototxt` .
-  2. Modify the [`feed={"inputs": image}`](./test_cpp_serving_client.py#L45) part of the code, and change `inputs` to be the same as the `feed_var` field in `ResNet50_client/serving_client_conf.prototxt` name` is the same. Since `name` in some model client files is `x` instead of `inputs` , you need to pay attention to this when using these models for C++ Serving deployment.
+- Modify part of the code of [`test_cpp_serving_client`](../../../deploy/paddleserving/test_cpp_serving_client.py)
+  1. Modify the [`load_client_config`](../../../deploy/paddleserving/test_cpp_serving_client.py#L28) part of the code, and change the path after `load_client_config` to `ResNet50_client/serving_client_conf.prototxt`.
+  2. Modify the [`feed={"inputs": image}`](../../../deploy/paddleserving/test_cpp_serving_client.py#L45) part of the code, and change `inputs` so that it matches the `name` under the `feed_var` field in `ResNet50_client/serving_client_conf.prototxt`. Since the `name` in some model client files is `x` rather than `inputs`, you need to pay attention to this when using those models for C++ Serving deployment.
 
 - Start the service:
   ```shell
diff --git a/docs/en/inference_deployment/recognition_serving_deploy_en.md b/docs/en/inference_deployment/recognition_serving_deploy_en.md
index bf8061376a6db8fb6cb8c256c8cc5a74c0fb1326..6d1db098d91c6d4bacba94dc7fff1a0511d722ba 100644
--- a/docs/en/inference_deployment/recognition_serving_deploy_en.md
+++ b/docs/en/inference_deployment/recognition_serving_deploy_en.md
@@ -219,7 +219,7 @@ Different from Python Serving, the C++ Serving client calls C++ OP to predict, s
 # One-click compile and install Serving server, set SERVING_BIN
 source ./build_server.sh python3.7
 ```
- **Note:** The path set by [build_server.sh](../build_server.sh#L55-L62) may need to be modified according to the actual machine environment such as CUDA, python version, etc., and then compiled.
+ **Note:** The path set by [build_server.sh](../build_server.sh#L55-L62) may need to be modified according to the actual machine environment (e.g. CUDA and Python versions) before compiling; if you encounter a non-network error while running `build_server.sh`, you can manually copy the commands in the script to the terminal and execute them one by one.
 
 - The input and output format used by C++ Serving is different from that of Python, so you need to execute the following command to overwrite the files below [3.1] (#31-model conversion) by copying the 4 files to get the corresponding 4 prototxt files in the folder.
   ```shell
diff --git a/docs/zh_CN/algorithm_introduction/reid.md b/docs/zh_CN/algorithm_introduction/reid.md
index 4c0cad846caa81b838ac8c0ec664e9108e3dc647..f8c8705ac59e9950b14587730c971b81e81f48b3 100644
--- a/docs/zh_CN/algorithm_introduction/reid.md
+++ b/docs/zh_CN/algorithm_introduction/reid.md
@@ -161,7 +161,7 @@
 
 #### 4.1 模型评估
 
-准备用于评估的`*.pdparams`模型参数文件,可以使用训练好的模型,也可以使用[2.1.4 模型训练](#214-模型训练)中保存的模型。
+准备用于评估的`*.pdparams`模型参数文件,可以使用训练好的模型,也可以使用[3.1.4 模型训练](#314-模型训练)中保存的模型。
 
 - 以训练过程中保存的`latest.pdparams`为例,执行如下命令即可进行评估。
 
@@ -191,8 +191,6 @@
   ```log
   ...
   ...
-  ppcls INFO: unique_endpoints {''}
-  ppcls INFO: Found /root/.paddleclas/weights/resnet50-19c8e357_torch2paddle.pdparams
   ppcls INFO: gallery feature calculation process: [0/125]
   ppcls INFO: gallery feature calculation process: [20/125]
   ppcls INFO: gallery feature calculation process: [40/125]
@@ -330,7 +326,7 @@ PaddleClas 提供了基于 C++ 预测引擎推理的示例,您可以参考[服
 
 Paddle Serving 提供高性能、灵活易用的工业级在线推理服务。Paddle Serving 支持 RESTful、gRPC、bRPC 等多种协议,提供多种异构硬件和多种操作系统环境下推理解决方案。更多关于Paddle Serving 的介绍,可以参考Paddle Serving 代码仓库。
 
-PaddleClas 提供了基于 Paddle Serving 来完成模型服务化部署的示例,您可以参考[模型服务化部署](../inference_deployment/paddle_serving_deploy.md)来完成相应的部署工作。
+PaddleClas 提供了基于 Paddle Serving 来完成模型服务化部署的示例,您可以参考[模型服务化部署](../inference_deployment/recognition_serving_deploy.md)来完成相应的部署工作。
 
 #### 4.4 端侧部署
diff --git a/docs/zh_CN/inference_deployment/classification_serving_deploy.md b/docs/zh_CN/inference_deployment/classification_serving_deploy.md
index 3d9c999625535b9a70c2ba443512717bcb3a975c..330c4cf965a83b818658d9b6af37300dbe65fc64 100644
--- a/docs/zh_CN/inference_deployment/classification_serving_deploy.md
+++ b/docs/zh_CN/inference_deployment/classification_serving_deploy.md
@@ -182,7 +182,7 @@ test_cpp_serving_client.py # rpc方式发送C++ serving预测请求的脚本
 # 一键编译安装Serving server、设置 SERVING_BIN
 source ./build_server.sh python3.7
 ```
- **注:**[build_server.sh](./build_server.sh#L55-L62)所设定的路径可能需要根据实际机器上的环境如CUDA、python版本等作一定修改,然后再编译。
+ **注:**[build_server.sh](../../../deploy/paddleserving/build_server.sh#L55-L62)所设定的路径可能需要根据实际机器上的环境如CUDA、python版本等作一定修改,然后再编译;如果执行`build_server.sh`过程中遇到非网络原因的报错,则可以手动将脚本中的命令逐条复制到终端执行。
 
 - 修改客户端文件 `ResNet50_vd_client/serving_client_conf.prototxt` ,将 `feed_type:` 后的字段改为20,将第一个 `shape:` 后的字段改为1并删掉其余的 `shape` 字段。
   ```log
@@ -194,9 +194,9 @@ test_cpp_serving_client.py # rpc方式发送C++ serving预测请求的脚本
     shape: 1
   }
   ```
-- 修改 [`test_cpp_serving_client`](./test_cpp_serving_client.py) 的部分代码
-  1. 修改 [`load_client_config`](./test_cpp_serving_client.py#L28) 处的代码,将 `load_client_config` 后的路径改为 `ResNet50_vd_client/serving_client_conf.prototxt` 。
-  2. 修改 [`feed={"inputs": image}`](./test_cpp_serving_client.py#L45) 处的代码,将 `inputs` 改为与 `ResNet50_vd_client/serving_client_conf.prototxt` 中 `feed_var` 字段下面的 `name` 一致。由于部分模型client文件中的 `name` 为 `x` 而不是 `inputs` ,因此使用这些模型进行C++ Serving部署时需要注意这一点。
+- 修改 [`test_cpp_serving_client`](../../../deploy/paddleserving/test_cpp_serving_client.py) 的部分代码
+  1. 修改 [`load_client_config`](../../../deploy/paddleserving/test_cpp_serving_client.py#L28) 处的代码,将 `load_client_config` 后的路径改为 `ResNet50_vd_client/serving_client_conf.prototxt` 。
+  2. 修改 [`feed={"inputs": image}`](../../../deploy/paddleserving/test_cpp_serving_client.py#L45) 处的代码,将 `inputs` 改为与 `ResNet50_vd_client/serving_client_conf.prototxt` 中 `feed_var` 字段下面的 `name` 一致。由于部分模型client文件中的 `name` 为 `x` 而不是 `inputs` ,因此使用这些模型进行C++ Serving部署时需要注意这一点。
 
 - 启动服务:
   ```shell
diff --git a/docs/zh_CN/inference_deployment/recognition_serving_deploy.md b/docs/zh_CN/inference_deployment/recognition_serving_deploy.md
index 5311fe997269aecc1f956e8ebcdbcb628b3ed23c..f823f7f284a6179a78d8bb61c027d17259674acb 100644
--- a/docs/zh_CN/inference_deployment/recognition_serving_deploy.md
+++ b/docs/zh_CN/inference_deployment/recognition_serving_deploy.md
@@ -218,7 +218,7 @@ python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUD
 # 一键编译安装Serving server、设置 SERVING_BIN
 source ./build_server.sh python3.7
 ```
- **注:**[build_server.sh](../build_server.sh#L55-L62)所设定的路径可能需要根据实际机器上的环境如CUDA、python版本等作一定修改,然后再编译。
+ **注:**[build_server.sh](../build_server.sh#L55-L62)所设定的路径可能需要根据实际机器上的环境如CUDA、python版本等作一定修改,然后再编译;如果执行`build_server.sh`过程中遇到非网络原因的报错,则可以手动将脚本中的命令逐条复制到终端执行。
 
 - C++ Serving使用的输入输出格式与Python不同,因此需要执行以下命令,将4个文件复制到下的文件覆盖掉[3.1](#31-模型转换)得到文件夹中的对应4个prototxt文件。
   ```shell