Download the paddle wheel package that matches your environment; version 2.2.2 is recommended.
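As a sketch of the install step (assuming a CPU environment; pick the matching GPU wheel instead if your setup requires it):

```shell
# install the recommended PaddlePaddle version (CPU wheel shown as an example)
python3 -m pip install paddlepaddle==2.2.2
```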
2. Prepare the PaddleServing operating environment as follows:
## C++ Serving
Python-based service deployment has the clear advantage of easy secondary development, but real applications often demand better performance, so PaddleServing also provides a higher-performance C++ deployment option.
C++ service deployment uses the same environment setup and data preparation steps as Python; the differences lie in how the service is started and how the client sends requests.
| Language | Speed | Secondary development | Compilation required |
|----------|-------|-----------------------|----------------------|
| C++ | Fast | Slightly difficult | Not needed for single-model prediction; needed for multi-model pipelines |
| Python | Moderate | Easy | Not required for single- or multi-model use |
1. Compile Serving
To improve prediction performance, the C++ service also supports multi-model pipelines. Unlike the Python Pipeline service, a multi-model pipeline requires the model's pre- and post-processing code to be written on the server side, so a local recompilation is needed to build the serving binary. For details, refer to the official document: [How to compile Serving](https://github.com/PaddlePaddle/Serving/blob/v0.8.3/doc/Compile_EN.md)
2. Run the following command to start the service.
```
# Start the service and save the running log in log.txt
```
After the service starts successfully, a log similar to the following is printed to log.txt:
![](./imgs/start_server.png)
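A hedged sketch of what such a start command typically looks like with Paddle Serving's CLI (the model directory names, op names, and port below are illustrative assumptions, not values taken from this document):

```shell
# launch the C++ server with detection and recognition models chained (example values)
python3 -m paddle_serving_server.serve \
    --model ppocrv2_det_serving ppocrv2_rec_serving \
    --op GeneralDetectionOp GeneralInferOp \
    --port 9293 &> log.txt &
```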
3. Send service request
Because pre- and post-processing run on the C++ server side, and to speed up transmission, the input sent to the C++ server is only the base64-encoded string of the image, so the client configuration must be modified manually.
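On the client side, producing that base64 string is straightforward; a minimal sketch (the helper name `cv2_to_base64` mirrors the convention used in Paddle Serving examples and is illustrative):

```python
import base64

def cv2_to_base64(image_bytes: bytes) -> str:
    """Encode raw image bytes as the base64 string sent in the request body."""
    return base64.b64encode(image_bytes).decode("utf8")

# dummy bytes standing in for a real image file read with open(path, "rb")
encoded = cv2_to_base64(b"\x89PNG dummy image data")
print(encoded[:16])
```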
Change the `feed_type` and `shape` fields in `ppocrv2_det_client/serving_client_conf.prototxt` to the following: