diff --git a/docs/en/PULC/PULC_car_exists_en.md b/docs/en/PULC/PULC_car_exists_en.md
index 33c0932e6f118d7f9e31650e7d1e9754af19ec17..91cc3733a0a256e5055c0c68b377b818a3a1f106 100644
--- a/docs/en/PULC/PULC_car_exists_en.md
+++ b/docs/en/PULC/PULC_car_exists_en.md
@@ -438,7 +438,7 @@ PaddleClas provides an example about how to deploy with C++. Please refer to [De
 
 Paddle Serving is a flexible, high-performance carrier for machine learning models, and supports different protocol, such as RESTful, gRPC, bRPC and so on, which provides different deployment solutions for a variety of heterogeneous hardware and operating system environments. Please refer [Paddle Serving](https://github.com/PaddlePaddle/Serving) for more information.
 
-PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/paddle_serving_deploy_en.md).
+PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/classification_serving_deploy_en.md).
diff --git a/docs/en/PULC/PULC_language_classification_en.md b/docs/en/PULC/PULC_language_classification_en.md
index c7cd5f5db9c01f01c4fbb2299086bc1adcfc98d1..450f3e75629e7b63bb0d5381f7f3b3d3b23f6d60 100644
--- a/docs/en/PULC/PULC_language_classification_en.md
+++ b/docs/en/PULC/PULC_language_classification_en.md
@@ -451,7 +451,7 @@ PaddleClas provides an example about how to deploy with C++. Please refer to [De
 
 Paddle Serving is a flexible, high-performance carrier for machine learning models, and supports different protocol, such as RESTful, gRPC, bRPC and so on, which provides different deployment solutions for a variety of heterogeneous hardware and operating system environments. Please refer [Paddle Serving](https://github.com/PaddlePaddle/Serving) for more information.
 
-PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/paddle_serving_deploy_en.md).
+PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/classification_serving_deploy_en.md).
diff --git a/docs/en/PULC/PULC_person_attribute_en.md b/docs/en/PULC/PULC_person_attribute_en.md
index 173313aad1a684289f3a6825cdf73ea01493847d..bf1028940648af3128ef85a8e12585726f42091f 100644
--- a/docs/en/PULC/PULC_person_attribute_en.md
+++ b/docs/en/PULC/PULC_person_attribute_en.md
@@ -429,7 +429,7 @@ PaddleClas provides an example about how to deploy with C++. Please refer to [De
 
 Paddle Serving is a flexible, high-performance carrier for machine learning models, and supports different protocol, such as RESTful, gRPC, bRPC and so on, which provides different deployment solutions for a variety of heterogeneous hardware and operating system environments. Please refer [Paddle Serving](https://github.com/PaddlePaddle/Serving) for more information.
 
-PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/paddle_serving_deploy_en.md).
+PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/classification_serving_deploy_en.md).
diff --git a/docs/en/PULC/PULC_person_exists_en.md b/docs/en/PULC/PULC_person_exists_en.md
index baf5ce3e4c295a57d928853f5a0b3da1d3c7b366..31d452fd767a802f4421a3d65a034cbf9f482b1f 100644
--- a/docs/en/PULC/PULC_person_exists_en.md
+++ b/docs/en/PULC/PULC_person_exists_en.md
@@ -439,7 +439,7 @@ PaddleClas provides an example about how to deploy with C++. Please refer to [De
 
 Paddle Serving is a flexible, high-performance carrier for machine learning models, and supports different protocol, such as RESTful, gRPC, bRPC and so on, which provides different deployment solutions for a variety of heterogeneous hardware and operating system environments. Please refer [Paddle Serving](https://github.com/PaddlePaddle/Serving) for more information.
 
-PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/paddle_serving_deploy_en.md).
+PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/classification_serving_deploy_en.md).
diff --git a/docs/en/PULC/PULC_safety_helmet_en.md b/docs/en/PULC/PULC_safety_helmet_en.md
index d2e5cb32931cdc98b0776f4692e6162e907aa6fa..45bf57ecadba2ccb6bfb7aeaa1c91c9110116ba9 100644
--- a/docs/en/PULC/PULC_safety_helmet_en.md
+++ b/docs/en/PULC/PULC_safety_helmet_en.md
@@ -413,7 +413,7 @@ PaddleClas provides an example about how to deploy with C++. Please refer to [De
 
 Paddle Serving is a flexible, high-performance carrier for machine learning models, and supports different protocol, such as RESTful, gRPC, bRPC and so on, which provides different deployment solutions for a variety of heterogeneous hardware and operating system environments. Please refer [Paddle Serving](https://github.com/PaddlePaddle/Serving) for more information.
 
-PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/paddle_serving_deploy_en.md).
+PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/classification_serving_deploy_en.md).
diff --git a/docs/en/PULC/PULC_text_image_orientation_en.md b/docs/en/PULC/PULC_text_image_orientation_en.md
index 1d3cc41f992adff90f396463205cd060147023c1..cf530905d2d22ecff40cba4ba9ad5f5b8eadfee9 100644
--- a/docs/en/PULC/PULC_text_image_orientation_en.md
+++ b/docs/en/PULC/PULC_text_image_orientation_en.md
@@ -447,7 +447,7 @@ PaddleClas provides an example about how to deploy with C++. Please refer to [De
 
 Paddle Serving is a flexible, high-performance carrier for machine learning models, and supports different protocol, such as RESTful, gRPC, bRPC and so on, which provides different deployment solutions for a variety of heterogeneous hardware and operating system environments. Please refer [Paddle Serving](https://github.com/PaddlePaddle/Serving) for more information.
 
-PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/paddle_serving_deploy_en.md).
+PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/classification_serving_deploy_en.md).
diff --git a/docs/en/PULC/PULC_textline_orientation_en.md b/docs/en/PULC/PULC_textline_orientation_en.md
index d11307d0b5aafe056c1f1e53a85882d2449ac277..fc4d540bc094a30daae0105707decec11871120d 100644
--- a/docs/en/PULC/PULC_textline_orientation_en.md
+++ b/docs/en/PULC/PULC_textline_orientation_en.md
@@ -56,7 +56,7 @@ It can be seen that high accuracy can be getted when backbone is SwinTranformer_
 
 **Note**:
 
 * Backbone name without \* means the resolution is 224x224, and with \* means the resolution is 48x192 (h\*w). The stride of the network is changed to `[2, [2, 1], [2, 1], [2, 1]`. Please refer to [PaddleOCR]( https://github.com/PaddlePaddle/PaddleOCR)for more details.
-* Backbone name with \*\* means that the resolution is 80x160 (h\*w), and the stride of the network is changed to `[2, [2, 1], [2, 1], [2, 1]]`. This resolution is searched by [Hyperparameter Searching](pulc_train_en.md#4).
+* Backbone name with \*\* means that the resolution is 80x160 (h\*w), and the stride of the network is changed to `[2, [2, 1], [2, 1], [2, 1]]`. This resolution is searched by [Hyperparameter Searching](PULC_train_en.md#4).
 * The Latency is tested on Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz. The MKLDNN is enabled and the number of threads is 10.
 * About PP-LCNet, please refer to [PP-LCNet Introduction](../models/PP-LCNet_en.md) and [PP-LCNet Paper](https://arxiv.org/abs/2109.15099).
@@ -431,7 +431,7 @@ PaddleClas provides an example about how to deploy with C++. Please refer to [De
 
 Paddle Serving is a flexible, high-performance carrier for machine learning models, and supports different protocol, such as RESTful, gRPC, bRPC and so on, which provides different deployment solutions for a variety of heterogeneous hardware and operating system environments. Please refer [Paddle Serving](https://github.com/PaddlePaddle/Serving) for more information.
 
-PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/paddle_serving_deploy_en.md).
+PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/classification_serving_deploy_en.md).
diff --git a/docs/en/PULC/PULC_traffic_sign_en.md b/docs/en/PULC/PULC_traffic_sign_en.md
index baa0faf4828a6c7acc16f8c12587a2af58c04f99..e235ca9272acdf1cb84e9a97e7bfe9cf69ea44f6 100644
--- a/docs/en/PULC/PULC_traffic_sign_en.md
+++ b/docs/en/PULC/PULC_traffic_sign_en.md
@@ -456,7 +456,7 @@ PaddleClas provides an example about how to deploy with C++. Please refer to [De
 
 Paddle Serving is a flexible, high-performance carrier for machine learning models, and supports different protocol, such as RESTful, gRPC, bRPC and so on, which provides different deployment solutions for a variety of heterogeneous hardware and operating system environments. Please refer [Paddle Serving](https://github.com/PaddlePaddle/Serving) for more information.
 
-PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/paddle_serving_deploy_en.md).
+PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/classification_serving_deploy_en.md).
diff --git a/docs/en/PULC/PULC_vehicle_attribute_en.md b/docs/en/PULC/PULC_vehicle_attribute_en.md
index 47d7c963e9de6e4bde9fd3338830611e59b60695..4777cfa03200f1750195d735e2333dcb2a36c5b0 100644
--- a/docs/en/PULC/PULC_vehicle_attribute_en.md
+++ b/docs/en/PULC/PULC_vehicle_attribute_en.md
@@ -462,7 +462,7 @@ PaddleClas provides an example about how to deploy with C++. Please refer to [De
 
 Paddle Serving is a flexible, high-performance carrier for machine learning models, and supports different protocol, such as RESTful, gRPC, bRPC and so on, which provides different deployment solutions for a variety of heterogeneous hardware and operating system environments. Please refer [Paddle Serving](https://github.com/PaddlePaddle/Serving) for more information.
 
-PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/paddle_serving_deploy_en.md).
+PaddleClas provides an example about how to deploy as service by Paddle Serving. Please refer to [Paddle Serving Deployment](../inference_deployment/classification_serving_deploy_en.md).
diff --git a/docs/en/algorithm_introduction/ISE_ReID_en.md b/docs/en/algorithm_introduction/ISE_ReID_en.md
index e509a52018625fa1ef49f7495af949ad68d30909..a5b782eeadbb5bb68f6b8f5f6d0b099036919568 100644
--- a/docs/en/algorithm_introduction/ISE_ReID_en.md
+++ b/docs/en/algorithm_introduction/ISE_ReID_en.md
@@ -16,7 +16,7 @@ ISE (Implicit Sample Extension) is a simple, efficient, and effective learning a
 > Xinyu Zhang, Dongdong Li, Zhigang Wang, Jian Wang, Errui Ding, Javen Qinfeng Shi, Zhaoxiang Zhang, Jingdong Wang
 > CVPR2022
 
-![image](../../images/ISE_ReID/ISE_pipeline.png)
+![image](../../images/ISE_pipeline.png)
 
 ## 2. Performance on Market1501 and MSMT17
diff --git a/docs/en/algorithm_introduction/reid.md b/docs/en/algorithm_introduction/reid.md
index 5853fb5ae6b2fdd1e9f8da55f9874abe19da9353..e4cabc2286b4de14595ab555e8178e6314f9e449 100644
--- a/docs/en/algorithm_introduction/reid.md
+++ b/docs/en/algorithm_introduction/reid.md
@@ -1,4 +1,4 @@
-English | [简体中文](../../zh_CN/algorithm_introduction/reid.md)
+English | [简体中文](../../zh_CN/algorithm_introduction/ReID.md)
 
 # ReID pedestrian re-identification
diff --git a/docs/en/image_recognition_pipeline/feature_extraction_en.md b/docs/en/image_recognition_pipeline/feature_extraction_en.md
index 856d47537293aaae3afee1b6b25a900c3a7f5884..a3809b8afeb1d65a86543931e463ad990f54c06d 100644
--- a/docs/en/image_recognition_pipeline/feature_extraction_en.md
+++ b/docs/en/image_recognition_pipeline/feature_extraction_en.md
@@ -112,7 +112,7 @@ Based on the `GeneralRecognitionV2_PPLCNetV2_base.yaml` configuration file, the
 
 ### 5.1 Data Preparation
 
-First you need to customize your own dataset based on the task. Please refer to [Dataset Format Description](../data_preparation/recognition_dataset.md) for the dataset format and file structure.
+First you need to customize your own dataset based on the task. Please refer to [Dataset Format Description](../data_preparation/recognition_dataset_en.md) for the dataset format and file structure.
 
 After the preparation is complete, it is necessary to modify the content related to the data configuration in the configuration file, mainly including the path of the dataset and the number of categories. As is as shown below:
@@ -256,7 +256,7 @@ wangzai.jpg: [-7.82453567e-02 2.55877394e-02 -3.66694555e-02 1.34572461e-02
 -3.40284109e-02 8.35561901e-02 2.10910216e-02 -3.27066667e-02]
 ```
 
-In most cases, just getting the features may not meet the users' requirements. If you want to go further on the image recognition task, you can refer to the document [Vector Search](./vector_search.md).
+In most cases, just getting the features may not meet the users' requirements. If you want to go further on the image recognition task, you can refer to the document [Vector Search](./vector_search_en.md).
diff --git a/docs/en/inference_deployment/classification_serving_deploy_en.md b/docs/en/inference_deployment/classification_serving_deploy_en.md
index a46b000702ce8622c33777497642b5e98ed48061..86a774f0eb8cf07ec55578ff8a7db9f2d9faa6ea 100644
--- a/docs/en/inference_deployment/classification_serving_deploy_en.md
+++ b/docs/en/inference_deployment/classification_serving_deploy_en.md
@@ -1,4 +1,4 @@
-English | [简体中文](../../zh_CN/inference_deployment/classification_serving_deploy.md)
+English | [简体中文](../../zh_CN/deployment/image_classification/paddle_serving.md)
 
 # Classification model service deployment
diff --git a/docs/en/inference_deployment/export_model_en.md b/docs/en/inference_deployment/export_model_en.md
index 8fe3c46ad9b08c3aba0913e089a7e8718459bcea..062ee1bcec9534d57ee4b8e41b0fec9e9faceca5 100644
--- a/docs/en/inference_deployment/export_model_en.md
+++ b/docs/en/inference_deployment/export_model_en.md
@@ -99,5 +99,5 @@ The inference model exported is used to deployment by using prediction engine. Y
 * [C++ inference](./cpp_deploy_en.md)(Only support classification)
 * [Python Whl inference](./whl_deploy_en.md)(Only support classification)
 * [PaddleHub Serving inference](./paddle_hub_serving_deploy_en.md)(Only support classification)
-* [PaddleServing inference](./paddle_serving_deploy_en.md)
+* [PaddleServing inference](./classification_serving_deploy_en.md)
 * [PaddleLite inference](./paddle_lite_deploy_en.md)(Only support classification)
diff --git a/docs/en/inference_deployment/paddle_hub_serving_deploy_en.md b/docs/en/inference_deployment/paddle_hub_serving_deploy_en.md
index 4dddc94bd8456a882e42000b640870155f46da7c..213b5822e0aaa8f0b7ae07c13c30fa61188b73f2 100644
--- a/docs/en/inference_deployment/paddle_hub_serving_deploy_en.md
+++ b/docs/en/inference_deployment/paddle_hub_serving_deploy_en.md
@@ -1,4 +1,4 @@
-English | [简体中文](../../zh_CN/inference_deployment/paddle_hub_serving_deploy.md)
+English | [简体中文](../../zh_CN/deployment/image_classification/paddle_hub.md)
 
 # Service deployment based on PaddleHub Serving
@@ -53,7 +53,7 @@ Before installing the service module, you need to prepare the inference model an
   "inference_model_dir": "../inference/"
   ```
 * Model files (including `.pdmodel` and `.pdiparams`) must be named `inference`.
-* We provide a large number of pre-trained models based on the ImageNet-1k dataset. For the model list and download address, see [Model Library Overview](../algorithm_introduction/ImageNet_models.md), or you can use your own trained and converted models.
+* We provide a large number of pre-trained models based on the ImageNet-1k dataset. For the model list and download address, see [Model Library Overview](../algorithm_introduction/ImageNet_models_en.md), or you can use your own trained and converted models.
diff --git a/docs/en/inference_deployment/recognition_serving_deploy_en.md b/docs/en/inference_deployment/recognition_serving_deploy_en.md
index 6d1db098d91c6d4bacba94dc7fff1a0511d722ba..b21b22d3eacdf1f0cdc34f6bab64cbac9392b2e5 100644
--- a/docs/en/inference_deployment/recognition_serving_deploy_en.md
+++ b/docs/en/inference_deployment/recognition_serving_deploy_en.md
@@ -1,4 +1,4 @@
-English | [简体中文](../../zh_CN/inference_deployment/recognition_serving_deploy.md)
+English | [简体中文](../../zh_CN/deployment/PP-ShiTu/paddle_serving.md)
 
 # Recognition model service deployment
@@ -219,7 +219,7 @@ Different from Python Serving, the C++ Serving client calls C++ OP to predict, s
   # One-click compile and install Serving server, set SERVING_BIN
   source ./build_server.sh python3.7
   ```
- **Note:** The path set by [build_server.sh](../build_server.sh#L55-L62) may need to be modified according to the actual machine environment such as CUDA, python version, etc., and then compiled; If you encounter a non-network error during the execution of `build_server.sh`, you can manually copy the commands in the script to the terminal for execution.
+ **Note:** The path set by [build_server.sh](../../../deploy/paddleserving/build_server.sh#L55-L62) may need to be modified according to the actual machine environment such as CUDA, python version, etc., and then compiled; If you encounter a non-network error during the execution of `build_server.sh`, you can manually copy the commands in the script to the terminal for execution.
 - The input and output format used by C++ Serving is different from that of Python, so you need to execute the following command to overwrite the files below [3.1] (#31-model conversion) by copying the 4 files to get the corresponding 4 prototxt files in the folder.
   ```shell
diff --git a/docs/en/quick_start/quick_start_recognition_en.md b/docs/en/quick_start/quick_start_recognition_en.md
index 670ad03e80d8dd69ae2f283704ba9e7bd04444a6..1d93728de0cd27b695df399166e8e5b41ec0fc5e 100644
--- a/docs/en/quick_start/quick_start_recognition_en.md
+++ b/docs/en/quick_start/quick_start_recognition_en.md
@@ -71,7 +71,7 @@ Click the "save index" button above to initialize the current library to `original`.
 
 #### 1.2.5 Preview Index
 
-Click the "class preview" button to view it in the pop-up window. 
+Click the "class preview" button to view it in the pop-up window.
@@ -99,7 +99,7 @@ One can preview it according to the instructions in [Function Experience - Previ
 
 ### 2.1 Environment configuration
 
-* Installation: Please refer to the document [Environment Preparation](../installation/install_paddleclas.md) to configure the PaddleClas operating environment.
+* Installation: Please refer to the document [Environment Preparation](../installation/install_paddleclas_en.md) to configure the PaddleClas operating environment.
 
 * Go to the `deploy` run directory. All the content and scripts in this section need to be run in the `deploy` directory, you can enter the `deploy` directory with the following scripts.
@@ -315,7 +315,7 @@ Build a new index database `index_all` with the following scripts.
 python3.7 python/build_gallery.py -c configs/inference_general.yaml -o IndexProcess.data_file="./drink_dataset_v2.0/gallery/drink_label_all.txt" -o IndexProcess.index_dir="./drink_dataset_v2.0/index_all"
 ```
 
-The final constructed new index database is saved in the folder `./drink_dataset_v2.0/index_all`. For specific instructions on yaml `yaml`, please refer to [Vector Search Documentation](../image_recognition_pipeline/vector_search.md).
+The final constructed new index database is saved in the folder `./drink_dataset_v2.0/index_all`. For specific instructions on yaml `yaml`, please refer to [Vector Search Documentation](../image_recognition_pipeline/vector_search_en.md).
@@ -392,4 +392,4 @@ After decompression, the `recognition_demo_data_v1.1` folder should have the fol
 
 After downloading the model and test data according to the above steps, you can re-build the index database and test the relevant recognition model.
 
-* For more introduction to object detection, please refer to: [Object Detection Tutorial Document](../image_recognition_pipeline/mainbody_detection.md); for the introduction of feature extraction, please refer to: [Feature Extraction Tutorial Document](../image_recognition_pipeline/feature_extraction.md); for the introduction to vector search, please refer to: [vector search tutorial document](../image_recognition_pipeline/vector_search.md).
+* For more introduction to object detection, please refer to: [Object Detection Tutorial Document](../image_recognition_pipeline/mainbody_detection.md); for the introduction of feature extraction, please refer to: [Feature Extraction Tutorial Document](../image_recognition_pipeline/feature_extraction.md); for the introduction to vector search, please refer to: [vector search tutorial document](../image_recognition_pipeline/vector_search_en.md).