diff --git a/docs/en/advanced_tutorials/knowledge_distillation_en.md b/docs/en/advanced_tutorials/knowledge_distillation_en.md
index 82ee3a6e7acc8314eff03e66cb9a97b9730b5341..18e8aa38cc20c2ba854930e772e0a3db599ff1d2 100644
--- a/docs/en/advanced_tutorials/knowledge_distillation_en.md
+++ b/docs/en/advanced_tutorials/knowledge_distillation_en.md
@@ -91,7 +91,7 @@ Paper:
SSLD is a simple semi-supervised distillation method proposed by Baidu in 2021. By designing an improved JS divergence as the loss function and combining it with a data mining strategy based on the ImageNet22k dataset, the accuracy of 18 backbone network models was improved by more than 3% on average.
-For more information about the principle, model zoo and usage of SSLD, please refer to: [Introduction to SSLD](ssld.md).
+
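+The core idea can be sketched as a JS divergence between the softened teacher and student predictions. The snippet below is only an illustrative sketch written with basic PaddlePaddle ops; the improved JS divergence actually used by SSLD, the temperature, and the data mining pipeline follow the paper and the configuration files, not this code.
+
+```python
+import paddle
+import paddle.nn.functional as F
+
+def js_divergence_loss(student_logits, teacher_logits, temperature=1.0, eps=1e-8):
+    # softened class distributions of student and teacher
+    p = F.softmax(student_logits / temperature, axis=-1)
+    q = F.softmax(teacher_logits / temperature, axis=-1)
+    m = 0.5 * (p + q)
+    # JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), averaged over the batch
+    kl_pm = (p * (paddle.log(p + eps) - paddle.log(m + eps))).sum(axis=-1)
+    kl_qm = (q * (paddle.log(q + eps) - paddle.log(m + eps))).sum(axis=-1)
+    return (0.5 * (kl_pm + kl_qm)).mean()
+```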
##### 1.2.1.2 Configuration of SSLD
@@ -152,8 +152,8 @@ Performance on ImageNet1k is shown below.
| Strategy | Backbone | Config | Top-1 acc | Download Link |
| --- | --- | --- | --- | --- |
-| baseline | PPLCNet_x2_5 | [PPLCNet_x2_5.yaml](../../../../ppcls/configs/ImageNet/PPLCNet/PPLCNet_x2_5.yaml) | 74.93% | - |
-| DML | PPLCNet_x2_5 | [PPLCNet_x2_5_dml.yaml](../../../../ppcls/configs/ImageNet/Distillation/PPLCNet_x2_5_dml.yaml) | 76.68%(**+1.75%**) | - |
+| baseline | PPLCNet_x2_5 | [PPLCNet_x2_5.yaml](../../../ppcls/configs/ImageNet/PPLCNet/PPLCNet_x2_5.yaml) | 74.93% | - |
+| DML | PPLCNet_x2_5 | [PPLCNet_x2_5_dml.yaml](../../../ppcls/configs/ImageNet/Distillation/PPLCNet_x2_5_dml.yaml) | 76.68%(**+1.75%**) | - |
* Note: The complete PPLCNet_x2_5 model was trained for 360 epochs. For comparison, both the baseline and DML models here were trained for only 100 epochs, so their accuracy is lower than that of the officially released model (76.60%).
@@ -210,8 +210,8 @@ Performance on ImageNet1k is shown below.
| Strategy | Backbone | Config | Top-1 acc | Download Link |
| --- | --- | --- | --- | --- |
-| baseline | PPLCNet_x2_5 | [PPLCNet_x2_5.yaml](../../../../ppcls/configs/ImageNet/PPLCNet/PPLCNet_x2_5.yaml) | 74.93% | - |
-| UDML | PPLCNet_x2_5 | [PPLCNet_x2_5_dml.yaml](../../../../ppcls/configs/ImageNet/Distillation/PPLCNet_x2_5_udml.yaml) | 76.74%(**+1.81%**) | - |
+| baseline | PPLCNet_x2_5 | [PPLCNet_x2_5.yaml](../../../ppcls/configs/ImageNet/PPLCNet/PPLCNet_x2_5.yaml) | 74.93% | - |
+| UDML | PPLCNet_x2_5 | [PPLCNet_x2_5_udml.yaml](../../../ppcls/configs/ImageNet/Distillation/PPLCNet_x2_5_udml.yaml) | 76.74%(**+1.81%**) | - |
##### 1.2.3.2 Configuration of UDML
@@ -262,7 +262,10 @@ Loss:
weight: 1.0
```
-**Note(:** `return_patterns` are specified in the network above. The function of returning middle layer features is based on TheseusLayer. For more information about usage of TheseusLayer, please refer to: [Usage of TheseusLayer](theseus_layer.md).
+**Note:** `return_patterns` is specified in the network configuration above. The function of returning middle-layer features is based on TheseusLayer.
+
+
+
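+To give a rough feel for how the returned intermediate features can be consumed, the sketch below computes a simple mean-squared-error matching loss between a student feature map and a teacher feature map of the same shape. This is only an illustration: the actual UDML loss terms, and which tensors they consume, are defined by the `Loss` section of the configuration above, and the feature names and shapes below are hypothetical, depending on the `return_patterns` you specify.
+
+```python
+import paddle
+import paddle.nn.functional as F
+
+def feature_match_loss(student_feat, teacher_feat):
+    # simple MSE between two intermediate feature maps of identical shape
+    return F.mse_loss(student_feat, teacher_feat)
+
+# hypothetical shapes for features returned via return_patterns (e.g. "blocks5")
+s_feat = paddle.randn([8, 512, 14, 14])
+t_feat = paddle.randn([8, 512, 14, 14])
+print(feature_match_loss(s_feat, t_feat))
+```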
@@ -286,8 +289,8 @@ Performance on ImageNet1k is shown below.
| Strategy | Backbone | Config | Top-1 acc | Download Link |
| --- | --- | --- | --- | --- |
-| baseline | ResNet18 | [ResNet18.yaml](../../../../ppcls/configs/ImageNet/ResNet/ResNet18.yaml) | 70.8% | - |
-| AFD | ResNet18 | [resnet34_distill_resnet18_afd.yaml](../../../../ppcls/configs/ImageNet/Distillation/resnet34_distill_resnet18_afd.yaml) | 71.68%(**+0.88%**) | - |
+| baseline | ResNet18 | [ResNet18.yaml](../../../ppcls/configs/ImageNet/ResNet/ResNet18.yaml) | 70.8% | - |
+| AFD | ResNet18 | [resnet34_distill_resnet18_afd.yaml](../../../ppcls/configs/ImageNet/Distillation/resnet34_distill_resnet18_afd.yaml) | 71.68%(**+0.88%**) | - |
Note: To stay aligned with the training configuration in the paper, the number of training epochs is set to 100, so the baseline accuracy is lower than that of the open-source model in PaddleClas (71.0%).
@@ -374,7 +377,10 @@ Loss:
weight: 1.0
```
-**Note(:** `return_patterns` are specified in the network above. The function of returning middle layer features is based on TheseusLayer. For more information about usage of TheseusLayer, please refer to: [Usage of TheseusLayer](theseus_layer.md).
+**Note:** `return_patterns` is specified in the network configuration above. The function of returning middle-layer features is based on TheseusLayer.
+
+
+
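+As a rough illustration of attention-style feature distillation, the sketch below matches L2-normalized spatial attention maps pooled from intermediate features. This is a simplified stand-in written with basic PaddlePaddle ops, not the exact AFD formulation (which additionally learns how to pair teacher and student features); the real loss terms come from the configuration above.
+
+```python
+import paddle
+
+def attention_map(feat, eps=1e-8):
+    # channel-pooled, L2-normalized spatial attention of an NCHW feature map
+    att = (feat * feat).mean(axis=1)                   # [N, H, W]
+    att = att.reshape([att.shape[0], -1])              # [N, H*W]
+    norm = paddle.sqrt((att * att).sum(axis=1, keepdim=True)) + eps
+    return att / norm
+
+def attention_transfer_loss(student_feat, teacher_feat):
+    # L2 distance between the two normalized attention maps
+    diff = attention_map(student_feat) - attention_map(teacher_feat)
+    return (diff * diff).sum(axis=1).mean()
+```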
@@ -397,8 +403,8 @@ Performance on ImageNet1k is shown below.
| Strategy | Backbone | Config | Top-1 acc | Download Link |
| --- | --- | --- | --- | --- |
-| baseline | ResNet18 | [ResNet18.yaml](../../../../ppcls/configs/ImageNet/ResNet/ResNet18.yaml) | 70.8% | - |
-| DKD | ResNet18 | [resnet34_distill_resnet18_dkd.yaml](../../../../ppcls/configs/ImageNet/Distillation/resnet34_distill_resnet18_dkd.yaml) | 72.59%(**+1.79%**) | - |
+| baseline | ResNet18 | [ResNet18.yaml](../../../ppcls/configs/ImageNet/ResNet/ResNet18.yaml) | 70.8% | - |
+| DKD | ResNet18 | [resnet34_distill_resnet18_dkd.yaml](../../../ppcls/configs/ImageNet/Distillation/resnet34_distill_resnet18_dkd.yaml) | 72.59%(**+1.79%**) | - |
##### 1.2.5.2 Configuration of DKD
@@ -465,8 +471,8 @@ Performance on ImageNet1k is shown below.
| Strategy | Backbone | Config | Top-1 acc | Download Link |
| --- | --- | --- | --- | --- |
-| baseline | ResNet18 | [ResNet18.yaml](../../../../ppcls/configs/ImageNet/ResNet/ResNet18.yaml) | 70.8% | - |
-| DIST | ResNet18 | [resnet34_distill_resnet18_dist.yaml](../../../../ppcls/configs/ImageNet/Distillation/resnet34_distill_resnet18_dist.yaml) | 71.99%(**+1.19%**) | - |
+| baseline | ResNet18 | [ResNet18.yaml](../../../ppcls/configs/ImageNet/ResNet/ResNet18.yaml) | 70.8% | - |
+| DIST | ResNet18 | [resnet34_distill_resnet18_dist.yaml](../../../ppcls/configs/ImageNet/Distillation/resnet34_distill_resnet18_dist.yaml) | 71.99%(**+1.19%**) | - |
##### 1.2.6.2 Configuration of DIST
@@ -531,8 +537,8 @@ Performance on ImageNet1k is shown below.
| Strategy | Backbone | Config | Top-1 acc | Download Link |
| --- | --- | --- | --- | --- |
-| baseline | ResNet18 | [ResNet18.yaml](../../../../ppcls/configs/ImageNet/ResNet/ResNet18.yaml) | 70.8% | - |
-| MGD | ResNet18 | [resnet34_distill_resnet18_mgd.yaml](../../../../ppcls/configs/ImageNet/Distillation/resnet34_distill_resnet18_mgd.yaml) | 71.86%(**+1.06%**) | - |
+| baseline | ResNet18 | [ResNet18.yaml](../../../ppcls/configs/ImageNet/ResNet/ResNet18.yaml) | 70.8% | - |
+| MGD | ResNet18 | [resnet34_distill_resnet18_mgd.yaml](../../../ppcls/configs/ImageNet/Distillation/resnet34_distill_resnet18_mgd.yaml) | 71.86%(**+1.06%**) | - |
##### 1.2.7.2 Configuration of MGD
@@ -603,8 +609,8 @@ Performance on ImageNet1k is shown below.
| Strategy | Backbone | Config | Top-1 acc | Download Link |
| --- | --- | --- | --- | --- |
-| baseline | ResNet18 | [ResNet18.yaml](../../../../ppcls/configs/ImageNet/ResNet/ResNet18.yaml) | 70.8% | - |
-| WSL | ResNet18 | [resnet34_distill_resnet18_wsl.yaml](../../../../ppcls/configs/ImageNet/Distillation/resnet34_distill_resnet18_wsl.yaml) | 72.23%(**+1.43%**) | - |
+| baseline | ResNet18 | [ResNet18.yaml](../../../ppcls/configs/ImageNet/ResNet/ResNet18.yaml) | 70.8% | - |
+| WSL | ResNet18 | [resnet34_distill_resnet18_wsl.yaml](../../../ppcls/configs/ImageNet/Distillation/resnet34_distill_resnet18_wsl.yaml) | 72.23%(**+1.43%**) | - |
##### 1.2.8.2 Configuration of WSL
@@ -657,7 +663,7 @@ Loss:
### 2.1 Environment Configuration
-* Installation: Please refer to [Paddle Installation Tutorial](../installation/install_paddle.md) and [PaddleClas Installation Tutorial](../../installation.md) to configure the running environment.
+* Installation: Please refer to [Paddle Installation Tutorial](../installation/install_paddle_en.md) and [PaddleClas Installation Tutorial](../installation/install_paddleclas_en.md) to configure the running environment.
@@ -699,7 +705,7 @@ cat train_list.txt train_list_unlabel.txt > train_list_all.txt
**Note:**
-* For more information about the format of `train_list.txt` and `val_list.txt`, you may refer to [Format Description of PaddleClas Classification Dataset](../single_label_classification/dataset.md#1-数据集格式说明) .
+* For more information about the format of `train_list.txt` and `val_list.txt`, you may refer to [Format Description of PaddleClas Classification Dataset](../data_preparation/classification_dataset_en.md#1dataset-format).
@@ -707,7 +713,7 @@ cat train_list.txt train_list_unlabel.txt > train_list_all.txt
### 2.3 Model Training
-In this section, the process of model training, evaluation and prediction of knowledge distillation algorithm will be introduced using the SSLD knowledge distillation algorithm as an example. The configuration file is [PPLCNet_x2_5_ssld.yaml](../../../../ppcls/configs/ImageNet/Distillation/PPLCNet_x2_5_ssld.yaml). You can use the following command to complete the model training.
+In this section, the process of training, evaluating, and predicting with a knowledge distillation model is introduced, using the SSLD knowledge distillation algorithm as an example. The configuration file is [PPLCNet_x2_5_ssld.yaml](../../../ppcls/configs/ImageNet/Distillation/PPLCNet_x2_5_ssld.yaml). You can complete the model training with the following command.
```shell
@@ -776,7 +782,7 @@ python3 tools/export_model.py \
Three files will be generated in the `inference` directory: `inference.pdiparams`, `inference.pdiparams.info` and `inference.pdmodel`.
-For more information about model inference, please refer to: [Python Inference](../../deployment/image_classification/python.md).
+For more information about model inference, please refer to: [Python Inference](../inference_deployment/python_deploy_en.md).
diff --git a/docs/en/image_recognition_pipeline/feature_extraction_en.md b/docs/en/image_recognition_pipeline/feature_extraction_en.md
index a3809b8afeb1d65a86543931e463ad990f54c06d..db2926863e4b08506d20fa4f50a7f3b75329fc45 100644
--- a/docs/en/image_recognition_pipeline/feature_extraction_en.md
+++ b/docs/en/image_recognition_pipeline/feature_extraction_en.md
@@ -39,7 +39,7 @@ Functions of the above modules :
#### 3.1 Backbone
-The Backbone part adopts [PP-LCNetV2_base](../models/PP-LCNetV2.md), which is based on `PPLCNet_V1`, including Rep strategy, PW convolution, Shortcut, activation function improvement, SE module improvement After several optimization points, the final classification accuracy is similar to `PPLCNet_x2_5`, and the inference delay is reduced by 40%*. During the experiment, we made appropriate improvements to `PPLCNetV2_base`, so that it can achieve higher performance in recognition tasks while keeping the speed basically unchanged, including: removing `ReLU` and ` at the end of `PPLCNetV2_base` FC`, change the stride of the last stage (RepDepthwiseSeparable) to 1.
+The Backbone part adopts [PP-LCNetV2_base](../models/PP-LCNetV2_en.md). Built on `PPLCNet_V1`, it introduces several optimizations, including the Rep strategy, PW convolution, Shortcut, an improved activation function and an improved SE module, so its final classification accuracy is similar to that of `PPLCNet_x2_5` while the inference latency is reduced by 40%*. During the experiments, we made appropriate modifications to `PPLCNetV2_base` so that it achieves higher performance on recognition tasks while keeping the speed basically unchanged, including removing the `ReLU` and `FC` layers at the end of `PPLCNetV2_base` and changing the stride of the last stage (RepDepthwiseSeparable) to 1.
**Note:** *The inference environment is an Intel(R) Xeon(R) Gold 6271C CPU @ 2.60GHz hardware platform with the OpenVINO inference platform.
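+Conceptually, these changes turn the classification backbone into a feature extractor: the trailing non-linearity and classification layer are dropped, and the final down-sampling is disabled so the output feature map keeps a higher spatial resolution. The toy `paddle.nn` module below is a hypothetical illustration of that idea only; it is not the actual PPLCNetV2_base implementation in PaddleClas.
+
+```python
+import paddle.nn as nn
+
+class ToyBackbone(nn.Layer):
+    # a toy stand-in for a classification backbone, not PPLCNetV2_base itself
+    def __init__(self, num_classes=1000, use_head=True, last_stride=2):
+        super().__init__()
+        self.stem = nn.Sequential(nn.Conv2D(3, 32, 3, stride=2, padding=1), nn.ReLU())
+        # recognition variant: last_stride=1 keeps a larger final feature map
+        self.last_stage = nn.Sequential(nn.Conv2D(32, 64, 3, stride=last_stride, padding=1), nn.ReLU())
+        self.pool = nn.AdaptiveAvgPool2D(1)
+        # recognition variant: the trailing ReLU/FC head is removed (use_head=False)
+        self.head = nn.Sequential(nn.ReLU(), nn.Linear(64, num_classes)) if use_head else None
+
+    def forward(self, x):
+        x = self.last_stage(self.stem(x))
+        x = self.pool(x).flatten(1)
+        return self.head(x) if self.head is not None else x
+
+classifier = ToyBackbone()                                      # classification-style head
+feature_extractor = ToyBackbone(use_head=False, last_stride=1)  # recognition-style features
+```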
diff --git a/docs/en/quick_start/quick_start_recognition_en.md b/docs/en/quick_start/quick_start_recognition_en.md
index 1d93728de0cd27b695df399166e8e5b41ec0fc5e..cc84ccf48f78144aaa2c97f3637b77d1d1bcb8e5 100644
--- a/docs/en/quick_start/quick_start_recognition_en.md
+++ b/docs/en/quick_start/quick_start_recognition_en.md
@@ -124,7 +124,10 @@ Note: Since some decompression software has problems in decompressing the above
The demo data download path of this chapter is as follows: [drink_dataset_v2.0.tar (drink data)](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v2.0.tar).
-The following takes **drink_dataset_v2.0.tar** as an example to introduce the PP-ShiTu quick start process on the PC. Users can also download and decompress the data of other scenarios to experience: [22 scenarios data download](../../zh_CN/introduction/ppshitu_application_scenarios.md#22-下载解压场景库数据).
+The following takes **drink_dataset_v2.0.tar** as an example to introduce the PP-ShiTu quick start process on the PC.
+
+
+
If you want to experience the server-side object detection model and the recognition models for each scene, you can refer to [2.4 Server recognition model list](#24-list-of-server-identification-models).