From 609382c117478714c495062f151c59eb05e059e0 Mon Sep 17 00:00:00 2001
From: sibo2rr <1415419833@qq.com>
Date: Mon, 14 Feb 2022 19:20:13 +0800
Subject: [PATCH] ReadTheDocs documentation in English
---
docs/en/advanced_tutorials/index.rst | 12 ++++
.../advanced_tutorials/multilabel/index.rst | 2 +-
.../DataAugmentation_en.md | 26 +++----
.../ImageNet_models_en.md | 12 +---
.../image_classification_en.md | 1 +
docs/en/algorithm_introduction/index.rst | 12 ++++
docs/en/conf.py | 70 ++++++++-----------
docs/en/data_preparation/index.rst | 8 +++
docs/en/doc_en.rst | 24 +++++++
docs/en/extension/VisualDL_en.md | 6 +-
docs/en/extension/index.rst | 9 +--
docs/en/faq_series/index.rst | 10 +++
.../feature_extraction_en.md | 14 ++--
docs/en/image_recognition_pipeline/index.rst | 9 +++
docs/en/index.rst | 26 +++----
docs/en/inference_deployment/cpp_deploy_en.md | 4 +-
docs/en/inference_deployment/index.rst | 19 +++++
.../paddle_lite_deploy_en.md | 4 +-
docs/en/inference_deployment/whl_deploy_en.md | 4 +-
docs/en/installation/index.rst | 8 +++
docs/en/introduction/index.rst | 8 +++
docs/en/introduction/more_demo/index.rst | 11 +++
docs/en/models/PP-LCNet_en.md | 4 +-
docs/en/models/index.rst | 28 ++++++--
docs/en/models/models_intro_en.md | 14 ++--
docs/en/models_training/index.rst | 10 +++
docs/en/others/VisualDL_en.md | 6 +-
docs/en/others/index.rst | 15 ++++
docs/en/quick_start/index.rst | 10 +++
.../quick_start_classification_new_user_en.md | 10 +--
.../quick_start/quick_start_recognition_en.md | 15 ++--
31 files changed, 273 insertions(+), 138 deletions(-)
create mode 100644 docs/en/advanced_tutorials/index.rst
create mode 100644 docs/en/algorithm_introduction/index.rst
create mode 100644 docs/en/data_preparation/index.rst
create mode 100644 docs/en/doc_en.rst
create mode 100644 docs/en/faq_series/index.rst
create mode 100644 docs/en/image_recognition_pipeline/index.rst
create mode 100644 docs/en/inference_deployment/index.rst
create mode 100644 docs/en/installation/index.rst
create mode 100644 docs/en/introduction/index.rst
create mode 100644 docs/en/introduction/more_demo/index.rst
create mode 100644 docs/en/models_training/index.rst
create mode 100644 docs/en/others/index.rst
create mode 100644 docs/en/quick_start/index.rst
diff --git a/docs/en/advanced_tutorials/index.rst b/docs/en/advanced_tutorials/index.rst
new file mode 100644
index 00000000..b0121559
--- /dev/null
+++ b/docs/en/advanced_tutorials/index.rst
@@ -0,0 +1,12 @@
+advanced_tutorials
+================================
+
+.. toctree::
+ :maxdepth: 1
+
+ DataAugmentation_en.md
+ distillation/index
+ multilabel/index
+ model_prune_quantization_en.md
+ code_overview_en.md
+ how_to_contribute_en.md
diff --git a/docs/en/advanced_tutorials/multilabel/index.rst b/docs/en/advanced_tutorials/multilabel/index.rst
index 1e8acfdf..07e0a54a 100644
--- a/docs/en/advanced_tutorials/multilabel/index.rst
+++ b/docs/en/advanced_tutorials/multilabel/index.rst
@@ -4,4 +4,4 @@ Multilabel Classification
.. toctree::
:maxdepth: 3
- multilabel.md
\ No newline at end of file
+ multilabel_en.md
\ No newline at end of file
diff --git a/docs/en/algorithm_introduction/DataAugmentation_en.md b/docs/en/algorithm_introduction/DataAugmentation_en.md
index baa9ca5e..9ada02ff 100644
--- a/docs/en/algorithm_introduction/DataAugmentation_en.md
+++ b/docs/en/algorithm_introduction/DataAugmentation_en.md
@@ -23,7 +23,7 @@
Data augmentation is a commonly used regularization method in image classification tasks, often applied in scenarios with insufficient data or large models. In this chapter, we mainly introduce 8 image augmentation methods beyond the standard ones. Users can apply these methods in their own tasks for better model performance. Under the same conditions, the performance of these augmentation methods on the ImageNet1k dataset is shown as follows.
-![](../../../images/image_aug/main_image_aug.png)
+![](../../images/image_aug/main_image_aug.png)
@@ -50,7 +50,7 @@ Compared with the above standard image augmentation methods, the researchers hav
Visualization results of some images after augmentation are shown as follows.
-![](../../../images/image_aug/image_aug_samples_s_en.jpg)
+![](../../images/image_aug/image_aug_samples_s_en.jpg)
The following table shows more detailed information of the transformations.
@@ -72,7 +72,7 @@ The following table shows more detailed information of the transformations.
PaddleClas integrates all the above data augmentation strategies. More details, including the principles and usage of each strategy, are introduced in the following chapters. For better visualization, we use the following figure to show the changes after the transformations, with `RandCrop` replaced by `Resize` for simplification.
-![](../../../images/image_aug/test_baseline.jpeg)
+![](../../images/image_aug/test_baseline.jpeg)
### 2.1 Image Transformation
@@ -91,7 +91,7 @@ Unlike conventional artificially designed image augmentation methods, AutoAugmen
The images after `AutoAugment` are as follows.
-![][test_autoaugment]
+![](../../images/image_aug/test_autoaugment.jpeg)
#### 2.1.2 RandAugment
@@ -107,7 +107,7 @@ In `RandAugment`, the author proposes a random augmentation method. Instead of u
The images after `RandAugment` are as follows.
-![][test_randaugment]
+![](../../images/image_aug/test_randaugment.jpeg)
#### 2.1.3 TimmAutoAugment
@@ -137,7 +137,7 @@ Cutout is a kind of dropout, but occludes input image rather than feature map. I
The images after `Cutout` are as follows.
-![][test_cutout]
+![](../../images/image_aug/test_cutout.jpeg)
#### 2.2.2 RandomErasing
@@ -150,7 +150,7 @@ RandomErasing is similar to the Cutout. It is also to solve the problem of poor
The images after `RandomErasing` are as follows.
-![][test_randomerassing]
+![](../../images/image_aug/test_randomerassing.jpeg)
#### 2.2.3 HideAndSeek
@@ -162,11 +162,11 @@ Github repo: [https://github.com/kkanshul/Hide-and-Seek](https://github.com/kkan
In `HideAndSeek`, images are divided into patches and a mask is generated for each patch with a certain probability. The meaning of the masks in different areas is shown in the figure below.
-![][hide_and_seek_mask_expanation]
+![](../../images/image_aug/hide-and-seek-visual.png)
The images after `HideAndSeek` are as follows.
-![][test_hideandseek]
+![](../../images/image_aug/test_hideandseek.jpeg)
#### 2.2.4 GridMask
@@ -180,7 +180,7 @@ The author points out that the previous method based on image cropping has two p
1. Excessive deletion may remove most or all of the target subject, or cause the loss of context information, so that the augmented images become noisy data.
2. Reserving too much area has little effect on the object and context.
-![][gridmask-0]
+![](../../images/image_aug/gridmask-0.png)
Therefore, how to avoid over-deletion or over-retention becomes the core problem to be solved.
@@ -195,7 +195,7 @@ It shows that the second method is better.
The images after `GridMask` are as follows.
-![][test_gridmask]
+![](../../images/image_aug/test_gridmask.jpeg)
### 2.3 Image mix
@@ -215,7 +215,7 @@ Mixup is the first solution for image aliasing, it is easy to realize and perfor
The images after `Mixup` are as follows.
-![][test_mixup]
+![](../../images/image_aug/test_mixup.png)
#### 2.3.2 Cutmix
@@ -229,7 +229,7 @@ Cutmix randomly cuts out an `ROI` from one image, and then covered onto the corr
The images after `Cutmix` are as follows.
-![][test_cutmix]
+![](../../images/image_aug/test_cutmix.png)
For the practical part of data augmentation, please refer to [Data Augmentation Practice](../advanced_tutorials/DataAugmentation_en.md).
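
To make the blending step behind Mixup/Cutmix concrete, here is a minimal NumPy sketch of the Mixup operation. It is illustrative only; the function and argument names are assumptions, and the actual PaddleClas implementation (configured through its YAML files) is not part of this patch.

```python
import numpy as np

def mixup_batch(images, labels_onehot, alpha=0.2, rng=None):
    """Blend every sample with a randomly chosen partner, as described in the Mixup section."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # mixing coefficient lambda ~ Beta(alpha, alpha)
    idx = rng.permutation(len(images))      # shuffled partner index for each sample
    mixed_images = lam * images + (1.0 - lam) * images[idx]
    mixed_labels = lam * labels_onehot + (1.0 - lam) * labels_onehot[idx]
    return mixed_images, mixed_labels, lam
```

With a small `alpha` such as 0.2, the Beta distribution concentrates near 0 and 1, so most mixed images stay close to one of the two originals.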
diff --git a/docs/en/algorithm_introduction/ImageNet_models_en.md b/docs/en/algorithm_introduction/ImageNet_models_en.md
index 54b3b521..580aeaa1 100644
--- a/docs/en/algorithm_introduction/ImageNet_models_en.md
+++ b/docs/en/algorithm_introduction/ImageNet_models_en.md
@@ -42,21 +42,15 @@ Based on the ImageNet-1k classification dataset, the 37 classification network s
Curves of accuracy versus inference time for common server-side models are shown as follows.
-
-
-
+![](../../images/models/V100_benchmark/v100.fp32.bs1.main_fps_top1_s.png)
Curves of accuracy versus inference time for common mobile-side models are shown as follows.
-
-
-
+![](../../images/models/mobile_arm_top1.png)
Curves of accuracy versus inference time for some VisionTransformer models are shown as follows.
-
-
-
+![](../../images/models/V100_benchmark/v100.fp32.bs1.visiontransformer.png)
diff --git a/docs/en/algorithm_introduction/image_classification_en.md b/docs/en/algorithm_introduction/image_classification_en.md
index 3b28b04d..fa2319c1 100644
--- a/docs/en/algorithm_introduction/image_classification_en.md
+++ b/docs/en/algorithm_introduction/image_classification_en.md
@@ -1,3 +1,4 @@
+# Image Classification Task Introduction
## Catalogue
- [1. Dataset Introduction](#1)
diff --git a/docs/en/algorithm_introduction/index.rst b/docs/en/algorithm_introduction/index.rst
new file mode 100644
index 00000000..95110891
--- /dev/null
+++ b/docs/en/algorithm_introduction/index.rst
@@ -0,0 +1,12 @@
+algorithm_introduction
+================================
+
+.. toctree::
+ :maxdepth: 1
+
+ image_classification_en.md
+ metric_learning_en.md
+ knowledge_distillation_en.md
+ model_prune_quantization_en.md
+ ImageNet_models_en.md
+ DataAugmentation_en.md
diff --git a/docs/en/conf.py b/docs/en/conf.py
index 1b5a0c12..fef10eec 100644
--- a/docs/en/conf.py
+++ b/docs/en/conf.py
@@ -10,70 +10,56 @@
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
-import os
-import recommonmark
+# import os
+# import sys
+# sys.path.insert(0, os.path.abspath('.'))
+import sphinx_rtd_theme
+from recommonmark.parser import CommonMarkParser
+# -- Project information -----------------------------------------------------
-exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
+project = 'PaddleClas-en'
+copyright = '2022, PaddleClas'
+author = 'PaddleClas'
-# -- Project information -----------------------------------------------------
+# The full version, including alpha/beta/rc tags
+release = '2.3'
-project = 'PaddleClas'
-copyright = '2020, paddlepaddle'
-author = 'paddlepaddle'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
-
+source_parsers = {
+ '.md': CommonMarkParser,
+}
+source_suffix = ['.rst', '.md']
extensions = [
- 'sphinx.ext.autodoc',
- 'sphinx.ext.napoleon',
- 'sphinx.ext.coverage',
- 'sphinx.ext.viewcode',
- 'sphinx.ext.mathjax',
- 'sphinx.ext.githubpages',
- 'sphinx.ext.napoleon',
- 'recommonmark',
- 'sphinx_markdown_tables',
-]
-
+ 'recommonmark',
+ 'sphinx_markdown_tables'
+ ]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
-# md file can also be parased
-source_suffix = ['.rst', '.md']
+# The root document.
+root_doc = 'doc_en'
-# The master toctree document.
-master_doc = 'index'
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
-# The language for content autogenerated by Sphinx. Refer to documentation
-# for a list of supported languages.
-#
-# This is also used if you do content translation via gettext catalogs.
-# Usually you set "language" from the command line for these cases.
-language = 'en'
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
-
-# on_rtd is whether we are on readthedocs.org, this line of code grabbed from docs.readthedocs.org
-on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
-
-if not on_rtd: # only import and set the theme if we're building docs locally
- import sphinx_rtd_theme
- html_theme = 'sphinx_rtd_theme'
- html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
-
-# otherwise, readthedocs.org uses their theme by default, so no need to specify it
+#
+# Change the documentation color scheme (use the Read the Docs theme)
+html_theme = "sphinx_rtd_theme"
+html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
-
html_static_path = ['_static']
-
-html_logo = '../images/logo.png'
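
For reference, the updated configuration can be exercised locally with a sketch like the following; the output path and the installed toolchain (sphinx, recommonmark, sphinx-markdown-tables, sphinx_rtd_theme) are assumptions, not part of this patch.

```python
# build_en_docs.py -- minimal local build of the English docs, mirroring what ReadTheDocs runs.
# Assumes it is executed from the repository root with the Sphinx toolchain installed.
from sphinx.cmd.build import build_main

# With root_doc = 'doc_en', Sphinx starts from docs/en/doc_en.rst and follows its toctree.
exit_code = build_main(["-b", "html", "docs/en", "docs/en/_build/html"])
raise SystemExit(exit_code)
```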
diff --git a/docs/en/data_preparation/index.rst b/docs/en/data_preparation/index.rst
new file mode 100644
index 00000000..58915725
--- /dev/null
+++ b/docs/en/data_preparation/index.rst
@@ -0,0 +1,8 @@
+data_preparation
+================================
+
+.. toctree::
+ :maxdepth: 1
+
+ recognition_dataset_en.md
+ classification_dataset_en.md
diff --git a/docs/en/doc_en.rst b/docs/en/doc_en.rst
new file mode 100644
index 00000000..ee21fcc7
--- /dev/null
+++ b/docs/en/doc_en.rst
@@ -0,0 +1,24 @@
+Welcome to PaddleClas!
+================================
+
+.. toctree::
+ :maxdepth: 1
+
+ introduction/index
+ installation/index
+ quick_start/index
+ image_recognition_pipeline/index
+ data_preparation/index
+ models_training/index
+ inference_deployment/index
+ models/index
+ algorithm_introduction/index
+ advanced_tutorials/index
+ others/index
+ extension/index
+ faq_series/index
+
+
+
+
+
diff --git a/docs/en/extension/VisualDL_en.md b/docs/en/extension/VisualDL_en.md
index 9ffd03e9..403a74f4 100644
--- a/docs/en/extension/VisualDL_en.md
+++ b/docs/en/extension/VisualDL_en.md
@@ -39,6 +39,6 @@ More information about the command,please refer to [VisualDL](https://github.c
Then you can enter the address `127.0.0.1:8840` and view the training process in the browser:
-
-
-
+
+![](../../images/VisualDL/train_loss.png)
+
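
For readers who want to reproduce a curve like the one above, here is a minimal sketch of writing scalars that VisualDL can display; the log directory, tag name, and loss values are illustrative and not taken from PaddleClas.

```python
from visualdl import LogWriter

# Write a dummy training-loss curve that `visualdl --logdir ./vdl_log --port 8840` can render.
with LogWriter(logdir="./vdl_log") as writer:
    for step in range(100):
        loss = 1.0 / (step + 1)  # placeholder value standing in for the real training loss
        writer.add_scalar(tag="train/loss", step=step, value=loss)
```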
diff --git a/docs/en/extension/index.rst b/docs/en/extension/index.rst
index 4d72ea47..880a85c1 100644
--- a/docs/en/extension/index.rst
+++ b/docs/en/extension/index.rst
@@ -3,10 +3,11 @@ extension
.. toctree::
:maxdepth: 1
-
- paddle_inference_en.md
+
+ train_with_DALI_en.md
+ VisualDL_en.md
paddle_mobile_inference_en.md
+ paddle_serving_en.md
paddle_quantization_en.md
- multi_machine_training_en.md
paddle_hub_en.md
- paddle_serving_en.md
+ multi_machine_training_en.md
diff --git a/docs/en/faq_series/index.rst b/docs/en/faq_series/index.rst
new file mode 100644
index 00000000..106fc070
--- /dev/null
+++ b/docs/en/faq_series/index.rst
@@ -0,0 +1,10 @@
+faq_series
+================================
+
+.. toctree::
+ :maxdepth: 1
+
+ faq_2021_s2_en.md
+ faq_2021_s1_en.md
+ faq_2020_s1_en.md
+ faq_selected_30_en.md
diff --git a/docs/en/image_recognition_pipeline/feature_extraction_en.md b/docs/en/image_recognition_pipeline/feature_extraction_en.md
index 4ff01f51..68830dd2 100644
--- a/docs/en/image_recognition_pipeline/feature_extraction_en.md
+++ b/docs/en/image_recognition_pipeline/feature_extraction_en.md
@@ -58,12 +58,12 @@ The results are shown in the table below:
- Address of the pre-training model: [General recognition pre-training model](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/pretrain/general_PPLCNet_x2_5_pretrained_v1.0.pdparams)
-# 4.Customized Feature Extraction
+## 4.Customized Feature Extraction
Customized feature extraction refers to retraining the feature extraction model based on one's own task. It consists of four main steps: 1) data preparation, 2) model training, 3) model evaluation, and 4) model inference.
-## 4.1 Data Preparation
+### 4.1 Data Preparation
To start with, customize your dataset based on the task (See [Format description](../data_preparation/recognition_dataset_en.md#1) for the dataset format). Before initiating the model training, modify the data-related content in the configuration files, including the address of the dataset and the class number. The corresponding locations in configuration files are shown below:
@@ -99,7 +99,7 @@ Train:
```
-## 4.2 Model Training
+### 4.2 Model Training
- Single machine single card training
@@ -130,7 +130,7 @@ python -m paddle.distributed.launch \
```
-## 4.3 Model Evaluation
+### 4.3 Model Evaluation
- Single Card Evaluation
@@ -154,11 +154,11 @@ python -m paddle.distributed.launch \
**Recommendation:** It is suggested to employ multi-card evaluation, which can quickly obtain the feature set of the overall dataset using multi-card parallel computing, accelerating the evaluation process.
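
As a side note, both evaluation and retrieval ultimately rank gallery images by the similarity of the extracted feature vectors. A minimal NumPy sketch of that ranking step is shown below; it is illustrative only and is not the PaddleClas evaluation code.

```python
import numpy as np

def rank_gallery(query_feat, gallery_feats, top_k=5):
    """Return indices and scores of the top_k gallery vectors most similar to the query."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    scores = g @ q                      # cosine similarity against every gallery entry
    order = np.argsort(-scores)[:top_k]
    return order, scores[order]
```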
-## 4.4 Model Inference
+### 4.4 Model Inference
Two steps are included in the inference: 1) exporting the inference model; 2) obtaining the feature vector.
-### 4.4.1 Export Inference Model
+#### 4.4.1 Export Inference Model
```
python tools/export_model \
@@ -168,7 +168,7 @@ python tools/export_model \
The generated inference models are under the directory `inference`, which comprises three files, namely `inference.pdmodel`, `inference.pdiparams`, and `inference.pdiparams.info`. Among them, `inference.pdmodel` stores the structure of the inference model, while `inference.pdiparams` and `inference.pdiparams.info` store model-related parameters.
-### 4.4.2 Obtain Feature Vector
+#### 4.4.2 Obtain Feature Vector
```
cd deploy
diff --git a/docs/en/image_recognition_pipeline/index.rst b/docs/en/image_recognition_pipeline/index.rst
new file mode 100644
index 00000000..1e87e37e
--- /dev/null
+++ b/docs/en/image_recognition_pipeline/index.rst
@@ -0,0 +1,9 @@
+image_recognition_pipeline
+================================
+
+.. toctree::
+ :maxdepth: 1
+
+ mainbody_detection_en.md
+ feature_extraction_en.md
+ vector_search_en.md
diff --git a/docs/en/index.rst b/docs/en/index.rst
index b8a2e909..30cc73b4 100644
--- a/docs/en/index.rst
+++ b/docs/en/index.rst
@@ -1,17 +1,19 @@
-Welcome to PaddleClas!
+Welcome to the PaddleClas image classification library!
================================
.. toctree::
- :maxdepth: 1
- :numbered:
- :caption: Contents:
-
- tutorials/index
+ :maxdepth: 2
+
+ models_training/index
+ extension/index
+ introduction/index
+ image_recognition_pipeline/index
+ others/index
+ faq_series/index
+ data_preparation/index
+ installation/index
models/index
advanced_tutorials/index
- application/index
- extension/index
- competition_support_en.md
- update_history_en.md
- faq_en.md
-
+ algorithm_introduction/index
+ inference_deployment/index
+ quick_start/index
diff --git a/docs/en/inference_deployment/cpp_deploy_en.md b/docs/en/inference_deployment/cpp_deploy_en.md
index 1b65ebf6..3f92b662 100644
--- a/docs/en/inference_deployment/cpp_deploy_en.md
+++ b/docs/en/inference_deployment/cpp_deploy_en.md
@@ -293,8 +293,6 @@ sh tools/run.sh
* The prediction results will be shown on the screen, as follows.
-
-
-
+![](../../images/inference_deployment/cpp_infer_result.png)
* In the above results, `class id` represents the id corresponding to the category with the highest confidence, and `score` represents the probability that the image belongs to that category.
diff --git a/docs/en/inference_deployment/index.rst b/docs/en/inference_deployment/index.rst
new file mode 100644
index 00000000..16f9d555
--- /dev/null
+++ b/docs/en/inference_deployment/index.rst
@@ -0,0 +1,19 @@
+inference_deployment
+================================
+
+.. toctree::
+ :maxdepth: 1
+
+ export_model_en.md
+ python_deploy_en.md
+ cpp_deploy_en.md
+ paddle_serving_deploy_en.md
+ paddle_hub_serving_deploy_en.md
+ paddle_lite_deploy_en.md
+ whl_deploy_en.md
+
+
+
+
+
+
diff --git a/docs/en/inference_deployment/paddle_lite_deploy_en.md b/docs/en/inference_deployment/paddle_lite_deploy_en.md
index b584aafd..12d45dd5 100644
--- a/docs/en/inference_deployment/paddle_lite_deploy_en.md
+++ b/docs/en/inference_deployment/paddle_lite_deploy_en.md
@@ -258,9 +258,7 @@ export LD_LIBRARY_PATH=/data/local/tmp/debug:$LD_LIBRARY_PATH
The result is as follows:
-
-
-
+![](../../images/inference_deployment/lite_demo_result.png)
## 3. FAQ
diff --git a/docs/en/inference_deployment/whl_deploy_en.md b/docs/en/inference_deployment/whl_deploy_en.md
index e97cbfd7..224d41a7 100644
--- a/docs/en/inference_deployment/whl_deploy_en.md
+++ b/docs/en/inference_deployment/whl_deploy_en.md
@@ -39,9 +39,7 @@ pip3 install dist/*
## 2. Quick Start
* Using the `ResNet50` model provided by PaddleClas, take the following image (`docs/images/inference_deployment/whl_demo.jpg`) as an example.
-
-
-
+![](../../images/inference_deployment/whl_demo.jpg)
* Python
```python
diff --git a/docs/en/installation/index.rst b/docs/en/installation/index.rst
new file mode 100644
index 00000000..832675a6
--- /dev/null
+++ b/docs/en/installation/index.rst
@@ -0,0 +1,8 @@
+installation
+================================
+
+.. toctree::
+ :maxdepth: 1
+
+ install_paddle_en.md
+ install_paddleclas_en.md
diff --git a/docs/en/introduction/index.rst b/docs/en/introduction/index.rst
new file mode 100644
index 00000000..7b8fdf35
--- /dev/null
+++ b/docs/en/introduction/index.rst
@@ -0,0 +1,8 @@
+introduction
+================================
+
+.. toctree::
+ :maxdepth: 1
+
+ function_intro_en.md
+ more_demo/index
diff --git a/docs/en/introduction/more_demo/index.rst b/docs/en/introduction/more_demo/index.rst
new file mode 100644
index 00000000..f09bccf2
--- /dev/null
+++ b/docs/en/introduction/more_demo/index.rst
@@ -0,0 +1,11 @@
+more_demo
+================================
+
+.. toctree::
+ :maxdepth: 1
+
+ product.md
+ logo.md
+ cartoon.md
+ more_demo.md
+ vehicle.md
diff --git a/docs/en/models/PP-LCNet_en.md b/docs/en/models/PP-LCNet_en.md
index 0fae0bf8..1dd35e09 100644
--- a/docs/en/models/PP-LCNet_en.md
+++ b/docs/en/models/PP-LCNet_en.md
@@ -27,13 +27,13 @@ In the field of computer vision, the quality of backbone network determines the
## 2. Introduction
Recent years have witnessed the emergence of many lightweight backbone networks. In the past two years in particular, abundant networks searched by NAS have either enjoyed advantages in FLOPs or Params, or had an edge in inference speed on ARM devices. However, few of them are dedicated to optimization for Intel CPUs, resulting in imperfect inference speed on the Intel CPU side. Based on this, we specially design the backbone network PP-LCNet for Intel CPU devices and their acceleration library MKLDNN. Compared with other lightweight SOTA models, this backbone network can further improve model performance without increasing inference time, significantly outperforming the existing SOTA models. A comparison chart with other models is shown below.
-
+![](../../images/PP-LCNet/PP-LCNet-Acc.png)
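
As context for the Intel CPU / MKLDNN setting described above, here is a minimal sketch of how MKLDNN is typically switched on with the Paddle Inference API; the model paths and thread count are placeholders, not part of this document.

```python
import paddle.inference as paddle_infer

# Illustrative CPU setup: load an exported inference model and enable the MKLDNN acceleration path.
config = paddle_infer.Config("inference/inference.pdmodel", "inference/inference.pdiparams")
config.disable_gpu()                          # run on CPU
config.enable_mkldnn()                        # turn on the MKLDNN (oneDNN) kernels discussed above
config.set_cpu_math_library_num_threads(4)    # placeholder thread count
predictor = paddle_infer.create_predictor(config)
```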
## 3. Method
The overall structure of the network is shown in the figure below.
-
+![](../../images/PP-LCNet/PP-LCNet.png)
Based on extensive experiments, we found that many seemingly inexpensive operations actually increase latency on Intel CPU-based devices, especially when the MKLDNN acceleration library is enabled. Therefore, we finally chose a block with the leanest possible structure and the fastest possible speed to form our BaseNet (similar to MobileNetV1). Building on BaseNet, we summarized four strategies that improve model accuracy without increasing latency, and combined them to form PP-LCNet. Each of these four strategies is introduced below:
diff --git a/docs/en/models/index.rst b/docs/en/models/index.rst
index 73b2a1c1..abeee98c 100644
--- a/docs/en/models/index.rst
+++ b/docs/en/models/index.rst
@@ -3,14 +3,28 @@ models
.. toctree::
:maxdepth: 1
-
- models_intro_en.md
- Tricks_en.md
- ResNet_and_vd_en.md
- Mobile_en.md
+
+ PP-LCNet_en.md
SEResNext_and_Res2Net_en.md
+ ReXNet_en.md
+ Others_en.md
+ Twins_en.md
Inception_en.md
+ HarDNet_en.md
+ EfficientNet_and_ResNeXt101_wsl_en.md
+ ESNet_en.md
HRNet_en.md
+ RepVGG_en.md
+ RedNet_en.md
+ Mobile_en.md
+ ResNeSt_RegNet_en.md
+ ResNet_and_vd_en.md
+ models_intro_en.md
+ TNT_en.md
+ ViT_and_DeiT_en.md
+ LeViT_en.md
+ DLA_en.md
+ PVTV2_en.md
DPN_DenseNet_en.md
- EfficientNet_and_ResNeXt101_wsl_en.md
- Others_en.md
+ MixNet_en.md
+ SwinTransformer_en.md
diff --git a/docs/en/models/models_intro_en.md b/docs/en/models/models_intro_en.md
index feb67f0e..8d35459b 100644
--- a/docs/en/models/models_intro_en.md
+++ b/docs/en/models/models_intro_en.md
@@ -23,16 +23,14 @@ python tools/infer/predict.py \
--batch_size=1
```
-
-
-
-
-
-
+![](../../images/models/V100_benchmark/v100.fp32.bs1.main_fps_top1_s.png)
+
+
+![](../../images/models/mobile_arm_top1.png)
+
-
-
+![](../../images/models/V100_benchmark/v100.fp32.bs1.visiontransformer.png)
> If you think this document is helpful to you, welcome to give a star to our project:[https://github.com/PaddlePaddle/PaddleClas](https://github.com/PaddlePaddle/PaddleClas)
diff --git a/docs/en/models_training/index.rst b/docs/en/models_training/index.rst
new file mode 100644
index 00000000..687cb739
--- /dev/null
+++ b/docs/en/models_training/index.rst
@@ -0,0 +1,10 @@
+models_training
+================================
+
+.. toctree::
+ :maxdepth: 1
+
+ config_description_en.md
+ recognition_en.md
+ classification_en.md
+ train_strategy_en.md
diff --git a/docs/en/others/VisualDL_en.md b/docs/en/others/VisualDL_en.md
index cbd096ba..34ff4b8a 100644
--- a/docs/en/others/VisualDL_en.md
+++ b/docs/en/others/VisualDL_en.md
@@ -52,6 +52,6 @@ More information about the command,please refer to [VisualDL](https://github.c
Then you can enter the address `127.0.0.1:8840` and view the training process in the browser:
-
-
-
+
+![](../../images/VisualDL/train_loss.png)
+
diff --git a/docs/en/others/index.rst b/docs/en/others/index.rst
new file mode 100644
index 00000000..e1c98e30
--- /dev/null
+++ b/docs/en/others/index.rst
@@ -0,0 +1,15 @@
+others
+================================
+
+.. toctree::
+ :maxdepth: 1
+
+ transfer_learning_en.md
+ train_with_DALI_en.md
+ VisualDL_en.md
+ train_on_xpu_en.md
+ feature_visiualization_en.md
+ paddle_mobile_inference_en.md
+ competition_support_en.md
+ update_history_en.md
+ versions_en.md
diff --git a/docs/en/quick_start/index.rst b/docs/en/quick_start/index.rst
new file mode 100644
index 00000000..4380b8b1
--- /dev/null
+++ b/docs/en/quick_start/index.rst
@@ -0,0 +1,10 @@
+quick_start
+================================
+
+.. toctree::
+ :maxdepth: 1
+
+ quick_start_classification_new_user_en.md
+ quick_start_classification_professional_en.md
+ quick_start_recognition_en.md
+ quick_start_multilabel_classification_en.md
diff --git a/docs/en/quick_start/quick_start_classification_new_user_en.md b/docs/en/quick_start/quick_start_classification_new_user_en.md
index bf668b7b..12a5e741 100644
--- a/docs/en/quick_start/quick_start_classification_new_user_en.md
+++ b/docs/en/quick_start/quick_start_classification_new_user_en.md
@@ -78,7 +78,7 @@ After the unzip operation is completed, there are three `.txt` files for trainin
The image files of the flowers102 dataset are stored in the `dataset/flowers102/jpg` directory. The image examples are as follows:
-
+![](../../images/quick_start/Examples-Flower-102.png)
Return to the root directory of `PaddleClas`:
@@ -148,9 +148,7 @@ python tools/train.py -c ./ppcls/configs/quick_start/ResNet50_vd.yaml
After the training is completed, the `Top1 Acc` curve of the validation set is shown below, and the highest accuracy rate is 0.2735.
-
-
-
+![](../../images/quick_start/r50_vd_acc.png)
#### 4.2.2 Use pre-trained models for training
@@ -165,9 +163,7 @@ python tools/train.py -c ./ppcls/configs/quick_start/ResNet50_vd.yaml -o Arch.pr
The `Top1 Acc` curve of the validation set is shown below. The highest accuracy rate is `0.9402`. After loading the pre-trained model, the accuracy of the flowers102 data set has been greatly improved, and the absolute accuracy has increased by more than 65%.
-
-
-
+![](../../images/quick_start/r50_vd_pretrained_acc.png)
## 5. Model prediction
diff --git a/docs/en/quick_start/quick_start_recognition_en.md b/docs/en/quick_start/quick_start_recognition_en.md
index aebc9154..61c6f230 100644
--- a/docs/en/quick_start/quick_start_recognition_en.md
+++ b/docs/en/quick_start/quick_start_recognition_en.md
@@ -165,9 +165,7 @@ python3.7 python/predict_system.py -c configs/inference_product.yaml -o Global.u
The image to be retrieved is shown below.
-
-
-
+![](../../images/recognition/product_demo/query/daoxiangcunjinzhubing_6.jpg)
The final output is shown below.
@@ -182,9 +180,7 @@ where bbox indicates the location of the detected object, rec_docs indicates the
The detection result is also saved in the folder `output`. For this image, the visualization result is as follows.
-
-
-
+![](../../images/recognition/product_demo/result/daoxiangcunjinzhubing_6_en.jpg)
@@ -228,9 +224,7 @@ python3.7 python/predict_system.py -c configs/inference_product.yaml -o Global.i
The image to be retrieved is shown below.
-
-
-
+![](../../images/recognition/product_demo/query/anmuxi.jpg)
The output is empty.
@@ -298,6 +292,5 @@ The output is as follows:
The final recognition result is `Anmuxi Ambrosial Yogurt`, which is correct. The visualization result is as follows.
-
-
+![](../../images/recognition/product_demo/result/anmuxi_en.jpg)
--
GitLab