Unverified commit bd9fd51d, authored by Walter, committed by GitHub

Merge branch 'release/2.3' into release/2.3

---
name: Usage consultation
about: Questions about how to use PaddleClas
title: "[HOW TO]"
labels: question
assignees: ''
---
Welcome to PaddleClas, and thank you for your feedback and contribution!
When opening an issue, please provide the following information so that we can locate and resolve your problem quickly and effectively. If your problem is complex and you would like to talk to us directly, you can scan the QR code on the home page to join the WeChat group.
<!-- Content written between this pair of markers will be hidden -->
### Usage scenario
<!-- Please describe your usage scenario -->
### Data
<!-- Please describe your current dataset, e.g. its size, annotation status, and whether it is clean -->
### Problem description
<!-- Please describe the problem you encountered; if you have concrete bad cases, you can list them here -->
### Expected result
<!-- Please describe the result you expect -->
---
name: Bug report
about: Report a problem with PaddleClas
title: "[BUG]"
labels: bug
assignees: ''
---
Welcome to PaddleClas, and thank you for your feedback and contribution!
When opening an issue, please provide the following information so that we can locate and resolve your problem quickly and effectively:
<!-- Content written between this pair of markers will be hidden -->
### Required information
** If you cannot provide the required information below, it may slow down the resolution of your problem **
#### 1. PaddleClas version and PaddlePaddle version
<!-- Please provide the version numbers or branches you are using, e.g. PaddleClas release/2.2 and PaddlePaddle 2.1.0 -->
#### 2. Minimal way to reproduce the problem
##### 2.1 Problems with the original code
<!-- If you hit the problem with the unmodified PaddleClas code, please describe how you ran it and which config file you used. If you modified the config file, please describe the changes as well -->
##### 2.2 Problems with your own modifications
<!-- If you developed on top of PaddleClas, please reduce your changes to the minimal code that still reproduces the problem. -->
#### 3. Error messages and logs
<!-- Please provide an excerpt or screenshot of the error message, the run log, etc. -->
### Additional information
<!-- The following information is optional, but it helps with resolving the problem -->
#### 1. Training environment
##### 1.1 Operating system <!-- e.g. Linux/Windows/MacOS -->
##### 1.2 Python version <!-- e.g. Python 3.6/3.7/3.8 -->
##### 1.3 CUDA/cuDNN version <!-- e.g. CUDA 10.2 / cuDNN 7.6.5 -->
##### 1.4 Versions of other products involved
<!-- If you use other products such as PaddleServing or PaddleInference together with PaddleClas, please provide their version numbers -->
### PR and fix suggestions
#### 1. PR
<!-- We warmly welcome code contributions to PaddleClas. You can follow the community contribution guide (https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/docs/zh_CN/advanced_tutorials/how_to_contribute.md) to contribute code and paste your PR link here. -->
#### 2. Fix suggestions
<!-- If you have any suggestions about how to fix the problem, please share them -->
---
name: Feature request
about: Request a new feature
title: "[FEATURE]"
labels: question
assignees: ''
---
Welcome to PaddleClas, and thank you for your feedback and contribution!
When opening an issue, please provide the following information so that we can locate and resolve your problem quickly and effectively. If your problem is complex and you would like to talk to us directly, you can scan the QR code on the home page to join the WeChat group.
<!-- Content written between this pair of markers will be hidden -->
### Usage scenario
<!-- Please describe your industry and usage scenario -->
### Expected behavior
<!-- Please describe what you expect the feature to do -->
### Reference implementations
<!-- If other products already implement this feature, you can list them here -->
@@ -8,7 +8,7 @@
**Recent updates**
- 2022.1.27 Fully upgraded documentation; added a [PaddleServing C++ pipeline deployment guide](./deploy/paddleserving) and an [18M image recognition Android deployment demo](./deploy/lite_shitu)
- 2021.11.1 Released the [PP-ShiTu technical report](https://arxiv.org/pdf/2111.00775.pdf) and added a beverage recognition demo
- 2021.10.23 Released the lightweight image recognition system PP-ShiTu, which can recognize an image against a gallery of 100k+ images within 0.2 s on CPU.
[Click here](./docs/zh_CN/quick_start/quick_start_recognition.md) to try it out.
@@ -38,7 +38,7 @@ The Res2Net200_vd pretrained model reaches 85.1% Top-1 accuracy.
* You can scan the WeChat QR code below to join the PaddleClas WeChat group, get your questions answered more efficiently, and exchange ideas with developers from all walks of life. We look forward to your joining.
<div align="center">
<img src="https://user-images.githubusercontent.com/12560511/162710270-8a249aca-4fa9-46f9-95e5-66d906fe6d66.jpg" width="200"/>
</div>
## Quick Start
......
@@ -41,7 +41,7 @@ Four sample solutions are provided, including product recognition, vehicle recog
* You can also scan the QR code below to join the PaddleClas WeChat group to get more efficient answers to your questions and to communicate with developers from all walks of life. We look forward to hearing from you.
<div align="center">
<img src="https://user-images.githubusercontent.com/12560511/162710270-8a249aca-4fa9-46f9-95e5-66d906fe6d66.jpg" width="200"/>
</div>
## Quick Start
......
@@ -51,8 +51,8 @@ RecPostProcess: null
# indexing engine config
IndexProcess:
  index_method: "HNSW32" # supported: HNSW32, IVF, Flat
  image_root: "./drink_dataset_v1.0/gallery"
  index_dir: "./drink_dataset_v1.0/index"
  data_file: "./drink_dataset_v1.0/gallery/drink_label.txt"
  index_operation: "new" # supported: "append", "remove", "new"
  delimiter: " "
......
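The gallery/index pair above is what the retrieval demo reads at prediction time, and `index_operation: "new"` rebuilds the index from scratch (versus `append`/`remove`). As a hedged sketch of how such a config is consumed (the `-c` path is a placeholder, not necessarily the config shipped with the drink demo), the index is typically rebuilt from `deploy/` with the gallery-building script:

```shell
cd deploy
# <build_config>.yaml is a placeholder for the config that contains the IndexProcess section above
python3.7 python/build_gallery.py -c configs/<build_config>.yaml
```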
@@ -11,6 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import platform
import os
import argparse
import base64
@@ -50,6 +51,8 @@ class Predictor(object):
        else:
            config.disable_gpu()
            if args.enable_mkldnn:
                # there is no set_mkldnn_cache_capacity() on macOS
                if platform.system() != "Darwin":
                    # cache 10 different shapes for mkldnn to avoid memory leak
                    config.set_mkldnn_cache_capacity(10)
                config.enable_mkldnn()
......
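For context, the guard above sits inside the predictor's config setup. A minimal, self-contained sketch of the same CPU/MKL-DNN configuration with the Paddle Inference API might look like this (the model paths are placeholders, not the repo's actual file layout):

```python
import platform

from paddle.inference import Config, create_predictor

# placeholder paths; the real predictor builds them from its command-line args
config = Config("inference/inference.pdmodel", "inference/inference.pdiparams")
config.disable_gpu()

enable_mkldnn = True
if enable_mkldnn:
    # set_mkldnn_cache_capacity() is unavailable on macOS, so skip it there
    if platform.system() != "Darwin":
        # cache 10 different input shapes for MKL-DNN to avoid a memory leak
        config.set_mkldnn_cache_capacity(10)
    config.enable_mkldnn()

predictor = create_predictor(config)
```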
advanced_tutorials
================================

.. toctree::
   :maxdepth: 2

   DataAugmentation_en.md
   distillation/index
   multilabel/index
   model_prune_quantization_en.md
   code_overview_en.md
   how_to_contribute_en.md
@@ -4,4 +4,4 @@ Multilabel Classification
.. toctree::
   :maxdepth: 3

   multilabel_en.md
\ No newline at end of file
@@ -23,7 +23,7 @@
Data augmentation is a commonly used regularization method in image classification tasks, often used in scenarios with insufficient data or large models. In this chapter, we mainly introduce 8 image augmentation methods besides the standard augmentation methods. Users can apply these methods in their own tasks for better model performance. Under the same conditions, these augmentation methods' performance on the ImageNet1k dataset is shown as follows.
![](../../images/image_aug/main_image_aug.png)
<a name="2"></a>
@@ -50,7 +50,7 @@ Compared with the above standard image augmentation methods, the researchers hav
Visualization results of some images after augmentation are shown as follows.
![](../../images/image_aug/image_aug_samples_s_en.jpg)
The following table shows more detailed information of the transformations.
@@ -72,7 +72,7 @@ The following table shows more detailed information of the transformations.
PaddleClas integrates all the above data augmentation strategies. More details including principles and usage of the strategies are introduced in the following chapters. For better visualization, we use the following figure to show the changes after the transformations. And `RandCrop` is replaced with `Resize` for simplification.
![](../../images/image_aug/test_baseline.jpeg)
<a name="2.1"></a>
### 2.1 Image Transformation
@@ -91,7 +91,7 @@ Unlike conventional artificially designed image augmentation methods, AutoAugmen
The images after `AutoAugment` are as follows.
![](../../images/image_aug/test_autoaugment.jpeg)
<a name="2.1.2"></a>
#### 2.1.2 RandAugment
@@ -107,7 +107,7 @@ In `RandAugment`, the author proposes a random augmentation method. Instead of u
The images after `RandAugment` are as follows.
![](../../images/image_aug/test_randaugment.jpeg)
<a name="2.1.3"></a>
#### 2.1.3 TimmAutoAugment
@@ -137,7 +137,7 @@ Cutout is a kind of dropout, but occludes input image rather than feature map. I
The images after `Cutout` are as follows.
![](../../images/image_aug/test_cutout.jpeg)
<a name="2.2.2"></a>
#### 2.2.2 RandomErasing
@@ -150,7 +150,7 @@ RandomErasing is similar to the Cutout. It is also to solve the problem of poor
The images after `RandomErasing` are as follows.
![](../../images/image_aug/test_randomerassing.jpeg)
<a name="2.2.3"></a>
#### 2.2.3 HideAndSeek
@@ -162,11 +162,11 @@ Github repo: [https://github.com/kkanshul/Hide-and-Seek](https://github.com/kkan
Images are divided into some patches for `HideAndSeek` and masks are generated with certain probability for each patch. The meaning of the masks in different areas is shown in the figure below.
![](../../images/image_aug/hide-and-seek-visual.png)
The images after `HideAndSeek` are as follows.
![](../../images/image_aug/gridmask-0.png)
<a name="2.2.4"></a>
#### 2.2.4 GridMask
@@ -180,7 +180,7 @@ The author points out that the previous method based on image cropping has two p
1. Excessive deletion of the area may cause most or all of the target subject to be deleted, or cause the context information loss, resulting in the images after enhancement becoming noisy data.
2. Reserving too much area has little effect on the object and context.
![](../../images/image_aug/hide-and-seek-visual.png)
Therefore, how to avoid over-deletion or over-retention becomes the core problem to be solved.
@@ -195,7 +195,7 @@ It shows that the second method is better.
The images after `GridMask` are as follows.
![](../../images/image_aug/test_gridmask.jpeg)
<a name="2.3"></a>
### 2.3 Image mix
@@ -215,7 +215,7 @@ Mixup is the first solution for image aliasing, it is easy to realize and perfor
The images after `Mixup` are as follows.
![](../../images/image_aug/test_mixup.png)
<a name="2.3.2"></a>
#### 2.3.2 Cutmix
@@ -229,7 +229,7 @@ Cutmix randomly cuts out an `ROI` from one image, and then covered onto the corr
The images after `Cutmix` are as follows.
![](../../images/image_aug/test_cutmix.png)
For the practical part of data augmentation, please refer to [Data Augmentation Practice](../advanced_tutorials/DataAugmentation_en.md).
......
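As a rough illustration of how these strategies are switched on in practice (the field names follow the PaddleClas 2.x config style, but the exact keys and default values should be checked against the configs under `ppcls/configs/ImageNet/DataAugment/`), a training pipeline with `RandAugment` might be configured like this:

```yaml
# sketch of a training data pipeline with RandAugment enabled
DataLoader:
  Train:
    dataset:
      transform_ops:
        - DecodeImage:
            to_rgb: True
            channel_first: False
        - RandCropImage:
            size: 224
        - RandFlipImage:
            flip_code: 1
        - RandAugment:
            num_layers: 2
            magnitude: 5
        - NormalizeImage:
            scale: 1.0/255.0
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
```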
# Image Classification Task Introduction
## Catalogue
- [1. Dataset Introduction](#1)
......
algorithm_introduction
================================

.. toctree::
   :maxdepth: 2

   image_classification_en.md
   metric_learning_en.md
   knowledge_distillation_en.md
   model_prune_quantization_en.md
   ImageNet_models_en.md
   DataAugmentation_en.md
@@ -10,70 +10,56 @@
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import sphinx_rtd_theme
from recommonmark.parser import CommonMarkParser

# -- Project information -----------------------------------------------------

project = 'PaddleClas-en'
copyright = '2022, PaddleClas'
author = 'PaddleClas'

# The full version, including alpha/beta/rc tags
release = '2.3'

# -- General configuration ---------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
source_parsers = {
    '.md': CommonMarkParser,
}
source_suffix = ['.rst', '.md']
extensions = [
    'recommonmark',
    'sphinx_markdown_tables'
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The root document.
root_doc = 'doc_en'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = 'en'

# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
# Change the documentation color scheme
html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
html_logo = '../images/logo.png'
data_preparation
================================

.. toctree::
   :maxdepth: 2

   recognition_dataset_en.md
   classification_dataset_en.md
Welcome to PaddleClas!
================================

.. toctree::
   :maxdepth: 1

   introduction/index
   installation/index
   quick_start/index
   image_recognition_pipeline/index
   data_preparation/index
   models_training/index
   inference_deployment/index
   models/index
   algorithm_introduction/index
   advanced_tutorials/index
   others/index
   faq_series/index
# Use VisualDL to visualize the training
## Preface
VisualDL, a visualization analysis tool of PaddlePaddle, provides a variety of charts to show the trends of parameters, and visualizes model structures, data samples, histograms of tensors, PR curves, ROC curves and high-dimensional data distributions. It enables users to understand the training process and the model structure more clearly and intuitively so as to optimize models efficiently. For more information, please refer to [VisualDL](https://github.com/PaddlePaddle/VisualDL/).
## Use VisualDL in PaddleClas
PaddleClas now supports using VisualDL to visualize the changes of learning rate, loss, and accuracy during training.
### Set config and start training
You only need to set the field `Global.use_visualdl` to `True` in the training config:
```yaml
# config.yaml
Global:
  ...
  use_visualdl: True
  ...
```
PaddleClas saves the VisualDL logs in the subdirectory `vdl/` under the output directory specified by `Global.output_dir`. Then just start training normally:
```shell
python3 tools/train.py -c config.yaml
```
### Start VisualDL
After starting the training program, you can start the VisualDL service in a new terminal session:
```shell
visualdl --logdir ./output/vdl/
```
In the above command, `--logdir` specifies the directory of the VisualDL logs produced during training. VisualDL recursively traverses the subdirectories of the specified directory to visualize all experiment results found there. You can also use the following parameters to set the IP address and port number of the VisualDL service:
* `--host`: IP address, default is 127.0.0.1
* `--port`: port, default is 8040
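For example, to make the service reachable from other machines on a custom port (the values here are illustrative):

```shell
visualdl --logdir ./output/vdl/ --host 0.0.0.0 --port 8040
```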
For more information about the command, please refer to [VisualDL](https://github.com/PaddlePaddle/VisualDL/blob/develop/README.md#2-launch-panel).
Then you can open the address `127.0.0.1:8040` in the browser and view the training process:
<div align="center">
<img src="../../images/VisualDL/train_loss.png" width="400">
</div>
extension
================================

.. toctree::
   :maxdepth: 1

   paddle_inference_en.md
   paddle_mobile_inference_en.md
   paddle_quantization_en.md
   multi_machine_training_en.md
   paddle_hub_en.md
   paddle_serving_en.md
# Distributed Training
Distributed training of deep neural networks is highly efficient in PaddlePaddle
and is one of its core strengths.
On image classification tasks, distributed training can achieve an almost linear speedup.
[Fleet](https://github.com/PaddlePaddle/Fleet) is the high-level API for distributed training in PaddlePaddle.
By using Fleet, a user can easily move from single-machine PaddlePaddle code to distributed code.
In order to support both single-machine and multi-machine training,
[PaddleClas](https://github.com/PaddlePaddle/PaddleClas) uses the Fleet API.
For more information about distributed training,
please refer to the [Fleet API documentation](https://github.com/PaddlePaddle/Fleet/blob/develop/README.md).
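As an illustration (the config path is one of the standard configs shipped in the repo; adjust the GPUs and config to your setup), a typical multi-GPU launch in PaddleClas looks like:

```shell
# illustrative 4-GPU launch; paddle.distributed.launch spawns one trainer per GPU
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
    --gpus="0,1,2,3" \
    tools/train.py \
    -c ./ppcls/configs/ImageNet/ResNet/ResNet50.yaml
```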
# Paddle Hub
[PaddleHub](https://github.com/PaddlePaddle/PaddleHub) is a pre-trained model application tool for PaddlePaddle.
Developers can conveniently use high-quality pre-trained models combined with the Fine-tune API to quickly complete the whole process from model migration to deployment.
All the pre-trained models of [PaddleClas](https://github.com/PaddlePaddle/PaddleClas) have been collected by PaddleHub.
For further details, please refer to [PaddleHub website](https://www.paddlepaddle.org.cn/hub).
# Paddle-Lite
## Introduction
[Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite) is a lightweight inference engine that is fully functional, easy to use, and performs well. Its light weight comes from using fewer bits to represent the weights and activations of the neural network, which greatly reduces model size, works around the limited storage space of mobile devices, and yields inference speed that is better than other frameworks overall.
In [PaddleClas](https://github.com/PaddlePaddle/PaddleClas), we use Paddle-Lite to [evaluate performance on mobile devices](../models/Mobile_en.md). In this section, we take the `MobileNetV1` model trained on the `ImageNet1k` dataset as an example and introduce how to use `Paddle-Lite` to evaluate model speed on a mobile device (evaluated on an SD855).
## Evaluation Steps
### Export the Inference Model
* First, convert the model saved during training into an inference model that can be used for deployment. The conversion can be done with `tools/export_model.py` as follows.
```shell
python tools/export_model.py -m MobileNetV1 -p pretrained/MobileNetV1_pretrained/ -o inference/MobileNetV1
```
Finally, the `model` and `params` files will be saved in `inference/MobileNetV1`.
### Download Benchmark Binary File
* Use the adb (Android Debug Bridge) tool to connect the Android phone to the PC for development and debugging. After installing adb and making sure the PC and the phone are successfully connected, use the following command to check the ARM version of the phone and select the matching pre-compiled library.
```shell
adb shell getprop ro.product.cpu.abi
```
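Typical outputs are `arm64-v8a` (use the v8 binary below) or `armeabi-v7a` (use the v7 binary); this mapping follows the standard Android ABI naming and is given here only for orientation.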
* Download Benchmark_bin File
```shell
wget -c https://paddle-inference-dist.bj.bcebos.com/PaddleLite/benchmark_0/benchmark_bin_v8
```
If the ARM version is v7, download the v7 benchmark_bin file with the following command.
```shell
wget -c https://paddle-inference-dist.bj.bcebos.com/PaddleLite/benchmark_0/benchmark_bin_v7
```
### Inference benchmark
After the PC and mobile phone are successfully connected, use the following command to start the model evaluation.
```
sh deploy/lite/benchmark/benchmark.sh ./benchmark_bin_v8 ./inference result_armv8.txt true
```
Where `./benchmark_bin_v8` is the path of the benchmark binary file, `./inference` is the path of all the models that need to be evaluated, `result_armv8.txt` is the result file, and the final parameter `true` means that the model will be optimized before evaluation. Eventually, the evaluation result file of `result_armv8.txt` will be saved in the current folder. The specific performances are as follows.
```
PaddleLite Benchmark
Threads=1 Warmup=10 Repeats=30
MobileNetV1 min = 30.89100 max = 30.73600 average = 30.79750
Threads=2 Warmup=10 Repeats=30
MobileNetV1 min = 18.26600 max = 18.14000 average = 18.21637
Threads=4 Warmup=10 Repeats=30
MobileNetV1 min = 10.03200 max = 9.94300 average = 9.97627
```
The table shows the model inference speed under different numbers of threads; the unit is FPS. Taking one thread as an example, the average speed of MobileNetV1 on SD855 is `30.79750FPS`.
### Model Optimization and Speed Evaluation
* In the section above, we mentioned that the model is optimized before evaluation; here you can optimize the model first and then directly load the optimized model for speed evaluation.
* Paddle-Lite
In Paddle-Lite, we provide multiple strategies to automatically optimize the original training model, including quantization, subgraph fusion, hybrid scheduling, kernel optimization and so on. In order to make the optimization more convenient and easy to use, we provide the opt tool to automatically complete the optimization steps and output a lightweight, optimal, executable model for Paddle-Lite, which can be downloaded on the [Paddle-Lite Model Optimization Page](https://paddle-lite.readthedocs.io/zh/latest/user_guides/model_optimize_tool.html). Here we take `MacOS` as our development environment, download the [opt_mac](https://paddlelite-data.bj.bcebos.com/model_optimize_tool/opt_mac) model optimization tool, and use the following commands to optimize the model.
```shell
model_file="../MobileNetV1/model"
param_file="../MobileNetV1/params"
opt_models_dir="./opt_models"
mkdir ${opt_models_dir}
./opt_mac --model_file=${model_file} \
--param_file=${param_file} \
--valid_targets=arm \
--optimize_out_type=naive_buffer \
--prefer_int8_kernel=false \
--optimize_out=${opt_models_dir}/MobileNetV1
```
Here `model_file` and `param_file` are the paths of the exported model structure file and parameter file, respectively. After the conversion succeeds, `MobileNetV1.nb` will be saved in `opt_models`.
Use the benchmark_bin file to load the optimized model for evaluation. The commands are as follows.
```shell
bash benchmark.sh ./benchmark_bin_v8 ./opt_models result_armv8.txt
```
Finally the result is saved in `result_armv8.txt` and shown as follow.
```
PaddleLite Benchmark
Threads=1 Warmup=10 Repeats=30
MobileNetV1_lite min = 30.89500 max = 30.78500 average = 30.84173
Threads=2 Warmup=10 Repeats=30
MobileNetV1_lite min = 18.25300 max = 18.11000 average = 18.18017
Threads=4 Warmup=10 Repeats=30
MobileNetV1_lite min = 10.00600 max = 9.90000 average = 9.96177
```
Taking one thread as an example, the average speed of MobileNetV1 on SD855 is `30.84173FPS`.
For more detailed parameter explanations and Paddle-Lite usage, please refer to the [Paddle-Lite docs](https://paddle-lite.readthedocs.io/zh/latest/).
# Model Quantization
Int8 quantization is one of the key features in [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim).
It supports two kinds of quantization-aware training, the **Dynamic strategy** and the **Static strategy**,
as well as layer-wise and channel-wise quantization,
and models generated by PaddleSlim can be deployed with PaddleLite.
By using this toolkit, [PaddleClas](https://github.com/PaddlePaddle/PaddleClas) quantized the mobilenet_v3_large_x1_0 model, whose accuracy is 78.9% after distillation.
After quantization, the prediction speed is accelerated from 19.308 ms to 14.395 ms on SD855.
The storage size is reduced from 21 M to 10 M.
The Top-1 recognition accuracy is 75.9%.
For specific training methods, please refer to [PaddleSlim quant aware](../../../deploy/slim/README_en.md)
# Model Service Deployment
## Overview
[Paddle Serving](https://github.com/PaddlePaddle/Serving) aims to help deep-learning researchers easily deploy online inference services, supporting one-click industrial-grade deployment, high concurrency, efficient communication between client and server, and client development in multiple programming languages.
This document takes HTTP inference service deployment as an example to introduce how to use Paddle Serving to deploy model services in PaddleClas.
## Serving Install
The Serving official website recommends using Docker to install and deploy the Serving environment. First, you need to pull the Docker image and create a Serving-based container.
```shell
nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
nvidia-docker exec -it test bash
```
Inside the Docker container, install the Serving-related packages:
```shell
pip install paddlepaddle-gpu
pip install paddle-serving-client
pip install paddle-serving-server-gpu
```
* If the installation is too slow, you can append `-i https://pypi.tuna.tsinghua.edu.cn/simple` to the pip commands to speed it up.
* If you want to deploy a CPU service, you can install the CPU version of Serving with the following command.
```shell
pip install paddle-serving-server
```
### Export Model
Export the Serving model using `tools/export_serving_model.py`. Taking ResNet50_vd as an example, the command is as follows.
```shell
python tools/export_serving_model.py -m ResNet50_vd -p ./pretrained/ResNet50_vd_pretrained/ -o serving
```
Finally, the client configuration and the model parameter and structure files will be saved in `ppcls_client_conf` and `ppcls_model`.
### Service Deployment and Request
* Use the following command to start the Serving service.
```shell
python tools/serving/image_service_gpu.py serving/ppcls_model workdir 9292
```
`serving/ppcls_model` is the address of the Serving model just saved, `workdir` is the work directory, and `9292` is the port of the service.
* Use the following script to send a recognition request to the Serving service and get the result back.
```
python tools/serving/image_http_client.py 9292 ./docs/images/logo.png
```
`9292` is the port for sending the request, which must be consistent with the port Serving was started on, and `./docs/images/logo.png` is the test image; the final top-1 label and probability are returned.
* For more Serving deployment options, such as the RPC inference service, you can refer to the Serving official website: [https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imagenet](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imagenet)
# Train with DALI
## Preface
[The NVIDIA Data Loading Library](https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html) (DALI) is a library for data loading and pre-processing to accelerate deep learning applications. It can be used to build the Dataloader of Paddle.
Since deep learning relies on a large amount of data in the training stage, the data need to be loaded and preprocessed. These operations are usually executed on the CPU, which limits further improvement of the training speed; especially when the batch size is large, they become the speed bottleneck. DALI can use the GPU to accelerate these operations and thereby further improve the training speed.
## Installing DALI
DALI only supports Linux x64 with CUDA 10.2 or later.
* For CUDA 10:

      pip install --extra-index-url https://developer.download.nvidia.com/compute/redist nvidia-dali-cuda100

* For CUDA 11.0:

      pip install --extra-index-url https://developer.download.nvidia.com/compute/redist nvidia-dali-cuda110
For more information about installing DALI, please refer to [DALI](https://docs.nvidia.com/deeplearning/dali/user-guide/docs/installation.html).
## Using DALI
PaddleClas supports training with DALI in static-graph mode. Since DALI only supports GPU training, `CUDA_VISIBLE_DEVICES` needs to be set, and because DALI occupies GPU memory, some GPU memory has to be reserved for it. To train with DALI, just set the field `use_dali: True` in the training config, or start training with the following command:
```shell
# set the GPUs that can be seen
export CUDA_VISIBLE_DEVICES="0"
# set the GPU memory used for neural network training, generally 0.8 or 0.7, and the remaining GPU memory is reserved for DALI
export FLAGS_fraction_of_gpu_memory_to_use=0.80
python tools/static/train.py -c configs/ResNet/ResNet50.yaml -o use_dali=True
```
You can also train with multiple GPUs:
```shell
# set the GPUs that can be seen
export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"
# set the GPU memory used for neural network training, generally 0.8 or 0.7, and the remaining GPU memory is reserved for DALI
export FLAGS_fraction_of_gpu_memory_to_use=0.80
python -m paddle.distributed.launch \
--gpus="0,1,2,3,4,5,6,7" \
tools/static/train.py \
-c ./configs/ResNet/ResNet50.yaml \
-o use_dali=True
```
## Train with FP16
On top of the above, using FP16 half-precision can further improve the training speed; you can refer to the following command.
```shell
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export FLAGS_fraction_of_gpu_memory_to_use=0.8
python -m paddle.distributed.launch \
--gpus="0,1,2,3,4,5,6,7" \
tools/static/train.py \
-c configs/ResNet/ResNet50_fp16.yaml
```
faq_series
================================

.. toctree::
   :maxdepth: 2

   faq_2021_s2_en.md
   faq_2021_s1_en.md
   faq_2020_s1_en.md
   faq_selected_30_en.md
...@@ -58,12 +58,12 @@ The results are shown in the table below: ...@@ -58,12 +58,12 @@ The results are shown in the table below:
- Address of the pre-training model: [General recognition pre-training model](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/pretrain/general_PPLCNet_x2_5_pretrained_v1.0.pdparams) - Address of the pre-training model: [General recognition pre-training model](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/pretrain/general_PPLCNet_x2_5_pretrained_v1.0.pdparams)
<a name="4"></a> <a name="4"></a>
# 4.Customized Feature Extraction ## 4.Customized Feature Extraction
Customized feature extraction refers to retraining the feature extraction model based on one's own task. It consists of four main steps: 1) data preparation, 2) model training, 3) model evaluation, and 4) model inference. Customized feature extraction refers to retraining the feature extraction model based on one's own task. It consists of four main steps: 1) data preparation, 2) model training, 3) model evaluation, and 4) model inference.
<a name="4.1"></a> <a name="4.1"></a>
## 4.1 Data Preparation ### 4.1 Data Preparation
To start with, customize your dataset based on the task (See [Format description](../data_preparation/recognition_dataset_en.md#1) for the dataset format). Before initiating the model training, modify the data-related content in the configuration files, including the address of the dataset and the class number. The corresponding locations in configuration files are shown below: To start with, customize your dataset based on the task (See [Format description](../data_preparation/recognition_dataset_en.md#1) for the dataset format). Before initiating the model training, modify the data-related content in the configuration files, including the address of the dataset and the class number. The corresponding locations in configuration files are shown below:
...@@ -99,7 +99,7 @@ Train: ...@@ -99,7 +99,7 @@ Train:
``` ```
<a name="4.2"></a> <a name="4.2"></a>
## 4.2 Model Training ### 4.2 Model Training
- Single machine single card training - Single machine single card training
...@@ -130,7 +130,7 @@ python -m paddle.distributed.launch \ ...@@ -130,7 +130,7 @@ python -m paddle.distributed.launch \
``` ```
<a name="4.3"></a> <a name="4.3"></a>
## 4.3 Model Evaluation ### 4.3 Model Evaluation
- Single Card Evaluation - Single Card Evaluation
...@@ -154,21 +154,21 @@ python -m paddle.distributed.launch \ ...@@ -154,21 +154,21 @@ python -m paddle.distributed.launch \
**Recommendation:** It is suggested to employ multi-card evaluation, which can quickly obtain the feature set of the overall dataset using multi-card parallel computing, accelerating the evaluation process. **Recommendation:** It is suggested to employ multi-card evaluation, which can quickly obtain the feature set of the overall dataset using multi-card parallel computing, accelerating the evaluation process.
<a name="4.4"></a> <a name="4.4"></a>
## 4.4 Model Inference ### 4.4 Model Inference
Two steps are included in the inference: 1)exporting the inference model; 2)obtaining the feature vector. Two steps are included in the inference: 1)exporting the inference model; 2)obtaining the feature vector.
### 4.4.1 Export Inference Model #### 4.4.1 Export Inference Model
``` ```
python tools/export_model \ python tools/export_model.py \
-c ppcls/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5.yaml \ -c ppcls/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5.yaml \
-o Global.pretrained_model="output/RecModel/best_model" -o Global.pretrained_model="output/RecModel/best_model"
``` ```
The generated inference models are under the directory `inference`, which comprises three files, namely, `inference.pdmodel``inference.pdiparams``inference.pdiparams.info`. Among them, `inference.pdmodel` serves to store the structure of inference model while `inference.pdiparams` and `inference.pdiparams.info` are mobilized to store model-related parameters. The generated inference models are under the directory `inference`, which comprises three files, namely, `inference.pdmodel``inference.pdiparams``inference.pdiparams.info`. Among them, `inference.pdmodel` serves to store the structure of inference model while `inference.pdiparams` and `inference.pdiparams.info` are mobilized to store model-related parameters.
### 4.4.2 Obtain Feature Vector #### 4.4.2 Obtain Feature Vector
``` ```
cd deploy cd deploy
......
image_recognition_pipeline
================================

.. toctree::
   :maxdepth: 2

   mainbody_detection_en.md
   feature_extraction_en.md
   vector_search_en.md
Welcome to the PaddleClas image classification library
================================

.. toctree::
   :maxdepth: 2

   models_training/index
   introduction/index
   image_recognition_pipeline/index
   others/index
   faq_series/index
   data_preparation/index
   installation/index
   models/index
   advanced_tutorials/index
   algorithm_introduction/index
   inference_deployment/index
   quick_start/index
...@@ -293,8 +293,6 @@ sh tools/run.sh ...@@ -293,8 +293,6 @@ sh tools/run.sh
* The prediction results will be shown on the screen, which is as follows. * The prediction results will be shown on the screen, which is as follows.
<div align="center"> ![](../../images/inference_deployment/cpp_infer_result.png)
<img src="./docs/imgs/cpp_infer_result.png" width="600">
</div>
* In the above results,`class id` represents the id corresponding to the category with the highest confidence, and `score` represents the probability that the image belongs to that category. * In the above results,`class id` represents the id corresponding to the category with the highest confidence, and `score` represents the probability that the image belongs to that category.
inference_deployment
================================

.. toctree::
   :maxdepth: 2

   export_model_en.md
   python_deploy_en.md
   cpp_deploy_en.md
   paddle_serving_deploy_en.md
   paddle_hub_serving_deploy_en.md
   paddle_lite_deploy_en.md
   whl_deploy_en.md
...@@ -258,9 +258,7 @@ export LD_LIBRARY_PATH=/data/local/tmp/debug:$LD_LIBRARY_PATH ...@@ -258,9 +258,7 @@ export LD_LIBRARY_PATH=/data/local/tmp/debug:$LD_LIBRARY_PATH
The result is as follows: The result is as follows:
<div align="center"> ![](../../images/inference_deployment/lite_demo_result.png)
<img src="./imgs/lite_demo_result.png" width="600">
</div>
<a name="3"></a> <a name="3"></a>
## 3. FAQ ## 3. FAQ
......
...@@ -39,9 +39,7 @@ pip3 install dist/* ...@@ -39,9 +39,7 @@ pip3 install dist/*
## 2. Quick Start ## 2. Quick Start
* Using the `ResNet50` model provided by PaddleClas, the following image(`'docs/images/inference_deployment/whl_demo.jpg'`) as an example. * Using the `ResNet50` model provided by PaddleClas, the following image(`'docs/images/inference_deployment/whl_demo.jpg'`) as an example.
<div align="center"> ![](../../images/inference_deployment/whl_demo.jpg)
<img src="../images/inference_deployment/whl_demo.jpg" width = "400" />
</div>
* Python * Python
```python ```python
......
installation
================================

.. toctree::
   :maxdepth: 2

   install_paddle_en.md
   install_paddleclas_en.md
# Install PaddlePaddle
---
@@ -9,7 +9,7 @@
- [3. Install PaddlePaddle using pip](#3)
- [4. Verify installation](#4)
At present, **PaddleClas** requires **PaddlePaddle** version `>=2.0`. Docker is recommended to run PaddleClas; for more detailed information about docker and nvidia-docker, you can refer to the [tutorial](https://docs.docker.com/get-started/). If you do not want to use docker, you can skip section [2. (Recommended) Prepare a docker environment](#2) and go to section [3. Install PaddlePaddle using pip](#3).
<a name="1"></a>
@@ -96,5 +96,5 @@ python -c "import paddle; print(paddle.__version__)"
Note:
* Make sure the compiled source code is later than PaddlePaddle 2.0.
* Indicate `WITH_DISTRIBUTE=ON` when compiling; please refer to the [Instruction](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/install/Tables.html#id3) for more details.
* When running in docker, in order to ensure that the container has enough shared memory for the dataloader acceleration of Paddle, please set the parameter `--shm-size=8g` when creating the docker container; if conditions permit, you can set it to a larger value.
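As a hedged illustration of that last point (the image name is a placeholder for whichever PaddlePaddle image you pulled in section 2, and the mount path is arbitrary), a container created with enough shared memory might be started like this:

```shell
# <paddle_image> is a placeholder for the PaddlePaddle docker image you pulled
nvidia-docker run --name ppcls \
    -v $PWD:/paddle \
    --shm-size=8g \
    -it <paddle_image> /bin/bash
```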
introduction
================================

.. toctree::
   :maxdepth: 2

   function_intro_en.md
   more_demo/index
more_demo
================================

.. toctree::
   :maxdepth: 1

   product.md
   logo.md
   cartoon.md
   more_demo.md
   vehicle.md
...@@ -27,13 +27,13 @@ In the field of computer vision, the quality of backbone network determines the ...@@ -27,13 +27,13 @@ In the field of computer vision, the quality of backbone network determines the
## 2. Introduction ## 2. Introduction
Recent years witnessed the emergence of many lightweight backbone networks. In past two years, in particular, there were abundant networks searched by NAS that either enjoy advantages on FLOPs or Params, or have an edge in terms of inference speed on ARM devices. However, few of them dedicated to specified optimization of Intel CPU, resulting their imperfect inference speed on the intel CPU side. Based on this, we specially design the backbone network PP-LCNet for Intel CPU devices with its acceleration library MKLDNN. Compared with other lightweight SOTA models, this backbone network can further improve the performance of the model without increasing the inference time, significantly outperforming the existing SOTA models. A comparison chart with other models is shown below. Recent years witnessed the emergence of many lightweight backbone networks. In past two years, in particular, there were abundant networks searched by NAS that either enjoy advantages on FLOPs or Params, or have an edge in terms of inference speed on ARM devices. However, few of them dedicated to specified optimization of Intel CPU, resulting their imperfect inference speed on the intel CPU side. Based on this, we specially design the backbone network PP-LCNet for Intel CPU devices with its acceleration library MKLDNN. Compared with other lightweight SOTA models, this backbone network can further improve the performance of the model without increasing the inference time, significantly outperforming the existing SOTA models. A comparison chart with other models is shown below.
<div align=center><img src="../../images/PP-LCNet/PP-LCNet-Acc.png" width="500" height="400"/></div> ![](../../images/PP-LCNet/PP-LCNet-Acc.png)
<a name="3"></a> <a name="3"></a>
## 3. Method ## 3. Method
The overall structure of the network is shown in the figure below. The overall structure of the network is shown in the figure below.
<div align=center><img src="../../images/PP-LCNet/PP-LCNet.png" width="700" height="400"/></div> ![](../../images/PP-LCNet/PP-LCNet.png)
Build on extensive experiments, we found that many seemingly less time-consuming operations will increase the latency on Intel CPU-based devices, especially when the MKLDNN acceleration library is enabled. Therefore, we finally chose a block with the leanest possible structure and the fastest possible speed to form our BaseNet (similar to MobileNetV1). Based on BaseNet, we summarized four strategies that can improve the accuracy of the model without increasing the latency, and we combined these four strategies to form PP-LCNet. Each of these four strategies is introduced as below: Build on extensive experiments, we found that many seemingly less time-consuming operations will increase the latency on Intel CPU-based devices, especially when the MKLDNN acceleration library is enabled. Therefore, we finally chose a block with the leanest possible structure and the fastest possible speed to form our BaseNet (similar to MobileNetV1). Based on BaseNet, we summarized four strategies that can improve the accuracy of the model without increasing the latency, and we combined these four strategies to form PP-LCNet. Each of these four strategies is introduced as below:
......
@@ -2,15 +2,29 @@ models
================================
.. toctree::
   :maxdepth: 2

   DPN_DenseNet_en.md
   models_intro_en.md
   RepVGG_en.md
   EfficientNet_and_ResNeXt101_wsl_en.md
   ViT_and_DeiT_en.md
   SwinTransformer_en.md
   Others_en.md
   SEResNext_and_Res2Net_en.md
   ESNet_en.md
   HRNet_en.md
   ReXNet_en.md
   Inception_en.md
   TNT_en.md
   RedNet_en.md
   DLA_en.md
   ResNeSt_RegNet_en.md
   PP-LCNet_en.md
   HarDNet_en.md
   ResNet_and_vd_en.md
   LeViT_en.md
   Mobile_en.md
   MixNet_en.md
   Twins_en.md
   PVTV2_en.md
This diff is collapsed.
models_training
================================

.. toctree::
   :maxdepth: 2

   config_description_en.md
   recognition_en.md
   classification_en.md
   train_strategy_en.md
...@@ -52,6 +52,6 @@ More information about the command,please refer to [VisualDL](https://github.c ...@@ -52,6 +52,6 @@ More information about the command,please refer to [VisualDL](https://github.c
Then you can enter the address `127.0.0.1:8840` and view the training process in the browser: Then you can enter the address `127.0.0.1:8840` and view the training process in the browser:
<div align="center">
<img src="../../images/VisualDL/train_loss.png" width="400"> ![](../../images/VisualDL/train_loss.png)
</div>
others
================================

.. toctree::
   :maxdepth: 2

   transfer_learning_en.md
   train_with_DALI_en.md
   VisualDL_en.md
   train_on_xpu_en.md
   feature_visiualization_en.md
   paddle_mobile_inference_en.md
   competition_support_en.md
   update_history_en.md
   versions_en.md
# Benchmark on Mobile
---
......
quick_start
================================

.. toctree::
   :maxdepth: 2

   quick_start_classification_new_user_en.md
   quick_start_classification_professional_en.md
   quick_start_recognition_en.md
   quick_start_multilabel_classification_en.md
...@@ -78,7 +78,7 @@ After the unzip operation is completed, there are three `.txt` files for trainin ...@@ -78,7 +78,7 @@ After the unzip operation is completed, there are three `.txt` files for trainin
The image files of the flowers102 dataset are stored in the `dataset/flowers102/jpg` directory. The image examples are as follows: The image files of the flowers102 dataset are stored in the `dataset/flowers102/jpg` directory. The image examples are as follows:
<div align="center"> <div align="center">
<img src="../../images/quick_start/Examples-Flower-102.png" width = "800" /> ![](../../images/quick_start/Examples-Flower-102.png)
</div> </div>
Return to the root directory of `PaddleClas`: Return to the root directory of `PaddleClas`:
...@@ -148,9 +148,7 @@ python tools/train.py -c ./ppcls/configs/quick_start/ResNet50_vd.yaml ...@@ -148,9 +148,7 @@ python tools/train.py -c ./ppcls/configs/quick_start/ResNet50_vd.yaml
After the training is completed, the `Top1 Acc` curve of the validation set is shown below, and the highest accuracy rate is 0.2735. After the training is completed, the `Top1 Acc` curve of the validation set is shown below, and the highest accuracy rate is 0.2735.
<div align="center"> ![](../../images/quick_start/r50_vd_acc.png)
<img src="../../images/quick_start/r50_vd_acc.png" width = "800" />
</div>
<a name="4.2.2"></a> <a name="4.2.2"></a>
#### 4.2.2 Use pre-trained models for training #### 4.2.2 Use pre-trained models for training
...@@ -165,9 +163,7 @@ python tools/train.py -c ./ppcls/configs/quick_start/ResNet50_vd.yaml -o Arch.pr ...@@ -165,9 +163,7 @@ python tools/train.py -c ./ppcls/configs/quick_start/ResNet50_vd.yaml -o Arch.pr
The `Top1 Acc` curve of the validation set is shown below. The highest accuracy rate is `0.9402`. After loading the pre-trained model, the accuracy of the flowers102 data set has been greatly improved, and the absolute accuracy has increased by more than 65%. The `Top1 Acc` curve of the validation set is shown below. The highest accuracy rate is `0.9402`. After loading the pre-trained model, the accuracy of the flowers102 data set has been greatly improved, and the absolute accuracy has increased by more than 65%.
<div align="center"> ![](../../images/quick_start/r50_vd_pretrained_acc.png)
<img src="../../images/quick_start/r50_vd_pretrained_acc.png" width = "800" />
</div>
<a name="5"></a> <a name="5"></a>
## 5. Model prediction ## 5. Model prediction
......
...@@ -165,9 +165,7 @@ python3.7 python/predict_system.py -c configs/inference_product.yaml -o Global.u ...@@ -165,9 +165,7 @@ python3.7 python/predict_system.py -c configs/inference_product.yaml -o Global.u
The image to be retrieved is shown below. The image to be retrieved is shown below.
<div align="center"> ![](../../images/recognition/product_demo/query/daoxiangcunjinzhubing_6.jpg)
<img src="../../images/recognition/product_demo/query/daoxiangcunjinzhubing_6.jpg" width = "400" />
</div>
The final output is shown below. The final output is shown below.
...@@ -182,9 +180,7 @@ where bbox indicates the location of the detected object, rec_docs indicates the ...@@ -182,9 +180,7 @@ where bbox indicates the location of the detected object, rec_docs indicates the
The detection result is also saved in the folder `output`, for this image, the visualization result is as follows. The detection result is also saved in the folder `output`, for this image, the visualization result is as follows.
<div align="center"> ![](../../images/recognition/product_demo/result/daoxiangcunjinzhubing_6_en.jpg)
<img src="../../images/recognition/product_demo/result/daoxiangcunjinzhubing_6_en.jpg" width = "400" />
</div>
<a name="2.2.2"></a> <a name="2.2.2"></a>
...@@ -228,9 +224,7 @@ python3.7 python/predict_system.py -c configs/inference_product.yaml -o Global.i ...@@ -228,9 +224,7 @@ python3.7 python/predict_system.py -c configs/inference_product.yaml -o Global.i
The image to be retrieved is shown below. The image to be retrieved is shown below.
<div align="center"> ![](../../images/recognition/product_demo/query/anmuxi.jpg)
<img src="../../images/recognition/product_demo/query/anmuxi.jpg" width = "400" />
</div>
The output is empty. The output is empty.
...@@ -298,6 +292,5 @@ The output is as follows: ...@@ -298,6 +292,5 @@ The output is as follows:
The final recognition result is `Anmuxi Ambrosial Yogurt`, which is corrrect, the visualization result is as follows. The final recognition result is `Anmuxi Ambrosial Yogurt`, which is corrrect, the visualization result is as follows.
<div align="center"> ![](../../images/recognition/product_demo/result/anmuxi_en.jpg)
<img src="../../images/recognition/product_demo/result/anmuxi_en.jpg" width = "400" />
</div> </div>
...@@ -159,7 +159,7 @@ python -m paddle.distributed.launch \ ...@@ -159,7 +159,7 @@ python -m paddle.distributed.launch \
#### 4.4.1 导出推理模型 #### 4.4.1 导出推理模型
``` ```
python tools/export_model \ python tools/export_model.py \
-c ppcls/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5.yaml \ -c ppcls/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5.yaml \
-o Global.pretrained_model="output/RecModel/best_model" -o Global.pretrained_model="output/RecModel/best_model"
``` ```
......
This diff is collapsed.
...@@ -61,6 +61,7 @@ from ppcls.arch.backbone.model_zoo.tnt import TNT_small ...@@ -61,6 +61,7 @@ from ppcls.arch.backbone.model_zoo.tnt import TNT_small
from ppcls.arch.backbone.model_zoo.hardnet import HarDNet68, HarDNet85, HarDNet39_ds, HarDNet68_ds from ppcls.arch.backbone.model_zoo.hardnet import HarDNet68, HarDNet85, HarDNet39_ds, HarDNet68_ds
from ppcls.arch.backbone.model_zoo.cspnet import CSPDarkNet53 from ppcls.arch.backbone.model_zoo.cspnet import CSPDarkNet53
from ppcls.arch.backbone.model_zoo.pvt_v2 import PVT_V2_B0, PVT_V2_B1, PVT_V2_B2_Linear, PVT_V2_B2, PVT_V2_B3, PVT_V2_B4, PVT_V2_B5 from ppcls.arch.backbone.model_zoo.pvt_v2 import PVT_V2_B0, PVT_V2_B1, PVT_V2_B2_Linear, PVT_V2_B2, PVT_V2_B3, PVT_V2_B4, PVT_V2_B5
from ppcls.arch.backbone.model_zoo.repvgg import RepVGG_A0, RepVGG_A1, RepVGG_A2, RepVGG_B0, RepVGG_B1, RepVGG_B2, RepVGG_B1g2, RepVGG_B1g4, RepVGG_B2g4, RepVGG_B3g4
from ppcls.arch.backbone.variant_models.resnet_variant import ResNet50_last_stage_stride1 from ppcls.arch.backbone.variant_models.resnet_variant import ResNet50_last_stage_stride1
from ppcls.arch.backbone.variant_models.vgg_variant import VGG19Sigmoid from ppcls.arch.backbone.variant_models.vgg_variant import VGG19Sigmoid
from ppcls.arch.backbone.variant_models.pp_lcnet_variant import PPLCNet_x2_5_Tanh from ppcls.arch.backbone.variant_models.pp_lcnet_variant import PPLCNet_x2_5_Tanh
......
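Importing these symbols in `ppcls/arch/backbone/__init__.py` presumably makes the listed RepVGG variants constructible by name from a training config's `Arch` section, as with the other backbones in this module. An illustrative fragment (not taken from a shipped config):

```yaml
Arch:
  name: RepVGG_A0      # any newly imported variant, e.g. RepVGG_B3g4
  class_num: 1000
```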
@@ -33,18 +33,12 @@ MODEL_URLS = {
"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/RepVGG_B1_pretrained.pdparams",
"RepVGG_B2":
"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/RepVGG_B2_pretrained.pdparams",
-"RepVGG_B3":
-"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/RepVGG_B3_pretrained.pdparams",
"RepVGG_B1g2":
"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/RepVGG_B1g2_pretrained.pdparams",
"RepVGG_B1g4":
"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/RepVGG_B1g4_pretrained.pdparams",
-"RepVGG_B2g2":
-"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/RepVGG_B2g2_pretrained.pdparams",
"RepVGG_B2g4":
"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/RepVGG_B2g4_pretrained.pdparams",
-"RepVGG_B3g2":
-"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/RepVGG_B3g2_pretrained.pdparams",
"RepVGG_B3g4":
"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/RepVGG_B3g4_pretrained.pdparams",
}
@@ -92,6 +86,8 @@ class RepVGGBlock(nn.Layer):
groups=1,
padding_mode='zeros'):
super(RepVGGBlock, self).__init__()
+self.is_repped = False
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = kernel_size
@@ -127,6 +123,12 @@ class RepVGGBlock(nn.Layer):
groups=groups)
def forward(self, inputs):
+if not self.training and not self.is_repped:
+self.rep()
+self.is_repped = True
+if self.training and self.is_repped:
+self.is_repped = False
if not self.training:
return self.nonlinearity(self.rbr_reparam(inputs))
@@ -137,7 +139,7 @@
return self.nonlinearity(
self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out)
-def eval(self):
+def rep(self):
if not hasattr(self, 'rbr_reparam'):
self.rbr_reparam = nn.Conv2D(
in_channels=self.in_channels,
@@ -148,12 +150,9 @@ class RepVGGBlock(nn.Layer):
dilation=self.dilation,
groups=self.groups,
padding_mode=self.padding_mode)
-self.training = False
kernel, bias = self.get_equivalent_kernel_bias()
self.rbr_reparam.weight.set_value(kernel)
self.rbr_reparam.bias.set_value(bias)
-for layer in self.sublayers():
-layer.eval()
def get_equivalent_kernel_bias(self):
kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
@@ -248,12 +247,6 @@ class RepVGG(nn.Layer):
self.cur_layer_idx += 1
return nn.Sequential(*blocks)
-def eval(self):
-self.training = False
-for layer in self.sublayers():
-layer.training = False
-layer.eval()
def forward(self, x):
out = self.stage0(x)
out = self.stage1(out)
@@ -367,17 +360,6 @@ def RepVGG_B2(pretrained=False, use_ssld=False, **kwargs):
return model
-def RepVGG_B2g2(pretrained=False, use_ssld=False, **kwargs):
-model = RepVGG(
-num_blocks=[4, 6, 16, 1],
-width_multiplier=[2.5, 2.5, 2.5, 5],
-override_groups_map=g2_map,
-**kwargs)
-_load_pretrained(
-pretrained, model, MODEL_URLS["RepVGG_B2g2"], use_ssld=use_ssld)
-return model
def RepVGG_B2g4(pretrained=False, use_ssld=False, **kwargs):
model = RepVGG(
num_blocks=[4, 6, 16, 1],
@@ -389,28 +371,6 @@ def RepVGG_B2g4(pretrained=False, use_ssld=False, **kwargs):
return model
-def RepVGG_B3(pretrained=False, use_ssld=False, **kwargs):
-model = RepVGG(
-num_blocks=[4, 6, 16, 1],
-width_multiplier=[3, 3, 3, 5],
-override_groups_map=None,
-**kwargs)
-_load_pretrained(
-pretrained, model, MODEL_URLS["RepVGG_B3"], use_ssld=use_ssld)
-return model
-def RepVGG_B3g2(pretrained=False, use_ssld=False, **kwargs):
-model = RepVGG(
-num_blocks=[4, 6, 16, 1],
-width_multiplier=[3, 3, 3, 5],
-override_groups_map=g2_map,
-**kwargs)
-_load_pretrained(
-pretrained, model, MODEL_URLS["RepVGG_B3g2"], use_ssld=use_ssld)
-return model
def RepVGG_B3g4(pretrained=False, use_ssld=False, **kwargs):
model = RepVGG(
num_blocks=[4, 6, 16, 1],
......
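In effect, the overridden `eval()` methods are removed and branch fusion now happens lazily: the first forward pass after `model.eval()` calls `rep()`, which builds `rbr_reparam` and folds the 3x3, 1x1, and identity branches into a single convolution. A minimal usage sketch (the pretrained flag and input shape are illustrative):

```python
import paddle
from ppcls.arch.backbone.model_zoo.repvgg import RepVGG_A0

# Hedged sketch of inference-time re-parameterization after this change.
model = RepVGG_A0(pretrained=False)
model.eval()                          # standard paddle.nn.Layer eval(), no override needed
x = paddle.randn([1, 3, 224, 224])
y = model(x)                          # first call triggers rep() inside each RepVGGBlock
```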
@@ -372,7 +372,7 @@ def _load_pretrained(pretrained, model, model_url, use_ssld=False):
)
-def TNT_small(pretrained=False, **kwargs):
+def TNT_small(pretrained=False, use_ssld=False, **kwargs):
model = TNT(patch_size=16,
embed_dim=384,
in_dim=24,
@@ -381,5 +381,6 @@ def TNT_small(pretrained=False, **kwargs):
in_num_head=4,
qkv_bias=False,
**kwargs)
-_load_pretrained(pretrained, model, MODEL_URLS["TNT_small"])
+_load_pretrained(
+pretrained, model, MODEL_URLS["TNT_small"], use_ssld=use_ssld)
return model
@@ -18,7 +18,7 @@ Global:
# model architecture
Arch:
name: "DistillationModel"
-class_num: 1000
+class_num: &class_num 1000
# if not null, its lengths should be same as models
pretrained_list:
# if not null, its lengths should be same as models
@@ -28,11 +28,13 @@ Arch:
models:
- Teacher:
name: MobileNetV3_large_x1_0
+class_num: *class_num
pretrained: True
use_ssld: True
dropout_prob: null
- Student:
name: MobileNetV3_small_x1_0
+class_num: *class_num
pretrained: False
dropout_prob: null
......
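The `&class_num`/`*class_num` pair is standard YAML anchor-and-alias syntax, so the teacher and student branches reuse the single value defined under `Arch`. A stand-alone illustration (structure abridged from the config above):

```yaml
Arch:
  class_num: &class_num 1000      # anchor: define the value once
  models:
    - Teacher:
        class_num: *class_num     # alias: resolves to 1000
    - Student:
        class_num: *class_num
```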
@@ -92,7 +92,7 @@ class Engine(object):
self.vdl_writer = LogWriter(logdir=vdl_writer_path)
# set device
-assert self.config["Global"]["device"] in ["cpu", "gpu", "xpu", "npu"]
+assert self.config["Global"]["device"] in ["cpu", "gpu", "xpu", "npu", "mlu"]
self.device = paddle.set_device(self.config["Global"]["device"])
logger.info('train with paddle {} and device {}'.format(
paddle.__version__, self.device))
@@ -107,7 +107,9 @@ class Engine(object):
self.scale_loss = 1.0
self.use_dynamic_loss_scaling = False
if self.amp:
-AMP_RELATED_FLAGS_SETTING = {'FLAGS_max_inplace_grad_add': 8, }
+AMP_RELATED_FLAGS_SETTING = {
+'FLAGS_max_inplace_grad_add': 8,
+}
if paddle.is_compiled_with_cuda():
AMP_RELATED_FLAGS_SETTING.update({
'FLAGS_cudnn_batchnorm_spatial_persistent': 1
@@ -172,7 +174,9 @@
if metric_config is not None:
metric_config = metric_config.get("Train")
if metric_config is not None:
-if hasattr(self.train_dataloader, "collate_fn"):
+if hasattr(
+self.train_dataloader, "collate_fn"
+) and self.train_dataloader.collate_fn is not None:
for m_idx, m in enumerate(metric_config):
if "TopkAcc" in m:
msg = f"'TopkAcc' metric can not be used when setting 'batch_transform_ops' in config. The 'TopkAcc' metric has been removed."
......
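For dynamic-graph training, the accepted values of `Global.device` now include `mlu` alongside the existing four; an illustrative config fragment (only one device string is set at a time):

```yaml
Global:
  device: mlu   # one of: cpu, gpu, xpu, npu, mlu, per the assert above
```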
@@ -21,7 +21,6 @@ from ppcls.utils import profiler
def train_epoch(engine, epoch_id, print_batch_step):
tic = time.time()
-v_current = [int(i) for i in paddle.__version__.split(".")]
for iter_id, batch in enumerate(engine.train_dataloader):
if iter_id >= engine.max_iter:
break
......
@@ -302,8 +302,5 @@ class AccuracyScore(MutiLabelMetric):
fps = mcm[:, 0, 1]
accuracy = (sum(tps) + sum(tns)) / (
sum(tps) + sum(tns) + sum(fns) + sum(fps))
-precision = sum(tps) / (sum(tps) + sum(fps))
-recall = sum(tps) / (sum(tps) + sum(fns))
-F1 = 2 * (accuracy * recall) / (accuracy + recall)
metric_dict["AccuracyScore"] = paddle.to_tensor(accuracy)
return metric_dict
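The dropped lines computed an F1 value from `accuracy` rather than `precision`; only the accuracy score is kept. For reference, the standard micro-averaged definitions are sketched below, mirroring the per-class sums used in `AccuracyScore` above:

```python
def micro_f1(tps, fps, fns):
    # Micro-averaged precision, recall, and F1 from per-class true-positive,
    # false-positive, and false-negative counts.
    precision = sum(tps) / (sum(tps) + sum(fps))
    recall = sum(tps) / (sum(tps) + sum(fns))
    return 2 * precision * recall / (precision + recall)
```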
@@ -91,9 +91,10 @@ def main(args):
use_xpu = global_config.get("use_xpu", False)
use_npu = global_config.get("use_npu", False)
+use_mlu = global_config.get("use_mlu", False)
assert (
-use_gpu and use_xpu and use_npu
+use_gpu and use_xpu and use_npu and use_mlu
-) is not True, "gpu, xpu and npu can not be true in the same time in static mode!"
+) is not True, "gpu, xpu, npu and mlu can not be true in the same time in static mode!"
if use_gpu:
device = paddle.set_device('gpu')
@@ -101,6 +102,8 @@
device = paddle.set_device('xpu')
elif use_npu:
device = paddle.set_device('npu')
+elif use_mlu:
+device = paddle.set_device('mlu')
else:
device = paddle.set_device('cpu')
......
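The static-graph entry point reads per-device booleans from the `Global` section instead of a single device string; the intent is that at most one of the four flags is enabled. An illustrative fragment (key names as read by the script above, values are placeholders):

```yaml
Global:
  use_gpu: False
  use_xpu: False
  use_npu: False
  use_mlu: True   # select Cambricon MLU
```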
@@ -20,7 +20,9 @@ with open('requirements.txt', encoding="utf-8-sig") as f:
def readme():
-with open('docs/en/whl_en.md', encoding="utf-8-sig") as f:
+with open(
+'docs/en/inference_deployment/whl_deploy_en.md',
+encoding="utf-8-sig") as f:
README = f.read()
return README
......