Commit b18015de authored by gaotingquan, committed by Tingquan Gao

docs: fix problems commented in reviewing

Parent 307424b6
...@@ -247,7 +247,7 @@ Batch_size is an important hyperparameter in neural networks training, whose val
### Q5.4 What is weight_decay? How to set it?
**A**: Overfitting is a common term in machine learning; simply put, it means a model performs well on the training data but poorly on the test data. Overfitting also occurs in image classification, and many regularization methods have been proposed to avoid it, among which weight_decay is one of the most widely used. When the SGD optimizer is used, weight_decay is equivalent to adding an L2 regularization term to the final loss function. L2 regularization makes the network weights tend toward smaller values, so the parameter values of the whole network move closer to 0 and the generalization performance of the model improves accordingly. In the implementations of the major deep learning frameworks, this value is the coefficient in front of the L2 regularization term; in the PaddlePaddle framework it is named L2Decay, so it is referred to as L2Decay below. The larger the coefficient, the stronger the regularization and the more the model tends toward underfitting. In terms of specific datasets:
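For reference, the snippet below is a minimal sketch (not an actual PaddleClas configuration) of how this coefficient can be attached to an SGD-with-momentum optimizer in Paddle; the model and the coefficient `1e-4` are placeholders chosen only for illustration.

```python
import paddle

# Minimal sketch: attach L2 regularization (L2Decay) to a momentum-SGD optimizer.
# The model and the coefficient 1e-4 are placeholders, not recommended values.
model = paddle.nn.Linear(10, 2)
optimizer = paddle.optimizer.Momentum(
    learning_rate=0.1,
    momentum=0.9,
    parameters=model.parameters(),
    weight_decay=paddle.regularizer.L2Decay(1e-4),  # larger coefficient -> stronger regularization
)
```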
......
...@@ -7,113 +7,26 @@
## Contents
- [1. Theory](#1)
  - [1.1 Basic Knowledge of PaddleClas](#1.1)
  - [1.2 Backbone Network and Pre-trained Model Library](#1.2)
  - [1.3 Image Classification](#1.3)
  - [1.4 General Detection](#1.4)
  - [1.5 Image Recognition](#1.5)
  - [1.6 Vector Search](#1.6)
- [2. Practice](#2)
  - [2.1 Common Problems in Training and Evaluation](#2.1)
  - [2.2 Image Classification](#2.2)
  - [2.3 General Detection](#2.3)
  - [2.4 Image Recognition](#2.4)
  - [2.5 Vector Search](#2.5)
  - [2.6 Model Inference Deployment](#2.6)

<a name="1"></a>
## Recent Updates
#### Q2.1.7: How to tackle the reported error `ERROR: Unexpected segmentation fault encountered in DataLoader workers.` during training?
**A**
Try setting the field `num_workers` in the training configuration file to `0`; try reducing the field `batch_size` in the file; and check that the dataset format and the dataset path in the configuration file are correct.
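For intuition about what these two fields control, the sketch below shows the corresponding arguments at the `paddle.io.DataLoader` level; the dataset is a random placeholder introduced only for this example.

```python
import numpy as np
from paddle.io import DataLoader, Dataset

# Placeholder dataset, only to make the loader arguments concrete.
class RandomDataset(Dataset):
    def __init__(self, n=16):
        self.n = n

    def __getitem__(self, idx):
        image = np.random.rand(3, 224, 224).astype("float32")
        label = np.array([0], dtype="int64")
        return image, label

    def __len__(self):
        return self.n

# num_workers=0 loads data in the main process, which sidesteps worker crashes;
# a smaller batch_size reduces memory pressure.
loader = DataLoader(RandomDataset(), batch_size=4, shuffle=True, num_workers=0)
```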
#### Q2.1.8: How to use `Mixup` and `Cutmix` during training?
**A**
- For `Mixup`, please refer to [Mixup](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/ppcls/configs/ImageNet/DataAugment/ResNet50_Mixup.yaml#L63-L65); for `Cutmix`, please refer to [Cutmix](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/ppcls/configs/ImageNet/DataAugment/ResNet50_Cutmix.yaml#L63-L65).
- The training accuracy (Acc) metric cannot be calculated when training with `Mixup` or `Cutmix`, so the field `Metric.Train.TopkAcc` needs to be removed from the configuration file; please refer to [Metric.Train.TopkAcc](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/ppcls/configs/ImageNet/DataAugment/ResNet50_Cutmix.yaml#L125-L128) for more details. A sketch of what `Mixup` does to a batch is given after this list.
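For intuition only (this is not the PaddleClas implementation), a minimal numpy sketch of `Mixup` is shown below: each image and its one-hot label are blended with a randomly permuted partner using a Beta-sampled coefficient. Because the resulting labels are soft mixtures, top-k accuracy against a single ground-truth label is no longer well defined, which is why `Metric.Train.TopkAcc` is removed.

```python
import numpy as np

def mixup_batch(images, one_hot_labels, alpha=0.2):
    """Minimal Mixup sketch: blend each sample with a randomly permuted partner."""
    lam = np.random.beta(alpha, alpha)
    perm = np.random.permutation(images.shape[0])
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_labels = lam * one_hot_labels + (1.0 - lam) * one_hot_labels[perm]
    return mixed_images, mixed_labels
```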
#### Q2.1.9: What are the fields `Global.pretrain_model` and `Global.checkpoints` used for in the training configuration file yaml?
**A**
- When `fine-tune` is required, the path of the file of pre-training model weights can be configured via the field `Global.pretrain_model`, which usually has the suffix `.pdparams`.
- During training, the training program automatically saves the breakpoint information at the end of each epoch, including the optimizer information `.pdopt` and the model weights `.pdparams`. If the training process is unexpectedly interrupted and needs to be resumed, the breakpoint information saved during training can be configured via the field `Global.checkpoints`; for example, configuring `checkpoints: ./output/ResNet18/epoch_18` restores the breakpoint information saved at the end of epoch 18. PaddleClas will automatically load `epoch_18.pdopt` and `epoch_18.pdparams` and continue training from epoch 19.
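As a rough illustration of what resuming from a breakpoint amounts to (a sketch, not the actual PaddleClas loading code), the two files can be loaded and restored roughly as follows; the model, optimizer, and checkpoint path are assumptions taken from the example above.

```python
import paddle

# Sketch only: the model and optimizer must be built exactly as they were when
# epoch_18 was saved, and the checkpoint files must exist at this path.
model = paddle.vision.models.resnet18()
optimizer = paddle.optimizer.Momentum(learning_rate=0.1, parameters=model.parameters())

model.set_state_dict(paddle.load("./output/ResNet18/epoch_18.pdparams"))
optimizer.set_state_dict(paddle.load("./output/ResNet18/epoch_18.pdopt"))
# Training would then continue from epoch 19.
```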
#### Q2.6.3: How to convert the model to `ONNX` format?
**A**: Paddle supports two ways of converting a model to the ONNX format, both of which rely on the `paddle2onnx` tool, so `paddle2onnx` needs to be installed first.
```shell
pip install paddle2onnx
```
- Converting an inference model to an ONNX model.
Take the `combined` format inference model (containing the `.pdmodel` and `.pdiparams` files) exported from a dynamic graph as an example, and run the following command to convert the model format:
```shell
paddle2onnx --model_dir ${model_path} --model_filename ${model_path}/inference.pdmodel --params_filename ${model_path}/inference.pdiparams --save_file ${save_path}/model.onnx --enable_onnx_checker True
```
In the above commands:
- `model_dir`: this directory needs to contain the `.pdmodel` and `.pdiparams` files.
- `model_filename`: this parameter is used to specify the path of the `.pdmodel` file under the parameter `model_dir`.
- `params_filename`: this parameter is used to specify the path of the `.pdiparams` file under the parameter `model_dir`.
- `save_file`: this parameter is used to specify the save path of the converted model.
For the conversion of a non-`combined` format inference model exported from a static graph (usually containing the file `__model__` and multiple parameter files), and for more parameter descriptions, please refer to the official documentation of [paddle2onnx](https://github.com/PaddlePaddle/Paddle2ONNX/blob/develop/README_zh.md#%E5%8F%82%E6%95%B0%E9%80%89%E9%A1%B9).
- Exporting an ONNX model directly from the model definition code.
Take a dynamic-graph model definition as an example, where the model class is a subclass of `paddle.nn.Layer`; the code is shown below.
```python
import paddle
from paddle.static import InputSpec

class SimpleNet(paddle.nn.Layer):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x

net = SimpleNet()
# Describe the model input: shape, dtype and (optional) name.
x_spec = InputSpec(shape=[None, 3, 224, 224], dtype='float32', name='x')
# Export the network to ./SimpleNet.onnx.
paddle.onnx.export(layer=net, path="./SimpleNet", input_spec=[x_spec])
```
Among them:
- The `InputSpec()` function is used to describe the signature of the model input, including the `shape`, `type` and `name` (which can be omitted) of the input data.
- The `paddle.onnx.export()` function needs to be given the model object `net`, the save path `path` of the exported model, and the description of the model's input data `input_spec`.
Note that `paddlepaddle` `2.0.0` or above should be used. See [paddle.onnx.export](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/onnx/export_cn.html#export) for more details on the parameters of the `paddle.onnx.export()` function.
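As a quick sanity check after export, the resulting ONNX file can be run once with `onnxruntime` (a sketch; `onnxruntime` has to be installed separately, and the input name must match the `name` given in `InputSpec`):

```python
import numpy as np
import onnxruntime as ort

# Run the exported model once on random data to verify the conversion.
sess = ort.InferenceSession("./SimpleNet.onnx")
dummy = np.random.rand(1, 3, 224, 224).astype("float32")
outputs = sess.run(None, {"x": dummy})  # "x" matches the name set in InputSpec
print([o.shape for o in outputs])
```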
#### Q2.5.4: How to set the parameter `pq_size` when building the search base library?
**A**
`pq_size` is a parameter of the PQ search algorithm, which can be simply understood as a "tiered" search algorithm. And `pq_size` is the "capacity" of each tier, so the setting of this parameter will affect the performance. However, in the case that the total data volume of the base library is not too large (less than 10,000), this parameter has little impact on the performance. So for most application scenarios, there is no need to modify this parameter when building the base library. For more details on the PQ search algorithm, see the related [paper](https://lear.inrialpes.fr/pubs/2011/JDS11/jegou_searching_with_quantization.pdf).
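For intuition about product quantization in general (a toy sketch, unrelated to the actual index implementation used by PaddleClas), the idea is to split each vector into sub-vectors and quantize each sub-vector against a small learned codebook, so that distances can later be approximated from the codes:

```python
import numpy as np

def pq_train(vectors, n_subvectors=4, n_centroids=16, n_iters=5):
    """Toy product quantization: learn a small codebook per sub-vector and encode."""
    n, dim = vectors.shape
    sub_dim = dim // n_subvectors
    codebooks, codes = [], []
    for i in range(n_subvectors):
        sub = vectors[:, i * sub_dim:(i + 1) * sub_dim]
        # Crude k-means: initialize centroids from random samples, then refine.
        centroids = sub[np.random.choice(n, n_centroids, replace=False)].copy()
        for _ in range(n_iters):
            assign = np.argmin(((sub[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
            for c in range(n_centroids):
                if np.any(assign == c):
                    centroids[c] = sub[assign == c].mean(axis=0)
        codebooks.append(centroids)
        codes.append(assign)
    return codebooks, np.stack(codes, axis=1)  # codes: (n, n_subvectors) small integers

# Usage sketch: encode 1000 base vectors of dimension 64.
codebooks, codes = pq_train(np.random.rand(1000, 64).astype("float32"))
```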
## Selection
## 1. Theory
<a name="1.1"></a>
### 1.1 Basic Knowledge of PaddleClas
...@@ -148,11 +61,11 @@ By introducing the concept of momentum, the effect of historical updates is take
**A**: Currently, it is not implemented. If needed, you can try to modify the code yourself. In brief, the idea proposed in this paper is to fine-tune the final FC layer of the trained model using a larger resolution as input. Specifically, first train the model on a lower-resolution dataset, then set `stop_gradient=True` for the weights of all layers except the final FC layer, and finally fine-tune the network with a larger-resolution input.
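A minimal sketch of the freezing step described above is given below; it assumes a Paddle vision backbone whose classifier parameters are named starting with `fc`, which is model-dependent.

```python
import paddle

# Placeholder backbone; in practice this would be the trained model.
model = paddle.vision.models.resnet50(pretrained=False)

# Freeze every parameter except the final FC layer, then fine-tune
# the network with a larger input resolution.
for name, param in model.named_parameters():
    if not name.startswith("fc"):
        param.stop_gradient = True
```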
<a name="1.2"></a>
### 1.2 Backbone Network and Pre-trained Model Library
<a name="1.3"></a>
### 1.3 Image Classification
...@@ -168,7 +81,7 @@ PaddleClas provides a variety of data augmentation methods, which can be divided
Among them, RandAugment provides a variety of random combinations of data augmentation methods, which can meet the needs of brightness, contrast, saturation, hue and other aspects.
<a name="1.4"></a>
### 1.4 General Detection
...@@ -186,7 +99,7 @@ The training data is a randomly selected subset of publicly available datasets s
**A**:The current mainbody detection model is trained using publicly available datasets such as COCO, Object365, RPC, LogoDet, etc. If the data to be detected is similar to industrial quality inspection and other data with large differences from common categories, it is necessary to fine-tune the training based on the current detection model again.
<a name="1.5"></a>
### 1.5 Image Recognition
...@@ -208,7 +121,7 @@ The product recognition model is recommended. For one, the range of products is
Vectors with small dimensions should be adopted. 128 or even smaller are practically used to speed up the computation. In general, a dimension of 512 is large enough to adequately represent the features.
<a name="1.6"></a>
### 1.6 Vector Search
...@@ -222,11 +135,11 @@ Vectors with small dimensions should be adopted. 128 or even smaller are practic
Both `Query` and `Gallery` are data set configurations, where `Gallery` is used to configure the base library data and `Query` is used to configure the validation set. When performing Eval, the model is first used to forward compute feature vectors on the `Gallery` base library data, which are used to construct the base library, and then the model forward computes feature vectors on the data in the `Query` validation set, and then computes metrics such as recall rate in the base library.
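For illustration (a simplified sketch, not the PaddleClas metric code), recall@1 over `Query`/`Gallery` feature vectors can be computed roughly as follows, assuming L2-normalized features and a single label per image:

```python
import numpy as np

def recall_at_1(query_feats, query_labels, gallery_feats, gallery_labels):
    """query_feats: (Nq, D), gallery_feats: (Ng, D), both L2-normalized."""
    sims = query_feats @ gallery_feats.T   # cosine similarity matrix
    nearest = sims.argmax(axis=1)          # best gallery match for each query
    return float((gallery_labels[nearest] == query_labels).mean())
```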
<a name="2"></a>
## 2. Practice
<a name="2.1"></a>
### 2.1 Common Problems in Training and Evaluation
...@@ -293,7 +206,7 @@ PaddleClas saves/updates the following three types of models during training.
- When `fine-tune` is required, the path of the file of pre-training model weights can be configured via the field `Global.pretrain_model`, which usually has the suffix `.pdparams`.
- During training, the training program automatically saves the breakpoint information at the end of each epoch, including the optimizer information `.pdopt` and the model weights `.pdparams`. If the training process is unexpectedly interrupted and needs to be resumed, the breakpoint information saved during training can be configured via the field `Global.checkpoints`; for example, configuring `checkpoints: ./output/ResNet18/epoch_18` restores the breakpoint information saved at the end of epoch 18. PaddleClas will automatically load `epoch_18.pdopt` and `epoch_18.pdparams` and continue training from epoch 19.
<a name="2.2"></a>
### 2.2 Image Classification
...@@ -309,7 +222,7 @@ PaddleClas saves/updates the following three types of models during training.
**A**:When training SwinTransformer, please use `Paddle` `2.1.1` or above, and load the pre-trained model we provide. Also, the learning rate should be kept at an appropriate level.
<a name="2.3"></a>
### 2.3 General Detection
...@@ -317,7 +230,7 @@ PaddleClas saves/updates the following three types of models during training.
**A**:The mainbody detection model returns the detection frame, but in fact, in order to make the subsequent recognition model more accurate, the original image is also returned along with the detection frame. Subsequently, the original image or the detection frame will be sorted according to its similarity with the images in the library, and the label of the image in the library with the highest similarity will be the label of the recognized image.
#### Q2.3.2: In a live-streaming scenario, is it possible to provide a real-time recognition view that locates the target object and marks it with a bounding box within a delay of a few seconds?
**A**:A real-time detection presents high requirements for the detection speed; PP-YOLO is a lightweight target detection model provided by Paddle team, which strikes a good balance of detection speed and accuracy, you can try PP-YOLO for detection. For the use of PP-YOLO, you can refer to [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/configs/ppyolo/README_cn.md).
...@@ -325,7 +238,7 @@ PaddleClas saves/updates the following three types of models during training.
**A**:If the detection model does not perform well on your own dataset, you need to finetune it again on your own detection dataset.
<a name="2.4"></a>
### 2.4 Image Recognition
...@@ -345,7 +258,7 @@ PaddleClas saves/updates the following three types of models during training.
**A**:In the configuration file (e.g. inference_product.yaml), `IndexProcess.score_thres` controls the minimum value of cosine similarity of the recognized image to the image in the library. When the cosine similarity is less than this value, the result will not be printed. You can adjust this value according to your actual data.
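Conceptually (a sketch, not the deployment code), the threshold simply filters retrieval results by their cosine similarity score:

```python
def filter_by_score(labels, scores, score_thres=0.5):
    """Keep only the retrieval results whose cosine similarity reaches the threshold."""
    return [label for label, score in zip(labels, scores) if score >= score_thres]
```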
<a name="2.5"></a>
### 2.5 Vector Search
...@@ -369,6 +282,8 @@ If you are using the release/2.2 branch, it is recommended to update it to the r
`pq_size` is a parameter of the PQ search algorithm, which can be simply understood as a "tiered" search algorithm. And `pq_size` is the "capacity" of each tier, so the setting of this parameter will affect the performance. However, in the case that the total data volume of the base library is not too large (less than 10,000), this parameter has little impact on the performance. So for most application scenarios, there is no need to modify this parameter when building the base library. For more details on the PQ search algorithm, see the related [paper](https://lear.inrialpes.fr/pubs/2011/JDS11/jegou_searching_with_quantization.pdf).
<a name="2.6"></a>
### 2.6 Model Inference Deployment
#### Q2.6.1: How to add the parameter of a module that is enabled by hub serving?
......
...@@ -7,8 +7,7 @@
* There are many experts in the fields of image classification, recognition and retrieval, and models and papers are updated very quickly. The answers in this document rely mainly on limited project practice, so omissions are inevitable. If anything is missing or inadequate, we sincerely hope that knowledgeable readers will help supplement and correct it. Many thanks.
## Contents
* [Recent Updates](#近期更新) (2021.09.08)
* [Selection](#精选)
* [1. Theory](#1)
* [1.1 Basic Knowledge of PaddleClas](#1.1)
* [1.2 Backbone Network and Pre-trained Model Library](#1.2)
...@@ -24,74 +23,6 @@
* [2.5 Vector Search](#2.5)
* [2.6 Model Inference Deployment](#2.6)
<a name="近期更新"></a>
## Recent Updates
#### Q2.1.7: During training, the following error is reported: `ERROR: Unexpected segmentation fault encountered in DataLoader workers.` How to troubleshoot and fix it?
**A**: Try setting the field `num_workers` in the training configuration file to `0`; try reducing the field `batch_size` in the configuration file; and check whether the dataset format and the dataset path in the configuration file are correct.
#### Q2.1.8: How to use `Mixup` and `Cutmix` during training?
**A**
* For the usage of `Mixup`, please refer to [Mixup](../../../ppcls/configs/ImageNet/DataAugment/ResNet50_Mixup.yaml#L63-L65); for `Cutmix`, please refer to [Cutmix](../../../ppcls/configs/ImageNet/DataAugment/ResNet50_Cutmix.yaml#L63-L65).
* The training accuracy (Acc) metric cannot be calculated when training with `Mixup` or `Cutmix`, so the field `Metric.Train.TopkAcc` needs to be removed from the configuration file; please refer to [Metric.Train.TopkAcc](../../../ppcls/configs/ImageNet/DataAugment/ResNet50_Cutmix.yaml#L125-L128).
#### Q2.1.9: In the training configuration yaml file, what are the fields `Global.pretrain_model` and `Global.checkpoints` used for?
**A**:
* When `fine-tune` is required, the path of the pre-trained model weights file can be configured via the field `Global.pretrain_model`; the weights file usually has the suffix `.pdparams`.
* During training, the training program automatically saves the breakpoint information at the end of each epoch, including the optimizer information `.pdopt` and the model weights `.pdparams`. If training is unexpectedly interrupted and needs to be resumed, the breakpoint information saved during training can be configured via the field `Global.checkpoints`; for example, configuring `checkpoints: ./output/ResNet18/epoch_18` restores the breakpoint information saved at the end of epoch 18, and PaddleClas will automatically load `epoch_18.pdopt` and `epoch_18.pdparams` to continue training from epoch 19.
#### Q2.6.3: How to convert a model to the `ONNX` format?
**A**: Paddle supports two ways of converting models to the ONNX format, both relying on the `paddle2onnx` tool, so `paddle2onnx` needs to be installed first:
```shell
pip install paddle2onnx
```
* Converting an inference model to an ONNX model:
Take the `combined` format inference model exported from a dynamic graph (containing the two files `.pdmodel` and `.pdiparams`) as an example, and run the following command to convert the model format:
```shell
paddle2onnx --model_dir ${model_path} --model_filename ${model_path}/inference.pdmodel --params_filename ${model_path}/inference.pdiparams --save_file ${save_path}/model.onnx --enable_onnx_checker True
```
In the above command:
* `model_dir`: this directory needs to contain the two files `.pdmodel` and `.pdiparams`;
* `model_filename`: this parameter is used to specify the path of the `.pdmodel` file under `model_dir`;
* `params_filename`: this parameter is used to specify the path of the `.pdiparams` file under `model_dir`;
* `save_file`: this parameter is used to specify the save path of the converted model.
For converting a non-`combined` format inference model exported from a static graph (usually containing the file `__model__` and multiple parameter files), and for more parameter descriptions, please refer to the official paddle2onnx documentation [paddle2onnx](https://github.com/PaddlePaddle/Paddle2ONNX/blob/develop/README_zh.md#%E5%8F%82%E6%95%B0%E9%80%89%E9%A1%B9).
* Exporting an ONNX model directly from the model definition code:
Take a dynamic-graph model definition as an example, where the model class is a subclass of `paddle.nn.Layer`; the code is as follows:
```python
import paddle
from paddle.static import InputSpec

class SimpleNet(paddle.nn.Layer):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x

net = SimpleNet()
# Describe the model input: shape, dtype and (optional) name.
x_spec = InputSpec(shape=[None, 3, 224, 224], dtype='float32', name='x')
# Export the network to ./SimpleNet.onnx.
paddle.onnx.export(layer=net, path="./SimpleNet", input_spec=[x_spec])
```
Among them:
* The `InputSpec()` function is used to describe the signature of the model input, including the `shape`, `type` and `name` (which can be omitted) of the input data;
* The `paddle.onnx.export()` function needs to be given the model object `net`, the save path `path` of the exported model, and the description of the model's input data `input_spec`.
Note that the `paddlepaddle` version needs to be above `2.0.0`. For more parameter descriptions of the `paddle.onnx.export()` function, please refer to [paddle.onnx.export](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/onnx/export_cn.html#export).
#### Q2.5.4: How to set the parameter `pq_size` when building the search base library?
**A**: `pq_size` is a parameter of the PQ search algorithm. The PQ search algorithm can be simply understood as a "tiered" search algorithm, and `pq_size` is the "capacity" of each tier, so the setting of this parameter affects search performance. However, when the total amount of data in the base library is not large (fewer than 10,000 images), this parameter has little impact on performance, so for most application scenarios there is no need to modify it when building the base library. For more on the PQ search algorithm, see the related [paper](https://lear.inrialpes.fr/pubs/2011/JDS11/jegou_searching_with_quantization.pdf).
<a name="精选"></a>
## Selection
<a name="1"></a>
## 1. Theory
......