- [Overview of Code and Content](#1)
- [Training Module](#2)
- [2.1 Data](#2.1)
- [2.2 Model Structure](#2.2)
- [2.3 Loss Function](#2.3)
- [2.4 Optimizer, Learning Rate Decay, and Weight Decay](#2.4)
- [2.5 Evaluation During Training](#2.5)
- [2.6 Model Saving](#2.6)
- [2.7 Model Pruning and Quantization](#2.7)
- [Codes and Methods for Inference and Deployment](#3)
<a name="1"></a>
## 1 Overview of Code and Content
The main code and content structure of PaddleClas is as follows:
- benchmark: shell scripts for testing the speed metrics of different models in PaddleClas, such as single-card and multi-card training speed metrics.
- dataset: datasets and the scripts used to process them; the scripts process a dataset into a format suitable for the Dataloader.
- deploy: the core deployment code, including deployment tools that support Python/C++ inference, Hub Serving, Paddle Lite, Slim offline quantization, and other deployment methods.
- ppcls: the core training and evaluation code and the main body of the PaddleClas framework. It also contains the configuration files and the specific code for model training, evaluation, inference, dynamic-to-static export, etc.
- tools: entry functions and scripts for training, evaluation, inference, and dynamic-to-static export.
- requirements.txt: the dependencies required by PaddleClas; install them with pip.
- test_tipc: TIPC tests of PaddleClas models, covering the full pipeline from training to prediction, to verify whether each function works properly.
<a name="2"></a>
## 2 Training Module
The training of a deep learning model mainly involves data, model structure, loss function, and training strategies such as the optimizer, learning rate decay, and weight decay. These modules are explained below.
<a name="2.1"></a>
## 2.1 Data
For supervised tasks, the training data generally consists of the raw data and its annotation. In a single-label image classification task, the raw data is the image data, and the annotation is the class to which the image belongs. In PaddleClas, a label file in the following format is required for training: each row contains one training sample, consisting of the image path and the class label, separated by a separator (a space by default).
```
train/n01440764/n01440764_10026.JPEG 0
train/n01440764/n01440764_10027.JPEG 0
```
`ppcls/data/dataloader/common_dataset.py` contains the `CommonDataset` class, which inherits from `paddle.io.Dataset`, a dataset class that can index and fetch a given sample by a key value. Dataset classes such as `ImageNetDataset` and `LogoDataset` inherit from `CommonDataset`.
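To make the label-file format concrete, here is a minimal sketch of a `paddle.io.Dataset` subclass that parses such a file; the class name and arguments are illustrative, not the actual `CommonDataset` implementation.

```python
import paddle
from PIL import Image


class LabelFileDataset(paddle.io.Dataset):
    """Hypothetical example: parse "image_path<sep>label" rows and fetch samples by index."""

    def __init__(self, image_root, label_path, delimiter=" "):
        super().__init__()
        self.image_root = image_root
        with open(label_path) as f:
            # Each non-empty row holds a relative image path and an integer class label.
            self.samples = [line.strip().split(delimiter) for line in f if line.strip()]

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        img = Image.open(f"{self.image_root}/{path}").convert("RGB")
        return img, int(label)

    def __len__(self):
        return len(self.samples)
```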
The raw image that is read in needs to be preprocessed before training. The standard preprocessing pipeline during training contains `DecodeImage`, `RandCropImage`, `RandFlipImage`, `NormalizeImage`, and `ToCHWImage`. The preprocessing ops are configured as a list under the transforms field and applied to the data in order, as reflected in the configuration file below.
```yaml
DataLoader:
  Train:
    dataset:
      ...
      transform_ops:
        ...
        - NormalizeImage:
            ...
            order: ''
```
PaddleClas also contains `AutoAugment`, `RandAugment`, and other data augmentation methods, which can likewise be configured in the configuration file and thus added to the preprocessing used for training. Each data augmentation and preprocessing method is implemented as a class for easy migration and reuse. For the specific implementation of data processing, please refer to the code under `ppcls/data/preprocess/ops/`.
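As a sketch of this class-per-op pattern (illustrative only, not code copied from PaddleClas), each op is a callable object, and the pipeline applies the configured ops in list order:

```python
import numpy as np


class RandFlipOp:
    """Illustrative op: randomly flip an HWC image horizontally."""

    def __init__(self, prob=0.5):
        self.prob = prob

    def __call__(self, img):
        if np.random.rand() < self.prob:
            img = img[:, ::-1, :]
        return img


def apply_ops(img, ops):
    # Apply each configured transform op in list order.
    for op in ops:
        img = op(img)
    return img
```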
You can also use methods such as mixup or cutmix to augment the data that makes up a batch. PaddleClas integrates `MixupOperator`, `CutmixOperator`, `FmixOperator`, and other batch-based data augmentation methods, which can be enabled through the mix parameter in the configuration file. For the code implementation, please refer to `ppcls/data/preprocess/batch_ops/batch_operators.py`.
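For intuition, here is a minimal mixup sketch (a simplification, not the `MixupOperator` implementation): each image is blended with a randomly chosen partner from the same batch, and both labels are kept together with the mixing weight.

```python
import numpy as np


def mixup_batch(images, labels, alpha=0.2):
    """Blend each sample with a permuted partner from the same batch.

    The loss is then computed as lam * loss(y_a) + (1 - lam) * loss(y_b).
    """
    lam = np.random.beta(alpha, alpha)
    perm = np.random.permutation(images.shape[0])
    mixed = lam * images + (1 - lam) * images[perm]
    return mixed, labels, labels[perm], lam
```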
In image classification, the data post-processing is mainly the `argmax` operation, which is not elaborated here.
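For example, given a batch of logits, the predicted class index of each sample is obtained in one call:

```python
import paddle

logits = paddle.rand([8, 1000])         # a batch of class scores
preds = paddle.argmax(logits, axis=-1)  # predicted class index per sample
```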
<a name="2.2"></a>
## 2.2 Model Structure
The model in the configuration file is structured as follows:
```yaml
Arch:
  name: ResNet50
  class_num: 1000
  ...
  use_ssld: False
```
`Arch.name` indicates the name of the model, `Arch.pretrained` whether to load pre-trained weights, and `Arch.use_ssld` whether to use a pre-trained model based on `SSLD` knowledge distillation. All model names are defined in `ppcls/arch/backbone/__init__.py`.
Correspondingly, the model object is created in `ppcls/arch/__init__.py` with the `build_model` method.
```python
def build_model(config):
    config = copy.deepcopy(config)
    model_type = config.pop("name")
    ...
```
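Assuming the `Arch` configuration shown above, usage looks roughly like this (a sketch; the exact keys come from your YAML file, and the remaining entries are passed to the model's constructor):

```python
from ppcls.arch import build_model

# "name" selects the model class; the remaining keys are constructor kwargs.
arch_config = {"name": "ResNet50", "class_num": 1000}
model = build_model(arch_config)
```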
<a name="2.3"></a>
## 2.3 Loss Function
PaddleClas implements `CELoss`, `JSDivLoss`, `TripletLoss`, `CenterLoss`, and other loss functions, all defined in `ppcls/loss`. In the `ppcls/loss/__init__.py` file, `CombinedLoss` is used to construct and combine loss functions. The loss functions and calculation methods required by different training strategies differ, so PaddleClas considers the following factors when constructing the loss function:
1. whether to use label smoothing
2. whether to use mixup or cutmix
3. whether to use a distillation method for training
4. whether to train with metric learning
Users can specify the type and weight of each loss function in the configuration file. For example, to add `TripletLossV2` to the training, configure as follows:
```yaml
Loss:
  Train:
    - CELoss:
        weight: 1.0
    - TripletLossV2:
        weight: 1.0
        ...
```
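Conceptually, `CombinedLoss` reduces to a weighted sum over the configured losses. A minimal sketch of that idea (illustrative, not the PaddleClas source):

```python
import paddle.nn as nn


class WeightedSumLoss(nn.Layer):
    """Illustrative stand-in for CombinedLoss: weighted sum of configured losses."""

    def __init__(self, loss_fns, weights):
        super().__init__()
        self.loss_fns = loss_fns  # e.g. [nn.CrossEntropyLoss(), ...]
        self.weights = weights    # e.g. [1.0, ...], taken from the config

    def forward(self, logits, labels):
        total = 0.0
        for fn, w in zip(self.loss_fns, self.weights):
            total = total + w * fn(logits, labels)
        return total
```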
<a name="2.4"></a>
## 2.4 Optimizer, Learning Rate Decay, and Weight Decay
In image classification tasks, `Momentum` is a commonly used optimizer, and several optimizer strategies such as `Momentum`, `RMSProp`, `Adam`, and `AdamW` are provided in PaddleClas.

The weight decay strategy is a common regularization method, mainly adopted to prevent model overfitting. Two weight decay strategies, `L1Decay` and `L2Decay`, are provided in PaddleClas.

Learning rate decay is an essential training technique for improving accuracy in image classification tasks. PaddleClas currently supports `Cosine`, `Piecewise`, `Linear`, and other learning rate decay strategies.

In the configuration file, the optimizer, weight decay, and learning rate decay strategies can be configured with the following fields.
```yaml
Optimizer:
  name: Momentum
  momentum: 0.9
  lr:
    ...
  regularizer:
    ...
```
Employ `build_optimizer` in `ppcls/optimizer/__init__.py` to create the optimizer and learning rate objects.
Different optimizers and weight decay strategies are implemented as classes in `ppcls/optimizer/optimizer.py`; different learning rate decay strategies can be found in `ppcls/optimizer/learning_rate.py`.
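This maps onto the standard Paddle 2.x APIs. A minimal sketch of the equivalent manual setup (the stand-in model is illustrative):

```python
import paddle

model = paddle.nn.Linear(10, 10)  # stand-in model for illustration

# Cosine learning rate decay over 100 epochs.
lr = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=0.1, T_max=100)

# Momentum optimizer with L2 weight decay, matching the config fields above.
optimizer = paddle.optimizer.Momentum(
    learning_rate=lr,
    momentum=0.9,
    weight_decay=paddle.regularizer.L2Decay(1e-4),
    parameters=model.parameters())
```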
<a name="2.5"></a>
## 2.5 Evaluation During Training
When training the model, you can set the interval at which the model is saved, and you can also evaluate on the validation set every few epochs so that the model with the best accuracy is saved. Configure this with the following fields.
```yaml
Global:
  ...
  save_interval: 1
  eval_during_train: True
  eval_interval: 1
  ...
```
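In effect, the training loop guards saving and evaluation with these fields; schematically (a self-contained sketch with stub functions, not the PaddleClas engine code):

```python
import random


def train_one_epoch(model):        # stub for illustration
    pass


def evaluate(model):               # stub: returns a validation metric
    return random.random()


def save_checkpoint(model, tag):   # stub
    print(f"saved checkpoint: {tag}")


# Hypothetical values mirroring the Global fields above.
epochs, eval_during_train, eval_interval, save_interval = 10, True, 1, 1
best_acc, model = 0.0, None

for epoch in range(1, epochs + 1):
    train_one_epoch(model)
    if epoch % save_interval == 0:
        save_checkpoint(model, f"epoch_{epoch}")
    if eval_during_train and epoch % eval_interval == 0:
        acc = evaluate(model)
        if acc > best_acc:
            best_acc = acc
            save_checkpoint(model, "best_model")
```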
<a name="2.6"></a>
## 2.6 Model Saving
The model is saved through the `paddle.save()` function of the Paddle framework. The dynamic graph version of the model is saved in the form of a dictionary to facilitate further training. The specific implementation is as follows:
```python
def save_model(program, model_path, epoch_id, prefix='ppcls'):
    model_path = os.path.join(model_path, str(epoch_id))
    _mkdir_if_not_exist(model_path)
    model_prefix = os.path.join(model_path, prefix)
    paddle.static.save(program, model_prefix)
    logger.info(
        logger.coloring("Already save model in {}".format(model_path),
                        "HEADER"))
```
When saving, there are two things to keep in mind:
1. Save the model only on node 0. Otherwise, if all nodes save models to the same path, a file conflict may occur during multi-card training when multiple nodes write files, preventing the final saved model from being loaded correctly.
2. Optimizer parameters also need to be saved so that training can later be resumed from the checkpoint.
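In dynamic graph mode, both points reduce to a few lines. A hedged sketch (the helper name and paths are illustrative):

```python
import os

import paddle


def save_checkpoint(model, optimizer, model_dir, prefix="ppcls"):
    # Point 1: only rank 0 writes, avoiding file conflicts in multi-card training.
    if paddle.distributed.get_rank() != 0:
        return
    os.makedirs(model_dir, exist_ok=True)
    path = os.path.join(model_dir, prefix)
    # The dynamic graph model is saved as a dictionary of parameters.
    paddle.save(model.state_dict(), path + ".pdparams")
    # Point 2: save the optimizer state too, so training can resume.
    paddle.save(optimizer.state_dict(), path + ".pdopt")
```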
<a name="2.7"></a>
## 2.7 Model Pruning and Quantization
If you want to conduct compression training, please configure it with the following fields.
1. Model pruning:
```yaml
Slim:
  prune:
    name: fpgm
    pruned_ratio: 0.3
```
2. Model quantization:
```yaml
Slim:
  quant:
    name: pact
```
For details of the training method, see [Pruning and Quantization Application](model_prune_quantization_en.md); the algorithms are described in [Pruning and Quantization Algorithms](../algorithm_introduction/model_prune_quantization_en.md).
<a name="3"></a>
## 3 Codes and Methods for Inference and Deployment
- If you wish to quantize the classification model offline, please refer to the [Model Pruning and Quantization Tutorial](model_prune_quantization_en.md) for offline quantization.
- If you wish to use Python for server-side deployment, please refer to the [Python Inference Tutorial](../inference_deployment/python_deploy_en.md).
- If you wish to use C++ for server-side deployment, please refer to the [C++ Inference Tutorial](../inference_deployment/cpp_deploy_en.md).
- If you wish to deploy the classification model as a service, please refer to the [Hub Serving Inference Deployment Tutorial](../inference_deployment/paddle_hub_serving_deploy_en.md).
- If you wish to use classification models for inference on mobile, please refer to the [PaddleLite Inference Deployment Tutorial](../inference_deployment/paddle_lite_deploy_en.md).
- If you wish to use the whl package for inference with classification models, please refer to [whl Package Inference](../inference_deployment/whl_deploy_en.md).