From 8e914a869fd7536281b7d3fa110db214ecb9bbb7 Mon Sep 17 00:00:00 2001
From: WenmuZhou
Date: Thu, 3 Dec 2020 15:44:52 +0800
Subject: [PATCH] update config file doc

---
 doc/doc_ch/config.md    | 134 +++++++++++++++++++++++-----------
 doc/doc_en/config_en.md | 158 ++++++++++++++++++++++++++--------------
 2 files changed, 197 insertions(+), 95 deletions(-)

diff --git a/doc/doc_ch/config.md b/doc/doc_ch/config.md
index f9c664d4..2cc502ca 100644
--- a/doc/doc_ch/config.md
+++ b/doc/doc_ch/config.md
@@ -1,4 +1,4 @@
-# 可选参数列表
+## 可选参数列表

以下列表可以通过`--help`查看

@@ -8,65 +8,115 @@
| -o | ALL | 设置配置文件里的参数内容 | None | 使用-o配置相较于-c选择的配置文件具有更高的优先级。例如:`-o Global.use_gpu=false` |

-## 配置文件 Global 参数介绍
+## 配置文件参数介绍

以 `rec_chinese_lite_train_v1.1.yml ` 为例
-
+### Global

| 字段 | 用途 | 默认值 | 备注 |
| :----------------------: | :---------------------: | :--------------: | :--------------------: |
-| algorithm | 设置算法 | 与配置文件同步 | 选择模型,支持模型请参考[简介](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/README.md) |
-| use_gpu | 设置代码运行场所 | true | \ |
-| epoch_num | 最大训练epoch数 | 3000 | \ |
+| use_gpu | 设置代码是否在gpu运行 | true | \ |
+| epoch_num | 最大训练epoch数 | 500 | \ |
| log_smooth_window | 滑动窗口大小 | 20 | \ |
| print_batch_step | 设置打印log间隔 | 10 | \ |
| save_model_dir | 设置模型保存路径 | output/{算法名称} | \ |
| save_epoch_step | 设置模型保存间隔 | 3 | \ |
| eval_batch_step | 设置模型评估间隔 | 2000 或 [1000, 2000] | 2000 表示每2000次迭代评估一次,[1000, 2000]表示从1000次迭代开始,每2000次评估一次 |
-|train_batch_size_per_card | 设置训练时单卡batch size | 256 | \ |
-| test_batch_size_per_card | 设置评估时单卡batch size | 256 | \ |
-| image_shape | 设置输入图片尺寸 | [3, 32, 100] | \ |
+| cal_metric_during_train | 设置是否在训练过程中评估指标,此时评估的是模型在当前batch下的指标 | true | \ |
+| load_static_weights | 设置预训练模型是否是静态图模式保存(目前仅检测算法需要) | true | \ |
+| pretrained_model | 设置加载预训练模型路径 | ./pretrain_models/CRNN/best_accuracy | \ |
+| checkpoints | 加载模型参数路径 | None | 用于中断后加载参数继续训练 |
+| use_visualdl | 设置是否启用visualdl进行可视化log展示 | False | [教程地址](https://www.paddlepaddle.org.cn/paddle/visualdl) |
+| infer_img | 设置预测图像路径或文件夹路径 | ./infer_img | \ |
+| character_dict_path | 设置字典路径 | ./ppocr/utils/ppocr_keys_v1.txt | \ |
| max_text_length | 设置文本最大长度 | 25 | \ |
| character_type | 设置字符类型 | ch | en/ch, en时将使用默认dict,ch时使用自定义dict |
-| character_dict_path | 设置字典路径 | ./ppocr/utils/ic15_dict.txt | \ |
-| loss_type | 设置 loss 类型 | ctc | 支持两种loss: ctc / attention |
-| distort | 设置是否使用数据增强 | false | 设置为true时,将在训练时随机进行扰动,支持的扰动操作可阅读[img_tools.py](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/ppocr/data/rec/img_tools.py) |
-| use_space_char | 设置是否识别空格 | false | 仅在 character_type=ch 时支持空格 |
+| use_space_char | 设置是否识别空格 | True | 仅在 character_type=ch 时支持空格 |
| label_list | 设置方向分类器支持的角度 | ['0','180'] | 仅在方向分类器中生效 |
-| average_window | ModelAverage优化器中的窗口长度计算比例 | 0.15 | 目前仅应用与SRN |
-| max_average_window | 平均值计算窗口长度的最大值 | 15625 | 推荐设置为一轮训练中mini-batchs的数目|
-| min_average_window | 平均值计算窗口长度的最小值 | 10000 | \ |
-| reader_yml | 设置reader配置文件 | ./configs/rec/rec_icdar15_reader.yml | \ |
-| pretrain_weights | 加载预训练模型路径 | ./pretrain_models/CRNN/best_accuracy | \ |
-| checkpoints | 加载模型参数路径 | None | 用于中断后加载参数继续训练 |
-| save_inference_dir | inference model 保存路径 | None | 用于保存inference model |
+| save_res_path | 设置检测模型的结果保存地址 | ./output/det_db/predicts_db.txt | 仅在检测模型中生效 |
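
上面的 Global 参数在 yml 配置文件中对应一个顶层的 `Global` 节点。下面是一个简化的示意片段(字段取值仅作示意,具体请以 `rec_chinese_lite_train_v1.1.yml` 的实际内容为准):

```yaml
Global:
  use_gpu: true
  epoch_num: 500
  log_smooth_window: 20
  print_batch_step: 10
  save_model_dir: ./output/rec_chinese_lite_v1.1
  save_epoch_step: 3
  eval_batch_step: [1000, 2000]
  cal_metric_during_train: true
  pretrained_model:
  checkpoints:
  character_dict_path: ./ppocr/utils/ppocr_keys_v1.txt
  character_type: ch
  max_text_length: 25
  use_space_char: true
```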
-## 配置文件 Reader 系列参数介绍
+### Optimizer ([ppocr/optimizer](../../ppocr/optimizer))
-以 `rec_chinese_reader.yml` 为例
+
+| 字段 | 用途 | 默认值 | 备注 |
+| :---------------------: | :---------------------: | :--------------: | :--------------------: |
+| name | 优化器类名 | Adam | 目前支持`Momentum`,`Adam`,`RMSProp`, 见[ppocr/optimizer/optimizer.py](../../ppocr/optimizer/optimizer.py) |
+| beta1 | 设置一阶矩估计的指数衰减率 | 0.9 | \ |
+| beta2 | 设置二阶矩估计的指数衰减率 | 0.999 | \ |
+| **lr** | 设置学习率decay方式 | - | \ |
+| name | 学习率decay类名 | Cosine | 目前支持`Linear`,`Cosine`,`Step`,`Piecewise`, 见[ppocr/optimizer/learning_rate.py](../../ppocr/optimizer/learning_rate.py) |
+| learning_rate | 基础学习率 | 0.001 | \ |
+| **regularizer** | 设置网络正则化方式 | - | \ |
+| name | 正则化类名 | L2 | 目前支持`L1`,`L2`, 见[ppocr/optimizer/regularizer.py](../../ppocr/optimizer/regularizer.py) |
+| factor | 正则化系数 | 0.00004 | \ |

-| 字段 | 用途 | 默认值 | 备注 |
-| :----------------------: | :---------------------: | :--------------: | :--------------------: |
-| reader_function | 选择数据读取方式 | ppocr.data.rec.dataset_traversal,SimpleReader | 支持SimpleReader / LMDBReader 两种数据读取方式 |
-| num_workers | 设置数据读取线程数 | 8 | \ |
-| img_set_dir | 数据集路径 | ./train_data | \ |
-| label_file_path | 数据标签路径 | ./train_data/rec_gt_train.txt| \ |
-| infer_img | 预测图像文件夹路径 | ./infer_img | \|
-## 配置文件 Optimizer 系列参数介绍
+### Architecture ([ppocr/modeling](../../ppocr/modeling))
+在ppocr中,网络被划分为Transform,Backbone,Neck和Head四个阶段
+
+| 字段 | 用途 | 默认值 | 备注 |
+| :---------------------: | :---------------------: | :--------------: | :--------------------: |
+| model_type | 网络类型 | rec | 目前支持`rec`,`det`,`cls` |
+| algorithm | 模型名称 | CRNN | 支持列表见[algorithm_overview](./algorithm_overview.md) |
+| **Transform** | 设置变换方式 | - | 目前仅rec类型的算法支持, 具体见[ppocr/modeling/transform](../../ppocr/modeling/transform) |
+| name | 变换方式类名 | TPS | 目前支持`TPS` |
+| num_fiducial | TPS控制点数 | 20 | 上下边各十个 |
+| loc_lr | 定位网络学习率 | 0.1 | \ |
+| model_name | 定位网络大小 | small | 目前支持`small`,`large` |
+| **Backbone** | 设置网络backbone类名 | - | 具体见[ppocr/modeling/backbones](../../ppocr/modeling/backbones) |
+| name | backbone类名 | ResNet | 目前支持`MobileNetV3`,`ResNet` |
+| layers | resnet层数 | 34 | 支持18,34,50,101,152,200 |
+| model_name | MobileNetV3 网络大小 | small | 支持`small`,`large` |
+| **Neck** | 设置网络neck | - | 具体见[ppocr/modeling/necks](../../ppocr/modeling/necks) |
+| name | neck类名 | SequenceEncoder | 目前支持`SequenceEncoder`,`DBFPN` |
+| encoder_type | SequenceEncoder编码器类型 | rnn | 支持`reshape`,`fc`,`rnn` |
+| hidden_size | rnn内部单元数 | 48 | \ |
+| out_channels | DBFPN输出通道数 | 256 | \ |
+| **Head** | 设置网络Head | - | 具体见[ppocr/modeling/heads](../../ppocr/modeling/heads) |
+| name | head类名 | CTCHead | 目前支持`CTCHead`,`DBHead`,`ClsHead` |
+| fc_decay | CTCHead正则化系数 | 0.0004 | \ |
+| k | DBHead二值化系数 | 50 | \ |
+| class_dim | ClsHead输出分类数 | 2 | \ |
+
+### Loss ([ppocr/losses](../../ppocr/losses))
+
+| 字段 | 用途 | 默认值 | 备注 |
+| :---------------------: | :---------------------: | :--------------: | :--------------------: |
+| name | 网络loss类名 | CTCLoss | 目前支持`CTCLoss`,`DBLoss`,`ClsLoss` |
+| balance_loss | DBLoss中是否对正负样本数量进行均衡(使用OHEM) | True | \ |
+| ohem_ratio | DBLoss中OHEM的负正样本比例 | 3 | \ |
+| main_loss_type | DBLoss中shrink_map所采用的loss | DiceLoss | 支持`DiceLoss`,`BCELoss` |
+| alpha | DBLoss中shrink_map_loss的系数 | 5 | \ |
+| beta | DBLoss中threshold_map_loss的系数 | 10 | \ |
+
+### PostProcess ([ppocr/postprocess](../../ppocr/postprocess))
+
+| 字段 | 用途 | 默认值 | 备注 |
+| :---------------------: | :---------------------: | :--------------: | :--------------------: |
+| name | 后处理类名 | CTCLabelDecode | 目前支持`CTCLabelDecode`,`AttnLabelDecode`,`DBPostProcess`,`ClsPostProcess` |
+| thresh | DBPostProcess中分割图进行二值化的阈值 | 0.3 | \ |
+| box_thresh | DBPostProcess中对输出框进行过滤的阈值,低于此阈值的框不会输出 | 0.7 | \ |
+| max_candidates | DBPostProcess中输出的最大文本框数量 | 1000 | |
+| unclip_ratio | DBPostProcess中对文本框进行放大的比例 | 2.0 | \ |
+
+### Metric ([ppocr/metrics](../../ppocr/metrics))
+
+| 字段 | 用途 | 默认值 | 备注 |
+| :---------------------: | :---------------------: | :--------------: | :--------------------: |
+| name | 指标评估方法名称 | RecMetric | 目前支持`DetMetric`,`RecMetric`,`ClsMetric` |
+| main_indicator | 主要指标,用于选取最优模型 | acc | 对于检测方法为hmean,识别和分类方法为acc |
+
+### Dataset ([ppocr/data](../../ppocr/data))
+
+| 字段 | 用途 | 默认值 | 备注 |
| :---------------------: | :---------------------: | :--------------: | :--------------------: |
-| function | 选择优化器 | pocr.optimizer,AdamDecay | 目前只支持Adam方式 |
-| base_lr | 设置初始学习率 | 0.0005 | \ |
-| beta1 | 设置一阶矩估计的指数衰减率 | 0.9 | \ |
-| beta2 | 设置二阶矩估计的指数衰减率 | 0.999 | \ |
-| decay | 是否使用decay | \ | \ |
-| function(decay) | 设置decay方式 | - | 目前支持cosine_decay, cosine_decay_warmup与piecewise_decay |
-| step_each_epoch | 每个epoch包含多少次迭代, cosine_decay/cosine_decay_warmup时有效 | 20 | 计算方式:total_image_num / (batch_size_per_card * card_size) |
-| total_epoch | 总共迭代多少个epoch, cosine_decay/cosine_decay_warmup时有效 | 1000 | 与Global.epoch_num 一致 |
-| warmup_minibatch | 线性warmup的迭代次数, cosine_decay_warmup时有效 | 1000 | \ |
-| boundaries | 学习率下降时的迭代次数间隔, piecewise_decay时有效 | - | 参数为列表形式 |
-| decay_rate | 学习率衰减系数, piecewise_decay时有效 | - | \ |
+| **dataset** | 每次迭代返回一个样本 | - | - |
+| name | dataset类名 | SimpleDataSet | 目前支持`SimpleDataSet`和`LMDBDateSet` |
+| data_dir | 数据集图片存放路径 | ./train_data | \ |
+| label_file_list | 数据标签路径 | ["./train_data/train_list.txt"] | dataset为LMDBDateSet时不需要此参数 |
+| ratio_list | 数据集的比例 | [1.0] | 若label_file_list中有两个train_list,且ratio_list为[0.4,0.6],则从train_list1中采样40%,从train_list2中采样60%组合整个dataset |
+| transforms | 对图片和标签进行变换的方法列表 | [DecodeImage,CTCLabelEncode,RecResizeImg,KeepKeys] | 见[ppocr/data/imaug](../../ppocr/data/imaug) |
+| **loader** | dataloader相关 | - | |
+| shuffle | 每个epoch是否将数据集顺序打乱 | True | \ |
+| batch_size_per_card | 训练时单卡batch size | 256 | \ |
+| drop_last | 是否丢弃因数据集样本数不能被 batch_size 整除而产生的最后一个不完整的mini-batch | True | \ |
+| num_workers | 用于加载数据的子进程个数,若为0即为不开启子进程,在主进程中进行数据加载 | 8 | \ |
\ No newline at end of file
diff --git a/doc/doc_en/config_en.md b/doc/doc_en/config_en.md
index 722da662..574bb41b 100644
--- a/doc/doc_en/config_en.md
+++ b/doc/doc_en/config_en.md
@@ -1,69 +1,121 @@
-# OPTIONAL PARAMETERS LIST
+## Optional parameter list

-The following list can be viewed via `--help`
+The following list can be viewed through `--help`

| FLAG | Supported script | Use | Defaults | Note |
| :----------------------: | :------------: | :---------------: | :--------------: | :-----------------: |
-| -c | ALL | Specify configuration file to use | None | **Please refer to the parameter introduction for configuration file usage** |
-| -o | ALL | set configuration options | None | Configuration using -o has higher priority than the configuration file selected with -c. E.g: `-o Global.use_gpu=false` |
-
+| -c | ALL | Specify configuration file to use | None | **Please refer to the parameter introduction for configuration file usage** |
+| -o | ALL | Set configuration options | None | Configuration using -o has higher priority than the configuration file selected with -c. E.g: `-o Global.use_gpu=false` |
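
For example, assuming the usual PaddleOCR training entry point `tools/train.py` and a config path under `configs/rec/` (neither is named in this document), a run selects a config with `-c` and overrides individual fields with `-o`:

```bash
# pick a config file, then override a single field from the command line
python3 tools/train.py -c configs/rec/ch_ppocr_v1.1/rec_chinese_lite_train_v1.1.yml -o Global.use_gpu=false
```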
## INTRODUCTION TO GLOBAL PARAMETERS OF CONFIGURATION FILE

-Take `rec_chinese_lite_train_v1.1.yml` as an example
-
+Take `rec_chinese_lite_train_v1.1.yml` as an example
+### Global

-| Parameter | Use | Default | Note |
+| Parameter | Use | Defaults | Note |
| :----------------------: | :---------------------: | :--------------: | :--------------------: |
-| algorithm | Select algorithm to use | Synchronize with configuration file | For selecting model, please refer to the supported model [list](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/README_en.md) |
-| use_gpu | Set using GPU or not | true | \ |
-| epoch_num | Maximum training epoch number | 3000 | \ |
+| use_gpu | Set using GPU or not | true | \ |
+| epoch_num | Maximum training epoch number | 500 | \ |
| log_smooth_window | Sliding window size | 20 | \ |
| print_batch_step | Set print log interval | 10 | \ |
| save_model_dir | Set model save path | output/{model_name} | \ |
| save_epoch_step | Set model save interval | 3 | \ |
-| eval_batch_step | Set the model evaluation interval |2000 or [1000, 2000] |runing evaluation every 2000 iters or evaluation is run every 2000 iterations after the 1000th iteration |
-|train_batch_size_per_card | Set the batch size during training | 256 | \ |
-| test_batch_size_per_card | Set the batch size during testing | 256 | \ |
-| image_shape | Set input image size | [3, 32, 100] | \ |
-| max_text_length | Set the maximum text length | 25 | \ |
-| character_type | Set character type | ch | en/ch, the default dict will be used for en, and the custom dict will be used for ch|
-| character_dict_path | Set dictionary path | ./ppocr/utils/ic15_dict.txt | \ |
-| loss_type | Set loss type | ctc | Supports two types of loss: ctc / attention |
-| distort | Set use distort | false | Support distort type ,read [img_tools.py](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/ppocr/data/rec/img_tools.py) |
-| use_space_char | Wether to recognize space | false | Only support in character_type=ch mode |
- label_list | Set the angle supported by the direction classifier | ['0','180'] | Only valid in the direction classifier |
-| reader_yml | Set the reader configuration file | ./configs/rec/rec_icdar15_reader.yml | \ |
-| pretrain_weights | Load pre-trained model path | ./pretrain_models/CRNN/best_accuracy | \ |
-| checkpoints | Load saved model path | None | Used to load saved parameters to continue training after interruption |
-| save_inference_dir | path to save model for inference | None | Use to save inference model |
-
-## INTRODUCTION TO READER PARAMETERS OF CONFIGURATION FILE
-
-Take `rec_chinese_reader.yml` as an example:
-
-| Parameter | Use | Default | Note |
-| :----------------------: | :---------------------: | :--------------: | :--------------------: |
-| reader_function | Select data reading method | ppocr.data.rec.dataset_traversal,SimpleReader | Support two data reading methods: SimpleReader / LMDBReader |
-| num_workers | Set the number of data reading threads | 8 | \ |
-| img_set_dir | Image folder path | ./train_data | \ |
-| label_file_path | Groundtruth file path | ./train_data/rec_gt_train.txt| \ |
-| infer_img | Result folder path | ./infer_img | \|
+| eval_batch_step | Set the model evaluation interval | 2000 or [1000, 2000] | running evaluation every 2000 iters, or running evaluation every 2000 iterations after the 1000th iteration |
+| cal_metric_during_train | Set whether to evaluate the metric during the training process. At this time, the metric of the model under the current batch is evaluated | true | \ |
+| load_static_weights | Set whether the pre-training model is saved in static graph mode (currently only required by the detection algorithm) | true | \ |
+| pretrained_model | Set the path of the pre-trained model | ./pretrain_models/CRNN/best_accuracy | \ |
+| checkpoints | Set model parameter path | None | Used to load parameters after interruption to continue training |
+| use_visualdl | Set whether to enable visualdl for visual log display | False | [Tutorial](https://www.paddlepaddle.org.cn/paddle/visualdl) |
+| infer_img | Set inference image path or folder path | ./infer_img | \ |
+| character_dict_path | Set dictionary path | ./ppocr/utils/ppocr_keys_v1.txt | \ |
+| max_text_length | Set the maximum length of text | 25 | \ |
+| character_type | Set character type | ch | en/ch, the default dict will be used for en, and the custom dict will be used for ch |
+| use_space_char | Set whether to recognize spaces | True | Only supported in character_type=ch mode |
+| label_list | Set the angle supported by the direction classifier | ['0','180'] | Only valid in angle classifier model |
+| save_res_path | Set the save address of the test model results | ./output/det_db/predicts_db.txt | Only valid in the text detection model |
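
In the YAML file these Global parameters form a top-level `Global` section. The sketch below is illustrative only; the exact values come from the config file itself, not from this document:

```yaml
Global:
  use_gpu: true
  epoch_num: 500
  log_smooth_window: 20
  print_batch_step: 10
  save_model_dir: ./output/rec_chinese_lite
  save_epoch_step: 3
  eval_batch_step: [1000, 2000]
  cal_metric_during_train: true
  pretrained_model: ./pretrain_models/CRNN/best_accuracy
  checkpoints:
  use_visualdl: false
  character_dict_path: ./ppocr/utils/ppocr_keys_v1.txt
  character_type: ch
  max_text_length: 25
  use_space_char: true
```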
+### Optimizer ([ppocr/optimizer](../../ppocr/optimizer))
+
+| Parameter | Use | Defaults | Note |
+| :---------------------: | :---------------------: | :--------------: | :--------------------: |
+| name | Optimizer class name | Adam | Currently supports `Momentum`, `Adam`, `RMSProp`, see [ppocr/optimizer/optimizer.py](../../ppocr/optimizer/optimizer.py) |
+| beta1 | Set the exponential decay rate for the 1st moment estimates | 0.9 | \ |
+| beta2 | Set the exponential decay rate for the 2nd moment estimates | 0.999 | \ |
+| **lr** | Set the learning rate decay method | - | \ |
+| name | Learning rate decay class name | Cosine | Currently supports `Linear`, `Cosine`, `Step`, `Piecewise`, see [ppocr/optimizer/learning_rate.py](../../ppocr/optimizer/learning_rate.py) |
+| learning_rate | Set the base learning rate | 0.001 | \ |
+| **regularizer** | Set network regularization method | - | \ |
+| name | Regularizer class name | L2 | Currently supports `L1`, `L2`, see [ppocr/optimizer/regularizer.py](../../ppocr/optimizer/regularizer.py) |
+| factor | Regularization coefficient | 0.00004 | \ |
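
A minimal sketch of the corresponding YAML (illustrative values): the nested `lr` and `regularizer` blocks carry their own `name` fields, which is why `name` appears several times in the table above.

```yaml
Optimizer:
  name: Adam
  beta1: 0.9
  beta2: 0.999
  lr:
    name: Cosine
    learning_rate: 0.001
  regularizer:
    name: L2
    factor: 0.00004
```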
+### Architecture ([ppocr/modeling](../../ppocr/modeling))
+In ppocr, the network is divided into four stages: Transform, Backbone, Neck and Head
+
+| Parameter | Use | Defaults | Note |
+| :---------------------: | :---------------------: | :--------------: | :--------------------: |
+| model_type | Network type | rec | Currently supports `rec`, `det`, `cls` |
+| algorithm | Model name | CRNN | See [algorithm_overview](./algorithm_overview.md) for the support list |
+| **Transform** | Set the transformation method | - | Currently only recognition algorithms are supported, see [ppocr/modeling/transform](../../ppocr/modeling/transform) for details |
+| name | Transformation class name | TPS | Currently supports `TPS` |
+| num_fiducial | Number of TPS control points | 20 | Ten on the top and bottom |
+| loc_lr | Localization network learning rate | 0.1 | \ |
+| model_name | Localization network size | small | Currently supports `small`, `large` |
+| **Backbone** | Set the network backbone class name | - | See [ppocr/modeling/backbones](../../ppocr/modeling/backbones) |
+| name | Backbone class name | ResNet | Currently supports `MobileNetV3`, `ResNet` |
+| layers | ResNet layers | 34 | Currently supports 18, 34, 50, 101, 152, 200 |
+| model_name | MobileNetV3 network size | small | Currently supports `small`, `large` |
+| **Neck** | Set network neck | - | See [ppocr/modeling/necks](../../ppocr/modeling/necks) |
+| name | Neck class name | SequenceEncoder | Currently supports `SequenceEncoder`, `DBFPN` |
+| encoder_type | SequenceEncoder encoder type | rnn | Currently supports `reshape`, `fc`, `rnn` |
+| hidden_size | Number of internal rnn units | 48 | \ |
+| out_channels | Number of DBFPN output channels | 256 | \ |
+| **Head** | Set the network head | - | See [ppocr/modeling/heads](../../ppocr/modeling/heads) |
+| name | Head class name | CTCHead | Currently supports `CTCHead`, `DBHead`, `ClsHead` |
+| fc_decay | CTCHead regularization coefficient | 0.0004 | \ |
+| k | DBHead binarization coefficient | 50 | \ |
+| class_dim | ClsHead output category number | 2 | \ |
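
As a sketch, a CRNN-style recognition model assembles the four stages roughly as follows (illustrative values; a DB detection model would instead use the `DBFPN` neck and `DBHead` head):

```yaml
Architecture:
  model_type: rec
  algorithm: CRNN
  Transform:            # optional; only rec algorithms support a TPS transform
  Backbone:
    name: MobileNetV3
    model_name: small
  Neck:
    name: SequenceEncoder
    encoder_type: rnn
    hidden_size: 48
  Head:
    name: CTCHead
    fc_decay: 0.0004
```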
+
+### Loss ([ppocr/losses](../../ppocr/losses))
+
+| Parameter | Use | Defaults | Note |
+| :---------------------: | :---------------------: | :--------------: | :--------------------: |
+| name | Loss class name | CTCLoss | Currently supports `CTCLoss`, `DBLoss`, `ClsLoss` |
+| balance_loss | Whether to balance the number of positive and negative samples in DBLoss (using OHEM) | True | \ |
+| ohem_ratio | The negative/positive sample ratio of OHEM in DBLoss | 3 | \ |
+| main_loss_type | The loss used by shrink_map in DBLoss | DiceLoss | Currently supports `DiceLoss`, `BCELoss` |
+| alpha | The coefficient of shrink_map_loss in DBLoss | 5 | \ |
+| beta | The coefficient of threshold_map_loss in DBLoss | 10 | \ |
-## INTRODUCTION TO OPTIMIZER PARAMETERS OF CONFIGURATION FILE
+
+### PostProcess ([ppocr/postprocess](../../ppocr/postprocess))
-Take `rec_icdar15_train.yml` as an example:
+
+| Parameter | Use | Defaults | Note |
+| :---------------------: | :---------------------: | :--------------: | :--------------------: |
+| name | Post-processing class name | CTCLabelDecode | Currently supports `CTCLabelDecode`, `AttnLabelDecode`, `DBPostProcess`, `ClsPostProcess` |
+| thresh | The threshold for binarization of the segmentation map in DBPostProcess | 0.3 | \ |
+| box_thresh | The threshold for filtering output boxes in DBPostProcess. Boxes below this threshold will not be output | 0.7 | \ |
+| max_candidates | The maximum number of text boxes output in DBPostProcess | 1000 | |
+| unclip_ratio | The unclip ratio of the text box in DBPostProcess | 2.0 | \ |
+
+### Metric ([ppocr/metrics](../../ppocr/metrics))
+
+| Parameter | Use | Defaults | Note |
+| :---------------------: | :---------------------: | :--------------: | :--------------------: |
+| name | Metric method name | RecMetric | Currently supports `DetMetric`, `RecMetric`, `ClsMetric` |
+| main_indicator | Main indicator, used to select the best model | acc | hmean for the detection method, acc for the recognition and classification methods |
-| Parameter | Use | Default | None |
+
+### Dataset ([ppocr/data](../../ppocr/data))
+
+| Parameter | Use | Defaults | Note |
| :---------------------: | :---------------------: | :--------------: | :--------------------: |
-| function | Select Optimizer function | pocr.optimizer,AdamDecay | Only support Adam |
-| base_lr | Set the base lr | 0.0005 | \ |
-| beta1 | Set the exponential decay rate for the 1st moment estimates | 0.9 | \ |
-| beta2 | Set the exponential decay rate for the 2nd moment estimates | 0.999 | \ |
-| decay | Whether to use decay | \ | \ |
-| function(decay) | Set the decay function | cosine_decay | Support cosine_decay, cosine_decay_warmup and piecewise_decay |
-| step_each_epoch | The number of steps in an epoch. Used in cosine_decay/cosine_decay_warmup | 20 | Calculation: total_image_num / (batch_size_per_card * card_size) |
-| total_epoch | The number of epochs. Used in cosine_decay/cosine_decay_warmup | 1000 | Consistent with Global.epoch_num |
-| warmup_minibatch | Number of steps for linear warmup. Used in cosine_decay_warmup | 1000 | \ |
-| boundaries | The step intervals to reduce learning rate. Used in piecewise_decay | - | The format is list |
-| decay_rate | Learning rate decay rate. Used in piecewise_decay | - | \ |
+| **dataset** | Return one sample per iteration | - | - |
+| name | Dataset class name | SimpleDataSet | Currently supports `SimpleDataSet`, `LMDBDateSet` |
+| data_dir | Image folder path | ./train_data | \ |
+| label_file_list | Groundtruth file path | ["./train_data/train_list.txt"] | This parameter is not required when dataset is LMDBDateSet |
+| ratio_list | Ratio of the data set | [1.0] | If there are two train_lists in label_file_list and ratio_list is [0.4,0.6], 40% will be sampled from train_list1 and 60% from train_list2 to combine the entire dataset |
+| transforms | List of methods to transform images and labels | [DecodeImage,CTCLabelEncode,RecResizeImg,KeepKeys] | See [ppocr/data/imaug](../../ppocr/data/imaug) |
+| **loader** | dataloader related | - | |
+| shuffle | Whether to shuffle the order of the dataset in each epoch | True | \ |
+| batch_size_per_card | Single card batch size during training | 256 | \ |
+| drop_last | Whether to discard the last incomplete mini-batch when the number of samples in the dataset is not divisible by batch_size | True | \ |
+| num_workers | The number of sub-processes used to load data; if it is 0, data is loaded in the main process without sub-processes | 8 | \ |
\ No newline at end of file
--
GitLab