Unverified commit 40503f14 authored by wangguanzhong, committed by GitHub

[cherry-pick] ppsmrt (#6010)

* add ppsmrt, test=document_fix

* fix picodet cfg, test=document_fix

* add rcnn & picodet, test=document_fix

* rename battery dataset

* add faster rcnn

* update README, test=document_fix

* add data analysis, test=document_fix

* add faster r101, test=document_fix
Parent c0ede7b1
# Data Analysis Feature
To better help users analyze their data and thus recommend more suitable models, we provide a **data analysis** feature. Users do not need to upload the original images; uploading the annotation files alone is enough to analyze the characteristics of the dataset.
The currently supported formats are:
* LabelMe annotation format
* Jingling (精灵标注) annotation format
* LabelImg annotation format
* VOC format
* COCO format
* Seg format
## LabelMe annotation format
1. Select a zip archive containing the annotation files. The archive must contain an `annotations` folder with one json file per annotated image; each json file has the same name as its corresponding image (apart from the extension).
2. Detection and segmentation tasks are supported. If the provided annotations do not match the selected task type, an error is reported.
3. For detection tasks, `rectangle`-type annotations are required; for segmentation tasks, `polygon`-type annotations are required.
<div align="center">
<img src="https://user-images.githubusercontent.com/48433081/169194724-c3fff1db-78b0-4013-925b-b99e5f51e5f2.png" width = "600" />
</div>
## Jingling (精灵标注) annotation format
1. Select a zip archive containing the annotation files. The archive must contain an `annotations` folder with one json file per annotated image; each json file has the same name as its corresponding image (apart from the extension).
2. Detection and segmentation tasks are supported. If the provided annotations do not match the selected task type, an error is reported.
3. For detection tasks, `bndbox` or `polygon` annotations are required; for segmentation tasks, `polygon` annotations are required.
<div align="center">
<img src="https://user-images.githubusercontent.com/48433081/169194724-c3fff1db-78b0-4013-925b-b99e5f51e5f2.png" width = "600" />
</div>
## LabelImg annotation format
1. Select a zip archive containing the annotation files. The archive must contain an `annotations` folder with one xml file per annotated image; each xml file has the same name as its corresponding image (apart from the extension).
2. Only detection tasks are supported.
3. The annotation files must contain the `bndbox` field; the `segmentation` field is optional.
<div align="center">
<img src="https://user-images.githubusercontent.com/48433081/169195232-2ccd4c07-8203-44a5-9911-97c092a228d8.png" width = "600" />
</div>
## VOC format
1. Select a zip archive containing the annotation files. The archive must contain an `annotations` folder with one xml file per annotated image; each xml file has the same name as its corresponding image (apart from the extension).
2. Only detection tasks are supported.
3. The annotation files must contain the `bndbox` field; the `segmentation` field is optional.
<div align="center">
<img src="https://user-images.githubusercontent.com/48433081/169195232-2ccd4c07-8203-44a5-9911-97c092a228d8.png" width = "600" />
</div>
## COCO format
1. Select a zip archive containing the annotation file. The archive must contain an `annotations` folder holding a single file named annotation.json.
2. Detection and segmentation tasks are supported. If the provided annotations do not match the selected task type, an error is reported.
3. For detection tasks, the annotation file must contain the `bbox` field (the `segmentation` field is optional); for segmentation tasks, it must contain the `segmentation` field.
<div align="center">
<img src="https://user-images.githubusercontent.com/48433081/169195416-eb12f1bb-6d18-4354-bad5-c18961aa049d.png" width = "600" />
</div>
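For reference, a minimal COCO-style `annotation.json` for detection looks roughly as follows. This is only an illustrative sketch written from Python; the file name, class name, and box values are made up, and only the field layout matters.
```python
import json

# Minimal COCO-style detection annotation: one image, one box, one category.
# bbox is [x, y, width, height] in pixels; a "segmentation" field may be added per annotation.
coco = {
    "images": [
        {"id": 1, "file_name": "Board_daowen_101.png", "width": 1024, "height": 1024},
    ],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [120.0, 240.0, 56.0, 32.0], "area": 1792.0, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "daowen"},  # hypothetical class name
    ],
}

with open("annotation.json", "w") as f:
    json.dump(coco, f, ensure_ascii=False, indent=2)
```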
## Seg format
1. Select a zip archive containing the annotation files. The archive must contain an `annotations` folder with one png file per annotated image; each png file has the same name as its corresponding image (apart from the extension).
2. Only segmentation tasks are supported.
3. Each annotation file must correspond to its original image pixel by pixel, and the format must be png (suffix .png or .PNG). Every pixel value is an integer ID in [0, 255]; apart from 255, the IDs must start at 0 and increase consecutively without gaps. The value 255 marks pixels to be ignored, and 0 marks the background class.
<div align="center">
<img src="https://user-images.githubusercontent.com/48433081/169195389-85a9bda2-282b-452f-a809-d0100291f86f.png" width = "600" />
</div>
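The ID rule above can be checked quickly before uploading. The snippet below is a small sketch, assuming NumPy and Pillow are available; the file path is only an example.
```python
import numpy as np
from PIL import Image

# Load one Seg-format label image and check its pixel IDs.
label = np.asarray(Image.open("annotations/example.png"))
ids = np.unique(label)
class_ids = ids[ids != 255]  # 255 marks ignored pixels

assert label.ndim == 2, "label must be a single-channel png"
assert class_ids.size > 0 and class_ids[0] == 0, "class IDs must start at 0 (background)"
assert np.array_equal(class_ids, np.arange(class_ids.size)), \
    "class IDs must increase consecutively without gaps"
print("classes found:", class_ids.tolist())
```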
# PP-SMRT: the PaddlePaddle industrial model selection tool
## 1. Introduction
PP-SMRT (PaddlePaddle Sense Model Recommend Tool) is an industrial model selection tool built on PaddlePaddle's experience with real-world deployments. Users describe their actual requirements and receive recommendations for algorithm models, deployment hardware, and tutorial documentation. To make the recommendations more precise, a data analysis feature is included: users upload their annotation files and the system automatically analyzes the data characteristics, such as class imbalance, small objects, or dense objects, and then suggests a more suitable model and optimization strategy for the scenario.
PP-SMRT user workflow diagram
This document describes how PP-SMRT recommends detection models and how to use the recommended models. For segmentation models, please refer to the [PaddleSeg documentation](https://github.com/PaddlePaddle/PaddleSeg).
## 2. Datasets
PP-SMRT recommends the most suitable model by comparing detection algorithms on real industrial scenarios. It currently covers two domains, industrial quality inspection and urban security. The datasets used for the algorithm comparison are described below.
### 1. New-energy battery quality inspection dataset
This dataset covers quality inspection of new-energy battery components. It contains 15,021 images with 22,045 annotated boxes, covering 45 defect types such as glue loss, cracks, and scratches.
Sample images from the battery dataset:
<div align="center">
<img src="images/Board_diaojiao_1591.png" width = "200" />
<img src="images/UpCoa_liewen_163.png" width = "200" />
</div>
Dataset characteristics:
1. Balanced class distribution
2. Small objects
3. Non-dense data
### 2. Aluminum part quality inspection dataset
This dataset comes from quality inspection during aluminum part production. It contains 11,293 images with 43,157 annotated boxes, covering 5 defect types such as scratches, press marks, and peeling.
Sample images from the aluminum inspection dataset:
<div align="center">
<img src="images/lvjian1_0.jpg" width = "200" />
<img src="images/lvjian1_10.jpg" width = "200" />
</div>
Dataset characteristics:
1. Imbalanced class distribution
2. Small objects
3. Non-dense data
### 3. Pedestrian and vehicle dataset
The dataset contains 2,600 manually annotated images with two-point anchor-box labels, covering 22 pedestrian and vehicle categories:
Pedestrians include ordinary pedestrians, 3D dummies, seated people, and cyclists; vehicles include hatchbacks, sedans, small passenger cars, small trucks, pickup trucks, light trucks, box trucks, tractor units, cement trucks, construction vehicles, school buses, small and medium buses, large single-deck buses, small electric vehicles, motorcycles, bicycles, tricycles, and other special vehicles.
Sample images from the pedestrian and vehicle dataset:
<div align="center">
<img src="images/renche_00002.jpg" width = "200" />
<img src="images/renche_00204.jpg" width = "200" />
</div>
Dataset characteristics:
1. Imbalanced class distribution
2. Small objects
3. Non-dense data
**Note:**
The dataset characteristics are determined by the following criteria (a sketch that computes them is given after this list):
- Imbalanced class distribution: sample 1,000 images; the standard deviation of the per-class sample counts is greater than 400
- Small-object dataset: more than 30% of the samples have a relative size smaller than 0.1 or an absolute size smaller than 32 pixels
- Dense dataset:
```
Dense object: more than 2 surrounding objects lie within twice the object's own size;
Dense image: dense objects account for more than 50% of all objects in the image;
Dense dataset: dense images account for more than 30% of all images
```
These characteristics are computed by the data analysis tool, which supports several detection and segmentation annotation formats; see the [documentation](DataAnalysis.md) for details.
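The sketch below illustrates the first two criteria on a COCO annotation file. It is a rough approximation, not the tool's actual implementation: the annotation path is an example, "relative size" is read here as the larger box side divided by the corresponding image side, and "absolute size" as the larger box side in pixels.
```python
import json
import random
from collections import Counter

# Load a COCO annotation file (path is an example).
with open("dataset/battery_mini/annotations/train.json") as f:
    coco = json.load(f)

images = {im["id"]: im for im in coco["images"]}
sample_ids = set(random.sample(list(images), min(1000, len(images))))

per_class = Counter()
small, total = 0, 0
for ann in coco["annotations"]:
    if ann["image_id"] not in sample_ids:
        continue
    per_class[ann["category_id"]] += 1
    im = images[ann["image_id"]]
    w, h = ann["bbox"][2], ann["bbox"][3]
    relative = max(w / im["width"], h / im["height"])
    total += 1
    if relative < 0.1 or max(w, h) < 32:
        small += 1

counts = list(per_class.values())
mean = sum(counts) / len(counts)
std = (sum((c - mean) ** 2 for c in counts) / len(counts)) ** 0.5
print("imbalanced class distribution:", std > 400)
print("small-object dataset:", (small / total > 0.3) if total else False)
```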
## 3. End-to-end workflow for a recommended model
The model selection tool returns a detection model configuration matching your scenario and data characteristics, for example [PP-YOLOE](./configs/ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml).
The configuration file is used as follows.
### 1. Environment setup
First install PaddlePaddle:
```bash
# CUDA10.2
pip install paddlepaddle-gpu==2.2.2 -i https://mirror.baidu.com/pypi/simple
# CPU
pip install paddlepaddle==2.2.2 -i https://mirror.baidu.com/pypi/simple
```
Then install PaddleDetection and its dependencies:
```bash
# Clone the PaddleDetection repository
cd <path/to/clone/PaddleDetection>
git clone https://github.com/PaddlePaddle/PaddleDetection.git
# Install the remaining dependencies
cd PaddleDetection
pip install -r requirements.txt
```
For detailed installation instructions, see the [installation guide](../docs/tutorials/INSTALL_cn.md).
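You can optionally verify the installation before moving on; `paddle.utils.run_check()` reports whether PaddlePaddle is installed correctly and whether the GPU is usable.
```bash
python -c "import paddle; paddle.utils.run_check()"
```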
### 2. Data preparation
Prepare your training dataset; COCO-format annotations are recommended. If your annotations are in LabelMe or VOC format, first convert them to COCO with the [conversion script](../tools/x2coco.py) (an example invocation is shown below). For details on data preparation, see the [documentation](../docs/tutorials/PrepareDataSet.md).
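A possible LabelMe-to-COCO conversion looks like the following; the directory names are placeholders and the exact flags may differ across PaddleDetection versions, so check `python tools/x2coco.py --help` first.
```bash
python tools/x2coco.py \
    --dataset_type labelme \
    --json_input_dir ./labelme_annotations \
    --image_input_dir ./images \
    --output_dir ./coco_dataset \
    --train_proportion 0.8 \
    --val_proportion 0.2 \
    --test_proportion 0.0
```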
This document uses a subset of the new-energy battery quality inspection dataset as the running example; it can be downloaded from this [link](https://bj.bcebos.com/v1/paddle-smrt/data/battery_mini.zip).
The data is organized as follows:
```
battery_mini
├── annotations
│   ├── test.json
│   └── train.json
└── images
├── Board_daowen_101.png
├── Board_daowen_109.png
├── Board_daowen_117.png
...
```
### 3. Model training / evaluation / prediction
Train with the model configuration recommended by the selection tool. All recommended models currently use **single-GPU training**; evaluation can be run during training, and models are saved to `./output` by default.
```bash
python tools/train.py -c configs/ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml --eval
```
If training is interrupted, resume it with the `-r` option:
```bash
python tools/train.py -c configs/ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml --eval -r output/ppyoloe_crn_m_300e_battery_1024/9.pdparams
```
To evaluate the trained model separately, use `tools/eval.py`:
```bash
python tools/eval.py -c configs/ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml -o weights=output/ppyoloe_crn_m_300e_battery_1024/model_final.pdparams
```
After training, visualize the predictions with `tools/infer.py`:
```bash
python tools/infer.py -c configs/ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml -o weights=output/ppyoloe_crn_m_300e_battery_1024/model_final.pdparams --infer_img=images/Board_diaojiao_1591.png
```
For more training parameters, see the [documentation](../docs/tutorials/GETTING_STARTED_cn.md).
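Configuration entries can also be overridden from the command line with `-o`, which is convenient for quick experiments. The values below are only an example (a smaller batch size and learning rate when GPU memory is limited); the keys must match entries in the YAML config.
```bash
python tools/train.py -c configs/ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml --eval \
    -o TrainReader.batch_size=4 LearningRate.base_lr=0.003
```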
### 4. Model export and deployment
After training, the model can be deployed on a 1080Ti, 2080Ti, or other server GPU, using Paddle Inference for C++ deployment.
First export the model and the configuration files used at deployment time:
```bash
python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml -o weights=output/ppyoloe_crn_m_300e_battery_1024/model_final.pdparams
```
Then use the deployment code in PaddleDetection to implement the C++ deployment; see the [documentation](../deploy/cpp/README.md) for detailed steps.
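Before setting up the full C++ pipeline, the exported model can be smoke-tested with the Python inference script shipped with PaddleDetection. The command below is a sketch: the exported directory name follows the config name by default, and flag names may vary with the PaddleDetection version.
```bash
python deploy/python/infer.py \
    --model_dir=output_inference/ppyoloe_crn_m_300e_battery_1024 \
    --image_file=images/Board_diaojiao_1591.png \
    --device=GPU
```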
If you prefer a deployment workflow with a graphical interface, see the next section.
## 4. Deployment demos
To make deployment easier, we also provide complete visual deployment demos; feel free to try them.
* [Windows demo download](https://github.com/PaddlePaddle/PaddleX/tree/develop/deploy/cpp/docs/csharp_deploy)
<div align="center">
<img src="https://user-images.githubusercontent.com/48433081/169064583-c931f4c0-dfd6-4bfa-85f1-be68eb351e4a.png" width = "800" />
</div>
* [Linux demo download](https://github.com/cjh3020889729/The-PaddleX-QT-Visualize-GUI)
<div align="center">
<img src="https://user-images.githubusercontent.com/48433081/169065951-147f8d51-bf3e-4a28-9197-d717968de73f.png" width = "800" />
</div>
## 5. Application examples
To make industrial adoption easier, PP-SMRT also provides detailed application examples; feel free to use them.
* Industrial vision
  * [Industrial defect detection](https://aistudio.baidu.com/aistudio/projectdetail/2598319)
  * [Meter reading](https://aistudio.baidu.com/aistudio/projectdetail/2598327)
  * [Rebar counting](https://aistudio.baidu.com/aistudio/projectdetail/2404188)
* Urban scenarios
  * [Pedestrian counting](https://aistudio.baidu.com/aistudio/projectdetail/2421822)
  * [Vehicle counting](https://aistudio.baidu.com/aistudio/projectdetail/3391734?contributionType=1)
  * [Safety helmet detection](https://aistudio.baidu.com/aistudio/projectdetail/3944737?contributionType=1)
weights: output/picodet_l_1024_coco_lcnet_battery/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams
worker_num: 2
eval_height: &eval_height 1024
eval_width: &eval_width 1024
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 45
TrainDataset:
!COCODataSet
image_dir: images
anno_path: annotations/train.json
dataset_dir: dataset/battery_mini
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
TestDataset:
!ImageFolder
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
epoch: 50
LearningRate:
base_lr: 0.006
schedulers:
- !CosineDecay
max_epochs: 50
- !LinearWarmup
start_factor: 0.001
steps: 300
TrainReader:
sample_transforms:
- Decode: {}
- RandomCrop: {}
- RandomFlip: {prob: 0.5}
- RandomDistort: {}
batch_transforms:
- BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
- PadGT: {}
batch_size: 8
shuffle: true
drop_last: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: *eval_size, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 8
shuffle: false
TestReader:
inputs_def:
image_shape: [1, 3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: *eval_size, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 10
print_flops: false
find_unused_parameters: True
use_ema: true
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True           # Whether NMS is included in the network when exporting the model.
  benchmark: False    # Used to test model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.00004
type: L2
architecture: PicoDet
PicoDet:
backbone: LCNet
neck: LCPAN
head: PicoHeadV2
LCNet:
scale: 2.0
feature_maps: [3, 4, 5]
LCPAN:
out_channels: 160
use_depthwise: True
num_features: 4
PicoHeadV2:
conv_feat:
name: PicoFeat
feat_in: 160
feat_out: 160
num_convs: 4
num_fpn_stride: 4
norm_type: bn
share_cls_reg: True
use_se: True
fpn_stride: [8, 16, 32, 64]
feat_in_chan: 160
prior_prob: 0.01
reg_max: 7
cell_offset: 0.5
grid_cell_scale: 5.0
static_assigner_epoch: 100
use_align_head: True
static_assigner:
name: ATSSAssigner
topk: 9
force_gt_matching: False
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
loss_class:
name: VarifocalLoss
use_sigmoid: False
iou_weighted: True
loss_weight: 1.0
loss_dfl:
name: DistributionFocalLoss
loss_weight: 0.5
loss_bbox:
name: GIoULoss
loss_weight: 2.5
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.025
nms_threshold: 0.6
weights: output/picodet_l_1024_coco_lcnet_lvjian1/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams
worker_num: 2
eval_height: &eval_height 1024
eval_width: &eval_width 1024
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 5
TrainDataset:
!COCODataSet
image_dir: images
anno_path: train.json
dataset_dir: dataset/slice_lvjian1_data/train
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
TestDataset:
!ImageFolder
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
epoch: 50
LearningRate:
base_lr: 0.006
schedulers:
- !CosineDecay
max_epochs: 50
- !LinearWarmup
start_factor: 0.001
steps: 300
TrainReader:
sample_transforms:
- Decode: {}
- RandomCrop: {}
- RandomFlip: {prob: 0.5}
- RandomDistort: {}
batch_transforms:
- BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
- PadGT: {}
batch_size: 8
shuffle: true
drop_last: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: *eval_size, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 8
shuffle: false
TestReader:
inputs_def:
image_shape: [1, 3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: *eval_size, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 10
print_flops: false
find_unused_parameters: True
use_ema: true
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True           # Whether NMS is included in the network when exporting the model.
  benchmark: False    # Used to test model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.00004
type: L2
architecture: PicoDet
PicoDet:
backbone: LCNet
neck: LCPAN
head: PicoHeadV2
LCNet:
scale: 2.0
feature_maps: [3, 4, 5]
LCPAN:
out_channels: 160
use_depthwise: True
num_features: 4
PicoHeadV2:
conv_feat:
name: PicoFeat
feat_in: 160
feat_out: 160
num_convs: 4
num_fpn_stride: 4
norm_type: bn
share_cls_reg: True
use_se: True
fpn_stride: [8, 16, 32, 64]
feat_in_chan: 160
prior_prob: 0.01
reg_max: 7
cell_offset: 0.5
grid_cell_scale: 5.0
static_assigner_epoch: 100
use_align_head: True
static_assigner:
name: ATSSAssigner
topk: 9
force_gt_matching: False
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
loss_class:
name: VarifocalLoss
use_sigmoid: False
iou_weighted: True
loss_weight: 1.0
loss_dfl:
name: DistributionFocalLoss
loss_weight: 0.5
loss_bbox:
name: GIoULoss
loss_weight: 2.5
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.025
nms_threshold: 0.6
weights: output/picodet_l_1024_coco_lcnet_renche/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams
worker_num: 2
eval_height: &eval_height 1024
eval_width: &eval_width 1024
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 22
TrainDataset:
!COCODataSet
image_dir: train_images
anno_path: train.json
dataset_dir: dataset/renche
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: train_images
anno_path: test.json
dataset_dir: dataset/renche
TestDataset:
!ImageFolder
anno_path: test.json
dataset_dir: dataset/renche
epoch: 50
LearningRate:
base_lr: 0.006
schedulers:
- !CosineDecay
max_epochs: 50
- !LinearWarmup
start_factor: 0.001
steps: 300
TrainReader:
sample_transforms:
- Decode: {}
- RandomCrop: {}
- RandomFlip: {prob: 0.5}
- RandomDistort: {}
batch_transforms:
- BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
- PadGT: {}
batch_size: 8
shuffle: true
drop_last: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: *eval_size, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 8
shuffle: false
TestReader:
inputs_def:
image_shape: [1, 3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: *eval_size, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 10
print_flops: false
find_unused_parameters: True
use_ema: true
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True           # Whether NMS is included in the network when exporting the model.
  benchmark: False    # Used to test model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.00004
type: L2
architecture: PicoDet
PicoDet:
backbone: LCNet
neck: LCPAN
head: PicoHeadV2
LCNet:
scale: 2.0
feature_maps: [3, 4, 5]
LCPAN:
out_channels: 160
use_depthwise: True
num_features: 4
PicoHeadV2:
conv_feat:
name: PicoFeat
feat_in: 160
feat_out: 160
num_convs: 4
num_fpn_stride: 4
norm_type: bn
share_cls_reg: True
use_se: True
fpn_stride: [8, 16, 32, 64]
feat_in_chan: 160
prior_prob: 0.01
reg_max: 7
cell_offset: 0.5
grid_cell_scale: 5.0
static_assigner_epoch: 100
use_align_head: True
static_assigner:
name: ATSSAssigner
topk: 9
force_gt_matching: False
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
loss_class:
name: VarifocalLoss
use_sigmoid: False
iou_weighted: True
loss_weight: 1.0
loss_dfl:
name: DistributionFocalLoss
loss_weight: 0.5
loss_bbox:
name: GIoULoss
loss_weight: 2.5
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.025
nms_threshold: 0.6
weights: output/picodet_l_640_coco_lcnet_battery/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams
worker_num: 2
eval_height: &eval_height 640
eval_width: &eval_width 640
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 45
TrainDataset:
!COCODataSet
image_dir: images
anno_path: annotations/train.json
dataset_dir: dataset/battery_mini
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
TestDataset:
!ImageFolder
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
epoch: 50
LearningRate:
base_lr: 0.006
schedulers:
- !CosineDecay
max_epochs: 50
- !LinearWarmup
start_factor: 0.001
steps: 300
TrainReader:
sample_transforms:
- Decode: {}
- RandomCrop: {}
- RandomFlip: {prob: 0.5}
- RandomDistort: {}
batch_transforms:
- BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
- PadGT: {}
batch_size: 8
shuffle: true
drop_last: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: *eval_size, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 8
shuffle: false
TestReader:
inputs_def:
image_shape: [1, 3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: *eval_size, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 10
print_flops: false
find_unused_parameters: True
use_ema: true
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True           # Whether NMS is included in the network when exporting the model.
  benchmark: False    # Used to test model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.00004
type: L2
architecture: PicoDet
PicoDet:
backbone: LCNet
neck: LCPAN
head: PicoHeadV2
LCNet:
scale: 2.0
feature_maps: [3, 4, 5]
LCPAN:
out_channels: 160
use_depthwise: True
num_features: 4
PicoHeadV2:
conv_feat:
name: PicoFeat
feat_in: 160
feat_out: 160
num_convs: 4
num_fpn_stride: 4
norm_type: bn
share_cls_reg: True
use_se: True
fpn_stride: [8, 16, 32, 64]
feat_in_chan: 160
prior_prob: 0.01
reg_max: 7
cell_offset: 0.5
grid_cell_scale: 5.0
static_assigner_epoch: 100
use_align_head: True
static_assigner:
name: ATSSAssigner
topk: 9
force_gt_matching: False
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
loss_class:
name: VarifocalLoss
use_sigmoid: False
iou_weighted: True
loss_weight: 1.0
loss_dfl:
name: DistributionFocalLoss
loss_weight: 0.5
loss_bbox:
name: GIoULoss
loss_weight: 2.5
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.025
nms_threshold: 0.6
weights: output/picodet_l_640_coco_lcnet_lvjian1/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams
worker_num: 2
eval_height: &eval_height 640
eval_width: &eval_width 640
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 5
TrainDataset:
!COCODataSet
image_dir: images
anno_path: train.json
    dataset_dir: dataset/slice_lvjian1_data/train
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: val.json
    dataset_dir: dataset/slice_lvjian1_data/eval
TestDataset:
!ImageFolder
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
epoch: 50
LearningRate:
base_lr: 0.006
schedulers:
- !CosineDecay
max_epochs: 50
- !LinearWarmup
start_factor: 0.001
steps: 300
TrainReader:
sample_transforms:
- Decode: {}
- RandomCrop: {}
- RandomFlip: {prob: 0.5}
- RandomDistort: {}
batch_transforms:
- BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
- PadGT: {}
batch_size: 8
shuffle: true
drop_last: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: *eval_size, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 8
shuffle: false
TestReader:
inputs_def:
image_shape: [1, 3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: *eval_size, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 10
print_flops: false
find_unused_parameters: True
use_ema: true
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True           # Whether NMS is included in the network when exporting the model.
  benchmark: False    # Used to test model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.00004
type: L2
architecture: PicoDet
PicoDet:
backbone: LCNet
neck: LCPAN
head: PicoHeadV2
LCNet:
scale: 2.0
feature_maps: [3, 4, 5]
LCPAN:
out_channels: 160
use_depthwise: True
num_features: 4
PicoHeadV2:
conv_feat:
name: PicoFeat
feat_in: 160
feat_out: 160
num_convs: 4
num_fpn_stride: 4
norm_type: bn
share_cls_reg: True
use_se: True
fpn_stride: [8, 16, 32, 64]
feat_in_chan: 160
prior_prob: 0.01
reg_max: 7
cell_offset: 0.5
grid_cell_scale: 5.0
static_assigner_epoch: 100
use_align_head: True
static_assigner:
name: ATSSAssigner
topk: 9
force_gt_matching: False
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
loss_class:
name: VarifocalLoss
use_sigmoid: False
iou_weighted: True
loss_weight: 1.0
loss_dfl:
name: DistributionFocalLoss
loss_weight: 0.5
loss_bbox:
name: GIoULoss
loss_weight: 2.5
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.025
nms_threshold: 0.6
weights: output/picodet_l_640_coco_lcnet_renche/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams
worker_num: 2
eval_height: &eval_height 640
eval_width: &eval_width 640
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 22
TrainDataset:
!COCODataSet
image_dir: train_images
anno_path: train.json
dataset_dir: dataset/renche
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: train_images
anno_path: test.json
dataset_dir: dataset/renche
TestDataset:
!ImageFolder
anno_path: test.json
dataset_dir: dataset/renche
epoch: 50
LearningRate:
base_lr: 0.006
schedulers:
- !CosineDecay
max_epochs: 50
- !LinearWarmup
start_factor: 0.001
steps: 300
TrainReader:
sample_transforms:
- Decode: {}
- RandomCrop: {}
- RandomFlip: {prob: 0.5}
- RandomDistort: {}
batch_transforms:
- BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
- PadGT: {}
batch_size: 8
shuffle: true
drop_last: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: *eval_size, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 8
shuffle: false
TestReader:
inputs_def:
image_shape: [1, 3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: *eval_size, keep_ratio: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 10
print_flops: false
find_unused_parameters: True
use_ema: true
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True           # Whether NMS is included in the network when exporting the model.
  benchmark: False    # Used to test model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.00004
type: L2
architecture: PicoDet
PicoDet:
backbone: LCNet
neck: LCPAN
head: PicoHeadV2
LCNet:
scale: 2.0
feature_maps: [3, 4, 5]
LCPAN:
out_channels: 160
use_depthwise: True
num_features: 4
PicoHeadV2:
conv_feat:
name: PicoFeat
feat_in: 160
feat_out: 160
num_convs: 4
num_fpn_stride: 4
norm_type: bn
share_cls_reg: True
use_se: True
fpn_stride: [8, 16, 32, 64]
feat_in_chan: 160
prior_prob: 0.01
reg_max: 7
cell_offset: 0.5
grid_cell_scale: 5.0
static_assigner_epoch: 100
use_align_head: True
static_assigner:
name: ATSSAssigner
topk: 9
force_gt_matching: False
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
loss_class:
name: VarifocalLoss
use_sigmoid: False
iou_weighted: True
loss_weight: 1.0
loss_dfl:
name: DistributionFocalLoss
loss_weight: 0.5
loss_bbox:
name: GIoULoss
loss_weight: 2.5
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.025
nms_threshold: 0.6
architecture: YOLOv3
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r101vd_dcn_365e_coco.pdparams
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
metric: COCO
num_classes: 45
TrainDataset:
!COCODataSet
image_dir: images
anno_path: annotations/train.json
dataset_dir: dataset/battery_mini
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
TestDataset:
!ImageFolder
anno_path: annotations/test.json # also support txt (like VOC's label_list.txt)
dataset_dir: dataset/battery_mini # if set, anno_path will be 'dataset_dir/anno_path'
epoch: 40
LearningRate:
base_lr: 0.0001
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones:
- 80
- !LinearWarmup
start_factor: 0.
steps: 1000
snapshot_epoch: 5
worker_num: 2
TrainReader:
inputs_def:
num_max_boxes: 100
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeBox: {}
- PadBox: {num_max_boxes: 100}
- BboxXYXY2XYWH: {}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]}
batch_size: 8
shuffle: true
drop_last: true
use_shared_memory: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 8
TestReader:
inputs_def:
image_shape: [3, 640, 640]
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
OptimizerBuilder:
clip_grad_by_norm: 35.
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
YOLOv3:
backbone: ResNet
neck: PPYOLOPAN
yolo_head: YOLOv3Head
post_process: BBoxPostProcess
ResNet:
depth: 101
variant: d
return_idx: [1, 2, 3]
dcn_v2_stages: [3]
freeze_at: -1
freeze_norm: false
norm_decay: 0.
PPYOLOPAN:
drop_block: true
block_size: 3
keep_prob: 0.9
spp: true
YOLOv3Head:
anchors: [[10, 13], [16, 30], [33, 23],
[30, 61], [62, 45], [59, 119],
[116, 90], [156, 198], [373, 326]]
anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
loss: YOLOv3Loss
iou_aware: true
iou_aware_factor: 0.4
YOLOv3Loss:
ignore_thresh: 0.7
downsample: [32, 16, 8]
label_smooth: false
scale_x_y: 1.05
iou_loss: IouLoss
iou_aware_loss: IouAwareLoss
IouLoss:
loss_weight: 2.5
loss_square: true
IouAwareLoss:
loss_weight: 1.0
BBoxPostProcess:
decode:
name: YOLOBox
conf_thresh: 0.01
downsample_ratio: 32
clip_bbox: true
scale_x_y: 1.05
nms:
name: MatrixNMS
keep_top_k: 100
score_threshold: 0.01
post_threshold: 0.01
nms_top_k: -1
background_label: -1
architecture: YOLOv3
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r101vd_dcn_365e_coco.pdparams
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
metric: COCO
num_classes: 45
TrainDataset:
!COCODataSet
image_dir: images
anno_path: annotations/train.json
dataset_dir: dataset/battery_mini
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
TestDataset:
!ImageFolder
anno_path: annotations/test.json # also support txt (like VOC's label_list.txt)
dataset_dir: dataset/battery_mini # if set, anno_path will be 'dataset_dir/anno_path'
epoch: 40
LearningRate:
base_lr: 0.0001
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones:
- 80
- !LinearWarmup
start_factor: 0.
steps: 1000
snapshot_epoch: 5
worker_num: 2
TrainReader:
inputs_def:
num_max_boxes: 100
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeBox: {}
- PadBox: {num_max_boxes: 100}
- BboxXYXY2XYWH: {}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]}
batch_size: 2
shuffle: true
drop_last: true
use_shared_memory: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 8
TestReader:
inputs_def:
image_shape: [3, 1024, 1024]
sample_transforms:
- Decode: {}
- Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
OptimizerBuilder:
clip_grad_by_norm: 35.
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
YOLOv3:
backbone: ResNet
neck: PPYOLOPAN
yolo_head: YOLOv3Head
post_process: BBoxPostProcess
ResNet:
depth: 101
variant: d
return_idx: [1, 2, 3]
dcn_v2_stages: [3]
freeze_at: -1
freeze_norm: false
norm_decay: 0.
PPYOLOPAN:
drop_block: true
block_size: 3
keep_prob: 0.9
spp: true
YOLOv3Head:
anchors: [[10, 13], [16, 30], [33, 23],
[30, 61], [62, 45], [59, 119],
[116, 90], [156, 198], [373, 326]]
anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
loss: YOLOv3Loss
iou_aware: true
iou_aware_factor: 0.4
YOLOv3Loss:
ignore_thresh: 0.7
downsample: [32, 16, 8]
label_smooth: false
scale_x_y: 1.05
iou_loss: IouLoss
iou_aware_loss: IouAwareLoss
IouLoss:
loss_weight: 2.5
loss_square: true
IouAwareLoss:
loss_weight: 1.0
BBoxPostProcess:
decode:
name: YOLOBox
conf_thresh: 0.01
downsample_ratio: 32
clip_bbox: true
scale_x_y: 1.05
nms:
name: MatrixNMS
keep_top_k: 100
score_threshold: 0.01
post_threshold: 0.01
nms_top_k: -1
background_label: -1
architecture: YOLOv3
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r101vd_dcn_365e_coco.pdparams
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
metric: COCO
num_classes: 5
TrainDataset:
!COCODataSet
image_dir: images
anno_path: train.json
dataset_dir: dataset/slice_lvjian1_data/train
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
TestDataset:
!ImageFolder
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
epoch: 20
LearningRate:
base_lr: 0.0002
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones:
- 80
- !LinearWarmup
start_factor: 0.
steps: 1000
snapshot_epoch: 3
worker_num: 2
TrainReader:
inputs_def:
num_max_boxes: 100
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomExpand: {fill_value: [123.675, 116.28, 103.53]}
- RandomCrop: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeBox: {}
- PadBox: {num_max_boxes: 100}
- BboxXYXY2XYWH: {}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[8, 7], [24, 12], [14, 25], [37, 35], [30, 140], [89, 52], [93, 189], [226, 99], [264, 352]], downsample_ratios: [32, 16, 8]}
batch_size: 2
shuffle: true
drop_last: true
use_shared_memory: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 8
TestReader:
inputs_def:
image_shape: [3, 1024, 1024]
sample_transforms:
- Decode: {}
- Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
OptimizerBuilder:
clip_grad_by_norm: 35.
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
YOLOv3:
backbone: ResNet
neck: PPYOLOPAN
yolo_head: YOLOv3Head
post_process: BBoxPostProcess
ResNet:
depth: 101
variant: d
return_idx: [1, 2, 3]
dcn_v2_stages: [3]
freeze_at: -1
freeze_norm: false
norm_decay: 0.
PPYOLOPAN:
drop_block: true
block_size: 3
keep_prob: 0.9
spp: true
YOLOv3Head:
anchors: [[8, 7], [24, 12], [14, 25],
[37, 35], [30, 140], [89, 52],
[93, 189], [226, 99], [264, 352]]
anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
loss: YOLOv3Loss
iou_aware: true
iou_aware_factor: 0.5
YOLOv3Loss:
ignore_thresh: 0.7
downsample: [32, 16, 8]
label_smooth: false
scale_x_y: 1.05
iou_loss: IouLoss
iou_aware_loss: IouAwareLoss
IouLoss:
loss_weight: 2.5
loss_square: true
IouAwareLoss:
loss_weight: 1.0
BBoxPostProcess:
decode:
name: YOLOBox
conf_thresh: 0.01
downsample_ratio: 32
clip_bbox: true
scale_x_y: 1.05
nms:
name: MatrixNMS
keep_top_k: 100
score_threshold: 0.01
post_threshold: 0.01
nms_top_k: -1
background_label: -1
architecture: YOLOv3
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r101vd_dcn_365e_coco.pdparams
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
metric: COCO
num_classes: 5
TrainDataset:
!COCODataSet
image_dir: images
anno_path: train.json
dataset_dir: dataset/slice_lvjian1_data/train
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
TestDataset:
!ImageFolder
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
epoch: 20
LearningRate:
base_lr: 0.0002
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones:
- 80
- !LinearWarmup
start_factor: 0.
steps: 1000
snapshot_epoch: 3
worker_num: 2
TrainReader:
inputs_def:
num_max_boxes: 100
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomExpand: {fill_value: [123.675, 116.28, 103.53]}
- RandomCrop: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeBox: {}
- PadBox: {num_max_boxes: 100}
- BboxXYXY2XYWH: {}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[8, 7], [24, 12], [14, 25], [37, 35], [30, 140], [89, 52], [93, 189], [226, 99], [264, 352]], downsample_ratios: [32, 16, 8]}
batch_size: 2
shuffle: true
drop_last: true
use_shared_memory: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 8
TestReader:
inputs_def:
image_shape: [3, 640, 640]
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
OptimizerBuilder:
clip_grad_by_norm: 35.
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
YOLOv3:
backbone: ResNet
neck: PPYOLOPAN
yolo_head: YOLOv3Head
post_process: BBoxPostProcess
ResNet:
depth: 101
variant: d
return_idx: [1, 2, 3]
dcn_v2_stages: [3]
freeze_at: -1
freeze_norm: false
norm_decay: 0.
PPYOLOPAN:
drop_block: true
block_size: 3
keep_prob: 0.9
spp: true
YOLOv3Head:
anchors: [[8, 7], [24, 12], [14, 25],
[37, 35], [30, 140], [89, 52],
[93, 189], [226, 99], [264, 352]]
anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
loss: YOLOv3Loss
iou_aware: true
iou_aware_factor: 0.5
YOLOv3Loss:
ignore_thresh: 0.7
downsample: [32, 16, 8]
label_smooth: false
scale_x_y: 1.05
iou_loss: IouLoss
iou_aware_loss: IouAwareLoss
IouLoss:
loss_weight: 2.5
loss_square: true
IouAwareLoss:
loss_weight: 1.0
BBoxPostProcess:
decode:
name: YOLOBox
conf_thresh: 0.01
downsample_ratio: 32
clip_bbox: true
scale_x_y: 1.05
nms:
name: MatrixNMS
keep_top_k: 100
score_threshold: 0.01
post_threshold: 0.01
nms_top_k: -1
background_label: -1
architecture: YOLOv3
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r101vd_dcn_365e_coco.pdparams
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
metric: COCO
num_classes: 22
TrainDataset:
!COCODataSet
image_dir: train_images
anno_path: train.json
dataset_dir: dataset/renche
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: train_images
anno_path: test.json
dataset_dir: dataset/renche
TestDataset:
!ImageFolder
anno_path: dataset/renche/test.json
epoch: 100
LearningRate:
base_lr: 0.0002
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones:
- 80
- !LinearWarmup
start_factor: 0.
steps: 1000
snapshot_epoch: 3
worker_num: 8
TrainReader:
inputs_def:
num_max_boxes: 100
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomExpand: {fill_value: [123.675, 116.28, 103.53]}
- RandomCrop: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeBox: {}
- PadBox: {num_max_boxes: 100}
- BboxXYXY2XYWH: {}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]}
batch_size: 2
shuffle: true
drop_last: true
use_shared_memory: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 8
TestReader:
inputs_def:
image_shape: [3, 1024, 1024]
sample_transforms:
- Decode: {}
- Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
OptimizerBuilder:
clip_grad_by_norm: 35.
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
YOLOv3:
backbone: ResNet
neck: PPYOLOPAN
yolo_head: YOLOv3Head
post_process: BBoxPostProcess
ResNet:
depth: 101
variant: d
return_idx: [1, 2, 3]
dcn_v2_stages: [3]
freeze_at: -1
freeze_norm: false
norm_decay: 0.
PPYOLOPAN:
drop_block: true
block_size: 3
keep_prob: 0.9
spp: true
YOLOv3Head:
anchors: [[10, 13], [16, 30], [33, 23],
[30, 61], [62, 45], [59, 119],
[116, 90], [156, 198], [373, 326]]
anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
loss: YOLOv3Loss
iou_aware: true
iou_aware_factor: 0.5
YOLOv3Loss:
ignore_thresh: 0.7
downsample: [32, 16, 8]
label_smooth: false
scale_x_y: 1.05
iou_loss: IouLoss
iou_aware_loss: IouAwareLoss
IouLoss:
loss_weight: 2.5
loss_square: true
IouAwareLoss:
loss_weight: 1.0
BBoxPostProcess:
decode:
name: YOLOBox
conf_thresh: 0.01
downsample_ratio: 32
clip_bbox: true
scale_x_y: 1.05
nms:
name: MatrixNMS
keep_top_k: 100
score_threshold: 0.01
post_threshold: 0.01
nms_top_k: -1
background_label: -1
architecture: YOLOv3
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r101vd_dcn_365e_coco.pdparams
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
metric: COCO
num_classes: 22
TrainDataset:
!COCODataSet
image_dir: train_images
anno_path: train.json
dataset_dir: dataset/renche
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: train_images
anno_path: test.json
dataset_dir: dataset/renche
TestDataset:
!ImageFolder
anno_path: dataset/renche/test.json
epoch: 100
LearningRate:
base_lr: 0.0002
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones:
- 80
- !LinearWarmup
start_factor: 0.
steps: 1000
snapshot_epoch: 3
worker_num: 8
TrainReader:
inputs_def:
num_max_boxes: 100
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomExpand: {fill_value: [123.675, 116.28, 103.53]}
- RandomCrop: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeBox: {}
- PadBox: {num_max_boxes: 100}
- BboxXYXY2XYWH: {}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]}
batch_size: 2
shuffle: true
drop_last: true
use_shared_memory: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 8
TestReader:
inputs_def:
image_shape: [3, 640, 640]
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
OptimizerBuilder:
clip_grad_by_norm: 35.
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
YOLOv3:
backbone: ResNet
neck: PPYOLOPAN
yolo_head: YOLOv3Head
post_process: BBoxPostProcess
ResNet:
depth: 101
variant: d
return_idx: [1, 2, 3]
dcn_v2_stages: [3]
freeze_at: -1
freeze_norm: false
norm_decay: 0.
PPYOLOPAN:
drop_block: true
block_size: 3
keep_prob: 0.9
spp: true
YOLOv3Head:
anchors: [[10, 13], [16, 30], [33, 23],
[30, 61], [62, 45], [59, 119],
[116, 90], [156, 198], [373, 326]]
anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
loss: YOLOv3Loss
iou_aware: true
iou_aware_factor: 0.5
YOLOv3Loss:
ignore_thresh: 0.7
downsample: [32, 16, 8]
label_smooth: false
scale_x_y: 1.05
iou_loss: IouLoss
iou_aware_loss: IouAwareLoss
IouLoss:
loss_weight: 2.5
loss_square: true
IouAwareLoss:
loss_weight: 1.0
BBoxPostProcess:
decode:
name: YOLOBox
conf_thresh: 0.01
downsample_ratio: 32
clip_bbox: true
scale_x_y: 1.05
nms:
name: MatrixNMS
keep_top_k: 100
score_threshold: 0.01
post_threshold: 0.01
nms_top_k: -1
background_label: -1
architecture: YOLOv3
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
metric: COCO
num_classes: 45
TrainDataset:
!COCODataSet
image_dir: images
anno_path: annotations/train.json
dataset_dir: dataset/battery_mini
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
TestDataset:
!ImageFolder
anno_path: annotations/test.json # also support txt (like VOC's label_list.txt)
dataset_dir: dataset/battery_mini # if set, anno_path will be 'dataset_dir/anno_path'
epoch: 40
LearningRate:
base_lr: 0.0001
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones:
- 80
- !LinearWarmup
start_factor: 0.
steps: 1000
snapshot_epoch: 5
worker_num: 2
TrainReader:
inputs_def:
num_max_boxes: 100
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeBox: {}
- PadBox: {num_max_boxes: 100}
- BboxXYXY2XYWH: {}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]}
batch_size: 8
shuffle: true
drop_last: true
use_shared_memory: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 8
TestReader:
inputs_def:
image_shape: [3, 640, 640]
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
OptimizerBuilder:
clip_grad_by_norm: 35.
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
YOLOv3:
backbone: ResNet
neck: PPYOLOPAN
yolo_head: YOLOv3Head
post_process: BBoxPostProcess
ResNet:
depth: 50
variant: d
return_idx: [1, 2, 3]
dcn_v2_stages: [3]
freeze_at: -1
freeze_norm: false
norm_decay: 0.
PPYOLOPAN:
drop_block: true
block_size: 3
keep_prob: 0.9
spp: true
YOLOv3Head:
anchors: [[10, 13], [16, 30], [33, 23],
[30, 61], [62, 45], [59, 119],
[116, 90], [156, 198], [373, 326]]
anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
loss: YOLOv3Loss
iou_aware: true
iou_aware_factor: 0.4
YOLOv3Loss:
ignore_thresh: 0.7
downsample: [32, 16, 8]
label_smooth: false
scale_x_y: 1.05
iou_loss: IouLoss
iou_aware_loss: IouAwareLoss
IouLoss:
loss_weight: 2.5
loss_square: true
IouAwareLoss:
loss_weight: 1.0
BBoxPostProcess:
decode:
name: YOLOBox
conf_thresh: 0.01
downsample_ratio: 32
clip_bbox: true
scale_x_y: 1.05
nms:
name: MatrixNMS
keep_top_k: 100
score_threshold: 0.01
post_threshold: 0.01
nms_top_k: -1
background_label: -1
architecture: YOLOv3
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
metric: COCO
num_classes: 45
TrainDataset:
!COCODataSet
image_dir: images
anno_path: annotations/train.json
dataset_dir: dataset/battery_mini
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
TestDataset:
!ImageFolder
anno_path: annotations/test.json # also support txt (like VOC's label_list.txt)
dataset_dir: dataset/battery_mini # if set, anno_path will be 'dataset_dir/anno_path'
epoch: 40
LearningRate:
base_lr: 0.0001
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones:
- 80
- !LinearWarmup
start_factor: 0.
steps: 1000
snapshot_epoch: 5
worker_num: 2
TrainReader:
inputs_def:
num_max_boxes: 100
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeBox: {}
- PadBox: {num_max_boxes: 100}
- BboxXYXY2XYWH: {}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]}
batch_size: 4
shuffle: true
drop_last: true
use_shared_memory: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 8
TestReader:
inputs_def:
image_shape: [3, 1024, 1024]
sample_transforms:
- Decode: {}
- Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
OptimizerBuilder:
clip_grad_by_norm: 35.
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
YOLOv3:
backbone: ResNet
neck: PPYOLOPAN
yolo_head: YOLOv3Head
post_process: BBoxPostProcess
ResNet:
depth: 50
variant: d
return_idx: [1, 2, 3]
dcn_v2_stages: [3]
freeze_at: -1
freeze_norm: false
norm_decay: 0.
PPYOLOPAN:
drop_block: true
block_size: 3
keep_prob: 0.9
spp: true
YOLOv3Head:
anchors: [[10, 13], [16, 30], [33, 23],
[30, 61], [62, 45], [59, 119],
[116, 90], [156, 198], [373, 326]]
anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
loss: YOLOv3Loss
iou_aware: true
iou_aware_factor: 0.4
YOLOv3Loss:
ignore_thresh: 0.7
downsample: [32, 16, 8]
label_smooth: false
scale_x_y: 1.05
iou_loss: IouLoss
iou_aware_loss: IouAwareLoss
IouLoss:
loss_weight: 2.5
loss_square: true
IouAwareLoss:
loss_weight: 1.0
BBoxPostProcess:
decode:
name: YOLOBox
conf_thresh: 0.01
downsample_ratio: 32
clip_bbox: true
scale_x_y: 1.05
nms:
name: MatrixNMS
keep_top_k: 100
score_threshold: 0.01
post_threshold: 0.01
nms_top_k: -1
background_label: -1
architecture: YOLOv3
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
metric: COCO
num_classes: 5
TrainDataset:
!COCODataSet
image_dir: images
anno_path: train.json
dataset_dir: dataset/slice_lvjian1_data/train
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
TestDataset:
!ImageFolder
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
epoch: 20
LearningRate:
base_lr: 0.0002
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones:
- 80
- !LinearWarmup
start_factor: 0.
steps: 1000
snapshot_epoch: 3
worker_num: 2
TrainReader:
inputs_def:
num_max_boxes: 100
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomExpand: {fill_value: [123.675, 116.28, 103.53]}
- RandomCrop: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeBox: {}
- PadBox: {num_max_boxes: 100}
- BboxXYXY2XYWH: {}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[8, 7], [24, 12], [14, 25], [37, 35], [30, 140], [89, 52], [93, 189], [226, 99], [264, 352]], downsample_ratios: [32, 16, 8]}
batch_size: 2
shuffle: true
drop_last: true
use_shared_memory: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 8
TestReader:
inputs_def:
image_shape: [3, 1024, 1024]
sample_transforms:
- Decode: {}
- Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
OptimizerBuilder:
clip_grad_by_norm: 35.
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
YOLOv3:
backbone: ResNet
neck: PPYOLOPAN
yolo_head: YOLOv3Head
post_process: BBoxPostProcess
ResNet:
depth: 50
variant: d
return_idx: [1, 2, 3]
dcn_v2_stages: [3]
freeze_at: -1
freeze_norm: false
norm_decay: 0.
PPYOLOPAN:
drop_block: true
block_size: 3
keep_prob: 0.9
spp: true
YOLOv3Head:
anchors: [[8, 7], [24, 12], [14, 25],
[37, 35], [30, 140], [89, 52],
[93, 189], [226, 99], [264, 352]]
anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
loss: YOLOv3Loss
iou_aware: true
iou_aware_factor: 0.5
YOLOv3Loss:
ignore_thresh: 0.7
downsample: [32, 16, 8]
label_smooth: false
scale_x_y: 1.05
iou_loss: IouLoss
iou_aware_loss: IouAwareLoss
IouLoss:
loss_weight: 2.5
loss_square: true
IouAwareLoss:
loss_weight: 1.0
BBoxPostProcess:
decode:
name: YOLOBox
conf_thresh: 0.01
downsample_ratio: 32
clip_bbox: true
scale_x_y: 1.05
nms:
name: MatrixNMS
keep_top_k: 100
score_threshold: 0.01
post_threshold: 0.01
nms_top_k: -1
background_label: -1
architecture: YOLOv3
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
metric: COCO
num_classes: 5
TrainDataset:
!COCODataSet
image_dir: images
anno_path: train.json
dataset_dir: dataset/slice_lvjian1_data/train
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
TestDataset:
!ImageFolder
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
epoch: 20
LearningRate:
base_lr: 0.0002
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones:
- 80
- !LinearWarmup
start_factor: 0.
steps: 1000
snapshot_epoch: 3
worker_num: 2
TrainReader:
inputs_def:
num_max_boxes: 100
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomExpand: {fill_value: [123.675, 116.28, 103.53]}
- RandomCrop: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeBox: {}
- PadBox: {num_max_boxes: 100}
- BboxXYXY2XYWH: {}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[8, 7], [24, 12], [14, 25], [37, 35], [30, 140], [89, 52], [93, 189], [226, 99], [264, 352]], downsample_ratios: [32, 16, 8]}
batch_size: 2
shuffle: true
drop_last: true
use_shared_memory: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 8
TestReader:
inputs_def:
image_shape: [3, 640, 640]
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
OptimizerBuilder:
clip_grad_by_norm: 35.
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
YOLOv3:
backbone: ResNet
neck: PPYOLOPAN
yolo_head: YOLOv3Head
post_process: BBoxPostProcess
ResNet:
depth: 50
variant: d
return_idx: [1, 2, 3]
dcn_v2_stages: [3]
freeze_at: -1
freeze_norm: false
norm_decay: 0.
PPYOLOPAN:
drop_block: true
block_size: 3
keep_prob: 0.9
spp: true
YOLOv3Head:
anchors: [[8, 7], [24, 12], [14, 25],
[37, 35], [30, 140], [89, 52],
[93, 189], [226, 99], [264, 352]]
anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
loss: YOLOv3Loss
iou_aware: true
iou_aware_factor: 0.5
YOLOv3Loss:
ignore_thresh: 0.7
downsample: [32, 16, 8]
label_smooth: false
scale_x_y: 1.05
iou_loss: IouLoss
iou_aware_loss: IouAwareLoss
IouLoss:
loss_weight: 2.5
loss_square: true
IouAwareLoss:
loss_weight: 1.0
BBoxPostProcess:
decode:
name: YOLOBox
conf_thresh: 0.01
downsample_ratio: 32
clip_bbox: true
scale_x_y: 1.05
nms:
name: MatrixNMS
keep_top_k: 100
score_threshold: 0.01
post_threshold: 0.01
nms_top_k: -1
background_label: -1
architecture: YOLOv3
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
metric: COCO
num_classes: 22
TrainDataset:
!COCODataSet
image_dir: train_images
anno_path: train.json
dataset_dir: dataset/renche
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: train_images
anno_path: test.json
dataset_dir: dataset/renche
TestDataset:
!ImageFolder
anno_path: dataset/renche/test.json
epoch: 100
LearningRate:
base_lr: 0.0002
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones:
- 80
- !LinearWarmup
start_factor: 0.
steps: 1000
snapshot_epoch: 3
worker_num: 8
TrainReader:
inputs_def:
num_max_boxes: 100
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomExpand: {fill_value: [123.675, 116.28, 103.53]}
- RandomCrop: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeBox: {}
- PadBox: {num_max_boxes: 100}
- BboxXYXY2XYWH: {}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]}
batch_size: 2
shuffle: true
drop_last: true
use_shared_memory: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 8
TestReader:
inputs_def:
image_shape: [3, 1024, 1024]
sample_transforms:
- Decode: {}
- Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
OptimizerBuilder:
clip_grad_by_norm: 35.
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
YOLOv3:
backbone: ResNet
neck: PPYOLOPAN
yolo_head: YOLOv3Head
post_process: BBoxPostProcess
ResNet:
depth: 50
variant: d
return_idx: [1, 2, 3]
dcn_v2_stages: [3]
freeze_at: -1
freeze_norm: false
norm_decay: 0.
PPYOLOPAN:
drop_block: true
block_size: 3
keep_prob: 0.9
spp: true
YOLOv3Head:
anchors: [[10, 13], [16, 30], [33, 23],
[30, 61], [62, 45], [59, 119],
[116, 90], [156, 198], [373, 326]]
anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
loss: YOLOv3Loss
iou_aware: true
iou_aware_factor: 0.5
YOLOv3Loss:
ignore_thresh: 0.7
downsample: [32, 16, 8]
label_smooth: false
scale_x_y: 1.05
iou_loss: IouLoss
iou_aware_loss: IouAwareLoss
IouLoss:
loss_weight: 2.5
loss_square: true
IouAwareLoss:
loss_weight: 1.0
BBoxPostProcess:
decode:
name: YOLOBox
conf_thresh: 0.01
downsample_ratio: 32
clip_bbox: true
scale_x_y: 1.05
nms:
name: MatrixNMS
keep_top_k: 100
score_threshold: 0.01
post_threshold: 0.01
nms_top_k: -1
background_label: -1
architecture: YOLOv3
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
metric: COCO
num_classes: 22
TrainDataset:
!COCODataSet
image_dir: train_images
anno_path: train.json
dataset_dir: dataset/coco/renche
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: train_images
anno_path: test.json
dataset_dir: dataset/coco/renche
TestDataset:
!ImageFolder
anno_path: dataset/coco/renche/test.json
epoch: 100
LearningRate:
base_lr: 0.0002
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones:
- 80
- !LinearWarmup
start_factor: 0.
steps: 1000
snapshot_epoch: 3
worker_num: 8
TrainReader:
inputs_def:
num_max_boxes: 100
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomExpand: {fill_value: [123.675, 116.28, 103.53]}
- RandomCrop: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeBox: {}
- PadBox: {num_max_boxes: 100}
- BboxXYXY2XYWH: {}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]}
batch_size: 2
shuffle: true
drop_last: true
use_shared_memory: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 8
TestReader:
inputs_def:
image_shape: [3, 640, 640]
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
OptimizerBuilder:
clip_grad_by_norm: 35.
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
YOLOv3:
backbone: ResNet
neck: PPYOLOPAN
yolo_head: YOLOv3Head
post_process: BBoxPostProcess
ResNet:
depth: 50
variant: d
return_idx: [1, 2, 3]
dcn_v2_stages: [3]
freeze_at: -1
freeze_norm: false
norm_decay: 0.
PPYOLOPAN:
drop_block: true
block_size: 3
keep_prob: 0.9
spp: true
YOLOv3Head:
anchors: [[10, 13], [16, 30], [33, 23],
[30, 61], [62, 45], [59, 119],
[116, 90], [156, 198], [373, 326]]
anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
loss: YOLOv3Loss
iou_aware: true
iou_aware_factor: 0.5
YOLOv3Loss:
ignore_thresh: 0.7
downsample: [32, 16, 8]
label_smooth: false
scale_x_y: 1.05
iou_loss: IouLoss
iou_aware_loss: IouAwareLoss
IouLoss:
loss_weight: 2.5
loss_square: true
IouAwareLoss:
loss_weight: 1.0
BBoxPostProcess:
decode:
name: YOLOBox
conf_thresh: 0.01
downsample_ratio: 32
clip_bbox: true
scale_x_y: 1.05
nms:
name: MatrixNMS
keep_top_k: 100
score_threshold: 0.01
post_threshold: 0.01
nms_top_k: -1
background_label: -1
weights: output/ppyoloe_crn_l_300e_battery/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
depth_mult: 1.0
width_mult: 1.0
worker_num: 4
eval_height: &eval_height 640
eval_width: &eval_width 640
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 45
TrainDataset:
!COCODataSet
image_dir: images
anno_path: annotations/train.json
dataset_dir: dataset/battery_mini
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
TestDataset:
!ImageFolder
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
epoch: 30
LearningRate:
base_lr: 0.0005
schedulers:
- !CosineDecay
max_epochs: 36
- !LinearWarmup
start_factor: 0.
epochs: 3
TrainReader:
sample_transforms:
- Decode: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [512, 544, 576, 608, 640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- PadGT: {}
batch_size: 4
shuffle: true
drop_last: true
use_shared_memory: true
collate_batch: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
TestReader:
inputs_def:
image_shape: [3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 5
print_flops: false
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True  # Whether NMS is included in the network when exporting the model.
  benchmark: False  # Used for benchmarking model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
architecture: YOLOv3
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
YOLOv3:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
CSPResNet:
layers: [3, 6, 6, 3]
channels: [64, 128, 256, 512, 1024]
return_idx: [1, 2, 3]
use_large_stem: True
CustomCSPPAN:
out_channels: [768, 384, 192]
stage_num: 1
block_num: 3
act: 'swish'
spp: true
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: 100
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.01
nms_threshold: 0.6
weights: output/ppyoloe_crn_l_300e_battery_1024/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
depth_mult: 1.0
width_mult: 1.0
worker_num: 4
eval_height: &eval_height 1024
eval_width: &eval_width 1024
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 45
TrainDataset:
!COCODataSet
image_dir: images
anno_path: annotations/train.json
dataset_dir: dataset/battery_mini
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
TestDataset:
!ImageFolder
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
epoch: 30
LearningRate:
base_lr: 0.0005
schedulers:
- !CosineDecay
max_epochs: 36
- !LinearWarmup
start_factor: 0.
epochs: 3
TrainReader:
sample_transforms:
- Decode: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- PadGT: {}
batch_size: 4
shuffle: true
drop_last: true
use_shared_memory: true
collate_batch: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
TestReader:
inputs_def:
image_shape: [3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 5
print_flops: false
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True  # Whether NMS is included in the network when exporting the model.
  benchmark: False  # Used for benchmarking model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
architecture: YOLOv3
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
YOLOv3:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
CSPResNet:
layers: [3, 6, 6, 3]
channels: [64, 128, 256, 512, 1024]
return_idx: [1, 2, 3]
use_large_stem: True
CustomCSPPAN:
out_channels: [768, 384, 192]
stage_num: 1
block_num: 3
act: 'swish'
spp: true
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: 100
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.01
nms_threshold: 0.6
weights: output/ppyoloe_crn_l_300e_lvjian1/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
depth_mult: 1.0
width_mult: 1.0
worker_num: 4
eval_height: &eval_height 640
eval_width: &eval_width 640
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 5
TrainDataset:
!COCODataSet
image_dir: images
anno_path: train.json
dataset_dir: dataset/slice_lvjian1_data/train
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
TestDataset:
!ImageFolder
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
epoch: 30
LearningRate:
base_lr: 0.001
schedulers:
- !CosineDecay
max_epochs: 36
- !LinearWarmup
start_factor: 0.
epochs: 3
TrainReader:
sample_transforms:
- Decode: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- PadGT: {}
batch_size: 8
shuffle: true
drop_last: true
use_shared_memory: true
collate_batch: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 2
TestReader:
inputs_def:
image_shape: [3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 1
print_flops: false
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True  # Whether NMS is included in the network when exporting the model.
  benchmark: False  # Used for benchmarking model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
architecture: YOLOv3
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
YOLOv3:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
CSPResNet:
layers: [3, 6, 6, 3]
channels: [64, 128, 256, 512, 1024]
return_idx: [1, 2, 3]
use_large_stem: True
CustomCSPPAN:
out_channels: [768, 384, 192]
stage_num: 1
block_num: 3
act: 'swish'
spp: true
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: 100
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.01
nms_threshold: 0.6
weights: output/ppyoloe_crn_l_300e_lvjian1_1024/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
depth_mult: 1.0
width_mult: 1.0
worker_num: 4
eval_height: &eval_height 1024
eval_width: &eval_width 1024
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 5
TrainDataset:
!COCODataSet
image_dir: images
anno_path: train.json
dataset_dir: dataset/slice_lvjian1_data/train
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
TestDataset:
!ImageFolder
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
epoch: 30
LearningRate:
base_lr: 0.001
schedulers:
- !CosineDecay
max_epochs: 36
- !LinearWarmup
start_factor: 0.
epochs: 3
TrainReader:
sample_transforms:
- Decode: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- PadGT: {}
batch_size: 8
shuffle: true
drop_last: true
use_shared_memory: true
collate_batch: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 2
TestReader:
inputs_def:
image_shape: [3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 1
print_flops: false
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True  # Whether NMS is included in the network when exporting the model.
  benchmark: False  # Used for benchmarking model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
architecture: YOLOv3
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
YOLOv3:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
CSPResNet:
layers: [3, 6, 6, 3]
channels: [64, 128, 256, 512, 1024]
return_idx: [1, 2, 3]
use_large_stem: True
CustomCSPPAN:
out_channels: [768, 384, 192]
stage_num: 1
block_num: 3
act: 'swish'
spp: true
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: 100
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.01
nms_threshold: 0.6
weights: output/ppyoloe_crn_l_300e_renche/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
depth_mult: 1.0
width_mult: 1.0
worker_num: 4
eval_height: &eval_height 640
eval_width: &eval_width 640
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 22
TrainDataset:
!COCODataSet
image_dir: train_images
anno_path: train.json
dataset_dir: dataset/renche
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: train_images
anno_path: test.json
dataset_dir: dataset/renche
TestDataset:
!ImageFolder
anno_path: test.json
dataset_dir: dataset/renche
epoch: 30
LearningRate:
base_lr: 0.0005
schedulers:
- !CosineDecay
max_epochs: 36
- !LinearWarmup
start_factor: 0.
epochs: 3
TrainReader:
sample_transforms:
- Decode: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [512, 544, 576, 608, 640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- PadGT: {}
batch_size: 4
shuffle: true
drop_last: true
use_shared_memory: true
collate_batch: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 2
TestReader:
inputs_def:
image_shape: [3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 5
print_flops: false
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True  # Whether NMS is included in the network when exporting the model.
  benchmark: False  # Used for benchmarking model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
architecture: YOLOv3
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
YOLOv3:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
CSPResNet:
layers: [3, 6, 6, 3]
channels: [64, 128, 256, 512, 1024]
return_idx: [1, 2, 3]
use_large_stem: True
CustomCSPPAN:
out_channels: [768, 384, 192]
stage_num: 1
block_num: 3
act: 'swish'
spp: true
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: 100
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.01
nms_threshold: 0.6
weights: output/ppyoloe_crn_l_300e_renche_1024/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
depth_mult: 1.0
width_mult: 1.0
worker_num: 4
eval_height: &eval_height 1024
eval_width: &eval_width 1024
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 22
TrainDataset:
!COCODataSet
image_dir: train_images
anno_path: train.json
dataset_dir: dataset/renche
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: train_images
anno_path: test.json
dataset_dir: dataset/renche
TestDataset:
!ImageFolder
anno_path: test.json
dataset_dir: dataset/renche
epoch: 30
LearningRate:
base_lr: 0.0005
schedulers:
- !CosineDecay
max_epochs: 36
- !LinearWarmup
start_factor: 0.
epochs: 3
TrainReader:
sample_transforms:
- Decode: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- PadGT: {}
batch_size: 4
shuffle: true
drop_last: true
use_shared_memory: true
collate_batch: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 2
TestReader:
inputs_def:
image_shape: [3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 5
print_flops: false
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True  # Whether NMS is included in the network when exporting the model.
  benchmark: False  # Used for benchmarking model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
architecture: YOLOv3
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
YOLOv3:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
CSPResNet:
layers: [3, 6, 6, 3]
channels: [64, 128, 256, 512, 1024]
return_idx: [1, 2, 3]
use_large_stem: True
CustomCSPPAN:
out_channels: [768, 384, 192]
stage_num: 1
block_num: 3
act: 'swish'
spp: true
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: 100
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.01
nms_threshold: 0.6
weights: output/ppyoloe_crn_m_300e_battery/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams
depth_mult: 0.67
width_mult: 0.75
worker_num: 4
eval_height: &eval_height 640
eval_width: &eval_width 640
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 45
TrainDataset:
!COCODataSet
image_dir: images
anno_path: annotations/train.json
dataset_dir: dataset/battery_mini
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
TestDataset:
!ImageFolder
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
epoch: 30
LearningRate:
base_lr: 0.0005
schedulers:
- !CosineDecay
max_epochs: 36
- !LinearWarmup
start_factor: 0.
epochs: 3
TrainReader:
sample_transforms:
- Decode: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- PadGT: {}
batch_size: 4
shuffle: true
drop_last: true
use_shared_memory: true
collate_batch: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
TestReader:
inputs_def:
image_shape: [3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 5
print_flops: false
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True  # Whether NMS is included in the network when exporting the model.
  benchmark: False  # Used for benchmarking model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
architecture: YOLOv3
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
YOLOv3:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
CSPResNet:
layers: [3, 6, 6, 3]
channels: [64, 128, 256, 512, 1024]
return_idx: [1, 2, 3]
use_large_stem: True
CustomCSPPAN:
out_channels: [768, 384, 192]
stage_num: 1
block_num: 3
act: 'swish'
spp: true
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: 100
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.01
nms_threshold: 0.6
weights: output/ppyoloe_crn_m_300e_battery_1024/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams
depth_mult: 0.67
width_mult: 0.75
worker_num: 4
eval_height: &eval_height 1024
eval_width: &eval_width 1024
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 45
TrainDataset:
!COCODataSet
image_dir: images
anno_path: annotations/train.json
dataset_dir: dataset/battery_mini
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
TestDataset:
!ImageFolder
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
epoch: 30
LearningRate:
base_lr: 0.0005
schedulers:
- !CosineDecay
max_epochs: 36
- !LinearWarmup
start_factor: 0.
epochs: 3
TrainReader:
sample_transforms:
- Decode: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- PadGT: {}
batch_size: 4
shuffle: true
drop_last: true
use_shared_memory: true
collate_batch: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
TestReader:
inputs_def:
image_shape: [3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 5
print_flops: false
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True  # Whether NMS is included in the network when exporting the model.
  benchmark: False  # Used for benchmarking model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
architecture: YOLOv3
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
YOLOv3:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
CSPResNet:
layers: [3, 6, 6, 3]
channels: [64, 128, 256, 512, 1024]
return_idx: [1, 2, 3]
use_large_stem: True
CustomCSPPAN:
out_channels: [768, 384, 192]
stage_num: 1
block_num: 3
act: 'swish'
spp: true
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: 100
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.01
nms_threshold: 0.6
weights: output/ppyoloe_crn_m_300e_lvjian1/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams
depth_mult: 0.67
width_mult: 0.75
worker_num: 4
eval_height: &eval_height 640
eval_width: &eval_width 640
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 5
TrainDataset:
!COCODataSet
image_dir: images
anno_path: train.json
dataset_dir: dataset/slice_lvjian1_data/train
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
TestDataset:
!ImageFolder
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
epoch: 30
LearningRate:
base_lr: 0.002
schedulers:
- !CosineDecay
max_epochs: 36
- !LinearWarmup
start_factor: 0.
epochs: 3
TrainReader:
sample_transforms:
- Decode: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- PadGT: {}
batch_size: 16
shuffle: true
drop_last: true
use_shared_memory: true
collate_batch: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 2
TestReader:
inputs_def:
image_shape: [3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 1
print_flops: false
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True  # Whether NMS is included in the network when exporting the model.
  benchmark: False  # Used for benchmarking model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
architecture: YOLOv3
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
YOLOv3:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
CSPResNet:
layers: [3, 6, 6, 3]
channels: [64, 128, 256, 512, 1024]
return_idx: [1, 2, 3]
use_large_stem: True
CustomCSPPAN:
out_channels: [768, 384, 192]
stage_num: 1
block_num: 3
act: 'swish'
spp: true
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: 100
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.01
nms_threshold: 0.6
weights: output/ppyoloe_crn_m_300e_lvjian1_1024/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams
depth_mult: 0.67
width_mult: 0.75
worker_num: 2
eval_height: &eval_height 1024
eval_width: &eval_width 1024
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 5
TrainDataset:
!COCODataSet
image_dir: images
anno_path: train.json
dataset_dir: dataset/slice_lvjian1_data/train
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
TestDataset:
!ImageFolder
anno_path: val.json
dataset_dir: dataset/slice_lvjian1_data/eval
epoch: 30
LearningRate:
base_lr: 0.0015
schedulers:
- !CosineDecay
max_epochs: 36
- !LinearWarmup
start_factor: 0.
epochs: 3
TrainReader:
sample_transforms:
- Decode: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- PadGT: {}
batch_size: 8
shuffle: true
drop_last: true
use_shared_memory: true
collate_batch: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 2
TestReader:
inputs_def:
image_shape: [3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 2
print_flops: false
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True  # Whether NMS is included in the network when exporting the model.
  benchmark: False  # Used for benchmarking model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
architecture: YOLOv3
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
YOLOv3:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
CSPResNet:
layers: [3, 6, 6, 3]
channels: [64, 128, 256, 512, 1024]
return_idx: [1, 2, 3]
use_large_stem: True
CustomCSPPAN:
out_channels: [768, 384, 192]
stage_num: 1
block_num: 3
act: 'swish'
spp: true
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: 100
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.01
nms_threshold: 0.6
weights: output/ppyoloe_crn_m_300e_renche/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams
depth_mult: 0.67
width_mult: 0.75
worker_num: 4
eval_height: &eval_height 640
eval_width: &eval_width 640
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 22
TrainDataset:
!COCODataSet
image_dir: train_images
anno_path: train.json
dataset_dir: dataset/renche
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: train_images
anno_path: test.json
dataset_dir: dataset/renche
TestDataset:
!ImageFolder
anno_path: test.json
dataset_dir: dataset/renche
epoch: 30
LearningRate:
base_lr: 0.0005
schedulers:
- !CosineDecay
max_epochs: 36
- !LinearWarmup
start_factor: 0.
epochs: 3
TrainReader:
sample_transforms:
- Decode: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- PadGT: {}
batch_size: 4
shuffle: true
drop_last: true
use_shared_memory: true
collate_batch: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 2
TestReader:
inputs_def:
image_shape: [3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 5
print_flops: false
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True  # Whether NMS is included in the network when exporting the model.
  benchmark: False  # Used for benchmarking model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
architecture: YOLOv3
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
YOLOv3:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
CSPResNet:
layers: [3, 6, 6, 3]
channels: [64, 128, 256, 512, 1024]
return_idx: [1, 2, 3]
use_large_stem: True
CustomCSPPAN:
out_channels: [768, 384, 192]
stage_num: 1
block_num: 3
act: 'swish'
spp: true
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: 100
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.01
nms_threshold: 0.6
weights: output/ppyoloe_crn_m_300e_renche_1024/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams
depth_mult: 0.67
width_mult: 0.75
worker_num: 4
eval_height: &eval_height 1024
eval_width: &eval_width 1024
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 22
TrainDataset:
!COCODataSet
image_dir: train_images
anno_path: train.json
dataset_dir: /paddle/dataset/renche
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: train_images
anno_path: test.json
dataset_dir: /paddle/dataset/renche
TestDataset:
!ImageFolder
anno_path: test.json
dataset_dir: /paddle/dataset/renche
epoch: 30
LearningRate:
base_lr: 0.0005
schedulers:
- !CosineDecay
max_epochs: 36
- !LinearWarmup
start_factor: 0.
epochs: 3
TrainReader:
sample_transforms:
- Decode: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- PadGT: {}
batch_size: 4
shuffle: true
drop_last: true
use_shared_memory: true
collate_batch: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 2
TestReader:
inputs_def:
image_shape: [3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 5
print_flops: false
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True  # Whether NMS is included in the network when exporting the model.
  benchmark: False  # Used for benchmarking model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
architecture: YOLOv3
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
YOLOv3:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
CSPResNet:
layers: [3, 6, 6, 3]
channels: [64, 128, 256, 512, 1024]
return_idx: [1, 2, 3]
use_large_stem: True
CustomCSPPAN:
out_channels: [768, 384, 192]
stage_num: 1
block_num: 3
act: 'swish'
spp: true
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: 100
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.01
nms_threshold: 0.6
weights: output/ppyoloe_crn_s_300e_battery/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams
depth_mult: 0.33
width_mult: 0.50
worker_num: 4
eval_height: &eval_height 640
eval_width: &eval_width 640
eval_size: &eval_size [*eval_height, *eval_width]
metric: COCO
num_classes: 45
TrainDataset:
!COCODataSet
image_dir: images
anno_path: annotations/train.json
dataset_dir: dataset/battery_mini
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
TestDataset:
!ImageFolder
anno_path: annotations/test.json
dataset_dir: dataset/battery_mini
epoch: 30
LearningRate:
base_lr: 0.0005
schedulers:
- !CosineDecay
max_epochs: 36
- !LinearWarmup
start_factor: 0.
epochs: 3
TrainReader:
sample_transforms:
- Decode: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [512, 544, 576, 608, 640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- PadGT: {}
batch_size: 4
shuffle: true
drop_last: true
use_shared_memory: true
collate_batch: true
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
TestReader:
inputs_def:
image_shape: [3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_size: 1
use_gpu: true
use_xpu: false
log_iter: 100
save_dir: output
snapshot_epoch: 5
print_flops: false
# Exporting the model
export:
  post_process: True  # Whether post-processing is included in the network when exporting the model.
  nms: True  # Whether NMS is included in the network when exporting the model.
  benchmark: False  # Used for benchmarking model performance; if set to `True`, post-processing and NMS will not be exported.
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
architecture: YOLOv3
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
YOLOv3:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
CSPResNet:
layers: [3, 6, 6, 3]
channels: [64, 128, 256, 512, 1024]
return_idx: [1, 2, 3]
use_large_stem: True
CustomCSPPAN:
out_channels: [768, 384, 192]
stage_num: 1
block_num: 3
act: 'swish'
spp: true
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: 100
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.01
nms_threshold: 0.6
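Each of the configuration blocks above is a complete PaddleDetection config and is normally consumed through the toolkit's command-line entry points (`tools/train.py`, `tools/export_model.py`). The sketch below is only illustrative: the config path is a hypothetical example, and the calls are assumed to mirror what those scripts do internally in PaddleDetection 2.x.

```python
# Minimal sketch (not part of this PR): drive training and export for one of
# the configs above via PaddleDetection's Python API.
from ppdet.core.workspace import load_config
from ppdet.engine import Trainer

# Hypothetical path; save the desired config block from above to a YAML file first.
cfg = load_config('configs/ppyoloe_crn_s_300e_battery.yml')

# Training with periodic evaluation (roughly `python tools/train.py -c <cfg> --eval`).
trainer = Trainer(cfg, mode='train')
trainer.load_weights(cfg.pretrain_weights)  # COCO-pretrained weights listed in the config
trainer.train(validate=True)

# Export an inference model (roughly `python tools/export_model.py -c <cfg>`).
# The `export` section of the config controls whether post-processing and NMS
# are kept inside the exported network.
exporter = Trainer(cfg, mode='test')
exporter.load_weights(cfg.weights)  # trained weights path listed in the config
exporter.export('output_inference')
```

In practice the command-line scripts are the supported path; the snippet only makes explicit how the `pretrain_weights`, `weights`, and `export` fields in the configs above are consumed.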