Commit 1942c445 authored by: chenguowei01

Merge branch 'develop' of https://github.com/PaddlePaddle/PaddleSeg into develop

@@ -6,10 +6,26 @@
## Introduction
PaddleSeg is a semantic segmentation library based on [PaddlePaddle](https://www.paddlepaddle.org.cn), covering three mainstream segmentation models: DeepLabv3+, U-Net and ICNet. Through a unified configuration, it helps users complete the whole image segmentation workflow, from training to deployment, more conveniently.
PaddleSeg is a semantic segmentation library based on [PaddlePaddle](https://www.paddlepaddle.org.cn), covering mainstream segmentation models such as DeepLabv3+, U-Net, ICNet, PSPNet and HRNet. Through a unified configuration, it helps users complete the whole image segmentation workflow, from training to deployment, more conveniently.
PaddleSeg features high performance, rich data augmentation, industrial-grade deployment and end-to-end application:
</br>
- [Features](#特点)
- [Installation](#安装)
- [Tutorials](#使用教程)
- [Quick Start](#快速入门)
- [Basic Features](#基础功能)
- [Inference and Deployment](#预测部署)
- [Advanced Features](#高级功能)
- [Online Demos](#在线体验)
- [FAQ](#FAQ)
- [Feedback and Communication](#交流与反馈)
- [Changelog](#更新日志)
- [Contributing](#贡献代码)
</br>
## Features
- **Rich data augmentation**
@@ -17,29 +33,42 @@ PaddleSeg features high performance, rich data augmentation, industrial-grade deployment and end-to-end
- **Modular design**
Supports four mainstream segmentation networks: U-Net, DeepLabv3+, ICNet and PSPNet. Combined with pretrained models and adjustable backbones, they meet different performance and accuracy requirements; different loss functions such as Dice Loss and BCE Loss can be selected to improve segmentation accuracy for small objects and imbalanced samples (see the command sketch after this list).
Supports five mainstream segmentation networks: U-Net, DeepLabv3+, ICNet, PSPNet and HRNet. Combined with pretrained models and adjustable backbones, they meet different performance and accuracy requirements; different loss functions such as Dice Loss and BCE Loss can be selected to improve segmentation accuracy for small objects and imbalanced samples (see the command sketch after this list).
- **High performance**
PaddleSeg supports training acceleration strategies such as multi-process I/O, multi-GPU parallel training and cross-GPU Batch Norm synchronization. Combined with the GPU memory optimization of the PaddlePaddle core framework, it can greatly reduce the memory overhead of segmentation models and finish training faster.
PaddleSeg supports training acceleration strategies such as multi-process I/O, multi-GPU parallel training and cross-GPU Batch Norm synchronization. Combined with the GPU memory optimization of the PaddlePaddle core framework, it can greatly reduce the memory overhead of segmentation models, letting developers train image segmentation models at lower cost and higher efficiency.
- **Industrial-grade deployment**
Based on [Paddle Serving](https://github.com/PaddlePaddle/Serving) and the PaddlePaddle high-performance inference engine, combined with Baidu's open AI capabilities, you can easily build portrait segmentation and lane line segmentation services.
Provides full industrial-grade deployment capabilities for both the **server side** and the **mobile side**. Relying on the PaddlePaddle high-performance inference engine and high-performance image processing, developers can easily deploy and integrate high-performance segmentation models. With [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite), lightweight, high-performance portrait segmentation models can be deployed on mobile and embedded devices.
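The loss functions mentioned above are selected through the `SOLVER.LOSS` configuration key. As a minimal, hedged sketch (the config path is a placeholder and the exact override syntax may differ slightly between versions), a Dice + BCE combination can be enabled directly from the command line:
```shell
# Placeholder config; SOLVER.LOSS accepts a list of loss names such as
# softmax_loss, dice_loss and bce_loss (dice/bce target two-class tasks).
python pdseg/train.py --use_gpu \
       --cfg configs/your_config.yaml \
       SOLVER.LOSS "['dice_loss','bce_loss']"
```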
</br>
## Installation
## Environment Requirements
### 1. Install PaddlePaddle
Version requirements:
* PaddlePaddle >= 1.6.1
* Python 2.7 or 3.5+
Install the Python package dependencies with the following command; make sure it has been executed at least once on this branch:
```shell
$ pip install -r requirements.txt
Since image segmentation models are computationally expensive, we recommend using PaddleSeg with the GPU version of PaddlePaddle.
```
pip install -U paddlepaddle-gpu
```
Please also make sure that, following the NVIDIA official documentation, you have correctly installed and configured the GPU driver, CUDA 9, cuDNN 7.3, NCCL2 and other dependencies. For more detailed installation information, see the [PaddlePaddle installation guide](https://www.paddlepaddle.org.cn/install/doc/index).
For compatibility information such as supported CUDA and cuDNN versions, see [PaddlePaddle installation](https://www.paddlepaddle.org.cn/install/doc/index).
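As an optional sanity check (this assumes the `install_check` utility shipped with recent PaddlePaddle 1.x releases), you can verify the installation before moving on:
```shell
# Runs PaddlePaddle's built-in self-check on the current device.
python -c "import paddle.fluid as fluid; fluid.install_check.run_check()"
```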
### 2. Download the PaddleSeg code
```
git clone https://github.com/PaddlePaddle/PaddleSeg
```
### 3. Install PaddleSeg dependencies
Install the Python package dependencies with the following command; make sure it has been executed at least once on this branch:
```
cd PaddleSeg
pip install -r requirements.txt
```
</br>
@@ -51,35 +80,49 @@ $ pip install -r requirements.txt
### Quick Start
* [Installation](./docs/installation.md)
* [Training/Evaluation/Visualization](./docs/usage.md)
* [PaddleSeg Quick Start](./docs/usage.md)
### Basic Features
* [Introduction to segmentation models](./docs/models.md)
* [Pretrained model list](./docs/model_zoo.md)
* [Preparing and annotating custom data](./docs/data_prepare.md)
* [Annotating and preparing custom data](./docs/data_prepare.md)
* [Script usage and configuration](./docs/config.md)
* [Data and configuration validation](./docs/check.md)
* [How to train DeepLabv3+](./turtorial/finetune_deeplabv3plus.md)
* [How to train U-Net](./turtorial/finetune_unet.md)
* [How to train ICNet](./turtorial/finetune_icnet.md)
* [How to train PSPNet](./turtorial/finetune_pspnet.md)
* [How to train HRNet](./turtorial/finetune_hrnet.md)
* [Introduction to segmentation models](./docs/models.md)
* [Pretrained model download](./docs/model_zoo.md)
* [DeepLabv3+ tutorial](./turtorial/finetune_deeplabv3plus.md)
* [U-Net tutorial](./turtorial/finetune_unet.md)
* [ICNet tutorial](./turtorial/finetune_icnet.md)
* [PSPNet tutorial](./turtorial/finetune_pspnet.md)
* [HRNet tutorial](./turtorial/finetune_hrnet.md)
* [Fast-SCNN tutorial](./turtorial/finetune_fast_scnn.md)
### Inference and Deployment
* [Model export](./docs/model_export.md)
* [Inference with Python](./deploy/python/)
* [Inference with C++](./deploy/cpp/)
* [Mobile inference and deployment](./deploy/lite/)
* [Python inference](./deploy/python/)
* [C++ inference](./deploy/cpp/)
* [Paddle-Lite mobile inference and deployment](./deploy/lite/)
### Advanced Features
* [Data augmentation in PaddleSeg](./docs/data_aug.md)
* [Loss selection in PaddleSeg](./docs/loss_select.md)
* [How to handle class imbalance in binary segmentation](./docs/loss_select.md)
* [Using specialized vertical models](./contrib)
* [Multi-process training and mixed-precision training](./docs/multiple_gpus_train_and_mixed_precision_train.md)
* Segmentation model compression with PaddleSlim ([quantization](./slim/quantization/README.md), [distillation](./slim/distillation/README.md), [pruning](./slim/prune/README.md), [NAS](./slim/nas/README.md))
## Online Demos
We provide online tutorials on the AI Studio platform; you are welcome to try them:
|Online tutorial|Link|
|-|-|
|Quick start|[Try it](https://aistudio.baidu.com/aistudio/projectdetail/100798)|
|U-Net image segmentation|[Try it](https://aistudio.baidu.com/aistudio/projectDetail/102889)|
|DeepLabv3+ image segmentation|[Try it](https://aistudio.baidu.com/aistudio/projectDetail/226703)|
|Industrial quality inspection (part defect detection)|[Try it](https://aistudio.baidu.com/aistudio/projectdetail/184392)|
|Portrait segmentation|[Try it](https://aistudio.baidu.com/aistudio/projectdetail/188833)|
|PaddleSeg specialized vertical models|[Try it](https://aistudio.baidu.com/aistudio/projectdetail/226710)|
@@ -104,25 +147,14 @@ python pdseg/train.py --cfg xxx.yaml TRAIN.RESUME_MODEL_DIR /PATH/TO/MODEL_CKPT/
A: Reduce the batch size or use the Group Norm strategy. Note that when `DEFAULT_NORM_TYPE` is set to `bn`, the batch size must be >= 2 during training for Batch Norm computation to be stable.
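As a hedged sketch of what that looks like in practice (the config path is a placeholder), both options can be overridden from the command line without editing the YAML file:
```shell
# Lower the batch size, or switch the normalization type to Group Norm ("gn")
# when GPU memory is tight; keep BATCH_SIZE >= 2 if you stay with "bn".
python pdseg/train.py --use_gpu --cfg configs/your_config.yaml \
       BATCH_SIZE 2 MODEL.DEFAULT_NORM_TYPE "gn"
```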
</br>
#### Q: The error ModuleNotFoundError: No module named 'paddle.fluid.contrib.mixed_precision' occurs
A: Please upgrade PaddlePaddle to version 1.5.2 or above.
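For example, to upgrade the GPU build in place (use `paddlepaddle` instead for the CPU build):
```shell
pip install -U paddlepaddle-gpu
```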
## Online Demos
PaddleSeg provides online tutorials on the AI Studio platform; you are welcome to try them:
|Tutorial|Link|
|-|-|
|U-Net pet segmentation|[Try it](https://aistudio.baidu.com/aistudio/projectDetail/102889)|
|DeepLabv3+ image segmentation|[Try it](https://aistudio.baidu.com/aistudio/projectDetail/101696)|
|PaddleSeg specialized vertical models|[Try it](https://aistudio.baidu.com/aistudio/projectdetail/115541)|
</br>
## Feedback and Communication
## Feedback and Communication
* You are welcome to submit questions, reports and suggestions through [Github Issues](https://github.com/PaddlePaddle/PaddleSeg/issues)
* WeChat official account: 飞桨PaddlePaddle
* QQ group: 796771754
@@ -131,25 +163,36 @@ PaddleSeg provides online tutorials on the AI Studio platform; you are welcome to try them:
<p align="center"> &#8194;&#8194;&#8194;WeChat official account&#8194;&#8194;&#8194;&#8194;&#8194;&#8194;&#8194;&#8194;&#8194;&#8194;&#8194;&#8194;&#8194;&#8194;&#8194;&#8194;Official technical QQ group</p>
## Changelog
* 2019.12.15
**`v0.3.0`**
* Added the HRNet segmentation network, with 8 [pretrained models](./docs/model_zoo.md) based on Cityscapes and ImageNet
* Added support for training/evaluation/prediction with [pseudo-color labels](./docs/data_prepare.md#%E7%81%B0%E5%BA%A6%E6%A0%87%E6%B3%A8vs%E4%BC%AA%E5%BD%A9%E8%89%B2%E6%A0%87%E6%B3%A8) for a better training experience, and provided a script to convert grayscale annotations to pseudo-color annotations
* Added [learning rate warmup](./docs/configs/solver_group.md#lr_warmup), which can be combined with different learning rate decay policies
* Added a GPU implementation of image normalization to further speed up inference.
* Added a Python deployment solution for lower-cost industrial deployment.
* Added a Paddle-Lite mobile deployment solution that supports mobile deployment of the portrait segmentation model.
* Added an inference [performance benchmark](./deploy/python/docs/PaddleSeg_Infer_Benchmark.md) for different segmentation models as a reference for model selection.
* 2019.11.04
**`v0.2.0`**
* Added the PSPNet segmentation network, with 4 [pretrained models](./docs/model_zoo.md) based on COCO and Cityscapes
* Added Dice Loss, BCE Loss and combined loss configurations to support [model optimization](./docs/loss_select.md) for imbalanced samples
* Added [FP16 mixed-precision training](./docs/multiple_gpus_train_and_mixed_precision_train.md) with dynamic loss scaling, improving training speed by 30%+ without loss of accuracy
* Added [multi-GPU multi-process training](./docs/multiple_gpus_train_and_mixed_precision_train.md), improving multi-GPU training speed by 15%+
* Released a UNet-based [industrial dial segmentation model](./contrib#%E5%B7%A5%E4%B8%9A%E7%94%A8%E8%A1%A8%E5%88%86%E5%89%B2)
* Added the PSPNet segmentation network, with 4 [pretrained models](./docs/model_zoo.md) based on COCO and Cityscapes
* Added Dice Loss, BCE Loss and combined loss configurations to support [model optimization](./docs/loss_select.md) for imbalanced samples
* Added [FP16 mixed-precision training](./docs/multiple_gpus_train_and_mixed_precision_train.md) with dynamic loss scaling, improving training speed by 30%+ without loss of accuracy
* Added [multi-GPU multi-process training](./docs/multiple_gpus_train_and_mixed_precision_train.md), improving multi-GPU training speed by 15%+
* Released a UNet-based [industrial dial segmentation model](./contrib#%E5%B7%A5%E4%B8%9A%E7%94%A8%E8%A1%A8%E5%88%86%E5%89%B2)
* 2019.09.10
**`v0.1.0`**
* Initial release of the PaddleSeg segmentation library, including three segmentation models: DeepLabv3+, U-Net and ICNet, where DeepLabv3+ supports two adjustable backbones: Xception and MobileNet v2.
* Released [ACE2P](./contrib/ACE2P), the winning prediction model of the CVPR19 LIP human parsing challenge
* Released pre-built [portrait segmentation](./contrib/HumanSeg/) and [lane line segmentation](./contrib/RoadLine) prediction models based on DeepLabv3+
* Released [ACE2P](./contrib/ACE2P), the winning prediction model of the CVPR19 LIP human parsing challenge
* Released pre-built [portrait segmentation](./contrib/HumanSeg/) and [lane line segmentation](./contrib/RoadLine) prediction models based on DeepLabv3+
</br>
## How to Contribute Code
## Contributing
We warmly welcome you to contribute code to PaddleSeg or provide suggestions on its use.
We warmly welcome you to contribute code to PaddleSeg or provide suggestions on its use. If you can fix an issue or add a new feature, feel free to submit a pull request.
EVAL_CROP_SIZE: (2048, 1024) # (width, height), for unpadding rangescaling and stepscaling
TRAIN_CROP_SIZE: (1024, 1024) # (width, height), for unpadding rangescaling and stepscaling
AUG:
AUG_METHOD: "stepscaling" # choice unpadding rangescaling and stepscaling
FIX_RESIZE_SIZE: (640, 640) # (width, height), for unpadding
INF_RESIZE_VALUE: 500 # for rangescaling
MAX_RESIZE_VALUE: 600 # for rangescaling
MIN_RESIZE_VALUE: 400 # for rangescaling
MAX_SCALE_FACTOR: 2.0 # for stepscaling
MIN_SCALE_FACTOR: 0.5 # for stepscaling
SCALE_STEP_SIZE: 0.25 # for stepscaling
MIRROR: True
FLIP: False
FLIP_RATIO: 0.2
RICH_CROP:
ENABLE: True
ASPECT_RATIO: 0.0
BLUR: False
BLUR_RATIO: 0.1
MAX_ROTATION: 0
MIN_AREA_RATIO: 0.0
BRIGHTNESS_JITTER_RATIO: 0.4
CONTRAST_JITTER_RATIO: 0.4
SATURATION_JITTER_RATIO: 0.4
BATCH_SIZE: 12
MEAN: [0.5, 0.5, 0.5]
STD: [0.5, 0.5, 0.5]
DATASET:
DATA_DIR: "./dataset/cityscapes/"
IMAGE_TYPE: "rgb" # choice rgb or rgba
NUM_CLASSES: 19
TEST_FILE_LIST: "dataset/cityscapes/val.list"
TRAIN_FILE_LIST: "dataset/cityscapes/train.list"
VAL_FILE_LIST: "dataset/cityscapes/val.list"
IGNORE_INDEX: 255
FREEZE:
MODEL_FILENAME: "model"
PARAMS_FILENAME: "params"
MODEL:
DEFAULT_NORM_TYPE: "bn"
MODEL_NAME: "fast_scnn"
TEST:
TEST_MODEL: "snapshots/cityscape_fast_scnn/final/"
TRAIN:
MODEL_SAVE_DIR: "snapshots/cityscape_fast_scnn/"
SNAPSHOT_EPOCH: 10
SOLVER:
LR: 0.001
LR_POLICY: "poly"
OPTIMIZER: "sgd"
NUM_EPOCHS: 100
TRAIN_CROP_SIZE: (512, 512) # (width, height), for unpadding rangescaling and stepscaling
EVAL_CROP_SIZE: (512, 512) # (width, height), for unpadding rangescaling and stepscaling
EVAL_CROP_SIZE: (2049, 1025) # (width, height), for unpadding rangescaling and stepscaling
TRAIN_CROP_SIZE: (769, 769) # (width, height), for unpadding rangescaling and stepscaling
AUG:
AUG_METHOD: "unpadding" # choice unpadding rangescaling and stepscaling
FIX_RESIZE_SIZE: (512, 512) # (width, height), for unpadding
AUG_METHOD: "stepscaling" # choice unpadding rangescaling and stepscaling
FIX_RESIZE_SIZE: (2048, 1024) # (width, height), for unpadding
INF_RESIZE_VALUE: 500 # for rangescaling
MAX_RESIZE_VALUE: 600 # for rangescaling
MIN_RESIZE_VALUE: 400 # for rangescaling
MAX_SCALE_FACTOR: 1.25 # for stepscaling
MIN_SCALE_FACTOR: 0.75 # for stepscaling
MAX_SCALE_FACTOR: 2.0 # for stepscaling
MIN_SCALE_FACTOR: 0.5 # for stepscaling
SCALE_STEP_SIZE: 0.25 # for stepscaling
MIRROR: True
BATCH_SIZE: 4
DATASET:
DATA_DIR: "./dataset/mini_pet/"
DATA_DIR: "./dataset/cityscapes/"
IMAGE_TYPE: "rgb" # choice rgb or rgba
NUM_CLASSES: 3
TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt"
VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt"
VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
NUM_CLASSES: 19
TEST_FILE_LIST: "dataset/cityscapes/val.list"
TRAIN_FILE_LIST: "dataset/cityscapes/train.list"
VAL_FILE_LIST: "dataset/cityscapes/val.list"
IGNORE_INDEX: 255
SEPARATOR: " "
FREEZE:
MODEL_FILENAME: "__model__"
PARAMS_FILENAME: "__params__"
MODEL_FILENAME: "model"
PARAMS_FILENAME: "params"
MODEL:
MODEL_NAME: "deeplabv3p"
DEFAULT_NORM_TYPE: "bn"
MODEL_NAME: "deeplabv3p"
DEEPLAB:
BACKBONE: "xception_65"
BACKBONE: "mobilenetv2"
ASPP_WITH_SEP_CONV: True
DECODER_USE_SEP_CONV: True
ENCODER_WITH_ASPP: False
ENABLE_DECODER: False
TRAIN:
PRETRAINED_MODEL_DIR: "./pretrained_model/deeplabv3p_xception65_bn_coco/"
MODEL_SAVE_DIR: "./saved_model/deeplabv3p_xception65_bn_pet/"
PRETRAINED_MODEL_DIR: u"pretrained_model/deeplabv3p_mobilenetv2-1-0_bn_coco"
MODEL_SAVE_DIR: "saved_model/deeplabv3p_mobilenetv2_cityscapes"
SNAPSHOT_EPOCH: 10
SYNC_BATCH_NORM: True
TEST:
TEST_MODEL: "./saved_model/deeplabv3p_xception65_bn_pet/final"
TEST_MODEL: "saved_model/deeplabv3p_mobilenetv2_cityscapes/final"
SOLVER:
NUM_EPOCHS: 100
LR: 0.005
LR: 0.01
LR_POLICY: "poly"
OPTIMIZER: "sgd"
NUM_EPOCHS: 100
EVAL_CROP_SIZE: (2049, 1025) # (width, height), for unpadding rangescaling and stepscaling
TRAIN_CROP_SIZE: (713, 713) # (width, height), for unpadding rangescaling and stepscaling
TRAIN_CROP_SIZE: (769, 769) # (width, height), for unpadding rangescaling and stepscaling
AUG:
AUG_METHOD: "stepscaling" # choice unpadding rangescaling and stepscaling
FIX_RESIZE_SIZE: (640, 640) # (width, height), for unpadding
FIX_RESIZE_SIZE: (2048, 1024) # (width, height), for unpadding
INF_RESIZE_VALUE: 500 # for rangescaling
MAX_RESIZE_VALUE: 600 # for rangescaling
MIN_RESIZE_VALUE: 400 # for rangescaling
@@ -19,23 +19,25 @@ DATASET:
TRAIN_FILE_LIST: "dataset/cityscapes/train.list"
VAL_FILE_LIST: "dataset/cityscapes/val.list"
IGNORE_INDEX: 255
SEPARATOR: " "
FREEZE:
MODEL_FILENAME: "model"
PARAMS_FILENAME: "params"
MODEL:
MODEL_NAME: "pspnet"
DEFAULT_NORM_TYPE: "bn"
PSPNET:
DEPTH_MULTIPLIER: 1
LAYERS: 50
TEST:
TEST_MODEL: "snapshots/cityscapes_pspnet50/final"
MODEL_NAME: "deeplabv3p"
DEEPLAB:
ASPP_WITH_SEP_CONV: True
DECODER_USE_SEP_CONV: True
TRAIN:
MODEL_SAVE_DIR: "snapshots/cityscapes_pspnet50/"
PRETRAINED_MODEL_DIR: u"pretrained_model/pspnet50_bn_cityscapes/"
PRETRAINED_MODEL_DIR: u"pretrained_model/deeplabv3p_xception65_bn_coco"
MODEL_SAVE_DIR: "saved_model/deeplabv3p_xception65_bn_cityscapes"
SNAPSHOT_EPOCH: 10
SYNC_BATCH_NORM: True
TEST:
TEST_MODEL: "saved_model/deeplabv3p_xception65_bn_cityscapes/final"
SOLVER:
LR: 0.001
LR: 0.01
LR_POLICY: "poly"
OPTIMIZER: "sgd"
NUM_EPOCHS: 700
NUM_EPOCHS: 100
# Dataset configuration
DATASET:
DATA_DIR: "./dataset/optic_disc_seg/"
NUM_CLASSES: 2
TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt"
VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt"
VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
# Pretrained model configuration
MODEL:
MODEL_NAME: "deeplabv3p"
DEFAULT_NORM_TYPE: "bn"
DEEPLAB:
BACKBONE: "xception_65"
# Other configuration
TRAIN_CROP_SIZE: (512, 512)
EVAL_CROP_SIZE: (512, 512)
AUG:
AUG_METHOD: "unpadding"
FIX_RESIZE_SIZE: (512, 512)
BATCH_SIZE: 4
TRAIN:
PRETRAINED_MODEL_DIR: "./pretrained_model/deeplabv3p_xception65_bn_coco/"
MODEL_SAVE_DIR: "./saved_model/deeplabv3p_xception65_bn_optic/"
SNAPSHOT_EPOCH: 5
TEST:
TEST_MODEL: "./saved_model/deeplabv3p_xception65_bn_optic/final"
SOLVER:
NUM_EPOCHS: 10
LR: 0.001
LR_POLICY: "poly"
OPTIMIZER: "adam"
\ No newline at end of file
@@ -27,16 +27,17 @@ FREEZE:
MODEL_FILENAME: "__model__"
PARAMS_FILENAME: "__params__"
MODEL:
MODEL_NAME: "unet"
MODEL_NAME: "fast_scnn"
DEFAULT_NORM_TYPE: "bn"
TEST:
TEST_MODEL: "./saved_model/unet_pet/final/"
TRAIN:
MODEL_SAVE_DIR: "./saved_model/unet_pet/"
PRETRAINED_MODEL_DIR: "./pretrained_model/unet_bn_coco/"
PRETRAINED_MODEL_DIR: "./pretrained_model/fast_scnn_cityscapes/"
MODEL_SAVE_DIR: "./saved_model/fast_scnn_pet/"
SNAPSHOT_EPOCH: 10
TEST:
TEST_MODEL: "./saved_model/fast_scnn_pet/final"
SOLVER:
NUM_EPOCHS: 100
LR: 0.005
LR_POLICY: "poly"
OPTIMIZER: "adam"
OPTIMIZER: "sgd"
# Dataset configuration
DATASET:
DATA_DIR: "./dataset/optic_disc_seg/"
NUM_CLASSES: 2
TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt"
VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt"
VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
# Pretrained model configuration
MODEL:
MODEL_NAME: "hrnet"
DEFAULT_NORM_TYPE: "bn"
HRNET:
STAGE2:
NUM_CHANNELS: [18, 36]
STAGE3:
NUM_CHANNELS: [18, 36, 72]
STAGE4:
NUM_CHANNELS: [18, 36, 72, 144]
# Other configuration
TRAIN_CROP_SIZE: (512, 512)
EVAL_CROP_SIZE: (512, 512)
AUG:
AUG_METHOD: "unpadding"
FIX_RESIZE_SIZE: (512, 512)
BATCH_SIZE: 4
TRAIN:
PRETRAINED_MODEL_DIR: "./pretrained_model/hrnet_w18_bn_cityscapes/"
MODEL_SAVE_DIR: "./saved_model/hrnet_optic/"
SNAPSHOT_EPOCH: 5
TEST:
TEST_MODEL: "./saved_model/hrnet_optic/final"
SOLVER:
NUM_EPOCHS: 10
LR: 0.001
LR_POLICY: "poly"
OPTIMIZER: "adam"
# Dataset configuration
DATASET:
DATA_DIR: "./dataset/optic_disc_seg/"
NUM_CLASSES: 2
TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt"
VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt"
VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
# Pretrained model configuration
MODEL:
MODEL_NAME: "icnet"
DEFAULT_NORM_TYPE: "bn"
MULTI_LOSS_WEIGHT: "[1.0, 0.4, 0.16]"
ICNET:
DEPTH_MULTIPLIER: 0.5
# Other configuration
TRAIN_CROP_SIZE: (512, 512)
EVAL_CROP_SIZE: (512, 512)
AUG:
AUG_METHOD: "unpadding"
FIX_RESIZE_SIZE: (512, 512)
BATCH_SIZE: 4
TRAIN:
PRETRAINED_MODEL_DIR: "./pretrained_model/icnet_bn_cityscapes/"
MODEL_SAVE_DIR: "./saved_model/icnet_optic/"
SNAPSHOT_EPOCH: 5
TEST:
TEST_MODEL: "./saved_model/icnet_optic/final"
SOLVER:
NUM_EPOCHS: 10
LR: 0.001
LR_POLICY: "poly"
OPTIMIZER: "adam"
TRAIN_CROP_SIZE: (512, 512) # (width, height), for unpadding rangescaling and stepscaling
EVAL_CROP_SIZE: (512, 512) # (width, height), for unpadding rangescaling and stepscaling
AUG:
AUG_METHOD: "unpadding" # choice unpadding rangescaling and stepscaling
FIX_RESIZE_SIZE: (512, 512) # (width, height), for unpadding
INF_RESIZE_VALUE: 500 # for rangescaling
MAX_RESIZE_VALUE: 600 # for rangescaling
MIN_RESIZE_VALUE: 400 # for rangescaling
MAX_SCALE_FACTOR: 1.25 # for stepscaling
MIN_SCALE_FACTOR: 0.75 # for stepscaling
SCALE_STEP_SIZE: 0.25 # for stepscaling
MIRROR: True
BATCH_SIZE: 4
DATASET:
DATA_DIR: "./dataset/mini_pet/"
IMAGE_TYPE: "rgb" # choice rgb or rgba
NUM_CLASSES: 3
TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt"
VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt"
VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
IGNORE_INDEX: 255
SEPARATOR: " "
FREEZE:
MODEL_FILENAME: "__model__"
PARAMS_FILENAME: "__params__"
MODEL:
MODEL_NAME: "icnet"
DEFAULT_NORM_TYPE: "bn"
MULTI_LOSS_WEIGHT: "[1.0, 0.4, 0.16]"
ICNET:
DEPTH_MULTIPLIER: 0.5
TRAIN:
PRETRAINED_MODEL_DIR: "./pretrained_model/icnet_bn_cityscapes/"
MODEL_SAVE_DIR: "./saved_model/icnet_pet/"
SNAPSHOT_EPOCH: 10
TEST:
TEST_MODEL: "./saved_model/icnet_pet/final"
SOLVER:
NUM_EPOCHS: 100
LR: 0.005
LR_POLICY: "poly"
OPTIMIZER: "sgd"
# Dataset configuration
DATASET:
DATA_DIR: "./dataset/optic_disc_seg/"
NUM_CLASSES: 2
TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt"
VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt"
VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
# Pretrained model configuration
MODEL:
MODEL_NAME: "pspnet"
DEFAULT_NORM_TYPE: "bn"
PSPNET:
DEPTH_MULTIPLIER: 1
LAYERS: 50
# Other configuration
TRAIN_CROP_SIZE: (512, 512)
EVAL_CROP_SIZE: (512, 512)
AUG:
AUG_METHOD: "unpadding"
FIX_RESIZE_SIZE: (512, 512)
BATCH_SIZE: 4
TRAIN:
PRETRAINED_MODEL_DIR: "./pretrained_model/pspnet50_bn_cityscapes/"
MODEL_SAVE_DIR: "./saved_model/pspnet_optic/"
SNAPSHOT_EPOCH: 5
TEST:
TEST_MODEL: "./saved_model/pspnet_optic/final"
SOLVER:
NUM_EPOCHS: 10
LR: 0.001
LR_POLICY: "poly"
OPTIMIZER: "adam"
# Dataset configuration
DATASET:
DATA_DIR: "./dataset/optic_disc_seg/"
NUM_CLASSES: 2
TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt"
VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt"
VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
# Pretrained model configuration
MODEL:
MODEL_NAME: "unet"
DEFAULT_NORM_TYPE: "bn"
# Other configuration
TRAIN_CROP_SIZE: (512, 512)
EVAL_CROP_SIZE: (512, 512)
AUG:
AUG_METHOD: "unpadding"
FIX_RESIZE_SIZE: (512, 512)
BATCH_SIZE: 4
TRAIN:
PRETRAINED_MODEL_DIR: "./pretrained_model/unet_bn_coco/"
MODEL_SAVE_DIR: "./saved_model/unet_optic/"
SNAPSHOT_EPOCH: 5
TEST:
TEST_MODEL: "./saved_model/unet_optic/final"
SOLVER:
NUM_EPOCHS: 10
LR: 0.001
LR_POLICY: "poly"
OPTIMIZER: "adam"
@@ -6,13 +6,13 @@ args = get_arguments()
cfg = AttrDict()
# Directory containing the images to predict
cfg.data_dir = os.path.join(args.example , "data", "testing_images")
cfg.data_dir = os.path.join("data", "testing_images")
# List file with the names of the images to predict
cfg.data_list_file = os.path.join(args.example , "data", "test_id.txt")
cfg.data_list_file = os.path.join("data", "test_id.txt")
# Path from which the model is loaded
cfg.model_path = os.path.join(args.example , "ACE2P")
cfg.model_path = args.example
# Directory where prediction results are saved
cfg.vis_dir = os.path.join(args.example , "result")
cfg.vis_dir = "result"
# Number of prediction classes
cfg.class_num = 20
......
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
TEST_PATH = os.path.join(LOCAL_PATH, "..", "..", "test")
sys.path.append(TEST_PATH)
from test_utils import download_file_and_uncompress
if __name__ == "__main__":
download_file_and_uncompress(
url='https://paddleseg.bj.bcebos.com/models/ACE2P.tgz',
savepath=LOCAL_PATH,
extrapath=LOCAL_PATH,
extraname='ACE2P')
print("Pretrained Model download success!")
# -*- coding: utf-8 -*-
import os
import cv2
import numpy as np
from utils.util import get_arguments
from utils.palette import get_palette
from PIL import Image as PILImage
import importlib
args = get_arguments()
config = importlib.import_module('config')
cfg = getattr(config, 'cfg')
# Paddle garbage collection flag; the ACE2P model is large, so enabling this is recommended when GPU memory is insufficient
os.environ['FLAGS_eager_delete_tensor_gb']='0.0'
import paddle.fluid as fluid
# Dataset class for prediction
class TestDataSet():
def __init__(self):
self.data_dir = cfg.data_dir
self.data_list_file = cfg.data_list_file
self.data_list = self.get_data_list()
self.data_num = len(self.data_list)
def get_data_list(self):
# Get the list of image paths to predict
data_list = []
data_file_handler = open(self.data_list_file, 'r')
for line in data_file_handler:
img_name = line.strip()
name_prefix = img_name.split('.')[0]
if len(img_name.split('.')) == 1:
img_name = img_name + '.jpg'
img_path = os.path.join(self.data_dir, img_name)
data_list.append(img_path)
return data_list
def preprocess(self, img):
# Image preprocessing
if cfg.example == 'ACE2P':
reader = importlib.import_module('reader')
ACE2P_preprocess = getattr(reader, 'preprocess')
img = ACE2P_preprocess(img)
else:
img = cv2.resize(img, cfg.input_size).astype(np.float32)
img -= np.array(cfg.MEAN)
img /= np.array(cfg.STD)
img = img.transpose((2, 0, 1))
img = np.expand_dims(img, axis=0)
return img
def get_data(self, index):
# Get image information
img_path = self.data_list[index]
img = cv2.imread(img_path, cv2.IMREAD_COLOR)
if img is None:
return img, img,img_path, None
img_name = img_path.split(os.sep)[-1]
name_prefix = img_name.replace('.'+img_name.split('.')[-1],'')
img_shape = img.shape[:2]
img_process = self.preprocess(img)
return img, img_process, name_prefix, img_shape
def infer():
if not os.path.exists(cfg.vis_dir):
os.makedirs(cfg.vis_dir)
palette = get_palette(cfg.class_num)
# Display threshold for portrait segmentation results
thresh = 120
place = fluid.CUDAPlace(0) if cfg.use_gpu else fluid.CPUPlace()
exe = fluid.Executor(place)
# Load the inference model
test_prog, feed_name, fetch_list = fluid.io.load_inference_model(
dirname=cfg.model_path, executor=exe, params_filename='__params__')
# Load the prediction dataset
test_dataset = TestDataSet()
data_num = test_dataset.data_num
for idx in range(data_num):
# Fetch data
ori_img, image, im_name, im_shape = test_dataset.get_data(idx)
if image is None:
print(im_name, 'is None')
continue
# Inference
if cfg.example == 'ACE2P':
# The ACE2P model uses multi-scale inference
reader = importlib.import_module('reader')
multi_scale_test = getattr(reader, 'multi_scale_test')
parsing, logits = multi_scale_test(exe, test_prog, feed_name, fetch_list, image, im_shape)
else:
# The HumanSeg and RoadLine models use single-scale inference
result = exe.run(program=test_prog, feed={feed_name[0]: image}, fetch_list=fetch_list)
parsing = np.argmax(result[0][0], axis=0)
parsing = cv2.resize(parsing.astype(np.uint8), im_shape[::-1])
# Save the prediction result
result_path = os.path.join(cfg.vis_dir, im_name + '.png')
if cfg.example == 'HumanSeg':
logits = result[0][0][1]*255
logits = cv2.resize(logits, im_shape[::-1])
ret, logits = cv2.threshold(logits, thresh, 0, cv2.THRESH_TOZERO)
logits = 255 *(logits - thresh)/(255 - thresh)
# Add the segmentation result as the alpha channel
rgba = np.concatenate((ori_img, np.expand_dims(logits, axis=2)), axis=2)
cv2.imwrite(result_path, rgba)
else:
output_im = PILImage.fromarray(np.asarray(parsing, dtype=np.uint8))
output_im.putpalette(palette)
output_im.save(result_path)
if (idx + 1) % 100 == 0:
print('%d processed' % (idx + 1))
print('all %d images processed' % (idx + 1))
return 0
if __name__ == "__main__":
infer()
# -*- coding: utf-8 -*-
import numpy as np
import paddle.fluid as fluid
from ACE2P.config import cfg
from config import cfg
import cv2
def get_affine_points(src_shape, dst_shape, rot_grad=0):
......
@@ -8,7 +8,7 @@ from PIL import Image as PILImage
import importlib
args = get_arguments()
config = importlib.import_module(args.example+'.config')
config = importlib.import_module('config')
cfg = getattr(config, 'cfg')
# Paddle garbage collection flag; the ACE2P model is large, so enabling this is recommended when GPU memory is insufficient
......
##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
## Created by: RainbowSecret
## Microsoft Research
## yuyua@microsoft.com
## Copyright (c) 2018
##
## This source code is licensed under the MIT-style license found in the
## LICENSE file in the root directory of this source tree
##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import cv2
def get_palette(num_cls):
""" Returns the color map for visualizing the segmentation mask.
Args:
num_cls: Number of classes
Returns:
The color map
"""
n = num_cls
palette = [0] * (n * 3)
for j in range(0, n):
lab = j
palette[j * 3 + 0] = 0
palette[j * 3 + 1] = 0
palette[j * 3 + 2] = 0
i = 0
while lab:
palette[j * 3 + 0] |= (((lab >> 0) & 1) << (7 - i))
palette[j * 3 + 1] |= (((lab >> 1) & 1) << (7 - i))
palette[j * 3 + 2] |= (((lab >> 2) & 1) << (7 - i))
i += 1
lab >>= 3
return palette
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import argparse
import os
def get_arguments():
parser = argparse.ArgumentParser()
parser.add_argument("--use_gpu",
action="store_true",
help="Use gpu or cpu to test.")
parser.add_argument('--example',
type=str,
help='RoadLine, HumanSeg or ACE2P')
return parser.parse_args()
class AttrDict(dict):
def __init__(self, *args, **kwargs):
super(AttrDict, self).__init__(*args, **kwargs)
def __getattr__(self, name):
if name in self.__dict__:
return self.__dict__[name]
elif name in self:
return self[name]
else:
raise AttributeError(name)
def __setattr__(self, name, value):
if name in self.__dict__:
self.__dict__[name] = value
else:
self[name] = value
def merge_cfg_from_args(args, cfg):
"""Merge config keys, values in args into the global config."""
for k, v in vars(args).items():
d = cfg
try:
value = eval(v)
except:
value = v
if value is not None:
cfg[k] = value
# LaneNet Model Training Tutorial
* This tutorial introduces how to perform lane line detection with PaddleSeg.
* Before reading this tutorial, please make sure you have read the [Quick Start](../README.md#快速入门) and [Basic Features](../README.md#基础功能) chapters so that you have a basic understanding of PaddleSeg.
## Environment Requirements
* PaddlePaddle >= 1.7.0 or the develop version
* Python 3.5+
Install the Python package dependencies with the following command; make sure it has been executed at least once on this branch:
```shell
$ pip install -r requirements.txt
```
## 1. Prepare the Training Data
We have prepared a pre-processed dataset, which can be downloaded with the code below. It was converted from the TuSimple lane detection dataset; you can also download the original dataset from this [page](https://github.com/TuSimple/tusimple-benchmark/issues/3).
```shell
python dataset/download_tusimple.py
```
The dataset directory structure:
```
LaneNet
|-- dataset
|-- tusimple_lane_detection
|-- training
|-- gt_binary_image
|-- gt_image
|-- gt_instance_image
|-- train_part.txt
|-- val_part.txt
```
## 2. Download the Pretrained Model
Download the [VGG pretrained model](https://paddle-imagenet-models-name.bj.bcebos.com/VGG16_pretrained.tar) and place it under the ```pretrained_models``` directory.
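A hedged sketch of this step (the archive is assumed to extract into a `VGG16_pretrained` directory; adjust the paths if yours differs):
```shell
# Download and unpack the VGG16 pretrained weights into pretrained_models/
mkdir -p pretrained_models
wget https://paddle-imagenet-models-name.bj.bcebos.com/VGG16_pretrained.tar
tar -xf VGG16_pretrained.tar -C pretrained_models/
```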
## 3. Prepare the Configuration
Next we need to decide on the configuration. For this tutorial, it falls into three parts:
* Dataset
  * Training set root directory
  * Training set file list
  * Test set file list
  * Validation set file list
* Pretrained model
  * Pretrained model name
  * Backbone network of the pretrained model
  * Pretrained model path
* Other
  * Learning rate
  * Batch size
  * ...
Among the three, the pretrained model configuration is especially important: if the MODEL or BACKBONE settings are wrong, the pretrained parameters will not be loaded, which slows down convergence. The pretrained-model-related configuration is shown in step 2.
The dataset configuration depends on where the data is stored; in this tutorial the data lives in `dataset/tusimple_lane_detection`.
The remaining options can be tuned to the dataset and machine environment. Finally, we save a YAML configuration file with the following content at **configs/lanenet.yaml**:
```yaml
# Dataset configuration
DATASET:
DATA_DIR: "./dataset/tusimple_lane_detection"
IMAGE_TYPE: "rgb" # choice rgb or rgba
NUM_CLASSES: 2
TEST_FILE_LIST: "./dataset/tusimple_lane_detection/training/val_part.txt"
TRAIN_FILE_LIST: "./dataset/tusimple_lane_detection/training/train_part.txt"
VAL_FILE_LIST: "./dataset/tusimple_lane_detection/training/val_part.txt"
SEPARATOR: " "
# Pretrained model configuration
MODEL:
MODEL_NAME: "lanenet"
# Other configuration
EVAL_CROP_SIZE: (512, 256)
TRAIN_CROP_SIZE: (512, 256)
AUG:
AUG_METHOD: u"unpadding" # choice unpadding rangescaling and stepscaling
FIX_RESIZE_SIZE: (512, 256) # (width, height), for unpadding
MIRROR: False
RICH_CROP:
ENABLE: False
BATCH_SIZE: 4
TEST:
TEST_MODEL: "./saved_model/lanenet/final/"
TRAIN:
MODEL_SAVE_DIR: "./saved_model/lanenet/"
PRETRAINED_MODEL_DIR: "./pretrained_models/VGG16_pretrained"
SNAPSHOT_EPOCH: 5
SOLVER:
NUM_EPOCHS: 100
LR: 0.0005
LR_POLICY: "poly"
OPTIMIZER: "sgd"
WEIGHT_DECAY: 0.001
```
## 4. Start Training
Start training with the following command:
```shell
CUDA_VISIBLE_DEVICES=0 python -u train.py --cfg configs/lanenet.yaml --use_gpu --use_mpio --do_eval
```
## 5. Evaluation
After training finishes, start evaluation with the following command:
```shell
CUDA_VISIBLE_DEVICES=0 python -u eval.py --use_gpu --cfg configs/lanenet.yaml
```
## 6. Visualization
Visualization requires a remap file for converting between the front view and the bird's-eye view: download it from this [link](https://paddleseg.bj.bcebos.com/resources/tusimple_ipm_remap.tar) and place it under ```./utils```. We also provide a trained model: download it from this [link](https://paddleseg.bj.bcebos.com/models/lanenet_vgg_tusimple.tar) and place it under ```./pretrained_models/```.
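A hedged sketch of these download steps (the extracted directory names are assumptions; adjust the paths if they differ):
```shell
# Remap file used to convert between the front view and the bird's-eye view
wget https://paddleseg.bj.bcebos.com/resources/tusimple_ipm_remap.tar
tar -xf tusimple_ipm_remap.tar -C utils/
# Trained LaneNet model provided for visualization
wget https://paddleseg.bj.bcebos.com/models/lanenet_vgg_tusimple.tar
tar -xf lanenet_vgg_tusimple.tar -C pretrained_models/
```
With both files in place, run visualization with the following command: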
```shell
CUDA_VISIBLE_DEVICES=0 python -u ./vis.py --cfg configs/lanenet.yaml --use_gpu --vis_dir vis_result \
TEST.TEST_MODEL pretrained_models/LaneNet_vgg_tusimple/
```
Example visualization results:
Prediction result:<br/>
![](imgs/0005_pred_lane.png)
Segmentation result:<br/>
![](imgs/0005_pred_binary.png)<br/>
Lane instance prediction result:<br/>
![](imgs/0005_pred_instance.png)
TRAIN_CROP_SIZE: (512, 512) # (width, height), for unpadding rangescaling and stepscaling
EVAL_CROP_SIZE: (512, 512) # (width, height), for unpadding rangescaling and stepscaling
AUG:
AUG_METHOD: "unpadding" # choice unpadding rangescaling and stepscaling
FIX_RESIZE_SIZE: (512, 512) # (width, height), for unpadding
EVAL_CROP_SIZE: (512, 256) # (width, height), for unpadding rangescaling and stepscaling
TRAIN_CROP_SIZE: (512, 256) # (width, height), for unpadding rangescaling and stepscaling
AUG:
AUG_METHOD: u"unpadding" # choice unpadding rangescaling and stepscaling
FIX_RESIZE_SIZE: (512, 256) # (width, height), for unpadding
INF_RESIZE_VALUE: 500 # for rangescaling
MAX_RESIZE_VALUE: 600 # for rangescaling
MIN_RESIZE_VALUE: 400 # for rangescaling
MAX_SCALE_FACTOR: 1.25 # for stepscaling
MIN_SCALE_FACTOR: 0.75 # for stepscaling
MAX_SCALE_FACTOR: 2.0 # for stepscaling
MIN_SCALE_FACTOR: 0.5 # for stepscaling
SCALE_STEP_SIZE: 0.25 # for stepscaling
MIRROR: True
MIRROR: False
RICH_CROP:
ENABLE: False
BATCH_SIZE: 4
DATASET:
DATA_DIR: "./dataset/mini_pet/"
DATALOADER:
BUF_SIZE: 256
NUM_WORKERS: 4
DATASET:
DATA_DIR: "./dataset/tusimple_lane_detection"
IMAGE_TYPE: "rgb" # choice rgb or rgba
NUM_CLASSES: 3
TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt"
VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt"
VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
IGNORE_INDEX: 255
NUM_CLASSES: 2
TEST_FILE_LIST: "./dataset/tusimple_lane_detection/training/val_part.txt"
TEST_TOTAL_IMAGES: 362
TRAIN_FILE_LIST: "./dataset/tusimple_lane_detection/training/train_part.txt"
TRAIN_TOTAL_IMAGES: 3264
VAL_FILE_LIST: "./dataset/tusimple_lane_detection/training/val_part.txt"
VAL_TOTAL_IMAGES: 362
SEPARATOR: " "
IGNORE_INDEX: 255
FREEZE:
MODEL_FILENAME: "__model__"
PARAMS_FILENAME: "__params__"
MODEL:
MODEL_NAME: "hrnet"
MODEL_NAME: "lanenet"
DEFAULT_NORM_TYPE: "bn"
HRNET:
STAGE2:
NUM_CHANNELS: [18, 36]
STAGE3:
NUM_CHANNELS: [18, 36, 72]
STAGE4:
NUM_CHANNELS: [18, 36, 72, 144]
TRAIN:
PRETRAINED_MODEL_DIR: "./pretrained_model/hrnet_w18_bn_cityscapes/"
MODEL_SAVE_DIR: "./saved_model/hrnet_w18_bn_pet/"
SNAPSHOT_EPOCH: 10
TEST:
TEST_MODEL: "./saved_model/hrnet_w18_bn_pet/final"
TEST_MODEL: "./saved_model/lanenet/final/"
TRAIN:
MODEL_SAVE_DIR: "./saved_model/lanenet/"
PRETRAINED_MODEL_DIR: "./pretrained_models/VGG16_pretrained"
SNAPSHOT_EPOCH: 1
SOLVER:
NUM_EPOCHS: 100
LR: 0.005
LR: 0.0005
LR_POLICY: "poly"
OPTIMIZER: "sgd"
WEIGHT_DECAY: 0.001
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import cv2
import numpy as np
from utils.config import cfg
from models.model_builder import ModelPhase
from pdseg.data_aug import get_random_scale, randomly_scale_image_and_label, random_rotation, \
rand_scale_aspect, hsv_color_jitter, rand_crop
def resize(img, grt=None, grt_instance=None, mode=ModelPhase.TRAIN):
"""
改变图像及标签图像尺寸
AUG.AUG_METHOD为unpadding,所有模式均直接resize到AUG.FIX_RESIZE_SIZE的尺寸
AUG.AUG_METHOD为stepscaling, 按比例resize,训练时比例范围AUG.MIN_SCALE_FACTOR到AUG.MAX_SCALE_FACTOR,间隔为AUG.SCALE_STEP_SIZE,其他模式返回原图
AUG.AUG_METHOD为rangescaling,长边对齐,短边按比例变化,训练时长边对齐范围AUG.MIN_RESIZE_VALUE到AUG.MAX_RESIZE_VALUE,其他模式长边对齐AUG.INF_RESIZE_VALUE
Args:
img(numpy.ndarray): 输入图像
grt(numpy.ndarray): 标签图像,默认为None
mode(string): 模式, 默认训练模式,即ModelPhase.TRAIN
Returns:
resize后的图像和标签图
"""
if cfg.AUG.AUG_METHOD == 'unpadding':
target_size = cfg.AUG.FIX_RESIZE_SIZE
img = cv2.resize(img, target_size, interpolation=cv2.INTER_LINEAR)
if grt is not None:
grt = cv2.resize(grt, target_size, interpolation=cv2.INTER_NEAREST)
if grt_instance is not None:
grt_instance = cv2.resize(grt_instance, target_size, interpolation=cv2.INTER_NEAREST)
elif cfg.AUG.AUG_METHOD == 'stepscaling':
if mode == ModelPhase.TRAIN:
min_scale_factor = cfg.AUG.MIN_SCALE_FACTOR
max_scale_factor = cfg.AUG.MAX_SCALE_FACTOR
step_size = cfg.AUG.SCALE_STEP_SIZE
scale_factor = get_random_scale(min_scale_factor, max_scale_factor,
step_size)
img, grt = randomly_scale_image_and_label(
img, grt, scale=scale_factor)
elif cfg.AUG.AUG_METHOD == 'rangescaling':
min_resize_value = cfg.AUG.MIN_RESIZE_VALUE
max_resize_value = cfg.AUG.MAX_RESIZE_VALUE
if mode == ModelPhase.TRAIN:
if min_resize_value == max_resize_value:
random_size = min_resize_value
else:
random_size = int(
np.random.uniform(min_resize_value, max_resize_value) + 0.5)
else:
random_size = cfg.AUG.INF_RESIZE_VALUE
value = max(img.shape[0], img.shape[1])
scale = float(random_size) / float(value)
img = cv2.resize(
img, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
if grt is not None:
grt = cv2.resize(
grt, (0, 0),
fx=scale,
fy=scale,
interpolation=cv2.INTER_NEAREST)
else:
raise Exception("Unexpect data augmention method: {}".format(
cfg.AUG.AUG_METHOD))
return img, grt, grt_instance
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
TEST_PATH = os.path.join(LOCAL_PATH, "../../../", "test")
sys.path.append(TEST_PATH)
from test_utils import download_file_and_uncompress
def download_tusimple_dataset(savepath, extrapath):
url = "https://paddleseg.bj.bcebos.com/dataset/tusimple_lane_detection.tar"
download_file_and_uncompress(
url=url, savepath=savepath, extrapath=extrapath)
if __name__ == "__main__":
download_tusimple_dataset(LOCAL_PATH, LOCAL_PATH)
print("Dataset download finish!")
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
# GPU memory garbage collection optimization flags
os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0"
import sys
cur_path = os.path.abspath(os.path.dirname(__file__))
root_path = os.path.split(os.path.split(cur_path)[0])[0]
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
SEG_PATH = os.path.join(LOCAL_PATH, "../../../")
sys.path.append(SEG_PATH)
sys.path.append(root_path)
import time
import argparse
import functools
import pprint
import cv2
import numpy as np
import paddle
import paddle.fluid as fluid
from utils.config import cfg
from pdseg.utils.timer import Timer, calculate_eta
from models.model_builder import build_model
from models.model_builder import ModelPhase
from reader import LaneNetDataset
def parse_args():
parser = argparse.ArgumentParser(description='PaddleSeg model evaluation')
parser.add_argument(
'--cfg',
dest='cfg_file',
help='Config file for training (and optionally testing)',
default=None,
type=str)
parser.add_argument(
'--use_gpu',
dest='use_gpu',
help='Use gpu or cpu',
action='store_true',
default=False)
parser.add_argument(
'--use_mpio',
dest='use_mpio',
help='Use multiprocess IO or not',
action='store_true',
default=False)
parser.add_argument(
'opts',
help='See utils/config.py for all options',
default=None,
nargs=argparse.REMAINDER)
if len(sys.argv) == 1:
parser.print_help()
sys.exit(1)
return parser.parse_args()
def evaluate(cfg, ckpt_dir=None, use_gpu=False, use_mpio=False, **kwargs):
np.set_printoptions(precision=5, suppress=True)
startup_prog = fluid.Program()
test_prog = fluid.Program()
dataset = LaneNetDataset(
file_list=cfg.DATASET.VAL_FILE_LIST,
mode=ModelPhase.TRAIN,
shuffle=True,
data_dir=cfg.DATASET.DATA_DIR)
def data_generator():
# TODO: check whether the batch reader is compatible with Windows
if use_mpio:
data_gen = dataset.multiprocess_generator(
num_processes=cfg.DATALOADER.NUM_WORKERS,
max_queue_size=cfg.DATALOADER.BUF_SIZE)
else:
data_gen = dataset.generator()
for b in data_gen:
yield b
py_reader, pred, grts, masks, accuracy, fp, fn = build_model(
test_prog, startup_prog, phase=ModelPhase.EVAL)
py_reader.decorate_sample_generator(
data_generator, drop_last=False, batch_size=cfg.BATCH_SIZE)
# Get device environment
places = fluid.cuda_places() if use_gpu else fluid.cpu_places()
place = places[0]
dev_count = len(places)
print("#Device count: {}".format(dev_count))
exe = fluid.Executor(place)
exe.run(startup_prog)
test_prog = test_prog.clone(for_test=True)
ckpt_dir = cfg.TEST.TEST_MODEL if not ckpt_dir else ckpt_dir
if ckpt_dir is not None:
print('load test model:', ckpt_dir)
fluid.io.load_params(exe, ckpt_dir, main_program=test_prog)
# Use streaming confusion matrix to calculate mean_iou
np.set_printoptions(
precision=4, suppress=True, linewidth=160, floatmode="fixed")
fetch_list = [pred.name, grts.name, masks.name, accuracy.name, fp.name, fn.name]
num_images = 0
step = 0
avg_acc = 0.0
avg_fp = 0.0
avg_fn = 0.0
# cur_images = 0
all_step = cfg.DATASET.TEST_TOTAL_IMAGES // cfg.BATCH_SIZE + 1
timer = Timer()
timer.start()
py_reader.start()
while True:
try:
step += 1
pred, grts, masks, out_acc, out_fp, out_fn = exe.run(
test_prog, fetch_list=fetch_list, return_numpy=True)
avg_acc += np.mean(out_acc) * pred.shape[0]
avg_fp += np.mean(out_fp) * pred.shape[0]
avg_fn += np.mean(out_fn) * pred.shape[0]
num_images += pred.shape[0]
speed = 1.0 / timer.elapsed_time()
print(
"[EVAL]step={} accuracy={:.4f} fp={:.4f} fn={:.4f} step/sec={:.2f} | ETA {}"
.format(step, avg_acc / num_images, avg_fp / num_images, avg_fn / num_images, speed,
calculate_eta(all_step - step, speed)))
timer.restart()
sys.stdout.flush()
except fluid.core.EOFException:
break
print("[EVAL]#image={} accuracy={:.4f} fp={:.4f} fn={:.4f}".format(
num_images, avg_acc / num_images, avg_fp / num_images, avg_fn / num_images))
return avg_acc / num_images, avg_fp / num_images, avg_fn / num_images
def main():
args = parse_args()
if args.cfg_file is not None:
cfg.update_from_file(args.cfg_file)
if args.opts:
cfg.update_from_list(args.opts)
cfg.check_and_infer()
print(pprint.pformat(cfg))
evaluate(cfg, **args.__dict__)
if __name__ == '__main__':
main()
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle.fluid as fluid
import numpy as np
from utils.config import cfg
def unsorted_segment_sum(data, segment_ids, unique_labels, feature_dims):
zeros = fluid.layers.fill_constant_batch_size_like(unique_labels, shape=[1, feature_dims],
dtype='float32', value=0)
segment_ids = fluid.layers.unsqueeze(segment_ids, axes=[1])
segment_ids.stop_gradient = True
segment_sum = fluid.layers.scatter_nd_add(zeros, segment_ids, data)
zeros.stop_gradient = True
return segment_sum
def norm(x, axis=-1):
distance = fluid.layers.reduce_sum(fluid.layers.abs(x), dim=axis, keep_dim=True)
return distance
def discriminative_loss_single(
prediction,
correct_label,
feature_dim,
label_shape,
delta_v,
delta_d,
param_var,
param_dist,
param_reg):
correct_label = fluid.layers.reshape(
correct_label, [
label_shape[1] * label_shape[0]])
prediction = fluid.layers.transpose(prediction, [1, 2, 0])
reshaped_pred = fluid.layers.reshape(
prediction, [
label_shape[1] * label_shape[0], feature_dim])
unique_labels, unique_id, counts = fluid.layers.unique_with_counts(correct_label)
correct_label.stop_gradient = True
counts = fluid.layers.cast(counts, 'float32')
num_instances = fluid.layers.shape(unique_labels)
segmented_sum = unsorted_segment_sum(
reshaped_pred, unique_id, unique_labels, feature_dims=feature_dim)
counts_rsp = fluid.layers.reshape(counts, (-1, 1))
mu = fluid.layers.elementwise_div(segmented_sum, counts_rsp)
counts_rsp.stop_gradient = True
mu_expand = fluid.layers.gather(mu, unique_id)
tmp = fluid.layers.elementwise_sub(mu_expand, reshaped_pred)
distance = norm(tmp)
distance = distance - delta_v
distance_pos = fluid.layers.greater_equal(distance, fluid.layers.zeros_like(distance))
distance_pos = fluid.layers.cast(distance_pos, 'float32')
distance = distance * distance_pos
distance = fluid.layers.square(distance)
l_var = unsorted_segment_sum(distance, unique_id, unique_labels, feature_dims=1)
l_var = fluid.layers.elementwise_div(l_var, counts_rsp)
l_var = fluid.layers.reduce_sum(l_var)
l_var = l_var / fluid.layers.cast(num_instances * (num_instances - 1), 'float32')
mu_interleaved_rep = fluid.layers.expand(mu, [num_instances, 1])
mu_band_rep = fluid.layers.expand(mu, [1, num_instances])
mu_band_rep = fluid.layers.reshape(mu_band_rep, (num_instances * num_instances, feature_dim))
mu_diff = fluid.layers.elementwise_sub(mu_band_rep, mu_interleaved_rep)
intermediate_tensor = fluid.layers.reduce_sum(fluid.layers.abs(mu_diff), dim=1)
intermediate_tensor.stop_gradient = True
zero_vector = fluid.layers.zeros([1], 'float32')
bool_mask = fluid.layers.not_equal(intermediate_tensor, zero_vector)
temp = fluid.layers.where(bool_mask)
mu_diff_bool = fluid.layers.gather(mu_diff, temp)
mu_norm = norm(mu_diff_bool)
mu_norm = 2. * delta_d - mu_norm
mu_norm_pos = fluid.layers.greater_equal(mu_norm, fluid.layers.zeros_like(mu_norm))
mu_norm_pos = fluid.layers.cast(mu_norm_pos, 'float32')
mu_norm = mu_norm * mu_norm_pos
mu_norm_pos.stop_gradient = True
mu_norm = fluid.layers.square(mu_norm)
l_dist = fluid.layers.reduce_mean(mu_norm)
l_reg = fluid.layers.reduce_mean(norm(mu, axis=1))
l_var = param_var * l_var
l_dist = param_dist * l_dist
l_reg = param_reg * l_reg
loss = l_var + l_dist + l_reg
return loss, l_var, l_dist, l_reg
def discriminative_loss(prediction, correct_label, feature_dim, image_shape,
delta_v, delta_d, param_var, param_dist, param_reg):
batch_size = int(cfg.BATCH_SIZE_PER_DEV)
output_ta_loss = 0.
output_ta_var = 0.
output_ta_dist = 0.
output_ta_reg = 0.
for i in range(batch_size):
disc_loss_single, l_var_single, l_dist_single, l_reg_single = discriminative_loss_single(
prediction[i], correct_label[i], feature_dim, image_shape, delta_v, delta_d, param_var, param_dist,
param_reg)
output_ta_loss += disc_loss_single
output_ta_var += l_var_single
output_ta_dist += l_dist_single
output_ta_reg += l_reg_single
disc_loss = output_ta_loss / batch_size
l_var = output_ta_var / batch_size
l_dist = output_ta_dist / batch_size
l_reg = output_ta_reg / batch_size
return disc_loss, l_var, l_dist, l_reg
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import models.modeling
#import models.backbone
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
sys.path.append("..")
import struct
import paddle.fluid as fluid
from paddle.fluid.proto.framework_pb2 import VarType
from pdseg import solver
from utils.config import cfg
from pdseg.loss import multi_softmax_with_loss
from loss import discriminative_loss
from models.modeling import lanenet
class ModelPhase(object):
"""
Standard name for model phase in PaddleSeg
The following standard keys are defined:
* `TRAIN`: training mode.
* `EVAL`: testing/evaluation mode.
* `PREDICT`: prediction/inference mode.
* `VISUAL` : visualization mode
"""
TRAIN = 'train'
EVAL = 'eval'
PREDICT = 'predict'
VISUAL = 'visual'
@staticmethod
def is_train(phase):
return phase == ModelPhase.TRAIN
@staticmethod
def is_predict(phase):
return phase == ModelPhase.PREDICT
@staticmethod
def is_eval(phase):
return phase == ModelPhase.EVAL
@staticmethod
def is_visual(phase):
return phase == ModelPhase.VISUAL
@staticmethod
def is_valid_phase(phase):
""" Check valid phase """
if ModelPhase.is_train(phase) or ModelPhase.is_predict(phase) \
or ModelPhase.is_eval(phase) or ModelPhase.is_visual(phase):
return True
return False
def seg_model(image, class_num):
model_name = cfg.MODEL.MODEL_NAME
if model_name == 'lanenet':
logits = lanenet.lanenet(image, class_num)
else:
raise Exception(
"unknow model name, only support unet, deeplabv3p, icnet, pspnet, hrnet"
)
return logits
def softmax(logit):
logit = fluid.layers.transpose(logit, [0, 2, 3, 1])
logit = fluid.layers.softmax(logit)
logit = fluid.layers.transpose(logit, [0, 3, 1, 2])
return logit
def sigmoid_to_softmax(logit):
"""
one channel to two channel
"""
logit = fluid.layers.transpose(logit, [0, 2, 3, 1])
logit = fluid.layers.sigmoid(logit)
logit_back = 1 - logit
logit = fluid.layers.concat([logit_back, logit], axis=-1)
logit = fluid.layers.transpose(logit, [0, 3, 1, 2])
return logit
def build_model(main_prog, start_prog, phase=ModelPhase.TRAIN):
if not ModelPhase.is_valid_phase(phase):
raise ValueError("ModelPhase {} is not valid!".format(phase))
if ModelPhase.is_train(phase):
width = cfg.TRAIN_CROP_SIZE[0]
height = cfg.TRAIN_CROP_SIZE[1]
else:
width = cfg.EVAL_CROP_SIZE[0]
height = cfg.EVAL_CROP_SIZE[1]
image_shape = [cfg.DATASET.DATA_DIM, height, width]
grt_shape = [1, height, width]
class_num = cfg.DATASET.NUM_CLASSES
with fluid.program_guard(main_prog, start_prog):
with fluid.unique_name.guard():
image = fluid.layers.data(
name='image', shape=image_shape, dtype='float32')
label = fluid.layers.data(
name='label', shape=grt_shape, dtype='int32')
if cfg.MODEL.MODEL_NAME == 'lanenet':
label_instance = fluid.layers.data(
name='label_instance', shape=grt_shape, dtype='int32')
mask = fluid.layers.data(
name='mask', shape=grt_shape, dtype='int32')
# use PyReader when doing training and evaluation
if ModelPhase.is_train(phase) or ModelPhase.is_eval(phase):
py_reader = fluid.io.PyReader(
feed_list=[image, label, label_instance, mask],
capacity=cfg.DATALOADER.BUF_SIZE,
iterable=False,
use_double_buffer=True)
loss_type = cfg.SOLVER.LOSS
if not isinstance(loss_type, list):
loss_type = list(loss_type)
logits = seg_model(image, class_num)
if ModelPhase.is_train(phase):
loss_valid = False
valid_loss = []
if cfg.MODEL.MODEL_NAME == 'lanenet':
embeding_logit = logits[1]
logits = logits[0]
disc_loss, _, _, l_reg = discriminative_loss(embeding_logit, label_instance, 4,
image_shape[1:], 0.5, 3.0, 1.0, 1.0, 0.001)
if "softmax_loss" in loss_type:
weight = None
if cfg.MODEL.MODEL_NAME == 'lanenet':
weight = get_dynamic_weight(label)
seg_loss = multi_softmax_with_loss(logits, label, mask, class_num, weight)
loss_valid = True
valid_loss.append("softmax_loss")
if not loss_valid:
raise Exception("SOLVER.LOSS: {} is set wrong. it should "
"include one of (softmax_loss, bce_loss, dice_loss) at least"
" example: ['softmax_loss']".format(cfg.SOLVER.LOSS))
invalid_loss = [x for x in loss_type if x not in valid_loss]
if len(invalid_loss) > 0:
print("Warning: the loss {} you set is invalid. it will not be included in loss computed.".format(invalid_loss))
avg_loss = disc_loss + 0.00001 * l_reg + seg_loss
#get pred result in original size
if isinstance(logits, tuple):
logit = logits[0]
else:
logit = logits
if logit.shape[2:] != label.shape[2:]:
logit = fluid.layers.resize_bilinear(logit, label.shape[2:])
# return image input and logit output for inference graph prune
if ModelPhase.is_predict(phase):
if class_num == 1:
logit = sigmoid_to_softmax(logit)
else:
logit = softmax(logit)
return image, logit
if class_num == 1:
out = sigmoid_to_softmax(logit)
out = fluid.layers.transpose(out, [0, 2, 3, 1])
else:
out = fluid.layers.transpose(logit, [0, 2, 3, 1])
pred = fluid.layers.argmax(out, axis=3)
pred = fluid.layers.unsqueeze(pred, axes=[3])
if ModelPhase.is_visual(phase):
if cfg.MODEL.MODEL_NAME == 'lanenet':
return pred, logits[1]
if class_num == 1:
logit = sigmoid_to_softmax(logit)
else:
logit = softmax(logit)
return pred, logit
accuracy, fp, fn = compute_metric(pred, label)
if ModelPhase.is_eval(phase):
return py_reader, pred, label, mask, accuracy, fp, fn
if ModelPhase.is_train(phase):
optimizer = solver.Solver(main_prog, start_prog)
decayed_lr = optimizer.optimise(avg_loss)
return py_reader, avg_loss, decayed_lr, pred, label, mask, disc_loss, seg_loss, accuracy, fp, fn
def compute_metric(pred, label):
label = fluid.layers.transpose(label, [0, 2, 3, 1])
idx = fluid.layers.where(pred == 1)
pix_cls_ret = fluid.layers.gather_nd(label, idx)
correct_num = fluid.layers.reduce_sum(fluid.layers.cast(pix_cls_ret, 'float32'))
gt_num = fluid.layers.cast(fluid.layers.shape(fluid.layers.gather_nd(label,
fluid.layers.where(label == 1)))[0], 'int64')
pred_num = fluid.layers.cast(fluid.layers.shape(fluid.layers.gather_nd(pred, idx))[0], 'int64')
accuracy = correct_num / gt_num
false_pred = pred_num - correct_num
fp = fluid.layers.cast(false_pred, 'float32') / fluid.layers.cast(fluid.layers.shape(pix_cls_ret)[0], 'int64')
label_cls_ret = fluid.layers.gather_nd(label, fluid.layers.where(label == 1))
mis_pred = fluid.layers.cast(fluid.layers.shape(label_cls_ret)[0], 'int64') - correct_num
fn = fluid.layers.cast(mis_pred, 'float32') / fluid.layers.cast(fluid.layers.shape(label_cls_ret)[0], 'int64')
accuracy.stop_gradient = True
fp.stop_gradient = True
fn.stop_gradient = True
return accuracy, fp, fn
def get_dynamic_weight(label):
label = fluid.layers.reshape(label, [-1])
unique_labels, unique_id, counts = fluid.layers.unique_with_counts(label)
counts = fluid.layers.cast(counts, 'float32')
weight = 1.0 / fluid.layers.log((counts / fluid.layers.reduce_sum(counts) + 1.02))
return weight
def to_int(string, dest="I"):
return struct.unpack(dest, string)[0]
def parse_shape_from_file(filename):
with open(filename, "rb") as file:
version = file.read(4)
lod_level = to_int(file.read(8), dest="Q")
for i in range(lod_level):
_size = to_int(file.read(8), dest="Q")
_ = file.read(_size)
version = file.read(4)
tensor_desc_size = to_int(file.read(4))
tensor_desc = VarType.TensorDesc()
tensor_desc.ParseFromString(file.read(tensor_desc_size))
return tuple(tensor_desc.dims)
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import division
from __future__ import print_function
import paddle.fluid as fluid
from utils.config import cfg
from pdseg.models.libs.model_libs import scope, name_scope
from pdseg.models.libs.model_libs import bn, bn_relu, relu
from pdseg.models.libs.model_libs import conv, max_pool, deconv
from pdseg.models.backbone.vgg import VGGNet as vgg_backbone
#from models.backbone.vgg import VGGNet as vgg_backbone
# Bottleneck type
REGULAR = 1
DOWNSAMPLING = 2
UPSAMPLING = 3
DILATED = 4
ASYMMETRIC = 5
def prelu(x, decoder=False):
# If decoder, then perform relu else perform prelu
if decoder:
return fluid.layers.relu(x)
return fluid.layers.prelu(x, 'channel')
def iniatial_block(inputs, name_scope='iniatial_block'):
'''
The initial block for Enet has 2 branches: The convolution branch and Maxpool branch.
The conv branch has 13 filters, while the maxpool branch gives 3 channels corresponding to the RGB channels.
Both output layers are then concatenated to give an output of 16 channels.
:param inputs(Tensor): A 4D tensor of shape [batch_size, height, width, channels]
:return net_concatenated(Tensor): a 4D Tensor of new shape [batch_size, height, width, channels]
'''
# Convolutional branch
with scope(name_scope):
net_conv = conv(inputs, 13, 3, stride=2, padding=1)
net_conv = bn(net_conv)
net_conv = fluid.layers.prelu(net_conv, 'channel')
# Max pool branch
net_pool = max_pool(inputs, [2, 2], stride=2, padding='SAME')
# Concatenated output - does it matter max pool comes first or conv comes first? probably not.
net_concatenated = fluid.layers.concat([net_conv, net_pool], axis=1)
return net_concatenated
def bottleneck(inputs,
output_depth,
filter_size,
regularizer_prob,
projection_ratio=4,
type=REGULAR,
seed=0,
output_shape=None,
dilation_rate=None,
decoder=False,
name_scope='bottleneck'):
# Calculate the depth reduction based on the projection ratio used in 1x1 convolution.
reduced_depth = int(inputs.shape[1] / projection_ratio)
# DOWNSAMPLING BOTTLENECK
if type == DOWNSAMPLING:
#=============MAIN BRANCH=============
#Just perform a max pooling
with scope('down_sample'):
inputs_shape = inputs.shape
with scope('main_max_pool'):
net_main = fluid.layers.conv2d(inputs, inputs_shape[1], filter_size=3, stride=2, padding='SAME')
#First get the difference in depth to pad, then pad with zeros only on the last dimension.
depth_to_pad = abs(inputs_shape[1] - output_depth)
paddings = [0, 0, 0, depth_to_pad, 0, 0, 0, 0]
with scope('main_padding'):
net_main = fluid.layers.pad(net_main, paddings=paddings)
with scope('block1'):
net = conv(inputs, reduced_depth, [2, 2], stride=2, padding='same')
net = bn(net)
net = prelu(net, decoder=decoder)
with scope('block2'):
net = conv(net, reduced_depth, [filter_size, filter_size], padding='same')
net = bn(net)
net = prelu(net, decoder=decoder)
with scope('block3'):
net = conv(net, output_depth, [1, 1], padding='same')
net = bn(net)
net = prelu(net, decoder=decoder)
# Regularizer
net = fluid.layers.dropout(net, regularizer_prob, seed=seed)
# Finally, combine the two branches together via an element-wise addition
net = fluid.layers.elementwise_add(net, net_main)
net = prelu(net, decoder=decoder)
return net, inputs_shape
# DILATION CONVOLUTION BOTTLENECK
# Everything is the same as a regular bottleneck except for the dilation rate argument
elif type == DILATED:
#Check if dilation rate is given
if not dilation_rate:
raise ValueError('Dilation rate is not given.')
with scope('dilated'):
# Save the main branch for addition later
net_main = inputs
# First projection with 1x1 kernel (dimensionality reduction)
with scope('block1'):
net = conv(inputs, reduced_depth, [1, 1])
net = bn(net)
net = prelu(net, decoder=decoder)
# Second conv block --- apply dilated convolution here
with scope('block2'):
net = conv(net, reduced_depth, filter_size, padding='SAME', dilation=dilation_rate)
net = bn(net)
net = prelu(net, decoder=decoder)
# Final projection with 1x1 kernel (Expansion)
with scope('block3'):
net = conv(net, output_depth, [1,1])
net = bn(net)
net = prelu(net, decoder=decoder)
# Regularizer
net = fluid.layers.dropout(net, regularizer_prob, seed=seed)
net = prelu(net, decoder=decoder)
# Add the main branch
net = fluid.layers.elementwise_add(net_main, net)
net = prelu(net, decoder=decoder)
return net
# ASYMMETRIC CONVOLUTION BOTTLENECK
# Everything is the same as a regular bottleneck except for a [5,5] kernel decomposed into two [5,1] then [1,5]
elif type == ASYMMETRIC:
# Save the main branch for addition later
with scope('asymmetric'):
net_main = inputs
# First projection with 1x1 kernel (dimensionality reduction)
with scope('block1'):
net = conv(inputs, reduced_depth, [1, 1])
net = bn(net)
net = prelu(net, decoder=decoder)
# Second conv block --- apply asymmetric conv here
with scope('block2'):
with scope('asymmetric_conv2a'):
net = conv(net, reduced_depth, [filter_size, 1], padding='same')
with scope('asymmetric_conv2b'):
net = conv(net, reduced_depth, [1, filter_size], padding='same')
net = bn(net)
net = prelu(net, decoder=decoder)
# Final projection with 1x1 kernel
with scope('block3'):
net = conv(net, output_depth, [1, 1])
net = bn(net)
net = prelu(net, decoder=decoder)
# Regularizer
net = fluid.layers.dropout(net, regularizer_prob, seed=seed)
net = prelu(net, decoder=decoder)
# Add the main branch
net = fluid.layers.elementwise_add(net_main, net)
net = prelu(net, decoder=decoder)
return net
# UPSAMPLING BOTTLENECK
# Everything is the same as a regular one, except convolution becomes transposed.
elif type == UPSAMPLING:
        # Check whether output_shape is given
        if output_shape is None:
            raise ValueError('Output shape is not given.')
#=======MAIN BRANCH=======
            # Main branch to upsample. The output shape must match the shape of the layer that was
            # pooled initially so that the pooling indices would work correctly. However, the initially
            # pooled layer was padded, so the depth has to be reduced before unpooling. In the paper,
            # padding is replaced with a convolution for this purpose of reducing the depth.
with scope('upsampling'):
with scope('unpool'):
net_unpool = conv(inputs, output_depth, [1, 1])
net_unpool = bn(net_unpool)
net_unpool = fluid.layers.resize_bilinear(net_unpool, out_shape=output_shape[2:])
# First 1x1 projection to reduce depth
with scope('block1'):
net = conv(inputs, reduced_depth, [1, 1])
net = bn(net)
net = prelu(net, decoder=decoder)
with scope('block2'):
net = deconv(net, reduced_depth, filter_size=filter_size, stride=2, padding='same')
net = bn(net)
net = prelu(net, decoder=decoder)
# Final projection with 1x1 kernel
with scope('block3'):
net = conv(net, output_depth, [1, 1])
net = bn(net)
net = prelu(net, decoder=decoder)
# Regularizer
net = fluid.layers.dropout(net, regularizer_prob, seed=seed)
net = prelu(net, decoder=decoder)
# Finally, add the unpooling layer and the sub branch together
net = fluid.layers.elementwise_add(net, net_unpool)
net = prelu(net, decoder=decoder)
return net
# REGULAR BOTTLENECK
else:
with scope('regular'):
net_main = inputs
# First projection with 1x1 kernel
with scope('block1'):
net = conv(inputs, reduced_depth, [1, 1])
net = bn(net)
net = prelu(net, decoder=decoder)
# Second conv block
with scope('block2'):
net = conv(net, reduced_depth, [filter_size, filter_size], padding='same')
net = bn(net)
net = prelu(net, decoder=decoder)
# Final projection with 1x1 kernel
with scope('block3'):
net = conv(net, output_depth, [1, 1])
net = bn(net)
net = prelu(net, decoder=decoder)
# Regularizer
net = fluid.layers.dropout(net, regularizer_prob, seed=seed)
net = prelu(net, decoder=decoder)
# Add the main branch
net = fluid.layers.elementwise_add(net_main, net)
net = prelu(net, decoder=decoder)
return net
def ENet_stage1(inputs, name_scope='stage1_block'):
with scope(name_scope):
with scope('bottleneck1_0'):
net, inputs_shape_1 \
= bottleneck(inputs, output_depth=64, filter_size=3, regularizer_prob=0.01, type=DOWNSAMPLING,
name_scope='bottleneck1_0')
with scope('bottleneck1_1'):
net = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.01,
name_scope='bottleneck1_1')
with scope('bottleneck1_2'):
net = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.01,
name_scope='bottleneck1_2')
with scope('bottleneck1_3'):
net = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.01,
name_scope='bottleneck1_3')
with scope('bottleneck1_4'):
net = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.01,
name_scope='bottleneck1_4')
return net, inputs_shape_1
def ENet_stage2(inputs, name_scope='stage2_block'):
with scope(name_scope):
net, inputs_shape_2 \
= bottleneck(inputs, output_depth=128, filter_size=3, regularizer_prob=0.1, type=DOWNSAMPLING,
name_scope='bottleneck2_0')
for i in range(2):
with scope('bottleneck2_{}'.format(str(4 * i + 1))):
net = bottleneck(net, output_depth=128, filter_size=3, regularizer_prob=0.1,
name_scope='bottleneck2_{}'.format(str(4 * i + 1)))
with scope('bottleneck2_{}'.format(str(4 * i + 2))):
net = bottleneck(net, output_depth=128, filter_size=3, regularizer_prob=0.1, type=DILATED, dilation_rate=(2 ** (2*i+1)),
name_scope='bottleneck2_{}'.format(str(4 * i + 2)))
with scope('bottleneck2_{}'.format(str(4 * i + 3))):
net = bottleneck(net, output_depth=128, filter_size=5, regularizer_prob=0.1, type=ASYMMETRIC,
name_scope='bottleneck2_{}'.format(str(4 * i + 3)))
with scope('bottleneck2_{}'.format(str(4 * i + 4))):
net = bottleneck(net, output_depth=128, filter_size=3, regularizer_prob=0.1, type=DILATED, dilation_rate=(2 ** (2*i+2)),
name_scope='bottleneck2_{}'.format(str(4 * i + 4)))
return net, inputs_shape_2
def ENet_stage3(inputs, name_scope='stage3_block'):
    with scope(name_scope):
        # Carry the running `net` through the loop so the repeated bottlenecks are sequential
        net = inputs
        for i in range(2):
            with scope('bottleneck3_{}'.format(str(4 * i + 0))):
                net = bottleneck(net, output_depth=128, filter_size=3, regularizer_prob=0.1,
                                 name_scope='bottleneck3_{}'.format(str(4 * i + 0)))
with scope('bottleneck3_{}'.format(str(4 * i + 1))):
net = bottleneck(net, output_depth=128, filter_size=3, regularizer_prob=0.1, type=DILATED, dilation_rate=(2 ** (2*i+1)),
name_scope='bottleneck3_{}'.format(str(4 * i + 1)))
with scope('bottleneck3_{}'.format(str(4 * i + 2))):
net = bottleneck(net, output_depth=128, filter_size=5, regularizer_prob=0.1, type=ASYMMETRIC,
name_scope='bottleneck3_{}'.format(str(4 * i + 2)))
with scope('bottleneck3_{}'.format(str(4 * i + 3))):
net = bottleneck(net, output_depth=128, filter_size=3, regularizer_prob=0.1, type=DILATED, dilation_rate=(2 ** (2*i+2)),
name_scope='bottleneck3_{}'.format(str(4 * i + 3)))
return net
def ENet_stage4(inputs, inputs_shape, connect_tensor,
skip_connections=True, name_scope='stage4_block'):
with scope(name_scope):
with scope('bottleneck4_0'):
net = bottleneck(inputs, output_depth=64, filter_size=3, regularizer_prob=0.1,
type=UPSAMPLING, decoder=True, output_shape=inputs_shape,
name_scope='bottleneck4_0')
if skip_connections:
net = fluid.layers.elementwise_add(net, connect_tensor)
with scope('bottleneck4_1'):
net = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.1, decoder=True,
name_scope='bottleneck4_1')
with scope('bottleneck4_2'):
net = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.1, decoder=True,
name_scope='bottleneck4_2')
return net
def ENet_stage5(inputs, inputs_shape, connect_tensor, skip_connections=True,
name_scope='stage5_block'):
with scope(name_scope):
net = bottleneck(inputs, output_depth=16, filter_size=3, regularizer_prob=0.1, type=UPSAMPLING,
decoder=True, output_shape=inputs_shape,
name_scope='bottleneck5_0')
if skip_connections:
net = fluid.layers.elementwise_add(net, connect_tensor)
with scope('bottleneck5_1'):
net = bottleneck(net, output_depth=16, filter_size=3, regularizer_prob=0.1, decoder=True,
name_scope='bottleneck5_1')
return net
def decoder(input, num_classes):
if 'enet' in cfg.MODEL.LANENET.BACKBONE:
# Segmentation branch
with scope('LaneNetSeg'):
initial, stage1, stage2, inputs_shape_1, inputs_shape_2 = input
segStage3 = ENet_stage3(stage2)
segStage4 = ENet_stage4(segStage3, inputs_shape_2, stage1)
segStage5 = ENet_stage5(segStage4, inputs_shape_1, initial)
segLogits = deconv(segStage5, num_classes, filter_size=2, stride=2, padding='SAME')
# Embedding branch
with scope('LaneNetEm'):
emStage3 = ENet_stage3(stage2)
emStage4 = ENet_stage4(emStage3, inputs_shape_2, stage1)
emStage5 = ENet_stage5(emStage4, inputs_shape_1, initial)
emLogits = deconv(emStage5, 4, filter_size=2, stride=2, padding='SAME')
elif 'vgg' in cfg.MODEL.LANENET.BACKBONE:
encoder_list = ['pool5', 'pool4', 'pool3']
# score stage
input_tensor = input[encoder_list[0]]
with scope('score_origin'):
score = conv(input_tensor, 64, 1)
encoder_list = encoder_list[1:]
for i in range(len(encoder_list)):
with scope('deconv_{:d}'.format(i + 1)):
deconv_out = deconv(score, 64, filter_size=4, stride=2, padding='SAME')
input_tensor = input[encoder_list[i]]
with scope('score_{:d}'.format(i + 1)):
score = conv(input_tensor, 64, 1)
score = fluid.layers.elementwise_add(deconv_out, score)
with scope('deconv_final'):
emLogits = deconv(score, 64, filter_size=16, stride=8, padding='SAME')
with scope('score_final'):
segLogits = conv(emLogits, num_classes, 1)
emLogits = relu(conv(emLogits, 4, 1))
return segLogits, emLogits
def encoder(input):
if 'vgg' in cfg.MODEL.LANENET.BACKBONE:
model = vgg_backbone(layers=16)
#output = model.net(input)
_, encode_feature_dict = model.net(input, end_points=13, decode_points=[7, 10, 13])
output = {}
output['pool3'] = encode_feature_dict[7]
output['pool4'] = encode_feature_dict[10]
output['pool5'] = encode_feature_dict[13]
    elif 'enet' in cfg.MODEL.LANENET.BACKBONE:
with scope('LaneNetBase'):
            initial = initial_block(input)
stage1, inputs_shape_1 = ENet_stage1(initial)
stage2, inputs_shape_2 = ENet_stage2(stage1)
output = (initial, stage1, stage2, inputs_shape_1, inputs_shape_2)
else:
        raise Exception("LaneNet expects an 'enet' or 'vgg' backbone, but received {}".
format(cfg.MODEL.LANENET.BACKBONE))
return output
def lanenet(img, num_classes):
output = encoder(img)
segLogits, emLogits = decoder(output, num_classes)
return segLogits, emLogits
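# Note: lanenet() returns two heads: segLogits (num_classes channels, the lane segmentation
# branch) and emLogits (a 4-channel pixel embedding), which the DBSCAN-based post-processing
# further below clusters into individual lane instances.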
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import sys
import os
import time
import codecs
import numpy as np
import cv2
from utils.config import cfg
import data_aug as aug
from pdseg.data_utils import GeneratorEnqueuer
from models.model_builder import ModelPhase
import copy
def cv2_imread(file_path, flag=cv2.IMREAD_COLOR):
    # Work around cv2.imread failing on file paths that contain non-ASCII (e.g. Chinese) characters on Windows.
return cv2.imdecode(np.fromfile(file_path, dtype=np.uint8), flag)
class LaneNetDataset():
def __init__(self,
file_list,
data_dir,
shuffle=False,
mode=ModelPhase.TRAIN):
self.mode = mode
self.shuffle = shuffle
self.data_dir = data_dir
self.shuffle_seed = 0
        # NOTE: Please ensure the file list is saved in UTF-8 encoding
with codecs.open(file_list, 'r', 'utf-8') as flist:
self.lines = [line.strip() for line in flist]
self.all_lines = copy.deepcopy(self.lines)
if shuffle and cfg.NUM_TRAINERS > 1:
np.random.RandomState(self.shuffle_seed).shuffle(self.all_lines)
elif shuffle:
np.random.shuffle(self.lines)
def generator(self):
if self.shuffle and cfg.NUM_TRAINERS > 1:
np.random.RandomState(self.shuffle_seed).shuffle(self.all_lines)
num_lines = len(self.all_lines) // cfg.NUM_TRAINERS
self.lines = self.all_lines[num_lines * cfg.TRAINER_ID: num_lines * (cfg.TRAINER_ID + 1)]
self.shuffle_seed += 1
elif self.shuffle:
np.random.shuffle(self.lines)
for line in self.lines:
yield self.process_image(line, self.data_dir, self.mode)
def sharding_generator(self, pid=0, num_processes=1):
"""
        Use the line index as the shard key for multiprocess I/O.
        It behaves as a normal generator when pid=0 and num_processes=1.
"""
for index, line in enumerate(self.lines):
# Use index and pid to shard file list
if index % num_processes == pid:
yield self.process_image(line, self.data_dir, self.mode)
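    # Example (illustrative): with num_processes=4, the worker with pid=1 yields only
    # lines 1, 5, 9, ... of self.lines, so the workers cover the file list exactly once
    # without overlap.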
def batch_reader(self, batch_size):
br = self.batch(self.reader, batch_size)
for batch in br:
yield batch[0], batch[1], batch[2]
def multiprocess_generator(self, max_queue_size=32, num_processes=8):
# Re-shuffle file list
if self.shuffle and cfg.NUM_TRAINERS > 1:
np.random.RandomState(self.shuffle_seed).shuffle(self.all_lines)
            num_lines = len(self.all_lines) // cfg.NUM_TRAINERS
            self.lines = self.all_lines[num_lines * cfg.TRAINER_ID: num_lines * (cfg.TRAINER_ID + 1)]
self.shuffle_seed += 1
elif self.shuffle:
np.random.shuffle(self.lines)
# Create multiple sharding generators according to num_processes for multiple processes
generators = []
for pid in range(num_processes):
generators.append(self.sharding_generator(pid, num_processes))
try:
enqueuer = GeneratorEnqueuer(generators)
enqueuer.start(max_queue_size=max_queue_size, workers=num_processes)
while True:
generator_out = None
while enqueuer.is_running():
if not enqueuer.queue.empty():
generator_out = enqueuer.queue.get(timeout=5)
break
else:
time.sleep(0.01)
if generator_out is None:
break
yield generator_out
finally:
if enqueuer is not None:
enqueuer.stop()
def batch(self, reader, batch_size, is_test=False, drop_last=False):
def batch_reader(is_test=False, drop_last=drop_last):
if is_test:
imgs, grts, grts_instance, img_names, valid_shapes, org_shapes = [], [], [], [], [], []
for img, grt, grt_instance, img_name, valid_shape, org_shape in reader():
imgs.append(img)
grts.append(grt)
grts_instance.append(grt_instance)
img_names.append(img_name)
valid_shapes.append(valid_shape)
org_shapes.append(org_shape)
if len(imgs) == batch_size:
yield np.array(imgs), np.array(
grts), np.array(grts_instance), img_names, np.array(valid_shapes), np.array(
org_shapes)
imgs, grts, grts_instance, img_names, valid_shapes, org_shapes = [], [], [], [], [], []
if not drop_last and len(imgs) > 0:
yield np.array(imgs), np.array(grts), np.array(grts_instance), img_names, np.array(
valid_shapes), np.array(org_shapes)
else:
imgs, labs, labs_instance, ignore = [], [], [], []
bs = 0
for img, lab, lab_instance, ig in reader():
imgs.append(img)
labs.append(lab)
labs_instance.append(lab_instance)
ignore.append(ig)
bs += 1
if bs == batch_size:
yield np.array(imgs), np.array(labs), np.array(labs_instance), np.array(ignore)
bs = 0
imgs, labs, labs_instance, ignore = [], [], [], []
if not drop_last and bs > 0:
yield np.array(imgs), np.array(labs), np.array(labs_instance), np.array(ignore)
return batch_reader(is_test, drop_last)
def load_image(self, line, src_dir, mode=ModelPhase.TRAIN):
# original image cv2.imread flag setting
cv2_imread_flag = cv2.IMREAD_COLOR
if cfg.DATASET.IMAGE_TYPE == "rgba":
            # If the RGBA 4-channel image type is used, use the IMREAD_UNCHANGED flag to
            # preserve the alpha channel
cv2_imread_flag = cv2.IMREAD_UNCHANGED
parts = line.strip().split(cfg.DATASET.SEPARATOR)
if len(parts) != 3:
if mode == ModelPhase.TRAIN or mode == ModelPhase.EVAL:
                raise Exception(
                    "File list format incorrect! It should be"
                    " image_name{sep}label_name{sep}instance_label_name\\n".format(
                        sep=cfg.DATASET.SEPARATOR))
img_name, grt_name, grt_instance_name = parts[0], None, None
else:
img_name, grt_name, grt_instance_name = parts[0], parts[1], parts[2]
img_path = os.path.join(src_dir, img_name)
img = cv2_imread(img_path, cv2_imread_flag)
if grt_name is not None:
grt_path = os.path.join(src_dir, grt_name)
grt_instance_path = os.path.join(src_dir, grt_instance_name)
grt = cv2_imread(grt_path, cv2.IMREAD_GRAYSCALE)
grt[grt == 255] = 1
grt[grt != 1] = 0
grt_instance = cv2_imread(grt_instance_path, cv2.IMREAD_GRAYSCALE)
else:
grt = None
grt_instance = None
if img is None:
raise Exception(
"Empty image, src_dir: {}, img: {} & lab: {}".format(
src_dir, img_path, grt_path))
img_height = img.shape[0]
img_width = img.shape[1]
if grt is not None:
grt_height = grt.shape[0]
grt_width = grt.shape[1]
if img_height != grt_height or img_width != grt_width:
                    raise Exception(
                        "source image and label image must have the same size")
else:
if mode == ModelPhase.TRAIN or mode == ModelPhase.EVAL:
raise Exception(
"Empty image, src_dir: {}, img: {} & lab: {}".format(
src_dir, img_path, grt_path))
if len(img.shape) < 3:
img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
img_channels = img.shape[2]
if img_channels < 3:
raise Exception("PaddleSeg only supports gray, rgb or rgba image")
if img_channels != cfg.DATASET.DATA_DIM:
            raise Exception(
                "Input image channel({}) does not match cfg.DATASET.DATA_DIM({}), img_name={}"
                .format(img_channels, cfg.DATASET.DATA_DIM, img_name))
if img_channels != len(cfg.MEAN):
raise Exception(
"img name {}, img chns {} mean size {}, size unequal".format(
img_name, img_channels, len(cfg.MEAN)))
if img_channels != len(cfg.STD):
raise Exception(
"img name {}, img chns {} std size {}, size unequal".format(
img_name, img_channels, len(cfg.STD)))
return img, grt, grt_instance, img_name, grt_name
    def normalize_image(self, img):
        """Scale pixels to [0, 1], then subtract the mean and divide by the std."""
img = img.transpose((2, 0, 1)).astype('float32') / 255.0
img_mean = np.array(cfg.MEAN).reshape((len(cfg.MEAN), 1, 1))
img_std = np.array(cfg.STD).reshape((len(cfg.STD), 1, 1))
img -= img_mean
img /= img_std
return img
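    # Worked example (with the default cfg.MEAN = cfg.STD = [0.5, 0.5, 0.5]): a pixel
    # value of 255 becomes 255 / 255.0 = 1.0 and then (1.0 - 0.5) / 0.5 = 1.0, while a
    # pixel value of 0 maps to (0.0 - 0.5) / 0.5 = -1.0.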
def process_image(self, line, data_dir, mode):
""" process_image """
img, grt, grt_instance, img_name, grt_name = self.load_image(
line, data_dir, mode=mode)
if mode == ModelPhase.TRAIN:
img, grt, grt_instance = aug.resize(img, grt, grt_instance, mode)
if cfg.AUG.RICH_CROP.ENABLE:
if cfg.AUG.RICH_CROP.BLUR:
if cfg.AUG.RICH_CROP.BLUR_RATIO <= 0:
n = 0
elif cfg.AUG.RICH_CROP.BLUR_RATIO >= 1:
n = 1
else:
n = int(1.0 / cfg.AUG.RICH_CROP.BLUR_RATIO)
if n > 0:
if np.random.randint(0, n) == 0:
radius = np.random.randint(3, 10)
if radius % 2 != 1:
radius = radius + 1
if radius > 9:
radius = 9
img = cv2.GaussianBlur(img, (radius, radius), 0, 0)
img, grt = aug.random_rotation(
img,
grt,
rich_crop_max_rotation=cfg.AUG.RICH_CROP.MAX_ROTATION,
mean_value=cfg.DATASET.PADDING_VALUE)
img, grt = aug.rand_scale_aspect(
img,
grt,
rich_crop_min_scale=cfg.AUG.RICH_CROP.MIN_AREA_RATIO,
rich_crop_aspect_ratio=cfg.AUG.RICH_CROP.ASPECT_RATIO)
img = aug.hsv_color_jitter(
img,
brightness_jitter_ratio=cfg.AUG.RICH_CROP.
BRIGHTNESS_JITTER_RATIO,
saturation_jitter_ratio=cfg.AUG.RICH_CROP.
SATURATION_JITTER_RATIO,
contrast_jitter_ratio=cfg.AUG.RICH_CROP.
CONTRAST_JITTER_RATIO)
if cfg.AUG.FLIP:
if cfg.AUG.FLIP_RATIO <= 0:
n = 0
elif cfg.AUG.FLIP_RATIO >= 1:
n = 1
else:
n = int(1.0 / cfg.AUG.FLIP_RATIO)
if n > 0:
if np.random.randint(0, n) == 0:
img = img[::-1, :, :]
grt = grt[::-1, :]
if cfg.AUG.MIRROR:
if np.random.randint(0, 2) == 1:
img = img[:, ::-1, :]
grt = grt[:, ::-1]
img, grt = aug.rand_crop(img, grt, mode=mode)
elif ModelPhase.is_eval(mode):
img, grt, grt_instance = aug.resize(img, grt, grt_instance, mode=mode)
elif ModelPhase.is_visual(mode):
ori_img = img.copy()
img, grt, grt_instance = aug.resize(img, grt, grt_instance, mode=mode)
valid_shape = [img.shape[0], img.shape[1]]
else:
raise ValueError("Dataset mode={} Error!".format(mode))
# Normalize image
img = self.normalize_image(img)
if ModelPhase.is_train(mode) or ModelPhase.is_eval(mode):
grt = np.expand_dims(np.array(grt).astype('int32'), axis=0)
ignore = (grt != cfg.DATASET.IGNORE_INDEX).astype('int32')
if ModelPhase.is_train(mode):
return (img, grt, grt_instance, ignore)
elif ModelPhase.is_eval(mode):
return (img, grt, grt_instance, ignore)
elif ModelPhase.is_visual(mode):
return (img, grt, grt_instance, img_name, valid_shape, ori_img)
pre-commit
yapf == 0.26.0
flake8
pyyaml >= 5.1
tb-paddle
tensorboard >= 1.15.0
Pillow
numpy
six
opencv-python
tqdm
requests
sklearn
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
# GPU memory garbage collection optimization flags
os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0"
import sys
cur_path = os.path.abspath(os.path.dirname(__file__))
root_path = os.path.split(os.path.split(cur_path)[0])[0]
SEG_PATH = os.path.join(cur_path, "../../../")
sys.path.append(SEG_PATH)
sys.path.append(root_path)
import argparse
import pprint
import numpy as np
import paddle.fluid as fluid
from utils.config import cfg
from pdseg.utils.timer import Timer, calculate_eta
from reader import LaneNetDataset
from models.model_builder import build_model
from models.model_builder import ModelPhase
from models.model_builder import parse_shape_from_file
from eval import evaluate
from vis import visualize
from utils import dist_utils
def parse_args():
parser = argparse.ArgumentParser(description='PaddleSeg training')
parser.add_argument(
'--cfg',
dest='cfg_file',
help='Config file for training (and optionally testing)',
default=None,
type=str)
parser.add_argument(
'--use_gpu',
dest='use_gpu',
help='Use gpu or cpu',
action='store_true',
default=False)
parser.add_argument(
'--use_mpio',
dest='use_mpio',
help='Use multiprocess I/O or not',
action='store_true',
default=False)
parser.add_argument(
'--log_steps',
dest='log_steps',
help='Display logging information at every log_steps',
default=10,
type=int)
parser.add_argument(
'--debug',
dest='debug',
help='debug mode, display detail information of training',
action='store_true')
parser.add_argument(
'--use_tb',
dest='use_tb',
help='whether to record the data during training to Tensorboard',
action='store_true')
parser.add_argument(
'--tb_log_dir',
dest='tb_log_dir',
help='Tensorboard logging directory',
default=None,
type=str)
parser.add_argument(
'--do_eval',
dest='do_eval',
help='Evaluation models result on every new checkpoint',
action='store_true')
parser.add_argument(
'opts',
help='See utils/config.py for all options',
default=None,
nargs=argparse.REMAINDER)
return parser.parse_args()
def save_vars(executor, dirname, program=None, vars=None):
"""
    Temporary workaround for Windows save-variables compatibility.
Will fix in PaddlePaddle v1.5.2
"""
save_program = fluid.Program()
save_block = save_program.global_block()
for each_var in vars:
        # NOTE: don't save variables whose type is RAW
if each_var.type == fluid.core.VarDesc.VarType.RAW:
continue
new_var = save_block.create_var(
name=each_var.name,
shape=each_var.shape,
dtype=each_var.dtype,
type=each_var.type,
lod_level=each_var.lod_level,
persistable=True)
file_path = os.path.join(dirname, new_var.name)
file_path = os.path.normpath(file_path)
save_block.append_op(
type='save',
inputs={'X': [new_var]},
outputs={},
attrs={'file_path': file_path})
executor.run(save_program)
def save_checkpoint(exe, program, ckpt_name):
"""
Save checkpoint for evaluation or resume training
"""
ckpt_dir = os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, str(ckpt_name))
print("Save model checkpoint to {}".format(ckpt_dir))
if not os.path.isdir(ckpt_dir):
os.makedirs(ckpt_dir)
save_vars(
exe,
ckpt_dir,
program,
vars=list(filter(fluid.io.is_persistable, program.list_vars())))
return ckpt_dir
def load_checkpoint(exe, program):
"""
    Load checkpoint from the resume model directory to resume training
"""
print('Resume model training from:', cfg.TRAIN.RESUME_MODEL_DIR)
if not os.path.exists(cfg.TRAIN.RESUME_MODEL_DIR):
        raise ValueError("TRAIN.RESUME_MODEL_DIR {} does not exist!".format(
cfg.TRAIN.RESUME_MODEL_DIR))
fluid.io.load_persistables(
exe, cfg.TRAIN.RESUME_MODEL_DIR, main_program=program)
model_path = cfg.TRAIN.RESUME_MODEL_DIR
    # Check whether the path ends with a path separator
if model_path[-1] == os.sep:
model_path = model_path[0:-1]
epoch_name = os.path.basename(model_path)
# If resume model is final model
if epoch_name == 'final':
begin_epoch = cfg.SOLVER.NUM_EPOCHS
    # If the resume model path ends with a digit, restore the epoch number
elif epoch_name.isdigit():
epoch = int(epoch_name)
begin_epoch = epoch + 1
else:
raise ValueError("Resume model path is not valid!")
print("Model checkpoint loaded successfully!")
return begin_epoch
def print_info(*msg):
if cfg.TRAINER_ID == 0:
print(*msg)
def train(cfg):
startup_prog = fluid.Program()
train_prog = fluid.Program()
drop_last = True
dataset = LaneNetDataset(
file_list=cfg.DATASET.TRAIN_FILE_LIST,
mode=ModelPhase.TRAIN,
shuffle=True,
data_dir=cfg.DATASET.DATA_DIR)
def data_generator():
if args.use_mpio:
data_gen = dataset.multiprocess_generator(
num_processes=cfg.DATALOADER.NUM_WORKERS,
max_queue_size=cfg.DATALOADER.BUF_SIZE)
else:
data_gen = dataset.generator()
batch_data = []
for b in data_gen:
batch_data.append(b)
if len(batch_data) == (cfg.BATCH_SIZE // cfg.NUM_TRAINERS):
for item in batch_data:
yield item
batch_data = []
# Get device environment
gpu_id = int(os.environ.get('FLAGS_selected_gpus', 0))
place = fluid.CUDAPlace(gpu_id) if args.use_gpu else fluid.CPUPlace()
places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places()
# Get number of GPU
dev_count = cfg.NUM_TRAINERS if cfg.NUM_TRAINERS > 1 else len(places)
print_info("#Device count: {}".format(dev_count))
    # Make sure BATCH_SIZE is divisible by the number of GPU cards
    assert cfg.BATCH_SIZE % dev_count == 0, (
        'BATCH_SIZE:{} not divisible by number of GPUs:{}'.format(
            cfg.BATCH_SIZE, dev_count))
    # In multi-GPU training, batch data is evenly allocated to each GPU
batch_size_per_dev = cfg.BATCH_SIZE // dev_count
cfg.BATCH_SIZE_PER_DEV = batch_size_per_dev
print_info("batch_size_per_dev: {}".format(batch_size_per_dev))
py_reader, avg_loss, lr, pred, grts, masks, emb_loss, seg_loss, accuracy, fp, fn = build_model(
train_prog, startup_prog, phase=ModelPhase.TRAIN)
py_reader.decorate_sample_generator(
data_generator, batch_size=batch_size_per_dev, drop_last=drop_last)
exe = fluid.Executor(place)
exe.run(startup_prog)
exec_strategy = fluid.ExecutionStrategy()
    # Clear temporary variables every 100 iterations
if args.use_gpu:
exec_strategy.num_threads = fluid.core.get_cuda_device_count()
exec_strategy.num_iteration_per_drop_scope = 100
build_strategy = fluid.BuildStrategy()
if cfg.NUM_TRAINERS > 1 and args.use_gpu:
dist_utils.prepare_for_multi_process(exe, build_strategy, train_prog)
exec_strategy.num_threads = 1
if cfg.TRAIN.SYNC_BATCH_NORM and args.use_gpu:
if dev_count > 1:
# Apply sync batch norm strategy
print_info("Sync BatchNorm strategy is effective.")
build_strategy.sync_batch_norm = True
else:
print_info(
"Sync BatchNorm strategy will not be effective if GPU device"
" count <= 1")
compiled_train_prog = fluid.CompiledProgram(train_prog).with_data_parallel(
loss_name=avg_loss.name,
exec_strategy=exec_strategy,
build_strategy=build_strategy)
# Resume training
begin_epoch = cfg.SOLVER.BEGIN_EPOCH
if cfg.TRAIN.RESUME_MODEL_DIR:
begin_epoch = load_checkpoint(exe, train_prog)
# Load pretrained model
elif os.path.exists(cfg.TRAIN.PRETRAINED_MODEL_DIR):
print_info('Pretrained model dir: ', cfg.TRAIN.PRETRAINED_MODEL_DIR)
load_vars = []
load_fail_vars = []
def var_shape_matched(var, shape):
"""
            Check whether the persistable variable shape matches the current network
"""
var_exist = os.path.exists(
os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name))
if var_exist:
var_shape = parse_shape_from_file(
os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name))
if var_shape != shape:
print(var.name, var_shape, shape)
return var_shape == shape
return False
for x in train_prog.list_vars():
if isinstance(x, fluid.framework.Parameter):
shape = tuple(fluid.global_scope().find_var(
x.name).get_tensor().shape())
if var_shape_matched(x, shape):
load_vars.append(x)
else:
load_fail_vars.append(x)
fluid.io.load_vars(
exe, dirname=cfg.TRAIN.PRETRAINED_MODEL_DIR, vars=load_vars)
for var in load_vars:
            print_info("Parameter[{}] loaded successfully!".format(var.name))
for var in load_fail_vars:
            print_info(
                "Parameter[{}] does not exist or its shape does not match the"
                " current network, skip loading it.".format(var.name))
print_info("{}/{} pretrained parameters loaded successfully!".format(
len(load_vars),
len(load_vars) + len(load_fail_vars)))
else:
print_info(
            'Pretrained model dir {} does not exist, training from scratch...'.
format(cfg.TRAIN.PRETRAINED_MODEL_DIR))
# fetch_list = [avg_loss.name, lr.name, accuracy.name, precision.name, recall.name]
fetch_list = [avg_loss.name, lr.name, seg_loss.name, emb_loss.name, accuracy.name, fp.name, fn.name]
if args.debug:
# Fetch more variable info and use streaming confusion matrix to
# calculate IoU results if in debug mode
np.set_printoptions(
precision=4, suppress=True, linewidth=160, floatmode="fixed")
fetch_list.extend([pred.name, grts.name, masks.name])
# cm = ConfusionMatrix(cfg.DATASET.NUM_CLASSES, streaming=True)
if args.use_tb:
if not args.tb_log_dir:
print_info("Please specify the log directory by --tb_log_dir.")
exit(1)
from tb_paddle import SummaryWriter
log_writer = SummaryWriter(args.tb_log_dir)
# trainer_id = int(os.getenv("PADDLE_TRAINER_ID", 0))
# num_trainers = int(os.environ.get('PADDLE_TRAINERS_NUM', 1))
global_step = 0
all_step = cfg.DATASET.TRAIN_TOTAL_IMAGES // cfg.BATCH_SIZE
    if cfg.DATASET.TRAIN_TOTAL_IMAGES % cfg.BATCH_SIZE and not drop_last:
all_step += 1
all_step *= (cfg.SOLVER.NUM_EPOCHS - begin_epoch + 1)
avg_loss = 0.0
avg_seg_loss = 0.0
avg_emb_loss = 0.0
avg_acc = 0.0
avg_fp = 0.0
avg_fn = 0.0
timer = Timer()
timer.start()
if begin_epoch > cfg.SOLVER.NUM_EPOCHS:
raise ValueError(
("begin epoch[{}] is larger than cfg.SOLVER.NUM_EPOCHS[{}]").format(
begin_epoch, cfg.SOLVER.NUM_EPOCHS))
if args.use_mpio:
print_info("Use multiprocess reader")
else:
print_info("Use multi-thread reader")
for epoch in range(begin_epoch, cfg.SOLVER.NUM_EPOCHS + 1):
py_reader.start()
while True:
try:
                # If not in debug mode, avoid unnecessary logging and calculation
loss, lr, out_seg_loss, out_emb_loss, out_acc, out_fp, out_fn = exe.run(
program=compiled_train_prog,
fetch_list=fetch_list,
return_numpy=True)
avg_loss += np.mean(np.array(loss))
avg_seg_loss += np.mean(np.array(out_seg_loss))
avg_emb_loss += np.mean(np.array(out_emb_loss))
avg_acc += np.mean(out_acc)
avg_fp += np.mean(out_fp)
avg_fn += np.mean(out_fn)
global_step += 1
if global_step % args.log_steps == 0 and cfg.TRAINER_ID == 0:
avg_loss /= args.log_steps
avg_seg_loss /= args.log_steps
avg_emb_loss /= args.log_steps
avg_acc /= args.log_steps
avg_fp /= args.log_steps
avg_fn /= args.log_steps
speed = args.log_steps / timer.elapsed_time()
print((
"epoch={} step={} lr={:.5f} loss={:.4f} seg_loss={:.4f} emb_loss={:.4f} accuracy={:.4} fp={:.4} fn={:.4} step/sec={:.3f} | ETA {}"
).format(epoch, global_step, lr[0], avg_loss, avg_seg_loss, avg_emb_loss, avg_acc, avg_fp, avg_fn, speed,
calculate_eta(all_step - global_step, speed)))
if args.use_tb:
log_writer.add_scalar('Train/loss', avg_loss,
global_step)
log_writer.add_scalar('Train/lr', lr[0],
global_step)
log_writer.add_scalar('Train/speed', speed,
global_step)
sys.stdout.flush()
avg_loss = 0.0
avg_seg_loss = 0.0
avg_emb_loss = 0.0
avg_acc = 0.0
avg_fp = 0.0
avg_fn = 0.0
timer.restart()
except fluid.core.EOFException:
py_reader.reset()
break
except Exception as e:
print(e)
if epoch % cfg.TRAIN.SNAPSHOT_EPOCH == 0 and cfg.TRAINER_ID == 0:
ckpt_dir = save_checkpoint(exe, train_prog, epoch)
if args.do_eval:
print("Evaluation start")
accuracy, fp, fn = evaluate(
cfg=cfg,
ckpt_dir=ckpt_dir,
use_gpu=args.use_gpu,
use_mpio=args.use_mpio)
if args.use_tb:
log_writer.add_scalar('Evaluate/accuracy', accuracy,
global_step)
log_writer.add_scalar('Evaluate/fp', fp,
global_step)
log_writer.add_scalar('Evaluate/fn', fn,
global_step)
# Use Tensorboard to visualize results
if args.use_tb and cfg.DATASET.VIS_FILE_LIST is not None:
visualize(
cfg=cfg,
use_gpu=args.use_gpu,
vis_file_list=cfg.DATASET.VIS_FILE_LIST,
vis_dir="visual",
ckpt_dir=ckpt_dir,
log_writer=log_writer)
# save final model
if cfg.TRAINER_ID == 0:
save_checkpoint(exe, train_prog, 'final')
def main(args):
if args.cfg_file is not None:
cfg.update_from_file(args.cfg_file)
if args.opts:
cfg.update_from_list(args.opts)
cfg.TRAINER_ID = int(os.getenv("PADDLE_TRAINER_ID", 0))
cfg.NUM_TRAINERS = int(os.environ.get('PADDLE_TRAINERS_NUM', 1))
cfg.check_and_infer()
print_info(pprint.pformat(cfg))
train(cfg)
if __name__ == '__main__':
args = parse_args()
    if not fluid.core.is_compiled_with_cuda() and args.use_gpu:
        print(
            "You cannot set use_gpu=True because you are using the CPU version of PaddlePaddle."
        )
print(
"Please: 1. Install paddlepaddle-gpu to run your models on GPU or 2. Set use_gpu=False to run models on CPU."
)
sys.exit(1)
main(args)
# -*- coding: utf-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
from __future__ import unicode_literals
import os
import sys
# LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
# PDSEG_PATH = os.path.join(LOCAL_PATH, "../../../", "pdseg")
# print(PDSEG_PATH)
# sys.path.insert(0, PDSEG_PATH)
# print(sys.path)
from pdseg.utils.collect import SegConfig
import numpy as np
cfg = SegConfig()
########################## Basic configuration ################################
# Mean subtracted from images during preprocessing
cfg.MEAN = [0.5, 0.5, 0.5]
# Standard deviation that images are divided by during preprocessing
cfg.STD = [0.5, 0.5, 0.5]
# Batch size
cfg.BATCH_SIZE = 1
# Image crop size (width, height) used for evaluation
cfg.EVAL_CROP_SIZE = tuple()
# Image crop size (width, height) used for training
cfg.TRAIN_CROP_SIZE = tuple()
# Total number of trainers for multi-process training
cfg.NUM_TRAINERS = 1
# Trainer ID for multi-process training
cfg.TRAINER_ID = 0
# Batch size per GPU; no need to set, it is derived automatically from BATCH_SIZE
cfg.BATCH_SIZE_PER_DEV = 1
########################## Data loading configuration #########################
# Number of concurrent workers for data loading, recommended value: 8
cfg.DATALOADER.NUM_WORKERS = 8
# Buffer queue size for data loading, recommended value: 256
cfg.DATALOADER.BUF_SIZE = 256
########################## Dataset configuration ##############################
# Root directory of the dataset
cfg.DATASET.DATA_DIR = './dataset/cityscapes/'
# Training file list
cfg.DATASET.TRAIN_FILE_LIST = './dataset/cityscapes/train.list'
# Number of training images
cfg.DATASET.TRAIN_TOTAL_IMAGES = 2975
# Validation file list
cfg.DATASET.VAL_FILE_LIST = './dataset/cityscapes/val.list'
# Number of validation images
cfg.DATASET.VAL_TOTAL_IMAGES = 500
# Test file list
cfg.DATASET.TEST_FILE_LIST = './dataset/cityscapes/test.list'
# Number of test images
cfg.DATASET.TEST_TOTAL_IMAGES = 500
# File list of images visualized with Tensorboard
cfg.DATASET.VIS_FILE_LIST = None
# Number of classes (including the background class)
cfg.DATASET.NUM_CLASSES = 19
# Input image type: 3-channel 'rgb', 4-channel 'rgba' or single-channel 'gray'
cfg.DATASET.IMAGE_TYPE = 'rgb'
# Number of channels of the input images
cfg.DATASET.DATA_DIM = 3
# Separator used in the file lists, default is a space
cfg.DATASET.SEPARATOR = ' '
# Ignored pixel label value, default 255, usually no need to change
cfg.DATASET.IGNORE_INDEX = 255
# Padding value used for images during data augmentation
cfg.DATASET.PADDING_VALUE = [127.5,127.5,127.5]
########################### Data augmentation configuration ###################
# Horizontal (left-right) mirror flip
cfg.AUG.MIRROR = True
# Vertical (up-down) flip switch, True/False
cfg.AUG.FLIP = False
# Probability of applying the vertical flip, 0-1
cfg.AUG.FLIP_RATIO = 0.5
# Fixed resize size (width, height), non-negative
cfg.AUG.FIX_RESIZE_SIZE = tuple()
# Three resize methods are supported:
# unpadding (fixed size), stepscaling (resize by scale factor), rangescaling (align the long edge)
cfg.AUG.AUG_METHOD = 'rangescaling'
# For stepscaling: minimum resize scale, non-negative
cfg.AUG.MIN_SCALE_FACTOR = 0.5
# For stepscaling: maximum resize scale, not smaller than MIN_SCALE_FACTOR
cfg.AUG.MAX_SCALE_FACTOR = 2.0
# For stepscaling: step size of the resize scale range, non-negative
cfg.AUG.SCALE_STEP_SIZE = 0.25
# For rangescaling: minimum long-edge resize value during training, non-negative
cfg.AUG.MIN_RESIZE_VALUE = 400
# For rangescaling: maximum long-edge resize value during training,
# not smaller than MIN_RESIZE_VALUE
cfg.AUG.MAX_RESIZE_VALUE = 600
# For rangescaling: long-edge resize value used in test/eval/visualization modes,
# within [MIN_RESIZE_VALUE, MAX_RESIZE_VALUE]
cfg.AUG.INF_RESIZE_VALUE = 500
# RichCrop data augmentation switch, used to improve model robustness
cfg.AUG.RICH_CROP.ENABLE = False
# Maximum rotation angle, 0-90
cfg.AUG.RICH_CROP.MAX_ROTATION = 15
# Area ratio between the cropped image and the original image, 0-1
cfg.AUG.RICH_CROP.MIN_AREA_RATIO = 0.5
# Aspect ratio range of the cropped image, non-negative
cfg.AUG.RICH_CROP.ASPECT_RATIO = 0.33
# Brightness jitter range, 0-1
cfg.AUG.RICH_CROP.BRIGHTNESS_JITTER_RATIO = 0.5
# Saturation jitter range, 0-1
cfg.AUG.RICH_CROP.SATURATION_JITTER_RATIO = 0.5
# Contrast jitter range, 0-1
cfg.AUG.RICH_CROP.CONTRAST_JITTER_RATIO = 0.5
# Image blur switch, True/False
cfg.AUG.RICH_CROP.BLUR = False
# Probability of applying blur, 0-1
cfg.AUG.RICH_CROP.BLUR_RATIO = 0.1
########################### Training configuration ############################
# Directory to save models
cfg.TRAIN.MODEL_SAVE_DIR = ''
# Directory of the pretrained model
cfg.TRAIN.PRETRAINED_MODEL_DIR = ''
# Directory of the model to resume training from
cfg.TRAIN.RESUME_MODEL_DIR = ''
# Whether to synchronize BatchNorm mean and variance across GPU cards
cfg.TRAIN.SYNC_BATCH_NORM = False
# Epoch interval for saving model parameters, can be used to resume interrupted training
cfg.TRAIN.SNAPSHOT_EPOCH = 10
########################### Solver configuration ##############################
# Initial learning rate
cfg.SOLVER.LR = 0.1
# Learning rate decay policy, supports poly, piecewise and cosine
cfg.SOLVER.LR_POLICY = "poly"
# Optimizer, supports SGD and Adam
cfg.SOLVER.OPTIMIZER = "sgd"
# Momentum
cfg.SOLVER.MOMENTUM = 0.9
# Exponential decay rate of the second moment estimate
cfg.SOLVER.MOMENTUM2 = 0.999
# Exponent of the poly learning rate decay
cfg.SOLVER.POWER = 0.9
# Decay factor for the piecewise (step) policy
cfg.SOLVER.GAMMA = 0.1
# Decay epochs for the piecewise (step) policy
cfg.SOLVER.DECAY_EPOCH = [10, 20]
# Weight decay, 0-1
cfg.SOLVER.WEIGHT_DECAY = 0.00004
# Start epoch of training, default is 1
cfg.SOLVER.BEGIN_EPOCH = 1
# Number of training epochs, positive integer
cfg.SOLVER.NUM_EPOCHS = 30
# Loss selection, supports softmax_loss, bce_loss and dice_loss
cfg.SOLVER.LOSS = ["softmax_loss"]
# Cross entropy weight, default None. If set to 'dynamic', class weights are adjusted
# dynamically according to the number of pixels of each class in every batch.
# A static weight list can also be set, e.g. for 3 classes: [0.1, 2.0, 0.9]
cfg.SOLVER.CROSS_ENTROPY_WEIGHT = None
########################## Test configuration #################################
# Path of the model used for testing
cfg.TEST.TEST_MODEL = ''
########################## Common model configuration #########################
# Model name, supports deeplab, unet and icnet
cfg.MODEL.MODEL_NAME = ''
# BatchNorm type: bn or gn (group_norm)
cfg.MODEL.DEFAULT_NORM_TYPE = 'bn'
# Weights of the multi-branch losses
cfg.MODEL.MULTI_LOSS_WEIGHT = [1.0]
# Number of groups when DEFAULT_NORM_TYPE is gn
cfg.MODEL.DEFAULT_GROUP_NUMBER = 32
# Small epsilon to avoid division by zero, usually no need to change
cfg.MODEL.DEFAULT_EPSILON = 1e-5
# BatchNorm momentum, usually no need to change
cfg.MODEL.BN_MOMENTUM = 0.99
# Whether to train with FP16
cfg.MODEL.FP16 = False
# Mixed-precision training requires loss scaling; default is dynamic scaling,
# a static scale such as 512.0 can also be set
cfg.MODEL.SCALE_LOSS = "DYNAMIC"
########################## DeepLab model configuration ########################
# DeepLab backbone, options: xception_65, mobilenetv2
cfg.MODEL.DEEPLAB.BACKBONE = "xception_65"
# DeepLab output stride
cfg.MODEL.DEEPLAB.OUTPUT_STRIDE = 16
# MobileNet backbone depth multiplier
cfg.MODEL.DEEPLAB.DEPTH_MULTIPLIER = 1.0
# Whether the encoder uses ASPP
cfg.MODEL.DEEPLAB.ENCODER_WITH_ASPP = True
# Whether to enable the decoder
cfg.MODEL.DEEPLAB.ENABLE_DECODER = True
# Whether ASPP uses separable convolutions
cfg.MODEL.DEEPLAB.ASPP_WITH_SEP_CONV = True
# Whether the decoder uses separable convolutions
cfg.MODEL.DEEPLAB.DECODER_USE_SEP_CONV = True
########################## U-Net model configuration ##########################
# Upsampling mode, default is bilinear interpolation
cfg.MODEL.UNET.UPSAMPLE_MODE = 'bilinear'
########################## ICNet model configuration ##########################
# ResNet backbone depth multiplier
cfg.MODEL.ICNET.DEPTH_MULTIPLIER = 0.5
# Number of ResNet layers
cfg.MODEL.ICNET.LAYERS = 50
########################## LaneNet model configuration ########################
# LaneNet backbone name
cfg.MODEL.LANENET.BACKBONE = "vgg"
########################## Inference/deployment configuration #################
# Filename of the exported inference model
cfg.FREEZE.MODEL_FILENAME = '__model__'
# Filename of the exported inference parameters
cfg.FREEZE.PARAMS_FILENAME = '__params__'
# Directory to save the exported inference model
cfg.FREEZE.SAVE_DIR = 'freeze_model'
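# Usage sketch (illustrative, not an official sample): train.py applies overrides via
# cfg.update_from_file(args.cfg_file) and cfg.update_from_list(args.opts), so a user
# config file only needs the keys it changes. A hypothetical lanenet.yaml could look like:
#
#   MODEL:
#     MODEL_NAME: "lanenet"
#     LANENET:
#       BACKBONE: "vgg"
#   DATASET:
#     DATA_DIR: "./dataset/tusimple/"
#   SOLVER:
#     LR: 0.001
#
# and would be used as: python train.py --cfg lanenet.yaml --use_gpu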
#copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import paddle.fluid as fluid
def nccl2_prepare(args, startup_prog, main_prog):
config = fluid.DistributeTranspilerConfig()
config.mode = "nccl2"
t = fluid.DistributeTranspiler(config=config)
envs = args.dist_env
t.transpile(
envs["trainer_id"],
trainers=','.join(envs["trainer_endpoints"]),
current_endpoint=envs["current_endpoint"],
startup_program=startup_prog,
program=main_prog)
def pserver_prepare(args, train_prog, startup_prog):
config = fluid.DistributeTranspilerConfig()
config.slice_var_up = args.split_var
t = fluid.DistributeTranspiler(config=config)
envs = args.dist_env
training_role = envs["training_role"]
t.transpile(
envs["trainer_id"],
program=train_prog,
pservers=envs["pserver_endpoints"],
trainers=envs["num_trainers"],
sync_mode=not args.async_mode,
startup_program=startup_prog)
if training_role == "PSERVER":
pserver_program = t.get_pserver_program(envs["current_endpoint"])
pserver_startup_program = t.get_startup_program(
envs["current_endpoint"],
pserver_program,
startup_program=startup_prog)
return pserver_program, pserver_startup_program
elif training_role == "TRAINER":
train_program = t.get_trainer_program()
return train_program, startup_prog
else:
raise ValueError(
'PADDLE_TRAINING_ROLE environment variable must be either TRAINER or PSERVER'
)
def nccl2_prepare_paddle(trainer_id, startup_prog, main_prog):
config = fluid.DistributeTranspilerConfig()
config.mode = "nccl2"
t = fluid.DistributeTranspiler(config=config)
t.transpile(
trainer_id,
trainers=os.environ.get('PADDLE_TRAINER_ENDPOINTS'),
current_endpoint=os.environ.get('PADDLE_CURRENT_ENDPOINT'),
startup_program=startup_prog,
program=main_prog)
def prepare_for_multi_process(exe, build_strategy, train_prog):
# prepare for multi-process
trainer_id = int(os.environ.get('PADDLE_TRAINER_ID', 0))
num_trainers = int(os.environ.get('PADDLE_TRAINERS_NUM', 1))
if num_trainers < 2: return
build_strategy.num_trainers = num_trainers
build_strategy.trainer_id = trainer_id
# NOTE(zcd): use multi processes to train the model,
# and each process use one GPU card.
startup_prog = fluid.Program()
nccl2_prepare_paddle(trainer_id, startup_prog, train_prog)
    # the startup_prog may be run twice, but that doesn't matter.
exe.run(startup_prog)
"""
Generate the TuSimple training dataset.
"""
import argparse
import glob
import json
import os
import os.path as ops
import shutil
import cv2
import numpy as np
def init_args():
parser = argparse.ArgumentParser()
parser.add_argument('--src_dir', type=str, help='The origin path of unzipped tusimple dataset')
return parser.parse_args()
def process_json_file(json_file_path, src_dir, ori_dst_dir, binary_dst_dir, instance_dst_dir):
assert ops.exists(json_file_path), '{:s} not exist'.format(json_file_path)
image_nums = len(os.listdir(os.path.join(src_dir, ori_dst_dir)))
with open(json_file_path, 'r') as file:
for line_index, line in enumerate(file):
info_dict = json.loads(line)
image_dir = ops.split(info_dict['raw_file'])[0]
image_dir_split = image_dir.split('/')[1:]
image_dir_split.append(ops.split(info_dict['raw_file'])[1])
image_name = '_'.join(image_dir_split)
image_path = ops.join(src_dir, info_dict['raw_file'])
assert ops.exists(image_path), '{:s} not exist'.format(image_path)
h_samples = info_dict['h_samples']
lanes = info_dict['lanes']
image_name_new = '{:s}.png'.format('{:d}'.format(line_index + image_nums).zfill(4))
src_image = cv2.imread(image_path, cv2.IMREAD_COLOR)
dst_binary_image = np.zeros([src_image.shape[0], src_image.shape[1]], np.uint8)
dst_instance_image = np.zeros([src_image.shape[0], src_image.shape[1]], np.uint8)
for lane_index, lane in enumerate(lanes):
assert len(h_samples) == len(lane)
lane_x = []
lane_y = []
for index in range(len(lane)):
if lane[index] == -2:
continue
else:
ptx = lane[index]
pty = h_samples[index]
lane_x.append(ptx)
lane_y.append(pty)
if not lane_x:
continue
lane_pts = np.vstack((lane_x, lane_y)).transpose()
lane_pts = np.array([lane_pts], np.int64)
cv2.polylines(dst_binary_image, lane_pts, isClosed=False,
color=255, thickness=5)
cv2.polylines(dst_instance_image, lane_pts, isClosed=False,
color=lane_index * 50 + 20, thickness=5)
dst_binary_image_path = ops.join(src_dir, binary_dst_dir, image_name_new)
dst_instance_image_path = ops.join(src_dir, instance_dst_dir, image_name_new)
dst_rgb_image_path = ops.join(src_dir, ori_dst_dir, image_name_new)
cv2.imwrite(dst_binary_image_path, dst_binary_image)
cv2.imwrite(dst_instance_image_path, dst_instance_image)
cv2.imwrite(dst_rgb_image_path, src_image)
print('Process {:s} success'.format(image_name))
def gen_sample(src_dir, b_gt_image_dir, i_gt_image_dir, image_dir, phase='train', split=False):
label_list = []
with open('{:s}/{}ing/{}.txt'.format(src_dir, phase, phase), 'w') as file:
for image_name in os.listdir(b_gt_image_dir):
if not image_name.endswith('.png'):
continue
binary_gt_image_path = ops.join(b_gt_image_dir, image_name)
instance_gt_image_path = ops.join(i_gt_image_dir, image_name)
image_path = ops.join(image_dir, image_name)
assert ops.exists(image_path), '{:s} not exist'.format(image_path)
assert ops.exists(instance_gt_image_path), '{:s} not exist'.format(instance_gt_image_path)
b_gt_image = cv2.imread(binary_gt_image_path, cv2.IMREAD_COLOR)
i_gt_image = cv2.imread(instance_gt_image_path, cv2.IMREAD_COLOR)
image = cv2.imread(image_path, cv2.IMREAD_COLOR)
if b_gt_image is None or image is None or i_gt_image is None:
print('image: {:s} corrupt'.format(image_name))
continue
else:
info = '{:s} {:s} {:s}'.format(image_path, binary_gt_image_path, instance_gt_image_path)
file.write(info + '\n')
label_list.append(info)
if phase == 'train' and split:
np.random.RandomState(0).shuffle(label_list)
val_list_len = len(label_list) // 10
val_label_list = label_list[:val_list_len]
train_label_list = label_list[val_list_len:]
        with open('{:s}/{}ing/train_part.txt'.format(src_dir, phase), 'w') as file:
for info in train_label_list:
file.write(info + '\n')
        with open('{:s}/{}ing/val_part.txt'.format(src_dir, phase), 'w') as file:
for info in val_label_list:
file.write(info + '\n')
return
def process_tusimple_dataset(src_dir):
    training_folder_path = ops.join(src_dir, 'training')
testing_folder_path = ops.join(src_dir, 'testing')
    os.makedirs(training_folder_path, exist_ok=True)
os.makedirs(testing_folder_path, exist_ok=True)
for json_label_path in glob.glob('{:s}/label*.json'.format(src_dir)):
json_label_name = ops.split(json_label_path)[1]
        shutil.copyfile(json_label_path, ops.join(training_folder_path, json_label_name))
for json_label_path in glob.glob('{:s}/test_label.json'.format(src_dir)):
json_label_name = ops.split(json_label_path)[1]
shutil.copyfile(json_label_path, ops.join(testing_folder_path, json_label_name))
train_gt_image_dir = ops.join('training', 'gt_image')
train_gt_binary_dir = ops.join('training', 'gt_binary_image')
train_gt_instance_dir = ops.join('training', 'gt_instance_image')
test_gt_image_dir = ops.join('testing', 'gt_image')
test_gt_binary_dir = ops.join('testing', 'gt_binary_image')
test_gt_instance_dir = ops.join('testing', 'gt_instance_image')
os.makedirs(os.path.join(src_dir, train_gt_image_dir), exist_ok=True)
os.makedirs(os.path.join(src_dir, train_gt_binary_dir), exist_ok=True)
os.makedirs(os.path.join(src_dir, train_gt_instance_dir), exist_ok=True)
os.makedirs(os.path.join(src_dir, test_gt_image_dir), exist_ok=True)
os.makedirs(os.path.join(src_dir, test_gt_binary_dir), exist_ok=True)
os.makedirs(os.path.join(src_dir, test_gt_instance_dir), exist_ok=True)
    for json_label_path in glob.glob('{:s}/*.json'.format(training_folder_path)):
process_json_file(json_label_path, src_dir, train_gt_image_dir, train_gt_binary_dir, train_gt_instance_dir)
gen_sample(src_dir, train_gt_binary_dir, train_gt_instance_dir, train_gt_image_dir, 'train', True)
if __name__ == '__main__':
args = init_args()
process_tusimple_dataset(args.src_dir)
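# Example invocation (a sketch; the actual script filename may differ in the repo):
#   python generate_tusimple_dataset.py --src_dir /path/to/unzipped/tusimple
# This creates gt_image / gt_binary_image / gt_instance_image folders plus
# training/train.txt (and train_part.txt / val_part.txt) under --src_dir.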
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# This code is heavily based on https://github.com/MaybeShewill-CV/lanenet-lane-detection/blob/master/lanenet_model/lanenet_postprocess.py
"""
LaneNet model post process
"""
import os.path as ops
import math
import cv2
import time
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
def _morphological_process(image, kernel_size=5):
"""
morphological process to fill the hole in the binary segmentation result
:param image:
:param kernel_size:
:return:
"""
if len(image.shape) == 3:
raise ValueError('Binary segmentation result image should be a single channel image')
    if image.dtype != np.uint8:
image = np.array(image, np.uint8)
kernel = cv2.getStructuringElement(shape=cv2.MORPH_ELLIPSE, ksize=(kernel_size, kernel_size))
    # closing operation to fill holes
closing = cv2.morphologyEx(image, cv2.MORPH_CLOSE, kernel, iterations=1)
return closing
def _connect_components_analysis(image):
"""
    connected components analysis to remove small components
:param image:
:return:
"""
if len(image.shape) == 3:
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
else:
gray_image = image
return cv2.connectedComponentsWithStats(gray_image, connectivity=8, ltype=cv2.CV_32S)
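# Note: cv2.connectedComponentsWithStats returns (num_labels, labels, stats, centroids);
# postprocess() below uses labels (index 1) and stats (index 2), where stats[:, 4] is the
# component area compared against min_area_threshold.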
class _LaneFeat(object):
"""
"""
def __init__(self, feat, coord, class_id=-1):
"""
lane feat object
        :param feat: lane embedding feats [feature_1, feature_2, ...]
:param coord: lane coordinates [x, y]
:param class_id: lane class id
"""
self._feat = feat
self._coord = coord
self._class_id = class_id
@property
def feat(self):
return self._feat
@feat.setter
def feat(self, value):
if not isinstance(value, np.ndarray):
value = np.array(value, dtype=np.float64)
if value.dtype != np.float32:
value = np.array(value, dtype=np.float64)
self._feat = value
@property
def coord(self):
return self._coord
@coord.setter
def coord(self, value):
if not isinstance(value, np.ndarray):
value = np.array(value)
if value.dtype != np.int32:
value = np.array(value, dtype=np.int32)
self._coord = value
@property
def class_id(self):
return self._class_id
@class_id.setter
def class_id(self, value):
if not isinstance(value, np.int64):
raise ValueError('Class id must be integer')
self._class_id = value
class _LaneNetCluster(object):
"""
Instance segmentation result cluster
"""
def __init__(self):
"""
"""
self._color_map = [np.array([255, 0, 0]),
np.array([0, 255, 0]),
np.array([0, 0, 255]),
np.array([125, 125, 0]),
np.array([0, 125, 125]),
np.array([125, 0, 125]),
np.array([50, 100, 50]),
np.array([100, 50, 100])]
@staticmethod
def _embedding_feats_dbscan_cluster(embedding_image_feats):
"""
dbscan cluster
"""
db = DBSCAN(eps=0.4, min_samples=500)
try:
features = StandardScaler().fit_transform(embedding_image_feats)
db.fit(features)
except Exception as err:
print(err)
ret = {
'origin_features': None,
'cluster_nums': 0,
'db_labels': None,
'unique_labels': None,
'cluster_center': None
}
return ret
db_labels = db.labels_
unique_labels = np.unique(db_labels)
num_clusters = len(unique_labels)
cluster_centers = db.components_
ret = {
'origin_features': features,
'cluster_nums': num_clusters,
'db_labels': db_labels,
'unique_labels': unique_labels,
'cluster_center': cluster_centers
}
return ret
@staticmethod
def _get_lane_embedding_feats(binary_seg_ret, instance_seg_ret):
"""
        get lane embedding features according to the binary seg result
"""
idx = np.where(binary_seg_ret == 255)
lane_embedding_feats = instance_seg_ret[idx]
lane_coordinate = np.vstack((idx[1], idx[0])).transpose()
assert lane_embedding_feats.shape[0] == lane_coordinate.shape[0]
ret = {
'lane_embedding_feats': lane_embedding_feats,
'lane_coordinates': lane_coordinate
}
return ret
def apply_lane_feats_cluster(self, binary_seg_result, instance_seg_result):
"""
:param binary_seg_result:
:param instance_seg_result:
:return:
"""
# get embedding feats and coords
get_lane_embedding_feats_result = self._get_lane_embedding_feats(
binary_seg_ret=binary_seg_result,
instance_seg_ret=instance_seg_result
)
# dbscan cluster
dbscan_cluster_result = self._embedding_feats_dbscan_cluster(
embedding_image_feats=get_lane_embedding_feats_result['lane_embedding_feats']
)
mask = np.zeros(shape=[binary_seg_result.shape[0], binary_seg_result.shape[1], 3], dtype=np.uint8)
db_labels = dbscan_cluster_result['db_labels']
unique_labels = dbscan_cluster_result['unique_labels']
coord = get_lane_embedding_feats_result['lane_coordinates']
if db_labels is None:
return None, None
lane_coords = []
for index, label in enumerate(unique_labels.tolist()):
if label == -1:
continue
idx = np.where(db_labels == label)
pix_coord_idx = tuple((coord[idx][:, 1], coord[idx][:, 0]))
mask[pix_coord_idx] = self._color_map[index]
lane_coords.append(coord[idx])
return mask, lane_coords
class LaneNetPostProcessor(object):
"""
lanenet post process for lane generation
"""
def __init__(self, ipm_remap_file_path='./utils/tusimple_ipm_remap.yml'):
"""
convert front car view to bird view
"""
assert ops.exists(ipm_remap_file_path), '{:s} not exist'.format(ipm_remap_file_path)
self._cluster = _LaneNetCluster()
self._ipm_remap_file_path = ipm_remap_file_path
remap_file_load_ret = self._load_remap_matrix()
self._remap_to_ipm_x = remap_file_load_ret['remap_to_ipm_x']
self._remap_to_ipm_y = remap_file_load_ret['remap_to_ipm_y']
self._color_map = [np.array([255, 0, 0]),
np.array([0, 255, 0]),
np.array([0, 0, 255]),
np.array([125, 125, 0]),
np.array([0, 125, 125]),
np.array([125, 0, 125]),
np.array([50, 100, 50]),
np.array([100, 50, 100])]
def _load_remap_matrix(self):
fs = cv2.FileStorage(self._ipm_remap_file_path, cv2.FILE_STORAGE_READ)
remap_to_ipm_x = fs.getNode('remap_ipm_x').mat()
remap_to_ipm_y = fs.getNode('remap_ipm_y').mat()
ret = {
'remap_to_ipm_x': remap_to_ipm_x,
'remap_to_ipm_y': remap_to_ipm_y,
}
fs.release()
return ret
def postprocess(self, binary_seg_result, instance_seg_result=None,
min_area_threshold=100, source_image=None,
data_source='tusimple'):
# convert binary_seg_result
binary_seg_result = np.array(binary_seg_result * 255, dtype=np.uint8)
        # apply image morphology operations to fill in holes and remove small areas
morphological_ret = _morphological_process(binary_seg_result, kernel_size=5)
connect_components_analysis_ret = _connect_components_analysis(image=morphological_ret)
labels = connect_components_analysis_ret[1]
stats = connect_components_analysis_ret[2]
for index, stat in enumerate(stats):
if stat[4] <= min_area_threshold:
idx = np.where(labels == index)
morphological_ret[idx] = 0
# apply embedding features cluster
mask_image, lane_coords = self._cluster.apply_lane_feats_cluster(
binary_seg_result=morphological_ret,
instance_seg_result=instance_seg_result
)
if mask_image is None:
return {
'mask_image': None,
'fit_params': None,
'source_image': None,
}
# lane line fit
fit_params = []
src_lane_pts = []
for lane_index, coords in enumerate(lane_coords):
if data_source == 'tusimple':
tmp_mask = np.zeros(shape=(720, 1280), dtype=np.uint8)
tmp_mask[tuple((np.int_(coords[:, 1] * 720 / 256), np.int_(coords[:, 0] * 1280 / 512)))] = 255
else:
                raise ValueError('Wrong data source: only tusimple is supported now')
tmp_ipm_mask = cv2.remap(
tmp_mask,
self._remap_to_ipm_x,
self._remap_to_ipm_y,
interpolation=cv2.INTER_NEAREST
)
nonzero_y = np.array(tmp_ipm_mask.nonzero()[0])
nonzero_x = np.array(tmp_ipm_mask.nonzero()[1])
fit_param = np.polyfit(nonzero_y, nonzero_x, 2)
fit_params.append(fit_param)
[ipm_image_height, ipm_image_width] = tmp_ipm_mask.shape
plot_y = np.linspace(10, ipm_image_height, ipm_image_height - 10)
fit_x = fit_param[0] * plot_y ** 2 + fit_param[1] * plot_y + fit_param[2]
lane_pts = []
for index in range(0, plot_y.shape[0], 5):
src_x = self._remap_to_ipm_x[
int(plot_y[index]), int(np.clip(fit_x[index], 0, ipm_image_width - 1))]
if src_x <= 0:
continue
src_y = self._remap_to_ipm_y[
int(plot_y[index]), int(np.clip(fit_x[index], 0, ipm_image_width - 1))]
src_y = src_y if src_y > 0 else 0
lane_pts.append([src_x, src_y])
src_lane_pts.append(lane_pts)
        # the tusimple test data samples points along the y axis every 10 pixels
source_image_width = source_image.shape[1]
for index, single_lane_pts in enumerate(src_lane_pts):
single_lane_pt_x = np.array(single_lane_pts, dtype=np.float32)[:, 0]
single_lane_pt_y = np.array(single_lane_pts, dtype=np.float32)[:, 1]
if data_source == 'tusimple':
start_plot_y = 240
end_plot_y = 720
else:
                raise ValueError('Wrong data source: only tusimple is supported now')
step = int(math.floor((end_plot_y - start_plot_y) / 10))
for plot_y in np.linspace(start_plot_y, end_plot_y, step):
diff = single_lane_pt_y - plot_y
fake_diff_bigger_than_zero = diff.copy()
fake_diff_smaller_than_zero = diff.copy()
fake_diff_bigger_than_zero[np.where(diff <= 0)] = float('inf')
fake_diff_smaller_than_zero[np.where(diff > 0)] = float('-inf')
idx_low = np.argmax(fake_diff_smaller_than_zero)
idx_high = np.argmin(fake_diff_bigger_than_zero)
previous_src_pt_x = single_lane_pt_x[idx_low]
previous_src_pt_y = single_lane_pt_y[idx_low]
last_src_pt_x = single_lane_pt_x[idx_high]
last_src_pt_y = single_lane_pt_y[idx_high]
if previous_src_pt_y < start_plot_y or last_src_pt_y < start_plot_y or \
fake_diff_smaller_than_zero[idx_low] == float('-inf') or \
fake_diff_bigger_than_zero[idx_high] == float('inf'):
continue
interpolation_src_pt_x = (abs(previous_src_pt_y - plot_y) * previous_src_pt_x +
abs(last_src_pt_y - plot_y) * last_src_pt_x) / \
(abs(previous_src_pt_y - plot_y) + abs(last_src_pt_y - plot_y))
interpolation_src_pt_y = (abs(previous_src_pt_y - plot_y) * previous_src_pt_y +
abs(last_src_pt_y - plot_y) * last_src_pt_y) / \
(abs(previous_src_pt_y - plot_y) + abs(last_src_pt_y - plot_y))
if interpolation_src_pt_x > source_image_width or interpolation_src_pt_x < 10:
continue
lane_color = self._color_map[index].tolist()
cv2.circle(source_image, (int(interpolation_src_pt_x),
int(interpolation_src_pt_y)), 5, lane_color, -1)
ret = {
'mask_image': mask_image,
'fit_params': fit_params,
'source_image': source_image,
}
return ret
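To make the flow above concrete, here is a minimal usage sketch of `LaneNetPostProcessor` (hedged: the input file names and shapes are illustrative; only the class and method signatures come from the code above):

```python
# Hedged usage sketch: binary_seg is an HxW map with lane pixels close to 1 (e.g. 256x512),
# instance_seg is an HxWx4 embedding map, and source_image is the original 1280x720 BGR frame.
import cv2
import numpy as np

postprocessor = LaneNetPostProcessor(ipm_remap_file_path='./utils/tusimple_ipm_remap.yml')

binary_seg = np.load('binary_seg.npy')        # illustrative input, shape (256, 512)
instance_seg = np.load('instance_seg.npy')    # illustrative input, shape (256, 512, 4)
source_image = cv2.imread('frame.jpg')        # illustrative original frame

result = postprocessor.postprocess(
    binary_seg_result=binary_seg,
    instance_seg_result=instance_seg,
    source_image=source_image)

# result['mask_image'] is the colorized cluster mask,
# result['fit_params'] holds one second-order polynomial per lane (fitted in IPM space),
# result['source_image'] is the input frame with the sampled lane points drawn on it.
```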
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
# GPU memory garbage collection optimization flags
os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0"
import sys
cur_path = os.path.abspath(os.path.dirname(__file__))
root_path = os.path.split(os.path.split(cur_path)[0])[0]
SEG_PATH = os.path.join(cur_path, "../../../")
sys.path.append(SEG_PATH)
sys.path.append(root_path)
import matplotlib
matplotlib.use('Agg')
import time
import argparse
import pprint
import cv2
import numpy as np
import paddle.fluid as fluid
from utils.config import cfg
from reader import LaneNetDataset
from models.model_builder import build_model
from models.model_builder import ModelPhase
from utils import lanenet_postprocess
import matplotlib.pyplot as plt
def parse_args():
    parser = argparse.ArgumentParser(description='PaddleSeg visualization tools')
parser.add_argument(
'--cfg',
dest='cfg_file',
help='Config file for training (and optionally testing)',
default=None,
type=str)
parser.add_argument(
'--use_gpu', dest='use_gpu', help='Use gpu or cpu', action='store_true')
parser.add_argument(
'--vis_dir',
dest='vis_dir',
help='visual save dir',
type=str,
default='visual')
parser.add_argument(
'--also_save_raw_results',
dest='also_save_raw_results',
help='whether to save raw result',
action='store_true')
parser.add_argument(
'--local_test',
dest='local_test',
help='if in local test mode, only visualize 5 images for testing',
action='store_true')
parser.add_argument(
'opts',
help='See config.py for all options',
default=None,
nargs=argparse.REMAINDER)
if len(sys.argv) == 1:
parser.print_help()
sys.exit(1)
return parser.parse_args()
def makedirs(directory):
if not os.path.exists(directory):
os.makedirs(directory)
def to_png_fn(fn, name=""):
"""
Append png as filename postfix
"""
directory, filename = os.path.split(fn)
basename, ext = os.path.splitext(filename)
return basename + name + ".png"
def minmax_scale(input_arr):
min_val = np.min(input_arr)
max_val = np.max(input_arr)
output_arr = (input_arr - min_val) * 255.0 / (max_val - min_val)
return output_arr
def visualize(cfg,
vis_file_list=None,
use_gpu=False,
vis_dir="visual",
also_save_raw_results=False,
ckpt_dir=None,
log_writer=None,
local_test=False,
**kwargs):
if vis_file_list is None:
vis_file_list = cfg.DATASET.TEST_FILE_LIST
dataset = LaneNetDataset(
file_list=vis_file_list,
mode=ModelPhase.VISUAL,
shuffle=True,
data_dir=cfg.DATASET.DATA_DIR)
startup_prog = fluid.Program()
test_prog = fluid.Program()
pred, logit = build_model(test_prog, startup_prog, phase=ModelPhase.VISUAL)
# Clone forward graph
test_prog = test_prog.clone(for_test=True)
# Get device environment
place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(startup_prog)
ckpt_dir = cfg.TEST.TEST_MODEL if not ckpt_dir else ckpt_dir
fluid.io.load_params(exe, ckpt_dir, main_program=test_prog)
save_dir = os.path.join(vis_dir, 'visual_results')
makedirs(save_dir)
if also_save_raw_results:
raw_save_dir = os.path.join(vis_dir, 'raw_results')
makedirs(raw_save_dir)
fetch_list = [pred.name, logit.name]
test_reader = dataset.batch(dataset.generator, batch_size=1, is_test=True)
postprocessor = lanenet_postprocess.LaneNetPostProcessor()
for imgs, grts, grts_instance, img_names, valid_shapes, org_imgs in test_reader:
segLogits, emLogits = exe.run(
program=test_prog,
feed={'image': imgs},
fetch_list=fetch_list,
return_numpy=True)
num_imgs = segLogits.shape[0]
for i in range(num_imgs):
gt_image = org_imgs[i]
binary_seg_image, instance_seg_image = segLogits[i].squeeze(-1), emLogits[i].transpose((1,2,0))
postprocess_result = postprocessor.postprocess(
binary_seg_result=binary_seg_image,
instance_seg_result=instance_seg_image,
source_image=gt_image
)
pred_binary_fn = os.path.join(save_dir, to_png_fn(img_names[i], name='_pred_binary'))
pred_lane_fn = os.path.join(save_dir, to_png_fn(img_names[i], name='_pred_lane'))
pred_instance_fn = os.path.join(save_dir, to_png_fn(img_names[i], name='_pred_instance'))
dirname = os.path.dirname(pred_binary_fn)
makedirs(dirname)
mask_image = postprocess_result['mask_image']
            # use a separate loop variable so the image index i is not shadowed
            for c in range(4):
                instance_seg_image[:, :, c] = minmax_scale(instance_seg_image[:, :, c])
embedding_image = np.array(instance_seg_image).astype(np.uint8)
plt.figure('mask_image')
plt.imshow(mask_image[:, :, (2, 1, 0)])
plt.figure('src_image')
plt.imshow(gt_image[:, :, (2, 1, 0)])
plt.figure('instance_image')
plt.imshow(embedding_image[:, :, (2, 1, 0)])
plt.figure('binary_image')
plt.imshow(binary_seg_image * 255, cmap='gray')
plt.show()
cv2.imwrite(pred_binary_fn, np.array(binary_seg_image * 255).astype(np.uint8))
cv2.imwrite(pred_lane_fn, postprocess_result['source_image'])
cv2.imwrite(pred_instance_fn, mask_image)
print(pred_lane_fn, 'saved!')
if __name__ == '__main__':
args = parse_args()
if args.cfg_file is not None:
cfg.update_from_file(args.cfg_file)
if args.opts:
cfg.update_from_list(args.opts)
cfg.check_and_infer()
print(pprint.pformat(cfg))
visualize(cfg, **args.__dict__)
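The script is normally driven from the command line, but `visualize` can also be called directly. A hedged sketch, assuming a LaneNet config file exists at the path below (the path itself is illustrative):

```python
# Minimal programmatic invocation of visualize(); assumes this module is importable.
from utils.config import cfg

cfg.update_from_file('./configs/lanenet.yaml')   # illustrative config path
cfg.check_and_infer()
visualize(cfg, use_gpu=True, vis_dir='visual', also_save_raw_results=False)
```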
......@@ -16,7 +16,7 @@ import sys
import os
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
TEST_PATH = os.path.join(LOCAL_PATH, "..", "test")
TEST_PATH = os.path.join(LOCAL_PATH, "..", "..", "test")
sys.path.append(TEST_PATH)
from test_utils import download_file_and_uncompress
......
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
TEST_PATH = os.path.join(LOCAL_PATH, "..", "..", "test")
sys.path.append(TEST_PATH)
from test_utils import download_file_and_uncompress
if __name__ == "__main__":
download_file_and_uncompress(
url='https://paddleseg.bj.bcebos.com/models/unet_mechanical_industry_meter.tar',
savepath=LOCAL_PATH,
extrapath=LOCAL_PATH)
print("Pretrained Model download success!")
......@@ -21,14 +21,14 @@ DATALOADER:
BUF_SIZE: 256
NUM_WORKERS: 4
DATASET:
DATA_DIR: "./dataset/mini_mechanical_industry_meter_data/"
DATA_DIR: "./contrib/MechanicalIndustryMeter/mini_mechanical_industry_meter_data/"
IMAGE_TYPE: "rgb" # choice rgb or rgba
NUM_CLASSES: 5
TEST_FILE_LIST: "./dataset/mini_mechanical_industry_meter_data/val_mini.txt"
TEST_FILE_LIST: "./contrib/MechanicalIndustryMeter/mini_mechanical_industry_meter_data/val_mini.txt"
TEST_TOTAL_IMAGES: 8
TRAIN_FILE_LIST: "./dataset/mini_mechanical_industry_meter_data/train_mini.txt"
TRAIN_FILE_LIST: "./contrib/MechanicalIndustryMeter/mini_mechanical_industry_meter_data/train_mini.txt"
TRAIN_TOTAL_IMAGES: 64
VAL_FILE_LIST: "./dataset/mini_mechanical_industry_meter_data/val_mini.txt"
VAL_FILE_LIST: "./contrib/MechanicalIndustryMeter/mini_mechanical_industry_meter_data/val_mini.txt"
VAL_TOTAL_IMAGES: 8
SEPARATOR: "|"
IGNORE_INDEX: 255
......
# PaddleSeg Vertical-Domain Segmentation Models

This directory provides the latest PaddlePaddle-based segmentation models for specific verticals:

- [Human Segmentation](#human-segmentation)
- [Human Parsing](#human-parsing)
- [Lane Line Segmentation](#lane-line-segmentation)
- [Industrial Meter Segmentation](#industrial-meter-segmentation)
- [Online Demos](#online-demos)

## Human Segmentation

**Note:** All commands in this section are executed from the `contrib/HumanSeg` directory.

```
cd contrib/HumanSeg
```

### 1. Model Architecture

DeepLabv3+ with an Xception65 backbone.

### 2. Download the Model and Data

Run the following command to download and extract the model and dataset:

```
python download_HumanSeg.py
```

Alternatively, click this [link](https://paddleseg.bj.bcebos.com/models/HumanSeg.tgz) to download manually and extract it into the contrib/HumanSeg directory.

### 3. Run

Predict with GPU:

```
python -u infer.py --example HumanSeg --use_gpu
```

Predict with CPU:

```
python -u infer.py --example HumanSeg
```

The prediction results are saved in the contrib/HumanSeg/HumanSeg/result directory.

### 4. Sample Results

Input image:

![](HumanSeg/imgs/Human.jpg)

Prediction:

![](HumanSeg/imgs/HumanSeg.jpg)

## Human Parsing

![](ACE2P/imgs/result.jpg)

Human parsing is a fine-grained semantic segmentation task that aims to identify the components of a human image (for example, body parts and clothing) at the pixel level. This section uses the challenge-winning Augmented Context Embedding with Edge Perceiving (ACE2P) model for prediction.

**Note:** All commands in this section are executed from the `contrib/ACE2P` directory.

```
cd contrib/ACE2P
```

### 1. Model Overview

Augmented Context Embedding with Edge Perceiving (ACE2P) learns human parsing end to end by fusing low-level features, global context information, and edge details. A solution built on the single-person ACE2P network won first place in all three human parsing tracks of the 3rd Look into Person (LIP) Challenge at CVPR 2019. See [ACE2P](./ACE2P) for details.

### 2. Download the Model

Run the following command to download and extract the ACE2P inference model:

```
python download_ACE2P.py
```

Alternatively, click this [link](https://paddleseg.bj.bcebos.com/models/ACE2P.tgz) to download manually and extract it under contrib/ACE2P.

### 3. Download the Data

There are 10000 test images in total. Download Testing_images.zip from [Baidu_Drive](https://pan.baidu.com/s/1nvqmZBN#list/path=%2Fsharelink2787269280-523292635003760%2FLIP%2FLIP&parentPath=%2Fsharelink2787269280-523292635003760) or from the official LIP dataset website (http://47.100.21.47:9999/overview.php), then extract it into the contrib/ACE2P/data directory.

### 4. Run

Predict with GPU:

```
python -u infer.py --example ACE2P --use_gpu
```

Predict with CPU:

```
python -u infer.py --example ACE2P
```

**NOTE:** Running this model requires about 2 GB of GPU memory. Since there are many test images, prediction will take a while.

#### 5. Sample Results

Input image:

![](ACE2P/imgs/117676_2149260.jpg)

Prediction:

![](ACE2P/imgs/117676_2149260.png)

## Lane Line Segmentation

**Note:** All commands in this section are executed from the `contrib/RoadLine` directory.

```
cd contrib/RoadLine
```

### 1. Model Architecture
......@@ -75,7 +142,15 @@ Deeplabv3+ backbone为MobileNetv2
### 2. Download the Model and Data

Run the following command to download and extract the model and dataset:

```
python download_RoadLine.py
```

Alternatively, click this [link](https://paddleseg.bj.bcebos.com/inference_model/RoadLine.tgz) to download manually and extract it into the contrib/RoadLine directory.

### 3. Run
......@@ -92,45 +167,84 @@ python -u infer.py --example RoadLine --use_gpu
python -u infer.py --example RoadLine
```

The prediction results are saved in the contrib/RoadLine/RoadLine/result directory.

#### 4. Sample Results

Input image:

![](RoadLine/imgs/RoadLine.jpg)

Prediction:

![](RoadLine/imgs/RoadLine.png)

## Industrial Meter Segmentation

**Note:** All commands in this section are executed from the `PaddleSeg` root directory.

### 1. Model Architecture

U-Net

### 2. Data Preparation

Run the following command to download and extract the dataset; it will be stored under the contrib/MechanicalIndustryMeter directory:

```
python ./contrib/MechanicalIndustryMeter/download_mini_mechanical_industry_meter.py
```

### 3. Download the Pretrained Model

```
python ./pretrained_model/download_model.py unet_bn_coco
```

### 4. Training and Evaluation

```
export CUDA_VISIBLE_DEVICES=0
python ./pdseg/train.py --log_steps 10 --cfg contrib/MechanicalIndustryMeter/unet_mechanical_meter.yaml --use_gpu --do_eval --use_mpio
```

### 5. Visualization

A trained model is provided. Run the following command to download it; it will be stored under the ./contrib/MechanicalIndustryMeter/ directory:

```
python ./contrib/MechanicalIndustryMeter/download_unet_mechanical_industry_meter.py
```

Use this model to predict and visualize the results:

```
python ./pdseg/vis.py --cfg contrib/MechanicalIndustryMeter/unet_mechanical_meter.yaml --use_gpu --vis_dir vis_meter \
TEST.TEST_MODEL "./contrib/MechanicalIndustryMeter/unet_mechanical_industry_meter/"
```

The visualization results are saved in the ./vis_meter directory.

### 6. Sample Results

Input image:

![](MechanicalIndustryMeter/imgs/1560143028.5_IMG_3091.JPG)

Prediction:

![](MechanicalIndustryMeter/imgs/1560143028.5_IMG_3091.png)

## Online Demos

PaddleSeg provides online tutorials on the AI Studio platform. You are welcome to try them:

|Tutorial|Link|
|-|-|
|Industrial quality inspection|[Try it online](https://aistudio.baidu.com/aistudio/projectdetail/184392)|
|Human segmentation|[Try it online](https://aistudio.baidu.com/aistudio/projectdetail/188833)|
|Vertical-domain models|[Try it online](https://aistudio.baidu.com/aistudio/projectdetail/226710)|
cmake_minimum_required(VERSION 3.0)
project(PaddleMaskDetector CXX C)
option(WITH_MKL "Compile demo with MKL/OpenBlas support, default use MKL." ON)
option(WITH_GPU "Compile demo with GPU/CPU, default use CPU." ON)
option(WITH_STATIC_LIB "Compile demo with static/shared library, default use static." ON)
option(USE_TENSORRT "Compile demo with TensorRT." OFF)
SET(PADDLE_DIR "" CACHE PATH "Location of libraries")
SET(OPENCV_DIR "" CACHE PATH "Location of libraries")
SET(CUDA_LIB "" CACHE PATH "Location of libraries")
macro(safe_set_static_flag)
foreach(flag_var
CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE
CMAKE_CXX_FLAGS_MINSIZEREL CMAKE_CXX_FLAGS_RELWITHDEBINFO)
if(${flag_var} MATCHES "/MD")
string(REGEX REPLACE "/MD" "/MT" ${flag_var} "${${flag_var}}")
endif(${flag_var} MATCHES "/MD")
endforeach(flag_var)
endmacro()
if (WITH_MKL)
ADD_DEFINITIONS(-DUSE_MKL)
endif()
if (NOT DEFINED PADDLE_DIR OR ${PADDLE_DIR} STREQUAL "")
message(FATAL_ERROR "please set PADDLE_DIR with -DPADDLE_DIR=/path/paddle_influence_dir")
endif()
if (NOT DEFINED OPENCV_DIR OR ${OPENCV_DIR} STREQUAL "")
message(FATAL_ERROR "please set OPENCV_DIR with -DOPENCV_DIR=/path/opencv")
endif()
include_directories("${CMAKE_SOURCE_DIR}/")
include_directories("${PADDLE_DIR}/")
include_directories("${PADDLE_DIR}/third_party/install/protobuf/include")
include_directories("${PADDLE_DIR}/third_party/install/glog/include")
include_directories("${PADDLE_DIR}/third_party/install/gflags/include")
include_directories("${PADDLE_DIR}/third_party/install/xxhash/include")
if (EXISTS "${PADDLE_DIR}/third_party/install/snappy/include")
include_directories("${PADDLE_DIR}/third_party/install/snappy/include")
endif()
if(EXISTS "${PADDLE_DIR}/third_party/install/snappystream/include")
include_directories("${PADDLE_DIR}/third_party/install/snappystream/include")
endif()
include_directories("${PADDLE_DIR}/third_party/install/zlib/include")
include_directories("${PADDLE_DIR}/third_party/boost")
include_directories("${PADDLE_DIR}/third_party/eigen3")
if (EXISTS "${PADDLE_DIR}/third_party/install/snappy/lib")
link_directories("${PADDLE_DIR}/third_party/install/snappy/lib")
endif()
if(EXISTS "${PADDLE_DIR}/third_party/install/snappystream/lib")
link_directories("${PADDLE_DIR}/third_party/install/snappystream/lib")
endif()
link_directories("${PADDLE_DIR}/third_party/install/zlib/lib")
link_directories("${PADDLE_DIR}/third_party/install/protobuf/lib")
link_directories("${PADDLE_DIR}/third_party/install/glog/lib")
link_directories("${PADDLE_DIR}/third_party/install/gflags/lib")
link_directories("${PADDLE_DIR}/third_party/install/xxhash/lib")
link_directories("${PADDLE_DIR}/paddle/lib/")
link_directories("${CMAKE_CURRENT_BINARY_DIR}")
if (WIN32)
include_directories("${PADDLE_DIR}/paddle/fluid/inference")
include_directories("${PADDLE_DIR}/paddle/include")
link_directories("${PADDLE_DIR}/paddle/fluid/inference")
include_directories("${OPENCV_DIR}/build/include")
include_directories("${OPENCV_DIR}/opencv/build/include")
link_directories("${OPENCV_DIR}/build/x64/vc14/lib")
else ()
find_package(OpenCV REQUIRED PATHS ${OPENCV_DIR}/share/OpenCV NO_DEFAULT_PATH)
include_directories("${PADDLE_DIR}/paddle/include")
link_directories("${PADDLE_DIR}/paddle/lib")
include_directories(${OpenCV_INCLUDE_DIRS})
endif ()
if (WIN32)
add_definitions("/DGOOGLE_GLOG_DLL_DECL=")
set(CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} /bigobj /MTd")
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} /bigobj /MT")
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /bigobj /MTd")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /bigobj /MT")
if (WITH_STATIC_LIB)
safe_set_static_flag()
add_definitions(-DSTATIC_LIB)
endif()
else()
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -O2 -fopenmp -std=c++11")
set(CMAKE_STATIC_LIBRARY_PREFIX "")
endif()
# TODO let users define cuda lib path
if (WITH_GPU)
if (NOT DEFINED CUDA_LIB OR ${CUDA_LIB} STREQUAL "")
message(FATAL_ERROR "please set CUDA_LIB with -DCUDA_LIB=/path/cuda-8.0/lib64")
endif()
if (NOT WIN32)
if (NOT DEFINED CUDNN_LIB)
message(FATAL_ERROR "please set CUDNN_LIB with -DCUDNN_LIB=/path/cudnn_v7.4/cuda/lib64")
endif()
endif(NOT WIN32)
endif()
if (NOT WIN32)
if (USE_TENSORRT AND WITH_GPU)
include_directories("${PADDLE_DIR}/third_party/install/tensorrt/include")
link_directories("${PADDLE_DIR}/third_party/install/tensorrt/lib")
endif()
endif(NOT WIN32)
if (NOT WIN32)
set(NGRAPH_PATH "${PADDLE_DIR}/third_party/install/ngraph")
if(EXISTS ${NGRAPH_PATH})
include(GNUInstallDirs)
include_directories("${NGRAPH_PATH}/include")
link_directories("${NGRAPH_PATH}/${CMAKE_INSTALL_LIBDIR}")
set(NGRAPH_LIB ${NGRAPH_PATH}/${CMAKE_INSTALL_LIBDIR}/libngraph${CMAKE_SHARED_LIBRARY_SUFFIX})
endif()
endif()
if(WITH_MKL)
include_directories("${PADDLE_DIR}/third_party/install/mklml/include")
if (WIN32)
set(MATH_LIB ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.lib
${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.lib)
else ()
set(MATH_LIB ${PADDLE_DIR}/third_party/install/mklml/lib/libmklml_intel${CMAKE_SHARED_LIBRARY_SUFFIX}
${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5${CMAKE_SHARED_LIBRARY_SUFFIX})
execute_process(COMMAND cp -r ${PADDLE_DIR}/third_party/install/mklml/lib/libmklml_intel${CMAKE_SHARED_LIBRARY_SUFFIX} /usr/lib)
endif ()
set(MKLDNN_PATH "${PADDLE_DIR}/third_party/install/mkldnn")
if(EXISTS ${MKLDNN_PATH})
include_directories("${MKLDNN_PATH}/include")
if (WIN32)
set(MKLDNN_LIB ${MKLDNN_PATH}/lib/mkldnn.lib)
else ()
set(MKLDNN_LIB ${MKLDNN_PATH}/lib/libmkldnn.so.0)
endif ()
endif()
else()
set(MATH_LIB ${PADDLE_DIR}/third_party/install/openblas/lib/libopenblas${CMAKE_STATIC_LIBRARY_SUFFIX})
endif()
if (WIN32)
if(EXISTS "${PADDLE_DIR}/paddle/fluid/inference/libpaddle_fluid${CMAKE_STATIC_LIBRARY_SUFFIX}")
set(DEPS
${PADDLE_DIR}/paddle/fluid/inference/libpaddle_fluid${CMAKE_STATIC_LIBRARY_SUFFIX})
else()
set(DEPS
${PADDLE_DIR}/paddle/lib/libpaddle_fluid${CMAKE_STATIC_LIBRARY_SUFFIX})
endif()
endif()
if(WITH_STATIC_LIB)
set(DEPS
${PADDLE_DIR}/paddle/lib/libpaddle_fluid${CMAKE_STATIC_LIBRARY_SUFFIX})
else()
set(DEPS
${PADDLE_DIR}/paddle/lib/libpaddle_fluid${CMAKE_SHARED_LIBRARY_SUFFIX})
endif()
if (NOT WIN32)
set(DEPS ${DEPS}
${MATH_LIB} ${MKLDNN_LIB}
glog gflags protobuf z xxhash
)
if(EXISTS "${PADDLE_DIR}/third_party/install/snappystream/lib")
set(DEPS ${DEPS} snappystream)
endif()
if (EXISTS "${PADDLE_DIR}/third_party/install/snappy/lib")
set(DEPS ${DEPS} snappy)
endif()
else()
set(DEPS ${DEPS}
${MATH_LIB} ${MKLDNN_LIB}
opencv_world346 glog gflags_static libprotobuf zlibstatic xxhash)
set(DEPS ${DEPS} libcmt shlwapi)
if (EXISTS "${PADDLE_DIR}/third_party/install/snappy/lib")
set(DEPS ${DEPS} snappy)
endif()
if(EXISTS "${PADDLE_DIR}/third_party/install/snappystream/lib")
set(DEPS ${DEPS} snappystream)
endif()
endif(NOT WIN32)
if(WITH_GPU)
if(NOT WIN32)
if (USE_TENSORRT)
set(DEPS ${DEPS} ${PADDLE_DIR}/third_party/install/tensorrt/lib/libnvinfer${CMAKE_STATIC_LIBRARY_SUFFIX})
set(DEPS ${DEPS} ${PADDLE_DIR}/third_party/install/tensorrt/lib/libnvinfer_plugin${CMAKE_STATIC_LIBRARY_SUFFIX})
endif()
set(DEPS ${DEPS} ${CUDA_LIB}/libcudart${CMAKE_SHARED_LIBRARY_SUFFIX})
set(DEPS ${DEPS} ${CUDNN_LIB}/libcudnn${CMAKE_SHARED_LIBRARY_SUFFIX})
else()
set(DEPS ${DEPS} ${CUDA_LIB}/cudart${CMAKE_STATIC_LIBRARY_SUFFIX} )
set(DEPS ${DEPS} ${CUDA_LIB}/cublas${CMAKE_STATIC_LIBRARY_SUFFIX} )
set(DEPS ${DEPS} ${CUDA_LIB}/cudnn${CMAKE_STATIC_LIBRARY_SUFFIX})
endif()
endif()
if (NOT WIN32)
set(EXTERNAL_LIB "-ldl -lrt -lgomp -lz -lm -lpthread")
set(DEPS ${DEPS} ${EXTERNAL_LIB} ${OpenCV_LIBS})
endif()
add_executable(main main.cc humanseg.cc humanseg_postprocess.cc)
target_link_libraries(main ${DEPS})
if (WIN32)
add_custom_command(TARGET main POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./mklml.dll
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./libiomp5md.dll
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./mkldnn.dll
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./release/mklml.dll
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./release/libiomp5md.dll
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./release/mkldnn.dll
)
endif()
{
"configurations": [
{
"name": "x64-Release",
"generator": "Ninja",
"configurationType": "RelWithDebInfo",
"inheritEnvironments": [ "msvc_x64_x64" ],
"buildRoot": "${projectDir}\\out\\build\\${name}",
"installRoot": "${projectDir}\\out\\install\\${name}",
"cmakeCommandArgs": "",
"buildCommandArgs": "-v",
"ctestCommandArgs": "",
"variables": [
{
"name": "CUDA_LIB",
"value": "D:/projects/packages/cuda10_0/lib64",
"type": "PATH"
},
{
"name": "CUDNN_LIB",
"value": "D:/projects/packages/cuda10_0/lib64",
"type": "PATH"
},
{
"name": "OPENCV_DIR",
"value": "D:/projects/packages/opencv3_4_6",
"type": "PATH"
},
{
"name": "PADDLE_DIR",
"value": "D:/projects/packages/fluid_inference1_6_1",
"type": "PATH"
},
{
"name": "CMAKE_BUILD_TYPE",
"value": "Release",
"type": "STRING"
}
]
}
]
}
# Real-Time Video Segmentation Model: C++ Inference Deployment

This document describes how to deploy a real-time image segmentation model for `C++` inference on `Windows` and `Linux`.

## Building the C++ Inference Demo

### 1. Download the Model

Download the model here: [model download link](https://paddleseg.bj.bcebos.com/deploy/models/humanseg_paddleseg_int8.zip)

The model directory path is passed as an input argument at inference time, so please extract the archive to a convenient location.
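If you prefer to script this step, the following Python sketch downloads and unpacks the archive (the URL comes from this document; the target directory `./models` is only an example):

```python
# Hedged helper sketch: fetch and extract the demo model archive.
import os
import urllib.request
import zipfile

MODEL_URL = "https://paddleseg.bj.bcebos.com/deploy/models/humanseg_paddleseg_int8.zip"
MODEL_DIR = "./models"  # this directory is later passed to the demo executable

os.makedirs(MODEL_DIR, exist_ok=True)
archive_path = os.path.join(MODEL_DIR, "model.zip")
urllib.request.urlretrieve(MODEL_URL, archive_path)  # download the archive
with zipfile.ZipFile(archive_path) as zf:
    zf.extractall(MODEL_DIR)                         # unpack the inference model files
```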
### 2. Build

This project can be built and deployed on both Windows and Linux. See the platform-specific build guides:

- [Linux build](./docs/linux_build.md)
- [Windows build with Visual Studio 2019](./docs/windows_build.md)
# Real-Time Video Human Segmentation Model: C++ Inference Deployment on Linux

## 1. System and Software Dependencies

### 1.1 Operating System and Hardware Requirements

- Ubuntu 14.04 or 16.04 (other platforms untested)
- GCC 4.8.5 ~ 4.9.2
- A CPU with Intel MKL-DNN support
- NOTE: To run on an NVIDIA GPU, install CUDA 9.0 / 10.0 and cuDNN 7.3+ yourself (CUDA 9.1/10.1 are not supported)

### 1.2 Download the PaddlePaddle C++ Inference Library

The PaddlePaddle C++ inference library comes in CPU and GPU builds; the GPU builds support `CUDA 10.0` and `CUDA 9.0`.

Download links for each build:

| Build | Link |
| ---- | ---- |
| CPU + MKL | [fluid_inference.tgz](https://paddle-inference-lib.bj.bcebos.com/1.6.3-cpu-avx-mkl/fluid_inference.tgz) |
| CUDA 9.0 + MKL | [fluid_inference.tgz](https://paddle-inference-lib.bj.bcebos.com/1.6.3-gpu-cuda9-cudnn7-avx-mkl/fluid_inference.tgz) |
| CUDA 10.0 + MKL | [fluid_inference.tgz](https://paddle-inference-lib.bj.bcebos.com/1.6.3-gpu-cuda10-cudnn7-avx-mkl/fluid_inference.tgz) |

For more prebuilt versions, see the [C++ inference library download list](https://paddlepaddle.org.cn/documentation/docs/zh/advanced_usage/deploy/inference/build_and_install_lib_cn.html).

Download and extract the archive. The extracted `fluid_inference` directory contains:

```
fluid_inference
├── paddle # Paddle core libraries and headers
|
├── third_party # third-party libraries and headers
|
└── version.txt # version and build information
```

**Note:** Place the extracted directory in a suitable location; **its path is used later as a build dependency**.
## 2. Build and Run

### 2.1 Configure the Build Script

Open `linux_build.sh`; it contains the following:

```shell
# whether to build with GPU support
WITH_GPU=OFF
# path to the Paddle inference library
PADDLE_DIR=/PATH/TO/fluid_inference/
# CUDA library path, only needed when WITH_GPU=ON
CUDA_LIB=/PATH/TO/CUDA_LIB64/
# cuDNN library path, only needed when WITH_GPU=ON and CUDA_LIB is set
CUDNN_LIB=/PATH/TO/CUDNN_LIB64/
# OpenCV path; no need to change it if you use the OpenCV bundle downloaded by the script
OPENCV_DIR=/PATH/TO/opencv3gcc4.8/

cd build
cmake .. \
-DWITH_GPU=${WITH_GPU} \
-DPADDLE_DIR=${PADDLE_DIR} \
-DCUDA_LIB=${CUDA_LIB} \
-DCUDNN_LIB=${CUDNN_LIB} \
-DOPENCV_DIR=${OPENCV_DIR} \
-DWITH_STATIC_LIB=OFF
make -j4
```

Adjust these parameters to match your environment, then run the script to build the program:

```shell
sh linux_build.sh
```

### 2.2 Run and Visualize

The executable takes **2** arguments: the first is the path to the previously exported `inference_model`, and the second is the path to the video to predict on.

Example:

```shell
./build/main ./models /PATH/TO/TEST_VIDEO
```

A [test video](https://paddleseg.bj.bcebos.com/deploy/data/test.avi) is available for download.

The prediction result is saved to the video file `result.avi`.
# Real-Time Video Human Segmentation Model: C++ Inference Deployment on Windows

## 1. System and Software Dependencies

### 1.1 Basic Requirements

- Windows 10 / Windows Server 2016+ (other platforms untested)
- Visual Studio 2019 (Community or Professional)
- CUDA 9.0 / 10.0 + cuDNN 7.3+ (CUDA 9.1/10.1 are not supported)

### 1.2 Download OpenCV and Set the Environment Variable

- Download OpenCV 3.4.6 for Windows from the official site: [download here](https://sourceforge.net/projects/opencvlibrary/files/3.4.6/opencv-3.4.6-vc14_vc15.exe/download)
- Run the downloaded executable and extract OpenCV to a suitable directory, for example `D:\projects\opencv`
- Add the OpenCV dynamic libraries to the system PATH
    - This PC (My Computer) -> Properties -> Advanced system settings -> Environment Variables
    - Find Path in the system variables (create it if it does not exist) and double-click to edit
    - Click New, add the OpenCV bin path, for example D:\projects\opencv\build\x64\vc14\bin, and save

**Note:** The OpenCV extraction directory is used later as a build configuration option, so put it in a suitable place.

### 1.3 Download the PaddlePaddle C++ Inference Library

The `PaddlePaddle` **C++ inference library** comes in `CPU` and `GPU` builds; the `GPU` builds support `CUDA 9.0` and `CUDA 10.0`.

Commonly used builds:

| Build | Link |
| ---- | ---- |
| CPU + MKL | [fluid_inference_install_dir.zip](https://paddle-wheel.bj.bcebos.com/1.6.3/win-infer/mkl/cpu/fluid_inference_install_dir.zip) |
| CUDA 9.0 + MKL | [fluid_inference_install_dir.zip](https://paddle-wheel.bj.bcebos.com/1.6.3/win-infer/mkl/post97/fluid_inference_install_dir.zip) |
| CUDA 10.0 + MKL | [fluid_inference_install_dir.zip](https://paddle-wheel.bj.bcebos.com/1.6.3/win-infer/mkl/post107/fluid_inference_install_dir.zip) |

For more builds for other platforms, [see the download list](https://paddlepaddle.org.cn/documentation/docs/zh/advanced_usage/deploy/inference/windows_cpp_inference.html) and pick the one that suits you.

Download and extract the archive. The extracted directory contains:

```
fluid_inference_install_dir
├── paddle # Paddle core libraries and headers
|
├── third_party # third-party libraries and headers
|
└── version.txt # version and build information
```

**Note:** The path of this `fluid_inference_install_dir` directory is used for the build settings below, so place it in a suitable location.

## 2. Building with Visual Studio 2019

- 2.1 Open Visual Studio 2019 Community and click `Continue without code`, as shown below:

![step2.1](https://paddleseg.bj.bcebos.com/inference/vs2019_step1.png)

- 2.2 Click `File` -> `Open` -> `CMake`, as shown below:

![step2.2](https://paddleseg.bj.bcebos.com/inference/vs2019_step2.png)

- 2.3 Select the `CMakeLists.txt` file in the project root directory and open it, as shown below:

![step2.3](https://paddleseg.bj.bcebos.com/deploy/docs/vs2019_step2.3.png)

- 2.4 Click `Project` -> `CMake Settings for PaddleMaskDetector`

![step2.4](https://paddleseg.bj.bcebos.com/deploy/docs/vs2019_step2.4.png)

- 2.5 Click Browse to set the paths of the three build dependencies `OPENCV_DIR`, `CUDA_LIB` and `PADDLE_DIR`, then click `Save and generate CMake cache to load variables`

![step2.5](https://paddleseg.bj.bcebos.com/inference/vs2019_step5.png)

- 2.6 Click `Build` -> `Build All` to build the project

![step2.6](https://paddleseg.bj.bcebos.com/inference/vs2019_step6.png)

## 3. Run the Program

After a successful build, the executable is placed in the project subdirectory `out\build\x64-Release`. Run it as follows:

- Open `cmd` and change to that directory
- Run the following command, passing the model path and a test video

```shell
main.exe ./models/ ./data/test.avi
```

The first argument is the path to the human segmentation inference model, and the second is the video to predict on.

A [test video](https://paddleseg.bj.bcebos.com/deploy/data/test.avi) is available for download.

After running, the prediction result is saved to `result.avi`.
// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
# include "humanseg.h"
# include "humanseg_postprocess.h"
// Normalize the image by (pix - mean) * scale
void NormalizeImage(
const std::vector<float> &mean,
const std::vector<float> &scale,
cv::Mat& im, // NOLINT
float* input_buffer) {
int height = im.rows;
int width = im.cols;
int stride = width * height;
for (int h = 0; h < height; h++) {
for (int w = 0; w < width; w++) {
int base = h * width + w;
input_buffer[base + 0 * stride] =
(im.at<cv::Vec3f>(h, w)[0] - mean[0]) * scale[0];
input_buffer[base + 1 * stride] =
(im.at<cv::Vec3f>(h, w)[1] - mean[1]) * scale[1];
input_buffer[base + 2 * stride] =
(im.at<cv::Vec3f>(h, w)[2] - mean[2]) * scale[2];
}
}
}
// Load Model and return model predictor
void LoadModel(
const std::string& model_dir,
bool use_gpu,
std::unique_ptr<paddle::PaddlePredictor>* predictor) {
// Config the model info
paddle::AnalysisConfig config;
auto prog_file = model_dir + "/__model__";
auto params_file = model_dir + "/__params__";
config.SetModel(prog_file, params_file);
if (use_gpu) {
config.EnableUseGpu(100, 0);
} else {
config.DisableGpu();
}
config.SwitchUseFeedFetchOps(false);
config.SwitchSpecifyInputNames(true);
// Memory optimization
config.EnableMemoryOptim();
*predictor = std::move(CreatePaddlePredictor(config));
}
void HumanSeg::Preprocess(const cv::Mat& image_mat) {
// Clone the image : keep the original mat for postprocess
cv::Mat im = image_mat.clone();
auto eval_wh = cv::Size(eval_size_[0], eval_size_[1]);
cv::resize(im, im, eval_wh, 0.f, 0.f, cv::INTER_LINEAR);
im.convertTo(im, CV_32FC3, 1.0);
int rc = im.channels();
int rh = im.rows;
int rw = im.cols;
input_shape_ = {1, rc, rh, rw};
input_data_.resize(1 * rc * rh * rw);
  // write the normalized CHW float data directly into the input buffer
  NormalizeImage(mean_, scale_, im, input_data_.data());
}
cv::Mat HumanSeg::Postprocess(const cv::Mat& im) {
int h = input_shape_[2];
int w = input_shape_[3];
scoremap_data_.resize(3 * h * w * sizeof(float));
float* base = output_data_.data() + h * w;
for (int i = 0; i < h * w; ++i) {
scoremap_data_[i] = uchar(base[i] * 255);
}
cv::Mat im_scoremap = cv::Mat(h, w, CV_8UC1);
im_scoremap.data = scoremap_data_.data();
cv::resize(im_scoremap, im_scoremap, cv::Size(im.cols, im.rows));
im_scoremap.convertTo(im_scoremap, CV_32FC1, 1 / 255.0);
float* pblob = reinterpret_cast<float*>(im_scoremap.data);
int out_buff_capacity = 10 * im.cols * im.rows * sizeof(float);
segout_data_.resize(out_buff_capacity);
unsigned char* seg_result = segout_data_.data();
MergeProcess(im.data, pblob, im.rows, im.cols, seg_result);
cv::Mat seg_mat(im.rows, im.cols, CV_8UC1, seg_result);
cv::resize(seg_mat, seg_mat, cv::Size(im.cols, im.rows));
cv::GaussianBlur(seg_mat, seg_mat, cv::Size(5, 5), 0, 0);
float fg_threshold = 0.8;
float bg_threshold = 0.4;
cv::Mat show_seg_mat;
seg_mat.convertTo(seg_mat, CV_32FC1, 1 / 255.0);
ThresholdMask(seg_mat, fg_threshold, bg_threshold, show_seg_mat);
auto out_im = MergeSegMat(show_seg_mat, im);
return out_im;
}
cv::Mat HumanSeg::Predict(const cv::Mat& im) {
// Preprocess image
Preprocess(im);
// Prepare input tensor
auto input_names = predictor_->GetInputNames();
auto in_tensor = predictor_->GetInputTensor(input_names[0]);
in_tensor->Reshape(input_shape_);
in_tensor->copy_from_cpu(input_data_.data());
// Run predictor
predictor_->ZeroCopyRun();
// Get output tensor
auto output_names = predictor_->GetOutputNames();
auto out_tensor = predictor_->GetOutputTensor(output_names[0]);
auto output_shape = out_tensor->shape();
// Calculate output length
int output_size = 1;
for (int j = 0; j < output_shape.size(); ++j) {
output_size *= output_shape[j];
}
output_data_.resize(output_size);
out_tensor->copy_to_cpu(output_data_.data());
// Postprocessing result
return Postprocess(im);
}
// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#pragma once
#include <string>
#include <vector>
#include <memory>
#include <utility>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/optflow.hpp>
#include "paddle_inference_api.h" // NOLINT
// Load Paddle Inference Model
void LoadModel(
const std::string& model_dir,
bool use_gpu,
std::unique_ptr<paddle::PaddlePredictor>* predictor);
class HumanSeg {
public:
explicit HumanSeg(const std::string& model_dir,
const std::vector<float>& mean,
const std::vector<float>& scale,
const std::vector<int>& eval_size,
bool use_gpu = false) :
mean_(mean),
scale_(scale),
eval_size_(eval_size) {
LoadModel(model_dir, use_gpu, &predictor_);
}
// Run predictor
cv::Mat Predict(const cv::Mat& im);
private:
// Preprocess image and copy data to input buffer
void Preprocess(const cv::Mat& im);
// Postprocess result
cv::Mat Postprocess(const cv::Mat& im);
std::unique_ptr<paddle::PaddlePredictor> predictor_;
std::vector<float> input_data_;
std::vector<int> input_shape_;
std::vector<float> output_data_;
std::vector<uchar> scoremap_data_;
std::vector<uchar> segout_data_;
std::vector<float> mean_;
std::vector<float> scale_;
std::vector<int> eval_size_;
};
// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <iostream>
#include <string>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/optflow.hpp>
#include "humanseg_postprocess.h" // NOLINT
int HumanSegTrackFuse(const cv::Mat &track_fg_cfd,
const cv::Mat &dl_fg_cfd,
const cv::Mat &dl_weights,
const cv::Mat &is_track,
const float cfd_diff_thres,
const int patch_size,
cv::Mat cur_fg_cfd) {
float *cur_fg_cfd_ptr = reinterpret_cast<float*>(cur_fg_cfd.data);
float *dl_fg_cfd_ptr = reinterpret_cast<float*>(dl_fg_cfd.data);
float *track_fg_cfd_ptr = reinterpret_cast<float*>(track_fg_cfd.data);
float *dl_weights_ptr = reinterpret_cast<float*>(dl_weights.data);
uchar *is_track_ptr = reinterpret_cast<uchar*>(is_track.data);
int y_offset = 0;
int ptr_offset = 0;
int h = track_fg_cfd.rows;
int w = track_fg_cfd.cols;
float dl_fg_score = 0.0;
float track_fg_score = 0.0;
for (int y = 0; y < h; ++y) {
for (int x = 0; x < w; ++x) {
dl_fg_score = dl_fg_cfd_ptr[ptr_offset];
if (is_track_ptr[ptr_offset] > 0) {
track_fg_score = track_fg_cfd_ptr[ptr_offset];
if (dl_fg_score > 0.9 || dl_fg_score < 0.1) {
if (dl_weights_ptr[ptr_offset] <= 0.10) {
cur_fg_cfd_ptr[ptr_offset] = dl_fg_score * 0.3
+ track_fg_score * 0.7;
} else {
cur_fg_cfd_ptr[ptr_offset] = dl_fg_score * 0.4
+ track_fg_score * 0.6;
}
} else {
cur_fg_cfd_ptr[ptr_offset] = dl_fg_score * dl_weights_ptr[ptr_offset]
+ track_fg_score * (1 - dl_weights_ptr[ptr_offset]);
}
} else {
cur_fg_cfd_ptr[ptr_offset] = dl_fg_score;
}
++ptr_offset;
}
y_offset += w;
ptr_offset = y_offset;
}
return 0;
}
int HumanSegTracking(const cv::Mat &prev_gray,
const cv::Mat &cur_gray,
const cv::Mat &prev_fg_cfd,
int patch_size,
cv::Mat track_fg_cfd,
cv::Mat is_track,
cv::Mat dl_weights,
cv::Ptr<cv::optflow::DISOpticalFlow> disflow) {
cv::Mat flow_fw;
disflow->calc(prev_gray, cur_gray, flow_fw);
cv::Mat flow_bw;
disflow->calc(cur_gray, prev_gray, flow_bw);
float double_check_thres = 8;
cv::Point2f fxy_fw;
int dy_fw = 0;
int dx_fw = 0;
cv::Point2f fxy_bw;
int dy_bw = 0;
int dx_bw = 0;
float *prev_fg_cfd_ptr = reinterpret_cast<float*>(prev_fg_cfd.data);
float *track_fg_cfd_ptr = reinterpret_cast<float*>(track_fg_cfd.data);
float *dl_weights_ptr = reinterpret_cast<float*>(dl_weights.data);
uchar *is_track_ptr = reinterpret_cast<uchar*>(is_track.data);
int prev_y_offset = 0;
int prev_ptr_offset = 0;
int cur_ptr_offset = 0;
float *flow_fw_ptr = reinterpret_cast<float*>(flow_fw.data);
float roundy_fw = 0.0;
float roundx_fw = 0.0;
float roundy_bw = 0.0;
float roundx_bw = 0.0;
int h = prev_fg_cfd.rows;
int w = prev_fg_cfd.cols;
for (int r = 0; r < h; ++r) {
for (int c = 0; c < w; ++c) {
++prev_ptr_offset;
fxy_fw = flow_fw.ptr<cv::Point2f>(r)[c];
roundy_fw = fxy_fw.y >= 0 ? 0.5 : -0.5;
roundx_fw = fxy_fw.x >= 0 ? 0.5 : -0.5;
dy_fw = static_cast<int>(fxy_fw.y + roundy_fw);
dx_fw = static_cast<int>(fxy_fw.x + roundx_fw);
int cur_x = c + dx_fw;
int cur_y = r + dy_fw;
if (cur_x < 0
|| cur_x >= h
|| cur_y < 0
|| cur_y >= w) {
continue;
}
fxy_bw = flow_bw.ptr<cv::Point2f>(cur_y)[cur_x];
roundy_bw = fxy_bw.y >= 0 ? 0.5 : -0.5;
roundx_bw = fxy_bw.x >= 0 ? 0.5 : -0.5;
dy_bw = static_cast<int>(fxy_bw.y + roundy_bw);
dx_bw = static_cast<int>(fxy_bw.x + roundx_bw);
auto total = (dy_fw + dy_bw) * (dy_fw + dy_bw)
+ (dx_fw + dx_bw) * (dx_fw + dx_bw);
if (total >= double_check_thres) {
continue;
}
cur_ptr_offset = cur_y * w + cur_x;
if (abs(dy_fw) <= 0
&& abs(dx_fw) <= 0
&& abs(dy_bw) <= 0
&& abs(dx_bw) <= 0) {
dl_weights_ptr[cur_ptr_offset] = 0.05;
}
is_track_ptr[cur_ptr_offset] = 1;
track_fg_cfd_ptr[cur_ptr_offset] = prev_fg_cfd_ptr[prev_ptr_offset];
}
prev_y_offset += w;
prev_ptr_offset = prev_y_offset - 1;
}
return 0;
}
int MergeProcess(const uchar *im_buff,
const float *scoremap_buff,
const int height,
const int width,
uchar *result_buff) {
cv::Mat prev_fg_cfd;
cv::Mat cur_fg_cfd;
cv::Mat cur_fg_mask;
cv::Mat track_fg_cfd;
cv::Mat prev_gray;
cv::Mat cur_gray;
cv::Mat bgr_temp;
cv::Mat is_track;
cv::Mat static_roi;
cv::Mat weights;
cv::Ptr<cv::optflow::DISOpticalFlow> disflow =
cv::optflow::createOptFlow_DIS(
cv::optflow::DISOpticalFlow::PRESET_ULTRAFAST);
bool is_init = false;
const float *cfd_ptr = scoremap_buff;
if (!is_init) {
is_init = true;
cur_fg_cfd = cv::Mat(height, width, CV_32FC1, cv::Scalar::all(0));
memcpy(cur_fg_cfd.data, cfd_ptr, height * width * sizeof(float));
cur_fg_mask = cv::Mat(height, width, CV_8UC1, cv::Scalar::all(0));
if (height <= 64 || width <= 64) {
disflow->setFinestScale(1);
} else if (height <= 160 || width <= 160) {
disflow->setFinestScale(2);
} else {
disflow->setFinestScale(3);
}
is_track = cv::Mat(height, width, CV_8UC1, cv::Scalar::all(0));
static_roi = cv::Mat(height, width, CV_8UC1, cv::Scalar::all(0));
track_fg_cfd = cv::Mat(height, width, CV_32FC1, cv::Scalar::all(0));
bgr_temp = cv::Mat(height, width, CV_8UC3);
memcpy(bgr_temp.data, im_buff, height * width * 3 * sizeof(uchar));
cv::cvtColor(bgr_temp, cur_gray, cv::COLOR_BGR2GRAY);
weights = cv::Mat(height, width, CV_32FC1, cv::Scalar::all(0.30));
} else {
memcpy(cur_fg_cfd.data, cfd_ptr, height * width * sizeof(float));
memcpy(bgr_temp.data, im_buff, height * width * 3 * sizeof(uchar));
cv::cvtColor(bgr_temp, cur_gray, cv::COLOR_BGR2GRAY);
memset(is_track.data, 0, height * width * sizeof(uchar));
memset(static_roi.data, 0, height * width * sizeof(uchar));
weights = cv::Mat(height, width, CV_32FC1, cv::Scalar::all(0.30));
HumanSegTracking(prev_gray,
cur_gray,
prev_fg_cfd,
0,
track_fg_cfd,
is_track,
weights,
disflow);
HumanSegTrackFuse(track_fg_cfd,
cur_fg_cfd,
weights,
is_track,
1.1,
0,
cur_fg_cfd);
}
int ksize = 3;
cv::GaussianBlur(cur_fg_cfd, cur_fg_cfd, cv::Size(ksize, ksize), 0, 0);
prev_fg_cfd = cur_fg_cfd.clone();
prev_gray = cur_gray.clone();
cur_fg_cfd.convertTo(cur_fg_mask, CV_8UC1, 255);
memcpy(result_buff, cur_fg_mask.data, height * width);
return 0;
}
cv::Mat MergeSegMat(const cv::Mat& seg_mat,
const cv::Mat& ori_frame) {
cv::Mat return_frame;
cv::resize(ori_frame, return_frame, cv::Size(ori_frame.cols, ori_frame.rows));
for (int i = 0; i < ori_frame.rows; i++) {
for (int j = 0; j < ori_frame.cols; j++) {
float score = seg_mat.at<uchar>(i, j) / 255.0;
if (score > 0.1) {
return_frame.at<cv::Vec3b>(i, j)[2] = static_cast<int>((1 - score) * 255
+ score*return_frame.at<cv::Vec3b>(i, j)[2]);
return_frame.at<cv::Vec3b>(i, j)[1] = static_cast<int>((1 - score) * 255
+ score*return_frame.at<cv::Vec3b>(i, j)[1]);
return_frame.at<cv::Vec3b>(i, j)[0] = static_cast<int>((1 - score) * 255
+ score*return_frame.at<cv::Vec3b>(i, j)[0]);
} else {
return_frame.at<cv::Vec3b>(i, j) = {255, 255, 255};
}
}
}
return return_frame;
}
int ThresholdMask(const cv::Mat &fg_cfd,
const float fg_thres,
const float bg_thres,
cv::Mat& fg_mask) {
if (fg_cfd.type() != CV_32FC1) {
printf("ThresholdMask: type is not CV_32FC1.\n");
return -1;
}
if (!(fg_mask.type() == CV_8UC1
&& fg_mask.rows == fg_cfd.rows
&& fg_mask.cols == fg_cfd.cols)) {
fg_mask = cv::Mat(fg_cfd.rows, fg_cfd.cols, CV_8UC1, cv::Scalar::all(0));
}
for (int r = 0; r < fg_cfd.rows; ++r) {
for (int c = 0; c < fg_cfd.cols; ++c) {
float score = fg_cfd.at<float>(r, c);
if (score < bg_thres) {
fg_mask.at<uchar>(r, c) = 0;
} else if (score > fg_thres) {
fg_mask.at<uchar>(r, c) = 255;
} else {
fg_mask.at<uchar>(r, c) = static_cast<int>(
(score-bg_thres) / (fg_thres - bg_thres) * 255);
}
}
}
return 0;
}
// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#pragma once
#include <opencv2/core/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/optflow.hpp>
int ThresholdMask(const cv::Mat &fg_cfd,
const float fg_thres,
const float bg_thres,
cv::Mat& fg_mask);
cv::Mat MergeSegMat(const cv::Mat& seg_mat,
const cv::Mat& ori_frame);
int MergeProcess(const uchar *im_buff,
const float *im_scoremap_buff,
const int height,
const int width,
uchar *result_buff);
OPENCV_URL=https://paddleseg.bj.bcebos.com/deploy/deps/opencv346.tar.bz2
if [ ! -d "./deps/opencv346" ]; then
mkdir -p deps
cd deps
wget -c ${OPENCV_URL}
tar xvfj opencv346.tar.bz2
rm -rf opencv346.tar.bz2
cd ..
fi
WITH_GPU=OFF
PADDLE_DIR=/root/projects/deps/fluid_inference/
CUDA_LIB=/usr/local/cuda-10.0/lib64/
CUDNN_LIB=/usr/local/cuda-10.0/lib64/
OPENCV_DIR=$(pwd)/deps/opencv346/
echo ${OPENCV_DIR}
rm -rf build
mkdir -p build
cd build
cmake .. \
-DWITH_GPU=${WITH_GPU} \
-DPADDLE_DIR=${PADDLE_DIR} \
-DCUDA_LIB=${CUDA_LIB} \
-DCUDNN_LIB=${CUDNN_LIB} \
-DOPENCV_DIR=${OPENCV_DIR} \
-DWITH_STATIC_LIB=OFF
make clean
make -j12
// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <iostream>
#include <string>
#include "humanseg.h" // NOLINT
#include "humanseg_postprocess.h" // NOLINT
// Do predicting on a video file
int VideoPredict(const std::string& video_path, HumanSeg& seg)
{
cv::VideoCapture capture;
capture.open(video_path.c_str());
if (!capture.isOpened()) {
printf("can not open video : %s\n", video_path.c_str());
return -1;
}
int video_width = static_cast<int>(capture.get(CV_CAP_PROP_FRAME_WIDTH));
int video_height = static_cast<int>(capture.get(CV_CAP_PROP_FRAME_HEIGHT));
cv::VideoWriter video_out;
std::string video_out_path = "result.avi";
video_out.open(video_out_path.c_str(),
CV_FOURCC('M', 'J', 'P', 'G'),
30.0,
cv::Size(video_width, video_height),
true);
if (!video_out.isOpened()) {
printf("create video writer failed!\n");
return -1;
}
cv::Mat frame;
while (capture.read(frame)) {
if (frame.empty()) {
break;
}
cv::Mat out_im = seg.Predict(frame);
video_out.write(out_im);
}
capture.release();
video_out.release();
return 0;
}
// Do predicting on a image file
int ImagePredict(const std::string& image_path, HumanSeg& seg)
{
  cv::Mat img = cv::imread(image_path, cv::IMREAD_COLOR);
  cv::Mat out_im = seg.Predict(img);
  cv::imwrite("result.jpeg", out_im);
return 0;
}
int main(int argc, char* argv[]) {
if (argc < 3 || argc > 4) {
std::cout << "Usage:"
<< "./humanseg ./models/ ./data/test.avi"
<< std::endl;
return -1;
}
bool use_gpu = (argc == 4 ? std::stoi(argv[3]) : false);
auto model_dir = std::string(argv[1]);
auto input_path = std::string(argv[2]);
// Init Model
std::vector<float> means = {104.008, 116.669, 122.675};
std::vector<float> scale = {1.000, 1.000, 1.000};
std::vector<int> eval_sz = {192, 192};
HumanSeg seg(model_dir, means, scale, eval_sz, use_gpu);
// Call ImagePredict while input_path is a image file path
// The output will be saved as result.jpeg
// ImagePredict(input_path, seg);
// Call VideoPredict while input_path is a video file path
// The output will be saved as result.avi
VideoPredict(input_path, seg);
return 0;
}
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
TEST_PATH = os.path.join(LOCAL_PATH, "..", "..", "test")
sys.path.append(TEST_PATH)
from test_utils import download_file_and_uncompress
if __name__ == "__main__":
download_file_and_uncompress(
url='https://paddleseg.bj.bcebos.com/inference_model/RoadLine.tgz',
savepath=LOCAL_PATH,
extrapath=LOCAL_PATH,
extraname='RoadLine')
print("Pretrained Model download success!")
# -*- coding: utf-8 -*-
import os
import cv2
import numpy as np
from utils.util import get_arguments
from utils.palette import get_palette
from PIL import Image as PILImage
import importlib
args = get_arguments()
config = importlib.import_module('config')
cfg = getattr(config, 'cfg')
# Paddle garbage-collection FLAG; the ACE2P model is large, enable this when GPU memory is tight
os.environ['FLAGS_eager_delete_tensor_gb']='0.0'
import paddle.fluid as fluid
# dataset class used for prediction
class TestDataSet():
def __init__(self):
self.data_dir = cfg.data_dir
self.data_list_file = cfg.data_list_file
self.data_list = self.get_data_list()
self.data_num = len(self.data_list)
def get_data_list(self):
        # build the list of image paths to predict on
data_list = []
data_file_handler = open(self.data_list_file, 'r')
for line in data_file_handler:
img_name = line.strip()
name_prefix = img_name.split('.')[0]
if len(img_name.split('.')) == 1:
img_name = img_name + '.jpg'
img_path = os.path.join(self.data_dir, img_name)
data_list.append(img_path)
return data_list
def preprocess(self, img):
        # image preprocessing
if cfg.example == 'ACE2P':
reader = importlib.import_module(args.example+'.reader')
ACE2P_preprocess = getattr(reader, 'preprocess')
img = ACE2P_preprocess(img)
else:
img = cv2.resize(img, cfg.input_size).astype(np.float32)
img -= np.array(cfg.MEAN)
img /= np.array(cfg.STD)
img = img.transpose((2, 0, 1))
img = np.expand_dims(img, axis=0)
return img
def get_data(self, index):
        # load one image and its metadata
img_path = self.data_list[index]
img = cv2.imread(img_path, cv2.IMREAD_COLOR)
if img is None:
return img, img,img_path, None
img_name = img_path.split(os.sep)[-1]
name_prefix = img_name.replace('.'+img_name.split('.')[-1],'')
img_shape = img.shape[:2]
img_process = self.preprocess(img)
return img, img_process, name_prefix, img_shape
def infer():
if not os.path.exists(cfg.vis_dir):
os.makedirs(cfg.vis_dir)
palette = get_palette(cfg.class_num)
    # display threshold for the human segmentation result
thresh = 120
place = fluid.CUDAPlace(0) if cfg.use_gpu else fluid.CPUPlace()
exe = fluid.Executor(place)
    # load the inference model
test_prog, feed_name, fetch_list = fluid.io.load_inference_model(
dirname=cfg.model_path, executor=exe, params_filename='__params__')
    # load the prediction dataset
test_dataset = TestDataSet()
data_num = test_dataset.data_num
for idx in range(data_num):
        # fetch one sample
ori_img, image, im_name, im_shape = test_dataset.get_data(idx)
if image is None:
print(im_name, 'is None')
continue
        # run prediction
        if cfg.example == 'ACE2P':
            # the ACE2P model uses multi-scale testing
            reader = importlib.import_module(args.example+'.reader')
            multi_scale_test = getattr(reader, 'multi_scale_test')
            parsing, logits = multi_scale_test(exe, test_prog, feed_name, fetch_list, image, im_shape)
        else:
            # the HumanSeg and RoadLine models use single-scale testing
            result = exe.run(program=test_prog, feed={feed_name[0]: image}, fetch_list=fetch_list)
            parsing = np.argmax(result[0][0], axis=0)
            parsing = cv2.resize(parsing.astype(np.uint8), im_shape[::-1])
        # save the prediction result
result_path = os.path.join(cfg.vis_dir, im_name + '.png')
if cfg.example == 'HumanSeg':
logits = result[0][0][1]*255
logits = cv2.resize(logits, im_shape[::-1])
ret, logits = cv2.threshold(logits, thresh, 0, cv2.THRESH_TOZERO)
logits = 255 *(logits - thresh)/(255 - thresh)
            # add the segmentation result as the alpha channel
rgba = np.concatenate((ori_img, np.expand_dims(logits, axis=2)), axis=2)
cv2.imwrite(result_path, rgba)
else:
output_im = PILImage.fromarray(np.asarray(parsing, dtype=np.uint8))
output_im.putpalette(palette)
output_im.save(result_path)
        if (idx + 1) % 100 == 0:
            print('%d images processed' % (idx + 1))
    print('%d images processed, done' % (idx + 1))
return 0
if __name__ == "__main__":
infer()
##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
## Created by: RainbowSecret
## Microsoft Research
## yuyua@microsoft.com
## Copyright (c) 2018
##
## This source code is licensed under the MIT-style license found in the
## LICENSE file in the root directory of this source tree
##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import cv2
def get_palette(num_cls):
""" Returns the color map for visualizing the segmentation mask.
Args:
num_cls: Number of classes
Returns:
The color map
"""
n = num_cls
palette = [0] * (n * 3)
for j in range(0, n):
lab = j
palette[j * 3 + 0] = 0
palette[j * 3 + 1] = 0
palette[j * 3 + 2] = 0
i = 0
while lab:
palette[j * 3 + 0] |= (((lab >> 0) & 1) << (7 - i))
palette[j * 3 + 1] |= (((lab >> 1) & 1) << (7 - i))
palette[j * 3 + 2] |= (((lab >> 2) & 1) << (7 - i))
i += 1
lab >>= 3
return palette
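A small hedged example of how a palette produced by `get_palette` is applied to a label map (it mirrors the `putpalette` usage in the contrib inference script above; the label values below are placeholders):

```python
# Hedged usage sketch for get_palette(); the label map is made up for illustration.
import numpy as np
from PIL import Image as PILImage

parsing = np.zeros((256, 256), dtype=np.uint8)  # fake HxW map of class ids
parsing[64:192, 64:192] = 3                     # pretend class id 3 was predicted here

palette = get_palette(20)                       # flat [R, G, B] list, 3 entries per class
output_im = PILImage.fromarray(parsing)         # grayscale image of class ids
output_im.putpalette(palette)                   # attach the palette -> indexed 'P' image
output_im.save('result.png')
```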
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import argparse
import os
def get_arguments():
parser = argparse.ArgumentParser()
parser.add_argument("--use_gpu",
action="store_true",
help="Use gpu or cpu to test.")
parser.add_argument('--example',
type=str,
help='RoadLine, HumanSeg or ACE2P')
return parser.parse_args()
class AttrDict(dict):
def __init__(self, *args, **kwargs):
super(AttrDict, self).__init__(*args, **kwargs)
def __getattr__(self, name):
if name in self.__dict__:
return self.__dict__[name]
elif name in self:
return self[name]
else:
raise AttributeError(name)
def __setattr__(self, name, value):
if name in self.__dict__:
self.__dict__[name] = value
else:
self[name] = value
def merge_cfg_from_args(args, cfg):
"""Merge config keys, values in args into the global config."""
for k, v in vars(args).items():
d = cfg
try:
value = eval(v)
except:
value = v
if value is not None:
cfg[k] = value
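A hedged sketch of how these helpers are wired together (the attribute names and the script name are illustrative, not the real config schema of the contrib demos):

```python
# Hedged usage sketch for AttrDict / get_arguments / merge_cfg_from_args.
cfg = AttrDict()
cfg.use_gpu = False          # defaults, overridden by CLI flags when given
cfg.example = 'HumanSeg'

args = get_arguments()       # e.g. invoked as: python demo.py --use_gpu --example HumanSeg
merge_cfg_from_args(args, cfg)

print(cfg.use_gpu, cfg.example)  # attribute-style access provided by AttrDict
```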
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
TEST_PATH = os.path.join(LOCAL_PATH, "..", "test")
sys.path.append(TEST_PATH)
from test_utils import download_file_and_uncompress
def download_pet_dataset(savepath, extrapath):
url = "https://paddleseg.bj.bcebos.com/dataset/optic_disc_seg.zip"
download_file_and_uncompress(
url=url, savepath=savepath, extrapath=extrapath)
if __name__ == "__main__":
download_pet_dataset(LOCAL_PATH, LOCAL_PATH)
print("Dataset download finish!")
......@@ -99,4 +99,4 @@ cd /d D:\projects\PaddleSeg\deploy\cpp\out\build\x64-Release
demo.exe --conf=/path/to/your/conf --input_dir=/path/to/your/input/data/directory
```
更详细说明请参考ReadMe文档: [预测和可视化部分](../ReadMe.md)
更详细说明请参考ReadMe文档: [预测和可视化部分](../README.md)
......@@ -10,11 +10,10 @@
* An Android phone or development board;
### 2.2 Installation
* git clone https://github.com/PaddlePaddle/PaddleSeg.git
* Open Android Studio; in the "Welcome to Android Studio" window, click "Open an existing Android Studio project", browse to the "/PaddleSeg/lite/humanseg-android-demo/" directory in the path-selection dialog, and click the "Open" button at the bottom right to import the project
* git clone https://github.com/PaddlePaddle/PaddleSeg.git ;
* Open Android Studio; in the "Welcome to Android Studio" window, click "Open an existing Android Studio project", browse to the "/PaddleSeg/lite/humanseg_android_demo/" directory in the path-selection dialog, and click the "Open" button at the bottom right to import the project. Building the project will download the model and the Paddle-Lite inference library needed by the demo;
* Connect the Android phone or development board via USB (a quick connectivity check is sketched after this list);
* Once the project has loaded, click Run->Run 'App' in the menu bar and, in the "Select Deployment Target" dialog, choose the connected Android device, then click "OK";
* The demo's main screen appears on the device; tap the "Image Segmentation" icon to open the human segmentation example;
* The human segmentation demo loads a portrait image by default and shows the CPU prediction result below the image;
* In the human segmentation demo, you can also load test images from the photo gallery or the camera via the "Gallery" and "Take Photo" buttons at the top;
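Before running the demo, a quick way to confirm the USB connection is the standard Android SDK tooling (not specific to this repo):
```shell
# the connected phone or board should show up as "device" in the list
adb devices
```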
......@@ -48,7 +47,7 @@ Paddle-Lite的编译目前支持Docker,Linux和Mac OS开发环境,建议使
* PaddlePredictor.jar;
* arm64-v8a/libpaddle_lite_jni.so;
* armeabi-v7a/libpaddle_lite_jni.so;
* armeabi-v7a/libpaddle_lite_jni.so;
The two approaches are described below:
......
import java.security.MessageDigest
apply plugin: 'com.android.application'
android {
compileSdkVersion 28
defaultConfig {
applicationId "com.baidu.paddle.lite.demo.human_segmentation"
minSdkVersion 15
targetSdkVersion 28
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
implementation fileTree(include: ['*.jar'], dir: 'libs')
implementation 'com.android.support:appcompat-v7:28.0.0'
implementation 'com.android.support.constraint:constraint-layout:1.1.3'
implementation 'com.android.support:design:28.0.0'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:1.0.2'
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
implementation files('libs/PaddlePredictor.jar')
}
def paddleLiteLibs = 'https://paddlelite-demo.bj.bcebos.com/libs/android/paddle_lite_libs_v2_1_0_bug_fixed.tar.gz'
task downloadAndExtractPaddleLiteLibs(type: DefaultTask) {
doFirst {
println "Downloading and extracting Paddle Lite libs"
}
doLast {
// Prepare cache folder for libs
if (!file("cache").exists()) {
mkdir "cache"
}
// Generate cache name for libs
MessageDigest messageDigest = MessageDigest.getInstance('MD5')
messageDigest.update(paddleLiteLibs.bytes)
String cacheName = new BigInteger(1, messageDigest.digest()).toString(32)
// Download libs
if (!file("cache/${cacheName}.tar.gz").exists()) {
ant.get(src: paddleLiteLibs, dest: file("cache/${cacheName}.tar.gz"))
}
// Unpack libs
copy {
from tarTree("cache/${cacheName}.tar.gz")
into "cache/${cacheName}"
}
// Copy PaddlePredictor.jar
if (!file("libs/PaddlePredictor.jar").exists()) {
copy {
from "cache/${cacheName}/java/PaddlePredictor.jar"
into "libs"
}
}
// Copy libpaddle_lite_jni.so for armeabi-v7a and arm64-v8a
if (!file("src/main/jniLibs/armeabi-v7a/libpaddle_lite_jni.so").exists()) {
copy {
from "cache/${cacheName}/java/libs/armeabi-v7a/"
into "src/main/jniLibs/armeabi-v7a"
}
}
if (!file("src/main/jniLibs/arm64-v8a/libpaddle_lite_jni.so").exists()) {
copy {
from "cache/${cacheName}/java/libs/arm64-v8a/"
into "src/main/jniLibs/arm64-v8a"
}
}
}
}
preBuild.dependsOn downloadAndExtractPaddleLiteLibs
def paddleLiteModels = [
[
'src' : 'https://paddlelite-demo.bj.bcebos.com/models/deeplab_mobilenet_fp32_for_cpu_v2_1_0.tar.gz',
'dest' : 'src/main/assets/image_segmentation/models/deeplab_mobilenet_for_cpu'
],
]
task downloadAndExtractPaddleLiteModels(type: DefaultTask) {
doFirst {
println "Downloading and extracting Paddle Lite models"
}
doLast {
// Prepare cache folder for models
if (!file("cache").exists()) {
mkdir "cache"
}
paddleLiteModels.eachWithIndex { model, index ->
MessageDigest messageDigest = MessageDigest.getInstance('MD5')
messageDigest.update(model.src.bytes)
String cacheName = new BigInteger(1, messageDigest.digest()).toString(32)
// Download model file
if (!file("cache/${cacheName}.tar.gz").exists()) {
ant.get(src: model.src, dest: file("cache/${cacheName}.tar.gz"))
}
// Unpack model file
copy {
from tarTree("cache/${cacheName}.tar.gz")
into "cache/${cacheName}"
}
// Copy model file
if (!file("${model.dest}/__model__.nb").exists() || !file("${model.dest}/param.nb").exists()) {
copy {
from "cache/${cacheName}"
into "${model.dest}"
}
}
}
}
}
preBuild.dependsOn downloadAndExtractPaddleLiteModels
## This file must *NOT* be checked into Version Control Systems,
# as it contains information specific to your local configuration.
#
# Location of the SDK. This is only used by Gradle.
# For customization when using a Version Control System, please read the
# header note.
#Mon Nov 25 17:01:52 CST 2019
sdk.dir=/Users/chenlingchi/Library/Android/sdk
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.baidu.paddle.lite.demo">
package="com.baidu.paddle.lite.demo.segmentation">
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
......@@ -17,15 +17,11 @@
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN"/>
<category android:name="android.intent.category.LAUNCHER"/>
</intent-filter>
</activity>
<activity
android:name=".segmentation.ImgSegActivity"
android:label="Image Segmentation"/>
<activity
android:name=".segmentation.ImgSegSettingsActivity"
android:name=".SettingsActivity"
android:label="Settings">
</activity>
</application>
......
......@@ -14,7 +14,7 @@
* limitations under the License.
*/
package com.baidu.paddle.lite.demo;
package com.baidu.paddle.lite.demo.segmentation;
import android.content.res.Configuration;
import android.os.Bundle;
......
package com.baidu.paddle.lite.demo;
package com.baidu.paddle.lite.demo.segmentation;
import android.Manifest;
import android.app.ProgressDialog;
import android.content.ContentResolver;
import android.content.Intent;
import android.content.SharedPreferences;
import android.content.pm.PackageManager;
import android.database.Cursor;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.net.Uri;
import android.os.Bundle;
import android.os.Environment;
import android.os.Handler;
import android.os.HandlerThread;
import android.os.Message;
import android.preference.PreferenceManager;
import android.provider.MediaStore;
import android.support.annotation.NonNull;
import android.support.v4.app.ActivityCompat;
import android.support.v4.content.ContextCompat;
import android.support.v4.content.FileProvider;
import android.support.v7.app.ActionBar;
import android.support.v7.app.AppCompatActivity;
import android.text.method.ScrollingMovementMethod;
import android.util.Log;
import android.view.Menu;
import android.view.MenuInflater;
import android.view.MenuItem;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;
import com.baidu.paddle.lite.demo.segmentation.config.Config;
import com.baidu.paddle.lite.demo.segmentation.preprocess.Preprocess;
import com.baidu.paddle.lite.demo.segmentation.visual.Visualize;
import java.io.File;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.io.InputStream;
public class MainActivity extends AppCompatActivity {
public class CommonActivity extends AppCompatActivity {
private static final String TAG = CommonActivity.class.getSimpleName();
private static final String TAG = MainActivity.class.getSimpleName();
public static final int OPEN_GALLERY_REQUEST_CODE = 0;
public static final int TAKE_PHOTO_REQUEST_CODE = 1;
......@@ -51,14 +57,25 @@ public class CommonActivity extends AppCompatActivity {
protected Handler sender = null; // send command to worker thread
protected HandlerThread worker = null; // worker thread to load&run model
protected TextView tvInputSetting;
protected ImageView ivInputImage;
protected TextView tvOutputResult;
protected TextView tvInferenceTime;
// model config
Config config = new Config();
protected Predictor predictor = new Predictor();
Preprocess preprocess = new Preprocess();
Visualize visualize = new Visualize();
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
ActionBar supportActionBar = getSupportActionBar();
if (supportActionBar != null) {
supportActionBar.setDisplayHomeAsUpEnabled(true);
}
setContentView(R.layout.activity_main);
receiver = new Handler() {
@Override
public void handleMessage(Message msg) {
......@@ -69,7 +86,7 @@ public class CommonActivity extends AppCompatActivity {
break;
case RESPONSE_LOAD_MODEL_FAILED:
pbLoadModel.dismiss();
Toast.makeText(CommonActivity.this, "Load model failed!", Toast.LENGTH_SHORT).show();
Toast.makeText(MainActivity.this, "Load model failed!", Toast.LENGTH_SHORT).show();
onLoadModelFailed();
break;
case RESPONSE_RUN_MODEL_SUCCESSED:
......@@ -78,7 +95,7 @@ public class CommonActivity extends AppCompatActivity {
break;
case RESPONSE_RUN_MODEL_FAILED:
pbRunModel.dismiss();
Toast.makeText(CommonActivity.this, "Run model failed!", Toast.LENGTH_SHORT).show();
Toast.makeText(MainActivity.this, "Run model failed!", Toast.LENGTH_SHORT).show();
onRunModelFailed();
break;
default:
......@@ -113,6 +130,29 @@ public class CommonActivity extends AppCompatActivity {
}
}
};
tvInputSetting = findViewById(R.id.tv_input_setting);
ivInputImage = findViewById(R.id.iv_input_image);
tvInferenceTime = findViewById(R.id.tv_inference_time);
tvOutputResult = findViewById(R.id.tv_output_result);
tvInputSetting.setMovementMethod(ScrollingMovementMethod.getInstance());
tvOutputResult.setMovementMethod(ScrollingMovementMethod.getInstance());
}
public boolean onLoadModel() {
return predictor.init(MainActivity.this, config);
}
public boolean onRunModel() {
return predictor.isLoaded() && predictor.runModel(preprocess,visualize);
}
public void onLoadModelFailed() {
}
public void onRunModelFailed() {
}
public void loadModel() {
......@@ -125,33 +165,61 @@ public class CommonActivity extends AppCompatActivity {
sender.sendEmptyMessage(REQUEST_RUN_MODEL);
}
public boolean onLoadModel() {
return true;
}
public boolean onRunModel() {
return true;
}
public void onLoadModelSuccessed() {
}
public void onLoadModelFailed() {
// load test image from file_paths and run model
try {
if (config.imagePath.isEmpty()) {
return;
}
Bitmap image = null;
// if the image path does not start with '/', read the test image from the app assets;
// otherwise decode it directly from the file system
if (!config.imagePath.substring(0, 1).equals("/")) {
InputStream imageStream = getAssets().open(config.imagePath);
image = BitmapFactory.decodeStream(imageStream);
} else {
if (!new File(config.imagePath).exists()) {
return;
}
image = BitmapFactory.decodeFile(config.imagePath);
}
if (image != null && predictor.isLoaded()) {
predictor.setInputImage(image);
runModel();
}
} catch (IOException e) {
Toast.makeText(MainActivity.this, "Load image failed!", Toast.LENGTH_SHORT).show();
e.printStackTrace();
}
}
public void onRunModelSuccessed() {
// obtain results and update UI
tvInferenceTime.setText("Inference time: " + predictor.inferenceTime() + " ms");
Bitmap outputImage = predictor.outputImage();
if (outputImage != null) {
ivInputImage.setImageBitmap(outputImage);
}
tvOutputResult.setText(predictor.outputResult());
tvOutputResult.scrollTo(0, 0);
}
public void onRunModelFailed() {
}
public void onImageChanged(Bitmap image) {
// rerun model if users pick test image from gallery or camera
if (image != null && predictor.isLoaded()) {
predictor.setInputImage(image);
runModel();
}
}
public void onImageChanged(String path) {
Bitmap image = BitmapFactory.decodeFile(path);
predictor.setInputImage(image);
runModel();
}
public void onSettingsClicked() {
startActivity(new Intent(MainActivity.this, SettingsActivity.class));
}
@Override
......@@ -186,7 +254,6 @@ public class CommonActivity extends AppCompatActivity {
}
return super.onOptionsItemSelected(item);
}
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
@NonNull int[] grantResults) {
......@@ -195,33 +262,6 @@ public class CommonActivity extends AppCompatActivity {
Toast.makeText(this, "Permission Denied", Toast.LENGTH_SHORT).show();
}
}
private boolean requestAllPermissions() {
if (ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE)
!= PackageManager.PERMISSION_GRANTED || ContextCompat.checkSelfPermission(this,
Manifest.permission.CAMERA)
!= PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.CAMERA},
0);
return false;
}
return true;
}
private void openGallery() {
Intent intent = new Intent(Intent.ACTION_PICK, null);
intent.setDataAndType(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, "image/*");
startActivityForResult(intent, OPEN_GALLERY_REQUEST_CODE);
}
private void takePhoto() {
Intent takePhotoIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
if (takePhotoIntent.resolveActivity(getPackageManager()) != null) {
startActivityForResult(takePhotoIntent, TAKE_PHOTO_REQUEST_CODE);
}
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
......@@ -251,14 +291,97 @@ public class CommonActivity extends AppCompatActivity {
}
}
}
private boolean requestAllPermissions() {
if (ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE)
!= PackageManager.PERMISSION_GRANTED || ContextCompat.checkSelfPermission(this,
Manifest.permission.CAMERA)
!= PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.CAMERA},
0);
return false;
}
return true;
}
private void openGallery() {
Intent intent = new Intent(Intent.ACTION_PICK, null);
intent.setDataAndType(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, "image/*");
startActivityForResult(intent, OPEN_GALLERY_REQUEST_CODE);
}
private void takePhoto() {
Intent takePhotoIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
if (takePhotoIntent.resolveActivity(getPackageManager()) != null) {
startActivityForResult(takePhotoIntent, TAKE_PHOTO_REQUEST_CODE);
}
}
@Override
public boolean onPrepareOptionsMenu(Menu menu) {
boolean isLoaded = predictor.isLoaded();
menu.findItem(R.id.open_gallery).setEnabled(isLoaded);
menu.findItem(R.id.take_photo).setEnabled(isLoaded);
return super.onPrepareOptionsMenu(menu);
}
@Override
protected void onResume() {
Log.i(TAG,"begin onResume");
super.onResume();
SharedPreferences sharedPreferences = PreferenceManager.getDefaultSharedPreferences(this);
boolean settingsChanged = false;
String model_path = sharedPreferences.getString(getString(R.string.MODEL_PATH_KEY),
getString(R.string.MODEL_PATH_DEFAULT));
String label_path = sharedPreferences.getString(getString(R.string.LABEL_PATH_KEY),
getString(R.string.LABEL_PATH_DEFAULT));
String image_path = sharedPreferences.getString(getString(R.string.IMAGE_PATH_KEY),
getString(R.string.IMAGE_PATH_DEFAULT));
settingsChanged |= !model_path.equalsIgnoreCase(config.modelPath);
settingsChanged |= !label_path.equalsIgnoreCase(config.labelPath);
settingsChanged |= !image_path.equalsIgnoreCase(config.imagePath);
int cpu_thread_num = Integer.parseInt(sharedPreferences.getString(getString(R.string.CPU_THREAD_NUM_KEY),
getString(R.string.CPU_THREAD_NUM_DEFAULT)));
settingsChanged |= cpu_thread_num != config.cpuThreadNum;
String cpu_power_mode =
sharedPreferences.getString(getString(R.string.CPU_POWER_MODE_KEY),
getString(R.string.CPU_POWER_MODE_DEFAULT));
settingsChanged |= !cpu_power_mode.equalsIgnoreCase(config.cpuPowerMode);
String input_color_format =
sharedPreferences.getString(getString(R.string.INPUT_COLOR_FORMAT_KEY),
getString(R.string.INPUT_COLOR_FORMAT_DEFAULT));
settingsChanged |= !input_color_format.equalsIgnoreCase(config.inputColorFormat);
long[] input_shape =
Utils.parseLongsFromString(sharedPreferences.getString(getString(R.string.INPUT_SHAPE_KEY),
getString(R.string.INPUT_SHAPE_DEFAULT)), ",");
settingsChanged |= input_shape.length != config.inputShape.length;
if (!settingsChanged) {
for (int i = 0; i < input_shape.length; i++) {
settingsChanged |= input_shape[i] != config.inputShape[i];
}
}
if (settingsChanged) {
config.init(model_path,label_path,image_path,cpu_thread_num,cpu_power_mode,
input_color_format,input_shape);
preprocess.init(config);
// update UI
tvInputSetting.setText("Model: " + config.modelPath.substring(config.modelPath.lastIndexOf("/") + 1) + "\n" + "CPU" +
" Thread Num: " + Integer.toString(config.cpuThreadNum) + "\n" + "CPU Power Mode: " + config.cpuPowerMode);
tvInputSetting.scrollTo(0, 0);
// reload model if configure has been changed
loadModel();
}
}
@Override
protected void onDestroy() {
if (predictor != null) {
predictor.releaseModel();
}
worker.quit();
super.onDestroy();
}
......
......@@ -4,9 +4,12 @@ import android.content.Context;
import android.graphics.Bitmap;
import android.util.Log;
import com.baidu.paddle.lite.MobileConfig;
import com.baidu.paddle.lite.PaddlePredictor;
import com.baidu.paddle.lite.PowerMode;
import com.baidu.paddle.lite.Tensor;
import com.baidu.paddle.lite.demo.Predictor;
import com.baidu.paddle.lite.demo.segmentation.config.Config;
import com.baidu.paddle.lite.demo.segmentation.preprocess.Preprocess;
import com.baidu.paddle.lite.demo.segmentation.visual.Visualize;
......@@ -14,15 +17,11 @@ import java.io.InputStream;
import java.util.Date;
import java.util.Vector;
import static android.graphics.Color.blue;
import static android.graphics.Color.green;
import static android.graphics.Color.red;
public class ImgSegPredictor extends Predictor {
private static final String TAG = ImgSegPredictor.class.getSimpleName();
public class Predictor {
private static final String TAG = Predictor.class.getSimpleName();
protected Vector<String> wordLabels = new Vector<String>();
Config config;
Config config = new Config();
protected Bitmap inputImage = null;
protected Bitmap scaledImage = null;
......@@ -31,10 +30,27 @@ public class ImgSegPredictor extends Predictor {
protected float preprocessTime = 0;
protected float postprocessTime = 0;
public ImgSegPredictor() {
public boolean isLoaded = false;
public int warmupIterNum = 0;
public int inferIterNum = 1;
protected Context appCtx = null;
public int cpuThreadNum = 1;
public String cpuPowerMode = "LITE_POWER_HIGH";
public String modelPath = "";
public String modelName = "";
protected PaddlePredictor paddlePredictor = null;
protected float inferenceTime = 0;
public Predictor() {
super();
}
public boolean init(Context appCtx, String modelPath, int cpuThreadNum, String cpuPowerMode) {
this.appCtx = appCtx;
isLoaded = loadModel(modelPath, cpuThreadNum, cpuPowerMode);
return isLoaded;
}
public boolean init(Context appCtx, Config config) {
if (config.inputShape.length != 4) {
......@@ -55,8 +71,9 @@ public class ImgSegPredictor extends Predictor {
Log.i(TAG, "only RGB and BGR color format is supported.");
return false;
}
super.init(appCtx, config.modelPath, config.cpuThreadNum, config.cpuPowerMode);
if (!super.isLoaded()) {
init(appCtx, config.modelPath, config.cpuThreadNum, config.cpuPowerMode);
if (!isLoaded()) {
return false;
}
this.config = config;
......@@ -64,6 +81,11 @@ public class ImgSegPredictor extends Predictor {
return isLoaded;
}
public boolean isLoaded() {
return paddlePredictor != null && isLoaded;
}
protected boolean loadLabel(String labelPath) {
wordLabels.clear();
// load word labels from file
......@@ -87,11 +109,80 @@ public class ImgSegPredictor extends Predictor {
}
public Tensor getInput(int idx) {
return super.getInput(idx);
if (!isLoaded()) {
return null;
}
return paddlePredictor.getInput(idx);
}
public Tensor getOutput(int idx) {
return super.getOutput(idx);
if (!isLoaded()) {
return null;
}
return paddlePredictor.getOutput(idx);
}
protected boolean loadModel(String modelPath, int cpuThreadNum, String cpuPowerMode) {
// release model if exists
releaseModel();
// load model
if (modelPath.isEmpty()) {
return false;
}
String realPath = modelPath;
if (!modelPath.substring(0, 1).equals("/")) {
// the model path does not start with '/', so copy the model from the app assets
// into the cache directory and load it from there
realPath = appCtx.getCacheDir() + "/" + modelPath;
Utils.copyDirectoryFromAssets(appCtx, modelPath, realPath);
}
if (realPath.isEmpty()) {
return false;
}
MobileConfig modelConfig = new MobileConfig();
modelConfig.setModelDir(realPath);
modelConfig.setThreads(cpuThreadNum);
if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_HIGH")) {
modelConfig.setPowerMode(PowerMode.LITE_POWER_HIGH);
} else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_LOW")) {
modelConfig.setPowerMode(PowerMode.LITE_POWER_LOW);
} else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_FULL")) {
modelConfig.setPowerMode(PowerMode.LITE_POWER_FULL);
} else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_NO_BIND")) {
modelConfig.setPowerMode(PowerMode.LITE_POWER_NO_BIND);
} else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_RAND_HIGH")) {
modelConfig.setPowerMode(PowerMode.LITE_POWER_RAND_HIGH);
} else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_RAND_LOW")) {
modelConfig.setPowerMode(PowerMode.LITE_POWER_RAND_LOW);
} else {
Log.e(TAG, "unknown cpu power mode!");
return false;
}
paddlePredictor = PaddlePredictor.createPaddlePredictor(modelConfig);
this.cpuThreadNum = cpuThreadNum;
this.cpuPowerMode = cpuPowerMode;
this.modelPath = realPath;
this.modelName = realPath.substring(realPath.lastIndexOf("/") + 1);
return true;
}
public boolean runModel() {
if (!isLoaded()) {
return false;
}
// warm up
for (int i = 0; i < warmupIterNum; i++){
paddlePredictor.run();
}
// inference
Date start = new Date();
for (int i = 0; i < inferIterNum; i++) {
paddlePredictor.run();
}
Date end = new Date();
inferenceTime = (end.getTime() - start.getTime()) / (float) inferIterNum;
return true;
}
public boolean runModel(Bitmap image) {
......@@ -106,39 +197,42 @@ public class ImgSegPredictor extends Predictor {
// set input shape
Tensor inputTensor = getInput(0);
inputTensor.resize(config.inputShape);
// pre-process image
Date start = new Date();
preprocess.init(config);
preprocess.to_array(scaledImage);
// feed input tensor with pre-processed data
inputTensor.setData(preprocess.inputData);
Date end = new Date();
preprocessTime = (float) (end.getTime() - start.getTime());
// inference
super.runModel();
runModel();
start = new Date();
Tensor outputTensor = getOutput(0);
// post-process
this.outputImage = visualize.draw(inputImage,outputTensor);
this.outputImage = visualize.draw(inputImage, outputTensor);
postprocessTime = (float) (end.getTime() - start.getTime());
start = new Date();
outputResult = new String();
end = new Date();
return true;
}
public void releaseModel() {
paddlePredictor = null;
isLoaded = false;
cpuThreadNum = 1;
cpuPowerMode = "LITE_POWER_HIGH";
modelPath = "";
modelName = "";
}
public void setConfig(Config config){
this.config = config;
......@@ -164,13 +258,32 @@ public class ImgSegPredictor extends Predictor {
return postprocessTime;
}
public String modelPath() {
return modelPath;
}
public String modelName() {
return modelName;
}
public int cpuThreadNum() {
return cpuThreadNum;
}
public String cpuPowerMode() {
return cpuPowerMode;
}
public float inferenceTime() {
return inferenceTime;
}
public void setInputImage(Bitmap image) {
if (image == null) {
return;
}
// scale image to the size of input tensor
Bitmap rgbaImage = image.copy(Bitmap.Config.ARGB_8888, true);
Bitmap scaleImage = Bitmap.createScaledBitmap(rgbaImage, (int) this.config.inputShape[3], (int) this.config.inputShape[2], true);
this.inputImage = rgbaImage;
this.scaledImage = scaleImage;
......
......@@ -7,14 +7,10 @@ import android.preference.EditTextPreference;
import android.preference.ListPreference;
import android.support.v7.app.ActionBar;
import com.baidu.paddle.lite.demo.AppCompatPreferenceActivity;
import com.baidu.paddle.lite.demo.R;
import com.baidu.paddle.lite.demo.Utils;
import java.util.ArrayList;
import java.util.List;
public class ImgSegSettingsActivity extends AppCompatPreferenceActivity implements SharedPreferences.OnSharedPreferenceChangeListener {
public class SettingsActivity extends AppCompatPreferenceActivity implements SharedPreferences.OnSharedPreferenceChangeListener {
ListPreference lpChoosePreInstalledModel = null;
CheckBoxPreference cbEnableCustomSettings = null;
EditTextPreference etModelPath = null;
......@@ -23,24 +19,21 @@ public class ImgSegSettingsActivity extends AppCompatPreferenceActivity implemen
ListPreference lpCPUThreadNum = null;
ListPreference lpCPUPowerMode = null;
ListPreference lpInputColorFormat = null;
EditTextPreference etInputShape = null;
EditTextPreference etInputMean = null;
EditTextPreference etInputStd = null;
List<String> preInstalledModelPaths = null;
List<String> preInstalledLabelPaths = null;
List<String> preInstalledImagePaths = null;
List<String> preInstalledInputShapes = null;
List<String> preInstalledCPUThreadNums = null;
List<String> preInstalledCPUPowerModes = null;
List<String> preInstalledInputColorFormats = null;
List<String> preInstalledInputMeans = null;
List<String> preInstalledInputStds = null;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
addPreferencesFromResource(R.xml.settings_img_seg);
addPreferencesFromResource(R.xml.settings);
ActionBar supportActionBar = getSupportActionBar();
if (supportActionBar != null) {
supportActionBar.setDisplayHomeAsUpEnabled(true);
......@@ -50,24 +43,20 @@ public class ImgSegSettingsActivity extends AppCompatPreferenceActivity implemen
preInstalledModelPaths = new ArrayList<String>();
preInstalledLabelPaths = new ArrayList<String>();
preInstalledImagePaths = new ArrayList<String>();
preInstalledInputShapes = new ArrayList<String>();
preInstalledCPUThreadNums = new ArrayList<String>();
preInstalledCPUPowerModes = new ArrayList<String>();
preInstalledInputColorFormats = new ArrayList<String>();
preInstalledInputMeans = new ArrayList<String>();
preInstalledInputStds = new ArrayList<String>();
// add deeplab_mobilenet_for_cpu
preInstalledModelPaths.add(getString(R.string.ISG_MODEL_PATH_DEFAULT));
preInstalledLabelPaths.add(getString(R.string.ISG_LABEL_PATH_DEFAULT));
preInstalledImagePaths.add(getString(R.string.ISG_IMAGE_PATH_DEFAULT));
preInstalledCPUThreadNums.add(getString(R.string.ISG_CPU_THREAD_NUM_DEFAULT));
preInstalledCPUPowerModes.add(getString(R.string.ISG_CPU_POWER_MODE_DEFAULT));
preInstalledInputColorFormats.add(getString(R.string.ISG_INPUT_COLOR_FORMAT_DEFAULT));
preInstalledInputShapes.add(getString(R.string.ISG_INPUT_SHAPE_DEFAULT));
preInstalledModelPaths.add(getString(R.string.MODEL_PATH_DEFAULT));
preInstalledLabelPaths.add(getString(R.string.LABEL_PATH_DEFAULT));
preInstalledImagePaths.add(getString(R.string.IMAGE_PATH_DEFAULT));
preInstalledCPUThreadNums.add(getString(R.string.CPU_THREAD_NUM_DEFAULT));
preInstalledCPUPowerModes.add(getString(R.string.CPU_POWER_MODE_DEFAULT));
preInstalledInputColorFormats.add(getString(R.string.INPUT_COLOR_FORMAT_DEFAULT));
// initialize UI components
lpChoosePreInstalledModel =
(ListPreference) findPreference(getString(R.string.ISG_CHOOSE_PRE_INSTALLED_MODEL_KEY));
(ListPreference) findPreference(getString(R.string.CHOOSE_PRE_INSTALLED_MODEL_KEY));
String[] preInstalledModelNames = new String[preInstalledModelPaths.size()];
for (int i = 0; i < preInstalledModelPaths.size(); i++) {
preInstalledModelNames[i] =
......@@ -76,38 +65,36 @@ public class ImgSegSettingsActivity extends AppCompatPreferenceActivity implemen
lpChoosePreInstalledModel.setEntries(preInstalledModelNames);
lpChoosePreInstalledModel.setEntryValues(preInstalledModelPaths.toArray(new String[preInstalledModelPaths.size()]));
cbEnableCustomSettings =
(CheckBoxPreference) findPreference(getString(R.string.ISG_ENABLE_CUSTOM_SETTINGS_KEY));
etModelPath = (EditTextPreference) findPreference(getString(R.string.ISG_MODEL_PATH_KEY));
(CheckBoxPreference) findPreference(getString(R.string.ENABLE_CUSTOM_SETTINGS_KEY));
etModelPath = (EditTextPreference) findPreference(getString(R.string.MODEL_PATH_KEY));
etModelPath.setTitle("Model Path (SDCard: " + Utils.getSDCardDirectory() + ")");
etLabelPath = (EditTextPreference) findPreference(getString(R.string.ISG_LABEL_PATH_KEY));
etImagePath = (EditTextPreference) findPreference(getString(R.string.ISG_IMAGE_PATH_KEY));
etLabelPath = (EditTextPreference) findPreference(getString(R.string.LABEL_PATH_KEY));
etImagePath = (EditTextPreference) findPreference(getString(R.string.IMAGE_PATH_KEY));
lpCPUThreadNum =
(ListPreference) findPreference(getString(R.string.ISG_CPU_THREAD_NUM_KEY));
(ListPreference) findPreference(getString(R.string.CPU_THREAD_NUM_KEY));
lpCPUPowerMode =
(ListPreference) findPreference(getString(R.string.ISG_CPU_POWER_MODE_KEY));
(ListPreference) findPreference(getString(R.string.CPU_POWER_MODE_KEY));
lpInputColorFormat =
(ListPreference) findPreference(getString(R.string.ISG_INPUT_COLOR_FORMAT_KEY));
etInputShape = (EditTextPreference) findPreference(getString(R.string.ISG_INPUT_SHAPE_KEY));
(ListPreference) findPreference(getString(R.string.INPUT_COLOR_FORMAT_KEY));
}
private void reloadPreferenceAndUpdateUI() {
SharedPreferences sharedPreferences = getPreferenceScreen().getSharedPreferences();
boolean enableCustomSettings =
sharedPreferences.getBoolean(getString(R.string.ISG_ENABLE_CUSTOM_SETTINGS_KEY), false);
String modelPath = sharedPreferences.getString(getString(R.string.ISG_CHOOSE_PRE_INSTALLED_MODEL_KEY),
getString(R.string.ISG_MODEL_PATH_DEFAULT));
sharedPreferences.getBoolean(getString(R.string.ENABLE_CUSTOM_SETTINGS_KEY), false);
String modelPath = sharedPreferences.getString(getString(R.string.CHOOSE_PRE_INSTALLED_MODEL_KEY),
getString(R.string.MODEL_PATH_DEFAULT));
int modelIdx = lpChoosePreInstalledModel.findIndexOfValue(modelPath);
if (modelIdx >= 0 && modelIdx < preInstalledModelPaths.size()) {
if (!enableCustomSettings) {
SharedPreferences.Editor editor = sharedPreferences.edit();
editor.putString(getString(R.string.ISG_MODEL_PATH_KEY), preInstalledModelPaths.get(modelIdx));
editor.putString(getString(R.string.ISG_LABEL_PATH_KEY), preInstalledLabelPaths.get(modelIdx));
editor.putString(getString(R.string.ISG_IMAGE_PATH_KEY), preInstalledImagePaths.get(modelIdx));
editor.putString(getString(R.string.ISG_CPU_THREAD_NUM_KEY), preInstalledCPUThreadNums.get(modelIdx));
editor.putString(getString(R.string.ISG_CPU_POWER_MODE_KEY), preInstalledCPUPowerModes.get(modelIdx));
editor.putString(getString(R.string.ISG_INPUT_COLOR_FORMAT_KEY),
editor.putString(getString(R.string.MODEL_PATH_KEY), preInstalledModelPaths.get(modelIdx));
editor.putString(getString(R.string.LABEL_PATH_KEY), preInstalledLabelPaths.get(modelIdx));
editor.putString(getString(R.string.IMAGE_PATH_KEY), preInstalledImagePaths.get(modelIdx));
editor.putString(getString(R.string.CPU_THREAD_NUM_KEY), preInstalledCPUThreadNums.get(modelIdx));
editor.putString(getString(R.string.CPU_POWER_MODE_KEY), preInstalledCPUPowerModes.get(modelIdx));
editor.putString(getString(R.string.INPUT_COLOR_FORMAT_KEY),
preInstalledInputColorFormats.get(modelIdx));
editor.putString(getString(R.string.ISG_INPUT_SHAPE_KEY), preInstalledInputShapes.get(modelIdx));
editor.commit();
}
lpChoosePreInstalledModel.setSummary(modelPath);
......@@ -119,23 +106,18 @@ public class ImgSegSettingsActivity extends AppCompatPreferenceActivity implemen
lpCPUThreadNum.setEnabled(enableCustomSettings);
lpCPUPowerMode.setEnabled(enableCustomSettings);
lpInputColorFormat.setEnabled(enableCustomSettings);
etInputShape.setEnabled(enableCustomSettings);
etInputMean.setEnabled(enableCustomSettings);
etInputStd.setEnabled(enableCustomSettings);
modelPath = sharedPreferences.getString(getString(R.string.ISG_MODEL_PATH_KEY),
getString(R.string.ISG_MODEL_PATH_DEFAULT));
String labelPath = sharedPreferences.getString(getString(R.string.ISG_LABEL_PATH_KEY),
getString(R.string.ISG_LABEL_PATH_DEFAULT));
String imagePath = sharedPreferences.getString(getString(R.string.ISG_IMAGE_PATH_KEY),
getString(R.string.ISG_IMAGE_PATH_DEFAULT));
String cpuThreadNum = sharedPreferences.getString(getString(R.string.ISG_CPU_THREAD_NUM_KEY),
getString(R.string.ISG_CPU_THREAD_NUM_DEFAULT));
String cpuPowerMode = sharedPreferences.getString(getString(R.string.ISG_CPU_POWER_MODE_KEY),
getString(R.string.ISG_CPU_POWER_MODE_DEFAULT));
String inputColorFormat = sharedPreferences.getString(getString(R.string.ISG_INPUT_COLOR_FORMAT_KEY),
getString(R.string.ISG_INPUT_COLOR_FORMAT_DEFAULT));
String inputShape = sharedPreferences.getString(getString(R.string.ISG_INPUT_SHAPE_KEY),
getString(R.string.ISG_INPUT_SHAPE_DEFAULT));
modelPath = sharedPreferences.getString(getString(R.string.MODEL_PATH_KEY),
getString(R.string.MODEL_PATH_DEFAULT));
String labelPath = sharedPreferences.getString(getString(R.string.LABEL_PATH_KEY),
getString(R.string.LABEL_PATH_DEFAULT));
String imagePath = sharedPreferences.getString(getString(R.string.IMAGE_PATH_KEY),
getString(R.string.IMAGE_PATH_DEFAULT));
String cpuThreadNum = sharedPreferences.getString(getString(R.string.CPU_THREAD_NUM_KEY),
getString(R.string.CPU_THREAD_NUM_DEFAULT));
String cpuPowerMode = sharedPreferences.getString(getString(R.string.CPU_POWER_MODE_KEY),
getString(R.string.CPU_POWER_MODE_DEFAULT));
String inputColorFormat = sharedPreferences.getString(getString(R.string.INPUT_COLOR_FORMAT_KEY),
getString(R.string.INPUT_COLOR_FORMAT_DEFAULT));
etModelPath.setSummary(modelPath);
etModelPath.setText(modelPath);
etLabelPath.setSummary(labelPath);
......@@ -148,8 +130,7 @@ public class ImgSegSettingsActivity extends AppCompatPreferenceActivity implemen
lpCPUPowerMode.setSummary(cpuPowerMode);
lpInputColorFormat.setValue(inputColorFormat);
lpInputColorFormat.setSummary(inputColorFormat);
etInputShape.setSummary(inputShape);
etInputShape.setText(inputShape);
}
@Override
......@@ -167,9 +148,9 @@ public class ImgSegSettingsActivity extends AppCompatPreferenceActivity implemen
@Override
public void onSharedPreferenceChanged(SharedPreferences sharedPreferences, String key) {
if (key.equals(getString(R.string.ISG_CHOOSE_PRE_INSTALLED_MODEL_KEY))) {
if (key.equals(getString(R.string.CHOOSE_PRE_INSTALLED_MODEL_KEY))) {
SharedPreferences.Editor editor = sharedPreferences.edit();
editor.putBoolean(getString(R.string.ISG_ENABLE_CUSTOM_SETTINGS_KEY), false);
editor.putBoolean(getString(R.string.ENABLE_CUSTOM_SETTINGS_KEY), false);
editor.commit();
}
reloadPreferenceAndUpdateUI();
......
package com.baidu.paddle.lite.demo;
package com.baidu.paddle.lite.demo.segmentation;
import android.content.Context;
import android.os.Environment;
......
......@@ -9,7 +9,6 @@ public class Config {
public String imagePath = "";
public int cpuThreadNum = 1;
public String cpuPowerMode = "";
public String inputColorFormat = "";
public long[] inputShape = new long[]{};
......@@ -22,7 +21,6 @@ public class Config {
this.imagePath = imagePath;
this.cpuThreadNum = cpuThreadNum;
this.cpuPowerMode = cpuPowerMode;
this.inputColorFormat = inputColorFormat;
this.inputShape = inputShape;
}
......@@ -30,7 +28,6 @@ public class Config {
public void setInputShape(Bitmap inputImage){
this.inputShape[0] = 1;
this.inputShape[1] = 3;
this.inputShape[2] = inputImage.getHeight();
this.inputShape[3] = inputImage.getWidth();
......
......@@ -4,7 +4,7 @@
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".segmentation.ImgSegActivity">
tools:context=".segmentation.MainActivity">
<RelativeLayout
android:layout_width="match_parent"
......
<resources>
<string name="app_name">Human Segmentation</string>
<!-- image segmentation settings -->
<string name="CHOOSE_PRE_INSTALLED_MODEL_KEY">CHOOSE_PRE_INSTALLED_MODEL_KEY</string>
<string name="ENABLE_CUSTOM_SETTINGS_KEY">ENABLE_CUSTOM_SETTINGS_KEY</string>
<string name="MODEL_PATH_KEY">MODEL_PATH_KEY</string>
<string name="LABEL_PATH_KEY">LABEL_PATH_KEY</string>
<string name="IMAGE_PATH_KEY">IMAGE_PATH_KEY</string>
<string name="CPU_THREAD_NUM_KEY">CPU_THREAD_NUM_KEY</string>
<string name="CPU_POWER_MODE_KEY">CPU_POWER_MODE_KEY</string>
<string name="INPUT_COLOR_FORMAT_KEY">INPUT_COLOR_FORMAT_KEY</string>
<string name="INPUT_SHAPE_KEY">INPUT_SHAPE_KEY</string>
<string name="MODEL_PATH_DEFAULT">image_segmentation/models/deeplab_mobilenet_for_cpu</string>
<string name="LABEL_PATH_DEFAULT">image_segmentation/labels/label_list</string>
<string name="IMAGE_PATH_DEFAULT">image_segmentation/images/human.jpg</string>
<string name="CPU_THREAD_NUM_DEFAULT">1</string>
<string name="CPU_POWER_MODE_DEFAULT">LITE_POWER_HIGH</string>
<string name="INPUT_COLOR_FORMAT_DEFAULT">RGB</string>
<string name="INPUT_SHAPE_DEFAULT">1,3,513,513</string>
</resources>
......@@ -2,42 +2,42 @@
<PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android" >
<PreferenceCategory android:title="Model Settings">
<ListPreference
android:defaultValue="@string/ISG_MODEL_PATH_DEFAULT"
android:key="@string/ISG_CHOOSE_PRE_INSTALLED_MODEL_KEY"
android:defaultValue="@string/MODEL_PATH_DEFAULT"
android:key="@string/CHOOSE_PRE_INSTALLED_MODEL_KEY"
android:negativeButtonText="@null"
android:positiveButtonText="@null"
android:title="Choose pre-installed models" />
<CheckBoxPreference
android:defaultValue="false"
android:key="@string/ISG_ENABLE_CUSTOM_SETTINGS_KEY"
android:key="@string/ENABLE_CUSTOM_SETTINGS_KEY"
android:summaryOn="Enable"
android:summaryOff="Disable"
android:title="Enable custom settings"/>
<EditTextPreference
android:key="@string/ISG_MODEL_PATH_KEY"
android:defaultValue="@string/ISG_MODEL_PATH_DEFAULT"
android:key="@string/MODEL_PATH_KEY"
android:defaultValue="@string/MODEL_PATH_DEFAULT"
android:title="Model Path" />
<EditTextPreference
android:key="@string/ISG_LABEL_PATH_KEY"
android:defaultValue="@string/ISG_LABEL_PATH_DEFAULT"
android:key="@string/LABEL_PATH_KEY"
android:defaultValue="@string/LABEL_PATH_DEFAULT"
android:title="Label Path" />
<EditTextPreference
android:key="@string/ISG_IMAGE_PATH_KEY"
android:defaultValue="@string/ISG_IMAGE_PATH_DEFAULT"
android:key="@string/IMAGE_PATH_KEY"
android:defaultValue="@string/IMAGE_PATH_DEFAULT"
android:title="Image Path" />
</PreferenceCategory>
<PreferenceCategory android:title="CPU Settings">
<ListPreference
android:defaultValue="@string/ISG_CPU_THREAD_NUM_DEFAULT"
android:key="@string/ISG_CPU_THREAD_NUM_KEY"
android:defaultValue="@string/CPU_THREAD_NUM_DEFAULT"
android:key="@string/CPU_THREAD_NUM_KEY"
android:negativeButtonText="@null"
android:positiveButtonText="@null"
android:title="CPU Thread Num"
android:entries="@array/cpu_thread_num_entries"
android:entryValues="@array/cpu_thread_num_values"/>
<ListPreference
android:defaultValue="@string/ISG_CPU_POWER_MODE_DEFAULT"
android:key="@string/ISG_CPU_POWER_MODE_KEY"
android:defaultValue="@string/CPU_POWER_MODE_DEFAULT"
android:key="@string/CPU_POWER_MODE_KEY"
android:negativeButtonText="@null"
android:positiveButtonText="@null"
android:title="CPU Power Mode"
......@@ -46,17 +46,14 @@
</PreferenceCategory>
<PreferenceCategory android:title="Input Settings">
<ListPreference
android:defaultValue="@string/ISG_INPUT_COLOR_FORMAT_DEFAULT"
android:key="@string/ISG_INPUT_COLOR_FORMAT_KEY"
android:defaultValue="@string/INPUT_COLOR_FORMAT_DEFAULT"
android:key="@string/INPUT_COLOR_FORMAT_KEY"
android:negativeButtonText="@null"
android:positiveButtonText="@null"
android:title="Input Color Format: BGR or RGB"
android:entries="@array/input_color_format_entries"
android:entryValues="@array/input_color_format_values"/>
<EditTextPreference
android:key="@string/ISG_INPUT_SHAPE_KEY"
android:defaultValue="@string/ISG_INPUT_SHAPE_DEFAULT"
android:title="Input Shape: (1,1,h,w) or (1,3,h,w)" />
</PreferenceCategory>
</PreferenceScreen>
apply plugin: 'com.android.application'
android {
compileSdkVersion 28
defaultConfig {
applicationId "com.baidu.paddle.lite.demo"
minSdkVersion 15
targetSdkVersion 28
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
implementation fileTree(include: ['*.jar'], dir: 'libs')
implementation 'com.android.support:appcompat-v7:28.0.0'
implementation 'com.android.support.constraint:constraint-layout:1.1.3'
implementation 'com.android.support:design:28.0.0'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:1.0.2'
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
implementation files('libs/PaddlePredictor.jar')
}
package com.baidu.paddle.lite.demo;
import android.content.Intent;
import android.content.SharedPreferences;
import android.os.Bundle;
import android.preference.PreferenceManager;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import android.view.View;
import com.baidu.paddle.lite.demo.segmentation.ImgSegActivity;
public class MainActivity extends AppCompatActivity implements View.OnClickListener {
private static final String TAG = MainActivity.class.getSimpleName();
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
// clear all setting items to avoid app crashing due to the incorrect settings
SharedPreferences sharedPreferences = PreferenceManager.getDefaultSharedPreferences(this);
SharedPreferences.Editor editor = sharedPreferences.edit();
editor.clear();
editor.commit();
}
@Override
public void onClick(View v) {
switch (v.getId()) {
case R.id.v_img_seg: {
Intent intent = new Intent(MainActivity.this, ImgSegActivity.class);
startActivity(intent);
} break;
}
}
@Override
protected void onDestroy() {
super.onDestroy();
System.exit(0);
}
}
package com.baidu.paddle.lite.demo;
import android.content.Context;
import android.util.Log;
import com.baidu.paddle.lite.*;
import java.util.ArrayList;
import java.util.Date;
public class Predictor {
private static final String TAG = Predictor.class.getSimpleName();
public boolean isLoaded = false;
public int warmupIterNum = 0;
public int inferIterNum = 1;
protected Context appCtx = null;
public int cpuThreadNum = 1;
public String cpuPowerMode = "LITE_POWER_HIGH";
public String modelPath = "";
public String modelName = "";
protected PaddlePredictor paddlePredictor = null;
protected float inferenceTime = 0;
public Predictor() {
}
public boolean init(Context appCtx, String modelPath, int cpuThreadNum, String cpuPowerMode) {
this.appCtx = appCtx;
isLoaded = loadModel(modelPath, cpuThreadNum, cpuPowerMode);
return isLoaded;
}
protected boolean loadModel(String modelPath, int cpuThreadNum, String cpuPowerMode) {
// release model if exists
releaseModel();
// load model
if (modelPath.isEmpty()) {
return false;
}
String realPath = modelPath;
if (!modelPath.substring(0, 1).equals("/")) {
// the model path does not start with '/', so copy the model from the app assets
// into the cache directory and load it from there
realPath = appCtx.getCacheDir() + "/" + modelPath;
Utils.copyDirectoryFromAssets(appCtx, modelPath, realPath);
}
if (realPath.isEmpty()) {
return false;
}
MobileConfig config = new MobileConfig();
config.setModelDir(realPath);
config.setThreads(cpuThreadNum);
if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_HIGH")) {
config.setPowerMode(PowerMode.LITE_POWER_HIGH);
} else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_LOW")) {
config.setPowerMode(PowerMode.LITE_POWER_LOW);
} else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_FULL")) {
config.setPowerMode(PowerMode.LITE_POWER_FULL);
} else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_NO_BIND")) {
config.setPowerMode(PowerMode.LITE_POWER_NO_BIND);
} else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_RAND_HIGH")) {
config.setPowerMode(PowerMode.LITE_POWER_RAND_HIGH);
} else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_RAND_LOW")) {
config.setPowerMode(PowerMode.LITE_POWER_RAND_LOW);
} else {
Log.e(TAG, "unknown cpu power mode!");
return false;
}
paddlePredictor = PaddlePredictor.createPaddlePredictor(config);
this.cpuThreadNum = cpuThreadNum;
this.cpuPowerMode = cpuPowerMode;
this.modelPath = realPath;
this.modelName = realPath.substring(realPath.lastIndexOf("/") + 1);
return true;
}
public void releaseModel() {
paddlePredictor = null;
isLoaded = false;
cpuThreadNum = 1;
cpuPowerMode = "LITE_POWER_HIGH";
modelPath = "";
modelName = "";
}
public Tensor getInput(int idx) {
if (!isLoaded()) {
return null;
}
return paddlePredictor.getInput(idx);
}
public Tensor getOutput(int idx) {
if (!isLoaded()) {
return null;
}
return paddlePredictor.getOutput(idx);
}
public boolean runModel() {
if (!isLoaded()) {
return false;
}
// warm up
for (int i = 0; i < warmupIterNum; i++){
paddlePredictor.run();
}
// inference
Date start = new Date();
for (int i = 0; i < inferIterNum; i++) {
paddlePredictor.run();
}
Date end = new Date();
inferenceTime = (end.getTime() - start.getTime()) / (float) inferIterNum;
return true;
}
public boolean isLoaded() {
return paddlePredictor != null && isLoaded;
}
public String modelPath() {
return modelPath;
}
public String modelName() {
return modelName;
}
public int cpuThreadNum() {
return cpuThreadNum;
}
public String cpuPowerMode() {
return cpuPowerMode;
}
public float inferenceTime() {
return inferenceTime;
}
}
package com.baidu.paddle.lite.demo.segmentation;
import android.content.Intent;
import android.content.SharedPreferences;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Bundle;
import android.preference.PreferenceManager;
import android.text.method.ScrollingMovementMethod;
import android.util.Log;
import android.view.Menu;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;
import com.baidu.paddle.lite.demo.CommonActivity;
import com.baidu.paddle.lite.demo.R;
import com.baidu.paddle.lite.demo.Utils;
import com.baidu.paddle.lite.demo.segmentation.config.Config;
import com.baidu.paddle.lite.demo.segmentation.preprocess.Preprocess;
import com.baidu.paddle.lite.demo.segmentation.visual.Visualize;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
public class ImgSegActivity extends CommonActivity {
private static final String TAG = ImgSegActivity.class.getSimpleName();
protected TextView tvInputSetting;
protected ImageView ivInputImage;
protected TextView tvOutputResult;
protected TextView tvInferenceTime;
// model config
Config config = new Config();
protected ImgSegPredictor predictor = new ImgSegPredictor();
Preprocess preprocess = new Preprocess();
Visualize visualize = new Visualize();
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_img_seg);
tvInputSetting = findViewById(R.id.tv_input_setting);
ivInputImage = findViewById(R.id.iv_input_image);
tvInferenceTime = findViewById(R.id.tv_inference_time);
tvOutputResult = findViewById(R.id.tv_output_result);
tvInputSetting.setMovementMethod(ScrollingMovementMethod.getInstance());
tvOutputResult.setMovementMethod(ScrollingMovementMethod.getInstance());
}
@Override
public boolean onLoadModel() {
return super.onLoadModel() && predictor.init(ImgSegActivity.this, config);
}
@Override
public boolean onRunModel() {
return super.onRunModel() && predictor.isLoaded() && predictor.runModel(preprocess,visualize);
}
@Override
public void onLoadModelSuccessed() {
super.onLoadModelSuccessed();
// load test image from file_paths and run model
try {
if (config.imagePath.isEmpty()) {
return;
}
Bitmap image = null;
// if the image path does not start with '/', read the test image from the app assets;
// otherwise decode it directly from the file system
if (!config.imagePath.substring(0, 1).equals("/")) {
InputStream imageStream = getAssets().open(config.imagePath);
image = BitmapFactory.decodeStream(imageStream);
} else {
if (!new File(config.imagePath).exists()) {
return;
}
image = BitmapFactory.decodeFile(config.imagePath);
}
if (image != null && predictor.isLoaded()) {
predictor.setInputImage(image);
runModel();
}
} catch (IOException e) {
Toast.makeText(ImgSegActivity.this, "Load image failed!", Toast.LENGTH_SHORT).show();
e.printStackTrace();
}
}
@Override
public void onLoadModelFailed() {
super.onLoadModelFailed();
}
@Override
public void onRunModelSuccessed() {
super.onRunModelSuccessed();
// obtain results and update UI
tvInferenceTime.setText("Inference time: " + predictor.inferenceTime() + " ms");
Bitmap outputImage = predictor.outputImage();
if (outputImage != null) {
ivInputImage.setImageBitmap(outputImage);
}
tvOutputResult.setText(predictor.outputResult());
tvOutputResult.scrollTo(0, 0);
}
@Override
public void onRunModelFailed() {
super.onRunModelFailed();
}
@Override
public void onImageChanged(Bitmap image) {
super.onImageChanged(image);
// rerun model if users pick test image from gallery or camera
if (image != null && predictor.isLoaded()) {
// predictor.setConfig(config);
predictor.setInputImage(image);
runModel();
}
}
@Override
public void onImageChanged(String path) {
super.onImageChanged(path);
Bitmap image = BitmapFactory.decodeFile(path);
predictor.setInputImage(image);
runModel();
}
public void onSettingsClicked() {
super.onSettingsClicked();
startActivity(new Intent(ImgSegActivity.this, ImgSegSettingsActivity.class));
}
@Override
public boolean onPrepareOptionsMenu(Menu menu) {
boolean isLoaded = predictor.isLoaded();
menu.findItem(R.id.open_gallery).setEnabled(isLoaded);
menu.findItem(R.id.take_photo).setEnabled(isLoaded);
return super.onPrepareOptionsMenu(menu);
}
@Override
protected void onResume() {
Log.i(TAG,"begin onResume");
super.onResume();
SharedPreferences sharedPreferences = PreferenceManager.getDefaultSharedPreferences(this);
boolean settingsChanged = false;
String model_path = sharedPreferences.getString(getString(R.string.ISG_MODEL_PATH_KEY),
getString(R.string.ISG_MODEL_PATH_DEFAULT));
String label_path = sharedPreferences.getString(getString(R.string.ISG_LABEL_PATH_KEY),
getString(R.string.ISG_LABEL_PATH_DEFAULT));
String image_path = sharedPreferences.getString(getString(R.string.ISG_IMAGE_PATH_KEY),
getString(R.string.ISG_IMAGE_PATH_DEFAULT));
settingsChanged |= !model_path.equalsIgnoreCase(config.modelPath);
settingsChanged |= !label_path.equalsIgnoreCase(config.labelPath);
settingsChanged |= !image_path.equalsIgnoreCase(config.imagePath);
int cpu_thread_num = Integer.parseInt(sharedPreferences.getString(getString(R.string.ISG_CPU_THREAD_NUM_KEY),
getString(R.string.ISG_CPU_THREAD_NUM_DEFAULT)));
settingsChanged |= cpu_thread_num != config.cpuThreadNum;
String cpu_power_mode =
sharedPreferences.getString(getString(R.string.ISG_CPU_POWER_MODE_KEY),
getString(R.string.ISG_CPU_POWER_MODE_DEFAULT));
settingsChanged |= !cpu_power_mode.equalsIgnoreCase(config.cpuPowerMode);
String input_color_format =
sharedPreferences.getString(getString(R.string.ISG_INPUT_COLOR_FORMAT_KEY),
getString(R.string.ISG_INPUT_COLOR_FORMAT_DEFAULT));
settingsChanged |= !input_color_format.equalsIgnoreCase(config.inputColorFormat);
long[] input_shape =
Utils.parseLongsFromString(sharedPreferences.getString(getString(R.string.ISG_INPUT_SHAPE_KEY),
getString(R.string.ISG_INPUT_SHAPE_DEFAULT)), ",");
settingsChanged |= input_shape.length != config.inputShape.length;
if (!settingsChanged) {
for (int i = 0; i < input_shape.length; i++) {
settingsChanged |= input_shape[i] != config.inputShape[i];
}
}
if (settingsChanged) {
config.init(model_path,label_path,image_path,cpu_thread_num,cpu_power_mode,
input_color_format,input_shape);
preprocess.init(config);
// update UI
tvInputSetting.setText("Model: " + config.modelPath.substring(config.modelPath.lastIndexOf("/") + 1) + "\n" + "CPU" +
" Thread Num: " + Integer.toString(config.cpuThreadNum) + "\n" + "CPU Power Mode: " + config.cpuPowerMode);
tvInputSetting.scrollTo(0, 0);
// reload model if configure has been changed
loadModel();
}
}
@Override
protected void onDestroy() {
if (predictor != null) {
predictor.releaseModel();
}
super.onDestroy();
}
}
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
tools:context=".MainActivity">
<ScrollView
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:fadingEdge="vertical"
android:scrollbars="vertical">
<LinearLayout
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:orientation="vertical">
<LinearLayout
android:layout_width="fill_parent"
android:layout_height="300dp"
android:orientation="horizontal">
<RelativeLayout
android:id="@+id/v_img_seg"
android:layout_width="wrap_content"
android:layout_height="fill_parent"
android:layout_weight="1"
android:clickable="true"
android:onClick="onClick">
<ImageView
android:id="@+id/iv_img_seg_image"
android:layout_width="200dp"
android:layout_height="200dp"
android:layout_centerHorizontal="true"
android:layout_centerVertical="true"
android:layout_margin="12dp"
android:adjustViewBounds="true"
android:src="@drawable/image_segementation"
android:scaleType="fitCenter"/>
<TextView
android:id="@+id/iv_img_seg_title"
android:layout_below="@id/iv_img_seg_image"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true"
android:layout_margin="8dp"
android:text="Image Segmentation"
android:textStyle="bold"
android:textAllCaps="false"
android:singleLine="false"/>
</RelativeLayout>
</LinearLayout>
</LinearLayout>
</ScrollView>
</android.support.constraint.ConstraintLayout>
\ No newline at end of file
<resources>
<string name="app_name">Segmentation-demo</string>
<!-- image segmentation settings -->
<string name="ISG_CHOOSE_PRE_INSTALLED_MODEL_KEY">ISG_CHOOSE_PRE_INSTALLED_MODEL_KEY</string>
<string name="ISG_ENABLE_CUSTOM_SETTINGS_KEY">ISG_ENABLE_CUSTOM_SETTINGS_KEY</string>
<string name="ISG_MODEL_PATH_KEY">ISG_MODEL_PATH_KEY</string>
<string name="ISG_LABEL_PATH_KEY">ISG_LABEL_PATH_KEY</string>
<string name="ISG_IMAGE_PATH_KEY">ISG_IMAGE_PATH_KEY</string>
<string name="ISG_CPU_THREAD_NUM_KEY">ISG_CPU_THREAD_NUM_KEY</string>
<string name="ISG_CPU_POWER_MODE_KEY">ISG_CPU_POWER_MODE_KEY</string>
<string name="ISG_INPUT_COLOR_FORMAT_KEY">ISG_INPUT_COLOR_FORMAT_KEY</string>
<string name="ISG_INPUT_SHAPE_KEY">ISG_INPUT_SHAPE_KEY</string>
<string name="ISG_MODEL_PATH_DEFAULT">image_segmentation/models/deeplab_mobilenet_for_cpu</string>
<string name="ISG_LABEL_PATH_DEFAULT">image_segmentation/labels/label_list</string>
<string name="ISG_IMAGE_PATH_DEFAULT">image_segmentation/images/human.jpg</string>
<string name="ISG_CPU_THREAD_NUM_DEFAULT">1</string>
<string name="ISG_CPU_POWER_MODE_DEFAULT">LITE_POWER_HIGH</string>
<string name="ISG_INPUT_COLOR_FORMAT_DEFAULT">RGB</string>
<string name="ISG_INPUT_SHAPE_DEFAULT">1,3,513,513</string>
</resources>
# PaddleSeg Segmentation Model Inference Performance Test
# PaddleSeg Segmentation Model Inference Benchmark
## Test Software Environment
- CUDA 9.0
......@@ -9,15 +9,6 @@
- GPU: Tesla V100
- CPU: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
## Test Method
- The input is 1000 RGB images, with batch_size fixed at 1.
- Each test is repeated for several rounds; the first warm-up round is discarded and the remaining rounds are averaged. The measured time covers copying data to the GPU, the inference engine's compute time, and copying results back to the CPU.
- The Fluid C++ inference engine is used.
- FLAGS_cudnn_exhaustive_search=True is enabled during testing, so convolution algorithms are selected by exhaustive search.
- For each model, both the `OP`-optimized model and the original model are benchmarked, and each is tested with `FP16` and `FP32` respectively (a minimal run sketch follows this list).
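A minimal run sketch under the settings above; the binary name and paths are placeholders rather than the repo's actual benchmark script:
```bash
# enable exhaustive cuDNN convolution algorithm search before timing
export FLAGS_cudnn_exhaustive_search=True
# run the C++ inference demo over the 1000-image input set (paths are illustrative)
./demo --conf=/path/to/deploy.conf --input_dir=/path/to/1000_rgb_images
```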
## Inference Speed Results
**Note**: the `OP-optimized model` refers to the new model format exported since `PaddleSeg 0.3.0`, which moves image pre-processing and post-processing onto the GPU for acceleration and better performance. Each model covers three `eval_crop_size` settings: `192x192`/`512x512`/`768x768`
......@@ -501,7 +492,7 @@
### 3. Effect of different EVAL_CROP_SIZE settings on performance
Comparison chart on `deeplabv3p_xception`:
![xception](https://paddleseg.bj.bcebos.com/inference/benchmark/xception.png)
......
......@@ -11,11 +11,11 @@
## 2. Install TensorRT 5.1
Please refer to `Nvidia`'s [official installation guide](https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html)
Please refer to Nvidia's [official installation guide](https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html)
## 3. Build PaddlePaddle
Here we assume the `Python` version is `3.7` and the installation paths of `cuda`, `cudnn`, and `tensorRT` are as follows:
Here we assume the `Python` version is `3.7` and the installation paths of `CUDA`, `cuDNN`, and `TensorRT` are as follows:
```bash
# 假设 cuda 安装路径
/usr/local/cuda-9.0/
......
......@@ -31,6 +31,7 @@ gflags.DEFINE_string("conf", default="", help="Configuration File Path")
gflags.DEFINE_string("input_dir", default="", help="Directory of Input Images")
gflags.DEFINE_boolean("use_pr", default=False, help="Use optimized model")
gflags.DEFINE_string("trt_mode", default="", help="Use optimized model")
gflags.DEFINE_string("ext", default=".jpeg|.jpg", help="Input Image File Extensions")
gflags.FLAGS = gflags.FLAGS
......@@ -146,9 +147,9 @@ class ImageReader:
# process multiple images with multithreading
def process(self, imgs, use_pr=False):
imgs_data = []
with ThreadPoolExecutor(max_workers=self.config.batch_size) as exec:
with ThreadPoolExecutor(max_workers=self.config.batch_size) as exe_pool:
tasks = [
exec.submit(self.process_worker, imgs, idx, use_pr)
exe_pool.submit(self.process_worker, imgs, idx, use_pr)
for idx in range(len(imgs))
]
for task in as_completed(tasks):
......@@ -315,4 +316,4 @@ if __name__ == "__main__":
"Invalid trt_mode [%s], only support[int8, fp16, fp32]" % trt_mode)
exit(-1)
# run inference
run(gflags.FLAGS.conf, gflags.FLAGS.input_dir)
run(gflags.FLAGS.conf, gflags.FLAGS.input_dir, gflags.FLAGS.ext)
......@@ -44,7 +44,7 @@
**注意:导出的标注文件位于`保存位置`下的`outputs`目录。**
精灵标注产出的真值文件可参考我们给出的文件夹`docs/annotation/jingling_demo`
精灵标注产出的真值文件可参考我们给出的文件夹[docs/annotation/jingling_demo](jingling_demo)
<div align="center">
<img src="../imgs/annotation/jingling-4.png" width="300px"/>
......@@ -54,6 +54,7 @@
**注意:** 对于中间有空洞的目标(例如游泳圈),暂不支持对空洞部分的标注。如有需要,可借助[labelme](./labelme2seg.md)
## 3 数据格式转换
最后用我们提供的数据转换脚本将上述标注工具产出的数据格式转换为模型训练时所需的数据格式。
* 经过数据格式转换后的数据集目录结构如下:
......@@ -84,13 +85,18 @@ pip install pillow
* 运行以下代码,将标注后的数据转换成满足以上格式的数据集:
```
python pdseg/tools/jingling2seg.py <path/to/label_json_file>
python pdseg/tools/jingling2seg.py <PATH/TO/LABEL_JSON_FILE>
```
其中,`<path/to/label_json_files>`为精灵标注产出的json文件所在文件夹的目录,一般为精灵工具使用(3)中`保存位置`下的`outputs`目录。
其中,`<PATH/TO/LABEL_JSON_FILE>`为精灵标注产出的json文件所在文件夹的目录,一般为精灵工具使用(3)中`保存位置`下的`outputs`目录。
我们已内置了一个标注的示例,可运行以下代码进行体验:
转换得到的数据集可参考我们给出的文件夹`docs/annotation/jingling_demo`。其中,文件`class_names.txt`是数据集中所有标注类别的名称,包含背景类;文件夹`annotations`保存的是各图片的像素级别的真值信息,背景类`_background_`对应为0,其它目标类别从1开始递增,至多为255。
```
python pdseg/tools/jingling2seg.py docs/annotation/jingling_demo/outputs/
```
转换得到的数据集可参考我们给出的文件夹[docs/annotation/jingling_demo](jingling_demo)。其中,文件`class_names.txt`是数据集中所有标注类别的名称,包含背景类;文件夹`annotations`保存的是各图片的像素级别的真值信息,背景类`_background_`对应为0,其它目标类别从1开始递增,至多为255。
<div align="center">
<img src="../imgs/annotation/jingling-5.png" width="600px"/>
......
{"path":"/Users/dataset/aa63d7e6db0d03137883772c246c6761fc201059.jpg","outputs":{"object":[{"name":"person","polygon":{"x1":321.99,"y1":63,"x2":293,"y2":98.00999999999999,"x3":245.01,"y3":141.01,"x4":221,"y4":194,"x5":231.99,"y5":237,"x6":231.99,"y6":348.01,"x7":191,"y7":429,"x8":197,"y8":465.01,"x9":193,"y9":586,"x10":151,"y10":618.01,"x11":124,"y11":622,"x12":100,"y12":703,"x13":121.99,"y13":744,"x14":141.99,"y14":724,"x15":163,"y15":658.01,"x16":238.01,"y16":646,"x17":259,"y17":627,"x18":313,"y18":618.01,"x19":416,"y19":639,"x20":464,"y20":606,"x21":454,"y21":555.01,"x22":404,"y22":508.01,"x23":430,"y23":489,"x24":407,"y24":464,"x25":397,"y25":365.01,"x26":407,"y26":290,"x27":361.99,"y27":252,"x28":376,"y28":215.01,"x29":391.99,"y29":189,"x30":388.01,"y30":135.01,"x31":340,"y31":120,"x32":313,"y32":161.01,"x33":307,"y33":188.01,"x34":311,"y34":207,"x35":277,"y35":186,"x36":293,"y36":137,"x37":308.01,"y37":117,"x38":361,"y38":93}}]},"time_labeled":1568101256852,"labeled":true,"size":{"width":706,"height":1000,"depth":3}}
\ No newline at end of file
{"path":"/Users/dataset/jingling.jpg","outputs":{"object":[{"name":"person","polygon":{"x1":321.99,"y1":63,"x2":293,"y2":98.00999999999999,"x3":245.01,"y3":141.01,"x4":221,"y4":194,"x5":231.99,"y5":237,"x6":231.99,"y6":348.01,"x7":191,"y7":429,"x8":197,"y8":465.01,"x9":193,"y9":586,"x10":151,"y10":618.01,"x11":124,"y11":622,"x12":100,"y12":703,"x13":121.99,"y13":744,"x14":141.99,"y14":724,"x15":163,"y15":658.01,"x16":238.01,"y16":646,"x17":259,"y17":627,"x18":313,"y18":618.01,"x19":416,"y19":639,"x20":464,"y20":606,"x21":454,"y21":555.01,"x22":404,"y22":508.01,"x23":430,"y23":489,"x24":407,"y24":464,"x25":397,"y25":365.01,"x26":407,"y26":290,"x27":361.99,"y27":252,"x28":376,"y28":215.01,"x29":391.99,"y29":189,"x30":388.01,"y30":135.01,"x31":340,"y31":120,"x32":313,"y32":161.01,"x33":307,"y33":188.01,"x34":311,"y34":207,"x35":277,"y35":186,"x36":293,"y36":137,"x37":308.01,"y37":117,"x38":361,"y38":93}}]},"time_labeled":1568101256852,"labeled":true,"size":{"width":706,"height":1000,"depth":3}}
\ No newline at end of file
......@@ -47,7 +47,7 @@ git clone https://github.com/wkentaro/labelme
​ (3) 图片中所有目标的标注都完成后,点击`Save`保存json文件,**请将json文件和图片放在同一个文件夹里**,点击`Next Image`标注下一张图片。
LableMe产出的真值文件可参考我们给出的文件夹`docs/annotation/labelme_demo`
LableMe产出的真值文件可参考我们给出的文件夹[docs/annotation/labelme_demo](labelme_demo)
<div align="center">
<img src="../imgs/annotation/image-5.png" width="600px"/>
......@@ -64,6 +64,7 @@ LableMe产出的真值文件可参考我们给出的文件夹`docs/annotation/la
</div>
## 3 数据格式转换
最后用我们提供的数据转换脚本将上述标注工具产出的数据格式转换为模型训练时所需的数据格式。
* 经过数据格式转换后的数据集目录结构如下:
......@@ -94,12 +95,18 @@ pip install pillow
* 运行以下代码,将标注后的数据转换成满足以上格式的数据集:
```
python pdseg/tools/labelme2seg.py <path/to/label_json_file>
python pdseg/tools/labelme2seg.py <PATH/TO/LABEL_JSON_FILE>
```
其中,`<path/to/label_json_files>`为图片以及LabelMe产出的json文件所在文件夹的目录,同时也是转换后的标注集所在文件夹的目录。
其中,`<PATH/TO/LABEL_JSON_FILE>`为图片以及LabelMe产出的json文件所在文件夹的目录,同时也是转换后的标注集所在文件夹的目录。
转换得到的数据集可参考我们给出的文件夹`docs/annotation/labelme_demo`。其中,文件`class_names.txt`是数据集中所有标注类别的名称,包含背景类;文件夹`annotations`保存的是各图片的像素级别的真值信息,背景类`_background_`对应为0,其它目标类别从1开始递增,至多为255。
我们已内置了一个标注的示例,可运行以下代码进行体验:
```
python pdseg/tools/labelme2seg.py docs/annotation/labelme_demo/
```
转换得到的数据集可参考我们给出的文件夹[docs/annotation/labelme_demo](labelme_demo)。其中,文件`class_names.txt`是数据集中所有标注类别的名称,包含背景类;文件夹`annotations`保存的是各图片的像素级别的真值信息,背景类`_background_`对应为0,其它目标类别从1开始递增,至多为255。
<div align="center">
<img src="../imgs/annotation/image-7.png" width="600px"/>
......
# PaddleSeg 性能Benchmark
## 训练性能
### 多GPU加速比
### 显存开销对比
## 预测性能对比
### Windows
### Linux
#### Naive
#### Analysis
......@@ -55,7 +55,7 @@ Doing label pixel statistics:
-`AUG.AUG_METHOD`为stepscaling时,`EVAL_CROP_SIZE`的宽高应不小于原图中最大的宽高。
-`AUG.AUG_METHOD`为rangscaling时,`EVAL_CROP_SIZE`的宽高应不小于缩放后图像中最大的宽高。
-`AUG.AUG_METHOD`为rangescaling时,`EVAL_CROP_SIZE`的宽高应不小于缩放后图像中最大的宽高。
### 11 数据增强参数`AUG.INF_RESIZE_VALUE`校验
验证`AUG.INF_RESIZE_VALUE`是否在[`AUG.MIN_RESIZE_VALUE`~`AUG.MAX_RESIZE_VALUE`]范围内。若在范围内,则通过校验。
# PaddleSeg 分割库配置说明
# 脚本使用和配置说明
PaddleSeg提供了统一的配置用于 训练/评估/可视化/导出模型
PaddleSeg提供了 **训练**/**评估**/**可视化**/**模型导出** 等4个功能的使用脚本。所有脚本都支持通过不同的Flags来开启特定功能,也支持通过Options来修改默认的训练配置。它们的使用方式非常接近,如下:
```shell
# 训练
python pdseg/train.py ${FLAGS} ${OPTIONS}
# 评估
python pdseg/eval.py ${FLAGS} ${OPTIONS}
# 可视化
python pdseg/vis.py ${FLAGS} ${OPTIONS}
# 模型导出
python pdseg/export_model.py ${FLAGS} ${OPTIONS}
```
**Note:** FLAGS必须位于OPTIONS之前,否则将会遇到报错,例如以下例子:
```shell
# FLAGS "--cfg configs/unet_optic.yaml" 必须在 OPTIONS "BATCH_SIZE 1" 之前
python pdseg/train.py BATCH_SIZE 1 --cfg configs/unet_optic.yaml
```
## 命令行FLAGS
|FLAG|用途|支持脚本|默认值|备注|
|-|-|-|-|-|
|--cfg|配置文件路径|ALL|None||
|--use_gpu|是否使用GPU进行训练|train/eval/vis|False||
|--use_mpio|是否使用多进程进行IO处理|train/eval|False|打开该开关会占用一定量的CPU内存,但是可以提高训练速度。</br> **NOTE:** windows平台下不支持该功能, 建议使用自定义数据初次训练时不打开,打开会导致数据读取异常不可见。 |
|--use_tb|是否使用TensorBoard记录训练数据|train|False||
|--log_steps|训练日志的打印周期(单位为step)|train|10||
|--debug|是否打印debug信息|train|False|IOU等指标涉及到混淆矩阵的计算,会降低训练速度|
|--tb_log_dir &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|TensorBoard的日志路径|train|None||
|--do_eval|是否在保存模型时进行效果评估 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|train|False||
|--vis_dir|保存可视化图片的路径|vis|"visual"||
## OPTIONS
PaddleSeg提供了统一的配置用于 训练/评估/可视化/导出模型。一共存在三套配置方案:
* 命令行窗口传递的参数。
* configs目录下的yaml文件。
* 默认参数,位于pdseg/utils/config.py。
三者的优先级顺序为 命令行窗口 > yaml > 默认配置。
配置包含以下Group:
* [通用](./configs/basic_group.md)
* [DATASET](./configs/dataset_group.md)
* [DATALOADER](./configs/dataloader_group.md)
* [FREEZE](./configs/freeze_group.md)
* [MODEL](./configs/model_group.md)
* [SOLVER](./configs/solver_group.md)
* [TRAIN](./configs/train_group.md)
* [TEST](./configs/test_group.md)
`Note`:
代码详见pdseg/utils/config.py
|OPTIONS|用途|支持脚本|
|-|-|-|
|[BASIC](./configs/basic_group.md)|通用配置|ALL|
|[DATASET](./configs/dataset_group.md)|数据集相关|train/eval/vis|
|[MODEL](./configs/model_group.md)|模型相关|ALL|
|[TRAIN](./configs/train_group.md)|训练相关|train|
|[SOLVER](./configs/solver_group.md)|训练优化相关|train|
|[TEST](./configs/test_group.md)|测试模型相关|eval/vis/export_model|
|[AUG](./data_aug.md)|数据增强|ALL|
|[FREEZE](./configs/freeze_group.md)|模型导出相关|export_model|
|[DATALOADER](./configs/dataloader_group.md)|数据加载相关|ALL|
在进行自定义的分割任务之前,您需要准备一份yaml文件,建议参照[configs目录下的示例yaml](../configs)进行修改。
以下是PaddleSeg的默认配置,供查询使用。
```yaml
########################## 基本配置 ###########################################
# 批处理大小
BATCH_SIZE: 1
# 验证时图像裁剪尺寸(宽,高)
EVAL_CROP_SIZE: tuple()
# 训练时图像裁剪尺寸(宽,高)
TRAIN_CROP_SIZE: tuple()
########################## 数据集配置 #########################################
DATASET:
# 数据主目录
DATA_DIR: './dataset/cityscapes/'
# 训练集列表
TRAIN_FILE_LIST: './dataset/cityscapes/train.list'
# 验证集列表
VAL_FILE_LIST: './dataset/cityscapes/val.list'
# 测试数据列表
TEST_FILE_LIST: './dataset/cityscapes/test.list'
# Tensorboard 可视化的数据集
VIS_FILE_LIST: None
# 类别数(需包括背景类)
NUM_CLASSES: 19
# 输入图像类型, 支持三通道'rgb',四通道'rgba',单通道灰度图'gray'
IMAGE_TYPE: 'rgb'
# 输入图片的通道数
DATA_DIM: 3
# 数据列表分割符, 默认为空格
SEPARATOR: ' '
# 忽略的像素标签值, 默认为255,一般无需改动
IGNORE_INDEX: 255
########################## 模型通用配置 #######################################
MODEL:
# 模型名称, 已支持deeplabv3p, unet, icnet,pspnet,hrnet
MODEL_NAME: ''
# BatchNorm类型: bn、gn(group_norm)
DEFAULT_NORM_TYPE: 'bn'
# 多路损失加权值
MULTI_LOSS_WEIGHT: [1.0]
# DEFAULT_NORM_TYPE为gn时group数
DEFAULT_GROUP_NUMBER: 32
# 极小值, 防止分母除0溢出,一般无需改动
DEFAULT_EPSILON: 1e-5
# BatchNorm动量, 一般无需改动
BN_MOMENTUM: 0.99
# 是否使用FP16训练
FP16: False
########################## DeepLab模型配置 ####################################
DEEPLAB:
# DeepLab backbone 配置, 可选项xception_65, mobilenetv2
BACKBONE: "xception_65"
# DeepLab output stride
OUTPUT_STRIDE: 16
# MobileNet v2 backbone scale 设置
DEPTH_MULTIPLIER: 1.0
# 是否在编码器中使用ASPP模块
ENCODER_WITH_ASPP: True
# 是否使用解码器模块
ENABLE_DECODER: True
# ASPP是否使用可分离卷积
ASPP_WITH_SEP_CONV: True
# 解码器是否使用可分离卷积
DECODER_USE_SEP_CONV: True
########################## UNET模型配置 #######################################
UNET:
# 上采样方式, 默认为双线性插值
UPSAMPLE_MODE: 'bilinear'
########################## ICNET模型配置 ######################################
ICNET:
# RESNET backbone scale 设置
DEPTH_MULTIPLIER: 0.5
# RESNET 层数 设置
LAYERS: 50
########################## PSPNET模型配置 ######################################
PSPNET:
# RESNET backbone scale 设置
DEPTH_MULTIPLIER: 1
# RESNET backbone 层数 设置
LAYERS: 50
########################## HRNET模型配置 ######################################
HRNET:
# HRNET STAGE2 设置
STAGE2:
NUM_MODULES: 1
NUM_CHANNELS: [40, 80]
# HRNET STAGE3 设置
STAGE3:
NUM_MODULES: 4
NUM_CHANNELS: [40, 80, 160]
# HRNET STAGE4 设置
STAGE4:
NUM_MODULES: 3
NUM_CHANNELS: [40, 80, 160, 320]
########################### 训练配置 ##########################################
TRAIN:
# 模型保存路径
MODEL_SAVE_DIR: ''
# 预训练模型路径
PRETRAINED_MODEL_DIR: ''
# 是否resume,继续训练
RESUME_MODEL_DIR: ''
# 是否使用多卡间同步BatchNorm均值和方差
SYNC_BATCH_NORM: False
# 模型参数保存的epoch间隔数,可用来继续训练中断的模型
SNAPSHOT_EPOCH: 10
########################### 模型优化相关配置 ##################################
SOLVER:
# 初始学习率
LR: 0.1
# 学习率下降方法, 支持poly piecewise cosine 三种
LR_POLICY: "poly"
# 优化算法, 支持SGD和Adam两种算法
OPTIMIZER: "sgd"
# 动量参数
MOMENTUM: 0.9
# 二阶矩估计的指数衰减率
MOMENTUM2: 0.999
# 学习率Poly下降指数
POWER: 0.9
# step下降指数
GAMMA: 0.1
# step下降间隔
DECAY_EPOCH: [10, 20]
# 学习率权重衰减,0-1
WEIGHT_DECAY: 0.00004
# 训练开始epoch数,默认为1
BEGIN_EPOCH: 1
# 训练epoch数,正整数
NUM_EPOCHS: 30
# loss的选择,支持softmax_loss, bce_loss, dice_loss
LOSS: ["softmax_loss"]
# 是否开启warmup学习策略
LR_WARMUP: False
# warmup的迭代次数
LR_WARMUP_STEPS: 2000
########################## 测试配置 ###########################################
TEST:
# 测试模型路径
TEST_MODEL: ''
########################### 数据增强配置 ######################################
AUG:
# 图像resize的方式有三种:
# unpadding(固定尺寸),stepscaling(按比例resize),rangescaling(长边对齐)
AUG_METHOD: 'unpadding'
# 图像resize的固定尺寸(宽,高),非负
FIX_RESIZE_SIZE: (500, 500)
# 图像resize方式为stepscaling,resize最小尺度,非负
MIN_SCALE_FACTOR: 0.5
# 图像resize方式为stepscaling,resize最大尺度,不小于MIN_SCALE_FACTOR
MAX_SCALE_FACTOR: 2.0
# 图像resize方式为stepscaling,resize尺度范围间隔,非负
SCALE_STEP_SIZE: 0.25
# 图像resize方式为rangescaling,训练时长边resize的范围最小值,非负
MIN_RESIZE_VALUE: 400
# 图像resize方式为rangescaling,训练时长边resize的范围最大值,
# 不小于MIN_RESIZE_VALUE
MAX_RESIZE_VALUE: 600
# 图像resize方式为rangescaling, 测试验证可视化模式下长边resize的长度,
# 在MIN_RESIZE_VALUE到MAX_RESIZE_VALUE范围内
INF_RESIZE_VALUE: 500
# 图像镜像左右翻转
MIRROR: True
# 图像上下翻转开关,True/False
FLIP: False
# 图像启动上下翻转的概率,0-1
FLIP_RATIO: 0.5
RICH_CROP:
# RichCrop数据增广开关,用于提升模型鲁棒性
ENABLE: False
# 图像旋转最大角度,0-90
MAX_ROTATION: 15
# 裁取图像与原始图像面积比,0-1
MIN_AREA_RATIO: 0.5
# 裁取图像宽高比范围,非负
ASPECT_RATIO: 0.33
# 亮度调节范围,0-1
BRIGHTNESS_JITTER_RATIO: 0.5
# 饱和度调节范围,0-1
SATURATION_JITTER_RATIO: 0.5
# 对比度调节范围,0-1
CONTRAST_JITTER_RATIO: 0.5
# 图像模糊开关,True/False
BLUR: False
# 图像启动模糊百分比,0-1
BLUR_RATIO: 0.1
########################## 预测部署模型配置 ###################################
FREEZE:
# 预测保存的模型名称
MODEL_FILENAME: '__model__'
# 预测保存的参数名称
PARAMS_FILENAME: '__params__'
# 预测模型参数保存的路径
SAVE_DIR: 'freeze_model'
########################## 数据载入配置 #######################################
DATALOADER:
# 数据载入时的并发数, 建议值8
NUM_WORKERS: 8
# 数据载入时缓存队列大小, 建议值256
BUF_SIZE: 256
```
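上面SOLVER配置中`LR_POLICY`为`poly`时,学习率随训练步数按多项式方式衰减,可用如下示意代码理解(仅为公式示意,非PaddleSeg源码):

```python
def poly_decay_lr(base_lr, cur_step, total_steps, power=0.9):
    """poly策略:学习率按 base_lr * (1 - cur_step/total_steps)^power 衰减。"""
    return base_lr * (1.0 - float(cur_step) / total_steps) ** power

# 例如 base_lr=0.1, power=0.9 时,训练进行到一半,学习率约为 0.1 * 0.5**0.9 ≈ 0.0536
```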
......@@ -2,70 +2,58 @@
BASIC Group存放所有通用配置
## `MEAN`
## `BATCH_SIZE`
图像预处理减去的均值(格式为 *[R, G, B]*)
训练、评估、可视化时所用的BATCH大小
### 默认值
[0.5, 0.5, 0.5]
1(需要根据实际需求填写)
<br/>
<br/>
### 注意事项
## `STD`
* 当指定了多卡运行时,PaddleSeg会将数据平分到每张卡上运行,因此每张卡单次运行的数量为 BATCH_SIZE // dev_count
图像预处理所除的标准差(格式为 *[R, G, B]*)
* 多卡运行时,请确保BATCH_SIZE可被dev_count整除
### 默认值
* 增大BATCH_SIZE有利于模型训练时的收敛速度,但是会带来显存的开销。请根据实际情况评估后填写合适的值
[0.5, 0.5, 0.5]
* 目前PaddleSeg提供的很多预训练模型都有BN层,如果BATCH SIZE设置为1,则此时训练可能不稳定导致nan
<br/>
<br/>
## `EVAL_CROP_SIZE`
## `TRAIN_CROP_SIZE`
评估时所对图片裁剪的大小(格式为 *[宽, 高]*)
训练时所对图片裁剪的大小(格式为 *[宽, 高]*)
### 默认值
无(需要用户自己填写)
### 注意事项
* 裁剪的大小不能小于原图,请将该字段的值填写为评估数据中最长的宽和高
`TRAIN_CROP_SIZE`可以设置任意大小,具体如何设置根据数据集而定。
<br/>
<br/>
## `TRAIN_CROP_SIZE`
## `EVAL_CROP_SIZE`
训练时所对图片裁剪的大小(格式为 *[宽, 高]*)
评估时所对图片裁剪的大小(格式为 *[宽, 高]*)
### 默认值
无(需要用户自己填写)
<br/>
<br/>
## `BATCH_SIZE`
训练、评估、可视化时所用的BATCH大小
### 默认值
1(需要根据实际需求填写)
### 注意事项
`EVAL_CROP_SIZE`的设置需要满足以下条件,共有3种情形:
-`AUG.AUG_METHOD`为unpadding时,`EVAL_CROP_SIZE`的宽高应不小于`AUG.FIX_RESIZE_SIZE`的宽高。
-`AUG.AUG_METHOD`为stepscaling时,`EVAL_CROP_SIZE`的宽高应不小于原图中最长的宽高。
-`AUG.AUG_METHOD`为rangescaling时,`EVAL_CROP_SIZE`的宽高应不小于缩放后图像中最长的宽高。
* 当指定了多卡运行时,PaddleSeg会将数据平分到每张卡上运行,因此每张卡单次运行的数量为 BATCH_SIZE // dev_count
<br/>
<br/>
* 多卡运行时,请确保BATCH_SIZE可被dev_count整除
* 增大BATCH_SIZE有利于模型训练时的收敛速度,但是会带来显存的开销。请根据实际情况评估后填写合适的值
* 目前PaddleSeg提供的很多预训练模型都有BN层,如果BATCH SIZE设置为1,则此时训练可能不稳定导致nan
<br/>
<br/>
......@@ -5,11 +5,12 @@ MODEL Group存放所有和模型相关的配置,该Group还包含三个子Grou
* [DeepLabv3p](./model_deeplabv3p_group.md)
* [UNet](./model_unet_group.md)
* [ICNet](./model_icnet_group.md)
* [PSPNet](./model_pspnet_group.md)
* [HRNet](./model_hrnet_group.md)
## `MODEL_NAME`
所选模型,支持`deeplabv3p` `unet` `icnet` `hrnet`种模型
所选模型,支持`deeplabv3p` `unet` `icnet` `pspnet` `hrnet`五种模型
### 默认值
......@@ -20,7 +21,13 @@ MODEL Group存放所有和模型相关的配置,该Group还包含三个子Grou
## `DEFAULT_NORM_TYPE`
模型所用norm类型,支持`bn` [`gn`]()
模型所用norm类型,支持`bn`(Batch Norm)、`gn`(Group Norm)
![](../imgs/gn.png)
关于Group Norm的介绍可以参考论文:https://arxiv.org/abs/1803.08494
GN 把通道分为组,并计算每一组之内的均值和方差,以进行归一化。GN 的计算与批量大小无关,其精度也在各种批量大小下保持稳定。适应于网络参数很重的模型,比如deeplabv3+这种,可以在一个小batch下取得一个较好的训练效果。
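下面用numpy给出Group Norm的计算示意,帮助理解"分组后在组内统计均值和方差"的过程(仅为示意,非PaddleSeg实现):

```python
import numpy as np

def group_norm(x, groups=32, eps=1e-5):
    """对NCHW输入做Group Norm:把通道分成groups组,在每组内统计均值和方差后归一化。"""
    n, c, h, w = x.shape
    assert c % groups == 0, "通道数需要能被组数整除"
    x = x.reshape(n, groups, c // groups, h, w)
    mean = x.mean(axis=(2, 3, 4), keepdims=True)
    var = x.var(axis=(2, 3, 4), keepdims=True)
    x = (x - mean) / np.sqrt(var + eps)
    return x.reshape(n, c, h, w)
```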
### 默认值
......@@ -111,4 +118,3 @@ loss = 1.0 * loss1 + 0.4 * loss2 + 0.16 * loss3
<br/>
<br/>
# cfg.MODEL.PSPNET
MODEL.PSPNET 子Group存放所有和PSPNet模型相关的配置
## `DEPTH_MULTIPLIER`
ResNet backbone的depth multiplier
### 默认值
1
<br/>
<br/>
## `LAYERS`
ResNet backbone的层数,支持`18` `34` `50` `101` `152`等五种
### 默认值
50
<br/>
<br/>
......@@ -5,7 +5,7 @@ TRAIN Group存放所有和训练相关的配置
## `MODEL_SAVE_DIR`
在训练周期内定期保存模型的主目录
## 默认值
### 默认值
无(需要用户自己填写)
<br/>
......@@ -14,10 +14,10 @@ TRAIN Group存放所有和训练相关的配置
## `PRETRAINED_MODEL_DIR`
预训练模型路径
## 默认值
### 默认值
## 注意事项
### 注意事项
* 若未指定该字段,则模型会随机初始化所有的参数,从头开始训练
......@@ -31,10 +31,10 @@ TRAIN Group存放所有和训练相关的配置
## `RESUME_MODEL_DIR`
从指定路径中恢复参数并继续训练
## 默认值
### 默认值
## 注意事项
### 注意事项
*`RESUME_MODEL_DIR`存在时,PaddleSeg会恢复到上一次训练的最近一个epoch,并且恢复训练过程中的临时变量(如已经衰减过的学习率,Optimizer的动量数据等),`PRETRAINED_MODEL`路径的最后一个目录必须为int数值或者字符串final,PaddleSeg会将int数值作为当前起始EPOCH继续训练,若目录为final,则不会继续训练。若目录不满足上述条件,PaddleSeg会抛出错误。
......@@ -42,12 +42,17 @@ TRAIN Group存放所有和训练相关的配置
<br/>
## `SYNC_BATCH_NORM`
是否在多卡间同步BN的均值和方差
是否在多卡间同步BN的均值和方差
## 默认值
Synchronized Batch Norm跨GPU批归一化策略最早在[MegDet: A Large Mini-Batch Object Detector](https://arxiv.org/abs/1711.07240)
论文中提出,在[Bag of Freebies for Training Object Detection Neural Networks](https://arxiv.org/pdf/1902.04103.pdf)论文中以Yolov3验证了这一策略的有效性,[PaddleCV/yolov3](https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/yolov3)实现了这一系列策略并比Darknet框架版本在COCO17数据上mAP高5.9.
PaddleSeg基于PaddlePaddle框架的sync_batch_norm策略,可以支持通过多卡实现大batch size的分割模型训练,可以得到更高的mIoU精度。
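在基于fluid的训练程序中,多卡同步BN大致通过BuildStrategy开启,示意如下(`train_prog`、`avg_loss`为假设的训练主程序与损失变量,具体以PaddleSeg源码为准):

```python
import paddle.fluid as fluid

def compile_with_sync_bn(train_prog, avg_loss):
    """示意:通过BuildStrategy打开多卡同步BN,再编译为数据并行程序。"""
    build_strategy = fluid.BuildStrategy()
    build_strategy.sync_batch_norm = True        # 多卡间同步BN的均值和方差
    return fluid.CompiledProgram(train_prog).with_data_parallel(
        loss_name=avg_loss.name, build_strategy=build_strategy)
```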
### 默认值
False
## 注意事项
### 注意事项
* 打开该选项会带来一定的性能消耗(多卡间同步数据导致)
......
......@@ -7,67 +7,108 @@
## Resize
resize步骤是指将输入图像按照某种规则重新缩放到某一个尺寸,PaddleSeg支持以下3种resize方式:
Resize步骤是指将输入图像按照某种规则重新缩放到某一个尺寸,PaddleSeg支持以下3种resize方式:
![](imgs/aug_method.png)
- Un-padding
将输入图像直接resize到某一个固定大小下,送入到网络中间训练,对应参数为AUG.FIX_RESIZE_SIZE。预测时同样操作。
- Unpadding
将输入图像直接resize到某一个固定大小,送入网络中进行训练。预测时同样操作。
- Step-Scaling
将输入图像按照某一个比例resize,这个比例以某一个步长在一定范围内随机变动。设定最小比例参数为`AUG.MIN_SCALE_FACTOR`, 最大比例参数`AUG.MAX_SCALE_FACTOR`,步长参数为`AUG.SCALE_STEP_SIZE`预测时不对输入图像做处理。
将输入图像按照某一个比例resize,这个比例以某一个步长在一定范围内随机变动。预测时不对输入图像做处理。
- Range-Scaling
固定长宽比resize,即图像长边对齐到某一个固定大小,短边随同样的比例变化。设定最小大小参数为`AUG.MIN_RESIZE_VALUE`,设定最大大小参数为`AUG.MAX_RESIZE_VALUE`。预测时需要将长边对齐到`AUG.INF_RESIZE_VALUE`所指定的大小,其中`AUG.INF_RESIZE_VALUE``AUG.MIN_RESIZE_VALUE``AUG.MAX_RESIZE_VALUE`范围内。
将输入图像按照长边变化进行resize,即图像长边对齐到某一长度,该长度在一定范围内随机变动,短边随同样的比例变化。
预测时需要将长边对齐到另外指定的固定长度。
Range-Scaling示意图如下:
![](imgs/rangescale.png)
|Resize方式|配置参数|含义|备注|
|-|-|-|-|
|Unpadding|AUG.FIX_RESIZE_SIZE|Resize的固定尺寸|
|Step-Scaling|AUG.MIN_SCALE_FACTOR|Resize最小比例|
||AUG.MAX_SCALE_FACTOR|Resize最大比例|
||AUG.SCALE_STEP_SIZE|Resize比例选取的步长|
|Range-Scaling|AUG.MIN_RESIZE_VALUE|图像长边变动范围的最小值|
||AUG.MAX_RESIZE_VALUE|图像长边变动范围的最大值|
|&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|AUG.INF_RESIZE_VALUE|预测时长边对齐时所指定的固定长度|取值必须在<br>[AUG.MIN_RESIZE_VALUE,<br> AUG.MAX_RESIZE_VALUE]<br>范围内。|
**注:本文所有配置参数可在configs目录下您的yaml文件中进行设置。**
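以Range-Scaling为例,下面给出一段基于OpenCV的示意代码,说明"长边对齐、短边按同比例缩放"的逻辑(仅供理解,非PaddleSeg源码):

```python
import cv2
import numpy as np

def range_scaling_resize(img, min_value=400, max_value=600):
    """训练时:长边resize到[min_value, max_value]之间的随机长度,短边按同比例缩放。"""
    target_long = np.random.randint(min_value, max_value + 1)
    h, w = img.shape[:2]
    scale = float(target_long) / max(h, w)        # 长边对齐到target_long
    new_w, new_h = int(round(w * scale)), int(round(h * scale))
    return cv2.resize(img, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
```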
## 图像翻转
PaddleSeg支持以下2种翻转方式:
- 左右翻转(Mirror)
使用开关`AUG.MIRROR`,为True时该项功能开启,为False时该项功能关闭
以50%概率对图像进行左右翻转
- 上下翻转(Flip)
使用开关`AUG.FLIP`,为True时该项功能开启,`AUG.FLIP_RATIO`控制是否上下翻转的概率。为False时该项功能关闭
以一定概率对图像进行上下翻转
以上2种开关独立运作,可组合使用。故图像翻转一共有如下4种可能的情况:
<img src="imgs/data_aug_flip_mirror.png" width="60%" height="60%" />
|图像翻转方式|配置参数|含义|备注|
|-|-|-|-|
|Mirror|AUG.MIRROR|左右翻转开关|为True时开启,为False时关闭|
|Flip|AUG.FLIP|上下翻转开关|为True时开启,为False时关闭|
||AUG.FLIP_RATIO|控制是否上下翻转的概率|当AUG.FLIP为False时无效|
## Rich Crop
Rich Crop是PaddleSeg结合实际业务经验开放的一套数据增强策略,面向标注数据少,测试数据情况繁杂的分割业务场景使用的数据增强策略。流程如下图所示:
![RichCrop示意图](imgs/data_aug_example.png)
rich crop是指对图像进行多种变换,保证在训练过程中数据的丰富多样性,PaddleSeg支持以下几种变换。`AUG.RICH_CROP.ENABLE`为False时会直接跳过该步骤。
Rich Crop是指对图像进行多种变换,保证在训练过程中数据的丰富多样性,包含以下4种变换:
- Blur
使用高斯模糊对图像进行平滑。
- Rotation
图像旋转,旋转角度在一定范围内随机选取,旋转产生的多余的区域使用`DATASET.PADDING_VALUE`值进行填充。
- blur
图像加模糊,使用开关`AUG.RICH_CROP.BLUR`,为False时该项功能关闭。`AUG.RICH_CROP.BLUR_RATIO`控制加入模糊的概率
- Aspect
图像长宽比调整,从图像中按一定大小和宽高比裁取一定区域出来之后进行resize
- rotation
图像旋转,`AUG.RICH_CROP.MAX_ROTATION`控制最大旋转角度。旋转产生的多余的区域的填充值为均值
- Color jitter
图像颜色抖动,共进行亮度、饱和度和对比度三种颜色属性的调节
- aspect
图像长宽比调整,从图像中crop一定区域出来之后在某一长宽比内进行resize。控制参数`AUG.RICH_CROP.MIN_AREA_RATIO``AUG.RICH_CROP.ASPECT_RATIO`
|Rich crop方式|配置参数|含义|备注|
|-|-|-|-|
|Rich crop|AUG.RICH_CROP.ENABLE|Rich crop总开关|为True时开启,为False时关闭所有变换|
|Blur|AUG.RICH_CROP.BLUR|图像模糊开关|为True时开启,为False时关闭|
||AUG.RICH_CROP.BLUR_RATIO|控制进行模糊的概率|当AUG.RICH_CROP.BLUR为False时无效|
|Rotation|AUG.RICH_CROP.MAX_ROTATION|图像正向旋转的最大角度|取值0~90°,实际旋转角度在\[-AUG.RICH_CROP.MAX_ROTATION, AUG.RICH_CROP.MAX_ROTATION]范围内随机选取|
|Aspect|AUG.RICH_CROP.MIN_AREA_RATIO|裁取图像与原始图像面积比最小值|取值0~1,取值越小则变化范围越大,若为0则不进行调节|
||AUG.RICH_CROP.ASPECT_RATIO|裁取图像宽高比范围|取值非负,越小则变化范围越大,若为0则不进行调节|
|Color jitter|AUG.RICH_CROP.BRIGHTNESS_JITTER_RATIO|亮度调节因子|取值0~1,取值越大则变化范围越大,若为0则不进行调节|
||AUG.RICH_CROP.SATURATION_JITTER_RATIO|饱和度调节因子|取值0~1,取值越大则变化范围越大,若为0则不进行调节|
|&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|AUG.RICH_CROP.CONTRAST_JITTER_RATIO|对比度调节因子&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|取值0~1,取值越大则变化范围越大,若为0则不进行调节|
- color jitter
图像颜色调整,控制参数`AUG.RICH_CROP.BRIGHTNESS_JITTER_RATIO``AUG.RICH_CROP.SATURATION_JITTER_RATIO``AUG.RICH_CROP.CONTRAST_JITTER_RATIO`
## Random Crop
该步骤主要是通过crop的方式使得输入到网络中的图像在某一个固定大小,控制该大小的参数为TRAIN_CROP_SIZE,类型为tuple,格式为(width, height). 当输入图像大小小于CROP_SIZE的时候会对输入图像进行padding,padding值为均值。
- 输入图片格式
- 原图
- 图片格式:支持RGB三通道图片和RGBA四通道图片两种类型的图片进行训练,但是在一次训练过程中只能存在一种格式。
- 图片转换:灰度图片经过预处理后之后会转变成三通道图片
- 图片参数设置:当图片为三通道图片时IMAGE_TYPE设置为rgb, 对应MEAN和STD也必须是一个长度为3的list,当图片为四通道图片时IMAGE_TYPE设置为rgba,对应的MEAN和STD必须是一个长度为4的list。
- 标注图
- 图片格式:标注图片必须为png格式的单通道多值图,元素值代表的是这个元素所属于的类别。
- 图片转换:在datalayer层对label图片进行的任何resize,以及旋转的操作,都必须采用最近邻的插值方式。
- 图片ignore:设置TRAIN.IGNORE_INDEX 参数可以选择性忽略掉属于某一个类别的所有像素点。这个参数一般设置为255
随机裁剪图片和标签图,该步骤主要是通过裁剪的方式使得输入到网络中的图像为某一个固定大小。
Random crop过程分为3种情形:
- 当输入图像尺寸等于CROP_SIZE时,返回原图。
- 当输入图像尺寸大于CROP_SIZE时,直接裁剪。
- 当输入图像尺寸小于CROP_SIZE时,分别使用`DATASET.PADDING_VALUE`值和`DATASET.IGNORE_INDEX`值对图像和标签图进行填充,再进行裁剪。
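上述三种情形可以用如下numpy示意代码概括(仅供理解,非PaddleSeg中rand_crop的源码):

```python
import numpy as np

def rand_crop(img, seg, crop_size, padding_value=127, ignore_index=255):
    """若尺寸不足,图像用padding值、标注图用ignore值填充到crop_size,再随机裁剪。"""
    crop_w, crop_h = crop_size
    h, w = img.shape[:2]
    pad_h, pad_w = max(crop_h - h, 0), max(crop_w - w, 0)
    if pad_h > 0 or pad_w > 0:
        img = np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)),
                     mode='constant', constant_values=padding_value)
        seg = np.pad(seg, ((0, pad_h), (0, pad_w)),
                     mode='constant', constant_values=ignore_index)
        h, w = img.shape[:2]
    y0 = np.random.randint(0, h - crop_h + 1)
    x0 = np.random.randint(0, w - crop_w + 1)
    return (img[y0:y0 + crop_h, x0:x0 + crop_w],
            seg[y0:y0 + crop_h, x0:x0 + crop_w])
```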
|Random crop方式|配置参数|含义|备注|
|-|-|-|-|
|Train crop|TRAIN_CROP_SIZE|训练过程进行random crop后的图像尺寸|类型为tuple,格式为(width, height)
|Eval crop &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|EVAL_CROP_SIZE|除训练外的过程进行random crop后的图像尺寸|类型为tuple,格式为(width, height)
`TRAIN_CROP_SIZE`可以设置任意大小,具体如何设置根据数据集而定。
`EVAL_CROP_SIZE`的设置需要满足以下条件,共有3种情形:
-`AUG.AUG_METHOD`为unpadding时,`EVAL_CROP_SIZE`的宽高应不小于`AUG.FIX_RESIZE_SIZE`的宽高。
-`AUG.AUG_METHOD`为stepscaling时,`EVAL_CROP_SIZE`的宽高应不小于原图中最长的宽高。
-`AUG.AUG_METHOD`为rangescaling时,`EVAL_CROP_SIZE`的宽高应不小于缩放后图像中最长的宽高。
......@@ -2,6 +2,45 @@
## 数据标注
### 标注协议
PaddleSeg采用单通道的标注图片,每一种像素值代表一种类别,像素标注类别需要从0开始递增,例如0,1,2,3表示有4种类别。
**NOTE:** 标注图像请使用PNG无损压缩格式的图片。标注类别最多为256类。
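可以用如下Python代码快速检查一张标注图是否符合协议(其中标注图路径为假设的示例路径):

```python
from PIL import Image
import numpy as np

label = np.asarray(Image.open('annotations/example.png'))   # 假设的标注图路径
assert label.ndim == 2, "标注图必须为单通道"
print("标注中出现的类别id:", np.unique(label))                # 类别id应从0开始递增
```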
### 灰度标注vs伪彩色标注
一般的分割库使用单通道灰度图作为标注图片,往往显示出来是全黑的效果。灰度标注图的弊端:
1. 对图像标注后,无法直接观察标注是否正确。
2. 模型测试过程无法直接判断分割的实际效果。
**PaddleSeg支持伪彩色图作为标注图片,在原来的单通道图片基础上,注入调色板。在基本不增加图片大小的基础上,却可以显示出彩色的效果。**
同时PaddleSeg也兼容灰度图标注,用户原来的灰度数据集可以不做修改,直接使用。
![](./imgs/annotation/image-11.png)
### 灰度标注转换为伪彩色标注
如果用户需要转换成伪彩色标注图,可使用我们的转换工具。适用于以下两种常见的情况:
1. 如果您希望将指定目录下的所有灰度标注图转换为伪彩色标注图,则执行以下命令,指定灰度标注所在的目录即可。
```buildoutcfg
python pdseg/tools/gray2pseudo_color.py <dir_or_file> <output_dir>
```
|参数|用途|
|-|-|
|dir_or_file|指定灰度标注所在目录|
|output_dir|彩色标注图片的输出目录|
2. 如果您仅希望将指定数据集中的部分灰度标注图转换为伪彩色标注图,则执行以下命令,需要已有文件列表,按列表读取指定图片。
```buildoutcfg
python pdseg/tools/gray2pseudo_color.py <dir_or_file> <output_dir> --dataset_dir <dataset directory> --file_separator <file list separator>
```
|参数|用途|
|-|-|
|dir_or_file|指定文件列表路径|
|output_dir|彩色标注图片的输出目录|
|--dataset_dir|数据集所在根目录|
|--file_separator|文件列表分隔符|
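伪彩色标注图本质上是"单通道索引图 + 调色板",下面给出一段原理示意代码(像素值即类别id,保持不变;实际转换请使用上面的gray2pseudo_color.py脚本):

```python
from PIL import Image
import numpy as np

def gray_to_pseudo_color(gray_png_path, save_path):
    """给单通道灰度标注注入调色板并保存为伪彩色PNG,像素值(类别id)保持不变。"""
    label = np.array(Image.open(gray_png_path)).astype(np.uint8)
    out = Image.fromarray(label, mode='P')                    # 单通道索引图
    palette = np.zeros((256, 3), dtype=np.uint8)
    palette[1:] = np.random.randint(0, 256, (255, 3))         # 背景0保持黑色,其余类别随机配色
    out.putpalette(palette.flatten().tolist())
    out.save(save_path)
```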
### 标注教程
用户需预先采集好用于训练、评估和测试的图片,然后使用数据标注工具完成数据标注。
PddleSeg已支持2种标注工具:LabelMe、精灵数据标注工具。标注教程如下:
......@@ -9,63 +48,32 @@ PddleSeg已支持2种标注工具:LabelMe、精灵数据标注工具。标注
- [LabelMe标注教程](annotation/labelme2seg.md)
- [精灵数据标注工具教程](annotation/jingling2seg.md)
最后用我们提供的数据转换脚本将上述标注工具产出的数据格式转换为模型训练时所需的数据格式。
## 文件列表
### 文件列表规范
PaddleSeg采用通用的文件列表方式组织训练集、验证集和测试集。像素标注类别需要从0开始递增。
**NOTE:** 标注图像请使用PNG无损压缩格式的图片
以Cityscapes数据集为例, 我们需要整理出训练集、验证集、测试集对应的原图和标注文件列表用于PaddleSeg训练即可。
其中`DATASET.DATA_DIR`为数据根目录,文件列表的路径以数据集根目录作为相对路径起始点。
```
./cityscapes/ # 数据集根目录
├── gtFine # 标注目录
│   ├── test
│   │   ├── berlin
│   │   └── ...
│   ├── train
│   │   ├── aachen
│   │   └── ...
│   └── val
│   ├── frankfurt
│   └── ...
└── leftImg8bit # 原图目录
├── test
│   ├── berlin
│   └── ...
├── train
│   ├── aachen
│   └── ...
└── val
├── frankfurt
└── ...
```
PaddleSeg采用通用的文件列表方式组织训练集、验证集和测试集。在训练、评估、可视化过程前必须准备好相应的文件列表。
文件列表组织形式如下
```
原始图片路径 [SEP] 标注图片路径
```
其中`[SEP]`是文件路径分割符,可以在`DATASET.SEPARATOR`配置项中修改, 默认为空格。文件列表的路径以数据集根目录作为相对路径起始点,`DATASET.DATA_DIR`即为数据集根目录。
如下图所示,左边为原图的图片路径,右边为图片对应的标注路径。
其中`[SEP]`是文件路径分割符,可以在`DATASET.SEPARATOR`配置项中修改, 默认为空格。
![cityscapes_filelist](./imgs/file_list.png)
**注意事项**
* 务必保证分隔符在文件列表中每行只存在一次, 如文件名中存在空格,请使用'|'等文件名不可用字符进行切分
* 务必保证分隔符在文件列表中每行只存在一次, 如文件名中存在空格,请使用"|"等文件名不可用字符进行切分
* 文件列表请使用**UTF-8**格式保存, PaddleSeg默认使用UTF-8编码读取file_list文件
如下图所示,左边为原图的图片路径,右边为图片对应的标注路径。
![cityscapes_filelist](./imgs/file_list.png)
若数据集缺少标注图片,则文件列表不用包含分隔符和标注图片路径,如下图所示。
![cityscapes_filelist](./imgs/file_list2.png)
**注意事项**
......@@ -75,32 +83,14 @@ PaddleSeg采用通用的文件列表方式组织训练集、验证集和测试
不可在`DATASET.TRAIN_FILE_LIST``DATASET.VAL_FILE_LIST`配置项中使用。
完整的配置信息可以参考[`./docs/annotation/cityscapes_demo`](../docs/annotation/cityscapes_demo/)目录下的yaml和文件列表。
**符合规范的文件列表是什么样的呢?**
### 文件列表生成
PaddleSeg提供了生成文件列表的使用脚本,可适用于自定义数据集或cityscapes数据集,并支持通过不同的Flags来开启特定功能。
```
python pdseg/tools/create_dataset_list.py <your/dataset/dir> ${FLAGS}
```
运行后将在数据集根目录下生成训练/验证/测试集的文件列表(文件主名与`--second_folder`一致,扩展名为`.txt`)。
**Note:** 若训练/验证/测试集缺少标注图片,仍可自动生成不含分隔符和标注图片路径的文件列表。
#### 命令行FLAGS列表
请参考目录[`./docs/annotation/cityscapes_demo`](../docs/annotation/cityscapes_demo/)
|FLAG|用途|默认值|参数数目|
|-|-|-|-|
|--type|指定数据集类型,`cityscapes``自定义`|`自定义`|1|
|--separator|文件列表分隔符|'&#124;'|1|
|--folder|图片和标签集的文件夹名|'images' 'annotations'|2|
|--second_folder|训练/验证/测试集的文件夹名|'train' 'val' 'test'|若干|
|--format|图片和标签集的数据格式|'jpg' 'png'|2|
|--postfix|按文件主名(无扩展名)是否包含指定后缀对图片和标签集进行筛选|'' ''(2个空字符)|2|
### 数据集目录结构整理
#### 使用示例
- **对于自定义数据集**
如果用户想要生成数据集的文件列表,需要整理成如下的目录结构(类似于Cityscapes数据集):
如果用户想要生成自己数据集的文件列表,需要整理成如下的目录结构:
```
./dataset/ # 数据集根目录
├── annotations # 标注目录
......@@ -125,9 +115,32 @@ python pdseg/tools/create_dataset_list.py <your/dataset/dir> ${FLAGS}
└── ...
Note:以上目录名可任意
```
必须指定自定义数据集目录,可以按需要设定FLAG。
**Note:** 无需指定`--type`
### 文件列表生成
PaddleSeg提供了生成文件列表的使用脚本,可适用于自定义数据集或cityscapes数据集,并支持通过不同的Flags来开启特定功能。
```
python pdseg/tools/create_dataset_list.py <your/dataset/dir> ${FLAGS}
```
运行后将在数据集根目录下生成训练/验证/测试集的文件列表(文件主名与`--second_folder`一致,扩展名为`.txt`)。
**Note:** 生成文件列表要求:要么原图和标注图片数量一致,要么只有原图,没有标注图片。若数据集缺少标注图片,仍可自动生成不含分隔符和标注图片路径的文件列表。
#### 命令行FLAGS列表
|FLAG|用途|默认值|参数数目|
|-|-|-|-|
|--type|指定数据集类型,`cityscapes``自定义`|`自定义`|1|
|--separator|文件列表分隔符|"&#124;"|1|
|--folder|图片和标签集的文件夹名|"images" "annotations"|2|
|--second_folder|训练/验证/测试集的文件夹名|"train" "val" "test"|若干|
|--format|图片和标签集的数据格式|"jpg" "png"|2|
|--postfix|按文件主名(无扩展名)是否包含指定后缀对图片和标签集进行筛选|"" ""(2个空字符)|2|
#### 使用示例
- **对于自定义数据集**
若您已经按上述说明整理好了数据集目录结构,可以运行下面的命令生成文件列表。
```
# 生成文件列表,其分隔符为空格,图片和标签集的数据格式都为png
python pdseg/tools/create_dataset_list.py <your/dataset/dir> --separator " " --format png png
......@@ -137,22 +150,26 @@ python pdseg/tools/create_dataset_list.py <your/dataset/dir> --separator " " --f
python pdseg/tools/create_dataset_list.py <your/dataset/dir> \
--folder img gt --second_folder training validation
```
**Note:** 必须指定自定义数据集目录,可以按需要设定FLAG。无需指定`--type`
- **对于cityscapes数据集**
若您使用的是cityscapes数据集,可以运行下面的命令生成文件列表。
```
# 生成cityscapes文件列表,其分隔符为逗号
python pdseg/tools/create_dataset_list.py <your/dataset/dir> --type cityscapes --separator ","
```
**Note:**
必须指定cityscapes数据集目录,`--type`必须为`cityscapes`
在cityscapes类型下,部分FLAG将被重新设定,无需手动指定,具体如下:
|FLAG|固定值|
|-|-|
|--folder|'leftImg8bit' 'gtFine'|
|--format|'png' 'png'|
|--postfix|'_leftImg8bit' '_gtFine_labelTrainIds'|
|--folder|"leftImg8bit" "gtFine"|
|--format|"png" "png"|
|--postfix|"_leftImg8bit" "_gtFine_labelTrainIds"|
其余FLAG可以按需要设定。
```
# 生成cityscapes文件列表,其分隔符为逗号
python pdseg/tools/create_dataset_list.py <your/dataset/dir> --type cityscapes --separator ","
```
(以下文档配图在本次提交中被更新:docs/imgs/annotation/image-7.png、docs/imgs/annotation/jingling-5.png、docs/imgs/deeplabv3p.png、docs/imgs/icnet.png、docs/imgs/tensorboard_image.JPG、docs/imgs/tensorboard_scalar.JPG、docs/imgs/unet.png、docs/imgs/usage_vis_demo.jpg)
# PaddleSeg 安装说明
## 1. 安装PaddlePaddle
版本要求
* PaddlePaddle >= 1.6.1
* Python 2.7 or 3.5+
更多详细安装信息如CUDA版本、cuDNN版本等兼容信息请查看[PaddlePaddle安装](https://www.paddlepaddle.org.cn/install/doc/index)
### pip安装
由于图像分割模型计算开销大,推荐在GPU版本的PaddlePaddle下使用PaddleSeg.
```
pip install paddlepaddle-gpu
```
### Conda安装
PaddlePaddle最新版本1.5支持Conda安装,可以减少相关依赖安装成本,conda相关使用说明可以参考[Anaconda](https://www.anaconda.com/distribution/)
```
conda install -c paddle paddlepaddle-gpu cudatoolkit=9.0
```
* 如果有多卡训练需求,请安装 NVIDIA NCCL >= 2.4.7,并在Linux环境下运行
更多安装方式详情可以查看 [PaddlePaddle安装说明](https://www.paddlepaddle.org.cn/documentation/docs/zh/beginners_guide/install/index_cn.html)
## 2. 下载PaddleSeg代码
```
git clone https://github.com/PaddlePaddle/PaddleSeg
```
## 3. 安装PaddleSeg依赖
```
cd PaddleSeg
pip install -r requirements.txt
```
# dice loss解决二分类中样本不均衡问题
# 如何解决二分类中类别不均衡问题
在二类图像分割任务中,经常出现类别分布不均匀的情况,例如:工业产品的瑕疵检测、道路提取及病变区域提取等。
目前PaddleSeg提供了三种loss函数,分别为softmax loss(softmax with cross entropy loss)、dice loss(dice coefficient loss)和bce loss(binary cross entropy loss)。我们可使用dice loss解决这个问题。
注:dice loss和bce loss仅支持二分类。
## Dice loss
Dice loss的定义如下:
在二类图像分割任务中,往往存在类别分布不均的情况,如:瑕疵检测、道路提取及病变区域提取等等。
在DeepGlobe比赛的Road Extraction中,训练数据道路占比为4.5%。如下为其图片样例:
<p align="center">
<img src="./imgs/deepglobe.png" hspace='10'/> <br />
<img src="./imgs/dice.png" hspace='10' height="46" width="200"/> <br />
</p>
可以看出道路在整张图片中的比例很小。
## 数据集下载
我们从DeepGlobe比赛的Road Extraction的训练集中随机抽取了800张图片作为训练集,200张图片作为验证集,
制作了一个小型的道路提取数据集[MiniDeepGlobeRoadExtraction](https://paddleseg.bj.bcebos.com/dataset/MiniDeepGlobeRoadExtraction.zip)
## softmax loss与dice loss
在图像分割中,softmax loss(softmax with cross entropy loss)同等地对待每一像素,因此当背景占据绝大部分的情况下,
网络将偏向于背景的学习,使网络对目标的提取能力变差。`dice loss(dice coefficient loss)`通过计算预测与标注之间的重叠部分计算损失函数,避免了类别不均衡带来的影响,能够取得更好的效果。
在实际应用中`dice loss`往往与`bce loss(binary cross entropy loss)`结合使用,提高模型训练的稳定性。
其中 Y 表示ground truth,P 表示预测结果。| |表示矩阵元素之和。![](./imgs/dice2.png) 表示*Y**P*的共有元素数,
实际通过求两者的逐像素乘积之和进行计算。例如:
<p align="center">
<img src="./imgs/dice3.png" hspace='10' /> <br />
</p>
其中 1 表示前景,0 表示背景。
**Note:** 在标注图片中,务必保证前景像素值为1,背景像素值为0.
dice loss的定义如下:
Dice系数请参见[维基百科](https://zh.wikipedia.org/wiki/Dice%E7%B3%BB%E6%95%B0)
![equation](http://latex.codecogs.com/gif.latex?dice\\_loss=1-\frac{2|Y\bigcap{P}|}{|Y|+|P|})
**为什么在类别不均衡问题上,dice loss效果比softmax loss更好?**
其中 ![equation](http://latex.codecogs.com/gif.latex?|Y\bigcap{P}|) 表示*Y**P*的共有元素数,
实际计算通过求两者的乘积之和进行计算。如下所示:
首先来看softmax loss的定义:
<p align="center">
<img src="./imgs/dice1.png" hspace='10' height="68" width="513"/> <br />
<img src="./imgs/softmax_loss.png" height="130" /> <br />
</p>
其中 y 表示ground truth,p 表示网络输出。
在图像分割中,`softmax loss`评估每一个像素点的类别预测,然后平均所有的像素点。这个本质上就是对图片上的每个像素进行平等的学习。这就造成了一个问题,如果在图像上的多种类别有不平衡的表征,那么训练会由最主流的类别主导。以上面DeepGlobe道路提取的数据为例子,网络将偏向于背景的学习,降低了网络对前景目标的提取能力。
`dice loss(dice coefficient loss)`通过预测和标注的交集除以它们的总体像素进行计算,它将一个类别的所有像素作为一个整体作为考量,而且计算交集在总体中的占比,所以不受大量背景像素的影响,能够取得更好的效果。
在实际应用中`dice loss`往往与`bce loss(binary cross entropy loss)`结合使用,提高模型训练的稳定性。
[dice系数详解](https://zh.wikipedia.org/wiki/Dice%E7%B3%BB%E6%95%B0)
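按上述定义,dice loss可用如下numpy示意代码计算(`pred`为前景概率图,`label`中前景为1、背景为0;仅为公式示意,非PaddleSeg源码):

```python
import numpy as np

def dice_loss(pred, label, epsilon=1e-5):
    """dice_loss = 1 - 2|Y∩P| / (|Y| + |P|),交集用逐像素乘积之和计算。"""
    pred, label = pred.flatten(), label.flatten()
    intersection = np.sum(pred * label)
    return 1.0 - (2.0 * intersection + epsilon) / (np.sum(pred) + np.sum(label) + epsilon)
```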
## PaddleSeg指定训练loss
PaddleSeg通过`cfg.SOLVER.LOSS`参数可以选择训练时的损失函数,
`cfg.SOLVER.LOSS=['dice_loss','bce_loss']`将指定训练loss为`dice loss``bce loss`的组合
## 实验比较
## Dice loss解决类别不均衡问题的示例
我们以道路提取任务为例应用dice loss.
在DeepGlobe比赛的Road Extraction中,训练数据道路占比为:4.5%. 如下为其图片样例:
<p align="center">
<img src="./imgs/deepglobe.png" hspace='10'/> <br />
</p>
可以看出道路在整张图片中的比例很小。
### 数据集下载
我们从DeepGlobe比赛的Road Extraction的训练集中随机抽取了800张图片作为训练集,200张图片作为验证集,
制作了一个小型的道路提取数据集[MiniDeepGlobeRoadExtraction](https://paddleseg.bj.bcebos.com/dataset/MiniDeepGlobeRoadExtraction.zip)
### 实验比较
在MiniDeepGlobeRoadExtraction数据集进行了实验比较。
......@@ -73,5 +98,4 @@ softmax loss和dice loss + bce loss实验结果如下图所示。
<img src="./imgs/loss_comparison.png" hspace='10' height="208" width="516"/> <br />
</p>
# PaddleSeg 预训练模型
PaddleSeg对所有内置的分割模型都提供了公开数据集下的预训练模型,通过加载预训练模型后训练可以在自定义数据集中得到更稳定的效果。
PaddleSeg对所有内置的分割模型都提供了公开数据集下的预训练模型。对于自定义数据集的场景,使用预训练模型进行训练可以得到更稳定的效果。用户可以根据模型类型、自己的数据集和预训练数据集的相似程度,选择并下载预训练模型。
## ImageNet预训练模型
......@@ -32,6 +33,11 @@ PaddleSeg对所有内置的分割模型都提供了公开数据集下的预训
| HRNet_W48 | ImageNet | [hrnet_w48_imagenet.tar](https://paddleseg.bj.bcebos.com/models/hrnet_w48_imagenet.tar) | 78.95%/94.42% |
| HRNet_W64 | ImageNet | [hrnet_w64_imagenet.tar](https://paddleseg.bj.bcebos.com/models/hrnet_w64_imagenet.tar) | 79.30%/94.61% |
| 模型 | 数据集合 | 下载地址 | Accuracy Top1/5 Error |
|---|---|---|---|
| ResNet50(适配PSPNet) | ImageNet | [resnet50_v2_pspnet](https://paddleseg.bj.bcebos.com/resnet50_v2_pspnet.tgz)| -- |
| ResNet101(适配PSPNet) | ImageNet | [resnet101_v2_pspnet](https://paddleseg.bj.bcebos.com/resnet101_v2_pspnet.tgz)| -- |
## COCO预训练模型
数据集为COCO实例分割数据集合转换成的语义分割数据集合
......@@ -57,3 +63,6 @@ train数据集合为Cityscapes训练集合,测试为Cityscapes的验证集合
| PSPNet/bn | Cityscapes |[pspnet50_cityscapes.tgz](https://paddleseg.bj.bcebos.com/models/pspnet50_cityscapes.tgz) |16|false| 0.7013 |
| PSPNet/bn | Cityscapes |[pspnet101_cityscapes.tgz](https://paddleseg.bj.bcebos.com/models/pspnet101_cityscapes.tgz) |16|false| 0.7734 |
| HRNet_W18/bn | Cityscapes |[hrnet_w18_bn_cityscapes.tgz](https://paddleseg.bj.bcebos.com/models/hrnet_w18_bn_cityscapes.tgz) | 4 | false | 0.7936 |
| Fast-SCNN/bn | Cityscapes |[fast_scnn_cityscapes.tar](https://paddleseg.bj.bcebos.com/models/fast_scnn_cityscape.tar) | 32 | false | 0.6964 |
测试环境为Python 3.7.3、V100、cuDNN 7.6.2。
# PaddleSeg 分割模型介绍
### U-Net
U-Net 起源于医疗图像分割,整个网络是标准的encoder-decoder网络,特点是参数少,计算快,应用性强,对于一般场景适应度很高。
- [U-Net](#U-Net)
- [DeepLabv3+](#DeepLabv3)
- [PSPNet](#PSPNet)
- [ICNet](#ICNet)
- [HRNet](#HRNet)
## U-Net
U-Net [1] 起源于医疗图像分割,整个网络是标准的encoder-decoder网络,特点是参数少,计算快,应用性强,对于一般场景适应度很高。U-Net最早于2015年提出,并在ISBI 2015 Cell Tracking Challenge取得了第一。经过发展,目前有多个变形和应用。
原始U-Net的结构如下图所示,由于网络整体结构类似于大写的英文字母U,故得名U-net。左侧可视为一个编码器,右侧可视为一个解码器。编码器有四个子模块,每个子模块包含两个卷积层,每个子模块之后通过max pool进行下采样。由于卷积使用的是valid模式,故实际输出比输入图像小一些。具体来说,后一个子模块的分辨率=(前一个子模块的分辨率-4)/2。U-Net使用了Overlap-tile 策略用于补全输入图像的上下信息,使得任意大小的输入图像都可获得无缝分割。同样解码器也包含四个子模块,分辨率通过上采样操作依次上升,直到与输入图像的分辨率基本一致。该网络还使用了跳跃连接,以拼接的方式将解码器和编码器中相同分辨率的feature map进行特征融合,帮助解码器更好地恢复目标的细节。
![](./imgs/unet.png)
### DeepLabv3+
## DeepLabv3+
DeepLabv3+ 是DeepLab系列的最后一篇文章,其前作有 DeepLabv1,DeepLabv2, DeepLabv3,
在最新作中,DeepLab的作者通过encoder-decoder进行多尺度信息的融合,同时保留了原来的空洞卷积和ASPP层,
其骨干网络使用了Xception模型,提高了语义分割的健壮性和运行速率,在 PASCAL VOC 2012 数据集上取得了新的state-of-the-art表现:89.0 mIOU。
DeepLabv3+ [2] 是DeepLab系列的最后一篇文章,其前作有 DeepLabv1, DeepLabv2, DeepLabv3.
在最新作中,作者通过encoder-decoder进行多尺度信息的融合,以优化分割效果,尤其是目标边缘的效果。
并且其使用了Xception模型作为骨干网络,并将深度可分离卷积(depthwise separable convolution)应用到atrous spatial pyramid pooling(ASPP)和decoder模块中,提高了语义分割的健壮性和运行速率,在 PASCAL VOC 2012 和 Cityscapes 数据集上取得了新的state-of-the-art表现。
![](./imgs/deeplabv3p.png)
在PaddleSeg当前实现中,支持两种分类Backbone网络的切换
在PaddleSeg当前实现中,支持两种分类Backbone网络的切换:
- MobileNetv2:
- MobileNetv2
适用于移动设备的快速网络,如果对分割性能有较高的要求,请使用这一backbone网络。
- Xception:
- Xception
DeepLabv3+原始实现的backbone网络,兼顾了精度和性能,适用于服务端部署。
## PSPNet
Pyramid Scene Parsing Network (PSPNet) [3] 起源于场景解析(Scene Parsing)领域。如下图所示,普通FCN [4] 面向复杂场景出现三种误分割现象:(1)关系不匹配。将船误分类成车,显然车一般不会出现在水面上。(2)类别混淆。摩天大厦和建筑物这两个类别相近,误将摩天大厦分类成建筑物。(3)类别不显著。枕头区域较小且纹理与床相近,误将枕头分类成床。
![](./imgs/pspnet2.png)
### ICNet
PSPNet的出发点是在算法中引入更多的上下文信息来解决上述问题。为了融合了图像中不同区域的上下文信息,PSPNet通过特殊设计的全局均值池化操作(global average pooling)和特征融合构造金字塔池化模块 (Pyramid Pooling Module)。PSPNet最终获得了2016年ImageNet场景解析挑战赛的冠军,并在PASCAL VOC 2012 和 Cityscapes 数据集上取得当时的最佳效果。整个网络结构如下:
Image Cascade Network(ICNet)主要用于图像实时语义分割。相较于其它压缩计算的方法,ICNet既考虑了速度,也考虑了准确性。ICNet的主要思想是将输入图像变换为不同的分辨率,然后用不同计算复杂度的子网络计算不同分辨率的输入,再将结果合并。ICNet由三个子网络组成,计算复杂度高的网络处理低分辨率输入,计算复杂度低的网络处理高分辨率输入,通过这种方式在高分辨率图像的准确性和低复杂度网络的效率之间获得平衡。
![](./imgs/pspnet.png)
## ICNet
Image Cascade Network(ICNet) [5] 是一个基于PSPNet的语义分割网络,设计目的是减少PSPNet推断时期的耗时。ICNet主要用于图像实时语义分割。ICNet由三个不同分辨率的子网络组成,将输入图像变换为不同的分辨率,随后使用计算复杂度高的网络处理低分辨率输入,计算复杂度低的网络处理高分辨率输入,通过这种方式在高分辨率图像的准确性和低复杂度网络的效率之间获得平衡。并在PSPNet的基础上引入级联特征融合单元(cascade feature fusion unit),实现快速且高质量的分割模型。
整个网络结构如下:
![](./imgs/icnet.png)
## 参考
### HRNet
- [Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1802.02611)
High-Resolution Network (HRNet) [6] 在整个训练过程中始终维持高分辨率表示。
HRNet具有两个特点:(1)从高分辨率到低分辨率并行连接各子网络,(2)反复交换跨分辨率子网络信息。这两个特点使HRNet网络能够学习到更丰富的语义信息和细节信息。
HRNet在人体姿态估计、语义分割和目标检测领域都取得了显著的性能提升。
- [U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597)
- [ICNet for Real-Time Semantic Segmentation on High-Resolution Images](https://arxiv.org/abs/1704.08545)
整个网络结构如下:
# PaddleSeg特殊网络结构介绍
![](./imgs/hrnet.png)
### Group Norm
## 参考文献
![](./imgs/gn.png)
关于Group Norm的介绍可以参考论文:https://arxiv.org/abs/1803.08494
[1] [U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597)
GN 把通道分为组,并计算每一组之内的均值和方差,以进行归一化。GN 的计算与批量大小无关,其精度也在各种批量大小下保持稳定。适应于网络参数很重的模型,比如deeplabv3+这种,可以在一个小batch下取得一个较好的训练效果。
[2] [Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1802.02611)
[3] [Pyramid Scene Parsing Network](https://arxiv.org/abs/1612.01105)
### Synchronized Batch Norm
[4] [Fully Convolutional Networks for Semantic Segmentation](https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf)
Synchronized Batch Norm跨GPU批归一化策略最早在[MegDet: A Large Mini-Batch Object Detector](https://arxiv.org/abs/1711.07240)
论文中提出,在[Bag of Freebies for Training Object Detection Neural Networks](https://arxiv.org/pdf/1902.04103.pdf)论文中以Yolov3验证了这一策略的有效性,[PaddleCV/yolov3](https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/yolov3)实现了这一系列策略并比Darknet框架版本在COCO17数据上mAP高5.9.
[5] [ICNet for Real-Time Semantic Segmentation on High-Resolution Images](https://arxiv.org/abs/1704.08545)
PaddleSeg基于PaddlePaddle框架的sync_batch_norm策略,可以支持通过多卡实现大batch size的分割模型训练,可以得到更高的mIoU精度。
[6] [Deep High-Resolution Representation Learning for Visual Recognition](https://arxiv.org/abs/1908.07919)
......@@ -4,7 +4,7 @@
* PaddlePaddle >= 1.6.1
* NVIDIA NCCL >= 2.4.7
环境配置,数据,预训练模型准备等工作请参考[安装说明](./installation.md)[PaddleSeg使用说明](./usage.md)
环境配置,数据,预训练模型准备等工作请参考[PaddleSeg使用说明](./usage.md)
### 多进程训练示例
......
# 训练/评估/可视化
# PaddleSeg快速入门
PaddleSeg提供了 **训练**/**评估**/**可视化** 等三个功能的使用脚本。三个脚本都支持通过不同的Flags来开启特定功能,也支持通过Options来修改默认的[训练配置](./config.md)。三者的使用方式非常接近,如下:
本教程通过一个简单的示例,说明如何基于PaddleSeg启动训练(训练可视化)、评估和可视化。我们选择基于COCO数据集预训练的unet模型作为预训练模型,以一个眼底医疗分割数据集为例。
```shell
# 训练
python pdseg/train.py ${FLAGS} ${OPTIONS}
# 评估
python pdseg/eval.py ${FLAGS} ${OPTIONS}
# 可视化
python pdseg/vis.py ${FLAGS} ${OPTIONS}
```
**Note:**
* FLAGS必须位于OPTIONS之前,否则将会遇到报错,例如以下例子:
```shell
# FLAGS "--cfg configs/cityscapes.yaml" 必须在 OPTIONS "BATCH_SIZE 1" 之前
python pdseg/train.py BATCH_SIZE 1 --cfg configs/cityscapes.yaml
```
## 命令行FLAGS列表
|FLAG|支持脚本|用途|默认值|备注|
|-|-|-|-|-|
|--cfg|ALL|配置文件路径|None||
|--use_gpu|ALL|是否使用GPU进行训练|False||
|--use_mpio|train/eval|是否使用多进程进行IO处理|False|打开该开关会占用一定量的CPU内存,但是可以提高训练速度。</br> **NOTE:** windows平台下不支持该功能, 建议使用自定义数据初次训练时不打开,打开会导致数据读取异常不可见。 </br> |
|--use_tb|train|是否使用TensorBoard记录训练数据|False||
|--log_steps|train|训练日志的打印周期(单位为step)|10||
|--debug|train|是否打印debug信息|False|IOU等指标涉及到混淆矩阵的计算,会降低训练速度|
|--tb_log_dir|train|TensorBoard的日志路径|None||
|--do_eval|train|是否在保存模型时进行效果评估|False||
|--vis_dir|vis|保存可视化图片的路径|"visual"||
|--also_save_raw_results|vis|是否保存原始的预测图片|False||
## OPTIONS
详见[训练配置](./config.md)
- [1.准备工作](#1准备工作)
- [2.下载待训练数据](#2下载待训练数据)
- [3.下载预训练模型](#3下载预训练模型)
- [4.模型训练](#4模型训练)
- [5.训练过程可视化](#5训练过程可视化)
- [6.模型评估](#6模型评估)
- [7.模型可视化](#7模型可视化)
- [在线体验](#在线体验)
## 使用示例
下面通过一个简单的示例,说明如何基于PaddleSeg提供的预训练模型启动训练。我们选择基于COCO数据集预训练的unet模型作为预训练模型,在一个Oxford-IIIT Pet数据集上进行训练。
**Note:** 为了快速体验,我们使用Oxford-IIIT Pet做了一个小型数据集,后续数据都使用该小型数据集。
### 准备工作
## 1.准备工作
在开始教程前,请先确认准备工作已经完成:
1. 正确安装了PaddlePaddle
2. PaddleSeg相关依赖已经安装
如果有不确认的地方,请参考[安装说明](./installation.md)
如果有不确认的地方,请参考[首页安装说明](../README.md#安装)
## 2.下载待训练数据
![](../turtorial/imgs/optic.png)
我们提前准备好了一份眼底医疗分割数据集--视盘分割(optic disc segmentation),包含267张训练图片、76张验证图片、38张测试图片。通过以下命令进行下载:
### 下载预训练模型
```shell
# 下载预训练模型并进行解压
python pretrained_model/download_model.py unet_bn_coco
# 下载待训练数据集
python dataset/download_optic.py
```
### 下载Oxford-IIIT Pet数据集
我们使用了Oxford-IIIT中的猫和狗两个类别数据制作了一个小数据集mini_pet,用于快速体验。
更多关于数据集的介绍情参考[Oxford-IIIT Pet](https://www.robots.ox.ac.uk/~vgg/data/pets/)
## 3.下载预训练模型
```shell
# 下载预训练模型并进行解压
python dataset/download_pet.py
python pretrained_model/download_model.py unet_bn_coco
```
### 模型训练
## 4.模型训练
为了方便体验,我们在configs目录下放置了mini_pet所对应的配置文件`unet_pet.yaml`,可以通过`--cfg`指向该文件来设置训练配置。
为了方便体验,我们在configs目录下放置了配置文件`unet_optic.yaml`,可以通过`--cfg`指向该文件来设置训练配置。
我们选择GPU 0号卡进行训练,这可以通过环境变量`CUDA_VISIBLE_DEVICES`来指定
可以通过环境变量`CUDA_VISIBLE_DEVICES`来指定GPU卡号
```
# 指定GPU卡号(以0号卡为例)
export CUDA_VISIBLE_DEVICES=0
python pdseg/train.py --use_gpu \
# 训练
python pdseg/train.py --cfg configs/unet_optic.yaml \
--use_gpu \
--do_eval \
--use_tb \
--tb_log_dir train_log \
--cfg configs/unet_pet.yaml \
BATCH_SIZE 4 \
TRAIN.PRETRAINED_MODEL_DIR pretrained_model/unet_bn_coco \
SOLVER.LR 5e-5
SOLVER.LR 0.001
```
若需要使用多块GPU,以0、1、2号卡为例,可输入
```
export CUDA_VISIBLE_DEVICES=0,1,2
```
**NOTE:**
* 上述示例中,一共存在三套配置方案: PaddleSeg默认配置/unet_pet.yaml/OPTIONS,三者的优先级顺序为 OPTIONS > yaml > 默认配置。这个原则对于train.py/eval.py/vis.py都适用
* 如果发现因为内存不足而Crash。请适当调低BATCH_SIZE。如果本机GPU内存充足,则可以调高BATCH_SIZE的大小以获得更快的训练速度,BATCH_SIZE增大时,可以适当调高学习率。
* 如果发现因为内存不足而Crash。请适当调低`BATCH_SIZE`。如果本机GPU内存充足,则可以调高`BATCH_SIZE`的大小以获得更快的训练速度,`BATCH_SIZE`增大时,可以适当调高学习率`SOLVER.LR`.
* 如果在Linux系统下训练,可以使用`--use_mpio`使用多进程I/O,通过提升数据增强的处理速度进而大幅度提升GPU利用率。
### 训练过程可视化
## 5.训练过程可视化
当打开do_eval和use_tb两个开关后,我们可以通过TensorBoard查看边训练边评估的效果。
......@@ -101,40 +77,42 @@ tensorboard --logdir train_log --host {$HOST_IP} --port {$PORT}
```
NOTE:
1. 上述示例中,$HOST\_IP为机器IP地址,请替换为实际IP,$PORT请替换为可访问的端口
2. 数据量较大时,前端加载速度会比较慢,请耐心等待
1. 上述示例中,$HOST\_IP为机器IP地址,请替换为实际IP,$PORT请替换为可访问的端口
2. 数据量较大时,前端加载速度会比较慢,请耐心等待
启动TensorBoard命令后,我们可以在浏览器中查看对应的训练数据
`SCALAR`这个tab中,查看训练loss、iou、acc的变化趋势
启动TensorBoard命令后,我们可以在浏览器中查看对应的训练数据
`SCALAR`这个tab中,查看训练loss、iou、acc的变化趋势
![](./imgs/tensorboard_scalar.JPG)
`IMAGE`这个tab中,查看样本的预测情况
`IMAGE`这个tab中,查看样本图片。
![](./imgs/tensorboard_image.JPG)
### 模型评估
训练完成后,我们可以通过eval.py来评估模型效果。由于我们设置的训练EPOCH数量为100,保存间隔为10,因此一共会产生10个定期保存的模型,加上最终保存的final模型,一共有11个模型。我们选择最后保存的模型进行效果的评估:
## 6.模型评估
训练完成后,我们可以通过eval.py来评估模型效果。由于我们设置的训练EPOCH数量为10,保存间隔为5,因此一共会产生2个定期保存的模型,加上最终保存的final模型,一共有3个模型。我们选择最后保存的模型进行效果的评估:
```shell
python pdseg/eval.py --use_gpu \
--cfg configs/unet_pet.yaml \
TEST.TEST_MODEL saved_model/unet_pet/final
--cfg configs/unet_optic.yaml \
TEST.TEST_MODEL saved_model/unet_optic/final
```
可以看到,在经过训练后,模型在验证集上的mIoU指标达到了0.70+(由于随机种子等因素的影响,效果会有小范围波动,属于正常情况)。
可以看到,在经过训练后,模型在验证集上的mIoU指标达到了0.85+(由于随机种子等因素的影响,效果会有小范围波动,属于正常情况)。
### 模型可视化
通过vis.py来评估模型效果,我们选择最后保存的模型进行效果的评估
## 7.模型可视化
通过vis.py进行测试和可视化,以选择最后保存的模型进行测试为例
```shell
python pdseg/vis.py --use_gpu \
--cfg configs/unet_pet.yaml \
TEST.TEST_MODEL saved_model/unet_pet/final
--cfg configs/unet_optic.yaml \
TEST.TEST_MODEL saved_model/unet_optic/final
```
执行上述脚本后,会在主目录下产生一个visual/visual_results文件夹,里面存放着测试集图片的预测结果,我们选择其中几张图片进行查看,可以看到,在测试集中的图片上的预测效果已经很不错:
执行上述脚本后,会在主目录下产生一个visual文件夹,里面存放着测试集图片的预测结果,我们选择其中1张图片进行查看:
![](./imgs/usage_vis_demo.jpg)
![](./imgs/usage_vis_demo2.jpg)
![](./imgs/usage_vis_demo3.jpg)
`NOTE`
1. 可视化的图片会默认保存在visual/visual_results目录下,可以通过`--vis_dir`来指定输出目录
2. 训练过程中会使用DATASET.VIS_FILE_LIST中的图片进行可视化显示,而vis.py则会使用DATASET.TEST_FILE_LIST
1. 可视化的图片会默认保存在visual目录下,可以通过`--vis_dir`来指定输出目录。
2. 训练过程中会使用`DATASET.VIS_FILE_LIST`中的图片进行可视化显示,而vis.py则会使用`DATASET.TEST_FILE_LIST`.
## 在线体验
PaddleSeg在AI Studio平台上提供了在线体验的快速入门教程,欢迎[点击体验](https://aistudio.baidu.com/aistudio/projectdetail/100798)
......@@ -14,3 +14,4 @@
# limitations under the License.
import models
import utils
from . import tools
\ No newline at end of file
......@@ -327,7 +327,7 @@ def random_jitter(cv_img, saturation_range, brightness_range, contrast_range):
brightness_ratio = np.random.uniform(-brightness_range, brightness_range)
contrast_ratio = np.random.uniform(-contrast_range, contrast_range)
order = [1, 2, 3]
order = [0, 1, 2]
np.random.shuffle(order)
for i in range(3):
......@@ -368,7 +368,7 @@ def hsv_color_jitter(crop_img,
def rand_crop(crop_img, crop_seg, mode=ModelPhase.TRAIN):
"""
随机裁剪图片和标签图, 若crop尺寸大于原始尺寸,分别使用均值和ignore值填充再进行crop,
随机裁剪图片和标签图, 若crop尺寸大于原始尺寸,分别使用DATASET.PADDING_VALUE值和DATASET.IGNORE_INDEX值填充再进行crop,
crop尺寸与原始尺寸一致,返回原图,crop尺寸小于原始尺寸直接crop
Args:
......
......@@ -20,7 +20,7 @@ import importlib
from utils.config import cfg
def softmax_with_loss(logit, label, ignore_mask=None, num_classes=2):
def softmax_with_loss(logit, label, ignore_mask=None, num_classes=2, weight=None):
ignore_mask = fluid.layers.cast(ignore_mask, 'float32')
label = fluid.layers.elementwise_min(
label, fluid.layers.assign(np.array([num_classes - 1], dtype=np.int32)))
......@@ -29,12 +29,40 @@ def softmax_with_loss(logit, label, ignore_mask=None, num_classes=2):
label = fluid.layers.reshape(label, [-1, 1])
label = fluid.layers.cast(label, 'int64')
ignore_mask = fluid.layers.reshape(ignore_mask, [-1, 1])
loss, probs = fluid.layers.softmax_with_cross_entropy(
logit,
label,
ignore_index=cfg.DATASET.IGNORE_INDEX,
return_softmax=True)
if weight is None:
loss, probs = fluid.layers.softmax_with_cross_entropy(
logit,
label,
ignore_index=cfg.DATASET.IGNORE_INDEX,
return_softmax=True)
else:
label_one_hot = fluid.layers.one_hot(input=label, depth=num_classes)
if isinstance(weight, list):
assert len(weight) == num_classes, "weight length must equal num of classes"
weight = fluid.layers.assign(np.array([weight], dtype='float32'))
elif isinstance(weight, str):
assert weight.lower() == 'dynamic', 'if weight is string, must be dynamic!'
tmp = []
total_num = fluid.layers.cast(fluid.layers.shape(label)[0], 'float32')
for i in range(num_classes):
cls_pixel_num = fluid.layers.reduce_sum(label_one_hot[:, i])
ratio = total_num / (cls_pixel_num + 1)
tmp.append(ratio)
weight = fluid.layers.concat(tmp)
weight = weight / fluid.layers.reduce_sum(weight) * num_classes
elif isinstance(weight, fluid.layers.Variable):
pass
else:
raise ValueError('Expect weight is a list, string or Variable, but receive {}'.format(type(weight)))
weight = fluid.layers.reshape(weight, [1, num_classes])
weighted_label_one_hot = fluid.layers.elementwise_mul(label_one_hot, weight)
probs = fluid.layers.softmax(logit)
loss = fluid.layers.cross_entropy(
probs,
weighted_label_one_hot,
soft_label=True,
ignore_index=cfg.DATASET.IGNORE_INDEX)
weighted_label_one_hot.stop_gradient = True
loss = loss * ignore_mask
avg_loss = fluid.layers.mean(loss) / fluid.layers.mean(ignore_mask)
......@@ -43,6 +71,7 @@ def softmax_with_loss(logit, label, ignore_mask=None, num_classes=2):
ignore_mask.stop_gradient = True
return avg_loss
# to change, how to appicate ignore index and ignore mask
def dice_loss(logit, label, ignore_mask=None, epsilon=0.00001):
if logit.shape[1] != 1 or label.shape[1] != 1 or ignore_mask.shape[1] != 1:
......@@ -65,6 +94,7 @@ def dice_loss(logit, label, ignore_mask=None, epsilon=0.00001):
ignore_mask.stop_gradient = True
return fluid.layers.reduce_mean(dice_score)
def bce_loss(logit, label, ignore_mask=None):
if logit.shape[1] != 1 or label.shape[1] != 1 or ignore_mask.shape[1] != 1:
raise Exception("bce loss is only applicable to binary classfication")
......@@ -80,20 +110,22 @@ def bce_loss(logit, label, ignore_mask=None):
return loss
def multi_softmax_with_loss(logits, label, ignore_mask=None, num_classes=2):
def multi_softmax_with_loss(logits, label, ignore_mask=None, num_classes=2, weight=None):
if isinstance(logits, tuple):
avg_loss = 0
for i, logit in enumerate(logits):
logit_label = fluid.layers.resize_nearest(label, logit.shape[2:])
logit_mask = (logit_label.astype('int32') !=
if label.shape[2] != logit.shape[2] or label.shape[3] != logit.shape[3]:
label = fluid.layers.resize_nearest(label, logit.shape[2:])
logit_mask = (label.astype('int32') !=
cfg.DATASET.IGNORE_INDEX).astype('int32')
loss = softmax_with_loss(logit, logit_label, logit_mask,
loss = softmax_with_loss(logit, label, logit_mask,
num_classes)
avg_loss += cfg.MODEL.MULTI_LOSS_WEIGHT[i] * loss
else:
avg_loss = softmax_with_loss(logits, label, ignore_mask, num_classes)
avg_loss = softmax_with_loss(logits, label, ignore_mask, num_classes, weight=weight)
return avg_loss
def multi_dice_loss(logits, label, ignore_mask=None):
if isinstance(logits, tuple):
avg_loss = 0
......@@ -107,6 +139,7 @@ def multi_dice_loss(logits, label, ignore_mask=None):
avg_loss = dice_loss(logits, label, ignore_mask)
return avg_loss
def multi_bce_loss(logits, label, ignore_mask=None):
if isinstance(logits, tuple):
avg_loss = 0
......
......@@ -14,5 +14,3 @@
# limitations under the License.
import models.modeling
import models.libs
import models.backbone
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import paddle
import paddle.fluid as fluid
from paddle.fluid import ParamAttr
__all__ = ["VGGNet"]
def check_points(count, points):
if points is None:
return False
else:
if isinstance(points, list):
return (True if count in points else False)
else:
return (True if count == points else False)
class VGGNet():
def __init__(self, layers=16):
self.layers = layers
def net(self, input, class_dim=1000, end_points=None, decode_points=None):
short_cuts = dict()
layers_count = 0
layers = self.layers
vgg_spec = {
11: ([1, 1, 2, 2, 2]),
13: ([2, 2, 2, 2, 2]),
16: ([2, 2, 3, 3, 3]),
19: ([2, 2, 4, 4, 4])
}
assert layers in vgg_spec.keys(), \
"supported layers are {} but input layer is {}".format(vgg_spec.keys(), layers)
nums = vgg_spec[layers]
channels = [64, 128, 256, 512, 512]
conv = input
for i in range(len(nums)):
conv = self.conv_block(conv, channels[i], nums[i], name="conv" + str(i + 1) + "_")
layers_count += nums[i]
if check_points(layers_count, decode_points):
short_cuts[layers_count] = conv
if check_points(layers_count, end_points):
return conv, short_cuts
return conv
def conv_block(self, input, num_filter, groups, name=None):
conv = input
for i in range(groups):
conv = fluid.layers.conv2d(
input=conv,
num_filters=num_filter,
filter_size=3,
stride=1,
padding=1,
act='relu',
param_attr=fluid.param_attr.ParamAttr(
name=name + str(i + 1) + "_weights"),
bias_attr=False)
return fluid.layers.pool2d(
input=conv, pool_size=2, pool_type='max', pool_stride=2)
......@@ -164,3 +164,37 @@ def separate_conv(input, channel, stride, filter, dilation=1, act=None):
input = bn(input)
if act: input = act(input)
return input
def conv_bn_layer(input,
filter_size,
num_filters,
stride,
padding,
channels=None,
num_groups=1,
if_act=True,
name=None,
use_cudnn=True):
conv = fluid.layers.conv2d(
input=input,
num_filters=num_filters,
filter_size=filter_size,
stride=stride,
padding=padding,
groups=num_groups,
act=None,
use_cudnn=use_cudnn,
param_attr=fluid.ParamAttr(name=name + '_weights'),
bias_attr=False)
bn_name = name + '_bn'
bn = fluid.layers.batch_norm(
input=conv,
param_attr=fluid.ParamAttr(name=bn_name + "_scale"),
bias_attr=fluid.ParamAttr(name=bn_name + "_offset"),
moving_mean_name=bn_name + '_mean',
moving_variance_name=bn_name + '_variance')
if if_act:
return fluid.layers.relu6(bn)
else:
return bn
\ No newline at end of file
......@@ -24,7 +24,7 @@ from utils.config import cfg
from loss import multi_softmax_with_loss
from loss import multi_dice_loss
from loss import multi_bce_loss
from models.modeling import deeplab, unet, icnet, pspnet, hrnet
from models.modeling import deeplab, unet, icnet, pspnet, hrnet, fast_scnn
class ModelPhase(object):
......@@ -81,9 +81,11 @@ def seg_model(image, class_num):
logits = pspnet.pspnet(image, class_num)
elif model_name == 'hrnet':
logits = hrnet.hrnet(image, class_num)
elif model_name == 'fast_scnn':
logits = fast_scnn.fast_scnn(image, class_num)
else:
raise Exception(
"unknow model name, only support unet, deeplabv3p, icnet, pspnet, hrnet"
"unknow model name, only support unet, deeplabv3p, icnet, pspnet, hrnet, fast_scnn"
)
return logits
......@@ -223,8 +225,9 @@ def build_model(main_prog, start_prog, phase=ModelPhase.TRAIN):
avg_loss_list = []
valid_loss = []
if "softmax_loss" in loss_type:
weight = cfg.SOLVER.CROSS_ENTROPY_WEIGHT
avg_loss_list.append(
multi_softmax_with_loss(logits, label, mask, class_num))
multi_softmax_with_loss(logits, label, mask, class_num, weight))
loss_valid = True
valid_loss.append("softmax_loss")
if "dice_loss" in loss_type:
......
......@@ -27,6 +27,7 @@ from models.libs.model_libs import separate_conv
from models.backbone.mobilenet_v2 import MobileNetV2 as mobilenet_backbone
from models.backbone.xception import Xception as xception_backbone
def encoder(input):
# 编码器配置,采用ASPP架构,pooling + 1x1_conv + 三个不同尺度的空洞卷积并行, concat后1x1conv
# ASPP_WITH_SEP_CONV:默认为真,使用depthwise可分离卷积,否则使用普通卷积
......@@ -47,8 +48,7 @@ def encoder(input):
with scope('encoder'):
channel = 256
with scope("image_pool"):
image_avg = fluid.layers.reduce_mean(
input, [2, 3], keep_dim=True)
image_avg = fluid.layers.reduce_mean(input, [2, 3], keep_dim=True)
image_avg = bn_relu(
conv(
image_avg,
......@@ -250,14 +250,15 @@ def deeplabv3p(img, num_classes):
regularization_coeff=0.0),
initializer=fluid.initializer.TruncatedNormal(loc=0.0, scale=0.01))
with scope('logit'):
logit = conv(
data,
num_classes,
1,
stride=1,
padding=0,
bias_attr=True,
param_attr=param_attr)
with fluid.name_scope('last_conv'):
logit = conv(
data,
num_classes,
1,
stride=1,
padding=0,
bias_attr=True,
param_attr=param_attr)
logit = fluid.layers.resize_bilinear(logit, img.shape[2:])
return logit
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import paddle.fluid as fluid
from models.libs.model_libs import scope
from models.libs.model_libs import bn, bn_relu, relu, conv_bn_layer
from models.libs.model_libs import conv, avg_pool
from models.libs.model_libs import separate_conv
from utils.config import cfg
def learning_to_downsample(x, dw_channels1=32, dw_channels2=48, out_channels=64):
x = relu(bn(conv(x, dw_channels1, 3, 2)))
with scope('dsconv1'):
x = separate_conv(x, dw_channels2, stride=2, filter=3, act=fluid.layers.relu)
with scope('dsconv2'):
x = separate_conv(x, out_channels, stride=2, filter=3, act=fluid.layers.relu)
return x
def shortcut(input, data_residual):
return fluid.layers.elementwise_add(input, data_residual)
def dropout2d(input, prob, is_train=False):
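    # Spatial dropout: during training, zero out whole channels of the NCHW input with
    # probability `prob` and rescale the kept channels by 1/keep_prob; at inference
    # time the input is returned unchanged.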
if not is_train:
return input
channels = input.shape[1]
keep_prob = 1.0 - prob
random_tensor = keep_prob + fluid.layers.uniform_random_batch_size_like(input, [-1, channels, 1, 1], min=0., max=1.)
binary_tensor = fluid.layers.floor(random_tensor)
output = input / keep_prob * binary_tensor
return output
def inverted_residual_unit(input,
num_in_filter,
num_filters,
ifshortcut,
stride,
filter_size,
padding,
expansion_factor,
name=None):
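    # MobileNetV2-style inverted residual unit: 1x1 expansion conv -> depthwise conv ->
    # 1x1 linear projection, with an optional identity shortcut when ifshortcut is True.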
num_expfilter = int(round(num_in_filter * expansion_factor))
channel_expand = conv_bn_layer(
input=input,
num_filters=num_expfilter,
filter_size=1,
stride=1,
padding=0,
num_groups=1,
if_act=True,
name=name + '_expand')
bottleneck_conv = conv_bn_layer(
input=channel_expand,
num_filters=num_expfilter,
filter_size=filter_size,
stride=stride,
padding=padding,
num_groups=num_expfilter,
if_act=True,
name=name + '_dwise',
use_cudnn=False)
depthwise_output = bottleneck_conv
linear_out = conv_bn_layer(
input=bottleneck_conv,
num_filters=num_filters,
filter_size=1,
stride=1,
padding=0,
num_groups=1,
if_act=False,
name=name + '_linear')
if ifshortcut:
out = shortcut(input=input, data_residual=linear_out)
return out, depthwise_output
else:
return linear_out, depthwise_output
def inverted_blocks(input, in_c, t, c, n, s, name=None):
first_block, depthwise_output = inverted_residual_unit(
input=input,
num_in_filter=in_c,
num_filters=c,
ifshortcut=False,
stride=s,
filter_size=3,
padding=1,
expansion_factor=t,
name=name + '_1')
last_residual_block = first_block
last_c = c
for i in range(1, n):
last_residual_block, depthwise_output = inverted_residual_unit(
input=last_residual_block,
num_in_filter=last_c,
num_filters=c,
ifshortcut=True,
stride=1,
filter_size=3,
padding=1,
expansion_factor=t,
name=name + '_' + str(i + 1))
return last_residual_block, depthwise_output
def psp_module(input, out_features):
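    # Pyramid pooling: adaptive average pooling over 1x1/2x2/3x3/6x6 grids, a 1x1 conv +
    # BN/ReLU on each pooled map, bilinear upsampling back to the input size, and finally
    # concatenation with the original input along the channel axis.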
cat_layers = []
sizes = (1, 2, 3, 6)
for size in sizes:
psp_name = "psp" + str(size)
with scope(psp_name):
pool = fluid.layers.adaptive_pool2d(input,
pool_size=[size, size],
pool_type='avg',
name=psp_name + '_adapool')
data = conv(pool, out_features,
filter_size=1,
bias_attr=False,
name=psp_name + '_conv')
data_bn = bn(data, act='relu')
interp = fluid.layers.resize_bilinear(data_bn,
out_shape=input.shape[2:],
name=psp_name + '_interp', align_mode=0)
cat_layers.append(interp)
cat_layers = [input] + cat_layers
out = fluid.layers.concat(cat_layers, axis=1, name='psp_cat')
return out
class FeatureFusionModule:
"""Feature fusion module"""
def __init__(self, higher_in_channels, lower_in_channels, out_channels, scale_factor=4):
self.higher_in_channels = higher_in_channels
self.lower_in_channels = lower_in_channels
self.out_channels = out_channels
self.scale_factor = scale_factor
def net(self, higher_res_feature, lower_res_feature):
h, w = higher_res_feature.shape[2:]
lower_res_feature = fluid.layers.resize_bilinear(lower_res_feature, [h, w], align_mode=0)
with scope('dwconv'):
            lower_res_feature = relu(bn(conv(lower_res_feature, self.out_channels, 1)))
with scope('conv_lower_res'):
lower_res_feature = bn(conv(lower_res_feature, self.out_channels, 1, bias_attr=True))
with scope('conv_higher_res'):
higher_res_feature = bn(conv(higher_res_feature, self.out_channels, 1, bias_attr=True))
out = higher_res_feature + lower_res_feature
return relu(out)
class GlobalFeatureExtractor():
"""Global feature extractor module"""
def __init__(self, in_channels=64, block_channels=(64, 96, 128), out_channels=128,
t=6, num_blocks=(3, 3, 3)):
self.in_channels = in_channels
self.block_channels = block_channels
self.out_channels = out_channels
self.t = t
self.num_blocks = num_blocks
def net(self, x):
x, _ = inverted_blocks(x, self.in_channels, self.t, self.block_channels[0],
self.num_blocks[0], 2, 'inverted_block_1')
x, _ = inverted_blocks(x, self.block_channels[0], self.t, self.block_channels[1],
self.num_blocks[1], 2, 'inverted_block_2')
x, _ = inverted_blocks(x, self.block_channels[1], self.t, self.block_channels[2],
self.num_blocks[2], 1, 'inverted_block_3')
x = psp_module(x, self.block_channels[2] // 4)
with scope('out'):
x = relu(bn(conv(x, self.out_channels, 1)))
return x
class Classifier:
"""Classifier"""
def __init__(self, dw_channels, num_classes, stride=1):
self.dw_channels = dw_channels
self.num_classes = num_classes
self.stride = stride
def net(self, x):
with scope('dsconv1'):
x = separate_conv(x, self.dw_channels, stride=self.stride, filter=3, act=fluid.layers.relu)
with scope('dsconv2'):
x = separate_conv(x, self.dw_channels, stride=self.stride, filter=3, act=fluid.layers.relu)
x = dropout2d(x, 0.1, is_train=cfg.PHASE=='train')
x = conv(x, self.num_classes, 1, bias_attr=True)
return x
def aux_layer(x, num_classes):
x = relu(bn(conv(x, 32, 3, padding=1)))
x = dropout2d(x, 0.1, is_train=(cfg.PHASE == 'train'))
with scope('logit'):
x = conv(x, num_classes, 1, bias_attr=True)
return x
def fast_scnn(img, num_classes):
size = img.shape[2:]
classifier = Classifier(128, num_classes)
global_feature_extractor = GlobalFeatureExtractor(64, [64, 96, 128], 128, 6, [3, 3, 3])
feature_fusion = FeatureFusionModule(64, 128, 128)
with scope('learning_to_downsample'):
higher_res_features = learning_to_downsample(img, 32, 48, 64)
with scope('global_feature_extractor'):
lower_res_feature = global_feature_extractor.net(higher_res_features)
with scope('feature_fusion'):
x = feature_fusion.net(higher_res_features, lower_res_feature)
with scope('classifier'):
logit = classifier.net(x)
logit = fluid.layers.resize_bilinear(logit, size, align_mode=0)
if len(cfg.MODEL.MULTI_LOSS_WEIGHT) == 3:
with scope('aux_layer_higher'):
higher_logit = aux_layer(higher_res_features, num_classes)
higher_logit = fluid.layers.resize_bilinear(higher_logit, size, align_mode=0)
with scope('aux_layer_lower'):
lower_logit = aux_layer(lower_res_feature, num_classes)
lower_logit = fluid.layers.resize_bilinear(lower_logit, size, align_mode=0)
return logit, higher_logit, lower_logit
elif len(cfg.MODEL.MULTI_LOSS_WEIGHT) == 2:
with scope('aux_layer_higher'):
higher_logit = aux_layer(higher_res_features, num_classes)
higher_logit = fluid.layers.resize_bilinear(higher_logit, size, align_mode=0)
return logit, higher_logit
return logit
\ No newline at end of file
......@@ -98,8 +98,8 @@ class SegDataset(object):
# Re-shuffle file list
if self.shuffle and cfg.NUM_TRAINERS > 1:
np.random.RandomState(self.shuffle_seed).shuffle(self.all_lines)
num_lines = len(self.all_lines) // self.num_trainers
self.lines = self.all_lines[num_lines * self.trainer_id: num_lines * (self.trainer_id + 1)]
num_lines = len(self.all_lines) // cfg.NUM_TRAINERS
self.lines = self.all_lines[num_lines * cfg.TRAINER_ID: num_lines * (cfg.TRAINER_ID + 1)]
self.shuffle_seed += 1
elif self.shuffle:
np.random.shuffle(self.lines)
......
......@@ -116,18 +116,19 @@ def generate_list(args):
label_files = get_files(1, dataset_split, args)
if not image_files:
img_dir = os.path.join(dataset_root, args.folder[0], dataset_split)
print("No files in {}".format(img_dir))
warnings.warn("No images in {} !!!".format(img_dir))
num_images = len(image_files)
if not label_files:
label_dir = os.path.join(dataset_root, args.folder[1], dataset_split)
print("No files in {}".format(label_dir))
warnings.warn("No labels in {} !!!".format(label_dir))
num_label = len(label_files)
if num_images < num_label:
warnings.warn("number of images = {} < number of labels = {}."
.format(num_images, num_label))
continue
if num_images != num_label and num_label > 0:
raise Exception("Number of images = {} number of labels = {} \n"
"Either number of images is equal to number of labels, "
"or number of labels is equal to 0.\n"
"Please check your dataset!".format(num_images, num_label))
file_list = os.path.join(dataset_root, dataset_split + '.txt')
with open(file_list, "w") as f:
......
......@@ -2,13 +2,11 @@
from __future__ import print_function
import argparse
import glob
import os
import os.path as osp
import sys
import numpy as np
from PIL import Image
from pdseg.vis import get_color_map_list
def parse_args():
......@@ -26,6 +24,28 @@ def parse_args():
return parser.parse_args()
def get_color_map_list(num_classes):
""" Returns the color map for visualizing the segmentation mask,
which can support arbitrary number of classes.
Args:
num_classes: Number of classes
Returns:
The color map
"""
color_map = num_classes * [0, 0, 0]
for i in range(0, num_classes):
j = 0
lab = i
while lab:
color_map[i * 3] |= (((lab >> 0) & 1) << (7 - j))
color_map[i * 3 + 1] |= (((lab >> 1) & 1) << (7 - j))
color_map[i * 3 + 2] |= (((lab >> 2) & 1) << (7 - j))
j += 1
lab >>= 3
return color_map
def gray2pseudo_color(args):
"""将灰度标注图片转换为伪彩色图片"""
input = args.dir_or_file
......@@ -36,18 +56,28 @@ def gray2pseudo_color(args):
color_map = get_color_map_list(256)
if os.path.isdir(input):
for grt_path in glob.glob(osp.join(input, '*.png')):
print('Converting original label:', grt_path)
basename = osp.basename(grt_path)
for fpath, dirs, fs in os.walk(input):
for f in fs:
try:
grt_path = osp.join(fpath, f)
_output_dir = fpath.replace(input, '')
_output_dir = _output_dir.lstrip(os.path.sep)
im = Image.open(grt_path)
lbl = np.asarray(im)
im = Image.open(grt_path)
lbl = np.asarray(im)
lbl_pil = Image.fromarray(lbl.astype(np.uint8), mode='P')
lbl_pil.putpalette(color_map)
lbl_pil = Image.fromarray(lbl.astype(np.uint8), mode='P')
lbl_pil.putpalette(color_map)
new_file = osp.join(output_dir, basename)
lbl_pil.save(new_file)
real_dir = osp.join(output_dir, _output_dir)
if not osp.exists(real_dir):
os.makedirs(real_dir)
new_grt_path = osp.join(real_dir, f)
lbl_pil.save(new_grt_path)
print('New label path:', new_grt_path)
                except Exception:
                    # Skip files that cannot be opened or converted as label images
                    continue
elif os.path.isfile(input):
if args.dataset_dir is None or args.file_separator is None:
print('No dataset_dir or file_separator input!')
......@@ -58,17 +88,20 @@ def gray2pseudo_color(args):
grt_name = parts[1]
grt_path = os.path.join(args.dataset_dir, grt_name)
print('Converting original label:', grt_path)
basename = osp.basename(grt_path)
im = Image.open(grt_path)
lbl = np.asarray(im)
lbl_pil = Image.fromarray(lbl.astype(np.uint8), mode='P')
lbl_pil.putpalette(color_map)
new_file = osp.join(output_dir, basename)
lbl_pil.save(new_file)
grt_dir, _ = osp.split(grt_name)
new_dir = osp.join(output_dir, grt_dir)
if not osp.exists(new_dir):
os.makedirs(new_dir)
new_grt_path = osp.join(output_dir, grt_name)
lbl_pil.save(new_grt_path)
print('New label path:', new_grt_path)
else:
print('It\'s neither a dir nor a file')
......
......@@ -12,7 +12,7 @@ import numpy as np
import PIL.Image
import labelme
from pdseg.vis import get_color_map_list
from gray2pseudo_color import get_color_map_list
def parse_args():
......
......@@ -12,7 +12,7 @@ import numpy as np
import PIL.Image
import labelme
from pdseg.vis import get_color_map_list
from gray2pseudo_color import get_color_map_list
def parse_args():
......
......@@ -24,12 +24,14 @@ os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0"
import sys
import argparse
import pprint
import random
import shutil
import functools
import paddle
import numpy as np
import paddle.fluid as fluid
from paddle.fluid import profiler
from utils.config import cfg
from utils.timer import Timer, calculate_eta
......@@ -95,6 +97,24 @@ def parse_args():
help='See utils/config.py for all options',
default=None,
nargs=argparse.REMAINDER)
parser.add_argument(
'--enable_ce',
dest='enable_ce',
        help='If set True, enable continuous evaluation job. '
        'This flag is only used for internal test.',
action='store_true')
# NOTE: This for benchmark
parser.add_argument(
'--is_profiler',
        help='the profiler switch (used for benchmark)',
default=0,
type=int)
parser.add_argument(
'--profiler_path',
        help='the profiler output file path (used for benchmark)',
default='./seg.profiler',
type=str)
return parser.parse_args()
......@@ -194,6 +214,9 @@ def print_info(*msg):
def train(cfg):
startup_prog = fluid.Program()
train_prog = fluid.Program()
if args.enable_ce:
startup_prog.random_seed = 1000
train_prog.random_seed = 1000
drop_last = True
dataset = SegDataset(
......@@ -431,6 +454,13 @@ def train(cfg):
sys.stdout.flush()
avg_loss = 0.0
timer.restart()
# NOTE : used for benchmark, profiler tools
if args.is_profiler and epoch == 1 and global_step == args.log_steps:
profiler.start_profiler("All")
elif args.is_profiler and epoch == 1 and global_step == args.log_steps + 5:
profiler.stop_profiler("total", args.profiler_path)
return
except fluid.core.EOFException:
py_reader.reset()
......@@ -483,6 +513,9 @@ def main(args):
cfg.update_from_file(args.cfg_file)
if args.opts:
cfg.update_from_list(args.opts)
if args.enable_ce:
random.seed(0)
np.random.seed(0)
cfg.TRAINER_ID = int(os.getenv("PADDLE_TRAINER_ID", 0))
cfg.NUM_TRAINERS = int(os.environ.get('PADDLE_TRAINERS_NUM', 1))
......
......@@ -72,17 +72,11 @@ cfg.DATASET.IGNORE_INDEX = 255
cfg.DATASET.PADDING_VALUE = [127.5, 127.5, 127.5]
########################### Data augmentation config ##########################
# Horizontal (left-right) mirror flip
cfg.AUG.MIRROR = True
# Vertical (up-down) flip switch, True/False
cfg.AUG.FLIP = False
# Probability of applying the vertical flip, 0-1
cfg.AUG.FLIP_RATIO = 0.5
# Fixed resize size (width, height), non-negative
cfg.AUG.FIX_RESIZE_SIZE = tuple()
# Three resize methods are supported:
# unpadding (fixed size), stepscaling (resize by a scale factor), rangescaling (align the long edge)
cfg.AUG.AUG_METHOD = 'rangescaling'
cfg.AUG.AUG_METHOD = 'unpadding'
# Fixed resize size (width, height), non-negative
cfg.AUG.FIX_RESIZE_SIZE = (512, 512)
# For stepscaling: minimum resize scale, non-negative
cfg.AUG.MIN_SCALE_FACTOR = 0.5
# For stepscaling: maximum resize scale, not less than MIN_SCALE_FACTOR
......@@ -98,6 +92,13 @@ cfg.AUG.MAX_RESIZE_VALUE = 600
# Must lie within the range MIN_RESIZE_VALUE to MAX_RESIZE_VALUE
cfg.AUG.INF_RESIZE_VALUE = 500
# Horizontal (left-right) mirror flip
cfg.AUG.MIRROR = True
# Vertical (up-down) flip switch, True/False
cfg.AUG.FLIP = False
# Probability of applying the vertical flip, 0-1
cfg.AUG.FLIP_RATIO = 0.5
# RichCrop augmentation switch, used to improve model robustness
cfg.AUG.RICH_CROP.ENABLE = False
# Maximum image rotation angle, 0-90
......@@ -158,13 +159,16 @@ cfg.SOLVER.LOSS = ["softmax_loss"]
cfg.SOLVER.LR_WARMUP = False
# Number of warmup iterations
cfg.SOLVER.LR_WARMUP_STEPS = 2000
# Cross entropy weight, defaults to None. If set to 'dynamic', the class weights are
# adjusted dynamically according to the number of pixels of each class in every batch.
# A static weight can also be given as a list, e.g. for 3 classes: [0.1, 2.0, 0.9]
cfg.SOLVER.CROSS_ENTROPY_WEIGHT = None
########################## Test config ########################################
# Path of the model to test
cfg.TEST.TEST_MODEL = ''
########################## Common model config ################################
# Model name; deeplab, unet and icnet are supported
# Model name; deeplabv3p, unet, icnet, pspnet and hrnet are supported
cfg.MODEL.MODEL_NAME = ''
# BatchNorm type: bn or gn (group_norm)
cfg.MODEL.DEFAULT_NORM_TYPE = 'bn'
......@@ -232,3 +236,19 @@ cfg.FREEZE.MODEL_FILENAME = '__model__'
cfg.FREEZE.PARAMS_FILENAME = '__params__'
# Directory where the exported inference model is saved
cfg.FREEZE.SAVE_DIR = 'freeze_model'
########################## paddle-slim ######################################
cfg.SLIM.KNOWLEDGE_DISTILL_IS_TEACHER = False
cfg.SLIM.KNOWLEDGE_DISTILL = False
cfg.SLIM.KNOWLEDGE_DISTILL_TEACHER_MODEL_DIR = ""
cfg.SLIM.NAS_PORT = 23333
cfg.SLIM.NAS_ADDRESS = ""
cfg.SLIM.NAS_SEARCH_STEPS = 100
cfg.SLIM.NAS_START_EVAL_EPOCH = 0
cfg.SLIM.NAS_IS_SERVER = True
cfg.SLIM.NAS_SPACE_NAME = ""
cfg.SLIM.PRUNE_PARAMS = ''
cfg.SLIM.PRUNE_RATIOS = []
......@@ -34,6 +34,7 @@ from utils.config import cfg
from reader import SegDataset
from models.model_builder import build_model
from models.model_builder import ModelPhase
from tools.gray2pseudo_color import get_color_map_list
def parse_args():
......@@ -73,28 +74,6 @@ def makedirs(directory):
os.makedirs(directory)
def get_color_map_list(num_classes):
""" Returns the color map for visualizing the segmentation mask,
which can support arbitrary number of classes.
Args:
num_classes: Number of classes
Returns:
The color map
"""
color_map = num_classes * [0, 0, 0]
for i in range(0, num_classes):
j = 0
lab = i
while lab:
color_map[i * 3] |= (((lab >> 0) & 1) << (7 - j))
color_map[i * 3 + 1] |= (((lab >> 1) & 1) << (7 - j))
color_map[i * 3 + 2] |= (((lab >> 2) & 1) << (7 - j))
j += 1
lab >>= 3
return color_map
def to_png_fn(fn):
"""
Append png as filename postfix
......@@ -108,7 +87,7 @@ def to_png_fn(fn):
def visualize(cfg,
vis_file_list=None,
use_gpu=False,
vis_dir="visual_predict",
vis_dir="visual",
ckpt_dir=None,
log_writer=None,
local_test=False,
......@@ -138,7 +117,7 @@ def visualize(cfg,
fluid.io.load_params(exe, ckpt_dir, main_program=test_prog)
save_dir = os.path.join('visual', vis_dir)
save_dir = vis_dir
makedirs(save_dir)
fetch_list = [pred.name]
......
......@@ -81,6 +81,8 @@ model_urls = {
"https://paddleseg.bj.bcebos.com/models/pspnet101_cityscapes.tgz",
"hrnet_w18_bn_cityscapes":
"https://paddleseg.bj.bcebos.com/models/hrnet_w18_bn_cityscapes.tgz",
"fast_scnn_cityscapes":
"https://paddleseg.bj.bcebos.com/models/fast_scnn_cityscape.tar",
}
if __name__ == "__main__":
......
>Please install PaddleSlim and Paddle 1.6 or higher before running this example.
# PaddleSeg Knowledge Distillation Tutorial
Before reading this tutorial, please make sure you have read the [PaddleSeg usage guide](../../docs/usage.md) and related chapters so that you have a basic understanding of PaddleSeg.
This document describes how to distill models in the segmentation library with [PaddleSlim](https://paddlepaddle.github.io/PaddleSlim).
Unless otherwise noted, all commands in this tutorial are executed under the `PaddleSeg/` directory.
## Overview
This example uses the [distillation strategy](https://paddlepaddle.github.io/PaddleSlim/algo/algo/#3) provided by PaddleSlim to run distillation training on models in the segmentation library.
Before reading this example, we recommend going through the following first:
- [PaddleSlim distillation API documentation](https://paddlepaddle.github.io/PaddleSlim/api/single_distiller_api/)
## Install PaddleSlim
PaddleSlim can be installed by following the steps in the [PaddleSlim documentation](https://paddlepaddle.github.io/PaddleSlim/).
## Distillation strategy
For how to use the distillation APIs, please refer to the PaddleSlim distillation API documentation.
Here we take distilling a DeepLabv3-MobileNet student with a DeepLabv3-Xception teacher as an example. First, to get an overall picture of the `student model` and the `teacher model` and to pin down which tensors to distill, we inspect the names and shapes of the Variables of the two networks with the following code:
```python
# Inspect the student model's Variables
student_vars = []
for v in fluid.default_main_program().list_vars():
try:
student_vars.append((v.name, v.shape))
except:
pass
print("="*50+"student_model_vars"+"="*50)
print(student_vars)
# Inspect the teacher model's Variables
teacher_vars = []
for v in teacher_program.list_vars():
try:
teacher_vars.append((v.name, v.shape))
except:
pass
print("="*50+"teacher_model_vars"+"="*50)
print(teacher_vars)
```
Comparing the two lists, the feature maps fed into the `loss` of the `student model` and the `teacher model` are:
```bash
# student model
bilinear_interp_0.tmp_0
# teacher model
bilinear_interp_2.tmp_0
```
These two feature maps have the same shape and both sit at the output end of their networks, so we add a distillation loss between this pair with `l2_loss`. Note that the teacher's Variables are automatically given a `name_prefix` during the merge step, so the prefix `"teacher_"` has to be added here as well; for the merge process see the [distillation API documentation](https://paddlepaddle.github.io/PaddleSlim/api/single_distiller_api/#merge).
```python
distill_loss = l2_loss('teacher_bilinear_interp_2.tmp_0', 'bilinear_interp_0.tmp_0')
```
Following the same procedure, other losses can be chosen for the distillation strategy; PaddleSlim supports `FSP_loss`, `L2_loss`, `softmax_with_cross_entropy_loss`, as well as any user-defined loss.
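For a rough illustration, switching the loss only changes how `distill_loss` is built after `merge`. The sketch below is not part of the original scripts; the `soft_label_loss` signature, the temperatures and the 0.1 weight are assumptions to be checked against the PaddleSlim API documentation, and `seg_loss` stands for the student's own segmentation loss.
```python
from paddleslim.dist.single_distiller import merge, l2_loss, soft_label_loss

# Merge the teacher program into the student program; teacher variables get the
# "teacher_" name prefix used below.
merge(teacher_program, fluid.default_main_program(),
      {'image': 'image', 'label': 'label', 'mask': 'mask'}, place)

# L2 distillation between the two output feature maps (what this tutorial uses)
distill_loss = l2_loss('teacher_bilinear_interp_2.tmp_0', 'bilinear_interp_0.tmp_0')

# Alternative (sketch only): distill softened logits instead of the raw feature maps
# distill_loss = soft_label_loss('teacher_bilinear_interp_2.tmp_0',
#                                'bilinear_interp_0.tmp_0',
#                                teacher_temperature=2.0,
#                                student_temperature=2.0)

total_loss = seg_loss + 0.1 * distill_loss  # weighted sum used as the training loss
```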
## Training
The distillation script `train_distill.py` is written based on [PaddleSeg/pdseg/train.py](../../pdseg/train.py).
The script defines a teacher_model and a student_model, and uses the teacher_model's outputs to guide the training of the student_model.
### Example
Download the teacher's pretrained model ([deeplabv3p_xception65_bn_cityscapes.tgz](https://paddleseg.bj.bcebos.com/models/xception65_bn_cityscapes.tgz)) and the student's pretrained model ([mobilenet_cityscapes.tgz](https://paddleseg.bj.bcebos.com/models/mobilenet_cityscapes.tgz)),
then set the pretrained-model path in the student config file (./slim/distillation/cityscape.yaml):
```
TRAIN:
PRETRAINED_MODEL_DIR: your_student_pretrained_model_dir
```
and set the pretrained-model path in the teacher config file (./slim/distillation/cityscape_teacher.yaml):
```
SLIM:
KNOWLEDGE_DISTILL_TEACHER_MODEL_DIR: your_teacher_pretrained_model_dir
```
Run the following command to start training; an evaluation is performed every `cfg.TRAIN.SNAPSHOT_EPOCH` epochs.
```shell
export CUDA_VISIBLE_DEVICES=0,1
python -m paddle.distributed.launch ./slim/distillation/train_distill.py \
--log_steps 10 --cfg ./slim/distillation/cityscape.yaml \
--teacher_cfg ./slim/distillation/cityscape_teacher.yaml \
--use_gpu \
--use_mpio \
--do_eval
```
Note: to change parameters in the config files, edit the corresponding config file directly; overriding them from the command line is not supported yet.
## Evaluation and prediction
For evaluation and prediction after training, please refer to the PaddleSeg [Quick Start](../../README.md#快速入门) and [Basic Features](../../README.md#基础功能) chapters.
EVAL_CROP_SIZE: (2049, 1025) # (width, height), for unpadding rangescaling and stepscaling
TRAIN_CROP_SIZE: (769, 769) # (width, height), for unpadding rangescaling and stepscaling
AUG:
AUG_METHOD: "stepscaling" # choice unpadding rangescaling and stepscaling
FIX_RESIZE_SIZE: (640, 640) # (width, height), for unpadding
INF_RESIZE_VALUE: 500 # for rangescaling
MAX_RESIZE_VALUE: 600 # for rangescaling
MIN_RESIZE_VALUE: 400 # for rangescaling
MAX_SCALE_FACTOR: 2.0 # for stepscaling
MIN_SCALE_FACTOR: 0.5 # for stepscaling
SCALE_STEP_SIZE: 0.25 # for stepscaling
MIRROR: True
FLIP: True
FLIP_RATIO: 0.2
RICH_CROP:
ENABLE: False
ASPECT_RATIO: 0.33
BLUR: True
BLUR_RATIO: 0.1
MAX_ROTATION: 15
MIN_AREA_RATIO: 0.5
BRIGHTNESS_JITTER_RATIO: 0.5
CONTRAST_JITTER_RATIO: 0.5
SATURATION_JITTER_RATIO: 0.5
BATCH_SIZE: 16
MEAN: [0.5, 0.5, 0.5]
STD: [0.5, 0.5, 0.5]
DATASET:
DATA_DIR: "./dataset/cityscapes/"
IMAGE_TYPE: "rgb" # choice rgb or rgba
NUM_CLASSES: 19
TEST_FILE_LIST: "dataset/cityscapes/val.list"
TRAIN_FILE_LIST: "dataset/cityscapes/train.list"
VAL_FILE_LIST: "dataset/cityscapes/val.list"
IGNORE_INDEX: 255
FREEZE:
MODEL_FILENAME: "model"
PARAMS_FILENAME: "params"
MODEL:
DEFAULT_NORM_TYPE: "bn"
MODEL_NAME: "deeplabv3p"
DEEPLAB:
BACKBONE: "mobilenet"
ASPP_WITH_SEP_CONV: True
DECODER_USE_SEP_CONV: True
ENCODER_WITH_ASPP: False
ENABLE_DECODER: False
TEST:
TEST_MODEL: "snapshots/cityscape_v5/final/"
TRAIN:
MODEL_SAVE_DIR: "snapshots/cityscape_mbv2_kd_e100_1/"
PRETRAINED_MODEL_DIR: u"pretrained_model/mobilenet_cityscapes"
SNAPSHOT_EPOCH: 5
SYNC_BATCH_NORM: True
SOLVER:
LR: 0.001
LR_POLICY: "poly"
OPTIMIZER: "sgd"
NUM_EPOCHS: 100
EVAL_CROP_SIZE: (2049, 1025) # (width, height), for unpadding rangescaling and stepscaling
TRAIN_CROP_SIZE: (769, 769) # (width, height), for unpadding rangescaling and stepscaling
AUG:
AUG_METHOD: "stepscaling" # choice unpadding rangescaling and stepscaling
FIX_RESIZE_SIZE: (640, 640) # (width, height), for unpadding
INF_RESIZE_VALUE: 500 # for rangescaling
MAX_RESIZE_VALUE: 600 # for rangescaling
MIN_RESIZE_VALUE: 400 # for rangescaling
MAX_SCALE_FACTOR: 2.0 # for stepscaling
MIN_SCALE_FACTOR: 0.5 # for stepscaling
SCALE_STEP_SIZE: 0.25 # for stepscaling
MIRROR: True
FLIP: True
FLIP_RATIO: 0.2
RICH_CROP:
ENABLE: False
ASPECT_RATIO: 0.33
BLUR: True
BLUR_RATIO: 0.1
MAX_ROTATION: 15
MIN_AREA_RATIO: 0.5
BRIGHTNESS_JITTER_RATIO: 0.5
CONTRAST_JITTER_RATIO: 0.5
SATURATION_JITTER_RATIO: 0.5
BATCH_SIZE: 16
MEAN: [0.5, 0.5, 0.5]
STD: [0.5, 0.5, 0.5]
DATASET:
DATA_DIR: "./dataset/cityscapes/"
IMAGE_TYPE: "rgb" # choice rgb or rgba
NUM_CLASSES: 19
TEST_FILE_LIST: "dataset/cityscapes/val.list"
TRAIN_FILE_LIST: "dataset/cityscapes/train.list"
VAL_FILE_LIST: "dataset/cityscapes/val.list"
IGNORE_INDEX: 255
FREEZE:
MODEL_FILENAME: "model"
PARAMS_FILENAME: "params"
MODEL:
DEFAULT_NORM_TYPE: "bn"
MODEL_NAME: "deeplabv3p"
DEEPLAB:
BACKBONE: "xception_65"
ASPP_WITH_SEP_CONV: True
DECODER_USE_SEP_CONV: True
ENCODER_WITH_ASPP: True
ENABLE_DECODER: True
TEST:
TEST_MODEL: "snapshots/cityscape_v5/final/"
TRAIN:
MODEL_SAVE_DIR: "snapshots/cityscape_v7/"
PRETRAINED_MODEL_DIR: u"pretrain/deeplabv3plus_gn_init"
SNAPSHOT_EPOCH: 5
SYNC_BATCH_NORM: True
SOLVER:
LR: 0.001
LR_POLICY: "poly"
OPTIMIZER: "sgd"
NUM_EPOCHS: 100
SLIM:
KNOWLEDGE_DISTILL_IS_TEACHER: True
KNOWLEDGE_DISTILL: True
KNOWLEDGE_DISTILL_TEACHER_MODEL_DIR: "pretrained_model/xception65_bn_cityscapes"
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import struct
import paddle.fluid as fluid
import numpy as np
from paddle.fluid.proto.framework_pb2 import VarType
import solver
from utils.config import cfg
from loss import multi_softmax_with_loss
from loss import multi_dice_loss
from loss import multi_bce_loss
from models.modeling import deeplab, unet, icnet, pspnet, hrnet, fast_scnn
class ModelPhase(object):
"""
Standard name for model phase in PaddleSeg
The following standard keys are defined:
* `TRAIN`: training mode.
* `EVAL`: testing/evaluation mode.
* `PREDICT`: prediction/inference mode.
* `VISUAL` : visualization mode
"""
TRAIN = 'train'
EVAL = 'eval'
PREDICT = 'predict'
VISUAL = 'visual'
@staticmethod
def is_train(phase):
return phase == ModelPhase.TRAIN
@staticmethod
def is_predict(phase):
return phase == ModelPhase.PREDICT
@staticmethod
def is_eval(phase):
return phase == ModelPhase.EVAL
@staticmethod
def is_visual(phase):
return phase == ModelPhase.VISUAL
@staticmethod
def is_valid_phase(phase):
""" Check valid phase """
if ModelPhase.is_train(phase) or ModelPhase.is_predict(phase) \
or ModelPhase.is_eval(phase) or ModelPhase.is_visual(phase):
return True
return False
def seg_model(image, class_num):
model_name = cfg.MODEL.MODEL_NAME
if model_name == 'unet':
logits = unet.unet(image, class_num)
elif model_name == 'deeplabv3p':
logits = deeplab.deeplabv3p(image, class_num)
elif model_name == 'icnet':
logits = icnet.icnet(image, class_num)
elif model_name == 'pspnet':
logits = pspnet.pspnet(image, class_num)
elif model_name == 'hrnet':
logits = hrnet.hrnet(image, class_num)
elif model_name == 'fast_scnn':
logits = fast_scnn.fast_scnn(image, class_num)
else:
raise Exception(
"unknow model name, only support unet, deeplabv3p, icnet, pspnet, hrnet"
)
return logits
def softmax(logit):
logit = fluid.layers.transpose(logit, [0, 2, 3, 1])
logit = fluid.layers.softmax(logit)
logit = fluid.layers.transpose(logit, [0, 3, 1, 2])
return logit
def sigmoid_to_softmax(logit):
"""
    Convert a one-channel sigmoid output into a two-channel softmax-like output
"""
logit = fluid.layers.transpose(logit, [0, 2, 3, 1])
logit = fluid.layers.sigmoid(logit)
logit_back = 1 - logit
logit = fluid.layers.concat([logit_back, logit], axis=-1)
logit = fluid.layers.transpose(logit, [0, 3, 1, 2])
return logit
def export_preprocess(image):
"""导出模型的预处理流程"""
image = fluid.layers.transpose(image, [0, 3, 1, 2])
origin_shape = fluid.layers.shape(image)[-2:]
    # Resize according to the configured AUG_METHOD
if cfg.AUG.AUG_METHOD == 'unpadding':
h_fix = cfg.AUG.FIX_RESIZE_SIZE[1]
w_fix = cfg.AUG.FIX_RESIZE_SIZE[0]
image = fluid.layers.resize_bilinear(
image, out_shape=[h_fix, w_fix], align_corners=False, align_mode=0)
elif cfg.AUG.AUG_METHOD == 'rangescaling':
size = cfg.AUG.INF_RESIZE_VALUE
value = fluid.layers.reduce_max(origin_shape)
scale = float(size) / value.astype('float32')
image = fluid.layers.resize_bilinear(
image, scale=scale, align_corners=False, align_mode=0)
    # Record the image shape after resizing
valid_shape = fluid.layers.shape(image)[-2:]
    # Pad the image up to EVAL_CROP_SIZE
width = cfg.EVAL_CROP_SIZE[0]
height = cfg.EVAL_CROP_SIZE[1]
pad_target = fluid.layers.assign(
np.array([height, width]).astype('float32'))
up = fluid.layers.assign(np.array([0]).astype('float32'))
down = pad_target[0] - valid_shape[0]
left = up
right = pad_target[1] - valid_shape[1]
paddings = fluid.layers.concat([up, down, left, right])
paddings = fluid.layers.cast(paddings, 'int32')
image = fluid.layers.pad2d(image, paddings=paddings, pad_value=127.5)
# normalize
mean = np.array(cfg.MEAN).reshape(1, len(cfg.MEAN), 1, 1)
mean = fluid.layers.assign(mean.astype('float32'))
std = np.array(cfg.STD).reshape(1, len(cfg.STD), 1, 1)
std = fluid.layers.assign(std.astype('float32'))
image = (image / 255 - mean) / std
    # Reshape so that downstream layers can query the feature map shape via image.shape
image = fluid.layers.reshape(
image, shape=[-1, cfg.DATASET.DATA_DIM, height, width])
return image, valid_shape, origin_shape
def build_model(main_prog=None, start_prog=None, phase=ModelPhase.TRAIN, **kwargs):
if not ModelPhase.is_valid_phase(phase):
raise ValueError("ModelPhase {} is not valid!".format(phase))
if ModelPhase.is_train(phase):
width = cfg.TRAIN_CROP_SIZE[0]
height = cfg.TRAIN_CROP_SIZE[1]
else:
width = cfg.EVAL_CROP_SIZE[0]
height = cfg.EVAL_CROP_SIZE[1]
image_shape = [cfg.DATASET.DATA_DIM, height, width]
grt_shape = [1, height, width]
class_num = cfg.DATASET.NUM_CLASSES
#with fluid.program_guard(main_prog, start_prog):
# with fluid.unique_name.guard():
    # When exporting the model, image normalization is added as a preprocessing step so that
    # deployment only needs to add a batch_size dimension to the input image
if cfg.SLIM.KNOWLEDGE_DISTILL_IS_TEACHER:
image = main_prog.global_block()._clone_variable(kwargs['image'],
force_persistable=False)
label = main_prog.global_block()._clone_variable(kwargs['label'],
force_persistable=False)
mask = main_prog.global_block()._clone_variable(kwargs['mask'],
force_persistable=False)
else:
if ModelPhase.is_predict(phase):
origin_image = fluid.layers.data(
name='image',
shape=[-1, -1, -1, cfg.DATASET.DATA_DIM],
dtype='float32',
append_batch_size=False)
image, valid_shape, origin_shape = export_preprocess(
origin_image)
else:
image = fluid.layers.data(
name='image', shape=image_shape, dtype='float32')
label = fluid.layers.data(
name='label', shape=grt_shape, dtype='int32')
mask = fluid.layers.data(
name='mask', shape=grt_shape, dtype='int32')
    # use PyReader when doing training and evaluation
if ModelPhase.is_train(phase) or ModelPhase.is_eval(phase):
py_reader = None
if not cfg.SLIM.KNOWLEDGE_DISTILL_IS_TEACHER:
py_reader = fluid.io.PyReader(
feed_list=[image, label, mask],
capacity=cfg.DATALOADER.BUF_SIZE,
iterable=False,
use_double_buffer=True)
loss_type = cfg.SOLVER.LOSS
if not isinstance(loss_type, list):
loss_type = list(loss_type)
    # dice_loss and bce_loss only apply to binary segmentation
    if class_num > 2 and (("dice_loss" in loss_type) or
                          ("bce_loss" in loss_type)):
        raise Exception(
            "dice loss and bce loss are only applicable to binary classification"
        )
    # For binary segmentation, when dice_loss or bce_loss is selected the final logit has a single output channel
if ("dice_loss" in loss_type) or ("bce_loss" in loss_type):
class_num = 1
if "softmax_loss" in loss_type:
raise Exception(
"softmax loss can not combine with dice loss or bce loss"
)
logits = seg_model(image, class_num)
    # Compute the corresponding loss terms according to the selected loss functions
if ModelPhase.is_train(phase) or ModelPhase.is_eval(phase):
loss_valid = False
avg_loss_list = []
valid_loss = []
if "softmax_loss" in loss_type:
weight = cfg.SOLVER.CROSS_ENTROPY_WEIGHT
avg_loss_list.append(
multi_softmax_with_loss(logits, label, mask, class_num, weight))
loss_valid = True
valid_loss.append("softmax_loss")
if "dice_loss" in loss_type:
avg_loss_list.append(multi_dice_loss(logits, label, mask))
loss_valid = True
valid_loss.append("dice_loss")
if "bce_loss" in loss_type:
avg_loss_list.append(multi_bce_loss(logits, label, mask))
loss_valid = True
valid_loss.append("bce_loss")
if not loss_valid:
raise Exception(
"SOLVER.LOSS: {} is set wrong. it should "
"include one of (softmax_loss, bce_loss, dice_loss) at least"
" example: ['softmax_loss'], ['dice_loss'], ['bce_loss', 'dice_loss']"
.format(cfg.SOLVER.LOSS))
invalid_loss = [x for x in loss_type if x not in valid_loss]
if len(invalid_loss) > 0:
print(
"Warning: the loss {} you set is invalid. it will not be included in loss computed."
.format(invalid_loss))
avg_loss = 0
for i in range(0, len(avg_loss_list)):
avg_loss += avg_loss_list[i]
#get pred result in original size
if isinstance(logits, tuple):
logit = logits[0]
else:
logit = logits
if logit.shape[2:] != label.shape[2:]:
logit = fluid.layers.resize_bilinear(logit, label.shape[2:])
# return image input and logit output for inference graph prune
if ModelPhase.is_predict(phase):
        # For binary segmentation with dice_loss or bce_loss the logit has one channel; convert it to two channels
if class_num == 1:
logit = sigmoid_to_softmax(logit)
else:
logit = softmax(logit)
        # Keep only the valid (unpadded) region
logit = fluid.layers.slice(
logit, axes=[2, 3], starts=[0, 0], ends=valid_shape)
logit = fluid.layers.resize_bilinear(
logit,
out_shape=origin_shape,
align_corners=False,
align_mode=0)
logit = fluid.layers.argmax(logit, axis=1)
return origin_image, logit
if class_num == 1:
out = sigmoid_to_softmax(logit)
out = fluid.layers.transpose(out, [0, 2, 3, 1])
else:
out = fluid.layers.transpose(logit, [0, 2, 3, 1])
pred = fluid.layers.argmax(out, axis=3)
pred = fluid.layers.unsqueeze(pred, axes=[3])
if ModelPhase.is_visual(phase):
if class_num == 1:
logit = sigmoid_to_softmax(logit)
else:
logit = softmax(logit)
return pred, logit
if ModelPhase.is_eval(phase):
return py_reader, avg_loss, pred, label, mask
if ModelPhase.is_train(phase):
decayed_lr = None
if not cfg.SLIM.KNOWLEDGE_DISTILL:
optimizer = solver.Solver(main_prog, start_prog)
decayed_lr = optimizer.optimise(avg_loss)
# optimizer = solver.Solver(main_prog, start_prog)
# decayed_lr = optimizer.optimise(avg_loss)
return py_reader, avg_loss, decayed_lr, pred, label, mask, image
def to_int(string, dest="I"):
return struct.unpack(dest, string)[0]
def parse_shape_from_file(filename):
with open(filename, "rb") as file:
version = file.read(4)
lod_level = to_int(file.read(8), dest="Q")
for i in range(lod_level):
_size = to_int(file.read(8), dest="Q")
_ = file.read(_size)
version = file.read(4)
tensor_desc_size = to_int(file.read(4))
tensor_desc = VarType.TensorDesc()
tensor_desc.ParseFromString(file.read(tensor_desc_size))
return tuple(tensor_desc.dims)
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import sys
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
SEG_PATH = os.path.join(LOCAL_PATH, "../../", "pdseg")
sys.path.append(SEG_PATH)
import argparse
import pprint
import random
import shutil
import functools
import paddle
import numpy as np
import paddle.fluid as fluid
from utils.config import cfg
from utils.timer import Timer, calculate_eta
from metrics import ConfusionMatrix
from reader import SegDataset
from model_builder import build_model
from model_builder import ModelPhase
from model_builder import parse_shape_from_file
from eval import evaluate
from vis import visualize
from utils import dist_utils
import solver
from paddleslim.dist.single_distiller import merge, l2_loss
def parse_args():
parser = argparse.ArgumentParser(description='PaddleSeg training')
parser.add_argument(
'--cfg',
dest='cfg_file',
help='Config file for training (and optionally testing)',
default=None,
type=str)
parser.add_argument(
'--teacher_cfg',
dest='teacher_cfg_file',
        help='Config file for the teacher model used in distillation',
default=None,
type=str)
parser.add_argument(
'--use_gpu',
dest='use_gpu',
help='Use gpu or cpu',
action='store_true',
default=False)
parser.add_argument(
'--use_mpio',
dest='use_mpio',
help='Use multiprocess I/O or not',
action='store_true',
default=False)
parser.add_argument(
'--log_steps',
dest='log_steps',
help='Display logging information at every log_steps',
default=10,
type=int)
parser.add_argument(
'--debug',
dest='debug',
help='debug mode, display detail information of training',
action='store_true')
parser.add_argument(
'--use_tb',
dest='use_tb',
help='whether to record the data during training to Tensorboard',
action='store_true')
parser.add_argument(
'--tb_log_dir',
dest='tb_log_dir',
help='Tensorboard logging directory',
default=None,
type=str)
parser.add_argument(
'--do_eval',
dest='do_eval',
help='Evaluation models result on every new checkpoint',
action='store_true')
parser.add_argument(
'opts',
help='See utils/config.py for all options',
default=None,
nargs=argparse.REMAINDER)
parser.add_argument(
'--enable_ce',
dest='enable_ce',
        help='If set True, enable continuous evaluation job. '
        'This flag is only used for internal test.',
action='store_true')
return parser.parse_args()
def save_vars(executor, dirname, program=None, vars=None):
"""
    Temporary resolution for Win save variables compatibility.
Will fix in PaddlePaddle v1.5.2
"""
save_program = fluid.Program()
save_block = save_program.global_block()
for each_var in vars:
# NOTE: don't save the variable which type is RAW
if each_var.type == fluid.core.VarDesc.VarType.RAW:
continue
new_var = save_block.create_var(
name=each_var.name,
shape=each_var.shape,
dtype=each_var.dtype,
type=each_var.type,
lod_level=each_var.lod_level,
persistable=True)
file_path = os.path.join(dirname, new_var.name)
file_path = os.path.normpath(file_path)
save_block.append_op(
type='save',
inputs={'X': [new_var]},
outputs={},
attrs={'file_path': file_path})
executor.run(save_program)
def save_checkpoint(exe, program, ckpt_name):
"""
Save checkpoint for evaluation or resume training
"""
ckpt_dir = os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, str(ckpt_name))
print("Save model checkpoint to {}".format(ckpt_dir))
if not os.path.isdir(ckpt_dir):
os.makedirs(ckpt_dir)
save_vars(
exe,
ckpt_dir,
program,
vars=list(filter(fluid.io.is_persistable, program.list_vars())))
return ckpt_dir
def load_checkpoint(exe, program):
"""
    Load checkpoint from the pretrained model directory to resume training
"""
print('Resume model training from:', cfg.TRAIN.RESUME_MODEL_DIR)
if not os.path.exists(cfg.TRAIN.RESUME_MODEL_DIR):
raise ValueError("TRAIN.PRETRAIN_MODEL {} not exist!".format(
cfg.TRAIN.RESUME_MODEL_DIR))
fluid.io.load_persistables(
exe, cfg.TRAIN.RESUME_MODEL_DIR, main_program=program)
model_path = cfg.TRAIN.RESUME_MODEL_DIR
    # Check whether the path ends with a path separator
if model_path[-1] == os.sep:
model_path = model_path[0:-1]
epoch_name = os.path.basename(model_path)
# If resume model is final model
if epoch_name == 'final':
begin_epoch = cfg.SOLVER.NUM_EPOCHS
    # If the resume model path ends with a digit, restore the epoch status
elif epoch_name.isdigit():
epoch = int(epoch_name)
begin_epoch = epoch + 1
else:
raise ValueError("Resume model path is not valid!")
print("Model checkpoint loaded successfully!")
return begin_epoch
def update_best_model(ckpt_dir):
best_model_dir = os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, 'best_model')
if os.path.exists(best_model_dir):
shutil.rmtree(best_model_dir)
shutil.copytree(ckpt_dir, best_model_dir)
def print_info(*msg):
if cfg.TRAINER_ID == 0:
print(*msg)
def train(cfg):
# startup_prog = fluid.Program()
# train_prog = fluid.Program()
drop_last = True
dataset = SegDataset(
file_list=cfg.DATASET.TRAIN_FILE_LIST,
mode=ModelPhase.TRAIN,
shuffle=True,
data_dir=cfg.DATASET.DATA_DIR)
def data_generator():
if args.use_mpio:
data_gen = dataset.multiprocess_generator(
num_processes=cfg.DATALOADER.NUM_WORKERS,
max_queue_size=cfg.DATALOADER.BUF_SIZE)
else:
data_gen = dataset.generator()
batch_data = []
for b in data_gen:
batch_data.append(b)
if len(batch_data) == (cfg.BATCH_SIZE // cfg.NUM_TRAINERS):
for item in batch_data:
yield item[0], item[1], item[2]
batch_data = []
        # If the sync batch norm strategy is used, drop the last batch if the number of
        # samples in batch_data is less than cfg.BATCH_SIZE to avoid NCCL hang issues
if not cfg.TRAIN.SYNC_BATCH_NORM:
for item in batch_data:
yield item[0], item[1], item[2]
# Get device environment
# places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places()
# place = places[0]
gpu_id = int(os.environ.get('FLAGS_selected_gpus', 0))
place = fluid.CUDAPlace(gpu_id) if args.use_gpu else fluid.CPUPlace()
places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places()
# Get number of GPU
dev_count = cfg.NUM_TRAINERS if cfg.NUM_TRAINERS > 1 else len(places)
print_info("#Device count: {}".format(dev_count))
    # Make sure BATCH_SIZE is divisible by the number of GPU cards
    assert cfg.BATCH_SIZE % dev_count == 0, (
        'BATCH_SIZE:{} not divisible by number of GPUs:{}'.format(
            cfg.BATCH_SIZE, dev_count))
    # In multi-GPU training mode, batch data is allocated to each GPU evenly
batch_size_per_dev = cfg.BATCH_SIZE // dev_count
print_info("batch_size_per_dev: {}".format(batch_size_per_dev))
py_reader, loss, lr, pred, grts, masks, image = build_model(phase=ModelPhase.TRAIN)
py_reader.decorate_sample_generator(
data_generator, batch_size=batch_size_per_dev, drop_last=drop_last)
exe = fluid.Executor(place)
cfg.update_from_file(args.teacher_cfg_file)
# teacher_arch = teacher_cfg.architecture
teacher_program = fluid.Program()
teacher_startup_program = fluid.Program()
with fluid.program_guard(teacher_program, teacher_startup_program):
with fluid.unique_name.guard():
_, teacher_loss, _, _, _, _, _ = build_model(
teacher_program, teacher_startup_program, phase=ModelPhase.TRAIN, image=image,
label=grts, mask=masks)
exe.run(teacher_startup_program)
teacher_program = teacher_program.clone(for_test=True)
ckpt_dir = cfg.SLIM.KNOWLEDGE_DISTILL_TEACHER_MODEL_DIR
assert ckpt_dir is not None
print('load teacher model:', ckpt_dir)
fluid.io.load_params(exe, ckpt_dir, main_program=teacher_program)
# cfg = load_config(FLAGS.config)
cfg.update_from_file(args.cfg_file)
data_name_map = {
'image': 'image',
'label': 'label',
'mask': 'mask',
}
merge(teacher_program, fluid.default_main_program(), data_name_map, place)
distill_pairs = [['teacher_bilinear_interp_2.tmp_0', 'bilinear_interp_0.tmp_0']]
def distill(pairs, weight):
"""
        Add an L2 distillation loss between a pair of teacher/student feature maps
        and scale it by the given weight
"""
loss = l2_loss(pairs[0][0], pairs[0][1])
weighted_loss = loss * weight
return weighted_loss
distill_loss = distill(distill_pairs, 0.1)
cfg.update_from_file(args.cfg_file)
optimizer = solver.Solver(None, None)
all_loss = loss + distill_loss
lr = optimizer.optimise(all_loss)
exe.run(fluid.default_startup_program())
exec_strategy = fluid.ExecutionStrategy()
# Clear temporary variables every 100 iteration
if args.use_gpu:
exec_strategy.num_threads = fluid.core.get_cuda_device_count()
exec_strategy.num_iteration_per_drop_scope = 100
build_strategy = fluid.BuildStrategy()
build_strategy.fuse_all_reduce_ops = False
build_strategy.fuse_all_optimizer_ops = False
build_strategy.fuse_elewise_add_act_ops = True
if cfg.NUM_TRAINERS > 1 and args.use_gpu:
dist_utils.prepare_for_multi_process(exe, build_strategy, fluid.default_main_program())
exec_strategy.num_threads = 1
if cfg.TRAIN.SYNC_BATCH_NORM and args.use_gpu:
if dev_count > 1:
# Apply sync batch norm strategy
print_info("Sync BatchNorm strategy is effective.")
build_strategy.sync_batch_norm = True
else:
print_info(
"Sync BatchNorm strategy will not be effective if GPU device"
" count <= 1")
compiled_train_prog = fluid.CompiledProgram(fluid.default_main_program()).with_data_parallel(
loss_name=all_loss.name,
exec_strategy=exec_strategy,
build_strategy=build_strategy)
# Resume training
begin_epoch = cfg.SOLVER.BEGIN_EPOCH
if cfg.TRAIN.RESUME_MODEL_DIR:
begin_epoch = load_checkpoint(exe, fluid.default_main_program())
# Load pretrained model
elif os.path.exists(cfg.TRAIN.PRETRAINED_MODEL_DIR):
print_info('Pretrained model dir: ', cfg.TRAIN.PRETRAINED_MODEL_DIR)
load_vars = []
load_fail_vars = []
def var_shape_matched(var, shape):
"""
            Check whether the persistable variable shape matches the current network
"""
var_exist = os.path.exists(
os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name))
if var_exist:
var_shape = parse_shape_from_file(
os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name))
return var_shape == shape
return False
for x in fluid.default_main_program().list_vars():
if isinstance(x, fluid.framework.Parameter):
shape = tuple(fluid.global_scope().find_var(
x.name).get_tensor().shape())
if var_shape_matched(x, shape):
load_vars.append(x)
else:
load_fail_vars.append(x)
fluid.io.load_vars(
exe, dirname=cfg.TRAIN.PRETRAINED_MODEL_DIR, vars=load_vars)
for var in load_vars:
print_info("Parameter[{}] loaded sucessfully!".format(var.name))
for var in load_fail_vars:
print_info(
"Parameter[{}] don't exist or shape does not match current network, skip"
" to load it.".format(var.name))
print_info("{}/{} pretrained parameters loaded successfully!".format(
len(load_vars),
len(load_vars) + len(load_fail_vars)))
else:
print_info(
            'Pretrained model dir {} does not exist, training from scratch...'.
format(cfg.TRAIN.PRETRAINED_MODEL_DIR))
#fetch_list = [avg_loss.name, lr.name]
fetch_list = [loss.name, 'teacher_' + teacher_loss.name, distill_loss.name, lr.name]
if args.debug:
# Fetch more variable info and use streaming confusion matrix to
# calculate IoU results if in debug mode
np.set_printoptions(
precision=4, suppress=True, linewidth=160, floatmode="fixed")
fetch_list.extend([pred.name, grts.name, masks.name])
cm = ConfusionMatrix(cfg.DATASET.NUM_CLASSES, streaming=True)
if args.use_tb:
if not args.tb_log_dir:
print_info("Please specify the log directory by --tb_log_dir.")
exit(1)
from tb_paddle import SummaryWriter
log_writer = SummaryWriter(args.tb_log_dir)
# trainer_id = int(os.getenv("PADDLE_TRAINER_ID", 0))
# num_trainers = int(os.environ.get('PADDLE_TRAINERS_NUM', 1))
global_step = 0
all_step = cfg.DATASET.TRAIN_TOTAL_IMAGES // cfg.BATCH_SIZE
if cfg.DATASET.TRAIN_TOTAL_IMAGES % cfg.BATCH_SIZE and drop_last != True:
all_step += 1
all_step *= (cfg.SOLVER.NUM_EPOCHS - begin_epoch + 1)
avg_loss = 0.0
avg_t_loss = 0.0
avg_d_loss = 0.0
best_mIoU = 0.0
timer = Timer()
timer.start()
if begin_epoch > cfg.SOLVER.NUM_EPOCHS:
raise ValueError(
("begin epoch[{}] is larger than cfg.SOLVER.NUM_EPOCHS[{}]").format(
begin_epoch, cfg.SOLVER.NUM_EPOCHS))
if args.use_mpio:
print_info("Use multiprocess reader")
else:
print_info("Use multi-thread reader")
for epoch in range(begin_epoch, cfg.SOLVER.NUM_EPOCHS + 1):
py_reader.start()
while True:
try:
if args.debug:
                    # Print category IoU and accuracy to check whether the
                    # training process meets expectations
loss, lr, pred, grts, masks = exe.run(
program=compiled_train_prog,
fetch_list=fetch_list,
return_numpy=True)
cm.calculate(pred, grts, masks)
avg_loss += np.mean(np.array(loss))
global_step += 1
if global_step % args.log_steps == 0:
speed = args.log_steps / timer.elapsed_time()
avg_loss /= args.log_steps
category_acc, mean_acc = cm.accuracy()
category_iou, mean_iou = cm.mean_iou()
print_info((
"epoch={} step={} lr={:.5f} loss={:.4f} acc={:.5f} mIoU={:.5f} step/sec={:.3f} | ETA {}"
).format(epoch, global_step, lr[0], avg_loss, mean_acc,
mean_iou, speed,
calculate_eta(all_step - global_step, speed)))
print_info("Category IoU: ", category_iou)
print_info("Category Acc: ", category_acc)
if args.use_tb:
log_writer.add_scalar('Train/mean_iou', mean_iou,
global_step)
log_writer.add_scalar('Train/mean_acc', mean_acc,
global_step)
log_writer.add_scalar('Train/loss', avg_loss,
global_step)
log_writer.add_scalar('Train/lr', lr[0],
global_step)
log_writer.add_scalar('Train/step/sec', speed,
global_step)
sys.stdout.flush()
avg_loss = 0.0
cm.zero_matrix()
timer.restart()
else:
                    # If not in debug mode, avoid unnecessary logging and calculation
loss, t_loss, d_loss, lr = exe.run(
program=compiled_train_prog,
fetch_list=fetch_list,
return_numpy=True)
avg_loss += np.mean(np.array(loss))
avg_t_loss += np.mean(np.array(t_loss))
avg_d_loss += np.mean(np.array(d_loss))
global_step += 1
if global_step % args.log_steps == 0 and cfg.TRAINER_ID == 0:
avg_loss /= args.log_steps
avg_t_loss /= args.log_steps
avg_d_loss /= args.log_steps
speed = args.log_steps / timer.elapsed_time()
print((
"epoch={} step={} lr={:.5f} loss={:.4f} teacher loss={:.4f} distill loss={:.4f} step/sec={:.3f} | ETA {}"
).format(epoch, global_step, lr[0], avg_loss, avg_t_loss, avg_d_loss, speed,
calculate_eta(all_step - global_step, speed)))
if args.use_tb:
log_writer.add_scalar('Train/loss', avg_loss,
global_step)
log_writer.add_scalar('Train/lr', lr[0],
global_step)
log_writer.add_scalar('Train/speed', speed,
global_step)
sys.stdout.flush()
avg_loss = 0.0
avg_t_loss = 0.0
avg_d_loss = 0.0
timer.restart()
except fluid.core.EOFException:
py_reader.reset()
break
except Exception as e:
print(e)
if (epoch % cfg.TRAIN.SNAPSHOT_EPOCH == 0
or epoch == cfg.SOLVER.NUM_EPOCHS) and cfg.TRAINER_ID == 0:
ckpt_dir = save_checkpoint(exe, fluid.default_main_program(), epoch)
if args.do_eval:
print("Evaluation start")
_, mean_iou, _, mean_acc = evaluate(
cfg=cfg,
ckpt_dir=ckpt_dir,
use_gpu=args.use_gpu,
use_mpio=args.use_mpio)
if args.use_tb:
log_writer.add_scalar('Evaluate/mean_iou', mean_iou,
global_step)
log_writer.add_scalar('Evaluate/mean_acc', mean_acc,
global_step)
if mean_iou > best_mIoU:
best_mIoU = mean_iou
update_best_model(ckpt_dir)
print_info("Save best model {} to {}, mIoU = {:.4f}".format(
ckpt_dir,
os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, 'best_model'),
mean_iou))
# Use Tensorboard to visualize results
if args.use_tb and cfg.DATASET.VIS_FILE_LIST is not None:
visualize(
cfg=cfg,
use_gpu=args.use_gpu,
vis_file_list=cfg.DATASET.VIS_FILE_LIST,
vis_dir="visual",
ckpt_dir=ckpt_dir,
log_writer=log_writer)
if cfg.TRAINER_ID == 0:
ckpt_dir = save_checkpoint(exe, fluid.default_main_program(), epoch)
# save final model
if cfg.TRAINER_ID == 0:
save_checkpoint(exe, fluid.default_main_program(), 'final')
def main(args):
if args.cfg_file is not None:
cfg.update_from_file(args.cfg_file)
if args.opts:
cfg.update_from_list(args.opts)
if args.enable_ce:
random.seed(0)
np.random.seed(0)
cfg.TRAINER_ID = int(os.getenv("PADDLE_TRAINER_ID", 0))
cfg.NUM_TRAINERS = int(os.environ.get('PADDLE_TRAINERS_NUM', 1))
cfg.check_and_infer()
print_info(pprint.pformat(cfg))
train(cfg)
if __name__ == '__main__':
args = parse_args()
    if not fluid.core.is_compiled_with_cuda() and args.use_gpu:
print(
"You can not set use_gpu = True in the model because you are using paddlepaddle-cpu."
)
print(
"Please: 1. Install paddlepaddle-gpu to run your models on GPU or 2. Set use_gpu=False to run models on CPU."
)
sys.exit(1)
main(args)
>Please install Paddle 1.6 or higher before running this example.
# PaddleSeg Neural Architecture Search (NAS) Example
Before reading this tutorial, please make sure you have read the [PaddleSeg usage guide](../../docs/usage.md) and related chapters so that you have a basic understanding of PaddleSeg.
This document describes how to search network architectures for models in the segmentation library with [PaddleSlim](https://paddlepaddle.github.io/PaddleSlim).
Unless otherwise noted, all commands in this tutorial are executed under the `PaddleSeg/` directory.
## Overview
We take the DeepLab + MobileNetV2 model as the NAS example. The experiment is carried out with the help of [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim);
for the technical details please refer to the [neural architecture search strategy](https://github.com/PaddlePaddle/PaddleSlim/blob/4670a79343c191b61a78e416826d122eea52a7ab/docs/zh_cn/tutorials/image_classification_nas_quick_start.ipynb).
## Define the search space
The search experiment uses the SANAS (simulated annealing NAS) strategy and searches over the channel numbers and kernel sizes of the network,
so the following search space is defined:
- head channel module `head_num`: the range of channel numbers in the MobileNetV2 head module;
- `filter_num1-6` of inverse_res_block1-6: the ranges of channel numbers inside the inverse_res_block modules;
- `repeat` of inverse_res_block: the number of units in each MobileNetV2 inverse_res_block module;
- `multiply` of inverse_res_block: the range of the expansion_factor in the MobileNetV2 inverse_res_block modules;
- kernel size `k_size`: whether the convolution kernels in MobileNetV2 are 3x3 or 5x5.
Given the ranges above, the search space is encoded as 25 tokens, which vary within the range ([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [7, 5, 8, 6, 2, 5, 8, 6, 2, 5, 8, 6, 2, 5, 10, 6, 2, 5, 10, 6, 2, 5, 12, 6, 2]).
The initial tokens are: [4, 4, 5, 1, 0, 4, 4, 1, 0, 4, 4, 3, 0, 4, 5, 2, 0, 4, 7, 2, 0, 4, 9, 0, 0].
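For reference, the server-side SANAS search loop looks roughly like the sketch below. This is not part of the original scripts; the constructor arguments and the `next_archs()`/`reward()` calls are assumptions based on the PaddleSlim SANAS API, and `evaluate_current_arch` is a hypothetical placeholder for training and evaluating one candidate.
```python
from paddleslim.nas import SANAS

# MobileNetV2SpaceSeg is the search space registered by ./slim/nas/mobilenetv2_search_space.py
sanas = SANAS(
    configs=[('MobileNetV2SpaceSeg')],   # search space name
    server_addr=("", 23333),             # empty ip: this process acts as the server
    search_steps=100,                    # number of architectures to try
    is_server=True)

for step in range(100):
    arch = sanas.next_archs()[0]         # a callable that builds the backbone
    # Build the segmentation model with `arch` as its backbone (see deeplabv3p_nas),
    # train it, then evaluate mIoU on the validation set.
    miou = evaluate_current_arch(arch)   # hypothetical helper, not in this repo
    sanas.reward(float(miou))            # feed the score back to the controller
```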
## Start the search
First install PaddleSlim; see the [installation guide](https://paddlepaddle.github.io/PaddleSlim/#_2).
Configure the PaddleSeg config file; only the NAS-related part is shown below:
```yaml
SLIM:
    NAS_PORT: 23333 # port for the NAS server
    NAS_ADDRESS: "" # server ip; leave it empty on the server, set it to the server's ip on clients
    NAS_SEARCH_STEPS: 100 # how many architectures to search
    NAS_START_EVAL_EPOCH: -1 # from which epoch the candidate model starts to be evaluated
    NAS_IS_SERVER: True # whether this process is the server
    NAS_SPACE_NAME: "MobileNetV2SpaceSeg" # search space name
```
## Training and evaluation
Run the following command to train and evaluate at the same time:
```shell
CUDA_VISIBLE_DEVICES=0 python -u ./slim/nas/train_nas.py --log_steps 10 --cfg configs/deeplabv3p_mobilenetv2_cityscapes.yaml --use_gpu --use_mpio \
SLIM.NAS_PORT 23333 \
SLIM.NAS_ADDRESS "" \
SLIM.NAS_SEARCH_STEPS 2 \
SLIM.NAS_START_EVAL_EPOCH -1 \
SLIM.NAS_IS_SERVER True \
SLIM.NAS_SPACE_NAME "MobileNetV2SpaceSeg" \
```
## FAQ
- Error: `socket.error: [Errno 98] Address already in use`
  Fix: the current port is already in use; change the `SLIM.NAS_PORT` port (for example by passing a different `SLIM.NAS_PORT` value on the command line, as in the training command above).
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import contextlib
import paddle
import paddle.fluid as fluid
from utils.config import cfg
from models.libs.model_libs import scope, name_scope
from models.libs.model_libs import bn, bn_relu, relu
from models.libs.model_libs import conv
from models.libs.model_libs import separate_conv
from models.backbone.mobilenet_v2 import MobileNetV2 as mobilenet_backbone
from models.backbone.xception import Xception as xception_backbone
def encoder(input):
    # Encoder: ASPP architecture; image pooling + 1x1 conv + three parallel dilated convolutions at different rates, concatenated and followed by a 1x1 conv
    # ASPP_WITH_SEP_CONV: defaults to True, use depthwise separable convolutions; otherwise use ordinary convolutions
    # OUTPUT_STRIDE: downsampling factor, 8 or 16, determines the aspp_ratios
    # aspp_ratios: dilation rates of the ASPP dilated convolutions
if cfg.MODEL.DEEPLAB.OUTPUT_STRIDE == 16:
aspp_ratios = [6, 12, 18]
elif cfg.MODEL.DEEPLAB.OUTPUT_STRIDE == 8:
aspp_ratios = [12, 24, 36]
else:
raise Exception("deeplab only support stride 8 or 16")
param_attr = fluid.ParamAttr(
name=name_scope + 'weights',
regularizer=None,
initializer=fluid.initializer.TruncatedNormal(loc=0.0, scale=0.06))
with scope('encoder'):
channel = 256
with scope("image_pool"):
image_avg = fluid.layers.reduce_mean(
input, [2, 3], keep_dim=True)
image_avg = bn_relu(
conv(
image_avg,
channel,
1,
1,
groups=1,
padding=0,
param_attr=param_attr))
image_avg = fluid.layers.resize_bilinear(image_avg, input.shape[2:])
with scope("aspp0"):
aspp0 = bn_relu(
conv(
input,
channel,
1,
1,
groups=1,
padding=0,
param_attr=param_attr))
with scope("aspp1"):
if cfg.MODEL.DEEPLAB.ASPP_WITH_SEP_CONV:
aspp1 = separate_conv(
input, channel, 1, 3, dilation=aspp_ratios[0], act=relu)
else:
aspp1 = bn_relu(
conv(
input,
channel,
stride=1,
filter_size=3,
dilation=aspp_ratios[0],
padding=aspp_ratios[0],
param_attr=param_attr))
with scope("aspp2"):
if cfg.MODEL.DEEPLAB.ASPP_WITH_SEP_CONV:
aspp2 = separate_conv(
input, channel, 1, 3, dilation=aspp_ratios[1], act=relu)
else:
aspp2 = bn_relu(
conv(
input,
channel,
stride=1,
filter_size=3,
dilation=aspp_ratios[1],
padding=aspp_ratios[1],
param_attr=param_attr))
with scope("aspp3"):
if cfg.MODEL.DEEPLAB.ASPP_WITH_SEP_CONV:
aspp3 = separate_conv(
input, channel, 1, 3, dilation=aspp_ratios[2], act=relu)
else:
aspp3 = bn_relu(
conv(
input,
channel,
stride=1,
filter_size=3,
dilation=aspp_ratios[2],
padding=aspp_ratios[2],
param_attr=param_attr))
with scope("concat"):
data = fluid.layers.concat([image_avg, aspp0, aspp1, aspp2, aspp3],
axis=1)
data = bn_relu(
conv(
data,
channel,
1,
1,
groups=1,
padding=0,
param_attr=param_attr))
data = fluid.layers.dropout(data, 0.9)
return data
def decoder(encode_data, decode_shortcut):
    # Decoder
    # encode_data: output of the encoder
    # decode_shortcut: branch taken from the backbone, resized and concatenated with encode_data
    # DECODER_USE_SEP_CONV: defaults to True, two separable convolutions follow the concat; otherwise ordinary convolutions are used
param_attr = fluid.ParamAttr(
name=name_scope + 'weights',
regularizer=None,
initializer=fluid.initializer.TruncatedNormal(loc=0.0, scale=0.06))
with scope('decoder'):
with scope('concat'):
decode_shortcut = bn_relu(
conv(
decode_shortcut,
48,
1,
1,
groups=1,
padding=0,
param_attr=param_attr))
encode_data = fluid.layers.resize_bilinear(
encode_data, decode_shortcut.shape[2:])
encode_data = fluid.layers.concat([encode_data, decode_shortcut],
axis=1)
if cfg.MODEL.DEEPLAB.DECODER_USE_SEP_CONV:
with scope("separable_conv1"):
encode_data = separate_conv(
encode_data, 256, 1, 3, dilation=1, act=relu)
with scope("separable_conv2"):
encode_data = separate_conv(
encode_data, 256, 1, 3, dilation=1, act=relu)
else:
with scope("decoder_conv1"):
encode_data = bn_relu(
conv(
encode_data,
256,
stride=1,
filter_size=3,
dilation=1,
padding=1,
param_attr=param_attr))
with scope("decoder_conv2"):
encode_data = bn_relu(
conv(
encode_data,
256,
stride=1,
filter_size=3,
dilation=1,
padding=1,
param_attr=param_attr))
return encode_data
def nas_backbone(input, arch):
# scale = cfg.MODEL.DEEPLAB.DEPTH_MULTIPLIER
# output_stride = cfg.MODEL.DEEPLAB.OUTPUT_STRIDE
# model = mobilenet_backbone(scale=scale, output_stride=output_stride)
end_points = 8
decode_point = 3
data, decode_shortcuts = arch(
input, end_points=end_points, return_block=decode_point, output_stride=16)
decode_shortcut = decode_shortcuts[decode_point]
return data, decode_shortcut
def deeplabv3p_nas(img, num_classes, arch=None):
data, decode_shortcut = nas_backbone(img, arch)
    # Encoder / decoder setup
cfg.MODEL.DEFAULT_EPSILON = 1e-5
if cfg.MODEL.DEEPLAB.ENCODER_WITH_ASPP:
data = encoder(data)
if cfg.MODEL.DEEPLAB.ENABLE_DECODER:
data = decoder(data, decode_shortcut)
    # Set the last convolution's output channels to the number of classes and resize to the original image size
param_attr = fluid.ParamAttr(
name=name_scope + 'weights',
regularizer=fluid.regularizer.L2DecayRegularizer(
regularization_coeff=0.0),
initializer=fluid.initializer.TruncatedNormal(loc=0.0, scale=0.01))
with scope('logit'):
logit = conv(
data,
num_classes,
1,
stride=1,
padding=0,
bias_attr=True,
param_attr=param_attr)
logit = fluid.layers.resize_bilinear(logit, img.shape[2:])
return logit
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
# GPU memory garbage collection optimization flags
os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0"
import sys
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
SEG_PATH = os.path.join(LOCAL_PATH, "../../", "pdseg")
sys.path.append(SEG_PATH)
import time
import argparse
import functools
import pprint
import cv2
import numpy as np
import paddle
import paddle.fluid as fluid
from utils.config import cfg
from utils.timer import Timer, calculate_eta
from model_builder import build_model
from model_builder import ModelPhase
from reader import SegDataset
from metrics import ConfusionMatrix
from mobilenetv2_search_space import MobileNetV2SpaceSeg
def parse_args():
    parser = argparse.ArgumentParser(description='PaddleSeg model evaluation')
parser.add_argument(
'--cfg',
dest='cfg_file',
help='Config file for training (and optionally testing)',
default=None,
type=str)
parser.add_argument(
'--use_gpu',
dest='use_gpu',
help='Use gpu or cpu',
action='store_true',
default=False)
parser.add_argument(
'--use_mpio',
dest='use_mpio',
help='Use multiprocess IO or not',
action='store_true',
default=False)
parser.add_argument(
'opts',
help='See utils/config.py for all options',
default=None,
nargs=argparse.REMAINDER)
if len(sys.argv) == 1:
parser.print_help()
sys.exit(1)
return parser.parse_args()
def evaluate(cfg, ckpt_dir=None, use_gpu=False, use_mpio=False, **kwargs):
np.set_printoptions(precision=5, suppress=True)
startup_prog = fluid.Program()
test_prog = fluid.Program()
dataset = SegDataset(
file_list=cfg.DATASET.VAL_FILE_LIST,
mode=ModelPhase.EVAL,
data_dir=cfg.DATASET.DATA_DIR)
def data_generator():
        # TODO: check whether the batch reader is compatible with Windows
if use_mpio:
data_gen = dataset.multiprocess_generator(
num_processes=cfg.DATALOADER.NUM_WORKERS,
max_queue_size=cfg.DATALOADER.BUF_SIZE)
else:
data_gen = dataset.generator()
for b in data_gen:
yield b[0], b[1], b[2]
py_reader, avg_loss, pred, grts, masks = build_model(
test_prog, startup_prog, phase=ModelPhase.EVAL, arch=kwargs['arch'])
py_reader.decorate_sample_generator(
data_generator, drop_last=False, batch_size=cfg.BATCH_SIZE)
# Get device environment
places = fluid.cuda_places() if use_gpu else fluid.cpu_places()
place = places[0]
dev_count = len(places)
print("#Device count: {}".format(dev_count))
exe = fluid.Executor(place)
exe.run(startup_prog)
test_prog = test_prog.clone(for_test=True)
ckpt_dir = cfg.TEST.TEST_MODEL if not ckpt_dir else ckpt_dir
if not os.path.exists(ckpt_dir):
raise ValueError('The TEST.TEST_MODEL {} is not found'.format(ckpt_dir))
if ckpt_dir is not None:
print('load test model:', ckpt_dir)
fluid.io.load_params(exe, ckpt_dir, main_program=test_prog)
# Use streaming confusion matrix to calculate mean_iou
np.set_printoptions(
precision=4, suppress=True, linewidth=160, floatmode="fixed")
conf_mat = ConfusionMatrix(cfg.DATASET.NUM_CLASSES, streaming=True)
fetch_list = [avg_loss.name, pred.name, grts.name, masks.name]
num_images = 0
step = 0
all_step = cfg.DATASET.TEST_TOTAL_IMAGES // cfg.BATCH_SIZE + 1
timer = Timer()
timer.start()
py_reader.start()
while True:
try:
step += 1
loss, pred, grts, masks = exe.run(
test_prog, fetch_list=fetch_list, return_numpy=True)
loss = np.mean(np.array(loss))
num_images += pred.shape[0]
conf_mat.calculate(pred, grts, masks)
_, iou = conf_mat.mean_iou()
_, acc = conf_mat.accuracy()
speed = 1.0 / timer.elapsed_time()
print(
"[EVAL]step={} loss={:.5f} acc={:.4f} IoU={:.4f} step/sec={:.2f} | ETA {}"
.format(step, loss, acc, iou, speed,
calculate_eta(all_step - step, speed)))
timer.restart()
sys.stdout.flush()
except fluid.core.EOFException:
break
category_iou, avg_iou = conf_mat.mean_iou()
category_acc, avg_acc = conf_mat.accuracy()
print("[EVAL]#image={} acc={:.4f} IoU={:.4f}".format(
num_images, avg_acc, avg_iou))
print("[EVAL]Category IoU:", category_iou)
print("[EVAL]Category Acc:", category_acc)
print("[EVAL]Kappa:{:.4f}".format(conf_mat.kappa()))
return category_iou, avg_iou, category_acc, avg_acc
def main():
args = parse_args()
if args.cfg_file is not None:
cfg.update_from_file(args.cfg_file)
if args.opts:
cfg.update_from_list(args.opts)
cfg.check_and_infer()
print(pprint.pformat(cfg))
evaluate(cfg, **args.__dict__)
if __name__ == '__main__':
main()
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
from paddleslim.nas.search_space.search_space_base import SearchSpaceBase
from paddleslim.nas.search_space.base_layer import conv_bn_layer
from paddleslim.nas.search_space.search_space_registry import SEARCHSPACE
from paddleslim.nas.search_space.utils import check_points
__all__ = ["MobileNetV2SpaceSeg"]
@SEARCHSPACE.register
class MobileNetV2SpaceSeg(SearchSpaceBase):
def __init__(self, input_size, output_size, block_num, block_mask=None):
super(MobileNetV2SpaceSeg, self).__init__(input_size, output_size,
block_num, block_mask)
# self.head_num means the first convolution channel
self.head_num = np.array([3, 4, 8, 12, 16, 24, 32]) #7
        # self.filter_num1 ~ self.filter_num6 mean the channels of the following convolutions
self.filter_num1 = np.array([3, 4, 8, 12, 16, 24, 32, 48]) #8
self.filter_num2 = np.array([8, 12, 16, 24, 32, 48, 64, 80]) #8
self.filter_num3 = np.array([16, 24, 32, 48, 64, 80, 96, 128]) #8
self.filter_num4 = np.array(
[24, 32, 48, 64, 80, 96, 128, 144, 160, 192]) #10
self.filter_num5 = np.array(
[32, 48, 64, 80, 96, 128, 144, 160, 192, 224]) #10
self.filter_num6 = np.array(
[64, 80, 96, 128, 144, 160, 192, 224, 256, 320, 384, 512]) #12
# self.k_size means kernel size
self.k_size = np.array([3, 5]) #2
# self.multiply means expansion_factor of each _inverted_residual_unit
self.multiply = np.array([1, 2, 3, 4, 6]) #5
# self.repeat means repeat_num _inverted_residual_unit in each _invresi_blocks
self.repeat = np.array([1, 2, 3, 4, 5, 6]) #6
def init_tokens(self):
"""
The initial token.
        The first one is the index of the first layer's channel in self.head_num;
        each following line represents the indices of [expansion_factor, filter_num, repeat_num, kernel_size]
"""
# original MobileNetV2
# yapf: disable
init_token_base = [4, # 1, 16, 1
4, 5, 1, 0, # 6, 24, 2
4, 4, 2, 0, # 6, 32, 3
4, 4, 3, 0, # 6, 64, 4
4, 5, 2, 0, # 6, 96, 3
4, 7, 2, 0, # 6, 160, 3
4, 9, 0, 0] # 6, 320, 1
# yapf: enable
return init_token_base
def range_table(self):
"""
        Get the range table of the current search space, which constrains the range of tokens.
"""
# head_num + 6 * [multiple(expansion_factor), filter_num, repeat, kernel_size]
# yapf: disable
range_table_base = [len(self.head_num),
len(self.multiply), len(self.filter_num1), len(self.repeat), len(self.k_size),
len(self.multiply), len(self.filter_num2), len(self.repeat), len(self.k_size),
len(self.multiply), len(self.filter_num3), len(self.repeat), len(self.k_size),
len(self.multiply), len(self.filter_num4), len(self.repeat), len(self.k_size),
len(self.multiply), len(self.filter_num5), len(self.repeat), len(self.k_size),
len(self.multiply), len(self.filter_num6), len(self.repeat), len(self.k_size)]
# yapf: enable
return range_table_base
def token2arch(self, tokens=None):
"""
return net_arch function
"""
if tokens is None:
tokens = self.init_tokens()
self.bottleneck_params_list = []
self.bottleneck_params_list.append(
(1, self.head_num[tokens[0]], 1, 1, 3))
self.bottleneck_params_list.append(
(self.multiply[tokens[1]], self.filter_num1[tokens[2]],
self.repeat[tokens[3]], 2, self.k_size[tokens[4]]))
self.bottleneck_params_list.append(
(self.multiply[tokens[5]], self.filter_num2[tokens[6]],
self.repeat[tokens[7]], 2, self.k_size[tokens[8]]))
self.bottleneck_params_list.append(
(self.multiply[tokens[9]], self.filter_num3[tokens[10]],
self.repeat[tokens[11]], 2, self.k_size[tokens[12]]))
self.bottleneck_params_list.append(
(self.multiply[tokens[13]], self.filter_num4[tokens[14]],
self.repeat[tokens[15]], 1, self.k_size[tokens[16]]))
self.bottleneck_params_list.append(
(self.multiply[tokens[17]], self.filter_num5[tokens[18]],
self.repeat[tokens[19]], 2, self.k_size[tokens[20]]))
self.bottleneck_params_list.append(
(self.multiply[tokens[21]], self.filter_num6[tokens[22]],
self.repeat[tokens[23]], 1, self.k_size[tokens[24]]))
def _modify_bottle_params(output_stride=None):
if output_stride is not None and output_stride % 2 != 0:
raise Exception("output stride must to be even number")
if output_stride is None:
return
else:
stride = 2
for i, layer_setting in enumerate(self.bottleneck_params_list):
t, c, n, s, ks = layer_setting
stride = stride * s
if stride > output_stride:
s = 1
self.bottleneck_params_list[i] = (t, c, n, s, ks)
def net_arch(input,
scale=1.0,
return_block=None,
end_points=None,
output_stride=None):
self.scale = scale
_modify_bottle_params(output_stride)
decode_ends = dict()
def check_points(count, points):
if points is None:
return False
else:
if isinstance(points, list):
return (True if count in points else False)
else:
return (True if count == points else False)
#conv1
            # all padding is 'SAME' in conv2d, so the actual padding can be computed automatically
input = conv_bn_layer(
input,
num_filters=int(32 * self.scale),
filter_size=3,
stride=2,
padding='SAME',
act='relu6',
name='mobilenetv2_conv1')
layer_count = 1
depthwise_output = None
# bottleneck sequences
in_c = int(32 * self.scale)
for i, layer_setting in enumerate(self.bottleneck_params_list):
t, c, n, s, k = layer_setting
layer_count += 1
### return_block and end_points means block num
if check_points((layer_count - 1), return_block):
decode_ends[layer_count - 1] = depthwise_output
if check_points((layer_count - 1), end_points):
return input, decode_ends
input, depthwise_output = self._invresi_blocks(
input=input,
in_c=in_c,
t=t,
c=int(c * self.scale),
n=n,
s=s,
k=int(k),
name='mobilenetv2_conv' + str(i))
in_c = int(c * self.scale)
### return_block and end_points means block num
if check_points(layer_count, return_block):
decode_ends[layer_count] = depthwise_output
if check_points(layer_count, end_points):
return input, decode_ends
# last conv
input = conv_bn_layer(
input=input,
num_filters=int(1280 * self.scale)
if self.scale > 1.0 else 1280,
filter_size=1,
stride=1,
padding='SAME',
act='relu6',
name='mobilenetv2_conv' + str(i + 1))
input = fluid.layers.pool2d(
input=input,
pool_type='avg',
global_pooling=True,
name='mobilenetv2_last_pool')
return input
return net_arch
def _shortcut(self, input, data_residual):
"""Build shortcut layer.
Args:
input(Variable): input.
data_residual(Variable): residual layer.
Returns:
Variable, layer output.
"""
return fluid.layers.elementwise_add(input, data_residual)
def _inverted_residual_unit(self,
input,
num_in_filter,
num_filters,
ifshortcut,
stride,
filter_size,
expansion_factor,
reduction_ratio=4,
name=None):
"""Build inverted residual unit.
Args:
input(Variable), input.
num_in_filter(int), number of in filters.
num_filters(int), number of filters.
ifshortcut(bool), whether using shortcut.
stride(int), stride.
filter_size(int), filter size.
padding(str|int|list), padding.
expansion_factor(float), expansion factor.
name(str), name.
Returns:
Variable, layers output.
"""
num_expfilter = int(round(num_in_filter * expansion_factor))
channel_expand = conv_bn_layer(
input=input,
num_filters=num_expfilter,
filter_size=1,
stride=1,
padding='SAME',
num_groups=1,
act='relu6',
name=name + '_expand')
bottleneck_conv = conv_bn_layer(
input=channel_expand,
num_filters=num_expfilter,
filter_size=filter_size,
stride=stride,
padding='SAME',
num_groups=num_expfilter,
act='relu6',
name=name + '_dwise',
use_cudnn=False)
depthwise_output = bottleneck_conv
linear_out = conv_bn_layer(
input=bottleneck_conv,
num_filters=num_filters,
filter_size=1,
stride=1,
padding='SAME',
num_groups=1,
act=None,
name=name + '_linear')
out = linear_out
if ifshortcut:
out = self._shortcut(input=input, data_residual=out)
return out, depthwise_output
def _invresi_blocks(self, input, in_c, t, c, n, s, k, name=None):
"""Build inverted residual blocks.
Args:
input: Variable, input.
in_c: int, number of in filters.
t: float, expansion factor.
c: int, number of filters.
n: int, number of layers.
s: int, stride.
k: int, filter size.
name: str, name.
Returns:
Variable, layers output.
"""
first_block, depthwise_output = self._inverted_residual_unit(
input=input,
num_in_filter=in_c,
num_filters=c,
ifshortcut=False,
stride=s,
filter_size=k,
expansion_factor=t,
name=name + '_1')
last_residual_block = first_block
last_c = c
for i in range(1, n):
last_residual_block, depthwise_output = self._inverted_residual_unit(
input=last_residual_block,
num_in_filter=last_c,
num_filters=c,
ifshortcut=True,
stride=1,
filter_size=k,
expansion_factor=t,
name=name + '_' + str(i + 1))
return last_residual_block, depthwise_output
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import struct
import paddle.fluid as fluid
import numpy as np
from paddle.fluid.proto.framework_pb2 import VarType
import solver
from utils.config import cfg
from loss import multi_softmax_with_loss
from loss import multi_dice_loss
from loss import multi_bce_loss
import deeplab
class ModelPhase(object):
"""
Standard name for model phase in PaddleSeg
The following standard keys are defined:
* `TRAIN`: training mode.
* `EVAL`: testing/evaluation mode.
* `PREDICT`: prediction/inference mode.
* `VISUAL` : visualization mode
"""
TRAIN = 'train'
EVAL = 'eval'
PREDICT = 'predict'
VISUAL = 'visual'
@staticmethod
def is_train(phase):
return phase == ModelPhase.TRAIN
@staticmethod
def is_predict(phase):
return phase == ModelPhase.PREDICT
@staticmethod
def is_eval(phase):
return phase == ModelPhase.EVAL
@staticmethod
def is_visual(phase):
return phase == ModelPhase.VISUAL
@staticmethod
def is_valid_phase(phase):
""" Check valid phase """
if ModelPhase.is_train(phase) or ModelPhase.is_predict(phase) \
or ModelPhase.is_eval(phase) or ModelPhase.is_visual(phase):
return True
return False
def seg_model(image, class_num, arch):
model_name = cfg.MODEL.MODEL_NAME
if model_name == 'deeplabv3p':
logits = deeplab.deeplabv3p_nas(image, class_num, arch)
else:
raise Exception(
"unknow model name, only support deeplabv3p"
)
return logits
def softmax(logit):
logit = fluid.layers.transpose(logit, [0, 2, 3, 1])
logit = fluid.layers.softmax(logit)
logit = fluid.layers.transpose(logit, [0, 3, 1, 2])
return logit
def sigmoid_to_softmax(logit):
"""
    Convert a one-channel logit to two channels
"""
logit = fluid.layers.transpose(logit, [0, 2, 3, 1])
logit = fluid.layers.sigmoid(logit)
logit_back = 1 - logit
logit = fluid.layers.concat([logit_back, logit], axis=-1)
logit = fluid.layers.transpose(logit, [0, 3, 1, 2])
return logit
def export_preprocess(image):
"""导出模型的预处理流程"""
image = fluid.layers.transpose(image, [0, 3, 1, 2])
origin_shape = fluid.layers.shape(image)[-2:]
    # Resize according to the chosen AUG_METHOD
if cfg.AUG.AUG_METHOD == 'unpadding':
h_fix = cfg.AUG.FIX_RESIZE_SIZE[1]
w_fix = cfg.AUG.FIX_RESIZE_SIZE[0]
image = fluid.layers.resize_bilinear(
image, out_shape=[h_fix, w_fix], align_corners=False, align_mode=0)
elif cfg.AUG.AUG_METHOD == 'rangescaling':
size = cfg.AUG.INF_RESIZE_VALUE
value = fluid.layers.reduce_max(origin_shape)
scale = float(size) / value.astype('float32')
image = fluid.layers.resize_bilinear(
image, scale=scale, align_corners=False, align_mode=0)
    # Record the image shape after resizing
valid_shape = fluid.layers.shape(image)[-2:]
    # Pad to EVAL_CROP_SIZE
width = cfg.EVAL_CROP_SIZE[0]
height = cfg.EVAL_CROP_SIZE[1]
pad_target = fluid.layers.assign(
np.array([height, width]).astype('float32'))
up = fluid.layers.assign(np.array([0]).astype('float32'))
down = pad_target[0] - valid_shape[0]
left = up
right = pad_target[1] - valid_shape[1]
paddings = fluid.layers.concat([up, down, left, right])
paddings = fluid.layers.cast(paddings, 'int32')
image = fluid.layers.pad2d(image, paddings=paddings, pad_value=127.5)
# normalize
mean = np.array(cfg.MEAN).reshape(1, len(cfg.MEAN), 1, 1)
mean = fluid.layers.assign(mean.astype('float32'))
std = np.array(cfg.STD).reshape(1, len(cfg.STD), 1, 1)
std = fluid.layers.assign(std.astype('float32'))
image = (image / 255 - mean) / std
    # Reshape so that later layers can obtain the feature map shape via image.shape
image = fluid.layers.reshape(
image, shape=[-1, cfg.DATASET.DATA_DIM, height, width])
return image, valid_shape, origin_shape
def build_model(main_prog, start_prog, phase=ModelPhase.TRAIN, arch=None):
if not ModelPhase.is_valid_phase(phase):
raise ValueError("ModelPhase {} is not valid!".format(phase))
if ModelPhase.is_train(phase):
width = cfg.TRAIN_CROP_SIZE[0]
height = cfg.TRAIN_CROP_SIZE[1]
else:
width = cfg.EVAL_CROP_SIZE[0]
height = cfg.EVAL_CROP_SIZE[1]
image_shape = [cfg.DATASET.DATA_DIM, height, width]
grt_shape = [1, height, width]
class_num = cfg.DATASET.NUM_CLASSES
with fluid.program_guard(main_prog, start_prog):
with fluid.unique_name.guard():
            # When exporting the model, add image normalization preprocessing so that
            # deployment only needs to add a batch dimension to the input image
if ModelPhase.is_predict(phase):
origin_image = fluid.layers.data(
name='image',
shape=[-1, -1, -1, cfg.DATASET.DATA_DIM],
dtype='float32',
append_batch_size=False)
image, valid_shape, origin_shape = export_preprocess(
origin_image)
else:
image = fluid.layers.data(
name='image', shape=image_shape, dtype='float32')
label = fluid.layers.data(
name='label', shape=grt_shape, dtype='int32')
mask = fluid.layers.data(
name='mask', shape=grt_shape, dtype='int32')
            # use PyReader when doing training and evaluation
if ModelPhase.is_train(phase) or ModelPhase.is_eval(phase):
py_reader = fluid.io.PyReader(
feed_list=[image, label, mask],
capacity=cfg.DATALOADER.BUF_SIZE,
iterable=False,
use_double_buffer=True)
loss_type = cfg.SOLVER.LOSS
            if not isinstance(loss_type, list):
                loss_type = [loss_type]
            # dice_loss and bce_loss are only applicable to binary segmentation
if class_num > 2 and (("dice_loss" in loss_type) or
("bce_loss" in loss_type)):
                raise Exception(
                    "dice loss and bce loss are only applicable to binary classification"
                )
            # For binary segmentation, when dice_loss or bce_loss is selected, the final logit has a single output channel
if ("dice_loss" in loss_type) or ("bce_loss" in loss_type):
class_num = 1
if "softmax_loss" in loss_type:
                    raise Exception(
                        "softmax loss cannot be combined with dice loss or bce loss"
                    )
logits = seg_model(image, class_num, arch)
            # Compute the loss according to the selected loss functions
if ModelPhase.is_train(phase) or ModelPhase.is_eval(phase):
loss_valid = False
avg_loss_list = []
valid_loss = []
if "softmax_loss" in loss_type:
weight = cfg.SOLVER.CROSS_ENTROPY_WEIGHT
avg_loss_list.append(
multi_softmax_with_loss(logits, label, mask, class_num, weight))
loss_valid = True
valid_loss.append("softmax_loss")
if "dice_loss" in loss_type:
avg_loss_list.append(multi_dice_loss(logits, label, mask))
loss_valid = True
valid_loss.append("dice_loss")
if "bce_loss" in loss_type:
avg_loss_list.append(multi_bce_loss(logits, label, mask))
loss_valid = True
valid_loss.append("bce_loss")
if not loss_valid:
raise Exception(
"SOLVER.LOSS: {} is set wrong. it should "
"include one of (softmax_loss, bce_loss, dice_loss) at least"
" example: ['softmax_loss'], ['dice_loss'], ['bce_loss', 'dice_loss']"
.format(cfg.SOLVER.LOSS))
invalid_loss = [x for x in loss_type if x not in valid_loss]
if len(invalid_loss) > 0:
print(
"Warning: the loss {} you set is invalid. it will not be included in loss computed."
.format(invalid_loss))
avg_loss = 0
for i in range(0, len(avg_loss_list)):
avg_loss += avg_loss_list[i]
#get pred result in original size
if isinstance(logits, tuple):
logit = logits[0]
else:
logit = logits
if logit.shape[2:] != label.shape[2:]:
logit = fluid.layers.resize_bilinear(logit, label.shape[2:])
# return image input and logit output for inference graph prune
if ModelPhase.is_predict(phase):
                # In binary segmentation with dice_loss or bce_loss the logit has a single channel; convert it to two channels
if class_num == 1:
logit = sigmoid_to_softmax(logit)
else:
logit = softmax(logit)
                # Keep only the valid region
logit = fluid.layers.slice(
logit, axes=[2, 3], starts=[0, 0], ends=valid_shape)
logit = fluid.layers.resize_bilinear(
logit,
out_shape=origin_shape,
align_corners=False,
align_mode=0)
logit = fluid.layers.argmax(logit, axis=1)
return origin_image, logit
if class_num == 1:
out = sigmoid_to_softmax(logit)
out = fluid.layers.transpose(out, [0, 2, 3, 1])
else:
out = fluid.layers.transpose(logit, [0, 2, 3, 1])
pred = fluid.layers.argmax(out, axis=3)
pred = fluid.layers.unsqueeze(pred, axes=[3])
if ModelPhase.is_visual(phase):
if class_num == 1:
logit = sigmoid_to_softmax(logit)
else:
logit = softmax(logit)
return pred, logit
if ModelPhase.is_eval(phase):
return py_reader, avg_loss, pred, label, mask
if ModelPhase.is_train(phase):
optimizer = solver.Solver(main_prog, start_prog)
decayed_lr = optimizer.optimise(avg_loss)
return py_reader, avg_loss, decayed_lr, pred, label, mask
def to_int(string, dest="I"):
return struct.unpack(dest, string)[0]
def parse_shape_from_file(filename):
with open(filename, "rb") as file:
version = file.read(4)
lod_level = to_int(file.read(8), dest="Q")
for i in range(lod_level):
_size = to_int(file.read(8), dest="Q")
_ = file.read(_size)
version = file.read(4)
tensor_desc_size = to_int(file.read(4))
tensor_desc = VarType.TensorDesc()
tensor_desc.ParseFromString(file.read(tensor_desc_size))
return tuple(tensor_desc.dims)
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
# GPU memory garbage collection optimization flags
os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0"
import sys
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
SEG_PATH = os.path.join(LOCAL_PATH, "../../", "pdseg")
sys.path.append(SEG_PATH)
import argparse
import pprint
import random
import shutil
import functools
import paddle
import numpy as np
import paddle.fluid as fluid
from utils.config import cfg
from utils.timer import Timer, calculate_eta
from metrics import ConfusionMatrix
from reader import SegDataset
from model_builder import build_model
from model_builder import ModelPhase
from model_builder import parse_shape_from_file
from eval_nas import evaluate
from vis import visualize
from utils import dist_utils
from mobilenetv2_search_space import MobileNetV2SpaceSeg
from paddleslim.nas.search_space.search_space_factory import SearchSpaceFactory
from paddleslim.analysis import flops
from paddleslim.nas.sa_nas import SANAS
from paddleslim.nas import search_space
def parse_args():
parser = argparse.ArgumentParser(description='PaddleSeg training')
parser.add_argument(
'--cfg',
dest='cfg_file',
help='Config file for training (and optionally testing)',
default=None,
type=str)
parser.add_argument(
'--use_gpu',
dest='use_gpu',
help='Use gpu or cpu',
action='store_true',
default=False)
parser.add_argument(
'--use_mpio',
dest='use_mpio',
help='Use multiprocess I/O or not',
action='store_true',
default=False)
parser.add_argument(
'--log_steps',
dest='log_steps',
help='Display logging information at every log_steps',
default=10,
type=int)
parser.add_argument(
'--debug',
dest='debug',
help='debug mode, display detail information of training',
action='store_true')
parser.add_argument(
'--use_tb',
dest='use_tb',
help='whether to record the data during training to Tensorboard',
action='store_true')
parser.add_argument(
'--tb_log_dir',
dest='tb_log_dir',
help='Tensorboard logging directory',
default=None,
type=str)
parser.add_argument(
'--do_eval',
dest='do_eval',
help='Evaluation models result on every new checkpoint',
action='store_true')
parser.add_argument(
'opts',
help='See utils/config.py for all options',
default=None,
nargs=argparse.REMAINDER)
parser.add_argument(
'--enable_ce',
dest='enable_ce',
        help='If set True, enable continuous evaluation job. '
'This flag is only used for internal test.',
action='store_true')
return parser.parse_args()
def save_vars(executor, dirname, program=None, vars=None):
"""
    Temporary resolution for Windows save-variables compatibility.
    Will be fixed in PaddlePaddle v1.5.2
"""
save_program = fluid.Program()
save_block = save_program.global_block()
for each_var in vars:
# NOTE: don't save the variable which type is RAW
if each_var.type == fluid.core.VarDesc.VarType.RAW:
continue
new_var = save_block.create_var(
name=each_var.name,
shape=each_var.shape,
dtype=each_var.dtype,
type=each_var.type,
lod_level=each_var.lod_level,
persistable=True)
file_path = os.path.join(dirname, new_var.name)
file_path = os.path.normpath(file_path)
save_block.append_op(
type='save',
inputs={'X': [new_var]},
outputs={},
attrs={'file_path': file_path})
executor.run(save_program)
def save_checkpoint(exe, program, ckpt_name):
"""
Save checkpoint for evaluation or resume training
"""
ckpt_dir = os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, str(ckpt_name))
print("Save model checkpoint to {}".format(ckpt_dir))
if not os.path.isdir(ckpt_dir):
os.makedirs(ckpt_dir)
save_vars(
exe,
ckpt_dir,
program,
vars=list(filter(fluid.io.is_persistable, program.list_vars())))
return ckpt_dir
def load_checkpoint(exe, program):
"""
    Load checkpoint from the pretrained model directory to resume training
"""
print('Resume model training from:', cfg.TRAIN.RESUME_MODEL_DIR)
if not os.path.exists(cfg.TRAIN.RESUME_MODEL_DIR):
raise ValueError("TRAIN.PRETRAIN_MODEL {} not exist!".format(
cfg.TRAIN.RESUME_MODEL_DIR))
fluid.io.load_persistables(
exe, cfg.TRAIN.RESUME_MODEL_DIR, main_program=program)
model_path = cfg.TRAIN.RESUME_MODEL_DIR
    # Check whether the path ends with a path separator
if model_path[-1] == os.sep:
model_path = model_path[0:-1]
epoch_name = os.path.basename(model_path)
# If resume model is final model
if epoch_name == 'final':
begin_epoch = cfg.SOLVER.NUM_EPOCHS
    # If the resume model path ends with a digit, restore the epoch status
elif epoch_name.isdigit():
epoch = int(epoch_name)
begin_epoch = epoch + 1
else:
raise ValueError("Resume model path is not valid!")
print("Model checkpoint loaded successfully!")
return begin_epoch
def update_best_model(ckpt_dir):
best_model_dir = os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, 'best_model')
if os.path.exists(best_model_dir):
shutil.rmtree(best_model_dir)
shutil.copytree(ckpt_dir, best_model_dir)
def print_info(*msg):
if cfg.TRAINER_ID == 0:
print(*msg)
def train(cfg):
startup_prog = fluid.Program()
train_prog = fluid.Program()
if args.enable_ce:
startup_prog.random_seed = 1000
train_prog.random_seed = 1000
drop_last = True
dataset = SegDataset(
file_list=cfg.DATASET.TRAIN_FILE_LIST,
mode=ModelPhase.TRAIN,
shuffle=True,
data_dir=cfg.DATASET.DATA_DIR)
def data_generator():
if args.use_mpio:
data_gen = dataset.multiprocess_generator(
num_processes=cfg.DATALOADER.NUM_WORKERS,
max_queue_size=cfg.DATALOADER.BUF_SIZE)
else:
data_gen = dataset.generator()
batch_data = []
for b in data_gen:
batch_data.append(b)
if len(batch_data) == (cfg.BATCH_SIZE // cfg.NUM_TRAINERS):
for item in batch_data:
yield item[0], item[1], item[2]
batch_data = []
        # If the sync batch norm strategy is used, drop the last batch if the number of samples
        # in batch_data is less than cfg.BATCH_SIZE to avoid NCCL hang issues
if not cfg.TRAIN.SYNC_BATCH_NORM:
for item in batch_data:
yield item[0], item[1], item[2]
# Get device environment
# places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places()
# place = places[0]
gpu_id = int(os.environ.get('FLAGS_selected_gpus', 0))
place = fluid.CUDAPlace(gpu_id) if args.use_gpu else fluid.CPUPlace()
places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places()
# Get number of GPU
dev_count = cfg.NUM_TRAINERS if cfg.NUM_TRAINERS > 1 else len(places)
print_info("#Device count: {}".format(dev_count))
    # Make sure BATCH_SIZE is divisible by the number of GPU cards
assert cfg.BATCH_SIZE % dev_count == 0, (
        'BATCH_SIZE:{} not divisible by number of GPUs:{}'.format(
cfg.BATCH_SIZE, dev_count))
    # In multi-GPU training mode, batch data is allocated evenly to each GPU
batch_size_per_dev = cfg.BATCH_SIZE // dev_count
print_info("batch_size_per_dev: {}".format(batch_size_per_dev))
config_info = {'input_size': 769, 'output_size': 1, 'block_num': 7}
config = ([(cfg.SLIM.NAS_SPACE_NAME, config_info)])
factory = SearchSpaceFactory()
space = factory.get_search_space(config)
port = cfg.SLIM.NAS_PORT
server_address = (cfg.SLIM.NAS_ADDRESS, port)
sa_nas = SANAS(config, server_addr=server_address, search_steps=cfg.SLIM.NAS_SEARCH_STEPS,
is_server=cfg.SLIM.NAS_IS_SERVER)
for step in range(cfg.SLIM.NAS_SEARCH_STEPS):
arch = sa_nas.next_archs()[0]
start_prog = fluid.Program()
train_prog = fluid.Program()
py_reader, avg_loss, lr, pred, grts, masks = build_model(
train_prog, start_prog, arch=arch, phase=ModelPhase.TRAIN)
cur_flops = flops(train_prog)
print('current step:', step, 'flops:', cur_flops)
py_reader.decorate_sample_generator(
data_generator, batch_size=batch_size_per_dev, drop_last=drop_last)
exe = fluid.Executor(place)
exe.run(start_prog)
exec_strategy = fluid.ExecutionStrategy()
        # Clear temporary variables every 100 iterations
if args.use_gpu:
exec_strategy.num_threads = fluid.core.get_cuda_device_count()
exec_strategy.num_iteration_per_drop_scope = 100
build_strategy = fluid.BuildStrategy()
if cfg.NUM_TRAINERS > 1 and args.use_gpu:
dist_utils.prepare_for_multi_process(exe, build_strategy, train_prog)
exec_strategy.num_threads = 1
if cfg.TRAIN.SYNC_BATCH_NORM and args.use_gpu:
if dev_count > 1:
# Apply sync batch norm strategy
print_info("Sync BatchNorm strategy is effective.")
build_strategy.sync_batch_norm = True
else:
print_info(
"Sync BatchNorm strategy will not be effective if GPU device"
" count <= 1")
compiled_train_prog = fluid.CompiledProgram(train_prog).with_data_parallel(
loss_name=avg_loss.name,
exec_strategy=exec_strategy,
build_strategy=build_strategy)
# Resume training
begin_epoch = cfg.SOLVER.BEGIN_EPOCH
if cfg.TRAIN.RESUME_MODEL_DIR:
begin_epoch = load_checkpoint(exe, train_prog)
# Load pretrained model
elif os.path.exists(cfg.TRAIN.PRETRAINED_MODEL_DIR):
print_info('Pretrained model dir: ', cfg.TRAIN.PRETRAINED_MODEL_DIR)
load_vars = []
load_fail_vars = []
def var_shape_matched(var, shape):
"""
                Check whether the persistable variable shape matches the current network
"""
var_exist = os.path.exists(
os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name))
if var_exist:
var_shape = parse_shape_from_file(
os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name))
return var_shape == shape
return False
for x in train_prog.list_vars():
if isinstance(x, fluid.framework.Parameter):
shape = tuple(fluid.global_scope().find_var(
x.name).get_tensor().shape())
if var_shape_matched(x, shape):
load_vars.append(x)
else:
load_fail_vars.append(x)
fluid.io.load_vars(
exe, dirname=cfg.TRAIN.PRETRAINED_MODEL_DIR, vars=load_vars)
for var in load_vars:
print_info("Parameter[{}] loaded sucessfully!".format(var.name))
for var in load_fail_vars:
print_info(
"Parameter[{}] don't exist or shape does not match current network, skip"
" to load it.".format(var.name))
print_info("{}/{} pretrained parameters loaded successfully!".format(
len(load_vars),
len(load_vars) + len(load_fail_vars)))
else:
print_info(
                'Pretrained model dir {} does not exist, training from scratch...'.
format(cfg.TRAIN.PRETRAINED_MODEL_DIR))
fetch_list = [avg_loss.name, lr.name]
global_step = 0
all_step = cfg.DATASET.TRAIN_TOTAL_IMAGES // cfg.BATCH_SIZE
if cfg.DATASET.TRAIN_TOTAL_IMAGES % cfg.BATCH_SIZE and drop_last != True:
all_step += 1
all_step *= (cfg.SOLVER.NUM_EPOCHS - begin_epoch + 1)
avg_loss = 0.0
timer = Timer()
timer.start()
if begin_epoch > cfg.SOLVER.NUM_EPOCHS:
raise ValueError(
("begin epoch[{}] is larger than cfg.SOLVER.NUM_EPOCHS[{}]").format(
begin_epoch, cfg.SOLVER.NUM_EPOCHS))
if args.use_mpio:
print_info("Use multiprocess reader")
else:
print_info("Use multi-thread reader")
best_miou = 0.0
for epoch in range(begin_epoch, cfg.SOLVER.NUM_EPOCHS + 1):
py_reader.start()
while True:
try:
loss, lr = exe.run(
program=compiled_train_prog,
fetch_list=fetch_list,
return_numpy=True)
avg_loss += np.mean(np.array(loss))
global_step += 1
if global_step % args.log_steps == 0 and cfg.TRAINER_ID == 0:
avg_loss /= args.log_steps
speed = args.log_steps / timer.elapsed_time()
print((
"epoch={} step={} lr={:.5f} loss={:.4f} step/sec={:.3f} | ETA {}"
).format(epoch, global_step, lr[0], avg_loss, speed,
calculate_eta(all_step - global_step, speed)))
sys.stdout.flush()
avg_loss = 0.0
timer.restart()
except fluid.core.EOFException:
py_reader.reset()
break
except Exception as e:
print(e)
if epoch > cfg.SLIM.NAS_START_EVAL_EPOCH:
ckpt_dir = save_checkpoint(exe, train_prog, '{}_tmp'.format(port))
_, mean_iou, _, mean_acc = evaluate(
cfg=cfg,
arch=arch,
ckpt_dir=ckpt_dir,
use_gpu=args.use_gpu,
use_mpio=args.use_mpio)
if best_miou < mean_iou:
print('search step {}, epoch {} best iou {}'.format(step, epoch, mean_iou))
best_miou = mean_iou
sa_nas.reward(float(best_miou))
def main(args):
if args.cfg_file is not None:
cfg.update_from_file(args.cfg_file)
if args.opts:
cfg.update_from_list(args.opts)
if args.enable_ce:
random.seed(0)
np.random.seed(0)
cfg.TRAINER_ID = int(os.getenv("PADDLE_TRAINER_ID", 0))
cfg.NUM_TRAINERS = int(os.environ.get('PADDLE_TRAINERS_NUM', 1))
cfg.check_and_infer()
print_info(pprint.pformat(cfg))
train(cfg)
if __name__ == '__main__':
args = parse_args()
    if not fluid.core.is_compiled_with_cuda() and args.use_gpu:
print(
"You can not set use_gpu = True in the model because you are using paddlepaddle-cpu."
)
print(
"Please: 1. Install paddlepaddle-gpu to run your models on GPU or 2. Set use_gpu=False to run models on CPU."
)
sys.exit(1)
main(args)
# PaddleSeg Pruning Tutorial
Before reading this tutorial, please make sure you have gone through the [PaddleSeg usage guide](../../docs/usage.md) and related chapters so that you have a basic understanding of PaddleSeg.
This document describes how to use the convolution channel pruning API of [PaddleSlim](https://paddlepaddle.github.io/PaddleSlim) to prune the number of channels of the convolution layers in the segmentation models.
In the segmentation library, pruning can be done directly with the `PaddleSeg/slim/prune/train_prune.py` script, which calls PaddleSlim's [paddleslim.prune.Pruner](https://paddlepaddle.github.io/PaddleSlim/api/prune_api/#Pruner) API; a minimal sketch of that call is shown below.
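The following is a hedged sketch of how `paddleslim.prune.Pruner` is typically invoked inside such a script; `train_prog` and `place` are assumed to come from the usual PaddleSeg training setup, and the parameter names and ratios are placeholders only.
```python
import paddle.fluid as fluid
from paddleslim.prune import Pruner

# Assumption: `train_prog` is the built training Program and `place` the device
# (fluid.CUDAPlace / fluid.CPUPlace) prepared by the normal PaddleSeg training code.
pruner = Pruner()
train_prog = pruner.prune(
    train_prog,                  # program whose convolution channels will be pruned
    fluid.global_scope(),        # scope holding the parameter tensors
    params=['learning_to_downsample/weights'],  # placeholder parameter names to prune
    ratios=[0.1],                # ratio pruned from each listed parameter
    place=place,
    only_graph=False)[0]         # prune both the graph and the parameter values
```
This mirrors the call made in `train_prune.py`; in practice the parameter names and ratios come from the `SLIM.PRUNE_PARAMS` and `SLIM.PRUNE_RATIOS` options described below.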
Unless otherwise stated, all commands in this tutorial are executed from the `PaddleSeg/` directory.
## 1. Prepare the Data and Pretrained Model
Run the following command to download the Cityscapes dataset:
```
python dataset/download_cityscapes.py
```
Refer to the [pretrained model list](../../docs/model_zoo.md) to obtain the required pretrained model.
## 2. Identify the Parameters to Prune
We reduce the number of channels of the convolution layers by pruning their parameters. Before pruning, we need to determine the names of the convolution parameters to be pruned.
List all parameters of the current model with the following code:
```python
# List all Parameters of the model
for x in train_prog.list_vars():
if isinstance(x, fluid.framework.Parameter):
print(x.name, x.shape)
```
By inspecting the parameter names and shapes, filter out the convolution parameters and decide which of them to prune, for example as in the sketch below.
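As a hedged illustration, the snippet below keeps only 4-D parameters whose names contain `weights`; the name filter is just an assumption about PaddleSeg's naming convention and should be adapted to what the printout above actually shows.
```python
import paddle.fluid as fluid

# Assumption: `train_prog` is the training Program built by PaddleSeg, as in the listing above.
conv_weights = []
for x in train_prog.list_vars():
    # Convolution weights are 4-D Parameters; 'weights' in the name is an assumed convention.
    if isinstance(x, fluid.framework.Parameter) and len(x.shape) == 4 and 'weights' in x.name:
        conv_weights.append((x.name, x.shape))
for name, shape in conv_weights:
    print(name, shape)  # choose the layers to prune from this list
```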
## 3. Launch the Pruning Job
When launching a pruning job with `train_prune.py`, use the `SLIM.PRUNE_PARAMS` option to specify the comma-separated list of parameter names to prune, and the `SLIM.PRUNE_RATIOS` option to specify the ratio pruned from each parameter.
```shell
export CUDA_VISIBLE_DEVICES=0
python -u ./slim/prune/train_prune.py --log_steps 10 --cfg configs/cityscape_fast_scnn.yaml --use_gpu --use_mpio \
SLIM.PRUNE_PARAMS 'learning_to_downsample/weights,learning_to_downsample/dsconv1/pointwise/weights,learning_to_downsample/dsconv2/pointwise/weights' \
SLIM.PRUNE_RATIOS '[0.1,0.1,0.1]'
```
Here we select three parameters and prune each of them by a ratio of 0.1.
## 4. Evaluation
```shell
export CUDA_VISIBLE_DEVICES=0
python -u ./slim/prune/eval_prune.py --cfg configs/cityscape_fast_scnn.yaml --use_gpu --use_mpio \
TEST.TEST_MODEL your_trained_model
```
## 5. Models
| Model | Dataset | Download | Pruning method | FLOPs | mIoU on val |
|---|---|---|---|---|---|
| Fast-SCNN/bn | Cityscapes |[fast_scnn_cityscapes.tar](https://paddleseg.bj.bcebos.com/models/fast_scnn_cityscape.tar) | None | 7.21G | 0.6964 |
| Fast-SCNN/bn | Cityscapes |[fast_scnn_cityscapes-uniform-51.tar](https://paddleseg.bj.bcebos.com/models/fast_scnn_cityscape-uniform-51.tar) | uniform | 3.54G | 0.6990 |
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
# GPU memory garbage collection optimization flags
os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0"
import sys
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
SEG_PATH = os.path.join(LOCAL_PATH, "../../", "pdseg")
sys.path.append(SEG_PATH)
import time
import argparse
import functools
import pprint
import cv2
import numpy as np
import paddle
import paddle.fluid as fluid
from utils.config import cfg
from utils.timer import Timer, calculate_eta
from models.model_builder import build_model
from models.model_builder import ModelPhase
from reader import SegDataset
from metrics import ConfusionMatrix
from paddleslim.prune import load_model
def parse_args():
    parser = argparse.ArgumentParser(description='PaddleSeg model evaluation')
parser.add_argument(
'--cfg',
dest='cfg_file',
help='Config file for training (and optionally testing)',
default=None,
type=str)
parser.add_argument(
'--use_gpu',
dest='use_gpu',
help='Use gpu or cpu',
action='store_true',
default=False)
parser.add_argument(
'--use_mpio',
dest='use_mpio',
help='Use multiprocess IO or not',
action='store_true',
default=False)
parser.add_argument(
'opts',
help='See utils/config.py for all options',
default=None,
nargs=argparse.REMAINDER)
if len(sys.argv) == 1:
parser.print_help()
sys.exit(1)
return parser.parse_args()
def evaluate(cfg, ckpt_dir=None, use_gpu=False, use_mpio=False, **kwargs):
np.set_printoptions(precision=5, suppress=True)
startup_prog = fluid.Program()
test_prog = fluid.Program()
dataset = SegDataset(
file_list=cfg.DATASET.VAL_FILE_LIST,
mode=ModelPhase.EVAL,
data_dir=cfg.DATASET.DATA_DIR)
def data_generator():
        # TODO: check whether the batch reader is compatible with Windows
if use_mpio:
data_gen = dataset.multiprocess_generator(
num_processes=cfg.DATALOADER.NUM_WORKERS,
max_queue_size=cfg.DATALOADER.BUF_SIZE)
else:
data_gen = dataset.generator()
for b in data_gen:
yield b[0], b[1], b[2]
py_reader, avg_loss, pred, grts, masks = build_model(
test_prog, startup_prog, phase=ModelPhase.EVAL)
py_reader.decorate_sample_generator(
data_generator, drop_last=False, batch_size=cfg.BATCH_SIZE)
# Get device environment
places = fluid.cuda_places() if use_gpu else fluid.cpu_places()
place = places[0]
dev_count = len(places)
print("#Device count: {}".format(dev_count))
exe = fluid.Executor(place)
exe.run(startup_prog)
test_prog = test_prog.clone(for_test=True)
ckpt_dir = cfg.TEST.TEST_MODEL if not ckpt_dir else ckpt_dir
if not os.path.exists(ckpt_dir):
raise ValueError('The TEST.TEST_MODEL {} is not found'.format(ckpt_dir))
if ckpt_dir is not None:
print('load test model:', ckpt_dir)
load_model(exe, test_prog, ckpt_dir)
# Use streaming confusion matrix to calculate mean_iou
np.set_printoptions(
precision=4, suppress=True, linewidth=160, floatmode="fixed")
conf_mat = ConfusionMatrix(cfg.DATASET.NUM_CLASSES, streaming=True)
fetch_list = [avg_loss.name, pred.name, grts.name, masks.name]
num_images = 0
step = 0
all_step = cfg.DATASET.TEST_TOTAL_IMAGES // cfg.BATCH_SIZE + 1
timer = Timer()
timer.start()
py_reader.start()
while True:
try:
step += 1
loss, pred, grts, masks = exe.run(
test_prog, fetch_list=fetch_list, return_numpy=True)
loss = np.mean(np.array(loss))
num_images += pred.shape[0]
conf_mat.calculate(pred, grts, masks)
_, iou = conf_mat.mean_iou()
_, acc = conf_mat.accuracy()
speed = 1.0 / timer.elapsed_time()
print(
"[EVAL]step={} loss={:.5f} acc={:.4f} IoU={:.4f} step/sec={:.2f} | ETA {}"
.format(step, loss, acc, iou, speed,
calculate_eta(all_step - step, speed)))
timer.restart()
sys.stdout.flush()
except fluid.core.EOFException:
break
category_iou, avg_iou = conf_mat.mean_iou()
category_acc, avg_acc = conf_mat.accuracy()
print("[EVAL]#image={} acc={:.4f} IoU={:.4f}".format(
num_images, avg_acc, avg_iou))
print("[EVAL]Category IoU:", category_iou)
print("[EVAL]Category Acc:", category_acc)
print("[EVAL]Kappa:{:.4f}".format(conf_mat.kappa()))
return category_iou, avg_iou, category_acc, avg_acc
def main():
args = parse_args()
if args.cfg_file is not None:
cfg.update_from_file(args.cfg_file)
if args.opts:
cfg.update_from_list(args.opts)
cfg.check_and_infer()
print(pprint.pformat(cfg))
evaluate(cfg, **args.__dict__)
if __name__ == '__main__':
main()
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
# GPU memory garbage collection optimization flags
os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0"
import sys
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
SEG_PATH = os.path.join(LOCAL_PATH, "../../", "pdseg")
sys.path.append(SEG_PATH)
import argparse
import pprint
import shutil
import functools
import paddle
import numpy as np
import paddle.fluid as fluid
from utils.config import cfg
from utils.timer import Timer, calculate_eta
from metrics import ConfusionMatrix
from reader import SegDataset
from models.model_builder import build_model
from models.model_builder import ModelPhase
from models.model_builder import parse_shape_from_file
from eval_prune import evaluate
from vis import visualize
from utils import dist_utils
from paddleslim.prune import Pruner, save_model
from paddleslim.analysis import flops
def parse_args():
parser = argparse.ArgumentParser(description='PaddleSeg training')
parser.add_argument(
'--cfg',
dest='cfg_file',
help='Config file for training (and optionally testing)',
default=None,
type=str)
parser.add_argument(
'--use_gpu',
dest='use_gpu',
help='Use gpu or cpu',
action='store_true',
default=False)
parser.add_argument(
'--use_mpio',
dest='use_mpio',
help='Use multiprocess I/O or not',
action='store_true',
default=False)
parser.add_argument(
'--log_steps',
dest='log_steps',
help='Display logging information at every log_steps',
default=10,
type=int)
parser.add_argument(
'--debug',
dest='debug',
help='debug mode, display detail information of training',
action='store_true')
parser.add_argument(
'--use_tb',
dest='use_tb',
help='whether to record the data during training to Tensorboard',
action='store_true')
parser.add_argument(
'--tb_log_dir',
dest='tb_log_dir',
help='Tensorboard logging directory',
default=None,
type=str)
parser.add_argument(
'--do_eval',
dest='do_eval',
help='Evaluation models result on every new checkpoint',
action='store_true')
parser.add_argument(
'opts',
help='See utils/config.py for all options',
default=None,
nargs=argparse.REMAINDER)
return parser.parse_args()
def save_vars(executor, dirname, program=None, vars=None):
"""
    Temporary resolution for Windows save-variables compatibility.
    Will be fixed in PaddlePaddle v1.5.2
"""
save_program = fluid.Program()
save_block = save_program.global_block()
for each_var in vars:
# NOTE: don't save the variable which type is RAW
if each_var.type == fluid.core.VarDesc.VarType.RAW:
continue
new_var = save_block.create_var(
name=each_var.name,
shape=each_var.shape,
dtype=each_var.dtype,
type=each_var.type,
lod_level=each_var.lod_level,
persistable=True)
file_path = os.path.join(dirname, new_var.name)
file_path = os.path.normpath(file_path)
save_block.append_op(
type='save',
inputs={'X': [new_var]},
outputs={},
attrs={'file_path': file_path})
executor.run(save_program)
def save_prune_checkpoint(exe, program, ckpt_name):
"""
Save checkpoint for evaluation or resume training
"""
ckpt_dir = os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, str(ckpt_name))
print("Save model checkpoint to {}".format(ckpt_dir))
if not os.path.isdir(ckpt_dir):
os.makedirs(ckpt_dir)
save_model(exe, program, ckpt_dir)
return ckpt_dir
def load_checkpoint(exe, program):
"""
    Load checkpoint from the pretrained model directory to resume training
"""
print('Resume model training from:', cfg.TRAIN.RESUME_MODEL_DIR)
if not os.path.exists(cfg.TRAIN.RESUME_MODEL_DIR):
raise ValueError("TRAIN.PRETRAIN_MODEL {} not exist!".format(
cfg.TRAIN.RESUME_MODEL_DIR))
fluid.io.load_persistables(
exe, cfg.TRAIN.RESUME_MODEL_DIR, main_program=program)
model_path = cfg.TRAIN.RESUME_MODEL_DIR
    # Check whether the path ends with a path separator
if model_path[-1] == os.sep:
model_path = model_path[0:-1]
epoch_name = os.path.basename(model_path)
# If resume model is final model
if epoch_name == 'final':
begin_epoch = cfg.SOLVER.NUM_EPOCHS
    # If the resume model path ends with a digit, restore the epoch status
elif epoch_name.isdigit():
epoch = int(epoch_name)
begin_epoch = epoch + 1
else:
raise ValueError("Resume model path is not valid!")
print("Model checkpoint loaded successfully!")
return begin_epoch
def print_info(*msg):
if cfg.TRAINER_ID == 0:
print(*msg)
def train(cfg):
startup_prog = fluid.Program()
train_prog = fluid.Program()
drop_last = True
dataset = SegDataset(
file_list=cfg.DATASET.TRAIN_FILE_LIST,
mode=ModelPhase.TRAIN,
shuffle=True,
data_dir=cfg.DATASET.DATA_DIR)
def data_generator():
if args.use_mpio:
data_gen = dataset.multiprocess_generator(
num_processes=cfg.DATALOADER.NUM_WORKERS,
max_queue_size=cfg.DATALOADER.BUF_SIZE)
else:
data_gen = dataset.generator()
batch_data = []
for b in data_gen:
batch_data.append(b)
if len(batch_data) == (cfg.BATCH_SIZE // cfg.NUM_TRAINERS):
for item in batch_data:
yield item[0], item[1], item[2]
batch_data = []
        # If the sync batch norm strategy is used, drop the last batch if the number of samples
        # in batch_data is less than cfg.BATCH_SIZE to avoid NCCL hang issues
if not cfg.TRAIN.SYNC_BATCH_NORM:
for item in batch_data:
yield item[0], item[1], item[2]
# Get device environment
# places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places()
# place = places[0]
gpu_id = int(os.environ.get('FLAGS_selected_gpus', 0))
place = fluid.CUDAPlace(gpu_id) if args.use_gpu else fluid.CPUPlace()
places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places()
# Get number of GPU
dev_count = cfg.NUM_TRAINERS if cfg.NUM_TRAINERS > 1 else len(places)
print_info("#Device count: {}".format(dev_count))
    # Make sure BATCH_SIZE is divisible by the number of GPU cards
assert cfg.BATCH_SIZE % dev_count == 0, (
        'BATCH_SIZE:{} not divisible by number of GPUs:{}'.format(
cfg.BATCH_SIZE, dev_count))
    # In multi-GPU training mode, batch data is allocated evenly to each GPU
batch_size_per_dev = cfg.BATCH_SIZE // dev_count
print_info("batch_size_per_dev: {}".format(batch_size_per_dev))
py_reader, avg_loss, lr, pred, grts, masks = build_model(
train_prog, startup_prog, phase=ModelPhase.TRAIN)
py_reader.decorate_sample_generator(
data_generator, batch_size=batch_size_per_dev, drop_last=drop_last)
exe = fluid.Executor(place)
exe.run(startup_prog)
exec_strategy = fluid.ExecutionStrategy()
    # Clear temporary variables every 100 iterations
if args.use_gpu:
exec_strategy.num_threads = fluid.core.get_cuda_device_count()
exec_strategy.num_iteration_per_drop_scope = 100
build_strategy = fluid.BuildStrategy()
if cfg.NUM_TRAINERS > 1 and args.use_gpu:
dist_utils.prepare_for_multi_process(exe, build_strategy, train_prog)
exec_strategy.num_threads = 1
if cfg.TRAIN.SYNC_BATCH_NORM and args.use_gpu:
if dev_count > 1:
# Apply sync batch norm strategy
print_info("Sync BatchNorm strategy is effective.")
build_strategy.sync_batch_norm = True
else:
print_info("Sync BatchNorm strategy will not be effective if GPU device"
" count <= 1")
pruned_params = cfg.SLIM.PRUNE_PARAMS.strip().split(',')
pruned_ratios = cfg.SLIM.PRUNE_RATIOS
if isinstance(pruned_ratios, float):
pruned_ratios = [pruned_ratios] * len(pruned_params)
elif isinstance(pruned_ratios, (list, tuple)):
pruned_ratios = list(pruned_ratios)
else:
raise ValueError('expect SLIM.PRUNE_RATIOS type is float, list, tuple, '
'but received {}'.format(type(pruned_ratios)))
# Resume training
begin_epoch = cfg.SOLVER.BEGIN_EPOCH
if cfg.TRAIN.RESUME_MODEL_DIR:
begin_epoch = load_checkpoint(exe, train_prog)
# Load pretrained model
elif os.path.exists(cfg.TRAIN.PRETRAINED_MODEL_DIR):
print_info('Pretrained model dir: ', cfg.TRAIN.PRETRAINED_MODEL_DIR)
load_vars = []
load_fail_vars = []
def var_shape_matched(var, shape):
"""
            Check whether the persistable variable shape matches the current network
"""
var_exist = os.path.exists(
os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name))
if var_exist:
var_shape = parse_shape_from_file(
os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name))
return var_shape == shape
return False
for x in train_prog.list_vars():
if isinstance(x, fluid.framework.Parameter):
shape = tuple(fluid.global_scope().find_var(
x.name).get_tensor().shape())
if var_shape_matched(x, shape):
load_vars.append(x)
else:
load_fail_vars.append(x)
fluid.io.load_vars(
exe, dirname=cfg.TRAIN.PRETRAINED_MODEL_DIR, vars=load_vars)
for var in load_vars:
print_info("Parameter[{}] loaded sucessfully!".format(var.name))
for var in load_fail_vars:
print_info("Parameter[{}] don't exist or shape does not match current network, skip"
" to load it.".format(var.name))
print_info("{}/{} pretrained parameters loaded successfully!".format(
len(load_vars),
len(load_vars) + len(load_fail_vars)))
else:
        print_info('Pretrained model dir {} does not exist, training from scratch...'.
format(cfg.TRAIN.PRETRAINED_MODEL_DIR))
fetch_list = [avg_loss.name, lr.name]
if args.debug:
# Fetch more variable info and use streaming confusion matrix to
# calculate IoU results if in debug mode
np.set_printoptions(
precision=4, suppress=True, linewidth=160, floatmode="fixed")
fetch_list.extend([pred.name, grts.name, masks.name])
cm = ConfusionMatrix(cfg.DATASET.NUM_CLASSES, streaming=True)
if args.use_tb:
if not args.tb_log_dir:
print_info("Please specify the log directory by --tb_log_dir.")
exit(1)
from tb_paddle import SummaryWriter
log_writer = SummaryWriter(args.tb_log_dir)
pruner = Pruner()
train_prog = pruner.prune(
train_prog,
fluid.global_scope(),
params=pruned_params,
ratios=pruned_ratios,
place=place,
only_graph=False)[0]
compiled_train_prog = fluid.CompiledProgram(train_prog).with_data_parallel(
loss_name=avg_loss.name,
exec_strategy=exec_strategy,
build_strategy=build_strategy)
global_step = 0
all_step = cfg.DATASET.TRAIN_TOTAL_IMAGES // cfg.BATCH_SIZE
if cfg.DATASET.TRAIN_TOTAL_IMAGES % cfg.BATCH_SIZE and drop_last != True:
all_step += 1
all_step *= (cfg.SOLVER.NUM_EPOCHS - begin_epoch + 1)
avg_loss = 0.0
timer = Timer()
timer.start()
if begin_epoch > cfg.SOLVER.NUM_EPOCHS:
raise ValueError(
("begin epoch[{}] is larger than cfg.SOLVER.NUM_EPOCHS[{}]").format(
begin_epoch, cfg.SOLVER.NUM_EPOCHS))
if args.use_mpio:
print_info("Use multiprocess reader")
else:
print_info("Use multi-thread reader")
for epoch in range(begin_epoch, cfg.SOLVER.NUM_EPOCHS + 1):
py_reader.start()
while True:
try:
if args.debug:
# Print category IoU and accuracy to check whether the
                    # training process corresponds to expectations
loss, lr, pred, grts, masks = exe.run(
program=compiled_train_prog,
fetch_list=fetch_list,
return_numpy=True)
cm.calculate(pred, grts, masks)
avg_loss += np.mean(np.array(loss))
global_step += 1
if global_step % args.log_steps == 0:
speed = args.log_steps / timer.elapsed_time()
avg_loss /= args.log_steps
category_acc, mean_acc = cm.accuracy()
category_iou, mean_iou = cm.mean_iou()
print_info((
"epoch={} step={} lr={:.5f} loss={:.4f} acc={:.5f} mIoU={:.5f} step/sec={:.3f} | ETA {}"
).format(epoch, global_step, lr[0], avg_loss, mean_acc,
mean_iou, speed,
calculate_eta(all_step - global_step, speed)))
print_info("Category IoU: ", category_iou)
print_info("Category Acc: ", category_acc)
if args.use_tb:
log_writer.add_scalar('Train/mean_iou', mean_iou,
global_step)
log_writer.add_scalar('Train/mean_acc', mean_acc,
global_step)
log_writer.add_scalar('Train/loss', avg_loss,
global_step)
log_writer.add_scalar('Train/lr', lr[0],
global_step)
log_writer.add_scalar('Train/step/sec', speed,
global_step)
sys.stdout.flush()
avg_loss = 0.0
cm.zero_matrix()
timer.restart()
else:
                    # If not in debug mode, avoid unnecessary logging and calculation
loss, lr = exe.run(
program=compiled_train_prog,
fetch_list=fetch_list,
return_numpy=True)
avg_loss += np.mean(np.array(loss))
global_step += 1
if global_step % args.log_steps == 0 and cfg.TRAINER_ID == 0:
avg_loss /= args.log_steps
speed = args.log_steps / timer.elapsed_time()
print((
"epoch={} step={} lr={:.5f} loss={:.4f} step/sec={:.3f} | ETA {}"
).format(epoch, global_step, lr[0], avg_loss, speed,
calculate_eta(all_step - global_step, speed)))
if args.use_tb:
log_writer.add_scalar('Train/loss', avg_loss,
global_step)
log_writer.add_scalar('Train/lr', lr[0],
global_step)
log_writer.add_scalar('Train/speed', speed,
global_step)
sys.stdout.flush()
avg_loss = 0.0
timer.restart()
except fluid.core.EOFException:
py_reader.reset()
break
except Exception as e:
print(e)
if epoch % cfg.TRAIN.SNAPSHOT_EPOCH == 0 and cfg.TRAINER_ID == 0:
ckpt_dir = save_prune_checkpoint(exe, train_prog, epoch)
if args.do_eval:
print("Evaluation start")
_, mean_iou, _, mean_acc = evaluate(
cfg=cfg,
ckpt_dir=ckpt_dir,
use_gpu=args.use_gpu,
use_mpio=args.use_mpio)
if args.use_tb:
log_writer.add_scalar('Evaluate/mean_iou', mean_iou,
global_step)
log_writer.add_scalar('Evaluate/mean_acc', mean_acc,
global_step)
# Use Tensorboard to visualize results
if args.use_tb and cfg.DATASET.VIS_FILE_LIST is not None:
visualize(
cfg=cfg,
use_gpu=args.use_gpu,
vis_file_list=cfg.DATASET.VIS_FILE_LIST,
vis_dir="visual",
ckpt_dir=ckpt_dir,
log_writer=log_writer)
# save final model
if cfg.TRAINER_ID == 0:
save_prune_checkpoint(exe, train_prog, 'final')
def main(args):
if args.cfg_file is not None:
cfg.update_from_file(args.cfg_file)
if args.opts is not None:
cfg.update_from_list(args.opts)
cfg.TRAINER_ID = int(os.getenv("PADDLE_TRAINER_ID", 0))
cfg.NUM_TRAINERS = int(os.environ.get('PADDLE_TRAINERS_NUM', 1))
cfg.check_and_infer()
print_info(pprint.pformat(cfg))
train(cfg)
if __name__ == '__main__':
args = parse_args()
if not fluid.core.is_compiled_with_cuda() and args.use_gpu:
print(
"You cannot set use_gpu=True because the installed PaddlePaddle is the CPU-only version."
)
print(
"Please 1. install paddlepaddle-gpu to run your model on GPU, or 2. set use_gpu=False to run it on CPU."
)
sys.exit(1)
main(args)
>运行该示例前请安装Paddle1.6或更高版本和PaddleSlim
# 分割模型量化压缩示例
## 概述
该示例使用PaddleSlim提供的[量化压缩API](https://paddlepaddle.github.io/PaddleSlim/api/quantization_api/)对分割模型进行压缩。
在阅读该示例前,建议您先了解以下内容:
- [分割模型的常规训练方法](../../docs/usage.md)
- [PaddleSlim使用文档](https://paddlepaddle.github.io/PaddleSlim/)
## 安装PaddleSlim
可按照[PaddleSlim使用文档](https://paddlepaddle.github.io/PaddleSlim/)中的步骤安装PaddleSlim。
## 训练
### 数据集
请按照分割库的教程下载数据集并放到对应位置。
### 下载训练好的分割模型
在分割库根目录下运行以下命令:
```bash
mkdir pretrain
cd pretrain
wget https://paddleseg.bj.bcebos.com/models/mobilenet_cityscapes.tgz
tar xf mobilenet_cityscapes.tgz
```
### 定义量化配置
```
config = {
    'weight_quantize_type': 'channel_wise_abs_max',
    'activation_quantize_type': 'moving_average_abs_max',
    'quantize_op_types': ['depthwise_conv2d', 'mul', 'conv2d'],
    'not_quant_pattern': ['last_conv']
}
```
如何配置以及含义请参考[PaddleSlim 量化API](https://paddlepaddle.github.io/PaddleSlim/api/quantization_api/)
### 插入量化反量化OP
使用[PaddleSlim quant_aware API](https://paddlepaddle.github.io/PaddleSlim/api/quantization_api/#quant_aware)在Program中插入量化和反量化OP。
```
compiled_train_prog = quant_aware(train_prog, place, config, for_test=False)
```
### 关闭一些训练策略
因为量化要对Program做修改,所以一些会修改Program的训练策略需要关闭。``sync_batch_norm`` 和量化多卡训练同时使用时会出错, 需要将其关闭。
```
build_strategy.fuse_all_reduce_ops = False
build_strategy.sync_batch_norm = False
```
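量化训练的完整接法可参考本目录下的[train_quant.py](./train_quant.py):先插入量化OP,再用关闭上述策略后的``build_strategy``进行多卡编译。以下为示意代码,假设``train_prog``、``place``、``config``、``avg_loss``、``exec_strategy``、``build_strategy``均已按常规训练脚本构建完成:
```
# 插入量化/反量化OP;for_test=True的图用于评估和保存模型
compiled_train_prog = quant_aware(train_prog, place, config, for_test=False)
eval_prog = quant_aware(train_prog, place, config, for_test=True)

# 关闭fuse_all_reduce_ops和sync_batch_norm之后,再进行多卡并行编译
compiled_train_prog = compiled_train_prog.with_data_parallel(
    loss_name=avg_loss.name,
    exec_strategy=exec_strategy,
    build_strategy=build_strategy)
```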
### 开始训练
step1: 设置gpu卡
```
export CUDA_VISIBLE_DEVICES=0
```
step2: 将``pdseg``文件夹加到系统路径
分割库根目录下运行以下命令
```
export PYTHONPATH=$PYTHONPATH:./pdseg
```
step3: 开始训练
在分割库根目录下运行以下命令进行训练。
```
python -u ./slim/quantization/train_quant.py --log_steps 10 --not_quant_pattern last_conv --cfg configs/deeplabv3p_mobilenetv2_cityscapes.yaml --use_gpu --use_mpio --do_eval \
TRAIN.PRETRAINED_MODEL_DIR "./pretrain/mobilenet_cityscapes/" \
TRAIN.MODEL_SAVE_DIR "./snapshots/mobilenetv2_quant" \
MODEL.DEEPLAB.ENCODER_WITH_ASPP False \
MODEL.DEEPLAB.ENABLE_DECODER False \
TRAIN.SYNC_BATCH_NORM False \
SOLVER.LR 0.0001 \
TRAIN.SNAPSHOT_EPOCH 1 \
SOLVER.NUM_EPOCHS 30 \
BATCH_SIZE 16
```
### 训练时的模型结构
[PaddleSlim 量化API](https://paddlepaddle.github.io/PaddleSlim/api/quantization_api/)文档中介绍了``paddleslim.quant.quant_aware````paddleslim.quant.convert``两个接口。
``paddleslim.quant.quant_aware`` 作用是在网络中的conv2d、depthwise_conv2d、mul等算子的各个输入前插入连续的量化op和反量化op,并改变相应反向算子的某些输入。示例图如下:
<p align="center">
<img src="./images/TransformPass.png" height=400 width=520 hspace='10'/> <br />
<strong>图1:应用 paddleslim.quant.quant_aware 后的结果</strong>
</p>
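为便于理解插入的量化OP所做的计算,下面用NumPy给出``channel_wise_abs_max``权重量化的一个简化示意(仅帮助理解,并非PaddleSlim的实际实现):
```
import numpy as np

# 假设一个conv2d权重,形状为[输出通道数, 输入通道数, kh, kw]
w = np.random.randn(4, 3, 3, 3).astype('float32')

# channel_wise_abs_max: 每个输出通道取绝对值最大值作为量化scale
scale = np.abs(w).reshape(4, -1).max(axis=1)
q = np.round(w / scale.reshape(4, 1, 1, 1) * 127.0)   # 量化到int8范围
w_dequant = q / 127.0 * scale.reshape(4, 1, 1, 1)     # 反量化回float32
print('max quantization error:', np.abs(w - w_dequant).max())
```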
### 边训练边测试
在脚本中边训练边测试得到的测试精度是基于图1中的网络结构进行的。
## 评估
### 最终评估模型
``paddleslim.quant.convert`` 主要用于改变Program中量化op和反量化op的顺序,即将类似图1中的量化op和反量化op顺序改变为图2中的布局。除此之外,``paddleslim.quant.convert`` 还会将`conv2d``depthwise_conv2d``mul`等算子参数变为量化后的int8_t范围内的值(但数据类型仍为float32),示例如图2:
<p align="center">
<img src="./images/FreezePass.png" height=400 width=420 hspace='10'/> <br />
<strong>图2:paddleslim.quant.convert 后的结果</strong>
</p>
所以在调用 ``paddleslim.quant.convert`` 之后,才得到最终的量化模型。此模型可使用PaddleLite进行加载预测,可参见教程[Paddle-Lite如何加载运行量化模型](https://github.com/PaddlePaddle/Paddle-Lite/wiki/model_quantization)
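若需要导出供Paddle-Lite等部署工具加载的推理模型,可在``convert``之后参考如下示意代码保存(其中输入变量名``image``与输出变量``pred``仅为假设,请以实际模型构建代码为准):
```
# 将convert后的Program保存为推理模型
fluid.io.save_inference_model(
    dirname='./quant_infer_model',
    feeded_var_names=['image'],
    target_vars=[pred],
    executor=exe,
    main_program=test_prog)
```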
### 评估脚本
使用脚本[slim/quantization/eval_quant.py](./eval_quant.py)进行评估。
- 定义配置。使用和训练脚本中一样的量化配置,以得到和量化训练时同样的模型。
- 使用 ``paddleslim.quant.quant_aware`` 插入量化和反量化op。
- 使用 ``paddleslim.quant.convert`` 改变op顺序,得到最终量化模型进行评估。
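上述三步对应的最小示意代码如下(量化配置与训练时一致,``test_prog``、``place``、``exe``、``ckpt_dir``假设已按常规评估脚本准备好):
```
import paddle.fluid as fluid
from paddleslim.quant import quant_aware, convert

config = {
    'weight_quantize_type': 'channel_wise_abs_max',
    'activation_quantize_type': 'moving_average_abs_max',
    'quantize_op_types': ['depthwise_conv2d', 'mul', 'conv2d'],
    'not_quant_pattern': ['last_conv']
}

# 1. 插入量化/反量化OP,保持与量化训练时一致的图结构
test_prog = quant_aware(test_prog, place, config, for_test=True)
# 2. 加载量化训练保存的模型参数
fluid.io.load_persistables(exe, ckpt_dir, main_program=test_prog)
# 3. 调整量化OP顺序,得到最终量化模型
test_prog = convert(test_prog, place, config)
```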
评估命令:
分割库根目录下运行
```
python -u ./slim/quantization/eval_quant.py --cfg configs/deeplabv3p_mobilenetv2_cityscapes.yaml --use_gpu --not_quant_pattern last_conv --use_mpio --convert \
TEST.TEST_MODEL "./snapshots/mobilenetv2_quant/best_model" \
MODEL.DEEPLAB.ENCODER_WITH_ASPP False \
MODEL.DEEPLAB.ENABLE_DECODER False \
TRAIN.SYNC_BATCH_NORM False \
BATCH_SIZE 16
```
## 量化结果
## FAQ
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import sys
import time
import argparse
import functools
import pprint
import cv2
import numpy as np
import paddle
import paddle.fluid as fluid
from utils.config import cfg
from utils.timer import Timer, calculate_eta
from models.model_builder import build_model
from models.model_builder import ModelPhase
from reader import SegDataset
from metrics import ConfusionMatrix
from paddleslim.quant import quant_aware, convert
def parse_args():
parser = argparse.ArgumentParser(description='PaddleSeg model evaluation')
parser.add_argument(
'--cfg',
dest='cfg_file',
help='Config file for training (and optionally testing)',
default=None,
type=str)
parser.add_argument(
'--use_gpu',
dest='use_gpu',
help='Use gpu or cpu',
action='store_true',
default=False)
parser.add_argument(
'--use_mpio',
dest='use_mpio',
help='Use multiprocess IO or not',
action='store_true',
default=False)
parser.add_argument(
'opts',
help='See utils/config.py for all options',
default=None,
nargs=argparse.REMAINDER)
parser.add_argument(
'--convert',
dest='convert',
help='Convert or not',
action='store_true',
default=False)
parser.add_argument(
"--not_quant_pattern",
nargs='+',
type=str,
help=
"Layers which name_scope contains string in not_quant_pattern will not be quantized"
)
if len(sys.argv) == 1:
parser.print_help()
sys.exit(1)
return parser.parse_args()
def evaluate(cfg, ckpt_dir=None, use_gpu=False, use_mpio=False, **kwargs):
np.set_printoptions(precision=5, suppress=True)
startup_prog = fluid.Program()
test_prog = fluid.Program()
dataset = SegDataset(
file_list=cfg.DATASET.VAL_FILE_LIST,
mode=ModelPhase.EVAL,
data_dir=cfg.DATASET.DATA_DIR)
def data_generator():
# TODO: check whether the batch reader is compatible with Windows
if use_mpio:
data_gen = dataset.multiprocess_generator(
num_processes=cfg.DATALOADER.NUM_WORKERS,
max_queue_size=cfg.DATALOADER.BUF_SIZE)
else:
data_gen = dataset.generator()
for b in data_gen:
yield b[0], b[1], b[2]
py_reader, avg_loss, pred, grts, masks = build_model(
test_prog, startup_prog, phase=ModelPhase.EVAL)
py_reader.decorate_sample_generator(
data_generator, drop_last=False, batch_size=cfg.BATCH_SIZE)
# Get device environment
places = fluid.cuda_places() if use_gpu else fluid.cpu_places()
place = places[0]
dev_count = len(places)
print("#Device count: {}".format(dev_count))
exe = fluid.Executor(place)
exe.run(startup_prog)
test_prog = test_prog.clone(for_test=True)
not_quant_pattern_list = []
if kwargs['not_quant_pattern'] is not None:
not_quant_pattern_list = kwargs['not_quant_pattern']
config = {
'weight_quantize_type': 'channel_wise_abs_max',
'activation_quantize_type': 'moving_average_abs_max',
'quantize_op_types': ['depthwise_conv2d', 'mul', 'conv2d'],
'not_quant_pattern': not_quant_pattern_list
}
test_prog = quant_aware(test_prog, place, config, for_test=True)
ckpt_dir = cfg.TEST.TEST_MODEL if not ckpt_dir else ckpt_dir
if not os.path.exists(ckpt_dir):
raise ValueError('The TEST.TEST_MODEL {} is not found'.format(ckpt_dir))
if ckpt_dir is not None:
print('load test model:', ckpt_dir)
fluid.io.load_persistables(exe, ckpt_dir, main_program=test_prog)
if kwargs['convert']:
test_prog = convert(test_prog, place, config)
# Use streaming confusion matrix to calculate mean_iou
np.set_printoptions(
precision=4, suppress=True, linewidth=160, floatmode="fixed")
conf_mat = ConfusionMatrix(cfg.DATASET.NUM_CLASSES, streaming=True)
fetch_list = [avg_loss.name, pred.name, grts.name, masks.name]
num_images = 0
step = 0
all_step = cfg.DATASET.TEST_TOTAL_IMAGES // cfg.BATCH_SIZE + 1
timer = Timer()
timer.start()
py_reader.start()
while True:
try:
step += 1
loss, pred, grts, masks = exe.run(
test_prog, fetch_list=fetch_list, return_numpy=True)
loss = np.mean(np.array(loss))
num_images += pred.shape[0]
conf_mat.calculate(pred, grts, masks)
_, iou = conf_mat.mean_iou()
_, acc = conf_mat.accuracy()
speed = 1.0 / timer.elapsed_time()
print(
"[EVAL]step={} loss={:.5f} acc={:.4f} IoU={:.4f} step/sec={:.2f} | ETA {}"
.format(step, loss, acc, iou, speed,
calculate_eta(all_step - step, speed)))
timer.restart()
sys.stdout.flush()
except fluid.core.EOFException:
break
category_iou, avg_iou = conf_mat.mean_iou()
category_acc, avg_acc = conf_mat.accuracy()
print("[EVAL]#image={} acc={:.4f} IoU={:.4f}".format(
num_images, avg_acc, avg_iou))
print("[EVAL]Category IoU:", category_iou)
print("[EVAL]Category Acc:", category_acc)
print("[EVAL]Kappa:{:.4f}".format(conf_mat.kappa()))
return category_iou, avg_iou, category_acc, avg_acc
def main():
args = parse_args()
if args.cfg_file is not None:
cfg.update_from_file(args.cfg_file)
if args.opts:
cfg.update_from_list(args.opts)
cfg.check_and_infer()
print(pprint.pformat(cfg))
evaluate(cfg, **args.__dict__)
if __name__ == '__main__':
main()
# coding: utf8
# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import sys
import argparse
import pprint
import random
import shutil
import functools
import paddle
import numpy as np
import paddle.fluid as fluid
from utils.config import cfg
from utils.timer import Timer, calculate_eta
from metrics import ConfusionMatrix
from reader import SegDataset
from models.model_builder import build_model
from models.model_builder import ModelPhase
from models.model_builder import parse_shape_from_file
from eval_quant import evaluate
from vis import visualize
from utils import dist_utils
from train import save_vars, save_checkpoint, load_checkpoint, update_best_model, print_info
from paddleslim.quant import quant_aware
def parse_args():
parser = argparse.ArgumentParser(description='PaddleSeg training')
parser.add_argument(
'--cfg',
dest='cfg_file',
help='Config file for training (and optionally testing)',
default=None,
type=str)
parser.add_argument(
'--use_gpu',
dest='use_gpu',
help='Use gpu or cpu',
action='store_true',
default=False)
parser.add_argument(
'--use_mpio',
dest='use_mpio',
help='Use multiprocess I/O or not',
action='store_true',
default=False)
parser.add_argument(
'--log_steps',
dest='log_steps',
help='Display logging information at every log_steps',
default=10,
type=int)
parser.add_argument(
'--debug',
dest='debug',
help='debug mode, display detail information of training',
action='store_true')
parser.add_argument(
'--do_eval',
dest='do_eval',
help='Evaluation models result on every new checkpoint',
action='store_true')
parser.add_argument(
'opts',
help='See utils/config.py for all options',
default=None,
nargs=argparse.REMAINDER)
parser.add_argument(
'--enable_ce',
dest='enable_ce',
help='If set True, enable continuous evaluation job.'
'This flag is only used for internal test.',
action='store_true')
parser.add_argument(
"--not_quant_pattern",
nargs='+',
type=str,
help=
"Layers which name_scope contains string in not_quant_pattern will not be quantized"
)
return parser.parse_args()
def train_quant(cfg):
startup_prog = fluid.Program()
train_prog = fluid.Program()
if args.enable_ce:
startup_prog.random_seed = 1000
train_prog.random_seed = 1000
drop_last = True
dataset = SegDataset(
file_list=cfg.DATASET.TRAIN_FILE_LIST,
mode=ModelPhase.TRAIN,
shuffle=True,
data_dir=cfg.DATASET.DATA_DIR)
def data_generator():
if args.use_mpio:
data_gen = dataset.multiprocess_generator(
num_processes=cfg.DATALOADER.NUM_WORKERS,
max_queue_size=cfg.DATALOADER.BUF_SIZE)
else:
data_gen = dataset.generator()
batch_data = []
for b in data_gen:
batch_data.append(b)
if len(batch_data) == (cfg.BATCH_SIZE // cfg.NUM_TRAINERS):
for item in batch_data:
yield item[0], item[1], item[2]
batch_data = []
# If the sync batch norm strategy is used, drop the last batch when the number
# of samples in batch_data is less than cfg.BATCH_SIZE to avoid NCCL hang issues
if not cfg.TRAIN.SYNC_BATCH_NORM:
for item in batch_data:
yield item[0], item[1], item[2]
# Get device environment
# places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places()
# place = places[0]
gpu_id = int(os.environ.get('FLAGS_selected_gpus', 0))
place = fluid.CUDAPlace(gpu_id) if args.use_gpu else fluid.CPUPlace()
places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places()
# Get number of GPU
dev_count = cfg.NUM_TRAINERS if cfg.NUM_TRAINERS > 1 else len(places)
print_info("#Device count: {}".format(dev_count))
# Make sure BATCH_SIZE is divisible by the number of GPU cards
assert cfg.BATCH_SIZE % dev_count == 0, (
'BATCH_SIZE:{} is not divisible by the number of GPUs:{}'.format(
cfg.BATCH_SIZE, dev_count))
# In multi-GPU training mode, batch data will be allocated to each GPU evenly
batch_size_per_dev = cfg.BATCH_SIZE // dev_count
print_info("batch_size_per_dev: {}".format(batch_size_per_dev))
py_reader, avg_loss, lr, pred, grts, masks = build_model(
train_prog, startup_prog, phase=ModelPhase.TRAIN)
py_reader.decorate_sample_generator(
data_generator, batch_size=batch_size_per_dev, drop_last=drop_last)
exe = fluid.Executor(place)
exe.run(startup_prog)
exec_strategy = fluid.ExecutionStrategy()
# Clear temporary variables every 100 iterations
if args.use_gpu:
exec_strategy.num_threads = fluid.core.get_cuda_device_count()
exec_strategy.num_iteration_per_drop_scope = 100
build_strategy = fluid.BuildStrategy()
if cfg.NUM_TRAINERS > 1 and args.use_gpu:
dist_utils.prepare_for_multi_process(exe, build_strategy, train_prog)
exec_strategy.num_threads = 1
# Resume training
begin_epoch = cfg.SOLVER.BEGIN_EPOCH
if cfg.TRAIN.RESUME_MODEL_DIR:
begin_epoch = load_checkpoint(exe, train_prog)
# Load pretrained model
elif os.path.exists(cfg.TRAIN.PRETRAINED_MODEL_DIR):
print_info('Pretrained model dir: ', cfg.TRAIN.PRETRAINED_MODEL_DIR)
load_vars = []
load_fail_vars = []
def var_shape_matched(var, shape):
"""
Check whether the persistable variable shape matches the current network
"""
var_exist = os.path.exists(
os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name))
if var_exist:
var_shape = parse_shape_from_file(
os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name))
return var_shape == shape
return False
for x in train_prog.list_vars():
if isinstance(x, fluid.framework.Parameter):
shape = tuple(fluid.global_scope().find_var(
x.name).get_tensor().shape())
if var_shape_matched(x, shape):
load_vars.append(x)
else:
load_fail_vars.append(x)
fluid.io.load_vars(
exe, dirname=cfg.TRAIN.PRETRAINED_MODEL_DIR, vars=load_vars)
for var in load_vars:
print_info("Parameter[{}] loaded sucessfully!".format(var.name))
for var in load_fail_vars:
print_info(
"Parameter[{}] don't exist or shape does not match current network, skip"
" to load it.".format(var.name))
print_info("{}/{} pretrained parameters loaded successfully!".format(
len(load_vars),
len(load_vars) + len(load_fail_vars)))
else:
print_info(
'Pretrained model dir {} does not exist, training from scratch...'.
format(cfg.TRAIN.PRETRAINED_MODEL_DIR))
fetch_list = [avg_loss.name, lr.name]
if args.debug:
# Fetch more variable info and use streaming confusion matrix to
# calculate IoU results if in debug mode
np.set_printoptions(
precision=4, suppress=True, linewidth=160, floatmode="fixed")
fetch_list.extend([pred.name, grts.name, masks.name])
cm = ConfusionMatrix(cfg.DATASET.NUM_CLASSES, streaming=True)
not_quant_pattern = []
if args.not_quant_pattern:
not_quant_pattern = args.not_quant_pattern
config = {
'weight_quantize_type': 'channel_wise_abs_max',
'activation_quantize_type': 'moving_average_abs_max',
'quantize_op_types': ['depthwise_conv2d', 'mul', 'conv2d'],
'not_quant_pattern': not_quant_pattern
}
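# Insert fake quantization/dequantization ops for quantization-aware training;
# the for_test=True graph is used later for evaluation and checkpoint saving.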
compiled_train_prog = quant_aware(train_prog, place, config, for_test=False)
eval_prog = quant_aware(train_prog, place, config, for_test=True)
build_strategy.fuse_all_reduce_ops = False
build_strategy.sync_batch_norm = False
compiled_train_prog = compiled_train_prog.with_data_parallel(
loss_name=avg_loss.name,
exec_strategy=exec_strategy,
build_strategy=build_strategy)
# trainer_id = int(os.getenv("PADDLE_TRAINER_ID", 0))
# num_trainers = int(os.environ.get('PADDLE_TRAINERS_NUM', 1))
global_step = 0
all_step = cfg.DATASET.TRAIN_TOTAL_IMAGES // cfg.BATCH_SIZE
if cfg.DATASET.TRAIN_TOTAL_IMAGES % cfg.BATCH_SIZE and drop_last != True:
all_step += 1
all_step *= (cfg.SOLVER.NUM_EPOCHS - begin_epoch + 1)
avg_loss = 0.0
best_mIoU = 0.0
timer = Timer()
timer.start()
if begin_epoch > cfg.SOLVER.NUM_EPOCHS:
raise ValueError(
("begin epoch[{}] is larger than cfg.SOLVER.NUM_EPOCHS[{}]").format(
begin_epoch, cfg.SOLVER.NUM_EPOCHS))
if args.use_mpio:
print_info("Use multiprocess reader")
else:
print_info("Use multi-thread reader")
for epoch in range(begin_epoch, cfg.SOLVER.NUM_EPOCHS + 1):
py_reader.start()
while True:
try:
if args.debug:
# Print category IoU and accuracy to check whether the
# training process matches expectations
loss, lr, pred, grts, masks = exe.run(
program=compiled_train_prog,
fetch_list=fetch_list,
return_numpy=True)
cm.calculate(pred, grts, masks)
avg_loss += np.mean(np.array(loss))
global_step += 1
if global_step % args.log_steps == 0:
speed = args.log_steps / timer.elapsed_time()
avg_loss /= args.log_steps
category_acc, mean_acc = cm.accuracy()
category_iou, mean_iou = cm.mean_iou()
print_info((
"epoch={} step={} lr={:.5f} loss={:.4f} acc={:.5f} mIoU={:.5f} step/sec={:.3f} | ETA {}"
).format(epoch, global_step, lr[0], avg_loss, mean_acc,
mean_iou, speed,
calculate_eta(all_step - global_step, speed)))
print_info("Category IoU: ", category_iou)
print_info("Category Acc: ", category_acc)
sys.stdout.flush()
avg_loss = 0.0
cm.zero_matrix()
timer.restart()
else:
# If not in debug mode, avoid unnecessary logging and calculation
loss, lr = exe.run(
program=compiled_train_prog,
fetch_list=fetch_list,
return_numpy=True)
avg_loss += np.mean(np.array(loss))
global_step += 1
if global_step % args.log_steps == 0 and cfg.TRAINER_ID == 0:
avg_loss /= args.log_steps
speed = args.log_steps / timer.elapsed_time()
print((
"epoch={} step={} lr={:.5f} loss={:.4f} step/sec={:.3f} | ETA {}"
).format(epoch, global_step, lr[0], avg_loss, speed,
calculate_eta(all_step - global_step, speed)))
sys.stdout.flush()
avg_loss = 0.0
timer.restart()
except fluid.core.EOFException:
py_reader.reset()
break
except Exception as e:
print(e)
if (epoch % cfg.TRAIN.SNAPSHOT_EPOCH == 0
or epoch == cfg.SOLVER.NUM_EPOCHS) and cfg.TRAINER_ID == 0:
ckpt_dir = save_checkpoint(exe, eval_prog, epoch)
if args.do_eval:
print("Evaluation start")
_, mean_iou, _, mean_acc = evaluate(
cfg=cfg,
ckpt_dir=ckpt_dir,
use_gpu=args.use_gpu,
use_mpio=args.use_mpio,
not_quant_pattern=args.not_quant_pattern,
convert=False)
if mean_iou > best_mIoU:
best_mIoU = mean_iou
update_best_model(ckpt_dir)
print_info("Save best model {} to {}, mIoU = {:.4f}".format(
ckpt_dir,
os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, 'best_model'),
mean_iou))
# save final model
if cfg.TRAINER_ID == 0:
save_checkpoint(exe, eval_prog, 'final')
def main(args):
if args.cfg_file is not None:
cfg.update_from_file(args.cfg_file)
if args.opts:
cfg.update_from_list(args.opts)
if args.enable_ce:
random.seed(0)
np.random.seed(0)
cfg.TRAINER_ID = int(os.getenv("PADDLE_TRAINER_ID", 0))
cfg.NUM_TRAINERS = int(os.environ.get('PADDLE_TRAINERS_NUM', 1))
cfg.check_and_infer()
print_info(pprint.pformat(cfg))
train_quant(cfg)
if __name__ == '__main__':
args = parse_args()
if not fluid.core.is_compiled_with_cuda() and args.use_gpu:
print(
"You cannot set use_gpu=True because the installed PaddlePaddle is the CPU-only version."
)
print(
"Please 1. install paddlepaddle-gpu to run your model on GPU, or 2. set use_gpu=False to run it on CPU."
)
sys.exit(1)
main(args)
# DeepLabv3+模型训练教程
# DeepLabv3+模型使用教程
* 本教程旨在介绍如何通过使用PaddleSeg提供的 ***`DeeplabV3+/Xception65/BatchNorm`*** 预训练模型在自定义数据集上进行训练。除了该配置之外,DeeplabV3+还支持以下不同[模型组合](#模型组合)的预训练模型,如果需要使用对应模型作为预训练模型,将下述内容中的Xception Backbone中的内容进行替换即可
本教程旨在介绍如何使用`DeepLabv3+`预训练模型在自定义数据集上进行训练、评估和可视化。我们以`DeeplabV3+/Xception65/BatchNorm`预训练模型为例。
* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解
* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解
* 本教程的所有命令都基于PaddleSeg主目录进行执行
* 本教程的所有命令都基于PaddleSeg主目录进行执行
## 一. 准备待训练数据
我们提前准备好了一份数据集,通过以下代码进行下载
![](./imgs/optic.png)
我们提前准备好了一份眼底医疗分割数据集,包含267张训练图片、76张验证图片、38张测试图片。通过以下命令进行下载:
```shell
python dataset/download_pet.py
python dataset/download_optic.py
```
## 二. 下载预训练模型
关于PaddleSeg支持的所有预训练模型的列表,我们可以从[模型组合](#模型组合)中查看我们所需模型的名字和配置
接着下载对应的预训练模型
```shell
python pretrained_model/download_model.py deeplabv3p_xception65_bn_coco
```
关于已有的DeepLabv3+预训练模型的列表,请参见[模型组合](#模型组合)。如果需要使用其他预训练模型,下载该模型并将配置中的BACKBONE、NORM_TYPE等进行替换即可。
## 三. 准备配置
接着我们需要确定相关配置,从本教程的角度,配置分为三部分:
......@@ -45,19 +48,19 @@ python pretrained_model/download_model.py deeplabv3p_xception65_bn_coco
在三者中,预训练模型的配置尤为重要,如果模型或者BACKBONE配置错误,会导致预训练的参数没有加载,进而影响收敛速度。预训练模型相关的配置如第二步所展示。
数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/mini_pet`
数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/optic_disc_seg`中。
其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/deeplabv3p_xception65_pet.yaml**
其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/deeplabv3p_xception65_optic.yaml**
```yaml
# 数据集配置
DATASET:
DATA_DIR: "./dataset/mini_pet/"
NUM_CLASSES: 3
TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt"
VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt"
VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
DATA_DIR: "./dataset/optic_disc_seg/"
NUM_CLASSES: 2
TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt"
VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt"
VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
# 预训练模型配置
MODEL:
......@@ -75,15 +78,15 @@ AUG:
BATCH_SIZE: 4
TRAIN:
PRETRAINED_MODEL_DIR: "./pretrained_model/deeplabv3p_xception65_bn_coco/"
MODEL_SAVE_DIR: "./saved_model/deeplabv3p_xception65_bn_pet/"
SNAPSHOT_EPOCH: 10
MODEL_SAVE_DIR: "./saved_model/deeplabv3p_xception65_bn_optic/"
SNAPSHOT_EPOCH: 5
TEST:
TEST_MODEL: "./saved_model/deeplabv3p_xception65_bn_pet/final"
TEST_MODEL: "./saved_model/deeplabv3p_xception65_bn_optic/final"
SOLVER:
NUM_EPOCHS: 100
LR: 0.005
NUM_EPOCHS: 10
LR: 0.001
LR_POLICY: "poly"
OPTIMIZER: "sgd"
OPTIMIZER: "adam"
```
## 四. 配置/数据校验
......@@ -91,7 +94,7 @@ SOLVER:
在开始训练和评估之前,我们还需要对配置和数据进行一次校验,确保数据和配置是正确的。使用下述命令启动校验流程
```shell
python pdseg/check.py --cfg ./configs/deeplabv3p_xception65_pet.yaml
python pdseg/check.py --cfg ./configs/deeplabv3p_xception65_optic.yaml
```
......@@ -100,7 +103,10 @@ python pdseg/check.py --cfg ./configs/deeplabv3p_xception65_pet.yaml
校验通过后,使用下述命令启动训练
```shell
python pdseg/train.py --use_gpu --cfg ./configs/deeplabv3p_xception65_pet.yaml
# 指定GPU卡号(以0号卡为例)
export CUDA_VISIBLE_DEVICES=0
# 训练
python pdseg/train.py --use_gpu --cfg ./configs/deeplabv3p_xception65_optic.yaml
```
## 六. 进行评估
......@@ -108,22 +114,39 @@ python pdseg/train.py --use_gpu --cfg ./configs/deeplabv3p_xception65_pet.yaml
模型训练完成,使用下述命令启动评估
```shell
python pdseg/eval.py --use_gpu --cfg ./configs/deeplabv3p_xception65_pet.yaml
python pdseg/eval.py --use_gpu --cfg ./configs/deeplabv3p_xception65_optic.yaml
```
## 七. 进行可视化
使用下述命令启动预测和可视化
```shell
python pdseg/vis.py --use_gpu --cfg ./configs/deeplabv3p_xception65_optic.yaml
```
预测结果将保存在`visual`目录下,以下展示其中1张图片的预测效果:
![](imgs/optic_deeplab.png)
## 在线体验
PaddleSeg在AI Studio平台上提供了在线体验的DeepLabv3+图像分割教程,欢迎[点击体验](https://aistudio.baidu.com/aistudio/projectDetail/226703)
## 模型组合
|预训练模型名称|BackBone|Norm Type|数据集|配置|
|预训练模型名称|Backbone|Norm Type|数据集|配置|
|-|-|-|-|-|
|mobilenetv2-2-0_bn_imagenet|-|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenetv2 <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 2.0 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|mobilenetv2-1-5_bn_imagenet|-|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenetv2 <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.5 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|mobilenetv2-1-0_bn_imagenet|-|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenetv2 <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|mobilenetv2-0-5_bn_imagenet|-|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenetv2 <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 0.5 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|mobilenetv2-0-25_bn_imagenet|-|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenetv2 <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 0.25 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|xception41_imagenet|-|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_41 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|xception65_imagenet|-|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_65 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|deeplabv3p_mobilenetv2-1-0_bn_coco|MobileNet V2|bn|COCO|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenetv2 <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0 <br> MODEL.DEEPLAB.ENCODER_WITH_ASPP: False <br> MODEL.DEEPLAB.ENABLE_DECODER: False <br> MODEL.DEFAULT_NORM_TYPE: bn|
|**deeplabv3p_xception65_bn_coco**|Xception|bn|COCO|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_65 <br> MODEL.DEFAULT_NORM_TYPE: bn |
|deeplabv3p_mobilenetv2-1-0_bn_cityscapes|MobileNet V2|bn|Cityscapes|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenetv2 <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0 <br> MODEL.DEEPLAB.ENCODER_WITH_ASPP: False <br> MODEL.DEEPLAB.ENABLE_DECODER: False <br> MODEL.DEFAULT_NORM_TYPE: bn|
|deeplabv3p_xception65_gn_cityscapes|Xception|gn|Cityscapes|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_65 <br> MODEL.DEFAULT_NORM_TYPE: gn|
|deeplabv3p_xception65_bn_cityscapes|Xception|bn|Cityscapes|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_65 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|mobilenetv2-2-0_bn_imagenet|MobileNetV2|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenetv2 <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 2.0 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|mobilenetv2-1-5_bn_imagenet|MobileNetV2|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenetv2 <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.5 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|mobilenetv2-1-0_bn_imagenet|MobileNetV2|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenetv2 <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|mobilenetv2-0-5_bn_imagenet|MobileNetV2|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenetv2 <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 0.5 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|mobilenetv2-0-25_bn_imagenet|MobileNetV2|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenetv2 <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 0.25 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|xception41_imagenet|Xception41|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_41 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|xception65_imagenet|Xception65|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_65 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|deeplabv3p_mobilenetv2-1-0_bn_coco|MobileNetV2|bn|COCO|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenetv2 <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0 <br> MODEL.DEEPLAB.ENCODER_WITH_ASPP: False <br> MODEL.DEEPLAB.ENABLE_DECODER: False <br> MODEL.DEFAULT_NORM_TYPE: bn|
|**deeplabv3p_xception65_bn_coco**|Xception65|bn|COCO|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_65 <br> MODEL.DEFAULT_NORM_TYPE: bn |
|deeplabv3p_mobilenetv2-1-0_bn_cityscapes|MobileNetV2|bn|Cityscapes|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenetv2 <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0 <br> MODEL.DEEPLAB.ENCODER_WITH_ASPP: False <br> MODEL.DEEPLAB.ENABLE_DECODER: False <br> MODEL.DEFAULT_NORM_TYPE: bn|
|deeplabv3p_xception65_gn_cityscapes|Xception65|gn|Cityscapes|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_65 <br> MODEL.DEFAULT_NORM_TYPE: gn|
|deeplabv3p_xception65_bn_cityscapes|Xception65|bn|Cityscapes|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_65 <br> MODEL.DEFAULT_NORM_TYPE: bn|
# Fast-SCNN模型训练教程
* 本教程旨在介绍如何通过使用PaddleSeg提供的 ***`Fast_scnn_cityscapes`*** 预训练模型在自定义数据集上进行训练。
* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解
* 本教程的所有命令都基于PaddleSeg主目录进行执行
## 一. 准备待训练数据
我们提前准备好了一份数据集,通过以下代码进行下载
```shell
python dataset/download_pet.py
```
## 二. 下载预训练模型
```shell
python pretrained_model/download_model.py fast_scnn_cityscapes
```
## 三. 准备配置
接着我们需要确定相关配置,从本教程的角度,配置分为三部分:
* 数据集
* 训练集主目录
* 训练集文件列表
* 测试集文件列表
* 评估集文件列表
* 预训练模型
* 预训练模型名称
* 预训练模型的backbone网络
* 预训练模型的Normalization类型
* 预训练模型路径
* 其他
* 学习率
* Batch大小
* ...
在三者中,预训练模型的配置尤为重要,如果模型或者BACKBONE配置错误,会导致预训练的参数没有加载,进而影响收敛速度。预训练模型相关的配置如第二步所展示。
数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/mini_pet`
其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/fast_scnn_pet.yaml**
```yaml
# 数据集配置
DATASET:
DATA_DIR: "./dataset/mini_pet/"
NUM_CLASSES: 3
TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt"
VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt"
VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
# 预训练模型配置
MODEL:
MODEL_NAME: "fast_scnn"
DEFAULT_NORM_TYPE: "bn"
# 其他配置
TRAIN_CROP_SIZE: (512, 512)
EVAL_CROP_SIZE: (512, 512)
AUG:
AUG_METHOD: "unpadding"
FIX_RESIZE_SIZE: (512, 512)
BATCH_SIZE: 4
TRAIN:
PRETRAINED_MODEL_DIR: "./pretrained_model/fast_scnn_cityscapes/"
MODEL_SAVE_DIR: "./saved_model/fast_scnn_pet/"
SNAPSHOT_EPOCH: 10
TEST:
TEST_MODEL: "./saved_model/fast_scnn_pet/final"
SOLVER:
NUM_EPOCHS: 100
LR: 0.005
LR_POLICY: "poly"
OPTIMIZER: "sgd"
```
## 四. 配置/数据校验
在开始训练和评估之前,我们还需要对配置和数据进行一次校验,确保数据和配置是正确的。使用下述命令启动校验流程
```shell
python pdseg/check.py --cfg ./configs/fast_scnn_pet.yaml
```
## 五. 开始训练
校验通过后,使用下述命令启动训练
```shell
python pdseg/train.py --use_gpu --cfg ./configs/fast_scnn_pet.yaml
```
## 六. 进行评估
模型训练完成,使用下述命令启动评估
```shell
python pdseg/eval.py --use_gpu --cfg ./configs/fast_scnn_pet.yaml
```
## 七. 实时分割模型推理时间比较
| 模型 | eval size | inference time | mIoU on cityscape val|
|---|---|---|---|
| DeepLabv3+/MobileNetv2/bn | (1024, 2048) |16.14ms| 0.698|
| ICNet/bn |(1024, 2048) |8.76ms| 0.6831 |
| Fast-SCNN/bn | (1024, 2048) |6.28ms| 0.6964 |
上述测试环境为V100。测试使用Paddle的推理接口[zero_copy](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/advanced_usage/deploy/inference/python_infer_cn.html#id8)的方式,模型输出是类别,即argmax后的值。
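如果想复现上述zero_copy方式的推理测试,可参考如下示意代码(模型路径、输入变量与尺寸均为假设,请按实际导出的模型修改):
```python
import numpy as np
from paddle.fluid.core import AnalysisConfig, create_paddle_predictor

# 加载导出的推理模型(路径与文件名为假设)
config = AnalysisConfig('./freeze_model/__model__', './freeze_model/__params__')
config.enable_use_gpu(100, 0)
config.switch_use_feed_fetch_ops(False)  # zero_copy方式需关闭feed/fetch OP
predictor = create_paddle_predictor(config)

# 构造NCHW输入(1024x2048与上表的eval size一致)
data = np.random.rand(1, 3, 1024, 2048).astype('float32')
input_tensor = predictor.get_input_tensor(predictor.get_input_names()[0])
input_tensor.copy_from_cpu(data)

predictor.zero_copy_run()

output_tensor = predictor.get_output_tensor(predictor.get_output_names()[0])
result = output_tensor.copy_to_cpu()  # 输出为argmax后的类别
```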
# HRNet模型训练教程
# HRNet模型使用教程
* 本教程旨在介绍如何通过使用PaddleSeg提供的 ***`HRNet`*** 预训练模型在自定义数据集上进行训练
本教程旨在介绍如何通过使用PaddleSeg提供的 ***`HRNet`*** 预训练模型在自定义数据集上进行训练、评估和可视化
* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解
* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解
* 本教程的所有命令都基于PaddleSeg主目录进行执行
* 本教程的所有命令都基于PaddleSeg主目录进行执行
## 一. 准备待训练数据
我们提前准备好了一份数据集,通过以下代码进行下载
![](./imgs/optic.png)
我们提前准备好了一份眼底医疗分割数据集,包含267张训练图片、76张验证图片、38张测试图片。通过以下命令进行下载:
```shell
python dataset/download_pet.py
python dataset/download_optic.py
```
## 二. 下载预训练模型
关于PaddleSeg支持的所有预训练模型的列表,我们可以从[模型组合](#模型组合)中查看我们所需模型的名字和配置
## 二. 下载预训练模型
接着下载对应的预训练模型
......@@ -24,6 +25,8 @@ python dataset/download_pet.py
python pretrained_model/download_model.py hrnet_w18_bn_cityscapes
```
关于已有的HRNet预训练模型的列表,请参见[模型组合](#模型组合)。如果需要使用其他预训练模型,下载该模型并将配置中的BACKBONE、NORM_TYPE等进行替换即可。
## 三. 准备配置
接着我们需要确定相关配置,从本教程的角度,配置分为三部分:
......@@ -45,19 +48,19 @@ python pretrained_model/download_model.py hrnet_w18_bn_cityscapes
在三者中,预训练模型的配置尤为重要,如果模型配置错误,会导致预训练的参数没有加载,进而影响收敛速度。预训练模型相关的配置如第二步所展示。
数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/mini_pet`
数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/optic_disc_seg`
其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/hrnet_w18_pet.yaml**
其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/hrnet_optic.yaml**
```yaml
# 数据集配置
DATASET:
DATA_DIR: "./dataset/mini_pet/"
NUM_CLASSES: 3
TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt"
VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt"
VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
DATA_DIR: "./dataset/optic_disc_seg/"
NUM_CLASSES: 2
TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt"
VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt"
VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
# 预训练模型配置
MODEL:
......@@ -80,15 +83,15 @@ AUG:
BATCH_SIZE: 4
TRAIN:
PRETRAINED_MODEL_DIR: "./pretrained_model/hrnet_w18_bn_cityscapes/"
MODEL_SAVE_DIR: "./saved_model/hrnet_w18_bn_pet/"
SNAPSHOT_EPOCH: 10
MODEL_SAVE_DIR: "./saved_model/hrnet_optic/"
SNAPSHOT_EPOCH: 5
TEST:
TEST_MODEL: "./saved_model/hrnet_w18_bn_pet/final"
TEST_MODEL: "./saved_model/hrnet_optic/final"
SOLVER:
NUM_EPOCHS: 100
LR: 0.005
NUM_EPOCHS: 10
LR: 0.001
LR_POLICY: "poly"
OPTIMIZER: "sgd"
OPTIMIZER: "adam"
```
## 四. 配置/数据校验
......@@ -96,7 +99,7 @@ SOLVER:
在开始训练和评估之前,我们还需要对配置和数据进行一次校验,确保数据和配置是正确的。使用下述命令启动校验流程
```shell
python pdseg/check.py --cfg ./configs/hrnet_w18_pet.yaml
python pdseg/check.py --cfg ./configs/hrnet_optic.yaml
```
......@@ -105,7 +108,10 @@ python pdseg/check.py --cfg ./configs/hrnet_w18_pet.yaml
校验通过后,使用下述命令启动训练
```shell
python pdseg/train.py --use_gpu --cfg ./configs/hrnet_w18_pet.yaml
# 指定GPU卡号(以0号卡为例)
export CUDA_VISIBLE_DEVICES=0
# 训练
python pdseg/train.py --use_gpu --cfg ./configs/hrnet_optic.yaml
```
## 六. 进行评估
......@@ -113,19 +119,30 @@ python pdseg/train.py --use_gpu --cfg ./configs/hrnet_w18_pet.yaml
模型训练完成,使用下述命令启动评估
```shell
python pdseg/eval.py --use_gpu --cfg ./configs/hrnet_w18_pet.yaml
python pdseg/eval.py --use_gpu --cfg ./configs/hrnet_optic.yaml
```
## 七. 进行可视化
使用下述命令启动预测和可视化
```shell
python pdseg/vis.py --use_gpu --cfg ./configs/hrnet_optic.yaml
```
预测结果将保存在visual目录下,以下展示其中1张图片的预测效果:
![](imgs/optic_hrnet.png)
## 模型组合
|预训练模型名称|BackBone|Norm Type|数据集|配置|
|-|-|-|-|-|
|hrnet_w18_bn_cityscapes|-|bn| ImageNet | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [18, 36] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [18, 36, 72] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [18, 36, 72, 144] <br> MODEL.DEFAULT_NORM_TYPE: bn|
| hrnet_w18_bn_imagenet |-|bn| ImageNet | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [18, 36] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [18, 36, 72] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [18, 36, 72, 144] <br> MODEL.DEFAULT_NORM_TYPE: bn |
| hrnet_w30_bn_imagenet |-|bn| ImageNet | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [30, 60] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [30, 60, 120] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [30, 60, 120, 240] <br> MODEL.DEFAULT_NORM_TYPE: bn |
| hrnet_w32_bn_imagenet |-|bn| ImageNet | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [32, 64] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [32, 64, 128] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [32, 64, 128, 256] <br> MODEL.DEFAULT_NORM_TYPE: bn |
| hrnet_w40_bn_imagenet |-|bn| ImageNet | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [40, 80] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [40, 80, 160] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [40, 80, 160, 320] <br> MODEL.DEFAULT_NORM_TYPE: bn |
| hrnet_w44_bn_imagenet |-|bn| ImageNet | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [44, 88] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [44, 88, 176] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [44, 88, 176, 352] <br> MODEL.DEFAULT_NORM_TYPE: bn |
| hrnet_w48_bn_imagenet |-|bn| ImageNet | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [48, 96] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [48, 96, 192] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [48, 96, 192, 384] <br> MODEL.DEFAULT_NORM_TYPE: bn |
| hrnet_w64_bn_imagenet |-|bn| ImageNet | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [64, 128] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [64, 128, 256] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [64, 128, 256, 512] <br> MODEL.DEFAULT_NORM_TYPE: bn |
|预训练模型名称|Backbone|数据集|配置|
|-|-|-|-|
|hrnet_w18_bn_cityscapes|HRNet| Cityscapes | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [18, 36] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [18, 36, 72] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [18, 36, 72, 144] <br> MODEL.DEFAULT_NORM_TYPE: bn|
| hrnet_w18_bn_imagenet |HRNet| ImageNet | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [18, 36] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [18, 36, 72] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [18, 36, 72, 144] <br> MODEL.DEFAULT_NORM_TYPE: bn |
| hrnet_w30_bn_imagenet |HRNet| ImageNet | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [30, 60] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [30, 60, 120] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [30, 60, 120, 240] <br> MODEL.DEFAULT_NORM_TYPE: bn |
| hrnet_w32_bn_imagenet |HRNet|ImageNet | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [32, 64] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [32, 64, 128] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [32, 64, 128, 256] <br> MODEL.DEFAULT_NORM_TYPE: bn |
| hrnet_w40_bn_imagenet |HRNet| ImageNet | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [40, 80] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [40, 80, 160] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [40, 80, 160, 320] <br> MODEL.DEFAULT_NORM_TYPE: bn |
| hrnet_w44_bn_imagenet |HRNet| ImageNet | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [44, 88] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [44, 88, 176] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [44, 88, 176, 352] <br> MODEL.DEFAULT_NORM_TYPE: bn |
| hrnet_w48_bn_imagenet |HRNet| ImageNet | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [48, 96] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [48, 96, 192] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [48, 96, 192, 384] <br> MODEL.DEFAULT_NORM_TYPE: bn |
| hrnet_w64_bn_imagenet |HRNet| ImageNet | MODEL.MODEL_NAME: hrnet <br> MODEL.HRNET.STAGE2.NUM_CHANNELS: [64, 128] <br> MODEL.HRNET.STAGE3.NUM_CHANNELS: [64, 128, 256] <br> MODEL.HRNET.STAGE4.NUM_CHANNELS: [64, 128, 256, 512] <br> MODEL.DEFAULT_NORM_TYPE: bn |
# ICNet模型训练教程
# ICNet模型使用教程
* 本教程旨在介绍如何通过使用PaddleSeg提供的 ***`ICNet`*** 预训练模型在自定义数据集上进行训练
本教程旨在介绍如何通过使用PaddleSeg提供的 ***`ICNet`*** 预训练模型在自定义数据集上进行训练、评估和可视化。
* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解
* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解
* 本教程的所有命令都基于PaddleSeg主目录进行执行
* 本教程的所有命令都基于PaddleSeg主目录进行执行
* 注意 ***`ICNet`*** 不支持在CPU环境上训练和评估
## 一. 准备待训练数据
我们提前准备好了一份数据集,通过以下代码进行下载
![](./imgs/optic.png)
我们提前准备好了一份眼底医疗分割数据集,包含267张训练图片、76张验证图片、38张测试图片。通过以下命令进行下载:
```shell
python dataset/download_pet.py
python dataset/download_optic.py
```
## 二. 下载预训练模型
关于PaddleSeg支持的所有预训练模型的列表,我们可以从[模型组合](#模型组合)中查看我们所需模型的名字和配置。
接着下载对应的预训练模型
```shell
python pretrained_model/download_model.py icnet_bn_cityscapes
```
关于已有的ICNet预训练模型的列表,请参见[模型组合](#模型组合)。如果需要使用其他预训练模型,下载该模型并将配置中的BACKBONE、NORM_TYPE等进行替换即可。
## 三. 准备配置
接着我们需要确定相关配置,从本教程的角度,配置分为三部分:
......@@ -48,20 +50,19 @@ python pretrained_model/download_model.py icnet_bn_cityscapes
在三者中,预训练模型的配置尤为重要,如果模型或者BACKBONE配置错误,会导致预训练的参数没有加载,进而影响收敛速度。预训练模型相关的配置如第二步所示。
数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/mini_pet`
数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/optic_disc_seg`
其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/icnet_pet.yaml**
其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/icnet_optic.yaml**
```yaml
# 数据集配置
DATASET:
DATA_DIR: "./dataset/mini_pet/"
NUM_CLASSES: 3
TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt"
VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt"
VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
DATA_DIR: "./dataset/optic_disc_seg/"
NUM_CLASSES: 2
TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt"
VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt"
VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
# 预训练模型配置
MODEL:
......@@ -80,15 +81,15 @@ AUG:
BATCH_SIZE: 4
TRAIN:
PRETRAINED_MODEL_DIR: "./pretrained_model/icnet_bn_cityscapes/"
MODEL_SAVE_DIR: "./saved_model/icnet_pet/"
SNAPSHOT_EPOCH: 10
MODEL_SAVE_DIR: "./saved_model/icnet_optic/"
SNAPSHOT_EPOCH: 5
TEST:
TEST_MODEL: "./saved_model/icnet_pet/final"
TEST_MODEL: "./saved_model/icnet_optic/final"
SOLVER:
NUM_EPOCHS: 100
LR: 0.005
NUM_EPOCHS: 10
LR: 0.001
LR_POLICY: "poly"
OPTIMIZER: "sgd"
OPTIMIZER: "adam"
```
## 四. 配置/数据校验
......@@ -96,7 +97,7 @@ SOLVER:
在开始训练和评估之前,我们还需要对配置和数据进行一次校验,确保数据和配置是正确的。使用下述命令启动校验流程
```shell
python pdseg/check.py --cfg ./configs/icnet_pet.yaml
python pdseg/check.py --cfg ./configs/icnet_optic.yaml
```
......@@ -105,7 +106,10 @@ python pdseg/check.py --cfg ./configs/icnet_pet.yaml
校验通过后,使用下述命令启动训练
```shell
python pdseg/train.py --use_gpu --cfg ./configs/icnet_pet.yaml
# 指定GPU卡号(以0号卡为例)
export CUDA_VISIBLE_DEVICES=0
# 训练
python pdseg/train.py --use_gpu --cfg ./configs/icnet_optic.yaml
```
## 六. 进行评估
......@@ -113,11 +117,22 @@ python pdseg/train.py --use_gpu --cfg ./configs/icnet_pet.yaml
模型训练完成,使用下述命令启动评估
```shell
python pdseg/eval.py --use_gpu --cfg ./configs/icnet_pet.yaml
python pdseg/eval.py --use_gpu --cfg ./configs/icnet_optic.yaml
```
## 七. 进行可视化
使用下述命令启动预测和可视化
```shell
python pdseg/vis.py --use_gpu --cfg ./configs/icnet_optic.yaml
```
预测结果将保存在visual目录下,以下展示其中1张图片的预测效果:
![](imgs/optic_icnet.png)
## 模型组合
|预训练模型名称|BackBone|Norm|数据集|配置|
|-|-|-|-|-|
|icnet_bn_cityscapes|-|bn|Cityscapes|MODEL.MODEL_NAME: icnet <br> MODEL.DEFAULT_NORM_TYPE: bn <br> MODEL.MULTI_LOSS_WEIGHT: [1.0, 0.4, 0.16]|
|预训练模型名称|Backbone|数据集|配置|
|-|-|-|-|
|icnet_bn_cityscapes|ResNet50|Cityscapes|MODEL.MODEL_NAME: icnet <br> MODEL.DEFAULT_NORM_TYPE: bn <br> MODEL.MULTI_LOSS_WEIGHT: [1.0, 0.4, 0.16]|
# PSPNET模型训练教程
* 本教程旨在介绍如何通过使用PaddleSeg提供的 ***`PSPNET`*** 预训练模型在自定义数据集上进行训练
本教程旨在介绍如何通过使用PaddleSeg提供的 ***`PSPNET`*** 预训练模型在自定义数据集上进行训练。
* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解
* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解
* 本教程的所有命令都基于PaddleSeg主目录进行执行
* 本教程的所有命令都基于PaddleSeg主目录进行执行
## 一. 准备待训练数据
我们提前准备好了一份数据集,通过以下代码进行下载
![](./imgs/optic.png)
我们提前准备好了一份眼底医疗分割数据集,包含267张训练图片、76张验证图片、38张测试图片。通过以下命令进行下载:
```shell
python dataset/download_pet.py
python dataset/download_optic.py
```
## 二. 下载预训练模型
关于PaddleSeg支持的所有预训练模型的列表,我们可以从[模型组合](#模型组合)中查看我们所需模型的名字和配置。
接着下载对应的预训练模型
```shell
python pretrained_model/download_model.py pspnet50_bn_cityscapes
```
关于已有的PSPNet预训练模型的列表,请参见[PSPNet预训练模型组合](#PSPNet预训练模型组合)。如果需要使用其他预训练模型,下载该模型并将配置中的BACKBONE、NORM_TYPE等进行替换即可。
## 三. 准备配置
接着我们需要确定相关配置,从本教程的角度,配置分为三部分:
......@@ -45,20 +47,19 @@ python pretrained_model/download_model.py pspnet50_bn_cityscapes
在三者中,预训练模型的配置尤为重要,如果模型或者BACKBONE配置错误,会导致预训练的参数没有加载,进而影响收敛速度。预训练模型相关的配置如第二步所示。
数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/mini_pet`
数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/optic_disc_seg`
其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为`configs/test_pet.yaml`
其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为`configs/pspnet_optic.yaml`
```yaml
# 数据集配置
DATASET:
DATA_DIR: "./dataset/mini_pet/"
NUM_CLASSES: 3
TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt"
VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt"
VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
DATA_DIR: "./dataset/optic_disc_seg/"
NUM_CLASSES: 2
TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt"
VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt"
VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
# 预训练模型配置
MODEL:
......@@ -77,15 +78,15 @@ AUG:
BATCH_SIZE: 4
TRAIN:
PRETRAINED_MODEL_DIR: "./pretrained_model/pspnet50_bn_cityscapes/"
MODEL_SAVE_DIR: "./saved_model/pspnet_pet/"
SNAPSHOT_EPOCH: 10
MODEL_SAVE_DIR: "./saved_model/pspnet_optic/"
SNAPSHOT_EPOCH: 5
TEST:
TEST_MODEL: "./saved_model/pspnet_pet/final"
TEST_MODEL: "./saved_model/pspnet_optic/final"
SOLVER:
NUM_EPOCHS: 100
LR: 0.005
NUM_EPOCHS: 10
LR: 0.001
LR_POLICY: "poly"
OPTIMIZER: "sgd"
OPTIMIZER: "adam"
```
## 四. 配置/数据校验
......@@ -93,7 +94,7 @@ SOLVER:
在开始训练和评估之前,我们还需要对配置和数据进行一次校验,确保数据和配置是正确的。使用下述命令启动校验流程
```shell
python pdseg/check.py --cfg ./configs/test_pet.yaml
python pdseg/check.py --cfg ./configs/pspnet_optic.yaml
```
......@@ -102,7 +103,10 @@ python pdseg/check.py --cfg ./configs/test_pet.yaml
校验通过后,使用下述命令启动训练
```shell
python pdseg/train.py --use_gpu --cfg ./configs/test_pet.yaml
# 指定GPU卡号(以0号卡为例)
export CUDA_VISIBLE_DEVICES=0
# 训练
python pdseg/train.py --use_gpu --cfg ./configs/pspnet_optic.yaml
```
## 六. 进行评估
......@@ -110,12 +114,27 @@ python pdseg/train.py --use_gpu --cfg ./configs/test_pet.yaml
模型训练完成,使用下述命令启动评估
```shell
python pdseg/eval.py --use_gpu --cfg ./configs/test_pet.yaml
python pdseg/eval.py --use_gpu --cfg ./configs/pspnet_optic.yaml
```
## 七. 进行可视化
使用下述命令启动预测和可视化
```shell
python pdseg/vis.py --use_gpu --cfg ./configs/pspnet_optic.yaml
```
## 模型组合
预测结果将保存在visual目录下,以下展示其中1张图片的预测效果:
![](imgs/optic_pspnet.png)
## PSPNet预训练模型组合
|预训练模型名称|BackBone|Norm|数据集|配置|
|-|-|-|-|-|
|pspnet50_bn_cityscapes|ResNet50|bn|Cityscapes|MODEL.MODEL_NAME: pspnet <br> MODEL.DEFAULT_NORM_TYPE: bn <br> MODEL.PSPNET.LAYERS: 50|
|pspnet101_bn_cityscapes|ResNet101|bn|Cityscapes|MODEL.MODEL_NAME: pspnet <br> MODEL.DEFAULT_NORM_TYPE: bn <br> MODEL.PSPNET.LAYERS: 101|
|模型|BackBone|数据集|配置|
|-|-|-|-|
|[pspnet50_cityscapes](https://paddleseg.bj.bcebos.com/models/pspnet50_cityscapes.tgz)|ResNet50(适配PSPNet)|Cityscapes |MODEL.MODEL_NAME: pspnet <br> MODEL.DEFAULT_NORM_TYPE: bn <br> MODEL.PSPNET.LAYERS: 50|
|[pspnet101_cityscapes](https://paddleseg.bj.bcebos.com/models/pspnet101_cityscapes.tgz)|ResNet101(适配PSPNet)|Cityscapes |MODEL.MODEL_NAME: pspnet <br> MODEL.DEFAULT_NORM_TYPE: bn <br> MODEL.PSPNET.LAYERS: 101|
| [pspnet50_coco](https://paddleseg.bj.bcebos.com/models/pspnet50_coco.tgz)|ResNet50(适配PSPNet)|COCO |MODEL.MODEL_NAME: pspnet <br> MODEL.DEFAULT_NORM_TYPE: bn <br> MODEL.PSPNET.LAYERS: 50|
| [pspnet101_coco](https://paddleseg.bj.bcebos.com/models/pspnet101_coco.tgz) |ResNet101(适配PSPNet)| COCO |MODEL.MODEL_NAME: pspnet <br> MODEL.DEFAULT_NORM_TYPE: bn <br> MODEL.PSPNET.LAYERS: 101|
| [resnet50_v2_pspnet](https://paddleseg.bj.bcebos.com/resnet50_v2_pspnet.tgz)| ResNet50(适配PSPNet) | ImageNet | MODEL.MODEL_NAME: pspnet <br> MODEL.DEFAULT_NORM_TYPE: bn <br> MODEL.PSPNET.LAYERS: 50 |
| [resnet101_v2_pspnet](https://paddleseg.bj.bcebos.com/resnet101_v2_pspnet.tgz)| ResNet101(适配PSPNet) | ImageNet | MODEL.MODEL_NAME: pspnet <br> MODEL.DEFAULT_NORM_TYPE: bn <br> MODEL.PSPNET.LAYERS: 101 |
# U-Net模型训练教程
# U-Net模型使用教程
* 本教程旨在介绍如何通过使用PaddleSeg提供的 ***`U-Net`*** 预训练模型在自定义数据集上进行训练
本教程旨在介绍如何通过使用PaddleSeg提供的 ***`U-Net`*** 预训练模型在自定义数据集上进行训练、评估和可视化。
* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解
* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解
* 本教程的所有命令都基于PaddleSeg主目录进行执行
* 本教程的所有命令都基于PaddleSeg主目录进行执行
## 一. 准备待训练数据
我们提前准备好了一份数据集,通过以下代码进行下载
![](./imgs/optic.png)
我们提前准备好了一份眼底医疗分割数据集,包含267张训练图片、76张验证图片、38张测试图片。通过以下命令进行下载:
```shell
python dataset/download_pet.py
python dataset/download_optic.py
```
## 二. 下载预训练模型
关于PaddleSeg支持的所有预训练模型的列表,我们可以从[模型组合](#模型组合)中查看我们所需模型的名字和配置。
接着下载对应的预训练模型
```shell
python pretrained_model/download_model.py unet_bn_coco
```
关于已有的U-Net预训练模型的列表,请参见[模型组合](#模型组合)。如果需要使用其他预训练模型,下载该模型并将配置中的BACKBONE、NORM_TYPE等进行替换即可。
## 三. 准备配置
接着我们需要确定相关配置,从本教程的角度,配置分为三部分:
......@@ -45,20 +47,19 @@ python pretrained_model/download_model.py unet_bn_coco
在三者中,预训练模型的配置尤为重要,如果模型或者BACKBONE配置错误,会导致预训练的参数没有加载,进而影响收敛速度。预训练模型相关的配置如第二步所展示。
数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/mini_pet`
数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/optic_disc_seg`
其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/unet_pet.yaml**
其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/unet_optic.yaml**
```yaml
# 数据集配置
DATASET:
DATA_DIR: "./dataset/mini_pet/"
NUM_CLASSES: 3
TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt"
VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt"
VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt"
DATA_DIR: "./dataset/optic_disc_seg/"
NUM_CLASSES: 2
TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt"
VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt"
VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
# 预训练模型配置
MODEL:
......@@ -74,13 +75,13 @@ AUG:
BATCH_SIZE: 4
TRAIN:
PRETRAINED_MODEL_DIR: "./pretrained_model/unet_bn_coco/"
MODEL_SAVE_DIR: "./saved_model/unet_pet/"
SNAPSHOT_EPOCH: 10
MODEL_SAVE_DIR: "./saved_model/unet_optic/"
SNAPSHOT_EPOCH: 5
TEST:
TEST_MODEL: "./saved_model/unet_pet/final"
TEST_MODEL: "./saved_model/unet_optic/final"
SOLVER:
NUM_EPOCHS: 100
LR: 0.005
NUM_EPOCHS: 10
LR: 0.001
LR_POLICY: "poly"
OPTIMIZER: "adam"
```
......@@ -90,7 +91,7 @@ SOLVER:
在开始训练和评估之前,我们还需要对配置和数据进行一次校验,确保数据和配置是正确的。使用下述命令启动校验流程
```shell
python pdseg/check.py --cfg ./configs/unet_pet.yaml
python pdseg/check.py --cfg ./configs/unet_optic.yaml
```
......@@ -99,7 +100,10 @@ python pdseg/check.py --cfg ./configs/unet_pet.yaml
校验通过后,使用下述命令启动训练
```shell
python pdseg/train.py --use_gpu --cfg ./configs/unet_pet.yaml
# 指定GPU卡号(以0号卡为例)
export CUDA_VISIBLE_DEVICES=0
# 训练
python pdseg/train.py --use_gpu --cfg ./configs/unet_optic.yaml
```
## 六. 进行评估
......@@ -107,11 +111,26 @@ python pdseg/train.py --use_gpu --cfg ./configs/unet_pet.yaml
模型训练完成,使用下述命令启动评估
```shell
python pdseg/eval.py --use_gpu --cfg ./configs/unet_pet.yaml
python pdseg/eval.py --use_gpu --cfg ./configs/unet_optic.yaml
```
## 七. 进行可视化
使用下述命令启动预测和可视化
```shell
python pdseg/vis.py --use_gpu --cfg ./configs/unet_optic.yaml
```
预测结果将保存在visual目录下,以下展示其中1张图片的预测效果:
![](imgs/optic_unet.png)
## 在线体验
PaddleSeg在AI Studio平台上提供了在线体验的U-Net分割教程,欢迎[点击体验](https://aistudio.baidu.com/aistudio/projectDetail/102889)
## 模型组合
|预训练模型名称|BackBone|Norm|数据集|配置|
|-|-|-|-|-|
|unet_bn_coco|-|bn|COCO|MODEL.MODEL_NAME: unet <br> MODEL.DEFAULT_NORM_TYPE: bn|
|预训练模型名称|Backbone|数据集|配置|
|-|-|-|-|
|unet_bn_coco|VGG16|COCO|MODEL.MODEL_NAME: unet <br> MODEL.DEFAULT_NORM_TYPE: bn|