diff --git a/README.md b/README.md index af7ba67a8c2280a51580815762ed8d5e306c567f..0fefe77bb4c078a15c4f02a6d189a240cf304de6 100644 --- a/README.md +++ b/README.md @@ -6,10 +6,26 @@ ## 简介 -PaddleSeg是基于[PaddlePaddle](https://www.paddlepaddle.org.cn)开发的语义分割库,覆盖了DeepLabv3+, U-Net, ICNet三类主流的分割模型。通过统一的配置,帮助用户更便捷地完成从训练到部署的全流程图像分割应用。 +PaddleSeg是基于[PaddlePaddle](https://www.paddlepaddle.org.cn)开发的语义分割库,覆盖了DeepLabv3+, U-Net, ICNet, PSPNet, HRNet等主流分割模型。通过统一的配置,帮助用户更便捷地完成从训练到部署的全流程图像分割应用。 -PaddleSeg具备高性能、丰富的数据增强、工业级部署、全流程应用的特点: +
+ +- [特点](#特点) +- [安装](#安装) +- [使用教程](#使用教程) + - [快速入门](#快速入门) + - [基础功能](#基础功能) + - [预测部署](#预测部署) + - [高级功能](#高级功能) +- [在线体验](#在线体验) +- [FAQ](#FAQ) +- [交流与反馈](#交流与反馈) +- [更新日志](#更新日志) +- [贡献代码](#贡献代码) + +
+## 特点 - **丰富的数据增强** @@ -17,29 +33,42 @@ PaddleSeg具备高性能、丰富的数据增强、工业级部署、全流程 - **模块化设计** -支持U-Net, DeepLabv3+, ICNet, PSPNet四种主流分割网络,结合预训练模型和可调节的骨干网络,满足不同性能和精度的要求;选择不同的损失函数如Dice Loss, BCE Loss等方式可以强化小目标和不均衡样本场景下的分割精度。 +支持U-Net, DeepLabv3+, ICNet, PSPNet, HRNet五种主流分割网络,结合预训练模型和可调节的骨干网络,满足不同性能和精度的要求;选择不同的损失函数如Dice Loss, BCE Loss等方式可以强化小目标和不均衡样本场景下的分割精度。 - **高性能** -PaddleSeg支持多进程IO、多卡并行、跨卡Batch Norm同步等训练加速策略,结合飞桨核心框架的显存优化功能,可以大幅度减少分割模型的显存开销,更快完成分割模型训练。 +PaddleSeg支持多进程I/O、多卡并行、跨卡Batch Norm同步等训练加速策略,结合飞桨核心框架的显存优化功能,可大幅度减少分割模型的显存开销,让开发者更低成本、更高效地完成图像分割训练。 - **工业级部署** -基于[Paddle Serving](https://github.com/PaddlePaddle/Serving)和PaddlePaddle高性能预测引擎,结合百度开放的AI能力,轻松搭建人像分割和车道线分割服务。 +全面提供**服务端**和**移动端**的工业级部署能力,依托飞桨高性能推理引擎和高性能图像处理实现,开发者可以轻松完成高性能的分割模型部署和集成。通过[Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite),可以在移动设备或者嵌入式设备上完成轻量级、高性能的人像分割模型部署。 -
+## 安装 -## 环境依赖 +### 1. 安装PaddlePaddle +版本要求 * PaddlePaddle >= 1.6.1 * Python 2.7 or 3.5+ -通过以下命令安装python包依赖,请确保在该分支上至少执行过一次以下命令 -```shell -$ pip install -r requirements.txt +由于图像分割模型计算开销大,推荐在GPU版本的PaddlePaddle下使用PaddleSeg. +``` +pip install -U paddlepaddle-gpu ``` +同时请保证您参考NVIDIA官网,已经正确配置和安装了显卡驱动,CUDA 9,cuDNN 7.3,NCCL2等依赖,其他更加详细的安装信息请参考:[PaddlePaddle安装说明](https://www.paddlepaddle.org.cn/install/doc/index)。 -其他如CUDA版本、cuDNN版本等兼容信息请查看[PaddlePaddle安装](https://www.paddlepaddle.org.cn/install/doc/index) +### 2. 下载PaddleSeg代码 + +``` +git clone https://github.com/PaddlePaddle/PaddleSeg +``` + +### 3. 安装PaddleSeg依赖 +通过以下命令安装python包依赖,请确保在该分支上至少执行过一次以下命令: +``` +cd PaddleSeg +pip install -r requirements.txt +```
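安装完成后，建议先确认PaddlePaddle本身可以正常工作，再继续安装PaddleSeg依赖。下面是一个示意性的自检脚本（`install_check.run_check()`为PaddlePaddle自带的安装自检接口，若您安装的版本不包含该接口，仅检查版本号与CUDA编译信息即可）：

```python
# 示意脚本：检查PaddlePaddle版本与GPU支持情况
import paddle
import paddle.fluid as fluid

# PaddleSeg要求PaddlePaddle >= 1.6.1
print("PaddlePaddle version:", paddle.__version__)

# 是否为GPU（CUDA）版本
print("Compiled with CUDA:", fluid.is_compiled_with_cuda())

# 运行官方安装自检，会执行一次简单的训练流程以验证环境是否可用
fluid.install_check.run_check()
```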
@@ -51,35 +80,49 @@ $ pip install -r requirements.txt ### 快速入门 -* [安装说明](./docs/installation.md) -* [训练/评估/可视化](./docs/usage.md) +* [PaddleSeg快速入门](./docs/usage.md) ### 基础功能 -* [分割模型介绍](./docs/models.md) -* [预训练模型列表](./docs/model_zoo.md) -* [自定义数据的准备与标注](./docs/data_prepare.md) +* [自定义数据的标注与准备](./docs/data_prepare.md) +* [脚本使用和配置说明](./docs/config.md) * [数据和配置校验](./docs/check.md) -* [如何训练DeepLabv3+](./turtorial/finetune_deeplabv3plus.md) -* [如何训练U-Net](./turtorial/finetune_unet.md) -* [如何训练ICNet](./turtorial/finetune_icnet.md) -* [如何训练PSPNet](./turtorial/finetune_pspnet.md) -* [如何训练HRNet](./turtorial/finetune_hrnet.md) +* [分割模型介绍](./docs/models.md) +* [预训练模型下载](./docs/model_zoo.md) +* [DeepLabv3+模型使用教程](./turtorial/finetune_deeplabv3plus.md) +* [U-Net模型使用教程](./turtorial/finetune_unet.md) +* [ICNet模型使用教程](./turtorial/finetune_icnet.md) +* [PSPNet模型使用教程](./turtorial/finetune_pspnet.md) +* [HRNet模型使用教程](./turtorial/finetune_hrnet.md) +* [Fast-SCNN模型使用教程](./turtorial/finetune_fast_scnn.md) ### 预测部署 * [模型导出](./docs/model_export.md) -* [使用Python预测](./deploy/python/) -* [使用C++预测](./deploy/cpp/) -* [移动端预测部署](./deploy/lite/) +* [Python预测](./deploy/python/) +* [C++预测](./deploy/cpp/) +* [Paddle-Lite移动端预测部署](./deploy/lite/) ### 高级功能 * [PaddleSeg的数据增强](./docs/data_aug.md) -* [PaddleSeg的loss选择](./docs/loss_select.md) +* [如何解决二分类中类别不均衡问题](./docs/loss_select.md) * [特色垂类模型使用](./contrib) * [多进程训练和混合精度训练](./docs/multiple_gpus_train_and_mixed_precision_train.md) +* 使用PaddleSlim进行分割模型压缩([量化](./slim/quantization/README.md), [蒸馏](./slim/distillation/README.md), [剪枝](./slim/prune/README.md), [搜索](./slim/nas/README.md)) +## 在线体验 + +我们在AI Studio平台上提供了在线体验的教程,欢迎体验: + +|在线教程|链接| +|-|-| +|快速开始|[点击体验](https://aistudio.baidu.com/aistudio/projectdetail/100798)| +|U-Net图像分割|[点击体验](https://aistudio.baidu.com/aistudio/projectDetail/102889)| +|DeepLabv3+图像分割|[点击体验](https://aistudio.baidu.com/aistudio/projectDetail/226703)| +|工业质检(零件瑕疵检测)|[点击体验](https://aistudio.baidu.com/aistudio/projectdetail/184392)| +|人像分割|[点击体验](https://aistudio.baidu.com/aistudio/projectdetail/188833)| +|PaddleSeg特色垂类模型|[点击体验](https://aistudio.baidu.com/aistudio/projectdetail/226710)|
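上述“预测部署”文档中的Python预测流程，大致形式如下面的示意脚本所示（基于仓库中infer脚本使用的`fluid.io.load_inference_model`接口；模型路径、输入尺寸与归一化参数均为假设值，输出张量的布局也取决于导出时的配置，请以“模型导出”文档和实际配置为准）：

```python
# 示意脚本：加载导出后的分割模型并对单张图片进行预测
import cv2
import numpy as np
import paddle.fluid as fluid

model_dir = "./freeze_model"                 # 假设为“模型导出”步骤生成的目录
place = fluid.CUDAPlace(0)                   # 无GPU时可改为 fluid.CPUPlace()
exe = fluid.Executor(place)

# 导出的模型默认包含 __model__ 与 __params__ 两个文件
program, feed_names, fetch_targets = fluid.io.load_inference_model(
    dirname=model_dir, executor=exe,
    model_filename="__model__", params_filename="__params__")

# 预处理：尺寸与MEAN/STD需与训练配置一致（此处均为假设值）
img = cv2.imread("demo.jpg").astype(np.float32)
img = cv2.resize(img, (512, 512))
img = (img / 255.0 - 0.5) / 0.5
img = np.expand_dims(img.transpose(2, 0, 1), axis=0)  # HWC -> NCHW

result = exe.run(program, feed={feed_names[0]: img}, fetch_list=fetch_targets)
pred = np.argmax(result[0][0], axis=0).astype(np.uint8)  # 每个像素的预测类别
cv2.imwrite("demo_pred.png", pred)
```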
@@ -104,25 +147,14 @@ python pdseg/train.py --cfg xxx.yaml TRAIN.RESUME_MODEL_DIR /PATH/TO/MODEL_CKPT/ A: 降低Batch size,使用Group Norm策略;请注意训练过程中当`DEFAULT_NORM_TYPE`选择`bn`时,为了Batch Norm计算稳定性,batch size需要满足>=2 -
#### Q: 出现错误 ModuleNotFoundError: No module named 'paddle.fluid.contrib.mixed_precision' A: 请将PaddlePaddle升级至1.5.2版本或以上。 -## 在线体验 - -PaddleSeg在AI Studio平台上提供了在线体验的教程,欢迎体验: - -|教程|链接| -|-|-| -|U-Net宠物分割|[点击体验](https://aistudio.baidu.com/aistudio/projectDetail/102889)| -|DeepLabv3+图像分割|[点击体验](https://aistudio.baidu.com/aistudio/projectDetail/101696)| -|PaddleSeg特色垂类模型|[点击体验](https://aistudio.baidu.com/aistudio/projectdetail/115541)| -
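遇到上述`mixed_precision`模块缺失的问题时，可以用下面的示意脚本快速确认本地PaddlePaddle版本以及该模块是否可用：

```python
# 示意脚本：检查PaddlePaddle版本及混合精度模块是否可用
import importlib
import paddle

print("PaddlePaddle version:", paddle.__version__)  # 需要 >= 1.5.2

try:
    importlib.import_module("paddle.fluid.contrib.mixed_precision")
    print("mixed_precision模块可用，可正常开启FP16混合精度训练")
except ImportError:
    print("mixed_precision模块不可用，请将PaddlePaddle升级至1.5.2或以上版本")
```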
-## 交流与反馈 +## 交流与反馈 * 欢迎您通过[Github Issues](https://github.com/PaddlePaddle/PaddleSeg/issues)来提交问题、报告与建议 * 微信公众号:飞桨PaddlePaddle * QQ群: 796771754 @@ -131,25 +163,36 @@ PaddleSeg在AI Studio平台上提供了在线体验的教程,欢迎体验:

   微信公众号                官方技术交流QQ群

## 更新日志 - +* 2019.12.15 + + **`v0.3.0`** + * 新增HRNet分割网络,提供基于cityscapes和ImageNet的[预训练模型](./docs/model_zoo.md)8个 + * 支持使用[伪彩色标签](./docs/data_prepare.md#%E7%81%B0%E5%BA%A6%E6%A0%87%E6%B3%A8vs%E4%BC%AA%E5%BD%A9%E8%89%B2%E6%A0%87%E6%B3%A8)进行训练/评估/预测,提升训练体验,并提供将灰度标注图转为伪彩色标注图的脚本 + * 新增[学习率warmup](./docs/configs/solver_group.md#lr_warmup)功能,支持与不同的学习率Decay策略配合使用 + * 新增图像归一化操作的GPU化实现,进一步提升预测速度。 + * 新增Python部署方案,更低成本完成工业级部署。 + * 新增Paddle-Lite移动端部署方案,支持人像分割模型的移动端部署。 + * 新增不同分割模型的预测[性能数据Benchmark](./deploy/python/docs/PaddleSeg_Infer_Benchmark.md), 便于开发者提供模型选型性能参考。 + + * 2019.11.04 **`v0.2.0`** - * 新增PSPNet分割网络,提供基于COCO和cityscapes数据集的[预训练模型](./docs/model_zoo.md)4个 - * 新增Dice Loss、BCE Loss以及组合Loss配置,支持样本不均衡场景下的[模型优化](./docs/loss_select.md) - * 支持[FP16混合精度训练](./docs/multiple_gpus_train_and_mixed_precision_train.md)以及动态Loss Scaling,在不损耗精度的情况下,训练速度提升30%+ - * 支持[PaddlePaddle多卡多进程训练](./docs/multiple_gpus_train_and_mixed_precision_train.md),多卡训练时训练速度提升15%+ - * 发布基于UNet的[工业标记表盘分割模型](./contrib#%E5%B7%A5%E4%B8%9A%E7%94%A8%E8%A1%A8%E5%88%86%E5%89%B2) + * 新增PSPNet分割网络,提供基于COCO和cityscapes数据集的[预训练模型](./docs/model_zoo.md)4个。 + * 新增Dice Loss、BCE Loss以及组合Loss配置,支持样本不均衡场景下的[模型优化](./docs/loss_select.md)。 + * 支持[FP16混合精度训练](./docs/multiple_gpus_train_and_mixed_precision_train.md)以及动态Loss Scaling,在不损耗精度的情况下,训练速度提升30%+。 + * 支持[PaddlePaddle多卡多进程训练](./docs/multiple_gpus_train_and_mixed_precision_train.md),多卡训练时训练速度提升15%+。 + * 发布基于UNet的[工业标记表盘分割模型](./contrib#%E5%B7%A5%E4%B8%9A%E7%94%A8%E8%A1%A8%E5%88%86%E5%89%B2)。 * 2019.09.10 **`v0.1.0`** * PaddleSeg分割库初始版本发布,包含DeepLabv3+, U-Net, ICNet三类分割模型, 其中DeepLabv3+支持Xception, MobileNet v2两种可调节的骨干网络。 - * CVPR19 LIP人体部件分割比赛冠军预测模型发布[ACE2P](./contrib/ACE2P) - * 预置基于DeepLabv3+网络的[人像分割](./contrib/HumanSeg/)和[车道线分割](./contrib/RoadLine)预测模型发布 + * CVPR19 LIP人体部件分割比赛冠军预测模型发布[ACE2P](./contrib/ACE2P)。 + * 预置基于DeepLabv3+网络的[人像分割](./contrib/HumanSeg/)和[车道线分割](./contrib/RoadLine)预测模型发布。
-## 如何贡献代码 +## 贡献代码 -我们非常欢迎您为PaddleSeg贡献代码或者提供使用建议。 +我们非常欢迎您为PaddleSeg贡献代码或者提供使用建议。如果您可以修复某个issue或者增加一个新功能,欢迎给我们提交pull requests. diff --git a/configs/cityscape_fast_scnn.yaml b/configs/cityscape_fast_scnn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d9e996d64fc208186777e86289ea7329f8240a3b --- /dev/null +++ b/configs/cityscape_fast_scnn.yaml @@ -0,0 +1,53 @@ +EVAL_CROP_SIZE: (2048, 1024) # (width, height), for unpadding rangescaling and stepscaling +TRAIN_CROP_SIZE: (1024, 1024) # (width, height), for unpadding rangescaling and stepscaling +AUG: + AUG_METHOD: "stepscaling" # choice unpadding rangescaling and stepscaling + FIX_RESIZE_SIZE: (640, 640) # (width, height), for unpadding + INF_RESIZE_VALUE: 500 # for rangescaling + MAX_RESIZE_VALUE: 600 # for rangescaling + MIN_RESIZE_VALUE: 400 # for rangescaling + MAX_SCALE_FACTOR: 2.0 # for stepscaling + MIN_SCALE_FACTOR: 0.5 # for stepscaling + SCALE_STEP_SIZE: 0.25 # for stepscaling + MIRROR: True + FLIP: False + FLIP_RATIO: 0.2 + RICH_CROP: + ENABLE: True + ASPECT_RATIO: 0.0 + BLUR: False + BLUR_RATIO: 0.1 + MAX_ROTATION: 0 + MIN_AREA_RATIO: 0.0 + BRIGHTNESS_JITTER_RATIO: 0.4 + CONTRAST_JITTER_RATIO: 0.4 + SATURATION_JITTER_RATIO: 0.4 +BATCH_SIZE: 12 +MEAN: [0.5, 0.5, 0.5] +STD: [0.5, 0.5, 0.5] +DATASET: + DATA_DIR: "./dataset/cityscapes/" + IMAGE_TYPE: "rgb" # choice rgb or rgba + NUM_CLASSES: 19 + TEST_FILE_LIST: "dataset/cityscapes/val.list" + TRAIN_FILE_LIST: "dataset/cityscapes/train.list" + VAL_FILE_LIST: "dataset/cityscapes/val.list" + IGNORE_INDEX: 255 +FREEZE: + MODEL_FILENAME: "model" + PARAMS_FILENAME: "params" +MODEL: + DEFAULT_NORM_TYPE: "bn" + MODEL_NAME: "fast_scnn" + +TEST: + TEST_MODEL: "snapshots/cityscape_fast_scnn/final/" +TRAIN: + MODEL_SAVE_DIR: "snapshots/cityscape_fast_scnn/" + SNAPSHOT_EPOCH: 10 +SOLVER: + LR: 0.001 + LR_POLICY: "poly" + OPTIMIZER: "sgd" + NUM_EPOCHS: 100 + diff --git a/configs/deeplabv3p_mobilenetv2_cityscapes.yaml b/configs/deeplabv3p_mobilenetv2_cityscapes.yaml new file mode 100644 index 0000000000000000000000000000000000000000..6b6835d11eed2bf87d332a4ada44785f78a959ec --- /dev/null +++ b/configs/deeplabv3p_mobilenetv2_cityscapes.yaml @@ -0,0 +1,46 @@ +EVAL_CROP_SIZE: (2049, 1025) # (width, height), for unpadding rangescaling and stepscaling +TRAIN_CROP_SIZE: (769, 769) # (width, height), for unpadding rangescaling and stepscaling +AUG: + AUG_METHOD: "stepscaling" # choice unpadding rangescaling and stepscaling + FIX_RESIZE_SIZE: (2048, 1024) # (width, height), for unpadding + INF_RESIZE_VALUE: 500 # for rangescaling + MAX_RESIZE_VALUE: 600 # for rangescaling + MIN_RESIZE_VALUE: 400 # for rangescaling + MAX_SCALE_FACTOR: 2.0 # for stepscaling + MIN_SCALE_FACTOR: 0.5 # for stepscaling + SCALE_STEP_SIZE: 0.25 # for stepscaling + MIRROR: True +BATCH_SIZE: 4 +DATASET: + DATA_DIR: "./dataset/cityscapes/" + IMAGE_TYPE: "rgb" # choice rgb or rgba + NUM_CLASSES: 19 + TEST_FILE_LIST: "dataset/cityscapes/val.list" + TRAIN_FILE_LIST: "dataset/cityscapes/train.list" + VAL_FILE_LIST: "dataset/cityscapes/val.list" + IGNORE_INDEX: 255 + SEPARATOR: " " +FREEZE: + MODEL_FILENAME: "model" + PARAMS_FILENAME: "params" +MODEL: + DEFAULT_NORM_TYPE: "bn" + MODEL_NAME: "deeplabv3p" + DEEPLAB: + BACKBONE: "mobilenetv2" + ASPP_WITH_SEP_CONV: True + DECODER_USE_SEP_CONV: True + ENCODER_WITH_ASPP: False + ENABLE_DECODER: False +TRAIN: + PRETRAINED_MODEL_DIR: u"pretrained_model/deeplabv3p_mobilenetv2-1-0_bn_coco" + MODEL_SAVE_DIR: "saved_model/deeplabv3p_mobilenetv2_cityscapes" + SNAPSHOT_EPOCH: 
10 + SYNC_BATCH_NORM: True +TEST: + TEST_MODEL: "saved_model/deeplabv3p_mobilenetv2_cityscapes/final" +SOLVER: + LR: 0.01 + LR_POLICY: "poly" + OPTIMIZER: "sgd" + NUM_EPOCHS: 100 diff --git a/configs/pspnet.yaml b/configs/deeplabv3p_xception65_cityscapes.yaml similarity index 64% rename from configs/pspnet.yaml rename to configs/deeplabv3p_xception65_cityscapes.yaml index fdc960d6af81bcba30128e972f260da05b33b0e8..ec352f0f5856218fabd00b6b316d0184a45d90d1 100644 --- a/configs/pspnet.yaml +++ b/configs/deeplabv3p_xception65_cityscapes.yaml @@ -1,8 +1,8 @@ EVAL_CROP_SIZE: (2049, 1025) # (width, height), for unpadding rangescaling and stepscaling -TRAIN_CROP_SIZE: (713, 713) # (width, height), for unpadding rangescaling and stepscaling +TRAIN_CROP_SIZE: (769, 769) # (width, height), for unpadding rangescaling and stepscaling AUG: AUG_METHOD: "stepscaling" # choice unpadding rangescaling and stepscaling - FIX_RESIZE_SIZE: (640, 640) # (width, height), for unpadding + FIX_RESIZE_SIZE: (2048, 1024) # (width, height), for unpadding INF_RESIZE_VALUE: 500 # for rangescaling MAX_RESIZE_VALUE: 600 # for rangescaling MIN_RESIZE_VALUE: 400 # for rangescaling @@ -19,23 +19,25 @@ DATASET: TRAIN_FILE_LIST: "dataset/cityscapes/train.list" VAL_FILE_LIST: "dataset/cityscapes/val.list" IGNORE_INDEX: 255 + SEPARATOR: " " FREEZE: MODEL_FILENAME: "model" PARAMS_FILENAME: "params" MODEL: - MODEL_NAME: "pspnet" DEFAULT_NORM_TYPE: "bn" - PSPNET: - DEPTH_MULTIPLIER: 1 - LAYERS: 50 -TEST: - TEST_MODEL: "snapshots/cityscapes_pspnet50/final" + MODEL_NAME: "deeplabv3p" + DEEPLAB: + ASPP_WITH_SEP_CONV: True + DECODER_USE_SEP_CONV: True TRAIN: - MODEL_SAVE_DIR: "snapshots/cityscapes_pspnet50/" - PRETRAINED_MODEL_DIR: u"pretrained_model/pspnet50_bn_cityscapes/" + PRETRAINED_MODEL_DIR: u"pretrained_model/deeplabv3p_xception65_bn_coco" + MODEL_SAVE_DIR: "saved_model/deeplabv3p_xception65_bn_cityscapes" SNAPSHOT_EPOCH: 10 + SYNC_BATCH_NORM: True +TEST: + TEST_MODEL: "saved_model/deeplabv3p_xception65_bn_cityscapes/final" SOLVER: - LR: 0.001 + LR: 0.01 LR_POLICY: "poly" OPTIMIZER: "sgd" - NUM_EPOCHS: 700 + NUM_EPOCHS: 100 diff --git a/configs/deeplabv3p_xception65_optic.yaml b/configs/deeplabv3p_xception65_optic.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7ec86926db355c53439802a5d891d9e736a1bba0 --- /dev/null +++ b/configs/deeplabv3p_xception65_optic.yaml @@ -0,0 +1,34 @@ +# 数据集配置 +DATASET: + DATA_DIR: "./dataset/optic_disc_seg/" + NUM_CLASSES: 2 + TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" + TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt" + VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt" + VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" + +# 预训练模型配置 +MODEL: + MODEL_NAME: "deeplabv3p" + DEFAULT_NORM_TYPE: "bn" + DEEPLAB: + BACKBONE: "xception_65" + +# 其他配置 +TRAIN_CROP_SIZE: (512, 512) +EVAL_CROP_SIZE: (512, 512) +AUG: + AUG_METHOD: "unpadding" + FIX_RESIZE_SIZE: (512, 512) +BATCH_SIZE: 4 +TRAIN: + PRETRAINED_MODEL_DIR: "./pretrained_model/deeplabv3p_xception65_bn_coco/" + MODEL_SAVE_DIR: "./saved_model/deeplabv3p_xception65_bn_optic/" + SNAPSHOT_EPOCH: 5 +TEST: + TEST_MODEL: "./saved_model/deeplabv3p_xception65_bn_optic/final" +SOLVER: + NUM_EPOCHS: 10 + LR: 0.001 + LR_POLICY: "poly" + OPTIMIZER: "adam" \ No newline at end of file diff --git a/configs/deeplabv3p_xception65_pet.yaml b/configs/deeplabv3p_xception65_pet.yaml deleted file mode 100644 index 1b574497ea882c86c7e5785e16de976e5b33a50f..0000000000000000000000000000000000000000 --- 
a/configs/deeplabv3p_xception65_pet.yaml +++ /dev/null @@ -1,44 +0,0 @@ -TRAIN_CROP_SIZE: (512, 512) # (width, height), for unpadding rangescaling and stepscaling -EVAL_CROP_SIZE: (512, 512) # (width, height), for unpadding rangescaling and stepscaling -AUG: - AUG_METHOD: "unpadding" # choice unpadding rangescaling and stepscaling - FIX_RESIZE_SIZE: (512, 512) # (width, height), for unpadding - - INF_RESIZE_VALUE: 500 # for rangescaling - MAX_RESIZE_VALUE: 600 # for rangescaling - MIN_RESIZE_VALUE: 400 # for rangescaling - - MAX_SCALE_FACTOR: 1.25 # for stepscaling - MIN_SCALE_FACTOR: 0.75 # for stepscaling - SCALE_STEP_SIZE: 0.25 # for stepscaling - MIRROR: True -BATCH_SIZE: 4 -DATASET: - DATA_DIR: "./dataset/mini_pet/" - IMAGE_TYPE: "rgb" # choice rgb or rgba - NUM_CLASSES: 3 - TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" - TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt" - VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt" - VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" - IGNORE_INDEX: 255 - SEPARATOR: " " -FREEZE: - MODEL_FILENAME: "__model__" - PARAMS_FILENAME: "__params__" -MODEL: - MODEL_NAME: "deeplabv3p" - DEFAULT_NORM_TYPE: "bn" - DEEPLAB: - BACKBONE: "xception_65" -TRAIN: - PRETRAINED_MODEL_DIR: "./pretrained_model/deeplabv3p_xception65_bn_coco/" - MODEL_SAVE_DIR: "./saved_model/deeplabv3p_xception65_bn_pet/" - SNAPSHOT_EPOCH: 10 -TEST: - TEST_MODEL: "./saved_model/deeplabv3p_xception65_bn_pet/final" -SOLVER: - NUM_EPOCHS: 100 - LR: 0.005 - LR_POLICY: "poly" - OPTIMIZER: "sgd" diff --git a/configs/unet_pet.yaml b/configs/fast_scnn_pet.yaml similarity index 84% rename from configs/unet_pet.yaml rename to configs/fast_scnn_pet.yaml index a1781c5e8c4963ac269c4850f1012cc3d9ad8d15..2b9b659f18735324eeb282185a082e5c52e2a063 100644 --- a/configs/unet_pet.yaml +++ b/configs/fast_scnn_pet.yaml @@ -27,16 +27,17 @@ FREEZE: MODEL_FILENAME: "__model__" PARAMS_FILENAME: "__params__" MODEL: - MODEL_NAME: "unet" + MODEL_NAME: "fast_scnn" DEFAULT_NORM_TYPE: "bn" -TEST: - TEST_MODEL: "./saved_model/unet_pet/final/" + TRAIN: - MODEL_SAVE_DIR: "./saved_model/unet_pet/" - PRETRAINED_MODEL_DIR: "./pretrained_model/unet_bn_coco/" + PRETRAINED_MODEL_DIR: "./pretrained_model/fast_scnn_cityscapes/" + MODEL_SAVE_DIR: "./saved_model/fast_scnn_pet/" SNAPSHOT_EPOCH: 10 +TEST: + TEST_MODEL: "./saved_model/fast_scnn_pet/final" SOLVER: NUM_EPOCHS: 100 LR: 0.005 LR_POLICY: "poly" - OPTIMIZER: "adam" + OPTIMIZER: "sgd" diff --git a/configs/hrnet_optic.yaml b/configs/hrnet_optic.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7154bceeeaf99ec1962a4ac6e5ac79a9d78d3f4a --- /dev/null +++ b/configs/hrnet_optic.yaml @@ -0,0 +1,39 @@ +# 数据集配置 +DATASET: + DATA_DIR: "./dataset/optic_disc_seg/" + NUM_CLASSES: 2 + TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" + TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt" + VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt" + VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" + +# 预训练模型配置 +MODEL: + MODEL_NAME: "hrnet" + DEFAULT_NORM_TYPE: "bn" + HRNET: + STAGE2: + NUM_CHANNELS: [18, 36] + STAGE3: + NUM_CHANNELS: [18, 36, 72] + STAGE4: + NUM_CHANNELS: [18, 36, 72, 144] + +# 其他配置 +TRAIN_CROP_SIZE: (512, 512) +EVAL_CROP_SIZE: (512, 512) +AUG: + AUG_METHOD: "unpadding" + FIX_RESIZE_SIZE: (512, 512) +BATCH_SIZE: 4 +TRAIN: + PRETRAINED_MODEL_DIR: "./pretrained_model/hrnet_w18_bn_cityscapes/" + MODEL_SAVE_DIR: "./saved_model/hrnet_optic/" + SNAPSHOT_EPOCH: 5 +TEST: + TEST_MODEL: 
"./saved_model/hrnet_optic/final" +SOLVER: + NUM_EPOCHS: 10 + LR: 0.001 + LR_POLICY: "poly" + OPTIMIZER: "adam" diff --git a/configs/hrnet_w18_pet.yaml b/configs/hrnet_w18_pet.yaml deleted file mode 100644 index b1bfb9215e7f204444613fd9f6c78eba9c1c1432..0000000000000000000000000000000000000000 --- a/configs/hrnet_w18_pet.yaml +++ /dev/null @@ -1,49 +0,0 @@ -TRAIN_CROP_SIZE: (512, 512) # (width, height), for unpadding rangescaling and stepscaling -EVAL_CROP_SIZE: (512, 512) # (width, height), for unpadding rangescaling and stepscaling -AUG: - AUG_METHOD: "unpadding" # choice unpadding rangescaling and stepscaling - FIX_RESIZE_SIZE: (512, 512) # (width, height), for unpadding - - INF_RESIZE_VALUE: 500 # for rangescaling - MAX_RESIZE_VALUE: 600 # for rangescaling - MIN_RESIZE_VALUE: 400 # for rangescaling - - MAX_SCALE_FACTOR: 1.25 # for stepscaling - MIN_SCALE_FACTOR: 0.75 # for stepscaling - SCALE_STEP_SIZE: 0.25 # for stepscaling - MIRROR: True -BATCH_SIZE: 4 -DATASET: - DATA_DIR: "./dataset/mini_pet/" - IMAGE_TYPE: "rgb" # choice rgb or rgba - NUM_CLASSES: 3 - TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" - TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt" - VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt" - VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" - IGNORE_INDEX: 255 - SEPARATOR: " " -FREEZE: - MODEL_FILENAME: "__model__" - PARAMS_FILENAME: "__params__" -MODEL: - MODEL_NAME: "hrnet" - DEFAULT_NORM_TYPE: "bn" - HRNET: - STAGE2: - NUM_CHANNELS: [18, 36] - STAGE3: - NUM_CHANNELS: [18, 36, 72] - STAGE4: - NUM_CHANNELS: [18, 36, 72, 144] -TRAIN: - PRETRAINED_MODEL_DIR: "./pretrained_model/hrnet_w18_bn_cityscapes/" - MODEL_SAVE_DIR: "./saved_model/hrnet_w18_bn_pet/" - SNAPSHOT_EPOCH: 10 -TEST: - TEST_MODEL: "./saved_model/hrnet_w18_bn_pet/final" -SOLVER: - NUM_EPOCHS: 100 - LR: 0.005 - LR_POLICY: "poly" - OPTIMIZER: "sgd" diff --git a/configs/icnet_optic.yaml b/configs/icnet_optic.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0f2742e6cf3626ed82c1f379749c24ee6200fa3c --- /dev/null +++ b/configs/icnet_optic.yaml @@ -0,0 +1,35 @@ +# 数据集配置 +DATASET: + DATA_DIR: "./dataset/optic_disc_seg/" + NUM_CLASSES: 2 + TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" + TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt" + VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt" + VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" + +# 预训练模型配置 +MODEL: + MODEL_NAME: "icnet" + DEFAULT_NORM_TYPE: "bn" + MULTI_LOSS_WEIGHT: "[1.0, 0.4, 0.16]" + ICNET: + DEPTH_MULTIPLIER: 0.5 + +# 其他配置 +TRAIN_CROP_SIZE: (512, 512) +EVAL_CROP_SIZE: (512, 512) +AUG: + AUG_METHOD: "unpadding" + FIX_RESIZE_SIZE: (512, 512) +BATCH_SIZE: 4 +TRAIN: + PRETRAINED_MODEL_DIR: "./pretrained_model/icnet_bn_cityscapes/" + MODEL_SAVE_DIR: "./saved_model/icnet_optic/" + SNAPSHOT_EPOCH: 5 +TEST: + TEST_MODEL: "./saved_model/icnet_optic/final" +SOLVER: + NUM_EPOCHS: 10 + LR: 0.001 + LR_POLICY: "poly" + OPTIMIZER: "adam" diff --git a/configs/icnet_pet.yaml b/configs/icnet_pet.yaml deleted file mode 100644 index 0398d131ca12aea7902ec7be6542650377201c25..0000000000000000000000000000000000000000 --- a/configs/icnet_pet.yaml +++ /dev/null @@ -1,45 +0,0 @@ -TRAIN_CROP_SIZE: (512, 512) # (width, height), for unpadding rangescaling and stepscaling -EVAL_CROP_SIZE: (512, 512) # (width, height), for unpadding rangescaling and stepscaling -AUG: - AUG_METHOD: "unpadding" # choice unpadding rangescaling and stepscaling - FIX_RESIZE_SIZE: (512, 512) # (width, 
height), for unpadding - - INF_RESIZE_VALUE: 500 # for rangescaling - MAX_RESIZE_VALUE: 600 # for rangescaling - MIN_RESIZE_VALUE: 400 # for rangescaling - - MAX_SCALE_FACTOR: 1.25 # for stepscaling - MIN_SCALE_FACTOR: 0.75 # for stepscaling - SCALE_STEP_SIZE: 0.25 # for stepscaling - MIRROR: True -BATCH_SIZE: 4 -DATASET: - DATA_DIR: "./dataset/mini_pet/" - IMAGE_TYPE: "rgb" # choice rgb or rgba - NUM_CLASSES: 3 - TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" - TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt" - VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt" - VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" - IGNORE_INDEX: 255 - SEPARATOR: " " -FREEZE: - MODEL_FILENAME: "__model__" - PARAMS_FILENAME: "__params__" -MODEL: - MODEL_NAME: "icnet" - DEFAULT_NORM_TYPE: "bn" - MULTI_LOSS_WEIGHT: "[1.0, 0.4, 0.16]" - ICNET: - DEPTH_MULTIPLIER: 0.5 -TRAIN: - PRETRAINED_MODEL_DIR: "./pretrained_model/icnet_bn_cityscapes/" - MODEL_SAVE_DIR: "./saved_model/icnet_pet/" - SNAPSHOT_EPOCH: 10 -TEST: - TEST_MODEL: "./saved_model/icnet_pet/final" -SOLVER: - NUM_EPOCHS: 100 - LR: 0.005 - LR_POLICY: "poly" - OPTIMIZER: "sgd" diff --git a/configs/pspnet_optic.yaml b/configs/pspnet_optic.yaml new file mode 100644 index 0000000000000000000000000000000000000000..589e2b53cc640f124ad868f59a412e36fd7ced85 --- /dev/null +++ b/configs/pspnet_optic.yaml @@ -0,0 +1,35 @@ +# 数据集配置 +DATASET: + DATA_DIR: "./dataset/optic_disc_seg/" + NUM_CLASSES: 2 + TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" + TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt" + VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt" + VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" + +# 预训练模型配置 +MODEL: + MODEL_NAME: "pspnet" + DEFAULT_NORM_TYPE: "bn" + PSPNET: + DEPTH_MULTIPLIER: 1 + LAYERS: 50 + +# 其他配置 +TRAIN_CROP_SIZE: (512, 512) +EVAL_CROP_SIZE: (512, 512) +AUG: + AUG_METHOD: "unpadding" + FIX_RESIZE_SIZE: (512, 512) +BATCH_SIZE: 4 +TRAIN: + PRETRAINED_MODEL_DIR: "./pretrained_model/pspnet50_bn_cityscapes/" + MODEL_SAVE_DIR: "./saved_model/pspnet_optic/" + SNAPSHOT_EPOCH: 5 +TEST: + TEST_MODEL: "./saved_model/pspnet_optic/final" +SOLVER: + NUM_EPOCHS: 10 + LR: 0.001 + LR_POLICY: "poly" + OPTIMIZER: "adam" diff --git a/configs/unet_optic.yaml b/configs/unet_optic.yaml new file mode 100644 index 0000000000000000000000000000000000000000..cd564817c7147c18ceaf360993042735019ec16d --- /dev/null +++ b/configs/unet_optic.yaml @@ -0,0 +1,32 @@ +# 数据集配置 +DATASET: + DATA_DIR: "./dataset/optic_disc_seg/" + NUM_CLASSES: 2 + TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" + TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt" + VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt" + VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" + +# 预训练模型配置 +MODEL: + MODEL_NAME: "unet" + DEFAULT_NORM_TYPE: "bn" + +# 其他配置 +TRAIN_CROP_SIZE: (512, 512) +EVAL_CROP_SIZE: (512, 512) +AUG: + AUG_METHOD: "unpadding" + FIX_RESIZE_SIZE: (512, 512) +BATCH_SIZE: 4 +TRAIN: + PRETRAINED_MODEL_DIR: "./pretrained_model/unet_bn_coco/" + MODEL_SAVE_DIR: "./saved_model/unet_optic/" + SNAPSHOT_EPOCH: 5 +TEST: + TEST_MODEL: "./saved_model/unet_optic/final" +SOLVER: + NUM_EPOCHS: 10 + LR: 0.001 + LR_POLICY: "poly" + OPTIMIZER: "adam" diff --git a/contrib/ACE2P/config.py b/contrib/ACE2P/config.py index f6ad509581a84d04bc1b6badca83648505c19444..a1fad0ec1e6c50493a0a8dfab0c5301add410ad0 100644 --- a/contrib/ACE2P/config.py +++ b/contrib/ACE2P/config.py @@ -6,13 +6,13 @@ args = get_arguments() cfg = 
AttrDict() # 待预测图像所在路径 -cfg.data_dir = os.path.join(args.example , "data", "testing_images") +cfg.data_dir = os.path.join("data", "testing_images") # 待预测图像名称列表 -cfg.data_list_file = os.path.join(args.example , "data", "test_id.txt") +cfg.data_list_file = os.path.join("data", "test_id.txt") # 模型加载路径 -cfg.model_path = os.path.join(args.example , "ACE2P") +cfg.model_path = args.example # 预测结果保存路径 -cfg.vis_dir = os.path.join(args.example , "result") +cfg.vis_dir = "result" # 预测类别数 cfg.class_num = 20 diff --git a/contrib/ACE2P/download_ACE2P.py b/contrib/ACE2P/download_ACE2P.py new file mode 100644 index 0000000000000000000000000000000000000000..bb4d33771dbd879a2d77664d2e0e45ed33b9bcb2 --- /dev/null +++ b/contrib/ACE2P/download_ACE2P.py @@ -0,0 +1,31 @@ +# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License" +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import sys +import os + +LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) +TEST_PATH = os.path.join(LOCAL_PATH, "..", "..", "test") +sys.path.append(TEST_PATH) + +from test_utils import download_file_and_uncompress + +if __name__ == "__main__": + download_file_and_uncompress( + url='https://paddleseg.bj.bcebos.com/models/ACE2P.tgz', + savepath=LOCAL_PATH, + extrapath=LOCAL_PATH, + extraname='ACE2P') + + print("Pretrained Model download success!") diff --git a/contrib/ACE2P/imgs/117676_2149260.jpg b/contrib/ACE2P/imgs/117676_2149260.jpg new file mode 100644 index 0000000000000000000000000000000000000000..8314d8f8cc723b6f96785053bdcfe39d867755d5 Binary files /dev/null and b/contrib/ACE2P/imgs/117676_2149260.jpg differ diff --git a/contrib/ACE2P/imgs/117676_2149260.png b/contrib/ACE2P/imgs/117676_2149260.png new file mode 100644 index 0000000000000000000000000000000000000000..e3a9529644ead2013748431a3ade2f34264f19de Binary files /dev/null and b/contrib/ACE2P/imgs/117676_2149260.png differ diff --git a/contrib/ACE2P/infer.py b/contrib/ACE2P/infer.py new file mode 100644 index 0000000000000000000000000000000000000000..16eddc1eab8628eec7e38d27b1f18df13dd480d7 --- /dev/null +++ b/contrib/ACE2P/infer.py @@ -0,0 +1,130 @@ +# -*- coding: utf-8 -*- +import os +import cv2 +import numpy as np +from utils.util import get_arguments +from utils.palette import get_palette +from PIL import Image as PILImage +import importlib + +args = get_arguments() +config = importlib.import_module('config') +cfg = getattr(config, 'cfg') + +# paddle垃圾回收策略FLAG,ACE2P模型较大,当显存不够时建议开启 +os.environ['FLAGS_eager_delete_tensor_gb']='0.0' + +import paddle.fluid as fluid + +# 预测数据集类 +class TestDataSet(): + def __init__(self): + self.data_dir = cfg.data_dir + self.data_list_file = cfg.data_list_file + self.data_list = self.get_data_list() + self.data_num = len(self.data_list) + + def get_data_list(self): + # 获取预测图像路径列表 + data_list = [] + data_file_handler = open(self.data_list_file, 'r') + for line in data_file_handler: + img_name = line.strip() + name_prefix = img_name.split('.')[0] + if len(img_name.split('.')) == 1: + img_name = img_name + '.jpg' + img_path = 
os.path.join(self.data_dir, img_name) + data_list.append(img_path) + return data_list + + def preprocess(self, img): + # 图像预处理 + if cfg.example == 'ACE2P': + reader = importlib.import_module('reader') + ACE2P_preprocess = getattr(reader, 'preprocess') + img = ACE2P_preprocess(img) + else: + img = cv2.resize(img, cfg.input_size).astype(np.float32) + img -= np.array(cfg.MEAN) + img /= np.array(cfg.STD) + img = img.transpose((2, 0, 1)) + img = np.expand_dims(img, axis=0) + return img + + def get_data(self, index): + # 获取图像信息 + img_path = self.data_list[index] + img = cv2.imread(img_path, cv2.IMREAD_COLOR) + if img is None: + return img, img,img_path, None + + img_name = img_path.split(os.sep)[-1] + name_prefix = img_name.replace('.'+img_name.split('.')[-1],'') + img_shape = img.shape[:2] + img_process = self.preprocess(img) + + return img, img_process, name_prefix, img_shape + + +def infer(): + if not os.path.exists(cfg.vis_dir): + os.makedirs(cfg.vis_dir) + palette = get_palette(cfg.class_num) + # 人像分割结果显示阈值 + thresh = 120 + + place = fluid.CUDAPlace(0) if cfg.use_gpu else fluid.CPUPlace() + exe = fluid.Executor(place) + + # 加载预测模型 + test_prog, feed_name, fetch_list = fluid.io.load_inference_model( + dirname=cfg.model_path, executor=exe, params_filename='__params__') + + #加载预测数据集 + test_dataset = TestDataSet() + data_num = test_dataset.data_num + + for idx in range(data_num): + # 数据获取 + ori_img, image, im_name, im_shape = test_dataset.get_data(idx) + if image is None: + print(im_name, 'is None') + continue + + # 预测 + if cfg.example == 'ACE2P': + # ACE2P模型使用多尺度预测 + reader = importlib.import_module('reader') + multi_scale_test = getattr(reader, 'multi_scale_test') + parsing, logits = multi_scale_test(exe, test_prog, feed_name, fetch_list, image, im_shape) + else: + # HumanSeg,RoadLine模型单尺度预测 + result = exe.run(program=test_prog, feed={feed_name[0]: image}, fetch_list=fetch_list) + parsing = np.argmax(result[0][0], axis=0) + parsing = cv2.resize(parsing.astype(np.uint8), im_shape[::-1]) + + # 预测结果保存 + result_path = os.path.join(cfg.vis_dir, im_name + '.png') + if cfg.example == 'HumanSeg': + logits = result[0][0][1]*255 + logits = cv2.resize(logits, im_shape[::-1]) + ret, logits = cv2.threshold(logits, thresh, 0, cv2.THRESH_TOZERO) + logits = 255 *(logits - thresh)/(255 - thresh) + # 将分割结果添加到alpha通道 + rgba = np.concatenate((ori_img, np.expand_dims(logits, axis=2)), axis=2) + cv2.imwrite(result_path, rgba) + else: + output_im = PILImage.fromarray(np.asarray(parsing, dtype=np.uint8)) + output_im.putpalette(palette) + output_im.save(result_path) + + if (idx + 1) % 100 == 0: + print('%d processd' % (idx + 1)) + + print('%d processd done' % (idx + 1)) + + return 0 + + +if __name__ == "__main__": + infer() diff --git a/contrib/ACE2P/reader.py b/contrib/ACE2P/reader.py index 0a266637f3cf425a1bc3d61ad7377ff30de55723..ef5cc370738daf8550adfc20c227f942f1dd300f 100644 --- a/contrib/ACE2P/reader.py +++ b/contrib/ACE2P/reader.py @@ -1,7 +1,7 @@ # -*- coding: utf-8 -*- import numpy as np import paddle.fluid as fluid -from ACE2P.config import cfg +from config import cfg import cv2 def get_affine_points(src_shape, dst_shape, rot_grad=0): diff --git a/contrib/utils/__init__.py b/contrib/ACE2P/utils/__init__.py similarity index 100% rename from contrib/utils/__init__.py rename to contrib/ACE2P/utils/__init__.py diff --git a/contrib/utils/palette.py b/contrib/ACE2P/utils/palette.py similarity index 100% rename from contrib/utils/palette.py rename to contrib/ACE2P/utils/palette.py diff --git a/contrib/utils/util.py 
b/contrib/ACE2P/utils/util.py similarity index 100% rename from contrib/utils/util.py rename to contrib/ACE2P/utils/util.py diff --git a/contrib/imgs/Human.jpg b/contrib/HumanSeg/imgs/Human.jpg similarity index 100% rename from contrib/imgs/Human.jpg rename to contrib/HumanSeg/imgs/Human.jpg diff --git a/contrib/imgs/HumanSeg.jpg b/contrib/HumanSeg/imgs/HumanSeg.jpg similarity index 100% rename from contrib/imgs/HumanSeg.jpg rename to contrib/HumanSeg/imgs/HumanSeg.jpg diff --git a/contrib/infer.py b/contrib/HumanSeg/infer.py similarity index 98% rename from contrib/infer.py rename to contrib/HumanSeg/infer.py index 8f939c8455cd3868120781a7a8d96ace0ff772b1..971476933c431977ce80c73e1d939fe079e1af19 100644 --- a/contrib/infer.py +++ b/contrib/HumanSeg/infer.py @@ -8,7 +8,7 @@ from PIL import Image as PILImage import importlib args = get_arguments() -config = importlib.import_module(args.example+'.config') +config = importlib.import_module('config') cfg = getattr(config, 'cfg') # paddle垃圾回收策略FLAG,ACE2P模型较大,当显存不够时建议开启 diff --git a/contrib/HumanSeg/utils/__init__.py b/contrib/HumanSeg/utils/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/contrib/HumanSeg/utils/palette.py b/contrib/HumanSeg/utils/palette.py new file mode 100644 index 0000000000000000000000000000000000000000..2186203cbc2789f6eff70dfd92f724b4fe16cdb7 --- /dev/null +++ b/contrib/HumanSeg/utils/palette.py @@ -0,0 +1,38 @@ +##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +## Created by: RainbowSecret +## Microsoft Research +## yuyua@microsoft.com +## Copyright (c) 2018 +## +## This source code is licensed under the MIT-style license found in the +## LICENSE file in the root directory of this source tree +##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function +import numpy as np +import cv2 + + +def get_palette(num_cls): + """ Returns the color map for visualizing the segmentation mask. 
+ Args: + num_cls: Number of classes + Returns: + The color map + """ + n = num_cls + palette = [0] * (n * 3) + for j in range(0, n): + lab = j + palette[j * 3 + 0] = 0 + palette[j * 3 + 1] = 0 + palette[j * 3 + 2] = 0 + i = 0 + while lab: + palette[j * 3 + 0] |= (((lab >> 0) & 1) << (7 - i)) + palette[j * 3 + 1] |= (((lab >> 1) & 1) << (7 - i)) + palette[j * 3 + 2] |= (((lab >> 2) & 1) << (7 - i)) + i += 1 + lab >>= 3 + return palette diff --git a/contrib/HumanSeg/utils/util.py b/contrib/HumanSeg/utils/util.py new file mode 100644 index 0000000000000000000000000000000000000000..7394870e7c94c1fb16169e314696b931eecdc3b2 --- /dev/null +++ b/contrib/HumanSeg/utils/util.py @@ -0,0 +1,47 @@ +from __future__ import division +from __future__ import print_function +from __future__ import unicode_literals +import argparse +import os + +def get_arguments(): + parser = argparse.ArgumentParser() + parser.add_argument("--use_gpu", + action="store_true", + help="Use gpu or cpu to test.") + parser.add_argument('--example', + type=str, + help='RoadLine, HumanSeg or ACE2P') + + return parser.parse_args() + + +class AttrDict(dict): + def __init__(self, *args, **kwargs): + super(AttrDict, self).__init__(*args, **kwargs) + + def __getattr__(self, name): + if name in self.__dict__: + return self.__dict__[name] + elif name in self: + return self[name] + else: + raise AttributeError(name) + + def __setattr__(self, name, value): + if name in self.__dict__: + self.__dict__[name] = value + else: + self[name] = value + +def merge_cfg_from_args(args, cfg): + """Merge config keys, values in args into the global config.""" + for k, v in vars(args).items(): + d = cfg + try: + value = eval(v) + except: + value = v + if value is not None: + cfg[k] = value + diff --git a/contrib/LaneNet/README.md b/contrib/LaneNet/README.md new file mode 100644 index 0000000000000000000000000000000000000000..b86777305c160edae7a55349d719c9df2a2da4f9 --- /dev/null +++ b/contrib/LaneNet/README.md @@ -0,0 +1,138 @@ +# LaneNet 模型训练教程 + +* 本教程旨在介绍如何通过使用PaddleSeg进行车道线检测 + +* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)和[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解 + +## 环境依赖 + +* PaddlePaddle >= 1.7.0 或develop版本 +* Python 3.5+ + +通过以下命令安装python包依赖,请确保在该分支上至少执行过一次以下命令 +```shell +$ pip install -r requirements.txt +``` + +## 一. 准备待训练数据 + +我们提前准备好了一份处理好的数据集,通过以下代码进行下载,该数据集由图森车道线检测数据集转换而来,你也可以在这个[页面](https://github.com/TuSimple/tusimple-benchmark/issues/3)下载原始数据集。 + +```shell +python dataset/download_tusimple.py +``` + +数据目录结构 +``` +LaneNet +|-- dataset + |-- tusimple_lane_detection + |-- training + |-- gt_binary_image + |-- gt_image + |-- gt_instance_image + |-- train_part.txt + |-- val_part.txt +``` +## 二. 下载预训练模型 + +下载[vgg预训练模型](https://paddle-imagenet-models-name.bj.bcebos.com/VGG16_pretrained.tar),放在```pretrained_models```文件夹下。 + + +## 三. 准备配置 + +接着我们需要确定相关配置,从本教程的角度,配置分为三部分: + +* 数据集 + * 训练集主目录 + * 训练集文件列表 + * 测试集文件列表 + * 评估集文件列表 +* 预训练模型 + * 预训练模型名称 + * 预训练模型的backbone网络 + * 预训练模型路径 +* 其他 + * 学习率 + * Batch大小 + * ... 
+ +在三者中,预训练模型的配置尤为重要,如果模型或者BACKBONE配置错误,会导致预训练的参数没有加载,进而影响收敛速度。预训练模型相关的配置如第二步所展示。 + +数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/tusimple_lane_detection`中 + +其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/lanenet.yaml** + +```yaml +# 数据集配置 +DATASET: + DATA_DIR: "./dataset/tusimple_lane_detection" + IMAGE_TYPE: "rgb" # choice rgb or rgba + NUM_CLASSES: 2 + TEST_FILE_LIST: "./dataset/tusimple_lane_detection/training/val_part.txt" + TRAIN_FILE_LIST: "./dataset/tusimple_lane_detection/training/train_part.txt" + VAL_FILE_LIST: "./dataset/tusimple_lane_detection/training/val_part.txt" + SEPARATOR: " " + +# 预训练模型配置 +MODEL: + MODEL_NAME: "lanenet" + +# 其他配置 +EVAL_CROP_SIZE: (512, 256) +TRAIN_CROP_SIZE: (512, 256) +AUG: + AUG_METHOD: u"unpadding" # choice unpadding rangescaling and stepscaling + FIX_RESIZE_SIZE: (512, 256) # (width, height), for unpadding + MIRROR: False + RICH_CROP: + ENABLE: False +BATCH_SIZE: 4 +TEST: + TEST_MODEL: "./saved_model/lanenet/final/" +TRAIN: + MODEL_SAVE_DIR: "./saved_model/lanenet/" + PRETRAINED_MODEL_DIR: "./pretrained_models/VGG16_pretrained" + SNAPSHOT_EPOCH: 5 +SOLVER: + NUM_EPOCHS: 100 + LR: 0.0005 + LR_POLICY: "poly" + OPTIMIZER: "sgd" + WEIGHT_DECAY: 0.001 +``` + + +## 五. 开始训练 + +使用下述命令启动训练 + +```shell +CUDA_VISIBLE_DEVICES=0 python -u train.py --cfg configs/lanenet.yaml --use_gpu --use_mpio --do_eval +``` + +## 六. 进行评估 + +模型训练完成,使用下述命令启动评估 + +```shell +CUDA_VISIBLE_DEVICES=0 python -u eval.py --use_gpu --cfg configs/lanenet.yaml +``` + +## 七. 可视化 +需要先下载一个车前视角和鸟瞰图视角转换所需文件,点击[链接](https://paddleseg.bj.bcebos.com/resources/tusimple_ipm_remap.tar),下载后放在```./utils```下。同时我们提供了一个训练好的模型,点击[链接](https://paddleseg.bj.bcebos.com/models/lanenet_vgg_tusimple.tar),下载后放在```./pretrained_models/```下,使用如下命令进行可视化 +```shell +CUDA_VISIBLE_DEVICES=0 python -u ./vis.py --cfg configs/lanenet.yaml --use_gpu --vis_dir vis_result \ +TEST.TEST_MODEL pretrained_models/LaneNet_vgg_tusimple/ +``` + +可视化结果示例: + + 预测结果:
+ ![](imgs/0005_pred_lane.png) + 分割结果:
+ ![](imgs/0005_pred_binary.png)
+ 车道线实例预测结果:
+ ![](imgs/0005_pred_instance.png) + + diff --git a/contrib/LaneNet/configs/lanenet.yaml b/contrib/LaneNet/configs/lanenet.yaml new file mode 100644 index 0000000000000000000000000000000000000000..1445e8803e638b2a44a7170c5020c4a0c56dcd67 --- /dev/null +++ b/contrib/LaneNet/configs/lanenet.yaml @@ -0,0 +1,52 @@ +EVAL_CROP_SIZE: (512, 256) # (width, height), for unpadding rangescaling and stepscaling +TRAIN_CROP_SIZE: (512, 256) # (width, height), for unpadding rangescaling and stepscaling +AUG: + AUG_METHOD: u"unpadding" # choice unpadding rangescaling and stepscaling + FIX_RESIZE_SIZE: (512, 256) # (width, height), for unpadding + INF_RESIZE_VALUE: 500 # for rangescaling + MAX_RESIZE_VALUE: 600 # for rangescaling + MIN_RESIZE_VALUE: 400 # for rangescaling + MAX_SCALE_FACTOR: 2.0 # for stepscaling + MIN_SCALE_FACTOR: 0.5 # for stepscaling + SCALE_STEP_SIZE: 0.25 # for stepscaling + MIRROR: False + RICH_CROP: + ENABLE: False + +BATCH_SIZE: 4 + +DATALOADER: + BUF_SIZE: 256 + NUM_WORKERS: 4 +DATASET: + DATA_DIR: "./dataset/tusimple_lane_detection" + IMAGE_TYPE: "rgb" # choice rgb or rgba + NUM_CLASSES: 2 + TEST_FILE_LIST: "./dataset/tusimple_lane_detection/training/val_part.txt" + TEST_TOTAL_IMAGES: 362 + TRAIN_FILE_LIST: "./dataset/tusimple_lane_detection/training/train_part.txt" + TRAIN_TOTAL_IMAGES: 3264 + VAL_FILE_LIST: "./dataset/tusimple_lane_detection/training/val_part.txt" + VAL_TOTAL_IMAGES: 362 + SEPARATOR: " " + IGNORE_INDEX: 255 + +FREEZE: + MODEL_FILENAME: "__model__" + PARAMS_FILENAME: "__params__" +MODEL: + MODEL_NAME: "lanenet" + DEFAULT_NORM_TYPE: "bn" +TEST: + TEST_MODEL: "./saved_model/lanenet/final/" +TRAIN: + MODEL_SAVE_DIR: "./saved_model/lanenet/" + PRETRAINED_MODEL_DIR: "./pretrained_models/VGG16_pretrained" + SNAPSHOT_EPOCH: 1 +SOLVER: + NUM_EPOCHS: 100 + LR: 0.0005 + LR_POLICY: "poly" + OPTIMIZER: "sgd" + WEIGHT_DECAY: 0.001 + diff --git a/contrib/LaneNet/data_aug.py b/contrib/LaneNet/data_aug.py new file mode 100644 index 0000000000000000000000000000000000000000..bffb956c657e8d93edcffe5c7946e3b1437a1ef1 --- /dev/null +++ b/contrib/LaneNet/data_aug.py @@ -0,0 +1,83 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from __future__ import print_function +import cv2 +import numpy as np +from utils.config import cfg +from models.model_builder import ModelPhase +from pdseg.data_aug import get_random_scale, randomly_scale_image_and_label, random_rotation, \ + rand_scale_aspect, hsv_color_jitter, rand_crop + +def resize(img, grt=None, grt_instance=None, mode=ModelPhase.TRAIN): + """ + 改变图像及标签图像尺寸 + AUG.AUG_METHOD为unpadding,所有模式均直接resize到AUG.FIX_RESIZE_SIZE的尺寸 + AUG.AUG_METHOD为stepscaling, 按比例resize,训练时比例范围AUG.MIN_SCALE_FACTOR到AUG.MAX_SCALE_FACTOR,间隔为AUG.SCALE_STEP_SIZE,其他模式返回原图 + AUG.AUG_METHOD为rangescaling,长边对齐,短边按比例变化,训练时长边对齐范围AUG.MIN_RESIZE_VALUE到AUG.MAX_RESIZE_VALUE,其他模式长边对齐AUG.INF_RESIZE_VALUE + + Args: + img(numpy.ndarray): 输入图像 + grt(numpy.ndarray): 标签图像,默认为None + mode(string): 模式, 默认训练模式,即ModelPhase.TRAIN + + Returns: + resize后的图像和标签图 + + """ + + if cfg.AUG.AUG_METHOD == 'unpadding': + target_size = cfg.AUG.FIX_RESIZE_SIZE + img = cv2.resize(img, target_size, interpolation=cv2.INTER_LINEAR) + if grt is not None: + grt = cv2.resize(grt, target_size, interpolation=cv2.INTER_NEAREST) + if grt_instance is not None: + grt_instance = cv2.resize(grt_instance, target_size, interpolation=cv2.INTER_NEAREST) + elif cfg.AUG.AUG_METHOD == 'stepscaling': + if mode == ModelPhase.TRAIN: + min_scale_factor = cfg.AUG.MIN_SCALE_FACTOR + max_scale_factor = cfg.AUG.MAX_SCALE_FACTOR + step_size = cfg.AUG.SCALE_STEP_SIZE + scale_factor = get_random_scale(min_scale_factor, max_scale_factor, + step_size) + img, grt = randomly_scale_image_and_label( + img, grt, scale=scale_factor) + elif cfg.AUG.AUG_METHOD == 'rangescaling': + min_resize_value = cfg.AUG.MIN_RESIZE_VALUE + max_resize_value = cfg.AUG.MAX_RESIZE_VALUE + if mode == ModelPhase.TRAIN: + if min_resize_value == max_resize_value: + random_size = min_resize_value + else: + random_size = int( + np.random.uniform(min_resize_value, max_resize_value) + 0.5) + else: + random_size = cfg.AUG.INF_RESIZE_VALUE + + value = max(img.shape[0], img.shape[1]) + scale = float(random_size) / float(value) + img = cv2.resize( + img, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR) + if grt is not None: + grt = cv2.resize( + grt, (0, 0), + fx=scale, + fy=scale, + interpolation=cv2.INTER_NEAREST) + else: + raise Exception("Unexpect data augmention method: {}".format( + cfg.AUG.AUG_METHOD)) + + return img, grt, grt_instance diff --git a/contrib/LaneNet/dataset/download_tusimple.py b/contrib/LaneNet/dataset/download_tusimple.py new file mode 100644 index 0000000000000000000000000000000000000000..1549cafdea4bbc97aca0401d84cd8844165324c8 --- /dev/null +++ b/contrib/LaneNet/dataset/download_tusimple.py @@ -0,0 +1,33 @@ +# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License" +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import sys +import os + +LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) +TEST_PATH = os.path.join(LOCAL_PATH, "../../../", "test") +sys.path.append(TEST_PATH) + +from test_utils import download_file_and_uncompress + + +def download_tusimple_dataset(savepath, extrapath): + url = "https://paddleseg.bj.bcebos.com/dataset/tusimple_lane_detection.tar" + download_file_and_uncompress( + url=url, savepath=savepath, extrapath=extrapath) + + +if __name__ == "__main__": + download_tusimple_dataset(LOCAL_PATH, LOCAL_PATH) + print("Dataset download finish!") diff --git a/contrib/LaneNet/eval.py b/contrib/LaneNet/eval.py new file mode 100644 index 0000000000000000000000000000000000000000..9256c4f024e7d15c9c018c4fe5930e5b7865c7e0 --- /dev/null +++ b/contrib/LaneNet/eval.py @@ -0,0 +1,182 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import os +# GPU memory garbage collection optimization flags +os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0" + +import sys + +cur_path = os.path.abspath(os.path.dirname(__file__)) +root_path = os.path.split(os.path.split(cur_path)[0])[0] +LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) +SEG_PATH = os.path.join(LOCAL_PATH, "../../../") +sys.path.append(SEG_PATH) +sys.path.append(root_path) + +import time +import argparse +import functools +import pprint +import cv2 +import numpy as np +import paddle +import paddle.fluid as fluid + +from utils.config import cfg +from pdseg.utils.timer import Timer, calculate_eta +from models.model_builder import build_model +from models.model_builder import ModelPhase +from reader import LaneNetDataset + + +def parse_args(): + parser = argparse.ArgumentParser(description='PaddleSeg model evalution') + parser.add_argument( + '--cfg', + dest='cfg_file', + help='Config file for training (and optionally testing)', + default=None, + type=str) + parser.add_argument( + '--use_gpu', + dest='use_gpu', + help='Use gpu or cpu', + action='store_true', + default=False) + parser.add_argument( + '--use_mpio', + dest='use_mpio', + help='Use multiprocess IO or not', + action='store_true', + default=False) + parser.add_argument( + 'opts', + help='See utils/config.py for all options', + default=None, + nargs=argparse.REMAINDER) + if len(sys.argv) == 1: + parser.print_help() + sys.exit(1) + return parser.parse_args() + + +def evaluate(cfg, ckpt_dir=None, use_gpu=False, use_mpio=False, **kwargs): + np.set_printoptions(precision=5, suppress=True) + + startup_prog = fluid.Program() + test_prog = fluid.Program() + + dataset = LaneNetDataset( + file_list=cfg.DATASET.VAL_FILE_LIST, + mode=ModelPhase.TRAIN, + shuffle=True, + data_dir=cfg.DATASET.DATA_DIR) + + def data_generator(): + #TODO: check is batch reader compatitable with Windows + if use_mpio: + data_gen = dataset.multiprocess_generator( + num_processes=cfg.DATALOADER.NUM_WORKERS, + 
max_queue_size=cfg.DATALOADER.BUF_SIZE) + else: + data_gen = dataset.generator() + + for b in data_gen: + yield b + + py_reader, pred, grts, masks, accuracy, fp, fn = build_model( + test_prog, startup_prog, phase=ModelPhase.EVAL) + + py_reader.decorate_sample_generator( + data_generator, drop_last=False, batch_size=cfg.BATCH_SIZE) + + # Get device environment + places = fluid.cuda_places() if use_gpu else fluid.cpu_places() + place = places[0] + dev_count = len(places) + print("#Device count: {}".format(dev_count)) + + exe = fluid.Executor(place) + exe.run(startup_prog) + + test_prog = test_prog.clone(for_test=True) + + ckpt_dir = cfg.TEST.TEST_MODEL if not ckpt_dir else ckpt_dir + + if ckpt_dir is not None: + print('load test model:', ckpt_dir) + fluid.io.load_params(exe, ckpt_dir, main_program=test_prog) + + # Use streaming confusion matrix to calculate mean_iou + np.set_printoptions( + precision=4, suppress=True, linewidth=160, floatmode="fixed") + fetch_list = [pred.name, grts.name, masks.name, accuracy.name, fp.name, fn.name] + num_images = 0 + step = 0 + avg_acc = 0.0 + avg_fp = 0.0 + avg_fn = 0.0 + # cur_images = 0 + all_step = cfg.DATASET.TEST_TOTAL_IMAGES // cfg.BATCH_SIZE + 1 + timer = Timer() + timer.start() + py_reader.start() + while True: + try: + step += 1 + pred, grts, masks, out_acc, out_fp, out_fn = exe.run( + test_prog, fetch_list=fetch_list, return_numpy=True) + + avg_acc += np.mean(out_acc) * pred.shape[0] + avg_fp += np.mean(out_fp) * pred.shape[0] + avg_fn += np.mean(out_fn) * pred.shape[0] + num_images += pred.shape[0] + + speed = 1.0 / timer.elapsed_time() + + print( + "[EVAL]step={} accuracy={:.4f} fp={:.4f} fn={:.4f} step/sec={:.2f} | ETA {}" + .format(step, avg_acc / num_images, avg_fp / num_images, avg_fn / num_images, speed, + calculate_eta(all_step - step, speed))) + + timer.restart() + sys.stdout.flush() + except fluid.core.EOFException: + break + + print("[EVAL]#image={} accuracy={:.4f} fp={:.4f} fn={:.4f}".format( + num_images, avg_acc / num_images, avg_fp / num_images, avg_fn / num_images)) + + return avg_acc / num_images, avg_fp / num_images, avg_fn / num_images + + +def main(): + args = parse_args() + if args.cfg_file is not None: + cfg.update_from_file(args.cfg_file) + if args.opts: + cfg.update_from_list(args.opts) + cfg.check_and_infer() + print(pprint.pformat(cfg)) + evaluate(cfg, **args.__dict__) + + +if __name__ == '__main__': + main() diff --git a/contrib/LaneNet/imgs/0005_pred_binary.png b/contrib/LaneNet/imgs/0005_pred_binary.png new file mode 100644 index 0000000000000000000000000000000000000000..77f66b2510683d3b94e6e7f1c219365546d8ca37 Binary files /dev/null and b/contrib/LaneNet/imgs/0005_pred_binary.png differ diff --git a/contrib/LaneNet/imgs/0005_pred_instance.png b/contrib/LaneNet/imgs/0005_pred_instance.png new file mode 100644 index 0000000000000000000000000000000000000000..ec99b30e49db0d0f02e198f75785618ff12b3bb6 Binary files /dev/null and b/contrib/LaneNet/imgs/0005_pred_instance.png differ diff --git a/contrib/LaneNet/imgs/0005_pred_lane.png b/contrib/LaneNet/imgs/0005_pred_lane.png new file mode 100644 index 0000000000000000000000000000000000000000..18c656f734c2276eaf03a07daf4f018db505d8ea Binary files /dev/null and b/contrib/LaneNet/imgs/0005_pred_lane.png differ diff --git a/contrib/LaneNet/loss.py b/contrib/LaneNet/loss.py new file mode 100644 index 0000000000000000000000000000000000000000..e888374582d0c83357bb652d7188dbf429832604 --- /dev/null +++ b/contrib/LaneNet/loss.py @@ -0,0 +1,138 @@ +# coding: utf8 +# copyright (c) 2019 
PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import paddle.fluid as fluid +import numpy as np +from utils.config import cfg + + +def unsorted_segment_sum(data, segment_ids, unique_labels, feature_dims): + zeros = fluid.layers.fill_constant_batch_size_like(unique_labels, shape=[1, feature_dims], + dtype='float32', value=0) + segment_ids = fluid.layers.unsqueeze(segment_ids, axes=[1]) + segment_ids.stop_gradient = True + segment_sum = fluid.layers.scatter_nd_add(zeros, segment_ids, data) + zeros.stop_gradient = True + + return segment_sum + + +def norm(x, axis=-1): + distance = fluid.layers.reduce_sum(fluid.layers.abs(x), dim=axis, keep_dim=True) + return distance + +def discriminative_loss_single( + prediction, + correct_label, + feature_dim, + label_shape, + delta_v, + delta_d, + param_var, + param_dist, + param_reg): + + correct_label = fluid.layers.reshape( + correct_label, [ + label_shape[1] * label_shape[0]]) + prediction = fluid.layers.transpose(prediction, [1, 2, 0]) + reshaped_pred = fluid.layers.reshape( + prediction, [ + label_shape[1] * label_shape[0], feature_dim]) + + unique_labels, unique_id, counts = fluid.layers.unique_with_counts(correct_label) + correct_label.stop_gradient = True + counts = fluid.layers.cast(counts, 'float32') + num_instances = fluid.layers.shape(unique_labels) + + segmented_sum = unsorted_segment_sum( + reshaped_pred, unique_id, unique_labels, feature_dims=feature_dim) + + counts_rsp = fluid.layers.reshape(counts, (-1, 1)) + mu = fluid.layers.elementwise_div(segmented_sum, counts_rsp) + counts_rsp.stop_gradient = True + mu_expand = fluid.layers.gather(mu, unique_id) + tmp = fluid.layers.elementwise_sub(mu_expand, reshaped_pred) + + distance = norm(tmp) + distance = distance - delta_v + + distance_pos = fluid.layers.greater_equal(distance, fluid.layers.zeros_like(distance)) + distance_pos = fluid.layers.cast(distance_pos, 'float32') + distance = distance * distance_pos + + distance = fluid.layers.square(distance) + + l_var = unsorted_segment_sum(distance, unique_id, unique_labels, feature_dims=1) + l_var = fluid.layers.elementwise_div(l_var, counts_rsp) + l_var = fluid.layers.reduce_sum(l_var) + l_var = l_var / fluid.layers.cast(num_instances * (num_instances - 1), 'float32') + + mu_interleaved_rep = fluid.layers.expand(mu, [num_instances, 1]) + mu_band_rep = fluid.layers.expand(mu, [1, num_instances]) + mu_band_rep = fluid.layers.reshape(mu_band_rep, (num_instances * num_instances, feature_dim)) + + mu_diff = fluid.layers.elementwise_sub(mu_band_rep, mu_interleaved_rep) + + intermediate_tensor = fluid.layers.reduce_sum(fluid.layers.abs(mu_diff), dim=1) + intermediate_tensor.stop_gradient = True + zero_vector = fluid.layers.zeros([1], 'float32') + bool_mask = fluid.layers.not_equal(intermediate_tensor, zero_vector) + temp = fluid.layers.where(bool_mask) + mu_diff_bool = fluid.layers.gather(mu_diff, temp) + + mu_norm = norm(mu_diff_bool) + mu_norm = 2. 
* delta_d - mu_norm + mu_norm_pos = fluid.layers.greater_equal(mu_norm, fluid.layers.zeros_like(mu_norm)) + mu_norm_pos = fluid.layers.cast(mu_norm_pos, 'float32') + mu_norm = mu_norm * mu_norm_pos + mu_norm_pos.stop_gradient = True + + mu_norm = fluid.layers.square(mu_norm) + + l_dist = fluid.layers.reduce_mean(mu_norm) + + l_reg = fluid.layers.reduce_mean(norm(mu, axis=1)) + + l_var = param_var * l_var + l_dist = param_dist * l_dist + l_reg = param_reg * l_reg + loss = l_var + l_dist + l_reg + return loss, l_var, l_dist, l_reg + + +def discriminative_loss(prediction, correct_label, feature_dim, image_shape, + delta_v, delta_d, param_var, param_dist, param_reg): + batch_size = int(cfg.BATCH_SIZE_PER_DEV) + output_ta_loss = 0. + output_ta_var = 0. + output_ta_dist = 0. + output_ta_reg = 0. + for i in range(batch_size): + disc_loss_single, l_var_single, l_dist_single, l_reg_single = discriminative_loss_single( + prediction[i], correct_label[i], feature_dim, image_shape, delta_v, delta_d, param_var, param_dist, + param_reg) + output_ta_loss += disc_loss_single + output_ta_var += l_var_single + output_ta_dist += l_dist_single + output_ta_reg += l_reg_single + + disc_loss = output_ta_loss / batch_size + l_var = output_ta_var / batch_size + l_dist = output_ta_dist / batch_size + l_reg = output_ta_reg / batch_size + return disc_loss, l_var, l_dist, l_reg + + diff --git a/contrib/LaneNet/models/__init__.py b/contrib/LaneNet/models/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..f750f6f7b42bb028f81a24edb2bb9e30c190578e --- /dev/null +++ b/contrib/LaneNet/models/__init__.py @@ -0,0 +1,17 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import models.modeling +#import models.backbone diff --git a/contrib/LaneNet/models/model_builder.py b/contrib/LaneNet/models/model_builder.py new file mode 100644 index 0000000000000000000000000000000000000000..ed6c275ecd51a2fc9f7f2fdf125300ce026c0a0a --- /dev/null +++ b/contrib/LaneNet/models/model_builder.py @@ -0,0 +1,261 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+import sys +sys.path.append("..") +import struct + +import paddle.fluid as fluid +from paddle.fluid.proto.framework_pb2 import VarType + +from pdseg import solver +from utils.config import cfg +from pdseg.loss import multi_softmax_with_loss +from loss import discriminative_loss +from models.modeling import lanenet + +class ModelPhase(object): + """ + Standard name for model phase in PaddleSeg + + The following standard keys are defined: + * `TRAIN`: training mode. + * `EVAL`: testing/evaluation mode. + * `PREDICT`: prediction/inference mode. + * `VISUAL` : visualization mode + """ + + TRAIN = 'train' + EVAL = 'eval' + PREDICT = 'predict' + VISUAL = 'visual' + + @staticmethod + def is_train(phase): + return phase == ModelPhase.TRAIN + + @staticmethod + def is_predict(phase): + return phase == ModelPhase.PREDICT + + @staticmethod + def is_eval(phase): + return phase == ModelPhase.EVAL + + @staticmethod + def is_visual(phase): + return phase == ModelPhase.VISUAL + + @staticmethod + def is_valid_phase(phase): + """ Check valid phase """ + if ModelPhase.is_train(phase) or ModelPhase.is_predict(phase) \ + or ModelPhase.is_eval(phase) or ModelPhase.is_visual(phase): + return True + + return False + + +def seg_model(image, class_num): + model_name = cfg.MODEL.MODEL_NAME + if model_name == 'lanenet': + logits = lanenet.lanenet(image, class_num) + else: + raise Exception( + "unknow model name, only support unet, deeplabv3p, icnet, pspnet, hrnet" + ) + return logits + + +def softmax(logit): + logit = fluid.layers.transpose(logit, [0, 2, 3, 1]) + logit = fluid.layers.softmax(logit) + logit = fluid.layers.transpose(logit, [0, 3, 1, 2]) + return logit + + +def sigmoid_to_softmax(logit): + """ + one channel to two channel + """ + logit = fluid.layers.transpose(logit, [0, 2, 3, 1]) + logit = fluid.layers.sigmoid(logit) + logit_back = 1 - logit + logit = fluid.layers.concat([logit_back, logit], axis=-1) + logit = fluid.layers.transpose(logit, [0, 3, 1, 2]) + return logit + + +def build_model(main_prog, start_prog, phase=ModelPhase.TRAIN): + if not ModelPhase.is_valid_phase(phase): + raise ValueError("ModelPhase {} is not valid!".format(phase)) + if ModelPhase.is_train(phase): + width = cfg.TRAIN_CROP_SIZE[0] + height = cfg.TRAIN_CROP_SIZE[1] + else: + width = cfg.EVAL_CROP_SIZE[0] + height = cfg.EVAL_CROP_SIZE[1] + + image_shape = [cfg.DATASET.DATA_DIM, height, width] + grt_shape = [1, height, width] + class_num = cfg.DATASET.NUM_CLASSES + + with fluid.program_guard(main_prog, start_prog): + with fluid.unique_name.guard(): + image = fluid.layers.data( + name='image', shape=image_shape, dtype='float32') + label = fluid.layers.data( + name='label', shape=grt_shape, dtype='int32') + if cfg.MODEL.MODEL_NAME == 'lanenet': + label_instance = fluid.layers.data( + name='label_instance', shape=grt_shape, dtype='int32') + mask = fluid.layers.data( + name='mask', shape=grt_shape, dtype='int32') + + # use PyReader when doing traning and evaluation + if ModelPhase.is_train(phase) or ModelPhase.is_eval(phase): + py_reader = fluid.io.PyReader( + feed_list=[image, label, label_instance, mask], + capacity=cfg.DATALOADER.BUF_SIZE, + iterable=False, + use_double_buffer=True) + + + loss_type = cfg.SOLVER.LOSS + if not isinstance(loss_type, list): + loss_type = list(loss_type) + + logits = seg_model(image, class_num) + + if ModelPhase.is_train(phase): + loss_valid = False + valid_loss = [] + if cfg.MODEL.MODEL_NAME == 'lanenet': + embeding_logit = logits[1] + logits = logits[0] + disc_loss, _, _, l_reg = 
discriminative_loss(embeding_logit, label_instance, 4, + image_shape[1:], 0.5, 3.0, 1.0, 1.0, 0.001) + + if "softmax_loss" in loss_type: + weight = None + if cfg.MODEL.MODEL_NAME == 'lanenet': + weight = get_dynamic_weight(label) + seg_loss = multi_softmax_with_loss(logits, label, mask, class_num, weight) + loss_valid = True + valid_loss.append("softmax_loss") + + if not loss_valid: + raise Exception("SOLVER.LOSS: {} is set wrong. it should " + "include one of (softmax_loss, bce_loss, dice_loss) at least" + " example: ['softmax_loss']".format(cfg.SOLVER.LOSS)) + + invalid_loss = [x for x in loss_type if x not in valid_loss] + if len(invalid_loss) > 0: + print("Warning: the loss {} you set is invalid. it will not be included in loss computed.".format(invalid_loss)) + + avg_loss = disc_loss + 0.00001 * l_reg + seg_loss + + #get pred result in original size + if isinstance(logits, tuple): + logit = logits[0] + else: + logit = logits + + if logit.shape[2:] != label.shape[2:]: + logit = fluid.layers.resize_bilinear(logit, label.shape[2:]) + + # return image input and logit output for inference graph prune + if ModelPhase.is_predict(phase): + if class_num == 1: + logit = sigmoid_to_softmax(logit) + else: + logit = softmax(logit) + return image, logit + + if class_num == 1: + out = sigmoid_to_softmax(logit) + out = fluid.layers.transpose(out, [0, 2, 3, 1]) + else: + out = fluid.layers.transpose(logit, [0, 2, 3, 1]) + + pred = fluid.layers.argmax(out, axis=3) + pred = fluid.layers.unsqueeze(pred, axes=[3]) + if ModelPhase.is_visual(phase): + if cfg.MODEL.MODEL_NAME == 'lanenet': + return pred, logits[1] + if class_num == 1: + logit = sigmoid_to_softmax(logit) + else: + logit = softmax(logit) + return pred, logit + + accuracy, fp, fn = compute_metric(pred, label) + if ModelPhase.is_eval(phase): + return py_reader, pred, label, mask, accuracy, fp, fn + + if ModelPhase.is_train(phase): + optimizer = solver.Solver(main_prog, start_prog) + decayed_lr = optimizer.optimise(avg_loss) + return py_reader, avg_loss, decayed_lr, pred, label, mask, disc_loss, seg_loss, accuracy, fp, fn + + +def compute_metric(pred, label): + label = fluid.layers.transpose(label, [0, 2, 3, 1]) + + idx = fluid.layers.where(pred == 1) + pix_cls_ret = fluid.layers.gather_nd(label, idx) + + correct_num = fluid.layers.reduce_sum(fluid.layers.cast(pix_cls_ret, 'float32')) + + gt_num = fluid.layers.cast(fluid.layers.shape(fluid.layers.gather_nd(label, + fluid.layers.where(label == 1)))[0], 'int64') + pred_num = fluid.layers.cast(fluid.layers.shape(fluid.layers.gather_nd(pred, idx))[0], 'int64') + accuracy = correct_num / gt_num + + false_pred = pred_num - correct_num + fp = fluid.layers.cast(false_pred, 'float32') / fluid.layers.cast(fluid.layers.shape(pix_cls_ret)[0], 'int64') + + label_cls_ret = fluid.layers.gather_nd(label, fluid.layers.where(label == 1)) + mis_pred = fluid.layers.cast(fluid.layers.shape(label_cls_ret)[0], 'int64') - correct_num + fn = fluid.layers.cast(mis_pred, 'float32') / fluid.layers.cast(fluid.layers.shape(label_cls_ret)[0], 'int64') + accuracy.stop_gradient = True + fp.stop_gradient = True + fn.stop_gradient = True + return accuracy, fp, fn + + +def get_dynamic_weight(label): + label = fluid.layers.reshape(label, [-1]) + unique_labels, unique_id, counts = fluid.layers.unique_with_counts(label) + counts = fluid.layers.cast(counts, 'float32') + weight = 1.0 / fluid.layers.log((counts / fluid.layers.reduce_sum(counts) + 1.02)) + return weight + + +def to_int(string, dest="I"): + return struct.unpack(dest, 
string)[0] + + +def parse_shape_from_file(filename): + with open(filename, "rb") as file: + version = file.read(4) + lod_level = to_int(file.read(8), dest="Q") + for i in range(lod_level): + _size = to_int(file.read(8), dest="Q") + _ = file.read(_size) + version = file.read(4) + tensor_desc_size = to_int(file.read(4)) + tensor_desc = VarType.TensorDesc() + tensor_desc.ParseFromString(file.read(tensor_desc_size)) + return tuple(tensor_desc.dims) diff --git a/contrib/LaneNet/models/modeling/__init__.py b/contrib/LaneNet/models/modeling/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/contrib/LaneNet/models/modeling/lanenet.py b/contrib/LaneNet/models/modeling/lanenet.py new file mode 100644 index 0000000000000000000000000000000000000000..68837983f08ce4220bfb5bb0ea7d96404687b259 --- /dev/null +++ b/contrib/LaneNet/models/modeling/lanenet.py @@ -0,0 +1,440 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import division +from __future__ import print_function + +import paddle.fluid as fluid + + +from utils.config import cfg +from pdseg.models.libs.model_libs import scope, name_scope +from pdseg.models.libs.model_libs import bn, bn_relu, relu +from pdseg.models.libs.model_libs import conv, max_pool, deconv +from pdseg.models.backbone.vgg import VGGNet as vgg_backbone +#from models.backbone.vgg import VGGNet as vgg_backbone + +# Bottleneck type +REGULAR = 1 +DOWNSAMPLING = 2 +UPSAMPLING = 3 +DILATED = 4 +ASYMMETRIC = 5 + + +def prelu(x, decoder=False): + # If decoder, then perform relu else perform prelu + if decoder: + return fluid.layers.relu(x) + return fluid.layers.prelu(x, 'channel') + + +def iniatial_block(inputs, name_scope='iniatial_block'): + ''' + The initial block for Enet has 2 branches: The convolution branch and Maxpool branch. + The conv branch has 13 filters, while the maxpool branch gives 3 channels corresponding to the RGB channels. + Both output layers are then concatenated to give an output of 16 channels. + + :param inputs(Tensor): A 4D tensor of shape [batch_size, height, width, channels] + :return net_concatenated(Tensor): a 4D Tensor of new shape [batch_size, height, width, channels] + ''' + # Convolutional branch + with scope(name_scope): + net_conv = conv(inputs, 13, 3, stride=2, padding=1) + net_conv = bn(net_conv) + net_conv = fluid.layers.prelu(net_conv, 'channel') + + # Max pool branch + net_pool = max_pool(inputs, [2, 2], stride=2, padding='SAME') + + # Concatenated output - does it matter max pool comes first or conv comes first? probably not. 
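+        # The conv branch contributes 13 channels and the max-pooled input keeps
+        # its 3 RGB channels; concatenating along axis=1 (the channel axis in the
+        # NCHW layout used here) gives the 16-channel initial feature map.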
+ net_concatenated = fluid.layers.concat([net_conv, net_pool], axis=1) + return net_concatenated + + +def bottleneck(inputs, + output_depth, + filter_size, + regularizer_prob, + projection_ratio=4, + type=REGULAR, + seed=0, + output_shape=None, + dilation_rate=None, + decoder=False, + name_scope='bottleneck'): + + # Calculate the depth reduction based on the projection ratio used in 1x1 convolution. + reduced_depth = int(inputs.shape[1] / projection_ratio) + + # DOWNSAMPLING BOTTLENECK + if type == DOWNSAMPLING: + #=============MAIN BRANCH============= + #Just perform a max pooling + with scope('down_sample'): + inputs_shape = inputs.shape + with scope('main_max_pool'): + net_main = fluid.layers.conv2d(inputs, inputs_shape[1], filter_size=3, stride=2, padding='SAME') + + #First get the difference in depth to pad, then pad with zeros only on the last dimension. + depth_to_pad = abs(inputs_shape[1] - output_depth) + paddings = [0, 0, 0, depth_to_pad, 0, 0, 0, 0] + with scope('main_padding'): + net_main = fluid.layers.pad(net_main, paddings=paddings) + + with scope('block1'): + net = conv(inputs, reduced_depth, [2, 2], stride=2, padding='same') + net = bn(net) + net = prelu(net, decoder=decoder) + + with scope('block2'): + net = conv(net, reduced_depth, [filter_size, filter_size], padding='same') + net = bn(net) + net = prelu(net, decoder=decoder) + + with scope('block3'): + net = conv(net, output_depth, [1, 1], padding='same') + net = bn(net) + net = prelu(net, decoder=decoder) + + # Regularizer + net = fluid.layers.dropout(net, regularizer_prob, seed=seed) + + # Finally, combine the two branches together via an element-wise addition + net = fluid.layers.elementwise_add(net, net_main) + net = prelu(net, decoder=decoder) + + return net, inputs_shape + + # DILATION CONVOLUTION BOTTLENECK + # Everything is the same as a regular bottleneck except for the dilation rate argument + elif type == DILATED: + #Check if dilation rate is given + if not dilation_rate: + raise ValueError('Dilation rate is not given.') + + with scope('dilated'): + # Save the main branch for addition later + net_main = inputs + + # First projection with 1x1 kernel (dimensionality reduction) + with scope('block1'): + net = conv(inputs, reduced_depth, [1, 1]) + net = bn(net) + net = prelu(net, decoder=decoder) + + # Second conv block --- apply dilated convolution here + with scope('block2'): + net = conv(net, reduced_depth, filter_size, padding='SAME', dilation=dilation_rate) + net = bn(net) + net = prelu(net, decoder=decoder) + + # Final projection with 1x1 kernel (Expansion) + with scope('block3'): + net = conv(net, output_depth, [1,1]) + net = bn(net) + net = prelu(net, decoder=decoder) + + # Regularizer + net = fluid.layers.dropout(net, regularizer_prob, seed=seed) + net = prelu(net, decoder=decoder) + + # Add the main branch + net = fluid.layers.elementwise_add(net_main, net) + net = prelu(net, decoder=decoder) + + return net + + # ASYMMETRIC CONVOLUTION BOTTLENECK + # Everything is the same as a regular bottleneck except for a [5,5] kernel decomposed into two [5,1] then [1,5] + elif type == ASYMMETRIC: + # Save the main branch for addition later + with scope('asymmetric'): + net_main = inputs + # First projection with 1x1 kernel (dimensionality reduction) + with scope('block1'): + net = conv(inputs, reduced_depth, [1, 1]) + net = bn(net) + net = prelu(net, decoder=decoder) + + # Second conv block --- apply asymmetric conv here + with scope('block2'): + with scope('asymmetric_conv2a'): + net = conv(net, reduced_depth, 
[filter_size, 1], padding='same') + with scope('asymmetric_conv2b'): + net = conv(net, reduced_depth, [1, filter_size], padding='same') + net = bn(net) + net = prelu(net, decoder=decoder) + + # Final projection with 1x1 kernel + with scope('block3'): + net = conv(net, output_depth, [1, 1]) + net = bn(net) + net = prelu(net, decoder=decoder) + + # Regularizer + net = fluid.layers.dropout(net, regularizer_prob, seed=seed) + net = prelu(net, decoder=decoder) + + # Add the main branch + net = fluid.layers.elementwise_add(net_main, net) + net = prelu(net, decoder=decoder) + + return net + + # UPSAMPLING BOTTLENECK + # Everything is the same as a regular one, except convolution becomes transposed. + elif type == UPSAMPLING: + #Check if pooling indices is given + + #Check output_shape given or not + if output_shape is None: + raise ValueError('Output depth is not given') + + #=======MAIN BRANCH======= + #Main branch to upsample. output shape must match with the shape of the layer that was pooled initially, in order + #for the pooling indices to work correctly. However, the initial pooled layer was padded, so need to reduce dimension + #before unpooling. In the paper, padding is replaced with convolution for this purpose of reducing the depth! + with scope('upsampling'): + with scope('unpool'): + net_unpool = conv(inputs, output_depth, [1, 1]) + net_unpool = bn(net_unpool) + net_unpool = fluid.layers.resize_bilinear(net_unpool, out_shape=output_shape[2:]) + + # First 1x1 projection to reduce depth + with scope('block1'): + net = conv(inputs, reduced_depth, [1, 1]) + net = bn(net) + net = prelu(net, decoder=decoder) + + with scope('block2'): + net = deconv(net, reduced_depth, filter_size=filter_size, stride=2, padding='same') + net = bn(net) + net = prelu(net, decoder=decoder) + + # Final projection with 1x1 kernel + with scope('block3'): + net = conv(net, output_depth, [1, 1]) + net = bn(net) + net = prelu(net, decoder=decoder) + + # Regularizer + net = fluid.layers.dropout(net, regularizer_prob, seed=seed) + net = prelu(net, decoder=decoder) + + # Finally, add the unpooling layer and the sub branch together + net = fluid.layers.elementwise_add(net, net_unpool) + net = prelu(net, decoder=decoder) + + return net + + # REGULAR BOTTLENECK + else: + with scope('regular'): + net_main = inputs + + # First projection with 1x1 kernel + with scope('block1'): + net = conv(inputs, reduced_depth, [1, 1]) + net = bn(net) + net = prelu(net, decoder=decoder) + + # Second conv block + with scope('block2'): + net = conv(net, reduced_depth, [filter_size, filter_size], padding='same') + net = bn(net) + net = prelu(net, decoder=decoder) + + # Final projection with 1x1 kernel + with scope('block3'): + net = conv(net, output_depth, [1, 1]) + net = bn(net) + net = prelu(net, decoder=decoder) + + # Regularizer + net = fluid.layers.dropout(net, regularizer_prob, seed=seed) + net = prelu(net, decoder=decoder) + + # Add the main branch + net = fluid.layers.elementwise_add(net_main, net) + net = prelu(net, decoder=decoder) + + return net + + +def ENet_stage1(inputs, name_scope='stage1_block'): + with scope(name_scope): + with scope('bottleneck1_0'): + net, inputs_shape_1 \ + = bottleneck(inputs, output_depth=64, filter_size=3, regularizer_prob=0.01, type=DOWNSAMPLING, + name_scope='bottleneck1_0') + with scope('bottleneck1_1'): + net = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.01, + name_scope='bottleneck1_1') + with scope('bottleneck1_2'): + net = bottleneck(net, output_depth=64, filter_size=3, 
regularizer_prob=0.01, + name_scope='bottleneck1_2') + with scope('bottleneck1_3'): + net = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.01, + name_scope='bottleneck1_3') + with scope('bottleneck1_4'): + net = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.01, + name_scope='bottleneck1_4') + return net, inputs_shape_1 + + +def ENet_stage2(inputs, name_scope='stage2_block'): + with scope(name_scope): + net, inputs_shape_2 \ + = bottleneck(inputs, output_depth=128, filter_size=3, regularizer_prob=0.1, type=DOWNSAMPLING, + name_scope='bottleneck2_0') + for i in range(2): + with scope('bottleneck2_{}'.format(str(4 * i + 1))): + net = bottleneck(net, output_depth=128, filter_size=3, regularizer_prob=0.1, + name_scope='bottleneck2_{}'.format(str(4 * i + 1))) + with scope('bottleneck2_{}'.format(str(4 * i + 2))): + net = bottleneck(net, output_depth=128, filter_size=3, regularizer_prob=0.1, type=DILATED, dilation_rate=(2 ** (2*i+1)), + name_scope='bottleneck2_{}'.format(str(4 * i + 2))) + with scope('bottleneck2_{}'.format(str(4 * i + 3))): + net = bottleneck(net, output_depth=128, filter_size=5, regularizer_prob=0.1, type=ASYMMETRIC, + name_scope='bottleneck2_{}'.format(str(4 * i + 3))) + with scope('bottleneck2_{}'.format(str(4 * i + 4))): + net = bottleneck(net, output_depth=128, filter_size=3, regularizer_prob=0.1, type=DILATED, dilation_rate=(2 ** (2*i+2)), + name_scope='bottleneck2_{}'.format(str(4 * i + 4))) + return net, inputs_shape_2 + + +def ENet_stage3(inputs, name_scope='stage3_block'): + with scope(name_scope): + for i in range(2): + with scope('bottleneck3_{}'.format(str(4 * i + 0))): + net = bottleneck(inputs, output_depth=128, filter_size=3, regularizer_prob=0.1, + name_scope='bottleneck3_{}'.format(str(4 * i + 0))) + with scope('bottleneck3_{}'.format(str(4 * i + 1))): + net = bottleneck(net, output_depth=128, filter_size=3, regularizer_prob=0.1, type=DILATED, dilation_rate=(2 ** (2*i+1)), + name_scope='bottleneck3_{}'.format(str(4 * i + 1))) + with scope('bottleneck3_{}'.format(str(4 * i + 2))): + net = bottleneck(net, output_depth=128, filter_size=5, regularizer_prob=0.1, type=ASYMMETRIC, + name_scope='bottleneck3_{}'.format(str(4 * i + 2))) + with scope('bottleneck3_{}'.format(str(4 * i + 3))): + net = bottleneck(net, output_depth=128, filter_size=3, regularizer_prob=0.1, type=DILATED, dilation_rate=(2 ** (2*i+2)), + name_scope='bottleneck3_{}'.format(str(4 * i + 3))) + return net + + +def ENet_stage4(inputs, inputs_shape, connect_tensor, + skip_connections=True, name_scope='stage4_block'): + with scope(name_scope): + with scope('bottleneck4_0'): + net = bottleneck(inputs, output_depth=64, filter_size=3, regularizer_prob=0.1, + type=UPSAMPLING, decoder=True, output_shape=inputs_shape, + name_scope='bottleneck4_0') + + if skip_connections: + net = fluid.layers.elementwise_add(net, connect_tensor) + with scope('bottleneck4_1'): + net = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.1, decoder=True, + name_scope='bottleneck4_1') + with scope('bottleneck4_2'): + net = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.1, decoder=True, + name_scope='bottleneck4_2') + + return net + + +def ENet_stage5(inputs, inputs_shape, connect_tensor, skip_connections=True, + name_scope='stage5_block'): + with scope(name_scope): + net = bottleneck(inputs, output_depth=16, filter_size=3, regularizer_prob=0.1, type=UPSAMPLING, + decoder=True, output_shape=inputs_shape, + name_scope='bottleneck5_0') + + if 
skip_connections: + net = fluid.layers.elementwise_add(net, connect_tensor) + with scope('bottleneck5_1'): + net = bottleneck(net, output_depth=16, filter_size=3, regularizer_prob=0.1, decoder=True, + name_scope='bottleneck5_1') + return net + + +def decoder(input, num_classes): + + if 'enet' in cfg.MODEL.LANENET.BACKBONE: + # Segmentation branch + with scope('LaneNetSeg'): + initial, stage1, stage2, inputs_shape_1, inputs_shape_2 = input + segStage3 = ENet_stage3(stage2) + segStage4 = ENet_stage4(segStage3, inputs_shape_2, stage1) + segStage5 = ENet_stage5(segStage4, inputs_shape_1, initial) + segLogits = deconv(segStage5, num_classes, filter_size=2, stride=2, padding='SAME') + + # Embedding branch + with scope('LaneNetEm'): + emStage3 = ENet_stage3(stage2) + emStage4 = ENet_stage4(emStage3, inputs_shape_2, stage1) + emStage5 = ENet_stage5(emStage4, inputs_shape_1, initial) + emLogits = deconv(emStage5, 4, filter_size=2, stride=2, padding='SAME') + + elif 'vgg' in cfg.MODEL.LANENET.BACKBONE: + encoder_list = ['pool5', 'pool4', 'pool3'] + # score stage + input_tensor = input[encoder_list[0]] + with scope('score_origin'): + score = conv(input_tensor, 64, 1) + encoder_list = encoder_list[1:] + for i in range(len(encoder_list)): + with scope('deconv_{:d}'.format(i + 1)): + deconv_out = deconv(score, 64, filter_size=4, stride=2, padding='SAME') + input_tensor = input[encoder_list[i]] + with scope('score_{:d}'.format(i + 1)): + score = conv(input_tensor, 64, 1) + score = fluid.layers.elementwise_add(deconv_out, score) + + with scope('deconv_final'): + emLogits = deconv(score, 64, filter_size=16, stride=8, padding='SAME') + with scope('score_final'): + segLogits = conv(emLogits, num_classes, 1) + emLogits = relu(conv(emLogits, 4, 1)) + return segLogits, emLogits + + +def encoder(input): + if 'vgg' in cfg.MODEL.LANENET.BACKBONE: + model = vgg_backbone(layers=16) + #output = model.net(input) + + _, encode_feature_dict = model.net(input, end_points=13, decode_points=[7, 10, 13]) + output = {} + output['pool3'] = encode_feature_dict[7] + output['pool4'] = encode_feature_dict[10] + output['pool5'] = encode_feature_dict[13] + elif 'enet' in cfg.MODEL.LANET.BACKBONE: + with scope('LaneNetBase'): + initial = iniatial_block(input) + stage1, inputs_shape_1 = ENet_stage1(initial) + stage2, inputs_shape_2 = ENet_stage2(stage1) + output = (initial, stage1, stage2, inputs_shape_1, inputs_shape_2) + else: + raise Exception("LaneNet expect enet and vgg backbone, but received {}". + format(cfg.MODEL.LANENET.BACKBONE)) + return output + + +def lanenet(img, num_classes): + + output = encoder(img) + segLogits, emLogits = decoder(output, num_classes) + + return segLogits, emLogits diff --git a/contrib/LaneNet/reader.py b/contrib/LaneNet/reader.py new file mode 100644 index 0000000000000000000000000000000000000000..29af37b8caf15da8cdabc73a847c30ca88d65c4a --- /dev/null +++ b/contrib/LaneNet/reader.py @@ -0,0 +1,321 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import print_function +import sys +import os +import time +import codecs + +import numpy as np +import cv2 + +from utils.config import cfg +import data_aug as aug +from pdseg.data_utils import GeneratorEnqueuer +from models.model_builder import ModelPhase +import copy + + +def cv2_imread(file_path, flag=cv2.IMREAD_COLOR): + # resolve cv2.imread open Chinese file path issues on Windows Platform. + return cv2.imdecode(np.fromfile(file_path, dtype=np.uint8), flag) + + +class LaneNetDataset(): + def __init__(self, + file_list, + data_dir, + shuffle=False, + mode=ModelPhase.TRAIN): + self.mode = mode + self.shuffle = shuffle + self.data_dir = data_dir + + self.shuffle_seed = 0 + # NOTE: Please ensure file list was save in UTF-8 coding format + with codecs.open(file_list, 'r', 'utf-8') as flist: + self.lines = [line.strip() for line in flist] + self.all_lines = copy.deepcopy(self.lines) + if shuffle and cfg.NUM_TRAINERS > 1: + np.random.RandomState(self.shuffle_seed).shuffle(self.all_lines) + elif shuffle: + np.random.shuffle(self.lines) + + def generator(self): + if self.shuffle and cfg.NUM_TRAINERS > 1: + np.random.RandomState(self.shuffle_seed).shuffle(self.all_lines) + num_lines = len(self.all_lines) // cfg.NUM_TRAINERS + self.lines = self.all_lines[num_lines * cfg.TRAINER_ID: num_lines * (cfg.TRAINER_ID + 1)] + self.shuffle_seed += 1 + elif self.shuffle: + np.random.shuffle(self.lines) + + for line in self.lines: + yield self.process_image(line, self.data_dir, self.mode) + + def sharding_generator(self, pid=0, num_processes=1): + """ + Use line id as shard key for multiprocess io + It's a normal generator if pid=0, num_processes=1 + """ + for index, line in enumerate(self.lines): + # Use index and pid to shard file list + if index % num_processes == pid: + yield self.process_image(line, self.data_dir, self.mode) + + def batch_reader(self, batch_size): + br = self.batch(self.reader, batch_size) + for batch in br: + yield batch[0], batch[1], batch[2] + + def multiprocess_generator(self, max_queue_size=32, num_processes=8): + # Re-shuffle file list + if self.shuffle and cfg.NUM_TRAINERS > 1: + np.random.RandomState(self.shuffle_seed).shuffle(self.all_lines) + num_lines = len(self.all_lines) // self.num_trainers + self.lines = self.all_lines[num_lines * self.trainer_id: num_lines * (self.trainer_id + 1)] + self.shuffle_seed += 1 + elif self.shuffle: + np.random.shuffle(self.lines) + + # Create multiple sharding generators according to num_processes for multiple processes + generators = [] + for pid in range(num_processes): + generators.append(self.sharding_generator(pid, num_processes)) + + try: + enqueuer = GeneratorEnqueuer(generators) + enqueuer.start(max_queue_size=max_queue_size, workers=num_processes) + while True: + generator_out = None + while enqueuer.is_running(): + if not enqueuer.queue.empty(): + generator_out = enqueuer.queue.get(timeout=5) + break + else: + time.sleep(0.01) + if generator_out is None: + break + yield generator_out + finally: + if enqueuer is not None: + enqueuer.stop() + + def batch(self, reader, batch_size, is_test=False, drop_last=False): + def batch_reader(is_test=False, drop_last=drop_last): + if is_test: + imgs, grts, grts_instance, img_names, valid_shapes, org_shapes = [], [], [], [], [], [] + for img, grt, grt_instance, img_name, valid_shape, org_shape in reader(): + imgs.append(img) + grts.append(grt) + 
grts_instance.append(grt_instance) + img_names.append(img_name) + valid_shapes.append(valid_shape) + org_shapes.append(org_shape) + if len(imgs) == batch_size: + yield np.array(imgs), np.array( + grts), np.array(grts_instance), img_names, np.array(valid_shapes), np.array( + org_shapes) + imgs, grts, grts_instance, img_names, valid_shapes, org_shapes = [], [], [], [], [], [] + + if not drop_last and len(imgs) > 0: + yield np.array(imgs), np.array(grts), np.array(grts_instance), img_names, np.array( + valid_shapes), np.array(org_shapes) + else: + imgs, labs, labs_instance, ignore = [], [], [], [] + bs = 0 + for img, lab, lab_instance, ig in reader(): + imgs.append(img) + labs.append(lab) + labs_instance.append(lab_instance) + ignore.append(ig) + bs += 1 + if bs == batch_size: + yield np.array(imgs), np.array(labs), np.array(labs_instance), np.array(ignore) + bs = 0 + imgs, labs, labs_instance, ignore = [], [], [], [] + + if not drop_last and bs > 0: + yield np.array(imgs), np.array(labs), np.array(labs_instance), np.array(ignore) + + return batch_reader(is_test, drop_last) + + def load_image(self, line, src_dir, mode=ModelPhase.TRAIN): + # original image cv2.imread flag setting + cv2_imread_flag = cv2.IMREAD_COLOR + if cfg.DATASET.IMAGE_TYPE == "rgba": + # If use RBGA 4 channel ImageType, use IMREAD_UNCHANGED flags to + # reserver alpha channel + cv2_imread_flag = cv2.IMREAD_UNCHANGED + + parts = line.strip().split(cfg.DATASET.SEPARATOR) + if len(parts) != 3: + if mode == ModelPhase.TRAIN or mode == ModelPhase.EVAL: + raise Exception("File list format incorrect! It should be" + " image_name{}label_name\\n".format( + cfg.DATASET.SEPARATOR)) + img_name, grt_name, grt_instance_name = parts[0], None, None + else: + img_name, grt_name, grt_instance_name = parts[0], parts[1], parts[2] + + img_path = os.path.join(src_dir, img_name) + img = cv2_imread(img_path, cv2_imread_flag) + + if grt_name is not None: + grt_path = os.path.join(src_dir, grt_name) + grt_instance_path = os.path.join(src_dir, grt_instance_name) + grt = cv2_imread(grt_path, cv2.IMREAD_GRAYSCALE) + grt[grt == 255] = 1 + grt[grt != 1] = 0 + grt_instance = cv2_imread(grt_instance_path, cv2.IMREAD_GRAYSCALE) + else: + grt = None + grt_instance = None + + if img is None: + raise Exception( + "Empty image, src_dir: {}, img: {} & lab: {}".format( + src_dir, img_path, grt_path)) + + img_height = img.shape[0] + img_width = img.shape[1] + + if grt is not None: + grt_height = grt.shape[0] + grt_width = grt.shape[1] + + if img_height != grt_height or img_width != grt_width: + raise Exception( + "source img and label img must has the same size") + else: + if mode == ModelPhase.TRAIN or mode == ModelPhase.EVAL: + raise Exception( + "Empty image, src_dir: {}, img: {} & lab: {}".format( + src_dir, img_path, grt_path)) + + if len(img.shape) < 3: + img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) + + img_channels = img.shape[2] + if img_channels < 3: + raise Exception("PaddleSeg only supports gray, rgb or rgba image") + if img_channels != cfg.DATASET.DATA_DIM: + raise Exception( + "Input image channel({}) is not match cfg.DATASET.DATA_DIM({}), img_name={}" + .format(img_channels, cfg.DATASET.DATADIM, img_name)) + if img_channels != len(cfg.MEAN): + raise Exception( + "img name {}, img chns {} mean size {}, size unequal".format( + img_name, img_channels, len(cfg.MEAN))) + if img_channels != len(cfg.STD): + raise Exception( + "img name {}, img chns {} std size {}, size unequal".format( + img_name, img_channels, len(cfg.STD))) + + return img, grt, 
grt_instance, img_name, grt_name + + def normalize_image(self, img): + """ 像素归一化后减均值除方差 """ + img = img.transpose((2, 0, 1)).astype('float32') / 255.0 + img_mean = np.array(cfg.MEAN).reshape((len(cfg.MEAN), 1, 1)) + img_std = np.array(cfg.STD).reshape((len(cfg.STD), 1, 1)) + img -= img_mean + img /= img_std + + return img + + def process_image(self, line, data_dir, mode): + """ process_image """ + img, grt, grt_instance, img_name, grt_name = self.load_image( + line, data_dir, mode=mode) + if mode == ModelPhase.TRAIN: + img, grt, grt_instance = aug.resize(img, grt, grt_instance, mode) + if cfg.AUG.RICH_CROP.ENABLE: + if cfg.AUG.RICH_CROP.BLUR: + if cfg.AUG.RICH_CROP.BLUR_RATIO <= 0: + n = 0 + elif cfg.AUG.RICH_CROP.BLUR_RATIO >= 1: + n = 1 + else: + n = int(1.0 / cfg.AUG.RICH_CROP.BLUR_RATIO) + if n > 0: + if np.random.randint(0, n) == 0: + radius = np.random.randint(3, 10) + if radius % 2 != 1: + radius = radius + 1 + if radius > 9: + radius = 9 + img = cv2.GaussianBlur(img, (radius, radius), 0, 0) + + img, grt = aug.random_rotation( + img, + grt, + rich_crop_max_rotation=cfg.AUG.RICH_CROP.MAX_ROTATION, + mean_value=cfg.DATASET.PADDING_VALUE) + + img, grt = aug.rand_scale_aspect( + img, + grt, + rich_crop_min_scale=cfg.AUG.RICH_CROP.MIN_AREA_RATIO, + rich_crop_aspect_ratio=cfg.AUG.RICH_CROP.ASPECT_RATIO) + img = aug.hsv_color_jitter( + img, + brightness_jitter_ratio=cfg.AUG.RICH_CROP. + BRIGHTNESS_JITTER_RATIO, + saturation_jitter_ratio=cfg.AUG.RICH_CROP. + SATURATION_JITTER_RATIO, + contrast_jitter_ratio=cfg.AUG.RICH_CROP. + CONTRAST_JITTER_RATIO) + + if cfg.AUG.FLIP: + if cfg.AUG.FLIP_RATIO <= 0: + n = 0 + elif cfg.AUG.FLIP_RATIO >= 1: + n = 1 + else: + n = int(1.0 / cfg.AUG.FLIP_RATIO) + if n > 0: + if np.random.randint(0, n) == 0: + img = img[::-1, :, :] + grt = grt[::-1, :] + + if cfg.AUG.MIRROR: + if np.random.randint(0, 2) == 1: + img = img[:, ::-1, :] + grt = grt[:, ::-1] + + img, grt = aug.rand_crop(img, grt, mode=mode) + elif ModelPhase.is_eval(mode): + img, grt, grt_instance = aug.resize(img, grt, grt_instance, mode=mode) + elif ModelPhase.is_visual(mode): + ori_img = img.copy() + img, grt, grt_instance = aug.resize(img, grt, grt_instance, mode=mode) + valid_shape = [img.shape[0], img.shape[1]] + else: + raise ValueError("Dataset mode={} Error!".format(mode)) + + # Normalize image + img = self.normalize_image(img) + + if ModelPhase.is_train(mode) or ModelPhase.is_eval(mode): + grt = np.expand_dims(np.array(grt).astype('int32'), axis=0) + ignore = (grt != cfg.DATASET.IGNORE_INDEX).astype('int32') + if ModelPhase.is_train(mode): + return (img, grt, grt_instance, ignore) + elif ModelPhase.is_eval(mode): + return (img, grt, grt_instance, ignore) + elif ModelPhase.is_visual(mode): + return (img, grt, grt_instance, img_name, valid_shape, ori_img) diff --git a/contrib/LaneNet/requirements.txt b/contrib/LaneNet/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..2b5eb8643803e1177297d2a766227e274dcdc29d --- /dev/null +++ b/contrib/LaneNet/requirements.txt @@ -0,0 +1,13 @@ +pre-commit +yapf == 0.26.0 +flake8 +pyyaml >= 5.1 +tb-paddle +tensorboard >= 1.15.0 +Pillow +numpy +six +opencv-python +tqdm +requests +sklearn diff --git a/contrib/LaneNet/train.py b/contrib/LaneNet/train.py new file mode 100644 index 0000000000000000000000000000000000000000..3ee9489c9b18b19b6b84615a400815a3bc33ccb2 --- /dev/null +++ b/contrib/LaneNet/train.py @@ -0,0 +1,470 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import os +# GPU memory garbage collection optimization flags +os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0" + +import sys + +cur_path = os.path.abspath(os.path.dirname(__file__)) +root_path = os.path.split(os.path.split(cur_path)[0])[0] +SEG_PATH = os.path.join(cur_path, "../../../") +sys.path.append(SEG_PATH) +sys.path.append(root_path) + +import argparse +import pprint + +import numpy as np +import paddle.fluid as fluid + +from utils.config import cfg +from pdseg.utils.timer import Timer, calculate_eta +from reader import LaneNetDataset +from models.model_builder import build_model +from models.model_builder import ModelPhase +from models.model_builder import parse_shape_from_file +from eval import evaluate +from vis import visualize +from utils import dist_utils + + +def parse_args(): + parser = argparse.ArgumentParser(description='PaddleSeg training') + parser.add_argument( + '--cfg', + dest='cfg_file', + help='Config file for training (and optionally testing)', + default=None, + type=str) + parser.add_argument( + '--use_gpu', + dest='use_gpu', + help='Use gpu or cpu', + action='store_true', + default=False) + parser.add_argument( + '--use_mpio', + dest='use_mpio', + help='Use multiprocess I/O or not', + action='store_true', + default=False) + parser.add_argument( + '--log_steps', + dest='log_steps', + help='Display logging information at every log_steps', + default=10, + type=int) + parser.add_argument( + '--debug', + dest='debug', + help='debug mode, display detail information of training', + action='store_true') + parser.add_argument( + '--use_tb', + dest='use_tb', + help='whether to record the data during training to Tensorboard', + action='store_true') + parser.add_argument( + '--tb_log_dir', + dest='tb_log_dir', + help='Tensorboard logging directory', + default=None, + type=str) + parser.add_argument( + '--do_eval', + dest='do_eval', + help='Evaluation models result on every new checkpoint', + action='store_true') + parser.add_argument( + 'opts', + help='See utils/config.py for all options', + default=None, + nargs=argparse.REMAINDER) + return parser.parse_args() + + +def save_vars(executor, dirname, program=None, vars=None): + """ + Temporary resolution for Win save variables compatability. 
+ Will fix in PaddlePaddle v1.5.2 + """ + + save_program = fluid.Program() + save_block = save_program.global_block() + + for each_var in vars: + # NOTE: don't save the variable which type is RAW + if each_var.type == fluid.core.VarDesc.VarType.RAW: + continue + new_var = save_block.create_var( + name=each_var.name, + shape=each_var.shape, + dtype=each_var.dtype, + type=each_var.type, + lod_level=each_var.lod_level, + persistable=True) + file_path = os.path.join(dirname, new_var.name) + file_path = os.path.normpath(file_path) + save_block.append_op( + type='save', + inputs={'X': [new_var]}, + outputs={}, + attrs={'file_path': file_path}) + + executor.run(save_program) + + +def save_checkpoint(exe, program, ckpt_name): + """ + Save checkpoint for evaluation or resume training + """ + ckpt_dir = os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, str(ckpt_name)) + print("Save model checkpoint to {}".format(ckpt_dir)) + if not os.path.isdir(ckpt_dir): + os.makedirs(ckpt_dir) + + save_vars( + exe, + ckpt_dir, + program, + vars=list(filter(fluid.io.is_persistable, program.list_vars()))) + + return ckpt_dir + + +def load_checkpoint(exe, program): + """ + Load checkpoiont from pretrained model directory for resume training + """ + + print('Resume model training from:', cfg.TRAIN.RESUME_MODEL_DIR) + if not os.path.exists(cfg.TRAIN.RESUME_MODEL_DIR): + raise ValueError("TRAIN.PRETRAIN_MODEL {} not exist!".format( + cfg.TRAIN.RESUME_MODEL_DIR)) + + fluid.io.load_persistables( + exe, cfg.TRAIN.RESUME_MODEL_DIR, main_program=program) + + model_path = cfg.TRAIN.RESUME_MODEL_DIR + # Check is path ended by path spearator + if model_path[-1] == os.sep: + model_path = model_path[0:-1] + epoch_name = os.path.basename(model_path) + # If resume model is final model + if epoch_name == 'final': + begin_epoch = cfg.SOLVER.NUM_EPOCHS + # If resume model path is end of digit, restore epoch status + elif epoch_name.isdigit(): + epoch = int(epoch_name) + begin_epoch = epoch + 1 + else: + raise ValueError("Resume model path is not valid!") + print("Model checkpoint loaded successfully!") + + return begin_epoch + + +def print_info(*msg): + if cfg.TRAINER_ID == 0: + print(*msg) + + +def train(cfg): + startup_prog = fluid.Program() + train_prog = fluid.Program() + drop_last = True + + dataset = LaneNetDataset( + file_list=cfg.DATASET.TRAIN_FILE_LIST, + mode=ModelPhase.TRAIN, + shuffle=True, + data_dir=cfg.DATASET.DATA_DIR) + + def data_generator(): + if args.use_mpio: + data_gen = dataset.multiprocess_generator( + num_processes=cfg.DATALOADER.NUM_WORKERS, + max_queue_size=cfg.DATALOADER.BUF_SIZE) + else: + data_gen = dataset.generator() + + batch_data = [] + for b in data_gen: + batch_data.append(b) + if len(batch_data) == (cfg.BATCH_SIZE // cfg.NUM_TRAINERS): + for item in batch_data: + yield item + batch_data = [] + + # Get device environment + gpu_id = int(os.environ.get('FLAGS_selected_gpus', 0)) + place = fluid.CUDAPlace(gpu_id) if args.use_gpu else fluid.CPUPlace() + places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places() + + # Get number of GPU + dev_count = cfg.NUM_TRAINERS if cfg.NUM_TRAINERS > 1 else len(places) + print_info("#Device count: {}".format(dev_count)) + + # Make sure BATCH_SIZE can divided by GPU cards + assert cfg.BATCH_SIZE % dev_count == 0, ( + 'BATCH_SIZE:{} not divisble by number of GPUs:{}'.format( + cfg.BATCH_SIZE, dev_count)) + # If use multi-gpu training mode, batch data will allocated to each GPU evenly + batch_size_per_dev = cfg.BATCH_SIZE // dev_count + cfg.BATCH_SIZE_PER_DEV = 
batch_size_per_dev + print_info("batch_size_per_dev: {}".format(batch_size_per_dev)) + + py_reader, avg_loss, lr, pred, grts, masks, emb_loss, seg_loss, accuracy, fp, fn = build_model( + train_prog, startup_prog, phase=ModelPhase.TRAIN) + py_reader.decorate_sample_generator( + data_generator, batch_size=batch_size_per_dev, drop_last=drop_last) + + exe = fluid.Executor(place) + exe.run(startup_prog) + + exec_strategy = fluid.ExecutionStrategy() + # Clear temporary variables every 100 iteration + if args.use_gpu: + exec_strategy.num_threads = fluid.core.get_cuda_device_count() + exec_strategy.num_iteration_per_drop_scope = 100 + build_strategy = fluid.BuildStrategy() + + if cfg.NUM_TRAINERS > 1 and args.use_gpu: + dist_utils.prepare_for_multi_process(exe, build_strategy, train_prog) + exec_strategy.num_threads = 1 + + if cfg.TRAIN.SYNC_BATCH_NORM and args.use_gpu: + if dev_count > 1: + # Apply sync batch norm strategy + print_info("Sync BatchNorm strategy is effective.") + build_strategy.sync_batch_norm = True + else: + print_info( + "Sync BatchNorm strategy will not be effective if GPU device" + " count <= 1") + compiled_train_prog = fluid.CompiledProgram(train_prog).with_data_parallel( + loss_name=avg_loss.name, + exec_strategy=exec_strategy, + build_strategy=build_strategy) + + # Resume training + begin_epoch = cfg.SOLVER.BEGIN_EPOCH + if cfg.TRAIN.RESUME_MODEL_DIR: + begin_epoch = load_checkpoint(exe, train_prog) + # Load pretrained model + elif os.path.exists(cfg.TRAIN.PRETRAINED_MODEL_DIR): + print_info('Pretrained model dir: ', cfg.TRAIN.PRETRAINED_MODEL_DIR) + load_vars = [] + load_fail_vars = [] + + def var_shape_matched(var, shape): + """ + Check whehter persitable variable shape is match with current network + """ + var_exist = os.path.exists( + os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name)) + if var_exist: + var_shape = parse_shape_from_file( + os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name)) + if var_shape != shape: + print(var.name, var_shape, shape) + return var_shape == shape + return False + + for x in train_prog.list_vars(): + if isinstance(x, fluid.framework.Parameter): + shape = tuple(fluid.global_scope().find_var( + x.name).get_tensor().shape()) + if var_shape_matched(x, shape): + load_vars.append(x) + else: + load_fail_vars.append(x) + + fluid.io.load_vars( + exe, dirname=cfg.TRAIN.PRETRAINED_MODEL_DIR, vars=load_vars) + for var in load_vars: + print_info("Parameter[{}] loaded sucessfully!".format(var.name)) + for var in load_fail_vars: + print_info( + "Parameter[{}] don't exist or shape does not match current network, skip" + " to load it.".format(var.name)) + print_info("{}/{} pretrained parameters loaded successfully!".format( + len(load_vars), + len(load_vars) + len(load_fail_vars))) + else: + print_info( + 'Pretrained model dir {} not exists, training from scratch...'. 
+ format(cfg.TRAIN.PRETRAINED_MODEL_DIR)) + + # fetch_list = [avg_loss.name, lr.name, accuracy.name, precision.name, recall.name] + fetch_list = [avg_loss.name, lr.name, seg_loss.name, emb_loss.name, accuracy.name, fp.name, fn.name] + if args.debug: + # Fetch more variable info and use streaming confusion matrix to + # calculate IoU results if in debug mode + np.set_printoptions( + precision=4, suppress=True, linewidth=160, floatmode="fixed") + fetch_list.extend([pred.name, grts.name, masks.name]) + # cm = ConfusionMatrix(cfg.DATASET.NUM_CLASSES, streaming=True) + + if args.use_tb: + if not args.tb_log_dir: + print_info("Please specify the log directory by --tb_log_dir.") + exit(1) + + from tb_paddle import SummaryWriter + log_writer = SummaryWriter(args.tb_log_dir) + + # trainer_id = int(os.getenv("PADDLE_TRAINER_ID", 0)) + # num_trainers = int(os.environ.get('PADDLE_TRAINERS_NUM', 1)) + global_step = 0 + all_step = cfg.DATASET.TRAIN_TOTAL_IMAGES // cfg.BATCH_SIZE + if cfg.DATASET.TRAIN_TOTAL_IMAGES % cfg.BATCH_SIZE and drop_last != True: + all_step += 1 + all_step *= (cfg.SOLVER.NUM_EPOCHS - begin_epoch + 1) + + avg_loss = 0.0 + avg_seg_loss = 0.0 + avg_emb_loss = 0.0 + avg_acc = 0.0 + avg_fp = 0.0 + avg_fn = 0.0 + timer = Timer() + timer.start() + if begin_epoch > cfg.SOLVER.NUM_EPOCHS: + raise ValueError( + ("begin epoch[{}] is larger than cfg.SOLVER.NUM_EPOCHS[{}]").format( + begin_epoch, cfg.SOLVER.NUM_EPOCHS)) + + if args.use_mpio: + print_info("Use multiprocess reader") + else: + print_info("Use multi-thread reader") + + for epoch in range(begin_epoch, cfg.SOLVER.NUM_EPOCHS + 1): + py_reader.start() + while True: + try: + # If not in debug mode, avoid unnessary log and calculate + loss, lr, out_seg_loss, out_emb_loss, out_acc, out_fp, out_fn = exe.run( + program=compiled_train_prog, + fetch_list=fetch_list, + return_numpy=True) + + avg_loss += np.mean(np.array(loss)) + avg_seg_loss += np.mean(np.array(out_seg_loss)) + avg_emb_loss += np.mean(np.array(out_emb_loss)) + avg_acc += np.mean(out_acc) + avg_fp += np.mean(out_fp) + avg_fn += np.mean(out_fn) + global_step += 1 + + if global_step % args.log_steps == 0 and cfg.TRAINER_ID == 0: + avg_loss /= args.log_steps + avg_seg_loss /= args.log_steps + avg_emb_loss /= args.log_steps + avg_acc /= args.log_steps + avg_fp /= args.log_steps + avg_fn /= args.log_steps + speed = args.log_steps / timer.elapsed_time() + print(( + "epoch={} step={} lr={:.5f} loss={:.4f} seg_loss={:.4f} emb_loss={:.4f} accuracy={:.4} fp={:.4} fn={:.4} step/sec={:.3f} | ETA {}" + ).format(epoch, global_step, lr[0], avg_loss, avg_seg_loss, avg_emb_loss, avg_acc, avg_fp, avg_fn, speed, + calculate_eta(all_step - global_step, speed))) + if args.use_tb: + log_writer.add_scalar('Train/loss', avg_loss, + global_step) + log_writer.add_scalar('Train/lr', lr[0], + global_step) + log_writer.add_scalar('Train/speed', speed, + global_step) + sys.stdout.flush() + avg_loss = 0.0 + avg_seg_loss = 0.0 + avg_emb_loss = 0.0 + avg_acc = 0.0 + avg_fp = 0.0 + avg_fn = 0.0 + timer.restart() + + except fluid.core.EOFException: + py_reader.reset() + break + except Exception as e: + print(e) + + if epoch % cfg.TRAIN.SNAPSHOT_EPOCH == 0 and cfg.TRAINER_ID == 0: + ckpt_dir = save_checkpoint(exe, train_prog, epoch) + + if args.do_eval: + print("Evaluation start") + accuracy, fp, fn = evaluate( + cfg=cfg, + ckpt_dir=ckpt_dir, + use_gpu=args.use_gpu, + use_mpio=args.use_mpio) + if args.use_tb: + log_writer.add_scalar('Evaluate/accuracy', accuracy, + global_step) + 
log_writer.add_scalar('Evaluate/fp', fp, + global_step) + log_writer.add_scalar('Evaluate/fn', fn, + global_step) + + # Use Tensorboard to visualize results + if args.use_tb and cfg.DATASET.VIS_FILE_LIST is not None: + visualize( + cfg=cfg, + use_gpu=args.use_gpu, + vis_file_list=cfg.DATASET.VIS_FILE_LIST, + vis_dir="visual", + ckpt_dir=ckpt_dir, + log_writer=log_writer) + + # save final model + if cfg.TRAINER_ID == 0: + save_checkpoint(exe, train_prog, 'final') + + +def main(args): + if args.cfg_file is not None: + cfg.update_from_file(args.cfg_file) + if args.opts: + cfg.update_from_list(args.opts) + + cfg.TRAINER_ID = int(os.getenv("PADDLE_TRAINER_ID", 0)) + cfg.NUM_TRAINERS = int(os.environ.get('PADDLE_TRAINERS_NUM', 1)) + + cfg.check_and_infer() + print_info(pprint.pformat(cfg)) + train(cfg) + + +if __name__ == '__main__': + args = parse_args() + if fluid.core.is_compiled_with_cuda() != True and args.use_gpu == True: + print( + "You can not set use_gpu = True in the model because you are using paddlepaddle-cpu." + ) + print( + "Please: 1. Install paddlepaddle-gpu to run your models on GPU or 2. Set use_gpu=False to run models on CPU." + ) + sys.exit(1) + main(args) diff --git a/contrib/LaneNet/utils/__init__.py b/contrib/LaneNet/utils/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/contrib/LaneNet/utils/config.py b/contrib/LaneNet/utils/config.py new file mode 100644 index 0000000000000000000000000000000000000000..d1186636c7d2b8004756bdfbaaca74aa47d32b7f --- /dev/null +++ b/contrib/LaneNet/utils/config.py @@ -0,0 +1,233 @@ +# -*- coding: utf-8 -*- +# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License" +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
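+
+# LaneNet-specific configuration defaults, built on top of pdseg's SegConfig.
+# These values are normally overridden at startup instead of being edited here;
+# train.py does roughly the following:
+#
+#     cfg.update_from_file(args.cfg_file)  # yaml config passed via --cfg
+#     cfg.update_from_list(args.opts)      # KEY VALUE overrides from the command line
+#     cfg.check_and_infer()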
+ +from __future__ import print_function +from __future__ import unicode_literals + +import os +import sys + +# LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) +# PDSEG_PATH = os.path.join(LOCAL_PATH, "../../../", "pdseg") +# print(PDSEG_PATH) +# sys.path.insert(0, PDSEG_PATH) +# print(sys.path) + +from pdseg.utils.collect import SegConfig +import numpy as np + +cfg = SegConfig() + +########################## 基本配置 ########################################### +# 均值,图像预处理减去的均值 +cfg.MEAN = [0.5, 0.5, 0.5] +# 标准差,图像预处理除以标准差· +cfg.STD = [0.5, 0.5, 0.5] +# 批处理大小 +cfg.BATCH_SIZE = 1 +# 验证时图像裁剪尺寸(宽,高) +cfg.EVAL_CROP_SIZE = tuple() +# 训练时图像裁剪尺寸(宽,高) +cfg.TRAIN_CROP_SIZE = tuple() +# 多进程训练总进程数 +cfg.NUM_TRAINERS = 1 +# 多进程训练进程ID +cfg.TRAINER_ID = 0 +# 每张gpu上的批大小,无需设置,程序会自动根据batch调整 +cfg.BATCH_SIZE_PER_DEV = 1 +########################## 数据载入配置 ####################################### +# 数据载入时的并发数, 建议值8 +cfg.DATALOADER.NUM_WORKERS = 8 +# 数据载入时缓存队列大小, 建议值256 +cfg.DATALOADER.BUF_SIZE = 256 + +########################## 数据集配置 ######################################### +# 数据主目录目录 +cfg.DATASET.DATA_DIR = './dataset/cityscapes/' +# 训练集列表 +cfg.DATASET.TRAIN_FILE_LIST = './dataset/cityscapes/train.list' +# 训练集数量 +cfg.DATASET.TRAIN_TOTAL_IMAGES = 2975 +# 验证集列表 +cfg.DATASET.VAL_FILE_LIST = './dataset/cityscapes/val.list' +# 验证数据数量 +cfg.DATASET.VAL_TOTAL_IMAGES = 500 +# 测试数据列表 +cfg.DATASET.TEST_FILE_LIST = './dataset/cityscapes/test.list' +# 测试数据数量 +cfg.DATASET.TEST_TOTAL_IMAGES = 500 +# Tensorboard 可视化的数据集 +cfg.DATASET.VIS_FILE_LIST = None +# 类别数(需包括背景类) +cfg.DATASET.NUM_CLASSES = 19 +# 输入图像类型, 支持三通道'rgb',四通道'rgba',单通道灰度图'gray' +cfg.DATASET.IMAGE_TYPE = 'rgb' +# 输入图片的通道数 +cfg.DATASET.DATA_DIM = 3 +# 数据列表分割符, 默认为空格 +cfg.DATASET.SEPARATOR = ' ' +# 忽略的像素标签值, 默认为255,一般无需改动 +cfg.DATASET.IGNORE_INDEX = 255 +# 数据增强是图像的padding值 +cfg.DATASET.PADDING_VALUE = [127.5,127.5,127.5] + +########################### 数据增强配置 ###################################### +# 图像镜像左右翻转 +cfg.AUG.MIRROR = True +# 图像上下翻转开关,True/False +cfg.AUG.FLIP = False +# 图像启动上下翻转的概率,0-1 +cfg.AUG.FLIP_RATIO = 0.5 +# 图像resize的固定尺寸(宽,高),非负 +cfg.AUG.FIX_RESIZE_SIZE = tuple() +# 图像resize的方式有三种: +# unpadding(固定尺寸),stepscaling(按比例resize),rangescaling(长边对齐) +cfg.AUG.AUG_METHOD = 'rangescaling' +# 图像resize方式为stepscaling,resize最小尺度,非负 +cfg.AUG.MIN_SCALE_FACTOR = 0.5 +# 图像resize方式为stepscaling,resize最大尺度,不小于MIN_SCALE_FACTOR +cfg.AUG.MAX_SCALE_FACTOR = 2.0 +# 图像resize方式为stepscaling,resize尺度范围间隔,非负 +cfg.AUG.SCALE_STEP_SIZE = 0.25 +# 图像resize方式为rangescaling,训练时长边resize的范围最小值,非负 +cfg.AUG.MIN_RESIZE_VALUE = 400 +# 图像resize方式为rangescaling,训练时长边resize的范围最大值, +# 不小于MIN_RESIZE_VALUE +cfg.AUG.MAX_RESIZE_VALUE = 600 +# 图像resize方式为rangescaling, 测试验证可视化模式下长边resize的长度, +# 在MIN_RESIZE_VALUE到MAX_RESIZE_VALUE范围内 +cfg.AUG.INF_RESIZE_VALUE = 500 + +# RichCrop数据增广开关,用于提升模型鲁棒性 +cfg.AUG.RICH_CROP.ENABLE = False +# 图像旋转最大角度,0-90 +cfg.AUG.RICH_CROP.MAX_ROTATION = 15 +# 裁取图像与原始图像面积比,0-1 +cfg.AUG.RICH_CROP.MIN_AREA_RATIO = 0.5 +# 裁取图像宽高比范围,非负 +cfg.AUG.RICH_CROP.ASPECT_RATIO = 0.33 +# 亮度调节范围,0-1 +cfg.AUG.RICH_CROP.BRIGHTNESS_JITTER_RATIO = 0.5 +# 饱和度调节范围,0-1 +cfg.AUG.RICH_CROP.SATURATION_JITTER_RATIO = 0.5 +# 对比度调节范围,0-1 +cfg.AUG.RICH_CROP.CONTRAST_JITTER_RATIO = 0.5 +# 图像模糊开关,True/False +cfg.AUG.RICH_CROP.BLUR = False +# 图像启动模糊百分比,0-1 +cfg.AUG.RICH_CROP.BLUR_RATIO = 0.1 + +########################### 训练配置 ########################################## +# 模型保存路径 +cfg.TRAIN.MODEL_SAVE_DIR = '' +# 预训练模型路径 +cfg.TRAIN.PRETRAINED_MODEL_DIR = '' +# 是否resume,继续训练 +cfg.TRAIN.RESUME_MODEL_DIR = '' +# 
是否使用多卡间同步BatchNorm均值和方差 +cfg.TRAIN.SYNC_BATCH_NORM = False +# 模型参数保存的epoch间隔数,可用来继续训练中断的模型 +cfg.TRAIN.SNAPSHOT_EPOCH = 10 + +########################### 模型优化相关配置 ################################## +# 初始学习率 +cfg.SOLVER.LR = 0.1 +# 学习率下降方法, 支持poly piecewise cosine 三种 +cfg.SOLVER.LR_POLICY = "poly" +# 优化算法, 支持SGD和Adam两种算法 +cfg.SOLVER.OPTIMIZER = "sgd" +# 动量参数 +cfg.SOLVER.MOMENTUM = 0.9 +# 二阶矩估计的指数衰减率 +cfg.SOLVER.MOMENTUM2 = 0.999 +# 学习率Poly下降指数 +cfg.SOLVER.POWER = 0.9 +# step下降指数 +cfg.SOLVER.GAMMA = 0.1 +# step下降间隔 +cfg.SOLVER.DECAY_EPOCH = [10, 20] +# 学习率权重衰减,0-1 +cfg.SOLVER.WEIGHT_DECAY = 0.00004 +# 训练开始epoch数,默认为1 +cfg.SOLVER.BEGIN_EPOCH = 1 +# 训练epoch数,正整数 +cfg.SOLVER.NUM_EPOCHS = 30 +# loss的选择,支持softmax_loss, bce_loss, dice_loss +cfg.SOLVER.LOSS = ["softmax_loss"] +# cross entropy weight, 默认为None,如果设置为'dynamic',会根据每个batch中各个类别的数目, +# 动态调整类别权重。 +# 也可以设置一个静态权重(list的方式),比如有3类,每个类别权重可以设置为[0.1, 2.0, 0.9] +cfg.SOLVER.CROSS_ENTROPY_WEIGHT = None +########################## 测试配置 ########################################### +# 测试模型路径 +cfg.TEST.TEST_MODEL = '' + +########################## 模型通用配置 ####################################### +# 模型名称, 支持deeplab, unet, icnet三种 +cfg.MODEL.MODEL_NAME = '' +# BatchNorm类型: bn、gn(group_norm) +cfg.MODEL.DEFAULT_NORM_TYPE = 'bn' +# 多路损失加权值 +cfg.MODEL.MULTI_LOSS_WEIGHT = [1.0] +# DEFAULT_NORM_TYPE为gn时group数 +cfg.MODEL.DEFAULT_GROUP_NUMBER = 32 +# 极小值, 防止分母除0溢出,一般无需改动 +cfg.MODEL.DEFAULT_EPSILON = 1e-5 +# BatchNorm动量, 一般无需改动 +cfg.MODEL.BN_MOMENTUM = 0.99 +# 是否使用FP16训练 +cfg.MODEL.FP16 = False +# 混合精度训练需对LOSS进行scale, 默认为动态scale,静态scale可以设置为512.0 +cfg.MODEL.SCALE_LOSS = "DYNAMIC" + +########################## DeepLab模型配置 #################################### +# DeepLab backbone 配置, 可选项xception_65, mobilenetv2 +cfg.MODEL.DEEPLAB.BACKBONE = "xception_65" +# DeepLab output stride +cfg.MODEL.DEEPLAB.OUTPUT_STRIDE = 16 +# MobileNet backbone scale 设置 +cfg.MODEL.DEEPLAB.DEPTH_MULTIPLIER = 1.0 +# MobileNet backbone scale 设置 +cfg.MODEL.DEEPLAB.ENCODER_WITH_ASPP = True +# MobileNet backbone scale 设置 +cfg.MODEL.DEEPLAB.ENABLE_DECODER = True +# ASPP是否使用可分离卷积 +cfg.MODEL.DEEPLAB.ASPP_WITH_SEP_CONV = True +# 解码器是否使用可分离卷积 +cfg.MODEL.DEEPLAB.DECODER_USE_SEP_CONV = True + +########################## UNET模型配置 ####################################### +# 上采样方式, 默认为双线性插值 +cfg.MODEL.UNET.UPSAMPLE_MODE = 'bilinear' + +########################## ICNET模型配置 ###################################### +# RESNET backbone scale 设置 +cfg.MODEL.ICNET.DEPTH_MULTIPLIER = 0.5 +# RESNET 层数 设置 +cfg.MODEL.ICNET.LAYERS = 50 + +########################## PSPNET模型配置 ###################################### +# Lannet backbone name +cfg.MODEL.LANENET.BACKBONE = "vgg" + +########################## LaneNet模型配置 ###################################### + +########################## 预测部署模型配置 ################################### +# 预测保存的模型名称 +cfg.FREEZE.MODEL_FILENAME = '__model__' +# 预测保存的参数名称 +cfg.FREEZE.PARAMS_FILENAME = '__params__' +# 预测模型参数保存的路径 +cfg.FREEZE.SAVE_DIR = 'freeze_model' diff --git a/contrib/LaneNet/utils/dist_utils.py b/contrib/LaneNet/utils/dist_utils.py new file mode 100755 index 0000000000000000000000000000000000000000..64c8800fd2010d4e1e5def6cc4ea2e1ad673b4a3 --- /dev/null +++ b/contrib/LaneNet/utils/dist_utils.py @@ -0,0 +1,92 @@ +#copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +#Licensed under the Apache License, Version 2.0 (the "License"); +#you may not use this file except in compliance with the License. 
+#You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +#Unless required by applicable law or agreed to in writing, software +#distributed under the License is distributed on an "AS IS" BASIS, +#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +#See the License for the specific language governing permissions and +#limitations under the License. + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function +import os +import paddle.fluid as fluid + + +def nccl2_prepare(args, startup_prog, main_prog): + config = fluid.DistributeTranspilerConfig() + config.mode = "nccl2" + t = fluid.DistributeTranspiler(config=config) + + envs = args.dist_env + + t.transpile( + envs["trainer_id"], + trainers=','.join(envs["trainer_endpoints"]), + current_endpoint=envs["current_endpoint"], + startup_program=startup_prog, + program=main_prog) + + +def pserver_prepare(args, train_prog, startup_prog): + config = fluid.DistributeTranspilerConfig() + config.slice_var_up = args.split_var + t = fluid.DistributeTranspiler(config=config) + envs = args.dist_env + training_role = envs["training_role"] + + t.transpile( + envs["trainer_id"], + program=train_prog, + pservers=envs["pserver_endpoints"], + trainers=envs["num_trainers"], + sync_mode=not args.async_mode, + startup_program=startup_prog) + if training_role == "PSERVER": + pserver_program = t.get_pserver_program(envs["current_endpoint"]) + pserver_startup_program = t.get_startup_program( + envs["current_endpoint"], + pserver_program, + startup_program=startup_prog) + return pserver_program, pserver_startup_program + elif training_role == "TRAINER": + train_program = t.get_trainer_program() + return train_program, startup_prog + else: + raise ValueError( + 'PADDLE_TRAINING_ROLE environment variable must be either TRAINER or PSERVER' + ) + + +def nccl2_prepare_paddle(trainer_id, startup_prog, main_prog): + config = fluid.DistributeTranspilerConfig() + config.mode = "nccl2" + t = fluid.DistributeTranspiler(config=config) + t.transpile( + trainer_id, + trainers=os.environ.get('PADDLE_TRAINER_ENDPOINTS'), + current_endpoint=os.environ.get('PADDLE_CURRENT_ENDPOINT'), + startup_program=startup_prog, + program=main_prog) + + +def prepare_for_multi_process(exe, build_strategy, train_prog): + # prepare for multi-process + trainer_id = int(os.environ.get('PADDLE_TRAINER_ID', 0)) + num_trainers = int(os.environ.get('PADDLE_TRAINERS_NUM', 1)) + if num_trainers < 2: return + + build_strategy.num_trainers = num_trainers + build_strategy.trainer_id = trainer_id + # NOTE(zcd): use multi processes to train the model, + # and each process use one GPU card. + startup_prog = fluid.Program() + nccl2_prepare_paddle(trainer_id, startup_prog, train_prog) + # the startup_prog are run two times, but it doesn't matter. 
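+    # NOTE (comment added for clarity, inferred from the code above): the
+    # transpiled startup program built by nccl2_prepare_paddle mainly holds the
+    # NCCL2 communication-initialization ops for this trainer, which is why
+    # running it here in addition to the caller's own startup program is safe.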
+ exe.run(startup_prog) diff --git a/contrib/LaneNet/utils/generate_tusimple_dataset.py b/contrib/LaneNet/utils/generate_tusimple_dataset.py new file mode 100644 index 0000000000000000000000000000000000000000..ea2a89584b6aa3eab45e0fb8516d935f13afe644 --- /dev/null +++ b/contrib/LaneNet/utils/generate_tusimple_dataset.py @@ -0,0 +1,165 @@ +""" +generate tusimple training dataset +""" +import argparse +import glob +import json +import os +import os.path as ops +import shutil + +import cv2 +import numpy as np + + +def init_args(): + parser = argparse.ArgumentParser() + parser.add_argument('--src_dir', type=str, help='The origin path of unzipped tusimple dataset') + + return parser.parse_args() + + +def process_json_file(json_file_path, src_dir, ori_dst_dir, binary_dst_dir, instance_dst_dir): + + assert ops.exists(json_file_path), '{:s} not exist'.format(json_file_path) + + image_nums = len(os.listdir(os.path.join(src_dir, ori_dst_dir))) + + with open(json_file_path, 'r') as file: + for line_index, line in enumerate(file): + info_dict = json.loads(line) + + image_dir = ops.split(info_dict['raw_file'])[0] + image_dir_split = image_dir.split('/')[1:] + image_dir_split.append(ops.split(info_dict['raw_file'])[1]) + image_name = '_'.join(image_dir_split) + image_path = ops.join(src_dir, info_dict['raw_file']) + assert ops.exists(image_path), '{:s} not exist'.format(image_path) + + h_samples = info_dict['h_samples'] + lanes = info_dict['lanes'] + + image_name_new = '{:s}.png'.format('{:d}'.format(line_index + image_nums).zfill(4)) + + src_image = cv2.imread(image_path, cv2.IMREAD_COLOR) + dst_binary_image = np.zeros([src_image.shape[0], src_image.shape[1]], np.uint8) + dst_instance_image = np.zeros([src_image.shape[0], src_image.shape[1]], np.uint8) + + for lane_index, lane in enumerate(lanes): + assert len(h_samples) == len(lane) + lane_x = [] + lane_y = [] + for index in range(len(lane)): + if lane[index] == -2: + continue + else: + ptx = lane[index] + pty = h_samples[index] + lane_x.append(ptx) + lane_y.append(pty) + if not lane_x: + continue + lane_pts = np.vstack((lane_x, lane_y)).transpose() + lane_pts = np.array([lane_pts], np.int64) + + cv2.polylines(dst_binary_image, lane_pts, isClosed=False, + color=255, thickness=5) + cv2.polylines(dst_instance_image, lane_pts, isClosed=False, + color=lane_index * 50 + 20, thickness=5) + + dst_binary_image_path = ops.join(src_dir, binary_dst_dir, image_name_new) + dst_instance_image_path = ops.join(src_dir, instance_dst_dir, image_name_new) + dst_rgb_image_path = ops.join(src_dir, ori_dst_dir, image_name_new) + + cv2.imwrite(dst_binary_image_path, dst_binary_image) + cv2.imwrite(dst_instance_image_path, dst_instance_image) + cv2.imwrite(dst_rgb_image_path, src_image) + + print('Process {:s} success'.format(image_name)) + + +def gen_sample(src_dir, b_gt_image_dir, i_gt_image_dir, image_dir, phase='train', split=False): + + label_list = [] + with open('{:s}/{}ing/{}.txt'.format(src_dir, phase, phase), 'w') as file: + + for image_name in os.listdir(b_gt_image_dir): + if not image_name.endswith('.png'): + continue + + binary_gt_image_path = ops.join(b_gt_image_dir, image_name) + instance_gt_image_path = ops.join(i_gt_image_dir, image_name) + image_path = ops.join(image_dir, image_name) + + assert ops.exists(image_path), '{:s} not exist'.format(image_path) + assert ops.exists(instance_gt_image_path), '{:s} not exist'.format(instance_gt_image_path) + + b_gt_image = cv2.imread(binary_gt_image_path, cv2.IMREAD_COLOR) + i_gt_image = 
cv2.imread(instance_gt_image_path, cv2.IMREAD_COLOR) + image = cv2.imread(image_path, cv2.IMREAD_COLOR) + + if b_gt_image is None or image is None or i_gt_image is None: + print('image: {:s} corrupt'.format(image_name)) + continue + else: + info = '{:s} {:s} {:s}'.format(image_path, binary_gt_image_path, instance_gt_image_path) + file.write(info + '\n') + label_list.append(info) + if phase == 'train' and split: + np.random.RandomState(0).shuffle(label_list) + val_list_len = len(label_list) // 10 + val_label_list = label_list[:val_list_len] + train_label_list = label_list[val_list_len:] + with open('{:s}/{}ing/train_part.txt'.format(src_dir, phase, phase), 'w') as file: + for info in train_label_list: + file.write(info + '\n') + with open('{:s}/{}ing/val_part.txt'.format(src_dir, phase, phase), 'w') as file: + for info in val_label_list: + file.write(info + '\n') + return + + +def process_tusimple_dataset(src_dir): + + traing_folder_path = ops.join(src_dir, 'training') + testing_folder_path = ops.join(src_dir, 'testing') + + os.makedirs(traing_folder_path, exist_ok=True) + os.makedirs(testing_folder_path, exist_ok=True) + + for json_label_path in glob.glob('{:s}/label*.json'.format(src_dir)): + json_label_name = ops.split(json_label_path)[1] + + shutil.copyfile(json_label_path, ops.join(traing_folder_path, json_label_name)) + + for json_label_path in glob.glob('{:s}/test_label.json'.format(src_dir)): + json_label_name = ops.split(json_label_path)[1] + + shutil.copyfile(json_label_path, ops.join(testing_folder_path, json_label_name)) + + train_gt_image_dir = ops.join('training', 'gt_image') + train_gt_binary_dir = ops.join('training', 'gt_binary_image') + train_gt_instance_dir = ops.join('training', 'gt_instance_image') + + test_gt_image_dir = ops.join('testing', 'gt_image') + test_gt_binary_dir = ops.join('testing', 'gt_binary_image') + test_gt_instance_dir = ops.join('testing', 'gt_instance_image') + + os.makedirs(os.path.join(src_dir, train_gt_image_dir), exist_ok=True) + os.makedirs(os.path.join(src_dir, train_gt_binary_dir), exist_ok=True) + os.makedirs(os.path.join(src_dir, train_gt_instance_dir), exist_ok=True) + + os.makedirs(os.path.join(src_dir, test_gt_image_dir), exist_ok=True) + os.makedirs(os.path.join(src_dir, test_gt_binary_dir), exist_ok=True) + os.makedirs(os.path.join(src_dir, test_gt_instance_dir), exist_ok=True) + + for json_label_path in glob.glob('{:s}/*.json'.format(traing_folder_path)): + process_json_file(json_label_path, src_dir, train_gt_image_dir, train_gt_binary_dir, train_gt_instance_dir) + + gen_sample(src_dir, train_gt_binary_dir, train_gt_instance_dir, train_gt_image_dir, 'train', True) + + +if __name__ == '__main__': + args = init_args() + + process_tusimple_dataset(args.src_dir) diff --git a/contrib/LaneNet/utils/lanenet_postprocess.py b/contrib/LaneNet/utils/lanenet_postprocess.py new file mode 100644 index 0000000000000000000000000000000000000000..21230279f7c4b4e1042e51f7583a3a13d4ebc5d7 --- /dev/null +++ b/contrib/LaneNet/utils/lanenet_postprocess.py @@ -0,0 +1,376 @@ +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- +# this code heavily base on https://github.com/MaybeShewill-CV/lanenet-lane-detection/blob/master/lanenet_model/lanenet_postprocess.py +""" +LaneNet model post process +""" +import os.path as ops +import math + +import cv2 +import time +import numpy as np +from sklearn.cluster import DBSCAN +from sklearn.preprocessing import StandardScaler + + +def _morphological_process(image, kernel_size=5): + """ + morphological process to fill the hole 
in the binary segmentation result + :param image: + :param kernel_size: + :return: + """ + if len(image.shape) == 3: + raise ValueError('Binary segmentation result image should be a single channel image') + + if image.dtype is not np.uint8: + image = np.array(image, np.uint8) + + kernel = cv2.getStructuringElement(shape=cv2.MORPH_ELLIPSE, ksize=(kernel_size, kernel_size)) + + # close operation fille hole + closing = cv2.morphologyEx(image, cv2.MORPH_CLOSE, kernel, iterations=1) + + return closing + + +def _connect_components_analysis(image): + """ + connect components analysis to remove the small components + :param image: + :return: + """ + if len(image.shape) == 3: + gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) + else: + gray_image = image + + return cv2.connectedComponentsWithStats(gray_image, connectivity=8, ltype=cv2.CV_32S) + + +class _LaneFeat(object): + """ + + """ + def __init__(self, feat, coord, class_id=-1): + """ + lane feat object + :param feat: lane embeddng feats [feature_1, feature_2, ...] + :param coord: lane coordinates [x, y] + :param class_id: lane class id + """ + self._feat = feat + self._coord = coord + self._class_id = class_id + + @property + def feat(self): + return self._feat + + @feat.setter + def feat(self, value): + if not isinstance(value, np.ndarray): + value = np.array(value, dtype=np.float64) + + if value.dtype != np.float32: + value = np.array(value, dtype=np.float64) + + self._feat = value + + @property + def coord(self): + return self._coord + + @coord.setter + def coord(self, value): + if not isinstance(value, np.ndarray): + value = np.array(value) + + if value.dtype != np.int32: + value = np.array(value, dtype=np.int32) + + self._coord = value + + @property + def class_id(self): + return self._class_id + + @class_id.setter + def class_id(self, value): + if not isinstance(value, np.int64): + raise ValueError('Class id must be integer') + + self._class_id = value + + +class _LaneNetCluster(object): + """ + Instance segmentation result cluster + """ + def __init__(self): + """ + + """ + self._color_map = [np.array([255, 0, 0]), + np.array([0, 255, 0]), + np.array([0, 0, 255]), + np.array([125, 125, 0]), + np.array([0, 125, 125]), + np.array([125, 0, 125]), + np.array([50, 100, 50]), + np.array([100, 50, 100])] + + @staticmethod + def _embedding_feats_dbscan_cluster(embedding_image_feats): + """ + dbscan cluster + """ + db = DBSCAN(eps=0.4, min_samples=500) + + try: + features = StandardScaler().fit_transform(embedding_image_feats) + db.fit(features) + except Exception as err: + print(err) + ret = { + 'origin_features': None, + 'cluster_nums': 0, + 'db_labels': None, + 'unique_labels': None, + 'cluster_center': None + } + return ret + db_labels = db.labels_ + unique_labels = np.unique(db_labels) + num_clusters = len(unique_labels) + cluster_centers = db.components_ + + ret = { + 'origin_features': features, + 'cluster_nums': num_clusters, + 'db_labels': db_labels, + 'unique_labels': unique_labels, + 'cluster_center': cluster_centers + } + + return ret + + @staticmethod + def _get_lane_embedding_feats(binary_seg_ret, instance_seg_ret): + """ + get lane embedding features according the binary seg result + """ + + idx = np.where(binary_seg_ret == 255) + lane_embedding_feats = instance_seg_ret[idx] + + lane_coordinate = np.vstack((idx[1], idx[0])).transpose() + + assert lane_embedding_feats.shape[0] == lane_coordinate.shape[0] + + ret = { + 'lane_embedding_feats': lane_embedding_feats, + 'lane_coordinates': lane_coordinate + } + + return ret + + def 
apply_lane_feats_cluster(self, binary_seg_result, instance_seg_result): + """ + + :param binary_seg_result: + :param instance_seg_result: + :return: + """ + # get embedding feats and coords + get_lane_embedding_feats_result = self._get_lane_embedding_feats( + binary_seg_ret=binary_seg_result, + instance_seg_ret=instance_seg_result + ) + + # dbscan cluster + dbscan_cluster_result = self._embedding_feats_dbscan_cluster( + embedding_image_feats=get_lane_embedding_feats_result['lane_embedding_feats'] + ) + + mask = np.zeros(shape=[binary_seg_result.shape[0], binary_seg_result.shape[1], 3], dtype=np.uint8) + db_labels = dbscan_cluster_result['db_labels'] + unique_labels = dbscan_cluster_result['unique_labels'] + coord = get_lane_embedding_feats_result['lane_coordinates'] + + if db_labels is None: + return None, None + + lane_coords = [] + + for index, label in enumerate(unique_labels.tolist()): + if label == -1: + continue + idx = np.where(db_labels == label) + pix_coord_idx = tuple((coord[idx][:, 1], coord[idx][:, 0])) + mask[pix_coord_idx] = self._color_map[index] + lane_coords.append(coord[idx]) + + return mask, lane_coords + + +class LaneNetPostProcessor(object): + """ + lanenet post process for lane generation + """ + def __init__(self, ipm_remap_file_path='./utils/tusimple_ipm_remap.yml'): + """ + convert front car view to bird view + """ + assert ops.exists(ipm_remap_file_path), '{:s} not exist'.format(ipm_remap_file_path) + + self._cluster = _LaneNetCluster() + self._ipm_remap_file_path = ipm_remap_file_path + + remap_file_load_ret = self._load_remap_matrix() + self._remap_to_ipm_x = remap_file_load_ret['remap_to_ipm_x'] + self._remap_to_ipm_y = remap_file_load_ret['remap_to_ipm_y'] + + self._color_map = [np.array([255, 0, 0]), + np.array([0, 255, 0]), + np.array([0, 0, 255]), + np.array([125, 125, 0]), + np.array([0, 125, 125]), + np.array([125, 0, 125]), + np.array([50, 100, 50]), + np.array([100, 50, 100])] + + def _load_remap_matrix(self): + fs = cv2.FileStorage(self._ipm_remap_file_path, cv2.FILE_STORAGE_READ) + + remap_to_ipm_x = fs.getNode('remap_ipm_x').mat() + remap_to_ipm_y = fs.getNode('remap_ipm_y').mat() + + ret = { + 'remap_to_ipm_x': remap_to_ipm_x, + 'remap_to_ipm_y': remap_to_ipm_y, + } + + fs.release() + + return ret + + def postprocess(self, binary_seg_result, instance_seg_result=None, + min_area_threshold=100, source_image=None, + data_source='tusimple'): + + # convert binary_seg_result + binary_seg_result = np.array(binary_seg_result * 255, dtype=np.uint8) + # apply image morphology operation to fill in the hold and reduce the small area + morphological_ret = _morphological_process(binary_seg_result, kernel_size=5) + connect_components_analysis_ret = _connect_components_analysis(image=morphological_ret) + + labels = connect_components_analysis_ret[1] + stats = connect_components_analysis_ret[2] + for index, stat in enumerate(stats): + if stat[4] <= min_area_threshold: + idx = np.where(labels == index) + morphological_ret[idx] = 0 + + # apply embedding features cluster + mask_image, lane_coords = self._cluster.apply_lane_feats_cluster( + binary_seg_result=morphological_ret, + instance_seg_result=instance_seg_result + ) + + if mask_image is None: + return { + 'mask_image': None, + 'fit_params': None, + 'source_image': None, + } + + # lane line fit + fit_params = [] + src_lane_pts = [] + for lane_index, coords in enumerate(lane_coords): + if data_source == 'tusimple': + tmp_mask = np.zeros(shape=(720, 1280), dtype=np.uint8) + tmp_mask[tuple((np.int_(coords[:, 1] * 720 
/ 256), np.int_(coords[:, 0] * 1280 / 512)))] = 255 + else: + raise ValueError('Wrong data source now only support tusimple') + tmp_ipm_mask = cv2.remap( + tmp_mask, + self._remap_to_ipm_x, + self._remap_to_ipm_y, + interpolation=cv2.INTER_NEAREST + ) + nonzero_y = np.array(tmp_ipm_mask.nonzero()[0]) + nonzero_x = np.array(tmp_ipm_mask.nonzero()[1]) + + fit_param = np.polyfit(nonzero_y, nonzero_x, 2) + fit_params.append(fit_param) + + [ipm_image_height, ipm_image_width] = tmp_ipm_mask.shape + plot_y = np.linspace(10, ipm_image_height, ipm_image_height - 10) + fit_x = fit_param[0] * plot_y ** 2 + fit_param[1] * plot_y + fit_param[2] + + lane_pts = [] + for index in range(0, plot_y.shape[0], 5): + src_x = self._remap_to_ipm_x[ + int(plot_y[index]), int(np.clip(fit_x[index], 0, ipm_image_width - 1))] + if src_x <= 0: + continue + src_y = self._remap_to_ipm_y[ + int(plot_y[index]), int(np.clip(fit_x[index], 0, ipm_image_width - 1))] + src_y = src_y if src_y > 0 else 0 + + lane_pts.append([src_x, src_y]) + + src_lane_pts.append(lane_pts) + + # tusimple test data sample point along y axis every 10 pixels + source_image_width = source_image.shape[1] + for index, single_lane_pts in enumerate(src_lane_pts): + single_lane_pt_x = np.array(single_lane_pts, dtype=np.float32)[:, 0] + single_lane_pt_y = np.array(single_lane_pts, dtype=np.float32)[:, 1] + if data_source == 'tusimple': + start_plot_y = 240 + end_plot_y = 720 + else: + raise ValueError('Wrong data source now only support tusimple') + step = int(math.floor((end_plot_y - start_plot_y) / 10)) + for plot_y in np.linspace(start_plot_y, end_plot_y, step): + diff = single_lane_pt_y - plot_y + fake_diff_bigger_than_zero = diff.copy() + fake_diff_smaller_than_zero = diff.copy() + fake_diff_bigger_than_zero[np.where(diff <= 0)] = float('inf') + fake_diff_smaller_than_zero[np.where(diff > 0)] = float('-inf') + idx_low = np.argmax(fake_diff_smaller_than_zero) + idx_high = np.argmin(fake_diff_bigger_than_zero) + + previous_src_pt_x = single_lane_pt_x[idx_low] + previous_src_pt_y = single_lane_pt_y[idx_low] + last_src_pt_x = single_lane_pt_x[idx_high] + last_src_pt_y = single_lane_pt_y[idx_high] + + if previous_src_pt_y < start_plot_y or last_src_pt_y < start_plot_y or \ + fake_diff_smaller_than_zero[idx_low] == float('-inf') or \ + fake_diff_bigger_than_zero[idx_high] == float('inf'): + continue + + interpolation_src_pt_x = (abs(previous_src_pt_y - plot_y) * previous_src_pt_x + + abs(last_src_pt_y - plot_y) * last_src_pt_x) / \ + (abs(previous_src_pt_y - plot_y) + abs(last_src_pt_y - plot_y)) + interpolation_src_pt_y = (abs(previous_src_pt_y - plot_y) * previous_src_pt_y + + abs(last_src_pt_y - plot_y) * last_src_pt_y) / \ + (abs(previous_src_pt_y - plot_y) + abs(last_src_pt_y - plot_y)) + + if interpolation_src_pt_x > source_image_width or interpolation_src_pt_x < 10: + continue + + lane_color = self._color_map[index].tolist() + cv2.circle(source_image, (int(interpolation_src_pt_x), + int(interpolation_src_pt_y)), 5, lane_color, -1) + ret = { + 'mask_image': mask_image, + 'fit_params': fit_params, + 'source_image': source_image, + } + return ret diff --git a/contrib/LaneNet/vis.py b/contrib/LaneNet/vis.py new file mode 100644 index 0000000000000000000000000000000000000000..594258758316cb8945463962c71a52b42314faa6 --- /dev/null +++ b/contrib/LaneNet/vis.py @@ -0,0 +1,207 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import os +# GPU memory garbage collection optimization flags +os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0" + +import sys + +cur_path = os.path.abspath(os.path.dirname(__file__)) +root_path = os.path.split(os.path.split(cur_path)[0])[0] +SEG_PATH = os.path.join(cur_path, "../../../") +sys.path.append(SEG_PATH) +sys.path.append(root_path) + +import matplotlib +matplotlib.use('Agg') +import time +import argparse +import pprint +import cv2 +import numpy as np +import paddle.fluid as fluid + +from utils.config import cfg +from reader import LaneNetDataset +from models.model_builder import build_model +from models.model_builder import ModelPhase +from utils import lanenet_postprocess +import matplotlib.pyplot as plt + +def parse_args(): + parser = argparse.ArgumentParser(description='PaddeSeg visualization tools') + parser.add_argument( + '--cfg', + dest='cfg_file', + help='Config file for training (and optionally testing)', + default=None, + type=str) + parser.add_argument( + '--use_gpu', dest='use_gpu', help='Use gpu or cpu', action='store_true') + parser.add_argument( + '--vis_dir', + dest='vis_dir', + help='visual save dir', + type=str, + default='visual') + parser.add_argument( + '--also_save_raw_results', + dest='also_save_raw_results', + help='whether to save raw result', + action='store_true') + parser.add_argument( + '--local_test', + dest='local_test', + help='if in local test mode, only visualize 5 images for testing', + action='store_true') + parser.add_argument( + 'opts', + help='See config.py for all options', + default=None, + nargs=argparse.REMAINDER) + if len(sys.argv) == 1: + parser.print_help() + sys.exit(1) + return parser.parse_args() + + +def makedirs(directory): + if not os.path.exists(directory): + os.makedirs(directory) + + +def to_png_fn(fn, name=""): + """ + Append png as filename postfix + """ + directory, filename = os.path.split(fn) + basename, ext = os.path.splitext(filename) + + return basename + name + ".png" + + +def minmax_scale(input_arr): + min_val = np.min(input_arr) + max_val = np.max(input_arr) + + output_arr = (input_arr - min_val) * 255.0 / (max_val - min_val) + + return output_arr + + + +def visualize(cfg, + vis_file_list=None, + use_gpu=False, + vis_dir="visual", + also_save_raw_results=False, + ckpt_dir=None, + log_writer=None, + local_test=False, + **kwargs): + if vis_file_list is None: + vis_file_list = cfg.DATASET.TEST_FILE_LIST + + + dataset = LaneNetDataset( + file_list=vis_file_list, + mode=ModelPhase.VISUAL, + shuffle=True, + data_dir=cfg.DATASET.DATA_DIR) + + startup_prog = fluid.Program() + test_prog = fluid.Program() + pred, logit = build_model(test_prog, startup_prog, phase=ModelPhase.VISUAL) + # Clone forward graph + test_prog = test_prog.clone(for_test=True) + + # Get device environment + place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace() + exe = 
fluid.Executor(place) + exe.run(startup_prog) + + ckpt_dir = cfg.TEST.TEST_MODEL if not ckpt_dir else ckpt_dir + + fluid.io.load_params(exe, ckpt_dir, main_program=test_prog) + + save_dir = os.path.join(vis_dir, 'visual_results') + makedirs(save_dir) + if also_save_raw_results: + raw_save_dir = os.path.join(vis_dir, 'raw_results') + makedirs(raw_save_dir) + + fetch_list = [pred.name, logit.name] + test_reader = dataset.batch(dataset.generator, batch_size=1, is_test=True) + + postprocessor = lanenet_postprocess.LaneNetPostProcessor() + for imgs, grts, grts_instance, img_names, valid_shapes, org_imgs in test_reader: + segLogits, emLogits = exe.run( + program=test_prog, + feed={'image': imgs}, + fetch_list=fetch_list, + return_numpy=True) + num_imgs = segLogits.shape[0] + + for i in range(num_imgs): + gt_image = org_imgs[i] + binary_seg_image, instance_seg_image = segLogits[i].squeeze(-1), emLogits[i].transpose((1,2,0)) + + postprocess_result = postprocessor.postprocess( + binary_seg_result=binary_seg_image, + instance_seg_result=instance_seg_image, + source_image=gt_image + ) + pred_binary_fn = os.path.join(save_dir, to_png_fn(img_names[i], name='_pred_binary')) + pred_lane_fn = os.path.join(save_dir, to_png_fn(img_names[i], name='_pred_lane')) + pred_instance_fn = os.path.join(save_dir, to_png_fn(img_names[i], name='_pred_instance')) + dirname = os.path.dirname(pred_binary_fn) + + makedirs(dirname) + mask_image = postprocess_result['mask_image'] + for i in range(4): + instance_seg_image[:, :, i] = minmax_scale(instance_seg_image[:, :, i]) + embedding_image = np.array(instance_seg_image).astype(np.uint8) + + plt.figure('mask_image') + plt.imshow(mask_image[:, :, (2, 1, 0)]) + plt.figure('src_image') + plt.imshow(gt_image[:, :, (2, 1, 0)]) + plt.figure('instance_image') + plt.imshow(embedding_image[:, :, (2, 1, 0)]) + plt.figure('binary_image') + plt.imshow(binary_seg_image * 255, cmap='gray') + plt.show() + + cv2.imwrite(pred_binary_fn, np.array(binary_seg_image * 255).astype(np.uint8)) + cv2.imwrite(pred_lane_fn, postprocess_result['source_image']) + cv2.imwrite(pred_instance_fn, mask_image) + print(pred_lane_fn, 'saved!') + + + +if __name__ == '__main__': + args = parse_args() + if args.cfg_file is not None: + cfg.update_from_file(args.cfg_file) + if args.opts: + cfg.update_from_list(args.opts) + cfg.check_and_infer() + print(pprint.pformat(cfg)) + visualize(cfg, **args.__dict__) diff --git a/dataset/download_mini_mechanical_industry_meter.py b/contrib/MechanicalIndustryMeter/download_mini_mechanical_industry_meter.py similarity index 95% rename from dataset/download_mini_mechanical_industry_meter.py rename to contrib/MechanicalIndustryMeter/download_mini_mechanical_industry_meter.py index 3049df25219df7641990cedd409566779012a08d..f0409581ea9454417c545aa616b98ee8ece4dc53 100644 --- a/dataset/download_mini_mechanical_industry_meter.py +++ b/contrib/MechanicalIndustryMeter/download_mini_mechanical_industry_meter.py @@ -16,7 +16,7 @@ import sys import os LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) -TEST_PATH = os.path.join(LOCAL_PATH, "..", "test") +TEST_PATH = os.path.join(LOCAL_PATH, "..", "..", "test") sys.path.append(TEST_PATH) from test_utils import download_file_and_uncompress diff --git a/contrib/MechanicalIndustryMeter/download_unet_mechanical_industry_meter.py b/contrib/MechanicalIndustryMeter/download_unet_mechanical_industry_meter.py new file mode 100644 index 0000000000000000000000000000000000000000..aa55bf5e03b8dcf31e52043fd5dc87086c03c32f --- /dev/null +++ 
b/contrib/MechanicalIndustryMeter/download_unet_mechanical_industry_meter.py @@ -0,0 +1,30 @@ +# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License" +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import sys +import os + +LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) +TEST_PATH = os.path.join(LOCAL_PATH, "..", "..", "test") +sys.path.append(TEST_PATH) + +from test_utils import download_file_and_uncompress + +if __name__ == "__main__": + download_file_and_uncompress( + url='https://paddleseg.bj.bcebos.com/models/unet_mechanical_industry_meter.tar', + savepath=LOCAL_PATH, + extrapath=LOCAL_PATH) + + print("Pretrained Model download success!") diff --git a/contrib/imgs/1560143028.5_IMG_3091.JPG b/contrib/MechanicalIndustryMeter/imgs/1560143028.5_IMG_3091.JPG similarity index 100% rename from contrib/imgs/1560143028.5_IMG_3091.JPG rename to contrib/MechanicalIndustryMeter/imgs/1560143028.5_IMG_3091.JPG diff --git a/contrib/imgs/1560143028.5_IMG_3091.png b/contrib/MechanicalIndustryMeter/imgs/1560143028.5_IMG_3091.png similarity index 100% rename from contrib/imgs/1560143028.5_IMG_3091.png rename to contrib/MechanicalIndustryMeter/imgs/1560143028.5_IMG_3091.png diff --git a/configs/unet_mechanical_meter.yaml b/contrib/MechanicalIndustryMeter/unet_mechanical_meter.yaml similarity index 77% rename from configs/unet_mechanical_meter.yaml rename to contrib/MechanicalIndustryMeter/unet_mechanical_meter.yaml index e1bc3a1183d2b435c84ad7b16002a3f604cf85b0..45ac8616f7993e15d3d262dc0e27f67624957e2a 100644 --- a/configs/unet_mechanical_meter.yaml +++ b/contrib/MechanicalIndustryMeter/unet_mechanical_meter.yaml @@ -21,14 +21,14 @@ DATALOADER: BUF_SIZE: 256 NUM_WORKERS: 4 DATASET: - DATA_DIR: "./dataset/mini_mechanical_industry_meter_data/" + DATA_DIR: "./contrib/MechanicalIndustryMeter/mini_mechanical_industry_meter_data/" IMAGE_TYPE: "rgb" # choice rgb or rgba NUM_CLASSES: 5 - TEST_FILE_LIST: "./dataset/mini_mechanical_industry_meter_data/val_mini.txt" + TEST_FILE_LIST: "./contrib/MechanicalIndustryMeter/mini_mechanical_industry_meter_data/val_mini.txt" TEST_TOTAL_IMAGES: 8 - TRAIN_FILE_LIST: "./dataset/mini_mechanical_industry_meter_data/train_mini.txt" + TRAIN_FILE_LIST: "./contrib/MechanicalIndustryMeter/mini_mechanical_industry_meter_data/train_mini.txt" TRAIN_TOTAL_IMAGES: 64 - VAL_FILE_LIST: "./dataset/mini_mechanical_industry_meter_data/val_mini.txt" + VAL_FILE_LIST: "./contrib/MechanicalIndustryMeter/mini_mechanical_industry_meter_data/val_mini.txt" VAL_TOTAL_IMAGES: 8 SEPARATOR: "|" IGNORE_INDEX: 255 diff --git a/contrib/README.md b/contrib/README.md index 0dbbb9b473500820a919badff3ea21b5b9123bef..7b6a9b8b865f7c573c5e34bb0047ea28a57c52a4 100644 --- a/contrib/README.md +++ b/contrib/README.md @@ -1,72 +1,139 @@ # PaddleSeg 特色垂类分割模型 -提供基于PaddlePaddle最新的分割特色模型 +提供基于PaddlePaddle最新的分割特色模型: -## Augmented Context Embedding with Edge Perceiving (ACE2P) +- [人像分割](#人像分割) +- [人体解析](#人体解析) +- [车道线分割](#车道线分割) +- [工业用表分割](#工业用表分割) +- [在线体验](#在线体验) +## 人像分割 -### 1. 
模型概述 - -CVPR 19 Look into Person (LIP) 单人人像分割比赛冠军模型,详见[ACE2P](./ACE2P) +**Note:** 本章节所有命令均在`contrib/HumanSeg`目录下执行。 -### 2. 模型下载 +``` +cd contrib/HumanSeg +``` -点击[链接](https://paddleseg.bj.bcebos.com/models/ACE2P.tgz),下载, 在contrib/ACE2P下解压, `tar -xzf ACE2P.tgz` +### 1. 模型结构 -### 3. 数据下载 +DeepLabv3+ backbone为Xception65 -前往LIP数据集官网: http://47.100.21.47:9999/overview.php 或点击 [Baidu_Drive](https://pan.baidu.com/s/1nvqmZBN#list/path=%2Fsharelink2787269280-523292635003760%2FLIP%2FLIP&parentPath=%2Fsharelink2787269280-523292635003760), +### 2. 下载模型和数据 + +执行以下命令下载并解压模型和数据集: -加载Testing_images.zip, 解压到contrib/ACE2P/data文件夹下 +``` +python download_HumanSeg.py +``` +或点击[链接](https://paddleseg.bj.bcebos.com/models/HumanSeg.tgz)进行手动下载,并解压到contrib/HumanSeg文件夹下 -### 4. 运行 -**NOTE:** 运行该模型需要2G左右显存 +### 3. 运行 -使用GPU预测 +使用GPU预测: ``` -python -u infer.py --example ACE2P --use_gpu +python -u infer.py --example HumanSeg --use_gpu ``` + 使用CPU预测: ``` -python -u infer.py --example ACE2P +python -u infer.py --example HumanSeg ``` -## 人像分割 (HumanSeg) +预测结果存放在contrib/HumanSeg/HumanSeg/result目录下。 -### 1. 模型结构 +### 4. 预测结果示例: -DeepLabv3+ backbone为Xception65 + 原图: + + ![](HumanSeg/imgs/Human.jpg) + + 预测结果: + + ![](HumanSeg/imgs/HumanSeg.jpg) -### 2. 下载模型和数据 - -点击[链接](https://paddleseg.bj.bcebos.com/models/HumanSeg.tgz),下载解压到contrib文件夹下 -### 3. 运行 +## 人体解析 + +![](ACE2P/imgs/result.jpg) + +人体解析(Human Parsing)是细粒度的语义分割任务,旨在识别像素级别的人类图像的组成部分(例如,身体部位和服装)。本章节使用冠军模型Augmented Context Embedding with Edge Perceiving (ACE2P)进行预测分割。 + + +**Note:** 本章节所有命令均在`contrib/ACE2P`目录下执行。 -使用GPU预测: ``` -python -u infer.py --example HumanSeg --use_gpu +cd contrib/ACE2P ``` +### 1. 模型概述 + +Augmented Context Embedding with Edge Perceiving (ACE2P)通过融合底层特征、全局上下文信息和边缘细节,端到端训练学习人体解析任务。以ACE2P单人人体解析网络为基础的解决方案在CVPR2019第三届Look into Person (LIP)挑战赛中赢得了全部三个人体解析任务的第一名。详情请参见[ACE2P](./ACE2P) + +### 2. 模型下载 + +执行以下命令下载并解压ACE2P预测模型: -使用CPU预测: ``` -python -u infer.py --example HumanSeg +python download_ACE2P.py ``` +或点击[链接](https://paddleseg.bj.bcebos.com/models/ACE2P.tgz)进行手动下载, 并在contrib/ACE2P下解压。 -### 4. 预测结果示例: +### 3. 数据下载 + +测试图片共10000张, +点击 [Baidu_Drive](https://pan.baidu.com/s/1nvqmZBN#list/path=%2Fsharelink2787269280-523292635003760%2FLIP%2FLIP&parentPath=%2Fsharelink2787269280-523292635003760) +下载Testing_images.zip,或前往LIP数据集官网进行下载。 +下载后解压到contrib/ACE2P/data文件夹下 + + +### 4. 运行 + + +使用GPU预测 +``` +python -u infer.py --example ACE2P --use_gpu +``` + +使用CPU预测: +``` +python -u infer.py --example ACE2P +``` - 原图:![](imgs/Human.jpg) +**NOTE:** 运行该模型需要2G左右显存。由于数据图片较多,预测过程将比较耗时。 + +#### 5. 预测结果示例: + + 原图: + + ![](ACE2P/imgs/117676_2149260.jpg) + + 预测结果: - 预测结果:![](imgs/HumanSeg.jpg) + ![](ACE2P/imgs/117676_2149260.png) + +### 备注 -## 车道线分割 (RoadLine) +1. 数据及模型路径等详细配置见ACE2P/HumanSeg/RoadLine下的config.py文件 +2. ACE2P模型需预留2G显存,若显存超可调小FLAGS_fraction_of_gpu_memory_to_use + + + + +## 车道线分割 + +**Note:** 本章节所有命令均在`contrib/RoadLine`目录下执行。 + +``` +cd contrib/RoadLine +``` ### 1. 模型结构 @@ -75,7 +142,15 @@ Deeplabv3+ backbone为MobileNetv2 ### 2. 下载模型和数据 -点击[链接](https://paddleseg.bj.bcebos.com/inference_model/RoadLine.tgz),下载解压在contrib文件夹下 + +执行以下命令下载并解压模型和数据集: + +``` +python download_RoadLine.py +``` + +或点击[链接](https://paddleseg.bj.bcebos.com/inference_model/RoadLine.tgz)进行手动下载,并解压到contrib/RoadLine文件夹下 + ### 3. 运行 @@ -92,45 +167,84 @@ python -u infer.py --example RoadLine --use_gpu python -u infer.py --example RoadLine ``` +预测结果存放在contrib/RoadLine/RoadLine/result目录下。 #### 4. 
预测结果示例: - 原图:![](imgs/RoadLine.jpg) + 原图: + + ![](RoadLine/imgs/RoadLine.jpg) - 预测结果:![](imgs/RoadLine.png) + 预测结果: + + ![](RoadLine/imgs/RoadLine.png) + + ## 工业用表分割 + +**Note:** 本章节所有命令均在`PaddleSeg`目录下执行。 + ### 1. 模型结构 unet ### 2. 数据准备 -cd到PaddleSeg/dataset文件夹下,执行download_mini_mechanical_industry_meter.py +执行以下命令下载并解压数据集,数据集将存放在contrib/MechanicalIndustryMeter文件夹下: + +``` +python ./contrib/MechanicalIndustryMeter/download_mini_mechanical_industry_meter.py +``` + + +### 3. 下载预训练模型 + +``` +python ./pretrained_model/download_model.py unet_bn_coco +``` +### 4. 训练与评估 -### 3. 训练与评估 +``` +export CUDA_VISIBLE_DEVICES=0 +python ./pdseg/train.py --log_steps 10 --cfg contrib/MechanicalIndustryMeter/unet_mechanical_meter.yaml --use_gpu --do_eval --use_mpio +``` + +### 5. 可视化 +我们已提供了一个训练好的模型,执行以下命令进行下载,下载后将存放在./contrib/MechanicalIndustryMeter/文件夹下。 ``` -CUDA_VISIBLE_DEVICES=0 python ./pdseg/train.py --log_steps 10 --cfg configs/unet_mechanical_meter.yaml --use_gpu --do_eval --use_mpio +python ./contrib/MechanicalIndustryMeter/download_unet_mechanical_industry_meter.py ``` -### 4. 可视化 -我们提供了一个训练好的模型,点击[链接](https://paddleseg.bj.bcebos.com/models/unet_mechanical_industry_meter.tar),下载后放在PaddleSeg/pretrained_model下 +使用该模型进行预测可视化: + ``` -CUDA_VISIBLE_DEVICES=0 python ./pdseg/vis.py --cfg configs/unet_mechanical_meter.yaml --use_gpu --vis_dir vis_meter \ -TEST.TEST_MODEL "./pretrained_model/unet_gongyeyongbiao/" +python ./pdseg/vis.py --cfg contrib/MechanicalIndustryMeter/unet_mechanical_meter.yaml --use_gpu --vis_dir vis_meter \ +TEST.TEST_MODEL "./contrib/MechanicalIndustryMeter/unet_mechanical_industry_meter/" ``` -可视化结果会保存在vis_meter文件夹下 +可视化结果会保存在./vis_meter文件夹下。 -### 5. 可视化结果示例: +### 6. 可视化结果示例: - 原图:![](imgs/1560143028.5_IMG_3091.JPG) + 原图: + + ![](MechanicalIndustryMeter/imgs/1560143028.5_IMG_3091.JPG) - 预测结果:![](imgs/1560143028.5_IMG_3091.png) + 预测结果: -# 备注 + ![](MechanicalIndustryMeter/imgs/1560143028.5_IMG_3091.png) + +## 在线体验 + +PaddleSeg在AI Studio平台上提供了在线体验的教程,欢迎体验: + +|教程|链接| +|-|-| +|工业质检|[点击体验](https://aistudio.baidu.com/aistudio/projectdetail/184392)| +|人像分割|[点击体验](https://aistudio.baidu.com/aistudio/projectdetail/188833)| +|特色垂类模型|[点击体验](https://aistudio.baidu.com/aistudio/projectdetail/226710)| + -1. 数据及模型路径等详细配置见ACE2P/HumanSeg/RoadLine下的config.py文件 -2. ACE2P模型需预留2G显存,若显存超可调小FLAGS_fraction_of_gpu_memory_to_use diff --git a/contrib/RealTimeHumanSeg/cpp/CMakeLists.txt b/contrib/RealTimeHumanSeg/cpp/CMakeLists.txt new file mode 100644 index 0000000000000000000000000000000000000000..5a7b89acc41da5576a0f0ead7205385feabf5dab --- /dev/null +++ b/contrib/RealTimeHumanSeg/cpp/CMakeLists.txt @@ -0,0 +1,221 @@ +cmake_minimum_required(VERSION 3.0) +project(PaddleMaskDetector CXX C) + +option(WITH_MKL "Compile demo with MKL/OpenBlas support,defaultuseMKL." ON) +option(WITH_GPU "Compile demo with GPU/CPU, default use CPU." ON) +option(WITH_STATIC_LIB "Compile demo with static/shared library, default use static." ON) +option(USE_TENSORRT "Compile demo with TensorRT." 
OFF) + +SET(PADDLE_DIR "" CACHE PATH "Location of libraries") +SET(OPENCV_DIR "" CACHE PATH "Location of libraries") +SET(CUDA_LIB "" CACHE PATH "Location of libraries") + +macro(safe_set_static_flag) + foreach(flag_var + CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE + CMAKE_CXX_FLAGS_MINSIZEREL CMAKE_CXX_FLAGS_RELWITHDEBINFO) + if(${flag_var} MATCHES "/MD") + string(REGEX REPLACE "/MD" "/MT" ${flag_var} "${${flag_var}}") + endif(${flag_var} MATCHES "/MD") + endforeach(flag_var) +endmacro() + +if (WITH_MKL) + ADD_DEFINITIONS(-DUSE_MKL) +endif() + +if (NOT DEFINED PADDLE_DIR OR ${PADDLE_DIR} STREQUAL "") + message(FATAL_ERROR "please set PADDLE_DIR with -DPADDLE_DIR=/path/paddle_influence_dir") +endif() + +if (NOT DEFINED OPENCV_DIR OR ${OPENCV_DIR} STREQUAL "") + message(FATAL_ERROR "please set OPENCV_DIR with -DOPENCV_DIR=/path/opencv") +endif() + +include_directories("${CMAKE_SOURCE_DIR}/") +include_directories("${PADDLE_DIR}/") +include_directories("${PADDLE_DIR}/third_party/install/protobuf/include") +include_directories("${PADDLE_DIR}/third_party/install/glog/include") +include_directories("${PADDLE_DIR}/third_party/install/gflags/include") +include_directories("${PADDLE_DIR}/third_party/install/xxhash/include") +if (EXISTS "${PADDLE_DIR}/third_party/install/snappy/include") + include_directories("${PADDLE_DIR}/third_party/install/snappy/include") +endif() +if(EXISTS "${PADDLE_DIR}/third_party/install/snappystream/include") + include_directories("${PADDLE_DIR}/third_party/install/snappystream/include") +endif() +include_directories("${PADDLE_DIR}/third_party/install/zlib/include") +include_directories("${PADDLE_DIR}/third_party/boost") +include_directories("${PADDLE_DIR}/third_party/eigen3") + +if (EXISTS "${PADDLE_DIR}/third_party/install/snappy/lib") + link_directories("${PADDLE_DIR}/third_party/install/snappy/lib") +endif() +if(EXISTS "${PADDLE_DIR}/third_party/install/snappystream/lib") + link_directories("${PADDLE_DIR}/third_party/install/snappystream/lib") +endif() + +link_directories("${PADDLE_DIR}/third_party/install/zlib/lib") +link_directories("${PADDLE_DIR}/third_party/install/protobuf/lib") +link_directories("${PADDLE_DIR}/third_party/install/glog/lib") +link_directories("${PADDLE_DIR}/third_party/install/gflags/lib") +link_directories("${PADDLE_DIR}/third_party/install/xxhash/lib") +link_directories("${PADDLE_DIR}/paddle/lib/") +link_directories("${CMAKE_CURRENT_BINARY_DIR}") +if (WIN32) + include_directories("${PADDLE_DIR}/paddle/fluid/inference") + include_directories("${PADDLE_DIR}/paddle/include") + link_directories("${PADDLE_DIR}/paddle/fluid/inference") + include_directories("${OPENCV_DIR}/build/include") + include_directories("${OPENCV_DIR}/opencv/build/include") + link_directories("${OPENCV_DIR}/build/x64/vc14/lib") +else () + find_package(OpenCV REQUIRED PATHS ${OPENCV_DIR}/share/OpenCV NO_DEFAULT_PATH) + include_directories("${PADDLE_DIR}/paddle/include") + link_directories("${PADDLE_DIR}/paddle/lib") + include_directories(${OpenCV_INCLUDE_DIRS}) +endif () + +if (WIN32) + add_definitions("/DGOOGLE_GLOG_DLL_DECL=") + set(CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} /bigobj /MTd") + set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} /bigobj /MT") + set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /bigobj /MTd") + set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /bigobj /MT") + if (WITH_STATIC_LIB) + safe_set_static_flag() + add_definitions(-DSTATIC_LIB) + endif() +else() + set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -o2 -fopenmp 
-std=c++11") + set(CMAKE_STATIC_LIBRARY_PREFIX "") +endif() + +# TODO let users define cuda lib path +if (WITH_GPU) + if (NOT DEFINED CUDA_LIB OR ${CUDA_LIB} STREQUAL "") + message(FATAL_ERROR "please set CUDA_LIB with -DCUDA_LIB=/path/cuda-8.0/lib64") + endif() + if (NOT WIN32) + if (NOT DEFINED CUDNN_LIB) + message(FATAL_ERROR "please set CUDNN_LIB with -DCUDNN_LIB=/path/cudnn_v7.4/cuda/lib64") + endif() + endif(NOT WIN32) +endif() + + +if (NOT WIN32) + if (USE_TENSORRT AND WITH_GPU) + include_directories("${PADDLE_DIR}/third_party/install/tensorrt/include") + link_directories("${PADDLE_DIR}/third_party/install/tensorrt/lib") + endif() +endif(NOT WIN32) + +if (NOT WIN32) + set(NGRAPH_PATH "${PADDLE_DIR}/third_party/install/ngraph") + if(EXISTS ${NGRAPH_PATH}) + include(GNUInstallDirs) + include_directories("${NGRAPH_PATH}/include") + link_directories("${NGRAPH_PATH}/${CMAKE_INSTALL_LIBDIR}") + set(NGRAPH_LIB ${NGRAPH_PATH}/${CMAKE_INSTALL_LIBDIR}/libngraph${CMAKE_SHARED_LIBRARY_SUFFIX}) + endif() +endif() + +if(WITH_MKL) + include_directories("${PADDLE_DIR}/third_party/install/mklml/include") + if (WIN32) + set(MATH_LIB ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.lib + ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.lib) + else () + set(MATH_LIB ${PADDLE_DIR}/third_party/install/mklml/lib/libmklml_intel${CMAKE_SHARED_LIBRARY_SUFFIX} + ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5${CMAKE_SHARED_LIBRARY_SUFFIX}) + execute_process(COMMAND cp -r ${PADDLE_DIR}/third_party/install/mklml/lib/libmklml_intel${CMAKE_SHARED_LIBRARY_SUFFIX} /usr/lib) + endif () + set(MKLDNN_PATH "${PADDLE_DIR}/third_party/install/mkldnn") + if(EXISTS ${MKLDNN_PATH}) + include_directories("${MKLDNN_PATH}/include") + if (WIN32) + set(MKLDNN_LIB ${MKLDNN_PATH}/lib/mkldnn.lib) + else () + set(MKLDNN_LIB ${MKLDNN_PATH}/lib/libmkldnn.so.0) + endif () + endif() +else() + set(MATH_LIB ${PADDLE_DIR}/third_party/install/openblas/lib/libopenblas${CMAKE_STATIC_LIBRARY_SUFFIX}) +endif() + +if (WIN32) + if(EXISTS "${PADDLE_DIR}/paddle/fluid/inference/libpaddle_fluid${CMAKE_STATIC_LIBRARY_SUFFIX}") + set(DEPS + ${PADDLE_DIR}/paddle/fluid/inference/libpaddle_fluid${CMAKE_STATIC_LIBRARY_SUFFIX}) + else() + set(DEPS + ${PADDLE_DIR}/paddle/lib/libpaddle_fluid${CMAKE_STATIC_LIBRARY_SUFFIX}) + endif() +endif() + +if(WITH_STATIC_LIB) + set(DEPS + ${PADDLE_DIR}/paddle/lib/libpaddle_fluid${CMAKE_STATIC_LIBRARY_SUFFIX}) +else() + set(DEPS + ${PADDLE_DIR}/paddle/lib/libpaddle_fluid${CMAKE_SHARED_LIBRARY_SUFFIX}) +endif() + +if (NOT WIN32) + set(DEPS ${DEPS} + ${MATH_LIB} ${MKLDNN_LIB} + glog gflags protobuf z xxhash + ) + if(EXISTS "${PADDLE_DIR}/third_party/install/snappystream/lib") + set(DEPS ${DEPS} snappystream) + endif() + if (EXISTS "${PADDLE_DIR}/third_party/install/snappy/lib") + set(DEPS ${DEPS} snappy) + endif() +else() + set(DEPS ${DEPS} + ${MATH_LIB} ${MKLDNN_LIB} + opencv_world346 glog gflags_static libprotobuf zlibstatic xxhash) + set(DEPS ${DEPS} libcmt shlwapi) + if (EXISTS "${PADDLE_DIR}/third_party/install/snappy/lib") + set(DEPS ${DEPS} snappy) + endif() + if(EXISTS "${PADDLE_DIR}/third_party/install/snappystream/lib") + set(DEPS ${DEPS} snappystream) + endif() +endif(NOT WIN32) + +if(WITH_GPU) + if(NOT WIN32) + if (USE_TENSORRT) + set(DEPS ${DEPS} ${PADDLE_DIR}/third_party/install/tensorrt/lib/libnvinfer${CMAKE_STATIC_LIBRARY_SUFFIX}) + set(DEPS ${DEPS} ${PADDLE_DIR}/third_party/install/tensorrt/lib/libnvinfer_plugin${CMAKE_STATIC_LIBRARY_SUFFIX}) + endif() + set(DEPS ${DEPS} 
${CUDA_LIB}/libcudart${CMAKE_SHARED_LIBRARY_SUFFIX}) + set(DEPS ${DEPS} ${CUDNN_LIB}/libcudnn${CMAKE_SHARED_LIBRARY_SUFFIX}) + else() + set(DEPS ${DEPS} ${CUDA_LIB}/cudart${CMAKE_STATIC_LIBRARY_SUFFIX} ) + set(DEPS ${DEPS} ${CUDA_LIB}/cublas${CMAKE_STATIC_LIBRARY_SUFFIX} ) + set(DEPS ${DEPS} ${CUDA_LIB}/cudnn${CMAKE_STATIC_LIBRARY_SUFFIX}) + endif() +endif() + +if (NOT WIN32) + set(EXTERNAL_LIB "-ldl -lrt -lgomp -lz -lm -lpthread") + set(DEPS ${DEPS} ${EXTERNAL_LIB} ${OpenCV_LIBS}) +endif() + +add_executable(main main.cc humanseg.cc humanseg_postprocess.cc) +target_link_libraries(main ${DEPS}) + +if (WIN32) + add_custom_command(TARGET main POST_BUILD + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./mklml.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./libiomp5md.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./mkldnn.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./release/mklml.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./release/libiomp5md.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./release/mkldnn.dll + ) +endif() diff --git a/contrib/RealTimeHumanSeg/cpp/CMakeSettings.json b/contrib/RealTimeHumanSeg/cpp/CMakeSettings.json new file mode 100644 index 0000000000000000000000000000000000000000..87cbe721d98dc9a12079d2eb79c77e50d0e0408a --- /dev/null +++ b/contrib/RealTimeHumanSeg/cpp/CMakeSettings.json @@ -0,0 +1,42 @@ +{ + "configurations": [ + { + "name": "x64-Release", + "generator": "Ninja", + "configurationType": "RelWithDebInfo", + "inheritEnvironments": [ "msvc_x64_x64" ], + "buildRoot": "${projectDir}\\out\\build\\${name}", + "installRoot": "${projectDir}\\out\\install\\${name}", + "cmakeCommandArgs": "", + "buildCommandArgs": "-v", + "ctestCommandArgs": "", + "variables": [ + { + "name": "CUDA_LIB", + "value": "D:/projects/packages/cuda10_0/lib64", + "type": "PATH" + }, + { + "name": "CUDNN_LIB", + "value": "D:/projects/packages/cuda10_0/lib64", + "type": "PATH" + }, + { + "name": "OPENCV_DIR", + "value": "D:/projects/packages/opencv3_4_6", + "type": "PATH" + }, + { + "name": "PADDLE_DIR", + "value": "D:/projects/packages/fluid_inference1_6_1", + "type": "PATH" + }, + { + "name": "CMAKE_BUILD_TYPE", + "value": "Release", + "type": "STRING" + } + ] + } + ] +} \ No newline at end of file diff --git a/contrib/RealTimeHumanSeg/cpp/README.md b/contrib/RealTimeHumanSeg/cpp/README.md new file mode 100644 index 0000000000000000000000000000000000000000..5f1184130cb4ebf18fd10f30378caa8c98bb8083 --- /dev/null +++ b/contrib/RealTimeHumanSeg/cpp/README.md @@ -0,0 +1,15 @@ +# 视频实时图像分割模型C++预测部署 + +本文档主要介绍实时图像分割模型如何在`Windows`和`Linux`上完成基于`C++`的预测部署。 + +## C++预测部署编译 + +### 1. 下载模型 +点击右边下载:[模型下载地址](https://paddleseg.bj.bcebos.com/deploy/models/humanseg_paddleseg_int8.zip) + +模型文件路径将做为预测时的输入参数,请解压到合适的目录位置。 + +### 2. 
编译 +本项目支持在Windows和Linux上编译并部署C++项目,不同平台的编译请参考: +- [Linux 编译](./docs/linux_build.md) +- [Windows 使用 Visual Studio 2019编译](./docs/windows_build.md) diff --git a/contrib/RealTimeHumanSeg/cpp/docs/linux_build.md b/contrib/RealTimeHumanSeg/cpp/docs/linux_build.md new file mode 100644 index 0000000000000000000000000000000000000000..823ff3ae7cc6b16d9f5696924ae5def746bc8892 --- /dev/null +++ b/contrib/RealTimeHumanSeg/cpp/docs/linux_build.md @@ -0,0 +1,86 @@ +# 视频实时人像分割模型Linux平台C++预测部署 + + +## 1. 系统和软件依赖 + +### 1.1 操作系统及硬件要求 + +- Ubuntu 14.04 或者 16.04 (其它平台未测试) +- GCC版本4.8.5 ~ 4.9.2 +- 支持Intel MKL-DNN的CPU +- NOTE: 如需在Nvidia GPU运行,请自行安装CUDA 9.0 / 10.0 + CUDNN 7.3+ (不支持9.1/10.1版本的CUDA) + +### 1.2 下载PaddlePaddle C++预测库 + +PaddlePaddle C++ 预测库主要分为CPU版本和GPU版本。 + +其中,GPU 版本支持`CUDA 10.0` 和 `CUDA 9.0`: + +以下为各版本C++预测库的下载链接: + +| 版本 | 链接 | +| ---- | ---- | +| CPU+MKL版 | [fluid_inference.tgz](https://paddle-inference-lib.bj.bcebos.com/1.6.3-cpu-avx-mkl/fluid_inference.tgz) | +| CUDA9.0+MKL 版 | [fluid_inference.tgz](https://paddle-inference-lib.bj.bcebos.com/1.6.3-gpu-cuda9-cudnn7-avx-mkl/fluid_inference.tgz) | +| CUDA10.0+MKL 版 | [fluid_inference.tgz](https://paddle-inference-lib.bj.bcebos.com/1.6.3-gpu-cuda10-cudnn7-avx-mkl/fluid_inference.tgz) | + +更多可用预测库版本,请点击以下链接下载:[C++预测库下载列表](https://paddlepaddle.org.cn/documentation/docs/zh/advanced_usage/deploy/inference/build_and_install_lib_cn.html) + + +下载并解压, 解压后的 `fluid_inference`目录包含的内容: +``` +fluid_inference +├── paddle # paddle核心库和头文件 +| +├── third_party # 第三方依赖库和头文件 +| +└── version.txt # 版本和编译信息 +``` + +**注意:** 请把解压后的目录放到合适的路径,**该目录路径后续会作为编译依赖**使用。 + +## 2. 编译与运行 + +### 2.1 配置编译脚本 + +打开文件`linux_build.sh`, 看到以下内容: +```shell +# 是否使用GPU +WITH_GPU=OFF +# Paddle 预测库路径 +PADDLE_DIR=/PATH/TO/fluid_inference/ +# CUDA库路径, 仅 WITH_GPU=ON 时设置 +CUDA_LIB=/PATH/TO/CUDA_LIB64/ +# CUDNN库路径,仅 WITH_GPU=ON 且 CUDA_LIB有效时设置 +CUDNN_LIB=/PATH/TO/CUDNN_LIB64/ +# OpenCV 库路径, 无须设置 +OPENCV_DIR=/PATH/TO/opencv3gcc4.8/ + +cd build +cmake .. \ + -DWITH_GPU=${WITH_GPU} \ + -DPADDLE_DIR=${PADDLE_DIR} \ + -DCUDA_LIB=${CUDA_LIB} \ + -DCUDNN_LIB=${CUDNN_LIB} \ + -DOPENCV_DIR=${OPENCV_DIR} \ + -DWITH_STATIC_LIB=OFF +make -j4 +``` + +把上述参数根据实际情况做修改后,运行脚本编译程序: +```shell +sh linux_build.sh +``` + +### 2.2. 运行和可视化 + +可执行文件有 **2** 个参数,第一个是前面导出的`inference_model`路径,第二个是需要预测的视频路径。 + +示例: +```shell +./build/main ./models /PATH/TO/TEST_VIDEO +``` + +点击下载[测试视频](https://paddleseg.bj.bcebos.com/deploy/data/test.avi) + +预测的结果保存在视频文件`result.avi`中。 diff --git a/contrib/RealTimeHumanSeg/cpp/docs/windows_build.md b/contrib/RealTimeHumanSeg/cpp/docs/windows_build.md new file mode 100644 index 0000000000000000000000000000000000000000..6937dbcff4f55c5a085aa9d0bd2674c04f3ac8e5 --- /dev/null +++ b/contrib/RealTimeHumanSeg/cpp/docs/windows_build.md @@ -0,0 +1,83 @@ +# 视频实时人像分割模型Windows平台C++预测部署 + +## 1. 
系统和软件依赖 + +### 1.1 基础依赖 + +- Windows 10 / Windows Server 2016+ (其它平台未测试) +- Visual Studio 2019 (社区版或专业版均可) +- CUDA 9.0 / 10.0 + CUDNN 7.3+ (不支持9.1/10.1版本的CUDA) + +### 1.2 下载OpenCV并设置环境变量 + +- 在OpenCV官网下载适用于Windows平台的3.4.6版本: [点击下载](https://sourceforge.net/projects/opencvlibrary/files/3.4.6/opencv-3.4.6-vc14_vc15.exe/download) +- 运行下载的可执行文件,将OpenCV解压至合适目录,这里以解压到`D:\projects\opencv`为例 +- 把OpenCV动态库加入到系统环境变量 + - 此电脑(我的电脑)->属性->高级系统设置->环境变量 + - 在系统变量中找到Path(如没有,自行创建),并双击编辑 + - 新建,将opencv路径填入并保存,如D:\projects\opencv\build\x64\vc14\bin + +**注意:** `OpenCV`的解压目录后续将做为编译配置项使用,所以请放置合适的目录中。 + +### 1.3 下载PaddlePaddle C++ 预测库 + +`PaddlePaddle` **C++ 预测库** 主要分为`CPU`和`GPU`版本, 其中`GPU版本`提供`CUDA 9.0` 和 `CUDA 10.0` 支持。 + +常用的版本如下: + +| 版本 | 链接 | +| ---- | ---- | +| CPU+MKL版 | [fluid_inference_install_dir.zip](https://paddle-wheel.bj.bcebos.com/1.6.3/win-infer/mkl/cpu/fluid_inference_install_dir.zip) | +| CUDA9.0+MKL 版 | [fluid_inference_install_dir.zip](https://paddle-wheel.bj.bcebos.com/1.6.3/win-infer/mkl/post97/fluid_inference_install_dir.zip) | +| CUDA10.0+MKL 版 | [fluid_inference_install_dir.zip](https://paddle-wheel.bj.bcebos.com/1.6.3/win-infer/mkl/post107/fluid_inference_install_dir.zip) | + +更多不同平台的可用预测库版本,请[点击查看](https://paddlepaddle.org.cn/documentation/docs/zh/advanced_usage/deploy/inference/windows_cpp_inference.html) 选择适合你的版本。 + + +下载并解压, 解压后的 `fluid_inference`目录包含的内容: +``` +fluid_inference_install_dir +├── paddle # paddle核心库和头文件 +| +├── third_party # 第三方依赖库和头文件 +| +└── version.txt # 版本和编译信息 +``` + +**注意:** 这里的`fluid_inference_install_dir` 目录所在路径,将用于后面的编译参数设置,请放置在合适的位置。 + +## 2. Visual Studio 2019 编译 + +- 2.1 打开Visual Studio 2019 Community,点击`继续但无需代码`, 如下图: +![step2.1](https://paddleseg.bj.bcebos.com/inference/vs2019_step1.png) + +- 2.2 点击 `文件`->`打开`->`CMake`, 如下图: +![step2.2](https://paddleseg.bj.bcebos.com/inference/vs2019_step2.png) + +- 2.3 选择本项目根目录`CMakeList.txt`文件打开, 如下图: +![step2.3](https://paddleseg.bj.bcebos.com/deploy/docs/vs2019_step2.3.png) + +- 2.4 点击:`项目`->`PaddleMaskDetector的CMake设置` +![step2.4](https://paddleseg.bj.bcebos.com/deploy/docs/vs2019_step2.4.png) + +- 2.5 点击浏览设置`OPENCV_DIR`, `CUDA_LIB` 和 `PADDLE_DIR` 3个编译依赖库的位置, 设置完成后点击`保存并生成CMake缓存并加载变量` +![step2.5](https://paddleseg.bj.bcebos.com/inference/vs2019_step5.png) + +- 2.6 点击`生成`->`全部生成` 编译项目 +![step2.6](https://paddleseg.bj.bcebos.com/inference/vs2019_step6.png) + +## 3. 运行程序 + +成功编译后, 产出的可执行文件在项目子目录`out\build\x64-Release`目录, 按以下步骤运行代码: + +- 打开`cmd`切换至该目录 +- 运行以下命令传入模型路径与测试视频 + +```shell +main.exe ./models/ ./data/test.avi +``` +第一个参数即人像分割预测模型的路径,第二个参数即要预测的视频。 + +点击下载[测试视频](https://paddleseg.bj.bcebos.com/deploy/data/test.avi) + +运行后,预测结果保存在文件`result.avi`中。 diff --git a/contrib/RealTimeHumanSeg/cpp/humanseg.cc b/contrib/RealTimeHumanSeg/cpp/humanseg.cc new file mode 100644 index 0000000000000000000000000000000000000000..b81c81200064f6191e18cdb39fc8d6414aa5fe9d --- /dev/null +++ b/contrib/RealTimeHumanSeg/cpp/humanseg.cc @@ -0,0 +1,132 @@ +// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and +// limitations under the License. + +# include "humanseg.h" +# include "humanseg_postprocess.h" + +// Normalize the image by (pix - mean) * scale +void NormalizeImage( + const std::vector &mean, + const std::vector &scale, + cv::Mat& im, // NOLINT + float* input_buffer) { + int height = im.rows; + int width = im.cols; + int stride = width * height; + for (int h = 0; h < height; h++) { + for (int w = 0; w < width; w++) { + int base = h * width + w; + input_buffer[base + 0 * stride] = + (im.at(h, w)[0] - mean[0]) * scale[0]; + input_buffer[base + 1 * stride] = + (im.at(h, w)[1] - mean[1]) * scale[1]; + input_buffer[base + 2 * stride] = + (im.at(h, w)[2] - mean[2]) * scale[2]; + } + } +} + +// Load Model and return model predictor +void LoadModel( + const std::string& model_dir, + bool use_gpu, + std::unique_ptr* predictor) { + // Config the model info + paddle::AnalysisConfig config; + auto prog_file = model_dir + "/__model__"; + auto params_file = model_dir + "/__params__"; + config.SetModel(prog_file, params_file); + if (use_gpu) { + config.EnableUseGpu(100, 0); + } else { + config.DisableGpu(); + } + config.SwitchUseFeedFetchOps(false); + config.SwitchSpecifyInputNames(true); + // Memory optimization + config.EnableMemoryOptim(); + *predictor = std::move(CreatePaddlePredictor(config)); +} + +void HumanSeg::Preprocess(const cv::Mat& image_mat) { + // Clone the image : keep the original mat for postprocess + cv::Mat im = image_mat.clone(); + auto eval_wh = cv::Size(eval_size_[0], eval_size_[1]); + cv::resize(im, im, eval_wh, 0.f, 0.f, cv::INTER_LINEAR); + + im.convertTo(im, CV_32FC3, 1.0); + int rc = im.channels(); + int rh = im.rows; + int rw = im.cols; + input_shape_ = {1, rc, rh, rw}; + input_data_.resize(1 * rc * rh * rw); + float* buffer = input_data_.data(); + NormalizeImage(mean_, scale_, im, input_data_.data()); +} + +cv::Mat HumanSeg::Postprocess(const cv::Mat& im) { + int h = input_shape_[2]; + int w = input_shape_[3]; + scoremap_data_.resize(3 * h * w * sizeof(float)); + float* base = output_data_.data() + h * w; + for (int i = 0; i < h * w; ++i) { + scoremap_data_[i] = uchar(base[i] * 255); + } + + cv::Mat im_scoremap = cv::Mat(h, w, CV_8UC1); + im_scoremap.data = scoremap_data_.data(); + cv::resize(im_scoremap, im_scoremap, cv::Size(im.cols, im.rows)); + im_scoremap.convertTo(im_scoremap, CV_32FC1, 1 / 255.0); + + float* pblob = reinterpret_cast(im_scoremap.data); + int out_buff_capacity = 10 * im.cols * im.rows * sizeof(float); + segout_data_.resize(out_buff_capacity); + unsigned char* seg_result = segout_data_.data(); + MergeProcess(im.data, pblob, im.rows, im.cols, seg_result); + cv::Mat seg_mat(im.rows, im.cols, CV_8UC1, seg_result); + cv::resize(seg_mat, seg_mat, cv::Size(im.cols, im.rows)); + cv::GaussianBlur(seg_mat, seg_mat, cv::Size(5, 5), 0, 0); + float fg_threshold = 0.8; + float bg_threshold = 0.4; + cv::Mat show_seg_mat; + seg_mat.convertTo(seg_mat, CV_32FC1, 1 / 255.0); + ThresholdMask(seg_mat, fg_threshold, bg_threshold, show_seg_mat); + auto out_im = MergeSegMat(show_seg_mat, im); + return out_im; +} + +cv::Mat HumanSeg::Predict(const cv::Mat& im) { + // Preprocess image + Preprocess(im); + // Prepare input tensor + auto input_names = predictor_->GetInputNames(); + auto in_tensor = predictor_->GetInputTensor(input_names[0]); + in_tensor->Reshape(input_shape_); + in_tensor->copy_from_cpu(input_data_.data()); + // Run predictor + predictor_->ZeroCopyRun(); + // Get output tensor + auto 
output_names = predictor_->GetOutputNames(); + auto out_tensor = predictor_->GetOutputTensor(output_names[0]); + auto output_shape = out_tensor->shape(); + // Calculate output length + int output_size = 1; + for (int j = 0; j < output_shape.size(); ++j) { + output_size *= output_shape[j]; + } + output_data_.resize(output_size); + out_tensor->copy_to_cpu(output_data_.data()); + // Postprocessing result + return Postprocess(im); +} diff --git a/contrib/RealTimeHumanSeg/cpp/humanseg.h b/contrib/RealTimeHumanSeg/cpp/humanseg.h new file mode 100644 index 0000000000000000000000000000000000000000..edaf825f713847a3b2c8bf5bae3a36de6ec03395 --- /dev/null +++ b/contrib/RealTimeHumanSeg/cpp/humanseg.h @@ -0,0 +1,66 @@ +// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +#pragma once + +#include +#include +#include +#include + +#include +#include +#include +#include + +#include "paddle_inference_api.h" // NOLINT + +// Load Paddle Inference Model +void LoadModel( + const std::string& model_dir, + bool use_gpu, + std::unique_ptr* predictor); + +class HumanSeg { + public: + explicit HumanSeg(const std::string& model_dir, + const std::vector& mean, + const std::vector& scale, + const std::vector& eval_size, + bool use_gpu = false) : + mean_(mean), + scale_(scale), + eval_size_(eval_size) { + LoadModel(model_dir, use_gpu, &predictor_); + } + + // Run predictor + cv::Mat Predict(const cv::Mat& im); + + private: + // Preprocess image and copy data to input buffer + void Preprocess(const cv::Mat& im); + // Postprocess result + cv::Mat Postprocess(const cv::Mat& im); + + std::unique_ptr predictor_; + std::vector input_data_; + std::vector input_shape_; + std::vector output_data_; + std::vector scoremap_data_; + std::vector segout_data_; + std::vector mean_; + std::vector scale_; + std::vector eval_size_; +}; diff --git a/contrib/RealTimeHumanSeg/cpp/humanseg_postprocess.cc b/contrib/RealTimeHumanSeg/cpp/humanseg_postprocess.cc new file mode 100644 index 0000000000000000000000000000000000000000..a373df3985b5bd72d05145d2c6d106043b5303ff --- /dev/null +++ b/contrib/RealTimeHumanSeg/cpp/humanseg_postprocess.cc @@ -0,0 +1,282 @@ +// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
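Before the postprocessing implementation below, the snippet that follows is a minimal usage sketch of the `HumanSeg` class declared in `humanseg.h` above; the same pattern appears later in `main.cc`. The `<float>` / `<int>` template arguments are assumptions (the angle-bracketed parameters were stripped from the listing), and the mean/scale/eval-size values are the ones `main.cc` passes in.

```cpp
// Minimal usage sketch of HumanSeg (assumed template parameters:
// std::vector<float> for mean/scale, std::vector<int> for eval_size;
// values taken from main.cc).
#include <string>
#include <vector>
#include <opencv2/opencv.hpp>
#include "humanseg.h"

int RunOnSingleFrame(const std::string& model_dir, const std::string& image_path) {
  std::vector<float> mean = {104.008f, 116.669f, 122.675f};
  std::vector<float> scale = {1.0f, 1.0f, 1.0f};
  std::vector<int> eval_size = {192, 192};  // network input resolution
  HumanSeg seg(model_dir, mean, scale, eval_size, /*use_gpu=*/false);

  cv::Mat frame = cv::imread(image_path, cv::IMREAD_COLOR);
  if (frame.empty()) {
    return -1;
  }
  // Predict() runs preprocess -> inference -> postprocess and returns the
  // frame with low-confidence regions blended towards white.
  cv::Mat result = seg.Predict(frame);
  cv::imwrite("result.jpeg", result);
  return 0;
}
```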
+ +#include +#include + +#include +#include +#include +#include + +#include "humanseg_postprocess.h" // NOLINT + +int HumanSegTrackFuse(const cv::Mat &track_fg_cfd, + const cv::Mat &dl_fg_cfd, + const cv::Mat &dl_weights, + const cv::Mat &is_track, + const float cfd_diff_thres, + const int patch_size, + cv::Mat cur_fg_cfd) { + float *cur_fg_cfd_ptr = reinterpret_cast(cur_fg_cfd.data); + float *dl_fg_cfd_ptr = reinterpret_cast(dl_fg_cfd.data); + float *track_fg_cfd_ptr = reinterpret_cast(track_fg_cfd.data); + float *dl_weights_ptr = reinterpret_cast(dl_weights.data); + uchar *is_track_ptr = reinterpret_cast(is_track.data); + int y_offset = 0; + int ptr_offset = 0; + int h = track_fg_cfd.rows; + int w = track_fg_cfd.cols; + float dl_fg_score = 0.0; + float track_fg_score = 0.0; + for (int y = 0; y < h; ++y) { + for (int x = 0; x < w; ++x) { + dl_fg_score = dl_fg_cfd_ptr[ptr_offset]; + if (is_track_ptr[ptr_offset] > 0) { + track_fg_score = track_fg_cfd_ptr[ptr_offset]; + if (dl_fg_score > 0.9 || dl_fg_score < 0.1) { + if (dl_weights_ptr[ptr_offset] <= 0.10) { + cur_fg_cfd_ptr[ptr_offset] = dl_fg_score * 0.3 + + track_fg_score * 0.7; + } else { + cur_fg_cfd_ptr[ptr_offset] = dl_fg_score * 0.4 + + track_fg_score * 0.6; + } + } else { + cur_fg_cfd_ptr[ptr_offset] = dl_fg_score * dl_weights_ptr[ptr_offset] + + track_fg_score * (1 - dl_weights_ptr[ptr_offset]); + } + } else { + cur_fg_cfd_ptr[ptr_offset] = dl_fg_score; + } + ++ptr_offset; + } + y_offset += w; + ptr_offset = y_offset; + } + return 0; +} + +int HumanSegTracking(const cv::Mat &prev_gray, + const cv::Mat &cur_gray, + const cv::Mat &prev_fg_cfd, + int patch_size, + cv::Mat track_fg_cfd, + cv::Mat is_track, + cv::Mat dl_weights, + cv::Ptr disflow) { + cv::Mat flow_fw; + disflow->calc(prev_gray, cur_gray, flow_fw); + + cv::Mat flow_bw; + disflow->calc(cur_gray, prev_gray, flow_bw); + + float double_check_thres = 8; + + cv::Point2f fxy_fw; + int dy_fw = 0; + int dx_fw = 0; + cv::Point2f fxy_bw; + int dy_bw = 0; + int dx_bw = 0; + + float *prev_fg_cfd_ptr = reinterpret_cast(prev_fg_cfd.data); + float *track_fg_cfd_ptr = reinterpret_cast(track_fg_cfd.data); + float *dl_weights_ptr = reinterpret_cast(dl_weights.data); + uchar *is_track_ptr = reinterpret_cast(is_track.data); + + int prev_y_offset = 0; + int prev_ptr_offset = 0; + int cur_ptr_offset = 0; + float *flow_fw_ptr = reinterpret_cast(flow_fw.data); + + float roundy_fw = 0.0; + float roundx_fw = 0.0; + float roundy_bw = 0.0; + float roundx_bw = 0.0; + + int h = prev_fg_cfd.rows; + int w = prev_fg_cfd.cols; + for (int r = 0; r < h; ++r) { + for (int c = 0; c < w; ++c) { + ++prev_ptr_offset; + + fxy_fw = flow_fw.ptr(r)[c]; + roundy_fw = fxy_fw.y >= 0 ? 0.5 : -0.5; + roundx_fw = fxy_fw.x >= 0 ? 0.5 : -0.5; + dy_fw = static_cast(fxy_fw.y + roundy_fw); + dx_fw = static_cast(fxy_fw.x + roundx_fw); + + int cur_x = c + dx_fw; + int cur_y = r + dy_fw; + + if (cur_x < 0 + || cur_x >= h + || cur_y < 0 + || cur_y >= w) { + continue; + } + fxy_bw = flow_bw.ptr(cur_y)[cur_x]; + roundy_bw = fxy_bw.y >= 0 ? 0.5 : -0.5; + roundx_bw = fxy_bw.x >= 0 ? 
0.5 : -0.5; + dy_bw = static_cast(fxy_bw.y + roundy_bw); + dx_bw = static_cast(fxy_bw.x + roundx_bw); + + auto total = (dy_fw + dy_bw) * (dy_fw + dy_bw) + + (dx_fw + dx_bw) * (dx_fw + dx_bw); + if (total >= double_check_thres) { + continue; + } + + cur_ptr_offset = cur_y * w + cur_x; + if (abs(dy_fw) <= 0 + && abs(dx_fw) <= 0 + && abs(dy_bw) <= 0 + && abs(dx_bw) <= 0) { + dl_weights_ptr[cur_ptr_offset] = 0.05; + } + is_track_ptr[cur_ptr_offset] = 1; + track_fg_cfd_ptr[cur_ptr_offset] = prev_fg_cfd_ptr[prev_ptr_offset]; + } + prev_y_offset += w; + prev_ptr_offset = prev_y_offset - 1; + } + return 0; +} + +int MergeProcess(const uchar *im_buff, + const float *scoremap_buff, + const int height, + const int width, + uchar *result_buff) { + cv::Mat prev_fg_cfd; + cv::Mat cur_fg_cfd; + cv::Mat cur_fg_mask; + cv::Mat track_fg_cfd; + cv::Mat prev_gray; + cv::Mat cur_gray; + cv::Mat bgr_temp; + cv::Mat is_track; + cv::Mat static_roi; + cv::Mat weights; + cv::Ptr disflow = + cv::optflow::createOptFlow_DIS( + cv::optflow::DISOpticalFlow::PRESET_ULTRAFAST); + + bool is_init = false; + const float *cfd_ptr = scoremap_buff; + if (!is_init) { + is_init = true; + cur_fg_cfd = cv::Mat(height, width, CV_32FC1, cv::Scalar::all(0)); + memcpy(cur_fg_cfd.data, cfd_ptr, height * width * sizeof(float)); + cur_fg_mask = cv::Mat(height, width, CV_8UC1, cv::Scalar::all(0)); + + if (height <= 64 || width <= 64) { + disflow->setFinestScale(1); + } else if (height <= 160 || width <= 160) { + disflow->setFinestScale(2); + } else { + disflow->setFinestScale(3); + } + is_track = cv::Mat(height, width, CV_8UC1, cv::Scalar::all(0)); + static_roi = cv::Mat(height, width, CV_8UC1, cv::Scalar::all(0)); + track_fg_cfd = cv::Mat(height, width, CV_32FC1, cv::Scalar::all(0)); + + bgr_temp = cv::Mat(height, width, CV_8UC3); + memcpy(bgr_temp.data, im_buff, height * width * 3 * sizeof(uchar)); + cv::cvtColor(bgr_temp, cur_gray, cv::COLOR_BGR2GRAY); + weights = cv::Mat(height, width, CV_32FC1, cv::Scalar::all(0.30)); + } else { + memcpy(cur_fg_cfd.data, cfd_ptr, height * width * sizeof(float)); + memcpy(bgr_temp.data, im_buff, height * width * 3 * sizeof(uchar)); + cv::cvtColor(bgr_temp, cur_gray, cv::COLOR_BGR2GRAY); + memset(is_track.data, 0, height * width * sizeof(uchar)); + memset(static_roi.data, 0, height * width * sizeof(uchar)); + weights = cv::Mat(height, width, CV_32FC1, cv::Scalar::all(0.30)); + HumanSegTracking(prev_gray, + cur_gray, + prev_fg_cfd, + 0, + track_fg_cfd, + is_track, + weights, + disflow); + HumanSegTrackFuse(track_fg_cfd, + cur_fg_cfd, + weights, + is_track, + 1.1, + 0, + cur_fg_cfd); + } + int ksize = 3; + cv::GaussianBlur(cur_fg_cfd, cur_fg_cfd, cv::Size(ksize, ksize), 0, 0); + prev_fg_cfd = cur_fg_cfd.clone(); + prev_gray = cur_gray.clone(); + cur_fg_cfd.convertTo(cur_fg_mask, CV_8UC1, 255); + memcpy(result_buff, cur_fg_mask.data, height * width); + return 0; +} + +cv::Mat MergeSegMat(const cv::Mat& seg_mat, + const cv::Mat& ori_frame) { + cv::Mat return_frame; + cv::resize(ori_frame, return_frame, cv::Size(ori_frame.cols, ori_frame.rows)); + for (int i = 0; i < ori_frame.rows; i++) { + for (int j = 0; j < ori_frame.cols; j++) { + float score = seg_mat.at(i, j) / 255.0; + if (score > 0.1) { + return_frame.at(i, j)[2] = static_cast((1 - score) * 255 + + score*return_frame.at(i, j)[2]); + return_frame.at(i, j)[1] = static_cast((1 - score) * 255 + + score*return_frame.at(i, j)[1]); + return_frame.at(i, j)[0] = static_cast((1 - score) * 255 + + score*return_frame.at(i, j)[0]); + } else { + 
return_frame.at(i, j) = {255, 255, 255}; + } + } + } + return return_frame; +} + +int ThresholdMask(const cv::Mat &fg_cfd, + const float fg_thres, + const float bg_thres, + cv::Mat& fg_mask) { + if (fg_cfd.type() != CV_32FC1) { + printf("ThresholdMask: type is not CV_32FC1.\n"); + return -1; + } + if (!(fg_mask.type() == CV_8UC1 + && fg_mask.rows == fg_cfd.rows + && fg_mask.cols == fg_cfd.cols)) { + fg_mask = cv::Mat(fg_cfd.rows, fg_cfd.cols, CV_8UC1, cv::Scalar::all(0)); + } + + for (int r = 0; r < fg_cfd.rows; ++r) { + for (int c = 0; c < fg_cfd.cols; ++c) { + float score = fg_cfd.at(r, c); + if (score < bg_thres) { + fg_mask.at(r, c) = 0; + } else if (score > fg_thres) { + fg_mask.at(r, c) = 255; + } else { + fg_mask.at(r, c) = static_cast( + (score-bg_thres) / (fg_thres - bg_thres) * 255); + } + } + } + return 0; +} diff --git a/contrib/RealTimeHumanSeg/cpp/humanseg_postprocess.h b/contrib/RealTimeHumanSeg/cpp/humanseg_postprocess.h new file mode 100644 index 0000000000000000000000000000000000000000..f5059857c0108c600a6bd98bcaa355647fdc21e2 --- /dev/null +++ b/contrib/RealTimeHumanSeg/cpp/humanseg_postprocess.h @@ -0,0 +1,34 @@ +// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +#pragma once + +#include +#include +#include +#include + +int ThresholdMask(const cv::Mat &fg_cfd, + const float fg_thres, + const float bg_thres, + cv::Mat& fg_mask); + +cv::Mat MergeSegMat(const cv::Mat& seg_mat, + const cv::Mat& ori_frame); + +int MergeProcess(const uchar *im_buff, + const float *im_scoremap_buff, + const int height, + const int width, + uchar *result_buff); diff --git a/contrib/RealTimeHumanSeg/cpp/linux_build.sh b/contrib/RealTimeHumanSeg/cpp/linux_build.sh new file mode 100644 index 0000000000000000000000000000000000000000..ff0b11bcf60f1b4ec4d7a9f63f7490ffb70ad6e0 --- /dev/null +++ b/contrib/RealTimeHumanSeg/cpp/linux_build.sh @@ -0,0 +1,30 @@ +OPENCV_URL=https://paddleseg.bj.bcebos.com/deploy/deps/opencv346.tar.bz2 +if [ ! -d "./deps/opencv346" ]; then + mkdir -p deps + cd deps + wget -c ${OPENCV_URL} + tar xvfj opencv346.tar.bz2 + rm -rf opencv346.tar.bz2 + cd .. +fi + +WITH_GPU=OFF +PADDLE_DIR=/root/projects/deps/fluid_inference/ +CUDA_LIB=/usr/local/cuda-10.0/lib64/ +CUDNN_LIB=/usr/local/cuda-10.0/lib64/ +OPENCV_DIR=$(pwd)/deps/opencv346/ +echo ${OPENCV_DIR} + +rm -rf build +mkdir -p build +cd build + +cmake .. \ + -DWITH_GPU=${WITH_GPU} \ + -DPADDLE_DIR=${PADDLE_DIR} \ + -DCUDA_LIB=${CUDA_LIB} \ + -DCUDNN_LIB=${CUDNN_LIB} \ + -DOPENCV_DIR=${OPENCV_DIR} \ + -DWITH_STATIC_LIB=OFF +make clean +make -j12 diff --git a/contrib/RealTimeHumanSeg/cpp/main.cc b/contrib/RealTimeHumanSeg/cpp/main.cc new file mode 100644 index 0000000000000000000000000000000000000000..303051f051b885a83b0ef608fe2ab1319f97294e --- /dev/null +++ b/contrib/RealTimeHumanSeg/cpp/main.cc @@ -0,0 +1,92 @@ +// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. 
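Before the demo driver in `main.cc` below, a small worked sketch of the `ThresholdMask` mapping defined above may help: confidences below `bg_thres` become 0, confidences above `fg_thres` become 255, and everything in between is placed on a linear ramp. The threshold values 0.8 and 0.4 are the ones set in `HumanSeg::Postprocess`; the sample inputs are illustrative only.

```cpp
// Editorial sketch of the score-to-mask mapping used by ThresholdMask().
#include <cstdio>

static unsigned char MaskValue(float score, float fg_thres, float bg_thres) {
  if (score < bg_thres) return 0;    // confident background
  if (score > fg_thres) return 255;  // confident foreground
  // Linear ramp between the two thresholds, mirroring ThresholdMask().
  return static_cast<unsigned char>(
      (score - bg_thres) / (fg_thres - bg_thres) * 255);
}

int main() {
  const float fg = 0.8f, bg = 0.4f;
  std::printf("%d %d %d\n",
              MaskValue(0.3f, fg, bg),   // -> 0
              MaskValue(0.6f, fg, bg),   // -> 127 (half-way up the ramp)
              MaskValue(0.9f, fg, bg));  // -> 255
  return 0;
}
```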
+// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +#include +#include + +#include "humanseg.h" // NOLINT +#include "humanseg_postprocess.h" // NOLINT + +// Do predicting on a video file +int VideoPredict(const std::string& video_path, HumanSeg& seg) +{ + cv::VideoCapture capture; + capture.open(video_path.c_str()); + if (!capture.isOpened()) { + printf("can not open video : %s\n", video_path.c_str()); + return -1; + } + + int video_width = static_cast(capture.get(CV_CAP_PROP_FRAME_WIDTH)); + int video_height = static_cast(capture.get(CV_CAP_PROP_FRAME_HEIGHT)); + cv::VideoWriter video_out; + std::string video_out_path = "result.avi"; + video_out.open(video_out_path.c_str(), + CV_FOURCC('M', 'J', 'P', 'G'), + 30.0, + cv::Size(video_width, video_height), + true); + if (!video_out.isOpened()) { + printf("create video writer failed!\n"); + return -1; + } + cv::Mat frame; + while (capture.read(frame)) { + if (frame.empty()) { + break; + } + cv::Mat out_im = seg.Predict(frame); + video_out.write(out_im); + } + capture.release(); + video_out.release(); + return 0; +} + +// Do predicting on a image file +int ImagePredict(const std::string& image_path, HumanSeg& seg) +{ + cv::Mat img = imread(image_path, cv::IMREAD_COLOR); + cv::Mat out_im = seg.Predict(img); + imwrite("result.jpeg", out_im); + return 0; +} + +int main(int argc, char* argv[]) { + if (argc < 3 || argc > 4) { + std::cout << "Usage:" + << "./humanseg ./models/ ./data/test.avi" + << std::endl; + return -1; + } + + bool use_gpu = (argc == 4 ? std::stoi(argv[3]) : false); + auto model_dir = std::string(argv[1]); + auto input_path = std::string(argv[2]); + + // Init Model + std::vector means = {104.008, 116.669, 122.675}; + std::vector scale = {1.000, 1.000, 1.000}; + std::vector eval_sz = {192, 192}; + HumanSeg seg(model_dir, means, scale, eval_sz, use_gpu); + + // Call ImagePredict while input_path is a image file path + // The output will be saved as result.jpeg + // ImagePredict(input_path, seg); + + // Call VideoPredict while input_path is a video file path + // The output will be saved as result.avi + VideoPredict(input_path, seg); + return 0; +} diff --git a/contrib/RoadLine/download_RoadLine.py b/contrib/RoadLine/download_RoadLine.py new file mode 100644 index 0000000000000000000000000000000000000000..86b631784edadcff6d575c59e67ee23a1775216d --- /dev/null +++ b/contrib/RoadLine/download_RoadLine.py @@ -0,0 +1,31 @@ +# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License" +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
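The `main.cc` driver above ships with `ImagePredict` commented out and always runs `VideoPredict`. The sketch below shows one hypothetical way to dispatch between the two helpers based on the input file extension; the extension check is an editorial assumption and is not part of the original demo.

```cpp
// Hypothetical dispatch between ImagePredict() and VideoPredict() from main.cc.
#include <string>
#include "humanseg.h"

// Declarations of the helpers defined in main.cc above.
int VideoPredict(const std::string& video_path, HumanSeg& seg);
int ImagePredict(const std::string& image_path, HumanSeg& seg);

int RunDemo(const std::string& input_path, HumanSeg& seg) {
  const auto dot = input_path.find_last_of('.');
  const std::string ext =
      (dot == std::string::npos) ? "" : input_path.substr(dot + 1);
  if (ext == "jpg" || ext == "jpeg" || ext == "png") {
    return ImagePredict(input_path, seg);  // saves result.jpeg
  }
  return VideoPredict(input_path, seg);    // saves result.avi
}
```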
+ +import sys +import os + +LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) +TEST_PATH = os.path.join(LOCAL_PATH, "..", "..", "test") +sys.path.append(TEST_PATH) + +from test_utils import download_file_and_uncompress + +if __name__ == "__main__": + download_file_and_uncompress( + url='https://paddleseg.bj.bcebos.com/inference_model/RoadLine.tgz', + savepath=LOCAL_PATH, + extrapath=LOCAL_PATH, + extraname='RoadLine') + + print("Pretrained Model download success!") diff --git a/contrib/imgs/RoadLine.jpg b/contrib/RoadLine/imgs/RoadLine.jpg similarity index 100% rename from contrib/imgs/RoadLine.jpg rename to contrib/RoadLine/imgs/RoadLine.jpg diff --git a/contrib/imgs/RoadLine.png b/contrib/RoadLine/imgs/RoadLine.png similarity index 100% rename from contrib/imgs/RoadLine.png rename to contrib/RoadLine/imgs/RoadLine.png diff --git a/contrib/RoadLine/infer.py b/contrib/RoadLine/infer.py new file mode 100644 index 0000000000000000000000000000000000000000..971476933c431977ce80c73e1d939fe079e1af19 --- /dev/null +++ b/contrib/RoadLine/infer.py @@ -0,0 +1,130 @@ +# -*- coding: utf-8 -*- +import os +import cv2 +import numpy as np +from utils.util import get_arguments +from utils.palette import get_palette +from PIL import Image as PILImage +import importlib + +args = get_arguments() +config = importlib.import_module('config') +cfg = getattr(config, 'cfg') + +# paddle垃圾回收策略FLAG,ACE2P模型较大,当显存不够时建议开启 +os.environ['FLAGS_eager_delete_tensor_gb']='0.0' + +import paddle.fluid as fluid + +# 预测数据集类 +class TestDataSet(): + def __init__(self): + self.data_dir = cfg.data_dir + self.data_list_file = cfg.data_list_file + self.data_list = self.get_data_list() + self.data_num = len(self.data_list) + + def get_data_list(self): + # 获取预测图像路径列表 + data_list = [] + data_file_handler = open(self.data_list_file, 'r') + for line in data_file_handler: + img_name = line.strip() + name_prefix = img_name.split('.')[0] + if len(img_name.split('.')) == 1: + img_name = img_name + '.jpg' + img_path = os.path.join(self.data_dir, img_name) + data_list.append(img_path) + return data_list + + def preprocess(self, img): + # 图像预处理 + if cfg.example == 'ACE2P': + reader = importlib.import_module(args.example+'.reader') + ACE2P_preprocess = getattr(reader, 'preprocess') + img = ACE2P_preprocess(img) + else: + img = cv2.resize(img, cfg.input_size).astype(np.float32) + img -= np.array(cfg.MEAN) + img /= np.array(cfg.STD) + img = img.transpose((2, 0, 1)) + img = np.expand_dims(img, axis=0) + return img + + def get_data(self, index): + # 获取图像信息 + img_path = self.data_list[index] + img = cv2.imread(img_path, cv2.IMREAD_COLOR) + if img is None: + return img, img,img_path, None + + img_name = img_path.split(os.sep)[-1] + name_prefix = img_name.replace('.'+img_name.split('.')[-1],'') + img_shape = img.shape[:2] + img_process = self.preprocess(img) + + return img, img_process, name_prefix, img_shape + + +def infer(): + if not os.path.exists(cfg.vis_dir): + os.makedirs(cfg.vis_dir) + palette = get_palette(cfg.class_num) + # 人像分割结果显示阈值 + thresh = 120 + + place = fluid.CUDAPlace(0) if cfg.use_gpu else fluid.CPUPlace() + exe = fluid.Executor(place) + + # 加载预测模型 + test_prog, feed_name, fetch_list = fluid.io.load_inference_model( + dirname=cfg.model_path, executor=exe, params_filename='__params__') + + #加载预测数据集 + test_dataset = TestDataSet() + data_num = test_dataset.data_num + + for idx in range(data_num): + # 数据获取 + ori_img, image, im_name, im_shape = test_dataset.get_data(idx) + if image is None: + print(im_name, 'is None') + continue + + # 预测 
+ if cfg.example == 'ACE2P': + # ACE2P模型使用多尺度预测 + reader = importlib.import_module(args.example+'.reader') + multi_scale_test = getattr(reader, 'multi_scale_test') + parsing, logits = multi_scale_test(exe, test_prog, feed_name, fetch_list, image, im_shape) + else: + # HumanSeg,RoadLine模型单尺度预测 + result = exe.run(program=test_prog, feed={feed_name[0]: image}, fetch_list=fetch_list) + parsing = np.argmax(result[0][0], axis=0) + parsing = cv2.resize(parsing.astype(np.uint8), im_shape[::-1]) + + # 预测结果保存 + result_path = os.path.join(cfg.vis_dir, im_name + '.png') + if cfg.example == 'HumanSeg': + logits = result[0][0][1]*255 + logits = cv2.resize(logits, im_shape[::-1]) + ret, logits = cv2.threshold(logits, thresh, 0, cv2.THRESH_TOZERO) + logits = 255 *(logits - thresh)/(255 - thresh) + # 将分割结果添加到alpha通道 + rgba = np.concatenate((ori_img, np.expand_dims(logits, axis=2)), axis=2) + cv2.imwrite(result_path, rgba) + else: + output_im = PILImage.fromarray(np.asarray(parsing, dtype=np.uint8)) + output_im.putpalette(palette) + output_im.save(result_path) + + if (idx + 1) % 100 == 0: + print('%d processd' % (idx + 1)) + + print('%d processd done' % (idx + 1)) + + return 0 + + +if __name__ == "__main__": + infer() diff --git a/contrib/RoadLine/utils/__init__.py b/contrib/RoadLine/utils/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/contrib/RoadLine/utils/palette.py b/contrib/RoadLine/utils/palette.py new file mode 100644 index 0000000000000000000000000000000000000000..2186203cbc2789f6eff70dfd92f724b4fe16cdb7 --- /dev/null +++ b/contrib/RoadLine/utils/palette.py @@ -0,0 +1,38 @@ +##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +## Created by: RainbowSecret +## Microsoft Research +## yuyua@microsoft.com +## Copyright (c) 2018 +## +## This source code is licensed under the MIT-style license found in the +## LICENSE file in the root directory of this source tree +##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function +import numpy as np +import cv2 + + +def get_palette(num_cls): + """ Returns the color map for visualizing the segmentation mask. 
+ Args: + num_cls: Number of classes + Returns: + The color map + """ + n = num_cls + palette = [0] * (n * 3) + for j in range(0, n): + lab = j + palette[j * 3 + 0] = 0 + palette[j * 3 + 1] = 0 + palette[j * 3 + 2] = 0 + i = 0 + while lab: + palette[j * 3 + 0] |= (((lab >> 0) & 1) << (7 - i)) + palette[j * 3 + 1] |= (((lab >> 1) & 1) << (7 - i)) + palette[j * 3 + 2] |= (((lab >> 2) & 1) << (7 - i)) + i += 1 + lab >>= 3 + return palette diff --git a/contrib/RoadLine/utils/util.py b/contrib/RoadLine/utils/util.py new file mode 100644 index 0000000000000000000000000000000000000000..7394870e7c94c1fb16169e314696b931eecdc3b2 --- /dev/null +++ b/contrib/RoadLine/utils/util.py @@ -0,0 +1,47 @@ +from __future__ import division +from __future__ import print_function +from __future__ import unicode_literals +import argparse +import os + +def get_arguments(): + parser = argparse.ArgumentParser() + parser.add_argument("--use_gpu", + action="store_true", + help="Use gpu or cpu to test.") + parser.add_argument('--example', + type=str, + help='RoadLine, HumanSeg or ACE2P') + + return parser.parse_args() + + +class AttrDict(dict): + def __init__(self, *args, **kwargs): + super(AttrDict, self).__init__(*args, **kwargs) + + def __getattr__(self, name): + if name in self.__dict__: + return self.__dict__[name] + elif name in self: + return self[name] + else: + raise AttributeError(name) + + def __setattr__(self, name, value): + if name in self.__dict__: + self.__dict__[name] = value + else: + self[name] = value + +def merge_cfg_from_args(args, cfg): + """Merge config keys, values in args into the global config.""" + for k, v in vars(args).items(): + d = cfg + try: + value = eval(v) + except: + value = v + if value is not None: + cfg[k] = value + diff --git a/dataset/download_optic.py b/dataset/download_optic.py new file mode 100644 index 0000000000000000000000000000000000000000..2fd66be11ef2e0bca483ecf6d7bcec2b01bebd7a --- /dev/null +++ b/dataset/download_optic.py @@ -0,0 +1,33 @@ +# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License" +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import sys +import os + +LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) +TEST_PATH = os.path.join(LOCAL_PATH, "..", "test") +sys.path.append(TEST_PATH) + +from test_utils import download_file_and_uncompress + + +def download_pet_dataset(savepath, extrapath): + url = "https://paddleseg.bj.bcebos.com/dataset/optic_disc_seg.zip" + download_file_and_uncompress( + url=url, savepath=savepath, extrapath=extrapath) + + +if __name__ == "__main__": + download_pet_dataset(LOCAL_PATH, LOCAL_PATH) + print("Dataset download finish!") diff --git a/deploy/cpp/docs/windows_vs2019_build.md b/deploy/cpp/docs/windows_vs2019_build.md index 5862740b19c4d146ce38b458cd8cbe76f7a84747..0b52f1ec778751c588bc6682891a6a1a9dfd5590 100644 --- a/deploy/cpp/docs/windows_vs2019_build.md +++ b/deploy/cpp/docs/windows_vs2019_build.md @@ -99,4 +99,4 @@ cd /d D:\projects\PaddleSeg\deploy\cpp\out\build\x64-Release demo.exe --conf=/path/to/your/conf --input_dir=/path/to/your/input/data/directory ``` -更详细说明请参考ReadMe文档: [预测和可视化部分](../ReadMe.md) +更详细说明请参考ReadMe文档: [预测和可视化部分](../README.md) diff --git a/deploy/lite/README.md b/deploy/lite/README.md index f4ec50be28e75d79ce2f61453737930bccf52cf4..a46dc2077df3e061e18e8ebf9e4b21ca4d0fbbaf 100644 --- a/deploy/lite/README.md +++ b/deploy/lite/README.md @@ -10,11 +10,10 @@ * Android手机或开发板; ### 2.2 安装 -* git clone https://github.com/PaddlePaddle/PaddleSeg.git ; -* 打开Android Studio,在"Welcome to Android Studio"窗口点击"Open an existing Android Studio project",在弹出的路径选择窗口中进入"/PaddleSeg/lite/humanseg-android-demo/"目录,然后点击右下角的"Open"按钮即可导入工程 +* git clone https://github.com/PaddlePaddle/PaddleSeg.git ; +* 打开Android Studio,在"Welcome to Android Studio"窗口点击"Open an existing Android Studio project",在弹出的路径选择窗口中进入"/PaddleSeg/lite/humanseg_android_demo/"目录,然后点击右下角的"Open"按钮即可导入工程,构建工程的过程中会下载demo需要的模型和Lite预测库; * 通过USB连接Android手机或开发板; * 载入工程后,点击菜单栏的Run->Run 'App'按钮,在弹出的"Select Deployment Target"窗口选择已经连接的Android设备,然后点击"OK"按钮; -* 手机上会出现Demo的主界面,选择"Image Segmentation"图标,进入的人像分割示例程序; * 在人像分割Demo中,默认会载入一张人像图像,并会在图像下方给出CPU的预测结果; * 在人像分割Demo中,你还可以通过上方的"Gallery"和"Take Photo"按钮从相册或相机中加载测试图像; @@ -48,7 +47,7 @@ Paddle-Lite的编译目前支持Docker,Linux和Mac OS开发环境,建议使 * PaddlePredictor.jar; * arm64-v8a/libpaddle_lite_jni.so; -* armeabi-v7a/libpaddle_lite_jni.so; +* armeabi-v7a/libpaddle_lite_jni.so; 下面分别介绍两种方法: diff --git a/deploy/lite/humanseg-android-demo/.gitignore b/deploy/lite/human_segmentation_demo/.gitignore similarity index 100% rename from deploy/lite/humanseg-android-demo/.gitignore rename to deploy/lite/human_segmentation_demo/.gitignore diff --git a/deploy/lite/humanseg-android-demo/app/.gitignore b/deploy/lite/human_segmentation_demo/app/.gitignore similarity index 100% rename from deploy/lite/humanseg-android-demo/app/.gitignore rename to deploy/lite/human_segmentation_demo/app/.gitignore diff --git a/deploy/lite/human_segmentation_demo/app/build.gradle b/deploy/lite/human_segmentation_demo/app/build.gradle new file mode 100644 index 0000000000000000000000000000000000000000..88d5a19ece9d3b1c14069a6fca3ceb70c2e3e7e6 --- /dev/null +++ b/deploy/lite/human_segmentation_demo/app/build.gradle @@ -0,0 +1,119 @@ +import java.security.MessageDigest + +apply plugin: 'com.android.application' + +android { + compileSdkVersion 28 + defaultConfig { + applicationId "com.baidu.paddle.lite.demo.human_segmentation" + minSdkVersion 15 + targetSdkVersion 28 + versionCode 1 + versionName "1.0" + testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner" + } + buildTypes { + release { + minifyEnabled false + 
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro' + } + } +} + +dependencies { + implementation fileTree(include: ['*.jar'], dir: 'libs') + implementation 'com.android.support:appcompat-v7:28.0.0' + implementation 'com.android.support.constraint:constraint-layout:1.1.3' + implementation 'com.android.support:design:28.0.0' + testImplementation 'junit:junit:4.12' + androidTestImplementation 'com.android.support.test:runner:1.0.2' + androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2' + implementation files('libs/PaddlePredictor.jar') +} + +def paddleLiteLibs = 'https://paddlelite-demo.bj.bcebos.com/libs/android/paddle_lite_libs_v2_1_0_bug_fixed.tar.gz' +task downloadAndExtractPaddleLiteLibs(type: DefaultTask) { + doFirst { + println "Downloading and extracting Paddle Lite libs" + } + doLast { + // Prepare cache folder for libs + if (!file("cache").exists()) { + mkdir "cache" + } + // Generate cache name for libs + MessageDigest messageDigest = MessageDigest.getInstance('MD5') + messageDigest.update(paddleLiteLibs.bytes) + String cacheName = new BigInteger(1, messageDigest.digest()).toString(32) + // Download libs + if (!file("cache/${cacheName}.tar.gz").exists()) { + ant.get(src: paddleLiteLibs, dest: file("cache/${cacheName}.tar.gz")) + } + // Unpack libs + copy { + from tarTree("cache/${cacheName}.tar.gz") + into "cache/${cacheName}" + } + // Copy PaddlePredictor.jar + if (!file("libs/PaddlePredictor.jar").exists()) { + copy { + from "cache/${cacheName}/java/PaddlePredictor.jar" + into "libs" + } + } + // Copy libpaddle_lite_jni.so for armeabi-v7a and arm64-v8a + if (!file("src/main/jniLibs/armeabi-v7a/libpaddle_lite_jni.so").exists()) { + copy { + from "cache/${cacheName}/java/libs/armeabi-v7a/" + into "src/main/jniLibs/armeabi-v7a" + } + } + if (!file("src/main/jniLibs/arm64-v8a/libpaddle_lite_jni.so").exists()) { + copy { + from "cache/${cacheName}/java/libs/arm64-v8a/" + into "src/main/jniLibs/arm64-v8a" + } + } + } +} +preBuild.dependsOn downloadAndExtractPaddleLiteLibs + +def paddleLiteModels = [ + [ + 'src' : 'https://paddlelite-demo.bj.bcebos.com/models/deeplab_mobilenet_fp32_for_cpu_v2_1_0.tar.gz', + 'dest' : 'src/main/assets/image_segmentation/models/deeplab_mobilenet_for_cpu' + ], +] +task downloadAndExtractPaddleLiteModels(type: DefaultTask) { + doFirst { + println "Downloading and extracting Paddle Lite models" + } + doLast { + // Prepare cache folder for models + if (!file("cache").exists()) { + mkdir "cache" + } + paddleLiteModels.eachWithIndex { model, index -> + MessageDigest messageDigest = MessageDigest.getInstance('MD5') + messageDigest.update(model.src.bytes) + String cacheName = new BigInteger(1, messageDigest.digest()).toString(32) + // Download model file + if (!file("cache/${cacheName}.tar.gz").exists()) { + ant.get(src: model.src, dest: file("cache/${cacheName}.tar.gz")) + } + // Unpack model file + copy { + from tarTree("cache/${cacheName}.tar.gz") + into "cache/${cacheName}" + } + // Copy model file + if (!file("${model.dest}/__model__.nb").exists() || !file("${model.dest}/param.nb").exists()) { + copy { + from "cache/${cacheName}" + into "${model.dest}" + } + } + } + } +} +preBuild.dependsOn downloadAndExtractPaddleLiteModels diff --git a/deploy/lite/humanseg-android-demo/app/gradle/wrapper/gradle-wrapper.jar b/deploy/lite/human_segmentation_demo/app/gradle/wrapper/gradle-wrapper.jar similarity index 100% rename from deploy/lite/humanseg-android-demo/app/gradle/wrapper/gradle-wrapper.jar 
rename to deploy/lite/human_segmentation_demo/app/gradle/wrapper/gradle-wrapper.jar diff --git a/deploy/lite/humanseg-android-demo/app/gradle/wrapper/gradle-wrapper.properties b/deploy/lite/human_segmentation_demo/app/gradle/wrapper/gradle-wrapper.properties similarity index 100% rename from deploy/lite/humanseg-android-demo/app/gradle/wrapper/gradle-wrapper.properties rename to deploy/lite/human_segmentation_demo/app/gradle/wrapper/gradle-wrapper.properties diff --git a/deploy/lite/humanseg-android-demo/app/gradlew b/deploy/lite/human_segmentation_demo/app/gradlew similarity index 100% rename from deploy/lite/humanseg-android-demo/app/gradlew rename to deploy/lite/human_segmentation_demo/app/gradlew diff --git a/deploy/lite/humanseg-android-demo/app/gradlew.bat b/deploy/lite/human_segmentation_demo/app/gradlew.bat similarity index 100% rename from deploy/lite/humanseg-android-demo/app/gradlew.bat rename to deploy/lite/human_segmentation_demo/app/gradlew.bat diff --git a/deploy/lite/human_segmentation_demo/app/local.properties b/deploy/lite/human_segmentation_demo/app/local.properties new file mode 100644 index 0000000000000000000000000000000000000000..f3bc0d0f5319e7573b7cba2cd997b979060f3eec --- /dev/null +++ b/deploy/lite/human_segmentation_demo/app/local.properties @@ -0,0 +1,8 @@ +## This file must *NOT* be checked into Version Control Systems, +# as it contains information specific to your local configuration. +# +# Location of the SDK. This is only used by Gradle. +# For customization when using a Version Control System, please read the +# header note. +#Mon Nov 25 17:01:52 CST 2019 +sdk.dir=/Users/chenlingchi/Library/Android/sdk diff --git a/deploy/lite/humanseg-android-demo/app/proguard-rules.pro b/deploy/lite/human_segmentation_demo/app/proguard-rules.pro similarity index 100% rename from deploy/lite/humanseg-android-demo/app/proguard-rules.pro rename to deploy/lite/human_segmentation_demo/app/proguard-rules.pro diff --git a/deploy/lite/humanseg-android-demo/app/src/androidTest/java/com/baidu/paddle/lite/demo/ExampleInstrumentedTest.java b/deploy/lite/human_segmentation_demo/app/src/androidTest/java/com/baidu/paddle/lite/demo/ExampleInstrumentedTest.java similarity index 100% rename from deploy/lite/humanseg-android-demo/app/src/androidTest/java/com/baidu/paddle/lite/demo/ExampleInstrumentedTest.java rename to deploy/lite/human_segmentation_demo/app/src/androidTest/java/com/baidu/paddle/lite/demo/ExampleInstrumentedTest.java diff --git a/deploy/lite/humanseg-android-demo/app/src/main/AndroidManifest.xml b/deploy/lite/human_segmentation_demo/app/src/main/AndroidManifest.xml similarity index 79% rename from deploy/lite/humanseg-android-demo/app/src/main/AndroidManifest.xml rename to deploy/lite/human_segmentation_demo/app/src/main/AndroidManifest.xml index 67e06269f4b2764034d4d7c400f1c93c1504fe6a..39789e0370b04e67a6e80e1b21e79ef058500370 100644 --- a/deploy/lite/humanseg-android-demo/app/src/main/AndroidManifest.xml +++ b/deploy/lite/human_segmentation_demo/app/src/main/AndroidManifest.xml @@ -1,6 +1,6 @@ + package="com.baidu.paddle.lite.demo.segmentation"> @@ -17,15 +17,11 @@ - - diff --git a/deploy/lite/humanseg-android-demo/app/src/main/assets/image_segmentation/images/human.jpg b/deploy/lite/human_segmentation_demo/app/src/main/assets/image_segmentation/images/human.jpg similarity index 100% rename from deploy/lite/humanseg-android-demo/app/src/main/assets/image_segmentation/images/human.jpg rename to 
deploy/lite/human_segmentation_demo/app/src/main/assets/image_segmentation/images/human.jpg diff --git a/deploy/lite/humanseg-android-demo/app/src/main/assets/image_segmentation/labels/label_list b/deploy/lite/human_segmentation_demo/app/src/main/assets/image_segmentation/labels/label_list similarity index 100% rename from deploy/lite/humanseg-android-demo/app/src/main/assets/image_segmentation/labels/label_list rename to deploy/lite/human_segmentation_demo/app/src/main/assets/image_segmentation/labels/label_list diff --git a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/AppCompatPreferenceActivity.java b/deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/AppCompatPreferenceActivity.java similarity index 98% rename from deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/AppCompatPreferenceActivity.java rename to deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/AppCompatPreferenceActivity.java index 960f34257d58b9b19d3e9701f92659575be8a701..314c045620e5edc8911196cbe8ff5d1eadfb7a16 100644 --- a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/AppCompatPreferenceActivity.java +++ b/deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/AppCompatPreferenceActivity.java @@ -14,7 +14,7 @@ * limitations under the License. */ -package com.baidu.paddle.lite.demo; +package com.baidu.paddle.lite.demo.segmentation; import android.content.res.Configuration; import android.os.Bundle; diff --git a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/CommonActivity.java b/deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/MainActivity.java similarity index 56% rename from deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/CommonActivity.java rename to deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/MainActivity.java index 88146b3961e5f2c8ed366816e505144ba3ac9f6b..aab9f54c30c2a963b845970a8cf42480eb3fcf17 100644 --- a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/CommonActivity.java +++ b/deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/MainActivity.java @@ -1,39 +1,45 @@ -package com.baidu.paddle.lite.demo; +package com.baidu.paddle.lite.demo.segmentation; import android.Manifest; import android.app.ProgressDialog; import android.content.ContentResolver; import android.content.Intent; +import android.content.SharedPreferences; import android.content.pm.PackageManager; import android.database.Cursor; import android.graphics.Bitmap; import android.graphics.BitmapFactory; import android.net.Uri; import android.os.Bundle; -import android.os.Environment; import android.os.Handler; import android.os.HandlerThread; import android.os.Message; +import android.preference.PreferenceManager; import android.provider.MediaStore; import android.support.annotation.NonNull; import android.support.v4.app.ActivityCompat; import android.support.v4.content.ContextCompat; -import android.support.v4.content.FileProvider; -import android.support.v7.app.ActionBar; import android.support.v7.app.AppCompatActivity; +import android.text.method.ScrollingMovementMethod; import android.util.Log; import android.view.Menu; import android.view.MenuInflater; import android.view.MenuItem; +import 
android.widget.ImageView; +import android.widget.TextView; import android.widget.Toast; +import com.baidu.paddle.lite.demo.segmentation.config.Config; +import com.baidu.paddle.lite.demo.segmentation.preprocess.Preprocess; +import com.baidu.paddle.lite.demo.segmentation.visual.Visualize; + import java.io.File; import java.io.IOException; -import java.text.SimpleDateFormat; -import java.util.Date; +import java.io.InputStream; + +public class MainActivity extends AppCompatActivity { -public class CommonActivity extends AppCompatActivity { - private static final String TAG = CommonActivity.class.getSimpleName(); + private static final String TAG = MainActivity.class.getSimpleName(); public static final int OPEN_GALLERY_REQUEST_CODE = 0; public static final int TAKE_PHOTO_REQUEST_CODE = 1; @@ -51,14 +57,25 @@ public class CommonActivity extends AppCompatActivity { protected Handler sender = null; // send command to worker thread protected HandlerThread worker = null; // worker thread to load&run model + + protected TextView tvInputSetting; + protected ImageView ivInputImage; + protected TextView tvOutputResult; + protected TextView tvInferenceTime; + + // model config + Config config = new Config(); + + protected Predictor predictor = new Predictor(); + + Preprocess preprocess = new Preprocess(); + + Visualize visualize = new Visualize(); + @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); - ActionBar supportActionBar = getSupportActionBar(); - if (supportActionBar != null) { - supportActionBar.setDisplayHomeAsUpEnabled(true); - } - + setContentView(R.layout.activity_main); receiver = new Handler() { @Override public void handleMessage(Message msg) { @@ -69,7 +86,7 @@ public class CommonActivity extends AppCompatActivity { break; case RESPONSE_LOAD_MODEL_FAILED: pbLoadModel.dismiss(); - Toast.makeText(CommonActivity.this, "Load model failed!", Toast.LENGTH_SHORT).show(); + Toast.makeText(MainActivity.this, "Load model failed!", Toast.LENGTH_SHORT).show(); onLoadModelFailed(); break; case RESPONSE_RUN_MODEL_SUCCESSED: @@ -78,7 +95,7 @@ public class CommonActivity extends AppCompatActivity { break; case RESPONSE_RUN_MODEL_FAILED: pbRunModel.dismiss(); - Toast.makeText(CommonActivity.this, "Run model failed!", Toast.LENGTH_SHORT).show(); + Toast.makeText(MainActivity.this, "Run model failed!", Toast.LENGTH_SHORT).show(); onRunModelFailed(); break; default: @@ -113,6 +130,29 @@ public class CommonActivity extends AppCompatActivity { } } }; + + tvInputSetting = findViewById(R.id.tv_input_setting); + ivInputImage = findViewById(R.id.iv_input_image); + tvInferenceTime = findViewById(R.id.tv_inference_time); + tvOutputResult = findViewById(R.id.tv_output_result); + tvInputSetting.setMovementMethod(ScrollingMovementMethod.getInstance()); + tvOutputResult.setMovementMethod(ScrollingMovementMethod.getInstance()); + } + + + public boolean onLoadModel() { + return predictor.init(MainActivity.this, config); + } + + + public boolean onRunModel() { + return predictor.isLoaded() && predictor.runModel(preprocess,visualize); + } + + public void onLoadModelFailed() { + + } + public void onRunModelFailed() { } public void loadModel() { @@ -125,33 +165,61 @@ public class CommonActivity extends AppCompatActivity { sender.sendEmptyMessage(REQUEST_RUN_MODEL); } - public boolean onLoadModel() { - return true; - } - - public boolean onRunModel() { - return true; - } - public void onLoadModelSuccessed() { - } - - public void onLoadModelFailed() { + // load test image from 
file_paths and run model + try { + if (config.imagePath.isEmpty()) { + return; + } + Bitmap image = null; + // read test image file from custom file_paths if the first character of mode file_paths is '/', otherwise read test + // image file from assets + if (!config.imagePath.substring(0, 1).equals("/")) { + InputStream imageStream = getAssets().open(config.imagePath); + image = BitmapFactory.decodeStream(imageStream); + } else { + if (!new File(config.imagePath).exists()) { + return; + } + image = BitmapFactory.decodeFile(config.imagePath); + } + if (image != null && predictor.isLoaded()) { + predictor.setInputImage(image); + runModel(); + } + } catch (IOException e) { + Toast.makeText(MainActivity.this, "Load image failed!", Toast.LENGTH_SHORT).show(); + e.printStackTrace(); + } } public void onRunModelSuccessed() { + // obtain results and update UI + tvInferenceTime.setText("Inference time: " + predictor.inferenceTime() + " ms"); + Bitmap outputImage = predictor.outputImage(); + if (outputImage != null) { + ivInputImage.setImageBitmap(outputImage); + } + tvOutputResult.setText(predictor.outputResult()); + tvOutputResult.scrollTo(0, 0); } - public void onRunModelFailed() { - } public void onImageChanged(Bitmap image) { + // rerun model if users pick test image from gallery or camera + if (image != null && predictor.isLoaded()) { + predictor.setInputImage(image); + runModel(); + } } public void onImageChanged(String path) { - + Bitmap image = BitmapFactory.decodeFile(path); + predictor.setInputImage(image); + runModel(); } public void onSettingsClicked() { + startActivity(new Intent(MainActivity.this, SettingsActivity.class)); } @Override @@ -186,7 +254,6 @@ public class CommonActivity extends AppCompatActivity { } return super.onOptionsItemSelected(item); } - @Override public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) { @@ -195,33 +262,6 @@ public class CommonActivity extends AppCompatActivity { Toast.makeText(this, "Permission Denied", Toast.LENGTH_SHORT).show(); } } - - private boolean requestAllPermissions() { - if (ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE) - != PackageManager.PERMISSION_GRANTED || ContextCompat.checkSelfPermission(this, - Manifest.permission.CAMERA) - != PackageManager.PERMISSION_GRANTED) { - ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE, - Manifest.permission.CAMERA}, - 0); - return false; - } - return true; - } - - private void openGallery() { - Intent intent = new Intent(Intent.ACTION_PICK, null); - intent.setDataAndType(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, "image/*"); - startActivityForResult(intent, OPEN_GALLERY_REQUEST_CODE); - } - - private void takePhoto() { - Intent takePhotoIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE); - if (takePhotoIntent.resolveActivity(getPackageManager()) != null) { - startActivityForResult(takePhotoIntent, TAKE_PHOTO_REQUEST_CODE); - } - } - @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); @@ -251,14 +291,97 @@ public class CommonActivity extends AppCompatActivity { } } } + private boolean requestAllPermissions() { + if (ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE) + != PackageManager.PERMISSION_GRANTED || ContextCompat.checkSelfPermission(this, + Manifest.permission.CAMERA) + != PackageManager.PERMISSION_GRANTED) { + 
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE, + Manifest.permission.CAMERA}, + 0); + return false; + } + return true; + } + + private void openGallery() { + Intent intent = new Intent(Intent.ACTION_PICK, null); + intent.setDataAndType(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, "image/*"); + startActivityForResult(intent, OPEN_GALLERY_REQUEST_CODE); + } + + private void takePhoto() { + Intent takePhotoIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE); + if (takePhotoIntent.resolveActivity(getPackageManager()) != null) { + startActivityForResult(takePhotoIntent, TAKE_PHOTO_REQUEST_CODE); + } + } + + @Override + public boolean onPrepareOptionsMenu(Menu menu) { + boolean isLoaded = predictor.isLoaded(); + menu.findItem(R.id.open_gallery).setEnabled(isLoaded); + menu.findItem(R.id.take_photo).setEnabled(isLoaded); + return super.onPrepareOptionsMenu(menu); + } @Override protected void onResume() { + Log.i(TAG,"begin onResume"); super.onResume(); + + SharedPreferences sharedPreferences = PreferenceManager.getDefaultSharedPreferences(this); + boolean settingsChanged = false; + String model_path = sharedPreferences.getString(getString(R.string.MODEL_PATH_KEY), + getString(R.string.MODEL_PATH_DEFAULT)); + String label_path = sharedPreferences.getString(getString(R.string.LABEL_PATH_KEY), + getString(R.string.LABEL_PATH_DEFAULT)); + String image_path = sharedPreferences.getString(getString(R.string.IMAGE_PATH_KEY), + getString(R.string.IMAGE_PATH_DEFAULT)); + settingsChanged |= !model_path.equalsIgnoreCase(config.modelPath); + settingsChanged |= !label_path.equalsIgnoreCase(config.labelPath); + settingsChanged |= !image_path.equalsIgnoreCase(config.imagePath); + int cpu_thread_num = Integer.parseInt(sharedPreferences.getString(getString(R.string.CPU_THREAD_NUM_KEY), + getString(R.string.CPU_THREAD_NUM_DEFAULT))); + settingsChanged |= cpu_thread_num != config.cpuThreadNum; + String cpu_power_mode = + sharedPreferences.getString(getString(R.string.CPU_POWER_MODE_KEY), + getString(R.string.CPU_POWER_MODE_DEFAULT)); + settingsChanged |= !cpu_power_mode.equalsIgnoreCase(config.cpuPowerMode); + String input_color_format = + sharedPreferences.getString(getString(R.string.INPUT_COLOR_FORMAT_KEY), + getString(R.string.INPUT_COLOR_FORMAT_DEFAULT)); + settingsChanged |= !input_color_format.equalsIgnoreCase(config.inputColorFormat); + long[] input_shape = + Utils.parseLongsFromString(sharedPreferences.getString(getString(R.string.INPUT_SHAPE_KEY), + getString(R.string.INPUT_SHAPE_DEFAULT)), ","); + + settingsChanged |= input_shape.length != config.inputShape.length; + + if (!settingsChanged) { + for (int i = 0; i < input_shape.length; i++) { + settingsChanged |= input_shape[i] != config.inputShape[i]; + } + } + + if (settingsChanged) { + config.init(model_path,label_path,image_path,cpu_thread_num,cpu_power_mode, + input_color_format,input_shape); + preprocess.init(config); + // update UI + tvInputSetting.setText("Model: " + config.modelPath.substring(config.modelPath.lastIndexOf("/") + 1) + "\n" + "CPU" + + " Thread Num: " + Integer.toString(config.cpuThreadNum) + "\n" + "CPU Power Mode: " + config.cpuPowerMode); + tvInputSetting.scrollTo(0, 0); + // reload model if configure has been changed + loadModel(); + } } @Override protected void onDestroy() { + if (predictor != null) { + predictor.releaseModel(); + } worker.quit(); super.onDestroy(); } diff --git 
a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/ImgSegPredictor.java b/deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/Predictor.java similarity index 50% rename from deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/ImgSegPredictor.java rename to deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/Predictor.java index 717e086adf078a2eea69bf3fc720af8c233fd9a3..27bfe3544a9913f77c56b6f059616b6e83ca5dc8 100644 --- a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/ImgSegPredictor.java +++ b/deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/Predictor.java @@ -4,9 +4,12 @@ import android.content.Context; import android.graphics.Bitmap; import android.util.Log; +import com.baidu.paddle.lite.MobileConfig; +import com.baidu.paddle.lite.PaddlePredictor; +import com.baidu.paddle.lite.PowerMode; import com.baidu.paddle.lite.Tensor; -import com.baidu.paddle.lite.demo.Predictor; import com.baidu.paddle.lite.demo.segmentation.config.Config; + import com.baidu.paddle.lite.demo.segmentation.preprocess.Preprocess; import com.baidu.paddle.lite.demo.segmentation.visual.Visualize; @@ -14,15 +17,11 @@ import java.io.InputStream; import java.util.Date; import java.util.Vector; -import static android.graphics.Color.blue; -import static android.graphics.Color.green; -import static android.graphics.Color.red; - -public class ImgSegPredictor extends Predictor { - private static final String TAG = ImgSegPredictor.class.getSimpleName(); +public class Predictor { + private static final String TAG = Predictor.class.getSimpleName(); protected Vector wordLabels = new Vector(); - Config config; + Config config = new Config(); protected Bitmap inputImage = null; protected Bitmap scaledImage = null; @@ -31,10 +30,27 @@ public class ImgSegPredictor extends Predictor { protected float preprocessTime = 0; protected float postprocessTime = 0; - public ImgSegPredictor() { + public boolean isLoaded = false; + public int warmupIterNum = 0; + public int inferIterNum = 1; + protected Context appCtx = null; + public int cpuThreadNum = 1; + public String cpuPowerMode = "LITE_POWER_HIGH"; + public String modelPath = ""; + public String modelName = ""; + protected PaddlePredictor paddlePredictor = null; + protected float inferenceTime = 0; + + public Predictor() { super(); } + public boolean init(Context appCtx, String modelPath, int cpuThreadNum, String cpuPowerMode) { + this.appCtx = appCtx; + isLoaded = loadModel(modelPath, cpuThreadNum, cpuPowerMode); + return isLoaded; + } + public boolean init(Context appCtx, Config config) { if (config.inputShape.length != 4) { @@ -55,8 +71,9 @@ public class ImgSegPredictor extends Predictor { Log.i(TAG, "only RGB and BGR color format is supported."); return false; } - super.init(appCtx, config.modelPath, config.cpuThreadNum, config.cpuPowerMode); - if (!super.isLoaded()) { + init(appCtx, config.modelPath, config.cpuThreadNum, config.cpuPowerMode); + + if (!isLoaded()) { return false; } this.config = config; @@ -64,6 +81,11 @@ public class ImgSegPredictor extends Predictor { return isLoaded; } + + public boolean isLoaded() { + return paddlePredictor != null && isLoaded; + } + protected boolean loadLabel(String labelPath) { wordLabels.clear(); // load word labels from file @@ -87,11 +109,80 @@ public class ImgSegPredictor extends Predictor 
{ } public Tensor getInput(int idx) { - return super.getInput(idx); + if (!isLoaded()) { + return null; + } + return paddlePredictor.getInput(idx); } public Tensor getOutput(int idx) { - return super.getOutput(idx); + if (!isLoaded()) { + return null; + } + return paddlePredictor.getOutput(idx); + } + + protected boolean loadModel(String modelPath, int cpuThreadNum, String cpuPowerMode) { + // release model if exists + releaseModel(); + + // load model + if (modelPath.isEmpty()) { + return false; + } + String realPath = modelPath; + if (!modelPath.substring(0, 1).equals("/")) { + // read model files from custom file_paths if the first character of mode file_paths is '/' + // otherwise copy model to cache from assets + realPath = appCtx.getCacheDir() + "/" + modelPath; + Utils.copyDirectoryFromAssets(appCtx, modelPath, realPath); + } + if (realPath.isEmpty()) { + return false; + } + MobileConfig modelConfig = new MobileConfig(); + modelConfig.setModelDir(realPath); + modelConfig.setThreads(cpuThreadNum); + if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_HIGH")) { + modelConfig.setPowerMode(PowerMode.LITE_POWER_HIGH); + } else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_LOW")) { + modelConfig.setPowerMode(PowerMode.LITE_POWER_LOW); + } else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_FULL")) { + modelConfig.setPowerMode(PowerMode.LITE_POWER_FULL); + } else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_NO_BIND")) { + modelConfig.setPowerMode(PowerMode.LITE_POWER_NO_BIND); + } else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_RAND_HIGH")) { + modelConfig.setPowerMode(PowerMode.LITE_POWER_RAND_HIGH); + } else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_RAND_LOW")) { + modelConfig.setPowerMode(PowerMode.LITE_POWER_RAND_LOW); + } else { + Log.e(TAG, "unknown cpu power mode!"); + return false; + } + paddlePredictor = PaddlePredictor.createPaddlePredictor(modelConfig); + this.cpuThreadNum = cpuThreadNum; + this.cpuPowerMode = cpuPowerMode; + this.modelPath = realPath; + this.modelName = realPath.substring(realPath.lastIndexOf("/") + 1); + return true; + } + + public boolean runModel() { + if (!isLoaded()) { + return false; + } + // warm up + for (int i = 0; i < warmupIterNum; i++){ + paddlePredictor.run(); + } + // inference + Date start = new Date(); + for (int i = 0; i < inferIterNum; i++) { + paddlePredictor.run(); + } + Date end = new Date(); + inferenceTime = (end.getTime() - start.getTime()) / (float) inferIterNum; + return true; } public boolean runModel(Bitmap image) { @@ -106,39 +197,42 @@ public class ImgSegPredictor extends Predictor { // set input shape Tensor inputTensor = getInput(0); - inputTensor.resize(config.inputShape); // pre-process image Date start = new Date(); preprocess.init(config); - preprocess.to_array(scaledImage); // feed input tensor with pre-processed data - inputTensor.setData(preprocess.inputData); Date end = new Date(); preprocessTime = (float) (end.getTime() - start.getTime()); // inference - super.runModel(); + runModel(); + start = new Date(); Tensor outputTensor = getOutput(0); // post-process - this.outputImage = visualize.draw(inputImage,outputTensor); - + this.outputImage = visualize.draw(inputImage, outputTensor); postprocessTime = (float) (end.getTime() - start.getTime()); - start = new Date(); outputResult = new String(); - end = new Date(); return true; } + public void releaseModel() { + paddlePredictor = null; + isLoaded = false; + cpuThreadNum = 1; + cpuPowerMode = "LITE_POWER_HIGH"; + modelPath = ""; + modelName = ""; + } public void 
setConfig(Config config){ this.config = config; @@ -164,13 +258,32 @@ public class ImgSegPredictor extends Predictor { return postprocessTime; } + public String modelPath() { + return modelPath; + } + + public String modelName() { + return modelName; + } + + public int cpuThreadNum() { + return cpuThreadNum; + } + + public String cpuPowerMode() { + return cpuPowerMode; + } + + public float inferenceTime() { + return inferenceTime; + } + public void setInputImage(Bitmap image) { if (image == null) { return; } // scale image to the size of input tensor Bitmap rgbaImage = image.copy(Bitmap.Config.ARGB_8888, true); - Bitmap scaleImage = Bitmap.createScaledBitmap(rgbaImage, (int) this.config.inputShape[3], (int) this.config.inputShape[2], true); this.inputImage = rgbaImage; this.scaledImage = scaleImage; diff --git a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/ImgSegSettingsActivity.java b/deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/SettingsActivity.java similarity index 60% rename from deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/ImgSegSettingsActivity.java rename to deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/SettingsActivity.java index 710d318572088f63f97d298df0ce931d7ecd323e..8f53974d48ed572cd3ccf5d9da4ea74dcdd718c0 100644 --- a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/ImgSegSettingsActivity.java +++ b/deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/SettingsActivity.java @@ -7,14 +7,10 @@ import android.preference.EditTextPreference; import android.preference.ListPreference; import android.support.v7.app.ActionBar; -import com.baidu.paddle.lite.demo.AppCompatPreferenceActivity; -import com.baidu.paddle.lite.demo.R; -import com.baidu.paddle.lite.demo.Utils; - import java.util.ArrayList; import java.util.List; -public class ImgSegSettingsActivity extends AppCompatPreferenceActivity implements SharedPreferences.OnSharedPreferenceChangeListener { +public class SettingsActivity extends AppCompatPreferenceActivity implements SharedPreferences.OnSharedPreferenceChangeListener { ListPreference lpChoosePreInstalledModel = null; CheckBoxPreference cbEnableCustomSettings = null; EditTextPreference etModelPath = null; @@ -23,24 +19,21 @@ public class ImgSegSettingsActivity extends AppCompatPreferenceActivity implemen ListPreference lpCPUThreadNum = null; ListPreference lpCPUPowerMode = null; ListPreference lpInputColorFormat = null; - EditTextPreference etInputShape = null; - EditTextPreference etInputMean = null; - EditTextPreference etInputStd = null; + + List preInstalledModelPaths = null; List preInstalledLabelPaths = null; List preInstalledImagePaths = null; - List preInstalledInputShapes = null; List preInstalledCPUThreadNums = null; List preInstalledCPUPowerModes = null; List preInstalledInputColorFormats = null; - List preInstalledInputMeans = null; - List preInstalledInputStds = null; + @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); - addPreferencesFromResource(R.xml.settings_img_seg); + addPreferencesFromResource(R.xml.settings); ActionBar supportActionBar = getSupportActionBar(); if (supportActionBar != null) { supportActionBar.setDisplayHomeAsUpEnabled(true); @@ -50,24 +43,20 @@ public class ImgSegSettingsActivity extends AppCompatPreferenceActivity 
implemen preInstalledModelPaths = new ArrayList(); preInstalledLabelPaths = new ArrayList(); preInstalledImagePaths = new ArrayList(); - preInstalledInputShapes = new ArrayList(); + preInstalledCPUThreadNums = new ArrayList(); preInstalledCPUPowerModes = new ArrayList(); preInstalledInputColorFormats = new ArrayList(); - preInstalledInputMeans = new ArrayList(); - preInstalledInputStds = new ArrayList(); // add deeplab_mobilenet_for_cpu - preInstalledModelPaths.add(getString(R.string.ISG_MODEL_PATH_DEFAULT)); - preInstalledLabelPaths.add(getString(R.string.ISG_LABEL_PATH_DEFAULT)); - preInstalledImagePaths.add(getString(R.string.ISG_IMAGE_PATH_DEFAULT)); - preInstalledCPUThreadNums.add(getString(R.string.ISG_CPU_THREAD_NUM_DEFAULT)); - preInstalledCPUPowerModes.add(getString(R.string.ISG_CPU_POWER_MODE_DEFAULT)); - preInstalledInputColorFormats.add(getString(R.string.ISG_INPUT_COLOR_FORMAT_DEFAULT)); - preInstalledInputShapes.add(getString(R.string.ISG_INPUT_SHAPE_DEFAULT)); - + preInstalledModelPaths.add(getString(R.string.MODEL_PATH_DEFAULT)); + preInstalledLabelPaths.add(getString(R.string.LABEL_PATH_DEFAULT)); + preInstalledImagePaths.add(getString(R.string.IMAGE_PATH_DEFAULT)); + preInstalledCPUThreadNums.add(getString(R.string.CPU_THREAD_NUM_DEFAULT)); + preInstalledCPUPowerModes.add(getString(R.string.CPU_POWER_MODE_DEFAULT)); + preInstalledInputColorFormats.add(getString(R.string.INPUT_COLOR_FORMAT_DEFAULT)); // initialize UI components lpChoosePreInstalledModel = - (ListPreference) findPreference(getString(R.string.ISG_CHOOSE_PRE_INSTALLED_MODEL_KEY)); + (ListPreference) findPreference(getString(R.string.CHOOSE_PRE_INSTALLED_MODEL_KEY)); String[] preInstalledModelNames = new String[preInstalledModelPaths.size()]; for (int i = 0; i < preInstalledModelPaths.size(); i++) { preInstalledModelNames[i] = @@ -76,38 +65,36 @@ public class ImgSegSettingsActivity extends AppCompatPreferenceActivity implemen lpChoosePreInstalledModel.setEntries(preInstalledModelNames); lpChoosePreInstalledModel.setEntryValues(preInstalledModelPaths.toArray(new String[preInstalledModelPaths.size()])); cbEnableCustomSettings = - (CheckBoxPreference) findPreference(getString(R.string.ISG_ENABLE_CUSTOM_SETTINGS_KEY)); - etModelPath = (EditTextPreference) findPreference(getString(R.string.ISG_MODEL_PATH_KEY)); + (CheckBoxPreference) findPreference(getString(R.string.ENABLE_CUSTOM_SETTINGS_KEY)); + etModelPath = (EditTextPreference) findPreference(getString(R.string.MODEL_PATH_KEY)); etModelPath.setTitle("Model Path (SDCard: " + Utils.getSDCardDirectory() + ")"); - etLabelPath = (EditTextPreference) findPreference(getString(R.string.ISG_LABEL_PATH_KEY)); - etImagePath = (EditTextPreference) findPreference(getString(R.string.ISG_IMAGE_PATH_KEY)); + etLabelPath = (EditTextPreference) findPreference(getString(R.string.LABEL_PATH_KEY)); + etImagePath = (EditTextPreference) findPreference(getString(R.string.IMAGE_PATH_KEY)); lpCPUThreadNum = - (ListPreference) findPreference(getString(R.string.ISG_CPU_THREAD_NUM_KEY)); + (ListPreference) findPreference(getString(R.string.CPU_THREAD_NUM_KEY)); lpCPUPowerMode = - (ListPreference) findPreference(getString(R.string.ISG_CPU_POWER_MODE_KEY)); + (ListPreference) findPreference(getString(R.string.CPU_POWER_MODE_KEY)); lpInputColorFormat = - (ListPreference) findPreference(getString(R.string.ISG_INPUT_COLOR_FORMAT_KEY)); - etInputShape = (EditTextPreference) findPreference(getString(R.string.ISG_INPUT_SHAPE_KEY)); + (ListPreference) 
findPreference(getString(R.string.INPUT_COLOR_FORMAT_KEY)); } private void reloadPreferenceAndUpdateUI() { SharedPreferences sharedPreferences = getPreferenceScreen().getSharedPreferences(); boolean enableCustomSettings = - sharedPreferences.getBoolean(getString(R.string.ISG_ENABLE_CUSTOM_SETTINGS_KEY), false); - String modelPath = sharedPreferences.getString(getString(R.string.ISG_CHOOSE_PRE_INSTALLED_MODEL_KEY), - getString(R.string.ISG_MODEL_PATH_DEFAULT)); + sharedPreferences.getBoolean(getString(R.string.ENABLE_CUSTOM_SETTINGS_KEY), false); + String modelPath = sharedPreferences.getString(getString(R.string.CHOOSE_PRE_INSTALLED_MODEL_KEY), + getString(R.string.MODEL_PATH_DEFAULT)); int modelIdx = lpChoosePreInstalledModel.findIndexOfValue(modelPath); if (modelIdx >= 0 && modelIdx < preInstalledModelPaths.size()) { if (!enableCustomSettings) { SharedPreferences.Editor editor = sharedPreferences.edit(); - editor.putString(getString(R.string.ISG_MODEL_PATH_KEY), preInstalledModelPaths.get(modelIdx)); - editor.putString(getString(R.string.ISG_LABEL_PATH_KEY), preInstalledLabelPaths.get(modelIdx)); - editor.putString(getString(R.string.ISG_IMAGE_PATH_KEY), preInstalledImagePaths.get(modelIdx)); - editor.putString(getString(R.string.ISG_CPU_THREAD_NUM_KEY), preInstalledCPUThreadNums.get(modelIdx)); - editor.putString(getString(R.string.ISG_CPU_POWER_MODE_KEY), preInstalledCPUPowerModes.get(modelIdx)); - editor.putString(getString(R.string.ISG_INPUT_COLOR_FORMAT_KEY), + editor.putString(getString(R.string.MODEL_PATH_KEY), preInstalledModelPaths.get(modelIdx)); + editor.putString(getString(R.string.LABEL_PATH_KEY), preInstalledLabelPaths.get(modelIdx)); + editor.putString(getString(R.string.IMAGE_PATH_KEY), preInstalledImagePaths.get(modelIdx)); + editor.putString(getString(R.string.CPU_THREAD_NUM_KEY), preInstalledCPUThreadNums.get(modelIdx)); + editor.putString(getString(R.string.CPU_POWER_MODE_KEY), preInstalledCPUPowerModes.get(modelIdx)); + editor.putString(getString(R.string.INPUT_COLOR_FORMAT_KEY), preInstalledInputColorFormats.get(modelIdx)); - editor.putString(getString(R.string.ISG_INPUT_SHAPE_KEY), preInstalledInputShapes.get(modelIdx)); editor.commit(); } lpChoosePreInstalledModel.setSummary(modelPath); @@ -119,23 +106,18 @@ public class ImgSegSettingsActivity extends AppCompatPreferenceActivity implemen lpCPUThreadNum.setEnabled(enableCustomSettings); lpCPUPowerMode.setEnabled(enableCustomSettings); lpInputColorFormat.setEnabled(enableCustomSettings); - etInputShape.setEnabled(enableCustomSettings); - etInputMean.setEnabled(enableCustomSettings); - etInputStd.setEnabled(enableCustomSettings); - modelPath = sharedPreferences.getString(getString(R.string.ISG_MODEL_PATH_KEY), - getString(R.string.ISG_MODEL_PATH_DEFAULT)); - String labelPath = sharedPreferences.getString(getString(R.string.ISG_LABEL_PATH_KEY), - getString(R.string.ISG_LABEL_PATH_DEFAULT)); - String imagePath = sharedPreferences.getString(getString(R.string.ISG_IMAGE_PATH_KEY), - getString(R.string.ISG_IMAGE_PATH_DEFAULT)); - String cpuThreadNum = sharedPreferences.getString(getString(R.string.ISG_CPU_THREAD_NUM_KEY), - getString(R.string.ISG_CPU_THREAD_NUM_DEFAULT)); - String cpuPowerMode = sharedPreferences.getString(getString(R.string.ISG_CPU_POWER_MODE_KEY), - getString(R.string.ISG_CPU_POWER_MODE_DEFAULT)); - String inputColorFormat = sharedPreferences.getString(getString(R.string.ISG_INPUT_COLOR_FORMAT_KEY), - getString(R.string.ISG_INPUT_COLOR_FORMAT_DEFAULT)); - String inputShape = 
sharedPreferences.getString(getString(R.string.ISG_INPUT_SHAPE_KEY), - getString(R.string.ISG_INPUT_SHAPE_DEFAULT)); + modelPath = sharedPreferences.getString(getString(R.string.MODEL_PATH_KEY), + getString(R.string.MODEL_PATH_DEFAULT)); + String labelPath = sharedPreferences.getString(getString(R.string.LABEL_PATH_KEY), + getString(R.string.LABEL_PATH_DEFAULT)); + String imagePath = sharedPreferences.getString(getString(R.string.IMAGE_PATH_KEY), + getString(R.string.IMAGE_PATH_DEFAULT)); + String cpuThreadNum = sharedPreferences.getString(getString(R.string.CPU_THREAD_NUM_KEY), + getString(R.string.CPU_THREAD_NUM_DEFAULT)); + String cpuPowerMode = sharedPreferences.getString(getString(R.string.CPU_POWER_MODE_KEY), + getString(R.string.CPU_POWER_MODE_DEFAULT)); + String inputColorFormat = sharedPreferences.getString(getString(R.string.INPUT_COLOR_FORMAT_KEY), + getString(R.string.INPUT_COLOR_FORMAT_DEFAULT)); etModelPath.setSummary(modelPath); etModelPath.setText(modelPath); etLabelPath.setSummary(labelPath); @@ -148,8 +130,7 @@ public class ImgSegSettingsActivity extends AppCompatPreferenceActivity implemen lpCPUPowerMode.setSummary(cpuPowerMode); lpInputColorFormat.setValue(inputColorFormat); lpInputColorFormat.setSummary(inputColorFormat); - etInputShape.setSummary(inputShape); - etInputShape.setText(inputShape); + } @Override @@ -167,9 +148,9 @@ public class ImgSegSettingsActivity extends AppCompatPreferenceActivity implemen @Override public void onSharedPreferenceChanged(SharedPreferences sharedPreferences, String key) { - if (key.equals(getString(R.string.ISG_CHOOSE_PRE_INSTALLED_MODEL_KEY))) { + if (key.equals(getString(R.string.CHOOSE_PRE_INSTALLED_MODEL_KEY))) { SharedPreferences.Editor editor = sharedPreferences.edit(); - editor.putBoolean(getString(R.string.ISG_ENABLE_CUSTOM_SETTINGS_KEY), false); + editor.putBoolean(getString(R.string.ENABLE_CUSTOM_SETTINGS_KEY), false); editor.commit(); } reloadPreferenceAndUpdateUI(); diff --git a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/Utils.java b/deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/Utils.java similarity index 98% rename from deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/Utils.java rename to deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/Utils.java index a8b252365d05313d847d4ccd491fb44596f31227..3d581592dfc78b0e26bedb201c704fe9eff79ebc 100644 --- a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/Utils.java +++ b/deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/Utils.java @@ -1,4 +1,4 @@ -package com.baidu.paddle.lite.demo; +package com.baidu.paddle.lite.demo.segmentation; import android.content.Context; import android.os.Environment; diff --git a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/config/Config.java b/deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/config/Config.java similarity index 99% rename from deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/config/Config.java rename to deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/config/Config.java index 3f059878334cb324dbf01a9fb7b7e91632c1333f..4f09eb53cb642bfbeca001c75e410774bb984fa8 100644 --- 
a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/config/Config.java +++ b/deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/config/Config.java @@ -9,7 +9,6 @@ public class Config { public String imagePath = ""; public int cpuThreadNum = 1; public String cpuPowerMode = ""; - public String inputColorFormat = ""; public long[] inputShape = new long[]{}; @@ -22,7 +21,6 @@ public class Config { this.imagePath = imagePath; this.cpuThreadNum = cpuThreadNum; this.cpuPowerMode = cpuPowerMode; - this.inputColorFormat = inputColorFormat; this.inputShape = inputShape; } @@ -30,7 +28,6 @@ public class Config { public void setInputShape(Bitmap inputImage){ this.inputShape[0] = 1; this.inputShape[1] = 3; - this.inputShape[2] = inputImage.getHeight(); this.inputShape[3] = inputImage.getWidth(); diff --git a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/preprocess/Preprocess.java b/deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/preprocess/Preprocess.java similarity index 100% rename from deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/preprocess/Preprocess.java rename to deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/preprocess/Preprocess.java diff --git a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/visual/Visualize.java b/deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/visual/Visualize.java similarity index 100% rename from deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/visual/Visualize.java rename to deploy/lite/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/visual/Visualize.java diff --git a/deploy/lite/humanseg-android-demo/app/src/main/res/drawable-v24/ic_launcher_foreground.xml b/deploy/lite/human_segmentation_demo/app/src/main/res/drawable-v24/ic_launcher_foreground.xml similarity index 100% rename from deploy/lite/humanseg-android-demo/app/src/main/res/drawable-v24/ic_launcher_foreground.xml rename to deploy/lite/human_segmentation_demo/app/src/main/res/drawable-v24/ic_launcher_foreground.xml diff --git a/deploy/lite/humanseg-android-demo/app/src/main/res/drawable/ic_launcher_background.xml b/deploy/lite/human_segmentation_demo/app/src/main/res/drawable/ic_launcher_background.xml similarity index 100% rename from deploy/lite/humanseg-android-demo/app/src/main/res/drawable/ic_launcher_background.xml rename to deploy/lite/human_segmentation_demo/app/src/main/res/drawable/ic_launcher_background.xml diff --git a/deploy/lite/humanseg-android-demo/app/src/main/res/layout/activity_img_seg.xml b/deploy/lite/human_segmentation_demo/app/src/main/res/layout/activity_main.xml similarity index 98% rename from deploy/lite/humanseg-android-demo/app/src/main/res/layout/activity_img_seg.xml rename to deploy/lite/human_segmentation_demo/app/src/main/res/layout/activity_main.xml index a2839ba627ef41bda0676d225d3bd95508795b2b..356b0069df58b2f33eaab1dc1077daeda946f9e5 100644 --- a/deploy/lite/humanseg-android-demo/app/src/main/res/layout/activity_img_seg.xml +++ b/deploy/lite/human_segmentation_demo/app/src/main/res/layout/activity_main.xml @@ -4,7 +4,7 @@ xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" - 
tools:context=".segmentation.ImgSegActivity"> + tools:context=".segmentation.MainActivity"> +Human Segmentation + +CHOOSE_PRE_INSTALLED_MODEL_KEY +ENABLE_CUSTOM_SETTINGS_KEY +MODEL_PATH_KEY +LABEL_PATH_KEY +IMAGE_PATH_KEY +CPU_THREAD_NUM_KEY +CPU_POWER_MODE_KEY +INPUT_COLOR_FORMAT_KEY +INPUT_SHAPE_KEY +image_segmentation/models/deeplab_mobilenet_for_cpu +image_segmentation/labels/label_list +image_segmentation/images/human.jpg +1 +LITE_POWER_HIGH +RGB +1,3,513,513 + diff --git a/deploy/lite/humanseg-android-demo/app/src/main/res/values/styles.xml b/deploy/lite/human_segmentation_demo/app/src/main/res/values/styles.xml similarity index 100% rename from deploy/lite/humanseg-android-demo/app/src/main/res/values/styles.xml rename to deploy/lite/human_segmentation_demo/app/src/main/res/values/styles.xml diff --git a/deploy/lite/humanseg-android-demo/app/src/main/res/xml/settings_img_seg.xml b/deploy/lite/human_segmentation_demo/app/src/main/res/xml/settings.xml similarity index 62% rename from deploy/lite/humanseg-android-demo/app/src/main/res/xml/settings_img_seg.xml rename to deploy/lite/human_segmentation_demo/app/src/main/res/xml/settings.xml index 8f9e5e76634fae82cf800cc75d3e058c14a255f3..8f1a723ceb4a3b6860c05a3b09c5b3f61e1a6ae2 100644 --- a/deploy/lite/humanseg-android-demo/app/src/main/res/xml/settings_img_seg.xml +++ b/deploy/lite/human_segmentation_demo/app/src/main/res/xml/settings.xml @@ -2,42 +2,42 @@ - + diff --git a/deploy/lite/humanseg-android-demo/app/src/test/java/com/baidu/paddle/lite/demo/ExampleUnitTest.java b/deploy/lite/human_segmentation_demo/app/src/test/java/com/baidu/paddle/lite/demo/ExampleUnitTest.java similarity index 100% rename from deploy/lite/humanseg-android-demo/app/src/test/java/com/baidu/paddle/lite/demo/ExampleUnitTest.java rename to deploy/lite/human_segmentation_demo/app/src/test/java/com/baidu/paddle/lite/demo/ExampleUnitTest.java diff --git a/deploy/lite/humanseg-android-demo/build.gradle b/deploy/lite/human_segmentation_demo/build.gradle similarity index 100% rename from deploy/lite/humanseg-android-demo/build.gradle rename to deploy/lite/human_segmentation_demo/build.gradle diff --git a/deploy/lite/humanseg-android-demo/gradle.properties b/deploy/lite/human_segmentation_demo/gradle.properties similarity index 100% rename from deploy/lite/humanseg-android-demo/gradle.properties rename to deploy/lite/human_segmentation_demo/gradle.properties diff --git a/deploy/lite/humanseg-android-demo/gradle/wrapper/gradle-wrapper.jar b/deploy/lite/human_segmentation_demo/gradle/wrapper/gradle-wrapper.jar similarity index 100% rename from deploy/lite/humanseg-android-demo/gradle/wrapper/gradle-wrapper.jar rename to deploy/lite/human_segmentation_demo/gradle/wrapper/gradle-wrapper.jar diff --git a/deploy/lite/humanseg-android-demo/gradle/wrapper/gradle-wrapper.properties b/deploy/lite/human_segmentation_demo/gradle/wrapper/gradle-wrapper.properties similarity index 100% rename from deploy/lite/humanseg-android-demo/gradle/wrapper/gradle-wrapper.properties rename to deploy/lite/human_segmentation_demo/gradle/wrapper/gradle-wrapper.properties diff --git a/deploy/lite/humanseg-android-demo/gradlew b/deploy/lite/human_segmentation_demo/gradlew similarity index 100% rename from deploy/lite/humanseg-android-demo/gradlew rename to deploy/lite/human_segmentation_demo/gradlew diff --git a/deploy/lite/humanseg-android-demo/gradlew.bat b/deploy/lite/human_segmentation_demo/gradlew.bat similarity index 100% rename from deploy/lite/humanseg-android-demo/gradlew.bat rename to 
deploy/lite/human_segmentation_demo/gradlew.bat diff --git a/deploy/lite/humanseg-android-demo/settings.gradle b/deploy/lite/human_segmentation_demo/settings.gradle similarity index 100% rename from deploy/lite/humanseg-android-demo/settings.gradle rename to deploy/lite/human_segmentation_demo/settings.gradle diff --git a/deploy/lite/humanseg-android-demo/app/build.gradle b/deploy/lite/humanseg-android-demo/app/build.gradle deleted file mode 100644 index 087d90ca07b67a94030346989c9b1e8597693f61..0000000000000000000000000000000000000000 --- a/deploy/lite/humanseg-android-demo/app/build.gradle +++ /dev/null @@ -1,30 +0,0 @@ -apply plugin: 'com.android.application' - -android { - compileSdkVersion 28 - defaultConfig { - applicationId "com.baidu.paddle.lite.demo" - minSdkVersion 15 - targetSdkVersion 28 - versionCode 1 - versionName "1.0" - testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner" - } - buildTypes { - release { - minifyEnabled false - proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro' - } - } -} - -dependencies { - implementation fileTree(include: ['*.jar'], dir: 'libs') - implementation 'com.android.support:appcompat-v7:28.0.0' - implementation 'com.android.support.constraint:constraint-layout:1.1.3' - implementation 'com.android.support:design:28.0.0' - testImplementation 'junit:junit:4.12' - androidTestImplementation 'com.android.support.test:runner:1.0.2' - androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2' - implementation files('libs/PaddlePredictor.jar') -} diff --git a/deploy/lite/humanseg-android-demo/app/libs/PaddlePredictor.jar b/deploy/lite/humanseg-android-demo/app/libs/PaddlePredictor.jar deleted file mode 100644 index 037d569f712578c5cda766b1160654ea491115df..0000000000000000000000000000000000000000 Binary files a/deploy/lite/humanseg-android-demo/app/libs/PaddlePredictor.jar and /dev/null differ diff --git a/deploy/lite/humanseg-android-demo/app/src/main/assets/image_segmentation/models/deeplab_mobilenet_for_cpu/__model__.nb b/deploy/lite/humanseg-android-demo/app/src/main/assets/image_segmentation/models/deeplab_mobilenet_for_cpu/__model__.nb deleted file mode 100644 index 1a83251934c56c808de90abb5bc20887305da87e..0000000000000000000000000000000000000000 Binary files a/deploy/lite/humanseg-android-demo/app/src/main/assets/image_segmentation/models/deeplab_mobilenet_for_cpu/__model__.nb and /dev/null differ diff --git a/deploy/lite/humanseg-android-demo/app/src/main/assets/image_segmentation/models/deeplab_mobilenet_for_cpu/param.nb b/deploy/lite/humanseg-android-demo/app/src/main/assets/image_segmentation/models/deeplab_mobilenet_for_cpu/param.nb deleted file mode 100644 index 1a184669cadd6e435c143d8a43eb09e33a1ef9c2..0000000000000000000000000000000000000000 Binary files a/deploy/lite/humanseg-android-demo/app/src/main/assets/image_segmentation/models/deeplab_mobilenet_for_cpu/param.nb and /dev/null differ diff --git a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/MainActivity.java b/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/MainActivity.java deleted file mode 100644 index 00728f865a77e601ec60dc30d2f8dc047aa42472..0000000000000000000000000000000000000000 --- a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/MainActivity.java +++ /dev/null @@ -1,43 +0,0 @@ -package com.baidu.paddle.lite.demo; - -import android.content.Intent; -import android.content.SharedPreferences; 
-import android.os.Bundle; -import android.preference.PreferenceManager; -import android.support.v7.app.AppCompatActivity; -import android.util.Log; -import android.view.View; - -import com.baidu.paddle.lite.demo.segmentation.ImgSegActivity; - -public class MainActivity extends AppCompatActivity implements View.OnClickListener { - private static final String TAG = MainActivity.class.getSimpleName(); - - @Override - protected void onCreate(Bundle savedInstanceState) { - super.onCreate(savedInstanceState); - setContentView(R.layout.activity_main); - - // clear all setting items to avoid app crashing due to the incorrect settings - SharedPreferences sharedPreferences = PreferenceManager.getDefaultSharedPreferences(this); - SharedPreferences.Editor editor = sharedPreferences.edit(); - editor.clear(); - editor.commit(); - } - - @Override - public void onClick(View v) { - switch (v.getId()) { - case R.id.v_img_seg: { - Intent intent = new Intent(MainActivity.this, ImgSegActivity.class); - startActivity(intent); - } break; - } - } - - @Override - protected void onDestroy() { - super.onDestroy(); - System.exit(0); - } -} diff --git a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/Predictor.java b/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/Predictor.java deleted file mode 100644 index 27bd971017eba6bb52901a7e2aa1e0a8e3cf5ef0..0000000000000000000000000000000000000000 --- a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/Predictor.java +++ /dev/null @@ -1,143 +0,0 @@ -package com.baidu.paddle.lite.demo; - -import android.content.Context; -import android.util.Log; -import com.baidu.paddle.lite.*; - -import java.util.ArrayList; -import java.util.Date; - -public class Predictor { - private static final String TAG = Predictor.class.getSimpleName(); - - public boolean isLoaded = false; - public int warmupIterNum = 0; - public int inferIterNum = 1; - protected Context appCtx = null; - public int cpuThreadNum = 1; - public String cpuPowerMode = "LITE_POWER_HIGH"; - public String modelPath = ""; - public String modelName = ""; - protected PaddlePredictor paddlePredictor = null; - protected float inferenceTime = 0; - - public Predictor() { - } - - public boolean init(Context appCtx, String modelPath, int cpuThreadNum, String cpuPowerMode) { - this.appCtx = appCtx; - isLoaded = loadModel(modelPath, cpuThreadNum, cpuPowerMode); - return isLoaded; - } - - protected boolean loadModel(String modelPath, int cpuThreadNum, String cpuPowerMode) { - // release model if exists - releaseModel(); - - // load model - if (modelPath.isEmpty()) { - return false; - } - String realPath = modelPath; - if (!modelPath.substring(0, 1).equals("/")) { - // read model files from custom file_paths if the first character of mode file_paths is '/' - // otherwise copy model to cache from assets - realPath = appCtx.getCacheDir() + "/" + modelPath; - Utils.copyDirectoryFromAssets(appCtx, modelPath, realPath); - } - if (realPath.isEmpty()) { - return false; - } - MobileConfig config = new MobileConfig(); - config.setModelDir(realPath); - config.setThreads(cpuThreadNum); - if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_HIGH")) { - config.setPowerMode(PowerMode.LITE_POWER_HIGH); - } else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_LOW")) { - config.setPowerMode(PowerMode.LITE_POWER_LOW); - } else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_FULL")) { - config.setPowerMode(PowerMode.LITE_POWER_FULL); - } else if 
(cpuPowerMode.equalsIgnoreCase("LITE_POWER_NO_BIND")) { - config.setPowerMode(PowerMode.LITE_POWER_NO_BIND); - } else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_RAND_HIGH")) { - config.setPowerMode(PowerMode.LITE_POWER_RAND_HIGH); - } else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_RAND_LOW")) { - config.setPowerMode(PowerMode.LITE_POWER_RAND_LOW); - } else { - Log.e(TAG, "unknown cpu power mode!"); - return false; - } - paddlePredictor = PaddlePredictor.createPaddlePredictor(config); - - this.cpuThreadNum = cpuThreadNum; - this.cpuPowerMode = cpuPowerMode; - this.modelPath = realPath; - this.modelName = realPath.substring(realPath.lastIndexOf("/") + 1); - return true; - } - - public void releaseModel() { - paddlePredictor = null; - isLoaded = false; - cpuThreadNum = 1; - cpuPowerMode = "LITE_POWER_HIGH"; - modelPath = ""; - modelName = ""; - } - - public Tensor getInput(int idx) { - if (!isLoaded()) { - return null; - } - return paddlePredictor.getInput(idx); - } - - public Tensor getOutput(int idx) { - if (!isLoaded()) { - return null; - } - return paddlePredictor.getOutput(idx); - } - - public boolean runModel() { - if (!isLoaded()) { - return false; - } - // warm up - for (int i = 0; i < warmupIterNum; i++){ - paddlePredictor.run(); - } - // inference - Date start = new Date(); - for (int i = 0; i < inferIterNum; i++) { - paddlePredictor.run(); - } - Date end = new Date(); - inferenceTime = (end.getTime() - start.getTime()) / (float) inferIterNum; - return true; - } - - public boolean isLoaded() { - return paddlePredictor != null && isLoaded; - } - - public String modelPath() { - return modelPath; - } - - public String modelName() { - return modelName; - } - - public int cpuThreadNum() { - return cpuThreadNum; - } - - public String cpuPowerMode() { - return cpuPowerMode; - } - - public float inferenceTime() { - return inferenceTime; - } -} diff --git a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/ImgSegActivity.java b/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/ImgSegActivity.java deleted file mode 100644 index d18895aedb892405783c030167cb3e9d1ed2d304..0000000000000000000000000000000000000000 --- a/deploy/lite/humanseg-android-demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/ImgSegActivity.java +++ /dev/null @@ -1,210 +0,0 @@ -package com.baidu.paddle.lite.demo.segmentation; - -import android.content.Intent; -import android.content.SharedPreferences; -import android.graphics.Bitmap; -import android.graphics.BitmapFactory; -import android.os.Bundle; -import android.preference.PreferenceManager; -import android.text.method.ScrollingMovementMethod; -import android.util.Log; -import android.view.Menu; -import android.widget.ImageView; -import android.widget.TextView; -import android.widget.Toast; - -import com.baidu.paddle.lite.demo.CommonActivity; -import com.baidu.paddle.lite.demo.R; -import com.baidu.paddle.lite.demo.Utils; -import com.baidu.paddle.lite.demo.segmentation.config.Config; -import com.baidu.paddle.lite.demo.segmentation.preprocess.Preprocess; -import com.baidu.paddle.lite.demo.segmentation.visual.Visualize; - -import java.io.File; -import java.io.IOException; -import java.io.InputStream; - -public class ImgSegActivity extends CommonActivity { - private static final String TAG = ImgSegActivity.class.getSimpleName(); - - protected TextView tvInputSetting; - protected ImageView ivInputImage; - protected TextView tvOutputResult; - protected TextView 
tvInferenceTime; - - // model config - Config config = new Config(); - - protected ImgSegPredictor predictor = new ImgSegPredictor(); - - Preprocess preprocess = new Preprocess(); - - Visualize visualize = new Visualize(); - - @Override - protected void onCreate(Bundle savedInstanceState) { - - super.onCreate(savedInstanceState); - setContentView(R.layout.activity_img_seg); - tvInputSetting = findViewById(R.id.tv_input_setting); - ivInputImage = findViewById(R.id.iv_input_image); - tvInferenceTime = findViewById(R.id.tv_inference_time); - tvOutputResult = findViewById(R.id.tv_output_result); - tvInputSetting.setMovementMethod(ScrollingMovementMethod.getInstance()); - tvOutputResult.setMovementMethod(ScrollingMovementMethod.getInstance()); - } - - @Override - public boolean onLoadModel() { - return super.onLoadModel() && predictor.init(ImgSegActivity.this, config); - } - - @Override - public boolean onRunModel() { - return super.onRunModel() && predictor.isLoaded() && predictor.runModel(preprocess,visualize); - } - - @Override - public void onLoadModelSuccessed() { - super.onLoadModelSuccessed(); - // load test image from file_paths and run model - try { - if (config.imagePath.isEmpty()) { - return; - } - Bitmap image = null; - // read test image file from custom file_paths if the first character of mode file_paths is '/', otherwise read test - // image file from assets - if (!config.imagePath.substring(0, 1).equals("/")) { - InputStream imageStream = getAssets().open(config.imagePath); - image = BitmapFactory.decodeStream(imageStream); - } else { - if (!new File(config.imagePath).exists()) { - return; - } - image = BitmapFactory.decodeFile(config.imagePath); - } - if (image != null && predictor.isLoaded()) { - predictor.setInputImage(image); - runModel(); - } - } catch (IOException e) { - Toast.makeText(ImgSegActivity.this, "Load image failed!", Toast.LENGTH_SHORT).show(); - e.printStackTrace(); - } - } - - @Override - public void onLoadModelFailed() { - super.onLoadModelFailed(); - } - - @Override - public void onRunModelSuccessed() { - super.onRunModelSuccessed(); - // obtain results and update UI - tvInferenceTime.setText("Inference time: " + predictor.inferenceTime() + " ms"); - Bitmap outputImage = predictor.outputImage(); - if (outputImage != null) { - ivInputImage.setImageBitmap(outputImage); - } - tvOutputResult.setText(predictor.outputResult()); - tvOutputResult.scrollTo(0, 0); - } - - @Override - public void onRunModelFailed() { - super.onRunModelFailed(); - } - - @Override - public void onImageChanged(Bitmap image) { - super.onImageChanged(image); - // rerun model if users pick test image from gallery or camera - if (image != null && predictor.isLoaded()) { -// predictor.setConfig(config); - predictor.setInputImage(image); - runModel(); - } - } - - @Override - public void onImageChanged(String path) { - super.onImageChanged(path); - Bitmap image = BitmapFactory.decodeFile(path); - predictor.setInputImage(image); - runModel(); - } - public void onSettingsClicked() { - super.onSettingsClicked(); - startActivity(new Intent(ImgSegActivity.this, ImgSegSettingsActivity.class)); - } - - @Override - public boolean onPrepareOptionsMenu(Menu menu) { - boolean isLoaded = predictor.isLoaded(); - menu.findItem(R.id.open_gallery).setEnabled(isLoaded); - menu.findItem(R.id.take_photo).setEnabled(isLoaded); - return super.onPrepareOptionsMenu(menu); - } - - @Override - protected void onResume() { - Log.i(TAG,"begin onResume"); - super.onResume(); - - SharedPreferences sharedPreferences = 
PreferenceManager.getDefaultSharedPreferences(this); - boolean settingsChanged = false; - String model_path = sharedPreferences.getString(getString(R.string.ISG_MODEL_PATH_KEY), - getString(R.string.ISG_MODEL_PATH_DEFAULT)); - String label_path = sharedPreferences.getString(getString(R.string.ISG_LABEL_PATH_KEY), - getString(R.string.ISG_LABEL_PATH_DEFAULT)); - String image_path = sharedPreferences.getString(getString(R.string.ISG_IMAGE_PATH_KEY), - getString(R.string.ISG_IMAGE_PATH_DEFAULT)); - settingsChanged |= !model_path.equalsIgnoreCase(config.modelPath); - settingsChanged |= !label_path.equalsIgnoreCase(config.labelPath); - settingsChanged |= !image_path.equalsIgnoreCase(config.imagePath); - int cpu_thread_num = Integer.parseInt(sharedPreferences.getString(getString(R.string.ISG_CPU_THREAD_NUM_KEY), - getString(R.string.ISG_CPU_THREAD_NUM_DEFAULT))); - settingsChanged |= cpu_thread_num != config.cpuThreadNum; - String cpu_power_mode = - sharedPreferences.getString(getString(R.string.ISG_CPU_POWER_MODE_KEY), - getString(R.string.ISG_CPU_POWER_MODE_DEFAULT)); - settingsChanged |= !cpu_power_mode.equalsIgnoreCase(config.cpuPowerMode); - String input_color_format = - sharedPreferences.getString(getString(R.string.ISG_INPUT_COLOR_FORMAT_KEY), - getString(R.string.ISG_INPUT_COLOR_FORMAT_DEFAULT)); - settingsChanged |= !input_color_format.equalsIgnoreCase(config.inputColorFormat); - long[] input_shape = - Utils.parseLongsFromString(sharedPreferences.getString(getString(R.string.ISG_INPUT_SHAPE_KEY), - getString(R.string.ISG_INPUT_SHAPE_DEFAULT)), ","); - - settingsChanged |= input_shape.length != config.inputShape.length; - - if (!settingsChanged) { - for (int i = 0; i < input_shape.length; i++) { - settingsChanged |= input_shape[i] != config.inputShape[i]; - } - } - - if (settingsChanged) { - config.init(model_path,label_path,image_path,cpu_thread_num,cpu_power_mode, - input_color_format,input_shape); - preprocess.init(config); - // update UI - tvInputSetting.setText("Model: " + config.modelPath.substring(config.modelPath.lastIndexOf("/") + 1) + "\n" + "CPU" + - " Thread Num: " + Integer.toString(config.cpuThreadNum) + "\n" + "CPU Power Mode: " + config.cpuPowerMode); - tvInputSetting.scrollTo(0, 0); - // reload model if configure has been changed - loadModel(); - } - } - - @Override - protected void onDestroy() { - if (predictor != null) { - predictor.releaseModel(); - } - super.onDestroy(); - } -} diff --git a/deploy/lite/humanseg-android-demo/app/src/main/jniLibs/arm64-v8a/libhiai.so b/deploy/lite/humanseg-android-demo/app/src/main/jniLibs/arm64-v8a/libhiai.so deleted file mode 100644 index 8b6c40b403ecaa9ace3dbc44eb328c1ad928775b..0000000000000000000000000000000000000000 Binary files a/deploy/lite/humanseg-android-demo/app/src/main/jniLibs/arm64-v8a/libhiai.so and /dev/null differ diff --git a/deploy/lite/humanseg-android-demo/app/src/main/jniLibs/arm64-v8a/libpaddle_lite_jni.so b/deploy/lite/humanseg-android-demo/app/src/main/jniLibs/arm64-v8a/libpaddle_lite_jni.so deleted file mode 100644 index b8d79c61f2981f6c7581ad2dd5aa4547ca11aad6..0000000000000000000000000000000000000000 Binary files a/deploy/lite/humanseg-android-demo/app/src/main/jniLibs/arm64-v8a/libpaddle_lite_jni.so and /dev/null differ diff --git a/deploy/lite/humanseg-android-demo/app/src/main/jniLibs/armeabi-v7a/libhiai.so b/deploy/lite/humanseg-android-demo/app/src/main/jniLibs/armeabi-v7a/libhiai.so deleted file mode 100644 index f0ba095c525217f288d9db98dc853882bf7ba6ed..0000000000000000000000000000000000000000 Binary 
files a/deploy/lite/humanseg-android-demo/app/src/main/jniLibs/armeabi-v7a/libhiai.so and /dev/null differ diff --git a/deploy/lite/humanseg-android-demo/app/src/main/jniLibs/armeabi-v7a/libpaddle_lite_jni.so b/deploy/lite/humanseg-android-demo/app/src/main/jniLibs/armeabi-v7a/libpaddle_lite_jni.so deleted file mode 100644 index 5947bf2b4dd44d11585dc53f2a7e9256c3396cf4..0000000000000000000000000000000000000000 Binary files a/deploy/lite/humanseg-android-demo/app/src/main/jniLibs/armeabi-v7a/libpaddle_lite_jni.so and /dev/null differ diff --git a/deploy/lite/humanseg-android-demo/app/src/main/res/drawable/image_segementation.jpg b/deploy/lite/humanseg-android-demo/app/src/main/res/drawable/image_segementation.jpg deleted file mode 100644 index 234044abb6b978124c811c9a632b80e29c002c3f..0000000000000000000000000000000000000000 Binary files a/deploy/lite/humanseg-android-demo/app/src/main/res/drawable/image_segementation.jpg and /dev/null differ diff --git a/deploy/lite/humanseg-android-demo/app/src/main/res/layout/activity_main.xml b/deploy/lite/humanseg-android-demo/app/src/main/res/layout/activity_main.xml deleted file mode 100644 index 84f15a20fde16981d3d05c8389c66cffd35633ef..0000000000000000000000000000000000000000 --- a/deploy/lite/humanseg-android-demo/app/src/main/res/layout/activity_main.xml +++ /dev/null @@ -1,58 +0,0 @@ - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/deploy/lite/humanseg-android-demo/app/src/main/res/values/strings.xml b/deploy/lite/humanseg-android-demo/app/src/main/res/values/strings.xml deleted file mode 100644 index 88b26f593cdc0d619a163221d59f96b931392f21..0000000000000000000000000000000000000000 --- a/deploy/lite/humanseg-android-demo/app/src/main/res/values/strings.xml +++ /dev/null @@ -1,20 +0,0 @@ - -Segmentation-demo - -ISG_CHOOSE_PRE_INSTALLED_MODEL_KEY -ISG_ENABLE_CUSTOM_SETTINGS_KEY -ISG_MODEL_PATH_KEY -ISG_LABEL_PATH_KEY -ISG_IMAGE_PATH_KEY -ISG_CPU_THREAD_NUM_KEY -ISG_CPU_POWER_MODE_KEY -ISG_INPUT_COLOR_FORMAT_KEY -ISG_INPUT_SHAPE_KEY -image_segmentation/models/deeplab_mobilenet_for_cpu -image_segmentation/labels/label_list -image_segmentation/images/human.jpg -1 -LITE_POWER_HIGH -RGB -1,3,513,513 - diff --git a/deploy/python/docs/PaddleSeg_Infer_Benchmark.md b/deploy/python/docs/PaddleSeg_Infer_Benchmark.md index bfe0f4eca91a50c7112cb8678f6b008d5bb26a21..196e3c3055fa3be6d71b18716216713a6127d301 100644 --- a/deploy/python/docs/PaddleSeg_Infer_Benchmark.md +++ b/deploy/python/docs/PaddleSeg_Infer_Benchmark.md @@ -1,4 +1,4 @@ -# PaddleSeg 分割模型预测性能测试 +# PaddleSeg 分割模型预测Benchmark ## 测试软件环境 - CUDA 9.0 @@ -9,15 +9,6 @@ - GPU: Tesla V100 - CPU:Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz -## 测试方法 -- 输入采用 1000张RGB图片,batch_size 统一为 1。 -- 重复跑多轮,去掉第一轮预热时间,计后续几轮的平均时间:包括数据拷贝到GPU,预测引擎计算时间,预测结果拷贝回CPU 时间。 -- 采用Fluid C++预测引擎 -- 测试时开启了 FLAGS_cudnn_exhaustive_search=True,使用exhaustive方式搜索卷积计算算法。 -- 对于每个模型,同事测试了`OP`优化模型和原生模型的推理速度, 并分别就是否开启`FP16`和`FP32`的进行了测试 - - - ## 推理速度测试数据 **说明**: `OP优化模型`指的是`PaddleSeg 0.3.0`版以后导出的新版模型,把图像的预处理和后处理部分放入 GPU 中进行加速,提高性能。每个模型包含了三种`eval_crop_size`:`192x192`/`512x512`/`768x768`。 @@ -501,7 +492,7 @@ -### 3. 不同的EVAL_CROP_SIZE对图片想能的影响 +### 3. 
不同的EVAL_CROP_SIZE对图片性能的影响 在 `deeplabv3p_xception`上的数据对比图: ![xception](https://paddleseg.bj.bcebos.com/inference/benchmark/xception.png) diff --git a/deploy/python/docs/compile_paddle_with_tensorrt.md b/deploy/python/docs/compile_paddle_with_tensorrt.md index e2afad0519867a98363776ebd2879d06adf08bc4..cca07daf7b50a7fd926024ed912db94a869a64d4 100644 --- a/deploy/python/docs/compile_paddle_with_tensorrt.md +++ b/deploy/python/docs/compile_paddle_with_tensorrt.md @@ -11,11 +11,11 @@ ## 2. 安装 TensorRT 5.1 -请参考`Nvidia`的[官方安装教程](https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html) +请参考Nvidia的[官方安装教程](https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html) ## 3. 编译 PaddlePaddle - 这里假设`Python`版本为`3.7`以及`cuda` `cudnn` `tensorRT`安装路径如下: + 这里假设`Python`版本为`3.7`以及`CUDA` `cuDNN` `TensorRT`安装路径如下: ```bash # 假设 cuda 安装路径 /usr/local/cuda-9.0/ diff --git a/deploy/python/infer.py b/deploy/python/infer.py index e9c87d5b9be6e02969eddf2b707e83e0485fd87a..19a21a3f33917631c7a20858a2c7067b5e2e02be 100644 --- a/deploy/python/infer.py +++ b/deploy/python/infer.py @@ -31,6 +31,7 @@ gflags.DEFINE_string("conf", default="", help="Configuration File Path") gflags.DEFINE_string("input_dir", default="", help="Directory of Input Images") gflags.DEFINE_boolean("use_pr", default=False, help="Use optimized model") gflags.DEFINE_string("trt_mode", default="", help="Use optimized model") +gflags.DEFINE_string("ext", default=".jpeg|.jpg", help="Input Image File Extensions") gflags.FLAGS = gflags.FLAGS @@ -146,9 +147,9 @@ class ImageReader: # process multiple images with multithreading def process(self, imgs, use_pr=False): imgs_data = [] - with ThreadPoolExecutor(max_workers=self.config.batch_size) as exec: + with ThreadPoolExecutor(max_workers=self.config.batch_size) as exe_pool: tasks = [ - exec.submit(self.process_worker, imgs, idx, use_pr) + exe_pool.submit(self.process_worker, imgs, idx, use_pr) for idx in range(len(imgs)) ] for task in as_completed(tasks): @@ -315,4 +316,4 @@ if __name__ == "__main__": "Invalid trt_mode [%s], only support[int8, fp16, fp32]" % trt_mode) exit(-1) # run inference - run(gflags.FLAGS.conf, gflags.FLAGS.input_dir) + run(gflags.FLAGS.conf, gflags.FLAGS.input_dir, gflags.FLAGS.ext) diff --git a/docs/annotation/jingling2seg.md b/docs/annotation/jingling2seg.md index 2637df5146e6bd5027600a26a42d5c2a6d3ece80..de36b5395bc599f83875c6732c1372bd86862c3c 100644 --- a/docs/annotation/jingling2seg.md +++ b/docs/annotation/jingling2seg.md @@ -44,7 +44,7 @@ **注意:导出的标注文件位于`保存位置`下的`outputs`目录。** -精灵标注产出的真值文件可参考我们给出的文件夹`docs/annotation/jingling_demo`。 +精灵标注产出的真值文件可参考我们给出的文件夹[docs/annotation/jingling_demo](jingling_demo)
@@ -54,6 +54,7 @@ **注意:** 对于中间有空洞的目标(例如游泳圈),暂不支持对空洞部分的标注。如有需要,可借助[labelme](./labelme2seg.md)。 ## 3 数据格式转换 +最后用我们提供的数据转换脚本将上述标注工具产出的数据格式转换为模型训练时所需的数据格式。 * 经过数据格式转换后的数据集目录结构如下: @@ -84,13 +85,18 @@ pip install pillow * 运行以下代码,将标注后的数据转换成满足以上格式的数据集: ``` - python pdseg/tools/jingling2seg.py +python pdseg/tools/jingling2seg.py ``` -其中,``为精灵标注产出的json文件所在文件夹的目录,一般为精灵工具使用(3)中`保存位置`下的`outputs`目录。 +其中,``为精灵标注产出的json文件所在文件夹的目录,一般为精灵工具使用(3)中`保存位置`下的`outputs`目录。 +我们已内置了一个标注的示例,可运行以下代码进行体验: -转换得到的数据集可参考我们给出的文件夹`docs/annotation/jingling_demo`。其中,文件`class_names.txt`是数据集中所有标注类别的名称,包含背景类;文件夹`annotations`保存的是各图片的像素级别的真值信息,背景类`_background_`对应为0,其它目标类别从1开始递增,至多为255。 +``` +python pdseg/tools/jingling2seg.py docs/annotation/jingling_demo/outputs/ +``` + +转换得到的数据集可参考我们给出的文件夹[docs/annotation/jingling_demo](jingling_demo)。其中,文件`class_names.txt`是数据集中所有标注类别的名称,包含背景类;文件夹`annotations`保存的是各图片的像素级别的真值信息,背景类`_background_`对应为0,其它目标类别从1开始递增,至多为255。
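The converted format described above (`_background_` = 0, target classes numbered from 1 and listed in `class_names.txt`, masks stored as single-channel PNGs under `annotations`) can be summarized with a small sketch. This is only an illustration of the conversion idea using the field layout of the bundled `jingling.json` sample, not the actual `pdseg/tools/jingling2seg.py`; the function name and the PIL-based approach are assumptions.

```python
# Illustrative sketch (not pdseg/tools/jingling2seg.py): rasterize the polygons
# of one 精灵标注 JSON file into a label PNG where _background_ is 0 and the
# target classes are numbered from 1.
import json
from PIL import Image, ImageDraw

def polygons_to_mask(json_path, target_classes, out_png):
    """target_classes lists the object class names only, without _background_."""
    with open(json_path) as f:
        anno = json.load(f)
    size = anno["size"]
    mask = Image.new("L", (size["width"], size["height"]), 0)  # 0 = _background_
    draw = ImageDraw.Draw(mask)
    for obj in anno["outputs"]["object"]:
        label = target_classes.index(obj["name"]) + 1  # target classes start at 1
        poly = obj["polygon"]                          # keys: x1, y1, x2, y2, ...
        points = [(poly["x%d" % i], poly["y%d" % i])
                  for i in range(1, len(poly) // 2 + 1)]
        draw.polygon(points, fill=label)
    mask.save(out_png)

# Example with the demo annotation shipped in this repository:
# polygons_to_mask("docs/annotation/jingling_demo/outputs/jingling.json",
#                  ["person"], "jingling.png")
```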
diff --git a/docs/annotation/jingling_demo/aa63d7e6db0d03137883772c246c6761fc201059.jpg b/docs/annotation/jingling_demo/jingling.jpg similarity index 100% rename from docs/annotation/jingling_demo/aa63d7e6db0d03137883772c246c6761fc201059.jpg rename to docs/annotation/jingling_demo/jingling.jpg diff --git a/docs/annotation/jingling_demo/outputs/aa63d7e6db0d03137883772c246c6761fc201059.json b/docs/annotation/jingling_demo/outputs/aa63d7e6db0d03137883772c246c6761fc201059.json deleted file mode 100644 index 69d80205de92afc9cffa304b32ff0e3e95502687..0000000000000000000000000000000000000000 --- a/docs/annotation/jingling_demo/outputs/aa63d7e6db0d03137883772c246c6761fc201059.json +++ /dev/null @@ -1 +0,0 @@ -{"path":"/Users/dataset/aa63d7e6db0d03137883772c246c6761fc201059.jpg","outputs":{"object":[{"name":"person","polygon":{"x1":321.99,"y1":63,"x2":293,"y2":98.00999999999999,"x3":245.01,"y3":141.01,"x4":221,"y4":194,"x5":231.99,"y5":237,"x6":231.99,"y6":348.01,"x7":191,"y7":429,"x8":197,"y8":465.01,"x9":193,"y9":586,"x10":151,"y10":618.01,"x11":124,"y11":622,"x12":100,"y12":703,"x13":121.99,"y13":744,"x14":141.99,"y14":724,"x15":163,"y15":658.01,"x16":238.01,"y16":646,"x17":259,"y17":627,"x18":313,"y18":618.01,"x19":416,"y19":639,"x20":464,"y20":606,"x21":454,"y21":555.01,"x22":404,"y22":508.01,"x23":430,"y23":489,"x24":407,"y24":464,"x25":397,"y25":365.01,"x26":407,"y26":290,"x27":361.99,"y27":252,"x28":376,"y28":215.01,"x29":391.99,"y29":189,"x30":388.01,"y30":135.01,"x31":340,"y31":120,"x32":313,"y32":161.01,"x33":307,"y33":188.01,"x34":311,"y34":207,"x35":277,"y35":186,"x36":293,"y36":137,"x37":308.01,"y37":117,"x38":361,"y38":93}}]},"time_labeled":1568101256852,"labeled":true,"size":{"width":706,"height":1000,"depth":3}} \ No newline at end of file diff --git a/docs/annotation/jingling_demo/outputs/annotations/aa63d7e6db0d03137883772c246c6761fc201059.png b/docs/annotation/jingling_demo/outputs/annotations/aa63d7e6db0d03137883772c246c6761fc201059.png deleted file mode 100644 index 8dfbff7b73bcfff7ef79b904667241731641d4a4..0000000000000000000000000000000000000000 Binary files a/docs/annotation/jingling_demo/outputs/annotations/aa63d7e6db0d03137883772c246c6761fc201059.png and /dev/null differ diff --git a/docs/annotation/jingling_demo/outputs/annotations/jingling.png b/docs/annotation/jingling_demo/outputs/annotations/jingling.png new file mode 100644 index 0000000000000000000000000000000000000000..526acefdcdd8317c5778a5d47495d7049a46269d Binary files /dev/null and b/docs/annotation/jingling_demo/outputs/annotations/jingling.png differ diff --git a/docs/annotation/jingling_demo/outputs/jingling.json b/docs/annotation/jingling_demo/outputs/jingling.json new file mode 100644 index 0000000000000000000000000000000000000000..0021522487a26f66dadc979a96ea631c0314adab --- /dev/null +++ b/docs/annotation/jingling_demo/outputs/jingling.json @@ -0,0 +1 @@ 
+{"path":"/Users/dataset/jingling.jpg","outputs":{"object":[{"name":"person","polygon":{"x1":321.99,"y1":63,"x2":293,"y2":98.00999999999999,"x3":245.01,"y3":141.01,"x4":221,"y4":194,"x5":231.99,"y5":237,"x6":231.99,"y6":348.01,"x7":191,"y7":429,"x8":197,"y8":465.01,"x9":193,"y9":586,"x10":151,"y10":618.01,"x11":124,"y11":622,"x12":100,"y12":703,"x13":121.99,"y13":744,"x14":141.99,"y14":724,"x15":163,"y15":658.01,"x16":238.01,"y16":646,"x17":259,"y17":627,"x18":313,"y18":618.01,"x19":416,"y19":639,"x20":464,"y20":606,"x21":454,"y21":555.01,"x22":404,"y22":508.01,"x23":430,"y23":489,"x24":407,"y24":464,"x25":397,"y25":365.01,"x26":407,"y26":290,"x27":361.99,"y27":252,"x28":376,"y28":215.01,"x29":391.99,"y29":189,"x30":388.01,"y30":135.01,"x31":340,"y31":120,"x32":313,"y32":161.01,"x33":307,"y33":188.01,"x34":311,"y34":207,"x35":277,"y35":186,"x36":293,"y36":137,"x37":308.01,"y37":117,"x38":361,"y38":93}}]},"time_labeled":1568101256852,"labeled":true,"size":{"width":706,"height":1000,"depth":3}} \ No newline at end of file diff --git a/docs/annotation/labelme2seg.md b/docs/annotation/labelme2seg.md index a270591d06131ec48f4ebb0d25ec206031956a24..235e3c41b6a79ece0b7512955aba04fe06faabe3 100644 --- a/docs/annotation/labelme2seg.md +++ b/docs/annotation/labelme2seg.md @@ -47,7 +47,7 @@ git clone https://github.com/wkentaro/labelme ​ (3) 图片中所有目标的标注都完成后,点击`Save`保存json文件,**请将json文件和图片放在同一个文件夹里**,点击`Next Image`标注下一张图片。 -LableMe产出的真值文件可参考我们给出的文件夹`docs/annotation/labelme_demo`。 +LableMe产出的真值文件可参考我们给出的文件夹[docs/annotation/labelme_demo](labelme_demo)。
@@ -64,6 +64,7 @@ LableMe产出的真值文件可参考我们给出的文件夹`docs/annotation/la
## 3 数据格式转换 +最后用我们提供的数据转换脚本将上述标注工具产出的数据格式转换为模型训练时所需的数据格式。 * 经过数据格式转换后的数据集目录结构如下: @@ -94,12 +95,18 @@ pip install pillow * 运行以下代码,将标注后的数据转换成满足以上格式的数据集: ``` - python pdseg/tools/labelme2seg.py + python pdseg/tools/labelme2seg.py ``` -其中,``为图片以及LabelMe产出的json文件所在文件夹的目录,同时也是转换后的标注集所在文件夹的目录。 +其中,``为图片以及LabelMe产出的json文件所在文件夹的目录,同时也是转换后的标注集所在文件夹的目录。 -转换得到的数据集可参考我们给出的文件夹`docs/annotation/labelme_demo`。其中,文件`class_names.txt`是数据集中所有标注类别的名称,包含背景类;文件夹`annotations`保存的是各图片的像素级别的真值信息,背景类`_background_`对应为0,其它目标类别从1开始递增,至多为255。 +我们已内置了一个标注的示例,可运行以下代码进行体验: + +``` +python pdseg/tools/labelme2seg.py docs/annotation/labelme_demo/ +``` + +转换得到的数据集可参考我们给出的文件夹[docs/annotation/labelme_demo](labelme_demo)。其中,文件`class_names.txt`是数据集中所有标注类别的名称,包含背景类;文件夹`annotations`保存的是各图片的像素级别的真值信息,背景类`_background_`对应为0,其它目标类别从1开始递增,至多为255。
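After either conversion script has run, a quick hand-rolled sanity check can confirm that the produced annotation PNGs follow the encoding described above (pixel values index into `class_names.txt`, with `_background_` at 0). The helper below is not part of PaddleSeg and assumes `class_names.txt` stores one class name per line with `_background_` on the first line.

```python
# Illustrative sanity check (not shipped with PaddleSeg): report which label
# values occur in a converted annotation PNG and what they map to.
import numpy as np
from PIL import Image

def report_annotation(png_path, class_names_txt):
    with open(class_names_txt) as f:
        names = [line.strip() for line in f if line.strip()]
    values = np.unique(np.array(Image.open(png_path)))
    for v in values:
        if v < len(names):
            print(v, names[v])  # 0 is expected to be _background_
        else:
            # value outside class_names.txt, e.g. the ignore label (255 by default)
            print(v, "(ignored)")

# report_annotation("docs/annotation/labelme_demo/annotations/2011_000025.png",
#                   "docs/annotation/labelme_demo/class_names.txt")
```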
diff --git a/docs/annotation/labelme_demo/annotations/2011_000025.png b/docs/annotation/labelme_demo/annotations/2011_000025.png index dcf7c96517d4870f6e83293cef62e3285e5b37e3..0b5a56dda153c92f4411ac7d71665aaf93111e10 100644 Binary files a/docs/annotation/labelme_demo/annotations/2011_000025.png and b/docs/annotation/labelme_demo/annotations/2011_000025.png differ diff --git a/docs/benchmark.md b/docs/benchmark.md deleted file mode 100644 index c1e6de2fcee971437c29e370e9410f9d00c9145f..0000000000000000000000000000000000000000 --- a/docs/benchmark.md +++ /dev/null @@ -1,17 +0,0 @@ -# PaddleSeg 性能Benchmark - -## 训练性能 - -### 多GPU加速比 - -### 显存开销对比 - -## 预测性能对比 - -### Windows - -### Linux - -#### Naive - -#### Analysis diff --git a/docs/check.md b/docs/check.md index fac9520f11ef46d3628ecab3fcc4127a468a3ca5..20dc87f7e10d856f050a80554adb9c93d0ff05e3 100644 --- a/docs/check.md +++ b/docs/check.md @@ -55,7 +55,7 @@ Doing label pixel statistics: - 当`AUG.AUG_METHOD`为stepscaling时,`EVAL_CROP_SIZE`的宽高应不小于原图中最大的宽高。 -- 当`AUG.AUG_METHOD`为rangscaling时,`EVAL_CROP_SIZE`的宽高应不小于缩放后图像中最大的宽高。 +- 当`AUG.AUG_METHOD`为rangescaling时,`EVAL_CROP_SIZE`的宽高应不小于缩放后图像中最大的宽高。 ### 11 数据增强参数`AUG.INF_RESIZE_VALUE`校验 验证`AUG.INF_RESIZE_VALUE`是否在[`AUG.MIN_RESIZE_VALUE`~`AUG.MAX_RESIZE_VALUE`]范围内。若在范围内,则通过校验。 diff --git a/docs/config.md b/docs/config.md index 387af4d4e18dc0e5b8cee7baa96ecf5b713f03ab..67e1353a7d88994b584d5bd3da4dd36d81430a59 100644 --- a/docs/config.md +++ b/docs/config.md @@ -1,18 +1,281 @@ -# PaddleSeg 分割库配置说明 +# 脚本使用和配置说明 -PaddleSeg提供了提供了统一的配置用于 训练/评估/可视化/导出模型 +PaddleSeg提供了 **训练**/**评估**/**可视化**/**模型导出** 等4个功能的使用脚本。所有脚本都支持通过不同的Flags来开启特定功能,也支持通过Options来修改默认的训练配置。它们的使用方式非常接近,如下: + +```shell +# 训练 +python pdseg/train.py ${FLAGS} ${OPTIONS} +# 评估 +python pdseg/eval.py ${FLAGS} ${OPTIONS} +# 可视化 +python pdseg/vis.py ${FLAGS} ${OPTIONS} +# 模型导出 +python pdseg/export_model.py ${FLAGS} ${OPTIONS} +``` + +**Note:** FLAGS必须位于OPTIONS之前,否会将会遇到报错,例如如下的例子: + +```shell +# FLAGS "--cfg configs/unet_optic.yaml" 必须在 OPTIONS "BATCH_SIZE 1" 之前 +python pdseg/train.py BATCH_SIZE 1 --cfg configs/unet_optic.yaml +``` + +## 命令行FLAGS + +|FLAG|用途|支持脚本|默认值|备注| +|-|-|-|-|-| +|--cfg|配置文件路径|ALL|None|| +|--use_gpu|是否使用GPU进行训练|train/eval/vis|False|| +|--use_mpio|是否使用多进程进行IO处理|train/eval|False|打开该开关会占用一定量的CPU内存,但是可以提高训练速度。
**NOTE:** windows平台下不支持该功能, 建议使用自定义数据初次训练时不打开,打开会导致数据读取异常不可见。 | +|--use_tb|是否使用TensorBoard记录训练数据|train|False|| +|--log_steps|训练日志的打印周期(单位为step)|train|10|| +|--debug|是否打印debug信息|train|False|IOU等指标涉及到混淆矩阵的计算,会降低训练速度| +|--tb_log_dir                      |TensorBoard的日志路径|train|None|| +|--do_eval|是否在保存模型时进行效果评估                                                        |train|False|| +|--vis_dir|保存可视化图片的路径|vis|"visual"|| + +## OPTIONS + +PaddleSeg提供了统一的配置用于 训练/评估/可视化/导出模型。一共存在三套配置方案: +* 命令行窗口传递的参数。 +* configs目录下的yaml文件。 +* 默认参数,位于pdseg/utils/config.py。 + +三者的优先级顺序为 命令行窗口 > yaml > 默认配置。 配置包含以下Group: -* [通用](./configs/basic_group.md) -* [DATASET](./configs/dataset_group.md) -* [DATALOADER](./configs/dataloader_group.md) -* [FREEZE](./configs/freeze_group.md) -* [MODEL](./configs/model_group.md) -* [SOLVER](./configs/solver_group.md) -* [TRAIN](./configs/train_group.md) -* [TEST](./configs/test_group.md) - -`Note`: - - 代码详见pdseg/utils/config.py +|OPTIONS|用途|支持脚本| +|-|-|-| +|[BASIC](./configs/basic_group.md)|通用配置|ALL| +|[DATASET](./configs/dataset_group.md)|数据集相关|train/eval/vis| +|[MODEL](./configs/model_group.md)|模型相关|ALL| +|[TRAIN](./configs/train_group.md)|训练相关|train| +|[SOLVER](./configs/solver_group.md)|训练优化相关|train| +|[TEST](./configs/test_group.md)|测试模型相关|eval/vis/export_model| +|[AUG](./data_aug.md)|数据增强|ALL| +[FREEZE](./configs/freeze_group.md)|模型导出相关|export_model| +|[DATALOADER](./configs/dataloader_group.md)|数据加载相关|ALL| + +在进行自定义的分割任务之前,您需要准备一份yaml文件,建议参照[configs目录下的示例yaml](../configs)进行修改。 + +以下是PaddleSeg的默认配置,供查询使用。 + +```yaml +########################## 基本配置 ########################################### +# 批处理大小 +BATCH_SIZE: 1 +# 验证时图像裁剪尺寸(宽,高) +EVAL_CROP_SIZE: tuple() +# 训练时图像裁剪尺寸(宽,高) +TRAIN_CROP_SIZE: tuple() + +########################## 数据集配置 ######################################### +DATASET: + # 数据主目录目录 + DATA_DIR: './dataset/cityscapes/' + # 训练集列表 + TRAIN_FILE_LIST: './dataset/cityscapes/train.list' + # 验证集列表 + VAL_FILE_LIST: './dataset/cityscapes/val.list' + # 测试数据列表 + TEST_FILE_LIST: './dataset/cityscapes/test.list' + # Tensorboard 可视化的数据集 + VIS_FILE_LIST: None + # 类别数(需包括背景类) + NUM_CLASSES: 19 + # 输入图像类型, 支持三通道'rgb',四通道'rgba',单通道灰度图'gray' + IMAGE_TYPE: 'rgb' + # 输入图片的通道数 + DATA_DIM: 3 + # 数据列表分割符, 默认为空格 + SEPARATOR: ' ' + # 忽略的像素标签值, 默认为255,一般无需改动 + IGNORE_INDEX: 255 + +########################## 模型通用配置 ####################################### +MODEL: + # 模型名称, 已支持deeplabv3p, unet, icnet,pspnet,hrnet + MODEL_NAME: '' + # BatchNorm类型: bn、gn(group_norm) + DEFAULT_NORM_TYPE: 'bn' + # 多路损失加权值 + MULTI_LOSS_WEIGHT: [1.0] + # DEFAULT_NORM_TYPE为gn时group数 + DEFAULT_GROUP_NUMBER: 32 + # 极小值, 防止分母除0溢出,一般无需改动 + DEFAULT_EPSILON: 1e-5 + # BatchNorm动量, 一般无需改动 + BN_MOMENTUM: 0.99 + # 是否使用FP16训练 + FP16: False + + ########################## DeepLab模型配置 #################################### + DEEPLAB: + # DeepLab backbone 配置, 可选项xception_65, mobilenetv2 + BACKBONE: "xception_65" + # DeepLab output stride + OUTPUT_STRIDE: 16 + # MobileNet v2 backbone scale 设置 + DEPTH_MULTIPLIER: 1.0 + # MobileNet v2 backbone scale 设置 + ENCODER_WITH_ASPP: True + # MobileNet v2 backbone scale 设置 + ENABLE_DECODER: True + # ASPP是否使用可分离卷积 + ASPP_WITH_SEP_CONV: True + # 解码器是否使用可分离卷积 + DECODER_USE_SEP_CONV: True + + ########################## UNET模型配置 ####################################### + UNET: + # 上采样方式, 默认为双线性插值 + UPSAMPLE_MODE: 'bilinear' + + ########################## ICNET模型配置 ###################################### + ICNET: + # RESNET backbone scale 设置 + DEPTH_MULTIPLIER: 0.5 + # RESNET 层数 设置 + LAYERS: 50 + + 
########################## PSPNET模型配置 ###################################### + PSPNET: + # RESNET backbone scale 设置 + DEPTH_MULTIPLIER: 1 + # RESNET backbone 层数 设置 + LAYERS: 50 + + ########################## HRNET模型配置 ###################################### + HRNET: + # HRNET STAGE2 设置 + STAGE2: + NUM_MODULES: 1 + NUM_CHANNELS: [40, 80] + # HRNET STAGE3 设置 + STAGE3: + NUM_MODULES: 4 + NUM_CHANNELS: [40, 80, 160] + # HRNET STAGE4 设置 + STAGE4: + NUM_MODULES: 3 + NUM_CHANNELS: [40, 80, 160, 320] + +########################### 训练配置 ########################################## +TRAIN: + # 模型保存路径 + MODEL_SAVE_DIR: '' + # 预训练模型路径 + PRETRAINED_MODEL_DIR: '' + # 是否resume,继续训练 + RESUME_MODEL_DIR: '' + # 是否使用多卡间同步BatchNorm均值和方差 + SYNC_BATCH_NORM: False + # 模型参数保存的epoch间隔数,可用来继续训练中断的模型 + SNAPSHOT_EPOCH: 10 + +########################### 模型优化相关配置 ################################## +SOLVER: + # 初始学习率 + LR: 0.1 + # 学习率下降方法, 支持poly piecewise cosine 三种 + LR_POLICY: "poly" + # 优化算法, 支持SGD和Adam两种算法 + OPTIMIZER: "sgd" + # 动量参数 + MOMENTUM: 0.9 + # 二阶矩估计的指数衰减率 + MOMENTUM2: 0.999 + # 学习率Poly下降指数 + POWER: 0.9 + # step下降指数 + GAMMA: 0.1 + # step下降间隔 + DECAY_EPOCH: [10, 20] + # 学习率权重衰减,0-1 + WEIGHT_DECAY: 0.00004 + # 训练开始epoch数,默认为1 + BEGIN_EPOCH: 1 + # 训练epoch数,正整数 + NUM_EPOCHS: 30 + # loss的选择,支持softmax_loss, bce_loss, dice_loss + LOSS: ["softmax_loss"] + # 是否开启warmup学习策略 + LR_WARMUP: False + # warmup的迭代次数 + LR_WARMUP_STEPS: 2000 + +########################## 测试配置 ########################################### +TEST: + # 测试模型路径 + TEST_MODEL: '' + +########################### 数据增强配置 ###################################### +AUG: + # 图像resize的方式有三种: + # unpadding(固定尺寸),stepscaling(按比例resize),rangescaling(长边对齐) + AUG_METHOD: 'unpadding' + + # 图像resize的固定尺寸(宽,高),非负 + FIX_RESIZE_SIZE: (500, 500) + + # 图像resize方式为stepscaling,resize最小尺度,非负 + MIN_SCALE_FACTOR: 0.5 + # 图像resize方式为stepscaling,resize最大尺度,不小于MIN_SCALE_FACTOR + MAX_SCALE_FACTOR: 2.0 + # 图像resize方式为stepscaling,resize尺度范围间隔,非负 + SCALE_STEP_SIZE: 0.25 + + # 图像resize方式为rangescaling,训练时长边resize的范围最小值,非负 + MIN_RESIZE_VALUE: 400 + # 图像resize方式为rangescaling,训练时长边resize的范围最大值, + # 不小于MIN_RESIZE_VALUE + MAX_RESIZE_VALUE: 600 + # 图像resize方式为rangescaling, 测试验证可视化模式下长边resize的长度, + # 在MIN_RESIZE_VALUE到MAX_RESIZE_VALUE范围内 + INF_RESIZE_VALUE: 500 + + # 图像镜像左右翻转 + MIRROR: True + # 图像上下翻转开关,True/False + FLIP: False + # 图像启动上下翻转的概率,0-1 + FLIP_RATIO: 0.5 + + RICH_CROP: + # RichCrop数据增广开关,用于提升模型鲁棒性 + ENABLE: False + # 图像旋转最大角度,0-90 + MAX_ROTATION: 15 + # 裁取图像与原始图像面积比,0-1 + MIN_AREA_RATIO: 0.5 + # 裁取图像宽高比范围,非负 + ASPECT_RATIO: 0.33 + # 亮度调节范围,0-1 + BRIGHTNESS_JITTER_RATIO: 0.5 + # 饱和度调节范围,0-1 + SATURATION_JITTER_RATIO: 0.5 + # 对比度调节范围,0-1 + CONTRAST_JITTER_RATIO: 0.5 + # 图像模糊开关,True/False + BLUR: False + # 图像启动模糊百分比,0-1 + BLUR_RATIO: 0.1 + +########################## 预测部署模型配置 ################################### +FREEZE: + # 预测保存的模型名称 + MODEL_FILENAME: '__model__' + # 预测保存的参数名称 + PARAMS_FILENAME: '__params__' + # 预测模型参数保存的路径 + SAVE_DIR: 'freeze_model' + +########################## 数据载入配置 ####################################### +DATALOADER: + # 数据载入时的并发数, 建议值8 + NUM_WORKERS: 8 + # 数据载入时缓存队列大小, 建议值256 + BUF_SIZE: 256 +``` + diff --git a/docs/configs/basic_group.md b/docs/configs/basic_group.md index c66752f38e153084601c89e0aeb0c9385f02885b..dbe22b91da0632ad6b0b435582495b784aa2b276 100644 --- a/docs/configs/basic_group.md +++ b/docs/configs/basic_group.md @@ -2,70 +2,58 @@ BASIC Group存放所有通用配置 -## `MEAN` +## `BATCH_SIZE` -图像预处理减去的均值(格式为 *[R, G, B]* ) +训练、评估、可视化时所用的BATCH大小 ### 默认值 -[0.5, 0.5, 0.5] 
+1(需要根据实际需求填写) -
-
+### 注意事项 -## `STD` +* 当指定了多卡运行时,PaddleSeg会将数据平分到每张卡上运行,因此每张卡单次运行的数量为 BATCH_SIZE // dev_count -图像预处理所除的标准差(格式为 *[R, G, B]* ) +* 多卡运行时,请确保BATCH_SIZE可被dev_count整除 -### 默认值 +* 增大BATCH_SIZE有利于模型训练时的收敛速度,但是会带来显存的开销。请根据实际情况评估后填写合适的值 -[0.5, 0.5, 0.5] +* 目前PaddleSeg提供的很多预训练模型都有BN层,如果BATCH SIZE设置为1,则此时训练可能不稳定导致nan
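结合上面几条注意事项,可以用下面这段示意代码快速估算多卡场景下的单卡batch大小(其中`BATCH_SIZE`、`dev_count`的取值仅为示例,并非PaddleSeg内部实现):

```python
# 示意:多卡训练时,每张卡单次运行的样本数为 BATCH_SIZE // dev_count
BATCH_SIZE = 8      # 配置中的BATCH_SIZE,示例值
dev_count = 4       # 可用GPU卡数,示例值

assert BATCH_SIZE % dev_count == 0, "多卡运行时BATCH_SIZE需能被卡数整除"
per_card_batch = BATCH_SIZE // dev_count
if per_card_batch == 1:
    print("提示:单卡batch为1时,带BN层的模型训练可能不稳定(出现nan)")
print("每张卡单次运行的样本数:", per_card_batch)
```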

-## `EVAL_CROP_SIZE` +## `TRAIN_CROP_SIZE` -评估时所对图片裁剪的大小(格式为 *[宽, 高]* ) +训练时所对图片裁剪的大小(格式为 *[宽, 高]* ) ### 默认值 无(需要用户自己填写) ### 注意事项 -* 裁剪的大小不能小于原图,请将该字段的值填写为评估数据中最长的宽和高 +`TRAIN_CROP_SIZE`可以设置任意大小,具体如何设置根据数据集而定。

-## `TRAIN_CROP_SIZE` +## `EVAL_CROP_SIZE` -训练时所对图片裁剪的大小(格式为 *[宽, 高]* ) +评估时所对图片裁剪的大小(格式为 *[宽, 高]* ) ### 默认值 无(需要用户自己填写) -
-
- -## `BATCH_SIZE` - -训练、评估、可视化时所用的BATCH大小 - -### 默认值 - -1(需要根据实际需求填写) - ### 注意事项 +`EVAL_CROP_SIZE`的设置需要满足以下条件,共有3种情形: +- 当`AUG.AUG_METHOD`为unpadding时,`EVAL_CROP_SIZE`的宽高应不小于`AUG.FIX_RESIZE_SIZE`的宽高。 +- 当`AUG.AUG_METHOD`为stepscaling时,`EVAL_CROP_SIZE`的宽高应不小于原图中最长的宽高。 +- 当`AUG.AUG_METHOD`为rangescaling时,`EVAL_CROP_SIZE`的宽高应不小于缩放后图像中最长的宽高。 -* 当指定了多卡运行时,PaddleSeg会将数据平分到每张卡上运行,因此每张卡单次运行的数量为 BATCH_SIZE // dev_count +
+
-* 多卡运行时,请确保BATCH_SIZE可被dev_count整除 -* 增大BATCH_SIZE有利于模型训练时的收敛速度,但是会带来显存的开销。请根据实际情况评估后填写合适的值 -* 目前PaddleSeg提供的很多预训练模型都有BN层,如果BATCH SIZE设置为1,则此时训练可能不稳定导致nan -
-
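针对上面`EVAL_CROP_SIZE`的三种情形,下面给出一段示意性的检查代码(仅作说明,`aug_method`及各尺寸取值均为示例,并非PaddleSeg源码实现):

```python
# 示意:检查 EVAL_CROP_SIZE(宽, 高)是否满足不同 AUG.AUG_METHOD 下的约束
aug_method = 'unpadding'        # 可选 unpadding / stepscaling / rangescaling,示例值
eval_crop_size = (512, 512)     # EVAL_CROP_SIZE,示例值
fix_resize_size = (500, 500)    # AUG.FIX_RESIZE_SIZE,示例值
max_image_size = (500, 375)     # 原图(stepscaling)或缩放后图像(rangescaling)中出现过的最大宽高,示例值

reference = fix_resize_size if aug_method == 'unpadding' else max_image_size
is_valid = (eval_crop_size[0] >= reference[0]) and (eval_crop_size[1] >= reference[1])
print('EVAL_CROP_SIZE 是否满足要求:', is_valid)
```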
diff --git a/docs/configs/model_group.md b/docs/configs/model_group.md index e11b769de7d8aabbd14583e6666045de6cfc5b42..ca8758cdf2e93337da9bcd4400d572e88f006445 100644 --- a/docs/configs/model_group.md +++ b/docs/configs/model_group.md @@ -5,11 +5,12 @@ MODEL Group存放所有和模型相关的配置,该Group还包含三个子Grou * [DeepLabv3p](./model_deeplabv3p_group.md) * [UNet](./model_unet_group.md) * [ICNet](./model_icnet_group.md) +* [PSPNet](./model_pspnet_group.md) * [HRNet](./model_hrnet_group.md) ## `MODEL_NAME` -所选模型,支持`deeplabv3p` `unet` `icnet` `hrnet`四种模型 +所选模型,支持`deeplabv3p` `unet` `icnet` `pspnet` `hrnet`五种模型 ### 默认值 @@ -20,7 +21,13 @@ MODEL Group存放所有和模型相关的配置,该Group还包含三个子Grou ## `DEFAULT_NORM_TYPE` -模型所用norm类型,支持`bn` [`gn`]() +模型所用norm类型,支持`bn`(Batch Norm)、`gn`(Group Norm) + +![](../imgs/gn.png) + +关于Group Norm的介绍可以参考论文:https://arxiv.org/abs/1803.08494 + +GN 把通道分为组,并计算每一组之内的均值和方差,以进行归一化。GN 的计算与批量大小无关,其精度也在各种批量大小下保持稳定。适应于网络参数很重的模型,比如deeplabv3+这种,可以在一个小batch下取得一个较好的训练效果。 ### 默认值 @@ -111,4 +118,3 @@ loss = 1.0 * loss1 + 0.4 * loss2 + 0.16 * loss3

-
 diff --git a/docs/configs/model_pspnet_group.md b/docs/configs/model_pspnet_group.md new file mode 100644 index 0000000000000000000000000000000000000000..c1acd31b296b8b64ac05730e0e92b840264a4f23 --- /dev/null +++ b/docs/configs/model_pspnet_group.md @@ -0,0 +1,25 @@ +# cfg.MODEL.PSPNET + +MODEL.PSPNET 子Group存放所有和PSPNet模型相关的配置 + +## `DEPTH_MULTIPLIER` + +ResNet backbone的depth multiplier + +### 默认值 + +1 + +
+
+ +## `LAYERS` + +ResNet backbone的层数,支持`18` `34` `50` `101` `152`等五种 + +### 默认值 + +50 + +
+
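下面给出一段示意代码,演示如何把本页的两个配置项与`MODEL_NAME`一起设置(`cfg.update_from_list`的用法参见pdseg/train.py中对OPTIONS的处理,此处各键值仅为示例):

```python
# 示意:以 OPTIONS 键值对的形式设置 PSPNet 相关配置(需在 pdseg 运行环境下)
from utils.config import cfg

cfg.update_from_list([
    'MODEL.MODEL_NAME', 'pspnet',
    'MODEL.PSPNET.LAYERS', '101',          # ResNet backbone层数
    'MODEL.PSPNET.DEPTH_MULTIPLIER', '1',  # ResNet backbone的depth multiplier
])
```

等价地,也可以在训练/评估命令的OPTIONS中直接追加 `MODEL.MODEL_NAME pspnet MODEL.PSPNET.LAYERS 101`。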
diff --git a/docs/configs/train_group.md b/docs/configs/train_group.md index 6c8a0d79c79af665d8c7bf54a2b7555aa024bb8d..2fc8806c457d561978379589f6e05657e62a6e86 100644 --- a/docs/configs/train_group.md +++ b/docs/configs/train_group.md @@ -5,7 +5,7 @@ TRAIN Group存放所有和训练相关的配置 ## `MODEL_SAVE_DIR` 在训练周期内定期保存模型的主目录 -## 默认值 +### 默认值 无(需要用户自己填写)
@@ -14,10 +14,10 @@ TRAIN Group存放所有和训练相关的配置 ## `PRETRAINED_MODEL_DIR` 预训练模型路径 -## 默认值 +### 默认值 无 -## 注意事项 +### 注意事项 * 若未指定该字段,则模型会随机初始化所有的参数,从头开始训练 @@ -31,10 +31,10 @@ TRAIN Group存放所有和训练相关的配置 ## `RESUME_MODEL_DIR` 从指定路径中恢复参数并继续训练 -## 默认值 +### 默认值 无 -## 注意事项 +### 注意事项 * 当`RESUME_MODEL_DIR`存在时,PaddleSeg会恢复到上一次训练的最近一个epoch,并且恢复训练过程中的临时变量(如已经衰减过的学习率,Optimizer的动量数据等),`PRETRAINED_MODEL`路径的最后一个目录必须为int数值或者字符串final,PaddleSeg会将int数值作为当前起始EPOCH继续训练,若目录为final,则不会继续训练。若目录不满足上述条件,PaddleSeg会抛出错误。 @@ -42,12 +42,17 @@ TRAIN Group存放所有和训练相关的配置
## `SYNC_BATCH_NORM` -是否在多卡间同步BN的均值和方差 +是否在多卡间同步BN的均值和方差。 -## 默认值 +Synchronized Batch Norm跨GPU批归一化策略最早在[MegDet: A Large Mini-Batch Object Detector](https://arxiv.org/abs/1711.07240) +论文中提出,在[Bag of Freebies for Training Object Detection Neural Networks](https://arxiv.org/pdf/1902.04103.pdf)论文中以Yolov3验证了这一策略的有效性,[PaddleCV/yolov3](https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/yolov3)实现了这一系列策略并比Darknet框架版本在COCO17数据上mAP高5.9. + +PaddleSeg基于PaddlePaddle框架的sync_batch_norm策略,可以支持通过多卡实现大batch size的分割模型训练,可以得到更高的mIoU精度。 + +### 默认值 False -## 注意事项 +### 注意事项 * 打开该选项会带来一定的性能消耗(多卡间同步数据导致) diff --git a/docs/data_aug.md b/docs/data_aug.md index 2865d413b7090f414eb44c0681562837de21f19a..ed1b5c3c2dc66fa94f6dc1067bdec19161cda431 100644 --- a/docs/data_aug.md +++ b/docs/data_aug.md @@ -7,67 +7,108 @@ ## Resize -resize步骤是指将输入图像按照某种规则讲图片重新缩放到某一个尺寸,PaddleSeg支持以下3种resize方式: +Resize步骤是指将输入图像按照某种规则讲图片重新缩放到某一个尺寸,PaddleSeg支持以下3种resize方式: ![](imgs/aug_method.png) -- Un-padding -将输入图像直接resize到某一个固定大小下,送入到网络中间训练,对应参数为AUG.FIX_RESIZE_SIZE。预测时同样操作。 +- Unpadding +将输入图像直接resize到某一个固定大小下,送入到网络中间训练。预测时同样操作。 - Step-Scaling -将输入图像按照某一个比例resize,这个比例以某一个步长在一定范围内随机变动。设定最小比例参数为`AUG.MIN_SCALE_FACTOR`, 最大比例参数`AUG.MAX_SCALE_FACTOR`,步长参数为`AUG.SCALE_STEP_SIZE`。预测时不对输入图像做处理。 +将输入图像按照某一个比例resize,这个比例以某一个步长在一定范围内随机变动。预测时不对输入图像做处理。 - Range-Scaling -固定长宽比resize,即图像长边对齐到某一个固定大小,短边随同样的比例变化。设定最小大小参数为`AUG.MIN_RESIZE_VALUE`,设定最大大小参数为`AUG.MAX_RESIZE_VALUE`。预测时需要将长边对齐到`AUG.INF_RESIZE_VALUE`所指定的大小,其中`AUG.INF_RESIZE_VALUE`在`AUG.MIN_RESIZE_VALUE`和`AUG.MAX_RESIZE_VALUE`范围内。 +将输入图像按照长边变化进行resize,即图像长边对齐到某一长度,该长度在一定范围内随机变动,短边随同样的比例变化。 +预测时需要将长边对齐到另外指定的固定长度。 Range-Scaling示意图如下: ![](imgs/rangescale.png) +|Resize方式|配置参数|含义|备注| +|-|-|-|-| +|Unpadding|AUG.FIX_RESIZE_SIZE|Resize的固定尺寸| +|Step-Scaling|AUG.MIN_SCALE_FACTOR|Resize最小比例| +||AUG.MAX_SCALE_FACTOR|Resize最大比例| +||AUG.SCALE_STEP_SIZE|Resize比例选取的步长| +|Range-Scaling|AUG.MIN_RESIZE_VALUE|图像长边变动范围的最小值| +||AUG.MAX_RESIZE_VALUE|图像长边变动范围的最大值| +|                              |AUG.INF_RESIZE_VALUE|预测时长边对齐时所指定的固定长度|取值必须在
[AUG.MIN_RESIZE_VALUE, AUG.MAX_RESIZE_VALUE]
范围内。| + +**注:本文所有配置参数可在configs目录下您的yaml文件中进行设置。** + ## 图像翻转 PaddleSeg支持以下2种翻转方式: - 左右翻转(Mirror) -使用开关`AUG.MIRROR`,为True时该项功能开启,为False时该项功能关闭。 +以50%概率对图像进行左右翻转。 - 上下翻转(Flip) -使用开关`AUG.FLIP`,为True时该项功能开启,`AUG.FLIP_RATIO`控制是否上下翻转的概率。为False时该项功能关闭。 +以一定概率对图像进行上下翻转。 以上2种开关独立运作,可组合使用。故图像翻转一共有如下4种可能的情况: +|图像翻转方式|配置参数|含义|备注| +|-|-|-|-| +|Mirror|AUG.MIRROR|左右翻转开关|为True时开启,为False时关闭| +|Flip|AUG.FLIP|上下翻转开关|为True时开启,为False时关闭| +||AUG.FLIP_RATIO|控制是否上下翻转的概率|当AUG.FLIP为False时无效| + + ## Rich Crop Rich Crop是PaddleSeg结合实际业务经验开放的一套数据增强策略,面向标注数据少,测试数据情况繁杂的分割业务场景使用的数据增强策略。流程如下图所示: ![RichCrop示意图](imgs/data_aug_example.png) -rich crop是指对图像进行多种变换,保证在训练过程中数据的丰富多样性,PaddleSeg支持以下几种变换。`AUG.RICH_CROP.ENABLE`为False时会直接跳过该步骤。 +Rich Crop是指对图像进行多种变换,保证在训练过程中数据的丰富多样性,包含以下4种变换: + +- Blur +使用高斯模糊对图像进行平滑。 + +- Rotation +图像旋转,旋转角度在一定范围内随机选取,旋转产生的多余的区域使用`DATASET.PADDING_VALUE`值进行填充。 -- blur -图像加模糊,使用开关`AUG.RICH_CROP.BLUR`,为False时该项功能关闭。`AUG.RICH_CROP.BLUR_RATIO`控制加入模糊的概率。 +- Aspect +图像长宽比调整,从图像中按一定大小和宽高比裁取一定区域出来之后进行resize。 -- rotation -图像旋转,`AUG.RICH_CROP.MAX_ROTATION`控制最大旋转角度。旋转产生的多余的区域的填充值为均值。 +- Color jitter +图像颜色抖动,共进行亮度、饱和度和对比度三种颜色属性的调节。 -- aspect -图像长宽比调整,从图像中crop一定区域出来之后在某一长宽比内进行resize。控制参数`AUG.RICH_CROP.MIN_AREA_RATIO`和`AUG.RICH_CROP.ASPECT_RATIO`。 +|Rich crop方式|配置参数|含义|备注| +|-|-|-|-| +|Rich crop|AUG.RICH_CROP.ENABLE|Rich crop总开关|为True时开启,为False时关闭所有变换| +|Blur|AUG.RICH_CROP.BLUR|图像模糊开关|为True时开启,为False时关闭| +||AUG.RICH_CROP.BLUR_RATIO|控制进行模糊的概率|当AUG.RICH_CROP.BLUR为False时无效| +|Rotation|AUG.RICH_CROP.MAX_ROTATION|图像正向旋转的最大角度|取值0~90°,实际旋转角度在\[-AUG.RICH_CROP.MAX_ROTATION, AUG.RICH_CROP.MAX_ROTATION]范围内随机选取| +|Aspect|AUG.RICH_CROP.MIN_AREA_RATIO|裁取图像与原始图像面积比最小值|取值0~1,取值越小则变化范围越大,若为0则不进行调节| +||AUG.RICH_CROP.ASPECT_RATIO|裁取图像宽高比范围|取值非负,越小则变化范围越大,若为0则不进行调节| +|Color jitter|AUG.RICH_CROP.BRIGHTNESS_JITTER_RATIO|亮度调节因子|取值0~1,取值越大则变化范围越大,若为0则不进行调节| +||AUG.RICH_CROP.SATURATION_JITTER_RATIO|饱和度调节因子|取值0~1,取值越大则变化范围越大,若为0则不进行调节| +|                              |AUG.RICH_CROP.CONTRAST_JITTER_RATIO|对比度调节因子                     |取值0~1,取值越大则变化范围越大,若为0则不进行调节| -- color jitter -图像颜色调整,控制参数`AUG.RICH_CROP.BRIGHTNESS_JITTER_RATIO`、`AUG.RICH_CROP.SATURATION_JITTER_RATIO`、`AUG.RICH_CROP.CONTRAST_JITTER_RATIO`。 ## Random Crop -该步骤主要是通过crop的方式使得输入到网络中的图像在某一个固定大小,控制该大小的参数为TRAIN_CROP_SIZE,类型为tuple,格式为(width, height). 
当输入图像大小小于CROP_SIZE的时候会对输入图像进行padding,padding值为均值。 - -- 输入图片格式 - - 原图 - - 图片格式:RGB三通道图片和RGBA四通道图片两种类型的图片进行训练,但是在一次训练过程只能存在一种格式。 - - 图片转换:灰度图片经过预处理后之后会转变成三通道图片 - - 图片参数设置:当图片为三通道图片时IMAGE_TYPE设置为rgb, 对应MEAN和STD也必须是一个长度为3的list,当图片为四通道图片时IMAGE_TYPE设置为rgba,对应的MEAN和STD必须是一个长度为4的list。 - - 标注图 - - 图片格式:标注图片必须为png格式的单通道多值图,元素值代表的是这个元素所属于的类别。 - - 图片转换:在datalayer层对label图片进行的任何resize,以及旋转的操作,都必须采用最近邻的插值方式。 - - 图片ignore:设置TRAIN.IGNORE_INDEX 参数可以选择性忽略掉属于某一个类别的所有像素点。这个参数一般设置为255 +随机裁剪图片和标签图,该步骤主要是通过裁剪的方式使得输入到网络中的图像在某一个固定大小。 + +Random crop过程分为3种情形: +- 当输入图像尺寸等于CROP_SIZE时,返回原图。 +- 当输入图像尺寸大于CROP_SIZE时,直接裁剪。 +- 当输入图像尺寸小于CROP_SIZE时,分别使用`DATASET.PADDING_VALUE`值和`DATASET.IGNORE_INDEX`值对图像和标签图进行填充,再进行裁剪。 + +|Random crop方式|配置参数|含义|备注| +|-|-|-|-| +|Train crop|TRAIN_CROP_SIZE|训练过程进行random crop后的图像尺寸|类型为tuple,格式为(width, height) +|Eval crop                         |EVAL_CROP_SIZE|除训练外的过程进行random crop后的图像尺寸|类型为tuple,格式为(width, height) + +`TRAIN_CROP_SIZE`可以设置任意大小,具体如何设置根据数据集而定。 + +`EVAL_CROP_SIZE`的设置需要满足以下条件,共有3种情形: +- 当`AUG.AUG_METHOD`为unpadding时,`EVAL_CROP_SIZE`的宽高应不小于`AUG.FIX_RESIZE_SIZE`的宽高。 +- 当`AUG.AUG_METHOD`为stepscaling时,`EVAL_CROP_SIZE`的宽高应不小于原图中最长的宽高。 +- 当`AUG.AUG_METHOD`为rangescaling时,`EVAL_CROP_SIZE`的宽高应不小于缩放后图像中最长的宽高。 + diff --git a/docs/data_prepare.md b/docs/data_prepare.md index 50864a730a534c4a0e5eba84fb11dfb1bb9c542d..de1fd7965cf74efe22b5c126b94ae063ac8a52ca 100644 --- a/docs/data_prepare.md +++ b/docs/data_prepare.md @@ -2,6 +2,45 @@ ## 数据标注 +### 标注协议 +PaddleSeg采用单通道的标注图片,每一种像素值代表一种类别,像素标注类别需要从0开始递增,例如0,1,2,3表示有4种类别。 + +**NOTE:** 标注图像请使用PNG无损压缩格式的图片。标注类别最多为256类。 + +### 灰度标注vs伪彩色标注 +一般的分割库使用单通道灰度图作为标注图片,往往显示出来是全黑的效果。灰度标注图的弊端: +1. 对图像标注后,无法直接观察标注是否正确。 +2. 模型测试过程无法直接判断分割的实际效果。 + +**PaddleSeg支持伪彩色图作为标注图片,在原来的单通道图片基础上,注入调色板。在基本不增加图片大小的基础上,却可以显示出彩色的效果。** + +同时PaddleSeg也兼容灰度图标注,用户原来的灰度数据集可以不做修改,直接使用。 +![](./imgs/annotation/image-11.png) + +### 灰度标注转换为伪彩色标注 +如果用户需要转换成伪彩色标注图,可使用我们的转换工具。适用于以下两种常见的情况: +1. 如果您希望将指定目录下的所有灰度标注图转换为伪彩色标注图,则执行以下命令,指定灰度标注所在的目录即可。 +```buildoutcfg +python pdseg/tools/gray2pseudo_color.py +``` + +|参数|用途| +|-|-| +|dir_or_file|指定灰度标注所在目录| +|output_dir|彩色标注图片的输出目录| + +2. 如果您仅希望将指定数据集中的部分灰度标注图转换为伪彩色标注图,则执行以下命令,需要已有文件列表,按列表读取指定图片。 +```buildoutcfg +python pdseg/tools/gray2pseudo_color.py --dataset_dir --file_separator +``` +|参数|用途| +|-|-| +|dir_or_file|指定文件列表路径| +|output_dir|彩色标注图片的输出目录| +|--dataset_dir|数据集所在根目录| +|--file_separator|文件列表分隔符| + +### 标注教程 用户需预先采集好用于训练、评估和测试的图片,然后使用数据标注工具完成数据标注。 PddleSeg已支持2种标注工具:LabelMe、精灵数据标注工具。标注教程如下: @@ -9,63 +48,32 @@ PddleSeg已支持2种标注工具:LabelMe、精灵数据标注工具。标注 - [LabelMe标注教程](annotation/labelme2seg.md) - [精灵数据标注工具教程](annotation/jingling2seg.md) -最后用我们提供的数据转换脚本将上述标注工具产出的数据格式转换为模型训练时所需的数据格式。 ## 文件列表 ### 文件列表规范 -PaddleSeg采用通用的文件列表方式组织训练集、验证集和测试集。像素标注类别需要从0开始递增。 - -**NOTE:** 标注图像请使用PNG无损压缩格式的图片 - -以Cityscapes数据集为例, 我们需要整理出训练集、验证集、测试集对应的原图和标注文件列表用于PaddleSeg训练即可。 - -其中`DATASET.DATA_DIR`为数据根目录,文件列表的路径以数据集根目录作为相对路径起始点。 - -``` -./cityscapes/ # 数据集根目录 -├── gtFine # 标注目录 -│   ├── test -│   │   ├── berlin -│   │   └── ... -│   ├── train -│   │   ├── aachen -│   │   └── ... -│   └── val -│   ├── frankfurt -│   └── ... -└── leftImg8bit # 原图目录 - ├── test - │   ├── berlin - │   └── ... - ├── train - │   ├── aachen - │   └── ... - └── val - ├── frankfurt - └── ... 
-``` +PaddleSeg采用通用的文件列表方式组织训练集、验证集和测试集。在训练、评估、可视化过程前必须准备好相应的文件列表。 文件列表组织形式如下 ``` 原始图片路径 [SEP] 标注图片路径 ``` +其中`[SEP]`是文件路径分割符,可以在`DATASET.SEPARATOR`配置项中修改, 默认为空格。文件列表的路径以数据集根目录作为相对路径起始点,`DATASET.DATA_DIR`即为数据集根目录。 + +如下图所示,左边为原图的图片路径,右边为图片对应的标注路径。 -其中`[SEP]`是文件路径分割符,可以在`DATASET.SEPARATOR`配置项中修改, 默认为空格。 +![cityscapes_filelist](./imgs/file_list.png) **注意事项** -* 务必保证分隔符在文件列表中每行只存在一次, 如文件名中存在空格,请使用'|'等文件名不可用字符进行切分 +* 务必保证分隔符在文件列表中每行只存在一次, 如文件名中存在空格,请使用"|"等文件名不可用字符进行切分 * 文件列表请使用**UTF-8**格式保存, PaddleSeg默认使用UTF-8编码读取file_list文件 -如下图所示,左边为原图的图片路径,右边为图片对应的标注路径。 - -![cityscapes_filelist](./imgs/file_list.png) - 若数据集缺少标注图片,则文件列表不用包含分隔符和标注图片路径,如下图所示。 + ![cityscapes_filelist](./imgs/file_list2.png) **注意事项** @@ -75,32 +83,14 @@ PaddleSeg采用通用的文件列表方式组织训练集、验证集和测试 不可在`DATASET.TRAIN_FILE_LIST`和`DATASET.VAL_FILE_LIST`配置项中使用。 -完整的配置信息可以参考[`./docs/annotation/cityscapes_demo`](../docs/annotation/cityscapes_demo/)目录下的yaml和文件列表。 +**符合规范的文件列表是什么样的呢?** -### 文件列表生成 -PaddleSeg提供了生成文件列表的使用脚本,可适用于自定义数据集或cityscapes数据集,并支持通过不同的Flags来开启特定功能。 -``` -python pdseg/tools/create_dataset_list.py ${FLAGS} -``` -运行后将在数据集根目录下生成训练/验证/测试集的文件列表(文件主名与`--second_folder`一致,扩展名为`.txt`)。 - -**Note:** 若训练/验证/测试集缺少标注图片,仍可自动生成不含分隔符和标注图片路径的文件列表。 - -#### 命令行FLAGS列表 +请参考目录[`./docs/annotation/cityscapes_demo`](../docs/annotation/cityscapes_demo/)。 -|FLAG|用途|默认值|参数数目| -|-|-|-|-| -|--type|指定数据集类型,`cityscapes`或`自定义`|`自定义`|1| -|--separator|文件列表分隔符|'|'|1| -|--folder|图片和标签集的文件夹名|'images' 'annotations'|2| -|--second_folder|训练/验证/测试集的文件夹名|'train' 'val' 'test'|若干| -|--format|图片和标签集的数据格式|'jpg' 'png'|2| -|--postfix|按文件主名(无扩展名)是否包含指定后缀对图片和标签集进行筛选|'' ''(2个空字符)|2| +### 数据集目录结构整理 -#### 使用示例 -- **对于自定义数据集** +如果用户想要生成数据集的文件列表,需要整理成如下的目录结构(类似于Cityscapes数据集): -如果用户想要生成自己数据集的文件列表,需要整理成如下的目录结构: ``` ./dataset/ # 数据集根目录 ├── annotations # 标注目录 @@ -125,9 +115,32 @@ python pdseg/tools/create_dataset_list.py ${FLAGS} └── ... 
Note:以上目录名可任意 ``` -必须指定自定义数据集目录,可以按需要设定FLAG。 -**Note:** 无需指定`--type`。 +### 文件列表生成 +PaddleSeg提供了生成文件列表的使用脚本,可适用于自定义数据集或cityscapes数据集,并支持通过不同的Flags来开启特定功能。 +``` +python pdseg/tools/create_dataset_list.py ${FLAGS} +``` +运行后将在数据集根目录下生成训练/验证/测试集的文件列表(文件主名与`--second_folder`一致,扩展名为`.txt`)。 + +**Note:** 生成文件列表要求:要么原图和标注图片数量一致,要么只有原图,没有标注图片。若数据集缺少标注图片,仍可自动生成不含分隔符和标注图片路径的文件列表。 + +#### 命令行FLAGS列表 + +|FLAG|用途|默认值|参数数目| +|-|-|-|-| +|--type|指定数据集类型,`cityscapes`或`自定义`|`自定义`|1| +|--separator|文件列表分隔符|"|"|1| +|--folder|图片和标签集的文件夹名|"images" "annotations"|2| +|--second_folder|训练/验证/测试集的文件夹名|"train" "val" "test"|若干| +|--format|图片和标签集的数据格式|"jpg" "png"|2| +|--postfix|按文件主名(无扩展名)是否包含指定后缀对图片和标签集进行筛选|"" ""(2个空字符)|2| + +#### 使用示例 +- **对于自定义数据集** + +若您已经按上述说明整理好了数据集目录结构,可以运行下面的命令生成文件列表。 + ``` # 生成文件列表,其分隔符为空格,图片和标签集的数据格式都为png python pdseg/tools/create_dataset_list.py --separator " " --format png png @@ -137,22 +150,26 @@ python pdseg/tools/create_dataset_list.py --separator " " --f python pdseg/tools/create_dataset_list.py \ --folder img gt --second_folder training validation ``` - +**Note:** 必须指定自定义数据集目录,可以按需要设定FLAG。无需指定`--type`。 - **对于cityscapes数据集** +若您使用的是cityscapes数据集,可以运行下面的命令生成文件列表。 + +``` +# 生成cityscapes文件列表,其分隔符为逗号 +python pdseg/tools/create_dataset_list.py --type cityscapes --separator "," +``` +**Note:** + 必须指定cityscapes数据集目录,`--type`必须为`cityscapes`。 在cityscapes类型下,部分FLAG将被重新设定,无需手动指定,具体如下: |FLAG|固定值| |-|-| -|--folder|'leftImg8bit' 'gtFine'| -|--format|'png' 'png'| -|--postfix|'_leftImg8bit' '_gtFine_labelTrainIds'| +|--folder|"leftImg8bit" "gtFine"| +|--format|"png" "png"| +|--postfix|"_leftImg8bit" "_gtFine_labelTrainIds"| 其余FLAG可以按需要设定。 -``` -# 生成cityscapes文件列表,其分隔符为逗号 -python pdseg/tools/create_dataset_list.py --type cityscapes --separator "," -``` diff --git a/docs/imgs/annotation/image-11.png b/docs/imgs/annotation/image-11.png new file mode 100644 index 0000000000000000000000000000000000000000..2e3b6ff1f1ffd33fb57a35b547bcce31ca248e19 Binary files /dev/null and b/docs/imgs/annotation/image-11.png differ diff --git a/docs/imgs/annotation/image-7.png b/docs/imgs/annotation/image-7.png index b65d56e92b2b5c1633f5c3168eee2971b476e8f3..7c24ca50361e0f602bc5a603e6377af021dbb63d 100644 Binary files a/docs/imgs/annotation/image-7.png and b/docs/imgs/annotation/image-7.png differ diff --git a/docs/imgs/annotation/jingling-5.png b/docs/imgs/annotation/jingling-5.png index 59a15567a3e25df338a3577fe9a9035c5bd0c719..5106559099570140fe91a94e2cdffffe2fdbdaca 100644 Binary files a/docs/imgs/annotation/jingling-5.png and b/docs/imgs/annotation/jingling-5.png differ diff --git a/docs/imgs/deeplabv3p.png b/docs/imgs/deeplabv3p.png index c0f12db6474e28f68ea45aa498026ef5261bcbe9..ba754f3e8b75c49630a96d4cd9fcb4aa45d6e5bd 100644 Binary files a/docs/imgs/deeplabv3p.png and b/docs/imgs/deeplabv3p.png differ diff --git a/docs/imgs/dice.png b/docs/imgs/dice.png new file mode 100644 index 0000000000000000000000000000000000000000..56f443dfade0a02240dad61d6554a23c91213bb5 Binary files /dev/null and b/docs/imgs/dice.png differ diff --git a/docs/imgs/dice1.png b/docs/imgs/dice1.png deleted file mode 100644 index f8520802296cc264849fae4a8442792cf56cb20a..0000000000000000000000000000000000000000 Binary files a/docs/imgs/dice1.png and /dev/null differ diff --git a/docs/imgs/dice2.png b/docs/imgs/dice2.png new file mode 100644 index 0000000000000000000000000000000000000000..37c3da1f1906421c0d3928ab18212a4d1a0966a0 Binary files /dev/null and b/docs/imgs/dice2.png differ diff --git a/docs/imgs/dice3.png b/docs/imgs/dice3.png new file mode 
100644 index 0000000000000000000000000000000000000000..50b422385ee1e6b0cf7652ac63571652ce1d52ef Binary files /dev/null and b/docs/imgs/dice3.png differ diff --git a/docs/imgs/hrnet.png b/docs/imgs/hrnet.png new file mode 100644 index 0000000000000000000000000000000000000000..a4733a7b7c62534f8cfc8f8cfeb4fe049d6dfba8 Binary files /dev/null and b/docs/imgs/hrnet.png differ diff --git a/docs/imgs/icnet.png b/docs/imgs/icnet.png index 7d9659db01bfb7a887f94b36fdaad303284deab7..125889691edcc5857d8e1322704dda652412d33f 100644 Binary files a/docs/imgs/icnet.png and b/docs/imgs/icnet.png differ diff --git a/docs/imgs/pspnet.png b/docs/imgs/pspnet.png new file mode 100644 index 0000000000000000000000000000000000000000..2963aeadb89aef05bfb19163f89d413d620c6564 Binary files /dev/null and b/docs/imgs/pspnet.png differ diff --git a/docs/imgs/pspnet2.png b/docs/imgs/pspnet2.png new file mode 100644 index 0000000000000000000000000000000000000000..401263a9b5fddc4c6ca77ef2dc172c7cb565c00f Binary files /dev/null and b/docs/imgs/pspnet2.png differ diff --git a/docs/imgs/softmax_loss.png b/docs/imgs/softmax_loss.png new file mode 100644 index 0000000000000000000000000000000000000000..3c5cbbce470fe48ca5f500c59995776c2fbd5ec5 Binary files /dev/null and b/docs/imgs/softmax_loss.png differ diff --git a/docs/imgs/tensorboard_image.JPG b/docs/imgs/tensorboard_image.JPG index 2d5d0ceb001cb7fc9f68622842710afd9d032463..140aa2a0ed6a9b1a2d0a98477685b9e6d434a113 100644 Binary files a/docs/imgs/tensorboard_image.JPG and b/docs/imgs/tensorboard_image.JPG differ diff --git a/docs/imgs/tensorboard_scalar.JPG b/docs/imgs/tensorboard_scalar.JPG index 2de89c32a3469764631352597f0e55f8a431ad4b..322c98dc8ba7e5ca96477f3dbe193a70a8cf4609 100644 Binary files a/docs/imgs/tensorboard_scalar.JPG and b/docs/imgs/tensorboard_scalar.JPG differ diff --git a/docs/imgs/unet.png b/docs/imgs/unet.png index 960f289321a9a6b894d3054ec4f257a36cb8969e..5a7b691ae54f9fe29dded913d8e6f6cacac494f7 100644 Binary files a/docs/imgs/unet.png and b/docs/imgs/unet.png differ diff --git a/docs/imgs/usage_vis_demo.jpg b/docs/imgs/usage_vis_demo.jpg index 50bedf2f547d11cb4aaefa0435022acc0392ba3c..40b35f13418e7c68e0bfaabf992d8411bd87bc77 100644 Binary files a/docs/imgs/usage_vis_demo.jpg and b/docs/imgs/usage_vis_demo.jpg differ diff --git a/docs/imgs/usage_vis_demo2.jpg b/docs/imgs/usage_vis_demo2.jpg deleted file mode 100644 index 9665e9e2f4d90d6db75411d43d0dc5a34d8b28e7..0000000000000000000000000000000000000000 Binary files a/docs/imgs/usage_vis_demo2.jpg and /dev/null differ diff --git a/docs/imgs/usage_vis_demo3.jpg b/docs/imgs/usage_vis_demo3.jpg deleted file mode 100644 index 318c06bcf7debf76b7bff504648df056802130df..0000000000000000000000000000000000000000 Binary files a/docs/imgs/usage_vis_demo3.jpg and /dev/null differ diff --git a/docs/installation.md b/docs/installation.md deleted file mode 100644 index 80cc341bb8764065dc7fd871e81fdb31225d636a..0000000000000000000000000000000000000000 --- a/docs/installation.md +++ /dev/null @@ -1,44 +0,0 @@ -# PaddleSeg 安装说明 - -## 1. 安装PaddlePaddle - -版本要求 -* PaddlePaddle >= 1.6.1 -* Python 2.7 or 3.5+ - -更多详细安装信息如CUDA版本、cuDNN版本等兼容信息请查看[PaddlePaddle安装](https://www.paddlepaddle.org.cn/install/doc/index) - -### pip安装 - -由于图像分割模型计算开销大,推荐在GPU版本的PaddlePaddle下使用PaddleSeg. 
- -``` -pip install paddlepaddle-gpu -``` - -### Conda安装 - -PaddlePaddle最新版本1.5支持Conda安装,可以减少相关依赖安装成本,conda相关使用说明可以参考[Anaconda](https://www.anaconda.com/distribution/) - -``` -conda install -c paddle paddlepaddle-gpu cudatoolkit=9.0 -``` - - * 如果有多卡训练需求,请安装 NVIDIA NCCL >= 2.4.7,并在Linux环境下运行 - -更多安装方式详情可以查看 [PaddlePaddle安装说明](https://www.paddlepaddle.org.cn/documentation/docs/zh/beginners_guide/install/index_cn.html) - - -## 2. 下载PaddleSeg代码 - -``` -git clone https://github.com/PaddlePaddle/PaddleSeg -``` - - -## 3. 安装PaddleSeg依赖 - -``` -cd PaddleSeg -pip install -r requirements.txt -``` diff --git a/docs/loss_select.md b/docs/loss_select.md index 454085c9c22a5c3308c77c93c961628b53157042..6749979821de5cd7387f3161e0a2bd25a9f02e4e 100644 --- a/docs/loss_select.md +++ b/docs/loss_select.md @@ -1,41 +1,66 @@ -# dice loss解决二分类中样本不均衡问题 +# 如何解决二分类中类别不均衡问题 +对于二类图像分割任务中,经常出现类别分布不均匀的情况,例如:工业产品的瑕疵检测、道路提取及病变区域提取等。 + +目前PaddleSeg提供了三种loss函数,分别为softmax loss(sotfmax with cross entroy loss)、dice loss(dice coefficient loss)和bce loss(binary cross entroy loss). 我们可使用dice loss解决这个问题。 + +注:dice loss和bce loss仅支持二分类。 + +## Dice loss +Dice loss的定义如下: -对于二类图像分割任务中,往往存在类别分布不均的情况,如:瑕疵检测,道路提取及病变区域提取等等。 -在DeepGlobe比赛的Road Extraction中,训练数据道路占比为:%4.5。如下为其图片样例:
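上文「Dice loss的定义如下」处的公式在文中以图片形式给出,按本节的符号约定,其形式为(与旧版文档中的LaTeX公式一致):

$$dice\_loss = 1 - \frac{2|Y \cap P|}{|Y| + |P|}$$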

-
+

-可以看出道路在整张图片中的比例很小。 - -## 数据集下载 -我们从DeepGlobe比赛的Road Extraction的训练集中随机抽取了800张图片作为训练集,200张图片作为验证集, -制作了一个小型的道路提取数据集[MiniDeepGlobeRoadExtraction](https://paddleseg.bj.bcebos.com/dataset/MiniDeepGlobeRoadExtraction.zip) -## softmax loss与dice loss -在图像分割中,softmax loss(sotfmax with cross entroy loss)同等的对待每一像素,因此当背景占据绝大部分的情况下, -网络将偏向于背景的学习,使网络对目标的提取能力变差。`dice loss(dice coefficient loss)`通过计算预测与标注之间的重叠部分计算损失函数,避免了类别不均衡带来的影响,能够取得更好的效果。 -在实际应用中`dice loss`往往与`bce loss(binary cross entroy loss)`结合使用,提高模型训练的稳定性。 +其中 Y 表示ground truth,P 表示预测结果。| |表示矩阵元素之和。![](./imgs/dice2.png) 表示*Y*和*P*的共有元素数, +实际通过求两者的逐像素乘积之和进行计算。例如: + +
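下面用一小段示意代码演示「逐像素乘积之和」的计算过程(数据为编造的示例,前景取1、背景取0):

```python
import numpy as np

# Y 为标注(ground truth),P 为预测结果,均为0/1二值矩阵,示例数据
Y = np.array([[0, 1, 1],
              [0, 1, 0],
              [0, 0, 0]])
P = np.array([[0, 1, 0],
              [0, 1, 1],
              [0, 0, 0]])

intersection = np.sum(Y * P)                         # |Y∩P|:逐像素乘积之和,此例为2
dice = 2.0 * intersection / (np.sum(Y) + np.sum(P))  # Dice系数,此例为2*2/(3+3)≈0.67
dice_loss = 1.0 - dice                               # Dice loss,此例约为0.33
print(intersection, dice, dice_loss)
```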

+
+

+ +其中 1 表示前景,0 表示背景。 + +**Note:** 在标注图片中,务必保证前景像素值为1,背景像素值为0. -dice loss的定义如下: +Dice系数请参见[维基百科](https://zh.wikipedia.org/wiki/Dice%E7%B3%BB%E6%95%B0) -![equation](http://latex.codecogs.com/gif.latex?dice\\_loss=1-\frac{2|Y\bigcap{P}|}{|Y|+|P|}) +**为什么在类别不均衡问题上,dice loss效果比softmax loss更好?** -其中 ![equation](http://latex.codecogs.com/gif.latex?|Y\bigcap{P}|) 表示*Y*和*P*的共有元素数, -实际计算通过求两者的乘积之和进行计算。如下所示: +首先来看softmax loss的定义:

-
+

+ +其中 y 表示ground truth,p 表示网络输出。 + +在图像分割中,`softmax loss`评估每一个像素点的类别预测,然后平均所有的像素点。这个本质上就是对图片上的每个像素进行平等的学习。这就造成了一个问题,如果在图像上的多种类别有不平衡的表征,那么训练会由最主流的类别主导。以上面DeepGlobe道路提取的数据为例子,网络将偏向于背景的学习,降低了网络对前景目标的提取能力。 +而`dice loss(dice coefficient loss)`通过预测和标注的交集除以它们的总体像素进行计算,它将一个类别的所有像素作为一个整体作为考量,而且计算交集在总体中的占比,所以不受大量背景像素的影响,能够取得更好的效果。 + +在实际应用中`dice loss`往往与`bce loss(binary cross entroy loss)`结合使用,提高模型训练的稳定性。 -[dice系数详解](https://zh.wikipedia.org/wiki/Dice%E7%B3%BB%E6%95%B0) ## PaddleSeg指定训练loss PaddleSeg通过`cfg.SOLVER.LOSS`参数可以选择训练时的损失函数, 如`cfg.SOLVER.LOSS=['dice_loss','bce_loss']`将指定训练loss为`dice loss`与`bce loss`的组合 -## 实验比较 +## Dice loss解决类别不均衡问题的示例 + +我们以道路提取任务为例应用dice loss. +在DeepGlobe比赛的Road Extraction中,训练数据道路占比为:4.5%. 如下为其图片样例: +

+
+

+可以看出道路在整张图片中的比例很小。 + +### 数据集下载 +我们从DeepGlobe比赛的Road Extraction的训练集中随机抽取了800张图片作为训练集,200张图片作为验证集, +制作了一个小型的道路提取数据集[MiniDeepGlobeRoadExtraction](https://paddleseg.bj.bcebos.com/dataset/MiniDeepGlobeRoadExtraction.zip) + +### 实验比较 在MiniDeepGlobeRoadExtraction数据集进行了实验比较。 @@ -73,5 +98,4 @@ softmax loss和dice loss + bce loss实验结果如下图所示。

- diff --git a/docs/model_zoo.md b/docs/model_zoo.md index 7e625db73a5ae185b8db00e8dd6f04e26d4e11e5..8cd89fa41d6b7fc88759cf1250d88ec067755a6c 100644 --- a/docs/model_zoo.md +++ b/docs/model_zoo.md @@ -1,6 +1,7 @@ # PaddleSeg 预训练模型 -PaddleSeg对所有内置的分割模型都提供了公开数据集下的预训练模型,通过加载预训练模型后训练可以在自定义数据集中得到更稳定地效果。 +PaddleSeg对所有内置的分割模型都提供了公开数据集下的预训练模型。因为对于自定 +义数据集的场景,使用预训练模型进行训练可以得到更稳定地效果。用户可以根据模型类型、自己的数据集和预训练数据集的相似程度,选择并下载预训练模型。 ## ImageNet预训练模型 @@ -32,6 +33,11 @@ PaddleSeg对所有内置的分割模型都提供了公开数据集下的预训 | HRNet_W48 | ImageNet | [hrnet_w48_imagenet.tar](https://paddleseg.bj.bcebos.com/models/hrnet_w48_imagenet.tar) | 78.95%/94.42% | | HRNet_W64 | ImageNet | [hrnet_w64_imagenet.tar](https://paddleseg.bj.bcebos.com/models/hrnet_w64_imagenet.tar) | 79.30%/94.61% | +| 模型 | 数据集合 | 下载地址 | Accuray Top1/5 Error | +|---|---|---|---| +| ResNet50(适配PSPNet) | ImageNet | [resnet50_v2_pspnet](https://paddleseg.bj.bcebos.com/resnet50_v2_pspnet.tgz)| -- | +| ResNet101(适配PSPNet) | ImageNet | [resnet101_v2_pspnet](https://paddleseg.bj.bcebos.com/resnet101_v2_pspnet.tgz)| -- | + ## COCO预训练模型 数据集为COCO实例分割数据集合转换成的语义分割数据集合 @@ -57,3 +63,6 @@ train数据集合为Cityscapes训练集合,测试为Cityscapes的验证集合 | PSPNet/bn | Cityscapes |[pspnet50_cityscapes.tgz](https://paddleseg.bj.bcebos.com/models/pspnet50_cityscapes.tgz) |16|false| 0.7013 | | PSPNet/bn | Cityscapes |[pspnet101_cityscapes.tgz](https://paddleseg.bj.bcebos.com/models/pspnet101_cityscapes.tgz) |16|false| 0.7734 | | HRNet_W18/bn | Cityscapes |[hrnet_w18_bn_cityscapes.tgz](https://paddleseg.bj.bcebos.com/models/hrnet_w18_bn_cityscapes.tgz) | 4 | false | 0.7936 | +| Fast-SCNN/bn | Cityscapes |[fast_scnn_cityscapes.tar](https://paddleseg.bj.bcebos.com/models/fast_scnn_cityscape.tar) | 32 | false | 0.6964 | + +测试环境为python 3.7.3,v100,cudnn 7.6.2。 diff --git a/docs/models.md b/docs/models.md index 680dfe87356db9dd6be181e003598d3eb8967ffe..a452aa3639c3901d8f75d1aa4f5f1b7f393ce0b7 100644 --- a/docs/models.md +++ b/docs/models.md @@ -1,56 +1,74 @@ # PaddleSeg 分割模型介绍 -### U-Net -U-Net 起源于医疗图像分割,整个网络是标准的encoder-decoder网络,特点是参数少,计算快,应用性强,对于一般场景适应度很高。 +- [U-Net](#U-Net) +- [DeepLabv3+](#DeepLabv3) +- [PSPNet](#PSPNet) +- [ICNet](#ICNet) +- [HRNet](#HRNet) + +## U-Net +U-Net [1] 起源于医疗图像分割,整个网络是标准的encoder-decoder网络,特点是参数少,计算快,应用性强,对于一般场景适应度很高。U-Net最早于2015年提出,并在ISBI 2015 Cell Tracking Challenge取得了第一。经过发展,目前有多个变形和应用。 + +原始U-Net的结构如下图所示,由于网络整体结构类似于大写的英文字母U,故得名U-net。左侧可视为一个编码器,右侧可视为一个解码器。编码器有四个子模块,每个子模块包含两个卷积层,每个子模块之后通过max pool进行下采样。由于卷积使用的是valid模式,故实际输出比输入图像小一些。具体来说,后一个子模块的分辨率=(前一个子模块的分辨率-4)/2。U-Net使用了Overlap-tile 策略用于补全输入图像的上下信息,使得任意大小的输入图像都可获得无缝分割。同样解码器也包含四个子模块,分辨率通过上采样操作依次上升,直到与输入图像的分辨率基本一致。该网络还使用了跳跃连接,以拼接的方式将解码器和编码器中相同分辨率的feature map进行特征融合,帮助解码器更好地恢复目标的细节。 + ![](./imgs/unet.png) -### DeepLabv3+ +## DeepLabv3+ -DeepLabv3+ 是DeepLab系列的最后一篇文章,其前作有 DeepLabv1,DeepLabv2, DeepLabv3, -在最新作中,DeepLab的作者通过encoder-decoder进行多尺度信息的融合,同时保留了原来的空洞卷积和ASSP层, -其骨干网络使用了Xception模型,提高了语义分割的健壮性和运行速率,在 PASCAL VOC 2012 dataset取得新的state-of-art performance,89.0mIOU。 +DeepLabv3+ [2] 是DeepLab系列的最后一篇文章,其前作有 DeepLabv1, DeepLabv2, DeepLabv3. +在最新作中,作者通过encoder-decoder进行多尺度信息的融合,以优化分割效果,尤其是目标边缘的效果。 +并且其使用了Xception模型作为骨干网络,并将深度可分离卷积(depthwise separable convolution)应用到atrous spatial pyramid pooling(ASPP)中和decoder模块,提高了语义分割的健壮性和运行速率,在 PASCAL VOC 2012 和 Cityscapes 数据集上取得新的state-of-art performance. 
![](./imgs/deeplabv3p.png) -在PaddleSeg当前实现中,支持两种分类Backbone网络的切换 +在PaddleSeg当前实现中,支持两种分类Backbone网络的切换: -- MobileNetv2: +- MobileNetv2 适用于移动设备的快速网络,如果对分割性能有较高的要求,请使用这一backbone网络。 -- Xception: +- Xception DeepLabv3+原始实现的backbone网络,兼顾了精度和性能,适用于服务端部署。 +## PSPNet + +Pyramid Scene Parsing Network (PSPNet) [3] 起源于场景解析(Scene Parsing)领域。如下图所示,普通FCN [4] 面向复杂场景出现三种误分割现象:(1)关系不匹配。将船误分类成车,显然车一般不会出现在水面上。(2)类别混淆。摩天大厦和建筑物这两个类别相近,误将摩天大厦分类成建筑物。(3)类别不显著。枕头区域较小且纹理与床相近,误将枕头分类成床。 + +![](./imgs/pspnet2.png) -### ICNet +PSPNet的出发点是在算法中引入更多的上下文信息来解决上述问题。为了融合了图像中不同区域的上下文信息,PSPNet通过特殊设计的全局均值池化操作(global average pooling)和特征融合构造金字塔池化模块 (Pyramid Pooling Module)。PSPNet最终获得了2016年ImageNet场景解析挑战赛的冠军,并在PASCAL VOC 2012 和 Cityscapes 数据集上取得当时的最佳效果。整个网络结构如下: -Image Cascade Network(ICNet)主要用于图像实时语义分割。相较于其它压缩计算的方法,ICNet即考虑了速度,也考虑了准确性。 ICNet的主要思想是将输入图像变换为不同的分辨率,然后用不同计算复杂度的子网络计算不同分辨率的输入,然后将结果合并。ICNet由三个子网络组成,计算复杂度高的网络处理低分辨率输入,计算复杂度低的网络处理分辨率高的网络,通过这种方式在高分辨率图像的准确性和低复杂度网络的效率之间获得平衡。 +![](./imgs/pspnet.png) + + +## ICNet + +Image Cascade Network(ICNet) [5] 是一个基于PSPNet的语义分割网络,设计目的是减少PSPNet推断时期的耗时。ICNet主要用于图像实时语义分割。ICNet由三个不同分辨率的子网络组成,将输入图像变换为不同的分辨率,随后使用计算复杂度高的网络处理低分辨率输入,计算复杂度低的网络处理分辨率高的网络,通过这种方式在高分辨率图像的准确性和低复杂度网络的效率之间获得平衡。并在PSPNet的基础上引入级联特征融合单元(cascade feature fusion unit),实现快速且高质量的分割模型。 整个网络结构如下: ![](./imgs/icnet.png) -## 参考 +### HRNet -- [Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1802.02611) +High-Resolution Network (HRNet) [6] 在整个训练过程中始终维持高分辨率表示。 +HRNet具有两个特点:(1)从高分辨率到低分辨率并行连接各子网络,(2)反复交换跨分辨率子网络信息。这两个特点使HRNet网络能够学习到更丰富的语义信息和细节信息。 +HRNet在人体姿态估计、语义分割和目标检测领域都取得了显著的性能提升。 -- [U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597) - -- [ICNet for Real-Time Semantic Segmentation on High-Resolution Images](https://arxiv.org/abs/1704.08545) +整个网络结构如下: -# PaddleSeg特殊网络结构介绍 +![](./imgs/hrnet.png) -### Group Norm +## 参考文献 -![](./imgs/gn.png) -关于Group Norm的介绍可以参考论文:https://arxiv.org/abs/1803.08494 +[1] [U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597) -GN 把通道分为组,并计算每一组之内的均值和方差,以进行归一化。GN 的计算与批量大小无关,其精度也在各种批量大小下保持稳定。适应于网络参数很重的模型,比如deeplabv3+这种,可以在一个小batch下取得一个较好的训练效果。 +[2] [Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1802.02611) +[3] [Pyramid Scene Parsing Network](https://arxiv.org/abs/1612.01105) -### Synchronized Batch Norm +[4] [Fully Convolutional Networks for Semantic Segmentation](https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf) -Synchronized Batch Norm跨GPU批归一化策略最早在[MegDet: A Large Mini-Batch Object Detector](https://arxiv.org/abs/1711.07240) -论文中提出,在[Bag of Freebies for Training Object Detection Neural Networks](https://arxiv.org/pdf/1902.04103.pdf)论文中以Yolov3验证了这一策略的有效性,[PaddleCV/yolov3](https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/yolov3)实现了这一系列策略并比Darknet框架版本在COCO17数据上mAP高5.9. 
+[5] [ICNet for Real-Time Semantic Segmentation on High-Resolution Images](https://arxiv.org/abs/1704.08545) -PaddleSeg基于PaddlePaddle框架的sync_batch_norm策略,可以支持通过多卡实现大batch size的分割模型训练,可以得到更高的mIoU精度。 +[6] [Deep High-Resolution Representation Learning for Visual Recognition](https://arxiv.org/abs/1908.07919) diff --git a/docs/multiple_gpus_train_and_mixed_precision_train.md b/docs/multiple_gpus_train_and_mixed_precision_train.md index 7826d88171bec71cba7ae2db9327ce3dfd47efd9..206a9409d0326ee6d4cd7c07569e7698f7d9c469 100644 --- a/docs/multiple_gpus_train_and_mixed_precision_train.md +++ b/docs/multiple_gpus_train_and_mixed_precision_train.md @@ -4,7 +4,7 @@ * PaddlePaddle >= 1.6.1 * NVIDIA NCCL >= 2.4.7 -环境配置,数据,预训练模型准备等工作请参考[安装说明](./installation.md),[PaddleSeg使用说明](./usage.md) +环境配置,数据,预训练模型准备等工作请参考[PaddleSeg使用说明](./usage.md) ### 多进程训练示例 diff --git a/docs/usage.md b/docs/usage.md index e38d16e047b4b97a71278b1ba17682d20c4586ee..6da85a2de7b8be220e955a9e20a351c2d306b489 100644 --- a/docs/usage.md +++ b/docs/usage.md @@ -1,98 +1,74 @@ -# 训练/评估/可视化 +# PaddleSeg快速入门 -PaddleSeg提供了 **训练**/**评估**/**可视化** 等三个功能的使用脚本。三个脚本都支持通过不同的Flags来开启特定功能,也支持通过Options来修改默认的[训练配置](./config.md)。三者的使用方式非常接近,如下: +本教程通过一个简单的示例,说明如何基于PaddleSeg启动训练(训练可视化)、评估和可视化。我们选择基于COCO数据集预训练的unet模型作为预训练模型,以一个眼底医疗分割数据集为例。 -```shell -# 训练 -python pdseg/train.py ${FLAGS} ${OPTIONS} -# 评估 -python pdseg/eval.py ${FLAGS} ${OPTIONS} -# 可视化 -python pdseg/vis.py ${FLAGS} ${OPTIONS} -``` - -**Note:** - -* FLAGS必须位于OPTIONS之前,否会将会遇到报错,例如如下的例子: - -```shell -# FLAGS "--cfg configs/cityscapes.yaml" 必须在 OPTIONS "BATCH_SIZE 1" 之前 -python pdseg/train.py BATCH_SIZE 1 --cfg configs/cityscapes.yaml -``` - -## 命令行FLAGS列表 - -|FLAG|支持脚本|用途|默认值|备注| -|-|-|-|-|-| -|--cfg|ALL|配置文件路径|None|| -|--use_gpu|ALL|是否使用GPU进行训练|False|| -|--use_mpio|train/eval|是否使用多进程进行IO处理|False|打开该开关会占用一定量的CPU内存,但是可以提高训练速度。
**NOTE:** windows平台下不支持该功能, 建议使用自定义数据初次训练时不打开,打开会导致数据读取异常不可见。
| -|--use_tb|train|是否使用TensorBoard记录训练数据|False|| -|--log_steps|train|训练日志的打印周期(单位为step)|10|| -|--debug|train|是否打印debug信息|False|IOU等指标涉及到混淆矩阵的计算,会降低训练速度| -|--tb_log_dir|train|TensorBoard的日志路径|None|| -|--do_eval|train|是否在保存模型时进行效果评估|False|| -|--vis_dir|vis|保存可视化图片的路径|"visual"|| -|--also_save_raw_results|vis|是否保存原始的预测图片|False|| - -## OPTIONS - -详见[训练配置](./config.md) +- [1.准备工作](#1准备工作) +- [2.下载待训练数据](#2下载待训练数据) +- [3.下载预训练模型](#3下载预训练模型) +- [4.模型训练](#4模型训练) +- [5.训练过程可视化](#5训练过程可视化) +- [6.模型评估](#6模型评估) +- [7.模型可视化](#7模型可视化) +- [在线体验](#在线体验) -## 使用示例 -下面通过一个简单的示例,说明如何基于PaddleSeg提供的预训练模型启动训练。我们选择基于COCO数据集预训练的unet模型作为预训练模型,在一个Oxford-IIIT Pet数据集上进行训练。 -**Note:** 为了快速体验,我们使用Oxford-IIIT Pet做了一个小型数据集,后续数据都使用该小型数据集。 -### 准备工作 +## 1.准备工作 在开始教程前,请先确认准备工作已经完成: 1. 正确安装了PaddlePaddle 2. PaddleSeg相关依赖已经安装 -如果有不确认的地方,请参考[安装说明](./installation.md) +如果有不确认的地方,请参考[首页安装说明](../README.md#安装) + +## 2.下载待训练数据 + +![](../turtorial/imgs/optic.png) + +我们提前准备好了一份眼底医疗分割数据集--视盘分割(optic disc segmentation),包含267张训练图片、76张验证图片、38张测试图片。通过以下命令进行下载: -### 下载预训练模型 ```shell -# 下载预训练模型并进行解压 -python pretrained_model/download_model.py unet_bn_coco +# 下载待训练数据集 +python dataset/download_optic.py ``` -### 下载Oxford-IIIT Pet数据集 -我们使用了Oxford-IIIT中的猫和狗两个类别数据制作了一个小数据集mini_pet,用于快速体验。 -更多关于数据集的介绍情参考[Oxford-IIIT Pet](https://www.robots.ox.ac.uk/~vgg/data/pets/) +## 3.下载预训练模型 ```shell # 下载预训练模型并进行解压 -python dataset/download_pet.py +python pretrained_model/download_model.py unet_bn_coco ``` -### 模型训练 +## 4.模型训练 -为了方便体验,我们在configs目录下放置了mini_pet所对应的配置文件`unet_pet.yaml`,可以通过`--cfg`指向该文件来设置训练配置。 +为了方便体验,我们在configs目录下放置了配置文件`unet_optic.yaml`,可以通过`--cfg`指向该文件来设置训练配置。 -我们选择GPU 0号卡进行训练,这可以通过环境变量`CUDA_VISIBLE_DEVICES`来指定。 +可以通过环境变量`CUDA_VISIBLE_DEVICES`来指定GPU卡号。 ``` +# 指定GPU卡号(以0号卡为例) export CUDA_VISIBLE_DEVICES=0 -python pdseg/train.py --use_gpu \ +# 训练 +python pdseg/train.py --cfg configs/unet_optic.yaml \ + --use_gpu \ --do_eval \ --use_tb \ --tb_log_dir train_log \ - --cfg configs/unet_pet.yaml \ BATCH_SIZE 4 \ - TRAIN.PRETRAINED_MODEL_DIR pretrained_model/unet_bn_coco \ - SOLVER.LR 5e-5 + SOLVER.LR 0.001 + +``` +若需要使用多块GPU,以0、1、2号卡为例,可输入 +``` +export CUDA_VISIBLE_DEVICES=0,1,2 ``` **NOTE:** -* 上述示例中,一共存在三套配置方案: PaddleSeg默认配置/unet_pet.yaml/OPTIONS,三者的优先级顺序为 OPTIONS > yaml > 默认配置。这个原则对于train.py/eval.py/vis.py都适用 - -* 如果发现因为内存不足而Crash。请适当调低BATCH_SIZE。如果本机GPU内存充足,则可以调高BATCH_SIZE的大小以获得更快的训练速度,BATCH_SIZE增大时,可以适当调高学习率。 +* 如果发现因为内存不足而Crash。请适当调低`BATCH_SIZE`。如果本机GPU内存充足,则可以调高`BATCH_SIZE`的大小以获得更快的训练速度,`BATCH_SIZE`增大时,可以适当调高学习率`SOLVER.LR`. * 如果在Linux系统下训练,可以使用`--use_mpio`使用多进程I/O,通过提升数据增强的处理速度进而大幅度提升GPU利用率。 -### 训练过程可视化 +## 5.训练过程可视化 当打开do_eval和use_tb两个开关后,我们可以通过TensorBoard查看边训练边评估的效果。 @@ -101,40 +77,42 @@ tensorboard --logdir train_log --host {$HOST_IP} --port {$PORT} ``` NOTE: -1. 上述示例中,$HOST\_IP为机器IP地址,请替换为实际IP,$PORT请替换为可访问的端口 -2. 数据量较大时,前端加载速度会比较慢,请耐心等待 +1. 上述示例中,$HOST\_IP为机器IP地址,请替换为实际IP,$PORT请替换为可访问的端口。 +2. 
数据量较大时,前端加载速度会比较慢,请耐心等待。 -启动TensorBoard命令后,我们可以在浏览器中查看对应的训练数据 -在`SCALAR`这个tab中,查看训练loss、iou、acc的变化趋势 +启动TensorBoard命令后,我们可以在浏览器中查看对应的训练数据。 +在`SCALAR`这个tab中,查看训练loss、iou、acc的变化趋势。 ![](./imgs/tensorboard_scalar.JPG) -在`IMAGE`这个tab中,查看样本的预测情况 +在`IMAGE`这个tab中,查看样本图片。 ![](./imgs/tensorboard_image.JPG) -### 模型评估 -训练完成后,我们可以通过eval.py来评估模型效果。由于我们设置的训练EPOCH数量为100,保存间隔为10,因此一共会产生10个定期保存的模型,加上最终保存的final模型,一共有11个模型。我们选择最后保存的模型进行效果的评估: +## 6.模型评估 +训练完成后,我们可以通过eval.py来评估模型效果。由于我们设置的训练EPOCH数量为10,保存间隔为5,因此一共会产生2个定期保存的模型,加上最终保存的final模型,一共有3个模型。我们选择最后保存的模型进行效果的评估: ```shell python pdseg/eval.py --use_gpu \ - --cfg configs/unet_pet.yaml \ - TEST.TEST_MODEL saved_model/unet_pet/final + --cfg configs/unet_optic.yaml \ + TEST.TEST_MODEL saved_model/unet_optic/final ``` -可以看到,在经过训练后,模型在验证集上的mIoU指标达到了0.70+(由于随机种子等因素的影响,效果会有小范围波动,属于正常情况)。 +可以看到,在经过训练后,模型在验证集上的mIoU指标达到了0.85+(由于随机种子等因素的影响,效果会有小范围波动,属于正常情况)。 -### 模型可视化 -通过vis.py来评估模型效果,我们选择最后保存的模型进行效果的评估: +## 7.模型可视化 +通过vis.py进行测试和可视化,以选择最后保存的模型进行测试为例: ```shell python pdseg/vis.py --use_gpu \ - --cfg configs/unet_pet.yaml \ - TEST.TEST_MODEL saved_model/unet_pet/final + --cfg configs/unet_optic.yaml \ + TEST.TEST_MODEL saved_model/unet_optic/final ``` -执行上述脚本后,会在主目录下产生一个visual/visual_results文件夹,里面存放着测试集图片的预测结果,我们选择其中几张图片进行查看,可以看到,在测试集中的图片上的预测效果已经很不错: +执行上述脚本后,会在主目录下产生一个visual文件夹,里面存放着测试集图片的预测结果,我们选择其中1张图片进行查看: ![](./imgs/usage_vis_demo.jpg) -![](./imgs/usage_vis_demo2.jpg) -![](./imgs/usage_vis_demo3.jpg) `NOTE` -1. 可视化的图片会默认保存在visual/visual_results目录下,可以通过`--vis_dir`来指定输出目录 -2. 训练过程中会使用DATASET.VIS_FILE_LIST中的图片进行可视化显示,而vis.py则会使用DATASET.TEST_FILE_LIST +1. 可视化的图片会默认保存在visual目录下,可以通过`--vis_dir`来指定输出目录。 +2. 训练过程中会使用`DATASET.VIS_FILE_LIST`中的图片进行可视化显示,而vis.py则会使用`DATASET.TEST_FILE_LIST`. + +## 在线体验 + +PaddleSeg在AI Studio平台上提供了在线体验的快速入门教程,欢迎[点击体验](https://aistudio.baidu.com/aistudio/projectdetail/100798) diff --git a/pdseg/__init__.py b/pdseg/__init__.py index 7f051e1e16ed29046c6ea46e341d62e4280f412d..e1cb8ed082023155b95e6b6778b797a571b20ca8 100644 --- a/pdseg/__init__.py +++ b/pdseg/__init__.py @@ -14,3 +14,4 @@ # limitations under the License. import models import utils +from . 
import tools \ No newline at end of file diff --git a/pdseg/data_aug.py b/pdseg/data_aug.py index 15186150a3734a3a0c026386a04206ac036c7858..ae976bf7e4bb6751ba6ec4186a137cbf5644ce84 100644 --- a/pdseg/data_aug.py +++ b/pdseg/data_aug.py @@ -327,7 +327,7 @@ def random_jitter(cv_img, saturation_range, brightness_range, contrast_range): brightness_ratio = np.random.uniform(-brightness_range, brightness_range) contrast_ratio = np.random.uniform(-contrast_range, contrast_range) - order = [1, 2, 3] + order = [0, 1, 2] np.random.shuffle(order) for i in range(3): @@ -368,7 +368,7 @@ def hsv_color_jitter(crop_img, def rand_crop(crop_img, crop_seg, mode=ModelPhase.TRAIN): """ - 随机裁剪图片和标签图, 若crop尺寸大于原始尺寸,分别使用均值和ignore值填充再进行crop, + 随机裁剪图片和标签图, 若crop尺寸大于原始尺寸,分别使用DATASET.PADDING_VALUE值和DATASET.IGNORE_INDEX值填充再进行crop, crop尺寸与原始尺寸一致,返回原图,crop尺寸小于原始尺寸直接crop Args: diff --git a/pdseg/loss.py b/pdseg/loss.py index 36ba43b27fca957a31f9ba68160f66792686c619..14f1b3794b6c8a15f4da5cf2a838ab7339eeffc4 100644 --- a/pdseg/loss.py +++ b/pdseg/loss.py @@ -20,7 +20,7 @@ import importlib from utils.config import cfg -def softmax_with_loss(logit, label, ignore_mask=None, num_classes=2): +def softmax_with_loss(logit, label, ignore_mask=None, num_classes=2, weight=None): ignore_mask = fluid.layers.cast(ignore_mask, 'float32') label = fluid.layers.elementwise_min( label, fluid.layers.assign(np.array([num_classes - 1], dtype=np.int32))) @@ -29,12 +29,40 @@ def softmax_with_loss(logit, label, ignore_mask=None, num_classes=2): label = fluid.layers.reshape(label, [-1, 1]) label = fluid.layers.cast(label, 'int64') ignore_mask = fluid.layers.reshape(ignore_mask, [-1, 1]) - - loss, probs = fluid.layers.softmax_with_cross_entropy( - logit, - label, - ignore_index=cfg.DATASET.IGNORE_INDEX, - return_softmax=True) + if weight is None: + loss, probs = fluid.layers.softmax_with_cross_entropy( + logit, + label, + ignore_index=cfg.DATASET.IGNORE_INDEX, + return_softmax=True) + else: + label_one_hot = fluid.layers.one_hot(input=label, depth=num_classes) + if isinstance(weight, list): + assert len(weight) == num_classes, "weight length must equal num of classes" + weight = fluid.layers.assign(np.array([weight], dtype='float32')) + elif isinstance(weight, str): + assert weight.lower() == 'dynamic', 'if weight is string, must be dynamic!' 
+ tmp = [] + total_num = fluid.layers.cast(fluid.layers.shape(label)[0], 'float32') + for i in range(num_classes): + cls_pixel_num = fluid.layers.reduce_sum(label_one_hot[:, i]) + ratio = total_num / (cls_pixel_num + 1) + tmp.append(ratio) + weight = fluid.layers.concat(tmp) + weight = weight / fluid.layers.reduce_sum(weight) * num_classes + elif isinstance(weight, fluid.layers.Variable): + pass + else: + raise ValueError('Expect weight is a list, string or Variable, but receive {}'.format(type(weight))) + weight = fluid.layers.reshape(weight, [1, num_classes]) + weighted_label_one_hot = fluid.layers.elementwise_mul(label_one_hot, weight) + probs = fluid.layers.softmax(logit) + loss = fluid.layers.cross_entropy( + probs, + weighted_label_one_hot, + soft_label=True, + ignore_index=cfg.DATASET.IGNORE_INDEX) + weighted_label_one_hot.stop_gradient = True loss = loss * ignore_mask avg_loss = fluid.layers.mean(loss) / fluid.layers.mean(ignore_mask) @@ -43,6 +71,7 @@ def softmax_with_loss(logit, label, ignore_mask=None, num_classes=2): ignore_mask.stop_gradient = True return avg_loss + # to change, how to appicate ignore index and ignore mask def dice_loss(logit, label, ignore_mask=None, epsilon=0.00001): if logit.shape[1] != 1 or label.shape[1] != 1 or ignore_mask.shape[1] != 1: @@ -65,6 +94,7 @@ def dice_loss(logit, label, ignore_mask=None, epsilon=0.00001): ignore_mask.stop_gradient = True return fluid.layers.reduce_mean(dice_score) + def bce_loss(logit, label, ignore_mask=None): if logit.shape[1] != 1 or label.shape[1] != 1 or ignore_mask.shape[1] != 1: raise Exception("bce loss is only applicable to binary classfication") @@ -80,20 +110,22 @@ def bce_loss(logit, label, ignore_mask=None): return loss -def multi_softmax_with_loss(logits, label, ignore_mask=None, num_classes=2): +def multi_softmax_with_loss(logits, label, ignore_mask=None, num_classes=2, weight=None): if isinstance(logits, tuple): avg_loss = 0 for i, logit in enumerate(logits): - logit_label = fluid.layers.resize_nearest(label, logit.shape[2:]) - logit_mask = (logit_label.astype('int32') != + if label.shape[2] != logit.shape[2] or label.shape[3] != logit.shape[3]: + label = fluid.layers.resize_nearest(label, logit.shape[2:]) + logit_mask = (label.astype('int32') != cfg.DATASET.IGNORE_INDEX).astype('int32') - loss = softmax_with_loss(logit, logit_label, logit_mask, + loss = softmax_with_loss(logit, label, logit_mask, num_classes) avg_loss += cfg.MODEL.MULTI_LOSS_WEIGHT[i] * loss else: - avg_loss = softmax_with_loss(logits, label, ignore_mask, num_classes) + avg_loss = softmax_with_loss(logits, label, ignore_mask, num_classes, weight=weight) return avg_loss + def multi_dice_loss(logits, label, ignore_mask=None): if isinstance(logits, tuple): avg_loss = 0 @@ -107,6 +139,7 @@ def multi_dice_loss(logits, label, ignore_mask=None): avg_loss = dice_loss(logits, label, ignore_mask) return avg_loss + def multi_bce_loss(logits, label, ignore_mask=None): if isinstance(logits, tuple): avg_loss = 0 diff --git a/pdseg/models/__init__.py b/pdseg/models/__init__.py index f2a9093490fc284154c8e09dc5c58e638c567d26..f1465913991c5aaffefff26c1f5a5d668edd1596 100644 --- a/pdseg/models/__init__.py +++ b/pdseg/models/__init__.py @@ -14,5 +14,3 @@ # limitations under the License. 
import models.modeling -import models.libs -import models.backbone diff --git a/pdseg/models/backbone/vgg.py b/pdseg/models/backbone/vgg.py new file mode 100644 index 0000000000000000000000000000000000000000..7e9df0a66cd85b291aad8846eed30c9bb7b4e947 --- /dev/null +++ b/pdseg/models/backbone/vgg.py @@ -0,0 +1,81 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import paddle +import paddle.fluid as fluid +from paddle.fluid import ParamAttr + +__all__ = ["VGGNet"] + + +def check_points(count, points): + if points is None: + return False + else: + if isinstance(points, list): + return (True if count in points else False) + else: + return (True if count == points else False) + + +class VGGNet(): + def __init__(self, layers=16): + self.layers = layers + + def net(self, input, class_dim=1000, end_points=None, decode_points=None): + short_cuts = dict() + layers_count = 0 + layers = self.layers + vgg_spec = { + 11: ([1, 1, 2, 2, 2]), + 13: ([2, 2, 2, 2, 2]), + 16: ([2, 2, 3, 3, 3]), + 19: ([2, 2, 4, 4, 4]) + } + assert layers in vgg_spec.keys(), \ + "supported layers are {} but input layer is {}".format(vgg_spec.keys(), layers) + + nums = vgg_spec[layers] + channels = [64, 128, 256, 512, 512] + conv = input + for i in range(len(nums)): + conv = self.conv_block(conv, channels[i], nums[i], name="conv" + str(i + 1) + "_") + layers_count += nums[i] + if check_points(layers_count, decode_points): + short_cuts[layers_count] = conv + if check_points(layers_count, end_points): + return conv, short_cuts + + return conv + + def conv_block(self, input, num_filter, groups, name=None): + conv = input + for i in range(groups): + conv = fluid.layers.conv2d( + input=conv, + num_filters=num_filter, + filter_size=3, + stride=1, + padding=1, + act='relu', + param_attr=fluid.param_attr.ParamAttr( + name=name + str(i + 1) + "_weights"), + bias_attr=False) + return fluid.layers.pool2d( + input=conv, pool_size=2, pool_type='max', pool_stride=2) diff --git a/pdseg/models/libs/model_libs.py b/pdseg/models/libs/model_libs.py index 19afe54224f259cbd98c189d6bc7196138ed8863..84494a9dd892105c799119c7a467b584c23f4241 100644 --- a/pdseg/models/libs/model_libs.py +++ b/pdseg/models/libs/model_libs.py @@ -164,3 +164,37 @@ def separate_conv(input, channel, stride, filter, dilation=1, act=None): input = bn(input) if act: input = act(input) return input + + +def conv_bn_layer(input, + filter_size, + num_filters, + stride, + padding, + channels=None, + num_groups=1, + if_act=True, + name=None, + use_cudnn=True): + conv = fluid.layers.conv2d( + input=input, + num_filters=num_filters, + filter_size=filter_size, + stride=stride, + padding=padding, + groups=num_groups, + act=None, + use_cudnn=use_cudnn, + param_attr=fluid.ParamAttr(name=name + '_weights'), + bias_attr=False) + bn_name = name + '_bn' + bn = fluid.layers.batch_norm( + input=conv, + 
param_attr=fluid.ParamAttr(name=bn_name + "_scale"), + bias_attr=fluid.ParamAttr(name=bn_name + "_offset"), + moving_mean_name=bn_name + '_mean', + moving_variance_name=bn_name + '_variance') + if if_act: + return fluid.layers.relu6(bn) + else: + return bn \ No newline at end of file diff --git a/pdseg/models/model_builder.py b/pdseg/models/model_builder.py index 495652464f8cd14fef650bf5bdc77c14ebdbb4e7..668d69e44aeb91cc7705a79f092730ae6a1fdb09 100644 --- a/pdseg/models/model_builder.py +++ b/pdseg/models/model_builder.py @@ -24,7 +24,7 @@ from utils.config import cfg from loss import multi_softmax_with_loss from loss import multi_dice_loss from loss import multi_bce_loss -from models.modeling import deeplab, unet, icnet, pspnet, hrnet +from models.modeling import deeplab, unet, icnet, pspnet, hrnet, fast_scnn class ModelPhase(object): @@ -81,9 +81,11 @@ def seg_model(image, class_num): logits = pspnet.pspnet(image, class_num) elif model_name == 'hrnet': logits = hrnet.hrnet(image, class_num) + elif model_name == 'fast_scnn': + logits = fast_scnn.fast_scnn(image, class_num) else: raise Exception( - "unknow model name, only support unet, deeplabv3p, icnet, pspnet, hrnet" + "unknow model name, only support unet, deeplabv3p, icnet, pspnet, hrnet, fast_scnn" ) return logits @@ -223,8 +225,9 @@ def build_model(main_prog, start_prog, phase=ModelPhase.TRAIN): avg_loss_list = [] valid_loss = [] if "softmax_loss" in loss_type: + weight = cfg.SOLVER.CROSS_ENTROPY_WEIGHT avg_loss_list.append( - multi_softmax_with_loss(logits, label, mask, class_num)) + multi_softmax_with_loss(logits, label, mask, class_num, weight)) loss_valid = True valid_loss.append("softmax_loss") if "dice_loss" in loss_type: diff --git a/pdseg/models/modeling/deeplab.py b/pdseg/models/modeling/deeplab.py index e7ed9604b2227bb498c2eb0b863804fbe0159333..186e2406d90d291de43133550875072d790a805f 100644 --- a/pdseg/models/modeling/deeplab.py +++ b/pdseg/models/modeling/deeplab.py @@ -27,6 +27,7 @@ from models.libs.model_libs import separate_conv from models.backbone.mobilenet_v2 import MobileNetV2 as mobilenet_backbone from models.backbone.xception import Xception as xception_backbone + def encoder(input): # 编码器配置,采用ASPP架构,pooling + 1x1_conv + 三个不同尺度的空洞卷积并行, concat后1x1conv # ASPP_WITH_SEP_CONV:默认为真,使用depthwise可分离卷积,否则使用普通卷积 @@ -47,8 +48,7 @@ def encoder(input): with scope('encoder'): channel = 256 with scope("image_pool"): - image_avg = fluid.layers.reduce_mean( - input, [2, 3], keep_dim=True) + image_avg = fluid.layers.reduce_mean(input, [2, 3], keep_dim=True) image_avg = bn_relu( conv( image_avg, @@ -250,14 +250,15 @@ def deeplabv3p(img, num_classes): regularization_coeff=0.0), initializer=fluid.initializer.TruncatedNormal(loc=0.0, scale=0.01)) with scope('logit'): - logit = conv( - data, - num_classes, - 1, - stride=1, - padding=0, - bias_attr=True, - param_attr=param_attr) + with fluid.name_scope('last_conv'): + logit = conv( + data, + num_classes, + 1, + stride=1, + padding=0, + bias_attr=True, + param_attr=param_attr) logit = fluid.layers.resize_bilinear(logit, img.shape[2:]) return logit diff --git a/pdseg/models/modeling/fast_scnn.py b/pdseg/models/modeling/fast_scnn.py new file mode 100644 index 0000000000000000000000000000000000000000..b1ecdffea6625992e0c7e9e635e67ee79b7b4522 --- /dev/null +++ b/pdseg/models/modeling/fast_scnn.py @@ -0,0 +1,263 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import paddle.fluid as fluid +from models.libs.model_libs import scope +from models.libs.model_libs import bn, bn_relu, relu, conv_bn_layer +from models.libs.model_libs import conv, avg_pool +from models.libs.model_libs import separate_conv +from utils.config import cfg + + +def learning_to_downsample(x, dw_channels1=32, dw_channels2=48, out_channels=64): + x = relu(bn(conv(x, dw_channels1, 3, 2))) + with scope('dsconv1'): + x = separate_conv(x, dw_channels2, stride=2, filter=3, act=fluid.layers.relu) + with scope('dsconv2'): + x = separate_conv(x, out_channels, stride=2, filter=3, act=fluid.layers.relu) + return x + + +def shortcut(input, data_residual): + return fluid.layers.elementwise_add(input, data_residual) + + +def dropout2d(input, prob, is_train=False): + if not is_train: + return input + channels = input.shape[1] + keep_prob = 1.0 - prob + random_tensor = keep_prob + fluid.layers.uniform_random_batch_size_like(input, [-1, channels, 1, 1], min=0., max=1.) + binary_tensor = fluid.layers.floor(random_tensor) + output = input / keep_prob * binary_tensor + return output + + +def inverted_residual_unit(input, + num_in_filter, + num_filters, + ifshortcut, + stride, + filter_size, + padding, + expansion_factor, + name=None): + num_expfilter = int(round(num_in_filter * expansion_factor)) + + channel_expand = conv_bn_layer( + input=input, + num_filters=num_expfilter, + filter_size=1, + stride=1, + padding=0, + num_groups=1, + if_act=True, + name=name + '_expand') + + bottleneck_conv = conv_bn_layer( + input=channel_expand, + num_filters=num_expfilter, + filter_size=filter_size, + stride=stride, + padding=padding, + num_groups=num_expfilter, + if_act=True, + name=name + '_dwise', + use_cudnn=False) + + depthwise_output = bottleneck_conv + + linear_out = conv_bn_layer( + input=bottleneck_conv, + num_filters=num_filters, + filter_size=1, + stride=1, + padding=0, + num_groups=1, + if_act=False, + name=name + '_linear') + + if ifshortcut: + out = shortcut(input=input, data_residual=linear_out) + return out, depthwise_output + else: + return linear_out, depthwise_output + + +def inverted_blocks(input, in_c, t, c, n, s, name=None): + first_block, depthwise_output = inverted_residual_unit( + input=input, + num_in_filter=in_c, + num_filters=c, + ifshortcut=False, + stride=s, + filter_size=3, + padding=1, + expansion_factor=t, + name=name + '_1') + + last_residual_block = first_block + last_c = c + + for i in range(1, n): + last_residual_block, depthwise_output = inverted_residual_unit( + input=last_residual_block, + num_in_filter=last_c, + num_filters=c, + ifshortcut=True, + stride=1, + filter_size=3, + padding=1, + expansion_factor=t, + name=name + '_' + str(i + 1)) + return last_residual_block, depthwise_output + + +def psp_module(input, out_features): + + cat_layers = [] + sizes = (1, 2, 3, 6) + for size in sizes: + psp_name = "psp" + 
str(size) + with scope(psp_name): + pool = fluid.layers.adaptive_pool2d(input, + pool_size=[size, size], + pool_type='avg', + name=psp_name + '_adapool') + data = conv(pool, out_features, + filter_size=1, + bias_attr=False, + name=psp_name + '_conv') + data_bn = bn(data, act='relu') + interp = fluid.layers.resize_bilinear(data_bn, + out_shape=input.shape[2:], + name=psp_name + '_interp', align_mode=0) + cat_layers.append(interp) + cat_layers = [input] + cat_layers + out = fluid.layers.concat(cat_layers, axis=1, name='psp_cat') + + return out + + +class FeatureFusionModule: + """Feature fusion module""" + + def __init__(self, higher_in_channels, lower_in_channels, out_channels, scale_factor=4): + self.higher_in_channels = higher_in_channels + self.lower_in_channels = lower_in_channels + self.out_channels = out_channels + self.scale_factor = scale_factor + + def net(self, higher_res_feature, lower_res_feature): + h, w = higher_res_feature.shape[2:] + lower_res_feature = fluid.layers.resize_bilinear(lower_res_feature, [h, w], align_mode=0) + + with scope('dwconv'): + lower_res_feature = relu(bn(conv(lower_res_feature, self.out_channels, 1)))#(lower_res_feature) + with scope('conv_lower_res'): + lower_res_feature = bn(conv(lower_res_feature, self.out_channels, 1, bias_attr=True)) + with scope('conv_higher_res'): + higher_res_feature = bn(conv(higher_res_feature, self.out_channels, 1, bias_attr=True)) + out = higher_res_feature + lower_res_feature + + return relu(out) + + +class GlobalFeatureExtractor(): + """Global feature extractor module""" + + def __init__(self, in_channels=64, block_channels=(64, 96, 128), out_channels=128, + t=6, num_blocks=(3, 3, 3)): + self.in_channels = in_channels + self.block_channels = block_channels + self.out_channels = out_channels + self.t = t + self.num_blocks = num_blocks + + def net(self, x): + x, _ = inverted_blocks(x, self.in_channels, self.t, self.block_channels[0], + self.num_blocks[0], 2, 'inverted_block_1') + x, _ = inverted_blocks(x, self.block_channels[0], self.t, self.block_channels[1], + self.num_blocks[1], 2, 'inverted_block_2') + x, _ = inverted_blocks(x, self.block_channels[1], self.t, self.block_channels[2], + self.num_blocks[2], 1, 'inverted_block_3') + x = psp_module(x, self.block_channels[2] // 4) + with scope('out'): + x = relu(bn(conv(x, self.out_channels, 1))) + return x + + +class Classifier: + """Classifier""" + + def __init__(self, dw_channels, num_classes, stride=1): + self.dw_channels = dw_channels + self.num_classes = num_classes + self.stride = stride + + def net(self, x): + with scope('dsconv1'): + x = separate_conv(x, self.dw_channels, stride=self.stride, filter=3, act=fluid.layers.relu) + with scope('dsconv2'): + x = separate_conv(x, self.dw_channels, stride=self.stride, filter=3, act=fluid.layers.relu) + x = dropout2d(x, 0.1, is_train=cfg.PHASE=='train') + x = conv(x, self.num_classes, 1, bias_attr=True) + return x + + +def aux_layer(x, num_classes): + x = relu(bn(conv(x, 32, 3, padding=1))) + x = dropout2d(x, 0.1, is_train=(cfg.PHASE == 'train')) + with scope('logit'): + x = conv(x, num_classes, 1, bias_attr=True) + return x + + +def fast_scnn(img, num_classes): + size = img.shape[2:] + classifier = Classifier(128, num_classes) + + global_feature_extractor = GlobalFeatureExtractor(64, [64, 96, 128], 128, 6, [3, 3, 3]) + feature_fusion = FeatureFusionModule(64, 128, 128) + + with scope('learning_to_downsample'): + higher_res_features = learning_to_downsample(img, 32, 48, 64) + with scope('global_feature_extractor'): + 
lower_res_feature = global_feature_extractor.net(higher_res_features) + with scope('feature_fusion'): + x = feature_fusion.net(higher_res_features, lower_res_feature) + with scope('classifier'): + logit = classifier.net(x) + logit = fluid.layers.resize_bilinear(logit, size, align_mode=0) + + if len(cfg.MODEL.MULTI_LOSS_WEIGHT) == 3: + with scope('aux_layer_higher'): + higher_logit = aux_layer(higher_res_features, num_classes) + higher_logit = fluid.layers.resize_bilinear(higher_logit, size, align_mode=0) + with scope('aux_layer_lower'): + lower_logit = aux_layer(lower_res_feature, num_classes) + lower_logit = fluid.layers.resize_bilinear(lower_logit, size, align_mode=0) + return logit, higher_logit, lower_logit + elif len(cfg.MODEL.MULTI_LOSS_WEIGHT) == 2: + with scope('aux_layer_higher'): + higher_logit = aux_layer(higher_res_features, num_classes) + higher_logit = fluid.layers.resize_bilinear(higher_logit, size, align_mode=0) + return logit, higher_logit + + return logit \ No newline at end of file diff --git a/pdseg/reader.py b/pdseg/reader.py index d3c3659e5064cd8a11e463267a4b046ffdf105ca..7f1fd6fbbe25f1199c9247aa9e42ae7cb682c03d 100644 --- a/pdseg/reader.py +++ b/pdseg/reader.py @@ -98,8 +98,8 @@ class SegDataset(object): # Re-shuffle file list if self.shuffle and cfg.NUM_TRAINERS > 1: np.random.RandomState(self.shuffle_seed).shuffle(self.all_lines) - num_lines = len(self.all_lines) // self.num_trainers - self.lines = self.all_lines[num_lines * self.trainer_id: num_lines * (self.trainer_id + 1)] + num_lines = len(self.all_lines) // cfg.NUM_TRAINERS + self.lines = self.all_lines[num_lines * cfg.TRAINER_ID: num_lines * (cfg.TRAINER_ID + 1)] self.shuffle_seed += 1 elif self.shuffle: np.random.shuffle(self.lines) diff --git a/pdseg/tools/create_dataset_list.py b/pdseg/tools/create_dataset_list.py index aca6d95d20bc645c1843399c99f5e56d4560f7f8..6c7d7c943c9baf916533621d353d5f2700388a01 100644 --- a/pdseg/tools/create_dataset_list.py +++ b/pdseg/tools/create_dataset_list.py @@ -116,18 +116,19 @@ def generate_list(args): label_files = get_files(1, dataset_split, args) if not image_files: img_dir = os.path.join(dataset_root, args.folder[0], dataset_split) - print("No files in {}".format(img_dir)) + warnings.warn("No images in {} !!!".format(img_dir)) num_images = len(image_files) if not label_files: label_dir = os.path.join(dataset_root, args.folder[1], dataset_split) - print("No files in {}".format(label_dir)) + warnings.warn("No labels in {} !!!".format(label_dir)) num_label = len(label_files) - if num_images < num_label: - warnings.warn("number of images = {} < number of labels = {}." 
- .format(num_images, num_label)) - continue + if num_images != num_label and num_label > 0: + raise Exception("Number of images = {} number of labels = {} \n" + "Either number of images is equal to number of labels, " + "or number of labels is equal to 0.\n" + "Please check your dataset!".format(num_images, num_label)) file_list = os.path.join(dataset_root, dataset_split + '.txt') with open(file_list, "w") as f: diff --git a/pdseg/tools/gray2pseudo_color.py b/pdseg/tools/gray2pseudo_color.py index b385049172c4b134aca849682cbf76193c569f62..3627db0b216175b04a50d9012999d441f4df69fb 100644 --- a/pdseg/tools/gray2pseudo_color.py +++ b/pdseg/tools/gray2pseudo_color.py @@ -2,13 +2,11 @@ from __future__ import print_function import argparse -import glob import os import os.path as osp import sys import numpy as np from PIL import Image -from pdseg.vis import get_color_map_list def parse_args(): @@ -26,6 +24,28 @@ def parse_args(): return parser.parse_args() +def get_color_map_list(num_classes): + """ Returns the color map for visualizing the segmentation mask, + which can support arbitrary number of classes. + Args: + num_classes: Number of classes + Returns: + The color map + """ + color_map = num_classes * [0, 0, 0] + for i in range(0, num_classes): + j = 0 + lab = i + while lab: + color_map[i * 3] |= (((lab >> 0) & 1) << (7 - j)) + color_map[i * 3 + 1] |= (((lab >> 1) & 1) << (7 - j)) + color_map[i * 3 + 2] |= (((lab >> 2) & 1) << (7 - j)) + j += 1 + lab >>= 3 + + return color_map + + def gray2pseudo_color(args): """将灰度标注图片转换为伪彩色图片""" input = args.dir_or_file @@ -36,18 +56,28 @@ def gray2pseudo_color(args): color_map = get_color_map_list(256) if os.path.isdir(input): - for grt_path in glob.glob(osp.join(input, '*.png')): - print('Converting original label:', grt_path) - basename = osp.basename(grt_path) + for fpath, dirs, fs in os.walk(input): + for f in fs: + try: + grt_path = osp.join(fpath, f) + _output_dir = fpath.replace(input, '') + _output_dir = _output_dir.lstrip(os.path.sep) - im = Image.open(grt_path) - lbl = np.asarray(im) + im = Image.open(grt_path) + lbl = np.asarray(im) - lbl_pil = Image.fromarray(lbl.astype(np.uint8), mode='P') - lbl_pil.putpalette(color_map) + lbl_pil = Image.fromarray(lbl.astype(np.uint8), mode='P') + lbl_pil.putpalette(color_map) - new_file = osp.join(output_dir, basename) - lbl_pil.save(new_file) + real_dir = osp.join(output_dir, _output_dir) + if not osp.exists(real_dir): + os.makedirs(real_dir) + new_grt_path = osp.join(real_dir, f) + + lbl_pil.save(new_grt_path) + print('New label path:', new_grt_path) + except: + continue elif os.path.isfile(input): if args.dataset_dir is None or args.file_separator is None: print('No dataset_dir or file_separator input!') @@ -58,17 +88,20 @@ def gray2pseudo_color(args): grt_name = parts[1] grt_path = os.path.join(args.dataset_dir, grt_name) - print('Converting original label:', grt_path) - basename = osp.basename(grt_path) - im = Image.open(grt_path) lbl = np.asarray(im) lbl_pil = Image.fromarray(lbl.astype(np.uint8), mode='P') lbl_pil.putpalette(color_map) - new_file = osp.join(output_dir, basename) - lbl_pil.save(new_file) + grt_dir, _ = osp.split(grt_name) + new_dir = osp.join(output_dir, grt_dir) + if not osp.exists(new_dir): + os.makedirs(new_dir) + new_grt_path = osp.join(output_dir, grt_name) + + lbl_pil.save(new_grt_path) + print('New label path:', new_grt_path) else: print('It\'s neither a dir nor a file') diff --git a/pdseg/tools/jingling2seg.py b/pdseg/tools/jingling2seg.py index 
9c1d663685cb357017387c54ed25115e6117408e..28bce3b0436242f5174087c0852dde99a7878684 100644 --- a/pdseg/tools/jingling2seg.py +++ b/pdseg/tools/jingling2seg.py @@ -12,7 +12,7 @@ import numpy as np import PIL.Image import labelme -from pdseg.vis import get_color_map_list +from gray2pseudo_color import get_color_map_list def parse_args(): diff --git a/pdseg/tools/labelme2seg.py b/pdseg/tools/labelme2seg.py index be1c99ee32c249cda29fea3d628b707415bf8b23..6ae3ad3a50a6df750ce321d94b7235ef57dcf80b 100755 --- a/pdseg/tools/labelme2seg.py +++ b/pdseg/tools/labelme2seg.py @@ -12,7 +12,7 @@ import numpy as np import PIL.Image import labelme -from pdseg.vis import get_color_map_list +from gray2pseudo_color import get_color_map_list def parse_args(): diff --git a/pdseg/train.py b/pdseg/train.py index 4f6a90e003c0b2997daceab684b7199f52c9aafc..8254f1655c97c09204d2e4a64e2404907270fcfc 100644 --- a/pdseg/train.py +++ b/pdseg/train.py @@ -24,12 +24,14 @@ os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0" import sys import argparse import pprint +import random import shutil import functools import paddle import numpy as np import paddle.fluid as fluid +from paddle.fluid import profiler from utils.config import cfg from utils.timer import Timer, calculate_eta @@ -95,6 +97,24 @@ def parse_args(): help='See utils/config.py for all options', default=None, nargs=argparse.REMAINDER) + parser.add_argument( + '--enable_ce', + dest='enable_ce', + help='If set True, enable continuous evaluation job.' + 'This flag is only used for internal test.', + action='store_true') + + # NOTE: This for benchmark + parser.add_argument( + '--is_profiler', + help='the profiler switch.(used for benchmark)', + default=0, + type=int) + parser.add_argument( + '--profiler_path', + help='the profiler output file path.(used for benchmark)', + default='./seg.profiler', + type=str) return parser.parse_args() @@ -194,6 +214,9 @@ def print_info(*msg): def train(cfg): startup_prog = fluid.Program() train_prog = fluid.Program() + if args.enable_ce: + startup_prog.random_seed = 1000 + train_prog.random_seed = 1000 drop_last = True dataset = SegDataset( @@ -431,6 +454,13 @@ def train(cfg): sys.stdout.flush() avg_loss = 0.0 timer.restart() + + # NOTE : used for benchmark, profiler tools + if args.is_profiler and epoch == 1 and global_step == args.log_steps: + profiler.start_profiler("All") + elif args.is_profiler and epoch == 1 and global_step == args.log_steps + 5: + profiler.stop_profiler("total", args.profiler_path) + return except fluid.core.EOFException: py_reader.reset() @@ -483,6 +513,9 @@ def main(args): cfg.update_from_file(args.cfg_file) if args.opts: cfg.update_from_list(args.opts) + if args.enable_ce: + random.seed(0) + np.random.seed(0) cfg.TRAINER_ID = int(os.getenv("PADDLE_TRAINER_ID", 0)) cfg.NUM_TRAINERS = int(os.environ.get('PADDLE_TRAINERS_NUM', 1)) diff --git a/pdseg/utils/config.py b/pdseg/utils/config.py index 5d66c2f076ca964fcdf23d1cfd427e61acf68876..c3d84216752838a388fd2cda1946949d77960fb9 100644 --- a/pdseg/utils/config.py +++ b/pdseg/utils/config.py @@ -72,17 +72,11 @@ cfg.DATASET.IGNORE_INDEX = 255 cfg.DATASET.PADDING_VALUE = [127.5, 127.5, 127.5] ########################### 数据增强配置 ###################################### -# 图像镜像左右翻转 -cfg.AUG.MIRROR = True -# 图像上下翻转开关,True/False -cfg.AUG.FLIP = False -# 图像启动上下翻转的概率,0-1 -cfg.AUG.FLIP_RATIO = 0.5 -# 图像resize的固定尺寸(宽,高),非负 -cfg.AUG.FIX_RESIZE_SIZE = tuple() # 图像resize的方式有三种: # unpadding(固定尺寸),stepscaling(按比例resize),rangescaling(长边对齐) -cfg.AUG.AUG_METHOD = 'rangescaling' 
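+# NOTE: the default method is now 'unpadding' (fixed-size resize); every image is
+# resized to FIX_RESIZE_SIZE below unless a yaml config overrides it.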
+cfg.AUG.AUG_METHOD = 'unpadding' +# 图像resize的固定尺寸(宽,高),非负 +cfg.AUG.FIX_RESIZE_SIZE = (512, 512) # 图像resize方式为stepscaling,resize最小尺度,非负 cfg.AUG.MIN_SCALE_FACTOR = 0.5 # 图像resize方式为stepscaling,resize最大尺度,不小于MIN_SCALE_FACTOR @@ -98,6 +92,13 @@ cfg.AUG.MAX_RESIZE_VALUE = 600 # 在MIN_RESIZE_VALUE到MAX_RESIZE_VALUE范围内 cfg.AUG.INF_RESIZE_VALUE = 500 +# 图像镜像左右翻转 +cfg.AUG.MIRROR = True +# 图像上下翻转开关,True/False +cfg.AUG.FLIP = False +# 图像启动上下翻转的概率,0-1 +cfg.AUG.FLIP_RATIO = 0.5 + # RichCrop数据增广开关,用于提升模型鲁棒性 cfg.AUG.RICH_CROP.ENABLE = False # 图像旋转最大角度,0-90 @@ -158,13 +159,16 @@ cfg.SOLVER.LOSS = ["softmax_loss"] cfg.SOLVER.LR_WARMUP = False # warmup的迭代次数 cfg.SOLVER.LR_WARMUP_STEPS = 2000 - +# cross entropy weight, 默认为None,如果设置为'dynamic',会根据每个batch中各个类别的数目, +# 动态调整类别权重。 +# 也可以设置一个静态权重(list的方式),比如有3类,每个类别权重可以设置为[0.1, 2.0, 0.9] +cfg.SOLVER.CROSS_ENTROPY_WEIGHT = None ########################## 测试配置 ########################################### # 测试模型路径 cfg.TEST.TEST_MODEL = '' ########################## 模型通用配置 ####################################### -# 模型名称, 支持deeplab, unet, icnet三种 +# 模型名称, 已支持deeplabv3p, unet, icnet,pspnet,hrnet cfg.MODEL.MODEL_NAME = '' # BatchNorm类型: bn、gn(group_norm) cfg.MODEL.DEFAULT_NORM_TYPE = 'bn' @@ -232,3 +236,19 @@ cfg.FREEZE.MODEL_FILENAME = '__model__' cfg.FREEZE.PARAMS_FILENAME = '__params__' # 预测模型参数保存的路径 cfg.FREEZE.SAVE_DIR = 'freeze_model' + +########################## paddle-slim ###################################### +cfg.SLIM.KNOWLEDGE_DISTILL_IS_TEACHER = False +cfg.SLIM.KNOWLEDGE_DISTILL = False +cfg.SLIM.KNOWLEDGE_DISTILL_TEACHER_MODEL_DIR = "" + +cfg.SLIM.NAS_PORT = 23333 +cfg.SLIM.NAS_ADDRESS = "" +cfg.SLIM.NAS_SEARCH_STEPS = 100 +cfg.SLIM.NAS_START_EVAL_EPOCH = 0 +cfg.SLIM.NAS_IS_SERVER = True +cfg.SLIM.NAS_SPACE_NAME = "" + +cfg.SLIM.PRUNE_PARAMS = '' +cfg.SLIM.PRUNE_RATIOS = [] + diff --git a/pdseg/vis.py b/pdseg/vis.py index 9fc349a3876f2667f8cc86bc1b9556594acfa638..d94221c0be1a0b4abe241e75966215863d8fd35d 100644 --- a/pdseg/vis.py +++ b/pdseg/vis.py @@ -34,6 +34,7 @@ from utils.config import cfg from reader import SegDataset from models.model_builder import build_model from models.model_builder import ModelPhase +from tools.gray2pseudo_color import get_color_map_list def parse_args(): @@ -73,28 +74,6 @@ def makedirs(directory): os.makedirs(directory) -def get_color_map_list(num_classes): - """ Returns the color map for visualizing the segmentation mask, - which can support arbitrary number of classes. 
- Args: - num_classes: Number of classes - Returns: - The color map - """ - color_map = num_classes * [0, 0, 0] - for i in range(0, num_classes): - j = 0 - lab = i - while lab: - color_map[i * 3] |= (((lab >> 0) & 1) << (7 - j)) - color_map[i * 3 + 1] |= (((lab >> 1) & 1) << (7 - j)) - color_map[i * 3 + 2] |= (((lab >> 2) & 1) << (7 - j)) - j += 1 - lab >>= 3 - - return color_map - - def to_png_fn(fn): """ Append png as filename postfix @@ -108,7 +87,7 @@ def to_png_fn(fn): def visualize(cfg, vis_file_list=None, use_gpu=False, - vis_dir="visual_predict", + vis_dir="visual", ckpt_dir=None, log_writer=None, local_test=False, @@ -138,7 +117,7 @@ def visualize(cfg, fluid.io.load_params(exe, ckpt_dir, main_program=test_prog) - save_dir = os.path.join('visual', vis_dir) + save_dir = vis_dir makedirs(save_dir) fetch_list = [pred.name] diff --git a/pretrained_model/download_model.py b/pretrained_model/download_model.py index 12b01472457bd25e22005141b21bb9d3014bf4fe..28b5ae421425a42e959fa6cf792c3e536e53c964 100644 --- a/pretrained_model/download_model.py +++ b/pretrained_model/download_model.py @@ -81,6 +81,8 @@ model_urls = { "https://paddleseg.bj.bcebos.com/models/pspnet101_cityscapes.tgz", "hrnet_w18_bn_cityscapes": "https://paddleseg.bj.bcebos.com/models/hrnet_w18_bn_cityscapes.tgz", + "fast_scnn_cityscapes": + "https://paddleseg.bj.bcebos.com/models/fast_scnn_cityscape.tar", } if __name__ == "__main__": diff --git a/slim/distillation/README.md b/slim/distillation/README.md new file mode 100644 index 0000000000000000000000000000000000000000..2bd772a1001e11efa89324315fa32d44032ade05 --- /dev/null +++ b/slim/distillation/README.md @@ -0,0 +1,100 @@ +>运行该示例前请安装PaddleSlim和Paddle1.6或更高版本 + +# PaddleSeg蒸馏教程 + +在阅读本教程前,请确保您已经了解过[PaddleSeg使用说明](../../docs/usage.md)等章节,以便对PaddleSeg有一定的了解 + +该文档介绍如何使用[PaddleSlim](https://paddlepaddle.github.io/PaddleSlim)对分割库中的模型进行蒸馏。 + +该教程中所示操作,如无特殊说明,均在`PaddleSeg/`路径下执行。 + +## 概述 + +该示例使用PaddleSlim提供的[蒸馏策略](https://paddlepaddle.github.io/PaddleSlim/algo/algo/#3)对分割库中的模型进行蒸馏训练。 +在阅读该示例前,建议您先了解以下内容: + +- [PaddleSlim蒸馏API文档](https://paddlepaddle.github.io/PaddleSlim/api/single_distiller_api/) + +## 安装PaddleSlim +可按照[PaddleSlim使用文档](https://paddlepaddle.github.io/PaddleSlim/)中的步骤安装PaddleSlim + +## 蒸馏策略说明 + +关于蒸馏API如何使用您可以参考PaddleSlim蒸馏API文档 + +这里以Deeplabv3-xception蒸馏训练Deeplabv3-mobilenet模型为例,首先,为了对`student model`和`teacher model`有个总体的认识,进一步确认蒸馏的对象,我们通过以下命令分别观察两个网络变量(Variables)的名称和形状: + +```python +# 观察student model的Variables +student_vars = [] +for v in fluid.default_main_program().list_vars(): + try: + student_vars.append((v.name, v.shape)) + except: + pass +print("="*50+"student_model_vars"+"="*50) +print(student_vars) +# 观察teacher model的Variables +teacher_vars = [] +for v in teacher_program.list_vars(): + try: + teacher_vars.append((v.name, v.shape)) + except: + pass +print("="*50+"teacher_model_vars"+"="*50) +print(teacher_vars) +``` + +经过对比可以发现,`student model`和`teacher model`输入到`loss`的特征图分别为: + +```bash +# student model +bilinear_interp_0.tmp_0 +# teacher model +bilinear_interp_2.tmp_0 +``` + + +它们形状两两相同,且分别处于两个网络的输出部分。所以,我们用`l2_loss`对这几个特征图两两对应添加蒸馏loss。需要注意的是,teacher的Variable在merge过程中被自动添加了一个`name_prefix`,所以这里也需要加上这个前缀`"teacher_"`,merge过程请参考[蒸馏API文档](https://paddlepaddle.github.io/PaddleSlim/api/single_distiller_api/#merge) + +```python +distill_loss = l2_loss('teacher_bilinear_interp_2.tmp_0', 'bilinear_interp_0.tmp_0') +``` + +我们也可以根据上述操作为蒸馏策略选择其他loss,PaddleSlim支持的有`FSP_loss`, `L2_loss`, `softmax_with_cross_entropy_loss` 以及自定义的任何loss。 + +## 训练 + 
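Before looking at the training script, it may help to see what the distillation term actually contributes to the objective. The snippet below is only a NumPy illustration (it is not the PaddleSlim API): it computes the mean squared difference between matched teacher and student feature maps, which is what `l2_loss` adds, scaled by a weight, on top of the student's ordinary segmentation loss.

```python
import numpy as np

def l2_distill_term(teacher_feat, student_feat):
    """Mean squared difference between matched teacher/student feature maps."""
    assert teacher_feat.shape == student_feat.shape
    return float(np.mean((teacher_feat - student_feat) ** 2))

# Toy stand-ins for the two matched interp outputs, laid out as (N, C, H, W).
teacher = np.random.rand(2, 19, 96, 96).astype("float32")
student = np.random.rand(2, 19, 96, 96).astype("float32")

# train_distill.py scales this term by 0.1 before adding it to the student's own loss.
print("distillation term:", 0.1 * l2_distill_term(teacher, student))
```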
+根据[PaddleSeg/pdseg/train.py](../../pdseg/train.py)编写压缩脚本`train_distill.py`。 +在该脚本中定义了teacher_model和student_model,用teacher_model的输出指导student_model的训练 + +### 执行示例 + +下载teacher的预训练模型([deeplabv3p_xception65_bn_cityscapes.tgz](https://paddleseg.bj.bcebos.com/models/xception65_bn_cityscapes.tgz))和student的预训练模型([mobilenet_cityscapes.tgz](https://paddleseg.bj.bcebos.com/models/mobilenet_cityscapes.tgz)), +修改student config file(./slim/distillation/cityscape.yaml)中预训练模型的路径: +``` +TRAIN: + PRETRAINED_MODEL_DIR: your_student_pretrained_model_dir +``` +修改teacher config file(./slim/distillation/cityscape_teacher.yaml)中预训练模型的路径: +``` +SLIM: + KNOWLEDGE_DISTILL_TEACHER_MODEL_DIR: your_teacher_pretrained_model_dir +``` + +执行如下命令启动训练,每间隔```cfg.TRAIN.SNAPSHOT_EPOCH```会进行一次评估。 +```shell +CUDA_VISIBLE_DEVICES=0,1 +python -m paddle.distributed.launch ./slim/distillation/train_distill.py \ +--log_steps 10 --cfg ./slim/distillation/cityscape.yaml \ +--teacher_cfg ./slim/distillation/cityscape_teacher.yaml \ +--use_gpu \ +--use_mpio \ +--do_eval +``` + +注意:如需修改配置文件中的参数,请在对应的配置文件中直接修改,暂不支持命令行输入覆盖。 + +## 评估预测 + +训练完成后的评估和预测请参考PaddleSeg的[快速入门](../../README.md#快速入门)和[基础功能](../../README.md#基础功能)等章节 diff --git a/slim/distillation/cityscape.yaml b/slim/distillation/cityscape.yaml new file mode 100644 index 0000000000000000000000000000000000000000..703a6a2483fcf68f9ea801369ff0675c41ad286c --- /dev/null +++ b/slim/distillation/cityscape.yaml @@ -0,0 +1,59 @@ +EVAL_CROP_SIZE: (2049, 1025) # (width, height), for unpadding rangescaling and stepscaling +TRAIN_CROP_SIZE: (769, 769) # (width, height), for unpadding rangescaling and stepscaling +AUG: + AUG_METHOD: "stepscaling" # choice unpadding rangescaling and stepscaling + FIX_RESIZE_SIZE: (640, 640) # (width, height), for unpadding + INF_RESIZE_VALUE: 500 # for rangescaling + MAX_RESIZE_VALUE: 600 # for rangescaling + MIN_RESIZE_VALUE: 400 # for rangescaling + MAX_SCALE_FACTOR: 2.0 # for stepscaling + MIN_SCALE_FACTOR: 0.5 # for stepscaling + SCALE_STEP_SIZE: 0.25 # for stepscaling + MIRROR: True + FLIP: True + FLIP_RATIO: 0.2 + RICH_CROP: + ENABLE: False + ASPECT_RATIO: 0.33 + BLUR: True + BLUR_RATIO: 0.1 + MAX_ROTATION: 15 + MIN_AREA_RATIO: 0.5 + BRIGHTNESS_JITTER_RATIO: 0.5 + CONTRAST_JITTER_RATIO: 0.5 + SATURATION_JITTER_RATIO: 0.5 +BATCH_SIZE: 16 +MEAN: [0.5, 0.5, 0.5] +STD: [0.5, 0.5, 0.5] +DATASET: + DATA_DIR: "./dataset/cityscapes/" + IMAGE_TYPE: "rgb" # choice rgb or rgba + NUM_CLASSES: 19 + TEST_FILE_LIST: "dataset/cityscapes/val.list" + TRAIN_FILE_LIST: "dataset/cityscapes/train.list" + VAL_FILE_LIST: "dataset/cityscapes/val.list" + IGNORE_INDEX: 255 +FREEZE: + MODEL_FILENAME: "model" + PARAMS_FILENAME: "params" +MODEL: + DEFAULT_NORM_TYPE: "bn" + MODEL_NAME: "deeplabv3p" + DEEPLAB: + BACKBONE: "mobilenet" + ASPP_WITH_SEP_CONV: True + DECODER_USE_SEP_CONV: True + ENCODER_WITH_ASPP: False + ENABLE_DECODER: False +TEST: + TEST_MODEL: "snapshots/cityscape_v5/final/" +TRAIN: + MODEL_SAVE_DIR: "snapshots/cityscape_mbv2_kd_e100_1/" + PRETRAINED_MODEL_DIR: u"pretrained_model/mobilenet_cityscapes" + SNAPSHOT_EPOCH: 5 + SYNC_BATCH_NORM: True +SOLVER: + LR: 0.001 + LR_POLICY: "poly" + OPTIMIZER: "sgd" + NUM_EPOCHS: 100 diff --git a/slim/distillation/cityscape_teacher.yaml b/slim/distillation/cityscape_teacher.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ff7df807bbb782e4d5862f8963104f07fa147bb1 --- /dev/null +++ b/slim/distillation/cityscape_teacher.yaml @@ -0,0 +1,65 @@ +EVAL_CROP_SIZE: (2049, 1025) # (width, height), for unpadding rangescaling 
and stepscaling +TRAIN_CROP_SIZE: (769, 769) # (width, height), for unpadding rangescaling and stepscaling +AUG: + AUG_METHOD: "stepscaling" # choice unpadding rangescaling and stepscaling + FIX_RESIZE_SIZE: (640, 640) # (width, height), for unpadding + INF_RESIZE_VALUE: 500 # for rangescaling + MAX_RESIZE_VALUE: 600 # for rangescaling + MIN_RESIZE_VALUE: 400 # for rangescaling + MAX_SCALE_FACTOR: 2.0 # for stepscaling + MIN_SCALE_FACTOR: 0.5 # for stepscaling + SCALE_STEP_SIZE: 0.25 # for stepscaling + MIRROR: True + FLIP: True + FLIP_RATIO: 0.2 + RICH_CROP: + ENABLE: False + ASPECT_RATIO: 0.33 + BLUR: True + BLUR_RATIO: 0.1 + MAX_ROTATION: 15 + MIN_AREA_RATIO: 0.5 + BRIGHTNESS_JITTER_RATIO: 0.5 + CONTRAST_JITTER_RATIO: 0.5 + SATURATION_JITTER_RATIO: 0.5 +BATCH_SIZE: 16 +MEAN: [0.5, 0.5, 0.5] +STD: [0.5, 0.5, 0.5] +DATASET: + DATA_DIR: "./dataset/cityscapes/" + IMAGE_TYPE: "rgb" # choice rgb or rgba + NUM_CLASSES: 19 + TEST_FILE_LIST: "dataset/cityscapes/val.list" + TRAIN_FILE_LIST: "dataset/cityscapes/train.list" + VAL_FILE_LIST: "dataset/cityscapes/val.list" + IGNORE_INDEX: 255 +FREEZE: + MODEL_FILENAME: "model" + PARAMS_FILENAME: "params" +MODEL: + DEFAULT_NORM_TYPE: "bn" + MODEL_NAME: "deeplabv3p" + DEEPLAB: + BACKBONE: "xception_65" + ASPP_WITH_SEP_CONV: True + DECODER_USE_SEP_CONV: True + ENCODER_WITH_ASPP: True + ENABLE_DECODER: True +TEST: + TEST_MODEL: "snapshots/cityscape_v5/final/" +TRAIN: + MODEL_SAVE_DIR: "snapshots/cityscape_v7/" + PRETRAINED_MODEL_DIR: u"pretrain/deeplabv3plus_gn_init" + SNAPSHOT_EPOCH: 5 + SYNC_BATCH_NORM: True +SOLVER: + LR: 0.001 + LR_POLICY: "poly" + OPTIMIZER: "sgd" + NUM_EPOCHS: 100 + +SLIM: + KNOWLEDGE_DISTILL_IS_TEACHER: True + KNOWLEDGE_DISTILL: True + KNOWLEDGE_DISTILL_TEACHER_MODEL_DIR: "pretrained_model/xception65_bn_cityscapes" + diff --git a/slim/distillation/model_builder.py b/slim/distillation/model_builder.py new file mode 100644 index 0000000000000000000000000000000000000000..f903b8dd2b635fa10070dcc3da488be66746d539 --- /dev/null +++ b/slim/distillation/model_builder.py @@ -0,0 +1,342 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import struct + +import paddle.fluid as fluid +import numpy as np +from paddle.fluid.proto.framework_pb2 import VarType + +import solver +from utils.config import cfg +from loss import multi_softmax_with_loss +from loss import multi_dice_loss +from loss import multi_bce_loss +from models.modeling import deeplab, unet, icnet, pspnet, hrnet, fast_scnn + + +class ModelPhase(object): + """ + Standard name for model phase in PaddleSeg + + The following standard keys are defined: + * `TRAIN`: training mode. + * `EVAL`: testing/evaluation mode. + * `PREDICT`: prediction/inference mode. 
+ * `VISUAL` : visualization mode + """ + + TRAIN = 'train' + EVAL = 'eval' + PREDICT = 'predict' + VISUAL = 'visual' + + @staticmethod + def is_train(phase): + return phase == ModelPhase.TRAIN + + @staticmethod + def is_predict(phase): + return phase == ModelPhase.PREDICT + + @staticmethod + def is_eval(phase): + return phase == ModelPhase.EVAL + + @staticmethod + def is_visual(phase): + return phase == ModelPhase.VISUAL + + @staticmethod + def is_valid_phase(phase): + """ Check valid phase """ + if ModelPhase.is_train(phase) or ModelPhase.is_predict(phase) \ + or ModelPhase.is_eval(phase) or ModelPhase.is_visual(phase): + return True + + return False + + +def seg_model(image, class_num): + model_name = cfg.MODEL.MODEL_NAME + if model_name == 'unet': + logits = unet.unet(image, class_num) + elif model_name == 'deeplabv3p': + logits = deeplab.deeplabv3p(image, class_num) + elif model_name == 'icnet': + logits = icnet.icnet(image, class_num) + elif model_name == 'pspnet': + logits = pspnet.pspnet(image, class_num) + elif model_name == 'hrnet': + logits = hrnet.hrnet(image, class_num) + elif model_name == 'fast_scnn': + logits = fast_scnn.fast_scnn(image, class_num) + else: + raise Exception( + "unknow model name, only support unet, deeplabv3p, icnet, pspnet, hrnet" + ) + return logits + + +def softmax(logit): + logit = fluid.layers.transpose(logit, [0, 2, 3, 1]) + logit = fluid.layers.softmax(logit) + logit = fluid.layers.transpose(logit, [0, 3, 1, 2]) + return logit + + +def sigmoid_to_softmax(logit): + """ + one channel to two channel + """ + logit = fluid.layers.transpose(logit, [0, 2, 3, 1]) + logit = fluid.layers.sigmoid(logit) + logit_back = 1 - logit + logit = fluid.layers.concat([logit_back, logit], axis=-1) + logit = fluid.layers.transpose(logit, [0, 3, 1, 2]) + return logit + + +def export_preprocess(image): + """导出模型的预处理流程""" + + image = fluid.layers.transpose(image, [0, 3, 1, 2]) + origin_shape = fluid.layers.shape(image)[-2:] + + # 不同AUG_METHOD方法的resize + if cfg.AUG.AUG_METHOD == 'unpadding': + h_fix = cfg.AUG.FIX_RESIZE_SIZE[1] + w_fix = cfg.AUG.FIX_RESIZE_SIZE[0] + image = fluid.layers.resize_bilinear( + image, out_shape=[h_fix, w_fix], align_corners=False, align_mode=0) + elif cfg.AUG.AUG_METHOD == 'rangescaling': + size = cfg.AUG.INF_RESIZE_VALUE + value = fluid.layers.reduce_max(origin_shape) + scale = float(size) / value.astype('float32') + image = fluid.layers.resize_bilinear( + image, scale=scale, align_corners=False, align_mode=0) + + # 存储resize后图像shape + valid_shape = fluid.layers.shape(image)[-2:] + + # padding到eval_crop_size大小 + width = cfg.EVAL_CROP_SIZE[0] + height = cfg.EVAL_CROP_SIZE[1] + pad_target = fluid.layers.assign( + np.array([height, width]).astype('float32')) + up = fluid.layers.assign(np.array([0]).astype('float32')) + down = pad_target[0] - valid_shape[0] + left = up + right = pad_target[1] - valid_shape[1] + paddings = fluid.layers.concat([up, down, left, right]) + paddings = fluid.layers.cast(paddings, 'int32') + image = fluid.layers.pad2d(image, paddings=paddings, pad_value=127.5) + + # normalize + mean = np.array(cfg.MEAN).reshape(1, len(cfg.MEAN), 1, 1) + mean = fluid.layers.assign(mean.astype('float32')) + std = np.array(cfg.STD).reshape(1, len(cfg.STD), 1, 1) + std = fluid.layers.assign(std.astype('float32')) + image = (image / 255 - mean) / std + # 使后面的网络能通过类似image.shape获取特征图的shape + image = fluid.layers.reshape( + image, shape=[-1, cfg.DATASET.DATA_DIM, height, width]) + return image, valid_shape, origin_shape + + +def 
build_model(main_prog=None, start_prog=None, phase=ModelPhase.TRAIN, **kwargs): + + if not ModelPhase.is_valid_phase(phase): + raise ValueError("ModelPhase {} is not valid!".format(phase)) + if ModelPhase.is_train(phase): + width = cfg.TRAIN_CROP_SIZE[0] + height = cfg.TRAIN_CROP_SIZE[1] + else: + width = cfg.EVAL_CROP_SIZE[0] + height = cfg.EVAL_CROP_SIZE[1] + + image_shape = [cfg.DATASET.DATA_DIM, height, width] + grt_shape = [1, height, width] + class_num = cfg.DATASET.NUM_CLASSES + + #with fluid.program_guard(main_prog, start_prog): + # with fluid.unique_name.guard(): + # 在导出模型的时候,增加图像标准化预处理,减小预测部署时图像的处理流程 + # 预测部署时只须对输入图像增加batch_size维度即可 + if cfg.SLIM.KNOWLEDGE_DISTILL_IS_TEACHER: + image = main_prog.global_block()._clone_variable(kwargs['image'], + force_persistable=False) + label = main_prog.global_block()._clone_variable(kwargs['label'], + force_persistable=False) + mask = main_prog.global_block()._clone_variable(kwargs['mask'], + force_persistable=False) + else: + if ModelPhase.is_predict(phase): + origin_image = fluid.layers.data( + name='image', + shape=[-1, -1, -1, cfg.DATASET.DATA_DIM], + dtype='float32', + append_batch_size=False) + image, valid_shape, origin_shape = export_preprocess( + origin_image) + + else: + image = fluid.layers.data( + name='image', shape=image_shape, dtype='float32') + label = fluid.layers.data( + name='label', shape=grt_shape, dtype='int32') + mask = fluid.layers.data( + name='mask', shape=grt_shape, dtype='int32') + + + # use PyReader when doing traning and evaluation + if ModelPhase.is_train(phase) or ModelPhase.is_eval(phase): + py_reader = None + if not cfg.SLIM.KNOWLEDGE_DISTILL_IS_TEACHER: + py_reader = fluid.io.PyReader( + feed_list=[image, label, mask], + capacity=cfg.DATALOADER.BUF_SIZE, + iterable=False, + use_double_buffer=True) + + loss_type = cfg.SOLVER.LOSS + if not isinstance(loss_type, list): + loss_type = list(loss_type) + + # dice_loss或bce_loss只适用两类分割中 + if class_num > 2 and (("dice_loss" in loss_type) or + ("bce_loss" in loss_type)): + raise Exception( + "dice loss and bce loss is only applicable to binary classfication" + ) + + # 在两类分割情况下,当loss函数选择dice_loss或bce_loss的时候,最后logit输出通道数设置为1 + if ("dice_loss" in loss_type) or ("bce_loss" in loss_type): + class_num = 1 + if "softmax_loss" in loss_type: + raise Exception( + "softmax loss can not combine with dice loss or bce loss" + ) + logits = seg_model(image, class_num) + + # 根据选择的loss函数计算相应的损失函数 + if ModelPhase.is_train(phase) or ModelPhase.is_eval(phase): + loss_valid = False + avg_loss_list = [] + valid_loss = [] + if "softmax_loss" in loss_type: + weight = cfg.SOLVER.CROSS_ENTROPY_WEIGHT + avg_loss_list.append( + multi_softmax_with_loss(logits, label, mask, class_num, weight)) + loss_valid = True + valid_loss.append("softmax_loss") + if "dice_loss" in loss_type: + avg_loss_list.append(multi_dice_loss(logits, label, mask)) + loss_valid = True + valid_loss.append("dice_loss") + if "bce_loss" in loss_type: + avg_loss_list.append(multi_bce_loss(logits, label, mask)) + loss_valid = True + valid_loss.append("bce_loss") + if not loss_valid: + raise Exception( + "SOLVER.LOSS: {} is set wrong. it should " + "include one of (softmax_loss, bce_loss, dice_loss) at least" + " example: ['softmax_loss'], ['dice_loss'], ['bce_loss', 'dice_loss']" + .format(cfg.SOLVER.LOSS)) + + invalid_loss = [x for x in loss_type if x not in valid_loss] + if len(invalid_loss) > 0: + print( + "Warning: the loss {} you set is invalid. it will not be included in loss computed." 
+ .format(invalid_loss)) + + avg_loss = 0 + for i in range(0, len(avg_loss_list)): + avg_loss += avg_loss_list[i] + + #get pred result in original size + if isinstance(logits, tuple): + logit = logits[0] + else: + logit = logits + + if logit.shape[2:] != label.shape[2:]: + logit = fluid.layers.resize_bilinear(logit, label.shape[2:]) + + # return image input and logit output for inference graph prune + if ModelPhase.is_predict(phase): + # 两类分割中,使用dice_loss或bce_loss返回的logit为单通道,进行到两通道的变换 + if class_num == 1: + logit = sigmoid_to_softmax(logit) + else: + logit = softmax(logit) + + # 获取有效部分 + logit = fluid.layers.slice( + logit, axes=[2, 3], starts=[0, 0], ends=valid_shape) + + logit = fluid.layers.resize_bilinear( + logit, + out_shape=origin_shape, + align_corners=False, + align_mode=0) + logit = fluid.layers.argmax(logit, axis=1) + return origin_image, logit + + if class_num == 1: + out = sigmoid_to_softmax(logit) + out = fluid.layers.transpose(out, [0, 2, 3, 1]) + else: + out = fluid.layers.transpose(logit, [0, 2, 3, 1]) + + pred = fluid.layers.argmax(out, axis=3) + pred = fluid.layers.unsqueeze(pred, axes=[3]) + if ModelPhase.is_visual(phase): + if class_num == 1: + logit = sigmoid_to_softmax(logit) + else: + logit = softmax(logit) + return pred, logit + + if ModelPhase.is_eval(phase): + return py_reader, avg_loss, pred, label, mask + + if ModelPhase.is_train(phase): + decayed_lr = None + if not cfg.SLIM.KNOWLEDGE_DISTILL: + optimizer = solver.Solver(main_prog, start_prog) + decayed_lr = optimizer.optimise(avg_loss) + # optimizer = solver.Solver(main_prog, start_prog) + # decayed_lr = optimizer.optimise(avg_loss) + return py_reader, avg_loss, decayed_lr, pred, label, mask, image + + +def to_int(string, dest="I"): + return struct.unpack(dest, string)[0] + + +def parse_shape_from_file(filename): + with open(filename, "rb") as file: + version = file.read(4) + lod_level = to_int(file.read(8), dest="Q") + for i in range(lod_level): + _size = to_int(file.read(8), dest="Q") + _ = file.read(_size) + version = file.read(4) + tensor_desc_size = to_int(file.read(4)) + tensor_desc = VarType.TensorDesc() + tensor_desc.ParseFromString(file.read(tensor_desc_size)) + return tuple(tensor_desc.dims) diff --git a/slim/distillation/train_distill.py b/slim/distillation/train_distill.py new file mode 100644 index 0000000000000000000000000000000000000000..c1e23253ffcde9eea034bd7f67906ca9e534d2e2 --- /dev/null +++ b/slim/distillation/train_distill.py @@ -0,0 +1,584 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
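As an aside on the model builder above: when `dice_loss` or `bce_loss` is selected the network produces a single-channel logit, and `sigmoid_to_softmax` expands it back into a two-channel probability map before the final `argmax`. A minimal NumPy sketch of the equivalent transform (illustration only, not repository code):

```python
import numpy as np

def sigmoid_to_softmax_np(logit_nchw):
    """Expand a single-channel sigmoid logit (N, 1, H, W) into a two-channel
    probability map (N, 2, H, W) laid out as [background, foreground]."""
    p = 1.0 / (1.0 + np.exp(-logit_nchw))        # sigmoid
    return np.concatenate([1.0 - p, p], axis=1)  # channel 0: 1 - p, channel 1: p

logit = np.random.randn(1, 1, 4, 4).astype("float32")
prob = sigmoid_to_softmax_np(logit)
pred = np.argmax(prob, axis=1)                   # same argmax used to build the mask
print(prob.shape, pred.shape)                    # (1, 2, 4, 4) (1, 4, 4)
```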
+ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import os +import sys + +LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) +SEG_PATH = os.path.join(LOCAL_PATH, "../../", "pdseg") +sys.path.append(SEG_PATH) + +import argparse +import pprint +import random +import shutil +import functools + +import paddle +import numpy as np +import paddle.fluid as fluid + +from utils.config import cfg +from utils.timer import Timer, calculate_eta +from metrics import ConfusionMatrix +from reader import SegDataset +from model_builder import build_model +from model_builder import ModelPhase +from model_builder import parse_shape_from_file +from eval import evaluate +from vis import visualize +from utils import dist_utils + +import solver +from paddleslim.dist.single_distiller import merge, l2_loss + +def parse_args(): + parser = argparse.ArgumentParser(description='PaddleSeg training') + parser.add_argument( + '--cfg', + dest='cfg_file', + help='Config file for training (and optionally testing)', + default=None, + type=str) + parser.add_argument( + '--teacher_cfg', + dest='teacher_cfg_file', + help='Config file for training (and optionally testing)', + default=None, + type=str) + parser.add_argument( + '--use_gpu', + dest='use_gpu', + help='Use gpu or cpu', + action='store_true', + default=False) + parser.add_argument( + '--use_mpio', + dest='use_mpio', + help='Use multiprocess I/O or not', + action='store_true', + default=False) + parser.add_argument( + '--log_steps', + dest='log_steps', + help='Display logging information at every log_steps', + default=10, + type=int) + parser.add_argument( + '--debug', + dest='debug', + help='debug mode, display detail information of training', + action='store_true') + parser.add_argument( + '--use_tb', + dest='use_tb', + help='whether to record the data during training to Tensorboard', + action='store_true') + parser.add_argument( + '--tb_log_dir', + dest='tb_log_dir', + help='Tensorboard logging directory', + default=None, + type=str) + parser.add_argument( + '--do_eval', + dest='do_eval', + help='Evaluation models result on every new checkpoint', + action='store_true') + parser.add_argument( + 'opts', + help='See utils/config.py for all options', + default=None, + nargs=argparse.REMAINDER) + parser.add_argument( + '--enable_ce', + dest='enable_ce', + help='If set True, enable continuous evaluation job.' + 'This flag is only used for internal test.', + action='store_true') + return parser.parse_args() + + +def save_vars(executor, dirname, program=None, vars=None): + """ + Temporary resolution for Win save variables compatability. 
+ Will fix in PaddlePaddle v1.5.2 + """ + + save_program = fluid.Program() + save_block = save_program.global_block() + + for each_var in vars: + # NOTE: don't save the variable which type is RAW + if each_var.type == fluid.core.VarDesc.VarType.RAW: + continue + new_var = save_block.create_var( + name=each_var.name, + shape=each_var.shape, + dtype=each_var.dtype, + type=each_var.type, + lod_level=each_var.lod_level, + persistable=True) + file_path = os.path.join(dirname, new_var.name) + file_path = os.path.normpath(file_path) + save_block.append_op( + type='save', + inputs={'X': [new_var]}, + outputs={}, + attrs={'file_path': file_path}) + + executor.run(save_program) + + +def save_checkpoint(exe, program, ckpt_name): + """ + Save checkpoint for evaluation or resume training + """ + ckpt_dir = os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, str(ckpt_name)) + print("Save model checkpoint to {}".format(ckpt_dir)) + if not os.path.isdir(ckpt_dir): + os.makedirs(ckpt_dir) + + save_vars( + exe, + ckpt_dir, + program, + vars=list(filter(fluid.io.is_persistable, program.list_vars()))) + + return ckpt_dir + + +def load_checkpoint(exe, program): + """ + Load checkpoiont from pretrained model directory for resume training + """ + + print('Resume model training from:', cfg.TRAIN.RESUME_MODEL_DIR) + if not os.path.exists(cfg.TRAIN.RESUME_MODEL_DIR): + raise ValueError("TRAIN.PRETRAIN_MODEL {} not exist!".format( + cfg.TRAIN.RESUME_MODEL_DIR)) + + fluid.io.load_persistables( + exe, cfg.TRAIN.RESUME_MODEL_DIR, main_program=program) + + model_path = cfg.TRAIN.RESUME_MODEL_DIR + # Check is path ended by path spearator + if model_path[-1] == os.sep: + model_path = model_path[0:-1] + epoch_name = os.path.basename(model_path) + # If resume model is final model + if epoch_name == 'final': + begin_epoch = cfg.SOLVER.NUM_EPOCHS + # If resume model path is end of digit, restore epoch status + elif epoch_name.isdigit(): + epoch = int(epoch_name) + begin_epoch = epoch + 1 + else: + raise ValueError("Resume model path is not valid!") + print("Model checkpoint loaded successfully!") + + return begin_epoch + + +def update_best_model(ckpt_dir): + best_model_dir = os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, 'best_model') + if os.path.exists(best_model_dir): + shutil.rmtree(best_model_dir) + shutil.copytree(ckpt_dir, best_model_dir) + + +def print_info(*msg): + if cfg.TRAINER_ID == 0: + print(*msg) + + +def train(cfg): + # startup_prog = fluid.Program() + # train_prog = fluid.Program() + + drop_last = True + + dataset = SegDataset( + file_list=cfg.DATASET.TRAIN_FILE_LIST, + mode=ModelPhase.TRAIN, + shuffle=True, + data_dir=cfg.DATASET.DATA_DIR) + + def data_generator(): + if args.use_mpio: + data_gen = dataset.multiprocess_generator( + num_processes=cfg.DATALOADER.NUM_WORKERS, + max_queue_size=cfg.DATALOADER.BUF_SIZE) + else: + data_gen = dataset.generator() + + batch_data = [] + for b in data_gen: + batch_data.append(b) + if len(batch_data) == (cfg.BATCH_SIZE // cfg.NUM_TRAINERS): + for item in batch_data: + yield item[0], item[1], item[2] + batch_data = [] + # If use sync batch norm strategy, drop last batch if number of samples + # in batch_data is less then cfg.BATCH_SIZE to avoid NCCL hang issues + if not cfg.TRAIN.SYNC_BATCH_NORM: + for item in batch_data: + yield item[0], item[1], item[2] + + # Get device environment + # places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places() + # place = places[0] + gpu_id = int(os.environ.get('FLAGS_selected_gpus', 0)) + place = fluid.CUDAPlace(gpu_id) if args.use_gpu else 
fluid.CPUPlace() + places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places() + + # Get number of GPU + dev_count = cfg.NUM_TRAINERS if cfg.NUM_TRAINERS > 1 else len(places) + print_info("#Device count: {}".format(dev_count)) + + # Make sure BATCH_SIZE can divided by GPU cards + assert cfg.BATCH_SIZE % dev_count == 0, ( + 'BATCH_SIZE:{} not divisble by number of GPUs:{}'.format( + cfg.BATCH_SIZE, dev_count)) + # If use multi-gpu training mode, batch data will allocated to each GPU evenly + batch_size_per_dev = cfg.BATCH_SIZE // dev_count + print_info("batch_size_per_dev: {}".format(batch_size_per_dev)) + + py_reader, loss, lr, pred, grts, masks, image = build_model(phase=ModelPhase.TRAIN) + py_reader.decorate_sample_generator( + data_generator, batch_size=batch_size_per_dev, drop_last=drop_last) + + exe = fluid.Executor(place) + + cfg.update_from_file(args.teacher_cfg_file) + # teacher_arch = teacher_cfg.architecture + teacher_program = fluid.Program() + teacher_startup_program = fluid.Program() + + with fluid.program_guard(teacher_program, teacher_startup_program): + with fluid.unique_name.guard(): + _, teacher_loss, _, _, _, _, _ = build_model( + teacher_program, teacher_startup_program, phase=ModelPhase.TRAIN, image=image, + label=grts, mask=masks) + + exe.run(teacher_startup_program) + + teacher_program = teacher_program.clone(for_test=True) + ckpt_dir = cfg.SLIM.KNOWLEDGE_DISTILL_TEACHER_MODEL_DIR + assert ckpt_dir is not None + print('load teacher model:', ckpt_dir) + fluid.io.load_params(exe, ckpt_dir, main_program=teacher_program) + + # cfg = load_config(FLAGS.config) + cfg.update_from_file(args.cfg_file) + data_name_map = { + 'image': 'image', + 'label': 'label', + 'mask': 'mask', + } + merge(teacher_program, fluid.default_main_program(), data_name_map, place) + distill_pairs = [['teacher_bilinear_interp_2.tmp_0', 'bilinear_interp_0.tmp_0']] + + def distill(pairs, weight): + """ + Add 3 pairs of distillation losses, each pair of feature maps is the + input of teacher and student's yolov3_loss respectively + """ + loss = l2_loss(pairs[0][0], pairs[0][1]) + weighted_loss = loss * weight + return weighted_loss + + distill_loss = distill(distill_pairs, 0.1) + cfg.update_from_file(args.cfg_file) + optimizer = solver.Solver(None, None) + all_loss = loss + distill_loss + lr = optimizer.optimise(all_loss) + + exe.run(fluid.default_startup_program()) + + exec_strategy = fluid.ExecutionStrategy() + # Clear temporary variables every 100 iteration + if args.use_gpu: + exec_strategy.num_threads = fluid.core.get_cuda_device_count() + exec_strategy.num_iteration_per_drop_scope = 100 + build_strategy = fluid.BuildStrategy() + build_strategy.fuse_all_reduce_ops = False + build_strategy.fuse_all_optimizer_ops = False + build_strategy.fuse_elewise_add_act_ops = True + if cfg.NUM_TRAINERS > 1 and args.use_gpu: + dist_utils.prepare_for_multi_process(exe, build_strategy, fluid.default_main_program()) + exec_strategy.num_threads = 1 + + if cfg.TRAIN.SYNC_BATCH_NORM and args.use_gpu: + if dev_count > 1: + # Apply sync batch norm strategy + print_info("Sync BatchNorm strategy is effective.") + build_strategy.sync_batch_norm = True + else: + print_info( + "Sync BatchNorm strategy will not be effective if GPU device" + " count <= 1") + compiled_train_prog = fluid.CompiledProgram(fluid.default_main_program()).with_data_parallel( + loss_name=all_loss.name, + exec_strategy=exec_strategy, + build_strategy=build_strategy) + + # Resume training + begin_epoch = cfg.SOLVER.BEGIN_EPOCH + if 
cfg.TRAIN.RESUME_MODEL_DIR: + begin_epoch = load_checkpoint(exe, fluid.default_main_program()) + # Load pretrained model + elif os.path.exists(cfg.TRAIN.PRETRAINED_MODEL_DIR): + print_info('Pretrained model dir: ', cfg.TRAIN.PRETRAINED_MODEL_DIR) + load_vars = [] + load_fail_vars = [] + + def var_shape_matched(var, shape): + """ + Check whehter persitable variable shape is match with current network + """ + var_exist = os.path.exists( + os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name)) + if var_exist: + var_shape = parse_shape_from_file( + os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name)) + return var_shape == shape + return False + + for x in fluid.default_main_program().list_vars(): + if isinstance(x, fluid.framework.Parameter): + shape = tuple(fluid.global_scope().find_var( + x.name).get_tensor().shape()) + if var_shape_matched(x, shape): + load_vars.append(x) + else: + load_fail_vars.append(x) + + fluid.io.load_vars( + exe, dirname=cfg.TRAIN.PRETRAINED_MODEL_DIR, vars=load_vars) + for var in load_vars: + print_info("Parameter[{}] loaded sucessfully!".format(var.name)) + for var in load_fail_vars: + print_info( + "Parameter[{}] don't exist or shape does not match current network, skip" + " to load it.".format(var.name)) + print_info("{}/{} pretrained parameters loaded successfully!".format( + len(load_vars), + len(load_vars) + len(load_fail_vars))) + else: + print_info( + 'Pretrained model dir {} not exists, training from scratch...'. + format(cfg.TRAIN.PRETRAINED_MODEL_DIR)) + + #fetch_list = [avg_loss.name, lr.name] + fetch_list = [loss.name, 'teacher_' + teacher_loss.name, distill_loss.name, lr.name] + + if args.debug: + # Fetch more variable info and use streaming confusion matrix to + # calculate IoU results if in debug mode + np.set_printoptions( + precision=4, suppress=True, linewidth=160, floatmode="fixed") + fetch_list.extend([pred.name, grts.name, masks.name]) + cm = ConfusionMatrix(cfg.DATASET.NUM_CLASSES, streaming=True) + + if args.use_tb: + if not args.tb_log_dir: + print_info("Please specify the log directory by --tb_log_dir.") + exit(1) + + from tb_paddle import SummaryWriter + log_writer = SummaryWriter(args.tb_log_dir) + + # trainer_id = int(os.getenv("PADDLE_TRAINER_ID", 0)) + # num_trainers = int(os.environ.get('PADDLE_TRAINERS_NUM', 1)) + global_step = 0 + all_step = cfg.DATASET.TRAIN_TOTAL_IMAGES // cfg.BATCH_SIZE + if cfg.DATASET.TRAIN_TOTAL_IMAGES % cfg.BATCH_SIZE and drop_last != True: + all_step += 1 + all_step *= (cfg.SOLVER.NUM_EPOCHS - begin_epoch + 1) + + avg_loss = 0.0 + avg_t_loss = 0.0 + avg_d_loss = 0.0 + best_mIoU = 0.0 + + timer = Timer() + timer.start() + if begin_epoch > cfg.SOLVER.NUM_EPOCHS: + raise ValueError( + ("begin epoch[{}] is larger than cfg.SOLVER.NUM_EPOCHS[{}]").format( + begin_epoch, cfg.SOLVER.NUM_EPOCHS)) + + if args.use_mpio: + print_info("Use multiprocess reader") + else: + print_info("Use multi-thread reader") + + for epoch in range(begin_epoch, cfg.SOLVER.NUM_EPOCHS + 1): + py_reader.start() + while True: + try: + if args.debug: + # Print category IoU and accuracy to check whether the + # traning process is corresponed to expectation + loss, lr, pred, grts, masks = exe.run( + program=compiled_train_prog, + fetch_list=fetch_list, + return_numpy=True) + cm.calculate(pred, grts, masks) + avg_loss += np.mean(np.array(loss)) + global_step += 1 + + if global_step % args.log_steps == 0: + speed = args.log_steps / timer.elapsed_time() + avg_loss /= args.log_steps + category_acc, mean_acc = cm.accuracy() + category_iou, 
mean_iou = cm.mean_iou() + + print_info(( + "epoch={} step={} lr={:.5f} loss={:.4f} acc={:.5f} mIoU={:.5f} step/sec={:.3f} | ETA {}" + ).format(epoch, global_step, lr[0], avg_loss, mean_acc, + mean_iou, speed, + calculate_eta(all_step - global_step, speed))) + print_info("Category IoU: ", category_iou) + print_info("Category Acc: ", category_acc) + if args.use_tb: + log_writer.add_scalar('Train/mean_iou', mean_iou, + global_step) + log_writer.add_scalar('Train/mean_acc', mean_acc, + global_step) + log_writer.add_scalar('Train/loss', avg_loss, + global_step) + log_writer.add_scalar('Train/lr', lr[0], + global_step) + log_writer.add_scalar('Train/step/sec', speed, + global_step) + sys.stdout.flush() + avg_loss = 0.0 + cm.zero_matrix() + timer.restart() + else: + # If not in debug mode, avoid unnessary log and calculate + loss, t_loss, d_loss, lr = exe.run( + program=compiled_train_prog, + fetch_list=fetch_list, + return_numpy=True) + avg_loss += np.mean(np.array(loss)) + avg_t_loss += np.mean(np.array(t_loss)) + avg_d_loss += np.mean(np.array(d_loss)) + global_step += 1 + + if global_step % args.log_steps == 0 and cfg.TRAINER_ID == 0: + avg_loss /= args.log_steps + avg_t_loss /= args.log_steps + avg_d_loss /= args.log_steps + speed = args.log_steps / timer.elapsed_time() + print(( + "epoch={} step={} lr={:.5f} loss={:.4f} teacher loss={:.4f} distill loss={:.4f} step/sec={:.3f} | ETA {}" + ).format(epoch, global_step, lr[0], avg_loss, avg_t_loss, avg_d_loss, speed, + calculate_eta(all_step - global_step, speed))) + if args.use_tb: + log_writer.add_scalar('Train/loss', avg_loss, + global_step) + log_writer.add_scalar('Train/lr', lr[0], + global_step) + log_writer.add_scalar('Train/speed', speed, + global_step) + sys.stdout.flush() + avg_loss = 0.0 + avg_t_loss = 0.0 + avg_d_loss = 0.0 + timer.restart() + + except fluid.core.EOFException: + py_reader.reset() + break + except Exception as e: + print(e) + + if (epoch % cfg.TRAIN.SNAPSHOT_EPOCH == 0 + or epoch == cfg.SOLVER.NUM_EPOCHS) and cfg.TRAINER_ID == 0: + ckpt_dir = save_checkpoint(exe, fluid.default_main_program(), epoch) + + if args.do_eval: + print("Evaluation start") + _, mean_iou, _, mean_acc = evaluate( + cfg=cfg, + ckpt_dir=ckpt_dir, + use_gpu=args.use_gpu, + use_mpio=args.use_mpio) + if args.use_tb: + log_writer.add_scalar('Evaluate/mean_iou', mean_iou, + global_step) + log_writer.add_scalar('Evaluate/mean_acc', mean_acc, + global_step) + + if mean_iou > best_mIoU: + best_mIoU = mean_iou + update_best_model(ckpt_dir) + print_info("Save best model {} to {}, mIoU = {:.4f}".format( + ckpt_dir, + os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, 'best_model'), + mean_iou)) + + # Use Tensorboard to visualize results + if args.use_tb and cfg.DATASET.VIS_FILE_LIST is not None: + visualize( + cfg=cfg, + use_gpu=args.use_gpu, + vis_file_list=cfg.DATASET.VIS_FILE_LIST, + vis_dir="visual", + ckpt_dir=ckpt_dir, + log_writer=log_writer) + if cfg.TRAINER_ID == 0: + ckpt_dir = save_checkpoint(exe, fluid.default_main_program(), epoch) + + # save final model + if cfg.TRAINER_ID == 0: + save_checkpoint(exe, fluid.default_main_program(), 'final') + + +def main(args): + if args.cfg_file is not None: + cfg.update_from_file(args.cfg_file) + if args.opts: + cfg.update_from_list(args.opts) + if args.enable_ce: + random.seed(0) + np.random.seed(0) + + cfg.TRAINER_ID = int(os.getenv("PADDLE_TRAINER_ID", 0)) + cfg.NUM_TRAINERS = int(os.environ.get('PADDLE_TRAINERS_NUM', 1)) + + cfg.check_and_infer() + print_info(pprint.pformat(cfg)) + train(cfg) + + +if __name__ == 
'__main__': + args = parse_args() + if fluid.core.is_compiled_with_cuda() != True and args.use_gpu == True: + print( + "You can not set use_gpu = True in the model because you are using paddlepaddle-cpu." + ) + print( + "Please: 1. Install paddlepaddle-gpu to run your models on GPU or 2. Set use_gpu=False to run models on CPU." + ) + sys.exit(1) + main(args) diff --git a/slim/nas/README.md b/slim/nas/README.md new file mode 100644 index 0000000000000000000000000000000000000000..cddfc5a82f07ab0b3f2e2acad6a4c0f7b2ed650c --- /dev/null +++ b/slim/nas/README.md @@ -0,0 +1,63 @@ +>运行该示例前请安装Paddle1.6或更高版本 + +# PaddleSeg神经网络搜索(NAS)示例 + +在阅读本教程前,请确保您已经了解过[PaddleSeg使用说明](../../docs/usage.md)等章节,以便对PaddleSeg有一定的了解 + +该文档介绍如何使用[PaddleSlim](https://paddlepaddle.github.io/PaddleSlim)对分割库中的模型进行搜索。 + +该教程中所示操作,如无特殊说明,均在`PaddleSeg/`路径下执行。 + +## 概述 + +我们选取Deeplab+mobilenetv2模型作为神经网络搜索示例,该示例使用[PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim) +辅助完成神经网络搜索实验,具体技术细节,请您参考[神经网络搜索策略](https://github.com/PaddlePaddle/PaddleSlim/blob/4670a79343c191b61a78e416826d122eea52a7ab/docs/zh_cn/tutorials/image_classification_nas_quick_start.ipynb)。 + + +## 定义搜索空间 +搜索实验中,我们采用了SANAS的方式进行搜索,本次实验会对网络模型中的通道数和卷积核尺寸进行搜索。 +所以我们定义了如下搜索空间: +- head通道模块`head_num`:定义了MobilenetV2 head模块中通道数变化区间; +- inverse_res_block1-6`filter_num1-6`: 定义了inverse_res_block模块中通道数变化区间; +- inverse_res_block`repeat`:定义了MobilenetV2 inverse_res_block模块中unit的个数; +- inverse_res_block`multiply`:定义了MobilenetV2 inverse_res_block模块中expansion_factor变化区间; +- 卷积核尺寸`k_size`:定义了MobilenetV2中卷积和尺寸大小是3x3或者5x5。 + +根据定义的搜索空间各个区间,我们的搜索空间tokens共9位,变化区间在([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [7, 5, 8, 6, 2, 5, 8, 6, 2, 5, 8, 6, 2, 5, 10, 6, 2, 5, 10, 6, 2, 5, 12, 6, 2])范围内。 + + +初始化tokens为:[4, 4, 5, 1, 0, 4, 4, 1, 0, 4, 4, 3, 0, 4, 5, 2, 0, 4, 7, 2, 0, 4, 9, 0, 0]。 + +## 开始搜索 +首先需要安装PaddleSlim,请参考[安装教程](https://paddlepaddle.github.io/PaddleSlim/#_2)。 + +配置paddleseg的config, 下面只展示nas相关的内容 + +```shell +SLIM: + NAS_PORT: 23333 # 端口 + NAS_ADDRESS: "" # ip地址,作为server不用填写,作为client的时候需要填写server的ip + NAS_SEARCH_STEPS: 100 # 搜索多少个结构 + NAS_START_EVAL_EPOCH: -1 # 第几个epoch开始对模型进行评估 + NAS_IS_SERVER: True # 是否为server + NAS_SPACE_NAME: "MobileNetV2SpaceSeg" # 搜索空间 +``` + +## 训练与评估 +执行以下命令,边训练边评估 +```shell +CUDA_VISIBLE_DEVICES=0 python -u ./slim/nas/train_nas.py --log_steps 10 --cfg configs/deeplabv3p_mobilenetv2_cityscapes.yaml --use_gpu --use_mpio \ +SLIM.NAS_PORT 23333 \ +SLIM.NAS_ADDRESS "" \ +SLIM.NAS_SEARCH_STEPS 2 \ +SLIM.NAS_START_EVAL_EPOCH -1 \ +SLIM.NAS_IS_SERVER True \ +SLIM.NAS_SPACE_NAME "MobileNetV2SpaceSeg" \ +``` + + +## FAQ +- 运行报错:`socket.error: [Errno 98] Address already in use`。 + +解决方法:当前端口被占用,请修改`SLIM.NAS_PORT`端口。 + diff --git a/slim/nas/deeplab.py b/slim/nas/deeplab.py new file mode 100644 index 0000000000000000000000000000000000000000..6cbf840927b107a36273e9890f1ba4d076ddb417 --- /dev/null +++ b/slim/nas/deeplab.py @@ -0,0 +1,225 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function +import contextlib +import paddle +import paddle.fluid as fluid +from utils.config import cfg +from models.libs.model_libs import scope, name_scope +from models.libs.model_libs import bn, bn_relu, relu +from models.libs.model_libs import conv +from models.libs.model_libs import separate_conv +from models.backbone.mobilenet_v2 import MobileNetV2 as mobilenet_backbone +from models.backbone.xception import Xception as xception_backbone + +def encoder(input): + # 编码器配置,采用ASPP架构,pooling + 1x1_conv + 三个不同尺度的空洞卷积并行, concat后1x1conv + # ASPP_WITH_SEP_CONV:默认为真,使用depthwise可分离卷积,否则使用普通卷积 + # OUTPUT_STRIDE: 下采样倍数,8或16,决定aspp_ratios大小 + # aspp_ratios:ASPP模块空洞卷积的采样率 + + if cfg.MODEL.DEEPLAB.OUTPUT_STRIDE == 16: + aspp_ratios = [6, 12, 18] + elif cfg.MODEL.DEEPLAB.OUTPUT_STRIDE == 8: + aspp_ratios = [12, 24, 36] + else: + raise Exception("deeplab only support stride 8 or 16") + + param_attr = fluid.ParamAttr( + name=name_scope + 'weights', + regularizer=None, + initializer=fluid.initializer.TruncatedNormal(loc=0.0, scale=0.06)) + with scope('encoder'): + channel = 256 + with scope("image_pool"): + image_avg = fluid.layers.reduce_mean( + input, [2, 3], keep_dim=True) + image_avg = bn_relu( + conv( + image_avg, + channel, + 1, + 1, + groups=1, + padding=0, + param_attr=param_attr)) + image_avg = fluid.layers.resize_bilinear(image_avg, input.shape[2:]) + + with scope("aspp0"): + aspp0 = bn_relu( + conv( + input, + channel, + 1, + 1, + groups=1, + padding=0, + param_attr=param_attr)) + with scope("aspp1"): + if cfg.MODEL.DEEPLAB.ASPP_WITH_SEP_CONV: + aspp1 = separate_conv( + input, channel, 1, 3, dilation=aspp_ratios[0], act=relu) + else: + aspp1 = bn_relu( + conv( + input, + channel, + stride=1, + filter_size=3, + dilation=aspp_ratios[0], + padding=aspp_ratios[0], + param_attr=param_attr)) + with scope("aspp2"): + if cfg.MODEL.DEEPLAB.ASPP_WITH_SEP_CONV: + aspp2 = separate_conv( + input, channel, 1, 3, dilation=aspp_ratios[1], act=relu) + else: + aspp2 = bn_relu( + conv( + input, + channel, + stride=1, + filter_size=3, + dilation=aspp_ratios[1], + padding=aspp_ratios[1], + param_attr=param_attr)) + with scope("aspp3"): + if cfg.MODEL.DEEPLAB.ASPP_WITH_SEP_CONV: + aspp3 = separate_conv( + input, channel, 1, 3, dilation=aspp_ratios[2], act=relu) + else: + aspp3 = bn_relu( + conv( + input, + channel, + stride=1, + filter_size=3, + dilation=aspp_ratios[2], + padding=aspp_ratios[2], + param_attr=param_attr)) + with scope("concat"): + data = fluid.layers.concat([image_avg, aspp0, aspp1, aspp2, aspp3], + axis=1) + data = bn_relu( + conv( + data, + channel, + 1, + 1, + groups=1, + padding=0, + param_attr=param_attr)) + data = fluid.layers.dropout(data, 0.9) + return data + + +def decoder(encode_data, decode_shortcut): + # 解码器配置 + # encode_data:编码器输出 + # decode_shortcut: 从backbone引出的分支, resize后与encode_data concat + # DECODER_USE_SEP_CONV: 默认为真,则concat后连接两个可分离卷积,否则为普通卷积 + param_attr = fluid.ParamAttr( + name=name_scope + 'weights', + regularizer=None, + initializer=fluid.initializer.TruncatedNormal(loc=0.0, scale=0.06)) + with scope('decoder'): + with scope('concat'): + decode_shortcut = bn_relu( + conv( + decode_shortcut, + 48, + 1, + 1, + groups=1, + padding=0, + param_attr=param_attr)) + + encode_data = fluid.layers.resize_bilinear( + encode_data, decode_shortcut.shape[2:]) + encode_data = 
fluid.layers.concat([encode_data, decode_shortcut], + axis=1) + if cfg.MODEL.DEEPLAB.DECODER_USE_SEP_CONV: + with scope("separable_conv1"): + encode_data = separate_conv( + encode_data, 256, 1, 3, dilation=1, act=relu) + with scope("separable_conv2"): + encode_data = separate_conv( + encode_data, 256, 1, 3, dilation=1, act=relu) + else: + with scope("decoder_conv1"): + encode_data = bn_relu( + conv( + encode_data, + 256, + stride=1, + filter_size=3, + dilation=1, + padding=1, + param_attr=param_attr)) + with scope("decoder_conv2"): + encode_data = bn_relu( + conv( + encode_data, + 256, + stride=1, + filter_size=3, + dilation=1, + padding=1, + param_attr=param_attr)) + return encode_data + + +def nas_backbone(input, arch): + # scale = cfg.MODEL.DEEPLAB.DEPTH_MULTIPLIER + # output_stride = cfg.MODEL.DEEPLAB.OUTPUT_STRIDE + # model = mobilenet_backbone(scale=scale, output_stride=output_stride) + end_points = 8 + decode_point = 3 + data, decode_shortcuts = arch( + input, end_points=end_points, return_block=decode_point, output_stride=16) + decode_shortcut = decode_shortcuts[decode_point] + return data, decode_shortcut + + +def deeplabv3p_nas(img, num_classes, arch=None): + data, decode_shortcut = nas_backbone(img, arch) + # 编码器解码器设置 + cfg.MODEL.DEFAULT_EPSILON = 1e-5 + if cfg.MODEL.DEEPLAB.ENCODER_WITH_ASPP: + data = encoder(data) + if cfg.MODEL.DEEPLAB.ENABLE_DECODER: + data = decoder(data, decode_shortcut) + + # 根据类别数设置最后一个卷积层输出,并resize到图片原始尺寸 + param_attr = fluid.ParamAttr( + name=name_scope + 'weights', + regularizer=fluid.regularizer.L2DecayRegularizer( + regularization_coeff=0.0), + initializer=fluid.initializer.TruncatedNormal(loc=0.0, scale=0.01)) + with scope('logit'): + logit = conv( + data, + num_classes, + 1, + stride=1, + padding=0, + bias_attr=True, + param_attr=param_attr) + logit = fluid.layers.resize_bilinear(logit, img.shape[2:]) + + return logit diff --git a/slim/nas/eval_nas.py b/slim/nas/eval_nas.py new file mode 100644 index 0000000000000000000000000000000000000000..08f75f5d8ee8d6afbcf9b038e4f8dcf0237a5b56 --- /dev/null +++ b/slim/nas/eval_nas.py @@ -0,0 +1,185 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
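A quick note on the encoder above: the ASPP dilation rates follow `OUTPUT_STRIDE`, and the intuition is that a dilated 3x3 convolution covers an effective extent of `3 + 2 * (dilation - 1)` pixels. The standalone sketch below (illustration only, not part of the repository) mirrors that mapping:

```python
def aspp_ratios_for_stride(output_stride):
    """Mirror the mapping in encoder(): stride 16 -> [6, 12, 18], stride 8 -> [12, 24, 36]."""
    if output_stride == 16:
        return [6, 12, 18]
    if output_stride == 8:
        return [12, 24, 36]
    raise ValueError("deeplab only supports OUTPUT_STRIDE 8 or 16")

def effective_extent(kernel, dilation):
    """Spatial extent covered by a dilated kernel: k + (k - 1) * (dilation - 1)."""
    return kernel + (kernel - 1) * (dilation - 1)

for stride in (8, 16):
    ratios = aspp_ratios_for_stride(stride)
    extents = [effective_extent(3, d) for d in ratios]
    print("OUTPUT_STRIDE=%d  dilations=%s  3x3 extents=%s" % (stride, ratios, extents))
```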
+ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import os +# GPU memory garbage collection optimization flags +os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0" + +import sys + +LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) +SEG_PATH = os.path.join(LOCAL_PATH, "../../", "pdseg") +sys.path.append(SEG_PATH) + +import time +import argparse +import functools +import pprint +import cv2 +import numpy as np +import paddle +import paddle.fluid as fluid + +from utils.config import cfg +from utils.timer import Timer, calculate_eta +from model_builder import build_model +from model_builder import ModelPhase +from reader import SegDataset +from metrics import ConfusionMatrix + +from mobilenetv2_search_space import MobileNetV2SpaceSeg + +def parse_args(): + parser = argparse.ArgumentParser(description='PaddleSeg model evalution') + parser.add_argument( + '--cfg', + dest='cfg_file', + help='Config file for training (and optionally testing)', + default=None, + type=str) + parser.add_argument( + '--use_gpu', + dest='use_gpu', + help='Use gpu or cpu', + action='store_true', + default=False) + parser.add_argument( + '--use_mpio', + dest='use_mpio', + help='Use multiprocess IO or not', + action='store_true', + default=False) + parser.add_argument( + 'opts', + help='See utils/config.py for all options', + default=None, + nargs=argparse.REMAINDER) + if len(sys.argv) == 1: + parser.print_help() + sys.exit(1) + return parser.parse_args() + + +def evaluate(cfg, ckpt_dir=None, use_gpu=False, use_mpio=False, **kwargs): + np.set_printoptions(precision=5, suppress=True) + + startup_prog = fluid.Program() + test_prog = fluid.Program() + dataset = SegDataset( + file_list=cfg.DATASET.VAL_FILE_LIST, + mode=ModelPhase.EVAL, + data_dir=cfg.DATASET.DATA_DIR) + + def data_generator(): + #TODO: check is batch reader compatitable with Windows + if use_mpio: + data_gen = dataset.multiprocess_generator( + num_processes=cfg.DATALOADER.NUM_WORKERS, + max_queue_size=cfg.DATALOADER.BUF_SIZE) + else: + data_gen = dataset.generator() + + for b in data_gen: + yield b[0], b[1], b[2] + + py_reader, avg_loss, pred, grts, masks = build_model( + test_prog, startup_prog, phase=ModelPhase.EVAL, arch=kwargs['arch']) + + py_reader.decorate_sample_generator( + data_generator, drop_last=False, batch_size=cfg.BATCH_SIZE) + + # Get device environment + places = fluid.cuda_places() if use_gpu else fluid.cpu_places() + place = places[0] + dev_count = len(places) + print("#Device count: {}".format(dev_count)) + + exe = fluid.Executor(place) + exe.run(startup_prog) + + test_prog = test_prog.clone(for_test=True) + + ckpt_dir = cfg.TEST.TEST_MODEL if not ckpt_dir else ckpt_dir + + if not os.path.exists(ckpt_dir): + raise ValueError('The TEST.TEST_MODEL {} is not found'.format(ckpt_dir)) + + if ckpt_dir is not None: + print('load test model:', ckpt_dir) + fluid.io.load_params(exe, ckpt_dir, main_program=test_prog) + + # Use streaming confusion matrix to calculate mean_iou + np.set_printoptions( + precision=4, suppress=True, linewidth=160, floatmode="fixed") + conf_mat = ConfusionMatrix(cfg.DATASET.NUM_CLASSES, streaming=True) + fetch_list = [avg_loss.name, pred.name, grts.name, masks.name] + num_images = 0 + step = 0 + all_step = cfg.DATASET.TEST_TOTAL_IMAGES // cfg.BATCH_SIZE + 1 + timer = Timer() + timer.start() + py_reader.start() + while True: + try: + step += 1 + loss, pred, grts, masks = exe.run( + test_prog, fetch_list=fetch_list, return_numpy=True) + + loss = 
np.mean(np.array(loss)) + + num_images += pred.shape[0] + conf_mat.calculate(pred, grts, masks) + _, iou = conf_mat.mean_iou() + _, acc = conf_mat.accuracy() + + speed = 1.0 / timer.elapsed_time() + + print( + "[EVAL]step={} loss={:.5f} acc={:.4f} IoU={:.4f} step/sec={:.2f} | ETA {}" + .format(step, loss, acc, iou, speed, + calculate_eta(all_step - step, speed))) + timer.restart() + sys.stdout.flush() + except fluid.core.EOFException: + break + + category_iou, avg_iou = conf_mat.mean_iou() + category_acc, avg_acc = conf_mat.accuracy() + print("[EVAL]#image={} acc={:.4f} IoU={:.4f}".format( + num_images, avg_acc, avg_iou)) + print("[EVAL]Category IoU:", category_iou) + print("[EVAL]Category Acc:", category_acc) + print("[EVAL]Kappa:{:.4f}".format(conf_mat.kappa())) + + return category_iou, avg_iou, category_acc, avg_acc + + +def main(): + args = parse_args() + if args.cfg_file is not None: + cfg.update_from_file(args.cfg_file) + if args.opts: + cfg.update_from_list(args.opts) + cfg.check_and_infer() + print(pprint.pformat(cfg)) + evaluate(cfg, **args.__dict__) + + +if __name__ == '__main__': + main() diff --git a/slim/nas/mobilenetv2_search_space.py b/slim/nas/mobilenetv2_search_space.py new file mode 100644 index 0000000000000000000000000000000000000000..2703e161f02e9659040b827fff8d345db5bf5946 --- /dev/null +++ b/slim/nas/mobilenetv2_search_space.py @@ -0,0 +1,323 @@ +# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License" +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
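For reference, the streaming `ConfusionMatrix` used in the evaluation loop above reduces to the usual per-class IoU computation. The NumPy sketch below is a simplified illustration only (it is not the actual `metrics.ConfusionMatrix` implementation and ignores its class-averaging details).

```python
import numpy as np

def mean_iou_from_confusion(conf):
    # Per-class IoU from an accumulated (num_classes x num_classes) matrix;
    # here rows are taken as ground truth and columns as predictions.
    conf = conf.astype(np.float64)
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1e-12), 0.0)
    return iou, float(iou.mean())

conf = np.array([[50, 2],
                 [3, 45]])      # toy 2-class confusion matrix
print(mean_iou_from_confusion(conf))
```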
+ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import numpy as np +import paddle.fluid as fluid +from paddle.fluid.param_attr import ParamAttr +from paddleslim.nas.search_space.search_space_base import SearchSpaceBase +from paddleslim.nas.search_space.base_layer import conv_bn_layer +from paddleslim.nas.search_space.search_space_registry import SEARCHSPACE +from paddleslim.nas.search_space.utils import check_points + +__all__ = ["MobileNetV2SpaceSeg"] + + +@SEARCHSPACE.register +class MobileNetV2SpaceSeg(SearchSpaceBase): + def __init__(self, input_size, output_size, block_num, block_mask=None): + super(MobileNetV2SpaceSeg, self).__init__(input_size, output_size, + block_num, block_mask) + # self.head_num means the first convolution channel + self.head_num = np.array([3, 4, 8, 12, 16, 24, 32]) #7 + # self.filter_num1 ~ self.filter_num6 means following convlution channel + self.filter_num1 = np.array([3, 4, 8, 12, 16, 24, 32, 48]) #8 + self.filter_num2 = np.array([8, 12, 16, 24, 32, 48, 64, 80]) #8 + self.filter_num3 = np.array([16, 24, 32, 48, 64, 80, 96, 128]) #8 + self.filter_num4 = np.array( + [24, 32, 48, 64, 80, 96, 128, 144, 160, 192]) #10 + self.filter_num5 = np.array( + [32, 48, 64, 80, 96, 128, 144, 160, 192, 224]) #10 + self.filter_num6 = np.array( + [64, 80, 96, 128, 144, 160, 192, 224, 256, 320, 384, 512]) #12 + # self.k_size means kernel size + self.k_size = np.array([3, 5]) #2 + # self.multiply means expansion_factor of each _inverted_residual_unit + self.multiply = np.array([1, 2, 3, 4, 6]) #5 + # self.repeat means repeat_num _inverted_residual_unit in each _invresi_blocks + self.repeat = np.array([1, 2, 3, 4, 5, 6]) #6 + + def init_tokens(self): + """ + The initial token. + The first one is the index of the first layers' channel in self.head_num, + each line in the following represent the index of the [expansion_factor, filter_num, repeat_num, kernel_size] + """ + # original MobileNetV2 + # yapf: disable + init_token_base = [4, # 1, 16, 1 + 4, 5, 1, 0, # 6, 24, 2 + 4, 4, 2, 0, # 6, 32, 3 + 4, 4, 3, 0, # 6, 64, 4 + 4, 5, 2, 0, # 6, 96, 3 + 4, 7, 2, 0, # 6, 160, 3 + 4, 9, 0, 0] # 6, 320, 1 + # yapf: enable + + return init_token_base + + def range_table(self): + """ + Get range table of current search space, constrains the range of tokens. 
+ """ + # head_num + 6 * [multiple(expansion_factor), filter_num, repeat, kernel_size] + # yapf: disable + range_table_base = [len(self.head_num), + len(self.multiply), len(self.filter_num1), len(self.repeat), len(self.k_size), + len(self.multiply), len(self.filter_num2), len(self.repeat), len(self.k_size), + len(self.multiply), len(self.filter_num3), len(self.repeat), len(self.k_size), + len(self.multiply), len(self.filter_num4), len(self.repeat), len(self.k_size), + len(self.multiply), len(self.filter_num5), len(self.repeat), len(self.k_size), + len(self.multiply), len(self.filter_num6), len(self.repeat), len(self.k_size)] + # yapf: enable + return range_table_base + + def token2arch(self, tokens=None): + """ + return net_arch function + """ + + if tokens is None: + tokens = self.init_tokens() + + self.bottleneck_params_list = [] + self.bottleneck_params_list.append( + (1, self.head_num[tokens[0]], 1, 1, 3)) + self.bottleneck_params_list.append( + (self.multiply[tokens[1]], self.filter_num1[tokens[2]], + self.repeat[tokens[3]], 2, self.k_size[tokens[4]])) + self.bottleneck_params_list.append( + (self.multiply[tokens[5]], self.filter_num2[tokens[6]], + self.repeat[tokens[7]], 2, self.k_size[tokens[8]])) + self.bottleneck_params_list.append( + (self.multiply[tokens[9]], self.filter_num3[tokens[10]], + self.repeat[tokens[11]], 2, self.k_size[tokens[12]])) + self.bottleneck_params_list.append( + (self.multiply[tokens[13]], self.filter_num4[tokens[14]], + self.repeat[tokens[15]], 1, self.k_size[tokens[16]])) + self.bottleneck_params_list.append( + (self.multiply[tokens[17]], self.filter_num5[tokens[18]], + self.repeat[tokens[19]], 2, self.k_size[tokens[20]])) + self.bottleneck_params_list.append( + (self.multiply[tokens[21]], self.filter_num6[tokens[22]], + self.repeat[tokens[23]], 1, self.k_size[tokens[24]])) + + def _modify_bottle_params(output_stride=None): + if output_stride is not None and output_stride % 2 != 0: + raise Exception("output stride must to be even number") + if output_stride is None: + return + else: + stride = 2 + for i, layer_setting in enumerate(self.bottleneck_params_list): + t, c, n, s, ks = layer_setting + stride = stride * s + if stride > output_stride: + s = 1 + self.bottleneck_params_list[i] = (t, c, n, s, ks) + + def net_arch(input, + scale=1.0, + return_block=None, + end_points=None, + output_stride=None): + self.scale = scale + _modify_bottle_params(output_stride) + + decode_ends = dict() + + def check_points(count, points): + if points is None: + return False + else: + if isinstance(points, list): + return (True if count in points else False) + else: + return (True if count == points else False) + + #conv1 + # all padding is 'SAME' in the conv2d, can compute the actual padding automatic. 
+ input = conv_bn_layer( + input, + num_filters=int(32 * self.scale), + filter_size=3, + stride=2, + padding='SAME', + act='relu6', + name='mobilenetv2_conv1') + layer_count = 1 + + depthwise_output = None + # bottleneck sequences + in_c = int(32 * self.scale) + for i, layer_setting in enumerate(self.bottleneck_params_list): + t, c, n, s, k = layer_setting + layer_count += 1 + ### return_block and end_points means block num + if check_points((layer_count - 1), return_block): + decode_ends[layer_count - 1] = depthwise_output + + if check_points((layer_count - 1), end_points): + return input, decode_ends + input, depthwise_output = self._invresi_blocks( + input=input, + in_c=in_c, + t=t, + c=int(c * self.scale), + n=n, + s=s, + k=int(k), + name='mobilenetv2_conv' + str(i)) + in_c = int(c * self.scale) + + ### return_block and end_points means block num + if check_points(layer_count, return_block): + decode_ends[layer_count] = depthwise_output + + if check_points(layer_count, end_points): + return input, decode_ends + # last conv + input = conv_bn_layer( + input=input, + num_filters=int(1280 * self.scale) + if self.scale > 1.0 else 1280, + filter_size=1, + stride=1, + padding='SAME', + act='relu6', + name='mobilenetv2_conv' + str(i + 1)) + + input = fluid.layers.pool2d( + input=input, + pool_type='avg', + global_pooling=True, + name='mobilenetv2_last_pool') + + return input + + return net_arch + + def _shortcut(self, input, data_residual): + """Build shortcut layer. + Args: + input(Variable): input. + data_residual(Variable): residual layer. + Returns: + Variable, layer output. + """ + return fluid.layers.elementwise_add(input, data_residual) + + def _inverted_residual_unit(self, + input, + num_in_filter, + num_filters, + ifshortcut, + stride, + filter_size, + expansion_factor, + reduction_ratio=4, + name=None): + """Build inverted residual unit. + Args: + input(Variable), input. + num_in_filter(int), number of in filters. + num_filters(int), number of filters. + ifshortcut(bool), whether using shortcut. + stride(int), stride. + filter_size(int), filter size. + padding(str|int|list), padding. + expansion_factor(float), expansion factor. + name(str), name. + Returns: + Variable, layers output. + """ + num_expfilter = int(round(num_in_filter * expansion_factor)) + channel_expand = conv_bn_layer( + input=input, + num_filters=num_expfilter, + filter_size=1, + stride=1, + padding='SAME', + num_groups=1, + act='relu6', + name=name + '_expand') + + bottleneck_conv = conv_bn_layer( + input=channel_expand, + num_filters=num_expfilter, + filter_size=filter_size, + stride=stride, + padding='SAME', + num_groups=num_expfilter, + act='relu6', + name=name + '_dwise', + use_cudnn=False) + + depthwise_output = bottleneck_conv + + linear_out = conv_bn_layer( + input=bottleneck_conv, + num_filters=num_filters, + filter_size=1, + stride=1, + padding='SAME', + num_groups=1, + act=None, + name=name + '_linear') + out = linear_out + if ifshortcut: + out = self._shortcut(input=input, data_residual=out) + return out, depthwise_output + + def _invresi_blocks(self, input, in_c, t, c, n, s, k, name=None): + """Build inverted residual blocks. + Args: + input: Variable, input. + in_c: int, number of in filters. + t: float, expansion factor. + c: int, number of filters. + n: int, number of layers. + s: int, stride. + k: int, filter size. + name: str, name. + Returns: + Variable, layers output. 
+ """ + first_block, depthwise_output = self._inverted_residual_unit( + input=input, + num_in_filter=in_c, + num_filters=c, + ifshortcut=False, + stride=s, + filter_size=k, + expansion_factor=t, + name=name + '_1') + + last_residual_block = first_block + last_c = c + + for i in range(1, n): + last_residual_block, depthwise_output = self._inverted_residual_unit( + input=last_residual_block, + num_in_filter=last_c, + num_filters=c, + ifshortcut=True, + stride=1, + filter_size=k, + expansion_factor=t, + name=name + '_' + str(i + 1)) + return last_residual_block, depthwise_output diff --git a/slim/nas/model_builder.py b/slim/nas/model_builder.py new file mode 100644 index 0000000000000000000000000000000000000000..3dfbacb0cd41a14bb81c6f6c82b81479fb1c30c8 --- /dev/null +++ b/slim/nas/model_builder.py @@ -0,0 +1,316 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import struct + +import paddle.fluid as fluid +import numpy as np +from paddle.fluid.proto.framework_pb2 import VarType + +import solver +from utils.config import cfg +from loss import multi_softmax_with_loss +from loss import multi_dice_loss +from loss import multi_bce_loss +import deeplab + + +class ModelPhase(object): + """ + Standard name for model phase in PaddleSeg + + The following standard keys are defined: + * `TRAIN`: training mode. + * `EVAL`: testing/evaluation mode. + * `PREDICT`: prediction/inference mode. 
+ * `VISUAL` : visualization mode + """ + + TRAIN = 'train' + EVAL = 'eval' + PREDICT = 'predict' + VISUAL = 'visual' + + @staticmethod + def is_train(phase): + return phase == ModelPhase.TRAIN + + @staticmethod + def is_predict(phase): + return phase == ModelPhase.PREDICT + + @staticmethod + def is_eval(phase): + return phase == ModelPhase.EVAL + + @staticmethod + def is_visual(phase): + return phase == ModelPhase.VISUAL + + @staticmethod + def is_valid_phase(phase): + """ Check valid phase """ + if ModelPhase.is_train(phase) or ModelPhase.is_predict(phase) \ + or ModelPhase.is_eval(phase) or ModelPhase.is_visual(phase): + return True + + return False + + +def seg_model(image, class_num, arch): + model_name = cfg.MODEL.MODEL_NAME + if model_name == 'deeplabv3p': + logits = deeplab.deeplabv3p_nas(image, class_num, arch) + else: + raise Exception( + "unknow model name, only support deeplabv3p" + ) + return logits + + +def softmax(logit): + logit = fluid.layers.transpose(logit, [0, 2, 3, 1]) + logit = fluid.layers.softmax(logit) + logit = fluid.layers.transpose(logit, [0, 3, 1, 2]) + return logit + + +def sigmoid_to_softmax(logit): + """ + one channel to two channel + """ + logit = fluid.layers.transpose(logit, [0, 2, 3, 1]) + logit = fluid.layers.sigmoid(logit) + logit_back = 1 - logit + logit = fluid.layers.concat([logit_back, logit], axis=-1) + logit = fluid.layers.transpose(logit, [0, 3, 1, 2]) + return logit + + +def export_preprocess(image): + """导出模型的预处理流程""" + + image = fluid.layers.transpose(image, [0, 3, 1, 2]) + origin_shape = fluid.layers.shape(image)[-2:] + + # 不同AUG_METHOD方法的resize + if cfg.AUG.AUG_METHOD == 'unpadding': + h_fix = cfg.AUG.FIX_RESIZE_SIZE[1] + w_fix = cfg.AUG.FIX_RESIZE_SIZE[0] + image = fluid.layers.resize_bilinear( + image, out_shape=[h_fix, w_fix], align_corners=False, align_mode=0) + elif cfg.AUG.AUG_METHOD == 'rangescaling': + size = cfg.AUG.INF_RESIZE_VALUE + value = fluid.layers.reduce_max(origin_shape) + scale = float(size) / value.astype('float32') + image = fluid.layers.resize_bilinear( + image, scale=scale, align_corners=False, align_mode=0) + + # 存储resize后图像shape + valid_shape = fluid.layers.shape(image)[-2:] + + # padding到eval_crop_size大小 + width = cfg.EVAL_CROP_SIZE[0] + height = cfg.EVAL_CROP_SIZE[1] + pad_target = fluid.layers.assign( + np.array([height, width]).astype('float32')) + up = fluid.layers.assign(np.array([0]).astype('float32')) + down = pad_target[0] - valid_shape[0] + left = up + right = pad_target[1] - valid_shape[1] + paddings = fluid.layers.concat([up, down, left, right]) + paddings = fluid.layers.cast(paddings, 'int32') + image = fluid.layers.pad2d(image, paddings=paddings, pad_value=127.5) + + # normalize + mean = np.array(cfg.MEAN).reshape(1, len(cfg.MEAN), 1, 1) + mean = fluid.layers.assign(mean.astype('float32')) + std = np.array(cfg.STD).reshape(1, len(cfg.STD), 1, 1) + std = fluid.layers.assign(std.astype('float32')) + image = (image / 255 - mean) / std + # 使后面的网络能通过类似image.shape获取特征图的shape + image = fluid.layers.reshape( + image, shape=[-1, cfg.DATASET.DATA_DIM, height, width]) + return image, valid_shape, origin_shape + + +def build_model(main_prog, start_prog, phase=ModelPhase.TRAIN, arch=None): + if not ModelPhase.is_valid_phase(phase): + raise ValueError("ModelPhase {} is not valid!".format(phase)) + if ModelPhase.is_train(phase): + width = cfg.TRAIN_CROP_SIZE[0] + height = cfg.TRAIN_CROP_SIZE[1] + else: + width = cfg.EVAL_CROP_SIZE[0] + height = cfg.EVAL_CROP_SIZE[1] + + image_shape = [cfg.DATASET.DATA_DIM, height, 
width] + grt_shape = [1, height, width] + class_num = cfg.DATASET.NUM_CLASSES + + with fluid.program_guard(main_prog, start_prog): + with fluid.unique_name.guard(): + # 在导出模型的时候,增加图像标准化预处理,减小预测部署时图像的处理流程 + # 预测部署时只须对输入图像增加batch_size维度即可 + if ModelPhase.is_predict(phase): + origin_image = fluid.layers.data( + name='image', + shape=[-1, -1, -1, cfg.DATASET.DATA_DIM], + dtype='float32', + append_batch_size=False) + image, valid_shape, origin_shape = export_preprocess( + origin_image) + + else: + image = fluid.layers.data( + name='image', shape=image_shape, dtype='float32') + label = fluid.layers.data( + name='label', shape=grt_shape, dtype='int32') + mask = fluid.layers.data( + name='mask', shape=grt_shape, dtype='int32') + + # use PyReader when doing traning and evaluation + if ModelPhase.is_train(phase) or ModelPhase.is_eval(phase): + py_reader = fluid.io.PyReader( + feed_list=[image, label, mask], + capacity=cfg.DATALOADER.BUF_SIZE, + iterable=False, + use_double_buffer=True) + + loss_type = cfg.SOLVER.LOSS + if not isinstance(loss_type, list): + loss_type = list(loss_type) + + # dice_loss或bce_loss只适用两类分割中 + if class_num > 2 and (("dice_loss" in loss_type) or + ("bce_loss" in loss_type)): + raise Exception( + "dice loss and bce loss is only applicable to binary classfication" + ) + + # 在两类分割情况下,当loss函数选择dice_loss或bce_loss的时候,最后logit输出通道数设置为1 + if ("dice_loss" in loss_type) or ("bce_loss" in loss_type): + class_num = 1 + if "softmax_loss" in loss_type: + raise Exception( + "softmax loss can not combine with dice loss or bce loss" + ) + logits = seg_model(image, class_num, arch) + + # 根据选择的loss函数计算相应的损失函数 + if ModelPhase.is_train(phase) or ModelPhase.is_eval(phase): + loss_valid = False + avg_loss_list = [] + valid_loss = [] + if "softmax_loss" in loss_type: + weight = cfg.SOLVER.CROSS_ENTROPY_WEIGHT + avg_loss_list.append( + multi_softmax_with_loss(logits, label, mask, class_num, weight)) + loss_valid = True + valid_loss.append("softmax_loss") + if "dice_loss" in loss_type: + avg_loss_list.append(multi_dice_loss(logits, label, mask)) + loss_valid = True + valid_loss.append("dice_loss") + if "bce_loss" in loss_type: + avg_loss_list.append(multi_bce_loss(logits, label, mask)) + loss_valid = True + valid_loss.append("bce_loss") + if not loss_valid: + raise Exception( + "SOLVER.LOSS: {} is set wrong. it should " + "include one of (softmax_loss, bce_loss, dice_loss) at least" + " example: ['softmax_loss'], ['dice_loss'], ['bce_loss', 'dice_loss']" + .format(cfg.SOLVER.LOSS)) + + invalid_loss = [x for x in loss_type if x not in valid_loss] + if len(invalid_loss) > 0: + print( + "Warning: the loss {} you set is invalid. it will not be included in loss computed." 
+ .format(invalid_loss)) + + avg_loss = 0 + for i in range(0, len(avg_loss_list)): + avg_loss += avg_loss_list[i] + + #get pred result in original size + if isinstance(logits, tuple): + logit = logits[0] + else: + logit = logits + + if logit.shape[2:] != label.shape[2:]: + logit = fluid.layers.resize_bilinear(logit, label.shape[2:]) + + # return image input and logit output for inference graph prune + if ModelPhase.is_predict(phase): + # 两类分割中,使用dice_loss或bce_loss返回的logit为单通道,进行到两通道的变换 + if class_num == 1: + logit = sigmoid_to_softmax(logit) + else: + logit = softmax(logit) + + # 获取有效部分 + logit = fluid.layers.slice( + logit, axes=[2, 3], starts=[0, 0], ends=valid_shape) + + logit = fluid.layers.resize_bilinear( + logit, + out_shape=origin_shape, + align_corners=False, + align_mode=0) + logit = fluid.layers.argmax(logit, axis=1) + return origin_image, logit + + if class_num == 1: + out = sigmoid_to_softmax(logit) + out = fluid.layers.transpose(out, [0, 2, 3, 1]) + else: + out = fluid.layers.transpose(logit, [0, 2, 3, 1]) + + pred = fluid.layers.argmax(out, axis=3) + pred = fluid.layers.unsqueeze(pred, axes=[3]) + if ModelPhase.is_visual(phase): + if class_num == 1: + logit = sigmoid_to_softmax(logit) + else: + logit = softmax(logit) + return pred, logit + + if ModelPhase.is_eval(phase): + return py_reader, avg_loss, pred, label, mask + + if ModelPhase.is_train(phase): + optimizer = solver.Solver(main_prog, start_prog) + decayed_lr = optimizer.optimise(avg_loss) + return py_reader, avg_loss, decayed_lr, pred, label, mask + + +def to_int(string, dest="I"): + return struct.unpack(dest, string)[0] + + +def parse_shape_from_file(filename): + with open(filename, "rb") as file: + version = file.read(4) + lod_level = to_int(file.read(8), dest="Q") + for i in range(lod_level): + _size = to_int(file.read(8), dest="Q") + _ = file.read(_size) + version = file.read(4) + tensor_desc_size = to_int(file.read(4)) + tensor_desc = VarType.TensorDesc() + tensor_desc.ParseFromString(file.read(tensor_desc_size)) + return tuple(tensor_desc.dims) diff --git a/slim/nas/train_nas.py b/slim/nas/train_nas.py new file mode 100644 index 0000000000000000000000000000000000000000..7822657fa264d053360199d5691098ae85fcd12c --- /dev/null +++ b/slim/nas/train_nas.py @@ -0,0 +1,456 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
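The training script below wraps the regular PaddleSeg training loop inside a simulated-annealing architecture search. A minimal sketch of that control flow is given here; the space name, server address/port and step count are placeholders standing in for the corresponding `cfg.SLIM.*` options used by the script.

```python
from paddleslim.nas.sa_nas import SANAS
# The custom search space must be imported so that its
# @SEARCHSPACE.register decorator has run, e.g.:
# from mobilenetv2_search_space import MobileNetV2SpaceSeg

config_info = {'input_size': 769, 'output_size': 1, 'block_num': 7}
config = [('MobileNetV2SpaceSeg', config_info)]   # cfg.SLIM.NAS_SPACE_NAME (assumed)

sa_nas = SANAS(config, server_addr=('', 8337),    # cfg.SLIM.NAS_ADDRESS / NAS_PORT
               search_steps=100,                  # cfg.SLIM.NAS_SEARCH_STEPS
               is_server=True)                    # cfg.SLIM.NAS_IS_SERVER

for step in range(100):
    arch = sa_nas.next_archs()[0]   # callable that builds the candidate backbone
    # ... build the program with build_model(..., arch=arch), train, evaluate ...
    best_miou = 0.0                 # placeholder for the epoch-best validation mIoU
    sa_nas.reward(float(best_miou)) # feed the score back to the SA controller
```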
+ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import os +# GPU memory garbage collection optimization flags +os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0" + +import sys + +LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) +SEG_PATH = os.path.join(LOCAL_PATH, "../../", "pdseg") +sys.path.append(SEG_PATH) + +import argparse +import pprint +import random +import shutil +import functools + +import paddle +import numpy as np +import paddle.fluid as fluid + +from utils.config import cfg +from utils.timer import Timer, calculate_eta +from metrics import ConfusionMatrix +from reader import SegDataset +from model_builder import build_model +from model_builder import ModelPhase +from model_builder import parse_shape_from_file +from eval_nas import evaluate +from vis import visualize +from utils import dist_utils + +from mobilenetv2_search_space import MobileNetV2SpaceSeg +from paddleslim.nas.search_space.search_space_factory import SearchSpaceFactory +from paddleslim.analysis import flops +from paddleslim.nas.sa_nas import SANAS +from paddleslim.nas import search_space + +def parse_args(): + parser = argparse.ArgumentParser(description='PaddleSeg training') + parser.add_argument( + '--cfg', + dest='cfg_file', + help='Config file for training (and optionally testing)', + default=None, + type=str) + parser.add_argument( + '--use_gpu', + dest='use_gpu', + help='Use gpu or cpu', + action='store_true', + default=False) + parser.add_argument( + '--use_mpio', + dest='use_mpio', + help='Use multiprocess I/O or not', + action='store_true', + default=False) + parser.add_argument( + '--log_steps', + dest='log_steps', + help='Display logging information at every log_steps', + default=10, + type=int) + parser.add_argument( + '--debug', + dest='debug', + help='debug mode, display detail information of training', + action='store_true') + parser.add_argument( + '--use_tb', + dest='use_tb', + help='whether to record the data during training to Tensorboard', + action='store_true') + parser.add_argument( + '--tb_log_dir', + dest='tb_log_dir', + help='Tensorboard logging directory', + default=None, + type=str) + parser.add_argument( + '--do_eval', + dest='do_eval', + help='Evaluation models result on every new checkpoint', + action='store_true') + parser.add_argument( + 'opts', + help='See utils/config.py for all options', + default=None, + nargs=argparse.REMAINDER) + parser.add_argument( + '--enable_ce', + dest='enable_ce', + help='If set True, enable continuous evaluation job.' + 'This flag is only used for internal test.', + action='store_true') + return parser.parse_args() + + +def save_vars(executor, dirname, program=None, vars=None): + """ + Temporary resolution for Win save variables compatability. 
+ Will fix in PaddlePaddle v1.5.2 + """ + + save_program = fluid.Program() + save_block = save_program.global_block() + + for each_var in vars: + # NOTE: don't save the variable which type is RAW + if each_var.type == fluid.core.VarDesc.VarType.RAW: + continue + new_var = save_block.create_var( + name=each_var.name, + shape=each_var.shape, + dtype=each_var.dtype, + type=each_var.type, + lod_level=each_var.lod_level, + persistable=True) + file_path = os.path.join(dirname, new_var.name) + file_path = os.path.normpath(file_path) + save_block.append_op( + type='save', + inputs={'X': [new_var]}, + outputs={}, + attrs={'file_path': file_path}) + + executor.run(save_program) + + +def save_checkpoint(exe, program, ckpt_name): + """ + Save checkpoint for evaluation or resume training + """ + ckpt_dir = os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, str(ckpt_name)) + print("Save model checkpoint to {}".format(ckpt_dir)) + if not os.path.isdir(ckpt_dir): + os.makedirs(ckpt_dir) + + save_vars( + exe, + ckpt_dir, + program, + vars=list(filter(fluid.io.is_persistable, program.list_vars()))) + + return ckpt_dir + + +def load_checkpoint(exe, program): + """ + Load checkpoiont from pretrained model directory for resume training + """ + + print('Resume model training from:', cfg.TRAIN.RESUME_MODEL_DIR) + if not os.path.exists(cfg.TRAIN.RESUME_MODEL_DIR): + raise ValueError("TRAIN.PRETRAIN_MODEL {} not exist!".format( + cfg.TRAIN.RESUME_MODEL_DIR)) + + fluid.io.load_persistables( + exe, cfg.TRAIN.RESUME_MODEL_DIR, main_program=program) + + model_path = cfg.TRAIN.RESUME_MODEL_DIR + # Check is path ended by path spearator + if model_path[-1] == os.sep: + model_path = model_path[0:-1] + epoch_name = os.path.basename(model_path) + # If resume model is final model + if epoch_name == 'final': + begin_epoch = cfg.SOLVER.NUM_EPOCHS + # If resume model path is end of digit, restore epoch status + elif epoch_name.isdigit(): + epoch = int(epoch_name) + begin_epoch = epoch + 1 + else: + raise ValueError("Resume model path is not valid!") + print("Model checkpoint loaded successfully!") + + return begin_epoch + + +def update_best_model(ckpt_dir): + best_model_dir = os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, 'best_model') + if os.path.exists(best_model_dir): + shutil.rmtree(best_model_dir) + shutil.copytree(ckpt_dir, best_model_dir) + + +def print_info(*msg): + if cfg.TRAINER_ID == 0: + print(*msg) + + +def train(cfg): + startup_prog = fluid.Program() + train_prog = fluid.Program() + if args.enable_ce: + startup_prog.random_seed = 1000 + train_prog.random_seed = 1000 + drop_last = True + + dataset = SegDataset( + file_list=cfg.DATASET.TRAIN_FILE_LIST, + mode=ModelPhase.TRAIN, + shuffle=True, + data_dir=cfg.DATASET.DATA_DIR) + + def data_generator(): + if args.use_mpio: + data_gen = dataset.multiprocess_generator( + num_processes=cfg.DATALOADER.NUM_WORKERS, + max_queue_size=cfg.DATALOADER.BUF_SIZE) + else: + data_gen = dataset.generator() + + batch_data = [] + for b in data_gen: + batch_data.append(b) + if len(batch_data) == (cfg.BATCH_SIZE // cfg.NUM_TRAINERS): + for item in batch_data: + yield item[0], item[1], item[2] + batch_data = [] + # If use sync batch norm strategy, drop last batch if number of samples + # in batch_data is less then cfg.BATCH_SIZE to avoid NCCL hang issues + if not cfg.TRAIN.SYNC_BATCH_NORM: + for item in batch_data: + yield item[0], item[1], item[2] + + # Get device environment + # places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places() + # place = places[0] + gpu_id = 
int(os.environ.get('FLAGS_selected_gpus', 0)) + place = fluid.CUDAPlace(gpu_id) if args.use_gpu else fluid.CPUPlace() + places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places() + + # Get number of GPU + dev_count = cfg.NUM_TRAINERS if cfg.NUM_TRAINERS > 1 else len(places) + print_info("#Device count: {}".format(dev_count)) + + # Make sure BATCH_SIZE can divided by GPU cards + assert cfg.BATCH_SIZE % dev_count == 0, ( + 'BATCH_SIZE:{} not divisble by number of GPUs:{}'.format( + cfg.BATCH_SIZE, dev_count)) + # If use multi-gpu training mode, batch data will allocated to each GPU evenly + batch_size_per_dev = cfg.BATCH_SIZE // dev_count + print_info("batch_size_per_dev: {}".format(batch_size_per_dev)) + + config_info = {'input_size': 769, 'output_size': 1, 'block_num': 7} + config = ([(cfg.SLIM.NAS_SPACE_NAME, config_info)]) + factory = SearchSpaceFactory() + space = factory.get_search_space(config) + + port = cfg.SLIM.NAS_PORT + server_address = (cfg.SLIM.NAS_ADDRESS, port) + sa_nas = SANAS(config, server_addr=server_address, search_steps=cfg.SLIM.NAS_SEARCH_STEPS, + is_server=cfg.SLIM.NAS_IS_SERVER) + for step in range(cfg.SLIM.NAS_SEARCH_STEPS): + arch = sa_nas.next_archs()[0] + + start_prog = fluid.Program() + train_prog = fluid.Program() + + py_reader, avg_loss, lr, pred, grts, masks = build_model( + train_prog, start_prog, arch=arch, phase=ModelPhase.TRAIN) + + cur_flops = flops(train_prog) + print('current step:', step, 'flops:', cur_flops) + + py_reader.decorate_sample_generator( + data_generator, batch_size=batch_size_per_dev, drop_last=drop_last) + + exe = fluid.Executor(place) + exe.run(start_prog) + + exec_strategy = fluid.ExecutionStrategy() + # Clear temporary variables every 100 iteration + if args.use_gpu: + exec_strategy.num_threads = fluid.core.get_cuda_device_count() + exec_strategy.num_iteration_per_drop_scope = 100 + build_strategy = fluid.BuildStrategy() + + if cfg.NUM_TRAINERS > 1 and args.use_gpu: + dist_utils.prepare_for_multi_process(exe, build_strategy, train_prog) + exec_strategy.num_threads = 1 + + if cfg.TRAIN.SYNC_BATCH_NORM and args.use_gpu: + if dev_count > 1: + # Apply sync batch norm strategy + print_info("Sync BatchNorm strategy is effective.") + build_strategy.sync_batch_norm = True + else: + print_info( + "Sync BatchNorm strategy will not be effective if GPU device" + " count <= 1") + compiled_train_prog = fluid.CompiledProgram(train_prog).with_data_parallel( + loss_name=avg_loss.name, + exec_strategy=exec_strategy, + build_strategy=build_strategy) + + # Resume training + begin_epoch = cfg.SOLVER.BEGIN_EPOCH + if cfg.TRAIN.RESUME_MODEL_DIR: + begin_epoch = load_checkpoint(exe, train_prog) + # Load pretrained model + elif os.path.exists(cfg.TRAIN.PRETRAINED_MODEL_DIR): + print_info('Pretrained model dir: ', cfg.TRAIN.PRETRAINED_MODEL_DIR) + load_vars = [] + load_fail_vars = [] + + def var_shape_matched(var, shape): + """ + Check whehter persitable variable shape is match with current network + """ + var_exist = os.path.exists( + os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name)) + if var_exist: + var_shape = parse_shape_from_file( + os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name)) + return var_shape == shape + return False + + for x in train_prog.list_vars(): + if isinstance(x, fluid.framework.Parameter): + shape = tuple(fluid.global_scope().find_var( + x.name).get_tensor().shape()) + if var_shape_matched(x, shape): + load_vars.append(x) + else: + load_fail_vars.append(x) + + fluid.io.load_vars( + exe, 
dirname=cfg.TRAIN.PRETRAINED_MODEL_DIR, vars=load_vars) + for var in load_vars: + print_info("Parameter[{}] loaded sucessfully!".format(var.name)) + for var in load_fail_vars: + print_info( + "Parameter[{}] don't exist or shape does not match current network, skip" + " to load it.".format(var.name)) + print_info("{}/{} pretrained parameters loaded successfully!".format( + len(load_vars), + len(load_vars) + len(load_fail_vars))) + else: + print_info( + 'Pretrained model dir {} not exists, training from scratch...'. + format(cfg.TRAIN.PRETRAINED_MODEL_DIR)) + + fetch_list = [avg_loss.name, lr.name] + + global_step = 0 + all_step = cfg.DATASET.TRAIN_TOTAL_IMAGES // cfg.BATCH_SIZE + if cfg.DATASET.TRAIN_TOTAL_IMAGES % cfg.BATCH_SIZE and drop_last != True: + all_step += 1 + all_step *= (cfg.SOLVER.NUM_EPOCHS - begin_epoch + 1) + + avg_loss = 0.0 + timer = Timer() + timer.start() + if begin_epoch > cfg.SOLVER.NUM_EPOCHS: + raise ValueError( + ("begin epoch[{}] is larger than cfg.SOLVER.NUM_EPOCHS[{}]").format( + begin_epoch, cfg.SOLVER.NUM_EPOCHS)) + + if args.use_mpio: + print_info("Use multiprocess reader") + else: + print_info("Use multi-thread reader") + + best_miou = 0.0 + for epoch in range(begin_epoch, cfg.SOLVER.NUM_EPOCHS + 1): + py_reader.start() + while True: + try: + loss, lr = exe.run( + program=compiled_train_prog, + fetch_list=fetch_list, + return_numpy=True) + avg_loss += np.mean(np.array(loss)) + global_step += 1 + + if global_step % args.log_steps == 0 and cfg.TRAINER_ID == 0: + avg_loss /= args.log_steps + speed = args.log_steps / timer.elapsed_time() + print(( + "epoch={} step={} lr={:.5f} loss={:.4f} step/sec={:.3f} | ETA {}" + ).format(epoch, global_step, lr[0], avg_loss, speed, + calculate_eta(all_step - global_step, speed))) + + sys.stdout.flush() + avg_loss = 0.0 + timer.restart() + + except fluid.core.EOFException: + py_reader.reset() + break + except Exception as e: + print(e) + if epoch > cfg.SLIM.NAS_START_EVAL_EPOCH: + ckpt_dir = save_checkpoint(exe, train_prog, '{}_tmp'.format(port)) + _, mean_iou, _, mean_acc = evaluate( + cfg=cfg, + arch=arch, + ckpt_dir=ckpt_dir, + use_gpu=args.use_gpu, + use_mpio=args.use_mpio) + if best_miou < mean_iou: + print('search step {}, epoch {} best iou {}'.format(step, epoch, mean_iou)) + best_miou = mean_iou + + sa_nas.reward(float(best_miou)) + + +def main(args): + if args.cfg_file is not None: + cfg.update_from_file(args.cfg_file) + if args.opts: + cfg.update_from_list(args.opts) + if args.enable_ce: + random.seed(0) + np.random.seed(0) + + cfg.TRAINER_ID = int(os.getenv("PADDLE_TRAINER_ID", 0)) + cfg.NUM_TRAINERS = int(os.environ.get('PADDLE_TRAINERS_NUM', 1)) + + cfg.check_and_infer() + print_info(pprint.pformat(cfg)) + train(cfg) + + +if __name__ == '__main__': + args = parse_args() + if fluid.core.is_compiled_with_cuda() != True and args.use_gpu == True: + print( + "You can not set use_gpu = True in the model because you are using paddlepaddle-cpu." + ) + print( + "Please: 1. Install paddlepaddle-gpu to run your models on GPU or 2. Set use_gpu=False to run models on CPU." 
+ ) + sys.exit(1) + main(args) diff --git a/slim/prune/README.md b/slim/prune/README.md new file mode 100644 index 0000000000000000000000000000000000000000..b6a45238938567a845b44ff768db6982bfeab55c --- /dev/null +++ b/slim/prune/README.md @@ -0,0 +1,58 @@ +# PaddleSeg剪裁教程 + +在阅读本教程前,请确保您已经了解过[PaddleSeg使用说明](../../docs/usage.md)等章节,以便对PaddleSeg有一定的了解 + +该文档介绍如何使用[PaddleSlim](https://paddlepaddle.github.io/PaddleSlim)的卷积通道剪裁接口对检测库中的模型的卷积层的通道数进行剪裁。 + +在分割库中,可以直接调用`PaddleSeg/slim/prune/train_prune.py`脚本实现剪裁,在该脚本中调用了PaddleSlim的[paddleslim.prune.Pruner](https://paddlepaddle.github.io/PaddleSlim/api/prune_api/#Pruner)接口。 + +该教程中所示操作,如无特殊说明,均在`PaddleSeg/`路径下执行。 + +## 1. 数据与预训练模型准备 +执行如下命令,下载cityscapes数据集 +``` +python dataset/download_cityscapes.py +``` +参照[预训练模型列表](../../docs/model_zoo.md)获取所需预训练模型 + +## 2. 确定待分析参数 + +我们通过剪裁卷积层参数达到缩减卷积层通道数的目的,在剪裁之前,我们需要确定待裁卷积层的参数的名称。 +通过以下命令查看当前模型的所有参数: + +```python +# 查看模型所有Paramters +for x in train_prog.list_vars(): + if isinstance(x, fluid.framework.Parameter): + print(x.name, x.shape) + +``` + +通过观察参数名称和参数的形状,筛选出所有卷积层参数,并确定要裁剪的卷积层参数。 + +## 3. 启动剪裁任务 + +使用`train_prune.py`启动裁剪任务时,通过`SLIM.PRUNE_PARAMS`选项指定待裁剪的参数名称列表,参数名之间用逗号分隔,通过`SLIM.PRUNE_RATIOS`选项指定各个参数被裁掉的比例。 + +```shell +CUDA_VISIBLE_DEVICES=0 +python -u ./slim/prune/train_prune.py --log_steps 10 --cfg configs/cityscape_fast_scnn.yaml --use_gpu --use_mpio \ +SLIM.PRUNE_PARAMS 'learning_to_downsample/weights,learning_to_downsample/dsconv1/pointwise/weights,learning_to_downsample/dsconv2/pointwise/weights' \ +SLIM.PRUNE_RATIOS '[0.1,0.1,0.1]' +``` +这里我们选取三个参数,按0.1的比例剪裁。 + +## 4. 评估 + +```shell +CUDA_VISIBLE_DEVICES=0 +python -u ./slim/prune/eval_prune.py --cfg configs/cityscape_fast_scnn.yaml --use_gpu --use_mpio \ +TEST.TEST_MODEL your_trained_model \ +``` + +## 5. 模型 + +| 模型 | 数据集合 | 下载地址 |剪裁方法| flops | mIoU on val| +|---|---|---|---|---|---| +| Fast-SCNN/bn | Cityscapes |[fast_scnn_cityscapes.tar](https://paddleseg.bj.bcebos.com/models/fast_scnn_cityscape.tar) | 无 | 7.21g | 0.6964 | +| Fast-SCNN/bn | Cityscapes |[fast_scnn_cityscapes-uniform-51.tar](https://paddleseg.bj.bcebos.com/models/fast_scnn_cityscape-uniform-51.tar) | uniform | 3.54g | 0.6990 | diff --git a/slim/prune/eval_prune.py b/slim/prune/eval_prune.py new file mode 100644 index 0000000000000000000000000000000000000000..b8275d03475b8fea67d73682b54a38172fbc25e2 --- /dev/null +++ b/slim/prune/eval_prune.py @@ -0,0 +1,185 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
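Building on the parameter-listing snippet in the pruning README above, one possible (hypothetical) way to narrow the listing down to convolution kernels before filling in `SLIM.PRUNE_PARAMS` is sketched below; `train_prog` is assumed to be the training Program built as in that snippet, and the name filter is only a heuristic.

```python
import paddle.fluid as fluid

# Keep only 4-D conv kernels (N, C, H, W) whose names end with 'weights',
# which are the kind of parameters typically passed to SLIM.PRUNE_PARAMS.
conv_weights = [
    (p.name, p.shape)
    for p in train_prog.list_vars()
    if isinstance(p, fluid.framework.Parameter)
    and len(p.shape) == 4 and p.name.endswith('weights')
]
for name, shape in conv_weights:
    print(name, shape)
```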
+ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import os +# GPU memory garbage collection optimization flags +os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0" + +import sys + +LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) +SEG_PATH = os.path.join(LOCAL_PATH, "../../", "pdseg") +sys.path.append(SEG_PATH) + +import time +import argparse +import functools +import pprint +import cv2 +import numpy as np +import paddle +import paddle.fluid as fluid + +from utils.config import cfg +from utils.timer import Timer, calculate_eta +from models.model_builder import build_model +from models.model_builder import ModelPhase +from reader import SegDataset +from metrics import ConfusionMatrix + +from paddleslim.prune import load_model + +def parse_args(): + parser = argparse.ArgumentParser(description='PaddleSeg model evalution') + parser.add_argument( + '--cfg', + dest='cfg_file', + help='Config file for training (and optionally testing)', + default=None, + type=str) + parser.add_argument( + '--use_gpu', + dest='use_gpu', + help='Use gpu or cpu', + action='store_true', + default=False) + parser.add_argument( + '--use_mpio', + dest='use_mpio', + help='Use multiprocess IO or not', + action='store_true', + default=False) + parser.add_argument( + 'opts', + help='See utils/config.py for all options', + default=None, + nargs=argparse.REMAINDER) + if len(sys.argv) == 1: + parser.print_help() + sys.exit(1) + return parser.parse_args() + + +def evaluate(cfg, ckpt_dir=None, use_gpu=False, use_mpio=False, **kwargs): + np.set_printoptions(precision=5, suppress=True) + + startup_prog = fluid.Program() + test_prog = fluid.Program() + dataset = SegDataset( + file_list=cfg.DATASET.VAL_FILE_LIST, + mode=ModelPhase.EVAL, + data_dir=cfg.DATASET.DATA_DIR) + + def data_generator(): + #TODO: check is batch reader compatitable with Windows + if use_mpio: + data_gen = dataset.multiprocess_generator( + num_processes=cfg.DATALOADER.NUM_WORKERS, + max_queue_size=cfg.DATALOADER.BUF_SIZE) + else: + data_gen = dataset.generator() + + for b in data_gen: + yield b[0], b[1], b[2] + + py_reader, avg_loss, pred, grts, masks = build_model( + test_prog, startup_prog, phase=ModelPhase.EVAL) + + py_reader.decorate_sample_generator( + data_generator, drop_last=False, batch_size=cfg.BATCH_SIZE) + + # Get device environment + places = fluid.cuda_places() if use_gpu else fluid.cpu_places() + place = places[0] + dev_count = len(places) + print("#Device count: {}".format(dev_count)) + + exe = fluid.Executor(place) + exe.run(startup_prog) + + test_prog = test_prog.clone(for_test=True) + + ckpt_dir = cfg.TEST.TEST_MODEL if not ckpt_dir else ckpt_dir + + if not os.path.exists(ckpt_dir): + raise ValueError('The TEST.TEST_MODEL {} is not found'.format(ckpt_dir)) + + if ckpt_dir is not None: + print('load test model:', ckpt_dir) + load_model(exe, test_prog, ckpt_dir) + + # Use streaming confusion matrix to calculate mean_iou + np.set_printoptions( + precision=4, suppress=True, linewidth=160, floatmode="fixed") + conf_mat = ConfusionMatrix(cfg.DATASET.NUM_CLASSES, streaming=True) + fetch_list = [avg_loss.name, pred.name, grts.name, masks.name] + num_images = 0 + step = 0 + all_step = cfg.DATASET.TEST_TOTAL_IMAGES // cfg.BATCH_SIZE + 1 + timer = Timer() + timer.start() + py_reader.start() + while True: + try: + step += 1 + loss, pred, grts, masks = exe.run( + test_prog, fetch_list=fetch_list, return_numpy=True) + + loss = np.mean(np.array(loss)) + + num_images += pred.shape[0] 
+ conf_mat.calculate(pred, grts, masks) + _, iou = conf_mat.mean_iou() + _, acc = conf_mat.accuracy() + + speed = 1.0 / timer.elapsed_time() + + print( + "[EVAL]step={} loss={:.5f} acc={:.4f} IoU={:.4f} step/sec={:.2f} | ETA {}" + .format(step, loss, acc, iou, speed, + calculate_eta(all_step - step, speed))) + timer.restart() + sys.stdout.flush() + except fluid.core.EOFException: + break + + category_iou, avg_iou = conf_mat.mean_iou() + category_acc, avg_acc = conf_mat.accuracy() + print("[EVAL]#image={} acc={:.4f} IoU={:.4f}".format( + num_images, avg_acc, avg_iou)) + print("[EVAL]Category IoU:", category_iou) + print("[EVAL]Category Acc:", category_acc) + print("[EVAL]Kappa:{:.4f}".format(conf_mat.kappa())) + + return category_iou, avg_iou, category_acc, avg_acc + + +def main(): + args = parse_args() + if args.cfg_file is not None: + cfg.update_from_file(args.cfg_file) + if args.opts: + cfg.update_from_list(args.opts) + cfg.check_and_infer() + print(pprint.pformat(cfg)) + evaluate(cfg, **args.__dict__) + + +if __name__ == '__main__': + main() diff --git a/slim/prune/train_prune.py b/slim/prune/train_prune.py new file mode 100644 index 0000000000000000000000000000000000000000..06e1658f1a3f721842fbe780820103aceac87a16 --- /dev/null +++ b/slim/prune/train_prune.py @@ -0,0 +1,504 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
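The training script below normalizes `SLIM.PRUNE_RATIOS` and then applies PaddleSlim's `Pruner` to the training Program. A condensed sketch of those two steps follows; the parameter names, the 0.1 ratio and the single-GPU place are illustrative, and `train_prog` is assumed to have been built and initialized by the usual `build_model()` flow.

```python
import paddle.fluid as fluid
from paddleslim.prune import Pruner

pruned_params = 'conv1/weights,conv2/weights,conv3/weights'.strip().split(',')
pruned_ratios = 0.1                     # cfg.SLIM.PRUNE_RATIOS (assumed)
if isinstance(pruned_ratios, float):
    # a single ratio is broadcast to every listed parameter
    pruned_ratios = [pruned_ratios] * len(pruned_params)

pruner = Pruner()
train_prog = pruner.prune(
    train_prog,                 # Program containing the conv parameters (assumed)
    fluid.global_scope(),       # scope already holding initialized parameter values
    params=pruned_params,
    ratios=pruned_ratios,
    place=fluid.CUDAPlace(0),
    only_graph=False)[0]        # prune() returns a tuple; the Program comes first
```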
+ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import os +# GPU memory garbage collection optimization flags +os.environ['FLAGS_eager_delete_tensor_gb'] = "0.0" + +import sys + +LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) +SEG_PATH = os.path.join(LOCAL_PATH, "../../", "pdseg") +sys.path.append(SEG_PATH) + +import argparse +import pprint +import shutil +import functools + +import paddle +import numpy as np +import paddle.fluid as fluid + +from utils.config import cfg +from utils.timer import Timer, calculate_eta +from metrics import ConfusionMatrix +from reader import SegDataset +from models.model_builder import build_model +from models.model_builder import ModelPhase +from models.model_builder import parse_shape_from_file +from eval_prune import evaluate +from vis import visualize +from utils import dist_utils + +from paddleslim.prune import Pruner, save_model +from paddleslim.analysis import flops + +def parse_args(): + parser = argparse.ArgumentParser(description='PaddleSeg training') + parser.add_argument( + '--cfg', + dest='cfg_file', + help='Config file for training (and optionally testing)', + default=None, + type=str) + parser.add_argument( + '--use_gpu', + dest='use_gpu', + help='Use gpu or cpu', + action='store_true', + default=False) + parser.add_argument( + '--use_mpio', + dest='use_mpio', + help='Use multiprocess I/O or not', + action='store_true', + default=False) + parser.add_argument( + '--log_steps', + dest='log_steps', + help='Display logging information at every log_steps', + default=10, + type=int) + parser.add_argument( + '--debug', + dest='debug', + help='debug mode, display detail information of training', + action='store_true') + parser.add_argument( + '--use_tb', + dest='use_tb', + help='whether to record the data during training to Tensorboard', + action='store_true') + parser.add_argument( + '--tb_log_dir', + dest='tb_log_dir', + help='Tensorboard logging directory', + default=None, + type=str) + parser.add_argument( + '--do_eval', + dest='do_eval', + help='Evaluation models result on every new checkpoint', + action='store_true') + parser.add_argument( + 'opts', + help='See utils/config.py for all options', + default=None, + nargs=argparse.REMAINDER) + return parser.parse_args() + + +def save_vars(executor, dirname, program=None, vars=None): + """ + Temporary resolution for Win save variables compatability. 
+ Will fix in PaddlePaddle v1.5.2 + """ + + save_program = fluid.Program() + save_block = save_program.global_block() + + for each_var in vars: + # NOTE: don't save the variable which type is RAW + if each_var.type == fluid.core.VarDesc.VarType.RAW: + continue + new_var = save_block.create_var( + name=each_var.name, + shape=each_var.shape, + dtype=each_var.dtype, + type=each_var.type, + lod_level=each_var.lod_level, + persistable=True) + file_path = os.path.join(dirname, new_var.name) + file_path = os.path.normpath(file_path) + save_block.append_op( + type='save', + inputs={'X': [new_var]}, + outputs={}, + attrs={'file_path': file_path}) + + executor.run(save_program) + + +def save_prune_checkpoint(exe, program, ckpt_name): + """ + Save checkpoint for evaluation or resume training + """ + ckpt_dir = os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, str(ckpt_name)) + print("Save model checkpoint to {}".format(ckpt_dir)) + if not os.path.isdir(ckpt_dir): + os.makedirs(ckpt_dir) + + save_model(exe, program, ckpt_dir) + + return ckpt_dir + + +def load_checkpoint(exe, program): + """ + Load checkpoiont from pretrained model directory for resume training + """ + + print('Resume model training from:', cfg.TRAIN.RESUME_MODEL_DIR) + if not os.path.exists(cfg.TRAIN.RESUME_MODEL_DIR): + raise ValueError("TRAIN.PRETRAIN_MODEL {} not exist!".format( + cfg.TRAIN.RESUME_MODEL_DIR)) + + fluid.io.load_persistables( + exe, cfg.TRAIN.RESUME_MODEL_DIR, main_program=program) + + model_path = cfg.TRAIN.RESUME_MODEL_DIR + # Check is path ended by path spearator + if model_path[-1] == os.sep: + model_path = model_path[0:-1] + epoch_name = os.path.basename(model_path) + # If resume model is final model + if epoch_name == 'final': + begin_epoch = cfg.SOLVER.NUM_EPOCHS + # If resume model path is end of digit, restore epoch status + elif epoch_name.isdigit(): + epoch = int(epoch_name) + begin_epoch = epoch + 1 + else: + raise ValueError("Resume model path is not valid!") + print("Model checkpoint loaded successfully!") + + return begin_epoch + +def print_info(*msg): + if cfg.TRAINER_ID == 0: + print(*msg) + +def train(cfg): + startup_prog = fluid.Program() + train_prog = fluid.Program() + drop_last = True + + dataset = SegDataset( + file_list=cfg.DATASET.TRAIN_FILE_LIST, + mode=ModelPhase.TRAIN, + shuffle=True, + data_dir=cfg.DATASET.DATA_DIR) + + def data_generator(): + if args.use_mpio: + data_gen = dataset.multiprocess_generator( + num_processes=cfg.DATALOADER.NUM_WORKERS, + max_queue_size=cfg.DATALOADER.BUF_SIZE) + else: + data_gen = dataset.generator() + + batch_data = [] + for b in data_gen: + batch_data.append(b) + if len(batch_data) == (cfg.BATCH_SIZE // cfg.NUM_TRAINERS): + for item in batch_data: + yield item[0], item[1], item[2] + batch_data = [] + # If use sync batch norm strategy, drop last batch if number of samples + # in batch_data is less then cfg.BATCH_SIZE to avoid NCCL hang issues + if not cfg.TRAIN.SYNC_BATCH_NORM: + for item in batch_data: + yield item[0], item[1], item[2] + + # Get device environment + # places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places() + # place = places[0] + gpu_id = int(os.environ.get('FLAGS_selected_gpus', 0)) + place = fluid.CUDAPlace(gpu_id) if args.use_gpu else fluid.CPUPlace() + places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places() + + # Get number of GPU + dev_count = cfg.NUM_TRAINERS if cfg.NUM_TRAINERS > 1 else len(places) + print_info("#Device count: {}".format(dev_count)) + + # Make sure BATCH_SIZE can divided by GPU cards + assert 
cfg.BATCH_SIZE % dev_count == 0, ( + 'BATCH_SIZE:{} not divisble by number of GPUs:{}'.format( + cfg.BATCH_SIZE, dev_count)) + # If use multi-gpu training mode, batch data will allocated to each GPU evenly + batch_size_per_dev = cfg.BATCH_SIZE // dev_count + print_info("batch_size_per_dev: {}".format(batch_size_per_dev)) + + py_reader, avg_loss, lr, pred, grts, masks = build_model( + train_prog, startup_prog, phase=ModelPhase.TRAIN) + py_reader.decorate_sample_generator( + data_generator, batch_size=batch_size_per_dev, drop_last=drop_last) + + exe = fluid.Executor(place) + exe.run(startup_prog) + + exec_strategy = fluid.ExecutionStrategy() + # Clear temporary variables every 100 iteration + if args.use_gpu: + exec_strategy.num_threads = fluid.core.get_cuda_device_count() + exec_strategy.num_iteration_per_drop_scope = 100 + build_strategy = fluid.BuildStrategy() + + if cfg.NUM_TRAINERS > 1 and args.use_gpu: + dist_utils.prepare_for_multi_process(exe, build_strategy, train_prog) + exec_strategy.num_threads = 1 + + if cfg.TRAIN.SYNC_BATCH_NORM and args.use_gpu: + if dev_count > 1: + # Apply sync batch norm strategy + print_info("Sync BatchNorm strategy is effective.") + build_strategy.sync_batch_norm = True + else: + print_info("Sync BatchNorm strategy will not be effective if GPU device" + " count <= 1") + + pruned_params = cfg.SLIM.PRUNE_PARAMS.strip().split(',') + pruned_ratios = cfg.SLIM.PRUNE_RATIOS + + if isinstance(pruned_ratios, float): + pruned_ratios = [pruned_ratios] * len(pruned_params) + elif isinstance(pruned_ratios, (list, tuple)): + pruned_ratios = list(pruned_ratios) + else: + raise ValueError('expect SLIM.PRUNE_RATIOS type is float, list, tuple, ' + 'but received {}'.format(type(pruned_ratios))) + + # Resume training + begin_epoch = cfg.SOLVER.BEGIN_EPOCH + if cfg.TRAIN.RESUME_MODEL_DIR: + begin_epoch = load_checkpoint(exe, train_prog) + # Load pretrained model + elif os.path.exists(cfg.TRAIN.PRETRAINED_MODEL_DIR): + print_info('Pretrained model dir: ', cfg.TRAIN.PRETRAINED_MODEL_DIR) + load_vars = [] + load_fail_vars = [] + + def var_shape_matched(var, shape): + """ + Check whehter persitable variable shape is match with current network + """ + var_exist = os.path.exists( + os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name)) + if var_exist: + var_shape = parse_shape_from_file( + os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name)) + return var_shape == shape + return False + + for x in train_prog.list_vars(): + if isinstance(x, fluid.framework.Parameter): + shape = tuple(fluid.global_scope().find_var( + x.name).get_tensor().shape()) + if var_shape_matched(x, shape): + load_vars.append(x) + else: + load_fail_vars.append(x) + + fluid.io.load_vars( + exe, dirname=cfg.TRAIN.PRETRAINED_MODEL_DIR, vars=load_vars) + for var in load_vars: + print_info("Parameter[{}] loaded sucessfully!".format(var.name)) + for var in load_fail_vars: + print_info("Parameter[{}] don't exist or shape does not match current network, skip" + " to load it.".format(var.name)) + print_info("{}/{} pretrained parameters loaded successfully!".format( + len(load_vars), + len(load_vars) + len(load_fail_vars))) + else: + print_info('Pretrained model dir {} not exists, training from scratch...'. 
+ format(cfg.TRAIN.PRETRAINED_MODEL_DIR)) + + fetch_list = [avg_loss.name, lr.name] + if args.debug: + # Fetch more variable info and use streaming confusion matrix to + # calculate IoU results if in debug mode + np.set_printoptions( + precision=4, suppress=True, linewidth=160, floatmode="fixed") + fetch_list.extend([pred.name, grts.name, masks.name]) + cm = ConfusionMatrix(cfg.DATASET.NUM_CLASSES, streaming=True) + + if args.use_tb: + if not args.tb_log_dir: + print_info("Please specify the log directory by --tb_log_dir.") + exit(1) + + from tb_paddle import SummaryWriter + log_writer = SummaryWriter(args.tb_log_dir) + + pruner = Pruner() + train_prog = pruner.prune( + train_prog, + fluid.global_scope(), + params=pruned_params, + ratios=pruned_ratios, + place=place, + only_graph=False)[0] + + compiled_train_prog = fluid.CompiledProgram(train_prog).with_data_parallel( + loss_name=avg_loss.name, + exec_strategy=exec_strategy, + build_strategy=build_strategy) + + global_step = 0 + all_step = cfg.DATASET.TRAIN_TOTAL_IMAGES // cfg.BATCH_SIZE + if cfg.DATASET.TRAIN_TOTAL_IMAGES % cfg.BATCH_SIZE and drop_last != True: + all_step += 1 + all_step *= (cfg.SOLVER.NUM_EPOCHS - begin_epoch + 1) + + avg_loss = 0.0 + timer = Timer() + timer.start() + if begin_epoch > cfg.SOLVER.NUM_EPOCHS: + raise ValueError( + ("begin epoch[{}] is larger than cfg.SOLVER.NUM_EPOCHS[{}]").format( + begin_epoch, cfg.SOLVER.NUM_EPOCHS)) + + if args.use_mpio: + print_info("Use multiprocess reader") + else: + print_info("Use multi-thread reader") + + for epoch in range(begin_epoch, cfg.SOLVER.NUM_EPOCHS + 1): + py_reader.start() + while True: + try: + if args.debug: + # Print category IoU and accuracy to check whether the + # traning process is corresponed to expectation + loss, lr, pred, grts, masks = exe.run( + program=compiled_train_prog, + fetch_list=fetch_list, + return_numpy=True) + cm.calculate(pred, grts, masks) + avg_loss += np.mean(np.array(loss)) + global_step += 1 + + if global_step % args.log_steps == 0: + speed = args.log_steps / timer.elapsed_time() + avg_loss /= args.log_steps + category_acc, mean_acc = cm.accuracy() + category_iou, mean_iou = cm.mean_iou() + + print_info(( + "epoch={} step={} lr={:.5f} loss={:.4f} acc={:.5f} mIoU={:.5f} step/sec={:.3f} | ETA {}" + ).format(epoch, global_step, lr[0], avg_loss, mean_acc, + mean_iou, speed, + calculate_eta(all_step - global_step, speed))) + print_info("Category IoU: ", category_iou) + print_info("Category Acc: ", category_acc) + if args.use_tb: + log_writer.add_scalar('Train/mean_iou', mean_iou, + global_step) + log_writer.add_scalar('Train/mean_acc', mean_acc, + global_step) + log_writer.add_scalar('Train/loss', avg_loss, + global_step) + log_writer.add_scalar('Train/lr', lr[0], + global_step) + log_writer.add_scalar('Train/step/sec', speed, + global_step) + sys.stdout.flush() + avg_loss = 0.0 + cm.zero_matrix() + timer.restart() + else: + # If not in debug mode, avoid unnessary log and calculate + loss, lr = exe.run( + program=compiled_train_prog, + fetch_list=fetch_list, + return_numpy=True) + avg_loss += np.mean(np.array(loss)) + global_step += 1 + + if global_step % args.log_steps == 0 and cfg.TRAINER_ID == 0: + avg_loss /= args.log_steps + speed = args.log_steps / timer.elapsed_time() + print(( + "epoch={} step={} lr={:.5f} loss={:.4f} step/sec={:.3f} | ETA {}" + ).format(epoch, global_step, lr[0], avg_loss, speed, + calculate_eta(all_step - global_step, speed))) + if args.use_tb: + log_writer.add_scalar('Train/loss', avg_loss, + global_step) + 
log_writer.add_scalar('Train/lr', lr[0], + global_step) + log_writer.add_scalar('Train/speed', speed, + global_step) + sys.stdout.flush() + avg_loss = 0.0 + timer.restart() + + except fluid.core.EOFException: + py_reader.reset() + break + except Exception as e: + print(e) + + if epoch % cfg.TRAIN.SNAPSHOT_EPOCH == 0 and cfg.TRAINER_ID == 0: + + ckpt_dir = save_prune_checkpoint(exe, train_prog, epoch) + + if args.do_eval: + print("Evaluation start") + _, mean_iou, _, mean_acc = evaluate( + cfg=cfg, + ckpt_dir=ckpt_dir, + use_gpu=args.use_gpu, + use_mpio=args.use_mpio) + if args.use_tb: + log_writer.add_scalar('Evaluate/mean_iou', mean_iou, + global_step) + log_writer.add_scalar('Evaluate/mean_acc', mean_acc, + global_step) + + # Use Tensorboard to visualize results + if args.use_tb and cfg.DATASET.VIS_FILE_LIST is not None: + visualize( + cfg=cfg, + use_gpu=args.use_gpu, + vis_file_list=cfg.DATASET.VIS_FILE_LIST, + vis_dir="visual", + ckpt_dir=ckpt_dir, + log_writer=log_writer) + + # save final model + if cfg.TRAINER_ID == 0: + save_prune_checkpoint(exe, train_prog, 'final') + +def main(args): + if args.cfg_file is not None: + cfg.update_from_file(args.cfg_file) + if args.opts is not None: + cfg.update_from_list(args.opts) + + cfg.TRAINER_ID = int(os.getenv("PADDLE_TRAINER_ID", 0)) + cfg.NUM_TRAINERS = int(os.environ.get('PADDLE_TRAINERS_NUM', 1)) + + cfg.check_and_infer() + print_info(pprint.pformat(cfg)) + train(cfg) + + +if __name__ == '__main__': + args = parse_args() + if fluid.core.is_compiled_with_cuda() != True and args.use_gpu == True: + print( + "You can not set use_gpu = True in the model because you are using paddlepaddle-cpu." + ) + print( + "Please: 1. Install paddlepaddle-gpu to run your models on GPU or 2. Set use_gpu=False to run models on CPU." 
+ ) + sys.exit(1) + main(args) diff --git a/slim/quantization/README.md b/slim/quantization/README.md new file mode 100644 index 0000000000000000000000000000000000000000..9af04033b3a9af84d4b1fdf081f156be6f8dc0c2 --- /dev/null +++ b/slim/quantization/README.md @@ -0,0 +1,142 @@ +>运行该示例前请安装Paddle1.6或更高版本和PaddleSlim + +# 分割模型量化压缩示例 + +## 概述 + +该示例使用PaddleSlim提供的[量化压缩API](https://paddlepaddle.github.io/PaddleSlim/api/quantization_api/)对分割模型进行压缩。 +在阅读该示例前,建议您先了解以下内容: + +- [分割模型的常规训练方法](../../docs/usage.md) +- [PaddleSlim使用文档](https://paddlepaddle.github.io/PaddleSlim/) + + +## 安装PaddleSlim +可按照[PaddleSlim使用文档](https://paddlepaddle.github.io/PaddleSlim/)中的步骤安装PaddleSlim。 + + +## 训练 + + +### 数据集 +请按照分割库的教程下载数据集并放到对应位置。 + +### 下载训练好的分割模型 + +在分割库根目录下运行以下命令: +```bash +mkdir pretrain +cd pretrain +wget https://paddleseg.bj.bcebos.com/models/mobilenet_cityscapes.tgz +tar xf mobilenet_cityscapes.tgz +``` + +### 定义量化配置 +config = { + 'weight_quantize_type': 'channel_wise_abs_max', + 'activation_quantize_type': 'moving_average_abs_max', + 'quantize_op_types': ['depthwise_conv2d', 'mul', 'conv2d'], + 'not_quant_pattern': ['last_conv'] + } + +如何配置以及含义请参考[PaddleSlim 量化API](https://paddlepaddle.github.io/PaddleSlim/api/quantization_api/)。 + +### 插入量化反量化OP +使用[PaddleSlim quant_aware API](https://paddlepaddle.github.io/PaddleSlim/api/quantization_api/#quant_aware)在Program中插入量化和反量化OP。 +``` +compiled_train_prog = quant_aware(train_prog, place, config, for_test=False) +``` + +### 关闭一些训练策略 + +因为量化要对Program做修改,所以一些会修改Program的训练策略需要关闭。``sync_batch_norm`` 和量化多卡训练同时使用时会出错, 需要将其关闭。 +``` +build_strategy.fuse_all_reduce_ops = False +build_strategy.sync_batch_norm = False +``` + +### 开始训练 + + +step1: 设置gpu卡 +``` +export CUDA_VISIBLE_DEVICES=0 +``` +step2: 将``pdseg``文件夹加到系统路径 + +分割库根目录下运行以下命令 +``` +export PYTHONPATH=$PYTHONPATH:./pdseg +``` + +step2: 开始训练 + + +在分割库根目录下运行以下命令进行训练。 +``` +python -u ./slim/quantization/train_quant.py --log_steps 10 --not_quant_pattern last_conv --cfg configs/deeplabv3p_mobilenetv2_cityscapes.yaml --use_gpu --use_mpio --do_eval \ +TRAIN.PRETRAINED_MODEL_DIR "./pretrain/mobilenet_cityscapes/" \ +TRAIN.MODEL_SAVE_DIR "./snapshots/mobilenetv2_quant" \ +MODEL.DEEPLAB.ENCODER_WITH_ASPP False \ +MODEL.DEEPLAB.ENABLE_DECODER False \ +TRAIN.SYNC_BATCH_NORM False \ +SOLVER.LR 0.0001 \ +TRAIN.SNAPSHOT_EPOCH 1 \ +SOLVER.NUM_EPOCHS 30 \ +BATCH_SIZE 16 \ +``` + + +### 训练时的模型结构 +[PaddleSlim 量化API](https://paddlepaddle.github.io/PaddleSlim/api/quantization_api/)文档中介绍了``paddleslim.quant.quant_aware``和``paddleslim.quant.convert``两个接口。 +``paddleslim.quant.quant_aware`` 作用是在网络中的conv2d、depthwise_conv2d、mul等算子的各个输入前插入连续的量化op和反量化op,并改变相应反向算子的某些输入。示例图如下: + +

+图1:应用 paddleslim.quant.quant_aware 后的结果
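
结合图1,下面给出一个独立的小示例,演示 ``paddleslim.quant.quant_aware`` 在训练和评估两种模式下的基本调用方式。示例中的小网络、输入名称等均为演示用的假设,并非本仓库的分割模型;量化配置与本文保持一致。实际训练请直接使用 `slim/quantization/train_quant.py`。

```python
import paddle.fluid as fluid
from paddleslim.quant import quant_aware

# 构建一个演示用的小网络(仅为示意,非本仓库的分割模型)
train_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(train_prog, startup_prog):
    image = fluid.data(name='image', shape=[None, 3, 64, 64], dtype='float32')
    label = fluid.data(name='label', shape=[None, 1], dtype='int64')
    conv = fluid.layers.conv2d(image, num_filters=8, filter_size=3, act='relu')
    feat = fluid.layers.pool2d(conv, pool_size=2, global_pooling=True)
    logits = fluid.layers.fc(feat, size=10)
    loss = fluid.layers.mean(
        fluid.layers.softmax_with_cross_entropy(logits, label))
    fluid.optimizer.Adam(learning_rate=1e-3).minimize(loss)

place = fluid.CUDAPlace(0) if fluid.core.is_compiled_with_cuda() else fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(startup_prog)

# 与本文一致的量化配置
config = {
    'weight_quantize_type': 'channel_wise_abs_max',
    'activation_quantize_type': 'moving_average_abs_max',
    'quantize_op_types': ['depthwise_conv2d', 'mul', 'conv2d'],
    'not_quant_pattern': ['last_conv'],
}

# for_test=False:返回插入量化/反量化op后的训练用对象(CompiledProgram),不能直接用于保存参数
quant_train_prog = quant_aware(train_prog, place, config, for_test=False)
# for_test=True:返回普通Program,用于评估和保存模型
quant_eval_prog = quant_aware(train_prog, place, config, for_test=True)
```

与 `train_quant.py` 中的做法一致,`for_test=False` 的返回值随后可通过 `with_data_parallel` 编译后用于训练;`for_test=True` 返回的 Program 用于评估和保存模型。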

+
+### 边训练边测试
+
+训练脚本中边训练边测试(`--do_eval`)得到的测试精度,是基于图1所示的网络结构计算的。
+
+## 评估
+
+### 最终评估模型
+
+``paddleslim.quant.convert`` 主要用于改变Program中量化op和反量化op的顺序,即将类似图1中的量化op和反量化op的顺序改变为图2中的布局。除此之外,``paddleslim.quant.convert`` 还会将`conv2d`、`depthwise_conv2d`、`mul`等算子的参数变为量化后的int8_t范围内的值(但数据类型仍为float32)。示例如图2:
+

+图2:paddleslim.quant.convert 后的结果
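
下面用一个独立的小示例,把"插入量化op、加载权重、调用 ``convert``、导出预测模型"的流程串起来。示例中的小网络、目录名等均为演示用的假设;导出部分参考 PaddleSlim 的常规用法,并非本仓库脚本的内容,完整评估逻辑请见下文的 `slim/quantization/eval_quant.py`。

```python
import paddle.fluid as fluid
from paddleslim.quant import quant_aware, convert

# 构建一个演示用的小网络(仅为示意,非本仓库的分割模型)
test_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(test_prog, startup_prog):
    image = fluid.data(name='image', shape=[None, 3, 64, 64], dtype='float32')
    conv = fluid.layers.conv2d(image, num_filters=8, filter_size=3, act='relu')
    feat = fluid.layers.pool2d(conv, pool_size=2, global_pooling=True)
    pred = fluid.layers.fc(feat, size=10, act='softmax')
test_prog = test_prog.clone(for_test=True)

place = fluid.CUDAPlace(0) if fluid.core.is_compiled_with_cuda() else fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(startup_prog)

# 与本文一致的量化配置
config = {
    'weight_quantize_type': 'channel_wise_abs_max',
    'activation_quantize_type': 'moving_average_abs_max',
    'quantize_op_types': ['depthwise_conv2d', 'mul', 'conv2d'],
    'not_quant_pattern': ['last_conv'],
}

# 1. 插入量化/反量化op,得到图1结构的评估Program
quant_eval_prog = quant_aware(test_prog, place, config, for_test=True)

# 2. 真实流程中,此处应加载量化训练得到的权重(ckpt_dir为假设的模型目录),例如:
# fluid.io.load_persistables(exe, ckpt_dir, main_program=quant_eval_prog)

# 3. 调整量化/反量化op顺序,并把权重改写为int8_t范围内的数值(图2结构)
final_prog = convert(quant_eval_prog, place, config)

# 4. 导出为预测模型(目录名仅为示意),可供Paddle-Lite等加载
fluid.io.save_inference_model(
    dirname='./quant_inference_model',
    feeded_var_names=[image.name],
    target_vars=[pred],
    executor=exe,
    main_program=final_prog)
```

注意:真实流程中应先加载量化训练得到的权重再调用 ``convert``,否则导出的只是随机初始化的演示模型。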

+ +所以在调用 ``paddleslim.quant.convert`` 之后,才得到最终的量化模型。此模型可使用PaddleLite进行加载预测,可参见教程[Paddle-Lite如何加载运行量化模型](https://github.com/PaddlePaddle/Paddle-Lite/wiki/model_quantization)。 + +### 评估脚本 +使用脚本[slim/quantization/eval_quant.py](./eval_quant.py)进行评估。 + +- 定义配置。使用和训练脚本中一样的量化配置,以得到和量化训练时同样的模型。 +- 使用 ``paddleslim.quant.quant_aware`` 插入量化和反量化op。 +- 使用 ``paddleslim.quant.convert`` 改变op顺序,得到最终量化模型进行评估。 + +评估命令: + +分割库根目录下运行 +``` +python -u ./slim/quantization/eval_quant.py --cfg configs/deeplabv3p_mobilenetv2_cityscapes.yaml --use_gpu --not_quant_pattern last_conv --use_mpio --convert \ +TEST.TEST_MODEL "./snapshots/mobilenetv2_quant/best_model" \ +MODEL.DEEPLAB.ENCODER_WITH_ASPP False \ +MODEL.DEEPLAB.ENABLE_DECODER False \ +TRAIN.SYNC_BATCH_NORM False \ +BATCH_SIZE 16 \ +``` + + + +## 量化结果 + + + +## FAQ diff --git a/slim/quantization/eval_quant.py b/slim/quantization/eval_quant.py new file mode 100644 index 0000000000000000000000000000000000000000..f40021df10ac5cabee789ca4de04b7489b37f182 --- /dev/null +++ b/slim/quantization/eval_quant.py @@ -0,0 +1,203 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import os + +import sys +import time +import argparse +import functools +import pprint +import cv2 +import numpy as np +import paddle +import paddle.fluid as fluid + +from utils.config import cfg +from utils.timer import Timer, calculate_eta +from models.model_builder import build_model +from models.model_builder import ModelPhase +from reader import SegDataset +from metrics import ConfusionMatrix +from paddleslim.quant import quant_aware, convert + + +def parse_args(): + parser = argparse.ArgumentParser(description='PaddleSeg model evalution') + parser.add_argument( + '--cfg', + dest='cfg_file', + help='Config file for training (and optionally testing)', + default=None, + type=str) + parser.add_argument( + '--use_gpu', + dest='use_gpu', + help='Use gpu or cpu', + action='store_true', + default=False) + parser.add_argument( + '--use_mpio', + dest='use_mpio', + help='Use multiprocess IO or not', + action='store_true', + default=False) + parser.add_argument( + 'opts', + help='See utils/config.py for all options', + default=None, + nargs=argparse.REMAINDER) + parser.add_argument( + '--convert', + dest='convert', + help='Convert or not', + action='store_true', + default=False) + parser.add_argument( + "--not_quant_pattern", + nargs='+', + type=str, + help= + "Layers which name_scope contains string in not_quant_pattern will not be quantized" + ) + + if len(sys.argv) == 1: + parser.print_help() + sys.exit(1) + return parser.parse_args() + + +def evaluate(cfg, ckpt_dir=None, use_gpu=False, use_mpio=False, **kwargs): + np.set_printoptions(precision=5, suppress=True) + + startup_prog = fluid.Program() + test_prog = fluid.Program() + dataset = SegDataset( + file_list=cfg.DATASET.VAL_FILE_LIST, + mode=ModelPhase.EVAL, 
+ data_dir=cfg.DATASET.DATA_DIR) + + def data_generator(): + #TODO: check is batch reader compatitable with Windows + if use_mpio: + data_gen = dataset.multiprocess_generator( + num_processes=cfg.DATALOADER.NUM_WORKERS, + max_queue_size=cfg.DATALOADER.BUF_SIZE) + else: + data_gen = dataset.generator() + + for b in data_gen: + yield b[0], b[1], b[2] + + py_reader, avg_loss, pred, grts, masks = build_model( + test_prog, startup_prog, phase=ModelPhase.EVAL) + + py_reader.decorate_sample_generator( + data_generator, drop_last=False, batch_size=cfg.BATCH_SIZE) + + # Get device environment + places = fluid.cuda_places() if use_gpu else fluid.cpu_places() + place = places[0] + dev_count = len(places) + print("#Device count: {}".format(dev_count)) + + exe = fluid.Executor(place) + exe.run(startup_prog) + + test_prog = test_prog.clone(for_test=True) + not_quant_pattern_list = [] + if kwargs['not_quant_pattern'] is not None: + not_quant_pattern_list = kwargs['not_quant_pattern'] + config = { + 'weight_quantize_type': 'channel_wise_abs_max', + 'activation_quantize_type': 'moving_average_abs_max', + 'quantize_op_types': ['depthwise_conv2d', 'mul', 'conv2d'], + 'not_quant_pattern': not_quant_pattern_list + } + test_prog = quant_aware(test_prog, place, config, for_test=True) + + ckpt_dir = cfg.TEST.TEST_MODEL if not ckpt_dir else ckpt_dir + + if not os.path.exists(ckpt_dir): + raise ValueError('The TEST.TEST_MODEL {} is not found'.format(ckpt_dir)) + + if ckpt_dir is not None: + print('load test model:', ckpt_dir) + fluid.io.load_persistables(exe, ckpt_dir, main_program=test_prog) + if kwargs['convert']: + test_prog = convert(test_prog, place, config) + # Use streaming confusion matrix to calculate mean_iou + np.set_printoptions( + precision=4, suppress=True, linewidth=160, floatmode="fixed") + conf_mat = ConfusionMatrix(cfg.DATASET.NUM_CLASSES, streaming=True) + fetch_list = [avg_loss.name, pred.name, grts.name, masks.name] + num_images = 0 + step = 0 + all_step = cfg.DATASET.TEST_TOTAL_IMAGES // cfg.BATCH_SIZE + 1 + timer = Timer() + timer.start() + py_reader.start() + while True: + try: + step += 1 + loss, pred, grts, masks = exe.run( + test_prog, fetch_list=fetch_list, return_numpy=True) + + loss = np.mean(np.array(loss)) + + num_images += pred.shape[0] + conf_mat.calculate(pred, grts, masks) + _, iou = conf_mat.mean_iou() + _, acc = conf_mat.accuracy() + + speed = 1.0 / timer.elapsed_time() + + print( + "[EVAL]step={} loss={:.5f} acc={:.4f} IoU={:.4f} step/sec={:.2f} | ETA {}" + .format(step, loss, acc, iou, speed, + calculate_eta(all_step - step, speed))) + timer.restart() + sys.stdout.flush() + except fluid.core.EOFException: + break + + category_iou, avg_iou = conf_mat.mean_iou() + category_acc, avg_acc = conf_mat.accuracy() + print("[EVAL]#image={} acc={:.4f} IoU={:.4f}".format( + num_images, avg_acc, avg_iou)) + print("[EVAL]Category IoU:", category_iou) + print("[EVAL]Category Acc:", category_acc) + print("[EVAL]Kappa:{:.4f}".format(conf_mat.kappa())) + + return category_iou, avg_iou, category_acc, avg_acc + + +def main(): + args = parse_args() + if args.cfg_file is not None: + cfg.update_from_file(args.cfg_file) + if args.opts: + cfg.update_from_list(args.opts) + cfg.check_and_infer() + print(pprint.pformat(cfg)) + evaluate(cfg, **args.__dict__) + + +if __name__ == '__main__': + main() diff --git a/slim/quantization/images/ConvertToInt8Pass.png b/slim/quantization/images/ConvertToInt8Pass.png new file mode 100644 index 
0000000000000000000000000000000000000000..8b5849819c0bc8e592dc8f864d8945330df85ab1 Binary files /dev/null and b/slim/quantization/images/ConvertToInt8Pass.png differ diff --git a/slim/quantization/images/FreezePass.png b/slim/quantization/images/FreezePass.png new file mode 100644 index 0000000000000000000000000000000000000000..acd2b0a890a8af85bec6eecdb22e47ad386a178c Binary files /dev/null and b/slim/quantization/images/FreezePass.png differ diff --git a/slim/quantization/images/TransformForMobilePass.png b/slim/quantization/images/TransformForMobilePass.png new file mode 100644 index 0000000000000000000000000000000000000000..4104cacc67af0be1c7bc152696e2ae544127aace Binary files /dev/null and b/slim/quantization/images/TransformForMobilePass.png differ diff --git a/slim/quantization/images/TransformPass.png b/slim/quantization/images/TransformPass.png new file mode 100644 index 0000000000000000000000000000000000000000..f29ab62753e0e6ddf28d0c1dda7139705fc24b18 Binary files /dev/null and b/slim/quantization/images/TransformPass.png differ diff --git a/slim/quantization/train_quant.py b/slim/quantization/train_quant.py new file mode 100644 index 0000000000000000000000000000000000000000..6a29dccdbaeda54b06c11299fb37e979cec6e401 --- /dev/null +++ b/slim/quantization/train_quant.py @@ -0,0 +1,388 @@ +# coding: utf8 +# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import os + +import sys +import argparse +import pprint +import random +import shutil +import functools + +import paddle +import numpy as np +import paddle.fluid as fluid + +from utils.config import cfg +from utils.timer import Timer, calculate_eta +from metrics import ConfusionMatrix +from reader import SegDataset +from models.model_builder import build_model +from models.model_builder import ModelPhase +from models.model_builder import parse_shape_from_file +from eval_quant import evaluate +from vis import visualize +from utils import dist_utils +from train import save_vars, save_checkpoint, load_checkpoint, update_best_model, print_info + +from paddleslim.quant import quant_aware + + +def parse_args(): + parser = argparse.ArgumentParser(description='PaddleSeg training') + parser.add_argument( + '--cfg', + dest='cfg_file', + help='Config file for training (and optionally testing)', + default=None, + type=str) + parser.add_argument( + '--use_gpu', + dest='use_gpu', + help='Use gpu or cpu', + action='store_true', + default=False) + parser.add_argument( + '--use_mpio', + dest='use_mpio', + help='Use multiprocess I/O or not', + action='store_true', + default=False) + parser.add_argument( + '--log_steps', + dest='log_steps', + help='Display logging information at every log_steps', + default=10, + type=int) + parser.add_argument( + '--debug', + dest='debug', + help='debug mode, display detail information of training', + action='store_true') + parser.add_argument( + '--do_eval', + dest='do_eval', + help='Evaluation models result on every new checkpoint', + action='store_true') + parser.add_argument( + 'opts', + help='See utils/config.py for all options', + default=None, + nargs=argparse.REMAINDER) + parser.add_argument( + '--enable_ce', + dest='enable_ce', + help='If set True, enable continuous evaluation job.' 
+ 'This flag is only used for internal test.', + action='store_true') + parser.add_argument( + "--not_quant_pattern", + nargs='+', + type=str, + help= + "Layers which name_scope contains string in not_quant_pattern will not be quantized" + ) + + return parser.parse_args() + + +def train_quant(cfg): + startup_prog = fluid.Program() + train_prog = fluid.Program() + if args.enable_ce: + startup_prog.random_seed = 1000 + train_prog.random_seed = 1000 + drop_last = True + + dataset = SegDataset( + file_list=cfg.DATASET.TRAIN_FILE_LIST, + mode=ModelPhase.TRAIN, + shuffle=True, + data_dir=cfg.DATASET.DATA_DIR) + + def data_generator(): + if args.use_mpio: + data_gen = dataset.multiprocess_generator( + num_processes=cfg.DATALOADER.NUM_WORKERS, + max_queue_size=cfg.DATALOADER.BUF_SIZE) + else: + data_gen = dataset.generator() + + batch_data = [] + for b in data_gen: + batch_data.append(b) + if len(batch_data) == (cfg.BATCH_SIZE // cfg.NUM_TRAINERS): + for item in batch_data: + yield item[0], item[1], item[2] + batch_data = [] + # If use sync batch norm strategy, drop last batch if number of samples + # in batch_data is less then cfg.BATCH_SIZE to avoid NCCL hang issues + if not cfg.TRAIN.SYNC_BATCH_NORM: + for item in batch_data: + yield item[0], item[1], item[2] + + # Get device environment + # places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places() + # place = places[0] + gpu_id = int(os.environ.get('FLAGS_selected_gpus', 0)) + place = fluid.CUDAPlace(gpu_id) if args.use_gpu else fluid.CPUPlace() + places = fluid.cuda_places() if args.use_gpu else fluid.cpu_places() + + # Get number of GPU + dev_count = cfg.NUM_TRAINERS if cfg.NUM_TRAINERS > 1 else len(places) + print_info("#Device count: {}".format(dev_count)) + + # Make sure BATCH_SIZE can divided by GPU cards + assert cfg.BATCH_SIZE % dev_count == 0, ( + 'BATCH_SIZE:{} not divisble by number of GPUs:{}'.format( + cfg.BATCH_SIZE, dev_count)) + # If use multi-gpu training mode, batch data will allocated to each GPU evenly + batch_size_per_dev = cfg.BATCH_SIZE // dev_count + print_info("batch_size_per_dev: {}".format(batch_size_per_dev)) + + py_reader, avg_loss, lr, pred, grts, masks = build_model( + train_prog, startup_prog, phase=ModelPhase.TRAIN) + py_reader.decorate_sample_generator( + data_generator, batch_size=batch_size_per_dev, drop_last=drop_last) + + exe = fluid.Executor(place) + exe.run(startup_prog) + + exec_strategy = fluid.ExecutionStrategy() + # Clear temporary variables every 100 iteration + if args.use_gpu: + exec_strategy.num_threads = fluid.core.get_cuda_device_count() + exec_strategy.num_iteration_per_drop_scope = 100 + build_strategy = fluid.BuildStrategy() + + if cfg.NUM_TRAINERS > 1 and args.use_gpu: + dist_utils.prepare_for_multi_process(exe, build_strategy, train_prog) + exec_strategy.num_threads = 1 + + # Resume training + begin_epoch = cfg.SOLVER.BEGIN_EPOCH + if cfg.TRAIN.RESUME_MODEL_DIR: + begin_epoch = load_checkpoint(exe, train_prog) + # Load pretrained model + elif os.path.exists(cfg.TRAIN.PRETRAINED_MODEL_DIR): + print_info('Pretrained model dir: ', cfg.TRAIN.PRETRAINED_MODEL_DIR) + load_vars = [] + load_fail_vars = [] + + def var_shape_matched(var, shape): + """ + Check whehter persitable variable shape is match with current network + """ + var_exist = os.path.exists( + os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name)) + if var_exist: + var_shape = parse_shape_from_file( + os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name)) + return var_shape == shape + return False + + for x in 
train_prog.list_vars(): + if isinstance(x, fluid.framework.Parameter): + shape = tuple(fluid.global_scope().find_var( + x.name).get_tensor().shape()) + if var_shape_matched(x, shape): + load_vars.append(x) + else: + load_fail_vars.append(x) + + fluid.io.load_vars( + exe, dirname=cfg.TRAIN.PRETRAINED_MODEL_DIR, vars=load_vars) + for var in load_vars: + print_info("Parameter[{}] loaded sucessfully!".format(var.name)) + for var in load_fail_vars: + print_info( + "Parameter[{}] don't exist or shape does not match current network, skip" + " to load it.".format(var.name)) + print_info("{}/{} pretrained parameters loaded successfully!".format( + len(load_vars), + len(load_vars) + len(load_fail_vars))) + else: + print_info( + 'Pretrained model dir {} not exists, training from scratch...'. + format(cfg.TRAIN.PRETRAINED_MODEL_DIR)) + + fetch_list = [avg_loss.name, lr.name] + if args.debug: + # Fetch more variable info and use streaming confusion matrix to + # calculate IoU results if in debug mode + np.set_printoptions( + precision=4, suppress=True, linewidth=160, floatmode="fixed") + fetch_list.extend([pred.name, grts.name, masks.name]) + cm = ConfusionMatrix(cfg.DATASET.NUM_CLASSES, streaming=True) + + not_quant_pattern = [] + if args.not_quant_pattern: + not_quant_pattern = args.not_quant_pattern + config = { + 'weight_quantize_type': 'channel_wise_abs_max', + 'activation_quantize_type': 'moving_average_abs_max', + 'quantize_op_types': ['depthwise_conv2d', 'mul', 'conv2d'], + 'not_quant_pattern': not_quant_pattern + } + compiled_train_prog = quant_aware(train_prog, place, config, for_test=False) + eval_prog = quant_aware(train_prog, place, config, for_test=True) + build_strategy.fuse_all_reduce_ops = False + build_strategy.sync_batch_norm = False + compiled_train_prog = compiled_train_prog.with_data_parallel( + loss_name=avg_loss.name, + exec_strategy=exec_strategy, + build_strategy=build_strategy) + + # trainer_id = int(os.getenv("PADDLE_TRAINER_ID", 0)) + # num_trainers = int(os.environ.get('PADDLE_TRAINERS_NUM', 1)) + global_step = 0 + all_step = cfg.DATASET.TRAIN_TOTAL_IMAGES // cfg.BATCH_SIZE + if cfg.DATASET.TRAIN_TOTAL_IMAGES % cfg.BATCH_SIZE and drop_last != True: + all_step += 1 + all_step *= (cfg.SOLVER.NUM_EPOCHS - begin_epoch + 1) + + avg_loss = 0.0 + best_mIoU = 0.0 + + timer = Timer() + timer.start() + if begin_epoch > cfg.SOLVER.NUM_EPOCHS: + raise ValueError( + ("begin epoch[{}] is larger than cfg.SOLVER.NUM_EPOCHS[{}]").format( + begin_epoch, cfg.SOLVER.NUM_EPOCHS)) + + if args.use_mpio: + print_info("Use multiprocess reader") + else: + print_info("Use multi-thread reader") + + for epoch in range(begin_epoch, cfg.SOLVER.NUM_EPOCHS + 1): + py_reader.start() + while True: + try: + if args.debug: + # Print category IoU and accuracy to check whether the + # traning process is corresponed to expectation + loss, lr, pred, grts, masks = exe.run( + program=compiled_train_prog, + fetch_list=fetch_list, + return_numpy=True) + cm.calculate(pred, grts, masks) + avg_loss += np.mean(np.array(loss)) + global_step += 1 + + if global_step % args.log_steps == 0: + speed = args.log_steps / timer.elapsed_time() + avg_loss /= args.log_steps + category_acc, mean_acc = cm.accuracy() + category_iou, mean_iou = cm.mean_iou() + + print_info(( + "epoch={} step={} lr={:.5f} loss={:.4f} acc={:.5f} mIoU={:.5f} step/sec={:.3f} | ETA {}" + ).format(epoch, global_step, lr[0], avg_loss, mean_acc, + mean_iou, speed, + calculate_eta(all_step - global_step, speed))) + print_info("Category IoU: ", category_iou) + 
print_info("Category Acc: ", category_acc) + sys.stdout.flush() + avg_loss = 0.0 + cm.zero_matrix() + timer.restart() + else: + # If not in debug mode, avoid unnessary log and calculate + loss, lr = exe.run( + program=compiled_train_prog, + fetch_list=fetch_list, + return_numpy=True) + avg_loss += np.mean(np.array(loss)) + global_step += 1 + + if global_step % args.log_steps == 0 and cfg.TRAINER_ID == 0: + avg_loss /= args.log_steps + speed = args.log_steps / timer.elapsed_time() + print(( + "epoch={} step={} lr={:.5f} loss={:.4f} step/sec={:.3f} | ETA {}" + ).format(epoch, global_step, lr[0], avg_loss, speed, + calculate_eta(all_step - global_step, speed))) + sys.stdout.flush() + avg_loss = 0.0 + timer.restart() + + except fluid.core.EOFException: + py_reader.reset() + break + except Exception as e: + print(e) + + if (epoch % cfg.TRAIN.SNAPSHOT_EPOCH == 0 + or epoch == cfg.SOLVER.NUM_EPOCHS) and cfg.TRAINER_ID == 0: + ckpt_dir = save_checkpoint(exe, eval_prog, epoch) + + if args.do_eval: + print("Evaluation start") + _, mean_iou, _, mean_acc = evaluate( + cfg=cfg, + ckpt_dir=ckpt_dir, + use_gpu=args.use_gpu, + use_mpio=args.use_mpio, + not_quant_pattern=args.not_quant_pattern, + convert=False) + + if mean_iou > best_mIoU: + best_mIoU = mean_iou + update_best_model(ckpt_dir) + print_info("Save best model {} to {}, mIoU = {:.4f}".format( + ckpt_dir, + os.path.join(cfg.TRAIN.MODEL_SAVE_DIR, 'best_model'), + mean_iou)) + + # save final model + if cfg.TRAINER_ID == 0: + save_checkpoint(exe, eval_prog, 'final') + + +def main(args): + if args.cfg_file is not None: + cfg.update_from_file(args.cfg_file) + if args.opts: + cfg.update_from_list(args.opts) + if args.enable_ce: + random.seed(0) + np.random.seed(0) + + cfg.TRAINER_ID = int(os.getenv("PADDLE_TRAINER_ID", 0)) + cfg.NUM_TRAINERS = int(os.environ.get('PADDLE_TRAINERS_NUM', 1)) + + cfg.check_and_infer() + print_info(pprint.pformat(cfg)) + train_quant(cfg) + + +if __name__ == '__main__': + args = parse_args() + if fluid.core.is_compiled_with_cuda() != True and args.use_gpu == True: + print( + "You can not set use_gpu = True in the model because you are using paddlepaddle-cpu." + ) + print( + "Please: 1. Install paddlepaddle-gpu to run your models on GPU or 2. Set use_gpu=False to run models on CPU." + ) + sys.exit(1) + main(args) diff --git a/turtorial/finetune_deeplabv3plus.md b/turtorial/finetune_deeplabv3plus.md index 35fb677d9d416512a79ded14bcdcadf516aa6b70..d254ce5eb6e7cbe62b64deac78e003a04fe027bf 100644 --- a/turtorial/finetune_deeplabv3plus.md +++ b/turtorial/finetune_deeplabv3plus.md @@ -1,29 +1,32 @@ -# DeepLabv3+模型训练教程 +# DeepLabv3+模型使用教程 -* 本教程旨在介绍如何通过使用PaddleSeg提供的 ***`DeeplabV3+/Xception65/BatchNorm`*** 预训练模型在自定义数据集上进行训练。除了该配置之外,DeeplabV3+还支持以下不同[模型组合](#模型组合)的预训练模型,如果需要使用对应模型作为预训练模型,将下述内容中的Xception Backbone中的内容进行替换即可 +本教程旨在介绍如何使用`DeepLabv3+`预训练模型在自定义数据集上进行训练、评估和可视化。我们以`DeeplabV3+/Xception65/BatchNorm`预训练模型为例。 -* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)和[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解 +* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)和[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解。 -* 本教程的所有命令都基于PaddleSeg主目录进行执行 +* 本教程的所有命令都基于PaddleSeg主目录进行执行。 ## 一. 准备待训练数据 -我们提前准备好了一份数据集,通过以下代码进行下载 +![](./imgs/optic.png) + +我们提前准备好了一份眼底医疗分割数据集,包含267张训练图片、76张验证图片、38张测试图片。通过以下命令进行下载: ```shell -python dataset/download_pet.py +python dataset/download_optic.py ``` ## 二. 
下载预训练模型 -关于PaddleSeg支持的所有预训练模型的列表,我们可以从[模型组合](#模型组合)中查看我们所需模型的名字和配置 - 接着下载对应的预训练模型 ```shell python pretrained_model/download_model.py deeplabv3p_xception65_bn_coco ``` +关于已有的DeepLabv3+预训练模型的列表,请参见[模型组合](#模型组合)。如果需要使用其他预训练模型,下载该模型并将配置中的BACKBONE、NORM_TYPE等进行替换即可。 + + ## 三. 准备配置 接着我们需要确定相关配置,从本教程的角度,配置分为三部分: @@ -45,19 +48,19 @@ python pretrained_model/download_model.py deeplabv3p_xception65_bn_coco 在三者中,预训练模型的配置尤为重要,如果模型或者BACKBONE配置错误,会导致预训练的参数没有加载,进而影响收敛速度。预训练模型相关的配置如第二步所展示。 -数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/mini_pet`中 +数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/optic_disc_seg`中。 -其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/deeplabv3p_xception65_pet.yaml** +其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/deeplabv3p_xception65_optic.yaml**。 ```yaml # 数据集配置 DATASET: - DATA_DIR: "./dataset/mini_pet/" - NUM_CLASSES: 3 - TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" - TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt" - VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt" - VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" + DATA_DIR: "./dataset/optic_disc_seg/" + NUM_CLASSES: 2 + TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" + TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt" + VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt" + VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" # 预训练模型配置 MODEL: @@ -75,15 +78,15 @@ AUG: BATCH_SIZE: 4 TRAIN: PRETRAINED_MODEL_DIR: "./pretrained_model/deeplabv3p_xception65_bn_coco/" - MODEL_SAVE_DIR: "./saved_model/deeplabv3p_xception65_bn_pet/" - SNAPSHOT_EPOCH: 10 + MODEL_SAVE_DIR: "./saved_model/deeplabv3p_xception65_bn_optic/" + SNAPSHOT_EPOCH: 5 TEST: - TEST_MODEL: "./saved_model/deeplabv3p_xception65_bn_pet/final" + TEST_MODEL: "./saved_model/deeplabv3p_xception65_bn_optic/final" SOLVER: - NUM_EPOCHS: 100 - LR: 0.005 + NUM_EPOCHS: 10 + LR: 0.001 LR_POLICY: "poly" - OPTIMIZER: "sgd" + OPTIMIZER: "adam" ``` ## 四. 配置/数据校验 @@ -91,7 +94,7 @@ SOLVER: 在开始训练和评估之前,我们还需要对配置和数据进行一次校验,确保数据和配置是正确的。使用下述命令启动校验流程 ```shell -python pdseg/check.py --cfg ./configs/deeplabv3p_xception65_pet.yaml +python pdseg/check.py --cfg ./configs/deeplabv3p_xception65_optic.yaml ``` @@ -100,7 +103,10 @@ python pdseg/check.py --cfg ./configs/deeplabv3p_xception65_pet.yaml 校验通过后,使用下述命令启动训练 ```shell -python pdseg/train.py --use_gpu --cfg ./configs/deeplabv3p_xception65_pet.yaml +# 指定GPU卡号(以0号卡为例) +export CUDA_VISIBLE_DEVICES=0 +# 训练 +python pdseg/train.py --use_gpu --cfg ./configs/deeplabv3p_xception65_optic.yaml ``` ## 六. 进行评估 @@ -108,22 +114,39 @@ python pdseg/train.py --use_gpu --cfg ./configs/deeplabv3p_xception65_pet.yaml 模型训练完成,使用下述命令启动评估 ```shell -python pdseg/eval.py --use_gpu --cfg ./configs/deeplabv3p_xception65_pet.yaml +python pdseg/eval.py --use_gpu --cfg ./configs/deeplabv3p_xception65_optic.yaml +``` + +## 七. 进行可视化 + +使用下述命令启动预测和可视化 + +```shell +python pdseg/vis.py --use_gpu --cfg ./configs/deeplabv3p_xception65_optic.yaml ``` +预测结果将保存在`visual`目录下,以下展示其中1张图片的预测效果: + +![](imgs/optic_deeplab.png) + +## 在线体验 + +PaddleSeg在AI Studio平台上提供了在线体验的DeepLabv3+图像分割教程,欢迎[点击体验](https://aistudio.baidu.com/aistudio/projectDetail/226703)。 + + ## 模型组合 -|预训练模型名称|BackBone|Norm Type|数据集|配置| +|预训练模型名称|Backbone|Norm Type|数据集|配置| |-|-|-|-|-| -|mobilenetv2-2-0_bn_imagenet|-|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: mobilenetv2
MODEL.DEEPLAB.DEPTH_MULTIPLIER: 2.0
MODEL.DEFAULT_NORM_TYPE: bn| -|mobilenetv2-1-5_bn_imagenet|-|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: mobilenetv2
MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.5
MODEL.DEFAULT_NORM_TYPE: bn| -|mobilenetv2-1-0_bn_imagenet|-|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: mobilenetv2
MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0
MODEL.DEFAULT_NORM_TYPE: bn| -|mobilenetv2-0-5_bn_imagenet|-|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: mobilenetv2
MODEL.DEEPLAB.DEPTH_MULTIPLIER: 0.5
MODEL.DEFAULT_NORM_TYPE: bn| -|mobilenetv2-0-25_bn_imagenet|-|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: mobilenetv2
MODEL.DEEPLAB.DEPTH_MULTIPLIER: 0.25
MODEL.DEFAULT_NORM_TYPE: bn| -|xception41_imagenet|-|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: xception_41
MODEL.DEFAULT_NORM_TYPE: bn| -|xception65_imagenet|-|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: xception_65
MODEL.DEFAULT_NORM_TYPE: bn| -|deeplabv3p_mobilenetv2-1-0_bn_coco|MobileNet V2|bn|COCO|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: mobilenetv2
MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0
MODEL.DEEPLAB.ENCODER_WITH_ASPP: False
MODEL.DEEPLAB.ENABLE_DECODER: False
MODEL.DEFAULT_NORM_TYPE: bn| -|**deeplabv3p_xception65_bn_coco**|Xception|bn|COCO|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: xception_65
MODEL.DEFAULT_NORM_TYPE: bn | -|deeplabv3p_mobilenetv2-1-0_bn_cityscapes|MobileNet V2|bn|Cityscapes|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: mobilenetv2
MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0
MODEL.DEEPLAB.ENCODER_WITH_ASPP: False
MODEL.DEEPLAB.ENABLE_DECODER: False
MODEL.DEFAULT_NORM_TYPE: bn| -|deeplabv3p_xception65_gn_cityscapes|Xception|gn|Cityscapes|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: xception_65
MODEL.DEFAULT_NORM_TYPE: gn| -|deeplabv3p_xception65_bn_cityscapes|Xception|bn|Cityscapes|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: xception_65
MODEL.DEFAULT_NORM_TYPE: bn| +|mobilenetv2-2-0_bn_imagenet|MobileNetV2|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: mobilenetv2
MODEL.DEEPLAB.DEPTH_MULTIPLIER: 2.0
MODEL.DEFAULT_NORM_TYPE: bn| +|mobilenetv2-1-5_bn_imagenet|MobileNetV2|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: mobilenetv2
MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.5
MODEL.DEFAULT_NORM_TYPE: bn| +|mobilenetv2-1-0_bn_imagenet|MobileNetV2|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: mobilenetv2
MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0
MODEL.DEFAULT_NORM_TYPE: bn| +|mobilenetv2-0-5_bn_imagenet|MobileNetV2|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: mobilenetv2
MODEL.DEEPLAB.DEPTH_MULTIPLIER: 0.5
MODEL.DEFAULT_NORM_TYPE: bn| +|mobilenetv2-0-25_bn_imagenet|MobileNetV2|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: mobilenetv2
MODEL.DEEPLAB.DEPTH_MULTIPLIER: 0.25
MODEL.DEFAULT_NORM_TYPE: bn| +|xception41_imagenet|Xception41|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: xception_41
MODEL.DEFAULT_NORM_TYPE: bn| +|xception65_imagenet|Xception65|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: xception_65
MODEL.DEFAULT_NORM_TYPE: bn| +|deeplabv3p_mobilenetv2-1-0_bn_coco|MobileNetV2|bn|COCO|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: mobilenetv2
MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0
MODEL.DEEPLAB.ENCODER_WITH_ASPP: False
MODEL.DEEPLAB.ENABLE_DECODER: False
MODEL.DEFAULT_NORM_TYPE: bn| +|**deeplabv3p_xception65_bn_coco**|Xception65|bn|COCO|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: xception_65
MODEL.DEFAULT_NORM_TYPE: bn | +|deeplabv3p_mobilenetv2-1-0_bn_cityscapes|MobileNetV2|bn|Cityscapes|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: mobilenetv2
MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0
MODEL.DEEPLAB.ENCODER_WITH_ASPP: False
MODEL.DEEPLAB.ENABLE_DECODER: False
MODEL.DEFAULT_NORM_TYPE: bn| +|deeplabv3p_xception65_gn_cityscapes|Xception65|gn|Cityscapes|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: xception_65
MODEL.DEFAULT_NORM_TYPE: gn| +|deeplabv3p_xception65_bn_cityscapes|Xception65|bn|Cityscapes|MODEL.MODEL_NAME: deeplabv3p
MODEL.DEEPLAB.BACKBONE: xception_65
MODEL.DEFAULT_NORM_TYPE: bn| diff --git a/turtorial/finetune_fast_scnn.md b/turtorial/finetune_fast_scnn.md new file mode 100644 index 0000000000000000000000000000000000000000..188a51edf9d138bb6832849c9ab2ad8afbcd3cd4 --- /dev/null +++ b/turtorial/finetune_fast_scnn.md @@ -0,0 +1,119 @@ +# Fast-SCNN模型训练教程 + +* 本教程旨在介绍如何通过使用PaddleSeg提供的 ***`Fast_scnn_cityscapes`*** 预训练模型在自定义数据集上进行训练。 + +* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)和[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解 + +* 本教程的所有命令都基于PaddleSeg主目录进行执行 + +## 一. 准备待训练数据 + +我们提前准备好了一份数据集,通过以下代码进行下载 + +```shell +python dataset/download_pet.py +``` + +## 二. 下载预训练模型 + +```shell +python pretrained_model/download_model.py fast_scnn_cityscapes +``` + +## 三. 准备配置 + +接着我们需要确定相关配置,从本教程的角度,配置分为三部分: + +* 数据集 + * 训练集主目录 + * 训练集文件列表 + * 测试集文件列表 + * 评估集文件列表 +* 预训练模型 + * 预训练模型名称 + * 预训练模型的backbone网络 + * 预训练模型的Normalization类型 + * 预训练模型路径 +* 其他 + * 学习率 + * Batch大小 + * ... + +在三者中,预训练模型的配置尤为重要,如果模型或者BACKBONE配置错误,会导致预训练的参数没有加载,进而影响收敛速度。预训练模型相关的配置如第二步所展示。 + +数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/mini_pet`中 + +其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/fast_scnn_pet.yaml** + +```yaml +# 数据集配置 +DATASET: + DATA_DIR: "./dataset/mini_pet/" + NUM_CLASSES: 3 + TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" + TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt" + VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt" + VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" + +# 预训练模型配置 +MODEL: + MODEL_NAME: "fast_scnn" + DEFAULT_NORM_TYPE: "bn" + +# 其他配置 +TRAIN_CROP_SIZE: (512, 512) +EVAL_CROP_SIZE: (512, 512) +AUG: + AUG_METHOD: "unpadding" + FIX_RESIZE_SIZE: (512, 512) +BATCH_SIZE: 4 +TRAIN: + PRETRAINED_MODEL_DIR: "./pretrained_model/fast_scnn_cityscapes/" + MODEL_SAVE_DIR: "./saved_model/fast_scnn_pet/" + SNAPSHOT_EPOCH: 10 +TEST: + TEST_MODEL: "./saved_model/fast_scnn_pet/final" +SOLVER: + NUM_EPOCHS: 100 + LR: 0.005 + LR_POLICY: "poly" + OPTIMIZER: "sgd" +``` + +## 四. 配置/数据校验 + +在开始训练和评估之前,我们还需要对配置和数据进行一次校验,确保数据和配置是正确的。使用下述命令启动校验流程 + +```shell +python pdseg/check.py --cfg ./configs/fast_scnn_pet.yaml +``` + + +## 五. 开始训练 + +校验通过后,使用下述命令启动训练 + +```shell +python pdseg/train.py --use_gpu --cfg ./configs/fast_scnn_pet.yaml +``` + +## 六. 进行评估 + +模型训练完成,使用下述命令启动评估 + +```shell +python pdseg/eval.py --use_gpu --cfg ./configs/fast_scnn_pet.yaml +``` + + +## 七. 实时分割模型推理时间比较 + +| 模型 | eval size | inference time | mIoU on cityscape val| +|---|---|---|---| +| DeepLabv3+/MobileNetv2/bn | (1024, 2048) |16.14ms| 0.698| +| ICNet/bn |(1024, 2048) |8.76ms| 0.6831 | +| Fast-SCNN/bn | (1024, 2048) |6.28ms| 0.6964 | + +上述测试环境为v100. 测试使用paddle的推理接口[zero_copy](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/advanced_usage/deploy/inference/python_infer_cn.html#id8)的方式,模型输出是类别,即argmax后的值。 + + diff --git a/turtorial/finetune_hrnet.md b/turtorial/finetune_hrnet.md index f7feb9ddafd909fa829cf5f3e3d1c66c82505f57..9475a8aab8386364ab6be7e976ac30dae73d4645 100644 --- a/turtorial/finetune_hrnet.md +++ b/turtorial/finetune_hrnet.md @@ -1,22 +1,23 @@ -# HRNet模型训练教程 +# HRNet模型使用教程 -* 本教程旨在介绍如何通过使用PaddleSeg提供的 ***`HRNet`*** 预训练模型在自定义数据集上进行训练。 +本教程旨在介绍如何通过使用PaddleSeg提供的 ***`HRNet`*** 预训练模型在自定义数据集上进行训练、评估和可视化。 -* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)和[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解 +* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)和[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解。 -* 本教程的所有命令都基于PaddleSeg主目录进行执行 +* 本教程的所有命令都基于PaddleSeg主目录进行执行。 ## 一. 
准备待训练数据 -我们提前准备好了一份数据集,通过以下代码进行下载 +![](./imgs/optic.png) + +我们提前准备好了一份眼底医疗分割数据集,包含267张训练图片、76张验证图片、38张测试图片。通过以下命令进行下载: ```shell -python dataset/download_pet.py +python dataset/download_optic.py ``` -## 二. 下载预训练模型 -关于PaddleSeg支持的所有预训练模型的列表,我们可以从[模型组合](#模型组合)中查看我们所需模型的名字和配置 +## 二. 下载预训练模型 接着下载对应的预训练模型 @@ -24,6 +25,8 @@ python dataset/download_pet.py python pretrained_model/download_model.py hrnet_w18_bn_cityscapes ``` +关于已有的HRNet预训练模型的列表,请参见[模型组合](#模型组合)。如果需要使用其他预训练模型,下载该模型并将配置中的BACKBONE、NORM_TYPE等进行替换即可。 + ## 三. 准备配置 接着我们需要确定相关配置,从本教程的角度,配置分为三部分: @@ -45,19 +48,19 @@ python pretrained_model/download_model.py hrnet_w18_bn_cityscapes 在三者中,预训练模型的配置尤为重要,如果模型配置错误,会导致预训练的参数没有加载,进而影响收敛速度。预训练模型相关的配置如第二步所展示。 -数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/mini_pet`中 +数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/optic_disc_seg`中 -其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/hrnet_w18_pet.yaml** +其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/hrnet_optic.yaml** ```yaml # 数据集配置 DATASET: - DATA_DIR: "./dataset/mini_pet/" - NUM_CLASSES: 3 - TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" - TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt" - VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt" - VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" + DATA_DIR: "./dataset/optic_disc_seg/" + NUM_CLASSES: 2 + TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" + TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt" + VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt" + VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" # 预训练模型配置 MODEL: @@ -80,15 +83,15 @@ AUG: BATCH_SIZE: 4 TRAIN: PRETRAINED_MODEL_DIR: "./pretrained_model/hrnet_w18_bn_cityscapes/" - MODEL_SAVE_DIR: "./saved_model/hrnet_w18_bn_pet/" - SNAPSHOT_EPOCH: 10 + MODEL_SAVE_DIR: "./saved_model/hrnet_optic/" + SNAPSHOT_EPOCH: 5 TEST: - TEST_MODEL: "./saved_model/hrnet_w18_bn_pet/final" + TEST_MODEL: "./saved_model/hrnet_optic/final" SOLVER: - NUM_EPOCHS: 100 - LR: 0.005 + NUM_EPOCHS: 10 + LR: 0.001 LR_POLICY: "poly" - OPTIMIZER: "sgd" + OPTIMIZER: "adam" ``` ## 四. 配置/数据校验 @@ -96,7 +99,7 @@ SOLVER: 在开始训练和评估之前,我们还需要对配置和数据进行一次校验,确保数据和配置是正确的。使用下述命令启动校验流程 ```shell -python pdseg/check.py --cfg ./configs/hrnet_w18_pet.yaml +python pdseg/check.py --cfg ./configs/hrnet_optic.yaml ``` @@ -105,7 +108,10 @@ python pdseg/check.py --cfg ./configs/hrnet_w18_pet.yaml 校验通过后,使用下述命令启动训练 ```shell -python pdseg/train.py --use_gpu --cfg ./configs/hrnet_w18_pet.yaml +# 指定GPU卡号(以0号卡为例) +export CUDA_VISIBLE_DEVICES=0 +# 训练 +python pdseg/train.py --use_gpu --cfg ./configs/hrnet_optic.yaml ``` ## 六. 进行评估 @@ -113,19 +119,30 @@ python pdseg/train.py --use_gpu --cfg ./configs/hrnet_w18_pet.yaml 模型训练完成,使用下述命令启动评估 ```shell -python pdseg/eval.py --use_gpu --cfg ./configs/hrnet_w18_pet.yaml +python pdseg/eval.py --use_gpu --cfg ./configs/hrnet_optic.yaml +``` + +## 七. 进行可视化 +使用下述命令启动预测和可视化 + +```shell +python pdseg/vis.py --use_gpu --cfg ./configs/hrnet_optic.yaml ``` +预测结果将保存在visual目录下,以下展示其中1张图片的预测效果: + +![](imgs/optic_hrnet.png) + ## 模型组合 -|预训练模型名称|BackBone|Norm Type|数据集|配置| -|-|-|-|-|-| -|hrnet_w18_bn_cityscapes|-|bn| ImageNet | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [18, 36]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [18, 36, 72]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [18, 36, 72, 144]
MODEL.DEFAULT_NORM_TYPE: bn| -| hrnet_w18_bn_imagenet |-|bn| ImageNet | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [18, 36]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [18, 36, 72]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [18, 36, 72, 144]
MODEL.DEFAULT_NORM_TYPE: bn | -| hrnet_w30_bn_imagenet |-|bn| ImageNet | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [30, 60]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [30, 60, 120]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [30, 60, 120, 240]
MODEL.DEFAULT_NORM_TYPE: bn | -| hrnet_w32_bn_imagenet |-|bn| ImageNet | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [32, 64]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [32, 64, 128]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [32, 64, 128, 256]
MODEL.DEFAULT_NORM_TYPE: bn | -| hrnet_w40_bn_imagenet |-|bn| ImageNet | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [40, 80]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [40, 80, 160]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [40, 80, 160, 320]
MODEL.DEFAULT_NORM_TYPE: bn | -| hrnet_w44_bn_imagenet |-|bn| ImageNet | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [44, 88]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [44, 88, 176]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [44, 88, 176, 352]
MODEL.DEFAULT_NORM_TYPE: bn | -| hrnet_w48_bn_imagenet |-|bn| ImageNet | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [48, 96]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [48, 96, 192]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [48, 96, 192, 384]
MODEL.DEFAULT_NORM_TYPE: bn | -| hrnet_w64_bn_imagenet |-|bn| ImageNet | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [64, 128]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [64, 128, 256]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [64, 128, 256, 512]
MODEL.DEFAULT_NORM_TYPE: bn | +|预训练模型名称|Backbone|数据集|配置| +|-|-|-|-| +|hrnet_w18_bn_cityscapes|HRNet| Cityscapes | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [18, 36]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [18, 36, 72]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [18, 36, 72, 144]
MODEL.DEFAULT_NORM_TYPE: bn| +| hrnet_w18_bn_imagenet |HRNet| ImageNet | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [18, 36]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [18, 36, 72]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [18, 36, 72, 144]
MODEL.DEFAULT_NORM_TYPE: bn | +| hrnet_w30_bn_imagenet |HRNet| ImageNet | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [30, 60]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [30, 60, 120]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [30, 60, 120, 240]
MODEL.DEFAULT_NORM_TYPE: bn | +| hrnet_w32_bn_imagenet |HRNet|ImageNet | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [32, 64]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [32, 64, 128]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [32, 64, 128, 256]
MODEL.DEFAULT_NORM_TYPE: bn | +| hrnet_w40_bn_imagenet |HRNet| ImageNet | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [40, 80]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [40, 80, 160]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [40, 80, 160, 320]
MODEL.DEFAULT_NORM_TYPE: bn | +| hrnet_w44_bn_imagenet |HRNet| ImageNet | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [44, 88]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [44, 88, 176]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [44, 88, 176, 352]
MODEL.DEFAULT_NORM_TYPE: bn | +| hrnet_w48_bn_imagenet |HRNet| ImageNet | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [48, 96]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [48, 96, 192]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [48, 96, 192, 384]
MODEL.DEFAULT_NORM_TYPE: bn | +| hrnet_w64_bn_imagenet |HRNet| ImageNet | MODEL.MODEL_NAME: hrnet
MODEL.HRNET.STAGE2.NUM_CHANNELS: [64, 128]
MODEL.HRNET.STAGE3.NUM_CHANNELS: [64, 128, 256]
MODEL.HRNET.STAGE4.NUM_CHANNELS: [64, 128, 256, 512]
MODEL.DEFAULT_NORM_TYPE: bn | diff --git a/turtorial/finetune_icnet.md b/turtorial/finetune_icnet.md index 00caf4f87f206000bc2dde8440bdbe08ff03f555..57adc200d9d4857768d5055d8160b7b729332389 100644 --- a/turtorial/finetune_icnet.md +++ b/turtorial/finetune_icnet.md @@ -1,32 +1,34 @@ -# ICNet模型训练教程 +# ICNet模型使用教程 -* 本教程旨在介绍如何通过使用PaddleSeg提供的 ***`ICNet`*** 预训练模型在自定义数据集上进行训练 +本教程旨在介绍如何通过使用PaddleSeg提供的 ***`ICNet`*** 预训练模型在自定义数据集上进行训练、评估和可视化。 -* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)和[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解 +* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)和[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解。 -* 本教程的所有命令都基于PaddleSeg主目录进行执行 +* 本教程的所有命令都基于PaddleSeg主目录进行执行。 * 注意 ***`ICNet`*** 不支持在cpu环境上训练和评估 ## 一. 准备待训练数据 -我们提前准备好了一份数据集,通过以下代码进行下载 +![](./imgs/optic.png) + +我们提前准备好了一份眼底医疗分割数据集,包含267张训练图片、76张验证图片、38张测试图片。通过以下命令进行下载: ```shell -python dataset/download_pet.py +python dataset/download_optic.py ``` ## 二. 下载预训练模型 -关于PaddleSeg支持的所有预训练模型的列表,我们可以从[模型组合](#模型组合)中查看我们所需模型的名字和配置。 - 接着下载对应的预训练模型 ```shell python pretrained_model/download_model.py icnet_bn_cityscapes ``` +关于已有的ICNet预训练模型的列表,请参见[模型组合](#模型组合)。如果需要使用其他预训练模型,下载该模型并将配置中的BACKBONE、NORM_TYPE等进行替换即可。 + ## 三. 准备配置 接着我们需要确定相关配置,从本教程的角度,配置分为三部分: @@ -48,20 +50,19 @@ python pretrained_model/download_model.py icnet_bn_cityscapes 在三者中,预训练模型的配置尤为重要,如果模型或者BACKBONE配置错误,会导致预训练的参数没有加载,进而影响收敛速度。预训练模型相关的配置如第二步所示。 -数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/mini_pet`中 +数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/optic_disc_seg`中 -其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/icnet_pet.yaml** +其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/icnet_optic.yaml** ```yaml # 数据集配置 DATASET: - DATA_DIR: "./dataset/mini_pet/" - NUM_CLASSES: 3 - TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" - TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt" - VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt" - VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" - + DATA_DIR: "./dataset/optic_disc_seg/" + NUM_CLASSES: 2 + TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" + TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt" + VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt" + VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" # 预训练模型配置 MODEL: @@ -80,15 +81,15 @@ AUG: BATCH_SIZE: 4 TRAIN: PRETRAINED_MODEL_DIR: "./pretrained_model/icnet_bn_cityscapes/" - MODEL_SAVE_DIR: "./saved_model/icnet_pet/" - SNAPSHOT_EPOCH: 10 + MODEL_SAVE_DIR: "./saved_model/icnet_optic/" + SNAPSHOT_EPOCH: 5 TEST: - TEST_MODEL: "./saved_model/icnet_pet/final" + TEST_MODEL: "./saved_model/icnet_optic/final" SOLVER: - NUM_EPOCHS: 100 - LR: 0.005 + NUM_EPOCHS: 10 + LR: 0.001 LR_POLICY: "poly" - OPTIMIZER: "sgd" + OPTIMIZER: "adam" ``` ## 四. 配置/数据校验 @@ -96,7 +97,7 @@ SOLVER: 在开始训练和评估之前,我们还需要对配置和数据进行一次校验,确保数据和配置是正确的。使用下述命令启动校验流程 ```shell -python pdseg/check.py --cfg ./configs/icnet_pet.yaml +python pdseg/check.py --cfg ./configs/icnet_optic.yaml ``` @@ -105,7 +106,10 @@ python pdseg/check.py --cfg ./configs/icnet_pet.yaml 校验通过后,使用下述命令启动训练 ```shell -python pdseg/train.py --use_gpu --cfg ./configs/icnet_pet.yaml +# 指定GPU卡号(以0号卡为例) +export CUDA_VISIBLE_DEVICES=0 +# 训练 +python pdseg/train.py --use_gpu --cfg ./configs/icnet_optic.yaml ``` ## 六. 进行评估 @@ -113,11 +117,22 @@ python pdseg/train.py --use_gpu --cfg ./configs/icnet_pet.yaml 模型训练完成,使用下述命令启动评估 ```shell -python pdseg/eval.py --use_gpu --cfg ./configs/icnet_pet.yaml +python pdseg/eval.py --use_gpu --cfg ./configs/icnet_optic.yaml +``` + +## 七. 
进行可视化 +使用下述命令启动预测和可视化 + +```shell +python pdseg/vis.py --use_gpu --cfg ./configs/icnet_optic.yaml ``` +预测结果将保存在visual目录下,以下展示其中1张图片的预测效果: + +![](imgs/optic_icnet.png) + ## 模型组合 -|预训练模型名称|BackBone|Norm|数据集|配置| -|-|-|-|-|-| -|icnet_bn_cityscapes|-|bn|Cityscapes|MODEL.MODEL_NAME: icnet
MODEL.DEFAULT_NORM_TYPE: bn
MODEL.MULTI_LOSS_WEIGHT: [1.0, 0.4, 0.16]| +|预训练模型名称|Backbone|数据集|配置| +|-|-|-|-| +|icnet_bn_cityscapes|ResNet50|Cityscapes|MODEL.MODEL_NAME: icnet
MODEL.DEFAULT_NORM_TYPE: bn
MODEL.MULTI_LOSS_WEIGHT: [1.0, 0.4, 0.16]| diff --git a/turtorial/finetune_pspnet.md b/turtorial/finetune_pspnet.md index 931c3c5f7515e2ebec3d4fccf3069ecc6d6c00fb..8c52bbe4646d253f70a24001ed6e414a1bee3cc3 100644 --- a/turtorial/finetune_pspnet.md +++ b/turtorial/finetune_pspnet.md @@ -1,29 +1,31 @@ # PSPNET模型训练教程 -* 本教程旨在介绍如何通过使用PaddleSeg提供的 ***`PSPNET`*** 预训练模型在自定义数据集上进行训练 +本教程旨在介绍如何通过使用PaddleSeg提供的 ***`PSPNET`*** 预训练模型在自定义数据集上进行训练。 -* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)和[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解 +* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)和[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解。 -* 本教程的所有命令都基于PaddleSeg主目录进行执行 +* 本教程的所有命令都基于PaddleSeg主目录进行执行。 ## 一. 准备待训练数据 -我们提前准备好了一份数据集,通过以下代码进行下载 +![](./imgs/optic.png) + +我们提前准备好了一份眼底医疗分割数据集,包含267张训练图片、76张验证图片、38张测试图片。通过以下命令进行下载: ```shell -python dataset/download_pet.py +python dataset/download_optic.py ``` ## 二. 下载预训练模型 -关于PaddleSeg支持的所有预训练模型的列表,我们可以从[模型组合](#模型组合)中查看我们所需模型的名字和配置。 - 接着下载对应的预训练模型 ```shell python pretrained_model/download_model.py pspnet50_bn_cityscapes ``` +关于已有的PSPNet预训练模型的列表,请参见[PSPNet预训练模型组合](#PSPNet预训练模型组合)。如果需要使用其他预训练模型,下载该模型并将配置中的BACKBONE、NORM_TYPE等进行替换即可。 + ## 三. 准备配置 接着我们需要确定相关配置,从本教程的角度,配置分为三部分: @@ -45,20 +47,19 @@ python pretrained_model/download_model.py pspnet50_bn_cityscapes 在三者中,预训练模型的配置尤为重要,如果模型或者BACKBONE配置错误,会导致预训练的参数没有加载,进而影响收敛速度。预训练模型相关的配置如第二步所示。 -数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/mini_pet`中 +数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/optic_disc_seg`中 -其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为`configs/test_pet.yaml` +其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为`configs/pspnet_optic.yaml` ```yaml # 数据集配置 DATASET: - DATA_DIR: "./dataset/mini_pet/" - NUM_CLASSES: 3 - TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" - TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt" - VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt" - VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" - + DATA_DIR: "./dataset/optic_disc_seg/" + NUM_CLASSES: 2 + TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" + TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt" + VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt" + VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" # 预训练模型配置 MODEL: @@ -77,15 +78,15 @@ AUG: BATCH_SIZE: 4 TRAIN: PRETRAINED_MODEL_DIR: "./pretrained_model/pspnet50_bn_cityscapes/" - MODEL_SAVE_DIR: "./saved_model/pspnet_pet/" - SNAPSHOT_EPOCH: 10 + MODEL_SAVE_DIR: "./saved_model/pspnet_optic/" + SNAPSHOT_EPOCH: 5 TEST: - TEST_MODEL: "./saved_model/pspnet_pet/final" + TEST_MODEL: "./saved_model/pspnet_optic/final" SOLVER: - NUM_EPOCHS: 100 - LR: 0.005 + NUM_EPOCHS: 10 + LR: 0.001 LR_POLICY: "poly" - OPTIMIZER: "sgd" + OPTIMIZER: "adam" ``` ## 四. 配置/数据校验 @@ -93,7 +94,7 @@ SOLVER: 在开始训练和评估之前,我们还需要对配置和数据进行一次校验,确保数据和配置是正确的。使用下述命令启动校验流程 ```shell -python pdseg/check.py --cfg ./configs/test_pet.yaml +python pdseg/check.py --cfg ./configs/pspnet_optic.yaml ``` @@ -102,7 +103,10 @@ python pdseg/check.py --cfg ./configs/test_pet.yaml 校验通过后,使用下述命令启动训练 ```shell -python pdseg/train.py --use_gpu --cfg ./configs/test_pet.yaml +# 指定GPU卡号(以0号卡为例) +export CUDA_VISIBLE_DEVICES=0 +# 训练 +python pdseg/train.py --use_gpu --cfg ./configs/pspnet_optic.yaml ``` ## 六. 进行评估 @@ -110,12 +114,27 @@ python pdseg/train.py --use_gpu --cfg ./configs/test_pet.yaml 模型训练完成,使用下述命令启动评估 ```shell -python pdseg/eval.py --use_gpu --cfg ./configs/test_pet.yaml +python pdseg/eval.py --use_gpu --cfg ./configs/pspnet_optic.yaml +``` + +## 七. 
进行可视化 +使用下述命令启动预测和可视化 + +```shell +python pdseg/vis.py --use_gpu --cfg ./configs/pspnet_optic.yaml ``` -## 模型组合 +预测结果将保存在visual目录下,以下展示其中1张图片的预测效果: + +![](imgs/optic_pspnet.png) + +## PSPNet预训练模型组合 -|预训练模型名称|BackBone|Norm|数据集|配置| -|-|-|-|-|-| -|pspnet50_bn_cityscapes|ResNet50|bn|Cityscapes|MODEL.MODEL_NAME: pspnet
MODEL.DEFAULT_NORM_TYPE: bn
MODEL.PSPNET.LAYERS: 50| -|pspnet101_bn_cityscapes|ResNet101|bn|Cityscapes|MODEL.MODEL_NAME: pspnet
MODEL.DEFAULT_NORM_TYPE: bn
MODEL.PSPNET.LAYERS: 101| +|模型|BackBone|数据集|配置| +|-|-|-|-| +|[pspnet50_cityscapes](https://paddleseg.bj.bcebos.com/models/pspnet50_cityscapes.tgz)|ResNet50(适配PSPNet)|Cityscapes |MODEL.MODEL_NAME: pspnet
MODEL.DEFAULT_NORM_TYPE: bn
MODEL.PSPNET.LAYERS: 50| +|[pspnet101_cityscapes](https://paddleseg.bj.bcebos.com/models/pspnet101_cityscapes.tgz)|ResNet101(适配PSPNet)|Cityscapes |MODEL.MODEL_NAME: pspnet
MODEL.DEFAULT_NORM_TYPE: bn
MODEL.PSPNET.LAYERS: 101| +| [pspnet50_coco](https://paddleseg.bj.bcebos.com/models/pspnet50_coco.tgz)|ResNet50(适配PSPNet)|COCO |MODEL.MODEL_NAME: pspnet
MODEL.DEFAULT_NORM_TYPE: bn
MODEL.PSPNET.LAYERS: 50| +| [pspnet101_coco](https://paddleseg.bj.bcebos.com/models/pspnet101_coco.tgz) |ResNet101(适配PSPNet)| COCO |MODEL.MODEL_NAME: pspnet
MODEL.DEFAULT_NORM_TYPE: bn
MODEL.PSPNET.LAYERS: 101| +| [resnet50_v2_pspnet](https://paddleseg.bj.bcebos.com/resnet50_v2_pspnet.tgz)| ResNet50(适配PSPNet) | ImageNet | MODEL.MODEL_NAME: pspnet
MODEL.DEFAULT_NORM_TYPE: bn
MODEL.PSPNET.LAYERS: 50 | +| [resnet101_v2_pspnet](https://paddleseg.bj.bcebos.com/resnet101_v2_pspnet.tgz)| ResNet101(适配PSPNet) | ImageNet | MODEL.MODEL_NAME: pspnet
MODEL.DEFAULT_NORM_TYPE: bn
MODEL.PSPNET.LAYERS: 101 | diff --git a/turtorial/finetune_unet.md b/turtorial/finetune_unet.md index b1baff8b0d6a9438df0ae4ed6a5f0dfdae4d3414..dd2945cf587fc18ed760639a56ad7b8edebc0087 100644 --- a/turtorial/finetune_unet.md +++ b/turtorial/finetune_unet.md @@ -1,29 +1,31 @@ -# U-Net模型训练教程 +# U-Net模型使用教程 -* 本教程旨在介绍如何通过使用PaddleSeg提供的 ***`U-Net`*** 预训练模型在自定义数据集上进行训练 +本教程旨在介绍如何通过使用PaddleSeg提供的 ***`U-Net`*** 预训练模型在自定义数据集上进行训练、评估和可视化。 -* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)和[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解 +* 在阅读本教程前,请确保您已经了解过PaddleSeg的[快速入门](../README.md#快速入门)和[基础功能](../README.md#基础功能)等章节,以便对PaddleSeg有一定的了解。 -* 本教程的所有命令都基于PaddleSeg主目录进行执行 +* 本教程的所有命令都基于PaddleSeg主目录进行执行。 ## 一. 准备待训练数据 -我们提前准备好了一份数据集,通过以下代码进行下载 +![](./imgs/optic.png) + +我们提前准备好了一份眼底医疗分割数据集,包含267张训练图片、76张验证图片、38张测试图片。通过以下命令进行下载: ```shell -python dataset/download_pet.py +python dataset/download_optic.py ``` ## 二. 下载预训练模型 -关于PaddleSeg支持的所有预训练模型的列表,我们可以从[模型组合](#模型组合)中查看我们所需模型的名字和配置。 - 接着下载对应的预训练模型 ```shell python pretrained_model/download_model.py unet_bn_coco ``` +关于已有的U-Net预训练模型的列表,请参见[模型组合](#模型组合)。如果需要使用其他预训练模型,下载该模型并将配置中的BACKBONE、NORM_TYPE等进行替换即可。 + ## 三. 准备配置 接着我们需要确定相关配置,从本教程的角度,配置分为三部分: @@ -45,20 +47,19 @@ python pretrained_model/download_model.py unet_bn_coco 在三者中,预训练模型的配置尤为重要,如果模型或者BACKBONE配置错误,会导致预训练的参数没有加载,进而影响收敛速度。预训练模型相关的配置如第二步所展示。 -数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/mini_pet`中 +数据集的配置和数据路径有关,在本教程中,数据存放在`dataset/optic_disc_seg`中 -其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/unet_pet.yaml** +其他配置则根据数据集和机器环境的情况进行调节,最终我们保存一个如下内容的yaml配置文件,存放路径为**configs/unet_optic.yaml** ```yaml # 数据集配置 DATASET: - DATA_DIR: "./dataset/mini_pet/" - NUM_CLASSES: 3 - TEST_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" - TRAIN_FILE_LIST: "./dataset/mini_pet/file_list/train_list.txt" - VAL_FILE_LIST: "./dataset/mini_pet/file_list/val_list.txt" - VIS_FILE_LIST: "./dataset/mini_pet/file_list/test_list.txt" - + DATA_DIR: "./dataset/optic_disc_seg/" + NUM_CLASSES: 2 + TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" + TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt" + VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt" + VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt" # 预训练模型配置 MODEL: @@ -74,13 +75,13 @@ AUG: BATCH_SIZE: 4 TRAIN: PRETRAINED_MODEL_DIR: "./pretrained_model/unet_bn_coco/" - MODEL_SAVE_DIR: "./saved_model/unet_pet/" - SNAPSHOT_EPOCH: 10 + MODEL_SAVE_DIR: "./saved_model/unet_optic/" + SNAPSHOT_EPOCH: 5 TEST: - TEST_MODEL: "./saved_model/unet_pet/final" + TEST_MODEL: "./saved_model/unet_optic/final" SOLVER: - NUM_EPOCHS: 100 - LR: 0.005 + NUM_EPOCHS: 10 + LR: 0.001 LR_POLICY: "poly" OPTIMIZER: "adam" ``` @@ -90,7 +91,7 @@ SOLVER: 在开始训练和评估之前,我们还需要对配置和数据进行一次校验,确保数据和配置是正确的。使用下述命令启动校验流程 ```shell -python pdseg/check.py --cfg ./configs/unet_pet.yaml +python pdseg/check.py --cfg ./configs/unet_optic.yaml ``` @@ -99,7 +100,10 @@ python pdseg/check.py --cfg ./configs/unet_pet.yaml 校验通过后,使用下述命令启动训练 ```shell -python pdseg/train.py --use_gpu --cfg ./configs/unet_pet.yaml +# 指定GPU卡号(以0号卡为例) +export CUDA_VISIBLE_DEVICES=0 +# 训练 +python pdseg/train.py --use_gpu --cfg ./configs/unet_optic.yaml ``` ## 六. 进行评估 @@ -107,11 +111,26 @@ python pdseg/train.py --use_gpu --cfg ./configs/unet_pet.yaml 模型训练完成,使用下述命令启动评估 ```shell -python pdseg/eval.py --use_gpu --cfg ./configs/unet_pet.yaml +python pdseg/eval.py --use_gpu --cfg ./configs/unet_optic.yaml +``` + +## 七. 
进行可视化 +使用下述命令启动预测和可视化 + +```shell +python pdseg/vis.py --use_gpu --cfg ./configs/unet_optic.yaml ``` +预测结果将保存在visual目录下,以下展示其中1张图片的预测效果: + +![](imgs/optic_unet.png) + +## 在线体验 + +PaddleSeg在AI Studio平台上提供了在线体验的U-Net分割教程,欢迎[点击体验](https://aistudio.baidu.com/aistudio/projectDetail/102889)。 + ## 模型组合 -|预训练模型名称|BackBone|Norm|数据集|配置| -|-|-|-|-|-| -|unet_bn_coco|-|bn|COCO|MODEL.MODEL_NAME: unet
MODEL.DEFAULT_NORM_TYPE: bn| +|预训练模型名称|Backbone|数据集|配置| +|-|-|-|-| +|unet_bn_coco|VGG16|COCO|MODEL.MODEL_NAME: unet
MODEL.DEFAULT_NORM_TYPE: bn| diff --git a/turtorial/imgs/optic.png b/turtorial/imgs/optic.png new file mode 100644 index 0000000000000000000000000000000000000000..34acaae49303e71e6b59db26202a9079965f05eb Binary files /dev/null and b/turtorial/imgs/optic.png differ diff --git a/turtorial/imgs/optic_deeplab.png b/turtorial/imgs/optic_deeplab.png new file mode 100644 index 0000000000000000000000000000000000000000..8edc957362715bb742042d6f0f6e6c36fd7aec52 Binary files /dev/null and b/turtorial/imgs/optic_deeplab.png differ diff --git a/turtorial/imgs/optic_hrnet.png b/turtorial/imgs/optic_hrnet.png new file mode 100644 index 0000000000000000000000000000000000000000..8d19190aa5a057fe5aa72cd800c1c9fed642d9ef Binary files /dev/null and b/turtorial/imgs/optic_hrnet.png differ diff --git a/turtorial/imgs/optic_icnet.png b/turtorial/imgs/optic_icnet.png new file mode 100644 index 0000000000000000000000000000000000000000..a4d36b7ab0f086af46840a5c1e8f1624054048be Binary files /dev/null and b/turtorial/imgs/optic_icnet.png differ diff --git a/turtorial/imgs/optic_pspnet.png b/turtorial/imgs/optic_pspnet.png new file mode 100644 index 0000000000000000000000000000000000000000..44fd2795d6edfdc95378046da906949ad01431d9 Binary files /dev/null and b/turtorial/imgs/optic_pspnet.png differ diff --git a/turtorial/imgs/optic_unet.png b/turtorial/imgs/optic_unet.png new file mode 100644 index 0000000000000000000000000000000000000000..9ca439ebc76427516127d56aac56b5d09dd68263 Binary files /dev/null and b/turtorial/imgs/optic_unet.png differ