Unverified · Commit 8ec2454d authored by LielinJiang, committed by GitHub

Merge pull request #1 from PaddlePaddle/master

track official update
language: python
python:
- '2.7'
- '3.5'
- '3.6'
script:
- /bin/bash ./test/ci/test_download_dataset.sh
notifications:
email:
on_success: change
on_failure: always
# PaddleSeg Semantic Segmentation Library
[![Build Status](https://travis-ci.org/PaddlePaddle/PaddleSeg.svg?branch=master)](https://travis-ci.org/PaddlePaddle/PaddleSeg)
[![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)
## Introduction
@@ -19,7 +20,7 @@ PaddleSeg is a semantic segmentation library developed on [PaddlePaddle](https://www.paddlepaddle.org.cn)
- **High performance**
  - PaddleSeg supports training acceleration strategies such as multi-process I/O, multi-GPU parallelism, and synchronized Batch Norm across GPUs. Combined with the GPU memory optimization algorithms of the PaddlePaddle core framework, this greatly reduces the memory footprint of segmentation models and speeds up training.
- **Industrial-grade deployment**
@@ -76,7 +77,7 @@ A: Lower the batch size, use Group Norm, etc.
* Initial release of the PaddleSeg segmentation library, with three model families: DeepLabv3+, U-Net and ICNet; DeepLabv3+ supports two configurable backbones, Xception and MobileNet.
* Released [ACE2P](./contrib/ACE2P), the winning prediction model of the CVPR'19 LIP human-parsing challenge.
* Released preset prediction models for portrait segmentation and lane-line segmentation based on the DeepLabv3+ network.
## How to Contribute
...
# Augmented Context Embedding with Edge Perceiving (ACE2P)
- Category: Image - Semantic Segmentation
- Network: ACE2P
- Dataset: LIP
## Model Overview
Human parsing is a fine-grained semantic segmentation task that identifies the components of a human image (e.g., body parts and clothing) at the pixel level. ACE2P fuses low-level features, global context and edge details, and is trained end to end for the human parsing task. The solution built on the ACE2P single-person parsing network won first place in all three human-parsing tracks of the 3rd LIP Challenge at CVPR 2019.
...
@@ -22,7 +22,7 @@ Winning model of the CVPR 19 Look into Person (LIP) single-person parsing challenge; see [ACE...
### 4. Run
**NOTE:** Running this model requires about 2 GB of GPU memory.
Predict with a GPU:
```
...
@@ -118,10 +118,10 @@ def infer():
        output_im.putpalette(palette)
        output_im.save(result_path)

        if (idx + 1) % 100 == 0:
            print('%d processed' % (idx + 1))
    print('%d processed done' % (idx + 1))
    return 0
...
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
TEST_PATH = os.path.join(LOCAL_PATH, "..", "test")
sys.path.append(TEST_PATH)
from test_utils import download_file_and_uncompress
def download_cityscapes_dataset(savepath, extrapath):
    url = "https://paddleseg.bj.bcebos.com/dataset/cityscapes.tar"
    download_file_and_uncompress(
        url=url, savepath=savepath, extrapath=extrapath)


if __name__ == "__main__":
    download_cityscapes_dataset(LOCAL_PATH, LOCAL_PATH)
    print("Dataset download finished!")
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
TEST_PATH = os.path.join(LOCAL_PATH, "..", "test")
sys.path.append(TEST_PATH)
from test_utils import download_file_and_uncompress
def download_pet_dataset(savepath, extrapath):
    url = "https://paddleseg.bj.bcebos.com/dataset/mini_pet.zip"
    download_file_and_uncompress(
        url=url, savepath=savepath, extrapath=extrapath)


if __name__ == "__main__":
    download_pet_dataset(LOCAL_PATH, LOCAL_PATH)
    print("Dataset download finished!")
@@ -11,7 +11,7 @@
Typing `labelme` in a terminal opens the LabelMe UI. You can first browse the pre-annotated images that `LabelMe` provides before annotating your own dataset.
<div align="center">
<img src="../imgs/annotation/image-1.png" width="600px"/>
<p>Figure 1: The LabelMe UI</p>
</div>
@@ -24,7 +24,7 @@ git clone https://github.com/wkentaro/labelme
Typing `labelme` in a terminal opens the LabelMe UI. Click `OpenDir` and open `<path/to/labelme>/examples/semantic_segmentation/data_annotated`, where `<path/to/labelme>` is the path to the cloned `labelme` repository; this shows example ground-truth annotations for semantic segmentation.
<div align="center">
<img src="../imgs/annotation/image-2.png" width="600px"/>
<p>Figure 2: An annotated image</p>
</div>
@@ -35,15 +35,15 @@ git clone https://github.com/wkentaro/labelme
(1) Click `OpenDir` to open the directory of images to be annotated, click `Create Polygons`, draw a polygon along the edge of the target, and enter the target's category when done. If a point is misplaced while annotating, press the undo shortcut to remove it; on Mac the undo shortcut is `command+Z`.
<div align="center">
<img src="../imgs/annotation/image-3.png" width="600px"/>
<p>Figure 3: Annotating a single target</p>
</div>
(2) Right-click and choose `Edit Polygons` to move a whole polygon or individual points; right-click and choose `Edit Label` to change a target's category. Perform this step only if needed; otherwise skip it.
<div align="center">
<img src="../imgs/annotation/image-4-1.png" width="600px" />
<img src="../imgs/annotation/image-4-2.png" width="600px"/>
<p>Figure 4: Editing annotations</p>
</div>
@@ -52,7 +52,7 @@ git clone https://github.com/wkentaro/labelme
For the ground-truth files produced by LabelMe, see the provided folder `data_annotated`.
<div align="center">
<img src="../imgs/annotation/image-5.png" width="600px"/>
<p>Figure 5: Ground-truth files produced by LabelMe</p>
</div>
@@ -71,7 +71,7 @@ For the ground-truth files produced by LabelMe, see the provided folder `data_annotated`.
```
<div align="center">
<img src="../imgs/annotation/image-6.png" width="600px"/>
<p>Figure 6: Directory structure of the training dataset</p>
</div>
@@ -92,6 +92,6 @@ pip install pillow
For the converted dataset, see the provided folder `my_dataset`. The file `class_names.txt` lists the names of all annotated classes in the dataset, including the background class; the folder `JPEGImages` holds the dataset's images; the folder `SegmentationClassPNG` holds the pixel-level ground truth of each image, where the background class `_background_` maps to 0 and the other classes increase from 1, up to at most 255.
<div align="center">
<img src="../imgs/annotation/image-7.png" width="600px"/>
<p>Figure 7: Contents of each directory of the training dataset</p>
</div>
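After conversion it is worth sanity-checking the produced masks. The following is a minimal sketch, not part of the official tooling; it assumes the `my_dataset` layout described above and that Pillow and NumPy are installed:

```python
# Minimal sanity check for converted masks (assumes the my_dataset layout above).
import os
import numpy as np
from PIL import Image

root = "my_dataset"
with open(os.path.join(root, "class_names.txt")) as f:
    num_classes = len(f.read().splitlines())  # includes _background_

mask_dir = os.path.join(root, "SegmentationClassPNG")
for name in os.listdir(mask_dir):
    mask = np.asarray(Image.open(os.path.join(mask_dir, name)))
    # class ids must stay within [0, num_classes - 1]
    assert mask.max() < num_classes, "unexpected class id in %s" % name
```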
@@ -13,7 +13,7 @@ The DATALOADER group holds all configuration related to data loading
### Notes
* This option is only used by `pdseg/train.py` and `pdseg/eval.py`
* This field sets the number of processes used for data preprocessing; it only takes effect when the `--use_mpio` switch is enabled in `pdseg/train.py` or `pdseg/eval.py`. The default value is usually sufficient.
<br/>
<br/>
...
@@ -55,7 +55,7 @@ Rich crop applies a variety of transforms to the images to keep the training data diverse...
- Input image formats
  - Source images
    - Image format: both RGB three-channel and RGBA four-channel images can be used for training, but only one format may be used within a single training run.
    - Image conversion: grayscale images are converted into three-channel images during preprocessing.
    - Image parameters: when the images are three-channel, set IMAGE_TYPE to rgb, and MEAN and STD must each be a list of length 3; when the images are four-channel, set IMAGE_TYPE to rgba, and MEAN and STD must each be a list of length 4. A sketch of these rules follows after this list.
  - Annotation images
...
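A minimal sketch of the two rules above (channel conversion and matching MEAN/STD lengths), assuming OpenCV is available; the MEAN/STD values are examples only, not PaddleSeg defaults:

```python
# Hedged sketch: grayscale images become three-channel, and MEAN/STD must match
# the channel count (3 for rgb, 4 for rgba).
import cv2

MEAN = [0.5, 0.5, 0.5]  # example values; length must equal the channel count
STD = [0.5, 0.5, 0.5]

im = cv2.imread("example.jpg", cv2.IMREAD_UNCHANGED)
if im.ndim == 2:  # grayscale -> three channels, as preprocessing does
    im = cv2.cvtColor(im, cv2.COLOR_GRAY2BGR)
assert im.shape[-1] == len(MEAN) == len(STD)
```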
@@ -45,7 +45,7 @@ PaddleSeg organizes the training, validation and test sets with plain file lists...
```
Here `[SEP]` is the file-path separator, configurable via `DATASET.SEPARATOR`; the default is a space.
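As an illustration (a minimal sketch, not PaddleSeg's actual data reader), a line of such a file list splits into an image path and an annotation path like this:

```python
# Hedged sketch of parsing one file-list line with DATASET.SEPARATOR.
SEPARATOR = " "  # the default separator


def parse_line(line):
    parts = line.strip().split(SEPARATOR)
    img_path = parts[0]
    # test lists may contain only the image path
    grt_path = parts[1] if len(parts) > 1 else None
    return img_path, grt_path


img, grt = parse_line("images/train/0001.jpg annotations/train/0001.png")
```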
**Notes**

@@ -60,42 +60,50 @@ PaddleSeg organizes the training, validation and test sets with plain file lists...
For a complete configuration, see the yaml file and file lists under [`./dataset/cityscapes_demo`](../dataset/cityscapes_demo/).

## Data Validation
Validates a user-defined dataset and its yaml configuration, helping users track down basic data and configuration problems.
The validation script is invoked as follows; the configuration file is specified via `YAML_FILE_PATH`.
```
# YAML_FILE_PATH is the path of the yaml configuration file
python pdseg/check.py --cfg ${YAML_FILE_PATH}
```
After it runs, the command line shows an overview of the validation results; details are written to detail.log.

### 1 List separator check
Checks that the separator `DATASET.SEPARATOR` is set correctly for the `TRAIN_FILE_LIST`, `VAL_FILE_LIST` and `TEST_FILE_LIST` list files.

### 2 Dataset readability check
Checks that `DATASET.TRAIN_FILE_LIST`, `DATASET.VAL_FILE_LIST` and `DATASET.TEST_FILE_LIST` are set correctly by attempting to read every image they reference.
If not, an error is reported; possible causes include a wrong dataset path and corrupted images.

### 3 Annotation format check
Checks that annotation images are in PNG format.
**NOTE:** Use losslessly compressed PNG images for annotations; other formats may hurt accuracy.

### 4 Annotation channel check
Checks the number of channels of the annotation images. A correct annotation image is single-channel.

### 5 Annotation class check
Checks that the actual annotated classes match the configuration parameters `DATASET.NUM_CLASSES` and `DATASET.IGNORE_INDEX`.
**NOTE:**
Annotation class values must lie in the range [0, `DATASET.NUM_CLASSES`-1] or equal `DATASET.IGNORE_INDEX`.
Classes should preferably start from 0, otherwise accuracy may suffer.
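The rule in this NOTE can also be checked by hand. A minimal sketch with assumed example values follows; the real logic lives in pdseg/check.py:

```python
# Hedged sketch of the class-range rule; NUM_CLASSES, IGNORE_INDEX and the
# annotation path are example values, not taken from a real config.
import numpy as np
from PIL import Image

NUM_CLASSES, IGNORE_INDEX = 2, 255
grt = np.asarray(Image.open("annotations/demo.png"))  # hypothetical path
classes = np.unique(grt)
classes = classes[classes != IGNORE_INDEX]  # ignore index is allowed
assert classes.min() >= 0 and classes.max() <= NUM_CLASSES - 1, \
    "label class out of range [0, %d]" % (NUM_CLASSES - 1)
```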
### 6 Annotation pixel statistics
Counts the number of pixels of each class and reports them for reference.

### 7 Image format check
Checks that the image type `DATASET.IMAGE_TYPE` is set correctly.
**NOTE:** Set `DATASET.IMAGE_TYPE` to rgb when the dataset contains three-channel images,
and to rgba when the dataset consists entirely of four-channel images.

### 8 Image/annotation size consistency check
Checks that each image and its corresponding annotation have the same size.

### 9 Evaluation parameter `EVAL_CROP_SIZE` check
Checks that `EVAL_CROP_SIZE` is set correctly; there are three cases:
- When `AUG.AUG_METHOD` is unpadding, the width and height of `EVAL_CROP_SIZE` must be no smaller than those of `AUG.FIX_RESIZE_SIZE`.
@@ -105,3 +113,6 @@ python pdseg/check.py --cfg ${YAML_FILE_PATH}
- When `AUG.AUG_METHOD` is rangescaling, the width and height of `EVAL_CROP_SIZE` must be no smaller than the largest width and height of the rescaled images.
A suggested value for `EVAL_CROP_SIZE` is computed and reported.

### 10 Augmentation parameter `AUG.INF_RESIZE_VALUE` check
Checks that `AUG.INF_RESIZE_VALUE` lies within [`AUG.MIN_RESIZE_VALUE`, `AUG.MAX_RESIZE_VALUE`]; if so, the check passes.
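A hedged sketch of checks 9 and 10, covering only the unpadding case and using assumed example config values (the real logic lives in pdseg/check.py):

```python
# Example values, not from a real config.
EVAL_CROP_SIZE = (512, 512)
FIX_RESIZE_SIZE = (500, 500)  # AUG.FIX_RESIZE_SIZE
MIN_RESIZE_VALUE, MAX_RESIZE_VALUE, INF_RESIZE_VALUE = 400, 600, 500

# Check 9, unpadding case: crop size must cover the fixed resize size.
crop_ok = (EVAL_CROP_SIZE[0] >= FIX_RESIZE_SIZE[0]
           and EVAL_CROP_SIZE[1] >= FIX_RESIZE_SIZE[1])
# Check 10: INF_RESIZE_VALUE must fall inside the configured range.
inf_resize_ok = MIN_RESIZE_VALUE <= INF_RESIZE_VALUE <= MAX_RESIZE_VALUE

print("EVAL_CROP_SIZE check:", "PASS" if crop_ok else "NOT PASS")
print("AUG.INF_RESIZE_VALUE check:", "PASS" if inf_resize_ok else "NOT PASS")
```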
@@ -5,7 +5,8 @@
* Python 2.7 or 3.5+
* CUDA 9.2
* cuDNN v7.1
* paddlepaddle >= 1.5.2
* nccl >= 2.4.7

## 1. Install PaddlePaddle
@@ -26,7 +27,7 @@ The latest PaddlePaddle release, 1.5, supports installation via Conda, which reduces the cost of installing dependencies...
conda install -c paddle paddlepaddle-gpu cudatoolkit=9.0
```
For more installation options, see the [PaddlePaddle installation guide](https://www.paddlepaddle.org.cn/documentation/docs/zh/beginners_guide/install/index_cn.html)

## 2. Download the PaddleSeg code
@@ -39,14 +40,6 @@ git clone https://github.com/PaddlePaddle/PaddleSeg

## 3. Install PaddleSeg dependencies
```
cd PaddleSeg
pip install -r requirements.txt
```

## 4. Local pipeline test
The following command runs the full pipeline of dataset download, training, visualization and prediction-model export, verifying that PaddleSeg and its dependencies are installed correctly.
```
python test/local_test_cityscapes.py
```
@@ -30,7 +30,7 @@ The train set is a semantic segmentation dataset converted from the COCO instance segmentation dataset...
|---|---|---|---|---|---|---|
| DeepLabv3+/MobileNetv2/bn | COCO | MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenet <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0 <br> MODEL.DEFAULT_NORM_TYPE: bn | [deeplabv3plus_coco_bn_init.tgz](https://bj.bcebos.com/v1/paddleseg/deeplabv3plus_coco_bn_init.tgz) | 16 | -- | -- |
| DeeplabV3+/Xception65/bn | COCO | MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_65 <br> MODEL.DEFAULT_NORM_TYPE: bn | [xception65_coco.tgz](https://paddleseg.bj.bcebos.com/models/xception65_coco.tgz) | 16 | -- | -- |
| UNet/bn | COCO | MODEL.MODEL_NAME: unet <br> MODEL.DEFAULT_NORM_TYPE: bn | [unet](https://paddleseg.bj.bcebos.com/models/unet_coco_v3.tgz) | 16 | -- | -- |

## Cityscapes Pretrained Models
@@ -40,5 +40,5 @@ The train set is the Cityscapes training set; testing uses the Cityscapes validation set...
|---|---|---|---|---|---|---|
| DeepLabv3+/MobileNetv2/bn | Cityscapes | MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenet <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0 <br> MODEL.DEEPLAB.ENCODER_WITH_ASPP: False <br> MODEL.DEEPLAB.ENABLE_DECODER: False <br> MODEL.DEFAULT_NORM_TYPE: bn | [mobilenet_cityscapes.tgz](https://paddleseg.bj.bcebos.com/models/mobilenet_cityscapes.tgz) | 16 | false | 0.698 |
| DeepLabv3+/Xception65/gn | Cityscapes | MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_65 <br> MODEL.DEFAULT_NORM_TYPE: gn | [deeplabv3p_xception65_cityscapes.tgz](https://paddleseg.bj.bcebos.com/models/deeplabv3p_xception65_cityscapes.tgz) | 16 | false | 0.7804 |
| DeepLabv3+/Xception65/bn | Cityscapes | MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_65 <br> MODEL.DEFAULT_NORM_TYPE: bn | [Xception65_deeplab_cityscapes.tgz](https://paddleseg.bj.bcebos.com/models/xception65_bn_cityscapes.tgz) | 16 | false | 0.7715 |
| ICNet/bn | Cityscapes | MODEL.MODEL_NAME: icnet <br> MODEL.DEFAULT_NORM_TYPE: bn | [icnet_cityscapes.tgz](https://paddleseg.bj.bcebos.com/models/icnet6831.tar.gz) | 16 | false | 0.6831 |
# Train / Evaluate / Predict (Visualize)
PaddleSeg provides scripts for four functions: `train`, `evaluate`, `predict (visualize)` and `model export`. All four support enabling specific features through Flags and overriding the default [training configuration](./config.md) through Options, and their usage is very similar:
```shell
@@ -40,12 +42,12 @@ python pdseg/export_model.py ${FLAGS} ${OPTIONS}
See the [training configuration](./config.md) for details.

## Usage Example
The following simple example shows how to start training from a pretrained model provided by PaddleSeg. We pick the unet model pretrained on COCO as the pretrained model and train it on an Oxford-IIIT Pet dataset.
**Note:** For a quick experience we built a mini dataset out of Oxford-IIIT Pet; all following steps use this mini dataset.

### Preparation
Before starting this tutorial, please confirm that the preparation is complete:
1. PaddlePaddle is installed correctly
2. The PaddleSeg dependencies are installed

If anything is unclear, see the [installation guide](./installation.md)
@@ -65,15 +67,14 @@ wget https://paddleseg.bj.bcebos.com/dataset/mini_pet.zip --no-check-certificate
unzip mini_pet.zip
```

### Model Training
For convenience, the configs directory contains the configuration file `unet_pet.yaml` for Oxford-IIIT Pet; point `--cfg` at it to set the training configuration.
We train on GPU card 0, which is selected through the environment variable `CUDA_VISIBLE_DEVICES`.
```
export CUDA_VISIBLE_DEVICES=0
python pdseg/train.py --use_gpu \
                      --do_eval \
                      --use_tb \
@@ -95,12 +96,11 @@ python pdseg/train.py --use_gpu \
> * The example above involves three layers of configuration: the PaddleSeg defaults, unet_pet.yaml, and OPTIONS, with priority OPTIONS > yaml > defaults. This rule applies to train.py, eval.py, vis.py and export_model.py alike.
>
> * If training crashes with an out-of-memory error, lower BATCH_SIZE appropriately. If your GPU has memory to spare, raise BATCH_SIZE for faster training.

### Visualizing the Training Process
With the do_eval and use_tb switches enabled, training progress can be inspected in TensorBoard
```shell
tensorboard --logdir train_log --host {$HOST_IP} --port {$PORT}
```
@@ -141,10 +141,10 @@ python pdseg/vis.py --use_gpu \
2. During training, the images in DATASET.VIS_FILE_LIST are used for visualization, while vis.py uses DATASET.TEST_FILE_LIST.

### Model Export
Once the model meets expectations, export it with export_model.py into a model deployable with the C++ inference library:
```shell
python pdseg/export_model.py --cfg configs/unet_pet.yaml \
                             TEST.TEST_MODEL test/saved_models/unet_pet/final
```
The model is exported to the freeze_model directory. The next step is deployment; see [model deployment](../inference/README.md)
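For a quick Python-side check of the exported model, something like the following works with the fluid 1.x API. This is a hedged sketch; the paths assume the export command above and the default `__model__`/`__params__` file names:

```python
# Hedged sketch: load the exported model from freeze_model with fluid 1.x.
import paddle.fluid as fluid

exe = fluid.Executor(fluid.CPUPlace())
program, feed_names, fetch_targets = fluid.io.load_inference_model(
    dirname="freeze_model", executor=exe,
    model_filename="__model__", params_filename="__params__")
print(feed_names, [t.name for t in fetch_targets])
```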
@@ -36,7 +36,7 @@ if (NOT DEFINED OPENCV_DIR OR ${OPENCV_DIR} STREQUAL "")
endif()

include_directories("${CMAKE_SOURCE_DIR}/")
include_directories("${CMAKE_CURRENT_BINARY_DIR}/ext/yaml-cpp/src/ext-yaml-cpp/include")
include_directories("${PADDLE_DIR}/")
include_directories("${PADDLE_DIR}/third_party/install/protobuf/include")
include_directories("${PADDLE_DIR}/third_party/install/glog/include")
@@ -82,7 +82,7 @@ if (WIN32)
        add_definitions(-DSTATIC_LIB)
    endif()
else()
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O2 -std=c++11")
    set(CMAKE_STATIC_LIBRARY_PREFIX "")
endif()
@@ -160,14 +160,13 @@ if (NOT WIN32)
    set(EXTERNAL_LIB "-lrt -ldl -lpthread")
    set(DEPS ${DEPS}
        ${MATH_LIB} ${MKLDNN_LIB}
        glog gflags protobuf yaml-cpp snappystream snappy z xxhash
        ${EXTERNAL_LIB})
else()
    set(DEPS ${DEPS}
        ${MATH_LIB} ${MKLDNN_LIB}
        opencv_world346 glog libyaml-cppmt gflags_static libprotobuf snappy zlibstatic xxhash snappystream ${EXTERNAL_LIB})
    set(DEPS ${DEPS} libcmt shlwapi)
endif(NOT WIN32)

if(WITH_GPU)
@@ -206,13 +205,17 @@ ADD_LIBRARY(libpaddleseg_inference STATIC ${PADDLESEG_INFERENCE_SRCS})
target_link_libraries(libpaddleseg_inference ${DEPS})

add_executable(demo demo.cpp)
ADD_DEPENDENCIES(libpaddleseg_inference ext-yaml-cpp)
ADD_DEPENDENCIES(demo ext-yaml-cpp libpaddleseg_inference)
target_link_libraries(demo ${DEPS} libpaddleseg_inference)
if (WIN32)
    add_custom_command(TARGET demo POST_BUILD
        COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./mklml.dll
        COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./libiomp5md.dll
        COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./mkldnn.dll
        COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./release/mklml.dll
        COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./release/libiomp5md.dll
        COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./release/mkldnn.dll
    )
endif()
@@ -4,132 +4,133 @@
This directory provides a cross-platform C++ inference deployment solution for image segmentation models. With a small amount of configuration and code, users can integrate a model into their own services and run image segmentation tasks.

The main design goals are:
- Cross-platform: builds, develops and deploys on both Windows and Linux
- Support for mainstream image segmentation tasks: with little configuration, users can load a model and run common prediction tasks such as portrait segmentation
- Extensibility: users can implement their own data pre- and post-processing logic for new models
- High performance: beyond the performance advantages of `PaddlePaddle` itself, key steps are optimized for the characteristics of image segmentation

## Main Directories and Files
```
inference
├── demo.cpp # C++ demo that loads a model, reads data and runs prediction
|
├── conf
│ └── humanseg.yaml # example portrait-segmentation model configuration
├── images
│ └── humanseg # test images for the example portrait-segmentation model
├── tools
│ └── visualize.py # visualization script for the example portrait-segmentation results
├── docs
| ├── linux_build.md # Linux build guide
| ├── windows_vs2015_build.md # Windows VS2015 build guide
│ └── windows_vs2019_build.md # Windows VS2019 build guide
|
├── utils # basic utility functions
|
├── preprocess # data preprocessing code
|
├── predictor # model loading and prediction code
|
├── CMakeList.txt # cmake entry file
|
└── external-cmake # cmake for external dependencies (currently only yaml-cpp)
```

## Build
Building and running are supported on `Windows` and `Linux`:
- [Linux build guide](./docs/linux_build.md)
- [Windows build guide with Visual Studio 2019 Community](./docs/windows_vs2019_build.md)
- [Windows build guide with Visual Studio 2015](./docs/windows_vs2015_build.md)

On `Windows`, we recommend building the `CMake` project directly with the latest `Visual Studio 2019 Community`.

## Predict and Visualize the Results

After the build finishes, the required executables and libraries have been generated; then follow these steps:

### 1. Download the model files

We provide an example portrait-segmentation model for testing; download it here: [example model download](https://paddleseg.bj.bcebos.com/inference_model/deeplabv3p_xception65_humanseg.tgz)

Download and extract it; the extracted directory looks like:
```
deeplabv3p_xception65_humanseg
├── __model__ # model file
|
└── __params__ # parameter file
```
After extraction, copy the directory to a suitable location:
**Assume** that on `Windows` the model and parameter files live at `D:\projects\models\deeplabv3p_xception65_humanseg`.
**Assume** that on `Linux` the corresponding path is `/root/projects/models/deeplabv3p_xception65_humanseg`.

### 2. Edit the configuration

The `conf` directory of the source tree contains the configuration file `humanseg.yaml` for the example portrait-segmentation model; its fields mean the following:
```yaml
DEPLOY:
    # whether to predict on GPU
    USE_GPU: 1
    # directory holding the model and parameter files
    MODEL_PATH: "/root/projects/models/deeplabv3p_xception65_humanseg"
    # model file name
    MODEL_FILENAME: "__model__"
    # parameter file name
    PARAMS_FILENAME: "__params__"
    # standard input size for prediction; inputs of other sizes are resized
    EVAL_CROP_SIZE: (513, 513)
    # mean
    MEAN: [104.008, 116.669, 122.675]
    # std
    STD: [1.0, 1.0, 1.0]
    # image type, rgb or rgba
    IMAGE_TYPE: "rgb"
    # number of classes
    NUM_CLASSES: 2
    # number of image channels
    CHANNELS : 3
    # preprocessor; currently the generic segmentation class SegPreProcessor
    PRE_PROCESSOR: "SegPreProcessor"
    # prediction mode, NATIVE or ANALYSIS
    PREDICTOR_MODE: "ANALYSIS"
    # batch_size per prediction
    BATCH_SIZE : 3
```
Set the field `MODEL_PATH` to the directory where you placed the model files downloaded and extracted in the **previous step**.
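Before running the demo, it can help to sanity-check the edited configuration. The following is a minimal sketch (not part of the project) that loads the file with PyYAML and verifies the fields described above are present:

```python
# Hedged sketch: verify a DEPLOY config contains the documented fields.
import yaml

REQUIRED = ["USE_GPU", "MODEL_PATH", "MODEL_FILENAME", "PARAMS_FILENAME",
            "EVAL_CROP_SIZE", "MEAN", "STD", "IMAGE_TYPE", "NUM_CLASSES",
            "CHANNELS", "PRE_PROCESSOR", "PREDICTOR_MODE", "BATCH_SIZE"]

with open("conf/humanseg.yaml") as f:
    deploy = yaml.safe_load(f)["DEPLOY"]

missing = [k for k in REQUIRED if k not in deploy]
assert not missing, "missing DEPLOY keys: %s" % missing
# MEAN/STD lengths must match the channel count (3 for rgb, 4 for rgba).
assert len(deploy["MEAN"]) == len(deploy["STD"]) == deploy["CHANNELS"]
```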
### 3. Run the prediction

In a terminal, change into the directory containing the generated executable (on Windows, use `cmd`).

On `Linux`, run:
```shell
./demo --conf=/root/projects/PaddleSeg/inference/conf/humanseg.yaml --input_dir=/root/projects/PaddleSeg/inference/images/humanseg/
```
On `Windows`, run:
```shell
D:\projects\PaddleSeg\inference\build\Release>demo.exe --conf=D:\\projects\\PaddleSeg\\inference\\conf\\humanseg.yaml --input_dir=D:\\projects\\PaddleSeg\\inference\\images\humanseg\\
```
The two command-line parameters are:

| Parameter | Meaning |
|-------|----------|
| conf | path of the model's YAML configuration file |
| input_dir | directory of images to predict |

See the previous step for the configuration file. The demo scans all images under input_dir and writes a prediction image for each:
for a file `demo.jpg`, the prediction is stored in `demo_jpg.png`, the visualization in `demo_jpg_scoremap.png`, and the prediction restored to the original size in `demo_jpg_recover.png`.

Input image
![avatar](images/humanseg/demo2.jpeg)

Output prediction
![avatar](images/humanseg/demo2_jpeg_recover.png)
DEPLOY:
    USE_GPU: 1
    MODEL_PATH: "/root/projects/models/deeplabv3p_xception65_humanseg"
    MODEL_NAME: "unet"
    MODEL_FILENAME: "__model__"
    PARAMS_FILENAME: "__params__"
@@ -11,5 +11,5 @@ DEPLOY:
    NUM_CLASSES: 2
    CHANNELS : 3
    PRE_PROCESSOR: "SegPreProcessor"
    PREDICTOR_MODE: "NATIVE"
    BATCH_SIZE : 3
@@ -21,7 +21,6 @@ int main(int argc, char** argv) {
    // 2. get all the images with extension '.jpeg' at input_dir
    auto imgs = PaddleSolution::utils::get_directory_images(FLAGS_input_dir, ".jpeg|.jpg");
    // 3. predict
    predictor.predict(imgs);
    return 0;
...
# Linux Build Guide

## Notes
This document has been tested on `Linux` with `GCC 4.8.5` and `GCC 4.9.4`. To build with a newer G++ you must rebuild the Paddle inference library from source; see: [Build the Paddle inference library from source](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/advanced_usage/deploy/inference/build_and_install_lib_cn.html#id15)

## Prerequisites
* G++ 4.8.2 ~ 4.9.4
* CUDA 8.0 / CUDA 9.0
* CMake 3.0+

Make sure the above software is installed. **All examples below use `/root/projects/` as the working directory.**

### Step1: Download the code

1. `mkdir -p /root/projects/ && cd /root/projects`
2. `git clone https://github.com/PaddlePaddle/PaddleSeg.git`

The `C++` inference code lives in `/root/projects/PaddleSeg/inference`; it does not depend on any other directory of `PaddleSeg`.

### Step2: Download the PaddlePaddle C++ inference library fluid_inference

Currently only `CUDA 8` and `CUDA 9` are supported; download the matching version from the [PaddlePaddle inference library download page](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/advanced_usage/deploy/inference/build_and_install_lib_cn.html).

After downloading and extracting, `/root/projects/fluid_inference` contains:
```
fluid_inference
├── paddle # paddle core libraries and headers
|
├── third_party # third-party libraries and headers
|
└── version.txt # version and build information
```

### Step3: Install and build OpenCV

```shell
# 0. switch to /root/projects
cd /root/projects
# 1. download the OpenCV 3.4.6 sources
wget -c https://paddleseg.bj.bcebos.com/inference/opencv-3.4.6.zip
# 2. extract
unzip opencv-3.4.6.zip && cd opencv-3.4.6
# 3. create a build directory and build; install into /root/projects/opencv3
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=/root/projects/opencv3 -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF -DWITH_ZLIB=ON -DBUILD_ZLIB=ON -DWITH_JPEG=ON -DBUILD_JPEG=ON -DWITH_PNG=ON -DBUILD_PNG=ON -DWITH_TIFF=ON -DBUILD_TIFF=ON
make -j4
make install
```
**Note:** after the steps above, `opencv` is installed under `/root/projects/opencv3`.

### Step4: Build

The `CMake` build takes four parameters that point to the core dependencies:

| Parameter | Meaning |
| ---- | ---- |
| CUDA_LIB | CUDA library path |
| CUDNN_LIB | cuDNN library path |
| OPENCV_DIR | OpenCV installation path |
| PADDLE_DIR | Paddle inference library path |

When running the commands below, **be sure** to replace the parameters with the actual paths of your dependencies:

```shell
cd /root/projects/PaddleSeg/inference

mkdir build && cd build
cmake .. -DWITH_GPU=ON -DPADDLE_DIR=/root/projects/fluid_inference -DCUDA_LIB=/usr/local/cuda/lib64/ -DOPENCV_DIR=/root/projects/opencv3/ -DCUDNN_LIB=/usr/local/cuda/lib64/
make
```

### Step5: Predict and visualize

Run:
```
./demo --conf=/path/to/your/conf --input_dir=/path/to/your/input/data/directory
```
For details see the README: [prediction and visualization](../README.md)
# Windows Build Guide with Visual Studio 2015

The steps in this document have been tested with both `Visual Studio 2015` and `Visual Studio 2019 Community`; we recommend [building the `CMake` project directly with `Visual Studio 2019`](./windows_vs2019_build.md).

## Prerequisites
* Visual Studio 2015
* CUDA 8.0 / CUDA 9.0
* CMake 3.0+

Make sure the above software is installed. **All examples below use `D:\projects` as the working directory.**

### Step1: Download the code

1. Open `cmd` and run `cd /d D:\projects`
2. `git clone http://gitlab.baidu.com/Paddle/PaddleSeg.git`

The `C++` inference code lives in `D:\projects\PaddleSeg\inference`; it does not depend on any other directory of `PaddleSeg`.

### Step2: Download the PaddlePaddle C++ inference library fluid_inference

Download the PaddlePaddle inference library matching your Windows environment and extract it into `D:\projects\`:

| CUDA | GPU | Download |
|------|------|--------|
| 8.0 | Yes | [fluid_inference.zip](https://bj.bcebos.com/v1/paddleseg/fluid_inference_win.zip) |
| 9.0 | Yes | [fluid_inference_cuda90.zip](https://paddleseg.bj.bcebos.com/fluid_inference_cuda9_cudnn7.zip) |

After extraction, `D:\projects\fluid_inference` contains:
```
fluid_inference
├── paddle # paddle core libraries and headers
|
├── third_party # third-party libraries and headers
|
└── version.txt # version and build information
```

### Step3: Install and configure OpenCV

1. Download OpenCV 3.4.6 for Windows from the official site: [download link](https://sourceforge.net/projects/opencvlibrary/files/3.4.6/opencv-3.4.6-vc14_vc15.exe/download)
2. Run the downloaded executable and extract OpenCV to a directory of your choice, e.g. `D:\projects\opencv`
3. Configure the environment variable:
    - My Computer -> Properties -> Advanced system settings -> Environment Variables
    - Find Path in the system variables (create it if missing) and double-click to edit
    - Add the opencv path and save, e.g. `D:\projects\opencv\build\x64\vc14\bin`

### Step4: Build the code, using VS2015 as an example

Adjust the paths in the commands below to the dependencies on your own system.

* Set up the VS2015 environment by running the following in a cmd window (adjust to your actual VS installation path):
* For other VS versions (e.g. VS2019), locate that version's `vcvarsall.bat` and substitute it in this command.

```
call "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" amd64
```

* Build the project with CMake:
    * PADDLE_DIR: the fluid_inference library path
    * CUDA_LIB: the CUDA dynamic library directory; adjust to your installation
    * OPENCV_DIR: the OpenCV extraction directory

```
# switch to the inference code directory
cd /d D:\projects\PaddleSeg\inference
# create the build directory; to rebuild from scratch, just delete it
mkdir build
cd build
# generate the VS project with cmake
D:\projects\PaddleSeg\inference\build> cmake .. -G "Visual Studio 14 2015 Win64" -DWITH_GPU=ON -DPADDLE_DIR=D:\projects\fluid_inference -DCUDA_LIB=D:\projects\cudalib\v8.0\lib\x64 -DOPENCV_DIR=D:\projects\opencv -T host=x64
```

The `cmake` parameter `-G` selects the VS version to generate for; adjust it to your `VS` version, see the [cmake documentation](https://cmake.org/cmake/help/v3.15/manual/cmake-generators.7.html)

* Build the executable:

```
D:\projects\PaddleSeg\inference\build> msbuild /m /p:Configuration=Release cpp_inference_demo.sln
```

### Step5: Predict and visualize

The executables produced by the `Visual Studio 2015` build are in the `build\release` directory; switch to it:
```
cd /d D:\projects\PaddleSeg\inference\build\release
```
Then run:
```
demo.exe --conf=/path/to/your/conf --input_dir=/path/to/your/input/data/directory
```
For details see the README: [prediction and visualization](../README.md)
# Visual Studio 2019 Community CMake Build Guide

On Windows we tested with `Visual Studio 2015` and `Visual Studio 2019 Community`. Microsoft has supported managing `CMake` cross-platform projects directly since `Visual Studio 2017`, but stable and complete support only arrived in `2019`, so if you want to build with CMake-managed projects we recommend `Visual Studio 2019`.

You can also build by converting the `CMake` project into a `VS` project, as with `VS2015`; the **differences** are noted in that document, see: [Build guide with Visual Studio 2015](./windows_vs2015_build.md)

## Prerequisites
* Visual Studio 2019
* CUDA 8.0 / CUDA 9.0
* CMake 3.0+

Make sure the above software is installed; we use the Community edition of `VS2019`.

**All examples below use `D:\projects` as the working directory.**

### Step1: Download the code

1. Download the source code: [download link](https://github.com/PaddlePaddle/PaddleSeg/archive/master.zip)
2. Extract it and rename the directory to `PaddleSeg`

The examples below use `D:\projects\PaddleSeg` as the code directory.

### Step2: Download the PaddlePaddle C++ inference library fluid_inference

Download the PaddlePaddle inference library matching your Windows environment and extract it into `D:\projects\`:

| CUDA | GPU | Download |
|------|------|--------|
| 8.0 | Yes | [fluid_inference.zip](https://bj.bcebos.com/v1/paddleseg/fluid_inference_win.zip) |
| 9.0 | Yes | [fluid_inference_cuda90.zip](https://paddleseg.bj.bcebos.com/fluid_inference_cuda9_cudnn7.zip) |

After extraction, `D:\projects\fluid_inference` contains:
```
fluid_inference
├── paddle # paddle core libraries and headers
|
├── third_party # third-party libraries and headers
|
└── version.txt # version and build information
```
**Note:** the `CUDA90` archive extracts to a directory named `fluid_inference_cuda90`.

### Step3: Install and configure OpenCV

1. Download OpenCV 3.4.6 for Windows from the official site: [download link](https://sourceforge.net/projects/opencvlibrary/files/3.4.6/opencv-3.4.6-vc14_vc15.exe/download)
2. Run the downloaded executable and extract OpenCV to a directory of your choice, e.g. `D:\projects\opencv`
3. Configure the environment variable:
    - My Computer -> Properties -> Advanced system settings -> Environment Variables
    - Find Path in the system variables (create it if missing) and double-click to edit
    - Add the opencv path and save, e.g. `D:\projects\opencv\build\x64\vc14\bin`

### Step4: Build the CMake project directly with Visual Studio 2019

1. Open Visual Studio 2019 Community and click `Continue without code`
![step2](https://paddleseg.bj.bcebos.com/inference/vs2019_step1.png)
2. Click `File` -> `Open` -> `CMake`
![step2.1](https://paddleseg.bj.bcebos.com/inference/vs2019_step2.png)
Select the path of the project code and open `CMakeList.txt`:
![step2.2](https://paddleseg.bj.bcebos.com/inference/vs2019_step3.png)
3. Click `Project` -> `CMake settings for cpp_inference_demo`
![step3](https://paddleseg.bj.bcebos.com/inference/vs2019_step4.png)
4. Click `Browse` and set the build options for the `CUDA`, `OpenCV` and `Paddle inference library` paths
![step4](https://paddleseg.bj.bcebos.com/inference/vs2019_step5.png)
The three build parameters are:

| Parameter | Meaning |
| ---- | ---- |
| CUDA_LIB | CUDA library path |
| OPENCV_DIR | OpenCV installation path |
| PADDLE_DIR | Paddle inference library path |

**After setting them**, click `Save and generate CMake cache to load variables` in the figure above.
5. Click `Build` -> `Build All`
![step6](https://paddleseg.bj.bcebos.com/inference/vs2019_step6.png)

### Step5: Predict and visualize

The executables produced by the `Visual Studio 2019` build are in the `out\build\x64-Release` directory; open `cmd` and switch to it:
```
cd /d D:\projects\PaddleSeg\inference\out\build\x64-Release
```
Then run:
```
demo.exe --conf=/path/to/your/conf --input_dir=/path/to/your/input/data/directory
```
For details see the README: [prediction and visualization](../ReadMe.md)
@@ -6,7 +6,7 @@ include(ExternalProject)
message("${CMAKE_BUILD_TYPE}")

ExternalProject_Add(
    ext-yaml-cpp
    GIT_REPOSITORY https://github.com/jbeder/yaml-cpp.git
    GIT_TAG e0e01d53c27ffee6c86153fa41e7f5e57d3e5c90
    CMAKE_ARGS
...
@@ -125,6 +125,10 @@ namespace PaddleSolution {

    int Predictor::native_predict(const std::vector<std::string>& imgs)
    {
        if (imgs.size() == 0) {
            LOG(ERROR) << "No image found";
            return -1;
        }
        int config_batch_size = _model_config._batch_size;
        int channels = _model_config._channels;
@@ -205,6 +209,11 @@ namespace PaddleSolution {

    int Predictor::analysis_predict(const std::vector<std::string>& imgs) {

        if (imgs.size() == 0) {
            LOG(ERROR) << "No image found";
            return -1;
        }
        int config_batch_size = _model_config._batch_size;
        int channels = _model_config._channels;
        int eval_width = _model_config._resize[0];
...
@@ -42,7 +42,7 @@ namespace PaddleSolution {
                for (int c = 0; c < channels; ++c) {
                    int top_index = (c * rh + h) * rw + w;
                    float pixel = static_cast<float>(ptr[im_index++]);
                    pixel = (pixel / 255 - pmean[c]) / pscale[c];
                    data[top_index] = pixel;
                }
            }
...
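The change above scales pixels to [0, 1] before mean/std normalization, so configurations using this code need MEAN/STD expressed on that scale. A NumPy sketch of the new per-pixel formula, for illustration only and with example values:

```python
# NumPy equivalent of the updated C++ line (example pmean/pscale values).
import numpy as np

pmean = np.array([0.5, 0.5, 0.5], np.float32)
pscale = np.array([0.5, 0.5, 0.5], np.float32)
pixel = np.float32(128.0)  # one raw 8-bit channel value
out = (pixel / 255 - pmean) / pscale  # matches (pixel / 255 - pmean[c]) / pscale[c]
```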
@@ -28,7 +28,6 @@ namespace PaddleSolution {
            _channels = 0;
            _use_gpu = 0;
            _batch_size = 1;
            _model_file_name.clear();
            _model_path.clear();
            _param_file_name.clear();
@@ -79,8 +78,6 @@ namespace PaddleSolution {
            _img_type = config["DEPLOY"]["IMAGE_TYPE"].as<std::string>();
            // 5. get class number
            _class_num = config["DEPLOY"]["NUM_CLASSES"].as<int>();
            // 7. set model path
            _model_path = config["DEPLOY"]["MODEL_PATH"].as<std::string>();
            // 8. get model file_name
@@ -129,7 +126,6 @@ namespace PaddleSolution {
            std::cout << "DEPLOY.NUM_CLASSES: " << _class_num << std::endl;
            std::cout << "DEPLOY.CHANNELS: " << _channels << std::endl;
            std::cout << "DEPLOY.MODEL_PATH: " << _model_path << std::endl;
            std::cout << "DEPLOY.MODEL_FILENAME: " << _model_file_name << std::endl;
            std::cout << "DEPLOY.PARAMS_FILENAME: " << _param_file_name << std::endl;
            std::cout << "DEPLOY.PRE_PROCESSOR: " << _pre_processor << std::endl;
@@ -152,8 +148,6 @@ namespace PaddleSolution {
            int _channels;
            // DEPLOY.MODEL_PATH
            std::string _model_path;
            // DEPLOY.MODEL_FILENAME
            std::string _model_file_name;
            // DEPLOY.PARAMS_FILENAME
...
@@ -3,7 +3,13 @@
#include <iostream>
#include <vector>
#include <string>

#ifdef _WIN32
#include <filesystem>
#else
#include <dirent.h>
#include <sys/types.h>
#include <cstring>  // for strrchr
#endif

namespace PaddleSolution {
    namespace utils {
@@ -14,7 +20,31 @@ namespace PaddleSolution {
            #endif
            return dir + seperator + path;
        }

        #ifndef _WIN32
        // scan a directory and get all files with input extensions
        inline std::vector<std::string> get_directory_images(const std::string& path, const std::string& exts)
        {
            std::vector<std::string> imgs;
            struct dirent *entry;
            DIR *dir = opendir(path.c_str());
            if (dir == NULL) {
                // opendir failed; there is no handle to close
                return imgs;
            }
            while ((entry = readdir(dir)) != NULL) {
                std::string item = entry->d_name;
                auto ext = strrchr(entry->d_name, '.');
                // skip entries without an extension as well as "." and ".."
                if (!ext || std::string(ext) == "." || std::string(ext) == "..") {
                    continue;
                }
                if (exts.find(ext) != std::string::npos) {
                    imgs.push_back(path_join(path, entry->d_name));
                }
            }
            closedir(dir);  // release the directory handle
            return imgs;
        }
        #else
        // scan a directory and get all files with input extensions
        inline std::vector<std::string> get_directory_images(const std::string& path, const std::string& exts)
        {
@@ -28,5 +58,6 @@ namespace PaddleSolution {
            }
            return imgs;
        }
        #endif
    }
}
...@@ -12,28 +12,45 @@ import argparse ...@@ -12,28 +12,45 @@ import argparse
import cv2 import cv2
from tqdm import tqdm from tqdm import tqdm
import imghdr import imghdr
import logging
from utils.config import cfg from utils.config import cfg
def init_global_variable(): def init_global_variable():
""" """
初始化全局变量 初始化全局变量
""" """
global png_format_right_num # 格式错误的标签图数量 global png_format_right_num # 格式正确的标注图数量
global png_format_wrong_num # 格式错误的标图数量 global png_format_wrong_num # 格式错误的标图数量
global total_grt_classes # 总的标类别 global total_grt_classes # 总的标类别
global total_num_of_each_class # 每个类别总的像素数 global total_num_of_each_class # 每个类别总的像素数
global shape_unequal # 图片和标签shape不一致 global shape_unequal_image # 图片和标注shape不一致列表
global png_format_wrong # 标签格式错误 global png_format_wrong_image # 标注格式错误列表
global max_width # 图片最长宽
global max_height # 图片最长高
global min_aspectratio # 图片最小宽高比
global max_aspectratio # 图片最大宽高比
global img_dim # 图片的通道数
global list_wrong #文件名格式错误列表
global imread_failed #图片读取失败列表, 二元列表
global label_wrong # 标注图片出错列表
global label_gray_wrong # 标注图非灰度图列表
png_format_right_num = 0 png_format_right_num = 0
png_format_wrong_num = 0 png_format_wrong_num = 0
total_grt_classes = [] total_grt_classes = []
total_num_of_each_class = [] total_num_of_each_class = []
shape_unequal = [] shape_unequal_image = []
png_format_wrong = [] png_format_wrong_image = []
max_width = 0
max_height = 0
min_aspectratio = sys.float_info.max
max_aspectratio = 0
img_dim = []
list_wrong = []
imread_failed = []
label_wrong = []
label_gray_wrong = []
def parse_args(): def parse_args():
parser = argparse.ArgumentParser(description='PaddleSeg check') parser = argparse.ArgumentParser(description='PaddleSeg check')
...@@ -42,46 +59,99 @@ def parse_args(): ...@@ -42,46 +59,99 @@ def parse_args():
dest='cfg_file', dest='cfg_file',
help='Config file for training (and optionally testing)', help='Config file for training (and optionally testing)',
default=None, default=None,
type=str) type=str
)
return parser.parse_args() return parser.parse_args()
def error_print(str):
return "".join(["\nNOT PASS ", str])
def correct_print(str):
return "".join(["\nPASS ", str])
def cv2_imread(file_path, flag=cv2.IMREAD_COLOR): def cv2_imread(file_path, flag=cv2.IMREAD_COLOR):
# resolve cv2.imread open Chinese file path issues on Windows Platform. """
解决 cv2.imread 在window平台打开中文路径的问题.
"""
return cv2.imdecode(np.fromfile(file_path, dtype=np.uint8), flag) return cv2.imdecode(np.fromfile(file_path, dtype=np.uint8), flag)
def get_image_max_height_width(img):
def get_image_max_height_width(img, max_height, max_width): """获取图片最大宽和高"""
global max_width, max_height
img_shape = img.shape img_shape = img.shape
height, width = img_shape[0], img_shape[1] height, width = img_shape[0], img_shape[1]
max_height = max(height, max_height) max_height = max(height, max_height)
max_width = max(width, max_width) max_width = max(width, max_width)
return max_height, max_width
def get_image_min_max_aspectratio(img, min_aspectratio, max_aspectratio): def get_image_min_max_aspectratio(img):
"""计算图片最大宽高比"""
global min_aspectratio, max_aspectratio
img_shape = img.shape img_shape = img.shape
height, width = img_shape[0], img_shape[1] height, width = img_shape[0], img_shape[1]
min_aspectratio = min(width / height, min_aspectratio) min_aspectratio = min(width/height, min_aspectratio)
max_aspectratio = max(width / height, max_aspectratio) max_aspectratio = max(width/height, max_aspectratio)
return min_aspectratio, max_aspectratio return min_aspectratio, max_aspectratio
def get_image_dim(img):
def get_image_dim(img, img_dim): """获取图像的通道数"""
"""获取图像的维度"""
img_shape = img.shape img_shape = img.shape
if img_shape[-1] not in img_dim: if img_shape[-1] not in img_dim:
img_dim.append(img_shape[-1]) img_dim.append(img_shape[-1])
def is_label_gray(grt):
"""判断标签是否为灰度图"""
grt_shape = grt.shape
if len(grt_shape) == 2:
return True
else:
return False
def sum_gt_check(png_format, grt_classes, num_of_each_class): def image_label_shape_check(img, grt):
"""
验证图像和标注的大小是否匹配
"""
flag = True
img_height = img.shape[0]
img_width = img.shape[1]
grt_height = grt.shape[0]
grt_width = grt.shape[1]
if img_height != grt_height or img_width != grt_width:
flag = False
return flag
def ground_truth_check(grt, grt_path):
""" """
统计所有标签图上的格式、类别和每个类别的像素数 验证标注图像的格式
统计标注图类别和像素数
params: params:
grt: 标注图
grt_path: 标注图路径
return:
png_format: 返回是否是png格式图片 png_format: 返回是否是png格式图片
grt_classes: 标签类别 unique: 返回标注类别
counts: 返回标注的像素数
"""
if imghdr.what(grt_path) == "png":
png_format = True
else:
png_format = False
unique, counts = np.unique(grt, return_counts=True)
return png_format, unique, counts
def sum_gt_check(png_format, grt_classes, num_of_each_class):
"""
统计所有标注图上的格式、类别和每个类别的像素数
params:
png_format: 是否是png格式图片
grt_classes: 标注类别
num_of_each_class: 各个类别的像素数目 num_of_each_class: 各个类别的像素数目
""" """
is_label_correct = True
global png_format_right_num, png_format_wrong_num, total_grt_classes, total_num_of_each_class global png_format_right_num, png_format_wrong_num, total_grt_classes, total_num_of_each_class
if png_format: if png_format:
...@@ -90,12 +160,11 @@ def sum_gt_check(png_format, grt_classes, num_of_each_class): ...@@ -90,12 +160,11 @@ def sum_gt_check(png_format, grt_classes, num_of_each_class):
png_format_wrong_num += 1 png_format_wrong_num += 1
if cfg.DATASET.IGNORE_INDEX in grt_classes: if cfg.DATASET.IGNORE_INDEX in grt_classes:
grt_classes2 = np.delete( grt_classes2 = np.delete(grt_classes, np.where(grt_classes == cfg.DATASET.IGNORE_INDEX))
grt_classes, np.where(grt_classes == cfg.DATASET.IGNORE_INDEX)) else:
grt_classes2 = grt_classes
if min(grt_classes2) < 0 or max(grt_classes2) > cfg.DATASET.NUM_CLASSES - 1: if min(grt_classes2) < 0 or max(grt_classes2) > cfg.DATASET.NUM_CLASSES - 1:
print("fatal error: label class is out of range [0, {}]".format( is_label_correct = False
cfg.DATASET.NUM_CLASSES - 1))
add_class = [] add_class = []
add_num = [] add_num = []
for i in range(len(grt_classes)): for i in range(len(grt_classes)):
...@@ -108,145 +177,113 @@ def sum_gt_check(png_format, grt_classes, num_of_each_class): ...@@ -108,145 +177,113 @@ def sum_gt_check(png_format, grt_classes, num_of_each_class):
add_num.append(num_of_each_class[i]) add_num.append(num_of_each_class[i])
total_num_of_each_class += add_num total_num_of_each_class += add_num
total_grt_classes += add_class total_grt_classes += add_class
return is_label_correct
def gt_check(): def gt_check():
""" """
对标签进行校验,输出校验结果 对标注图像进行校验,输出校验结果
params:
png_format_wrong_num: 格式错误的标签图数量
png_format_right_num: 格式正确的标签图数量
total_grt_classes: 总的标签类别
total_num_of_each_class: 每个类别总的像素数目
return:
total_nc: 按升序排序后的总标签类别和像素数目
""" """
if png_format_wrong_num == 0: if png_format_wrong_num == 0:
print("Not pass label png format check!") if png_format_right_num:
logger.info(correct_print("label format check"))
else: else:
print("Pass label png format check!") logger.info(error_print("label format check"))
print( logger.info("No label image to check")
"total {} label imgs are png format, {} label imgs are not png fromat". return
format(png_format_right_num, png_format_wrong_num))
total_nc = sorted(zip(total_grt_classes, total_num_of_each_class))
print("total label calsses and their corresponding numbers:\n{} ".format(
total_nc))
if total_nc[0][0]:
print(
"Not pass label class check!\nWarning: label classes should start from 0 !!!"
)
else: else:
print("Pass label class check!") logger.info(error_print("label format check"))
logger.info("total {} label images are png format, {} label images are not png "
"format".format(png_format_right_num, png_format_wrong_num))
if len(png_format_wrong_image) > 0:
for i in png_format_wrong_image:
logger.debug(i)
def ground_truth_check(grt, grt_path): total_nc = sorted(zip(total_grt_classes, total_num_of_each_class))
""" logger.info("\nDoing label pixel statistics...\nTotal label classes "
验证标签是否重零开始,标签值为0,1,...,num_classes-1, ingnore_idx "and their corresponding numbers:\n{} ".format(total_nc))
验证标签图像的格式
返回标签的像素数
检查图像是否都是ignore_index
params:
grt: 标签图
grt_path: 标签图路径
return:
png_format: 返回是否是png格式图片
label_correct: 返回标签是否是正确的
label_pixel_num: 返回标签的像素数
"""
if imghdr.what(grt_path) == "png":
png_format = True
else:
png_format = False
unique, counts = np.unique(grt, return_counts=True) if len(label_wrong) == 0 and not total_nc[0][0]:
logger.info(correct_print("label class check!"))
else:
logger.info(error_print("label class check!"))
if total_nc[0][0]:
logger.info("Warning: label classes should start from 0")
if len(label_wrong) > 0:
logger.info("fatal error: label class is out of range [0, {}]".format(cfg.DATASET.NUM_CLASSES - 1))
for i in label_wrong:
logger.debug(i)
return png_format, unique, counts
def eval_crop_size_check(max_height, max_width, min_aspectratio, max_aspectratio):
    """
    Check EVAL_CROP_SIZE against the max height/width of the val and test sets.

    Args:
        max_height: maximum image height in the dataset
        max_width: maximum image width in the dataset
    """
    if cfg.AUG.AUG_METHOD == "stepscaling":
        if max_width <= cfg.EVAL_CROP_SIZE[0] and max_height <= cfg.EVAL_CROP_SIZE[1]:
            logger.info(correct_print("EVAL_CROP_SIZE check"))
        else:
            logger.info(error_print("EVAL_CROP_SIZE check"))
            if max_width > cfg.EVAL_CROP_SIZE[0]:
                logger.info("The EVAL_CROP_SIZE[0]: {} should be no smaller than the max width of the images: {}!".format(
                    cfg.EVAL_CROP_SIZE[0], max_width))
            if max_height > cfg.EVAL_CROP_SIZE[1]:
                logger.info("The EVAL_CROP_SIZE[1]: {} should be no smaller than the max height of the images: {}!".format(
                    cfg.EVAL_CROP_SIZE[1], max_height))
    elif cfg.AUG.AUG_METHOD == "rangescaling":
        if min_aspectratio <= 1 and max_aspectratio >= 1:
            if cfg.EVAL_CROP_SIZE[0] >= cfg.AUG.INF_RESIZE_VALUE and cfg.EVAL_CROP_SIZE[1] >= cfg.AUG.INF_RESIZE_VALUE:
                logger.info(correct_print("EVAL_CROP_SIZE check"))
            else:
                logger.info(error_print("EVAL_CROP_SIZE check"))
                logger.info("EVAL_CROP_SIZE: ({},{}) must be no smaller than the image size ({},{})"
                            .format(cfg.EVAL_CROP_SIZE[0], cfg.EVAL_CROP_SIZE[1],
                                    cfg.AUG.INF_RESIZE_VALUE, cfg.AUG.INF_RESIZE_VALUE))
        elif min_aspectratio > 1:
            max_height_rangscaling = cfg.AUG.INF_RESIZE_VALUE / min_aspectratio
            max_height_rangscaling = round(max_height_rangscaling)
            if cfg.EVAL_CROP_SIZE[0] >= cfg.AUG.INF_RESIZE_VALUE and cfg.EVAL_CROP_SIZE[1] >= max_height_rangscaling:
                logger.info(correct_print("EVAL_CROP_SIZE check"))
            else:
                logger.info(error_print("EVAL_CROP_SIZE check"))
                logger.info("EVAL_CROP_SIZE: ({},{}) must be no smaller than the image size ({},{})"
                            .format(cfg.EVAL_CROP_SIZE[0], cfg.EVAL_CROP_SIZE[1],
                                    cfg.AUG.INF_RESIZE_VALUE, max_height_rangscaling))
        elif max_aspectratio < 1:
            max_width_rangscaling = cfg.AUG.INF_RESIZE_VALUE * max_aspectratio
            max_width_rangscaling = round(max_width_rangscaling)
            if cfg.EVAL_CROP_SIZE[0] >= max_width_rangscaling and cfg.EVAL_CROP_SIZE[1] >= cfg.AUG.INF_RESIZE_VALUE:
                logger.info(correct_print("EVAL_CROP_SIZE check"))
            else:
                logger.info(error_print("EVAL_CROP_SIZE check"))
                logger.info("EVAL_CROP_SIZE: ({},{}) must be no smaller than the image size ({},{})"
                            .format(cfg.EVAL_CROP_SIZE[0], cfg.EVAL_CROP_SIZE[1],
                                    max_width_rangscaling, cfg.AUG.INF_RESIZE_VALUE))
    elif cfg.AUG.AUG_METHOD == "unpadding":
        if cfg.EVAL_CROP_SIZE[0] >= cfg.AUG.FIX_RESIZE_SIZE[0] and cfg.EVAL_CROP_SIZE[1] >= cfg.AUG.FIX_RESIZE_SIZE[1]:
            logger.info(correct_print("EVAL_CROP_SIZE check"))
        else:
            logger.info(error_print("EVAL_CROP_SIZE check"))
            logger.info("EVAL_CROP_SIZE: ({},{}) must be no smaller than the image size ({},{})"
                        .format(cfg.EVAL_CROP_SIZE[0], cfg.EVAL_CROP_SIZE[1],
                                cfg.AUG.FIX_RESIZE_SIZE[0], cfg.AUG.FIX_RESIZE_SIZE[1]))
    else:
        logger.info("\nERROR! cfg.AUG.AUG_METHOD setting wrong, it should be one of "
                    "[unpadding, stepscaling, rangescaling]")
def inf_resize_value_check():
    if cfg.AUG.AUG_METHOD == "rangescaling":
        # Note: the upper bound must be MAX_RESIZE_VALUE; comparing against
        # MIN_RESIZE_VALUE twice was a typo.
        if cfg.AUG.INF_RESIZE_VALUE < cfg.AUG.MIN_RESIZE_VALUE or \
                cfg.AUG.INF_RESIZE_VALUE > cfg.AUG.MAX_RESIZE_VALUE:
            logger.info("\nWARNING! you set AUG.AUG_METHOD = 'rangescaling', but "
                        "AUG.INF_RESIZE_VALUE: {} is not in [AUG.MIN_RESIZE_VALUE, AUG.MAX_RESIZE_VALUE]: "
                        "[{}, {}].".format(cfg.AUG.INF_RESIZE_VALUE,
                                           cfg.AUG.MIN_RESIZE_VALUE,
                                           cfg.AUG.MAX_RESIZE_VALUE))
def image_type_check(img_dim):
    """
...@@ -256,166 +293,218 @@ def image_type_check(img_dim):
    return
    """
    if (1 in img_dim or 3 in img_dim) and cfg.DATASET.IMAGE_TYPE == 'rgba':
        logger.info(error_print("DATASET.IMAGE_TYPE check"))
        logger.info("DATASET.IMAGE_TYPE is {} but the type of image has "
                    "gray or rgb\n".format(cfg.DATASET.IMAGE_TYPE))
    elif (1 not in img_dim and 3 not in img_dim and 4 in img_dim) and cfg.DATASET.IMAGE_TYPE == 'rgb':
        logger.info(correct_print("DATASET.IMAGE_TYPE check"))
        logger.info("\nWARNING: DATASET.IMAGE_TYPE is {} but the type of all image is rgba".format(
            cfg.DATASET.IMAGE_TYPE))
    else:
        logger.info(correct_print("DATASET.IMAGE_TYPE check"))
def shape_check():
    """Report the image/label shape check results."""
    if len(shape_unequal_image) == 0:
        logger.info(correct_print("shape check"))
        logger.info("All images are the same shape as the labels")
    else:
        logger.info(error_print("shape check"))
        logger.info("Some images are not the same shape as the labels as follow: ")
        for i in shape_unequal_image:
            logger.debug(i)


def image_label_shape_check(img, grt):
    """
    Check whether an image and its label have the same size.
    """
    flag = True
    img_height = img.shape[0]
    img_width = img.shape[1]
    grt_height = grt.shape[0]
    grt_width = grt.shape[1]

    if img_height != grt_height or img_width != grt_width:
        flag = False
    return flag


def file_list_check(list_name):
    """Check that every line of the file list uses the expected separator."""
    if len(list_wrong) == 0:
        logger.info(correct_print(list_name.split(os.sep)[-1] + " DATASET.SEPARATOR check"))
    else:
        logger.info(error_print(list_name.split(os.sep)[-1] + " DATASET.SEPARATOR check"))
        logger.info("The following list is not separated by {}".format(cfg.DATASET.SEPARATOR))
        for i in list_wrong:
            logger.debug(i)


def imread_check():
    if len(imread_failed) == 0:
        logger.info(correct_print("dataset reading check"))
        logger.info("All images can be read successfully")
    else:
        logger.info(error_print("dataset reading check"))
        logger.info("Failed to read {} images".format(len(imread_failed)))
        for i in imread_failed:
            logger.debug(i)


def label_gray_check():
    if len(label_gray_wrong) == 0:
        logger.info(correct_print("label gray check"))
        logger.info("All label images are gray")
    else:
        logger.info(error_print("label gray check"))
        logger.info("{} label images are not gray\nLabel pixel statistics may "
                    "be insignificant".format(len(label_gray_wrong)))
        for i in label_gray_wrong:
            logger.debug(i)
def check_train_dataset():
    list_file = cfg.DATASET.TRAIN_FILE_LIST
    logger.info("-----------------------------\n1. Check train dataset...")
    with open(list_file, 'r') as fid:
        lines = fid.readlines()
        for line in tqdm(lines):
            line = line.strip()
            parts = line.split(cfg.DATASET.SEPARATOR)
            if len(parts) != 2:
                list_wrong.append(line)
                continue
            img_name, grt_name = parts[0], parts[1]
            img_path = os.path.join(cfg.DATASET.DATA_DIR, img_name)
            grt_path = os.path.join(cfg.DATASET.DATA_DIR, grt_name)
            try:
                img = cv2_imread(img_path, cv2.IMREAD_UNCHANGED)
                grt = cv2_imread(grt_path, cv2.IMREAD_UNCHANGED)
            except Exception as e:
                imread_failed.append((line, str(e)))
                continue

            is_gray = is_label_gray(grt)
            if not is_gray:
                label_gray_wrong.append(line)
                grt = cv2.cvtColor(grt, cv2.COLOR_BGR2GRAY)
            get_image_dim(img)
            is_equal_img_grt_shape = image_label_shape_check(img, grt)
            if not is_equal_img_grt_shape:
                shape_unequal_image.append(line)
            png_format, grt_classes, num_of_each_class = ground_truth_check(grt, grt_path)
            if not png_format:
                png_format_wrong_image.append(line)
            is_label_correct = sum_gt_check(png_format, grt_classes, num_of_each_class)
            if not is_label_correct:
                label_wrong.append(line)

    file_list_check(list_file)
    imread_check()
    label_gray_check()
    gt_check()
    image_type_check(img_dim)
    shape_check()
def check_val_dataset():
    list_file = cfg.DATASET.VAL_FILE_LIST
    logger.info("\n-----------------------------\n2. Check val dataset...")
    with open(list_file) as fid:
        lines = fid.readlines()
        for line in tqdm(lines):
            line = line.strip()
            parts = line.split(cfg.DATASET.SEPARATOR)
            if len(parts) != 2:
                list_wrong.append(line)
                continue
            img_name, grt_name = parts[0], parts[1]
            img_path = os.path.join(cfg.DATASET.DATA_DIR, img_name)
            grt_path = os.path.join(cfg.DATASET.DATA_DIR, grt_name)
            try:
                img = cv2_imread(img_path, cv2.IMREAD_UNCHANGED)
                grt = cv2_imread(grt_path, cv2.IMREAD_UNCHANGED)
            except Exception as e:
                # use str(e); exceptions have no .message attribute in Python 3
                imread_failed.append((line, str(e)))
                continue

            is_gray = is_label_gray(grt)
            if not is_gray:
                label_gray_wrong.append(line)
                grt = cv2.cvtColor(grt, cv2.COLOR_BGR2GRAY)
            get_image_max_height_width(img)
            get_image_min_max_aspectratio(img)
            get_image_dim(img)
            is_equal_img_grt_shape = image_label_shape_check(img, grt)
            if not is_equal_img_grt_shape:
                shape_unequal_image.append(line)
            png_format, grt_classes, num_of_each_class = ground_truth_check(grt, grt_path)
            if not png_format:
                png_format_wrong_image.append(line)
            is_label_correct = sum_gt_check(png_format, grt_classes, num_of_each_class)
            if not is_label_correct:
                label_wrong.append(line)

    file_list_check(list_file)
    imread_check()
    label_gray_check()
    gt_check()
    image_type_check(img_dim)
    shape_check()
    eval_crop_size_check(max_height, max_width, min_aspectratio, max_aspectratio)
def check_test_dataset():
    list_file = cfg.DATASET.TEST_FILE_LIST
    has_label = False
    with open(list_file) as fid:
        logger.info("\n-----------------------------\n3. Check test dataset...")
        lines = fid.readlines()
        for line in tqdm(lines):
            line = line.strip()
            parts = line.split(cfg.DATASET.SEPARATOR)
            if len(parts) == 1:
                img_name = parts
                img_path = os.path.join(cfg.DATASET.DATA_DIR, img_name[0])
                try:
                    img = cv2_imread(img_path, cv2.IMREAD_UNCHANGED)
                except Exception as e:
                    imread_failed.append((line, str(e)))
                    continue
            elif len(parts) == 2:
                has_label = True
                img_name, grt_name = parts[0], parts[1]
                img_path = os.path.join(cfg.DATASET.DATA_DIR, img_name)
                grt_path = os.path.join(cfg.DATASET.DATA_DIR, grt_name)
                try:
                    img = cv2_imread(img_path, cv2.IMREAD_UNCHANGED)
                    grt = cv2_imread(grt_path, cv2.IMREAD_UNCHANGED)
                except Exception as e:
                    imread_failed.append((line, str(e)))
                    continue

                is_gray = is_label_gray(grt)
                if not is_gray:
                    label_gray_wrong.append(line)
                    grt = cv2.cvtColor(grt, cv2.COLOR_BGR2GRAY)
                is_equal_img_grt_shape = image_label_shape_check(img, grt)
                if not is_equal_img_grt_shape:
                    shape_unequal_image.append(line)
                png_format, grt_classes, num_of_each_class = ground_truth_check(grt, grt_path)
                if not png_format:
                    png_format_wrong_image.append(line)
                is_label_correct = sum_gt_check(png_format, grt_classes, num_of_each_class)
                if not is_label_correct:
                    label_wrong.append(line)
            else:
                list_wrong.append(line)
                continue
            get_image_max_height_width(img)
            get_image_min_max_aspectratio(img)
            get_image_dim(img)

    file_list_check(list_file)
    imread_check()
    if has_label:
        label_gray_check()
    if has_label:
        gt_check()
    image_type_check(img_dim)
    if has_label:
        shape_check()
    eval_crop_size_check(max_height, max_width, min_aspectratio, max_aspectratio)
def main(args):
    if args.cfg_file is not None:
        cfg.update_from_file(args.cfg_file)
    cfg.check_and_infer(reset_dataset=True)
    logger.info(pprint.pformat(cfg))
    init_global_variable()

    check_train_dataset()
...@@ -428,8 +517,19 @@ def main(args):
    inf_resize_value_check()


if __name__ == "__main__":
    args = parse_args()
    logger = logging.getLogger()
    logger.setLevel('DEBUG')
    BASIC_FORMAT = "%(message)s"
    formatter = logging.Formatter(BASIC_FORMAT)
    sh = logging.StreamHandler()
    sh.setFormatter(formatter)
    sh.setLevel('INFO')
    th = logging.FileHandler('detail.log', 'w')
    th.setFormatter(formatter)
    logger.addHandler(sh)
    logger.addHandler(th)
    main(args)
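The checks above lean on an `is_label_gray` helper that is defined outside this hunk. As a point of reference, here is a minimal sketch of what such a helper is assumed to do (treat a 2-D array, or a 3-D array whose channels carry identical values, as gray); the actual implementation in the repository may differ:

```python
import numpy as np

def is_label_gray(grt):
    """Sketch only: a label counts as gray if it is single-channel, or if all
    of its channels are identical (so a BGR->GRAY conversion is lossless)."""
    if grt.ndim == 2:
        return True
    if grt.ndim == 3 and grt.shape[2] == 3:
        b, g, r = grt[..., 0], grt[..., 1], grt[..., 2]
        return np.array_equal(b, g) and np.array_equal(g, r)
    return False
```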
...@@ -106,19 +106,21 @@ class SegDataset(object):
    def batch(self, reader, batch_size, is_test=False, drop_last=False):
        def batch_reader(is_test=False, drop_last=drop_last):
            if is_test:
                imgs, grts, img_names, valid_shapes, org_shapes = [], [], [], [], []
                for img, grt, img_name, valid_shape, org_shape in reader():
                    imgs.append(img)
                    grts.append(grt)
                    img_names.append(img_name)
                    valid_shapes.append(valid_shape)
                    org_shapes.append(org_shape)
                    if len(imgs) == batch_size:
                        yield np.array(imgs), np.array(grts), img_names, np.array(
                            valid_shapes), np.array(org_shapes)
                        imgs, grts, img_names, valid_shapes, org_shapes = [], [], [], [], []

                if not drop_last and len(imgs) > 0:
                    yield np.array(imgs), np.array(grts), img_names, np.array(
                        valid_shapes), np.array(org_shapes)
            else:
                imgs, labs, ignore = [], [], []
...@@ -146,94 +148,65 @@ class SegDataset(object):
            # reserve alpha channel
            cv2_imread_flag = cv2.IMREAD_UNCHANGED

        parts = line.strip().split(cfg.DATASET.SEPARATOR)
        if len(parts) != 2:
            if mode == ModelPhase.TRAIN or mode == ModelPhase.EVAL:
                raise Exception("File list format incorrect! It should be"
                                " image_name{}label_name\\n".format(
                                    cfg.DATASET.SEPARATOR))
            img_name, grt_name = parts[0], None
        else:
            img_name, grt_name = parts[0], parts[1]

        img_path = os.path.join(src_dir, img_name)
        img = cv2_imread(img_path, cv2_imread_flag)

        if grt_name is not None:
            grt_path = os.path.join(src_dir, grt_name)
            grt = cv2_imread(grt_path, cv2.IMREAD_GRAYSCALE)
        else:
            grt_path = None
            grt = None

        if img is None:
            raise Exception(
                "Empty image, src_dir: {}, img: {} & lab: {}".format(
                    src_dir, img_path, grt_path))

        img_height = img.shape[0]
        img_width = img.shape[1]

        if grt is not None:
            grt_height = grt.shape[0]
            grt_width = grt.shape[1]

            if img_height != grt_height or img_width != grt_width:
                raise Exception(
                    "source img and label img must have the same size")
        else:
            if mode == ModelPhase.TRAIN or mode == ModelPhase.EVAL:
                raise Exception(
                    "Empty image, src_dir: {}, img: {} & lab: {}".format(
                        src_dir, img_path, grt_path))

        if len(img.shape) < 3:
            img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

        img_channels = img.shape[2]
        if img_channels < 3:
            raise Exception("PaddleSeg only supports gray, rgb or rgba image")
        if img_channels != cfg.DATASET.DATA_DIM:
            raise Exception(
                "Input image channel({}) is not match cfg.DATASET.DATA_DIM({}), img_name={}"
                .format(img_channels, cfg.DATASET.DATA_DIM, img_name))
        if img_channels != len(cfg.MEAN):
            raise Exception(
                "img name {}, img chns {} mean size {}, size unequal".format(
                    img_name, img_channels, len(cfg.MEAN)))
        if img_channels != len(cfg.STD):
            raise Exception(
                "img name {}, img chns {} std size {}, size unequal".format(
                    img_name, img_channels, len(cfg.STD)))

        return img, grt, img_name, grt_name

    def normalize_image(self, img):
...@@ -329,4 +302,4 @@ class SegDataset(object):
        elif ModelPhase.is_eval(mode):
            return (img, grt, ignore)
        elif ModelPhase.is_visual(mode):
            return (img, grt, img_name, valid_shape, org_shape)
...@@ -171,7 +171,7 @@ def visualize(cfg,
    fetch_list = [pred.name]
    test_reader = dataset.batch(dataset.generator, batch_size=1, is_test=True)
    img_cnt = 0
    for imgs, grts, img_names, valid_shapes, org_shapes in test_reader:
        pred_shape = (imgs.shape[2], imgs.shape[3])
        pred, = exe.run(
            program=test_prog,
...@@ -185,6 +185,7 @@ def visualize(cfg,
            # Add more comments
            res_map = np.squeeze(pred[i, :, :, :]).astype(np.uint8)
            img_name = img_names[i]
            grt = grts[i]
            res_shape = (res_map.shape[0], res_map.shape[1])
            if res_shape[0] != pred_shape[0] or res_shape[1] != pred_shape[1]:
                res_map = cv2.resize(
...@@ -196,6 +197,11 @@ def visualize(cfg,
                res_map, (org_shape[1], org_shape[0]),
                interpolation=cv2.INTER_NEAREST)

            if grt is not None:
                grt = cv2.resize(
                    grt, (org_shape[1], org_shape[0]),
                    interpolation=cv2.INTER_NEAREST)

            png_fn = to_png_fn(img_names[i])
            if also_save_raw_results:
                raw_fn = os.path.join(raw_save_dir, png_fn)
...@@ -209,6 +215,8 @@ def visualize(cfg,
            makedirs(dirname)

            pred_mask = colorize(res_map, org_shapes[i], color_map)
            if grt is not None:
                grt = colorize(grt, org_shapes[i], color_map)
            cv2.imwrite(vis_fn, pred_mask)

            img_cnt += 1
...@@ -233,7 +241,13 @@ def visualize(cfg,
                    img,
                    epoch,
                    dataformats='HWC')
                # add ground truth (label) images
                if grt is not None:
                    log_writer.add_image(
                        "Label/{}".format(img_names[i]),
                        grt[..., ::-1],
                        epoch,
                        dataformats='HWC')

        # If in local_test mode, only visualize 5 images just for testing
        # procedure
......
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
TEST_PATH = os.path.join(LOCAL_PATH, "..", "test")
sys.path.append(TEST_PATH)
from test_utils import download_file_and_uncompress
model_urls = {
"deeplabv3plus_mobilenetv2-1-0_bn_cityscapes":
"https://paddleseg.bj.bcebos.com/models/mobilenet_cityscapes.tgz",
"unet_bn_coco": "https://paddleseg.bj.bcebos.com/models/unet_coco_v3.tgz"
}
if __name__ == "__main__":
if len(sys.argv) != 2:
print("usage:\n python download_model.py ${MODEL_NAME}")
exit(1)
model_name = sys.argv[1]
if not model_name in model_urls.keys():
print("Only support: \n {}".format("\n ".join(
list(model_urls.keys()))))
exit(1)
url = model_urls[model_name]
download_file_and_uncompress(
url=url,
savepath=LOCAL_PATH,
extrapath=LOCAL_PATH,
extraname=model_name)
print("Pretrained Model download success!")
# Building PaddleSegServing from Source and Setting Up the Service

This document describes how to build and install PaddleSegServing from source and how to set up the service. Before compiling, make sure the PaddleServing dependencies are installed; the dependency installation guide is at [PaddleSegServing dependency installation](./README.md).

## 1. Build and install PaddleServing

The steps below cover building and installing both the CPU and the GPU version of PaddleServing.

```bash
...@@ -134,7 +46,7 @@ serving
 └── tools
```

## 2. Install PaddleSegServing

```bash
# Step 1. Download the PaddleSeg code into the ~ directory
......
# PaddleSegServing

## 1. Introduction

PaddleSegServing is an enterprise-grade solution for real-time image segmentation services, built on PaddleSeg. Users only need to care about the model itself; there is no need to understand the details of model loading, prediction, or the concurrent scheduling of GPU/CPU resources. By adjusting the configuration parameters, an image segmentation service can be tailored to the needs of the business. PaddleSegServing currently supports human segmentation, urban road segmentation, and pet segmentation models. This document walks through setting up a human segmentation service to show the setup workflow shared by all PaddleSeg services.

## 2. Installing the precompiled version and setting up the service

Running PaddleSegServing depends on several shared libraries; make sure the system has the required dependencies before downloading and installing.

The installation and setup workflow has been verified on CentOS and Ubuntu. The steps below target CentOS; the dependency installation guide for Ubuntu is in [Installing dependencies on Ubuntu](UBUNTU.md).

### 2.1. System dependencies

Dependency | Verified versions
-- | --
Linux | CentOS 6.10 / 7, Ubuntu 16.07
CMake | 3.0+
GCC | 4.8.2
Python | 2.7
Go compiler | 1.9.2
openssl | 1.0.1+
bzip2 | 1.0.6+

If GPU prediction is needed, the following libraries are also required:

GPU library | Verified versions
-- | --
CUDA | 9.2
cuDNN | 7.1.4
nccl | 2.4.7

### 2.2. Installing the dependencies

#### 2.2.1. Install openssl, the Go compiler, and bzip2

```bash
yum -y install openssl openssl-devel golang bzip2-libs bzip2-devel
```

#### 2.2.2. Install the GPU prediction dependencies (required only if GPU prediction is used)

#### 2.2.2.1. Install and configure CUDA 9.2 and cuDNN 7.1.4

Make sure CUDA 9.2 and cuDNN 7.1.4 are installed correctly. The official installation guides are:

```bash
CUDA installation guide: https://developer.nvidia.com/cuda-90-download-archive?target_os=Linux&target_arch=x86_64&target_distro=CentOS&target_version=7&target_type=rpmnetwork
cuDNN installation guide: https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html
```

#### 2.2.2.2. Install the nccl library (skip this step if nccl 2.4.7 is already installed)

```bash
# Download nccl-repo-rhel7-2.4.7-ga-cuda9.2-1-1.x86_64.rpm
wget -c https://paddlehub.bj.bcebos.com/serving/nccl-repo-rhel7-2.4.7-ga-cuda9.2-1-1.x86_64.rpm
# Install the nccl repo
rpm -i nccl-repo-rhel7-2.4.7-ga-cuda9.2-1-1.x86_64.rpm
# Update the index
yum -y update
# Install the packages
yum -y install libnccl-2.4.7-1+cuda9.2 libnccl-devel-2.4.7-1+cuda9.2 libnccl-static-2.4.7-1+cuda9.2
```

### 2.2.3. Install CMake 3.15

If CMake is not installed, or the installed version is older than 3.0, run the following:

```bash
# If a CMake older than 3.0 is installed, remove it first
yum -y remove cmake
# Download and unpack the source code
wget -c https://github.com/Kitware/CMake/releases/download/v3.15.0/cmake-3.15.0.tar.gz
tar xvfz cmake-3.15.0.tar.gz
# Build CMake
cd cmake-3.15.0
./configure
make -j4
# Install and check the CMake version
make install
cmake --version
# From the cmake-3.15.0 directory, copy the curl headers (a PaddleServing dependency) into the system include directory
cp -r Utilities/cmcurl/include/curl/ /usr/include/
```

### 2.2.4. Add symlinks for the required shared libraries

On most Linux systems the shared libraries carry a version suffix in their file names, e.g. libcurl.so.4.3.0. The problem with this naming is that the find_library command in CMakeLists.txt cannot resolve libraries named this way, which makes the CMake step fail. Since this project is built with CMake, the relevant libraries must be reachable under names ending in .so or .a. The simplest fix is to create a symlink pointing at the versioned library. On the Baidu Cloud machines only the curl library has this naming problem, so the command is as follows (other libraries can be handled the same way):

```bash
ln -s /usr/lib64/libcurl.so.4.3.0 /usr/lib64/libcurl.so
```

### 2.3. Download the precompiled PaddleSegServing

The precompiled version is built on CentOS 7.6; to try PaddleSegServing quickly, download and install the precompiled version on that system. There are two prebuilt packages: a GPU build, recommended for machines with a GPU, and a CPU build for machines without one.

#### 2.3.1. Download and unpack the GPU version of PaddleSegServing

```bash
cd ~
wget -c --no-check-certificate https://paddleseg.bj.bcebos.com/serving/paddle_seg_serving_centos7.6_gpu_cuda9.2.tar.gz
tar xvfz paddle_seg_serving_centos7.6_gpu_cuda9.2.tar.gz seg-serving
```

#### 2.3.2. Download and unpack the CPU version of PaddleSegServing

```bash
cd ~
wget -c --no-check-certificate https://paddleseg.bj.bcebos.com/serving/paddle_seg_serving_centos7.6_cpu.tar.gz
tar xvfz paddle_seg_serving_centos7.6_cpu.tar.gz seg-serving
```

The unpacked PaddleSegServing directory looks like this:

...@@ -36,13 +116,22 @@ tar xvfz PaddleSegServing.centos7.6_cuda9.2_gpu.tar.gz
 └── log
```

### 2.4. Install the shared libraries

Copy libiomp5.so, libmklml_gnu.so, and libmklml_intel.so to /usr/lib.

```bash
cd seg-serving/bin/
cp libiomp5.so libmklml_gnu.so libmklml_intel.so /usr/lib
```

### 2.5. Run PaddleSegServing

This section describes how to run and test PaddleSegServing.

#### 2.5.1. Set up a human segmentation service

Setting up a human segmentation service only requires writing a few configuration files; other segmentation services follow the same workflow.

#### 2.5.1.1. Download the human segmentation model files and copy them into place

```bash
# Download the human segmentation model
wget -c https://paddleseg.bj.bcebos.com/inference_model/deeplabv3p_xception65_humanseg.tgz
...@@ -52,11 +141,7 @@ cp -r deeplabv3p_xception65_humanseg seg-serving/bin/data/model/paddle/fluid
```

#### 2.5.1.2. Configure the parameter files

The parameter files are listed below. PaddleSegServing adds only one new configuration file, seg_conf.yaml, which specifies model-specific parameters such as the mean, standard deviation, and image size. This file is referenced in gflags.conf via --seg_conf_file. The fields of the other configuration files are explained at: https://github.com/PaddlePaddle/Serving/blob/develop/doc/SERVING_CONFIGURE.md

```bash
conf/
...@@ -68,7 +153,25 @@ conf/
 └── workflow.prototxt
```

The contents of seg_conf.yaml and the meaning of each entry are shown below.

```yaml
%YAML:1.0
# Input image size fed to the model. Any image is resized to 513x513 before inference.
SIZE: [513, 513]
# Mean
MEAN: [104.008, 116.669, 122.675]
# Standard deviation
STD: [1.0, 1.0, 1.0]
# Number of channels
CHANNELS: 3
# Number of classes
CLASS_NUM: 2
# Name of the model to load; must match the corresponding model name in model_toolkit.prototxt.
MODEL_NAME: "human_segmentation"
```
#### 2.5.2. Run the server

```bash
# 1. Set the environment variables
...@@ -77,16 +180,43 @@ export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/lib64:$LD_LIBRARY_PATH
cd ~/serving/build/output/demo/seg-serving/bin/
./seg-serving
```

#### 2.5.3. Run the client

The PaddleSeg directory layout is shown below; the client lives in PaddleSeg/serving/tools.

```bash
PaddleSeg
├── configs
├── contrib
├── dataset
├── docs
├── inference
├── pdseg
├── README.md
├── requirements.txt
├── scripts
├── serving
│   ├── COMPILE_GUIDE.md
│   ├── imgs
│   ├── README.md
│   ├── requirements.txt        # Python packages the client depends on
│   ├── seg-serving
│   ├── tools                   # client directory
│   │   ├── images              # test images: jpg or other three-channel formats, named with a .jpg or .jpeg suffix
│   │   │   ├── 1.jpg
│   │   │   ├── 2.jpg
│   │   │   └── 3.jpg
│   │   └── image_seg_client.py # client test code
│   └── UBUNTU.md
├── test
└── test.md
```

The client is written in Python 3. After installing the Python dependencies listed in requirements.txt (`pip3 install -r requirements.txt`), it runs on Windows, Mac, Linux, and other platforms. The test images live in PaddleSeg/serving/tools/images; other three-channel images can be placed there for testing. The result images returned by the server are saved under PaddleSeg/serving/tools.

```bash
cd tools
vim image_seg_client.py   # set the IMAGE_SEG_URL variable to the server's IP address
python3.6 image_seg_client.py
# The segmentation result images appear in the current directory.
```
## 3. Building from source and setting up the service (optional)
......
# Installing Dependencies on Ubuntu

Running PaddleSegServing requires a few system libraries. The exact commands differ between Linux distributions; the following describes how to install the dependencies on Ubuntu 16.07.

## 1. Install ssl, go, python, bzip2, and crypto

```bash
sudo apt-get install golang-1.10 python2.7 libssl1.0.0 libssl-dev libssl-doc libcrypto++-dev libcrypto++-doc libcrypto++-utils libbz2-1.0 libbz2-dev
```

## 2. Add symlinks for the ssl, crypto, and curl libraries

```bash
ln -s /lib/x86_64-linux-gnu/libssl.so.1.0.0 /usr/lib/x86_64-linux-gnu/libssl.so
ln -s /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 /usr/lib/x86_64-linux-gnu/libcrypto.so.10
ln -s /usr/lib/x86_64-linux-gnu/libcurl.so.4.4.0 /usr/lib/x86_64-linux-gnu/libcurl.so
```

## 3. Install the GPU dependencies (required only if GPU prediction is used)

### 3.1. Install and configure CUDA 9.2 and cuDNN 7.1.4

Same as section 2.2.2.1 of the [precompiled installation guide](README.md).

### 3.2. Install the nccl library (skip this step if nccl 2.4.7 is already installed)

```bash
# Download the nccl deb package
wget -c --no-check-certificate https://paddleseg.bj.bcebos.com/serving/nccl-repo-ubuntu1604-2.4.8-ga-cuda9.2_1-1_amd64.deb
# Install the deb package (this creates /var/nccl-repo-2.4.8-ga-cuda9.2)
sudo dpkg -i nccl-repo-ubuntu1604-2.4.8-ga-cuda9.2_1-1_amd64.deb
# Register the repo key
sudo apt-key add /var/nccl-repo-2.4.8-ga-cuda9.2/7fa2af80.pub
# Update the index
sudo apt update
# Install the nccl libraries
sudo apt-get install libnccl2 libnccl-dev
```

## 4. Install CMake 3.15

If CMake is not installed, or the installed version is older than 3.0, run the following:

```bash
# If a CMake older than 3.0 is installed, remove it first
sudo apt-get autoremove cmake
```

The remaining CMake installation steps are the same as section 2.2.3 of the [precompiled installation guide](README.md).

## 5. Install PaddleSegServing

### 5.1. Download and unpack the GPU version of PaddleSegServing

```bash
cd ~
wget -c --no-check-certificate https://paddleseg.bj.bcebos.com/serving/paddle_seg_serving_ubuntu16.07_gpu_cuda9.2.tar.gz
tar xvfz paddle_seg_serving_ubuntu16.07_gpu_cuda9.2.tar.gz seg-serving
```

### 5.2. Download and unpack the CPU version of PaddleSegServing

```bash
cd ~
wget -c --no-check-certificate https://paddleseg.bj.bcebos.com/serving%2Fpaddle_seg_serving_ubuntu16.07_cpu.tar.gz
tar xvfz paddle_seg_serving_ubuntu16.07_cpu.tar.gz seg-serving
```

## 6. gcc version

On Ubuntu 16.07 the default gcc version is 5.4.0, but PaddleSegServing currently only compiles with gcc 4.8. If the machine's gcc is 5.4, downgrade first (there is no need to uninstall the existing gcc).

```bash
# Install gcc 4.8
sudo apt-get install gcc-4.8
# Verify that gcc 4.8 is installed
ls /usr/bin/gcc*
# Raise the priority of gcc 4.8 so that the gcc command resolves to it
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 100
# Inspect the result (optional)
sudo update-alternatives --config gcc
```
--enable_model_toolkit
--seg_conf_file=./conf/seg_conf.yaml
--num_threads=1
--bthread_min_concurrency=4
--bthread_concurrency=4

engines {
  name: "human_segmentation"
  type: "FLUID_GPU_ANALYSIS"
  reloadable_meta: "./data/model/paddle/fluid_time_file"
  reloadable_type: "timestamp_ne"
  model_data_path: "./data/model/paddle/fluid/deeplabv3p_xception65_humanseg"
......
...@@ -128,16 +128,30 @@ int ImageSegOp::inference() {
        mask_raw[di] = label;
      }

      cv::Mat mask_mat = cv::Mat(height, width, CV_8UC1);
      // Build the mask from the score map (the second half of the output
      // tensor), scaling scores from [0, 1] to [0, 255].
      std::vector<uchar> temp_mat(out_size, 0);
      for (int i = 0; i < out_size; ++i) {
        temp_mat[i] = 255 * data[i + out_size];
      }
      mask_mat.data = temp_mat.data();
      cv::Mat mask_temp_mat((*height_vec)[si], (*width_vec)[si], mask_mat.type());
      // Size(cols, rows): resize the mask back to the original image size
      cv::resize(mask_mat, mask_temp_mat, mask_temp_mat.size());

      std::vector<uchar> mat_buff;
      cv::imencode(".png", mask_temp_mat, mat_buff);
      ins->set_mask(reinterpret_cast<char *>(mat_buff.data()), mat_buff.size());
    }

    // release out tensor object resource
......
...@@ -103,7 +103,8 @@ int ReaderOp::inference() {
    const ImageSegReqItem& ins = req->instances(si);
    // read dense image from request bytes
    const char* binary = ins.image_binary().c_str();
    // Derive the length from the binary payload itself rather than trusting
    // the client-supplied image_length field.
    size_t length = ins.image_binary().length();
    if (length == 0) {
      LOG(ERROR) << "Empty image, length is 0";
      return -1;
......
...@@ -50,7 +50,7 @@ int WriteJsonOp::inference() {
  std::string err_string;
  uint32_t batch_size = seg_out->item_size();
  LOG(INFO) << "batch_size = " << batch_size;
  for (uint32_t si = 0; si < batch_size; si++) {
    ResponseItem* ins = res->add_prediction();
    //LOG(INFO) << "Original image width = " << seg_out->width(si) << ", height = " << seg_out->height(si);
...@@ -59,6 +59,7 @@ int WriteJsonOp::inference() {
      return -1;
    }
    std::string* text = ins->mutable_info();
    LOG(INFO) << seg_out->item(si).ShortDebugString();
    if (!ProtoMessageToJson(seg_out->item(si), text, &err_string)) {
      LOG(ERROR) << "Failed convert message["
                 << seg_out->item(si).ShortDebugString()
......
# coding: utf-8
import os
import cv2
import requests
import json
...@@ -8,28 +7,68 @@ import base64
import numpy as np
import time
import threading
import re

# Address of the segmentation service
IMAGE_SEG_URL = 'http://xxx.xxx.xxx.xxx:8010/ImageSegService/inference'


class ClientThread(threading.Thread):
    def __init__(self, thread_id, image_data_repo):
        threading.Thread.__init__(self)
        self.__thread_id = thread_id
        self.__image_data_repo = image_data_repo

    def run(self):
        self.__request_image_seg_service()

    def __request_image_seg_service(self):
        # Send 150 requests in a row
        for i in range(1, 151):
            print("Epoch %d, thread %d" % (i, self.__thread_id))
            self.__benchmark_test()

    # benchmark test
    def __benchmark_test(self):
        start = time.time()
        for image_filename in self.__image_data_repo:
            mask_mat_list = self.__request_predictor_server(image_filename)
            input_img = self.__image_data_repo.get_image_matrix(image_filename)
            # Convert the returned mask matrices to visualizations and save
            # them as image files in the current directory.
            # Comment this out when running a stress test.
            for j in range(len(mask_mat_list)):
                self.__visualization(mask_mat_list[j], image_filename, 2, input_img)
        latency = time.time() - start
        print("total latency = %f s" % (latency))

    # Visualize a prediction result.
    # mask_mat is the prediction returned by the server;
    # output_img is the path where the visualization is saved.
    def __visualization(self, mask_mat, output_img, num_cls, input_img):
        # ColorMap for visualization more clearly
        n = num_cls
        color_map = []
        for j in range(n):
            lab = j
            a = b = c = 0
            color_map.append([a, b, c])
            i = 0
            while lab:
                color_map[j][0] |= (((lab >> 0) & 1) << (7 - i))
                color_map[j][1] |= (((lab >> 1) & 1) << (7 - i))
                color_map[j][2] |= (((lab >> 2) & 1) << (7 - i))
                i += 1
                lab >>= 3
        im = cv2.imdecode(mask_mat, 1)
        w, h, c = im.shape
        im2 = cv2.resize(im, (w, h))
        im = im2
        # Alpha blending: I = aF + (1-a)B
        a = im / 255.0
        im = a * input_img + (1 - a) * [255, 255, 255]
        cv2.imwrite(output_img, im)

    def __request_predictor_server(self, input_img):
        data = {"instances": [self.__get_item_json(input_img)]}
        response = requests.post(IMAGE_SEG_URL, data=json.dumps(data))
        try:
            response = json.loads(response.text)
...@@ -37,86 +76,27 @@ def request_predictor_server(input_img_list, dir_name):
            mask_response_list = [mask_response["info"] for mask_response in prediction_list]
            mask_raw_list = [json.loads(mask_response)["mask"] for mask_response in mask_response_list]
        except Exception as err:
            print("Exception[%s], server_message[%s]" % (str(err), response.text))
            return None
        # Responses over the json protocol are base64 encoded as well
        mask_binary_list = [base64.b64decode(mask_raw) for mask_raw in mask_raw_list]
        m = [np.frombuffer(mask_binary, np.uint8) for mask_binary in mask_binary_list]
        return m

    # Build the JSON item for one image to send to the prediction service.
    # input_img is the image to predict.
    def __get_item_json(self, input_img):
        # When requesting the service over http, send the image base64 encoded
        item_binary_b64 = str(base64.b64encode(self.__image_data_repo.get_image_binary(input_img)), 'utf-8')
        item_size = len(item_binary_b64)
        item_json = {
            "image_length": item_size,
            "image_binary": item_binary_b64
        }
        return item_json


def create_thread_pool(thread_num, image_data_repo):
    return [ClientThread(i + 1, image_data_repo) for i in range(thread_num)]


def run_threads(thread_pool):
...@@ -126,7 +106,35 @@ def run_threads(thread_pool):
    for thread in thread_pool:
        thread.join()


class ImageDataRepo:
    def __init__(self, dir_name):
        print("Loading images data...")
        self.__data = {}
        pattern = re.compile(r".+\.(jpg|jpeg)", re.I)
        if os.path.isdir(dir_name):
            for image_filename in os.listdir(dir_name):
                if pattern.match(image_filename):
                    full_path = os.path.join(dir_name, image_filename)
                    with open(full_path, mode="rb") as fp:
                        image_binary_data = fp.read()
                    image_mat_data = cv2.imread(full_path)
                    self.__data[image_filename] = (image_binary_data, image_mat_data)
        else:
            raise Exception("Please use directory to initialize")
        print("Finish loading.")

    def __iter__(self):
        for filename in self.__data:
            yield filename

    def get_image_binary(self, image_name):
        return self.__data[image_name][0]

    def get_image_matrix(self, image_name):
        return self.__data[image_name][1]


if __name__ == "__main__":
    # preprocess
    IDR = ImageDataRepo("images")
    thread_pool = create_thread_pool(thread_num=1, image_data_repo=IDR)
    run_threads(thread_pool)
#!/bin/bash
function abort(){
echo "Your change doesn't follow PaddleHub's code style." 1>&2
echo "Please use pre-commit to check what is wrong." 1>&2
exit 1
}
trap 'abort' 0
set -e
cd $TRAVIS_BUILD_DIR
export PATH=/usr/bin:$PATH
pre-commit install
if ! pre-commit run -a ; then
git diff
exit 1
fi
trap : 0
#!/bin/bash
set -o errexit
base_path=$(cd `dirname $0`/../..; pwd)
cd $base_path
python dataset/download_pet.py
...@@ -14,6 +14,7 @@
from test_utils import download_file_and_uncompress, train, eval, vis, export_model

import os
import argparse

LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
DATASET_PATH = os.path.join(LOCAL_PATH, "..", "dataset")
...@@ -43,7 +44,16 @@ if __name__ == "__main__":
    vis_dir = os.path.join(LOCAL_PATH, "visual", model_name)
    saved_model = os.path.join(LOCAL_PATH, "saved_model", model_name)

    parser = argparse.ArgumentParser(description="PaddleSeg local test")
    parser.add_argument("--devices",
                        dest="devices",
                        help="GPU ids to run on; separate multiple ids with spaces.",
                        nargs="+",
                        default=[0],
                        type=int)
    args = parser.parse_args()
    devices = [str(x) for x in args.devices]

    export_model(
        flags=["--cfg", cfg],
......
...@@ -14,6 +14,7 @@
from test_utils import download_file_and_uncompress, train, eval, vis, export_model

import os
import argparse

LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
DATASET_PATH = os.path.join(LOCAL_PATH, "..", "dataset")
...@@ -44,7 +45,16 @@ if __name__ == "__main__":
    vis_dir = os.path.join(LOCAL_PATH, "visual", model_name)
    saved_model = os.path.join(LOCAL_PATH, "saved_model", model_name)

    parser = argparse.ArgumentParser(description="PaddleSeg local test")
    parser.add_argument("--devices",
                        dest="devices",
                        help="GPU ids to run on; separate multiple ids with spaces.",
                        nargs="+",
                        default=[0],
                        type=int)
    args = parser.parse_args()
    devices = [str(x) for x in args.devices]

    train(
        flags=["--cfg", cfg, "--use_gpu", "--log_steps", "10"],
......
...@@ -20,6 +20,7 @@ import sys
import tarfile
import zipfile
import platform
import functools

lasttime = time.time()
FLUSH_INTERVAL = 0.1
...@@ -78,8 +79,10 @@ def _uncompress_file(filepath, extrapath, delete_file, print_progress):
    if filepath.endswith("zip"):
        handler = _uncompress_file_zip
    elif filepath.endswith("tgz"):
        handler = _uncompress_file_tar
    else:
        handler = functools.partial(_uncompress_file_tar, mode="r")

    for total_num, index in handler(filepath, extrapath):
        if print_progress:
...@@ -104,8 +107,8 @@ def _uncompress_file_zip(filepath, extrapath):
        yield total_num, index


def _uncompress_file_tar(filepath, extrapath, mode="r:gz"):
    files = tarfile.open(filepath, mode)
    filelist = files.getnames()
    total_num = len(filelist)
    for index, file in enumerate(filelist):
...@@ -118,6 +121,7 @@ def _uncompress_file_tar(filepath, extrapath):
def download_file_and_uncompress(url,
                                 savepath=None,
                                 extrapath=None,
                                 extraname=None,
                                 print_progress=True,
                                 cover=False,
                                 delete_file=True):
...@@ -129,19 +133,25 @@ def download_file_and_uncompress(url,
    savename = url.split("/")[-1]
    savepath = os.path.join(savepath, savename)
    savename = ".".join(savename.split(".")[:-1])
    savename = os.path.join(extrapath, savename)
    extraname = savename if extraname is None else os.path.join(
        extrapath, extraname)

    if cover:
        if os.path.exists(savepath):
            shutil.rmtree(savepath)
        if os.path.exists(savename):
            shutil.rmtree(savename)
        if os.path.exists(extraname):
            shutil.rmtree(extraname)

    if not os.path.exists(extraname):
        if not os.path.exists(savename):
            if not os.path.exists(savepath):
                _download_file(url, savepath, print_progress)
            _uncompress_file(savepath, extrapath, delete_file, print_progress)
        shutil.move(savename, extraname)


def _pdseg(command, flags, options, devices):
......
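For reference, a minimal sketch of how the updated helper can be called with the new `extraname` parameter, mirroring the pattern in download_model.py (the URL comes from the `model_urls` table above; the paths are illustrative):

```python
from test_utils import download_file_and_uncompress

# Download a model archive, extract it, and rename the extracted
# directory to a stable name via the new extraname argument.
download_file_and_uncompress(
    url="https://paddleseg.bj.bcebos.com/models/unet_coco_v3.tgz",
    savepath=".",              # where the archive itself is saved
    extrapath=".",             # where it is extracted
    extraname="unet_bn_coco")  # final directory name after shutil.move
```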