Unverified commit 3a5bbc5c, authored by Feng Ni and committed by GitHub

[cherry-pick] fix infer dataset download and label (#5701)

Parent 81581bab
@@ -16,5 +16,5 @@ EvalDataset:
TestDataset:
  !ImageFolder
-    anno_path: annotations/instances_val2017.json
-    dataset_dir: dataset/coco
+    anno_path: annotations/instances_val2017.json # also support txt (like VOC's label_list.txt)
+    dataset_dir: dataset/coco # if set, anno_path will be 'dataset_dir/anno_path'
@@ -16,5 +16,5 @@ EvalDataset:
TestDataset:
  !ImageFolder
-    anno_path: annotations/instances_val2017.json
-    dataset_dir: dataset/coco
+    anno_path: annotations/instances_val2017.json # also support txt (like VOC's label_list.txt)
+    dataset_dir: dataset/coco # if set, anno_path will be 'dataset_dir/anno_path'
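The txt option mentioned in the comment expects a plain list of class names, one per line, in the style of VOC's `label_list.txt`. A short illustrative excerpt (the class names below are the standard VOC ones):

```
aeroplane
bicycle
bird
boat
bottle
```

An optional leading `background` line is tolerated and dropped by the parsing code shown later in this commit.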
@@ -39,7 +39,7 @@ from utils import argsparser, Timer, get_current_memory_mb
# Global dictionary
SUPPORT_MODELS = {
    'YOLO', 'RCNN', 'SSD', 'Face', 'FCOS', 'SOLOv2', 'TTFNet', 'S2ANet', 'JDE',
-    'FairMOT', 'DeepSORT', 'GFL', 'PicoDet', 'CenterNet', 'TOOD',
+    'FairMOT', 'DeepSORT', 'GFL', 'PicoDet', 'CenterNet', 'TOOD', 'RetinaNet',
    'StrongBaseline', 'STGCN'
}
......
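In `deploy/python/infer.py`, this set gates which exported architectures the deploy-side predictor accepts. Conceptually the check behaves like the sketch below; `check_arch_supported` is a hypothetical helper for illustration, not the exact function in the file:

```python
def check_arch_supported(arch, support_models):
    # The exported model's config carries an 'arch' string such as
    # 'RetinaNet'; deployment proceeds only if it matches a known family.
    if any(model in arch for model in support_models):
        return True
    raise ValueError("Unsupported arch: {}, expect one of {}".format(
        arch, support_models))

# With this commit, RetinaNet exports now pass the check:
check_arch_supported('RetinaNet', {'YOLO', 'PicoDet', 'RetinaNet'})
```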
@@ -259,9 +259,11 @@ The Reader-related classes are defined in `reader.py`, which defines the `BaseDataLoader` class.
### 5. Configuration and Operation

-#### 5.1Configuration
+#### 5.1 Configuration

-The configuration files for the modules related to data preprocessing include the Dataset configuration files shared by all models and the Reader configuration files specific to each model. The Dataset configuration files live in the `configs/datasets` folder. For example, the COCO dataset configuration file is as follows:
+The configuration files for the modules related to data preprocessing include the Dataset configuration files shared by all models and the Reader configuration files specific to each model.
+
+##### 5.1.1 Dataset Configuration
+The Dataset configuration files live in the `configs/datasets` folder. For example, the COCO dataset configuration file is as follows:
```
metric: COCO # currently supports evaluation standards such as COCO, VOC, OID, and WiderFace
num_classes: 80 # number of classes in the dataset, excluding the background class
@@ -271,7 +273,7 @@ TrainDataset:
    image_dir: train2017 # path of the training-set images, relative to dataset_dir
    anno_path: annotations/instances_train2017.json # path of the training-set annotation file, relative to dataset_dir
    dataset_dir: dataset/coco # path of the dataset, relative to the PaddleDetection directory
-    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] # controls the fields contained in the samples output by the dataset
+    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] # controls the fields contained in the samples output by the dataset; note this field is unique to TrainDataset and must be configured

EvalDataset:
  !COCODataSet
@@ -281,9 +283,16 @@ EvalDataset:
TestDataset:
  !ImageFolder
-    anno_path: dataset/coco/annotations/instances_val2017.json # path of the validation-set annotation file, relative to the PaddleDetection directory
+    anno_path: annotations/instances_val2017.json # path of the annotation file, used only to read the category information of the dataset; both json and txt formats are supported
+    dataset_dir: dataset/coco # path of the dataset; if this line is set, the annotation path becomes `dataset_dir/anno_path`; if it is not set or is removed, the annotation path is `anno_path` itself
```
In PaddleDetection's yml configuration files, `!` directly serializes a module instance (which can be a function, an instance, and so on); the Dataset classes in the configuration above are all serialized this way.

+**Note:**
+Carefully check the dataset paths in the configuration before running. During training or evaluation, if the TrainDataset or EvalDataset path is misconfigured, you will be prompted and the dataset will be downloaded automatically. When using a custom dataset, if the TestDataset path is misconfigured at inference time, you will be warned that the category information of the default COCO dataset will be used.
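The resolution rule these comments describe can be sketched in a few lines of Python; `resolve_anno_path` is a hypothetical helper for illustration, not a PaddleDetection API:

```python
import os

def resolve_anno_path(anno_path, dataset_dir=None):
    # If dataset_dir is set, the annotation file is looked up under it;
    # otherwise anno_path is used as-is.
    if dataset_dir:
        return os.path.join(dataset_dir, anno_path)
    return anno_path

# Prints 'dataset/coco/annotations/instances_val2017.json'
print(resolve_anno_path('annotations/instances_val2017.json', 'dataset/coco'))
```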
##### 5.1.2 Reader Configuration
The model-specific Readers are defined under each model's folder; for example, the Reader configuration file for yolov3 is defined in `configs/yolov3/_base_/yolov3_reader.yml`. An example Reader configuration is as follows:
```
worker_num: 2
......
@@ -260,9 +260,11 @@ The Reader class is defined in `reader.py`, where the `BaseDataLoader` class is
### 5. Configuration and Operation

-#### 5.1Configuration
+#### 5.1 Configuration

-The configuration files for modules related to data preprocessing contain the configuration files for Datasets common to all models and the configuration files for Readers specific to different models. The configuration file for the Dataset exists in the `configs/datasets` folder. For example, the COCO dataset configuration file is as follows:
+The configuration files for modules related to data preprocessing contain the configuration files for Datasets common to all models and the configuration files for Readers specific to different models.
+
+##### 5.1.1 Dataset Configuration
+The configuration file for the Dataset exists in the `configs/datasets` folder. For example, the COCO dataset configuration file is as follows:
```
metric: COCO # currently supports COCO, VOC, OID, WiderFace and other evaluation standards
num_classes: 80 # the number of classes in the dataset, excluding background classes
@@ -272,7 +274,7 @@ TrainDataset:
    image_dir: train2017 # path of the training-set images, relative to dataset_dir
    anno_path: annotations/instances_train2017.json # path of the training-set annotation file, relative to dataset_dir
    dataset_dir: dataset/coco # path of the dataset, relative to the PaddleDetection directory
-    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] # controls the fields contained in the sample output of the dataset
+    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] # controls the fields contained in the sample output of the dataset; note data_fields is unique to TrainDataset and must be configured

EvalDataset:
  !COCODataSet
@@ -281,9 +283,16 @@ EvalDataset:
    dataset_dir: dataset/coco # path of the dataset, relative to the PaddleDetection directory

TestDataset:
  !ImageFolder
-    anno_path: dataset/coco/annotations/instances_val2017.json # path of the validation-set annotation file, relative to the PaddleDetection directory
+    anno_path: annotations/instances_val2017.json # path of the annotation file, used only to read the category information of the dataset; JSON and TXT formats are supported
+    dataset_dir: dataset/coco # path of the dataset; if this line is set, the annotation path becomes `dataset_dir/anno_path`; if it is not set or is removed, the annotation path is `anno_path` itself
```
In PaddleDetection's yml configuration files, `!` directly serializes a module instance (which can be a function, an instance, and so on); the Dataset classes in the configuration above are all serialized this way.

+**Note:**
+Please carefully check the dataset paths in the configuration before running. During training or evaluation, if the TrainDataset or EvalDataset path is misconfigured, you will be prompted and the dataset will be downloaded automatically. When using a custom dataset, if the TestDataset path is misconfigured at inference time, you will be warned that the category information of the default COCO dataset will be used.
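What the `!` tag amounts to can be pictured as constructing the registered class from the mapping's keys. A conceptual sketch (the real mechanism lives in PaddleDetection's config workspace; the import path and keyword names below simply mirror the YAML above):

```python
from ppdet.data.source.coco import COCODataSet  # import path is indicative

# Roughly what `!COCODataSet` plus the mapping resolves to: the tag selects
# the registered class and the keys become constructor keyword arguments.
train_dataset = COCODataSet(
    image_dir='train2017',
    anno_path='annotations/instances_train2017.json',
    dataset_dir='dataset/coco',
    data_fields=['image', 'gt_bbox', 'gt_class', 'is_crowd'])
```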
##### 5.1.2 Reader Configuration
The model-specific Readers are defined under each model's folder; for example, the Reader configuration file for yolov3 is defined in `configs/yolov3/_base_/yolov3_reader.yml`. An example Reader configuration is as follows:
```
worker_num: 2
......
@@ -328,9 +328,36 @@ dataset/xxx/
...
```
##### Custom Reader for User Data
If new data needs to be added to PaddleDetection, you can refer to the [Add new data source](../advanced_tutorials/READER.md#2.3自定义数据集) section of the data processing documentation and develop the corresponding code to support the new data source; for a detailed walkthrough of the data processing code, see the [data processing documentation](../advanced_tutorials/READER.md).
+The Dataset configuration files live in the `configs/datasets` folder. For example, the COCO dataset configuration file is as follows:
+```
+metric: COCO # currently supports evaluation standards such as COCO, VOC, OID, and WiderFace
+num_classes: 80 # number of classes in the dataset, excluding the background class
+
+TrainDataset:
+  !COCODataSet
+    image_dir: train2017 # path of the training-set images, relative to dataset_dir
+    anno_path: annotations/instances_train2017.json # path of the training-set annotation file, relative to dataset_dir
+    dataset_dir: dataset/coco # path of the dataset, relative to the PaddleDetection directory
+    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] # controls the fields contained in the samples output by the dataset; note this field is specific to the training-set Reader and must be configured
+
+EvalDataset:
+  !COCODataSet
+    image_dir: val2017 # path of the validation-set images, relative to dataset_dir
+    anno_path: annotations/instances_val2017.json # path of the validation-set annotation file, relative to dataset_dir
+    dataset_dir: dataset/coco # path of the dataset, relative to the PaddleDetection directory
+
+TestDataset:
+  !ImageFolder
+    anno_path: annotations/instances_val2017.json # path of the annotation file, used only to read the category information of the dataset; both json and txt formats are supported
+    dataset_dir: dataset/coco # path of the dataset; if this line is set, the annotation path becomes `dataset_dir/anno_path`; if it is not set or is removed, the annotation path is `anno_path` itself
+```
+In PaddleDetection's yml configuration files, `!` directly serializes a module instance (which can be a function, an instance, and so on); the Dataset classes in the configuration above are all serialized this way.
+
+**Note:**
+Carefully check the dataset paths in the configuration before running. During training or evaluation, if the TrainDataset or EvalDataset path is misconfigured, you will be prompted and the dataset will be downloaded automatically. When using a custom dataset, if the TestDataset path is misconfigured at inference time, you will be warned that the category information of the default COCO dataset will be used.
#### Example of User Data Conversion
......
@@ -250,7 +250,7 @@ There are three processing methods for user data:
(1) Convert the user data into VOC format (keeping only the labels required for object detection)
(2) Convert the user data into COCO format (keeping only the labels required for object detection)
(3) Customize a Reader for the user data (for complex data, you need to customize the Reader)

##### Convert User Data to VOC Data
After the user dataset is converted to VOC data, the directory structure is as follows (note: avoid Chinese characters in path and file names within the dataset, to prevent errors caused by Chinese encoding problems):
@@ -332,6 +332,33 @@ dataset/xxx/
##### Custom Reader for User Data
If new data needs to be added to PaddleDetection, you can refer to the [add new data source](../advanced_tutorials/READER.md#2.3_Customizing_Dataset) section of the data processing documentation and develop the corresponding code to support the new data source; for specific code analysis of data processing, read the [data processing document](../advanced_tutorials/READER.md).
+The configuration file for the Dataset exists in the `configs/datasets` folder. For example, the COCO dataset configuration file is as follows:
+```
+metric: COCO # currently supports COCO, VOC, OID, WiderFace and other evaluation standards
+num_classes: 80 # the number of classes in the dataset, excluding background classes
+
+TrainDataset:
+  !COCODataSet
+    image_dir: train2017 # path of the training-set images, relative to dataset_dir
+    anno_path: annotations/instances_train2017.json # path of the training-set annotation file, relative to dataset_dir
+    dataset_dir: dataset/coco # path of the dataset, relative to the PaddleDetection directory
+    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] # controls the fields contained in the sample output of the dataset; note data_fields is specific to the training-set Reader and must be configured
+
+EvalDataset:
+  !COCODataSet
+    image_dir: val2017 # path of the validation-set images, relative to dataset_dir
+    anno_path: annotations/instances_val2017.json # path of the validation-set annotation file, relative to dataset_dir
+    dataset_dir: dataset/coco # path of the dataset, relative to the PaddleDetection directory
+
+TestDataset:
+  !ImageFolder
+    anno_path: annotations/instances_val2017.json # path of the annotation file, used only to read the category information of the dataset; JSON and TXT formats are supported
+    dataset_dir: dataset/coco # path of the dataset; if this line is set, the annotation path becomes `dataset_dir/anno_path`; if it is not set or is removed, the annotation path is `anno_path` itself
+```
+In PaddleDetection's yml configuration files, `!` directly serializes a module instance (which can be a function, an instance, and so on); the Dataset classes in the configuration above are all serialized this way.
+
+**Note:**
+Please carefully check the dataset paths in the configuration before running. During training or evaluation, if the TrainDataset or EvalDataset path is misconfigured, you will be prompted and the dataset will be downloaded automatically. When using a custom dataset, if the TestDataset path is misconfigured at inference time, you will be warned that the category information of the default COCO dataset will be used.
#### Example of User Data Conversion
Take the [Kaggle Dataset](https://www.kaggle.com/andrewmvd/road-sign-detection) competition data as an example to illustrate how to prepare custom data. The dataset of the Kaggle [road-sign-detection](https://www.kaggle.com/andrewmvd/road-sign-detection) competition contains 877 images and four categories: crosswalk, speedlimit, stop, trafficlight. It is available for download from Kaggle, and also from this [link](https://paddlemodels.bj.bcebos.com/object_detection/roadsign_voc.tar).
......
@@ -40,29 +40,48 @@ def get_categories(metric_type, anno_file=None, arch=None):
        return (None, {'id': 'keypoint'})

    if anno_file == None or (not os.path.isfile(anno_file)):
-        logger.warning("anno_file '{}' is None or not set or not exist, "
+        logger.warning(
+            "anno_file '{}' is None or not set or not exist, "
            "please recheck TrainDataset/EvalDataset/TestDataset.anno_path, "
-            "otherwise the default categories will be used by metric_type.".format(anno_file))
+            "otherwise the default categories will be used by metric_type.".
+            format(anno_file))
    if metric_type.lower() == 'coco' or metric_type.lower(
    ) == 'rbox' or metric_type.lower() == 'snipercoco':
        if anno_file and os.path.isfile(anno_file):
-            # lazy import pycocotools here
-            from pycocotools.coco import COCO
-            coco = COCO(anno_file)
-            cats = coco.loadCats(coco.getCatIds())
-            clsid2catid = {i: cat['id'] for i, cat in enumerate(cats)}
-            catid2name = {cat['id']: cat['name'] for cat in cats}
+            if anno_file.endswith('json'):
+                # lazy import pycocotools here
+                from pycocotools.coco import COCO
+                coco = COCO(anno_file)
+                cats = coco.loadCats(coco.getCatIds())
+                clsid2catid = {i: cat['id'] for i, cat in enumerate(cats)}
+                catid2name = {cat['id']: cat['name'] for cat in cats}
+            elif anno_file.endswith('txt'):
+                cats = []
+                with open(anno_file) as f:
+                    for line in f.readlines():
+                        cats.append(line.strip())
+                if cats[0] == 'background': cats = cats[1:]
+                clsid2catid = {i: i for i in range(len(cats))}
+                catid2name = {i: name for i, name in enumerate(cats)}
+            else:
+                raise ValueError("anno_file {} should be json or txt.".format(
+                    anno_file))
            return clsid2catid, catid2name
        # anno file not exist, load default categories of COCO17
        else:
            if metric_type.lower() == 'rbox':
-                logger.warning("metric_type: {}, load default categories of DOTA.".format(metric_type))
+                logger.warning(
+                    "metric_type: {}, load default categories of DOTA.".format(
+                        metric_type))
                return _dota_category()
-            logger.warning("metric_type: {}, load default categories of COCO.".format(metric_type))
+            logger.warning("metric_type: {}, load default categories of COCO.".
+                           format(metric_type))
            return _coco17_category()
    elif metric_type.lower() == 'voc':
@@ -83,7 +102,8 @@ def get_categories(metric_type, anno_file=None, arch=None):
        # anno file not exist, load default categories of
        # VOC all 20 categories
        else:
-            logger.warning("metric_type: {}, load default categories of VOC.".format(metric_type))
+            logger.warning("metric_type: {}, load default categories of VOC.".
+                           format(metric_type))
            return _vocall_category()
    elif metric_type.lower() == 'oid':
@@ -111,7 +131,9 @@ def get_categories(metric_type, anno_file=None, arch=None):
            return clsid2catid, catid2name
        # anno file not exist, load default category 'pedestrian'.
        else:
-            logger.warning("metric_type: {}, load default categories of pedestrian MOT.".format(metric_type))
+            logger.warning(
+                "metric_type: {}, load default categories of pedestrian MOT.".
+                format(metric_type))
            return _mot_category(category='pedestrian')
    elif metric_type.lower() in ['kitti', 'bdd100kmot']:
@@ -130,7 +152,9 @@ def get_categories(metric_type, anno_file=None, arch=None):
            return clsid2catid, catid2name
        # anno file not exist, load default categories of visdrone all 10 categories
        else:
-            logger.warning("metric_type: {}, load default categories of VisDrone.".format(metric_type))
+            logger.warning(
+                "metric_type: {}, load default categories of VisDrone.".format(
+                    metric_type))
            return _visdrone_category()
else:
......
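With the txt branch above, a plain class-name list is now enough to recover inference-time labels. A usage sketch (assuming a `label_list.txt` with one class name per line; the file path below is illustrative):

```python
from ppdet.data.source.category import get_categories

# For a txt file, class ids are assigned in file order and an optional
# leading 'background' line is dropped.
clsid2catid, catid2name = get_categories(
    'COCO', anno_file='dataset/voc/label_list.txt')
print(catid2name[0])  # first class name in the file, e.g. 'aeroplane'
```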
@@ -149,12 +149,15 @@ class ImageFolder(DetDataset):
        self.sample_num = sample_num

    def check_or_download_dataset(self):
        return

    def get_anno(self):
        if self.anno_path is None:
            return
        if self.dataset_dir:
+            # NOTE: ImageFolder is only used for prediction, in
+            # infer mode, image_dir is set by set_images
+            # so we only check anno_path here
            self.dataset_dir = get_dataset_path(self.dataset_dir,
                                                self.anno_path, None)
            return os.path.join(self.dataset_dir, self.anno_path)
        else:
            return self.anno_path

    def parse_dataset(self, ):
        if not self.roidbs:
......
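The effect of `get_anno` on the prediction-only `ImageFolder` can be illustrated as follows (a hypothetical usage sketch: the constructor arguments mirror the TestDataset YAML above, and `get_dataset_path` may check or download the dataset as a side effect):

```python
from ppdet.data.source.dataset import ImageFolder  # import path is indicative

# With dataset_dir set, the annotation path resolves to
# dataset_dir/anno_path; otherwise anno_path is returned as-is.
folder = ImageFolder(
    dataset_dir='dataset/coco',
    anno_path='annotations/instances_val2017.json')
print(folder.get_anno())  # dataset/coco/annotations/instances_val2017.json
```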