This document elaborates on the dataset format adopted by PaddleClas for image classification tasks, as well as other common datasets in this field.
------
## Catalogue
- [1. Dataset Format](#1)
- [2. Common Datasets for Image Classification](#2)
- [2.1 ImageNet1k](#2.1)
- [2.2 Flowers102](#2.2)
- [2.3 CIFAR10 / CIFAR100](#2.3)
- [2.4 MNIST](#2.4)
- [2.5 NUS-WIDE](#2.5)
<a name="1"></a>
## 1. Dataset Format
PaddleClas adopts `txt` files to specify the training and test sets. Taking the `ImageNet1k` dataset as an example, `train_list.txt` and `val_list.txt` have the following formats:
```
# Separate the image path and annotation with "space" for each line
# train_list.txt has the following format
train/n01440764/n01440764_10026.JPEG 0
...
# val_list.txt has the following format
val/ILSVRC2012_val_00000001.JPEG 65
...
```
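As a quick illustration of consuming this format, here is a minimal, hypothetical Python snippet (not part of PaddleClas) that parses such a list file into `(path, label)` pairs:
```python
def load_list(list_path):
    """Parse a PaddleClas-style list file: one 'path label' pair per line."""
    samples = []
    with open(list_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 2:
                continue  # skip blank or malformed lines
            path, label = parts
            samples.append((path, int(label)))
    return samples

# e.g. samples = load_list("dataset/ILSVRC2012/train_list.txt")
```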
<a name="2"></a>
## 2. Common Datasets for Image Classification
Here we present a compilation of commonly used image classification datasets, which is continuously updated; supplements are welcome.
<a name="2.1"></a>
### 2.1 ImageNet1k
[ImageNet](https://image-net.org/) is a large-scale visual database for visual object recognition research, with over 14 million manually labeled images. ImageNet-1k is a subset of ImageNet that contains 1,000 categories, with 1,281,167 images in the training set and 50,000 in the validation set. Since 2010, ImageNet has held an annual image classification competition, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), with ImageNet-1k as its designated dataset. To date, ImageNet-1k has been one of the most significant contributors to the development of computer vision, and numerous initial models for downstream computer vision tasks are trained on it.
<a name="2.2"></a>
### 2.2 Flowers102
| Dataset | Size of Training Set | Size of Test Set | Number of Categories | Note |
|:-------:|:--------------------:|:----------------:|:--------------------:|:----:|
<a name="2.3"></a>
### 2.3 CIFAR10 / CIFAR100
The CIFAR-10 dataset comprises 60,000 color images of 10 classes at a 32x32 resolution, 6,000 images per class, with 5,000 in the training set and 1,000 in the validation set. The 10 classes are airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. The CIFAR-100 dataset extends CIFAR-10 and consists of 60,000 color images of 100 classes at a 32x32 resolution, 600 images per class, with 500 in the training set and 100 in the validation set.
<a name="2.4"></a>
### 2.4 MNIST
MNIST is a renowned dataset for handwritten digit recognition and serves as an introductory sample for deep learning in many sources. It contains 70,000 grayscale images of size 28x28: 60,000 in the training set and 10,000 in the validation set.
Website: http://yann.lecun.com/exdb/mnist/
<a name="2.5"></a>
### 2.5 NUS-WIDE
NUS-WIDE is a multi-label dataset. It contains 269,648 images and 81 categories, with each image labeled as one or more of the 81 categories.
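Because the annotations are multi-label, each image corresponds to a multi-hot vector over the 81 categories rather than a single class id. A small illustrative sketch (the category ids below are hypothetical):
```python
import numpy as np

NUM_CLASSES = 81  # NUS-WIDE has 81 categories

def to_multi_hot(label_ids, num_classes=NUM_CLASSES):
    """Convert a list of category ids, e.g. [3, 17], into a multi-hot vector."""
    vec = np.zeros(num_classes, dtype=np.float32)
    vec[label_ids] = 1.0
    return vec

# e.g. an image carrying two labels (hypothetical ids 3 and 17)
print(to_multi_hot([3, 17]).sum())  # 2.0
```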
This document elaborates on the dataset format adopted by PaddleClas for image recognition tasks, as well as other common datasets in this field.
------
## Catalogue
- [1. Dataset Format](#1)
- [2. Common Datasets for Image Recognition](#2)
- [2.1 General Datasets](#2.1)
- [2.2 Vertical Class Datasets](#2.2)
- [2.2.1 Animation Character Recognition](#2.2.1)
- [2.2.2 Product Recognition](#2.2.2)
- [2.2.3 Logo Recognition](#2.2.3)
- [2.2.4 Vehicle Recognition](#2.2.4)
<a name="1"></a>
## 1. Dataset Format
The dataset for the vector search, unlike those for classification tasks, is divided into the following three parts:
- Train dataset: Used to train the model to learn the image features involved.
- Gallery dataset: Used to provide the gallery data for the vector search task. It can be the same as the train or query dataset, or different; when it is the same as the train dataset, the category systems of the query dataset and the train dataset must be identical.
- Query dataset: Used to test the performance of the model. Features are usually extracted from each query image and then matched by distance against the gallery features to obtain recognition results, from which the metrics of the whole query dataset are calculated.
The above three datasets all adopt `txt` files for assignment. Taking the `CUB_200_2011` dataset as an example, the `train_list.txt` of the train dataset has the following format:
```
# Use "space" as the separator
...
train/99/Ovenbird_0136_92859.jpg 99 2
...
train/99/Ovenbird_0128_93366.jpg 99 6
...
```
The `test_list.txt` of the query dataset (which serves as both the gallery dataset and the query dataset in `CUB_200_2011`) follows the same format: each row is separated by a space, and the three columns stand for the path, label information, and unique id of the data.
**Note**:
1. When the gallery dataset and query dataset are the same, each sample needs a unique id (every image must have a different id; the row number can serve this purpose) so that the first retrieved result (the image itself, which requires no evaluation) can be removed during the subsequent evaluation of mAP, recall@1, and other metrics. The dataset class in the yaml configuration file is `VeriWild`.
2. When the gallery dataset and query dataset are different, there is no need to add a unique id. Both `query_list.txt` and `gallery_list.txt` contain two columns, which are the path and label information of the data. The dataset class in the yaml configuration file is `ImageNetDataset`.
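A minimal, hypothetical parser for the two layouts described above (for illustration only, not PaddleClas's actual loader):
```python
def load_reco_list(list_path, with_unique_id):
    """Parse a recognition list file.

    Three columns (path, label, unique id) when the gallery and query
    datasets are the same; two columns (path, label) otherwise.
    """
    samples = []
    with open(list_path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if with_unique_id:
                samples.append((parts[0], int(parts[1]), int(parts[2])))
            else:
                samples.append((parts[0], int(parts[1])))
    return samples

# e.g. train = load_reco_list("CUB_200_2011/train_list.txt", with_unique_id=True)
```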
<a name="2"></a>
## 2. Common Datasets for Image Recognition
Here we present a compilation of commonly used image recognition datasets, which is continuously updated; supplements are welcome.
<a name="2.1"></a>
### 2.1 General Datasets
- SOP: The SOP dataset is a common product dataset for general recognition research and metric learning research. It contains 120,053 images of 22,634 products downloaded from eBay.com: 59,551 images of 11,318 categories in the training set and 60,502 images of 11,316 categories in the validation set.
- Cars196: The Cars dataset contains 16,185 images of 196 categories of cars. The data is split into 8,144 training images and 8,041 query images, with each category divided roughly 50-50. Classes are typically at the level of make, model, and year, e.g., 2012 Tesla Model S or 2012 BMW M3 coupe.
- CUB_200_2011: The CUB_200_2011 dataset is a fine-grained dataset proposed by the California Institute of Technology (Caltech) in 2010 and is currently the benchmark dataset for fine-grained classification and recognition research. It contains 11,788 bird images of 200 subclasses, with 5,994 images in the train dataset and 5,794 in the query dataset. Each image provides label information, the bounding box of the bird, key part information, and bird attributes.
- In-shop Clothes: In-shop Clothes is one of the 4 subsets of the DeepFashion dataset. It is a seller-show image dataset in which multi-angle images of each product id are collected in the same folder. The dataset contains 7,982 items with 52,712 images, annotated with 463 attributes, bounding boxes, landmarks, and store descriptions.
<a name="2.2"></a>
### 2.2 Vertical Class Datasets
<a name="2.2.1"></a>
### 2.2.1 Animation Character Recognition
- iCartoonFace: iCartoonFace, developed by iQiyi (an online video platform), is the world's largest manually labeled detection and recognition dataset for cartoon characters. It contains more than 5,013 cartoon characters and 389,678 high-quality images. Compared with other datasets, it boasts large scale, high quality, rich diversity, and challenging difficulty, making it one of the most commonly used datasets for studying cartoon character recognition.
- Manga109: Manga109 is a dataset released in May 2020 for the study of cartoon character detection and recognition. It contains 21,142 images, which may not be used for commercial purposes. Manga109-s, a subset of this dataset, is available for industrial use, mainly for tasks such as text detection, sketch-based search, and character image generation.
Website: http://www.manga109.org/en/
- IIT-CFW: The IIT-CFW dataset contains a total of 8,928 labeled cartoon portraits of celebrity characters, covering 100 characters with varying numbers of portraits each. It also provides 1,000 real face photos (10 real portraits for each of 100 public figures). This dataset can be employed to study both animation character recognition and cross-modal search tasks.
<a name="2.2.2"></a>
### 2.2.2 Product Recognition
- AliProduct: The AliProduct dataset is the largest open-source product dataset. As an SKU-level image classification dataset, it contains 50,000 categories and 3 million images, ranking first in the industry on both counts. The dataset covers a large number of household goods, food, etc. Due to the lack of manual annotation, the data is noisy and unevenly distributed, with many similar product images.
- Product-10k: All images in the Product-10k dataset come from Jingdong Mall, covering 10,000 frequently purchased SKUs that are organized into a hierarchy, with nearly 190,000 images in total. In real application scenarios, the distribution of image volume is uneven. All images are manually checked and labeled by a team of production experts.
- DeepFashion-Inshop: The same as the In-shop Clothes dataset described above.
<a name="2.2.3"></a>
### 2.2.3 Logo Recognition
- Logo-2K+: Logo-2K+ is a dataset exclusively for logo image recognition, which contains 10 major categories, 2341 minor categories, and 167,140 images.
- Tsinghua-Tencent 100K: This dataset is a large traffic sign benchmark based on 100,000 Tencent Street View panoramas. It provides 100,000 images containing 30,000 traffic sign instances and covers a wide range of illumination and weather conditions. Each traffic sign in the benchmark is labeled with its category, bounding box, and pixel mask. A total of 222 categories (background class 0 plus 221 traffic sign classes) are incorporated.
<a name="2.2.4"></a>
### 2.2.4 Vehicle Recognition
- CompCars: The images, 136,726 of whole vehicles and 27,618 of vehicle parts, mainly come from web data and surveillance data. The web data covers 163 vehicle manufacturers and 1,716 vehicle models, and includes bounding boxes, viewing angles, and 5 attributes (maximum speed, displacement, number of doors, number of seats, and vehicle type). The surveillance data comprises 50,000 front-view images.
- BoxCars: The dataset contains a total of 21,250 vehicles, 63,750 images, 27 vehicle manufacturers, and 148 subcategories. All of them are derived from surveillance data.
Website: https://github.com/JakubSochor/BoxCars
- PKU-VD Dataset: The dataset contains two large vehicle datasets (VD1 and VD2) that capture images from real-world unrestricted scenes in two cities. VD1 is obtained from high-resolution traffic cameras, while images in VD2 are acquired from surveillance videos. The authors performed vehicle detection on the raw data to ensure that each image contains only one vehicle. Due to privacy constraints, all license plate numbers have been obscured with black overlays. All images are captured from the front view, and diverse attribute annotations are provided for each image, including identification number, precise vehicle model, and color. VD1 originally contained 1,097,649 images, 1,232 vehicle models, and 11 vehicle colors; after removing images with multiple vehicles inside and those taken from the rear, 846,358 images of 141,756 vehicles remain. VD2 contains 807,260 images, 79,763 vehicles, 1,112 vehicle models, and 11 vehicle colors.
* Q: Why are the metrics different for different GPU cards?
* A: Fleet is the default option for PaddleClas. Each GPU card is taken as a single trainer and deals with different images, which causes small differences in the final metrics. Single-card evaluation is suggested to get accurate results if you use `tools/eval.py`. You can also use `tools/eval_multi_platform.py` to evaluate models on multiple GPU cards, which is also supported on Windows and CPU.
>>
* Q: Why is `Mixup` or `Cutmix` not used even though I have already added the data operation in the configuration file?
* A: When using `Mixup` or `Cutmix`, you also need to add `use_mix: True` in the configuration file to make it work properly.
>>
* Q: During evaluation and inference, the pretrained model address is assigned, but the weights cannot be imported. Why?
* A: The prefix of the pretrained model is needed. For example, if the pretrained weights are located in `output/ResNet50_vd/19` with the filename `output/ResNet50_vd/19/ppcls.pdparams`, then `pretrained_model` in the configuration file needs to be `output/ResNet50_vd/19/ppcls`.
>>
* Q: Why are the metrics 0.3% lower than those shown in the model zoo for the `EfficientNet` series of models?
* A: The resize method is set to `Cubic` for `EfficientNet` (interpolation is set to 2 in OpenCV), while other models use `Bilinear` (interpolation is set to None in OpenCV). Therefore, you need to set the interpolation explicitly in `ResizeImage`. Specifically, the following configuration is a demo for EfficientNet.
```
VALID:
    batch_size: 16
    num_workers: 4
    file_list: "./dataset/ILSVRC2012/val_list.txt"
    data_dir: "./dataset/ILSVRC2012/"
    shuffle_seed: 0
    transforms:
        - DecodeImage:
            to_rgb: True
            to_np: False
            channel_first: False
        - ResizeImage:
            resize_short: 256
            interpolation: 2
        - CropImage:
            size: 224
        - NormalizeImage:
            scale: 1.0/255.0
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
        - ToCHWImage:
```
>>
* Q: Why is the error `TypeError: __init__() missing 1 required positional argument: 'sync_cycle'` reported when using visualdl under python2?
* A: `visualdl` currently only supports python3 and requires version `2.0` or higher. If your visualdl version is lower than 2.0, you can install visualdl 2.0 by `pip3 install visualdl==2.0.0b8 -i https://mirror.baidu.com/pypi/simple`.
**A**: The process is as follows:
- First, create a new model structure file under the folder `ppcls/arch/backbone/model_zoo/`, i.e., your own backbone; you can refer to `resnet.py` for model construction (a minimal skeleton is sketched after this list);
- Then add your own backbone class in `ppcls/arch/backbone/__init__.py`;
- Next, configure the yaml file for training; here you can refer to `ppcls/configs/ImageNet/ResNet/ResNet50.yaml`;
- Now you can start the training.
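For illustration, a minimal skeleton of such a backbone file; the name `MyNet` and its layers are hypothetical placeholders, not an existing PaddleClas model:
```python
# ppcls/arch/backbone/model_zoo/my_net.py (hypothetical file)
import paddle
import paddle.nn as nn

class MyNet(nn.Layer):
    def __init__(self, class_num=1000):
        super().__init__()
        # A tiny stand-in for a real feature extractor
        self.features = nn.Sequential(
            nn.Conv2D(3, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2D(1))
        self.fc = nn.Linear(64, class_num)

    def forward(self, x):
        x = self.features(x)
        x = paddle.flatten(x, 1)
        return self.fc(x)

def MyNet_small(**kwargs):
    # Factory function, mirroring the style of the entries in resnet.py
    return MyNet(**kwargs)
```
The factory function name (`MyNet_small` here) is what you would import in `__init__.py` and reference from the training yaml.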
### Q2.2: How to transfer the existing models and weights to your own classification tasks?
**A**:
The default parameters in the configuration files under `ppcls/configs/ImageNet/` in PaddleClas are the ImageNet-1k training parameters, which are not suitable for all datasets; the parameters need to be further tuned on this basis for your specific dataset.
### Q2.4 The resolution varies for different models in PaddleClas, so what is the standard?
**A**:
Based on ResNet50_vd, Baidu open-sourced its own large-scale classification pretraining model with 100,000 categories and 43 million images. The model is available at [download address](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_10w_pretrained.tar). Note that this pretrained model does not provide the final FC layer parameters, so it cannot be used directly for inference; it can, however, be used as a pretrained model to fine-tune on your own dataset. It has been verified that this pretrained model brings a significantly larger accuracy gain, up to 30% on some datasets, than the ResNet50_vd model pretrained on the ImageNet-1k dataset.
### Q1.5 How to accelerate when using C++ for inference deployment?
Here `m` is the `momentum`, the weighted value of the accumulated momentum, generally taken as `0.9`. When the value is less than `1`, the earlier a gradient is, the smaller its impact on the current update. For example, with momentum `m = 0.9`, at time `t` the weight of the gradient from `t-5` is `0.9 ^ 5 = 0.59049`, while that from `t-2` is `0.9 ^ 2 = 0.81`. Intuitively, gradient information that is too "far away" matters little for the current update, while "recent" historical gradients matter more.
By introducing momentum, the effect of historical updates is taken into account in parameter updates, thus speeding up convergence and reducing the loss oscillation caused by the `SGD` optimizer.
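A minimal sketch of this update rule (plain NumPy, for illustration only; the hyperparameter values are just examples):
```python
import numpy as np

def momentum_step(param, grad, velocity, lr=0.1, m=0.9):
    # v_t = m * v_{t-1} + grad_t ; param_t = param_{t-1} - lr * v_t
    velocity = m * velocity + grad
    param = param - lr * velocity
    return param, velocity

# After 5 steps, the gradient from step t-5 contributes with weight
# m ** 5 = 0.9 ** 5 = 0.59049 inside the velocity, matching the example above.
param, velocity = np.zeros(3), np.zeros(3)
for _ in range(5):
    grad = np.random.randn(3)
    param, velocity = momentum_step(param, grad, velocity)
```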
**A**:
The training data is a randomly selected subset of publicly available datasets such as COCO, Object365, RPC, and LogoDet. We are currently introducing an ultra-lightweight mainbody detection model in version 2.3, which can be found in [Mainbody Detection](../../en/image_recognition_pipeline/mainbody_detection_en.md#2-model-selection).
#### Q1.4.3: Are there any false detections in some scenarios with the current mainbody detection model?
`circle loss` is a unified form of sample-pair learning and classification learning, and `triplet loss` can be added if classification learning is used.
#### Q1.5.2 Which recognition model should be used if the images do not belong to the four open-source domains?
**A**:
**A**:
- For `Mixup`, please refer to [Mixup](../../../ppcls/configs/ImageNet/DataAugment/ResNet50_Mixup.yaml#L63-L65); for `Cutmix`, please refer to [Cutmix](../../../ppcls/configs/ImageNet/DataAugment/ResNet50_Cutmix.yaml#L63-L65).
- The training accuracy (Acc) metric cannot be calculated when using `Mixup` or `Cutmix` for training, so you need to remove the `Metric.Train.TopkAcc` field in the configuration file; please refer to [Metric.Train.TopkAcc](../../../ppcls/configs/ImageNet/DataAugment/ResNet50_Cutmix.yaml#L125-L128).
#### Q2.1.9: What are the fields `Global.pretrain_model` and `Global.checkpoints` used for in the training configuration file yaml?
#### Q2.4.1: Why is `Illegal instruction` reported during the recognition inference?
**A**: If you are using the release/2.2 branch, it is recommended to update it to the release/2.3 branch, where we replaced the Möbius search module with the faiss search module, as described in the [Vector Search Tutorial](../image_recognition_pipeline/vector_search_en.md). If you still have problems, you can contact us in the WeChat group or raise an issue on GitHub.
#### Q2.4.2: How can recognition models be fine-tuned to train on the basis of pre-trained models?
**A**: The fine-tuning of the recognition model is similar to that of the classification model. The recognition model can be loaded with a pretrained product model, and the training process can be found in [recognition model training](../../models_training/recognition_en.md); we will continue to refine the documentation.
#### Q2.4.3: Why does it fail to run all mini-batches in each epoch when training metric learning?
#### Q2.5.2: Do I need to rebuild the index to add new base data?
**A**: Starting from the release/2.3 branch, we have replaced the Möbius search module with the faiss search module, which supports adding base (gallery) data without rebuilding the index, as described in the [Vector Search Tutorial](../image_recognition_pipeline/vector_search_en.md).
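Conceptually, incremental addition works like the generic faiss sketch below (plain faiss, not PaddleClas's wrapper; the dimensions and data are arbitrary):
```python
import faiss
import numpy as np

dim = 512                          # feature dimension (arbitrary here)
index = faiss.IndexFlatL2(dim)     # brute-force L2 index
base = np.random.rand(1000, dim).astype("float32")
index.add(base)                    # build the initial gallery

new_vecs = np.random.rand(10, dim).astype("float32")
index.add(new_vecs)                # append new base data; no rebuild needed
print(index.ntotal)                # 1010
```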
#### Q2.5.3: How to deal with the error `clang: error: unsupported option '-fopenmp'` when recompiling `index.so` on Mac?
**A**:
If you are using the release/2.2 branch, it is recommended to update it to the release/2.3 branch, where we replaced the Möbius search module with the faiss search module, as described in the [Vector Search Tutorial](../image_recognition_pipeline/vector_search_en.md). If you still have problems, you can contact us in the user WeChat group or raise an issue on GitHub.
#### Q2.5.4: How to set the parameter `pq_size` when building the index for the base library?
#### Q2.6.1: How to add the parameters of a module that is enabled by hub serving?
**A**: See [hub serving parameters](../../../deploy/hubserving/clas/params.py) for more details.
#### Q2.6.2: Why is the result not accurate enough when exporting the inference model for inference deployment?
- `params_filename`: this parameter is used to specify the path of the `.pdiparams` file under `model_dir`.
- `save_file`: this parameter is used to specify the path where the converted model is saved.
For the conversion of a non-`combined` format inference model exported from a static graph (usually containing the file `__model__` and multiple parameter files), and for more parameter descriptions, please refer to the official documentation of [paddle2onnx](https://github.com/PaddlePaddle/Paddle2ONNX/blob/develop/README.md#parameters).
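For example, a typical invocation for a `combined`-format inference model might look like this (all paths below are placeholders):
```shell
paddle2onnx --model_dir ./inference \
            --model_filename inference.pdmodel \
            --params_filename inference.pdiparams \
            --save_file ./model.onnx \
            --opset_version 11
```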
- Exporting ONNX format models directly from the model networking code.
Take the model networking code of dynamic graphs as an example: the model class is a subclass that inherits from `paddle.nn.Layer`, and the model can be exported as sketched below.
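A minimal illustration (the `SimpleNet` class and its shapes are hypothetical placeholders; `paddle.onnx.export` is Paddle's dynamic-graph ONNX export API):
```python
import paddle
import paddle.nn as nn

class SimpleNet(nn.Layer):  # hypothetical example network
    def __init__(self, class_num=10):
        super().__init__()
        self.fc = nn.Linear(784, class_num)

    def forward(self, x):
        return self.fc(x)

model = SimpleNet()
x_spec = paddle.static.InputSpec([None, 784], "float32", "x")
# Saves the exported model as "simple_net.onnx"
paddle.onnx.export(model, "simple_net", input_spec=[x_spec])
```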
- Q: Why `TypeError: __init__() missing 1 required positional argument: 'sync_cycle'` is reported when using visualdl under python2?
- A: Currently visualdl only supports running under python3 with a required version of 2.0 or higher. If visualdl is not the right version, you can install it as follows: `pip3 install visualdl -i https://mirror.baidu.com/pypi/simple`
> >
- Q: How to train the model on Windows or CPU?
- A: You can refer to the [Getting Started Tutorial](../models_training/classification_en.md) for detailed tutorials on model training, evaluation, and inference in Linux, Windows, CPU, and other environments.
> >
- Q: Why is `Error: Pass tensorrt_subgraph_pass has not been registered` reported when using `deploy/python/predict_cls.py` for model prediction?
- A: If you want to use TensorRT for model prediction and inference, you need to install or compile PaddlePaddle with TensorRT by yourself. For Linux, Windows, and macOS users, you can refer to [download inference library](https://paddleinference.paddlepaddle.org.cn/user_guides/download_lib.html). If there is no required version, you need to compile and install it locally, which is detailed in [source code compilation](https://paddleinference.paddlepaddle.org.cn/user_guides/source_compile.html).
> >
- Q: How to train with Automatic Mixed Precision (AMP) during training?
- A: You can refer to [ResNet50_fp16.yaml](../../../ppcls/configs/ImageNet/ResNet/ResNet50_fp16.yaml). Specifically, if you want your configuration file to support automatic mixed precision during model training, you can add the following information to the file.
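The relevant fields follow the pattern below, as in `ResNet50_fp16.yaml` of release/2.3 (check the yaml in your branch for the exact values):
```yaml
# mixed precision training
AMP:
  scale_loss: 128.0
  use_dynamic_loss_scaling: True
```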