Commit c8c3ddda authored by qingqing01, committed by GitHub

Fix some docs. (#2698)

* Fix some docs
* Unify COCO and VOC
* Change PASCAL to Pascal
* Unify dataset/coco
Parent a134bf74
@@ -79,7 +79,7 @@ FasterRCNNTrainFeed:
  - !PadBatch
    pad_to_stride: 128
  dataset:
-    dataset_dir: data/coco
+    dataset_dir: dataset/coco
    annotation: annotations/instances_train2017.json
    image_dir: train2017
  num_workers: 2
@@ -90,7 +90,7 @@ FasterRCNNEvalFeed:
  - !PadBatch
    pad_to_stride: 128
  dataset:
-    dataset_dir: data/coco
+    dataset_dir: dataset/coco
    annotation: annotations/instances_val2017.json
    image_dir: val2017
  num_workers: 2
...
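Both feeds now resolve their data relative to `dataset/coco`. A quick way to confirm that an edited config picks up the new path is to load it with plain PyYAML; this is a hedged sketch (the config filename is a placeholder, and the tolerant loader is only needed here because the real constructors for tags such as `!PadBatch` live inside PaddleDetection):

```python
import yaml  # PyYAML

class TolerantLoader(yaml.SafeLoader):
    """SafeLoader that turns unknown tags such as !PadBatch into plain values."""

def _construct_any(loader, tag_suffix, node):
    if isinstance(node, yaml.MappingNode):
        return loader.construct_mapping(node)
    if isinstance(node, yaml.SequenceNode):
        return loader.construct_sequence(node)
    return loader.construct_scalar(node)

TolerantLoader.add_multi_constructor("!", _construct_any)

# Placeholder path -- substitute whichever config file you actually edited.
with open("configs/faster_rcnn_r50_fpn_1x.yml") as f:
    cfg = yaml.load(f, Loader=TolerantLoader)

for feed in ("FasterRCNNTrainFeed", "FasterRCNNEvalFeed"):
    dataset = cfg[feed]["dataset"]
    # Expect dataset/coco after this change, not data/coco.
    print(feed, dataset["dataset_dir"], dataset["annotation"])
```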
@@ -28,7 +28,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
-"/home/yang/models/PaddleCV/object_detection\n"
+"/home/yang/models/PaddleCV/PaddleDetection\n"
]
}
],
@@ -111,13 +111,13 @@ The corresponding(generated) YAML snippet is as follows, note this is the config
```yaml
RPNHead:
-  test_prop:
+  test_proposal:
    eta: 1.0
    min_size: 0.1
    nms_thresh: 0.5
    post_nms_top_n: 1000
    pre_nms_top_n: 6000
-  train_prop:
+  train_proposal:
    eta: 1.0
    min_size: 0.1
    nms_thresh: 0.5
...
@@ -103,13 +103,13 @@ class RPNHead(object):
```yaml
RPNHead:
-  test_prop:
+  test_proposal:
    eta: 1.0
    min_size: 0.1
    nms_thresh: 0.5
    post_nms_top_n: 1000
    pre_nms_top_n: 6000
-  train_prop:
+  train_proposal:
    eta: 1.0
    min_size: 0.1
    nms_thresh: 0.5
...
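The renamed keys mirror the proposal-related arguments on the Python side. As a rough illustration of that correspondence (a sketch only, not the actual ppdet implementation; the constructor signature and the way defaults are filled in are assumptions):

```python
class RPNHead(object):
    # Sketch: each top-level key under `RPNHead:` in the YAML is assumed to be
    # passed to the constructor as the keyword argument of the same name, which
    # is why `test_prop`/`train_prop` were renamed to match.
    def __init__(self, train_proposal=None, test_proposal=None):
        # Test-time defaults mirror the generated snippet above.
        self.test_proposal = test_proposal or dict(
            eta=1.0, min_size=0.1, nms_thresh=0.5,
            pre_nms_top_n=6000, post_nms_top_n=1000)
        # Train-time settings follow the same pattern (values elided above).
        self.train_proposal = train_proposal or {}
```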
@@ -26,7 +26,7 @@ following data sources are supported:
Loads `COCO` type datasets with directory structures like this:
```
-data/coco/
+dataset/coco/
├── annotations
│   ├── instances_train2017.json
│   ├── instances_val2017.json
@@ -167,7 +167,7 @@ The main APIs are as follows:
#### Canned Datasets
-Preset for common datasets, e.g., `MS-COCO` and `Pascal Voc` are included. In
+Preset for common datasets, e.g., `COCO` and `Pascal Voc` are included. In
most cases, user can simply use these canned dataset as is. Moreover, the
whole data pipeline is fully customizable through the yaml configuration files.
...
@@ -14,7 +14,7 @@
The dataset currently comes in COCO2014 and COCO2017 releases and consists mainly of JSON annotation files and image files, organized as follows:
```
-data/coco/
+dataset/coco/
├── annotations
│   ├── instances_train2014.json
│   ├── instances_train2017.json
...
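Before training against a locally prepared copy, it can be worth confirming that the directory matches the layout above; a minimal sketch, assuming the default `dataset/coco` root and the 2017 split names shown in the configs earlier in this commit:

```python
from pathlib import Path

# Default COCO root used by the configs; adjust if your copy lives elsewhere.
root = Path("dataset/coco")
expected = [
    "annotations/instances_train2017.json",
    "annotations/instances_val2017.json",
    "train2017",
    "val2017",
]
missing = [rel for rel in expected if not (root / rel).exists()]
print("missing entries:", missing or "none")
```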
@@ -13,7 +13,7 @@
## Introduction
This document covers how to install PaddleDetection, its dependencies
-(including PaddlePaddle), together with COCO and PASCAL VOC dataset.
+(including PaddlePaddle), together with COCO and Pascal VOC dataset.
For general information about PaddleDetection, please see [README.md](../README.md).
@@ -68,12 +68,12 @@ with the following commands:
```
cd <path/to/clone/models>
git clone https://github.com/PaddlePaddle/models
-cd models/PaddleCV/object_detection
+cd models/PaddleCV/PaddleDetection
```
**Install Python dependencies:**
-Required python packages are specified in [requirements.txt](./requirements.txt), and can be installed with:
+Required python packages are specified in [requirements.txt](../requirements.txt), and can be installed with:
```
pip install -r requirements.txt
@@ -89,31 +89,31 @@ python ppdet/modeling/tests/test_architectures.py
## Datasets
-PaddleDetection includes support for [MSCOCO](http://cocodataset.org) and [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/) by default, please follow these instructions to set up the dataset.
+PaddleDetection includes support for [COCO](http://cocodataset.org) and [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) by default, please follow these instructions to set up the dataset.
**Create symlinks for local datasets:**
-Default dataset path in config files is `data/coco` and `data/voc`, if the
+Default dataset path in config files is `dataset/coco` and `dataset/voc`, if the
datasets are already available on disk, you can simply create symlinks to
their directories:
```
-ln -sf <path/to/coco> <path/to/paddle_detection>/data/coco
+ln -sf <path/to/coco> <path/to/paddle_detection>/dataset/coco
-ln -sf <path/to/voc> <path/to/paddle_detection>/data/voc
+ln -sf <path/to/voc> <path/to/paddle_detection>/dataset/voc
```
**Download datasets manually:**
On the other hand, to download the datasets, run the following commands:
-- MS-COCO
+- COCO
```
cd dataset/coco
./download.sh
```
-- PASCAL VOC
+- Pascal VOC
```
cd dataset/voc
@@ -123,8 +123,8 @@ cd dataset/voc
**Download datasets automatically:**
If a training session is started but the dataset is not setup properly (e.g,
-not found in `data/coc` or `data/voc`), PaddleDetection can automatically
+not found in `dataset/coco` or `dataset/voc`), PaddleDetection can automatically
-download them from [MSCOCO-2017](http://images.cocodataset.org) and
+download them from [COCO-2017](http://images.cocodataset.org) and
[VOC2012](http://host.robots.ox.ac.uk/pascal/VOC), the decompressed datasets
will be cached in `~/.cache/paddle/dataset/` and can be discovered automatically
subsequently.
...
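To see whether anything has already been auto-downloaded into that cache, a simple hedged check (it only lists the default cache directory mentioned above; nothing about the sub-directory layout is assumed):

```python
from pathlib import Path

# Default cache location mentioned above; empty until a dataset has been
# auto-downloaded by a training run.
cache = Path.home() / ".cache" / "paddle" / "dataset"
if cache.exists():
    for entry in sorted(cache.iterdir()):
        print(entry)
else:
    print("nothing cached yet at", cache)
```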
@@ -77,7 +77,7 @@ randomly color distortion, randomly cropping, randomly expansion, randomly inter
**Notes:** In RetinaNet, the base LR is changed to 0.01 for minibatch size 16.
-### SSD on PascalVOC
+### SSD on Pascal VOC
| Backbone | Size | Image/gpu | Lr schd | Box AP | Download |
| :----------- | :--: | :-----: | :-----: | :----: | :-------: |
...