# Getting Started

For setting up the running environment, please refer to [installation
instructions](INSTALL.md).


## Training

#### Single-GPU Training


```bash
export CUDA_VISIBLE_DEVICES=0
export PYTHONPATH=$PYTHONPATH:.
python tools/train.py -c configs/faster_rcnn_r50_1x.yml
```

#### Multi-GPU Training

```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=$PYTHONPATH:.
python tools/train.py -c configs/faster_rcnn_r50_1x.yml
```

#### CPU Training

```bash
export CPU_NUM=8
export PYTHONPATH=$PYTHONPATH:.
python tools/train.py -c configs/faster_rcnn_r50_1x.yml -o use_gpu=false
```

##### Optional arguments

- `-r` or `--resume_checkpoint`: Checkpoint path for resuming training, e.g. `-r output/faster_rcnn_r50_1x/10000`
- `--eval`: Whether to perform evaluation during training. Default is `False`.
- `--output_eval`: Directory for evaluation outputs when evaluating during training. Default is the current directory.
- `-d` or `--dataset_dir`: Dataset path, same as `dataset_dir` in the configs, e.g. `-d dataset/coco`
- `-c`: Config file to use; all config files are stored in `configs/`
- `-o`: Override configuration options from the config file, e.g. `-o max_iters=180000`. Options set with `-o` take priority over those in the file given by `-c`.
- `--use_tb`: Whether to record data with [tb-paddle](https://github.com/linshuliang/tb-paddle) for display in TensorBoard. Default is `False`.
- `--tb_log_dir`: tb-paddle logging directory for scalars. Default is `tb_log_dir/scalar`.


##### Examples

- Perform evaluation in training
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=$PYTHONPATH:.
python -u tools/train.py -c configs/faster_rcnn_r50_1x.yml --eval
```

Alternating between training and evaluation is possible: simply pass `--eval` and the model is
evaluated every `snapshot_iter` iterations; the interval can be changed via `snapshot_iter` in the configuration file.
If the evaluation dataset is large and slows training down, we suggest evaluating less frequently or evaluating after training finishes.
When evaluation is performed during training, the model with the highest mAP so far is saved at each `snapshot_iter` as `best_model`, which is stored in the same path as `model_final`.
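
For instance, a minimal sketch of evaluating less frequently from the command line; it assumes `snapshot_iter` is a top-level config option that `-o` can override, like `max_iters` above:
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=$PYTHONPATH:.
# Evaluate (and save a snapshot) every 20000 iterations instead of the config default
python -u tools/train.py -c configs/faster_rcnn_r50_1x.yml \
                         --eval \
                         -o snapshot_iter=20000
```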


- Configure dataset path
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=$PYTHONPATH:.
python -u tools/train.py -c configs/faster_rcnn_r50_1x.yml \
                         -d dataset/coco
```

- Fine-tune on another task

When using a pre-trained model to fine-tune on another task, the pre-trained parameters to exclude can be set with `finetune_exclude_pretrained_params` in the YAML config or with `-o finetune_exclude_pretrained_params` on the command line.

```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=$PYTHONPATH:.
python -u tools/train.py -c configs/faster_rcnn_r50_1x.yml \
                         -o pretrain_weights=output/faster_rcnn_r50_1x/model_final/ \
                            finetune_exclude_pretrained_params=['cls_score','bbox_pred']
```
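
- Resume training from a checkpoint

A minimal sketch of the `-r`/`--resume_checkpoint` option described above; the checkpoint path is the illustrative one from that description and should point to an actually saved iteration.

```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=$PYTHONPATH:.
# Resume from the checkpoint saved at iteration 10000 (illustrative path)
python -u tools/train.py -c configs/faster_rcnn_r50_1x.yml \
                         -r output/faster_rcnn_r50_1x/10000
```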

##### NOTES

- `CUDA_VISIBLE_DEVICES` selects which GPUs are used, e.g. `export CUDA_VISIBLE_DEVICES=0,1,2,3`. See the [FAQ](#faq) for how settings scale with the number of GPUs.
- Dataset is stored in `dataset/coco` by default (configurable).
- Datasets are downloaded automatically and cached in `~/.cache/paddle/dataset` if not found locally.
- Pretrained models are downloaded automatically and cached in `~/.cache/paddle/weights`.
- Model checkpoints are saved in `output` by default (configurable).
- When fine-tuning, users can set `pretrain_weights` to a model published by PaddlePaddle. Parameters whose names match the fields in `finetune_exclude_pretrained_params` are skipped during loading, and the fields support wildcard matching. For details, please refer to [Transfer Learning](TRANSFER_LEARNING.md).
- For the hyper-parameters in use, please refer to the [configs](../configs).
- Training R-CNN models on CPU is not supported with PaddlePaddle <= 1.5.1; this will be fixed in a later version.



## Evaluation

```bash
# run on GPU with:
export PYTHONPATH=$PYTHONPATH:.
export CUDA_VISIBLE_DEVICES=0
python tools/eval.py -c configs/faster_rcnn_r50_1x.yml
```

#### Optional arguments

- `-d` or `--dataset_dir`: Dataset path, same as `dataset_dir` in the configs, e.g. `-d dataset/coco`
- `--output_eval`: Evaluation output directory. Default is the current directory.
- `-o`: Override configuration options from the config file, e.g. `-o weights=output/faster_rcnn_r50_1x/model_final`
- `--json_eval`: Whether to evaluate from existing `bbox.json` or `mask.json` files. Default is `False`. The directory containing the JSON files is given by the `-f` argument.

#### Examples

- Evaluate with specified weights and dataset paths
```bash
# run on GPU with:
export PYTHONPATH=$PYTHONPATH:.
export CUDA_VISIBLE_DEVICES=0
python -u tools/eval.py -c configs/faster_rcnn_r50_1x.yml \
                        -o weights=output/faster_rcnn_r50_1x/model_final \
                        -d dataset/coco
```

- Evaluate from JSON files
```bash
# run on GPU with:
export PYTHONPATH=$PYTHONPATH:.
export CUDA_VISIBLE_DEVICES=0
python tools/eval.py -c configs/faster_rcnn_r50_1x.yml \
                     --json_eval \
                     -f evaluation/
```

The JSON files must be named `bbox.json` or `mask.json` and placed in the `evaluation/` directory. If the `-f` argument is omitted, the current directory is used by default.
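
For example, a layout like the following would be picked up by the command above (the source path of the JSON file is a placeholder):

```bash
# Place previously generated results where --json_eval expects them
mkdir -p evaluation
cp /path/to/saved/bbox.json evaluation/   # and mask.json as well for Mask R-CNN models
```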

#### NOTES

- Checkpoint is loaded from `output` by default (configurable)
- Multi-GPU evaluation for R-CNN and SSD models is not supported at the
moment, but it is a planned feature


## Inference


- Run inference on a single image:

```bash
# run on GPU with:
export PYTHONPATH=$PYTHONPATH:.
export CUDA_VISIBLE_DEVICES=0
python tools/infer.py -c configs/faster_rcnn_r50_1x.yml --infer_img=demo/000000570688.jpg
```

- Multi-image inference:

```bash
# run on GPU with:
export PYTHONPATH=$PYTHONPATH:.
export CUDA_VISIBLE_DEVICES=0
python tools/infer.py -c configs/faster_rcnn_r50_1x.yml --infer_dir=demo
```

#### Optional arguments

- `--output_dir`: Directory where the output visualization files are saved.
- `--draw_threshold`: Score threshold for keeping detections in the visualization. Default is 0.5.
- `--save_inference_model`: If set, save an inference model to `output_dir`.
- `--use_tb`: Whether to record data with [tb-paddle](https://github.com/linshuliang/tb-paddle) for display in TensorBoard. Default is `False`.
- `--tb_log_dir`: tb-paddle logging directory for images. Default is `tb_log_dir/image`.

#### Examples

- Specify the output directory and drawing threshold

```bash
# run on GPU with:
export PYTHONPATH=$PYTHONPATH:.
export CUDA_VISIBLE_DEVICES=0
python tools/infer.py -c configs/faster_rcnn_r50_1x.yml \
                      --infer_img=demo/000000570688.jpg \
                      --output_dir=infer_output/ \
                      --draw_threshold=0.5 \
                      -o weights=output/faster_rcnn_r50_1x/model_final \
                      --use_tb=True
```

The visualization files are saved in `output` by default; to use a different path, simply add the `--output_dir=` flag.
`--draw_threshold` is an optional argument and defaults to 0.5.
Different thresholds produce different results depending on the computation of [NMS](https://ieeexplore.ieee.org/document/1699659).
To run inference with a custom model, set `-o weights` to its path.
`--use_tb` is an optional argument; when it is `True`, tb-paddle records the data in the logging directory so the results can be viewed in TensorBoard.
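
A minimal sketch of viewing those records, assuming a standard TensorBoard installation and the default `--tb_log_dir` listed above:

```bash
# Point TensorBoard at the tb-paddle image logs written during inference
tensorboard --logdir=tb_log_dir/image --port=6006
# then open http://localhost:6006 in a browser
```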

- Save inference model

```bash
# run on GPU with:
export CUDA_VISIBLE_DEVICES=0
export PYTHONPATH=$PYTHONPATH:.
python tools/infer.py -c configs/faster_rcnn_r50_1x.yml \
                      --infer_img=demo/000000570688.jpg \
                      --save_inference_model
```

Setting `--save_inference_model` saves an inference model that can be loaded by the PaddlePaddle prediction library.

## FAQ

**Q:**  Why do I get `NaN` loss values during single-GPU training? </br>
**A:**  The default learning rate is tuned for multi-GPU training (8 GPUs); it must
be adapted for single-GPU training accordingly (e.g., divided by 8). A single-GPU sketch is given after the table below.
The scaling rules are as follows; the settings in each row are equivalent: </br>

| GPU number  | Learning rate  | Max_iters | Milestones       |  
| :---------: | :------------: | :-------: | :--------------: |  
| 2           | 0.0025         | 720000    | [480000, 640000] |
| 4           | 0.005          | 360000    | [240000, 320000] |
| 8           | 0.01           | 180000    | [120000, 160000] |
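
For instance, a single-GPU run following the same scaling would use a learning rate of 0.00125, `max_iters` of 1440000 and milestones [960000, 1280000]. A sketch extrapolated from the table; the learning rate and milestones are assumed to be edited directly in the YAML, while `max_iters` can be overridden with `-o` as shown earlier:

```bash
# Single-GPU training with the schedule scaled by 1/8 (edit the learning rate
# and milestones in configs/faster_rcnn_r50_1x.yml to 0.00125 / [960000, 1280000])
export CUDA_VISIBLE_DEVICES=0
export PYTHONPATH=$PYTHONPATH:.
python tools/train.py -c configs/faster_rcnn_r50_1x.yml -o max_iters=1440000
```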

**Q:**  How to reduce GPU memory usage? </br>
**A:**  Setting the environment variable `FLAGS_conv_workspace_size_limit` to a smaller
number can reduce the GPU memory footprint without affecting training speed.
Take Mask-RCNN (R50) as an example: with `export FLAGS_conv_workspace_size_limit=512`,
the batch size can reach 4 per GPU (Tesla V100 16GB).
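
A minimal sketch of combining the flag with a training launch (the Mask R-CNN config path is an assumption; any config is launched the same way):

```bash
# Limit the cuDNN convolution workspace (in MB) before launching training
export FLAGS_conv_workspace_size_limit=512
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export PYTHONPATH=$PYTHONPATH:.
python tools/train.py -c configs/mask_rcnn_r50_1x.yml
```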


**Q:**  How to change data preprocessing? </br>
**A:**  Set `sample_transforms` in the configuration. Note that **the entire transform pipeline** needs to be specified in the configuration,
e.g. `DecodeImage`, `NormalizeImage` and `Permute` for RCNN models. For a detailed description, please refer
to [config_example](config_example).