English | [简体中文](QUICK_STARTED_cn.md)

# Quick Start

This tutorial fine-tunes a pretrained detection model on a tiny dataset so that users can quickly get a working model and learn PaddleDetection. The model can be trained in around 20 minutes with good performance.

- **Note: before getting started, you need to specify the GPU device as follows.**

```bash
export CUDA_VISIBLE_DEVICES=0
```
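
If the machine has more than one GPU, several cards can be made visible in the same way. This is only a sketch of the standard CUDA environment variable, assuming a multi-GPU machine:

```bash
# Expose several GPUs instead of a single one (hypothetical 4-GPU machine).
export CUDA_VISIBLE_DEVICES=0,1,2,3
```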

## Data Preparation

The dataset comes from [Kaggle](https://www.kaggle.com/mbkinaci/fruit-images-for-object-detection) and contains 240 training images and 60 test images. The categories are apple, orange and banana. Download the dataset [here](https://dataset.bj.bcebos.com/PaddleDetection_demo/fruit-detection.tar) and uncompress it after downloading. The data preparation script is located at [download_fruit.py](../../dataset/fruit/download_fruit.py); run it as follows:

```bash
python dataset/fruit/download_fruit.py
```
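
After the download finishes, a quick sanity check can confirm the data landed in `dataset/fruit`. The sketch below assumes the VOC-style layout used by this demo (image/annotation folders plus `train.txt`, `val.txt` and `label_list.txt`); adjust the paths if your copy differs:

```bash
# Optional: inspect the extracted dataset (layout is an assumption, see above).
ls dataset/fruit
# Count the train/val entries; they should roughly match the 240/60 split.
wc -l dataset/fruit/train.txt dataset/fruit/val.txt
```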

Training:

```bash
python -u tools/train.py -c configs/yolov3_mobilenet_v1_fruit.yml --eval
```

Use `yolov3_mobilenet_v1` to fine-tune a model pretrained on the COCO dataset.

Meanwhile, loss and mAP can be observed on VisualDL by setting `--use_vdl` and `--vdl_log_dir`. Note that VisualDL requires Python >= 3.5.

```bash
python -u tools/train.py -c configs/yolov3_mobilenet_v1_fruit.yml \
                        --use_vdl=True \
                        --vdl_log_dir=vdl_fruit_dir/scalar \
                        --eval
```
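
If the `visualdl` command used in the next step is not available yet, it can usually be installed from PyPI:

```bash
# Install VisualDL (remember it requires Python >= 3.5, as noted above).
python -m pip install visualdl
```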

Then observe the loss and mAP curves through the VisualDL command:

```bash
visualdl --logdir vdl_fruit_dir/scalar/ --host <host_IP> --port <port_num>
```
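
As a minimal concrete example, assuming port 8040 is free and the dashboard should be reachable from other machines:

```bash
# Serve the fruit training logs on all network interfaces, port 8040 (arbitrary choice).
visualdl --logdir vdl_fruit_dir/scalar/ --host 0.0.0.0 --port 8040
```

The curves can then be viewed in a browser at `http://<host_IP>:8040`.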

The result on VisualDL is shown below:

<div align="center">
  <img src='../images/visualdl_fruit.jpg' width='800'>
</div>

The trained model can be downloaded [here](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_fruit.tar).
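
For example, to fetch and unpack the released model from the command line instead of training it yourself (a plain `wget`/`tar` sketch, nothing PaddleDetection-specific):

```bash
# Download and extract the released fruit detection model.
wget https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_fruit.tar
tar -xf yolov3_mobilenet_v1_fruit.tar
```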

Evaluation:

```bash
python -u tools/eval.py -c configs/yolov3_mobilenet_v1_fruit.yml
```
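
The config decides which weights are evaluated. To point evaluation at a specific set of weights, the same `-o weights=` override used by the inference command below should also work for `tools/eval.py` (an assumption about shared config overrides); for example, with the released fruit model:

```bash
# Evaluate an explicit set of weights (here, the released fruit model) via the
# same -o weights= override used by tools/infer.py below.
python -u tools/eval.py -c configs/yolov3_mobilenet_v1_fruit.yml \
                        -o weights=https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_fruit.tar
```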

Inference:

```bash
python -u tools/infer.py -c configs/yolov3_mobilenet_v1_fruit.yml \
                         -o weights=https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_fruit.tar \
                         --infer_img=demo/orange_71.jpg
```
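
To run inference over a whole folder of images rather than a single file, `tools/infer.py` also accepts an `--infer_dir` argument in PaddleDetection (flag names can differ across versions; check `python tools/infer.py --help`). A sketch with a hypothetical image folder:

```bash
# Run the fruit model over every image in a folder (demo/fruits/ is hypothetical)
# and write the visualized results to infer_output/.
python -u tools/infer.py -c configs/yolov3_mobilenet_v1_fruit.yml \
                         -o weights=https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_fruit.tar \
                         --infer_dir=demo/fruits \
                         --output_dir=infer_output
```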

The input image and the inference result are shown below:

<div align="center">
  <img src='../../demo/orange_71.jpg' width='600'>
</div>

<div align="center">
  <img src='../images/orange_71_detection.jpg' width='600'>
</div>

For detailed information on training and evaluation, please refer to [GETTING_STARTED.md](GETTING_STARTED.md).