English | [简体中文](README.md)

Documentation: [https://paddledetection.readthedocs.io](https://paddledetection.readthedocs.io)

# PaddleDetection

PaddleDetection is an end-to-end object detection development kit based on PaddlePaddle, which
aims to help developers through the whole workflow of training models, optimizing performance and
inference speed, and deploying models. PaddleDetection provides a variety of object detection
architectures in a modular design, as well as a wealth of data augmentation methods, network
components, loss functions, etc. With practical features such as model compression and
multi-platform deployment, PaddleDetection supports practical projects such as industrial quality
inspection, remote sensing image object detection, and automated inspection.

**All models in PaddleDetection now require PaddlePaddle 1.7 or higher, or an appropriate develop version.**
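
A quick way to confirm that the installed PaddlePaddle satisfies this requirement (a minimal sketch; the [Installation guide](docs/tutorials/INSTALL.md) covers installation itself):

```python
# Sanity-check the PaddlePaddle installation before using PaddleDetection.
import paddle
import paddle.fluid as fluid

print(paddle.__version__)   # expect 1.7.x, 1.8.x, or a develop build

# Optional: PaddlePaddle's built-in end-to-end installation check (1.x API).
fluid.install_check.run_check()
```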

<div align="center">
  <img src="docs/images/000000570688.jpg" />
</div>


## Introduction

Features:

- Rich models:

  PaddleDetection provides a rich set of models, including 100+ pre-trained models
covering object detection, instance segmentation, face detection, etc. It includes
champion models as well as practical detection models for cloud and edge devices.

- Production Ready:

  Key operations are implemented in C++ and CUDA, which, together with PaddlePaddle's
highly efficient inference engine, enables easy deployment in server environments.

- Highly Flexible:

  Components are designed to be modular. Model architectures, as well as data
preprocessing pipelines, can be easily customized with simple configuration
changes; a minimal configuration-driven sketch follows this list.

- Performance Optimized:

  With the help of the underlying PaddlePaddle framework, faster training and a
reduced GPU memory footprint are achieved. Notably, YOLOv3 training is
much faster compared to other frameworks. As another example, with Mask-RCNN
(ResNet50) we managed to fit up to 4 images per GPU (Tesla V100 16GB) during
multi-GPU training.

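The sketch below illustrates this configuration-driven, modular workflow. It is a minimal example rather than the full training entry point: it assumes the `ppdet.core.workspace` helpers and the `configs/yolov3_mobilenet_v1.yml` file shipped with this repo, and the overridden field names are illustrative.

```python
# Minimal sketch of the configuration-driven workflow (assumes the ppdet
# package from this repo and configs/yolov3_mobilenet_v1.yml).
from ppdet.core.workspace import load_config, merge_config, create

# Every component (architecture, backbone, head, reader, ...) is declared
# in the YAML file and registered in a global workspace.
cfg = load_config('configs/yolov3_mobilenet_v1.yml')

# Fields can be overridden programmatically instead of editing the YAML;
# the exact keys below are illustrative.
merge_config({'max_iters': 1000, 'snapshot_iter': 500})

# Instantiate the architecture named in the config; swapping a backbone or
# head is a config change, not a code change.
model = create(cfg.architecture)
print(type(model).__name__)  # e.g. YOLOv3
```

See [tools/train.py](tools/train.py) for how the same workspace API drives the actual training loop.
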
Supported Architectures:

|                     | ResNet | ResNet-vd <sup>[1](#vd)</sup> | ResNeXt-vd | SENet | MobileNet | HRNet | Res2Net |
| ------------------- | :----: | :---------------------------: | :--------: | :---: | :-------: | :---: | :-----: |
| Faster R-CNN        |   ✓    |               ✓               |     ✗      |   ✓   |     ✗     |   ✗   |    ✗    |
| Faster R-CNN + FPN  |   ✓    |               ✓               |     ✓      |   ✓   |     ✗     |   ✓   |    ✓    |
| Mask R-CNN          |   ✓    |               ✓               |     ✗      |   ✓   |     ✗     |   ✗   |    ✗    |
| Mask R-CNN + FPN    |   ✓    |               ✓               |     ✓      |   ✓   |     ✗     |   ✗   |    ✓    |
| Cascade Faster-RCNN |   ✓    |               ✓               |     ✓      |   ✗   |     ✗     |   ✗   |    ✗    |
| Cascade Mask-RCNN   |   ✓    |               ✗               |     ✗      |   ✓   |     ✗     |   ✗   |    ✗    |
| Libra R-CNN         |   ✗    |               ✓               |     ✗      |   ✗   |     ✗     |   ✗   |    ✗    |
| RetinaNet           |   ✓    |               ✗               |     ✗      |   ✗   |     ✗     |   ✗   |    ✗    |
| YOLOv3              |   ✓    |               ✗               |     ✗      |   ✗   |     ✓     |   ✗   |    ✗    |
| SSD                 |   ✗    |               ✗               |     ✗      |   ✗   |     ✓     |   ✗   |    ✗    |
| BlazeFace           |   ✗    |               ✗               |     ✗      |   ✗   |     ✗     |   ✗   |    ✗    |
| Faceboxes           |   ✗    |               ✗               |     ✗      |   ✗   |     ✗     |   ✗   |    ✗    |

<a name="vd">[1]</a> [ResNet-vd](https://arxiv.org/pdf/1812.01187) models offer much improved accuracy with negligible performance cost.

More models:

- EfficientDet
- FCOS
- CornerNet-Squeeze
- YOLOv4

More Backbones:

- DarkNet
- VGG
- GCNet
- CBNet

Advanced Features:

- [x] **Synchronized Batch Norm**
- [x] **Group Norm**
- [x] **Modulated Deformable Convolution**
- [x] **Deformable PSRoI Pooling**
- [x] **Non-local and GCNet**

**NOTE:** Synchronized batch normalization can only be used with multiple GPUs; it cannot be used on CPU or with a single GPU.

The figure below shows the relationship between COCO mAP and FPS on a Tesla V100 for representative models of each architecture and backbone.

<div align="center">
  <img src="docs/images/map_fps.png" />
</div>

**NOTE:**
- `CBResNet` stands for `Cascade-Faster-RCNN-CBResNet200vd-FPN`, which has the highest COCO mAP (53.3%) among PaddleDetection models
- `Cascade-Faster-RCNN` stands for `Cascade-Faster-RCNN-ResNet50vd-DCN`, which has been optimized to 20 FPS inference speed at 47.8% COCO mAP
- The enhanced `YOLOv3-ResNet50vd-DCN` is 10.6 absolute percentage points higher in COCO mAP than the paper, and its inference speed is nearly 70% faster than the Darknet framework
- All these models can be found in the [Model Zoo](#Model-Zoo)

## Tutorials


### Get Started

- [Installation guide](docs/tutorials/INSTALL.md)
- [Quick start on small dataset](docs/tutorials/QUICK_STARTED.md)
- [Train/Evaluation/Inference](docs/tutorials/GETTING_STARTED.md)
- [FAQ](docs/FAQ.md)

### Advanced Tutorial

- [Guide to the data preprocessing pipeline and custom datasets](docs/advanced_tutorials/READER.md)
- [Model technical documentation](docs/advanced_tutorials/MODEL_TECHNICAL.md)
- [Transfer learning tutorial](docs/advanced_tutorials/TRANSFER_LEARNING.md)
- [Parameter configuration](docs/advanced_tutorials/config_doc):
  - [Introduction to the configuration workflow](docs/advanced_tutorials/config_doc/CONFIG.md)
  - [Parameter configuration for RCNN model](docs/advanced_tutorials/config_doc/RCNN_PARAMS_DOC.md)
- [IPython Notebook demo](demo/mask_rcnn_demo.ipynb)
- [Model compression](slim)
    - [Model compression benchmark](slim)
    - [Quantization](slim/quantization)
    - [Model pruning](slim/prune)
    - [Model distillation](slim/distillation)
    - [Neural Architecture Search](slim/nas)
- [Deployment](deploy)
    - [Export model for inference](docs/advanced_tutorials/deploy/EXPORT_MODEL.md)
    - [Python inference](deploy/python)
    - [C++ inference](deploy/cpp)
    - [Inference benchmark](docs/advanced_tutorials/inference/BENCHMARK_INFER_cn.md)

## Model Zoo

- Pretrained models are available in the [PaddleDetection model zoo](docs/MODEL_ZOO.md).
- [Mobile models](configs/mobile/README.md)
- [Anchor free models](configs/anchor_free/README.md)
- [Face detection models](docs/featured_model/FACE_DETECTION_en.md)
- [Pretrained models for pedestrian detection](docs/featured_model/CONTRIB.md)
- [Pretrained models for vehicle detection](docs/featured_model/CONTRIB.md)
- [YOLOv3 enhanced model](docs/featured_model/YOLOv3_ENHANCEMENT.md): Compared to the 33.0% mAP in the paper, the enhanced YOLOv3 reaches 43.6% mAP, and inference speed is improved as well
- [Objects365 2019 Challenge champion model](docs/featured_model/champion_model/CACascadeRCNN.md)
- [Best single model of Open Images 2019-Object Detection](docs/featured_model/champion_model/OIDV5_BASELINE_MODEL.md)
- [Practical server-side detection method](configs/rcnn_enhance/README_en.md): Inference speed can reach 20 FPS on a single V100 GPU at 47.8% COCO mAP.


## License
PaddleDetection is released under the [Apache 2.0 license](LICENSE).

## Updates

v0.3.0 was released in `05/2020`, adding anchor-free models, EfficientDet, YOLOv4, etc., and launching practical, efficient models for both mobile and server side. For example, the mobile-side YOLOv3-MobileNetv3 model is accelerated 3.5x, and the optimized server-side two-stage models offer an excellent trade-off between speed and accuracy. We also refactored the inference deployment functions, improved ease of use, and fixed many known bugs.

Please refer to the [change log](docs/CHANGELOG.md) for details.

## Contributing

Contributions are highly welcomed and we would really appreciate your feedback!