README.md

    English | 简体中文

    Documentation: https://paddledetection.readthedocs.io

    PaddleDetection

    PaddleDetection is an end-to-end object detection development kit based on PaddlePaddle, which aims to help developers through the whole pipeline of training models, optimizing performance and inference speed, and deploying models. PaddleDetection provides a variety of object detection architectures in a modular design, along with a wealth of data augmentation methods, network components, loss functions, etc. With practical features such as model compression and multi-platform deployment, PaddleDetection supports practical projects such as industrial quality inspection, remote sensing image object detection, and automatic inspection.

    PP-YOLO, which is faster and has higher performance than YOLOv4, has been released. It reaches an mAP(0.5:0.95) of 45.2% on the COCO test2019 dataset and 72.9 FPS on a single Tesla V100. Please refer to PP-YOLO for details.

    All models in PaddleDetection now require PaddlePaddle version 1.8 or higher, or a suitable develop version.
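
    A quick way to verify this locally, using only PaddlePaddle's own APIs (a minimal sketch, not part of the repository):

    ```python
    # Minimal check (not a PaddleDetection API): confirm the installed PaddlePaddle
    # meets the 1.8+ requirement; develop builds may report '0.0.0'.
    import paddle
    import paddle.fluid as fluid

    print(paddle.__version__)         # e.g. '1.8.4'
    fluid.install_check.run_check()   # PaddlePaddle's built-in installation sanity check
    ```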

    Introduction

    Features:

    • Rich models:

      PaddleDetection provides a rich set of models, including 100+ pre-trained models for object detection, instance segmentation, face detection, etc. It covers champion models as well as practical detection models for cloud and edge devices.

    • Production Ready:

      Key operations are implemented in C++ and CUDA, which, together with PaddlePaddle's highly efficient inference engine, enables easy deployment in server environments.

    • Highly Flexible:

      Components are designed to be modular. Model architectures, as well as data preprocessing pipelines, can be easily customized with simple configuration changes (see the sketch after this list).

    • Performance Optimized:

      With the help of the underlying PaddlePaddle framework, faster training and a reduced GPU memory footprint are achieved. Notably, YOLOv3 training is much faster compared to other frameworks. Another example is Mask R-CNN (ResNet50): we managed to fit up to 4 images per GPU (Tesla V100 16GB) during multi-GPU training.
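
    As an illustration of the configuration-driven design mentioned above, the sketch below loads a YAML config and instantiates the architecture it declares. It assumes a PaddleDetection 0.x checkout on PYTHONPATH and is run from the repository root; the helper names follow those used in tools/train.py, and the config path is only an example.

    ```python
    # Sketch only: mirrors the first steps of tools/train.py in a 0.x checkout.
    # The config path is an example; any YAML under configs/ is read the same way.
    from ppdet.core.workspace import load_config, create

    cfg = load_config('configs/yolov3_darknet.yml')   # parse the modular YAML config
    model = create(cfg.architecture)                  # build the architecture named in the config
    print(type(model))                                # e.g. a YOLOv3 architecture object
    ```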

    Supported Architectures:

    | Architecture        | ResNet | ResNet-vd [1] | ResNeXt-vd | SENet | MobileNet | HRNet | Res2Net |
    | :------------------ | :----: | :-----------: | :--------: | :---: | :-------: | :---: | :-----: |
    | Faster R-CNN        |        |               |            |       |           |       |         |
    | Faster R-CNN + FPN  |        |               |            |       |           |       |         |
    | Mask R-CNN          |        |               |            |       |           |       |         |
    | Mask R-CNN + FPN    |        |               |            |       |           |       |         |
    | Cascade Faster-RCNN |        |               |            |       |           |       |         |
    | Cascade Mask-RCNN   |        |               |            |       |           |       |         |
    | Libra R-CNN         |        |               |            |       |           |       |         |
    | RetinaNet           |        |               |            |       |           |       |         |
    | YOLOv3              |        |               |            |       |           |       |         |
    | SSD                 |        |               |            |       |           |       |         |
    | BlazeFace           |        |               |            |       |           |       |         |
    | Faceboxes           |        |               |            |       |           |       |         |

    [1] ResNet-vd models offer much improved accuracy with negligible performance cost.

    NOTE: ✓ means a config file and pretrained model are provided in the Model Zoo; ✗ means not provided but generally supported.

    More models:

    • EfficientDet
    • FCOS
    • CornerNet-Squeeze
    • YOLOv4
    • PP-YOLO

    More Backbones:

    • DarkNet
    • VGG
    • GCNet
    • CBNet

    Advanced Features:

    • Synchronized Batch Norm
    • Group Norm
    • Modulated Deformable Convolution
    • Deformable PSRoI Pooling
    • Non-local and GCNet

    NOTE: Synchronized batch normalization can only be used with multiple GPUs; it cannot be used on CPU or with a single GPU.
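
    As a small illustration of that constraint, the sketch below only switches to the synchronized variant when more than one GPU is visible. The device-count helpers are PaddlePaddle 1.8 APIs; treating 'sync_bn' vs. 'bn' as the config value is an assumption about how a backbone's norm_type would be expressed.

    ```python
    # Sketch only (not a PaddleDetection API): pick the norm variant based on how
    # many GPUs are visible, since synchronized BN needs more than one.
    # The 'sync_bn'/'bn' strings are an assumed config value, not a verified key.
    import paddle.fluid as fluid

    num_gpus = fluid.core.get_cuda_device_count() if fluid.is_compiled_with_cuda() else 0
    norm_type = 'sync_bn' if num_gpus > 1 else 'bn'
    print(num_gpus, norm_type)
    ```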

    The following shows the relationship between COCO mAP and FPS on Tesla V100 for representative models of each architecture and backbone.

    NOTE:

    • CBResNet stands for Cascade-Faster-RCNN-CBResNet200vd-FPN, which has the highest COCO mAP (53.3%) among PaddleDetection models
    • Cascade-Faster-RCNN stands for Cascade-Faster-RCNN-ResNet50vd-DCN, which has been optimized to 20 FPS inference speed at a COCO mAP of 47.8%
    • The enhanced YOLOv3-ResNet50vd-DCN is 10.6 absolute percentage points higher in COCO mAP than the paper, and its inference speed is nearly 70% faster than the darknet framework
    • All these models are available in the Model Zoo

    The following shows the relationship between COCO mAP and FPS on Tesla V100 for state-of-the-art object detectors and PP-YOLO, which is faster and has better performance than YOLOv4, reaching an mAP(0.5:0.95) of 45.2% on the COCO test2019 dataset and 72.9 FPS on a single Tesla V100. Please refer to PP-YOLO for details.

    Tutorials

    Get Started

    Advanced Tutorial

    Model Zoo

    License

    PaddleDetection is released under the Apache 2.0 license.

    Updates

    v0.4.0 was released in 05/2020, adding PP-YOLO, TTFNet, HTC, ACFPN, etc. It also adds a BlazeFace face landmark detection model, a series of optimized SSDLite models for the mobile side, the GridMask and RandomErasing data augmentations, and Matrix NMS and EMA training, improves ease of use, and fixes many known bugs. Please refer to the release notes for details.

    Contributing

    Contributions are highly welcomed and we would really appreciate your feedback!!
