- PP-YOLO is trained on the COCO train2017 dataset and evaluated on the test-dev2017 dataset. Box AP<sup>test</sup> is the evaluation result of `mAP(IoU=0.5:0.95)`.
- PP-YOLO is trained with 8 GPUs and a mini-batch size of 24 on each GPU. If the GPU number or mini-batch size is changed, the learning rate and number of iterations should be adjusted according to the [FAQ](../../docs/FAQ.md) (see the learning-rate sketch after this list).
- PP-YOLO inference speed is tested on a single Tesla V100 with batch size 1, CUDA 10.2, CUDNN 7.5.1, and TensorRT 5.1.2.2 in TensorRT mode.
- PP-YOLO FP32 inference speed testing uses the inference model exported by `tools/export_model.py` and is benchmarked by running `deploy/python/infer.py` with `--run_benchmark`. All test results exclude the time cost of data reading and post-processing (NMS), which is the same testing method as [YOLOv4(AlexyAB)](https://github.com/AlexeyAB/darknet) (see the command sketch after this list).
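
As an illustration of the adjustment mentioned above, the following is a minimal sketch applying the linear scaling rule from the FAQ when training with fewer GPUs. The config path, the default `base_lr` value, and the `-o` override syntax are assumptions and may differ across PaddleDetection versions.

```bash
# Minimal sketch (assumed paths and values): scale the learning rate linearly
# with the total batch size, per the linear scaling rule in the FAQ.
# PP-YOLO default: 8 GPUs x 24 images/GPU = 192 images per step.
# Example setup:   2 GPUs x 24 images/GPU =  48 -> scale factor 48/192 = 0.25.
# The number of training iterations should be scaled up correspondingly (here ~4x).
CUDA_VISIBLE_DEVICES=0,1 python tools/train.py \
    -c configs/ppyolo/ppyolo.yml \
    -o LearningRate.base_lr=0.000833   # assumed default 0.00333 x 0.25
```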
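
The export-and-benchmark flow from the last bullet can be sketched as follows. Only `tools/export_model.py`, `deploy/python/infer.py`, and `--run_benchmark` come from the note above; the config path, weights, test image, and the remaining flags are assumptions that may vary by PaddleDetection version.

```bash
# 1) Export an inference model (assumed config and weights paths).
python tools/export_model.py \
    -c configs/ppyolo/ppyolo.yml \
    -o weights=output/ppyolo/model_final \
    --output_dir=inference_model

# 2) Benchmark FP32 inference on GPU; reported times exclude data reading
#    and post-processing (NMS), as noted above.
python deploy/python/infer.py \
    --model_dir=inference_model/ppyolo \
    --image_file=demo/000000014439.jpg \
    --use_gpu=True \
    --run_benchmark=True
```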
...
### PP-YOLO tiny
| Model | GPU number | images/GPU | backbone | input shape | Box AP | V100 FP32(FPS) | V100 TensorRT FP16(FPS) | download | config |
- PP-YOLO tiny is trained on the COCO train2017 dataset and evaluated on the val2017 dataset. Box AP50<sup>val</sup> is the evaluation result of `mAP(IoU=0.5)`.
- PP-YOLO tiny is trained with 4 GPUs and a mini-batch size of 32 on each GPU. If the GPU number or mini-batch size is changed, the learning rate and number of iterations should be adjusted according to the [FAQ](../../docs/FAQ.md).
- The PP-YOLO tiny inference speed testing environment and configuration are the same as for PP-YOLO above.
- Performance and inference speed are measured with an input shape of 608.
- All models are trained on the COCO train2017 dataset and evaluated on the val2017 & test-dev2017 datasets. `Box AP` is the evaluation result of `mAP(IoU=0.5:0.95)`.
- Inference speed is tested on a single Tesla V100 with batch size 1, following the test method and environment configuration in the benchmark above.
- [YOLOv3-DarkNet53](../yolov3_darknet.yml) with mAP 38.9 is the optimized YOLOv3 model in PaddleDetection; see the [Model Zoo](../../docs/MODEL_ZOO.md) for details.