From f53a950d812e962715804028d807d3c15e93d6a9 Mon Sep 17 00:00:00 2001
From: wangxinxin08 <69842442+wangxinxin08@users.noreply.github.com>
Date: Mon, 29 Nov 2021 11:23:36 +0800
Subject: [PATCH] modify docs of ppyolo and s2anet, test=document_fix (#4738)

---
 configs/dota/README.md             | 3 +--
 configs/dota/README_en.md          | 3 +--
 configs/ppyolo/README.md           | 2 +-
 configs/ppyolo/README_cn.md        | 2 +-
 static/configs/ppyolo/README.md    | 2 +-
 static/configs/ppyolo/README_cn.md | 2 +-
 6 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/configs/dota/README.md b/configs/dota/README.md
index 56b162c17..9a5988a76 100644
--- a/configs/dota/README.md
+++ b/configs/dota/README.md
@@ -113,7 +113,7 @@ python3.7 tools/eval.py -c configs/dota/s2anet_1x_spine.yml -o weights=output/s2
 # 使用提供训练好的模型评估
 python3.7 tools/eval.py -c configs/dota/s2anet_1x_spine.yml -o weights=https://paddledet.bj.bcebos.com/models/s2anet_1x_spine.pdparams
 ```
-** 注意:**
+** 注意:**
 (1) dota数据集中是train和val数据作为训练集一起训练的,对dota数据集进行评估时需要自定义设置评估数据集配置。
 (2) 骨骼数据集是由分割数据转换而来,由于椎间盘不同类别对于检测任务而言区别很小,且s2anet算法最后得出的分数较低,评估时默认阈值为0.5,mAP较低是正常的。建议通过可视化查看检测结果。
@@ -154,7 +154,6 @@ Paddle中`multiclass_nms`算子的输入支持四边形输入,因此部署时

 部署教程请参考[预测部署](../../deploy/README.md)

-**注意:** 由于paddle.detach函数动转静时会导致导出模型尺寸错误,因此在配置文件中增加了`is_training`参数,导出模型预测部署时需要将改参数设置为`False`
 ## Citations
 ```
diff --git a/configs/dota/README_en.md b/configs/dota/README_en.md
index 947efacf8..e299e0e81 100644
--- a/configs/dota/README_en.md
+++ b/configs/dota/README_en.md
@@ -124,7 +124,7 @@ python3.7 tools/eval.py -c configs/dota/s2anet_1x_spine.yml -o weights=output/s2
 # Use a trained model to evaluate
 python3.7 tools/eval.py -c configs/dota/s2anet_1x_spine.yml -o weights=https://paddledet.bj.bcebos.com/models/s2anet_1x_spine.pdparams
 ```
-**Attention:**
+**Attention:**
 (1) The DOTA dataset is trained together with train and val data as a training set, and the evaluation dataset configuration needs to be customized when evaluating the DOTA dataset.
 (2) Bone dataset is transformed from segmented data. As there is little difference between different types of discs for detection tasks, and the score obtained by S2ANET algorithm is low, the default threshold for evaluation is 0.5, a low mAP is normal. You are advised to view the detection result visually.
@@ -164,7 +164,6 @@ The inputs of the `multiclass_nms` operator in Paddle support quadrilateral inpu

 Please refer to the deployment tutorial[Predict deployment](../../deploy/README_en.md)

-**Attention:** The `is_training` parameter was added to the configuration file because the `paddle.Detach` function would cause the size error of the exported model when it went quiet, and the exported model would need to be set to `False` to predict deployment
 ## Citations
 ```
diff --git a/configs/ppyolo/README.md b/configs/ppyolo/README.md
index 67a90f488..6b3e5fc61 100644
--- a/configs/ppyolo/README.md
+++ b/configs/ppyolo/README.md
@@ -88,7 +88,7 @@ PP-YOLO and PP-YOLOv2 improved performance and speed of YOLOv3 with following me
 **Notes:**

 - PP-YOLO-tiny is trained on COCO train2017 datast and evaluated on val2017 dataset,Box APval is evaluation results of `mAP(IoU=0.5:0.95)`, Box APval is evaluation results of `mAP(IoU=0.5)`.
-- PP-YOLO-tiny used 8 GPUs for training and mini-batch size as 32 on each GPU, if GPU number and mini-batch size is changed, learning rate and iteration times should be adjusted according [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/FAQ.md).
+- PP-YOLO-tiny used 8 GPUs for training and mini-batch size as 32 on each GPU, if GPU number and mini-batch size is changed, learning rate and iteration times should be adjusted according [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/FAQ/README.md).
 - PP-YOLO-tiny inference speed is tested on Kirin 990 with 4 threads by arm8
 - we alse provide PP-YOLO-tiny post quant inference model, which can compress model to **1.3MB** with nearly no inference on inference speed and performance
diff --git a/configs/ppyolo/README_cn.md b/configs/ppyolo/README_cn.md
index 2f165223e..8ef2dbc7f 100644
--- a/configs/ppyolo/README_cn.md
+++ b/configs/ppyolo/README_cn.md
@@ -82,7 +82,7 @@ PP-YOLO和PP-YOLOv2从如下方面优化和提升YOLOv3模型的精度和速度
 | PP-YOLO tiny | 8 | 32 | 4.2MB | **1.3M** | 416 | 22.7 | 65.4 | [model](https://paddledet.bj.bcebos.com/models/ppyolo_tiny_650e_coco.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyolo/ppyolo_tiny_650e_coco.yml) | [预测模型](https://paddledet.bj.bcebos.com/models/ppyolo_tiny_quant.tar) |

 - PP-YOLO-tiny 模型使用COCO数据集中train2017作为训练集,使用val2017作为测试集,Box APval为`mAP(IoU=0.5:0.95)`评估结果, Box AP50val为`mAP(IoU=0.5)`评估结果。
-- PP-YOLO-tiny 模型训练过程中使用8GPU,每GPU batch size为32进行训练,如训练GPU数和batch size不使用上述配置,须参考[FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/FAQ.md)调整学习率和迭代次数。
+- PP-YOLO-tiny 模型训练过程中使用8GPU,每GPU batch size为32进行训练,如训练GPU数和batch size不使用上述配置,须参考[FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/FAQ/README.md)调整学习率和迭代次数。
 - PP-YOLO-tiny 模型推理速度测试环境配置为麒麟990芯片4线程,arm8架构。
 - 我们也提供的PP-YOLO-tiny的后量化压缩模型,将模型体积压缩到**1.3M**,对精度和预测速度基本无影响
diff --git a/static/configs/ppyolo/README.md b/static/configs/ppyolo/README.md
index 852138eb7..a993e119f 100644
--- a/static/configs/ppyolo/README.md
+++ b/static/configs/ppyolo/README.md
@@ -104,7 +104,7 @@ PP-YOLO and PP-YOLOv2 improved performance and speed of YOLOv3 with following me
 **Notes:**

 - PP-YOLO-tiny is trained on COCO train2017 datast and evaluated on val2017 dataset,Box APval is evaluation results of `mAP(IoU=0.5:0.95)`, Box APval is evaluation results of `mAP(IoU=0.5)`.
-- PP-YOLO-tiny used 8 GPUs for training and mini-batch size as 32 on each GPU, if GPU number and mini-batch size is changed, learning rate and iteration times should be adjusted according [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob//develop/static/docs/FAQ.md).
+- PP-YOLO-tiny used 8 GPUs for training and mini-batch size as 32 on each GPU, if GPU number and mini-batch size is changed, learning rate and iteration times should be adjusted according [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/docs/FAQ.md).
 - PP-YOLO-tiny inference speed is tested on Kirin 990 with 4 threads by arm8
 - we alse provide PP-YOLO-tiny post quant inference model, which can compress model to **1.3MB** with nearly no inference on inference speed and performance
diff --git a/static/configs/ppyolo/README_cn.md b/static/configs/ppyolo/README_cn.md
index 3025a7a1f..6af1912db 100644
--- a/static/configs/ppyolo/README_cn.md
+++ b/static/configs/ppyolo/README_cn.md
@@ -100,7 +100,7 @@ PP-YOLO和PP-YOLOv2从如下方面优化和提升YOLOv3模型的精度和速度
 | PP-YOLO tiny | 8 | 32 | 4.2MB | **1.3M** | 416 | 22.7 | 65.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_tiny.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_tiny.yml) | [预测模型](https://paddledet.bj.bcebos.com/models/ppyolo_tiny_quant.tar) |

 - PP-YOLO-tiny 模型使用COCO数据集中train2017作为训练集,使用val2017作为测试集,Box APval为`mAP(IoU=0.5:0.95)`评估结果, Box AP50val为`mAP(IoU=0.5)`评估结果。
-- PP-YOLO-tiny 模型训练过程中使用8GPU,每GPU batch size为32进行训练,如训练GPU数和batch size不使用上述配置,须参考[FAQ](../../docs/FAQ.md)调整学习率和迭代次数。
+- PP-YOLO-tiny 模型训练过程中使用8GPU,每GPU batch size为32进行训练,如训练GPU数和batch size不使用上述配置,须参考[FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/docs/FAQ.md)调整学习率和迭代次数。
 - PP-YOLO-tiny 模型推理速度测试环境配置为麒麟990芯片4线程,arm8架构。
 - 我们也提供的PP-YOLO-tiny的后量化压缩模型,将模型体积压缩到**1.3M**,对精度和预测速度基本无影响
--
GitLab
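The docs patched above repeatedly point readers to the FAQ for adjusting the learning rate and iteration count when the GPU number or per-GPU batch size changes. The usual recipe behind that advice is the linear scaling rule: scale the base learning rate in proportion to the total batch size, and scale the training length inversely so the model still sees roughly the same number of samples. A minimal sketch follows; the reference values (8 GPUs x 32 images per GPU, base learning rate 0.005, 650 epochs) are illustrative assumptions for PP-YOLO-tiny, not values quoted from its config file.

```python
def scale_training_schedule(num_gpus, batch_size_per_gpu,
                            ref_total_batch=256,   # assumed reference: 8 GPUs x 32
                            ref_lr=0.005,          # illustrative base learning rate
                            ref_epochs=650):       # illustrative schedule length
    """Linear scaling rule sketch: return (learning_rate, epochs) adjusted
    for a new total batch size relative to the reference configuration."""
    total_batch = num_gpus * batch_size_per_gpu
    factor = total_batch / ref_total_batch
    lr = ref_lr * factor                  # LR scales linearly with batch size
    epochs = round(ref_epochs / factor)   # keep total samples seen comparable
    return lr, epochs

# e.g. training on 4 GPUs with batch size 16 instead of 8 GPUs x 32:
lr, epochs = scale_training_schedule(4, 16)   # -> (0.00125, 2600)
```

The exact reference numbers belong in the linked FAQ and the model's YAML config; the point of the sketch is only the proportionality between total batch size, learning rate, and schedule length.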