From 56ed45694cd9fc83efcadf01fc7dc328a8eeb1c0 Mon Sep 17 00:00:00 2001
From: wangxinxin08 <69842442+wangxinxin08@users.noreply.github.com>
Date: Mon, 29 Nov 2021 11:23:14 +0800
Subject: [PATCH] modify docs of ppyolo and s2anet, test=document_fix (#4739)
---
configs/dota/README.md | 1 -
configs/dota/README_en.md | 1 -
configs/ppyolo/README.md | 2 +-
configs/ppyolo/README_cn.md | 2 +-
static/configs/ppyolo/README.md | 2 +-
static/configs/ppyolo/README_cn.md | 2 +-
6 files changed, 4 insertions(+), 6 deletions(-)
diff --git a/configs/dota/README.md b/configs/dota/README.md
index 386bdfbb1..9e6799fd4 100644
--- a/configs/dota/README.md
+++ b/configs/dota/README.md
@@ -154,7 +154,6 @@ The input of the `multiclass_nms` operator in Paddle supports quadrilateral input, so when deploying
For the deployment tutorial, please refer to [Prediction deployment](../../deploy/README.md)
-**Note:** Because the `paddle.detach` function causes an incorrect exported model size during dynamic-to-static conversion, an `is_training` parameter was added to the configuration file; this parameter must be set to `False` when exporting the model for inference deployment
## Citations
```
diff --git a/configs/dota/README_en.md b/configs/dota/README_en.md
index 6c26151c4..e9bff33a3 100644
--- a/configs/dota/README_en.md
+++ b/configs/dota/README_en.md
@@ -164,7 +164,6 @@ The inputs of the `multiclass_nms` operator in Paddle support quadrilateral inpu
Please refer to the deployment tutorial [Prediction deployment](../../deploy/README_en.md)
-**Attention:** The `is_training` parameter was added to the configuration file because the `paddle.detach` function causes an incorrect exported model size during dynamic-to-static conversion; this parameter must be set to `False` when exporting the model for inference deployment
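As background for the note above, the workaround it describes amounts to gating a detach-style branch behind an `is_training` flag so that the branch is skipped during dynamic-to-static export. The following is a minimal, hypothetical sketch: `ToyHead` is not the real S2ANet head or its actual config, and the export call is only illustrative.
```python
import paddle
from paddle.static import InputSpec

class ToyHead(paddle.nn.Layer):
    """Hypothetical stand-in for a head whose training branch uses detach."""
    def __init__(self, is_training=True):
        super().__init__()
        self.is_training = is_training
        self.fc = paddle.nn.Linear(16, 4)

    def forward(self, x):
        y = self.fc(x)
        if self.is_training:
            # Training-only branch that uses detach; because is_training is a
            # plain Python bool, setting it to False skips this branch entirely
            # so it never enters the traced static graph at export time.
            y = y + self.fc(x.detach())
        return y

# Export with the flag disabled, mirroring the removed note above.
head = ToyHead(is_training=False)
paddle.jit.save(
    head, "./inference/toy_head",
    input_spec=[InputSpec(shape=[None, 16], dtype="float32")])
```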
## Citations
```
diff --git a/configs/ppyolo/README.md b/configs/ppyolo/README.md
index c7b0b9e12..7a5fa1d6e 100644
--- a/configs/ppyolo/README.md
+++ b/configs/ppyolo/README.md
@@ -88,7 +88,7 @@ PP-YOLO and PP-YOLOv2 improved performance and speed of YOLOv3 with following me
**Notes:**
- PP-YOLO-tiny is trained on the COCO train2017 dataset and evaluated on the val2017 dataset. Box APval is the evaluation result of `mAP(IoU=0.5:0.95)` and Box AP50val is the evaluation result of `mAP(IoU=0.5)`.
-- PP-YOLO-tiny was trained on 8 GPUs with a mini-batch size of 32 per GPU. If the number of GPUs or the mini-batch size is changed, the learning rate and number of iterations should be adjusted according to the [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/FAQ.md); a rough illustration follows these notes.
+- PP-YOLO-tiny was trained on 8 GPUs with a mini-batch size of 32 per GPU. If the number of GPUs or the mini-batch size is changed, the learning rate and number of iterations should be adjusted according to the [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/FAQ/README.md); a rough illustration follows these notes.
- PP-YOLO-tiny inference speed is tested on a Kirin 990 chip with 4 threads on the ARMv8 architecture
- We also provide a post-quantization inference model of PP-YOLO-tiny, which compresses the model to **1.3MB** with nearly no impact on inference speed and performance
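The FAQ linked above covers how to rescale the hyperparameters. As a rough, hypothetical illustration only (the exact rule and the base values below are assumptions, not taken from the FAQ), a common heuristic is the linear scaling rule: scale the learning rate by the ratio of the new global batch size to the reference 8 GPUs x 32, and scale the iteration count inversely.
```python
def scale_schedule(base_lr, base_iters, gpus, bs_per_gpu,
                   base_gpus=8, base_bs_per_gpu=32):
    """Linear-scaling heuristic: the learning rate grows with the global
    batch size and the iteration count shrinks by the same factor, so the
    total number of training epochs stays roughly the same."""
    ratio = (gpus * bs_per_gpu) / (base_gpus * base_bs_per_gpu)
    return base_lr * ratio, int(round(base_iters / ratio))

# Example: dropping from 8 GPUs to 4 (base_lr and base_iters are made up).
lr, iters = scale_schedule(base_lr=0.005, base_iters=300000, gpus=4, bs_per_gpu=32)
print(lr, iters)  # 0.0025 600000
```
The FAQ may recommend different adjustments; treat this only as a starting point.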
diff --git a/configs/ppyolo/README_cn.md b/configs/ppyolo/README_cn.md
index 08048e66d..88068eaba 100644
--- a/configs/ppyolo/README_cn.md
+++ b/configs/ppyolo/README_cn.md
@@ -82,7 +82,7 @@ PP-YOLO and PP-YOLOv2 optimize and improve the accuracy and speed of YOLOv3 in the following
| PP-YOLO tiny | 8 | 32 | 4.2MB | **1.3M** | 416 | 22.7 | 65.4 | [model](https://paddledet.bj.bcebos.com/models/ppyolo_tiny_650e_coco.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3/configs/ppyolo/ppyolo_tiny_650e_coco.yml) | [inference model](https://paddledet.bj.bcebos.com/models/ppyolo_tiny_quant.tar) |
- The PP-YOLO-tiny model uses train2017 of the COCO dataset as the training set and val2017 as the test set. Box APval is the evaluation result of `mAP(IoU=0.5:0.95)` and Box AP50val is the evaluation result of `mAP(IoU=0.5)`.
-- The PP-YOLO-tiny model was trained on 8 GPUs with a batch size of 32 per GPU. If the number of training GPUs or the batch size differs from this configuration, please refer to the [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/FAQ.md) to adjust the learning rate and number of iterations.
+- The PP-YOLO-tiny model was trained on 8 GPUs with a batch size of 32 per GPU. If the number of training GPUs or the batch size differs from this configuration, please refer to the [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/FAQ/README.md) to adjust the learning rate and number of iterations.
- The inference speed of the PP-YOLO-tiny model is tested on a Kirin 990 chip with 4 threads on the ARMv8 architecture.
- We also provide a post-quantization compressed model of PP-YOLO-tiny, which reduces the model size to **1.3M** with almost no impact on accuracy or inference speed.
diff --git a/static/configs/ppyolo/README.md b/static/configs/ppyolo/README.md
index 852138eb7..a993e119f 100644
--- a/static/configs/ppyolo/README.md
+++ b/static/configs/ppyolo/README.md
@@ -104,7 +104,7 @@ PP-YOLO and PP-YOLOv2 improved performance and speed of YOLOv3 with following me
**Notes:**
- PP-YOLO-tiny is trained on the COCO train2017 dataset and evaluated on the val2017 dataset. Box APval is the evaluation result of `mAP(IoU=0.5:0.95)` and Box AP50val is the evaluation result of `mAP(IoU=0.5)`.
-- PP-YOLO-tiny was trained on 8 GPUs with a mini-batch size of 32 per GPU. If the number of GPUs or the mini-batch size is changed, the learning rate and number of iterations should be adjusted according to the [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob//develop/static/docs/FAQ.md).
+- PP-YOLO-tiny was trained on 8 GPUs with a mini-batch size of 32 per GPU. If the number of GPUs or the mini-batch size is changed, the learning rate and number of iterations should be adjusted according to the [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/docs/FAQ.md).
- PP-YOLO-tiny inference speed is tested on a Kirin 990 chip with 4 threads on the ARMv8 architecture
- We also provide a post-quantization inference model of PP-YOLO-tiny, which compresses the model to **1.3MB** with nearly no impact on inference speed and performance
diff --git a/static/configs/ppyolo/README_cn.md b/static/configs/ppyolo/README_cn.md
index 3025a7a1f..6af1912db 100644
--- a/static/configs/ppyolo/README_cn.md
+++ b/static/configs/ppyolo/README_cn.md
@@ -100,7 +100,7 @@ PP-YOLO and PP-YOLOv2 optimize and improve the accuracy and speed of YOLOv3 in the following
| PP-YOLO tiny | 8 | 32 | 4.2MB | **1.3M** | 416 | 22.7 | 65.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_tiny.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_tiny.yml) | [inference model](https://paddledet.bj.bcebos.com/models/ppyolo_tiny_quant.tar) |
- The PP-YOLO-tiny model uses train2017 of the COCO dataset as the training set and val2017 as the test set. Box APval is the evaluation result of `mAP(IoU=0.5:0.95)` and Box AP50val is the evaluation result of `mAP(IoU=0.5)`.
-- The PP-YOLO-tiny model was trained on 8 GPUs with a batch size of 32 per GPU. If the number of training GPUs or the batch size differs from this configuration, please refer to the [FAQ](../../docs/FAQ.md) to adjust the learning rate and number of iterations.
+- The PP-YOLO-tiny model was trained on 8 GPUs with a batch size of 32 per GPU. If the number of training GPUs or the batch size differs from this configuration, please refer to the [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/docs/FAQ.md) to adjust the learning rate and number of iterations.
- The inference speed of the PP-YOLO-tiny model is tested on a Kirin 990 chip with 4 threads on the ARMv8 architecture.
- We also provide a post-quantization compressed model of PP-YOLO-tiny, which reduces the model size to **1.3M** with almost no impact on accuracy or inference speed.
--
GitLab