diff --git a/docs/MODEL_ZOO.md b/docs/MODEL_ZOO.md
index 55f72bc58e0c4be0af2d3f6d78540a2b9bd2ff4f..01cfb14be23afecd25359317018cae1ef0f8d97e 100644
--- a/docs/MODEL_ZOO.md
+++ b/docs/MODEL_ZOO.md
@@ -160,6 +160,7 @@ improved performance mainly by using L1 loss in bounding box width and height re
   randomly color distortion, randomly cropping, randomly expansion, randomly interpolation method, randomly flippling. YOLO v3 used randomly
   reshaped minibatch in training, inferences can be performed on different image sizes with the same model weights, and we provided evaluation
   results of image size 608/416/320 above. Deformable conv is added on stage 5 of backbone.
+- The enhanced YOLO v3 model improves mAP to 43.2 by introducing deformable conv, DropBlock and IoU loss. See [YOLOv3_ENHANCEMENT](./featured_model/YOLOv3_ENHANCEMENT.md) for details.
 
 ### RetinaNet
 
diff --git a/docs/MODEL_ZOO_cn.md b/docs/MODEL_ZOO_cn.md
index 30fcb25bf5e7d2dae918481af7fdb0840addb06e..f837b9c8b4457ef0483ee07281fc83285661ae40 100644
--- a/docs/MODEL_ZOO_cn.md
+++ b/docs/MODEL_ZOO_cn.md
@@ -153,6 +153,7 @@ Paddle提供基于ImageNet的骨架网络预训练模型。所有预训练模型
 
 - 上表中也提供了原论文[YOLOv3](https://arxiv.org/abs/1804.02767)中YOLOv3-DarkNet53的精度,我们的实现版本主要从在bounding box的宽度和高度回归上使用了L1损失,图像mixup和label smooth等方法优化了其精度。
 - YOLO v3在8卡,总batch size为64下训练270轮。数据增强包括:mixup, 随机颜色失真,随机剪裁,随机扩张,随机插值法,随机翻转。YOLO v3在训练阶段对minibatch采用随机reshape,可以采用相同的模型测试不同尺寸图片,我们分别提供了尺寸为608/416/320大小的测试结果。deformable卷积作用在骨架网络5阶段。
+- YOLO v3增强版模型通过引入可变形卷积、DropBlock和IoU loss,将精度进一步提升至43.2,详情见[YOLOv3增强模型](./featured_model/YOLOv3_ENHANCEMENT.md)。
 
 ### RetinaNet
 
diff --git a/docs/advanced_tutorials/CONFIG.md b/docs/advanced_tutorials/CONFIG.md
index c73a6a9a6edabedf7c864243e46de6c4936f7f0f..709b6ecf7aa0bcce45acdcc7397b25deec28a840 100644
--- a/docs/advanced_tutorials/CONFIG.md
+++ b/docs/advanced_tutorials/CONFIG.md
@@ -184,7 +184,7 @@ A small utility (`tools/configure.py`) is included to simplify the configuration
 4. `generate`: Generate a configuration template for a given list of modules. By default it generates a complete configuration file, which can be quite verbose; if a `--minimal` flag is given, it generates a template that only contain non optional settings. For example, to generate a configuration for Faster R-CNN architecture with `ResNet` backbone and `FPN`, run:
 
    ```shell
-   python tools/configure.py generate FasterRCNN ResNet RPNHead RoIAlign BBoxAssigner BBoxHead FasterRCNNTrainFeed FasterRCNNTestFeed LearningRate OptimizerBuilder
+   python tools/configure.py generate FasterRCNN ResNet RPNHead RoIAlign BBoxAssigner BBoxHead LearningRate OptimizerBuilder
    ```
 
    For a minimal version, run:
diff --git a/docs/advanced_tutorials/CONFIG_cn.md b/docs/advanced_tutorials/CONFIG_cn.md
index 397af17f648edcb90f8b20182246f1d96cf9fed8..fe1f7e31f57925dbb525602533565f69a1488bad 100644
--- a/docs/advanced_tutorials/CONFIG_cn.md
+++ b/docs/advanced_tutorials/CONFIG_cn.md
@@ -174,7 +174,7 @@ pip install typeguard http://github.com/willthefrog/docstring_parser/tarball/mas
 4. `generate`: 根据给出的模块列表生成配置文件,默认生成完整配置,如果指定 `--minimal` ,生成最小配置,即省略所有默认配置项。例如,执行下列命令可以生成Faster R-CNN (`ResNet` backbone + `FPN`) 架构的配置文件:
 
    ```shell
-   python tools/configure.py generate FasterRCNN ResNet RPNHead RoIAlign BBoxAssigner BBoxHead FasterRCNNTrainFeed FasterRCNNTestFeed LearningRate OptimizerBuilder
+   python tools/configure.py generate FasterRCNN ResNet RPNHead RoIAlign BBoxAssigner BBoxHead LearningRate OptimizerBuilder
    ```
 
    如需最小配置,运行:
diff --git a/docs/advanced_tutorials/inference/INFERENCE.md b/docs/advanced_tutorials/inference/INFERENCE.md
index 3d387ec8df721469f23bea8c77c092f79888ddd9..c97db89d083b1f6b64bd478c1c8fd9991cbc8bf1 100644
--- a/docs/advanced_tutorials/inference/INFERENCE.md
+++ b/docs/advanced_tutorials/inference/INFERENCE.md
@@ -2,7 +2,7 @@
 
 本篇教程使用Python API对[导出模型](EXPORT_MODEL.md)保存的inference_model进行预测。
 
-在PaddlePaddle中预测引擎和训练引擎底层有着不同的优化方法,代码走不通的分支,两者都可以进行预测。在入门教程的训练/评估/预测流程中介绍的预测流程,即tools/infer.py是使用训练引擎分支的预测流程。保存的inference_model,可以通过`fluid.io.load_inference_model`接口,走训练引擎分支预测。本文档也同时介绍通过预测引擎的Python API进行预测,一般而言这种方式的速度优于前者。
+在PaddlePaddle中预测引擎和训练引擎底层有着不同的优化方法,代码走不同的分支,两者都可以进行预测。在入门教程的训练/评估/预测流程中介绍的预测流程,即tools/infer.py是使用训练引擎分支的预测流程。保存的inference_model,可以通过`fluid.io.load_inference_model`接口,走训练引擎分支预测。本文档也同时介绍通过预测引擎的Python API进行预测,一般而言这种方式的速度优于前者。
 
 这篇教程介绍的Python API预测示例,除了可视化部分依赖PaddleDetection外,预处理、模型结构、执行流程均不依赖PaddleDetection。
 
diff --git a/ppdet/utils/check.py b/ppdet/utils/check.py
index 305fa3705f5c313569986cbdb15c8afeda5a79c1..a879bcffdde0e6876ce0c1d66c53365e9c9597eb 100644
--- a/ppdet/utils/check.py
+++ b/ppdet/utils/check.py
@@ -55,7 +55,7 @@ def check_version():
         "Please make sure the version is good with your code." \
 
     try:
-        fluid.require_version('1.6.0')
+        fluid.require_version('1.7.0')
     except Exception as e:
         logger.error(err)
         sys.exit(1)
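
For readers of the INFERENCE.md hunk above: a minimal sketch of the training-engine branch it mentions, loading an exported inference_model through `fluid.io.load_inference_model` under Paddle 1.7. The model directory, the `__model__`/`__params__` file names, the feed names, and the 608x608 input shape are illustrative assumptions, not taken from the repository.

```python
# Sketch: training-engine branch via fluid.io.load_inference_model (Paddle 1.7).
# The "inference_model" path, file names, feed names and input shape are assumptions.
import numpy as np
import paddle.fluid as fluid

place = fluid.CUDAPlace(0) if fluid.is_compiled_with_cuda() else fluid.CPUPlace()
exe = fluid.Executor(place)

program, feed_names, fetch_targets = fluid.io.load_inference_model(
    dirname='inference_model',        # assumed output directory of tools/export_model.py
    executor=exe,
    model_filename='__model__',
    params_filename='__params__')

# Detection models usually expect more than one input; an exported YOLOv3, for
# example, also takes an im_size tensor. Inspect feed_names to build the feed dict.
print(feed_names)

image = np.random.rand(1, 3, 608, 608).astype('float32')   # dummy data, no real preprocessing
im_size = np.array([[608, 608]], dtype='int32')
outs = exe.run(program,
               feed={'image': image, 'im_size': im_size},   # feed names are assumptions
               fetch_list=fetch_targets,
               return_numpy=False)
```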
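The predict-engine branch, which the same paragraph describes as generally faster, goes through the `AnalysisConfig`/`create_paddle_predictor` API in `paddle.fluid.core`. Again, this is only a sketch under the same assumed paths and shapes, not a definitive implementation.

```python
# Sketch: predict-engine branch via AnalysisConfig / create_paddle_predictor.
# Paths and input shapes are illustrative assumptions.
import numpy as np
from paddle.fluid.core import AnalysisConfig, PaddleTensor, create_paddle_predictor

# AnalysisConfig also accepts a single model directory instead of two files.
config = AnalysisConfig('inference_model/__model__', 'inference_model/__params__')
config.enable_use_gpu(100, 0)   # initial GPU workspace in MB, device id; use disable_gpu() on CPU
config.switch_ir_optim(True)    # graph-level optimizations applied by the predict engine

predictor = create_paddle_predictor(config)

image = np.random.rand(1, 3, 608, 608).astype('float32')
im_size = np.array([[608, 608]], dtype='int32')
# Inputs are passed positionally, in the order of the model's feed list.
outputs = predictor.run([PaddleTensor(image), PaddleTensor(im_size)])
```

The graph-level optimizations enabled here are the main reason this path is usually faster than running the same model through the training engine.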