Unverified commit 530fb7c4, authored by Shuangchi He, committed by GitHub

Fix some typos. (#5860)

* Fix some typos in *.md.

* Fix some typos in code.
Parent f06c9290
@@ -232,9 +232,9 @@ paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet/ \
| Paddle Lite | - | [C++](../../deploy/lite) | ✔︎ |
| Android Demo | - | [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/develop/object_detection/android/app/cxx/picodet_detection_demo) | ✔︎ |
| PaddleInference | [Python](../../deploy/python) | [C++](../../deploy/cpp) | ✔︎ |
-| ONNXRuntime | [Python](../../deploy/third_engine/demo_onnxruntime) | Comming soon | ✔︎ |
-| NCNN | Comming soon | [C++](../../deploy/third_engine/demo_ncnn) | ✘ |
-| MNN | Comming soon | [C++](../../deploy/third_engine/demo_mnn) | ✘ |
+| ONNXRuntime | [Python](../../deploy/third_engine/demo_onnxruntime) | Coming soon | ✔︎ |
+| NCNN | Coming soon | [C++](../../deploy/third_engine/demo_ncnn) | ✘ |
+| MNN | Coming soon | [C++](../../deploy/third_engine/demo_mnn) | ✘ |
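The export command in the hunk header feeds the ONNXRuntime demo row above. As a minimal sanity-check sketch of that Python path (the model filename is an assumption, and a real PicoDet export may expose extra inputs such as `scale_factor`, so inspect `sess.get_inputs()` first):

```python
import numpy as np
import onnxruntime as ort

# Load a model exported by paddle2onnx; the path is illustrative.
sess = ort.InferenceSession("picodet_s_320_coco_lcnet.onnx")
input_name = sess.get_inputs()[0].name

# PicoDet-S 320 consumes a normalized NCHW float32 tensor.
dummy = np.random.rand(1, 3, 320, 320).astype(np.float32)
outputs = sess.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```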
@@ -223,13 +223,13 @@ paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet/ \
| Infer Engine | Python | C++ | Predict With Postprocess |
| :-------- | :--------: | :---------------------: | :----------------: |
-| OpenVINO | [Python](../../deploy/third_engine/demo_openvino/python) | [C++](../../deploy/third_engine/demo_openvino)(postprocess comming soon) | ✔︎ |
+| OpenVINO | [Python](../../deploy/third_engine/demo_openvino/python) | [C++](../../deploy/third_engine/demo_openvino)(postprocess coming soon) | ✔︎ |
| Paddle Lite | - | [C++](../../deploy/lite) | ✔︎ |
| Android Demo | - | [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/develop/object_detection/android/app/cxx/picodet_detection_demo) | ✔︎ |
| PaddleInference | [Python](../../deploy/python) | [C++](../../deploy/cpp) | ✔︎ |
-| ONNXRuntime | [Python](../../deploy/third_engine/demo_onnxruntime) | Comming soon | ✔︎ |
-| NCNN | Comming soon | [C++](../../deploy/third_engine/demo_ncnn) | ✘ |
-| MNN | Comming soon | [C++](../../deploy/third_engine/demo_mnn) | ✘ |
+| ONNXRuntime | [Python](../../deploy/third_engine/demo_onnxruntime) | Coming soon | ✔︎ |
+| NCNN | Coming soon | [C++](../../deploy/third_engine/demo_ncnn) | ✘ |
+| MNN | Coming soon | [C++](../../deploy/third_engine/demo_mnn) | ✘ |
Android demo visualization:
@@ -277,7 +277,7 @@ python tools/train.py -c configs/picodet/picodet_s_416_coco_lcnet.yml \
## Unstructured Pruning
<details open>
-<summary>Toturial:</summary>
+<summary>Tutorial:</summary>
Please refer to this [documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet/legacy_model/pruner/README.md) for details such as requirements, training and deployment.
@@ -2,7 +2,7 @@
### Introduction
* In recent years, object detection in images has drawn wide attention from both academia and industry. Built on the ResNet50_vd pretrained model trained with the SSLD distillation scheme in [PaddleClas](https://github.com/PaddlePaddle/PaddleClas) (82.39% Top-1 accuracy on the ImageNet1k validation set) and the rich operators in PaddleDetection, PaddlePaddle provides PSS-DET (Practical Server Side Detection), a practical server-side object detection solution. On the COCO2017 object detection dataset, it reaches 41.2% COCO mAP at 61 FPS on a single V100 GPU.
### Model Zoo
@@ -116,7 +116,7 @@ class JDETracker(object):
Return:
output_stracks_dict (dict(list)): The list contains information
-regarding the online_tracklets for the recieved image tensor.
+regarding the online_tracklets for the received image tensor.
"""
self.frame_id += 1
if self.frame_id == 1:
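The `update` docstring above implies a per-frame loop. A hedged sketch of such a loop (the constructor arguments, the detector call, and the detection/embedding format are hypothetical placeholders based on the docstring, not the exact PaddleDetection API):

```python
# Hypothetical per-frame MOT loop around JDETracker.update.
tracker = JDETracker()                            # real usage may require config args
for image in video_frames:                        # video_frames: assumed iterable of frames
    pred_dets, pred_embs = detector(image)        # hypothetical detector returning dets + embeddings
    online_dict = tracker.update(pred_dets, pred_embs)  # dict(list) per the docstring
    for cls_id, tracks in online_dict.items():
        for track in tracks:
            # each online tracklet carries an id and a bounding box
            print(cls_id, track.track_id, track.tlwh)
```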
@@ -35,7 +35,7 @@ class HrHRNetPostProcess(object):
heat_thresh (float): value of topk below this threshold will be ignored
tag_thresh (float): coord's value sampled in tagmap below this threshold belongs to the same person for init
-inputs(list[heatmap]): the output list of modle, [heatmap, heatmap_maxpool, tagmap], heatmap_maxpool used to get topk
+inputs(list[heatmap]): the output list of model, [heatmap, heatmap_maxpool, tagmap], heatmap_maxpool used to get topk
original_height, original_width (float): the original image size
"""
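`heatmap_maxpool` exists so keypoint peaks can be isolated without a full NMS pass: a location counts as a peak only where the heatmap equals its own max-pooled copy. A small self-contained sketch of that idea (shapes and thresholds are illustrative, not the exact HrHRNet code):

```python
import numpy as np

def topk_peaks(heatmap, heatmap_maxpool, heat_thresh=0.1, k=30):
    """Keep local maxima (where heatmap equals its max-pooled copy),
    zero out values below heat_thresh, and return top-k flat indices."""
    peaks = np.where(heatmap == heatmap_maxpool, heatmap, 0.0)
    peaks = np.where(peaks >= heat_thresh, peaks, 0.0)
    flat = peaks.reshape(-1)
    order = np.argsort(flat)[::-1][:k]
    return order, flat[order]
```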
@@ -59,7 +59,7 @@ TrainReader:
- Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]}
# batch_size during training
batch_size: 24
# whether to shuffle the data when loading
shuffle: true
# whether to drop the last data that cannot form a complete batch
drop_last: true
@@ -90,7 +90,7 @@ TrainReader:
- PadBatch: {pad_to_stride: 32}
# batch_size during training
batch_size: 1
# whether to shuffle the data when loading
shuffle: true
# whether to drop the last data that cannot form a complete batch
drop_last: true
@@ -110,7 +110,7 @@ EvalReader:
- PadBatch: {pad_to_stride: 32}
# batch_size during evaluation
batch_size: 1
# whether to shuffle the data when loading
shuffle: false
# whether to drop the last data that cannot form a complete batch
drop_last: false
@@ -130,7 +130,7 @@ TestReader:
- PadBatch: {pad_to_stride: 32}
# batch_size during testing
batch_size: 1
# whether to shuffle the data when loading
shuffle: false
# whether to drop the last data that cannot form a complete batch
drop_last: false
@@ -102,7 +102,7 @@ TrainReader:
- Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]}
# batch_size during training
batch_size: 24
# whether to shuffle the data when loading
shuffle: true
# whether to drop the last data that cannot form a complete batch
drop_last: true
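These reader blocks repeat the same three knobs; their interaction is easy to see in plain Python (a toy illustration, independent of PaddleDetection's actual reader):

```python
# With 10 samples and batch_size=3: drop_last=True yields 3 full batches
# (sample 9 is discarded); drop_last=False keeps a final short batch.
# shuffle only permutes sample order before batching.
samples = list(range(10))
batch_size, drop_last = 3, True
batches = [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]
if drop_last and len(batches[-1]) < batch_size:
    batches.pop()
print(batches)  # [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
```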
@@ -529,7 +529,7 @@ class Gt2FairMOTTarget(Gt2TTFTarget):
Generate FairMOT targets by ground truth data.
Differences between Gt2FairMOTTarget and Gt2TTFTarget are:
1. the gaussian kernel radius to generate a heatmap.
-2. the targets needed during traing.
+2. the targets needed during training.
Args:
num_classes(int): the number of classes.
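Point 1 above refers to the Gaussian kernel radius used when splatting a box center onto the heatmap. The widely used CornerNet/CenterNet formula picks the largest radius such that a corner shifted by it still keeps a minimum IoU with the ground-truth box; a sketch following that public implementation (it may differ in detail from PaddleDetection's own helper):

```python
import math

def gaussian_radius(det_size, min_overlap=0.7):
    """CornerNet-style radius for a (height, width) box, as in the widely
    copied CenterNet implementation; solves three quadratic IoU cases."""
    height, width = det_size

    # Case 1: both corners shift inside the ground-truth box.
    b1 = height + width
    c1 = width * height * (1 - min_overlap) / (1 + min_overlap)
    r1 = (b1 + math.sqrt(b1 ** 2 - 4 * c1)) / 2

    # Case 2: both corners shift outside the box.
    a2, b2 = 4, 2 * (height + width)
    c2 = (1 - min_overlap) * width * height
    r2 = (b2 + math.sqrt(b2 ** 2 - 4 * a2 * c2)) / 2

    # Case 3: one corner inside, one outside.
    a3 = 4 * min_overlap
    b3 = -2 * min_overlap * (height + width)
    c3 = (min_overlap - 1) * width * height
    r3 = (b3 + math.sqrt(b3 ** 2 - 4 * a3 * c3)) / 2
    return min(r1, r2, r3)
```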
@@ -1054,7 +1054,7 @@ class CropWithSampling(BaseOperator):
[max sample, max trial, min scale, max scale,
min aspect ratio, max aspect ratio,
min overlap, max overlap]
-avoid_no_bbox (bool): whether to to avoid the
+avoid_no_bbox (bool): whether to avoid the
situation where the box does not appear.
"""
super(CropWithSampling, self).__init__()
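For orientation, the eight-slot `batch_sampler` rows documented in this docstring could look like the following (values are purely illustrative, not defaults shipped with the repo):

```python
# [max sample, max trial, min scale, max scale,
#  min aspect ratio, max aspect ratio, min overlap, max overlap]
batch_sampler = [
    [1, 50, 0.3, 1.0, 0.5, 2.0, 0.1, 1.0],
    [1, 50, 0.3, 1.0, 0.5, 2.0, 0.3, 1.0],
    [1, 50, 0.3, 1.0, 0.5, 2.0, 0.5, 1.0],
]
```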
@@ -1145,7 +1145,7 @@ class CropWithDataAchorSampling(BaseOperator):
das_anchor_scales (list[float]): a list of anchor scales in data
anchor sampling.
min_size (float): minimum size of sampled bbox.
-avoid_no_bbox (bool): whether to to avoid the
+avoid_no_bbox (bool): whether to avoid the
situation where the box does not appear.
"""
super(CropWithDataAchorSampling, self).__init__()
@@ -557,7 +557,7 @@ class KITTIEvaluation(object):
"track ids are not unique for sequence %d: frame %d"
% (seq, t_data.frame))
logger.info(
"track id %d occured at least twice for this frame"
"track id %d occurred at least twice for this frame"
% t_data.track_id)
logger.info("Exiting...")
#continue # this allows evaluating non-unique result files
@@ -153,7 +153,7 @@ class HrHRNetPostProcess(object):
heat_thresh (float): value of topk below this threshold will be ignored
tag_thresh (float): coord's value sampled in tagmap below this threshold belongs to the same person for init
-inputs(list[heatmap]): the output list of modle, [heatmap, heatmap_maxpool, tagmap], heatmap_maxpool used to get topk
+inputs(list[heatmap]): the output list of model, [heatmap, heatmap_maxpool, tagmap], heatmap_maxpool used to get topk
original_height, original_width (float): the original image size
'''
@@ -198,7 +198,7 @@ class SparseRCNNLoss(nn.Layer):
# Retrieve the matching between the outputs of the last layer and the targets
indices = self.matcher(outputs_without_aux, targets)
-# Compute the average number of target boxes accross all nodes, for normalization purposes
+# Compute the average number of target boxes across all nodes, for normalization purposes
num_boxes = sum(len(t["labels"]) for t in targets)
num_boxes = paddle.to_tensor(
[num_boxes],
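The comment being fixed here describes the usual DETR-style normalizer: each rank averages its target-box count across all nodes and clamps at 1 so the loss denominator never hits zero. A hedged sketch of that pattern with generic Paddle distributed calls (not the exact SparseRCNNLoss code):

```python
import paddle
import paddle.distributed as dist

def normalized_num_boxes(targets):
    """Average the per-rank count of target boxes across all ranks and
    clamp to >= 1 (sketch; assumes the distributed env is initialized)."""
    num_boxes = sum(len(t["labels"]) for t in targets)
    num_boxes = paddle.to_tensor([num_boxes], dtype="float32")
    if dist.get_world_size() > 1:
        dist.all_reduce(num_boxes)          # in-place sum across ranks
        num_boxes = num_boxes / dist.get_world_size()
    return paddle.clip(num_boxes, min=1.0).item()
```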
@@ -122,7 +122,7 @@ class JDETracker(object):
Return:
output_stracks_dict (dict(list)): The list contains information
-regarding the online_tracklets for the recieved image tensor.
+regarding the online_tracklets for the received image tensor.
"""
self.frame_id += 1
if self.frame_id == 1:
@@ -280,7 +280,7 @@ def roi_align(input,
rois_num = paddle.static.data(name='rois_num', shape=[None], dtype='int32')
align_out = ops.roi_align(input=x,
rois=rois,
-ouput_size=(7, 7),
+output_size=(7, 7),
spatial_scale=0.5,
sampling_ratio=-1,
rois_num=rois_num)
@@ -28,7 +28,7 @@ PaddleDetection also open-sources a Faster RCNN-based GIOU loss implementation. Using GIOU loss
GIOU loss solves the problem that IOU loss cannot provide an optimization direction when the IoU between the predicted box A and the ground truth B is 0, but two cases remain hard to handle:
1. When box A and box B are in a containment relationship, GIOU loss degenerates to IOU loss and the model converges slowly.
2. When A and B intersect and their x1 and x2 (or y1 and y2) coordinates are both equal, GIOU loss still degenerates to IOU loss and converges very slowly.
To address this, the paper proposes DIOU loss and CIOU loss, which fix the slow convergence and the cases where convergence fails.
To speed up convergence, the paper introduces the notion of distance into the improved loss; concretely, the box loss can be defined as follows:
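The formula this paragraph leads into is truncated by the diff; in its standard published form the distance-based box loss (DIoU) is, as a reconstruction from the DIoU/CIoU paper rather than the elided original, with $\rho$ the Euclidean distance between the box centers $b$ and $b^{gt}$ and $c$ the diagonal of the smallest enclosing box:

```latex
\mathcal{L}_{DIoU} = 1 - IoU + \frac{\rho^{2}\left(b,\, b^{gt}\right)}{c^{2}}
```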
@@ -90,7 +90,7 @@ PP-YOLO and PP-YOLOv2 improved performance and speed of YOLOv3 with following me
|:----------------------------:|:----------:|:----------:| :---------: | :-----------------------: | :--------: | :----------:| :------------------: | :-------------------: | :------: | :----------------------: | :-----: |
| PP-YOLO_MobileNetV3_small | 4 | 32 | 75% | PP-YOLO_MobileNetV3_large | 4.2MB | 320 | 16.2 | 39.8 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_mobilenet_v3_small_prune75_distillby_mobilenet_v3_large.pdparams) | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_mobilenet_v3_small_prune75_distillby_mobilenet_v3_large.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_mobilenet_v3_small.yml) |
-- Slim PP-YOLO is trained by slim traing method from [Distill pruned model](../../slim/extensions/distill_pruned_model/README.md),distill training pruned PP-YOLO_MobileNetV3_small model with PP-YOLO_MobileNetV3_large model as the teacher model
+- Slim PP-YOLO is trained by slim training method from [Distill pruned model](../../slim/extensions/distill_pruned_model/README.md),distill training pruned PP-YOLO_MobileNetV3_small model with PP-YOLO_MobileNetV3_large model as the teacher model
- Pruning detection head of PP-YOLO model with ratio as 75%, while the arguments are `--pruned_params="yolo_block.0.2.conv.weights,yolo_block.0.tip.conv.weights,yolo_block.1.2.conv.weights,yolo_block.1.tip.conv.weights" --pruned_ratios="0.75,0.75,0.75,0.75"`
- For Slim PP-YOLO training, evaluation, inference and model exporting, please see [Distill pruned model](../../slim/extensions/distill_pruned_model/README.md)
@@ -2,7 +2,7 @@
### Introduction
* In recent years, object detection in images has drawn wide attention from both academia and industry. Built on the ResNet50_vd pretrained model trained with the SSLD distillation scheme in [PaddleClas](https://github.com/PaddlePaddle/PaddleClas) (82.39% Top-1 accuracy on the ImageNet1k validation set) and the rich operators in PaddleDetection, PaddlePaddle provides PSS-DET (Practical Server Side Detection), a practical server-side object detection solution. On the COCO2017 object detection dataset, it reaches 41.6% COCO mAP at 61 FPS on a single V100 GPU, and 47.8% COCO mAP at 20 FPS.
* Taking the standard Faster RCNN ResNet50_vd FPN as an example, the table below lists the speed and accuracy gains of each PSS-DET module.
@@ -65,7 +65,7 @@ def archives = [
'src' : 'https://paddlelite-demo.bj.bcebos.com/models/yolov3_mobilenet_v3_prune86_FPGM_320_fp32_for_hybrid_cpu_npu_v2_6_1.tar.gz',
'dest' : 'src/main/assets/models/yolov3_mobilenet_v3_for_hybrid_cpu_npu'
],
-// pp-yolo tiny comming soon
+// pp-yolo tiny coming soon
// ssd_mobilenet_v1 voc
[
'src' : 'https://paddlelite-demo.bj.bcebos.com/models/ssdlite_mobilenet_v3_large_for_cpu_nb.tar.gz',
@@ -955,7 +955,7 @@ class CropImage(BaseOperator):
[max sample, max trial, min scale, max scale,
min aspect ratio, max aspect ratio,
min overlap, max overlap]
-avoid_no_bbox (bool): whether to to avoid the
+avoid_no_bbox (bool): whether to avoid the
situation where the box does not appear.
"""
super(CropImage, self).__init__()
@@ -1047,7 +1047,7 @@ class CropImageWithDataAchorSampling(BaseOperator):
das_anchor_scales (list[float]): a list of anchor scales in data
anchor sampling.
min_size (float): minimum size of sampled bbox.
-avoid_no_bbox (bool): whether to to avoid the
+avoid_no_bbox (bool): whether to avoid the
situation where the box does not appear.
"""
super(CropImageWithDataAchorSampling, self).__init__()