s920243400 / PaddleDetection (forked from PaddlePaddle / PaddleDetection)
Commit 1199c33b
Authored Jul 01, 2019 by jerrywgz; committed via GitHub on Jul 01, 2019
fix model zoo doc (#2629)
* fix model zoo doc
Parent: da2aac1d
Showing 6 changed files with 31 additions and 570 deletions (+31, -570):
configs/faster_rcnn_se154_vd_1x.yml (+0, -122)
configs/faster_rcnn_se154_vd_fpn_1x.yml (+0, -140)
configs/faster_rcnn_se154_vd_fpn_s1x.yml (+1, -1)
configs/faster_rcnn_x101_64x4d_fpn_1x.yml (+0, -139)
configs/faster_rcnn_x101_64x4d_fpn_2x.yml (+0, -139)
docs/MODEL_ZOO.md (+30, -29)
configs/faster_rcnn_se154_vd_1x.yml (deleted, file mode 100644 → 0)
architecture: FasterRCNN
train_feed: FasterRCNNTrainFeed
eval_feed: FasterRCNNEvalFeed
test_feed: FasterRCNNTestFeed
max_iters: 180000
snapshot_iter: 10000
use_gpu: true
log_smooth_window: 20
save_dir: output
pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/SE154_vd_pretrained.tar
weights: output/faster_rcnn_se154_1x/model_final
metric: COCO

FasterRCNN:
  backbone: SENet
  rpn_head: RPNHead
  roi_extractor: RoIAlign
  bbox_head: BBoxHead
  bbox_assigner: BBoxAssigner

SENet:
  depth: 152
  feature_maps: 4
  freeze_at: 2
  group_width: 4
  groups: 64
  norm_type: affine_channel
  variant: d

SENetC5:
  depth: 152
  freeze_at: 2
  group_width: 4
  groups: 64
  norm_type: affine_channel
  variant: d

RPNHead:
  anchor_generator:
    anchor_sizes: [32, 64, 128, 256, 512]
    aspect_ratios: [0.5, 1.0, 2.0]
    stride: [16.0, 16.0]
    variance: [1.0, 1.0, 1.0, 1.0]
  rpn_target_assign:
    rpn_batch_size_per_im: 256
    rpn_fg_fraction: 0.5
    rpn_negative_overlap: 0.3
    rpn_positive_overlap: 0.7
    rpn_straddle_thresh: 0.0
  train_proposal:
    min_size: 0.0
    nms_thresh: 0.7
    post_nms_top_n: 2000
    pre_nms_top_n: 12000
  test_proposal:
    min_size: 0.0
    nms_thresh: 0.7
    post_nms_top_n: 1000
    pre_nms_top_n: 6000

RoIAlign:
  resolution: 7
  sampling_ratio: 0
  spatial_scale: 0.0625

BBoxAssigner:
  batch_size_per_im: 512
  bbox_reg_weights: [0.1, 0.1, 0.2, 0.2]
  bg_thresh_hi: 0.5
  bg_thresh_lo: 0.0
  fg_fraction: 0.25
  fg_thresh: 0.5
  num_classes: 81

BBoxHead:
  head: SENetC5
  nms:
    keep_top_k: 100
    nms_threshold: 0.5
    score_threshold: 0.05
  num_classes: 81

LearningRate:
  base_lr: 0.01
  schedulers:
  - !PiecewiseDecay
    gamma: 0.1
    milestones: [120000, 160000]
  - !LinearWarmup
    start_factor: 0.1
    steps: 1000

OptimizerBuilder:
  optimizer:
    momentum: 0.9
    type: Momentum
  regularizer:
    factor: 0.0001
    type: L2

FasterRCNNTrainFeed:
  # batch size per device
  batch_size: 1
  dataset:
    dataset_dir: dataset/coco
    annotation: annotations/instances_val2017.json
    image_dir: val2017
  num_workers: 2

FasterRCNNEvalFeed:
  batch_size: 1
  dataset:
    dataset_dir: dataset/coco
    annotation: annotations/instances_val2017.json
    image_dir: val2017
  num_workers: 2

FasterRCNNTestFeed:
  batch_size: 1
  dataset:
    annotation: annotations/instances_val2017.json
  num_workers: 2
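The config above schedules its learning rate with `!PiecewiseDecay` plus `!LinearWarmup`. As a reading aid, here is a minimal sketch of the usual warmup-plus-piecewise-decay semantics, using the values from the YAML; the function itself is an illustrative assumption, not the scheduler code from this repository.

```python
# Sketch only: how base_lr, gamma, milestones, start_factor and steps from the
# LearningRate block above combine into a per-iteration learning rate.
def learning_rate(it, base_lr=0.01, gamma=0.1, milestones=(120000, 160000),
                  warmup_start_factor=0.1, warmup_steps=1000):
    if it < warmup_steps:
        # !LinearWarmup: ramp linearly from base_lr * start_factor up to base_lr.
        alpha = it / float(warmup_steps)
        return base_lr * (warmup_start_factor * (1.0 - alpha) + alpha)
    # !PiecewiseDecay: multiply by gamma once for each milestone already passed.
    passed = sum(1 for m in milestones if it >= m)
    return base_lr * gamma ** passed

# learning_rate(0) == 0.001, learning_rate(1000) == 0.01,
# learning_rate(130000) == 0.001, learning_rate(170000) == 0.0001
```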
configs/faster_rcnn_se154_vd_fpn_1x.yml (deleted, file mode 100644 → 0)
architecture: FasterRCNN
train_feed: FasterRCNNTrainFeed
eval_feed: FasterRCNNEvalFeed
test_feed: FasterRCNNTestFeed
max_iters: 180000
snapshot_iter: 10000
use_gpu: true
log_smooth_window: 20
save_dir: output
pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/SE154_vd_pretrained.tar
weights: output/faster_rcnn_se154_fpn_1x/model_final
metric: COCO

FasterRCNN:
  backbone: SENet
  fpn: FPN
  rpn_head: FPNRPNHead
  roi_extractor: FPNRoIAlign
  bbox_head: BBoxHead
  bbox_assigner: BBoxAssigner

SENet:
  depth: 152
  feature_maps: [2, 3, 4, 5]
  freeze_at: 2
  group_width: 4
  groups: 64
  norm_type: affine_channel
  variant: d

FPN:
  max_level: 6
  min_level: 2
  num_chan: 256
  spatial_scale: [0.03125, 0.0625, 0.125, 0.25]

FPNRPNHead:
  anchor_generator:
    anchor_sizes: [32, 64, 128, 256, 512]
    aspect_ratios: [0.5, 1.0, 2.0]
    stride: [16.0, 16.0]
    variance: [1.0, 1.0, 1.0, 1.0]
  anchor_start_size: 32
  max_level: 6
  min_level: 2
  num_chan: 256
  rpn_target_assign:
    rpn_batch_size_per_im: 256
    rpn_fg_fraction: 0.5
    rpn_negative_overlap: 0.3
    rpn_positive_overlap: 0.7
    rpn_straddle_thresh: 0.0
  train_proposal:
    min_size: 0.0
    nms_thresh: 0.7
    post_nms_top_n: 2000
    pre_nms_top_n: 2000
  test_proposal:
    min_size: 0.0
    nms_thresh: 0.7
    post_nms_top_n: 1000
    pre_nms_top_n: 1000

FPNRoIAlign:
  canconical_level: 4
  canonical_size: 224
  max_level: 5
  min_level: 2
  box_resolution: 7
  sampling_ratio: 2

BBoxAssigner:
  batch_size_per_im: 512
  bbox_reg_weights: [0.1, 0.1, 0.2, 0.2]
  bg_thresh_hi: 0.5
  bg_thresh_lo: 0.0
  fg_fraction: 0.25
  fg_thresh: 0.5
  num_classes: 81

BBoxHead:
  head: TwoFCHead
  nms:
    keep_top_k: 100
    nms_threshold: 0.5
    score_threshold: 0.05
  num_classes: 81

TwoFCHead:
  num_chan: 1024

LearningRate:
  base_lr: 0.01
  schedulers:
  - !PiecewiseDecay
    gamma: 0.1
    milestones: [120000, 160000]
  - !LinearWarmup
    start_factor: 0.1
    steps: 1000

OptimizerBuilder:
  optimizer:
    momentum: 0.9
    type: Momentum
  regularizer:
    factor: 0.0001
    type: L2

FasterRCNNTrainFeed:
  # batch size per device
  batch_size: 1
  dataset:
    dataset_dir: dataset/coco
    image_dir: train2017
    annotation: annotations/instances_train2017.json
  batch_transforms:
  - !PadBatch
    pad_to_stride: 32
  num_workers: 2

FasterRCNNEvalFeed:
  batch_size: 1
  dataset:
    dataset_dir: dataset/coco
    annotation: annotations/instances_val2017.json
    image_dir: val2017
  batch_transforms:
  - !PadBatch
    pad_to_stride: 32
  num_workers: 2

FasterRCNNTestFeed:
  batch_size: 1
  dataset:
    annotation: annotations/instances_val2017.json
  batch_transforms:
  - !PadBatch
    pad_to_stride: 32
  num_workers: 2
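The FPN feeds above attach a `!PadBatch` transform with `pad_to_stride: 32`. A short sketch of the intended effect (my own illustration under that assumption, not the repository's `PadBatch` implementation): each image is padded so its height and width are multiples of 32, matching the coarsest FPN scale of 0.03125 (1/32) so every pyramid level divides the input evenly.

```python
import math

# Illustrative only: pad an (h, w) image shape up to the next multiple of the
# configured stride, as pad_to_stride: 32 implies for the FPN configs above.
def padded_shape(h, w, stride=32):
    return (int(math.ceil(h / float(stride)) * stride),
            int(math.ceil(w / float(stride)) * stride))

print(padded_shape(800, 1333))  # -> (800, 1344)
```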
configs/faster_rcnn_se154_vd_fpn_s1x.yml

@@ -8,7 +8,7 @@ use_gpu: true
 log_smooth_window: 20
 save_dir: output
 pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/SE154_vd_pretrained.tar
-weights: output/faster_rcnn_se154_fpn_s1x/model_final
+weights: output/faster_rcnn_se154_vd_fpn_s1x/model_final
 metric: COCO

 FasterRCNN:
configs/faster_rcnn_x101_64x4d_fpn_1x.yml (deleted, file mode 100644 → 0)
architecture: FasterRCNN
train_feed: FasterRCNNTrainFeed
eval_feed: FasterRCNNEvalFeed
test_feed: FasterRCNNTestFeed
max_iters: 180000
snapshot_iter: 10000
use_gpu: true
log_smooth_window: 20
save_dir: output
pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_64x4d_pretrained.tar
weights: output/faster_rcnn_x101_64x4d_fpn_1x/model_final
metric: COCO

FasterRCNN:
  backbone: ResNeXt
  fpn: FPN
  rpn_head: FPNRPNHead
  roi_extractor: FPNRoIAlign
  bbox_head: BBoxHead
  bbox_assigner: BBoxAssigner

ResNeXt:
  depth: 101
  feature_maps: [2, 3, 4, 5]
  freeze_at: 2
  group_width: 4
  groups: 64
  norm_type: affine_channel

FPN:
  max_level: 6
  min_level: 2
  num_chan: 256
  spatial_scale: [0.03125, 0.0625, 0.125, 0.25]

FPNRPNHead:
  anchor_generator:
    anchor_sizes: [32, 64, 128, 256, 512]
    aspect_ratios: [0.5, 1.0, 2.0]
    stride: [16.0, 16.0]
    variance: [1.0, 1.0, 1.0, 1.0]
  anchor_start_size: 32
  max_level: 6
  min_level: 2
  num_chan: 256
  rpn_target_assign:
    rpn_batch_size_per_im: 256
    rpn_fg_fraction: 0.5
    rpn_negative_overlap: 0.3
    rpn_positive_overlap: 0.7
    rpn_straddle_thresh: 0.0
  train_proposal:
    min_size: 0.0
    nms_thresh: 0.7
    post_nms_top_n: 2000
    pre_nms_top_n: 2000
  test_proposal:
    min_size: 0.0
    nms_thresh: 0.7
    post_nms_top_n: 1000
    pre_nms_top_n: 1000

FPNRoIAlign:
  canconical_level: 4
  canonical_size: 224
  max_level: 5
  min_level: 2
  box_resolution: 7
  sampling_ratio: 2

BBoxAssigner:
  batch_size_per_im: 512
  bbox_reg_weights: [0.1, 0.1, 0.2, 0.2]
  bg_thresh_hi: 0.5
  bg_thresh_lo: 0.0
  fg_fraction: 0.25
  fg_thresh: 0.5
  num_classes: 81

BBoxHead:
  head: TwoFCHead
  nms:
    keep_top_k: 100
    nms_threshold: 0.5
    score_threshold: 0.05
  num_classes: 81

TwoFCHead:
  num_chan: 1024

LearningRate:
  base_lr: 0.01
  schedulers:
  - !PiecewiseDecay
    gamma: 0.1
    milestones: [120000, 160000]
  - !LinearWarmup
    start_factor: 0.3333333333333333
    steps: 500

OptimizerBuilder:
  optimizer:
    momentum: 0.9
    type: Momentum
  regularizer:
    factor: 0.0001
    type: L2

FasterRCNNTrainFeed:
  # batch size per device
  batch_size: 1
  dataset:
    dataset_dir: dataset/coco
    image_dir: train2017
    annotation: annotations/instances_train2017.json
  batch_transforms:
  - !PadBatch
    pad_to_stride: 32
  num_workers: 2

FasterRCNNEvalFeed:
  batch_size: 1
  dataset:
    dataset_dir: dataset/coco
    annotation: annotations/instances_val2017.json
    image_dir: val2017
  batch_transforms:
  - !PadBatch
    pad_to_stride: 32
  num_workers: 2

FasterRCNNTestFeed:
  batch_size: 1
  dataset:
    annotation: annotations/instances_val2017.json
  batch_transforms:
  - !PadBatch
    pad_to_stride: 32
  num_workers: 2
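In the FPN RPN head above, `anchor_start_size: 32` together with `min_level: 2` and `max_level: 6` lines up with the listed `anchor_sizes` of [32, 64, 128, 256, 512]. A small sketch of that usual FPN convention (an assumption about the convention, not `FPNRPNHead` source code): the base anchor size doubles once per pyramid level.

```python
# Illustrative: one base anchor size per FPN level, doubling from
# anchor_start_size at min_level up to max_level.
def fpn_anchor_sizes(anchor_start_size=32, min_level=2, max_level=6):
    return [anchor_start_size * 2 ** (level - min_level)
            for level in range(min_level, max_level + 1)]

print(fpn_anchor_sizes())  # [32, 64, 128, 256, 512]
```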
configs/faster_rcnn_x101_64x4d_fpn_2x.yml (deleted, file mode 100644 → 0)
architecture: FasterRCNN
train_feed: FasterRCNNTrainFeed
eval_feed: FasterRCNNEvalFeed
test_feed: FasterRCNNTestFeed
max_iters: 180000
snapshot_iter: 10000
use_gpu: true
log_smooth_window: 20
save_dir: output
pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_64x4d_pretrained.tar
weights: output/faster_rcnn_x101_64x4d_fpn_2x/model_final
metric: COCO

FasterRCNN:
  backbone: ResNeXt
  fpn: FPN
  rpn_head: FPNRPNHead
  roi_extractor: FPNRoIAlign
  bbox_head: BBoxHead
  bbox_assigner: BBoxAssigner

ResNeXt:
  depth: 101
  feature_maps: [2, 3, 4, 5]
  freeze_at: 2
  group_width: 4
  groups: 64
  norm_type: affine_channel

FPN:
  max_level: 6
  min_level: 2
  num_chan: 256
  spatial_scale: [0.03125, 0.0625, 0.125, 0.25]

FPNRPNHead:
  anchor_generator:
    anchor_sizes: [32, 64, 128, 256, 512]
    aspect_ratios: [0.5, 1.0, 2.0]
    stride: [16.0, 16.0]
    variance: [1.0, 1.0, 1.0, 1.0]
  anchor_start_size: 32
  max_level: 6
  min_level: 2
  num_chan: 256
  rpn_target_assign:
    rpn_batch_size_per_im: 256
    rpn_fg_fraction: 0.5
    rpn_negative_overlap: 0.3
    rpn_positive_overlap: 0.7
    rpn_straddle_thresh: 0.0
  train_proposal:
    min_size: 0.0
    nms_thresh: 0.7
    post_nms_top_n: 2000
    pre_nms_top_n: 2000
  test_proposal:
    min_size: 0.0
    nms_thresh: 0.7
    post_nms_top_n: 1000
    pre_nms_top_n: 1000

FPNRoIAlign:
  canconical_level: 4
  canonical_size: 224
  max_level: 5
  min_level: 2
  box_resolution: 7
  sampling_ratio: 2

BBoxAssigner:
  batch_size_per_im: 512
  bbox_reg_weights: [0.1, 0.1, 0.2, 0.2]
  bg_thresh_hi: 0.5
  bg_thresh_lo: 0.0
  fg_fraction: 0.25
  fg_thresh: 0.5
  num_classes: 81

BBoxHead:
  head: TwoFCHead
  nms:
    keep_top_k: 100
    nms_threshold: 0.5
    score_threshold: 0.05
  num_classes: 81

TwoFCHead:
  num_chan: 1024

LearningRate:
  base_lr: 0.01
  schedulers:
  - !PiecewiseDecay
    gamma: 0.1
    milestones: [240000, 320000]
  - !LinearWarmup
    start_factor: 0.3333333333333333
    steps: 500

OptimizerBuilder:
  optimizer:
    momentum: 0.9
    type: Momentum
  regularizer:
    factor: 0.0001
    type: L2

FasterRCNNTrainFeed:
  # batch size per device
  batch_size: 1
  dataset:
    dataset_dir: dataset/coco
    image_dir: train2017
    annotation: annotations/instances_train2017.json
  batch_transforms:
  - !PadBatch
    pad_to_stride: 32
  num_workers: 2

FasterRCNNEvalFeed:
  batch_size: 1
  dataset:
    dataset_dir: dataset/coco
    annotation: annotations/instances_val2017.json
    image_dir: val2017
  batch_transforms:
  - !PadBatch
    pad_to_stride: 32
  num_workers: 2

FasterRCNNTestFeed:
  batch_size: 1
  dataset:
    annotation: annotations/instances_val2017.json
  batch_transforms:
  - !PadBatch
    pad_to_stride: 32
  num_workers: 2
docs/MODEL_ZOO.md
@@ -30,57 +30,58 @@ The backbone models pretrained on ImageNet are available. All backbone models ar
 ### Faster & Mask R-CNN

-| Backbone | Type | Img/gpu | Lr schd | Box AP | Mask AP | Download |
+| Backbone | Type | Image/gpu | Lr schd | Box AP | Mask AP | Download |
 | :------------------- | :------------- | :-----: | :-----: | :----: | :-----: | :----------------------------------------------------------: |
 | ResNet50 | Faster | 1 | 1x | 35.2 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_1x.tar) |
 | ResNet50 | Faster | 1 | 2x | 37.1 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_2x.tar) |
 | ResNet50 | Mask | 1 | 1x | 36.5 | 32.2 | [model](https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_r50_1x.tar) |
 | ResNet50 | Mask | 1 | 2x | | | [model]() |
-| ResNet50-D | Faster | 1 | 1x | 36.4 | - | [model](ttps://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_1x.tar) |
+| ResNet50-vd | Faster | 1 | 1x | 36.4 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_1x.tar) |
 | ResNet50-FPN | Faster | 2 | 1x | 37.2 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_fpn_1x.tar) |
 | ResNet50-FPN | Faster | 2 | 2x | 37.7 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_fpn_2x.tar) |
 | ResNet50-FPN | Mask | 2 | 1x | 37.9 | 34.2 | [model](https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_r50_fpn_1x.tar) |
 | ResNet50-FPN | Cascade Faster | 2 | 1x | 40.9 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/cascade_rcnn_r50_fpn_1x.tar) |
-| ResNet50-D-FPN | Faster | 2 | 2x | 38.9 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_fpn_2x.tar) |
-| ResNet50-D-FPN | Mask | 2 | 2x | 39.8 | 35.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_r50_vd_fpn_2x.tar) |
+| ResNet50-vd-FPN | Faster | 2 | 2x | 38.9 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_fpn_2x.tar) |
+| ResNet50-vd-FPN | Mask | 2 | 2x | 39.8 | 35.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_r50_vd_fpn_2x.tar) |
 | ResNet101 | Faster | 1 | 1x | 38.3 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r101_1x.tar) |
 | ResNet101-FPN | Faster | 1 | 1x | 38.7 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r101_fpn_1x.tar) |
 | ResNet101-FPN | Faster | 1 | 2x | 39.1 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r101_fpn_2x.tar) |
 | ResNet101-FPN | Mask | 1 | 1x | 39.5 | 35.2 | [model](https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_r101_fpn_1x.tar) |
-| ResNet101-D-FPN | Faster | 1 | 1x | 40.0 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r101_fpn_1x.tar) |
-| ResNet101-D-FPN | Faster | 1 | 2x | 40.6 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r101_fpn_2x.tar) |
-| SENet154-D-FPN | Faster | 1 | 1.44x | 43.5 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_se154_fpn_s1x.tar) |
-| SENet154-D-FPN | Mask | 1 | 1.44x | 44.0 | 38.7 | [model](https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_se154_vd_fpn_s1x.tar) |
+| ResNet101-vd-FPN | Faster | 1 | 1x | 40.0 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r101_fpn_1x.tar) |
+| ResNet101-vd-FPN | Faster | 1 | 2x | 40.6 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r101_fpn_2x.tar) |
+| SENet154-vd-FPN | Faster | 1 | 1.44x | 42.9 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_se154_vd_fpn_s1x.tar) |
+| SENet154-vd-FPN | Mask | 1 | 1.44x | 44.0 | 38.7 | [model](https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_se154_vd_fpn_s1x.tar) |

 ### Yolo v3

-| Backbone | Size | Img/gpu | Lr schd | Box AP | Download |
+| Backbone | Size | Image/gpu | Lr schd | Box AP | Download |
 | :----------- | :--: | :-----: | :-----: | :----: | :-------: |
-| DarkNet53 | 608 | 8 | 120e | 38.9 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_darknet.tar) |
-| DarkNet53 | 416 | 8 | 120e | 37.5 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_darknet.tar) |
-| DarkNet53 | 320 | 8 | 120e | 34.8 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_darknet.tar) |
-| MobileNet-V1 | 608 | 8 | 120e | 29.3 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1.tar) |
-| MobileNet-V1 | 416 | 8 | 120e | 29.3 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1.tar) |
-| MobileNet-V1 | 320 | 8 | 120e | 27.1 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1.tar) |
-| ResNet34 | 608 | 8 | 120e | 36.2 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_r34.tar) |
-| ResNet34 | 416 | 8 | 120e | 34.3 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_r34.tar) |
-| ResNet34 | 320 | 8 | 120e | 31.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_r34.tar) |
-
-**NOTE**: Yolo v3 trained in 8 GPU with total batch size as 64. Yolo v3 training data augmentations: mixup image, random distort image, random crop image, random expand image, random interpolate, random flip image.
+| DarkNet53 | 608 | 8 | 270e | 38.9 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_darknet.tar) |
+| DarkNet53 | 416 | 8 | 270e | 37.5 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_darknet.tar) |
+| DarkNet53 | 320 | 8 | 270e | 34.8 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_darknet.tar) |
+| MobileNet-V1 | 608 | 8 | 270e | 29.3 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1.tar) |
+| MobileNet-V1 | 416 | 8 | 270e | 29.3 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1.tar) |
+| MobileNet-V1 | 320 | 8 | 270e | 27.1 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1.tar) |
+| ResNet34 | 608 | 8 | 270e | 36.2 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_r34.tar) |
+| ResNet34 | 416 | 8 | 270e | 34.3 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_r34.tar) |
+| ResNet34 | 320 | 8 | 270e | 31.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_r34.tar) |
+
+**NOTE**: Yolo v3 trained in 8 GPU with total batch size as 64 and trained 270 epoches. Yolo v3 training data augmentations: mixup, randomly color distortion, randomly cropping, randomly expansion, randomly interpolation method, randomly flippling.

 ### RetinaNet

-| Backbone | Size | Lr schd | Box AP | Download |
-| :----------- | :--: | :-----: | :----: | :-------: |
-| ResNet50-FPN | 300 | 120e | 36.0 | [model](https://paddlemodels.bj.bcebos.com/object_detection/retinanet_r50_fpn_1x.tar) |
-| ResNet101-FPN | 300 | 120e | 37.3 | [model](https://paddlemodels.bj.bcebos.com/object_detection/retinanet_r101_fpn_1x.tar) |
+| Backbone | Image/gpu | Lr schd | Box AP | Download |
+| :----------- | :-----: | :-----: | :----: | :-------: |
+| ResNet50-FPN | 2 | 1x | 36.0 | [model](https://paddlemodels.bj.bcebos.com/object_detection/retinanet_r50_fpn_1x.tar) |
+| ResNet101-FPN | 2 | 1x | 37.3 | [model](https://paddlemodels.bj.bcebos.com/object_detection/retinanet_r101_fpn_1x.tar) |

 **Notes:** In RetinaNet, the base LR is changed to 0.01 for minibatch size 16.

 ### SSD on PascalVOC

-| Backbone | Size | Img/gpu | Lr schd | Box AP | Download |
+| Backbone | Size | Image/gpu | Lr schd | Box AP | Download |
 | :----------- | :--: | :-----: | :-----: | :----: | :-------: |
 | MobileNet v1 | 300 | 32 | 120e | 73.2 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ssd_mobilenet_v1_voc.tar) |

-**NOTE**: SSD trained in 2 GPU with totoal batch size as 64. SSD training data augmentations: random distort image, random crop image, random expand image, random flip image.
+**NOTE**: SSD trained in 2 GPU with totoal batch size as 64 and trained 120 epoches. SSD training data augmentations: randomly color distortion, randomly cropping, randomly expansion, randomly flipping.
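The RetinaNet note in the doc above ties the base LR of 0.01 to a total minibatch of 16 images. A hedged sketch of the linear LR scaling rule that such notes usually imply (the scaling formula is my assumption; the doc itself only states the 0.01-for-16 pairing):

```python
# Illustrative linear scaling: keep the base LR proportional to the global
# batch size, anchored at the documented reference of lr=0.01 for 16 images.
def scaled_base_lr(total_batch_size, ref_lr=0.01, ref_batch_size=16):
    return ref_lr * total_batch_size / ref_batch_size

print(scaled_base_lr(8))   # 0.005
print(scaled_base_lr(16))  # 0.01 (reference setting)
```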