PaddlePaddle / PaddleClas — commit 70322886

Unverified commit, authored by cuicheng01 on Jul 22, 2021; committed via GitHub on Jul 22, 2021.
Merge pull request #1065 from cuicheng01/develop
Update Twins configs
Parents: 0ae9f6f5, 3a871524
Showing 7 changed files with 819 additions and 27 deletions.
- ppcls/arch/backbone/model_zoo/gvt.py (+27, −27)
- ppcls/configs/ImageNet/Twins/alt_gvt_base.yaml (+132, −0)
- ppcls/configs/ImageNet/Twins/alt_gvt_large.yaml (+132, −0)
- ppcls/configs/ImageNet/Twins/alt_gvt_small.yaml (+132, −0)
- ppcls/configs/ImageNet/Twins/pcpvt_base.yaml (+132, −0)
- ppcls/configs/ImageNet/Twins/pcpvt_large.yaml (+132, −0)
- ppcls/configs/ImageNet/Twins/pcpvt_small.yaml (+132, −0)
ppcls/arch/backbone/model_zoo/gvt.py (+27, −27)

The changes rename `num_classes` to `class_num` and replace `-1` ("infer this dimension") arguments to `reshape` with explicitly computed sizes, keeping all shapes concrete for static mode and model export.

```diff
@@ -56,10 +56,10 @@ class GroupAttention(nn.Layer):
                  ws=1):
         super().__init__()
         if ws == 1:
-            raise Exception(f"ws {ws} should not be 1")
+            raise Exception("ws {ws} should not be 1")
         if dim % num_heads != 0:
             raise Exception(
-                f"dim {dim} should be divided by num_heads {num_heads}.")
+                "dim {dim} should be divided by num_heads {num_heads}.")
         self.dim = dim
         self.num_heads = num_heads
@@ -78,15 +78,15 @@ class GroupAttention(nn.Layer):
         total_groups = h_group * w_group
         x = x.reshape([B, h_group, self.ws, w_group, self.ws, C]).transpose(
             [0, 1, 3, 2, 4, 5])
         qkv = self.qkv(x).reshape(
-            [B, total_groups, -1, 3, self.num_heads,
+            [B, total_groups, self.ws**2, 3, self.num_heads,
              C // self.num_heads]).transpose([3, 0, 1, 4, 2, 5])
         q, k, v = qkv[0], qkv[1], qkv[2]
         attn = (q @ k.transpose([0, 1, 2, 4, 3])) * self.scale
         attn = nn.Softmax(axis=-1)(attn)
         attn = self.attn_drop(attn)
         attn = (attn @ v).transpose([0, 1, 3, 2, 4]).reshape(
             [B, h_group, w_group, self.ws, self.ws, C])
         x = attn.transpose([0, 1, 3, 2, 4, 5]).reshape([B, N, C])
@@ -135,22 +135,23 @@ class Attention(nn.Layer):
         if self.sr_ratio > 1:
             x_ = x.transpose([0, 2, 1]).reshape([B, C, H, W])
-            x_ = self.sr(x_).reshape([B, C, -1]).transpose([0, 2, 1])
+            tmp_n = H * W // self.sr_ratio**2
+            x_ = self.sr(x_).reshape([B, C, tmp_n]).transpose([0, 2, 1])
             x_ = self.norm(x_)
             kv = self.kv(x_).reshape(
-                [B, -1, 2, self.num_heads, C // self.num_heads]).transpose(
+                [B, tmp_n, 2, self.num_heads, C // self.num_heads]).transpose(
                     [2, 0, 3, 1, 4])
         else:
             kv = self.kv(x).reshape(
-                [B, -1, 2, self.num_heads, C // self.num_heads]).transpose(
+                [B, N, 2, self.num_heads, C // self.num_heads]).transpose(
                     [2, 0, 3, 1, 4])
         k, v = kv[0], kv[1]
         attn = (q @ k.transpose([0, 1, 3, 2])) * self.scale
         attn = nn.Softmax(axis=-1)(attn)
         attn = self.attn_drop(attn)
         x = (attn @ v).transpose([0, 2, 1, 3]).reshape([B, N, C])
         x = self.proj(x)
         x = self.proj_drop(x)
         return x
@@ -280,7 +281,7 @@ class PyramidVisionTransformer(nn.Layer):
                  img_size=224,
                  patch_size=16,
                  in_chans=3,
-                 num_classes=1000,
+                 class_num=1000,
                  embed_dims=[64, 128, 256, 512],
                  num_heads=[1, 2, 4, 8],
                  mlp_ratios=[4, 4, 4, 4],
@@ -294,7 +295,7 @@ class PyramidVisionTransformer(nn.Layer):
                  sr_ratios=[8, 4, 2, 1],
                  block_cls=Block):
         super().__init__()
-        self.num_classes = num_classes
+        self.class_num = class_num
         self.depths = depths

         # patch_embed
@@ -317,7 +318,6 @@ class PyramidVisionTransformer(nn.Layer):
                 self.create_parameter(
                     shape=[1, patch_num, embed_dims[i]],
                     default_initializer=zeros_))
-            self.add_parameter(f"pos_embeds_{i}", self.pos_embeds[i])
             self.pos_drops.append(nn.Dropout(p=drop_rate))

         dpr = [
@@ -354,7 +354,7 @@ class PyramidVisionTransformer(nn.Layer):
         # classification head
         self.head = nn.Linear(embed_dims[-1],
-                              num_classes) if num_classes > 0 else Identity()
+                              class_num) if class_num > 0 else Identity()

         # init weights
         for pos_emb in self.pos_embeds:
@@ -433,7 +433,7 @@ class CPVTV2(PyramidVisionTransformer):
                  img_size=224,
                  patch_size=4,
                  in_chans=3,
-                 num_classes=1000,
+                 class_num=1000,
                  embed_dims=[64, 128, 256, 512],
                  num_heads=[1, 2, 4, 8],
                  mlp_ratios=[4, 4, 4, 4],
@@ -446,10 +446,10 @@ class CPVTV2(PyramidVisionTransformer):
                  depths=[3, 4, 6, 3],
                  sr_ratios=[8, 4, 2, 1],
                  block_cls=Block):
-        super().__init__(img_size, patch_size, in_chans, num_classes,
-                         embed_dims, num_heads, mlp_ratios, qkv_bias, qk_scale,
-                         drop_rate, attn_drop_rate, drop_path_rate, norm_layer,
-                         depths, sr_ratios, block_cls)
+        super().__init__(img_size, patch_size, in_chans, class_num,
+                         embed_dims, num_heads, mlp_ratios, qkv_bias, qk_scale,
+                         drop_rate, attn_drop_rate, drop_path_rate, norm_layer,
+                         depths, sr_ratios, block_cls)
         del self.pos_embeds
         del self.cls_token
         self.pos_block = nn.LayerList(
@@ -488,7 +488,7 @@ class CPVTV2(PyramidVisionTransformer):
             x = self.pos_block[i](x, H, W)  # PEG here
             if i < len(self.depths) - 1:
-                x = x.reshape([B, H, W, -1]).transpose([0, 3, 1, 2])
+                x = x.reshape([B, H, W, x.shape[-1]]).transpose([0, 3, 1, 2])
         x = self.norm(x)
         return x.mean(axis=1)  # GAP here
@@ -499,7 +499,7 @@ class PCPVT(CPVTV2):
                  img_size=224,
                  patch_size=4,
                  in_chans=3,
-                 num_classes=1000,
+                 class_num=1000,
                  embed_dims=[64, 128, 256],
                  num_heads=[1, 2, 4],
                  mlp_ratios=[4, 4, 4],
@@ -512,10 +512,10 @@ class PCPVT(CPVTV2):
                  depths=[4, 4, 4],
                  sr_ratios=[4, 2, 1],
                  block_cls=SBlock):
-        super().__init__(img_size, patch_size, in_chans, num_classes,
-                         embed_dims, num_heads, mlp_ratios, qkv_bias, qk_scale,
-                         drop_rate, attn_drop_rate, drop_path_rate, norm_layer,
-                         depths, sr_ratios, block_cls)
+        super().__init__(img_size, patch_size, in_chans, class_num,
+                         embed_dims, num_heads, mlp_ratios, qkv_bias, qk_scale,
+                         drop_rate, attn_drop_rate, drop_path_rate, norm_layer,
+                         depths, sr_ratios, block_cls)


 class ALTGVT(PCPVT):
```
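The reshape changes in this diff swap `-1` for a size computed from quantities known at build time (`ws**2`, `H * W // sr_ratio**2`, `N`), since static-graph export needs every dimension to be concrete. A minimal numpy sketch of the equivalence, with numpy standing in for Paddle tensors and illustrative shapes:

```python
import numpy as np

# Token grid from a 224x224 input at stride 4, with sr_ratio = 8.
B, C, H, W = 2, 64, 56, 56
sr_ratio = 8

# After the strided sr convolution the grid shrinks by sr_ratio per side,
# so the flattened token count is known ahead of time:
tmp_n = H * W // sr_ratio**2  # 56*56 // 64 = 49

x = np.zeros((B, C, H // sr_ratio, W // sr_ratio))

# Dynamic-shape version (old code): let reshape infer the last dim.
dyn = x.reshape(B, C, -1)

# Static-shape version (new code): pass the precomputed tmp_n.
sta = x.reshape(B, C, tmp_n)

assert dyn.shape == sta.shape == (B, C, tmp_n)
```

Both produce the same tensor; the second just never relies on shape inference at runtime.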
ppcls/configs/ImageNet/Twins/alt_gvt_base.yaml (new file, mode 100644, +132, −0)

```yaml
# global configs
Global:
  checkpoints: null
  pretrained_model: null
  output_dir: ./output/
  device: gpu
  save_interval: 1
  eval_during_train: True
  eval_interval: 1
  epochs: 120
  print_batch_step: 10
  use_visualdl: False
  # used for static mode and model export
  image_shape: [3, 224, 224]
  save_inference_dir: ./inference
  # training model under @to_static
  to_static: False

# model architecture
Arch:
  name: alt_gvt_base
  class_num: 1000

# loss function config for traing/eval process
Loss:
  Train:
    - CELoss:
        weight: 1.0
  Eval:
    - CELoss:
        weight: 1.0

Optimizer:
  name: Momentum
  momentum: 0.9
  lr:
    name: Piecewise
    learning_rate: 0.1
    decay_epochs: [30, 60, 90]
    values: [0.1, 0.01, 0.001, 0.0001]
  regularizer:
    name: 'L2'
    coeff: 0.0001

# data loader for train and eval
DataLoader:
  Train:
    dataset:
      name: ImageNetDataset
      image_root: ./dataset/ILSVRC2012/
      cls_label_path: ./dataset/ILSVRC2012/train_list.txt
      transform_ops:
        - DecodeImage:
            to_rgb: True
            channel_first: False
        - RandCropImage:
            size: 224
        - RandFlipImage:
            flip_code: 1
        - NormalizeImage:
            scale: 1.0/255.0
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
    sampler:
      name: DistributedBatchSampler
      batch_size: 64
      drop_last: False
      shuffle: True
    loader:
      num_workers: 4
      use_shared_memory: True

  Eval:
    dataset:
      name: ImageNetDataset
      image_root: ./dataset/ILSVRC2012/
      cls_label_path: ./dataset/ILSVRC2012/val_list.txt
      transform_ops:
        - DecodeImage:
            to_rgb: True
            channel_first: False
        - ResizeImage:
            resize_short: 256
        - CropImage:
            size: 224
        - NormalizeImage:
            scale: 1.0/255.0
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
    sampler:
      name: DistributedBatchSampler
      batch_size: 64
      drop_last: False
      shuffle: False
    loader:
      num_workers: 4
      use_shared_memory: True

Infer:
  infer_imgs: docs/images/whl/demo.jpg
  batch_size: 10
  transforms:
    - DecodeImage:
        to_rgb: True
        channel_first: False
    - ResizeImage:
        resize_short: 256
    - CropImage:
        size: 224
    - NormalizeImage:
        scale: 1.0/255.0
        mean: [0.485, 0.456, 0.406]
        std: [0.229, 0.224, 0.225]
        order: ''
    - ToCHWImage:
  PostProcess:
    name: Topk
    topk: 5
    class_id_map_file: ppcls/utils/imagenet1k_label_list.txt

Metric:
  Train:
    - TopkAcc:
        topk: [1, 5]
  Eval:
    - TopkAcc:
        topk: [1, 5]
```
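The `Piecewise` entry above is a step schedule: a base rate of 0.1 that drops by 10x at each of the listed decay epochs. A small sketch of the rates it yields (`piecewise_lr` is a hypothetical helper for illustration, not a PaddleClas API):

```python
def piecewise_lr(epoch,
                 decay_epochs=(30, 60, 90),
                 values=(0.1, 0.01, 0.001, 0.0001)):
    """Return the learning rate used during `epoch` (0-based).

    values has one more entry than decay_epochs: the i-th value is used
    before the i-th boundary, and the last value after the final one.
    """
    for boundary, value in zip(decay_epochs, values):
        if epoch < boundary:
            return value
    return values[-1]

# Epochs 0-29 train at 0.1, 30-59 at 0.01, 60-89 at 0.001, 90+ at 0.0001.
schedule = [piecewise_lr(e) for e in (0, 30, 60, 90)]
```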
ppcls/configs/ImageNet/Twins/alt_gvt_large.yaml (new file, mode 100644, +132, −0)

Identical to alt_gvt_base.yaml above except for the architecture name:

```yaml
# model architecture
Arch:
  name: alt_gvt_large
  class_num: 1000
```
ppcls/configs/ImageNet/Twins/alt_gvt_small.yaml (new file, mode 100644, +132, −0)

Identical to alt_gvt_base.yaml above except for the architecture name:

```yaml
# model architecture
Arch:
  name: alt_gvt_small
  class_num: 1000
```
ppcls/configs/ImageNet/Twins/pcpvt_base.yaml (new file, mode 100644, +132, −0)

Identical to alt_gvt_base.yaml above except for the architecture name:

```yaml
# model architecture
Arch:
  name: pcpvt_base
  class_num: 1000
```
ppcls/configs/ImageNet/Twins/pcpvt_large.yaml (new file, mode 100644, +132, −0)

Identical to alt_gvt_base.yaml above except for the architecture name:

```yaml
# model architecture
Arch:
  name: pcpvt_large
  class_num: 1000
```
ppcls/configs/ImageNet/Twins/pcpvt_small.yaml (new file, mode 100644, +132, −0)

Identical to alt_gvt_base.yaml above except for the architecture name:

```yaml
# model architecture
Arch:
  name: pcpvt_small
  class_num: 1000
```
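All six configs share the same `NormalizeImage` settings: pixels scaled by 1/255, then standardized channel-wise with the ImageNet mean and std. A numpy sketch of that arithmetic (a simplified stand-in for the real op, which also honors `order` and channel layout):

```python
import numpy as np

# Channel-wise statistics from the configs (RGB order).
MEAN = np.array([0.485, 0.456, 0.406])
STD = np.array([0.229, 0.224, 0.225])

def normalize(img_u8):
    """img_u8: HWC uint8 image -> float array, per the config's settings."""
    x = img_u8.astype("float32") * (1.0 / 255.0)  # scale: 1.0/255.0
    return (x - MEAN) / STD                        # broadcast over H, W

# Mid-gray image: every channel ends up slightly different after
# standardization because the per-channel means and stds differ.
img = np.full((224, 224, 3), 128, dtype="uint8")
out = normalize(img)
```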