weixin_41840029 / PaddleOCR (forked from PaddlePaddle / PaddleOCR)
Commit c8f7a683 ("merge upstream")
Author: WenmuZhou
Committed: November 17, 2020
Parents: 4950c845, fc7b5d22
Showing 3 changed files with 215 additions and 10 deletions (+215, -10):
configs/rec/ch_ppocr_v1.1/rec_chinese_lite_train_v1.1.yaml (+102, -0)
configs/rec/rec_r34_vd_none_bilstm_ctc.yml (+96, -0)
ppocr/data/simple_dataset.py (+17, -10)
configs/rec/ch_ppocr_v1.1/rec_chinese_lite_train_v1.1.yaml (new file, mode 100644)
Global:
  use_gpu: true
  epoch_num: 500
  log_smooth_window: 20
  print_batch_step: 10
  save_model_dir: ./output/rec_chinese_lite_v1.1
  save_epoch_step: 3
  # evaluation is run every 5000 iterations after the 4000th iteration
  eval_batch_step: [0, 2000]
  # if pretrained_model is saved in static mode, load_static_weights must set to True
  cal_metric_during_train: True
  pretrained_model:
  checkpoints:
  save_inference_dir:
  use_visualdl: False
  infer_img: doc/imgs_words/ch/word_1.jpg
  # for data or label process
  character_dict_path: ppocr/utils/ppocr_keys_v1.txt
  character_type: ch
  max_text_length: 25
  infer_mode: False
  use_space_char: False

Optimizer:
  name: Adam
  beta1: 0.9
  beta2: 0.999
  lr:
    name: Cosine
    learning_rate: 0.001
  regularizer:
    name: 'L2'
    factor: 0.00001

Architecture:
  model_type: rec
  algorithm: CRNN
  Transform:
  Backbone:
    name: MobileNetV3
    scale: 0.5
    model_name: small
    small_stride: [1, 2, 2, 2]
  Neck:
    name: SequenceEncoder
    encoder_type: rnn
    hidden_size: 48
  Head:
    name: CTCHead
    fc_decay: 0.00001

Loss:
  name: CTCLoss

PostProcess:
  name: CTCLabelDecode

Metric:
  name: RecMetric
  main_indicator: acc

Train:
  dataset:
    name: SimpleDataSet
    data_dir: ./train_data/
    label_file_list: ["./train_data/train_list.txt"]
    transforms:
      - DecodeImage: # load image
          img_mode: BGR
          channel_first: False
      - RecAug:
      - CTCLabelEncode: # Class handling label
      - RecResizeImg:
          image_shape: [3, 32, 320]
      - KeepKeys:
          keep_keys: ['image', 'label', 'length'] # dataloader will return list in this order
  loader:
    shuffle: True
    batch_size_per_card: 256
    drop_last: True
    num_workers: 8

Eval:
  dataset:
    name: SimpleDataSet
    data_dir: ./train_data
    label_file_list: ["./train_data/val_list.txt"]
    transforms:
      - DecodeImage: # load image
          img_mode: BGR
          channel_first: False
      - CTCLabelEncode: # Class handling label
      - RecResizeImg:
          image_shape: [3, 32, 320]
      - KeepKeys:
          keep_keys: ['image', 'label', 'length'] # dataloader will return list in this order
  loader:
    shuffle: False
    drop_last: False
    batch_size_per_card: 256
    num_workers: 8
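The recipe above wires a CRNN recognizer (MobileNetV3 backbone, RNN SequenceEncoder neck, CTC head) to a SimpleDataSet pipeline driven by plain-text label files. It is an ordinary YAML document, so it can be inspected without any PaddleOCR machinery. A minimal sketch, assuming PyYAML is installed and the snippet is run from the repository root so the path added by this commit resolves:

import yaml

# Load the new config and look at a few of the fields shown above.
with open("configs/rec/ch_ppocr_v1.1/rec_chinese_lite_train_v1.1.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["Global"]["eval_batch_step"])               # [0, 2000]
print(cfg["Architecture"]["Backbone"]["name"])        # MobileNetV3
print(cfg["Train"]["loader"]["batch_size_per_card"])  # 256

# Train.dataset.transforms is an ordered pipeline: each entry maps one operator
# name to its arguments (or to None for operators that take none, like RecAug).
for op in cfg["Train"]["dataset"]["transforms"]:
    print(list(op.keys())[0])   # DecodeImage, RecAug, CTCLabelEncode, ...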
configs/rec/rec_r34_vd_none_bilstm_ctc.yml (new file, mode 100644)
Global:
  use_gpu: true
  epoch_num: 72
  log_smooth_window: 20
  print_batch_step: 10
  save_model_dir: ./output/rec/r34_vd_none_bilstm_ctc/
  save_epoch_step: 3
  # evaluation is run every 5000 iterations after the 4000th iteration
  eval_batch_step: [0, 2000]
  # if pretrained_model is saved in static mode, load_static_weights must set to True
  cal_metric_during_train: True
  pretrained_model:
  checkpoints:
  save_inference_dir:
  use_visualdl: False
  infer_img: doc/imgs_words/ch/word_1.jpg
  # for data or label process
  character_dict_path:
  character_type: en
  max_text_length: 25
  infer_mode: False
  use_space_char: False

Optimizer:
  name: Adam
  beta1: 0.9
  beta2: 0.999
  lr:
    learning_rate: 0.0005
  regularizer:
    name: 'L2'
    factor: 0

Architecture:
  model_type: rec
  algorithm: CRNN
  Transform:
  Backbone:
    name: ResNet
    layers: 34
  Neck:
    name: SequenceEncoder
    encoder_type: rnn
    hidden_size: 256
  Head:
    name: CTCHead
    fc_decay: 0

Loss:
  name: CTCLoss

PostProcess:
  name: CTCLabelDecode

Metric:
  name: RecMetric
  main_indicator: acc

Train:
  dataset:
    name: LMDBDateSet
    data_dir: ./train_data/data_lmdb_release/training/
    transforms:
      - DecodeImage: # load image
          img_mode: BGR
          channel_first: False
      - CTCLabelEncode: # Class handling label
      - RecResizeImg:
          image_shape: [3, 32, 100]
      - KeepKeys:
          keep_keys: ['image', 'label', 'length'] # dataloader will return list in this order
  loader:
    shuffle: False
    batch_size_per_card: 256
    drop_last: True
    num_workers: 8

Eval:
  dataset:
    name: LMDBDateSet
    data_dir: ./train_data/data_lmdb_release/validation/
    transforms:
      - DecodeImage: # load image
          img_mode: BGR
          channel_first: False
      - CTCLabelEncode: # Class handling label
      - RecResizeImg:
          image_shape: [3, 32, 100]
      - KeepKeys:
          keep_keys: ['image', 'label', 'length'] # dataloader will return list in this order
  loader:
    shuffle: False
    drop_last: False
    batch_size_per_card: 256
    num_workers: 4
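Unlike the lite recipe above, this config trains an English ResNet-34 CRNN and reads its data through LMDBDateSet from the data_lmdb_release directories rather than from plain-text label files. As a minimal sketch (not part of the commit), one record can be pulled out of such a store with the lmdb package, assuming the widely used scene-text layout ('num-samples', 'label-%09d', 'image-%09d' keys) that the data_lmdb_release archives follow; PaddleOCR's own LMDBDateSet reader may differ in detail:

import lmdb

# Open the validation store read-only; the path matches Eval.dataset.data_dir above.
env = lmdb.open("./train_data/data_lmdb_release/validation/",
                readonly=True, lock=False, readahead=False)
with env.begin(write=False) as txn:
    num_samples = int(txn.get("num-samples".encode()))  # total record count
    idx = 1                                             # records are 1-indexed in this layout
    label = txn.get(("label-%09d" % idx).encode()).decode("utf-8")
    img_bytes = txn.get(("image-%09d" % idx).encode())  # raw encoded image bytes, not decoded pixels
    print(num_samples, label, len(img_bytes))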
ppocr/data/simple_dataset.py (modified)
@@ -22,6 +22,7 @@ from .imaug import transform, create_operators
 class SimpleDataSet(Dataset):
     def __init__(self, config, mode, logger):
         super(SimpleDataSet, self).__init__()
+        self.logger = logger
         global_config = config['Global']
         dataset_config = config[mode]['dataset']

@@ -100,16 +101,22 @@ class SimpleDataSet(Dataset):
     def __getitem__(self, idx):
         dataset_idx, file_idx = self.data_idx_order_list[idx]
         data_line = self.data_lines_list[dataset_idx][file_idx]
-        data_line = data_line.decode('utf-8')
-        substr = data_line.strip("\n").split(self.delimiter)
-        file_name = substr[0]
-        label = substr[1]
-        img_path = os.path.join(self.data_dir, file_name)
-        data = {'img_path': img_path, 'label': label}
-        with open(data['img_path'], 'rb') as f:
-            img = f.read()
-        data['image'] = img
-        outs = transform(data, self.ops)
+        try:
+            data_line = data_line.decode('utf-8')
+            substr = data_line.strip("\n").split(self.delimiter)
+            file_name = substr[1]
+            label = substr[0]
+            img_path = os.path.join(self.data_dir, file_name)
+            data = {'img_path': img_path, 'label': label}
+            with open(data['img_path'], 'rb') as f:
+                img = f.read()
+            data['image'] = img
+            outs = transform(data, self.ops)
+        except Exception as e:
+            self.logger.error(
+                "When parsing line {}, error happened with msg: {}".format(
+                    data_line, e))
+            outs = None
         if outs is None:
             return self.__getitem__(np.random.randint(self.__len__()))
         return outs