Unverified commit d1b2da28 (PaddlePaddle / PaddleHub)
Authored Sep 22, 2022 by littletomatodonkey; committed via GitHub on Sep 22, 2022
fix ocr det (#2035)
* fix ocr det
* fix default thres value
* fix readme
Parent: 14ad2546
Showing 3 changed files with 20 additions and 10 deletions (+20 −10)
modules/image/text_recognition/ch_pp-ocrv3_det/README.md (+1 −1)
modules/image/text_recognition/ch_pp-ocrv3_det/module.py (+18 −8)
modules/image/text_recognition/ch_pp-ocrv3_det/processor.py (+1 −1)
modules/image/text_recognition/ch_pp-ocrv3_det/README.md

@@ -58,7 +58,7 @@
     ```
   - 通过命令行方式实现文字识别模型的调用,更多请见 [PaddleHub命令行指令](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
 
-- ### 2、代码示例
+- ### 2、预测代码示例
 
   - ```python
     import paddlehub as hub
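For context, the renamed "预测代码示例" (prediction code example) section refers to a snippet like the following. This is a minimal sketch, not the README's verbatim example: the module name is taken from the directory path above, while the detect_text method and its `paths` keyword are assumptions based on the module.py hunks further down.

```python
import paddlehub as hub

# Module name taken from the directory name in this diff.
text_detector = hub.Module(name="ch_pp-ocrv3_det")

# detect_text and the `paths` keyword are assumed from the module.py
# signature shown later in this commit; the image path is illustrative.
result = text_detector.detect_text(
    paths=["/PATH/TO/IMAGE"],
    use_gpu=False,
    visualization=True)
print(result)
```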
modules/image/text_recognition/ch_pp-ocrv3_det/module.py

@@ -168,8 +168,9 @@ class ChPPOCRv3Det(hub.Module):
                     use_gpu=False,
                     output_dir='detection_result',
                     visualization=False,
-                    box_thresh=0.5,
-                    det_db_unclip_ratio=1.5):
+                    box_thresh=0.6,
+                    det_db_unclip_ratio=1.5,
+                    det_db_score_mode="fast"):
         """
         Get the text box in the predicted images.
         Args:
@@ -180,6 +181,7 @@ class ChPPOCRv3Det(hub.Module):
             visualization (bool): Whether to save image or not.
             box_thresh(float): the threshold of the detected text box's confidence
             det_db_unclip_ratio(float): unclip ratio for post processing in DB detection.
+            det_db_score_mode(str): method to calc the final det score, one of fast(using box) and slow(using poly).
         Returns:
             res (list): The result of text detection box and save path of images.
         """
@@ -206,12 +208,14 @@ class ChPPOCRv3Det(hub.Module):
         assert predicted_data != [], "There is not any image to be predicted. Please check the input data."
 
         preprocessor = DBProcessTest(params={'max_side_len': 960})
-        postprocessor = DBPostProcess(params={
-            'thresh': 0.3,
-            'box_thresh': 0.6,
-            'max_candidates': 1000,
-            'unclip_ratio': det_db_unclip_ratio
-        })
+        postprocessor = DBPostProcess(
+            params={
+                'thresh': 0.3,
+                'box_thresh': 0.6,
+                'max_candidates': 1000,
+                'unclip_ratio': det_db_unclip_ratio,
+                'det_db_score_mode': det_db_score_mode,
+            })
 
         all_imgs = []
         all_ratios = []
@@ -288,6 +292,7 @@ class ChPPOCRv3Det(hub.Module):
             use_gpu=args.use_gpu,
             output_dir=args.output_dir,
             det_db_unclip_ratio=args.det_db_unclip_ratio,
+            det_db_score_mode=args.det_db_score_mode,
             visualization=args.visualization)
 
         return results
@@ -311,6 +316,11 @@ class ChPPOCRv3Det(hub.Module):
             type=float,
             default=1.5,
             help="unclip ratio for post processing in DB detection.")
+        self.arg_config_group.add_argument(
+            '--det_db_score_mode',
+            type=str,
+            default="str",
+            help="method to calc the final det score, one of fast(using box) and slow(using poly).")
 
     def add_module_input_arg(self):
         """
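Taken together, the module.py changes raise the default box_thresh from 0.5 to 0.6 and thread a new det_db_score_mode option from the Python API and the CLI down into DBPostProcess. A hedged sketch of calling the updated API follows; the keyword names and defaults come from the hunks above, while the module name, the `paths` keyword, and the `hub run` invocation in the comment are assumptions.

```python
import paddlehub as hub

text_detector = hub.Module(name="ch_pp-ocrv3_det")  # assumed module name

# Per the detect_text signature above: box_thresh now defaults to 0.6 and
# det_db_score_mode defaults to "fast" (score on the fitted box); "slow"
# scores on the full polygon, per the docstring added in this commit.
result = text_detector.detect_text(
    paths=["doc_image.jpg"],          # illustrative input; `paths` is assumed
    box_thresh=0.6,
    det_db_unclip_ratio=1.5,
    det_db_score_mode="slow")

# Assumed CLI equivalent via the new argparse flag:
#   hub run ch_pp-ocrv3_det --input_path doc_image.jpg --det_db_score_mode slow
```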
modules/image/text_recognition/ch_pp-ocrv3_det/processor.py

@@ -124,9 +124,9 @@ class DBPostProcess(object):
         self.box_thresh = params['box_thresh']
         self.max_candidates = params['max_candidates']
         self.unclip_ratio = params['unclip_ratio']
+        self.score_mode = params['det_db_score_mode']
         self.min_size = 3
         self.dilation_kernel = None
-        self.score_mode = 'fast'
 
     def boxes_from_bitmap(self, pred, _bitmap, dest_width, dest_height):
         '''
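In processor.py, DBPostProcess now reads score_mode from its params dict instead of hard-coding 'fast'. To make the fast/slow distinction from the docstring concrete, here is a standalone illustration of the idea (not the module's actual code): "fast" averages the probability map over a 4-point rectangle fitted to the candidate region, while "slow" averages it over the exact polygon.

```python
import numpy as np
import cv2


def mean_score(prob_map: np.ndarray, pts: np.ndarray) -> float:
    """Average the probability map over a filled polygon region."""
    mask = np.zeros(prob_map.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [pts.astype(np.int32)], 1)
    return cv2.mean(prob_map, mask)[0]


prob_map = np.random.rand(64, 64).astype(np.float32)         # stand-in for the DB output map
contour = np.array([[10, 10], [50, 12], [52, 30], [9, 28]])  # a detected text contour

# "fast": score the 4-point min-area rectangle fitted to the contour (using box)
quad = cv2.boxPoints(cv2.minAreaRect(contour.astype(np.float32)))
fast_score = mean_score(prob_map, quad)

# "slow": score the exact contour polygon itself (using poly)
slow_score = mean_score(prob_map, contour)
print(fast_score, slow_score)
```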