Commit 1a9d6229 (unverified)
Authored on Mar 13, 2022 by gaotingquan
Parent: e0572d85

support clas, rec, search
Showing 14 changed files with 323 additions and 35 deletions (+323, -35).
deploy/python/ppshitu_v2/configs/test_cls_config.yaml                        +38  -0
deploy/python/ppshitu_v2/configs/test_det_config.yaml                        +33  -0
deploy/python/ppshitu_v2/configs/test_rec_config.yaml                        +34  -0
deploy/python/ppshitu_v2/configs/test_search_config.yaml                     +16  -0
deploy/python/ppshitu_v2/examples/predict.py                                 +10  -2
deploy/python/ppshitu_v2/examples/test_search.py                             +31  -0
deploy/python/ppshitu_v2/processor/algo_mod/__init__.py                      +24  -11
deploy/python/ppshitu_v2/processor/algo_mod/postprocessor/__init__.py        +6   -5
deploy/python/ppshitu_v2/processor/algo_mod/postprocessor/classification.py  +68  -0
deploy/python/ppshitu_v2/processor/algo_mod/postprocessor/rec.py             +16  -0
deploy/python/ppshitu_v2/processor/algo_mod/predictor/__init__.py            +4   -5
deploy/python/ppshitu_v2/processor/algo_mod/predictor/paddle_predictor.py    +18  -5
deploy/python/ppshitu_v2/processor/algo_mod/preprocessor/__init__.py         +4   -5
deploy/python/ppshitu_v2/processor/algo_mod/searcher/__init__.py             +21  -2
deploy/python/ppshitu_v2/configs/test_cls_config.yaml (new file, mode 100644, +38 -0)

Global:
  Engine: POPEngine
  infer_imgs: "../../images/wangzai.jpg"

Modules:
  - name:
    type: AlgoMod
    processors:
      - name: ImageProcessor
        type: preprocessor
        ops:
          - ResizeImage:
              resize_short: 256
          - CropImage:
              size: 224
          - NormalizeImage:
              scale: 0.00392157
              mean: [0.485, 0.456, 0.406]
              std: [0.229, 0.224, 0.225]
              order: hwc
          - ToCHWImage:
          - GetShapeInfo:
              configs:
                order: chw
          - ToBatch:
      - name: PaddlePredictor
        type: predictor
        inference_model_dir: "./MobileNetV2_infer"
        input_names:
          inputs: image
        output_names:
          save_infer_model/scale_0.tmp_1: logits
      - name: TopK
        type: postprocessor
        k: 10
        class_id_map_file: "../ppcls/utils/imagenet1k_label_list.txt"
        save_dir: None
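For orientation, here is a minimal sketch of how a config like this is consumed, mirroring examples/predict.py further down in this diff. It assumes the script is run from deploy/python/ppshitu_v2 with the MobileNetV2 inference model downloaded to ./MobileNetV2_infer; the paths and the empty overrides list are illustrative, not part of this commit.

import cv2

from engine import build_engine
from utils import config

# Build the classification pipeline defined above and run it on one image.
config_dict = config.get_config(
    "configs/test_cls_config.yaml", overrides=[], show=False)
engine = build_engine(config_dict)

img = cv2.imread("../../images/wangzai.jpg")[:, :, ::-1]  # BGR -> RGB
data = engine.process({"input_image": img})
print(data)  # for the cls config this should be the TopK output: class_ids / scores / label_names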
deploy/python/ppshitu_v2/configs/test_det_config.yaml (new file, mode 100644, +33 -0)

Global:
  Engine: POPEngine
  infer_imgs: "../../images/wangzai.jpg"

Modules:
  - name:
    type: AlgoMod
    processors:
      - name: ImageProcessor
        type: preprocessor
        ops:
          - ResizeImage:
              size: [640, 640]
              interpolation: 2
          - NormalizeImage:
              scale: 0.00392157
              mean: [0.485, 0.456, 0.406]
              std: [0.229, 0.224, 0.225]
              order: hwc
          - ToCHWImage:
          - GetShapeInfo:
              configs:
                order: chw
          - ToBatch:
      - name: PaddlePredictor
        type: predictor
        inference_model_dir: ./models/ppyolov2_r50vd_dcn_mainbody_v1.0_infer/
      - name: DetPostPro
        type: postprocessor
        threshold: 0.2
        max_det_results: 1
        label_list:
          - foreground
deploy/python/ppshitu_v2/configs/test_rec_config.yaml (new file, mode 100644, +34 -0)

Global:
  Engine: POPEngine
  infer_imgs: "../../images/wangzai.jpg"

Modules:
  - name:
    type: AlgoMod
    processors:
      - name: ImageProcessor
        type: preprocessor
        ops:
          - ResizeImage:
              resize_short: 256
          - CropImage:
              size: 224
          - NormalizeImage:
              scale: 0.00392157
              mean: [0.485, 0.456, 0.406]
              std: [0.229, 0.224, 0.225]
              order: hwc
          - ToCHWImage:
          - GetShapeInfo:
              configs:
                order: chw
          - ToBatch:
      - name: PaddlePredictor
        type: predictor
        inference_model_dir: models/product_ResNet50_vd_aliproduct_v1.0_infer
        input_names:
          x: image
        output_names:
          save_infer_model/scale_0.tmp_1: features
      - name: FeatureNormalizer
        type: postprocessor
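The commented rec branch in examples/predict.py below reads the normalized embedding from data["pred"]["features"]; saving that array as vector.npy is what connects this config to test_search_config.yaml. A minimal sketch under the same working-directory and downloaded-model assumptions as above:

import cv2
import numpy as np

from engine import build_engine
from utils import config

config_dict = config.get_config(
    "configs/test_rec_config.yaml", overrides=[], show=False)
engine = build_engine(config_dict)

img = cv2.imread("../../images/wangzai.jpg")[:, :, ::-1]
data = engine.process({"input_image": img})

features = data["pred"]["features"]  # L2-normalized by FeatureNormalizer
print(features.shape)                # (1, embedding_size); the search config assumes 512
np.save("vector.npy", features)      # path should match infer_imgs in test_search_config.yaml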
deploy/python/ppshitu_v2/configs/test_search_config.yaml (new file, mode 100644, +16 -0)

Global:
  Engine: POPEngine
  infer_imgs: "./vector.npy"

Modules:
  - name:
    type: AlgoMod
    processors:
      - name: Searcher
        type: searcher
        index_dir: "./index"
        dist_type: "IP"
        embedding_size: 512
        batch_size: 32
        return_k: 5
        score_thres: 0.5
deploy/python/ppshitu_v2/examples/predict.py (+10 -2)

@@ -18,8 +18,16 @@ def main():
     image_file = "../../images/wangzai.jpg"
     img = cv2.imread(image_file)[:, :, ::-1]
     input_data = {"input_image": img}
-    output = engine.process(input_data)
-    print(output)
+    data = engine.process(input_data)
+
+    # for det, cls
+    # print(data)
+
+    # for rec
+    # features = data["pred"]["features"]
+    # print(features)
+    # print(features.shape)
+    # print(type(features))
 
 
 if __name__ == '__main__':
deploy/python/ppshitu_v2/examples/test_search.py (new file, mode 100644, +31 -0)

import os
import sys

__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.abspath(os.path.join(__dir__, '../')))

import cv2

from engine import build_engine
from utils import config
from utils.get_image_list import get_image_list

import numpy as np


def load_vector(path):
    return np.load(path)


def main():
    args = config.parse_args()
    config_dict = config.get_config(
        args.config, overrides=args.override, show=False)
    config_dict.profiler_options = args.profiler_options
    engine = build_engine(config_dict)

    vector = load_vector(config_dict["Global"]["infer_imgs"])
    output = engine.process({"features": vector})
    print(output["search_res"])


if __name__ == '__main__':
    main()
deploy/python/ppshitu_v2/processor/algo_mod/__init__.py (+24 -11)

-from .postprocessor import build_postprocessor
-from .preprocessor import build_preprocessor
-from .predictor import build_predictor
+# from .postprocessor import build_postprocessor
+# from .preprocessor import build_preprocessor
+# from .predictor import build_predictor
+import importlib
+
+from processor.algo_mod import preprocessor
+from processor.algo_mod import predictor
+from processor.algo_mod import postprocessor
+from processor.algo_mod import searcher
 
 from ..base_processor import BaseProcessor
 
@@ -10,14 +17,20 @@ class AlgoMod(BaseProcessor):
         self.processors = []
         for processor_config in config["processors"]:
             processor_type = processor_config.get("type")
-            if processor_type == "preprocessor":
-                processor = build_preprocessor(processor_config)
-            elif processor_type == "predictor":
-                processor = build_predictor(processor_config)
-            elif processor_type == "postprocessor":
-                processor = build_postprocessor(processor_config)
-            else:
-                raise NotImplemented("processor type {} unknown.".format(processor_type))
+            processor_name = processor_config.get("name")
+            _mod = importlib.import_module(__name__)
+            processor = getattr(
+                getattr(_mod, processor_type),
+                processor_name)(processor_config)
+
+            # if processor_type == "preprocessor":
+            #     processor = build_preprocessor(processor_config)
+            # elif processor_type == "predictor":
+            #     processor = build_predictor(processor_config)
+            # elif processor_type == "postprocessor":
+            #     processor = build_postprocessor(processor_config)
+            # else:
+            #     raise NotImplemented("processor type {} unknown.".format(processor_type))
             self.processors.append(processor)
 
     def process(self, input_data):
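The new dispatch replaces the hard-coded if/elif chain with a reflection lookup: a processor config's type field selects one of the imported submodules (preprocessor, predictor, postprocessor, searcher) and its name field selects a class inside that submodule. A standalone sketch of the same lookup, assuming deploy/python/ppshitu_v2 is on sys.path and using an illustrative config dict:

import importlib

# Illustrative processor config; only "type" and "name" drive the dispatch.
processor_config = {
    "name": "TopK",
    "type": "postprocessor",
    "k": 5,
    "class_id_map_file": None,
}

algo_mod = importlib.import_module("processor.algo_mod")      # what __name__ is inside AlgoMod
submodule = getattr(algo_mod, processor_config["type"])       # processor.algo_mod.postprocessor
processor_cls = getattr(submodule, processor_config["name"])  # the TopK class imported in that package
processor = processor_cls(processor_config)
print(type(processor))

This is also why each submodule's __init__.py now imports every concrete processor class (e.g. TopK, FeatureNormalizer), so that the getattr lookup can find them as package attributes.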
deploy/python/ppshitu_v2/processor/algo_mod/postprocessor/__init__.py (+6 -5)

 import importlib
 
+from .classification import TopK
 from .det import DetPostPro
+from .rec import FeatureNormalizer
 
-# def build_postprocessor(config):
-#     processor_mod = importlib.import_module(__name__)
-#     processor_name = config.get("name")
-#     return getattr(processor_mod, processor_name)(config)
+def build_postprocessor(config):
+    processor_mod = importlib.import_module(__name__)
+    processor_name = config.get("name")
+    return getattr(processor_mod, processor_name)(config)
deploy/python/ppshitu_v2/processor/algo_mod/postprocessor/classification.py (new file, mode 100644, +68 -0)

import os

import numpy as np

from ...base_processor import BaseProcessor


class TopK(BaseProcessor):
    def __init__(self, config):
        self.topk = config["k"]
        assert isinstance(self.topk, (int, ))
        class_id_map_file = config["class_id_map_file"]
        self.class_id_map = self.parse_class_id_map(class_id_map_file)
        self.multilabel = config.get("multilabel", False)

    def parse_class_id_map(self, class_id_map_file):
        if class_id_map_file is None:
            return None

        if not os.path.exists(class_id_map_file):
            print(
                "Warning: If want to use your own label_dict, please input legal path!\nOtherwise label_names will be empty!"
            )
            return None

        try:
            class_id_map = {}
            with open(class_id_map_file, "r") as fin:
                lines = fin.readlines()
                for line in lines:
                    partition = line.split("\n")[0].partition(" ")
                    class_id_map[int(partition[0])] = str(partition[-1])
        except Exception as ex:
            print(ex)
            class_id_map = None
        return class_id_map

    def process(self, data):
        x = data["pred"]["logits"]
        # TODO(gaotingquan): support file_name
        # if file_names is not None:
        #     assert x.shape[0] == len(file_names)
        y = []
        for idx, probs in enumerate(x):
            index = probs.argsort(axis=0)[-self.topk:][::-1].astype(
                "int32") if not self.multilabel else np.where(
                    probs >= 0.5)[0].astype("int32")
            clas_id_list = []
            score_list = []
            label_name_list = []
            for i in index:
                clas_id_list.append(i.item())
                score_list.append(probs[i].item())
                if self.class_id_map is not None:
                    label_name_list.append(self.class_id_map[i.item()])
            result = {
                "class_ids": clas_id_list,
                "scores": np.around(score_list, decimals=5).tolist(),
            }
            # if file_names is not None:
            #     result["file_name"] = file_names[idx]
            if label_name_list is not None:
                result["label_names"] = label_name_list
            y.append(result)
        return y
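A standalone illustration of the top-k selection used in process above, detached from the pipeline (the probabilities are made up):

import numpy as np

probs = np.array([0.05, 0.7, 0.1, 0.15])
k = 2

# argsort ascending, take the last k class ids, reverse to descending order
index = probs.argsort(axis=0)[-k:][::-1].astype("int32")
print(index.tolist())                       # [1, 3]
print(np.around(probs[index], 5).tolist())  # [0.7, 0.15]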
deploy/python/ppshitu_v2/processor/algo_mod/postprocessor/rec.py (new file, mode 100644, +16 -0)

import numpy as np

from ...base_processor import BaseProcessor


class FeatureNormalizer(BaseProcessor):
    def __init__(self, config=None):
        pass

    def process(self, data):
        batch_output = data["pred"]["features"]
        feas_norm = np.sqrt(
            np.sum(np.square(batch_output), axis=1, keepdims=True))
        batch_output = np.divide(batch_output, feas_norm)
        data["pred"]["features"] = batch_output
        return data
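For reference, the normalizer divides each row by its L2 norm, so every feature vector comes out with unit length, which is what the inner-product ("IP") search in the searcher config expects. A quick standalone check with random data:

import numpy as np

batch_output = np.random.rand(2, 512).astype("float32")
feas_norm = np.sqrt(np.sum(np.square(batch_output), axis=1, keepdims=True))
normalized = np.divide(batch_output, feas_norm)
print(np.linalg.norm(normalized, axis=1))  # approximately [1. 1.]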
deploy/python/ppshitu_v2/processor/algo_mod/predictor/__init__.py (+4 -5)

@@ -3,8 +3,7 @@ import importlib
 from processor.algo_mod.predictor.paddle_predictor import PaddlePredictor
 from processor.algo_mod.predictor.onnx_predictor import ONNXPredictor
 
-# def build_predictor(config):
-#     processor_mod = importlib.import_module(__name__)
-#     processor_name = config.get("name")
-#     return getattr(processor_mod, processor_name)(config)
+def build_predictor(config):
+    processor_mod = importlib.import_module(__name__)
+    processor_name = config.get("name")
+    return getattr(processor_mod, processor_name)(config)
deploy/python/ppshitu_v2/processor/algo_mod/predictor/paddle_predictor.py (+18 -5)

@@ -48,17 +48,30 @@ class PaddlePredictor(BaseProcessor):
         paddle_config.switch_use_feed_fetch_ops(False)
         self.predictor = create_predictor(paddle_config)
 
-    def process(self, input_data):
+        if "input_names" in config and config["input_names"]:
+            self.input_name_mapping = config["input_names"]
+        else:
+            self.input_name_mapping = []
+        if "output_names" in config and config["output_names"]:
+            self.output_name_mapping = config["output_names"]
+        else:
+            self.output_name_mapping = []
+
+    def process(self, data):
         input_names = self.predictor.get_input_names()
         for input_name in input_names:
             input_tensor = self.predictor.get_input_handle(input_name)
-            input_tensor.copy_from_cpu(input_data[input_name])
+            name = self.input_name_mapping[
+                input_name] if input_name in self.input_name_mapping else input_name
+            input_tensor.copy_from_cpu(data[name])
         self.predictor.run()
 
         output_data = {}
         output_names = self.predictor.get_output_names()
         for output_name in output_names:
             output = self.predictor.get_output_handle(output_name)
-            output_data[output_name] = output.copy_to_cpu()
-        input_data["pred"] = output_data
-        return input_data
+            name = self.output_name_mapping[
+                output_name] if output_name in self.output_name_mapping else output_name
+            output_data[name] = output.copy_to_cpu()
+        data["pred"] = output_data
+        return data
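The added mapping lets a config rename Paddle's raw tensor names (the input_names / output_names blocks in the yaml configs above) to keys that later processors expect, while names without an entry fall through unchanged. A standalone sketch of just that fallback, using one mapping from the cls config and a hypothetical second output name for illustration:

output_name_mapping = {"save_infer_model/scale_0.tmp_1": "logits"}
output_data = {"save_infer_model/scale_0.tmp_1": "tensor A",
               "second_output.tmp_0": "tensor B"}  # hypothetical extra tensor name

renamed = {}
for output_name, value in output_data.items():
    name = output_name_mapping[
        output_name] if output_name in output_name_mapping else output_name
    renamed[name] = value

print(renamed)  # {'logits': 'tensor A', 'second_output.tmp_0': 'tensor B'}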
deploy/python/ppshitu_v2/processor/algo_mod/preprocessor/__init__.py (+4 -5)

@@ -2,8 +2,7 @@ import importlib
 from processor.algo_mod.preprocessor.image_processor import ImageProcessor
 
-# def build_preprocessor(config):
-#     processor_mod = importlib.import_module(__name__)
-#     processor_name = config.get("name")
-#     return getattr(processor_mod, processor_name)(config)
+def build_preprocessor(config):
+    processor_mod = importlib.import_module(__name__)
+    processor_name = config.get("name")
+    return getattr(processor_mod, processor_name)(config)
deploy/python/ppshitu_v2/processor/algo_mod/searcher/__init__.py (+21 -2)

import os
import pickle

import faiss


def build_searcher(config):
    pass


class Searcher:
    def __init__(self, config):
        super().__init__()

        self.Searcher = faiss.read_index(
            os.path.join(config["index_dir"], "vector.index"))

        with open(os.path.join(config["index_dir"], "id_map.pkl"), "rb") as fd:
            self.id_map = pickle.load(fd)

        self.return_k = config["return_k"]

    def process(self, data):
        features = data["features"]
        scores, docs = self.Searcher.search(features, self.return_k)
        data["search_res"] = (scores, docs)
        return data
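The Searcher expects index_dir to contain a vector.index written by faiss and an id_map.pkl mapping row ids to labels; neither file is produced by this commit. A minimal sketch that writes compatible files, assuming an IndexFlatIP over unit-normalized 512-dimensional features to match dist_type "IP" and embedding_size 512 in test_search_config.yaml (a real gallery would normally be built with the PP-ShiTu index tooling):

import os
import pickle

import faiss
import numpy as np

index_dir = "./index"
os.makedirs(index_dir, exist_ok=True)

# Dummy gallery: 100 unit-normalized 512-d feature vectors.
gallery = np.random.rand(100, 512).astype("float32")
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

index = faiss.IndexFlatIP(512)  # inner-product index
index.add(gallery)
faiss.write_index(index, os.path.join(index_dir, "vector.index"))

# Map faiss row ids to gallery labels (placeholder names here).
id_map = {i: "gallery_item_{}".format(i) for i in range(gallery.shape[0])}
with open(os.path.join(index_dir, "id_map.pkl"), "wb") as fd:
    pickle.dump(id_map, fd)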