s920243400 / PaddleDetection (forked from PaddlePaddle / PaddleDetection)
Commit 5d0d3ef6

Authored by zhiboniu on May 12, 2022; committed by zhiboniu on May 23, 2022.

update clas model deploy support in pipeline

Parent: b1857334

Showing 2 changed files with 23 additions and 8 deletions (+23, -8):

    deploy/pptracking/python/det_infer.py    +9  -3
    deploy/python/infer.py                   +14 -5
deploy/pptracking/python/det_infer.py

@@ -416,9 +416,15 @@ def load_predictor(model_dir,
         raise ValueError(
             "Predict by TensorRT mode: {}, expect device=='GPU', but device == {}"
             .format(run_mode, device))
-    config = Config(
-        os.path.join(model_dir, 'model.pdmodel'),
-        os.path.join(model_dir, 'model.pdiparams'))
+    infer_model = os.path.join(model_dir, 'model.pdmodel')
+    infer_params = os.path.join(model_dir, 'model.pdiparams')
+    if not os.path.exists(infer_model):
+        infer_model = os.path.join(model_dir, 'inference.pdmodel')
+        infer_params = os.path.join(model_dir, 'inference.pdiparams')
+        if not os.path.exists(infer_model):
+            raise ValueError(
+                "Cannot find any inference model in dir: {},".format(model_dir))
+    config = Config(infer_model, infer_params)
     if device == 'GPU':
         # initial GPU memory(M), device ID
         config.enable_use_gpu(200, 0)
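This fallback lets one loader serve both PaddleDetection exports (model.pdmodel / model.pdiparams) and PaddleClas-style exports (inference.pdmodel / inference.pdiparams). A minimal sketch of the same resolution order, assuming Paddle Inference's Config API; resolve_model_files and the directory path are hypothetical, not part of this commit:

import os

from paddle.inference import Config


def resolve_model_files(model_dir):
    # Hypothetical helper: prefer the detection-style file names, then
    # fall back to the names PaddleClas uses for exported models.
    for stem in ('model', 'inference'):
        model = os.path.join(model_dir, stem + '.pdmodel')
        params = os.path.join(model_dir, stem + '.pdiparams')
        if os.path.exists(model):
            return model, params
    raise ValueError(
        "Cannot find any inference model in dir: {},".format(model_dir))


# Usage sketch; the directory path is a placeholder.
infer_model, infer_params = resolve_model_files('./output_inference/model')
config = Config(infer_model, infer_params)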
deploy/python/infer.py

@@ -158,7 +158,10 @@ class Detector(object):
         input_names = self.predictor.get_input_names()
         for i in range(len(input_names)):
             input_tensor = self.predictor.get_input_handle(input_names[i])
-            input_tensor.copy_from_cpu(inputs[input_names[i]])
+            if input_names[i] == 'x':
+                input_tensor.copy_from_cpu(inputs['image'])
+            else:
+                input_tensor.copy_from_cpu(inputs[input_names[i]])
         return inputs
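The remapping exists because classification models exported by PaddleClas typically name their single input tensor 'x', while PaddleDetection exports use 'image'. A hedged sketch of the same idea as a standalone function, assuming a paddle.inference predictor and a preprocessed inputs dict keyed by 'image'; feed_inputs is a hypothetical name:

def feed_inputs(predictor, inputs):
    # Copy each preprocessed array into its named input tensor.
    input_names = predictor.get_input_names()
    for name in input_names:
        input_tensor = predictor.get_input_handle(name)
        if name == 'x':
            # clas-style models call their image input 'x'; reuse the
            # 'image' entry prepared by the detection preprocessing.
            input_tensor.copy_from_cpu(inputs['image'])
        else:
            input_tensor.copy_from_cpu(inputs[name])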
@@ -320,7 +323,7 @@ class Detector(object):
         if not os.path.exists(self.output_dir):
             os.makedirs(self.output_dir)
         out_path = os.path.join(self.output_dir, video_out_name)
-        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
+        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
         writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height))
         index = 1
         while (1):
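For context around this hunk: cv2.VideoWriter_fourcc packs the four-character 'mp4v' codec code, and cv2.VideoWriter then takes the output path, codec, frame rate, and frame size. A brief sketch with placeholder values for the path and stream properties:

import cv2

out_path = 'output/result.mp4'        # placeholder output path
fps, width, height = 30, 1920, 1080   # assumed stream properties
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height))
# a frame loop would call writer.write(frame) for each BGR frame
writer.release()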
@@ -704,9 +707,15 @@ def load_predictor(model_dir,
         raise ValueError(
             "Predict by TensorRT mode: {}, expect device=='GPU', but device == {}"
             .format(run_mode, device))
-    config = Config(
-        os.path.join(model_dir, 'model.pdmodel'),
-        os.path.join(model_dir, 'model.pdiparams'))
+    infer_model = os.path.join(model_dir, 'model.pdmodel')
+    infer_params = os.path.join(model_dir, 'model.pdiparams')
+    if not os.path.exists(infer_model):
+        infer_model = os.path.join(model_dir, 'inference.pdmodel')
+        infer_params = os.path.join(model_dir, 'inference.pdiparams')
+        if not os.path.exists(infer_model):
+            raise ValueError(
+                "Cannot find any inference model in dir: {},".format(model_dir))
+    config = Config(infer_model, infer_params)
     if device == 'GPU':
         # initial GPU memory(M), device ID
         config.enable_use_gpu(200, 0)
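This hunk mirrors the det_infer.py change above; the resolved files feed the same Config, which load_predictor then turns into a predictor. A minimal end-to-end sketch of that last step, assuming GPU mode and the placeholder paths from the earlier sketch:

from paddle.inference import Config, create_predictor

config = Config(infer_model, infer_params)  # paths resolved as sketched above
# initial GPU memory (MB) and device ID, matching the values in the diff
config.enable_use_gpu(200, 0)
predictor = create_predictor(config)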