PaddlePaddle / PaddleClas · Commit 02ce5f94
Commit 02ce5f94 (unverified), authored on Jun 08, 2022 by gaotingquan

fix: use cpu instead when gpu is invalid

Parent: 3a3b11ae
Showing 1 changed file with 20 additions and 6 deletions.

paddleclas.py (+20, -6)
@@ -31,6 +31,7 @@ import cv2
 import numpy as np
 from tqdm import tqdm
 from prettytable import PrettyTable
+import paddle
 from deploy.python.predict_cls import ClsPredictor
 from deploy.utils.get_image_list import get_image_list
@@ -201,8 +202,13 @@ def init_config(model_type, model_name, inference_model_dir, **kwargs):
     if "batch_size" in kwargs and kwargs["batch_size"]:
         cfg.Global.batch_size = kwargs["batch_size"]
     if "use_gpu" in kwargs and kwargs["use_gpu"]:
         cfg.Global.use_gpu = kwargs["use_gpu"]
+    if cfg.Global.use_gpu and not paddle.device.is_compiled_with_cuda():
+        msg = "The current running environment does not support the use of GPU. CPU has been used instead."
+        logger.warning(msg)
+        cfg.Global.use_gpu = False
     if "infer_imgs" in kwargs and kwargs["infer_imgs"]:
         cfg.Global.infer_imgs = kwargs["infer_imgs"]
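The effect of this hunk: if the resolved config requests the GPU but the installed Paddle wheel was built without CUDA, the config falls back to CPU with a warning instead of failing later at predict time. A minimal standalone sketch of the same guard, using only the `paddle` package and the standard-library `logging` module; the `resolve_use_gpu` helper is illustrative and not part of PaddleClas:

import logging

import paddle

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def resolve_use_gpu(requested: bool) -> bool:
    """Return a device flag that is actually usable in this environment."""
    if requested and not paddle.device.is_compiled_with_cuda():
        # GPU requested, but this Paddle build has no CUDA support:
        # warn and fall back to CPU instead of failing at predict time.
        logger.warning("The current running environment does not support "
                       "the use of GPU. CPU has been used instead.")
        return False
    return requested


print(resolve_use_gpu(requested=True))  # prints False on a CPU-only Paddle build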
@@ -267,17 +273,25 @@ def args_cfg():
         type=str,
         help="The directory of model files. Valid when model_name not specifed.")
-    parser.add_argument("--use_gpu", type=str, help="Whether use GPU.")
-    parser.add_argument("--gpu_mem", type=int, default=8000, help="")
+    parser.add_argument("--use_gpu", type=str2bool, help="Whether use GPU.")
+    parser.add_argument("--gpu_mem", type=int, help="The memory size of GPU allocated to predict.")
     parser.add_argument(
         "--enable_mkldnn",
         type=str2bool,
         default=False,
         help="Whether use MKLDNN. Valid when use_gpu is False")
-    parser.add_argument("--cpu_num_threads", type=int, default=1, help="")
-    parser.add_argument("--use_tensorrt", type=str2bool, default=False, help="")
-    parser.add_argument("--use_fp16", type=str2bool, default=False, help="")
+    parser.add_argument("--cpu_num_threads", type=int, help="The threads number when predicting on CPU.")
+    parser.add_argument("--use_tensorrt", type=str2bool, help="Whether use TensorRT to accelerate. ")
+    parser.add_argument("--use_fp16", type=str2bool, help="Whether use FP16 to predict.")
     parser.add_argument("--batch_size", type=int, help="Batch size.")
     parser.add_argument(
         "--topk",
         ...
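Taken together with the `init_config` change above, requesting the GPU on a CPU-only machine now degrades gracefully rather than failing. A hedged usage sketch, assuming the `PaddleClas` Python entry point defined in this file accepts these keyword arguments; the model name and image path are placeholders:

from paddleclas import PaddleClas

# On a Paddle build without CUDA, use_gpu=True now only triggers the warning
# added in init_config and inference runs on the CPU instead.
model = PaddleClas(model_name="ResNet50", use_gpu=True, batch_size=1)
results = model.predict("./demo.jpg")  # "./demo.jpg" is a placeholder image path
for batch in results:
    print(batch)

On the command line, the equivalent flag is now parsed by `str2bool` rather than `str`, so `--use_gpu=False` reaches `init_config` as a boolean; with the old `type=str` it would have arrived as the string "False", which is truthy and presumably why the parser type was changed.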