Repository: PaddlePaddle / Paddle (viewed in the fork BaiXuePrincess / Paddle)

Commit d5225145: Fix inference prepare data (#33370)

Authored Jun 07, 2021 by wenbin; committed via GitHub on Jun 07, 2021 (signature unverified).
Parent commit: f17d6430
Showing 3 changed files with 46 additions and 2 deletions (+46, -2):

paddle/fluid/framework/operator.cc (+6, -1)
paddle/fluid/inference/api/analysis_predictor.cc (+39, -0)
python/paddle/fluid/tests/unittests/ir/inference/test_trt_conv_pass.py (+1, -1)
paddle/fluid/framework/operator.cc

@@ -1525,7 +1525,12 @@ Scope* OperatorWithKernel::PrepareData(
   // the rest iterations to save the elapsed time.
   // We do not support skipping PrepareData in while block, because the Op's
   // input may be changed by subsequent Ops, which may cause an error.
-  if (pre_scope_ == &scope && new_scope == nullptr) {
+
+  // For inference, ops that behind conditional branch aren't supported well,
+  // so disable prepare optimization conservatively.
+  bool force_prepare_data = HasAttr("inference_force_prepare_data") &&
+                            Attr<bool>("inference_force_prepare_data");
+  if (pre_scope_ == &scope && new_scope == nullptr && !force_prepare_data) {
     need_prepare_data_ = false;
   }
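The attribute checked here, inference_force_prepare_data, is not declared in any op definition; it is injected into selected OpDescs by the analysis predictor (see the next file), which is presumably why HasAttr is tested before Attr<bool>. To make the failure mode the comment describes concrete, here is a minimal, self-contained toy (hypothetical names, not Paddle code) showing why caching the "no PrepareData needed" decision is unsafe for an op whose input depends on which conditional branch ran:

```cpp
#include <iostream>

// Toy model of the optimization (hypothetical types, not the Paddle framework).
struct ToyOp {
  bool need_prepare_data_ = true;   // cached decision, as in OperatorWithKernel
  bool force_prepare_data = false;  // stands in for "inference_force_prepare_data"

  // `input_needs_transform` depends on which branch produced this op's input.
  void Run(bool input_needs_transform) {
    if (input_needs_transform && need_prepare_data_) {
      std::cout << "  input transformed, run is correct\n";
    } else if (input_needs_transform) {
      std::cout << "  transform skipped, run is broken\n";
    } else {
      std::cout << "  nothing to transform\n";
    }
    // The optimization: skip PrepareData in later iterations, unless the
    // predictor marked this op as sitting behind a conditional branch.
    if (!force_prepare_data) need_prepare_data_ = false;
  }
};

int main() {
  ToyOp plain, forced;
  forced.force_prepare_data = true;

  std::cout << "iteration 1, branch 2 taken (no transform needed):\n";
  plain.Run(false);
  forced.Run(false);

  std::cout << "iteration 2, branch 1 taken (transform now needed):\n";
  plain.Run(true);   // stale cache: PrepareData is skipped although required
  forced.Run(true);  // forced op still prepares its data and stays correct
}
```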
paddle/fluid/inference/api/analysis_predictor.cc

@@ -270,7 +270,46 @@ bool AnalysisPredictor::CreateExecutor() {
   executor_.reset(new paddle::framework::NaiveExecutor(place_));
   return true;
 }
+
+static bool IsPrepareDataOptTargetOp(framework::OpDesc *op) {
+  // here is prepare data optimization related bad cases:
+  // let's assume an op behind conditional_block and if conditional_block
+  // chooses branch 1, the op need to call prepare data. else the op don't need
+  // to call prepare data. In running, if predictor chooses branch 2, then
+  // optimization takes effect, later issue is followed if predictor chooses
+  // branch 1, because the op lost chance to prepare data.
+  std::vector<std::string> op_type = {"conditional_block_infer",
+                                      "select_input"};
+  for (const auto &type : op_type) {
+    if (op->Type() == type) {
+      return true;
+    }
+  }
+  return false;
+}
+
+static void DisablePrepareDataOpt(
+    std::shared_ptr<framework::ProgramDesc> inference_program, int block,
+    bool pre_disable_opt) {
+  bool disable_opt = false;
+  auto &infer_block = inference_program->Block(block);
+  for (auto *op : infer_block.AllOps()) {
+    if (disable_opt || pre_disable_opt) {
+      op->SetAttr("inference_force_prepare_data", true);
+    }
+    if (op->HasAttr("sub_block")) {
+      int blockID = op->GetBlockAttrId("sub_block");
+      DisablePrepareDataOpt(inference_program, blockID,
+                            disable_opt || pre_disable_opt);
+    }
+    // disable prepare data if unfriendly op is found
+    disable_opt = IsPrepareDataOptTargetOp(op);
+  }
+}
+
 bool AnalysisPredictor::PrepareExecutor() {
+  DisablePrepareDataOpt(inference_program_, 0, false);
+
   executor_->Prepare(sub_scope_, *inference_program_, 0,
                      config_.use_feed_fetch_ops_);
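Two details of this traversal are easy to miss. First, disable_opt is reassigned on every iteration, so within a block the force attribute lands only on the op that immediately follows a conditional_block_infer or select_input; in programs lowered from a conditional, a select_input typically sits right after the conditional_block ops, so the consumer of the merged output still ends up marked. Second, if an op inside the already-forced region owns a sub_block, the whole sub-block is forced recursively. The standalone mock below (hypothetical types, not the Paddle framework) mirrors the same control flow so the marking can be observed on a tiny program:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Minimal stand-ins for OpDesc / BlockDesc / ProgramDesc.
struct MockOp {
  std::string type;
  int sub_block;  // index of the owned sub-block, -1 if none
  bool forced;    // stands in for the "inference_force_prepare_data" attribute
};
using MockBlock = std::vector<MockOp>;
using MockProgram = std::vector<MockBlock>;

static bool IsTargetOp(const MockOp &op) {
  return op.type == "conditional_block_infer" || op.type == "select_input";
}

// Same control flow as DisablePrepareDataOpt above.
static void MarkForcePrepareData(MockProgram *prog, int block,
                                 bool pre_disable_opt) {
  bool disable_opt = false;
  for (MockOp &op : (*prog)[block]) {
    if (disable_opt || pre_disable_opt) op.forced = true;
    if (op.sub_block >= 0) {
      MarkForcePrepareData(prog, op.sub_block, disable_opt || pre_disable_opt);
    }
    disable_opt = IsTargetOp(op);  // only the *next* op sees this flag
  }
}

int main() {
  // block 0: feed -> conditional_block_infer(sub-block 1) -> select_input -> fc -> fetch
  // block 1: the branch body
  MockProgram prog = {
      {{"feed", -1, false},
       {"conditional_block_infer", 1, false},
       {"select_input", -1, false},
       {"fc", -1, false},
       {"fetch", -1, false}},
      {{"conv2d", -1, false}, {"relu", -1, false}},
  };
  MarkForcePrepareData(&prog, 0, false);
  for (size_t b = 0; b < prog.size(); ++b)
    for (const MockOp &op : prog[b])
      std::cout << "block " << b << " op " << op.type
                << " forced=" << op.forced << "\n";
  // Prints forced=1 only for select_input and fc: the ops that consume the
  // branch-dependent outputs and must keep calling PrepareData every run.
}
```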
python/paddle/fluid/tests/unittests/ir/inference/test_trt_conv_pass.py

@@ -195,7 +195,7 @@ class DynamicShapeTensorRTSubgraphPassConvTest(InferencePassTest):
             }, {
                 "conv2d_0.tmp_0": [16, 6, 16, 16],
                 "data": [16, 6, 16, 16],
-                "depthwise_conv2d_0.tmp_0": [32, 6, 64, 64]
+                "depthwise_conv2d_0.tmp_0": [16, 6, 16, 16]
             }, False)
         self.fetch_list = [conv_out]
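The dictionaries in this test are, presumably, the min/max/optimal input-shape maps of InferencePassTest's DynamicShapeParam (the trailing False being disable_trt_plugin_fp16); the visible map is the optimal one, and the fix replaces the inconsistent [32, 6, 64, 64] entry for depthwise_conv2d_0.tmp_0 with [16, 6, 16, 16], matching the other entries in that map. For reference, a sketch of how the same kind of shape ranges are supplied through the C++ inference config; the API names (AnalysisConfig, EnableTensorRtEngine, SetTRTDynamicShapeInfo) are recalled from the Paddle 2.x headers and the min/max values are illustrative placeholders, so treat the snippet as an assumption rather than part of this commit:

```cpp
#include <map>
#include <string>
#include <vector>
#include "paddle_inference_api.h"  // header shipped with the Paddle inference library

int main() {
  paddle::AnalysisConfig config;
  config.SetModel("./conv_model");          // hypothetical model directory
  config.EnableUseGpu(100 /*MB*/, 0 /*GPU id*/);
  // workspace size, max batch, min subgraph size, precision, use_static, use_calib_mode
  config.EnableTensorRtEngine(1 << 30, 16, 3,
                              paddle::AnalysisConfig::Precision::kFloat32,
                              false, false);

  using ShapeMap = std::map<std::string, std::vector<int>>;
  ShapeMap min_shape{{"data", {1, 6, 8, 8}}};     // placeholder lower bound
  ShapeMap max_shape{{"data", {32, 6, 64, 64}}};  // placeholder upper bound
  ShapeMap opt_shape{{"data", {16, 6, 16, 16}}};  // the shape the test settles on
  config.SetTRTDynamicShapeInfo(min_shape, max_shape, opt_shape);

  auto predictor = paddle::CreatePaddlePredictor(config);
  return predictor != nullptr ? 0 : 1;
}
```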