机器未来 / Paddle — forked from PaddlePaddle / Paddle (in sync with the fork source)
Unverified commit c7a6a1f9
Authored Mar 25, 2021 by winter-wang; committed via GitHub, Mar 25, 2021

fix runtime crash when rnn model inference, test=develop (#31833) (#31846)

Parent: d44d1730
3 changed files, with 18 additions and 17 deletions (+18 -17):

- paddle/fluid/inference/analysis/passes/memory_optimize_pass.cc (+1 -0)
- paddle/fluid/operators/recurrent_op.cc (+12 -13)
- python/paddle/nn/functional/norm.py (+5 -4)
paddle/fluid/inference/analysis/passes/memory_optimize_pass.cc

@@ -105,6 +105,7 @@ void MemoryOptimizePass::CollectVarMemorySize(
       "merge_lod_tensor",
       "equal",
       "sequence_pool",
+      "recurrent",
       "lod_reset"};
   for (auto *tmp : node->inputs) {
     CHECK(tmp->IsOp());
...
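The hunk above adds "recurrent" to the pass's list of op types whose variables must be excluded from buffer reuse, so the memory-optimize pass no longer rewrites the recurrent op's inputs. As a minimal sketch of that skip-list idea (the names below are illustrative, not Paddle's actual API):

```python
# Sketch only (not Paddle's real API): op types whose variables a
# buffer-reuse pass must leave alone, mirroring the C++ list above.
INVALID_REUSE_OPS = {
    "merge_lod_tensor",
    "equal",
    "sequence_pool",
    "recurrent",  # the entry this commit adds
    "lod_reset",
}

def can_reuse_buffers(op_type: str) -> bool:
    """Return True if a memory-optimize pass may reuse this op's buffers."""
    return op_type not in INVALID_REUSE_OPS
```

With this guard in place, a pass walking the graph simply skips memory planning for any variable touched by an op in the set.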
paddle/fluid/operators/recurrent_op.cc

@@ -211,9 +211,10 @@ void RecurrentOp::RunImpl(const framework::Scope &scope,
   auto *block = Attr<framework::BlockDesc *>(kStepBlock);
   auto *program = block->Program();
-  auto ctx = executor.Prepare(
-      *program, block->ID(),
-      Attr<std::vector<std::string>>(kSkipEagerDeletionVars) /*skip_ref_cnt_vars*/);
+  auto ctx = executor.Prepare(
+      *program, block->ID(),
+      Attr<std::vector<std::string>>(kSkipEagerDeletionVars),
+      /*skip_ref_cnt_vars*/ true);
   static std::mutex mutex;
   std::lock_guard<std::mutex> lock(mutex);
...

@@ -256,16 +257,6 @@ void RecurrentOp::RunImpl(const framework::Scope &scope,
       // Link inside::output -> outside::output
       // outside::output[seq_offset: seq_offset + 1] = inside::output
       executor.CreateVariables(ctx->prog_, &cur_scope, ctx->block_id_);
-      if (i > 0) {
-        LinkTensorWithCallback(
-            scope, Outputs(kOutputs), cur_scope, Outputs(kOutputs),
-            [&](const framework::LoDTensor &src_tensor,
-                framework::LoDTensor *dst_tensor) {
-              framework::Tensor src_slice =
-                  src_tensor.Slice(seq_offset, seq_offset + 1);
-              dst_tensor->ShareDataWith(src_slice);
-            });
-      }
       // Linked now, execute!
       executor.RunPreparedContext(ctx.get(), &cur_scope,
...

@@ -285,6 +276,14 @@ void RecurrentOp::RunImpl(const framework::Scope &scope,
               // early.
               framework::TensorCopy(src_tensor, place, dev_ctx, &dst_out);
             });
-      }
+      } else {
+        LinkTensorWithCallback(
+            cur_scope, Outputs(kOutputs), scope, Outputs(kOutputs),
+            [&](const framework::LoDTensor &src_tensor,
+                framework::LoDTensor *dst_tensor) {
+              auto dst_out = dst_tensor->Slice(seq_offset, seq_offset + 1);
+              framework::TensorCopy(src_tensor, place, dev_ctx, &dst_out);
+            });
+      }
       scopes.ForwardNext();
...
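The key change in the hunks above is replacing ShareDataWith, which aliases the step scope's output slice, with TensorCopy, which makes an independent copy. An aliased output keeps pointing at memory owned by the inner step scope, so it can dangle once that scope's buffers are released or reused during inference. A minimal Python sketch of the share-vs-copy distinction (list aliasing stands in for tensor storage; the helper names are hypothetical, not Paddle's API):

```python
def share_data_with(src):
    # Aliases the same underlying storage, like ShareDataWith:
    # the result is only valid as long as `src`'s buffer is.
    return src

def tensor_copy(src):
    # Independent storage, like framework::TensorCopy.
    return list(src)

step_output = [1.0, 2.0, 3.0]   # stands in for the step scope's tensor
shared = share_data_with(step_output)
copied = tensor_copy(step_output)

# The step scope reuses/overwrites its buffer after the step finishes.
step_output[0] = -9.0
# `shared` observes the change (the crash-prone aliasing);
# `copied` is unaffected (the behavior the fix switches to).
```

This is why the commit pays the cost of an extra copy in the output-linking path: correctness of the outer scope's tensor no longer depends on the lifetime of the inner step scope.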
python/paddle/nn/functional/norm.py

@@ -189,10 +189,10 @@ def batch_norm(x,
     if in_dygraph_mode():
         # for dygraph need tuple
-        attrs = ("momentum", momentum, "epsilon", epsilon, "data_layout",
-                 data_format, "use_mkldnn", False, "fuse_with_relu", False,
-                 "use_global_stats", use_global_stats, "trainable_statistics",
-                 trainable_statistics)
+        attrs = ("momentum", momentum, "epsilon", epsilon, "is_test",
+                 not training, "data_layout", data_format, "use_mkldnn", False,
+                 "fuse_with_relu", False, "use_global_stats", use_global_stats,
+                 "trainable_statistics", trainable_statistics)
         batch_norm_out, _, _, _, _, _ = core.ops.batch_norm(
             x, weight, bias, running_mean, running_var, mean_out, variance_out,
             *attrs)
...

@@ -207,6 +207,7 @@ def batch_norm(x,
     attrs = {
         "momentum": momentum,
         "epsilon": epsilon,
+        "is_test": not training,
         "data_layout": data_format,
         "use_mkldnn": False,
         "fuse_with_relu": False,
...
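Both hunks add the missing `is_test` attribute, derived as `not training`. Note the calling-convention difference between the two branches: the static-graph path takes a dict, while the dygraph `core.ops` path takes the same attributes as an interleaved name/value tuple. A sketch of that flattening (the helper is illustrative, not a Paddle function; Python 3.7+ dict ordering is assumed):

```python
def to_interleaved_attrs(attrs):
    """Flatten {"name": value, ...} into ("name", value, "name", value, ...)
    as the dygraph core.ops calling convention expects (sketch only)."""
    flat = []
    for name, value in attrs.items():
        flat.extend((name, value))
    return tuple(flat)

training = False  # inference
attrs = to_interleaved_attrs({
    "momentum": 0.9,
    "epsilon": 1e-5,
    "is_test": not training,  # the pair this commit adds
})
```

Omitting `is_test` left the op in its default mode regardless of `training`, which is exactly the kind of mismatch the interleaved form makes easy to miss.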