BaiXuePrincess / Paddle (forked from PaddlePaddle / Paddle)
Commit d5114c60
Authored Sep 25, 2018 by Jacek Czaja

- Reviewers suggesstions to fused_embedding_fc_lstm_op

Parent 7ab5626d

Showing 2 changed files with 6 additions and 9 deletions (+6 −9)
paddle/fluid/framework/ir/embedding_fc_lstm_fuse_pass.cc (+6 −5)
paddle/fluid/operators/fused_embedding_fc_lstm_op.cc (+0 −4)
paddle/fluid/framework/ir/embedding_fc_lstm_fuse_pass.cc

@@ -13,6 +13,7 @@
 // limitations under the License.

 #include "paddle/fluid/framework/ir/embedding_fc_lstm_fuse_pass.h"
+#include <algorithm>
 #include <string>
 #include "paddle/fluid/framework/lod_tensor.h"

@@ -98,17 +99,17 @@ static int BuildFusion(Graph* graph, const std::string& name_scope,
   // Copy only gate biases values (only actual bias data, not peephole
   // weights)
-  std::vector<float> combined_biases(n, 0.0f);
-  memcpy(&combined_biases[0], lstm_bias_tensor.data<float>(),
-         n * sizeof(float));
+  std::vector<float> combined_biases;
+  combined_biases.reserve(n);
+  std::copy_n(lstm_bias_tensor.data<float>(), n,
+              std::back_inserter(combined_biases));

   if (with_fc_bias) {
     // Add FC-bias with LSTM-bias (into GEMM result to be)
     auto* fc_bias_var = scope->FindVar(fc_bias->Name());
     const auto& fc_bias_tensor = fc_bias_var->Get<framework::LoDTensor>();
     for (int i = 0; i < fc_bias_tensor.numel(); i++) {
-      combined_biases[i] =
-          lstm_bias_tensor.data<float>()[i] + fc_bias_tensor.data<float>()[i];
+      combined_biases[i] += fc_bias_tensor.data<float>()[i];
     }
   }
paddle/fluid/operators/fused_embedding_fc_lstm_op.cc

@@ -63,10 +63,6 @@ void FusedEmbeddingFCLSTMOp::InferShape(
   auto embeddings_dims = ctx->GetInputDim("Embeddings");
   PADDLE_ENFORCE_EQ(embeddings_dims.size(), 2,
                     "The rank of Input(Embeddings) should be 2.");
-  // PADDLE_ENFORCE_EQ(wx_dims[0], x_dims[1],
-  //                   "The first dimension of Input(Embeddings) "
-  //                   "should be %d.",
-  //                   x_dims[1]);
   auto wh_dims = ctx->GetInputDim("WeightH");
   int frame_size = wh_dims[1] / 4;