PaddlePaddle / X2Paddle
Commit 848598d7
Authored on Mar 20, 2022 by wjj19950828

rm useless code

Parent: d425c921
Showing 2 changed files with 2 additions and 11 deletions (+2 -11)
x2paddle/optimizer/fusion/onnx_layernorm_fuse_pass.py   +1 -1
x2paddle/optimizer/fusion/onnx_layernorm_fuser.py       +1 -10
x2paddle/optimizer/fusion/onnx_layernorm_fuse_pass.py

@@ -29,5 +29,5 @@ class LayerNormFusePass(Pass):
         fuser.operate(graph, match_kind="edge")
 
 
-# 用于注册
+# register layernorm pass
 onnx_layernorm_fuse_pass = LayerNormFusePass()
x2paddle/optimizer/fusion/onnx_layernorm_fuser.py

-# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -109,13 +109,9 @@ class LayerNormFuser(FuseBase):
 
     def insert_new_layer(self, graph, parameters, matches):
         new_layer, new_layer_id = self.gen_new_layer(parameters, matches)
-        print("new_layer:", new_layer)
-        print("=========:", new_layer.outputs[0])
-        # new_layer_id = list(matches.keys())[0]
         graph.layers[new_layer_id] = new_layer
         matches_copy = copy.deepcopy(matches)
         for layer_id, layer in matches_copy.items():
-            print(layer.kernel)
             if layer.kernel in ["self.create_parameter", "paddle.full"]:
                 matches.pop(layer_id)
         matches.pop(new_layer_id)
@@ -130,14 +126,11 @@ class LayerNormFuser(FuseBase):
             if layer.kernel == "paddle.mean":
                 layer_inputs.append(layer.inputs)
                 layer_inputs_ids.append(layer_id)
-                print("layer_inputs:", layer_inputs)
             if layer.kernel == "self.create_parameter":
                 param_name.append(layer.outputs[0])
             if layer.kernel == "paddle.add":
                 output_name = layer.outputs[0]
-        print("output_name:", output_name)
         param = parameters[param_name[0]]
-        print("param.shape:", param.shape)
         c = param.shape[0]
         weight_param = parameters.pop(param_name[0])
         parameters["{}.weight".format(output_name)] = weight_param
@@ -149,6 +142,4 @@ class LayerNormFuser(FuseBase):
             inputs=layer_inputs[0],
             outputs=[output_name],
             normalized_shape=[c])
-        # weight_attr=string(param_name[0]),
-        # bias_attr=string(param_name[1]))
         return new_layer, layer_inputs_ids[0]
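For context, this fuser collapses the decomposed LayerNorm subgraph exported from ONNX (mean, centering, variance, normalization, then an elementwise scale and shift) into a single layer-norm op with normalized_shape=[c], where c is read from the shape of the matched weight parameter. Below is a minimal sketch of that equivalence, not part of the commit; the epsilon value of 1e-5 is an assumed placeholder (in the real pass it comes from the matched paddle.full layer), and the exact subgraph matched by build_pattern is not shown in this diff.

import numpy as np
import paddle

np.random.seed(0)
c = 8                      # channel size; the pass reads it from the weight parameter shape
x = np.random.rand(2, 4, c).astype("float32")
weight = np.random.rand(c).astype("float32")
bias = np.random.rand(c).astype("float32")
eps = 1e-5                 # assumed here; the pass takes it from the matched paddle.full layer

# Decomposed pattern of the kind the fuser matches:
# mean -> subtract -> variance -> normalize -> scale -> shift
mean = x.mean(axis=-1, keepdims=True)
var = ((x - mean) ** 2).mean(axis=-1, keepdims=True)
decomposed = (x - mean) / np.sqrt(var + eps) * weight + bias

# Single fused op the subgraph is rewritten into, with normalized_shape=[c]
fused = paddle.nn.functional.layer_norm(
    paddle.to_tensor(x),
    normalized_shape=[c],
    weight=paddle.to_tensor(weight),
    bias=paddle.to_tensor(bias),
    epsilon=eps).numpy()

print(np.abs(decomposed - fused).max())  # on the order of 1e-6: the two forms agree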