PaddlePaddle / X2Paddle — commit 55d5eb24

Merge pull request #16 from PaddlePaddle/develop

Authored by SunAhong1993, committed via GitHub on Dec 22, 2020 (unverified signature).
Parents: a1af51d3, 42dc4e92

Showing 16 changed files, with 925 additions and 1110 deletions (+925 / -1110).
Changed files:

- README.md (+4, -3)
- x2paddle/convert.py (+2, -10)
- x2paddle/decoder/onnx_decoder.py (+7, -1)
- x2paddle/decoder/tf_decoder.py (+5, -2)
- x2paddle/op_mapper/dygraph/onnx2paddle/opset9/opset.py (+2, -19)
- x2paddle/op_mapper/dygraph/tf2paddle/tf_op_mapper.py (+19, -7)
- x2paddle/op_mapper/static/onnx2paddle/onnx_op_mapper.py (+24, -19)
- x2paddle/op_mapper/static/onnx2paddle/opset9/__init__.py (+0, -1)
- x2paddle/op_mapper/static/onnx2paddle/opset9/custom_layer/__init__.py (+0, -111)
- x2paddle/op_mapper/static/onnx2paddle/opset9/custom_layer/register.py (+0, -55)
- x2paddle/op_mapper/static/onnx2paddle/opset9/opset.py (+824, -824)
- x2paddle/op_mapper/static/tf2paddle/tf_op_mapper.py (+33, -14)
- x2paddle/optimizer/elimination/dygraph/transpose_elimination.py (+2, -7)
- x2paddle/optimizer/elimination/static/transpose_elimination.py (+2, -7)
- x2paddle/optimizer/fusion/dygraph/adaptive_pool2d_fuser.py (+1, -1)
- x2paddle/optimizer/onnx_optimizer.py (+0, -29)
README.md

````diff
@@ -35,15 +35,15 @@ pip install x2paddle==1.0.0rc0 --index https://pypi.Python.org/simple/
 ## Usage
 ### TensorFlow
 ```
 x2paddle --framework=tensorflow --model=tf_model.pb --save_dir=pd_model
+--paddle_type dygraph
 ```
 ### Caffe
 ```
 x2paddle --framework=caffe --prototxt=deploy.prototxt --weight=deploy.caffemodel --save_dir=pd_model
+--paddle_type dygraph
 ```
 ### ONNX
 ```
 x2paddle --framework=onnx --model=onnx_model.onnx --save_dir=pd_model
+--paddle_type dygraph
 ```
 ### PyTorch
@@ -64,6 +64,7 @@ x2paddle --framework=onnx --model=onnx_model.onnx --save_dir=pd_model
 |--caffe_proto | **[optional]** Path to the caffe_pb2.py file compiled from caffe.proto; used when the model contains custom Layers. Default: None |
 |--define_input_shape | **[optional]** For TensorFlow: when specified, the user is required to enter the shape of every Placeholder; see [FAQ Q2](./docs/user_guides/FAQ.md) |
 |--params_merge | **[optional]** When specified, all model parameters of the inference_model are merged into a single file __params__ after conversion |
+|--paddle_type | **[optional]** Whether to generate dygraph code (dygraph) or static-graph code (static); default: dygraph |
````
x2paddle/convert.py

```diff
@@ -185,16 +185,8 @@ def onnx2paddle(model_path, save_dir, paddle_type, params_merge=False):
     from x2paddle.op_mapper.static.onnx2paddle.onnx_op_mapper import ONNXOpMapper
     model = ONNXDecoder(model_path)
     mapper = ONNXOpMapper(model)
-    if paddle_type == "dygraph":
-        mapper.paddle_graph.build()
-        mapper.paddle_graph.gen_model(save_dir)
-    else:
-        from x2paddle.optimizer.onnx_optimizer import ONNXOptimizer
-        print("Model optimizing ...")
-        optimizer = ONNXOptimizer(mapper)
-        optimizer.delete_redundance_code()
-        print("Model optimized.")
-        mapper.save_inference_model(save_dir, params_merge)
+    mapper.paddle_graph.build()
+    mapper.paddle_graph.gen_model(save_dir)


 def pytorch2paddle(module, save_dir, jit_type="trace", input_examples=None):
```
x2paddle/decoder/onnx_decoder.py

```diff
@@ -117,7 +117,13 @@ class ONNXGraphDataNode(GraphNode):
         if isinstance(self.layer, ValueInfoProto):
             values = self.layer.type.tensor_type.shape.dim
             out_shapes = list()
-            out_shapes.append([dim.dim_value for dim in values])
+            shape = list()
+            for dim in values:
+                if dim.dim_value == 0:
+                    shape.append(-1)
+                else:
+                    shape.append(dim.dim_value)
+            out_shapes.append(shape)
             return out_shapes
         else:
             values = self.layer.dims
```
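The new loop above replaces zero-valued ONNX dimensions with -1. As a standalone sketch (the function name is mine, not from the repo): ONNX encodes an unknown, dynamic dimension as `dim_value == 0`, while the downstream Paddle code expects the `-1` convention.

```python
def normalize_onnx_shape(dim_values):
    # ONNX uses dim_value == 0 for dimensions whose size is unknown;
    # Paddle conventionally marks such dynamic dimensions with -1.
    shape = []
    for d in dim_values:
        shape.append(-1 if d == 0 else d)
    return shape

print(normalize_onnx_shape([0, 224, 224, 3]))  # → [-1, 224, 224, 3]
```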
x2paddle/decoder/tf_decoder.py

```diff
@@ -130,7 +130,7 @@ class TFGraph(Graph):
     def __init__(self, model, data_format="NHWC"):
         super(TFGraph, self).__init__(model)
         self.identity_map = dict()
-        self.multi_out_ops = ['Split', 'SplitV', 'IteratorV2']
+        self.multi_out_ops = ['Split', 'SplitV', 'IteratorV2', 'Unpack']
         self.tf_data_format = data_format
         self.graph_name = "TFModel"
@@ -173,6 +173,7 @@ class TFGraph(Graph):
+        self._optimize_dialiation_conv()
         self._remove_identity_node()
         self._remove_cast_node()

     def get_node(self, node_name, copy=False):
         items = node_name.strip().split(':')
@@ -192,6 +193,8 @@ class TFGraph(Graph):
     def get_input_node(self, node, idx=0, copy=False):
         input_node_name = node.layer.input[idx]
+        if idx > 0:
+            copy = True
         return self.get_node(input_node_name, copy)

     def remove_node(self, node_name):
@@ -402,7 +405,7 @@ class TFDecoder(object):
         right_shape_been_input = False
         while not right_shape_been_input:
             try:
-                shape = raw_input("Shape of Input(e.g. None,224,224,3): ")
+                shape = input("Shape of Input(e.g. None,224,224,3): ")
             except:
                 shape = input("Shape of Input(e.g. None,224,224,3): ")
```
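TFDecoder prompts the user for shapes in the form `None,224,224,3`. A hedged sketch of parsing such an answer (the helper name and exact semantics are my assumption; the repo's actual parsing is not shown in this diff):

```python
def parse_shape_answer(text):
    # "None,224,224,3" -> [None, 224, 224, 3]; "None" marks a dynamic dimension
    dims = []
    for tok in text.split(","):
        tok = tok.strip()
        dims.append(None if tok.lower() == "none" else int(tok))
    return dims

print(parse_shape_answer("None,224,224,3"))  # → [None, 224, 224, 3]
```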
x2paddle/op_mapper/dygraph/onnx2paddle/opset9/opset.py

```diff
@@ -534,7 +534,7 @@ class OpSet9():
             'bias_attr': string(val_b.name)
         }
         dim = len(val_x.out_shapes[0])
-        if dim == 2 or dim == 3:
+        if dim == 3:
             paddle_op = "paddle.nn.InstanceNorm1D"
         elif dim == 4:
             paddle_op = "paddle.nn.InstanceNorm2D"
@@ -1539,7 +1539,6 @@ class OpSet9():
         layer_outputs = [op_name, output_name]
         val_x = self.graph.get_input_node(node, idx=0, copy=True)
         val_w = self.graph.get_input_node(node, idx=1, copy=True)
-        val_y = self.graph.get_node(node.layer.output[0], copy=True)
         has_bias = len(node.layer.input) == 3
         if has_bias:
             val_b = self.graph.get_input_node(node, idx=2, copy=True)
@@ -1620,23 +1619,7 @@ class OpSet9():
             output_size[1] = (val_x.out_shapes[0][3] - 1
                               ) * strides[1] - 2 * paddings[1] + dilations[1] * (
                                   kernel_shape[1] - 1) + 1 + out_padding[1]
-        # layer_attrs = {
-        # Conv2DTranspose lacks output_size; it can only be passed in via forward
-        #     'in_channels': num_in_channels,
-        #     'out_channels': num_out_channels,
-        #     'output_size': output_size or None,
-        #     'kernel_size': kernel_shape,
-        #     'padding': paddings,
-        #     'stride': strides,
-        #     'dilation': dilations,
-        #     'groups': num_groups,
-        #     'weight_attr': string(val_w.name),
-        #     'bias_attr': None if val_b is None else string(val_b.name),
-        # }
-        # self.paddle_graph.add_layer(
-        #     paddle_op,
-        #     inputs={"x": val_x.name},
-        #     outputs=layer_outputs,
-        #     **layer_attrs)
         inputs_dict = {'x': val_x if isinstance(val_x, str) else val_x.name,
                        "weight": val_w.name}
         layer_attrs = {
```
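The `output_size` arithmetic kept as context in the ConvTranspose hunk is the standard transposed-convolution output-size formula. Written out as a function, one spatial axis at a time (the function name and scalar-argument framing are mine):

```python
def deconv_output_size(in_size, stride, padding, dilation, kernel, out_padding):
    # mirrors the expression in the mapper:
    # (in - 1) * stride - 2 * padding + dilation * (kernel - 1) + 1 + out_padding
    return (in_size - 1) * stride - 2 * padding + dilation * (kernel - 1) + 1 + out_padding

print(deconv_output_size(4, 2, 0, 1, 3, 0))  # → 9
```

With stride 1, padding 1, a 3x3 kernel and no output padding, the spatial size is preserved, which is a quick sanity check on the formula.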
x2paddle/op_mapper/dygraph/tf2paddle/tf_op_mapper.py

```diff
@@ -73,15 +73,17 @@ class TFOpMapper(OpMapper):
         'Sub': 'fluid.layers.elementwise_sub',
         'Maximum': 'paddle.maximum',
         'Minimum': 'paddle.minimum',
+        'Mul': 'paddle.multiply',
+        'FloorDiv': 'paddle.floor_divide',
+        'FloorMod': 'paddle.floor_mod',
+        'LogicalAnd': 'logical_and',
+    }
+    bool_ops = {
         'LessEqual': 'paddle.less_equal',
         'GreaterEqual': 'paddle.greater_equal',
         'Greater': 'paddle.greater_than',
         'NotEqual': 'paddle.not_equal',
         'Equal': 'paddle.equal',
-        'Mul': 'paddle.multiply',
-        'FloorDiv': 'paddle.floor_divide',
-        'FloorMod': 'paddle.floor_mod',
-        'LogicalAnd': 'logical_and',
     }

     def __init__(self, decoder):
@@ -123,6 +125,8 @@ class TFOpMapper(OpMapper):
                 self.directly_map(node)
             elif op in self.elementwise_ops:
                 self.elementwise_map(node)
+            elif op in self.bool_ops:
+                self.bool_map(node)
             elif hasattr(self, op):
                 func = getattr(self, op)
                 func(node)
@@ -138,7 +142,8 @@ class TFOpMapper(OpMapper):
             op = node.layer_type
             if not hasattr(self, op) and \
                     op not in self.directly_map_ops and \
-                    op not in self.elementwise_ops:
+                    op not in self.elementwise_ops and \
+                    op not in self.bool_ops:
                 unsupported_ops.add(op)
         if len(unsupported_ops) == 0:
             return True
@@ -178,8 +183,10 @@ class TFOpMapper(OpMapper):
             outputs=[node.name],
             **layer_attrs)

-    def elementwise_map(self, node):
-        op_type = self.elementwise_ops[node.layer_type]
+    def elementwise_map(self, node, op_type=None):
+        if op_type is None:
+            assert node.layer_type in self.elementwise_ops
+            op_type = self.elementwise_ops[node.layer_type]
         x = self.graph.get_input_node(node, 0)
         y = self.graph.get_input_node(node, 1)
         x_shape = x.out_shapes[0]
@@ -190,6 +197,11 @@ class TFOpMapper(OpMapper):
                 "y": y.name},
             outputs=[node.name])
         self.paddle_graph.layers[layer_id].input_shapes = {"x": x_shape, "y": y_shape}
+
+    def bool_map(self, node):
+        op_type = self.bool_ops[node.layer_type]
+        self.elementwise_map(node, op_type)
+        node.set_dtype("bool")

     def Placeholder(self, node):
         shape = node.out_shapes[0]
```
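The split into `elementwise_ops` and `bool_ops` is a small dispatch-table refactor: comparison ops reuse the elementwise code path but mark their output dtype as bool via `set_dtype("bool")`. A standalone sketch of that dispatch (tables abbreviated; the real mapping lives in `TFOpMapper`):

```python
elementwise_ops = {'Mul': 'paddle.multiply', 'FloorDiv': 'paddle.floor_divide'}
bool_ops = {'Equal': 'paddle.equal', 'Greater': 'paddle.greater_than'}

def dispatch(layer_type):
    """Return (paddle kernel, output-dtype override) for a TF op name."""
    if layer_type in elementwise_ops:
        return elementwise_ops[layer_type], None
    if layer_type in bool_ops:
        # bool ops go through the same elementwise path, then get dtype "bool"
        return bool_ops[layer_type], "bool"
    raise KeyError("unsupported op: " + layer_type)

print(dispatch('Equal'))  # → ('paddle.equal', 'bool')
```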
x2paddle/op_mapper/static/onnx2paddle/onnx_op_mapper.py

```diff
@@ -12,9 +12,11 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from x2paddle.op_mapper.static.onnx2paddle.opset9 import OpSet9, custom_layers
+import sys
+from x2paddle.op_mapper.static.onnx2paddle.opset9 import OpSet9
 from x2paddle.core.op_mapper import OpMapper
-from x2paddle.decoder.onnx_decoder import ONNXGraph, ONNXGraphNode, ONNXGraphDataNode
+from x2paddle.decoder.onnx_decoder import ONNXGraphNode
+from x2paddle.core.program import PaddleGraph


 class ONNXOpMapper(OpMapper):
@@ -23,33 +25,36 @@ class ONNXOpMapper(OpMapper):
         self.support_op_sets = [9, ]
         self.default_op_set = 9
         self.graph = decoder.graph
+        self.paddle_graph = PaddleGraph(parent_layer=None, graph_type="static", source_type="onnx")
+        self.paddle_graph.outputs = self.graph.output_nodes
         self.opset = self.create_opset(decoder)
         if not self.op_checker():
-            raise Exception("Model are not supported yet.")
+            raise Exception("Model is not supported yet.")
+        #mapping op
         print("Total nodes: {}".format(
             sum([
                 isinstance(node, ONNXGraphNode)
                 for name, node in self.graph.node_map.items()
             ])))
         print("Nodes converting ...")
-        for node_name in self.graph.topo_sort:
+        for i, node_name in enumerate(self.graph.topo_sort):
+            sys.stderr.write("\rConverting node {} ... ".format(i + 1))
             node = self.graph.get_node(node_name)
             op = node.layer_type
             if hasattr(self.opset, op):
                 func = getattr(self.opset, op)
                 func(node)
-            elif op in self.opset.default_op_mapping:
+            elif op in self.opset.directly_map_ops:
                 self.opset.directly_map(node)
-            elif op in custom_layers:
-                self.opset.deal_custom_layer(node)
             elif op in self.opset.elementwise_ops:
                 self.opset.elementwise_map(node)
-        print("Nodes converted.")
-        self.weights = self.opset.weights
-        self.omit_nodes = self.opset.omit_nodes
-        self.used_custom_layers = self.opset.used_custom_layers
+        print("\nNodes converted.")
+        self.paddle_graph.set_name(self.graph.graph_name)
+        self.paddle_graph.set_parameters(self.opset.params)
+        self.paddle_graph.set_inputs_info(self.opset.inputs_info)
+        self.paddle_graph.inputs = self.graph.input_nodes
+        self.paddle_graph.outputs = self.graph.output_nodes

     def op_checker(self):
         unsupported_ops = set()
@@ -57,17 +62,17 @@ class ONNXOpMapper(OpMapper):
             node = self.graph.get_node(node_name)
             op = node.layer_type
             if not hasattr(self.opset, op) and \
-                    op not in self.opset.default_op_mapping and \
-                    op not in custom_layers and \
+                    op not in self.opset.directly_map_ops and \
                     op not in self.opset.elementwise_ops:
                 unsupported_ops.add(op)
         if len(unsupported_ops) == 0:
             return True
         else:
-            print("There are {} ops not supported yet, list as below".format(
-                len(unsupported_ops)))
+            if len(unsupported_ops) > 0:
+                print("\n========= {} OPs are not supported yet ===========".format(
+                    len(unsupported_ops)))
             for op in unsupported_ops:
-                print(op)
+                print("========== {} ============".format(op))
             return False

     def create_opset(self, decoder):
@@ -88,4 +93,4 @@ class ONNXOpMapper(OpMapper):
             'Now, onnx2paddle support convert onnx model opset_verison {},'
             'opset_verison of your onnx model is {}, automatically treated as op_set: {}.'
             .format(self.support_op_sets, decoder.op_set, run_op_set))
-        return eval(opset)(decoder)
+        return eval(opset)(decoder, self.paddle_graph)
```
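The converter now writes a single self-overwriting status line via `sys.stderr.write("\rConverting node {} ... ")` instead of staying silent per node. The carriage-return trick, demonstrated on an in-memory stream so it is testable:

```python
import io

def report_progress(stream, total):
    # '\r' returns the cursor to the start of the line, so each write
    # overwrites the previous counter instead of adding a new line
    for i in range(total):
        stream.write("\rConverting node {} ... ".format(i + 1))
    stream.write("\nNodes converted.\n")

buf = io.StringIO()
report_progress(buf, 3)
print(repr(buf.getvalue()))
```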
x2paddle/op_mapper/static/onnx2paddle/opset9/__init__.py

```diff
 from .opset import OpSet9
-from .custom_layer import custom_layers
```
x2paddle/op_mapper/static/onnx2paddle/opset9/custom_layer/__init__.py (deleted, mode 100644 → 0; previous content at a1af51d3)

```python
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from .register import get_registered_layers

custom_layers = get_registered_layers()


def set_args(f, params):
    """ set args for function 'f' using the parameters in node.layer.param
    Args:
        f (function): a python function object
        params (object): a object contains attributes needed by f's arguments
    Returns:
        arg_names (list): a list of argument names
        kwargs (dict): a dict contains needed arguments
    """
    argc = f.__code__.co_argcount
    arg_list = f.__code__.co_varnames[0:argc]
    kwargs = {}
    for arg_name in arg_list:
        if hasattr(params, arg_name) and params is not None:
            kwargs[arg_name] = getattr(params, arg_name)
    return arg_list, kwargs


def has_layer(layer_type):
    """ test whether this layer exists in custom layer
    """
    return layer_type in custom_layers


def get_params(layer, layer_type):
    import re
    if layer_type.lower() == "deconvolution" or layer_type.lower(
    ) == "convolutiondepthwise":
        param_name = '_'.join(('convolution', 'param'))
    elif layer_type.lower() == "normalize":
        param_name = '_'.join(('norm', 'param'))
    elif len(layer_type) - len(re.sub("[A-Z]", "", layer_type)) >= 2:
        s = ''
        tmp_name = ''
        for i, ch in enumerate(layer_type):
            if i == 0:
                s += ch.lower()
                continue
            elif ch.isupper() and layer_type[i - 1].islower():
                tmp_name += (s + '_')
                s = ''
            s += ch.lower()
        tmp_name += s
        param_name = '_'.join((tmp_name, 'param'))
    else:
        param_name = '_'.join((layer_type.lower(), 'param'))
    return getattr(layer, param_name, None)


def compute_output_shape(node):
    """ compute the output shape of custom layer
    """
    layer_type = node.layer_type
    assert layer_type in custom_layers, "layer[%s] not exist in custom layers" % (layer_type)
    shape_func = custom_layers[layer_type]['shape']
    layer = node.layer
    params = get_params(layer, layer_type)
    arg_names, kwargs = set_args(shape_func, params)
    input_shape = node.input_shape
    return shape_func(input_shape, **kwargs)


def make_custom_layer(node):
    """ get the code which implement the custom layer function
    """
    layer_type = node.layer_type
    assert layer_type in custom_layers, "layer[%s] not exist in custom layers" % (layer_type)
    layer_func = custom_layers[layer_type]['layer']
    import inspect
    return inspect.getsource(layer_func), layer_func


def make_custom_child_func(node):
    """ get the code which implement the custom layer function
    """
    layer_type = node.layer_type
    child_func = custom_layers[layer_type]['child_func']
    if child_func is None:
        return None, child_func
    import inspect
    return inspect.getsource(child_func), child_func


def deal_weights(node, data=None):
    """ deal the weights of the custom layer
    """
    layer_type = node.layer_type
    weights_func = custom_layers[layer_type]['weights']
    name = node.layer_name
    return weights_func(name, data)
```
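The deleted `set_args` matched a shape function's argument names against attributes of a Caffe param object. Its core trick, reading argument names from `f.__code__`, runs standalone (the `FakeParam` and `shape_fn` names below are illustrative stand-ins, not from the repo):

```python
def set_args(f, params):
    # collect f's positional argument names and pull same-named
    # attributes off the params object into kwargs
    argc = f.__code__.co_argcount
    arg_list = f.__code__.co_varnames[0:argc]
    kwargs = {}
    for arg_name in arg_list:
        if params is not None and hasattr(params, arg_name):
            kwargs[arg_name] = getattr(params, arg_name)
    return arg_list, kwargs

class FakeParam:  # stand-in for a caffe layer param message
    num_output = 64
    axis = 1

def shape_fn(input_shape, num_output=None):
    return [input_shape[0], num_output]

args, kwargs = set_args(shape_fn, FakeParam())
print(args, kwargs)  # → ('input_shape', 'num_output') {'num_output': 64}
```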
x2paddle/op_mapper/static/onnx2paddle/opset9/custom_layer/register.py (deleted, mode 100644 → 0; previous content at a1af51d3)

```python
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" this module provides 'register' for registering customized layers
"""

g_custom_layers = {}


def register(kind, shape, layer, child_func, weights):
    """ register a custom layer or a list of custom layers
    Args:
        @kind (str or list): type name of the layer
        @shape (function): a function to generate the shape of layer's output
        @layer (function): a function to generate the paddle code of layer
        @weights (function): a function to deal with weights data
    Returns:
        None
    """
    assert type(shape).__name__ == 'function', 'shape should be a function'
    assert type(layer).__name__ == 'function', 'layer should be a function'

    if type(kind) is str:
        kind = [kind]
    else:
        assert type(kind) is list, 'invalid param "kind" for register, not a list or str'

    for k in kind:
        assert type(k) is str, 'invalid param "kind" for register, not a list of str'
        assert k not in g_custom_layers, 'this type[%s] has already been registered' % (k)
        g_custom_layers[k] = {
            'shape': shape,
            'layer': layer,
            'child_func': child_func,
            'weights': weights
        }


def get_registered_layers():
    return g_custom_layers
```
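The deleted `register` module is a plain function registry. A condensed sketch of the same pattern, mirroring the dict layout it used (the `'MyLayer'` registration below is a made-up example):

```python
g_custom_layers = {}

def register(kind, shape, layer, child_func, weights):
    # accept a single name or a list of names; reject duplicate registrations
    kinds = [kind] if isinstance(kind, str) else list(kind)
    for k in kinds:
        assert k not in g_custom_layers, "type[%s] already registered" % k
        g_custom_layers[k] = {
            'shape': shape,        # computes the output shape
            'layer': layer,        # emits the paddle code for the layer
            'child_func': child_func,
            'weights': weights,    # post-processes weight data
        }

register('MyLayer', shape=lambda s: s, layer=lambda x: x,
         child_func=None, weights=lambda name, data: {})
print(sorted(g_custom_layers))  # → ['MyLayer']
```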
x2paddle/op_mapper/static/onnx2paddle/opset9/opset.py

(Diff collapsed in the page view: 824 additions, 824 deletions.)
x2paddle/op_mapper/static/tf2paddle/tf_op_mapper.py

```diff
@@ -75,15 +75,17 @@ class TFOpMapper(OpMapper):
         'Sub': 'fluid.layers.elementwise_sub',
         'Maximum': 'paddle.maximum',
         'Minimum': 'paddle.minimum',
+        'Mul': 'paddle.multiply',
+        'FloorDiv': 'paddle.floor_divide',
+        'FloorMod': 'paddle.floor_mod',
+        'LogicalAnd': 'logical_and',
+    }
+    bool_ops = {
         'LessEqual': 'paddle.less_equal',
         'GreaterEqual': 'paddle.greater_equal',
         'Greater': 'paddle.greater_than',
         'NotEqual': 'paddle.not_equal',
         'Equal': 'paddle.equal',
-        'Mul': 'paddle.multiply',
-        'FloorDiv': 'paddle.floor_divide',
-        'FloorMod': 'paddle.floor_mod',
-        'LogicalAnd': 'logical_and',
     }

     def __init__(self, decoder):
@@ -94,6 +96,7 @@ class TFOpMapper(OpMapper):
             raise Exception("Model is not supported yet.")
         self.params = dict()
         self.paddle_graph = PaddleGraph(parent_layer=None, graph_type="static", source_type="tf")
+        self.params_output2id = dict()
         not_placeholder = list()
         for name in self.graph.input_nodes:
@@ -124,6 +127,8 @@ class TFOpMapper(OpMapper):
                 self.directly_map(node)
             elif op in self.elementwise_ops:
                 self.elementwise_map(node)
+            elif op in self.bool_ops:
+                self.bool_map(node)
             elif hasattr(self, op):
                 func = getattr(self, op)
                 func(node)
@@ -138,7 +143,8 @@ class TFOpMapper(OpMapper):
             op = node.layer_type
             if not hasattr(self, op) and \
                     op not in self.directly_map_ops and \
-                    op not in self.elementwise_ops:
+                    op not in self.elementwise_ops and \
+                    op not in self.bool_ops:
                 unsupported_ops.add(op)
         if len(unsupported_ops) == 0:
             return True
@@ -167,9 +173,10 @@ class TFOpMapper(OpMapper):
             outputs=[node.name],
             **attr)

-    def elementwise_map(self, node):
-        assert node.layer_type in self.elementwise_ops
-        op_type = self.elementwise_ops[node.layer_type]
+    def elementwise_map(self, node, op_type=None):
+        if op_type is None:
+            assert node.layer_type in self.elementwise_ops
+            op_type = self.elementwise_ops[node.layer_type]
         x = self.graph.get_node(node.layer.input[0])
         y = self.graph.get_node(node.layer.input[1])
         x_shape = x.out_shapes[0]
@@ -180,6 +187,11 @@ class TFOpMapper(OpMapper):
                 "y": y.name},
             outputs=[node.name])
         self.paddle_graph.layers[layer_id].input_shapes = {"x": x_shape, "y": y_shape}
+
+    def bool_map(self, node):
+        op_type = self.bool_ops[node.layer_type]
+        self.elementwise_map(node, op_type)
+        node.set_dtype("bool")

     def Placeholder(self, node):
         shape = node.out_shapes[0]
@@ -213,7 +225,7 @@ class TFOpMapper(OpMapper):
             return

         self.params[node.name] = node.value
-        self.paddle_graph.add_layer(
+        layer_id = self.paddle_graph.add_layer(
             kernel="paddle.static.create_parameter",
             inputs={},
             outputs=[node.name],
@@ -221,6 +233,7 @@ class TFOpMapper(OpMapper):
             shape=shape,
             name=string(node.name),
             default_initializer="paddle.nn.initializer.Constant(value=0.0)")
+        self.params_output2id[node.name] = layer_id

     def Transpose(self, node):
         input = self.graph.get_node(node.layer.input[0])
@@ -763,11 +776,17 @@ class TFOpMapper(OpMapper):
         data_format = node.get_attr("data_format").decode()
         pad_mode = node.get_attr("padding").decode()
-        self.paddle_graph.add_layer(
-            kernel="paddle.transpose",
-            inputs={"x": kernel.name},
-            outputs=[kernel.name],
-            perm=[2, 3, 0, 1])
+        if len(kernel.outputs) == 1:
+            self.params[kernel.name] = numpy.transpose(self.params[kernel.name],
+                                                       (2, 3, 0, 1))
+            layer = self.paddle_graph.layers[self.params_output2id[kernel.name]]
+            layer.attrs["shape"] = self.params[kernel.name].shape
+        else:
+            self.paddle_graph.add_layer(
+                kernel="paddle.transpose",
+                inputs={"x": kernel.name},
+                outputs=[kernel.name],
+                perm=[2, 3, 0, 1])
         input_name = input.name
         if data_format == "NHWC":
```
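In the last hunk, when a constant conv kernel feeds exactly one consumer, the commit transposes the stored parameter array once with `numpy.transpose(..., (2, 3, 0, 1))` and patches the `create_parameter` layer's shape attr, instead of emitting a `paddle.transpose` layer into the generated program. The axis permutation itself, on a TF-style spatial-first kernel (the HWIO reading of the layout is my assumption):

```python
import numpy as np

# a TF-style conv kernel: (height, width, in_channels, channel_multiplier)
kernel = np.zeros((3, 3, 16, 2))
# perm (2, 3, 0, 1): channels move to the front, spatial dims to the back
moved = np.transpose(kernel, (2, 3, 0, 1))
print(moved.shape)  # → (16, 2, 3, 3)
```

Doing this once at conversion time removes a transpose op from every forward pass of the generated model.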
x2paddle/optimizer/elimination/dygraph/transpose_elimination.py
浏览文件 @
55d5eb24
...
@@ -178,13 +178,13 @@ class DygraphTransposeElimination(FuseBase):
                     if _graph.layers[ipt].outputs[
                             output_index] == _graph.layers[current_id].inputs['x']:
-                        if len(x_shape) <= 1:
+                        if list(x_shape) == [1] or len(x_shape) < 1:
                             elementwise_layers.append(current_id)
                             continue
                     elif _graph.layers[ipt].outputs[
                             output_index] == _graph.layers[current_id].inputs['y']:
-                        if len(y_shape) <= 1:
+                        if list(y_shape) == [1] or len(y_shape) < 1:
                             elementwise_layers.append(current_id)
                             continue
                     else:
...
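The tightened condition only treats genuinely layout-free operands as safe to fuse past a transpose. The practical difference, sketched with illustrative helper names (not from the source):

```python
def old_is_safe(shape):
    # previous check: any 0-D or 1-D operand was considered layout-free
    return len(shape) <= 1

def new_is_safe(shape):
    # new check: only a true scalar (shape [1] or empty) is layout-free
    return list(shape) == [1] or len(shape) < 1

old_is_safe([64])  # True: a length-64 bias was wrongly treated as safe,
                   # even though its broadcast axis depends on the layout
new_is_safe([64])  # False: the transpose around it must be kept
new_is_safe([1])   # True: a scalar broadcasts identically in any layout
```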
@@ -279,11 +279,6 @@ class DygraphTransposeElimination(FuseBase):
...
@@ -279,11 +279,6 @@ class DygraphTransposeElimination(FuseBase):
for
layer_id
in
list
(
set
(
optimized_concat_layers
)):
for
layer_id
in
list
(
set
(
optimized_concat_layers
)):
axis
=
graph
.
layers
[
layer_id
].
attrs
.
get
(
'axis'
,
0
)
axis
=
graph
.
layers
[
layer_id
].
attrs
.
get
(
'axis'
,
0
)
graph
.
layers
[
layer_id
].
attrs
[
'axis'
]
=
[
0
,
2
,
3
,
1
][
axis
]
graph
.
layers
[
layer_id
].
attrs
[
'axis'
]
=
[
0
,
2
,
3
,
1
][
axis
]
for
layer_id
in
list
(
set
(
optimized_elementwise_layers
)):
axis
=
graph
.
layers
[
layer_id
].
attrs
.
get
(
'axis'
,
-
1
)
graph
.
layers
[
layer_id
].
attrs
[
'axis'
]
=
[
0
,
2
,
3
,
1
][
axis
]
if
graph
.
layers
[
layer_id
].
kernel
==
"paddle.add"
:
graph
.
layers
[
layer_id
].
kernel
=
"fluid.layers.elementwise_add"
current_transpose_num
=
self
.
get_transpose_num
(
graph
)
current_transpose_num
=
self
.
get_transpose_num
(
graph
)
print
(
print
(
...
...
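The surviving `[0, 2, 3, 1][axis]` remap reads as a lookup table from NHWC axes to NCHW axes: once the transposes around a concat are eliminated, an axis attribute written for one layout must be restated for the other. A small sketch of the table (the layout direction is inferred from the perm, not stated in the diff):

```python
# Lookup table: axis index in NHWC -> axis index in NCHW.
# NHWC = (batch, height, width, channels); NCHW = (batch, channels, height, width).
nhwc_to_nchw = [0, 2, 3, 1]

nhwc_to_nchw[3]  # 1: channels are last in NHWC, second in NCHW
nhwc_to_nchw[1]  # 2: height is axis 1 in NHWC, axis 2 in NCHW
```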
x2paddle/optimizer/elimination/static/transpose_elimination.py (view file @ 55d5eb24)
...
@@ -178,13 +178,13 @@ class StaticTransposeElimination(FuseBase):
                     if _graph.layers[ipt].outputs[
                             output_index] == _graph.layers[current_id].inputs['x']:
-                        if len(x_shape) <= 1:
+                        if list(x_shape) == [1] or len(x_shape) < 1:
                             elementwise_layers.append(current_id)
                             continue
                     elif _graph.layers[ipt].outputs[
                             output_index] == _graph.layers[current_id].inputs['y']:
-                        if len(y_shape) <= 1:
+                        if list(y_shape) == [1] or len(y_shape) < 1:
                             elementwise_layers.append(current_id)
                             continue
                     else:
...
@@ -279,11 +279,6 @@ class StaticTransposeElimination(FuseBase):
...
@@ -279,11 +279,6 @@ class StaticTransposeElimination(FuseBase):
for
layer_id
in
list
(
set
(
optimized_concat_layers
)):
for
layer_id
in
list
(
set
(
optimized_concat_layers
)):
axis
=
graph
.
layers
[
layer_id
].
attrs
.
get
(
'axis'
,
0
)
axis
=
graph
.
layers
[
layer_id
].
attrs
.
get
(
'axis'
,
0
)
graph
.
layers
[
layer_id
].
attrs
[
'axis'
]
=
[
0
,
2
,
3
,
1
][
axis
]
graph
.
layers
[
layer_id
].
attrs
[
'axis'
]
=
[
0
,
2
,
3
,
1
][
axis
]
for
layer_id
in
list
(
set
(
optimized_elementwise_layers
)):
axis
=
graph
.
layers
[
layer_id
].
attrs
.
get
(
'axis'
,
-
1
)
graph
.
layers
[
layer_id
].
attrs
[
'axis'
]
=
[
0
,
2
,
3
,
1
][
axis
]
if
graph
.
layers
[
layer_id
].
kernel
==
"paddle.add"
:
graph
.
layers
[
layer_id
].
kernel
=
"fluid.layers.elementwise_add"
current_transpose_num
=
self
.
get_transpose_num
(
graph
)
current_transpose_num
=
self
.
get_transpose_num
(
graph
)
print
(
print
(
...
...
x2paddle/optimizer/fusion/dygraph/adaptive_pool2d_fuser.py (view file @ 55d5eb24)
...
@@ -167,7 +167,7 @@ class DygraphAdaptivePool2dFuser(FuseBase):
             new_layer = PaddleLayer(
                 layers_id[0],
                 "paddle.nn.functional.adaptive_avg_pool2d",
-                inputs={"input": input_name},
+                inputs={"x": input_name},
                 outputs=[output_name],
                 **attrs)
         else:
...
...
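The rename from `"input"` to `"x"` tracks the first-parameter name of `paddle.nn.functional.adaptive_avg_pool2d` in the Paddle 2.x API. For reference, a minimal NumPy sketch of what the fused op computes (a simplified stand-in, assuming NCHW input and integer-divisible output sizes, not the Paddle implementation):

```python
import numpy as np

def adaptive_avg_pool2d(x, output_size):
    """Simplified adaptive average pooling over an NCHW array."""
    n, c, h, w = x.shape
    oh, ow = output_size
    # Split H and W into oh x ow equal bins and average each bin;
    # assumes h % oh == 0 and w % ow == 0 for brevity.
    return x.reshape(n, c, oh, h // oh, ow, w // ow).mean(axis=(3, 5))

feat = np.arange(16, dtype=np.float64).reshape(1, 1, 4, 4)
pooled = adaptive_avg_pool2d(feat, (1, 1))  # global average pooling
```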
x2paddle/optimizer/onnx_optimizer.py (deleted, 100644 → 0, view file @ a1af51d3)
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# TODO useless node remove


class ONNXOptimizer(object):
    def __init__(self, op_mapper):
        self.op_mapper = op_mapper
        self.graph = op_mapper.graph

    def delete_redundance_code(self):
        for node_name in self.graph.topo_sort:
            if node_name in self.op_mapper.omit_nodes:
                node = self.graph.get_node(node_name)
                omit_freq = self.op_mapper.omit_nodes.count(node_name)
                if len(node.outputs) <= omit_freq:
                    node.fluid_code.clear()
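The deleted optimizer cleared a node's emitted code only when every one of its outputs was accounted for by an omit entry, which the list's `count` expresses directly. The core test, sketched with plain lists (names are illustrative):

```python
# omit_nodes may list the same node once per consumer that no longer needs it;
# a node is cleared only when omissions cover all of its outputs.
omit_nodes = ["relu_1", "relu_1"]
node_outputs = ["branch_a", "branch_b"]

omit_freq = omit_nodes.count("relu_1")          # 2 omissions recorded
should_clear = len(node_outputs) <= omit_freq   # True: both outputs covered
```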