PaddlePaddle / X2Paddle
Commit 2721c0a9
Authored May 11, 2021 by SunAhong1993
Commit message: release
Parent: 13aeafee

Showing 5 changed files with 457 additions and 266 deletions (+457, -266):
README.md (+16, -11)
x2paddle/__init__.py (+1, -1)
x2paddle/convert.py (+2, -1)
x2paddle/op_mapper/dygraph/tf2paddle/tf_op_mapper.py (+221, -136)
x2paddle/op_mapper/static/tf2paddle/tf_op_mapper.py (+217, -117)
README.md

@@ -33,7 +33,7 @@ X2Paddle's architecture is designed around supporting multiple deep learning frameworks
- pytorch: torch >= 1.5.0 (script mode does not yet support 1.7.0)

## Installation
-### Option 1: install from source (recommended)
+### Option 1: install from source
```
git clone https://github.com/PaddlePaddle/X2Paddle.git
cd X2Paddle
```

@@ -41,7 +41,7 @@
git checkout develop
python setup.py install
```
-### Option 2: install via pip
+### Option 2: install via pip (recommended)
We update the x2paddle package on the pip index regularly.
```
pip install x2paddle --index https://pypi.python.org/simple/
```
@@ -95,10 +95,8 @@ X2Paddle provides tools for the following problems; see [tools/README.md](tools/README.md)
4. [Adding built-in Caffe custom layers to X2Paddle](./docs/user_guides/add_caffe_custom_layer.md)
5. [Introduction to the converted PaddlePaddle inference model](./docs/user_guides/pd_folder_introduction.py)
6. [Paddle to ONNX conversion](https://github.com/PaddlePaddle/Paddle2ONNX)
+## Supported-list documentation
+1. [X2Paddle test model zoo](./docs/introduction/x2paddle_model_zoo.md)
+2. [List of ops supported by X2Paddle](./docs/introduction/op_list.md)
-7. [X2Paddle test model zoo](./docs/introduction/x2paddle_model_zoo.md)
-8. [List of ops supported by X2Paddle](./docs/introduction/op_list.md)
## Conversion tutorials
@@ -111,9 +109,16 @@ X2Paddle provides tools for the following problems; see [tools/README.md](tools/README.md)
   Option 1 (trace mode): the converted code is organized into modules, and each module behaves the same as its PyTorch counterpart. (A small trace/script sketch follows this changelog.)
   Option 2 (script mode): the converted code appears line by line in execution order.
2. Added conversion from Caffe/ONNX/TensorFlow to Paddle dygraph.
-3. Added TensorFlow ops (14): Neg, Greater, FloorMod, LogicalAdd, Prd, Equal, Conv3D, Ceil, AddN, DivNoNan, Where, MirrorPad, Size, TopKv2
+3. Added TensorFlow op mappings (14): Neg, Greater, FloorMod, LogicalAdd, Prd, Equal, Conv3D, Ceil, AddN, DivNoNan, Where, MirrorPad, Size, TopKv2.
4. Added an Optimizer module, mainly covering op fusion and op elimination; the converted code is more readable and inference takes less time.

2021.04.30
1. New models supported for conversion: [SwinTransformer](https://github.com/microsoft/Swin-Transformer/), [BASNet](https://github.com/xuebinqin/BASNet), [DBFace](https://github.com/dlunion/DBFace), [EasyOCR](https://github.com/JaidedAI/EasyOCR), [CifarNet](https://github.com/tensorflow/models/blob/master/research/slim/nets/cifarnet.py), etc.
2. The tool can now be used on Windows.
3. Added TensorFlow op mappings (4): SplitV, ReverseV2, BatchToSpaceND, SpaceToBatchND.
4. Added PyTorch op mappings (11): aten::index, aten::roll, aten::adaptive_avg_pool1d, aten::reflection_pad2d, aten::reflection_pad1d, aten::instance_norm, aten::gru, aten::norm, aten::clamp_min, aten::prelu, aten::split_with_sizes.
5. Added ONNX op mapping (1): DepthToSpace.
6. Added Caffe op mapping (1): MemoryData.
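As referenced in the trace/script item above, here is a minimal sketch of how a PyTorch module is usually captured for the two modes before conversion. The toy network and input shape are made up; only torch.jit.trace and torch.jit.script are standard PyTorch APIs, and the x2paddle entry point itself is not shown.

```
# Illustrative only: capturing a toy module for "trace" vs "script" conversion.
import torch

class ToyNet(torch.nn.Module):   # hypothetical example model
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(8, 4)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = ToyNet().eval()
example_input = torch.rand(1, 8)

traced = torch.jit.trace(model, example_input)   # records the ops run on the example input
scripted = torch.jit.script(model)               # compiles from source, keeps control flow
```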
## Contributing code
x2paddle/__init__.py

-__version__ = "1.0.2"
+__version__ = "1.1.0"

from .core.program import PaddleGraph
x2paddle/convert.py

@@ -41,7 +41,6 @@ def arg_parser():
    parser.add_argument(
        "--save_dir",
        "-s",
        required=True,
        type=_text_type,
        default=None,
        help="path to save translated model")

@@ -221,6 +220,8 @@ def main():
            x2paddle.__version__))
        return

    assert args.save_dir is not None, "--save_dir is not defined"
    try:
        import platform
        v0, v1, v2 = platform.python_version().split('.')
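A minimal, self-contained sketch of the guard added in main() above: require --save_dir, then read the interpreter version via platform. The Args stand-in and the printed message are illustrative only, not part of convert.py.

```
# Sketch only: mimics the added guard. Args is a stand-in for the argparse
# namespace that convert.py builds; the print is illustrative.
import platform

class Args:
    save_dir = "pd_model"

args = Args()
assert args.save_dir is not None, "--save_dir is not defined"

v0, v1, v2 = platform.python_version().split('.')
print("running under Python {}.{}.{}".format(v0, v1, v2))
```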
x2paddle/op_mapper/dygraph/tf2paddle/tf_op_mapper.py

This diff is collapsed in the page view and is not shown.
x2paddle/op_mapper/static/tf2paddle/tf_op_mapper.py

@@ -60,8 +60,8 @@ class TFOpMapper(OpMapper):
        'swish_f32': ['paddle.nn.functional.swish'],
        'Tanh': ['paddle.tanh'],
        'Softplus': ['paddle.nn.functional.softplus'],
        'LeakyRelu': ['paddle.nn.functional.leaky_relu', dict(alpha='negative_slope')],
        'Floor': ['paddle.floor'],
        'Erf': ['paddle.erf'],
        'Square': ['paddle.square']
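A small illustrative sketch of how a direct-mapping table like the one above can be consumed. The helper name map_directly and the sample attributes are hypothetical, not X2Paddle's actual code; only the table format shown in the hunk (a Paddle API name plus an optional TF-to-Paddle attribute rename dict, e.g. alpha to negative_slope for LeakyRelu) is taken from the diff.

```
# Hypothetical helper, for illustration only: consume a direct-mapping table
# of the form {tf_op: [paddle_api, optional {tf_attr: paddle_attr}]}.
directly_map_ops = {
    'LeakyRelu': ['paddle.nn.functional.leaky_relu', dict(alpha='negative_slope')],
    'Floor': ['paddle.floor'],
}

def map_directly(op_type, tf_attrs):
    """Return the Paddle kernel name and the renamed keyword attributes."""
    entry = directly_map_ops[op_type]
    kernel = entry[0]
    renames = entry[1] if len(entry) > 1 else {}
    paddle_attrs = {renames.get(name, name): value for name, value in tf_attrs.items()}
    return kernel, paddle_attrs

print(map_directly('LeakyRelu', {'alpha': 0.2}))
# ('paddle.nn.functional.leaky_relu', {'negative_slope': 0.2})
```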
@@ -95,7 +95,8 @@ class TFOpMapper(OpMapper):
        if not self.op_checker():
            raise Exception("Model is not supported yet.")
        self.params = dict()
        self.paddle_graph = PaddleGraph(
            parent_layer=None, graph_type="static", source_type="tf")
        self.params_output2id = dict()

        not_placeholder = list()
@@ -150,8 +151,8 @@ class TFOpMapper(OpMapper):
            return True
        else:
            if len(unsupported_ops) > 0:
                print("\n========= {} OPs are not supported yet ===========".
                      format(len(unsupported_ops)))
            for op in unsupported_ops:
                print("========== {} ============".format(op))
            return False
@@ -186,7 +187,10 @@ class TFOpMapper(OpMapper):
            inputs={"x": x.name, "y": y.name},
            outputs=[node.name])
        self.paddle_graph.layers[layer_id].input_shapes = {"x": x_shape, "y": y_shape}

    def bool_map(self, node):
        op_type = self.bool_ops[node.layer_type]
@@ -241,7 +245,8 @@ class TFOpMapper(OpMapper):
        if perm.layer_type == "Const":
            perm = perm.value.tolist()
        else:
            perm = self.decoder.infer_tensor(perm, use_diff_inputs=False).tolist()

        self.paddle_graph.add_layer(
            kernel="paddle.transpose",
@@ -263,10 +268,7 @@ class TFOpMapper(OpMapper):
            attr["fill_value"] = input_value.value

        self.paddle_graph.add_layer(
            "paddle.full", inputs=inputs, outputs=[node.name], **attr)

        if dims.layer_type != "Const":
            self.paddle_graph.add_layer(
                "paddle.reshape",
@@ -333,9 +335,7 @@ class TFOpMapper(OpMapper):
        if len(node.layer.input) == 1:
            cond = self.graph.get_input_node(node, 0)
            self.paddle_graph.add_layer(
                "paddle.nonzero", inputs={"x": cond.name}, outputs=[node.name])
        else:
            cond = self.graph.get_input_node(node, 0)
            x = self.graph.get_input_node(node, 1)
@@ -409,7 +409,8 @@ class TFOpMapper(OpMapper):
            kernel_value = kernel.value
            kernel_weight_name = kernel.name.replace('/', '_')
        else:
            kernel_value = self.decoder.infer_tensor(kernel, use_diff_inputs=False)
            if kernel.layer_type == 'Split':
                kernel_weight_name = "{}_{}_kernel".format(node.name, kernel.name)
@@ -447,7 +448,8 @@ class TFOpMapper(OpMapper):
        self.paddle_graph.add_layer(
            kernel="paddle.nn.functional.conv2d",
            inputs={"x": input_name, "weight": kernel_weight_name},
            outputs=[node.name],
            bias=None,
            stride=strides[2:4],
@@ -479,7 +481,8 @@ class TFOpMapper(OpMapper):
            kernel_value = kernel.value
            kernel_weight_name = kernel.name.replace('/', '_')
        else:
            kernel_value = self.decoder.infer_tensor(kernel, use_diff_inputs=False)
            if kernel.layer_type == 'Split':
                kernel_weight_name = "{}_{}_kernel".format(node.name, kernel.name)
@@ -517,7 +520,8 @@ class TFOpMapper(OpMapper):
        self.paddle_graph.add_layer(
            kernel="paddle.nn.functional.conv3d",
            inputs={"x": input_name, "weight": kernel_weight_name},
            outputs=[node.name],
            bias=None,
            stride=strides[2:5],
@@ -565,11 +569,13 @@ class TFOpMapper(OpMapper):
        self.paddle_graph.add_layer(
            kernel="paddle.nn.functional.batch_norm",
            inputs={"x": input_name,
                    "running_mean": moving_mean.name,
                    "running_var": moving_var.name,
                    "weight": gamma.name,
                    "bias": beta.name},
            outputs=[node.name],
            epsilon=node.get_attr("epsilon"))
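For reference, a minimal sketch of the kind of runtime call the batch-norm mapping above emits. The tensor shapes and values here are made up; only the argument names (x, running_mean, running_var, weight, bias, epsilon) come from the hunk.

```
# Illustrative only: a 4-channel NCHW tensor run through the functional batch
# norm that the mapping above targets. Shapes and values are made up.
import paddle

x = paddle.rand([1, 4, 8, 8])
running_mean = paddle.zeros([4])
running_var = paddle.ones([4])
gamma = paddle.ones([4])
beta = paddle.zeros([4])

y = paddle.nn.functional.batch_norm(
    x, running_mean, running_var, weight=gamma, bias=beta, epsilon=1e-5)
print(y.shape)  # [1, 4, 8, 8]
```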
@@ -647,7 +653,6 @@ class TFOpMapper(OpMapper):
    def MirrorPad(self, node):
        self.Pad(node)

    def PadV2(self, node):
        self.Pad(node)
@@ -676,15 +681,12 @@ class TFOpMapper(OpMapper):
            inputs={"input": input_name},
            outputs=[node.name])
        self.paddle_graph.add_layer(
            kernel="paddle.prod",
            inputs={"x": node.name},
            outputs=[node.name])

    def Ceil(self, node):
        input = self.graph.get_input_node(node, 0)
        self.paddle_graph.add_layer(
            kernel="paddle.ceil",
            inputs={"x": input.name},
            outputs=[node.name])

    def ArgMax(self, node):
@@ -861,7 +863,9 @@ class TFOpMapper(OpMapper):
            axis = 1
        else:
            raise Exception("Unexpected situation happend in Unpack OP")
        layer_outputs = [
            "{}_p{}".format(node.layer_name, i) for i in range(num)
        ]
        if len(layer_outputs) == 1:
            layer_outputs[0] = "[{}]".format(node.layer_name)
        self.paddle_graph.add_layer(
@@ -1064,7 +1068,8 @@ class TFOpMapper(OpMapper):
            kernel="paddle.split",
            inputs={"x": input.name},
            outputs=[
                "{}_p{}".format(node.layer_name, i)
                for i in range(len(size_splits))
            ],
            num_or_sections=size_splits,
            axis=dim)
@@ -1080,15 +1085,8 @@ class TFOpMapper(OpMapper):
            begin = begin.value.tolist()
            attrs['offsets'] = begin
        else:
            # shape = begin.out_shapes[0]
            # reshape_name = gen_name("slice", "reshape")
            # self.paddle_graph.add_layer(
            #     kernel="fluid.layers.reshape",
            #     inputs={"x": begin.name},
            #     outputs=[reshape_name],
            #     shape=shape)
            # inputs['offsets'] = reshape_name
            begin = self.decoder.infer_tensor(begin, use_diff_inputs=False).tolist()
            attrs['offsets'] = begin
        if size.layer_type == "Const":
            size = size.value.tolist()
@@ -1103,19 +1101,18 @@ class TFOpMapper(OpMapper):
                shape=shape)
            inputs['shape'] = reshape_name
        self.paddle_graph.add_layer(
            kernel="paddle.crop", inputs=inputs, outputs=[node.name], **attrs)

    def ResizeNearestNeighbor(self, node):
        input = self.graph.get_input_node(node, 0)
        resize_shape = self.graph.get_input_node(node, 1)
        data_format = "NHWC"
        inputs = {"x": input.name}
        attrs = {
            "align_corners": node.get_attr("align_corners"),
            "mode": string("nearest"),
            "align_mode": 1
        }
        if resize_shape.layer_type == "Const":
            resize_shape = resize_shape.value.tolist()
@@ -1157,9 +1154,11 @@ class TFOpMapper(OpMapper):
        resize_shape = self.graph.get_input_node(node, 1)
        data_format = "NHWC"
        inputs = {"x": input.name}
        attrs = {
            "align_corners": node.get_attr("align_corners"),
            "mode": string("bilinear"),
            "align_mode": 1
        }
        if resize_shape.layer_type == "Const":
            resize_shape = resize_shape.value.tolist()
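Both Resize handlers above build align_corners, mode, and align_mode attributes for a Paddle resize kernel. A hedged sketch of roughly what the emitted call corresponds to at runtime; the input tensor and target size are made up, and the mapper's generated code may differ in detail.

```
# Illustrative only: nearest and bilinear resizes in Paddle with the attribute
# names used above. The input tensor and target size are made up.
import paddle

x = paddle.rand([1, 3, 8, 8])  # NCHW
y_nearest = paddle.nn.functional.interpolate(x, size=[16, 16], mode="nearest")
y_bilinear = paddle.nn.functional.interpolate(
    x, size=[16, 16], mode="bilinear", align_corners=False, align_mode=1)
print(y_nearest.shape, y_bilinear.shape)
```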
@@ -1261,15 +1260,17 @@ class TFOpMapper(OpMapper):
        if out_shape.layer_type == "Const":
            out_shape = out_shape.value.tolist()
        else:
            out_shape = self.decoder.infer_tensor(
                out_shape, out_shape=node.out_shapes[0])

        in_shape = input.out_shapes[0]
        if in_shape.count(-1) > 2:
            in_shape = self.decoder.infer_tensor(input, use_diff_inputs=False).shape
        k_size = kernel.out_shapes[0]
        if k_size.count(-1) > 2:
            k_size = self.decoder.infer_tensor(kernel, use_diff_inputs=False).shape

        pad_mode = node.get_attr("padding").decode()
        strides = node.get_attr("strides")
@@ -1302,8 +1303,11 @@ class TFOpMapper(OpMapper):
        self.paddle_graph.add_layer(
            kernel="paddle.nn.functional.conv2d_transpose",
            inputs={
                "x": input_name,
                "weight": "{}_{}".format(node.name, kernel_name).replace(".", "_")
            },
            outputs=[node.name],
            bias=None,
            stride=strides[2:4],
@@ -1330,12 +1334,10 @@ class TFOpMapper(OpMapper):
            inputs["repeat_times"] = repeat_times.name

        self.paddle_graph.add_layer(
            kernel="paddle.tile", inputs=inputs, outputs=[node.name], **attr)

        if not isinstance(repeat_times, list) and repeat_times.layer_type != "Const":
            self.paddle_graph.add_layer(
                kernel="paddle.reshape",
                inputs={"x": node.name},
@@ -1372,10 +1374,7 @@ class TFOpMapper(OpMapper):
            attr["dtype"] = string(node.dtype)

        self.paddle_graph.add_layer(
            kernel="paddle.arange", inputs=inputs, outputs=[node.name], **attr)

        if start.layer_type != "Const" or \
                limit.layer_type != "Const" or \
                delta.layer_type != "Const":
@@ -1394,14 +1393,20 @@ class TFOpMapper(OpMapper):
        # TODO(syf)
        layer_id = self.paddle_graph.add_layer(
            "paddle.subtract", inputs=inputs, outputs=[node.name])
        self.paddle_graph.layers[layer_id].input_shapes = {"x": x_shape, "y": y_shape}

        inputs = {"x": node.name, "y": node.name}
        x_shape = node.out_shapes[0]
        y_shape = node.out_shapes[0]
        layer_id = self.paddle_graph.add_layer(
            "paddle.multiply", inputs=inputs, outputs=[node.name])
        self.paddle_graph.layers[layer_id].input_shapes = {"x": x_shape, "y": y_shape}

    def OneHot(self, node):
        input = self.graph.get_input_node(node, 0)
@@ -1455,10 +1460,7 @@ class TFOpMapper(OpMapper):
            outputs=[input_name],
            dtype=string("bool"))
        self.paddle_graph.add_layer(
            "paddle.all", inputs={"x": input_name}, outputs=[node.name], **attr)

        node.layer.attr['dtype'].type = 10
@@ -1479,10 +1481,7 @@ class TFOpMapper(OpMapper):
            shape=[-1])
        inputs = {'x': embeddings.name, 'index': index_name}
        self.paddle_graph.add_layer(
            "paddle.gather", inputs=inputs, outputs=[node.name], axis=axis)
        if len(index.out_shapes[0]) != 1:
            out_shape = node.out_shapes[0]
            self.paddle_graph.add_layer(
@@ -1496,9 +1495,7 @@ class TFOpMapper(OpMapper):
        index = self.graph.get_input_node(node, 1)
        inputs = {'x': x.name, 'index': index.name}
        self.paddle_graph.add_layer(
            "paddle.gather_nd", inputs=inputs, outputs=[node.name])

    def ExpandDims(self, node):
        x = self.graph.get_input_node(node, 0, copy=True)
@@ -1513,10 +1510,7 @@ class TFOpMapper(OpMapper):
        else:
            inputs['axis'] = y.name
        self.paddle_graph.add_layer(
            "paddle.unsqueeze", inputs=inputs, outputs=[node.name], **attr)

    def ReverseV2(self, node):
        x = self.graph.get_input_node(node, 0)
@@ -1531,8 +1525,114 @@ class TFOpMapper(OpMapper):
        else:
            inputs['axis'] = axis.name
        self.paddle_graph.add_layer(
            "paddle.flip", inputs=inputs, outputs=[node.name], **attr)

    def BatchToSpaceND(self, node):
        '''
        reshape->transpose->reshape->crop
        '''
        x = self.graph.get_input_node(node, 0)
        block_shape = self.graph.get_input_node(node, 1)
        crops = self.graph.get_input_node(node, 2)
        if block_shape.layer_type == "Const":
            block_shape = block_shape.value.tolist()
        if crops.layer_type == "Const":
            crops = crops.value.tolist()
        data_format = x.get_attr("data_format").decode()
        if data_format == "NHWC":
            n, h, w, c = x.out_shapes[0]
        else:
            n, c, h, w = x.out_shapes[0]
        input_name = x.name

        #reshape
        shape = block_shape + [-1, h, w, c]
        reshape_name = gen_name("batch_to_space", "reshape")
        self.paddle_graph.add_layer(
            kernel="paddle.reshape",
            inputs={"x": input_name},
            outputs=[reshape_name],
            shape=shape)

        #transpose
        perm = [len(block_shape)] + \
            list(j for i in range(len(block_shape)) for j in (i + len(block_shape) + 1, i)) + \
            list(i + 2 * len(block_shape) + 1 for i in range(len(x.out_shapes[0]) - len(block_shape) - 1))
        transpose_name = gen_name("batch_to_space", "transpose")
        self.paddle_graph.add_layer(
            kernel="paddle.transpose",
            inputs={"x": reshape_name},
            outputs=[transpose_name],
            perm=perm)

        #reshape
        shape = [-1] + list(i * j for i, j in zip(block_shape, x.out_shapes[0][1:])) + \
            x.out_shapes[0][1 + len(block_shape):]
        reshape_name = gen_name("batch_to_space", "reshape")
        self.paddle_graph.add_layer(
            kernel="paddle.reshape",
            inputs={"x": transpose_name},
            outputs=[reshape_name],
            shape=shape)

        #crop
        attrs = {}
        crop_shape = shape
        crop_offsets = [0] * len(shape)
        for i in range(len(crops)):
            crop_shape[i + 1] = crop_shape[i + 1] - crops[i][0] - crops[i][1]
            crop_offsets[i + 1] = crops[i][0]
        attrs['shape'] = crop_shape
        attrs['offsets'] = crop_offsets
        self.paddle_graph.add_layer(
            kernel="paddle.crop",
            inputs={"x": reshape_name},
            outputs=[node.name],
            **attrs)

    def SpaceToBatchND(self, node):
        '''
        zero-pad->reshape->transpose->reshape
        '''
        x = self.graph.get_input_node(node, 0)
        block_shape = self.graph.get_input_node(node, 1)
        paddings = self.graph.get_input_node(node, 2)
        if block_shape.layer_type == "Const":
            block_shape = block_shape.value.tolist()
        if paddings.layer_type == "Const":
            paddings = paddings.value.flatten().tolist()
        input_name = x.name

        #zero-pad
        constant_values = 0
        pad_name = gen_name("space_to_batch", "pad")
        paddings = [0, 0] + paddings + [0, 0]
        self.paddle_graph.add_layer(
            kernel="paddle.nn.functional.pad",
            inputs={"x": input_name},
            outputs=[pad_name],
            pad=paddings,
            value=constant_values)

        #reshape
        n, h, w, c = x.out_shapes[0]
        h = h + paddings[2] + paddings[3]
        w = w + paddings[4] + paddings[5]
        shape = [n, h // block_shape[0], block_shape[0], w // block_shape[1], block_shape[1], c]
        reshape_name = gen_name("space_to_batch", "reshape")
        self.paddle_graph.add_layer(
            kernel="paddle.reshape",
            inputs={"x": pad_name},
            outputs=[reshape_name],
            shape=shape)

        #transpose
        transpose_name = gen_name("space_to_batch", "transpose")
        self.paddle_graph.add_layer(
            kernel="paddle.transpose",
            inputs={"x": reshape_name},
            outputs=[transpose_name],
            perm=[2, 4, 0, 1, 3, 5])

        #reshape
        shape = [-1, h // block_shape[0], w // block_shape[1], c]
        self.paddle_graph.add_layer(
            kernel="paddle.reshape",
            inputs={"x": transpose_name},
            outputs=[node.name],
            shape=shape)
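The two handlers above decompose TensorFlow's BatchToSpaceND and SpaceToBatchND into pad/reshape/transpose/reshape/crop sequences. A small NumPy sketch that checks the two decompositions are inverses of each other for a 4-D NHWC tensor; the block size and shapes are made up, and zero paddings/crops are assumed so the crop step can be omitted.

```
# NumPy check that the reshape/transpose decompositions used above round-trip.
# Assumes a 2-D block_shape, NHWC layout, and zero paddings/crops.
import numpy as np

def space_to_batch(x, block_shape):
    n, h, w, c = x.shape
    b0, b1 = block_shape
    y = x.reshape(n, h // b0, b0, w // b1, b1, c)      # reshape
    y = y.transpose(2, 4, 0, 1, 3, 5)                  # transpose
    return y.reshape(-1, h // b0, w // b1, c)          # reshape

def batch_to_space(x, block_shape, out_hwc):
    h, w, c = out_hwc
    b0, b1 = block_shape
    y = x.reshape(b0, b1, -1, h // b0, w // b1, c)     # reshape
    y = y.transpose(2, 3, 0, 4, 1, 5)                  # transpose
    return y.reshape(-1, h, w, c)                      # reshape (no crop needed here)

x = np.arange(2 * 4 * 6 * 3).reshape(2, 4, 6, 3).astype("float32")
block = [2, 3]
packed = space_to_batch(x, block)                      # shape (12, 2, 2, 3)
restored = batch_to_space(packed, block, out_hwc=(4, 6, 3))
assert packed.shape == (2 * 2 * 3, 2, 2, 3)
assert np.array_equal(restored, x)
print("round trip OK:", restored.shape)
```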