PaddlePaddle / X2Paddle
Commit ce6ffee2 (unverified)
Authored Sep 17, 2020 by SunAhong1993; committed by GitHub on Sep 17, 2020

Merge pull request #7 from PaddlePaddle/develop

me

Parents: bb7cd6b9, 09d35587
Showing 17 changed files with 10090 additions and 13500 deletions
- README.md (+2 −2)
- op_list.md (+19 −15)
- x2paddle/__init__.py (+1 −1)
- x2paddle/convert.py (+9 −8)
- x2paddle/decoder/caffe.proto (+1947 −0)
- x2paddle/decoder/caffe_pb2.py (+7684 −13342)
- x2paddle/decoder/onnx_shape_inference.py (+3 −3)
- x2paddle/op_mapper/caffe_custom_layer/normalize.py (+1 −1)
- x2paddle/op_mapper/caffe_op_mapper.py (+1 −1)
- x2paddle/op_mapper/onnx2paddle/opset9/opset.py (+221 −76)
- x2paddle/op_mapper/paddle2onnx/opset11/opset.py (+12 −20)
- x2paddle/op_mapper/paddle2onnx/opset11/paddle_custom_layer/multiclass_nms.py (+4 −2)
- x2paddle/op_mapper/paddle2onnx/opset9/opset.py (+141 −14)
- x2paddle/op_mapper/paddle2onnx/opset9/paddle_custom_layer/multiclass_nms.py (+4 −2)
- x2paddle/op_mapper/tf_op_mapper_nhwc.py (+30 −8)
- x2paddle/optimizer/tf_optimizer.py (+3 −0)
- x2paddle_model_zoo.md (+8 −5)
README.md

```diff
@@ -15,7 +15,7 @@ paddlepaddle >= 1.8.0
 **Install the following dependencies as needed**
 tensorflow : tensorflow == 1.14.0
 caffe : none
-onnx : onnx == 1.6.0
+onnx : onnx >= 1.6.0

 ## Installation
 ### Installation method 1 (recommended)
@@ -58,7 +58,7 @@ x2paddle --framework=paddle2onnx --model=paddle_infer_model_dir --save_dir=onnx_
 |--save_dir | directory in which the converted model is saved |
 |--model | when framework is tensorflow/onnx, path to the TensorFlow pb model file or the ONNX model |
 |--caffe_proto | **[optional]** path to the caffe_pb2.py compiled from caffe.proto; used when custom Layers exist; default None |
-|--without_data_format_optimization | **[optional]** For TensorFlow, when this flag is given, disable the NHWC->NCHW optimization, see [FAQ Q2](FAQ.md) |
+|--without_data_format_optimization | **[optional]** For TensorFlow, when this parameter is set to False, enable the NHWC->NCHW optimization, see [FAQ Q2](FAQ.md); default True |
 |--define_input_shape | **[optional]** For TensorFlow, when given, require the user to supply the shape of every Placeholder, see [FAQ Q2](FAQ.md) |
 |--params_merge | **[optional]** when given, all model parameters of the inference_model are merged into a single file __params__ after conversion |
 |--onnx_opset | **[optional]** when framework is paddle2onnx, selects the ONNX OpSet version to convert to; 9, 10 and 11 are currently supported; default 10 |
```
op_list.md

```diff
 # List of OPs supported by X2Paddle
-> X2Paddle currently supports 50+ TensorFlow OPs and 30+ Caffe Layers, covering most operations commonly used in CV classification models. The list below gives every OP X2Paddle currently supports.
+> X2Paddle currently supports 70+ TensorFlow OPs and 30+ Caffe Layers, covering most operations commonly used in CV classification models. The list below gives every OP X2Paddle currently supports.
 **Note:** Some OPs are not yet supported. If you hit an unsupported OP during conversion, you can add it yourself or report it to us. Feel free to tell us via an [issue](https://github.com/PaddlePaddle/X2Paddle/issues/new) (model name, code, or how to obtain the model) and we will follow up promptly :)
@@ -21,6 +21,10 @@
 | 45 | Softmax | 46 | Range | 47 | ConcatV2 | 48 | MirrorPad |
 | 49 | Identity | 50 | GreaterEqual | 51 | StopGradient | 52 | Minimum |
 | 53 | RadnomUniform | 54 | Fill | 55 | Floor | 56 | DepthToSpace |
+| 57 | Sqrt | 58 | Softplus | 59 | Erf | 60 | AddV2 |
+| 61 | LessEqual | 62 | BatchMatMul | 63 | BatchMatMulV2 | 64 | ExpandDims |
+| 65 | BatchToSpaceND | 66 | SpaceToBatchND | 67 | OneHot | 68 | Pow |
+| 69 | All | 70 | GatherV2 | 71 | IteratorV2 | | |
 ## Caffe
```
x2paddle/__init__.py

```diff
-__version__ = "0.8.1"
+__version__ = "0.8.4"

 from .core.program import PaddleProgram
```
x2paddle/convert.py

```diff
@@ -66,8 +66,8 @@ def arg_parser():
     parser.add_argument(
         "--without_data_format_optimization",
         "-wo",
-        action="store_true",
-        default=False,
+        type=_text_type,
+        default="True",
         help="tf model conversion without data format optimization")
     parser.add_argument(
         "--define_input_shape",
@@ -93,7 +93,7 @@ def arg_parser():
 def tf2paddle(model_path,
               save_dir,
-              without_data_format_optimization=False,
+              without_data_format_optimization,
               define_input_shape=False,
               params_merge=False):
     # check tensorflow installation and version
@@ -170,8 +170,8 @@ def onnx2paddle(model_path, save_dir, params_merge=False):
     try:
         import onnx
         version = onnx.version.version
-        if version != '1.6.0':
-            print("[ERROR] onnx==1.6.0 is required")
+        if version < '1.6.0':
+            print("[ERROR] onnx>=1.6.0 is required")
             return
     except:
         print("[ERROR] onnx is not installed, use \"pip install onnx==1.6.0\".")
@@ -240,11 +240,12 @@ def main():
     if args.framework == "tensorflow":
         assert args.model is not None, "--model should be defined while translating tensorflow model"
-        without_data_format_optimization = False
+        assert args.without_data_format_optimization in [
+            "True", "False"
+        ], "--the param without_data_format_optimization should be defined True or False"
         define_input_shape = False
         params_merge = False
-        if args.without_data_format_optimization:
-            without_data_format_optimization = True
+        without_data_format_optimization = True if args.without_data_format_optimization == "True" else False
         if args.define_input_shape:
             define_input_shape = True
         if args.params_merge:
```
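A side note on the relaxed version gate in `onnx2paddle`: `version < '1.6.0'` compares version *strings*, which is lexicographic rather than numeric. That is correct for the 1.x releases current at the time, but a quick sketch shows the edge case (the `version_tuple` helper below is illustrative only, not part of this commit):

```python
# Lexicographic comparison matches numeric order for these versions:
print('1.5.0' < '1.6.0')   # True: correctly treated as too old

# ...but a hypothetical '1.10.0' would also be rejected, since the
# character '1' sorts before '6':
print('1.10.0' < '1.6.0')  # True

# An illustrative numeric comparison (assumption: plain dotted-integer
# version strings):
def version_tuple(v):
    return tuple(int(p) for p in v.split('.'))

print(version_tuple('1.10.0') >= version_tuple('1.6.0'))  # True
```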
x2paddle/decoder/caffe.proto (new file, mode 100644)

This diff is collapsed.
x2paddle/decoder/caffe_pb2.py

This diff is collapsed.
x2paddle/decoder/onnx_shape_inference.py

```diff
@@ -267,9 +267,9 @@ class SymbolicShapeInference:
         if pending_nodes and self.verbose_ > 0:
             print('SymbolicShapeInference: orphaned nodes discarded: ')
-            print(
-                *[n.op_type + ': ' + n.output[0] for n in pending_nodes],
-                sep='\n')
+            for n in pending_nodes:
+                print(n.op_type + ': ' + n.output[0])
         if input_shapes is not None:
             for input_name, shape in input_shapes.items():
                 for idx in range(len(self.out_mp_.graph.input)):
```
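Both forms of the logging change above emit one discarded node per line; the loop just avoids building the whole argument list up front. A minimal sketch with stand-in objects (the `Node` class here is hypothetical, mimicking only the `op_type`/`output` attributes the print statements touch):

```python
# Hypothetical stand-in for the ONNX NodeProto objects iterated above.
class Node:
    def __init__(self, op_type, output):
        self.op_type = op_type
        self.output = output

pending_nodes = [Node('Shape', ['x_0']), Node('Cast', ['y_0'])]

# Old form: build the full list, then print with '\n' as separator.
print(*[n.op_type + ': ' + n.output[0] for n in pending_nodes], sep='\n')

# New form: one print per node, same text.
for n in pending_nodes:
    print(n.op_type + ': ' + n.output[0])
```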
x2paddle/op_mapper/caffe_custom_layer/normalize.py

```diff
@@ -17,7 +17,7 @@ def normalize_layer(inputs,
     scale_param = fluid.layers.create_parameter(
         shape=[1] if channel_shared else [1, 1, 1, input_shape[0][1]],
         dtype=input.dtype,
-        attr=name + '_scale')
+        attr=fluid.ParamAttr(name=name + '_scale'))
     scale_param = fluid.layers.reshape(x=scale_param, \
         shape=[1] if channel_shared else [input_shape[0][1]])
     out = fluid.layers.elementwise_mul(
```
x2paddle/op_mapper/caffe_op_mapper.py

```diff
@@ -226,7 +226,7 @@ class CaffeOpMapper(OpMapper):
             data.append(
                 np.zeros([output_c, input_c, kernel[0], kernel[1]]).astype(
                     'float32'))
-            data.append(np.zeros([output_c, ])).astype('float32')
+            data.append(np.zeros([output_c, ]).astype('float32'))
         else:
             data = self.adjust_parameters(node)
         self.weights[node.layer_name + '_weights'] = data[0]
```
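The fix above moves one closing parenthesis: `list.append` returns `None`, so calling `.astype` on its result raised at runtime. Reproduced in isolation:

```python
import numpy as np

output_c = 4
data = []

# Fixed form: astype() runs on the array before it is appended.
data.append(np.zeros([output_c, ]).astype('float32'))
print(data[0].dtype)  # float32

# Pre-fix form: astype() is called on the return value of list.append(),
# which is None, so it raises AttributeError (the zeros array is still
# appended, but with the default float64 dtype, before the call on None
# fails).
try:
    data.append(np.zeros([output_c, ])).astype('float32')
except AttributeError:
    print('AttributeError')
```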
x2paddle/op_mapper/onnx2paddle/opset9/opset.py

```diff
@@ -43,6 +43,21 @@ def _const_weight_or_none(node, necessary=False):
     return None

+def _is_static_shape(shape):
+    negtive_dims = 0
+    error_dims = 0
+    for dim in shape:
+        if dim < 0:
+            negtive_dims += 1
+        if dim < -1:
+            error_dims += 1
+    if negtive_dims > 1:
+        return False
+    if error_dims > 0:
+        return False
+    return True
+
 def _get_same_padding(in_size, kernel_size, stride):
     new_size = int(math.ceil(in_size * 1.0 / stride))
     pad_size = (new_size - 1) * stride + kernel_size - in_size
```
```diff
@@ -231,42 +246,9 @@ class OpSet9():
         val_x = self.graph.get_input_node(node, idx=0, copy=True)
         val_y = self.graph.get_input_node(node, idx=1, copy=True)
-        val_y_shape = val_y.out_shapes[0]
-        val_x_shape = val_x.out_shapes[0]
-        if len(val_x_shape) < len(val_y_shape):
-            val_x, val_y = val_y, val_x
-            val_y_shape, val_x_shape = val_x_shape, val_y_shape
-        str_y_shape = ','.join(str(e) for e in val_y_shape)
-        str_x_shape = ','.join(str(e) for e in val_x_shape)
-        slice_idx = 0
-        if str_y_shape not in str_x_shape:
-            for dim in val_y_shape:
-                if dim == 1:
-                    slice_idx += 1
-                else:
-                    break
-        attr = {"name": string(node.layer_name)}
-        if slice_idx < len(val_y_shape) and slice_idx > 0:
-            val_y_reshaped = val_y_shape[slice_idx:]
-            var_y_reshaped = val_y.layer_name + '_reshaped'
-            attr_reshaped = {
-                'shape': val_y_reshaped,
-                'name': string(var_y_reshaped)
-            }
-            node.fluid_code.add_layer(
-                'reshape',
-                inputs=val_y,
-                output=var_y_reshaped,
-                param_attr=attr_reshaped)
-            inputs = {'x': val_x, 'y': var_y_reshaped}
-            node.fluid_code.add_layer(
-                op_type, inputs=inputs, output=node, param_attr=attr)
-        else:
-            inputs = {'x': val_x, 'y': val_y}
-            node.fluid_code.add_layer(
-                op_type, inputs=inputs, output=node, param_attr=attr)
+        inputs = {'x': val_x, 'y': val_y}
+        node.fluid_code.add_layer(
+            op_type, inputs=inputs, output=node, param_attr=None)

     @print_mapping_info
     def place_holder(self, node):
```
```diff
@@ -429,6 +411,7 @@ class OpSet9():
         output_shape = node.out_shapes[0]
         assume_pad2d = False
         attr = {}
+        paddings = []
         if len(pads) == 4:
             assume_pad2d |= mode != 'constant'
             if data_shape:
```
```diff
@@ -478,6 +461,19 @@ class OpSet9():
                 inputs=val_x,
                 output=node,
                 param_attr={'shape': [1]})
         else:
-            node.fluid_code.add_layer(
-                'unsqueeze', inputs=val_x, output=node, param_attr=attr)
+            if str(val_x.dtype) == 'bool':
+                val_x_cast = val_x.layer_name + '_cast'
+                node.fluid_code.add_layer(
+                    'cast',
+                    inputs=val_x,
+                    output=val_x_cast,
+                    param_attr={'dtype': string('int64')})
+                node.fluid_code.add_layer(
+                    'unsqueeze',
+                    inputs=val_x_cast,
+                    output=node,
+                    param_attr=attr)
+            else:
+                node.fluid_code.add_layer(
+                    'unsqueeze', inputs=val_x, output=node, param_attr=attr)
```
```diff
@@ -492,16 +488,6 @@ class OpSet9():
             node.fluid_code.add_layer(
                 'hard_shrink', inputs=val_x, output=node, param_attr=attr)

-    def Greater(self, node):
-        val_x = self.graph.get_input_node(node, idx=0, copy=True)
-        val_y = self.graph.get_input_node(node, idx=1, copy=True)
-        node.fluid_code.add_layer(
-            'greater_than',
-            inputs={'x': val_x,
-                    'y': val_y},
-            output=node,
-            param_attr=None)
-
     @print_mapping_info
     def Constant(self, node):
         val_output = self.graph.get_node(node.layer.output[0], copy=True)
```
```diff
@@ -571,25 +557,26 @@ class OpSet9():
     def Expand(self, node):
         val_x = self.graph.get_input_node(node, idx=0, copy=True)
         val_shape = self.graph.get_input_node(node, idx=1, copy=True)
         if len(val_shape.outputs) == 1:
             self.omit_nodes.append(val_shape.layer_name)
-        val_y = self.graph.get_node(node.layer.output[0], copy=True)
-        out_shape = node.out_shapes[0]
         val_x_dtype = val_x.dtype
         name_ones = node.layer_name + '_ones'
-        attr_ones = {'shape': out_shape, 'dtype': string(val_x_dtype)}
+        attr_ones = {
+            'shape': val_shape.layer_name,
+            'dtype': string(val_x_dtype),
+            'value': 1
+        }
         node.fluid_code.add_layer(
-            'ones', inputs=None, output=name_ones, param_attr=attr_ones)
+            'fill_constant', inputs=None, output=name_ones, param_attr=attr_ones)
         inputs = {'x': name_ones, 'y': val_x}
-        attr = {'name': string(node.layer_name)}
         node.fluid_code.add_layer(
             'elementwise_mul',
             inputs=inputs,
             output=node.layer_name,
-            param_attr=attr)
+            param_attr=None)

     @print_mapping_info
     def Gather(self, node):
```
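The rewritten `Expand` mapper leans on broadcasting: `fill_constant` builds a ones tensor of the target shape, and `elementwise_mul` broadcasts the input against it, which reproduces ONNX `Expand` semantics. A numpy sketch of the same trick (numpy here stands in for the generated Paddle ops):

```python
import numpy as np

x = np.array([[1.], [2.], [3.]], dtype=np.float32)  # shape (3, 1)
target_shape = (3, 4)

# fill_constant(shape=target_shape, value=1) ...
ones = np.full(target_shape, 1, dtype=x.dtype)

# ... then elementwise_mul broadcasts x up to the target shape.
expanded = ones * x
print(expanded.shape)  # (3, 4)
```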
```diff
@@ -600,6 +587,29 @@ class OpSet9():
         #assert len(
         #    indices_shape) <= 2, "Gather op don't support dim of indice >2 "
         if axis == 0 and len(indices_shape) <= 1:
+            if len(val_x.out_shapes[0]) <= 1:
+                node.fluid_code.add_layer(
+                    'gather',
+                    inputs={'input': val_x,
+                            'index': indices},
+                    output=node,
+                    param_attr=None)
+            elif len(val_x.out_shapes[0]) > 1:
+                if len(indices_shape) == 0:
+                    gather_ = node.layer_name + '_1'
+                    node.fluid_code.add_layer(
+                        'gather',
+                        inputs={'input': val_x,
+                                'index': indices},
+                        output=gather_,
+                        param_attr=None)
+                    node.fluid_code.add_layer(
+                        'squeeze',
+                        inputs={'input': gather_,
+                                'axes': [0]},
+                        output=node,
+                        param_attr=None)
+                else:
                     node.fluid_code.add_layer(
                         'gather',
                         inputs={'input': val_x,
```
```diff
@@ -624,12 +634,25 @@ class OpSet9():
                 param_attr=None)
             node.fluid_code.add_layer(
                 'transpose', inputs=node, output=node, param_attr=attr_trans)
+            if len(indices_shape) < 1:
+                node.fluid_code.add_layer(
+                    'squeeze',
+                    inputs={'input': node,
+                            'axes': [axis]},
+                    output=node,
+                    param_attr=None)
         elif axis == 0 and len(indices_shape) > 1:
             if val_x.out_shapes[0] is not None and isinstance(
                     val_x, ONNXGraphDataNode):
+                indices_cast = indices.layer_name + '_cast'
                 node.fluid_code.add_layer(
-                    'embedding',
+                    'cast',
                     inputs=indices,
+                    output=indices_cast,
+                    param_attr={'dtype': string('int64')})
+                node.fluid_code.add_layer(
+                    'embedding',
+                    inputs=indices_cast,
                     output=node,
                     use_fluid=True,
                     param_attr={
```
```diff
@@ -638,7 +661,6 @@ class OpSet9():
                     })
             else:
                 from functools import reduce
-                #indices_shape = [1,7]
                 reshape_shape = reduce(lambda x, y: x * y, indices_shape)
                 indices_reshape = indices.layer_name + '_shape'
                 node.fluid_code.add_layer(
```
```diff
@@ -678,7 +700,7 @@ class OpSet9():
             perm = list(range(len(val_x.out_shapes[0])))
             perm = [axis] + perm[:axis] + perm[axis + 1:]
             attr_trans = {'perm': perm}
-            name_trans = val_x.layer_name + '_trans'
+            name_trans = val_x.layer_name + '_transpose'
             node.fluid_code.add_layer(
                 'transpose',
                 inputs=val_x,
```
```diff
@@ -690,8 +712,12 @@ class OpSet9():
                         'index': indices_reshape},
                 output=node,
                 param_attr=None)
+            input_transpose = node.layer_name + '_transpose'
             node.fluid_code.add_layer(
-                'transpose', inputs=node, output=node, param_attr=attr_trans)
+                'transpose',
+                inputs=node,
+                output=input_transpose,
+                param_attr=attr_trans)
             val_x_shape = val_x.out_shapes[0]
             reshaped_shape = []
             for i in perm:
```
```diff
@@ -700,10 +726,90 @@ class OpSet9():
                 reshaped_shape.append(i)
             node.fluid_code.add_layer(
                 'reshape',
-                inputs=node,
+                inputs=input_transpose,
                 output=node,
                 param_attr={'shape': reshaped_shape})

+    @print_mapping_info
+    def ScatterND(self, node):
+        val_x = self.graph.get_input_node(node, idx=0, copy=True)
+        indices = self.graph.get_input_node(node, idx=1, copy=True)
+        updates = self.graph.get_input_node(node, idx=2, copy=True)
+        if len(indices.out_shapes[0]) == 1:
+            node.fluid_code.add_layer(
+                'scatter',
+                inputs={'input': val_x,
+                        'index': indices,
+                        'updates': updates},
+                output=node,
+                param_attr=None)
+        else:
+            input_inner_indices = node.layer_name + '_input_inner_indices'
+            node.fluid_code.add_layer(
+                'scatter_nd',
+                inputs={
+                    'shape': val_x.out_shapes[0],
+                    'index': indices,
+                    'updates': updates
+                },
+                output=input_inner_indices,
+                param_attr=None)
+
+            constant_minus_one = node.layer_name + '_constant_minus_one'
+            node.fluid_code.add_layer(
+                'fill_constant',
+                inputs=None,
+                output=constant_minus_one,
+                param_attr={
+                    'shape': updates.out_shapes[0],
+                    'dtype': string(updates.dtype),
+                    'value': -1
+                })
+
+            indices_mask = node.layer_name + '_indices_mask'
+            node.fluid_code.add_layer(
+                'scatter_nd',
+                inputs={
+                    'shape': val_x.out_shapes[0],
+                    'index': indices,
+                    'updates': constant_minus_one
+                },
+                output=indices_mask,
+                param_attr=None)
+
+            constant_1 = node.layer_name + '_constant_1'
+            node.fluid_code.add_layer(
+                'fill_constant',
+                inputs=None,
+                output=constant_1,
+                param_attr={
+                    'shape': val_x.out_shapes[0],
+                    'dtype': string(val_x.dtype),
+                    'value': 1
+                })
+
+            input_out_indices_mask = node.layer_name + '_input_out_indices_mask'
+            node.fluid_code.add_layer(
+                "elementwise_add",
+                inputs={"x": indices_mask,
+                        "y": constant_1},
+                output=input_out_indices_mask,
+                param_attr=None)
+
+            input_out_indices = node.layer_name + '_input_out_indices'
+            node.fluid_code.add_layer(
+                "elementwise_mul",
+                inputs={"x": val_x,
+                        "y": input_out_indices_mask},
+                output=input_out_indices,
+                param_attr=None)
+
+            node.fluid_code.add_layer(
+                "elementwise_add",
+                inputs={"x": input_inner_indices,
+                        "y": input_out_indices},
+                output=node,
+                param_attr=None)
+
     @print_mapping_info
     def Range(self, node):
         val_start = self.graph.get_input_node(node, idx=0, copy=True)
```
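The `ScatterND` composition above can be read as: scatter the updates into a zero tensor (`inner`), build a mask that is 0 at scattered positions and 1 elsewhere (scatter -1, then add 1), and combine `inner + x * mask`. A numpy sketch of the same arithmetic (the local `scatter_nd` is a simplified stand-in for the Paddle layer, assuming non-overlapping indices):

```python
import numpy as np

def scatter_nd(shape, index, updates):
    # Simplified stand-in: zeros of `shape` with `updates` added at the
    # positions listed in `index`.
    out = np.zeros(shape, dtype=updates.dtype)
    for idx, upd in zip(index, updates):
        out[tuple(idx)] += upd
    return out

x = np.arange(6, dtype=np.float32).reshape(2, 3)   # [[0,1,2],[3,4,5]]
index = np.array([[0, 1], [1, 2]])                 # positions to update
updates = np.array([10., 20.], dtype=np.float32)

inner = scatter_nd(x.shape, index, updates)        # updates at targets, 0 elsewhere
mask = scatter_nd(x.shape, index,
                  np.full(2, -1., np.float32)) + 1 # 0 at targets, 1 elsewhere
result = inner + x * mask
print(result)  # [[0, 10, 2], [3, 4, 20]]
```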
```diff
@@ -754,17 +860,21 @@ class OpSet9():
             }
         else:
             if starts.dtype != 'int32':
+                starts_cast = starts.layer_name + '_cast'
                 node.fluid_code.add_layer(
                     'cast',
                     inputs=starts,
-                    output=starts,
+                    output=starts_cast,
                     param_attr={'dtype': string('int32')})
+                attr['starts'] = starts_cast
             if ends.dtype != 'int32':
+                ends_cast = ends.layer_name + '_cast'
                 node.fluid_code.add_layer(
                     'cast',
                     inputs=ends,
-                    output=ends,
+                    output=ends_cast,
                     param_attr={'dtype': string('int32')})
+                attr['ends'] = ends_cast
         else:
             starts = node.get_attr('starts')
             ends = node.get_attr('ends')
```
```diff
@@ -789,8 +899,6 @@ class OpSet9():
                 'this is not supported')
         if len(value) == 1:
             value = value[0]
-        if dtype.name == 'int64':
-            dtype = 'int32'
         attr = {
             'shape': val_shape.layer_name,
             'dtype': string(dtype),
```
```diff
@@ -831,6 +939,14 @@ class OpSet9():
                 inputs={'x': val_x},
                 output=node,
                 param_attr={'shape': shape_value.tolist()})
+        elif len(node.out_shapes[0]) > 0 and _is_static_shape(
+                node.out_shapes[0]):
+            node.fluid_code.add_layer(
+                'reshape',
+                inputs={'x': val_x,
+                        'shape': node.out_shapes[0]},
+                output=node,
+                param_attr=attr)
         elif val_shape.dtype == 'int64':
             val_shape_cast = val_shape.layer_name + '_cast'
             node.fluid_code.add_layer(
```
```diff
@@ -882,6 +998,11 @@ class OpSet9():
             node.fluid_code.add_layer(
                 'cast', inputs=val_input, output=node, param_attr=attr)

+    @print_mapping_info
+    def Not(self, node):
+        val_input = self.graph.get_input_node(node, idx=0, copy=True)
+        node.fluid_code.add_layer('logical_not', inputs=val_input, output=node)
+
     @print_mapping_info
     def AveragePool(self, node):
         val_x = self.graph.get_input_node(node, idx=0, copy=True)
```
```diff
@@ -922,12 +1043,16 @@ class OpSet9():
     @print_mapping_info
     def Concat(self, node):
         inputs = []
+        dtypes = set()
         for i in range(len(node.layer.input)):
             ipt = self.graph.get_input_node(node, idx=i, copy=True)
             if isinstance(ipt, str):
                 inputs.append(ipt)
             else:
                 inputs.append(ipt.layer_name)
+                dtypes.add(ipt.dtype)
+        if len(dtypes) > 1:
+            assert 'Unspported situation happened, please create issue on https://github.com/PaddlePaddle/X2Paddle/issues.'
         axis = node.get_attr('axis')
         attr = {'axis': axis}
         node.fluid_code.add_layer(
```
```diff
@@ -1015,10 +1140,22 @@ class OpSet9():
     def MatMul(self, node):
         val_x = self.graph.get_input_node(node, idx=0, copy=True)
         val_y = self.graph.get_input_node(node, idx=1, copy=True)
+        x_shape = val_x.out_shapes[0]
+        y_shape = val_y.out_shapes[0]
         inputs = {"x": val_x, "y": val_y}
-        attr = {"name": string(node.layer_name)}
-        node.fluid_code.add_layer(
-            "matmul", inputs=inputs, output=node, param_attr=attr)
+        if y_shape[0] == 1 and x_shape[-1] != 1 and x_shape[0] != 1:
+            y_squeeze = val_y.layer_name + '_squeeze'
+            node.fluid_code.add_layer(
+                "squeeze",
+                inputs=val_y,
+                output=y_squeeze,
+                param_attr={'axes': [0]})
+            inputs['y'] = y_squeeze
+            node.fluid_code.add_layer(
+                "matmul", inputs=inputs, output=node, param_attr=None)
+        else:
+            node.fluid_code.add_layer(
+                "matmul", inputs=inputs, output=node, param_attr=None)

     @print_mapping_info
     def BatchNormalization(self, node):
```
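The new `MatMul` branch squeezes a leading unit dimension off `y` so that multiplying a 2-D `x` by a 3-D `y` does not introduce a spurious batch axis. Numpy follows the same broadcasting rules, so it shows the shape difference the branch avoids:

```python
import numpy as np

x = np.ones((3, 4))     # x_shape[0] != 1 and x_shape[-1] != 1
y = np.ones((1, 4, 5))  # y_shape[0] == 1

# Direct multiply: x is broadcast against y's batch dim, giving (1, 3, 5).
print(np.matmul(x, y).shape)                 # (1, 3, 5)

# Squeezing y's leading unit dim first yields the plain 2-D product.
print(np.matmul(x, np.squeeze(y, 0)).shape)  # (3, 5)
```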
```diff
@@ -1154,7 +1291,6 @@ class OpSet9():
                     'y': cast_condition},
             output=mul_val_x,
             param_attr=None)
-
         mul_val_y = val_y.layer_name + '_mul'
         node.fluid_code.add_layer(
             "elementwise_mul",
```
```diff
@@ -1204,6 +1340,15 @@ class OpSet9():
         if repeats is None:
             repeats = val_repeats.layer_name
+
+            if val_repeats.dtype != 'int32':
+                attr = {"dtype": string("int32")}
+                node.fluid_code.add_layer(
+                    "cast",
+                    inputs=repeats,
+                    output="{}.tmp".format(repeats),
+                    param_attr=attr)
+                repeats = "{}.tmp".format(repeats)
         elif isinstance(repeats, int):
             repeats = [repeats]
```
x2paddle/op_mapper/paddle2onnx/opset11/opset.py

```diff
@@ -93,16 +93,13 @@ class OpSet11(OpSet10):
         else:
             coordinate_transformation_mode = 'half_pixel'
+        roi_name = self.get_name(op.type, 'roi')
+        roi_node = self.make_constant_node(
+            roi_name, onnx_pb.TensorProto.FLOAT, [1, 1, 1, 1, 1, 1, 1, 1])
         if ('OutSize' in input_names and len(op.input('OutSize')) > 0) or (
                 'SizeTensor' in input_names and
                 len(op.input('SizeTensor')) > 0):
             node_list = list()
-            roi_node = self.make_constant_node(
-                self.get_name(op.type, 'roi'), onnx_pb.TensorProto.FLOAT,
-                [1, 1, 1, 1, 1, 1, 1, 1])
-            roi_name = self.get_name(op.type, 'roi')
-            roi_node = self.make_constant_node(
-                roi_name, onnx_pb.TensorProto.FLOAT, [1, 1, 1, 1, 1, 1, 1, 1])
             empty_name = self.get_name(op.type, 'empty')
             empty_tensor = helper.make_tensor(
                 empty_name,
@@ -168,7 +165,7 @@ class OpSet11(OpSet10):
         elif 'Scale' in input_names and len(op.input('Scale')) > 0:
             node = helper.make_node(
                 'Resize',
-                inputs=[op.input('X')[0], op.input('Scale')[0]],
+                inputs=[op.input('X')[0], roi_name, op.input('Scale')[0]],
                 outputs=op.output('Out'),
                 mode='linear',
                 coordinate_transformation_mode=coordinate_transformation_mode)
@@ -180,10 +177,6 @@ class OpSet11(OpSet10):
             scale_node = self.make_constant_node(scale_name,
                                                  onnx_pb.TensorProto.FLOAT,
                                                  [1, 1, scale, scale])
-            roi_name = self.get_name(op.type, 'roi')
-            roi_node = self.make_constant_node(
-                roi_name, onnx_pb.TensorProto.FLOAT, [1, 1, 1, 1, 1, 1, 1, 1])
             node = helper.make_node(
                 'Resize',
                 inputs=[op.input('X')[0], roi_name, scale_name],
@@ -194,7 +187,7 @@ class OpSet11(OpSet10):
             return [scale_node, roi_node, node]
         else:
             raise Exception("Unexpected situation happend")
-        return node
+        return [roi_node, node]

     def nearest_interp(self, op, block):
         input_names = op.input_names
@@ -203,18 +196,21 @@ class OpSet11(OpSet10):
         if align_corners:
             coordinate_transformation_mode = 'align_corners'
         else:
-            coordinate_transformation_mode = 'asymmetric'
+            coordinate_transformation_mode = 'half_pixel'
+        roi_name = self.get_name(op.type, 'roi')
+        roi_node = self.make_constant_node(
+            roi_name, onnx_pb.TensorProto.FLOAT, [1, 1, 1, 1, 1, 1, 1, 1])
         if 'OutSize' in input_names and len(op.input('OutSize')) > 0:
             node = helper.make_node(
                 'Resize',
-                inputs=[op.input('X')[0], '', op.input('OutSize')[0]],
+                inputs=[op.input('X')[0], roi_name, op.input('OutSize')[0]],
                 outputs=op.output('Out'),
                 mode='nearest',
                 coordinate_transformation_mode=coordinate_transformation_mode)
         elif 'Scale' in input_names and len(op.input('Scale')) > 0:
             node = helper.make_node(
                 'Resize',
-                inputs=[op.input('X')[0], op.input('Scale')[0]],
```
(
'X'
)[
0
],
roi_name
,
op
.
input
(
'Scale'
)[
0
]],
outputs
=
op
.
output
(
'Out'
),
outputs
=
op
.
output
(
'Out'
),
mode
=
'nearest'
,
mode
=
'nearest'
,
coordinate_transformation_mode
=
coordinate_transformation_mode
)
coordinate_transformation_mode
=
coordinate_transformation_mode
)
...
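The hunk above switches `nearest_interp`'s non-`align_corners` mapping from `'asymmetric'` to `'half_pixel'`. As a sketch of what that changes, the two ONNX Resize `coordinate_transformation_mode` formulas can be written as plain functions (the name `src_coord` is illustrative, not part of the codebase):

```python
def src_coord(x_out, scale, mode):
    # Map an output pixel index back to an input coordinate, following the
    # ONNX Resize coordinate_transformation_mode definitions.
    if mode == "asymmetric":
        return x_out / scale
    if mode == "half_pixel":
        # treats pixels as centered at i + 0.5 rather than at i
        return (x_out + 0.5) / scale - 0.5
    raise ValueError(mode)
```

For a 2x upscale, output index 3 maps to input coordinate 1.5 under `asymmetric` but 1.25 under `half_pixel`, which is why the two modes pick different nearest neighbors near pixel boundaries.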
```diff
@@ -226,10 +222,6 @@ class OpSet11(OpSet10):
                 scale_node = self.make_constant_node(scale_name,
                                                      onnx_pb.TensorProto.FLOAT,
                                                      [1, 1, scale, scale])
-                roi_name = self.get_name(op.type, 'roi')
-                roi_node = self.make_constant_node(roi_name,
-                                                   onnx_pb.TensorProto.FLOAT,
-                                                   [1, 1, 1, 1, 1, 1, 1, 1])
                 node = helper.make_node(
                     'Resize',
                     inputs=[op.input('X')[0], roi_name, scale_name],
@@ -240,7 +232,7 @@ class OpSet11(OpSet10):
                 return [scale_node, roi_node, node]
             else:
                 raise Exception("Unexpected situation happend")
-        return node
+        return [roi_node, node]

     def hard_swish(self, op, block):
         min_name = self.get_name(op.type, 'min')
```
x2paddle/op_mapper/paddle2onnx/opset11/paddle_custom_layer/multiclass_nms.py

```diff
@@ -72,6 +72,8 @@ def multiclass_nms(op, block):
             dims=(),
             vals=[float(attrs['nms_threshold'])]))

+    boxes_num = block.var(outputs['Out'][0]).shape[0]
+    top_k_value = np.int64(boxes_num if attrs['keep_top_k'] == -1 else attrs['keep_top_k'])
     node_keep_top_k = onnx.helper.make_node(
         'Constant',
         inputs=[],
@@ -80,7 +82,7 @@ def multiclass_nms(op, block):
             name=name_keep_top_k[0] + "@const",
             data_type=onnx.TensorProto.INT64,
             dims=(),
-            vals=[np.int64(attrs['keep_top_k'])]))
+            vals=[top_k_value]))
     node_keep_top_k_2D = onnx.helper.make_node(
         'Constant',
@@ -90,7 +92,7 @@ def multiclass_nms(op, block):
             name=name_keep_top_k_2D[0] + "@const",
             data_type=onnx.TensorProto.INT64,
             dims=[1, 1],
-            vals=[np.int64(attrs['keep_top_k'])]))
+            vals=[top_k_value]))
     # the paddle data format is x1,y1,x2,y2
     kwargs = {'center_point_box': 0}
```
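The fix above stops embedding `keep_top_k` into the ONNX `Constant` nodes verbatim: Paddle uses `keep_top_k == -1` to mean "keep all boxes", a sentinel the exported graph cannot interpret, so the diff substitutes the box count from the output variable's shape. The resolution rule, as a minimal sketch (the helper name is hypothetical; the diff wraps the result in `np.int64`):

```python
def resolve_keep_top_k(keep_top_k, boxes_num):
    # Paddle's multiclass_nms uses -1 as "no limit"; the ONNX graph needs a
    # concrete integer, so fall back to the total number of candidate boxes.
    return boxes_num if keep_top_k == -1 else keep_top_k
```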
x2paddle/op_mapper/paddle2onnx/opset9/opset.py

```diff
@@ -174,14 +174,15 @@ class OpSet9(object):
                 inputs=[op.input('X')[0], temp_value],
                 outputs=op.output('Out'))
             return [shape_node, y_node, node]
-        elif len(x_shape) == len(y_shape):
+        elif axis == -1 or axis == (len(x_shape) - 1) or len(x_shape) == len(y_shape):
             node = helper.make_node(
                 'Add',
                 inputs=[op.input('X')[0], op.input('Y')[0]],
                 outputs=op.output('Out'))
             return node
         else:
-            raise Excpetion("Unexpected situation happend in elementwise_add")
+            raise Exception("Unexpected situation happend in elementwise_add")

     def elementwise_sub(self, op, block):
         axis = op.attr('axis')
```
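The widened `elif` above (repeated below for `elementwise_sub` and `elementwise_mul`, along with the `Excpetion` typo fix) encodes when a Paddle elementwise op can become a plain ONNX `Add`/`Sub`/`Mul`: ONNX broadcasts numpy-style from the trailing dimensions, so no extra reshaping is needed when the ranks match or when `axis` already points at `x`'s trailing axis. A sketch of that predicate (the function name is illustrative):

```python
def can_map_directly(axis, x_shape, y_shape):
    # Mirrors the condition in the hunk: equal ranks always broadcast
    # numpy-style, and axis == -1 (or the last axis of x) means y is
    # already aligned with x's trailing dimensions.
    return (axis == -1 or axis == (len(x_shape) - 1)
            or len(x_shape) == len(y_shape))
```

Any other `axis` means `y` broadcasts from a middle dimension, which the converter must handle by reshaping `y` first (the branch above the hunk).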
```diff
@@ -203,14 +204,15 @@ class OpSet9(object):
                 inputs=[op.input('X')[0], temp_value],
                 outputs=op.output('Out'))
             return [shape_node, y_node, node]
-        elif len(x_shape) == len(y_shape):
+        elif axis == -1 or axis == (len(x_shape) - 1) or len(x_shape) == len(y_shape):
             node = helper.make_node(
                 'Sub',
                 inputs=[op.input('X')[0], op.input('Y')[0]],
                 outputs=op.output('Out'))
             return node
         else:
-            raise Excpetion("Unexpected situation happend in elementwise_sub")
+            raise Exception("Unexpected situation happend in elementwise_sub")

     def pool2d(self, op, block):
         pool_type = {
```
```diff
@@ -403,6 +405,22 @@ class OpSet9(object):
             'Sum', inputs=op.input('X'), outputs=op.output('Out'))
         return node

+    def floor(self, op, block):
+        node = helper.make_node(
+            'Floor', inputs=op.input('X'), outputs=op.output('Out'))
+        return node
+
+    def uniform_random_batch_size_like(self, op, block):
+        node = helper.make_node(
+            'RandomUniformLike',
+            inputs=op.input('Input'),
+            outputs=op.output('Out'),
+            high=op.attr('max'),
+            dtype=self.paddle_onnx_dtype_map[op.attr('dtype')],
+            low=op.attr('min'),
+            seed=float(op.attr('seed')), )
+        return node
+
     def depthwise_conv2d(self, op, block):
         return self.conv2d(op, block)
```
```diff
@@ -444,7 +462,7 @@ class OpSet9(object):
         ends = op.attr('ends')
         node = helper.make_node(
             "Slice",
-            inputs=[op.input('Input')[0], starts_name, ends_name, axes_name],
+            inputs=[op.input('Input')[0]],
             outputs=op.output('Out'),
             axes=axes,
             starts=starts,
```
```diff
@@ -565,7 +583,7 @@ class OpSet9(object):
         input_shape = block.vars[op.input('X')[0]].shape
         if op.attr('align_corners') or op.attr('align_mode') == 0:
             raise Exception(
-                "Resize in onnx(opset<=10) only support coordinate_transformation_mode: 'asymmetric', Try converting with --onnx_opest 11"
+                "Resize in onnx(opset<=10) only support coordinate_transformation_mode: 'asymmetric', Try converting with --onnx_opset 11"
             )
         if ('OutSize' in input_names and len(op.input('OutSize')) > 0) or (
                 'SizeTensor' in input_names and
```
```diff
@@ -671,14 +689,82 @@ class OpSet9(object):
         input_names = op.input_names
         if op.attr('align_corners'):
             raise Exception(
-                "Resize in onnx(opset<=10) only support coordinate_transformation_mode: 'asymmetric', Try converting with --onnx_opest 11"
+                "Resize in onnx(opset<=10) only support coordinate_transformation_mode: 'asymmetric', Try converting with --onnx_opset 11"
             )
         if 'OutSize' in input_names and len(op.input('OutSize')) > 0:
-            node = helper.make_node(
-                'Resize',
-                inputs=[op.input('X')[0], op.input('OutSize')[0]],
-                outputs=op.output('Out'),
-                mode='nearest')
+            node_list = list()
+            shape_name0 = self.get_name(op.type, 'shape')
+            shape_node0 = helper.make_node(
+                'Shape', inputs=op.input('X'), outputs=[shape_name0])
+            starts_name = self.get_name(op.type, 'slice.starts')
+            starts_node = self.make_constant_node(
+                starts_name, onnx_pb.TensorProto.INT64, [0])
+            ends_name = self.get_name(op.type, 'slice.ends')
+            ends_node = self.make_constant_node(
+                ends_name, onnx_pb.TensorProto.INT64, [2])
+            shape_name1 = self.get_name(op.type, 'shape')
+            shape_node1 = helper.make_node(
+                'Slice',
+                inputs=[shape_name0, starts_name, ends_name],
+                outputs=[shape_name1])
+            node_list.extend([shape_node0, starts_node, ends_node, shape_node1])
+            if 'OutSize' in input_names and len(op.input('OutSize')) > 0:
+                cast_shape_name = self.get_name(op.type, "shape.cast")
+                cast_shape_node = helper.make_node(
+                    'Cast',
+                    inputs=op.input('OutSize'),
+                    outputs=[cast_shape_name],
+                    to=onnx_pb.TensorProto.INT64)
+                node_list.append(cast_shape_node)
+            else:
+                concat_shape_name = self.get_name(
+                    op.type, op.output('Out')[0] + "shape.concat")
+                concat_shape_node = helper.make_node(
+                    "Concat",
+                    inputs=op.input('SizeTensor'),
+                    outputs=[concat_shape_name],
+                    axis=0)
+                cast_shape_name = self.get_name(op.type, "shape.cast")
+                cast_shape_node = helper.make_node(
+                    'Cast',
+                    inputs=[concat_shape_name],
+                    outputs=[cast_shape_name],
+                    to=onnx_pb.TensorProto.INT64)
+                node_list.extend([concat_shape_node, cast_shape_node])
+            shape_name2 = self.get_name(op.type, "shape.concat")
+            shape_node2 = helper.make_node(
+                'Concat',
+                inputs=[shape_name1, cast_shape_name],
+                outputs=[shape_name2],
+                axis=0)
+            node_list.append(shape_node2)
+            cast_shape_name2 = self.get_name(op.type, "shape.cast")
+            cast_shape_node2 = helper.make_node(
+                'Cast',
+                inputs=[shape_name2],
+                outputs=[cast_shape_name2],
+                to=onnx_pb.TensorProto.FLOAT)
+            node_list.append(cast_shape_node2)
+            cast_shape_name0 = self.get_name(op.type, "shape.cast")
+            cast_shape_node0 = helper.make_node(
+                'Cast',
+                inputs=[shape_name0],
+                outputs=[cast_shape_name0],
+                to=onnx_pb.TensorProto.FLOAT)
+            node_list.append(cast_shape_node0)
+            outputs_h_w_scales = op.output('Out')[0] + "@out_hw_scales"
+            node_h_w_scales = helper.make_node(
+                'Div',
+                inputs=[cast_shape_name2, cast_shape_name0],
+                outputs=[outputs_h_w_scales])
+            node_list.append(node_h_w_scales)
+            result_node = helper.make_node(
+                'Resize',
+                inputs=[op.input('X')[0], outputs_h_w_scales],
+                outputs=op.output('Out'),
+                mode='linear')
+            node_list.extend([result_node])
+            return node_list
         elif 'Scale' in input_names and len(op.input('Scale')) > 0:
             node = helper.make_node(
                 'Resize',
```
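The subgraph added above exists because opset <= 10 Resize consumes per-axis *scales* rather than target *sizes*: the converter slices `[N, C]` from the input's `Shape`, concatenates the requested `[H, W]` from `OutSize`/`SizeTensor`, casts both to float, and divides. The arithmetic it builds in ONNX nodes, as a plain-Python sketch (the function name is illustrative):

```python
def resize_scales(in_shape, out_hw):
    # target shape = input[N, C] ++ requested [H, W]; an elementwise Div by
    # the full input shape yields the per-axis scales [1, 1, H_out/H_in,
    # W_out/W_in] that a scales-driven Resize expects.
    target = list(in_shape[:2]) + list(out_hw)
    return [float(t) / float(s) for t, s in zip(target, in_shape)]
```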
```diff
@@ -714,6 +800,38 @@ class OpSet9(object):
             beta=offset)
         return node

+    def swish(self, op, block):
+        beta = op.attr('beta')
+        beta_name = self.get_name(op.type, 'beta')
+        beta_node = onnx.helper.make_node(
+            'Constant',
+            name=beta_name,
+            inputs=[],
+            outputs=[beta_name],
+            value=onnx.helper.make_tensor(
+                name=beta_name,
+                data_type=onnx.TensorProto.FLOAT,
+                dims=(),
+                vals=[beta]))
+        beta_x_name = self.get_name(op.type, 'beta_x')
+        beta_x_node = onnx.helper.make_node(
+            'Mul',
+            name=beta_x_name,
+            inputs=[op.input('X')[0], beta_name],
+            outputs=[beta_x_name])
+        sigmoid_name = self.get_name(op.type, 'sigmoid')
+        sigmoid_node = onnx.helper.make_node(
+            'Sigmoid',
+            name=sigmoid_name,
+            inputs=[beta_x_name],
+            outputs=[sigmoid_name])
+        swish_node = onnx.helper.make_node(
+            'Mul',
+            inputs=[op.input('X')[0], sigmoid_name],
+            outputs=op.output('Out'))
+        return [beta_node, beta_x_node, sigmoid_node, swish_node]
+
     def hard_swish(self, op, block):
         scale_name = self.get_name(op.type, 'scale')
         offset_name = self.get_name(op.type, 'offset')
```
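The new `swish` mapping above decomposes the activation into primitive ONNX ops (Constant, Mul, Sigmoid, Mul). Numerically the chain computes `x * sigmoid(beta * x)`; a reference implementation of the same math:

```python
import math

def swish(x, beta=1.0):
    # Same computation the four-node ONNX chain performs:
    # beta_x = beta * x; s = sigmoid(beta_x); out = x * s
    return x * (1.0 / (1.0 + math.exp(-beta * x)))
```

With `beta == 0` the sigmoid is constant 0.5, so `swish(x, 0.0) == 0.5 * x`, which is a quick sanity check on the decomposition.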
```diff
@@ -728,8 +846,8 @@ class OpSet9(object):
         node0 = helper.make_node(
             'Add', inputs=[op.input('X')[0], offset_name], outputs=[name0])
         name1 = self.get_name(op.type, 'relu')
-        min_value = op.attr('min')
-        max_value = op.attr('max')
+        min_value = 0.0
+        max_value = op.attr('threshold')
         node1 = helper.make_node(
             'Clip',
             inputs=[name0],
```
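The Clip-bound fix above matters because Paddle's `hard_swish` op carries `threshold`/`scale`/`offset` attributes, not `min`/`max`: the lower clip bound is always 0 and the upper bound is `threshold`. The full activation the exported subgraph computes, as a plain-Python sketch (defaults shown are Paddle's documented ones):

```python
def hard_swish(x, threshold=6.0, scale=6.0, offset=3.0):
    # out = x * clip(x + offset, 0, threshold) / scale
    # min bound fixed at 0.0; max bound read from the op's `threshold` attr.
    clipped = min(max(x + offset, 0.0), threshold)
    return x * clipped / scale
```

Reading `min`/`max` (as the old code did) would raise or return garbage, since those attributes do not exist on the op.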
```diff
@@ -763,14 +881,15 @@ class OpSet9(object):
                 inputs=[op.input('X')[0], temp_value],
                 outputs=op.output('Out'))
             return [shape_node, y_node, node]
-        elif len(x_shape) == len(y_shape):
+        elif axis == -1 or axis == (len(x_shape) - 1) or len(x_shape) == len(y_shape):
             node = helper.make_node(
                 'Mul',
                 inputs=[op.input('X')[0], op.input('Y')[0]],
                 outputs=op.output('Out'))
             return node
         else:
-            raise Excpetion("Unexpected situation happend in elementwise_add")
+            raise Exception("Unexpected situation happend in elementwise_mul")
         return node

     def feed(self, op, block):
```
```diff
@@ -799,6 +918,14 @@ class OpSet9(object):
             axes=op.attr('axes'))
         return node

+    def cast(self, op, block):
+        node = helper.make_node(
+            'Cast',
+            inputs=op.input('X'),
+            outputs=op.output('Out'),
+            to=self.paddle_onnx_dtype_map[op.attr('out_dtype')])
+        return node
+
     def arg_max(self, op, block):
         node = helper.make_node(
             'ArgMax',
```
x2paddle/op_mapper/paddle2onnx/opset9/paddle_custom_layer/multiclass_nms.py

```diff
@@ -72,6 +72,8 @@ def multiclass_nms(op, block):
             dims=(),
             vals=[float(attrs['nms_threshold'])]))

+    boxes_num = block.var(outputs['Out'][0]).shape[0]
+    top_k_value = np.int64(boxes_num if attrs['keep_top_k'] == -1 else attrs['keep_top_k'])
     node_keep_top_k = onnx.helper.make_node(
         'Constant',
         inputs=[],
@@ -80,7 +82,7 @@ def multiclass_nms(op, block):
             name=name_keep_top_k[0] + "@const",
             data_type=onnx.TensorProto.INT64,
             dims=(),
-            vals=[np.int64(attrs['keep_top_k'])]))
+            vals=[top_k_value]))
     node_keep_top_k_2D = onnx.helper.make_node(
         'Constant',
@@ -90,7 +92,7 @@ def multiclass_nms(op, block):
             name=name_keep_top_k_2D[0] + "@const",
             data_type=onnx.TensorProto.INT64,
             dims=[1, 1],
-            vals=[np.int64(attrs['keep_top_k'])]))
+            vals=[top_k_value]))
     # the paddle data format is x1,y1,x2,y2
     kwargs = {'center_point_box': 0}
```
x2paddle/op_mapper/tf_op_mapper_nhwc.py

```diff
@@ -299,6 +299,10 @@ class TFOpMapperNHWC(OpMapper):
         data_format = node.get_attr("data_format").decode()
         pad_mode = node.get_attr("padding").decode()
         channel_first = data_format == "NCHW"
+        if data_format == "NHWC":
+            n, h, w, c = input.out_shapes[0]
+        else:
+            n, c, h, w = input.out_shapes[0]

         if kernel.layer_type == 'Const':
             kernel_value = kernel.value
```
```diff
@@ -329,10 +333,15 @@ class TFOpMapperNHWC(OpMapper):
             "dilation": dilations[2:4],
             "padding": string(pad_mode)
         }
         if hasattr(node, 'dilation') and attr['dilation'] == [1, 1]:
             if len(node.dilation) == 1:
                 attr['dilation'] = [1, node.dilation[0]]
+        if c == -1:
+            reshape_attr = {"shape": [0, k_size[2], 0, 0]}
+            node.fluid_code.add_layer(
+                "reshape", inputs=input, output=input, param_attr=reshape_attr)

         node.fluid_code.add_layer(
             "conv2d", inputs=input, output=node, param_attr=attr)
         if not channel_first:
```
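The `c == -1` branch added above pins an unknown channel dimension before `conv2d` by emitting a reshape whose target is `[0, k_size[2], 0, 0]`. This relies on fluid's reshape convention that a `0` entry means "copy this dimension from the input", so only the channel dim is overwritten. A sketch of how such a target shape resolves (the helper name is illustrative):

```python
def resolve_reshape(shape, actual):
    # fluid reshape semantics: 0 keeps the corresponding input dimension,
    # any other value overrides it.
    return [a if s == 0 else s for s, a in zip(shape, actual)]
```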
```diff
@@ -748,11 +757,12 @@ class TFOpMapperNHWC(OpMapper):
             self.add_omit_nodes(begin.layer_name, node.layer_name)
             begin = begin.value.tolist()
         else:
-            begin = begin
-            shape = begin.out_shapes[0]
-            attr = {"shape": shape}
-            node.fluid_code.add_layer(
-                "reshape", inputs=begin, output=begin, param_attr=attr)
+            begin = self.decoder.infer_tensor(begin).tolist()
+            # shape = begin.out_shapes[0]
+            # attr = {"shape": shape}
+            # node.fluid_code.add_layer(
+            #     "reshape", inputs=begin, output=begin, param_attr=attr)
         if size.layer_type == "Const":
             self.add_omit_nodes(size.layer_name, node.layer_name)
             size = size.value.tolist()
```
```diff
@@ -1058,13 +1068,25 @@ class TFOpMapperNHWC(OpMapper):
             axis = axis.value.tolist()
         assert axis == 0, "Only support axis=0 in GatherV2 OP"
         attr = {'overwrite': False}
+        embeddings_shape = embeddings.out_shapes[0][-1]
+        reshape_list = list()
+        reshape_name = index.layer_name
         if len(index.out_shapes[0]) != 1:
+            reshape_list = index.out_shapes[0]
             reshape_attr = {"shape": [-1]}
+            reshape_name = "{}_reshape".format(index.layer_name)
             node.fluid_code.add_layer(
-                "reshape", inputs=index, output=index, param_attr=reshape_attr)
-        inputs = {'input': embeddings, 'index': index}
+                "reshape", inputs=index, output=reshape_name, param_attr=reshape_attr)
+        inputs = {'input': embeddings, 'index': reshape_name}
         node.fluid_code.add_layer(
             "gather", inputs=inputs, output=node, param_attr=attr)
+        if len(index.out_shapes[0]) != 1:
+            reshape_attr = {"shape": reshape_list + [embeddings_shape]}
+            node.fluid_code.add_layer(
+                "reshape", inputs=node, output=node, param_attr=reshape_attr)

     def OneShotIterator(self, node):
         return self.Placeholder(node)
```
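The GatherV2 change above works around the fact that fluid's `gather` expects a 1-D index: a multi-dimensional index is first flattened with a reshape, the gather runs along axis 0, and a second reshape restores the original index shape with the embedding width appended. The end-to-end effect, sketched in plain Python with nested lists (names are illustrative, not from the codebase):

```python
def gather_along_axis0(table, index, index_shape):
    # 1) flatten the (possibly nested) index
    flat = []
    def walk(ix):
        for v in ix:
            walk(v) if isinstance(v, list) else flat.append(v)
    walk(index)
    # 2) gather rows from the table along axis 0
    rows = [table[i] for i in flat]
    # 3) reshape the rows back to index_shape + [row_width]
    def build(shape, it):
        if len(shape) == 1:
            return [next(it) for _ in range(shape[0])]
        return [build(shape[1:], it) for _ in range(shape[0])]
    return build(index_shape, iter(rows))
```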
x2paddle/optimizer/tf_optimizer.py

```diff
@@ -863,6 +863,9 @@ class TFOptimizer(object):
                         weight = numpy.expand_dims(weight, 2)
                         weight = numpy.expand_dims(weight, 3)
                     self.op_mapper.weights[in_nodes3[0].layer_name] = weight
+                    # fix bug in Paddle1.8.3 and may change in next version.
+                    # self.op_mapper.weights[in_nodes3[0].layer_name +
+                    #                        '_1'] = weight.reshape(1, -1)
                     in_nodes3[0].fluid_code.layers[0].param_attr["shape"] = [
                         1, in_shape[-1], 1, 1
                     ]
```
x2paddle_model_zoo.md

```diff
 # X2Paddle Model Zoo
-> X2Paddle currently supports 50+ TensorFlow OPs and 40+ Caffe Layers, covering most operations commonly used in CV classification models. We have tested X2Paddle's conversion on the model list below.
+> X2Paddle currently supports 70+ TensorFlow OPs and 40+ Caffe Layers, covering most operations commonly used in CV classification models. We have tested X2Paddle's conversion on the model list below.

 **Note:** Due to differences between frameworks, some models cannot currently be converted, e.g. TensorFlow models containing control flow, NLP models, etc. For common CV models, if you find one that cannot be converted, fails during conversion, or shows a large output diff, please report it via [issue](https://github.com/PaddlePaddle/X2Paddle/issues/new) (model name plus code implementation or a way to obtain the model), and we will follow up promptly :)

@@ -20,10 +20,13 @@
 | ResNet_V1_101 | [code](https://github.com/tensorflow/models/tree/master/research/slim/nets) |-|
 | ResNet_V2_101 | [code](https://github.com/tensorflow/models/tree/master/research/slim/nets) |-|
 | UNet | [code1](https://github.com/jakeret/tf_unet)/[code2](https://github.com/lyatdawn/Unet-Tensorflow) |-|
-|MTCNN | [code](https://github.com/AITTSMD/MTCNN-Tensorflow) |-|
-|YOLO-V3| [code](https://github.com/YunYang1994/tensorflow-yolov3) | Conversion requires disabling the NHWC->NCHW optimization; see [Q2 in the docs](FAQ.md) |
-| FALSR | [code](https://github.com/xiaomi-automl/FALSR) | - |
-| DCSCN | [code](https://modelzoo.co/model/dcscn-super-resolution) | - |
+| MTCNN | [code](https://github.com/AITTSMD/MTCNN-Tensorflow) |-|
+| YOLO-V3| [code](https://github.com/YunYang1994/tensorflow-yolov3) | Conversion requires disabling the NHWC->NCHW optimization; see [Q2 in the docs](FAQ.md) |
+| FALSR | [code](https://github.com/xiaomi-automl/FALSR) | Requires the without_data_format_optimization parameter |
+| DCSCN | [code](https://modelzoo.co/model/dcscn-super-resolution) | Requires the without_data_format_optimization parameter |
+| Bert(albert) | [code](https://github.com/google-research/albert#pre-trained-models) | Requires the without_data_format_optimization parameter |
+| Bert(chinese_L-12_H-768_A-12) | [code](https://github.com/google-research/bert#pre-trained-models) | Requires the without_data_format_optimization parameter |
+| Bert(multi_cased_L-12_H-768_A-12) | [code](https://github.com/google-research/bert#pre-trained-models) | Requires the without_data_format_optimization parameter |

 ## Caffe
```