Commit 84f717b2
Authored on Jul 09, 2019 by Jason; committed via GitHub on Jul 09, 2019

Merge pull request #34 from MacroBull/master

Add useful type/shape-inference post-processing (for the PaddleMobile platform)

Parents: bbefc375, 9d147284

Showing 20 changed files with 2905 additions and 1704 deletions (+2905 −1704)
README.md                                      +0     −2
onnx2fluid/README.md                           +11    −7
onnx2fluid/README_en.md                        +10    −6
onnx2fluid/examples/convert_data_npz.py        +8     −8
onnx2fluid/examples/convert_data_pb.py         +8     −8
onnx2fluid/examples/gen_some_samples.py        +126   −77
onnx2fluid/examples/gen_unet.py                +10    −11
onnx2fluid/examples/gen_yolov2.py              +167   −190
onnx2fluid/examples/onnx_model_zoo.sh          +564   −236
onnx2fluid/onnx2fluid/__main__.py              +9     −1
onnx2fluid/onnx2fluid/cmdline.py               +45    −38
onnx2fluid/onnx2fluid/conversion.py            +129   −74
onnx2fluid/onnx2fluid/framework_pb2.py         +173   −81
onnx2fluid/onnx2fluid/onnx_utils.py            +147   −70
onnx2fluid/onnx2fluid/symbolic.py              +1113  −676
onnx2fluid/onnx2fluid/torch_export_helper.py   +117   −52
onnx2fluid/onnx2fluid/validation.py            +145   −63
onnx2fluid/onnx2fluid/writer.py                +113   −96
onnx2fluid/requirements.txt                    +1     −1
onnx2fluid/setup.cfg                           +9     −7
README.md (+0 −2)

@@ -8,8 +8,6 @@ X2Paddle supports converting Caffe and TensorFlow models to PaddlePaddle models; we also...
 Any usage issue can be reported promptly via an [ISSUE](https://github.com/PaddlePaddle/X2Paddle/issues), or you can contribute updates to the code and documentation directly through a pull request.
-
-> **X2Paddle currently supports mainly CV models; NLP models are not supported yet.**
 ## [caffe2fluid](caffe2fluid)
 1. Supports converting Caffe models into loadable PaddlePaddle fluid inference models
 2. Provides comparison documentation for common Caffe-PaddlePaddle APIs [[doc](caffe2fluid/doc)]
onnx2fluid/README.md (+11 −7)

@@ -17,13 +17,13 @@ onnx2fluid supports converting ONNX models into PaddlePaddle models for inference...
 Tested successfully in the following environment:

 * python 3.5+
-* onnx == 1.4.0
+* onnx == 1.4.1
-* paddlepaddle == 1.3.0 (optional, used only for validation)
+* paddlepaddle == 1.5.0 (optional, used only for validation)

 Using [Anaconda](https://docs.anaconda.com/anaconda/install):

 ```shell
 conda install -c conda-forge onnx
-pip install paddlepaddle==1.3.0
+pip install paddlepaddle==1.5.0
 ```

@@ -49,10 +49,12 @@ onnx2fluid sample_1.onnx -t sample_1.npz

 ## Usage

+Some operators of **ONNX opset 9+** are currently supported, corresponding to PyTorch **1.0/1.1 (stable opset)**; see the [ONNX documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md) for more compatibility information.

 onnx2fluid:

 ```shell
-onnx2fluid [-dexy] [-o /path/to/export_dir/] [-z archive.zip] [-t test_data.npz] /path/to/onnx/model.onnx
+onnx2fluid [-dexy] [-o /path/to/export_dir/] [-z archive.zip] [-t test_data.npz] [-i [input_name1,input_name2]] /path/to/onnx/model.onnx

 optional arguments:
   --debug, -d                       enable debugging

@@ -63,6 +65,8 @@ optional arguments:
   --output_dir, -o                  specify the output directory
   --archive [ARCHIVE], -z [ARCHIVE]
                                     if validation passes, pack into the given ZIP file
+  --infer_inputs, -i [input_name1,input_name2]
+                                    invoke PaddlePaddle fluid type/shape inference to refine the model
 ```

 Conversion tool onnx2fluid.conversion:

@@ -74,10 +78,10 @@ onnx2fluid.conversion [-dexy] [-o /path/to/export_dir/] /path/to/onnx/model.onnx

 Validation tool onnx2fluid.validate:

 ```shell
-onnx2fluid.validate [-d] [-t test_data.npz] [-p 1e-3] /path/to/onnx/model.onnx
+onnx2fluid.validate [-d] [-t test_data.npz] [-i [input_name1,input_name2]] [-p 1e-3] /path/to/onnx/model.onnx
 ```

 ## Reference

-* PaddlePaddle [operators](http://www.paddlepaddle.org/documentation/docs/zh/1.4/api_cn/layers_cn.html)
+* PaddlePaddle [operators](http://www.paddlepaddle.org/documentation/docs/zh/1.5/api_cn/layers_cn.html)
-* PaddlePaddle [loading an inference model](http://www.paddlepaddle.org/documentation/docs/zh/1.4/api_guides/low_level/inference.html#id4)
+* PaddlePaddle [loading an inference model](http://www.paddlepaddle.org/documentation/docs/zh/1.5/api_guides/low_level/inference.html#id4)
onnx2fluid/README_en.md (+10 −6)

@@ -19,8 +19,8 @@ PyTorch to PaddlePaddle model conversion can be easily achieved with the PyTorch ONNX...

 ## Environment and dependency

 * python 3.5+ (python 2 not fully supported yet)
-* onnx == 1.4.0
+* onnx >= 1.4
-* paddlepaddle == 1.3.0 (optional for validation)
+* paddlepaddle >= 1.3.0 (optional for validation)

 ## Get started

@@ -47,10 +47,12 @@ onnx2fluid sample_unet.onnx -t sample_unet.npz

 ## Usage

+**ONNX opset 9+** is mainly supported, corresponding to PyTorch **1.0/1.1 (stable opset)**; for more information see the [ONNX doc](https://github.com/onnx/onnx/blob/master/docs/Operators.md).

 onnx2fluid (all in one):

 ```shell
-onnx2fluid [-dexy] [-o /path/to/export_dir/] [-z archive.zip] [-t test_data.npz] /path/to/onnx/model.onnx
+onnx2fluid [-dexy] [-o /path/to/export_dir/] [-z archive.zip] [-t test_data.npz] [-i [input_name1,input_name2]] /path/to/onnx/model.onnx

 optional arguments:
   --debug, -d                       enable debug logging and checking

@@ -61,6 +63,8 @@ optional arguments:
   --output_dir, -o                  output directory
   --archive [ARCHIVE], -z [ARCHIVE]
                                     compress outputs to a ZIP file if conversion succeeded
+  --infer_inputs, -i [input_name1,input_name2]
+                                    invoke PaddlePaddle fluid type-shape inference
 ```

 onnx2fluid.validate:

@@ -72,10 +76,10 @@ onnx2fluid.conversion [-dexy] [-o /path/to/export_dir/] /path/to/onnx/model.onnx

 ```shell
-onnx2fluid.validate [-d] [-t test_data.npz] [-p 1e-3] /path/to/onnx/model.onnx
+onnx2fluid.validate [-d] [-t test_data.npz] [-i [input_name1,input_name2]] [-p 1e-3] /path/to/onnx/model.onnx
 ```

 ## Reference

-* [PaddlePaddle fluid operators](http://www.paddlepaddle.org/documentation/docs/en/1.4/api/layers.html)
+* [PaddlePaddle fluid operators](http://www.paddlepaddle.org/documentation/docs/en/1.5/api/layers.html)
-* load converted model via [load_inference_model](http://www.paddlepaddle.org/documentation/docs/en/1.4/api/io.html#permalink-1-load_inference_model)
+* load converted model via [load_inference_model](http://www.paddlepaddle.org/documentation/docs/en/1.5/api/io.html#permalink-1-load_inference_model)
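The flag set documented in both READMEs can be mirrored with a small argparse sketch. This is a hypothetical stand-in for illustration only; the project's real parser lives in onnx2fluid/cmdline.py, which is changed in this commit but whose contents are not shown here, so flag defaults and help text are assumptions:

```python
import argparse

# Hypothetical parser mirroring the documented onnx2fluid flags;
# the real cmdline.py may differ in defaults and help text.
parser = argparse.ArgumentParser(prog='onnx2fluid')
parser.add_argument('model', help='path to the ONNX model')
parser.add_argument('--debug', '-d', action='store_true',
                    help='enable debug logging and checking')
parser.add_argument('--output_dir', '-o', default='', help='output directory')
parser.add_argument('--archive', '-z', nargs='?', const='model.zip',
                    help='compress outputs to a ZIP file if conversion succeeded')
parser.add_argument('--test_data', '-t', default='',
                    help='test data in .npz format used for validation')
parser.add_argument('--infer_inputs', '-i', nargs='?', const='',
                    help='comma-separated input names for fluid type-shape inference')
parser.add_argument('--precision', '-p', type=float, default=1e-3,
                    help='tolerance used by onnx2fluid.validate')

args = parser.parse_args(['-i', 'x,h1', '-t', 'sample_1.npz', 'sample_1.onnx'])
print(args.infer_inputs.split(','))  # ['x', 'h1']
```

Note the `-i` flag takes a comma-separated name list, matching the `split(',')` convention the example converters switch to in this commit.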
onnx2fluid/examples/convert_data_npz_0.py → onnx2fluid/examples/convert_data_npz.py (+8 −8)

@@ -12,16 +12,16 @@ import numpy as np

 from collections import OrderedDict as Dict

-def _make_var_name(name):
+def make_var_name(name):
     """
     make a valid variable name in Python code
     """

-    if name == '':
-        return '_'
+    assert name
     if name[0].isdigit():
         return 'var_' + name
-    for s in ' *?\\/-:':
+    for s in ' \\|/:-':  #
         name = name.replace(s, '_')
     if name.startswith('_'):
         name = 'var' + name

@@ -29,8 +29,8 @@ def _make_var_name(name):

 fn = sys.argv[1]
-input_names = sys.argv[2].split(':')
+input_names = sys.argv[2].split(',')
-output_name = sys.argv[3].split(':')
+output_names = sys.argv[3].split(',')
 squeeze_data = len(sys.argv) > 4

 data = np.load(fn, encoding='bytes')

@@ -42,7 +42,7 @@ while squeeze_data and input_data.ndim > 4 and input_data.shape[0] == 1:

 while squeeze_data and output_data.ndim > 2 and output_data.shape[0] == 1:
     output_data = output_data.squeeze(0)

-inputs = Dict(zip(map(_make_var_name, input_names), [input_data]))
+inputs = Dict(zip(map(make_var_name, input_names), [input_data]))
-outputs = Dict(zip(map(_make_var_name, output_name), [output_data]))
+outputs = Dict(zip(map(make_var_name, output_names), [output_data]))

 np.savez(fn, inputs=inputs, outputs=outputs)  # overwrite
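The renamed make_var_name can be exercised on its own. The sketch below reproduces the new version from the hunk; the final `return name` is an assumption, since the diff context is cut off before the function's end:

```python
def make_var_name(name):
    """Make a valid Python variable name (sketch of the new version)."""
    assert name  # the old version returned '_' for empty names instead
    if name[0].isdigit():
        return 'var_' + name
    for s in ' \\|/:-':  # separators commonly found in ONNX tensor names
        name = name.replace(s, '_')
    if name.startswith('_'):
        name = 'var' + name
    return name  # assumption: the truncated hunk ends by returning name

print(make_var_name('0out'))         # var_0out
print(make_var_name('data/blob:0'))  # data_blob_0
print(make_var_name('_hidden'))      # var_hidden
```

Compared with the old character set `' *?\\/-:'`, the new set `' \\|/:-'` drops the glob characters `*?` and adds `|`, while keeping the digit-prefix and leading-underscore fixes unchanged.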
onnx2fluid/examples/convert_data_pb_0.py → onnx2fluid/examples/convert_data_pb.py (+8 −8)

@@ -15,16 +15,16 @@ from collections import OrderedDict as Dict

 from glob import glob

-def _make_var_name(name):
+def make_var_name(name):
     """
     make a valid variable name in Python code
     """

-    if name == '':
-        return '_'
+    assert name
     if name[0].isdigit():
         return 'var_' + name
-    for s in ' *?\\/-:':
+    for s in ' \\|/:-':  #
         name = name.replace(s, '_')
     if name.startswith('_'):
         name = 'var' + name

@@ -32,8 +32,8 @@ def _make_var_name(name):

 data_dir = os.path.dirname(sys.argv[1])
-input_names = sys.argv[2].split(':')
+input_names = sys.argv[2].split(',')
-output_name = sys.argv[3].split(':')
+output_names = sys.argv[3].split(',')
 squeeze_data = len(sys.argv) > 4

 # Load inputs

@@ -58,7 +58,7 @@ for fn in glob(os.path.join(data_dir, 'output_*.pb')):
         tensor = tensor.squeeze(0)
     outputs.append(tensor)

-inputs = Dict(zip(map(_make_var_name, input_names), inputs))
+inputs = Dict(zip(map(make_var_name, input_names), inputs))
-outputs = Dict(zip(map(_make_var_name, output_name), outputs))
+outputs = Dict(zip(map(make_var_name, output_names), outputs))

 np.savez(data_dir, inputs=inputs, outputs=outputs)
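In both converters the name/tensor pairing relies on zip, which silently truncates to the shorter sequence; in convert_data_npz.py the data is wrapped as a one-element list, so only the first input name is ever paired. A minimal stdlib sketch of that behavior (the names and data are hypothetical, and the helper is a simplified stand-in for the converters' make_var_name):

```python
from collections import OrderedDict as Dict

def make_var_name(name):  # simplified stand-in for the converters' helper
    return 'var_' + name if name[0].isdigit() else name.replace(':', '_')

input_names = 'data:0,aux:0'.split(',')  # two names from the command line...
input_data = [[1.0, 2.0]]                # ...but a single tensor, as in convert_data_npz.py

# zip stops at the shorter sequence: only 'data:0' gets paired, 'aux:0' is dropped.
inputs = Dict(zip(map(make_var_name, input_names), input_data))
print(list(inputs))  # ['data_0']
```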
onnx2fluid/examples/gen_some_samples.py (+126 −77)

@@ -20,50 +20,97 @@ from onnx2fluid.torch_export_helper import export_onnx_with_validation

 prefix = 'sample_'
 idx = 0

The previously commented-out "example: RNN" block is removed, and two new runnable examples are added in its place, followed by a rewritten random example:

######## example: RNN cell ########

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.gru = nn.GRUCell(6, 5)
        self.lstm = nn.LSTMCell(5, 4)

    def forward(self, x, h1, h2, c2):
        h = self.gru(x, h1)
        h, c = self.lstm(h, (h2, c2))
        return h, c

model = Model()
model.eval()
xb = torch.rand((7, 6))
h1 = torch.zeros((7, 5))
h2 = torch.zeros((7, 4))
c2 = torch.zeros((7, 4))
yp = model(xb, h1, h2, c2)
idx += 1
print('index: ', idx)
export_onnx_with_validation(
    model, [xb, h1, h2, c2], prefix + str(idx),
    ['x', 'h1', 'h2', 'c2'], ['h', 'c'],
    verbose=True, training=False)

######## example: RNN ########

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.gru = nn.GRU(6, 5, 3)
        self.lstm = nn.LSTM(5, 4, 2)

    def forward(self, x, h1, h2, c2):
        y, h1 = self.gru(x, h1)
        y, (h2, c2) = self.lstm(y, (h2, c2))
        return y

model = Model()
model.eval()
xb = torch.rand((8, 1, 6))
h1 = torch.zeros((3, 1, 5))
h2 = torch.zeros((2, 1, 4))
c2 = torch.zeros((2, 1, 4))
yp = model(xb, h1, h2, c2)
idx += 1
print('index: ', idx)
export_onnx_with_validation(
    model, [xb, h1, h2, c2], prefix + str(idx),
    ['x', 'h1', 'h2', 'c2'], ['y'],
    verbose=True, training=False)

######## example: random ########
"""
symbolic registration:

def rand(g, *shapes):
    shapes_list = list(shapes)
    shape = _maybe_get_const(shapes_list[0], "is")
    return g.op('RandomUniform', shape_i=shape)
"""

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()

    def forward(self, x):
        y = torch.rand((2, 3))  # + torch.rand_like(x)
        y = y + torch.randn((2, 3))  # + torch.randn_like(x)
        y = y + x
        return y

model = Model()
model.eval()
xb = torch.rand((2, 3))
yp = model(xb)
idx += 1
print('index: ', idx)
export_onnx_with_validation(
    model, [xb], prefix + str(idx), ['x'], ['y'],
    verbose=True, training=False)

######## example: fc ########

The remaining hunks in this file (@@ -85,11 +132,10 @@ through @@ -241,8 +291,7 @@) mostly rewrap the export_onnx_with_validation(...) calls of the fc, compare, affine_grid, conv2d_transpose, conv2d, and empty examples, changing the argument packing from lists to tuples along the way, e.g.

 export_onnx_with_validation(
-    model, [xb],
+    model, (xb, ),
     prefix + str(idx), ['x'], ['y'],
     verbose=True,
     training=False)

(the commented-out conv1d example is rewrapped the other way, model, (xb, ) → model, [xb]). Two substantive changes sit among them:

* example: affine_grid gains a symbolic-registration docstring:

  """
  symbolic registration:

  @parse_args('v', 'is')
  def affine_grid_generator(g, theta, size):
      return g.op('AffineGrid', theta, size_i=size)
  """

* example: conv2d (@@ -179,7 +229,7 @@) shrinks its pooling output:

  -        self.pool = nn.AdaptiveAvgPool2d(2)
  +        self.pool = nn.AdaptiveAvgPool2d(1)

The empty example keeps its ['y'], ['y'] input/output name lists unchanged.
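The tensor shapes in the new RNN example follow PyTorch's convention that GRU/LSTM hidden states are shaped (num_layers, batch, hidden_size) while the input is (seq_len, batch, input_size). The helper below is illustrative only (not part of the repo) and sanity-checks the example's numbers without needing torch:

```python
def rnn_state_shapes(input_size, hidden_size, num_layers, seq_len, batch):
    """Expected input/hidden shapes for a unidirectional GRU/LSTM stack."""
    x = (seq_len, batch, input_size)        # input sequence
    h = (num_layers, batch, hidden_size)    # initial hidden state (and cell state for LSTM)
    return x, h

# nn.GRU(6, 5, 3) fed with xb = torch.rand((8, 1, 6)):
x_gru, h_gru = rnn_state_shapes(6, 5, 3, seq_len=8, batch=1)
print(x_gru, h_gru)  # (8, 1, 6) (3, 1, 5) -- matches xb and h1 in the example

# nn.LSTM(5, 4, 2) consumes the GRU output of shape (8, 1, 5):
x_lstm, h_lstm = rnn_state_shapes(5, 4, 2, seq_len=8, batch=1)
print(h_lstm)  # (2, 1, 4) -- matches h2 and c2 in the example
```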
onnx2fluid/examples/gen_unet.py (+10 −11)

The double_conv hunk (@@ -21,10 +21,10 @@) rewraps the nn.Sequential(...) constructor with no functional change:

 class double_conv(nn.Module):
     def __init__(self, in_ch, out_ch):
         super(double_conv, self).__init__()
         self.conv = nn.Sequential(
             nn.Conv2d(in_ch, out_ch, 3, padding=1),
             nn.BatchNorm2d(out_ch),
             nn.ReLU(inplace=True),
             nn.Conv2d(out_ch, out_ch, 3, padding=1),
             nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

     def forward(self, x):
         x = self.conv(x)

The up hunk (@@ -58,8 +58,8 @@) likewise rewraps the nn.Upsample(...) call:

         # would be a nice idea if the upsampling could be learned too,
         # but my machine do not have enough memory to handle all those weights
         if bilinear:
             self.up = nn.Upsample(
                 scale_factor=2, mode='bilinear')  #, align_corners=True)
         else:
             self.up = nn.ConvTranspose2d(in_ch // 2, in_ch // 2, 2, stride=2)

and the final hunk (@@ -131,8 +131,7 @@) changes the export call's argument packing:

 model.eval()
 xb = torch.rand((1, 3, 512, 512))
 yp = model(xb)
 export_onnx_with_validation(
-    model, [xb],
+    model, (xb, ),
     'sample_unet', ['image'], ['pred'],
     verbose=True, training=False)
onnx2fluid/examples/gen_yolov2.py (+167 −190)

The Yolov2 diff is pure reformatting. In __init__ (@@ -20,188 +20,166 @@), every nn.Conv2d(...) constructor (conv1 through conv22, with their in/out channels, kernel sizes, strides, paddings, and bias=False settings unchanged) and the matching nn.BatchNorm2d(...) layers are rewrapped into a denser multi-line style, e.g.

         self.conv1 = nn.Conv2d(
             in_channels=3,
             out_channels=32,
             kernel_size=3,
             stride=1,
             padding=1,
             bias=False)
         self.batchnorm1 = nn.BatchNorm2d(32)

In forward() (@@ -227,14 +205,14 @@ and @@ -247,36 +225,36 @@), the chains of

         out = F.max_pool2d(
             F.leaky_relu(self.batchnorm1(self.conv1(x)),
                          negative_slope=0.1),
             2,
             stride=2)

and

         out = F.leaky_relu(
             self.batchnorm2(self.conv2(out)),
             negative_slope=0.1)

calls are likewise rewrapped through conv20, with the passthrough = self.reorg_layer(out) step after conv13 and the intervening F.max_pool2d(out, 2, stride=2) calls kept as-is.
.
conv20
(
out
)),
self
.
batchnorm20
(
self
.
conv20
(
out
)),
negative_slope
=
0.1
)
negative_slope
=
0.1
)
out
=
torch
.
cat
([
passthrough
,
out
],
1
)
out
=
torch
.
cat
([
passthrough
,
out
],
1
)
out
=
F
.
leaky_relu
(
out
=
F
.
leaky_relu
(
self
.
batchnorm21
(
self
.
conv21
(
out
)),
self
.
batchnorm21
(
self
.
conv21
(
out
)),
negative_slope
=
0.1
)
negative_slope
=
0.1
)
out
=
self
.
conv22
(
out
)
out
=
self
.
conv22
(
out
)
return
out
return
out
...
@@ -286,8 +264,7 @@ model = Yolov2()
...
@@ -286,8 +264,7 @@ model = Yolov2()
model
.
eval
()
model
.
eval
()
xb
=
torch
.
rand
((
1
,
3
,
224
,
224
))
xb
=
torch
.
rand
((
1
,
3
,
224
,
224
))
yp
=
model
(
xb
)
yp
=
model
(
xb
)
export_onnx_with_validation
(
export_onnx_with_validation
(
model
,
[
xb
],
model
,
(
xb
,
),
'sample_yolov2'
,
[
'image'
],
[
'pred'
],
'sample_yolov2'
,
[
'image'
],
[
'pred'
],
verbose
=
True
,
verbose
=
True
,
training
=
False
)
training
=
False
)
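The `reorg_layer` body is elided in the hunk above; YOLOv2's passthrough trick is a space-to-depth rearrangement that folds each stride x stride spatial block into the channel dimension so an earlier, higher-resolution feature map can be concatenated with the final one. A minimal NumPy sketch of that behavior (the exact channel ordering used by the Darknet reorg may differ, this only illustrates the shape transformation):

```python
import numpy as np

def reorg(x, stride=2):
    """Space-to-depth: move stride x stride spatial blocks into channels.

    x: array of shape (N, C, H, W); H and W must be divisible by stride.
    Returns shape (N, C * stride**2, H // stride, W // stride).
    """
    n, c, h, w = x.shape
    assert h % stride == 0 and w % stride == 0
    x = x.reshape(n, c, h // stride, stride, w // stride, stride)
    x = x.transpose(0, 1, 3, 5, 2, 4)
    return x.reshape(n, c * stride * stride, h // stride, w // stride)

x = np.arange(2 * 4 * 4, dtype=np.float32).reshape(1, 2, 4, 4)
y = reorg(x)
print(y.shape)  # (1, 8, 2, 2)
```

This is why `torch.cat([passthrough, out], 1)` in `forward` works: the reorg halves the spatial size of the passthrough map so it matches `out`, while quadrupling its channels.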
onnx2fluid/examples/onnx_model_zoo.sh (view file @ 84f717b2)
...
@@ -2,293 +2,610 @@
# setopt SH_WORD_SPLIT # if zsh
# alias python="python3" # if ...
# alias http_get="wget -c" # if no aria2
alias http_get="aria2c -c -s8 -x8"

base_url="https://s3.amazonaws.com/download.onnx/models/opset_9/"
convert_cmd="python -m onnx2fluid"
validate_cmd="$convert_cmd.validation"
convert_flags="-e -o /tmp/export/"
validate_flags1="/tmp/export/model.py"
validate_flags2="/tmp/export/__model__"
validate_flags3="/tmp/export/__model__ -i"

bvlc_alexnet()
{
    bn_tar="bvlc_alexnet"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/model.onnx"

    http_get "$base_url$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model"
    for npz in "$bn_tar/"*.npz
    do
        echo "converting $npz ..."
        python convert_data_npz.py "$npz" data_0 prob_1 -s
        $validate_cmd $validate_flags1 -t "$npz"
        $validate_cmd $validate_flags2 -t "$npz"
    done
    $validate_cmd $validate_flags3 -t "$npz"
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir ..."
        python convert_data_pb.py "$pb_dir" data_0 prob_1
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

bvlc_googlenet()
{
    bn_tar="bvlc_googlenet"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/model.onnx"

    http_get "$base_url$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model"
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir"
        python convert_data_pb.py "$pb_dir" data_0 prob_1
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

bvlc_reference_caffenet()
{
    bn_tar="bvlc_reference_caffenet"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/model.onnx"

    http_get "$base_url$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model"
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir"
        python convert_data_pb.py "$pb_dir" data_0 prob_1
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

bvlc_reference_rcnn_ilsvrc13()
{
    bn_tar="bvlc_reference_rcnn_ilsvrc13"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/model.onnx"

    http_get "$base_url$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model"
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir"
        python convert_data_pb.py "$pb_dir" data_0 fc-rcnn_1
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

densenet121()
{
    bn_tar="densenet121"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/model.onnx"

    http_get "$base_url$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model"
    for npz in "$bn_tar/"*.npz
    do
        echo "converting $npz ..."
        python convert_data_npz.py "$npz" data_0 fc6_1 -s
        $validate_cmd $validate_flags1 -t "$npz"
        $validate_cmd $validate_flags2 -t "$npz"
    done
    $validate_cmd $validate_flags3 -t "$npz"
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir"
        python convert_data_pb.py "$pb_dir" data_0 fc6_1
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

emotion_ferplus()
{
    bn_tar="emotion_ferplus"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/model.onnx"

    http_get "https://onnxzoo.blob.core.windows.net/models/opset_8/emotion_ferplus/$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model" -y
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir ..."
        python convert_data_pb.py "$pb_dir" Input3 Plus692_Output_0
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

inception_v1()
{
    bn_tar="inception_v1"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/model.onnx"

    http_get "$base_url$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model"
    for npz in "$bn_tar/"*.npz
    do
        echo "converting $npz ..."
        python convert_data_npz.py "$npz" data_0 prob_1 -s
        $validate_cmd $validate_flags1 -t "$npz"
        $validate_cmd $validate_flags2 -t "$npz"
    done
    $validate_cmd $validate_flags3 -t "$npz"
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir ..."
        python convert_data_pb.py "$pb_dir" data_0 prob_1
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

inception_v2()
{
    bn_tar="inception_v2"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/model.onnx"

    http_get "$base_url$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model"
    for npz in "$bn_tar/"*.npz
    do
        echo "converting $npz ..."
        python convert_data_npz.py "$npz" data_0 prob_1 -s
        $validate_cmd $validate_flags1 -t "$npz"
        $validate_cmd $validate_flags2 -t "$npz"
    done
    $validate_cmd $validate_flags3 -t "$npz"
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir ..."
        python convert_data_pb.py "$pb_dir" data_0 prob_1
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

mobilenet()
{
    bn_tar="mobilenetv2-1.0"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/$bn_tar.onnx"

    http_get "https://s3.amazonaws.com/onnx-model-zoo/mobilenet/mobilenetv2-1.0/$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model" -y
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir ..."
        python convert_data_pb.py "$pb_dir" data mobilenetv20_output_flatten0_reshape0
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

resnet18()
{
    bn_tar="resnet18v1"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/$bn_tar.onnx"

    http_get "https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet18v1/$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model" -y
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir ..."
        python convert_data_pb.py "$pb_dir" data resnetv15_dense0_fwd
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

resnet50()
{
    bn_tar="resnet50"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/model.onnx"

    http_get "$base_url$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model"
    for npz in "$bn_tar/"*.npz
    do
        echo "converting $npz ..."
        python convert_data_npz.py "$npz" gpu_0/data_0 gpu_0/softmaxout_1 -s
        $validate_cmd $validate_flags1 -t "$npz"
        $validate_cmd $validate_flags2 -t "$npz"
    done
    $validate_cmd $validate_flags3 -t "$npz"
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir ..."
        python convert_data_pb.py "$pb_dir" gpu_0/data_0 gpu_0/softmaxout_1
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

resnet100_arcface()
{
    bn_tar="resnet100"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/$bn_tar.onnx"

    http_get "https://s3.amazonaws.com/onnx-model-zoo/arcface/resnet100/$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model" -y
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir ..."
        python convert_data_pb.py "$pb_dir" data fc1
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

resnet101_duc()
{
    bn_tar="ResNet101_DUC_HDC"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/$bn_tar.onnx"

    http_get "https://s3.amazonaws.com/onnx-model-zoo/duc/$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model" -y
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir ..."
        python convert_data_pb.py "$pb_dir" data seg_loss
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

resnet152()
{
    bn_tar="resnet152v2"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/$bn_tar.onnx"

    http_get "https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet152v2/$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model" -y
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir ..."
        python convert_data_pb.py "$pb_dir" data resnetv27_dense0_fwd
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

shufflenet()
{
    bn_tar="shufflenet"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/model.onnx"

    http_get "$base_url$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model"
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir ..."
        python convert_data_pb.py "$pb_dir" gpu_0/data_0 gpu_0/softmax_1
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

squeezenet()
{
    bn_tar="squeezenet"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/model.onnx"

    http_get "$base_url$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model"
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir"
        python convert_data_pb.py "$pb_dir" data_0 softmaxout_1
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

squeezenet1v1()
{
    bn_tar="squeezenet1.1"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/$bn_tar.onnx"

    http_get "https://s3.amazonaws.com/onnx-model-zoo/squeezenet/squeezenet1.1/$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model"
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir ..."
        python convert_data_pb.py "$pb_dir" data squeezenet0_flatten0_reshape0
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

ssd()
{
    bn_tar="ssd"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/model.onnx"

    http_get "https://onnxzoo.blob.core.windows.net/models/opset_10/ssd/$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    mkdir "$bn_tar"
    tar xf "$fn_tar" -C "$bn_tar/"

    $convert_cmd $convert_flags "$fn_model"
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir ..."
        python convert_data_pb.py "$pb_dir" image bboxes,labels,scores
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

tiny_yolov2()
{
    bn_tar="tiny_yolov2"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/model.onnx"

    http_get "https://onnxzoo.blob.core.windows.net/models/opset_8/tiny_yolov2/$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model" -y
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir"
        python convert_data_pb.py "$pb_dir" image grid
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

vgg16bn()
{
    bn_tar="vgg16-bn"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/$bn_tar.onnx"

    http_get "https://s3.amazonaws.com/onnx-model-zoo/vgg/vgg16-bn/$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model" -y
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir ..."
        python convert_data_pb.py "$pb_dir" data vgg0_dense2_fwd
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

vgg19()
{
    bn_tar="vgg19"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/model.onnx"

    http_get "$base_url$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model"
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir"
        python convert_data_pb.py "$pb_dir" data_0 prob_1
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

yolov3()
{
    bn_tar="yolov3"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/yolov3.onnx"

    http_get "https://onnxzoo.blob.core.windows.net/models/opset_10/yolov3/$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model" -x #
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir ..."
        python convert_data_pb.py "$pb_dir" input_1:01,image_shape:01 yolonms_layer_1/ExpandDims_1:0,yolonms_layer_1/ExpandDims_3:0,yolonms_layer_1/concat_2:0
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}

zfnet512()
{
    bn_tar="zfnet512"
    fn_tar="$bn_tar.tar.gz"
    fn_model="$bn_tar/model.onnx"

    http_get "$base_url$fn_tar"
    rm -rf "$bn_tar/"
    echo "extracting ..."
    tar xf "$fn_tar"

    $convert_cmd $convert_flags "$fn_model"
    for pb_dir in "$bn_tar/"*/
    do
        echo "converting $pb_dir"
        python convert_data_pb.py "$pb_dir" gpu_0/data_0 gpu_0/softmax_1
        $validate_cmd $validate_flags1 -t $(dirname "$pb_dir/x").npz
        $validate_cmd $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
    $validate_cmd $validate_flags3 -t $(dirname "$pb_dir/x").npz
    rm -rf "$bn_tar/"
}
...
@@ -296,11 +613,22 @@ bvlc_alexnet
bvlc_googlenet
bvlc_reference_caffenet
bvlc_reference_rcnn_ilsvrc13
densenet121
emotion_ferplus # not supported
inception_v1
inception_v2
mobilenet
resnet18
resnet50
resnet100_arcface
resnet101_duc
resnet152
shufflenet
squeezenet # softmax bug
squeezenet1v1
ssd # version not supported
tiny_yolov2 # not supported
vgg16bn
vgg19
yolov3 # malformed model ?
zfnet512
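The `convert_data_npz.py` / `convert_data_pb.py` helpers referenced above pack each model's sample inputs and reference outputs into a single `.npz` file that the validation commands consume via `-t`. The exact on-disk layout is not shown in this diff; the sketch below only illustrates the general pattern of bundling named input and expected-output tensors into one archive, and the `inputs`/`outputs` key names are an assumption for illustration:

```python
import io
import numpy as np

# Hypothetical golden-data bundle: named input tensors plus expected outputs.
# NOTE: the 'inputs'/'outputs' keys are illustrative, not the tool's actual schema.
inputs = {'data_0': np.zeros((1, 3, 224, 224), dtype=np.float32)}
outputs = {'prob_1': np.zeros((1, 1000), dtype=np.float32)}

buf = io.BytesIO()  # stand-in for a real .npz file on disk
np.savez(buf, inputs=inputs, outputs=outputs)
buf.seek(0)

# dict values are stored as 0-d object arrays, hence allow_pickle + .item()
golden = np.load(buf, allow_pickle=True)
print(sorted(golden.files))                      # ['inputs', 'outputs']
print(golden['inputs'].item()['data_0'].shape)   # (1, 3, 224, 224)
```

The validation step can then feed `inputs` to the converted fluid program and compare its results against `outputs` within the configured `--rtol`.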
onnx2fluid/onnx2fluid/__main__.py (view file @ 84f717b2)
...
@@ -92,9 +92,17 @@ parser.add_argument(
parser.add_argument(
    '--rtol',
    type=float,
    default=1e-2,
    help='assertion relative tolerance for validation',
)
parser.add_argument(
    '--infer_inputs',
    '-i',
    nargs='?',
    default=None,
    const='',
    help='perform type-shape inference with given input names and re-save model',
)
args = parser.parse_args()

logging_format = '[%(levelname)8s]%(name)s::%(funcName)s:%(lineno)04d: %(message)s'
...
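The new `--infer_inputs` flag relies on argparse's three-way semantics for an optional-value option: with `nargs='?'`, an absent flag yields `default` (`None`), a bare `-i` yields `const` (`''`), and `-i name1,name2` yields the given string. A standalone sketch of just that behavior:

```python
import argparse

parser = argparse.ArgumentParser()
# Mirrors the new --infer_inputs option: absent -> None, bare flag -> '' (const),
# with a value -> the comma-separated input-name list as a string.
parser.add_argument('--infer_inputs', '-i', nargs='?', default=None, const='')

print(parser.parse_args([]).infer_inputs)                    # None
print(parser.parse_args(['-i']).infer_inputs)                # ''
print(parser.parse_args(['-i', 'data_0,img']).infer_inputs)  # 'data_0,img'
```

This is what lets the downstream code distinguish "do not run inference" (`None`) from "run inference with default inputs" (`''`) from an explicit input-name list.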
onnx2fluid/onnx2fluid/cmdline.py (view file @ 84f717b2)
...
@@ -22,7 +22,6 @@ __all__ = [
    'main',
]

DEFAULT_MODEL_MODULE = 'model'
DEFAULT_MODEL_FUNC = 'inference'
...
@@ -30,6 +29,7 @@ DEFAULT_MODEL_FUNC = 'inference'
def main(**kwargs):
    """main program entry point"""

    from .conversion import DEFAULT_ONNX_OPSET_VERSION
    from .conversion import convert

    logger = logging.getLogger('onnx2fluid')
...
@@ -44,41 +44,50 @@ def main(**kwargs):
                if save_dir else basepath) + shutil.os.sep
    model_basename = DEFAULT_MODEL_MODULE + '.py'
    model_func_name = DEFAULT_MODEL_FUNC
    onnx_opset_pedantic = kwargs.pop('pedantic', True)
    skip_version_conversion = kwargs.pop('skip_version_conversion', False)
    onnx_opset_version = None if skip_version_conversion else DEFAULT_ONNX_OPSET_VERSION

    # convert
    convert(
        filename,
        save_dir,
        model_basename=model_basename,
        model_func_name=model_func_name,
        onnx_opset_version=onnx_opset_version,
        onnx_opset_pedantic=onnx_opset_pedantic,
        **kwargs)

    # validate
    passed = True
    golden_data_filename = kwargs.pop('test_data', '')
    infer_inputs = kwargs.pop('infer_inputs', None)
    save_inference_model = infer_inputs is not None
    if golden_data_filename or save_inference_model:
        from .validation import validate

        if save_inference_model:
            inference_input_names = infer_inputs.split(',')
        else:
            inference_input_names = None

        logger.info('starting validation on desc ...')
        passed &= validate(
            shutil.os.path.join(save_dir, '__model__'),
            golden_data_filename=golden_data_filename,
            save_inference_model=save_inference_model,
            inference_input_names=inference_input_names,
            **kwargs)

        logger.info('starting validation on code ...')
        # this re-generate desc proto with Python code when debug on
        passed &= validate(
            shutil.os.path.join(save_dir, model_basename),
            golden_data_filename=golden_data_filename,
            model_func_name=model_func_name,
            save_inference_model=save_inference_model,
            inference_input_names=inference_input_names,
            **kwargs)

    if not passed:
        logger.fatal('validation failed, exit')
        return

    # create zip file
...
@@ -111,19 +120,17 @@ if __name__ == '__main__':
    from onnx2fluid.cmdline import main

    main(
        model=['../examples/t1.onnx'],
        output_dir='/tmp/export/',
        embed_params=False,
        pedantic=False,
        test_data='../examples/t1.npz',
        debug=True)

    main(
        model=['../examples/inception_v2/model.onnx'],
        output_dir='/tmp/export/',
        embed_params=True,
        pedantic=False,
        skip_version_conversion=False,
        test_data='../examples/inception_v2/test_data_set_2.npz',
        debug=True)
onnx2fluid/onnx2fluid/conversion.py (view file @ 84f717b2)
...
@@ -14,53 +14,72 @@ __all__ = [
     'convert',
 ]

+DEFAULT_ONNX_OPSET_VERSION = 9
+
+
+def make_var_name(name):
+    """
+    make a valid variable name in Python code and filename in filesystem
+    """
+
+    if name == '':
+        return ''
+    if name[0].isdigit():
+        return 'var_' + name
+    for s in ' \\|/:.-':
+        name = name.replace(s, '_')
+    if name.startswith('_'):
+        name = 'var' + name
+    return name
+

 def convert(onnx_model_filename,
             save_dir,
             model_basename='model.py',
             model_func_name='inference',
             embed_params=False,
-            onnx_opset_version=9,
+            onnx_opset_version=None,
             onnx_opset_pedantic=True,
-            onnx_skip_version_conversion=False,
             debug=False,
             **kwargs):
     """
     convert an ONNX model to Paddle fluid Python code and desc pb
     """

+    assert isinstance(onnx_model_filename, str)
+    assert isinstance(save_dir, str)
+    assert isinstance(model_basename, str)
+    assert isinstance(model_func_name, str)
+    assert onnx_opset_version is None or isinstance(onnx_opset_version, int)
+
     import onnx

     from onnx.checker import ValidationError
     from onnx.checker import check_model
-    from onnx.utils import polish_model
     from onnx.version_converter import convert_version

     from .onnx_utils import DEFAULT_OP_DOMAIN
     from .onnx_utils import graph_ops, graph_weights
     from .onnx_utils import inferred_model_value_info
-    from .onnx_utils import optimize_model_skip_op_for_inference
-    from .onnx_utils import optimize_model_strip_initializer
-    from .onnx_utils import optimize_model_cast, optimize_model_slice
+    from .onnx_utils import polish_model
     from .writer import Program, Writer
-    from .writer import make_var_name

     logger = logging.getLogger('convert')

     # prepare onnx model
     logger.info('loading model: %s ...', onnx_model_filename)
     onnx_model = onnx.load(onnx_model_filename)

     try:
         logger.info('checking model ...')
         check_model(onnx_model)
-        if onnx_skip_version_conversion:  # WORKAROUND: RuntimeError: No Adapter For OP
-            logger.debug('assumed opset version: %d', onnx_opset_version)
+        if onnx_opset_version is None:  # WORKAROUND: RuntimeError: No Adapter For OP
             logger.warning(
                 'opset conversion skipped for onnx_opset_pedantic is OFF')
+            logger.info('assumed opset version: %d', DEFAULT_ONNX_OPSET_VERSION)
         else:
-            logger.debug('using opset version: %d', onnx_opset_version)
+            logger.info('using opset version: %d', onnx_opset_version)
             onnx_model = convert_version(onnx_model, onnx_opset_version)
-        onnx_model = polish_model(onnx_model)
     except ValidationError as e:
         if onnx_opset_pedantic:
             raise e
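The `make_var_name` helper added in this hunk sanitizes ONNX value names into strings that are both valid Python identifiers and safe filenames. A standalone copy of the function from the hunk shows how typical ONNX names are rewritten:

```python
def make_var_name(name):
    """Make a valid variable name in Python code and filename in filesystem."""
    if name == '':
        return ''
    if name[0].isdigit():  # identifiers cannot start with a digit
        return 'var_' + name
    for s in ' \\|/:.-':  # characters illegal in identifiers/filenames
        name = name.replace(s, '_')
    if name.startswith('_'):
        name = 'var' + name
    return name

print(make_var_name('fc6/weights:0'))  # fc6_weights_0
print(make_var_name('1conv'))          # var_1conv
print(make_var_name('.bias'))          # var_bias
```

The same mapping is applied consistently to graph inputs, outputs, weights, and op names below, which is what lets the converter key `value_infos` by sanitized names.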
...
@@ -68,13 +87,11 @@ def convert(onnx_model_filename,
         logger.warning('due to onnx_opset_pedantic is OFF')
         logger.warning('the ONNX model sanity checking error is suppressed')
         logger.warning('value_info inferring may be uncompleted')

     # onnx model optimization
     logger.info('model has %d ops', len(onnx_model.graph.node))
     logger.info('optimizing model ...')
-    onnx_model = optimize_model_skip_op_for_inference(onnx_model)
-    onnx_model = optimize_model_strip_initializer(onnx_model)
-    onnx_model = optimize_model_cast(onnx_model)
-    onnx_model = optimize_model_slice(onnx_model)
+    onnx_model = polish_model(onnx_model, checking=onnx_opset_pedantic)

     # prepare filesystem
     shutil.rmtree(save_dir, ignore_errors=True)
...
@@ -83,30 +100,31 @@ def convert(onnx_model_filename,
     # DEBUG:
     if debug:
-        model = onnx.shape_inference.infer_shapes(onnx_model)
         debug_model_filename, _ = shutil.os.path.splitext(onnx_model_filename)
-        onnx.save(model, debug_model_filename + '.optimized_and_inffered.onnx')
-        # onnx.save(model, '/tmp/export/optimized_and_inffered.onnx')
+        onnx.save(onnx_model, debug_model_filename + '.polished.onnx')

     # I/O instances
     onnx_graph = onnx_model.graph
     fluid_program = Program()
     fluid_writer = Writer()

     # model components
-    # graph_name = onnx_graph.name
-    graph_inputs = [value.name for value in onnx_graph.input]
-    graph_outputs = [value.name for value in onnx_graph.output]
-    graph_params = []
-    graph_value_infos = inferred_model_value_info(onnx_model)
+    inp_vars = [make_var_name(value.name) for value in onnx_graph.input]
+    out_vars = [make_var_name(value.name) for value in onnx_graph.output]
+    par_vars = []
+    value_infos = inferred_model_value_info(onnx_model)
+    value_infos = {
+        make_var_name(key): value
+        for key, value in value_infos.items()
+    }

     # prepare additional value_info
     # for weights
     for name, weight in graph_weights(onnx_graph):
-        value_info = graph_value_infos[name]
-        value_info['embeded_as'] = []
+        var_name = make_var_name(name)
+        value_info = value_infos[var_name]
+        value_info['lod'] = [0]
+        value_info['embedded_as'] = []
         value_info['get_weight'] = (lambda w: lambda: w.tolist())(
             weight)  # lazy getter
...
@@ -114,21 +132,25 @@ def convert(onnx_model_filename,
     # op set conversion
     # topo = 'backward' if embed_params else 'forward'
     topo = 'forward'
     for name, domain, op_type, inputs, outputs, attrs in graph_ops(
             onnx_graph, topo=topo):
-        logger.debug('translating op %s %s::%s ...', name, domain, op_type)
+        op_name = make_var_name(name)
+        inputs = list(map(make_var_name, inputs))
+        outputs = list(map(make_var_name, outputs))
+        logger.debug('translating op %s(%s) %s::%s ...', name, op_name, domain,
+                     op_type)
         if domain == DEFAULT_OP_DOMAIN:
             domain = ''
         try:
             fluid_writer.emit_op(
                 fluid_program,
-                name,
+                op_name,
                 domain,
                 op_type,
                 inputs,
                 outputs,
                 attrs,
-                graph_value_infos,
+                value_infos,
                 embed_params=embed_params,
             )
         except BaseException as e:
...
@@ -140,53 +162,74 @@ def convert(onnx_model_filename,
     logger.info('%d ops in, %d ops out', len(onnx_graph.node),
                 len(fluid_program.op_descs))

+    # type-shape info copy
+    for var_name, value_info in value_infos.items():
+        fluid_program.VarTypeShapeInfo(var_name, value_info,
+                                       remove_batch=False)  #
+    bad_vars = []
+    for var_name, var_desc in fluid_program.var_descs.items():
+        if not var_desc.type.lod_tensor.HasField('tensor'):
+            bad_vars.append(var_name)
+    if bad_vars:
+        logger.warning('type-shape not infered for var %s ...',
+                       ', '.join(bad_vars[:5]))
+        logger.warning('this causes little problem for PaddlePaddle, '
+                       'but Paddle Mobile may not infer correctly')
+        logger.warning('please consider running validation with -i '
+                       'to invoke type-shape inference in PaddlePaddle')
+
     # weight writer
     for name, weight in graph_weights(onnx_graph):
-        graph_params.append(name)
-        value_info = graph_value_infos[name]
-        var_names = value_info.get('embeded_as', [])
-        if var_names:
-            if len(var_names) > 1:
+        var_name = make_var_name(name)
+        par_vars.append(var_name)
+        value_info = value_infos[var_name]
+        embedded_names = value_info.get('embedded_as', [])
+        if embedded_names:
+            if len(embedded_names) > 1:
                 logger.info(
                     'weight %s is shared between ops, more disk space will be consumed',
                     name)
             logger.debug('saving weight %s(%s[%d], %dB) as %s ...', name,
-                         weight.dtype, weight.size, weight.nbytes, var_names)
-            for var_name in var_names:  # multiple references
-                fluid_writer.write_weight(
-                    weight, shutil.os.path.join(save_dir, var_name))
+                         weight.dtype, weight.size, weight.nbytes,
+                         embedded_names)
+            for embedded_name in embedded_names:  # multiple references
+                fluid_writer.write_weight(
+                    weight,
+                    shutil.os.path.join(save_dir, embedded_name),
+                    lod=value_info['lod'])
         else:
             logger.debug('saving weight %s(%s[%d], %dB) to %s ...', name,
-                         weight.dtype, weight.size, weight.nbytes, var_name)
-            fluid_writer.write_weight(
-                weight, shutil.os.path.join(save_dir, var_name))
-        fluid_writer.emit_param(fluid_program, name, value_info)
+                         weight.dtype, weight.size, weight.nbytes,
+                         make_var_name(name))
+            fluid_writer.write_weight(
+                weight,
+                shutil.os.path.join(save_dir, make_var_name(name)),
+                lod=value_info['lod'])
+        fluid_writer.emit_param(fluid_program, var_name, value_info)
     param_codes = fluid_program.codes
     fluid_program.codes = []
-    logger.info('%d weights converted', len(graph_params))
+    logger.info('%d weights converted', len(par_vars))

     # input writer
     external_inputs = []
-    for name in graph_inputs:
-        if name not in graph_params:
-            value_info = graph_value_infos[name]
+    for var_name in inp_vars:
+        if var_name not in par_vars:
+            value_info = value_infos[var_name]
             assert value_info['external']
-            external_inputs.append(name)
-    fluid_writer.emit_inputs(
-        fluid_program, external_inputs, graph_value_infos,
-        remove_batch=False)  # TODO:
+            external_inputs.append(var_name)
+    fluid_writer.emit_inputs(
+        fluid_program, external_inputs, value_infos,
+        remove_batch=False)  # TODO:
     input_codes = fluid_program.codes
     fluid_program.codes = []
     logger.info('%d inputs converted', len(external_inputs))

     # output writer
     external_outputs = []
-    for name in graph_outputs:
-        if name not in graph_params:
-            value_info = graph_value_infos[name]
+    for var_name in out_vars:
+        if var_name not in par_vars:
+            value_info = value_infos[var_name]
             assert value_info['external']
-            external_outputs.append(name)
+            external_outputs.append(var_name)
     fluid_writer.emit_outputs(fluid_program, external_outputs)
     output_codes = [''] + fluid_program.codes  # add an empty line
     fluid_program.codes = []
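The weight loop above stores a lazy getter with `(lambda w: lambda: w.tolist())(weight)`. The immediately-applied outer lambda binds `w` to the current weight; without it, every closure would share the loop variable and all getters would return the last weight. A minimal self-contained demonstration of that Python closure pitfall (with plain lists standing in for weight arrays):

```python
weights = [[1, 2], [3, 4]]

# naive closures all capture the loop variable itself...
naive = [lambda: w for w in weights]
# ...so every getter returns the last weight
print(naive[0]())  # [3, 4]

# binding through an immediately-applied outer lambda, as the converter does,
# freezes the current value per iteration
getters = [(lambda w: lambda: w)(w) for w in weights]
print(getters[0]())  # [1, 2]
print(getters[1]())  # [3, 4]
```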
...
@@ -194,10 +237,18 @@ def convert(onnx_model_filename,
     # code generation
-    header_codes = fluid_writer.header_code(
-        model_func_name, 'From: {}'.format(onnx_model_filename))
+    header_codes = fluid_writer.header_code(
+        model_func_name,
+        'From: {}'.format(onnx_model_filename),
+    )
     code_filename = shutil.os.path.join(save_dir, model_basename)
-    fluid_writer.write_code_file(code_filename, header_codes, input_codes,
-                                 param_codes, op_codes, output_codes)
+    fluid_writer.write_code_file(
+        code_filename,
+        header_codes,
+        input_codes,
+        param_codes,
+        op_codes,
+        output_codes,
+    )
     logger.info('code saved to %s, factory function: %s', code_filename,
                 model_func_name)
...
@@ -206,19 +257,16 @@ def convert(onnx_model_filename,
     fluid_writer.write_desc_file(
         desc_filename,
         op_descs=fluid_program.op_descs,
-        var_descs=fluid_program.var_descs,
+        var_descs=list(fluid_program.var_descs.values()),
     )
     logger.info('program saved to %s', desc_filename)

     logger.info('conversion finished')


-if __name__ == '__main__':
-    del convert
+def main():
     import argparse

-    from onnx2fluid.conversion import convert
-
     parser = argparse.ArgumentParser(
         description='onnx2fluid.convert',
         formatter_class=argparse.ArgumentDefaultsHelpFormatter,
@@ -283,10 +331,17 @@ if __name__ == '__main__':
...
@@ -283,10 +331,17 @@ if __name__ == '__main__':
pedantic
=
args
.
pedantic
pedantic
=
args
.
pedantic
skip_version_conversion
=
args
.
skip_version_conversion
skip_version_conversion
=
args
.
skip_version_conversion
convert
(
convert
(
model_filename
,
model_filename
,
save_dir
,
save_dir
,
embed_params
=
embed_params
,
embed_params
=
embed_params
,
onnx_opset_pedantic
=
pedantic
,
onnx_opset_pedantic
=
pedantic
,
onnx_skip_version_conversion
=
skip_version_conversion
,
onnx_skip_version_conversion
=
skip_version_conversion
,
debug
=
debug
)
debug
=
debug
)
if
__name__
==
'__main__'
:
del
convert
from
onnx2fluid.conversion
import
convert
main
()
onnx2fluid/onnx2fluid/framework_pb2.py (view file @ 84f717b2)
...
@@ -28,30 +28,66 @@ _ATTRTYPE = _descriptor.EnumDescriptor(
    filename=None,
    file=DESCRIPTOR,
    values=[
        _descriptor.EnumValueDescriptor(
            name='INT', index=0, number=0, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='FLOAT', index=1, number=1, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='STRING', index=2, number=2, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='INTS', index=3, number=3, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='FLOATS', index=4, number=4, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='STRINGS', index=5, number=5, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='BOOLEAN', index=6, number=6, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='BOOLEANS', index=7, number=7, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='BLOCK', index=8, number=8, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='LONG', index=9, number=9, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='BLOCKS', index=10, number=10, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='LONGS', index=11, number=11, options=None, type=None),
    ],
    containing_type=None,
    options=None,
...
@@ -80,53 +116,111 @@ _VARTYPE_TYPE = _descriptor.EnumDescriptor(
    filename=None,
    file=DESCRIPTOR,
    values=[
        _descriptor.EnumValueDescriptor(
            name='BOOL', index=0, number=0, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='INT16', index=1, number=1, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='INT32', index=2, number=2, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='INT64', index=3, number=3, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='FP16', index=4, number=4, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='FP32', index=5, number=5, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='FP64', index=6, number=6, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='SIZE_T', index=7, number=19, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='UINT8', index=8, number=20, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='INT8', index=9, number=21, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='LOD_TENSOR', index=10, number=7, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='SELECTED_ROWS', index=11, number=8, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='FEED_MINIBATCH', index=12, number=9, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='FETCH_LIST', index=13, number=10, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='STEP_SCOPES', index=14, number=11, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='LOD_RANK_TABLE', index=15, number=12, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='LOD_TENSOR_ARRAY', index=16, number=13, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='PLACE_LIST', index=17, number=14, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='READER', index=18, number=15, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='RAW', index=19, number=17, options=None, type=None),
        _descriptor.EnumValueDescriptor(
            name='TUPLE', index=20, number=18, options=None, type=None),
    ],
    containing_type=None,
    options=None,
...
@@ -1480,11 +1574,10 @@ DESCRIPTOR.enum_types_by_name['AttrType'] = _ATTRTYPE
Version = _reflection.GeneratedProtocolMessageType(
    'Version',
    (_message.Message, ),
    dict(
        DESCRIPTOR=_VERSION,
        __module__='framework_pb2'
        # @@protoc_insertion_point(class_scope:paddle.framework.proto.Version)
    ))
_sym_db.RegisterMessage(Version)

OpDesc = _reflection.GeneratedProtocolMessageType(
...
@@ -1601,11 +1694,10 @@ _sym_db.RegisterMessage(VarType.Tuple)
VarDesc = _reflection.GeneratedProtocolMessageType(
    'VarDesc',
    (_message.Message, ),
    dict(
        DESCRIPTOR=_VARDESC,
        __module__='framework_pb2'
        # @@protoc_insertion_point(class_scope:paddle.framework.proto.VarDesc)
    ))
_sym_db.RegisterMessage(VarDesc)

BlockDesc = _reflection.GeneratedProtocolMessageType(
...
onnx2fluid/onnx2fluid/onnx_utils.py (view file @ 84f717b2)
...
@@ -11,9 +11,11 @@ from __future__ import division
 import logging
 import numpy as np
 import onnx
+import onnx.optimizer as optimizer

 from collections import OrderedDict as Dict  # as default dict
-from onnx.helper import get_attribute_value, make_attribute
+from onnx.checker import check_model
+from onnx.helper import get_attribute_value, make_attribute, strip_doc_string
 from onnx.mapping import TENSOR_TYPE_TO_NP_TYPE
 from onnx.numpy_helper import to_array
 from onnx.shape_inference import infer_shapes
...
@@ -23,14 +25,16 @@ logger = logging.getLogger(__name__)
 __all__ = [
     'print_pb_structure',
     'build_value_refs',
+    'tensor_dtype',
+    'tensor_shape',
     'node_attrs',
     'node_topo',
     'node_iter',
-    'tensor_dtype',
-    'tensor_shape',
     'graph_ops',
     'graph_weights',
     'inferred_model_value_info',
+    'polish_model',
+    'polish_and_save',
     'optimize_model_skip_op_for_inference',
     'optimize_model_strip_initializer',
     'optimize_model_cast',
...
@@ -50,17 +54,17 @@ def print_pb_structure(message, loop_iterative=False, depth=0):
    if hasattr(message, 'DESCRIPTOR') and hasattr(message.DESCRIPTOR, 'fields'):
        for field in message.DESCRIPTOR.fields:
            print('\t' * depth + '-', field.name)
            print_pb_structure(
                getattr(message, field.name),
                loop_iterative=loop_iterative,
                depth=(depth + 1))

    if loop_iterative and hasattr(message, 'MergeFrom') and hasattr(
            message, '__len__'):
        for idx, item in enumerate(message):
            print('\t' * depth + '-', idx)
            print_pb_structure(
                item, loop_iterative=loop_iterative, depth=(depth + 1))


def build_value_refs(nodes):
...
@@ -83,14 +87,21 @@ def get_attribute_value2(attr):
     get_attribute_value enhanced
     """

+    assert isinstance(attr, onnx.AttributeProto), 'attr is not a AttributeProto instance'
+
     if attr.type == onnx.AttributeProto.TENSOR:
         dtype = np.dtype(TENSOR_TYPE_TO_NP_TYPE[attr.t.data_type])
         data = attr.t.raw_data
         value = np.frombuffer(
             data, dtype=dtype, count=(len(data) // dtype.itemsize))
     elif attr.type == onnx.AttributeProto.STRING:
         value = attr.s
         value = value.decode() if isinstance(value, bytes) else value
+    elif attr.type == onnx.AttributeProto.STRINGS:
+        value = attr.strings
+        value = [s.decode() if isinstance(s, bytes) else s for s in value]
     else:
         value = get_attribute_value(attr)
     return value
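The TENSOR branch above decodes an attribute's `raw_data` bytes by dividing the buffer length by the element size, which is what `np.frombuffer(data, dtype=dtype, count=...)` does vectorized. A stdlib-only sketch of the same element-count logic for little-endian float32 data (`decode_raw_floats` is a hypothetical helper for illustration, not part of the module):

```python
import struct

def decode_raw_floats(data: bytes) -> list:
    """Decode little-endian float32 raw tensor data, mirroring np.frombuffer."""
    itemsize = 4  # sizeof(float32), i.e. dtype.itemsize
    count = len(data) // itemsize
    return list(struct.unpack('<%df' % count, data[:count * itemsize]))

raw = struct.pack('<3f', 1.0, 2.0, 3.5)
print(decode_raw_floats(raw))  # [1.0, 2.0, 3.5]
```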
...
@@ -101,6 +112,9 @@ def tensor_dtype(tensor):
     get ONNX tensor in np.dtype
     """

+    assert isinstance(tensor, onnx.ValueInfoProto), 'tensor is not a ValueInfoProto instance'
+
     return TENSOR_TYPE_TO_NP_TYPE[tensor.type.tensor_type.elem_type]
...
@@ -109,7 +123,10 @@ def tensor_shape(tensor):
     get ONNX tensor shape
     """

-    return [dim.dim_value for dim in tensor.type.tensor_type.shape.dim]
+    assert isinstance(tensor, onnx.ValueInfoProto), 'tensor is not a ValueInfoProto instance'
+
+    return tuple([dim.dim_value for dim in tensor.type.tensor_type.shape.dim])


 def node_attrs(node):
...
@@ -117,6 +134,8 @@ def node_attrs(node):
     convert ONNX node attributes to dict
     """

+    assert isinstance(node, onnx.NodeProto), 'node is not a NodeProto instance'
+
     return {attr.name: get_attribute_value2(attr)
             for attr in node.attribute}  # dict
...
@@ -145,12 +164,12 @@ def node_topo(nodes, topo='default'):
         for node_idx, degree in enumerate(node_in_degrees):
             if degree == 0:
                 queue.append(node_idx)
-        while len(queue) > 0:
+        while queue:
             node_idx = queue.pop(0)
             node_topo.append(node_idx)
             for val_name in nodes[node_idx].output:
                 output_refs[val_name].remove(node_idx)
-                if len(output_refs[val_name]) > 0:
+                if output_refs[val_name]:
                     continue
                 output_refs.pop(val_name)
                 if val_name not in input_refs:
...
@@ -170,12 +189,12 @@ def node_topo(nodes, topo='default'):
         for node_idx, degree in enumerate(node_out_degrees):
             if degree == 0:
                 queue.append(node_idx)
-        while len(queue) > 0:
+        while queue:
             node_idx = queue.pop(0)
             node_topo.append(node_idx)
             for val_name in nodes[node_idx].input:
                 input_refs[val_name].remove(node_idx)
-                if len(input_refs[val_name]) > 0:
+                if input_refs[val_name]:
                     continue
                 input_refs.pop(val_name)
                 if val_name not in output_refs:
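Both `node_topo` hunks replace `while len(queue) > 0` with the idiomatic `while queue` emptiness test; the underlying traversal is Kahn's topological sort. A self-contained sketch over a plain edge list (not the ONNX value-reference bookkeeping this file uses) looks like this:

```python
from collections import deque

def kahn_topo(num_nodes, edges):
    """Topological order via Kahn's algorithm; edges is a list of (src, dst) pairs."""
    in_degree = [0] * num_nodes
    successors = [[] for _ in range(num_nodes)]
    for src, dst in edges:
        in_degree[dst] += 1
        successors[src].append(dst)
    # start from all zero-in-degree nodes
    queue = deque(idx for idx, deg in enumerate(in_degree) if deg == 0)
    order = []
    while queue:  # idiomatic emptiness test, as in the refactor above
        idx = queue.popleft()
        order.append(idx)
        for nxt in successors[idx]:
            in_degree[nxt] -= 1
            if in_degree[nxt] == 0:
                queue.append(nxt)
    return order
```

If `len(order) < num_nodes` on return, the graph contains a cycle; `node_topo` handles that case separately.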
...
@@ -208,6 +227,11 @@ def node_iter(nodes, indices=None):
         if name == '':
             name = 'op_' + str(index)
+        # else:  # make_op_name
+        #     for s in ' \\|/:-':  #
+        #         name = name.replace(s, '_')
         if domain == '':
             domain = DEFAULT_OP_DOMAIN
...
@@ -219,9 +243,8 @@ def graph_ops(graph, topo='default'):
     generator for ONNX node graph with given topology
     """
 
-    if not isinstance(graph, onnx.GraphProto):
-        logger.error('graph is not a GraphProto instance')
-        return
+    assert isinstance(graph, onnx.GraphProto), 'graph is not a GraphProto instance'
 
     return node_iter(graph.node, node_topo(graph.node, topo))
...
@@ -231,9 +254,8 @@ def graph_weights(graph):
     generator for weights of an ONNX model
     """
 
-    if not isinstance(graph, onnx.GraphProto):
-        logger.error('graph is not a GraphProto instance')
-        return
+    assert isinstance(graph, onnx.GraphProto), 'graph is not a GraphProto instance'
 
     for initializer in graph.initializer:
         name = initializer.name
...
@@ -246,29 +268,32 @@ def inferred_model_value_info(model):
     collect value/type info for an ONNX model
     """
 
+    assert isinstance(model, onnx.ModelProto), 'model is not a ModelProto instance'
     model = infer_shapes(model)
     graph = model.graph
     value_info = Dict()
     for item in graph.value_info:
-        value_info[item.name] = dict(
-            dtype=tensor_dtype(item),
-            shape=tensor_shape(item),
-            external=False,
-        )
+        value_info[item.name] = {
+            'dtype': tensor_dtype(item),
+            'shape': tensor_shape(item),
+            'external': False,
+        }
     for item in graph.input:
         assert item.name not in value_info
-        value_info[item.name] = dict(
-            dtype=tensor_dtype(item),
-            shape=tensor_shape(item),
-            external=True,
-        )
+        value_info[item.name] = {
+            'dtype': tensor_dtype(item),
+            'shape': tensor_shape(item),
+            'external': True,
+        }
     for item in graph.output:
         # assert item.name not in value_info, 'bypass-model not supported'
-        value_info[item.name] = dict(
-            dtype=tensor_dtype(item),
-            shape=tensor_shape(item),
-            external=True,
-        )
+        value_info[item.name] = {
+            'dtype': tensor_dtype(item),
+            'shape': tensor_shape(item),
+            'external': True,
+        }
     return value_info
...
@@ -302,12 +327,63 @@ def skip_node_backward(nodes, src_input_name, dst_output_name, output_refs):
     return processed
 
+def polish_model(model, internals=True, extras=True, checking=True):
+    """
+    polish_model enhanced for inference
+    """
+
+    if checking:
+        check_model(model)
+    strip_doc_string(model)
+    if internals:
+        passes = optimizer.get_available_passes()
+        passes = list(filter(lambda name: not name.startswith('split_'), passes))  #
+        logger.debug('builtin optimizations to perform in ONNX:\n\t%s', passes)
+        model = optimizer.optimize(model, passes=passes)
+    if extras:
+        for optimize in (
+                optimize_model_skip_op_for_inference,
+                optimize_model_strip_initializer,
+                optimize_model_cast,
+                optimize_model_slice,
+        ):
+            model = optimize(model)
+    model = infer_shapes(model)
+    if checking:
+        check_model(model)
+    return model
+
+def polish_and_save(model_filename, suffix='.polished', save_filename=None,
+                    *args, **kwargs):
+    """
+    run polish_model and save
+    """
+
+    if save_filename is None:
+        save_filename = model_filename.replace('.onnx', suffix + '.onnx')
+    model = onnx.load(model_filename)
+    model = polish_model(model, *args, **kwargs)
+    onnx.save(model, save_filename)
+    logger.info('polished model saved to: %s', save_filename)
+    return save_filename
+
 def optimize_model_skip_op_for_inference(model, op_list=None):
     """
     skip ops can be bypassed for inference
     """
 
+    assert isinstance(model, onnx.ModelProto), 'model is not a ModelProto instance'
     if op_list is None:
-        op_list = ['Dropout']
+        op_list = ('Dropout', 'Identity')
     nodes = model.graph.node
     input_refs, output_refs = build_value_refs(nodes)
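The new `polish_model` above chains its extra optimizations by folding the model through a tuple of `model -> model` transforms. The pattern in isolation, with toy transforms and no ONNX dependency (`apply_passes` is a hypothetical name for illustration):

```python
def apply_passes(model, passes):
    """Fold a value through a sequence of transforms, as polish_model chains its extras."""
    for optimize in passes:
        # each pass takes the current model and returns a rewritten one
        model = optimize(model)
    return model
```

The advantage of this shape is that passes stay independent and reorderable; adding a new optimization is just one more entry in the tuple.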
...
@@ -322,10 +398,10 @@ def optimize_model_skip_op_for_inference(model, op_list=None):
         if not (node.domain == DEFAULT_OP_DOMAIN or node.domain == ''):
             continue
         op_type = node.op_type
-        if not (op_type in op_list):
+        if op_type not in op_list:
             continue
 
-        if op_type in ['Dropout']:
+        if op_type in ('Dropout', ):
             input_name = node.input[0]
             output_name = node.output[0]
         elif not (len(node.input) == 1 and len(node.output) == 1):
...
@@ -368,6 +444,9 @@ def optimize_model_strip_initializer(model, keep_input_only=True):
     strip weights for inference
     """
 
+    assert isinstance(model, onnx.ModelProto), 'model is not a ModelProto instance'
     nodes = model.graph.node
     input_refs, output_refs = build_value_refs(nodes)
     out_names = [val.name for val in model.graph.output]
...
@@ -406,9 +485,12 @@ def optimize_model_strip_initializer(model, keep_input_only=True):
 
 def optimize_model_cast(model):
     """
     strip cascade and unecessary onnx::Cast
     """
 
+    assert isinstance(model, onnx.ModelProto), 'model is not a ModelProto instance'
     nodes = model.graph.node
     input_refs, output_refs = build_value_refs(nodes)
     value_info = inferred_model_value_info(model)
 
...
@@ -422,7 +504,7 @@ def optimize_model_cast(model):
     for node_idx, node in enumerate(nodes):
         if not (node.domain == DEFAULT_OP_DOMAIN or node.domain == ''):
             continue
-        if not (node.op_type == 'Cast'):
+        if node.op_type != 'Cast':
             continue
         attrs = node_attrs(node)
         output_dtype = TENSOR_TYPE_TO_NP_TYPE[attrs['to']]
...
@@ -463,19 +545,22 @@ def optimize_model_cast(model):
 
 def optimize_model_slice(model):
     """
     strip cascade and unecessary onnx::Slice
     """
 
+    assert isinstance(model, onnx.ModelProto), 'model is not a ModelProto instance'
     nodes = model.graph.node
     input_refs, output_refs = build_value_refs(nodes)
 
-    def _build_slice_node_chain(node_idx):
+    def build_slice_node_chain(node_idx):
         chain = []
         while True:
             node = nodes[node_idx]
             if not (node.domain == DEFAULT_OP_DOMAIN or node.domain == ''):
                 return chain
-            if not node.op_type == 'Slice':
+            if node.op_type != 'Slice':
                 return chain
             chain.append(node_idx)
             output_name = node.output[0]
...
@@ -485,7 +570,7 @@ def optimize_model_slice(model):
             node_idx = list(input_refs[output_name])[0]
 
     # axis: (start, end)
-    def _merge_slice(slice_chain):
+    def merge_slice(slice_chain):
         merged_slice = dict()
         for slice_node_idx in slice_chain:
             node = nodes[slice_node_idx]
 
...
@@ -508,14 +593,14 @@ def optimize_model_slice(model):
     ret_nodes = ret.graph.node
     nodes_to_remove = []
     for node_idx in range(len(nodes)):
-        slice_chain = _build_slice_node_chain(node_idx)
-        if len(slice_chain) == 0:
+        slice_chain = build_slice_node_chain(node_idx)
+        if not slice_chain:
             continue
-        merged_slice = _merge_slice(slice_chain)
-        if len(merged_slice) > 0 and len(slice_chain) == 1:  # no need to merge
+        merged_slice = merge_slice(slice_chain)
+        if merged_slice and len(slice_chain) == 1:  # no need to merge
             continue
 
-        attrs = dict(axes=[], starts=[], ends=[])
+        attrs = {'axes': [], 'starts': [], 'ends': []}
         for axis, (start, end) in merged_slice.items():
             attrs['axes'].append(axis)
             attrs['starts'].append(start)
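The renamed `merge_slice` collapses a chain of cascaded `Slice` ops into a single set of per-axis ranges. The interval composition behind it can be sketched standalone (`merge_slice_chain` is a hypothetical helper; it assumes non-negative starts/ends, while the real code also handles more cases):

```python
def merge_slice_chain(slices):
    """Compose per-axis (start, end) ranges from a chain of Slice ops.

    slices: one dict per Slice op in application order, mapping axis -> (start, end).
    A later slice operates on the already-sliced tensor, so its offsets are
    relative to the earlier range.
    """
    merged = {}
    for axis_ranges in slices:
        for axis, (start, end) in axis_ranges.items():
            if axis in merged:
                prev_start, prev_end = merged[axis]
                # shift by the accumulated start, clamp to the earlier end
                merged[axis] = (prev_start + start, min(prev_end, prev_start + end))
            else:
                merged[axis] = (start, end)
    return merged
```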
...
@@ -526,12 +611,11 @@ def optimize_model_slice(model):
         output_name = last_node.output[0]
         processed = -1
         if output_name in input_refs:  # 0, [1...]
-            new_input_name = first_node.output[0] if len(merged_slice) > 0 else input_name
+            new_input_name = first_node.output[0] if merged_slice else input_name
             processed = skip_node_forward(ret_nodes, output_name,
                                           new_input_name, input_refs)
             if processed > 0:
-                if len(merged_slice) > 0:
+                if merged_slice:
                     remain_idx = slice_chain[0]
                     remove_chain = slice_chain[1:]
                     slice_node = ret_nodes[remain_idx]
 
...
@@ -545,12 +629,11 @@ def optimize_model_slice(model):
                     remove_chain = slice_chain
 
         if processed < 0 and input_name in output_refs:
-            new_output_name = last_node.input[0] if len(merged_slice) > 0 else output_name
+            new_output_name = last_node.input[0] if merged_slice else output_name
             processed = skip_node_backward(ret_nodes, input_name,
                                            new_output_name, output_refs)
             if processed > 0:
-                if len(merged_slice) > 0:
+                if merged_slice:
                     remain_idx = slice_chain[-1]
                     remove_chain = slice_chain[:-1]
                     slice_node = ret_nodes[remain_idx]
...
@@ -565,7 +648,7 @@ def optimize_model_slice(model):
         if processed > 0:
             nodes_to_remove.extend(remove_chain)
-            if len(merged_slice) == 0:
+            if not merged_slice:
                 logger.debug('skip slice chain %s -> %s -> %s', input_name,
                              slice_chain, output_name)
         elif processed < 0:  # NEVERFIX: not merge standalone slice chain
...
@@ -586,22 +669,16 @@ if __name__ == '__main__':
         level=logging.DEBUG,
     )
 
-    from onnx.checker import check_model
-    from onnx.utils import polish_model
     from onnx.version_converter import convert_version
 
-    model = onnx.load('../examples/t1.onnx')
+    model = onnx.load('/tmp/export.onnx')
     print_pb_structure(model, loop_iterative=False)
 
     check_model(model)
     model = convert_version(model, 9)
-    model = optimize_model_skip_op_for_inference(model)
-    model = optimize_model_strip_initializer(model)
-    model = optimize_model_cast(model)
-    model = optimize_model_slice(model)
     model = polish_model(model)
-    onnx.save(model, '/tmp/optimized.onnx')
+    onnx.save(model, '/tmp/export.polished.onnx')
 
     graph = model.graph
     value_info = inferred_model_value_info(model)
...
@@ -613,23 +690,23 @@ if __name__ == '__main__':
     logger.info('ops:')
     for name, domain, op_type, _, _, attrs in graph_ops(graph, topo='forward'):
-        logger.info('%s %s::%s: %s', name, domain, op_type, attrs)
+        logger.info('-\t%s %s::%s: %s', name, domain, op_type, attrs)
 
     logger.info('weights:')
     for name, array in graph_weights(graph):
         weights.append(name)
-        logger.info('%s: %s', name, array.shape)
+        logger.info('-\t%s: %s', name, array.shape)
 
     logger.info('inputs:')
     external_inputs = []
     for name in inputs:
         if name not in weights:
             external_inputs.append(name)
-            logger.info('%s: %s', name, value_info[name]['shape'])
+            logger.info('-\t%s: %s', name, value_info[name]['shape'])
 
     logger.info('outputs:')
     external_outputs = []
     for name in outputs:
         if name not in weights:
             external_outputs.append(name)
-            logger.info('%s: %s', name, value_info[name]['shape'])
+            logger.info('-\t%s: %s', name, value_info[name]['shape'])
onnx2fluid/onnx2fluid/symbolic.py (view file @ 84f717b2)
...
@@ -38,47 +38,68 @@ DEFAULT_OP_MAPPING_FIELD_VALUES[
 DEFAULT_OP_MAPPING_FIELD_VALUES['OUTPUT_PERM'] = None  # sampler: [idx_onnx_arg...]
 DEFAULT_OP_MAPPING_FIELD_VALUES['FILL_NAME_FIELD'] = True
+DEFAULT_OP_MAPPING_VALUES = list(DEFAULT_OP_MAPPING_FIELD_VALUES.values())
 
 DEFAULT_OP_MAPPING = {
     ## nil ops ##
     'RandomUniform':
         ['uniform_random', [], ['Out'], dict(high='max', low='min'),
-         dict(), None, None, False],
+         dict(max=1., min=0., seed=0), None, None, False],  # TODO: add dtype support
     'RandomNormal':
         ['gaussian_random', [], ['Out'], dict(scale='std'),
-         dict(), None, None, False],
+         dict(mean=0., std=1., seed=0), None, None, False],  # TODO: add dtype support
     ## unary ops ##
     'Abs': ['abs', ['X'], ['Out']],
-    'ArgMax': ['argmax', ['X'], ['Out'], dict(keepdims='')],
-    'ArgMin': ['argmin', ['X'], ['Out'], dict(keepdims='')],
+    'Acos': ['acos', ['X'], ['Out']],
+    'Asin': ['asin', ['X'], ['Out']],
+    'Atan': ['atan', ['X'], ['Out']],
+    'ArgMax': ['argmax', ['X'], ['Out'], dict(keepdims=''), dict(axis=0)],
+    'ArgMin': ['argmin', ['X'], ['Out'], dict(keepdims=''), dict(axis=0)],
     'Ceil': ['ceil', ['X'], ['Out']],
-    'Clip': ['clip', ['X'], ['Out']],  # attrs bypassed
+    'Clip': ['clip', ['X'], ['Out'], dict(), dict(
+        min=(_np.array([255, 255, 127, 255], dtype=_np.uint8).view(_np.float32)),
+        max=(_np.array([255, 255, 127, 127], dtype=_np.uint8).view(_np.float32)),
+    )],
     'Cos': ['cos', ['X'], ['Out']],
-    'Elu': ['elu', ['X'], ['Out']],
+    'Elu': ['elu', ['X'], ['Out'], dict(), dict(alpha=1.)],
     'Exp': ['exp', ['X'], ['Out']],
-    'Flatten': ['flatten', ['X'], ['Out']],  # attrs bypassed, FIXME: emit flatten2
+    'Flatten': ['flatten', ['X'], ['Out'], dict(), dict(axis=1)],  # FIXME: emit flatten2
     'Floor': ['floor', ['X'], ['Out']],
-    'Gather': ['gather', ['X'], ['Out'], dict(axis='')],
+    'Gather': ['gather', ['X', "Index"], ['Out'], dict(axis='')],
+    'HardSigmoid': ['hard_sigmoid', ['X'], ['Out'], dict(alpha='slope', beta='offset'),
+                    dict(slope=.2, offset=.5)],
+    'Identity': ['assign', ['X'], ['Out']],
-    'LeakyRelu': ['leaky_relu', ['X'], ['Out']],
+    'LeakyRelu': ['leaky_relu', ['X'], ['Out'], dict(), dict(alpha=.01)],
     'Log': ['log', ['X'], ['Out']],
-    'LRN': ['lrn', ['X'], ['Out', 'MidOut'], dict(size='n', bias='k')],  #
+    'LRN': ['lrn', ['X'], ['Out', 'MidOut'], dict(size='n', bias='k'),
+            dict(n=5, k=1., alpha=1e-4, beta=.75)],  #
     'Reciprocal': ['reciprocal', ['X'], ['Out']],
     'Relu': ['relu', ['X'], ['Out']],
-    'Selu': ['selu', ['X'], ['Out'], dict(gamma='scale')],
+    'Round': ['round', ['X'], ['Out']],
+    'Selu': ['selu', ['X'], ['Out'], dict(gamma='scale'),
+             dict(scale=1.0507009873554804934193349852946,
+                  alpha=1.6732632423543772848170429916717,
+             )],
+    'Shape': ['shape', ['X'], ['Out']],  # FIXME: out is int64 vs int32
     'Shrink': ['softshrink', ['X'], ['Out'], dict(bias='', labmd='')],
     'Sigmoid': ['sigmoid', ['X'], ['Out']],
+    'Sign': ['sign', ['X'], ['Out']],
     'Sin': ['sin', ['X'], ['Out']],
-    'Squeeze': ['squeeze', ['X'], ['Out']],  # attrs bypassed, FIXME: emit squeeze2
-    'Softplus': ['softplus', ['X'], ['Out']],
+    'Squeeze': ['squeeze', ['X'], ['Out']],  # FIXME: emit squeeze2
     # FIXME: default axis = -1, reshape required before and after
-    'Softmax': ['softmax', ['X'], ['Out'], dict(axis='')],
+    'Softmax': ['softmax', ['X'], ['Out'], dict(axis=''), dict(axis=-1)],
+    'Softplus': ['softplus', ['X'], ['Out']],
     'Softsign': ['softsign', ['X'], ['Out']],
+    'SpaceToDepth': ['space_to_depth', ['X'], ['Out']],
     'Sqrt': ['sqrt', ['X'], ['Out']],
     'Tanh': ['tanh', ['X'], ['Out']],
-    'ThresholdedRelu': ['thresholded_relu', ['X'], ['Out'], dict(alpha='threshold')],
+    'ThresholdedRelu': ['thresholded_relu', ['X'], ['Out'], dict(alpha='threshold'),
+                        dict(alpha=1.)],
     #'Transpose': ['transpose', ['X'], ['Out']],
-    'Unsqueeze': ['unsqueeze', ['X'], ['Out']],  # attrs bypassed, FIXME: emit unsqueeze2
+    'Unsqueeze': ['unsqueeze', ['X'], ['Out']],  # FIXME: emit unsqueeze2
     ## binary ops ##
     'Add': ['elementwise_add', ['X', 'Y'], ['Out'], dict(), dict(axis=-1)],
     #'AffineGrid': ['affine_grid', ['Theta'], ['Output'], dict(size='out_shape')],
@@ -90,121 +111,120 @@ DEFAULT_OP_MAPPING = {
...
@@ -90,121 +111,120 @@ DEFAULT_OP_MAPPING = {
'MatMul'
:
[
'matmul'
,
[
'X'
,
'Y'
],
[
'Out'
]],
# defaults excluded for transpose_x vs transpose_X
'MatMul'
:
[
'matmul'
,
[
'X'
,
'Y'
],
[
'Out'
]],
# defaults excluded for transpose_x vs transpose_X
'Max'
:
[
'elementwise_max'
,
[
'X'
,
'Y'
],
[
'Out'
],
dict
(),
dict
(
axis
=-
1
)],
'Max'
:
[
'elementwise_max'
,
[
'X'
,
'Y'
],
[
'Out'
],
dict
(),
dict
(
axis
=-
1
)],
'Min'
:
[
'elementwise_min'
,
[
'X'
,
'Y'
],
[
'Out'
],
dict
(),
dict
(
axis
=-
1
)],
'Min'
:
[
'elementwise_min'
,
[
'X'
,
'Y'
],
[
'Out'
],
dict
(),
dict
(
axis
=-
1
)],
'Mod'
:
[
'elementwise_mod'
,
[
'X'
,
'Y'
],
[
'Out'
],
dict
(),
dict
(
axis
=-
1
)],
'Mul'
:
[
'elementwise_mul'
,
[
'X'
,
'Y'
],
[
'Out'
],
dict
(),
dict
(
axis
=-
1
)],
'Mul'
:
[
'elementwise_mul'
,
[
'X'
,
'Y'
],
[
'Out'
],
dict
(),
dict
(
axis
=-
1
)],
'Not'
:
[
'logical_not'
,
[
'X'
,
'Y'
],
[
'Out'
]],
'Not'
:
[
'logical_not'
,
[
'X'
,
'Y'
],
[
'Out'
]],
'OneHot'
:
# assuming values=[0, 1], axis=-1 and drop them
'OneHot'
:
# assuming values=[0, 1], axis=-1 and drop them
[
'one_hot'
,
[
'Input'
,
'
Depth
'
],
[
'Out'
],
dict
(
axis
=
''
),
dict
(),
[
'one_hot'
,
[
'Input'
,
'
depth_tensor
'
],
[
'Out'
],
dict
(
axis
=
''
),
dict
(),
[
0
,
1
],
None
,
False
],
[
0
,
1
],
None
,
False
],
'Or'
:
[
'logical_or'
,
[
'X'
,
'Y'
],
[
'Out'
]],
'Or'
:
[
'logical_or'
,
[
'X'
,
'Y'
],
[
'Out'
]],
'Pow'
:
[
'elementwise_pow'
,
[
'X'
,
'Y'
],
[
'Out'
],
dict
(),
dict
(
axis
=-
1
)],
# TODO: pow for scalar exponent
'Pow'
:
[
'elementwise_pow'
,
[
'X'
,
'Y'
],
[
'Out'
],
dict
(),
dict
(
axis
=-
1
)],
# TODO: pow for scalar exponent
'Sub'
:
[
'elementwise_sub'
,
[
'X'
,
'Y'
],
[
'Out'
],
dict
(),
dict
(
axis
=-
1
)],
'Sub'
:
[
'elementwise_sub'
,
[
'X'
,
'Y'
],
[
'Out'
],
dict
(),
dict
(
axis
=-
1
)],
'Xor'
:
[
'logical_xor'
,
[
'X'
,
'Y'
],
[
'Out'
]],
'Xor'
:
[
'logical_xor'
,
[
'X'
,
'Y'
],
[
'Out'
]],
# reduce ops
# reduce ops
'ReduceMax'
:
[
'reduce_max'
,
[
'X'
],
[
'Out'
],
dict
(
axes
=
'dim'
,
keepdims
=
'keep_dim'
)],
# TODO: fix reduce_all ?
'ReduceMean'
:
[
'reduce_mean'
,
[
'X'
],
[
'Out'
],
dict
(
axes
=
'dim'
,
keepdims
=
'keep_dim'
)],
'ReduceMax'
:
'ReduceMin'
:
[
'reduce_min'
,
[
'X'
],
[
'Out'
],
dict
(
axes
=
'dim'
,
keepdims
=
'keep_dim'
)],
[
'reduce_max'
,
[
'X'
],
[
'Out'
],
dict
(
axes
=
'dim'
,
keepdims
=
'keep_dim'
),
'ReduceProd'
:
[
'reduce_prod'
,
[
'X'
],
[
'Out'
],
dict
(
axes
=
'dim'
,
keepdims
=
'keep_dim'
)],
dict
(
keep_dim
=
1
)],
'ReduceSum'
:
[
'reduce_sum'
,
[
'X'
],
[
'Out'
],
dict
(
axes
=
'dim'
,
keepdims
=
'keep_dim'
)],
'ReduceMean'
:
[
'reduce_mean'
,
[
'X'
],
[
'Out'
],
dict
(
axes
=
'dim'
,
keepdims
=
'keep_dim'
),
dict
(
keep_dim
=
1
)],
'ReduceMin'
:
[
'reduce_min'
,
[
'X'
],
[
'Out'
],
dict
(
axes
=
'dim'
,
keepdims
=
'keep_dim'
),
dict
(
keep_dim
=
1
)],
'ReduceProd'
:
[
'reduce_prod'
,
[
'X'
],
[
'Out'
],
dict
(
axes
=
'dim'
,
keepdims
=
'keep_dim'
),
dict
(
keep_dim
=
1
)],
'ReduceSum'
:
[
'reduce_sum'
,
[
'X'
],
[
'Out'
],
dict
(
axes
=
'dim'
,
keepdims
=
'keep_dim'
),
dict
(
keep_dim
=
1
)],
# other ops
# other ops
'Scatter'
:
[
'scatter'
,
[
'X'
,
'I
ndex'
,
'Updates'
],
[
'Out'
]
],
'Scatter'
:
[
'scatter'
,
[
'X'
,
'I
ds'
,
'Updates'
],
[
'Out'
],
dict
(),
dict
(
overwrite
=
True
)
],
'TopK'
:
[
'topk'
,
[
'X'
,
'K'
],
[
'Out'
,
'Indices'
]],
'TopK'
:
[
'topk'
,
[
'X'
,
'K'
],
[
'Out'
,
'Indices'
]],
}
}
DEFAULT_IOA_CONSTRAINTS
=
{
DEFAULT_IOA_CONSTRAINTS
=
{
'ArgMax'
:
[
'ArgMax'
:
[
(
lambda
i
,
o
,
a
:
a
.
get
(
'keepdims'
,
1
)
==
1
,
(
lambda
i
,
o
,
a
:
a
.
get
(
'keepdims'
,
1
)
==
1
,
'only keepdims = 0
is
supported'
),
'only keepdims = 0 supported'
),
],
],
'ArgMin'
:
[
'ArgMin'
:
[
(
lambda
i
,
o
,
a
:
a
.
get
(
'keepdims'
,
1
)
==
1
,
(
lambda
i
,
o
,
a
:
a
.
get
(
'keepdims'
,
1
)
==
1
,
'only keepdims = 0
is
supported'
),
'only keepdims = 0 supported'
),
],
],
'Gather'
:
[
'Gather'
:
[
(
lambda
i
,
o
,
a
:
a
.
get
(
'axis'
,
0
)
==
0
,
'only axis = 0
is
supported'
),
(
lambda
i
,
o
,
a
:
a
.
get
(
'axis'
,
0
)
==
0
,
'only axis = 0 supported'
),
],
],
'Shrink'
:
[
'Shrink'
:
[
(
lambda
i
,
o
,
a
:
a
.
get
(
'bias'
,
0
)
==
a
.
get
(
'lambd'
,
0
.5
),
(
lambda
i
,
o
,
a
:
a
.
get
(
'bias'
,
0
)
==
a
.
get
(
'lambd'
,
.
5
),
'only SoftShrink with bias = lambd
is
supported'
),
'only SoftShrink with bias = lambd supported'
),
],
],
# 'Softmax':
# 'Softmax':
# [(lambda i, o, a: a.get('axis', 1) == -2, 'Paddle fluid Softmax works on dim -2 only'),
# [(lambda i, o, a: a.get('axis', 1) == -2, 'Paddle fluid Softmax works on dim -2 only'),
# ],
# ],
'OneHot'
:
[
'OneHot'
:
[
(
lambda
i
,
o
,
a
:
a
.
get
(
'axis'
,
-
1
)
==
-
1
,
(
lambda
i
,
o
,
a
:
a
.
get
(
'axis'
,
-
1
)
==
-
1
,
'only axis = -1 supported'
),
'only axis = -1 is supported'
),
],
],
'Scatter'
:
[
'Scatter'
:
[
(
lambda
i
,
o
,
a
:
a
.
get
(
'axis'
,
0
)
==
0
,
'only axis = 0
is
supported'
),
(
lambda
i
,
o
,
a
:
a
.
get
(
'axis'
,
0
)
==
0
,
'only axis = 0 supported'
),
],
],
'TopK'
:
[
'TopK'
:
[
(
lambda
i
,
o
,
a
:
a
.
get
(
'axis'
,
-
1
)
==
-
1
,
(
lambda
i
,
o
,
a
:
a
.
get
(
'axis'
,
-
1
)
==
-
1
,
'only axis = -1 supported'
),
'only axis = -1 is supported'
),
],
],
}
}
-def _make_var_name(name):
-    """
-    make a valid variable name in Python code
-    """
-
-    if name == '':
-        return '_'
-    if name[0].isdigit():
-        return 'var_' + name
-    for s in ' *?\\/-:':
-        name = name.replace(s, '_')
-    if name.startswith('_'):
-        name = 'var' + name
-    return name
-
-#def _value_info_or_none(value_infos, val_name):
-#    return value_infos.get(val_name, None)
-
-def _dtype(value_infos, val_name):
-    return _np.dtype(value_infos[val_name]['dtype'])
+def _dtype(value_infos, name):
+    return _np.dtype(value_infos[name]['dtype'])
 
-def _dtype_or_none(value_infos, val_name):
-    if val_name not in value_infos:
+def _dtype_or_none(value_infos, name):
+    if name not in value_infos:
         return None
-    value_info = value_infos[val_name]
+    value_info = value_infos[name]
     if 'dtype' not in value_info:
         return None
     return _np.dtype(value_info['dtype'])
 
-def _shape(value_infos, val_name):
-    return list(value_infos[val_name]['shape'])
+def _shape(value_infos, name):
+    return list(value_infos[name]['shape'])
 
-def _shape_or_none(value_infos, val_name):
-    if val_name not in value_infos:
+def _shape_or_none(value_infos, name):
+    if name not in value_infos:
         return None
-    value_info = value_infos[val_name]
+    value_info = value_infos[name]
     if 'shape' not in value_info:
         return None
     return list(value_info['shape'])
 
-def _const_weight_or_none(value_infos, val_name):
-    if val_name not in value_infos:
+def _const_weight_or_none(value_infos, name):
+    if name not in value_infos:
         return None
-    value_info = value_infos[val_name]
+    value_info = value_infos[name]
     const_value = value_info.get('const_value', None)
-    if const_value:
+    if const_value is not None:
         return const_value
     get_weight_func = value_info.get('get_weight', None)
-    if get_weight_func:
+    if get_weight_func is not None:
        return get_weight_func()
     return None
 
+def _check_embeddable(value_infos, *names):
+    keyword = 'get_weight'
+    for name in names:
+        if keyword not in value_infos[name]:
+            _logger.warning('parameter %s not embeddable', name)
+            return False
+    return True
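The `_const_weight_or_none` change from `if const_value:` to `if const_value is not None:` matters because a legitimate constant can be falsy (an empty array, `0`, or `''`), and a plain truthiness test would wrongly fall through to the next candidate. A minimal illustration of the pattern (`first_not_none` is a hypothetical helper, not part of the patch):

```python
def first_not_none(*candidates):
    """Return the first candidate that is not None, keeping falsy values like 0 or [].

    A plain `if value:` test would skip 0, '' or an empty container, which is
    exactly the bug class the explicit `is not None` checks avoid.
    """
    for value in candidates:
        if value is not None:
            return value
    return None
```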
 def _default(prog, op_type, inputs, outputs, attrs, *args, name='', **kwargs):
     info = DEFAULT_OP_MAPPING[op_type]
-    info.extend(list(DEFAULT_OP_MAPPING_FIELD_VALUES.values())[len(info):])
+    info.extend(DEFAULT_OP_MAPPING_VALUES[len(info):])
     (fluid_op,

@@ -233,12 +253,13 @@ def _default(prog, op_type, inputs, outputs, attrs, *args, name='', **kwargs):
     fluid_attrs = default_attrs.copy()
     fluid_attrs.update(mapped_attrs)  # as new attrs
-    val_inps = inputs if input_perm is None else map(lambda i: inputs[i],
-                                                     input_perm)
-    val_outs = outputs if output_perm is None else map(lambda i: outputs[i],
-                                                       output_perm)
-    var_inps = [_make_var_name(val) for val in val_inps]
-    var_outs = [_make_var_name(val) for val in val_outs]
+    var_inps = list(map(inputs.__getitem__,
+                        input_perm)) if input_perm is not None else inputs
+    var_outs = list(map(outputs.__getitem__,
+                        output_perm)) if output_perm is not None else outputs
+    for var_name in var_inps + var_outs:
+        assert var_name
     arg_name = ', name={}'.format(repr(name)) if fill_name_field and name else ''
     arg_attrs = [

@@ -249,7 +270,7 @@ def _default(prog, op_type, inputs, outputs, attrs, *args, name='', **kwargs):
         ', '.join(var_outs),
         fluid_op,
         ', '.join(var_inps),
-        ''.join(arg_attrs),
+        ''.join(arg_attrs)[(0 if var_inps else 2):],
         arg_name,
     ))

@@ -257,23 +278,22 @@ def _default(prog, op_type, inputs, outputs, attrs, *args, name='', **kwargs):
     num_vars = len(var_outs)
     num_args = len(fluid_output_args)
     if num_vars < num_args:
-        assert fill_name_field, 'name required to name dummy output variables'
+        assert fill_name_field and name, 'name required to name dummy output variables'
         for idx_out in range(num_vars, num_args):
             var_out = name + '.' + fluid_output_args[idx_out]  # dummy output
             var_outs.append(var_out)
     for var_out in var_outs:
         prog.VarDesc(var_out)
-    prog.OpDesc(fluid_op, (var_inps, *fluid_input_args),
-                (var_outs, *fluid_output_args), fluid_attrs)
+    prog.OpDesc(fluid_op, (fluid_input_args, var_inps),
+                (fluid_output_args, var_outs), fluid_attrs)
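The new `_default` replaces the `map(lambda i: seq[i], perm)` pattern with `list(map(seq.__getitem__, perm))`, which both forces evaluation (Python 3 `map` is lazy) and reads cleaner. A minimal sketch of that permutation step, with an illustrative helper name:

```python
# Sketch of the input/output permutation step in _default:
# a None permutation passes the sequence through untouched.
def permute(seq, perm):
    return list(map(seq.__getitem__, perm)) if perm is not None else seq

inputs = ['x', 'w', 'b']
reordered = permute(inputs, [2, 0, 1])   # pick items by index order
passthrough = permute(inputs, None)      # no permutation declared
```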
 def _assign(prog, mapping):
     fluid_op = 'assign'
-    for val_dst, val_src in mapping.items():
-        var_dst = _make_var_name(val_dst)
-        var_src = _make_var_name(val_src)
+    for var_dst, var_src in mapping.items():
+        assert var_dst and var_src
         prog.Code('{} = {} # assign'.format(var_dst, var_src))
         #        prog.Code('{} = layers.{}({})'
         #                  .format(var_dst,

@@ -283,24 +303,23 @@ def _assign(prog, mapping):
         prog.VarDesc(var_dst)
         prog.OpDesc(
             fluid_op,
-            ([var_src], 'X'),
-            ([var_dst], 'Out'),
+            (['X'], [var_src]),
+            (['Out'], [var_dst]),
             dict(),
         )
-def _zeros_like(prog, val_ref, val_out, value_infos):
+def _zeros_like(prog, var_ref, var_out):
     prog.Op(
         '',
         'Sub',
-        [val_ref, val_ref],
-        [val_out],  # val
-        dict(axis=0),
-        value_infos,
+        [var_ref, var_ref],
+        [var_out],
+        {'axis': 0},
     )
-def _pad_if_asymmetric(prog, pads, val_name, value_infos):  # pads: SSEE
+def _pad_if_asymmetric(prog, pads, var_input, value_infos, scope):  # pads: SSEE
     assert len(pads) & 1 == 0
     ndims = len(pads) // 2
     symmetric = True

@@ -309,41 +328,36 @@ def _pad_if_asymmetric(prog, pads, val_name, value_infos):  # pads: SSEE
             symmetric = False
             break
     if symmetric:
-        return pads[:ndims], val_name
+        return pads[:ndims], var_input

-    val_padded = val_name + '_padded'  # explicit variable
+    assert scope
+    var_padded = scope + '_pad'  # explicit variable
     prog.Op(
         '',
         'Pad',
-        [val_name],
-        [val_padded],  # val
-        dict(
-            mode='constant',
-            value=0.,
-            pads=pads,
-        ),
+        [var_input],
+        [var_padded],
+        {
+            'mode': 'constant',
+            'value': 0.,
+            'pads': pads,
+        },
         value_infos=value_infos,
-        name=val_padded,
+        name=(scope + '/pad'),
     )
-    return [0] * ndims, val_padded
+    return [0] * ndims, var_padded
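ONNX stores `pads` in SSEE order: all the per-dimension start pads, then all the end pads. `_pad_if_asymmetric` only falls back to an explicit Pad op when some start pad differs from its matching end pad. A standalone sketch of that symmetry test (the helper name is illustrative, not from the source):

```python
# Sketch of the SSEE symmetry check in _pad_if_asymmetric:
# pads = [start_0, ..., start_{n-1}, end_0, ..., end_{n-1}]
def split_symmetric(pads):
    assert len(pads) & 1 == 0  # must pair starts with ends
    ndims = len(pads) // 2
    symmetric = all(pads[i] == pads[i + ndims] for i in range(ndims))
    if symmetric:
        return pads[:ndims], True   # fold into per-dimension paddings
    return None, False              # caller must emit an explicit Pad op

sym = split_symmetric([1, 1, 1, 1])   # equal start/end halves
asym = split_symmetric([0, 1, 1, 0])  # start pad != end pad
```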
 def _adaptive_pool(prog, pool_type, inputs, outputs, attrs, name=''):
     # I/O
-    val_x, = inputs
-    val_y, = outputs[:1]
-    var_x = _make_var_name(val_x)
-    var_y = _make_var_name(val_y)
-    has_indices = len(outputs) > 1
-    if has_indices:
-        val_indices = outputs[1]
-        var_indices = _make_var_name(val_indices)
+    var_x, = inputs
+    var_y, var_indices, = (outputs + [''] * 1)[:2]
+    assert var_x and var_y

     # interpretation
     pool_size = attrs['output_size']  # required
     poolnd = len(pool_size)
-    assert 2 <= poolnd <= 3, 'only pool2d and pool3d is supported'
+    assert 2 <= poolnd <= 3, 'only pool2d and pool3d supported'
     fluid_op = 'adaptive_pool{}d'.format(poolnd)
     name_attr = ', name={}'.format(repr(name)) if name else ''

@@ -355,50 +369,49 @@ def _adaptive_pool(prog, pool_type, inputs, outputs, attrs, name=''):
               ', pool_type={}'
               '{})'.format(
                   var_y,
-                  ', {}'.format(var_indices) if has_indices else '',
+                  ', {}'.format(var_indices) if var_indices else '',
                   fluid_op,
                   var_x,
                   # attrs
-                  has_indices,
+                  bool(var_indices),
                   pool_size,
                   repr(pool_type),
                   name_attr,
               ))
     fluid_op = 'pool{}d'.format(poolnd)
     prog.VarDesc(var_y)
-    if has_indices:
+    if var_indices:
         prog.VarDesc(var_indices)
     prog.OpDesc(
         fluid_op,
-        ([var_x], 'X'),
-        ([var_y] + ([var_indices] if has_indices else []), 'Out', 'Indices'),
-        dict(
-            global_pooling=False,
-            adaptive=True,
-            exclusive=True,
-            require_index=has_indices,
-            pooling_type=pool_type,
-            ksize=pool_size,
-        ),
+        (['X'], [var_x]),
+        (['Out', 'Indices'], [var_y] + ([var_indices] if var_indices else [])),
+        {
+            'adaptive': True,
+            'pooling_type': pool_type,
+            'ksize': pool_size,
+            # unused
+            # 'exclusive': True,
+            # 'global_pooling': False,
+        },
    )
-def _global_pool(prog, pool_type, inputs, outputs, attrs, value_infos, name=''):
+def _global_pool(prog, pool_type, inputs, outputs, value_infos, name=''):
     # I/O
-    val_x, = inputs
-    val_y, = outputs
-    var_x = _make_var_name(val_x)
-    var_y = _make_var_name(val_y)
+    var_x, = inputs
+    var_y, = outputs
+    assert var_x and var_y

     # interpretation
-    input_shape = _shape_or_none(value_infos, val_x)
-    output_shape = _shape_or_none(value_infos, val_y)
+    input_shape = _shape_or_none(value_infos, var_x)
+    output_shape = _shape_or_none(value_infos, var_y)
     assert input_shape is not None or output_shape is not None, 'poolnd not inferred'  # NC...
-    if input_shape:
+    if input_shape is not None:
         poolnd = len(input_shape) - 2  # NC...
-    elif output_shape:
+    elif output_shape is not None:
         poolnd = len(output_shape) - 2  # NC...
-    assert 2 <= poolnd <= 3, 'only pool2d and pool3d is supported'
+    assert 2 <= poolnd <= 3, 'only pool2d and pool3d supported'
     fluid_op = 'pool{}d'.format(poolnd)
     name_attr = ', name={}'.format(repr(name)) if name else ''

@@ -417,43 +430,43 @@ def _global_pool(prog, pool_type, inputs, outputs, attrs, value_infos, name=''):
     prog.VarDesc(var_y)
     prog.OpDesc(
         fluid_op,
-        ([var_x], 'X'),
-        ([var_y], 'Out'),
-        dict(
-            global_pooling=True,
-            adaptive=False,
-            pooling_type=pool_type,
-            ksize=[-1, -1],
-        ),
+        (['X'], [var_x]),
+        (['Out'], [var_y]),
+        {
+            'global_pooling': True,
+            'pooling_type': pool_type,
+            # unused
+            'adaptive': False,
+            'ksize': [-1, -1],
+            'strides': [-1, -1],
+            'paddings': [0, 0],
+            'ceil_mode': False,
+        },
    )
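`_global_pool` infers the pooling rank from whichever of the `NC...` shapes is known: strip the leading batch and channel dimensions from either the input or the output shape. A standalone sketch of that inference, with an illustrative helper name and hypothetical shapes:

```python
# Sketch of poolnd inference from NC... shapes in _global_pool.
def infer_poolnd(input_shape, output_shape):
    assert input_shape is not None or output_shape is not None, 'poolnd not inferred'
    if input_shape is not None:
        poolnd = len(input_shape) - 2   # drop N and C
    else:
        poolnd = len(output_shape) - 2  # drop N and C
    assert 2 <= poolnd <= 3, 'only pool2d and pool3d supported'
    return poolnd

nd_from_input = infer_poolnd([1, 3, 224, 224], None)   # NCHW input
nd_from_output = infer_poolnd(None, [1, 3, 7, 7, 7])   # NCDHW output
```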
-def _pool(prog, pool_type, inputs, outputs, attrs, value_infos, name=''):
+def _pool(prog, pool_type, inputs, outputs, attrs, value_infos, name):
     # I/O
-    val_x, = inputs
-    val_y, = outputs[:1]
-    var_y = _make_var_name(val_y)
-    has_indices = len(outputs) > 1
-    if has_indices:
-        val_indices = outputs[1]
-        var_indices = _make_var_name(val_indices)
+    var_x, = inputs
+    var_y, var_indices, = (outputs + [''] * 1)[:2]
+    assert name and var_x and var_y

     # interpretation
-    assert attrs.get('auto_pad', 'NOTSET') == 'NOTSET', 'only auto_pad = NOTSET is supported'  # optional
+    assert attrs.get('auto_pad', 'NOTSET') == 'NOTSET', 'only auto_pad = NOTSET supported'  # optional
+    assert attrs.get('count_include_pad', 0) == 0, 'only count_include_pad = 0 supported'  # optional
     pool_size = attrs['kernel_shape']  # required
     poolnd = len(pool_size)
-    assert 2 <= poolnd <= 3, 'only pool2d and pool3d is supported'
+    assert 2 <= poolnd <= 3, 'only pool2d and pool3d supported'
     fluid_op = 'pool{}d'.format(poolnd)
     strides = attrs.get('strides', [1] * poolnd)  # optional
     ceil_mode = bool(attrs.get('ceil_mode', 0))  # optional
     pads = attrs.get('pads', [0] * (poolnd * 2))  # optional
-    paddings, val_x = _pad_if_asymmetric(prog, pads, val_x, value_infos)
-    var_x = _make_var_name(val_x)
-    name_attr = ', name={}'.format(repr(name)) if name else ''
+    paddings, var_x = _pad_if_asymmetric(prog, pads, var_x, value_infos, name)
+    name_attr = ', name={}'.format(repr(name))

     # generation
     prog.Code('{} = layers.{}({}, exclusive=True'

@@ -475,42 +488,40 @@ def _pool(prog, pool_type, inputs, outputs, attrs, value_infos, name=''):
               name_attr,
               ))
     prog.VarDesc(var_y)
-    if has_indices:
+    if var_indices:
         prog.VarDesc(var_indices)
     prog.OpDesc(
         fluid_op,
-        ([var_x], 'X'),
-        ([var_y] + ([var_indices] if has_indices else []), 'Out', 'Indices'),
-        dict(
-            global_pooling=False,
-            adaptive=False,
-            exclusive=True,
-            require_index=has_indices,
-            pooling_type=pool_type,
-            ksize=pool_size,
-            strides=strides,
-            paddings=paddings,
-            ceil_mode=ceil_mode,
-        ),
+        (['X'], [var_x]),
+        (['Out', 'Indices'], [var_y] + ([var_indices] if var_indices else [])),
+        {
+            'global_pooling': False,
+            'pooling_type': pool_type,
+            'ksize': pool_size,
+            'strides': strides,
+            'paddings': paddings,
+            'ceil_mode': ceil_mode,
+            # unused
+            'adaptive': False,
+            # 'exclusive': True,
+        },
    )
-def _roi_pool(prog, fluid_op, inputs, outputs, attrs, value_infos, name):
+def _roi_pool(prog, fluid_op, inputs, outputs, attrs, name):
     # I/O
-    val_x, val_rois = inputs
-    val_y, = outputs
-    var_x = _make_var_name(val_x)
-    var_rois = _make_var_name(val_rois)
-    var_y = _make_var_name(val_y)
+    var_x, var_rois, = inputs
+    var_y, = outputs
+    assert name and var_x and var_rois and var_y

     # interpretation
     spatial_scale = attrs['spatial_scale']  # required
     pooled_height, pooled_width = attrs['pooled_shape']  # required
-    od_attrs = dict(
-        pooled_height=pooled_height,
-        pooled_width=pooled_width,
-        spatial_scale=spatial_scale,
-    )
+    od_attrs = {
+        'pooled_height': pooled_height,
+        'pooled_width': pooled_width,
+        'spatial_scale': spatial_scale,
+    }
     feature_attr = ''
     is_max_pool = fluid_op == 'roi_pool'
     if 'sampling_ratio' in attrs:  #

@@ -530,7 +541,7 @@ def _roi_pool(prog, fluid_op, inputs, outputs, attrs, value_infos, name):
               '{})'.format(
                   var_y,
                   fluid_op,
-                  val_x,
+                  var_x,
                   var_rois,
                   # attrs
                   spatial_scale,

@@ -540,46 +551,45 @@ def _roi_pool(prog, fluid_op, inputs, outputs, attrs, value_infos, name):
               ))
     prog.VarDesc(var_y)
     if is_max_pool:
-        var_argmax = _make_var_name(name + '.argmax')  # hidden variable
+        var_argmax = name + '.argmax'  # hidden variable
         prog.VarDesc(var_argmax)
     prog.OpDesc(
         fluid_op,
-        ([var_x, var_rois], 'X', 'Rois'),
-        ([var_y] + ([var_argmax] if is_max_pool else []), 'Out', 'Argmax'),
+        (['X', 'ROIs'], [var_x, var_rois]),
+        (['Out', 'Argmax'], [var_y] + ([var_argmax] if is_max_pool else [])),
         od_attrs,
     )
 def _interpolate(prog, inputs, outputs, attrs, value_infos, name=''):
     # I/O
-    val_x, val_scales = inputs
-    val_y, = outputs
-    var_x = _make_var_name(val_x)
-    var_y = _make_var_name(val_y)
+    var_x, var_scales, = inputs
+    var_y, = outputs
+    assert var_x and var_scales and var_y

     # interpretation
     # output shape
-    out_shape_ = _shape_or_none(value_infos, val_y)
+    out_shape_ = _shape_or_none(value_infos, var_y)
     if out_shape_ is not None:
         assert len(out_shape_) == 4, 'only 4-D Tensor as X and Y supported'
         out_shape_ = out_shape_[2:]
     # try scales
-    scales = _const_weight_or_none(value_infos, val_scales)
+    scales = _const_weight_or_none(value_infos, var_scales)
     if scales is not None:
         assert len(scales) == 4, 'only 4-D Tensor as X and Y supported'
         assert scales[0] == 1 and scales[
             1] == 1, 'only scale on (NC)HW supported'
         assert scales[2] == scales[
             3], 'only aspect-ratio-invariant scale supported'
-    scale = scales[2] if scales else None
+    scale = scales[2]
     # try input shape
     if scale is None:
-        assert out_shape_, 'neither scales nor output shape is available'
+        assert out_shape_, 'neither scales nor output shape available'
         out_shape = out_shape_
     else:
         out_shape = None
         if out_shape_ is None:
-            in_shape = _shape_or_none(value_infos, val_x)
+            in_shape = _shape_or_none(value_infos, var_x)
             assert in_shape is not None, 'out_shape required but not inferrable'
             assert len(in_shape) == 4, 'only 4-D Tensor as X and Y supported'
             out_shape_ = [in_shape[2] * scale, in_shape[3] * scale]

@@ -604,13 +614,13 @@ def _interpolate(prog, inputs, outputs, attrs, value_infos, name=''):
     prog.VarDesc(var_y)
     prog.OpDesc(
         fluid_op,
-        ([var_x], 'X'),
-        ([var_y], 'Out'),
-        dict(
-            interp_method=mode,
-            out_h=out_shape_[0],
-            out_w=out_shape_[1],
-        ),
+        (['X'], [var_x]),
+        (['Out'], [var_y]),
+        {
+            'interp_method': mode,
+            'out_h ': out_shape_[0],
+            'out_w ': out_shape_[1],
+        },
    )
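`_interpolate` derives the target `out_h`/`out_w` either from a known output shape or by applying an aspect-ratio-invariant scale to the NCHW input's spatial dimensions. A standalone sketch of that fallback logic, with an illustrative helper name and hypothetical shapes:

```python
# Sketch of _interpolate's output-size derivation:
# prefer a constant NCHW scales tensor, else fall back to the output shape.
def upsample_hw(in_shape, scales, out_shape=None):
    if scales is not None:
        assert scales[0] == 1 and scales[1] == 1, 'only scale on (NC)HW supported'
        assert scales[2] == scales[3], 'only aspect-ratio-invariant scale supported'
        return [int(in_shape[2] * scales[2]), int(in_shape[3] * scales[3])]
    assert out_shape is not None, 'neither scales nor output shape available'
    return out_shape[2:]  # drop N, C

from_scales = upsample_hw([1, 3, 16, 16], [1, 1, 2, 2])
from_output = upsample_hw([1, 3, 16, 16], None, [1, 3, 64, 64])
```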
@@ -636,10 +646,9 @@ def AffineGrid(prog, inputs, outputs, attrs, *args, name='', **kwargs):
     """
     # I/O
-    val_theta, = inputs
-    val_grid, = outputs
-    var_theta = _make_var_name(val_theta)
-    var_grid = _make_var_name(val_grid)
+    var_theta, = inputs
+    var_grid, = outputs
+    assert var_theta and var_grid

     # interpretation
     fluid_op = 'affine_grid'

@@ -660,25 +669,19 @@ def AffineGrid(prog, inputs, outputs, attrs, *args, name='', **kwargs):
     prog.VarDesc(var_grid)
     prog.OpDesc(
         fluid_op,
-        ([var_theta], 'Theta'),
-        ([var_grid], 'Output'),
-        dict(output_shape=size),  # f**k you API
+        (['Theta'], [var_theta]),
+        (['Output'], [var_grid]),
+        {'output_shape': size},  # f**k you API
     )
 def AveragePool(prog,
-                inputs, outputs, attrs, value_infos, name, *args,
+                inputs, outputs, attrs, value_infos, name='', *args,
                 **kwargs):
     """
     onnx::AveragePool-10:
     """
-    return _pool(prog, 'avg', inputs, outputs, attrs, value_infos, name=name)
+    return _pool(prog, 'avg', inputs, outputs, attrs, value_infos, name)
 def BatchNormalization(prog,

@@ -695,41 +698,52 @@ def BatchNormalization(prog,
     """
     # I/O
-    val_x, val_scale, val_b, val_mean, val_var = inputs
-    val_y, = outputs
-    var_x = _make_var_name(val_x)
-    var_y = _make_var_name(val_y)
-    var_saved_mean = name + '.saved_mean'  # dummy output
-    var_saved_variance = name + '.saved_variance'  # dummy output
+    var_x, var_scale, var_b, var_mean, var_var, = inputs
+    var_y, var_mean_, var_var_, var_saved_mean, var_saved_variance, = (
+        outputs + [''] * 4)[:5]
+    assert var_x and var_scale and var_b and var_mean and var_var and var_y
+    assert var_saved_mean or name
+    assert var_saved_variance or name
+    var_saved_mean = var_saved_mean or (name + '.saved_mean')  # dummy output
+    var_saved_variance = var_saved_variance or (name + '.saved_variance')  # dummy output

     # interpretation
     fluid_op = 'batch_norm'
     momentum = attrs.get('momentum', .9)  # optional
     epsilon = attrs.get('epsilon', 1e-5)  # optional
     name_attr = ', name={}'.format(repr(name)) if name else ''
+    embeddable = _check_embeddable(value_infos, var_scale, var_b, var_mean,
+                                   var_var)
+    if not embeddable:
+        _logger.warning('for op %s(%s -> BatchNormalization -> %s)', name,
+                        inputs, outputs)
+        _logger.warning('one of the parameters is intermediate value')
+        _logger.warning('broken Python code will be generated')
+    embed_params &= embeddable
     if embed_params:
-        assert name != ''
-        var_scale = name + '.w_0'
-        var_b = name + '.b_0'
-        var_mean = name + '.w_1'
-        var_var = name + '.w_2'
-        value_infos[val_scale].setdefault('embeded_as', []).append(var_scale)
-        value_infos[val_b].setdefault('embeded_as', []).append(var_b)
-        value_infos[val_mean].setdefault('embeded_as', []).append(var_mean)
-        value_infos[val_var].setdefault('embeded_as', []).append(var_var)
+        assert name
+        embedded_scale = name + '.w_0'
+        embedded_b = name + '.b_0'
+        embedded_mean = name + '.w_1'
+        embedded_var = name + '.w_2'
+        value_infos[var_scale]['embedded_as'].append(embedded_scale)
+        value_infos[var_b]['embedded_as'].append(embedded_b)
+        value_infos[var_mean]['embedded_as'].append(embedded_mean)
+        value_infos[var_var]['embedded_as'].append(embedded_var)
+        var_scale = embedded_scale
+        var_b = embedded_b
+        var_mean = embedded_mean
+        var_var = embedded_var
         param_attr = ''
     else:
-        var_scale = _make_var_name(val_scale)
-        var_b = _make_var_name(val_b)
-        var_mean = _make_var_name(val_mean)
-        var_var = _make_var_name(val_var)
         param_attr = (', param_attr={}, bias_attr={}'
                       ', moving_mean_name={}, moving_variance_name={}'
                       ).format(repr(var_scale), repr(var_b), repr(var_mean),
                                repr(var_var))

     # generation
-    prog.Code('{} = layers.{}({}, is_test=True, data_layout="NCHW"'
+    prog.Code('{} = layers.{}({}, is_test=True'
               ', momentum={}'
               ', epsilon={}'
               '{}{})'.format(

@@ -747,16 +761,17 @@ def BatchNormalization(prog,
     prog.VarDesc(var_saved_variance)
     prog.OpDesc(
         fluid_op,
-        ([var_x, var_scale, var_b, var_mean, var_var], 'X', 'Scale', 'Bias',
-         'Mean', 'Variance'),
-        ([var_y, var_mean, var_saved_mean, var_saved_variance, var_var], 'Y',
-         'MeanOut', 'SavedMean', 'SavedVariance', 'VarianceOut'),
-        dict(
-            is_test=1,
-            data_layout='NCHW',
-            use_global_stats=False,
-            momentum=momentum,
-            epsilon=epsilon),
+        (['X', 'Scale', 'Bias', 'Mean', 'Variance'],
+         [var_x, var_scale, var_b, var_mean, var_var]),
+        (['Y', 'MeanOut', 'SavedMean', 'SavedVariance', 'VarianceOut'],
+         [var_y, var_mean, var_saved_mean, var_saved_variance, var_var]),
+        {
+            'momentum': momentum,
+            'epsilon': epsilon,
+            'is_test': 1,
+            # unused
+            'data_layout': 'NCHW',
+        },
    )
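With `embed_params` enabled, `BatchNormalization` renames its four ONNX parameters to fluid's conventional per-op weight slots (`.w_0`, `.b_0`, `.w_1`, `.w_2`) under the op name. A standalone sketch of that naming scheme, with an illustrative helper name:

```python
# Sketch of the embedded-parameter naming used for BatchNormalization:
# scale/bias/mean/var map to fluid's .w_0/.b_0/.w_1/.w_2 under the op name.
def embedded_param_names(name):
    assert name  # embedding requires a non-empty op name
    return {
        'scale': name + '.w_0',
        'bias': name + '.b_0',
        'mean': name + '.w_1',
        'var': name + '.w_2',
    }

names = embedded_param_names('bn1')
```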
@@ -766,18 +781,19 @@ def Cast(prog, inputs, outputs, attrs, value_infos, *args, **kwargs):
     """
     # I/O
-    val_input, = inputs
-    val_output, = outputs
-    var_input = _make_var_name(val_input)
-    var_output = _make_var_name(val_output)
+    var_input, = inputs
+    var_output, = outputs
+    assert var_input and var_output

     # interpretation
     dtype = attrs['to']  # required
     if not isinstance(dtype, _np.dtype):  # additional: possible np.dtype
         dtype = TENSOR_TYPE_TO_NP_TYPE[dtype]
-    output_dtype = _dtype_or_none(value_infos, val_output)
-    if output_dtype:
-        assert dtype == output_dtype, 'dtype of to unmatches output'
+    #    output_dtype = _dtype_or_none(value_infos, var_output)
+    #    if output_dtype is not None:
+    #        assert dtype == output_dtype, 'dtype of to unmatches output'
     fluid_op = 'cast'

@@ -794,13 +810,14 @@ def Cast(prog, inputs, outputs, attrs, value_infos, *args, **kwargs):
     prog.VarDesc(var_output)
     prog.OpDesc(
         fluid_op,
-        ([var_input], 'X'),
-        ([var_output], 'Out'),
-        dict(
-            in_dtype=prog.Dtype(_dtype(value_infos, val_input)),  # holy, required
-            out_dtype=prog.Dtype(dtype),
-        ))
+        (['X'], [var_input]),
+        (['Out'], [var_output]),
+        {
+            'in_dtype': prog.Dtype(_dtype(value_infos, var_input)),  # holy, required
+            'out_dtype': prog.Dtype(dtype),
+        },
+    )
 def Concat(prog, inputs, outputs, attrs, *args, name='', **kwargs):

@@ -809,9 +826,8 @@ def Concat(prog, inputs, outputs, attrs, *args, name='', **kwargs):
     """
     # I/O
-    val_concat_result, = outputs
-    var_inps = [_make_var_name(val) for val in inputs]
-    var_concat_result = _make_var_name(val_concat_result)
+    var_ret, = outputs
+    assert var_ret

     # interpretation
     fluid_op = 'concat'

@@ -822,19 +838,19 @@ def Concat(prog, inputs, outputs, attrs, *args, name='', **kwargs):
     prog.Code('{} = layers.{}({}'
               ', axis={}'
               '{})'.format(
-                  var_concat_result,
+                  var_ret,
                   fluid_op,
-                  '[' + ', '.join(var_inps) + ']',
+                  '[' + ', '.join(inputs) + ']',
                   # attrs
                   axis,
                   name_attr,
               ))
-    prog.VarDesc(var_concat_result)
+    prog.VarDesc(var_ret)
     prog.OpDesc(
         fluid_op,
-        (var_inps, *(['X'] * len(var_inps))),
-        ([var_concat_result], 'Out'),
-        dict(axis=axis),
+        (['X'] * len(inputs), inputs),
+        (['Out'], [var_ret]),
+        {'axis': axis},
     )
@@ -844,34 +860,31 @@ def Constant(prog, inputs, outputs, attrs, value_infos, *args, **kwargs):
     """
     # I/O
-    assert len(inputs) == 0
-    val_output, = outputs
-    var_output = _make_var_name(val_output)
+    assert len(inputs) == 0, 'constant op accept no inputs'
+    var_output, = outputs
+    assert var_output

     # interpretation
     value = attrs['value']  # required
     dtype = _np.dtype(value.dtype)
-    output_dtype = _dtype_or_none(value_infos, val_output)
-    if output_dtype:
-        assert dtype == output_dtype, 'tensor dtype unmatches storage dtype'
+    #    output_dtype = _dtype_or_none(value_infos, var_output)
+    #    if output_dtype is not None:
+    #        assert dtype == output_dtype, 'tensor dtype unmatches storage dtype'
     #    dtype = _np.dtype('float32') # HINT: force to float32
-    shape = attrs.get('shape', None)  # additional
+    shape = attrs.get('shape', None)  #
     if shape is None:
-        shape = _shape_or_none(value_infos, val_output)
+        shape = _shape_or_none(value_infos, var_output)
         if shape is None:
             shape = list(value.shape)
             _logger.warning(
-                'in (Constant -> %s): '
+                'in op (Constant -> %s): '
                 'attribute "shape" of %s not inferred, '
-                'using value as 1-D tensor may lead to fails', outputs,
-                val_output)
+                'using value as 1-D tensor may lead to fails', outputs,
+                var_output)

     # generation
-    value = value.tolist()
-    if len(value) == 1:  # scalar
+    if not shape or value.size == 1:  # scalar or 1-size
         shape = [1]  # WORKAROUND: bad scalar support
-        value = value[0]
+        value = value.tolist()[0]
         fluid_op = 'fill_constant'
         prog.Code('{} = layers.{}(shape={}, dtype={}, value={})'.format(
             var_output,

@@ -884,19 +897,19 @@ def Constant(prog, inputs, outputs, attrs, value_infos, *args, **kwargs):
         prog.VarDesc(var_output)
         prog.OpDesc(
             fluid_op,
-            ([], ),
-            ([var_output], 'Out'),
-            dict(
-                shape=shape,
-                dtype=prog.Dtype(dtype),
-                value=value,
-            ),
+            ([], []),
+            (['Out'], [var_output]),
+            {
+                'shape': shape,
+                'dtype': prog.Dtype(dtype),
+                'value': value,
+            },
         )
     else:  # list parameter -> const_value
-        prog.Code('# {} = {} # passed directly as literal'.format(
-            var_output, value))
+        prog.Code('# {} = {} # passed directly as literal'.format(
+            var_output, value.tolist()))
-        value_infos[val_output]['const_value'] = value
+        value_infos[var_output]['const_value'] = value
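The `Constant` handler above falls back to shape `[1]` for scalars and 1-element tensors (the "bad scalar support" workaround). A numpy-free sketch of that branch over a flat Python list, with an illustrative helper name:

```python
# Sketch of Constant's scalar workaround: scalars and 1-size tensors
# are emitted as fill_constant with shape [1] and a plain Python value.
def normalize_constant(shape, values):
    # values: the tensor contents flattened to a Python list (assumption)
    if not shape or len(values) == 1:  # scalar or 1-size
        return [1], values[0]  # WORKAROUND: bad scalar support
    return shape, values

scalar = normalize_constant([], [3.14])
tensor = normalize_constant([2, 2], [1, 2, 3, 4])
```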
 def ConstantOfShape(prog, inputs, outputs, attrs, value_infos, *args, **kwargs):

@@ -905,28 +918,28 @@ def ConstantOfShape(prog, inputs, outputs, attrs, value_infos, *args, **kwargs):
     """
     # I/O
-    val_shape, = inputs
-    val_output, = outputs
-    var_shape = _make_var_name(val_shape)
+    var_shape, = inputs
+    var_output, = outputs
+    assert var_shape and var_output

-    shape = _const_weight_or_none(value_infos, val_shape)
+    shape = _const_weight_or_none(value_infos, var_shape)
     if shape is None:
-        shape = _shape_or_none(value_infos, val_output)
+        shape = _shape_or_none(value_infos, var_output)
     assert shape is not None, (
         'given shape is neither const value nor deductible from output, '
         'this is not supported')
-    dtype = attrs['value'].dtype
     attrs = attrs.copy()
-    attrs.update(dict(shape=shape, dtype=dtype))  # pass const
+    attrs.setdefault('value', _np.array(0, dtype=_np.float32))
+    attrs.update({'shape': shape})  # pass const
-    prog.Code('# shape: {}={} # const as literal'.format(var_shape, shape))
+    prog.Code('# shape: {} = {} # const as literal'.format(var_shape, shape))
     prog.Op(
         '',
         'Constant',
         [],
-        outputs,  # val
+        outputs,
         attrs,
-        value_infos,
+        value_infos=value_infos,
     )
...
@@ -935,7 +948,7 @@ def Conv(prog,
...
@@ -935,7 +948,7 @@ def Conv(prog,
outputs
,
outputs
,
attrs
,
attrs
,
value_infos
,
value_infos
,
name
=
''
,
name
,
embed_params
=
False
,
embed_params
=
False
,
*
args
,
*
args
,
**
kwargs
):
**
kwargs
):
...
@@ -944,46 +957,47 @@ def Conv(prog,
     """

     # I/O
-    val_x, val_w = inputs[:2]
-    val_y, = outputs
-    var_y = _make_var_name(val_y)
-
-    has_bias = len(inputs) == 3
-    if has_bias:
-        val_b, = inputs[2:]
+    var_x, var_w, var_b, = (inputs + [''] * 1)[:3]
+    var_y, = outputs
+    assert name and var_x and var_w and var_y

     # interpretation
     assert attrs.get(
         'auto_pad',
-        'NOTSET') == 'NOTSET', 'only auto_pad == NOTSET is supported'  # optional
+        'NOTSET') == 'NOTSET', 'only auto_pad = NOTSET supported'  # optional
-    kernel_shape = _shape(value_infos, val_w)[2:]  # OI...
-    assert kernel_shape == attrs[
-        'kernel_shape'], 'kernel_shape in attr unmatches value_info'  # HW
+    kernel_shape = attrs.get('kernel_shape',
+                             _shape(value_infos, var_w)[2:])  # optional, HW
+    assert kernel_shape, 'kernel_shape not inferred'
     convnd = len(kernel_shape)
-    assert 2 <= convnd <= 3, 'only conv2d and conv3d is supported'
+    assert 2 <= convnd <= 3, 'only conv2d and conv3d supported'
-    num_out_channels = _shape(value_infos, val_w)[0]  # OI...
+    num_out_channels = _shape(value_infos, var_w)[0]  # OI...

     fluid_op = 'conv{}d'.format(convnd)
     num_groups = attrs.get('group', 1)  # optional
     strides = attrs.get('strides', [1] * convnd)  # optional
     dilations = attrs.get('dilations', [1] * convnd)  # optional
     pads = attrs.get('pads', [0] * (convnd * 2))  # optional
-    paddings, val_x = _pad_if_asymmetric(prog, pads, val_x, value_infos)
-    var_x = _make_var_name(val_x)
+    paddings, var_x = _pad_if_asymmetric(prog, pads, var_x, value_infos, name)

-    name_attr = ', name={}'.format(repr(name)) if name else ''
+    name_attr = ', name={}'.format(repr(name))
+    embeddable = _check_embeddable(value_infos,
+                                   *([var_w] + ([var_b] if var_b else [])))
+    if not embeddable:
+        _logger.warning('for op %s(%s -> Conv -> %s)', name, inputs, outputs)
+        _logger.warning('one of the parameters is intermediate value')
+        _logger.warning('broken Python code will be generated')
+    embed_params &= embeddable
     if embed_params:
-        assert name != ''
-        var_w = name + '.w_0'
-        value_infos[val_w].setdefault('embeded_as', []).append(var_w)
-        if has_bias:
-            var_b = name + '.b_0'
-            value_infos[val_b].setdefault('embeded_as', []).append(var_b)
+        embedded_w = name + '.w_0'
+        value_infos[var_w]['embedded_as'].append(embedded_w)
+        var_w = embedded_w
+        if var_b:
+            embedded_b = name + '.b_0'
+            value_infos[var_b]['embedded_as'].append(embedded_b)
+            var_b = embedded_b
             param_attr = ''
         else:
             param_attr = ', bias_attr=False'
     else:
-        var_w = _make_var_name(val_w)
-        var_b = _make_var_name(val_b) if has_bias else False
         param_attr = ', param_attr={}, bias_attr={}'.format(
             repr(var_w), repr(var_b) if var_b else False)
...
@@ -1010,27 +1024,27 @@ def Conv(prog,
               param_attr,
               name_attr,
           ))
-    var_conv = name + '.conv'  # hidden variable
+    var_conv = (name + '.conv') if var_b else var_y  # hidden variable
     prog.OpDesc(
         fluid_op,
-        ([var_x, var_w], 'Input', 'Filter'),  # , 'Bias', 'ResidualData'
-        ([var_conv if has_bias else var_y], 'Output'),
-        dict(
-            strides=strides,
-            paddings=paddings,
-            dilations=dilations,
-            groups=num_groups,
-        ))
-    if has_bias:
+        (['Input', 'Filter'], [var_x, var_w]),  # , 'Bias', 'ResidualData'
+        (['Output'], [var_conv]),
+        {
+            'strides': strides,
+            'paddings': paddings,
+            'dilations': dilations,
+            'groups': num_groups,
+        },
+    )
+    if var_b:
         prog.VarDesc(var_conv)
         prog.IntermediateOp(
             '',
             'Add',
             [var_conv, var_b],  #
-            [val_y],
-            dict(axis=1),
-            name=(name + '/bias'),
+            [var_y],
+            {'axis': 1},
+            value_infos=value_infos,
+            name=(name + '.bias'),
         )
     else:
         prog.VarDesc(var_y)
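Both Conv hunks route the ONNX `pads` attribute through `_pad_if_asymmetric` before emitting the fluid op. The reason: ONNX stores pads as all begin values followed by all end values, while fluid's conv padding is one symmetric value per spatial dim, so asymmetric pads force a separate explicit pad op. A hedged sketch of that check (helper name and return convention are illustrative, not the converter's actual helper):

```python
def pad_if_asymmetric(pads):
    """ONNX pads: [d1_begin, d2_begin, ..., d1_end, d2_end, ...].
    Return (per-dim symmetric paddings, False) when begins == ends;
    otherwise return the raw pads and flag that an explicit Pad op
    must be emitted before the convolution."""
    half = len(pads) // 2
    begins, ends = pads[:half], pads[half:]
    if begins == ends:
        return list(begins), False  # usable directly as conv padding
    return list(pads), True         # needs a standalone Pad op

paddings, needs_pad_op = pad_if_asymmetric([1, 2, 1, 2])  # symmetric HW pads
```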
...
@@ -1041,7 +1055,7 @@ def ConvTranspose(prog,
                   outputs,
                   attrs,
                   value_infos,
-                  name='',
+                  name,
                   embed_params=False,
                   *args,
                   **kwargs):
...
@@ -1050,49 +1064,52 @@ def ConvTranspose(prog,
     """

     # I/O
-    val_x, val_w = inputs[:2]
-    val_y, = outputs
-    var_y = _make_var_name(val_y)
-
-    has_bias = len(inputs) == 3
-    if has_bias:
-        val_b, = inputs[2:]
+    var_x, var_w, var_b, = (inputs + [''] * 1)[:3]
+    var_y, = outputs
+    assert name and var_x and var_w and var_y

     # interpretation
     assert attrs.get(
         'auto_pad',
-        'NOTSET') == 'NOTSET', 'only auto_pad == NOTSET is supported'  # optional
+        'NOTSET') == 'NOTSET', 'only auto_pad = NOTSET supported'  # optional
     assert sum(attrs.get(
         'output_padding',
-        [])) == 0, 'only zero output_padding is supported'  # optional ?
+        [])) == 0, 'only zero output_padding supported'  # optional ?
-    kernel_shape = _shape(value_infos, val_w)[2:]  # IO...
-    assert kernel_shape == attrs[
-        'kernel_shape'], 'kernel_shape in attr unmatches value_info'  # HW
+    kernel_shape = attrs.get('kernel_shape',
+                             _shape(value_infos, var_w)[2:])  # optional, HW
+    assert kernel_shape, 'kernel_shape not inferred'
     convnd = len(kernel_shape)
-    assert 2 <= convnd <= 3, 'only conv2d_transpose and conv3d_transpose is supported'
+    assert 2 <= convnd <= 3, 'only conv2d_transpose and conv3d_transpose supported'
-    num_out_channels = _shape(value_infos, val_w)[1]  # IO...
+    num_out_channels = _shape(value_infos, var_w)[1]  # IO...

     fluid_op = 'conv{}d_transpose'.format(convnd)
     num_groups = attrs.get('group', 1)  # optional
     strides = attrs.get('strides', [1] * convnd)  # optional
     dilations = attrs.get('dilations', [1] * convnd)  # optional
+    output_size = attrs.get('output_shape', [])  # optional
     pads = attrs.get('pads', [0] * (convnd * 2))  # optional
-    paddings, val_x = _pad_if_asymmetric(prog, pads, val_x, value_infos)
-    var_x = _make_var_name(val_x)
+    paddings, var_x = _pad_if_asymmetric(prog, pads, var_x, value_infos, name)

-    name_attr = ', name={}'.format(repr(name)) if name else ''
+    name_attr = ', name={}'.format(repr(name))
+    embeddable = _check_embeddable(value_infos,
+                                   *([var_w] + ([var_b] if var_b else [])))
+    if not embeddable:
+        _logger.warning('for op %s(%s -> ConvTranspose -> %s)', name, inputs,
+                        outputs)
+        _logger.warning('one of the parameters is intermediate value')
+        _logger.warning('broken Python code will be generated')
+    embed_params &= embeddable
     if embed_params:
-        assert name != ''
-        var_w = name + '.w_0'
-        value_infos[val_w].setdefault('embeded_as', []).append(var_w)
-        if has_bias:
-            var_b = name + '.b_0'
-            value_infos[val_b].setdefault('embeded_as', []).append(var_b)
+        embedded_w = name + '.w_0'
+        value_infos[var_w]['embedded_as'].append(embedded_w)
+        var_w = embedded_w
+        if var_b:
+            embedded_b = name + '.b_0'
+            value_infos[var_b]['embedded_as'].append(embedded_b)
+            var_b = embedded_b
             param_attr = ''
         else:
             param_attr = ', bias_attr=False'
     else:
-        var_w = _make_var_name(val_w)
-        var_b = _make_var_name(val_b) if has_bias else False
         param_attr = ', param_attr={}, bias_attr={}'.format(
             repr(var_w), repr(var_b) if var_b else False)
...
@@ -1100,7 +1117,7 @@ def ConvTranspose(prog,
     # generation
     prog.Code('{} = layers.{}({}'
               ', num_filters={}'
-              #', output_size={}'
+              ', output_size={}'
               ', filter_size={}'
               ', padding={}'
               ', stride={}'
...
@@ -1112,6 +1129,7 @@ def ConvTranspose(prog,
                   var_x,
                   # attrs
                   num_out_channels,
+                  output_size or None,
                   kernel_shape,
                   paddings,
                   strides,
...
@@ -1120,103 +1138,86 @@ def ConvTranspose(prog,
               param_attr,
               name_attr,
           ))
-    var_conv = name + '.conv'  # hidden variable
+    var_conv = (name + '.conv') if var_b else var_y  # hidden variable
     prog.OpDesc(
         fluid_op,
-        ([var_x, var_w], 'Input', 'Filter'),  # , 'Bias', 'ResidualData'
-        ([var_conv if has_bias else var_y], 'Output'),
-        dict(
-            strides=strides,
-            paddings=paddings,
-            dilations=dilations,
-            # output_size=output_size,
-            groups=num_groups,
-        ))
-    if has_bias:
+        (['Input', 'Filter'], [var_x, var_w]),  # , 'Bias', 'ResidualData'
+        (['Output'], [var_conv]),
+        {
+            'strides': strides,
+            'paddings': paddings,
+            'dilations': dilations,
+            'groups': num_groups,
+            'output_size': output_size,  # unused
+        },
+    )
+    if var_b:
         prog.VarDesc(var_conv)
         prog.IntermediateOp(
             '',
             'Add',
             [var_conv, var_b],  #
-            [val_y],
-            dict(axis=1),
-            name=(name + '/bias'),
+            [var_y],
+            {'axis': 1},
+            value_infos=value_infos,
+            name=(name + '.bias'),
         )
     else:
         prog.VarDesc(var_y)


-# should not appear
-#def Dropout(
-#        prog, inputs, outputs, value_infos,
-#        *args, **kwargs):
-#    """
-#    onnx::Dropout-7:9
-#    """
-#
-#    val_data, = inputs
-#    val_output, = outputs[:1]
-#
-#    _assign(prog,
-#            dict([(val_output, val_data)]),
-#            value_infos,
-#            )


 def Gemm(prog, inputs, outputs, attrs, value_infos, name, *args, **kwargs):
     """
     onnx::Gemm-9:
     """

     # due to fluid fc don't support transposed weight, we use matmul + ew_add
-    val_a, val_b, val_c = inputs
-    val_y, = outputs
+    var_a, var_b, var_c, = inputs
+    var_y, = outputs
+    assert name and var_a and var_b and var_c and var_y

     alpha = attrs.get('alpha', 1.)  # optional
     beta = attrs.get('beta', 1.)  # optional
     trans_a = bool(attrs.get('transA', 0))  # optional
     trans_b = bool(attrs.get('transB', 0))  # optional

-    val_mm = name + '_mm'  # explicit variable
+    var_mm = var_y if beta == 0 else (name + '_mm')  # explicit variable
     prog.Op(
         '',
         'MatMul',
-        [val_a, val_b],
-        [val_mm],  # val
-        dict(
-            transpose_x=trans_a,
-            transpose_y=trans_b,
-            alpha=alpha,
-        ),
-        value_infos=value_infos,
-        name=val_mm,
+        [var_a, var_b],
+        [var_mm],
+        {
+            'transpose_x': trans_a,
+            'transpose_y': trans_b,
+            'alpha': alpha,
+        },
+        value_infos=value_infos,
+        name=(name + '/mm'),
     )
     prog.op_descs[-1].attrs.extend(
-        prog.OpDescAttrs(dict(
-            transpose_X=trans_a,
-            transpose_Y=trans_b,
-        )))  # f**k you API
+        prog.OpDescAttrs({
+            'transpose_X': trans_a,
+            'transpose_Y': trans_b,
+        }))  # f**k you API
     if beta != 0:
         if beta == 1.:  # exactly
             prog.Op(
                 '',
                 'Add',
-                [val_mm, val_c],
-                [val_y],  # val
-                dict(axis=1),
-                value_infos=value_infos,
-                name=(name + '_beta'),
+                [var_mm, var_c],
+                [var_y],
+                {'axis': 1},
+                value_infos=value_infos,
+                name=(name + '/bias'),
             )
         else:
-            val_beta = name + '_beta'  # explicit variable
-            val_vm = name + '_vm'  # explicit variable
+            var_beta = name + '_beta'  # explicit variable
+            var_vm = name + '_vm'  # explicit variable
             if beta.is_integer():
-                vm_dtype = _dtype_or_none(value_infos, val_c)
+                vm_dtype = _dtype_or_none(value_infos, var_c)
                 if vm_dtype is None:
                     vm_dtype = _np.dtype('float32')
                     _logger.warning(
-                        'in %s(%s -> Gemm -> %s): '
+                        'in op %s(%s -> Gemm -> %s): '
                         'attribute "beta" seems to be an interger, '
                         'however dtype can not be inferred, '
                         'still use float32', name, inputs, outputs)
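The Gemm rewrite in this diff decomposes onnx::Gemm into MatMul plus elementwise ops because fluid's fc cannot consume a transposed weight. Numerically, the emitted graph computes Y = alpha * op(A) @ op(B) + beta * C; a small numpy sketch of that equivalence (illustrative only, not the converter's code):

```python
import numpy as np

def gemm_decomposed(a, b, c, alpha=1., beta=1., trans_a=False, trans_b=False):
    """onnx::Gemm as MatMul (with alpha folded in) followed by a scale of C
    and a broadcasting Add -- mirroring the Op sequence emitted above."""
    mm = alpha * np.matmul(a.T if trans_a else a, b.T if trans_b else b)
    if beta == 0:
        return mm            # the Add is skipped entirely
    return mm + beta * c     # Mul + Add (axis=1 broadcast)

y = gemm_decomposed(np.ones((2, 3)), np.ones((3, 4)), np.ones(4),
                    alpha=2., beta=0.5)
```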
...
@@ -1225,34 +1226,31 @@ def Gemm(prog, inputs, outputs, attrs, value_infos, name, *args, **kwargs):
                 '',
                 'Constant',
                 [],
-                [val_beta],  # val
-                dict(value=beta),
-                name=val_beta,
+                [var_beta],
+                {'value': beta},
+                value_infos=value_infos,
+                name=var_beta,
             )
             prog.Op(
                 '',
                 'Mul',
-                [val_c, val_beta],
-                [val_vm],  # val
+                [var_c, var_beta],
+                [var_vm],
                 dict(),
-                name=(name + '_scale'),
+                value_infos=value_infos,
+                name=(name + '.beta/scale'),
             )
             prog.Op(
                 '',
                 'Add',
-                [val_mm, val_vm],
-                [val_y],  # val
-                dict(axis=1),
-                name=(name + '_bias'),
+                [var_mm, var_vm],
+                [var_y],
+                {'axis': 1},
+                #
+                name=(name + '/bias'),
             )


 def GlobalAveragePool(prog,
                       inputs,
                       outputs,
-                      attrs,
+                      attrs_,
                       value_infos,
                       name='',
                       *args,
...
@@ -1261,14 +1259,13 @@ def GlobalAveragePool(prog,
     onnx::GlobalAveragePool-1:
     """

-    return _global_pool(
-        prog, 'avg', inputs, outputs, attrs, value_infos, name=name)
+    return _global_pool(
+        prog, 'avg', inputs, outputs, value_infos, name=name)


 def GlobalMaxPool(prog,
                   inputs,
                   outputs,
-                  attrs,
+                  attrs_,
                   value_infos,
                   name='',
                   *args,
...
@@ -1277,78 +1274,490 @@ def GlobalMaxPool(prog,
     onnx::GlobalMaxPool-1:
     """

-    return _global_pool(
-        prog, 'max', inputs, outputs, attrs, value_infos, name=name)
+    return _global_pool(
+        prog, 'max', inputs, outputs, value_infos, name=name)


+def GRU(prog, inputs, outputs, attrs, value_infos, name, *args, **kwargs):
+    """
+    onnx::GRU-7:
+    """
+
+    var_x, var_w, var_r, var_b, var_len, var_xh, = (inputs + [''] * 3)[:6]
+    var_y, var_yh, = (outputs + [''] * 2)[:2]
+    assert name and var_x and var_w and var_r  # and (var_y or var_yh)
+    var_gate = name + '.gate'  # dummy output
+    var_reset = name + '.reset'  # dummy output
+    var_hidden = name + '.hidden'  # dummy output, # var_yh
+
+    # interpretation
+    x_shape = _shape_or_none(value_infos, var_x)
+    assert x_shape is not None, 'shape of X required to be known'
+    assert x_shape[1] == 1, 'only X with batch_size = 1 supported'
+    assert 'clip' not in attrs, 'clipping not supported'
+    hidden_size = attrs.get('hidden_size', None)  # optional
+    if hidden_size is None:
+        r_shape = _shape_or_none(value_infos, var_r)
+        if r_shape:
+            hidden_size = r_shape[-1]
+    if hidden_size is None:
+        w_shape = _shape_or_none(value_infos, var_w)
+        if w_shape:
+            hidden_size = w_shape[-2] // 3
+    if hidden_size is None and var_b:
+        b_shape = _shape_or_none(value_infos, var_b)
+        if b_shape:
+            hidden_size = b_shape[-1] // 6
+    if hidden_size is None and var_xh:
+        xh_shape = _shape_or_none(value_infos, var_xh)
+        if xh_shape:
+            hidden_size = xh_shape[-1]
+    assert hidden_size, 'hidden_size not inferred'
+    assert attrs.get(
+        'linear_before_reset',
+        0) == 0, 'only linear_before_reset = 0 supported'  # optional
+    direction = attrs.get('direction', 'forward')  # optional
+    assert direction != 'bidirectional', 'direction = bidirectional not supported'
+    activations = attrs.get('activations', ['Sigmoid', 'Tanh'])  # optional
+    assert len(activations) == 2, 'bidirectional operation not supported'
+    activations = [s.lower() for s in activations]  # TODO: check support
+    gate_activation, candidate_activation = activations
+    is_reverse = direction == 'reverse'
+
+    fluid_op = 'dynamic_gru'
+    _logger.warning('for op (%s -> GRU -> %s)', inputs, outputs)
+    _logger.warning('one of the parameters is intermediate value')
+    _logger.warning('broken Python code will be generated')
+
+    # generation
+    var_x0 = name + '_x0'  # explicit variable
+    prog.Op(
+        '',
+        'Squeeze',
+        [var_x],
+        [var_x0],
+        {'axes': [1]},  # index on n
+        name=(name + '.x/index'),
+    )
+    var_w0 = name + '_w0'  # explicit variable
+    prog.Op(
+        '',
+        'Squeeze',
+        [var_w],
+        [var_w0],
+        {'axes': [0]},  # index on d
+        name=(name + '.w/index'),
+    )
+    var_fc = name + '_fc'
+    var_mm = (name + '_mm') if var_b else var_fc
+    prog.Op(
+        '',
+        'MatMul',
+        [var_x0, var_w0],
+        [var_mm],
+        {
+            'transpose_x': 0,
+            'transpose_y': 1,
+        },
+        name=(name + '/mm'),
+    )
+    prog.op_descs[-1].attrs.extend(
+        prog.OpDescAttrs({
+            'transpose_X': 0,
+            'transpose_Y': 1,
+        }))  # f**k you API
+    var_r0 = name + '_r0'  # explicit variable
+    prog.Op(
+        '',
+        'Squeeze',
+        [var_r],
+        [var_r0],
+        {'axes': [0]},  # index on d
+        name=(name + '.r/index'),
+    )
+    var_r0t = name + '_r0t'  # explicit variable
+    prog.Op(
+        '',
+        'Transpose',
+        [var_r0],
+        [var_r0t],
+        {'perm': [1, 0]},  # transpose OI->IO
+        name=(name + '.r0/transpose'),
+    )
+    if var_b:
+        var_bi = name + '_bi'  # explicit variable
+        var_bh = name + '_bh'  # explicit variable
+        prog.Op(
+            '',
+            'Split',
+            [var_b],
+            [var_bi, var_bh],
+            {
+                'axis': 1,  # split on x
+                'split': [hidden_size * 3, hidden_size * 3],
+            },
+            name=(name + '.b/split'),
+        )
+        # squeeze bi so Gemm Add can be performed on axis=1 exaclty
+        var_bi0 = name + '_bi0'  # explicit variable
+        prog.Op(
+            '',
+            'Squeeze',
+            [var_bi],
+            [var_bi0],
+            {'axes': [0]},  # slice on d
+            name=(name + '.bi/index'),
+        )
+        prog.Op(
+            '',
+            'Add',
+            [var_mm, var_bi0],
+            [var_fc],
+            {'axis': 1},  #
+            name=(name + '.i/bias'),
+        )
+    if var_xh:
+        var_xh0 = name + '_xh0'  # explicit variable
+        prog.Op(
+            '',
+            'Squeeze',
+            [var_xh],
+            [var_xh0],
+            {'axes': [1]},  # index on n
+            name=(name + '.xh/index'),
+        )
+    var_y00 = name + '_y00'  # explicit variable #
+    prog.Code('{} = layers.{}({}, {}, origin_mode=True'
+              ', h_0={}'
+              ', is_reverse={}'
+              ', gate_activation={}'
+              ', candidate_activation={}'
+              ', param_attr={}, bias_attr={})'.format(
+                  var_y00,
+                  fluid_op,
+                  var_fc,
+                  hidden_size,
+                  var_xh0 if var_xh else None,
+                  is_reverse,
+                  repr(gate_activation),
+                  repr(candidate_activation),
+                  repr(var_r0t),
+                  repr(var_bh) if var_b else False,
+              ))
+    fluid_op = 'gru'
+    prog.VarDesc(var_y00)
+    prog.VarDesc(var_gate)
+    prog.VarDesc(var_reset)
+    prog.VarDesc(var_hidden)
+    prog.OpDesc(
+        fluid_op,
+        (['Input', 'Weight', 'Bias', 'H0'],
+         [var_fc, var_r0t] + ([var_bh] if var_b else []) +
+         ([var_xh0] if var_xh else [])),
+        (['Hidden', 'BatchGate', 'BatchResetHiddenPrev', 'BatchHidden'],
+         [var_y00, var_gate, var_reset, var_hidden]),
+        {
+            'is_reverse': is_reverse,
+            'gate_activation': gate_activation,
+            'activation': candidate_activation,
+            'origin_mode': True,
+        },
+    )
+    if var_y:
+        prog.Op(
+            '',
+            'Unsqueeze',
+            [var_y00],
+            [var_y],
+            {'axes': [1, 1]},  # extrude on dn
+            name=(name + '.y/reshape'),
+        )
+    if var_yh:
+        prog.Op(
+            '',
+            'Unsqueeze',
+            [var_y00],  #
+            [var_yh],  #
+            {'axes': [1, 1]},  # extrude on dn
+            name=(name + '.yh/reshape'),
+        )


-#def LRN(
-#        prog, inputs, outputs, attrs, value_infos, name, # name required
-#        *args, **kwargs):
-#    """
-#    onnx::LRN-1:
-#    """
-#
-#    # I/O
-#    val_x, = inputs
-#    val_y, = outputs
-#    var_x = _make_var_name(val_x)
-#    var_y = _make_var_name(val_y)
-#
-#    # interpretation
-#    fluid_op = 'lrn'
-#    size = attrs['size'] # required
-#    alpha = attrs.get('alpha', 0.0001) # optional
-#    beta = attrs.get('beta', 0.75) # optional
-#    bias = attrs.get('bias', 1.0) # optional
-#    name_attr = ', name={}'.format(repr(name)) if name else ''
-#
-#    # generation
-#    prog.Code('{} = layers.{}({}'
-#              ', n={}'
-#              ', k={}'
-#              ', alpha={}'
-#              ', beta={}'
-#              '{})'
-#              .format(var_y,
-#                      fluid_op,
-#                      var_x,
-#                      # attrs
-#                      size,
-#                      bias,
-#                      alpha,
-#                      beta,
-#                      name_attr,
-#                      ))
-#    var_mid = name + '.mid' # hidden variable
-#    prog.VarDesc(var_y)
-#    prog.VarDesc(var_mid)
-#    prog.OpDesc(fluid_op,
-#                ([var_x], 'X'),
-#                ([var_y, var_mid], 'Out', 'MidOut'),
-#                dict(n=size,
-#                     k=bias,
-#                     alpha=alpha,
-#                     beta=beta,
-#                     ),
-#                )


+def LSTM(prog, inputs, outputs, attrs, value_infos, name, *args, **kwargs):
+    """
+    onnx::LSTM-7:
+    """
+
+    var_x, var_w, var_r, var_b, var_len, var_xh, var_xc, var_p, = (
+        inputs + [''] * 5)[:8]
+    var_y, var_yh, var_yc, = (outputs + [''] * 3)[:3]
+    assert name and var_x and var_w and var_r  # and (var_y or var_yh or var_yc)
+    var_gate = name + '.gate'
+    var_pre = name + '.pre'
+
+    # interpretation
+    x_shape = _shape_or_none(value_infos, var_x)
+    assert x_shape is not None, 'shape of X required to be known'
+    assert x_shape[1] == 1, 'only X with batch_size = 1 supported'
+    assert 'clip' not in attrs, 'clipping not supported'
+    hidden_size = attrs.get('hidden_size', None)  # optional
+    if hidden_size is None:
+        r_shape = _shape_or_none(value_infos, var_r)
+        if r_shape:
+            hidden_size = r_shape[-1]
+    if hidden_size is None:
+        w_shape = _shape_or_none(value_infos, var_w)
+        if w_shape:
+            hidden_size = w_shape[-2] // 4
+    if hidden_size is None and var_b:
+        b_shape = _shape_or_none(value_infos, var_b)
+        if b_shape:
+            hidden_size = b_shape[-1] // 8
+    if hidden_size is None and var_xh:
+        xh_shape = _shape_or_none(value_infos, var_xh)
+        if xh_shape:
+            hidden_size = xh_shape[-1]
+    if hidden_size is None and var_xc:
+        xc_shape = _shape_or_none(value_infos, var_xc)
+        if xc_shape:
+            hidden_size = xc_shape[-1]
+    if hidden_size is None and var_p:
+        p_shape = _shape_or_none(value_infos, var_p)
+        if p_shape:
+            hidden_size = p_shape[-1] // 3
+    assert hidden_size, 'hidden_size not inferred'
+    assert attrs.get(
+        'linear_before_reset',
+        0) == 0, 'only linear_before_reset = 0 supported'  # optional
+    assert attrs.get('input_forget',
+                     0) == 0, 'only input_forget = 0 supported'  # optional
+    direction = attrs.get('direction', 'forward')  # optional
+    assert direction != 'bidirectional', 'direction = bidirectional not supported'
+    activations = attrs.get('activations',
+                            ['Sigmoid', 'Tanh', 'Tanh'])  # optional
+    assert len(activations) == 3, 'bidirectional operation not supported'
+    activations = [s.lower() for s in activations]  # TODO: check support
+    gate_activation, cell_activation, candidate_activation = activations
+    is_reverse = direction == 'reverse'
+
+    fluid_op = 'dynamic_lstm'
+    name_attr = ', name={}'.format(repr(name))
+    _logger.warning('for op %s(%s -> LSTM -> %s)', name, inputs, outputs)
+    _logger.warning('one of the parameters is intermediate value')
+    _logger.warning('broken Python code will be generated')
+
+    # generation
+    var_x0 = name + '_x0'  # explicit variable
+    prog.Op(
+        '',
+        'Squeeze',
+        [var_x],
+        [var_x0],
+        {'axes': [1]},  # index on n
+        name=(name + '.x/index'),
+    )
+    var_w0 = name + '_w0'  # explicit variable
+    prog.Op(
+        '',
+        'Squeeze',
+        [var_w],
+        [var_w0],
+        {'axes': [0]},  # index on d
+        name=(name + '.w/index'),
+    )
+    var_fc = name + '_fc'
+    var_mm = (name + '_mm') if var_b else var_fc
+    prog.Op(
+        '',
+        'MatMul',
+        [var_x0, var_w0],
+        [var_mm],
+        {
+            'transpose_x': 0,
+            'transpose_y': 1,
+        },
+        name=(name + '/mm'),
+    )
+    prog.op_descs[-1].attrs.extend(
+        prog.OpDescAttrs({
+            'transpose_X': 0,
+            'transpose_Y': 1,
+        }))  # f**k you API
+    var_r0 = name + '_r0'  # explicit variable
+    prog.Op(
+        '',
+        'Squeeze',
+        [var_r],
+        [var_r0],
+        {'axes': [0]},  # index on d
+        name=(name + '.r/index'),
+    )
+    var_r0t = name + '_r0t'  # explicit variable
+    prog.Op(
+        '',
+        'Transpose',
+        [var_r0],
+        [var_r0t],
+        {'perm': [1, 0]},  # transpose OI->IO
+        name=(name + '.r0/transpose'),
+    )
+    if var_b:
+        var_bi = name + '_bi'  # explicit variable
+        var_bh = name + '_bh'  # explicit variable
+        prog.Op(
+            '',
+            'Split',
+            [var_b],
+            [var_bi, var_bh],
+            {
+                'axis': 1,  # split on x
+                'split': [hidden_size * 4, hidden_size * 4],
+            },
+            name=(name + '.b/split'),
+        )
+        # squeeze bi so Gemm Add can be performed on axis=1 exaclty
+        var_bi0 = name + '_bi0'  # explicit variable
+        prog.Op(
+            '',
+            'Squeeze',
+            [var_bi],
+            [var_bi0],
+            {'axes': [0]},  # slice on d
+            name=(name + '.bi/index'),
+        )
+        prog.Op(
+            '',
+            'Add',
+            [var_mm, var_bi0],
+            [var_fc],
+            {'axis': 1},  #
+            name=(name + '.i/bias'),
+        )
+    if var_xh:
+        var_xh0 = name + '_xh0'  # explicit variable
+        prog.Op(
+            '',
+            'Squeeze',
+            [var_xh],
+            [var_xh0],
+            {'axes': [1]},  # index on n
+            name=(name + '.xh/index'),
+        )
+    if var_xc:
+        var_xc0 = name + '_xc0'  # explicit variable
+        prog.Op(
+            '',
+            'Squeeze',
+            [var_xc],
+            [var_xc0],
+            {'axes': [1]},  # index on n
+            name=(name + '.xc/index'),
+        )
+    var_bhp = var_p
+    if var_b:
+        if var_p:
+            var_bhp = name + '_bhp'  # explicit variable
+            prog.Op(
+                '',
+                'Concat',
+                [var_bh, var_p],
+                [var_bhp],
+                {'axis': [1]},  # cat on x
+                name=(name + '/concat'),
+            )
+        else:
+            var_bhp = var_bh
+    var_yh0 = name + '_yh0'  # explicit variable
+    var_yc0 = name + '_yc0'  # explicit variable
+    prog.Code('{}, {} = layers.{}({}, {}'
+              ', h_0={}'
+              ', c_0={}'
+              ', use_peepholes={}'
+              ', is_reverse={}'
+              ', gate_activation={}'
+              ', cell_activation={}'
+              ', candidate_activation={}'
+              ', param_attr={}, bias_attr={}'
+              '{})'.format(
+                  var_yh0,
+                  var_yc0,
+                  fluid_op,
+                  var_fc,
+                  hidden_size * 4,
+                  var_xh0 if var_xh else None,
+                  var_xc0 if var_xc else None,
+                  bool(var_p),
+                  is_reverse,
+                  repr(gate_activation),
+                  repr(cell_activation),
+                  repr(candidate_activation),
+                  repr(var_r0t),
+                  repr(var_bhp) if var_bhp else False,
+                  name_attr,
+              ))
+    fluid_op = 'lstm'
+    prog.VarDesc(var_yh0)
+    prog.VarDesc(var_yc0)
+    prog.VarDesc(var_gate)
+    prog.VarDesc(var_pre)
+    prog.OpDesc(
+        fluid_op,
+        (['Input', 'Weight', 'Bias', 'H0', 'C0'],
+         [var_fc, var_r0t] + ([var_bhp] if var_bhp else []) +
+         ([var_xh0] if var_xh else []) + ([var_xc0] if var_xc else [])),
+        (['Hidden', 'Cell', 'BatchGate', 'BatchCellPreAct'],
+         [var_yh0, var_yc0, var_gate, var_pre]),
+        {
+            'use_peepholes': bool(var_p),
+            'is_reverse': is_reverse,
+            'gate_activation': gate_activation,
+            'cell_activation': cell_activation,
+            'candidate_activation': candidate_activation,
+        },
+    )
+    if var_y:
+        prog.Op(
+            '',
+            'Unsqueeze',
+            [var_yh0],  #
+            [var_y],  # var_y
+            {'axes': [1, 1]},  # extrude on dn
+            name=(name + '.y/reshape'),
+        )
+    if var_yh:
+        prog.Op(
+            '',
+            'Unsqueeze',
+            [var_yh0],
+            [var_yh],  # var_yh
+            {'axes': [1, 1]},  # extrude on dn
+            name=(name + '.yh/reshape'),
+        )
+    if var_yc:
+        prog.Op(
+            '',
+            'Unsqueeze',
+            [var_yc0],
+            [var_yc],
+            {'axes': [1, 1]},  # extrude on dn
+            name=(name + '.yc/reshape'),
+        )


-def MaxPool(prog, inputs, outputs, attrs, value_infos, name='', *args,
-            **kwargs):
+def MaxPool(prog, inputs, outputs, attrs, value_infos, name, *args, **kwargs):
     """
     onnx::MaxPool-10:
     """

-    return _pool(prog, 'max', inputs, outputs, attrs, value_infos, name=name)
+    return _pool(prog, 'max', inputs, outputs, attrs, value_infos, name)


-def MaxRoiPool(prog, inputs, outputs, attrs, value_infos, name, *args,
+def MaxRoiPool(prog, inputs, outputs, attrs, name, *args,
                **kwargs):
     """
     onnx::MaxRoiPool-1:
     """

-    _roi_pool(prog, 'roi_pool', inputs, outputs, attrs, value_infos, name)
+    _roi_pool(prog, 'roi_pool', inputs, outputs, attrs, name)


 def Pad(prog, inputs, outputs, attrs, value_infos, name='', *args, **kwargs):
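The new GRU and LSTM handlers infer hidden_size from whichever tensor shape is known, relying on the ONNX recurrent weight layouts: W stacks the gate matrices (3 gates for GRU, 4 for LSTM) along its second-to-last axis, and B concatenates the input-side and recurrent biases (hence the `// 6` and `// 8` divisions). A sketch of that fallback chain, with illustrative shapes (the helper itself is not part of the converter):

```python
def infer_hidden_size(w_shape=None, b_shape=None, num_gates=3):
    """Mirror the fallbacks above: ONNX W is [dirs, num_gates * hidden, input],
    B is [dirs, 2 * num_gates * hidden]; divide out the gate count."""
    if w_shape:
        return w_shape[-2] // num_gates
    if b_shape:
        return b_shape[-1] // (2 * num_gates)
    return None

gru_hidden = infer_hidden_size(w_shape=[1, 300, 128], num_gates=3)   # 100
lstm_hidden = infer_hidden_size(b_shape=[1, 800], num_gates=4)       # 100
```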
...
@@ -1357,32 +1766,31 @@ def Pad(prog, inputs, outputs, attrs, value_infos, name='', *args, **kwargs):
     """

     # I/O
-    val_data, = inputs
-    val_output, = outputs
-    var_data = _make_var_name(val_data)
-    var_output = _make_var_name(val_output)
+    var_data, = inputs
+    var_output, = outputs
+    assert var_data and var_output

     # interpretation
     pads = attrs['pads']  # required
     mode = attrs.get('mode', 'constant')  # optional
     value = attrs.get('value', 0.)  # optional
-    data_shape = _shape_or_none(value_infos, val_data)
-    output_shape = _shape_or_none(value_infos, val_output)
+    data_shape = _shape_or_none(value_infos, var_data)
+    output_shape = _shape_or_none(value_infos, var_output)
     assume_pad2d = False
     if len(pads) == 4:
         assume_pad2d |= mode != 'constant'
-        if data_shape:
+        if data_shape is not None:
             assume_pad2d |= data_shape and len(data_shape) == 4  # NCHW
-        if output_shape:
+        if output_shape is not None:
             assume_pad2d |= output_shape and len(output_shape) == 4  # NCHW
-    od_attrs = dict(pad_value=value)
+    od_attrs = {'pad_value': value}
     if assume_pad2d:
         fluid_op = 'pad2d'
         pad2d_attr = ', mode={}, data_format="NCHW"'.format(repr(mode))
         od_attrs['mode'] = mode
         od_attrs['data_format'] = "NCHW"
     else:
-        assert mode == 'constant', 'mode {} is supported only in pad2d'.format(
+        assert mode == 'constant', 'mode {} supported only in pad2d'.format(
             mode)
         fluid_op = 'pad'
         pad2d_attr = ''
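For the non-pad2d path above, note the layout difference the converter has to bridge: ONNX lists all begin values, then all end values, while fluid's pad expects interleaved (begin, end) pairs per dimension. A hedged reordering sketch (function name is illustrative, not the converter's code):

```python
def onnx_pads_to_fluid(pads):
    """ONNX:  [d1_begin, d2_begin, ..., d1_end, d2_end, ...]
    fluid: [d1_begin, d1_end, d2_begin, d2_end, ...]"""
    half = len(pads) // 2
    return [p for pair in zip(pads[:half], pads[half:]) for p in pair]

reordered = onnx_pads_to_fluid([0, 1, 2, 3])  # [0, 2, 1, 3]
```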
...
@@ -1408,8 +1816,8 @@ def Pad(prog, inputs, outputs, attrs, value_infos, name='', *args, **kwargs):
     prog.VarDesc(var_output)
     prog.OpDesc(
         fluid_op,
-        ([var_data], 'X'),
-        ([var_output], 'Out'),
+        (['X', 'Paddings'], [var_data]),  #
+        (['Out'], [var_output]),
         od_attrs,
     )
...
@@ -1417,7 +1825,7 @@ def PRelu(prog,
 def PRelu(prog,
           inputs,
           outputs,
-          attrs,
+          attrs_,
           value_infos,
           name='',
           embed_params=False,
...
@@ -1428,67 +1836,82 @@ def PRelu(prog,
...
@@ -1428,67 +1836,82 @@ def PRelu(prog,
"""
"""
# I/O
# I/O
val_x
,
val_slope
=
inputs
var_x
,
var_slope
,
=
inputs
val_y
,
=
outputs
var_y
,
=
outputs
var_x
=
_make_var_name
(
val_x
)
assert
name
and
var_x
and
var_slope
and
var_y
var_y
=
_make_var_name
(
val_y
)
# interpretation
# interpretation
mode
=
'channel'
slope_shape
=
_shape_or_none
(
value_infos
,
var_slope
)
if
slope_shape
is
not
None
:
if
not
slope_shape
:
mode
=
'all'
elif
len
(
slope_shape
)
>=
2
:
if
slope_shape
[
1
]
!=
_np
.
product
(
slope_shape
):
# not channel broadcasting
mode
=
'element'
fluid_op
=
'prelu'
fluid_op
=
'prelu'
name_attr
=
', name={}'
.
format
(
repr
(
name
))
if
name
else
''
name_attr
=
', name={}'
.
format
(
repr
(
name
))
if
name
else
''
embeddable
=
_check_embeddable
(
value_infos
,
var_slope
)
if
not
embeddable
:
_logger
.
warning
(
'for op %s(%s -> PRelu -> %s)'
,
name
,
inputs
,
outputs
)
_logger
.
warning
(
'one of the parameters is intermediate value'
)
_logger
.
warning
(
'broken Python code will be generated'
)
embed_params
&=
embeddable
if
embed_params
:
if
embed_params
:
assert
name
!=
''
assert
name
var_slope
=
'{}.w_0'
.
format
(
val_slope
)
embedded_slope
=
name
+
'.w_0'
value_infos
[
val_slope
].
setdefault
(
'embeded_as'
,
[]).
append
(
var_slope
)
value_infos
[
var_slope
][
'embedded_as'
].
append
(
embedded_slope
)
var_slope
=
embedded_slope
param_attr
=
''
param_attr
=
''
else
:
else
:
var_slope
=
_make_var_name
(
val_slope
)
param_attr
=
', param_attr={}'
.
format
(
repr
(
var_slope
))
param_attr
=
', param_attr={}'
.
format
(
repr
(
var_slope
))
# generation
# generation
prog
.
Code
(
'{} = layers.{}({}, mode="all"'
prog
.
Code
(
'{} = layers.{}({}'
', mode={}'
'{}{})'
.
format
(
'{}{})'
.
format
(
var_y
,
var_y
,
fluid_op
,
fluid_op
,
var_x
,
var_x
,
# attrs
# attrs
repr
(
mode
),
param_attr
,
param_attr
,
name_attr
,
name_attr
,
))
))
prog
.
VarDesc
(
var_y
)
prog
.
VarDesc
(
var_y
)
prog
.
OpDesc
(
prog
.
OpDesc
(
fluid_op
,
fluid_op
,
([
var_x
],
'X'
),
([
'X'
,
'Alpha'
],
[
var_x
,
var_slope
]
),
([
var_y
],
'Out'
),
([
'Out'
],
[
var_y
]
),
dict
(
mode
=
'all'
)
,
{
'mode'
:
mode
}
,
)
)
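The mode selection above keys entirely off the slope tensor's shape: an empty shape means a single scalar slope (`'all'`), a second-axis size equal to the total element count means per-channel broadcasting (`'channel'`), and anything else falls back to `'element'`. A standalone sketch of that branch (`prelu_mode` is a hypothetical helper re-implemented here for illustration, not a converter API):

```python
import numpy as np

def prelu_mode(slope_shape):
    """Mirror of the slope-shape branch in PRelu conversion."""
    if slope_shape is None:
        return 'channel'  # default when the shape cannot be inferred
    if not slope_shape:  # empty shape: a scalar slope
        return 'all'
    if len(slope_shape) >= 2 and slope_shape[1] != np.prod(slope_shape):
        return 'element'  # not channel broadcasting
    return 'channel'
```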
def PsRoiPool(prog, inputs, outputs, attrs, name, *args, **kwargs):
    """
    caffe2::PsRoiPool
    """

    _roi_pool(prog, 'psroi_pool', inputs, outputs, attrs, name)
def Reshape(prog, inputs, outputs, attrs_, value_infos, name, *args, **kwargs):
    """
    onnx::Reshape-5:
    """

    # I/O
    var_data, var_shape, = inputs
    var_reshaped, = outputs
    assert name and var_data and var_shape and var_reshaped

    # interpretation
    shape = _const_weight_or_none(value_infos, var_shape)
    is_const_shape = shape is not None and 'const_value' in value_infos[var_shape]
    if shape is None:
        shape = _shape_or_none(value_infos, var_reshaped)
    # assert shape is not None, ('given shape is neither const value nor deductible from output, '
...
@@ -1496,15 +1919,25 @@ def Reshape(prog, inputs, outputs, attrs, value_infos, name, *args, **kwargs):
    if shape is None:
        shape = [1, -1]  # who knows
        _logger.warning(
            'in op %s(%s -> Reshape -> %s): '
            'input "shape" not inferred, use [1, -1] as dummy value, '
            'the behavior of Paddle fluid maybe undefined', name, inputs,
            outputs)
    shape_dtype = _dtype_or_none(value_infos, var_shape)
    if shape_dtype is None:
        _logger.warning(
            'in op %s(%s -> Reshape -> %s): '
            'dtype of input "shape" not inferred, int32 assumed', name, inputs,
            outputs)
        shape_dtype = _np.dtype('int32')
    fluid_op = 'reshape'
    name_attr = ', name={}'.format(repr(name))

    # generation
    var_shape_i32 = (name + '_shape_i32') if shape_dtype != _np.int32 else var_shape  # explicit variable
    prog.Code('# shape: {} = {} # const as literal'.format(var_shape, shape))
    if is_const_shape:
        prog.Code('{} = layers.{}({}'
                  ', shape={}'
...
@@ -1517,17 +1950,23 @@ def Reshape(prog, inputs, outputs, attrs, value_infos, name, *args, **kwargs):
                  name_attr,
              ))
    else:
        if shape_dtype != _np.int32:
            prog.Op(
                '',
                'Cast',
                [var_shape],
                [var_shape_i32],
                {'to': _np.dtype('int32')},  # use np.dtype
                value_infos={
                    var_shape: {'dtype': shape_dtype},
                    var_shape_i32: {'dtype': _np.dtype('int32')},
                },
                name=(name + '/cast'),
            )
        prog.Code('{} = layers.{}({}'
                  ', shape={}'
                  ', actual_shape={}'
...
@@ -1537,27 +1976,19 @@ def Reshape(prog, inputs, outputs, attrs, value_infos, name, *args, **kwargs):
                  var_data,
                  # attrs
                  shape,
                  var_shape_i32,
                  name_attr,
              ))
    fluid_op = 'reshape2'
    var_xshape = name + '.xshape'  # dummy output
    prog.VarDesc(var_reshaped)
    prog.VarDesc(var_xshape)
    prog.OpDesc(
        fluid_op,
        (['X', 'Shape', 'ShapeTensor'], [var_data, var_shape_i32]),  #
        (['Out', 'XShape'], [var_reshaped, var_xshape]),
        {'shape': shape},
    )
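When the target shape is neither a constant weight nor deducible from the output, the converter falls back to `[1, -1]` and warns that the result may be undefined. In NumPy terms that dummy shape flattens everything into a single batch row, which illustrates why it is only a guess:

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)
y = x.reshape(1, -1)  # the [1, -1] dummy shape: one row, everything flattened
# the original (2, 3, 4) layout is lost, only the element count is preserved
assert y.shape == (1, 24)
```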
def Resize(prog, inputs, outputs, attrs, value_infos, name='', *args, **kwargs):
...
@@ -1568,44 +1999,57 @@ def Resize(prog, inputs, outputs, attrs, value_infos, name='', *args, **kwargs):
    return _interpolate(prog, inputs, outputs, attrs, value_infos, name=name)


def RoiAlign(prog, inputs, outputs, attrs, name, *args, **kwargs):
    """
    caffe2::RoiAlign
    """

    _roi_pool(prog, 'roi_align', inputs, outputs, attrs, name)
def Shape(prog, inputs, outputs, attrs_, name, **kwargs):
    """
    onnx::Shape-1:
    """

    # I/O
    var_data, = inputs
    var_shape, = outputs
    assert name and var_data and var_shape

    # interpretation
    fluid_op = 'shape'
    var_shape_i64 = name + '_shape_i64'

    # generation
    prog.Code('{} = layers.{}({})'.format(
        var_shape_i64,
        fluid_op,
        var_data,
        # attrs
    ))
    prog.VarDesc(var_shape_i64)
    prog.OpDesc(
        fluid_op,
        (['Input'], [var_data]),
        (['Out'], [var_shape_i64]),
    )
    prog.Op(
        '',
        'Cast',
        [var_shape_i64],
        [var_shape],
        {'to': _np.dtype('int32')},  # use np.dtype
        value_infos={
            var_shape: {'dtype': _np.dtype('int32')},
            var_shape_i64: {'dtype': _np.dtype('int64')},
        },
        name=(name + '/cast'),
    )
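The extra `_shape_i64` variable plus the trailing `Cast` exist because ONNX `Shape` produces an int64 tensor while the fluid consumers here work with int32. The equivalent conversion in plain NumPy (assuming the dimension values fit in int32):

```python
import numpy as np

x = np.zeros((2, 3, 224, 224), dtype=np.float32)
shape_i64 = np.array(x.shape, dtype=np.int64)   # what onnx::Shape yields
shape_i32 = shape_i64.astype(np.int32)          # what the downstream fluid ops consume
assert shape_i32.dtype == np.int32
assert shape_i32.tolist() == [2, 3, 224, 224]
```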
def Slice(prog, inputs, outputs, attrs, value_infos, *args, **kwargs):
@@ -1614,18 +2058,17 @@ def Slice(prog, inputs, outputs, attrs, value_infos, *args, **kwargs):
...
@@ -1614,18 +2058,17 @@ def Slice(prog, inputs, outputs, attrs, value_infos, *args, **kwargs):
"""
"""
# I/O
# I/O
val_data
,
=
inputs
var_data
,
=
inputs
val_output
,
=
outputs
var_output
,
=
outputs
var_data
=
_make_var_name
(
val_data
)
assert
var_data
and
var_output
var_output
=
_make_var_name
(
val_output
)
# interpretation
# interpretation
fluid_op
=
'slice'
fluid_op
=
'slice'
axes
=
attrs
[
'axes'
]
# required
axes
=
attrs
[
'axes'
]
# required
starts
=
attrs
[
'starts'
]
# required
starts
=
attrs
[
'starts'
]
# required
ends
=
attrs
[
'ends'
]
# required
ends
=
attrs
[
'ends'
]
# required
shape
=
_shape_or_none
(
value_infos
,
va
l
_data
)
shape
=
_shape_or_none
(
value_infos
,
va
r
_data
)
if
shape
:
if
shape
is
not
None
:
# ndims = len(shape)
# ndims = len(shape)
# for idx, value in enumerate(axes):
# for idx, value in enumerate(axes):
# if value > ONNX_INT_MAX // 2:
# if value > ONNX_INT_MAX // 2:
...
@@ -1657,13 +2100,13 @@ def Slice(prog, inputs, outputs, attrs, value_infos, *args, **kwargs):
...
@@ -1657,13 +2100,13 @@ def Slice(prog, inputs, outputs, attrs, value_infos, *args, **kwargs):
prog
.
VarDesc
(
var_output
)
prog
.
VarDesc
(
var_output
)
prog
.
OpDesc
(
prog
.
OpDesc
(
fluid_op
,
fluid_op
,
([
var_data
],
'Input'
),
([
'Input'
],
[
var_data
]
),
([
var_output
],
'Out'
),
([
'Out'
],
[
var_output
]
),
dict
(
{
axes
=
axes
,
'axes'
:
axes
,
starts
=
starts
,
'starts'
:
starts
,
ends
=
ends
,
'ends'
:
ends
,
)
,
}
,
)
)
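The required `axes`/`starts`/`ends` attributes map directly onto per-axis Python slices, with untouched axes taking their full range. A hypothetical NumPy mirror of the `onnx::Slice-1` semantics being converted here (`slice_ref` is an illustration, not part of the converter):

```python
import numpy as np

def slice_ref(data, axes, starts, ends):
    """Apply starts/ends along the listed axes, full range elsewhere."""
    index = [slice(None)] * data.ndim
    for axis, start, end in zip(axes, starts, ends):
        index[axis] = slice(start, end)
    return data[tuple(index)]

x = np.arange(20).reshape(4, 5)
y = slice_ref(x, axes=[0, 1], starts=[1, 0], ends=[3, 2])
assert y.shape == (2, 2)
```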
...
@@ -1673,9 +2116,8 @@ def Split(prog, inputs, outputs, attrs, *args, name='', **kwargs):
    """
    # I/O
    var_input, = inputs
    assert var_input

    # interpretation
    fluid_op = 'split'
...
@@ -1687,7 +2129,7 @@ def Split(prog, inputs, outputs, attrs, *args, name='', **kwargs):
    prog.Code('{} = layers.{}({}, {}'
              ', dim={}'
              '{})'.format(
                  ', '.join(outputs),
                  fluid_op,
                  var_input,
                  split,
...
@@ -1695,28 +2137,29 @@ def Split(prog, inputs, outputs, attrs, *args, name='', **kwargs):
                  axis,
                  name_attr,
              ))
    for var_out in outputs:
        prog.VarDesc(var_out)
    prog.OpDesc(
        fluid_op,
        (['X'], [var_input]),
        (['Out'] * len(outputs), outputs),
        {
            'axis': axis,
            'sections': split,
            # unused
            'num': 0,
        },
    )


def Sum(prog, inputs, outputs, attrs_, *args, **kwargs):
    """
    onnx::Sum-8:
    """

    # I/O
    var_sum, = outputs
    assert var_sum

    # interpretation
    fluid_op = 'sums'
...
@@ -1725,39 +2168,38 @@ def Sum(prog, inputs, outputs, *args, **kwargs):
    prog.Code('{} = layers.{}({})'.format(
        var_sum,
        fluid_op,
        '[' + ', '.join(inputs) + ']',
        # attrs
    ))
    fluid_op = 'sum'
    prog.VarDesc(var_sum)
    prog.OpDesc(
        fluid_op,
        (['X'] * len(inputs), inputs),
        (['Out'], [var_sum]),
        dict(),
    )
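`onnx::Sum` is variadic elementwise addition over all inputs, which is what the generated `layers.sums([...])` call expresses. A reference check of the intended semantics in NumPy:

```python
import numpy as np

# three same-shaped inputs, summed elementwise over the list axis
inputs = [np.ones((2, 3)), np.full((2, 3), 2.0), np.full((2, 3), 3.0)]
total = np.sum(inputs, axis=0)
assert total.shape == (2, 3)
assert np.allclose(total, 6.0)
```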
def Tile(prog, inputs, outputs, attrs_, value_infos, name='', *args, **kwargs):
    """
    onnx::Tile-1:
    """

    # I/O
    var_input, var_repeats, = inputs
    var_output, = outputs
    assert var_input and var_repeats and var_output

    # interpretation
    repeats = _const_weight_or_none(value_infos, var_repeats)
    assert repeats is not None, 'only const repeats supported'  # if contain_tensor(expand_times)
    fluid_op = 'expand'
    name_attr = ', name={}'.format(repr(name)) if name else ''

    # generation
    prog.Code('# repeats: {} = {} # const as literal'.format(var_repeats, repeats))
    prog.Code('{} = layers.{}({}'
              ', expand_times={}'
              '{})'.format(
...
@@ -1771,22 +2213,21 @@ def Tile(prog, inputs, outputs, attrs, value_infos, name='', *args, **kwargs):
    prog.VarDesc(var_output)
    prog.OpDesc(
        fluid_op,
        (['X', 'expand_times_tensor'], [var_input]),  # TODO
        (['Out'], [var_output]),
        {'expand_times': repeats},
    )


def Transpose(prog, inputs, outputs, attrs, name, *args, **kwargs):
    """
    onnx::Transpose-1:
    """

    # I/O
    var_data, = inputs
    var_transposed, = outputs
    assert name and var_data and var_transposed

    # interpretation
    fluid_op = 'transpose'
...
@@ -1810,9 +2251,9 @@ def Transpose(prog, inputs, outputs, attrs, *args, name='', **kwargs):
    prog.VarDesc(var_transposed)
    prog.OpDesc(
        fluid_op,
        (['X'], [var_data]),
        (['Out', 'XShape'], [var_transposed, var_xshape]),
        {'axis': perm},  # f**k you API
    )
...
@@ -1902,9 +2343,8 @@ if __name__ == '__main__':
        ['input'],
        ['output'],
        dict(to=2),  # TensorProto.UINT8
        dict(
            input=dict(shape=(2, 3), dtype=np.float32),
            output=dict(shape=(2, 3), dtype=np.uint8)),
    )
    logger.info('Cast program:\n%s', prog)
...
@@ -2101,12 +2541,11 @@ if __name__ == '__main__':
    logger.info('Less program:\n%s', prog)

    prog = Program()
    _default(
        prog,
        'MatMul', ['A', 'B'], ['Y'],
        dict(),
        dict(Y=dict(shape=(2, 8), dtype=np.float32)),
        name='MatMul')
    logger.info('MatMul program:\n%s', prog)

    prog = Program()
...
@@ -2168,11 +2607,9 @@ if __name__ == '__main__':
    logger.info('PRelu program:\n%s', prog)

    prog = Program()
    Tile(
        prog, ['input', 'repeats'], ['output'],
        dict(),
        dict(
            repeats=dict(const_value=[1, 2]),
            output=dict(shape=(2, 2, 4), dtype=np.float32)),
        name='Tile')
    logger.info('Tile program:\n%s', prog)
onnx2fluid/onnx2fluid/torch_export_helper.py
...
@@ -6,115 +6,180 @@ Created on Fri Mar 22 11:22:46 2019

@author: Macrobull
"""
from __future__ import division

import logging
import numpy as np
import torch

from collections import OrderedDict
from typing import (TypeVar, Any, Generic, Iterable, List, Mapping, Optional,
                    Sequence, Text, Tuple, Union)

logger = logging.getLogger(__name__)

__all__ = [
    'export_data',
    'export_onnx_with_validation',
]

my_dict = OrderedDict
KT = TypeVar('KT')
VT = TypeVar('VT')


class MyDict(my_dict, Generic[KT, VT]):
    pass


def ensure_list(obj: Union[object, Sequence[object]]) -> List[object]:
    if isinstance(obj, (list, tuple, set)):
        return list(obj)
    return [obj]


def ensure_tuple(obj: Union[object, Sequence[object]]) -> Tuple[object, ...]:
    if isinstance(obj, (tuple, list, set)):
        return tuple(obj)
    return (obj, )


def flatten_list(obj: List[Union[object, List[object]]],
                 out: Optional[List[object]] = None) -> List[object]:
    assert isinstance(obj, list), 'list type required'

    if out is None:
        out = type(obj)()
    for item in obj:
        if isinstance(item, list):
            flatten_list(item, out)
        else:
            out.append(item)
    return out
def export_data(state_dict: Mapping[Text, Any], prefix: Text = '') -> None:
    """
    export binary data with meta text for raw C++ inference engines
    """

    def str_(obj: object) -> Text:
        if isinstance(obj, (tuple, list, set)):
            return str(obj)[1:-1].replace(' ', '')
        return str(obj)

    prefix_ = prefix + ('_' if prefix else '')
    fp = open('{}.txt'.format(prefix or 'meta'), mode='w')
    for key, value in state_dict.items():
        data = None
        if torch.is_tensor(value):
            data = value.data.cpu().numpy()
        elif isinstance(value, np.ndarray):
            data = value
        if data is not None:
            data.tofile('{}{}.bin'.format(prefix_, key))
            fp.write('{}.dtype={}\n'.format(key, str_(data.dtype.name)))
            fp.write('{}.shape={}\n'.format(key, str_(data.shape)))
        else:
            fp.write('{}={}\n'.format(key, str_(value)))
    fp.close()
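For an ndarray entry the function writes a raw `.bin` file plus `key.dtype=` / `key.shape=` lines in the meta text; everything else is stringified with spaces stripped. A minimal standalone round of that text format, written to a temporary directory (the state dict contents here are made up for illustration):

```python
import os
import tempfile
import numpy as np

state = {'weight': np.zeros((2, 3), dtype=np.float32), 'count': 7}
with tempfile.TemporaryDirectory() as tmp:
    prefix = os.path.join(tmp, 'net')
    with open(prefix + '.txt', 'w') as fp:
        for key, value in state.items():
            if isinstance(value, np.ndarray):
                value.tofile('{}_{}.bin'.format(prefix, key))  # raw bytes, no header
                fp.write('{}.dtype={}\n'.format(key, value.dtype.name))
                # '(2, 3)' -> '2,3': strip parens and spaces like str_ does
                fp.write('{}.shape={}\n'.format(
                    key, str(value.shape)[1:-1].replace(' ', '')))
            else:
                fp.write('{}={}\n'.format(key, value))
    meta = open(prefix + '.txt').read()

assert 'weight.dtype=float32' in meta
assert 'weight.shape=2,3' in meta
assert 'count=7' in meta
```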
def export_onnx_with_validation(
        model: torch.nn.Module,  # or JITScriptModule
        inputs: Sequence[Union[torch.Tensor, Sequence[object]]],
        export_basepath: Text,
        input_names: Optional[List[Text]] = None,
        output_names: Optional[List[Text]] = None,
        use_npz: bool = True,
        *args,
        **kwargs) -> Sequence[Union[torch.Tensor, Sequence[object]]]:
    """
    export PyTorch model to ONNX model and export sample inputs and outputs in a Numpy file
    """

    is_tuple_or_list = lambda x: isinstance(x, (tuple, list))

    def tensors_to_arrays(tensors: Union[torch.Tensor, Iterable[Union[
            torch.Tensor, Iterable[Any]]]], ) -> List[np.ndarray]:
        if torch.is_tensor(tensors):
            return tensors.data.cpu().numpy()
        return list(map(tensors_to_arrays, tensors))

    def zip_dict(
            keys: Optional[Iterable[Any]],
            values: Sequence[Union[Any, Sequence[Any]]],
    ) -> MyDict[Text, Union[object, MyDict[Text, object]]]:
        keys = keys or range(len(values))
        ret = my_dict()
        for idx, (key, value) in enumerate(zip(keys, values)):
            is_key_list = is_tuple_or_list(key)
            is_value_list = is_tuple_or_list(value)
            assert is_key_list == is_value_list, 'keys and values mismatch'
            if is_value_list:
                ret[str(idx)] = zip_dict(key, value)
            else:
                ret[key] = value
        return ret

    torch_inputs = ensure_tuple(inputs)  # WORKAROUND: for torch.onnx
    outputs = torch.onnx.export(
        model,
        torch_inputs,
        export_basepath + '.onnx',
        input_names=(None if input_names is None else
                     flatten_list(input_names)),
        output_names=(None if output_names is None else
                      flatten_list(output_names)),
        *args,
        **kwargs)
    if outputs is None:  # WORKAROUND: for torch.onnx
        training = kwargs.get('training', False)
        with torch.onnx.set_training(model, training):
            outputs = model(*inputs)
    torch_outputs = ensure_tuple(outputs)

    inputs = zip_dict(input_names, tensors_to_arrays(torch_inputs))
    outputs = zip_dict(output_names, tensors_to_arrays(torch_outputs))
    if use_npz:
        np.savez(
            export_basepath + '.npz',
            inputs=inputs,
            outputs=outputs,
        )
    else:
        np.save(
            export_basepath + '.npy',
            np.asarray(my_dict(inputs=inputs, outputs=outputs)),
            allow_pickle=True)

    return torch_outputs
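`zip_dict` pairs (possibly nested) name lists with (possibly nested) value lists, labelling unnamed nesting levels with their index so the golden data ends up as a flat-keyed mapping. A standalone sketch of that pairing, with plain values instead of tensors:

```python
from collections import OrderedDict

def zip_dict(keys, values):
    """Pair names with values; nested lists recurse under a positional key."""
    keys = keys or range(len(values))
    ret = OrderedDict()
    for idx, (key, value) in enumerate(zip(keys, values)):
        is_key_list = isinstance(key, (tuple, list))
        is_value_list = isinstance(value, (tuple, list))
        assert is_key_list == is_value_list, 'keys and values mismatch'
        if is_value_list:
            ret[str(idx)] = zip_dict(key, value)  # recurse, keyed by position
        else:
            ret[key] = value
    return ret

d = zip_dict(['image', ['a', 'b']], [0, [1, 2]])
assert d['image'] == 0
assert d['1'] == OrderedDict(a=1, b=2)
```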
if __name__ == '__main__':
    from torchvision.models import resnet18 as net

    model = net()
    xb = torch.rand((1, 3, 224, 224))
    export_onnx_with_validation(
        model,
        (xb, ),
        '/tmp/export',
        input_names=['image'],
        output_names=['prob'],
        use_npz=True,
    )
onnx2fluid/onnx2fluid/validation.py
...
@@ -8,38 +8,85 @@ Created on Fri Mar 22 12:17:19 2019
import importlib, logging, os, sys

logger = logging.getLogger(__name__)

__all__ = [
    'fluid_prog_shape_infer',
    'validate',
]


def flatten_dict(obj, out=None):
    assert isinstance(obj, dict), 'dict type required'

    if out is None:
        out = type(obj)()
    for key, value in obj.items():
        if isinstance(value, dict):
            flatten_dict(value, out)
        else:
            assert key not in out, 'key conflicted'
            out[key] = value
    return out


def ensure_list(obj):
    if isinstance(obj, (list, tuple, set)):
        return list(obj)
    return [obj]
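`flatten_dict` lifts values out of nested dicts into a single level, asserting when two branches would collide on the same key. A standalone sketch of the behavior:

```python
def flatten_dict(obj, out=None):
    assert isinstance(obj, dict), 'dict type required'
    if out is None:
        out = type(obj)()
    for key, value in obj.items():
        if isinstance(value, dict):
            flatten_dict(value, out)  # merge nested keys into the same output
        else:
            assert key not in out, 'key conflicted'
            out[key] = value
    return out

flat = flatten_dict({'a': 1, 'b': {'c': 2, 'd': {'e': 3}}})
assert flat == {'a': 1, 'c': 2, 'e': 3}
```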
def fluid_prog_shape_infer(prog):
    """
    additional type-shape inference for fluid program
    """

    import paddle.fluid as fluid

    assert isinstance(prog, fluid.framework.Program), 'prog is not a Program instance'

    logger.info('performing type-shape inference ...')
    for block in prog.blocks:
        block_desc = block.desc
        for idx_op in range(block_desc.op_size()):
            op_desc = block_desc.op(idx_op)
            if op_desc.type() in ('feed', 'fetch'):
                continue

            op_desc.infer_var_type(block_desc)
            op_desc.infer_shape(block_desc)

        for var_name, var in block.vars.items():
            var_desc = var.desc
            if var_desc.type() != fluid.core.VarDesc.VarType.LOD_TENSOR:
                continue

            # WORKAROUND: dirty way to give dtype to partial-infered vars
            # which could not be cleared!
            try:
                var.to_string(True)
            except ValueError:
                var_desc.set_dtype(fluid.core.VarDesc.VarType.FP32)
                logger.debug('dtype of var %s not inferred, float32 assumed',
                             var_name)
def validate(fluid_model_filename,
             golden_data_filename='',
             atol=1e-3,
             rtol=1e-3,
             model_func_name='inference',
             save_inference_model=False,
             inference_input_names=None,
             **kwargs):
    """
    inference the converted Paddle fluid model, validate with given golden data
    """

    assert isinstance(fluid_model_filename, str)

    import numpy as np
    import paddle.fluid as fluid
...
@@ -52,12 +99,12 @@ def validate(fluid_model_filename,
    # load model
    fluid_model_dir, basename = os.path.split(fluid_model_filename)
    if basename == '__model__':  # is desc program
        logger.info('using desc file %s', basename)
        prog, _, var_outs = fluid.io.load_inference_model(fluid_model_dir, exe)
        out_names = var_outs  # HINT: pass var if fetch ops already created
        logger.info('model load passed')
    elif basename.endswith('.py'):  # is Python code
        logger.info('using code file %s', basename)
        module_name, _ = os.path.splitext(basename)
        sys_path = sys.path.copy()
        sys.path.append(fluid_model_dir)
@@ -73,74 +120,92 @@ def validate(fluid_model_filename,
...
@@ -73,74 +120,92 @@ def validate(fluid_model_filename,
func
)
func
)
var_outs
=
func
()
var_outs
=
func
()
var_outs
=
_
ensure_list
(
var_outs
)
var_outs
=
ensure_list
(
var_outs
)
out_names
=
[
var
.
name
for
var
in
var_outs
out_names
=
[
var
.
name
for
var
in
var_outs
]
# HINT: pass string to create fetch ops
]
# HINT: pass string to create fetch ops
logger
.
info
(
'import passed'
)
logger
.
info
(
'import passed'
)
prog
=
fluid
.
default_main_program
()
prog
=
fluid
.
default_main_program
()
fluid
.
io
.
load_persistables
(
fluid
.
io
.
load_persistables
(
executor
=
exe
,
executor
=
exe
,
dirname
=
fluid_model_dir
,
main_program
=
prog
)
dirname
=
fluid_model_dir
,
main_program
=
prog
)
logger
.
info
(
'weight load passed'
)
logger
.
info
(
'weight load passed'
)
else
:
else
:
raise
ValueError
(
'unsupported Paddle fluid model filename'
)
raise
ValueError
(
'unsupported Paddle fluid model filename'
)
# load data
# load data
logger
.
info
(
'using golden data %s'
,
golden_data_filename
)
if
golden_data_filename
:
if
golden_data_filename
.
endswith
(
'.npz'
):
logger
.
info
(
'using golden data %s'
,
golden_data_filename
)
test_data
=
np
.
load
(
golden_data_filename
,
encoding
=
'bytes'
)
if
golden_data_filename
.
endswith
(
'.npz'
):
input_data
=
test_data
[
'inputs'
].
tolist
()
test_data
=
np
.
load
(
output_data
=
test_data
[
'outputs'
].
tolist
()
golden_data_filename
,
else
:
encoding
=
'bytes'
,
test_data
=
np
.
load
(
golden_data_filename
,
encoding
=
'bytes'
).
tolist
()
allow_pickle
=
True
,
input_data
=
test_data
[
'inputs'
]
)
output_data
=
test_data
[
'outputs'
]
input_data
=
test_data
[
'inputs'
].
tolist
()
input_data
=
_flatten_dict
(
input_data
)
output_data
=
test_data
[
'outputs'
].
tolist
()
output_data
=
_flatten_dict
(
output_data
)
else
:
logger
.
info
(
'found %d I/O golden data, starting test ...'
,
test_data
=
np
.
load
(
len
(
input_data
)
+
len
(
output_data
))
golden_data_filename
,
encoding
=
'bytes'
,
# DEBUG: reload test for python code
allow_pickle
=
True
,
if
basename
.
endswith
(
'.py'
)
and
save_inference_model
:
).
tolist
()
fluid
.
io
.
save_inference_model
(
input_data
=
test_data
[
'inputs'
]
fluid_model_dir
,
output_data
=
test_data
[
'outputs'
]
input_data
.
keys
(),
var_outs
,
input_data
=
flatten_dict
(
input_data
)
exe
,
output_data
=
flatten_dict
(
output_data
)
main_program
=
prog
,
input_names
=
input_data
.
keys
()
export_for_deployment
=
True
)
# output_names = output_data.keys()
logger
.
info
(
'with %d inputs and %d outputs'
,
len
(
input_data
),
len
(
output_data
))
elif
save_inference_model
:
assert
inference_input_names
is
not
None
,
(
'input names required for type-shape inference'
)
input_names
=
inference_input_names
logger
.
info
(
'using input names: %s'
,
', '
.
join
(
input_names
))
# type-shape inference and re-save
if
save_inference_model
:
fluid_prog_shape_infer
(
prog
)
fluid
.
io
.
save_inference_model
(
fluid_model_dir
,
input_names
,
var_outs
,
exe
,
main_program
=
prog
,
export_for_deployment
=
True
)
logger
.
info
(
'model re-save passed'
)
logger
.
info
(
'model re-save passed'
)
fluid
.
io
.
load_inference_model
(
fluid_model_dir
,
exe
)
fluid
.
io
.
load_inference_model
(
fluid_model_dir
,
exe
)
logger
.
info
(
'model re-load passed'
)
logger
.
info
(
'model re-load passed'
)
if
golden_data_filename
==
''
:
return
True
# execute
# execute
outputs
=
exe
.
run
(
prog
,
feed
=
input_data
,
fetch_list
=
out_names
)
outputs
=
exe
.
run
(
prog
,
feed
=
input_data
,
fetch_list
=
out_names
)
# out_names can be vars
logger
.
info
(
'execution passed'
)
logger
.
info
(
'execution passed'
)
# validate
# validate
passed
=
True
passed
=
True
for
(
name
,
truth
),
output
in
zip
(
output_data
.
items
(),
outputs
):
for
(
name
,
truth
),
output
in
zip
(
output_data
.
items
(),
outputs
):
logger
.
info
(
'testing output {} ...'
.
format
(
name
))
logger
.
info
(
'testing o
n o
utput {} ...'
.
format
(
name
))
try
:
try
:
np
.
testing
.
assert_allclose
(
np
.
testing
.
assert_allclose
(
output
,
output
,
truth
,
truth
,
rtol
=
rtol
,
rtol
=
rtol
,
atol
=
atol
,
atol
=
atol
,
equal_nan
=
False
,
equal_nan
=
False
,
verbose
=
True
)
verbose
=
True
)
except
AssertionError
as
e
:
except
AssertionError
as
e
:
passed
=
False
passed
=
False
logger
.
error
(
'failed: %s
\n
'
,
e
)
logger
.
error
(
'failed: %s
\n
'
,
e
)
if
passed
:
logger
.
info
(
'accuracy %spassed'
,
''
if
passed
else
'not '
)
logger
.
info
(
'accuracy passed'
)
else
:
logger
.
info
(
'accuracy not passed'
)
return
passed
return
passed
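The accuracy check above reduces to pairing each named golden output with the executor's fetched array and comparing them via `np.testing.assert_allclose`. A minimal standalone sketch of that loop (the `compare_outputs` helper name is illustrative, not part of onnx2fluid):

```python
import numpy as np

def compare_outputs(output_data, outputs, rtol=1e-2, atol=1e-4):
    # Pair golden outputs with computed arrays by position, as in validate().
    passed = True
    for (name, truth), output in zip(output_data.items(), outputs):
        try:
            np.testing.assert_allclose(output, truth,
                                       rtol=rtol, atol=atol, equal_nan=False)
        except AssertionError:
            passed = False
    return passed

golden = {'prob': np.array([0.1, 0.9])}
print(compare_outputs(golden, [np.array([0.1, 0.9])]))  # True
print(compare_outputs(golden, [np.array([0.5, 0.5])]))  # False
```

Note that a single failing output does not abort the loop; every output is tested and reported before the overall verdict is returned.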
-if __name__ == '__main__':
+def main():
     import argparse

     parser = argparse.ArgumentParser(
 ...
@@ -162,6 +227,7 @@ if __name__ == '__main__':
         '--test_data',
         '-t',
         type=str,
+        default='',
         help='I/O golden data for validation, e.g. test.npy, test.npz',
     )
     parser.add_argument(
 ...
@@ -174,23 +240,39 @@ if __name__ == '__main__':
     parser.add_argument(
         '--rtol',
         type=float,
-        default=1e-4,
+        default=1e-2,
         help='assertion relative tolerance for validation',
     )
+    parser.add_argument(
+        '--infer_inputs',
+        '-i',
+        nargs='?',
+        default=None,
+        const='',
+        help='perform type-shape inference with given input names and re-save model',
+    )
     args = parser.parse_args()

     logging_format = '[%(levelname)8s]%(name)s::%(funcName)s:%(lineno)04d: %(message)s'
     logging_level = logging.DEBUG if args.debug else logging.INFO
     logging.basicConfig(format=logging_format, level=logging_level)

-    debug = args.debug
+    # debug = args.debug
     fluid_model_filename = args.model[0]
     golden_data_filename = args.test_data
     atol, rtol = args.atol, args.rtol
+    save_inference_model = args.infer_inputs is not None
+    inference_input_names = args.infer_inputs.split(
+        ',') if args.infer_inputs else None

     validate(fluid_model_filename,
-             golden_data_filename,
+             golden_data_filename=golden_data_filename,
              atol=atol,
              rtol=rtol,
-             save_inference_model=debug)
+             save_inference_model=save_inference_model,
+             inference_input_names=inference_input_names)
+
+
+if __name__ == '__main__':
+    main()
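The new `--infer_inputs`/`-i` option relies on argparse's `nargs='?'` with both `default` and `const`: the flag being absent, bare, or given a value produces three distinct states, which the code then maps to `save_inference_model` and `inference_input_names`. A self-contained sketch of that behavior:

```python
import argparse

# Mirrors the --infer_inputs/-i option added above: nargs='?' with
# default=None and const='' distinguishes three invocation forms.
parser = argparse.ArgumentParser()
parser.add_argument('--infer_inputs', '-i', nargs='?', default=None, const='')

print(parser.parse_args([]).infer_inputs)             # None: no inference/re-save
print(parser.parse_args(['-i']).infer_inputs)         # '':   infer, no explicit names
print(parser.parse_args(['-i', 'x,y']).infer_inputs)  # 'x,y': infer with named inputs

args = parser.parse_args(['-i', 'x,y'])
save_inference_model = args.infer_inputs is not None
inference_input_names = args.infer_inputs.split(',') if args.infer_inputs else None
```

The truthiness test on `args.infer_inputs` is what keeps the bare-flag case (`''`) from producing a bogus `['']` name list.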
onnx2fluid/onnx2fluid/writer.py

 ...
@@ -11,10 +11,11 @@ from __future__ import division
 import logging, os
 import numpy as np

+from collections import OrderedDict as Dict
+
 logger = logging.getLogger(__name__)

 from . import symbolic
+from .symbolic import _make_var_name as make_var_name

 try:
     import paddle.fluid.proto.framework_pb2 as framework_pb2
 ...
@@ -30,7 +31,7 @@ __all__ = [
 ]


-def _irepr(obj, to='_'):
+def irepr(obj, to='_'):
     """inline repr"""

     s = repr(obj)
 ...
@@ -41,12 +42,14 @@ def _irepr(obj, to='_'):
     return s


-def _flatten_list(obj, out=None):
+def flatten_list(obj, out=None):
+    assert isinstance(obj, list), 'list type required'
+
     if out is None:
         out = type(obj)()
     for item in obj:
         if isinstance(item, list):
-            _flatten_list(item, out)
+            flatten_list(item, out)
         else:
             out.append(item)
     return out
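The renamed `flatten_list` helper above does a depth-first, order-preserving flatten of arbitrarily nested Python lists, and now rejects non-list input up front. Exercised standalone:

```python
def flatten_list(obj, out=None):
    # Same logic as the helper above: depth-first, order-preserving flatten.
    assert isinstance(obj, list), 'list type required'
    if out is None:
        out = type(obj)()
    for item in obj:
        if isinstance(item, list):
            flatten_list(item, out)
        else:
            out.append(item)
    return out

print(flatten_list(['a', ['b', ['c']], 'd']))  # ['a', 'b', 'c', 'd']
```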
 ...
@@ -57,9 +60,9 @@ def make_attr_name(name):
     make a valid code name for ParamAttr
     """

-    if name == '':
-        raise ValueError('name should not be empty')
-    for s in ' *?\\/-:':  #
+    assert name != '', 'name should not be empty'
+
+    for s in ' \\|/:.-':  #
         name = name.replace(s, '_')
     if not name.startswith('_'):
         name = '_' + name
 ...
@@ -93,7 +96,7 @@ class Program(object):
         return Program.DTYPE_TO_FRAMEWORK_DTYPE[dtype]

     @staticmethod
-    def OpDescVars(vals, *keys):
+    def OpDescVars(keys, vals):
         """
         make (OpDesc.Var)s
         """
 ...
@@ -130,8 +133,8 @@ class Program(object):
             od_attr.type = framework_pb2.STRING
             od_attr.s = value
         elif isinstance(value, list):
-            if len(value) > 0:
-                if isinstance(value,
+            if value:  # TODO: test all items
+                if isinstance(value[0],
                               bool):  # bool.mro() = [bool, int, object]
                     od_attr.type = framework_pb2.BOOLEANS
                     od_attr.bools.extend(value)
 ...
@@ -147,13 +150,11 @@ class Program(object):
             else:
                 raise ValueError('unsupported attribute {} = {}'.format(
                     key, value))
-        else:  # WORKAROUND: shape of scalars is []
-            raise ValueError('unsupported attribute {} = {}'.format(
-                key, value))
-            # od_attr.type = framework_pb2.INTS
-            # logger.warning('using attribute %s = %s as INTS', key, value)
+        else:  # WORKAROUND: [] not inferred
+            # raise ValueError('unsupported attribute {} = {}'.format(key, value))
+            od_attr.type = framework_pb2.INTS
+            logger.warning('using attribute %s = %s as INTS', key, value)
     else:
         raise ValueError('unsupported attribute {} = {}'.format(
             key, value))
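The `bool.mro() = [bool, int, object]` comment above marks a real pitfall in the attribute dispatch: since `bool` subclasses `int`, the `isinstance` check for booleans must run before the one for integers. A simplified stand-in for that chain (string tags here replace the `framework_pb2` enum values):

```python
def attr_proto_type(value):
    # bool check first: isinstance(True, int) is also True,
    # so testing int first would misclassify booleans.
    if isinstance(value, bool):
        return 'BOOLEAN'
    if isinstance(value, int):
        return 'INT'
    if isinstance(value, float):
        return 'FLOAT'
    if isinstance(value, str):
        return 'STRING'
    if isinstance(value, list) and value:
        return attr_proto_type(value[0]) + 'S'  # e.g. INTS, BOOLEANS
    raise ValueError('unsupported attribute {}'.format(value))

print(attr_proto_type(True), attr_proto_type(7), attr_proto_type([1, 2]))
# BOOLEAN INT INTS
```

As in the diff, list-typed attributes are classified by their first element only.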
 ...
@@ -164,14 +165,15 @@ class Program(object):
         self.code_mutable = True
         self.codes = []
         self.op_descs = []
-        self.var_descs = []
+        self.var_descs = Dict()

     def __repr__(self):
         return ('Program(code mutable: {}) with:\n'
                 'codes: {}\n'
                 'op_descs: {}\n'
-                'var_descs: {}\n').format(self.code_mutable, self.codes,
-                                          self.op_descs, self.var_descs)
+                'var_descs: {}\n').format(self.code_mutable, self.codes,
+                                          self.op_descs,
+                                          list(self.var_descs.values()))

     def Code(self, code):
         """
 ...
@@ -181,23 +183,16 @@ class Program(object):
         if self.code_mutable:
             self.codes.append(code)

-    def OpDesc(self, name, input_key_vals, output_key_vals, attrs):
+    def OpDesc(self,
+               op_type,
+               input_val_keys=None,
+               output_val_keys=None,
+               attrs=None):
         """
         add OpDesc
         """

         desc = framework_pb2.OpDesc()
-        desc.type = name
-        desc.inputs.extend(self.OpDescVars(*input_key_vals))
-        desc.outputs.extend(self.OpDescVars(*output_key_vals))
-        desc.attrs.extend(self.OpDescAttrs(attrs))
+        desc.type = op_type
+        if input_val_keys is not None:
+            desc.inputs.extend(self.OpDescVars(*input_val_keys))
+        if output_val_keys is not None:
+            desc.outputs.extend(self.OpDescVars(*output_val_keys))
+        if attrs is not None:
+            desc.attrs.extend(self.OpDescAttrs(attrs))
         self.op_descs.append(desc)
         return desc
 ...
@@ -210,26 +205,18 @@ class Program(object):
         add VarDesc,
         """

+        assert name not in self.var_descs, 'var name {} conflicts'.format(name)
+
         var_desc = framework_pb2.VarDesc()
         var_desc.name = name
         var_desc.persistable = persistable
         var_desc.type.type = framework_pb2.VarType.LOD_TENSOR
-        if value_info and 'dtype' in value_info:
-            tensor_desc = var_desc.type.lod_tensor.tensor
-            tensor_desc.data_type = self.Dtype(value_info['dtype'])  # required
-            if 'shape' in value_info:
-                tensor_desc.dims.extend(value_info['shape'])
-                if len(value_info['shape']) > 0:  # skip scalars
-                    if remove_batch is None:
-                        remove_batch = value_info.get('remove_batch',
-                                                      not persistable)
-                    if remove_batch:
-                        tensor_desc.dims[0] = -1
-        self.var_descs.append(var_desc)
+        self.var_descs[name] = var_desc
+
+        if value_info is not None:
+            self.VarTypeShapeInfo(name, value_info, remove_batch=remove_batch)

-    def Op(self, domain, op_type, *args, **kwargs):
+    def Op(self, domain, op_type, inputs, outputs, attrs, *args, **kwargs):
         """
         convert an ONNX op and add it to program
         """
 ...
@@ -238,15 +225,17 @@ class Program(object):
             raise ValueError('only default domain supported')

         if op_type in symbolic.DEFAULT_OP_MAPPING:
-            symbolic._default(self, op_type, *args, **kwargs)
+            symbolic._default(self, op_type, inputs, outputs, attrs, *args,
+                              **kwargs)
         elif hasattr(symbolic, op_type):
             fn = getattr(symbolic, op_type)
-            fn(self, *args, **kwargs)
+            fn(self, inputs, outputs, attrs, *args, **kwargs)
         else:
             raise ValueError('conversion for {}::{} not supported'.format(
                 domain, op_type))

-    def IntermediateOp(self, domain, op_type, *args, **kwargs):
+    def IntermediateOp(self, domain, op_type, inputs, outputs, attrs, *args,
+                       **kwargs):
         """
         convert an intermediate ONNX op declaring in desc program only
         """
 ...
@@ -254,20 +243,47 @@ class Program(object):
         code_mutable = self.code_mutable
         self.code_mutable = False
         try:
-            self.Op(domain, op_type, *args, **kwargs)
+            self.Op(domain, op_type, inputs, outputs, attrs, *args, **kwargs)
         except BaseException as e:
             self.code_mutable = code_mutable
             raise e
         else:
             self.code_mutable = code_mutable

+    def VarTypeShapeInfo(self, name, value_info, remove_batch=None):
+        """
+        set value_info for var
+        """
+
+        if name not in self.var_descs:
+            return
+
+        dtype = value_info.get('dtype', None)
+        if dtype is None:
+            return
+
+        var_desc = self.var_descs[name]
+        tensor_desc = var_desc.type.lod_tensor.tensor
+        tensor_desc.data_type = self.Dtype(dtype)  # required
+
+        shape = value_info.get('shape', None)
+        if not shape:  # None or scalars
+            return
+
+        tensor_desc.dims.extend(shape)
+        if remove_batch is None:
+            remove_batch = value_info.get('remove_batch', False)  #not persistable)
+        if remove_batch:
+            tensor_desc.dims[0] = -1
+

 class Writer(object):
     """
     fluid code and desc writter
     """

-    CODE_INDENT = ' ' * 4
+    CODE_INDENT = ' ' * 4  # '\t'

     @staticmethod
     def header_code(func_name, info=''):
 ...
@@ -275,7 +291,7 @@ class Writer(object):
         Python header codes
         """

-        codes = list()
+        codes = []
         codes.append('"""')
         codes.append('This code is generated by onnx2fluid.')
         codes.append('{}'.format(info))
 ...
@@ -287,6 +303,7 @@ class Writer(object):
         codes.append('from paddle.fluid import initializer, layers')
         codes.append('')
         codes.append('')
+        codes.append('')
         codes.append('def {}():'.format(func_name))
         return codes
 ...
@@ -299,17 +316,16 @@ class Writer(object):
         prog.Code('# {}, {}::{}: {} -> {}, {}'.format(name, domain, op_type,
                                                       inputs, outputs,
-                                                      _irepr(attrs, to=', ')))
+                                                      irepr(attrs, to=', ')))
         prog.Op(domain,
                 op_type,
                 inputs,
                 outputs,
                 attrs,
                 value_infos=value_infos,
                 name=name,
                 *args,
                 **kwargs)

     @staticmethod
     def emit_param(prog, name, value_info):
 ...
@@ -317,24 +333,26 @@ class Writer(object):
         emit an ONNX weight into program
         """

-        if value_info.get('embeded_as', []):
-            var_names = value_info['embeded_as']
-            prog.Code('# parameter {} embeded as {}'.format(name, var_names))
-            for var_name in var_names:
-                prog.VarDesc(var_name, persistable=True, value_info=value_info)
+        embedded_names = value_info.get('embedded_as', [])
+        if embedded_names:
+            prog.Code('# parameter {} embedded as {}'.format(
+                name, embedded_names))
+            for embedded_name in embedded_names:
+                prog.VarDesc(embedded_name,
+                             persistable=True,
+                             value_info=value_info)
         else:
-            var_name = make_var_name(name)
             attr_name = make_attr_name(name)
-            prog.Code('# parameter {}: {}'.format(name, var_name))
+            prog.Code('# parameter {}'.format(name))
             prog.Code('{} = ParamAttr(name={})'  # , trainable=True
-                      .format(attr_name, repr(var_name)))
+                      .format(attr_name, repr(name)))
             prog.Code(
                 '{} = layers.create_parameter(shape={}, dtype={}, name={}, attr={}'
                 ', default_initializer=initializer.Constant(0))'  #, is_bias={}
-                .format(var_name, value_info['shape'],
+                .format(name, value_info['shape'],
                         repr(value_info['dtype'].name), repr(name),
                         attr_name))  #, value_info.get('is_bias', False)))
-            prog.VarDesc(var_name, persistable=True, value_info=value_info)
+            prog.VarDesc(name, persistable=True, value_info=value_info)
 ...
@@ -343,7 +361,6 @@ class Writer(object):
         """

         for idx, name in enumerate(names):
-            var_name = make_var_name(name)
             value_info = value_infos[name]
             shape = value_info['shape']
             if remove_batch is None:
 ...
@@ -352,25 +369,24 @@ class Writer(object):
             if remove_batch:
                 shape = shape[1:]

-            prog.Code('# input {}: {}'.format(name, var_name))
+            prog.Code('# input {}'.format(name))
             prog.Code(('{} = layers.data(name={}, shape={}, dtype={}, '
                        'append_batch_size={})'  # , stop_gradient=True
                        ).format(
-                           var_name,
-                           repr(var_name),
+                           name,
+                           repr(name),
                            shape,
                            repr(value_info['dtype'].name),
                            remove_batch,
                        ))
             prog.OpDesc(
                 'feed',
-                (['feed'], 'X'),
-                ([var_name], 'Out'),
-                dict(col=idx),
+                (['X'], ['feed']),
+                (['Out'], [name]),
+                {'col': idx},
             )
-            prog.VarDesc(var_name,
+            prog.VarDesc(name,
                          value_info=value_info,
                          remove_batch=remove_batch)
 ...
@@ -380,14 +396,13 @@ class Writer(object):
         code = 'return '
         for idx, name in enumerate(names):
-            var_name = make_var_name(name)
-            code += var_name + ', '
+            code += name + ', '
             prog.OpDesc(
                 'fetch',
-                ([var_name], 'X'),
-                (['fetch'], 'Out'),
-                dict(col=idx),
+                (['X'], [name]),
+                (['Out'], ['fetch']),
+                {'col': idx},
             )
             # var is emitted over ops
         prog.Code(code)
 ...
@@ -398,18 +413,22 @@ class Writer(object):
         flatten codes in program
         """

-        for code in _flatten_list(others):
+        for code in flatten_list(others):
             codes.append(Writer.CODE_INDENT * indent + code)
         return codes

     @staticmethod
-    def write_weight(weight, filename):
+    def write_weight(weight, filename, lod=None):
         """
         write single weight in fluid desc
         """

-        if not isinstance(weight, np.ndarray):
-            raise TypeError('weight is not an ndarray')
+        assert isinstance(weight, np.ndarray), 'weight is not an ndarray'
+        assert lod is None or isinstance(lod, list), 'lod should be None or list'
+
+        if lod is None:
+            lod = [0]

         tensor_desc = framework_pb2.VarType.TensorDesc()
         tensor_desc.data_type = Program.Dtype(weight.dtype)
 ...
@@ -417,7 +436,7 @@ class Writer(object):
         fp = open(filename, 'wb')
         np.array([0], dtype=np.int32).tofile(fp)  # version
-        np.array([0], dtype=np.int64).tofile(fp)  # LOD level
+        np.array(lod, dtype=np.int64).tofile(fp)  # LOD level
         np.array([0], dtype=np.int32).tofile(fp)  # tensor version
         np.array([tensor_desc.ByteSize()], dtype=np.int32).tofile(fp)
         fp.write(tensor_desc.SerializeToString())
 ...
@@ -431,11 +450,9 @@ class Writer(object):
         """

         for name, weight in weights.items():
-            if not isinstance(weights, dict):
-                raise TypeError('dict type weights required')
-            filename = os.path.join(save_dir, name)
+            assert isinstance(weights, dict), 'dict type weights required'
+
+            var_name = make_var_name(name)
+            filename = os.path.join(save_dir, var_name)
             Writer.write_weight(weight, filename)
             logger.debug('saved weight %s to %s', name, filename)
 ...
@@ -451,7 +468,7 @@ class Writer(object):
         Writer.add_codes(codes, body_code, 1)

         fp = open(filename, 'w')
-        for code in _flatten_list(codes):
+        for code in flatten_list(codes):
             fp.write(code)
             fp.write('\n')
         fp.close()
 ...
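The `write_weight` method above serializes each tensor with a small binary header ahead of the raw data, and the change threads an optional LOD list into that header. A struct-based re-expression of the same byte layout (`desc_bytes` is a placeholder for the serialized `framework_pb2.VarType.TensorDesc`, since protobuf is not needed to show the framing):

```python
import io
import struct

def write_weight_header(fp, desc_bytes, lod=None):
    # Little-endian framing matching the np.tofile calls above:
    # int32 file version, int64 LOD entries, int32 tensor version,
    # int32 desc size, then the serialized TensorDesc bytes.
    if lod is None:
        lod = [0]
    fp.write(struct.pack('<i', 0))                        # version
    fp.write(struct.pack('<{}q'.format(len(lod)), *lod))  # LOD level
    fp.write(struct.pack('<i', 0))                        # tensor version
    fp.write(struct.pack('<i', len(desc_bytes)))          # desc size
    fp.write(desc_bytes)                                  # TensorDesc stand-in

buf = io.BytesIO()
write_weight_header(buf, b'\x08\x05')
print(len(buf.getvalue()))  # 22 = 4 + 8 + 4 + 4 + 2
```

The raw tensor bytes would follow immediately after this header in the real weight file.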
onnx2fluid/requirements.txt

 -e .
 onnx>=1.4
-paddlepaddle
+paddlepaddle>=1.5
onnx2fluid/setup.cfg

 ...
@@ -19,13 +19,13 @@ license = MIT
 # Choose matching entries from the official PyPI classifier list
 # https://pypi.org/pypi?%3Aaction=list_classifiers
 classifier =
     Private :: Do Not Upload
     Programming Language :: Python
     Programming Language :: Python :: 3
     Programming Language :: Python :: 3.5

 # Keywords for indexing, so users can find the project more easily
 keywords =
     onnx paddlepaddle

 [options]
 # Package name; find: enables auto-discovery, configurable in options.packages.find
 ...
@@ -34,7 +34,7 @@ packages = find:
 # One dependency per line; list direct dependencies only, indirect ones usually need no entry
 # Version constraints here should stay loose: a minimum version and a major version usually suffice
 install_requires =
     onnx >= 1.4
 # Test dependencies: extra libraries needed for testing, same format as install_requires
 # The built-in unittest works, as do simpler frameworks such as pytest or nose
 ...
@@ -53,7 +53,9 @@ zip_safe = True
 # The following config turns the given functions into command-line tools
 [options.entry_points]
 console_scripts =
     onnx2fluid = onnx2fluid.__main__
+    onnx2fluid_convert = onnx2fluid.conversion:main
+    onnx2fluid_validate = onnx2fluid.validation:main
 # The following config adds non-.py files (conf, data, ...) to the package; they are installed into site-packages
 # Only files are supported, not directories, but wildcards may be used
 ...
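The two new `console_scripts` lines use the setuptools `name = module:function` entry-point form: on install, pip generates a wrapper executable that imports the module and calls the named function. A tiny parser showing just the spec's shape (`split_entry_point` is a hypothetical illustration, not a setuptools API):

```python
def split_entry_point(spec):
    # 'onnx2fluid_validate = onnx2fluid.validation:main'
    #   -> ('onnx2fluid_validate', 'onnx2fluid.validation', 'main')
    name, _, target = (part.strip() for part in spec.partition('='))
    module, _, func = target.partition(':')
    return name, module, func or None

print(split_entry_point('onnx2fluid_validate = onnx2fluid.validation:main'))
```

The function part is optional, which is why the pre-existing `onnx2fluid = onnx2fluid.__main__` entry names only a module.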