PaddlePaddle / X2Paddle — commit 47631ae4 (unverified)

Merge pull request #19 from MacroBull/master

Get started with onnx_model_zoo.sh and gen_some_samples.py

Authored by Jason on Apr 02, 2019; committed via GitHub on Apr 02, 2019.
Parents: 98141b33, f484a779

Showing 10 changed files with 115 additions and 64 deletions (+115 −64):

* onnx2fluid/README.md (+36 −34)
* onnx2fluid/examples/convert_data_npz_0.py (+6 −0)
* onnx2fluid/examples/convert_data_pb_0.py (+9 −2)
* onnx2fluid/examples/gen_some_samples.py (+22 −9)
* onnx2fluid/examples/onnx_model_zoo.sh (+20 −8)
* onnx2fluid/onnx2fluid/cmdline.py (+3 −2)
* onnx2fluid/onnx2fluid/symbolic.py (+15 −5)
* onnx2fluid/onnx2fluid/validation.py (+1 −1)
* onnx2fluid/onnx2fluid/writer.py (+1 −1)
* onnx2fluid/setup.cfg (+2 −2)
onnx2fluid/README.md (+36 −34)

 # onnx2fluid

 [![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)

-onnx2fluid supports converting ONNX models to PaddlePaddle models for inference; users can also export a PyTorch model to ONNX format and then use onnx2fluid to convert it to a PaddlePaddle model
+onnx2fluid supports converting ONNX models to PaddlePaddle models for inference; users can also export a PyTorch model to ONNX format and then use onnx2fluid to convert it to a PaddlePaddle model.

 ## Installation

-During development we tested model conversion in the following environment; [anaconda](https://docs.anaconda.com/anaconda/install) is recommended:
-> python2 & python3
-> onnx == 1.4.0
-> paddlepaddle == 1.3.0
+During development we tested model conversion in the following environment:
+
+* python3.5+ (python2 working in progress)
+* onnx == 1.4.0
+* paddlepaddle == 1.3.0
+
+[anaconda](https://docs.anaconda.com/anaconda/install) is recommended:

 ```shell
 # install onnx
@@ -20,36 +20,38 @@ onnx2fluid supports converting ONNX models to PaddlePaddle models ...
 conda install -c conda-forge onnx
 ```

-## Usage
-
-```shell
-# run from the X2Paddle/onnx2fluid directory
-python -m onnx2fluid -e -o /path/to/export/model /path/of/onnx/model
-
-# after installing as follows, the command above is no longer tied to that directory
-```
-
-**Converting VGG19**
-```shell
-# download and extract the vgg19 ONNX model
-wget https://s3.amazonaws.com/download.onnx/models/opset_9/vgg19.tar.gz
-tar xzvf vgg19.tar.gz
-# convert to a PaddlePaddle model
-python -m onnx2fluid -e -o paddle_model vgg19/model.onnx
-```
-
-For loading the converted PaddlePaddle model, see the documentation on [loading an inference model](http://www.paddlepaddle.org/documentation/docs/zh/1.3/api_guides/low_level/inference.html#id4)
+## Get started
+
+Test with pretrained models from ONNX repositories:
+
+```shell
+python setup.py install
+cd examples
+sh onnx_model_zoo.sh
+```
+
+Try exporting from PyTorch to Paddle fluid:
+
+```shell
+python setup.py install
+cd examples
+python gen_some_samples.py
+onnx2fluid sample_1.onnx -t sample_1.npz
+```
+
+## Tested models
+
+onnx2fluid has been tested on the following models:
+
+[bvlc_alexnet](https://s3.amazonaws.com/download.onnx/models/opset_9/bvlc_alexnet.tar.gz)
+[bvlc_googlenet](https://s3.amazonaws.com/download.onnx/models/opset_9/bvlc_googlenet.tar.gz)
+[bvlc_reference_caffenet](https://s3.amazonaws.com/download.onnx/models/opset_9/bvlc_reference_caffenet.tar.gz)
+[bvlc_reference_rcnn_ilsvrc13](https://s3.amazonaws.com/download.onnx/models/opset_9/bvlc_reference_rcnn_ilsvrc13.tar.gz)
+[inception_v1](https://s3.amazonaws.com/download.onnx/models/opset_9/inception_v1.tar.gz)
+[inception_v2](https://s3.amazonaws.com/download.onnx/models/opset_9/inception_v2.tar.gz)
+[resnet50](https://s3.amazonaws.com/download.onnx/models/opset_9/resnet50.tar.gz)
+[shufflenet](https://s3.amazonaws.com/download.onnx/models/opset_9/shufflenet.tar.gz)
+[squeezenet](https://s3.amazonaws.com/download.onnx/models/opset_9/squeezenet.tar.gz)
+[vgg19](https://s3.amazonaws.com/download.onnx/models/opset_9/vgg19.tar.gz)
+[zfnet512](https://s3.amazonaws.com/download.onnx/models/opset_9/zfnet512.tar.gz)
+
+## Usage
+
+```shell
+onnx2fluid [-dexy] -o /path/to/export_dir/ /path/of/onnx/model.onnx
+
+optional arguments:
+  --embed_params, -e    try to embed parameters for trainable Paddle fluid layers
+  --no-pedantic, -x     process non-standard ONNX ops
+  --skip-version-conversion, -y
+                        skip ONNX op version conversion, workaround for RumtimeErrors
+  --archive [ARCHIVE], -z [ARCHIVE]
+                        compress outputs to ZIP file if conversion successed
+```
+
+For loading the converted PaddlePaddle model, see the documentation on [loading an inference model](http://www.paddlepaddle.org/documentation/docs/zh/1.3/api_guides/low_level/inference.html#id4)
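For reference, a minimal sketch (not part of this commit) of loading a converted model with the PaddlePaddle 1.3 fluid inference API that the README links to; the `paddle_model` directory name and the input shape are assumptions for illustration:

```python
import numpy as np
import paddle.fluid as fluid

place = fluid.CPUPlace()
exe = fluid.Executor(place)

# load the program exported by onnx2fluid; 'paddle_model' is an assumed output dir
program, feed_names, fetch_targets = fluid.io.load_inference_model(
    dirname='paddle_model', executor=exe)

x = np.random.rand(1, 3, 224, 224).astype('float32')  # dummy VGG-style input
outputs = exe.run(program, feed={feed_names[0]: x}, fetch_list=fetch_targets)
print([o.shape for o in outputs])
```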
onnx2fluid/examples/convert_data_npz_0.py (+6 −0)

@@ -31,11 +31,17 @@ def _make_var_name(name):
 fn = sys.argv[1]
 input_names = sys.argv[2].split(':')
 output_name = sys.argv[3].split(':')
+squeeze_data = len(sys.argv) > 4

 data = np.load(fn, encoding='bytes')
 input_data = data['inputs']
 output_data = data['outputs']
+while squeeze_data and input_data.ndim > 4 and input_data.shape[0] == 1:
+    input_data = input_data.squeeze(0)
+while squeeze_data and output_data.ndim > 2 and output_data.shape[0] == 1:
+    output_data = output_data.squeeze(0)

 inputs = Dict(zip(map(_make_var_name, input_names), [input_data]))
 outputs = Dict(zip(map(_make_var_name, output_name), [output_data]))
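For context on the new squeeze behaviour: the script reads an `.npz` golden-data file with `inputs` and `outputs` arrays, and a fourth command-line argument (the `-s` passed by onnx_model_zoo.sh) merely flips `squeeze_data`. A small sketch, with hypothetical shapes, of what the added loops strip:

```python
import numpy as np

inputs = np.random.rand(1, 1, 3, 224, 224).astype('float32')  # ndim > 4, leading batch dim of 1
outputs = np.random.rand(1, 1000).astype('float32')           # ndim == 2, left untouched
np.savez('sample_golden.npz', inputs=inputs, outputs=outputs)

data = np.load('sample_golden.npz')
x = data['inputs']
while x.ndim > 4 and x.shape[0] == 1:  # same rule the updated script applies when squeeze_data is set
    x = x.squeeze(0)
print(x.shape)  # (1, 3, 224, 224)
```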
onnx2fluid/examples/convert_data_pb_0.py (+9 −2)

@@ -36,6 +36,7 @@ def _make_var_name(name):
 data_dir = os.path.dirname(sys.argv[1])
 input_names = sys.argv[2].split(':')
 output_name = sys.argv[3].split(':')
+squeeze_data = len(sys.argv) > 4

 # Load inputs
 inputs = []
@@ -43,7 +44,10 @@ for fn in glob(os.path.join(data_dir, 'input_*.pb')):
     tensor = onnx.TensorProto()
     with open(fn, 'rb') as f:
         tensor.ParseFromString(f.read())
-    inputs.append(numpy_helper.to_array(tensor))
+    tensor = numpy_helper.to_array(tensor)
+    while squeeze_data and tensor.ndim > 4 and tensor.shape[0] == 1:
+        tensor = tensor.squeeze(0)
+    inputs.append(tensor)

 # Load outputs
 outputs = []
@@ -51,7 +55,10 @@ for fn in glob(os.path.join(data_dir, 'output_*.pb')):
     tensor = onnx.TensorProto()
     with open(fn, 'rb') as f:
         tensor.ParseFromString(f.read())
-    outputs.append(numpy_helper.to_array(tensor))
+    tensor = numpy_helper.to_array(tensor)
+    while squeeze_data and tensor.ndim > 2 and tensor.shape[0] == 1:
+        tensor = tensor.squeeze(0)
+    outputs.append(tensor)

 inputs = Dict(zip(map(_make_var_name, input_names), inputs))
 outputs = Dict(zip(map(_make_var_name, output_name), outputs))
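For orientation, a minimal sketch of the `TensorProto` / numpy round trip this script relies on, using the standard `onnx.numpy_helper` API; the file name is made up:

```python
import numpy as np
import onnx
from onnx import numpy_helper

arr = np.arange(6, dtype=np.float32).reshape(1, 2, 3)

# serialize a tensor to a .pb file
with open('input_0.pb', 'wb') as f:
    f.write(numpy_helper.from_array(arr).SerializeToString())

# read it back the same way convert_data_pb_0.py does
tensor = onnx.TensorProto()
with open('input_0.pb', 'rb') as f:
    tensor.ParseFromString(f.read())
restored = numpy_helper.to_array(tensor)

assert np.array_equal(arr, restored)
```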
onnx2fluid/examples/gen_some_samples.py (+22 −9)

@@ -18,6 +18,7 @@ import torch.nn.functional as F
 from onnx2fluid.torch_export_helper import export_onnx_with_validation

+prefix = 'sample_'
 idx = 0

 ######### example: RNN ########
@@ -38,7 +39,7 @@ idx = 0
 #yp = model(xb)
 #idx += 1
 #print('index: ', idx)
-#export_onnx_with_validation(model, (xb, ), 't' + str(idx),
+#export_onnx_with_validation(model, (xb, ), prefix + str(idx),
 #                            ['x'], ['y'],
 #                            verbose=True, training=False)
@@ -59,7 +60,7 @@ idx = 0
 #yp = model(xb)
 #idx += 1
 #print('index: ', idx)
-#export_onnx_with_validation(model, (xb, ), 't' + str(idx),
+#export_onnx_with_validation(model, (xb, ), prefix + str(idx),
 #                            ['x'], ['y'],
 #                            verbose=True, training=False)
@@ -83,7 +84,10 @@ yp = model(xb)
 idx += 1
 print('index: ', idx)
-export_onnx_with_validation(model, (xb, ), 't' + str(idx), ['x'], ['y'],
-                            verbose=True, training=False)
+export_onnx_with_validation(
+    model, (xb, ),
+    prefix + str(idx), ['x'], ['y'],
+    verbose=True,
+    training=False)

 ######## example: compare ########
@@ -108,7 +112,7 @@ idx += 1
 print('index: ', idx)
 export_onnx_with_validation(
-    model, (xb0, xb1), 't' + str(idx), ['x0', 'x1'], ['ya', 'yb', 'yc'],
+    model, (xb0, xb1), prefix + str(idx), ['x0', 'x1'], ['ya', 'yb', 'yc'],
     verbose=True, training=False)
@@ -131,7 +135,7 @@ idx += 1
 print('index: ', idx)
 export_onnx_with_validation(
-    model, (theta, ), 't' + str(idx), ['theta'], ['grid'],
+    model, (theta, ), prefix + str(idx), ['theta'], ['grid'],
     verbose=True, training=False)
@@ -157,7 +161,10 @@ yp = model(xb)
 idx += 1
 print('index: ', idx)
-export_onnx_with_validation(model, (xb, ), 't' + str(idx), ['x'], ['y'],
-                            verbose=True, training=False)
+export_onnx_with_validation(
+    model, (xb, ),
+    prefix + str(idx), ['x'], ['y'],
+    verbose=True,
+    training=False)

 ######## example: conv2d ########
@@ -183,7 +190,10 @@ yp = model(xb)
 idx += 1
 print('index: ', idx)
-export_onnx_with_validation(model, (xb, ), 't' + str(idx), ['x'], ['y'],
-                            verbose=True, training=False)
+export_onnx_with_validation(
+    model, (xb, ),
+    prefix + str(idx), ['x'], ['y'],
+    verbose=True,
+    training=False)

 ######### example: conv1d ########
 #
@@ -203,7 +213,7 @@ export_onnx_with_validation(
 #yp = model(xb)
 #idx += 1
 #print('index: ', idx)
-#export_onnx_with_validation(model, (xb, ), 't' + str(idx),
+#export_onnx_with_validation(model, (xb, ), prefix + str(idx),
 #                            ['x'], ['y'],
 #                            verbose=True, training=False)
@@ -224,4 +234,7 @@ yp = model(xb)
 idx += 1
 print('index: ', idx)
-export_onnx_with_validation(model, (xb, ), 't' + str(idx), ['y'], ['y'],
-                            verbose=True, training=False)
+export_onnx_with_validation(
+    model, (xb, ),
+    prefix + str(idx), ['y'], ['y'],
+    verbose=True,
+    training=False)
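`export_onnx_with_validation` is the repository's own helper; as a rough sketch of what such an export boils down to, a plain `torch.onnx.export` call with the same prefix-based naming, on a made-up toy model:

```python
import torch
import torch.nn as nn

prefix, idx = 'sample_', 1
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
xb = torch.rand(1, 3, 32, 32)

# writes e.g. sample_1.onnx with named inputs/outputs, mirroring gen_some_samples.py
torch.onnx.export(model, (xb, ), prefix + str(idx) + '.onnx',
                  input_names=['x'], output_names=['y'], verbose=True)
```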
onnx2fluid/examples/onnx_model_zoo.sh (+20 −8)

@@ -18,6 +18,7 @@ bvlc_alexnet()
 fn_model="$bn_tar/model.onnx"
 http_get "$base_url$fn_tar"
+rm -rf "$bn_tar/"
 echo "extracting ..."
 tar xf "$fn_tar"
@@ -25,7 +26,7 @@ bvlc_alexnet()
 for npz in "$bn_tar"/*.npz
 do
     echo "converting $npz ..."
-    python convert_data_npz_0.py "$npz" data_0 prob_1
+    python convert_data_npz_0.py "$npz" data_0 prob_1 -s
     python -m onnx2fluid.validation $validate_flags1 -t "$npz"
     python -m onnx2fluid.validation $validate_flags2 -t "$npz"
 done
@@ -45,6 +46,7 @@ bvlc_googlenet()
 fn_model="$bn_tar/model.onnx"
 http_get "$base_url$fn_tar"
+rm -rf "$bn_tar/"
 echo "extracting ..."
 tar xf "$fn_tar"
@@ -65,6 +67,7 @@ bvlc_reference_caffenet()
 fn_model="$bn_tar/model.onnx"
 http_get "$base_url$fn_tar"
+rm -rf "$bn_tar/"
 echo "extracting ..."
 tar xf "$fn_tar"
@@ -85,6 +88,7 @@ bvlc_reference_rcnn_ilsvrc13()
 fn_model="$bn_tar/model.onnx"
 http_get "$base_url$fn_tar"
+rm -rf "$bn_tar/"
 echo "extracting ..."
 tar xf "$fn_tar"
@@ -93,8 +97,8 @@ bvlc_reference_rcnn_ilsvrc13()
 do
     echo "converting $pb_dir"
     python convert_data_pb_0.py "$pb_dir" data_0 fc-rcnn_1
-    python -m onnx2fluid.validation $validate_flags1 -t $(dirname "$pb_dir/x").npz
-    python -m onnx2fluid.validation $validate_flags2 -t $(dirname "$pb_dir/x").npz
+    python -m onnx2fluid.validation $validate_flags1 -t $(dirname "$pb_dir/x").npz -p 0
+    python -m onnx2fluid.validation $validate_flags2 -t $(dirname "$pb_dir/x").npz -p 0
 done
 }
@@ -105,6 +109,7 @@ inception_v1()
 fn_model="$bn_tar/model.onnx"
 http_get "$base_url$fn_tar"
+rm -rf "$bn_tar/"
 echo "extracting ..."
 tar xf "$fn_tar"
@@ -112,7 +117,7 @@ inception_v1()
 for npz in "$bn_tar"/*.npz
 do
     echo "converting $npz ..."
-    python convert_data_npz_0.py "$npz" data_0 prob_1
+    python convert_data_npz_0.py "$npz" data_0 prob_1 -s
     python -m onnx2fluid.validation $validate_flags1 -t "$npz"
     python -m onnx2fluid.validation $validate_flags2 -t "$npz"
 done
@@ -132,6 +137,7 @@ inception_v2()
 fn_model="$bn_tar/model.onnx"
 http_get "$base_url$fn_tar"
+rm -rf "$bn_tar/"
 echo "extracting ..."
 tar xf "$fn_tar"
@@ -139,7 +145,7 @@ inception_v2()
 for npz in "$bn_tar"/*.npz
 do
     echo "converting $npz ..."
-    python convert_data_npz_0.py "$npz" data_0 prob_1
+    python convert_data_npz_0.py "$npz" data_0 prob_1 -s
    python -m onnx2fluid.validation $validate_flags1 -t "$npz"
    python -m onnx2fluid.validation $validate_flags2 -t "$npz"
 done
@@ -159,6 +165,7 @@ resnet50()
 fn_model="$bn_tar/model.onnx"
 http_get "$base_url$fn_tar"
+rm -rf "$bn_tar/"
 echo "extracting ..."
 tar xf "$fn_tar"
@@ -166,7 +173,7 @@ resnet50()
 for npz in "$bn_tar"/*.npz
 do
     echo "converting $npz ..."
-    python convert_data_npz_0.py "$npz" gpu_0/data_0 gpu_0/softmaxout_1
+    python convert_data_npz_0.py "$npz" gpu_0/data_0 gpu_0/softmaxout_1 -s
     python -m onnx2fluid.validation $validate_flags1 -t "$npz"
     python -m onnx2fluid.validation $validate_flags2 -t "$npz"
 done
@@ -186,6 +193,7 @@ shufflenet()
 fn_model="$bn_tar/model.onnx"
 http_get "$base_url$fn_tar"
+rm -rf "$bn_tar/"
 echo "extracting ..."
 tar xf "$fn_tar"
@@ -206,6 +214,7 @@ squeezenet()
 fn_model="$bn_tar/model.onnx"
 http_get "$base_url$fn_tar"
+rm -rf "$bn_tar/"
 echo "extracting ..."
 tar xf "$fn_tar"
@@ -226,6 +235,7 @@ tiny_yolov2()
 fn_model="$bn_tar/model.onnx"
 http_get "https://onnxzoo.blob.core.windows.net/models/opset_8/tiny_yolov2/$fn_tar"
+rm -rf "$bn_tar/"
 echo "extracting ..."
 tar xf "$fn_tar"
@@ -246,6 +256,7 @@ vgg19()
 fn_model="$bn_tar/model.onnx"
 http_get "$base_url$fn_tar"
+rm -rf "$bn_tar/"
 echo "extracting ..."
 tar xf "$fn_tar"
@@ -266,6 +277,7 @@ zfnet512()
 fn_model="$bn_tar/model.onnx"
 http_get "$base_url$fn_tar"
+rm -rf "$bn_tar/"
 echo "extracting ..."
 tar xf "$fn_tar"
@@ -288,7 +300,7 @@ inception_v1
 inception_v2
 resnet50
 shufflenet
-squeezenet
-tiny_yolov2 # not supported
+squeezenet # softmax bug
+#tiny_yolov2 # not supported
 vgg19
 zfnet512
onnx2fluid/onnx2fluid/cmdline.py (+3 −2)

@@ -49,7 +49,8 @@ def main(**kwargs):
     basepath, _ = shutil.os.path.splitext(filename)
     save_dir = kwargs.get('output_dir', '')
     # model.onnx -> model/
-    save_dir = (save_dir.rstrip('/') if save_dir else basepath) + '/'
+    save_dir = (save_dir.rstrip(shutil.os.sep)
+                if save_dir else basepath) + shutil.os.sep
     model_basename = DEFAULT_MODEL_MODULE + '.py'
     model_func_name = DEFAULT_MODEL_FUNC
     embed_params = kwargs.get('embed_params', False)
@@ -109,7 +110,7 @@ def main(**kwargs):
     # create zip file
     if archive is not None:
         if archive == '':
-            archive = save_dir.rstrip('/') + '.zip'
+            archive = save_dir.rstrip(shutil.os.sep) + '.zip'
         logger.info('compressing file to %s ...', archive)
         shutil.sys.stderr.write('\n')
         shutil.sys.stderr.flush()
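A small sketch of the path handling this change moves to: deriving the output directory and archive name with `os.sep` (`shutil.os` is simply `os`) instead of a hard-coded `'/'`; the values below are illustrative only:

```python
import os

filename = 'model.onnx'
basepath, _ = os.path.splitext(filename)  # 'model'

save_dir = ''                             # as when no output_dir is given
save_dir = (save_dir.rstrip(os.sep) if save_dir else basepath) + os.sep
archive = save_dir.rstrip(os.sep) + '.zip'

print(save_dir)  # 'model/' on POSIX, 'model\\' on Windows
print(archive)   # 'model.zip'
```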
onnx2fluid/onnx2fluid/symbolic.py (+15 −5)

@@ -69,6 +69,7 @@ DEFAULT_OP_MAPPING = {
     'Sin': ['sin', ['X'], ['Out']],
+    'Squeeze': ['squeeze', ['X'], ['Out']],  # attrs bypassed, FIXME: emit squeeze2
     'Softplus': ['softplus', ['X'], ['Out']],
     # FIXME: default axis = -1, reshape required before and after
     'Softmax': ['softmax', ['X'], ['Out'], dict(axis='')],
     'Softsign': ['softsign', ['X'], ['Out']],
     'Sqrt': ['sqrt', ['X'], ['Out']],
@@ -799,7 +800,7 @@ def Constant(prog, inputs, outputs, attrs, value_infos, *args, **kwargs):
         shape = list(value.shape)
         _logger.warning(
             'in (Constant -> %s): '
-            'shape of %s not inferred, '
+            'attribute "shape" of %s not inferred, '
             'using value as 1-D tensor may lead to fails', outputs, val_output)

     # generation
@@ -1152,7 +1153,7 @@ def Gemm(prog, inputs, outputs, attrs, value_infos, name, *args, **kwargs):
             vm_dtype = np.dtype('float32')
             _logger.warning(
                 'in %s(%s -> Gemm -> %s): '
-                'beta seems to be an interger, '
+                'attribute "beta" seems to be an interger, '
                 'however dtype can not be inferred, '
                 'still use float32', name, inputs, outputs)
         beta = np.dtype(vm_dtype).type(beta)
@@ -1432,9 +1433,17 @@ def Reshape(prog, inputs, outputs, attrs, value_infos, name, *args, **kwargs):
     is_const_shape = shape and 'const_value' in value_infos[val_shape]
     if shape is None:
         shape = _shape_or_none(value_infos, val_reshaped)
-    assert shape is not None, ('given shape is neither const value nor deductible from output, '
-                               'this is not supported')
+#    assert shape is not None, ('given shape is neither const value nor deductible from output, '
+#                               'this is not supported')
+    if shape is None:
+        shape = [1, -1]  # who knows
+        _logger.warning(
+            'in %s(%s -> Reshape -> %s): '
+            'input "shape" not inferred, use [1, -1] as dummy value, '
+            'the behavior of Paddle fluid maybe undefined', name, inputs,
+            outputs)
     fluid_op = 'reshape'
     name_attr = ', name={}'.format(repr(name)) if name else ''
@@ -1574,6 +1583,7 @@ def Sum(prog, inputs, outputs, *args, **kwargs):
         '[' + ', '.join(var_inps) + ']',
         # attrs
     ))
+    fluid_op = 'sum'
     prog.VarDesc(var_sum)
     prog.OpDesc(
         fluid_op,
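To illustrate the new `[1, -1]` fallback in `Reshape`: when the target shape cannot be inferred, the tensor is collapsed into a single row and `-1` lets the remaining dimension be derived; numpy is used here only to show the shape arithmetic, not the converter's actual code path:

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)
y = x.reshape([1, -1])  # -1 is inferred as 24, so no elements are lost
print(y.shape)          # (1, 24) -- though the guessed layout may still be wrong for the model
```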
onnx2fluid/onnx2fluid/validation.py (+1 −1)

@@ -86,7 +86,7 @@ def validate(fluid_model_filename,
             executor=exe, dirname=fluid_model_dir, main_program=prog)
         logger.info('weight load passed')
     else:
-        raise ValueError('unsupported Paddle fluid model')
+        raise ValueError('unsupported Paddle fluid model filename')

     # load data
     logger.info('using golden data %s', golden_data_filename)
onnx2fluid/onnx2fluid/writer.py (+1 −1)

@@ -251,7 +251,7 @@ class Program(object):
     def IntermediateOp(self, domain, op_type, *args, **kwargs):
         """
-        convert an intermediate ONNX op declaring just desc only
+        convert an intermediate ONNX op declaring in desc program only
         """

         code_mutable = self.code_mutable
onnx2fluid/setup.cfg (+2 −2)

@@ -48,12 +48,12 @@ install_requires =
 # automatically include version-controlled data files
 include_package_data = True
 # the project is pure Python, so the zipped source package can be run directly
-zip_safe = False
+zip_safe = True

 # the following turns the given functions into command-line tools that users can run directly
 [options.entry_points]
 console_scripts =
-    onnx2fluid = onnx2fluid.cmdline:main
+    onnx2fluid = onnx2fluid.__main__

 # the following adds non-Python files such as conf or data files to the package; they are installed into site-packages
 # only files are supported, not directories, but wildcards may be used