Unverified commit 9eaa7470, authored by J Jason, committed by GitHub

Merge pull request #22 from MacroBull/master

minor tweaks and update of README
@@ -18,7 +18,7 @@ X2Paddle supports converting Caffe and TensorFlow models to PaddlePaddle models; meanwhile we
## [onnx2fluid](onnx2fluid)
1. Supports converting ONNX models into PaddlePaddle fluid loadable inference models
2. PyTorch can export ONNX models, so PyTorch model conversion is also supported through onnx2fluid
# Contributing
After cloning the repository, first run `X2Paddle/commit-prepare.sh` to set up the commit environment
@@ -2,59 +2,82 @@
[![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)

onnx2fluid supports converting ONNX models to PaddlePaddle models for inference; users can also export a PyTorch model to ONNX first, then convert it to a PaddlePaddle model with onnx2fluid.

## Features

* exports Python code and a fluid ProgramDesc model
* weights can be embedded into supported operators
* conversion, validation and archiving all in one
* conversion does not depend on PaddlePaddle
* freely extensible operator support

## Environment

Tested successfully with the following configuration:

* python 3.5+
* onnx == 1.4.0
* paddlepaddle == 1.3.0 (optional, only for validation)

Using [Anaconda](https://docs.anaconda.com/anaconda/install):

``` shell
# install onnx
# see also https://github.com/onnx/onnx
conda install -c conda-forge onnx
pip install paddlepaddle==1.3.0
```
## Get started

Test the official ONNX pretrained models, including alexnet, googlenet, caffenet, rcnn,
inception_v1, inception_v2, resnet50, shufflenet, squeezenet,
vgg19, zfnet512 and so on:

``` shell
python setup.py install
cd examples
sh onnx_model_zoo.sh
```

Build a model with PyTorch, export it to ONNX, convert and validate:

``` shell
python setup.py install
cd examples
python gen_some_samples.py
onnx2fluid sample_1.onnx -t sample_1.npz
```
## Usage

onnx2fluid:

```shell
onnx2fluid [-dexy] [-o /path/to/export_dir/] [-z archive.zip] [-t test_data.npz] /path/to/onnx/model.onnx

optional arguments:
  --debug, -d           enable debugging
  --embed_params, -e    try to embed weights
  --no-pedantic, -x     convert non-standard ONNX ops
  --skip-version-conversion, -y
                        skip ONNX op version conversion
  --output_dir, -o      specify the output directory
  --archive [ARCHIVE], -z [ARCHIVE]
                        archive to the given ZIP file if validation passes
```

Conversion tool onnx2fluid.conversion:

```shell
onnx2fluid.conversion [-dexy] [-o /path/to/export_dir/] /path/to/onnx/model.onnx
```

Validation tool onnx2fluid.validate:

```shell
onnx2fluid.validate [-d] [-t test_data.npz] [-p 1e-3] /path/to/onnx/model.onnx
```

## Reference

* PaddlePaddle [operators](http://www.paddlepaddle.org/documentation/docs/zh/1.4/api_cn/layers_cn.html)
* PaddlePaddle [loading an inference model](http://www.paddlepaddle.org/documentation/docs/zh/1.4/api_guides/low_level/inference.html#id4)
@@ -2,13 +2,25 @@
[![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)

onnx2fluid supports converting ONNX models to PaddlePaddle fluid models for inference.
PyTorch to PaddlePaddle model conversion can be easily achieved with PyTorch's ONNX export functions.

## Features

* Python code + ProgramDesc proto generation, flexible and compatible
* fluid layer weight embedding support
* conversion, validation and archiving all in one
* conversion without a PaddlePaddle dependency
* export and validation helper functions for PyTorch to PaddlePaddle conversion
* extra ONNX operator optimization for inference
* easily extensible for user-defined operators

## Environment and dependencies

* python 3.5+ (python 2 not fully supported yet)
* onnx == 1.4.0
* paddlepaddle == 1.3.0 (optional, for validation)
## Get started

@@ -20,26 +32,50 @@ cd examples
sh onnx_model_zoo.sh
```

Try exporting and validating from PyTorch to PaddlePaddle fluid:

``` shell
python setup.py install
cd examples
python gen_some_samples.py
onnx2fluid sample_1.onnx -t sample_1.npz
python gen_unet.py
onnx2fluid sample_unet.onnx -t sample_unet.npz
```
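The file passed with `-t` is an NPZ bundle of golden input/output data. A minimal sketch of building such a bundle with NumPy (the key names `x`/`y` and the doubling model are illustrative assumptions, not the exact layout `gen_some_samples.py` emits):

```python
import io

import numpy as np

# hypothetical golden data for a model that doubles its input
inputs = {'x': np.random.rand(1, 3, 4, 4).astype('float32')}
outputs = {'y': inputs['x'] * 2}

# bundle inputs and outputs together, like sample_1.npz
buf = io.BytesIO()
np.savez(buf, **inputs, **outputs)
buf.seek(0)

loaded = np.load(buf)
assert np.allclose(loaded['y'], loaded['x'] * 2)
```

A real golden-data file would be written with `np.savez('sample_1.npz', ...)` and consumed by the validator.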
## Usage

onnx2fluid (all in one):

```shell
onnx2fluid [-dexy] [-o /path/to/export_dir/] [-z archive.zip] [-t test_data.npz] /path/to/onnx/model.onnx

optional arguments:
  --debug, -d           enable debug logging and checking
  --embed_params, -e    try to embed parameters for trainable PaddlePaddle fluid layers
  --no-pedantic, -x     process non-standard ONNX ops
  --skip-version-conversion, -y
                        skip ONNX op version conversion, workaround for RuntimeErrors
  --output_dir, -o      output directory
  --archive [ARCHIVE], -z [ARCHIVE]
                        compress outputs to a ZIP file if conversion succeeded
```
onnx2fluid.conversion:
```shell
onnx2fluid.conversion [-dexy] [-o /path/to/export_dir/] /path/to/onnx/model.onnx
```
onnx2fluid.validate:
```shell
onnx2fluid.validate [-d] [-t test_data.npz] [-p 1e-3] /path/to/onnx/model.onnx
```
## Reference
* [PaddlePaddle fluid operators](http://www.paddlepaddle.org/documentation/docs/en/1.4/api/layers.html)
* load converted model via [load_inference_model](http://www.paddlepaddle.org/documentation/docs/en/1.4/api/io.html#permalink-1-load_inference_model)
@@ -6,9 +6,7 @@ Created on Wed Mar 27 11:50:03 2019
@author: Macrobull
"""

import os, sys

import numpy as np
import onnx
import onnx.numpy_helper as numpy_helper
......
@@ -5,9 +5,8 @@ Created on Fri Mar 22 11:19:45 2019
@author: Macrobull

Not all ops in this file are supported by both PyTorch and ONNX
This only demonstrates the conversion/validation workflow from PyTorch to ONNX to Paddle fluid
"""

from __future__ import print_function
......
@@ -4,10 +4,6 @@
Created on Fri Mar 22 11:19:45 2019
@author: Macrobull
"""

from __future__ import print_function
......
@@ -4,10 +4,6 @@
Created on Fri Mar 22 11:19:45 2019
@author: Macrobull
"""

from __future__ import print_function
......
@@ -97,8 +97,8 @@ bvlc_reference_rcnn_ilsvrc13()
    do
        echo "converting $pb_dir"
        python convert_data_pb_0.py "$pb_dir" data_0 fc-rcnn_1
        python -m onnx2fluid.validation $validate_flags1 -t $(dirname "$pb_dir/x").npz
        python -m onnx2fluid.validation $validate_flags2 -t $(dirname "$pb_dir/x").npz
    done
}
......
@@ -16,10 +16,7 @@ from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import argparse, logging, sys

parser = argparse.ArgumentParser(
    description='onnx2fluid',
@@ -86,11 +83,17 @@ parser.add_argument(
    help='compress outputs to ZIP file if conversion succeeded',
)
parser.add_argument(
    '--atol',
    '-p',
    type=float,
    default=1e-3,
    help='assertion absolute tolerance for validation',
)
parser.add_argument(
    '--rtol',
    type=float,
    default=1e-4,
    help='assertion relative tolerance for validation',
)
args = parser.parse_args()
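The switch from a fixed `--precision` decimal to `--atol`/`--rtol` tolerances can be exercised with a minimal standalone parser (a sketch mirroring only the two arguments above):

```python
import argparse

# a stripped-down parser with just the tolerance flags
parser = argparse.ArgumentParser(description='tolerance flags demo')
parser.add_argument('--atol', '-p', type=float, default=1e-3,
                    help='assertion absolute tolerance for validation')
parser.add_argument('--rtol', type=float, default=1e-4,
                    help='assertion relative tolerance for validation')

# -p keeps its old short name but now takes a float tolerance
args = parser.parse_args(['-p', '1e-2'])
assert args.atol == 1e-2
assert args.rtol == 1e-4  # default
```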
@@ -98,12 +101,6 @@ logging_format = '[%(levelname)8s]%(name)s::%(funcName)s:%(lineno)04d: %(message
logging_level = logging.DEBUG if args.debug else logging.INFO
logging.basicConfig(format=logging_format, level=logging_level)

from .cmdline import main

sys.exit(main(**args.__dict__))
@@ -17,9 +17,6 @@ from __future__ import print_function
from __future__ import unicode_literals

import logging, shutil, zipfile

__all__ = [
    'main',

@@ -33,31 +30,23 @@ DEFAULT_MODEL_FUNC = 'inference'
def main(**kwargs):
    """main entry point"""

    from .conversion import convert

    logger = logging.getLogger('onnx2fluid')
    # debug = kwargs.get('debug', False)

    # prepare arguments
    filename = kwargs.pop('model')[0]
    basepath, _ = shutil.os.path.splitext(filename)
    save_dir = kwargs.pop('output_dir', '')
    # model.onnx -> model/
    save_dir = (save_dir.rstrip(shutil.os.sep)
                if save_dir else basepath) + shutil.os.sep
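The `model.onnx -> model/` default can be sketched in isolation (using `os.path` directly rather than the module's `shutil.os` alias; `default_save_dir` is an illustrative helper name, not part of the package):

```python
import os

def default_save_dir(model_filename, output_dir=''):
    # strip the extension: path/to/model.onnx -> path/to/model
    basepath, _ = os.path.splitext(model_filename)
    # an explicit output_dir wins; either way a trailing separator is ensured
    return (output_dir.rstrip(os.sep) if output_dir else basepath) + os.sep

assert default_save_dir('path/to/model.onnx') == 'path/to/model' + os.sep
assert default_save_dir('model.onnx', 'out/') == 'out' + os.sep
```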
    model_basename = DEFAULT_MODEL_MODULE + '.py'
    model_func_name = DEFAULT_MODEL_FUNC
    onnx_opset_version = DEFAULT_ONNX_OPSET_VERSION
    onnx_opset_pedantic = kwargs.pop('pedantic', True)
    onnx_skip_version_conversion = kwargs.pop('skip_version_conversion', False)

    # convert
    convert(
@@ -65,49 +54,35 @@ def main(**kwargs):
        save_dir,
        model_basename=model_basename,
        model_func_name=model_func_name,
        onnx_opset_version=onnx_opset_version,
        onnx_opset_pedantic=onnx_opset_pedantic,
        onnx_skip_version_conversion=onnx_skip_version_conversion,
        **kwargs)
    # validate
    passed = True
    golden_data_filename = kwargs.pop('test_data', '')
    if golden_data_filename:
        from .validation import validate

        logger.info('starting validation on desc ...')
        passed &= validate(
            shutil.os.path.join(save_dir, '__model__'), golden_data_filename,
            **kwargs)
        logger.info('starting validation on code ...')
        passed &= validate(
            shutil.os.path.join(save_dir, model_basename),
            golden_data_filename,
            model_func_name=model_func_name,
            **kwargs)

        if not passed:
            logger.error('validation failed, exit')
            return
    # create zip file
    archive = kwargs.pop('archive', None)
    if archive is not None:
        if archive == '':
            archive = save_dir.rstrip(shutil.os.sep) + '.zip'
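Passing `-z` with no value arrives as an empty string, which the code above turns into "export directory plus `.zip`"; a self-contained sketch of that rule (`default_archive_name` is an illustrative helper name):

```python
import os

def default_archive_name(archive, save_dir):
    # -z with no value arrives as '', meaning "derive the name from save_dir"
    if archive == '':
        archive = save_dir.rstrip(os.sep) + '.zip'
    return archive

# model.onnx was exported to model/, so the archive defaults to model.zip
assert default_archive_name('', 'model' + os.sep) == 'model.zip'
# an explicit name is kept as-is
assert default_archive_name('custom.zip', 'model' + os.sep) == 'custom.zip'
```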
@@ -132,6 +107,10 @@ if __name__ == '__main__':
        level=logging.DEBUG,
    )
del main
from onnx2fluid.cmdline import main
    main(
        model=['../examples/t1.onnx'],
        output_dir='/tmp/export/',
......
@@ -9,8 +9,6 @@ Created on Mon Feb 25 09:50:35 2019
from __future__ import division

import logging, shutil

__all__ = [
    'convert',

@@ -25,7 +23,8 @@ def convert(onnx_model_filename,
            onnx_opset_version=9,
            onnx_opset_pedantic=True,
            onnx_skip_version_conversion=False,
            debug=False,
**kwargs):
""" """
convert an ONNX model to Paddle fluid Python code and desc pb convert an ONNX model to Paddle fluid Python code and desc pb
""" """
@@ -37,21 +36,14 @@ def convert(onnx_model_filename,
    from onnx.utils import polish_model
    from onnx.version_converter import convert_version

    from .onnx_utils import DEFAULT_OP_DOMAIN
    from .onnx_utils import graph_ops, graph_weights
    from .onnx_utils import inferred_model_value_info
    from .onnx_utils import optimize_model_skip_op_for_inference
    from .onnx_utils import optimize_model_strip_initializer
    from .onnx_utils import optimize_model_cast, optimize_model_slice
    from .writer import Program, Writer
    from .writer import make_var_name
    logger = logging.getLogger('convert')
@@ -94,6 +86,8 @@ def convert(onnx_model_filename,
        model = onnx.shape_inference.infer_shapes(onnx_model)
debug_model_filename, _ = shutil.os.path.splitext(onnx_model_filename) debug_model_filename, _ = shutil.os.path.splitext(onnx_model_filename)
onnx.save(model, debug_model_filename + '.optimized_and_inffered.onnx') onnx.save(model, debug_model_filename + '.optimized_and_inffered.onnx')
#        onnx.save(model, '/tmp/export/optimized_and_inffered.onnx')
    # I/O instances
@@ -218,12 +212,13 @@ def convert(onnx_model_filename,
    logger.info('conversion finished')
# globals().update(locals())
if __name__ == '__main__':
del convert
    import argparse
from onnx2fluid.conversion import convert
    parser = argparse.ArgumentParser(
        description='onnx2fluid.convert',
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
@@ -280,7 +275,10 @@ if __name__ == '__main__':
    debug = args.debug
    model_filename = args.model[0]
basepath, _ = shutil.os.path.splitext(model_filename)
    save_dir = args.output_dir
save_dir = (save_dir.rstrip(shutil.os.sep)
if save_dir else basepath) + shutil.os.sep
    embed_params = args.embed_params
    pedantic = args.pedantic
    skip_version_conversion = args.skip_version_conversion
......
@@ -13,7 +13,7 @@ Created on Mon Feb 25 09:33:43 2019
from __future__ import division

import logging as _logging

import numpy as _np
from collections import OrderedDict as _dict
from onnx.mapping import TENSOR_TYPE_TO_NP_TYPE
@@ -77,11 +77,11 @@ DEFAULT_OP_MAPPING = {
        'Sqrt': ['sqrt', ['X'], ['Out']],
        'Tanh': ['tanh', ['X'], ['Out']],
        'ThresholdedRelu': ['thresholded_relu', ['X'], ['Out'], dict(alpha='threshold')],
        #'Transpose': ['transpose', ['X'], ['Out']],
        'Unsqueeze': ['unsqueeze', ['X'], ['Out']], # attrs bypassed, FIXME: emit unsqueeze2
        ## binary ops ##
        'Add': ['elementwise_add', ['X', 'Y'], ['Out'], dict(), dict(axis=-1)],
        #'AffineGrid': ['affine_grid', ['Theta'], ['Output'], dict(size='out_shape')],
        'And': ['logical_and', ['X', 'Y'], ['Out']],
        'Div': ['elementwise_div', ['X', 'Y'], ['Out'], dict(), dict(axis=-1)],
        'Equal': ['equal', ['X', 'Y'], ['Out'], dict(), dict(), None, None, False],
@@ -110,7 +110,7 @@ DEFAULT_OP_MAPPING = {
        'TopK': ['topk', ['X', 'K'], ['Out', 'Indices']],
}
DEFAULT_IOA_CONSTRAINTS = {
        'ArgMax': [
                (lambda i, o, a: a.get('keepdims', 1) == 1,
                 'only keepdims = 0 is supported'),
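Each constraint entry pairs a predicate over `(inputs, outputs, attrs)` with a failure message. A self-contained sketch of how such a table is consulted (simplified names and a message made consistent with its predicate; not the module's full table):

```python
# maps a fluid op to (predicate, message) pairs checked before emission
CONSTRAINTS = {
    'argmax': [
        (lambda i, o, a: a.get('keepdims', 1) == 1,
         'only keepdims = 1 is supported'),
    ],
}

def check(fluid_op, inputs, outputs, attrs):
    # every predicate must hold, otherwise conversion aborts with its message
    for predicate, message in CONSTRAINTS.get(fluid_op, []):
        assert predicate(inputs, outputs, attrs), message

check('argmax', ['X'], ['Out'], {'keepdims': 1})  # passes
try:
    check('argmax', ['X'], ['Out'], {'keepdims': 0})
    rejected = False
except AssertionError as e:
    rejected = 'keepdims' in str(e)
assert rejected
```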
@@ -164,7 +164,7 @@ def _make_var_name(name):

def _dtype(value_infos, val_name):
    return _np.dtype(value_infos[val_name]['dtype'])
def _dtype_or_none(value_infos, val_name):
@@ -173,7 +173,7 @@ def _dtype_or_none(value_infos, val_name):
    value_info = value_infos[val_name]
    if 'dtype' not in value_info:
        return None
    return _np.dtype(value_info['dtype'])
def _shape(value_infos, val_name):
@@ -217,8 +217,8 @@ def _default(prog, op_type, inputs, outputs, attrs, *args, name='', **kwargs):
            fill_name_field,
    ) = info

    if fluid_op in DEFAULT_IOA_CONSTRAINTS:
        for predicate, message in DEFAULT_IOA_CONSTRAINTS[fluid_op]:
            assert predicate(inputs, outputs, attrs), message

    # bypass if key absent, drop if mapped key is '' or '_'
@@ -268,14 +268,13 @@ def _default(prog, op_type, inputs, outputs, attrs, *args, name='', **kwargs):
            (var_outs, *fluid_output_args), fluid_attrs)
def _assign(prog, mapping):
    fluid_op = 'assign'

    for val_dst, val_src in mapping.items():
        var_dst = _make_var_name(val_dst)
        var_src = _make_var_name(val_src)
        prog.Code('{} = {} # assign'.format(var_dst, var_src))
#        prog.Code('{} = layers.{}({})'
#                  .format(var_dst,
#                          fluid_op,
@@ -552,7 +551,6 @@ def _roi_pool(prog, fluid_op, inputs, outputs, attrs, value_infos, name):

def _interpolate(prog, inputs, outputs, attrs, value_infos, name=''):
    # I/O
    val_x, val_scales = inputs
    val_y, = outputs
@@ -775,7 +773,7 @@ def Cast(prog, inputs, outputs, attrs, value_infos, *args, **kwargs):
    # interpretation
    dtype = attrs['to'] # required
    if not isinstance(dtype, _np.dtype): # additional: possible np.dtype
        dtype = TENSOR_TYPE_TO_NP_TYPE[dtype]
    output_dtype = _dtype_or_none(value_infos, val_output)
    if output_dtype:
@@ -852,13 +850,13 @@ def Constant(prog, inputs, outputs, attrs, value_infos, *args, **kwargs):
    # interpretation
    value = attrs['value'] # required
    dtype = _np.dtype(value.dtype)
    output_dtype = _dtype_or_none(value_infos, val_output)
    if output_dtype:
        assert dtype == output_dtype, 'tensor dtype unmatches storage dtype'
#    dtype = _np.dtype('float32') # HINT: force to float32
    shape = attrs.get('shape', None) #
    if shape is None:
        shape = _shape_or_none(value_infos, val_output)
@@ -1161,7 +1159,7 @@ def ConvTranspose(prog,
#    val_output, = outputs[:1]
#
#    _assign(prog,
#            dict([(val_output, val_data)]),
#            value_infos,
#            )
@@ -1216,13 +1214,13 @@ def Gemm(prog, inputs, outputs, attrs, value_infos, name, *args, **kwargs):
        if beta.is_integer():
            vm_dtype = _dtype_or_none(value_infos, val_c)
            if vm_dtype is None:
                vm_dtype = _np.dtype('float32')
                _logger.warning(
                    'in %s(%s -> Gemm -> %s): '
                    'attribute "beta" seems to be an integer, '
                    'however dtype can not be inferred, '
                    'still use float32', name, inputs, outputs)
            beta = _np.dtype(vm_dtype).type(beta)
            prog.Op(
                '',
                'Constant',
@@ -1388,7 +1386,7 @@ def Pad(prog, inputs, outputs, attrs, value_infos, name='', *args, **kwargs):
                mode)
        fluid_op = 'pad'
        pad2d_attr = ''
        paddings = _np.array(pads).reshape(
            (-1, 2)).transpose().flatten().tolist() # SSEE -> SESE
        od_attrs['paddings'] = paddings
        name_attr = ', name={}'.format(repr(name)) if name else ''
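ONNX `Pad` stores pads as all starts then all ends (SSEE), while fluid's `pad` wants start/end interleaved per axis (SESE); the reshape/transpose trick above can be checked directly:

```python
import numpy as np

# ONNX order for a 2-axis pad: [s0, s1, e0, e1] (all starts, then all ends)
pads = [1, 2, 3, 4]

# reshape to [[s0, s1], [e0, e1]], transpose to [[s0, e0], [s1, e1]], flatten
paddings = np.array(pads).reshape((-1, 2)).transpose().flatten().tolist()

# fluid order: [s0, e0, s1, e1] (start/end pairs per axis)
assert paddings == [1, 3, 2, 4]
```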
@@ -1526,7 +1524,7 @@ def Reshape(prog, inputs, outputs, attrs, value_infos, name, *args, **kwargs):
        'Cast',
        [val_shape],
        [val_shape_int32], # var
        dict(to=_np.dtype('int32')), # use np.dtype
        value_infos=value_infos,
        name=(name + '_cast'),
    )
@@ -1841,7 +1839,9 @@ if __name__ == '__main__':
    )
    logger = _logging.getLogger('symbolic_test')

    import numpy as np
    from onnx2fluid.writer import Program

    prog = Program()
    AdaptiveAveragePool(
......
@@ -8,11 +8,6 @@ Created on Fri Mar 22 12:17:19 2019

import importlib, logging, os, sys
def _flatten_dict(obj, out=None):
    assert isinstance(obj, dict)

@@ -37,10 +32,12 @@ def _ensure_list(obj):
def validate(fluid_model_filename,
             golden_data_filename,
             model_func_name='inference',
             atol=1e-3,
             rtol=1e-4,
             save_inference_model=False,
             **kwargs):
    """
    run inference on the converted Paddle fluid model, validate against the given golden data
    """
    import numpy as np
@@ -125,7 +122,13 @@ def validate(fluid_model_filename,
    for (name, truth), output in zip(output_data.items(), outputs):
        logger.info('testing output {} ...'.format(name))
        try:
            np.testing.assert_allclose(
                output,
                truth,
                rtol=rtol,
                atol=atol,
                equal_nan=False,
                verbose=True)
        except AssertionError as e:
            passed = False
            logger.error('failed: %s\n', e)
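`assert_allclose` accepts an element when |output − truth| ≤ atol + rtol·|truth|, a scale-aware criterion, unlike the old fixed-decimal check it replaces; for example:

```python
import numpy as np

truth = np.array([1.0, 100.0])
output = truth + np.array([5e-4, 5e-3])  # small absolute and relative errors

# passes: 5e-4 <= 1e-3 + 1e-4*1, and 5e-3 <= 1e-3 + 1e-4*100
np.testing.assert_allclose(output, truth, rtol=1e-4, atol=1e-3)

# the old decimal=3 check (abs diff < 1.5e-3) rejects the second element
try:
    np.testing.assert_almost_equal(output, truth, decimal=3)
    failed = False
except AssertionError:
    failed = True
assert failed
```

This is why large-magnitude outputs that used to trip the `decimal` check can now validate with sensible relative tolerance.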
@@ -134,10 +137,9 @@ def validate(fluid_model_filename,
    else:
        logger.info('accuracy not passed')
# globals().update(locals())
    return passed
if __name__ == '__main__':
    import argparse
@@ -163,11 +165,17 @@ if __name__ == '__main__':
        help='I/O golden data for validation, e.g. test.npy, test.npz',
    )
    parser.add_argument(
        '--atol',
        '-p',
        type=float,
        default=1e-3,
        help='assertion absolute tolerance for validation',
    )
    parser.add_argument(
        '--rtol',
        type=float,
        default=1e-4,
        help='assertion relative tolerance for validation',
    )
    args = parser.parse_args()
@@ -178,10 +186,11 @@ if __name__ == '__main__':
    debug = args.debug
    fluid_model_filename = args.model[0]
    golden_data_filename = args.test_data
    atol, rtol = args.atol, args.rtol

    validate(
        fluid_model_filename,
        golden_data_filename,
        atol=atol,
        rtol=rtol,
        save_inference_model=debug)
@@ -9,27 +9,17 @@ Created on Sun Feb 24 20:44:43 2019
from __future__ import division

import logging, os

import numpy as np

logger = logging.getLogger(__name__)
from . import symbolic
from .symbolic import _make_var_name as make_var_name

try:
    import paddle.fluid.proto.framework_pb2 as framework_pb2
except ImportError:
    from . import framework_pb2

    logger.warning('importing paddle.fluid.proto.framework_pb2 failed, '
                   'using fallback framework_pb2')
@@ -176,16 +166,13 @@ class Program(object):
        self.op_descs = []
        self.var_descs = []
    def __repr__(self):
        return ('Program(code mutable: {}) with:\n'
                'codes: {}\n'
                'op_descs: {}\n'
                'var_descs: {}\n').format(self.code_mutable, self.codes,
                                          self.op_descs, self.var_descs)
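Renaming `__str__` to `__repr__` and dropping the delegating method works because Python's `str()` falls back to `__repr__` when `__str__` is not defined; a minimal sketch with a stand-in class:

```python
class Program:
    """stand-in illustrating the repr/str fallback, not the real writer.Program"""

    def __init__(self):
        self.codes = []

    def __repr__(self):
        return 'Program with {} codes'.format(len(self.codes))

p = Program()
# str() uses __repr__ when no __str__ is defined, so one method suffices
assert str(p) == repr(p) == 'Program with 0 codes'
```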
    def Code(self, code):
        """
        add Python code
@@ -218,11 +205,9 @@ class Program(object):
                name,
                persistable=False,
                value_info=None,
                remove_batch=None):
        """
        add VarDesc,
        """
        var_desc = framework_pb2.VarDesc()

@@ -241,9 +226,6 @@ class Program(object):
                                  not persistable)
            if remove_batch:
                tensor_desc.dims[0] = -1

        self.var_descs.append(var_desc)
......
-e .
onnx>=1.4
paddlepaddle
@@ -7,7 +7,7 @@ name = onnx2fluid
author = Macrobull
# author_email = .Github@github.com
# project version; versions >= 1.0 are considered stable releases
version = 0.1.1
# one-line project summary (Chinese not supported)
description = Inference model conversion from ONNX/PyTorch to Paddle fluid
# detailed project description and its format, e.g. readme and changelog, usually md or rst
......